TCIS

Abstract

TCIS is an automotive infotainment concept developed during the fourth semester at the HfG Schwäbisch Gmünd. It revolves around a haptic item that is placed on top of a touchscreen and can snap into different positions on the surface. These positions represent different functions of the car, for example climate controls, media controls and others.

Analysis

Current automotive systems usually rely on either a touchscreen or haptic input devices, such as controllers that fulfil multiple purposes simultaneously. Both approaches come with their own set of difficulties.

Touch screens have the advantage that content as well as controls can react to the current state of the system. This has made them highly popular in the design of complex digital systems, mainly smartphones and tablets, and has led to their adoption in various other contexts as well, one being the automotive interior. The general problem with touch screens is the lack of haptic feedback, which demands a great amount of attention from the user - attention the driver cannot spare for comparably negligible settings, such as the temperature, as he mainly has to focus on the task of driving.

Interior of a 2017 Tesla Model 3[1]

Haptic input solutions are the older of the two approaches. They map different functionality onto different hardware buttons, each button uniquely dedicated to controlling one setting of the vehicle. A couple of years ago, automotive manufacturers started mapping more than one setting to a button and expanding the possibilities of the buttons themselves, leading to controllers that can be moved and rotated along different axes. These controllers are then usually mapped to a digital screen, which represents the current state of the UI. Examples of such UIs include BMW’s iDrive and Audi’s MMI. The main problem with these types of UIs is that the distance between interaction and content is comparably high - at times it feels more like remote-controlling your car than experiencing an immersive interaction. The other problem is that by not only removing the content from the actual point of interaction, but also linking remote buttons to abstract functionality, learning time is increased and comprehensibility is reduced.

Interior of a 2003 BMW M5[2]

All in all, both concepts have their own set of weaknesses, and it is safe to say that some rethinking is necessary to push the boundaries of automotive user interfaces. This is where the Tangible Car Infotainment System (TCIS) comes in. TCIS is a radical concept exploring possibilities of interaction that involve haptic items as well as touch-sensitive surfaces, trying to combine the best of the two approaches.

Research

As previously mentioned, one main problem of the current controller-based approaches is the distance between content and the point of interaction. This problem is not present when dealing with touch screens, as the user interacts directly with the content. One of the first requirements of this concept was therefore to minimise the distance from interaction to content, as touchscreen user interfaces already do. Nevertheless, haptic feedback should be provided. In general, haptic feedback can happen in two different ways. One is feedback as soon as the possibility of an action is enabled - translated to a typical hardware button, this is the moment the user has touched the button but has not pressed it down yet; we will call this “early feedback”. The other is a response signalling that the action has been accomplished - with a typical hardware button, this is the resistance felt when the button is fully pressed down; we will call this “late feedback”.

Current technology tries to establish feedback in touch screens in various ways, but most production-ready solutions only enable “late” feedback, i.e. the type of feedback that merely communicates the success of a certain action to the user, for example by vibrating. First approaches to “early” feedback can be found in experiments with gel-touch screens at TU Berlin[3]. Another possibility for integrating full haptic feedback - both “early” and “late” - into digital multitouch surfaces is the use of tangible item technology. Here, real-world hardware items are placed on a multitouch surface, which recognises them based on specific properties of the item, such as QR-codes. The software can then determine the position of the item on the surface and react accordingly. This technology is typically used in exhibition contexts to enable engaging interactions with digital systems. TCIS shows how an advancement of this technology can be used in one-to-one interactions between a user and a complex digital system, exemplified by a car.
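
As a rough illustration of the underlying mechanics - not the specific recognition method used in the project - the following sketch shows how software could derive an item’s position and rotation once the surface has recognised its marker points; the three-feet geometry and all names are assumptions:

```typescript
// Hypothetical sketch: an item is identified by three registration points
// (e.g. conductive feet) forming a distinctive triangle on the surface.
// The centroid gives the item's position; the vector from the centroid
// to a designated "front" foot gives its rotation.

interface Point { x: number; y: number; }

function itemPose(feet: [Point, Point, Point]): { position: Point; angle: number } {
  const cx = (feet[0].x + feet[1].x + feet[2].x) / 3;
  const cy = (feet[0].y + feet[1].y + feet[2].y) / 3;
  // feet[0] is assumed to be the marked "front" foot of the item.
  const angle = Math.atan2(feet[0].y - cy, feet[0].x - cx);
  return { position: { x: cx, y: cy }, angle };
}
```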

Concept

Tangible items as a means of interaction became the main focus of the project, due to their - at the time - unexplored use in one-to-one interaction, as well as them meeting the requirement of minimised distance between interaction and content. But in what way could tangible items be used in the context of automotive infotainment systems?

Software is commonly divided into multiple segments, which are controlled by some sort of navigation unit. Typical examples of this concept can be found in mobile operating systems, such as Apple’s iOS or Google’s Android. Here, different segments of the capabilities of the digital system (the smartphone in this case) can be accessed through “apps”, small programs that each build up a contained ecosystem of interactivity. Inside this ecosystem it is often the case that further division, and thus further navigation, is required. Depending on the specific operating system, OS developers suggest different solutions to app developers, such as navigation bars on iOS or sidebars on Android. If we go back to the top level, the selection of apps, we notice that these applications are arranged in a two-dimensional grid. The user can define on his own where the apps should be placed. In the further process of using his phone he then memorises the positions of the individual apps, and the different functions of his phone are eventually mapped to certain positions (basically, x- and y-coordinates) on his phone’s screen. This idea is called spatial memory and has been shown to improve the speed of interaction, a crucial factor in the context of automotive interaction systems. Spatial memory combines very nicely with the concept of tangible items, as tangible items are in fact movable objects on a defined surface.

The basic problem of surfaces that use tangible items, and to some extent spatial memory or at least spatial awareness, to control their state is that the precision with which the user places an item in a certain spot cannot be ensured, so the tolerance area must be large enough to absorb a certain amount of inaccuracy. For this reason these surfaces are usually fairly large, typically starting at about 60 cm in diameter. This sort of space is not available in current automotive interiors, so a solution had to be found that deals with inaccuracy while minimising the required amount of space. A common approach to correcting inaccurate input is the concept of snapping. Broken down, snapping means correcting a user’s input when, in a certain moment, he cannot deliver the required amount of precision. The system helps him by recognising his intention, executing the appropriate action and finalising the input he was not able to fully deliver. As a result, the user not only has to devote less attention to the system, but can also interact faster.
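
In software, this correction can be expressed in a few lines. The following is a minimal sketch, assuming the item’s raw position is reported in surface pixels and the available slots are known in advance; the names and the tolerance value are illustrative:

```typescript
// Snap the tangible item to the nearest predefined slot, but only if the
// placement error is within tolerance; otherwise the input stays undecided.

interface Slot { id: string; cx: number; cy: number; }  // e.g. "climate", "media"

const SNAP_RADIUS = 40; // px tolerance; would be tuned on the actual hardware

function snapToSlot(x: number, y: number, slots: Slot[]): Slot | null {
  let best: Slot | null = null;
  let bestDist = Infinity;
  for (const slot of slots) {
    const d = Math.hypot(x - slot.cx, y - slot.cy);
    if (d < bestDist) { bestDist = d; best = slot; }
  }
  return bestDist <= SNAP_RADIUS ? best : null;
}
```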

Snapping is typically used in software, as it is often difficult to implement in hardware, though there are examples of it being done successfully on a production scale. One example is a control hub developed by the German kitchen appliance manufacturer NEFF, where a control unit is held in one specific position by a magnet. The experience of using this product was astonishing; such effortless feedback felt almost like magic.

The concept of TCIS evolved out of these basic principles: minimised distance between interaction and content, spatial memory for the location of detailed content, and snapping to compensate for inaccuracy during use. Different functions of the car were clustered into content-related groups, such as “climate” or “media”. These content groups were then made accessible through different positions on a two-dimensional grid, with magnets below the surface so that the item used to control the system snaps into the correct position even if the input is not that accurate.

The item itself was equipped with a single button on top to enable deeper interaction, such as opening context-related menus, whereas the rotation of the item was mapped to the top-level interaction of the respective content group. In the case of the media group, this top-level interaction meant changing the volume, as this is the most common interaction. Rotating the item in the climate-group position allows fast changes to the current inside temperature, in a similar way as the dials of traditional car air conditioners do.
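
The resulting mapping could look roughly like this sketch; the value ranges and step sizes are assumptions chosen for illustration:

```typescript
// The same physical rotation adjusts a different top-level value,
// depending on which slot the item currently occupies.

type Mode = "media" | "climate";

interface SystemState { volume: number; temperature: number; }

function clamp(v: number, lo: number, hi: number): number {
  return Math.min(hi, Math.max(lo, v));
}

function onItemRotated(mode: Mode, deltaDegrees: number, s: SystemState): SystemState {
  if (mode === "media") {
    // Assumption: one full turn covers the whole volume range 0..1.
    return { ...s, volume: clamp(s.volume + deltaDegrees / 360, 0, 1) };
  }
  // mode === "climate"
  // Assumption: 10 degrees of rotation per 0.5 °C step, like a classic AC dial.
  return { ...s, temperature: s.temperature + Math.round(deltaDegrees / 10) * 0.5 };
}
```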

Though it was possible to determine, for each individual area, one main function that had to be made accessible as quickly as possible, there are further functions the user must be able to access without having to dive too deep into the system, one example being skipping to the next song. As interaction with the item does not require navigation through any menus, it is the faster and more reachable way to interact. These mid-level interactions therefore had to be possible using only the item. As skipping to the next song can be seen as a forward movement, this control is typically placed on the right of a media control unit. To emulate a similar interaction, snapping was again used to add another type of interaction with the item: instead of a full group switch, a slight tap in one direction can also trigger certain functions, such as a tap to the right to skip to the next song and a tap to the left to jump back to the previous one.
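
One way to distinguish such a tap from an actual group switch is to look at how far the item is pushed out of its slot before the magnet pulls it back. A sketch, with thresholds as illustrative assumptions:

```typescript
// Classify a brief displacement of the item relative to its slot centre.
// A short push that stays within the snap radius is a mid-level command;
// a larger displacement means the user is moving the item to another slot.

const TAP_MIN = 8;   // px: below this, treat the movement as noise
const TAP_MAX = 40;  // px: beyond this, a full group switch is in progress

type Nudge = "left" | "right" | "none" | "leaving-slot";

function classifyNudge(dx: number, dy: number): Nudge {
  const dist = Math.hypot(dx, dy);
  if (dist < TAP_MIN) return "none";
  if (dist > TAP_MAX) return "leaving-slot";
  // Horizontal taps map to track skipping, e.g. right = next song.
  if (Math.abs(dx) > Math.abs(dy)) return dx > 0 ? "right" : "left";
  return "none";
}
```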

Clicking the button on the item opens a context-related menu, depending on the currently active content group (i.e. the position of the item). This menu offers various sub-topics to the user, which he can navigate through by rotating the item. To select a sub-topic, he can again click the item, which leads to a screen change and detailed access to the appropriate sub-topic. Once the menu is open, there are two ways of returning to the initial screen: a downward tap of the item, or a timeout if the user has not interacted with the system for a certain amount of time.
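
Taken together, the menu flow behaves like a small state machine. A rough sketch, with the screen names and the timeout value as assumptions:

```typescript
// Button click opens the menu, rotation moves the highlight, another click
// selects, and a downward tap or an inactivity timeout returns to the start.

type Screen = "overview" | "menu" | "subtopic";

interface MenuState { screen: Screen; highlighted: number; }

const MENU_TIMEOUT_MS = 8000; // assumption; would be tuned in user tests

function onButtonClick(s: MenuState): MenuState {
  if (s.screen === "overview") return { screen: "menu", highlighted: 0 };
  if (s.screen === "menu") return { screen: "subtopic", highlighted: s.highlighted };
  return s;
}

function onRotateStep(s: MenuState, steps: number, optionCount: number): MenuState {
  if (s.screen !== "menu") return s;
  // Wrap around in both directions, so rotating left from the first
  // option lands on the last one.
  const next = ((s.highlighted + steps) % optionCount + optionCount) % optionCount;
  return { ...s, highlighted: next };
}

function onDownTapOrTimeout(_s: MenuState): MenuState {
  return { screen: "overview", highlighted: 0 };
}
```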

Depending on the sub-topic, the interaction either takes place with the item directly, as is already the case for functions such as temperature or volume, or on the touchscreen, if more detailed interaction is required. This more detailed interaction is necessary if the requested input cannot be efficiently described as a range of numbers or any other representation of a “1 out of x” range (an example of a non-numerical “1 out of x” range being the context-related menu for choosing sub-topics). If we take a look at the albums view, which is opened by choosing the appropriate sub-topic from the media menu, we basically also have a “1 out of x” situation; the difference to the previous selections is the number of options. The number of albums is highly variable, as a user’s music library can be as small or as large as he desires, though it typically exceeds 10, which makes selection by rotating the item a rather tedious task. Here a strength of touchscreen-based devices comes into play: as the distance to the content is minimised, the user can directly tap the option he wants to select, reducing the time spent on navigating to the desired option. The same principle applies to selecting a song inside an album.
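
This decision rule boils down to a simple heuristic; the threshold of 10 follows from the reasoning above, while all names are assumptions:

```typescript
// Small, fixed "1 out of x" ranges stay on the item's rotation;
// large or variable option sets fall back to direct touch selection.

const ROTATION_MAX_OPTIONS = 10;

type InputChannel = "item-rotation" | "touch";

function channelFor(optionCount: number): InputChannel {
  return optionCount <= ROTATION_MAX_OPTIONS ? "item-rotation" : "touch";
}

channelFor(5);   // "item-rotation", e.g. the fixed sub-topics of the media menu
channelFor(400); // "touch", e.g. a large album library
```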

Conclusion

Obviously, this is a rather radical concept which does not cover all functions that have to be accessible inside a car; it is meant as food for thought. In talks with various representatives of car manufacturers, the topic of safety often came up as a flaw of the concept, due to a loose item being placed inside the car. Though these concerns are justified for an implementation in a real scenario, they are not the core of the project. So let’s dive further into the interactions that were previously described.

A core principle of TCIS is the ability to control different aspects through the same interaction in different scenarios, these scenarios being the different positions of the item. Basically, by placing the item in a specific position, you set the system to a certain mode, and further interactivity depends on this mode. But modes are a concept that has been proven not to work in certain situations. Early text editors, for example, were based on modes: to type, one first had to enter “insert mode”; to append text, one had to enter “append mode”; and so on. This led to even advanced users being trapped in a certain mode, with the computer not behaving as they expected, as Larry Tesler observed in the 1970s at Xerox PARC. “No person should be trapped in a mode” became the principle he based his further work on, leading to a text editor whose core principle is that if one hits a letter key on the keyboard, that letter is inserted on the screen - at all times. Instead of different modes, he introduced distinct commands the user could explicitly invoke for certain tasks, such as cut, copy and paste.

Of course, these modes are motivated by the combination of haptic input and touchscreen capabilities, but in some ways TCIS steps back 45 years in software design, reintroducing the concept of modes in a slightly different form. During the development of the project, this reasoning was considered strong enough to justify the use of modes, but in retrospect, if these modes are really to work, they have to be communicated much more loudly. Possibilities worth exploring would include integrating the instrument cluster to display the current mode, or context-aware quick switching between modes.

All in all, this concept helped me extend my understanding of developing alternative interaction principles, of the theoretical foundations behind user interaction, and of the critical, iterative evaluation of experimental user interfaces.

Facts

References