Reimagining the Car’s Future: Role of Multi-Modal Interfaces and Automotive Displays
OEMs, Tier 1 suppliers, and university researchers are reimagining the automobile with a wide variety of displays, including some that do not yet exist.
Display Week 2017, being held May 21–26 at the Los Angeles Convention Center, will feature an Automotive Market Conference on Tuesday, May 23, and a substantial Vehicle Track on May 24 and 25.
That makes sense, because automotive original equipment manufacturers (OEMs) and their Tier 1 suppliers are projecting an increase in the number and average size of automotive displays for the foreseeable future.
As advanced driver assist systems (ADAS), connected-car technology, and varying levels of vehicle autonomy proliferate, the display suite must work together with other input/output technologies such as touch, audio, gesture control, and haptics to create a reliable and effortless human-machine interface (HMI). Given the increasingly multi-modal nature of these interfaces, system designers are calling them MMIs (multi-modal interfaces), and there will be presentations at Display Week evaluating the relative effectiveness of different modal combinations and implementations.
Touch-panel makers worry about how to make touch panels reliable in automotive environments; but no matter how reliable the panels are, a driver reaching for a large, far-off center-stack display will be distracted and may have to take his or her eyes off the road. Here is one place where haptics developers see an opportunity, both for displays and for soft buttons on steering wheels and elsewhere.
And voice recognition development for automotive use is inevitable. It could solve the problem of stretching for distant controls, and, given the rapidly increasing sophistication of digital assistants using AI and voice recognition, we are sure to see its growing use for information input and for increasingly complex control functions. Voice recognition complements touch screens, but it could also compete with them. What is the most effective way to design a combined voice-display system?
Situational Awareness in 10 Seconds?
The SAE defines six levels of driving automation for vehicles, from Level 0 (no automation) to Level 5 (full automation). Today's ADAS offerings bridge Levels 1 and 2, depending on whether we're talking about a single function or the integration of two or more functions. In either case, the driver is always responsible for controlling the car, even if the car may intervene under special circumstances, such as automatic emergency braking. Level 5 is complete automation under all circumstances; Level 4 is complete automation under defined sets of circumstances (such as clear weather, highway driving, and no more than moderate congestion).
That leaves Level 3: autonomous driving some of the time, and with the expectation that the car can always pass control back to the driver. Unfortunately, people are very bad at this kind of task-switching. Ford announced last month that it will not wrestle with the problems of Level 3 and will skip directly to Level 5. The company will introduce driverless cars in 2021.
On the other hand, BMW, Audi, and other OEMs will be rolling out Level 3 cars next year. These cars will give drivers at least 10 seconds to take over from the system. What kinds of MMIs will it take to rouse drivers from whatever they are doing and restore their situational awareness within 10 seconds?
When your car does the driving, what are you going to do? The car must entertain and inform you, but the ways in which it does so will be subject to constant change. When OEMs can no longer differentiate their products based on the conventional driving experience, they will have to innovate elsewhere. Displays, perhaps displays that conform to curved interior surfaces, will be critical, as will continual novelty.