Driving Along at Full Speed for Autonomous Vehicles



Sensors and processor technologies are already in place for the next generation of assisted driving and self-driving vehicles. Caroline Hayes asks, “How soon can we relinquish control of the car, and will we be happy with technology in the driving seat?”

Although analysts cannot seem to agree on a timeframe for autonomous vehicles to be on our roads, there is consensus that they are inevitable. French firm Yole Développement believes that technology is slowly replacing the driver: it says that by 2045 over 70% of vehicles sold will integrate autonomous functionality. More cautiously, Strategy Analytics says that, despite “the considerable market hype,” any volume market for fully autonomous vehicles is unlikely before 2025. Its Autonomous Vehicles Service report, Autonomous Vehicle Market Scenarios, offers three possible levels of adoption for automation. In the first, automakers follow an evolutionary path to roll out autonomous vehicles, but in numbers far below conventional car production through 2050; in the second, city cars using autonomous driving accelerate market growth; in the third, increased use of partially autonomous technology results in a slower roll-out.

Falling Sensor and Camera Costs Should Push Adoption

The increased use of sensors to detect the vehicle's environment, temperature or seat position will be complemented with short- and long-range radar sensors, ultrasonic sensors, Light Detection and Ranging (LIDAR) sensors and cameras. This last sector breaks into sub-groups of Near-Infrared (NIR) and Long-Wavelength Infrared (LWIR) cameras. Collectively, these sensors and cameras will add between $10,000 and $15,000 to the cost of the vehicle, the report predicts. This amount is expected to decrease over the next decade, further driving adoption.

Figure 1: Yole Développement’s Sensors & Data Management for Autonomous Vehicles cites the connected car as the first step towards autonomous vehicles.

The powerhouse behind the connected car is the technology that transmits and receives data from an internal and external network of sensors, and that analyzes and responds to the data in real time. Data throughput is expected to reach 1 GByte per second in a vehicle's Real-Time Operating System (RTOS). Computing performance will need to keep pace, analyzing this data quickly enough for the vehicle to react immediately to any change, whether in temperature inside the vehicle or in obstacles and distances outside it.
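
To make the real-time requirement concrete, here is a minimal sketch, in Python for readability, of a control loop that reads a sensor frame, reacts to an obstacle and checks its own deadline. Every function name and threshold here is illustrative; a production system runs compiled code under an RTOS scheduler, not Python.

```python
import time

# Hypothetical hooks standing in for RTOS driver bindings; every name
# and value here is illustrative, not a real automotive API.
def read_sensor_frame():
    return {"cabin_temp_c": 21.5, "obstacle_distance_m": 42.0}

def apply_brakes():
    print("brake command issued")

CYCLE_BUDGET_S = 0.010  # assume a 10 ms control-loop deadline

for _ in range(100):
    start = time.monotonic()
    frame = read_sensor_frame()
    if frame["obstacle_distance_m"] < 5.0:  # react to a close obstacle
        apply_brakes()
    elapsed = time.monotonic() - start
    if elapsed > CYCLE_BUDGET_S:
        # a real RTOS would log and escalate a missed deadline
        print(f"deadline miss: {elapsed * 1000:.1f} ms")
    else:
        time.sleep(CYCLE_BUDGET_S - elapsed)
```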

Anticipating Centralized Systems’ Resurgence

Many in the automotive supply chain are providers of both hardware and software. Intel® believes that this supply chain structure has led to the emergence of distributed computing systems within vehicles to accommodate a growing ecosystem of Electronic Control Units (ECUs). However, this adds to both the complexity and the cost of an embedded system. The company has identified a move towards a more centralized computing system, with its advantages of reducing complexity yet with a breadth of support that can only encourage the adoption of self-driving, and then autonomous, cars.

“There is a move towards a more centralized computing system, with its advantages of reducing complexity yet with a breadth of support that can only encourage the adoption of self-driving, or autonomous cars.”

Another benefit of a homogeneous computing system within the vehicle is that the ever-increasing volumes of data can be transmitted and received safely and securely. A centralized computing system is more secure than one built from a selection of discrete technologies. The source of the data is also critical in guarding against inaccurate data, malicious threats and hacks: data generated from sources inside the vehicle is more secure, and less vulnerable to attack, than data generated outside it.
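
A minimal sketch of that idea: a single, centralized dispatcher through which every message passes, trusting in-vehicle buses by default and demanding authentication from external sources. All names and the toy signature check are illustrative assumptions, not any vendor's actual design.

```python
# Buses physically inside the vehicle are trusted by default;
# names are illustrative.
TRUSTED_INTERNAL_BUSES = {"can0", "lidar", "cabin_sensors"}

def authenticated(message):
    # Placeholder for verifying a MAC/signature on external traffic.
    return message.get("signature") == "valid"

def dispatch(message, handlers):
    source = message["source"]
    if source not in TRUSTED_INTERNAL_BUSES and not authenticated(message):
        return  # drop unverified external data (cloud, V2X, traffic systems)
    handler = handlers.get(message["topic"])
    if handler:
        handler(message["payload"])

handlers = {"speed": lambda v: print("speed:", v)}
dispatch({"source": "can0", "topic": "speed", "payload": 88}, handlers)
dispatch({"source": "cloud", "topic": "speed", "payload": 999}, handlers)  # dropped
```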

Security Arsenal Grows

Intel’s Core™ and Xeon® processors use its Data Protection Technology with Advanced Encryption Standard – New Instructions (AES-NI). The AES algorithm protects data traffic; in self-driving vehicles, data will be received not only from sensors and cameras within the vehicle, but also from Web or Cloud services and the surrounding infrastructure, such as traffic systems. The integrity of that data is critical, and AES-NI accelerates AES encryption and decryption in hardware, so processing performance is unaffected.
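
For a feel of what this looks like at the software level, the sketch below encrypts a telemetry frame with AES-GCM using the Python cryptography library; on CPUs with AES-NI, the library's OpenSSL backend dispatches to the hardware instructions transparently. The payload and associated data are illustrative.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

frame = b'{"speed_kph": 88, "obstacle_m": 42.0}'  # illustrative payload
nonce = os.urandom(12)                            # must be unique per message
ciphertext = aesgcm.encrypt(nonce, frame, b"vehicle-id:42")

# The receiver verifies integrity and decrypts in one step; tampering
# with the ciphertext or associated data raises InvalidTag.
plaintext = aesgcm.decrypt(nonce, ciphertext, b"vehicle-id:42")
assert plaintext == frame
```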

Another weapon in the security arsenal is Hardware-Assisted Security (HAS), which adds layers of protection against stealthy attacks in which a bug is introduced and takes over a system. A further level of protection is offered by Intel's Platform Protection Technology, which prevents unauthorized software and malware from operating. This includes Intel OS Guard, which protects deep levels of a system even if an application has been compromised.
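
As a loose, generic illustration of the unauthorized-software idea (Intel's mechanisms operate in hardware, below the operating system, so this is not how OS Guard actually works), consider an allowlist check that refuses to run a binary whose hash is unknown:

```python
import hashlib

# Illustrative allowlist of SHA-256 digests of approved executables
# (the value is a placeholder, not a digest of a real binary).
ALLOWLIST = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def approved(path):
    """Return True only if the binary's hash is on the allowlist."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in ALLOWLIST
```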

The least compute-intensive assisted driving functions are those whose job is simply to inform the driver of conditions, such as lane departure warning and parking assistance. These require processor performance of less than 100,000 DMIPS and are presently carried out by processors in the Intel Atom class.

Functions such as adaptive cruise control, emergency braking and lane keeping are categorized as assisted driving and require processor performance above 100,000 DMIPS; these are performed by processors in the Intel Core i7 family, and can be found on the road today. The next level, self-driving, requires even higher DMIPS performance and is the domain of the Xeon processor family (see Figure 2).
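
The tiers reduce to a simple lookup, sketched below. The 100,000 DMIPS boundary comes from the text; the upper boundary of the Assist tier is an assumption added purely for illustration.

```python
# (ceiling in DMIPS, tier description); the 500,000 figure is assumed.
TIERS = [
    (100_000, "Inform (e.g. lane departure warning): Atom class"),
    (500_000, "Assist (e.g. adaptive cruise control): Core i7 class"),
    (float("inf"), "Assume (self-driving): Xeon class"),
]

def processor_tier(required_dmips):
    for ceiling, tier in TIERS:
        if required_dmips < ceiling:
            return tier

print(processor_tier(80_000))   # Inform tier
print(processor_tier(250_000))  # Assist tier
```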

Figure 2: Progressing from IVI (In-Vehicle Infotainment), Intel categorizes driving stages as Inform, Assist and Assume functions, each with its own performance level requirements.

Intel's Turbo Boost Technology accelerates processor performance, enabling the high-performance workloads needed by in-vehicle systems that must receive, analyze and act upon large volumes of data from various sources, for example passing information from an external camera to an ECU and the braking system in less than a second.

The Core i7's Smart Cache sub-system is another way the processor can be optimized for this and other multi-threaded operations that are commonplace in assisted driving systems.

All of this has to be executed in an energy-efficient manner. Intel's integrated memory controller offers low latency and high memory bandwidth (up to 25.6 GByte per second) to handle data-intensive operations without adversely impacting performance.
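
A quick back-of-envelope check, using only the figures quoted above, shows why that bandwidth matters. The three-passes-per-byte assumption is illustrative, not a measured workload.

```python
# Figures from the text: 1 GByte/s of sensor data against
# 25.6 GByte/s of peak memory bandwidth.
sensor_rate_gb_s = 1.0
mem_bandwidth_gb_s = 25.6

# Assume each byte is written once and read twice during analysis:
# an illustrative workload, not a measured one.
gb_moved_per_second = sensor_rate_gb_s * 3
transfer_time_s = gb_moved_per_second / mem_bandwidth_gb_s
print(f"memory traffic consumes {transfer_time_s * 1000:.0f} ms per second of data")
# ~117 ms, leaving most of a one-second reaction budget for computation
```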

Collaboration

Many automotive manufacturers are working with semiconductor companies to produce specific driver assistance systems.

Figure 3: Project Mobii is a collaborative research project between Intel and Ford to explore how interior cameras and vehicle data can be used to enhance the driving experience.

The Mobile Interior Imaging (Mobii) Project developed by Intel and Ford may be in the prototype phase, but it indicates how the smartphone, and consumer electronics, are making their way into the driving experience. It uses interior cameras, integrated with sensor technology and data collected from around the vehicle, to personalize the driver's settings and experience.

Intel ethnographers, anthropologists and engineers worked alongside Ford research engineers to develop Project Mobii, described as perceptual computing technology for an intuitive vehicle experience. The aim was to improve the driver experience by combining interior cameras with existing car sensor data and the driver's behavioral patterns, explains Tim Plowman, Experience Solutions Architect, Intel Labs.

“Our goal with the Mobii research is to explore how drivers interact with technology in the car and how we can then make that interaction more intuitive and predictive,” said Paul Mascarenas, Chief Technical Officer and Vice President, Ford Research and Innovation.

Based around a smartphone app on any Intel phone, one use for Project Mobii is to search the interior of the vehicle with the in-vehicle camera for, typically, a purse, wallet or child's toy, while the owner is still searching frantically inside the house.

Also part of Mobii is face-recognition software working with a front-facing camera. It automatically identifies a driver and sets the in-car console to display information specific to that driver. It can also set the driver's preferences for communications, music and schedule when the app recognizes that the driver is behind the wheel. If the driver is not alone and does not want a passenger to see a selection of guilty-pleasure music tracks, the console can be locked and the central display filtered, so the passenger sees only the navigation map or another choice of graphics.

“The Mobile Interior Imaging (Mobii) Project developed by Intel and Ford may be in the prototype phase, but indicates how the smartphone, and consumer electronics, are making their way into the driving experience.”

The same face-recognition software can identify new drivers, who can then be approved by the authorized owner of the vehicle. Settings can be adjusted to be appropriate, for example, for a newly qualified driver, with permissions such as driving perimeter distance, speed limit and number of passengers.
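
A minimal sketch of how such per-driver profiles and permissions might be represented; every field name and value is an assumption for illustration, not Mobii's actual data model.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DriverProfile:
    name: str
    music: List[str] = field(default_factory=list)
    private_display: bool = False          # hide personal info when a passenger is present
    max_speed_kph: Optional[float] = None  # owner-set limit for a new driver
    geofence_km: Optional[float] = None    # permitted driving perimeter
    max_passengers: Optional[int] = None

PROFILES = {
    "face_id_owner": DriverProfile("Owner", music=["guilty pleasures"], private_display=True),
    "face_id_teen": DriverProfile("New driver", max_speed_kph=90, geofence_km=25, max_passengers=1),
}

def on_driver_recognized(face_id: str, passenger_present: bool) -> str:
    profile = PROFILES.get(face_id)
    if profile is None:
        return "unknown driver: request approval from the vehicle owner"
    if passenger_present and profile.private_display:
        return f"{profile.name}: console locked, passenger sees navigation only"
    return f"{profile.name}: preferences loaded"

print(on_driver_recognized("face_id_owner", passenger_present=True))
print(on_driver_recognized("face_id_stranger", passenger_present=False))
```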

Who’s There?

Cameras that sense who is reaching for the center screen can, speculates Plowman, be used for safer driving. If the system senses the driver is requesting a destination, it can prompt for a voice command; if it senses a passenger reaching for the center console for navigational directions, it displays a keyboard so the destination can be typed in. In this way, drilling down through sub-systems can be eliminated.
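
The decision logic itself is simple, as the sketch below shows; detecting which occupant is reaching is the hard computer-vision problem, and is assumed here.

```python
# Illustrative decision logic only: which occupant is reaching is
# assumed to come from the interior camera system.
def destination_input_mode(occupant: str) -> str:
    if occupant == "driver":
        return "voice prompt"       # keeps the driver's eyes on the road
    return "on-screen keyboard"     # a passenger can type safely

print(destination_input_mode("driver"))     # voice prompt
print(destination_input_mode("passenger"))  # on-screen keyboard
```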

The principle of safe driving is to ensure the driver's eyes are on the road ahead. Sensors in Project Mobii can determine if the driver's head is down or turned, and send an alert to the head-up display and center console to 'remind' the driver of the task at hand and re-direct concentration to the road ahead.
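
A sketch of such an attention check; the angle and duration thresholds, and the convention that negative pitch means head-down, are assumptions for illustration.

```python
def attention_alert(yaw_deg: float, pitch_deg: float, duration_s: float,
                    max_angle: float = 30.0, max_duration: float = 2.0) -> bool:
    """True if the driver has looked away long enough to warrant an alert."""
    looking_away = abs(yaw_deg) > max_angle or pitch_deg < -max_angle
    return looking_away and duration_s > max_duration

print(attention_alert(yaw_deg=45.0, pitch_deg=0.0, duration_s=3.0))  # True: alert HUD
```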

Thinking Ahead

Following connectivity within the car, and connectivity to the immediate infrastructure, the next step is for cars to communicate with each other. Intel Research Scientist Jennifer Healey has spoken of vehicles ‘gossiping’ with each other to create safer roads. In a Technology, Entertainment and Design (TED) Talk in April 2013, she described technology available at the time that would allow cars to exchange data with each other to make roads safer. As she points out, when a driver says a motorcyclist “came out of nowhere,” it can’t be true; he or she was on the road for perhaps 30 minutes before lane-splitting in front of you. “Cars are three-dimensional objects that have a fixed position and velocity,” she said. They also travel along published road routes. “So, it is not that hard to make reasonable predictions about where a car is going to be in the near future . . . as soon as one car sees that motorcyclist, and puts him on the map [noting] position, velocity and estimate he’ll continue going 85 miles an hour, you’ll know, because . . . the other car will have whispered [to your car] something like: ‘By the way, five minutes, motorcyclist—watch out’.”

Healey proposed that to get to this level of ‘chatter’ between vehicles, a GPS and a camera in the car are the basic requirements. “Using computer vision, I can estimate where the cars around me are—sort of—and where they are going. What happens if two cars share that data—if they talk to each other?” she asked, and answered: “They both improve.” Robots with stereo cameras, GPS, two-dimensional laser range finders and short-range communication radios have been demonstrated talking to each other, precisely tracking each other’s position and avoiding each other. Healey does caution against too much chatter, which can produce more data packets than can be processed; in that case they need to be prioritized. Prioritization can be streamlined by eliminating packets from road users following an expected course, to concentrate on a vehicle that is going off-course. By identifying a road user who could be a problem because he is deviating from a predicted course, it is also possible to predict the new trajectory. “So you don’t only know he is going off-course, but you know how, and you know which drivers to alert to get out of the way,” she told the audience.
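
Healey's fixed-position-and-velocity argument translates directly into code. The sketch below predicts a tracked road user's position one second ahead with a constant-velocity model, then flags the ones that deviate from that prediction so only those generate alerts; the two-metre tolerance is an assumed value.

```python
def predict(position, velocity, dt):
    """Position dt seconds ahead under a constant-velocity model."""
    return (position[0] + velocity[0] * dt, position[1] + velocity[1] * dt)

def off_course(observed, predicted, tolerance_m=2.0):
    dx, dy = observed[0] - predicted[0], observed[1] - predicted[1]
    return (dx * dx + dy * dy) ** 0.5 > tolerance_m

# A motorcyclist tracked at 38 m/s (about 85 mph); a second later he has
# swerved 3.5 m laterally, so the prediction misses and nearby cars are alerted.
pred = predict(position=(0.0, 0.0), velocity=(38.0, 0.0), dt=1.0)
print(off_course(observed=(38.0, 3.5), predicted=pred))  # True
```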

Assisted driving, in varying degrees, is a reality on our roads today. The degrees of use in city, freeway and countryside driving still need to be defined, but the sensor and processor technology needed to collect and act on data is available now. Those component parts will not be a roadblock; reducing costs and gaining public acceptance might be a longer road to travel.

Caroline Hayes has been a journalist covering the electronics sector for over 20 years. She has edited UK and pan-European titles, covering design and technology for established and emerging applications.
