Hail a Cab and All Hail the GPU



The autonomous car will rely on data processed in real time and on machine learning that adapts to the conditions, whatever they are, to ensure safe passage.

Autonomous vehicles will rely on artificial intelligence (AI) to power their vision-based safety and navigation systems. Machine learning is not a new idea, but it has become significant today because it improves the algorithms that refine and enhance applications currently used in Advanced Driver Assistance Systems (ADAS) and that will, in the future, control autonomous cars.

The growing volume of data now being generated has propelled machine learning, which relies on data for its ‘training.’ Around 90 percent of the world’s digital data was created in the last two years, and this abundance has accelerated the development of better machine learning algorithms.

To process all this data, interest has also surged in the Graphics Processing Unit (GPU). At GTC Europe 2017, Jensen Huang, NVIDIA’s President and CEO, explained how the GPU’s ability to render graphics at speed also suits it to machine learning and AI applications. “The GPU now finds itself with a rich suite of applications,” he said. GPUs serve “just about every industry,” he continued, citing High Performance Computing (HPC) and Internet services as examples. Practically every query, photo search, and recommendation on a mobile phone relies on a GPU, he told the conference.

Its advantage, Huang said, is parallel operation. “The only way to make a Central Processing Unit (CPU) go faster is higher clock speeds,” he said. “Not so with GPU computing. Applying parallelism is a special way to solve algorithms.” He then introduced the Volta GPU, describing it as the largest single processor ever made, with 21 billion transistors. It delivers 120 TFLOPS of deep learning performance and can replace an entire rack of servers, Huang told attendees.

Figure 1: Jensen Huang introduced the Volta GPU on stage at GTC Europe 2017. Production is expected in Q1 2018.
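To make Huang’s contrast concrete, here is a minimal CUDA sketch of the parallel model he describes: instead of one CPU core stepping through a loop element by element, every element gets its own GPU thread. The kernel, sizes, and values below are illustrative and are not drawn from the keynote.

```cuda
// vector_add.cu -- build with: nvcc vector_add.cu -o vector_add
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread handles exactly one element, so the work is spread
// across thousands of cores instead of looping on a single CPU core.
__global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                     // one million elements
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);              // unified memory keeps the demo short
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;  // enough blocks to cover every element
    vectorAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %.1f\n", c[0]);             // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```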

GPUs help engineers develop better algorithms while also allowing devices to process large data sets quickly. Data can also be shared with other robots in various locations via the Cloud, pooling information to accelerate learning.

Huang’s vision is that the Volta GPU will “turbocharge” the transportation sector. He believes that deep learning will accelerate applications “far faster” than Moore’s Law. Deep learning researchers are already using NVIDIA GPUs and finding that they can overcome machine learning’s main bottleneck: processing enough data to train the algorithms that govern behavior. Huang reported that researchers find the GPU extremely effective for training on large amounts of data, which requires trillions upon trillions of operations.

The software writes itself, Huang enthused, by taking in large amounts of data. Combine that with CUDA, the company’s parallel computing architecture with its associated libraries, compilers, and development tools, and, he said, these two forces will converge to turbocharge the company’s GPUs. “The GPU will revolutionize the car industry,” he said. “It has always been in design and simulation as part of the workflow. Now it will be in the car, solving one of the greatest challenges of computing: planning.” For autonomous vehicles, that means planning routes and navigating for hundreds of millions of cars while safeguarding hundreds of millions of people, both inside the vehicles and outside them.
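As a rough sketch of what pairing deep learning with CUDA’s libraries means in practice: the dense matrix multiplies at the heart of neural network layers are typically handed to a CUDA library such as cuBLAS rather than written by hand. The matrix sizes and values below are placeholders for illustration, not a real network layer.

```cuda
// gemm_demo.cu -- build with: nvcc gemm_demo.cu -lcublas -o gemm_demo
#include <cstdio>
#include <cublas_v2.h>
#include <cuda_runtime.h>

int main() {
    const int n = 512;                        // illustrative size only
    size_t bytes = n * n * sizeof(float);
    float *A, *B, *C;
    cudaMallocManaged(&A, bytes);
    cudaMallocManaged(&B, bytes);
    cudaMallocManaged(&C, bytes);
    for (int i = 0; i < n * n; ++i) { A[i] = 0.01f; B[i] = 0.02f; C[i] = 0.0f; }

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    // C = alpha * A * B + beta * C -- the core operation of a fully connected layer
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, A, n, B, n, &beta, C, n);
    cudaDeviceSynchronize();

    printf("C[0] = %f\n", C[0]);              // expect 512 * 0.01 * 0.02 = 0.1024
    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```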

At the same event, the company launched Pegasus, billed as the world’s first AI computer for fully driverless vehicles and the latest addition to the DRIVE PX AI computing platform. It delivers 320 trillion operations per second for deep learning calculations, enough to run numerous deep neural networks. It is designed for Level 5 driverless vehicles, i.e., vehicles that need no driver and so have no pedals, steering wheel, or any other controls a human could operate.

Figure 2: The latest addition to DRIVE PX AI, codenamed Pegasus, will usher in robotaxis.

These vehicles are a new genre: robotaxis. They can be summoned to an address and carry passengers from there to their final destination. This type of driverless vehicle will bring mobility to disabled or elderly users who otherwise must rely on private hire or the goodwill of friends and family to travel.

For this type of vehicle, masses of complex data must be processed to detect pedestrians and other hazards on the road, to track other vehicles and their routes, and to plot and follow navigation paths that may need updating in real time. These operations draw on large amounts of visual data from sensors and information systems in the car, from roadside information systems, and from satellite systems. Furthermore, the processing has to include many levels of redundancy to meet the safety standards required for automotive use. As a result, robotaxis require a small data center’s worth of servers in the trunk to run the deep learning, parallel computing, and computer vision algorithms that accommodate the constantly changing landscape and driving conditions. For example, a DRIVE PX AI supercomputer in the vehicle and GPUs in the data center can combine to create highly detailed maps for autonomous vehicle navigation systems.
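For a sense of the per-pixel parallelism such visual data involves, below is a minimal, hypothetical CUDA kernel for one small preprocessing step, converting an RGB camera frame to grayscale. It stands in for the far larger detection and planning pipelines and is not taken from NVIDIA’s DRIVE software.

```cuda
// grayscale.cu -- a hypothetical preprocessing stage, illustrative only
#include <cuda_runtime.h>

// One thread per pixel: convert an interleaved RGB camera frame to grayscale,
// a typical first step before a detection network sees the image.
__global__ void rgbToGray(const unsigned char* rgb, unsigned char* gray,
                          int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    int idx = y * width + x;
    const unsigned char* p = rgb + 3 * idx;
    // Standard luminance weights (ITU-R BT.601)
    gray[idx] = (unsigned char)(0.299f * p[0] + 0.587f * p[1] + 0.114f * p[2]);
}

// Host-side launch: a 2D grid of 16x16 thread blocks covers every pixel
// of a width x height frame already resident in device memory.
void launchRgbToGray(const unsigned char* d_rgb, unsigned char* d_gray,
                     int width, int height) {
    dim3 threads(16, 16);
    dim3 blocks((width + 15) / 16, (height + 15) / 16);
    rgbToGray<<<blocks, threads>>>(d_rgb, d_gray, width, height);
}
```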

The decrease in size that Pegasus represents is practical on two levels: it conserves space in the trunk, and it saves weight in the vehicle, increasing fuel efficiency.

The license-plate-sized Pegasus is powered by two NVIDIA Xavier Systems on Chip (SoCs), each incorporating a Volta GPU, plus two discrete GPUs to accelerate deep learning and computer vision algorithms. It is designed for Automotive Safety Integrity Level (ASIL) D certification, the highest and most stringent safety level, which safeguards against life-threatening or fatal injury in the event of a malfunction. It also has inputs/outputs (I/Os) for Controller Area Network (CAN) and FlexRay buses, dedicated high-speed inputs for RADAR, LIDAR, and ultrasonic sensors, 10-Gbit Ethernet connectors, and memory bandwidth of one terabyte per second.

It is expected to be available to automotive partners in the second half of 2018.

NVIDIA’s DRIVE IX software can be coupled with the DRIVE PX platform to process sensor data inside and outside of the vehicle.

DRIVE PX 2 configurations are available now.


Caroline Hayes has been a journalist covering the electronics sector for more than 20 years. She has worked on several European titles, reporting on a variety of industries, including communications, broadcast and automotive.

 
