Driver Assistance Systems with the Power of FPGAs



Automakers can now differentiate their vehicles from those of their competitors in a hotly contested market with driver assistance applications based on an all-programmable, customized solution.

In recent years, the automotive industry has made remarkable advances in driver assistance (DA) systems that truly enrich the driving experience and provide drivers with new forms of information about the roadway around them. This article looks at how FPGAs can be leveraged to quickly bring new driver assistance innovations to market. [Editor’s note: driver assistance systems are sometimes referred to as ADAS: Advanced Driver Assistance Systems.]

Driver Assistance Introduction
Since the early 1990s, developers of advanced DA systems have envisioned a safer, more convenient driving experience. Over the past two decades, DA features such as ultrasonic park assist, adaptive cruise control and lane-departure warning have been deployed in high-end vehicles. More recently, automotive manufacturers have added rear-view cameras, blind-spot detection and surround-vision systems as options. Except for ultrasonic park assist, deployment volumes for DA systems have been limited. However, the research firm Strategy Analytics forecasts that DA system deployment will rise dramatically over the next decade, growing from $170 billion in 2011 to $266 billion by 2016 – a compound annual growth rate of 9.3%.

In addition to government legislation and strong consumer interest in safety features, innovations in remote sensors and associated processing algorithms that extract and interpret critical information are fueling an increase in DA system deployment. Over time, these DA systems will become more sophisticated and move from high-end to mainstream vehicles, with FPGA-based processing playing a major role.

Driver Assistance Sensing Technology Trends
Sensor research and development activities have leveraged adjacent markets, such as cell phone cameras, to produce devices that not only perform in the automotive environment, but also meet strict cost targets. Similarly, developers have refined complex processing algorithms using PC-based tools and are transitioning them to embedded platforms.

Figure 1: Driver Assistance Sensors Market

While ultrasonic sensing technology has led the market to date, IMS Research forecasts (Figure 1) show camera sensors dominating in the coming years.

A unique attribute of camera sensors is the value of both the raw and processed outputs. Raw video from a camera can be directly displayed for a driver to identify and assess hazardous conditions, something not possible with other types of remote sensors (for example, radar). Alternatively (or even simultaneously), the video output can be processed using image analytics to extract key information, such as the location and motion of pedestrians. Developers can further expand this “dual-use” concept of camera sensor data by bundling multiple consumer features based on a single set of cameras, as illustrated in Figure 2.

Figure 2: Bundling Multiple Automotive Features

From such applications, it is possible to draw a number of conclusions regarding the requirements of suitable processing platforms for camera-based DA systems:

  • They must support both video processing and image processing. In this case, video processing refers to proper handling of raw camera data for display to the driver, and image processing refers to the application of analytics to extract information (for example, motion) from a video stream.
  • They must provide parallel data paths for algorithms associated with features that will run concurrently.
  • Given that many new features require megapixel image resolution, connectivity and memory bandwidth are just as critical as raw processing power.

Meeting DA Processing Platform Requirements
FPGAs are well suited to meet DA processing platform requirements. Consider, for example, a wide-field-of-view, single-camera system that incorporates a rear cross-path warning feature. The system provides a distortion-corrected image of the area behind the vehicle while object-detection and motion-estimation algorithms generate an audible warning if an object enters the projected vehicle path from the side.

Figure 3 illustrates how the camera signal is split between the video- and image-processing functions. The raw processing power needed to perform these functions can quickly exceed what is available in a serial digital signal processor (DSP). Parallel processing along with hardware acceleration is a viable solution.

Figure 3: Video and Image Processing Functions

FPGAs offer highly flexible architectures to address various processing strategies. Within the FPGA logic, it is a simple matter to split the camera signal to feed independent video- and image-processing intellectual property (IP) blocks. Unlike serial processor implementations, which must time-multiplex resources across functions, the FPGA can execute and clock processing blocks independently. Additionally, if it becomes necessary to make a change in the processing architecture, the ability of the FPGA to reprogram hardware blocks surpasses solutions based on specialized application-specific standard products (ASSPs) and application-specific integrated circuits (ASICs), giving FPGA implementations a significant advantage when anticipating the future evolution of advanced algorithms.
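As a software analogy for this fan-out, the sketch below models one raw frame feeding two independent processing paths, a display path and an analytics path, the way the FPGA clocks separate IP blocks in parallel. This is an illustrative Python model, not FPGA code, and the function names and processing stand-ins are hypothetical:

```python
# Software model of the FPGA signal split: one camera frame feeds two
# independent processing paths. In hardware these run concurrently in
# separate IP blocks; here they are simply called back to back.
# All names and operations are illustrative stand-ins.

def display_path(frame):
    """Video processing: condition raw pixels for the driver's display
    (stand-in for distortion correction)."""
    return [[min(255, p + 10) for p in row] for row in frame]  # brightness lift

def analytics_path(frame, prev_frame):
    """Image processing: crude frame-difference motion metric
    (stand-in for object detection / motion estimation)."""
    return sum(abs(a - b) for ra, rb in zip(frame, prev_frame)
               for a, b in zip(ra, rb))

frame0 = [[100] * 4 for _ in range(3)]
frame1 = [[100, 100, 180, 180]] + [[100] * 4 for _ in range(2)]

shown = display_path(frame1)             # path 1: what the driver sees
motion = analytics_path(frame1, frame0)  # path 2: what the warning logic sees
print(motion)  # 160: two pixels each changed by 80
```

In the FPGA, neither path time-multiplexes resources with the other; each runs at whatever clock rate its IP block requires.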

Another benefit of FPGA implementation is device scalability. As OEMs look to bundle more features, processing needs will rise. For example, the rear-view camera might need to host a monocular ranging algorithm to provide drivers with information on object distance. The added functionality requires yet another parallel-processing path. Implementing this in a specialized ASIC or ASSP could be problematic, if not impossible, unless the designers made provisions for such expansion ahead of time.

Attempting to add this functionality to a serial DSP could require a complete re-architecture of the software design, even after moving to a more powerful device in the family (if it is feasible at all). By contrast, an FPGA-based implementation allows the new functional block to be added, utilizing previously unused FPGA logic and leaving existing blocks virtually intact. Even if the new function requires more resources than are available in the original device, part/package combinations frequently support moving to a denser device (that is, one with more processing resources) without the need to redesign the circuit board or existing IP blocks.

Finally, the reprogrammable nature of FPGAs offers “silicon reuse” for mutually exclusive DA functions. In the rear-looking camera example, the features described are useful while a vehicle is backing up, but an FPGA-based system could leverage the same sensor and processing electronics while the vehicle is moving forward, with a feature such as blind-spot detection. In this application, the system analyzes the camera image to determine the location and relative motion of detected objects. Since this feature and its associated processing functions are not required at the same time as the backup feature, the system can reconfigure the FPGA logic within several hundred milliseconds based on the vehicle state. This allows the complete reuse of the FPGA to provide totally different functionality at very little cost.
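The state-based reuse can be pictured with a trivial software model. In a real system the lookup would trigger loading a different bitstream (or partial-reconfiguration region) into the same fabric; the names below are hypothetical:

```python
# Toy model of FPGA "silicon reuse": the same hardware resources are
# loaded with a different DA function depending on vehicle state.
# In an FPGA this is a (partial) reconfiguration taking a few hundred
# milliseconds; here it is just a dictionary lookup. Names are illustrative.

PIPELINES = {
    "reverse": "rear_cross_path_warning",  # backing up
    "forward": "blind_spot_detection",     # driving forward
}

def configure_fpga(vehicle_state):
    """Select which DA pipeline the shared camera/FPGA resources run."""
    return PIPELINES[vehicle_state]

print(configure_fpga("reverse"))  # rear_cross_path_warning
print(configure_fpga("forward"))  # blind_spot_detection
```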

Meeting DA External Memory Bandwidth Requirements
In addition to raw processing performance, camera-based DA applications require significant external memory access bandwidth. The most stringent requirements come from multi-camera systems with centralized processing, for example, a four-camera surround-view system. Assuming four megapixel-class imagers (1,280 x 960 each), 24-bit color processing and performance of 30 frames per second (FPS), just storing the incoming images in external buffers requires 3.6 Gb/s of memory access. If the images must be simultaneously read and written, the requirement doubles to 7.2 Gb/s. At an 85 percent read/write burst efficiency, it rises further, to roughly 8.5 Gb/s. This estimate does not include other interim image storage or code access needs. Clearly, camera-based DA applications are memory bandwidth-intensive.
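The arithmetic can be reproduced directly. Note that exact results round slightly differently than the stage-by-stage rounded figures in the text, and an approximately 85 percent burst efficiency is assumed to match the ~8.5 Gb/s result:

```python
# Back-of-the-envelope memory bandwidth for a four-camera surround-view
# system: 1,280 x 960 imagers, 24-bit color, 30 frames per second.
cameras, width, height = 4, 1280, 960
bits_per_pixel, fps = 24, 30

write_bw = cameras * width * height * bits_per_pixel * fps  # buffer writes only
rw_bw = 2 * write_bw          # simultaneous read + write of each image
eff_bw = rw_bw / 0.85         # ~85% burst efficiency on the memory bus

print(f"write: {write_bw / 1e9:.1f} Gb/s")  # ~3.5 Gb/s (text rounds to 3.6)
print(f"r+w:   {rw_bw / 1e9:.1f} Gb/s")     # ~7.1 Gb/s (text: 7.2)
print(f"eff:   {eff_bw / 1e9:.1f} Gb/s")    # ~8.3 Gb/s (text: 8.5)
```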

These systems also commonly require memory controllers; however, adding one in a cost-effective manner requires efficient system-level design. Again, developers can leverage the flexibility of the FPGA to meet this need. To summarize, FPGA memory controllers provide customized external memory interface design options to meet DA bandwidth needs and optimize all aspects of the cost equation (memory device type, number of PCB layers, etc.).

DA Image Processing Need for On-Chip Memory Resources
In addition to external memory needs, camera-based DA processing benefits from on-chip memory that serves as line buffers for processing streaming video or analyzing blocks of image data. Bayer transformation, lens-distortion correction and optical-flow motion analysis are examples of functions that require video line buffering. As a brief quantitative example, consider a Bayer transform function that uses 12-bit Bayer-pattern intensity data to produce 24-bit color data. Implemented as a raw streaming process, bicubic interpolation requires buffering four lines of image data. Packing the 12-bit intensity data into 16-bit locations requires approximately 20.5 kb of storage per line, or 82 kb for four lines of data.
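Using the 1,280-pixel line width of the imagers discussed above (implied rather than stated in the sizing figures), the buffer requirements work out as follows:

```python
# Line-buffer sizing for the streaming Bayer-interpolation example:
# 12-bit raw pixels packed into 16-bit locations, 1,280-pixel lines,
# four lines buffered for bicubic interpolation.
line_width = 1280        # pixels per video line (assumed from the imager spec)
bits_per_location = 16   # 12-bit data packed into 16-bit words
lines_buffered = 4       # bicubic interpolation neighborhood

bits_per_line = line_width * bits_per_location
total_bits = bits_per_line * lines_buffered
print(bits_per_line, total_bits)  # 20480 81920  (~20.5 kb/line, ~82 kb total)
```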

As part of their suite of on-chip resources, today’s FPGAs offer localized memory called block RAM (BRAM). BRAM supports line-buffer storage of image data in close proximity to fabric-based image-processing cores, and as FPGA families increasingly target vision applications, the relative amount of BRAM has grown with each generation.

A Single All-Programmable Platform
Beyond memory bandwidth and image-processing requirements, a single, all-programmable system-on-chip (SoC) platform for DA applications offers automotive manufacturers a unique way to address both the technical challenges and the business goals of their DA designs. Such a platform gives designers an integrated, flexible, power-optimized solution with high computational performance. Manufacturers and their electronics suppliers can combine it with their own hardware and software, available IP and design frameworks to reduce development time, bill-of-materials (BOM) cost and risk for next-generation DA solutions.

Until now, this type of platform has been offered only as a multi-chip solution, which can require additional processing, keeps BOM costs high and limits the flexibility to scale across vehicle platforms. Automotive designers can now take advantage of the industry’s first SoC family to incorporate an ARM dual-core Cortex-A9 MPCore processing system with tightly coupled programmable logic on a single die. This combination dramatically increases performance, which is critical for processing-intensive, real-time DA applications, and enables greater system integration, allowing multiple DA applications to be bundled while simultaneously reducing BOM costs by minimizing device cost and the need for additional hardware platforms.

Automakers are eager to offer car buyers increasingly advanced DA applications, which have already proven popular in high-end vehicles. By introducing new DA applications, and by offering multiple DA applications per vehicle on an all-programmable, customized solution, automakers now have the opportunity to differentiate their vehicles from those of their competitors in a hotly contested market.


Paul Zoratti is a member of the Xilinx Automotive Team. As a senior system architect and manager of driver assistance platforms, his primary responsibility is the global application of Xilinx technology to automotive driver assistance systems. Zoratti holds master’s degrees in electrical engineering and business administration, as well as a specialized graduate certification in intelligent transportation systems, all from the University of Michigan. He has been awarded 16 United States patents associated with vehicle safety technology.
