An All-Round View of FPGAs in Surveillance Systems



Surveillance camera networks elicit a variety of responses but, for or against, they are here to stay and are increasing in number. FPGAs play a role in the embedded systems used to integrate a camera into a “smart” network.

Surveillance systems are proliferating. The UK is considered the country with the highest number of cameras installed, with one Closed Circuit Television (CCTV) camera for every 11 people. There are 420,000 in London alone, eclipsed only by Beijing, China, with 470,000 cameras installed around the city.

The integration of surveillance security systems is characterized by the use of ‘smart’ or edge computing in the systems. “Surveillance is becoming less simply acquisition devices and becoming more smart edge computing devices—this helps reduce latency,” says Aaron Behman, Director, Video & Vision, Corporate Strategy & Marketing, Xilinx.

Figure 1: The diverse requirements of surveillance systems mean that a variety of I/O and interfaces are required. The MachXO3 from Lattice Semiconductor can expand functionality of existing systems.

David Wang, Technical Marketing Manager, Industrial and Automotive Solutions, Lattice Semiconductor, believes that FPGAs suit the performance levels required in current, integrated systems. “The real-time video and image processing/analytics, especially in multi-camera surveillance systems, are increasingly computationally intensive, and are typically not well suited for implementation on the control CPU,” he says. By using an FPGA co-processor to offload these tasks from the CPU, the workload is lightened and image processing is accelerated, because the FPGA’s embedded DSP blocks process data in parallel, he explains.
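A minimal sketch (in Python, chosen purely for readability) of the division of labor Wang describes: one function stands in for the FPGA co-processor, and the CPU is reduced to a lightweight supervisor. The gain/gamma pipeline and its constants are illustrative assumptions, not any vendor’s design.

```python
import numpy as np

def fpga_coprocessor(frame):
    """Stand-in for the offloaded pipeline: every pixel is processed
    independently, which the FPGA's DSP blocks would do in parallel.
    The gain and gamma stages are illustrative, not a real pipeline."""
    corrected = np.clip(frame * 1.2, 0.0, 1.0)  # digital gain (assumed 1.2x)
    return corrected ** (1.0 / 2.2)             # gamma encode for display

def control_cpu(frames):
    """The CPU stays a supervisor: it hands frames off to the
    co-processor and collects results, rather than crunching pixels."""
    return [fpga_coprocessor(f) for f in frames]

# Toy usage: two random frames standing in for a camera feed
processed = control_cpu([np.random.rand(4, 6) for _ in range(2)])
```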

Depending on the surveillance level required, more or less detail is needed. Some cameras are subjected to very different light levels, for example cameras in doorways that see daylight as well as a darker interior. Other examples are underground car parks, tunnels, or outdoor scenes where some areas are in light and others in deep shadow; these require a wide dynamic range (WDR) image. Some specifications only need objects detected in outline, while others require higher frame rates to capture details such as a license plate number or facial features.

FPGAs can be used to perform high dynamic range (HDR) processing, whereby multiple exposures are taken. Ted Marena, Director of SoC/FPGA Product Marketing, Microsemi, believes that FPGAs are ideally suited to this task, which preserves tone and shade, lightening dark areas and toning down brighter areas for a clear view. “In every frame, an image sensor takes a short exposure, a medium exposure and a long exposure,” he begins. As any photographer knows, the short exposure captures bright areas, the medium captures general lighting and the long exposure is best for capturing dark areas. “Once these three exposures are sent out from an image sensor, an FPGA is ideal to interface to each one, and with external memory, it can save each, line by line,” he says. The FPGA performs mapping algorithms, including tone mapping. “The result,” he says, “is very clear and has a high dynamic range.”
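A minimal sketch of the exposure-fusion idea Marena describes, written in Python for readability; the exposure times, the weighting function and the Reinhard-style tone-mapping step are illustrative assumptions rather than Microsemi’s actual algorithm.

```python
import numpy as np

# Hypothetical exposure times (in seconds) for the three captures
EXPOSURES = {"short": 1 / 2000, "medium": 1 / 250, "long": 1 / 30}

def fuse_exposures(frames, well_exposed=0.5, sigma=0.2):
    """frames: dict mapping exposure name -> float image in [0, 1].
    Returns one tone-mapped wide-dynamic-range frame in [0, 1]."""
    radiances, weights = [], []
    for name, img in frames.items():
        # Normalize out the exposure time to estimate scene radiance
        radiances.append(img / EXPOSURES[name])
        # Trust pixels near mid-gray; clipped highlights and crushed
        # shadows in any one exposure receive a low weight
        weights.append(np.exp(-((img - well_exposed) ** 2) / (2 * sigma**2)))
    radiance = np.average(radiances, axis=0, weights=weights)
    # Reinhard-style global tone mapping compresses the range for display
    key = radiance / (radiance.mean() + 1e-8)
    return key / (1.0 + key)

# Toy usage: random frames standing in for the sensor's three exposures
frames = {name: np.random.rand(4, 6) for name in EXPOSURES}
wdr = fuse_exposures(frames)
```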

Image Sensors At Work

Higher video resolutions and increased frame rates place further demands on the computing performance required for image sensors and for video processing within the surveillance system. As a result, many embedded systems use the FPGA’s DSP, memory and logic elements to perform real-time, parallel operations.

FPGAs can support multiple sensors and cameras, for example in a surround-view camera. They are also flexible and able to adapt: based on the requirement, the FPGA can provide enough I/O to support multiple sensor interfaces and video algorithms. “FPGA control logic can either switch between the images of different sensors or stitch them together with the required image processing algorithms,” says Wang. He also points out that, unlike Application Specific Integrated Circuits (ASICs) or Application Specific Standard Products (ASSPs), FPGAs can be reprogrammed to support multiple sensor interfaces and video algorithms as required, rather than implementing only fixed video algorithms.

Lattice’s MachXO3 family (Figure 1) can expand the functionality of legacy microcontroller-based systems, with instant-on GPIO, image sensor interfacing, and support for Serial Peripheral Interface (SPI), I2C, Camera Serial Interface 2 (CSI-2) and Display Serial Interface (DSI) buses.

Figure 2: Example of a camera block diagram, using Microsemi’s SmartFusion2 SoC FPGA, showing the camera’s connectivity options and use of multiple sensors.

Images from multiple sensors can be stitched together to create surround views. Microsemi’s Marena says, “Using an FPGA, one can stitch together three image sensors, offset at 60 degrees each, to create a 180-degree view camera. The FPGA synchronizes the image sensors.” The FPGA has enough internal memory to support line buffers, allowing the start-up time of each sensor to be varied. The combined image can be sent to an Image Signal Processor (ISP), or the FPGA can process the image itself (Figure 2).
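To make the stitching step concrete, here is a minimal Python sketch that joins three already-synchronized, already-rectified frames into one wide view by cross-fading a fixed overlap band between neighbors. A real system would also correct lens distortion and warp the images before blending, and the 32-pixel overlap is an assumed figure.

```python
import numpy as np

def stitch_panorama(left, center, right, overlap=32):
    """Join three synchronized frames (float arrays of shape (H, W))
    into one wide frame, cross-fading a fixed band between neighbors."""
    ramp = np.linspace(0.0, 1.0, overlap)  # per-column blend weights

    def blend(a, b):
        seam = a[:, -overlap:] * (1 - ramp) + b[:, :overlap] * ramp
        return np.hstack([a[:, :-overlap], seam, b[:, overlap:]])

    return blend(blend(left, center), right)

# Toy usage: three 60-degree fields of view joined into one wide frame
views = [np.random.rand(480, 640) for _ in range(3)]
panorama = stitch_panorama(*views)
```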

According to Behman, the video codec feature in Xilinx’s UltraScale+ MPSoC (Multiprocessor System-on-Chip) targets surveillance. It supports both encoding and decoding. “Encoding is critical for surveillance,” he says. Together with higher-performance quad-core ARM Cortex-A53 processors and the ability to scale up to almost one million logic cells, the architecture is designed for the heterogeneous multiprocessing demanded by ‘smart’ surveillance systems, with video processing algorithms, object detection and analytics (Figure 3).

Surveillance systems typically customize the image sensor pipeline and video processing IP for different requirements, such as wide dynamic range or particular resolutions and frame rates. “The FPGA can address demands by offering flexible I/O configuration, parameterized IP, embedded DSP and memory blocks, in-system programmability and field upgradability,” observes Wang. FPGAs are particularly good at addressing IP-integration issues, he adds, because of their in-system programmability, block-modular design flow, timing analysis tools and on-chip debugging capability.

Figure 3: The Xilinx UltraScale+ MPSoC architecture includes 64-bit quad-core ARM Cortex-A53 processors and an ARM Mali-400MP graphics processor.

Care also has to be taken that connections within the system are reliable. “Because the video data are processed and transferred isochronously, if the connection, such as the PCI Express bus between the main CPU and the FPGA co-processor, is not designed carefully, the whole system can be bogged down and video frame data will be lost,” says Wang. To address this, he suggests two solutions. The first is to perform video interpretation and analytics in the FPGA and transfer only the extracted, useful information to the CPU, or to transfer data only when certain events are detected. The second is to compress the video data with an algorithm such as the Video Electronics Standards Association (VESA) Display Stream Compression, which is simple and inexpensive to implement, and whose decoder does not take much computing power from the CPU, he counsels.
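The first of Wang’s solutions can be illustrated with a short sketch: the analytics run next to the camera, and only a compact event record, not the raw frames, crosses the CPU link. The frame-differencing test and both thresholds below are illustrative stand-ins for whatever analytics the FPGA actually runs.

```python
import numpy as np

MOTION_THRESHOLD = 0.02  # assumed: fraction of pixels that must change
PIXEL_DELTA = 0.1        # assumed: per-pixel change considered significant

def extract_event(prev_frame, frame):
    """Stand-in for FPGA-side analytics: compare consecutive frames and
    return a compact event record, or None, instead of raw video data."""
    changed = np.abs(frame - prev_frame) > PIXEL_DELTA
    fraction = changed.mean()
    if fraction < MOTION_THRESHOLD:
        return None  # nothing to send; the CPU link stays idle
    ys, xs = np.nonzero(changed)
    # Only a bounding box and a magnitude cross the bus, not the frame
    return {"bbox": (int(xs.min()), int(ys.min()),
                     int(xs.max()), int(ys.max())),
            "changed_fraction": float(fraction)}
```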

Convolutional Neural Networks

“We are seeing FPGAs are particularly well suited to support some of the emerging algorithms and applications,” says Behman. FPGAs maintain critical power envelopes at an economical price point, he says. “A figure of merit is how many images can the algorithm qualify and at what clock speed and what power consumption.”

One emerging application is Convolutional Neural Networks (CNNs), in which research aims to ‘train’ an algorithm to identify images. At a low level, this means the system zooms in on small groups of pixels to identify patterns; at a higher level, says Behman, it can identify details, such as a curve, to confirm that it is a tire on a vehicle. Applying these algorithms to very large data sets, e.g. ImageNet (an open image database), and making them ready for deployment in computationally intensive situations can, he says, be supported by FPGAs.
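A single CNN layer is easy to sketch. The toy below, in plain Python/NumPy with a hand-written edge kernel standing in for learned weights, shows why the workload maps well to an FPGA: every output pixel is an independent multiply-accumulate, exactly what embedded DSP blocks execute in parallel.

```python
import numpy as np

# A hand-written 3x3 edge detector standing in for a first-layer CNN
# filter; in a trained network these weights would be learned.
EDGE_KERNEL = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]], dtype=float)

def conv2d_relu(image, kernel):
    """One CNN-style layer: valid 2-D convolution plus a ReLU.
    Each output pixel is an independent multiply-accumulate, so all
    of them could be computed in parallel in hardware."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = (image[y:y + kh, x:x + kw] * kernel).sum()
    return np.maximum(out, 0.0)  # ReLU non-linearity

feature_map = conv2d_relu(np.random.rand(8, 8), EDGE_KERNEL)
```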


Caroline Hayes has been a journalist covering the electronics sector for over 20 years. She has edited UK and pan-European titles, covering design and technology for established and emerging applications.
