The Car, Scene Inside and Out: Q & A with FotoNation



Looking at what’s moving autonomous vehicles closer to reality, who’s driving the car—and what’s in the back seat.

Sumat Mehra, senior vice president of marketing and business development at FotoNation, spoke recently with EECatalog about the news that FotoNation and Kyocera have partnered to develop vision solutions for automotive applications.

EECatalog: What are some of the technologies experiencing improvement as the autonomous and semi-autonomous vehicle market develops?

Sumat Mehra, FotoNation: Advanced camera systems, RADAR, LiDAR, and other types of sensors that have been made available for automotive applications have definitely improved dramatically. Image processing, object recognition, scene understanding, and machine learning in general with convolutional neural networks have also seen huge enhancements and impact. Other areas where the autonomous driving initiative is spurring advances include sensor fusion and the car-to-car communication infrastructure.

Figure 1: Sumat Mehra, senior vice president of marketing and business development at FotoNation, noted that the company has already been working on metrics applicable to the computer vision related areas of object detection and scene understanding.

EECatalog: What are three key things embedded designers working on automotive solutions for semi-autonomous and autonomous driving should anticipate?

Mehra, FotoNation: First, advances in machine learning. Second, heterogeneous computing: general-purpose processors of every type (CPUs, GPUs, DSPs) are being opened up for programming. Hardware developers as well as software engineers will use not only heterogeneous computing but also dedicated hardware accelerator blocks, such as our Image Processing Unit (IPU). The IPU delivers very high performance at very low latency and with very low energy use. For example, the IPU makes it possible to run 4K video and process it for stabilization at extremely low power: 18 milliwatts for 4K 60 frames-per-second video (a quick sanity check of what that figure implies appears after this answer).

Third, sensors have come down dramatically in price and offer improved signal-to-noise ratios, resolution, and distance-to-subject performance.

We’re also seeing improved middleware, APIs, and SDKs, plus frameworks that provide reliable, portable tool kits to build solutions around, much like what happened in the gaming industry.
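As a rough sanity check on the IPU numbers quoted above, the short sketch below works out the pixel throughput of 4K 60 frames-per-second video and the per-frame energy budget an 18-milliwatt draw implies. (Plain Python; the 3840 × 2160 pixel count is an assumption, since the interview does not specify which 4K format is meant.)

```python
# Back-of-the-envelope arithmetic for the figure quoted above:
# 18 mW while stabilizing 4K video at 60 frames per second.

WIDTH, HEIGHT = 3840, 2160   # assumed 4K UHD resolution
FPS = 60                     # frames per second
POWER_W = 0.018              # 18 milliwatts, expressed in watts

pixels_per_second = WIDTH * HEIGHT * FPS
energy_per_frame_mJ = POWER_W / FPS * 1e3                # millijoules per frame
energy_per_pixel_nJ = POWER_W / pixels_per_second * 1e9  # nanojoules per pixel

print(f"Throughput:   {pixels_per_second / 1e6:.0f} Mpixel/s")
print(f"Energy/frame: {energy_per_frame_mJ:.1f} mJ")
print(f"Energy/pixel: {energy_per_pixel_nJ:.3f} nJ")
```

Roughly 500 megapixels per second on a budget of about 0.3 millijoules per frame, which is what makes a dedicated accelerator block, rather than a general-purpose CPU, the natural fit for the task.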

EECatalog: Will looking to the gaming industry help avoid some re-invention of the wheel?

Mehra, FotoNation: Certainly. The need for compute power is something gaming and the automotive industry have in common, and we’ve seen companies with a gaming pedigree making efforts [in the automotive sector]. And, thanks to the mobile industry, sensors have come down in price to the point where they can be used for much more than putting a large sensor with very large optics in one’s pocket. Sensors can now be embedded into bumpers, side view mirrors, and the front and back ends of cars to enable much more powerful vision functionality.

EECatalog: Will the efforts to enable self-driving cars be similar to the space program in that some of the research and development will result in solutions for nonautomotive applications?

Mehra, FotoNation: Yes. For example, collision avoidance and scene understanding are two of the applications that are driving machine learning and advances toward automotive self-driving. These are problems similar to those that robotics and drone applications face. Drones need to avoid trees, power lines, buildings, etc. while in flight, and robots in motion need to be aware of their surroundings and avoid collisions.

And other areas, including factory automation, home automation, and surveillance, will gain from advances taking place in automotive. Medical robots that can help with mobility [are another] example of a market that will benefit from the forward strides of the automotive sector.

EECatalog: How has FotoNation’s experience added to the capabilities the company has today?

Mehra, FotoNation: FotoNation has evolved dramatically. We have been in existence for more than 15 years, and when we started, it was still the era of film cameras. The first problem we started tackling was, “How do you transfer pictures from a device onto a computer or vice versa?”

So we worked in the field of picture transfer protocols, of taking pictures on and off devices. Then, when we came into the digital still camera space through this avenue, we realized there were other imaging problems that needed to be addressed.

We solved problems such as red-eye removal through computational imaging. Understanding the pixels, understanding the images, understanding what’s being looked at, and being able to correct for it relates to advances in facial detection, because the most important thing you want to understand in a scene is a person.

Then, as cameras became available for automotive applications, new problems arose. We drew from all that we had been learning through our experience with the entire gamut of image processing. The metrics FotoNation has been working on in different areas have become applicable to such automotive challenges as object detection and scene understanding.

As pioneers in imaging, we don’t deliver just standard software or an algorithm written for any one type of standard processor. We offer a hybrid architecture, where our IPU provides hardware acceleration for specialized computer vision tasks like object recognition or video image stabilization at much higher performance and much lower power than a CPU. We deliver our IPU as a netlist that goes into a system on chip (SoC). Hybrid hardware/software architectures are important for applications such as automotive, where high performance and low power are both required. Performance is required for low latency, to make decisions as fast as possible; a car moving at 60 miles per hour cannot wait an extra frame or two (at 16 to 33 milliseconds per frame) to decide whether it is going to hit something. Low power is required to avoid excessive thermal dissipation (heat), which is a serious problem for electronics, especially image sensors.
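Mehra’s latency point is easy to make concrete. A minimal sketch, using only the figures he quotes (60 miles per hour, 16 to 33 milliseconds per frame), of how far the car travels while a vision pipeline waits for one more frame:

```python
# Distance a car travels during one video frame, using the figures
# from the discussion above: 60 mph, 16-33 ms per frame.

MPH_TO_MPS = 0.44704            # miles per hour -> meters per second
speed_mps = 60 * MPH_TO_MPS     # 60 mph is roughly 26.8 m/s

for frame_ms in (16, 33):       # ~60 fps and ~30 fps frame times
    meters = speed_mps * frame_ms / 1000
    print(f"{frame_ms} ms frame: car travels {meters:.2f} m")
```

Each 33-millisecond frame of hesitation costs nearly a meter of stopping distance, which is why per-frame decision latency matters as much as raw throughput.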

EECatalog: When it comes down to choosing FotoNation over another company with which to work, what reasons for selecting FotoNation are given to potential customers?

Mehra, FotoNation: One reason is experience. Our team has more than 1,000 man-years of experience in embedded imaging. Many other companies came from the field of image processing for computers or desktops and then moved into embedded. We have lived and breathed embedded imaging, and the algorithms and solutions we develop reflect that.

The scope of imaging that we cover ranges all the way from photons to pixels. Experience with the entire imaging subsystem is a key strength: we understand how the optics, color filters, sensors, processors, software, and hardware work independently and in conjunction with each other.

Another reason is that a high proportion of our engineers are PhDs who look at various ways of solving problems, refusing to be pigeonholed into addressing challenges in a single way. We have a strong legacy of technology innovation, demonstrated through our portfolio of close to 700 granted and applied-for patents.

EECatalog: Had the press release about FotoNation’s working with Kyocera Corporation to develop vision solutions for automotive been longer, what additional information would you convey?

Mehra, FotoNation: More on our IPU, and how OEMs in the automotive area would definitely gain from the architectural advantages it delivers. The IPU is our greatest differentiator, and we would like our audience to understand more about it.

Another thing we would have liked to include is more on the importance of driver identification and biometrics. FotoNation acquired an iris biometrics company, Smart Sensors, a year ago, and we will be [applying] those capabilities to driver monitoring systems. The first step to autonomous vehicles is semi-autonomous vehicles, where drivers are sitting behind the steering wheel but not necessarily driving the car. And for that first step you need to know who the driver is. What biometrics bring you is that capability of understanding the driver.

Other metrics include being able to look at the driver to tell whether he is drowsy, paying attention, or looking somewhere else. Decision making becomes easier when [the vehicle] knows what is going on inside the car, not just outside it; that is an area where FotoNation is very strong.

EECatalog: In a situation where the car is being shared, a vehicle might have to recognize, for example, “Is this one of the 10 drivers authorized to share the car?”

Mehra, FotoNation: Absolutely, and the car’s behavior should be able to factor in whether it is a teenager or an adult getting behind the wheel; risk assessments can then begin to happen. All of this additional driver information can assist in better driving and, ultimately, increased driver and pedestrian safety. [A sketch of the matching step this scenario implies follows this answer.]

And we see [what’s ahead as] not just driver monitoring, but in-cabin monitoring through a 360-degree camera that is sitting inside the cockpit and able to see what is going on: Is there a dog in the back seat, which is about to jump into the front? Is there a child who is getting irate? All of those things can aid the whole experience and reduce the possibility of accidents.
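The interview does not describe how FotoNation implements driver identification, but the shared-car scenario above reduces to a standard biometric identification step: compare a live capture against a small gallery of enrolled drivers. Below is a minimal sketch of that step; the encoder, the 128-dimensional embeddings, and the 0.8 threshold are all illustrative assumptions, not FotoNation’s design.

```python
import numpy as np

# Hypothetical gallery: one embedding per authorized driver, produced
# offline by some face/iris encoder (not specified in the interview).
rng = np.random.default_rng(0)
enrolled = {f"driver_{i:02d}": rng.random(128) for i in range(1, 11)}

MATCH_THRESHOLD = 0.8  # illustrative; a real system tunes this against
                       # false-accept / false-reject requirements

def identify(live: np.ndarray) -> str | None:
    """Return the best-matching enrolled driver, or None if no match."""
    best_id, best_score = None, -1.0
    for driver_id, template in enrolled.items():
        # Cosine similarity between the live capture and the template
        score = float(np.dot(live, template) /
                      (np.linalg.norm(live) * np.linalg.norm(template)))
        if score > best_score:
            best_id, best_score = driver_id, score
    return best_id if best_score >= MATCH_THRESHOLD else None

# A recognized driver can then unlock an age- or experience-specific
# risk profile, as Mehra describes; an unrecognized one is rejected.
```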
