The one-day Embedded Vision Summit shows developers how to make their systems smarter with cameras, DSPs, and other sensors.
Jeff Bier: head of the Embedded Vision Alliance’s Embedded Vision Summit.
Update 5/9/14: typo, caption and URL corrections.
BDTI’s Jeff Bier is known in the industry as a rock-solid guy, an expert on all things DSP, and the man behind the company that publishes processor benchmarks and analyses that are on par with IEEE peer-reviewed content. And Jeff doesn’t jump up and down with excitement much. At least, I’ve never seen it. Look at his photo and you’ll see what I mean.
But he’s virtually hopping from foot to foot with excitement about the 29 May 2014 one-day Embedded Vision Summit to be held at the Santa Clara Convention Center. This fourth annual conference is Jeff’s brainchild because he sees “embedded vision as the next most important use for DSP [devices], algorithms, and their associated sensors.”
“More significant than software defined radio?” I asked. “Yes,” he said.
“Than cellular baseband processing?” “Yep.”
“Than the image processing done on the world’s billions of smartphones?”
“That,” he said, “is a perfect example of embedded vision.”
Definition: Helping machines see
Most people yawn or politely make excuses to water the cactus at the mention of “computer vision”. To me, it’s a camera-based system doing high-speed QA on an assembly line. Snore.
But embedded vision, says Jeff, is the practical use of computer vision in applications such as smartphone photography, augmented reality, Microsoft Kinect-/Minority Report-style 3-space gestures, face detection, video games, and so on.
Embedded vision is not your father’s computer vision; rather, it’s a deployable, software-defined sensor system that:
- is practical, and
- extracts new meaning from (primarily image) sensors.
With low-cost embedded vision on board, machines become dramatically smarter about the world around them. The upcoming Embedded Vision Summit won’t have presentations by assembly-line companies like Campbell’s Soup or Procter and Gamble.
But there might be a presentation from a factory company like Ford Motor, because automobiles are one of the “killer apps” for embedded vision. Instead of Ford, though, Google will be there discussing its self-driving car.
Google’s self-driving Lexus. I guess the low-end Google Prius takes the Street View images while the luxurious Lexus gets the swanky job of shuttling around wide-eyed passengers. (Courtesy: Google.)
Interested yet in embedded vision?
Automotive embedded vision
Google’s self-driving car is a perfect example of embedded vision, combining cameras, radar, and ultrasonic sensors with DSP algorithms and processors. The Embedded Vision Summit will include Google’s Nathaniel Fairfield speaking about “Self-Driving Cars.” (See the full agenda snapshot at the bottom of this post.)
Many auto manufacturers are already fusing cameras and other sensors into Advanced Driver Assistance Systems (ADAS) for lane departure warning, anti-collision emergency braking, blind spot detection, and the most basic of all: the “steerable” back-up camera with overlay.
ADAS sensors surround next-gen cars. Embedded vision may use cameras along with, or in lieu of, these systems for lower-cost implementations.
(Courtesy: Analog Devices. As reported in “Automotive sensors may usher in self-driving cars,” EDN. See: http://edn.com/design/automotive/4368069/Automobile-sensors-may-usher-in-self-driving-cars )
Subaru’s EyeSight system uses cameras mounted alongside the rearview mirror, while Mercedes uses a combined camera/radar unit in its ADAS. Cameras are by far the cheaper alternative, and embedded vision extracts added capability from these low-cost sensors. Analyst firm Strategy Analytics estimates that “100 million cameras will be fitted to light vehicles in 2020” (Roger C. Lanctot, “GTC: Merging ADAS and Infotainment for Cloud Enhanced Safety”).
Subaru’s EyeSight system uses twin forward-facing cameras for lane departure and other adaptive safety features. (Courtesy: Subaru of America.)
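To get a feel for the DSP side of a feature like lane departure warning, here is a minimal, purely illustrative Python/OpenCV sketch: find edges in the lower half of a forward-facing camera frame and fit line segments to the lane markings with a Hough transform. The file name and thresholds are my own placeholders; this is not Subaru’s, Mercedes’, or anyone else’s production pipeline.

```python
# Illustrative lane-marking detection sketch (not any vendor's ADAS code).
# Assumes a single forward-facing frame saved as "road.jpg" (placeholder name).
import cv2
import numpy as np

frame = cv2.imread("road.jpg")              # a real system would process a video stream
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)    # suppress noise before edge detection
edges = cv2.Canny(blur, 50, 150)            # edge map of the road scene

# Keep only the lower half of the image, where lane markings normally appear.
mask = np.zeros_like(edges)
mask[edges.shape[0] // 2:, :] = 255
roi = cv2.bitwise_and(edges, mask)

# Probabilistic Hough transform: fit line segments to the remaining edges.
lines = cv2.HoughLinesP(roi, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=20)

if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)  # overlay detected lanes

cv2.imwrite("lanes.jpg", frame)
```

A production ADAS would add camera calibration, temporal tracking, and departure logic on top of something like this, typically running on a dedicated vision DSP rather than a PC.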
Embedded vision…coming to your next design project
Merging the vision sensor (typically one or more cameras) with DSP algorithms and processors creates a software-defined sensor that makes the end system dramatically smarter. Beyond automobiles, embedded vision is already installed in smartphones.
HDR (high dynamic range) imaging, panoramic stitching, face recognition (for tagging photos or unlocking an Android device), red-eye removal, and background/foreground blurring are just a few of the myriad examples of in-production embedded vision. And these are right in your pocket or purse.
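For the curious, the face detection your phone does when you frame a shot can be approximated in a few lines with OpenCV’s stock Haar-cascade classifier. This is a generic sketch with a placeholder file name, not the tuned detector any particular handset vendor ships.

```python
# Illustrative face detection with OpenCV's bundled Haar cascade.
# "photo.jpg" is a placeholder input image.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Slide the classifier over the image at multiple scales.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                  minSize=(40, 40))

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)  # mark each face

cv2.imwrite("faces.jpg", img)
```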
The Embedded Vision Alliance (a key sponsor of the Summit) cites market research firm Markets&Markets in projecting that the augmented reality market could top $1B by 2018. Augmented reality applications provide overlay information on top of a live or stored image. At the grocery store or in your kitchen pantry, Amazon’s Flow app (available in the iTunes store) lets a user aim their smartphone camera at a product and order it through Amazon. Ikea has a related augmented reality application that relies on embedded vision to superimpose Ikea furniture and products in your home environment. Now you can decide whether blonde is really your color or not.
Amazon’s Flow app lets a user aim their smartphone at a product and order directly from Amazon. (Courtesy: Amazon.com.)
Get healthy; live better
Point-of-sale terminals or vending machines might use facial recognition to authenticate a user, or go beyond a bar code or QR code when searching for information about a product held in front of a sensor. Even better, gesture recognition might make for a better UI input, or perhaps a more sanitary one at markets that seem so obsessed with germicide wipes for shopping carts.
In medical situations, embedded vision could be of great benefit. Most designers could envision (no pun intended) a doctor pulling up a patient’s “chart” in a Google Glass display (geek factor notwithstanding). Google Glass, says Embedded Vision Summit’s Jeff Bier, is merely a platform and not, at present, a complete embedded vision system.
But the company OrCam is going several steps further than Google Glass by offering a device that helps vision-impaired people “read.” A tiny eyeglasses-mounted camera performs text and object recognition and provides audio information to the wearer. Product labels can be “read,” along with newspaper text, bus numbers, and the state of street-crossing signals. And there’s more capability on the way as algorithms improve and GPGPU processing power grows, thanks to companies like Nvidia.
OrCam’s augmented reality device helps vision-impaired people to “see” and “read” with audio cues. (Courtesy: OrCam.com.)
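If you want to tinker with the idea behind a “reading” wearable, a toy version can be strung together from off-the-shelf pieces: OpenCV to clean up the image, Tesseract OCR to pull out the text, and a text-to-speech engine to speak it. The sketch below is my own illustration using the pytesseract and pyttsx3 Python packages and a placeholder image name; it is emphatically not OrCam’s software.

```python
# Toy "camera reads text aloud" sketch (illustrative only, not OrCam's system).
# Requires the pytesseract and pyttsx3 packages plus a Tesseract install.
import cv2
import pytesseract
import pyttsx3

img = cv2.imread("label.jpg")                    # placeholder: a photo of a product label
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binarized = cv2.threshold(gray, 0, 255,       # Otsu binarization helps the OCR stage
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)

text = pytesseract.image_to_string(binarized)    # OCR the label text
print("Recognized:", text)

speaker = pyttsx3.init()                         # speak the result to the user
speaker.say(text)
speaker.runAndWait()
```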
Embedded Vision Summit Agenda
The one-day Summit agenda focuses on evangelizing and educating hardware, software, and system designers. At their core, the briefings are compelling, mixing “how to” with “how your future system could do this!”
By the way, I neglected to mention that there’s the obligatory exhibit hall showcase experience, too. Unlike other user events, this one promises to be pretty cool, since the exhibitors are the “who’s who” of signal processing and high-performance hardware/software companies.
Attendees should leave with an understanding of embedded vision…plus ideas for adding vision sensors to their own embedded designs to help their machines see the future.
The full agenda schedule grid is shown below.
2014 Embedded Vision Summit agenda; the Summit is held 29 May at the Santa Clara Convention Center.