Archive for May, 2016

Intersil Unveils Industry’s Highest Performance Laser Diode Driver for Automotive Heads-Up Displays

Tuesday, May 31st, 2016

High-speed, quad-channel ISL78365 pulses high intensity lasers up to 750mA, projects full-HD resolution video onto windshield

Intersil Corporation (NASDAQ: ISIL), a leading provider of innovative power management and precision analog solutions, today announced the ISL78365 laser diode driver for automotive heads-up display (HUD) systems. The highly integrated device pulses four high intensity lasers up to 750mA for projecting full-HD color video onto the windshield, at nearly twice the current of competitive solutions. The ISL78365’s higher current and faster switching speed enable HUDs with high resolution, high color-depth and high frame-rate projections.

Laser HUDs are the latest innovation in advanced driver assistance systems (ADAS). A vehicle’s HUD keeps drivers focused on the road, safely providing speed, warning signals and other vital vehicle and navigation information on the windshield directly in the driver’s line of sight. The new generation of augmented reality laser HUDs offer near-zero latency and a wide field of view. They overlay additional real-time information such as traffic signs and a planned turning lane, virtually painting arrows and lines on the road ahead to make navigation directions obvious and easy to follow.

The quad-channel ISL78365 is a complete solution for driving lasers in the scanned-MEMS laser projection systems being deployed in next generation autos. It is the industry’s only laser driver with a fourth channel for supporting a wide variety of laser diode configurations, allowing system designers to achieve the desired brightness and sharp, rich colors with high contrast. The ISL78365 provides sub-1.5ns rise and fall times for faster switching speed than competitive devices, resulting in high frame rate, HD color video. The device also offers 10-bit color and 10-bit scale resolution to support a wide variety of contrast levels for each driver channel. And its flexible synchronous parallel video interface supports pixel rates up to 150MHz or 1900 pixels per line.

The ISL78365’s dynamic power management optimizes the laser-diode power supply and offers three power saving modes for improved efficiency and reduced power dissipation to meet system thermal requirements. The programmable multi-pulse return to zero (RTZ) feature reduces speckling, and the device’s wettable flank QFN package simplifies integration into compact laser projection HUDs.

“The ISL78365 offers carmakers the high drive capability, switching speed and color video accuracy that’s enabling advanced laser heads-up displays for today’s luxury vehicles and tomorrow’s mid-range car models,” said Philip Chesley, senior vice president of Precision Products at Intersil. “Our feature-rich laser diode driver minimizes system size and complexity while delivering significantly higher performance than the competition.”

Key Features and Specifications

• Up to 750mA of peak current output per channel
• Fast output switching speeds with pulse rise/fall times of 1.5ns typical for crisp pixels
• Supports up to 150MHz maximum output pixel clock
• Laser voltage sampler with integrated dynamic power optimization controller to conserve system power
• Flexible data order supports multiple RGB laser diode opto-mechanical placements
• Blanking time power reduction reduces laser diode driver current consumption
• Programmable multi-pulse RTZ for maximum flexibility and speckle reduction
• Single 3.3V supply and 1.8V video interface for low power operation
• 3-wire serial peripheral interface
• AEC-Q100 Grade-1 qualified for operation from -40°C to +125°C
• Wettable flank QFN package allows optical inspection of solder joints for lower manufacturing cost
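The feature list implies a register-style control model behind the 3-wire SPI port. As a rough sketch only, here is how a host microcontroller might push settings to such a driver; the frame format, register addresses and field names below are invented for illustration and do not come from the Intersil datasheet.

#include <stdint.h>

/* Platform-provided 16-bit SPI transfer; assumed, not part of any SDK. */
extern void spi_write16(uint16_t word);

/* Assumed (not datasheet) framing: 6-bit register address in the top
   bits, 10-bit data word below, echoing the 10-bit resolutions in the
   feature list above. */
static void isl78365_write(uint8_t reg, uint16_t data10)
{
    spi_write16(((uint16_t)(reg & 0x3Fu) << 10) | (data10 & 0x3FFu));
}

void hud_laser_init(void)
{
    isl78365_write(0x01u, 512u);  /* hypothetical: channel-1 current DAC, mid-scale */
    isl78365_write(0x05u, 0x2Au); /* hypothetical: multi-pulse RTZ pattern */
}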

The ISL78365 can be combined with the ISL78206 2.5A synchronous buck regulator, ISL78201 2.5A synchronous buck/boost regulator, ISL78233 3A synchronous buck regulator, ISL78302 dual 300mA LDO and ISL29125 digital RGB color light sensor to provide a complete power supply system for automotive laser projection HUDs.

Pricing and Availability

The quad-channel ISL78365 laser diode driver is available now in a 6mm x 6mm 40-lead WFQFN package and is priced at $9.82 USD in 1k quantities. For more information, please visit: www.intersil.com/products/isl78365.

About Intersil

Intersil Corporation is a leading provider of innovative power management and precision analog solutions. The company’s products form the building blocks of increasingly intelligent, mobile and power hungry electronics, enabling advances in power management to improve efficiency and extend battery life. With a deep portfolio of intellectual property and a rich history of design and process innovation, Intersil is the trusted partner to leading companies in some of the world’s largest markets, including industrial and infrastructure, mobile computing, automotive and aerospace. For more information about Intersil, visit our website at www.intersil.com.

Contact Information

Intersil

1001 Murphy Ranch Road
Milpitas, CA, 95035
USA

tele: 408-432-8888
toll-free: 1-888-INTERSIL
fax: 408-432-8888
http://www.intersil.com/

Advanced Power Technology from STMicroelectronics Enables Unique Portable e-Car Charger from Zaptec

Monday, May 30th, 2016

Industry-acclaimed silicon-carbide (SiC) power electronics from STMicroelectronics, a global semiconductor leader serving customers across the spectrum of electronics applications, has enabled the creation of ZapCharger Portable, the world’s smallest, smartest, and safest electric-car charging station from Zaptec, an innovative start-up company that has revolutionized the transformer industry.

The market’s first portable electric-car charger with a built-in electronic transformer, ZapCharger works with any electric car on any grid. The excellent power-conversion capabilities of ST’s SiC MOSFET[1] devices enabled Zaptec engineers to design a portable, yet powerful piece of equipment. Ten times smaller and lighter than products with comparable performance, the 3kg, 45 x 10 x 10cm charger delivers an energy efficiency of 97%.

Uncompromising on safety, the water- and weather-proof ZapCharger is fully galvanically isolated and continuously monitors the grid it is connected to. It dynamically adjusts the amount of power it delivers and can shut down immediately if it detects a fault, to protect the car. The charger offers GPRS connectivity and operates over an extended temperature range from -40°C to +55°C.

Inside the ZapCharger, 32 high-voltage SiC Power MOSFETs from ST deliver efficient power conversion with minimum losses. Compared with traditional (silicon) solutions, these components can sustain much higher voltages, currents, and temperatures, and their power-conversion circuits operate faster, enabling smaller, lighter designs, higher system efficiency, and reduced cooling requirements.

“The key for us was to find a power technology with a very high efficiency so we could reduce the overall size of the charger without compromising performance. ST’s silicon-carbide offering was the perfect match,” said Jonas Helmikstøl, COO, Zaptec. “The support of ST as a strong and reliable partner helped us transform our invention into a product that dramatically changes the user experience and, by allowing consumers to take their chargers anywhere, eliminates ‘range anxiety’ and can accelerate the adoption of electric vehicles worldwide.”

“Leveraging the exceptional efficiency of ST SiC Power MOSFETs, ingenious solutions like ZapCharger that can enable drivers to safely charge their vehicles anywhere are set to catalyze the growth of the e-car market and the smart-energy ecosystem as a whole,” said Philip Lolies, EMEA Vice President, Marketing & Application, STMicroelectronics. “Zaptec’s decision to rely on our advanced power technology confirms ST’s industry leadership and enabling role in the global trend towards Greener and Smarter Living.”

In addition to electric-car charging, Zaptec’s patented, award-winning electronic-transformer technology[2] targets new applications in the industrial, marine, and space sectors.

After successful field tests, ZapCharger is starting pilot production now, with volume ramp-up scheduled at the end of Q3 2016.

For further information on ST’s portfolio of SiC MOSFETs please refer to www.st.com/sicmos

About STMicroelectronics

ST is a global semiconductor leader delivering intelligent and energy-efficient products and solutions that power the electronics at the heart of everyday life. ST’s products are found everywhere today, and together with our customers, we are enabling smarter driving and smarter factories, cities and homes, along with the next generation of mobile and Internet of Things devices.

By getting more from technology to get more from life, ST stands for life.augmented.

In 2015, the Company’s net revenues were $6.90 billion, serving more than 100,000 customers worldwide. Further information can be found at www.st.com.

About Zaptec

Zaptec is a Norwegian company that was established in Stavanger (2012), in order to commercialise 10 years of research and development in the field of super-compact power electronics. Zaptec’s patented electronic transformer is the first of its kind in the world, and a key enabling technology for the Internet of Energy.

Today Zaptec develops products with Norwegian and international partners in the car, energy, and space industry.

[1] The metal–oxide–semiconductor field-effect transistor (MOSFET) is a type of transistor used for amplifying or switching electronic signals.
[2] 2015 Norwegian Tech Award

Contact Information

STMicroelectronics

39, Chemin du Champ des Filles
C. P. 21 CH 1228 Plan-Les-Ouates
Geneva,
Switzerland

tele: +41 22 929 29 29
fax: +41 22 929 29 88

World’s First H.264 Video CODECs for MOST & ADAS

Thursday, May 26th, 2016

[MNV234] Microchip introduces world’s first H.264 video I/O companions optimized for MOST® high-speed automotive infotainment and ADAS Networks.

Contact Information

Microchip Technology Inc.

2355 W. Chandler Blvd.
Chandler, AZ, 85224
USA

tele: 480.792.7200
toll-free: 888.MCU.MCHP
fax: 480.792.7277
here2help@microchip.com
www.microchip.com

The Car, Scene Inside and Out: Q & A with FotoNation

Tuesday, May 24th, 2016

Looking at what’s moving autonomous vehicles closer to reality, who’s driving the car—and what’s in the back seat.

Sumat Mehra, senior vice president of marketing and business development at FotoNation, spoke recently with EECatalog about the news that FotoNation and Kyocera have partnered to develop vision solutions for automotive applications.

EECatalog: What are some of the technologies experiencing improvement as the autonomous and semi-autonomous vehicle market develops?

Sumat Mehra, FotoNation: Advanced camera systems, RADAR, LiDAR, and other types of sensors that have been made available for automotive applications have definitely improved dramatically. Image processing, object recognition, scene understanding, and machine learning in general with convolutional neural networks have also seen huge enhancements and impact. Other areas where the autonomous driving initiative is spurring advances include sensor fusion and the car-to-car communication infrastructure.

Figure 1: Sumat Mehra, senior vice president of marketing and business development at FotoNation, noted that the company has already been working on metrics applicable to the computer vision related areas of object detection and scene understanding.

EECatalog: What are three key things embedded designers working on automotive solutions for semi-autonomous and autonomous driving should anticipate?

Mehra, FotoNation: One, advances in machine learning. Two, heterogeneous computing: various general-purpose processors—CPUs, GPUs, DSPs—are all being made available for programming. Hardware developers as well as software engineers will use not only heterogeneous computing, but also other dedicated hardware accelerator blocks, such as our Image Processing Unit (IPU). The IPU enables very high performance at very low latency and with very low energy use. For example, the IPU makes it possible to run 4K video and process it for stabilization at extremely low power—18 milliwatts for 4K 60-frames-per-second video.

Third, sensors have come down dramatically in price and offer improved signal-to-noise ratios, resolution, and distance-to-subject performance.

We’re also seeing improved middleware, APIs and SDKs. Plus a framework to provide reliable and portable tool kits to build solutions around, much like what happened in the gaming industry.

EECatalog: Will looking to the gaming industry help avoid some re-invention of the wheel?

Mehra, FotoNation: Certainly. The need for compute power is something gaming and the automotive industry have in common, and we’ve seen companies with a gaming pedigree making efforts [in the automotive sector]. And, thanks to the mobile industry, sensors have come down in price to the point where they can be used for much more than having a large sensor with very large optics in one’s pocket. Sensors can now be embedded into bumpers, into side view mirrors, into the front and back ends of cars to enable much more power and vision functionality.

EECatalog: Will the efforts to enable self-driving cars be similar to the space program in that some of the research and development will result in solutions for nonautomotive applications?

Mehra, FotoNation: Yes. For example, collision avoidance and scene understanding are two of the applications that are driving machine learning and advances toward automotive self-driving. These are problems similar to those that robotics and drone applications face. Drones need to avoid trees, power lines, buildings, etc. while in flight, and robots in motion need to be aware of their surroundings and avoid collisions.

And other areas, including factory automation, home automation, and surveillance, will gain from advances taking place in automotive. Medical robots that can help with mobility [are another] example of a market that will benefit from the forward strides of the automotive sector.

EECatalog: How has FotoNation’s experience added to the capabilities the company has today?

Mehra, FotoNation: FotoNation has evolved dramatically. We have been in existence for more than 15 years, and when we started, it was still the era of film cameras. The first problem we started tackling was, “How do you transfer pictures from a device onto a computer or vice versa?”

So we worked in the field of picture transfer protocols, of taking pictures on and off devices. Then, when we came into the digital still camera space through this avenue, we realized there were other imaging problems that needed to be addressed.

We solved problems such as red eye removal through computational imaging. Understanding the pixels, understanding the images, understanding what’s being looked at—and being able to correct for it—relates to advances in facial detection, because the most important thing you want to understand in a scene is a person.

Then, as cameras became available for automotive applications, new problems arose. We drew from all that we had been learning through our experience with the entire gamut of image processing. The metrics FotoNation has been working on in different areas have become applicable to such automotive challenges as object detection and scene understanding.

As pioneers in imaging, we don’t deliver just standard software or an algorithm to the software for any one type of standard processor. We offer a hybrid architecture, where our IPU enables hardware acceleration that does specialized computer vision tasks like object recognition or video image stabilization at much higher performance and much lower power than a CPU. We deliver our IPU as a netlist that goes into a system on chip (SoC). Hybrid HW/SW architectures are important for applications such as automotive where high performance and low power are both required. Performance is required for low latency, to make decisions as fast as possible; you cannot wait for a car moving at 60 miles per hour to take extra frames (at 16 to 33 milliseconds per frame) to decide whether it is going to hit something. Low power is required to avoid excessive thermal dissipation (heat), which is a serious problem for electronics, especially image sensors.

EECatalog: When it comes down to choosing FotoNation over another company with which to work, what reasons for selecting FotoNation are given to potential customers?

Mehra, FotoNation: One reason is experience. Our team has more than 1,000 man-years of experience in embedded imaging. A lot of other companies came from the field of image processing for computers or desktops and then moved into embedded. We have lived and breathed embedded imaging, and the algorithms and solutions that we develop reflect that.

The scope of imaging that we cover ranges all the way from photons to pixels. Experience with the entire imaging subsystem is a key strength:  We understand how the optics, color filters, sensors, processors, software and hardware work independently and in conjunction with each other.

Another reason is that a high proportion of our engineers are PhDs who look at various ways of solving problems, refusing to be pigeonholed into addressing challenges in a single way. We have a strong legacy of technology innovation, demonstrated through our portfolio of close to 700 granted and applied-for patents.

EECatalog: Had the press release about FotoNation’s working with Kyocera Corporation to develop vision solutions for automotive been longer, what additional information would you convey?

Mehra, FotoNation: More on our IPU, and how OEMs in the automotive area would definitely gain from the architectural advantages it delivers. The IPU is our greatest differentiator, and we would like our audience to understand more about it.

Another thing we would have liked to include is more on the importance of driver identification and biometrics. FotoNation acquired an iris-biometrics company, Smart Sensors, a year ago, and we will be [applying] those capabilities toward driver monitoring systems. The first step to autonomous vehicles is semi-autonomous vehicles, where drivers are sitting behind the steering wheel but not necessarily driving the car. And for that first step you need to know who the driver is. What the biometrics bring you is that capability of understanding the driver.

Other metrics include being able to look at the driver to tell whether he is drowsy, paying attention or looking somewhere else—decision making becomes easier when [the vehicle] knows what is going on inside the car, not just outside the car—that is an area where FotoNation is very strong.

EECatalog: In a situation where the car is being shared, a vehicle might have to recognize, for example, “Is this one of the 10 drivers authorized to share the car?”

Mehra, FotoNation: Absolutely, and the car’s behavior should be able to factor in whether it is a teenager or an adult getting behind the wheel; then risk assessments can begin to happen. All of this additional driver information can assist in better driving, and ultimately increased driver and pedestrian safety.

And we see [what’s ahead as] not just driver monitoring, but in-cabin monitoring through a 360-degree camera that is sitting inside the cockpit and able to see what is going on: Is there a dog in the back seat, which is about to jump into the front? Is there a child who is getting irate? All of those things can aid the whole experience and reduce the possibility of accidents.

Questions to Ask on the Journey to Autonomous Vehicles

Monday, May 23rd, 2016

In or out of Earth’s orbit, the journey will show similarities to the space race.

What comes first, connected vehicles or smart cities?

Smart cities will come first and play a critical role in the adoption of connected vehicles. The federal government is also investing money in these programs in many ways. USDOT has selected seven finalist cities (San Francisco, Portland, Austin, Denver, Kansas City, Columbus and Pittsburgh) through its Smart City Challenge program.

Many of the remaining cities/states are finding alternate sources to fund their smart city deployments.

When we look at a co-operative safety initiative such as V2X (Vehicle-to-Everything), we see that it requires a majority of vehicles to support the same technology. Proliferation of V2X is going to take a few years to reach critical mass. This is why connected vehicles equipped with V2X are looking at smart city infrastructure as a way to demonstrate the use-case scenarios for the “Day One Applications.”

What are the chief pillars of the autonomous vehicle (AV) market?

The three core pillars of the autonomous vehicle market will be:

  • Number Crunching Systems
    • Development of multicore processors has helped fuel the AI engines that are needed for the autonomous vehicle market. More and more companies are using GPUs and multicore processors for their complex algorithms. It is estimated that these systems process 1GB of data every second.
  • ADAS Sensors
    • The cost/performance ratio for ADAS sensors like lidars, radars and cameras has improved significantly over the past couple of years.  All of this will reduce the total cost of the core systems needed for autonomous vehicle systems, making the technology more mainstream.
  • Connectivity and Security
    • Connectivity will play a key role for such systems. Autonomous vehicles depend heavily on information from external sources like the cloud, other vehicles and infrastructure. These systems need to validate their sources and build a secure firewall to protect their information.

Total BOM for a complete system in the next five years will be around $5,000, and the total cost of the system to consumers will add $20,000 or less to the vehicle’s sticker price. For a relatively small increase, consumers will get numerous benefits, ranging from enhanced safety to stress-free driving. This is one of the reasons companies like Cruise were acquired at such huge valuations.

What three key events should embedded designers working on automotive solutions for semi-autonomous and autonomous driving anticipate?

  • Sensor Fusion
    • Standards will need to be developed to allow free integration of ADAS sensors, connecting all the various ADAS applications and supporting data sharing between these sensors.
  • Advances in Parallel Computing Inside Automotive Electronics
    • ECU systems inside cars will eventually be replaced with complex parallel-computing ADAS platforms. Artificial intelligence engines inside these platforms need to take advantage of parallel computing when processing gigabytes of data per second. Real-time systems that can carry out the decision-making process in a split second will make all the difference.
  • Redundancy
    • Finally, the industry needs to create a redundant fault tolerant architecture. When talking about autonomous vehicles, the systems that enable autonomous driving need to have redundancy to ensure the system is always operating as designed.

How will the push to create self-driving cars (similar to what happened in the space race) result in useful technology for other areas?

The drone/surveillance video market will benefit from the push to create self-driving technology. Drones have similar characteristics to self-driving cars, just on a much smaller scale. The complexities around drone airspace management will definitely need some industry rules and support. This market will benefit from the advances and rule-making experience leveraged from self-driving cars.

What was the role of USDOT pilots and other research for enabling the autonomous vehicle market?

The role of the USDOT pilots has been predominantly focused on connected vehicles, and not much has happened yet with autonomous vehicles. Deploying connected-vehicle infrastructure will improve the robustness of the data vehicles receive, and that infrastructure will pave the way for autonomous vehicles. Roadside infrastructure will also play a role in monitoring rogue vehicles.

USDOT is also focusing on creating regulation and policies for autonomous vehicle deployments. Several test tracks around the United States (California, Michigan and Florida) have been funded by the USDOT. These proving grounds are set up with miles of paved roads that simulate an urban driving environment.

Many automakers have set 2020 as the goal for automated-driving technology in production models. Pilots and research by USDOT represent a huge reduction in risk for the automotive OEMs.

What else should embedded designers keep in mind when the topic is autonomous vehicles?

  • 100 Million Lines of Code
    • Connected vehicle technology is the single most complex system built by mankind: it takes about 100 million lines of code, more than a space shuttle, an operating system like the Linux kernel, or a smartphone. We recommend that embedded designers depend on well-tested, pre-defined middleware blocks to accelerate their design process.
  • FOTA and SOTA Updates
    • We also recommend that embedded designers build systems that depend heavily on firmware-over-the-air (FOTA) and software-over-the-air (SOTA) updates. We know that cars are going to follow the same trend as smartphones, which require frequent software updates. Tesla has set a great example of this process with its updates and has said that its vehicles will constantly improve over time.
  • Aftermarket Systems as a Way to Introduce New Capabilities
    • Finally, embedded designers need to look at aftermarket systems as a way to introduce semi-autonomous features and determine the feasibility and acceptance of these building blocks before they become part of the mainstream.

Ravi Puvvala is CEO of Savari. With 20+ years of experience in the telecommunications industry, including leading positions at Nokia and Qualcomm Atheros, Puvvala is the founder of Savari and a visionary of the future of mobility. He serves as an advisory member to transportation institutes and government bodies.

The Rise of Ethernet as FlexRay Changes Lanes

Friday, May 20th, 2016

There are five popular protocols for in-vehicle networking. Caroline Hayes examines the structure and merits of each.

Today’s vehicles use a range of technologies, systems and components to make each journey a safe, comfortable, and enjoyable experience. From infotainment systems to keep the driver informed and passengers entertained, to Advanced Driver Assistance Systems (ADAS) to keep road users safe, networked systems communicate within the vehicle. Vehicle systems such as engine control, anti-lock braking and battery management, air bags and immobilizers are integrated into the vehicle’s systems. In the driver cockpit, there are instrument clusters and drowsy-driver detection systems, as well as ADAS back-up cameras, automatic parking and automatic braking systems. For convenience, drivers are used to keyless entry, mirror and window control as well as interior lighting, all controlled via an in-vehicle network. All rely on a connected car and in-vehicle communication networks.

There are five in-vehicle network standards in use today: Local Interconnect Network (LIN), Controller Area Network (CAN), Ethernet, Media Oriented Systems Transport (MOST) and FlexRay.

Evolving Standards
LIN targets control functions within a vehicle. It is a simple, standard UART interface that allows sensors and actuators to be implemented, and lighting and cooling fans to be easily replaced. The single-wire serial communications system operates at 19.2-kbit/s to control intelligent sensors and switches in windows, for example.
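A concrete taste of that simplicity: the LIN 2.x “enhanced” checksum can be computed in a few lines of C. The routine below shows the standard algorithm (sum the protected identifier and data bytes, fold the carry back in, invert); it is an illustrative sketch, with names of our own choosing.

#include <stdint.h>
#include <stddef.h>

/* LIN 2.x enhanced checksum: sum the protected identifier (PID) and the
   2-8 data bytes, folding any carry back into the low byte, then invert.
   (The LIN 1.x "classic" checksum is the same but omits the PID.) */
uint8_t lin_enhanced_checksum(uint8_t pid, const uint8_t *data, size_t len)
{
    uint16_t sum = pid;
    for (size_t i = 0; i < len; i++) {
        sum += data[i];
        if (sum > 0xFFu)   /* carry occurred: add it back in */
            sum -= 0xFFu;
    }
    return (uint8_t)~sum;  /* receiver's sum over data + checksum yields 0xFF */
}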

Figure 1: Microchip supports all automotive network protocols with devices, development tools and ecosystem for vehicle networking.

This data transfer rate is slower than CAN’s 1-Mbit/s (maximum) operation. CAN is used for high-performance, embedded applications. An evolution of CAN is CAN FD (Flexible Data rate), initiated in 2011 to meet increasing bandwidth needs. It operates at 2-Mbit/s, increasing to 5-Mbit/s when used point-to-point for software downloads. CAN’s higher data rate calls for a two-wire, twisted-pair cable structure to accommodate a differential signal.

As well as boosting transmission rates, CAN FD extended the data field from 8 bytes to 64 bytes. Increasing the bit rate is possible when only one node is transmitting (during the data phase, after arbitration), as nodes do not then need to be synchronized.
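On Linux, that 64-byte data field is directly visible through the SocketCAN API. A minimal sketch, assuming a CAN FD-capable interface already configured as can0 and omitting error handling for brevity:

#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main(void)
{
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    int enable_fd = 1;
    /* Opt in to CAN FD frames on this raw socket. */
    setsockopt(s, SOL_CAN_RAW, CAN_RAW_FD_FRAMES, &enable_fd, sizeof(enable_fd));

    struct ifreq ifr;
    strcpy(ifr.ifr_name, "can0");
    ioctl(s, SIOCGIFINDEX, &ifr);

    struct sockaddr_can addr = { 0 };
    addr.can_family  = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    struct canfd_frame frame = { 0 };
    frame.can_id = 0x123;
    frame.len    = 64;        /* classic CAN tops out at 8 bytes */
    frame.flags  = CANFD_BRS; /* switch to the faster bit rate in the data phase */
    memset(frame.data, 0xAB, frame.len);

    write(s, &frame, sizeof(frame)); /* a canfd_frame-sized write sends an FD frame */
    close(s);
    return 0;
}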

LIN debuted at the same time as vehicles saw more sensors and actuators arrive. At this juncture, point-to-point wiring became too heavy, and CAN became too expensive. Summarizing LIN, CAN and CAN FD, Johann Stelzer, Senior Marketing Manager for Automotive Information Systems (AIS), Automotive Product Group, Microchip, says: “CAN and CAN FD have a broadcast quality. Any node can be the master, whereas LIN uses master-slave communication.”

K2L’s Matthias Karcher: CAN FD’s higher payload can add security to the network.

The higher bandwidth of CAN FD allows for security features to be added. “The larger payload can be used to transfer keys with multiple bytes as well as open up secure communications between two devices,” says Matthias Karcher, Senior Manager AIS Marketing Group, at K2L. The Microchip subsidiary provides development tools for automotive networks.

CAN FD’s ability to use an existing wiring harness to transfer more data from one electronic control unit to another, using a backbone or a diagnostic interface, is compelling, says Stelzer. It enables faster download of driver assistance or infotainment control software, for example, making it attractive to carmakers.

Microchip’s Johann Stelzer: Ethernet will evolve from diagnostics to become a communications backbone.

Ethernet as Communications Backbone

Ethernet uses packet data, but at the moment its use is restricted to diagnostics and software downloads. It acts as a bridge network, and while it is flexible, it is also complex, laments Stelzer. As in-vehicle networks grow, high-speed switching increases, adding to the complexity: it requires a high-power microcontroller or microprocessor, as well as validation and debugging, which can add to development time.

In the future, asserts Stelzer, Ethernet will be used as the communications backbone between domains, such as safety, power and control, in the vehicle. When connected via a backbone it will be able to exchange software and data quickly, at up to 100-Mbit/s, or 100 times faster than CAN and 50 times faster than CAN FD.

At present, IEEE 802.3 Ethernet in vehicles operates at 100BaseTX, the predominant Fast Ethernet speed. The next stage is 100BaseT1: 100-Mbit/s Ethernet over a single twisted wire pair. The implementation of Ethernet 100BaseT1 will be big, says Stelzer. “This represents a big jump in bandwidth,” he points out, “with less utilization overhead.” IEEE 802.3bw, finalized in 2015, delivers 100-Mbit/s over a single twisted pair wire to reduce wiring, promoting the trend of deploying Ethernet in vehicles.
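To put those rates in perspective, here is a back-of-the-envelope sketch: assuming raw line rates, ignoring protocol overhead, and picking a hypothetical 64MB ECU software image, the download times over the networks discussed above compare as follows.

#include <stdio.h>

int main(void)
{
    const double image_bits = 64.0 * 1024 * 1024 * 8; /* hypothetical 64 MB image */
    const struct { const char *name; double mbit_s; } nets[] = {
        { "CAN (1 Mbit/s)",           1.0 },
        { "CAN FD point-to-point",    5.0 },
        { "100BaseT1 Ethernet",     100.0 },
    };
    for (int i = 0; i < 3; i++)
        printf("%-24s %8.1f s\n",
               nets[i].name, image_bits / (nets[i].mbit_s * 1e6));
    /* Prints roughly 536.9 s, 107.4 s and 5.4 s respectively. */
    return 0;
}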

Figure 2: K2L offers the OptoLyzer MOCCA FD, a multi-bus user interface for CAN FD, CAN and LIN development.

Increased deployment will come about when the development tools are in place. For each point-to-point node in the network, developers will have to integrate a tool. “[The industry] will need good solutions,” he says, “to avoid overhead.” K2L offers evaluation boards, app notes, software, Integrated Design Environment (IDE) support and development tools for standard Ethernet in vehicles. The company will announce the availability of support for Standard Ethernet T1 next year.

MOST for Media

MOST relates to high-speed networking and is predominantly used in infotainment systems in vehicles. It addresses all seven layers of the Open Systems Interconnection (OSI) model for data communications: not just the physical and data link layers, but also system services and applications.

The network is typically a ring structure and can include up to 64 devices. Total available bandwidth for synchronous data transmission and asynchronous data transmission (packet data) is around 23-MBaud.

MOST is flexible, with devices able to be added or removed. Any node can become the timing master in the network, controlling the timing of transmission, although adding parameters can add to complexity. One solution, says Karcher, is for a customer to use a Linux OS and a Linux driver to handle the data distribution and encapsulate MOST for the application layer. This allows the customer to concentrate on designing differentiation into the product. K2L provides software drivers and software libraries for MOST, as well as reference designs for analog front-ends, demonstration kits and evaluation boards. The level of hardware and software support, says Karcher, allows developers to focus on the application. Hardware can connect to MOST and also to CAN and LIN, he continues, adding that tools can connect and safeguard both system and application, reducing complexity and time-to-market.

The FlexRay Consortium, which was disbanded in 2009, developed FlexRay for on-board computing. There have not been any new developments in FlexRay, notes Karcher, who believes its use is limited to safety applications. Although K2L supplies tools to test and simulate FlexRay, “in the long run, it is hard to see a future for FlexRay,” says Karcher, citing the fact that there are no new designs or applications.


Caroline Hayes has been a journalist covering the electronics sector for over 20 years. She has worked on many titles, most recently the pan-European magazine EPN.

Vehicle-to-Everything (V2X) Technology Will Be a Literal Life Saver – But What Is It?

Thursday, May 19th, 2016

Increased safety and smarter energy are among the expected results as V2X gets underway: Here’s a look at its progress.

A massive consumer-focused industry like automobiles is up close and personal with people—so up close that safety and driver protection from harm are top of mind for manufacturers. Although human error is the prevailing cause of collisions, creators of technologies used in vehicles have an obvious vested interest in helping lower the distressing statistics. After all, pedestrian deaths rose by 3.1 percent in 2014, according to the National Highway Traffic Safety Administration’s Fatality Analysis Reporting System (FARS). In that year, 726 cyclists and 4,884 pedestrians were killed in motor vehicle crashes. And this damage to innocent bystanders doesn’t include the growing death rate of drivers and their passengers.

Figure 1: Benefits to driver and pedestrian safety, as well as increased power efficiency, are the aims of V2X. (Courtesy Movimento)

Distracted driving accounted for 10 percent of all crash fatalities, killing 3,179 people in 2014, while drowsy driving accounted for 2.6 percent, killing 846 people that year. The road carnage is hardly limited to the United States. The International Organization for Road Accident Prevention noted a few years ago that 1.3 million road deaths occur worldwide annually and more than 50 million people are seriously injured. That works out to some 3,500 deaths a day, or 150 every hour, with nearly three people killed on the road every minute.

A Perplexing Stew

Thus it’s about time for increasingly sophisticated technology to step in and help protect distracted drivers from themselves. The centerpiece of what’s coming is so-called Vehicle to Everything (V2X) technology. Once it’s deployed, the advantages of V2X are extensive, alerting drivers to road hazards, the approach of emergency vehicles, pedestrians or cyclists, changing lights, traffic jams and more. In fact, the advantages extend even beyond the freeways and into residential streets where V2X technology helps improve power consumption and safety.

About the only problem with V2X is that it’s emerging as a perplexing stew of acronyms (V2V, V2I, V2D, V2H, V2G, V2P) that require some explanation—and the technology, while important, isn’t universally here yet. But the significance of this technology is undeniable. And getting proficient in understanding V2X is valuable in tracking future vehicle features that will link cars to the world around them and make driving safer in the process.

Here’s an overview of the elements of V2X and predictions for when it will hit the roads, from the soonest to appear to the last.

Vehicle to Vehicle (V2V)

Vehicle to Vehicle (V2V) communication is a system that enables cars to talk to each other via Dedicated Short-Range Communication (DSRC), with the primary goal of wirelessly exchanging speed and position data so drivers can be warned to take immediate action to avoid a collision. Also termed car-to-car communication, the technology makes driving much safer by alerting one vehicle to the presence of others. An embedded or aftermarket V2V module in the car allows vehicles to broadcast their position, speed, steering wheel position, brake status and other related data by DSRC to other vehicles in close proximity.
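What gets broadcast can be pictured as a small, frequently repeated status record. The struct below is a deliberately simplified, hypothetical layout for illustration; production V2V systems use the SAE J2735 Basic Safety Message, which is ASN.1-encoded, digitally signed and far richer than this sketch.

#include <stdint.h>

/* Hypothetical V2V status record, broadcast several times per second. */
typedef struct {
    uint32_t timestamp_ms;  /* time of the position fix */
    int32_t  latitude_e7;   /* degrees x 1e7 */
    int32_t  longitude_e7;  /* degrees x 1e7 */
    uint16_t speed_cm_s;    /* vehicle speed, cm/s */
    uint16_t heading_cdeg;  /* heading, 0.01-degree units */
    int16_t  steering_cdeg; /* steering wheel angle, 0.01-degree units */
    uint8_t  brake_status;  /* bit flags: brakes applied, ABS active, ... */
} v2v_status_t;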

Clearly, V2V is expected to reduce vehicle collisions and crashes. It’s likely that this technology will enable multiple levels of autonomy, delivering assisted driver services like collision warnings but with the ultimate responsibility still belonging to the driver. V2V relies on DSRC, which is still in its infancy because the need remains to address security, mutual authentication and dynamic vehicle issues.

V2V is already making its way into new cars. For example, Toyota developed a communicating radar cruise control that uses V2V to make it easier for preceding and following vehicles to keep a safe distance apart. This is an element in a new “intelligent transportation system” that the company said was initially available at the end of 2015 on a few models in Japan. Meanwhile, 16 European vehicle manufacturers and related vendors launched the Car 2 Car Communication Consortium, which intends to speed time to market for V2V and V2I solutions and to ensure that products are interoperable. Plans call for “earliest possible” deployment. 

One key issue with V2V is that to be most effective, it should reside in all cars on the road. Nevertheless, this technology has to start somewhere, so Mercedes-Benz announced that its 2017 Mercedes E-Class would be equipped with V2V, one of the first such solutions to go into production.

Vehicle to Device (V2D)

Vehicle to Device (V2D) communication is a system that links cars to many external receiving devices but will be particularly heralded by two-wheeled commuters.  It enables cars to communicate via DSRC with the V2D device on the cycle, sending an alert of traffic ahead. Given the fact that biking to work is the fastest-growing mode of transportation, increasing 60 percent in the past decade, V2D can potentially help prevent accidents. 

Although bicycle commuting is healthier than sitting in a car, issues like dark streets in the evening and heavy traffic flow make this mode problematic when it comes to accident potential.  Although less healthful, traveling by motorcycle and other two-wheel devices also has an element of risk because larger vehicles on the road tend to dominate.

V2D is tied to V2V because they both depend on DSRC, so V2D should begin to pop up after V2V rolls off the assembly line in 2017 and later. It will likely appear as aftermarket products for bicycles, motorcycles and other such vehicles starting in 2018. Spurring the creation of V2D products have been quite a few crowd-funded efforts, as well as government grants like the U.S. Department of Transportation’s (DOT) Smart City Challenge, which will pledge up to $40 million to the winning municipality for creating the nation’s most tech-savvy transportation network. Finalists (Denver, Austin, Columbus, Kansas City, Pittsburgh, Portland, San Francisco) have already been chosen and are busy producing proposals.

DOT has other initiatives aimed at encouraging the creation of various V2X technologies. V2D is one of the application areas in DOT’s IntelliDrive program, a joint public/private effort to enhance safety and provide traffic management and traveler information. The goal is the development of applications in which warnings are transmitted to various devices such as cell phones or traffic control devices.

Vehicle to Pedestrian (V2P)

Vehicle to Pedestrian (V2P) communication is a system for communication between cars and pedestrians that will particularly benefit the elderly, school children and the physically challenged. V2P establishes a communications mechanism between pedestrians’ smartphones and vehicles and acts as an advisory to avoid imminent collisions.

The concept is simple: V2P will reduce road accidents by alerting pedestrians crossing the road of approaching vehicles and vice versa. It’s expected to become a smartphone feature beginning in 2018 but, like V2D, requires the presence of DSRC capabilities in vehicles.  Ultimately, the DSRC version of V2P will be replaced by a higher-performance LTE version starting in 2020.

While there aren’t any V2P solutions currently available, this area is a hotbed of development, particularly when one includes the full gamut of possible technologies and multiple vehicle types such as public transit. Given the significant role that V2P can play in preventing harm to humans, the U.S. Department of Transportation maintains and updates a database of technologies in process. Of the 86 V2P technologies currently listed, none are yet commercially available, but a number are undergoing field tests.

A particularly fruitful approach to developing effective V2P products is a research partnership between telecom and automotive companies. For example, Honda R&D Americas and Qualcomm collaborated on a DSRC system that sends warnings to both a car’s heads-up display and a pedestrian’s device screen when there is a chance of colliding. Although the project won an award as an outstanding transportation system, there’s no word yet when this might appear commercially.

In another collaboration, Hitachi Automotive Systems teamed with Clarion, the Japan-based manufacturer of in-car infotainment products, navigation systems and more on a V2P solution that predicts pedestrian movements and rapidly calculates optimum speed patterns in real time. Undergoing field testing, this is another promising product to look for in the future.

Vehicle to Home (V2H)

Vehicle to Home (V2H) communication involves linkage between a vehicle and the owner’s domicile, sharing the task of providing energy. During emergencies or power outages, the vehicle’s battery can be used as a power source. Given the reality of severe weather and its effect on power supplies, this capability has been needed for a while, with disruptions in power after storms and other weather emergencies impacting many thousands of U.S. families annually.

V2H is a two-way street, with the vehicle powering the home and vice versa based on cost and demand for home energy. The car battery is used for energy storage, with charging taking place when energy is cheap or “green.”

During power outages, power from a vehicle’s battery can be used to run domestic appliances and power can be drawn from the vehicle when utility prices are high. In areas with frequent power outages, the battery can be used to buffer energy to avoid flickering, and it can be used as an emergency survival kit.
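The control logic behind this two-way flow can be sketched as a simple decision rule. Everything below is assumed for illustration: the thresholds, reserve levels and function names are hypothetical, and a real system would add forecasting, battery-health limits and utility signaling.

/* Toy V2H mode selector: charge when grid energy is cheap, power the
   home during outages or price peaks, and always keep a driving reserve. */
typedef enum { V2H_IDLE, V2H_CHARGE_FROM_GRID, V2H_POWER_THE_HOME } v2h_mode_t;

v2h_mode_t v2h_decide(double price_per_kwh, double cheap_threshold,
                      double peak_threshold, double state_of_charge,
                      int grid_up)
{
    if (!grid_up && state_of_charge > 0.30)        /* outage: battery as backup */
        return V2H_POWER_THE_HOME;
    if (price_per_kwh >= peak_threshold && state_of_charge > 0.50)
        return V2H_POWER_THE_HOME;                 /* shave the price peak */
    if (price_per_kwh <= cheap_threshold && state_of_charge < 0.95)
        return V2H_CHARGE_FROM_GRID;               /* store cheap or green energy */
    return V2H_IDLE;
}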

It’s expected that V2H will kick into higher gear in 2019, playing a significant role when plug-in hybrid electric vehicles (PHEVs) and electric vehicles (EVs) make up over 20% of the total new cars sold in the United States. But a few projects have been underway for a while, such as a Nissan V2H solution that was widely tested in Japan and launched in 2012 as the “Leaf to Home” V2H Power Supply System. Relying on an EV power station unit from Nichicon, this was one of the first backup power supply systems using an EV’s large-capacity battery.

Other Japanese car manufacturers have dabbled in these systems, including Mitsubishi and Toyota. Mitsubishi announced in 2014 that its Outlander PHEV vehicle could be used to power homes—only in Japan so far. There are other approaches to utilizing an EV’s battery for home use, such as some currently available devices that can not only charge a battery, but also supply the stored electricity to the home. One example is the SEVD-VI cable from Sumitomo Electric.

Vehicle to Grid (V2G)

Vehicle to Grid (V2G) communication is a system in which EVs communicate with the power grid to return electricity to the grid or throttle the vehicle’s charging rate. It will be an element in some EVs like plug-in models and is used as a power grid modulator to dynamically adjust energy demand.

A benefit of V2G is helping maintain the grid level and acting as a renewable power source alternative. This system could determine the best time to charge car batteries and enable energy flow in the opposite direction for shorter periods when the public grid is in need of power and the vehicle is not.

Given its key role in battery charging, this V2X technology is appearing soon—in affordable EVs like the Tesla Model 3, which can now be pre-ordered. Other players such as Faraday Future, NextEV, Apple, Uber and Lyft are all planning to launch EVs between 2017 and 2020. V2G is an extremely relevant area because it creates the obvious need for cities to start thinking and planning now about how they will support a large-scale EV society. Otherwise, energy utility companies will be in a panic situation and may resort to drastic measures such as rationing energy per household.

Figure 2: V2X technology will be part of the Tesla Model 3. [Photo: By Steve Jurvetson [CC BY 2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons]

Other activity in the V2G area includes a partnership between Pacific Gas and Electric Company (PG&E) and BMW to test the ability of EV batteries to provide services to the grid. The automaker created a large energy storage unit made from re-utilized lithium-ion batteries while enlisting 100 San Francisco Bay Area drivers of BMW i3 cars to take part in what’s called the ChargeForward program. A pilot study, this now-underway project is giving qualifying i3 drivers up to $1,540 in charging incentives.

Another intriguing effort involves Nissan and Spain’s largest electric utility, which collaborated on a mass-market V2G system that was initially demonstrated in Spain last year but is aimed at the European market.  Like the BMW/PG&E program, this also involves re-purposed EV batteries for stationary energy storage.  V2G is a very promising market pegged to surpass $190 million worldwide by 2022 according to industry analysts.

Vehicle to Infrastructure (V2I)

Vehicle to Infrastructure (V2I) communication will likely be the last V2X system to appear. It’s the wireless exchange of critical safety and operational data between vehicles and roadway infrastructure, like traffic lights. V2I alerts drivers of upcoming red lights and prevents traffic congestion. The system will streamline traffic and enable drivers to maneuver away from heavy traffic flow.

Despite the enormous impact this technology will have on driver safety, the degree of infrastructure investment required is so massive that it will take time to implement.  Some question whether DSRC-based V2I with its questionable return on investment will ever take place, but there is more hope for LTE-based V2I.

This approach might play a key role starting in 2020 and be rolling along by 2022. Nevertheless, there are promising V2I projects already happening in countries where it’s easier to conduct massive public initiatives, such as China. A field test being run on public roads in Taicang, Jiangsu Province, China, involves buses that receive road condition data and thus can avoid stopping at lights when safe. Tongji University and Denso Corporation developed this project. 

Another recent collaboration involves Siemens and Cohda Wireless to develop intelligent road signs and traffic lights in which critical safety and operational data is exchanged with equipped vehicles.  In the United States, DOT is highly involved in working with state and local transportation agencies along with researchers and private-sector stakeholders to develop and test V2I technologies through test beds and pilot deployments.

Communication is the next frontier of car technology, and this is the bedrock of all the V2X capabilities appearing in the future. And none too soon. According to the World Health Organization (WHO), the incidence of traffic fatalities will continue to expand across the globe as vehicles become more prevalent; WHO projects a 67 percent increase through 2020. Having smarter, safer cars and communications systems for the drivers, pedestrians and cyclists who can be impacted by these vehicles could turn this trend around. Add to that the aspects of flexible electricity storage and usage, and V2X becomes an even more promising technology.


Mahbubul Alam is CTO and CMO of Movimento Group, and a frequent author, speaker and multiple patent holder in the areas of the software-defined car and all things IoT. He was previously a strategist for Cisco’s Internet-of-Things (IoT) and Machine-to-Machine (M2M) platforms. Read more from Mahbubul at http://mahbubulalam.com/blog/.

STMicroelectronics and Autotalks Fuse Satellite Navigation with Vehicle-to-Vehicle and -Infrastructure Communication (V2X)

Thursday, May 19th, 2016

Combining GNSS with V2X ranging creates “V2X-Enhanced GNSS” to ensure security, accuracy, and reliability of positioning information in difficult urban environments

Lane-level accuracy in urban canyons, tunnels, and parking structures will enable the development of new applications, such as autonomous on-street and in-garage parking and available-spot identification

STMicroelectronics (NYSE: STM), a global semiconductor leader serving customers across the spectrum of electronics applications, and Israel-based Autotalks, a V2X-chipset market pioneer and leader in the first wave of V2X deployments, have announced their fusion of GNSS technology and V2X ranging. The new “V2X-Enhanced GNSS” ensures authenticated and secure vehicle localization for extreme accuracy and reliability of positioning information, especially in urban canyons, tunnels, and parking structures, where accurate absolute and relative positioning—to other vehicles and infrastructure—is critical in progress toward semi- and fully-autonomous vehicles.

Autotalks’ and ST’s development of V2X-Enhanced GNSS builds on the companies’ existing successes in co-developing a world-class V2X chipset that connects vehicles to other vehicles and infrastructure within wireless range for safety and mobility applications. The promise of efficient, coordinated, and safe driving of autonomous cars can result only from the accurate positioning that the fusion of GNSS with V2X technology achieves.

“Autotalks fully recognizes that autonomous driving requires equal measures of reliability, accuracy, and security and no driver would sacrifice any of these,” said Hagai Zyss, CEO of Autotalks.

“Our solutions have been architected from the beginning to enable automated driving and because we recognize positioning for autonomous vehicles as critical, Autotalks, with ST, continues to optimize accurate V2X positioning—and we believe that our customers understand the value and potential.”

V2X-Enhanced GNSS technology, when coupled with V2X-enabled infrastructure, can uniquely provide absolute positioning to vehicles to assure lane-level accuracy. This precision improves navigation in urban canyons and tunnels and is also being used to develop myriad new applications, such as autonomous on-street and in-garage parking and available-spot identification.

“To fully realize the safety, convenience, and other benefits of autonomous driving, we need confidence in the security, reliability, and accuracy of the communications between our vehicle and its surroundings to know precisely how close we are to things, whether—and in what direction—they are moving, and what they are telling us—such as when there are roadworks or an accident ahead,” said Antonio Radaelli, Director, Infotainment, Automotive Digital Division, STMicroelectronics. “Building upon our successful collaboration with Autotalks, we are combining ST’s state-of-the-art positioning technology and roadmap for high-precision Automotive GNSS supporting satellite signal authentication with Autotalks’ expertise in advanced signal-processing algorithms for ranging, to smoothly pave the road to secure, accurate, and reliable V2X-Enhanced GNSS.”

Field trials in an Asian country, monitored by a government agency, are being used to test this technology in 2016.

Technical notes for editors

V2X ranging between vehicles and roadside infrastructure provides an additional level of absolute accuracy beyond that offered by GNSS, which can vary significantly because of atmospheric signal interference, the number and angle of constellation satellites in view, multi-path reflection, antenna configuration, and other factors. US government data suggest that a high-quality Federal Aviation Administration (FAA) Standard Positioning Service (SPS) receiver provides better than 3.5 meters (11.5 feet) horizontal accuracy. Connecting GNSS to a secure V2X chipset and fusing the two technologies, GNSS and V2X ranging, offers a trusted positioning reference in which both the vehicle localization and the link between GNSS and the V2X chipset are authenticated and secure.
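The gain from fusing the two references can be sketched with a scalar variance-weighted update, the one-dimensional form of a Kalman correction. This illustrates the general principle only, not ST’s or Autotalks’ algorithm; the numbers use the accuracy figure quoted above plus an assumed 1-meter V2X ranging error.

#include <stdio.h>

/* Blend two position estimates, weighting each inversely to its error
   variance (scalar Kalman update). */
double fuse(double gnss_pos, double gnss_var, double v2x_pos, double v2x_var)
{
    double k = gnss_var / (gnss_var + v2x_var); /* trust V2X more when GNSS is noisy */
    return gnss_pos + k * (v2x_pos - gnss_pos);
}

int main(void)
{
    /* GNSS alone: ~3.5 m error; V2X ranging to a roadside unit: ~1 m (assumed). */
    double fused = fuse(100.0, 3.5 * 3.5, 102.0, 1.0 * 1.0);
    printf("fused position: %.2f m\n", fused); /* ~101.85, close to the V2X fix */
    return 0;
}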

About STMicroelectronics

ST is a global semiconductor leader delivering intelligent and energy-efficient products and solutions that power the electronics at the heart of everyday life. ST’s products are found everywhere today, and together with our customers, we are enabling smarter driving and smarter factories, cities and homes, along with the next generation of mobile and Internet of Things devices.

By getting more from technology to get more from life, ST stands for life.augmented.

In 2015, the Company’s net revenues were $6.90 billion, serving more than 100,000 customers worldwide. Further information can be found at www.st.com.

About Autotalks Ltd.

Autotalks enables the vehicle-to-vehicle and vehicle-to-infrastructure communication revolution by providing automotive qualified VLSI solutions, containing the entire ECU functionality. The unique technology of Autotalks addresses V2X challenges: communication reliability, communication security, positioning accuracy and vehicle installation while maintaining flexibility for V2X system cost optimization. Further information can be found at www.auto-talks.com

Contact Information

STMicroelectronics

39, Chemin du Champ des Filles
C. P. 21 CH 1228 Plan-Les-Ouates
Geneva,
Switzerland

tele: +41 22 929 29 29
fax: +41 22 929 29 88

The Road to Full Autonomous Driving: Mobileye and STMicroelectronics to Develop EyeQ®5 System-on-Chip, Targeting Sensor Fusion Central Computer for Autonomous Vehicles

Tuesday, May 17th, 2016

5th-generation System-on-Chip, scheduled to sample in H1 2018, builds on long-standing cooperation between Mobileye and ST and market success of EyeQ technology, available now or in the near future on vehicles from 25 car manufacturers

Mobileye (NYSE:MBLY) and STMicroelectronics (NYSE:STM) today announced that the two companies are co-developing the next (5th) generation of Mobileye’s SoC, the EyeQ®5, to act as the central computer performing sensor fusion for Fully Autonomous Driving (FAD) vehicles starting in 2020.

To meet power-consumption and performance targets, the EyeQ5 will be designed in an advanced FinFET technology node at 10nm or below and will feature eight multithreaded CPU cores coupled with eighteen cores of Mobileye's next-generation, innovative, and well-proven vision processors. Taken together, these enhancements will increase performance 8x over the current 4th-generation EyeQ4. The EyeQ5 will produce more than 12 Tera operations per second while keeping power consumption below 5W, maintaining passive cooling at extraordinary performance. Engineering samples of EyeQ5 are expected to be available by the first half of 2018.
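As a quick arithmetic check on those figures (the EyeQ4 number below is only what the stated 8x multiple implies, not a published specification):

    # Back-of-envelope check on the quoted EyeQ5 figures.
    eyeq5_tops = 12.0   # "more than 12 Tera operations per second"
    eyeq5_watts = 5.0   # "power consumption below 5W"
    speedup = 8.0       # "8x over the current 4th-generation EyeQ4"

    print(f"EyeQ5 efficiency: > {eyeq5_tops / eyeq5_watts:.1f} TOPS/W")   # > 2.4 TOPS/W
    print(f"Implied EyeQ4 throughput: ~{eyeq5_tops / speedup:.1f} TOPS")  # ~1.5 TOPS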

The EyeQ5 continues Mobileye’s long-standing cooperation with STMicroelectronics. Leveraging its substantial experience in automotive-grade designs, ST will support state-of-the-art physical implementation, specific memory and high-speed interfaces, and system-in-package design to ensure the EyeQ5 meets the full qualification process aligned with the highest automotive standards. ST will also contribute to the overall safety- and security-related architecture of the product.

“EyeQ5 is designed to serve as the central processor for future fully-autonomous driving, both for the sheer computing density, which can handle around 20 high-resolution sensors, and for increased functional safety,” said Prof. Amnon Shashua, cofounder, CTO and Chairman of Mobileye. “The EyeQ5 continues the legacy Mobileye began in 2004 with EyeQ1, in which we leveraged our deep understanding of computer vision processing to develop highly optimized architectures to support extremely intensive computations at power levels below 5W to allow passive cooling in an automotive environment.”

“Each generation of the EyeQ technology has proven its value to drivers, and ST has proven its value to Mobileye as a manufacturing, design, and R&D partner since beginning our cooperation on the EyeQ1,” said Marco Monti, Executive Vice President and General Manager, Automotive and Discrete Group, STMicroelectronics. “With our joint commitment to the 5th generation of the industry’s leading Advanced Driver Assistance System (ADAS) technology, ST will continue to provide a safer, more convenient smart driving experience.”

Technical Details

EyeQ5’s proprietary accelerator cores are optimized for a wide variety of computer-vision, signal-processing, and machine-learning tasks, including deep neural networks. EyeQ5 features heterogeneous, fully programmable accelerators, with each of the four accelerator types in the chip optimized for its own family of algorithms. This diversity of accelerator architectures enables applications to save both computational time and energy by using the most suitable core for every task. This optimized assignment ensures the EyeQ5 provides “super-computer” capabilities within a low-power envelope to enable price-efficient passive cooling. Mobileye’s investment in several programmable domain-specific accelerator families is enabled by its focus on the ADAS and autonomous-driving markets.
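The assignment idea can be illustrated with a minimal sketch in Python; the accelerator names, task families, and mapping below are placeholders, since the press release does not name the four core types:

    # Illustrative sketch of dispatching each workload family to its
    # best-fit accelerator type (names are hypothetical, not Mobileye's).
    from enum import Enum, auto

    class Accel(Enum):
        CPU = auto()          # general-purpose multithreaded cores
        VECTOR_DSP = auto()   # dense signal processing
        VISION_CORE = auto()  # classic computer-vision kernels
        DNN_CORE = auto()     # deep-neural-network inference

    BEST_FIT = {
        "control_logic": Accel.CPU,
        "radar_fft": Accel.VECTOR_DSP,
        "optical_flow": Accel.VISION_CORE,
        "object_detection_cnn": Accel.DNN_CORE,
    }

    def dispatch(task: str) -> Accel:
        # Unknown tasks fall back to the CPU, forgoing the time and
        # energy savings of the specialized cores.
        return BEST_FIT.get(task, Accel.CPU)

    for t in ("radar_fft", "object_detection_cnn", "diagnostics"):
        print(t, "->", dispatch(t).name)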

Autonomous driving requires an unprecedented level of focus on functional safety. EyeQ5 is designed for systems that meet the highest grade of safety in automotive applications (ASIL B(D), according to the ISO 26262 standard).

Mobileye has built the EyeQ5’s security defenses around an integrated Hardware Security Module (HSM). This enables system integrators to support over-the-air software updates, secure in-vehicle communication, and other security-critical features. The root of trust is established through a secure boot from an encrypted storage device.
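The chain-of-trust pattern can be sketched as follows. This is a simplified stand-in: it verifies SHA-256 digests, whereas a production HSM-backed boot flow verifies asymmetric signatures over encrypted images:

    # Illustrative hash-chain secure boot: each stage is verified
    # against a trusted digest before control is handed to it.
    import hashlib

    def verify_and_load(stage_image: bytes, expected_digest: str) -> bytes:
        if hashlib.sha256(stage_image).hexdigest() != expected_digest:
            raise RuntimeError("boot stage failed verification; halting")
        return stage_image  # a real system would decrypt, then execute

    loader = b"stage1-bootloader"
    os_image = b"stage2-os"
    trusted = {  # provisioned at manufacturing time (illustrative)
        "loader": hashlib.sha256(loader).hexdigest(),
        "os": hashlib.sha256(os_image).hexdigest(),
    }
    verify_and_load(loader, trusted["loader"])
    verify_and_load(os_image, trusted["os"])
    print("boot chain verified")

The anchor of the chain, the first trusted digest or public key, must live in immutable storage, which is the role the integrated Hardware Security Module plays here.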

EyeQ5 will be delivered to carmakers and Tier 1 suppliers along with a full suite of hardware-accelerated algorithms and applications required for autonomous driving. Along with this, Mobileye will support an automotive-grade standard operating system and provide a complete software development kit (SDK) that allows customers to differentiate their solutions by deploying their own algorithms on EyeQ5. The SDK may also be used for prototyping and deploying neural networks, and for accessing Mobileye’s pre-trained network layers. Use of EyeQ5 as an open software platform is facilitated by architectural elements such as hardware virtualization and full cache coherency between CPUs and accelerators.

Autonomous driving requires fusion processing of dozens of sensors, including high-resolution cameras, radars, and LiDARs. The sensor-fusion process must capture and process data from all of these sensors simultaneously. For this purpose, the EyeQ5’s dedicated IOs support at least 40Gbps data bandwidth.
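A rough, purely illustrative budget shows why tens of Gbps are needed; the sensor counts, formats, and per-sensor rates below are assumptions, not EyeQ5 specifications:

    # Illustrative sensor-input bandwidth budget for a FAD vehicle.
    def camera_gbps(width, height, bits_per_pixel, fps):
        return width * height * bits_per_pixel * fps / 1e9

    cams = 12 * camera_gbps(1920, 1080, 16, 30)  # twelve 1080p30 cameras, 16bpp raw
    radars = 6 * 0.2                             # six radars, ~0.2 Gbps each
    lidars = 2 * 1.0                             # two LiDARs, ~1 Gbps each
    total = cams + radars + lidars
    print(f"aggregate sensor input: ~{total:.1f} Gbps vs. 40 Gbps available")

Even this dense configuration (about 15 Gbps) leaves headroom below the 40Gbps floor, which matters once higher-resolution imagers replace the 1080p assumption.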

EyeQ5 implements two PCIe Gen4 ports for inter-processor communication, enabling system expansion with multiple EyeQ5 devices or connectivity with an application processor.

High computational and data-bandwidth requirements are supported by four 32-bit LPDDR4 channels operating at 4267MT/s.
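Those figures imply the following peak bandwidth (simple arithmetic; GB/s here means 10^9 bytes per second):

    # Peak LPDDR4 bandwidth from the quoted channel configuration.
    channels = 4
    bus_bits = 32
    transfers_per_s = 4267e6               # 4267 MT/s per channel
    bytes_per_transfer = bus_bits / 8
    peak = channels * transfers_per_s * bytes_per_transfer / 1e9
    print(f"peak LPDDR4 bandwidth: ~{peak:.1f} GB/s")  # ~68.3 GB/s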

Availability

Engineering samples of EyeQ5 are expected to be available by the first half of 2018. First development hardware with the full suite of applications and the SDK is expected by the second half of 2018.

About Mobileye

Mobileye N.V. is the global leader in the development of computer vision and machine learning, data analysis, localization and mapping for Advanced Driver Assistance Systems and autonomous driving. Our technology keeps passengers safer on the roads, reduces the risks of traffic accidents, saves lives and has the potential to revolutionize the driving experience by enabling autonomous driving.  Our proprietary software algorithms and EyeQ® chips perform detailed interpretations of the visual field in order to anticipate possible collisions with other vehicles, pedestrians, cyclists, animals, debris and other obstacles. Mobileye’s products are also able to detect roadway markings such as lanes, road boundaries, barriers and similar items; identify and read traffic signs, directional signs and traffic lights; create a Roadbook™ of localized drivable paths and visual landmarks using REM™; and provide mapping for autonomous driving. Our products are or will be integrated into car models from 25 global automakers.  Our products are also available in the aftermarket. Further information about Mobileye can be found at: http://www.mobileye.com/

About STMicroelectronics

ST is a global semiconductor leader delivering intelligent and energy-efficient products and solutions that power the electronics at the heart of everyday life. ST’s products are found everywhere today, and together with our customers, we are enabling smarter driving and smarter factories, cities and homes, along with the next generation of mobile and Internet of Things devices. By getting more from technology to get more from life, ST stands for life.augmented.

In 2015, the Company’s net revenues were $6.90 billion, serving more than 100,000 customers worldwide. Further information can be found at www.st.com.

Contact Information

STMicroelectronics

39, Chemin du Champ des Filles
C. P. 21 CH 1228 Plan-Les-Ouates
Geneva,
Switzerland

toll-free: +41 22 929 29 29
fax: +41 22 929 29 88

FotoNation® Partners With Kyocera to Develop Intelligent Automotive Camera Technology

Tuesday, May 17th, 2016

Jointly Developed Solutions to Enable Next Generation of Automotive Vision Systems

FotoNation Limited, a wholly owned subsidiary of Tessera Technologies, Inc. (NASDAQ: TSRA) and the leading provider of computational imaging and computer vision solutions for consumer and enterprise applications, has partnered with Kyocera Corporation (NYSE: KYO) (TOKYO: 6971) to develop advanced, intelligent vision solutions for automotive applications. As part of the partnership, and using FotoNation technology as a foundation, the two companies will jointly develop advanced computer vision solutions for the automotive market.

More stringent safety regulations and the promise of semi- and fully-autonomous vehicles are placing tremendous technology demands on automakers and automotive OEMs. By using the solutions being jointly developed by FotoNation and Kyocera, automakers and automotive OEMs will not only enhance driver and pedestrian safety but also accelerate the adoption of semi- and fully-autonomous vehicles.

Kyocera is a recognized provider of rear-view camera systems to the automotive industry. This experience gives Kyocera a deep understanding of the automotive industry’s future requirements for vision systems. Incorporating higher levels of intelligence into Kyocera’s camera systems will enable Kyocera to bring new, feature-rich products, such as surround-view cameras, to market.

“These technologies will provide a safer driving environment for occupants and pedestrians in urban areas, by enabling cars to see and interact with drivers,” said Norio Okuda, Manager, Kyocera.  “FotoNation is focused on delivering complex computational imaging solutions for automotive applications, and together we will develop technologies that will transform the future of driving.”

A pioneer in developing advanced computational imaging algorithms, FotoNation provides best-in-class hardware-accelerated imaging solutions for a number of automotive applications, including driver monitoring systems (DMS), driver identification, surround view, e-mirror, smart rearview cameras, and 360-degree occupancy monitoring.

“Increasing interest from the automotive industry for vision systems to enhance vehicle safety represents an opportunity for significant growth for FotoNation, driven mainly by adoption of our advanced imaging systems by tier-one automotive suppliers and OEMs,” stated Sumat Mehra, senior vice president of marketing and business development at FotoNation.  “Kyocera has a strong reputation as a leading technology innovator, and we are pleased to be working with them as a valued technology partner to bring these cutting-edge vision solutions to market.”

For more information about FotoNation and this automotive market opportunity, please visit http://www.fotonation.com/.

About KYOCERA

Kyocera Corporation (NYSE:KYO) (TOKYO:6971) (http://global.kyocera.com/), the parent and global headquarters of the Kyocera Group, was founded in 1959 as a producer of fine ceramics (also known as “advanced ceramics”). By combining these engineered materials with metals and integrating them with other technologies, Kyocera has become a leading supplier of electronic components, semiconductor packages, solar power generating systems, mobile phones, printers, copiers, cutting tools and industrial ceramics. During the year ended March 31, 2015, the company’s net sales totaled 1.53 trillion yen (approx. USD 12.7 billion). Kyocera appears on the 2014 and 2015 listings of the “Top 100 Global Innovators” by Thomson Reuters, and is ranked #552 on Forbes magazine’s 2015 “Global 2000” listing of the world’s largest publicly traded companies.

About FotoNation

FotoNation is giving life to computational imaging by merging technology with emotion. With technology in more than 60% of global tier-one smartphones, FotoNation develops technologies that serve the computational imaging space for handsets and cameras, as well as the automotive, surveillance, security, and augmented reality markets. We create, innovate and deliver the next generation of computational imaging algorithms. We engineer new ways to reach the highest possible performance while keeping system requirements to a minimum. We have a long history of innovating and advancing the state of the art in image processing. More than a decade ago, we were the first to integrate a computational imaging solution in an embedded mobile device. Today, FotoNation remains the leader in computational photography and computer vision. Nearly 2 billion digital cameras and smart devices are powered by the imaging technologies designed by the sharp minds and passionate hearts of FotoNation engineers. For more information visit www.fotonation.com.

About Tessera Technologies, Inc.

Tessera Technologies, Inc., including its Invensas and FotoNation subsidiaries, licenses technologies and intellectual property to customers for use in areas such as mobile computing and communications, memory and data storage, and 3D-IC technologies, among others. Our technologies include semiconductor packaging and interconnect solutions, and products and solutions for mobile and computational imaging, including our LifeFocus™, FaceTools™, FacePower™, FotoSavvy™, DigitalAperture™, face beautification, red-eye removal, High Dynamic Range, autofocus, panorama, and image stabilization intellectual property. For more information, call +1.408.321.6000 or visit www.tessera.com.

Contact Information
