Posts Tagged ‘top-story’


Automakers See Opportunity with Embedded Handwriting

Monday, April 10th, 2017

Why handwriting technology in the automotive cockpit will continue to see dramatic growth.

Many people who don’t yet use handwriting technology on phones or tablets as computing input mechanisms may nonetheless already be familiar with digital handwriting technology. There’s a good chance they’ve been introduced to it in what might seem an unlikely place: Their cars.

Figure 1: In the new automotive ecosystem, embedded sensors and display units can communicate with mobile devices inside the car and gather all sorts of external information via the web.

Last year, higher-end auto manufacturers like Audi, Mercedes and Tesla began shipping cars with embedded handwriting technology for controlling GPS systems, entertainment systems and other dashboard controls. Watch a 10-second video showing handwriting at work in an Acura here and learn a bit more about the overall concept here.

But all of this is just the beginning. According to Frost & Sullivan, the market for handwriting recognition (HWR) technology in cars will grow at a rate of more than 30 percent each year through 2020. “The industry is now moving towards controlling the entire infotainment with help from HWR,” the firm adds.

Embedded systems are tough to design: By nature, they’re constrained not only by limited storage space, but also by limited memory and, typically, lower-performance CPUs compared to general-purpose computing devices. Even bound by these limitations, today’s digital handwriting technology has delivered remarkable accuracy and consistent benefits to the automotive industry. The most recent technology includes the ability to superimpose characters, cursive words or portions of words on top of each other on the touchpad and still accurately recognize the input. A keyboard option incorporating smooth typing enables a true multimodal solution. Here are some reasons why handwriting technology in the automotive cockpit will continue to see dramatic growth:

  • Low driver distraction interfaces have evolved to require handwriting. One reason handwriting provides a more effective option for controlling GPS or entertainment systems than voice is that cars are often noisy, making it difficult to reliably give spoken instructions. Another is that voice command systems are very difficult to edit, which makes it more challenging to revise input or correct recognition errors. Finally, handwriting allows drivers to keep their attention safely focused on the road: Today’s tech is designed for use when the driver isn’t looking at what they’re writing on the touchpad.
  • Multimodal systems are easy for the user to manipulate. Car manufacturers care about customer satisfaction, and drivers today demand a consistent user experience when inputting information—whether they’re doing it by hand, keyboard or voice. Drivers want multiple methods to input that information, depending on what’s most convenient and more importantly, safe. Consistency is key: System responses to keyboard input need to be consistent with responses to handwritten input. No one wants to get a different dictionary response to a query if they’re writing by hand rather than keyboarding, for instance. A single multimodal system pre-emptively solves that potential problem.
  • Multimodal is great for the integrator. What’s great about multimodal design for systems integrators is that they only need to integrate with a single technology provider that handles multiple forms of input instead of integrating several different functional libraries and debugging any adverse interactions. This shortens the development time required for integration and lessens demands on memory resources and storage. Ultimately, integrating a multimodal interface means developing products that are often lower-cost, quicker time-to-market, and easier to test and validate. A big win all around.

Handwriting also wins points for safety and accuracy. The American Automobile Association (AAA) evaluated voice-based command systems, such as the iPhone’s Siri, and found that they significantly distracted drivers. In the worst case, drivers even at the low speed of 25 mph were distracted for up to 27 seconds, during which they traveled a distance of more than three football fields.
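The AAA figures above check out with simple kinematics: distance is just speed times time. A quick sketch (the 300-ft field length between goal lines is our assumption):

```python
# Distance traveled during a 27-second distraction at 25 mph,
# the worst case reported by the AAA study cited above.

MPH_TO_FTPS = 5280 / 3600  # feet per mile / seconds per hour

def distraction_distance_ft(speed_mph: float, seconds: float) -> float:
    """Feet traveled at a constant speed during a distraction interval."""
    return speed_mph * MPH_TO_FTPS * seconds

distance = distraction_distance_ft(25, 27)
football_fields = distance / 300  # 300 ft between goal lines

print(f"{distance:.0f} ft, about {football_fields:.1f} football fields")
# -> 990 ft, about 3.3 football fields
```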

Handwriting adapts well to multiple situations—e.g., character input when driving, and word input when stopped. Drivers can reach down and direct their cars’ GPS or entertainment systems in dozens of languages (as selected by the OEM), via either cursive or block characters that are easily recognizable, and that can be written at a tilt—more than 30 degrees off level—and still be recognized. The ability to recognize letters written at a significant tilt tolerates a great deal of human error, which in turn enables increased safety.

Figure 2: Embedded handwriting technology, complemented with voice and other multimodal input options, offers today’s drivers an effective way to enjoy more applications with complex features even as states increase regulation.

Handwriting, in sum, is a natural fit for inclusion in the auto market because it offers an intuitive method to control the automotive cockpit, assures minimum driver distraction, and provides a natural input method and low learning curve. Drivers of all ages can use it, and it offers high recognition accuracy of letters, numbers and gestures.

And the handwriting technology in cars can blossom into a full note-taking application for drivers to use when they’re stopped. This is ideal for road warrior executives who must constantly attend meetings, travel and share their notes.

Handwritten input, or ‘digital ink,’ can now be converted to text as reliably as input from the keyboard and mouse. Furthermore, diagrams such as mind maps, organizational charts, and flow diagrams can be fully converted to digital form in a way that allows for changes and editing. Today’s technology lets you create content, edit and format that content, create diagrams, input complex math equations, and easily incorporate the interpreted handwriting results into your digital document workflow.

A booming professional services market has emerged to support developers of embedded handwriting technology, too. Handwriting technology vendors offer everything from in-depth professional engineering services for use cases based on their SDK packages all the way to complete turnkey subsystem design.

Handwriting technology is already embedded in millions of cars today. But the most tremendous growth for this market lies ahead in a wide range of embedded applications and IoT devices. For ISVs and OEMs, the ultimate benefit is a massively improved user experience which enhances customer satisfaction and ultimately sales and profits.

Gary Baum is the Vice President of Marketing at MyScript, the source of the most advanced award-winning technology for handwriting recognition and digital ink management. At the Car HMI Concepts and Systems conference, MyScript technology was recognized in the ‘Most Innovative Car HMI Technology’ category.

Read more about the MyScript SDK and other tools for the automotive industry.

Control, Drive, Sense: High-Power Density SiC and GaN Power Conversion Applications

Thursday, March 2nd, 2017

New power switch technologies are key to success with the next generation of motor control, solar inverters, energy storage and electric vehicles. Just as important—the ability to drive these technologies safely and sense them more accurately.


Electricity consumption and generation, which add to our carbon footprint and affect climate change, are among the key problems the world faces. The largest global consumer of electricity is electric motors and the systems they drive. These systems consume more than twice as much electricity as the next largest consumer, lighting. A 2011 International Energy Agency report estimates that electric motor systems account for between 43 and 46 percent of the world’s electricity consumption.

Farther on Less

The need to further shrink our carbon footprint by reducing the CO2 emissions from transportation is a key driver for the electrification of vehicles. With the electrification of vehicles comes the need for them to be able to travel greater distances with less energy consumed. At the same time, we must ensure that the electricity generated for charging these vehicles comes from clean sources. As important as reducing electricity consumption is improving electricity generation methods. Generating energy through renewable resources like the sun requires efficient solar farms that are becoming mainstream in implementations worldwide.

We’ve seen the emergence of wide-bandgap semiconductor technologies like Silicon Carbide (SiC) and Gallium Nitride (GaN) and the use of power MOSFETs in applications such as solar inverters, motor drives, and electric vehicles. Along with these technologies comes the need for gate drivers capable of driving them efficiently and safely at higher switching rates with less dead time in the system. Sensing current within these systems while operating at these higher switching rates is becoming more challenging.

Moving to these new technologies makes electric motors and driving electronics smaller and lighter. Increasing the range of the electric vehicle and decreasing its charging time becomes possible. Higher switching frequencies in solar inverters, as specified in IEC62109-1, will improve the overall efficiency of the systems as well as reducing the size of the line filters. Industrial automation applications where motors are commonly used, as specified in the variable frequency motor drive standard IEC61800-5, will become less bulky and more efficient, reducing the overall energy footprint.
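The line-filter benefit follows from the ripple relation for a buck-type stage: for a fixed current ripple, the required inductance scales inversely with switching frequency. A rough sketch with illustrative values (the 400 V bus, 50 percent duty and 2 A ripple target are assumptions, not figures from the article):

```python
# Required filter inductance for a fixed current ripple scales as 1/f_sw.
# Illustrative numbers: 400 V bus, 50% duty, 2 A peak-to-peak ripple target.

def required_inductance(v_bus, duty, ripple_a, f_sw):
    """Buck-type ripple relation: dI = V*D*(1-D)/(L*f)  =>  L = V*D*(1-D)/(dI*f)."""
    return v_bus * duty * (1 - duty) / (ripple_a * f_sw)

L_20k = required_inductance(400, 0.5, 2.0, 20e3)    # typical IGBT switching rate
L_100k = required_inductance(400, 0.5, 2.0, 100e3)  # feasible with SiC/GaN

print(f"20 kHz:  {L_20k*1e6:.0f} uH")
print(f"100 kHz: {L_100k*1e6:.0f} uH  ({L_20k/L_100k:.0f}x smaller)")
```

Running the switching stage five times faster shrinks the required inductance by the same factor, which is where the smaller, lighter magnetics come from.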

Greater Robustness, Reliability

Isolation is mandated for safety and operation. Implementing the isolation barriers within these applications without compromising on performance is critical. These systems often have long lifetimes and could be implemented in harsh environments, so high levels of component robustness and reliability are a must.

One example of a solution for driving new power switch technologies is Analog Devices iCoupler® digital isolation integrated with gate drivers like the ADuM4121 (Figure 1). Its industry-leading low propagation delay of 38 ns typical allows faster switching, and it can withstand common-mode transients up to 150 kV/µs during fast turn-on and turn-off events.

Integrating Analog Devices iCoupler digital isolation with industry-leading sigma-delta analog-to-digital converters, such as the AD7403, makes it possible to accurately sense the current in high-voltage applications across a smaller shunt resistor, improving system efficiency. This enables a higher-accuracy shunt-based current-measurement architecture rather than Hall-effect systems. Selecting smaller resistors reduces the overall size of the solution.
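The shunt trade-off can be sketched with back-of-the-envelope numbers. The ±250 mV full-scale modulator input and the current levels below are illustrative assumptions, not datasheet values; check the AD7403 datasheet for actual specifications:

```python
# Sizing a current-sense shunt for an isolated sigma-delta modulator.
# Assumed +/-250 mV full-scale input (illustrative, not a datasheet figure).

def shunt_for_full_scale(i_peak_a, v_full_scale=0.250):
    """Largest shunt that keeps the peak current within the modulator's range."""
    return v_full_scale / i_peak_a

def shunt_power_w(i_rms_a, r_shunt_ohm):
    """I^2*R dissipation in the shunt."""
    return i_rms_a ** 2 * r_shunt_ohm

r = shunt_for_full_scale(25)  # 25 A peak -> 10 mOhm
p = shunt_power_w(15, r)      # 15 A rms  -> 2.25 W dissipated

print(f"shunt: {r*1e3:.0f} mOhm, dissipation: {p:.2f} W")
```

Since dissipation scales linearly with the shunt value, a converter that resolves a smaller full-scale voltage lets you halve the shunt and halve the loss, which is the efficiency argument above.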

Figure 1: ADuM4121 Driving GaN MOSFET GS66508B

To demonstrate system performance benefits, Analog Devices has developed a new half-bridge GaN evaluation platform in collaboration with GaN Systems, as shown in Figure 2. On this platform the ADuM4121 isolated gate driver drives the GS66508B GaN MOSFET from GaN Systems, which is rated to 650V at 30A. The gate charge requirement of the GS66508B is very low, making it much easier to drive at higher frequencies with a VDD2 supply voltage of only 6V. The ADuM4121 also includes an internal Miller clamp that activates at 2V on the falling edge of the gate drive output, giving the driven gate a lower-impedance path to reduce the chance of Miller-capacitance-induced turn-on.
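Low gate charge matters because the average gate-drive current scales linearly with switching frequency: I_avg = Qg × f_sw. A sketch with an assumed ~6 nC total gate charge, which is an illustrative low-Qg GaN figure rather than a GS66508B datasheet value:

```python
# Average gate-drive current grows linearly with switching frequency:
# I_avg = Qg * f_sw. The 6 nC gate charge is an illustrative assumption.

def avg_gate_current_ma(q_gate_nc, f_sw_hz):
    """Average current the driver must source, in mA."""
    return q_gate_nc * 1e-9 * f_sw_hz * 1e3

for f in (100e3, 500e3, 1e6):
    print(f"{f/1e3:>6.0f} kHz -> {avg_gate_current_ma(6, f):.1f} mA")
```

Even at 1 MHz a low-Qg device needs only a few milliamps of average drive, which is why the low 6 V VDD2 supply is sufficient.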

Using three of these half-bridge evaluation boards together with the Analog Devices motor control evaluation platform, a demonstration system was built showcasing a three-phase inverter driving a three-phase motor (Figure 3). Within the three-phase inverter, large currents are switched at high frequencies, which can cause radiated and conducted emissions. To reduce these emissions while operating efficiently, it is critical to slew the edges of the switching waveforms sufficiently by selecting an appropriate gate resistance. This series resistance can further help dampen output ringing by matching the source to the load.
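The gate-resistance effect can be estimated to first order: the external resistor limits gate current, which sets how quickly the gate charge is delivered and hence the edge rate. The values below are illustrative assumptions, not component values from the demonstration boards:

```python
# First-order effect of external gate resistance on switching edge speed:
# larger Rg -> lower gate current -> slower edges -> lower dV/dt and EMI.
# Illustrative values only.

def edge_time_ns(q_gate_nc, r_gate_ohm, v_drive=6.0):
    """Rough charge-based estimate: t ~ Qg / (Vdrive / Rg)."""
    i_gate = v_drive / r_gate_ohm
    return q_gate_nc * 1e-9 / i_gate * 1e9

for rg in (2, 10, 47):
    print(f"Rg = {rg:>2} ohm -> ~{edge_time_ns(6, rg):.0f} ns edge")
```

Picking Rg is therefore a direct trade between switching loss (slow edges) and emissions/ringing (fast edges).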

Figure 2: Replacing an IGBT inverter with a GaN Inverter

In this demonstration platform, the ADSP-CM409 generates the PWM signals required to drive the power switches, while its integrated SINC filters allow direct connection of the isolated sigma-delta ADC used for accurately sensing the current. The reinforced isolation provided by the isolated gate drivers can withstand up to 5 kVrms, with working voltages as high as 849 Vpeak according to VDE 0884-10. The AD7403’s isolation achieves the same 5 kVrms withstand rating with a working voltage of 1250 Vpeak, also according to VDE 0884-10.
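A sinc3 filter of the kind these integrated SINC blocks implement can be sketched in a few lines: three cascaded integrators running at the modulator bit rate, then three differentiators at the decimated rate. This is a behavioral sketch, not the ADSP-CM409’s actual hardware implementation:

```python
# Behavioral sinc^3 decimation filter for a 1-bit sigma-delta stream:
# three integrators at the bit rate, three differentiators after decimation.

def sinc3_decimate(bitstream, osr):
    """Filter a 1-bit stream (0/1 values), decimating by the oversampling ratio."""
    i1 = i2 = i3 = 0
    d1 = d2 = d3 = 0
    out = []
    for n, bit in enumerate(bitstream):
        # Integrators run at the full modulator rate.
        i1 += bit
        i2 += i1
        i3 += i2
        if (n + 1) % osr == 0:
            # Differentiators run at the decimated rate.
            y1 = i3 - d1; d1 = i3
            y2 = y1 - d2; d2 = y1
            y3 = y2 - d3; d3 = y2
            out.append(y3)
    return out

# A constant all-ones input settles to the filter's DC gain, osr**3.
print(sinc3_decimate([1] * 64, osr=8)[-1])  # -> 512
```

The shunt voltage is recovered by scaling the settled output by the DC gain (osr³); the three-sample settling time after a step is why dead-time and blanking matter when sampling inverter currents.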

Figure 3: Three Phase Inverter Motor Control Platform

Implementing a three-phase inverter using GaN suits systems operating up to 650V. SiC, with its much higher breakdown voltages, is a better match for systems operating as high as 1200V and 1700V, because it provides more margin within three-phase systems with 690Vrms line voltages.
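The margin argument is simple arithmetic: the peak of a 690 Vrms line-to-line voltage already exceeds a 650 V GaN rating, before allowing for the still-higher DC bus and switching transients:

```python
import math

# Peak of a sinusoidal line voltage vs. device breakdown ratings.

def line_peak_v(v_line_rms):
    """Peak value of a sinusoidal RMS line voltage."""
    return v_line_rms * math.sqrt(2)

peak = line_peak_v(690)
print(f"690 Vrms line -> {peak:.0f} V peak")
for rating in (650, 1200, 1700):
    print(f"{rating} V device: {rating - peak:+.0f} V margin vs. line peak")
```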

Hein Marais is a System Application Engineer at Analog Devices, Inc.

Can Autonomous Vehicles Absolve Human Responsibility?

Monday, January 23rd, 2017

In our rush to embrace the latest technology and take advantage of whatever benefits it offers—greater convenience, higher efficiency, improved reliability, lower cost, etc.—we must not neglect human safety.

Transportation has been a major driver of technological innovation (Figure 1) since the inventions of James Watt, the Wright Brothers and automotive pioneers Daimler and Maybach. Over the years, concerns for occupant safety have led to the development of seat belts and air bags in cars, while such things as improvements in vehicle body materials and profiles, and the deployment of reversing alarms on trucks and buses have reduced the risks of accident and injury to pedestrians, cyclists, and other road users.


Figure 1: Mankind’s need to get from one spot to another has inspired innovators from James Watt to Elon Musk (Left image: James Eckford Lauder (1811–1869), public domain, via Wikimedia Commons; Right image: Steve Jurvetson, CC BY 2.0, via Wikimedia Commons)

In more recent times, the technology of artificial intelligence (AI) has started to pervade the various electronic control systems that are an integral part of modern automotive design and today’s driving experience. However, as we move from advanced driver assistance systems (ADAS) to fully autonomous self-driving vehicles we need to recognize the point at which responsibility for safe operation passes from human to machine. The ethics of the autonomous functionality offered by AI in vehicles has parallels with the “three laws of robotics” science-fiction writer Isaac Asimov postulated in 1942, which mostly aimed to protect humans from harm due to the actions of any robots. In similar fashion, implementing AI in vehicles needs ethical decision-making rules to define behavior that eliminates or reduces harm to humans.

From Fighter Pilots to Car Drivers

A fighter jet represents the pinnacle of aircraft evolution in terms of its performance and complexity of operation. Consequently, fighter pilots are assisted in flying them: A comprehensive suite of artificial intelligence algorithms can control almost every aspect of operation, enhancing the pilot’s capability while still allowing the pilot to take control when the situation demands it. In the same way, equally powerful, game-changing AI technology in automotive applications must account for the ability to return control of the vehicle to the driver.

Within the auto industry today, many electronic technology companies are focusing on the technical needs of ADAS, developing both adaptive and predictive systems and components that will allow for better and safer driving. ADAS assists the driver or any other agent in charge of the vehicle in a number of ways: It may warn the driver or take actions to reduce risk. It may also improve safety and performance by automating some portion of the control task of operating the vehicle.

In its current state ADAS mainly functions in cooperation with the driver, i.e. by providing a human-to-machine interface, which is part of the control system of the vehicle with the human still maintaining overall responsibility for the vehicle. Over time, it is expected that developments in technology will be successful in wielding ever-greater control of the vehicle, so assistance becomes the norm and driver intervention is reduced. ADAS are ultimately expected to develop further into the kind of autonomous system that will offer the ability to respond more quickly and with greater benefits than when a human agent is in control of the vehicle.

ADAS Demands Component Solutions

The development of electronic components for ADAS, and ultimately for truly autonomous vehicles, is being undertaken by leading component manufacturers worldwide. These companies are typically already experienced in meeting the demanding performance, quality and reliability standards expected by the automotive industry. For example, ON Semiconductor provides robust, AEC-qualified, production part approval process (PPAP) capable products for automotive applications, including the NCV78763 Power Ballast and Dual LED Driver for ADAS front headlights. Freescale Semiconductor is helping to drive the world’s most innovative ADAS solutions with its automotive, MCU, analog and sensors, and digital networking portfolio expertise. The development of its latest FXTH8715 Tire Pressure Monitoring Sensors (TPMS), which integrate an 8-bit microcontroller (MCU), pressure sensor, XZ-axis or Z-axis accelerometer and RF transmitter, was driven by a market requirement for improved safety. AVX, a technology leader in the manufacture of passive electronic components, developed the VCAS & VGAS Series TransGuard® Automotive Multi-Layer Varistors (MLVs) to provide protection against automotive-related transients in ADAS applications. Delphi Connection Systems supports challenging automotive applications that demand robust design and reliability with its high-performance APEX® Series Wire Connectors.

The Dream of Vehicle Autonomy

The electronics industry has long been characterized by continual improvements in performance that come at an ever-decreasing cost. This trend has allowed technology that was once the preserve of racing cars and the luxury automobile market to percolate down through mid-range vehicles to everyday family vehicles. Many people, both inside and outside the industry, now dream of a future where completely autonomous vehicles will come to dominate the world’s roads. They visualize benefits in safety, travel efficiency, comfort, and convenience in vehicles that are programmed to avoid accidents, optimize journey times and costs, and maximize the functional utility of the vehicle. Clearly, amongst these, preventing injury to passengers and others as well as damage to the vehicle and property is the highest priority.

Autonomous Vehicles Require Ethical Rules

Current laws regulating road use place the responsibility for safety squarely with the human driver. He or she must ensure that other people, both inside and outside the vehicle, are protected from harm arising from his/her operation of the vehicle. While a car may be viewed as a means of getting people from point A to point B as efficiently as possible, its use at excessive speed or in a dangerous manner resulting in an accident that injures or kills a pedestrian would likely be considered a criminal offense. Indeed, the deliberate use of a vehicle to run down and kill someone would, in most cases, constitute murder.

However, these judgments are rarely black or white, and there may be mitigating circumstances, depending on the situation and people involved. Moreover, while we would not expect an autonomous vehicle to exceed speed limits or undertake dangerous maneuvers in a typical situation, there may be occasions when, like a human operator, it needs to make decisions where the outcome may be questionable. These decisions are where we need to understand the ethics involved to apply appropriate rules. This can be appreciated by considering a few hypothetical scenarios:

1. When traveling at speed in traffic, a human driver might react to an animal jumping out into the road by swerving to avoid it and, in doing so, hitting another car. As the driver, you may have saved that animal but what if the result was an accident in which other people were hurt?

2. What if, instead of an animal in the above example, it was a pedestrian who had stepped into the road, and hitting them was likely to be fatal? Then the action would have saved a human life at the cost of potential injuries to the occupants of the other vehicle.

3. An autonomously driven vehicle confronted with the same situation of a pedestrian stepping into the road might decide it cannot run over that person but may also decide it cannot swerve into another vehicle. Instead, it swerves off the road hitting a wall resulting in serious injuries to the human ‘driver’ of the car and potentially any passengers too.

In the latter situation, the human ‘driver’ is not to blame, but equally, there is an ethical dilemma as to whether any fault lies with the autonomous vehicle. Undoubtedly, as we become more reliant on technologies such as ADAS and ultimately on Autonomous Technology Systems (ATS) the responsibility for operating a vehicle becomes less dependent on the individual driver and shifts to the vehicle itself and therefore to the car manufacturer. Not surprisingly, the automotive industry will not want to accept liability for such risks unless the market recognizes this requirement and establishes an appropriate business model that makes economic sense for the manufacturers and doesn’t result in endless litigation.


Technological solutions are now starting to outpace the real-world situations into which they are being introduced. The deployment of artificial intelligence is challenging the status quo and forcing us to consider ethical questions about how machines should operate and who has control and is, therefore, responsible for their behavior.

This moral issue is certainly true of autonomous vehicles where ceding control to the vehicle requires AI that follows agreed ethical rules to protect human life. If we are to benefit from improved transportation systems with greater freedom, flexibility, efficiency, and safety, then it is society as a whole rather than design engineers and vehicle manufacturers that have to face up to this challenge and take on this responsibility.

Rudy Ramos is the Project Manager for the Technical Content Marketing team at Mouser Electronics, accountable for the timely delivery of the Application and Technology sites from concept to completion. He has 30 years of experience working with electromechanical systems, manufacturing processes, military hardware, and managing domestic and international technical projects. He holds an MBA from Keller Graduate School of Management with a concentration in Project Management. Prior to Mouser, he worked for National Semiconductor and Texas Instruments.

GENIVI Alliance Announces New Open Source Vehicle Simulator Project

Tuesday, September 20th, 2016

The GENIVI Alliance, a non-profit alliance focused on developing an open in-vehicle infotainment (IVI) and connectivity software platform for the transportation industry, today announced the GENIVI Vehicle Simulator (GVS) open source project has launched, with both developer and end-user code available immediately.

The GVS project and initial source code, developed by Elements Design Group, San Francisco and the Jaguar Land Rover Open Software Technology Center in Portland, Ore., provide an open source, extensible driving simulator that assists adopters to safely develop and test the user interface of an IVI system under simulated driving conditions.

“While there are multiple potential uses for the application, we believe the GVS is the most comprehensive open source vehicle simulator available today,” said Steve Crumb, executive director, GENIVI Alliance. “Its first use is to test our new GENIVI Development Platform user interface in a virtually simulated environment, to help us identify and execute necessary design changes quickly and efficiently.”

Open to all individuals wishing to collaborate, contribute, or just use the software, the GVS provides a realistic driving experience with a number of unique features including:

  • Obstacles – Obstacles may be triggered by the administrator while driving. If the driver hits an obstacle in the virtually simulated environment, the event is logged as an infraction that can be reviewed after the driving session.
  • Infraction Logging – A number of infractions can be logged including running stop signs, running red lights, vehicles driving over double yellow lines on a single highway and collisions with terrain, other vehicles, obstacles, etc.
  • Infraction Review – At the end of a driving session, the administrator and driver can review infractions from the most recent session, with screenshots of the infraction along with pertinent vehicle data displayed and saved.
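The infraction-logging and review features above suggest a simple data model. A sketch in Python with hypothetical names, not taken from the GVS source code:

```python
# Hypothetical sketch of a session-infraction data model like the one
# the GVS features describe; all names here are illustrative.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Infraction:
    kind: str           # e.g. "ran_stop_sign", "collision", "crossed_double_yellow"
    timestamp_s: float  # time into the driving session
    speed_mps: float    # vehicle speed when the infraction occurred
    screenshot_path: str

@dataclass
class DrivingSession:
    infractions: List[Infraction] = field(default_factory=list)

    def log(self, infraction: Infraction) -> None:
        self.infractions.append(infraction)

    def review(self) -> List[Infraction]:
        """Infractions in chronological order for end-of-session review."""
        return sorted(self.infractions, key=lambda i: i.timestamp_s)

session = DrivingSession()
session.log(Infraction("ran_red_light", 42.5, 12.0, "shots/0001.png"))
session.log(Infraction("collision", 17.2, 8.3, "shots/0000.png"))
print([i.kind for i in session.review()])  # -> ['collision', 'ran_red_light']
```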

To learn more, review the code, or start setting up your own vehicle simulator, visit

About GENIVI Alliance

The GENIVI Alliance is a non-profit alliance focused on developing an open in-vehicle infotainment (IVI) and connectivity platform for the transportation industry. The alliance provides its members with a global networking community of more than 140 companies, joining connected car stakeholders with world-class developers in a collaborative environment, resulting in free, open source middleware. GENIVI is headquartered in San Ramon, Calif.

Automotive semiconductor market grows slightly in 2015, ranks shift

Wednesday, June 22nd, 2016

Despite slower growth for the automotive industry and exchange rate fluctuations, the automotive semiconductor market grew at a modest 0.2 percent year over year, reaching $29 billion in 2015, according to IHS (NYSE: IHS), a global source of critical information and insight.

A flurry of mergers and acquisitions last year caused the competitive landscape to shift, including the merger of NXP and Freescale, which created the largest automotive semiconductor supplier in 2015 with a market share of 14.3 percent, IHS said. The acquisition of International Rectifier (IR) helped Infineon overtake Renesas to secure the second-ranked position, with a market share of 9.8 percent. Renesas slipped to third-ranked position in 2015, with a market share of 9.1 percent, followed by STMicroelectronics and Texas Instruments.

“The acquisition of Freescale by NXP created a powerhouse for the automotive market. NXP increased its strength in automotive infotainment systems, thanks to the robust double-digit growth of its i.MX processors,” said Ahad Buksh, automotive semiconductor analyst for IHS Technology. “NXP’s analog integrated circuits also grew by double digits, thanks to the increased penetration rate of keyless-entry systems and in-vehicle networking technologies.”

NXP will now target the machine vision and sensor fusion markets with the S32V family of processors for autonomous functions, according to the IHS Automotive Semiconductor Intelligence Service. Even on the radar front, NXP now has a broad portfolio of long- and mid-range silicon-germanium (SiGe) radar chips, as well as short-range complementary metal-oxide semiconductor (CMOS) radar chips under development. “The fusion of magnetic sensors from NXP, with pressure and inertial sensors from Freescale, has created a significant sensor supplier,” Buksh said.

The inclusion of IR, and a strong presence in advanced driver assistance systems (ADAS), hybrid electric vehicles and other growing applications helped Infineon grow 5.5 percent in 2015. Infineon’s 77 gigahertz (GHz) radar system integrated circuit (RASIC) chip family strengthened its position in ADAS. Its 32-bit microcontroller (MCU) solutions, based on TriCore architectures, reinforced the company’s position in the powertrain and chassis and safety domains.

The dollar-to-yen exchange rate worked against the revenue ranking for Renesas for the third consecutive year. A major share of Renesas business is with Japanese customers, which is primarily conducted in yen. Even though Renesas’ automotive semiconductor revenue fell 12 percent, when measured in dollars, the revenue actually grew by about 1 percent in yen. Renesas’ strength continues to be its MCU solutions, where the company is still the leading supplier globally.
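The currency effect described above is easy to work through: if revenue fell 12 percent in dollars but grew about 1 percent in yen, the implied strengthening of the dollar against the yen is roughly 15 percent over the year:

```python
# Exchange-rate move implied by the same revenue measured in two currencies.

def implied_fx_change(usd_growth, local_growth):
    """Appreciation of USD vs. the local currency implied by the two growth rates."""
    return (1 + local_growth) / (1 + usd_growth) - 1

change = implied_fx_change(-0.12, 0.01)
print(f"implied USD appreciation vs. yen: {change:.1%}")
```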

STMicroelectronics’ automotive revenue declined 2 percent year over year; however, a large part of the decline can be attributed to the euro’s lower exchange rate against the U.S. dollar, which dropped 20 percent in 2015. STMicroelectronics’ broad-based portfolio and its presence in every growing automotive domain of the market helped the company maintain its revenue as well as it did. Apart from securing multiple design wins with American and European automotive manufacturers, the company is also strengthening its relationships with Chinese auto manufacturers. Radio and navigation solutions from STMicroelectronics were installed in numerous new vehicle models in 2015.

Texas Instruments has thrived in the automotive semiconductor market for the fourth consecutive year. Year-over-year revenue increased by 16.6 percent in 2015. The company’s success story is not based on any one particular vehicle domain. In fact, although all domains enjoyed double-digit increases, infotainment, ADAS and hybrid-electric vehicles were the primary drivers of growth.


Other suppliers making inroads in automotive

After the acquisition of CSR, Qualcomm rose from its 42nd ranking in 2014 to become the 20th largest supplier of automotive semiconductors in 2015. Qualcomm has a strong presence in cellular baseband solutions, with its Snapdragon and Gobi processors; while CSR’s strength lies in wireless application ICs — especially for Bluetooth and Wi-Fi. Qualcomm is now the sixth largest supplier of semiconductors in the infotainment domain.

Moving from 83rd position in 2011 to 37th in 2015, NVIDIA has used its experience, and its valuable partnership with Audi, to gain momentum in the automotive market. The non-safety-critical infotainment domain was a logical stepping stone for carving out a position in automotive, but now the company is also moving toward ADAS and other safety applications. It has had particular success with its Tegra processors.

With the consolidation of Freescale into NXP, Osram entered the top-10 ranking of automotive suppliers for the first time in 2015. Osram is the global leader in automotive lighting and has enjoyed double-digit growth over the past three years, thanks to the increasing penetration of light-emitting diodes (LEDs) in new vehicles.

The Car, Scene Inside and Out: Q & A with FotoNation

Tuesday, May 24th, 2016

Looking at what’s moving autonomous vehicles closer to reality, who’s driving the car—and what’s in the back seat.

Sumat Mehra, senior vice president of marketing and business development at FotoNation, spoke recently with EECatalog about the news that FotoNation and Kyocera have partnered to develop vision solutions for automotive applications.

EECatalog: What are some of the technologies experiencing improvement as the autonomous and semi-autonomous vehicle market develops?

Sumat Mehra, FotoNation: Advanced camera systems, RADAR, LiDAR, and other types of sensors that have been made available for automotive applications have definitely improved dramatically. Image processing, object recognition, scene understanding, and machine learning in general with convolutional neural networks have also seen huge enhancements and impact. Other areas where the autonomous driving initiative is spurring advances include sensor fusion and the car-to-car communication infrastructure.

Figure 1: Sumat Mehra, senior vice president of marketing and business development at FotoNation, noted that the company has already been working on metrics applicable to the computer vision related areas of object detection and scene understanding.


EECatalog: What are three key things embedded designers working on automotive solutions for semi-autonomous and autonomous driving should anticipate?

Mehra, FotoNation: First, advances in machine learning. Second, heterogeneous computing: various general-purpose processors (CPUs, GPUs, DSPs) are all being made available for programming. Hardware developers as well as software engineers will use not only heterogeneous computing but also dedicated hardware accelerator blocks, such as our Image Processing Unit (IPU). The IPU enables super high performance at very low latency and with very low energy use. For example, the IPU makes it possible to run 4K video and process it for stabilization at extremely low power: 18 milliwatts for 4K 60-frames-per-second video.

Third, sensors have come down dramatically in price and offer improved signal-to-noise ratios, resolution, and distance-to-subject performance.

We’re also seeing improved middleware, APIs and SDKs. Plus a framework to provide reliable and portable tool kits to build solutions around, much like what happened in the gaming industry.

EECatalog: Will looking to the gaming industry help avoid some re-invention of the wheel?

Mehra, FotoNation: Certainly. The need for compute power is something gaming and the automotive industry have in common, and we’ve seen companies with a gaming pedigree making efforts [in the automotive sector]. And, thanks to the mobile industry, sensors have come down in price to the point where they can be used for much more than having a large sensor with very large optics in one’s pocket. Sensors can now be embedded into bumpers, into side view mirrors, into the front and back ends of cars to enable much more power and vision functionality.

EECatalog: Will the efforts to enable self-driving cars be similar to the space program in that some of the research and development will result in solutions for nonautomotive applications?

Mehra, FotoNation: Yes. For example, collision avoidance and scene understanding are two of the applications that are driving machine learning and advances toward automotive self-driving. These are problems similar to those that robotics and drone applications face. Drones need to avoid trees, power lines, buildings, etc. while in flight, and robots in motion need to be aware of their surroundings and avoid collisions.

And other areas, including factory automation, home automation, and surveillance, will gain from advances taking place in automotive. Medical robots that can help with mobility [are another] example of a market that will benefit from the forward strides of the automotive sector.

EECatalog: How has FotoNation’s experience added to the capabilities the company has today?

Mehra, FotoNation: FotoNation has evolved dramatically. We have been in existence for more than 15 years, and when we started, it was still the era of film cameras. The first problem we started tackling was, “How do you transfer pictures from a device onto a computer or vice versa?”

So we worked in the field of picture transfer protocols, of taking pictures on and off devices. Then, when we came into the digital still camera space through this avenue, we realized there were other imaging problems that needed to be addressed.

We solved problems such as red eye removal through computational imaging. Understanding the pixels, understanding the images, understanding what’s being looked at—and being able to correct for it—relates to advances in facial detection, because the most important thing you want to understand in a scene is a person.

Then, as cameras became available for automotive applications, new problems arose. We drew from all that we had been learning through our experience with the entire gamut of image processing. The metrics FotoNation has been working on in different areas have become applicable to such automotive challenges as object detection and scene understanding.

As pioneers in imaging, we don’t deliver just standard software or an algorithm for any one type of standard processor. We offer a hybrid architecture, where our IPU enables hardware acceleration that does specialized computer vision tasks like object recognition or video image stabilization at much higher performance and much lower power than a CPU. We deliver our IPU as a netlist that goes into a system on chip (SoC). Hybrid HW/SW architectures are important for applications such as automotive, where high performance and low power are both required. Performance is required for low latency, to make decisions as fast as possible; you cannot wait for a car moving at 60 miles per hour to take extra frames (at 16 to 33 milliseconds per frame) to decide whether it is going to hit something. Low power is required to avoid excessive thermal dissipation (heat), which is a serious problem for electronics, especially image sensors.
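The frame-time argument is easy to quantify. A quick back-of-the-envelope sketch (an illustration only, using the 60-mph speed and the 16-to-33-millisecond frame times quoted above):

```python
# How far a car travels while waiting for one camera frame.
MPH_TO_MPS = 1609.344 / 3600  # metres per second in one mile per hour

def metres_per_frame(speed_mph: float, frame_ms: float) -> float:
    """Distance covered during a single frame interval."""
    return speed_mph * MPH_TO_MPS * frame_ms / 1000.0

for frame_ms in (16, 33):
    print(f"{frame_ms} ms frame at 60 mph -> {metres_per_frame(60, frame_ms):.2f} m")
```

At 60 mph the car covers roughly 0.43 m during a 16-ms frame and 0.89 m during a 33-ms frame, so every extra frame of latency costs the better part of a metre of reaction distance.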

EECatalog: When it comes down to choosing FotoNation over another company with which to work, what reasons for selecting FotoNation are given to potential customers?

Mehra, FotoNation: One reason is experience. Our team has more than 1,000 man-years of experience in embedded imaging. Many other companies came from image processing for computers or desktops and then moved into embedded. We have lived and breathed embedded imaging, and the algorithms and solutions that we develop reflect that.

The scope of imaging that we cover ranges all the way from photons to pixels. Experience with the entire imaging subsystem is a key strength:  We understand how the optics, color filters, sensors, processors, software and hardware work independently and in conjunction with each other.

Another reason is that a high proportion of our engineers are PhDs who look at various ways of solving problems, refusing to be pigeonholed into addressing challenges in a single way. We have a strong legacy of technology innovation, demonstrated through our portfolio of close to 700 patents granted and applied for.

EECatalog: Had the press release about FotoNation’s working with Kyocera Corporation to develop vision solutions for automotive been longer, what additional information would you convey?

Mehra, FotoNation: More on our IPU, and how OEMs in the automotive area would definitely gain from the architectural advantages it delivers. The IPU is our greatest differentiator, and we would like our audience to understand more about it.

Another thing we would have liked to include is more on the importance of driver identification and biometrics. FotoNation acquired an iris-biometrics company, Smart Sensors, a year ago, and we will be [applying] those capabilities toward driver monitoring systems. The first step to autonomous vehicles is semi-autonomous vehicles, where drivers are sitting behind the steering wheel but not necessarily driving the car. And for that first step you need to know who the driver is. What biometrics bring you is that capability of understanding the driver.

Other metrics include being able to look at the driver to tell whether he is drowsy, paying attention or looking somewhere else—decision making becomes easier when [the vehicle] knows what is going on inside the car, not just outside the car—that is an area where FotoNation is very strong.

EECatalog: In a situation where the car is being shared, a vehicle might have to recognize, for example, “Is this one of the 10 drivers authorized to share the car?”

Mehra, FotoNation: Absolutely. The car’s behavior should be able to factor in whether it is a teenager or an adult getting behind the wheel, and then risk assessments can begin to happen. All of this additional driver information can assist in better driving and, ultimately, increased driver and pedestrian safety.

And we see [what’s ahead as] not just driver monitoring, but in-cabin monitoring through a 360-degree camera that is sitting inside the cockpit and able to see what is going on: Is there a dog in the back seat, which is about to jump into the front? Is there a child who is getting irate? All of those things can aid the whole experience and reduce the possibility of accidents.

Questions to Ask on the Journey to Autonomous Vehicles

Monday, May 23rd, 2016

In or out of Earth’s orbit, the journey will show similarities to the space race.

What comes first, connected vehicles or smart cities?

Smart cities will come first and play a critical role in the adoption of connected vehicles. The federal government is also investing in these programs in many ways. Through its Smart City Challenge program, USDOT has named seven finalist cities: San Francisco, Portland, Austin, Denver, Kansas City, Columbus and Pittsburgh.

Many of the remaining cities/states are finding alternate sources to fund their smart city deployments.

When we look at a cooperative safety initiative such as V2X (Vehicle-to-Everything), we see that it requires a majority of vehicles to support the same technology. Proliferation of V2X is going to take a few years to reach critical mass. This is why connected vehicles equipped with V2X are looking at smart city infrastructure as a way to demonstrate the use-case scenarios for the “Day One Applications.”

What are the chief pillars of the autonomous vehicle (AV) market?

The three core pillars of the autonomous vehicle market will be:

  • Number Crunching Systems
    • Development of multicore processors has helped fuel the AI engines that are needed for the autonomous vehicle market. More and more companies are using GPUs and multicore processors for their complex algorithms. It is estimated that these systems process 1GB of data every second.
  • ADAS Sensors
    • The cost/performance ratio for ADAS sensors like lidars, radars and cameras has improved significantly over the past couple of years.  All of this will reduce the total cost of the core systems needed for autonomous vehicle systems, making the technology more mainstream.
  • Connectivity and Security
    • Connectivity will play a key role for such systems. Autonomous vehicles depend heavily on information from external sources like the cloud, other vehicles and infrastructure. These systems need to validate their sources and build a secure firewall to protect their information.
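To make “validate their sources” concrete: every inbound message must carry something the receiver can check before acting on it. Production V2X security uses IEEE 1609.2 certificate-based signatures; the sketch below substitutes a shared-secret HMAC purely to illustrate the accept/reject step, and the key and message contents are hypothetical.

```python
import hashlib
import hmac

# Simplified source validation with a shared-secret HMAC. Real V2X stacks
# use IEEE 1609.2 certificates and ECDSA signatures, not shared secrets.
SHARED_KEY = b"demo-key"  # hypothetical; never hard-code keys in production

def tag_message(payload: bytes) -> bytes:
    """Compute the authentication tag the sender attaches to a message."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def accept(payload: bytes, tag: bytes) -> bool:
    """Accept a message only if its tag verifies (constant-time compare)."""
    return hmac.compare_digest(tag_message(payload), tag)

msg = b'{"speed": 21.5, "lane": 2}'
assert accept(msg, tag_message(msg))          # genuine message passes
assert not accept(b'{"speed": 99.9}', tag_message(msg))  # forgery rejected
```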

The total BOM for a complete system will be around $5,000 within the next five years, and the system will add $20,000 or less to the vehicle’s sticker price. For a relatively small increase, consumers will get numerous benefits, ranging from enhanced safety to stress-free driving. This is one of the reasons why companies like Cruise were acquired at such huge valuations.

What three key events should embedded designers working on automotive solutions for semi-autonomous and autonomous driving anticipate?

  • Sensor Fusion
    • Standards will need to be developed to allow free integration of ADAS sensors, connecting all the various ADAS applications and supporting data sharing between these sensors.
  • Advances in Parallel Computing Inside Automotive Electronics
    • ECU systems inside cars will eventually be replaced with complex parallel-computing ADAS platforms. The artificial intelligence engines inside these platforms need to take advantage of parallel computing to process gigabytes of data per second. Real-time systems that can complete the decision-making process in a split second will make all the difference.
  • Redundancy
    • Finally, the industry needs to create a redundant fault tolerant architecture. When talking about autonomous vehicles, the systems that enable autonomous driving need to have redundancy to ensure the system is always operating as designed.

How will the push to create self-driving cars (similar to what happened in the space race) result in useful technology for other areas?

The drone/surveillance video market will benefit from the push to create self-driving technology. Drones have similar characteristics to self-driving cars, just on a much smaller scale. The complexities around drone airspace management will definitely need some industry rules and support. This market will benefit from the advances and rule-making experience leveraged from self-driving cars.

What was the role of USDOT pilots and other research for enabling the autonomous vehicle market?

The role of the USDOT pilots has been predominantly focused on connected vehicles; not much has happened yet with autonomous vehicles. Deploying connected-vehicle infrastructure will help determine how useful it is in improving the robustness of the data vehicles receive. This infrastructure for connected vehicles will pave the way for autonomous vehicles. Roadside infrastructure will also play a role in monitoring rogue vehicles.

USDOT is also focusing on creating regulation and policies for autonomous vehicle deployments. Several test tracks around the United States (in California, Michigan and Florida) have been funded by the USDOT. These proving grounds are set up with miles of paved roads that simulate an urban driving environment.

Many automakers have set 2020 as the goal for automated-driving technology in production models. Pilots and research by USDOT represent a huge reduction in risk for the automotive OEMs.

What else should embedded designers keep in mind when the topic is autonomous vehicles?

  • 100 Million Lines of Code
    • Connected vehicle technology is among the most complex systems mankind has built. It takes about 100 million lines of code to build such a system, more than a space shuttle, an operating system like the Linux kernel, or a smartphone. We recommend that embedded designers rely on well-tested, pre-defined middleware blocks to accelerate the design process.
  • FOTA and SOTA Updates
    • We also recommend that embedded designers build systems that rely heavily on firmware-over-the-air (FOTA) and software-over-the-air (SOTA) updates. Cars are going to follow the same trend as smartphones, which require frequent software updates. Tesla has set a great example of this process with its updates and has said that its vehicles will constantly improve over time.
  • Aftermarket Systems as a Way to Introduce New Capabilities
    • Finally, embedded designers need to look at aftermarket systems as a way to introduce semi-autonomous features and determine the feasibility and acceptance of these building blocks before they become part of the mainstream.
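At minimum, a FOTA/SOTA client must refuse to apply an image that does not match its manifest. The sketch below shows only that integrity-check step, using a plain SHA-256 digest; a production updater would verify an asymmetric signature over a signed manifest, and the payload and digest here are hypothetical.

```python
import hashlib

def verify_update(image: bytes, expected_sha256: str) -> bool:
    """Accept a FOTA/SOTA image only if its digest matches the manifest entry."""
    return hashlib.sha256(image).hexdigest() == expected_sha256

# Hypothetical update payload and the digest its manifest would carry.
payload = b"new-ecu-firmware"
manifest_digest = hashlib.sha256(payload).hexdigest()

assert verify_update(payload, manifest_digest)              # intact image
assert not verify_update(payload + b"tampered", manifest_digest)  # rejected
```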

Ravi Puvvala is CEO of Savari. With 20+ years of experience in the telecommunications industry, including leading positions at Nokia and Qualcomm Atheros, Puvvala is the founder of Savari and a visionary of the future of mobility. He serves as an advisory member to transportation institutes and government bodies.

The Rise of Ethernet as FlexRay Changes Lanes

Friday, May 20th, 2016

There are five popular protocols for in-vehicle networking. Caroline Hayes examines the structure and merits of each.

Today’s vehicles use a range of technologies, systems and components to make each journey a safe, comfortable, and enjoyable experience. From infotainment systems to keep the driver informed and passengers entertained, to Advanced Driver Assistance Systems (ADAS) to keep road users safe, networked systems communicate within the vehicle. Vehicle systems such as engine control, anti-lock braking and battery management, air bags and immobilizers are integrated into the vehicle’s systems. In the driver cockpit, there are instrument clusters and drowsy-driver detection systems, as well as ADAS back-up cameras, automatic parking and automatic braking systems. For convenience, drivers are used to keyless entry, mirror and window control as well as interior lighting, all controlled via an in-vehicle network. All rely on a connected car and in-vehicle communication networks.

There are five in-vehicle network standards in use today: Local Interconnect Network (LIN), Controller Area Network (CAN), Ethernet, Media Oriented Systems Transport (MOST) and FlexRay.

Evolving Standards
LIN targets control functions within a vehicle. It is a simple, standard UART-based interface that allows sensors and actuators to be implemented, and components such as lighting and cooling fans to be easily replaced. The single-wire serial communications system operates at 19.2-kbit/s to control intelligent sensors and switches, in windows, for example.
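LIN’s simplicity shows in its frame protection, which fits in a few lines of code. The sketch below (a stdlib Python illustration, not production code) computes the protected identifier, which packs two parity bits alongside the 6-bit frame ID, and the inverted sum-with-carry checksum; passing the PID yields the enhanced LIN 2.x checksum, while omitting it gives the classic form.

```python
def lin_pid(frame_id: int) -> int:
    """Protected identifier: 6-bit frame ID plus two parity bits."""
    assert 0 <= frame_id <= 0x3F
    b = [(frame_id >> i) & 1 for i in range(6)]
    p0 = b[0] ^ b[1] ^ b[2] ^ b[4]        # even parity bit
    p1 = 1 - (b[1] ^ b[3] ^ b[4] ^ b[5])  # odd (inverted) parity bit
    return frame_id | (p0 << 6) | (p1 << 7)

def lin_checksum(data: bytes, pid: int = 0) -> int:
    """Inverted sum-with-carry; pass the PID for the enhanced LIN 2.x form."""
    total = pid
    for byte in data:
        total += byte
        if total > 0xFF:
            total -= 0xFF  # fold the carry back into the sum
    return (~total) & 0xFF
```

As a sanity check, the diagnostic frame ID 0x3C has both parity bits clear, so its PID is also 0x3C.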

Figure 1: Microchip supports all automotive network protocols with devices, development tools and ecosystem for vehicle networking.


This data transfer rate is slower than CAN’s 1-Mbit/s (maximum) operation. CAN is used for high-performance embedded applications. An evolution of CAN is CAN FD (Flexible Data rate), initiated in 2011 to meet increasing bandwidth needs. It operates at 2-Mbit/s, increasing to 5-Mbit/s when used point-to-point for software downloads. CAN’s higher data rate calls for a two-wire, twisted-pair cable structure to accommodate a differential signal.

As well as boosting transmission rates, CAN FD extends the data field from 8 bytes to 64 bytes. Because only one node transmits during the data phase, the bit rate can be increased there, as the nodes no longer need to stay synchronized for arbitration.
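One wrinkle in the 8-byte-to-64-byte extension is that the data length code (DLC) stays four bits wide, so payloads above 8 bytes snap to a handful of fixed sizes and are padded. A short illustrative sketch, using the DLC-to-length table defined in ISO 11898-1:

```python
# CAN FD frame payload sizes for DLC codes 0-15 (ISO 11898-1).
FD_SIZES = [0, 1, 2, 3, 4, 5, 6, 7, 8, 12, 16, 20, 24, 32, 48, 64]

def fd_dlc(payload_len: int):
    """Return the smallest DLC that fits the payload, plus the padded size."""
    for dlc, size in enumerate(FD_SIZES):
        if size >= payload_len:
            return dlc, size
    raise ValueError("CAN FD carries at most 64 data bytes")

print(fd_dlc(8))   # (8, 8)  - still classic-CAN territory
print(fd_dlc(9))   # (9, 12) - three bytes of padding
print(fd_dlc(64))  # (15, 64)
```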

LIN debuted at the same time as vehicles saw more sensors and actuators arrive. At this juncture, point-to-point wiring became too heavy, and CAN became too expensive. Summarizing LIN, CAN and CAN FD, Johann Stelzer, Senior Marketing Manager for Automotive Information Systems (AIS), Automotive Product Group, Microchip, says: “CAN and CAN FD have a broadcast quality. Any node can be the master, whereas LIN uses master-slave communication.”

K2L’s Matthias Karcher: CAN FD’s higher payload can add security to the network.


The higher bandwidth of CAN FD allows for security features to be added. “The larger payload can be used to transfer keys with multiple bytes as well as open up secure communications between two devices,” says Matthias Karcher, Senior Manager AIS Marketing Group, at K2L. The Microchip subsidiary provides development tools for automotive networks.

CAN FD’s ability to use an existing wiring harness to transfer more data from one electronic control unit to another, using a backbone or a diagnostic interface, is compelling, says Stelzer. It enables faster download of driver assistance or infotainment control software, for example, making it attractive to carmakers.

Microchip’s Johann Stelzer: Ethernet will evolve from diagnostics to become a communications backbone.


Ethernet as Communications Backbone

Ethernet uses packet data, but at the moment its automotive use is restricted to diagnostics and software downloads. It acts as a bridge network; while flexible, it is also complex, laments Stelzer. As in-vehicle networks grow, high-speed switching increases, adding complexity: it requires a high-powered microcontroller or microprocessor, plus validation and debugging that can add to development time.

In the future, asserts Stelzer, Ethernet will be used as the backbone communications between domains, such as safety, power and control, in the vehicle. When connected via a backbone it will be able to exchange software and data quickly, at up to 100-Mbit/s, or 100 times faster than CAN and 50 times faster than CAN FD.

At present, automotive IEEE 802.3 Ethernet operates at 100BaseTX, the predominant Fast Ethernet speed. The next stage is 100BaseT1, which is also 100-Mbit/s Ethernet but runs over a single twisted wire pair. The implementation of Ethernet 100BaseT1 will be big, says Stelzer. “This represents a big jump in bandwidth,” he points out, “with less utilization overhead.” IEEE 802.3bw, finalized in 2015, delivers 100-Mbit/s over a single twisted pair to reduce wiring, promoting the trend of deploying Ethernet in vehicles.
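Those bandwidth ratios translate directly into software-download times. A rough sketch that ignores protocol overhead and assumes an illustrative 32-MB control-software image:

```python
def transfer_seconds(image_mbytes: float, link_mbit_s: float) -> float:
    """Raw line-rate transfer time, ignoring framing and protocol overhead."""
    return image_mbytes * 8 / link_mbit_s

image = 32  # MB; illustrative image size, not from any specific ECU
for name, rate in [("CAN", 1), ("CAN FD", 2), ("100BASE-T1 Ethernet", 100)]:
    print(f"{name:20s} {transfer_seconds(image, rate):7.1f} s")
```

Even at line rate, the 32-MB image takes over four minutes on CAN and more than two on CAN FD, versus under three seconds on 100-Mbit/s Ethernet, which is why faster downloads of driver-assistance and infotainment software keep coming up as the motivation.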

Figure 2: K2L offers the OptoLyzer MOCCA FD, a multi-bus user interface for CAN FD, CAN and LIN development.


Increased deployment will come about when the development tools are in place; developers will have to integrate tooling at each point-to-point node in the network. “[The industry] will need good solutions,” he says, “to avoid overhead.” K2L offers evaluation boards, application notes, software, Integrated Design Environment (IDE) support and development tools for standard Ethernet in vehicles. The company will announce the availability of support for standard Ethernet T1 next year.

MOST for Media

MOST is a high-speed network predominantly used in vehicle infotainment systems. It addresses all seven layers of the Open Systems Interconnection (OSI) model for data communications: not just the physical and data link layers, but also system services and applications.

The network is typically a ring structure and can include up to 64 devices. Total available bandwidth for synchronous data transmission and asynchronous data transmission (packet data) is around 23-MBaud.

MOST is flexible, with devices able to be added or removed. Any node can become the network’s timing master, controlling the timing of transmission, although adding parameters can add to complexity. One solution, says Karcher, is for a customer to use a Linux OS and driver to encapsulate MOST for the application layer. This allows the customer to concentrate on designing differentiation into the product. K2L provides software drivers and libraries for MOST, as well as reference designs for analog front-ends, demonstration kits and evaluation boards. The level of hardware and software support, says Karcher, allows developers to focus on the application. Hardware can connect to MOST and also to CAN and LIN, he continues, adding that tools can connect and safeguard both system and application, reducing complexity and time-to-market.

The FlexRay Consortium, which was disbanded in 2009, developed FlexRay for on-board computing. There have not been any new developments in FlexRay, notes Karcher, who believes its use is limited to safety applications. Although K2L supplies tools to test and simulate FlexRay, “in the long run, it is hard to see a future for FlexRay,” says Karcher, citing the fact that there are no new designs or applications.

Caroline Hayes has been a journalist covering the electronics sector for over 20 years. She has worked on many titles, most recently the pan-European magazine EPN.

Vehicle-to-Everything (V2X) Technology Will Be a Literal Life Saver – But What Is It?

Thursday, May 19th, 2016

Increased safety and smarter energy are among the expected results as V2X gets underway: Here’s a look at its progress.

A massive consumer-focused industry like automobiles is up close and personal with people—so up close that safety and driver protection from harm are top of mind for manufacturers. Although human error is the prevailing cause of collisions, creators of technologies used in vehicles have an obvious vested interest in helping lower the distressing statistics. After all, pedestrian deaths rose by 3.1 percent in 2014 according to the National Highway Traffic Safety Administration’s Fatality Analysis Reporting System (FARS). In that year, 726 cyclists and 4,884 pedestrians were killed in motor vehicle crashes. And this damage to innocent bystanders doesn’t include the growing death rate of drivers and their passengers.

Figure 1: Benefits to driver and pedestrian safety, as well as increased power efficiency, are the aims of V2X. (Courtesy Movimento)


Distracted driving accounted for 10 percent of all crash fatalities, killing 3,179 people in 2014 while drowsy driving accounted for 2.6 percent of all crash fatalities, killing 846 people in 2014.  The road carnage is hardly limited to the United States. The International Organization for Road Accident Prevention noted a few years ago that 1.3 million road deaths occur worldwide annually and more than 50 million people are seriously injured. There are 3,500 deaths a day or 150 every hour and nearly three people get killed on the road every minute.

A Perplexing Stew

Thus it’s about time for increasingly sophisticated technology to step in and help protect distracted drivers from themselves. The centerpiece of what’s coming is so-called Vehicle to Everything (V2X) technology. Once it’s deployed, the advantages of V2X are extensive, alerting drivers to road hazards, the approach of emergency vehicles, pedestrians or cyclists, changing lights, traffic jams and more. In fact, the advantages extend even beyond the freeways and into residential streets where V2X technology helps improve power consumption and safety.

About the only problem with V2X is that it’s emerging as a perplexing stew of acronyms (V2V, V2I, V2D, V2H, V2G, V2P) that require some explanation—and the technology, while important, isn’t universally quite here yet.  But the significance of this technology is undeniable. And getting proficient in understanding V2X is valuable in tracking future vehicle features that will link cars to the world around them and make driving safer in the process.

Here’s an overview of the elements of V2X and predictions for when it will hit the roads, from the soonest to appear to the last.

Vehicle to Vehicle (V2V)

Vehicle to Vehicle (V2V) communication is a system that enables cars to talk to each other via Dedicated Short-Range Communication (DSRC), with the primary goals of wirelessly sharing speed and position data, and of using power productively, so that drivers can be warned to take immediate action to avoid a collision. Also termed car-to-car communication, the technology makes driving much safer by alerting one vehicle to the presence of others. An embedded or aftermarket V2V module allows a vehicle to broadcast its position, speed, steering wheel position, brake status and other related data over DSRC to other vehicles in close proximity.
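The data a V2V module broadcasts can be pictured as a small, frequently refreshed status record. The sketch below is purely illustrative: the field names are hypothetical, JSON stands in for the real ASN.1 encoding, and an actual deployment would broadcast the SAE J2735 Basic Safety Message over DSRC roughly ten times per second.

```python
import json
import time
from dataclasses import asdict, dataclass

@dataclass
class V2VStatus:
    # Illustrative fields only; real V2V uses the SAE J2735
    # Basic Safety Message, not this hypothetical record.
    lat: float
    lon: float
    speed_mps: float
    heading_deg: float
    brake_active: bool
    timestamp: float

def encode(status: V2VStatus) -> bytes:
    """Serialize a status record for broadcast (JSON as a stand-in)."""
    return json.dumps(asdict(status)).encode()

msg = V2VStatus(37.77, -122.42, 26.8, 90.0, False, time.time())
frame = encode(msg)
assert json.loads(frame)["speed_mps"] == 26.8  # nearby cars can decode it
```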

Clearly, V2V is expected to reduce vehicle collisions and crashes. It’s likely that this technology will enable multiple levels of autonomy, delivering assisted driver services like collision warnings but with the ultimate responsibility still belonging to the driver. V2V relies on DSRC, which is still in its infancy because the need remains to address security, mutual authentication and dynamic vehicle issues.

V2V is already making its way into new cars. For example, Toyota developed a communicating radar cruise control that uses V2V to make it easier for preceding and following vehicles to keep a safe distance apart. This is an element in a new “intelligent transportation system” that the company said was initially available at the end of 2015 on a few models in Japan. Meanwhile, 16 European vehicle manufacturers and related vendors launched the Car 2 Car Communication Consortium, which intends to speed time to market for V2V and V2I solutions and to ensure that products are interoperable. Plans call for “earliest possible” deployment. 

One key issue with V2V is that to be most effective, it should reside in all cars on the road. Nevertheless, this technology has to start somewhere, so Mercedes-Benz announced that its 2017 Mercedes E Class would be equipped with V2V, one of the first such solutions to go into production.

Vehicle to Device (V2D)

Vehicle to Device (V2D) communication is a system that links cars to many external receiving devices and will be particularly welcomed by two-wheeled commuters. It enables cars to communicate via DSRC with a V2D device on the cycle, sending alerts about traffic ahead. Given that biking to work is the fastest-growing mode of transportation, increasing 60 percent in the past decade, V2D can potentially help prevent accidents.

Although bicycle commuting is healthier than sitting in a car, issues like dark streets in the evening and heavy traffic flow make this mode problematic when it comes to accident potential.  Although less healthful, traveling by motorcycle and other two-wheel devices also has an element of risk because larger vehicles on the road tend to dominate.

V2D is tied to V2V because both depend on DSRC, so V2D should begin to pop up after V2V rolls off the assembly line in 2017 and later. It will likely appear as aftermarket products for bicycles, motorcycles and other such vehicles starting in 2018. Spurring the creation of V2D products have been quite a few crowd-funded efforts, as well as government grants like the U.S. Department of Transportation’s (DOT) Smart City Challenge, which pledges up to $40 million in funding to the winner for creating the nation’s most tech-savvy municipal transportation network. Finalists (Denver, Austin, Columbus, Kansas City, Pittsburgh, Portland, San Francisco) have already been chosen and are busy producing proposals.

DOT has other initiatives aimed at encouraging the creation of various V2X technologies. V2D is one of the application areas in DOT’s IntelliDrive program, a joint public/private effort to enhance safety and provide traffic management and traveler information. The goal is the development of applications in which warnings are transmitted to various devices such as cell phones or traffic control devices.

Vehicle to Pedestrian (V2P)

Vehicle to Pedestrian (V2P) communication is a system that links cars and pedestrians and will particularly benefit the elderly, schoolchildren and physically challenged persons. V2P establishes a communications channel between pedestrians' smartphones and vehicles and acts as an advisory to avoid imminent collisions.

The concept is simple: V2P will reduce road accidents by alerting pedestrians crossing the road of approaching vehicles and vice versa. It’s expected to become a smartphone feature beginning in 2018 but, like V2D, requires the presence of DSRC capabilities in vehicles.  Ultimately, the DSRC version of V2P will be replaced by a higher-performance LTE version starting in 2020.

While there aren’t any V2P solutions currently available, this area is a hotbed of development, particularly when one includes the full gamut of possible technologies and includes multiple vehicle types such as public transit. Given the significant role that V2P can play in preventing damage to humans, the U.S. Department of Transportation maintains and updates a database of technologies in process. Of the current 86 V2P technologies listed, none are yet commercially available but a number are currently undergoing field tests.

A particularly fruitful approach to developing effective V2P products is a research partnership between telecom and automotive companies. For example, Honda R&D Americas and Qualcomm collaborated on a DSRC system that sends warnings to both a car’s heads-up display and a pedestrian’s device screen when there is a chance of colliding. Although the project won an award as an outstanding transportation system, there’s no word yet when this might appear commercially.

In another collaboration, Hitachi Automotive Systems teamed with Clarion, the Japan-based manufacturer of in-car infotainment products, navigation systems and more on a V2P solution that predicts pedestrian movements and rapidly calculates optimum speed patterns in real time. Undergoing field testing, this is another promising product to look for in the future.

Vehicle to Home (V2H)

Vehicle to Home (V2H) communication involves linkage between a vehicle and the owner’s domicile, sharing the task of providing energy.  During emergency or power outages, the vehicle’s battery can be used as a power source. Given the reality of severe weather and its effect on power supplies, this capability has been needed for a while, with disruptions in power after storms and other weather emergencies impacting many thousands of U.S. families annually.

V2H is a two-way street: the vehicle can power the home and vice versa, based on cost and demand for home energy. The car battery serves as energy storage, charging when energy is cheap or "green."

During power outages, the vehicle's battery can run domestic appliances, and when utility prices are high the home can draw on the vehicle instead of the grid. In areas with frequent outages, the battery can buffer energy to avoid flickering, and it can serve as an emergency survival kit.
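
As a rough sketch of the energy-arbitrage behavior described above, the decision rule might look like the following. All thresholds, names and the function itself are illustrative assumptions, not any shipping V2H product's logic:

```python
# Hypothetical sketch of the V2H charge/discharge decision described above.
# Thresholds and field names are illustrative assumptions, not a real product API.

def v2h_action(grid_up, price_per_kwh, battery_soc,
               cheap=0.10, expensive=0.25, reserve_soc=0.30):
    """Return 'discharge', 'charge', or 'idle' for a home-connected EV battery."""
    if not grid_up:
        # Outage: the pack backs up the home as long as a reserve remains.
        return "discharge" if battery_soc > reserve_soc else "idle"
    if price_per_kwh >= expensive and battery_soc > reserve_soc:
        return "discharge"          # peak pricing: home draws from the car
    if price_per_kwh <= cheap and battery_soc < 1.0:
        return "charge"             # cheap/green energy: store it in the pack
    return "idle"

print(v2h_action(grid_up=False, price_per_kwh=0.0, battery_soc=0.8))  # discharge
```

In practice such a controller would also respect battery-health limits and utility signaling, but the cost-and-demand arbitration the article describes reduces to a rule of this shape.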

It’s expected that V2H will kick into higher gear in 2019, playing a significant role when the number of plug-in hybrid Electric Vehicle (PHEV) and Electric Vehicles (EVs) make up over 20% of the total new cars sold in the United States. But a few projects have been underway for a while, such as a Nissan V2H solution that was already tested widely in Japan and launched in 2012 as the “Leaf to Home” V2H Power Supply System. Relying on an EV power station unit from Nichicon, this was one of the first backup power supply systems using an EV’s large-capacity battery.

Other Japanese car manufacturers have dabbled in these systems, including Mitsubishi and Toyota. Mitsubishi announced in 2014 that its Outlander PHEV vehicle could be used to power homes—only in Japan so far. There are other approaches to utilizing an EV’s battery for home use, such as some currently available devices that can not only charge a battery, but also supply the stored electricity to the home. One example is the SEVD-VI cable from Sumitomo Electric.

Vehicle to Grid (V2G)

Vehicle to Grid (V2G) communication is a system in which EVs communicate with the power grid to return electricity to it or throttle the vehicle's charging rate. It will be an element in plug-in EVs and acts as a power-grid modulator, dynamically adjusting energy demand.

A benefit of V2G is that it helps stabilize grid load and serves as an alternative renewable power source. The system could determine the best time to charge car batteries and enable energy to flow in the opposite direction for short periods when the public grid needs power and the vehicle does not.

Given its key role in battery charging, this V2X technology is appearing soon in affordable EVs like the Tesla Model 3, which can now be pre-ordered. Companies such as Faraday Future, NextEV, Apple, Uber and Lyft are all planning to launch EVs between 2017 and 2020. V2G is an extremely relevant area because it makes it imperative for cities to start thinking and planning now about how they will support a large-scale EV society. Otherwise, energy utilities could be forced into drastic measures such as rationing energy per household.

Figure 2: V2X technology will be part of the Tesla Model 3. [Photo: Steve Jurvetson, CC BY 2.0, via Wikimedia Commons]


Other activity in the V2G area includes a partnership between Pacific Gas and Electric Company (PG&E) and BMW to test the ability of EV batteries to provide services to the grid. The automaker created a large energy storage unit made from re-utilized lithium-ion batteries while enlisting San Francisco Bay Area drivers of 100 BMW i3 cars to take part in what's called the ChargeForward program. This pilot study, now underway, gives qualifying i3 drivers up to $1,540 in charging incentives.

Another intriguing effort involves Nissan and Spain's largest electric utility, which collaborated on a mass-market V2G system that was initially demonstrated in Spain last year and is aimed at the European market. Like the BMW/PG&E program, it re-purposes EV batteries for stationary energy storage. Industry analysts peg the worldwide V2G market to surpass $190 million by 2022.

Vehicle to Infrastructure (V2I)

Vehicle to Infrastructure (V2I) communication will likely be the last V2X system to appear. It is the wireless exchange of critical safety and operational data between vehicles and roadway infrastructure such as traffic lights. V2I alerts drivers to upcoming red lights and helps prevent congestion by streamlining traffic and enabling drivers to maneuver away from heavy flow.

Despite the enormous impact this technology could have on driver safety, the required infrastructure investment is so massive that it will take time to implement. Some question whether DSRC-based V2I, with its questionable return on investment, will ever happen, but there is more hope for LTE-based V2I.

This approach might play a key role starting in 2020 and be rolling along by 2022. Nevertheless, promising V2I projects are already happening in countries where it's easier to conduct massive public initiatives, such as China. In a field test on public roads in Taicang, Jiangsu Province, developed by Tongji University and Denso Corporation, buses receive road-condition data and can avoid stopping at lights when it is safe to do so.

Another recent collaboration involves Siemens and Cohda Wireless to develop intelligent road signs and traffic lights in which critical safety and operational data is exchanged with equipped vehicles.  In the United States, DOT is highly involved in working with state and local transportation agencies along with researchers and private-sector stakeholders to develop and test V2I technologies through test beds and pilot deployments.

Communication is the next frontier of car technology, and it is the bedrock of all the V2X capabilities appearing in the future. And none too soon: according to the World Health Organization (WHO), traffic fatalities will continue to rise across the globe as vehicles become more prevalent, with WHO projecting a 67 percent increase through 2020. Having smarter, safer cars and communications systems for the drivers, pedestrians and cyclists who can be impacted by these vehicles could turn that trend around. Add the possibilities of flexible electricity storage and usage, and V2X becomes an even more promising technology.

Mahbubul Alam is CTO and CMO of Movimento Group, and a frequent author, speaker and multiple patent holder in the areas of the software-defined car and all things IoT. He was previously a strategist for Cisco's Internet-of-Things (IoT) and Machine-to-Machine (M2M) platforms. Read more from Mahbubul at

Transitioning Applications from CAN 2.0 to CAN FD

Friday, April 22nd, 2016

The CAN bus protocol is used in a wide variety of applications, including industrial, automotive, and medical. Approximately 1.5B CAN nodes are used each year. Designers of these applications benefit from the many advantages CAN offers, such as reliability, cost effectiveness, engineering expertise and the availability of tools and components. CAN FD builds on the existing benefits of CAN 2.0 technology, allowing designers to leverage CAN 2.0 expertise, tools, hardware and software while also taking advantage of CAN FD’s increased data rate and data field length.

This paper will explore some of the considerations associated with CAN system design and how designers can transition their applications from CAN 2.0 to CAN FD. These considerations relate to physical layer, controller and overall system topics. Application designers must begin with hardware that conforms to both physical layer and controller requirements. Solutions for CAN FD controllers will be discussed, highlighting external CAN FD controllers as an alternative to integrated CAN FD controllers. These external controllers allow designers more flexibility when choosing an MCU that best fits the application and can reduce the migration effort from CAN 2.0 to CAN FD.


Automotive manufacturers and suppliers face several challenges with today's CAN 2.0 networks. First, they are dealing with an increase in end-of-line (EOL) programming costs, driven by growing Electronic Control Unit (ECU) memory requirements. Second, the use of automotive electronics continues to expand, requiring more ECUs to support these new electronic applications; this either decreases the available bandwidth on existing CAN 2.0 networks or forces designers to introduce a new CAN 2.0 network into the system architecture. Last, as demand for cyber security grows, ECUs will require more memory and bus utilization will increase drastically.

CAN FD addresses some of these challenges by offering two significant enhancements over CAN 2.0. It increases the data rate in normal mode from 500 kb/s (typical) to 2 Mb/s, and in programming mode up to 5 Mb/s. In addition, it increases the data field from 8 to 64 data bytes. While these benefits offer the designer faster EOL programming and free up network bandwidth, there are development challenges associated with supporting new CAN FD applications. The following sections outline the major design changes and other considerations for designers transitioning their applications from CAN 2.0 to CAN FD.


Automotive system architectures utilize many different network technologies to support a wide range of safety, body and convenience, infotainment, and ADAS electronics within the automobile. Starting with the system gateway, CAN plays a major role in supporting many of these applications in today’s architectures.

CAN FD will continue to play a major role within future architectures. The key factor to supporting these architectures is enabling faster throughput at the gateway and branching it out into the subnetworks. Current CAN 2.0 gateways achieve ~37 s/MB transfer time based on a 500 kb/s (typical) data rate and an 8 byte data payload. Future CAN FD gateways are targeted to achieve ~1.9 s/MB based on a 5 Mb/s data rate and a 64 byte data payload.
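
A back-of-the-envelope model reproduces the ballpark of these figures. The frame-overhead and bit-stuffing numbers below are rough assumptions for illustration, not exact worst-case values from the standard:

```python
# Approximate gateway transfer times per megabyte for CAN 2.0 vs. CAN FD.
# Overhead and stuffing figures are rough assumptions for illustration.

def can20_time_per_mb(bitrate=500e3, payload=8):
    frame_bits = (47 + 8 * payload) * 1.2        # ~47 overhead bits + worst-case stuffing
    return (1_000_000 / payload) * frame_bits / bitrate

def canfd_time_per_mb(nominal=500e3, data_rate=5e6, payload=64):
    arb_bits = 30                                # arbitration/control phase at nominal rate
    data_bits = (8 * payload + 28) * 1.1         # payload + CRC + ~10% stuffing
    return (1_000_000 / payload) * (arb_bits / nominal + data_bits / data_rate)

print(round(can20_time_per_mb(), 1), round(canfd_time_per_mb(), 1))  # seconds per MB
```

With these assumptions the model lands in the low thirties of seconds per megabyte for CAN 2.0 and a couple of seconds for CAN FD, roughly consistent with the ~37 s/MB and ~1.9 s/MB targets quoted above.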

Today’s system architectures support up to five or more CAN 2.0 networks. CAN 2.0 networks typically run at 500 kb/s and not 1 Mb/s. The bandwidth on a CAN bus is limited by the propagation delay and by the bus topology. Future CAN FD architectures will utilize two types of networks: dedicated CAN FD networks and mixed CAN 2.0 and CAN FD networks.

In a dedicated CAN FD network, all CAN nodes on the network will be CAN FD capable. The advantage of this configuration is that the CAN FD protocol can always be used, and there will be minimal effect on the physical layer transceivers (i.e. no need for Partial Networking-like transceivers). The disadvantage of this approach is that the entire network will have to support CAN FD, making the change to CAN FD very significant and costly.

Some automotive manufacturers mix CAN 2.0 and CAN FD nodes in the same network. This is possible because CAN FD controllers support both CAN 2.0 and CAN FD protocols. One advantage of this configuration is that networks can be migrated to CAN FD node by node without requiring an entire network change. The disadvantage of this method is that physical layer transceivers will have to support a CAN FD filtering method on CAN 2.0 nodes to ensure they don't create any error frames during CAN FD communication. This adds cost and complexity to the system.


The CAN protocol is specified by the ISO 11898 standard, with ISO 11898-1 specifying the Data Link Layer. In 2014, an initiative began to include the CAN FD requirements in this specification. This year, the International Organization for Standardization (ISO) approved ISO 11898-1 as a Draft International Standard (DIS) without any votes against it. The final ISO 11898-1 standard is expected to be published in April 2016.

The ISO 11898-2 originally specified the requirements of the CAN 2.0 Physical Layer up to 1 Mb/s. The ISO 11898-5 is an extension of the ISO 11898-2 accommodating new low-power requirements during CAN 2.0 bus idle conditions. The ISO 11898-6 is an extension of the ISO 11898-2 and ISO 11898-5 specifying the Selective Wake-up (Partial Networking) functionality.

In 2014, an initiative to add CAN FD to the ISO 11898-2 and to combine it with ISO 11898-5, and ISO 11898-6 was also started. This year, the ISO 11898-2 successfully passed the Committee Draft Ballot. The Draft International Standard (DIS) version is currently under development, and submission is expected soon. The final ISO 11898-2 standard is expected to be published in July, 2017.


The general Layered Architecture according to the OSI Reference Model specified in the ISO 11898-1 is the same for both CAN 2.0 and CAN FD (shown in Figure 1). The differences within the OSI Reference Model between CAN 2.0 and CAN FD are in the Logical Link Control (LLC) and the Medium Access Control (MAC) sublayers of the Data Link Layer, and the Physical Coding Sublayer (PCS) and the Physical Medium Attachment (PMA) of the Physical Layer. The Medium Dependent Interface (MDI) of the Physical Layer is the same for CAN 2.0 and CAN FD.


Table 1 illustrates the difference in requirements between CAN 2.0 and CAN FD.




One of the primary differences between CAN 2.0 and CAN FD is in the MAC of the DLL, where the payload can be increased from 8 data bytes up to 64 data bytes in the data field of the CAN FD (see Figure 2). This increase in payload makes the CAN FD communication more efficient by reducing the protocol overhead. Messages that had to be split due to the 8 byte payload limit can be combined into one message. Additionally, security can be enhanced via the encryption of CAN FD messages as a result of the higher data rate and increased payload.
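
To see the overhead reduction concretely, compare the fraction of frame bits that carry payload. The overhead bit counts below are approximations (bit stuffing ignored), chosen only to illustrate the trend:

```python
# Rough illustration of protocol efficiency: fraction of frame bits that carry
# payload. Overhead figures are assumptions, not the spec's exact worst case.

def efficiency(payload_bytes, overhead_bits):
    data_bits = 8 * payload_bytes
    return data_bits / (data_bits + overhead_bits)

can20 = efficiency(8, 47)    # classical frame, 8-byte payload limit
canfd = efficiency(64, 61)   # CAN FD frame, 64-byte payload, larger CRC field
print(f"CAN 2.0: {can20:.0%}, CAN FD: {canfd:.0%}")
```

Under these assumptions, payload efficiency climbs from under 60% to around 90%, which is what makes combining previously split messages into one frame attractive.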


CAN FD switches the data rate during the data and CRC fields. The Control field of the CAN FD frame contains three new bits: the FDF bit distinguishes between CAN 2.0 and CAN FD frames, bit rate switching is initiated by setting the BRS bit, and the error state of the transmitter is indicated by the ESI bit.


The other main difference between CAN 2.0 and CAN FD is in the PCS of the Physical Layer, where the data rate increases from the typical 500 kb/s of CAN 2.0 to 2 Mb/s for nominal vehicle operating conditions, and up to 5 Mb/s for diagnostics or EOL programming.

The block diagram of a CAN FD transceiver is very similar to that of a CAN 2.0 transceiver. Figure 2 illustrates the main circuit blocks of a CAN FD transceiver. The CAN FD transceiver interfaces with the CAN FD controller via the TXD and RXD digital signals. When in Normal mode (STBY low), the bit stream from the CAN FD controller on TXD gets encoded to differential output voltages on the physical CAN bus signals (CAN_H and CAN_L). The RXD output pin of the CAN FD transceiver reflects the differential voltages on the CAN bus.

The TXD to RXD propagation delay of a CAN FD transceiver must not exceed 255 ns for both dominant and recessive transitions. Because the CAN FD transceiver is not a push-pull driver, there is some asymmetry between recessive and dominant TXD to RXD propagation delay. As a result, the recessive bit time on RXD tends to shorten. Figure 3 describes how the loop delay symmetry parameters are measured.



CAN FD transceivers are backwards compatible with CAN 2.0 transceivers, but the Data Link Layer of CAN FD is not compatible with CAN 2.0. To implement mixed operation of CAN 2.0 and CAN FD nodes on the same bus, the CAN 2.0 nodes need to be ideally passive (invisible to the network) during CAN FD communication, or error frames will be generated.

At least three options are available to make CAN 2.0 nodes tolerant to CAN FD: Partial Networking (PN), CAN FD Shield, and CAN FD Filter. Currently, only PN transceivers are available on the market. PN allows the CAN 2.0 controller to be disconnected from the bus during CAN FD communication. The PN transceiver will ignore all CAN FD messages by decoding the incoming CAN frames. The PN transceiver waits for a valid CAN 2.0 wake-up message with a specific ID before it restarts routing CAN 2.0 messages to the CAN 2.0 controller.
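
In spirit, the CAN FD filter option amounts to inspecting the FDF bit and suppressing CAN FD frames before the classical controller sees them. The toy model below (frames as bit lists, bit stuffing ignored, bit positions assumed for an 11-bit base ID) is a simplification for illustration only:

```python
# Simplified sketch of the "CAN FD filter" idea: suppress CAN FD frames so a
# CAN 2.0 controller never sees them. Frames are modeled as bit lists; the FDF
# position assumes SOF, 11-bit ID, RTR, IDE, FDF with no bit stuffing (assumption).

FDF_INDEX = 14  # bit position of FDF in a base-format frame under these assumptions

def is_can_fd_frame(bits):
    return bits[FDF_INDEX] == 1          # recessive FDF marks a CAN FD frame

def filter_for_classic_node(frames):
    """Pass only classical CAN 2.0 frames through to the CAN 2.0 controller."""
    return [f for f in frames if not is_can_fd_frame(f)]

classic = [0] + [1] * 11 + [0, 0, 0]     # FDF dominant  -> CAN 2.0 frame
fd      = [0] + [1] * 11 + [0, 0, 1]     # FDF recessive -> CAN FD frame
print(len(filter_for_classic_node([classic, fd])))  # 1
```

A real transceiver must of course do this on the wire in real time, including stuffing and error handling, which is why such parts add cost.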


The block diagram of a CAN FD controller looks very similar to that of a CAN 2.0 controller. Figure 4 illustrates the main blocks of a CAN FD controller. The CAN FD controller interfaces to the CAN FD transceiver using digital transmit and receive pins. The Bit Stream Processor (BSP) implements the CAN FD protocol: it transmits and receives CAN FD frames. The Transmit Handler prioritizes messages that are queued for transmission, and the Receive Handler manages received messages; only those messages that match the Acceptance Filters are stored in RX message objects or FIFOs.

The Memory Interface controls access to the RAM, where Message Objects are stored. The message RAM can be located in the system RAM of a microcontroller; it doesn't have to be dedicated to the CAN FD controller. Optionally, the acceptance filter configuration can also be stored in RAM.

The microcontroller uses a Register Interface to access the Special Function Registers (SFR) of the CAN FD controller. The SFR are used to configure and control the CAN FD controller. Interrupts notify the microcontroller about successfully transmitted or received messages. Received messages are time stamped; transmitted messages are optionally time stamped, and their IDs can be stored in a Transmit Event FIFO (TEF).


Table 2 illustrates the difference in requirements between a CAN 2.0 and a CAN FD controller. The following section will discuss the major changes in more detail.




During the arbitration phase, the data rate is limited by the CAN network propagation delay. In the data phase, only one transmitter remains, therefore, the bandwidth can be increased by switching the bit rate during the data phase. The transmitter always compares the intended transmitted bits with the actual bits on the bus. The propagation delay in the data phase can be longer than the bit time. Therefore, the bits are sampled at a Secondary Sample Point (SSP). Data Bit Time and SSP configuration require additional configuration registers.

In normal operation, practical CAN 2.0 networks achieve a bandwidth of 17 bytes/ms when using a bit rate of 500 kb/s, 8 bytes of payload and 50% bus utilization. During End Of Line (EOL) programming, 100% of the bus can be utilized, resulting in a bandwidth of 29 bytes/ms.

CAN FD improves the bandwidth up to a factor of four during normal operation. This can be achieved by increasing the data bit rate to 2 Mb/s and by increasing the payload to 32 bytes. Increasing the payload to 64 bytes and switching the data bit rate to 5 Mb/s results in a bandwidth gain of up to ten during EOL programming.
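
These gain factors can be sanity-checked with the same kind of rough frame model. The overhead figures are approximations and the function below is purely illustrative:

```python
# Approximate usable bandwidth in bytes/ms. Arbitration bits run at the nominal
# rate, data bits at the data rate; overhead counts are rough assumptions.

def bytes_per_ms(payload, arb_bits, data_bits, nominal, data_rate, utilization=1.0):
    frame_s = arb_bits / nominal + data_bits / data_rate
    return utilization * payload / frame_s / 1000

can20  = bytes_per_ms(8,  135, 0,   500e3, 1,   utilization=0.5)  # whole classical frame at 500 kb/s
fd_2m  = bytes_per_ms(32, 30,  315, 500e3, 2e6, utilization=0.5)  # 32-byte payload, 2 Mb/s data phase
fd_eol = bytes_per_ms(64, 30,  595, 500e3, 5e6)                   # 64-byte payload, 5 Mb/s, 100% bus
print(round(can20), round(fd_2m), round(fd_eol))
```

With these assumptions, the 2 Mb/s / 32-byte case comes out several times faster than CAN 2.0, and the 5 Mb/s / 64-byte EOL case roughly an order of magnitude faster, in line with the factors stated above.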

Increased bandwidth requires the CAN FD controller and the microcontroller to process the messages faster. This requires higher microcontroller and CAN FD controller clock speeds and FIFOs to buffer messages.


CAN 2.0 controllers often use an 8 MHz clock, while CAN FD controllers require a faster clock. The selection of the sample point within a CAN FD network is critical. It is recommended that all CAN FD nodes use the same sample point setting; clock frequencies of 20, 40, or 80 MHz are recommended. This allows shorter time quanta and, therefore, higher resolution for setting the sample point. Correctly switching the bit time is a technical challenge. Using the same time quanta resolution during Nominal and Data bit phase is also recommended. This requires more time quanta per bit during the Nominal bit phase as compared to CAN 2.0.
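
A worked example of the shared-time-quantum recommendation, assuming a 40 MHz controller clock and a prescaler of 1 (both assumptions for illustration):

```python
# With a 40 MHz clock and prescaler 1 (assumed), one time quantum is 25 ns in
# both phases, so nominal and data bit times differ only in quanta per bit.

CLOCK_HZ = 40e6

def quanta_per_bit(bitrate_hz):
    n = CLOCK_HZ / bitrate_hz
    assert n == int(n), "bit time must be a whole number of time quanta"
    return int(n)

# 500 kb/s nominal phase vs. 2 Mb/s data phase
print(quanta_per_bit(500e3), quanta_per_bit(2e6))  # 80 and 20 quanta per bit
```

The 80 quanta per nominal bit give a much finer grid for placing the sample point than the handful of quanta typical of an 8 MHz CAN 2.0 controller, which is exactly why the higher clock frequencies are recommended.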


Increasing the payload of CAN FD messages requires more RAM for message storage. Storing 32 message objects with ID and a payload of 8 bytes requires 640 bytes of RAM. Increasing the payload to 64 bytes requires 2432 bytes of RAM.
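
These RAM figures are consistent with a fixed per-object overhead of 12 bytes (an assumed figure covering ID, control bits and timestamp; actual controllers vary):

```python
# Reconstruction of the message-RAM figures above, assuming 12 bytes of
# per-object overhead (ID, control bits, timestamp) alongside the payload.

OVERHEAD_PER_OBJECT = 12  # bytes, an assumption for illustration

def message_ram(objects, payload):
    return objects * (OVERHEAD_PER_OBJECT + payload)

print(message_ram(32, 8), message_ram(32, 64))  # 640 and 2432 bytes
```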


In a CAN network, signals are mapped into meaningful Protocol Data Units (PDU), as shown in Figure 5. Usually one PDU is mapped into one CAN frame. The signals inside a PDU and the length of a PDU don’t change; they are static. The frames and signals are described in a message database.


Only one ECU can transmit a certain PDU (see Figure 6). Multiple ECUs can receive a PDU. Acceptance filtering is used to accept PDUs that are of interest to the ECU. Acceptance filtering is done in hardware and reduces the required message processing in the microcontroller. Filters can be set up to filter on the ID of the CAN frame and optionally on the first two data bytes of the CAN frame. Filters can point to different message objects inside the CAN controller.
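
Classic mask/filter acceptance filtering can be sketched in a few lines; the filter and mask values below are arbitrary examples:

```python
# Mask/filter acceptance filtering: a mask bit of 1 means "this ID bit must
# match the filter"; 0 means "don't care". Values are arbitrary examples.

def accepts(can_id, filt, mask):
    return (can_id & mask) == (filt & mask)

# Accept the 16 IDs 0x100-0x10F (the upper 7 of 11 ID bits must match).
FILTER, MASK = 0x100, 0x7F0
print(accepts(0x105, FILTER, MASK), accepts(0x205, FILTER, MASK))  # True False
```

Because this comparison is done in hardware, frames for other ECUs are dropped before they ever consume microcontroller cycles.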


The concept of static PDU frames can also be applied to CAN FD. In order to make CAN FD most efficient, a payload of 64 bytes should be used as much as possible, but the signal mapping gets even more complex for PDUs with 64 bytes.


Dynamic Multi-PDU Frames (M-PDU) try to maximize the efficiency of CAN FD by dynamically combining multiple PDUs into one frame (see Figure 7). PDUs are only transmitted when the data changes.


The message database of static PDUs can be re-used. Each PDU contains a header to distinguish between the different PDUs inside a frame. The header consists of the PDU ID and the byte length. M-PDUs are especially useful for CAN FD gateways. The gateway can collect multiple PDUs and send them out in one frame (see Figure 8). The system designer defines the rules for combining PDUs and for delaying an M-PDU before it gets transmitted.
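
A minimal sketch of the M-PDU packing scheme, assuming a one-byte PDU ID and a one-byte length header for brevity (AUTOSAR's actual header fields are larger):

```python
# Sketch of the M-PDU header scheme: each PDU inside a frame is prefixed with
# its PDU ID and byte length. One-byte header fields are an assumption made
# here for brevity; AUTOSAR uses wider fields.

def pack_mpdu(pdus):
    """pdus: list of (pdu_id, payload bytes). Returns one CAN FD data field."""
    frame = bytearray()
    for pdu_id, data in pdus:
        frame += bytes([pdu_id, len(data)]) + data
    assert len(frame) <= 64, "must fit a 64-byte CAN FD data field"
    return bytes(frame)

def unpack_mpdu(frame):
    pdus, i = [], 0
    while i < len(frame):
        pdu_id, length = frame[i], frame[i + 1]
        pdus.append((pdu_id, frame[i + 2:i + 2 + length]))
        i += 2 + length
    return pdus

frame = pack_mpdu([(0x11, b"\x01\x02"), (0x22, b"\x03")])
print(unpack_mpdu(frame))
```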


Since multiple PDUs are combined dynamically into one CAN FD frame, the classic message filtering concept can’t be used. All frames must be received and the PDUs have to be filtered by software, not by hardware. This will increase the demand for faster message processing. FIFOs with payloads up to 64 bytes will be required to buffer received messages and give the microcontroller time to process the messages. This will require even more RAM to store the received data.


A major challenge of CAN FD adoption and transition is the limited number of CAN FD controllers available today and in the near future. Some of the causes associated with this challenge are the following:

  • Updating the ISO 11898-1 specification is a long process
  • The ISO CRC fix delayed silicon component availability
  • Silicon suppliers are limiting the number of developments due to risk of ISO spec changing
  • Initial MCUs targeted for release are supporting high-end CAN FD applications

The causes described above result in the following challenges:

  • Limited number of MCUs with CAN FD available on the market
  • Lack of available components puts pressure on silicon suppliers, tier suppliers and OEMs to meet aggressive timelines
  • First MCUs with CAN FD released will be high performance and feature rich, leaving an MCU gap in many mid- to lower-end CAN FD applications

Replacing or updating the MCU is a major system change and will require the automotive supplier and manufacturer to redesign, revalidate and requalify the ECU. This results in a significant amount of time, resources and investment. In many cases, a replacement MCU with CAN FD may not be available. As a result, customers will benefit from using an external CAN FD controller. This allows ECU designers to enable CAN FD by adding only one external component while they continue to utilize the majority of their design.


External controllers that support CAN 2.0 applications are currently available, including the SJA1000 from NXP and the MCP2515 from Microchip. Both devices serve very well as external CAN 2.0 controllers and are widely used in automotive and industrial applications. These types of devices are typically used to add CAN 2.0 capability to an MCU or to add an additional CAN 2.0 controller to an existing MCU.

Many of the same considerations for using a CAN 2.0 controller apply to using an external CAN FD controller. It interfaces directly to the physical layer transceiver's transmit and receive pins. The controller acts as a CAN FD engine, processing CAN FD messages and relaying any relevant messages to the MCU. An external controller can interface to the MCU through a serial or parallel port. Figure 9 shows a typical application of an external CAN FD controller using an SPI interface.


Figure 10 is a block diagram of an external CAN FD controller with an SPI interface. Using an SPI interface decreases the number of pins required as compared to a parallel interface. The CAN FD controller contains the same blocks as the integrated CAN FD controller. When integrated into an MCU, the CAN FD peripheral can share the system RAM. The RAM inside an external CAN FD controller is dedicated.


The SPI interface accesses the SFR to control the CAN FD engine, to configure the CAN FD bit times and to set up the receive filters. The SPI interface also accesses the RAM to load transmit messages or to read received messages. The external CAN FD controller transmits and receives messages autonomously and interrupts the MCU only when a message is successfully transmitted or received. The SFRs are efficiently arranged to reduce the number of SPI transfers and to keep up with the higher CAN FD bandwidth. This allows the MCU to use a DMA to access larger SFR and RAM blocks via SPI. The external CAN FD controller integrates a clock generator to supply the CAN FD clock. Optionally, the clock can be provided to the MCU.

An external CAN FD controller has to meet the same requirements as an integrated CAN FD controller. The bandwidth requirements can be met by using an SPI with DMA and by increasing the SPI frequency. Calculations show that an external CAN FD controller utilizing an SPI frequency of 10 to 16 MHz can keep up with a 100% loaded CAN FD bus at data bit rates up to 8 Mb/s.
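
This claim can be roughly checked as follows, assuming an arbitration phase at 500 kb/s, ~10% bit stuffing, and a 12-byte per-message SPI overhead (all assumptions for illustration):

```python
# Rough check that a 10 MHz SPI can keep up with a fully loaded CAN FD bus.
# All overhead figures are approximations under the stated assumptions.

def min_frame_time_s(payload=64, arb_bits=30, nominal=500e3, data_rate=8e6):
    data_bits = (8 * payload + 28) * 1.1            # payload + CRC + ~10% stuffing
    return arb_bits / nominal + data_bits / data_rate

def spi_time_s(payload=64, spi_hz=10e6, overhead_bytes=12):
    # Time to move one message object (payload + ID/control overhead) over SPI
    return 8 * (payload + overhead_bytes) / spi_hz

print(spi_time_s() < min_frame_time_s())  # SPI drains a message before the next frame ends
```

Under these assumptions the SPI transfer of one message object takes roughly half the minimum on-bus frame time, so the external controller's buffers never fall behind, consistent with the calculation cited above.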


Applications transitioning from CAN 2.0 to CAN FD benefit from many enhancements, such as an increase in data rate and payload. However, application changes are required to take advantage of these enhancements. The transition affects all levels of the development process, from the tool supplier up to the end automotive manufacturer. Designers have already started to transition CAN FD into their automotive and industrial applications.

Automotive manufacturers in the US, Europe and Asia are planning to implement CAN FD as early as model year 2018 with a wider adoption expected in 2020. Automotive suppliers have started development to prepare for upcoming CAN FD programs and are facing aggressive timelines and design challenges. In many cases, cost-effective MCUs with CAN FD that are well-suited for their applications may not be available.

For these situations, using an external CAN FD controller can be a viable alternative. Using an external CAN FD controller will help to minimize development timelines and be more cost effective than using a high-end MCU with CAN FD.

Orlando Esparza

Microchip Technology Inc.

2355 W. Chandler Blvd.

Chandler, AZ 85244

+1 480-792-7363

Wilhelm Leichtfried

Microchip Technology Inc.

2355 W. Chandler Blvd.

Chandler, AZ 85244

+1 480-792-7572

Fernando Gonzalez

Microchip Technology Inc.

2355 W. Chandler Blvd.

Chandler, AZ 85244

+1 480-792-4578


[1] ISO/DIS 11898-1:2015(E), Road vehicles – Controller area network – Part 1: Data link layer and physical signaling

[2] ISO/CD 11898-2:2015-06-08, Road vehicles – Controller area network – Part 2: High-speed medium access unit

[3] AUTOSAR 4.x Specification
