
Improving Autonomous Driving Communication and Safety with Private Blockchains

Thursday, June 22nd, 2017

Here’s why Blockchain, a powerhouse database in finance and digital identification, has what it takes to become the backbone of automotive data communication—beginning with the autonomous car.

Blockchain technology isn’t just for Bitcoin: it’s driving into several other industries at breathtaking velocity. It’s now well established in financial markets and digital identification, with other major industries such as healthcare and insurance in fast pursuit. Emerging areas for Blockchain are also diverse, including energy, where micro-grid producers see Blockchain as a way to keep track of the energy they generate. Blockchain itself is evolving as well, with Blockchain 2.0 promising even more functionality for broader groups by introducing new applications.

Figure 1: Yes, the use of Blockchain technology for V2V and V2i communication could be even closer than it appears.

Blockchain can also be used throughout the automotive industry. Automotive applications range from revolutionizing the supply chain to authenticating ride sharing for a passenger and the vehicle owner. However, the clearest overall group of opportunities for automotive targets the critical functions autonomous vehicles perform when under their own control.

Communication Opportunities
One of the opportunities for Blockchain in the autonomous car deals with communication: Vehicle-to-Vehicle (V2V) communication as well as Vehicle-to-Infrastructure (V2i). Along with the other vehicle-based communications they are commonly grouped with, V2V and V2i are collectively referred to as Vehicle-to-Everything (V2X). Whether it’s V2V, V2i, or V2X more broadly, all require fast and secure transmission of data as well as indisputable records: of the data, the transmission itself, and the recipient(s).

In V2V, vehicles inform other vehicles within their communication community about myriad details in the surrounding environment. One example would be real-time information regarding the roads traveled, including themes such as traffic flow, construction zones, workers on the road, etc. This type of detailed information can empower other vehicles in the communication community. Cars and trucks can optimize their performance and take the shortest time or distance route, based on real-time data. V2V can also lend itself to more core vehicle functions such as gear optimization on a given incline, allowing trucks to minimize fuel consumption. In V2V, autonomous communication to other similar vehicles (peer-to-peer) employing a fast and secure method is critical to the intelligent car’s core functionality.

Meet the Automotive Information Broker

In V2i, a car can produce thousands of independent information packets every minute and push them to what could be hundreds of infrastructure receivers, from traffic lights to data aggregators; the car becomes an automotive information broker. The packets hold little value individually, but when assembled with those from other vehicles they have substantial value in determining everything from dynamic traffic control to component wear patterns for a given class of car in a given geography. In V2i, fast and indisputable records are the key to success.

By using Blockchain for V2X, an OEM gains speed and frequency of secure transmissions not available with the majority of today’s over-the-air (OTA) solutions. Today’s OTA solutions are specifically designed to perform file transfers, either a full or partial binary update, for major systems such as infotainment, telematics, and (in some cases) the vehicle’s ECUs.

While these updates are highly secure, they are designed for the specific purpose of software updates. In V2X, the data transferred to or from the vehicle typically consists of small packets intended either to inform the vehicle or to trigger an immediate action. A history or log is also advantageous as proof of action should an accident occur. Given the unique design of Blockchain, any record of any transmission can be validated for accuracy, giving the OEM or the vehicle owner indisputable records of truth. This validation method is not available to the automotive market today. Blockchain also addresses the V2X security issue of many senders and many receivers without leaving vehicles vulnerable to hacking, making it an ideal data record for small packets of information.
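
The indisputable-record property comes from hash chaining: each record commits to the one before it, so altering any stored packet breaks every subsequent hash. Here is a minimal Python sketch of the idea (illustrative only; not a production blockchain, and the packet fields are invented):

```python
import hashlib
import json

def record_hash(prev_hash: str, payload: dict) -> str:
    """Hash the previous record's hash together with this payload."""
    record = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return hashlib.sha256(record.encode()).hexdigest()

class V2XLog:
    """Append-only, hash-chained log of V2X packets."""
    def __init__(self):
        self.chain = [{"hash": "0" * 64, "payload": None}]  # genesis record

    def append(self, payload: dict):
        prev = self.chain[-1]["hash"]
        self.chain.append({"hash": record_hash(prev, payload), "payload": payload})

    def verify(self) -> bool:
        """Recompute every hash; any tampered record breaks the chain."""
        for i in range(1, len(self.chain)):
            expected = record_hash(self.chain[i - 1]["hash"], self.chain[i]["payload"])
            if self.chain[i]["hash"] != expected:
                return False
        return True

log = V2XLog()
log.append({"type": "V2V", "event": "construction_zone", "lane": 2})
log.append({"type": "V2i", "event": "signal_phase", "phase": "red"})
assert log.verify()                    # untampered chain validates
log.chain[1]["payload"]["lane"] = 3    # tamper with a recorded packet
assert not log.verify()                # tampering is detected
```

Because validation needs only the recorded payloads and hashes, any party holding a copy of the chain can independently confirm what was sent.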

Because Blockchain comprises data records—a database—using it will not necessarily be a design consideration for the communication transport system itself. Underlying transmission standards such as Wireless Access in Vehicular Environments (WAVE) in the U.S., based on the lower-level IEEE 802.11p standard, and ETSI ITS-G5 in Europe, also based on IEEE 802.11p, focus on defining the transport system. The Car-to-Car Communication Consortium, a nonprofit, industry-driven organization in Europe, has focused on standards for V2V and V2i. Blockchain, rather than affecting these standards, would operate within them.

Greg Bohl is Vice President of the U.S. Analytics Services organization for HARMAN Connected Services. His work with analytics began over 20 years ago at the Sabre Group and has continued through several companies, including multiple start-ups. Greg has worked globally with OEMs defining how machine learning and artificial intelligence can be used in the connected car. His publications range from numerical studies in clean technology to patents for predictive systems and methods used in the automotive industry. Greg earned a BS-IS and MBA from the University of Texas at Arlington.

Heterogeneous: Performance and Power Consumption Benefits

Wednesday, May 10th, 2017

Why multi-threaded, heterogeneous, and coherent CPU clusters are earning their place in the systems powering ADAS and autonomous vehicles, networking, drones, industrial automation, security, video analytics, and machine learning.

High-performance processors typically employ techniques such as deep, multi-issue pipelines, branch prediction, and out-of-order processing to maximize performance, but these do come at a cost; specifically, they impact power efficiency.

If some of these tasks can be parallelized, this impact could be mitigated by partitioning them across a number of efficient CPUs to deliver a high-performance, power-efficient solution. To accomplish this, CPU vendors have provided multicore and multi-cluster solutions, and operating system and application developers have designed their software to exploit these capabilities.

Similarly, application performance requirements can vary over time, so transferring the task to a more efficient CPU when possible improves power efficiency. For specialist computation tasks, dedicated accelerators offer excellent energy efficiency but can only be used for part of the time.

So, what should you be looking for when it comes to heterogeneous processors that deliver significant benefits in terms of performance and low power consumption? Let’s look at a few important considerations.

Even with out-of-order execution, CPUs aren’t fully utilized every cycle under typical workloads; they spend most of their time waiting for access to the memory system. However, when one portion of the program (known as a thread) is blocked, the hardware resources could potentially be used for another thread of execution. Multi-threading offers the ability to switch to a second thread when the first is blocked, increasing overall system throughput. Filling otherwise idle CPU cycles with useful work boosts performance; depending on the application, adding a second thread to a CPU typically adds 40 percent to overall performance for an additional silicon area cost of around 10 percent. Among licensable CPU IP, hardware multi-threading is a feature unique to Imagination’s MIPS CPUs.
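
The throughput benefit of a second hardware thread can be seen with a back-of-envelope model in which each thread is independently stalled on memory some fraction of the time. The 50 percent stall rate below is an assumed figure for illustration; the 40 percent gain and 10 percent area cost come from the text:

```python
# Utilization model: a thread is stalled a fraction `stall` of the time;
# the core does useful work whenever at least one thread is ready.
def utilization(stall: float, threads: int) -> float:
    return 1.0 - stall ** threads

single = utilization(0.5, 1)   # one thread, 50% of cycles stalled -> 0.50
dual = utilization(0.5, 2)     # second thread fills most stalled cycles -> 0.75
speedup = dual / single        # 1.5x throughput in this simple model

area_cost = 0.10               # ~10% extra silicon area (figure from the text)
print(f"throughput gain {speedup - 1.0:.0%} for {area_cost:.0%} area")
```

Real workloads interleave stalls less conveniently than this model assumes, which is why the typical measured gain is nearer 40 percent than the idealized 50.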

A Common View
To move a task from one processor to another requires each processor to share the same instruction set and the same view of system memory. This is accomplished through shared virtual memory (SVM). Any pointer in the program must continue to point to the same code or data and any dirty cache line in the initial processor’s cache must be visible to the subsequent processor.

Figure 1: Memory moves when transferring between clusters.

Figure 2: Smaller, faster memory movement when transferring within a cluster.

Cache Coherency
Cache coherency can be managed through software. This requires that the initial processor (CPU A) flush its cache to main memory before transferring to the subsequent processor (CPU B). CPU B then has to fetch the data and instructions back from main memory. This process can generate many memory accesses and is therefore time consuming and power hungry; this impact is magnified as the energy to access main memory is typically significantly higher than fetching from cache. To combat this, hardware cache coherency is vital, minimizing these power and performance costs. Hardware cache coherency tracks the location of these cache lines and ensures that the correct data is accessed by snooping the caches where necessary.
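The energy argument can be made concrete with rough per-access figures. The picojoule values below are assumed, order-of-magnitude numbers for illustration, not vendor data:

```python
# Assumed per-access energies (order of magnitude only).
E_DRAM = 100.0   # pJ per main-memory access
E_CACHE = 5.0    # pJ per cache/snoop access

dirty_lines = 256  # cache lines that travel with the migrating task

# Software coherency: CPU A flushes each line to DRAM, CPU B refetches it.
sw_energy = dirty_lines * (E_DRAM + E_DRAM)

# Hardware coherency: CPU B snoops the lines directly from CPU A's cache.
hw_energy = dirty_lines * E_CACHE

print(f"software: {sw_energy / 1e3:.1f} nJ, hardware: {hw_energy / 1e3:.1f} nJ")
```

Even with these coarse assumptions, the hardware-coherent transfer is tens of times cheaper, and it avoids the latency of the round trip through main memory as well.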

In many heterogeneous systems, the high-performance processors reside in one cluster, while the smaller, high-efficiency processors reside in another. Transferring a task between these different types of processors means that both the level 1 and level 2 caches of the new processor are cold. Warming them takes time and requires the previous cache hierarchy to remain active during the transition phase.

However, there is an alternative – the MIPS I6500 CPU. The I6500 supports a heterogeneous mix of external accelerators through an I/O Coherence Unit (IOCU) as well as different processor types within a cluster, allowing for a mix of high-performance, multi-threaded and power-optimized processors in the same cluster. Transferring a task from one type of processor to another is now much more efficient, as only the level 1 cache is cold, and the cost of snooping into the previous level 1 cache is much lower, so the transition time is much shorter.

Combining CPUs with Dedicated Accelerators
CPUs are general-purpose machines. Their flexibility enables them to tackle almost any task, but at the price of efficiency. Thanks to its optimizations, the PowerVR GPU can process large, highly parallel computational tasks with very high performance and good power efficiency, in exchange for some reduction in flexibility compared to CPUs, bolstered by a well-supported software development ecosystem with APIs such as OpenCL and OpenVX.

The specialization provided by dedicated hardware accelerators offers a combination of performance with power efficiency that is significantly better than a CPU, but with far less flexibility.

However, accelerators are best applied to operations that occur frequently, to maximize the potential performance and power-efficiency gains. Specialized computational elements such as those for audio and video processing, as well as the neural network processors used in machine learning, rely on similar mathematical operations.

Hardware acceleration can be coupled to the CPU by adding Single Instruction Multiple Data (SIMD) capabilities with floating point Arithmetic Logic Units (ALUs). However, while processing data through the SIMD unit, the CPU behaves as a Direct Memory Access (DMA) controller to move the data, and CPUs make very inefficient DMA controllers.

Conversely, a heterogeneous system essentially provides the best of both worlds. It contains some dedicated hardware accelerators that, coupled with a number of CPUs, offer the benefits of greater energy efficiency from dedicated hardware, while retaining much of the flexibility provided by CPUs.

These energy savings and performance boosts depend on the proportion of time the accelerator spends doing useful work. Work packages appropriate for the accelerator come in a wide range of sizes: typically a small number of large tasks and many smaller ones.

There is a cost in transferring the processing between a CPU and the accelerator, and this limits the size of the task that will save power or boost performance. For smaller tasks, the energy consumed and time taken to transfer the task exceeds the energy or time saved by using the accelerator.
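
That break-even point falls out of a simple model with an assumed fixed transfer cost and an assumed accelerator speedup (all constants below are illustrative, not measured values):

```python
# Offloading pays only when the work saved exceeds the fixed transfer cost.
T_TRANSFER = 50.0   # us, assumed fixed cost to hand a task to the accelerator
CPU_RATE = 1.0      # work units per us on the CPU (assumed)
ACC_RATE = 10.0     # work units per us on the accelerator (assumed 10x faster)

def offload_pays(work_units: float) -> bool:
    cpu_time = work_units / CPU_RATE
    acc_time = T_TRANSFER + work_units / ACC_RATE
    return acc_time < cpu_time

# Break-even: w = 50 + w/10  ->  w ~ 55.6 work units
assert not offload_pays(50)   # too small: transfer overhead dominates
assert offload_pays(100)      # large enough: the accelerator wins
```

Anything that shrinks `T_TRANSFER`, such as the coherent shared memory discussed next, lowers the break-even size and lets many more of those small tasks benefit from the accelerator.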

Data Transfer Cost
To reduce time and energy costs, a Shared Virtual Memory with hardware cache coherency—as found in the I6500 CPU—is ideal as it addresses much of the cost of transferring the task. This is because it eliminates the copying of data and the flushing of caches. There are other available techniques to achieve even greater reductions.

The HSA Foundation has developed an environment to support the integration of heterogeneous processing elements in a system, extending beyond CPUs and GPUs. The HSA intermediate language, HSAIL, provides a common compilation path to heterogeneous Instruction Set Architectures (ISAs), greatly simplifying system software development. The HSA specification also defines User Mode Queues.

These queues enable tasks to be scheduled and signals to trigger tasks on other processing elements, allowing sequences of tasks to execute with very little overhead between them.
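
The idea can be sketched with an ordinary in-process queue: a producer enqueues tasks and a worker "processing element" consumes them and signals completion, with no per-task kernel call. This is illustrative only; real HSA User Mode Queues are shared-memory packet queues, not Python objects:

```python
import queue
import threading

tasks: "queue.Queue" = queue.Queue()
results = []
done = threading.Event()

def worker():
    """Stand-in for an accelerator draining its user-mode queue."""
    while True:
        task = tasks.get()
        if task is None:          # sentinel: end of the task sequence
            done.set()            # signal completion back to the producer
            return
        results.append(task * 2)  # stand-in for the accelerated kernel

threading.Thread(target=worker, daemon=True).start()
for t in (1, 2, 3):
    tasks.put(t)                  # enqueue work without any kernel dispatch call
tasks.put(None)
done.wait()
print(results)                    # [2, 4, 6]
```

The key property being modeled is that dispatch and completion signaling stay entirely in user space, so chaining many small tasks adds very little overhead per task.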

Beyond Limitations
Heterogeneous systems offer the opportunity to significantly increase system performance and reduce system power consumption, enabling systems to continue to scale beyond the limitations imposed by ever shrinking process geometries.

Multi-threaded, heterogeneous and coherent CPU clusters such as the MIPS I6500 have the ideal characteristics to sit at the heart of these systems. As such they are well placed to efficiently power the next generation of devices.

Tim Mace is Senior Manager, Business Development, MIPS Processors, Imagination Technologies.

Auto Makers See Opportunity with Embedded Handwriting

Monday, April 10th, 2017

Why handwriting technology in the automotive cockpit will continue to see dramatic growth.

Many people who don’t yet use handwriting technology on phones or tablets as computing input mechanisms may nonetheless already be familiar with digital handwriting technology. There’s a good chance they’ve been introduced to it in what might seem an unlikely place: Their cars.


Figure 1: In the new automotive ecosystem, embedded sensors and display units can communicate with mobile devices inside the car and gather all sorts of external information via the web.

Last year, higher-end auto manufacturers like Audi, Mercedes and Tesla began shipping cars with embedded handwriting technology for controlling GPS systems, entertainment systems and other dashboard controls. A 10-second video of handwriting input at work in an Acura gives a quick sense of the concept.

But all of this is just the beginning. According to Frost & Sullivan, the market for handwriting recognition (HWR) technology in cars will grow at a rate of more than 30 percent each year through 2020. “The industry is now moving towards controlling the entire infotainment with help from HWR,” the firm adds.

Embedded systems are tough to design: by nature, they’re constrained not only by limited storage, but also by limited memory and, typically, lower-performance CPUs compared with general-purpose computing devices. Even bound by these limitations, today’s digital handwriting technology has delivered remarkable accuracy and consistent benefits to the automotive industry. The most recent technology can accurately recognize characters, cursive words or portions of words even when they are superimposed on top of each other on the touchpad. A keyboard option incorporating smooth typing enables a true multimodal solution. Here are some reasons why handwriting technology in the automotive cockpit will continue to see dramatic growth:

  • Low driver distraction interfaces have evolved to require handwriting. One reason handwriting provides a more effective option for controlling GPS or entertainment systems compared to voice is that cars are often noisy, making it difficult to give instructions reliably. Another is that voice command systems are very difficult to edit, which makes it challenging to revise input or correct recognition errors. Finally, handwriting allows drivers to keep their attention safely focused on the road: today’s tech is designed for use when the driver isn’t looking at what he’s writing on the touchpad.
  • Multimodal systems are easy for the user to manipulate. Car manufacturers care about customer satisfaction, and drivers today demand a consistent user experience when inputting information, whether they’re doing it by hand, keyboard or voice. Drivers want multiple methods to input that information, depending on what’s most convenient and, more importantly, safe. Consistency is key: system responses to keyboard input need to be consistent with responses to handwritten input. No one wants to get a different dictionary response to a query if they’re writing by hand rather than keyboarding, for instance. A single multimodal system pre-emptively solves that potential problem.
  • Multimodal is great for the integrator. What’s great about multimodal design for systems integrators is that they only need to integrate with a single technology provider that handles multiple forms of input instead of integrating several different functional libraries and debugging any adverse interactions. This shortens the development time required for integration and lessens demands on memory resources and storage. Ultimately, integrating a multimodal interface means developing products that are often lower-cost, quicker time-to-market, and easier to test and validate. A big win all around.

Handwriting also wins points for safety and accuracy. The American Automobile Association (AAA) evaluated voice-based command systems, such as the iPhone’s Siri, and found that they significantly distract drivers. In a worst-case situation, drivers even at the low speed of 25 mph were distracted for up to 27 seconds, during which they travelled more than the length of three football fields.
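
The distance figure checks out with simple arithmetic:

```python
# 27 seconds of distraction at 25 mph, expressed in football fields.
MPH_TO_MS = 0.44704          # metres per second in one mile per hour
FIELD_M = 91.44              # a 100-yard football field, in metres

distance_m = 25 * MPH_TO_MS * 27
fields = distance_m / FIELD_M
print(f"{distance_m:.0f} m, about {fields:.1f} football fields")
```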

Handwriting adapts well to multiple situations, e.g., character input when driving and word input when stopped. Drivers can reach down and direct their cars’ GPS or entertainment systems in dozens of languages (as selected by the OEM), via either cursive or block characters that are easily recognized, even when written at a tilt of more than 30 degrees off a level line. The ability to recognize letters written at a significant tilt tolerates a great deal of human error, which in turn improves safety.

Figure 2: Embedded handwriting technology, complemented with voice and other multimodal input options, offers today’s drivers an effective way to enjoy more applications with complex features even as states increase regulation.

Handwriting, in sum, is a natural fit for inclusion in the auto market because it offers an intuitive method to control the automotive cockpit, assures minimum driver distraction, and provides a natural input method and low learning curve. Drivers of all ages can use it, and it offers high recognition accuracy of letters, numbers and gestures.

And the handwriting technology in cars can blossom into a full note-taking application for drivers to use when they’re stopped. This is ideal for road warrior executives who must constantly attend meetings, travel and share their notes.

Handwritten input, or ‘digital ink,’ can now be converted to text as reliably as input from the keyboard and mouse. Furthermore, diagrams such as mind maps, organizational charts, and flow diagrams can be fully converted to digital form in a manner that allows for changes and editing. Today’s technology lets you create content, edit and format it, create diagrams, input complex math equations, and easily incorporate the interpreted handwriting results into your digital document workflow.

A booming professional services market has emerged to support developers of embedded handwriting technology, too. Handwriting technology vendors offer everything from in-depth professional engineering services for use cases based on their SDK packages to complete turnkey subsystem design.

Handwriting technology is already embedded in millions of cars today. But the most tremendous growth for this market lies ahead in a wide range of embedded applications and IoT devices. For ISVs and OEMs, the ultimate benefit is a massively improved user experience which enhances customer satisfaction and ultimately sales and profits.

Gary Baum is the Vice President of Marketing at MyScript, the source of advanced, award-winning technology for handwriting recognition and digital ink management. At the Car HMI Concepts and Systems conference, MyScript technology was recognized in the ‘Most Innovative Car HMI Technology’ category.

Read more about the MyScript SDK and other tools for the automotive industry.

Control, Drive, Sense: High-Power Density SiC and GaN Power Conversion Applications

Thursday, March 2nd, 2017

New power switch technologies are key to success with the next generation of motor control, solar inverters, energy storage and electric vehicles. Just as important—the ability to drive these technologies safely and sense them more accurately.

Electricity consumption and its generation, which add to our carbon footprint and drive climate change, are among the key problems the world faces. The largest global consumer of electricity is electric motors and the systems they drive, which use more than twice as much electricity as the next largest consumer, lighting. A 2011 International Energy Agency report estimates that electric motor systems account for between 43 and 46 percent of the world’s electricity consumption.

Farther on Less

The need to further shrink our carbon footprint by reducing the CO2 emissions from transportation is a key driver for the electrification of vehicles. With the electrification of vehicles comes the need for them to be able to travel greater distances with less energy consumed. At the same time, we must ensure that the electricity generated for charging these vehicles comes from clean sources. As important as reducing electricity consumption is improving electricity generation methods. Generating energy through renewable resources like the sun requires efficient solar farms that are becoming mainstream in implementations worldwide.

We’ve seen the emergence of wide-bandgap semiconductor technologies like Silicon Carbide (SiC) and Gallium Nitride (GaN) and the use of power MOSFETs in applications such as solar inverters, motor drives, and electric vehicles. Along with these technologies comes the need for gate drivers capable of driving them efficiently and safely at higher switching rates with less dead time in the system. Sensing current within these systems while operating at these higher switching rates is becoming more challenging.

Moving to these new technologies makes electric motors and driving electronics smaller and lighter. Increasing the range of the electric vehicle and decreasing its charging time becomes possible. Higher switching frequencies in solar inverters, as specified in IEC62109-1, will improve the overall efficiency of the systems as well as reducing the size of the line filters. Industrial automation applications where motors are commonly used, as specified in the variable frequency motor drive standard IEC61800-5, will become less bulky and more efficient, reducing the overall energy footprint.

Greater Robustness, Reliability

Isolation is mandated for safety and operation. Implementing the isolation barriers within these applications without compromising on performance is critical. These systems often have long lifetimes and could be implemented in harsh environments, so high levels of component robustness and reliability are a must.

“Sensing current within these systems while operating at these higher switching rates is becoming more challenging.”

One example of a solution for driving new power switch technologies is Analog Devices’ iCoupler® digital isolation integrated with gate drivers such as the ADuM4121 (Figure 1). Its industry-leading low propagation delay of 38ns typical allows faster switching, and it can withstand high common-mode transients of up to 150kV/µs during fast turn-on and turn-off events.

Integrating Analog Devices’ iCoupler digital isolation with industry-leading sigma-delta analog-to-digital converters, such as the AD7403, makes it possible to accurately sense the current in high-voltage applications across a smaller shunt resistor, improving system efficiency. This enables a higher-accuracy shunt-based current-measurement architecture rather than Hall-effect systems. Selecting smaller resistors also reduces the overall size of the solution.
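
The efficiency win from a smaller shunt is direct, since the burden power scales linearly with resistance. The current and resistor values below are assumed for illustration, not taken from a datasheet:

```python
# Shunt burden power: P = I^2 * R. If the ADC's noise floor lets you
# shrink the shunt, the dissipated power shrinks proportionally.
I_LOAD = 20.0        # A, assumed full-scale phase current

def shunt_power(r_shunt_ohms: float) -> float:
    return I_LOAD ** 2 * r_shunt_ohms

p_large = shunt_power(0.010)   # 10 mOhm shunt
p_small = shunt_power(0.002)   # 2 mOhm shunt
print(f"{p_large:.1f} W vs {p_small:.1f} W dissipated in the shunt")
```

The trade-off is signal amplitude: the smaller shunt produces a proportionally smaller sense voltage, which is why a high-resolution isolated ADC is what makes the smaller resistor usable.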


Figure 1: ADuM4121 Driving GaN MOSFET GS66508B

To demonstrate system performance benefits, Analog Devices has developed a new half-bridge GaN evaluation platform in collaboration with GaN Systems, as shown in Figure 2. On this platform, the ADuM4121 isolated gate driver drives the GS66508B GaN MOSFET from GaN Systems, which is rated at 650V and 30A. The gate charge requirement of the GS66508B is very low, making it much easier to drive at higher frequencies with a much lower supply voltage on VDD2 of 6V. The ADuM4121 also includes an internal Miller clamp that activates at 2V on the falling edge of the gate-drive output, giving the driven gate a lower-impedance path to reduce the chance of Miller-capacitance-induced turn-on.

Making use of three of these half bridge evaluation boards combined with the Analog Devices Motor Control evaluation platform, a demonstration system showcasing a three-phase inverter driving a three-phase motor was built (Figure 3). Within the three-phase inverter, large currents are being switched at high frequencies that can cause radiated and conducted emissions. To reduce the conducted and radiated emissions in the system while operating efficiently, it is critical to slew the edges of the switching waveforms sufficiently by selecting an appropriate gate resistance. This series resistance can further help with dampening the output ringing by matching the source to the load.
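
A first-cut gate-resistance budget can be estimated from the device’s total gate charge and the target edge time. The values below are assumed for illustration (the gate charge is a typical low-Qg GaN figure, not quoted from the GS66508B datasheet), and a real design must also account for the driver’s internal impedance and the gate-loop inductance:

```python
# Average gate current to move charge Q_G in time T_SW, and the total
# resistance that the 6 V gate supply can drive at that current.
Q_G = 5.8e-9      # C, assumed total gate charge of a low-Qg GaN device
V_DRIVE = 6.0     # V, gate supply (VDD2 in the text)
T_SW = 10e-9      # s, assumed target switching edge

i_gate = Q_G / T_SW          # average gate current required
r_total = V_DRIVE / i_gate   # total gate-loop resistance budget
print(f"I_g = {i_gate:.2f} A, R_total = {r_total:.1f} ohm")
```

Slowing the edge, as the text recommends for EMI, means choosing an external gate resistor toward the upper end of this budget; speeding it up means the opposite.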

Figure 2: Replacing an IGBT inverter with a GaN Inverter

In this demonstration platform, the ADSP-CM409 generates the PWM signals required to drive the power switches, while its integrated SINC filters allow direct connection of the isolated sigma-delta ADC used to sense the current accurately. The reinforced isolation provided by the isolated gate drivers can withstand up to 5kVrms, with working voltages as high as 849Vpeak according to VDE0884-10. The isolation the AD7403 offers achieves a 5kVrms withstand with a working voltage of 1250Vpeak, also per VDE0884-10.

Figure 3: Three Phase Inverter Motor Control Platform

Implementing a three-phase inverter using GaN suits systems operating up to 650V. SiC, with its much higher breakdown voltages, better matches systems going as high as 1200V and 1700V, since it provides more margin in three-phase systems with 690Vrms line voltages.
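
The margin argument is quick to verify: the rectified DC bus of a 690Vrms three-phase system sits near the line-to-line peak, which is already beyond a 650V device’s rating before any switching overshoot is considered.

```python
import math

# Line-to-line peak of a 690 Vrms three-phase system.
v_ll_rms = 690.0
v_bus = v_ll_rms * math.sqrt(2)    # nominal DC bus sits near this peak

margin_650 = 650.0 / v_bus         # < 1: GaN rating is below the bus itself
margin_1200 = 1200.0 / v_bus       # ~1.2x headroom for 1200 V SiC
print(f"bus = {v_bus:.0f} V, 650V margin = {margin_650:.2f}x, "
      f"1200V margin = {margin_1200:.2f}x")
```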

Hein Marais is a System Application Engineer at Analog Devices, Inc.

Can Autonomous Vehicles Absolve Human Responsibility?

Monday, January 23rd, 2017

In our rush to embrace the latest technology and take advantage of whatever benefits it offers—greater convenience, higher efficiency, improved reliability, lower cost, etc.—we must not neglect human safety.

Transportation has been a major driver of technological innovation (Figure 1) since the inventions of James Watt, the Wright Brothers and automotive pioneers Daimler and Maybach. Over the years, concerns for occupant safety have led to the development of seat belts and air bags in cars, while such things as improvements in vehicle body materials and profiles, and the deployment of reversing alarms on trucks and buses have reduced the risks of accident and injury to pedestrians, cyclists, and other road users.


Figure 1: Mankind’s need to get from one spot to another has inspired innovators from James Watt to Elon Musk (Left image: James Eckford Lauder (1811-1869), public domain, via Wikimedia Commons; Right image: Steve Jurvetson, CC BY 2.0, via Wikimedia Commons).

In more recent times, the technology of artificial intelligence (AI) has started to pervade the various electronic control systems that are an integral part of modern automotive design and today’s driving experience. However, as we move from advanced driver assistance systems (ADAS) to fully autonomous self-driving vehicles we need to recognize the point at which responsibility for safe operation passes from human to machine. The ethics of the autonomous functionality offered by AI in vehicles has parallels with the “three laws of robotics” science-fiction writer Isaac Asimov postulated in 1942, which mostly aimed to protect humans from harm due to the actions of any robots. In similar fashion, implementing AI in vehicles needs ethical decision-making rules to define behavior that eliminates or reduces harm to humans.

From Fighter Pilots to Car Drivers

A fighter jet represents the pinnacle of aircraft evolution in terms of its performance and complexity of operation. Consequently, fighter pilots are assisted in flying them: a comprehensive suite of artificial intelligence algorithms can control almost every aspect of the aircraft’s operation, enhancing the pilot’s capability while still allowing him to take control when the situation demands it. In the same way, equally powerful, game-changing AI technology in automotive applications must preserve the ability to return control of the vehicle to the driver.

Within the auto industry today, many electronic technology companies are focusing on the technical needs of ADAS, developing both adaptive and predictive systems and components that will allow for better and safer driving. ADAS assists the driver or any other agent in charge of the vehicle in a number of ways: It may warn the driver or take actions to reduce risk. It may also improve safety and performance by automating some portion of the control task of operating the vehicle.

In its current state, ADAS mainly functions in cooperation with the driver, providing a human-to-machine interface within the vehicle’s control system while the human retains overall responsibility for the vehicle. Over time, technology is expected to assume ever-greater control, so that assistance becomes the norm and driver intervention is reduced. ADAS is ultimately expected to develop into the kind of autonomous system that can respond more quickly, and to greater benefit, than a human agent in control of the vehicle.

ADAS Demands Component Solutions

The development of electronic components for ADAS, and ultimately for truly autonomous vehicles, is being undertaken by leading component manufacturers worldwide. These companies are typically already experienced in meeting the demanding performance, quality and reliability standards expected by the automotive industry. For example, ON Semiconductor provides robust, AEC-qualified, production part approval process (PPAP) capable products for automotive applications, including the NCV78763 Power Ballast and Dual LED Driver for ADAS front headlights. Freescale Semiconductor is helping to drive the world’s most innovative ADAS solutions with its automotive, MCU, analog and sensors, and digital networking portfolio expertise. The development of its latest FXTH8715 Tire Pressure Monitoring Sensors (TPMS), which integrate an 8-bit microcontroller (MCU), pressure sensor, XZ-axis or Z-axis accelerometer and RF transmitter, was driven by a market requirement for improved safety. AVX, a technology leader in the manufacture of passive electronic components, developed the VCAS & VGAS Series TransGuard® Automotive Multi-Layer Varistors (MLVs) to provide protection against automotive-related transients in ADAS applications. Delphi Connection Systems supports challenging automotive applications that demand robust design and reliability with its high-performance APEX® Series Wire Connectors.

The Dream of Vehicle Autonomy

The electronics industry has long been characterized by continual improvements in performance at ever-decreasing cost. That trend has allowed technology that was once the preserve of racing cars and the luxury automobile market to percolate down through mid-range vehicles to everyday family vehicles. Many people, both inside and outside the industry, now dream of a future in which completely autonomous vehicles dominate the world’s roads. They visualize benefits in safety, travel efficiency, comfort, and convenience in vehicles that are programmed to avoid accidents, optimize journey times and costs, and maximize the functional utility of the vehicle. Among these, preventing injury to passengers and others, as well as damage to the vehicle and property, is clearly the highest priority.

Autonomous Vehicles Require Ethical Rules

Current laws regulating road use place the responsibility for safety squarely with the human driver. He or she must ensure that other people, both inside and outside the vehicle, are protected from harm arising from his/her operation of the vehicle. While a car may be viewed as a means of getting people from point A to point B as efficiently as possible, its use at excessive speed or in a dangerous manner resulting in an accident that injures or kills a pedestrian would likely be considered a criminal offense. Indeed, the deliberate use of a vehicle to run down and kill someone would, in most cases, constitute murder.

However, these judgments are rarely black or white, and there may be mitigating circumstances, depending on the situation and people involved. Moreover, while we would not expect an autonomous vehicle to exceed speed limits or undertake dangerous maneuvers in a typical situation, there may be occasions when, like a human operator, it needs to make decisions where the outcome may be questionable. These decisions are where we need to understand the ethics involved to apply appropriate rules. This can be appreciated by considering a few hypothetical scenarios:

1. When traveling at speed in traffic, a human driver might react to an animal jumping out into the road by swerving to avoid it and, in doing so, hitting another car. As the driver, you may have saved that animal but what if the result was an accident in which other people were hurt?

2. What if, instead of an animal in the above example, it was a pedestrian who had stepped into the road, and hitting them was likely to be fatal? Then the action would have saved a human life at the cost of potential injuries to the occupants of the other vehicle.

3. An autonomously driven vehicle confronted with the same situation of a pedestrian stepping into the road might decide it cannot run over that person but may also decide it cannot swerve into another vehicle. Instead, it swerves off the road hitting a wall resulting in serious injuries to the human ‘driver’ of the car and potentially any passengers too.

In the latter situation, the human ‘driver’ is not to blame, but equally, there is an ethical dilemma as to whether any fault lies with the autonomous vehicle. Undoubtedly, as we become more reliant on technologies such as ADAS, and ultimately on Autonomous Technology Systems (ATS), responsibility for operating a vehicle depends less on the individual driver and shifts to the vehicle itself, and therefore to the car manufacturer. Not surprisingly, the automotive industry will not want to accept liability for such risks unless the market recognizes this requirement and establishes a business model that makes economic sense for manufacturers and does not result in endless litigation.
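One way to make such ethical decision-making rules concrete is as an explicit harm-ordering policy. The sketch below is purely illustrative — the maneuver names, predicted outcomes, and severity scale are our assumptions for discussion, not any manufacturer’s actual ADAS logic: the controller picks the maneuver whose worst-case predicted harm ranks lowest on an agreed scale.

```python
# Illustrative only: a toy harm-minimizing maneuver selector.
# Severity scale and options are assumptions, not a real ADAS policy.

# Lower score = preferred outcome on the agreed ethical scale.
HARM_SEVERITY = {
    "none": 0,
    "property_damage": 1,
    "animal_injury": 2,
    "occupant_injury": 3,
    "third_party_injury": 4,
    "fatality": 5,
}

def choose_maneuver(options):
    """Pick the maneuver whose worst-case predicted harm is lowest.

    `options` maps maneuver name -> list of predicted harm categories.
    """
    return min(
        options,
        key=lambda m: max(HARM_SEVERITY[h] for h in options[m]),
    )

# The pedestrian scenario from the text, with assumed predictions:
scenario = {
    "brake_straight": ["fatality"],             # hits the pedestrian
    "swerve_into_car": ["third_party_injury"],  # hits another vehicle
    "swerve_off_road": ["occupant_injury"],     # hits the wall
}
print(choose_maneuver(scenario))  # -> swerve_off_road
```

Even this toy version surfaces the dilemma in the text: minimizing worst-case harm means the vehicle chooses to injure its own occupants, which is exactly the liability question manufacturers face.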


Technological solutions are now starting to outpace the real-world situations into which they are being introduced. The deployment of artificial intelligence is challenging the status quo and forcing us to consider ethical questions about how machines should operate and who has control and is, therefore, responsible for their behavior.

This moral issue is certainly true of autonomous vehicles where ceding control to the vehicle requires AI that follows agreed ethical rules to protect human life. If we are to benefit from improved transportation systems with greater freedom, flexibility, efficiency, and safety, then it is society as a whole rather than design engineers and vehicle manufacturers that have to face up to this challenge and take on this responsibility.

Rudy Ramos is the Project Manager for the Technical Content Marketing team at Mouser Electronics, accountable for the timely delivery of the Application and Technology sites from concept to completion. He has 30 years of experience working with electromechanical systems, manufacturing processes, military hardware, and managing domestic and international technical projects. He holds an MBA from Keller Graduate School of Management with a concentration in Project Management. Prior to Mouser, he worked for National Semiconductor and Texas Instruments. Ramos may be reached at

GENIVI Alliance Announces New Open Source Vehicle Simulator Project

Tuesday, September 20th, 2016

The GENIVI Alliance, a non-profit alliance focused on developing an open in-vehicle infotainment (IVI) and connectivity software platform for the transportation industry, today announced the GENIVI Vehicle Simulator (GVS) open source project has launched, with both developer and end-user code available immediately.

The GVS project and initial source code, developed by Elements Design Group, San Francisco and the Jaguar Land Rover Open Software Technology Center in Portland, Ore., provide an open source, extensible driving simulator that assists adopters to safely develop and test the user interface of an IVI system under simulated driving conditions.

“While there are multiple potential uses for the application, we believe the GVS is the most comprehensive open source vehicle simulator available today,” said Steve Crumb, executive director, GENIVI Alliance. “Its first use is to test our new GENIVI Development Platform user interface in a virtually simulated environment, to help us identify and execute necessary design changes quickly and efficiently.”

Open to all individuals wishing to collaborate, contribute, or just use the software, the GVS provides a realistic driving experience with a number of unique features including:

  • Obstacles – Obstacles may be triggered by the administrator while driving.  If the driver hits an obstacle in the virtually simulated environment, the event is logged as an infraction that can be reviewed after the driving session.
  • Infraction Logging – A number of infractions can be logged, including running stop signs, running red lights, driving over double yellow lines on an undivided highway, and collisions with terrain, other vehicles, obstacles, etc.
  • Infraction Review – At the end of a driving session, the administrator and driver can review infractions from the most recent session, with screenshots of the infraction along with pertinent vehicle data displayed and saved.
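In data terms, each logged infraction is a timestamped record bundling the event type, a screenshot reference, and a vehicle-state snapshot that the administrator can review after the session. The sketch below is speculative — the class and field names are our assumptions, not code from the GVS project:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical infraction record, loosely modeled on the features the
# GVS announcement describes; names are assumptions, not GVS code.
@dataclass
class Infraction:
    kind: str           # e.g. "ran_stop_sign", "collision_obstacle"
    sim_time_s: float   # simulation time of the event
    speed_kph: float    # vehicle speed when the infraction occurred
    screenshot: str     # path to the captured frame

@dataclass
class DrivingSession:
    driver: str
    infractions: List[Infraction] = field(default_factory=list)

    def log(self, infraction: Infraction) -> None:
        self.infractions.append(infraction)

    def review(self) -> List[str]:
        """Summary lines for the post-session review screen."""
        return [
            f"{i.sim_time_s:7.1f}s  {i.kind}  ({i.speed_kph:.0f} km/h)"
            for i in sorted(self.infractions, key=lambda i: i.sim_time_s)
        ]

session = DrivingSession("driver-1")
session.log(Infraction("ran_red_light", 42.5, 58.0, "frames/0425.png"))
session.log(Infraction("collision_obstacle", 12.0, 31.0, "frames/0120.png"))
print("\n".join(session.review()))
```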

To learn more, review the code, or start setting up your own vehicle simulator, visit

About GENIVI Alliance

The GENIVI Alliance is a non-profit alliance focused on developing an open in-vehicle infotainment (IVI) and connectivity platform for the transportation industry. The alliance provides its members with a global networking community of more than 140 companies, joining connected car stakeholders with world-class developers in a collaborative environment, resulting in free, open source middleware. GENIVI is headquartered in San Ramon, Calif.

Automotive semiconductor market grows slightly in 2015, ranks shift

Wednesday, June 22nd, 2016

Despite slower growth for the automotive industry and exchange rate fluctuations, the automotive semiconductor market grew at a modest 0.2 percent year over year, reaching $29 billion in 2015, according to IHS (NYSE: IHS), a global source of critical information and insight.

A flurry of mergers and acquisitions last year caused the competitive landscape to shift, including the merger of NXP and Freescale, which created the largest automotive semiconductor supplier in 2015 with a market share of 14.3 percent, IHS said. The acquisition of International Rectifier (IR) helped Infineon overtake Renesas to secure the second-ranked position, with a market share of 9.8 percent. Renesas slipped to third-ranked position in 2015, with a market share of 9.1 percent, followed by STMicroelectronics and Texas Instruments.

“The acquisition of Freescale by NXP created a powerhouse for the automotive market. NXP increased its strength in automotive infotainment systems, thanks to the robust double-digit growth of its i.MX processors,” said Ahad Buksh, automotive semiconductor analyst for IHS Technology. “NXP’s analog integrated circuits also grew by double digits, thanks to the increased penetration rate of keyless-entry systems and in-vehicle networking technologies.”

NXP will now target the machine vision and sensor fusion markets with the S32V family of processors for autonomous functions, according to the IHS Automotive Semiconductor Intelligence Service. Even on the radar front, NXP now has a broad portfolio of long- and mid-range silicon-germanium (SiGe) radar chips, as well as short-range complementary metal-oxide semiconductor (CMOS) radar chips under development. “The fusion of magnetic sensors from NXP, with pressure and inertial sensors from Freescale, has created a significant sensor supplier,” Buksh said.

The inclusion of IR, and a strong presence in advanced driver assistance systems (ADAS), hybrid electric vehicles and other growing applications helped Infineon grow 5.5 percent in 2015. Infineon’s 77 gigahertz (GHz) radar system integrated circuit (RASIC) chip family strengthened its position in ADAS. Its 32-bit microcontroller (MCU) solutions, based on TriCore architectures, reinforced the company’s position in the powertrain and chassis and safety domains.

The dollar-to-yen exchange rate worked against Renesas’ revenue ranking for the third consecutive year. A major share of Renesas’ business is with Japanese customers and is conducted primarily in yen. Even though Renesas’ automotive semiconductor revenue fell 12 percent when measured in dollars, it actually grew by about 1 percent in yen. Renesas’ strength continues to be its MCU solutions, where the company is still the leading supplier globally.
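The exchange-rate effect is simple arithmetic: the same yen revenue converts to fewer dollars when the yen weakens. A sketch with assumed figures (the revenue base and exchange rates below are illustrative, chosen to mirror the article’s direction, not IHS data):

```python
# Illustrative exchange-rate effect: yen revenue grows while the
# converted dollar figure shrinks. All numbers are assumptions.

rev_2014_yen = 280e9   # hypothetical 2014 automotive revenue, in yen
yen_growth = 0.01      # ~1 percent growth in yen, per the article
rate_2014 = 106.0      # yen per USD, assumed 2014 average
rate_2015 = 121.0      # yen per USD, assumed 2015 average

rev_2015_yen = rev_2014_yen * (1 + yen_growth)
usd_2014 = rev_2014_yen / rate_2014
usd_2015 = rev_2015_yen / rate_2015
usd_growth = usd_2015 / usd_2014 - 1

print(f"growth in yen:     {yen_growth:+.1%}")
print(f"growth in dollars: {usd_growth:+.1%}")  # negative despite yen growth
```

With these assumed rates the dollar figure falls roughly 12 percent even as yen revenue rises, which is the pattern the article describes.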

STMicroelectronics’ automotive revenue declined 2 percent year over year; however, the larger part of the decline can be attributed to the euro’s 20 percent drop against the U.S. dollar in 2015. STMicroelectronics’ broad-based portfolio and its presence in every growing domain of the automotive market helped the company maintain its revenue as well as it did. Apart from securing multiple design wins with American and European automotive manufacturers, the company is also strengthening its relationships with Chinese auto manufacturers. Radio and navigation solutions from STMicroelectronics were installed in numerous new vehicle models in 2015.

Texas Instruments has thrived in the automotive semiconductor market for the fourth consecutive year. Year-over-year revenue increased by 16.6 percent in 2015. The company’s success story is not based on any one particular vehicle domain. In fact, while all domains have enjoyed double-digit increases, infotainment, ADAS and hybrid-electric vehicles were the primary drivers of growth.


Other suppliers making inroads in automotive

After the acquisition of CSR, Qualcomm rose from 42nd in 2014 to become the 20th largest supplier of automotive semiconductors in 2015. Qualcomm has a strong presence in cellular baseband solutions with its Snapdragon and Gobi processors, while CSR’s strength lies in wireless application ICs, especially for Bluetooth and Wi-Fi. Qualcomm is now the sixth largest supplier of semiconductors in the infotainment domain.

Moving from 83rd position in 2011 to 37th in 2015, Nvidia has used its experience, and its valuable partnership with Audi, to gain momentum in the automotive market. The non-safety-critical status of the infotainment domain was a logical stepping stone to carve out a position in the automotive market, but the company is now also moving toward ADAS and other safety applications. It has had particular success with its Tegra processors.

With Freescale consolidated into NXP, Osram entered the top-10 ranking of automotive suppliers for the first time in 2015. Osram is the global leader in automotive lighting and has enjoyed double-digit growth over the past three years, thanks to the increasing penetration of light-emitting diodes (LEDs) in new vehicles.

The Car, Scene Inside and Out: Q & A with FotoNation

Tuesday, May 24th, 2016

Looking at what’s moving autonomous vehicles closer to reality, who’s driving the car—and what’s in the back seat.

Sumat Mehra, senior vice president of marketing and business development at FotoNation, spoke recently with EECatalog about the news that FotoNation and Kyocera have partnered to develop vision solutions for automotive applications.

EECatalog: What are some of the technologies experiencing improvement as the autonomous and semi-autonomous vehicle market develops?

Sumat Mehra, FotoNation: Advanced camera systems, RADAR, LiDAR, and other types of sensors that have been made available for automotive applications have definitely improved dramatically. Image processing, object recognition, scene understanding, and machine learning in general with convolutional neural networks have also seen huge enhancements and impact. Other areas where the autonomous driving initiative is spurring advances include sensor fusion and the car-to-car communication infrastructure.

Figure 1: Sumat Mehra, senior vice president of marketing and business development at FotoNation, noted that the company has already been working on metrics applicable to the computer vision related areas of object detection and scene understanding.


EECatalog: What are three key things embedded designers working on automotive solutions for semi-autonomous and autonomous driving should anticipate?

Mehra, FotoNation: First, advances in machine learning. Second, heterogeneous computing: various general-purpose processors—CPUs, GPUs, DSPs—are all being made available for programming. Hardware developers as well as software engineers will use not only heterogeneous computing, but also other dedicated hardware accelerator blocks, such as our Image Processing Unit (IPU). The IPU enables very high performance at very low latency and with very low energy use. For example, the IPU makes it possible to run 4K video and process it for stabilization at extremely low power—18 milliwatts for 4K, 60-frames-per-second video.

Third, sensors have come down dramatically in price and offer improved signal-to-noise ratios, resolution, and distance-to-subject performance.

We’re also seeing improved middleware, APIs and SDKs. Plus a framework to provide reliable and portable tool kits to build solutions around, much like what happened in the gaming industry.

EECatalog: Will looking to the gaming industry help avoid some re-invention of the wheel?

Mehra, FotoNation: Certainly. The need for compute power is something gaming and the automotive industry have in common, and we’ve seen companies with a gaming pedigree making efforts [in the automotive sector]. And, thanks to the mobile industry, sensors have come down in price to the point where they can be used for much more than having a large sensor with very large optics in one’s pocket. Sensors can now be embedded into bumpers, into side view mirrors, into the front and back ends of cars to enable much more power and vision functionality.

EECatalog: Will the efforts to enable self-driving cars be similar to the space program in that some of the research and development will result in solutions for nonautomotive applications?

Mehra, FotoNation: Yes. For example, collision avoidance and scene understanding are two of the applications that are driving machine learning and advances toward automotive self-driving. These are problems similar to those that robotics and drone applications face. Drones need to avoid trees, power lines, buildings, etc. while in flight, and robots in motion need to be aware of their surroundings and avoid collisions.

And other areas, including factory automation, home automation, and surveillance, will gain from advances taking place in automotive. Medical robots that can help with mobility [are another] example of a market that will benefit from the forward strides of the automotive sector.

EECatalog: How has FotoNation’s experience added to the capabilities the company has today?

Mehra, FotoNation: FotoNation has evolved dramatically. We have been in existence for more than 15 years, and when we started, it was still the era of film cameras. The first problem we started tackling was, “How do you transfer pictures from a device onto a computer or vice versa?”

So we worked in the field of picture transfer protocols, of taking pictures on and off devices. Then, when we came into the digital still camera space through this avenue, we realized there were other imaging problems that needed to be addressed.

We solved problems such as red eye removal through computational imaging. Understanding the pixels, understanding the images, understanding what’s being looked at—and being able to correct for it—relates to advances in facial detection, because the most important thing you want to understand in a scene is a person.

Then, as cameras became available for automotive applications, new problems arose. We drew from all that we had been learning through our experience with the entire gamut of image processing. The metrics FotoNation has been working on in different areas have become applicable to such automotive challenges as object detection and scene understanding.

As pioneers in imaging, we don’t deliver just standard software or an algorithm for any one type of standard processor. We offer a hybrid architecture, where our IPU provides hardware acceleration that performs specialized computer vision tasks like object recognition or video image stabilization at much higher performance and much lower power than a CPU. We deliver our IPU as a netlist that goes into a system on chip (SoC). Hybrid HW/SW architectures are important for applications such as automotive, where high performance and low power are both required. Performance is required for low latency, to make decisions as fast as possible; a car moving at 60 miles per hour cannot afford extra frames (at 16 to 33 milliseconds per frame) to decide whether it is going to hit something. Low power is required to avoid excessive thermal dissipation (heat), which is a serious problem for electronics, especially image sensors.
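The latency point is easy to quantify: at highway speed, every extra frame of processing delay translates directly into meters traveled before a decision can be made. A quick back-of-the-envelope check:

```python
# Distance covered per frame of added latency at 60 mph.
MPH_TO_MPS = 0.44704  # exact miles-per-hour to meters-per-second factor

speed_mps = 60 * MPH_TO_MPS            # ~26.8 m/s
for fps in (60, 30):                   # frame times of ~16.7 ms and ~33.3 ms
    frame_time_s = 1.0 / fps
    distance_m = speed_mps * frame_time_s
    print(f"{fps} fps: {frame_time_s * 1000:5.1f} ms/frame -> "
          f"{distance_m:.2f} m traveled per frame")
```

Roughly half a meter to a meter per frame, which is why shaving even one frame of latency matters for collision decisions.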

EECatalog: When it comes down to choosing FotoNation over another company with which to work, what reasons for selecting FotoNation are given to potential customers?

Mehra, FotoNation: One reason is experience. Our team has more than 1,000 man-years of experience in embedded imaging. Many other companies came from the field of image processing for computers or desktops and then moved into embedded. We have lived and breathed embedded imaging, and the algorithms and solutions that we develop reflect that.

The scope of imaging that we cover ranges all the way from photons to pixels. Experience with the entire imaging subsystem is a key strength:  We understand how the optics, color filters, sensors, processors, software and hardware work independently and in conjunction with each other.

Another reason is that a high proportion of our engineers are PhDs who look at various ways of solving problems, refusing to be pigeonholed into addressing challenges in a single way. We have a strong legacy of technology innovation, demonstrated through our portfolio of close to 700 granted and applied-for patents.

EECatalog: Had the press release about FotoNation’s working with Kyocera Corporation to develop vision solutions for automotive been longer, what additional information would you convey?

Mehra, FotoNation: More on our IPU, and how OEMs in the automotive area would definitely gain from the architectural advantages it delivers. The IPU is our greatest differentiator, and we would like our audience to understand more about it.

Another thing we would have liked to include is more on the importance of driver identification and biometrics. FotoNation acquired a company for iris biometrics a year ago, Smart Sensors, and we will be [applying] those capabilities toward driver monitoring system capabilities. The first step to autonomous vehicles is semi-autonomous vehicles, where drivers are sitting behind the steering wheel but not necessarily driving the car. And for that first step you need to know who the driver is. What the biometrics bring you is that capability of understanding the driver.

Other metrics include being able to look at the driver to tell whether he is drowsy, paying attention or looking somewhere else—decision making becomes easier when [the vehicle] knows what is going on inside the car, not just outside the car—that is an area where FotoNation is very strong.

EECatalog: In a situation where the car is being shared, a vehicle might have to recognize, for example, “Is this one of the 10 drivers authorized to share the car?”

Mehra, FotoNation: Absolutely, and the car’s behavior should be able to factor in whether it is a teenager or an adult getting behind the wheel; then risk assessments can begin to happen. All of this additional driver information can assist in better driving, and ultimately increased driver and pedestrian safety.

And we see [what’s ahead as] not just driver monitoring, but in-cabin monitoring through a 360-degree camera that is sitting inside the cockpit and able to see what is going on: Is there a dog in the back seat, which is about to jump into the front? Is there a child who is getting irate? All of those things can aid the whole experience and reduce the possibility of accidents.

Questions to Ask on the Journey to Autonomous Vehicles

Monday, May 23rd, 2016

In or out of Earth’s orbit, the journey will show similarities to the space race.

What comes first, connected vehicles or smart cities?

Smart cities will come first and play a critical role in the adoption of connected vehicles. The federal government is also investing in these programs in many ways. Through its Smart City Challenge program, USDOT has named seven finalist cities: San Francisco, Portland, Austin, Denver, Kansas City, Columbus and Pittsburgh.

Many of the remaining cities/states are finding alternate sources to fund their smart city deployments.

When we look at a cooperative safety initiative such as V2X (Vehicle-to-Everything), we see that it requires a majority of vehicles to support the same technology. Proliferation of V2X is going to take a few years to reach critical mass. This is the reason connected vehicles equipped with V2X are looking at smart city infrastructure as a way to demonstrate the use-case scenarios for the “Day One Applications.”

What are the chief pillars of the autonomous vehicle (AV) market?

The three core pillars of the autonomous vehicle market will be:

  • Number Crunching Systems
    • Development of multicore processors has helped fuel the AI engines that are needed for the autonomous vehicle market. More and more companies are using GPUs and multicore processors for their complex algorithms. It is estimated that these systems process 1GB of data every second.
  • ADAS Sensors
    • The cost/performance ratio for ADAS sensors like lidars, radars and cameras has improved significantly over the past couple of years.  All of this will reduce the total cost of the core systems needed for autonomous vehicle systems, making the technology more mainstream.
  • Connectivity and Security
    • Connectivity will play a key role for such systems. Autonomous vehicles depend heavily on information from external sources like the cloud, other vehicles and infrastructure. These systems need to validate their sources and build a secure firewall to protect their information.
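The "validate their sources" requirement in the connectivity pillar can be illustrated with a minimal message-authentication check. Real V2X security uses certificate-based signatures (IEEE 1609.2), not a pre-shared key; the HMAC sketch below is only a stand-in to show the shape of the check, and the key and message layout are assumptions:

```python
import hmac
import hashlib

# Minimal stand-in for V2X message authentication. Production systems
# use IEEE 1609.2 certificate-based signatures, not a shared HMAC key;
# the key and message format here are assumptions for illustration.

SHARED_KEY = b"demo-key-not-for-production"

def sign(payload: bytes) -> bytes:
    """Compute an authentication tag over the message payload."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def verify(payload: bytes, tag: bytes) -> bool:
    """Accept the message only if its tag checks out (constant-time)."""
    return hmac.compare_digest(sign(payload), tag)

msg = b"V2V|hazard_ahead|lat=37.77|lon=-122.42"
tag = sign(msg)
print(verify(msg, tag))               # True: genuine message
print(verify(b"V2V|all_clear", tag))  # False: tampered or forged message
```

The point of the sketch is the gate itself: data from the cloud, other vehicles, or roadside infrastructure is acted on only after its origin and integrity are verified.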

The total BOM for a complete system in the next five years will be around $5,000, and the system will add only $20,000 or less to the vehicle’s sticker price. For a relatively small increase, consumers will get numerous benefits, ranging from enhanced safety to stress-free driving. This is one of the reasons why companies like Cruise were acquired at such huge valuations.

What three key events should embedded designers working on automotive solutions for semi-autonomous and autonomous driving anticipate?

  • Sensor Fusion
    • Standards will need to be developed to allow free integration of ADAS sensors, connecting all the various ADAS applications and supporting data sharing between these sensors.
  • Advances in Parallel Computing Inside Automotive Electronics
    • ECU systems inside the cars will eventually be replaced with complex parallel computing ADAS platforms. Artificial intelligence engines inside these platforms need to take advantage of parallel computing when processing gigabytes of data per second. Real-time systems that can execute the decision-making process in a split second will make all the difference.
  • Redundancy
    • Finally, the industry needs to create a redundant fault tolerant architecture. When talking about autonomous vehicles, the systems that enable autonomous driving need to have redundancy to ensure the system is always operating as designed.
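A classic pattern for the redundancy point above is triple modular redundancy: three independent channels compute the same result and a majority vote masks a single faulty channel. A toy sketch (the channel outputs are assumed values, not from any real system):

```python
from collections import Counter

# Toy triple-modular-redundancy voter: three independent sensor or
# compute channels; a majority vote masks a single faulty channel.

def majority_vote(readings):
    """Return the value reported by at least two of three channels,
    or None if all three disagree (an unmaskable fault)."""
    value, count = Counter(readings).most_common(1)[0]
    return value if count >= 2 else None

print(majority_vote(["brake", "brake", "brake"]))  # healthy: brake
print(majority_vote(["brake", "brake", "coast"]))  # one fault masked: brake
print(majority_vote(["brake", "coast", "steer"]))  # unmaskable: None
```

The unmaskable case is the important one for safety architects: the system must detect it and fall back to a safe state rather than guess.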

How will the push to create self-driving cars (similar to what happened in the space race) result in useful technology for other areas?

The drone/surveillance video market will benefit from the push to create self-driving technology. Drones have similar characteristics to self-driving cars, just on a much smaller scale. The complexities around drone airspace management will definitely need some industry rules and support. This market will benefit from the advances and rule-making experience leveraged from self-driving cars.

What was the role of USDOT pilots and other research for enabling the autonomous vehicle market?

The role of the USDOT pilots has been predominantly focused on connected vehicles; not much has happened yet with autonomous vehicles. Deploying connected-vehicle infrastructure helps improve the robustness of the data vehicles receive, and that infrastructure will pave the way for autonomous vehicles. Roadside infrastructure will also play a role in monitoring rogue vehicles.

USDOT is also focusing on creating regulation and policies for autonomous vehicle deployments. Several test tracks around the United States (in California, Michigan and Florida) have been funded by the USDOT. These proving grounds are set up with miles of paved roads that simulate an urban driving environment.

Many automakers have set 2020 as the goal for automated-driving technology in production models. Pilots and research by USDOT represent a huge reduction in risk for the automotive OEMs.

What else should embedded designers keep in mind when the topic is autonomous vehicles?

  • 100 Million Lines of Code
    • Connected vehicle technology is among the most complex systems mankind has built: at about 100 million lines of code, it exceeds a space shuttle, an operating system like the Linux kernel, or a smartphone. We recommend that embedded designers depend on well-tested, pre-defined middleware blocks to accelerate their design process.
  • FOTA and SOTA Updates
    • We also recommend that embedded designers build systems that depend heavily on firmware over the air (FOTA) and software over the air (SOTA) systems. We know that cars are going to follow the same trend as smartphones that require frequent software updates. Tesla has set a great example of this process with its updates and has said that its vehicles will constantly improve over time.
  • Aftermarket Systems as a Way to Introduce New Capabilities
    • Finally, embedded designers need to look at aftermarket systems as a way to introduce semi-autonomous features and determine the feasibility and acceptance of these building blocks before they become part of the mainstream.
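The FOTA/SOTA recommendation boils down to a simple loop: the vehicle compares its installed version against a server-advertised one and stages the newer image. A minimal sketch of the decision step (the version scheme and function names are assumptions, not any OEM’s actual update protocol):

```python
# Minimal over-the-air update decision: compare dotted version strings
# numerically and decide whether to stage a download. The scheme and
# naming are assumed for illustration, not a real FOTA protocol.

def parse_version(v: str):
    """Turn '2.10.0' into (2, 10, 0) so comparison is numeric."""
    return tuple(int(part) for part in v.split("."))

def needs_update(installed: str, advertised: str) -> bool:
    return parse_version(advertised) > parse_version(installed)

print(needs_update("2.9.1", "2.10.0"))  # True: 10 > 9 numerically
print(needs_update("2.10.0", "2.9.1"))  # False: already newer
```

Note the numeric parse: a naive string comparison would rank "2.9.1" above "2.10.0", a classic bug in update logic.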

Ravi Puvvala is CEO of Savari. With 20+ years of experience in the telecommunications industry, including leading positions at Nokia and Qualcomm Atheros, Puvvala is the founder of Savari and a visionary of the future of mobility. He serves as an advisory member to transportation institutes and government bodies.

The Rise of Ethernet as FlexRay Changes Lanes

Friday, May 20th, 2016

There are five popular protocols for in-vehicle networking. Caroline Hayes examines the structure and merits of each.

Today’s vehicles use a range of technologies, systems and components to make each journey a safe, comfortable, and enjoyable experience. From infotainment systems to keep the driver informed and passengers entertained, to Advanced Driver Assistance Systems (ADAS) to keep road users safe, networked systems communicate within the vehicle. Vehicle systems such as engine control, anti-lock braking and battery management, air bags and immobilizers are integrated into the vehicle’s systems. In the driver cockpit, there are instrument clusters and drowsy-driver detection systems, as well as ADAS back-up cameras, automatic parking and automatic braking systems. For convenience, drivers are used to keyless entry, mirror and window control as well as interior lighting, all controlled via an in-vehicle network. All rely on a connected car and in-vehicle communication networks.

There are five in-vehicle network standards in use today: Local Interconnect Network (LIN), Controller Area Network (CAN), Ethernet, Media Oriented Systems Transport (MOST), and FlexRay.

Evolving Standards
LIN targets control functions within a vehicle. It is a simple, standard UART-based interface that allows sensors and actuators to be implemented, and components such as lighting and cooling fans to be easily replaced. The single-wire serial communications system operates at 19.2-kbit/s to control intelligent sensors and switches, in windows, for example.
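LIN's simplicity shows in its frame protection: a single inverted 8-bit checksum. The sketch below is illustrative, based on the classic (LIN 1.x) checksum, a sum-with-carry over the data bytes; the enhanced (LIN 2.x) variant also folds in the protected identifier byte.

```python
def lin_checksum(data, protected_id=None):
    """Classic LIN checksum: 8-bit sum with carry wrap-around, inverted.

    Pass protected_id to compute the enhanced (LIN 2.x) checksum,
    which also covers the protected identifier byte.
    """
    total = protected_id if protected_id is not None else 0
    for byte in data:
        total += byte
        if total > 0xFF:          # fold the carry back in (sum-with-carry)
            total -= 0xFF
    return (~total) & 0xFF        # invert to get the transmitted checksum

# A receiver verifies a frame by summing data + checksum the same way
# and expecting 0xFF.
```

Because the checksum is the inverted sum, adding it back to the running sum always yields 0xFF, which gives the slave nodes a trivially cheap validity check.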

Figure 1: Microchip supports all automotive network protocols with devices, development tools and ecosystem for vehicle networking.


This data transfer rate is slower than CAN's 1-Mbit/s (maximum) operation. CAN is used for high-performance embedded applications. An evolution of CAN is CAN FD (Flexible Data rate), initiated in 2011 to meet increasing bandwidth needs. It operates at 2-Mbit/s, increasing to 5-Mbit/s when used point-to-point for software downloads. The higher data rate of CAN calls for a two-wire, twisted-pair cable to carry a differential signal.

As well as boosting transmission rates, CAN FD extended the data field from 8 bytes to 64 bytes. The bit rate can be increased during the data phase, when only one node is transmitting, because the other nodes no longer need to stay synchronized for arbitration.

LIN debuted just as more sensors and actuators began arriving in vehicles. At that juncture, point-to-point wiring had become too heavy, and CAN too expensive. Summarizing LIN, CAN and CAN FD, Johann Stelzer, Senior Marketing Manager for Automotive Information Systems (AIS), Automotive Product Group, Microchip, says: “CAN and CAN FD have a broadcast quality. Any node can be the master, whereas LIN uses master-slave communication.”

K2L’s Matthias Karcher: CAN FD’s higher payload can add security to the network.


The higher bandwidth of CAN FD allows security features to be added. “The larger payload can be used to transfer keys with multiple bytes as well as open up secure communications between two devices,” says Matthias Karcher, Senior Manager of the AIS Marketing Group at K2L. The Microchip subsidiary provides development tools for automotive networks.
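To see why the larger payload matters for security, consider moving a 16-byte key plus a 16-byte authentication tag: classic CAN must split it across several frames, while a single CAN FD frame carries it whole. A rough, illustrative sketch (frame sizes only, not any real key-distribution protocol):

```python
import math

def frames_needed(payload_bytes: int, max_frame_payload: int) -> int:
    """How many frames a payload must be split across."""
    return math.ceil(payload_bytes / max_frame_payload)

# Hypothetical secure payload: a 16-byte key plus a 16-byte MAC tag.
secure_payload = 16 + 16

classic = frames_needed(secure_payload, 8)    # classic CAN: 8-byte frames
fd = frames_needed(secure_payload, 64)        # CAN FD: up to 64 bytes
```

Fragmenting across four classic frames means extra arbitration, reassembly logic, and more surface for errors; fitting the exchange into one FD frame avoids all of that.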

CAN FD’s ability to use an existing wiring harness to transfer more data from one electronic control unit to another, using a backbone or a diagnostic interface, is compelling, says Stelzer. It enables faster download of driver assistance or infotainment control software, for example, making it attractive to carmakers.

Microchip’s Johann Stelzer: Ethernet will evolve from diagnostics to become a communications backbone.


Ethernet as Communications Backbone

Ethernet uses packet data, but at the moment its automotive use is restricted to diagnostics and software downloads. It acts as a bridge network; while flexible, it is also complex, laments Stelzer. As in-vehicle networks multiply, so does the need for high-speed switching, which adds complexity: it requires a high-performance microcontroller or microprocessor, as well as validation and debugging that can lengthen development time.

In the future, asserts Stelzer, Ethernet will be used as the backbone communications between domains, such as safety, power and control, in the vehicle. When connected via a backbone it will be able to exchange software and data quickly, at up to 100-Mbit/s, or 100 times faster than CAN and 50 times faster than CAN FD.
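The quoted speed-ups follow directly from the raw bit rates cited in this article. A quick check of the arithmetic (raw line rates only; real-world throughput depends on protocol overhead):

```python
ETHERNET_MBPS = 100   # 100-Mbit/s automotive Ethernet backbone
CAN_MBPS = 1          # classic CAN maximum
CAN_FD_MBPS = 2       # CAN FD nominal data rate cited above

ratio_vs_can = ETHERNET_MBPS // CAN_MBPS       # 100x faster than CAN
ratio_vs_fd = ETHERNET_MBPS // CAN_FD_MBPS     # 50x faster than CAN FD

# Illustrative download of a 16-MB software image at raw line rate:
image_bits = 16 * 1_000_000 * 8
eth_seconds = image_bits / (ETHERNET_MBPS * 1_000_000)   # ~1.3 s
can_seconds = image_bits / (CAN_MBPS * 1_000_000)        # ~128 s
```

The download comparison makes the appeal concrete: a software image that ties up a classic CAN bus for minutes moves over an Ethernet backbone in seconds.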

At present, automotive Ethernet uses 100BASE-TX, the predominant Fast Ethernet physical layer. The next stage is 100BASE-T1, which also delivers 100-Mbit/s Ethernet, but over a single twisted wire pair. The implementation of 100BASE-T1 will be big, says Stelzer. “This represents a big jump in bandwidth,” he points out, “with less utilization overhead.” IEEE 802.3bw, finalized in 2015, standardizes 100-Mbit/s over a single twisted pair to reduce wiring, promoting the trend of deploying Ethernet in vehicles.

Figure 2: K2L offers the OptoLyzer MOCCA FD, a multi-bus user interface for CAN FD, CAN and LIN development.


Increased deployment will come about when the development tools are in place. For each point-to-point link in the network, developers will have to integrate tooling at each node. “[The industry] will need good solutions,” he says, “to avoid overhead.” K2L offers evaluation boards, application notes, software, Integrated Design Environment (IDE) support and development tools for standard Ethernet in vehicles. The company will announce the availability of support for standard Ethernet T1 next year.

MOST for Media

MOST is a high-speed network used predominantly in vehicle infotainment systems. It addresses all seven layers of the Open Systems Interconnection (OSI) model for data communications, covering not just the physical and data link layers but also system services and applications.

The network is typically a ring structure and can include up to 64 devices. Total available bandwidth for synchronous data transmission and asynchronous data transmission (packet data) is around 23-MBaud.

MOST is flexible, with devices able to be added or removed. Any node can become the network’s timing master, controlling the timing of transmission, although adding parameters can add complexity. One solution, says Karcher, is for a customer to use a Linux OS and a Linux driver to handle data distribution and encapsulate MOST for the application layer. This allows the customer to concentrate on designing differentiation into the product. K2L provides software drivers and libraries for MOST, as well as reference designs for analog front-ends, demonstration kits and evaluation boards. This level of hardware and software support, says Karcher, allows developers to focus on the application. Hardware can connect to MOST as well as to CAN and LIN, he continues, adding that tools can connect and safeguard both system and application, reducing complexity and time-to-market.

The FlexRay Consortium, which was disbanded in 2009, developed FlexRay for on-board computing. There have not been any new developments in FlexRay, notes Karcher, who believes its use is limited to safety applications. Although K2L supplies tools to test and simulate FlexRay, “in the long run, it is hard to see a future for FlexRay,” says Karcher, citing the fact that there are no new designs or applications.

Caroline Hayes has been a journalist covering the electronics sector for over 20 years. She has worked on many titles, most recently the pan-European magazine EPN.
