Posts Tagged ‘top-story’


Heterogeneous: Performance and Power Consumption Benefits

Wednesday, May 10th, 2017

Why multi-threaded, heterogeneous, and coherent CPU clusters are earning their place in the systems powering ADAS and autonomous vehicles, networking, drones, industrial automation, security, video analytics, and machine learning.

High-performance processors typically employ techniques such as deep, multi-issue pipelines, branch prediction, and out-of-order processing to maximize performance, but these do come at a cost; specifically, they impact power efficiency.

If a workload can be parallelized, this impact can be mitigated by partitioning it across a number of efficient CPUs to deliver a high-performance, power-efficient solution. To accomplish this, CPU vendors have provided multicore and multi-cluster solutions, and operating system and application developers have designed their software to exploit these capabilities.

Similarly, application performance requirements can vary over time, so transferring the task to a more efficient CPU when possible improves power efficiency. For specialist computation tasks, dedicated accelerators offer excellent energy efficiency but can only be used for part of the time.

So, what should you be looking for when it comes to heterogeneous processors that deliver significant benefits in terms of performance and low power consumption? Let’s look at a few important considerations.

Multi-threading
Even with out-of-order execution, typical workloads leave CPUs under-utilized; they spend much of their time waiting for the memory system. However, when one portion of the program (known as a thread) is blocked, the hardware resources could potentially be used for another thread of execution. Multi-threading offers the benefit of being able to switch to a second thread when the first is blocked, increasing overall system throughput. Filling otherwise wasted CPU cycles with useful work yields a performance boost: depending on the application, adding a second thread to a CPU typically adds 40 percent to overall performance for an additional silicon area cost of around 10 percent. Among licensable CPU IP, hardware multi-threading is a feature unique to Imagination’s MIPS CPUs.
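The throughput claim can be illustrated with a toy utilization model (an assumption for illustration, not Imagination’s own methodology): if a single thread keeps the pipeline busy only a fraction of the time, a second hardware thread can fill some of the stall cycles.

```python
def utilization(threads, busy_frac):
    """Fraction of peak throughput achieved by one core.

    busy_frac: fraction of cycles a single thread keeps the pipeline
    busy (the remainder are memory stalls).  Simplifying assumption:
    threads stall independently, so the core idles only when every
    thread is stalled at once.
    """
    stall_frac = 1.0 - busy_frac
    return 1.0 - stall_frac ** threads

one_thread = utilization(1, 0.6)   # 0.60 of peak
two_threads = utilization(2, 0.6)  # 0.84 of peak
```

With an assumed 60 percent busy fraction, the second thread lifts utilization from 0.60 to 0.84, which matches the roughly 40 percent gain quoted above.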

A Common View
To move a task from one processor to another requires each processor to share the same instruction set and the same view of system memory. This is accomplished through shared virtual memory (SVM). Any pointer in the program must continue to point to the same code or data and any dirty cache line in the initial processor’s cache must be visible to the subsequent processor.

Figure 1: Memory moves when transferring between clusters.

Figure 2: Smaller, faster memory movement when transferring within a cluster.

Cache Coherency
Cache coherency can be managed through software. This requires that the initial processor (CPU A) flush its cache to main memory before transferring to the subsequent processor (CPU B). CPU B then has to fetch the data and instructions back from main memory. This process can generate many memory accesses and is therefore time consuming and power hungry; this impact is magnified as the energy to access main memory is typically significantly higher than fetching from cache. To combat this, hardware cache coherency is vital, minimizing these power and performance costs. Hardware cache coherency tracks the location of these cache lines and ensures that the correct data is accessed by snooping the caches where necessary.
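A back-of-envelope energy model makes the gap concrete. The per-access energies below are illustrative assumptions only (real figures vary widely with process node and memory technology), but the ratio shows why snooping beats a flush-and-refetch through main memory:

```python
# Assumed, illustrative energy costs per 64-byte cache line; real
# numbers depend heavily on the process node and memory technology.
E_DRAM_ACCESS = 2.0e-9    # joules to read or write a line in main memory
E_SNOOP       = 0.1e-9    # joules to transfer a line cache-to-cache

def software_coherency_energy(dirty_lines):
    # CPU A flushes each dirty line to DRAM; CPU B fetches it back.
    return dirty_lines * (E_DRAM_ACCESS + E_DRAM_ACCESS)

def hardware_coherency_energy(dirty_lines):
    # CPU B snoops the lines directly out of CPU A's cache.
    return dirty_lines * E_SNOOP

lines = 512   # e.g. a 32 KB working set of 64-byte lines
ratio = software_coherency_energy(lines) / hardware_coherency_energy(lines)
# Under these assumptions the software approach costs ~40x more energy.
```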

In many heterogeneous systems, the high-performance processors reside in one cluster, while the smaller, high-efficiency processors reside in another. Transferring a task between these different types of processors means that both the level 1 and level 2 caches of the new processor are cold. Warming them takes time and requires the previous cache hierarchy to remain active during the transition phase.

However, there is an alternative – the MIPS I6500 CPU. The I6500 supports a heterogeneous mix of external accelerators through an I/O Coherence Unit (IOCU) as well as different processor types within a cluster, allowing for a mix of high-performance, multi-threaded and power-optimized processors in the same cluster. Transferring a task from one type of processor to another is now much more efficient, as only the level 1 cache is cold, and the cost of snooping into the previous level 1 cache is much lower, so the transition time is much shorter.

Combining CPUs with Dedicated Accelerators
CPUs are general-purpose machines. Their flexibility enables them to tackle almost any task, but at the price of efficiency. Thanks to its optimizations, the PowerVR GPU can process large, highly parallel computational tasks with very high performance and good power efficiency, in exchange for some reduction in flexibility compared to CPUs, bolstered by a well-supported software development ecosystem with APIs such as OpenCL and OpenVX.

The specialization provided by dedicated hardware accelerators offers a combination of performance with power efficiency that is significantly better than a CPU, but with far less flexibility.

However, accelerators are best used for operations that occur frequently, maximizing the potential performance and power-efficiency gains. Specialized computational elements such as those for audio and video processing, as well as the neural network processors used in machine learning, rely on similar mathematical operations.

Hardware acceleration can be coupled to the CPU by adding Single Instruction Multiple Data (SIMD) capabilities with floating point Arithmetic Logic Units (ALUs). However, while processing data through the SIMD unit, the CPU behaves as a Direct Memory Access (DMA) controller to move the data, and CPUs make very inefficient DMA controllers.

Conversely, a heterogeneous system essentially provides the best of both worlds. It contains some dedicated hardware accelerators that, coupled with a number of CPUs, offer the benefits of greater energy efficiency from dedicated hardware, while retaining much of the flexibility provided by CPUs.

These energy savings and performance boosts depend on the proportion of time that the accelerator is doing useful work. Work packages appropriate for the accelerator come in a wide range of sizes: typically a small number of large tasks alongside many smaller ones.

There is a cost in transferring the processing between a CPU and the accelerator, and this limits the size of the task that will save power or boost performance. For smaller tasks, the energy consumed and time taken to transfer the task exceeds the energy or time saved by using the accelerator.
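That break-even point can be written down directly. In this sketch the cycle counts are placeholder assumptions: a task is worth offloading only when its accelerated runtime plus the fixed transfer cost undercuts running it on the CPU.

```python
def worth_offloading(task_cycles, transfer_cycles, speedup):
    """True if offloading saves time: t/s + c < t."""
    return task_cycles / speedup + transfer_cycles < task_cycles

def break_even_size(transfer_cycles, speedup):
    """Smallest task that offloading helps: solving t/s + c = t
    for t gives t = c * s / (s - 1)."""
    return transfer_cycles * speedup / (speedup - 1)

# With an assumed 10x accelerator and a 9,000-cycle transfer cost,
# tasks below 10,000 cycles are better left on the CPU.
threshold = break_even_size(9_000, 10)   # 10000.0
```

The same formula applies to energy: substitute the per-cycle energy of each processor and the joules spent moving the data.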

Data Transfer Cost
To reduce time and energy costs, a Shared Virtual Memory with hardware cache coherency—as found in the I6500 CPU—is ideal as it addresses much of the cost of transferring the task. This is because it eliminates the copying of data and the flushing of caches. There are other available techniques to achieve even greater reductions.

The HSA Foundation has developed an environment that supports the integration of heterogeneous processing elements in a system, extending beyond CPUs and GPUs. The HSA intermediate language, HSAIL, provides a common compilation path to heterogeneous Instruction Set Architectures (ISAs) that greatly simplifies system software development. The HSA specifications also define User Mode Queues.

These queues enable tasks to be scheduled and signals to trigger tasks on other processing elements, allowing sequences of tasks to execute with very little overhead between them.
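Conceptually, a user-mode queue is a ring of work packets plus read and write indices that the producer and consumer advance without a kernel call on the fast path. The sketch below is a single-producer, single-consumer toy; real HSA queues use AQL packets and doorbell signals, which this does not model.

```python
class UserModeQueue:
    """Toy single-producer/single-consumer task queue in the spirit
    of HSA User Mode Queues: dispatch writes a packet and bumps the
    write index; no system call is involved on the fast path."""

    def __init__(self, size=16):
        self.packets = [None] * size
        self.read_idx = 0
        self.write_idx = 0

    def dispatch(self, task):
        if self.write_idx - self.read_idx >= len(self.packets):
            raise RuntimeError("queue full")
        self.packets[self.write_idx % len(self.packets)] = task
        self.write_idx += 1   # in hardware, this would ring a doorbell

    def drain(self):
        """Consumer side: run every queued packet in order."""
        results = []
        while self.read_idx < self.write_idx:
            task = self.packets[self.read_idx % len(self.packets)]
            self.read_idx += 1
            results.append(task())
        return results

q = UserModeQueue()
q.dispatch(lambda: 2 + 2)
q.dispatch(lambda: 3 * 3)
```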

Beyond Limitations
Heterogeneous systems offer the opportunity to significantly increase system performance and reduce system power consumption, enabling systems to continue to scale beyond the limitations imposed by ever shrinking process geometries.

Multi-threaded, heterogeneous and coherent CPU clusters such as the MIPS I6500 have the ideal characteristics to sit at the heart of these systems. As such they are well placed to efficiently power the next generation of devices.


Tim Mace is Senior Manager, Business Development, MIPS Processors, Imagination Technologies.

Auto Makers See Opportunity with Embedded Handwriting

Monday, April 10th, 2017

Why handwriting technology in the automotive cockpit will continue to see dramatic growth.

Many people who don’t yet use handwriting technology on phones or tablets as computing input mechanisms may nonetheless already be familiar with digital handwriting technology. There’s a good chance they’ve been introduced to it in what might seem an unlikely place: Their cars.

Figure 1: In the new automotive ecosystem, embedded sensors and display units can communicate with mobile devices inside the car and gather all sorts of external information via the web.

Last year, higher-end auto manufacturers like Audi, Mercedes and Tesla began shipping cars with embedded handwriting technology for controlling GPS systems, entertainment systems and other dashboard controls. Watch a 10-second video showing handwriting at work in an Acura here and learn a bit more about the overall concept here.

But all of this is just the beginning. According to Frost & Sullivan, the market for handwriting recognition (HWR) technology in cars will grow at a rate of more than 30 percent each year through 2020. “The industry is now moving towards controlling the entire infotainment with help from HWR,” the firm adds.

Embedded systems are tough to design: By nature, they’re constrained not only by limited storage space, but also by limited memory and, typically, lower-performance CPUs than general-purpose computing devices. But even bound by these limitations, today’s digital handwriting technology has delivered remarkable accuracy and consistent benefits to the automotive industry. The most recent technology includes the ability to superimpose characters, cursive words or portions of words on top of each other on the touchpad and still accurately recognize input. A keyboard option incorporating smooth typing enables a true multimodal solution. Here are some reasons why handwriting technology in the automotive cockpit will continue to see dramatic growth:

  • Low driver distraction interfaces have evolved to require handwriting. One reason handwriting is a more effective option than voice for controlling GPS or entertainment systems is that cars are often noisy, making it difficult to give instructions reliably. Another is that voice command systems are very difficult to edit, which makes it challenging to revise input or correct recognition errors. Finally, handwriting allows drivers to keep their attention safely focused on the road: today’s tech is designed for use when the driver isn’t looking at what he or she is writing on the touchpad.
  • Multimodal systems are easy for the user to manipulate. Car manufacturers care about customer satisfaction, and drivers today demand a consistent user experience when inputting information—whether they’re doing it by hand, keyboard or voice. Drivers want multiple ways to input that information, depending on what’s most convenient and, more importantly, safe. Consistency is key: system responses to keyboard input need to be consistent with responses to handwritten input. No one wants a different dictionary response to a query when writing by hand rather than keyboarding, for instance. A single multimodal system pre-emptively solves that potential problem.
  • Multimodal is great for the integrator. For systems integrators, multimodal design means integrating with a single technology provider that handles multiple forms of input, instead of integrating several different functional libraries and debugging any adverse interactions. This shortens the development time required for integration and lessens demands on memory resources and storage. Ultimately, integrating a multimodal interface means developing products that are often lower in cost, quicker to market, and easier to test and validate. A big win all around.

Handwriting also wins points for safety and accuracy. The American Automobile Association (AAA) tested voice-based command systems, such as the iPhone’s Siri, and found that they significantly distract drivers. In a worst-case situation, drivers even at the low speed of 25 mph were distracted for up to 27 seconds, during which they travelled more than three football fields in length.
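The distance figure checks out with simple arithmetic (taking a football field as 100 yards):

```python
MPH_TO_M_PER_S = 0.44704        # metres per second in one mile per hour
FOOTBALL_FIELD_M = 91.44        # 100 yards in metres

def distance_in_fields(speed_mph, seconds):
    """Distance covered at a constant speed, in football fields."""
    metres = speed_mph * MPH_TO_M_PER_S * seconds
    return metres / FOOTBALL_FIELD_M

fields = distance_in_fields(25, 27)   # ~3.3 football fields
```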

Handwriting adapts well to multiple situations—e.g., character input when driving, and word input when stopped. Drivers can reach down and direct their cars’ GPS or entertainment systems in dozens of languages (as selected by the OEM), via either cursive or block characters that are easily recognizable, and that can even be written at a tilt—up to over 30 degrees off a level line—and still be recognized. The ability to recognize letters even written at a significant tilt allows for a great deal of human error, which in turn enables increased safety.

Figure 2: Embedded handwriting technology, complemented with voice and other multimodal input options, offers today’s drivers an effective way to enjoy more applications with complex features even as states increase regulation.

Handwriting, in sum, is a natural fit for inclusion in the auto market because it offers an intuitive method to control the automotive cockpit, assures minimum driver distraction, and provides a natural input method and low learning curve. Drivers of all ages can use it, and it offers high recognition accuracy of letters, numbers and gestures.

And the handwriting technology in cars can blossom into a full note-taking application for drivers to use when they’re stopped. This is ideal for road warrior executives who must constantly attend meetings, travel and share their notes.

Handwritten input, or ‘digital ink,’ can now be interpreted as text just as reliably as input from the keyboard and mouse. Furthermore, diagrams such as mind maps, organizational charts, and flow diagrams can be fully converted to digital form in a manner that allows for changes and editing. Today’s technology lets you create content, edit and format that content, create diagrams, input complex math equations, and easily incorporate the interpreted handwriting results into your digital document workflow.

A booming professional services market has emerged to support developers of embedded handwriting technology, too. Handwriting technology vendors are offering in-depth professional engineering services for use cases based upon the SDK packages offered, all the way to complete turnkey subsystem design services.

Handwriting technology is already embedded in millions of cars today. But the most tremendous growth for this market lies ahead in a wide range of embedded applications and IoT devices. For ISVs and OEMs, the ultimate benefit is a massively improved user experience which enhances customer satisfaction and ultimately sales and profits.


Gary Baum is the Vice President of Marketing at MyScript, the source of the most advanced award-winning technology for handwriting recognition and digital ink management. At the Car HMI Concepts and Systems conference, MyScript technology was recognized in the ‘Most Innovative Car HMI Technology’ category.

Read more about the MyScript SDK and other tools for the automotive industry.

Control, Drive, Sense: High-Power Density SiC and GaN Power Conversion Applications

Thursday, March 2nd, 2017

New power switch technologies are key to success with the next generation of motor control, solar inverters, energy storage and electric vehicles. Just as important—the ability to drive these technologies safely and sense them more accurately.

Electricity consumption and generation, which add to our carbon footprint and affect climate change, constitute one of the key problems the world faces. The largest global consumer of electricity is electric motors and the systems they drive. These systems consume more than twice as much electricity as the next largest consumer, lighting. A 2011 International Energy Agency report estimates that electric motor systems account for between 43 and 46 percent of the world’s electricity consumption.

Farther on Less

The need to further shrink our carbon footprint by reducing the CO2 emissions from transportation is a key driver for the electrification of vehicles. With the electrification of vehicles comes the need for them to be able to travel greater distances with less energy consumed. At the same time, we must ensure that the electricity generated for charging these vehicles comes from clean sources. As important as reducing electricity consumption is improving electricity generation methods. Generating energy through renewable resources like the sun requires efficient solar farms that are becoming mainstream in implementations worldwide.

We’ve seen the emergence of Wide Bandgap semiconductor technologies like Silicon Carbide (SiC) and Gallium Nitride (GaN) and the use of power MOSFETs in applications such as solar inverters, motor drives, and electric vehicles. Along with these technologies comes the need for gate drivers capable of driving them efficiently and safely at higher switching rates with less dead time in the system. Sensing current within these systems while operating at these higher switching rates is becoming more challenging.

Moving to these new technologies makes electric motors and driving electronics smaller and lighter. Increasing the range of the electric vehicle and decreasing its charging time becomes possible. Higher switching frequencies in solar inverters, as specified in IEC62109-1, will improve the overall efficiency of the systems as well as reducing the size of the line filters. Industrial automation applications where motors are commonly used, as specified in the variable frequency motor drive standard IEC61800-5, will become less bulky and more efficient, reducing the overall energy footprint.

Greater Robustness, Reliability

Isolation is mandated for safety and operation. Implementing the isolation barriers within these applications without compromising on performance is critical. These systems often have long lifetimes and could be implemented in harsh environments, so high levels of component robustness and reliability are a must.

One example of a solution for driving new power switch technologies is Analog Devices iCoupler® digital isolation integrated with gate drivers such as the ADuM4121 (Figure 1). It is capable of driving these new power switches thanks to its industry-leading low propagation delay of 38 ns typical, allowing faster switching, and its ability to withstand common-mode transients of up to 150 kV/µs during fast turn-on and turn-off events.

Integrating Analog Devices iCoupler digital isolation with industry leading sigma delta analog to digital converters, such as the AD7403, makes it possible to accurately sense the current in high-voltage applications across a smaller shunt resistor, improving system efficiency. This enables the use of higher accuracy shunt-based current measurement architecture rather than Hall Effect systems. Selecting smaller resistors reduces the overall size of the solution.
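The shunt trade-off is first-order Ohm’s law. The numbers below are assumptions for illustration, not figures from the AD7403 datasheet: a converter that can resolve a smaller full-scale voltage lets the designer halve the shunt, and with it the I²R loss.

```python
def shunt_tradeoff(r_shunt_ohms, i_max_amps):
    """Full-scale sense voltage and worst-case dissipation of a
    current-sense shunt (first order: V = I*R, P = I^2 * R)."""
    v_full_scale = i_max_amps * r_shunt_ohms
    p_dissipated = i_max_amps ** 2 * r_shunt_ohms
    return v_full_scale, p_dissipated

# Illustrative numbers at 30 A full-scale current:
v1, p1 = shunt_tradeoff(0.010, 30)   # 0.30 V signal, 9.0 W lost
v2, p2 = shunt_tradeoff(0.005, 30)   # 0.15 V signal, 4.5 W lost
```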

Figure 1: ADuM4121 Driving GaN MOSFET GS66508B

To demonstrate system performance benefits, Analog Devices has developed a new half-bridge GaN evaluation platform in collaboration with GaN Systems, as shown in Figure 2. On this platform the ADuM4121 isolated gate driver drives the GS66508B GaN MOSFET from GaN Systems, which is rated to 650V at 30A. The gate charge requirement of the GS66508B is very low, making it much easier to drive at higher frequencies with a much lower supply voltage on VDD2 of 6V. The ADuM4121 also includes an internal Miller clamp that activates at 2V on the falling edge of the gate-drive output, giving the driven gate a lower-impedance path to reduce the chance of Miller-capacitance-induced turn-on.

Making use of three of these half bridge evaluation boards combined with the Analog Devices Motor Control evaluation platform, a demonstration system showcasing a three-phase inverter driving a three-phase motor was built (Figure 3). Within the three-phase inverter, large currents are being switched at high frequencies that can cause radiated and conducted emissions. To reduce the conducted and radiated emissions in the system while operating efficiently, it is critical to slew the edges of the switching waveforms sufficiently by selecting an appropriate gate resistance. This series resistance can further help with dampening the output ringing by matching the source to the load.
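A first-order way to see the gate-resistor trade-off: peak gate current is roughly the drive voltage over the gate resistance, and the switching edge takes about the total gate charge divided by that current. The component values here are assumptions in the ballpark of a small GaN FET driven at 6 V, not figures from the GS66508B datasheet.

```python
def gate_drive_estimates(q_gate_nc, v_drive, r_gate_ohms):
    """First-order gate-drive estimates.

    q_gate_nc:   total gate charge in nC
    v_drive:     gate-driver supply voltage in V
    r_gate_ohms: external gate resistance in ohms
    Returns (peak gate current in A, switching time in ns);
    note nC / A = ns, so no unit conversion is needed.
    """
    i_peak = v_drive / r_gate_ohms
    t_switch_ns = q_gate_nc / i_peak
    return i_peak, t_switch_ns

# Doubling the gate resistor halves the peak current and doubles the
# edge time, trading switching loss for lower ringing and emissions:
i1, t1 = gate_drive_estimates(6.0, 6.0, 10.0)   # 0.6 A, ~10 ns
i2, t2 = gate_drive_estimates(6.0, 6.0, 20.0)   # 0.3 A, ~20 ns
```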

Figure 2: Replacing an IGBT inverter with a GaN inverter

In this demonstration platform, the ADSP-CM409 generates the PWM signals required to drive the power switches, while its integrated SINC filters allow direct connection of the isolated sigma-delta ADC used to sense the current accurately. The reinforced isolation provided by the isolated gate drivers can withstand up to 5 kVrms, as well as working voltages as high as 849 Vpeak, according to VDE 0884-10. The isolation the AD7403 offers achieves a 5 kVrms withstand with a working voltage of 1250 Vpeak, also per VDE 0884-10.

Figure 3: Three-Phase Inverter Motor Control Platform

Implementing a three-phase inverter in GaN suits systems operating up to 650V. SiC, with its much higher breakdown voltages, is a closer match for systems going as high as 1200V and 1700V, because it provides more margin in three-phase systems with 690Vrms line voltages.
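The margin argument follows from the rectified bus voltage of a three-phase line. To first order (ignoring boost stages, regulation, and transients), the DC bus sits near the peak of the line-to-line voltage:

```python
import math

def dc_bus_from_line(line_to_line_vrms):
    """Peak of a sinusoidal line-to-line voltage: Vpk = Vrms * sqrt(2).
    First-order estimate of the rectified DC bus, ignoring boost
    converters, regulation, and transient overshoot."""
    return line_to_line_vrms * math.sqrt(2)

bus = dc_bus_from_line(690)   # ~976 V: already above a 650 V rating
sic_margin = 1200 - bus       # ~224 V of headroom for a 1200 V SiC device
```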


Hein Marais is a System Application Engineer at Analog Devices, Inc.

Can Autonomous Vehicles Absolve Human Responsibility?

Monday, January 23rd, 2017

In our rush to embrace the latest technology and take advantage of whatever benefits it offers—greater convenience, higher efficiency, improved reliability, lower cost, etc.—we must not neglect human safety.

Transportation has been a major driver of technological innovation (Figure 1) since the inventions of James Watt, the Wright Brothers and automotive pioneers Daimler and Maybach. Over the years, concerns for occupant safety have led to the development of seat belts and air bags in cars, while such things as improvements in vehicle body materials and profiles, and the deployment of reversing alarms on trucks and buses have reduced the risks of accident and injury to pedestrians, cyclists, and other road users.

Figure 1: Mankind’s need to get from one spot to another has inspired innovators from James Watt to Elon Musk. (Left image: James Eckford Lauder (1811–1869), details of artist on Google Art Project, public domain, via Wikimedia Commons; right image: Steve Jurvetson (jurvetson), CC BY 2.0 (http://creativecommons.org/licenses/by/2.0), via Wikimedia Commons.)

In more recent times, the technology of artificial intelligence (AI) has started to pervade the various electronic control systems that are an integral part of modern automotive design and today’s driving experience. However, as we move from advanced driver assistance systems (ADAS) to fully autonomous self-driving vehicles we need to recognize the point at which responsibility for safe operation passes from human to machine. The ethics of the autonomous functionality offered by AI in vehicles has parallels with the “three laws of robotics” science-fiction writer Isaac Asimov postulated in 1942, which mostly aimed to protect humans from harm due to the actions of any robots. In similar fashion, implementing AI in vehicles needs ethical decision-making rules to define behavior that eliminates or reduces harm to humans.

From Fighter Pilots to Car Drivers

A fighter jet represents the pinnacle of aircraft evolution in terms of performance and complexity of operation. Consequently, fighter pilots are assisted in flying them: a comprehensive suite of artificial intelligence algorithms can control almost every aspect of operation, enhancing the pilot’s capability while still allowing him or her to take control when the situation demands it. In the same way, equally powerful, game-changing AI technology in automotive applications must account for the ability to return control of the vehicle to the driver.

Within the auto industry today, many electronic technology companies are focusing on the technical needs of ADAS, developing both adaptive and predictive systems and components that will allow for better and safer driving. ADAS assists the driver or any other agent in charge of the vehicle in a number of ways: It may warn the driver or take actions to reduce risk. It may also improve safety and performance by automating some portion of the control task of operating the vehicle.

In its current state ADAS mainly functions in cooperation with the driver, i.e. by providing a human-to-machine interface, which is part of the control system of the vehicle with the human still maintaining overall responsibility for the vehicle. Over time, it is expected that developments in technology will be successful in wielding ever-greater control of the vehicle, so assistance becomes the norm and driver intervention is reduced. ADAS are ultimately expected to develop further into the kind of autonomous system that will offer the ability to respond more quickly and with greater benefits than when a human agent is in control of the vehicle.

ADAS Demands Component Solutions

The development of electronic components for ADAS, and ultimately for truly autonomous vehicles, is being undertaken by leading component manufacturers worldwide. These companies are typically already experienced in meeting the demanding performance, quality and reliability standards expected by the automotive industry. For example, ON Semiconductor provides robust, AEC-qualified, production part approval process (PPAP) capable products for automotive applications, including the NCV78763 Power Ballast and Dual LED Driver for ADAS front headlights. Freescale Semiconductor is helping to drive the world’s most innovative ADAS solutions with its automotive, MCU, analog and sensors, and digital networking portfolio expertise. The development of its latest FXTH8715 Tire Pressure Monitoring Sensors (TPMS), which integrate an 8-bit microcontroller (MCU), pressure sensor, XZ-axis or Z-axis accelerometer and RF transmitter, was driven by a market requirement for improved safety. AVX, a technology leader in the manufacture of passive electronic components, developed the VCAS & VGAS Series TransGuard® Automotive Multi-Layer Varistors (MLVs) to provide protection against automotive-related transients in ADAS applications. Delphi Connection Systems supports challenging automotive applications that demand robust design and reliability with its high-performance APEX® Series Wire Connectors.

The Dream of Vehicle Autonomy

The electronics industry has long been characterized by continual improvements in performance at ever-decreasing cost. These improvements have allowed technology that was once the preserve of racing cars and the luxury automobile market to percolate down through mid-range vehicles to everyday family cars. Many people, both inside and outside the industry, now dream of a future in which completely autonomous vehicles dominate the world’s roads. They visualize benefits in safety, travel efficiency, comfort, and convenience in vehicles that are programmed to avoid accidents, optimize journey times and costs, and maximize the functional utility of the vehicle. Clearly, amongst these, preventing injury to passengers and others, as well as damage to the vehicle and property, is the highest priority.

Autonomous Vehicles Require Ethical Rules

Current laws regulating road use place the responsibility for safety squarely with the human driver. He or she must ensure that other people, both inside and outside the vehicle, are protected from harm arising from his/her operation of the vehicle. While a car may be viewed as a means of getting people from point A to point B as efficiently as possible, its use at excessive speed or in a dangerous manner resulting in an accident that injures or kills a pedestrian would likely be considered a criminal offense. Indeed, the deliberate use of a vehicle to run down and kill someone would, in most cases, constitute murder.

However, these judgments are rarely black or white, and there may be mitigating circumstances, depending on the situation and people involved. Moreover, while we would not expect an autonomous vehicle to exceed speed limits or undertake dangerous maneuvers in a typical situation, there may be occasions when, like a human operator, it needs to make decisions where the outcome may be questionable. These decisions are where we need to understand the ethics involved to apply appropriate rules. This can be appreciated by considering a few hypothetical scenarios:

1. When traveling at speed in traffic, a human driver might react to an animal jumping out into the road by swerving to avoid it and, in doing so, hitting another car. As the driver, you may have saved that animal but what if the result was an accident in which other people were hurt?

2. What if, instead of an animal in the above example, it was a pedestrian who had stepped into the road, and hitting them was likely to be fatal? Then the action would have saved a human life at the cost of potential injuries to the occupants of the other vehicle.

3. An autonomously driven vehicle confronted with the same situation of a pedestrian stepping into the road might decide it cannot run over that person but may also decide it cannot swerve into another vehicle. Instead, it swerves off the road hitting a wall resulting in serious injuries to the human ‘driver’ of the car and potentially any passengers too.

In the latter situation, the human ‘driver’ is not to blame, but equally, there is an ethical dilemma as to whether any fault lies with the autonomous vehicle. Undoubtedly, as we become more reliant on technologies such as ADAS and ultimately on Autonomous Technology Systems (ATS) the responsibility for operating a vehicle becomes less dependent on the individual driver and shifts to the vehicle itself and therefore to the car manufacturer. Not surprisingly, the automotive industry will not want to accept liability for such risks unless the market recognizes this requirement and establishes an appropriate business model that makes economic sense for the manufacturers and doesn’t result in endless litigation.

Conclusion

Technological solutions are now starting to outpace the real-world situations into which they are being introduced. The deployment of artificial intelligence is challenging the status quo and forcing us to consider ethical questions about how machines should operate and who has control and is, therefore, responsible for their behavior.

This moral issue is certainly true of autonomous vehicles, where ceding control to the vehicle requires AI that follows agreed ethical rules to protect human life. If we are to benefit from improved transportation systems with greater freedom, flexibility, efficiency, and safety, then it is society as a whole, rather than design engineers and vehicle manufacturers, that has to face up to this challenge and take on this responsibility.


Rudy Ramos is the Project Manager for the Technical Content Marketing team at Mouser Electronics, accountable for the timely delivery of the Application and Technology sites from concept to completion. He has 30 years of experience working with electromechanical systems, manufacturing processes, military hardware, and managing domestic and international technical projects. He holds an MBA from Keller Graduate School of Management with a concentration in Project Management. Prior to Mouser, he worked for National Semiconductor and Texas Instruments. Ramos may be reached at rudy.ramos@mouser.com.

GENIVI Alliance Announces New Open Source Vehicle Simulator Project

Tuesday, September 20th, 2016

The GENIVI Alliance, a non-profit alliance focused on developing an open in-vehicle infotainment (IVI) and connectivity software platform for the transportation industry, today announced that the GENIVI Vehicle Simulator (GVS) open source project has launched, with both developer and end-user code available immediately.

The GVS project and initial source code, developed by Elements Design Group, San Francisco and the Jaguar Land Rover Open Software Technology Center in Portland, Ore., provide an open source, extensible driving simulator that helps adopters safely develop and test the user interface of an IVI system under simulated driving conditions.

“While there are multiple potential uses for the application, we believe the GVS is the most comprehensive open source vehicle simulator available today,” said Steve Crumb, executive director, GENIVI Alliance. “Its first use is to test our new GENIVI Development Platform user interface in a virtually simulated environment, to help us identify and execute necessary design changes quickly and efficiently.”

Open to all individuals wishing to collaborate, contribute, or just use the software, the GVS provides a realistic driving experience with a number of unique features including:

  • Obstacles – Obstacles may be triggered by the administrator while driving.  If the driver hits an obstacle in the virtually simulated environment, the event is logged as an infraction that can be reviewed after the driving session.
  • Infraction Logging – A number of infractions can be logged, including running stop signs, running red lights, driving over double yellow lines on an undivided highway, and collisions with terrain, other vehicles, obstacles, etc.
  • Infraction Review – At the end of a driving session, the administrator and driver can review infractions from the most recent session, with screenshots of the infraction along with pertinent vehicle data displayed and saved.
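The infraction workflow described above maps naturally onto a small record type. The following is a hypothetical Python sketch of such a log; the class and field names are our own invention for illustration, not the GVS project's actual data model:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Infraction:
    """One logged event: what happened, when, and the evidence saved for review."""
    kind: str            # e.g. "ran_stop_sign", "collision", "crossed_double_yellow"
    timestamp_s: float   # seconds since the driving session started
    screenshot: str      # path to the captured frame shown during review
    speed_mph: float     # pertinent vehicle data stored with the event

@dataclass
class DrivingSession:
    """Accumulates infractions so administrator and driver can review them afterward."""
    driver: str
    infractions: List[Infraction] = field(default_factory=list)

    def log(self, infraction: Infraction) -> None:
        self.infractions.append(infraction)

# A session where the driver runs a stop sign in the simulated environment.
session = DrivingSession(driver="test-driver")
session.log(Infraction("ran_stop_sign", 42.5, "frames/0421.png", 31.0))

# End-of-session review: replay each infraction with its vehicle data.
for i in session.infractions:
    print(f"{i.timestamp_s:7.1f}s  {i.kind}  ({i.speed_mph} mph)  {i.screenshot}")
```

A real simulator would populate such records from physics-engine collision callbacks rather than manual calls, but the review loop at the end mirrors the session-review feature described above.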

To learn more, review the code, or start setting up your own vehicle simulator, visit projects.genivi.org/gvs.

About GENIVI Alliance

The GENIVI Alliance is a non-profit alliance focused on developing an open in-vehicle infotainment (IVI) and connectivity platform for the transportation industry. The alliance provides its members with a global networking community of more than 140 companies, joining connected car stakeholders with world-class developers in a collaborative environment, resulting in free, open source middleware. GENIVI is headquartered in San Ramon, Calif.

Automotive semiconductor market grows slightly in 2015, ranks shift

Wednesday, June 22nd, 2016

Despite slower growth for the automotive industry and exchange rate fluctuations, the automotive semiconductor market grew at a modest 0.2 percent year over year, reaching $29 billion in 2015, according to IHS (NYSE: IHS), a global source of critical information and insight.

A flurry of mergers and acquisitions last year caused the competitive landscape to shift, including the merger of NXP and Freescale, which created the largest automotive semiconductor supplier in 2015 with a market share of 14.3 percent, IHS said. The acquisition of International Rectifier (IR) helped Infineon overtake Renesas to secure the second-ranked position, with a market share of 9.8 percent. Renesas slipped to third-ranked position in 2015, with a market share of 9.1 percent, followed by STMicroelectronics and Texas Instruments.

“The acquisition of Freescale by NXP created a powerhouse for the automotive market. NXP increased its strength in automotive infotainment systems, thanks to the robust double-digit growth of its i.MX processors,” said Ahad Buksh, automotive semiconductor analyst for IHS Technology. “NXP’s analog integrated circuits also grew by double digits, thanks to the increased penetration rate of keyless-entry systems and in-vehicle networking technologies.”

NXP will now target the machine vision and sensor fusion markets with the S32V family of processors for autonomous functions, according to the IHS Automotive Semiconductor Intelligence Service. Even on the radar front, NXP now has a broad portfolio of long- and mid-range silicon-germanium (SiGe) radar chips, as well as short-range complementary metal-oxide semiconductor (CMOS) radar chips under development. “The fusion of magnetic sensors from NXP with pressure and inertial sensors from Freescale has created a significant sensor supplier,” Buksh said.

The inclusion of IR, and a strong presence in advanced driver assistance systems (ADAS), hybrid electric vehicles and other growing applications helped Infineon grow 5.5 percent in 2015. Infineon’s 77 gigahertz (GHz) radar system integrated circuit (RASIC) chip family strengthened its position in ADAS. Its 32-bit microcontroller (MCU) solutions, based on TriCore architectures, reinforced the company’s position in the powertrain and chassis and safety domains.

The dollar-to-yen exchange rate worked against Renesas’ revenue ranking for the third consecutive year. A major share of Renesas’ business is with Japanese customers and is primarily conducted in yen. Even though Renesas’ automotive semiconductor revenue fell 12 percent when measured in dollars, it actually grew by about 1 percent in yen. Renesas’ strength continues to be its MCU solutions, where the company is still the leading supplier globally.

STMicroelectronics’ automotive revenue declined 2 percent year over year; however, a large part of the decline can be attributed to the euro’s exchange rate against the U.S. dollar, which dropped 20 percent in 2015. STMicroelectronics’ broad-based portfolio and its presence in every growing automotive domain helped the company maintain its revenue as well as it did. Apart from securing multiple design wins with American and European automotive manufacturers, the company is also strengthening its relationships with Chinese auto manufacturers. Radio and navigation solutions from STMicroelectronics were installed in numerous new vehicle models in 2015.

Texas Instruments has thrived in the automotive semiconductor market for the fourth consecutive year. Year-over-year revenue increased by 16.6 percent in 2015. The company’s success story is not based on any one particular vehicle domain. In fact, while all domains have enjoyed double-digit increases, infotainment, ADAS and hybrid-electric vehicles were the primary drivers of growth.

[Figure: IHS automotive semiconductor supplier ranking, 2015]

Other suppliers making inroads in automotive

After the acquisition of CSR, Qualcomm rose from 42nd in 2014 to become the 20th largest supplier of automotive semiconductors in 2015. Qualcomm has a strong presence in cellular baseband solutions with its Snapdragon and Gobi processors, while CSR’s strength lies in wireless application ICs — especially for Bluetooth and Wi-Fi. Qualcomm is now the sixth largest supplier of semiconductors in the infotainment domain.

Moving from 83rd position in 2011 to 37th in 2015, Nvidia has used its experience, and its valuable partnership with Audi, to gain momentum in the automotive market. The non-safety-critical status of the infotainment domain was a logical stepping stone to carve out a position in the automotive market, but now the company is also moving toward ADAS and other safety applications. The company has had particular success with its Tegra processors.

Due to the consolidation of Freescale, Osram entered the top-10 ranking of automotive suppliers for the first time in 2015. Osram is the global leader in automotive lighting and has enjoyed double-digit growth over the past three years, thanks to the increasing penetration of light-emitting diodes (LEDs) in new vehicles.

The Car, Scene Inside and Out: Q & A with FotoNation

Tuesday, May 24th, 2016

Looking at what’s moving autonomous vehicles closer to reality, who’s driving the car—and what’s in the back seat.

Sumat Mehra, senior vice president of marketing and business development at FotoNation, spoke recently with EECatalog about the news that FotoNation and Kyocera have partnered to develop vision solutions for automotive applications.

EECatalog: What are some of the technologies experiencing improvement as the autonomous and semi-autonomous vehicle market develops?

Sumat Mehra, FotoNation: Advanced camera systems, RADAR, LiDAR, and other types of sensors that have been made available for automotive applications have definitely improved dramatically. Image processing, object recognition, scene understanding, and machine learning in general with convolutional neural networks have also seen huge enhancements and impact. Other areas where the autonomous driving initiative is spurring advances include sensor fusion and the car-to-car communication infrastructure.

Figure 1: Sumat Mehra, senior vice president of marketing and business development at FotoNation, noted that the company has already been working on metrics applicable to the computer vision related areas of object detection and scene understanding.

EECatalog: What are three key things embedded designers working on automotive solutions for semi-autonomous and autonomous driving should anticipate?

Mehra, FotoNation: First, advances in machine learning. Second, heterogeneous computing: various general-purpose processors—CPUs, GPUs, DSPs—are all being made available for programming. Hardware developers as well as software engineers will use not only heterogeneous computing, but also other dedicated hardware accelerator blocks, such as our Image Processing Unit (IPU). The IPU enables very high performance at very low latency and with very low energy use. For example, the IPU makes it possible to run 4K video and process it for stabilization at extremely low power—18 milliwatts for 4K 60 frames per second video.

Third, sensors have come down dramatically in price and offer improved signal-to-noise ratios, resolution, and distance-to-subject performance.

We’re also seeing improved middleware, APIs and SDKs. Plus a framework to provide reliable and portable tool kits to build solutions around, much like what happened in the gaming industry.

EECatalog: Will looking to the gaming industry help avoid some re-invention of the wheel?

Mehra, FotoNation: Certainly. The need for compute power is something gaming and the automotive industry have in common, and we’ve seen companies with a gaming pedigree making efforts [in the automotive sector]. And, thanks to the mobile industry, sensors have come down in price to the point where they can be used for much more than having a large sensor with very large optics in one’s pocket. Sensors can now be embedded into bumpers, into side view mirrors, into the front and back ends of cars to enable much more power and vision functionality.

EECatalog: Will the efforts to enable self-driving cars be similar to the space program in that some of the research and development will result in solutions for nonautomotive applications?

Mehra, FotoNation: Yes. For example, collision avoidance and scene understanding are two of the applications that are driving machine learning and advances toward automotive self-driving. These are problems similar to those that robotics and drone applications face. Drones need to avoid trees, power lines, buildings, etc. while in flight, and robots in motion need to be aware of their surroundings and avoid collisions.

And other areas, including factory automation, home automation, and surveillance, will gain from advances taking place in automotive. Medical robots that can help with mobility [are another] example of a market that will benefit from the forward strides of the automotive sector.

EECatalog: How has FotoNation’s experience added to the capabilities the company has today?

Mehra, FotoNation: FotoNation has evolved dramatically. We have been in existence for more than 15 years, and when we started, it was still the era of film cameras. The first problem we started tackling was, “How do you transfer pictures from a device onto a computer or vice versa?”

So we worked in the field of picture transfer protocols, of taking pictures on and off devices. Then, when we came into the digital still camera space through this avenue, we realized there were other imaging problems that needed to be addressed.

We solved problems such as red eye removal through computational imaging. Understanding the pixels, understanding the images, understanding what’s being looked at—and being able to correct for it—relates to advances in facial detection, because the most important thing you want to understand in a scene is a person.

Then, as cameras became available for automotive applications, new problems arose. We drew from all that we had been learning through our experience with the entire gamut of image processing. The metrics FotoNation has been working on in different areas have become applicable to such automotive challenges as object detection and scene understanding.

As pioneers in imaging, we don’t deliver just standard software or an algorithm for any one type of standard processor. We offer a hybrid architecture, where our IPU enables hardware acceleration that does specialized computer vision tasks like object recognition or video image stabilization at much higher performance and much lower power than a CPU. We deliver our IPU as a netlist that goes into a system on chip (SoC). Hybrid HW/SW architectures are important for applications such as automotive, where high performance and low power are both required. Performance is required for low latency, to make decisions as fast as possible; you cannot wait for a car moving at 60 miles per hour to take extra frames (at 16 to 33 milliseconds per frame) to decide whether it is going to hit something. Low power is required to avoid excessive thermal dissipation (heat), which is a serious problem for electronics, especially image sensors.
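Mehra's frame-time point is easy to quantify. The quick calculation below uses only the figures quoted above (60 mph, 16 to 33 ms per frame); the script itself is our illustration, not FotoNation code:

```python
# Distance a car travels during one camera frame at 60 mph.
MPH_TO_MPS = 0.44704          # exact miles-per-hour to meters-per-second factor
speed_mps = 60 * MPH_TO_MPS   # ~26.8 m/s

for frame_ms in (16, 33):     # the 60 fps and 30 fps frame times from the interview
    distance_m = speed_mps * frame_ms / 1000
    print(f"{frame_ms} ms frame -> {distance_m:.2f} m traveled")

# Every extra frame spent deciding costs roughly 0.4-0.9 m of reaction
# distance, which is why the low-latency argument above matters.
```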

EECatalog: When it comes down to choosing FotoNation over another company with which to work, what reasons for selecting FotoNation are given to potential customers?

Mehra, FotoNation: One reason is experience. Our team has more than 1,000 man-years of experience in embedded imaging. Many other companies came from the field of image processing for computers or desktops and then moved into embedded. We have lived and breathed embedded imaging, and the algorithms and solutions that we develop reflect that.

The scope of imaging that we cover ranges all the way from photons to pixels. Experience with the entire imaging subsystem is a key strength:  We understand how the optics, color filters, sensors, processors, software and hardware work independently and in conjunction with each other.

Another reason is that a high proportion of our engineers are PhDs who look at various ways of solving problems, refusing to be pigeonholed into addressing challenges in a single way. We have a strong legacy of technology innovation, demonstrated through our portfolio of close to 700 patents granted and pending.

EECatalog: Had the press release about FotoNation’s working with Kyocera Corporation to develop vision solutions for automotive been longer, what additional information would you convey?

Mehra, FotoNation: More on our IPU, and how OEMs in the automotive area would definitely gain from the architectural advantages it delivers. The IPU is our greatest differentiator, and we would like our audience to understand more about it.

Another thing we would have liked to include is more on the importance of driver identification and biometrics. FotoNation acquired a company for iris biometrics a year ago, Smart Sensors, and we will be [applying] those capabilities toward driver monitoring system capabilities. The first step to autonomous vehicles is semi-autonomous vehicles, where drivers are sitting behind the steering wheel but not necessarily driving the car. And for that first step you need to know who the driver is. What the biometrics bring you is that capability of understanding the driver.

Other metrics include being able to look at the driver to tell whether he is drowsy, paying attention or looking somewhere else—decision making becomes easier when [the vehicle] knows what is going on inside the car, not just outside the car—that is an area where FotoNation is very strong.

EECatalog: In a situation where the car is being shared, a vehicle might have to recognize, for example, “Is this one of the 10 drivers authorized to share the car?”

Mehra, FotoNation: Absolutely, and the car’s behavior should be able to factor in whether it is a teenager or an adult getting behind the wheel, so that risk assessments can begin to happen. All of this additional driver information can assist in better driving and, ultimately, increased driver and pedestrian safety.

And we see [what’s ahead as] not just driver monitoring, but in-cabin monitoring through a 360-degree camera that is sitting inside the cockpit and able to see what is going on: Is there a dog in the back seat, which is about to jump into the front? Is there a child who is getting irate? All of those things can aid the whole experience and reduce the possibility of accidents.

Questions to Ask on the Journey to Autonomous Vehicles

Monday, May 23rd, 2016

In or out of Earth’s orbit, the journey will show similarities to the space race.

What comes first, connected vehicles or smart cities?

Smart cities will come first and play a critical role in the adoption of connected vehicles. The federal government is also investing in these programs in many ways. Through its Smart City Challenge program, USDOT has selected seven finalist cities: San Francisco, Portland, Austin, Denver, Kansas City, Columbus, and Pittsburgh.

Many of the remaining cities/states are finding alternate sources to fund their smart city deployments.

When we look at a co-operative safety initiative such as V2X (Vehicle-to-Everything), we see that it requires a majority of vehicles to support the same technology. Proliferation of V2X is going to take a few years to reach critical mass. This is the reason connected vehicles equipped with V2X are looking at smart city infrastructure as a way to demonstrate the use case scenarios for the “Day One Applications.”

What are the chief pillars of the autonomous vehicle (AV) market?

The three core pillars of the autonomous vehicle market will be:

  • Number Crunching Systems
    • Development of multicore processors has helped fuel the AI engines that are needed for the autonomous vehicle market. More and more companies are using GPUs and multicore processors for their complex algorithms. It is estimated that these systems process 1GB of data every second.
  • ADAS Sensors
    • The cost/performance ratio for ADAS sensors like lidars, radars and cameras has improved significantly over the past couple of years.  All of this will reduce the total cost of the core systems needed for autonomous vehicle systems, making the technology more mainstream.
  • Connectivity and Security
    • Connectivity will play a key role for such systems. Autonomous vehicles depend heavily on information from external sources like the cloud, other vehicles and infrastructure. These systems need to validate their sources and build a secure firewall to protect their information.

Total BOM cost for a complete system will be around $5,000 within the next five years, and the system will add only $20,000 or less to the vehicle’s sticker price. For a relatively small increase, consumers will get numerous benefits, ranging from enhanced safety to stress-free driving. This is one of the reasons why companies like Cruise were acquired at such huge valuations.

What three key events should embedded designers working on automotive solutions for semi-autonomous and autonomous driving anticipate?

  • Sensor Fusion
    • Standards will need to be developed to allow free integration of ADAS sensors, connecting all the various ADAS applications and supporting data sharing between these sensors.
  • Advances in Parallel Computing Inside Automotive Electronics
    • ECU systems inside cars will eventually be replaced with complex parallel computing ADAS platforms. Artificial intelligence engines inside these platforms need to take advantage of parallel computing when processing gigabytes of data per second. Real-time systems that can complete the decision-making process in a split second will make all the difference.
  • Redundancy
    • Finally, the industry needs to create a redundant fault tolerant architecture. When talking about autonomous vehicles, the systems that enable autonomous driving need to have redundancy to ensure the system is always operating as designed.

How will the push to create self-driving cars (similar to what happened in the space race) result in useful technology for other areas?

The drone/surveillance video market will benefit from the push to create self-driving technology. Drones have similar characteristics to self-driving cars, just on a much smaller scale. The complexities around drone airspace management will definitely need some industry rules and support. This market will benefit from the advances and rule-making experience leveraged from self-driving cars.

What was the role of USDOT pilots and other research for enabling the autonomous vehicle market?

The role of the USDOT pilots has been predominantly focused on connected vehicles, and not much has happened yet with autonomous vehicles. The deployment of connected vehicle infrastructure can improve the robustness of the data vehicles receive, and this infrastructure will pave the way for autonomous vehicles. Roadside infrastructure will also play a role in monitoring rogue vehicles.

USDOT is also focusing on creating regulation and policies for autonomous vehicle deployments. Several test tracks around the United States (in California, Michigan, and Florida) have been funded by the USDOT. These proving grounds are set up with miles of paved roads that simulate an urban driving environment.

Many automakers have set 2020 as the goal for automated-driving technology in production models. Pilots and research by USDOT represent a huge reduction in risk for the automotive OEMs.

What else should embedded designers keep in mind when the topic is autonomous vehicles?

  • 100 Million Lines of Code
    • Connected vehicle technology is among the most complex systems mankind has built. It takes about 100 million lines of code to build such a system — more than a space shuttle, an operating system like the Linux kernel, or a smartphone. We recommend that embedded designers rely on well-tested, pre-defined middleware blocks to accelerate their design process.
  • FOTA and SOTA Updates
    • We also recommend that embedded designers build systems that depend heavily on firmware over the air (FOTA) and software over the air (SOTA) systems. We know that cars are going to follow the same trend as smartphones that require frequent software updates. Tesla has set a great example of this process with its updates and has said that its vehicles will constantly improve over time.
  • Aftermarket Systems as a Way to Introduce New Capabilities
    • Finally, embedded designers need to look at aftermarket systems as a way to introduce semi-autonomous features and determine the feasibility and acceptance of these building blocks before they become part of the mainstream.
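As a rough illustration of the FOTA/SOTA recommendation above, an over-the-air client reduces to a version check, an integrity check, and an apply step. This is a generic sketch under our own assumptions (the function names, dotted version scheme, and hash-only integrity check are simplifications; a production system would verify a cryptographic signature from the OEM and support A/B rollback):

```python
import hashlib

def should_update(installed: str, offered: str) -> bool:
    """Compare dotted version strings numerically, so '1.9.0' < '1.10.0'."""
    return tuple(map(int, offered.split("."))) > tuple(map(int, installed.split(".")))

def verify(image: bytes, expected_sha256: str) -> bool:
    """Integrity check before the image is ever applied. A real FOTA client
    would additionally verify a signature, not just a digest."""
    return hashlib.sha256(image).hexdigest() == expected_sha256

# Hypothetical update cycle: check the offered version, verify the payload,
# then hand off to a dual-bank installer so a failed flash can roll back.
image = b"new-firmware"
digest = hashlib.sha256(image).hexdigest()
if should_update("1.9.0", "1.10.0") and verify(image, digest):
    print("staging update 1.10.0")
```

Note the version comparison is numeric per component; naive string comparison would wrongly rank "1.9.0" above "1.10.0", a classic OTA bug.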

Ravi Puvvala is CEO of Savari. With 20+ years of experience in the telecommunications industry, including leading positions at Nokia and Qualcomm Atheros, Puvvala is the founder of Savari and a visionary of the future of mobility. He serves as an advisory member to transportation institutes and government bodies.

The Rise of Ethernet as FlexRay Changes Lanes

Friday, May 20th, 2016

There are five popular protocols for in-vehicle networking. Caroline Hayes examines the structure and merits of each.

Today’s vehicles use a range of technologies, systems and components to make each journey a safe, comfortable, and enjoyable experience. From infotainment systems to keep the driver informed and passengers entertained, to Advanced Driver Assistance Systems (ADAS) to keep road users safe, networked systems communicate within the vehicle. Vehicle systems such as engine control, anti-lock braking and battery management, air bags and immobilizers are integrated into the vehicle’s systems. In the driver cockpit, there are instrument clusters and drowsy-driver detection systems, as well as ADAS back-up cameras, automatic parking and automatic braking systems. For convenience, drivers are used to keyless entry, mirror and window control as well as interior lighting, all controlled via an in-vehicle network. All rely on a connected car and in-vehicle communication networks.

There are five in-vehicle network standards in use today: Local Interconnect Network (LIN), Controller Area Network (CAN), Ethernet, Media Oriented Systems Transport (MOST), and FlexRay.

Evolving Standards
LIN targets control within a vehicle. It is a simple, standard UART interface, allowing sensors and actuators to be implemented, as well as lighting and cooling fans to be easily replaced. The single-wire, serial communications system operates at 19.2-kbit/s, to control intelligent sensors and switches, in windows, for example.
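LIN's master-slave operation can be pictured as a schedule table that the single master walks repeatedly, with each addressed slave filling in the response slot. The sketch below is a toy model only: the frame identifiers and responses are invented, and a real LIN driver would also handle the break/sync fields, checksums, and bus timing at 19.2 kbit/s:

```python
# A LIN master owns a fixed schedule table; slaves speak only when their
# frame identifier is broadcast in the header the master sends.
SCHEDULE = [0x10, 0x23, 0x2C]   # hypothetical frame IDs: window switch, mirror, fan

SLAVE_RESPONSES = {             # data each (simulated) slave node returns
    0x10: b"\x01",              # window switch pressed
    0x23: b"\x00",              # mirror idle
    0x2C: b"\x3F",              # fan speed setting
}

def run_schedule_once():
    """One pass over the schedule: master sends each header in turn,
    and the addressed slave fills the response slot."""
    frames = []
    for frame_id in SCHEDULE:
        data = SLAVE_RESPONSES[frame_id]
        frames.append((frame_id, data))
    return frames

print(run_schedule_once())
```

Because the master alone decides when each frame slot occurs, a LIN bus needs no arbitration hardware, which is what keeps slave nodes cheap.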

Figure 1: Microchip supports all automotive network protocols with devices, development tools and ecosystem for vehicle networking.

This data transfer rate is slower than CAN’s 1-Mbit/s (maximum) operation. CAN is used for high-performance, embedded applications. An evolution of CAN is CAN FD (Flexible Data rate), initiated in 2011 to meet increasing bandwidth needs. It operates at 2-Mbit/s, increasing to 5-Mbit/s when used point-to-point for software downloads. CAN’s higher data rate calls for a two-wire, unshielded twisted-pair cable structure to carry a differential signal.

As well as boosting transmission rates, CAN FD extended the data field from 8 bytes to 64 bytes. Increasing the bit rate is possible because only one node transmits during the data phase, so nodes do not need to stay synchronized.
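The 8-byte-to-64-byte extension is encoded in the frame's 4-bit data length code (DLC): classic CAN uses codes 0 through 8 directly, and CAN FD reuses codes 9 through 15 for the larger sizes defined in ISO 11898-1. A small sketch of that mapping (the helper function is our own illustration, not a library API):

```python
# CAN FD DLC (data length code) to payload length in bytes, per ISO 11898-1.
FD_DLC_TO_LEN = {**{i: i for i in range(9)},   # DLC 0..8 -> 0..8 bytes, as in classic CAN
                 9: 12, 10: 16, 11: 20, 12: 24,
                 13: 32, 14: 48, 15: 64}       # FD-only payload sizes

def dlc_for_payload(n: int) -> int:
    """Smallest DLC whose payload length holds n bytes (unused bytes are padded)."""
    for dlc in sorted(FD_DLC_TO_LEN, key=FD_DLC_TO_LEN.get):
        if FD_DLC_TO_LEN[dlc] >= n:
            return dlc
    raise ValueError("CAN FD payloads max out at 64 bytes")

print(dlc_for_payload(8))    # classic CAN's limit
print(dlc_for_payload(20))   # fits exactly in a 20-byte slot
print(dlc_for_payload(64))   # the FD maximum
```

Note the sizes are not contiguous above 8 bytes: a 25-byte payload, for example, must ride in a 32-byte slot with padding, which protocol designers account for when packing signals.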

LIN debuted at the same time as vehicles saw more sensors and actuators arrive. At this juncture, point-to-point wiring became too heavy, and CAN became too expensive. Summarizing LIN, CAN and CAN FD, Johann Stelzer, Senior Marketing Manager for Automotive Information Systems (AIS), Automotive Product Group, Microchip, says: “CAN and CAN FD have a broadcast quality. Any node can be the master, whereas LIN uses master-slave communication.”

K2L’s Matthias Karcher: CAN FD’s higher payload can add security to the network.

The higher bandwidth of CAN FD allows for security features to be added. “The larger payload can be used to transfer keys with multiple bytes as well as open up secure communications between two devices,” says Matthias Karcher, Senior Manager AIS Marketing Group, at K2L. The Microchip subsidiary provides development tools for automotive networks.

CAN FD’s ability to use an existing wiring harness to transfer more data from one electronic control unit to another, using a backbone or a diagnostic interface, is compelling, says Stelzer. It enables faster download of driver assistance or infotainment control software, for example, making it attractive to carmakers.

Microchip’s Johann Stelzer: Ethernet will evolve from diagnostics to become a communications backbone.

Ethernet as Communications Backbone

Ethernet uses packet data, but at the moment its use is restricted to diagnostics and software downloads. It acts as a bridge network; yet while it is flexible, it is also complex, laments Stelzer. As in-vehicle networks grow, high-speed switching increases, adding complexity: it requires a high-power microcontroller or microprocessor, as well as validation and debugging, which can add to development time.

In the future, asserts Stelzer, Ethernet will be used as the backbone communications between domains, such as safety, power and control, in the vehicle. When connected via a backbone it will be able to exchange software and data quickly, at up to 100-Mbit/s, or 100 times faster than CAN and 50 times faster than CAN FD.

At present, automotive IEEE 802.3 Ethernet operates at 100BaseTX, the predominant Fast Ethernet speed. The next stage is 100BaseT1, which is also 100-Mbit/s Ethernet but runs over a single twisted wire pair. The implementation of Ethernet 100BaseT1 will be big, says Stelzer. “This represents a big jump in bandwidth,” he points out, “with less utilization overhead.” IEEE 802.3bw, finalized in 2015, delivers 100-Mbit/s over a single twisted pair to reduce wiring, promoting the trend of deploying Ethernet in vehicles.
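The bandwidth ratios quoted above are easy to sanity-check. Using the nominal rates from this article (1 Mbit/s CAN, 2 Mbit/s CAN FD, 100 Mbit/s 100BaseT1) and a hypothetical 32 MB firmware image, an idealized transfer-time comparison looks like this (protocol and framing overhead are ignored):

```python
# Nominal bus rates from the article, in Mbit/s.
RATES_MBIT = {"CAN": 1, "CAN FD": 2, "100BaseT1 Ethernet": 100}

def download_seconds(size_mb: float, rate_mbit: float) -> float:
    """Idealized transfer time: megabytes -> megabits, divided by the line rate.
    Real buses lose additional capacity to arbitration and framing."""
    return size_mb * 8 / rate_mbit

size_mb = 32  # a hypothetical ECU firmware image
for bus, rate in RATES_MBIT.items():
    print(f"{bus:>20}: {download_seconds(size_mb, rate):7.2f} s")
# Ethernet comes out 100x faster than CAN and 50x faster than CAN FD,
# matching the ratios quoted above.
```

The minutes-versus-seconds gap is why Stelzer singles out software downloads as the compelling near-term Ethernet use case.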

Figure 2: K2L offers the OptoLyzer MOCCA FD, a multi-bus user interface for CAN FD, CAN and LIN development.

Increased deployment will come about when the development tools are in place. Developers will have to integrate tooling at each point-to-point node in the network. “[The industry] will need good solutions,” he says, “to avoid overhead.” K2L offers evaluation boards, application notes, software, Integrated Design Environment (IDE) support and development tools for standard Ethernet in vehicles. The company will announce support for standard Ethernet T1 next year.

MOST for Media

MOST is a high-speed network predominantly used in vehicle infotainment systems. It addresses all seven layers of the Open Systems Interconnection (OSI) model for data communications: not just the physical and data link layers, but also system services and applications.

The network typically uses a ring structure and can include up to 64 devices. The total bandwidth available for synchronous (streaming) data transmission and asynchronous (packet) data transmission is around 23 MBaud.
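That shared budget means a system designer reserves part of the ring's bandwidth for streaming channels and leaves the remainder for packet data. A minimal sketch of that bookkeeping, with invented channel figures (the helper is illustrative, not a real MOST network service API):

```python
# MOST's roughly 23 MBaud of payload is shared between synchronous
# (streaming) channels and the asynchronous (packet) area. This helper
# is illustrative bookkeeping only, not a real MOST driver call.
TOTAL_KBAUD = 23_000

def remaining_packet_bandwidth(sync_channels_kbaud):
    """Return the bandwidth left for packet data after reserving streams."""
    reserved = sum(sync_channels_kbaud)
    if reserved > TOTAL_KBAUD:
        raise ValueError("synchronous channels exceed the ring's bandwidth")
    return TOTAL_KBAUD - reserved

# Two stereo PCM streams plus one multichannel stream (figures invented).
print(remaining_packet_bandwidth([1_536, 1_536, 6_144]))  # kBaud left for packets
```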

MOST is flexible, with devices able to be added or removed. Any node can become the timing master in the network, controlling the timing of transmission, although the additional parameters can add to complexity. One solution, says Karcher, is for a customer to use the Linux OS and a Linux driver to encapsulate MOST for the application layer. This allows the customer to concentrate on designing differentiation into the product. K2L provides software drivers and libraries for MOST, as well as reference designs for analog front-ends, demonstration kits and evaluation boards. This level of hardware and software support, says Karcher, allows developers to focus on the application. Hardware can connect to MOST and also to CAN and LIN, he continues, adding that tools can connect to and safeguard both system and application, reducing complexity and time-to-market.

The FlexRay Consortium, which was disbanded in 2009, developed FlexRay for on-board vehicle networking. There have not been any new developments in FlexRay, notes Karcher, who believes its use is limited to safety applications. Although K2L supplies tools to test and simulate FlexRay, “in the long run, it is hard to see a future for FlexRay,” says Karcher, citing the fact that there are no new designs or applications.


Caroline Hayes has been a journalist covering the electronics sector for over 20 years. She has worked on many titles, most recently the pan-European magazine EPN.

Vehicle-to-Everything (V2X) Technology Will Be a Literal Life Saver – But What Is It?

Thursday, May 19th, 2016

Increased safety and smarter energy are among the expected results as V2X gets underway: Here’s a look at its progress.

A massive consumer-focused industry like automobiles is up close and personal with people—so up close that safety and driver protection from harm are top of mind for manufacturers. Although human error is the prevailing cause of collisions, creators of technologies used in vehicles have an obvious vested interest in helping lower the distressing statistics. After all, pedestrian deaths rose by 3.1 percent in 2014, according to the National Highway Traffic Safety Administration’s Fatality Analysis Reporting System (FARS). In that year, 726 cyclists and 4,884 pedestrians were killed in motor vehicle crashes. And this damage to innocent bystanders doesn’t include the growing death rate of drivers and their passengers.

Figure 1: Benefits to driver and pedestrian safety, as well as increased power efficiency, are the aims of V2X. (Courtesy Movimento)

Distracted driving accounted for 10 percent of all crash fatalities, killing 3,179 people in 2014, while drowsy driving accounted for 2.6 percent, killing 846. The road carnage is hardly limited to the United States. The International Organization for Road Accident Prevention noted a few years ago that 1.3 million road deaths occur worldwide annually and more than 50 million people are seriously injured. That works out to some 3,500 deaths a day, or about 150 every hour: nearly three people killed on the road every minute.

A Perplexing Stew

Thus it’s about time for increasingly sophisticated technology to step in and help protect distracted drivers from themselves. The centerpiece of what’s coming is so-called Vehicle to Everything (V2X) technology. Once it’s deployed, the advantages of V2X are extensive, alerting drivers to road hazards, the approach of emergency vehicles, pedestrians or cyclists, changing lights, traffic jams and more. In fact, the advantages extend even beyond the freeways and into residential streets where V2X technology helps improve power consumption and safety.

About the only problem with V2X is that it’s emerging as a perplexing stew of acronyms (V2V, V2I, V2D, V2H, V2G, V2P) that require some explanation—and the technology, while important, isn’t universally available quite yet. But the significance of this technology is undeniable, and getting proficient in understanding V2X is valuable in tracking future vehicle features that will link cars to the world around them and make driving safer in the process.

Here’s an overview of the elements of V2X and predictions for when it will hit the roads, from the soonest to appear to the last.

Vehicle to Vehicle (V2V)

Vehicle to Vehicle (V2V) communication is a system that enables cars to talk to each other via Dedicated Short-Range Communication (DSRC), with the primary goal of wirelessly exchanging speed and position data so that drivers can be warned to take immediate action to avoid a collision. Also termed car-to-car communication, the technology makes driving much safer by alerting one vehicle to the presence of others. An embedded or aftermarket V2V module in the car allows vehicles to broadcast their position, speed, steering wheel position, brake status and other related data by DSRC to other vehicles in close proximity.
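Conceptually, each broadcast is a small, fixed-schema status message sent several times a second. A hypothetical sketch of such a message (the field names and the JSON encoding are illustrative only; production DSRC stacks use SAE J2735 messages with compact ASN.1 encodings):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class V2VBroadcast:
    """Illustrative V2V status message; fields mirror the data the
    article says vehicles broadcast over DSRC (not a real J2735 schema)."""
    vehicle_id: str
    latitude: float
    longitude: float
    speed_mps: float
    heading_deg: float
    steering_angle_deg: float
    brake_active: bool

    def encode(self) -> bytes:
        # Real DSRC stacks use compact binary encodings; JSON stands
        # in here purely for readability.
        return json.dumps(asdict(self)).encode()

msg = V2VBroadcast("veh-42", 37.7749, -122.4194, 13.4, 270.0, -2.5, False)
decoded = json.loads(msg.encode())
print(decoded["speed_mps"])
```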

Clearly, V2V is expected to reduce vehicle collisions and crashes. It’s likely that this technology will enable multiple levels of autonomy, delivering assisted driver services like collision warnings but with the ultimate responsibility still belonging to the driver. V2V relies on DSRC, which is still in its infancy because the need remains to address security, mutual authentication and dynamic vehicle issues.

V2V is already making its way into new cars. For example, Toyota developed a communicating radar cruise control that uses V2V to make it easier for preceding and following vehicles to keep a safe distance apart. This is an element in a new “intelligent transportation system” that the company said was initially available at the end of 2015 on a few models in Japan. Meanwhile, 16 European vehicle manufacturers and related vendors launched the Car 2 Car Communication Consortium, which intends to speed time to market for V2V and V2I solutions and to ensure that products are interoperable. Plans call for “earliest possible” deployment. 

One key issue with V2V is that to be most effective, it should reside in all cars on the road. Nevertheless, this technology has to start somewhere, so Mercedes-Benz announced that its 2017 Mercedes E Class would be equipped with V2V, one of the first such solutions to go into production.

Vehicle to Device (V2D)

Vehicle to Device (V2D) communication is a system that links cars to many external receiving devices but will be particularly heralded by two-wheeled commuters.  It enables cars to communicate via DSRC with the V2D device on the cycle, sending an alert of traffic ahead. Given the fact that biking to work is the fastest-growing mode of transportation, increasing 60 percent in the past decade, V2D can potentially help prevent accidents. 

Although bicycle commuting is healthier than sitting in a car, issues like dark streets in the evening and heavy traffic flow make this mode problematic when it comes to accident potential.  Although less healthful, traveling by motorcycle and other two-wheel devices also has an element of risk because larger vehicles on the road tend to dominate.

V2D is tied to V2V because both depend on DSRC, so V2D should begin to pop up after V2V rolls off the assembly line in 2017 and later. It will likely appear as aftermarket products for bicycles, motorcycles and other such vehicles starting in 2018. Quite a few crowd-funded efforts have spurred the creation of V2D products, as have government grants like the U.S. Department of Transportation’s (DOT) Smart City Challenge, which pledges up to $40 million in funding to the winner for creating the nation’s most tech-savvy transportation network in a municipality. Finalists (Denver, Austin, Columbus, Kansas City, Pittsburgh, Portland, San Francisco) have already been chosen and are busy producing proposals.

DOT has other initiatives aimed at encouraging the creation of various V2X technologies. V2D is one of the application areas in DOT’s IntelliDrive program, a joint public/private effort to enhance safety and provide traffic management and traveler information. The goal is the development of applications in which warnings are transmitted to various devices such as cell phones or traffic control devices.

Vehicle to Pedestrian (V2P)

Vehicle to Pedestrian (V2P) communication is a system that enables cars and pedestrians to communicate, and it will particularly benefit the elderly, school children and physically challenged pedestrians. V2P establishes a communications mechanism between pedestrians’ smartphones and vehicles and acts as an advisory to avoid imminent collisions.

The concept is simple: V2P will reduce road accidents by alerting pedestrians crossing the road of approaching vehicles and vice versa. It’s expected to become a smartphone feature beginning in 2018 but, like V2D, requires the presence of DSRC capabilities in vehicles.  Ultimately, the DSRC version of V2P will be replaced by a higher-performance LTE version starting in 2020.
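The advisory check behind that concept can be pictured as a time-to-conflict threshold computed from the two reported positions. A minimal sketch under simplified straight-line assumptions (the threshold and geometry are invented for illustration; real systems fuse GPS, heading and map data):

```python
import math

# Warn when the vehicle is projected to reach the pedestrian's
# position within this many seconds (illustrative threshold).
WARN_SECONDS = 5.0

def time_to_reach(vehicle_xy, pedestrian_xy, vehicle_speed_mps):
    """Seconds until the vehicle covers the straight-line gap, or None if stopped."""
    if vehicle_speed_mps <= 0:
        return None
    gap = math.dist(vehicle_xy, pedestrian_xy)
    return gap / vehicle_speed_mps

def should_warn(vehicle_xy, pedestrian_xy, vehicle_speed_mps):
    t = time_to_reach(vehicle_xy, pedestrian_xy, vehicle_speed_mps)
    return t is not None and t < WARN_SECONDS

print(should_warn((0, 0), (60, 0), 15.0))   # 4 s away -> True
print(should_warn((0, 0), (200, 0), 15.0))  # ~13 s away -> False
```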

While there aren’t any V2P solutions currently available, this area is a hotbed of development, particularly when one includes the full gamut of possible technologies and includes multiple vehicle types such as public transit. Given the significant role that V2P can play in preventing damage to humans, the U.S. Department of Transportation maintains and updates a database of technologies in process. Of the current 86 V2P technologies listed, none are yet commercially available but a number are currently undergoing field tests.

A particularly fruitful approach to developing effective V2P products is a research partnership between telecom and automotive companies. For example, Honda R&D Americas and Qualcomm collaborated on a DSRC system that sends warnings to both a car’s heads-up display and a pedestrian’s device screen when there is a chance of colliding. Although the project won an award as an outstanding transportation system, there’s no word yet when this might appear commercially.

In another collaboration, Hitachi Automotive Systems teamed with Clarion, the Japan-based manufacturer of in-car infotainment products, navigation systems and more on a V2P solution that predicts pedestrian movements and rapidly calculates optimum speed patterns in real time. Undergoing field testing, this is another promising product to look for in the future.

Vehicle to Home (V2H)

Vehicle to Home (V2H) communication involves linkage between a vehicle and the owner’s domicile, sharing the task of providing energy.  During emergency or power outages, the vehicle’s battery can be used as a power source. Given the reality of severe weather and its effect on power supplies, this capability has been needed for a while, with disruptions in power after storms and other weather emergencies impacting many thousands of U.S. families annually.

V2H is a two-way street, with the vehicle powering the home and vice versa based on cost and demand for home energy. The car battery is used for energy storage, with charging taking place when energy is cheap or “green.”

During power outages, power from a vehicle’s battery can be used to run domestic appliances and power can be drawn from the vehicle when utility prices are high. In areas with frequent power outages, the battery can be used to buffer energy to avoid flickering, and it can be used as an emergency survival kit.

It’s expected that V2H will kick into higher gear in 2019, playing a significant role when plug-in hybrid electric vehicles (PHEVs) and electric vehicles (EVs) make up over 20 percent of the total new cars sold in the United States. But a few projects have been underway for a while, such as a Nissan V2H solution that was tested widely in Japan and launched in 2012 as the “Leaf to Home” V2H Power Supply System. Relying on an EV power station unit from Nichicon, this was one of the first backup power supply systems using an EV’s large-capacity battery.

Other Japanese car manufacturers have dabbled in these systems, including Mitsubishi and Toyota. Mitsubishi announced in 2014 that its Outlander PHEV vehicle could be used to power homes—only in Japan so far. There are other approaches to utilizing an EV’s battery for home use, such as some currently available devices that can not only charge a battery, but also supply the stored electricity to the home. One example is the SEVD-VI cable from Sumitomo Electric.

Vehicle to Grid (V2G)

Vehicle to Grid (V2G) communication is a system in which EVs communicate with the power grid to return electricity to the grid or throttle the vehicle’s charging rate. It will be an element in some EVs like plug-in models and is used as a power grid modulator to dynamically adjust energy demand.

A benefit of V2G is helping maintain the grid level and acting as a renewable power source alternative. This system could determine the best time to charge car batteries and enable energy flow in the opposite direction for shorter periods when the public grid is in need of power and the vehicle is not.
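That scheduling idea reduces to a simple rule: charge when the grid is slack, feed power back when it is stressed, and never drain the battery below a reserve. A minimal sketch with invented thresholds (no real utility signaling protocol is implied):

```python
def v2g_setpoint(grid_demand_ratio, state_of_charge,
                 soc_floor=0.30, demand_high=0.85, demand_low=0.40):
    """Return a charge rate in [-1, 1]: positive draws from the grid,
    negative feeds the grid. All thresholds are illustrative."""
    if grid_demand_ratio >= demand_high and state_of_charge > soc_floor:
        return -0.5   # grid is stressed: feed power back
    if grid_demand_ratio <= demand_low and state_of_charge < 1.0:
        return 1.0    # cheap/green energy: charge at full rate
    return 0.0        # otherwise idle to preserve the battery

print(v2g_setpoint(0.90, 0.80))  # stressed grid, healthy battery -> -0.5
print(v2g_setpoint(0.20, 0.50))  # off-peak -> 1.0
print(v2g_setpoint(0.90, 0.25))  # stressed grid, battery near floor -> 0.0
```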

Given its key role in battery charging, this V2X technology is appearing soon, in affordable EVs like the Tesla Model 3, which can now be ordered in advance. Other companies and projects, like Faraday Future, NextEV, the Apple Car, Uber and Lyft, plan to launch EVs between 2017 and 2020. V2G is an extremely relevant area because it creates the obvious need for cities to start thinking and planning now about how they will support a large-scale EV society. Otherwise, energy utility companies will be in a panic and may resort to drastic measures such as rationing energy per household.

Figure 2: V2X technology will be part of the Tesla Model 3. [Photo: By Steve Jurvetson [CC BY 2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons]

Other activity in the V2G area includes a partnership between Pacific Gas and Electric Company (PG&E) and BMW to test the ability of EV batteries to provide services to the grid. The automaker created a large energy storage unit from re-utilized lithium-ion batteries while enlisting 100 San Francisco Bay Area BMW i3 drivers to take part in what’s called the ChargeForward program. A pilot study, this now-underway project is giving qualifying i3 drivers up to $1,540 in charging incentives.

Another intriguing effort involves Nissan and Spain’s largest electric utility, which collaborated on a mass-market V2G system that was initially demonstrated in Spain last year but is aimed at the European market.  Like the BMW/PG&E program, this also involves re-purposed EV batteries for stationary energy storage.  V2G is a very promising market pegged to surpass $190 million worldwide by 2022 according to industry analysts.

Vehicle to Infrastructure (V2I)

Vehicle to Infrastructure (V2I) communication will likely be the last V2X system to appear. It’s the wireless exchange of critical safety and operational data between vehicles and roadway infrastructure, like traffic lights. V2I alerts drivers of upcoming red lights and prevents traffic congestion. The system will streamline traffic and enable drivers to maneuver away from heavy traffic flow.

Despite the enormous impact this technology will have on driver safety, the degree of infrastructure investment required is so massive that it will take time to implement.  Some question whether DSRC-based V2I with its questionable return on investment will ever take place, but there is more hope for LTE-based V2I.

This approach might play a key role starting in 2020 and be rolling along by 2022. Nevertheless, there are promising V2I projects already happening in countries where it’s easier to conduct massive public initiatives, such as China. A field test being run on public roads in Taicang, Jiangsu Province, China, involves buses that receive road condition data and thus can avoid stopping at lights when safe. Tongji University and Denso Corporation developed this project. 

Another recent collaboration involves Siemens and Cohda Wireless to develop intelligent road signs and traffic lights in which critical safety and operational data is exchanged with equipped vehicles.  In the United States, DOT is highly involved in working with state and local transportation agencies along with researchers and private-sector stakeholders to develop and test V2I technologies through test beds and pilot deployments.

Communication is the next frontier of car technology, and it is the bedrock of all the V2X capabilities appearing in the future. And none too soon. According to the World Health Organization (WHO), traffic fatalities will continue to climb across the globe as vehicles become more prevalent; WHO projects a 67 percent increase through 2020. Having smarter, safer cars and communications systems for the drivers, pedestrians and cyclists who can be impacted by these vehicles could turn this trend around. Add to that the aspects of flexible electricity storage and usage, and V2X becomes an even more promising technology.


Mahbubul Alam is CTO and CMO of Movimento Group, and a frequent author, speaker and multiple patent holder in the areas of the software-defined car and all things IoT. He was previously a strategist for Cisco’s Internet-of-Things (IoT) and Machine-to-Machine (M2M) platforms. Read more from Mahbubul at http://mahbubulalam.com/blog/.

Next Page »