Posts Tagged ‘top-story’

GENIVI Alliance Announces New Open Source Vehicle Simulator Project

Tuesday, September 20th, 2016

The GENIVI Alliance, a non-profit alliance focused on developing an open in-vehicle infotainment (IVI) and connectivity software platform for the transportation industry, today announced that the GENIVI Vehicle Simulator (GVS) open source project has launched, with both developer and end-user code available immediately.

The GVS project and initial source code, developed by Elements Design Group of San Francisco and the Jaguar Land Rover Open Software Technology Center in Portland, Ore., provide an open source, extensible driving simulator that helps adopters safely develop and test the user interface of an IVI system under simulated driving conditions.

“While there are multiple potential uses for the application, we believe the GVS is the most comprehensive open source vehicle simulator available today,” said Steve Crumb, executive director, GENIVI Alliance. “Its first use is to test our new GENIVI Development Platform user interface in a virtually simulated environment, to help us identify and execute necessary design changes quickly and efficiently.”

Open to all individuals wishing to collaborate, contribute, or just use the software, the GVS provides a realistic driving experience with a number of unique features including:

  • Obstacles – Obstacles may be triggered by the administrator while driving.  If the driver hits an obstacle in the virtually simulated environment, the event is logged as an infraction that can be reviewed after the driving session.
  • Infraction Logging – A number of infractions can be logged, including running stop signs, running red lights, driving over double yellow lines on an undivided highway, and collisions with terrain, other vehicles, obstacles, etc.
  • Infraction Review – At the end of a driving session, the administrator and driver can review infractions from the most recent session, with screenshots of the infraction along with pertinent vehicle data displayed and saved.

To learn more, review the code, or start setting up your own vehicle simulator, visit projects.genivi.org/gvs.

About GENIVI Alliance

The GENIVI Alliance is a non-profit alliance focused on developing an open in-vehicle infotainment (IVI) and connectivity platform for the transportation industry. The alliance provides its members with a global networking community of more than 140 companies, joining connected car stakeholders with world-class developers in a collaborative environment, resulting in free, open source middleware. GENIVI is headquartered in San Ramon, Calif.

Automotive semiconductor market grows slightly in 2015, ranks shift

Wednesday, June 22nd, 2016

Despite slower growth for the automotive industry and exchange rate fluctuations, the automotive semiconductor market grew at a modest 0.2 percent year over year, reaching $29 billion in 2015, according to IHS (NYSE: IHS), a global source of critical information and insight.

A flurry of mergers and acquisitions last year caused the competitive landscape to shift, including the merger of NXP and Freescale, which created the largest automotive semiconductor supplier in 2015 with a market share of 14.3 percent, IHS said. The acquisition of International Rectifier (IR) helped Infineon overtake Renesas to secure the second-ranked position, with a market share of 9.8 percent. Renesas slipped to third-ranked position in 2015, with a market share of 9.1 percent, followed by STMicroelectronics and Texas Instruments.

“The acquisition of Freescale by NXP created a powerhouse for the automotive market. NXP increased its strength in automotive infotainment systems, thanks to the robust double-digit growth of its i.MX processors,” said Ahad Buksh, automotive semiconductor analyst for IHS Technology. “NXP’s analog integrated circuits also grew by double digits, thanks to the increased penetration rate of keyless-entry systems and in-vehicle networking technologies.”

NXP will now target the machine vision and sensor fusion markets with the S32V family of processors for autonomous functions, according to the IHS Automotive Semiconductor Intelligence Service. Even on the radar front, NXP now has a broad portfolio of long- and mid-range silicon-germanium (SiGe) radar chips, as well as short-range complementary metal-oxide semiconductor (CMOS) radar chips under development. “The fusion of magnetic sensors from NXP, with pressure and inertial sensors from Freescale, has created a significant sensor supplier,” Buksh said.

The inclusion of IR, and a strong presence in advanced driver assistance systems (ADAS), hybrid electric vehicles and other growing applications helped Infineon grow 5.5 percent in 2015. Infineon’s 77 gigahertz (GHz) radar system integrated circuit (RASIC) chip family strengthened its position in ADAS. Its 32-bit microcontroller (MCU) solutions, based on TriCore architectures, reinforced the company’s position in the powertrain and chassis and safety domains.

The dollar-to-yen exchange rate worked against Renesas’ revenue ranking for the third consecutive year. A major share of Renesas’ business is with Japanese customers and is primarily conducted in yen. Even though Renesas’ automotive semiconductor revenue fell 12 percent when measured in dollars, it actually grew by about 1 percent in yen. Renesas’ strength continues to be its MCU solutions, where the company is still the leading supplier globally.

STMicroelectronics’ automotive revenue declined 2 percent year over year; however, a large part of the decline can be attributed to the euro’s 20 percent drop against the U.S. dollar in 2015. STMicroelectronics’ broad-based portfolio and its presence in every growing automotive domain helped the company maintain its revenue as well as it did. Apart from securing multiple design wins with American and European automotive manufacturers, the company is also strengthening its relationships with Chinese auto manufacturers. Radio and navigation solutions from STMicroelectronics were installed in numerous new vehicle models in 2015.

Texas Instruments has thrived in the automotive semiconductor market for the fourth consecutive year. Year-over-year revenue increased by 16.6 percent in 2015. The company’s success story is not based on any one particular vehicle domain. In fact, while all domains have enjoyed double-digit increases, infotainment, ADAS and hybrid-electric vehicles were the primary drivers of growth.

[Figure: IHS automotive semiconductor supplier ranking, 2015]

Other suppliers making inroads in automotive

After the acquisition of CSR, Qualcomm rose from its 42nd ranking in year 2014, to become the 20th largest supplier of automotive semiconductors in 2015. Qualcomm has a strong presence in cellular baseband solutions, with its Snapdragon and Gobi processors; while CSR’s strength lies in wireless application ICs — especially for Bluetooth and Wi-Fi. Qualcomm is now the sixth largest supplier of semiconductors in the infotainment domain.

Moving from 83rd position in 2011 to 37th in 2015, nVidia has used its experience, and its valuable partnership with Audi, to gain momentum in the automotive market. The non-safety critical status of the infotainment domain was a logical stepping stone to carve out a position in the automotive market, but now the company is also moving toward ADAS and other safety applications. The company has had particular success with its Tegra processors.

Due to the consolidation of Freescale, Osram entered the top-10 ranking of automotive suppliers for the first time in 2015. Osram is the global leader in automotive lighting and has enjoyed double-digit growth over the past three years, thanks to the increasing penetration of light-emitting diodes (LEDs) in new vehicles.

The Car, Scene Inside and Out: Q & A with FotoNation

Tuesday, May 24th, 2016

Looking at what’s moving autonomous vehicles closer to reality, who’s driving the car—and what’s in the back seat.

Sumat Mehra, senior vice president of marketing and business development at FotoNation, spoke recently with EECatalog about the news that FotoNation and Kyocera have partnered to develop vision solutions for automotive applications.

EECatalog: What are some of the technologies experiencing improvement as the autonomous and semi-autonomous vehicle market develops?

Sumat Mehra, FotoNation: Advanced camera systems, RADAR, LiDAR, and other types of sensors that have been made available for automotive applications have definitely improved dramatically. Image processing, object recognition, scene understanding, and machine learning in general with convolutional neural networks have also seen huge enhancements and impact. Other areas where the autonomous driving initiative is spurring advances include sensor fusion and the car-to-car communication infrastructure.

Figure 1: Sumat Mehra, senior vice president of marketing and business development at FotoNation, noted that the company has already been working on metrics applicable to the computer vision related areas of object detection and scene understanding.

EECatalog: What are three key things embedded designers working on automotive solutions for semi-autonomous and autonomous driving should anticipate?

Mehra, FotoNation: One, advances in machine learning. Two, heterogeneous computing: various general-purpose processors (CPUs, GPUs, DSPs) are all being made available for programming. Hardware developers as well as software engineers will use not only heterogeneous computing, but also other dedicated hardware accelerator blocks, such as our Image Processing Unit (IPU). The IPU enables very high performance at very low latency and with very low energy use. For example, the IPU makes it possible to run 4k video and process it for stabilization at extremely low power: 18 milliwatts for 4k 60 frames-per-second video.

Third, sensors have come down dramatically in price and offer improved signal-to-noise ratios, resolution, and distance-to-subject performance.

We’re also seeing improved middleware, APIs and SDKs, plus frameworks that provide reliable and portable tool kits to build solutions around, much like what happened in the gaming industry.

EECatalog: Will looking to the gaming industry help avoid some re-invention of the wheel?

Mehra, FotoNation: Certainly. The need for compute power is something gaming and the automotive industry have in common, and we’ve seen companies with a gaming pedigree making efforts [in the automotive sector]. And, thanks to the mobile industry, sensors have come down in price to the point where they can be used for much more than having a large sensor with very large optics in one’s pocket. Sensors can now be embedded into bumpers, into side view mirrors, into the front and back ends of cars to enable much more powerful vision functionality.

EECatalog: Will the efforts to enable self-driving cars be similar to the space program in that some of the research and development will result in solutions for nonautomotive applications?

Mehra, FotoNation: Yes. For example, collision avoidance and scene understanding are two of the applications that are driving machine learning and advances toward automotive self-driving. These are problems similar to those that robotics and drone applications face. Drones need to avoid trees, power lines, buildings, etc. while in flight, and robots in motion need to be aware of their surroundings and avoid collisions.

And other areas, including factory automation, home automation, and surveillance, will gain from advances taking place in automotive. Medical robots that can help with mobility [are another] example of a market that will benefit from the forward strides of the automotive sector.

EECatalog: How has FotoNation’s experience added to the capabilities the company has today?

Mehra, FotoNation: FotoNation has evolved dramatically. We have been in existence for more than 15 years, and when we started, it was still the era of film cameras. The first problem we started tackling was, “How do you transfer pictures from a device onto a computer or vice versa?”

So we worked in the field of picture transfer protocols, of taking pictures on and off devices. Then, when we came into the digital still camera space through this avenue, we realized there were other imaging problems that needed to be addressed.

We solved problems such as red eye removal through computational imaging. Understanding the pixels, understanding the images, understanding what’s being looked at—and being able to correct for it—relates to advances in facial detection, because the most important thing you want to understand in a scene is a person.

Then, as cameras became available for automotive applications, new problems arose. We drew from all that we had been learning through our experience with the entire gamut of image processing. The metrics FotoNation has been working on in different areas have become applicable to such automotive challenges as object detection and scene understanding.

As pioneers in imaging, we don’t deliver just standard software or an algorithm to the software for any one type of standard processor. We offer a hybrid architecture, where our IPU enables hardware acceleration that does specialized computer vision tasks like object recognition or video image stabilization at much higher performance and much lower power than a CPU.   We deliver our IPU as a netlist that goes into a system on chip (SOC).  Hybrid HW/SW architectures are important for applications such as automotive where high performance and low power are both required. Performance is required for low latency, to make decisions as fast as possible; you cannot wait for a car moving at 60 miles per hour to take extra frames (at 16 to 33 milliseconds per frame) to decide whether it is going to hit something.  Low power is required to avoid excessive thermal dissipation (heat), which is a serious problem for electronics, especially image sensors.

EECatalog: When it comes down to choosing FotoNation over another company with which to work, what reasons for selecting FotoNation are given to potential customers?

Mehra, FotoNation: One reason is experience. Our team has more than 1000 man-years of experience in embedded imaging. A lot of other companies came from the field of image processing for computers or desktops and then moved into embedded. We have lived and breathed embedded imaging, and the algorithms and solutions that we develop reflect that.

The scope of imaging that we cover ranges all the way from photons to pixels. Experience with the entire imaging subsystem is a key strength:  We understand how the optics, color filters, sensors, processors, software and hardware work independently and in conjunction with each other.

Another reason is that a high proportion of our engineers are PhDs who look at various ways of solving problems, refusing to be pigeonholed into addressing challenges in a single way. We have a strong legacy of technology innovation, demonstrated through our portfolio of close to 700 granted and pending patents.

EECatalog: Had the press release about FotoNation’s working with Kyocera Corporation to develop vision solutions for automotive been longer, what additional information would you convey?

Mehra, FotoNation: More on our IPU, and how OEMs in the automotive area would definitely gain from the architectural advantages it delivers. The IPU is our greatest differentiator, and we would like our audience to understand more about it.

Another thing we would have liked to include is more on the importance of driver identification and biometrics. FotoNation acquired an iris biometrics company, Smart Sensors, a year ago, and we will be [applying] those capabilities to driver monitoring systems. The first step to autonomous vehicles is semi-autonomous vehicles, where drivers are sitting behind the steering wheel but not necessarily driving the car. And for that first step you need to know who the driver is. What the biometrics bring you is that capability of understanding the driver.

Other metrics include being able to look at the driver to tell whether he is drowsy, paying attention or looking somewhere else—decision making becomes easier when [the vehicle] knows what is going on inside the car, not just outside the car—that is an area where FotoNation is very strong.

EECatalog: In a situation where the car is being shared, a vehicle might have to recognize, for example, “Is this one of the 10 drivers authorized to share the car?”

Mehra, FotoNation: Absolutely, and the car’s behavior should be able to factor in whether it is a teenager or an adult getting behind the wheel, so that risk assessments can begin to happen. All of this additional driver information can assist in better driving, and ultimately increased driver and pedestrian safety.

And we see [what’s ahead as] not just driver monitoring, but in-cabin monitoring through a 360-degree camera that is sitting inside the cockpit and able to see what is going on: Is there a dog in the back seat, which is about to jump into the front? Is there a child who is getting irate? All of those things can aid the whole experience and reduce the possibility of accidents.

Questions to Ask on the Journey to Autonomous Vehicles

Monday, May 23rd, 2016

In or out of Earth’s orbit, the journey will show similarities to the space race.

What comes first, connected vehicles or smart cities?

Smart cities will come first and play a critical role in the adoption of connected vehicles. The federal government is also investing money into these programs in many ways. USDOT has selected seven finalist cities, San Francisco, Portland, Austin, Denver, Kansas City, Columbus and Pittsburgh, through its Smart City Challenge program.

Many of the remaining cities/states are finding alternate sources to fund their smart city deployments.

When we look at a cooperative safety initiative such as V2X (Vehicle-to-Everything), we see that it requires a majority of vehicles to support the same technology. Proliferation of V2X is going to take a few years to reach critical mass. This is the reason connected vehicles equipped with V2X are looking at smart city infrastructure as a way to demonstrate the use case scenarios for the “Day One Applications.”

What are the chief pillars of the autonomous vehicle (AV) market?

The three core pillars of the autonomous vehicle market will be:

  • Number Crunching Systems
    • Development of multicore processors has helped fuel the AI engines that are needed for the autonomous vehicle market. More and more companies are using GPUs and multicore processors for their complex algorithms. It is estimated that these systems process 1GB of data every second.
  • ADAS Sensors
    • The cost/performance ratio for ADAS sensors like lidars, radars and cameras has improved significantly over the past couple of years.  All of this will reduce the total cost of the core systems needed for autonomous vehicle systems, making the technology more mainstream.
  • Connectivity and Security
    • Connectivity will play a key role for such systems. Autonomous vehicles depend heavily on information from external sources like the cloud, other vehicles and infrastructure. These systems need to validate their sources and build a secure firewall to protect their information.

Total BOM for a complete system in the next five years will be around $5,000, and the system will add $20,000 or less to the vehicle’s sticker price. For a relatively small increase, consumers will get numerous benefits, ranging from enhanced safety to stress-free driving. This is one of the reasons companies like Cruise got acquired at such huge valuations.

What three key events should embedded designers working on automotive solutions for semi-autonomous and autonomous driving anticipate?

  • Sensor Fusion
    • Standards will need to be developed to allow free integration of ADAS sensors, connecting all the various ADAS applications and supporting data sharing between these sensors.
  • Advances in Parallel Computing Inside Automotive Electronics
    • ECU systems inside the cars will eventually be replaced with complex parallel computing ADAS platforms. Artificial intelligence engines inside these platforms need to take advantage of parallel computing when processing gigabytes of data per second. Real-time systems that can complete the decision-making process in a split second will make all the difference.
  • Redundancy
    • Finally, the industry needs to create a redundant fault tolerant architecture. When talking about autonomous vehicles, the systems that enable autonomous driving need to have redundancy to ensure the system is always operating as designed.

How will the push to create self-driving cars (similar to what happened in the space race) result in useful technology for other areas?

The drone/surveillance video market will benefit from the push to create self-driving technology. Drones have similar characteristics to self-driving cars, just on a much smaller scale. The complexities around drone airspace management will definitely need some industry rules and support. This market will benefit from the advances and rule-making experience leveraged from self-driving cars.

What was the role of USDOT pilots and other research for enabling the autonomous vehicle market?

The role of the USDOT pilots has been predominantly focused on connected vehicles, and not much has happened yet with autonomous vehicles. The deployment of connected vehicle infrastructure will help improve the robustness of the data received by vehicles, and this infrastructure will pave the way for autonomous vehicles. Roadside infrastructure will play a role in monitoring rogue vehicles.

USDOT is also focusing on creating regulation and policies for autonomous vehicle deployments. Several test tracks around the United States (California, Michigan and Florida) have been funded by the USDOT. These proving grounds are set up with miles of paved roads that simulate an urban driving environment.

Many automakers have set 2020 as the goal for automated-driving technology in production models. Pilots and research by USDOT represent a huge reduction in risk for the automotive OEMs.

What else should embedded designers keep in mind when the topic is autonomous vehicles?

  • 100 Million Lines of Code
    • Connected vehicle technology is among the most complex systems ever built. It takes about 100 million lines of code to build such a system, more than a space shuttle, an operating system like the Linux kernel, or a smartphone. We recommend that embedded designers depend on well-tested, pre-defined middleware blocks to accelerate their design process.
  • FOTA and SOTA Updates
    • We also recommend that embedded designers build systems that depend heavily on firmware over the air (FOTA) and software over the air (SOTA) systems. We know that cars are going to follow the same trend as smartphones that require frequent software updates. Tesla has set a great example of this process with its updates and has said that its vehicles will constantly improve over time.
  • Aftermarket Systems as a Way to Introduce New Capabilities
    • Finally, embedded designers need to look at aftermarket systems as a way to introduce semi-autonomous features to determine the feasibility and acceptance of these building blocks before they become part of the mainstream.

Ravi Puvvala is CEO of Savari. With 20+ years of experience in the telecommunications industry, including leading positions at Nokia and Qualcomm Atheros, Puvvala is the founder of Savari and a visionary of the future of mobility. He serves as an advisory member to transportation institutes and government bodies.

The Rise of Ethernet as FlexRay Changes Lanes

Friday, May 20th, 2016

There are five popular protocols for in-vehicle networking. Caroline Hayes examines the structure and merits of each.

Today’s vehicles use a range of technologies, systems and components to make each journey a safe, comfortable, and enjoyable experience. From infotainment systems to keep the driver informed and passengers entertained, to Advanced Driver Assistance Systems (ADAS) to keep road users safe, networked systems communicate within the vehicle. Vehicle systems such as engine control, anti-lock braking and battery management, air bags and immobilizers are integrated into the vehicle’s systems. In the driver cockpit, there are instrument clusters and drowsy-driver detection systems, as well as ADAS back-up cameras, automatic parking and automatic braking systems. For convenience, drivers are used to keyless entry, mirror and window control as well as interior lighting, all controlled via an in-vehicle network. All rely on a connected car and in-vehicle communication networks.

There are five in-vehicle network standards in use today: Local Interconnect Network (LIN), Controller Area Network (CAN), Ethernet, Media Oriented Systems Transport (MOST) and FlexRay.

Evolving Standards
LIN targets control functions within a vehicle. It is a simple, standard UART-based interface that allows sensors and actuators to be implemented, as well as lighting and cooling fans to be easily replaced. The single-wire serial communications system operates at 19.2 kbit/s to control intelligent sensors and switches, in windows, for example.

Figure 1: Microchip supports all automotive network protocols with devices, development tools and ecosystem for vehicle networking.

This data transfer rate is slower than CAN’s 1 Mbit/s (maximum) operation. CAN is used for high-performance embedded applications. An evolution of CAN is CAN FD (Flexible Data rate), initiated in 2011 to meet increasing bandwidth needs. It operates at 2 Mbit/s, increasing to 5 Mbit/s when used point-to-point for software downloads. CAN’s higher data rate relies on a two-wire, twisted-pair cable structure carrying a differential signal.

As well as boosting transmission rates, CAN FD extends the data field from 8 bytes to 64 bytes. When only one node is transmitting, the bit rate can be increased, as nodes do not need to be synchronized.

LIN debuted at the same time as vehicles saw more sensors and actuators arrive. At this juncture, point-to-point wiring became too heavy, and CAN became too expensive. Summarizing LIN, CAN and CAN FD, Johann Stelzer, Senior Marketing Manager for Automotive Information Systems (AIS), Automotive Product Group, Microchip, says: “CAN and CAN FD have a broadcast quality. Any node can be the master, whereas LIN uses master-slave communication.”

K2L’s Matthias Karcher: CAN FD’s higher payload can add security to the network.

The higher bandwidth of CAN FD allows for security features to be added. “The larger payload can be used to transfer keys with multiple bytes as well as open up secure communications between two devices,” says Matthias Karcher, Senior Manager AIS Marketing Group, at K2L. The Microchip subsidiary provides development tools for automotive networks.

CAN FD’s ability to use an existing wiring harness to transfer more data from one electronic control unit to another, using a backbone or a diagnostic interface, is compelling, says Stelzer. It enables faster download of driver assistance or infotainment control software, for example, making it attractive to carmakers.

Microchip’s Johann Stelzer: Ethernet will evolve from diagnostics to become a communications backbone.

Ethernet as Communications Backbone

Ethernet uses packet data, but at the moment its use is restricted to diagnostics and software downloads. It acts as a bridge network, yet while it is flexible, it is also complex, laments Stelzer. As in-vehicle networks grow, high-speed switching increases, adding complexity: it requires a high-power microcontroller or microprocessor, as well as validation and debugging, which can add to development time.

In the future, asserts Stelzer, Ethernet will be used as the backbone communications between domains, such as safety, power and control, in the vehicle. When connected via a backbone it will be able to exchange software and data quickly, at up to 100-Mbit/s, or 100 times faster than CAN and 50 times faster than CAN FD.

At present, IEEE 802.3 Ethernet in vehicles runs at 100Base-TX, the predominant Fast Ethernet speed. The next stage is 100Base-T1, which is also 100 Mbit/s Ethernet but over a single twisted pair. The implementation of Ethernet 100Base-T1 will be big, says Stelzer. “This represents a big jump in bandwidth,” he points out, “with less utilization overhead.” IEEE 802.3bw, finalized in 2015, delivers 100 Mbit/s over a single twisted pair to reduce wiring, promoting the trend of deploying Ethernet in vehicles.

Figure 2: K2L offers the OptoLyzer MOCCA FD, a multi-bus user interface for CAN FD, CAN and LIN development.

Increased deployment will come about when the development tools are in place. Developers will have to integrate a tool at each point-to-point node in the network. “[The industry] will need good solutions,” he says, “to avoid overhead.” K2L offers evaluation boards, application notes, software, Integrated Design Environment (IDE) support and development tools for standard Ethernet in vehicles. The company will announce the availability of support for standard Ethernet T1 next year.

MOST for Media

MOST is a high-speed networking technology predominantly used in vehicle infotainment systems. It addresses all seven layers of the Open Systems Interconnection (OSI) model for data communications, not just the physical and data link layers but also system services and applications.

The network is typically a ring structure and can include up to 64 devices. Total available bandwidth for synchronous data transmission and asynchronous (packet) data transmission is around 23 MBaud.

MOST is flexible, with devices able to be added or removed. Any node can become the timing master of the network, controlling the timing of transmission, although adding parameters can add to complexity. One solution, says Karcher, is for a customer to use a Linux OS and a Linux driver to handle distribution and encapsulate MOST for the application layer. This allows the customer to concentrate on designing differentiation into the product. K2L provides software drivers and software libraries for MOST, as well as reference designs for analog front-ends, demonstration kits and evaluation boards. The level of hardware and software support, says Karcher, allows developers to focus on the application. Hardware can connect to MOST and also to CAN and LIN, he continues, adding that tools can connect and safeguard both system and application, reducing complexity and time-to-market.

The FlexRay Consortium, which was disbanded in 2009, developed FlexRay for on-board computing. There have not been any new developments in FlexRay, notes Karcher, who believes its use is limited to safety applications. Although K2L supplies tools to test and simulate FlexRay, “in the long run, it is hard to see a future for FlexRay,” says Karcher, citing the fact that there are no new designs or applications.


Caroline Hayes has been a journalist covering the electronics sector for over 20 years. She has worked on many titles, most recently the pan-European magazine EPN.

Vehicle-to-Everything (V2X) Technology Will Be a Literal Life Saver – But What Is It?

Thursday, May 19th, 2016

Increased safety and smarter energy are among the expected results as V2X gets underway: Here’s a look at its progress.

A massive consumer-focused industry like automobiles is up close and personal with people, so close that safety and driver protection from harm are top of mind for manufacturers. Although human error is the prevailing cause of collisions, creators of technologies used in vehicles have an obvious vested interest in helping lower the distressing statistics. After all, pedestrian deaths rose by 3.1 percent in 2014, according to the National Highway Traffic Safety Administration’s Fatality Analysis Reporting System (FARS). In that year, 726 cyclists and 4,884 pedestrians were killed in motor vehicle crashes. And this damage to innocent bystanders doesn’t include the growing death rate of drivers and their passengers.

Figure 1: Benefits to driver and pedestrian safety, as well as increased power efficiency, are the aims of V2X. (Courtesy Movimento)

Distracted driving accounted for 10 percent of all crash fatalities, killing 3,179 people in 2014 while drowsy driving accounted for 2.6 percent of all crash fatalities, killing 846 people in 2014.  The road carnage is hardly limited to the United States. The International Organization for Road Accident Prevention noted a few years ago that 1.3 million road deaths occur worldwide annually and more than 50 million people are seriously injured. There are 3,500 deaths a day or 150 every hour and nearly three people get killed on the road every minute.

A Perplexing Stew

Thus it’s about time for increasingly sophisticated technology to step in and help protect distracted drivers from themselves. The centerpiece of what’s coming is so-called Vehicle to Everything (V2X) technology. Once it’s deployed, the advantages of V2X are extensive, alerting drivers to road hazards, the approach of emergency vehicles, pedestrians or cyclists, changing lights, traffic jams and more. In fact, the advantages extend even beyond the freeways and into residential streets where V2X technology helps improve power consumption and safety.

About the only problem with V2X is that it’s emerging as a perplexing stew of acronyms (V2V, V2I, V2D, V2H, V2G, V2P) that require some explanation—and the technology, while important, isn’t universally quite here yet.  But the significance of this technology is undeniable. And getting proficient in understanding V2X is valuable in tracking future vehicle features that will link cars to the world around them and make driving safer in the process.

Here’s an overview of the elements of V2X and predictions for when it will hit the roads, from the soonest to appear to the last.

Vehicle to Vehicle (V2V)

Vehicle to Vehicle (V2V) communication is a system that enables cars to talk to each other via Dedicated Short-Range Communication (DSRC), with the primary goal of wirelessly sharing speed and position data, using power in the most productive manner, so that drivers can be warned to take immediate action to avoid a collision. Also termed car-to-car communication, the technology makes driving much safer by alerting one vehicle to the presence of others. An embedded or aftermarket V2V module in the car allows vehicles to broadcast their position, speed, steering wheel position, brake status and other related data by DSRC to other vehicles in close proximity.

Clearly, V2V is expected to reduce vehicle collisions and crashes. It’s likely that this technology will enable multiple levels of autonomy, delivering assisted driver services like collision warnings but with the ultimate responsibility still belonging to the driver. V2V relies on DSRC, which is still in its infancy because the need remains to address security, mutual authentication and dynamic vehicle issues.

V2V is already making its way into new cars. For example, Toyota developed a communicating radar cruise control that uses V2V to make it easier for preceding and following vehicles to keep a safe distance apart. This is an element in a new “intelligent transportation system” that the company said was initially available at the end of 2015 on a few models in Japan. Meanwhile, 16 European vehicle manufacturers and related vendors launched the Car 2 Car Communication Consortium, which intends to speed time to market for V2V and V2I solutions and to ensure that products are interoperable. Plans call for “earliest possible” deployment. 

One key issue with V2V is that to be most effective, it should reside in all cars on the road. Nevertheless, this technology has to start somewhere, so Mercedes-Benz announced that its 2017 Mercedes E Class would be equipped with V2V, one of the first such solutions to go into production.

Vehicle to Device (V2D)

Vehicle to Device (V2D) communication is a system that links cars to many external receiving devices but will be particularly heralded by two-wheeled commuters.  It enables cars to communicate via DSRC with the V2D device on the cycle, sending an alert of traffic ahead. Given the fact that biking to work is the fastest-growing mode of transportation, increasing 60 percent in the past decade, V2D can potentially help prevent accidents. 

Although bicycle commuting is healthier than sitting in a car, issues like dark streets in the evening and heavy traffic flow make this mode problematic when it comes to accident potential.  Although less healthful, traveling by motorcycle and other two-wheel devices also has an element of risk because larger vehicles on the road tend to dominate.

V2D is tied to V2V because they both depend on DSRC, so V2D should begin to pop up after V2V rolls off the assembly line in 2017 and later. It will likely appear as aftermarket products for bicycles, motorcycles and other such vehicles starting in 2018. Spurring the creation of V2D products have been quite a few crowd-funded efforts, as well as government grants like the U.S. Department of Transportation’s (DOT) Smart City Challenge, which pledges up to $40 million in funding to the winner for creating the nation’s most tech-savvy transportation network in a municipality. Finalists (Denver, Austin, Columbus, Kansas City, Pittsburgh, Portland, San Francisco) have already been chosen, and they are busy producing proposals.

DOT has other initiatives aimed at encouraging the creation of various V2X technologies. V2D is one of the application areas in DOT’s IntelliDrive program, a joint public/private effort to enhance safety and provide traffic management and traveler information. The goal is the development of applications in which warnings are transmitted to various devices such as cell phones or traffic control devices.

Vehicle to Pedestrian (V2P)

Vehicle to Pedestrian (V2P) communication is a system that communicates between cars and pedestrians and will particularly benefit elderly persons, school kids and physically challenged persons. V2P establishes a communications mechanism between pedestrians’ smartphones and vehicles and acts as an advisory to avoid imminent collision.

The concept is simple: V2P will reduce road accidents by alerting pedestrians crossing the road of approaching vehicles and vice versa. It’s expected to become a smartphone feature beginning in 2018 but, like V2D, requires the presence of DSRC capabilities in vehicles.  Ultimately, the DSRC version of V2P will be replaced by a higher-performance LTE version starting in 2020.

While there aren’t any V2P solutions currently available, this area is a hotbed of development, particularly when one includes the full gamut of possible technologies and includes multiple vehicle types such as public transit. Given the significant role that V2P can play in preventing damage to humans, the U.S. Department of Transportation maintains and updates a database of technologies in process. Of the current 86 V2P technologies listed, none are yet commercially available but a number are currently undergoing field tests.

A particularly fruitful approach to developing effective V2P products is a research partnership between telecom and automotive companies. For example, Honda R&D Americas and Qualcomm collaborated on a DSRC system that sends warnings to both a car’s heads-up display and a pedestrian’s device screen when there is a chance of colliding. Although the project won an award as an outstanding transportation system, there’s no word yet when this might appear commercially.

In another collaboration, Hitachi Automotive Systems teamed with Clarion, the Japan-based manufacturer of in-car infotainment products, navigation systems and more on a V2P solution that predicts pedestrian movements and rapidly calculates optimum speed patterns in real time. Undergoing field testing, this is another promising product to look for in the future.

Vehicle to Home (V2H)

Vehicle to Home (V2H) communication involves linkage between a vehicle and the owner’s domicile, sharing the task of providing energy.  During emergency or power outages, the vehicle’s battery can be used as a power source. Given the reality of severe weather and its effect on power supplies, this capability has been needed for a while, with disruptions in power after storms and other weather emergencies impacting many thousands of U.S. families annually.

V2H is a two-way street, with the vehicle powering the home and vice versa based on cost and demand for home energy. The car battery is used for energy storage, charging when energy is cheap or “green.”

During power outages, power from a vehicle’s battery can be used to run domestic appliances and power can be drawn from the vehicle when utility prices are high. In areas with frequent power outages, the battery can be used to buffer energy to avoid flickering, and it can be used as an emergency survival kit.

It’s expected that V2H will kick into higher gear in 2019, playing a significant role when plug-in hybrid electric vehicles (PHEVs) and electric vehicles (EVs) make up over 20 percent of new cars sold in the United States. But a few projects have been underway for a while, such as a Nissan V2H solution that was tested widely in Japan and launched in 2012 as the “Leaf to Home” V2H Power Supply System. Relying on an EV power station unit from Nichicon, this was one of the first backup power supply systems using an EV’s large-capacity battery.

Other Japanese car manufacturers have dabbled in these systems, including Mitsubishi and Toyota. Mitsubishi announced in 2014 that its Outlander PHEV vehicle could be used to power homes—only in Japan so far. There are other approaches to utilizing an EV’s battery for home use, such as some currently available devices that can not only charge a battery, but also supply the stored electricity to the home. One example is the SEVD-VI cable from Sumitomo Electric.

Vehicle to Grid (V2G)

Vehicle to Grid (V2G) communication is a system in which EVs communicate with the power grid to return electricity to the grid or throttle the vehicle’s charging rate. It will be an element in some EVs like plug-in models and is used as a power grid modulator to dynamically adjust energy demand.

A benefit of V2G is helping maintain the grid level and acting as a renewable power source alternative. This system could determine the best time to charge car batteries and enable energy flow in the opposite direction for shorter periods when the public grid is in need of power and the vehicle is not.

Given its key role in battery charging, this V2X technology is appearing soon, in affordable EVs like the Tesla Model 3, which can now be pre-ordered. Other products and companies, like Faraday Future, NextEV, the Apple Car, Uber and Lyft, are all planning to launch EVs between 2017 and 2020. V2G is an extremely relevant area because it creates the obvious need for cities to start thinking and planning now about how they will support a large-scale EV society. Otherwise, energy utility companies will be in a panic situation and may resort to drastic measures such as rationing energy per household.

Figure 2: V2X technology will be part of the Tesla Model 3. [Photo: By Steve Jurvetson [CC BY 2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons]

Other activity in the V2G area includes a partnership between Pacific Gas and Electric Company (PG&E) and BMW to test the ability of EV batteries to provide services to the grid. The automaker created a large energy storage unit made from re-utilized lithium-ion batteries while enlisting 100 San Francisco Bay Area BMW i3 drivers to take part in what’s called the ChargeForward program. A pilot study, this now-underway project is giving qualifying i3 drivers up to $1,540 in charging incentives.

Another intriguing effort involves Nissan and Spain’s largest electric utility, which collaborated on a mass-market V2G system that was initially demonstrated in Spain last year but is aimed at the European market.  Like the BMW/PG&E program, this also involves re-purposed EV batteries for stationary energy storage.  V2G is a very promising market pegged to surpass $190 million worldwide by 2022 according to industry analysts.

Vehicle to Infrastructure (V2I)

Vehicle to Infrastructure (V2I) communication will likely be the last V2X system to appear. It’s the wireless exchange of critical safety and operational data between vehicles and roadway infrastructure, like traffic lights. V2I alerts drivers of upcoming red lights and prevents traffic congestion. The system will streamline traffic and enable drivers to maneuver away from heavy traffic flow.

Despite the enormous impact this technology will have on driver safety, the degree of infrastructure investment required is so massive that it will take time to implement.  Some question whether DSRC-based V2I with its questionable return on investment will ever take place, but there is more hope for LTE-based V2I.

This approach might play a key role starting in 2020 and be rolling along by 2022. Nevertheless, there are promising V2I projects already happening in countries where it’s easier to conduct massive public initiatives, such as China. A field test being run on public roads in Taicang, Jiangsu Province, China, involves buses that receive road condition data and thus can avoid stopping at lights when safe. Tongji University and Denso Corporation developed this project. 

Another recent collaboration involves Siemens and Cohda Wireless to develop intelligent road signs and traffic lights in which critical safety and operational data is exchanged with equipped vehicles.  In the United States, DOT is highly involved in working with state and local transportation agencies along with researchers and private-sector stakeholders to develop and test V2I technologies through test beds and pilot deployments.

Communication is the next frontier of car technology, and it is the bedrock of all the V2X capabilities appearing in the future. And none too soon. According to the World Health Organization (WHO), the incidence of traffic fatalities will continue to expand across the globe as vehicles become more prevalent; WHO projects a 67 percent increase through 2020. Having smarter, safer cars and communications systems for the drivers, pedestrians and cyclists who can be impacted by these vehicles could turn this trend around. Add to that the aspects of flexible electricity storage and usage, and V2X becomes an even more promising technology.


Mahbubul Alam is CTO and CMO of Movimento Group. He is a frequent author, speaker and multiple patent holder in the areas of the software-defined car and the IoT. He was previously a strategist for Cisco’s Internet-of-Things (IoT) and Machine-to-Machine (M2M) platforms. Read more from Mahbubul at http://mahbubulalam.com/blog/.

Transitioning Applications from CAN 2.0 to CAN FD

Friday, April 22nd, 2016

The CAN bus protocol is used in a wide variety of applications, including industrial, automotive and medical. Approximately 1.5 billion CAN nodes are used each year. Designers of these applications benefit from the many advantages CAN offers, such as reliability, cost effectiveness, engineering expertise and the availability of tools and components. CAN FD builds on the existing benefits of CAN 2.0 technology, allowing designers to leverage CAN 2.0 expertise, tools, hardware and software while also taking advantage of CAN FD’s increased data rate and data field length.

This paper will explore some of the considerations associated with CAN system design and how designers can transition their applications from CAN 2.0 to CAN FD. These considerations relate to physical layer, controller and overall system topics. Application designers must begin with hardware that conforms to both physical layer and controller requirements. Solutions for CAN FD controllers will be discussed, highlighting external CAN FD controllers as an alternative to integrated CAN FD controllers. These external controllers allow designers more flexibility when choosing an MCU that best fits the application and can reduce the migration effort from CAN 2.0 to CAN FD.

INTRODUCTION

Automotive manufacturers and suppliers are facing some challenges with today’s CAN 2.0 networks. First, automotive manufacturers and suppliers are dealing with an increase in end-of-line (EOL) programming costs. This is due to an increase in Electronic Control Unit (ECU) memory requirements. Second, the use of automotive electronics continues to expand, requiring more ECUs to support the demands of these new electronic applications. This either decreases the available bandwidth on existing CAN 2.0 bus networks or it forces designers to introduce a new CAN 2.0 network into the system architecture. Last, as demand for cyber security continues to grow, ECUs will require more memory and bus utilization will increase drastically. CAN FD addresses some of these challenges by offering two significant enhancements over CAN 2.0. CAN FD increases the data rate capabilities in normal mode from 500 kb/s (typical) to 2 Mb/s, and in programming mode up to 5 Mb/s. In addition, it increases the data field from 8 to 64 data bytes. While these benefits can offer the designer faster EOL programming and free up network bandwidth, there are some development challenges associated with supporting new CAN FD applications. The following discusses and outlines some of the major design changes required and other considerations for designers who are transitioning their applications from CAN 2.0 to CAN FD.

CHANGES TO CAN NETWORK ARCHITECTURE

Automotive system architectures utilize many different network technologies to support a wide range of safety, body and convenience, infotainment, and ADAS electronics within the automobile. Starting with the system gateway, CAN plays a major role in supporting many of these applications in today’s architectures.

CAN FD will continue to play a major role within future architectures. The key factor to supporting these architectures is enabling faster throughput at the gateway and branching it out into the subnetworks. Current CAN 2.0 gateways achieve ~37 s/MB transfer time based on a 500 kb/s (typical) data rate and an 8 byte data payload. Future CAN FD gateways are targeted to achieve ~1.9 s/MB based on a 5 Mb/s data rate and a 64 byte data payload.
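As a rough back-of-envelope check (not from the paper itself), the Python sketch below shows how those two figures follow from payload size and bit rate. The per-frame overhead bit counts are assumed round numbers, since real overhead varies with identifier length and bit stuffing, and the CAN FD case is simplified by taking the whole frame at the 5 Mb/s data rate.

def seconds_per_megabyte(payload_bytes, overhead_bits, bit_rate):
    # Time to carry 1 MB of application data in back-to-back frames.
    bits_per_frame = payload_bytes * 8 + overhead_bits
    frames = 1_000_000 / payload_bytes
    return frames * bits_per_frame / bit_rate

# Classic CAN 2.0: 8-byte payload, ~83 overhead bits per frame (header, CRC,
# ACK, EOF, interframe space, average stuffing -- assumed value), 500 kb/s.
print(round(seconds_per_megabyte(8, 83, 500e3)))      # prints 37, i.e. ~37 s/MB

# CAN FD programming mode: 64-byte payload, ~90 overhead bits (assumed), with
# the entire frame taken at 5 Mb/s as a simplification (the arbitration phase
# actually runs at the slower nominal rate, which adds a little time).
print(round(seconds_per_megabyte(64, 90, 5e6), 1))    # prints 1.9, i.e. ~1.9 s/MB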

Today’s system architectures support up to five or more CAN 2.0 networks. CAN 2.0 networks typically run at 500 kb/s and not 1 Mb/s. The bandwidth on a CAN bus is limited by the propagation delay and by the bus topology. Future CAN FD architectures will utilize two types of networks: dedicated CAN FD networks and mixed CAN 2.0 and CAN FD networks.

In a dedicated CAN FD network, all CAN nodes on the network will be CAN FD capable. The advantage of this configuration is that the CAN FD protocol can always be used, and there will be minimal effect on the physical layer transceivers (i.e. no need for Partial Networking-like transceivers). The disadvantage of this approach is that the entire network will have to support CAN FD, making the change to CAN FD very significant and costly.

Some automotive manufacturers mix CAN 2.0 and CAN FD nodes in the same network. This is possible because CAN FD controllers support both CAN 2.0 and CAN FD protocols. One advantage of this configuration is that networks can be migrated to CAN FD node by  node without requiring an entire network change. The disadvantage of this method is that physical layer transceivers will have to support a CAN FD filtering method on CAN 2.0 nodes to ensure they don’t create any error frames during CAN FD communication. This adds cost and complexity to the system.

CAN FD AND ISO STANDARD

The CAN protocol is specified by the ISO 11898 standard. ISO 11898-1 specifies the Data Link Layer. In 2014, an initiative began to include the CAN FD requirements in this specification. This year, the International Organization for Standardization approved ISO 11898-1 as a Draft International Standard (DIS) without any votes against it. The final ISO 11898-1 standard is expected to be published in April 2016.

The ISO 11898-2 originally specified the requirements of the CAN 2.0 Physical Layer up to 1 Mb/s. The ISO 11898-5 is an extension of the ISO 11898-2 accommodating new low-power requirements during CAN 2.0 bus idle conditions. The ISO 11898-6 is an extension of the ISO 11898-2 and ISO 11898-5 specifying the Selective Wake-up (Partial Networking) functionality.

In 2014, an initiative was also started to add CAN FD to ISO 11898-2 and to combine it with ISO 11898-5 and ISO 11898-6. This year, ISO 11898-2 successfully passed the Committee Draft ballot. The Draft International Standard (DIS) version is currently under development, and submission is expected soon. The final ISO 11898-2 standard is expected to be published in July 2017.

OSI REFERENCE MODEL – ISO 11898-1/2

The general Layered Architecture according to the OSI Reference Model specified in the ISO 11898-1 is the same for both CAN 2.0 and CAN FD (shown in Figure 1). The differences within the OSI Reference Model between CAN 2.0 and CAN FD are in the Logical Link Control (LLC) and the Medium Access Control (MAC) sublayers of the Data Link Layer, and the Physical Coding Sublayer (PCS) and the Physical Medium Attachment (PMA) of the Physical Layer. The Medium Dependent Interface (MDI) of the Physical Layer is the same for CAN 2.0 and CAN FD.

[Figure 1: OSI reference model layered architecture for CAN 2.0 and CAN FD]

Table 1 illustrates the difference in requirements between CAN 2.0 and CAN FD.

[Table 1: Differences in requirements between CAN 2.0 and CAN FD]

DATA LINK LAYER (DLL) – ISO 11898-1

One of the primary differences between CAN 2.0 and CAN FD is in the MAC of the DLL, where the payload can be increased from 8 data bytes up to 64 data bytes in the data field of the CAN FD frame (see Figure 2). This increase in payload makes CAN FD communication more efficient by reducing the protocol overhead. Messages that had to be split due to the 8-byte payload limit can be combined into one message. Additionally, security can be enhanced via the encryption of CAN FD messages as a result of the higher data rate and increased payload.

[Figure 2]

CAN FD switches the data rate during the data and CRC fields. The Control field of the CAN FD frame contains three new bits. The FDF bit is used to distinguish between CAN 2.0 and CAN FD frames. Bit rate switching is initiated by setting the BRS bit. The error state of the transmitter is indicated by the ESI bit.
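For illustration, here is how those three bits surface at the application level in the open-source python-can library. This is a minimal sketch, not something from this paper: the SocketCAN channel name "can0" and the payload are assumed example values, and the keyword names are python-can's, not any particular controller's registers.

import can

# A 12-byte payload: legal for CAN FD, too long for a classic CAN 2.0 frame.
msg = can.Message(
    arbitration_id=0x123,
    data=bytes(range(12)),
    is_extended_id=False,
    is_fd=True,            # sets the FDF bit: this is a CAN FD frame
    bitrate_switch=True,   # sets the BRS bit: use the faster data-phase bit rate
)
# The ESI bit is set by the transmitting controller from its own error state;
# on received frames python-can exposes it as msg.error_state_indicator.

# Sending requires an FD-capable interface; the channel name is an assumed example.
with can.Bus(interface="socketcan", channel="can0", fd=True) as bus:
    bus.send(msg)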

PHYSICAL LAYER – ISO 11898-2

The other main difference between CAN 2.0 and CAN FD is in the PCS of the Physical Layer, where the data rate was increased from CAN 2.0’s typical 500 kb/s to 2 Mb/s for nominal vehicle operating conditions and up to 5 Mb/s for diagnostics or EOL programming.

The block diagram of a CAN FD transceiver is very similar to that of a CAN 2.0 transceiver. Figure 2 illustrates the main circuit blocks of a CAN FD transceiver. The CAN FD transceiver interfaces with the CAN FD controller via the TXD and RXD digital signals. When in Normal mode (STBY low), the bit stream from the CAN FD controller on TXD gets encoded to differential output voltages on the physical CAN bus signals (CAN_H and CAN_L). The RXD output pin of the CAN FD transceiver reflects the differential voltages on the CAN bus.

The TXD to RXD propagation delay of a CAN FD transceiver must not exceed 255 ns for both dominant and recessive transitions. Because the CAN FD transceiver is not a push-pull driver, there is some asymmetry between recessive and dominant TXD to RXD propagation delay. As a result, the recessive bit time on RXD tends to shorten. Figure 3 describes how the loop delay symmetry parameters are measured.

[Figure 3]

MIXED NETWORKS AND PHYSICAL LAYER

CAN FD transceivers are backwards compatible with CAN 2.0 transceivers; the Data Link Layer of CAN FD, however, is not compatible with CAN 2.0. To implement mixed operation of CAN 2.0 and CAN FD nodes on the same bus, the CAN 2.0 nodes need to remain ideally passive (invisible to the network) during CAN FD communication, or error frames will be generated.

At least three options are available to make CAN 2.0 nodes tolerant to CAN FD: Partial Networking (PN), CAN FD Shield, and CAN FD Filter. Currently, only PN transceivers are available on the market. PN allows the CAN 2.0 controller to be disconnected from the bus during CAN FD communication. The PN transceiver will ignore all CAN FD messages by decoding the incoming CAN frames. The PN transceiver waits for a valid CAN 2.0 wake-up message with a specific ID before it restarts routing CAN 2.0 messages to the CAN 2.0 controller.

CAN FD CONTROLLER

The block diagram of a CAN FD controller looks very similar to that of a CAN 2.0 controller. Figure 4 illustrates the main blocks of a CAN FD controller. The CAN FD controller interfaces to the CAN FD transceiver using digital transmit and receive pins. The Bit Stream Processor (BSP) implements the CAN FD protocol. It transmits and receives CAN FD frames. The Transmit Handler prioritizes messages that are queued for transmission. The Receive Handler manages received messages. Only those messages that match the Acceptance Filters are stored in RX message objects or FIFOs. The Memory Interface controls the access to the RAM. Message Objects are stored in RAM. The message RAM can be located in the system RAM of a microcontroller; it doesn’t have to be dedicated to the CAN FD controller. Optionally, the acceptance filter configuration can be stored in RAM. The microcontroller uses a Register Interface to access the Special Function Registers (SFR) of the CAN FD controller. The SFR are used to configure and control the CAN FD controller. Interrupts notify the microcontroller about successfully transmitted or received messages. Received messages are time stamped. Transmitted messages are optionally time stamped and their IDs can be stored in a Transmit Event FIFO (TEF).
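
The exact message object layout is vendor specific; the hypothetical sketch below only makes the RAM organization described above concrete. Its field names and sizes are assumptions, not the registers of any particular controller.

    #include <stdint.h>

    /* Hypothetical layout of one receive message object in controller RAM.
     * Field names and sizes are illustrative only. */
    typedef struct {
        uint32_t id;         /* 11- or 29-bit identifier plus ID flags          */
        uint32_t ctrl;       /* DLC, FDF/BRS/ESI flags, filter hit index        */
        uint32_t timestamp;  /* time stamp captured on reception                */
        uint8_t  data[64];   /* CAN FD payload, up to 64 bytes                  */
    } rx_msg_obj_t;          /* 76 bytes per object with a 64-byte payload      */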

CAN FD CONTROLLER REQUIREMENTS

Table 2 illustrates the difference in requirements between a CAN 2.0 and a CAN FD controller. The following section will discuss the major changes in more detail.

[Table 2]

[Figure 4]

HIGHER BANDWIDTH

During the arbitration phase, the data rate is limited by the propagation delay of the CAN network. In the data phase, only one transmitter remains; therefore, the bandwidth can be increased by switching the bit rate. The transmitter always compares the bits it intends to transmit with the actual bits on the bus, but the propagation delay in the data phase can be longer than the bit time. Therefore, the bits are sampled at a Secondary Sample Point (SSP). Data bit time and SSP configuration require additional configuration registers.

In normal operation, practical CAN 2.0 networks achieve a bandwidth of 17 bytes/ms when using a bit rate of 500 kb/s, 8 bytes of payload, and 50% bus utilization. During End Of Line (EOL) programming, 100% of the bus can be utilized, resulting in a bandwidth of 29 bytes/ms.

CAN FD improves the bandwidth up to a factor of four during normal operation. This can be achieved by increasing the data bit rate to 2 Mb/s and by increasing the payload to 32 bytes. Increasing the payload to 64 bytes and switching the data bit rate to 5 Mb/s results in a bandwidth gain of up to ten during EOL programming.
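
As a rough, order-of-magnitude check of these figures (frame overheads, stuffing and the way bus load is counted all vary, so the bit counts assumed below are approximations rather than values taken from the article):

    #include <stdio.h>

    /* Back-of-envelope bandwidth estimates; frame-length figures are rough
     * approximations that include stuffing and interframe space. */
    int main(void)
    {
        /* Classical CAN: roughly 135 bits on the wire for an 8-byte frame at 500 kb/s */
        double can20_frame_us = 135.0 / 0.5;                       /* ~270 us per frame */
        printf("CAN 2.0, 100%% load: %.0f bytes/ms\n", 8.0 / (can20_frame_us / 1000.0));
        printf("CAN 2.0,  50%% load: %.0f bytes/ms\n", 0.5 * 8.0 / (can20_frame_us / 1000.0));

        /* CAN FD: arbitration portion at 500 kb/s, data portion at 2 Mb/s, 32-byte payload */
        double fd_frame_us = 30.0 / 0.5 + (32 * 8 + 40.0) / 2.0;   /* ~208 us per frame */
        printf("CAN FD,   50%% load: %.0f bytes/ms\n", 0.5 * 32.0 / (fd_frame_us / 1000.0));
        return 0;
    }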

Increased bandwidth requires the CAN FD controller and the microcontroller to process the messages faster. This requires higher microcontroller and CAN FD controller clock speeds and FIFOs to buffer messages.

BIT TIME CONFIGURATION AND CLOCK

CAN 2.0 controllers often use an 8 MHz clock, while CAN FD controllers require a faster clock; frequencies of 20, 40 or 80 MHz are recommended. A faster clock allows shorter time quanta and, therefore, higher resolution for setting the sample point. The selection of the sample point within a CAN FD network is critical, and it is recommended that all CAN FD nodes use the same sample point setting. Correctly switching the bit time is a technical challenge. It is also recommended to use the same time quantum during the Nominal and Data bit phases, which requires more time quanta per bit during the Nominal bit phase than in CAN 2.0.
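
As one hypothetical illustration of these recommendations, a 40 MHz controller clock can divide a 500 kb/s nominal bit into 80 time quanta and a 2 Mb/s data bit into 20 time quanta, both sampled at 80%. The structure and field names below are assumptions, not those of a specific controller.

    #include <stdint.h>

    /* Hypothetical bit time configuration for a 40 MHz CAN FD controller clock.
     * Register/field names are illustrative; consult the controller data sheet. */
    typedef struct {
        uint8_t brp;     /* baud rate prescaler: time quantum = brp / f_clk        */
        uint8_t tseg1;   /* quanta before the sample point (excluding sync seg)    */
        uint8_t tseg2;   /* quanta after the sample point                          */
        uint8_t sjw;     /* synchronization jump width                             */
    } bit_timing_t;

    /* Nominal phase: 40 MHz / (1 * (1 + 63 + 16)) = 500 kb/s, sample point 80% */
    static const bit_timing_t nominal    = { .brp = 1, .tseg1 = 63, .tseg2 = 16, .sjw = 16 };

    /* Data phase:    40 MHz / (1 * (1 + 15 +  4)) = 2 Mb/s,   sample point 80% */
    static const bit_timing_t data_phase = { .brp = 1, .tseg1 = 15, .tseg2 =  4, .sjw =  4 };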

BIGGER RAM

Increasing the payload of CAN FD messages requires more RAM for message storage. Assuming about 12 bytes of overhead per message object (ID, control and time stamp fields), storing 32 message objects with a payload of 8 bytes requires 640 bytes of RAM; increasing the payload to 64 bytes requires 2,432 bytes of RAM.

STATIC PDU FRAMES

In a CAN network, signals are mapped into meaningful Protocol Data Units (PDU), as shown in Figure 5. Usually one PDU is mapped into one CAN frame. The signals inside a PDU and the length of a PDU don’t change; they are static. The frames and signals are described in a message database.

[Figure 5]

Only one ECU transmits a given PDU (see Figure 6), while multiple ECUs can receive it. Acceptance filtering is used to accept only those PDUs that are of interest to the ECU. Acceptance filtering is done in hardware and reduces the required message processing in the microcontroller. Filters can be set up to match the ID of the CAN frame and, optionally, the first two data bytes. Filters can point to different message objects inside the CAN controller.

[Figure 6]
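
In hardware, this filtering usually reduces to an identifier/mask comparison. The following minimal software model (names are illustrative only) shows the idea.

    #include <stdbool.h>
    #include <stdint.h>

    /* Minimal model of ID/mask acceptance filtering: a received identifier is
     * accepted when the bits selected by the mask match the filter value. */
    typedef struct {
        uint32_t filter;  /* identifier value to match          */
        uint32_t mask;    /* 1 = bit must match, 0 = don't care */
    } accept_filter_t;

    static bool filter_matches(const accept_filter_t *f, uint32_t rx_id)
    {
        return ((rx_id ^ f->filter) & f->mask) == 0;
    }

    /* Example: accept identifiers 0x100-0x10F (lowest four ID bits are don't care). */
    static const accept_filter_t example_filter = { .filter = 0x100, .mask = 0x7F0 };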

The concept of static PDU frames can also be applied to CAN FD. In order to make CAN FD most efficient, a payload of 64 bytes should be used as much as possible, but the signal mapping gets even more complex for PDUs with 64 bytes.

DYNAMIC MULTI-PDU FRAMES

Dynamic Multi-PDU Frames (M-PDU) try to maximize the efficiency of CAN FD by dynamically combining multiple PDUs into one frame (see Figure 7). PDUs are only transmitted when the data changes.

[Figure 7]

The message database of static PDUs can be re-used. Each PDU contains a header to distinguish between the different PDUs inside a frame. The header consists of the PDU ID and the byte length. M-PDUs are especially useful for CAN FD gateways. The gateway can collect multiple PDUs and send them out in one frame (see Figure 8). The system designer defines the rules for combining PDUs and for delaying an M-PDU before it gets transmitted.

[Figure 8]
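
The packing itself can be as simple as writing a short header in front of each PDU until the 64-byte frame is full. The sketch below assumes a hypothetical 3-byte PDU ID plus 1-byte length header; real header layouts (for example in AUTOSAR) differ in detail.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define FRAME_MAX 64u   /* CAN FD payload limit */

    /* Append one PDU (hypothetical 3-byte ID + 1-byte length header) to a frame
     * buffer. Returns the new fill level, or 0 if the PDU does not fit. */
    static size_t mpdu_append(uint8_t *frame, size_t fill,
                              uint32_t pdu_id, const uint8_t *pdu, uint8_t len)
    {
        if (fill + 4u + len > FRAME_MAX)
            return 0;                         /* transmit the current frame first */

        frame[fill + 0] = (uint8_t)(pdu_id >> 16);
        frame[fill + 1] = (uint8_t)(pdu_id >> 8);
        frame[fill + 2] = (uint8_t)(pdu_id);
        frame[fill + 3] = len;
        memcpy(&frame[fill + 4], pdu, len);
        return fill + 4u + len;
    }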

Since multiple PDUs are combined dynamically into one CAN FD frame, the classic message filtering concept can’t be used. All frames must be received and the PDUs have to be filtered by software, not by hardware. This will increase the demand for faster message processing. FIFOs with payloads up to 64 bytes will be required to buffer received messages and give the microcontroller time to process the messages. This will require even more RAM to store the received data.

AVAILABILITY OF CAN FD MICROCONTROLLERS

A major challenge of CAN FD adoption and transition is the limited number of CAN FD controllers available today and in the near future. Some of the causes associated with this challenge are the following:

  • Updating the ISO 11898-1 specification is a long process
  • The ISO CRC fix delayed silicon component availability
  • Silicon suppliers are limiting the number of developments due to risk of ISO spec changing
  • Initial MCUs targeted for release are supporting high-end CAN FD applications

The causes described above result in the following challenges:

  • Limited number of MCUs with CAN FD available on the market
  • Lack of available components puts pressure on silicon suppliers, tier suppliers and OEMs to meet aggressive timelines
  • First MCUs with CAN FD released will be high performance and feature rich, leaving an MCU gap in many mid- to lower-end CAN FD applications

Replacing or updating the MCU is a major system change and will require the automotive supplier and manufacturer to redesign, revalidate and requalify the ECU. This results in a significant amount of time, resources and investment. In many cases, a replacement MCU with CAN FD may not be available. As a result, customers will benefit from using an external CAN FD controller. This allows ECU designers to enable CAN FD by adding only one external component while they continue to utilize the majority of their design.

EXTERNAL CAN FD CONTROLLER

External controllers that support CAN 2.0 applications are currently available, including the SJA1000 from NXP and the MCP2515 from Microchip. Both devices serve very well as external CAN 2.0 controllers and are widely used in automotive and industrial applications. These types of devices are typically used to add CAN 2.0 capability to an MCU or to add an additional CAN 2.0 controller to an existing MCU.

Many of the same considerations for using an external CAN 2.0 controller apply to using an external CAN FD controller. It interfaces directly to the transmit and receive pins of the physical layer transceiver. The controller acts as a CAN FD engine, processing the CAN FD messages and relaying any relevant messages to the MCU. An external controller can interface to the MCU through a serial or parallel port. Figure 9 shows a typical application of an external CAN FD controller using an SPI interface.

[Figure 9]

Figure 10 is a block diagram of an external CAN FD controller with an SPI interface. Using an SPI interface decreases the number of pins required as compared to a parallel interface. The CAN FD controller contains the same blocks as the integrated CAN FD controller. When integrated into an MCU, the CAN FD peripheral can share the system RAM. The RAM inside an external CAN FD controller is dedicated.

[Figure 10]

The SPI interface accesses the SFRs to control the CAN FD engine, to configure the CAN FD bit times and to set up the receive filters. The SPI interface also accesses the RAM to load transmit messages or to read received messages. The external CAN FD controller transmits and receives messages autonomously and interrupts the MCU only when a message has been successfully transmitted or received. The SFRs are arranged efficiently to reduce the number of SPI transfers and to keep up with the higher CAN FD bandwidth; in particular, the MCU can use DMA to access larger SFR and RAM blocks via SPI. The external CAN FD controller integrates a clock generator to supply the CAN FD clock. Optionally, the clock can be provided to the MCU.

An external CAN FD controller has to meet the same requirements as an integrated CAN FD controller. The bandwidth requirements can be met by using an SPI with DMA and by increasing the SPI frequency. Calculations show that an external CAN FD controller utilizing an SPI frequency of 10 to 16 MHz can keep up with a 100% loaded CAN FD bus at data bit rates up to 8 Mb/s.
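
A simplified version of that calculation, with rough assumed overheads: a frame carrying 64 bytes at 500 kb/s arbitration and 8 Mb/s data rates occupies the bus for about 130 µs, while moving those 64 bytes plus a few bytes of command and status overhead over a 10 MHz SPI takes roughly 60 µs, leaving headroom even at 100% bus load.

    #include <stdio.h>

    /* Back-of-envelope check (overhead figures are rough assumptions):
     * can a 10 MHz SPI keep up with a fully loaded CAN FD bus at 8 Mb/s? */
    int main(void)
    {
        double frame_us = 30.0 / 0.5 + (64 * 8 + 40.0) / 8.0;   /* ~129 us on the bus */
        double spi_us   = (64 + 8) * 8 / 10.0;                   /* ~58 us over SPI    */
        printf("bus time per frame: %.0f us, SPI transfer time: %.0f us\n", frame_us, spi_us);
        return 0;
    }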

SUMMARY

Applications transitioning from CAN 2.0 to CAN FD benefit from many enhancements, such as an increase in data rate and payload. However, application changes are required to take advantage of these enhancements. The transition affects all levels of the development process, from the tool supplier up to the end automotive manufacturer. Designers have already started to transition CAN FD into their automotive and industrial applications.

Automotive manufacturers in the US, Europe and Asia are planning to implement CAN FD as early as model year 2018 with a wider adoption expected in 2020. Automotive suppliers have started development to prepare for upcoming CAN FD programs and are facing aggressive timelines and design challenges. In many cases, cost-effective MCUs with CAN FD that are well-suited for their applications may not be available.

For these situations, using an external CAN FD controller can be a viable alternative. Using an external CAN FD controller will help to minimize development timelines and be more cost effective than using a high-end MCU with CAN FD.

Orlando Esparza

Microchip Technology Inc.

2355 W. Chandler Blvd.

Chandler, AZ 85244

+1- 480-792-7363

orlando.esparza@microchip.com

www.microchip.com

Wilhelm Leichtfried

Microchip Technology Inc.

2355 W. Chandler Blvd.

Chandler, AZ 85244

+1- 480-792-7572

wilhelm.leichtfried@microchip.com

www.microchip.com

Fernando Gonzalez

Microchip Technology Inc.

2355 W. Chandler Blvd.

Chandler, AZ 85244

+1- 480-792-4578

fernando.gonzalez@microchip.com

www.microchip.com

REFERENCES

[1] ISO/DIS 11898-1:2015(E), Road vehicles – Controller area network (CAN) – Part 1: Data link layer and physical signaling

[2] ISO/CD 11898-2:2015-06-08, Road vehicles – Controller area network (CAN) – Part 2: High-speed medium access unit

[3] AUTOSAR 4.x Specification

Road Sense Becomes Connected Sensor Technology

Thursday, April 21st, 2016

Sensors in vehicles are increasing in number and importance, reports Caroline Hayes. Without them, the move toward autonomous driving would stall.

Trials of autonomous vehicles continue to roll out, heralding the era of the driverless car. In North America, four states have taken advantage of government legislation that allows testing of automated vehicles. In Europe, Volvo, the Swedish Transport Administration, the Swedish Transport Agency, Lindholmen Science Park and the City of Gothenburg have come together in the "Drive Me" project, putting 100 self-driving cars, performing everyday commutes using Autopilot technology, on the streets of Gothenburg by 2017. The test cars have so far been able to follow lanes and adapt to speeds and merging traffic, with the next stage being the cars driving the whole route in autonomous mode. Other countries are watching the growing interest in autonomous driving. Earlier this month, in Germany, arguably the heart of Europe's automotive manufacturing, Chancellor Angela Merkel invited carmakers to draw up a wish list—and timetable—so that a legal basis for self-driving testing could be proposed.

Figure 1: The city of Gothenburg, Sweden is in the throes of the Drive Me project of autonomous vehicles driving commuter routes (Courtesy Volvo).

Vehicles today have between 60 and 100 sensors. These are used in Advanced Driver Assistance Systems (ADAS) to warn a driver of hazards or obstacles on the road while driving, for example if lane drifting is detected, as well as when parking. Collision warning sensors can detect when a vehicle is too close to the one in front; Rear Cross Traffic Alert (RCTA) detects vehicles behind you when reversing; other sensors alert the driver to blind spots and, when coupled with a 360-degree viewing monitor, can detect moving objects within the vehicle's perimeter.

ADAS can also be connected to the Cloud and other telematics services to advise of weather or traffic alerts, via a Human Machine Interface (HMI) on the driver dashboard.

As more data is collected, the number of sensors is increasing and is projected to reach 200 per vehicle, or approximately 22 billion sensors by 2020.

Real-Time Response

Autonomous vehicles rely on larger volumes of data than ADAS. It is critical that a self-driving car responds to changes in its environment or conditions in real time. This requires large amounts of data, from inside and outside the vehicle, to be processed and analyzed in real time, and once analyzed, the results must be displayed in real time.

Figure 2: Volvo, with Chalmers University of Technology in Gothenburg, uses sensors to develop in-vehicle driver safety systems. (Courtesy Volvo)

In addition to cameras relaying image data, other safety features rely on sensors. For example, Volvo and Chalmers University of Technology in Gothenburg have been researching driver behavior (Figure 2). Sensors on the dashboard monitor where the driver is looking, head position and angle, as well as how open the eyes are, to detect the driver's state of awareness and attention. If the driver allows the vehicle to get too close to the car in front, or to move out of its lane, an audible alert is activated.

Nissan is also using sensors to monitor driver alertness. Earlier this month, it introduced the 2016 Maxima sedan, which it calls a four-door sports car, at the New York International Auto Show. The vehicle is equipped with a 3.5-liter V6 engine and Driver Attention Alert (DAA) technology to detect drowsiness or inattention. It uses steering angle sensors to monitor driver steering patterns. When a baseline is established, it continuously monitors subsequent driving patterns. It uses logic to account for curves in the road, lane changing, braking and weather conditions, but any deviation from the baseline triggers an audible chime, a coffee cup icon and a "Take a break?" message on the information display (Figure 3).

Figure 3: Nissan’s sensor-based Driver Attention Alert (DAA) technology advises a break if it senses the driver is drowsy. (Courtesy Nissan).

This amount of data processing and display takes its toll. To access and process data from many sources, automotive manufacturers are looking for compute-intensive, yet compact, lightweight semiconductors to be designed into the automotive embedded system.

The consensus is that multicore architectures are the most silicon-efficient solution. Renesas has announced its third generation of SoCs for automotive computing, the R-Car H3. It is built around 64-bit ARM® Cortex®-A57 and Cortex-A53 CPU cores. To meet the processing demands of today's vehicles, it achieves a processing performance of 40,000 Dhrystone Million Instructions Per Second (DMIPS) and also has Imagination Technologies' PowerVR GX6650 3D graphics engine to display the information.

Radar Monitoring

The R-Car H3 processes the data from sensors around the vehicle in real time and allows multiple applications such as detection, prediction and avoidance to run. It conforms to the ISO 26262 (ASIL-B) functional safety standard for automotive use. The SoC is supported by Green Hills Software's INTEGRITY Real Time Operating System (RTOS) and INTEGRITY Multivisor Virtualization platform. The 64-bit secure virtualization platform was released last year and was developed, says the company, with the specific capabilities of the R-Car H3 in mind. The platform meets ISO 26262 functional safety requirements and can also be adapted to applications such as a reconfigurable digital instrument cluster, or to provide compute and sensor capabilities for ADAS. The AUTOSAR-compliant application framework means that existing software components can be seamlessly integrated and reused.

Figure 4: Renesas has announced the third generation of its automotive computing platform, the R-Car H3. (Courtesy Renesas).

Another semiconductor company, Infineon, relies on the multicore AURIX microcontroller. It is based on up to three 32-bit TriCore™ Central Processing Units (CPUs). The microcontroller can be used to control body, safety, ADAS and Powertrain applications.

In Munich last month, the company presented a 24GHz radar-based assistance system for trucks based on the microcontroller, customized for radar applications.

Radar systems monitor blind spots in large trucks and construction vehicles. They allow several trucks to drive in columns at short following distances, even at night or during rain, fog or snow.

The 24GHz radar system uses a 24GHz radar chip, the BGT24ATR12, and the AURIX TC264DA microcontroller, which has been customized for radar systems. The radar chip captures data from around the vehicle and transmits the high-frequency signals to the microcontroller in the radar electronic control unit, which processes the data and passes the results to the driving assistance system.

Weight and space are valuable commodities in vehicle design. According to Infineon, AURIX in a radar system can eliminate the need for Digital Signal Processors (DSPs), memory chips and Analog to Digital Converters (ADCs).

The company's AURIX TC297T microcontrollers were specified by TTTech for Audi's zFAS central control unit, which integrates ADAS functionalities. The control unit uses TTTech's Deterministic Ethernet to combine traffic data, and TTIntegration middleware to run the application on top of networked microcontrollers, for piloted driving on congested roads and piloted parking.

Deterministic Ethernet and the AURIX TC297T have also been used to develop TTA Drive, an evaluation platform for driver assistance system Electronic Control Units (ECUs). Different application modules can be integrated at the development stage using TTIntegration software.


Caroline Hayes has been a journalist covering the electronics sector for over 20 years. She has edited UK and pan-European titles, covering design and technology for established and emerging applications.

Accelerating Data Access in Automotive Designs

Friday, April 8th, 2016

Quenching drivers’ thirst for information without compromising security or risking power loss.

The modern car and commercial vehicle are arguably a rolling Internet of Things (IoT). Computers control the dashboard, engine dynamics, safety systems and more (Figure 1). Each subsystem has its own critical data, and some of that data is useful to other systems within the car. Today’s driver wants access to all the information available so they can be the best driver possible, but also so they can have the best driving experience. But choosing the right Real Time Operating System (RTOS) and file system to ensure seamless, reliable and secure access to data across subsystems is challenging.

The first electronic control units (ECUs) in cars were analog and handled the minutiae of fuel injection for Volkswagen and Chevrolet. It wasn’t long before microprocessors were in control of other key systems. Forty years later, cars have more than 100 ECUs with millions of lines of code controlling them. Dozens of microprocessors must now not only collect their sensor data and control their specific subsystems, but they must also communicate with other device systems—and occasionally with devices outside the car as well.

Figure 1: Keeping data accessible across subsystems is no easy task. (Courtesy Datalight)

Figure 1: Keeping data accessible across subsystems is no easy task. (Courtesy Datalight)

Considering Interactions

Each of these microprocessors has its own RTOS environment. From the In-Vehicle Infotainment (IVI) console, to the engine dynamics, to overall vehicle diagnostics and data logging, each subsystem designer has chosen the right RTOS for their design. However, it is becoming increasingly important to also consider the interaction of systems within the automobile.

For each individual RTOS environment, there are a handful of default file system formats—though Linux would require big hands. These formats are nearly always incompatible with one another, so media written by one RTOS will not be directly readable by another RTOS unless a special driver is loaded. One exception to this is the FAT file system, which is generally common across all platforms and RTOS choices, but features some limitations.

The FAT file system introduces a host of problems, from slow file access times (unless a FAT cache is also used) to the tendency to overwrite data when modifying it in place (unless unusual programming techniques are used). It lacks a reliable way to detect media or power interruption problems, and the checking required to compensate can make it quite slow to mount. FAT also doesn't offer user access controls or real security attributes for files.

Other RTOS environments, including Wind River’s VxWorks and Mentor Graphics’ Nucleus, also lack full security attributes for their file system solutions. This is one reason many applications have been somewhat shielded from the actual hardware. However, applications can be modified to provide some of the missing security at the cost of a larger footprint and less flexibility.

Instead of using a common file system to share data among several subsystems, one could instead use a network or API to transfer data between the different RTOS environments. In this instance, the original copy of the data is never at risk, but the growing size of the data involved could cause problems with internal bandwidth and latency. The only common protocol in most automotive environments today is the Controller Area Network (CAN) protocol, which is insufficient for large transfers. Another concern for a network or API solution is later redesign or addition of components, which would then have to meet older standards for a wide variety of vehicles.

The Most Data in the Safest Manner

By using a common on-media format that has reliability and security built in, each application can have required access to common data when needed. The IVI console can display average results from engine metrics and controls for the telematics control unit (GPS and other external networks) without interrupting those functions. Any critical data can be accessed to help make intelligent choices, leaving the internal CAN and other networks free for vital functions.

Designers of automotive systems and subsystems have choices besides transmitting data across internal networks and sharing data on a common drive format that is vulnerable to power loss. Datalight’s Reliance Nitro is designed to be robust against power loss and flexible to meet the designer’s needs. Security attributes from Linux are used internally and can be added to VxWorks and other environments through a dynamic hooks interface. This solution will allow the most data available to the operator in the safest manner, resulting in the best driving experience.


Thom Denholm is a technical product manager at Datalight. He is an embedded software engineer with more than 20 years of experience, combining a strong focus on operating system and file system internals with a knowledge of modern flash devices. Thom holds a BS in Mathematics and Computer Science from Gonzaga University. His love for solving difficult technical problems has served him well in his fifteen years with Datalight. In his spare time, he works as a professional baseball umpire and an Internet librarian. Though he has lived in and around Seattle all his life, he has never had a cup of coffee.

Why It’s Important to Get on Board the Software-Defined Car

Thursday, July 9th, 2015

What's already happened to smartphones and other devices will soon disrupt the automotive market.

Pistons and powertrains absorbed all the attention of automotive engineers years ago, but today’s focus will increasingly be on the software driving the advanced electronics inside automobiles. Yes, this represents a major disruption, but like all significant evolutions, it also creates opportunities. Developments in what’s been called “the software-defined car” are moving like a Bugatti Veyron Super Sport in several automotive sectors, bringing new Silicon Valley thinking to Detroit.

Figure 1: Software updates will be made over-the-air to the new Software Defined Cars. Photo courtesy Movimento Group

Nobody would deny that today's car is a rich hardware platform bristling with sensors and processors that rely on software. Graphical displays, touch screens, computer graphics, voice control—these are becoming the car's interface, with electronic sensors and algorithms shaping the driving experience to a large extent. The software-driven features coming down the road are amazing; not just new auto infotainment apps, but completely new features like wholesale personalization, brand-new advanced driver-assistance features, extensions for car sharing, regional specific adaptations, car-to-home integration, new vehicle safety options, remote mobile control, new 4×4 drive modes and more.

Automakers Join OTA SW Update Bandwagon
The software-defined car must also be the software-connected car, with over-the-air (OTA) updates making sure that each automobile is running the latest, most effective code. OTA updates began—naturally—with Tesla's Model S, but the concept has been speeding forward. At least five automakers—BMW, Hyundai, Ford, Toyota and Mercedes-Benz—now offer OTA software updates, with many more likely to join in over the next 18 months or so.

OTA software upgrades affect not only entertainment systems but also powertrain and vehicle safety systems. According to analyst firm Gartner Group, there will be 250 million connected vehicles on the road by 2020. Within five years, OTA software upgrades are expected to be commonplace for new vehicles.

The value of the market for connected car services is forecast to grow to $148 billion in 2020, reported PricewaterhouseCoopers (PwC). Safety-related features are expected to account for 47 percent, followed by autonomous driving at 35 percent, with entertainment features accounting for 13 percent. This networked mobility market represents a tripling from today's level and is being pushed not only by demand for connected-car components, but also by the rise of entirely new digital business opportunities, stated PwC.

There's another powerful indicator of the huge potential of this no-fuss, no-muss approach to software updates, one that has grabbed the attention of the entire auto industry: a slew of high-priced acquisitions in the segment, beginning with Harman International's $170 million purchase of Red Bend Software and its $780 million buy of Symphony Teleca.

Thumbs Down on Software Update Travel
Given how consumers use their other electronic devices, they may soon consider it outrageous that they must either download to a thumb drive or travel to a dealership to have their vehicle’s software updated. Imagine having to go to a retailer to upgrade an application on your smartphone or tablet. While automotive systems are more complex than consumer electronics, OTA is clearly the future for electronics-driven vehicles.
Nevertheless, an automotive-grade solution must be reliable, secure and safe. It must also handle the numerous interdependent modules within the car. Customer expectations are much higher as well. People want “mobile-like” updates with zero downtime, zero crashes and zero trouble. While customers may be annoyed when a new software update breaks their phone for an hour, it’s unacceptable if a broken automotive software update means that they can’t get to work for the same period.

A Future in Future Proofing
Most discussions about vehicle software updates have been limited to bug fixes, recall avoidance and security patches. However, with software supplying so many of the car’s capabilities, software updates can give the industry an amazing degree of flexibility that hasn’t yet been fully realized. One obvious opportunity is in releasing innovative new software throughout the vehicle’s entire life. In effect, future-proofing hardware for the car.

The software-defined car also lets carmakers react in real time to their customers. OEMs can access anonymous diagnostic data that lets them see how their cars are being used. Such access can feed a continual improvement process to develop new features that customers want. Just like application updates do today, automotive software updates can fix areas of continual complaint, add newly requested options and, most important, create happy, satisfied customers.

The software-defined car must combine deep automotive expertise, secure over-the-air updates, complete occupant safety, and flexible diagnostic access to allow process and product improvements through reliably updatable software.

Fixing Some of Today’s OTA Limitations
While OTA is clearly on its way to becoming a standard practice in the industry, many current vendors haven't ironed out some of the limitations in their solutions. One approach to addressing the concerns of automotive OEMs and Tier-1 module manufacturers comes from car software management company Movimento. The company recently brought to market a solution that encompasses the software updating needs of the car as a whole.

Some vendors require brand-specific code to be installed on every module. But this software often isn’t even available for older, legacy architectures. OEMs should seek out solutions enabling all-car updates for the widest list of brands and models.

Another superior OTA feature worth seeking out is bidirectional data gathering within the car, in which a data agent gathers vehicle diagnostics for prognostic and preventative analytics. This data can also be provided to third-party companies for insurance and other uses. An OTA platform such as Movimento's can use this data to determine when the next software update can be safely and proactively applied to the car.

Security is an ongoing need in the OTA process to keep autos safe and minimize the risk of hacking, for example by monitoring the vehicle bus and detecting unauthorized messages. Today's most advanced approach uses cybersecurity technology to continually protect the entire vehicle from unauthorized messages and security breaches of any kind, including malware.

Mining the Software Mindset
The software-defined car is a huge step forward in a technology continuum that will eventually bring us production versions of the driverless car and other major advances rewriting the auto industry. Consider that a Boeing 787 Dreamliner has 15 million lines of code—a tenth of what's expected in tomorrow's driverless cars. Automotive engineers with a software mindset today will be those best equipped to harness the amazing technology developments in the near future.


Mahbubul Alam is CTO, Movimento. Alam joined Movimento as CTO in 2015 and is responsible for aligning automotive and information technology with corporate strategy to enable Movimento to lead the automotive industry's transition to software-defined vehicles. A 17-year industry veteran, Alam works with Movimento customers to maximize the potential of secure over-the-air (OTA) updates and enable new connected services for the vehicle.

Previously, Alam led Cisco Systems strategy in IoT and M2M, where he pioneered and developed this business from the ground up through vision, strategy, platform and execution. He also helped initiate the company’s smart connected vehicle roadmap. Before joining Cisco, Alam held technical leader positions at Siemens and worked as a technical advisor to the Dutch government.

Alam holds a master’s degree in Electrical Engineering from Delft University of Technology.
