Posts Tagged ‘top-story’


The Impact of 5G on Autonomous Vehicles

Tuesday, October 9th, 2018

What are the ways 5G is set to enhance High Definition (HD) mapping and more?

One of the key aspects of 5G is the uplink data rate, that is, the rate at which data moves from the vehicle to the cloud. Compared to 4G, 5G significantly increases the sustainable bandwidth of both the uplink and the downlink. But bandwidth by itself is not enough. How do you get close to real-time decision making? That requires ultra-low latency with guaranteed jitter and delivery.

Second, High Definition (HD) mapping requires high-speed bandwidth to transmit massive volumes of data. HD mapping also demands location precision, so the system knows where objects are relative to each other. A third necessary element is support for high vehicle density per cell site: it is very important that many vehicles can be close to each other and still each access a high sustainable data rate.

One thing 5G provides, for the first time, is sub-millisecond latency. With 4G LTE today, a round-trip communication takes 10-30 milliseconds; with 5G it would be under one millisecond, which is close to real time, and that figure can be reduced further. Peak downlink bandwidth will be about 20 Gbps, with sustainable downlink bandwidth of 1 Gbps. Sustainable uplink is on the order of 10 Mbps, and peak uplink can reach on the order of 100 Mbps. Having a sustainable uplink of 10 Mbps and a sustainable downlink of 1 Gbps is a big deal. 5G will also improve vehicle density by 100x compared to 4G; that is, 100 times as many devices or vehicles per cell site can use real-time data streaming.
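To put those numbers in perspective, here is a minimal Python sketch that works out transfer times from the rates and latencies quoted above; the 50 MB payload size is an assumption chosen purely for illustration.

```python
# Illustrative transfer-time arithmetic using the rates and latencies quoted above.
# The 50 MB payload size is an assumption, not a figure from the article.

def transfer_time_s(payload_bytes: float, rate_bps: float, rtt_s: float) -> float:
    """Serialization time plus one round trip of latency."""
    return payload_bytes * 8 / rate_bps + rtt_s

PAYLOAD = 50e6                        # 50 MB of sensor or map data (assumed)
UPLINK_5G = 10e6                      # sustainable 5G uplink, 10 Mbps
DOWNLINK_5G = 1e9                     # sustainable 5G downlink, 1 Gbps
RTT_4G, RTT_5G = 30e-3, 1e-3          # round-trip latency: 4G ~30 ms, 5G <1 ms

print(f"Upload over 5G uplink:     {transfer_time_s(PAYLOAD, UPLINK_5G, RTT_5G):.1f} s")
print(f"Download over 5G downlink: {transfer_time_s(PAYLOAD, DOWNLINK_5G, RTT_5G):.3f} s")
print(f"Latency saved per round trip vs. 4G: {(RTT_4G - RTT_5G) * 1e3:.0f} ms")
```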

Last but not least is the logical separation of the 5G network by industry and market. Today's 4G/LTE network is a single network used for everything from utilities, telemetry, and metering to watching movies and email. 5G, by contrast, has network separation capability, so a logical network encompassing connected-driving features can be dedicated solely to autonomous driving. Only vehicles would use that particular logical network, and so network performance, business modelling, pricing, and the like can be focused on vehicles' needs. Latency can also be managed because only one kind of traffic is present; today's mixed traffic patterns make it difficult for carriers to understand where optimization is required. 5G will enable HD mapping in the cloud and then send that information back, in close to real time, creating a superior user experience.

Better Suited for V2X?
V2X is not a single technology. It covers use cases such as vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-pedestrian (V2P), and vehicle-to-X (where X can be anything), and what matters is which technology is used to implement them. Today, when people talk about V2X, what they usually mean is Dedicated Short Range Communication (DSRC), also known as IEEE 802.11p, which is rooted in WiFi; 5G, by contrast, comes from the evolution of cellular technologies such as 2G, 3G, and 4G. Worth considering is the difference between DSRC and 5G cellular technology for vehicle use cases. Both are applicable to V2X, but which one is better suited? From a security and coverage perspective, the two main components of V2X are V2V and V2I. DSRC-based V2V does not require infrastructure: beacons sent out from the vehicle let it detect other vehicles nearby. The same thing can be achieved with 5G Direct and even with 4G LTE Direct. This kind of communication also enables device-to-device (D2D) communication.

5G will have D2D communication capability similar to DSRC V2V communication. From that perspective, DSRC focuses on the communication itself and on certificate-based security. Certificates can be used with 5G as well, except that 5G also has security at the radio-link level because of the inherent security that comes with USIM (Universal Subscriber Identity Module) or eUICC (embedded Universal Integrated Circuit Card) technology. Even with the advantage of mobile security, we may still end up maintaining a list of revoked vehicle certificates, whether in DSRC or in 5G; that sits above the communication level. At the communication level itself, 5G provides better security because it evolved from 3G/4G security and has communication security built in. DSRC does not have the same level of communication security on its own, so higher layers of security have to be added. The certificate infrastructure, such as X.509 certificates, can apply to both, because certificates are not at the communication level; they belong to the higher level of authentication and authorization.

Looking at cybersecurity, which spans silicon to communication to authentication/authorization, 5G is much more well-defined; V2V implementations today vary from vendor to vendor, with no single standard. Looking at which coverage approach is better suited for different strategies and services, one key characteristic of 5G is that, for the first time, cellular can support infrastructure-less communication. It is not just about peak speeds and low latency. What does that mean? It means that 5G can support mesh topologies: traffic can hop from one vehicle to another to create an ad-hoc network of vehicles that communicate among themselves. DSRC also does this to some extent; it does not require infrastructure for V2V, so it creates a network of its own. 5G can offer the same services DSRC offers. With regard to strategy, looking at it from a WiFi perspective means banking on DSRC; looking at it from a cellular perspective means extending cellular capability to enable mesh networks and ad-hoc communication. The cellular V2V approach is newer, whereas DSRC has been around for many years, with people doing trials and proofs of concept (POCs).

What Matters
Cellular V2V and DSRC can complement each other, but they can definitely compete with one another as well. If DSRC is used for V2X, billions of dollars have to be spent to build the infrastructure for V2I: a new infrastructure, the operational expenditure, and a roadmap of future enhancements that justify the ROI, with ownership of the project probably falling to government. To take advantage of V2I, massive infrastructure investment is required, whereas 5G has the advantage of cellular technology and carriers interested in rolling out different use cases. In this scenario, the carriers pick up the bill for the infrastructure. The carriers can logically separate the infrastructure for mobile internet, for smart grids, for connected devices, for autonomous vehicles, and so on. In this way, they can monetize the technology in different ways while the infrastructure remains shared. From this perspective, 5G has an advantage over DSRC because the cell towers, real estate, and so on can all be leveraged and reused. The carriers can monetize their investment not from one vertical only, but from many vertical use cases: industrial, retail, end user, connected car, automated driving, utility, etc. That bigger pool of verticals increases the chance of a favourable ROI. With that in mind, 5G is better suited. What has been learned from DSRC V2V and V2I use cases can be carried over to 5G, as the radio is just the underlying technology. It does not really matter whether you use 5G or DSRC; what really matters is how you use the applications built on top.

Ready to Go Nationwide
5G infrastructure is ready to go nationwide because of the business case noted earlier. The business case for the carriers is well proven, and they are looking to roll out 5G not only for the capacity increase but because, for the first time, they can provide dedicated networks for vertical industries and charge for them differently, which they could not do before with broadband internet services. Carriers are definitely ready. DSRC is absolutely not: no country has billions of dollars to roll out a dedicated infrastructure only for vehicle safety. Another thing to understand is the silicon that has to go into the vehicle. The advantage 5G has is that its silicon is produced at massive scale for smartphones. The cost benefit of that scale, and the robustness proven by 3G/4G networks used by billions of people, remain unproven ground for DSRC, which lacks a deployment even a hundred thousand users in size. DSRC will also face new challenges because it does not have the inherent security of 3G/4G cellular technology. Of course, 5G will have challenges of its own, but they will pale in comparison to those DSRC will face. The cellular approach to V2I will extend to V2V as well, because every car needs connectivity. It will also be applicable to V2P, where a beacon transmitted from a person's device is detected by the car in order to avoid accidents. The best part of cellular technology is its broad applicability: in smartphones, in cars, in hand-held devices. The applicability is so broad that you can truly have V2X rather than only V2V or V2I. This can be further extended to device-to-device communication, where bicycles and motorcycles will be able to avoid fatal accidents. The massive volume of smartphone production is going to drive down costs faster than any other technology.

Conclusion
The window for DSRC to be implemented and become successful is only getting smaller. As 5G arrives, DSRC does not have that kind of volume, it is not mandated, and it is not in the car, so it is losing whatever advantage it had. Once 5G is in the towers and in cell phones, it will be much more difficult to introduce a new technology and attract investment only for vehicles.


Mahbubul Alam is CTO and CMO at Movimento – an Aptiv company.


“…the capability to store the entire trained model of the neural network…” Q&A with Sylvain Dubois, Crossbar

Tuesday, July 17th, 2018

Autonomous driving is just one of the applications hungry for processing at the edge, giving embedded memory growing strategic importance.

Editor’s Note: “The boundary between data and compute is really blurring now,” contends Sylvain Dubois. The vice president of strategic marketing and business development at ReRAM technology company Crossbar also explains why putting data and computing on the same chip is making more and more sense. I spoke with Dubois in May, shortly before Crossbar unveiled its collaboration with Microsemi. Microsemi products manufactured at the 1x nm process node will integrate Crossbar’s embedded ReRAM technology.

Crossbar ReRAM is enabling a new range of energy-efficient computing architectures compared to legacy SRAM or DRAM-based architectures.

EECatalog: Across AI, networking, and computing, we're seeing increasing demand for embedded nonvolatile memory [NVM].

Sylvain Dubois, Crossbar: Yes, embedded memory is of strategic importance for CMOS foundries. If you go to the technology symposiums of all the top foundries, such as TSMC, Global Foundries, UMC, Samsung, and SMIC, they are all looking for ways to have access to embedded NVM (Non-Volatile Memory) technologies: Flash all the way to 40 nm, and then MRAM and ReRAM for 2x nm and 1x nm.

EECatalog: How is what Crossbar and Microsemi will be doing—integrating embedded ReRAM at 1x nm—going to make a difference for OEMs and developers?  Can you describe how a use case would change?

Dubois, Crossbar: A typical use case would involve bringing more computing power to the edge. More processing is done locally; this includes wearables and hand-held devices, surveillance cameras, and autonomous driving, for example. And that brings up the whole topic of AI [Artificial Intelligence] inference at the edge, where you are not necessarily training the AI algorithms in the field but instead using the trained model so that devices at the edge can recognize a face or a traffic sign. Crossbar's ReRAM technology will make a difference for any pattern-recognition task such as object or face detection. That's what we demonstrated at the Embedded Vision Summit, showing how you can bring embedded ReRAM and neural networks together in a one-chip solution to make very low-energy computing devices.

Today, what people are doing is trying to store the AI inference model (the weights and features of the neural network) in the internal SRAM buffers on the chip. But SRAM is not a dense memory; it won't be big enough, so the models end up partially stored in external DRAM banks that are very expensive and also very power hungry. Both SRAM and DRAM are volatile memories, meaning that they lose their content when powered down, which requires an additional layer of flash memory to store the model when power is off.

But now, with embedded ReRAM, you have the capability to store the entire trained model of the neural network directly on chip. ReRAM retains its content for 10 years even when not powered, which eliminates the need for an external flash memory back-up and enables new use models in which the end device can be powered down and up frequently to extend battery life.
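As a rough illustration of that sizing argument, the back-of-the-envelope Python sketch below compares a small quantized model against on-chip memory budgets. The layer dimensions, 8-bit weights, and capacities are assumptions for illustration only, not figures for any particular Crossbar or customer device.

```python
# Back-of-the-envelope sizing; all layer sizes and capacities are assumptions.
layer_weights = {                  # hypothetical small CNN, weight counts per layer
    "conv1": 3 * 3 * 3 * 32,
    "conv2": 3 * 3 * 32 * 64,
    "conv3": 3 * 3 * 64 * 128,
    "fc":    4 * 4 * 128 * 256,
}
BYTES_PER_WEIGHT = 1               # assume 8-bit quantized weights

model_bytes = sum(layer_weights.values()) * BYTES_PER_WEIGHT
SRAM_BYTES = 256 * 1024            # assumed on-chip SRAM buffer
NVM_BYTES = 4 * 1024 * 1024        # assumed embedded non-volatile array

print(f"Model size: {model_bytes / 1024:.0f} KiB")
print("Fits in 256 KiB SRAM?      ", model_bytes <= SRAM_BYTES)
print("Fits in 4 MiB embedded NVM?", model_bytes <= NVM_BYTES)
```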

What we have designed is a specific, very wide memory array with some amount of in-memory computing: pattern-recognition and distance-computation logic blocks. At the Embedded Vision Summit, we implemented a facial recognition demo showing classification of a new face against a huge database of other faces in only one iteration.

EECatalog:  How did the demonstration turn out?

Dubois, Crossbar: It was very well received, as this classification task, comparing one input against a huge database of objects, usually takes a lot of time and power. The value proposition here is that the comparison of one input against a huge database is extremely deterministic: it always takes the same amount of time whatever the size of the database, from very few pictures to 100,000 pictures. The computation is done in only one iteration, a few clock cycles.

EECatalog:  How does that use case look if it’s not being accomplished with ReRAM?

Dubois, Crossbar: Today, if you want to do the same use case with embedded SRAM and external or stacked DRAMs and GPUs, it is done in a serial manner, where the larger the database, the longer it takes, because you have to compare the input against all of the pictures of objects stored in memory.
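To make the contrast concrete, here is a minimal Python/NumPy sketch. The single vectorized distance computation stands in for a wide in-memory array comparing the query against every stored entry at once, while the loop stands in for the serial fetch-and-compare flow whose runtime grows with the database; the database and feature-vector sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DB_SIZE, FEATURES = 100_000, 128              # assumed database and feature-vector sizes
database = rng.standard_normal((DB_SIZE, FEATURES)).astype(np.float32)
query = rng.standard_normal(FEATURES).astype(np.float32)

# "One iteration": every stored vector is compared against the query at once,
# analogous to a wide in-memory array computing all distances in parallel.
best_parallel = int(np.argmin(np.linalg.norm(database - query, axis=1)))

# Serial flow: fetch each record from (off-chip) memory and compare one at a time;
# runtime grows linearly with the database size.
best_serial, best_dist = -1, float("inf")
for i, row in enumerate(database):
    d = float(np.linalg.norm(row - query))
    if d < best_dist:
        best_serial, best_dist = i, d

assert best_parallel == best_serial
print("Nearest match index:", best_parallel)
```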

Because ReRAM is on-chip and non-volatile, we provide a very energy-efficient way to perform classification of objects and patterns, with fast and deterministic latencies, consuming less energy than SRAM/DRAM memories.

And it’s also very secure. Privacy matters when the database includes not only your face but also your biometrics and vocal commands. You don’t want the whole conversation in your living room to be processed in the cloud and potentially hackable by malware. Biometrics identification, speech recognition and classification of objects from surveillance cameras are typical use cases for energy-efficient computing and memory on the same chip.

EECatalog: One of the big-picture issues here is the ability to anticipate that next advanced process node and scale to it.

Dubois, Crossbar: Yes, it is important to pick a memory technology that scales because most of these AI chips or advanced SoCs, or microcontrollers, are currently designed at 22nm, 14nm, 12nm or even below 7 nm.

Crossbar ReRAM cells are programmed by applying a very low voltage across two electrodes, causing metal ions from the top electrode to migrate and form an extremely short, narrow filament (3 or 4 nm). Growth of this metallic wire creates a conductive path, enabling a very low-resistance state; the ON current flowing through the filament represents the logic 1 state. To write a logic 0 state, we simply reverse the electric field so that the metal ions are pulled back to the top electrode, creating a high-resistance state, almost an open circuit.

Based on the physics of the metal filament we grow and remove, the difference between ON and OFF current is extremely high, more than 1000x, providing generous read margins and reliability for the ReRAM technology at the most advanced process nodes. As the filament is just 3 nm, going below 10 nm is definitely possible with Crossbar ReRAM technology.

The ReRAM cell is so small that it can fit in between the metal routing layers of standard CMOS wafers. This is the reason why we can have a breakthrough architecture with millions of connection points between the logic and the memory, compared with a maximum of thousands of connections with stacked DRAMs. It is a truly monolithic integration of embedded ReRAM and logic in the same silicon.

EECatalog:  Anything to add before we wrap up?

Dubois, Crossbar: The boundary between data and compute is really blurring now. Algorithms are trained with lots of data, and devices are now self-sufficient enough to perform object identification and pattern recognition within a minimal power budget. Crossbar ReRAM is enabling a new range of energy-efficient computing architectures compared to legacy SRAM- or DRAM-based architectures. Crossbar is working with multiple partners to create innovative architectures where data and processing are integrated on the same silicon chip.

Whether for edge computing in hand-held mobile devices and home appliances, or for cloud computing in data centers, people are starting to realize that they can cut their energy bill quite drastically by putting the data and the computing on the same chip. Most system companies are now expanding their strategies toward vertical integration of their business all the way to chip manufacturing, as it makes a lot of sense for differentiation.

7 Reasons Every Autonomous Vehicle Needs an Accurate Inertial Measurement Unit

Monday, July 9th, 2018

The IMU’s independent property serves it well for safety and sensor fusion applications, but that is just one of the ways it lends advantages to self-driving solutions.

An inertial measurement unit (IMU) is a device that directly measures the three linear acceleration components and the three rotational rate components (6-DOF) of a vehicle. An IMU is unique among the sensors typically found in an autonomous vehicle because an IMU requires no connection or knowledge of the external world.

A self-driving car requires many different technologies: LiDAR to create a precise 3-D image of the local surroundings; RADAR for ranging targets using a different part of the EM spectrum; cameras to read signs and detect color; high-definition maps for localization, and more. Each of these technologies relies on the external environment in order to send localization, perception, and control data to the software stack. The IMU has no such reliance, and its unique “independent” nature makes it a core technology for both safety and sensor-fusion.

Figure 1: An accurate IMU can mitigate depth ambiguity and other issues associated with existing technologies.

#1 Safety First
The system engineer needs to consider every scenario and always have a back-up plan. Failure Mode Effects Analysis (FMEA) formalizes this approach into design requirements for risk mitigation. FMEA asks: what happens if the LiDAR, RADAR, and cameras all fail at the same time? An IMU can dead-reckon for a short period of time, meaning it can determine full position and attitude independently for a short while. An IMU alone can slow the vehicle down in a controlled way and bring it to a stop, achieving the best practical outcome in a bad situation. While this may seem like a contrived requirement, it turns out to be fundamental to a mature safety approach.
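To show what "dead-reckon for a short period of time" looks like in practice, here is a minimal Python sketch of planar dead reckoning from IMU samples alone. It assumes the vehicle stays roughly level, so only the yaw gyro and the two horizontal accelerometers matter, and the sample data and rates are invented for illustration.

```python
import math

def dead_reckon(samples, dt, x=0.0, y=0.0, heading=0.0, vx=0.0, vy=0.0):
    """Integrate body-frame accelerations and yaw rate into a 2-D pose.

    samples: iterable of (ax, ay, yaw_rate) in m/s^2 and rad/s (body frame).
    Returns the final (x, y, heading).
    """
    for ax, ay, yaw_rate in samples:
        heading += yaw_rate * dt
        # Rotate body-frame acceleration into the navigation frame.
        an_x = ax * math.cos(heading) - ay * math.sin(heading)
        an_y = ax * math.sin(heading) + ay * math.cos(heading)
        vx += an_x * dt
        vy += an_y * dt
        x += vx * dt
        y += vy * dt
    return x, y, heading

# Example: 2 s of gentle braking (-1 m/s^2) at 100 Hz with no turning,
# starting from 20 m/s, the kind of controlled slowdown described above.
samples = [(-1.0, 0.0, 0.0)] * 200
print(dead_reckon(samples, dt=0.01, vx=20.0))
```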

#2 A Good Attitude
An accurate IMU can determine and track attitude precisely. We often think of a car’s position or location, but when driving the direction or heading is equally crucial. Dynamic control of the vehicle requires sensors with dynamic response, and an IMU does a nice job of tracking dynamic attitude changes accurately. Moreover, attitude is needed to control the vehicle and is often an input into other algorithms. While LiDAR and cameras can be useful in determining attitude, GPS is often pretty useless. Moreover, a stable independent attitude reference has value in calibration and alignment.

#3 Accurate Lane Keeping
When not distracted or drunk, humans are pretty good at driving. A typical driver can hold their position in a lane to less than 10 cm. If an autonomous vehicle wanders in its lane, it will appear to be a bad driver, and during a turn, for instance, poor lane keeping could easily result in an accident. The IMU is a key sensor for steering the vehicle dynamically, and it can maintain better than 30 cm accuracy for short periods (up to 10 seconds) when other sensors go offline. The IMU is also used in algorithms that cross-compare multiple ways of determining position and then assign a certainty to the overall localization estimate. Without the IMU, it may be impossible to even know when the location error from a LiDAR solution has degraded.

Figure 2: During turns, an accurate IMU plays a key role in lane keeping.
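The 30 cm over 10 seconds figure can be sanity-checked with the standard error-growth relation for an unaided IMU: an uncorrected accelerometer bias b produces a position error of roughly 0.5·b·t² after time t (double integration of a constant bias). The sketch below assumes a 0.5 milli-g bias, roughly the sensor grade needed to stay near that bound.

```python
# Position error from an uncorrected accelerometer bias: e(t) ~= 0.5 * b * t^2.
# The 0.5 milli-g bias is an assumption used only to illustrate the magnitudes.
G = 9.81
bias = 0.0005 * G                 # 0.5 milli-g expressed in m/s^2
for t in (1, 5, 10):
    print(f"t = {t:2d} s  ->  position error ~ {0.5 * bias * t**2:.3f} m")
```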

#4 LiDAR is Still Expensive
Tesla is famous for its “No LiDAR required” approach to autopilot technology. If you don’t have LiDAR, a good IMU is even more critical, because camera-based localization of the vehicle will have more frequent periods of low accuracy, depending simply on what is in the camera scene or the external lighting conditions. Camera-based localization uses Scale-Invariant Feature Transform (SIFT) feature tracking in the captured images to compute attitude. If the camera is not stereo (often the case), inertial data from the IMU itself is also a core part of the math used to compute position and attitude in the first place.

#5 Compute is Not Free
The powerful combination of high-accuracy LiDAR and high-definition maps is at the core of the most advanced Level 4 self-driving approaches, such as those being tested by Cruise and Waymo. In these systems, LiDAR scans are matched in real time to the HD map using convolutional signal-processing techniques, and based on the match, the precise location and attitude of the vehicle are estimated. This process is computationally expensive. While we all like to believe the cost of compute is vanishingly small, on a vehicle it simply is not that cheap. The more accurately the algorithm knows its initial position and attitude, the less computation is required to compute the best match. In addition, using IMU data reduces the risk of the algorithm getting stuck in a local minimum of the HD map data.

#6 GPS/INS: Making High-Accuracy GPS Work
In today’s production vehicles, GPS systems use low-cost single-frequency receivers, rendering GPS accuracy pretty useless for vehicle automation. However, low-cost multi-frequency, network-corrected GPS is on the way from a wide variety of silicon suppliers. On top of this upcoming silicon, network correction-based solutions such as Real Time Kinematic (RTK) and Precise Point Positioning (PPP) can provide GPS fixes to centimeter-level accuracy under ideal conditions. However, these solutions are very sensitive to bridges, trees, buildings, and other features of the environment. It is well established that the way to overcome this challenge and improve high-accuracy GPS reliability is to use high-accuracy IMU aiding at a low level in the position solution. Such GPS/INS techniques include tightly coupled and ultra-tightly coupled GPS/INS, expected to be available in Q4 2018 for the automotive market.
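Tightly coupled GPS/INS fuses raw GPS observables with the inertial solution and is well beyond a short example, but the flavor of IMU aiding can be shown with a heavily simplified, loosely coupled, one-dimensional sketch: a two-state Kalman filter that predicts with IMU acceleration and corrects with intermittent GPS fixes. All noise levels and the driving profile below are assumptions.

```python
import numpy as np

dt = 0.01                                     # 100 Hz IMU
F = np.array([[1.0, dt], [0.0, 1.0]])         # position/velocity state transition
B = np.array([[0.5 * dt**2], [dt]])           # acceleration input matrix
H = np.array([[1.0, 0.0]])                    # GPS measures position only
Q = np.diag([1e-4, 1e-3])                     # process noise (assumed)
R = np.array([[0.05**2]])                     # 5 cm GPS noise (assumed RTK-grade)

x = np.array([[0.0], [20.0]])                 # start at 0 m, 20 m/s
P = np.eye(2)

rng = np.random.default_rng(1)
true_pos, true_vel = 0.0, 20.0
for k in range(1000):                         # 10 s of driving
    accel_true = -1.0 if k > 500 else 0.0     # brake halfway through
    true_vel += accel_true * dt
    true_pos += true_vel * dt

    accel_meas = accel_true + rng.normal(0.0, 0.05)       # noisy IMU sample
    x = F @ x + B * accel_meas                            # predict with the IMU
    P = F @ P @ F.T + Q

    if k % 100 == 0:                                      # 1 Hz GPS position fix
        z = np.array([[true_pos + rng.normal(0.0, 0.05)]])
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P

print(f"True position {true_pos:.2f} m, fused estimate {x[0, 0]:.2f} m")
```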

#7 Cars Already Need an IMU
It turns out production automobiles already have anywhere from one third of an IMU to a full IMU on board. Vehicle stability systems rely heavily on a Z-axis gyro and lateral X-Y accelerometers. Rollover detection relies on a gyro mounted with its sensitive axis in the direction of travel. These sensors have been part of vehicles' safety systems for over a decade now. The only problem is that the sensor accuracy is typically too low to be of use for the prior six use cases. So why not upgrade the vehicle to a high-accuracy IMU and let it serve those roles as well? The main barrier is cost.

ACEINNA, along with other companies in the industry, is working hard to remove cost barriers in the way of high-accuracy IMUs and the benefits they hold for autonomous vehicles.

Figure 3: ACEINNA’s IMUs are pushing the boundary of price-performance.


Mike Horton is the CTO of ACEINNA, which provides MEMS-based sensing solutions. Horton is responsible for corporate technology strategy and inertial-navigation related technology development. Prior to ACEINNA, he founded Crossbow Technology, a leader in MEMS-based inertial navigation systems and wireless sensor networks, with his advisor the late Dr. Richard Newton, while at UC Berkeley. Crossbow Technology grew to $23M in revenue prior to being sold in two transactions (Moog, Inc and MEMSIC) totaling $50M. In addition to his role at ACEINNA, Horton is active as an angel investor with two Silicon Valley based angel groups—Band of Angels and Sand Hill Angels. He also actively mentors young entrepreneurs in the UC Berkeley research community. He holds more than 15 patents and holds a BSEE and MSEE from UC Berkeley.


Simplifying CAN Automotive Applications with Highly Integrated 8-bit MCUs

Tuesday, April 10th, 2018

Why 8-bit MCUs and CAN form an effective team

Designed for the automotive industry in the mid-1980s, the controller area network (CAN) protocol addressed, and continues to address, the need to reduce wiring complexity (weight, amount, and cost) for data transmission in increasingly interconnected applications.

The advantages of CAN have also been embraced and adopted in other markets including factory automation and medical applications, so extensively that well over 1 billion CAN nodes are shipped worldwide each year. Similarly, over 1 billion 8-bit microcontroller units (MCUs) are shipped annually. While today there is some overlap in these statistics, that should increase considerably in the future.

…these 8-bit MCUs provide an alternative to 16-bit MCUs that are more expensive and more difficult to program.

CAN Continues to Meet Carmaker’s Needs
Traditional CAN communications are event-based, allowing microcontrollers and application-specific integrated circuits (ASICs) to communicate directly with each other without a host computer. Integration by semiconductor companies has greatly added to CAN's cost-effectiveness and compatibility with many automotive systems. Since the early 2000s, 8-bit MCUs have also included the CAN protocol. More recently, an 8-bit MCU design approach, initially introduced in 2015, uses Core Independent Peripherals (CIPs) to allow a new family of 8-bit MCUs to address many system aspects in CAN applications.

In addition to its cost-effectiveness, CAN’s success can be attributed to its:

  • Robustness
  • Reliable data transmission
  • Rather simple implementation

Not surprisingly, 8-bit MCUs also have these same attributes in addition to their cost-effectiveness. So, 8-bit MCUs with CAN are a natural combination to address many automotive networking requirements.

Over the years, CAN has proven to be capable of meeting a variety of control system requirements. As automotive networks increased to require different attributes including time-triggered, fault-tolerant and single-wire implementations, as well as CAN with flexible data rate (CAN FD), CAN specifications expanded. Table 1 shows many of the CAN variations that have occurred since its initial introduction over 30 years ago.

Table 1: CAN adaptations can meet a variety of automotive needs.

For networking sensors and actuators to comfort systems, automotive engineers have used the Local Interconnect Network (LIN) protocol to reduce costs. However, LIN, a single-wire, master-slave network, requires both hardware and software changes from CAN. Some of the newest automotive applications for CAN include access control, battery charging/battery management, and diagnostic equipment. These and other vehicle requirements, especially those that require access to data from another CAN control system, are driving the use of 8-bit MCUs with CAN. Figure 1 shows the easy addition of an 8-bit MCU/CAN node to an existing CAN bus.

Figure 1: Different CAN implementations can coexist and add to the flexibility of the CAN bus.

Solving Low-Cost Networking Requirements Using an 8-Bit MCU with CAN
While connecting to the CAN bus is the minimum capability that system designers need, added peripherals that specifically address other system requirements simplify the designer’s task. Those system tasks could include sensing a parameter or two for control purposes, moving a motor, activating a solenoid, or providing some other functions.

The CIP approach can reduce software complexity and deliver faster response times at lower clock speeds while using less power. Broad system categories for CIPs in Microchip’s PIC18 K83 family include:

  • Intelligent analog (including sensor interface)
  • Waveform control
  • Timing and measurements
  • Logic and math
  • Safety and monitoring
  • Communications
  • Low power and system flexibility

Within these categories, specific peripherals include:

  • Cyclic Redundancy Check (CRC) with memory scan to ensure the integrity of non-volatile memory (illustrated in the sketch following this list)
  • Direct Memory Access (DMA) to enable data transfers between memory and peripherals without CPU involvement
  • Windowed Watchdog Timer (WWDT) for triggering system resets
  • A 12-bit Analog-to-Digital Converter with Computation (ADC2) for automating analog signal analysis for real-time system response
  • Complementary Waveform Generator (CWG) for enabling high-efficiency synchronous switching for motor control
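
The CRC-with-memory-scan peripheral above computes a checksum over program memory in hardware and compares it against a reference value. The Python sketch below shows the equivalent check in software; the CRC-16-CCITT polynomial and the stand-in flash image are assumptions for illustration, not the specific configuration of the PIC18 peripheral.

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16-CCITT (polynomial 0x1021), the kind of scan a CRC peripheral runs."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

flash_image = bytes(range(256)) * 16           # stand-in for program memory contents
stored_crc = crc16_ccitt(flash_image)          # reference value written at programming time

# At runtime the memory scan recomputes the CRC and flags any mismatch.
assert crc16_ccitt(flash_image) == stored_crc
corrupted = bytearray(flash_image)
corrupted[100] ^= 0x01                         # simulate a single-bit flash corruption
print("Corruption detected:", crc16_ccitt(bytes(corrupted)) != stored_crc)
```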

In addition to working with CAN 2.0B, the integrated CAN controller is fully backwards compatible with previous CAN modules (CAN 1.2 and CAN 2.0A). The products' capabilities include the Memory Access Partition (MAP) to support designers in data protection and bootloader applications. The Device Information Area (DIA) provides a dedicated memory space for factory-programmed device ID and peripheral calibration values.

Since communications are a primary goal for CAN nodes, the 8-bit MCUs have improved serial communications, including UART with support for Asynchronous and LIN protocols as well as higher-speed, standalone I2C and SPI serial communication interfaces. Table 2 shows the 15 CIPs and how they address specific system requirements.

Table 2: Core Independent Peripherals in PIC18 K83 family address several system requirements.

Thanks to these on-chip structures that were not thought of or implemented in 8-bit MCUs in the past, today’s 8-bit MCUs can perform quite differently than many designers have come to expect and deliver much more than MCUs designed over a decade ago.

Programming an 8-bit MCU is simple, and with CAN plus CIPs it is even easier. When they provide sufficient processing power, especially for remote nodes, these 8-bit MCUs provide an alternative to 16-bit MCUs that are more expensive and more difficult to program. With CIPs, even more processing power is available, enabling more 8-bit MCU options.

The highly configurable on-chip hardware modules handle repetitive embedded tasks more efficiently and deterministically. Because CIPs operate independently of the core, even if the MCU gets caught in a loop, a device with CIPs can continue operations outside of the core.

With the newest 8-bit MCUs combining CAN, CIPs, and LIN, network designers now have more flexibility and options for implementing CAN and LIN communications. In fact, some typical 8-bit MCU LIN applications are now potential CAN applications. For example, if a module needs to be aware of other data on the network, such as vehicle speed, CAN may be a better choice, or at least an option alongside LIN. This can be useful for windshield wipers that change their speed based on the vehicle's speed, avoiding a CAN-to-LIN gateway. In addition, the system-level CIPs may avoid the need for an additional ASIC or two, as shown in Figure 2.
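As a concrete illustration of "being aware of other data on the network," the sketch below decodes a vehicle-speed signal from a received CAN frame and derives a wiper interval from it. The message ID, byte layout, scaling, and thresholds are all hypothetical; real signal definitions come from the OEM's network database, not from the CAN standard itself.

```python
import struct

SPEED_MSG_ID = 0x3E9          # hypothetical ID of a broadcast "vehicle speed" frame

def decode_speed_kph(can_id: int, data: bytes):
    """Extract speed from an assumed layout: bytes 0-1, big-endian, 0.01 km/h per bit."""
    if can_id != SPEED_MSG_ID or len(data) < 2:
        return None
    raw, = struct.unpack_from(">H", data, 0)
    return raw * 0.01

def wiper_interval_ms(speed_kph: float) -> int:
    """Toy policy: wipe faster as the vehicle speeds up (thresholds are assumptions)."""
    return 2000 if speed_kph < 30 else 1000 if speed_kph < 80 else 500

frame = (SPEED_MSG_ID, bytes([0x17, 0x70, 0, 0, 0, 0, 0, 0]))   # 0x1770 = 6000 -> 60.0 km/h
speed = decode_speed_kph(*frame)
print(speed, "km/h ->", wiper_interval_ms(speed), "ms between wipes")
```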

The same PWM and complementary waveform generator CIPs have been used for years to do fairly complex, multi-color LED mood lighting in vehicles. Those drivers were connected to a LIN bus because the MCUs did not have CAN. The combination of that functionality in a cost-effective 8-bit MCU with CAN could provide flexibility and a simplified alternate approach to the design.

While most 8-bit MCUs on the market rely heavily on the core to process their peripherals' functions, system design possibilities that CIPs can handle without significantly taxing the CPU include a precision interface to various sensors, a high-power LED driver, and/or a reasonably complex level of motor control.

To determine which of these and other possibilities are appropriate for a specific network, a variety of development tools are available. For example, the MPLAB® Code Configurator (MCC) is a free software plug-in that provides a graphical interface to configure peripherals and functions specific to the application. With this tool, system design engineers can easily configure the hardware-based peripherals—rather than writing and validating an entire software routine—to accomplish a specific task.

Developing a CAN-Do Attitude
For automotive and industrial applications, system designers certainly have several choices for bus architectures. CAN is a widely accepted bus, and certainly when additional sensing and/or control are required on an existing network, an MCU with additional functions to address different system requirements makes CAN an excellent option. With its Core Independent Peripherals, the 8-bit MCU/CAN family allows CAN expansion into more cost-sensitive nodes on the network.

The new 8-bit MCUs with CAN and CIPs address emerging automotive network applications that require flexible, cost-effective, simple, reliable, and robust data transmission. Also present are the increased performance and system support that access control, battery charging/battery management, and diagnostic equipment demand.

References

  1. PIC18 K83 Product Family: http://www.microchip.com/promo/pic18f-k83
  2. MPLAB Code Configurator (MCC): http://www.microchip.com/mplab/mplab-code-configurator

Edwin Romero began working in the semiconductor industry in 2006. Prior to joining Microchip Technology in 2012, he worked in various roles at ON Semiconductor, including application engineer and positions in technical sales and marketing. As a product marketing manager within Microchip's MCU8 Division, Romero is currently responsible for product definition and promotion of Microchip's 8-bit PIC® microcontrollers. He has a Bachelor of Science in Electrical Engineering (BSEE) degree from Arizona State University.


Engineering Automated Driving Systems for Safety

Friday, March 9th, 2018

Modern autonomous vehicle designs benefit from safety-critical electronics built on proven, trusted military and aerospace systems engineering standards and practices.

Autonomous vehicles (AVs) of all types—including self-driving cars on city streets, trucks on military bases and the open road, and various unmanned platforms on land, at sea, and in air—are poised to transform transportation. The automotive market is on the cusp of significant change, enabled by innovative automated driving system (ADS) technologies; yet, widespread deployment hinges almost entirely on safety.

A single ADS mishap can have far-reaching implications, adversely affecting the acceptance and adoption of autonomous vehicles worldwide. Destruction of property and loss of life caused by any manner of ADS failure will not be tolerated by the public, regulatory bodies, transportation officials, or lawmakers, all of whom seek assurances from automotive manufacturers that the vehicles, including all essential systems, can be trusted to perform reliably no matter what they might encounter on the road.

Figure 1: Automated Driving System technologies must work in environments from highways to city streets to the roughest back roads in all kinds of weather, a strong argument for relying on computer and electronics systems developed and designed for ruggedness.

Predictable, repeatable performance over time is integral to safety, which in turn builds trust. A single unfortunate event could slow, set back, or even bring an abrupt end to the advancement, adoption, and acceptance of autonomous driving systems and threaten the entire global autonomous vehicle industry. Automated driving system failures are not an option and must be avoided, through the use of systems specifically designed to be durable, offer high availability, and perform reliably in various operational environments throughout their life cycles.

Best Practices and Guidance
Transportation safety experts are encouraging autonomous vehicle and automated driving system manufacturers to benefit from lessons learned in the military and aerospace market. Best practices include the adoption of industry standards and systems engineering practices successfully employed for decades in aerospace and defense programs, according to the latest guidance from both the U.S. Department of Transportation (DoT) and the National Highway Traffic Safety Administration (NHTSA).

DoT officials issued Automated Driving Systems 2.0: A Vision for Safety, voluntary guidance that encourages best practices and prioritizes safety to help pave the way for the safe deployment of advanced driver assistance technologies. In it, officials encourage technology companies working on ADS to adopt guidance, best practices, design principles, and standards from industries such as aviation, space, and the military.

Military and aerospace vehicles and vehicle-based electronics are renowned the world over for being built to work reliably, without fail, over a long operational life in even the most challenging applications, rough terrains, and extreme environments. For example, the military continues to rely upon the Boeing B-52 Bomber military aircraft, currently in its sixth decade of service and expected to serve through 2045, while NASA’s Voyager 1 spacecraft is still receiving commands and communicating data, more than 40 years into its mission, from the harsh, radiation-rich environment of space.

Operational Environment
Transportation safety specialists also stress the importance of designing and validating ADS specifically for the entire operational design domain, defined as the specific environments in which the automated system is designed to operate, including roadway types, speed range, and environmental conditions, such as weather.

Military and aerospace organizations, including the Department of Defense (DoD) and NASA, learned decades ago that traditional computer and electronics systems would soon become ineffective, work only sporadically, or completely fail to function in the field. Consumer- and enterprise-level systems are largely designed to operate in climate-controlled, protected office environments and, when deployed in the field, typically cannot withstand and will succumb to various environmental elements, such as: shock and vibration, drops, hot and cold temperature extremes, dust and dirt, water and humidity, and snow and ice.

These and other environmental factors threaten computer and electronics reliability, which is inextricably linked to safety. Mission- and safety-critical projects, therefore, require computer and electronics systems to be protected from the elements. The most effective, efficient, and economical way to ensure high reliability, particularly for an ADS expected to function daily on everything from highways to city streets to the roughest back roads in all kinds of weather, is arguably to use computer and electronics systems that are built rugged from design and development, through meticulous manufacturing and testing, to deployment in modern autonomous vehicles of all types and in various locales.

Autonomous Innovations
Today’s savvy automotive and automated driving system manufacturers recognize the value that long-time, trusted military and aerospace suppliers bring to the AV market, including: field-tested and proven technologies, standards and requirements compliance, experience, and expertise. Many are, therefore, proactively seeking out and partnering with technology companies that have extensive experience delivering rugged computer and electronics systems designed to meet strict industry standards and operational requirements and built to last in a variety of demanding applications and challenging environments.

Technology leader Intel Corporation partnered with Iowa-based Crystal Group Inc.—designer/manufacturer of rugged computer hardware, member of the Intel Internet of Things (IoT) Solutions Alliance, and award-winning Intel Technology Platinum Provider—to design, develop, manufacture, and test a robust, rugged, and reliable high-performance computer crafted specifically to speed time to market of autonomous vehicles and automated driving systems. Crystal Group’s technical staff applied decades of experience engineering systems to meet strict military small size, weight, and power (SWaP) requirements to pack the processing power and data storage capacity of a high-end server in a 3U high-performance embedded computer (HPEC) while reducing power consumption and limiting system temperature rise.

That award-winning AV computer designed in close collaboration with Intel formed the basis of the new Crystal Group Rugged Autonomous Computer Equipment (RACE™) line that is helping to accelerate AV and ADS development, testing, and deployment to bring innovations to market faster. The Crystal Group RACE0161 Rugged Server provides automaker OEMs with compute power, data-handling and Internet of Things (IoT) capabilities, and data storage capacity in a compact, rugged solution capable of withstanding harsh environmental conditions, including potholes, washboard roads, temperature extremes, and collisions that are likely to cause traditional systems to fail.

Specifically designed for unmanned and autonomous driving vehicles, the Crystal Group RACE0161 provides the horsepower AV and ADS projects need, with dual Intel® Xeon® Scalable Processors (Skylake), delivering a unique, turnkey AV/ADS computer with industry-leading compute and IoT capabilities. The rugged server combines robust I/O, multiple GPU capacity, dual Intel Xeon CPUs, sophisticated thermal management, and other high-quality components stabilized in a rugged aluminum enclosure measuring just 6.5 x 14.1 x 15.6 inches and weighing 30 to 40 pounds. Providing superior performance per watt, the high-performance server can operate off a 12-volt (12V) car battery without AC/DC power conversion, avoiding costly and time-consuming modifications to convert the vehicle's existing 12V power system to 24V.

Crystal Group’s RACE product line, including the RACE0161 and vehicle-specific development kits, is designed to deliver speed, agility, and quality in a single, turnkey package, helping AV and ADS manufacturers take advantage of the latest technology advances while speeding the pace of development. Crystal Group’s RACE0161 is garnering global industry attention for its potential to put AV and ADS projects on the fast track by speeding time to market, ahead of competitors, with the added confidence that comes with a military-grade, rugged, reliable solution.

High Reliability for Safety
“From reducing crash-related deaths and injuries, to improving access to transportation, to reducing traffic congestion and vehicle emissions, automated vehicles hold significant potential to increase productivity and improve the quality of life for millions of people,” A Vision for Safety 2.0 explains. Motor vehicle-related crashes on U.S. highways claimed 37,461 lives in 2016, DoT research indicates, noting that 94 percent of serious crashes are due to dangerous choices or errors people make behind the wheel.

“Technology can save lives,” transportation safety officials affirm, with the help of highly reliable, fail-safe automated driving systems tailored to fit the application and the environment. “Thanks to a convergence of technological advances,” A Vision for Safety 2.0 reads, “the promise of safer automated driving systems is closer to becoming a reality.”

Self-driving cars bring the promise of greater energy conservation, lower emissions, added convenience, and roads that are both safer and less congested. To fully realize this vision, automotive manufacturers and autonomous vehicle makers are opting to benefit from established standards and lessons learned in mission- and safety-critical military and aerospace programs. Through collaboration with experienced, trusted industry partners offering proven, field-tested, military-grade products, services, and technologies, manufacturers of autonomous vehicles and automated driving systems can bring their innovations to market faster, with the added confidence that comes with rugged, reliable systems built to last in even the most extreme environments.


Chip Thurston is Chief Architect & Technical Director at Crystal Group Inc. Thurston joined Crystal Group in late 2000 and has held various leadership roles in engineering. Thurston pioneered advanced rugged technology for many major computing systems that are currently being used by NASA, Intel®, the U.S. Armed Forces, and major automotive manufacturers. A native of Cedar Falls, Iowa, Thurston holds an AA in MIS/CIS and a BA in Business Management.


Automotive Ethernet: The Future of Autonomous Vehicles

Tuesday, February 27th, 2018

Understanding how the new Automotive PHYs differ from other BASE-T PHYs matters. Here’s why:

Introduction
From a broad market perspective, Automotive Ethernet is a joint effort of the Automotive and Networking industries to modernize, simplify, and expand the capabilities of vehicles by improving data communications inside vehicles. The key factors that led the Automotive industry to start this effort are the need for higher bandwidth, shifting of architectures to a centralized backbone, and guaranteed latency. Reducing the complexity of today’s vehicle network infrastructure, which commonly has up to eight different networks, is also a key driver. The complexity of handling so many different networks has a huge impact on the cost of today’s vehicles. Not only does the need for specialized skill sets for each networking type add expense, so does the difficulty of managing legacy software/firmware, with resulting incompatibility and inability to reuse parts. Because these improvements required a dedicated physical layer network, the industry widely recognized the strength of Ethernet as an ultra-compatible and flexible network that could handle the requirements of the Automotive industry. In 2014, the IEEE and the Automotive industry began efforts to make Automotive Ethernet a reality.

What Makes it ‘Automotive’?
‘Ethernet’ is one of the most ubiquitous LAN communication technologies in the world, but how an Ethernet network is implemented and physically represented can vary depending on the application. The IEEE 802.3 Ethernet standard has more than 100 clauses, each defining different protocols or physical layers (PHYs) designed to cater to the many industries that have embraced Ethernet over the last 40+ years. When asked to describe ‘Ethernet,’ most people probably picture the plug on the back of their desktop PC and the Cat 5e network cable that connects to it. This form of Ethernet is one of several BASE-T PHYs that have been installed in office buildings for the better part of 30 years. These technologies all use the RJ-45 form factor (the familiar computer plug), are designed to operate on Category cabling up to 100 meters, and can transfer data at up to 10 Gbps (though most installations run at 1 Gbps or slower). There are also the high-speed Ethernet implementations used in server farms that can transfer up to 100 Gbps but have a reach of less than 10 meters over twinaxial-based copper cable and use different connectors. However, all Ethernet system models share the same media access control (MAC) definition; only the physical layer and transmission medium differ. So, all upper-layer functionality is agnostic to the specific application being implemented. Another way to say this is that all Ethernet, regardless of the physical medium, uses the same frame format. This is one of the key benefits of Ethernet.
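That shared frame format can be made concrete with a short Python sketch that assembles a raw Ethernet II frame: the same destination/source/EtherType header, padded payload, and CRC-32 frame check sequence apply whether the PHY underneath is office 1000BASE-T or automotive 100BASE-T1. The addresses and EtherType below are placeholders, and the FCS byte ordering is a simplification.

```python
import struct
import zlib

def build_ethernet_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    """Assemble a basic Ethernet II frame: header + padded payload + 32-bit FCS."""
    if len(payload) < 46:                      # pad up to the 64-byte minimum frame size
        payload = payload + bytes(46 - len(payload))
    header = dst + src + struct.pack("!H", ethertype)
    fcs = zlib.crc32(header + payload)         # same CRC-32 used for the Ethernet FCS
    return header + payload + struct.pack("<I", fcs)

# Placeholder addresses and an experimental EtherType, purely for illustration.
frame = build_ethernet_frame(
    dst=bytes.fromhex("ffffffffffff"),
    src=bytes.fromhex("0242ac110002"),
    ethertype=0x88B5,
    payload=b"sensor data",
)
print(len(frame), "bytes:", frame[:20].hex())
```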

‘Automotive Ethernet’ typically refers to one of two IEEE PHY definitions: either IEEE 802.3bw, the 100 Mbps PHY, or IEEE 802.3bp, the 1 Gbps PHY, specified in Clause 96 and Clause 97 respectively. While these PHY types are commonly referred to as 100 Mbps and 1 Gbps Automotive Ethernet, their official IEEE PHY names are 100BASE-T1 and 1000BASE-T1: ‘BASE-T’ meaning a baseband technology that operates over a copper twisted-pair medium, and ‘1’ specifying the number of differential pairs needed within the copper link segment. Both were developed within the IEEE at nearly the same time, and the use case for each is quite similar, so they share several design features. These include a 15-meter reach, full-duplex operation over a point-to-point single unshielded twisted pair (UTP) architecture, and three-level pulse amplitude modulation (PAM3) line coding.
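Why PAM3? Two ternary symbols can represent 3² = 9 combinations, enough to carry three bits (2³ = 8), so the symbol rate on the wire can be two-thirds of the bit rate; that is how 100 Mbps fits into a 66.67 MBd signal. The sketch below uses a generic 3-bits-to-2-symbols (3B2T) table to make that point; the actual Clause 96 PCS mapping and scrambling are more involved, so this table is illustrative only.

```python
from itertools import product

# Generic 3B2T mapping: 3 bits -> 2 PAM3 symbols drawn from {-1, 0, +1}.
# This is NOT the Clause 96 table, just an illustration of the symbol-rate saving.
PAIRS = list(product((-1, 0, +1), repeat=2))[:8]
ENCODE = {format(i, "03b"): PAIRS[i] for i in range(8)}
DECODE = {pair: bits for bits, pair in ENCODE.items()}

def encode(bits: str):
    """Map a bit string (length divisible by 3) onto a PAM3 symbol sequence."""
    assert len(bits) % 3 == 0
    symbols = []
    for i in range(0, len(bits), 3):
        symbols.extend(ENCODE[bits[i:i + 3]])
    return symbols

symbols = encode("101001110")
print(symbols)                                  # 6 PAM3 symbols carry 9 bits
print("".join(DECODE[tuple(symbols[i:i + 2])] for i in range(0, len(symbols), 2)))
```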

But why not just reuse the existing Ethernet definitions? Each protocol and PHY defined in the IEEE 802.3 standard is developed such that its implementation can be flexible, to avoid limiting the technology to a specific application space. That said, there also needs to be a starting point, with use cases proving broad market potential, a distinct identity, and technical feasibility. The Automotive industry determined that existing Ethernet technologies didn’t meet all its needs as a cost-competitive option. First, cars are not 100 meters long, so the traditional BASE-T PHYs built into our laptops were over-designed in terms of reach for this application. Second, the environmentally conscious climate of the world demands fuel-efficient vehicles, and one of the simplest ways to improve fuel economy is to reduce a vehicle’s weight. Much to the IEEE community’s surprise, the third heaviest component in a car is the cable harness (the heaviest is the engine, and the second heaviest is the chassis). The Cat 5e cable used with BASE-T PHYs has four twisted pairs (eight wires), so one of the requests was to define the Ethernet network over a single twisted pair, leading to a weight (and therefore cost) reduction compared to Cat 5e cabling (Figure 1). Lastly, the environment inside a vehicle is drastically different from the office or home. Not only do the electronics and cabling need to withstand dirt, oil, grease, freezing temperatures, and blistering heat, but there are also electromagnetic interference (EMI) concerns, so that other circuitry in the vehicle isn’t compromised by radio frequency (RF) radiation from the Ethernet network. This is very different from the typical office and data-center environment considered by earlier Ethernet standards.

Figure 1: Example of automotive wiring system inside a car. (Image credit: Chris DiMinico, MC Communications)

Conformance Testing for Automotive Ethernet PHYs
As if the differences between the Automotive market and other Ethernet markets weren’t apparent by now, there is also literally no room for error. In the office, it’s a slight annoyance if a data packet is received in error; in a self-driving car it could mean the difference between stopping at a traffic light and running a red light into oncoming traffic. With fully autonomous, self-driving vehicles all but a reality, Automotive OEMs have countless safety regulations they need to prove can be maintained while a person is not in direct control of the vehicle. So, conformance test specifications have been defined for every aspect of the vehicle’s performance, including the Ethernet components that are installed.

There are three functions in the IEEE BASE-T1 PHY architecture (Figure 2) that have distinct capabilities and encompass all of the mandatory conformance requirements: physical medium attachment (PMA), physical coding sublayer (PCS), and PHY Control (function within the PMA).

Figure 2: 100BASE-T1 PHY Architecture (Image Credit: IEEE 802.3bw-2015 Standard)

Physical Medium Attachment Conformance Testing
The PMA is the PHY sublayer that drives the actual data signal onto the UTP cabling, directly manipulating the transmitted voltage. For this reason, most of the conformance requirements specified in the PMA test specification are related to voltage amplitude, transmit jitter, return loss, etc. The most difficult test to accurately set up is the Transmitter Distortion test. This test metric first appeared in the original Gigabit Ethernet (IEEE 802.3 Clause 40) standard as a time domain approach to quantify the linearity of an Ethernet transmitter. Since the PHY architecture is point-to-point full duplex operation, both 100BASE-T1 nodes that are linked together are transmitting and receiving on the same wires simultaneously. Therefore, it is important that each transmit output amplifier is operating in a linear state and can suppress any non-linear by-products of the two signals being summed on the copper UTP. To measure this, the test method described in Clause 96 defines a sine wave of specific amplitude and frequency to be injected into the transmit path of the device under test (DUT). The DUT is simultaneously transmitting a pseudorandom test pattern. The test pattern and the sine wave are summed and measured on an oscilloscope by probing the transmit path. The actual transmitter distortion value to determine conformance is a product of an IEEE-defined Matlab script that downloads the oscilloscope capture and analyzes the measured waveform.
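The normative analysis is the IEEE-supplied Matlab script, but the general idea can be sketched in Python: reconstruct and remove the known transmit pattern, fit the disturber sine at its known frequency by least squares, remove it too, and examine the residual. Every value below (sample rate, disturber frequency and amplitude, the stand-in PAM3 pattern, the noise) is assumed purely for illustration.

```python
import numpy as np

fs = 2.0e9                       # oscilloscope sample rate (assumed)
f_dist = 11.25e6                 # injected disturber frequency (assumed)
t = np.arange(20_000) / fs

rng = np.random.default_rng(0)
tx_ideal = rng.choice([-1.0, 0.0, 1.0], size=t.size)    # stand-in for the PAM3 test pattern
captured = (tx_ideal
            + 1.8 * np.sin(2 * np.pi * f_dist * t)      # summed disturber tone
            + 0.01 * rng.standard_normal(t.size))       # stand-in for distortion/noise

# Remove the known transmit pattern, then least-squares fit and remove the disturber.
A = np.column_stack([np.sin(2 * np.pi * f_dist * t), np.cos(2 * np.pi * f_dist * t)])
coeffs, *_ = np.linalg.lstsq(A, captured - tx_ideal, rcond=None)
residual = captured - tx_ideal - A @ coeffs

print(f"RMS residual after removing pattern and disturber: {np.sqrt(np.mean(residual**2)):.4f}")
```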

The test equipment needed for the transmitter distortion test setup (Figure 3) consists of a real-time oscilloscope, a differential probe, and a waveform generator. All of these are common in test houses and used for most time-domain test cases. However, the injected sine wave needs to be synchronized with both the oscilloscope sampling clock and the DUT transmit clock to guarantee that the clock domains aren’t misaligned. Not doing so will unintentionally add distortion into the system and result in inaccurate distortion values. There are two commonly used methods to properly frequency-lock all the clocks within the test setup: (1) synchronize the test equipment to the DUT’s 66.67 MHz transmit clock (TX_TCLK), or (2) provide the DUT an external reference clock generated from the test equipment. In either scenario the tester needs access to the TX_TCLK and the ability to probe it, or the option to provide an external clock source to the silicon. This requires the silicon vendor to provide such hardware features to properly characterize a PHY’s transmitter distortion, which isn’t always the case, and the IEEE standard does not define a test method for this scenario. To resolve this, some T&M vendors have recently implemented clock and data recovery (CDR) functionality into their oscilloscopes, which removes the need for direct access to the DUT’s TX_TCLK.

Figure 3: 100BASE-T1 PMA Transmitter Distortion Test Setup

Physical Coding Sublayer and PHY Control Conformance Testing
The PCS and PHY Control test specifications are less straightforward. These sublayers are the digital logic within the PHY and are typically governed by transmitter and receiver state machines that define specific state behavior and timing requirements for each operation. Rather than calling for oscilloscope measurement of standardized test patterns transmitted by the PHY, these test cases require that the DUT successfully achieve a link with a test station that behaves as if it is a PHY. Meeting this requirement means that the test station must be able to encode a specific test sequence into the PAM3 signalling. What’s more, the test station must also decode the PAM3 signal the DUT transmits. To achieve these goals, the University of New Hampshire InterOperability Laboratory (UNH-IOL) created an FPGA-based test tool (Figure 4) that performs the necessary conversion to PAM3 signalling as well as bit-level error injection to fully stress the DUT’s receiver logic.

Figure 4: UNH-IOL 100BASE-T1 PCS and PHY Control Conformance Test Tool

Ethernet PHY receiver definitions specify many requirements, but how the functionality is implemented is left to the designer, so the test sequences necessary to verify conformance can vary among silicon companies. One of the least consistent PHY parameters is the time necessary to achieve a link. Typically, the number of received idle symbols the DUT needs to reliably recover the remote PHY's clock governs that time—making test tool flexibility to accommodate any DUT crucial. Additionally, many test cases within these conformance test specifications apply negative test conditions, intentionally injecting errors into the data stream to observe how the PHY behaves. Because of this, some test cases require bit errors, erroneous PAM3 symbols, or Ethernet packets with incorrect CRC values. The test setup used by UNH-IOL (Figure 5) uses a PC with custom software to dynamically control the test sequences and accommodate any silicon design.
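
As an illustration of one such negative test, the sketch below appends a CRC-32 frame check sequence to a placeholder payload and then deliberately flips one bit, so a conformant receiver should flag the frame as errored. The payload, bit position, and byte ordering are invented for the example and are not UNH-IOL test vectors.

```python
# Hedged sketch of intentional CRC corruption for negative testing.
# Payload contents and FCS byte order are illustrative only.
import zlib

def frame_with_fcs(payload: bytes) -> bytes:
    fcs = zlib.crc32(payload) & 0xFFFFFFFF        # Ethernet uses CRC-32
    return payload + fcs.to_bytes(4, "little")

def corrupt_bit(frame: bytes, bit_index: int) -> bytes:
    data = bytearray(frame)
    data[bit_index // 8] ^= 1 << (bit_index % 8)  # flip a single bit
    return bytes(data)

def fcs_ok(frame: bytes) -> bool:
    payload, fcs = frame[:-4], int.from_bytes(frame[-4:], "little")
    return (zlib.crc32(payload) & 0xFFFFFFFF) == fcs

good = frame_with_fcs(b"\x55" * 60)   # dummy 60-byte payload
bad = corrupt_bit(good, 100)          # flip one payload bit
print(fcs_ok(good), fcs_ok(bad))      # True False
```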

Figure 5: 100BASE-T1 PCS and PHY Control Test Setup

Conclusion
While the IEEE 802.3bw and IEEE 802.3bp PHY definitions may seem similar to other BASE-T PHYs, many environmental considerations and specific use-case data were used to create these unique Ethernet technologies within the IEEE standard. Test specifications were created specifically for these new Automotive PHYs, and they require specialized test tools and attention to test setup details not considered in previous Ethernet conformance testing.


Curtis Donahue is the Senior Manager of Ethernet Technologies and manages the Automotive Ethernet Test Group at the University of New Hampshire InterOperability Laboratory (UNH-IOL). His main focus has been the development of test setups for physical layer conformance testing, and their respective test procedures, for High Speed Ethernet and Automotive Ethernet applications.

Donahue holds a Bachelor of Science in Electrical Engineering from the University of New Hampshire (UNH), Durham, and is currently pursuing his Master's in Electrical Engineering at UNH.

 

WYSIWYG for the Automotive AR Era: Q&A with Mike Firth, Texas Instruments

Wednesday, January 3rd, 2018

Why HUD system imagery and drivers stand to gain

Lynnette Reese, Embedded Systems Engineering

Editor’s Note: Automotive Head-Up Displays (HUDs) enable drivers to see critical information in front of their eyes so they no longer need to glance down. Augmented Reality (AR) is coming to automotive HUDs using technology that’s faced challenges in the harsh environment of the automobile. Thermal management has been an especially big challenge, since electronics are expected to start up and function properly from the instant the car is started. Texas Instruments (TI) has made improvements in HUD technology that overcome thermal management issues, double the Field of View (FOV), and provide drivers with image depth and vitality where AR can flourish.

TI marketing manager for automotive DLP Products Mike Firth took a moment to answer some questions for Embedded Systems Engineering on what makes a much-improved HUD experience possible.

Lynnette Reese, Embedded Systems Engineering (LR): Can you tell us a bit about the technology that Texas Instruments provides?

Mike Firth, Texas Instruments

Mike Firth, Texas Instruments: The technology powering these HUD interior automotive projection systems is the same basic technology that you find in corporate and educational projectors as well as digital cinema theatres but with a bit more to withstand the automotive venue. The DLP3030-Q1 has more than 400,000 individual micromirrors that switch on and off at extremely high rates. This fast switching is what enables the clear and bright imagery, high color saturation, and faithful color representation. Fundamentally, there are some unique features that enable the DLP3030-Q1 HUD chipset to address augmented reality (AR) in the automotive market, and they are the quality of the image that the chipset can produce, image brightness, and the ability of the chipset to withstand the thermal challenges that heavy solar loads—such as those found in a harsh automotive setting—pose. The digital micromirror device (DMD) automotive operating temperature range is from -40 ºC to +105 ºC, and the DLP technology performance does not deteriorate. In short, the DLP3030-Q1 chipset supports AR with amazing imagery, color, and very bright displays.

 

Figure 1: Firth points out that the Texas Instruments DLP3030-Q1 chipset is designed and optimized for a real-time augmented reality experience.

Digital imagery (Figure 1) is positioned in the driver’s field of view (FOV). You can see that it is not just a display. Instead, we are interacting with the driver’s FOV, the environment, and the objects in the environment, marking the distance between the driver and the next car. The red barrier on the right indicates potential danger. The system overlays digital information onto the windshield, interacting in real time with the world as the driver sees it.

What makes this next generation DLP 3030-Q1 chipset special is the solar load performance, accurate color reproduction, and brightness. And not to be overlooked is the decreased package size that enables smaller HUDs while increasing the overall performance. A new ceramic pin grid array package (CPGA) has reduced the overall footprint by 65 percent versus the previous generation.

LR: Are you saying that there’s no compromise in display quality in an automotive environment at all? What if the driver puts on polarized sunglasses? Won’t that make the HUD disappear?

Mike Firth, Texas Instruments: You see the same color, brightness, and contrast across that whole temperature range, enabling clear imagery in all types of conditions—images are visible regardless of temperature and polarization. And yes, in a typical HUD when the driver puts polarized sunglasses on, the image disappears because creating the image demands polarized light. With the DLP system images remain visible even when the driver wears polarized sunglasses.

LR: What makes an AR HUD any better than any other automotive HUD?

Mike Firth, Texas Instruments: To this point, a HUD has been primarily a display. It has not necessarily had the means to float imagery far out over the road in the driver’s field of view, and both its colors and its information have remained basic.

However, as we transition to AR HUDs, as seen in Figure 2, digital information is overlaid completely within the driver’s field of view, at varied distances away from the driver. The distance separation and the red warning bars are quite far out.

 

Figure 2: Moving to Augmented Reality Head-Up Displays

The virtual image distance, which is the measurement of how far from the driver’s eyes the images appear to be floating or resting, is typically somewhere in the two- to twenty-meter range. Today it is probably in the two- to two-and-a-half-meter range, and the HUD essentially acts just as a secondary display. As the move to augmented reality takes place, you’ll start seeing 7.5-, 10-, 15-, and 20-meter virtual image distances, allowing those images to be projected farther.

So, in this case colors, brightness, and the field of view increase. While it’s great to have a wider FOV, you need more brightness to power a larger FOV. You need a technology that is very bright and very efficient at providing the light and accurate colors to enable that larger FOV. Part of the answer lies in the true augmented reality functionality that the DLP employs.


Figure 3: As virtual image distance grows, it opens up that canvas in which you can project images and interact with the driver’s field of view.

LR: Can you explain what you mean by “true augmented reality functionality”?

Mike Firth, Texas Instruments: Sure. It indicates how interactions occur and where that digital imagery can be projected (Figure 3). If you start with a 2.5-meter virtual image distance and, say, a five-degree field of view, shown in red in Figure 3, you see that the image floats just over the hood of the car. You don’t have much field of view, so the images are not very large. You can’t interact with a whole lot of the driver’s environment, but as you start to increase the virtual image distance and the field of view with this technology, you can see that you can start to interact with the cars ahead, sides of the street, turn indications, and so forth.
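
A rough back-of-the-envelope check (an editorial illustration, not a TI specification) shows how quickly that usable canvas grows: the width of the virtual image plane scales with both virtual image distance and field of view.

```python
# Approximate width of the virtual image at a given distance and FOV.
# The numbers match the ranges discussed above and are illustrative only.
import math

def canvas_width_m(vid_m, fov_deg):
    return 2 * vid_m * math.tan(math.radians(fov_deg) / 2)

print(round(canvas_width_m(2.5, 5), 2))    # today's HUD: about 0.22 m wide
print(round(canvas_width_m(7.5, 10), 2))   # AR HUD: about 1.31 m wide
```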

 

Figure 4: HUD optics magnify solar load.

LR: No pun intended, but have there been any significant road blocks in designing AR for automotive?

Mike Firth, Texas Instruments: One of the primary challenges in designing HUDs is related to the solar load (Figure 4), which is magnified by the HUD’s optics. That effect puts a whole lot of thermal energy on a very small area, causing considerable challenges to thermal load management. If not managed properly the amount of energy projected onto a very small area will cause significant damage to the imager.

LR: It sounds like part of the challenge includes optics. Could you tell us why, and go into more detail on why thermal management is a problem?

Mike Firth, Texas Instruments: Thermal load management is already challenging for today’s HUDs, and that’s with virtual image distances of just 2-2.5 meters. When you move to augmented reality and start getting to 7.5 meters, the challenge increases, because you have to increase the magnification to support that longer virtual image distance, which moves the imaging plane (whatever is making the image) closer to the focal point of the optical system and results in a higher concentration of energy. In an augmented reality HUD, that imaging plane—the diffuser panel in the case of the DLP system—moves further back, resulting in a higher concentration of energy, not necessarily more energy coming in. That said, because an AR HUD has a wider field of view, it does let in more sunlight to begin with than a traditional HUD out there today.

Table 1: The DLP technology diffuser maximum operating temp is 125 ºC.

LR: I can imagine that the DLP is what makes the difference. Can you connect the dots for us as to how the DLP technology creates an advantage over other HUD imaging solutions for harsh environments like automotive?

Mike Firth, Texas Instruments: DLP is a projection-based system that projects the image onto a diffuser, which receives the focused sunlight. The main advantage in DLP technology architecture is that the absorption of the sunlight is minimal, which eases the solar load problem; therefore, heat does not reach the level that other competing technologies introduce and then must deal with.

LR: What impact do you expect to see AR have in the automotive sector?

Mike Firth, Texas Instruments: Trends are driving towards making it easier to implement augmented reality displays in the automobile. The trend in augmented reality itself is to increase the virtual image distance to at least 7.5 meters if not greater, with a field of view of at least 10 degrees, compared to the current six to eight degrees. The longer the virtual image distance and the wider the FOV, the better the experience the driver is going to have.

LR: Does Texas Instruments see any other trends in automotive?

Mike Firth, Texas Instruments:  Three trends are gaining momentum, and they are Advanced Driver Assistance Systems (ADAS), electric cars, and autonomous cars. In 2024, lane departure and distance collision warning are forecast to be in over 50 percent of the automobiles produced worldwide[1].

The shift to electric cars is very strong, with several studies out there now that show that in Europe, up to 30 percent of the cars produced in 2025 could be electric—either plug-in hybrids or fully electric. Worldwide, it’s forecast that 14 percent of automobiles will be electric cars in 2025. And in an electric car, you no longer have a firewall separating occupants from a combustible engine compartment, so you have more space to install an augmented reality HUD. Electric cars can be designed from the ground up with AR in mind.

There is also a lot of interest in how AR can play a role in keeping drivers properly engaged in vehicles that are at less than Level 5 on the self-driving scale—especially at that transition point where the self-driving car is leaving fully autonomous mode, handing off control to the driver.

LR: Do you have any development tools for those interested in evaluating or developing an AR for a harsh environment like the automotive space?

Mike Firth, Texas Instruments:  There’s the DLP3030-Q1 EVM evaluation module for the electronics side of things, as well as the DLP3030PGUQ1EVM for evaluating a picture generation unit. A third EVM, DLP3030CHUDQ1EVM, is a table top demonstrator combiner HUD. It shows the image on a piece of glass and is a portable way to evaluate DLP technology and the performance. It’s a complete HUD system that you can drive with different test patterns and different video while assessing the overall performance of a DLP-based system and what it can offer.


Lynnette Reese is Editor-in-Chief, Embedded Intel Solutions and Embedded Systems Engineering, and has been working in various roles as an electrical engineer for over two decades. She is interested in open source software and hardware, the maker movement, and in increasing the number of women working in STEM so she has a greater chance of talking about something other than football at the water cooler.

 


[1] Strategy Analytics’ Aug 2017 Report (Advanced Driver Assistance Systems Demand Forecast 2015 – 2024)

Custom Cars for Everyone with 3D Printing?

Monday, August 21st, 2017

Why mass-scale personalization of cars is not right around the corner

Ultimately, as humans, we are all individuals and how we demonstrate this, through our appearance and behavior, is what characterizes our personality. This often extends to our immediate environment in terms of our possessions and how we decorate our homes and office workspaces. Fashion also plays its part, and how readily and how extravagantly we follow its ever-changing trends further indicates whether our sense of style is outrageous or more conservative.

Figure 1: 3D printing holds the possibility that, starting from a stock vehicle chassis and body, consumers can add features tied to their individual preferences. (Courtesy Mouser Electronics)


Alongside personal styling, the gadgets we surround ourselves with and how we present our homes, one thing that probably says a lot about most of us is our car. Of course, for many, this is a very practical choice, influenced by considerations such as its capacity to carry people, pets and other loads, and running costs. For others, performance and style are key, which in the extreme, are often little more than an ostentatious display of wealth.

However, even the majority of “sensible and practical” car owners like to reflect some of their personality in their choice of vehicle, even if that is limited to the basics of make, model and color. This is not surprising when you consider that a 2016 report from the AAA Foundation for Traffic Safety found American drivers spend an average of more than 17,600 minutes behind the wheel each year. Unlike the early days of the automobile, when choices were limited and Henry Ford famously said words to the effect, “You can have any color as long as it is black,” the range of model, trim and styling choices from manufacturers today is almost bewildering. Even so, while the permutations possible may make it unlikely that you’ll see another car on the road that is exactly like yours, the level of personalization possible doesn’t amount to true customization.

This is not to say that everyone wants to own what we commonly refer to as a “custom car,” which usually evokes ideas of souped-up performance, extravagant styling and flamboyant color schemes. Rather, we might like the opportunity to impart something that’s a little original in the design or appearance of our car, something no one else will have. Wishful thinking or not? With the advent of 3D printers, this is entirely possible.

The Bounds of Possibility

From a theoretical standpoint, producing an entire car using 3D printing technology should be possible. However, the scenario of a customer visiting a dealership, specifying what they want and having that vehicle built to order to drive away is perhaps a little fanciful, at least in the immediately foreseeable future.

Nevertheless, all the elements are there, certainly for producing most of the parts using 3D printers. Today’s cars comprise some 20,000 components of all shapes and sizes, made from a variety of different materials. Printing these simultaneously with a single machine to produce a car in one go is simply not possible, even ignoring the complexities of some of the electronic components that require very specialized manufacturing processes.

Figure 2: Will 3D printing prove the means for vehicles to reflect their owners’ personalities? (courtesy Mouser Electronics)


Then there are the economies of scale to consider. Present-day automotive assembly lines represent the evolution from having cars built one at a time by teams of people to a situation where vehicles are built up step by step as they progress through the factory, with perhaps a hundred being worked on simultaneously, mostly by robotic machines each dedicated to efficiently implementing a single function. Even allowing for the benefits of automation and robotics, reversing this manufacturing strategy would most likely also negate the cost benefits of current mass-production processes.

This is not to say that 3D-printed cars are just a pipe dream. Indeed, companies like Mouser’s partner, Local Motors, are demonstrating what is possible, particularly with a focus on autonomous vehicles. Despite such potential, the view of most industry experts is that the mass-scale personalization of cars is unlikely within even the next 100 years. Not only is there no infrastructure in place to support this, but a host of design constraints would also need to be addressed, not least meeting mandatory safety regulations. Instead, we have to look elsewhere for the benefits 3D printing can offer this industry.

Overnight Change? Not So Much

Automotive design and the large-scale manufacture of affordable cars using the production line approach has been refined not just over decades but now for more than a century. Consequently, we shouldn’t expect 3D printing to change things overnight, even allowing for the rapid evolution of that technology. So, while today it is possible to print a bare-bones mechanical car, it is important to understand the design requirements and the implications of material choices.

Regardless of how the material is formed to make a particular component, what is more important is how that component is designed to meet the required function. This is where the common expert refrain is that “design should be left to designers” because of the risk that, regardless of whether they know what they want, the majority of customers are incapable of designing something that will function correctly, let alone safely.

Mechanical elements need to perform in several different ways. They need to provide strength and rigidity while at the same time being lightweight and durable. The effects these components have on a vehicle’s handling or its aerodynamic performance are not something even the maker generation of designers is equipped to deal with. Then there are the highly important considerations of safety and reliability, with many aspects of safety being highly regulated and often subject to rigorous testing. And of course, affordability is also essential.

Currently, the materials 3D printers handle best are plastics and metals. Even so, producing something like a plastic bumper or a steel body panel would not only require a large printer but is unlikely to be cost-effective compared to injection molding equipment or sheet metal presses, respectively. By contrast, 3D printers may provide the means to take advantage of a material like aluminum, which is difficult to work with using current production technologies but is attractive for making lightweight frames.

Squaring the Circle

Returning to the economics of custom versus mass production, one of the problems created by the extensive choice of options offered by vehicle manufacturers today is inventory. Producing in bulk typically reduces cost, but to keep lengthy component lead-times from adding to the long waits customers already face when ordering a car that’s not a stock model, it is inevitably necessary to build ahead and hold inventory based on expected demand for these options.

And the wider the choice, the worse this problem becomes, which is where 3D printing may rebalance the cost equation in this consumer choice dilemma. Holding inventory entails a cost that is multiplied by the array of options available. For example, let’s consider just a few items of body detailing, such as a radiator grille, door trim, mirror cover and trunk sill protector. If each of these is offered in black, chrome and perhaps two or three other finishes to complement the vehicle’s body color, that already amounts to some 20 component variants. Under these circumstances, on-demand printing may prove more cost-effective.

Then we come to more creative options, which might potentially include customized body panels. There is an argument that vehicle manufacturers should start with a stock vehicle chassis and body to which customers can add features and styling details, with CAD software to ensure everything functions correctly—even through to running virtual wind tunnel and performance testing.

Whether this is where the future of the automobile industry lies remains to be seen. Undoubtedly manufacturers will embrace 3D printing technology where it offers obvious benefits, in much the same way as the aerospace industry has when it comes to designing parts that are lighter weight or achieve some key performance breakthrough. 3D printing could become economically viable with regard to addressing vehicle option proliferation, but its adoption for further enhanced customization will be another matter, which in part will reflect competitive pressures within the industry but equally could remain the preserve of after-market or specialist suppliers.


Robert Huntley is an HND-qualified engineer and technical writing specialist. Drawing on his background in telecommunications, navigation systems, and embedded applications engineering, he writes on a variety of technical and practical topics on behalf of Mouser Electronics.

Improving Autonomous Driving Communication and Safety with Private Blockchains

Thursday, June 22nd, 2017

Here’s why Blockchain, a powerhouse database in finance and digital identification, has what it takes to become the backbone of automotive data communication—beginning with the autonomous car.


Blockchain technology isn’t just for Bitcoin: It’s driving into several other industries at a breath-taking velocity. It’s now well established for financial markets and digital identification, with other major industries such as healthcare and insurance companies in fast pursuit. Emerging areas for Blockchain are also diverse, covering areas such as energy, where micro-grid producers see Blockchain as a method to keep track of the energy generated. Blockchain itself is evolving as well, with Blockchain 2.0 promising even more functionally for broader groups by introducing new applications.

Figure 1: Yes, the use of Blockchain technology for V2V and V2i communication could be even closer than it appears.


Blockchain can also be used throughout the automotive industry. Automotive applications range from revolutionizing the supply chain to authenticating ride sharing for a passenger and the vehicle owner. However, the clearest overall group of opportunities for automotive targets the critical functions autonomous vehicles perform when under their own control.

Communication Opportunities
One of the opportunities for Blockchain in the autonomous car deals with communication: Vehicle-to-Vehicle (V2V) as well as Vehicle-to-Infrastructure (V2i). Along with the other vehicle-based communications they are commonly grouped with, V2V and V2i can also be referred to as Vehicle-to-Everything (V2X). Regardless of whether it’s V2V, V2i, or V2X more broadly, all require fast and secure transmission of data as well as undisputable records—of the data, the transmission itself, and the recipient(s).

In V2V, vehicles inform other vehicles within their communication community about myriad details in the surrounding environment. One example would be real-time information regarding the roads traveled, including themes such as traffic flow, construction zones, workers on the road, etc. This type of detailed information can empower other vehicles in the communication community. Cars and trucks can optimize their performance and take the shortest time or distance route, based on real-time data. V2V can also lend itself to more core vehicle functions such as gear optimization on a given incline, allowing trucks to minimize fuel consumption. In V2V, autonomous communication to other similar vehicles (peer-to-peer) employing a fast and secure method is critical to the intelligent car’s core functionality.

Meet the Automotive Information Broker

In V2i, a car can produce thousands of independent information packets every minute and push them to what could be hundreds of infrastructure receivers, from traffic lights to data aggregators. It’s an automotive information broker. The packets of information hold little value independently, but when assembled with data from other vehicles, they have substantial value in determining everything from dynamic traffic control to component wear patterns for a given class of car in a given geography. In V2i, fast and undisputable records are the key to success.

By using Blockchain for V2X, an OEM gains additional speed and frequency of secure transmissions not available with the majority of today’s OTA solutions. That is, the OTA solutions available are specifically designed to perform file transfers for either a full binary update or partial update for major systems such as infotainment, telematics, and (in some cases) the vehicles’ ECUs as well.

While these updates are highly secure, they are designed for the specific purpose of software updates. In V2X, the data transfer to or from the vehicle is typically small packets of data intended to either inform the vehicle or take an immediate action. A history or log is also advantageous for proof of action should an accident occur. Given the unique design of Blockchain, any record of any transmission can be validated for accuracy, thereby giving the OEM or any vehicle owner undisputable records of truth. This unique validation method is not available to the automotive market today. Blockchain also addresses the V2X security issues (many senders, many receivers) without leaving the vehicles vulnerable to hacking, making it an ideal data record for small packets of information.
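
As a toy illustration of that validation property, the sketch below chains records together with hashes so that altering any historical packet breaks every later link. It is a minimal sketch only, with invented record fields; it is not HARMAN's design and omits everything a real deployment would need, such as signatures, consensus, and distribution.

```python
# Minimal hash-chain sketch of tamper-evident records; illustrative only.
import hashlib, json, time

def add_record(chain, payload):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"ts": time.time(), "payload": payload, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    prev = "0" * 64
    for rec in chain:
        body = dict(rec)
        stored = body.pop("hash")
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if stored != recomputed or rec["prev"] != prev:
            return False
        prev = stored
    return True

chain = []
add_record(chain, {"type": "V2i", "event": "construction_zone", "lane": 2})
add_record(chain, {"type": "V2V", "event": "hard_braking"})
print(verify(chain))               # True
chain[0]["payload"]["lane"] = 3    # tamper with history
print(verify(chain))               # False
```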

As Blockchain comprises data records—a database—using it will not necessarily be a design consideration for the communication transport system itself. Underlying transmission standards such as Wireless Access in Vehicular Environments (WAVE) for the U.S., which is based on the lower-level IEEE 802.11p standard, and ETSI ITS-G5 for Europe, also based on IEEE 802.11p, have focused on defining the transport system. And the Car-to-Car Communication Consortium, a nonprofit, industry-driven organization also in Europe, has focused on standards for V2V and V2i. Blockchain use, rather than having an impact on these standards, would instead operate within the standards they define.


Greg Bohl is Vice President of the U.S. Analytics Services organization for HARMAN Connected Services. His work with analytics was launched over 20 years ago while at the Sabre Group and has continued through several companies including multiple start-ups. Greg has worked globally with OEMs defining a path of how machine learning and artificial intelligence can be used in the connected car. Publications range from numerical studies in clean technology through patents for predictive systems and methods used in the automotive industry. Greg has earned a BS-IS and MBA from the University of Texas, Arlington.

Heterogeneous: Performance and Power Consumption Benefits

Wednesday, May 10th, 2017

Why multi-threaded, heterogeneous, and coherent CPU clusters are earning their place in the systems powering ADAS and autonomous vehicles, networking, drones, industrial automation, security, video analytics, and machine learning.

High-performance processors typically employ techniques such as deep, multi-issue pipelines, branch prediction, and out-of-order processing to maximize performance, but these do come at a cost; specifically, they impact power efficiency.

If some of a workload’s tasks can be parallelized, this impact can be mitigated by partitioning them across a number of efficient CPUs to deliver a high-performance, power-efficient solution. To accomplish this, CPU vendors have provided multicore and multi-cluster solutions, and operating system and application developers have designed their software to exploit these capabilities.
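
A rough software-level sketch of that partitioning idea follows; the chunk size, worker count, and checksum kernel are placeholders, and the worker processes simply stand in for a pool of efficient CPU cores.

```python
# Illustrative partitioning of an embarrassingly parallel workload across
# several workers; all parameters are placeholders.
from concurrent.futures import ProcessPoolExecutor

def checksum(chunk: bytes) -> int:
    # Stand-in compute kernel: any independent per-chunk work would do.
    return sum(chunk) & 0xFFFF

def parallel_checksums(data: bytes, workers: int = 4, chunk_size: int = 1 << 16):
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(checksum, chunks))

if __name__ == "__main__":
    print(parallel_checksums(bytes(range(256)) * 4096)[:4])
```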

Similarly, application performance requirements can vary over time, so transferring the task to a more efficient CPU when possible improves power efficiency. For specialist computation tasks, dedicated accelerators offer excellent energy efficiency but can only be used for part of the time.

So, what should you be looking for when it comes to heterogeneous processors that deliver significant benefits in terms of performance and low power consumption? Let’s look at a few important considerations.

Multi-threading
Even with out-of-order execution, CPUs running typical workloads aren’t fully utilized every cycle; they spend most of their time waiting for access to the memory system. However, when one portion of the program (known as a thread) is blocked, the hardware resources can potentially be used for another thread of execution. Multi-threading offers the benefit of switching to a second thread when the first is blocked, increasing overall system throughput. Filling otherwise-unused CPU cycles with useful work delivers a performance boost; depending on the application, adding a second thread to a CPU typically adds 40 percent to overall performance for an additional silicon area cost of around 10 percent. Among CPU IP offerings, hardware multi-threading is a feature bespoke to Imagination’s MIPS CPUs.

A Common View
To move a task from one processor to another requires each processor to share the same instruction set and the same view of system memory. This is accomplished through shared virtual memory (SVM). Any pointer in the program must continue to point to the same code or data and any dirty cache line in the initial processor’s cache must be visible to the subsequent processor.

Figure 1: Memory moves when transferring between clusters.


Figure 2: Smaller, faster memory movement when transferring within a cluster.


Cache Coherency
Cache coherency can be managed through software. This requires that the initial processor (CPU A) flush its cache to main memory before transferring to the subsequent processor (CPU B). CPU B then has to fetch the data and instructions back from main memory. This process can generate many memory accesses and is therefore time consuming and power hungry; this impact is magnified as the energy to access main memory is typically significantly higher than fetching from cache. To combat this, hardware cache coherency is vital, minimizing these power and performance costs. Hardware cache coherency tracks the location of these cache lines and ensures that the correct data is accessed by snooping the caches where necessary.

In many heterogeneous systems, the high-performance processors reside in one cluster, while the smaller, high-efficiency processors reside in another. Transferring a task between these different types of processors means that both the level 1 and level 2 caches of the new processor are cold. Warming them takes time and requires the previous cache hierarchy to remain active during the transition phase.

However, there is an alternative – the MIPS I6500 CPU. The I6500 supports a heterogeneous mix of external accelerators through an I/O Coherence Unit (IOCU) as well as different processor types within a cluster, allowing for a mix of high-performance, multi-threaded and power-optimized processors in the same cluster. Transferring a task from one type of processor to another is now much more efficient, as only the level 1 cache is cold, and the cost of snooping into the previous level 1 cache is much lower, so the transition time is much shorter.

Combining CPUs with Dedicated Accelerators
CPUs are general-purpose machines. Their flexibility enables them to tackle almost any task, but at the price of efficiency. Thanks to its optimizations, the PowerVR GPU can process larger, highly parallel computational tasks with very high performance and good power efficiency, in exchange for some reduction in flexibility compared to CPUs, and bolstered by a well-supported software development ecosystem with APIs such as OpenCL or OpenVX.

The specialization provided by dedicated hardware accelerators offers a combination of performance with power efficiency that is significantly better than a CPU, but with far less flexibility.

However, using accelerators for operations that occur frequently is ideal for maximizing the potential performance and power-efficiency gains. Specialized computational elements such as those for audio and video processing, as well as the neural network processors used in machine learning, rely on similar mathematical operations.

Hardware acceleration can be coupled to the CPU by adding Single Instruction Multiple Data (SIMD) capabilities with floating point Arithmetic Logic Units (ALUs). However, while processing data through the SIMD unit, the CPU behaves as a Direct Memory Access (DMA) controller to move the data, and CPUs make very inefficient DMA controllers.

Conversely, a heterogeneous system essentially provides the best of both worlds. It contains some dedicated hardware accelerators that, coupled with a number of CPUs, offer the benefits of greater energy efficiency from dedicated hardware, while retaining much of the flexibility provided by CPUs.

These energy savings and performance boosts depend on the proportion of time the accelerator spends doing useful work. Work packages appropriate for the accelerator come in a wide range of sizes—you might expect a small number of large tasks but many more smaller ones.

There is a cost in transferring the processing between a CPU and the accelerator, and this limits the size of the task that will save power or boost performance. For smaller tasks, the energy consumed and time taken to transfer the task exceeds the energy or time saved by using the accelerator.
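
A back-of-the-envelope model makes that break-even point concrete; the per-item times and transfer cost below are invented for illustration, not measurements of any particular CPU or accelerator.

```python
# Offloading pays off only once the work saved exceeds the fixed transfer cost.
def worth_offloading(n_items, cpu_us_per_item, accel_us_per_item, transfer_us):
    saved = n_items * (cpu_us_per_item - accel_us_per_item)
    return saved > transfer_us

# Example: CPU 1.0 us/item, accelerator 0.1 us/item, 500 us to hand over a task.
for n in (100, 1_000, 10_000):
    print(n, worth_offloading(n, 1.0, 0.1, 500.0))
# 100 -> False (90 us saved), 1000 -> True (900 us), 10000 -> True (9000 us)
```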

Data Transfer Cost
To reduce time and energy costs, a Shared Virtual Memory with hardware cache coherency—as found in the I6500 CPU—is ideal as it addresses much of the cost of transferring the task. This is because it eliminates the copying of data and the flushing of caches. There are other available techniques to achieve even greater reductions.

The HSA Foundation has developed an environment to support the integration of heterogeneous processing elements in a system, extending beyond CPUs and GPUs. The HSA intermediate language, HSAIL, provides a common compilation path to heterogeneous Instruction Set Architectures (ISAs) that greatly simplifies system software development, and the HSA specification also defines User Mode Queues.

These queues enable tasks to be scheduled and signals to trigger tasks on other processing elements, allowing sequences of tasks to execute with very little overhead between them.

Beyond Limitations
Heterogeneous systems offer the opportunity to significantly increase system performance and reduce system power consumption, enabling systems to continue to scale beyond the limitations imposed by ever shrinking process geometries.

Multi-threaded, heterogeneous and coherent CPU clusters such as the MIPS I6500 have the ideal characteristics to sit at the heart of these systems. As such they are well placed to efficiently power the next generation of devices.


Tim Mace is Senior Manager, Business Development, MIPS Processors, Imagination Technologies.
