Hail a Cab and All Hail the GPU

Friday, January 5th, 2018

The autonomous car will rely on data processed in real time, adapting through machine learning to ensure safe passage whatever the conditions.

Autonomous vehicles will rely on artificial intelligence (AI) to power their safety vision and navigation systems. Although not a new idea, machine learning has become significant today because it improves the algorithms that refine and enhance applications currently used in Advanced Driver Assistance Systems (ADAS) and that will, in the future, control autonomous cars.

Robotaxis require a small data center’s worth of servers in the trunk to run the deep learning, parallel computing and computer vision algorithms to accommodate the constantly changing landscape and driving conditions.

The increased volumes of data currently being generated have prompted the advance of machine learning, which relies on data for its ‘training.’ Around 90 percent of digital data was created in the last two years, and this increased volume has been used to accelerate the development of better algorithms used in machine learning.

To process all this data, there has also been a surge of interest in the Graphics Processing Unit (GPU). At GTC Europe 2017, Jensen Huang, NVIDIA’s President and CEO, explained how the GPU’s ability to rapidly render graphics makes it suitable for machine learning and AI applications. “The GPU now finds itself with a rich suite of applications,” he said. GPUs serve “just about every industry,” he continued, listing High-Performance Computing (HPC) and Internet services as examples. Practically every query, photo search, and recommendation on a mobile phone relies on a GPU, he told the conference.

Its advantage, says Huang, is its parallel operation. “The only way to make a Central Processing Unit (CPU) go faster is higher clock speeds,” said Huang. “Not so with GPU computing. Applying parallelism is a special way to solve algorithms,” he said as he introduced the Volta GPU, describing it as the largest single processor ever made, using 21 billion transistors. It delivers 120 TFLOPS of deep learning performance and can replace an entire rack of servers, Huang told attendees.
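
Huang’s contrast between serial CPU execution and data-parallel GPU execution can be sketched with a toy example (an illustration of the concept, not NVIDIA code; NumPy stands in for the thousands of parallel lanes a GPU provides):

```python
import numpy as np

# A SAXPY-style kernel (y = a*x + y): the classic data-parallel workload.
a = 2.0
x = np.arange(100_000, dtype=np.float64)
y = np.ones_like(x)

# Serial form: one element at a time, the way a single CPU core steps
# through the data. Speeding this up means a higher clock rate.
y_serial = y.copy()
for i in range(len(x)):
    y_serial[i] = a * x[i] + y_serial[i]

# Data-parallel form: one whole-array expression. Every element is
# independent, so on a GPU each one can map to its own thread.
y_parallel = a * x + y

assert np.allclose(y_serial, y_parallel)
```

The point is the shape of the computation: because each element is independent, throughput scales with the number of parallel lanes rather than with clock speed, which is Huang’s argument for the GPU.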

Figure 1: Jensen Huang introduced the Volta GPU on stage at GTC Europe 2017. Production is expected to be Q1 2018.

Use of a GPU can help engineers develop better algorithms while also allowing devices to process large data sets quickly. Data can also be shared with other robots in various locations, via the Cloud, to pool information and accelerate learning.

Huang’s vision is that the Volta GPU will “turbo charge” the transportation sector. He believes that deep learning will accelerate applications “far faster” than Moore’s Law. Deep learning researchers are already using NVIDIA GPUs and finding that they can overcome the main handicap of machine learning: the processing of data to create algorithms for behavior. Huang reported that researchers were finding the GPU to be incredibly effective when trained on large amounts of data, which requires trillions and trillions of operations.

“The software writes itself,” enthuses Huang, by taking in large amounts of data. Combined with CUDA, the company’s parallel computing architecture, and its associated libraries, compilers, and development tools, these two converging forces, Huang said, will turbo charge the company’s GPUs. “The GPU will revolutionize the car industry,” he said. “It has always been in design and simulation as part of the workflow. Now it will be in the car, solving one of the greatest challenges of computing: planning.” For autonomous vehicles, this means planning for hundreds of millions of cars, and safeguarding hundreds of millions of people both inside vehicles and outside them, along with route planning and navigation.

At the same event, the company launched Pegasus, described as the world’s first AI computer designed for fully driverless vehicles and part of the DRIVE PX AI computing platform. It boasts 320 trillion operations per second of deep learning performance to run numerous deep neural networks. It is designed to handle Level 5 driverless vehicles, i.e., those with no driver required, so the vehicle has no pedals, steering wheel, or other controls that can be operated by a human.

Figure 2: The latest addition to DRIVE PX AI, codenamed Pegasus, will usher in robotaxis.

These vehicles are a new genre — robotaxis. They can be summoned to an address and take passengers from there to their end destination. This type of driverless vehicle will bring mobility to disabled or elderly users who otherwise have to rely on private hire or the goodwill of friends and family to travel.

For this type of vehicle, masses of complex data will need to be processed to detect pedestrians and road hazards, to track other vehicles and their routes, and to plot and follow navigation paths that may need to be updated in real time. These operations require large amounts of visual data from sensors and information systems in the car, from roadside information systems, and from satellite systems. Furthermore, processing has to include many levels of redundancy to meet the safety levels required in automotive use. As a result, robotaxis require a small data center’s worth of servers in the trunk to run the deep learning, parallel computing, and computer vision algorithms needed to accommodate the constantly changing landscape and driving conditions. For example, a DRIVE PX AI supercomputer in the vehicle and GPUs in the data center can combine to create highly detailed maps for autonomous vehicle navigation systems.

The decrease in size represented by Pegasus is practical on two levels: it conserves space in the trunk, and it saves weight in the vehicle to increase fuel efficiency.

The license plate-sized Pegasus is powered by two NVIDIA Xavier Systems-on-Chip (SoCs), each incorporating a Volta GPU, plus two discrete GPUs to accelerate deep learning and computer vision algorithms. It is designed for Automotive Safety Integrity Level (ASIL) D certification, the highest, most stringent safety level, which safeguards against life-threatening or fatal injury in the event of a malfunction. It also has Inputs/Outputs (I/Os) for a Controller Area Network (CAN) and FlexRay, dedicated high-speed inputs for RADAR, LIDAR, and ultrasonic sensors, 10Gbit Ethernet connectors, and memory bandwidth of one terabyte per second.

It is expected to be available to automotive partners in the second half of 2018.

NVIDIA’s DRIVE IX software can be coupled with the platform to process sensor data inside and outside of the vehicle.

DRIVE PX 2 configurations are available now.

Caroline Hayes has been a journalist covering the electronics sector for more than 20 years. She has worked on several European titles, reporting on a variety of industries, including communications, broadcast and automotive.


Autonomous Trucks, LIDAR, and the Future of Trucking

Tuesday, October 3rd, 2017

Are autonomous trucks an answer to the growing truck driver shortages in the U.S.?

Trucks are by far the single most-used mode to move freight around the country, moving 63 percent of the total tonnage in 2015. Nearly 18.1 billion tons of goods worth about $19.2 trillion moved on our nation’s transportation network in 2015, based on current Freight Analysis Framework 4 (FAF4) estimates. On a daily basis, 49 million tons of goods valued at more than $53 billion are shipped throughout the United States on all modes of transportation.[i] The industry is growing rapidly, fueled in part by growing e-commerce, but the number of drivers is not growing along with it. The American Trucking Associations estimates that there are at least 48,000 open jobs for drivers in the U.S., with an estimated 174,500 drivers needed by 2024.

Figure 1: A projection of average daily long-haul truck traffic on the National Highway System in the United States in 2045. (Source: Bureau of Transportation Statistics, U.S. Dept. of Transportation)


Driver shortages and necessary safety regulations that limit time behind the wheel make fully autonomous trucks (ATs) that much more attractive. Regulations restrict the number of hours that commercial truck drivers can work. Drivers transporting property cannot drive more than 11 hours before taking ten consecutive hours off. A driver cannot drive after 14 total hours of being “on duty,” regardless of the number of hours driven, and may not work more than 60 hours in 7 consecutive days, or more than 70 hours in 8 consecutive days.[ii] ATs could decrease the time that a trucker spends behind the wheel, extending how far freight can move per driver, even with a truck that’s autonomous only on highways. It’s possible that in five to 10 years trucks will drive themselves for long stretches on public highways, but the technology for completely autonomous trucks off-highway is far from deployment. Drivers already enjoy, albeit piecemeal, benefits of autonomy that enhance safety and provide a better return for the trucking business by reducing accidents and increasing fuel efficiency. Levels of autonomy start with Advanced Driver Assistance Systems (ADAS), which might include automatic emergency braking, lane departure warning, forward collision warning, and adaptive cruise control. Such features make the driver’s job simpler and lead to improved safety, especially considering the long hours truckers spend behind the wheel. Making up for driver shortages with fully autonomous trucks seems within reach.
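
The hours-of-service limits above translate naturally into a rule check. A minimal sketch (it encodes only the limits as summarized here; the actual FMCSA regulation includes exceptions and restart provisions this ignores):

```python
def hos_violations(hours_driving: float, hours_on_duty: float,
                   hours_worked_7_days: float, hours_worked_8_days: float):
    """Return which hours-of-service limits (as summarized in this
    article) would bar a property-carrying driver from driving further."""
    violations = []
    if hours_driving >= 11:
        violations.append("11-hour driving limit")
    if hours_on_duty >= 14:
        violations.append("14-hour on-duty window")
    if hours_worked_7_days >= 60:
        violations.append("60 hours in 7 consecutive days")
    if hours_worked_8_days >= 70:
        violations.append("70 hours in 8 consecutive days")
    return violations

# 10.5 hours of driving inside a 13-hour duty day is still legal;
# another half hour of driving crosses the 11-hour limit.
assert hos_violations(10.5, 13.0, 50.0, 60.0) == []
assert hos_violations(11.0, 13.0, 50.0, 60.0) == ["11-hour driving limit"]
```

An AT that handles highway stretches would, in principle, let freight keep moving during hours a human could not legally log behind the wheel.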

Figure 2: In autonomous mode, the truck driver can take out a tablet or perform other tasks. Daimler Freightliner Inspiration Trucks are approved for autonomous operation on public roads in the state of Nevada. (Source:

Daimler, Volvo, Peterbilt (in partnership with Embark), and Uber are all working on autonomous trucks (ATs). In August 2016, Uber bought Otto, a company that retrofits trucks for autonomous capabilities, for a reported $680 million. Uber successfully tested a fully autonomous truck delivery via public highway in Colorado in October 2016. A human driver piloted the truck on and off the highway; on the highway, the system kept it in the right lane. The truck had both a leader car and a police escort for the 125-mile highway delivery of Budweiser from Fort Collins to Colorado Springs. The Uber technology includes video cameras, accelerometers, and a Light Detection and Ranging (LIDAR) sensor.

LIDAR: The Eyes of Automotive Autonomy
LIDAR sensors are key to self-driving cars: they are the eyes of the system, lending depth perception to the process of decision-making in driving. Late-model LIDAR sensors are about the size of a coffee can. A LIDAR sensor visualizes the world in 360 degrees, bouncing pulsed laser beams off nearby objects all around it to create a 3D map of the real world in real time. LIDAR sends a fixed train of light pulses toward a target, with a known time interval between pulses. Each pulse hits an object in its path, which returns a portion of the light to the LIDAR. By measuring the time of flight (ToF) of the pulses, the system can accurately determine range and speed.

However, LIDAR is not perfect. It can be susceptible to failures associated with sunlight and nearby LIDAR sensors, and the industry’s struggle to keep up with orders is causing lead times as long as six months. Even so, more LIDAR startups are coming online with venture capital funding and new ideas on how to conquer present-day challenges, cut costs, and reduce size.

Most self-driving vehicles use LIDAR. One exception is Tesla. Tesla’s site states, “All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver.”[iii] Tesla owners won’t be able to use the system as fully autonomous without regulatory approval, of course. Tesla does have a driving feature called “Autopilot” that allows hands-free highway driving. Tesla vehicles may not have LIDAR, but they use several cameras, ultrasonic sensors, and one radar.
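
The time-of-flight arithmetic behind this is simple: a pulse travels to the target and back, so range is half the round-trip time multiplied by the speed of light. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in m/s

def lidar_range_m(round_trip_s: float) -> float:
    """Range to a target from the round-trip time of one LIDAR pulse."""
    return C * round_trip_s / 2.0

# A pulse that returns after 1 microsecond left a target about 150 m away.
print(lidar_range_m(1e-6))  # ~149.9
```

Tracking how that range changes between successive pulses at a known interval also yields radial speed, which is how a pulse train provides both range and velocity.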

Are Autonomous Trucks Taking Notes from Self-Driving Cars?
The technology for fully autonomous trucks on highways is hitting major milestones as several companies test ATs. Volvo plans to launch self-driving trucks in confined areas with private roads, such as mines and shipping ports. Indeed, a more realistic outlook for the near future restricts autonomous vehicles to closed, predictable environments, and for good reason. Unpredictable circumstances challenge all autonomous vehicles with bad weather, faded or non-existent lane markings, construction, debris in the road, and the behavior of other drivers, wildlife, and pedestrians. Even so, predictions for autonomous trucks on public highways are often cited in the media as between three and 10 years away, and almost universally cited as being safer than human drivers. The implementation of autonomous trucking is hampered by a lack of clear regulations, growing push-back from truck driving unions that fear job losses, and societal inertia. In that time frame, with the support of legislation, it is conceivable that autonomous trucking can be limited to the right-hand lane in long stretches of highway, and only in good weather. Road conditions, construction, and debris can be managed to a good extent, but no public highway is ever going to be perfect.

Figure 3: Self-driving trucks as a concept in Logan, a Wolverine movie. The fully autonomous concept trucks have no cab. (Credit: Nick Pugh Studios.)

Google cars have driven millions of miles with few accidents, all but one of which were caused by other cars. The exception occurred when the self-driving Google car hit the side of a bus while merging into city traffic at fairly low speed. However, trucks take much longer to come to a stop and cannot swerve in an avoidance maneuver without a real risk of jack-knifing or flipping. Some accidents cannot be avoided: for example, if a car pulls out in front of a truck and slams on its brakes on a two-lane highway, the truck has the choice of swerving into oncoming traffic, jack-knifing, or striking the car in front of it, regardless of who or what is driving the truck.

Legislation: Autonomous Cars before Trucks
Self-driving trucks may debut after self-driving cars if legislation is a guide. The recent development of legislation regarding self-driving cars excludes commercial trucks altogether. Reasons for the exclusion include matters of jurisdiction as well as opposition from unions. The proposed legislation refers to Highly Automated Vehicles as HAVs: “On Sept. 6, 2017, the U.S. House of Representatives unanimously passed bipartisan HAV legislation, known as the SELF DRIVE Act (Safely Ensuring Lives Future Deployment and Research in Vehicle Evolution Act). That legislation, which has not yet been adopted by the Senate, would amend the definitions contained in Section 30102 of Title 49, United States Code, to include definitions for ‘highly automated vehicle,’ ‘automated driving systems,’ ‘dynamic driving task’ and other definitions relevant to the development and use of autonomous vehicles.”[iv] This is a start, but the exclusion of trucks is telling. According to the Bureau of Labor Statistics, there are about 1.7 million trucking jobs in the United States. Truck drivers are concerned about losing their jobs to self-driving trucks. Although there is a shortage of drivers, and of parking for drivers who must pull over after 11 hours at the wheel, autonomous trucks are rightfully considered a threat to jobs. In reality, full AT technology might be engaged from point to point on designated, maintained-for-AT highway routes, where human truckers take over at the endpoints for regional delivery. But unlike robots that can take over tedious tasks that humans do not like doing, many truck drivers like the long-haul aspect of their jobs. Transporting hazardous chemicals, oversize cargo, and other unusual freight will likely always have continuous human involvement.

Evolving from Ridiculous
Technology has a habit of superseding our attempts to contain it. Early vehicles of the late 19th century led to the addition of traffic laws, signs, signals, and lane markings. Attempts to curb technology, even for reasons of safety, seem ludicrous now. Great Britain’s Red Flag Act of 1865 required that any self-propelled vehicle be preceded by a person holding a red flag.

Figure 4: The Red Flag Act of 1865 in Great Britain slowed horseless carriages down to walking speed. (Source:

Over 150 years later this seems ridiculous, but has society ever adapted as fast as technology? As technology steadily improves, autonomous trucks will be capable of taking over more driving tasks, perhaps eventually merging onto and off of highways without human intervention. Truck drivers can fight the fruits of progress, but eventually time and cost savings will overtake the best of intentions for saving long-haul truck drivers’ jobs. The red flag was replaced by traffic rules, lights, and signs, which did not exist before, and jobs were created to implement and maintain an automotive infrastructure. As the truck driver shortage deepens, perhaps a similar infrastructure surrounding ATs will evolve new jobs, shifting the landscape without displacing truck drivers all at once. Technology solves one challenge after another to make innovative visions reality. The biggest impediment to self-driving vehicles may very well be social, for which technology has no answers.

Lynnette Reese is Editor-in-Chief, Embedded Intel Solutions and Embedded Systems Engineering, and has been working in various roles as an electrical engineer for over two decades. She is interested in open source software and hardware, the maker movement, and in increasing the number of women working in STEM so she has a greater chance of talking about something other than football at the water cooler.

[i] “Freight Shipments Projected to Continue to Grow.” United States Department of Transportation, 14 Aug. 2017. Accessed Sept. 30, 2017.

[ii] “Summary of Hours of Service Regulations.” Federal Motor Carrier Safety Administration, United States Department of Transportation, 30 Dec. 2013. Accessed Sept. 30, 2017.

[iii] Tesla, Inc. Accessed Sept. 30, 2017.

[iv] Hamilton, Lawrence, et al. “Self-Driving Trucks Get Closer to Hitting the Road.” Law360 – The Newswire for Business Lawyers, 29 Sept. 2017. Accessed Sept. 30, 2017.

RFID Keeps Track of What’s on the Tracks

Friday, June 2nd, 2017

Radio Frequency Identification (RFID) is used for logistics and planning in modern transport systems.

Transmitting data from a chip embedded in a device or in a tag on an object is a commonplace way to relay information. Data stored on the chip or tag is activated by radio waves from a reader and transmitted back wirelessly, producing digital data that can be used, for example, for location information.

“It is estimated that by 2020, more than 60 percent of payment transactions will use contactless technologies such as NFC.”

Many cities around the world use RFID and Near Field Communication (NFC) devices for public transport services. In the UK, Transport for London has used the Oyster Card contactless system since 2003; by 2012, over 43 million Oyster Cards had been issued for travel across London’s travel zones. The data can be used to pay for travel and to calculate peak times and entry and exit points, but it can also be used to assess busy periods for particular stations. Oyster Cards have been used to help track the journeys of missing people as well as, controversially, to track people of interest to the police.

City Networks
Columbus, Ohio, is celebrating winning the 2016 Smart City Challenge. The US Department of Transportation’s project encourages cities to use technology to ease the urban commute, for example with RFID tags in vehicle windows that pay for parking spaces and tolls without slowing the flow of traffic. The city will receive $40 million from the federal government and $10 million from Vulcan, a company owned by Microsoft co-founder Paul Allen, to invest in Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) technology as well as smartcards for bus payments. (Columbus does not have a commuter train network.) The city will deploy NXP’s V2V and V2I communications system and smart cards to add intelligence to the transportation system. Wireless technology to create ‘smart corridors’ along bus routes will enable V2V and V2I communications for efficiency and is expected to improve safety as well as usability.

Figure 1: NFC has a familiar ring to it—this payment device is based on Infineon’s contactless security chip.


It is estimated that by 2020, more than 60 percent of payment transactions will use contactless technologies such as NFC. Companies are finding ever more inventive ways to make the payment transaction as easy as possible and to put it on devices that cannot be left at home by mistake. Last year, Infineon announced an NFC-enabled ring based on its contactless security chip (Figure 1). At the UITP Global Public Transport Summit in Montreal, Canada, the company focused on security. Its chips are interoperable and compatible with international standards such as Common Criteria for computer security and EMVCo for card payment. All components are also CIPURSE-ready or enable CIPURSE Mobile transactions. CIPURSE is an open standard defined by the OSPT Alliance for transit fare collection, built on standards such as ISO 7816, AES-128, and ISO/IEC 14443-4. A cryptographic protocol protects against Differential Power Analysis (DPA) and Differential Fault Analysis (DFA), guarding against hackers attacking the main or side channels to access passenger data.

“Companies are finding more inventive ways to make the payment transaction as easy as possible and on devices that cannot be left at home by mistake.”

Long Range Travel
For city use, cards must be placed on a reader, as the read range of this type of RFID is short, typically around one meter (three feet). That range is far exceeded by the RFID sensor tags introduced by Powercast. The company’s PCT100 and PCT200 multi-sensor RFID tags (Figure 2) have a range of 10m (32 feet). They are intended not for passenger use but for shipping goods. In addition to what is claimed to be the longest read range available today, the tags can withstand temperature extremes from -40 to +85 degrees C, suiting the transportation of pharmaceuticals or perishable items that have to be refrigerated or chilled throughout the journey.

Initially, the tags include temperature, humidity, and light sensors, with other sensor types planned for later release. Also available are tags that can sense the RFID reader, and tags with an on-board Light Emitting Diode (LED) that can be used to show the strength of the RFID field and to ‘find’ a particular tag.

Figure 2: Powercast’s multiple sensor tags have an exceptional range of 10m/32 feet.


The two versions are powered differently. The PCT100 is a battery-free design, while the PCT200 has a battery that can be recharged by an RFID reader using the company’s patented RF harvesting technology. An embedded Powerharvester receiver generates power from a standard RFID reader, so the sensor tag’s battery never has to be changed or plugged in for a recharge. Battery life is around one month, and data read intervals can be set from one minute to one hour for data logging outside the RF field over long periods of time.
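
Why a 10m harvesting range is remarkable can be seen from a back-of-the-envelope free-space (Friis) estimate. The figures below are assumptions for illustration (a 4W EIRP reader at 915 MHz is typical for UHF RFID, not a published Powercast specification):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def friis_received_power_w(eirp_w: float, freq_hz: float,
                           distance_m: float, rx_gain: float = 1.0) -> float:
    """Free-space received power at a tag antenna (Friis equation)."""
    wavelength = C / freq_hz
    return eirp_w * rx_gain * (wavelength / (4 * math.pi * distance_m)) ** 2

# At 10 m from an assumed 4 W EIRP, 915 MHz reader, only tens of
# microwatts reach the tag, which is why harvesting at that range is hard.
p = friis_received_power_w(4.0, 915e6, 10.0)
print(f"{p * 1e6:.1f} uW")  # roughly 27 uW
```

Received power falls with the square of distance, so the Powerharvester’s job is effectively to run the tag’s sensing and logging from microwatt-level input.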

The tags have more than 10 times the operational power of standard passive RFID tags, says the company. The RFID reader generates an electromagnetic signal, forwarded by an antenna to the NXP UCODE RFID chip in the tag. The UCODE chip has a reduced conditional read range and a digital switch that controls activation or deactivation of the tag, both of which help deter theft.

Keeping Track
Staying with transport, Harting has developed an Ultra High Frequency (UHF) RFID antenna to identify trains on the railway track. The WR24-r is part of the Ha-VIS RF-ANT-WR24 range of antennas (Figure 3).

It is robust enough to be fitted under a train and operates in extreme temperatures; the WR24-t version operates at up to 150 degrees C. The antenna has a 70-degree opening angle and can be used to create gates for container tracking and identification, recognizing the train on the track for logistics as well as for business data gathering. There are three antenna models in the range; the third, the WR24-i, is for general industrial use.

Figure 3: Harting has created the ANT-WR24 range of antennas for rail use.


As cityscapes change, and as transport needs increase, the role of RFID and NFC is becoming more vital in contributing intelligence to smart transport systems.

Caroline Hayes has been a journalist covering the electronics sector for more than 20 years. She has worked on several European titles, reporting on a variety of industries, including communications, broadcast and automotive.

Aviation: Systems Engineer Interview

Friday, December 16th, 2016

The embedded design choices keeping In-Flight Connectivity fast and dependable.

Editor’s Note: What’s embedded can go unnoticed. So can video streaming, live television, and Internet shopping, as long as users’ attention isn’t drawn to annoying delays caused by slower connection speeds. The problem of keeping connection speeds at a minimum of 5 Mbps for airline crew and passengers is one Thales Group has taken on. In mid-2017 the company will launch the in-flight connectivity service FlytLIVE. Kemi Lewis, systems engineer at Thales InFlyt Experience™ in Melbourne, spoke with EECatalog shortly after the FlytLIVE announcement. He shared with us the challenges specific to gaining and then maintaining the means to keep connection speeds fast, lengthen MTBF/MTBUR, and come out on the right side of SWaP calculations.

EECatalog: What should embedded designers know about some of the specific challenges involved when building a connectivity solution aimed at commercial aviation?

 Kemi Lewis, Thales


Kemi Lewis, Thales: One factor to be aware of is the need for FAA certification of any hardware, depending on where that hardware is being deployed in the plane. When a plane is taken out of service to be retrofitted with new hardware, the FAA requires all the testing information before giving its approval to operate the equipment on the aircraft. Product management tools like Jama Software can help meet this compliance requirement by ensuring everyone from the product team down to the testers is aware of what requirements must be met.


Lewis notes: “Our goal is to give you the same experience you have at home: fast Internet, TV, the ability to order items and find them waiting for you when you arrive at your destination.”

The other route is LineFit, where Boeing, Airbus, or another aircraft manufacturer approves the product design portfolio of equipment from Thales or another OEM-approved IFE&C supplier a year to 18 months ahead. The aircraft OEM, e.g., Airbus, Boeing, or Comac, would then get the approval to use this equipment as a selectable feature in any of its aircraft that an airline would choose to buy. Typically, those design requirements are much more rigorous: the design must be more ruggedized and have greater redundancy, because OEM manufacturers have more critical requirements for what hardware and software is approved for use on their aircraft. This is another instance where using the reuse feature of Jama Software shortens your product’s time to market.

EECatalog: Is there any degree of cross pollination or synergy between the military aviation and the commercial aviation sectors, particularly with regards to SWaP?

Lewis, Thales: Much of the time, fully ruggedized military-grade hardware is past the price point for the low-cost In-Flight Connectivity market, and it might even be too expensive for the high end of the market. Hardware that is halfway between commercial and industrial/military occupies a kind of hybrid sweet spot; that is where we look for COTS designs that we can either use straight off the bat or have the vendor customize minimally.

Single Board Computers that are scalable and modular work best in the evolutionary life of a design, making it easier to upgrade a design that is already in service with minimal FAA requalification.

And when new technologies come out in the consumer market, for example USB Type-C, there is typically a two-to-three-year lag before the technology makes its appearance on an aircraft, because it has to be certified for use there. In addition, there are economies of scale: the more widely a particular technology is adopted across the market, the more that majority adoption drives down the price, and the more feasible it becomes to use the technology in question, because there is more support for it. In a situation in which there are two competing technologies and there is no majority, the concern would be that the technology could go obsolete. Also, airlines tend to be skittish about new technologies with no proven pedigree, given that the equipment is going to be in an enclosed metal box flying at 10,000 feet.

EECatalog: Could you speak a bit about the architecture that underpins your solutions?

Lewis, Thales: Starting from the top you have dedicated, fast, individual bandwidth pipes to each aircraft, so there’s no sharing. That allows us to guarantee 5Mbps or more to each passenger on the aircraft and even higher if they are accessing Amazon or Netflix.

We’re always looking for less power, faster processing speed, and smaller form factors. Anything we can do to reduce the overall design footprint (SWaP) of what goes on the airplane lessens weight and results in a fuel and cost savings for the airline.

To find a sweet spot, typically we lean more to the low power. Our hardware typically stays on the aircraft for five to ten years, so we are not looking for something that needs a massive amount of cooling or additional cooling outside the standard cargo bay environment.

EECatalog: Does that need for low power mean the processors would tend to be ARM processors?

Lewis, Thales: It all depends. We have used Intel in the past, we have used Freescale; it’s really dependent on specific applications. If it’s a server, it might be an Intel based processor; if it’s a box or smart display we might go with Freescale. It all depends on the environmental conditions of the location in the aircraft.

If it’s a server that’s in the cargo bay, you have a little more leeway: you can go with a processor that consumes more power and might need more cooling, as opposed to something at the passenger seat where airflow is limited. Ideally you don’t want to use a processor that is thermally hungry and putting out a lot of heat, causing the unit to fail at some point.

A wide-body aircraft especially might need a couple of servers, so that dividing up the workload extends the longevity of each box.

EECatalog: Cooling continues to be a challenge.

Lewis, Thales: I am hoping that, given the roadmaps for different COM and SBC designs, cooling will become less of a constraint for avionic designs.

But we are facing limitations. For instance, in the cargo bay of the aircraft there is a maximum amount of airflow available in that particular area, depending on how many boxes the aircraft needs, and that accounts only for the normal operation of the aircraft; now you have to fit in-flight entertainment and Internet connectivity boxes into that same area. So you are constrained from the get-go. That dictates the space allotment you have: say, a 4 MCU box limited to 100W of thermal dissipation. This thermal limitation, combined with the box size limitation, demands that the processor have a fan, or a fan and heatsink, and depending on what it must do for the application it can run hot. All those limitations can curtail the life of your design, and the understanding becomes, “okay, this box in a typical real-world application might only last three years.”
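
The constraint Lewis describes, a fixed-size box with a fixed thermal allowance, reduces to a simple power budget. A hypothetical sketch (the component names and wattages are invented for illustration; only the 100W limit on a 4 MCU box comes from the interview):

```python
# Power budget for a hypothetical 4 MCU avionics box limited to
# 100 W of thermal dissipation.
THERMAL_LIMIT_W = 100.0

components_w = {
    "server-class processor": 45.0,   # invented figure
    "memory and storage": 15.0,       # invented figure
    "network and modem cards": 20.0,  # invented figure
    "power conversion losses": 10.0,  # invented figure
}

total_w = sum(components_w.values())
margin_w = THERMAL_LIMIT_W - total_w
print(f"total {total_w:.0f} W, margin {margin_w:.0f} W")
assert total_w <= THERMAL_LIMIT_W, "design exceeds the box's thermal allowance"
```

A thermally hungry processor eats that margin directly, which is why, as Lewis notes, such a box “might only last three years” in service.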

Rail Safety Certification Eased

Thursday, May 26th, 2016

MEN and QNX Case Study: A highly efficient path to functional safety certification for railway transportation.

Terry stares at his project proposal on the table for the fifth time that morning. The title reads: “Project Proposal – Automatic Train Protection System (Certified to EN 50128 SIL 4).”  Although Terry has managed the development of multiple railway systems in the past, this project is different and has cost him several nights of sleep.

A professional engineer with years of industry experience, Terry has a solid grasp of the technical aspects of an automatic train protection system. Drawing up a project outline, complete with development schedule and budgetary figures, would usually take him less than two weeks, but the EN 50128 SIL 4 requirement changes everything. Numerous questions swirl in Terry's head, each threatening to undermine the confidence he usually brings to estimating schedule and budget.

“What hardware platform should we choose to make the EN 50128 certification easier?”

“Which COTS components should we use?”

“What design implications does EN 50128 have on the overall system?” 

“How much does certification cost?” 

“How do we mitigate the risk of a failed certification attempt?” 

“Is there any existing expertise on our development team on the application of the EN 50128 standards?” 

“How does the EN 50128 requirement impact the overall schedule?  Would it double the project length?” 

A Harvard Business Review article surveying a sample of large IT projects found that, on average, such projects run 45% over budget and 7% over schedule while delivering 56% less value than predicted. Terry never thought these statistics would apply to any project he managed, given his experience and industry knowledge. Now, faced with a new requirement for functional safety certification, Terry has a fresh appreciation for those numbers.

The Problem

Terry’s problem is commonly found in many industries, including railway transportation, power and energy, factory automation, to name a few. Any system whose malfunction could lead to damages to human or properties is a candidate for functional safety regulation. There is an increasing adoption of functional safety standards in different markets, resulting in a number of market specific standards. (Figure 1) EN 50128 is one such example. Based on IEC 61508, the standard governs the functional safety for heavy and light rail systems. Since the standard is relatively new (published in 2011), many system manufacturers are as yet unfamiliar with it. These manufacturers are unprepared for the implication of certification requirements and how such requirements impact project budget and time. A common mistake for less-experienced companies is gross under-estimation of this impact, which can lead to market timing errors and lost revenue.

Figure 1:  Railway is one of many industries requiring functional safety to protect human life and avoid catastrophic events.


The team's knowledge of functional safety and certification is one of the most important factors deciding project success. Generally speaking, functional safety certification requirements can easily double or triple the time it takes to complete a project compared with one that has none. Effort invested in certification activities often exceeds the effort spent on development itself. This magnifying effect of the certification requirements is abated when the knowledge level is high and amplified when it is low. The following table shows a fictitious scenario illustrating this effect, assuming a development team with fairly good knowledge of safety and certification.


Table 1: Project Comparison – Certified vs. Non-Certified Product

Newcomers to functional safety standards may find this increase in effort prohibitive, but a closer look at the demands of the standards helps explain it. Take IEC 61508, for example. Its safety integrity levels range from SIL 1, the lowest, to SIL 4, the highest. To give a sense of how demanding these certifications are, a system certified at SIL 3 must have a probability of dangerous failure below 1 in 10 million per hour of operation. Achieving such a low risk of failure is non-trivial, to say the least. In fact, it is well-nigh impossible to satisfy these functional safety requirements unless they are baked into the very design of the product. This design-for-safety strategy is reflected in the increases in both developer head count and activity duration shown in Table 1. Demonstrating compliance with these requirements to an independent auditing firm adds a whole realm of challenges of its own, which can easily stretch the project duration by more than 100%.
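To put the SIL 3 figure in perspective, the dangerous-failure bound can be converted into more familiar units. The sketch below uses the IEC 61508 high-demand/continuous-mode PFH bands; the arithmetic is straightforward, but treat the framing as illustrative rather than normative.

```python
# Illustrative arithmetic for IEC 61508 continuous-mode safety
# integrity levels. PFH = probability of a dangerous failure per
# hour of operation; each SIL spans one order of magnitude.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

# Upper PFH bounds per SIL (high-demand/continuous mode)
SIL_PFH_UPPER = {1: 1e-5, 2: 1e-6, 3: 1e-7, 4: 1e-8}

def mean_years_between_dangerous_failures(pfh: float) -> float:
    """Average years of continuous operation per dangerous failure."""
    return 1.0 / (pfh * HOURS_PER_YEAR)

# SIL 3: fewer than 1 dangerous failure in 10 million operating hours,
# i.e. a single unit averaging over a millennium between failures.
sil3 = mean_years_between_dangerous_failures(SIL_PFH_UPPER[3])
print(f"SIL 3 bound: one dangerous failure per {sil3:,.0f} years")
```

Seen this way, the "1 in 10 million per hour" requirement demands more than a thousand years of mean continuous operation per dangerous failure, which is why bolt-on safety measures cannot reach it.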

So how do today’s system vendors meet the challenge of increasing regulatory pressures for safety standard compliance in the face of increasingly compressed time-to-market windows?

Three Aspects of An Effective Solution

MEN Micro Inc’s Modular Train Control System (MTCS) is a perfect demonstration of the three aspects of a highly effective solution to this problem. First, embed functional safety concepts throughout the entire design lifecycle.  Second, leverage pre-certified components wherever possible. Last but not least, use modularity to control project scope. MEN Micro was able to deliver this sophisticated product with proven pedigree in functional safety in a timely fashion. In turn, the MTCS product is itself a pre-certified component, a critical ingredient of an effective solution for all railway system builders.

MTCS is a platform designed for safety-critical train applications like train control, automatic train operation (ATO) and automatic train protection (ATP), with certification requirements up to EN 50128 SIL 4. Available in an EN 50128 SIL 4 pre-certified configuration, the MTCS provides safe control of single functions as well as complete train control. By changing its configurable setup, the MTCS can control anything in the train that requires functional safety, under SIL 4, SIL 3 or SIL 2 requirements. MTCS is developed according to the EN 50128 and EN 50129 standards. As a seasoned supplier of safety-critical systems, MEN adheres to an internal culture that gives the utmost importance to safety. For a less experienced team, getting external help to build such a safety culture is definitely a worthwhile investment.

Using pre-certified components lowers overall risk to system manufacturers through proven and reliable technologies. One of the most vital components in complex hardware/software platforms is the real-time operating system. A pre-certified operating system (OS) offers a high, independently validated level of reliability and risk reduction for safety-critical systems; it would be difficult to imagine a certified industrial control application without one. This adds a dimension to the build-or-buy decision for system manufacturers. Some companies have legacy home-grown components, including operating systems. In most cases, the cost of certifying these home-grown components will outweigh the price tag of a pre-certified solution, simply due to economies of scale.

Hardware is a different story. Pre-certified hardware is difficult to find, and hardware certification is a frequent question from system manufacturers. By including its custom-designed hardware in the scope of certification, MEN has effectively solved this problem for its customers. The QNX Neutrino real-time operating system (RTOS) is certified to IEC 61508 Safety Integrity Level 3 (SIL 3) and offers a very high level of reliability and risk reduction for safety-critical systems; it plays a major role in building secure, survivable embedded systems. By adopting QNX's pre-certified RTOS, MEN shortened the project by approximately two years, reduced project cost by about $2 million and eliminated certification risk at the OS level.

To control project scope, which translates to project cost, modularity is the key word. At the heart of the MTCS lies the F75P, the central computing element of onboard applications like train management control systems or train protection systems. The F75P is a COTS safe computer with onboard functional safety that unites three CPUs on one 3U CompactPCI® PlusIO card. Two independent control processors (CPs), each with its own DDR2 RAM and Flash, together with a supervision structure, provide safety: the board becomes a fail-silent subsystem, certifiable up to SIL 4. An I/O processor completes the board with I/O connectivity. With its clear separation of safe and non-safe subsystems, the F75P can replace CPU-redundant multiprocessing systems and their I/O with a small-footprint, low-power solution flexible enough for different application scenarios. The communication protocol of the third CPU was developed in accordance with EN 50159, which targets safety-relevant communication in transmission systems. This ensures safe communication across the area known as the black channel, located between the control unit and the I/O, for comprehensive, safe communication throughout the system (Figure 2).

Figure 2: Software and hardware elements work together in this modular design.


The modularity concept is also reflected in MEN Micro's choice of real-time operating system. The QNX Neutrino RTOS is based on a microkernel architecture that enforces strong boundaries between software processes to prevent any process from affecting the performance and behavior of other processes. Processes can damage one another intentionally (via malware) or unintentionally (via bugs); the QNX Neutrino RTOS provides mechanisms to prevent such damage and keep the system in a healthy state. Furthermore, the adaptive partitioning technology in the QNX Neutrino RTOS provides the same level of separation for CPU bandwidth. A modular design at the system level is only possible if it is built on a platform that supports it.


The Modular Train Control System offers a pre-integrated, ready-to-install platform that combines an operating system from QNX Software Systems, chosen for reliability and easier programming of safety-critical applications, with the F75P, a compelling combination for addressing regulatory pressure and cost-effectiveness challenges.

In addition to its pre-certification credentials, MTCS offers a high level of flexibility for system integrators, resulting in significant cost and time savings during computerization of the train. The combined solution allows users to quickly create new solutions that take advantage of the latest industrial safety, processing speed and real-time automation technology while letting them reuse or adapt existing automation algorithms.

So, for Terry, many of his concerns can be addressed by selecting a reliable, pre-certified component such as the MTCS. This approach largely removes the unknowns from project planning, in both time and budget. Adopting a pre-certified COTS system also represents the most effective solution for many companies, freeing up internal resources to focus on true competitive differentiators. Last but not least, choosing a supplier with good knowledge of functional safety and certification often provides the shortest path to that knowledge in its most relevant form.

Since 1992, Barbara Schmitz has served as Chief Marketing Officer of MEN Mikro Elektronik. Her tasks include public relations and product positioning, as well as development and coordination of global sales channels.

Schmitz graduated from the University of Erlangen-Nurnberg. Later, she studied business economics in a correspondence course at the Bad Harzburg business school and followed it with an apprenticeship in Marketing and Communications in Nuremberg.

Transportation’s ePaper Revolution

Monday, May 23rd, 2016

Light, rugged, and with three times the life expectancy of an LCD display, ePaper offers the transportation industry a revolutionary new way to communicate.

The promise of digital signage, specifically signs that are dynamic and constantly changing, was that accurate, up-to-date information would help travelers plan ahead of, or adjust during, a journey. What we see today is far from that reality. Most digital signage now in place in transportation environments consists primarily of advertising-based messages that may include some place-based transit network information.

Power Hungry
While there are more advanced installations scattered around the country, the majority of paper and static signage has not yet transitioned to digital. The reason is simple: the initial investment and ongoing maintenance of traditional digital signage are expensive, requiring ad revenue to pay for them. If the signage is small, such as a train or bus schedule, advertising is not an option, and the signage remains static.

Traditional digital signage also has another significant requirement: power, and lots of it. LCD-based or even LED-based signs must be connected to a power source. The Total Cost of Ownership (TCO) of these signs can be high, especially when you consider a typical backlight-limited life of 40,000-50,000 hours, which translates to five to six years of useful service. The relatively few moving parts do keep service and maintenance costs down.

Lower Power and Longer Life
Alternatively, ePaper promises to change these critical dynamics as it relates to replacing static signage for transportation.

Electronic Paper, or ePaper, as it is commonly called, is best known by the consumer products that use it, namely the Amazon Kindle. While technically a display technology, you might say that ePaper has more in common with an Etch-a-Sketch than it does with other display technologies, such as LCD.

ePaper uses actual ink pigment suspended in millions of micro-capsules (Figure 1). These capsules are controlled by positive and negative charges from an electronic backplane, not unlike the backplane that controls the liquid crystals in an LCD. However, the display needs only a small amount of power to change the image on the screen. When power is removed, the display retains the last image uploaded to it, which makes ePaper a bi-stable technology. The display is therefore extremely low power and can easily run off a battery or solar panel, or through Power over Ethernet (PoE). This makes it an ideal technology for remote locations with no existing power.
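The power implication of bi-stability is easy to quantify. The sketch below compares the annual energy of a continuously backlit LCD sign against an ePaper sign that draws power only while updating; every wattage and update count here is an assumed illustrative figure, not a vendor specification.

```python
# Back-of-envelope annual energy: backlit LCD sign vs. bi-stable
# ePaper sign. All input figures are assumptions for illustration.

HOURS_PER_YEAR = 24 * 365

lcd_power_w = 60.0              # assumed steady draw of a backlit LCD sign
epaper_joules_per_update = 5.0  # assumed energy for one full-screen refresh
updates_per_day = 200           # assumed schedule/arrival changes per day

lcd_kwh = lcd_power_w * HOURS_PER_YEAR / 1000.0
epaper_kwh = epaper_joules_per_update * updates_per_day * 365 / 3.6e6

# Between updates the ePaper pigment holds its image at zero draw,
# which is what makes battery, solar, or PoE supplies practical.
print(f"LCD: {lcd_kwh:.1f} kWh/yr, ePaper: {epaper_kwh:.2f} kWh/yr")
```

Even with generous assumptions for the ePaper side, the always-on backlight dominates by several orders of magnitude, which is the crux of the TCO argument above.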

Figure 1: Positive and negative charges rule ePaper micro-capsules.


The ePaper display does not have a backlight and requires ambient light or a light built into the sides of the display to be seen in low-light conditions (Figure 2).

But the result is nothing short of astonishing. If you have used a Kindle, you know the display is as close as one can get to the experience of reading paper, both in pixel resolution and in comfort on the eyes. In high ambient light, the high-contrast display is easier to read than an LCD; in low light, adding a front light or spotlight makes the text or images read just like paper.

This is a game-changer for digital signage, especially in transportation, because it provides a light and rugged sign that has at least three times the life expectancy of an LCD display, is easily readable, and requires no power to keep displaying the text and images. And it can be changed as needed. Another advantage of ePaper is that in a situation involving loss of power, the content on the display can be changed to an emergency message through the use of a small capacitive charge.

In addition, while the larger ePaper panels are currently available in 13.3-inch and 32-inch diagonal sizes, they can easily be matrixed together with hardly a seam in evidence. This means very large ePaper signs are easy to create, and since they are so thin and light, installation requires a minimum of labor.

Figure 2: An ePaper display.


The Technology’s Limitations

First, ePaper is nothing like an LCD display: it does not 'refresh' in the classical digital signage sense, does not play video, and may take two to three seconds to change to a new image. To ensure that it will work with your application, work with a company that has developed the driving electronics. The easiest ways to connect an ePaper display are through wireless or PoE connections.

Second, while a 32″ color ePaper sign is available, it displays around 4,100 colors: more than enough for most static information signs, but not on par with LCD for advertising. The highest-resolution ePaper displays, which exceed UHD, are black and white. High-quality cover overlays are available, however, for logos or other static messages.

Third, without a backlight, the display needs to have some artificial light in low ambient light conditions. The color temperature of any external light needs to be matched to the ePaper display. It is important to work with an ePaper display manufacturer who can also provide a front light for large screen displays if external lighting is not available.

Fourth, the initial cost. An ePaper sign will initially cost more than an equivalent indoor LCD; it is more comparable in cost to a rugged outdoor-rated LCD. However, considering the TCO, the low power consumption over its life, and the labor saved by not changing static signs by hand, the ePaper sign more than pays for itself.

You can overcome most of these limitations by working with a company that can protect the ePaper substrate from humidity, dust and the elements, especially if used outdoors.

ePaper Applications

The applications in transportation for ePaper signs are endless. Just imagine all the places that paper is used today. From schedules on the platforms or remote bus shelters (Figure 3) to intricate maps for way-finding to emergency or network messages indicating when trains or buses are running, ePaper fits the requirements more easily than LCD or LED.

Figure 3: No running of power lines out to remote locations is necessary when the transportation signage relies on ePaper.


ePaper signage will certainly grow in the future as integrators learn to incorporate the format in more and more locations, providing transit authorities and, more importantly, their customers with ever-changing digital signs. Working with an experienced ePaper display manufacturer can help you navigate the limitations and take full advantage of the technology's many features.

Robert Heise is Executive Vice-President for Global Display Solutions (GDS), a member of the Digital Signage Federation, the only independent, not-for-profit trade organization serving the digital signage industry. The DSF supports and promotes the common business interests of the worldwide digital signage, interactive technologies and digital out-of-home network industries.

GDS is a display technology company that manufactures displays for indoor and outdoor environments. GDS has partnered with E Ink Technology, a leading ePaper technology supplier, to develop large-screen digital signage for indoor and outdoor applications. Robert has been involved in the display electronics industry for over 30 years and has written about the technologies that have emerged in that time across a wide breadth of industries, including medical, industrial, financial and numerous digital signage segments. A technologist at heart, Robert enjoys talking about strategies for overcoming technical challenges. He also runs the 1,200-member Display Monitor Professionals group on LinkedIn.

Crashing the IoT Party?

Thursday, January 7th, 2016

As the IoT grows, transportation, automotive, medical, smart grid and other sectors have become more dependent on embedded software, but is embedded software up to shouldering the burden?

Automotive, train and aircraft transportation are becoming more dependent on embedded software, and thus on embedded software safety and security. So too are the smart grid, industrial control and automation, and access control. A typical automobile will have 50 microprocessors in it; a high-end automobile, perhaps 100. Instead of worrying about bumpers and fuel tanks, the concern is now, "Can the vehicle next to me tamper with my engine control unit while I'm driving?"

Why Embedded Software is Complex

Embedded software tends to be even more complex than typical “desktop” software because it deals with resource-constrained hardware and real-time deadlines that must be met to avoid malfunction in a large variety of environments and by a variety of users. Also, many kinds of people — electrical engineers, computer scientists and sometimes even mechanical engineers or technicians—write embedded software. Some of these people have learned “on the job” and therefore might not apply the appropriate level of engineering rigor that is required for safety-critical software.

Lastly, as teams become larger and more distributed, and schedules and budgets are squeezed, quality becomes more difficult to control. And so we continue to hear news reports about big product recalls. Recent cases include an airbag subcontractor to Japanese car manufacturers and the crash of an Airbus military transporter, most likely caused by a software bug in the engine control.

There are best practices, tons of software tools, proven processes and even certifications that promise to make sure software does what it should do and nothing else. Yet still many opportunities exist for bugs to be introduced. For instance, if the code is maintained by a new developer who isn’t familiar with the code or who hasn’t been trained on the company’s internal development processes and tools, bugs can slip in. Some bugs are introduced when hardware changes, even though the software is unchanged. And some kinds of complex problems, such as race conditions and hardware glitches, are exceptionally difficult to discover without very comprehensive software and systems testing.

Figure 1: Embedded software operates in an environment that includes resource-constrained hardware and the demands of real-time deadlines.

Security and Safety: Correlated but Not the Same

There is a correlation between safe software and secure software, but they are not the same concerns. For example, developers of a safety-critical system will apply techniques such as Failure Mode and Effects Analysis (FMEA) and Fault Tree Analysis (FTA) to identify areas of safety concerns, in order to ensure that the product not only operates safely, but fails safely as well. A security engineer has different concerns and will apply techniques such as Threat Modeling to identify security concerns and determine what countermeasures are appropriate.

A system can be relatively safe but insecure, or unsafe but relatively secure, but the best systems are designed to be both. For example, an embedded software defect such as a buffer overflow or undefined behavior can result in both safety hazards and security vulnerabilities. It is important that developers try to prevent or eliminate these types of problems before addressing more specific security or safety concerns.

Connection and Exposure

When we consider the safety and security traits of embedded software, safety seems to be more mature and well understood compared to security concerns. Thankfully, we don’t frequently hear of planes falling out of the sky, missiles launching themselves, or medical devices killing patients. But every week, it seems, there is a router running embedded Linux being hacked, a smartphone establishing insecure Internet connections, or patient/customer data being exposed by some small Internet-connected (“IoT”) device.

Many areas and industries are exposed, particularly from a security perspective. Devices in many industries are being connected to the Internet, often with no consideration of security. Medical devices, many of which are life-critical, contain private patient data, can receive software upgrades, contain sensitive and very highly guarded intellectual property (IP), and may even contain many “locked” upgrade features that are a “simple hack away” from being unlocked for free. Just think of what an attacker could do to an insecure device.

One of our biggest concerns at Barr Group is products, often low-cost ones, that are being “Internet enabled” without any concern for security. Does that refrigerator, smoke detector, or thermostat really need to be connected to the Internet?  Sure, the convenience and “coolness” are nice, but they come at what cost? These are things that need to be considered in our quest for connectivity.

Obstacles to Software Quality

The three biggest obstacles to software quality are cost, schedule, and expertise. Cost and schedule pressures aren’t going away, so it’s our goal—really our mission—to improve software quality by working with companies to improve their skills, their processes, and their tools. We do this through a mixture of consulting and training. Our embedded software classes teach very specific skills, best practices, and processes that are focused on preventing defects and improving quality. The later a defect is discovered, the more expensive it is to fix (the worst case being a product recall), so we focus on techniques and processes that keep bugs out of the embedded software from the outset. As an example, we strongly recommend the use of an effective coding standard, code reviews, and static analysis tools. The net effect is embedded software that ships on schedule and on budget.

Dan Smith is a principal engineer at Barr Group. He has over two decades of hands-on embedded software development experience in C and C++. His firmware lies at the heart of products including consumer electronics, telecom/datacom, industrial controls, and medical devices. Dan earned his BSEE at Princeton and is an expert in embedded systems security.

Sub-meter Location Tracking and Why It Matters to Automotive and Transportation

Friday, May 15th, 2015

Creative and practical applications for wireless distance and location measurement are proliferating.

Wireless technology for location and distance measurement isn't usually something considered in connection with automobiles. After all, the distances within an automobile are small and do not generally have to be measured carefully. But recent months have shown that precise wireless distance and location measurement has much to offer the automobile, from security to autonomy.

Figure 1. Ultra-WideBand (UWB) technology could have a role to play in making sure that a vehicle meant to stay within a certain distance of a “supervisor” keeps within that distance. Courtesy Wikimedia Commons.

As I discussed in an earlier article, UWB, or Ultra-Wideband, is a radio technology for wireless communication that enables distance to be measured very precisely. That article discussed the use of UWB radio technology to improve automobile security by ensuring that the car owner was indeed standing next to the vehicle when unlocking the door. In the months since its appearance the number of automotive innovations that can use and benefit from wireless distance and location measurement has multiplied.

Most common wireless technologies, such as Wi-Fi and Bluetooth, can measure the distance between two radio units, such as a car and its key fob, but with errors of up to eight meters. An eight-meter error might not bother you if you're calculating which coffee shop a mall shopper is in, but for guaranteeing that a car owner is right next to a car, it is far too much. The reason is that narrowband radios such as Wi-Fi and Bluetooth suffer heavily from interference, reflections, refractions and other radio artifacts, all of which cause inaccuracies. UWB is designed to overcome these problems and delivers distance measurements accurate to within 10cm, accurate enough for the most demanding automotive applications.

Keeping Cars Under Close Supervision
The ongoing race toward self-driving cars is one reason distance and location measurement matters in automobiles. While a lot of media attention has focused on the flashy, fully autonomous vehicles being developed, a considerable amount of quieter R&D has gone into simpler versions of the idea. One example is the self-parking car, which cannot drive itself on busy roads but can park itself in a parking lot or at the curb. Another is a vehicle that can cruise slowly down a street, keeping its position right next to a mail or newspaper delivery person walking alongside the road (Figure 1).

In both of these cases, even while the technologies are being tested in R&D, lawmakers are moving to limit the scope of their use in order to ensure safety. Under most laws under consideration, autonomous or semi-autonomous cars will need to ensure that they move only when supervised by a person, which generally means having a person in the car.

Suppose a self-parking or self-cruising car must guarantee that a person with a controller is, say, within two meters while the car is operating. The system would have to measure that distance precisely, to be sure the car is not driving away from the user, or about to run over the user, because a radio glitch makes the person appear closer to or farther from the car than they really are. With the precise distance measurement provided by UWB technology, the car's proximity to the person can be guaranteed.
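The supervision rule described above amounts to a worst-case distance check: the car may move only if the measured distance plus the measurement's error bound stays inside the permitted radius. A minimal sketch, with the 2 m radius and the per-technology error figures taken as assumed examples rather than regulatory values:

```python
# Worst-case proximity check for a supervised vehicle. The supervision
# radius and the error bounds below are illustrative assumptions.

SUPERVISION_RADIUS_M = 2.0

ERROR_BOUND_M = {
    "uwb": 0.10,       # ~10 cm, per the accuracy discussed above
    "bluetooth": 8.0,  # narrowband worst case
}

def may_operate(measured_distance_m: float, technology: str) -> bool:
    """Allow motion only if the user is provably inside the radius."""
    worst_case = measured_distance_m + ERROR_BOUND_M[technology]
    return worst_case <= SUPERVISION_RADIUS_M

# A user measured at 1.8 m passes with UWB but can never pass with
# Bluetooth, whose error bound alone exceeds the supervision radius.
print(may_operate(1.8, "uwb"), may_operate(1.8, "bluetooth"))
```

The point of the worst-case sum is that the check fails safe: the car stays put unless the radio can prove the supervisor is inside the radius.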

The Fleet Management Challenge
As a second example of automotive applications of location measurement technology, consider fleet management, such as bus depots or car rental lots. The manager of such a lot needs not only to find each vehicle when it is wanted, but also to manage the workflow of the people assigned to clean, maintain and prepare the vehicles for use. A precise map of where the vehicles are in the lot makes this much more efficient. Because industrial lots are indoors or covered, GPS will not work, and in an industrial setting several meters of location error can mean confusing cars in the cleaning process with cars waiting for repairs. Not only does this hurt worker performance, as staff spend more time finding the car or bus they are looking for, it also makes it impossible to implement the advanced analytics systems that are key to the aggressive operational efficiency goals required in these industries.

UWB, thanks to its accuracy, now makes it possible to track the positions of the cars and buses in the lot, as well as the locations of staff and tools as they move around it.

Figure 2. The management of large numbers of vehicles at bus depots, for example, is an application for location measurement technology. Courtesy Wikimedia Commons.

UWB can be used to track each vehicle’s location precisely within such an indoor parking structure. Locator units installed every 50-100 meters along the walls can use the same UWB radio mentioned above to track the locations of vehicles within the lot. This is done using a process called “trilateration,” basically measuring the distance that each vehicle is from three or more locator units, and calculating the location in the lot that is the required distance from the locators. Other technologies, such as Bluetooth, can do this as well, but with inaccuracies of five or more meters, which can lead to confusion between vehicles parked near each other.
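The trilateration step described above can be sketched in a few lines. Given ranges to three non-collinear locator units, subtracting one circle equation from the other two yields a linear system for the tag's position. The anchor coordinates and ranges below are made up for illustration:

```python
# Sketch of 2-D trilateration: locating a vehicle tag from its measured
# distances to three fixed locator units. Coordinates are illustrative.

def trilaterate(anchors, distances):
    """Solve for (x, y) given three anchor positions and the range to each.

    Subtracting the first circle equation from the other two gives a
    2x2 linear system, solved here with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances
    # Linearized equations: A * [x, y] = b
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("anchors are collinear; add another locator")
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Locators at three corners of a 50 m bay; ranges consistent with a
# vehicle parked at approximately (30, 40).
print(trilaterate([(0, 0), (50, 0), (0, 50)],
                  [50.0, 44.721359549995796, 31.622776601683793]))
```

With Bluetooth-class errors of five meters, the computed position could land on a neighboring parking spot; with UWB's ~10 cm ranging error, the result is unambiguous. Production systems typically use four or more locators and a least-squares fit to reject outlier ranges.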

Reducing Error to Less than a Meter
As a third example, consider the growing number of consumer devices that cannot be used by drivers while driving, but which can be used by passengers in a vehicle. Many GPS applications already ask device users to confirm that they are passengers and not drivers, but of course the device cannot be sure that the question is being answered truthfully. With precise location technology inside the car, a device can ensure that it is not used by the person sitting in the driver's seat. As in the examples above, this can only be done with a technology such as UWB that can measure location to within 10 cm rather than with errors of over a meter.

As a fourth example, many people have been frustrated when, using their phone near a running car but outside of it, their calls are routed to the car's Bluetooth-based hands-free unit. These hands-free units cannot distinguish between a phone user inside the car and one who steps out of the car for a few minutes to take calls privately. But UWB can track locations accurately enough to determine when the phone is taken out of the car.

All of these examples of automotive use of wireless location technology have a key element in common: they all require that the location measurement be highly accurate. Many smartphone-based location technologies, including those based on Wi-Fi and Bluetooth, are typically inaccurate by over five meters. But newer technologies are reducing the error to less than a meter. UWB is a key technology in this area, with errors of around 10 cm.

This is a trend with impact beyond the automotive industry. Both ABI Research and Grizzly Analytics have identified sub-meter-accuracy location technologies as a key focus in the future of location technology. UWB currently leads the market in sub-meter location tracking, with other technologies working to close the accuracy gap. Solutions now on the market make UWB available in a single-chip, low-power implementation.

We’re seeing next-generation automotive technologies, such as self-driving and self-parking cars, in-lot vehicle tracking, in-car driver/pedestrian differentiation, and more, go from research to development to market. As they do, component technologies such as UWB can enable the precise location tracking that is needed to ensure safety and compliance. On the horizon are additional exciting automotive and transportation developments that can be supported once location can be tracked accurately.

Mickael Viot is Marketing Manager at DecaWave, a fabless semiconductor company headquartered in Dublin, Ireland.

How Not to Strand Rail Passengers and other Safety-Critical System Development Considerations

Tuesday, January 6th, 2015

Shedding light on which tasks to keep in-house for your safety-critical systems development team.

Real-time adjustments for speed, route and passenger comfort are among the benefits as the manual and mechanical subsystems on trains and trams give way to electronic control and monitoring systems. As more electronic systems come into play, it becomes necessary to do whatever is possible to assure correct operation of these advanced systems. This very point is exemplified by the Altona Railway system malfunction in 1995.

Figure 1. London Underground

“The German Railway attempted to replace its long established railway switch tower at Hamburg-Altona station by a fully computerized system made by Siemens. Immediately after starting the new systems, the central computer failed. Siemens’ experts could not find the cause for hours, so German Railway decided to close the whole station, forcing thousands of passengers to start from locations up to 25 kilometers away. The search for the fault was difficult as it was very rare. Two full days later, experts detected that under certain conditions a stack overflow happened. When looking into the routine which should handle stack overflows, they found that this went into a deadloop due to a programming error. In a press conference, the responsible Siemens manager argued that the “hidden” faults were difficult to find, and that Siemens experts had assumed that the routine handling stack overflow would NEVER be used!”

— As described by Klaus Brunnstein (Univ. Hamburg, 17 March 1995)

Software used in safety-critical systems is, of course, a key element in the correctness of the system’s operation. Most commonly, this software consists of an application running on top of an operating system.

The team that developed the system involved in the Hamburg-Altona incident described above would have been well served by a commercial real-time operating system (RTOS). Had the worst-case maximum stack size needed by the application been determined analytically, the system failure might have been averted. Tools such as Express Logic's StackX can determine maximum stack-size requirements through analysis of an .elf executable file.
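The idea behind such analysis can be sketched simply: given each function's stack-frame size and the call graph, the deepest possible call chain bounds the stack a thread can ever need. (StackX performs this on the compiled .elf; the function names and frame sizes below are invented for illustration, and real tools must also handle function pointers and interrupt nesting.)

```python
# Sketch of worst-case stack analysis over a call graph.
# Frame sizes and call relationships are illustrative assumptions.

def worst_case_stack(frame_sizes, call_graph, entry):
    """Maximum stack usage in bytes from `entry`, assuming no recursion."""
    def depth(fn, visiting):
        if fn in visiting:
            # A cycle means recursion: no static bound exists.
            raise ValueError(f"recursion via {fn}: static bound impossible")
        callees = call_graph.get(fn, [])
        deepest = max((depth(c, visiting | {fn}) for c in callees), default=0)
        return frame_sizes[fn] + deepest

    return depth(entry, frozenset())

frames = {"main": 64, "parse": 128, "log": 32, "handle_overflow": 96}
calls = {"main": ["parse", "log"], "parse": ["handle_overflow"]}
print(worst_case_stack(frames, calls, "main"))  # 64 + 128 + 96 = 288 bytes
```

Had the Hamburg-Altona system's stack been sized from such a bound rather than by assumption, the overflow path need never have been reached.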

International Cooperation
Because it’s critical that electronic systems operate safely and because of incidents such as the one in the Hamburg-Altona station, various government-sponsored agencies and independent technical standards organizations define regulations for safety-critical systems. The International Electrotechnical Commission (IEC), a worldwide organization for standardization, promotes international cooperation on all questions concerning standardization in the electrical and electronic fields. IEC-61508, the international standard for electrical, electronic and programmable electronic safety-related systems, sets safety integrity level requirements for system design, implementation, operation and maintenance.

In Europe, CENELEC—the European Committee for Electrotechnical Standardization—governs European railway standards, and these standards are beginning to make their way into the North American railway and public transport market as well. The CENELEC standards EN 50126, EN 50128 and EN 50129 are typically applied to define appropriate safety analysis for such systems:

  • EN 50126 deals with Reliability, Availability, Maintainability and Safety for the entire railway system.
  • EN 50129 applies to safety-related electronic control and protection systems.
  • EN 50128 applies to safety-related software for railway control and protection systems.

IEC standards are applied at various safety integrity levels, or SILs, representing varying degrees of criticality based on the system's use. IEC 61508 specifies the tolerable probability of failure at each level, with the most critical aspects of the system (i.e., SIL 4) having the least tolerance for failure. As detailed in Table 1, the standard defines both a system's Probability of Failure on Demand (PFD) and Risk Reduction Factor (RRF) for each SIL. The PFD is the likelihood that the system will fail when asked to perform a particular operation for which it is designed; the RRF is the amount by which risk is reduced by implementing a system at the corresponding SIL.

Table 1. The Probability of Failure on Demand and the Risk Reduction Factor for the four Safety Integrity Levels.
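The relationship between PFD and RRF can be sketched in a few lines: the risk reduction factor is the reciprocal of the probability of failure on demand. The PFD bands below are the low-demand ranges commonly quoted for IEC 61508; this is a hedged sketch, not a reproduction of Table 1:

```python
# PFD/RRF arithmetic for the four SILs. The bands are the widely quoted
# IEC 61508 low-demand PFD ranges; treat them as illustrative here.

SIL_PFD_BANDS = {  # SIL: (lower bound, upper bound) of average PFD
    1: (1e-2, 1e-1),
    2: (1e-3, 1e-2),
    3: (1e-4, 1e-3),
    4: (1e-5, 1e-4),
}

def risk_reduction_factor(pfd):
    """RRF is simply the reciprocal of the PFD."""
    return 1.0 / pfd

for sil, (lo, hi) in sorted(SIL_PFD_BANDS.items(), reverse=True):
    print(f"SIL {sil}: PFD {lo:g}-{hi:g}, "
          f"RRF {risk_reduction_factor(hi):,.0f}-{risk_reduction_factor(lo):,.0f}")
```

The reciprocal relationship explains the standard's asymmetry: moving from SIL 3 to SIL 4 demands a tenfold reduction in the tolerated failure probability.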

Common Regulatory Elements
All international safety-critical software standards incorporate common elements that apply to all software systems, regardless of their end application. While the different standards have their own particular phraseology and individual features, they all generally require that software is developed according to a well-documented plan and that its operation is consistent with the plan.

In particular, safety-critical software must demonstrate through rigorous testing and documentation that it is well designed and operates safely.


The process through which the software was designed, developed and tested must be fully described and shown to be consistent. This is broken down into several subcategories:

  • Planning — the objectives of the system are stated, along with the plan for achieving and verifying that these objectives have been achieved.
  • Design — the system design is specified, including hardware and software, with theory of operation and other aspects of design that enable examiners to understand how the system intends to achieve its objectives.
  • Development — the development process, including the tools used, code reviews, test plan, documentation and staff training.
  • Requirements — the functional requirements of the system are explicitly identified and correlated with the system capabilities.
  • Verification — the process of assuring that the system performs in accordance with the specifications and that it achieves its objectives.
  • Configuration management — control of incremental revisions over time, enabling reproducibility of results and protecting against the introduction of faults that cannot be backed out.
  • Quality assurance — processes and procedures that assure that the system has been developed and produced in accordance with its goals, and that it delivers the capabilities it is intended to provide.


Code refers to the source code that the developers or development tools produce. It includes all system and application source code, test code, scripts and object code. This code is to be reviewed as part of the regulatory compliance process, and it must agree with the actual code used in the system.


Test includes the specific tests performed to verify the correct operation of the code, as well as its ability to achieve all design goals and system requirements. Testing includes code coverage and analysis to ensure that all program instructions are tested. Finally, unit/white-box, integration/black-box and final acceptance testing generally are included.


Results consist of complete results of all tests compiled into a unit and an integration test report.

Rising RTOS Use Poses Challenges

Manufacturers of rail transportation systems have development teams that are fully capable of generating the documentation required to comply with these safety-critical standards, as required by applicable regulations, and have done so for years.

However, there are some aspects of this work that developers would like to avoid or that pose challenges. As safety-critical systems evolve in complexity and make use of more powerful microprocessors, these systems increasingly employ commercial RTOS technology. The RTOS controls and manages the application software to maximize system resources for a given processor.

Generally, such applications involve multiple application tasks, or threads, and a priority-based real-time scheduling OS with interrupt capability for real-time responsiveness. A commercial RTOS provides these functions with an easy-to-use application programming interface (API) that can save developers substantial time in product development. A commercial RTOS also enables developers to use other commercial middleware components such as network stacks, graphics, USB communication and more.
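The scheduling rule at the heart of such an RTOS can be sketched as follows. The thread names and the convention that a lower number means higher priority are illustrative (many RTOSes, though not all, use that convention); this is not any particular vendor's API:

```python
# Sketch of priority-based preemptive scheduling: of all ready threads,
# the highest-priority one runs. Names and priorities are illustrative;
# here a LOWER number means HIGHER priority.

def next_thread(ready_threads):
    """ready_threads: list of (name, priority); returns the name to run."""
    return min(ready_threads, key=lambda t: t[1])[0]

ready = [("logger", 8), ("ui", 5)]
print(next_thread(ready))  # ui

# An interrupt readies a higher-priority thread; the scheduler decision
# made on return from the interrupt preempts the current thread at once.
ready.append(("brake_control", 1))
print(next_thread(ready))  # brake_control
```

A commercial RTOS wraps this decision in a constant-time scheduler plus the services (queues, semaphores, timers) that the surrounding middleware relies on, which is precisely what makes its correctness so important to document.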

Using a pre-qualified commercial RTOS in a safety-critical system eases the challenges facing the developer seeking to comply with the regulatory requirements that call for documentation and testing of software developed by an independent entity. It’s difficult enough to fully document regulatory compliance for the application code that a company’s development team creates. And it is that much more difficult to do the same for RTOS code that an organization outside the company develops. In addition, the time required to generate and organize all the RTOS-related documentation can take a hefty bite out of the project schedule. The challenge of documentation and the time involved become even more difficult if the commercial RTOS isn’t delivered with full source code.

Tossing the Baby Out with the Bathwater?

These concerns cause some development teams to use an in-house-developed OS instead of a commercial product, which certainly makes the documentation task less demanding. But this choice sacrifices the advantages of the commercial RTOS and requires additional time and resources to develop, maintain and document the in-house OS, so the developer is left in a quandary.

If the RTOS is delivered with full source code and is manageably small, then it might be feasible for the development team to complete the regulatory tasks internally. To accomplish this, however, the team would still need a fair amount of time to understand the outside code well enough to document and test it to the extent regulatory review demands. This effort—not to mention the risk of error and failure—makes this approach, which adds time and overall development cost, less than satisfactory.

A different method is to use a product from an RTOS supplier that is geared to meet regulatory compliance demands, pre-qualified for compliance with the appropriate regulatory standards, and provided with all of the necessary documentation. Developers then can keep their attention focused on their own application, which they know well, while the RTOS company provides all documentation to meet regulatory compliance for the RTOS.

Express Logic’s Certification Pack™, for example, allows developers to avoid the dilemma of in-house versus commercial RTOS. Certification Pack consists of all the RTOS design, code, test and results documentation that each standard requires, fully prepared and guaranteed to be accepted by the governing agency. Using such a turnkey product also eliminates the risk of error and failure through the guarantee of successful regulatory approval. Table 2 shows the contents of a typical Certification Pack for IEC-61508.

Table 2. Contents of a typical IEC-61508 Certification Pack.

Delivered 100% complete, the Rail and Transportation Certification Packs are ready to submit for IEC-62279, EN-50128, IEC-61508 and 49CFR236 Subpart H certification. The Packs are TÜV and CENELEC approved and have been proven for certification of devices up to and including SIL 3/4. Rail and Transportation Validation Suites contain everything needed to comply with IEC-62279, EN-50128, IEC-61508 and 49CFR236 Subpart H and comprise designs, source code, test code, test results, trace matrices and all related documentation for certification up through SIL 3/4.


Developers should balance efficient use of in-house personnel for a certification effort against the cost savings of using a commercial package prepared for the RTOS. Commercial solutions for the RTOS should be 100% turnkey, and are available for virtually all safety-critical software standards in the areas of medical, avionics, industrial and transportation. Developers who wish to tackle part of the effort in-house can do so as well; for them, a partial set of material is provided, and the in-house team might, for example, opt to perform target testing internally.

As we move forward as a technology-rich society, we can expect to see more safety-critical requirements intended to reduce the number of system failures that lead to injury and loss of life. Turnkey regulatory compliance solutions will likely become the norm, and they will help greatly in the development of quality, well-proven embedded software.

John A. Carbone, vice president of marketing for Express Logic, has over 40 years’ experience in real-time computer systems and software, ranging from embedded system developer and FAE to vice president of sales and marketing. Mr. Carbone has a BS degree in mathematics from Boston College.

ISO 26262—Software Compliance in a Nutshell

Monday, September 29th, 2014

What the standards for such transportation sectors as rail and aerospace have meant for ISO 26262 and automotive safety, and how requirements traceability, automated test and other techniques can help developers generate reliable code that complies with the standard.

ISO 26262 is a relative newcomer as far as safety standards are concerned, and so a sideways glance at the activities of other transportation industries can be useful. Not surprisingly, there are marked differences in the quality of software across the transportation sectors. The automotive industry does a good job of listing all requirements in a database and was the originator of the MISRA software coding standards. The aerospace, railway and process industries, however, have long had standards governing the entire development cycle of electrical, electronic and programmable electronic systems, including the need to track all requirements—a core consideration of ISO 26262.

This article presents a pragmatic examination of some key aspects of the ISO 26262 standard from the embedded software perspective. It argues that the vagaries of the language in the standard, e.g., “Recommended,” “Highly recommended,” “Appropriate compromise,” and “Minimize in balance with other design considerations,” are a positive aspect for the development organization, and discusses what can be drawn from the more mature standards culture prevalent in other sectors.

Figure 1. Standards developed from the generic IEC 61508 standard

A Common Ancestry

It is no coincidence that the contents of ISO 26262 are similar in nature to those used in other safety-critical transportation sectors. ISO 26262 is an adaptation of the IEC 61508 generic standard, which formed the foundation for other industry-specific standards. The rail industry’s CENELEC EN 50128 standard is one example (see Figure 1).

ISO 26262 also has much in common with the DO-178B standard seen in aerospace applications, particularly with respect to the requirement for modified condition/decision coverage (MC/DC) and the structural coverage analysis process.

A common concept in the standards applied in safety-critical sectors is the use of a tiered, risk-based approach for determining the criticality of each function within the system under development. Typically known as safety integrity levels (SILs), they’re defined very early in the process before any decision is made about the technology to be deployed. In other words, they are not hardware- or software-focused because at the time they are defined, no decision has been made regarding how they will be fulfilled.

Whatever the industrial sector, there are usually between three and five grades used to specify the necessary safety measures to avoid an unreasonable residual risk either of the system as a whole, or a system component. Different industries adopt different approaches to the derivation of appropriate SILs, however, drawing on established best practices in that field. For example, the medical devices sector defines safety classifications according to the level of harm a failure could cause to a patient, operator, or other person in a manner analogous to the FDA classifications of medical devices:

  • A—no possible injury or damage to health
  • B—possibility of non-serious injury or harm
  • C—possibility of serious injury, harm, or death

For the most part, however, the derivative standards from IEC 61508 are similar in that they first of all establish the processes (including risk management processes), activities and tasks required throughout the software lifecycle. They stipulate that this cycle does not end with product release, but continues through maintenance and problem resolution as long as the software is operational. Ultimately, regardless of how they specify the level of acceptable or unacceptable risk, standards like IEC 62304, ISO 26262 and others exist to help us ensure that a system or device whose failure could cause injury, harm or death does not fail. They provide guides and measures that we must use to demonstrate to ourselves and to regulatory agencies that our systems and devices are indeed safe.

Automotive Safety Integrity Levels (ASIL)

ISO 26262 recognizes that software safety and security must be addressed in a systematic way throughout the software development life cycle (SDLC). This includes the safety requirements traceability, software design, coding and verification processes used to ensure correctness, control and confidence both in the software and in the systems to which that software contributes.

A key element of ISO 26262 (Part 4) is the practice of allocating technical safety requirements in the system design and developing that design further to derive an item integration and testing plan, and subsequently the tests themselves. It implicitly includes software elements of the system, with the explicit subdivision of hardware and software development practices being dealt with further down the “V” model.

Like the medical device industry standard cited earlier, the ISO 26262 approach to the derivation of ASIL draws on the automotive industry’s own best practice of mathematically derived dependability data such as ‘Statistical Process Control’ and ‘Six Sigma’ availability. The ASIL is assigned based on the risk of a hazardous event occurring, taking account of the frequency of the situation, the impact of possible damage and the extent to which the situation can be controlled or managed.

Figure 2: ASILs are designed to reflect the probability of exposure, controllability and severity of failure of the system functions with respect to possible hazards.

This definition neatly addresses a criticism of unqualified dependability data in that (for example) 99.999% availability can imply very different things depending on the distribution of the failures.

For instance, a claim of five-nines dependability for a car’s braking system has very different implications if the 0.001% failure (5 minutes, 16 seconds per year) occurs all at once or is spread across 1 million distinct instances of 316 microseconds each (also 0.001% failure). One 5-minute, 16-second failure can lead to catastrophic results, while one million separate 316-microsecond failures may have no effect on the system’s dependability. In ASIL terms, that means the severity of the latter failure mode would be nil.
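The arithmetic behind this five-nines example is worth spelling out, since unqualified availability percentages are easy to misread:

```python
# 99.999% availability leaves 0.001% of a year of downtime,
# however that downtime happens to be distributed.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

unavailability = 1 - 0.99999                  # 0.001% as a fraction
downtime_s = unavailability * SECONDS_PER_YEAR
per_outage_s = downtime_s / 1_000_000         # spread over a million outages

print(f"total downtime: {downtime_s:.0f} s per year")        # ~316 s = 5 min 16 s
print(f"per outage: {per_outage_s * 1e6:.0f} microseconds")  # ~316 us each
```

The same headline figure thus describes both one catastrophic five-minute brake outage and a million imperceptible glitches, which is exactly why ASIL weighs severity, exposure and controllability rather than a bare availability number.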

Paying the Piper, Calling the Tune

It is clearly easier to develop a system or subsystem that is rated as ASIL A than ASIL D; hence, there is a commercial and pragmatic incentive to pitch your project as far down the scale as possible, and still be in a position to make the bold claim that it is “ISO 26262 compliant.”

A responsibility falls, then, on the independent functional safety assessment demanded in the standard. As observed by Messrs. Palin, Ward, Habli and Rivett1, however, “independent functional safety assessment is only a requirement for ASIL C and ASIL D, and the full level of independence (i.e. complete separation in terms of managerial, financial and release authority) is only a requirement for ASIL D.” In other words, the lower the ASIL, the less work there is to be done, and the less the independence required of the functional safety assessor.

It is also relevant here that the terminology of the standard provides flexibility to develop a reasonable and pragmatic approach to complying with the standard itself. Clearly, terms like “highly recommended” are a far cry from the possible alternatives “mandatory” or “compulsory,” and they benefit a professional organization seeking to use its experience and know-how to establish a certifiable standard of work in a timely and cost-conscious manner. These factors also generate a slightly uncomfortable feeling, however, that there are loopholes to be exploited by anyone more interested in cutting corners while still acquiring the same rubber stamp on an ultimately inferior product.

Perhaps it is the ever-increasing threat of litigation that is likely to break what looks like an unhappily compromised vicious circle. In these times of corporate responsibility, who would be willing to stand before a jury and argue the case for an artificially low ASIL rating associated with what has proved to be a lethal safety critical component?

Tracing All Requirements

So far in this process, only the safety requirements of the system have been considered because, as a “Functional Safety Standard,” ISO 26262 concerns itself only with those. It is easy to lose sight of the fact that functional safety is not the primary aim of the development exercise, bearing in mind that generally the safest vehicle is a stationary one. So the tracing of requirements is not only a cornerstone of the standard; it is also a key factor in producing a timely product that is fit for purpose.

Despite good intentions, many projects fall into a pattern of disjointed software development in which requirements, design, implementation, and testing artefacts are produced from isolated development phases. Such isolation results in tenuous links between that stage and/or the development team and the overall requirements traceability matrix (RTM). Unfortunately, these situations can just as easily occur on projects using state-of-the-art requirements management tools, modeling tools, IDEs and testing tools. Typically, this occurs because many requirement management tools use centralized, database-style architecture and application models. With these implementations, there is plenty of functionality to encourage good quality and good management in the requirements domain, yet little to aid the downstream effort where projects are designed, implemented and tested.

The traditional view of software development shows each phase flowing into the next, perhaps with feedback to earlier phases, and a surrounding framework of configuration management and process (e.g., Agile, RUP). Traceability is assumed to be part of the relationships between phases; however, the mechanism by which trace links are recorded is seldom stated. The reality is that, while each individual phase may be conducted efficiently thanks to investment in up-to-date tool technology, these tools are unlikely to contribute directly to the RTM. As a result, the RTM becomes increasingly poorly maintained over the duration of projects and is typically completed as a rush job. The net result is absent or superficial cross checking between requirements and implementation and consequent inadequacies in the resulting system.

In truth, the RTM sits at the heart of any project (see Figure 3). Whether or not the links are physically recorded and managed, they still exist. For example, a developer creates a link simply by reading a design specification and using that to drive the implementation.

Figure 3: RTM sits at the heart of the project, defining and describing the interaction between the design, code, test and verification stages of development.

This alternative view of the development landscape illustrates the importance that should be attached to the RTM. Given this fundamental centrality, it is vital that project managers place the same priority on investing in tooling for RTM construction as they do on the purchase of requirements management, version control, change management, modelling and testing tools. The RTM must also be represented explicitly in any lifecycle model to emphasize its importance (see Figure 4). With this elevated focus, the RTM is constructed and maintained efficiently and accurately as an integral part of the development process.
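At its core, an RTM is simply a set of links from each requirement to the downstream artefacts that realize it, and the value of maintaining it continuously is that gaps surface automatically instead of in a rush job at project end. A minimal sketch, with invented identifiers:

```python
# Sketch of a minimal requirements traceability matrix: each requirement
# is linked to the design, code and test artefacts that realize it.
# All identifiers below are illustrative.

rtm = {
    "REQ-001": {"design": ["DES-12"], "code": ["brake.c"], "tests": ["TC-31"]},
    "REQ-002": {"design": ["DES-14"], "code": [], "tests": []},
}

def untraced(rtm):
    """Requirements missing a link in any downstream phase."""
    return sorted(req for req, links in rtm.items()
                  if not all(links.values()))

print(untraced(rtm))  # ['REQ-002']
```

Commercial traceability tools elaborate this with bidirectional links and change impact analysis, but the underlying check is the same: every requirement must reach implementation and test, and every artefact must trace back to a requirement.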

Figure 4: Development Lifecycle Model Emphasizing the RTM

For aerospace and military projects such as air-traffic control or missile guidance systems, following a model where the RTM has high visibility is commonplace. ISO 26262 places similar demands on the automotive development team, particularly with regards to safety requirements, and it follows that a similar model will be appropriate.

The “Proven in Use” and “Increased Confidence from Use” Arguments

A key clause in the ISO 26262 standard applies when a component has been used in other applications without issues. This naturally extends to older systems that were developed in accordance with past best practice but which predate ISO 26262 itself.

That argument is generally sound. It deals admirably with the possibility that might otherwise exist of a proven component being somehow deemed no longer suitable due to a lack of ISO 26262 certification—despite a track record of millions of road miles presenting incontrovertible evidence to the contrary.

In some cases, that is entirely reasonable. For example, a software unit test tool that has been proven to be effective over a number of safety critical projects is likely to have been subjected to a great variety of circumstances and continually honed by its developers to be as near perfect as is reasonable to expect.

Care is needed when applying that argument to software components in the application code itself, however. It is often argued that legacy code that forms the basis of new developments has been adequately tested just by being deployed in the field. Even in the field, though, it is highly likely that the circumstances required to exercise some parts of the code have never (and possibly can never) occur. It follows that many unexercised paths are likely to remain in software tested only through functional testing and in the field, and such applications have therefore sustained little more than an extension of functional system testing by their in-field use.
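The point can be made concrete with a deliberately trivial, invented example: a function whose rare branch is never exercised by normal operation. Years of field use with typical inputs prove nothing about the unexercised path:

```python
# Illustration: field use with normal inputs never exercises a rare branch.
# The function, threshold and inputs are invented for illustration only.

def select_gear(speed_kmh, limp_home=False):
    if limp_home:              # rare path: may never run in the field
        return 2
    if speed_kmh < 30:
        return 1
    return 3

# "Field testing" with everyday inputs covers the common branches only:
field_inputs = [(12, False), (80, False), (55, False)]
results = [select_gear(s, l) for s, l in field_inputs]
exercised_limp_home = any(limp for _, limp in field_inputs)
print(results)               # [1, 3, 3]
print(exercised_limp_home)   # False: the limp-home branch remains unproven
```

Structural coverage analysis of the kind ISO 26262 recommends exists precisely to expose such unexercised branches, which in-field mileage alone cannot.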

When there is a requirement for ongoing development of legacy code for later revisions or new applications, previously unexercised code paths are likely to be called into use by combinations of data never previously encountered (see Figure 5).

Figure 5: Even code exercised both on site and by functional testing is likely to include many unproven execution paths.

If this seems unreasonable, consider the extreme example where a traction control application that has been proven on millions of family hatchbacks is then deemed sufficiently proven for use in a high-powered muscle car.

Such situations may become particularly challenging when the legacy code has been developed by individuals who have long since left the development team, particularly if the legacy code originates from a time when documentation was not of the high standard expected today.

The Tool Qualification Process

ISO 26262 recognizes that the use of widely accepted software tools simplifies or automates the task at hand, and the use of such tools in the more mature standards cultures prevalent outside the automotive industry will provide evidence of such acceptance, particularly with reference to the common ancestry of the related standards discussed earlier.

The standard requires that the tools to be used must be fully and accurately documented, and that the use of the tool in this instance must be supported by a number of tool qualification work products:

  • Software tool qualification plan
  • Software tool documentation
  • Software tool classification analysis
  • Software tool qualification support

Software tool classification analysis is undertaken to determine the tool confidence level. This is not just concerned with how well the tool performs within its specification, but also how relevant that specification is for the tasks at hand.

Tool error detection is classified as being in the range TD1 (reflecting a high degree of confidence that a tool malfunction will be prevented or detected) to TD3 (errors can be detected only randomly). In the case of a tool that analyzes dynamic behavior through static analysis, for example, the nature of the technique means that false warnings will be raised and must be verified by other means. This is likely to be reflected in a lower confidence rating than would otherwise be the case.

The implication here is not that the tool is faulty, merely that the way in which it works brings with it a requirement for additional verification. Taken together, these classifications provide a mechanism for evaluating an optimal combination of tools and techniques to achieve the overall aims of the project.


ISO 26262 provides a sound framework in which to develop functionally safe automotive applications, but it stops short of dictating exactly how that should be achieved. The flexibility of language and approach designed to permit the selection of appropriate techniques and to integrate applications proven in the field shows a level of pragmatism that should make compliance accessible to all.

That same flexibility and pragmatism should not, however, imply a diminished level of professionalism or consideration on the part of the engineer just because the standard is being met. An example of this would be the reuse of a software component, proven in the field on many vehicles but now proposed for use in an environment likely to expose it to different circumstances.

Despite the core reason for its existence being functional safety, the good practices outlined in ISO 26262 are often of wider benefit above and beyond a purely functional perspective. Soundly maintained requirements traceability is a good example of one such wider benefit.

The standard's openness to established practices allows enlightened engineering management to look at other sectors involved in safety-critical development and to draw on proven tools and approaches from them. The common ancestry that ISO 26262 shares with many of the standards used elsewhere, such as in other transportation sectors, reinforces the validity of this stance.

Jay Thomas is a Technical Development Manager for LDRA Technology in San Bruno, California, and has worked on embedded controls simulation, processor simulation, mission- and safety-critical flight software, and communications applications in the aerospace industry. His focus on embedded verification implementation ensures that LDRA clients in the aerospace, medical, and industrial sectors are well grounded in safety-, mission-, and security-critical processes.

