Posts Tagged ‘top-story’

Driving the I2C bus with Next-Generation Buffers

Monday, May 22nd, 2017

While next-generation I2C buffer devices are good for putting the I2C control buses you already know to work, their true claim to fame could be how they assist with cost, power dissipation, and design complexity, helping system control architectures mature along with overall system design.

Over the past several decades, the Inter-Integrated Circuit (I2C) bus standard has been the dominant control bus standard for most electronic systems. I2C has a loyal following because of its ease of implementation, flexibility, and the large ecosystem of integrated circuits that support the standard. I2C’s ubiquity as the control interface of choice often drives device selection decisions, which in turn shape the overall system architecture.

“Not having pull-up resistors also avoids multiple points of possible system failure.”

System management buses like I2C have become important control points for differentiating designs through system software (firmware). Given the importance placed on system control buses like I2C, it’s crucial that you get the maximum benefit out of your I2C bus to help solve implementation issues and meet system design objectives.

You probably face a multitude of challenges when designing modern electronic-based systems. For example, keeping power dissipation to a minimum is often key for battery-powered systems, while for industrial systems power dissipation is directly connected to thermal performance.

An Opportunity to Reduce Cost
Another area where you’re likely challenged is in system build cost. Market-price competition often drives the need to reduce cost from one product generation to the next. Often, system cost is directly related to design complexity. Complex system designs mean more components, software, and engineering effort to design and test systems. Any opportunity to reduce system complexity is often an opportunity to reduce system cost.

A system’s I2C bus implementation can play a role in helping address cost, power, design complexity, and performance challenges. Next-generation I2C buffers break from traditional I2C implementations by offering a solution that can help you address modern system design issues while staying true to the I2C heritage that has made the standard so widely accepted.

With I2C buffers like TI’s TCA980x series, you can solve common I2C design issues such as level translation, bus buffering, and bus capacitive loading while also tackling system-level challenges such as power, cost, and complexity.

Several key features and characteristics help distinguish the new class of I2C devices from other I2C buffering solutions. For example, placing current-source drivers on the B-side port is a departure from the traditional voltage-based implementations found on virtually all I2C buffers and provides multiple benefits. The B-side bus lines do not need the pull-up resistors commonly used on traditional I2C implementations, and their elimination provides incremental system cost savings, especially for larger control bus implementations. Not having pull-up resistors also avoids multiple points of possible system failure.

Longer Battery Life
The TCA980x’s current-mode implementation means that the I2C bus operation will consume much lower power. Power consumption for the TCA980x is approximately 20 times lower than comparable voltage-mode devices (75μA vs. 1,500μA); see Figure 1. At a system level, lower power dissipation translates into longer battery life and much better thermal performance. The ability to improve battery life is likely to be one of the easiest ways that you can help achieve system design goals without having to compromise performance.

Figure 1: TCA980x vs. traditional I2C buffer static current consumption comparison
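To put the static-current figures above in system terms, here is a rough battery-life sketch using the article's 75 µA vs. 1,500 µA numbers; the 220 mAh coin-cell capacity is a hypothetical example value, and the calculation considers only the buffer's static current:

```python
# Illustrative battery-life arithmetic using the article's static-current
# figures; the 220 mAh coin-cell capacity is a hypothetical example value.
def bus_hours(battery_mah: float, current_ua: float) -> float:
    """Hours the buffer's static current alone would take to drain the battery."""
    return battery_mah * 1000.0 / current_ua

CAPACITY_MAH = 220.0   # hypothetical coin cell

t_current_mode = bus_hours(CAPACITY_MAH, 75.0)     # TCA980x-style current mode
t_voltage_mode = bus_hours(CAPACITY_MAH, 1500.0)   # traditional voltage mode

print(round(t_current_mode))                  # 2933 hours
print(round(t_voltage_mode))                  # 147 hours
print(round(t_current_mode / t_voltage_mode)) # 20x longer on this one line item
```

The 20x ratio simply mirrors the 75 µA vs. 1,500 µA comparison; real battery life depends on every other load in the system as well.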

I2C buffers also provide the added benefit of I2C-level translation, with support down to 0.8V. Support for 0.8V helps future-proof your I2C control bus designs as peripheral and processor input/output (I/O) voltages move lower over time. In addition, the flexibility to support device I/O voltages down to 0.8V gives you more options in signal-chain device selection.

Another common issue is having to tweak pull-up resistors to meet system timing parameters, especially rise time, for heavily loaded bus implementations. Figure 2 compares TCA980x rise-time performance vs. traditional pull-up-based I2C bus implementations. As you can see, for heavily loaded bus environments, current source-based buffer device designs have superior rise-time performance compared to traditional I2C buffers.

Figure 2: Rise-time performance of current-source driver vs. a traditional voltage/resistor approach
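The rise-time problem with pull-ups has a simple physical basis: a resistive pull-up charges the bus capacitance along an RC exponential, and the I2C specification measures rise time between 30% and 70% of VDD. A back-of-the-envelope sketch, with illustrative pull-up and bus-capacitance values rather than figures from the article:

```python
import math

# I2C measures rise time between 30% and 70% of VDD, so for an RC charge
# curve t_r = R * C * ln(0.7/0.3) ~= 0.847 * R * C.
def i2c_rise_time_ns(pullup_ohms: float, bus_pf: float) -> float:
    return pullup_ohms * (bus_pf * 1e-12) * math.log(0.7 / 0.3) * 1e9

# Example: a 4.7 kOhm pull-up on a heavily loaded 400 pF bus (the I2C
# spec's maximum capacitance) blows well past the 300 ns Fast-mode limit.
t_r = i2c_rise_time_ns(4700, 400)
print(round(t_r))   # 1593 ns, far over the 300 ns Fast-mode budget
```

Shrinking the resistor speeds the edge but raises static current, which is exactly the trade-off a current-source driver sidesteps.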

You can appreciate the combined benefits of the new I2C buffers when evaluating them at a system level. System control and communication buses are essentially the central nervous systems of their respective applications. All too often, designers must compromise on their control bus implementations, such as skipping pull-up resistor tuning to save engineering time and cost.

Another area where designers often compromise is in optimizing the operating power consumption of the I2C control bus: the size and expanse of most I2C bus designs often require more engineering time than your budget allows. New I2C buffers enable you to architect control buses that let core system components operate at their intended maximum level. Current source driver-based I2C buffers, with their superior timing characteristics, lower operating current consumption, and lower voltage support, provide the functionality and performance needed to implement control buses from a system-level perspective.

Current source-based I2C buffer devices represent a modern approach to implementing I2C buses. With next-generation I2C buffer devices, you can not only implement I2C control buses that you are familiar with, but also address critical design targets such as cost, power dissipation and design complexity. You can implement system control architectures that evolve with the overall system design rather than being a bottleneck in the system design process.

Atul Patel is a Product Marketer and New Business Development Manager for industrial markets within TI’s Standard Logic Product Line. Patel has more than 20 years of systems and marketing experience with analog and mixed-signal devices covering a broad spectrum of markets, including industrial, automotive, and telecom. He has a Bachelor of Science degree in Computer Engineering as well as an MBA from the University of Central Florida.

Technology Convergence Enables Industrial IoT Solutions

Friday, March 17th, 2017

Ever since Thomas Edison flipped the switch to power the first electric light, the pace of electronic innovation has never let up. With the invention of the transistor and then the integrated circuit, innovation within the electronics industry has developed at breakneck speed. Today’s modern ICs contain upwards of 20 billion transistors. That scale enables significant performance and tightly integrated heterogeneous systems on the same die.

Figure 1: Processing requirements, pixel size and frame rate.

This performance and integration capability has made electronic innovation deeply integrated with modern life. We now see converged applications that combine, among other capabilities, wired and wireless networking, vision processing, industrial control, and cloud computing to create what are often known as smart products. These smart products are capable of interconnecting to form the Internet of Things (IoT); when they are applied within an industrial context, we refer to this as the Industrial Internet of Things (IIoT). Creating IIoT solutions brings with it several challenges. Let’s look at what IIoT systems are, the challenges they face, and how we can address these using the Zynq®-7000 and Zynq® UltraScale+™ MPSoC from Xilinx.

Applications and Challenges of the IIoT
The applications of the Industrial IoT are wide ranging, extending across automation and connectivity from the power grid to planes, trains, automobiles, shipping, and factory automation. General Electric, for example, is adding intelligent and connected systems across the many industries it serves, including the power grid, transportation, oil and gas, mining, and water. In rail transportation, for instance, GE is outfitting its locomotives with smart technologies to help prevent accidents and to monitor systems for wear and tear, enabling more accurate analysis for preventative and predictive maintenance. At the same time, GE is diligently building smart rail infrastructure equipment networked with its locomotives. This allows railway operators to run their lines and schedule maintenance accordingly, keeping freight and passengers moving more efficiently and safely.

The above example demonstrates several of the challenges faced by the IIoT embedded system designer. IIoT systems need to be capable of interfacing with and supporting a wide range of sensors, from simple MEMS-based sensors such as accelerometers to more complex sensors such as CMOS Image Sensors (CIS). These sensors enable the IIoT system to understand its environment and become more context aware. However, many applications require the IIoT system to not only understand its environment but also interact with it. Therefore, the system must contain intelligence so that it is capable of processing at the edge and thus interfacing with and controlling the relevant actuators, motors, or other drive interfaces. Of course, the system is also required to be network enabled and to support a broad base of industrial standards and protocols. This ability to communicate over networks, coupled with the remote and often isolated installation of IIoT systems, also demands secure operations and communications.

Determining the processing capability an IIoT system needs will depend upon the application, interfaces, and system throughput. One of the largest driving factors is the use of embedded vision-based IIoT systems within what is increasingly called Industry 4.0. Industry 4.0 introduces automation and data exchange to manufacturing, with technologies interconnecting via the IIoT and the cloud. When embedded vision is used within the IIoT, key parameters of the image sensor significantly drive the needed processing capability, typically defined by:

  • Sensor Resolution: The number of pixels in the horizontal direction and the number of lines in the vertical direction.
  • Frame Rate: The number of times the entire sensor is read out each second.
  • Data Width: The number of bits used to represent the pixel value.

The combination of these parameters defines the key interfacing requirements and those of the processing chain and its data rate, as shown in Figure 1.
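The arithmetic behind Figure 1 is straightforward to sketch; the 10-bit 1080p at 60 fps numbers below are illustrative examples rather than figures from the article:

```python
# Rough sensor data-rate estimate from the three parameters listed above.
def sensor_data_rate_mbps(h_pixels: int, v_lines: int,
                          fps: int, bits_per_pixel: int) -> float:
    """Raw pixel data rate in Mbit/s, ignoring blanking overhead."""
    return h_pixels * v_lines * fps * bits_per_pixel / 1e6

rate = sensor_data_rate_mbps(1920, 1080, 60, 10)   # 10-bit 1080p at 60 fps
print(round(rate))   # 1244 Mbit/s before any blanking or protocol overhead
```

Even this modest sensor produces well over a gigabit per second of raw pixel data, which is why the interface and processing chain dominate the sizing exercise.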
It is not just the image processing algorithms that system designers need to consider; they must also remember that many IIoT applications are deployed in harsh conditions, involving not only vibration, shock, and temperature, but also electrical noise. This can mean first filtering the data received from the less complex sensors before either acting on it or communicating it onwards. Depending upon the complexity of the filtering required, a simple rolling-average filter may suffice, or a more complex implementation such as a Finite Impulse Response (FIR) filter may be needed to remove unwanted noise. Regardless of the method chosen, signal processing, conditioning, and generation form an important part of IIoT systems.
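A minimal sketch of the rolling-average option described above; the window size and sample values are arbitrary illustrative choices:

```python
from collections import deque

# A minimal rolling-average filter of the kind described above, suitable
# for taming noisy low-rate sensor readings before acting on them.
class RollingAverage:
    def __init__(self, window: int):
        self.samples = deque(maxlen=window)   # oldest sample drops out automatically

    def update(self, value: float) -> float:
        self.samples.append(value)
        return sum(self.samples) / len(self.samples)

filt = RollingAverage(window=4)
readings = [10.0, 30.0, 10.0, 30.0, 10.0]
smoothed = [filt.update(r) for r in readings]
print(smoothed)   # spikes are averaged out: [10.0, 20.0, ~16.7, 20.0, 20.0]
```

An FIR filter generalizes this by weighting each tap, at the cost of more multiply-accumulate work per sample, which is exactly the kind of load that offloads well to programmable logic.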

The sensor processing requirements combined with the communication throughput contribute significantly towards the processing capability required. For connecting to the Internet, the system’s application requirements, as determined by the system architect, dictate the primary method used. This will include several networking standards ranging from 4G for remote and mobile applications, to WiFi, Bluetooth, and Bluetooth Low Energy (BLE), and wired connectivity for factory or fixed applications. In some applications, ad hoc networking may also be required.

Another significant factor in determining the processing requirements is the response time or latency of the system. Again, this can be different from application to application. An Industry 4.0 inspection application will require a much lower response time to detect a manufacturing defect on a production line, than a predictive maintenance application on rolling stock, for instance.

The system also needs to be secure. It’s not just the encryption of communications to and from the system that’s demanded; also needed are the security and trustworthiness of the system itself. These are often called Information Assurance (IA) and Threat Protection (TP). For IA, a typical approach is to implement encryption such as the Advanced Encryption Standard (AES) or one of the IoT-specific lightweight algorithms like SIMON or SPECK. TP is more complicated, as it requires an evaluation of the threats at both a device and a system level. Anti-tamper protection at the device level is as critical as that at the system level and will vary from application to application. In remote, isolated, or critical applications, the system designer will need to ensure the performance and integrity of the system cannot be affected or tampered with by an unauthorized party.
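For illustration, here is a minimal Python sketch of SPECK64/128, one of the lightweight ciphers mentioned above. The key-word ordering follows this sketch's own convention, the key and plaintext are arbitrary example values, and a production design should use a vetted implementation (or the hardware AES described later) rather than this:

```python
# Minimal SPECK64/128 sketch: 32-bit words, 27 rounds, 128-bit key.
MASK = 0xFFFFFFFF

def _ror(x: int, r: int) -> int:
    return ((x >> r) | (x << (32 - r))) & MASK

def _rol(x: int, r: int) -> int:
    return ((x << r) | (x >> (32 - r))) & MASK

def speck64_128_schedule(key_words):
    """Expand four 32-bit key words into the 27 round keys."""
    l = list(key_words[:3])
    k = [key_words[3]]
    for i in range(26):
        l.append(((k[i] + _ror(l[i], 8)) & MASK) ^ i)
        k.append(_rol(k[i], 3) ^ l[i + 3])
    return k

def encrypt(x: int, y: int, round_keys):
    for rk in round_keys:                       # SPECK round function
        x = ((_ror(x, 8) + y) & MASK) ^ rk
        y = _rol(y, 3) ^ x
    return x, y

def decrypt(x: int, y: int, round_keys):
    for rk in reversed(round_keys):             # exact inverse of the round
        y = _ror(x ^ y, 3)
        x = _rol(((x ^ rk) - y) & MASK, 8)
    return x, y

rks = speck64_128_schedule([0x03020100, 0x0B0A0908, 0x13121110, 0x1B1A1918])
ct = encrypt(0x3B726574, 0x7475432D, rks)
print(decrypt(*ct, rks) == (0x3B726574, 0x7475432D))   # True: round trip recovers plaintext
```

The appeal of SIMON and SPECK for IoT endpoints is exactly what this sketch shows: the whole round function is additions, rotations, and XORs, which map cheaply onto small MCUs and programmable logic.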

Summarizing the above, we can identify several high-level challenges that the IIoT designer must address:

  1. Ability to interface with and control a wide range of sensors, actuators, motors, and other application-specific interfaces.
  2. Processing capability with the ability to process at the edge within the required response time.
  3. Communication support for a range of wired and wireless technologies.
  4. Security and the ability for the device and system to be secure in terms of both IA and TP.
  5. Functional safety support (SIL levels).

Rising to the Challenge
The system architect can address these high-level challenges by selecting a device from the Xilinx All Programmable Zynq-7000 SoC family or the Zynq UltraScale+ MPSoC family. These heterogeneous processing systems include a complete ARM processing complex, comprised of a processor subsystem and peripherals (a single- or dual-core ARM® Cortex™-A9 in the case of the Zynq-7000; dual- or quad-core Cortex-A53 application cores plus dual Cortex-R5 real-time cores, a Mali GPU core, and, in select devices, a video codec supporting H.264/H.265 in the Zynq UltraScale+ MPSoC), all closely coupled with programmable logic and configurable I/O, allowing the system designer to create an optimal solution for many IIoT applications.

As identified in the summary of the previous section, the Zynq All Programmable SoC family addresses a wide breadth of sensor and connectivity modalities; provides configurable machine-learning engines for analytics, with the responsiveness required for real-time precision machine control; and offers multi-layered security with multi-level safety.

One of the main advantages of using a Zynq SoC solution is that it provides any-to-any interfacing, enabling connection with industry-standard, legacy, and evolving interfaces (such as TSN). Within the processing system (PS) of the Zynq-7000 and Zynq UltraScale+ MPSoC, the user is provided with several standard peripheral interfaces, from basic low-speed standards like SPI, I2C, and UART to more complex ones such as CAN, Ethernet, USB 3.0, PCIe, SATA, and DisplayPort. These integrated peripherals enable the design engineer to connect a wide range of sensors. However, should an interface be required that the processor doesn’t support, for instance a CIS or a high-speed ADC or DAC, designers can utilize the programmable logic (PL) to implement the required peripheral interface. This ability to use the PL to create the required interface comes into its own when there is a need to interact with a proprietary or legacy interface.

Data produced by the sensors can be processed by either the processor or the programmable logic. Using SDSoC™, Xilinx’s system-level Eclipse-based SDx environment, the design engineer can seamlessly partition and quickly optimize the design, moving functions executing on the processor into the programmable logic as offloaded, accelerated co-processing engines. SDSoC provides a rapid and seamless development environment, combining high-level synthesis with optimized data-movement engines and a connectivity framework, substantially boosting system performance.

For example, two commonly used algorithms within IIoT systems are FIR filters, which reduce noise on sensor readings as previously noted, and AES encryption, which secures communication channels. Both algorithms can execute within the processor system, but performance increases significantly when these functions move into the programmable logic. Monitoring the clock cycles taken by the FIR filter example, the user obtains 537,946 cycles when running bare metal on the processor and 54,696 cycles when running in the programmable logic: a decrease in execution time of around 90 percent, achieved without the help of an HDL specialist. Designers can provide similar acceleration for signals the IIoT system must generate, for example in motor control.
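The speedup arithmetic behind those cycle counts is worth making explicit:

```python
# Reproducing the speedup arithmetic from the FIR offload example above.
cpu_cycles = 537_946   # FIR filter running bare metal on the processor
pl_cycles = 54_696     # same filter offloaded to the programmable logic

speedup = cpu_cycles / pl_cycles
reduction_pct = 100.0 * (1 - pl_cycles / cpu_cycles)

print(round(speedup, 1))    # 9.8x faster in the programmable logic
print(round(reduction_pct)) # 90 percent fewer cycles, matching the article
```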

Furthermore, machine learning and neural networks can be implemented utilizing the combined processing capabilities of the Zynq Programmable SoC family. Examples include inference engines for image classification, which is important in machine vision for process control and in autonomous operation for robotics and surveillance systems, along with probabilistic machine-learning engines that move maintenance from predictive to prescriptive. The Zynq Programmable SoC family and its development tool environments provide a platform for rapid generation of these engines utilizing popular machine learning frameworks.

But of course, not all IIoT systems run bare metal. Many require a real-time OS like FreeRTOS, or a more complex OS like Linux, and the choice of operating system will have an impact on the performance of the accelerated function. Table 1 shows the performance of the AES-256 algorithm when running in the processor system and when accelerated in the programmable logic under several different operating systems.

Table 1: AES Acceleration by OS.

When it comes to being network enabled and communicating to and from the system, both the Zynq-7000 and Zynq UltraScale+ MPSoC provide Gigabit Ethernet capability within the processor. If the system requires a wireless connection, users can leverage the any-to-any configurable interface capability to connect an external WiFi module; these often provide Bluetooth and BLE capability as well. If the system is designed to operate within a remote, isolated, or mobile application, a 4G interface may be provided to ensure continuity of connection back to the cloud.

Security is designed into the very core of both the Zynq-7000 and Zynq UltraScale+ MPSoC families, enabling secure boot facilities. Within both the processor and the programmable logic of Zynq-7000 devices, there is a three-stage process system engineers can use to ensure system partitions are secure, comprising a Hashed Message Authentication Code (HMAC), Advanced Encryption Standard (AES) decryption, and RSA authentication (Figure 2). Both the AES and HMAC stages use 256-bit private keys, while RSA uses a 2048-bit key. The Zynq security architecture also allows JTAG access to be enabled or disabled, preventing unauthorized system access.

Figure 2: All Programmable Zynq-7000 SoC Secure Boot and TrustZone Implementation
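To make the HMAC stage of that boot chain concrete, here is an illustrative Python sketch of authenticating a boot partition with a 256-bit key. The real check runs in Zynq hardware over the actual partition images; the image bytes and key here are made-up examples:

```python
import hashlib
import hmac
import os

# Conceptual sketch of the HMAC stage of a secure boot chain: verify that a
# boot partition image matches its tag under a 256-bit private key.
def authenticate_partition(image: bytes, key: bytes, expected_tag: bytes) -> bool:
    tag = hmac.new(key, image, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected_tag)   # constant-time compare

key = os.urandom(32)                  # 256-bit key, as in the scheme above
image = b"first stage boot loader"    # stand-in for a real partition image
good_tag = hmac.new(key, image, hashlib.sha256).digest()

print(authenticate_partition(image, key, good_tag))          # True
print(authenticate_partition(image + b"x", key, good_tag))   # False: tampered image
```

The constant-time comparison matters: a naive byte-by-byte comparison can leak timing information an attacker could exploit.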

These security features are enabled as users generate the boot file and the configuration partitions for their non-volatile boot media. It is also possible to define a fall-back partition such that, should the initial first-stage boot loader fail to load its application, it will fall back to another copy of the application stored at a different memory location, offering a degree of reliability. And if required, users can implement functional safety (IEC 61508-based) within their IIoT design using techniques such as Isolation Flow (Figure 3). A reference design is available showing how the Zynq-7000 achieves SIL 3 with HFT=1.

Figure 3. Isolation Flow Example

Having completed the secure boot, with the device executing its application, system designers can use the ARM TrustZone architecture to implement orthogonal worlds, which limits access to hardware functions within both the processor and programmable logic peripherals. Integrated A/D converters for voltage and temperature monitoring can assess SoC and overall system health, and can also provide an anti-tamper capability to address unauthorized access to the system. The Zynq UltraScale+ MPSoC enhances Zynq-7000 security, adding functionality including Differential Power Analysis (DPA) countermeasures, an integrated Physically Unclonable Function (PUF), and other security features.

IIoT designers face several challenges that can be addressed using the Xilinx Zynq All Programmable SoC and MPSoC families, which combine software intelligence and hardware optimization in a single Zynq SoC device. Providing real-time processing and response, breadth of communication standards and protocol support, multilayer security and functional safety, any-to-any connectivity, and the ability to rapidly develop at the system level using SDSoC to ensure optimal system partitioning and performance, Zynq Programmable SoCs are an ideal platform for IIoT systems.

Adam Taylor is a world-recognised expert in the design and development of embedded systems and FPGAs for a range of end applications. Throughout his career, Adam has used FPGAs to implement a wide variety of solutions, from radar to safety-critical control systems, with interesting stops in image processing and cryptography along the way. Most recently he was the Chief Engineer of a space imaging company, responsible for several game-changing projects. Adam is the author of numerous articles on electronic design and FPGA design, including over 175 blogs on how to use the Zynq. He is a Chartered Engineer and Fellow of the Institution of Engineering and Technology, and the owner of the engineering and consultancy company Adiuvo Engineering and Training.

Challenges to Implementing the Internet of Things for Industrial Applications

Tuesday, May 10th, 2016

Building industrial IoT systems is getting easier as both hardware and software building blocks come together to provide robust performance and security.

The Internet of Things (IoT) takes on many forms, ranging from a handful of smart home devices linked together and connected through the Internet via a gateway, to networks of hundreds or thousands of sensors and other connected devices in a smart office building, a factory floor, the power grid, or jet engines on a plane. The industrial use of IoT solutions takes on its own identity as the industrial IoT (IIoT), and the IIoT has more stringent performance requirements than consumer solutions. Advanced security, quantified real-time performance, the ability to connect legacy equipment to the network, and the ability to handle huge amounts of data collected from thousands of endpoints are key performance characteristics that differentiate the IIoT from consumer IoT solutions in smart homes.

As explained in a white paper written by several product managers at Moxa, in industrial automation applications, collecting data from field devices will become more important than ever. Temperature, motor speed, start/stop status, or video footage can be used to gain new insights to increase competitiveness. For example, you can determine how to optimize your energy usage, production line performance, and even when to do preventive maintenance to reduce the amount of downtime. However, these devices often speak different languages: some use proprietary protocols, whereas others use open standard protocols. Whatever the case may be, you will need to find an efficient way to convert back and forth between one or more protocols.

The IIoT thus presents many challenges to designers and system implementers. The first critical issue is to get all the “things” connected so that data and commands can seamlessly flow across the network. Next, all the data being collected must be turned into intelligence (little data from many sensors or endpoints turns into big data, and that data must be analyzed to extract useful intelligence). Often, groups of sensors are collected into a bank and that bank feeds into a gateway that will preprocess the data, reducing the amount of data sent back to the host system. That, in turn, will reduce the network bandwidth requirements, lowering the cost to collect the data since companies could use less-expensive lower-bandwidth interfaces.
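To make that preprocessing step concrete, here is a sketch of a gateway summarizing a bank of raw sensor samples before forwarding them upstream; the sensor ID, field names, and readings are invented example values:

```python
import statistics

# Edge preprocessing as described above: a gateway condenses a batch of raw
# sensor samples into a few summary statistics instead of forwarding every
# reading, cutting the bandwidth needed on the uplink to the host.
def summarize(sensor_id: str, samples: list) -> dict:
    return {
        "sensor": sensor_id,
        "count": len(samples),
        "mean": statistics.mean(samples),
        "min": min(samples),
        "max": max(samples),
    }

raw = [21.1, 21.3, 21.2, 24.9, 21.2]   # e.g. one reporting interval of temperatures
report = summarize("temp-07", raw)
print(report["count"], "raw samples reduced to a", len(report) - 2, "value summary")
```

The host still sees the outlier through the max field, but the gateway has sent one small record instead of the whole sample stream.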

Factory-floor applications and many other industrial applications bring with them legacy equipment issues, where equipment that might be 10, 20 or more years old would have to be adapted to communicate over the IIoT. But such equipment might employ many different communication protocols and interfaces, thus making the equipment difficult to configure and integrate. Thus, designers will have to deal with interoperability and scalability challenges to craft large, scalable heterogeneous networks that can operate in harsh environments—whether on the factory floor or dispersed across the country on the power grid.

Once properly set up, the IIoT can improve productivity and make users’ lives easier. However, an unreliable network is the bane of any system developer, since users would see longer system downtimes; risks of system breaches from hackers, malware, and organized cyber-security attacks; and unstable operations, as the diagram from Moxa illustrates (Figure 1). This is especially prevalent when wireless connections link portions of the system together.

To deal with all the security and interface issues, Moxa has developed Fieldbus-to-Ethernet gateways with smart functionality to deal with breaches and system performance issues. The company’s MGate gateways not only connect serial devices to Ethernet systems, but also allow multiple connections and make it easy to employ various Ethernet protocol formats, such as Modbus TCP and EtherNet/IP.

Figure 1: In a typical factory automation system, there are many concerns that designers must address to create a reliable IIoT solution. Image courtesy of Moxa.
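For a concrete taste of the Ethernet side of such a gateway, the sketch below builds a Modbus TCP "read holding registers" request of the kind an MGate-class device would emit; the transaction ID, unit ID, and register address are arbitrary example values:

```python
import struct

# Build a Modbus TCP "read holding registers" request (function code 0x03).
# The MBAP header carries: transaction id, protocol id (0 = Modbus),
# remaining byte count, and the unit (slave) id; the PDU follows.
def modbus_read_holding(tid: int, unit: int, start_addr: int, count: int) -> bytes:
    pdu = struct.pack(">BHH", 0x03, start_addr, count)      # function, addr, qty
    mbap = struct.pack(">HHHB", tid, 0x0000, len(pdu) + 1, unit)
    return mbap + pdu

frame = modbus_read_holding(tid=1, unit=0x11, start_addr=0x006B, count=3)
print(frame.hex())   # 0001 0000 0006 11 03 006b 0003
```

The 12-byte frame shows why serial-to-Ethernet translation is tractable: the TCP variant simply wraps the familiar register-oriented PDU in a small binary header.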

One Starting Point

In addition to all the hardware issues, there are many software aspects that designers must deal with, from the operating system software to the application programs that run on endpoints such as sensor nodes, machines on the factory floor, equipment in the field, or jet engines on a plane. One starting point, Wind River’s free scalable real-time operating system (RTOS) dubbed Rocket, is targeted at 32-bit microcontrollers and is a good fit for sensors, wearable products, industrial actuators, wireless gateways, and other resource-constrained devices.

The Rocket RTOS lets designers develop, debug and deploy applications for small, intelligent devices from any browser. Included in Wind River’s Helix App Cloud, a cloud-based software development environment, the Rocket software allows designers to start developing IoT applications in minutes. To get started, designers just have to create an App Cloud account, connect their prototype board or use Wind River’s simulator, and then start writing the application code. A browser interface allows designers to work from anywhere to code and debug the application software, and prototypes can be developed without requiring any hardware purchases.

The Rocket kernel has a small footprint designed for use on resource-constrained systems, from simple embedded environmental sensors and LED wearables to sophisticated smart watches and IoT wireless gateways. The software is tuned for memory- and power-constrained devices, fitting in as little as 4 Kbytes of memory.

Souped-up Security

The proliferation of IoT/IIoT devices brings with it significant security challenges. Hackers as well as cybercriminals can often find a way to enter the networks and wreak havoc by compromising system functions, holding system data for ransom, or siphoning off valuable system data for sale to other criminals. Both hardware and software measures to protect the networks are needed to prevent or minimize network intrusions and alert companies when an intrusion is taking place. To that end, microcontroller (MCU) vendors now include random-number generators as well as full encryption/decryption blocks on their chips to provide real-time encryption/decryption. Previous generation MCUs typically used software encryption/decryption and the software overhead slowed system throughput.

In addition to the RTOS software challenges, the ability to handle hundreds to thousands of sensor or endpoint inputs often requires the use of gateways or edge devices that can aggregate the data coming from the sensors or endpoints, potentially preprocessing the data and then forwarding the data to the host system. These Internet gateways will often handle multiple communication protocols such as WiFi, Bluetooth, ZigBee, LoRa (long-range wireless), and still others. The gateway will translate all the inputs into a common communication protocol, typically WiFi or wired Ethernet.

Situated between the sensors and the gateways, sensor hubs usually perform some degree of data reduction, extracting the key information from the sensor data streams to reduce the amount of data sent back to the host system (Figure 2). Many vendors offer gateways for IoT applications. Intel, for example, offers various reference designs for system gateways, ranging from simple gateways based on its low-end Quark X1000 system-on-a-chip to designs using its higher-performance Atom processors such as the E3826. The gateway products provide connectivity from the sensors all the way up to the cloud and enterprise systems. The more intelligent gateways can preprocess and filter data to deliver selective results to the host. Local decision-making makes it easier for gateways to connect with legacy systems, and a hardware root of trust along with hardware-supported data encryption can provide end-to-end security.

Figure 2: In a large IIoT system, gateways provide a means to translate multiple protocols into a common protocol used to communicate with the host system somewhere in the cloud, while sensor hubs aggregate some of the sensor data to extract key information and reduce the amount of data transferred to the host. (Image courtesy of Intel.)

Thus designers can scale processor performance to match the processing requirements of the gateway. Additionally, the processors support multiple operating systems from Wind River, Microsoft, Ubuntu, and others. Robust security based on McAfee (now part of Intel) embedded control security technologies integrates tightly with on-chip hardware-based security features to provide a seamless, secure data flow from the edge devices to the cloud, protecting data both in flight and at rest. Finally, pre-integrated manageability options give developers an out-of-the-box offering they can build on and customize.

The sensors that collect all the data have come down in price—from tens of dollars to just a dollar or two today—thanks to the large volumes consumed by mobile communications and compute products such as smartphones and tablets. They are also becoming more highly integrated, offering anything from a single-axis accelerometer to a nine-axis multi-sensor (three-axis accelerometer, three-axis gyroscope and three-axis compass) solution in a single surface-mount package, such as those offered by InvenSense. Along with the sensors, some vendors also include data processing functions on the sensor chip to preprocess the raw data stream, reducing the amount of data sent to the gateway. The low cost of today’s multiaxis sensors allows designers to use them throughout the system to monitor many more aspects of system performance than in previous system generations.
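The on-chip preprocessing mentioned above can be as simple as a dead-band (report-by-exception) filter that forwards a sample only when it differs meaningfully from the last reported value. A minimal sketch, with an arbitrary threshold:

```python
def deadband_filter(samples, threshold):
    """Forward a sample only when it moves more than `threshold` away
    from the last value reported, a common data-reduction trick."""
    reported = []
    last = None
    for s in samples:
        if last is None or abs(s - last) > threshold:
            reported.append(s)
            last = s
    return reported

raw = [10.0, 10.1, 10.05, 12.0, 12.1, 9.0]
reduced = deadband_filter(raw, threshold=0.5)
# reduced == [10.0, 12.0, 9.0]: half the traffic to the gateway
```

The threshold trades fidelity against bandwidth; noisy sensors would typically also get a smoothing stage before the filter.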

The software and hardware building blocks needed to build a robust IIoT system are now readily available from multiple suppliers. Development tools now let designers prototype systems in minutes to a few hours. But there are still many system aspects that designers must address. Deciding which communications protocol standard to use (as mentioned earlier) will be key to system scalability and performance. Additionally, for many systems the choice between an RTOS and another operating system can be critical, but there are no standards that designers must follow. The key issue will be determining whether the system will be “closed” and only allow hardware from one vendor to do all the work, or “open” and allow products from different suppliers to interoperate.

For smart homes, the Thread Group has developed an approach to connect and control products in the home. Built on open standards and using the IPv6 and 6LoWPAN protocols, Thread provides a secure and reliable mesh network with no single point of failure as well as simple connectivity and low power consumption. On the industrial side, the Industrial Internet Consortium is trying to define standards to encourage interoperability. The IIC is a worldwide not-for-profit, open membership organization that was formed to accelerate the development, adoption, and widespread use of interconnected machines and devices, intelligent analytics, and people at work. It helps achieve this by identifying the system and subsystem requirements for open interoperability standards and defining common architectures to connect smart devices, machines, people, and processes that will help to accelerate more reliable access to big data and unlock business value.

Increased Automation Challenges Embedded Safety Functionality

Tuesday, December 1st, 2015

An increase in automation, in factories, in vehicles and in the IoT is driving the need for functional safety to be embedded in the digital portion of designs for industrial and automotive products.

Increasing levels of automation in factories means that safety and security are more critical than ever. More equipment and systems are communicating and operating over a network to monitor and analyze processes in the manufacture of end products.

Embedded systems used in the automation process need to foresee problems. And devices used must comply with safety standards. In self-driving cars and the Internet of Things (IoT), devices are also operating without operator assistance. In all three instances, there has to be a high level of system awareness. Reliable devices must be certified to meet a level of risk analysis and an acceptable probability of failure.

Functional Safety Standards

Functional safety can be designed into embedded systems, whereby the system meets certification requirements and detects potentially dangerous conditions. Importantly, the definition of functional safety includes “the activation of a protective or corrective device or mechanism to prevent hazardous events arising or providing mitigation to reduce the consequence of the hazardous event.” (IEC-Functional Safety Explained)

“Embedded systems used in the automation process need to foresee problems and devices used must comply with safety standards.”

Although the initial part of IEC 61508 was introduced in 1998, sector-specific versions followed, for example, oil production and non-nuclear power plants in 2001, but it was not until 2010 that it addressed industrial communication networks. Roger May, system architect and functional safety lead at Altera, believes that the 2010 European machinery directive has changed the way designers integrate safety into products. “[It] required all machines to be safe,” he says, “This has led to customers looking to add safety into the core of the product instead of as an add-on. As these safety designs are becoming more prevalent, then there is a knock-on impact across other markets/geographies which see the need to improve the safety of their products.”

An extension of IEC 61508 is ISO 26262, which defines Automotive Safety Integrity Levels (ASILs), a risk-classification scheme. Both of these standards assess hardware safety integrity and systematic safety integrity.

Industrial and Automotive Design

At this year’s SPS IPC Drives exhibition in Nuremberg, Germany, many companies highlighted the need for device-level and system-level design to work in harmony to reduce risk and time to market for industrial and automotive systems.

Infineon was one. It announced that its XMC4000 32-bit microcontrollers will be available with a Safety Package to help designers develop TUV-certified automation tests that conform to Safety Integrity Level (SIL) 2 and SIL3. These SILs, as defined in IEC 61508 for continuous operation, correspond to a probability of dangerous failure per hour of 10⁻⁶ to 10⁻⁷ and 10⁻⁷ to 10⁻⁸, respectively. TUV Rheinland is the international, independent body for certification of safety and quality of products, services and management systems.
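Those per-hour failure probabilities come from the IEC 61508 high-demand/continuous-mode table, which reads most easily in scientific notation. A small sketch (the function and dictionary names are ours, not from any standard library) that maps a dangerous-failure rate onto its SIL band:

```python
# IEC 61508 high-demand/continuous mode: probability of a dangerous
# failure per hour for each SIL band (lower bound inclusive).
SIL_BANDS = {
    4: (1e-9, 1e-8),
    3: (1e-8, 1e-7),
    2: (1e-7, 1e-6),
    1: (1e-6, 1e-5),
}

def sil_for_pfh(pfh):
    """Return the SIL whose band contains the given rate, else None."""
    for level, (low, high) in SIL_BANDS.items():
        if low <= pfh < high:
            return level
    return None

sil_for_pfh(5e-8)  # → 3
sil_for_pfh(5e-7)  # → 2
```

Note that each step up in SIL demands a tenfold reduction in the allowable dangerous-failure rate.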

The XMC4000 family is based on an ARM Cortex-M4 processor and was designed for the industrial market, with integrated EtherCAT for real-time Ethernet communication and ambient-temperature operation up to 125°C.

Figure 1: The XMC4800 32-bit microcontroller from Infineon has integrated EtherCAT for real-time communication.


The Safety Package includes XMC4000 microcontroller hardware, documentation and the TUV-certified Fault Robust Software Test Library (fRSTL), jointly developed with YOGITECH, and consultancy and implementation support by embedded engineering tool company, Hitex. Documentation includes a failure mode report, a failure mode effects and diagnostic analysis based on failure-in-time (FIT) rates for the microcontrollers, and the Safety Application Note for developing SIL2 and SIL3 systems.

The non-TUV-certified library (fRSTL) is available now, and the TUV-certified version will be available, under license from YOGITECH or Hitex, from Q1 2016. The documentation (failure mode report, failure mode effects and diagnostic analysis) will be available in January 2016.

“There have been changes in the safety functionality that customers are embedding in their products,” observes May, requiring changes to the embedded system. “We are seeing a greater demand to add more complex drive safety features, such as SS1 (Safe Stop 1) and SLS (Safely-Limited Speed)—these require safety to be more deeply embedded in the digital system,” he adds.

FPGA Combines With IP

In 2010, Altera was the first FPGA company to deliver a certified toolflow and IP, says May. It has added toolflows such as Safety Design Partitioning and works with partners, such as YOGITECH, the independent IC design services provider. The Nios II embedded processor is now available with the Functional Safety Lockstep. Targeting industrial and automotive applications, the two companies have built the Lockstep solution using Altera FPGAs, SoCs and certified toolflows, with YOGITECH IP. Customers can use Lockstep to implement SIL3 safety designs in Altera FPGAs using fRSmartComp technology for diagnostic coverage, self-checking and safety-related diagnostics of ICs, in compliance with IEC 61508 and ISO 26262.

Figure 2: Altera launched the Nios II Lockstep processor solution with partner, YOGITECH, at this year’s SPS IPC Drives show in Nuremberg, Germany.


In a safety system, the more complete the tests for system faults, the better. May explains that a measure of this is the diagnostic coverage. When implementing safety on a processor, the processor itself must be tested. This can be done using Software Test Libraries (STLs) to test software functions that run on the processor. “A disadvantage of these STLs,” says May, “is that they require significant processor performance (approximately >50%)—leaving less performance for the safety functionality. They are also only able to provide moderate diagnostic coverage (approximately 60% at best). This moderate-to-low diagnostic coverage limits the system designer when trying to implement the high safety levels that require diagnostics of at least 90%,” he points out. “Using Nios II Lockstep has benefits in that the diagnostic coverage is >99% and it is achieved without impacting the performance of the processor,” he says.
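May's numbers follow from the standard diagnostic-coverage relation: the rate of dangerous failures that slip past the diagnostics is the base rate multiplied by (1 - DC). A quick sketch with a purely hypothetical base failure rate:

```python
def undetected_rate(base_rate_per_hour, diagnostic_coverage):
    """Dangerous-failure rate left undetected after diagnostics,
    using the standard definition of diagnostic coverage (DC)."""
    return base_rate_per_hour * (1.0 - diagnostic_coverage)

base = 1e-6  # hypothetical dangerous-failure rate of the bare processor

stl = undetected_rate(base, 0.60)       # STL at ~60% coverage: ~4e-7/h
lockstep = undetected_rate(base, 0.99)  # lockstep at >99%: ~1e-8/h
# The lockstep figure is roughly 40x lower, which is what puts the
# higher safety levels within reach without burning cycles on an STL.
```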

Figure 3: The YOGITECH safety methodology flow.


Nios II joins the ARM Cortex-R5 processor in fRSmartComp, YOGITECH’s IP that implements lockstep operation by interfacing a master and a slave microprocessor with all the required logic and mechanisms. It embeds the standard cycle-by-cycle comparator of dual-core lockstep architectures. When a discrepancy is detected at one of the interfaces, it determines which of the two cores may have failed. A fail-operational architecture allows the faulty core to be swapped for a good one. A fail-safe architecture has two channels: one can serve as a redundant channel in case the other fails, or both channels can perform the same operation, with a single channel carrying on the task in the event of a failure.
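As a purely illustrative toy model (the real fRSmartComp comparator is hardware IP, not software), the fail-safe flavor of lockstep can be sketched as two cores computing every cycle with a comparator that halts on any mismatch rather than letting a possibly wrong result propagate:

```python
def run_lockstep(core_a, core_b, inputs):
    """Cycle-by-cycle comparator: both cores process each input; any
    mismatch raises a fault instead of emitting an untrusted output."""
    outputs = []
    for cycle, x in enumerate(inputs):
        a, b = core_a(x), core_b(x)
        if a != b:
            raise RuntimeError(f"lockstep mismatch at cycle {cycle}: {a} != {b}")
        outputs.append(a)
    return outputs

healthy = lambda x: x * 2
faulty = lambda x: 99 if x == 3 else x * 2  # injected fault on one input

ok = run_lockstep(healthy, healthy, [1, 2, 3])  # → [2, 4, 6]
```

Feeding `healthy` and `faulty` to `run_lockstep` raises on the third cycle, which is exactly the fail-safe behavior: stop rather than continue with an unverified result.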

SMARC is The Low Power SFF

Thursday, September 25th, 2014

Since stripping and reusing the guts from a smartphone is impractical, the tiny SMARC form factor is the next best thing.

Modern smartphones and tablets are more powerful by far than the PC or Mac that was on your desk a couple of years ago. They run Microsoft Office and more apps than you ever had on your desktop or laptop—I guarantee it. They do video image processing, contain software-defined audio CODECs, and do RF and DSP crunching that used to require TI’s best purpose-built ICs. And most smartphones run ARM-based SoCs (32-bit, and starting with the iPhone 5s, 64-bit ISAs), although Intel’s x86 Atom is creeping into many overseas handset designs.

Most impressively, all this compact mobile horsepower, I/O and user application goodness is incredibly low power. Contrast these achievements with common open-standard embedded form factors such as COM Express, 3U VPX, or one of the PC/104 Express variants. See the difference? They’re all “huge” compared to a smartphone’s motherboard and draw 10W or more.

The Smart Mobility Architecture (SMARC) was created to bring smartphone-like performance, size and power to the open-standard embedded market. SMARC was conceived by Kontron and ADLINK working collaboratively, and ADLINK’s SMARC product line is called Low Energy Computer-on-Module (LEC). Let’s examine SMARC in some detail.

A “short” sized SMARC module by ADLINK and based upon the industrial-strength ARM-based TI Sitara AM3517 Cortex-A8. “Short” SMARC modules are 82 mm x 50 mm small.

SGeT Going

The initial collaboration between Kontron and ADLINK started in 2010 and was the result of Intel disclosing its then-Atom smartphone strategy to the company’s key embedded partners. Two trend lines were at work: one, Intel’s Atom variants weren’t ideal for traditional embedded form factor boards due to pinouts, feature sets and power; and two, standard ARM SoCs—although growing in performance—were becoming increasingly difficult to design with due to complexity and serial interface signal integrity.

On the one hand, the x86 embedded market was strong but Intel’s lowest power variants would be a challenge for open-standard small form factor (SFF) designers. On the other hand, the hugely popular and ultra low power ARM SoCs were out of sync with many SFF designers’ capabilities due to high complexity silicon and long design times. An additional concern was that ARM-based SoCs evolved in sync with the smartphone market which was too fast for “regular” SFF embedded designers and their customers. Embedded designers don’t spin boards annually.

Kontron’s then-CTO Dirk Finstel (now European General Manager at ADLINK) turned to ADLINK and others to form a “rapid response” collaboration effort that would spawn an ultra low power (ULP) computer-on-module (COM) standard that would:

  • Be geared towards low power ARM SoCs with longer lifecycles and common I/O. The target is three to five I/O generations at the board interface
  • Be nearly as small as a smartphone’s motherboard
  • Abstract the complicated processor design onto a SFF COM board for use with a base carrier board for I/O breakout.

SMARC was announced in 2012 and was to be administered by the new Standardization Group for Embedded Technologies (SGeT) committee with the goal of quickly establishing SMARC and other embedded SFFs. While the SGeT website lists nearly 50 company names, the key players today, as identified by ADLINK, are ADLINK, Advantech, b-Plus, Fortec, Greenbase, Kontron, and TQ Systems.


The SMARC specification has several key attributes that make it a very desirable COM for embedded, but power and size are the primary ones. The power design goal is 6 W maximum, with a mere 2 W as the typical draw. This is achieved today primarily by choosing Freescale- and TI-based ARM SoCs (although any ARM-based SoC such as a Qualcomm Snapdragon 800 would work as well). Compare this to the typical Qseven board (12 W) or COM Express board (up to 50 W) that both use x86 Atom or Core CPUs.

Size-wise, SMARC comes in short size (82 x 50 mm) and full size (82 x 80 mm) with real estate of 4100 mm2 and 6560 mm2, respectively (Figure 1). Compared to Qseven, SMARC is actually a bit bigger. Table 1 compares SMARC, Qseven and COM Express. Note that since SMARC is based on the ARM architecture, the desired operating systems—Android, Linux and Windows Embedded Compact 7—are primarily mobile OSes. This exemplifies SMARC’s low power roots.
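The footprint figures are easy to sanity-check, and dividing the spec's typical 2 W draw by the short module's area gives a feel for how gentle the thermal load is (the power-density figure is our derivation, not from the spec):

```python
short_area_mm2 = 82 * 50   # short SMARC module footprint
full_area_mm2 = 82 * 80    # full SMARC module footprint

# Typical draw (2 W) spread over the short module's area, in W/cm^2
typical_density = 2.0 / (short_area_mm2 / 100.0)
# short_area_mm2 == 4100, full_area_mm2 == 6560, density ≈ 0.049 W/cm^2
```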

Figure 1: There are two SMARC sizes. The edge connector mates to an MXM3 connector on a base carrier board; note the holes for standoffs. (Courtesy:
Table 1: Small form factor (SFF) comparison between SMARC, Qseven and COM Express. SMARC’s small size is complemented by extremely low power consumption. (Courtesy: ADLINK.)

High Density; High Society

SMARC intends to bring smartphone-like features, size and power to deeply embedded systems such as Internet of Things (IoT) intelligent nodes and smart gateways. The typical functional blocks found on a SMARC module are a function of the 314-pin MXM 3.0 connector. Figure 2 shows the impressive list of fancy smartphone-like interfaces and signals found on a SMARC COM board.

It’s clear they’re a combination of smartphone multimedia (video, camera, solid state storage), audio, and power conservation, coupled with the kinds of I/O seen in industrial and IoT embedded systems. 24-bit LVDS, Gigabit Ethernet, GPIO, CAN and UART are common embedded SFF I/O.

Figure 2: The types of I/O found on the 314-pin SMARC connector. Incidentally, there are 12 pins reserved for future power management. (Courtesy: ADLINK.)

Of note are the “modern interfaces” and Power Management pins shown in Figure 2. Dirk Finstel of ADLINK—a creator of SMARC—told me that additional memory and different kinds of future memory, plus the addition of high-power/high-source I/O, were behind the 12 reserved Power Management pins. The “Modern Interfaces” shown stand in contrast to typical embedded COM Express I/O such as SPI and Field Bus. SGeT’s thinking for including these on SMARC is exemplified by the MIPI Alliance list of display interface specifications.

Homes for SMARC

The tiny SMARC COM boards, in either Short or Full size (Figure 1) are geared towards low- or battery-powered, long-life systems soon to be found in IoT architectures. But they’re also robust enough for use in client-server architectures where a combination of Linux and Windows applies. This makes SMARC boards candidates for IoT smart gateways where new and legacy sensor data is aggregated and analyzed before being sent onward to the cloud. Their small size means multiple SMARC boards can be ganged inside a small “shoebox” to form a multiprocessing server—including a media processing server because of SMARC’s modern interfaces.

An example of a TI-based SMARC board from ADLINK is the LEC-3517 as shown in Figure 3a and 3b. What’s absolutely astounding is that this much processing and I/O can be found on such a small COM board. Even though the BASE carrier is larger than the SMARC board, the combination has a very small footprint for an open standard embedded SFF.

A system-level example using an Atom-based SMARC is the MXE-200i fanless IoT gateway from ADLINK, shown in Figure 4. Although SMARC was created for ARM’s low-power cores in SoCs from Freescale, TI and others, provisions were made for non-ARM processors. ADLINK, for example, recently introduced an Intel Atom E3800-based Full-sized SMARC COM called the LEC-BT.

Figure 3: Carrier board (left) and SMARC COM board (right). The BASE is larger than the SMARC COM, but still small despite its I/O density.
Figure 4: A notional IoT gateway system based upon SMARC. The OD is a mere 120 x 60 x 100 mm (WxHxD)—not much bigger than the dual Ethernet ports.

This article was sponsored by ADLINK.