
How Advanced Precision Motors Empower the IoT and Industry 4.0

Friday, November 2nd, 2018

Designers can tap new resources to develop motion-enabled solutions that are powerful, small, precise, and efficient even as the IoT and next-generation smart manufacturing create fresh demands.

This is an exciting time for the electronic design industry as embedded electronic functionality spreads into every facet of modern society.

Electric motors have been around for over a century (the electric trolley has existed since the late 19th century), but the first designs were inefficient, large, and imprecise. The advent of rare-earth magnets and advanced brushless DC (BLDC) motor design (Figure 1) has empowered a new range of motors small enough to fit into confined spaces, powerful enough to do real work, and efficient enough to be used in wireless or remote applications.

The pressure to make products that are highly functional yet cost-effective means that electronic engineers must draw the optimum performance out of every system they design. High-performance BLDC motors can provide the levels of performance and economy that today’s demanding applications require, in settings ranging from the appliances in a smart home, to self-propelled IoT devices, to industrial shop floors, to ‘down hole’ in oil and gas drilling and extraction operations.

Figure 1: Frameless BLDC motor designs like Sensata’s model DIP34 allow for the motor to be fully integrated within a given assembly.

More Affordable Core Technologies Migrate to Industrial
When it comes to industrial applications, the motors in robotic handling and assembly systems must be extremely reliable, cost-effective, and space-efficient. Many of the core technologies enabling this new generation of motors have come down in cost since proving successful in various scientific and mil/aero applications and are being incorporated in a wide range of industrial solutions.

One of the ways to streamline a package design is to incorporate the motor itself into the body of the product, instead of attaching it to an external movement point. A frameless BLDC motor design, also known as a rotor/stator part set, allows the motor to become fully integrated within a given assembly, which results in the greatest possible torque-to-volume ratio. For space-constrained applications, developers have a choice of integration options and motor configurations: depending on the application, either a conventional cylindrical format or a flatter pancake-style motor can be used when the motion axis is space-restricted.

Autonomous Vehicles Go Off Road
Most people think of aircraft or automobiles when the topic of unmanned or autonomous vehicles comes up, but the field reaches farther than that to the depths of the sea. Advancements in the remotely operated underwater vehicle (ROV) industry have created a market for the nimble machines to be used in applications such as search and rescue, maritime security, military, hydro industry, offshore oil rig inspection, and scientific research.

The inspection-class ROVs used in such situations must be small and portable so they can be easily moved to and deployed at the locations where they are needed, ensuring the best operator performance and inspection results. Vehicles of this nature usually operate on less than 10 horsepower and use compact thruster motors directly attached to the propeller for positioning and navigation.

Figure 2: BLDC motors like this housed model from Sensata Technologies are used on inspection-class ROVs, which operate on less than 10 horsepower and rely on compact thruster motors attached to the propeller for positioning.

Inspection-class ROV motors must present a low profile and operate reliably in water, oil, and other liquids. The latest advanced BLDC motors not only meet these requirements; their small size, power, and efficiency also enable a streamlined design with low acoustic noise (Figure 2). Customizable motors serve this specialty market segment with compact designs in both housed and frameless mechanical configurations.

While there are still fossil fuel-powered autonomous vehicles in aerial and wheeled applications, more and more manufacturers are migrating to all-electric systems because of smaller sizes and ease of integration. Electrical systems’ advantages over fossil fuel also include a reduced carbon footprint, less acoustic noise, and lower emissions. The lack of emissions and noise is beneficial to many other applications, such as police and military solutions that require a minimal size and acoustic signature.

Another overlooked advantage that electrical systems possess is the ability to safely operate in almost any human environment, even clean rooms. Such an ability is impossible with a system that emits exhaust of any nature. Automated forklifts, mobile equipment carts and parts bins, shop-floor scooters and other self-driven gear also operate much more quietly when driven by electric motors, making integration into the workplace easier as well.

Maximizing Battery Life
A powerful electric motor operating at a relatively high duty cycle places strict demands upon its energy delivery and storage system. Additionally, the power infrastructure must be as cost-effective as possible. An efficient electric motor reduces those demands.

Maximizing battery life is a decisive factor in a variety of mission-critical systems, where the reliability and longevity of the equipment involved can affect the outcome of that mission. In battery-operated medical devices as well as in remotely piloted military vehicles, the ability to last a few minutes longer in a demanding situation can make a significant difference. This need isn’t restricted to critical systems. Increasing the life of any battery-operated system is now a key initiative for equipment manufacturers, from phones to cars.

In order to address this demand, the latest frameless brushless DC motors are available in multiple configurations, with designs that provide extremely high operational efficiencies in excess of 90%, extending battery life over legacy designs. For battery-dependent applications, the efficiency of the motor, not the battery size, is often the deciding factor in achieving the longest operating time before the battery (or supercapacitor or reflow cell) needs to be charged.
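
As a back-of-the-envelope illustration (not a vendor figure), the sketch below compares runtime for a hypothetical 100 Wh battery driving a 50 W mechanical load through motors of differing efficiency; the pack size, load, and efficiency values are assumptions chosen only to show why efficiency often dominates over battery capacity.

```c
/* Rough runtime estimate: runtime_hours = battery_Wh * efficiency / mechanical_load_W.
 * All values are illustrative assumptions, not vendor specifications. */
#include <stdio.h>

static double runtime_hours(double battery_wh, double efficiency, double load_w)
{
    double electrical_w = load_w / efficiency;   /* power drawn from the battery */
    return battery_wh / electrical_w;
}

int main(void)
{
    const double battery_wh = 100.0;  /* assumed pack energy */
    const double load_w     = 50.0;   /* assumed mechanical load */

    printf("75%% efficient motor: %.2f h\n", runtime_hours(battery_wh, 0.75, load_w));
    printf("92%% efficient motor: %.2f h\n", runtime_hours(battery_wh, 0.92, load_w));
    return 0;
}
```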

Meeting the Additional Demands of Harsh Environments
When a housed BLDC motor is used in harsh environments such as oil and gas operations, additional demands are placed on the design. To address these needs, motors for such applications must be rated to perform at up to 205°C and 30,000 psi, withstand shock loads in excess of 1000 g, and offer vibration resistance of 25 g RMS.

Figure 3: High Pressure High Temperature BLDC motors like Sensata’s model DII-15-60 can handle extreme ambient temperatures and pressures.

For example, the brushless DC motor in Figure 3 is designed for the most extreme ambient temperature and pressure environments. This BLDC motor has been validated under some of the most extensive environmental test protocols available to ensure performance at those high ambient temperatures and pressures while handling high shock and vibrational loads. These motors have a proprietary design that is highly customizable for specific needs, are available in a wide range of torques and speeds, and can be configured with gearboxes or other feedback devices.

Withstanding Harsh Drilling Operations Even Down the Hole
One aspect of the growth in advanced oil and gas drilling and extraction methods is the need for highly reliable, high-performing motors that can survive and operate ‘down the hole’. This requires a rugged BLDC motor specifically developed to withstand the harshest conditions found in drilling operations.

The integrated motors used in drilling equipment for the extraction of oil must continue to operate reliably, regardless of the conditions, because oil and gas extraction is an industry where one hour of downtime can cost upwards of $100,000. Replacing a motor, or any integrated equipment, in a drilling rig can take hours or even days, depending on the situation.

In this case, a high pressure and temperature brushless DC motor can directly address those severe down-hole applications such as mud pulser valves, caliper deployment and sensor positioning. Motors of this nature have been successfully tested to operate under continuous duty in temperatures up to 205°C and pressures up to 30,000 psi.

A custom design with winding variations is often needed to create a motor that not only meets a wide range of controller voltages and currents, but that can endure the most extreme environmental conditions while still working optimally with a gearbox or other feedback device.

The motor’s robust design can also serve other demanding applications in a diverse range of severe and hostile environments where failure or poor operation is not an option.

Advanced automation technology is spreading rapidly through manufacturing, with facilities both old and new implementing the latest in smart manufacturing and intelligent robotic systems. The need for precise motion control is driven by the many demands now placed on the line, from moving products between workstations on the manufacturing floor to the logistics of moving the finished product through a facility. Having the proper motion solution can greatly reduce this pressure for the designer.

For more information, please visit www.sensata.com.


Walter Smith is a Senior Applications Engineer at Sensata Technologies, where he specializes in brushless DC (BLDC) motors. 

Precision Prescription

Thursday, August 23rd, 2018

Why an expanding array of mission-critical equipment—from ventilators that help patients breathe to asteroid-collecting spacecraft—needs the precision that voice coil actuator technology makes possible.

Originally developed for audio speakers, voice coil actuation (VCA) technology is now bringing precise and reliable motion control to a wide range of medical, industrial process, and space applications. Though it has been around for decades, VCA technology (Figure 1) remains a mystery to many design engineers because, until recently, the application spaces in which it is cost-effective have been relatively restricted. Many designers have had to settle for more traditional, but less flexible, solenoid-based devices. Now that powerful MCUs and precise, efficient drivers are readily available, however, advanced linear motion designs using VCAs are easier and less expensive to implement.

Any time an engineer is developing a product that requires highly reliable, highly repeatable, and highly controllable motion, they ought to take a look at VCAs.

Voice coil actuators are simple and robust, yet they are exactly as precise as the input given to them. VCAs accelerate smoothly and quickly to any position within their stroke with nearly zero hysteresis and are limited only by the system’s position-sensing precision and driver capability. Because of this accuracy, these devices suit applications such as medical devices, robotics, and industrial process equipment.

Figure 1: Used in a wide range of medical and industrial applications, an axial voice coil actuator is composed of a permanent magnet situated within a moving tubular coil of wire, all inside of a ferromagnetic cylinder. When current runs through the coil, it becomes an electromagnet that pushes against the permanent magnet, producing an in-and-out, back-and-forth motion.
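
For readers who want the underlying relationship, the force on the coil in Figure 1 is, to first order, the Lorentz force F = B·L·i (gap flux density times active wire length times coil current). The sketch below uses made-up values for B and L purely to show the proportionality between drive current and output force; it is an idealized model, not a characterization of any particular actuator.

```c
/* Idealized voice coil force: F = B * L * i.
 * B = flux density in the gap (T), L = active wire length in the field (m),
 * i = coil current (A). Values below are illustrative assumptions only. */
#include <stdio.h>

int main(void)
{
    const double B = 0.6;    /* assumed gap flux density, tesla */
    const double L = 2.0;    /* assumed active conductor length, metres */

    for (double i = 0.0; i <= 1.0; i += 0.25) {
        printf("i = %.2f A -> F = %.2f N\n", i, B * L * i);
    }
    return 0;
}
```

Because force tracks current almost linearly and there is no cogging, position accuracy ends up limited by the feedback sensor and driver resolution, which is the point made above.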

Precision Control in Medical Devices
One application area that demands critical precision is the medical industry. Devices like drug-dispensing pumps and ventilators do not work on approximations—every microliter of liquid (or air) has to be carefully measured and managed. A VCA-based device’s precise motion control in a medical flow-management system increases accuracy without complexity or bulk.

Beyond meeting strict performance and regulatory standards, devices must also be small and lightweight to allow caregivers to move them from room to room in a facility. This demand for performance in a constrained space lends itself well to a VCA solution.

Figure 2: VCAs such as Sensata’s LAH08-15-000A are often used to control inhalation and exhalation valves on ventilators as their precision, reliability, and small size meet the demands for life-critical applications in the medical industry.

Linear voice coil actuators in particular can be designed to meet the ultra-small size and exacting motion control requirements needed in the medical industry. These tiny VCAs are often used to control inhalation and exhalation valves on ventilators to provide the exact amount of air specified and the necessary reliability for life-critical applications (Figure 2).

When VCA motors include bi-directional capabilities, permanent magnets, and magnetic latches, the VCA can remain in position at one end or the other of a stroke during a power failure, ensuring the valves stay open or closed in a disruptive power situation.

Compact VCAs are available that measure 0.75 inches in diameter, weigh 2.3 ounces, and deliver a peak force of nearly two pounds over an operating stroke of ±2 mm with low hysteresis, zero cogging, high acceleration, and long operating life. This accurate linear motion control can also serve other precision medical systems such as anesthesia machines, ultrasound probes, blood analyzers, and lab equipment.

Fail Safe Operation on Spacecraft
Equipment used in military and aerospace applications is dominated by a need to be as precise as possible in any environmental situation, so it has to be as rugged as it is accurate. In cases like these, custom solutions can help ensure that all the specifications to achieve desired performance are considered in the initial equipment design.

To address the unique requirements of a space-based application, Sensata engineers created a moving-magnet VCA that could handle the harsh requirements for a spectrometer moving mirror on the Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) spacecraft, which launched in September 2016 to rendezvous with the asteroid Bennu in 2018 and return samples to Earth in 2023.

This customized moving-magnet VCA was attached to the moving-mirror assembly of the OSIRIS-REx Thermal Emission Spectrometer (OTES) to position it properly to scan the surface as the spacecraft approached the asteroid. The custom VCA was specifically manufactured to ensure high reliability and fail-safe operation. Redundant technology, low outgassing, and high precision for exact motion-control positioning were all key requirements of the actuator design.

Figure 3: High precision voice coil actuators like Sensata’s model LA05-05 meet the high purity and low outgassing requirements for semiconductor, military, and space applications.

Of course, given the extremely tight confines of a spacecraft, it was also critical to stay within the specific size, weight, and power limits, which were achieved in a compact housing measuring approximately 2” in diameter by 3” long. To meet the critical need for near-zero emissions, Sensata developed the actuator with as few adhesives or inks as possible and only spaceflight-approved materials, using mechanical assembly methods within a clean manufacturing process to eliminate substances that could cause outgassing. In addition, high-energy Neodymium Iron Boron magnets were incorporated to provide superior operating efficiencies.

This custom magnet solution and clean assembly methodology was migrated into a range of actuators with varying sizes, resulting in high performance VCAs measuring 12‒65 mm in diameter and 12‒75 mm in length (at mid-stroke) that meet the needs of semiconductor, military, space, and test and measurement applications (Figure 3).

Low Friction in Industrial and Processing Applications
Moving items from one place to another is an extremely common application, but it isn’t always as simple as hooking up a motor to a conveyor belt. In many continuously operating applications, inconsistent motion caused by excessive friction can not only steal profits but also create work flaws that reduce yields. Applications like these need VCA solutions that exhibit extremely low hysteresis and friction while delivering precise and consistent bi-directional position control (Figure 4).

Figure 4: VCAs deliver low hysteresis/low friction for reliable and precise motion control in a compact package.

Key to the VCA’s low-friction design is a symmetrical flex circuit that prevents friction caused by movement, coupled with a solid brass ball cage that reduces it further. This design delivers hysteresis of just 10 mN, compared with the 50 to 200 mN of similar-style actuators. Other performance attributes of the VCA include a peak force of 4.7 N, a total stroke of 7.4 mm, and a compact size of just 38 mm in diameter by 48.3 mm in length.

Design Tools
Self-contained Voice Coil Actuator Developer’s Kits (Figure 5) are available. The kits include a VCA with a built-in feedback sensor and a programmable controller with PC-compatible motion control software, so users can take advantage of VCA benefits without needing to specify the electronics required for a complete control system. This type of tool can help designers quickly develop an actuation system and demonstrate a working design with velocity, position, force, reciprocation, and acceleration control to address nearly any application.

Figure 5: Self-Contained Voice Coil Actuator Developer Kits enable designers to quickly evaluate and implement a VCA-based motion solution.
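
As a rough illustration of the kind of closed-loop position control such a kit’s controller automates, the fragment below is a minimal proportional-integral loop. The read_position() and set_coil_current() calls, the toy plant response, and the gains are all hypothetical placeholders, not the interface or tuning of any actual kit.

```c
/* Minimal PI position loop for a voice coil actuator (illustrative only).
 * read_position() and set_coil_current() are hypothetical stand-ins for the
 * feedback sensor and driver interface of a real developer's kit. */
#include <stdio.h>

static double plant_position = 0.0;                  /* toy stand-in for the actuator */
static double read_position(void) { return plant_position; }
static void set_coil_current(double amps) { plant_position += 0.5 * amps; } /* toy response */

int main(void)
{
    const double kp = 2.0, ki = 0.1;                 /* assumed loop gains */
    const double target_mm = 1.5;                    /* commanded position */
    double integral = 0.0;

    for (int step = 0; step < 20; ++step) {
        double error = target_mm - read_position();
        integral += error;
        set_coil_current(kp * error + ki * integral); /* drive proportional to error */
        printf("step %2d: position = %.3f mm\n", step, read_position());
    }
    return 0;
}
```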

Moving Forward with VCA Technology
The ability to put advanced, precise linear motion anywhere is empowering legacy application spaces and inspiring designers to take on new challenges that previously could not be addressed due to cost or complexity. For example, a VCA built into an electric-vehicle charging system could raise and align a floor-mounted coil with the one in the bottom of the vehicle for optimum energy transfer, or handle print-head positioning in a high-resolution 3D printer.

The number of areas that can benefit from VCA technology is constantly expanding as engineers use VCAs to address complex, delicate, or sensitive motion applications. The key is to specify and develop the proper solution for the application, building on the strengths of the technology and the user’s needs.


James McNamara is a Senior Applications Engineer at Sensata Technologies, where he is the leading expert on voice coil actuator technology. He has been with the company for 12 years and has 40 years of experience in the electric motor industry, holding positions ranging from Engineering Lab Technician to Design Engineer, Applications Engineer, and Regional Sales Manager.

 

Single Board Computers in the Arm World

Tuesday, June 19th, 2018

Why longer lifetime programs in mil/aero, industrial, and smart city infrastructure applications have reason to notice the arrival of extreme temperature and rugged Arm-based SBCs

Acorn Computers first developed the Acorn RISC Machine, a reduced instruction set computing (RISC) architecture, in the mid-1980s; the architecture is now developed by Arm Limited. Since then, Arm processors have gone through many iterations and have become widely used in mobile and consumer electronics applications. However, smart city, mil/aero, and industrial applications demand embedded computing solutions that are more production-ready, rugged, and tolerant of extreme temperatures. This article describes the various ways to deliver Arm-based processing capability: System on Chip (SOC), System on Module (SOM), and complete Single Board Computers (SBC). The article concludes with a focus on the SBC approach, describing a practical example, its benefits, and its applicability to specific applications.

Three Routes into Designs
In the embedded computing market, Arm-based designs are a distant second to products based on the x86 architecture. But with Arm processors evolving into more capable, higher-performance units, their use in embedded applications is expected to grow at approximately twice the rate of x86-based solutions in the near term. Various sources predict this growth will come from the application of Arm-based solutions in automobiles, sensors, IoT, unmanned vehicles, and other embedded applications, based on their power and cost advantages. Whether based on Arm or x86 technology, computing architecture finds its way into designs via three routes: as an SOC, as a SOM, or integrated into an SBC.

As the SOC name suggests, a system on a chip comprises more than just a processor. The chip may contain processor core(s), memory, clocks, and I/O interfaces such as USB, Ethernet, and others. The advantage of this degree of integration is small size and, in the case of Arm-based SOCs, low power consumption, making them ideal for small devices such as smartphones, where a high degree of miniaturization is valued over flexibility and a multitude of external interface types. Of course, the downside of using a system on a chip is that you get whatever is in the chip: the parts that match your needs and the parts that don’t. However little or however much the designer put into the chip, that is what you have to work with, and what you pay for. Figure 1 shows conceptually the internal functions of an Arm SOC.

Figure 1: Overview of a system on a chip.

A system on module (SOM) adds more flexibility by mounting the SOC on a board and expanding it with the desired memory or I/O interfaces. A SOM approach allows more specialized boards to be built, with larger memory, a greater number of I/O channels, or specific types of I/O that an application requires. Even though some additional functions and signals are included on these boards, they are only “modules,” which are not ready to connect to the outside world. SOMs are primarily CPU-centric products that are meant to act as only one part of an embedded computer. Figure 2 shows an example of a SOM.

Figure 2: Example of a System on Module showing a COM Type 10 connector on the bottom side, which is used to connect to a carrier board.

A carrier board is necessary to convert the raw processor and I/O signals into standard I/O signals and real-world connectors such as Ethernet, DisplayPort, HDMI, USB, SATA, etc. A SOM thus provides a convenient head start for someone designing their own system, but anyone requiring a ready-to-go solution will also need a companion carrier board. A SOM and carrier board set is shown in Figure 3.

Figure 3: Example of a board set. The connectorized carrier board is on the top, and the system on a module is underneath.

The last category, Single Board Computers, is quite different from SOCs or SOMs. SBCs do not require additional carrier cards, companion boards, connector break-out boards, or other add-ons to function. They are ready to turn on and run application software. Although SBCs are quite common in the x86 world, they have traditionally been less available in Arm-based products. Fortunately, this is changing. As Arm processors become more powerful, and now include video output and other important I/O features, the availability of Arm-based SBCs is on the rise.

As an example, Figure 4 shows the Tetra SBC from VersaLogic Corp. This SBC is based on the i.MX6 quad-core processor.

Even though it’s a compact SBC, the Tetra model in this example does not sacrifice capabilities to get there. It includes Gigabit Ethernet, USB ports, SATA, HDMI, CAN, MIPI camera input, serial I/O, and audio I/O. There is even a 6-axis e-compass option for use in motion sensitive applications such as unmanned vehicles.

Figure 4: “Tetra” quad-core Arm-based SBC.

Applications Are Changing, Too
Applications for Arm-based single board computers are changing as well. Boards like the Tetra and the smaller Zebra board have been designed from the ground up for challenging embedded applications. They meet the demanding requirements of industrial and mil/aero applications, including extended temperature operation (-40° to +85°C) and MIL-STD shock and vibration standards. Even in less demanding applications, such as parking garage sensors or smart meters, there is a need to operate over a wide temperature range, especially inside a sealed box. As the “smart city” concept grows, applications of this type will increasingly require cost-effective, rugged single board computers to provide computing power where it is needed.

Figure 5: VersaLogic’s Zebra SBC is a smaller (95 x 95 mm) SBC with single- and dual-core processor options. It includes industry-standard I/O ports and supports full -40° to +85°C operation.

In summary, users have access to Arm computing capability via system on chip, system on module, and single board computer offerings. The SBC route provides an off-the-shelf production-ready option. For OEMs this cuts six to twelve months from the product design schedule versus having to design custom I/O and expansion solutions around SOCs or SOMs. SBCs allow the design team to focus on development of the application rather than the computer board. The advent of extreme temperature and rugged Arm-based SBCs expands their applicability and, when coupled with long product lifecycles, makes them an attractive solution for longer lifetime programs in mil/aero, industrial, and smart city infrastructure applications.


Bob Buxton is Product Manager, VersaLogic Corporation, Tualatin, Oregon

Bob Buxton brings more than twenty years of experience in both R&D and product management roles. He has worked within, and has provided products to, the mil/aero segment. His R&D experiences have been primarily in connection with radar and microwave sub-system design.

He is currently working in product management at VersaLogic Corporation, a leading provider of embedded computers which are designed for the most demanding applications. VersaLogic is located in Tualatin, Oregon.

Bob holds a master’s degree in Microwaves and Modern Optics from University College, London and an MBA from George Fox University, Newberg Oregon. He is a Chartered Engineer and a Member of the Institution of Engineering and Technology.

 

TSN: Network Enabler

Wednesday, June 13th, 2018

Why the Industrial IoT sees Time-Sensitive Networking’s value

Only six months in, 2018 has proven to be a breakthrough year for the expansion of the Industrial Internet of Things (IIoT). And, as it continues to grow, the IIoT represents some of the biggest opportunities for manufacturers and other key stakeholders in factory automation and other industrial markets and applications. A 2017 McKinsey & Company study stated that “the potential value that could be unlocked with IoT applications in factory settings could be as much as $3.7 trillion in 2025, or about one third of all estimated potential economic value.” The new capabilities and opportunity found in industrial settings demand streamlined operational and manufacturing models.

Network infrastructure improvements are a key step in the development of solutions that can optimize systems and operational management models while helping bring companies to the forefront of this growth. The need for interoperability, efficiency, and determinism on the network increases as new automation capabilities arise. Enter Ethernet-based Time-Sensitive Networking (TSN).

Understanding Time-Sensitive Networking’s Evolution
Since its inception in the 1970s, Ethernet has morphed from a CSMA/CD technology, in which hubs connected multiple segments, into today’s switched networks. Over the years, hubs have given way to switches, and collision-detection methods have migrated to switching algorithms. These switches and algorithms weren’t traditionally deterministic. Further complicating configuration, all the functionality from 40 years of Ethernet was retained and aggregated into each switch. In the last 20 years, Ethernet has evolved to enable next-generation control systems.

To better understand the evolution of TSN, one needs to look back to the late ‘90s and early 2000s, when real-time Ethernet lacked a common foundation and each industry was creating its own approach, with network protocols all competing for adoption. Protocols such as Profinet, EtherNet/IP, Ethernet Powerlink, EtherCAT, Modbus-IDA, and SERCOS III fell short of offering true flexibility and interoperability, serving only individual solutions and applications on the network. Members of what is now the IEEE 802.1 Time-Sensitive Networking task group assembled to address this issue in the late 2000s, initially developing the Audio Video Bridging (AVB) standards and later expanding them into the TSN standards. The change of name to Time-Sensitive Networking reflected the group’s broader scope and capabilities in industrial and automotive settings.

Work by IEEE 802, the Internet Engineering Task Force (IETF), and other standards groups has extended the ability to operate time-sensitive systems over standard Ethernet networks, supporting diverse applications and markets including professional audio/video, automotive, and industrial. These standards define new mechanisms for creating distributed, synchronized, real-time systems using standard Ethernet technologies that allow the convergence of low latency control traffic and standard Ethernet traffic on the same network.

Figure 1: Converged TSN Ecosystems and the role Avnu Alliance plays

TSN: A More Capable Ethernet
The updates to standard Ethernet with TSN offer a variety of benefits and differentiators, including:

  • Bounded, low latency data transfer for control
  • Shared synchronized time
  • High bandwidth
  • Convergence of control and standard Ethernet traffic
  • Security enhancements aligned with IT standards.

TSN supports real-time control and synchronization, for example between motion applications and robots, over a single Ethernet network. TSN can, at the same time, support other common traffic found in manufacturing applications, driving convergence between IT and operational technologies. TSN brings a holistic approach to network management, requiring new tools that enable offline modeling of network traffic and simulation of loading prior to acquiring hardware or commissioning in the field. Scheduling becomes mathematically calculable, allowing system designers to predict whether the network will meet its requirements.

The benefits of TSN include changes to the workflow for designing and planning networks, which now relies on network calculus and up-front planning to manage traffic and guarantee performance. In this new paradigm, payload, sampling frequency, and maximum latency can all be managed from a system-wide view to calculate flows and configure bridges and infrastructure to meet these demands. If a design proves inadequate, or a solution is not achievable given system constraints, the network design can be modified to accommodate the system requirements.
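
As a simplified illustration of that system-wide calculation, the sketch below takes an assumed control flow (payload per sample, sampling frequency, hop count) and checks its reserved bandwidth and a naive store-and-forward latency bound against a gigabit link. Real TSN planning tools model gate schedules, interfering traffic, and per-hop queuing that this toy deliberately omits; every number here is an assumption for illustration only.

```c
/* Toy flow budget check for a time-sensitive stream (illustrative only).
 * Computes reserved bandwidth and an idealized serialization latency over
 * a chain of hops; real TSN network calculus is considerably richer. */
#include <stdio.h>

int main(void)
{
    const double link_bps       = 1e9;     /* gigabit Ethernet link */
    const double payload_bytes  = 256.0;   /* assumed control payload per sample */
    const double overhead_bytes = 42.0;    /* Ethernet framing, VLAN tag, gaps (roughly) */
    const double sample_hz      = 10000.0; /* assumed 10 kHz control loop */
    const int    hops           = 4;       /* assumed switch hops end to end */

    double frame_bits   = (payload_bytes + overhead_bytes) * 8.0;
    double reserved_bps = frame_bits * sample_hz;
    double latency_s    = hops * (frame_bits / link_bps); /* serialization per hop only */

    printf("reserved bandwidth: %.2f Mbit/s (%.2f%% of link)\n",
           reserved_bps / 1e6, 100.0 * reserved_bps / link_bps);
    printf("idealized latency over %d hops: %.2f us\n", hops, latency_s * 1e6);
    return 0;
}
```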

As TSN-supported network infrastructure becomes more prevalent, many of today’s modified Ethernet networks can move to TSN based networks, using their OT-based application layers on standard 802.1 Ethernet. TSN also fills an important gap in standard networking quality of service, namely guaranteed latency and delivery for critical traffic. Automation and control applications require consistent delivery of data from sensors to controllers and actuators in highly dependable and precise time intervals. TSN ensures that specific traffic is delivered in a timely manner by securing bandwidth and time in the network infrastructure for that purpose while supporting all other forms of traffic. This enables users and vendors to converge networks and derive benefits from the increased connectivity to more and more devices.

By simplifying convergence and increased connectivity while unlocking the critical data needed to achieve improved operations driven by big data analytics, TSN makes the case for its IIoT business value. Designers can take advantage of the advancements in processing, communications, software, and system design with a standard that evolves while providing precise timing.

Driving TSN Adoption
One group driving adoption and the ecosystem is Avnu Alliance. Avnu Alliance fosters and develops an ecosystem of manufacturers providing interoperable devices for pro AV, consumer, automotive, and industrial networked systems. The industrial segment within Avnu includes member companies across the entire ecosystem and supply chain, including network infrastructure vendors, silicon providers, and industrial suppliers, who all work together to create an ecosystem of interoperable low-latency, time-synchronized, highly reliable networked devices built on TSN base standards.

To promote the shared network and accelerate products to market, Avnu Alliance facilitates a common technical platform through various services, including open-source software, hardware reference designs, test plans, and third-party certification to develop and verify the correct operation and implementation of TSN-enabled products.

Avnu Alliance has built a rich set of conformance and interoperability tests with a defined procedure for certification in various markets. Leveraging that multi-industry experience, Avnu defined a baseline certification with robust and comprehensive tests based on the market requirements for industrial automation devices and silicon. These conformance tests ensure that the device or silicon conforms to the relevant IEEE standards, as well as additional requirements that Avnu has selected as necessary for proper system interoperability.

Avnu is committed to accelerating the path to an interoperable foundation. Avnu’s conformance test tools, testbeds, open-source code, and certification program support vendors with rapid adoption of the TSN standards. With Avnu’s initiatives in play for TSN, system designers and engineers will no longer architect a network that becomes outdated before it’s even installed. As the need for time-synchronized communications continues to grow, Avnu Alliance will work to ensure that more companies are able to benefit from TSN’s advantages with successful implementations in industrial settings and beyond.

IIoT promises a world of smarter, hyper-connected devices and infrastructure in which manufacturing machines, transportation systems, and the electrical grid are outfitted with embedded sensing, processing, control, and analysis capabilities. Once networked together, they’ll create a smart system of systems that shares data between devices, across the enterprise, and in the cloud. Facilitating a community of industry leaders actively pursuing standardization recognizes that deterministic networking is going to be critical for the industrial environment.

To learn more about Avnu Alliance and upcoming event participation and activities, visit www.avnu.org.


Tom Weingartner is the Marketing Director for the Deterministic Ethernet Technology Group at Analog Devices and has 30 years of experience with semiconductors and embedded systems, ranging from ASIC design to avionics system development. Weingartner is responsible for ADI’s deterministic Ethernet product line, including Industrial Ethernet and Time-Sensitive Networking standards.

Analytics to Answer the Next Question and the One After That

Friday, June 2nd, 2017

New tools are emerging as the need to drive information to and from the shop floor becomes even more key.

Editor’s Note: Following the news that Plex Systems has extended its IntelliPlex Analytic Application Suite to encompass production analytics, EECatalog took the opportunity to ask Plex Systems and Hatch Stamping, which employs Plex Systems solutions, about approaches to capitalizing on the Industrial Internet of Things (IIoT) and related topics. We thank Janice D’Amico, Plex Specialist Lead at Hatch Stamping, Karl Ederle, Vice President of Product Management, Plex Systems, and Alexi Antonino, Product Manager of Business Developments, Plex Systems for their thoughtful responses.


Janice D’Amico, Plex Specialist Lead, Hatch Stamping

EECatalog: How do you define the “Industrial Internet of Things”?

Plex Systems: The Industrial Internet of Things is about asking the question, “What happens when every part of my extended manufacturing process—from materials to the customer—is connected?” How will that open new areas of innovation, bring us closer to customers, enable new manufacturing processes, and even create new business opportunities? For a lot of organizations, the hard part is knowing where to start. The cloud winds up being a great foundation because it is inherently connective, so each machine or device you bring into the operation can be plugged in, in order to share data and open new opportunities. At Plex, one of our favorite things to do is go out on the customer’s plant floor and experiment with new connected devices and technology; inevitably we learn something new or discover a use case we never imagined.

Karl Ederle, Vice President of Product Management, Plex Systems

EECatalog: How do IntelliPlex production analytics fit into the IIoT for Hatch?

Hatch Stamping: For years Hatch has been extracting data from many different sources and manually building the company’s key measurable system in Excel with a lot of human intervention. Our management teams meet weekly and monthly to review those manual reports. Our intent with analytics is to work with each functional department across the organization, determine what should be measured and how that can be captured within Plex, and then drive analytics to provide real-time dashboards at our fingertips anytime and anywhere. This will naturally improve the accuracy of the measurable system and empower decision making instead of reactionary responses.

EECatalog: What else would you like our readers to know that may not have been addressed, or covered in depth, in the release about your IntelliPlex Production Analytic Application and “visibility all the way down to the shop floor”?

Plex Systems: The IntelliPlex Analytic Applications Suite—including the newest Production Analytic Application—is built on a powerful platform that takes this huge trove of data stored in Plex and makes it instantly available for business decision-making. We built a cloud-based data warehouse that allows for fast data analysis and manipulation, while also enabling users to drill down into the transactional detail. To do that with a traditional, on-premise system would require huge investments in standalone hardware and software. For our customers, they just activate it. Sometimes the behind-the-scenes stuff is much more powerful than people imagine.

“What may start out as a concern about scrap levels on a specific production line can quickly turn into a measure of materials quality and supplier performance.”

Because Plex analytics leverage our core system, they also open opportunities for organic growth. Each time an organization adds end points, such as adding new tools, equipment, systems, or devices like wearables, those new connections contribute to intelligence gathering and the resulting analysis and impact. The ability to use these tools, layered onto data that can be trusted, adds up to a future where manufacturers can freely experiment and explore ways to improve their businesses.

Alexi Antonino, Product Manager of Business Developments, Plex Systems

EECatalog: In moving from turnkey analytics for sales, order management, procurement and finance to analytic capabilities for production, what were the pitfalls to avoid and how did Plex experience help?

Plex Systems: The Plex Manufacturing Cloud connects all aspects of manufacturing, from suppliers to production line equipment, people, processes and even customers. That means we collect data from every corner of a manufacturer’s business, so our approach to analytics is to start with a very simple idea: What questions do our customers have about their business, and how can we help with quick, accurate answers?

It turns out that manufacturing questions are complex. What may start out as a concern about scrap levels on a specific production line can quickly turn into a measure of materials quality and supplier performance. The key with our analytic applications is that you can analyze multiple facets of the business—quality, scrap, supplier scorecards, financial performance—all from one dashboard. We look at things holistically because that’s the way our customers run their enterprises.

EECatalog: What helped Hatch decide to invest in bringing strong analytics to your shop floor?

Hatch Stamping: Hatch has always been keen to drive information to and from the shop floor, and believes there isn’t anyone who wants success for us more than our operators, who take pride in their productivity and the quality of their work. With the launch of Plex in 2007 we have been providing real-time information to our people from the shop floor to the executive boardroom. When we learned of analytics we were excited about the opportunity to improve reporting features and glamorize the process seamlessly. Throughout our facilities today, we demonstrate live screens of the Visual Shop Floor and Workcenter Replenishment. With the Analytic tools, we are standardizing our dashboards to display across the organization as well, in real-time.

EECatalog: Did you face any internal challenges when choosing to implement production analytics?

Hatch Stamping: People fear change. We had a lot of manual effort going into producing the company’s high-level and low-level ops. We were not interested in replacing any of our people with analytics; instead, we were interested in making better use of our people’s time. Because analytics is so easy to develop, our IT team has been able to structure the high-level and low-level ops and hand them off to the administrators of the weekly and monthly meetings, who are then trained in how to structure the dashboards to their liking for those meetings. This is one of the strongest features that sold us on analytics: you don’t have to have full-time programmers on staff to use it, and you can build it from department to department with security at different levels to protect the structure overall.

EECatalog: How has listening to customers and potential customers made a difference?

Plex Systems: Plex was built, literally, by walking back and forth between the shop floor and the offices of a manufacturer, understanding the needs of the business and finding ways to manage processes and collect accurate data where it happened. We still take that approach today, spending time with customers in their work environments, collaborating via our online community and with our strategic advisory board. For our analytic solutions, two key design factors directly relate to customer feedback. First, the solutions had to deliver value instantly with a set of turnkey analytics that work as soon as the customer turns them on. Second, they had to be able to extend into new areas of data and new performance indicators based on an individual customer’s need. The idea is that we answer all the questions you have today, but we make the solution capable of answering the next question and the one after that as well.

EECatalog: Why can Plex Systems pivot quickly if needed?

Plex Systems: The power of Plex is that we are the backbone for our customers’ business, which means our systems are managing all aspects of manufacturing. About 50 percent of the data we manage comes from the shop floor; the rest comes from connections to suppliers, customers, and business administration activity. The breadth and depth of Plex data effectively means we can answer almost any question a customer has about their operations. Every time we deliver a new analytic application, there is always one new measurement or key performance indicator a customer asks for—that’s the foundation of our continuous innovation model.

EECatalog: How have IntelliPlex production analytics enabled Hatch to respond to customers more quickly and make your data more actionable?

Hatch Stamping: Analytics has all the bells and whistles to woo the observer. Our initial efforts have been in building dashboards for our internal use. The second phase will be building our dashboards for our Customer and Supplier Portals. We see the power of including these deliverables in our sales presentations as well as in our supplier auditing sessions.

EECatalog: What competitive advantages have you seen since deploying analytics?

Hatch Stamping: As a world leader in stamping manufacturing, we know that knowledge is power. We are confident that using Plex with analytics gives us a leading edge over competitors who still struggle with outdated systems and data. We are seeing that come to fruition in our sales growth, which has increased 25 percent in the last two years.

Driving the I2C bus with Next-Generation Buffers

Monday, May 22nd, 2017

While next-generation I2C buffer devices are good for putting the I2C control buses you already know to work, their true claim to fame could be how they assist with cost, power dissipation, and design complexity, helping system control architectures mature along with overall system design.

Over the past several decades, the Inter-Integrated Circuit (I2C) bus standard has been the dominant control bus standard for most electronic systems. I2C has a loyal following because of its ease of implementation, flexibility, and the large ecosystem of integrated circuits that support the standard. I2C’s ubiquity as the control interface of choice often drives device selection decisions, which in turn shape the overall system architecture.

“Not having pull-up resistors also avoids multiple points of possible system failure.”

System management buses like I2C have become important control points for differentiating designs through system software (firmware). Given the importance placed on system control buses like I2C, it’s crucial that you get the maximum benefit out of your I2C bus to help solve implementation issues and meet system design objectives.

You probably face a multitude of challenges when designing modern electronic-based systems. For example, keeping power dissipation to a minimum is often key for battery-powered systems, while for industrial systems power dissipation is directly connected to thermal performance.

An Opportunity to Reduce Cost
Another area where you’re likely challenged is in system build cost. Market-price competition often drives the need to reduce cost from one product generation to the next. Often, system cost is directly related to design complexity. Complex system designs mean more components, software, and engineering effort to design and test systems. Any opportunity to reduce system complexity is often an opportunity to reduce system cost.

A system’s I2C bus implementation can play a role in helping address cost, power, design complexity, and performance challenges. Next-generation I2C buffers break from traditional I2C implementations by offering a solution that can help you address modern system design issues while staying true to the I2C heritage that has made the standard so widely accepted.

With I2C buffers like TI’s TCA980x series, you can solve common I2C design issues such as level translation, bus buffering, and bus capacitive loading while also tackling system-level challenges such as power, cost, and complexity.

Several key features and characteristics help distinguish the new class of I2C devices from other I2C buffering solutions. For example, placing current-source drivers on the B-side port is a departure from the traditional voltage-based implementations found on virtually all I2C buffers and provides multiple benefits. The B-side bus lines do not need the pull-up resistors commonly used on traditional I2C implementations, and their elimination provides incremental system cost savings, especially for larger control bus implementations. Not having pull-up resistors also avoids multiple points of possible system failure.

Longer Battery Life
The TCA980x’s current-mode implementation means that the I2C bus operation will consume much lower power. Power consumption for the TCA980x is approximately 20 times lower than comparable voltage-mode devices (75μA vs. 1,500μA); see Figure 1. At a system level, lower power dissipation translates into longer battery life and much better thermal performance. The ability to improve battery life is likely to be one of the easiest ways that you can help achieve system design goals without having to compromise performance.

Figure 1: TCA980x vs. traditional I2C buffer static current consumption comparison

I2C buffers also provide the added benefit of I2C-level translation, with support down to 0.8V. Support for 0.8V helps future-proof your I2C control bus designs as peripheral and processor input/output (I/O) voltages move lower over time. In addition, the flexibility to support device I/O voltages down to 0.8V gives you more options in signal-chain device selection.

Another common issue is having to tweak pull-up resistors to meet system timing parameters, especially rise time, for heavily loaded bus implementations. Figure 2 compares TCA980x rise-time performance vs. traditional pull-up-based I2C bus implementations. As you can see, for heavily loaded bus environments, current source-based buffer device designs have superior rise-time performance compared to traditional I2C buffers.

Figure 2: Rise-time performance of current-source driver vs. a traditional voltage/resistor approach
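
For reference, the pull-up side of that comparison can be estimated with the standard RC relationship: the I2C specification measures rise time between 30% and 70% of VDD, so t_r ≈ ln(0.7/0.3) × Rp × Cb ≈ 0.847 × Rp × Cb. The sketch below uses assumed resistor and bus-capacitance values (not TI data) to show how a heavily loaded bus pushes a resistor-based design past the 300 ns Fast-mode limit.

```c
/* Estimated rise time for a resistor pull-up I2C bus:
 * t_r ≈ ln(0.7/0.3) * Rp * Cb ≈ 0.847 * Rp * Cb (30%-70%, per the I2C spec).
 * Values are illustrative assumptions, not measurements. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double rp_ohms = 2200.0;            /* assumed pull-up resistor */
    const double k = log(0.7 / 0.3);          /* ~0.847 */
    const double fast_mode_limit_ns = 300.0;  /* Fast-mode (400 kHz) maximum rise time */

    for (double cb_pf = 100.0; cb_pf <= 400.0; cb_pf += 100.0) {
        double tr_ns = k * rp_ohms * cb_pf * 1e-12 * 1e9;
        printf("Cb = %3.0f pF -> t_r = %5.1f ns %s\n", cb_pf, tr_ns,
               tr_ns > fast_mode_limit_ns ? "(exceeds Fast-mode limit)" : "");
    }
    return 0;
}
```

Recovering the rise time by lowering Rp raises the static current drawn whenever a line is held low (roughly VDD/Rp), which is exactly the power/performance tradeoff a current-source driver sidesteps.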

You can appreciate the combined benefits of the new I2C buffers when evaluating the benefits at a system level. System control and communication buses are essentially the central nervous systems of their respective applications. All too often, designers must compromise on the design of their control bus implementations, like not tweaking pull-up resistor values to save engineering time and cost.

Another area where designers often compromise is in optimizing the operating power consumption of the I2C control bus. The size and expanse of most I2C bus designs often require more engineering time than you may have in your budget. New I2C buffers enable you to architect control buses that allow core system components to operate at their intended maximum level. I2C buffers based on current-source drivers, with their superior timing characteristics, lower operating current consumption, and lower-voltage support, provide the functionality and performance needed to implement control buses from a system-level perspective.

Summary
Current source-based I2C buffer devices represent a modern approach to implementing I2C buses. With next-generation I2C buffer devices, you can not only implement I2C control buses that you are familiar with, but also address critical design targets such as cost, power dissipation and design complexity. You can implement system control architectures that evolve with the overall system design rather than being a bottleneck in the system design process.


Atul Patel is a Product Marketer and New Business Development Manager for industrial markets within TI’s Standard Logic product line. Patel has more than 20 years of systems and marketing experience with analog and mixed-signal devices covering a broad spectrum of markets, including industrial, automotive, and telecom. He has a Bachelor of Science degree in Computer Engineering as well as an MBA from the University of Central Florida.

Technology Convergence Enables Industrial IoT Solutions

Friday, March 17th, 2017

Ever since Thomas Edison flipped the switch to power the first electric light, the pace of electronic innovation has never let up. With the invention of the transistor and then the integrated circuit, innovation within the electronics industry has developed at breakneck speed. Today’s modern ICs contain upwards of 20 billion transistors. That scale means significant performance and tightly integrated heterogeneous systems on the same die.

Figure 1: Processing requirements, pixel size, and frame rate.

This performance and integration capability has woven electronic innovation deeply into modern life. We now have a convergence of applications that combine, among other capabilities, wired and wireless networking, vision processing, industrial control, and cloud computing to create what are often known as smart products. These smart products are capable of interconnecting to form the Internet of Things (IoT). When these smart products are applied within an industrial context, we refer to this as the Industrial Internet of Things (IIoT). Creating IIoT solutions brings with it several challenges. Let’s look at what IIoT systems are, the challenges they face, and how we can address these using the Zynq®-7000 SoC and Zynq® UltraScale+™ MPSoC from Xilinx.

Application and Challenges of IIoT
The application of Industrial IoT is wide ranging and extends across automation and connectedness of the power grid to planes, trains, automobiles, shipping, and factory automation. General Electric, for example, is adding intelligent and connected systems across the many industries it serves, including power grid, transportation, oil and gas, mining, and water. In rail transportation, for instance, GE is outfitting its locomotives with smart technologies to help prevent accidents and monitor systems for wear and tear, enabling more accurate analysis for preventative and predictive maintenance. At the same time, GE is also diligently building smart rail infrastructure equipment that it has networked with its locomotives. This allows railway operators to run their lines and schedule maintenance accordingly to keep freight and passengers moving more efficiently and safely.

The above example demonstrates several of the challenges faced by the IIoT embedded system designer. IIoT systems need to be capable of interfacing with and supporting a wide range of sensors, from simple MEMS-based sensors such as accelerometers to more complex sensors such as CMOS image sensors (CIS). These sensors enable the IIoT system to understand its environment and become more context-aware. However, many applications require the IIoT system to not only understand its environment but also interact with it. Therefore, the system must contain intelligence so that it is capable of processing at the edge and thus interfacing with and controlling the relevant actuators, motors, or other drive interfaces. Of course, the system is also required to be network-enabled and to support a broad base of industrial standards and protocols. This ability to communicate over networks, coupled with the remote and often isolated installation of IIoT systems, also demands secure operations and communications.

Determining the processing capability an IIoT system needs will depend upon the application, interfaces, and system throughput. One of the largest driving factors is the use of embedded vision-based IIoT systems within what is increasingly called Industry 4.0, which introduces automation and data exchange to manufacturing, with equipment interconnected via the IIoT and the cloud. When embedded vision is used within the IIoT, key parameters of the image sensor significantly drive the needed processing capability, typically defined by:

  • Sensor Resolution: The number of pixels per line (horizontal) and the number of lines (vertical).
  • Frame Rate: The number of times the entire sensor is read out each second.
  • Data Width: The number of bits used to represent the pixel value.

The combination of these parameters defines the key interfacing requirements and the data rate of the processing chain, as shown in Figure 1.
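
As a back-of-the-envelope illustration of how those three parameters combine, the sketch below computes the raw pixel data rate for a few assumed sensor configurations; real interfaces add blanking intervals and protocol overhead that this ignores.

```c
/* Raw sensor data rate = width * height * frame_rate * bits_per_pixel.
 * Blanking intervals and interface protocol overhead are ignored here. */
#include <stdio.h>

static double raw_rate_gbps(int width, int height, int fps, int bits_per_pixel)
{
    return (double)width * height * fps * bits_per_pixel / 1e9;
}

int main(void)
{
    printf("1280x720  @ 60 fps, 10-bit: %.2f Gbit/s\n", raw_rate_gbps(1280, 720, 60, 10));
    printf("1920x1080 @ 60 fps, 12-bit: %.2f Gbit/s\n", raw_rate_gbps(1920, 1080, 60, 12));
    printf("3840x2160 @ 30 fps, 12-bit: %.2f Gbit/s\n", raw_rate_gbps(3840, 2160, 30, 12));
    return 0;
}
```

Even these modest configurations reach gigabit-class raw rates before any processing begins, which is why edge processing capability matters so much for vision-based IIoT designs.
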
It is not just the image processing algorithms that system designers need to consider. Many IIoT applications are deployed in harsh conditions: not only vibration, shock, and temperature extremes, but also electrically noisy environments. Data received from even the less complex sensors may therefore need to be filtered before the system acts on it or communicates it onwards. Depending upon the complexity required, this could be a simple rolling-average filter, or a more complex implementation such as a Finite Impulse Response (FIR) filter may be needed to remove unwanted noise. Regardless of the method chosen, signal processing, conditioning, and generation form an important part of IIoT systems.
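As an illustration of the two filtering options mentioned above, the C sketch below implements a rolling average and a direct-form FIR filter. The window length and tap count are arbitrary placeholders; a real design would size them from the noise characteristics of the sensor in question:

```c
#include <stddef.h>

#define WINDOW 8   /* rolling-average length (placeholder value) */

/* Rolling (moving) average over the last WINDOW samples. */
float rolling_average(float new_sample)
{
    static float window[WINDOW];
    static size_t idx;
    static float sum;

    sum -= window[idx];        /* drop the oldest sample   */
    window[idx] = new_sample;  /* store the newest sample  */
    sum += new_sample;
    idx = (idx + 1) % WINDOW;

    return sum / WINDOW;
}

/* Direct-form FIR filter: y[n] = sum_k coeff[k] * x[n - k].
 * 'delay' must hold 'taps' previous samples and persist between calls. */
float fir_filter(float new_sample, const float *coeff, float *delay, size_t taps)
{
    /* Shift the delay line and insert the newest sample. */
    for (size_t k = taps - 1; k > 0; k--)
        delay[k] = delay[k - 1];
    delay[0] = new_sample;

    float acc = 0.0f;
    for (size_t k = 0; k < taps; k++)
        acc += coeff[k] * delay[k];

    return acc;
}
```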

The sensor processing requirements combined with the communication throughput contribute significantly towards the processing capability required. For connecting to the Internet, the system’s application requirements, as determined by the system architect, dictate the primary method used. This will include several networking standards ranging from 4G for remote and mobile applications, to WiFi, Bluetooth, and Bluetooth Low Energy (BLE), and wired connectivity for factory or fixed applications. In some applications, ad hoc networking may also be required.

Another significant factor in determining the processing requirements is the response time, or latency, of the system. Again, this can differ from application to application. An Industry 4.0 inspection application detecting a manufacturing defect on a production line will require a much lower response time than, for instance, a predictive maintenance application on rolling stock.

The system also needs to be secure. It’s not just the encryption of communications to and from the system that’s demanded; the security and trustworthiness of the system itself are also needed. These are often called Information Assurance (IA) and Threat Protection (TP). For IA, a typical approach is to implement encryption such as the Advanced Encryption Standard (AES) or one of the IoT-specific algorithms like SIMON or SPECK. TP is more complicated, as it requires an evaluation of the threats at both the device and system level. Anti-tamper protection at the device level is as critical as that at the system level and will vary from application to application. In remote, isolated, or critical applications, the system designer will need to ensure that the performance and integrity of the system cannot be affected or tampered with by an unauthorized party.
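To make the IA side concrete, the sketch below shows one way a Linux-class IIoT node might encrypt outgoing data with AES-256 in CBC mode using OpenSSL's EVP interface. It is only an illustration of the general approach, assuming OpenSSL is available on the target; the key and IV are passed in as placeholders and would come from the system's own key-management scheme:

```c
#include <openssl/evp.h>

/* Encrypt 'plain_len' bytes with AES-256-CBC via OpenSSL's EVP API.
 * 'key' is 32 bytes, 'iv' is 16 bytes; both are placeholders here and
 * would be provided by the system's key-management scheme in practice.
 * Returns the ciphertext length, or -1 on error. */
int encrypt_telemetry(const unsigned char *plain, int plain_len,
                      const unsigned char *key, const unsigned char *iv,
                      unsigned char *cipher)
{
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int len = 0, cipher_len = 0;

    if (!ctx)
        return -1;

    if (EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv) != 1)
        goto fail;
    if (EVP_EncryptUpdate(ctx, cipher, &len, plain, plain_len) != 1)
        goto fail;
    cipher_len = len;
    if (EVP_EncryptFinal_ex(ctx, cipher + cipher_len, &len) != 1)
        goto fail;
    cipher_len += len;

    EVP_CIPHER_CTX_free(ctx);
    return cipher_len;

fail:
    EVP_CIPHER_CTX_free(ctx);
    return -1;
}
```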

Summarizing the above, we can identify several high-level challenges that the IIoT designer must address:

  1. Ability to interface with and control a wide range of sensors, actuators, motors, and other application-specific interfaces.
  2. Processing capability with the ability to process at the edge within the required response time.
  3. Communication support for a range of wired and wireless technologies.
  4. Security and the ability for the device and system to be secure both in terms of IA and TP.
  5. Functional safety support (SIL levels) where the application demands it.

Rising to the Challenge
The system architect can address these high-level challenges by selecting a device from the Xilinx All Programmable Zynq-7000 SoC family or the Zynq UltraScale+ MPSoC family. These heterogeneous processing systems include a complete ARM processing complex of processor subsystem and peripherals: the Zynq-7000 provides single- or dual-core ARM® Cortex™-A9 processors, while the Zynq UltraScale+ MPSoC provides dual- or quad-core Cortex-A53 application processors, dual Cortex-R5 real-time cores, a Mali GPU, and, in select devices, a video codec supporting H.264/H.265. All of this is closely coupled with programmable logic and configurable I/O, allowing the system designer to create an optimal solution for many IIoT applications.

As identified in the summary of the previous section, the Zynq All Programmable SoC family addresses a wide breadth of sensor and connectivity modalities, provides configurable machine-learning engines for analytics with the responsiveness required for real-time precision machine control, and offers multi-layered security alongside multi-level safety.

One of the main advantages of using a Zynq SoC solution is that it provides any-to-any interfacing, enabling connection with industry-standard, legacy, and evolving interfaces (such as TSN). Within the processing system (PS) of the Zynq-7000 and Zynq UltraScale+ MPSoC, the user is provided with several standard peripheral interfaces, ranging from basic low-speed standards like SPI, I2C, and UART to more complex ones such as CAN, Ethernet, USB 3.0, PCIe, SATA, and DisplayPort (the exact set depends on the device family). These integrated peripherals enable the design engineer to connect a wide range of sensors. However, should an interface be required that the processor doesn’t support, for instance a CIS or a high-speed ADC or DAC, designers can utilize the programmable logic (PL) to implement the required peripheral interface. This ability to use the PL to create the interface required comes into its own when there is a need to interact with a proprietary or legacy interface.
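As a simple example of the kind of sensor interfacing involved, the sketch below reads a register from a hypothetical I2C temperature sensor on an embedded Linux target using the standard i2c-dev interface; the bus number, device address, and register are assumptions chosen purely for illustration:

```c
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>
#include <stdio.h>
#include <stdint.h>

/* Read one register from a hypothetical I2C temperature sensor at
 * address 0x48 on bus i2c-0; the address and register are examples only. */
int main(void)
{
    int fd = open("/dev/i2c-0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    if (ioctl(fd, I2C_SLAVE, 0x48) < 0) { perror("ioctl"); close(fd); return 1; }

    uint8_t reg = 0x00;               /* register to read (example)   */
    uint8_t raw[2];

    if (write(fd, &reg, 1) != 1 ||    /* select the register          */
        read(fd, raw, 2) != 2) {      /* read back two data bytes     */
        perror("i2c transfer");
        close(fd);
        return 1;
    }

    printf("raw sensor value: 0x%02x%02x\n", raw[0], raw[1]);
    close(fd);
    return 0;
}
```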

Data produced by the sensors can be processed by either the processor or the programmable logic. Using Xilinx’s system-level, Eclipse-based SDx environment SDSoC™, the design engineer can seamlessly partition and quickly optimize the design, moving functions executing on the processor into the programmable logic, where they are offloaded and accelerated as co-processing engines. SDSoC provides a rapid and seamless development environment, combining high-level synthesis with optimized data-movement engines and a connectivity framework, substantially boosting system performance.
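The sketch below shows the sort of function that makes a good candidate for such offload: a FIR kernel written in a hardware-friendly style. The pragma follows the Vivado HLS convention, but the exact directives, and the mechanism for marking a function for acceleration, depend on the SDSoC tool version in use, so treat this as a sketch rather than a recipe:

```c
/* FIR kernel structured for offload to the programmable logic.
 * The tap count and block size are placeholder values. */
#define TAPS    64
#define SAMPLES 1024

void fir_accel(const int coeff[TAPS], const int sample[SAMPLES], int result[SAMPLES])
{
    static int delay[TAPS];   /* delay line kept between invocations */

    for (int n = 0; n < SAMPLES; n++) {
#pragma HLS PIPELINE II=1
        /* Shift the delay line and insert the newest sample. */
        for (int k = TAPS - 1; k > 0; k--)
            delay[k] = delay[k - 1];
        delay[0] = sample[n];

        int acc = 0;
        for (int k = 0; k < TAPS; k++)
            acc += coeff[k] * delay[k];

        result[n] = acc;
    }
}
```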

For example, two commonly used algorithms within IIoT systems are FIR filters, to reduce noise on sensor readings as previously noted, and AES encryption, to secure communication channels. Both of these algorithms can be executed within the processor system; however, performance can be increased significantly by moving these functions into the programmable logic. Monitoring the number of clock cycles taken to execute the FIR filter example, first on the processor running bare metal and then in the programmable logic, yields 537,946 clock cycles versus 54,696 clock cycles respectively, a reduction in execution time of around 90 percent, achieved without the involvement of an HDL specialist. Designers can provide similar acceleration for signals the IIoT system needs to generate, for example when performing motor control.
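The cycle counts quoted above come from measuring the function on the target itself. As a rough first pass on a Linux-class system, a designer might simply wrap the candidate function with a timer, as in the hedged sketch below; the FIR here is only a stand-in for whatever function is being profiled:

```c
#include <stdio.h>
#include <time.h>

#define TAPS    64
#define SAMPLES 1024

/* Software baseline of the function under consideration for offload. */
static void fir_sw(const int *coeff, const int *x, int *y)
{
    for (int n = 0; n < SAMPLES; n++) {
        int acc = 0;
        for (int k = 0; k < TAPS && k <= n; k++)
            acc += coeff[k] * x[n - k];
        y[n] = acc;
    }
}

int main(void)
{
    static int coeff[TAPS], x[SAMPLES], y[SAMPLES];
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    fir_sw(coeff, x, y);                 /* function under test */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double us = (t1.tv_sec - t0.tv_sec) * 1e6 +
                (t1.tv_nsec - t0.tv_nsec) / 1e3;
    printf("software FIR took %.1f us\n", us);
    return 0;
}
```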

Furthermore, machine learning and neural networks can be implemented using the combined processing capabilities of the Zynq Programmable SoC family. Examples include inference engines for image classification, which is important for machine vision in process control and for autonomous operation in robotics and surveillance systems, as well as probabilistic machine-learning engines for predictive and prescriptive maintenance. The Zynq Programmable SoC family and its development tool environments provide a platform for rapid generation of these engines using popular machine learning frameworks.

Of course, not all IIoT systems run bare metal. Many require a real-time OS like FreeRTOS, or a more fully featured OS like Linux. The choice of operating system will have an impact on the performance of the accelerated function. Table 1 shows the performance of the AES256 algorithm when running in the processor system and when accelerated in the programmable logic under several different operating systems.

Table 1: AES Acceleration by OS.

When it comes to being network enabled and communicating to and from the system, both the Zynq-7000 and Zynq UltraScale+ MPSoC provide Gigabit Ethernet capability within the processor. If the system requires a wireless connection, users can leverage the any-to-any configurable interface capability to connect an external WiFi module; these modules often provide Bluetooth and BLE capability as well. If the system is designed to operate in a remote, isolated, or mobile application, a 4G interface may be provided to ensure continuity of connection back to the cloud.

Security is designed into the very core of both the Zynq-7000 and Zynq UltraScale+ MPSoC families, enabling secure boot facilities. Within both the processor and the programmable logic of Zynq-7000 devices there is a three-stage process system engineers can use to ensure system partitions are secure, comprising a Hashed Message Authentication Code (HMAC), Advanced Encryption Standard (AES) decryption, and RSA authentication (Figure 2). Both the AES and HMAC use 256-bit private keys, while the RSA uses a 2048-bit key. The security architecture of Zynq also allows JTAG access to be enabled or disabled, preventing unauthorized system access.

Figure 2: All Programmable Zynq-7000 SoC Secure Boot and TrustZone Implementation

These security features are enabled as users generate the boot file and the configuration partitions for their non-volatile boot media. It is also possible to define a fall-back partition such that, should the initial first-stage boot loader fail to load its application, it will fall back to another copy of the application stored at a different memory location, offering a degree of reliability. And if required, users can implement functional safety (based on IEC 61508) within their IIoT design using techniques such as Isolation Flow (Figure 3). A reference design is available showing how the Zynq-7000 achieves SIL 3 with HFT=1.

Figure 3. Isolation Flow Example

Having completed the secure boot, and with the device executing its application, system designers can use the ARM TrustZone architecture to implement orthogonal worlds, which limits access to hardware functions within both the processor and programmable logic peripherals. Integrated A/D converters for voltage and temperature monitoring can assess SoC and overall system health, and can also be used to provide an anti-tamper capability against unauthorized access to the system. The Zynq UltraScale+ MPSoC enhances the Zynq-7000 security, adding functionality such as Differential Power Analysis (DPA) countermeasures, an integrated Physically Unclonable Function (PUF), and other security features.
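On a Linux-based design, the on-chip voltage and temperature readings are typically exposed through the kernel's IIO framework as sysfs attributes. The sketch below reads one such raw value; the device index and attribute name are assumptions here, since the exact paths depend on the kernel version, driver, and device tree:

```c
#include <stdio.h>

/* Read a raw on-chip temperature value via the Linux IIO sysfs interface.
 * The path below is an example; the actual iio:deviceN index and attribute
 * names vary with the kernel, driver, and device-tree configuration. */
int main(void)
{
    const char *path = "/sys/bus/iio/devices/iio:device0/in_temp0_raw";
    FILE *f = fopen(path, "r");
    long raw;

    if (!f) {
        perror("open iio attribute");
        return 1;
    }
    if (fscanf(f, "%ld", &raw) == 1)
        printf("raw on-chip temperature reading: %ld\n", raw);
    fclose(f);
    return 0;
}
```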

Conclusion
IIoT designers face several challenges that can be addressed using the Xilinx Zynq All Programmable SoC and MPSoC families. These devices combine software intelligence and hardware optimization in a single Zynq SoC, providing real-time processing and response, support for a breadth of communication standards and protocols, multilayer security and functional safety, and any-to-any connectivity, together with the ability to develop rapidly at the system level using SDSoC to ensure optimal system partitioning and performance. This makes Zynq Programmable SoCs an ideal platform for IIoT systems.


Adam Taylor is a world-recognised expert in the design and development of embedded systems and FPGAs for a range of end applications. Throughout his career Adam has used FPGAs to implement a wide variety of solutions, from RADAR to safety-critical control systems, with interesting stops in image processing and cryptography along the way. Most recently he was the Chief Engineer of a space imaging company, responsible for several game-changing projects. Adam is the author of numerous articles on electronic design and FPGA design, including over 175 blogs on how to use the Zynq. He is a Chartered Engineer and Fellow of the Institution of Engineering and Technology, and the owner of the engineering and consultancy company Adiuvo Engineering and Training, http://www.adiuvoengineering.com

Challenges to Implementing the Internet of Things for Industrial Applications

Tuesday, May 10th, 2016

Building industrial IoT systems is getting easier as both hardware and software building blocks come together to provide robust performance and security.

The Internet of Things (IoT) takes on many forms, ranging from a handful of smart home devices linked together and connected through the Internet via a gateway, to networks of hundreds or thousands of sensors and other connected devices in a smart office building, a factory floor, the power grid, or jet engines on a plane. The industrial use of IoT solutions takes on its own identity as the industrial IoT (IIoT), and the IIoT has more stringent performance requirements than consumer solutions. Advanced security, quantified real-time performance, the ability to connect legacy equipment to the network, and the ability to handle huge amounts of data collected from thousands of endpoints are key characteristics that differentiate the IIoT from consumer IoT solutions in smart homes.

As explained in a white paper written by several product managers at Moxa, collecting data from field devices will become more important than ever in industrial automation applications. Temperature, motor speed, start/stop status, or video footage can be used to gain new insights to increase competitiveness. For example, you can determine how to optimize your energy usage and production line performance, and even when to do preventive maintenance to reduce downtime. However, these devices often speak different languages: some use proprietary protocols, whereas others use open standard protocols. Whatever the case may be, you will need to find an efficient way to convert back and forth between one or more protocols.
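As a concrete illustration of that protocol-conversion problem, the sketch below sends a minimal Modbus TCP "Read Holding Registers" request to a gateway over a plain TCP socket and prints the first register returned. The gateway address, unit ID, and register range are placeholders; in practice a protocol library or the gateway's own tools would handle this, but the hand-built frame shows how protocol-specific each device's "language" really is:

```c
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in srv = { .sin_family = AF_INET, .sin_port = htons(502) };
    inet_pton(AF_INET, "192.168.1.100", &srv.sin_addr);   /* gateway address (example) */

    if (connect(sock, (struct sockaddr *)&srv, sizeof srv) < 0) {
        perror("connect");
        return 1;
    }

    /* MBAP header + PDU: read 2 holding registers starting at address 0 on unit 1 */
    uint8_t req[12] = {
        0x00, 0x01,  /* transaction id                   */
        0x00, 0x00,  /* protocol id (Modbus)             */
        0x00, 0x06,  /* remaining byte count             */
        0x01,        /* unit id                          */
        0x03,        /* function: read holding registers */
        0x00, 0x00,  /* starting address                 */
        0x00, 0x02   /* register count                   */
    };
    uint8_t resp[256];

    write(sock, req, sizeof req);
    ssize_t n = read(sock, resp, sizeof resp);
    if (n >= 11)                       /* MBAP(7) + func(1) + count(1) + data */
        printf("register 0 = %u\n", (resp[9] << 8) | resp[10]);

    close(sock);
    return 0;
}
```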

The IIoT thus presents many challenges to designers and system implementers. The first critical issue is to get all the “things” connected so that data and commands can seamlessly flow across the network. Next, all the data being collected must be turned into intelligence (little data from many sensors or endpoints turns into big data, and that data must be analyzed to extract useful intelligence). Often, groups of sensors are collected into a bank and that bank feeds into a gateway that will preprocess the data, reducing the amount of data sent back to the host system. That, in turn, will reduce the network bandwidth requirements, lowering the cost to collect the data since companies could use less-expensive lower-bandwidth interfaces.

Factory-floor applications and many other industrial applications bring with them legacy equipment issues, where equipment that might be 10, 20 or more years old would have to be adapted to communicate over the IIoT. But such equipment might employ many different communication protocols and interfaces, thus making the equipment difficult to configure and integrate. Thus, designers will have to deal with interoperability and scalability challenges to craft large, scalable heterogeneous networks that can operate in harsh environments—whether on the factory floor or dispersed across the country on the power grid.

Once properly set up, the IIoT can improve productivity and make users’ lives easier. However, an unreliable network would be the bane of any system developer, since users would see longer system downtimes, risks from breaches by hackers, malware, and organized cyber security attacks, and unstable operations, as the diagram from Moxa illustrates (Figure 1). This is especially prevalent when wireless connections link portions of the system together.

To deal with all the security and interface issues, Moxa has developed Fieldbus-to-Ethernet gateways with smart functionality to deal with breaches and system performance issues. The company’s MGate gateways not only connect serial devices to Ethernet systems, but they also allow multiple connections and make it easy to employ various Ethernet protocol formats, such as Modbus TCP and Ethernet/IP.

Figure 1: In a typical factory automation system, there are many concerns that designers must address to create a reliable IIoT solution. Image courtesy of Moxa.

One Starting Point

In addition to all the hardware issues, there are many software aspects that designers must deal with, from the operating system software to application programs that run on the endpoints such as sensor nodes, machines on the factory floor, equipment in the field, or jet engines on a plane. One starting point from Wind River, its free scalable real-time operating system (RTOS) dubbed Rocket, is targeted for 32-bit microcontrollers and is a good fit for sensors, wearable products, industrial actuators, wireless gateways and other resource-constrained devices.

The Rocket RTOS lets designers develop, debug and deploy applications for small, intelligent devices from any browser. Included in Wind River’s Helix App Cloud, a cloud-based software development environment, the Rocket software allows designers to start developing IoT applications in minutes. To get started, designers just have to create an App Cloud account, connect their prototype board or use Wind River’s simulator, and then start writing the application code. A browser interface allows designers to work from anywhere to code and debug the application software, and prototypes can be developed without requiring any hardware purchases.

The Rocket kernel has a small footprint and is designed for use on resource-constrained systems, from simple embedded environmental sensors and LED wearables to sophisticated smart watches and IoT wireless gateways. The software is tuned for memory- and power-constrained devices and can run in as little as 4 Kbytes of memory.

Souped-up Security

The proliferation of IoT/IIoT devices brings with it significant security challenges. Hackers as well as cybercriminals can often find a way to enter the networks and wreak havoc by compromising system functions, holding system data for ransom, or siphoning off valuable system data for sale to other criminals. Both hardware and software measures to protect the networks are needed to prevent or minimize network intrusions and alert companies when an intrusion is taking place. To that end, microcontroller (MCU) vendors now include random-number generators as well as full encryption/decryption blocks on their chips to provide real-time encryption/decryption. Previous generation MCUs typically used software encryption/decryption and the software overhead slowed system throughput.

In addition to the RTOS software challenges, the ability to handle hundreds to thousands of sensor or endpoint inputs often requires the use of gateways or edge devices that can aggregate the data coming from the sensors or endpoints, potentially preprocessing the data and then forwarding the data to the host system. These Internet gateways will often handle multiple communication protocols such as WiFi, Bluetooth, ZigBee, LoRa (long-range wireless), and still others. The gateway will translate all the inputs into a common communication protocol, typically WiFi or wired Ethernet.

Situated between the sensors and the gateways, sensor hubs usually perform some degree of data reduction, extracting the key information from the sensor data streams to reduce the amount of data sent back to the host system (Figure 2). Many vendors offer gateways and reference designs for IoT applications. Intel’s solutions, for example, span the range from simple gateways based on its low-end Quark processor, the X1000 system-on-a-chip, to its higher-performance Atom processors such as the E3826. The gateway products provide connectivity from the sensors all the way up to the cloud and enterprise systems. The more intelligent gateways can preprocess and filter data to deliver selective results to the host. Local decision-making makes it easier for gateways to connect with legacy systems, and a hardware root-of-trust along with hardware-supported data encryption can provide end-to-end security.
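A minimal sketch of that kind of hub-side data reduction is shown below: a block of raw samples is collapsed to a few summary statistics before being forwarded upstream. The statistics chosen are arbitrary; a real hub would extract whichever features its analytics actually need:

```c
#include <stdio.h>
#include <float.h>

/* Summarize a block of raw samples instead of shipping every reading upstream. */
struct summary { float min, max, mean; };

struct summary reduce(const float *samples, int count)
{
    struct summary s = { FLT_MAX, -FLT_MAX, 0.0f };

    for (int i = 0; i < count; i++) {
        if (samples[i] < s.min) s.min = samples[i];
        if (samples[i] > s.max) s.max = samples[i];
        s.mean += samples[i];
    }
    s.mean /= count;
    return s;
}

int main(void)
{
    float vib[5] = { 0.12f, 0.15f, 0.11f, 0.90f, 0.14f };  /* example readings */
    struct summary s = reduce(vib, 5);

    printf("min=%.2f max=%.2f mean=%.2f\n", s.min, s.max, s.mean);
    return 0;
}
```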

Figure 2: In a large IIoT system, gateways provide a means to translate multiple protocols into a common protocol used to communicate with the host system somewhere in the cloud, while sensor hubs aggregate some of the sensor data to extract key information and reduce the amount of data transferred to the host. (Image courtesy of Intel.)

Thus designers can scale processor performance based on the processing requirements in the gateway. Additionally, the processors support multiple operating systems, including those from Wind River, Microsoft, Ubuntu, and others. Robust security is provided by McAfee (now part of Intel) embedded control security technologies, which tightly integrate with on-chip, hardware-based security features to deliver a seamless, secure data flow from the edge devices to the cloud, protecting the data both in flight and at rest. Additionally, pre-integrated manageability options provide developers with an out-of-the-box offering they can build upon and customize.

The sensors that collect all the data have come down in price, from tens of dollars to just a dollar or two today, thanks to the large volumes consumed by mobile communications and compute products such as smartphones and tablets. They are also becoming more highly integrated, offering anywhere from a single-axis accelerometer to a nine-axis multi-sensor (three-axis accelerometer, three-axis gyroscope, and three-axis compass) solution in a single surface-mount package, such as those offered by InvenSense. Some vendors also include data processing functions on the sensor chip to preprocess the raw data stream, further reducing the amount of data sent to the gateway. The low cost of today’s multi-axis sensors allows designers to use them throughout the system to monitor many more aspects of system performance than in previous system generations.

The software and hardware building blocks to build a robust IIoT system are now readily available from multiple suppliers. Development tools now let designers prototype systems in minutes to a few hours. But there are still many system aspects that designers must address—deciding which communications protocol standard (as mentioned earlier) will be key to system scalability and performance. Additionally, for many systems the choice of either an RTOS or other operating system can be critical, but there are no standards that designers must follow. The key issue will be to determine if the system will be “closed” and only allow hardware from one vendor to do all the work, or be “open” and allow products from different suppliers to interoperate.

For smart homes, the Thread Group has developed an approach to connect and control products in the home. Built on open standards and using the IPv6 and 6LoWPAN protocols, Thread provides a secure and reliable mesh network with no single point of failure as well as simple connectivity and low power consumption. On the industrial side, the Industrial Internet Consortium is trying to define standards to encourage interoperability. The IIC is a worldwide not-for-profit, open membership organization that was formed to accelerate the development, adoption, and widespread use of interconnected machines and devices, intelligent analytics, and people at work. It helps achieve this by identifying the system and subsystem requirements for open interoperability standards and defining common architectures to connect smart devices, machines, people, and processes that will help to accelerate more reliable access to big data and unlock business value.

Increased Automation Challenges Embedded Safety Functionality

Tuesday, December 1st, 2015

An increase in automation, in factories, in vehicles and in the IoT is driving the need for functional safety to be embedded in the digital portion of designs for industrial and automotive products.

Increasing levels of automation in factories means that safety and security are more critical than ever. More equipment and systems are communicating and operating over a network to monitor and analyze processes in the manufacture of end products.

Embedded systems used in the automation process need to foresee problems. And devices used must comply with safety standards. In self-driving cars and the Internet of Things (IoT), devices are also operating without operator assistance. In all three instances, there has to be a high level of system awareness. Reliable devices must be certified to meet a level of risk analysis and an acceptable probability of failure.

Functional Safety Standards

Functional safety can be designed into embedded systems, whereby the system meets certification requirements and detects potentially dangerous conditions. Importantly, the definition of functional safety includes “the activation of a protective or corrective device or mechanism to prevent hazardous events arising or providing mitigation to reduce the consequence of the hazardous event.” (IEC-Functional Safety Explained)

“Embedded systems used in the automation process need to foresee problems and devices used must comply with safety standards.”

The initial parts of IEC 61508 were introduced in 1998, and sector-specific versions followed, for example for oil production and non-nuclear power plants in 2001, but it was not until 2010 that the standard addressed industrial communication networks. Roger May, system architect and functional safety lead at Altera, believes that the 2010 European machinery directive has changed the way designers integrate safety into products. “[It] required all machines to be safe,” he says. “This has led to customers looking to add safety into the core of the product instead of as an add-on. As these safety designs are becoming more prevalent, then there is a knock-on impact across other markets/geographies which see the need to improve the safety of their products.”

An extension of IEC 61508 is ISO 26262, which defines Automotive Safety Integrity Levels (ASILs) for risk classification. Both of these standards assess hardware safety integrity and systematic safety integrity.

Industrial and Automotive Design

At this year’s SPS IPC Drives exhibition in Nuremberg, Germany, many companies highlighted the need for device-level and system-level measures to work in harmony to reduce risk and time to market for industrial and automotive systems.

Infineon was one. It announced that its XMC4000 32-bit microcontrollers will be available with a Safety Package to help designers develop TUV-certified automation tests that conform to Safety Integrity Level (SIL) 2 and SIL 3. These SILs, as defined in IEC 61508, correspond to a probability of dangerous failure per hour of 10⁻⁶ to 10⁻⁷ and 10⁻⁷ to 10⁻⁸ respectively for continuous operation. TUV Rheinland is an international, independent body for certification of the safety and quality of products, services, and management systems.

The XMC4000 family is based on an ARM Cortex-M4 processor and was designed for the industrial market, with integrated EtherCAT for real-time Ethernet communication and ambient temperature operation up to 125°C.

Figure 1: The XMC4800 32-bit microcontroller from Infineon has integrated EtherCAT for real-time communication.

The Safety Package includes XMC4000 microcontroller hardware, documentation, the TUV-certified Fault Robust Software Test Library (fRSTL) jointly developed with YOGITECH, and consultancy and implementation support from the embedded engineering tool company Hitex. Documentation includes a failure mode report, a failure mode effects and diagnostic analysis based on failure-in-time rates for the microcontrollers, and the Safety Application Note for developing SIL 2 and SIL 3 systems.

The non-TUV-certified library (fRSTL) is available now, and the TUV-certified version will be available, under license from YOGITECH or Hitex, from Q1 2016. The documentation (failure mode report, failure mode effects and diagnostic analysis) will be available in January 2016.

“There have been changes in the safety functionality that customers are embedding in their products,” observes May, requiring changes to the embedded system. “We are seeing a greater demand to add more complex drive safety features, such as SS1 (Safe Stop 1) and SLS (Safely-Limited Speed)—these require safety to be more deeply embedded in the digital system,” he adds.

FPGA Combines With IP

In 2010, Altera was the first FPGA company to deliver a certified toolflow and IP, says May. It has added toolflows such as Safety Design Partitioning and works with partners, such as YOGITECH, the independent IC design services provider. The Nios II embedded processor is now available with the Functional Safety Lockstep. Targeting industrial and automotive applications, the two companies have built the Lockstep solution using Altera FPGAs, SoCs and certified toolflows, with YOGITECH IP. Customers can use Lockstep to implement SIL3 safety designs in Altera FPGAs using fRSmartComp technology for diagnostic coverage, self-checking and safety-related diagnostics of ICs, in compliance with IEC 61508 and ISO 26262.

Figure 2: Altera launched the Nios II Lockstep processor solution with partner, YOGITECH, at this year’s SPS IPC Drives show in Nuremberg, Germany.

In a safety system, the more complete the tests for system faults, the better. May explains that a measure of this is the diagnostic coverage. When implementing safety on a processor, the processor itself must be tested. This can be done using Software Test Libraries (STLs) to test software functions that run on the processor. “A disadvantage of these STLs,” says May, “is that they require significant processor performance (approximately >50%), leaving less performance for the safety functionality. They are also only able to provide moderate diagnostic coverage (approximately 60% at best). This moderate-to-low diagnostic coverage limits the system designer when trying to implement the high safety levels that require diagnostics of at least 90%,” he points out. “Using Nios II Lockstep has benefits in that the diagnostic coverage is >99% and it is achieved without impacting the performance of the processor,” he says.

Figure 3: The YOGITECH safety methodology flow.

Nios II joins the ARM Cortex-R5 processor in fRSmartComp, YOGITECH’s IP that implements lockstep operation by interfacing a master and a slave microprocessor with all the logic and mechanisms required. It embeds the standard cycle-by-cycle comparator of dual-core lockstep architectures, and when a discrepancy is detected at one of the interfaces it determines which of the two cores may have failed. A fail-operational architecture allows the faulty core to be swapped for a good one. A fail-safe architecture has two channels: one can serve as a redundant channel in case the other fails, or both channels can perform the same operation, with a single channel continuing the task in the event of a failure.
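The comparison itself happens in hardware inside the fRSmartComp IP, cycle by cycle, but the underlying idea is easy to show in software form. The sketch below is purely conceptual and is not how the IP is implemented: two redundant executions of the same step are compared, and any mismatch drives the system toward a safe state:

```c
#include <stdio.h>
#include <stdint.h>

/* Conceptual lockstep illustration: a master and a checker run the same
 * step and their results are compared before the output is used. */
typedef uint32_t (*step_fn)(uint32_t input);

static uint32_t control_step(uint32_t input)
{
    return input * 3u + 7u;            /* stand-in for one control-loop step */
}

static int lockstep_execute(step_fn master, step_fn checker,
                            uint32_t input, uint32_t *output)
{
    uint32_t a = master(input);
    uint32_t b = checker(input);

    if (a != b)                        /* discrepancy detected: refuse output */
        return -1;

    *output = a;
    return 0;
}

int main(void)
{
    uint32_t out;

    if (lockstep_execute(control_step, control_step, 42u, &out) == 0)
        printf("outputs agree: %u\n", out);
    else
        printf("lockstep mismatch: forcing safe stop\n");
    return 0;
}
```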

SMARC is The Low Power SFF

Thursday, September 25th, 2014

Since stripping and reusing the guts from a smartphone is impractical, the tiny SMARC form factor is the next best thing.

Modern smartphones and tablets are more powerful by far than the PC or Mac that was on your desk a couple of years ago. They run Microsoft Office and more apps than you ever had on your desktop or laptop—I guarantee it. They do video image processing, contain software-defined audio CODECs, and do RF and DSP crunching that used to require TI’s best purpose-built ICs. And most smartphones run ARM-based SoCs (32-bit, and starting with the iPhone 5s, 64-bit ISAs), although Intel’s x86 Atom is creeping into many overseas handset designs.

Most impressively, all this compact mobile horsepower, I/O and user application goodness is incredibly low power. Contrast these achievements with common open-standard embedded form factors such as COM Express, 3U VPX, or one of the PC/104 Express variants. See the difference? They’re all “huge” compared to a smartphone’s motherboard and draw 10W or more.

The Smart Mobility Architecture (SMARC) was created to bring smartphone-like performance, size and power to the open-standard embedded market. SMARC was conceived by Kontron and ADLINK working collaboratively, and ADLINK’s SMARC product line is called Low Energy Computer-on-Module (LEC). Let’s examine SMARC in some detail.

A “short” sized SMARC module by ADLINK, based upon the industrial-strength, ARM-based TI Sitara AM3517 Cortex-A8. “Short” SMARC modules measure just 82 mm x 50 mm.

SGeT Going

The initial collaboration between Kontron and ADLINK started in 2010 and was the result of Intel disclosing its then-Atom smartphone strategy to the company’s key embedded partners. Two trend lines were at work: one, Intel’s Atom variants weren’t ideal for traditional embedded form factor boards due to pinouts, feature sets, and power; and two, standard ARM SoCs—although growing in performance—were becoming increasingly difficult to design with due to complexity and serial interface signal integrity.

On the one hand, the x86 embedded market was strong but Intel’s lowest power variants would be a challenge for open-standard small form factor (SFF) designers. On the other hand, the hugely popular and ultra low power ARM SoCs were out of sync with many SFF designers’ capabilities due to high complexity silicon and long design times. An additional concern was that ARM-based SoCs evolved in sync with the smartphone market which was too fast for “regular” SFF embedded designers and their customers. Embedded designers don’t spin boards annually.

Kontron’s then-CTO Dirk Finstel (now European General Manager at ADLINK) turned to ADLINK and others to form a “rapid response” collaboration effort that would spawn an ultra low power (ULP) computer-on-module (COM) standard that would:

  • Be geared towards low power ARM SoCs with longer lifecycles and common I/O. The target is three to five I/O generations at the board interface
  • Be nearly as small as a smartphone’s motherboard
  • Abstract the complicated processor design onto a SFF COM board for use with a base carrier board for I/O breakout.

SMARC was announced in 2012, to be administered by the new Standardization Group for Embedded Technologies (SGeT) committee with the goal of quickly establishing SMARC and other embedded SFFs. While the SGeT website lists nearly 50 company names, the key players today, as identified by ADLINK, are: ADLINK, Advantech, b-Plus, Fortec, Greenbase, Kontron, and TQ Systems.

Get SMARC

The SMARC specification has several key attributes that make it a very desirable COM for embedded, but power and size are the primary ones. The power design goal is 6 W maximum, with a mere 2 W as the typical draw. This is achieved today primarily by choosing Freescale- and TI-based ARM SoCs (although any ARM-based SoC such as a Qualcomm Snapdragon 800 would work as well). Compare this to the typical Qseven board (12 W) or COM Express board (up to 50 W) that both use x86 Atom or Core CPUs.

Size-wise, SMARC comes in short size (82 x 50 mm) and full size (82 x 80 mm) with real estate of 4100 mm2 and 6560 mm2, respectively (Figure 1). Compared to Qseven, SMARC is actually a bit bigger. Table 1 compares SMARC, Qseven and COM Express. Note that since SMARC is based on the ARM architecture, the desired operating systems—Android, Linux and Windows Embedded Compact 7—are primarily mobile OSes. This exemplifies SMARC’s low power roots.

Figure 1: There are two SMARC sizes. The edge connector mates to an MXM3 connector on a base carrier board; note the holes for standoffs. (Courtesy: SGeT.org.)
Table 1: Small form factor (SFF) comparison between SMARC, Qseven and COM Express. SMARC’s small size is complemented by extremely low power consumption. (Courtesy: ADLINK.)

High Density; High Society

SMARC intends to bring smartphone-like features, size and power to deeply embedded systems such as Internet of Things (IoT) intelligent nodes and smart gateways. Typical functional blocks found on a SMARC module are a function of the 314 pin MXM 3.0 connector. Figure 2 shows the impressive list of fancy smartphone-like interfaces and signals found on a SMARC COM board.

It’s clear they’re a combination of smartphone multimedia (video, camera, solid state storage), audio, and power conservation, coupled with the kinds of I/O seen in industrial and IoT embedded systems. 24-bit LVDS, Gigabit Ethernet, GPIO, CAN and UART are common embedded SFF I/O.

Figure 2: The types of I/O found on the 314-pin SMARC connector. Incidentally, there are 12 pins reserved for future power management. (Courtesy: ADLINK.)

Of note are the “modern interfaces” and Power Management pins shown in Figure 2. Dirk Finstel of ADLINK—a creator of SMARC—told me that additional memory, different kinds of future memory, and the addition of high-power/high-source I/O were behind the 12 reserved Power Management pins. The “Modern Interfaces” shown stand in contrast to typical embedded COM Express I/O such as SPI and Field Bus. SGeT’s thinking for including these on SMARC is exemplified by the MIPI Alliance list of display interface specifications shown here.

Homes for SMARC

The tiny SMARC COM boards, in either Short or Full size (Figure 1) are geared towards low- or battery-powered, long-life systems soon to be found in IoT architectures. But they’re also robust enough for use in client-server architectures where a combination of Linux and Windows applies. This makes SMARC boards candidates for IoT smart gateways where new and legacy sensor data is aggregated and analyzed before being sent onward to the cloud. Their small size means multiple SMARC boards can be ganged inside a small “shoebox” to form a multiprocessing server—including a media processing server because of SMARC’s modern interfaces.

An example of a TI-based SMARC board from ADLINK is the LEC-3517 as shown in Figure 3a and 3b. What’s absolutely astounding is that this much processing and I/O can be found on such a small COM board. Even though the BASE carrier is larger than the SMARC board, the combination has a very small footprint for an open standard embedded SFF.

A system-level example using an Atom-based SMARC is the MXE-200i fanless IoT gateway from ADLINK, shown in Figure 4. Although SMARC was created for ARM’s low-power cores in SoCs from Freescale, TI and others, provisions were made for non-ARM processors. ADLINK, for example, recently introduced an Intel Atom E3800-based Full-sized SMARC COM called the LEC-BT.

Figure 3: Carrier board (left) and SMARC COM board (right). The BASE is larger than the SMARC COM, but still small despite its I/O density.
Figure 4: A notional IoT gateway system based upon SMARC. The OD is a mere 120 x 60 x 100 mm (WxHxD)—not much bigger than the dual Ethernet ports.

This article was sponsored by ADLINK.

