Testing IoT at Scale Using Realistic Data, Part I

Thursday, April 12th, 2018

IoT can involve vast numbers of sensors, devices, and gateways that connect to a cloud. How do you comprehensively test your whole IoT system and not just the sum of its parts? KnowThings.io introduces a novel solution to perform quality testing on a massive scale using machine learning and dramatically reduces the engineering workload…

Editor’s Note: Embedded Systems Engineering’s Editor-in-Chief, Lynnette Reese, sat down with KnowThings.io, an aspiring, smart startup accelerator project within CA Technologies that has created a tool letting IoT developers realistically test IoT applications, from small to very large scale, by leveraging machine learning. They currently call it the Self-learning IoT Virtualizer. KnowThings.io’s CEO, Anand Kameswaran, talked with Reese about its mission to make realistic IoT simulation and testing effective and easy, including cloud interaction, internet foibles, massive numbers of IoT sensors and connections, and IoT chaos in general.

Lynnette Reese (LR) Embedded Systems Engineering: You’re the CEO of a very busy start-up; I am glad you made the time for this interview.  

Anand Kameswaran, KnowThings

Anand Kameswaran, (AK) KnowThings: You’re welcome. 

LR: I understand that KnowThings.io has a tool that makes testing the whole integrated IoT/cloud/network arrangement much easier for developers. You call the KnowThings tool an IoT Virtualizer. Can you briefly explain how the tool works?

AK:  It’s an IoT simulator that creates a virtual device model, so we like to call it an IoT Virtualizer. In short, it can mimic real device system interactions accurately within minutes. It’s a specific type of simulation that interacts at the network layer, using patented self-learning or machine learning algorithms to simulate up to hundreds of thousands of individual data sensors, device inputs, and their interaction with the cloud. Think of it as a “no surprises” test harness for an entire IoT network, even if that IoT system is very, very large.

It allows you to test sections as you develop, try out new tweaks to see how they affect everything else, and test your final revisions so the system is as good as it can be before you send it out into the real world.

 

Figure 1: KnowThings has a three-step workflow that a) captures device interactions via packet capture (PCAP), b) models the captured traffic, and c) plays back the adaptive virtual device (AVD). (Source: KnowThings.io)
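To make the capture/model/replay flow in Figure 1 concrete, here is a toy sketch in Python. It is not KnowThings’ patented algorithm: it assumes the scapy package and a “device.pcap” capture file, and it uses a deliberately naive traffic model just to show the shape of the workflow.

```python
# Toy illustration of the capture -> model -> replay workflow in Figure 1.
# Not the KnowThings algorithm; a deliberately naive stand-in for its model.
import random
import time

from scapy.all import rdpcap  # assumes: pip install scapy

def build_model(pcap_path):
    """Learn a trivial traffic model: payload sizes and inter-arrival gaps."""
    packets = rdpcap(pcap_path)
    times = [float(p.time) for p in packets]
    gaps = [b - a for a, b in zip(times, times[1:])] or [1.0]
    return {"gaps": gaps, "sizes": [len(p) for p in packets]}

def replay(model, device_id):
    """One virtual device emitting traffic with the learned statistics."""
    while True:
        time.sleep(random.choice(model["gaps"]))
        size = random.choice(model["sizes"])
        payload = bytes(random.getrandbits(8) for _ in range(size))
        # A real virtualizer would send this to the system under test:
        print(f"virtual device {device_id}: {len(payload)} bytes")

model = build_model("device.pcap")  # steps a+b: capture file in, model out
replay(model, device_id=1)          # step c: run thousands of these concurrently
```

Scaling to thousands of virtual devices is then a matter of running many such replay loops concurrently.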

 LR: So, it allows you to experience the larger picture of thousands of IoT devices on the internet, interacting with each other and the cloud? But without having to have the actual IoT devices online yet?

AK:  Yes. There are individual bugs that occur at the device level or in the application that runs analytics, processing, etc., but there are also issues that can develop simply based on the number of devices you’re dealing with. When you have thousands of devices collecting data that coalesce into valuable trends and knowledge crucial to good decision-making, there’s a level of collaboration that has to be worked out, as the sum of thousands of IoT parts creates a whole different beast.

In one case, we were working with a company doing shipping container monitoring, and their use case is to have 50,000 containers on a ship, with all containers communicating with a single, central node. When we were working with the prototype virtualizer, we were already engaged with customers who were trying to accomplish goals at that kind of scale. We are still working with customers today that are testing at large scale across several industries, whether it’s supply chain and transportation, smart agriculture monitoring fairly large fields, and so on. That’s one of the reasons why we are interested in those who want to get into our early adopter program, our beta test program, so we can expose the IoT Virtualizer to as many unique industry problems as we can.

LR: Can you tell us more about the prototype tool and how it’s coming along?

AK:  IoT is a really big space, covering many industries. We have a new community edition preview, and it’s free to download today from our site, knowthings.io.

LR: Why are you giving away this incredible tool for free?

AK: We value getting direct feedback from customers and shaping the direction from that. There is still a lot to be learned. We are bringing together people from the embedded space, cloud networking, and other areas, so that the best practices going forward shape the best development tool we can provide.

LR: Who would the tool benefit most, and why?

AK: Any IoT developer, solution architect, or tester who wants to test solutions at large scale quickly, cheaply, and thoroughly. It can improve time to market, reduce labor costs, and reduce the confusion and frustration that can occur as an engineer straddles both embedded hardware and network cloud integration to implement a viable system in the real world with as few surprises as possible.

LR: The Virtualizer is a new idea, then? No one else offers this?

AK: Though device virtualization exists today through a few vendors, the use of machine learning to generate realistic data scenarios is unique to our solution. The community edition pre-release is the preview of our free edition, which will always remain free for customers to try before they purchase our commercial product. It runs on a desktop or even the Raspberry Pi Zero, but we are planning to release a cloud-based version soon. Those who are interested in working closely with us can apply to participate in our early adopter program.

LR: How did you get the idea for this tool?

AK:  It started a few years ago as an idea to borrow techniques from work CA Technologies was doing on service virtualization based on genomic sequencing. We’ve been working on the underlying machine learning with Swinburne University of Technology in Australia as part of a partnership that predates KnowThings. We found that we could take the algorithms we use for genomic sequencing and apply them to learn computer messages. Sequencing a bunch of genes is actually not a whole lot different from sequencing bytes when working toward understanding what that information is telling us.

There is a type of machine learning associated with genome sequencing, a learning algorithm and data mining method that analyzes recorded samples. After this was successfully applied in a service virtualization solution, we felt a similar approach could be taken in the IoT space for the KnowThings solution. We flipped that machine learning to automatically create a simulation of IoT devices with individual, asynchronous behaviors represented at different nodes, or locations on the network. The work came out of a collaboration with CA’s strategic research group, which continues to advance the underlying algorithm for the IoT Virtualizer. IoT presents a different type of data space than genome sequencing, so we can make assumptions that let us take shortcuts outside the original genome sequencing algorithm and end up with a very efficient algorithm for IoT simulation.

After several years in research, KnowThings is now on the third generation of the technology. Indeed, some of the previous versions ran in service virtualization solutions with real customers on high-performance computers, handling hundreds of thousands of transactions per second. So, we have a history of real-life testing already, and we know that the code and the simulation can successfully accommodate simulation at huge scale.

KnowThings has some close early adopters and is working with customers that need an environment at that sort of scale.

LR: What kind of applications would typically require this sort of scale?

AK: The verticals that KnowThings is currently working with for existing customers include smart agriculture, smart transportation, and facets of retail that include IoT. Smart ag would include hydroponics farms. The Virtualizer for smart transportation would help with the operations and logistics side. And an IoT retail channel might include smartphone tracking via Bluetooth beacons to establish behaviors for very targeted marketing through customer smartphones that act as IoT devices through, say, a coupon app that is also tracking customer behavior to some degree. It’s one thing to test a digital coupon-to-smartphone interaction with one or two participating smartphones, but what happens on Black Friday? KnowThings.io’s product helps developers honestly answer the question, “What could go wrong?”

The KnowThings IoT Virtualizer would work well for any IoT application that needs testing at a scale too large to simulate by oneself. It can save time, for one thing.

LR: How can developers get their hands on this tool?

AK: We are in the early adopter stage right now and offering a role in beta testing. There’s an opportunity for us to partner with those customers that are part of the early adoption program. Not only will partners shape what everything should look like, but they will also help in developing best practices in a very challenging development environment. We want to know about the real challenges IoT is facing and concentrate on solving the problems that IoT developers care about.

Anyone interested in trying it out and contributing suggestions to improving the tool can download the community edition pre-release or sign up for the early adopter program at https://knowthings.io/.  Commercial product launch is in mid-summer.

For more information, go to the KnowThings.io Self-Learning IoT Virtualizer FAQ online.


Lynnette Reese is Editor-in-Chief, Embedded Intel Solutions and Embedded Systems Engineering, and has been working in various roles as an electrical engineer for over two decades.

 

 

Industrial IoT: How Smart Alerts and Sensors Add Value to Electric Motors

Thursday, April 12th, 2018

A look at electric motors in industrial environments and the ways the IoT produces real value, perhaps in ways you hadn’t considered.

I’m often asked about the value of the IoT—sometimes directly, but often indirectly, as in “How can a deluge of data create value?”

Electric Motors: The Workhorses of Industrial Life
Electric motors come in all sizes, from very small to very large. They usually run on mains power, but sometimes on batteries, as in electric cars. We all have many electric motors in our homes—in our vacuum cleaners, fridges, freezers, and garage door openers. And, of course, many toys have miniature electric motors, like the locomotives in model trains.

Figure 1: The author explains why factoring in the IoT can cause condition-based maintenance to be seen as a better option than either preventive or run-to-failure maintenance.

Factories are also equipped with many electric motors used for all kinds of jobs: lifting, pressing, pumping, sucking or drying—basically everything that can be done with motion. Electric motors are the workhorses of industry today. They’re also used in areas that are too dusty, dangerous, or difficult to reach by human effort. In short, modern industrial life doesn’t exist without the electric motor.

Maintenance, Maintenance, Maintenance
Electric motors are mechanical devices, so it’s no surprise that they go down occasionally. Statistics show a failure rate of seven percent per year; on average, an electric motor stops working once every 14 years. Not bad, you might think—but for a factory with a hundred electric motors, that means one motor is down just about every month. And keep in mind that one motor going down sometimes means a whole production line going down, which can become very expensive, very quickly. Now factor in the reality that motor failures can come with incredibly unfortunate timing, like just before that critical order has to be delivered.
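Those numbers are easy to sanity-check; a minimal sketch, using only the seven percent figure from the text:

```python
# Sanity check of the failure statistics quoted above.
annual_failure_rate = 0.07                      # seven percent per year
print(1 / annual_failure_rate)                  # ~14.3 years between failures per motor
failures_per_year = 100 * annual_failure_rate   # fleet of one hundred motors
print(failures_per_year)                        # ~7 motors down per year
print(12 / failures_per_year)                   # i.e., a failure every month or two
```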

To reduce unexpected downtime, factories employ maintenance crews. Maintenance of electric motors is an important part of their efforts, but it’s also expensive. There are several approaches to maintenance:

  • Preventive maintenance. Maintenance schedules are based on an estimate of how long the average electric motor runs. To be on the safe side and avoid complete motor failure, maintenance usually occurs too early (although occasionally too late), and well-functioning parts still in good condition may be replaced. The catch? There’s no guarantee that a new problem won’t occur shortly after maintenance takes place.
  • “Run-to-failure” maintenance. This approach waits to do maintenance until the machine stops working. This typically results in full motor replacement, because repairing a rundown electric motor on the spot usually isn’t simple.
  • Condition-based maintenance. Before electric motors go down, they generally start to show irregularities like noise, imbalance, drag, etc. In a condition-based approach, a maintenance specialist goes to every electric motor and “listens” to it with the appropriate tools, much like a doctor with a stethoscope. Depending on the environment, this may be an easy job or a difficult and even a dangerous one. And, of course, the doctor can’t be everywhere at once.

Despite its drawbacks, preventive maintenance is probably better and more cost-effective than the “run-to-failure” alternative—but condition-based may be a better option … especially when you bring in the IoT.

Condition-based Maintenance: Made Stronger with AI and IoT
With the IoT, every electric motor on a factory floor is equipped with one or multiple sensors that are connected (preferably wirelessly) to a control database that continuously collects data about the motors. The control database can use artificial intelligence (AI) to learn normal behavior for every motor and then, after a typically short period of learning, it can generate immediate alerts when deviations from that normal occur. In other words, the IoT combined with AI not only sees problems coming, it continuously scans for problems.
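A minimal sketch of that learn-then-alert loop, assuming nothing about any particular vendor’s AI: it builds a baseline from a learning window of vibration readings, then flags readings that deviate strongly from it.

```python
# Naive stand-in for "learn normal, then alert on deviations."
# Real systems use far richer models; a z-score shows the principle.
import statistics

LEARNING_SAMPLES = 1000   # the "typically short period of learning"
Z_THRESHOLD = 4.0         # deviations beyond this many sigmas raise an alert

class MotorMonitor:
    def __init__(self):
        self.baseline = []

    def observe(self, reading):
        if len(self.baseline) < LEARNING_SAMPLES:
            self.baseline.append(reading)   # still learning what "normal" is
            return None
        mean = statistics.fmean(self.baseline)
        std = statistics.stdev(self.baseline) or 1e-9
        z = abs(reading - mean) / std
        if z > Z_THRESHOLD:
            return f"ALERT: reading {reading:.2f} is {z:.1f} sigma from normal"
        return None
```

Each motor would get its own monitor instance, fed continuously by its sensors over the factory network.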

Figure 2: The true value of the IoT: machines with connected sensors for maintenance

Keep in mind that this control database doesn’t need to be programmed. It can simply be fed with data and then learns by itself, “automatically,” what is normal and what are exceptions. When an exception (i.e., a problem) occurs, it sends an immediate alert, which in many cases avoids total motor failure and replacement. This kind of smart alert also allows the treatment to match the problem at the moment it starts to manifest, rather than relying on general maintenance that may be too early, too late, or miss the pending failure completely. Depending on the severity of the problem and the alert, the motor’s downtime can even be planned to minimize any disruption to operations.

Finally, this kind of sensor-based data collection is far more precise and thorough than anything humans could achieve. A slow deterioration of the quality of any given electric motor will continue undetected by human eyes and ears until a serious problem develops or failure occurs, but the IoT will notice even the smallest shifts in normal performance over a longer period of time.

The True Value of the IoT: Making Better Decisions Faster
We think we live in a modern world, but we actually waste a lot of resources and money by making the wrong decisions and/or making decisions too slowly. The promise of the IoT is that we can now collect enough data—cost-effective data that already exists, but that we never captured. And we can capture this data continuously, and quite effortlessly, in enormous volumes. With AI, we can learn from it to make better decisions, faster.

Back to the original question, then. What value does the IoT bring? It enables people to make better decisions, faster.


Cees Links was the founder and CEO of GreenPeak Technologies, which is now part of Qorvo. Under his leadership, the first wireless LANs were developed, ultimately becoming household technology integrated into PCs and notebooks. He also pioneered the development of access points, home networking routers, and hotspot base stations. He was involved in the establishment of the IEEE 802.11 standardization committee and the Wi-Fi Alliance, and was instrumental in establishing the IEEE 802.15 standardization committee, which became the basis for ZigBee® sense-and-control networking. Since GreenPeak was acquired by Qorvo, Cees has become the General Manager of the Wireless Connectivity Business Unit at Qorvo. He was recently recognized as a Wi-Fi pioneer with the Golden Mousetrap Lifetime Achievement award. For more information, please visit www.qorvo.com.

 

Embedded World 2018

Thursday, March 8th, 2018

Embedded World broke its own records—again—this year, with over 1,000 exhibitor companies and more than 32,000 international visitors from the embedded community.

Nuremberg, Germany was a winter wonderland for this year’s Embedded World, as freezing temperatures gripped the Nuremberg Messe showground. Inside the six halls of the world’s biggest embedded industry event, however, was a hotbed of IoT, automation, automotive, communications, and networking innovation.

Figure 1: Nuremberg Messe hosted the 16th Embedded World from February 27 to March 1, 2018.

All types and sizes of modules were on display, illustrating the diversity of choice available in the market today. One of the most interesting was Microchip’s System on Module (SoM) for industrial-grade Linux designs. The ATSAMA5D27-SOM1 (Figure 2) is designed to remove the complexity of developing an industrial system based on a microprocessor running Linux® OS. Lucio di Jasio, Business Development Manager, Europe, at Microchip, explains that the 40 x 40mm board will help engineers with PCB layout in these applications. It carries the company’s ATSAMA5D27C-D1G-CU System in Package (SiP), which uses the SAMA5D2 MPU. Despite the small form factor, the module integrates power management, non-volatile boot memory, an Ethernet PHY, and high-speed DDR2 memory; it can be used to develop a Linux-based system or serve as a reference design. Schematics, design, and Gerber files are available online and free of charge.

Figure 2: The ATSAMA5D27-SOM1 System on Module was announced by Microchip.

For security, the SAMA5D2 family has integrated Arm TrustZone® and capabilities for tamper detection, secure data and program storage, a hardware encryption engine, and secure boot. The SoM also contains Microchip’s QSPI NOR Flash memory, a Power Management Integrated Circuit (PMIC), an Ethernet PHY, and a serial EEPROM with a Media Access Control (MAC) address.

There is a choice of three DDR2 memory sizes for the SiP (128Mbit, 512Mbit, and 1Gbit), all optimized for bare metal, Real-Time Operating Systems (RTOS), and Linux OS. All of Microchip’s Linux development code for the SiP and SOM is available in the Linux community.

Customers can transition from the SOM to the SiP or the MPU, depending on the needs of the design, adds di Jasio.

The company also announced two new microcontroller families, one for the PIC range and one for the megaAVR series. The PIC16F18446 microcontrollers are suitable for use in sensor nodes, while the ATmega4809 is the first megaAVR device to include Core Independent Peripherals (CIPs), which execute tasks in hardware instead of through software, decreasing the amount of code required and speeding time to market.

Graphics Performance
A COM Express Type 6 module from congatec shows how the company is wasting no time in exploiting the prowess of AMD’s latest Ryzen™ processor. The conga-TR4 (Figure 3) is based on neighboring exhibitor AMD’s Ryzen Embedded V1000 processors.

Figure 3: congatec has based its conga-TR4 COM Express Type 6 module on AMD’s high-performance Ryzen Embedded V1000 processor.

The company has identified embedded computing systems that need the graphics performance of the Ryzen processor for medical imaging, broadcast, infotainment and gambling, digital signage, surveillance systems, optical quality control in automation and 3D simulators.

The Ryzen Embedded V1000 was launched days before Embedded World, together with the EPYC™ Embedded 3000 processor, and the two made up the ‘Zen’ zone in the company’s booth as they usher in a new age of high-performance embedded processors, explains Alisha Perkins, Embedded Marketing Manager at AMD.

Focusing on the Embedded V1000, Perkins explained that it targets medical imaging, industrial systems, digital gaming, and thin clients at the edge of the network. It doubles performance compared with earlier generations, offers up to three times the GPU performance of any processor currently available, delivers up to 46% more multi-threaded performance than competing alternatives and, crucially for mobile or portable applications, is 36% smaller than its competitors.

AMD couples its Accelerated Processing Unit (APU) with Zen Central Processing Units (CPUs) and Vega Graphics Processing Units (GPUs) on a single die, offering up to four CPU cores/eight threads and up to 11 GPU compute units for 3.6TFLOPS of processing throughput. On the stand were examples of medical imaging stations that could be wheeled between wards, a dashboard for remote monitoring of utilities, and an automated beer-bottle visual inspection station, all using the high-performance graphics and computing powers of the processor.

At congatec, the COM Express Type 6 module was also cited as being suitable for smart robotics and autonomous vehicles, where its Thermal Design Power (TDP) is scalable from 12 to 54W to optimize size, weight, power and costs (SWaP-C) at high graphics performance, says Christian Eder, Director of Marketing, congatec.

Industrial Automation
For the smart, connected factory, Texas Instruments introduced its latest SimpleLink™ microcontroller (MCU) devices, with concurrent multi-standard and multi-band connectivity for Thread, Zigbee®, Bluetooth® 5, and Sub-1 GHz. Designers can reuse code across the company’s Arm® Cortex®-M4-based MCUs, from sensor networks to the cloud.

The additions announced in Nuremberg expand the SimpleLink MCU platform to support connectivity protocols and standards for 2.4 GHz and Sub-1 GHz bands, including the latest Thread and Zigbee standards, Bluetooth low energy, IEEE 802.15.4g and Wireless M-Bus. The multi-band CC1352P wireless MCU, for example, has an integrated power amplifier to extend the range for metering and building automation applications, while maintaining a low transmit current of 60mA.

SimpleLink MSP432P4 MCUs have an integrated 16-bit precision ADC, can host multiple wireless connectivity stacks, and can drive a 320-segment Liquid Crystal Display; they offer an extended temperature range for industrial applications.

Security is addressed with new hardware accelerators in the CC13x2 and CC26x2 wireless MCUs for AES-128/256, SHA2-512, Elliptic Curve Cryptography (ECC), and RSA-2048, plus a true random number generator (TRNG).

Code compatibility: These new products are all supported by the SimpleLink software development kit (SDK) and provide a unified framework for platform extension through 100 percent application code reuse.

Still with automation, ADLINK’s CEO, Jim Liu, has his sights set on Artificial Intelligence (AI). “We have gone from being a pure embedded CPU vendor to an AI engine vendor,” he says, introducing autonomous mobile robotics and ‘AI at the Edge’ solutions using NVIDIA technology.

Figure 4: Industrial vision systems from ADLINK use NVIDIA technology for AI and deep learning.

Its embedded systems and connectivity couple with NVIDIA’s AI and deep learning technologies to target compute-intensive applications, such as robotics, autonomous vehicles and healthcare. Demonstrations included an autonomous mobile robot platform using ROS 2, an open source software stack specifically designed for factory-of-the-future connected solutions. There was a smart camera technology that can scan barcodes on irregularly shaped objects and differentiate between them (Figure 4). Another demonstration calculated vehicle flow to improve traffic management in a smart city.

Arm was also moving to the edge of computing, with machine learning and a display that fascinated many—a robotic Rubik’s Cube solver (Figure 5). John Ronco, Vice President and General Manager, Embedded and Auto Line of Business at Arm, sounds a cautious note: “Inference at the edge of the cloud has network, power and latency issues; there are also privacy issues,” he says. Ahead of Embedded World, the company announced its Project Trillium, promoting its Machine Learning (ML) technology, using an ML processor capable of over 4.6 Trillion Operations per Second (TOPS) with a power-conserving efficiency of 3TOPS/W, and an object detection processor.

Figure 5: Arm was demonstrating its machine learning with the classic puzzle, the Rubik’s Cube.

Embedded Tools
Swedish embedded software tools and services company IAR Systems shared news of its many recent partnerships. The first, with Data I/O, is to bridge the development-to-manufacturing gap by integrating IAR’s software with Data I/O’s data programming and secure provisioning, smoothing the transition of microcontroller firmware from development to manufacture. The two share many customers within the automotive, IoT, medical, wireless, consumer electronics, and industrial controls markets, although at separate stages of the design and manufacturing process. To address the growing complexity of designs and the security concerns in the embedded market, explains Tora Fridholm, Product Marketing Manager at IAR Systems, the two companies have established a roadmap based on customer requirements for a workflow where resources such as images, configuration files, and documents can be securely shared. Customers thus enjoy an efficient design-to-manufacturing workflow that reduces time to market.

For adding device-specific security credentials, such as keys and certificates, both companies are committed to integrating the appropriate processes and tools.

Another announcement was with Renesas Electronics, whereby its Synergy™ platform can use the Advanced IAR C/C++ Compiler™ in the e² Studio Integrated Development Environment (IDE) to reduce application code size, allowing more features to be added to Synergy microcontrollers. There is also the benefit of the compiler’s execution speed, which allows the microcontroller to remain in low-power mode longer to conserve battery life.

Synergy microcontrollers are used in IoT devices to monitor the environment in buildings and industrial automation, energy management, and healthcare equipment.

Embedded Boards
An essential part of embedded design is board technology, and this year’s show did not disappoint. WinSystems was highlighting two of its latest single board computers: the PX1-C415 (Figure 6), which manages IoT nodes, and the SBC35-C427, based on the Intel Atom® E3900 processor series.

The first uses the Microsoft® Windows® 10 IoT Core OS to support IoT development; the second is designed for industrial IoT, with an onboard ADC input, General Purpose Input/Output (GPIO), dual Ethernet, and two USB 3.0 and four USB 2.0 channels. It can be used in transportation, energy, industrial control, digital signage, and industrial IoT applications.

The SBC supports up to three video displays via DisplayPort and LVDS interfaces. It can be expanded using the Mini-PCIe socket, an M.2 connector (E-Key), and the company’s own Modular I/O 80 interface.

Figure 6: WinSystems offers one of the first boards to run on IoT Core OS.

A COM Express Type 7 module was among the highlights of ConnectTech’s booth. The COM Express Type 7 + GPU Embedded System (Figure 7) can be used to drive four independent displays or as a headless processing system. It pairs Intel Xeon® D x86 processors with NVIDIA Quadro® and Tesla® GPUs in a 216 x 164mm form factor. It anticipates the needs of high-performance applications that require 10GbE and Gigabit Ethernet, USB 3.0 and USB 2.0, HDMI, SATA II, I2C, M.2, and miniPCIe for video encode/decode, GPGPU CUDA® processing, deep learning, and AI applications.

Figure 7: A COM Express Type 7 module by ConnectTech targets high-performance applications.

The company, an ecosystem partner for NVIDIA’s Jetson SoM, also showed its Orbitty Carrier and a Cogswell Vision System, both based on NVIDIA’s Jetson TX1/TX2.


Caroline Hayes has been a journalist covering the electronics sector for more than 20 years. She has worked on several European titles, reporting on a variety of industries, including communications, broadcast and automotive.

 

IoT Growth Brings Fresh EMI/EMC Challenges: Q&A with Tektronix

Tuesday, March 6th, 2018

Taking on customers’ EMC/EMI compliance pain points led to a solution the provider has designed as all-in-one, so users can realize time and cost savings.

What the IoT brings to industrial, consumer, mobile and mil-aero connectivity does not have to include problems with electronic interference and unintentional radiators, as wired and wireless devices proliferate.

 

Editor’s Note: Unintended consequences are something to be avoided, with careful planning often the prescribed method for doing so. Recently Dylan Stinson, Product Marketing Manager at test, measurement, and monitoring solutions provider Tektronix, spoke with EECatalog.com about how designers and manufacturers can avoid interference that causes safety, regulatory, and performance issues, even as wireless “stuff” enters our lives at a relentless pace. Tektronix, says Stinson, “offers a complete solution, including pre-compliance software, accessories, and also the benefit of having a real-time spectrum analyzer.”  We spoke with Stinson at the time Tektronix announced  its EMI/EMC Compliance Testing solution. Edited excerpts from our interview follow.

EECatalog: What is EMC and who needs to care about it?

Dylan Stinson, Tektronix

Dylan Stinson, Tektronix: EMC or electromagnetic compatibility is defined as the interaction of electrical and electronic equipment with its electromagnetic environment and with other equipment.

Anybody who designs, manufactures, or imports products with electronics inside is definitely going to want to care about EMC compliance.

There have been several well-publicized cases where, because electromagnetic compatibility and EMC testing were not fully considered, companies were fined and products even had to be recalled or withdrawn from the market because their emission levels exceeded regulated limits.[1]

EECatalog:  What have designers and manufacturers been doing to achieve compliance and what’s changed?

Stinson, Tektronix: To answer your second question first, we’re seeing new problems. For example, consider a laptop computer or a smartphone [see Figure 1]. It contains all the high-speed digital systems that are necessary in a digital computer or phone, combined with wireless transmitters or receivers for necessary connectivity and communication.

Figure 1: Multiple noise sources characterize today’s systems. (Courtesy Tektronix)

With the proliferation of all these wireless-enabled devices, where you have the proximity of unintended radiators, combined with sensitive receivers, you have an area that is rife with interference opportunities.

In the case illustrated in [Figure 1], each element of the design is operating within regulated limits, but EMI from the computer clocks or random hard drive access may be entering the receivers, reducing communication efficiency and overall performance. In some cases, it may work poorly, not work at all as an overall integrated system, or not be legal.

What we’d like to see change is the situation where designers and manufacturers have to go out of house to get EMC/EMI compliance testing done—and more than half of our spectrum analyzer users are testing for EMC and EMI issues. In doing so, they are having to do things like go to a test lab with an anechoic chamber to get a pre-scan. One customer told us, “We spend $1,250 per hour for a technician, lab equipment, and chamber. This adds up over time, as you can imagine. One time, we had a situation where we spent a year trying to figure out where the noise was coming from.”

As to your first question, the traditional method is: you have a design, you take it about 90 percent of the way, then you take it to an external test house that is licensed, but as just noted, this can be expensive, especially when multiple visits and design changes are required. In the U.S., designers have reported spending as much as $10,000 to get a product certified by an external compliance test house.

EECatalog: What should designers know about in-house pre-compliance testing?

Stinson, Tektronix: Performing basic pre-compliance testing in-house, the option Tektronix is providing, can help minimize product development time and expenses and help overall with the design. Pre-compliance allows issues to be caught early on, saving time, effort, and money.

We’ve introduced EMCVu as an all-in-one EMC pre-compliance solution. It is included as a license option for our existing SignalVu-PC software, a part of our Real-Time spectrum analyzer products.

This is for both radiated emission testing and conducted emission testing as well as EMI troubleshooting and debugging. It includes all the accessories—defined and characterized so you don’t have to spend the time doing it yourself—for EMC testing: two antennas, pre-amp, and tripod for radiated emissions testing, AC LISN, DC LISN, and transient limiter for conducted emissions testing, as well as near field probes and 20 dB amp for troubleshooting.

EECatalog: What opportunities to save time and effort does your solution make possible?

Stinson, Tektronix:  One of the ways we accelerate EMC compliance is with our failure-targeting quasi-peak detector: you specify the failure you want to test and spend less time testing failures outside that scope. The solution includes an easy-to-learn wizard with built-in standards: all the limit tables for the CISPR and MIL standards are included in the software.

You can populate limit lines based on the standards you select. To add convenience as well as accuracy, we have pre-defined the gains and losses of all our accessories, including antennas, cables, LISNs, and pre-amplifiers. Because these gains and losses are already in the software, you don’t have to worry about characterization, and you get a higher level of accuracy.
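The bookkeeping behind that convenience is simple in principle. A sketch of how accessory gains and losses roll into a corrected reading (standard EMC arithmetic, not Tektronix’s code; the numbers are illustrative):

```python
# Standard EMC receiver bookkeeping: correct an analyzer reading for the
# accessories in the signal path (corrections in dB, reading in dBuV).
def corrected_reading(reading_dbuv, antenna_factor_db, cable_loss_db, preamp_gain_db):
    return reading_dbuv + antenna_factor_db + cable_loss_db - preamp_gain_db

# Illustrative values only: a 45 dBuV reading through a characterized chain.
print(corrected_reading(45.0, antenna_factor_db=14.2,
                        cable_loss_db=1.8, preamp_gain_db=20.0))
# The corrected value is what gets compared against the selected limit line.
```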

Also, you can use our ambient noise comparison feature to measure the ambient noise in the test environment. Then you can compare that ambient noise to the actual measurement and apply trace math to remove it from the actual measurement. You can readily distinguish a failure caused by ambient noise in your test environment from one caused by the equipment under test. This gives you the confidence to perform EMI/EMC pre-compliance testing in relatively noisy environments such as office areas, conference rooms, labs, and basements.
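The trace math is worth spelling out, because dB traces cannot simply be subtracted. A sketch of the underlying arithmetic (assuming numpy; not Tektronix’s implementation): convert both traces to linear power, subtract, and convert back.

```python
# Ambient subtraction done correctly: in linear power (mW), not in dB.
import numpy as np

def subtract_ambient(measured_dbm, ambient_dbm):
    measured_mw = 10 ** (np.asarray(measured_dbm) / 10)
    ambient_mw = 10 ** (np.asarray(ambient_dbm) / 10)
    corrected_mw = np.clip(measured_mw - ambient_mw, 1e-12, None)  # floor at ~-120 dBm
    return 10 * np.log10(corrected_mw)

measured = [-40.0, -35.0, -20.0]   # device under test plus ambient, dBm per bin
ambient = [-41.0, -42.0, -45.0]    # background measured with the device off
print(subtract_ambient(measured, ambient))  # bins still hot are real emissions
```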

And you can get all of the notes, images, and result information to your manager and other engineers conveniently, because the software is fully configurable for reporting in formats such as PDF and RTF. You can include any number of experiments and test results in the report.

EECatalog:  Do you anticipate any difficulties in getting folks to change what they’ve been doing, i.e., going to external test houses?

Stinson, Tektronix:  No, because we see it as not much of an effort at all for somebody to get up and running on this. The software is easy to use. It has a setup wizard, so even a junior engineer can get up and running on it and learn how to do EMI pre-compliance. We had customers going to the test house three or four times on a product. With proper in-house pre-compliance testing, customers can reduce that to as little as one visit, so the potential cost savings here are huge.

EECatalog: Anything to add before we wrap up?

Stinson, Tektronix:  This solution plays well in many markets, from IoT and medical devices to military equipment and systems, and it also spans non-traditional RF applications such as switching power supplies, DC-to-DC converters, and wireless chargers—these are all things requiring more attention to EMC and EMI compliance.

[1] http://www.fr.com/files/Uploads/attachments/fcc/2012_Q1_FCC-Enforcement-Matrix.pdf

 

For Medical Device Design on the IoT, Get a Solid Head Start

Monday, January 29th, 2018

Their value for patient care and the efficiency they create for medical personnel have medical devices proliferating rapidly on the Internet. This brings heavy responsibilities for accuracy, reliability, and meeting certification standards. Given the wide range of specific needs among patients and the devices used to care for them, medical device developers should start with a powerful, rich set of tools based on a platform that offers the team a solid starting point for implementing the power-saving, processing, and security features they need.

The development of intelligent medical devices has always been a challenge. Adding to the familiar pressures of cost, time to market, size, weight and power (SWaP) optimization, safety, security, and providing the proper feature mix, there is also the challenge of certification by agencies of multiple countries. These certifications involve different levels of stringency depending on the device’s possible risk. Still, hospitals and medical facilities have long relied on medical electronics in the clinic and operating room, and a great deal of experience has been gained in their development.

Medical, industrial, and consumer devices all increasingly connect via the Internet, making them part of a world that can seem like the Wild West. Medical devices which are part of the IoT can shorten hospital stays and enable elderly patients to live at home longer, thus driving down the costs of medical care. For more active patients the IoT’s medical “things” can also be a huge advantage with their ability to directly sense and interpret vital bodily functions and signal alerts when needed as well as connect via the cloud to physicians and specialists for more regular monitoring and emergency attention. In athletics, they can help prevent serious injury by sensing possible concussions along with other signs that could indicate a player should receive immediate attention (Figure 2).

Figure 1:  Medical IoT devices must communicate securely over one of a variety of wireless network protocols, eventually connecting to the Internet using an IP stack and ultimately to a cloud service—in this case, Microsoft Azure. And they must maintain security along the whole path. Source: Microsoft, Inc.

Among the additional challenges for wearable medical devices are even more demanding size and low-power requirements, implementing the appropriate wireless communication technology, and, perhaps most important, security. The latter is needed both to comply with existing regulations such as HIPAA and to protect the devices against hacking. Hackers could exploit access to devices to find their way into larger systems to steal and exploit data. And as gruesome as it may sound, there is also good reason to protect devices such as pacemakers from attack. The advantage of having a pacemaker connected wirelessly is being able to make adjustments and software upgrades without surgery. But that also exposes it to possible malicious attack.

Start with a Solid Platform
Fortunately, developers do not have to start from scratch, and wearable medical devices, while addressing very specialized needs and functions, do have many things in common. If developers are able to start from a point from which they can directly address their shared challenges, they can add innovation—and value. Early in the process they can quickly get started meeting those often intricate specific needs for a successful medical device project. That boils down to the concept of a “platform”—a combination of basic software and hardware that lets developers get right to adding their specific value and innovation. Of course, such a platform must offer hardware and software components, which, while shared among medical projects, still focus on the common needs of medical devices. Its aim is to provide selected components that can offer a broad set of features for medical devices without making developers search through a complex set of unnecessary components. They should be able to quickly find and evaluate those most suited to their development goals.

Among the options in such a platform should be a choice of wireless connectivity. If a device is to be worn by a patient in a hospital setting, the link should be to the hospital’s network, possibly via Wi-Fi. If the device is to be worn by a patient during normal daily activities, then a Bluetooth link to the patient’s smartphone might be more appropriate. For sporting events, such as a marathon that covers extended areas, a wider-ranging choice might be LoRa. While devices can connect directly to the Internet, the more usual approach is to connect to a gateway device using one of the wireless protocols. The gateway then sends the data over the Internet to cloud services using Internet protocols such as Transport Layer Security (TLS), which also offers methods of securing communications beyond the gateway.
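As an illustration of the gateway-to-cloud leg, here is a stdlib-only Python sketch that posts one reading over a certificate-verified TLS connection. The host, path, and payload fields are placeholders, not any particular cloud API.

```python
# Minimal gateway-to-cloud upload over TLS (Python standard library only).
import json
import ssl
from http.client import HTTPSConnection

context = ssl.create_default_context()   # verifies the server's certificate chain
conn = HTTPSConnection("iot.example.com", 443, context=context)  # placeholder host

reading = {"device": "wearable-01", "heart_rate": 72, "spo2": 98}
conn.request("POST", "/v1/vitals", body=json.dumps(reading),
             headers={"Content-Type": "application/json"})
print(conn.getresponse().status)   # the cloud service acknowledges the sample
```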

The gateway or edge device can be a specialized design or a PC in the home running the needed applications and connected to the Internet. One important design consideration is how and where to make decisions that are necessary to the patient’s well-being. For example, a fall or a blow to the body should invoke an instant response alerting the proper remote service professionals to deal with it. In other cases, the gateway can simply forward data to a cloud application where it is analyzed. Anomalous or out-of-range results can then alert a physician who can determine what steps to take. Yet again, code on the device could provide the ability to recognize the need to dispense medication, possibly via a module worn on the body. Decisions such as these will influence the allocation of resources including memory, power consumption, and processing capability and where in the device/gateway/cloud chain they are implemented. So, in addition to a rich selection of specialized peripherals and their support, the developer must select a processor, an operating system, power management functions, location and motion sensing, body sensors, security support, and a choice of wireless communication hardware and protocols.
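A hypothetical sketch of that decision split: a large acceleration spike is acted on immediately at the device or gateway, while routine samples are merely buffered for cloud analysis. The threshold and function names are illustrative only, not clinical guidance.

```python
# Where should the decision live? Urgent events locally, trends in the cloud.
FALL_G_THRESHOLD = 3.0   # hypothetical spike level, in g

batch = []   # routine samples, forwarded to the cloud periodically

def on_accel_sample(magnitude_g):
    if magnitude_g > FALL_G_THRESHOLD:
        send_immediate_alert(magnitude_g)   # no round-trip to the cloud first
    else:
        batch.append(magnitude_g)           # cloud analytics sees it later

def send_immediate_alert(magnitude_g):
    # In a real device this would page the remote service professionals.
    print(f"ALERT: possible fall or impact at {magnitude_g:.1f} g")
```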

 

Figure 2: For a device like a concussion monitor, sensors, drivers, and a certain amount of processing, security and communication capability must be built in to concentrate on a very specific set of issues both to detect the concussion and to relay information as to its possible effects.

The human interface must also be thoughtfully developed. There is little room for a rich human machine interface (HMI) on the device itself, but important functions can be placed on a small display on a smart watch, for example. When—and only when—practical, a richer set can also be implemented on a smartphone. The gateway device is often the major host for the HMI because it can be quickly accessed by the patient, or remotely by the physician, either directly over the Internet or from applications in the cloud. Of course, other control and analysis applications running in the cloud can utilize the device HMI as well as other application-based user interfaces.

…and Security is a Must
As mentioned above, devices must be able to negotiate secure connections that not only protect data but also guard against hacking and malicious access. One should naturally expect security support in the form of security-critical software components such as secure mail (secure SMTP) and secure web pages (HTTPS) for implementing layered security strategies, plus TLS as noted earlier. Secure management—SNMP v3—can be used to secure both authentication and transmission of data between the management station and the SNMP agent, along with an encrypted file system.

Given the different protocols used in connecting medical IoT devices from the patient over a wireless network to edge/gateway device and then via the Internet up to the cloud services, it is vital that security be assured end-to-end over the whole route. This must ensure that data and control codes can be authenticated as passing between a sender and receiver that have both verified their identities. It means that the messages must be undistorted and uncorrupted either by communication glitches or by malicious intent. And communications must remain secure and private, which also involves encryption and decryption.

Encrypted messages passing through the gateway from wireless protocol to the Internet will utilize a standard Internet protocol like TLS, which uses a securely stored private key with a public key to generate a unique encryption key for a given session. For both message integrity and privacy, it is important that the content as well as the private key remain secure. Additional protocols for graphics functionality along with camera and video support set the developer up with a rich selection of options for the entire range of possible medical applications.
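Python’s standard ssl module can illustrate the “both identities verified” connection described above; the certificate paths and gateway hostname below are placeholders.

```python
# Mutually authenticated TLS: the device verifies the gateway's certificate,
# and presents its own so the gateway can verify the device.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)     # hostname + cert checks on
context.load_verify_locations("ca.pem")               # trust anchor for the peer
context.load_cert_chain("device.pem", "device.key")   # this device's identity

with socket.create_connection(("gateway.example.com", 8883)) as raw:
    with context.wrap_socket(raw, server_hostname="gateway.example.com") as tls:
        tls.sendall(b'{"hr": 72}')   # encrypted and integrity-protected in transit
```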

Another thing that is often missed is the memory model used by the underlying operating system. Today, many RTOSs are modeled on Linux, which supports dynamic loading where the code can be loaded into RAM and then run under control and protection of the operating system. However, that protection can sometimes be breached, and so a dynamic loading memory model involves definite dangers when used in critical embedded systems like medical devices.

Another model executes code from Flash memory. For code to execute, it must first be placed in Flash, and loading malicious code into Flash is much more difficult than just putting it into RAM. Flash-based code is a single linked image: when it executes, it executes only from that image, with no swapping of code in and out of RAM. RAM is of course used for temporary storage of variables and stacks, such as those used for context switching, but all instructions execute from Flash.

Even if an attacker could breach the security used for program updates, they could not load a program into device memory to be executed even under control of the OS because the code must be executed from Flash. The only way to modify that Flash image is to upload an entirely new image, presumably one that includes the malware along with the regular application code. That means a hacker would need a copy of the entire Flash-based application in order to modify and upload it. Such a scenario is extremely unlikely.

Figure 3: RoweBots, Ltd. supplies a selection of prototyping platforms consisting of an RTOS, communications components, and supported peripheral (including sensor) drivers. There is also a selection of processor boards, such as the STM32F4 (l) and STM32F7 (r) boards from STMicroelectronics.

These days, the idea of a “development platform” is widespread and well accepted. Nobody wants to start from scratch, nor do they need to. Developers may already have a fairly clear idea of the class of processor and mix of peripherals they will need for a given project and will look for a stable, versatile platform: a choice of supported MCUs and hardware devices, an RTOS environment tailored for very small systems, and a host of supported drivers, software modules, and protocols. Finding a platform whose range of features and supported elements is close to a project’s goals can go a long way toward shortening time to market and verifying the proof-of-concept before making that potentially expensive commitment to a specific design. The key is to have those goals firmly in mind and then look for the platform that best meets them. Fortunately, today, many semiconductor manufacturers as well as RTOS vendors are collaborating to offer such platforms, some of which are targeted to specific markets and application areas (Figure 3).

On the other end, it is also wise to tailor the project for cloud connectivity out of the box. Among the available services are Microsoft Azure IoT Hub, AWS IoT, and IBM Watson IoT. Microsoft Azure, for example, lets developers build, deploy, and manage applications and services through a global network of Microsoft-managed data centers, providing a ready-made cloud environment and connectivity for IoT devices to communicate their data, be monitored, and receive commands from authorized personnel.


Kim Rowe has started several embedded systems and Internet companies including RoweBots Limited. He has over 30 years’ experience in embedded systems development and management. With an MBA and MEng, Rowe has published and presented more than 50 papers and articles on both technical and management aspects of embedded and connected systems.

The Digitization of Cooking

Wednesday, November 22nd, 2017

Smart, connected, programmable cooking appliances are coming to market that deliver consumer value in the form of convenience, quality, and consistency by making use of digital content about the food the appliances cook. Solid state RF energy is emerging as a leading form of programmable energy to enable these benefits.

Home Cooking Appliance Market
The cooking appliance market is a large (>270M units/yr.) and relatively slow growing (3-4% CAGR) segment of the home appliance market. For the purposes of this article, cooking appliances are aggregated into three broad categories:

  1. Ovens (such as ranges and built-ins), with an annual global shipment rate of 57M units[1]
  2. Microwave Ovens, with an annual global shipment rate of 74M units[2]
  3. Small Cooking Appliances, with an approximate annual global shipment rate of 138M units[3]

Figure 1:  Among the newer, non-traditional appliances coming online is the Miele Dialog oven, which employs RF energy and interfaces to a proprietary application via WiFi (Courtesy Miele).

Appliance analysts generally cite increasing disposable income and the steady rise in the standard of living globally as primary factors contributing to cooking appliance market growth. These have the greatest impact in economically developing regions such as the BRIC countries. However, there are other factors shaping cooking appliance features and capabilities, which are beginning to influence a change in the type of appliance consumers purchase to serve their lifestyle interests. Broad environmental factors include connectivity and cloud services, which make access to information and valuable services from OEMs and third parties possible. Individual interests in improving health and wellbeing drive up-front food sourcing decisions and can also impact the selection and use of certain cooking appliances based on their ability to deliver healthy cooking results.

Food as Digital Content?
Yes, food is being ‘digitized’ in the form of online recipes, nutrition information, sources of origin, and freshness. Recipes as digital content have been available online almost since the internet came into widespread use, as consumers and foodies flocked to readily available information on the web for everything from the latest nouveau cuisine to the everyday dinner. Over the past several years, new companies and services have been emerging to bring even more digital food content to the consumer, and they are now working to make this same information available directly to the cooking appliances themselves. Such companies break down the composition of foods and recipes into their discrete elements and offer information on calories, fat content, the amount of sodium, etc., as well as about the food being used in a recipe, the recipe itself, and the instructions to the cook—or to the appliance—on how best to cook the food.

In many ways, this is analogous to the transition of TV content moving from analog to digital broadcast, and TVs’ transition from tubes (analog) to solid state (LCD, OLED, etc.) formats. It’s not too much of a stretch to imagine how this will enable a number of potential new uses and services including, but not limited to, guided recipe prep and execution, personalization of recipes, inventory management and food waste reduction, and appliances with automated functionality to fully execute recipes.

It’s Getting Hot in Here
A common thread among all cooking appliances is that they provide at least one source of heat (energy) in order to perform their basic task. In almost every cooking appliance, that source of heat is a resistive element of some form.

Resistive elements can rise to temperature quickly, but they must raise the ambient temperature over time to the target temperature used in a recipe. Once the ambient temperature is raised, the food must absorb energy from the ambient environment to raise its own temperature. The time needed to heat a cavity volume to the recipe starting temperature adds to the overall cooking timeline and is generally a waste of energy. Just as the resistive element takes time to increase the ambient temperature, it also takes a long time to reduce it, and furthermore relies on a person monitoring the cooking process to do so. This makes the final cooking result a very subjective outcome. Resistive elements also degrade with time, becoming less efficient and lowering overall temperature output. The increased cooking time for a given recipe and the amount of attention required to assure a reasonable outcome burden the user.

Solid state RF cooking solutions, on the other hand, are noted for their ability to begin heating food instantly, a result of RF energy’s ability to penetrate materials and propagate heat through the dipole effect[4]. Thus, no waiting for the ambient cavity to warm to a suitable temperature is needed before cooking commences, which can appreciably reduce cooking time. When implemented in a closed-loop, digitally controlled circuit, RF energy can be precisely increased and decreased with immediate effect on the food, resulting in the ability to precisely control the final cooking outcome.

Figure 2:  Maximum available power for heating effectiveness and speed along with high RF gain and efficiency are among the features of RF components serving the needs of cooking appliances.

In addition, solid state devices are inherently reliable, as there are no moving parts or components that tend to degrade in performance over time. Solid state RF power transistors such as those from NXP Semiconductor are built in silicon laterally diffused metal oxide semiconductor (LDMOS) and may demonstrate 20-year lifetime durability without reduction in performance or functionality (Figure 2). RF components can be designed specifically for the consumer and commercial cooking appliance market in order to deliver the optimum performance and functionality specific to the cooking appliance application. This includes maximum available power for heating effectiveness and speed, high RF gain and efficiency for high-efficiency systems, and RF ICs for compact and cost-effective PCB design.

The Digital Cooking Appliance
At the appliance level, a significant trend underway is the transition away from the conventional appliance that supports analog cooking methods—defined as using a set temperature, set time, and continuously checking the progress. These traditional appliances have remained largely unchanged in terms of their performance or functionality for decades, and OEMs producing these appliances suffer from continuous margin pressure owing in large part to their relative commodity nature. However, newer innovative appliances coming to market are utilizing digital cooking methods which make use of sensors to provide measurement and feedback, and programmable cooking recipes which are able to access deep pools of information such as recipes, prep methods, and food composition information, online and off, to drive intelligent algorithms that enable automation and differentiated cooking results. Miele recently announced its breakthrough Dialog Oven featuring the use of RF energy in addition to convection and radiant heat, and a WiFi connection for interfacing to Miele’s proprietary application (Figure 1).

Solid state RF cooking sub-system reference designs and architectures such as NXP's MHT31250C provide programmable, real-time, closed-loop control of the energy (heat) created and distributed in the cooking appliance. A solid state RF cooking sub-system such as this must provide the necessary functionality for signal generation, RF amplification, RF measurement, and digital control, as well as a means to interface or communicate with the sub-system, for instance through an application programming interface (API). Emerging standards to facilitate the broad adoption of solid state RF cooking solutions into appliances are being addressed through technical associations such as the RF Energy Alliance (rfenergy.org), which is working on a cross-industry basis to develop proposed standard architectures for solid state RF cooking solutions.
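As a rough picture of how those functional blocks might sit behind an API, consider the sketch below; every class and method name is invented for illustration and does not describe the MHT31250C's actual programming interface.

```python
# Hypothetical sketch of the functional blocks a solid state RF cooking
# sub-system exposes through its API. All names are invented; this is
# not the MHT31250C's actual interface.
class RFCookingSubsystem:
    def __init__(self):
        self.freq_mhz = 2450.0   # signal generator setting (common ISM band)
        self.power_w = 0.0       # RF amplifier drive level

    def set_frequency_mhz(self, f):           # signal generator
        self.freq_mhz = f

    def set_power_w(self, p):                 # RF amplifier
        self.power_w = max(0.0, min(250.0, p))

    def read_forward_reflected_w(self):       # RF measurement (placeholder model)
        return self.power_w, 0.05 * self.power_w

sub = RFCookingSubsystem()   # the appliance's digital control talks via this API
sub.set_power_w(125.0)
print(sub.read_forward_reflected_w())        # -> (125.0, 6.25)
```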

With fully programmable control over power, frequency, and other operational parameters, a solid state RF cooking sub-system can operate across as many as four modules and deliver a total of 1000W of heating power, making it possible to offer differentiated levels of cooking precision as well as use multiple energy feeds to distribute the energy for more even cooking temperatures.

Solid state RF cooking sub-systems provide RF power measurement continuously during the cooking process, which enables the appliance to adapt in real time to the actual cooking progress underway. Additional sensor or measurement inputs can further improve the appliance's recipe execution. It is this real-time control plus real-time measurement capability that enables adaptive functionality in the appliance. This is important for accommodating changes in food composition, as well as for enabling revisions, replacements, and additions to recipes delivered remotely from a cloud-based service provider or the OEM. With access to a growing pool of digital details about the food to be cooked, the appliance can determine the best range of parameters to execute for achieving the desired cooking outcome.


Dan Viza is the Director of Global Product Management for RF Heating Products at NXP Semiconductor (www.nxp.com). A veteran of the electronics and semiconductor industry with more than 20 years of experience leading strategy, development, and commercialization of new technologies in the fields of robotics, molecular biology, sensors, automotive radar, and RF cooking, Viza holds four U.S. patents. He graduated with highest honors from Rochester Institute of Technology and holds an MBA from the University of Edinburgh in the UK.

[1] “Major Home Appliance Market Report 2017”

[2] “Small Home Personal Care Appliance Report 2014”

[3] Wikipedia.org

[4] Wikipedia.org

Enterprise Drives Augmented Reality Demand

Monday, November 20th, 2017

Consumer products garner the headlines, but industry is likely to drive AR commercialization.

While many things that Google comes up with seemingly turn to gold, Google Glass wasn’t one of them. According to Forbes magazine, the company’s augmented reality (AR) headset will go down in the annals of bad product launches because it was poorly designed, confusingly marketed, and badly timed.

The voice-controlled, head-mounted display was meant to introduce the benefits of AR—a technology that enhances human perception of the physical environment with computer-generated graphics—to the masses, but it failed. The product’s appeal turned out to be limited to a few New York and Silicon Valley early adopters, and even they soon ditched the glasses and returned to smartphones once buggy software and public distrust of being filmed became apparent. Faced with a damp squib, Google quietly withdrew the glasses.

However, the company shed few tears over its aborted foray into AR-for-the-masses because it gained valuable experience. Armed with that knowledge, Alphabet, Inc. (Google’s parent company) has brought Google Glass back, only this time geared toward factory workers, not the average consumer.

Some would argue the first-generation device never really went away. Pioneering enterprises—initially overlooked by Google Glass marketers—put the consumer product to work in new ways. For example, AR was used to animate instructions, which proved an effective way to deskill complex manual tasks and improve quality. Further, head-mounted AR raised productivity, reduced errors, and improved safety. A report from Bloomberg even noted that some firms went so far as to hire third-party software developers to customize the product for specific tasks.

Enter “Glass Enterprise Edition”

Such encouragement resulted in Alphabet’s launch of “Glass Enterprise Edition,” targeted at the workplace and boasting several upgrades over the consumer product, including a better camera, faster microprocessor, improved Wi-Fi, and longer battery life…plus a red LED that illuminates to let others know they’re being recorded. But perhaps the key improvement is the packaging of the electronics into a small module that can be attached to safety glasses or prescription spectacles, making it easier for workers to use. While employees are much less concerned about the aesthetics of wearables than consumers are, streamlining previously chunky AR devices improves comfort and hence usability.

According to a report in Wired, sales of Glass Enterprise Edition are still modest, and many large companies are taking the product on a trial basis only. But that hasn’t stopped Alphabet’s product managers from sounding bullish about the product’s workplace prospects. “This isn’t an experiment. Now we are in full-on production with our customers and with our partners,” project lead Jay Kothari told Wired.

Alphabet is not alone in realizing that AR is a little underdeveloped for the consumer but practical for the worker. Redmond, Washington-based software, hardware, and services giant Microsoft has also entered the fray with HoloLens, a “mixed reality” holographic computer and head-mounted display (Figure 1). And Japan’s Sony is tapping into rising industrial interest with SmartEyeGlass.

Figure 1: Microsoft’s HoloLens is finding favor with automotive makers such as Volvo to help engineers visualize new concepts in three dimensions. (Photo Courtesy https://www.volvocars.com)

Focus on Specifics

With its first-generation Google Glass, Alphabet repeated a mistake all too common among tech companies: aiming for volume sales by targeting the consumer market. That strategy worked well for smartphones, but it hasn’t proven so successful for wearables.

The consumer was quick to realize the smartphone had many specific uses—like communication, connectivity, photography—while the smartwatch, for example, seemed to just duplicate many of those uses while bringing few useful features of its own. For instance, a smartwatch’s fitness tracking functionality has little long-term use; most people can tell if they are fit or not by taking the stairs instead of the elevator and seeing if they’re out of breath as a result.

Smartwatches will really take off when they offer specialized functionality: constant glucose monitoring, fall alerts for seniors, or notifications that drivers of public service vehicles can observe without removing their hands from the wheel.

Similarly, early AR did little more for consumers than shift information presentation from the handset to a heads-up display—useful, but not earthshattering enough to justify shelling out thousands of dollars. In contrast, freeing up the workers’ hands by presenting instructions directly in their line of sight is a big deal for industries where efficiency gains equal greater profits.

Impacting the Bottom Line

Enterprise is excellent at spotting where a new technology like AR can address a specific challenge, especially if the result impacts the bottom line. Robots were added to car assembly lines because they automated tasks where human error led to safety issues; machine-to-machine wireless communications was embraced because it predicted the need for maintenance before machines ground to a halt. In both cases, the technology reduced costs by reducing the reliance on skilled workers.

And so it appears to be with AR. German carmakers Volkswagen and BMW have experimented with the technology for communication between assembly-line teams. Similarly, aircraft manufacturer Boeing has equipped its technicians with AR headsets to speed navigation of planes’ wiring harnesses. And AccuVein, a portable AR device that projects an accurate image of the location of peripheral veins onto a patient’s skin, is in use in hospitals across the U.S., assisting healthcare professionals to improve venipuncture.

Elevator manufacturer ThyssenKrupp has taken things even further by equipping all its field engineers with Microsoft’s HoloLens so they can look at a piece of broken elevator equipment to see what repairs are needed. Better yet, IoT-connected elevators tell technicians in advance what tools are needed to make repairs, eliminating time-consuming and costly back and forth.

Too Soon to Call

It is too early in AR’s development to tell if this generation of the technology will be a runaway success. In the consumer sector, the signs aren’t great. Virtual Reality (VR), AR’s much-hyped bigger brother, is not exactly flying off the shelves; a recent report in The Economist noted, for example, that the 2016 U.S. sales forecast for Sony’s PlayStation VR headset was cut from 2.6 million to just 750,000 shipments.

And although VR’s immersive experience might have some applications in training and education, enterprise applications will do little to boost its chances of mainstream acceptance. In contrast, AR’s long-term prospects are dramatically boosted by industry’s embrace. And, in the same way that PCs, Wi-Fi, and smartphones built on clunky, expensive, and power-sapping first-generation technology went on to become the sophisticated products we use today, industry’s investment in the technology will ensure AR headsets will become more streamlined, powerful, and efficient—and ultimately much more appealing to the consumer.

AR’s interleaving of the virtual and real worlds to improve human performance will become a compelling draw for profit-making concerns and the public alike. It’s reality, only better.

For more on the future of AR see Mouser Electronics’ ebook Augmented Reality.


Steven Keeping is a contributing writer for Mouser Electronics and gained a BEng (Hons.) degree at Brighton University, U.K., before working in the electronics divisions of Eurotherm and BOC for seven years. He then joined Electronic Production magazine and subsequently spent 13 years in senior editorial and publishing roles on electronics manufacturing, test, and design titles including What’s New in Electronics and Australian Electronics Engineering for Trinity Mirror, CMP and RBI in the U.K. and Australia. In 2006, Steven became a freelance journalist specializing in electronics. He is based in Sydney.

Originally published by Mouser Electronics https://www.mouser.com/. Reprinted with permission.

Digital Signage Complexities Addressed

Tuesday, November 7th, 2017

An overview of the parts that make up a digital signage system

Digital signage has been an important topic across the IT, commercial audio-visual, and signage industries for several years now. The benefits of replacing a static sign with a dynamic digital display are clear. However, while it’s impossible to avoid digital signage as we go about our daily lives, I still find myself surprised by the number of people who want digital signage, but don’t understand what goes into a signage system. The attitude of the casual observer could be, “Oh, it’s a flat panel and some video files! We can do that!”

Figure 1: Complete digital signage installation in a restaurant setting. (Photo Courtesy of Premier Mounts and Embed Digital.)

Yet the reality is that digital signage comprises more components than many realize. Digital signage is a web of technologies, involving several different pieces, and potentially several different manufacturers. It’s not nearly as simple as it may seem at first glance, but the good news is we can organize all this complexity into a few categories of components to aid understanding.

Don’t Overlook Content
First up is the obvious category: displays. No! Really! Now, my friends in the display manufacturing world really hate this, but we will start instead with a component that most people don’t think of as one, and that, if not done right, will cripple any chance the system has of success. That component is content. I group it in with all the physical hardware because it has a cost, must be planned for, and must be selected just as carefully as any piece of electronics. Content is the vehicle that delivers your message and enables you to achieve your objectives. Without it you don’t have much of a system, so plan for it, its cost, and its need to be continually refreshed. Whether produced in-house or outsourced, this is one component that must not be overlooked!

The Single Most Important Product
The Content Management System, or CMS, is the heart of any digital signage system. It’s the component that enables you to distribute and manage your content and to set up all the scheduling and dayparting you will use (a sketch of dayparting follows below). That makes the CMS the single most important product you will select (not getting past that content thing yet!). Now, a lot will come down to your strategy: what are your objectives, and for what will the signage be used? The hundreds of software packages available all offer generally the same core features, but most add capabilities aimed at a specific vertical: for example, a CMS focused on interactive content, or one focused on integration with external data sources. There are also different business models involved; some software is on-premises, meaning you purchase it and host it yourself, while other packages are Software as a Service (SaaS), hosted in the cloud for a monthly subscription fee. Neither is inherently superior, and a lot will depend on your IT policies and finances.
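To give a feel for what scheduling and dayparting mean in practice, here is a minimal Python sketch with invented playlist names; a real CMS exposes this through its own configuration screens rather than code.

```python
from datetime import time

# Minimal sketch of CMS-style dayparting (playlist names are invented):
# each daypart maps a time window to the content loop that plays in it.
DAYPARTS = [
    (time(6, 0), time(11, 0), "breakfast_menu"),
    (time(11, 0), time(16, 0), "lunch_menu"),
    (time(16, 0), time(23, 0), "dinner_menu"),
]

def playlist_for(now):
    """Return the playlist scheduled for the given time of day."""
    for start, end, playlist in DAYPARTS:
        if start <= now < end:
            return playlist
    return "default_loop"  # overnight fallback content

print(playlist_for(time(12, 30)))  # -> "lunch_menu"
```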

Once you have chosen your software, you can select a media player. Today, as we rapidly approach the end of 2017, you have a surprising number of choices beyond traditional PCs: Android, ChromeOS, custom SoC, display-embedded… this deserves a whole discussion unto itself. Keep in mind that your software vendor will often have guidelines and recommendations here, since this device is where the software lives. Personally, I’m getting to where I prefer non-Windows devices; they tend to be lower cost and easier to manage, but don’t take that as absolute… there are always great options of all types.

The Backbone
If the CMS software is the heart, then the network is the backbone. This is the connection that lets each media player communicate back to the central server, wherever it is. This is often the part of digital signage that requires the most technical skill, especially if the Internet is involved to connect multiple sites. You need to be comfortable connecting devices to the network, configuring them, managing ports and bandwidth, and dealing with firewalls. If that sounds complicated… yes, it can be! Implementing this may take a “guess and check” mentality, as communication rarely works perfectly the very first time you power on; a simple reachability test like the sketch below is often the first step. Sorry, plug and play is an illusion used in marketing, not reality!
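As one small example of that “guess and check” work, a sketch like the following can confirm whether a media player can reach its server at all; the host name and port are placeholders.

```python
import socket

# Minimal sketch of a connectivity check for a media player (host and
# port are placeholders): can we reach the CMS server through the
# network and any firewalls in between?
def can_reach(host, port, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(can_reach("cms.example.com", 443))
```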

Displays: Understanding Three Things
Selecting the type of display involves understanding three things: environment, hours of use, and audience position. Understanding the environment in which the display will live helps us choose how bright it needs to be and whether we need protection against dust or moisture. Knowing the hours it will be turned on helps us select the duty cycle required. Finally, knowing the audience position helps us select how large the display (and the image shown on it) will need to be. LCD flat panels are the most common and will be the go-to for general-purpose displays, but projectors are being used quite a bit as well (especially the models that don’t use traditional lamps!). Direct-view LED displays are much more affordable than they have been, so those are now a much more common choice. Each one has its own pros and cons.
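Those three questions can be boiled down to a sketch like this one, where the thresholds are invented placeholders rather than industry specifications.

```python
# Minimal sketch of the three-question display selection above; the
# thresholds are invented placeholders, not industry specifications.
def recommend_display(ambient_lux, hours_per_day, viewing_distance_m):
    brightness = "high-brightness" if ambient_lux > 5000 else "standard brightness"
    duty = "24/7-rated" if hours_per_day > 16 else "commercial duty"
    size = "large-format or direct-view LED" if viewing_distance_m > 8 else "LCD flat panel"
    return f"{brightness}, {duty}, {size}"

# Bright retail window, long hours, distant audience:
print(recommend_display(8000, 18, 10))
```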

We’re not quite done, so bear with me a bit longer. We also need mounting for the display and media player. Before you laugh at me, this is a more complex choice than you might think. You need to understand the structure you are mounting onto, where the display will go, and whether you are dealing with an unusual environment. Sometimes we need protective enclosures, or a kiosk for an interactive display. All of this makes the mounting solution key, and not something to be selected as an afterthought. Also, always buy from a top-tier mount provider… saving money here can cost time and increase the risk of mount failure.

Now that we have covered these components, you should understand the general parts of a digital signage system. If this topic intrigues you and you want to learn more, I will present “Understanding the Parts of a Digital Signage Network” at Digital Signage Expo 2018 on Wednesday, March 28 at 9 a.m. at the Las Vegas Convention Center. For more information on this or any educational program offered at DSE 2018, or to learn more about digital signage, go to www.dse2018.com.


Jonathan Brawn is a principal of Brawn Consulting, an audio-visual consulting, educational development, and marketing firm based in Vista, California, with national exposure and connections to the major manufacturers and integrators in the AV and IT industries. Prior to this, Brawn was Director of Technical Services for Visual Appliances, an Aliso Viejo, California-based firm that holds the patent on ZeroBurn™ plasma display technology.

Zigbee® 3.0 Green Power: Saving the Day for IoT Connectivity

Tuesday, November 7th, 2017

With the proliferation of apps and devices, a standard is needed to ensure simplicity on the user end, reliability of connections, and interoperability of products from a variety of manufacturers.

With the 2014 release of Zigbee 3.0, the Zigbee® Alliance announced the unification of its wireless standards into a single standard. Zigbee 3.0 seeks to provide interoperability among the widest range of smart devices and to give consumers and businesses access to innovative products and services that work together seamlessly to enhance everyday life.

Zigbee 3.0 also includes Zigbee Green Power. Zigbee Green Power was originally developed as an ultra-low-power wireless standard to support energy harvesting devices (i.e., devices without batteries that extract the energy they require from the environment). Green Power is especially effective for devices that are only sometimes on the network (when they have power). The Green Power standard enables these devices to join and leave the network in a secure manner, so they can be off most of the time.

As an ultra-low-power wireless technology, Green Power is also a very effective option for battery-powered devices, as it enables them to run off a battery for years. Green Power also allows low-cost end nodes to communicate with the rest of the network, specifically in situations where no meshing is required.

What Is Meshing?
Meshing has long been an intriguing concept in networking technology, so let’s take a quick look using a familiar Wi-Fi meshing scenario. The basic home Wi-Fi setup today is a cable or DSL router that wirelessly connects with our tablets and smartphones. If it doesn’t work so well, we install a repeater as an intermediate. These days there are sets of router boxes available that are preconfigured to wirelessly work together to cover our sprawling mansions or tidy cottages, as the case may be, including that room behind the garage where we escape to play video games. Reliable, speedy coverage.

So how can meshing help? The concept is a simple one. If I am in one corner of the house with my smartphone, and a laptop is closer to me than the router, then I just hop (mesh) via the laptop—as long as it is powered on. Meshing is generally self-configuring: everyone on the network helps everyone else reach the router and, via the router, the internet. And if that intermediate laptop is turned off, then my smartphone finds another device. Meshing can also be self-healing; it’s no problem if one link to the router breaks down, as the network will (hopefully) find another one.
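To make the hopping concrete, here is a minimal sketch with made-up nodes and link qualities; it illustrates the idea only and is not the actual Wi-Fi or Zigbee routing algorithm.

```python
# Made-up nodes and link qualities to illustrate meshing; this is not
# the actual Wi-Fi or Zigbee routing algorithm.
LINKS = {  # node -> {neighbor: link quality}
    "phone":  {"laptop": 0.9, "router": 0.2},
    "laptop": {"router": 0.8},
}

def next_hop(node, powered_on):
    """Pick the best powered-on neighbor; self-healing simply means
    re-running this whenever a link or intermediate device disappears."""
    candidates = {n: q for n, q in LINKS[node].items() if n in powered_on}
    return max(candidates, key=candidates.get) if candidates else None

print(next_hop("phone", {"laptop", "router"}))  # hops via the laptop
print(next_hop("phone", {"router"}))            # laptop off: weaker direct link
```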

So, what’s the downside? This is one of those situations where sympathetic, pioneering technology clashes with the day-to-day grumpy consumer, who wants reliable connectivity all the time without being bothered with much else.

Unfortunately, there are three general problems with meshing—and this is true whether we’re looking at Wi-Fi or Zigbee (or Thread). The first is intermittent failures, the second is related to battery life, and the third is cost.

Meshing Has Issues; Zigbee Green Power Has Answers
Intermittent failures are usually recognized as a nasty network problem—sometimes it works, sometimes it doesn’t. Nobody knows why, exactly, so nobody can really fix it. In the meshed Wi-Fi example above, maybe your smartphone network connection depends on whether your son leaves his game station on or off—and good luck figuring out that one.

And then there’s battery life. In a meshed Wi-Fi network, your laptop might become an intermediate node. Suddenly, your laptop is running low on battery, because someone else in the family was (perhaps unknowingly) using your laptop as a hopping point (“meshing node”) for watching YouTube videos on his tablet. The meshing network has actually created a battery issue for a device that wouldn’t have had a problem otherwise.

And finally, there’s a cost issue. Every node in the network is not only an edge node (“a node doing something”), but it also needs to be able to function as a meshing node—and it needs to be equipped to do so. In practice, this means running more software on a larger processor with more memory. And meshing nodes may also have to be “on” constantly, which requires a larger and more expensive battery, while an edge node only has to be turned on when triggered.

But here is where the Zigbee 3.0 with Green Power saves the day. Zigbee enables meshing, but does not require it. Edge nodes can easily be Green Power networked, and if the area to be covered exceeds the range of a single router, multiple routers can work together to establish a backbone network that all the edge nodes can connect to, without carrying the overhead themselves to be meshing nodes.

Is Meshing an Outdated Concept?
In the context of a home, meshing is a band-aid solution for a poor radio that lacks range. Meshing would not be necessary with a powerful enough radio, running on a coin cell battery, that enables you to reach the router—even if it is not located in the most optimal center of the home. These radios may not have existed 10 years ago, but they exist today. This makes meshing a fringe solution for exceptional radio coverage problems. And these days, coverage problems are more often solved by implementing multiple radio-frequency Wi-Fi channels, not meshing. Zigbee can do that, too.

The self-healing “benefit” of meshing is also a bit of a relic from the days when networking equipment was not as reliable as it is today. Single points of failure were red flags, and mesh networks with multiple paths were seen as a great plus—enabling rerouting via alternative connections at the moment of breakdown. But with today’s reliable networking equipment, the need for avoiding single points of failure is more or less gone.

Still, there are many situations where meshing is a good and practical solution, e.g., where coverage is limited or where infrastructure is lacking.

How Does Green Power Work?
As a standard feature of Zigbee 3.0, Green Power’s simple networking protocol essentially moves all the complex networking features to a proxy (usually the router), while the Green Power node focuses on making sure that the essential signal—whether a temperature measurement, a command to turn on a light, or a report that a door or window is open or closed—reliably reaches a router for further consumption. As mentioned, Zigbee Green Power features ultra-long battery life, and in the case of energy-generating light switches, there is no need for batteries at all. Zigbee Green Power is fully integrated with Zigbee 3.0 and fully compatible with all the services that Zigbee 3.0 delivers, from installation to security, and from ease of use to maintenance.
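A minimal sketch of that division of labor follows, with invented message fields rather than the actual Zigbee Green Power frame format: the node does the bare minimum and sleeps, while the proxy supplies the network-layer heavy lifting.

```python
# Invented message fields to illustrate the Green Power proxy idea; this
# is not the actual Zigbee Green Power frame format.
def green_power_report(sensor_id, reading):
    """Ultra-low-power node: build the smallest possible report, then sleep."""
    return {"src": sensor_id, "payload": reading}

def proxy_forward(frame, network):
    """Router/proxy: wrap the bare report with full network-layer handling."""
    network.deliver({"route": "backbone", "secured": True, **frame})

class DemoNetwork:
    def deliver(self, msg):
        print("delivered:", msg)

proxy_forward(green_power_report("door-1", "open"), DemoNetwork())
```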

Green Power Works in Simple and Complex Scenarios
At the simplest end, it enables low-cost implementations of standard, standalone solutions. This covers most of the Zigbee applications in the market today. For example, if you have a few lamps, a dimmer switch, and a gateway, then to connect the lamps to the internet, you only need Green Power. As simple as that—no meshing capability required.

For more extensive solutions, Green Power enables the building of a simple Zigbee star network in your home. Or there can be multiple stars dropping from a single backbone in the case of a larger building installation. Either way, Green Power eliminates the disadvantages of meshing—intermittent connections that are difficult to diagnose, and unexpected situations where sensor nodes suddenly and quickly run out of battery power.

Green Power also allows for a Zigbee infrastructure that is fully aligned with the Wi-Fi infrastructure in a building. It is useful to note that a Zigbee radio has a range comparable to or better than that of a Wi-Fi radio. Zigbee (IEEE 802.15.4) is essentially low-power Wi-Fi (IEEE 802.11).

The practical fact is that both Zigbee 3.0 (with Green Power) and Wi-Fi integrate in a single router box, simplifying and cost-reducing the overall networking infrastructure for consumers or enterprise customers, and paving the way for the real Smart Home as part of the Internet of Things. But only Zigbee 3.0 networks use meshing when it really adds value—the real power of Green Power.


Cees Links is GM of Qorvo’s Wireless Connectivity Business Unit. Links was the founder and CEO of GreenPeak Technologies, which is now part of Qorvo. Under his responsibility, the first wireless LANs were developed, ultimately becoming household technology integrated into PCs and notebooks. He also pioneered the development of access points, home networking routers, and hotspot base stations. He was involved in the establishment of the IEEE 802.11 standardization committee and the Wi-Fi Alliance. He was also instrumental in establishing the IEEE 802.15 standardization committee, which became the basis for ZigBee® sense and control networking. He was recently recognized as a Wi-Fi pioneer with the Golden Mousetrap Lifetime Achievement award. For more information, please visit www.qorvo.com.

The Internet of Things… Are We There Yet?

Wednesday, August 9th, 2017

A common driver exists for the IoT, centered on knowledge and decisions.

There is a lot of chatter about the IoT these days, with tech companies, journalists, investors and consumers all trying to figure out what it is, what it will affect, and how to make money from it.

But what exactly is the IoT? What is its core value? And are we there yet? Considering that the IoT may have as much (or more) impact on society as computers and the Internet have had, these are important questions.


What exactly is the IoT?

The name may be somewhat misleading, but probably the best way to describe the Internet of Things is as an application or as a service that uses information collected from sensors (the “things”), analyzes the data, and then does something with it (e.g., via actuators – more “things”).

The service, for instance, could be an electronic lifestyle coach, collecting data via a wristband, analyzing this data (trends) and coaching the wearer to live a healthier life. Or it can be an electronic security guard that analyzes data from motion sensors or cameras, and creates alerts. “Internet of Services” might be a more accurate description of the IoT value.

But whatever its best name may be, the IoT is typically a set of “things” connected via the cloud (Internet) to a server that stores and analyzes data (trends, alerts, etc.) and then communicates with a user via an application running on a computer, tablet, or smartphone. So it’s not the “things connected to the Internet” that create value. Rather, the value comes from collecting, sending/receiving, and interpreting the data, and from taking action based on data analytics, not from the things themselves.
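Reduced to its essentials, that flow looks something like the following sketch, with all names and thresholds invented for illustration.

```python
# Sensor -> analysis -> action, reduced to its essentials. All names and
# thresholds here are invented for illustration.
readings = [{"sensor": "wristband", "heart_rate": 92},
            {"sensor": "wristband", "heart_rate": 118}]

def analyze(data):
    """Server side: turn raw readings into a trend or alert."""
    avg = sum(r["heart_rate"] for r in data) / len(data)
    return {"avg_heart_rate": avg, "alert": avg > 100}

def act(result):
    """App side: coach the wearer based on the analysis."""
    if result["alert"]:
        print("Coach: time to slow down and rest.")

act(analyze(readings))
```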

Why all the IoT hype now?

A cynic might attribute this to technology companies needing “something new” when the first signs emerged of a saturating smartphone market. But the reality is that a few fundamental things changed, creating momentum for new emerging applications that found a home under the umbrella of IoT—from Fitbits to thermostats, smart street lights to smart parking.

The first fundamental change was that the Internet became nearly ubiquitous. Initially connecting computers, the Internet now connects homes and buildings. And with the advent of wireless technology (Wi-Fi, LTE), access to the Internet changed from a technology into a commodity.

The second fundamental change was essentially Moore’s Law rolling along, with smaller, more powerful and lower cost devices being developed to collect data.

And finally, low-power communication technologies were developed that extended the battery life for these devices from days into years, connecting them permanently and maintenance-free to the Internet.

What is the real value of the IoT?

We live in a wonderfully interesting time, when amazing things happen. Consider that in the year 1820, 90% of the population lived in abject poverty. Today, some 200 years later, that percentage has shrunk to under 10%, even though the population itself has multiplied several times. It is the miracle of the industrial revolution and many other things coming together. After World War II and the invention of the transistor, the industrial revolution seamlessly folded into the technology revolution, and we went from computers to smartphones, and from the Internet to the IoT.

The common driver? It’s all about “making better decisions faster.” The industrial revolution was based on innovation and creativity, individual freedom, and organization. Consider that the Hoover Dam, one of the wonders of the twentieth century, was designed with slide rules, paper, and pencils. Three decades later, we managed to put men on the moon using computers that had a fraction of the power of our smartphones.

The drive to make better decisions faster brought computers into existence. Does anybody remember how to do bookkeeping without a computer? Or run a manufacturing plant? Making better decisions faster drove the Internet into existence. When was the last time you wrote a letter instead of an email? What was the last edition of the Encyclopedia Britannica before Wikipedia’s real-time updates rendered it obsolete?

Making better decisions faster is driving the IoT into existence, too, and will make it the pivotal technology of the current decade. It will make our personal lives more comfortable, safer, and more secure. We will waste less energy. The IoT will improve the quality of our products. Factories will be more efficient with raw materials and other resources. We will be able to better monitor our environment, and our impact on it. The IoT is not a break from the past; it is a natural progression in making better decisions faster, and a continuing engine for our economic growth and wealth creation, driving out poverty altogether.

Are there downsides to the IoT?

The industrial revolution and the subsequent invention of assembly line production certainly resulted in groups of winners and losers, and there was major upheaval and social unrest as we came to grips with all the changes. The technology revolution also contributed to upheaval and unrest. People were replaced by computers and lost their jobs.

To date, despite considerable pessimism about the loss of jobs to automation, overall employment does not appear to have decreased. Clearly, change has been very painful for those impacted. But overall, where jobs were lost, other jobs were created. And by economic law, jobs with low value-add disappeared and were replaced with jobs with high value-add—the “cleaning mechanism” through which economic growth and wealth creation were effected.

The IoT will follow the same pattern. It will redefine jobs and skills. It may even create unrest. There will be winners and losers. There will be people who see opportunities, and there will be people who fall victim because they cannot absorb “better and faster.” In this sense, the IoT will be just the next example of the tradition of the industrial revolution: more prosperity comes at a price.

The IoT’s network of connected devices will absorb many of the repetitive, drudge work tasks of today. And in much the same way as the post-industrial revolution period, while machines are doing the grunt work, humans will have more time to spend on solving bigger problems. Will it enable the next level of creative culture? A new generation of space explorers? A new enlightenment, perhaps?

So are we there yet?

As with many technologies, after a few years of high expectations, the IoT is slowly entering the Valley of Disillusionment phase of the “hype cycle,” that quiet phase where sobering reality starts kicking in. This is usually also the period where the fads and wild ideas separate from the strong, more realistic groundswell of useful applications. The good news is that, when we compare this to other technologies, we seem to have short memories of the “not quite right yet” years, when early adopters worked to help the technology through to success. The same will happen with the IoT.

The IoT suffers today from a lack of understanding of its true value proposition. At the same time, a plethora of proprietary and open communication standards inhibits interconnectivity and creates confusion among consumers and product builders alike, keeping product prices high and delaying market growth. On top of all that, large companies seem determined to seek the holy grail by promoting their own ecosystems.

Even if we are currently in the Valley of Disillusionment, we should not be distracted. We still have a lot to learn (maybe less about technology and more about business models that maximize the value-add), but we are in the middle of shaping a better world for the next generation. A world with less poverty and, hopefully, fewer wars. Maybe a new Golden Age, an enlightened world? We have a long way to go, but we will see—because we can!


Cees Links was the founder and CEO of GreenPeak Technologies, which is now part of Qorvo. Under his responsibility, the first wireless LANs were developed, ultimately becoming household technology integrated into PCs and notebooks. He also pioneered the development of access points, home networking routers, and hotspot base stations. He was involved in the establishment of the IEEE 802.11 standardization committee and the Wi-Fi Alliance. He was also instrumental in establishing the IEEE 802.15 standardization committee, which became the basis for ZigBee® sense and control networking. Since GreenPeak was acquired by Qorvo, Cees has become the General Manager of the Wireless Connectivity Business Unit at Qorvo. He was recently recognized as a Wi-Fi pioneer with the Golden Mousetrap Lifetime Achievement award. For more information, please visit www.qorvo.com.
