Posts Tagged ‘top-story’


Removing Technological Barriers to IoT Adoption through Pre-Certified Connectivity

Tuesday, February 26th, 2019

Modular Sigfox RF solution eliminates technical design barriers, simplifying and shortening the design and certification process by integrating an advanced RF SoC with all of the necessary external components, and by including CE certification

The Internet-of-Things (IoT) enables greater productivity, control, and efficiency while delivering power and potential across an almost limitless breadth of markets and end applications. It has received a great deal of attention in markets where cutting-edge technology has not yet been widely employed, especially where connectivity is required.

In industrial and consumer applications, the key blocks of sensing, processing, actuation, and connectivity are pivotal to IoT designs. Modular, plug-and-play solutions for each block, where available, can significantly speed and ease new designs in applications such as smart home / smart buildings, wellness, and asset tracking, to name but a few. This is especially relevant if modular solutions include tailored development tools and are pre-certified against any relevant international regulatory standards and protocol requirements.

Connectivity is one of the most challenging areas, presenting a plethora of protocols that are more or less relevant depending on the specific nature of each application. With an established infrastructure and long-range connectivity, Sigfox™ has emerged as one of the most useful connectivity technologies. However, for many potential designers of IoT solutions, Sigfox is a new technology area, so easing its adoption is key to the proliferation of the IoT.

Challenges with IoT Applications

There are over 31 billion devices (‘things’) connected to the IoT – and thousands more are being connected every day. Together, these devices are bringing huge positive change to consumers and business users all around the world. In the home, automated lighting control is saving energy and providing security. Remote doorbells allow users to be ‘home’ from anywhere on the planet, provided they can access the internet. In business, every detail of a factory or other facility can be monitored, delivering data that enhances operational efficiency more than ever before. Businesses that operate equipment in remote locations can monitor operations from the comfort of their offices, eliminating the cost of regular inspection visits. With enhanced data analytics, real-time monitoring, predictive maintenance, and other high-value propositions, the overall benefits of the IoT are becoming a reality.

However, many of the attributes that make these IoT devices so useful and portable, such as their small size, connectivity, and ability to be used remotely, also present significant challenges to designers. While the devices are physically small, allowing them to be deployed in constrained spaces, an IoT node needs to contain a significant amount of functionality. Typically, this includes a microcontroller (MCU) to manage the system and process data; various types of sensors, depending on what is to be measured or monitored; and cryptographic technology to ensure that any sensitive data is stored and transmitted securely. A power source is also necessary and, while many IoT devices are deployed in homes, offices, or factories where mains power is available, many are battery-powered for convenience. Naturally, all IoT devices used in remote locations where no mains power is available will be battery-powered.

The size constraints, as well as the finite energy available from batteries, mean that designers have significant challenges to overcome: selecting and implementing small, ultra-low-power components and developing sophisticated power-management algorithms to ensure that no valuable energy is wasted.

Figure 1: The AX-SIP-SFEU is the world’s most compact Sigfox Verified solution; a miniature 7 mm x 9 mm x 1 mm, conformally coated package can be deployed in space-constrained, remote IoT applications.

Challenges in Connecting IoT devices

Another challenge with IoT devices is providing the communications interface that is essential to connect the node to the IoT. This is a relatively specialist area, and one key challenge for designers lies in selecting the most appropriate protocol(s) from the huge range that is available. Some of the protocols are proprietary and intended for very specific applications, while others such as Bluetooth® and Wi-Fi are widely implemented, albeit for short-range applications.

Until recently, cellular technology was one of the few available methods for connecting nodes beyond the reach of short-range wireless technologies such as Bluetooth®. However, cellular was designed for voice and high-data-rate communications, making it relatively power-hungry and unsuitable for the simple Machine-to-Machine (M2M) communications that the IoT requires and relies upon.

Sigfox is a cellular-style system that has been established to provide low power, long range, low data rate, and low-cost communications for remotely connected devices, especially IoT nodes. Aimed at simple M2M communications, the Sigfox network allows simple connectivity over distances far greater than a simple low power transmitter can achieve alone. The network employs Ultra-Narrow Band (UNB) technology that enables low transmitter power while maintaining a robust connection. Designed to fit almost any IoT application, Sigfox has few constraints.  Provided the application does not need to send more than 140 twelve-byte messages per day, and can accept a wireless throughput of 100 bits per second, Sigfox provides a reliable, low power, and low-cost connectivity solution.
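These constraints are easy to check up front. The sketch below encodes the limits quoted above (140 twelve-byte messages per day at roughly 100 bits per second); the function names are illustrative, not part of any Sigfox API:

```python
# Sigfox limits as cited above: 140 uplink messages/day, 12-byte
# payloads, ~100 bit/s uplink data rate.
MAX_MESSAGES_PER_DAY = 140
MAX_PAYLOAD_BYTES = 12
UPLINK_RATE_BPS = 100

def fits_sigfox(payload_bytes: int, messages_per_day: int) -> bool:
    """Return True if a reporting profile fits within the Sigfox limits."""
    return (payload_bytes <= MAX_PAYLOAD_BYTES
            and messages_per_day <= MAX_MESSAGES_PER_DAY)

def airtime_seconds(payload_bytes: int) -> float:
    """Rough lower bound on time-on-air for one payload at 100 bit/s
    (ignores protocol framing and Sigfox's repeated transmissions)."""
    return payload_bytes * 8 / UPLINK_RATE_BPS

# A sensor reporting a full 12-byte frame every 15 minutes
# (96 messages/day) fits comfortably within the budget.
print(fits_sigfox(12, 96))    # True
print(airtime_seconds(12))    # 0.96
```

An application that needs larger payloads or more frequent reporting would fail this check and be a poor fit for Sigfox, regardless of the hardware chosen.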

However, unlike ubiquitous communication protocols such as Bluetooth, know-how relating to Sigfox is considered relatively ‘niche.’ This presents design engineers with a steep learning curve to design and implement a successful Sigfox-based communications interface, creating a technological barrier-to-entry for companies seeking to address the remote IoT market.

Modular Sigfox Solution Eliminates Technological Design Barriers

ON Semiconductor is active in this sector and recently announced a programmable Sigfox RF transceiver System in Package (SiP) that integrates an advanced RF System-on-Chip (SoC) with all of the necessary external components (including a TCXO), thereby creating an opportunity to simplify and shorten the design and certification process.

ON Semi’s AX-SIP-SFEU SiP delivers out-of-the-box device-to-cloud Sigfox connectivity including both uplink and downlink for remote IoT applications using Sigfox LPWAN communication. The SiP includes a Sigfox radio IC, discrete RF matching components, all required passive components, and firmware in a single package. As the solution has been pre-certified for CE and is Sigfox Verified, using ON Semiconductor’s extensive know-how, designers are assured of a high quality, fully integrated, and complete solution.

The miniature 7 mm x 9 mm x 1 mm, conformally coated package ensures that the AX-SIP-SFEU can be deployed in space-constrained, remote IoT applications. The AX-SIP-SFEU device is the world’s most compact Sigfox Verified solution, ensuring that designers can overcome the physical space challenges of designing remote IoT nodes.  The miniature size is especially suitable for wearables, asset tracking tags, or any other application requiring a small Sigfox solution.

Power-related issues are also significantly reduced by using the AX-SIP-SFEU, as the ultra-low power design incorporates standby, sleep, and deep sleep modes to conserve power when not required to transmit. In these modes, currents of just 0.55 milliamps (mA), 1.2 microamps (µA), and 180 nanoamps (nA), respectively, are drawn, allowing the device to be powered from a coin cell battery (CR2032). Alternatively, energy harvesting techniques may be used, removing the need for any battery, along with its management and replacement.
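A back-of-envelope estimate shows why these figures matter. The sketch below uses the 0.55 mA standby and 180 nA deep-sleep currents quoted above; the CR2032 capacity (a nominal 220 mAh) and the transmit current and duration are assumptions for illustration only, not datasheet values:

```python
CR2032_MAH = 220.0   # nominal coin-cell capacity (assumed, not a datasheet value)
DEEP_SLEEP_MA = 180e-6   # 180 nA deep-sleep current, as cited above

def avg_current_ma(profile):
    """Average daily current draw in mA.
    profile: list of (current_mA, seconds_per_day) active phases;
    the rest of the day is spent in deep sleep."""
    day = 24 * 3600
    active = sum(t for _, t in profile)
    charge = sum(i * t for i, t in profile) + DEEP_SLEEP_MA * (day - active)
    return charge / day

def battery_life_days(profile):
    return CR2032_MAH / avg_current_ma(profile) / 24

# e.g. 10 transmissions/day at an assumed 40 mA for 2 s each,
# plus 60 s/day in standby at 0.55 mA; deep sleep otherwise.
profile = [(40.0, 10 * 2), (0.55, 60)]
print(f"{battery_life_days(profile):.0f} days")
```

Even with generous assumptions for the transmit burst, the average current is dominated by the active phases, which is why sub-microamp sleep modes translate into multi-year coin-cell lifetimes.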

One of the most daunting aspects of any radio design, especially when designing for the first time, is gaining approvals. The AX-SIP-SFEU SiP is Sigfox Verified for the RC1 zone network, meaning that it is certified to comply with the RF and protocol specifications of the standard, thereby ensuring interoperability. Additionally, the device has achieved CE certification, verifying that it conforms to the health, safety and environmental protection standards for products sold within the European Economic Area.


While the IoT delivers huge benefits and opportunities, the small and complex nature of nodes creates significant challenges for design engineers. Not only must they meet the physical size constraints and deal with the low power requirements, but they also have to ensure that the RF communications included in the design meet international standards – which add time, cost, and risk to the design process. This is particularly important for remote devices requiring long-range wireless connectivity that need a more cost-effective solution than could be provided with cellular networks.

By using pre-certified, ultra-miniature, ultra-low-power modules such as the AX-SIP-SFEU from ON Semiconductor, designers are now able to design IoT nodes with the confidence that they can implement a pre-certified RF communications system with ease and almost zero risk, thereby removing one of the significant technological barriers to IoT design.                                                                               

Brian Buchanan is the manager of the Wireless Connectivity Solutions business unit in the Applications Products Group at ON Semiconductor with more than 15 years of experience in the semiconductor industry.  He began his career as a mixed-signal design engineer, where he designed multiple products for automotive and industrial applications.  He has held various positions within the company as a product definer, corporate marketing manager, product line manager, strategic marketing manager, and recently took on responsibility for managing the business operations of the Wireless Connectivity Solutions business unit within ON Semiconductor.

Mr. Buchanan holds a Bachelor of Computer Engineering and an MBA from the University of Utah.


How to Adapt an Existing Design for Use in the Internet of Things (IoT)

Friday, February 15th, 2019

Making post-certification software or hardware changes may trigger the need to requalify a product. However, separating portions that need certification into subsystems so that bugs in one subsystem will not affect the performance of other subsystems lends a considerable advantage.

Arild Rodland, Business Development Manager (EMEA), Microchip Technology Inc.

For many people, the current surge in connected household appliances evokes memories of how personal computers became increasingly connected to the internet in the 1990s. At the time, there was a similar debate over whether this technology was simply a gimmick or would indeed have a lasting impact on society. Connected PCs and cell phones are now considered indispensable, and many foresee a similar path ahead as the world embraces connected household appliances.

The ability to turn on a coffeemaker from anywhere in the world may not seem like life-changing technology, but coffeemakers are just the start of the household IoT revolution. The IoT will serve as the foundation for an evolution of innovation and business opportunities in the appliance space. Ongoing advances in machine learning and artificial intelligence technology will only accelerate this evolution. The ability to gather raw data from appliances and sensors opens a whole new world of use cases and opportunities.

Some designers are unsure whether they want to join the IoT revolution because they fear that building an embedded design with IoT connectivity will be a daunting task. The reality is that the requirements are quite simple to achieve. An IoT-enabled product typically consists of only three elements: a processor or microcontroller (the “smart” element), a network controller (the “connected” element), and a means of securing the communication with the cloud (the “secure” element).

Since most designers have already invested considerable time and effort into making a great product, there is an advantage to reusing most of the existing design work. Often, only the connectivity and security elements need to be added to an existing design to enable IoT connectivity. Rather than having to design a solution from the ground up, it is possible to quickly transform existing designs for connection to the IoT. This can be done in a highly efficient manner using techniques proven in the software programming world to simplify and accelerate development.

Decomposing the Challenge

There are a few tricks embedded designers can learn from software programmers as they embark on the task of enabling an existing product to operate in the IoT. Programmers facing a complex programming challenge have a long tradition of turning to a top-down design approach, or modular programming. This method involves decomposing the bigger problem into smaller, more manageable sub-problems, which in turn can be divided into smaller tasks to tackle. This is a powerful and proven approach to solving challenging problems that would be difficult to solve with monolithic code. So, how does this translate to embedded hardware systems?

It turns out that embedded systems engineers can achieve the same benefits by modularizing their system development. In addition to posing pure programming challenges, embedded systems often need to comply with standards and undergo rigorous certification processes. Making post-certification software or hardware changes may trigger the need to requalify the product. For this reason alone, there is a considerable advantage in separating the portions that need certification into subsystems. This way, bugs in one subsystem will not affect the performance of other subsystems.

For example, many designers want to add a secure internet connection to the next generation of an existing product to improve the user experience and facilitate adding capabilities including remote diagnostics, monitoring functionality, automated fulfillment services, and statistical data gathering to plan for future product enhancements. This IoT-enabled product will need three main functions: 1) the original application; 2) connectivity to the internet; and 3) a means of securing the application. As illustrated in Figure 1, this type of IoT-enabled application is the original application with added security and connectivity.

Figure 1: An IoT-enabled application consists of an application, security, and connectivity.

From an implementation point of view, this design challenge can be broken down into three subtasks, where the original application code is reused, and only security and connectivity are added.

Both security and internet connectivity are, however, complicated to engineer from scratch. In addition, integrating new functionality into an existing application can interfere with the existing solution, reducing the quality of the combined application. Developers often write code that has been highly optimized for the current application. As a result, it can be very difficult to add timing-critical connectivity and computationally heavy security while still guaranteeing the same benchmark levels of performance on the updated products.

Figure 2 illustrates this combined approach. All functionality is implemented as a single solution, increasing the complexity of both writing and debugging the application. Bugs in one part of the code may affect timing and performance of other critical functions, making it much more likely that a simple bug could have side effects, triggering the need for a requalification.

Figure 2: In this integrated solution, all code and functionality are integrated into a single device, increasing the code complexity and code development time.

Taking a modular approach will allow designers to keep their existing codebase and IP intact and just add connectivity and security functionality as needed.

Using the above approach, the functionality of security and connectivity can be implemented as separate software and hardware tasks, which saves an enormous amount of time and reduces the number of engineers needed for a given product. The approach also provides for easier code and system reuse, which offers greater flexibility. For instance, a designer might want to offer both a Wi-Fi and a Bluetooth Low Energy (BLE) version of the same product. The modular approach enables fast and easy innovation in IoT design in this scenario.
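The Wi-Fi/BLE scenario above comes down to hiding the radio behind a narrow interface, so the application never needs to know which transport it is using. The class and method names below are illustrative, not from any Microchip library:

```python
from abc import ABC, abstractmethod

class Transport(ABC):
    """Narrow interface between the application and the connectivity module."""
    @abstractmethod
    def send(self, payload: bytes) -> str: ...

class WiFiTransport(Transport):
    def send(self, payload: bytes) -> str:
        # Stand-in for a real Wi-Fi module driver.
        return f"wifi:{len(payload)} bytes"

class BLETransport(Transport):
    def send(self, payload: bytes) -> str:
        # Stand-in for a real BLE module driver.
        return f"ble:{len(payload)} bytes"

def application(transport: Transport) -> str:
    """Unchanged application logic, independent of the radio chosen."""
    return transport.send(b"sensor-reading")

# Shipping a Wi-Fi or a BLE version is now a one-line swap:
print(application(WiFiTransport()))   # wifi:14 bytes
print(application(BLETransport()))    # ble:14 bytes
```

Because only the transport class changes between product variants, the application code, and any certification it has already earned, stays untouched.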

Figure 3: With a modular solution, designers can reuse the existing application and isolate the security and connectivity to smaller and more manageable tasks that work independent of the main application.

The advantage of the modular approach is that all the work focused on optimizing and tweaking the existing system is not lost when adding IoT connectivity to the product. The designer can easily add the required functionality without affecting other parts of the system.

To simplify the process, developers can choose certified modules for both security and wireless communication, which will significantly reduce certification time and the amount of time required to get the new product to market. An example of such a certified secure element is Microchip’s ATECC608A device. This device handles all the tasks associated with authentication and secure storage of keys and certificates, delivering a secure solution without requiring any code writing. Similarly, certified wireless modules execute everything needed to connect securely to a wireless network.

Using certified modules for security and wireless functionality also eliminates the need for a designer to be an expert in security or communications. The modules include all the necessary pieces of code and generally are controlled by simple commands sent over a serial interface like UART, SPI, or I2C.
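As a rough illustration of "simple commands sent over a serial interface," the sketch below builds terminated ASCII command lines of the kind many serial-controlled modules expect. The command names (`STATUS`, `SEND`) and the port name are hypothetical placeholders; a real module defines its own command set in its datasheet:

```python
from typing import Optional

def frame_command(cmd: str, arg: Optional[str] = None) -> bytes:
    """Build a CR/LF-terminated command line, as many serial modules expect.
    Command names here are illustrative, not a real module's command set."""
    line = cmd if arg is None else f"{cmd}={arg}"
    return (line + "\r\n").encode("ascii")

print(frame_command("STATUS"))          # b'STATUS\r\n'
print(frame_command("SEND", "1A2B"))    # b'SEND=1A2B\r\n'

# With pyserial, the host side then reduces to something like
# (port name and baud rate assumed):
# import serial
# with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
#     port.write(frame_command("SEND", "1A2B"))
#     reply = port.readline()
```

The point is that the host firmware only formats and parses short command strings; all the protocol and security complexity stays inside the certified module.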

To further simplify design and accelerate time to market, development boards such as Microchip’s AVR-IoT WG Development Board contain these modules for secure and easy-to-deploy IoT connectivity. Using tools like these, it can take just 30 seconds and a few clicks for an engineer to connect an existing product to the Google Cloud IoT Core and start transmitting data.

Figure 4: The AVR-IoT WG Development Board combines an AVR® microcontroller, secure element IC and certified Wi-Fi network controller, enabling designers to prototype connected devices within minutes.

The ability to connect appliances and consumer products to the cloud creates the potential for them to deliver considerably greater value, whether through delivering big data for artificial intelligence and machine learning applications or simply to offer an easier way to execute secure remote firmware updates. Decomposing the challenge and using certified modules for security and communications functionality gives designers a shortcut for adapting their current designs to take advantage of these opportunities.

Multi-Sensor Modules Ease Indoor Agriculture Design Challenges

Monday, February 11th, 2019

Controlled-environment agriculture employs technology so that it is not dependent on expensive arable land, embracing the Industrial Internet of Things.

Controlled-environment agriculture (CEA, also called vertical farming due to the multistoried facilities that house the crops and infrastructure) is a promising niche. By embracing the Industrial Internet of Things (IIoT) and employing fully enclosed, climate-controlled environments to eliminate external factors such as disease, pests, or seasonal weather variations, yields of plants such as leafy greens, herbs, and tomatoes can multiply dramatically. The technology is not dependent on expensive arable land and can therefore be established anywhere, including the center of urban communities.

Behind the high yields that vertical farming generates are some leading-edge electronic engineering developments that include wireless sensors, mesh networking, and cloud connectivity. This infrastructure is vital for the success of a CEA enterprise, but it adds cost because it is complex and time-consuming to design, commission, and configure. The engineering expertise required to build today's CEA systems, and their purchase costs, limit adoption to large, wealthy organizations and discourage the spread of the technology to the places where it could have the greatest impact, such as the developing world. However, electronics manufacturers are working hard to change this trend by introducing products that combine the multiple sensors essential for successful vertical farming into a single module, making the technology easier to implement and less costly.

A Tricky Enterprise

While vertical farming is gaining momentum (the global CEA market reached about $1.5 billion in value in 2016 and is projected to grow to about $6.4 billion by 2023, according to analytics company Statista), its adoption is currently limited to wealthy countries. Research analyst MarketsandMarkets notes that China, Japan, Singapore, the United States (US), and the Netherlands are by far the biggest players. Likewise, though the LEDs, wireless sensors, Internet connectivity, and online software and hardware resources required for vertical farming are proven and accessible, large commercial implementations are rare. A major reason for this relative scarcity is that CEA systems are challenging to design and implement.

Vertical farming success depends on the precise management and maintenance of a farm’s environment. By controlling factors such as light intensity and wavelengths, temperature, humidity, airflow, and CO2 and volatile organic compound (VOC) concentrations, supervisors can ensure plants grow as fast as possible while retaining their aesthetic, taste, and nutritional value for consumers. For example, research has shown that constant air movement (at a constant vapor pressure) in the range of 0.3 to 0.5 m/s provides an optimum diffusion of CO2 and water vapor into leaves to aid photosynthesis and maximize the growth of tasty leafy greens.

Such delicate management of growth conditions demands the supervision of computer-based, closed-loop process-control systems, in which sensors wirelessly forward data to a supervisory computer located in the facility or at a remote site. If a sensor reading detects drift from the optimum diffusion rate, the supervisory computer can make appropriate adjustments to restore conditions to the desired level. Connectivity ensures that the system retains all the data and can use it to determine which conditions work best for a particular plant’s growth.
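The closed-loop idea can be sketched in a few lines. The controller below nudges a fan toward the 0.3 to 0.5 m/s airflow band cited above for leafy greens; the setpoint, gain, and function names are illustrative assumptions, not taken from any CEA product:

```python
SETPOINT_MPS = 0.4   # middle of the 0.3-0.5 m/s band cited above
KP = 0.5             # proportional gain (tuning assumption)

def fan_adjustment(measured_mps: float) -> float:
    """Signed fan-speed correction proportional to the airflow error."""
    return KP * (SETPOINT_MPS - measured_mps)

def control_step(measured_mps: float, fan_speed: float) -> float:
    """One loop iteration: apply the correction, clamping speed to 0..1."""
    return min(1.0, max(0.0, fan_speed + fan_adjustment(measured_mps)))

# Airflow has drifted low (0.25 m/s), so the controller speeds the fan up.
print(control_step(0.25, 0.5))   # 0.575
```

In a real installation this step would run on the supervisory computer against each sensor reading, with the corrections logged so that analytics can refine setpoints between harvests.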

For example, CEA scientists from the Newark, New Jersey-based AeroFarms monitor more than 130,000 data points every harvest. The technicians then use the information together with predictive analytics to review, test, and enhance the growth systems for the next harvest.

CEA Electronic Engineering Made Simple

A closed-loop, networked, wireless sensor system for a single variable is a challenge to implement, but the difficulty significantly multiplies when control extends to multiple variables that determine the success of CEA. These challenges include:

  • Financing the installation and maintenance of large sensor populations,
  • Building and verifying sensor-based systems and the associated mesh networks,
  • Commissioning and configuring multiple sensor types,
  • Ensuring conflict-free communications across large networks, and
  • Managing power to widespread sensor populations.

One solution to these CEA engineering challenges is arriving in the form of commercially available modules that integrate multiple sensors into a single package. Products such as TE Connectivity’s AmbiMate sensor module provide multiple sensors on a ready-to-attach printed circuit board (PCB) assembly for easy integration into a host product. Each of the modules combines four core sensors (i.e., motion, light, temperature, and humidity), while variants can add CO2, VOC, and sound (microphone) capabilities.

The sensors are linked to a single onboard microprocessor and leverage a single power supply. The embedded microprocessor is equipped with an I2C interface that makes linking to a wireless transceiver for data transfers relatively straightforward. Each module variant is manufactured on the same footprint, making it simple for designers to swap to a different sensor mix when it is necessary. Finally, the onboard sensors are capable of measuring parameters with the precision required for CEA. For example, the temperature and humidity sensors offer ±0.3°C and ±2 percent relative humidity (RH) measurement accuracy, respectively, and a one-second acquisition rate (Figure 1).
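On the host side, an I2C-attached module like this typically delivers raw multi-byte readings that the designer scales into engineering units. The decoding below is a hypothetical illustration: the scaling, register offsets, and device address are placeholders, not the AmbiMate register map, which the datasheet defines:

```python
def decode_temp_c(msb: int, lsb: int) -> float:
    """Convert a hypothetical 16-bit centi-degree reading to Celsius."""
    return ((msb << 8) | lsb) / 100.0

def decode_rh(msb: int, lsb: int) -> float:
    """Convert a hypothetical 16-bit centi-percent reading to %RH."""
    return ((msb << 8) | lsb) / 100.0

print(decode_temp_c(0x09, 0x24))   # 23.4  (degC)
print(decode_rh(0x13, 0x24))       # 49.0  (%RH)

# On a Linux host the raw bytes would come from a block read, e.g. with
# smbus2 (bus number, device address, and register offset assumed):
# from smbus2 import SMBus
# with SMBus(1) as bus:
#     raw = bus.read_i2c_block_data(0x2A, 0x00, 4)
#     print(decode_temp_c(raw[0], raw[1]), decode_rh(raw[2], raw[3]))
```

Because all the module variants share one footprint and one I2C interface, this host-side code stays the same when a designer swaps in a variant with extra CO2 or VOC sensors.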

Figure 1: TE Connectivity’s AmbiMate sensor module MS4 series is based on a common footprint, making it easier for designers to customize end products for a specific application. (Source: TE Connectivity)

By integrating multiple sensors into a single module, products such as AmbiMate’s MS4 series can reduce the number of nodes in a CEA process-control system and ease many of the key engineering challenges. Fewer nodes lead to simpler networks, lower complexity and cost, and faster testing, configuration, and commissioning.

Encouraging Vertical Farming

Traditional farming already consumes 40 percent of the planet’s land, and finding more space to grow crops requires cutting down valuable rainforests. Yet, according to the US news magazine Newsweek, food production will need to expand by 70 percent by 2050 to feed a predicted global population of nearly 10 billion.

According to the CEA Global Association, vertical farming is the answer because it combines the sciences of agriculture, engineering, and technology to grow fresh food in scaled and sustained environments 365 days a year and in locations where weather and outdoor environments would otherwise limit or preclude fresh food production. Furthermore, localizing fresh food production near urban populations is smart and efficient. By integrating agriculture and engineering, conservation of finite resources such as water, arable land, and energy becomes feasible.


There is no doubt CEA is a promising technology, considering the best-run installations produce annual yields approaching 400 times greater than the equivalent area of traditional farmland. Still, it is a fledgling enterprise, and the barriers to entry are currently so high that all but the wealthiest nations are excluded from adopting the technology. The barriers are high because CEA’s success relies heavily on technology that is currently complex and expensive. The introduction of multi-sensor modules, such as TE Connectivity’s AmbiMate, is a significant step on the path to simplifying CEA-based engineering and releasing the technology to further help and feed the world.

Steven Keeping gained a BEng (Hons.) degree at Brighton University, U.K., before working in the electronics divisions of Eurotherm and BOC for seven years. He then joined Electronic Production magazine and subsequently spent 13 years in senior editorial and publishing roles on electronics manufacturing, test, and design titles including What’s New in Electronics and Australian Electronics Engineering for Trinity Mirror, CMP and RBI in the U.K. and Australia. In 2006, Steven became a freelance journalist specializing in electronics. He is based in Sydney.

Reprinted by permission from Mouser Electronics

Retail, IoT, and Smart Lighting Combined?

Thursday, September 27th, 2018

But enterprises also have to see the light when it comes to IoT security

Part of the retail scene is lighting, from ambient to accent lighting. LED lighting has boosted energy savings, reduced heat buildup in freezer cases, and lasts much longer than other bulbs. Many stores have converted, or are in the process of converting, all lighting to LEDs, and the IoT is close on its heels. Why not combine the two? Lighting fixtures can also serve as an IoT platform for beacons and sensors to track customer interaction with products and displays. GE’s Predix Platform is marketed for the Industrial IoT as “a comprehensive and secure application platform to run, scale, and extend digital industrial solutions.”[i] According to Lux magazine, “GE’s Predix Internet-of-Things network can transform the lighting into a smart digital infrastructure that uses data and analytics to optimise energy usage, employee productivity and customer interaction.” Sainsbury’s, a large grocery retailer in the U.K., has announced plans to retrofit all lighting in every store to LEDs by 2020. In a two-for-one inspiration, Sainsbury’s is integrating the GE Predix platform into its lighting installations to add IoT capabilities. The IoT network will use the new lighting system as a digital infrastructure for gathering data and performing analytics to improve energy efficiency, customer interaction, and employee productivity.[ii]

IoT security is at the top of the list for such systems, however, and smart lighting can be open to easy attacks if not properly protected. One out of every four luminaires will be smart by 2020, according to the Boston Group. Aniruddha Deodhar, Head of Connected Spaces, IoT Services Group at Arm, states, “Isolation (using hardware to isolate sensitive operations and data), cryptography (end-to-end encryption of data), tamper mitigation and side-channel attack protection (physical protection against tamper attacks such as spoofing, man-in-the-middle, malware and lab-based attack) and security services (secure remote firmware updates over the air) can minimize the risks of IoT lighting systems and greatly enhance their security features.”[iii] He goes on to say that good security starts with a Threat Model and Security Analysis (TMSA), and companies like Arm are working hard to provide the tools for securing the IoT, including in retail, from the ground up.[iv] The IoT can bring great improvement to the bottom line for retail, but it is not worth the headache of a breach if it is not secured from the idea stage on through provisioning and beyond.



A Day in the Life: How IoT Is Changing the World of Sports and Health

Monday, July 9th, 2018

A look at a day in the life of a professional cyclist and the IoT applications that support her.

Monitoring Made Easier with the IoT
First thing in the morning, the instant the athlete opens her eyes, she taps her Fitbit and measures her resting pulse rate. Great cyclists generally have extraordinary heart capacity, and a lower resting heart rate typically implies more efficient heart function and better cardiovascular fitness. It can also indicate whether the athlete has any infections or circulatory problems.

Her smart helmet has bone conduction audio technology, which turns audio into vibration that goes straight to the inner ear…

This morning, our athlete has a very low 37 beats per minute (bpm): all clear for a good training day.

The author’s daughter, Alicia Franck, a professional cyclist in Europe

She starts off with a well-balanced breakfast of granola, fruit, and yogurt for a total of 550 kcal. She enters the food and its weight in a food calculator app and shares it via the IoT with the nutritionist who’s a part of her athletic support team.

The nutritionist then optimizes our athlete’s food intake for the required output, based on three types of days: training, racing, or rest. And indeed, less weight on the bike equals less resistance and better results, so continuous monitoring and follow-up are key.

Honing a Training Plan in the Lab
Before our cyclist ever gets on her bike for the day’s training ride, the cyclist’s support team has previously analyzed both her equipment and her fitness levels to maximize every possible variable:

Fitness and Endurance Tests
Professional cyclists are under continuous medical supervision and do several endurance tests per year to measure their progress while training for the start of the season or for specific events, like a world championship. These tests can include maximal oxygen consumption (VO2 max), lactate threshold and a total body scan to measure the athlete’s fat percentage. The results of these tests are shared with the cyclist, sports doctor, trainer, and nutritionist, who all collaborate to shape the perfect sports body for our athlete and create optimized training schedules and nutrition plans.

Our cyclist undergoing wind tunnel tests

The training team uses a bicycle fitting lab and wind tunnel to test and optimize the position and geometry of the cyclist on her bicycle. The variables to adjust are countless: pedals, cleats, cycling shoes, and crank arm length. The selection of saddle, its height and tilt, the type of cycling shorts for comfort. Handlebars, brake levers, and hoods. A bike is symmetrical, but the human body is not—and every millimeter of adjustment can impact a professional cyclist’s results.

All this real-time information shapes the cyclist’s individual training plan so that she can focus on what’s most important: her riding.

New Levels of Training Efficiency with IoT
When it’s time to start her training session, our cyclist will put on her training clothes, shoes, and helmet. But her equipment is optimized by technology and the IoT:

  • Her smart helmet has bone conduction audio technology, which turns audio into vibration that goes straight to the inner ear from the tabs of the helmet straps, through the cheekbones, bypassing the ear drum. The result is amazing: You hear music and voice navigation “inside your head,” and the cyclist can still hear the ambient sounds of traffic to maintain situational awareness for safety. It’s the safest way to listen to music while riding.
  • Her GPS computer/navigator tracks her location and is linked to her heart monitor.
  • Once on her training ride, she uses a power meter—a device fitted to the bike that measures the power output of the rider—to quantify her workout and give instant feedback. Cycling experts believe a bottom bracket power meter system is the most accurate type, helping cyclists measure progress accurately without the influence of external factors like weather that can skew the result.
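For readers curious about what a power meter actually computes, here is a minimal sketch of the underlying arithmetic: strain gauges measure pedal torque, the cadence sensor gives rotational speed, and power is their product. The function name and figures below are illustrative, not taken from any particular device.

```python
import math

def pedal_power_watts(torque_nm: float, cadence_rpm: float) -> float:
    """Instantaneous power = torque (N*m) x angular velocity (rad/s)."""
    angular_velocity = 2 * math.pi * cadence_rpm / 60.0  # rpm -> rad/s
    return torque_nm * angular_velocity

# Example: 40 N*m of pedal torque at a cadence of 90 rpm
print(round(pedal_power_watts(40, 90)))  # 377
```

A bottom bracket unit measures torque where both legs' effort combines, which is one reason it is considered accurate.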

“A bike is symmetrical, but the human body is not—and every millimeter of adjustment can impact a professional cyclist’s results.”

But the cyclist isn’t alone on her ride—her support team can monitor it in real time using the IoT. The trainer can follow the cyclist with the GPS tracker as training sessions are automatically shared as live streams. The trainer checks heart rates, speed, power—and discipline. Nothing goes unnoticed, and skipping a training session becomes very visible.

In the past, trainers only looked at average speed during training sessions. Today, they look at distance and speed, power output and explosivity, velocity, resistance or help from tail- or headwind, and many other variables (like weather), which can all influence the result of a training session.


Athletes today have access to a plethora of technology, tools, instruments, and applications to measure performance and progress and share it with their support and training team:

  • Wireless communication (GPS tracking, Bluetooth in the helmet and heart monitor, ANT+ for the electronic gears and more) enhances user comfort and safety.
  •  Dashboard applications look at the bigger picture.

The team analyzes all gathered data to further optimize the most precious instrument the cyclist has: her body.

A Growth Path in Sports
No one is born a world-record holder. People are born with certain abilities and talents, but they must work with those abilities to become the best. When technology kicks in to optimize an athlete’s performance and bicycle, in close collaboration with the supporting medical and care team, the IoT has a promising growth path in sports.

And for the rest of us who aren’t professional athletes, connected IoT devices and applications can still bring the same benefits. We can use different apps and wearable devices like a Fitbit or Apple Watch to track our own fitness, monitor progress toward goals, share our achievements, and stay motivated, as well as convey information to health care providers. At its heart, the IoT can bring more information and more data for sports and health—no matter your fitness level.

Elly Schietse is Director of Wireless Connectivity at Qorvo, Inc. As an evangelist of the Internet of Things, Schietse has been enriching people’s views of how RF technology and the IoT can create a better world. She has worked closely with Cees Links since the start of GreenPeak Technologies, which Qorvo acquired in 2016.

Testing IoT at Scale Using Realistic Data, Part II

Tuesday, May 15th, 2018

Continuing the IoT testing conversation on how to comprehensively test a large IoT system and not just the sum of its parts. KnowThings introduces a novel solution that performs quality testing on a massive scale using machine learning and dramatically reduces the engineering workload…

Editor’s Note: This is Part II of a two-part series on testing IoT on a massive scale. Click to find: Part I of Testing IoT at Scale Using Realistic Data.

The Internet of Things (IoT) is growing into big business, and with it comes masses of sensors, all gathering data for a more significant cause. One of the problems developers have, however, is how to realistically test the whole system, in all its potential chaotic glory. How do you road-test managing communications with thousands of IoT sensors to a cloud? And even if you manage to fake that kind of traffic, what nuances are you missing at that kind of scale?

Embedded Systems Engineering’s Editor-in-Chief, Lynnette Reese, sat down with KnowThings, an ambitious smart-startup accelerator project within CA Technologies that created a tool for IoT developers to realistically test IoT applications, from small to immense scale, by leveraging machine learning. They currently call it the Self-Learning IoT Virtualizer. KnowThings’ CEO, Anand Kameswaran, talked with ESE about its mission to make realistic IoT simulation and testing effective and easy, including cloud interaction, internet foibles, massive numbers of IoT sensors and connections, and IoT chaos in general.

Lynnette Reese (LR), Embedded Systems Engineering: 

Why not just create a script that fakes inputs from a large number of devices? More IoT just creates more traffic, right?

Anand Kameswaran (AK), KnowThings: That sounds good on the face of it, but IoT doesn’t work that way past the first layer. There’s much more to a live, interconnected IoT ecosystem than simulating sensors spinning out data. We wanted a way to test an entire IoT network, interacting with a cloud and potential latencies, against as much chaos as real-world scenarios can throw at an engineer who’s trying to make sure that an automated 10-plate juggling act scales to 10,000 plates, including a database, cloud, and multiple IP addresses.

Faking 10,000 sensors can be done manually using a computer to spit out data. However, not only does that approach take time and a potential learning curve, it’s also coming from one source, and the data is not varying naturally as it would with a machine-learning algorithm to simulate realistically. The ability to generate realistic data scenarios for our customers is one of our unique value propositions. This includes factoring for latency, environmental factors, the correlation between them and the IoT data, as well as replicating the real data that can be predicted to be accurate over an extended period. In addition, the ability to generate a large amount of data (for example, one year’s worth of data) within a brief period is very valuable for customers who want to use a huge quantity of data to qualify and make decisions on their predictive analytic systems under test.
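The hand-scripted approach Kameswaran contrasts against might look like the naive sketch below (all names and parameters are illustrative, not the KnowThings API). Note that the drift and noise must be hand-tuned per sensor type, which is exactly the tedium a learned model removes.

```python
import random

def simulate_sensor(base: float, n: int, drift: float = 0.01,
                    noise: float = 0.5, seed: int = 0):
    """Return n readings that trend slowly and vary, rather than
    repeating a fixed script of identical values."""
    rng = random.Random(seed)
    value = base
    readings = []
    for _ in range(n):
        value += drift + rng.gauss(0, noise)  # slow trend plus random variation
        readings.append(round(value, 2))
    return readings

# 10,000 virtual temperature sensors, each with its own trajectory
fleet = {f"sensor-{i}": simulate_sensor(21.0, 100, seed=i) for i in range(10_000)}
print(len(fleet), len(fleet["sensor-0"]))  # 10000 100
```

Even this toy version shows the limitation: the variation is statistical noise, not learned physical behavior, and nothing here models latency or the correlations between sensors that the interview describes.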

The IoT Virtualizer saves time and helps you find issues that you didn’t know you would or could have at scale. It allows IoT developers to build robust code by testing from the ground up and helps testers find out what they don’t know. Someone said, “There are things we know that we know. And there are known unknowns, which just means that there are things that we now know we don’t know. But there are also unknown unknowns.” And this is what KnowThings provides, a look into testing what we do not know that we don’t know.

LR: I think that was Donald Rumsfeld, talking about national security.

AK: Yes, that sounds right. But it’s true. You cannot test for what you don’t know can possibly happen at higher levels, after everything is online and interconnected. Abstractions aside, it’s just smart to test thoroughly, and simulation and machine-learning are our core competencies.

LR: Speaking of security, how does the self-learning IoT Virtualizer help with security?

AK: We take security seriously and are working on that, with plans to add the ability to deal with encrypted data streams in our next major release. But first we had to get the tool working solidly for as many IoT industries and scenarios as possible. Another interesting angle for security is testing the whole system and how the entire system in action reacts to hacking attempts. If a node is overwhelmed, will the security you have in place work the same as you planned when it had only a couple of requests a second? Our tool creates realistic, dynamic scenarios in which to test security hypotheses. In addition, working with our customers, we found some creative ways to detect phone home and back-door security exploits that we are hoping to market in the future.

LR: So, I’m hearing that it’s just another way of prototyping with a more realistic, real-world scenario simulation using machine learning.

AK: Large-scale mass data input is difficult to realistically simulate without the IoT Virtualizer, because we incorporated machine learning to replicate the underlying physical behavior. Building on that, the Virtualizer creates a simulation of what large IoT systems might experience in the real world. And this is how IoT developers can test their IoT systems realistically and thoroughly. Once the tool creates a template and gets a basic idea of the overall interactions, it fills in the gaps for you. That is where some of the time savings come from. The manual way of doing simulations is that you have to build every case you want to test for, which becomes very time-consuming. The plan for our commercial version, due later this year, will allow the software to handle network latencies and congestion by editing the model. The timing of device interactions can be adjusted to replicate devices functioning in different regions of the world, for example.

Figure 1: FunkNFresh Farm uses aquaponics. The fish live in the same water that is fed to hydroponically grown vegetables.

LR: I’m having a hard time visualizing what the Virtualizer is virtualizing, so to speak. Can you give me an example of an application where the Virtualizer would make a real difference for a developer or quality/test engineer?

AK: One example of a use for this first product is in farming or large-scale agriculture. There are so many sensors that are being deployed to monitor the moisture, temperature, humidity and various other factors. The data is collected and developers look for anomalies. But developers do not have the ability to set up 10,000 actual sensors and have them working as they would in the real world. That’s what we can offer.

LR: I recall researching for this interview and reading your blog on the hydroponics farm, FunkNFresh Farms. Can you expand on that IoT example?

AK: Sure, but it’s an aquaponics farm, not just hydroponics. Aquaponics is when you raise fish in a pond or tank. The fish live in the same water that is fed to hydroponically grown vegetables. The plants get nourishment from the fish waste and also purify the water, keeping the fish healthy. Whereas high technology and instrumentation are not exactly required for aquaponics, this is my wife’s company, so I was recruited to make it all work, and of course I had to make the greenhouse profitable.

LR: So, you have personal IoT design and development experience, then?

AK: Oh, yes. In fact, I spent several years contributing to open source automation servers and other similar projects before the current generation of smart home solutions. As far as the greenhouse project, it was a lot of fun, and the farm is a successful project that’s selling produce. Aquaponics requires a controlled mini ecosystem but brings year-round crop production. The IoT part of it includes optimization of greenhouse operations through instrumentation, data collection, and automated actions. So, the greenhouse is an IoT device, which measures and reports on water temperature, ambient lighting, and circulating water pH.

Figure 2: The aquaponics farm has an air and a water pump. Both pumps are monitored simultaneously by a microphone.

LR: This is connected to the web?

AK: Yes, and before you point out that this is just automation and that IoT involves data analysis, I am working on that through audio analysis. I am putting a microphone in the greenhouse that picks up audio of both the water and air pump. In a two-for-one status check, the system checks both pump run statuses at the same time by capturing the audio and doing spectrum analysis on the .wav files. I can determine if the pumps are running, and if they are under stress or load. While this is very much a work in progress, I am happy that its moving in the right direction.
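As a rough illustration of the two-for-one spectrum analysis described here, the Goertzel algorithm can measure the energy at each pump's characteristic frequency in a single audio capture. The snippet below uses a synthesized signal in place of the real .wav data, and the pump frequencies (120 Hz water pump, 50 Hz air pump) are assumed purely for illustration.

```python
import math

def tone_power(samples, sample_rate, target_hz):
    """Goertzel algorithm: energy of a single frequency bin in a signal."""
    k = int(0.5 + len(samples) * target_hz / sample_rate)  # nearest DFT bin
    w = 2 * math.pi * k / len(samples)
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2

rate = 8000
t = [i / rate for i in range(8000)]  # one second of "microphone" audio
# Synthetic capture: the 120 Hz water pump is running, the 50 Hz air pump is not
signal = [math.sin(2 * math.pi * 120 * x) for x in t]

water = tone_power(signal, rate, 120)
air = tone_power(signal, rate, 50)
print(water > air * 100)  # True: the water-pump tone dominates its bin
```

Comparing band energies over time, rather than a single threshold, is what would let the system flag a pump under unusual load before it fails outright.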

LR: Clever.

AK: Thanks. It is progressing well, and it’s a cheaper solution than individually monitoring each pump. The analysis is done on a remote computer. Over time we will have files that will not only predict a pump that’s shut off, but a pump that is about to fail. IoT not only alerts on status, but also allows me to do optimizations at a lower cost, fewer parts, and with less complexity versus a straight automation route.

Figure 3: The .wav files from the microphone are sent via internet to a remote computer for an audio analysis to determine if a pump has failed (and if so, which one), and whether a pump is under an unexpected load.

LR: How did the IoT Self-Learning IoT Virtualizer come into being?

AK: We started out with machine learning and genome sequencing on a massive scale. Fast-forward to three or four years later, and we are applying it to IoT devices, for example, smartwatches with heart monitors and such. We are training on the data that transmits back and forth, so when your smartwatch is telling your phone, “your pulse rate is now at 58 bpm,” we can learn quickly what that means and be able to reproduce that without an engineer having to serve as an interpreter. The machine learning does all of that interpretation for us. This means that if I am a developer, at the end of the day I don’t really care how those bytes are structured. I just want to run my tests, build my code, and configure the system. Machine learning makes it so that the very time-consuming tasks are made faster, for example, configuring the physical system or the simulator by replicating the scenarios.

LR: I can see how that would save time for engineers because they don’t have to stop to decode what’s going on. The Self-Learning IoT Virtualizer carries that load.

AK: Yes, and the engineer is relieved of the task of getting the physical IoT device environment set up to verify application logic by generating realistic data. By plugging in the Virtualizer, which utilizes machine learning, you can take what would be two to three weeks of painstaking work and accomplish it in about five minutes. You do not need to be an expert in every device or every piece of kit you’re playing with; you just need to know how they all come together as a whole. Now I can use a tool to create the realistic data that I need in about five minutes, which otherwise could have taken up to a month to produce and would need subsequent care and feeding throughout the life cycle of a product.

LR: What kind of QA testing have you already done?

AK: We’re on the third version of the tool and have tested with several real-world customers who have very large IoT networks. It took one of our customers about 80 hours to create their own simulation for their particular IoT scenario. KnowThings was able to create an adaptive virtual device for them using our Self-Learning IoT Virtualizer in just five minutes. We call them ‘adaptive’ virtual devices because they can learn how to simulate behavior that they have not yet seen but that is still possible. Adaptive virtual devices are useful for testing because you aren’t forced to think of all the ways your device could behave. In the end you can test with a blend of what you actually observed, what was generated through the machine learning, and whatever else you want to add. This grants complete coverage to test and build your solution.

LR: What kinds of clouds do you work with? Can the Virtualizer capture the quirks of the various clouds from different providers?

AK: The IoT Virtualizer is cloud-agnostic. It doesn’t matter what cloud you have chosen, because we interface deeper down, in the network layer.

LR: What IoT protocols can the IoT Virtualizer support?

AK: We are working on integrating several protocols, but our KnowThings product currently supports TCP/IP, REST, and CoAP over TCP. We are also working on integrating several other protocols such as Zigbee, LoRa, Modbus, Bluetooth etc. We are open to suggestions.

LR: IoT can be as simple as a group of sensors. Can I virtualize a group of sensors?

AK: Yes, you can virtualize sensors on a very large scale if you like.

LR: How can developers get their hands on this tool?

AK: We are in the early adopter stage right now and offering a role in beta testing. There’s an opportunity for us to partner with those customers that are part of the early adoption program. Not only will partners shape what everything should look like, but also help in developing best practices in a very challenging development environment. We want to know about the real challenges IoT is facing and concentrate on solving the problems that IoT developers care about.

Anyone interested in trying it out and contributing suggestions to improve the tool can download the community edition pre-release or sign up for the early adopter program on the KnowThings site. Commercial product launch is in mid-summer.

For more information, go to the Self-Learning IoT Virtualizer FAQ online.  

Lynnette Reese is Editor-in-Chief, Embedded Intel Solutions and Embedded Systems Engineering, and has been working in various roles as an electrical engineer for over two decades.



Testing IoT at Scale Using Realistic Data, Part I

Thursday, April 12th, 2018

IoT can involve vast numbers of sensors, devices, and gateways that connect to a cloud. How do you comprehensively test your whole IoT system and not just the sum of its parts? KnowThings introduces a novel solution that performs quality testing on a massive scale using machine learning and dramatically reduces the engineering workload…

Editor’s Note: Embedded Systems Engineering’s Editor-in-Chief, Lynnette Reese, sat down with KnowThings, an aspiring smart-startup accelerator project within CA Technologies that created a tool for IoT developers to realistically test IoT applications, from small to very large scale, by leveraging machine learning. They currently call it the Self-Learning IoT Virtualizer. KnowThings’ CEO, Anand Kameswaran, talked with Reese about its mission to make realistic IoT simulation and testing effective and easy, including cloud interaction, internet foibles, massive numbers of IoT sensors and connections, and IoT chaos in general.

Lynnette Reese (LR) Embedded Systems Engineering: You’re the CEO of a very busy start-up; I am glad you made the time for this interview.  

Anand Kameswaran, KnowThings

Anand Kameswaran, (AK) KnowThings: You’re welcome. 

LR: I understand that KnowThings has a tool that makes testing the whole integrated IoT/cloud/network arrangement much easier for developers. You call the KnowThings tool an IoT Virtualizer. Can you briefly explain how the tool works?

AK:  It’s an IoT simulator that creates a virtual device model, so we like to call it an IoT Virtualizer. In short, it can mimic real device system interactions accurately within minutes. It’s a specific type of simulation that interacts at the network layer, using patented self-learning or machine learning algorithms to simulate up to hundreds of thousands of individual data sensors, device inputs, and their interaction with the cloud. Think of it as a “no surprises” test harness for an entire IoT network, even if that IoT system is very, very large.

It allows you to test sections as you develop, try out new tweaks to see how they affect everything else, and test your final revisions so the system is as good as it can be before you send it out into the real world.


Figure 1: KnowThings has a three-step workflow that a) captures device interactions via packet capture (PCAP), b) models the captured traffic, and c) plays back the adaptive virtual device (AVD). (Source: KnowThings)
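The capture-and-playback halves of that three-step workflow can be sketched in a few lines. This is an illustration of the concept only, not the KnowThings API; the modeling step in the middle, where the machine learning generalizes beyond the recorded traffic, is the part a toy version cannot show.

```python
import json

recorded = []  # step (a): a stand-in for PCAP capture

def capture(message: dict):
    """Record an observed device message as it crosses the network layer."""
    recorded.append(json.dumps(message))

def replay():
    """Step (c): play the traffic back, as a virtual device would."""
    return [json.loads(m) for m in recorded]

capture({"device": "thermo-1", "temp_c": 21.4})
capture({"device": "thermo-1", "temp_c": 21.6})
print(replay()[0]["temp_c"])  # 21.4
```

A pure record-and-replay harness like this can only repeat what it saw; the adaptive virtual devices described below go further by generating plausible traffic the capture never contained.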

 LR: So, it allows you to experience the larger picture of thousands of IoT devices on the internet, interacting with each other and the cloud? But without having to have the actual IoT devices online yet?

AK: Yes. There are individual bugs that occur at the device level or with the application that runs analytics, processing, etc., but there are also issues that can develop simply based on the number of devices you’re dealing with. When you have thousands of devices collecting data that coalesce into valuable trends and knowledge crucial to good decision-making, there’s a level of collaboration that has to be worked out, as the sum of thousands of IoT parts creates a whole different beast.

In one case, we were working with a company doing shipping container monitoring. And their use case is to have 50,000 containers on a ship, with all containers communicating with a single, central node. When we were working with the prototype virtualizer, we were already engaged with customers who were trying to accomplish goals at that kind of scale. We are still working with customers today that are testing at large scale across several industries, whether it’s supply chain and transportation, smart agriculture in monitoring fairly large fields, and so on. That’s one of the reasons why we are interested in those who want to get into our early adopter program, our beta test program, so we can expose the IoT Virtualizer to as many unique industry problems as we can. 

LR: Can you tell us more about the prototype tool and how it’s coming along?

AK: IoT is a really big space, covering many industries. We have a new community edition preview on our website, and it’s free to download today.

LR: Why are you giving away this incredible tool for free?

AK: We value getting direct feedback from customers and shaping the direction from that. There is still a lot to be learned. We are bringing together people from the embedded space, cloud networking, and other areas; they’re all coming together, so the best practices going forward form the best development tool we can provide. 

LR: Who would the tool benefit most, and why?

AK: Any IoT developer or solution architect or tester working with IoT applications who wants to test their solutions to a large scale quickly, cheaply, and thoroughly. It can improve time to market, reduce labor costs, and reduce the confusion and frustration that can occur as an engineer straddles both embedded hardware and network cloud integration to implement a viable system in the real world with as few surprises as possible.

LR: The Virtualizer is a new idea, then? No one else offers this?

AK: Though device virtualization exists today through a few vendors, usage of machine learning to generate realistic data scenarios is something unique about our solution. The community edition pre-release is the preview of our free edition that will always remain free for customers to try before they purchase our commercial product. It runs on a desktop or even the Raspberry Pi Zero, but we are planning to release a cloud-based one soon. Those that are interested in working closely with us can apply to participate in our early adopter program.

LR: How did you get the idea for this tool?

AK: It started a few years ago as an idea to borrow techniques from work CA Technologies was doing on service virtualization based on genomic sequencing. We’ve been working on the underlying machine learning with Swinburne University of Technology in Australia as part of a partnership that predates KnowThings. We found that we could take the algorithms we use for genomic sequencing and apply them to learning computer messages. Sequencing a bunch of genes is actually not a whole lot different from sequencing bytes to understand what that information is telling us.

There is a type of machine learning associated with genome sequencing, which is a learning algorithm and data mining method that analyzes recorded samples. After this was successfully applied in a service virtualization solution, we felt a similar approach can be taken in the IoT space for the KnowThings solution. We flipped that machine learning to automatically create a simulation of IoT devices with individual, asynchronous behaviors represented at different nodes, or locations on the network. It came out of a collaboration with Computer Associates’ strategic research in continuing to advance the underlying algorithm for the IoT Virtualizer. IoT presents a different type of data space versus genome sequencing, so we can make assumptions that let us take some shortcuts outside of the original genome sequencing algorithm and end up with a very efficient algorithm for IoT simulation.

After several years in research, KnowThings is now on the third generation of the technology. Indeed, some of the previous versions were run in service virtualization solutions with real customers on high-performance computers, dealing with hundreds of thousands of transactions per second. So, we have a history of real-life testing already, and we know that the code and the simulation can successfully accommodate a huge scale simulation.

KnowThings has some close early adopters and is working with customers that need an environment at that sort of scale.

LR: What kind of applications would typically require this sort of scale?

AK: The verticals that KnowThings is currently working with for existing customers include smart agriculture, smart transportation, and facets of retail that include IoT. Smart ag would include hydroponics farms. The Virtualizer for smart transportation would help with the operations and logistics side. And an IoT retail channel might include smartphone tracking via Bluetooth beacons to establish behaviors for very targeted marketing through customer smartphones that act as IoT devices through, say, a coupon app that is also tracking customer behavior to some degree. It’s one thing to test a digital coupon-to-smartphone interaction with one or two participating smartphones, but what happens on Black Friday? KnowThings’ product helps developers honestly answer the question, “What could go wrong?”

The KnowThings IoT Virtualizer would work well for any IoT application that needs to test on a scale that’s too large to simulate by one’s self. It can save time, for one thing.

LR: How can developers get their hands on this tool?

AK: We are in the early adopter stage right now and offering a role in beta testing. There’s an opportunity for us to partner with those customers that are part of the early adoption program. Not only will partners shape what everything should look like, but they will also help in developing best practices in a very challenging development environment. We want to know about the real challenges IoT is facing and concentrate on solving the problems that IoT developers care about.

Anyone interested in trying it out and contributing suggestions to improve the tool can download the community edition pre-release or sign up for the early adopter program on the KnowThings site. Commercial product launch is in mid-summer.

For more information, go to the Self-Learning IoT Virtualizer FAQ online.


Part II of this story can be found at:

Lynnette Reese is Editor-in-Chief, Embedded Intel Solutions and Embedded Systems Engineering, and has been working in various roles as an electrical engineer for over two decades.



Industrial IoT: How Smart Alerts and Sensors Add Value to Electric Motors

Thursday, April 12th, 2018

Considering electric motors in industrial environments and the ways the IoT produces some real value, perhaps in ways you hadn’t considered.

I’m often asked about the value of the IoT—sometimes directly, but often indirectly, as in “How can a deluge of data create value?”

Electric Motors: The Workhorses of Industrial Life
Electric motors come in all sizes, from very small to very large. They usually run on main power, but sometimes on batteries, like in electric cars. We all have many electric motors in our homes—in our vacuum cleaners, fridges, freezers, garage door openers. And, of course, many toys have miniature electric motors, like the locomotives in model trains.

Figure 1: The author explains why factoring in the IoT can cause condition-based maintenance to be seen as a better option than either preventive or run-to-failure maintenance.

Factories are also equipped with many electric motors used for all kinds of jobs: lifting, pressing, pumping, sucking or drying—basically everything that can be done with motion. Electric motors are the workhorses of industry today. They’re also used in areas that are too dusty, dangerous, or difficult to reach by human effort. In short, modern industrial life doesn’t exist without the electric motor.

Maintenance, Maintenance, Maintenance
Electric motors are mechanical devices, so it’s no surprise that they go down occasionally. Statistics show a failure rate of seven percent per year; on average, an electric motor stops working once every 14 years. Not bad, you might think—but for a factory with a hundred electric motors, that means a motor goes down roughly every month or two. And keep in mind that one motor going down sometimes means a whole production line going down, which can become very expensive, very quickly. Now factor in the reality that motor failures can come with incredibly unfortunate timing, like just before that critical order has to be delivered.

To reduce unexpected downtime, factories employ maintenance crews. Maintenance of electric motors is an important part of their efforts, but it’s also expensive. There are several approaches to maintenance:

  • Preventive maintenance. Maintenance schedules are based on an estimate of how long the average electric motor runs. To be on the safe side and avoid complete motor failure, maintenance usually occurs too early (although occasionally too late), and well-functioning parts still in good condition may be replaced. The catch? There’s no guarantee that a new problem won’t occur shortly after maintenance takes place.
  • “Run-to-failure” maintenance. This approach waits to do maintenance until the machine stops working. This typically results in full motor replacement, because repairing a rundown electric motor on the spot usually isn’t simple.
  • Condition-based maintenance. Before electric motors go down, they generally start to show irregularities like noise, imbalance, drag, etc. In a condition-based approach, a maintenance specialist goes to every electric motor and “listens” to it with the appropriate tools, much like a doctor with a stethoscope. Depending on the environment, this may be an easy job or a difficult and even a dangerous one. And, of course, the doctor can’t be everywhere at once.

Despite its drawbacks, preventive maintenance is probably better and more cost-effective than the “run-to-failure” alternative—but condition-based may be a better option … especially when you bring in the IoT.

Condition-based Maintenance: Made Stronger with AI and IoT
With the IoT, every electric motor on a factory floor is equipped with one or more sensors that are connected (preferably wirelessly) to a control database that continuously collects data about the motors. The control database can use artificial intelligence (AI) to learn the normal behavior of every motor and then, after a typically short learning period, generate immediate alerts when deviations from that normal occur. In other words, the IoT combined with AI not only sees problems coming, it continuously scans for them.

Figure 2: The true value of the IoT: machines with connected sensors for maintenance

Keep in mind that this control database doesn’t need to be programmed. It can simply be fed with data, from which it learns “automatically” what is normal and what is an exception. When an exception (i.e., a problem) occurs, it sends an immediate alert, which in many cases avoids total motor failure and replacement. This kind of smart alert also allows the treatment to match the problem at the moment it starts to manifest, rather than relying on general maintenance that may come too early, too late, or miss the pending failure completely. Depending on the severity of the problem and alert, the motor’s downtime can even be planned to minimize disruption to operations.
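The learn-then-alert idea can be sketched in a few lines. A minimal version treats the mean and standard deviation of an initial learning period as the learned “normal” and flags readings that stray too far from it; the sample values, threshold, and function names below are illustrative assumptions, not taken from any particular product:

```python
from statistics import mean, stdev

def build_baseline(training_samples):
    """Learn 'normal' from an initial period of sensor readings."""
    return mean(training_samples), stdev(training_samples)

def check_reading(value, baseline, n_sigmas=4.0):
    """Return True if the reading deviates enough to warrant an alert."""
    mu, sigma = baseline
    return abs(value - mu) > n_sigmas * sigma

# Vibration-level readings (arbitrary units) from a motor's learning period.
training = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98, 1.04]
baseline = build_baseline(training)

print(check_reading(1.03, baseline))  # within normal variation
print(check_reading(2.5, baseline))   # well outside normal -> alert
```

A production system would use richer models and multiple sensor channels, but the principle is the same: no explicit programming of failure modes, just a learned baseline and deviation alerts.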

Finally, this kind of sensor-based data collection is far more precise and thorough than anything humans could achieve. A slow deterioration of the quality of any given electric motor will continue undetected by human eyes and ears until a serious problem develops or failure occurs, but the IoT will notice even the smallest shifts in normal performance over a longer period of time.

The True Value of the IoT: Making Better Decisions Faster
We think we live in a modern world, but we actually waste a lot of resources and money by making the wrong decisions and/or making decisions too slowly. The promise of the IoT is that we can now collect enough data—cost-effective data that already exists, but that we never captured. And we can capture this data continuously, and quite effortlessly, in enormous volumes. With AI, we can learn from it to make better decisions, faster.

Back to the original question, then. What value does the IoT bring? It enables people to make better decisions, faster.

Cees Links was the founder and CEO of GreenPeak Technologies, now part of Qorvo. Under his leadership, the first wireless LANs were developed, ultimately becoming household technology integrated into PCs and notebooks. He also pioneered the development of access points, home networking routers, and hotspot base stations. He was involved in the establishment of the IEEE 802.11 standardization committee and the Wi-Fi Alliance, and was instrumental in establishing the IEEE 802.15 standardization committee, which became the basis for ZigBee® sense-and-control networking. Since GreenPeak was acquired by Qorvo, Cees has become the General Manager of the Wireless Connectivity Business Unit at Qorvo. He was recently recognized as a Wi-Fi pioneer with the Golden Mousetrap Lifetime Achievement award.


E-Paper Displays Mature

Monday, March 26th, 2018

How innovative solutions are taking E-paper displays from niche to mainstream IoT applications.

Adding to the existing strengths of Electronic Paper Displays (EPDs)—ultra-low power consumption, superb readability, and compact size—are several breakthrough enhancements, including new colors, faster display updating, even lower power usage, and a much wider operating temperature range.

Integrating these next generation EPDs into IoT devices can undoubtedly improve the functionality and lower the cost of current IoT systems. But more exciting still, these new EPDs also have the potential to enable fresh IoT applications that will give early adopters in the development community an opportunity to pioneer entirely new markets.

Why Electronic Paper is Already Ideal for IoT Applications
To understand why EPDs are poised to revolutionize IoT devices and make possible wholly new IoT applications, it is helpful to first understand current generation EPD technology’s key advantages over older flat panel display technologies, such as LCDs and OLEDs.

As the name suggests, an Electronic Paper Display displays images and text that are easily readable in natural light, just like printed ink on a sheet of paper. In this, an EPD is fundamentally unlike other display technologies, which all require their own internal luminance source—one that is power hungry, bulky, complex to design and manufacture, usually impractical to maintain, and prone to defects including uneven brightness, burn in, gradual dimming, and failure.

The EPD technology used by Pervasive Displays creates images from hundreds of minute charged ink particles within each of the tiny capsules that form each pixel. By varying the electrical field across the capsule, ink particles of the desired color are moved to the top surface of the paper, instantly changing the pixel’s color. As the particles are solid ink-like pigments, they create a clear and sharp image that looks like ink on paper. Users find the EPD graphics and text are not only more quickly read and understood, but are also more visually pleasing, and reduce eye strain, because they so precisely mimic the appearance of traditional printing and writing technologies that have been used for thousands of years.

For the IoT, a slim, compact, high-contrast EPD which is clearly visible in natural light is a huge boon. Such a display requires far less power than other technologies and is visible in a wide range of lighting conditions, from dim interior lighting, to bright sunlight that makes other displays painful or impossible to read. In addition, EPDs provide a very wide viewing angle and they help users to read and comprehend critical information without delay.

Figure 1: A two-inch EPD consumes considerably less power compared to a two-inch TFT LCD when updated several times a day—such as for IoT applications.

An EPD shares another similarity with ink on paper: it is bi-stable. Energy is only consumed when the image is being changed. On the other hand, display technologies that are not bi-stable constantly drain power to refresh and illuminate the image, whether it changes or not. For IoT applications, which often display static images and text for hours on end, and may rely solely on battery or environmental power, this is yet another huge energy saver, adding to the power saved by not requiring a constant internal light source. EPDs are such frugal energy users that some can provide an updating display that is driven and maintained simply by the residual energy available when a battery-free RFID tag is scanned.

The zero-power static display capability of electronic paper also frees users from the inconvenience of having to switch on a battery-powered display every time they need to briefly check the device status. Instead, the device condition is always instantly readable at a glance, minimizing unnecessary energy drain. For a typical IoT device with a 2-inch display that may only be updated a few times per day, a traditional LCD will consume well over 250 times more power than an EPD module. By slashing the energy consumption of the display—one of the most power-hungry components—to a minimum, IoT devices can operate in the field, perhaps with zero maintenance, for years. In the same situation, a constant LCD display could deplete its battery in a matter of days.
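The battery-life arithmetic behind that comparison can be sketched as follows. The battery capacity and per-update energy are illustrative assumptions; only the roughly 250x ratio between the two display types comes from the comparison above:

```python
# Rough battery-life comparison for a small display updated a few times a day.
# The capacity and per-update figures are illustrative assumptions; only the
# ~250x EPD-vs-LCD ratio comes from the article's comparison.
BATTERY_MWH = 690        # ~230 mAh coin cell at 3 V
EPD_UPDATE_MWH = 0.08    # energy per e-paper update (assumed)
UPDATES_PER_DAY = 4

# Bi-stable EPD: energy is spent only on updates, nothing on standby.
epd_daily_mwh = EPD_UPDATE_MWH * UPDATES_PER_DAY
# Non-bi-stable LCD: constant refresh drain, ~250x the EPD's budget.
lcd_daily_mwh = epd_daily_mwh * 250

print(f"EPD: ~{BATTERY_MWH / epd_daily_mwh / 365:.1f} years on one coin cell")
print(f"LCD: ~{BATTERY_MWH / lcd_daily_mwh:.1f} days on the same cell")
```

Even with generous assumptions for the LCD, the bi-stable display's years-versus-days advantage dominates any realistic parameter choice.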

However, while the EPD’s crystal-clear display, ultra-low energy use, and slim size seem almost tailor-made for the IoT, certain limitations have until recently prevented full use of EPDs in some of the most promising IoT markets. Today, several new EPD technologies are set to sweep those barriers aside.

EPDs are already ideal for a wide range of IoT applications, but the latest electronic paper technology enhancements and innovations are about to expand EPD reach and usability much further—bringing the IoT to new environments and new markets.

Wider Temperature Range Extends Global Markets and Creates New Applications
Many IoT applications require devices to be usable in the field, often outdoors, and in extreme temperature conditions. Early generations of EPD technology had a narrow operational temperature range. This limited their deployment in some IoT applications or required additional hardware to stabilize the temperature.

Fortunately, recent innovations have dramatically extended EPD operating temperature range. Today, EPD modules can operate from -20 °C to +40 °C. This much wider temperature range makes an unmodified EPD-equipped IoT device usable in far more global climate environments, all-year round, and also in high- and low-temperature facilities. The temperature range extension has been achieved largely by improving the sequence and timing of the display’s driving waveforms.

Figure 2: A 1.6-inch e-paper display with weather information

With an extended operating temperature, the potential new applications opening up to EPD-equipped IoT products are too numerous to list. They include industrial, logistics, transportation, and automotive uses—for example, cold-chain temperature logging and RFID tags, as well as many similar applications in outdoor and harsh environments.

Faster Refresh Rates for More Timely, Dynamic Information
Older EPDs had relatively slow update times of a second or more, possibly delaying operator response to new information. This made the screens less practical when rapid display changes were required, for example in sensing and monitoring applications, for fast-changing data, and for animation.

This limitation existed mainly because the entire screen had to be cleared and redrawn to make even a small change. However, by updating only the section of the display that has changed, the latest EPDs can now refresh important data with no significant delay. These partial updates achieve refresh speeds of 300-600 ms—a four-fold improvement. In addition, partial updates use even less power than a full-screen refresh, further reducing the EPD’s already very low energy consumption.

Figure 3: The partial update process only updates the information on the screen which needs to change, such as the room temperature and energy usage information.

In brief, partial updates are performed by comparing the previous image and the new image to obtain a delta image, which is then written to the EPD. Because of the physical characteristics of the EPD’s ink particles, the waveform used to program the delta image into the display is adjusted based on ambient temperature. There is no limit to the size of screen that partial updates can work on, although larger screens require more RAM and faster CPUs to drive the waveforms properly.
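In code terms, the delta step can be sketched like this, treating frame buffers as flat lists of pixel values (the function name and toy buffers are illustrative, not a real EPD driver API):

```python
def delta_image(previous, new):
    """Mark only the pixels that changed; None means 'leave as-is'."""
    return [n if n != p else None for p, n in zip(previous, new)]

# 1 = black, 0 = white in this toy frame buffer.
prev_frame = [0, 0, 1, 1, 0, 1]
new_frame  = [0, 1, 1, 0, 0, 1]

delta = delta_image(prev_frame, new_frame)
changed = sum(pixel is not None for pixel in delta)
print(delta)    # [None, 1, None, 0, None, None]
print(changed)  # only 2 of 6 pixels need to be driven
```

Only the non-`None` entries are programmed into the display, which is why a partial update is both faster and cheaper in energy than a full-screen refresh.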

Partial updates do have some limitations. Numerous partial updates without a full-screen refresh can result in ghosting artifacts, especially for black-to-white pixel transitions, so these transitions should be minimized. On three-color screens, partial updates work on all three colors; however, because the third color (red or yellow) updates relatively slowly, taking a few seconds, partial updates generally make sense only for the black and white pixels in the image.

Moreover, unlike earlier versions of partial update technology, the latest EPD modules do not require additional dedicated electronics for partial updating, thereby providing potential for more reductions in display module cost, complexity, size, and power consumption.

In general, just like the other recent innovations discussed here, partial update technology opens up new applications and markets for EPDs. Partial update technology combines all the power consumption and readability advantages of an EPD with a responsive, rapidly updated display.

More Colors: Attractive, Informative, Safer
Moving beyond early EPDs that offered only monochrome black-and-white displays, the latest EPDs from Pervasive Displays provide three colors: black, white, and red, and most recently, black, white, and yellow. For retail applications, these vivid, eye-catching colors greatly enhance the attention-grabbing power of pricing, signage, and product displays. This allows retailers to draw customers’ attention to special offers, deals, or conditions—improving stock throughput, saving staff and customers’ time, and increasing customer satisfaction.

Figure 4: E-paper displays from Pervasive Displays are now available in black, white, and yellow as well as black, white, and red.

For industrial and monitoring applications, bright colors are perfect for instantly bringing attention to critical data—such as warnings or sensor measurements that are outside of nominal range. Simply adding color hints can greatly enhance efficiency and safety, as well as reduce operator fatigue.

In addition, with EPDs providing sharp display resolutions of up to 200+ DPI, these additional colors provide more options for dithering (displaying alternate adjacent pixels in different colors) to generate new shades beyond the standard three, a strong tool for creating attractive retail displays.
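Dithering of this kind can be sketched with a classic 2x2 ordered (Bayer) pattern; this is a standard graphics technique shown here for a black/white palette, not code from Pervasive Displays:

```python
# 2x2 Bayer ordered dithering: approximate a gray level on a black/white EPD
# by alternating adjacent pixels. The thresholds are the standard normalized
# 2x2 Bayer matrix; this is a textbook technique, not a vendor API.
BAYER_2X2 = [[0.25, 0.75],
             [1.00, 0.50]]

def dither(gray, width, height):
    """gray in [0, 1]; returns rows of 'B' (black) / 'W' (white) pixels."""
    return [
        ["W" if gray > BAYER_2X2[y % 2][x % 2] else "B" for x in range(width)]
        for y in range(height)
    ]

# At 200+ DPI, this alternating pattern reads to the eye as a uniform gray.
for row in dither(0.6, 8, 4):
    print("".join(row))
```

The same thresholding idea extends to three-color panels by dithering between, say, red and white to produce intermediate pink shades.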

Rethinking the EPD: A New Generation of Display
Adding all these improvements in refresh rate, power consumption, operating temperature, and color to EPDs demands a rethink of the EPD’s role in the IoT. In a sense, these EPDs are almost a new class of display technology. The new EPD can now replace LCDs and OLEDs in applications that require features like responsive display updates and color highlighting, and it can be used across a huge area of the globe, from the arctic to the equator, and in environments from the heat of a desert oil well or an iron foundry to the chill of a medical sample storage area or a refrigerated goods truck.

And EPDs can achieve all this without compromising their unbeatable natural light visibility, and with power consumption lower than 0.5 percent of an equivalent LCD screen—offering the potential for five to seven years of battery life on a coin cell battery and practical operation driven by solar and other environmental power sources.

HD Lee is Founder, CTO and President, Pervasive Displays. Lee has over 17 years’ experience in research and development for advanced displays, specializing in TFT-LCD, OLED and e-paper. Lee was a co-founder of Jemitek, a mobile LCD design house, which was acquired by Innolux in 2006. In 2011 he co-founded Pervasive Displays Inc., a world-leading low power display company that focuses on industrial applications and has sold more than 10 million e-paper displays. The company was acquired by SES-imagotag in 2016. Lee holds 43 granted patents, with more pending. He has an MS and BS in Electronic Engineering from National Taiwan University.



Embedded World 2018

Thursday, March 8th, 2018

Embedded World broke its own records—again—this year, with over 1,000 exhibitor companies and more than 32,000 international visitors from the embedded community.

Nuremberg, Germany was a winter wonderland for this year’s Embedded World, as freezing temperatures gripped the Nuremberg Messe showground. Inside the six halls of the world’s biggest embedded industry event, however, was a hotbed of IoT, automation, automotive, communications, and networking innovation.

Figure 1: Nuremberg Messe hosted the 16th Embedded World from February 27 to March 1, 2018.

All types and sizes of modules were on display, illustrating the diversity of choice available in the market today. One of the most interesting was Microchip’s System on Module (SoM) for industrial-grade Linux designs. The ATSAMA5D27-SOM1 (Figure 2) is designed to remove the complexity of developing an industrial system based on a microprocessor running Linux® OS. Lucio di Jasio, Business Development Manager, Europe, at Microchip, explains that the 40 x 40 mm board will help engineers with PCB layout in these applications. It carries the company’s ATSAMA5D27C-D1G-CU System in Package (SiP), which uses the SAMA5D2 MPU. The small form factor integrates power management, non-volatile boot memory, an Ethernet PHY, and high-speed DDR2 memory, so it can be used to develop a Linux-based system or serve as a reference design. Schematics, design, and Gerber files are available online free of charge.

Figure 2: The ATSAMA5D27-SOM1 System on Module was announced by Microchip.

For security, the SAMA5D2 family has integrated Arm TrustZone® and capabilities for tamper detection, secure data and program storage, hardware encryption engine and secure boot. The SoM also contains Microchip’s QSPI NOR Flash memory, a Power Management Integrated Circuit (PMIC), an Ethernet PHY and serial EEPROM with a Media Access Control (MAC) address.

There is a choice of three DDR2 memory sizes for the SiP: 128 Mbit, 512 Mbit, and 1 Gbit, all optimized for bare metal, Real-Time Operating Systems (RTOS), and Linux OS. All of Microchip’s Linux development code for the SiP and SOM is available to the Linux community.

Customers can transition from the SOM to the SiP or the MPU, depending on the needs of the design, adds di Jasio.

The company also announced two new microcontroller families, one for the PIC range and one for the megaAVR series. The PIC16F18446 microcontrollers are suitable for use in sensor nodes, while the ATmega4809 is the first megaAVR device to include Core Independent Peripherals (CIPs), which execute tasks in hardware instead of through software, decreasing the amount of code required and speeding time to market.

Graphics Performance
A COM Express Type 6 module from congatec shows how the company is wasting no time in exploiting the prowess of AMD’s latest Ryzen™ processor. The conga-TR4 (Figure 3) is based on neighboring exhibitor AMD’s Ryzen Embedded V1000 processors.

Figure 3: congatec has based its conga-TR4 COM Express Type 6 module on AMD’s high-performance Ryzen Embedded V1000 processor.

The company has identified embedded computing systems that need the graphics performance of the Ryzen processor for medical imaging, broadcast, infotainment and gambling, digital signage, surveillance systems, optical quality control in automation and 3D simulators.

The Ryzen Embedded V1000 was launched days before Embedded World, together with the EPYC™ Embedded 3000 processor, and the two made up the ‘Zen’ zone in the company’s booth as they usher in a new age of high-performance embedded processing, explains Alisha Perkins, Embedded Marketing Manager at AMD.

Focusing on the Embedded V1000, Perkins explained that it targets medical imaging, industrial systems, digital gaming, and thin clients at the edge of the network. Compared with earlier generations it doubles performance, offers up to three times the GPU performance of any processor currently available, delivers up to 46% more multi-threaded performance than competing alternatives and, crucially for mobile or portable applications, is 36% smaller than its competitors.

AMD couples its Accelerated Processing Unit (APU) with Zen Central Processing Units (CPUs) and Vega Graphics Processing Units (GPUs) on a single die, offering up to four CPU cores/eight threads and up to 11 GPU compute units for 3.6 TFLOPS of processing throughput. On the stand were examples of medical imaging stations that could be wheeled between wards, a dashboard for remote monitoring of utilities, and an automated beer-bottle visual inspection station, all using the high-performance graphics and computing power of the processor.

At congatec, the COM Express Type 6 module was also cited as being suitable for smart robotics and autonomous vehicles, where its Thermal Design Power (TDP) is scalable from 12 to 54W to optimize size, weight, power and costs (SWaP-C) at high graphics performance, says Christian Eder, Director of Marketing, congatec.

Industrial Automation
For the smart, connected factory, Texas Instruments introduced its latest SimpleLink™ microcontroller (MCU) devices, with concurrent multi-standard and multi-band connectivity for Thread, Zigbee®, Bluetooth® 5, and Sub-1 GHz. Designers can reuse code across the company’s Arm® Cortex®-M4-based MCUs, from sensor networks to the cloud.

The additions announced in Nuremberg expand the SimpleLink MCU platform to support connectivity protocols and standards for the 2.4 GHz and Sub-1 GHz bands, including the latest Thread and Zigbee standards, Bluetooth low energy, IEEE 802.15.4g, and Wireless M-Bus. The multi-band CC1352P wireless MCU, for example, has an integrated power amplifier to extend range for metering and building automation applications while maintaining a low transmit current of 60 mA.

SimpleLink MSP432P4 MCUs have an integrated 16-bit precision ADC and can host multiple wireless connectivity stacks and a 320-segment Liquid Crystal Display with extended temperature range for industrial applications.

Security is addressed with new hardware accelerators in the CC13x2 and CC26x2 wireless MCUs, for AES-128/256, SHA2-512, Elliptic Curve Cryptography (ECC), RSA-2048 and true random number generator (TRNG) encryption protocols.

Code compatibility: These new products are all supported by the SimpleLink software development kit (SDK) and provide a unified framework for platform extension through 100 percent application code reuse.

Still with automation, ADLINK’s Jim Liu, CEO, has his sights set on Artificial Intelligence (AI). “We have gone from being a pure embedded CPU vendor to an AI engine vendor,” he says, introducing autonomous mobile robotics and ‘AI at the Edge’ solutions using NVIDIA technology.

Figure 4: Industrial vision systems from ADLINK use NVIDIA technology for AI and deep learning.

Its embedded systems and connectivity couple with NVIDIA’s AI and deep learning technologies to target compute-intensive applications such as robotics, autonomous vehicles, and healthcare. Demonstrations included an autonomous mobile robot platform using ROS 2, an open-source software stack designed for factory-of-the-future connected solutions; a smart camera that can scan barcodes on irregularly shaped objects and differentiate between them (Figure 4); and a system that calculated vehicle flow to improve traffic management in a smart city.

Arm was also moving to the edge of computing, with machine learning and a display that fascinated many—a robotic Rubik’s Cube solver (Figure 5). John Ronco, Vice President and General Manager, Embedded and Auto Line of Business at Arm, sounds a cautious note: “Inference at the edge of the cloud has network, power and latency issues; there are also privacy issues,” he says. Ahead of Embedded World, the company announced Project Trillium, promoting its Machine Learning (ML) technology, which uses an ML processor capable of over 4.6 trillion operations per second (TOPS) at a power-conserving efficiency of 3 TOPS/W, plus an object detection processor.

Figure 5: Arm was demonstrating its machine learning with the classic puzzle, the Rubik’s Cube.

Embedded Tools
Swedish embedded software tools and services company IAR Systems shared news of its many recent partnerships. The first, with Data I/O, bridges the development-to-manufacturing gap by integrating IAR’s software with Data I/O’s data programming and secure provisioning, carrying microcontroller firmware from development through to manufacture. The two share many customers within the automotive, IoT, medical, wireless, consumer electronics, and industrial controls markets, although at separate stages of the design and manufacturing process. To address the growing complexity of designs and the security concerns in the embedded market, explains Tora Fridholm, Product Marketing Manager at IAR Systems, the two companies have established a roadmap, based on customer requirements, for a workflow in which resources such as images, configuration files, and documents can be securely shared. Customers thus enjoy an efficient design-to-manufacturing workflow that reduces time to market.

For adding device-specific security credentials, such as keys and certificates, both companies are committed to integrating the appropriate processes and tools.

Another announcement was with Renesas Electronics, whereby its Synergy™ platform can use the Advanced IAR C/C++ Compiler™ in the e² Studio Integrated Development Environment (IDE) to reduce application code size, allowing more features to be added to Synergy microcontrollers. There is also the benefit of the compiler’s execution speed, which allows the microcontroller to remain longer in low-power mode to conserve battery life.

Synergy microcontrollers are used in IoT devices to monitor the environment in buildings and industrial automation, energy management, and healthcare equipment.

Embedded Boards
An essential part of embedded design is board technology, and this year’s show did not disappoint. WinSystems was highlighting two of its latest single board computers: the PX1-C415 (Figure 6), which manages IoT nodes, and the SBC35-C427, based on the Intel Atom® E3900 processor series.

The first uses Microsoft® Windows® 10 IoT Core OS to support IoT development; the second is designed for industrial IoT, with an onboard ADC input, General Purpose Input/Output (GPIO), dual Ethernet, two USB 3.0 and four USB 2.0 channels. It can be used in transportation, energy, industrial control, digital signage, and industrial IoT applications.

The SBC supports up to three video displays via DisplayPort and LVDS interfaces. It can be expanded using the Mini-PCIe socket, an M.2 connector (E-Key), and the company’s own Modular I/O 80 interface.

Figure 6: WinSystems offers one of the first boards to run on IoT Core OS.

A COM Express Type 7 module was among the highlights of ConnectTech’s booth. The COM Express Type 7 + GPU Embedded System (Figure 7) can drive four independent displays or run as a headless processing system. It pairs Intel Xeon® D x86 processors with NVIDIA Quadro® and Tesla® GPUs in a 216 x 164 mm form factor, anticipating the needs of high-performance applications that require 10GbE and Gigabit Ethernet, USB 3.0 and USB 2.0, HDMI, SATA II, I2C, M.2, and miniPCIe for video encode/decode, GPGPU CUDA® processing, deep learning, and AI applications.

Figure 7: A COM Express Type 7 module by ConnectTech targets high-performance applications.

The company, an ecosystem partner for NVIDIA’s Jetson SoM, also showed its Orbitty Carrier and Cogswell Vision System, both based on NVIDIA’s Jetson TX1/TX2.

Caroline Hayes has been a journalist covering the electronics sector for more than 20 years. She has worked on several European titles, reporting on a variety of industries, including communications, broadcast and automotive.

