Testing IoT at Scale Using Realistic Data, Part II

Tuesday, May 15th, 2018

The IoT testing conversation continues: how do you comprehensively test a large IoT system, and not just the sum of its parts? KnowThings.io introduces a novel solution that performs quality testing on a massive scale using machine learning and dramatically reduces the engineering workload…

Editor’s Note: This is Part II of a two-part series on testing IoT on a massive scale. See Part I of Testing IoT at Scale Using Realistic Data.

The Internet of Things (IoT) is growing into big business, and with it comes masses of sensors, all gathering data for a more significant cause. One of the problems developers have, however, is how to realistically test the whole system, in all its potential chaotic glory. How do you road-test managing communications with thousands of IoT sensors to a cloud? And even if you manage to fake that kind of traffic, what nuances are you missing at that kind of scale?

Embedded Systems Engineering’s Editor-in-Chief, Lynnette Reese, sat down with KnowThings.io, an ambitious, smart startup accelerator project within CA Technologies that created a tool for IoT developers to realistically test IoT applications, from small to immense scale, by leveraging machine learning. They currently call it the Self-Learning IoT Virtualizer. KnowThings.io’s CEO, Anand Kameswaran, talked with ESE about its mission to make realistic IoT simulation and testing effective and easy, including cloud interaction, internet foibles, massive numbers of IoT sensors and connections, and IoT chaos in general.

Lynnette Reese (LR), Embedded Systems Engineering: Why not just create a script that fakes inputs from a large number of devices? More IoT just creates more traffic, right?

Anand Kameswaran (AK), KnowThings.io: That sounds good on the face of it, but IoT doesn’t work that way past the first layer. There’s much more to a live, inter-integrated IoT ecosystem than simulating sensors spinning out data. We wanted a way to test an entire IoT network, interacting with a cloud and potential latencies, against as much chaos as real-world scenarios can throw at an engineer who’s trying to make sure that an automated 10-plate juggling act scales to 10,000 plates, including a database, cloud, and multiple IP addresses.

Faking 10,000 sensors can be done manually using a computer to spit out data. However, not only does that approach take time and impose a learning curve, the data all comes from one source and does not vary naturally, as it would with a machine-learning algorithm built to simulate realistically. The ability to generate realistic data scenarios for our customers is one of our unique value propositions. This includes factoring in latency, environmental factors, and the correlation between them and the IoT data, as well as replicating real data that remains predictably accurate over an extended period. In addition, the ability to generate a large amount of data (for example, one year’s worth) within a brief period is very valuable for customers who want to use a huge quantity of data to qualify and make decisions on the predictive analytic systems under test.
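
KnowThings has not published how its machine learning varies the generated data, but the general idea of “naturally varying” synthetic sensor data, produced far faster than real time, can be sketched as trend plus cycles plus noise. A minimal illustration in Python follows; every constant here (sampling rate, amplitudes, noise level) is an assumption for illustration, not the Virtualizer’s method:

```python
import numpy as np

# Generate one simulated year of greenhouse temperature readings in well
# under a second of wall-clock time. A real generator would also learn the
# cycle shapes and correlations from captured traffic instead of hard-coding.
rng = np.random.default_rng(seed=42)

samples_per_day = 96                    # one reading every 15 minutes (assumed)
days = 365
t = np.arange(days * samples_per_day)

daily = 4.0 * np.sin(2 * np.pi * t / samples_per_day)               # day/night swing, deg C
seasonal = 10.0 * np.sin(2 * np.pi * t / (days * samples_per_day))  # annual swing
drift = 5e-6 * t                        # slow sensor drift
noise = rng.normal(0.0, 0.3, t.size)    # per-reading jitter

temperature = 18.0 + daily + seasonal + drift + noise
print(f"{temperature.size} readings spanning one simulated year")
```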

The IoT Virtualizer saves time and helps you find issues that you didn’t know you would or could have at scale. It allows IoT developers to build robust code by testing from the ground up and helps testers find out what they don’t know. Someone said, “There are things we know that we know. And there are known unknowns, which just means that there are things that we now know we don’t know. But there are also unknown unknowns.” And this is what KnowThings provides, a look into testing what we do not know that we don’t know.

LR: I think that was Donald Rumsfeld, talking about national security.

AK: Yes, that sounds right. But it’s true. You cannot test for what you don’t know could possibly happen at higher levels, once everything is online and interconnected. Abstractions aside, it’s just smart to test thoroughly, and simulation and machine learning are our core competencies.

LR: Speaking of security, how does the self-learning IoT Virtualizer help with security?

AK: We take security seriously and are working on that, with plans to add the ability to deal with encrypted data streams in our next major release. But first we had to get the tool working solidly for as many IoT industries and scenarios as possible. Another interesting angle for security is testing the whole system and how the entire system in action reacts to hacking attempts. If a node is overwhelmed, will the security you have in place work the same as it did when it handled only a couple of requests a second? Our tool creates realistic, dynamic scenarios in which to test security hypotheses. In addition, working with our customers, we found some creative ways to detect phone-home and back-door security exploits that we hope to market in the future.

LR: So, I’m hearing that it’s just another way of prototyping with a more realistic, real-world scenario simulation using machine learning.

AK: Large-scale mass data input is difficult to simulate realistically without the IoT Virtualizer, because we incorporated machine learning to replicate the underlying physical behavior. Building on that, the Virtualizer creates a simulation of what large IoT systems might experience in the real world. And this is how IoT developers can test their IoT systems realistically and thoroughly. Once the tool creates a template and gets a basic idea of the overall interactions, it fills in the gaps for you. That is where some of the time savings come from. With the manual way of doing simulations, you have to build every case that you want to test for, and that becomes very time-consuming. Our commercial version, due later this year, will allow the software to handle network latencies and congestion through edits to the model. The timing of device interactions can be adjusted to replicate devices functioning in different regions of the world, for example.

Figure 1: FunkNFresh Farm uses aquaponics. The fish live in the same water that is fed to hydroponically grown vegetables.

LR: I’m having a hard time visualizing what the Virtualizer is virtualizing, so to speak. Can you give me an example of an application where the Virtualizer would make a real difference for a developer or quality/test engineer?

AK: One example of a use for this first product is in farming or large-scale agriculture. There are so many sensors that are being deployed to monitor the moisture, temperature, humidity and various other factors. The data is collected and developers look for anomalies. But developers do not have the ability to set up 10,000 actual sensors and have them working as they would in the real world. That’s what we can offer.

LR: I recall researching for this interview and reading your blog on the hydroponics farm, FunkNFresh Farms. Can you expand on that IoT example?

AK: Sure, but it’s an aquaponics farm, not just hydroponics. Aquaponics is when you raise fish in a pond or tank. The fish live in the same water that is fed to hydroponically grown vegetables. The plants get nourishment from the fish waste and also purify the water, keeping the fish healthy. While high technology and instrumentation are not exactly required for aquaponics, this is my wife’s company, so I was recruited to make it all work, and of course I had to make the greenhouse profitable.

LR: So, you have personal IoT design and development experience, then?

AK: Oh, yes. In fact, I spent several years contributing to open source automation servers and other similar projects before the current generation of smart home solutions. As far as the greenhouse project, it was a lot of fun, and the farm is a successful project that’s selling produce. Aquaponics requires a controlled mini ecosystem but brings year-round crop production. The IoT part of it includes optimization of greenhouse operations through instrumentation, data collection, and automated actions. So, the greenhouse is an IoT device, which measures and reports on water temperature, ambient lighting, and circulating water pH.

Figure 2: The aquaponics farm has an air and a water pump. Both pumps are monitored simultaneously by a microphone.

LR: This is connected to the web?

AK: Yes, and before you point out that this is just automation and that IoT involves data analysis, I am working on that through audio analysis. I am putting a microphone in the greenhouse that picks up audio of both the water and air pumps. In a two-for-one status check, the system checks both pump run statuses at the same time by capturing the audio and doing spectrum analysis on the .wav files. I can determine if the pumps are running, and whether they are under stress or load. While this is very much a work in progress, I am happy that it’s moving in the right direction.
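
Kameswaran doesn’t describe his analysis code, but the core of a spectrum-based pump check is short: read the .wav file, take a magnitude spectrum, and compare the energy in each pump’s hum band against a calibrated threshold. A hedged sketch in Python; the file name, frequency bands, and threshold are made-up placeholders:

```python
import numpy as np
from scipy.io import wavfile

def pump_running(wav_path, band_hz, threshold):
    """Crude check: is there significant energy in the pump's hum band?"""
    rate, samples = wavfile.read(wav_path)
    if samples.ndim > 1:                      # fold stereo to mono
        samples = samples.mean(axis=1)
    spectrum = np.abs(np.fft.rfft(samples))   # magnitude spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    band = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    return spectrum[band].sum() > threshold

# Two-for-one check from a single microphone: assume the air pump and the
# water pump hum in different bands (bands and threshold are illustrative;
# they would be calibrated against recordings of known pump states).
air_ok = pump_running("greenhouse.wav", band_hz=(50, 150), threshold=1e6)
water_ok = pump_running("greenhouse.wav", band_hz=(300, 500), threshold=1e6)
```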

LR: Clever.

AK: Thanks. It is progressing well, and it’s a cheaper solution than individually monitoring each pump. The analysis is done on a remote computer. Over time we will have files that can flag not only a pump that’s shut off, but a pump that is about to fail. IoT not only alerts on status, it also allows me to do optimizations at lower cost, with fewer parts and less complexity than a straight automation route.

Figure 3: The .wav files from the microphone are sent via internet to a remote computer for an audio analysis to determine if a pump has failed (and if so, which one), and whether a pump is under an unexpected load.

LR: How did the Self-Learning IoT Virtualizer come into being?

AK: We started out with machine learning and genome sequencing on a massive scale. Fast-forward to three or four years later, and we are applying it to IoT devices, for example, smartwatches with heart monitors and such. We are training on the data that transmits back and forth, so when your smartwatch is telling your phone, “your pulse rate is now at 58 bpm,” we can learn quickly what that means and be able to reproduce that without an engineer having to serve as an interpreter. The machine learning does all of that interpretation for us. This means that if I am a developer, at the end of the day I don’t really care how those bytes are structured. I just want to run my tests, build my code, and configure the system. Machine learning makes it so that the very time-consuming tasks are made faster, for example, configuring the physical system or the simulator by replicating the scenarios.

LR: I can see how that would save time for engineers because they don’t have to stop to decode what’s going on. The Self-Learning IoT Virtualizer carries that load.

AK: Yes, and the engineer is relieved of the task of getting a physical IoT device environment set up to verify application logic, because the tool generates realistic data instead. By plugging in the Virtualizer, which utilizes machine learning, you can take what would be two to three weeks of painstaking work and accomplish it in about five minutes. You do not need to be an expert in every device or every piece of kit you’re playing with; you just need to know how they all come together as a whole. Now I can use a tool to create the realistic data that I need in about five minutes, which otherwise could have taken up to a month to produce and would need subsequent care and feeding throughout the life cycle of a product.

LR: What kind of QA testing have you already done?

AK: We’re on the third version of the tool and have tested with several real-world customers who have very large IoT networks. It took one of our customers about 80 hours to create their own simulation for their particular IoT scenario. KnowThings.io was able to create an adaptive virtual device for them using our Self-Learning IoT Virtualizer in just five minutes. We call them ‘adaptive’ virtual devices because they can learn to simulate behavior that they have not yet seen but that is still possible. Adaptive virtual devices are useful for testing because you aren’t forced to think of all the ways your device could behave. In the end you can test with a blend of what you actually observed, what was generated through the machine learning, and whatever else you want to add. This gives you far more complete coverage to test and build your solution.

LR: What kinds of clouds do you work with? Can the Virtualizer capture the quirks of the various clouds from different providers?

AK: The IoT Virtualizer is cloud-agnostic. It doesn’t matter what cloud you have chosen, because we interface deeper down, in the network layer.

LR: What IoT protocols can the IoT Virtualizer support?

AK: Our KnowThings product currently supports TCP/IP, REST, and CoAP over TCP, and we are working on integrating several other protocols, such as Zigbee, LoRa, Modbus, and Bluetooth. We are open to suggestions.

LR: IoT can be as simple as a group of sensors. Can I virtualize a group of sensors?

AK: Yes, you can virtualize sensors on a very large scale if you like.

LR: How can developers get their hands on this tool?

AK: We are in the early adopter stage right now and offering a role in beta testing. There’s an opportunity for us to partner with those customers that are part of the early adoption program. Not only will partners shape what everything should look like, but also help in developing best practices in a very challenging development environment. We want to know about the real challenges IoT is facing and concentrate on solving the problems that IoT developers care about.

Anyone interested in trying it out and contributing suggestions to improving the tool can download the community edition pre-release, or sign up for the early adopter program at https://knowthings.io/. Commercial product launch is in mid-summer.

For more information, go to the KnowThings.io Self-Learning IoT Virtualizer FAQ online.  


Lynnette Reese is Editor-in-Chief, Embedded Intel Solutions and Embedded Systems Engineering, and has been working in various roles as an electrical engineer for over two decades.


Testing IoT at Scale Using Realistic Data, Part I

Thursday, April 12th, 2018

IoT can involve vast numbers of sensors, devices, and gateways that connect to a cloud. How do you comprehensively test your whole IoT system and not just the sum of its parts? KnowThings.io introduces a novel solution to perform quality testing on a massive scale using machine learning and dramatically reduces the engineering workload…

Editor’s Note: Embedded Systems Engineering’s Editor-in-Chief, Lynnette Reese, sat down with KnowThings.io, an aspiring, smart startup accelerator project within CA Technologies that created a tool for IoT developers to realistically test IoT applications, from small to very large scale by leveraging machine learning. They currently call it the Self-learning IoT Virtualizer. KnowThings.io’s CEO, Anand Kameswaran, talked with Reese about its mission to make realistic IoT simulation and testing effective and easy, including cloud interaction, internet foibles, massive numbers of IoT sensors and connections, and IoT chaos in general.

Lynnette Reese (LR) Embedded Systems Engineering: You’re the CEO of a very busy start-up; I am glad you made the time for this interview.  

Anand Kameswaran, KnowThings

Anand Kameswaran (AK), KnowThings: You’re welcome.

LR: I understand that KnowThings.io has a tool that makes testing the whole integrated IoT/cloud/network arrangement much easier for developers. You call the KnowThings tool an IoT Virtualizer. Can you briefly explain how the tool works?

AK:  It’s an IoT simulator that creates a virtual device model, so we like to call it an IoT Virtualizer. In short, it can mimic real device system interactions accurately within minutes. It’s a specific type of simulation that interacts at the network layer, using patented self-learning or machine learning algorithms to simulate up to hundreds of thousands of individual data sensors, device inputs, and their interaction with the cloud. Think of it as a “no surprises” test harness for an entire IoT network, even if that IoT system is very, very large.

It allows you to test sections as you develop, try out new tweaks to see how they affect everything else, and test your final revisions so the system is as good as it can be before you send it out into the real world.


Figure 1: KnowThings has a three-step workflow that a) captures device interactions via packet capture (PCAP), b) models the captured traffic, and c) plays back the adaptive virtual device (AVD). (Source: KnowThings.io)
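
The capture step (a) is ordinary packet capture; steps (b) and (c), the modeling and playback, are KnowThings’ proprietary machine learning. As a hedged illustration of the kind of raw input the tool learns from, here is a short Python/scapy sketch that summarizes the device-to-server conversations in a PCAP file (the capture file name is a hypothetical placeholder):

```python
from scapy.all import rdpcap, IP, TCP

packets = rdpcap("device_capture.pcap")   # recorded device traffic

# Group TCP payloads by (source, destination, port) conversation.
conversations = {}
for pkt in packets:
    if IP in pkt and TCP in pkt:
        key = (pkt[IP].src, pkt[IP].dst, pkt[TCP].dport)
        conversations.setdefault(key, []).append(bytes(pkt[TCP].payload))

for (src, dst, port), payloads in conversations.items():
    print(f"{src} -> {dst}:{port}  {len(payloads)} messages")
```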

LR: So, it allows you to experience the larger picture of thousands of IoT devices on the internet, interacting with each other and the cloud? But without having to have the actual IoT devices online yet?

AK: Yes. There are individual bugs that occur at the device level or with the application that runs analytics, processing, etc., but there are also issues that develop simply based on the number of devices you’re dealing with. When you have thousands of devices collecting data that coalesce into valuable trends and knowledge crucial to good decision-making, there’s a level of collaboration that has to be worked out, because the sum of thousands of IoT parts creates a whole different beast.

In one case, we were working with a company doing shipping container monitoring. Their use case is to have 50,000 containers on a ship, with all containers communicating with a single, central node. When we were working with the prototype virtualizer, we were already engaged with customers who were trying to accomplish goals at that kind of scale. We are still working with customers today that are testing at large scale across several industries, whether it’s supply chain and transportation, smart agriculture monitoring fairly large fields, and so on. That’s one of the reasons we are interested in those who want to get into our early adopter program, our beta test program: it lets us expose the IoT Virtualizer to as many unique industry problems as we can.

LR: Can you tell us more about the prototype tool and how it’s coming along?

AK: IoT is a really big space, covering many industries. We have a new community edition preview, free to download today from our site, knowthings.io.

LR: Why are you giving away this incredible tool for free?

AK: We value getting direct feedback from customers and shaping the product’s direction from it. There is still a lot to be learned. We are bringing together people from the embedded space, cloud networking, and other areas, so that the best practices that emerge shape the best development tool we can provide.

LR: Who would the tool benefit most, and why?

AK: Any IoT developer, solution architect, or tester working with IoT applications who wants to test their solutions at large scale quickly, cheaply, and thoroughly. It can improve time to market, reduce labor costs, and reduce the confusion and frustration that can occur as an engineer straddles both embedded hardware and network cloud integration to implement a viable system in the real world with as few surprises as possible.

LR: The Virtualizer is a new idea, then? No one else offers this?

AK: Though device virtualization exists today through a few vendors, the use of machine learning to generate realistic data scenarios is unique to our solution. The community edition pre-release is the preview of our free edition, which will always remain free for customers to try before they purchase our commercial product. It runs on a desktop or even the Raspberry Pi Zero, and we are planning to release a cloud-based version soon. Those interested in working closely with us can apply to participate in our early adopter program.

LR: How did you get the idea for this tool?

AK: It started a few years ago as an idea to borrow techniques from work CA Technologies was doing on service virtualization based on genomic sequencing. We’ve been working on the underlying machine learning with Swinburne University of Technology, Australia, as part of a partnership that predates KnowThings. We found that we could take the algorithms we use for genomic sequencing and apply them to learning computer messages. Sequencing a bunch of genes is actually not a whole lot different from sequencing bytes when working towards understanding what the information is telling us.

There is a type of machine learning associated with genome sequencing: a learning algorithm and data mining method that analyzes recorded samples. After this was successfully applied in a service virtualization solution, we felt a similar approach could be taken in the IoT space for the KnowThings solution. We flipped that machine learning to automatically create a simulation of IoT devices with individual, asynchronous behaviors represented at different nodes, or locations on the network. It came out of a collaboration with Computer Associates’ strategic research, which continues to advance the underlying algorithm for the IoT Virtualizer. IoT presents a different type of data space than genome sequencing, so we can make assumptions that let us take some shortcuts outside of the original genome sequencing algorithm and end up with a very efficient algorithm for IoT simulation.

After several years in research, KnowThings is now on the third generation of the technology. Indeed, some of the previous versions ran in service virtualization solutions with real customers on high-performance computers, dealing with hundreds of thousands of transactions per second. So we have a history of real-life testing already, and we know that the code and the simulation can successfully accommodate huge-scale simulations.

KnowThings has some close early adopters and is working with customers that need an environment at that sort of scale.

LR: What kind of applications would typically require this sort of scale?

AK: The verticals that KnowThings is currently working with for existing customers include smart agriculture, smart transportation, and facets of retail that include IoT. Smart ag would include hydroponics farms. The Virtualizer for smart transportation would help with the operations and logistics side. And an IoT retail channel might include smartphone tracking via Bluetooth beacons to establish behaviors for very targeted marketing through customer smartphones, which act as IoT devices through, say, a coupon app that also tracks customer behavior to some degree. It’s one thing to test a digital coupon-to-smartphone interaction with one or two participating smartphones, but what happens on Black Friday? KnowThings.io’s product assists developers to honestly answer the question, “What could go wrong?”

The KnowThings IoT Virtualizer would work well for any IoT application that needs to test on a scale that’s too large to simulate by one’s self. It can save time, for one thing.

LR: How can developers get their hands on this tool?

AK: We are in the early adopter stage right now and offering a role in beta testing. There’s an opportunity for us to partner with those customers that are part of the early adoption program. Not only will partners shape what everything should look like, but they will also help in developing best practices in a very challenging development environment. We want to know about the real challenges IoT is facing and concentrate on solving the problems that IoT developers care about.

Anyone interested in trying it out and contributing suggestions to improving the tool can download the community edition pre-release or sign up for the early adopter program at https://knowthings.io/.  Commercial product launch is in mid-summer.

For more information, go to the KnowThings.io Self-Learning IoT Virtualizer FAQ online.


Lynnette Reese is Editor-in-Chief, Embedded Intel Solutions and Embedded Systems Engineering, and has been working in various roles as an electrical engineer for over two decades.


Industrial IoT: How Smart Alerts and Sensors Add Value to Electric Motors

Thursday, April 12th, 2018

A look at electric motors in industrial environments and the ways the IoT produces real value, perhaps in ways you hadn’t considered.

I’m often asked about the value of the IoT—sometimes directly, but often indirectly, as in “How can a deluge of data create value?”

Electric Motors: The Workhorses of Industrial Life
Electric motors come in all sizes, from very small to very large. They usually run on mains power, but sometimes on batteries, as in electric cars. We all have many electric motors in our homes—in our vacuum cleaners, fridges, freezers, and garage door openers. And, of course, many toys have miniature electric motors, like the locomotives in model trains.

Figure 1: The author explains why factoring in the IoT can cause condition-based maintenance to be seen as a better option than either preventive or run-to-failure maintenance.

Factories are also equipped with many electric motors used for all kinds of jobs: lifting, pressing, pumping, sucking or drying—basically everything that can be done with motion. Electric motors are the workhorses of industry today. They’re also used in areas that are too dusty, dangerous, or difficult to reach by human effort. In short, modern industrial life doesn’t exist without the electric motor.

Maintenance, Maintenance, Maintenance
Electric motors are mechanical devices, so it’s no surprise that they go down occasionally. Statistics show a failure rate of seven percent per year; on average, an electric motor stops working once every 14 years. Not bad, you might think—but for a factory with a hundred electric motors, that means roughly seven failures a year, a motor down every month or two. And keep in mind that one motor going down sometimes means a whole production line going down, which can become very expensive, very quickly. Now factor in the reality that motor failures can come with incredibly unfortunate timing, like just before that critical order has to be delivered.
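
The arithmetic is worth making explicit, using the article’s own figures:

```python
failure_rate = 0.07                  # failures per motor per year (7 percent)
fleet_size = 100

years_between_failures = 1 / failure_rate        # ~14.3 years for a single motor
failures_per_year = fleet_size * failure_rate    # ~7 failures across the fleet
weeks_between_failures = 52 / failures_per_year  # one failure every ~7.4 weeks

print(years_between_failures, failures_per_year, weeks_between_failures)
```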

To reduce unexpected downtime, factories employ maintenance crews. Maintenance of electric motors is an important part of their efforts, but it’s also expensive. There are several approaches to maintenance:

  • Preventive maintenance. Maintenance schedules are based on an estimate of how long the average electric motor runs. To be on the safe side and avoid complete motor failure, maintenance usually occurs too early (although occasionally too late), and well-functioning parts still in good condition may be replaced. The catch? There’s no guarantee that a new problem won’t occur shortly after maintenance takes place.
  • “Run-to-failure” maintenance. This approach waits to do maintenance until the machine stops working. This typically results in full motor replacement, because repairing a rundown electric motor on the spot usually isn’t simple.
  • Condition-based maintenance. Before electric motors go down, they generally start to show irregularities like noise, imbalance, drag, etc. In a condition-based approach, a maintenance specialist goes to every electric motor and “listens” to it with the appropriate tools, much like a doctor with a stethoscope. Depending on the environment, this may be an easy job or a difficult and even a dangerous one. And, of course, the doctor can’t be everywhere at once.

Despite its drawbacks, preventive maintenance is probably better and more cost-effective than the “run-to-failure” alternative—but condition-based may be a better option … especially when you bring in the IoT.

Condition-based Maintenance: Made Stronger with AI and IoT
With the IoT, every electric motor on a factory floor is equipped with one or multiple sensors that are connected (preferably wirelessly) to a control database that continuously collects data about the motors. The control database can use artificial intelligence (AI) to learn normal behavior for every motor and then, after a typically short period of learning, it can generate immediate alerts when deviations from that normal occur. In other words, the IoT combined with AI not only sees problems coming, it continuously scans for problems.

Figure 2: The true value of the IoT: machines with connected sensors for maintenance

Keep in mind that this control database doesn’t need to be programmed. It can simply be fed with data and then learns by itself “automatically” what is normal and what are exceptions. When an exception (i.e., a problem) occurs, it sends an immediate alert, which in many cases avoids total motor failure and replacement. This kind of smart alert also allows the treatment to match the problem at the moment it starts to manifest, rather than general maintenance that may be too early, too late, or miss the pending failure completely. Depending on the severity of the problem and alert, the motor’s downtime can even be planned to minimize any disruption to operations.
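
The article doesn’t say which learning method such a control database uses, but the “learn normal, then alert on deviations” loop can be illustrated with the simplest possible model: a running mean and standard deviation per motor signal, updated online. A minimal sketch follows; the 4-sigma band and 1,000-sample warm-up are arbitrary assumptions:

```python
import math
import random

class MotorMonitor:
    """Learn a signal's normal band online (Welford's algorithm), then flag outliers."""

    def __init__(self, sigmas=4.0, warmup=1000):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.sigmas, self.warmup = sigmas, warmup

    def update(self, value):
        # Fold each new reading into the running mean/variance.
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)

    def is_anomalous(self, value):
        if self.n < self.warmup:                 # still learning what "normal" is
            return False
        std = math.sqrt(self.m2 / (self.n - 1))
        return abs(value - self.mean) > self.sigmas * std

# Toy usage: learn from quiet readings, then see a spike get flagged.
monitor = MotorMonitor()
for _ in range(2000):
    monitor.update(random.gauss(2.0, 0.1))       # normal vibration, mm/s
print(monitor.is_anomalous(2.05))                # False: inside the normal band
print(monitor.is_anomalous(4.0))                 # True: far outside it
```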

Finally, this kind of sensor-based data collection is far more precise and thorough than anything humans could achieve. A slow deterioration of the quality of any given electric motor will continue undetected by human eyes and ears until a serious problem develops or failure occurs, but the IoT will notice even the smallest shifts in normal performance over a longer period of time.

The True Value of the IoT: Making Better Decisions Faster
We think we live in a modern world, but we actually waste a lot of resources and money by making the wrong decisions and/or making decisions too slowly. The promise of the IoT is that we can now collect enough data—cost-effective data that already exists, but that we never captured. And we can capture this data continuously, and quite effortlessly, in enormous volumes. With AI, we can learn from it to make better decisions, faster.

Back to the original question, then. What value does the IoT bring? It enables people to make better decisions, faster.


Cees Links was the founder and CEO of GreenPeak Technologies, which is now part of Qorvo. Under his responsibility, the first wireless LANs were developed, ultimately becoming household technology integrated into PCs and notebooks. He also pioneered the development of access points, home networking routers, and hotspot base stations. He was involved in the establishment of the IEEE 802.11 standardization committee and the Wi-Fi Alliance. He was also instrumental in establishing the IEEE 802.15 standardization committee, which became the basis for ZigBee® sense and control networking. Since GreenPeak was acquired by Qorvo, Cees has become the General Manager of the Wireless Connectivity Business Unit in Qorvo. He was recently recognized as a Wi-Fi pioneer with the Golden Mousetrap Lifetime Achievement award. For more information, please visit www.qorvo.com.


E-Paper Displays Mature

Monday, March 26th, 2018

How innovative solutions are taking E-paper displays from niche to mainstream IoT applications.

Adding to the existing strengths of Electronic Paper Displays (EPDs)—ultra-low power consumption, superb readability, and compact size—are several breakthrough enhancements, including new colors, faster display updating, even lower power usage, and a much wider operating temperature range.

Integrating these next generation EPDs into IoT devices can undoubtedly improve the functionality and lower the cost of current IoT systems. But more exciting still, these new EPDs also have the potential to enable fresh IoT applications that will give early adopters in the development community an opportunity to pioneer entirely new markets.

Why Electronic Paper is Already Ideal for IoT Applications
To understand why EPDs are poised to revolutionize IoT devices and make possible wholly new IoT applications, it is helpful to first understand current generation EPD technology’s key advantages over older flat panel display technologies, such as LCDs and OLEDs.

As the name suggests, an Electronic Paper Display displays images and text that are easily readable in natural light, just like printed ink on a sheet of paper. In this, an EPD is fundamentally unlike other display technologies, which all require their own internal luminance source—one that is power hungry, bulky, complex to design and manufacture, usually impractical to maintain, and prone to defects including uneven brightness, burn in, gradual dimming, and failure.

The EPD technology used by Pervasive Displays creates images from hundreds of minute charged ink particles within each of the tiny capsules that form each pixel. By varying the electrical field across the capsule, ink particles of the desired color are moved to the top surface of the paper, instantly changing the pixel’s color. As the particles are solid ink-like pigments, they create a clear and sharp image that looks like ink on paper. Users find the EPD graphics and text are not only more quickly read and understood, but are also more visually pleasing, and reduce eye strain, because they so precisely mimic the appearance of traditional printing and writing technologies that have been used for thousands of years.

For the IoT, a slim, compact, high-contrast EPD which is clearly visible in natural light is a huge boon. Such a display requires far less power than other technologies and is visible in a wide range of lighting conditions, from dim interior lighting, to bright sunlight that makes other displays painful or impossible to read. In addition, EPDs provide a very wide viewing angle and they help users to read and comprehend critical information without delay.

Figure 1: A two-inch EPD consumes considerably less power compared to a two-inch TFT LCD when updated several times a day—such as for IoT applications.

An EPD shares another similarity with ink on paper: it is bi-stable. Energy is only consumed when the image is being changed. On the other hand, display technologies that are not bi-stable constantly drain power to refresh and illuminate the image, whether it changes or not. For IoT applications, which often display static images and text for hours on end, and may rely solely on battery or environmental power, this is yet another huge energy saver, adding to the power saved by not requiring a constant internal light source. EPDs are such frugal energy users that some can provide an updating display that is driven and maintained simply by the residual energy available when a battery-free RFID tag is scanned.

The zero-power static display capability of electronic paper also frees users from the inconvenience of having to switch on a battery-powered display every time they need to briefly check the device status. Instead, the device condition is always instantly readable at a glance, minimizing unnecessary energy drain. For a typical IoT device with a 2-inch display that may only be updated a few times per day, a traditional LCD will consume well over 250 times more power than an EPD module. By slashing the energy consumption of the display—one of the most power-hungry components—to a minimum, IoT devices can operate in the field, perhaps with zero maintenance, for years. In the same situation, a constant LCD display could deplete its battery in a matter of days.
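
The scale of that difference is easy to sanity-check with a back-of-the-envelope budget. The numbers below are illustrative assumptions, not measured module specifications, but they reproduce the same orders of magnitude:

```python
battery_mah = 225.0                   # roughly a CR2032 coin cell

# Bi-stable EPD: sleeps between a few daily updates (assumed values).
sleep_ua, update_ma, update_s, updates_per_day = 5.0, 10.0, 1.0, 4
epd_mah_per_day = ((sleep_ua / 1000) * 24
                   + update_ma * (update_s * updates_per_day) / 3600)

# Always-on LCD: drive and backlight draw current continuously (assumed).
lcd_ma = 2.0
lcd_mah_per_day = lcd_ma * 24

print(f"LCD/EPD power ratio: {lcd_mah_per_day / epd_mah_per_day:.0f}x")
print(f"EPD battery life: {battery_mah / epd_mah_per_day / 365:.1f} years")
print(f"LCD battery life: {battery_mah / lcd_mah_per_day:.1f} days")
```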

However, while the EPD’s crystal-clear display, ultra-low energy use and slim size seem almost tailor-made for the IoT—certain limitations have, until recently, frustratingly prevented full use of EPDs in some of the most promising IoT markets. Today, however, several new EPD technologies are set to sweep those barriers aside.

EPDs are already ideal for a wide range of IoT applications, but the latest electronic paper technology enhancements and innovations are about to expand EPD reach and usability much further—bringing the IoT to new environments and new markets.

Wider Temperature Range Extends Global Markets and Creates New Applications
Many IoT applications require devices to be usable in the field, often outdoors, and in extreme temperature conditions. Early generations of EPD technology had a narrow operational temperature range. This limited their deployment in some IoT applications or required additional hardware to stabilize the temperature.

Fortunately, recent innovations have dramatically extended EPD operating temperature range. Today, EPD modules can operate from -20 °C to +40 °C. This much wider temperature range makes an unmodified EPD-equipped IoT device usable in far more global climate environments, all-year round, and also in high- and low-temperature facilities. The temperature range extension has been achieved largely by improving the sequence and timing of the display’s driving waveforms.

Figure 2: A 1.6-inch e-paper display with weather information

With an extended operating temperature, the potential new applications now opening up to EPD-equipped IoT products are too numerous to mention. They include industrial, logistics, transportation, and automotive—to give more specific examples: cold-chain logistics temperature logging and RFID tags, as well as many similar applications in outdoor and harsh environments.

Faster Refresh Rates for More Timely, Dynamic Information
Older EPDs had relatively slow update times of a second or more, possibly delaying operator response to new information. This made the screens less practical when rapid display changes were required, for example in sensing and monitoring applications, for fast-changing data, and for animation.

This limitation existed mainly because the entire screen had to be cleared and redrawn to make even a small change. However, by only updating the section of the display that has been changed, the latest EPDs can now update important data with almost no significant delay. These partial updates can achieve refresh speeds of 300-600 ms—a four-fold improvement. In addition, these partial updates use even less power than a full screen refresh, further reducing the EPD’s already very low energy consumption.

Figure 3: The partial update process only updates the information on the screen which needs to change, such as the room temperature and energy usage information.

In brief, partial updates are performed by comparing the previous image and the new image to get a delta image. This delta image is then input into the EPD. Because of the physical characteristics of the EPD’s ink particles, the waveform used to program the delta image into the display is adjusted based on ambient temperature. There is no limit to the size of screen that partial updates can work on, although larger sizes of screen updates will require more RAM and faster CPU speeds to drive the waveforms properly.
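
In code terms, the delta computation reduces to diffing two frame buffers and pushing only the bounding window of changed pixels to the panel. Here is a minimal sketch; the write_window driver call is a hypothetical placeholder rather than a real vendor API, and a real controller would also select the temperature-adjusted waveform described above:

```python
import numpy as np

def partial_update(panel, prev_frame, next_frame):
    changed = prev_frame != next_frame           # boolean mask of changed pixels
    if not changed.any():
        return                                   # bi-stable panel holds the image for free
    rows, cols = np.nonzero(changed)
    top, bottom = rows.min(), rows.max() + 1     # bounding window of the change
    left, right = cols.min(), cols.max() + 1
    window = next_frame[top:bottom, left:right]
    panel.write_window(window, x=left, y=top)    # hypothetical driver call
```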

Partial updates do have some limitations. Numerous partial updates without a full screen refresh can result in ghosting artifacts, especially for black-to-white pixel transitions; this can be mitigated by minimizing those transitions. For three-color screens, partial updates work on all three colors. However, as the third color (red or yellow) updates relatively slowly, taking a few seconds, it generally only makes sense to perform partial updates on the black and white pixels in the image.

Moreover, unlike earlier versions of partial update technology, the latest EPD modules do not require additional dedicated electronics for partial updating, thereby providing potential for more reductions in display module cost, complexity, size, and power consumption.

In general, just like the other recent innovations discussed here, partial update technology opens up new applications and markets for EPDs. Partial update technology combines all the power consumption and readability advantages of an EPD with a responsive, rapidly updated display.

More Colors: Attractive, Informative, Safer
Moving beyond early EPDs that offered only monochrome black-and-white displays, the latest EPDs from Pervasive Displays provide three colors: black, white and red, and most recently, black, white and yellow. For retail applications, these vivid, eye-catching colors greatly enhance the attention-grabbing power of pricing, signage, and product displays. This allows retailers to draw customers’ attention to special offers, deals, or special conditions—improving stock throughput, saving staff and customers’ time, and increasing customer satisfaction.

Figure 4: E-paper displays from Pervasive Displays are now available in black, white and yellow as well as black white and red.

For industrial and monitoring applications, bright colors are perfect for instantly bringing attention to critical data—such as warnings or sensor measurements that are outside of nominal range. Simply adding color hints can greatly enhance efficiency and safety, as well as reducing operator fatigue.

In addition, with EPDs providing sharp display resolutions of up to 200+ DPI, these additional colors provide more options for dithering (displaying alternate adjacent pixels in different colors) to generate new shades beyond the standard three, a strong tool for creating attractive retail displays.
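
Dithering itself is simple to illustrate. Below is a minimal ordered-dither sketch for a 1-bit black-and-white EPD image; the same idea extends to mixing in the red or yellow ink:

```python
import numpy as np

BAYER2 = np.array([[0, 2],
                   [3, 1]]) / 4.0            # normalized 2x2 threshold map

def dither(grey):
    """Map a greyscale image in [0, 1] to black/white pixels by ordered dithering."""
    h, w = grey.shape
    thresholds = np.tile(BAYER2, (h // 2 + 1, w // 2 + 1))[:h, :w]
    return grey > thresholds                 # True = white pixel, False = black

# A 50 percent grey becomes a checkerboard of alternating black and white pixels.
print(dither(np.full((4, 4), 0.5)).astype(int))
```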

Rethinking the EPD: A New Generation of Display
Adding all these improvements in refresh rate, power consumption, operating temperature, and color to EPDs demands a rethink of the EPD’s role in the IoT. In a sense, these EPDs are almost a new class of display technology. The new EPD can now replace LCDs and OLEDs in applications that require features like responsive display updates and color highlighting, and it can be used across a huge area of the globe, from the arctic to the equator, and in environments from the heat of a desert oil well or an iron foundry to the chill of a medical sample storage area or a refrigerated goods truck.

And EPDs can achieve all this without compromising their unbeatable natural light visibility, and with power consumption lower than 0.5 percent of an equivalent LCD screen—offering the potential for five to seven years of battery life on a coin cell battery and practical operation driven by solar and other environmental power sources.


HD Lee is Founder, CTO and President, Pervasive Displays. Lee has over 17 years’ experience in research and development for advanced displays, specializing in TFT-LCD, OLED and e-paper. Lee was a co-founder of Jemitek, a mobile LCD design house, which was acquired by Innolux in 2006. In 2011 he co-founded Pervasive Displays Inc., a world-leading low power display company that focuses on industrial applications and has sold more than 10 million e-paper displays. The company was acquired by SES-imagotag in 2016. Lee personally holds 43 granted patents, with more pending. He has an MS and BS in Electronic Engineering from the National Taiwan University.


Embedded World 2018

Thursday, March 8th, 2018

Embedded World broke its own records—again—this year, with over 1,000 exhibitor companies and more than 32,000 international visitors from the embedded community.

Nuremberg, Germany was a winter wonderland for this year’s Embedded World, as freezing temperatures gripped the Nuremberg Messe showground. Inside the six halls of the world’s biggest embedded industry event, however, was a hotbed of IoT, automation, automotive, communications, and networking innovation.

Figure 1: Nuremberg Messe hosted the 16th Embedded World from February 27 to March 01 2018.

All types and sizes of modules were on display, illustrating the diversity of choice available in the market today. One of the most interesting was the System on Module (SoM) for industrial-grade Linux designs by Microchip. The ATSAMA5D27-SOM1 (Figure 2) is designed to remove the complexity of developing an industrial system based on a microprocessor running Linux® OS. Lucio di Jasio, Business Development Manager, Europe, at Microchip, explains that the 40 x 40mm board will help engineers with PCB layout in these applications. It has the company’s ATSAMA5D27C-D1G-CU System in Package (SiP) and uses the SAMA5D2 MPU. The small form factor integrates power management, non-volatile boot memory, an Ethernet PHY, and high-speed DDR2 memory for developing a Linux-based system, or it can be used as a reference design. Schematics, design, and Gerber files are available online and free of charge.

Figure 2: The ATSAMA5D27-SOM1 System on Module was announced by Microchip.

For security, the SAMA5D2 family has integrated Arm TrustZone® and capabilities for tamper detection, secure data and program storage, hardware encryption engine and secure boot. The SoM also contains Microchip’s QSPI NOR Flash memory, a Power Management Integrated Circuit (PMIC), an Ethernet PHY and serial EEPROM with a Media Access Control (MAC) address.

There is a choice of three DDR2 memory sizes for the SiP (128Mbit, 512Mbit, and 1Gbit), all optimized for bare metal, Real-Time Operating Systems (RTOS), and Linux OS. All of Microchip’s Linux development code for the SiP and SOM is available in the Linux communities.

Customers can transition from the SOM to the SiP or the MPU, depending on the needs of the design, adds di Jasio.

The company also announced two new microcontroller families, one for the PIC range and one for the megaAVR series. The PIC16F18446 microcontrollers are suitable for use in sensor nodes, while the ATmega4809 is the first megaAVR device to include Core Independent Peripherals (CIPs), which execute tasks in hardware instead of through software, decreasing the amount of code required and speeding time to market.

Graphics Performance
A COM Express Type 6 module from congatec shows how the company is wasting no time in exploiting the prowess of AMD’s latest Ryzen™ processor. The conga-TR4 (Figure 3) is based on neighboring exhibitor AMD’s Ryzen Embedded V1000 processors.

Figure 3: congatec has based its conga-TR4 COM Type 6 module on AMD’s high-performance GPU, the Ryzen V1000.

The company has identified embedded computing systems that need the graphics performance of the Ryzen processor for medical imaging, broadcast, infotainment and gambling, digital signage, surveillance systems, optical quality control in automation and 3D simulators.

The Ryzen Embedded V1000 was launched days before Embedded World, together with the EPYC™ Embedded 3000 processor, and the two made up the ‘Zen’ zone in the company’s booth as they usher in a new age of high-performance embedded processors, explains Alisha Perkins, Embedded Marketing Manager at AMD.

Focusing on the Embedded V1000, Perkins explained that it targets medical imaging, industrial systems, digital gaming, and thin clients at the edge of the network. It doubles performance compared with earlier generations, has up to three times more GPU performance than any processor currently available, offers nearly half as much again (up to 46 percent) more multi-threaded performance than competing alternatives and, crucially for mobile or portable applications, is 36 percent smaller than its competitors.

AMD couples its Accelerated Processing Unit (APU) with Zen Central Processing Units (CPUs) and Vega Graphics Processing Units (GPUs) on a single die, offering up to four CPU cores/eight threads and up to 11 GPU compute units for a 3.6TFLOPS processing throughput. On the stand were examples of medical imaging stations that could be wheeled between wards, a dashboard for remote monitoring of utilities and an automated beer bottle checking visual inspection station, using the high-performance graphics and computing powers of the processor.

At congatec, the COM Express Type 6 module was also cited as being suitable for smart robotics and autonomous vehicles, where its Thermal Design Power (TDP) is scalable from 12 to 54W to optimize size, weight, power and costs (SWaP-C) at high graphics performance, says Christian Eder, Director of Marketing, congatec.

Industrial Automation
For the smart, connected factory, Texas Instruments introduced its latest SimpleLink™ microcontrollers (MCUs) devices, with concurrent multi-standard and multi-band connectivity for Thread, Zigbee®, Bluetooth® 5 and Sub-1 GHz. Designers can reuse code across the company’s Arm® Cortex®-M4-based MCUs in sensor networks to the cloud.

The additions announced in Nuremberg expand the SimpleLink MCU platform to support connectivity protocols and standards for 2.4 GHz and Sub-1 GHz bands, including the latest Thread and Zigbee standards, Bluetooth low energy, IEEE 802.15.4g and Wireless M-Bus. The multi-band CC1352P wireless MCU, for example, has an integrated power amplifier to extend the range for metering and building automation applications, while maintaining a low transmit current of 60mA.

SimpleLink MSP432P4 MCUs have an integrated 16-bit precision ADC and can host multiple wireless connectivity stacks and a 320-segment Liquid Crystal Display with extended temperature range for industrial applications.

Security is addressed with new hardware accelerators in the CC13x2 and CC26x2 wireless MCUs for AES-128/256, SHA2-512, Elliptic Curve Cryptography (ECC), and RSA-2048, plus a true random number generator (TRNG).

Code compatibility: These new products are all supported by the SimpleLink software development kit (SDK) and provide a unified framework for platform extension through 100 percent application code reuse.

Still with automation, ADLINK CEO Jim Liu has his sights set on Artificial Intelligence (AI). “We have gone from being a pure embedded CPU vendor to an AI engine vendor,” he says, introducing autonomous mobile robotics and ‘AI at the Edge’ solutions using NVIDIA technology.

Figure 4: Industrial vision systems from ADLINK use NVIDIA technology for AI and deep learning.

Its embedded systems and connectivity couple with NVIDIA’s AI and deep learning technologies to target compute-intensive applications such as robotics, autonomous vehicles, and healthcare. Demonstrations included an autonomous mobile robot platform using ROS 2, an open source software stack specifically designed for factory-of-the-future connected solutions; a smart camera technology that can scan barcodes on irregularly shaped objects and differentiate between them (Figure 4); and a demonstration that calculated vehicle flow to improve traffic management in a smart city.

Arm was also moving to the edge of computing, with machine learning and a display that fascinated many—a robotic Rubik’s Cube solver (Figure 5). John Ronco, Vice President and General Manager, Embedded and Auto Line of Business at Arm, sounds a cautious note: “Inference at the edge of the cloud has network, power and latency issues; there are also privacy issues,” he says. Ahead of Embedded World, the company announced its Project Trillium, promoting its Machine Learning (ML) technology, using an ML processor that is capable of over 4.6 Trillion Operations per Second (TOPs) with a power conserving efficiency of 3TOPs/W, and an object detection processor.

Figure 5: Arm was demonstrating its machine learning with the classic puzzle, the Rubik’s Cube.

Embedded Tools
Swedish embedded software tools and services company IAR Systems shared news of its many recent partnerships. The first, with Data I/O, bridges the development-manufacturing gap by integrating IAR’s software with Data I/O’s data programming and secure provisioning, smoothing the transition from microcontroller firmware development to manufacture. The two share many customers within the automotive, IoT, medical, wireless, consumer electronics, and industrial controls markets, although at separate stages of the design and manufacturing process. To address the growing complexity of designs and the security concerns in the embedded market, explains Tora Fridholm, Product Marketing Manager at IAR Systems, the two companies have established a roadmap based on customer requirements for a workflow where resources such as images, configuration files, and documents can be securely shared. Customers thus enjoy an efficient design-to-manufacturing workflow that reduces time to market.

For adding device-specific security credentials, such as keys and certificates, both companies are committed to integrating the appropriate processes and tools.

Another announcement was with Renesas Electronics, whereby its Synergy™ platform can use the Advanced IAR C/C++ Compiler™ in the e² Studio Integrated Development Environment (IDE) to reduce application code size, allowing more features to be added to Synergy microcontrollers. There is also the benefit of the compiler’s execution speed, which allows the microcontroller to remain in low power mode longer to conserve battery life.

Synergy microcontrollers are used in IoT devices to monitor the environment in buildings and industrial automation, energy management, and healthcare equipment.

Embedded Boards
An essential part of embedded design is board technology, and this year’s show did not disappoint. WinSystems was highlighting two of its latest single board computers: the PX1-C415 (Figure 6), which manages IoT nodes, and the SBC35-C427, based on the Intel Atom® E3900 processor series.

The first uses the Microsoft® Windows® 10 IoT Core OS to support IoT development; the second is designed for industrial IoT, with an onboard ADC input, General Purpose Input/Output (GPIO), dual Ethernet, two USB 3.0 and four USB 2.0 channels. It can be used in transportation, energy, industrial control, digital signage, and industrial IoT applications.

The SBC supports up to three video displays via DisplayPort and LVDS interfaces. It can be expanded using the Mini-PCIe socket, an M.2 connector (E-Key), and the company’s own Modular I/O 80 interface.

Figure 6: WinSystems offers one of the first boards to run on IoT Core OS.

A COM Express Type 7 module by ConnectTech was among the company’s booth highlights. The COM Express Type 7 + GPU Embedded System (Figure 7) can drive up to four independent displays or run as a headless processing system. It has Intel Xeon® D x86 processors and NVIDIA Quadro® and Tesla® GPUs in a 216 x 164mm form factor. It anticipates the needs of high-performance applications that require 10GbE and Gigabit Ethernet, USB 3.0 and USB 2.0, HDMI, SATA II, I2C, M.2, and miniPCIe for video encode/decode, GPGPU CUDA® processing, deep learning, and AI applications.

Figure 7: A COM Express Type 7 module by ConnectTech targets high-performance applications.

The company, an ecosystem partner for NVIDIA’s Jetson SoM, also showed its Orbitty Carrier and a Cogswell Vision System, both based on NVIDIA’s Jetson TX1/TX2.


Caroline Hayes has been a journalist covering the electronics sector for more than 20 years. She has worked on several European titles, reporting on a variety of industries, including communications, broadcast and automotive.


IoT Growth Brings Fresh EMI/EMC Challenges: Q&A with Tektronix

Tuesday, March 6th, 2018

Taking on customers’ EMC/EMI compliance pain points led to a solution the provider has designed as all-in-one, so users can realize time and cost savings.

What the IoT brings to industrial, consumer, mobile and mil-aero connectivity does not have to include problems with electronic interference and unintentional radiators, as wired and wireless devices proliferate.

Editor’s Note: Unintended consequences are something to be avoided, with careful planning often the prescribed method for doing so. Recently Dylan Stinson, Product Marketing Manager at test, measurement, and monitoring solutions provider Tektronix, spoke with EECatalog.com about how designers and manufacturers can avoid interference that causes safety, regulatory, and performance issues, even as wireless “stuff” enters our lives at a relentless pace. Tektronix, says Stinson, “offers a complete solution, including pre-compliance software, accessories, and also the benefit of having a real-time spectrum analyzer.”  We spoke with Stinson at the time Tektronix announced its EMI/EMC Compliance Testing solution. Edited excerpts from our interview follow.

EECatalog: What is EMC and who needs to care about it?

Dylan Stinson, Tektronix

Dylan Stinson, Tektronix: EMC or electromagnetic compatibility is defined as the interaction of electrical and electronic equipment with its electromagnetic environment and with other equipment.

Anybody who designs, manufactures, or is importing products with electronics inside is definitely going to want to care about EMC compliance.

There have been several well-publicized cases where, because electromagnetic compatibility and EMC testing were not fully considered, companies have been fined and products have even had to be recalled or withdrawn from the market due to emission levels exceeding regulated limits.[1]

EECatalog:  What have designers and manufacturers been doing to achieve compliance and what’s changed?

Stinson, Tektronix: To answer your second question first, we’re seeing new problems. For example, consider a laptop computer or a smartphone [see Figure 1]. It contains all the high-speed digital systems necessary in a digital computer or phone, combined with wireless transmitters and receivers for the necessary connectivity and communication.

Figure 1: Multiple noise sources characterize today’s systems. (Courtesy Tektronix)

With the proliferation of all these wireless-enabled devices, the proximity of unintended radiators to sensitive receivers creates an environment rife with interference opportunities.

In the case illustrated in [Figure 1], each element of the design is operating within regulated limits, but EMI from the computer clocks or random hard drive access may be entering the receivers, reducing communication efficiency and overall performance. In some cases, the overall integrated system may work poorly, not work at all, or fail to stay within legal limits.

What we’d like to see change is the situation where designers and manufacturers have to go out of house to get EMC/EMI compliance testing done—and more than half of our spectrum analyzer users are testing for EMC and EMI issues. In doing so, they are having to do things like go to a test lab with an anechoic chamber to get a pre-scan. One customer told us, “We spend $1,250 per hour for a technician, lab equipment, and chamber. This adds up over time, as you can imagine. One time, we had a situation where we spent a year trying to figure out where the noise was coming from.”

As to your first question, the traditional method is: you have a design, you take it about 90 percent of the way, then you take it to an external test house that is licensed, but as just noted, this can be expensive, especially when multiple visits and design changes are required. In the U.S., designers have reported spending as much as $10,000 to get a product certified by an external compliance test house.

EECatalog: What should designers know about in-house pre-compliance testing?

Stinson, Tektronix: Performing basic pre-compliance testing in-house, the option Tektronix is providing, can help minimize product development time and expenses and help overall with the design. Pre-compliance allows issues to be caught early on, saving time, effort, and money.

We’ve introduced EMCVu as an all-in-one EMC pre-compliance solution. It is included as a license option for our existing SignalVu-PC software, a part of our Real-Time spectrum analyzer products.

This is for both radiated emission testing and conducted emission testing as well as EMI troubleshooting and debugging. It includes all the accessories for EMC testing, defined and characterized so you don’t have to spend the time doing it yourself: two antennas, a pre-amp, and a tripod for radiated emissions testing; AC LISN, DC LISN, and a transient limiter for conducted emissions testing; and near-field probes and a 20 dB amp for troubleshooting.

EECatalog: What opportunities to save time and effort does your solution make possible?

Stinson, Tektronix: Some of the ways we accelerate EMC compliance are with our failure-targeting Quasi-Peak detector: you specify the failure you want to test and spend less time testing failures outside that scope. The solution also includes an easy-to-learn wizard with built-in standards: all the limit tables for the CISPR and MIL standards are included in the software.

You can populate limit lines based on the standards you select. To add convenience as well as accuracy, we have pre-defined the gains and losses of all our accessories, including antennas, cables, LISNs, and pre-amplifiers. Because these corrections are already in the software, you don’t have to characterize the accessories yourself, and you get a higher level of accuracy.
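For readers unfamiliar with what those correction factors do, here is a minimal sketch of the standard EMC amplitude-correction arithmetic. The accessory values and the limit are invented for illustration; they are not Tektronix data:

```python
# Illustrative only: converting an analyzer reading into field strength by
# applying accessory gains and losses. All numbers below are made up.

def corrected_field_strength(reading_dbuv: float,
                             antenna_factor_db: float,
                             cable_loss_db: float,
                             preamp_gain_db: float) -> float:
    """Convert an analyzer reading (dBuV) to field strength (dBuV/m)."""
    return reading_dbuv + antenna_factor_db + cable_loss_db - preamp_gain_db

# Example: 42 dBuV reading, AF = 13.2 dB/m, 1.8 dB cable loss, 20 dB preamp
level = corrected_field_strength(42.0, 13.2, 1.8, 20.0)
print(f"{level:.1f} dBuV/m vs. an assumed 40 dBuV/m limit -> "
      f"{'FAIL' if level > 40.0 else 'PASS'}")
```

With the accessory corrections stored ahead of time, software can apply this arithmetic to every trace point automatically.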

Also, you can use our ambient noise comparison feature to measure the ambient noise in the test environment, compare it to the actual measurement, and apply trace math to remove it from the measurement. You can readily distinguish a failure caused by ambient noise from one caused by your equipment under test. This gives you the confidence to perform EMI/EMC pre-compliance testing in relatively noisy environments such as office areas, conference rooms, labs, and basements.
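As a rough illustration of the trace math involved (EMCVu’s actual implementation is not public; this sketch assumes simple scalar power subtraction), ambient energy has to be removed in the linear power domain rather than by subtracting dB values directly:

```python
import math

def remove_ambient(total_dbm: list[float], ambient_dbm: list[float]) -> list[float]:
    """Subtract an ambient-noise trace from a measurement trace, point by
    point, in the linear power domain (dB values cannot be subtracted directly)."""
    out = []
    for total, amb in zip(total_dbm, ambient_dbm):
        p_total = 10 ** (total / 10)          # dBm -> mW
        p_amb = 10 ** (amb / 10)
        p_eut = max(p_total - p_amb, 1e-12)   # floor to avoid log of <= 0
        out.append(10 * math.log10(p_eut))    # mW -> dBm
    return out

# The EUT emission stands ~10 dB above ambient at the second point:
print(remove_ambient([-60.0, -45.0, -61.0], [-62.0, -55.0, -62.0]))
```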

And you can conveniently get all of the notes, images, and result information to your manager and other engineers because the software is fully configurable for reporting in formats such as PDF and RTF. You can include any number of experiments and test results in a report.

EECatalog:  Do you anticipate any difficulties in getting folks to change what they’ve been doing, i.e., going to external test houses?

Stinson, Tektronix: No, because we see it as not being much of an effort at all for somebody to get up and running on this. The software is easy to use, and its setup wizard means even a junior engineer can learn how to do EMI pre-compliance. We had customers going to the test house three or four times on a product. With proper in-house pre-compliance testing, customers can reduce that to as little as one visit, so the potential cost savings are huge.

EECatalog: Anything to add before we wrap up?

Stinson, Tektronix: This solution plays well in many markets, from IoT and medical devices to military equipment and systems, and it also spans non-traditional RF applications such as switching power supplies, DC-DC converters, and wireless chargers. These are all things requiring more attention to EMC and EMI compliance.

[1] http://www.fr.com/files/Uploads/attachments/fcc/2012_Q1_FCC-Enforcement-Matrix.pdf

 

For Medical Device Design on the IoT, Get a Solid Head Start

Monday, January 29th, 2018

Their value for patient care and the efficiency they create for medical personnel have medical devices proliferating rapidly on the Internet. This brings heavy responsibilities for accuracy, reliability, and meeting certification standards. Given the wide range of specific needs among patients and the devices used to care for them, medical device developers should start with a powerful and rich set of tools based on a platform that offers the team a solid starting point for implementing the power-saving, processing, and security features they need.

The development of intelligent medical devices has always been a challenge. Adding to the familiar pressures of cost, time to market, size, weight and power (SWaP) optimization, safety, security, and providing the proper feature mix, there is also the challenge of certification by agencies of multiple countries. These certifications involve different levels of stringency depending on a device’s possible risk. Still, hospitals and medical facilities have long relied on medical electronics in the clinic and operating room, and a great deal of experience has been gained in their development.

Medical, industrial, and consumer devices all increasingly connect via the Internet, making them part of a world that can seem like the Wild West. Medical devices that are part of the IoT can shorten hospital stays and enable elderly patients to live at home longer, driving down the costs of medical care. For more active patients, the IoT’s medical “things” can also be a huge advantage: they can directly sense and interpret vital bodily functions, signal alerts when needed, and connect via the cloud to physicians and specialists for more regular monitoring and emergency attention. In athletics, they can help prevent serious injury by sensing possible concussions along with other signs that could indicate a player should receive immediate attention (Figure 2).

Figure 1:  Medical IoT devices must communicate securely over one of a variety of wireless network protocols, eventually connecting to the Internet using an IP stack and ultimately to a cloud service—in this case, Microsoft Azure. And they must maintain security along the whole path. Source: Microsoft, Inc.

Among the additional challenges for wearable medical devices are even more demanding size and low-power requirements, implementing the appropriate wireless communication technology, and, perhaps most important, security. The latter is needed both to comply with existing regulations such as HIPAA and to protect the devices against hacking. Hackers could exploit access to devices to find their way into larger systems to steal and exploit data. And as gruesome as it may sound, there is also good reason to protect devices such as pacemakers from attack. The advantage of having a pacemaker connected wirelessly is the ability to make adjustments and software upgrades without surgery. But that also exposes it to possible malicious attack.

Start with a Solid Platform
Fortunately, developers do not have to start from scratch, and wearable medical devices, while addressing very specialized needs and functions, do have many things in common. If developers can start from a point that directly addresses these shared challenges, they can concentrate on adding their own innovation and value, and early in the process they can get started meeting the often intricate specific needs of a successful medical device project. That boils down to the concept of a “platform”: a combination of basic software and hardware that lets developers get right to adding their specific value and innovation. Such a platform must offer hardware and software components that, while shared among medical projects, focus on the common needs of medical devices. Its aim is to provide selected components offering a broad set of features for medical devices without making developers search through a complex set of unnecessary components; they should be able to quickly find and evaluate those most suited to their development goals.

Among the options in such a platform should be a choice of wireless connectivity. If a device is to be worn by a patient in a hospital setting, the link should be to the hospital’s network, possibly via Wi-Fi. If the device is to be worn by a patient during normal daily activities, then a Bluetooth link to the patient’s smartphone might be more appropriate. For sporting events, such as a marathon that covers extended areas, a wider-ranging choice might be LoRa. While devices can connect directly to the Internet, the more usual approach is to connect to a gateway device using one of the wireless protocols. The gateway then sends the data over the Internet to cloud services using Internet protocols such as Transport Layer Security (TLS), which also offers methods of securing communications beyond the gateway.

The gateway or edge device can be a specialized design or a PC in the home running the needed applications and connected to the Internet. One important design consideration is how and where to make decisions that are necessary to the patient’s well-being. For example, a fall or a blow to the body should invoke an instant response alerting the proper remote service professionals to deal with it. In other cases, the gateway can simply forward data to a cloud application where it is analyzed. Anomalous or out-of-range results can then alert a physician who can determine what steps to take. Alternatively, code on the device could recognize the need to dispense medication, possibly via a module worn on the body. Decisions such as these will influence the allocation of resources, including memory, power consumption, and processing capability, and where in the device/gateway/cloud chain they are implemented. So, in addition to a rich selection of specialized peripherals and their support, the developer must select a processor, an operating system, power management functions, location and motion sensing, body sensors, security support, and a choice of wireless communication hardware and protocols.
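As a concrete illustration of that device/gateway split, a wearable might act locally on an urgent event and defer routine telemetry to the cloud. This is a hypothetical sketch: the endpoints, field names, and 3 g threshold below are invented, not part of any platform discussed here:

```python
# Hypothetical edge-decision sketch: urgent events trigger an immediate local
# alert; everything else is forwarded to the cloud for later analysis.
import json
import urllib.request

FALL_THRESHOLD_G = 3.0  # assumed impact threshold, in g

def handle_sample(accel_g: float, heart_rate_bpm: int) -> None:
    if accel_g > FALL_THRESHOLD_G:
        post("https://alerts.example.com/notify",          # placeholder endpoint
             {"event": "possible_fall", "accel_g": accel_g})
    else:
        post("https://iot-hub.example.com/telemetry",      # placeholder endpoint
             {"accel_g": accel_g, "hr": heart_rate_bpm})

def post(url: str, payload: dict) -> None:
    req = urllib.request.Request(url, data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)  # real code would add error handling and retries
```

Where this threshold check runs, on the device, the gateway, or the cloud, is exactly the resource-allocation decision described above.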

 

Figure 2: For a device like a concussion monitor, sensors, drivers, and a certain amount of processing, security, and communication capability must be built in so the device can concentrate on a very specific set of issues: detecting the concussion and relaying information about its possible effects.

The human interface must also be thoughtfully developed. There is little room for a rich human machine interface (HMI) on the device itself, but important functions can be placed on a small display, such as on a smart watch. When, and only when, practical, a richer set can also be implemented on a smartphone. The gateway device is often the major host for the HMI because it can be accessed quickly by the patient, or remotely by the physician, either directly over the Internet or from applications in the cloud. Of course, other control and analysis applications running in the cloud can use the device HMI as well as other application-based user interfaces.

And Security Is a Must
As mentioned above, devices must be able to negotiate secure connections that not only protect data, but also guard against hacking and malicious access. One should naturally expect security support in the form of security-critical software components: secure mail (secure SMTP) and secure web pages (HTTPS) for implementing layered security strategies, plus TLS as noted earlier. Secure management via SNMP v3 can be used to secure both authentication and transmission of data between the management station and the SNMP agent, along with an encrypted file system.

Given the different protocols used in connecting medical IoT devices from the patient over a wireless network to edge/gateway device and then via the Internet up to the cloud services, it is vital that security be assured end-to-end over the whole route. This must ensure that data and control codes can be authenticated as passing between a sender and receiver that have both verified their identities. It means that the messages must be undistorted and uncorrupted either by communication glitches or by malicious intent. And communications must remain secure and private, which also involves encryption and decryption.

Encrypted messages passing through the gateway from the wireless protocol to the Internet will use a standard Internet protocol like TLS, which uses a securely stored private key together with a public key to generate a unique encryption key for a given session. For both message integrity and privacy, it is important that the content as well as the private key remain secure. TLS forms the basis for HTTPS and for layered security strategies. Additional protocols for graphics functionality, along with camera and video support, give the developer a rich selection of options for the entire range of possible medical applications.
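To make the TLS step concrete, here is a minimal client-side sketch using Python’s standard library ssl module; the gateway hostname and port are illustrative assumptions, not part of any platform described above:

```python
import socket
import ssl

# The default context loads the system's trusted CAs and enables hostname checks.
context = ssl.create_default_context()

with socket.create_connection(("gateway.example.com", 8883)) as sock:
    # The handshake negotiates the unique per-session encryption key described
    # above, authenticated by the server's certificate and key pair.
    with context.wrap_socket(sock, server_hostname="gateway.example.com") as tls:
        print("negotiated:", tls.version(), tls.cipher())
        tls.sendall(b'{"device": "wearable-01", "hr": 72}')  # encrypted payload
```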

Another thing that is often missed is the memory model used by the underlying operating system. Today, many RTOSs are modeled on Linux, which supports dynamic loading where the code can be loaded into RAM and then run under control and protection of the operating system. However, that protection can sometimes be breached, and so a dynamic loading memory model involves definite dangers when used in critical embedded systems like medical devices.

Another model executes code from Flash memory. For code to execute, it must first be placed in Flash, and loading malicious code into Flash is much more difficult than simply putting it into RAM. Flash-based code is a single linked image, and when it executes, it executes only from that image; there is no swapping of code in and out of RAM. RAM is of course used for temporary storage of variables and stacks, such as those used for context switching, but all instructions execute from Flash.

Even if an attacker could breach the security used for program updates, they could not load a program into device memory to be executed even under control of the OS because the code must be executed from Flash. The only way to modify that Flash image is to upload an entirely new image, presumably one that includes the malware along with the regular application code. That means a hacker would need a copy of the entire Flash-based application in order to modify and upload it. Such a scenario is extremely unlikely.

Figure 3: RoweBots Limited supplies a selection of prototyping platforms consisting of an RTOS, communications components, and supported peripheral (including sensor) drivers. There is also a selection of processor boards such as the STM32F4 (l) and STM32F7 (r) boards from STMicroelectronics.

These days, the idea of a “development platform” is widespread and well accepted. Nobody wants to start from scratch, nor do they need to. Developers may already have a fairly clear idea of the class of processor and mix of peripherals they will need for a given project and will look for a stable and versatile platform: a choice of supported MCUs and hardware devices, an RTOS environment tailored for very small systems, and a host of supported drivers, software modules, and protocols. Finding a platform whose range of features and supported elements is close to a project’s goals can go a long way toward shortening time to market and verifying the proof of concept before making the potentially expensive commitment to a specific design. The key is to have those goals firmly in mind and then look for the platform that best meets them. Fortunately, many semiconductor manufacturers and RTOS vendors are now collaborating to offer such platforms, some targeted at specific markets and application areas (Figure 3).

On the other end, it is also wise to tailor the project for cloud connectivity out of the box. Among the available services are Microsoft Azure IoT Hub, AWS IoT, and IBM Watson IoT. Microsoft Azure, for example, lets developers build, deploy, and manage applications and services through a global network of Microsoft-managed data centers, providing a ready-made cloud environment and connectivity for IoT devices to communicate their data, be monitored, and receive commands from authorized personnel.


Kim Rowe has started several embedded systems and Internet companies including RoweBots Limited. He has over 30 years’ experience in embedded systems development and management. With an MBA and MEng, Rowe has published and presented more than 50 papers and articles on both technical and management aspects of embedded and connected systems.

The Digitization of Cooking

Wednesday, November 22nd, 2017

Smart, connected, programmable cooking appliances are coming to market that deliver consumer value in the form of convenience, quality, and consistency by making use of digital content about the food the appliances cook. Solid state RF energy is emerging as a leading form of programmable energy to enable these benefits.

Home Cooking Appliance Market
The cooking appliance market is a large (>270M units/yr.) and relatively slow-growing (3-4% CAGR) segment of the home appliance market. For the purposes of this article, cooking appliances are aggregated into three broad categories:

  1. Ovens (such as ranges and built-ins), with an annual global shipment rate of 57M units[1]
  2. Microwave Ovens, with an annual global shipment rate of 74M units[2]
  3. Small Cooking Appliances, with an approximate annual global shipment rate of 138M units[3]

Figure 1:  Among the newer, non-traditional appliances coming online is the Miele Dialog oven, which employs RF energy and interfaces to a proprietary application via WiFi (Courtesy Miele).

Appliance analysts generally cite increasing disposable income and the steady rise in the standard of living globally as primary factors contributing to cooking appliance market growth. These have the greatest impact in economically developing regions such as the BRIC countries. However, other factors are shaping cooking appliance features and capabilities and beginning to change the type of appliance consumers purchase to serve their lifestyle interests. Broad environmental factors include connectivity and cloud services, which make access to information and valuable services from OEMs and third parties possible. Individual interest in improving health and wellbeing drives up-front food sourcing decisions and can also affect the selection and use of certain cooking appliances based on their ability to deliver healthy cooking results.

Food as Digital Content?
Yes, food is being ‘digitized’ in the form of online recipes, nutrition information, sources of origin, and freshness. Recipes have been available as digital content almost since the internet came into widespread use, as consumers and foodies flocked to readily available information on the web for everything from the latest nouveau cuisine to the everyday dinner. Over the past several years, new companies and services have emerged to bring even more digital food content to the consumer and are now working to make this same information available directly to the cooking appliances themselves. Such companies break down the composition of foods and recipes into their discrete elements and offer information on calories, fat content, sodium, and so on, as well as about the food being used in a recipe, the recipe itself, and the instructions to the cook—or to the appliance—on how best to cook the food.

In many ways, this is analogous to the transition of TV content moving from analog to digital broadcast, and TVs’ transition from tubes (analog) to solid state (LCD, OLED, etc.) formats. It’s not too much of a stretch to imagine how this will enable a number of potential new uses and services including, but not limited to, guided recipe prep and execution, personalization of recipes, inventory management and food waste reduction, and appliances with automated functionality to fully execute recipes.

It’s Getting Hot in Here
A common thread among all cooking appliances is that they provide at least one source of heat (energy) in order to perform their basic task. In almost every cooking appliance, that source of heat is a resistive element of some form.

Resistive elements can be fast to rise to temperature, but they must raise the ambient temperature over time to the target temperature used in a recipe. Once the ambient temperature is raised, the food must undergo a transfer of energy from the ambient environment to raise its own temperature. The time needed to heat a cavity volume to the recipe’s starting temperature adds to the overall cooking timeline and is generally a waste of energy. Just as the resistive element takes time to increase the ambient temperature, it also takes a long time to reduce it, and it relies on a person monitoring the cooking process to do so, which makes the final cooking result highly subjective. Resistive elements also degrade with time, becoming less efficient and lowering overall temperature output. The increased cooking time for a given recipe and the amount of attention required to assure a reasonable outcome burden the user.

Solid state RF cooking solutions, on the other hand, are noted for their ability to begin heating food instantly, because RF energy penetrates materials and propagates heat through the dipole effect[4]. There is no waiting for the ambient cavity to warm to a suitable temperature before cooking commences, which can appreciably reduce cooking time. When implemented in a closed-loop, digitally controlled circuit, RF energy can be precisely increased and decreased with immediate effect on the food, making it possible to precisely control the final cooking outcome.

Figure 2:  Maximum available power for heating effectiveness and speed along with high RF gain and efficiency are among the features of RF components serving the needs of cooking appliances.

In addition, solid state devices are inherently reliable, as there are no moving parts or components that tend to degrade in performance over time. Solid state RF power transistors such as those from NXP Semiconductor are built in silicon laterally diffused metal oxide semiconductor (LDMOS) technology and may demonstrate 20-year lifetime durability without reduction in performance or functionality (Figure 2). RF components can be designed specifically for the consumer and commercial cooking appliance market to deliver the optimum performance and functionality for the cooking appliance application. This includes maximum available power for heating effectiveness and speed, high RF gain and efficiency for high-efficiency systems, and RF ICs for compact and cost-effective PCB design.

The Digital Cooking Appliance
At the appliance level, a significant trend underway is the transition away from the conventional appliance that supports analog cooking methods, defined as using a set temperature and a set time and continuously checking the progress. These traditional appliances have remained largely unchanged in performance and functionality for decades, and the OEMs producing them suffer continuous margin pressure owing in large part to their relative commodity nature. Newer, more innovative appliances coming to market use digital cooking methods: sensors provide measurement and feedback, and programmable cooking recipes draw on deep pools of information, online and off, such as recipes, prep methods, and food composition, to drive intelligent algorithms that enable automation and differentiated cooking results. Miele recently announced its breakthrough Dialog oven featuring RF energy in addition to convection and radiant heat, and a WiFi connection for interfacing to Miele’s proprietary application (Figure 1).

Solid state RF cooking sub-system reference designs and architectures such as NXP’s MHT31250C provide the programmable, real-time, closed-loop control of the energy (heat) created and distributed in the cooking appliance. Such sub-systems must provide the necessary signal generation, RF amplification, RF measurement, and digital control, as well as a means to interface or communicate with the sub-system, for instance through an application programming interface (API). Emerging standards to facilitate the broad adoption of solid state RF cooking solutions into appliances are being addressed through technical associations such as the RF Energy Alliance (rfenergy.org), which is working on a cross-industry basis to develop proposed standard architectures for solid state RF cooking solutions.

With fully programmable control over power, frequency, and other operational parameters, a solid state RF cooking sub-system can operate across as many as four modules and deliver a total of 1000W of heating power, making it possible to offer differentiated levels of cooking precision and to use multiple energy feeds to distribute the energy for more even cooking temperatures.

Solid state RF cooking sub-systems measure RF power continuously during the cooking process, which enables the appliance to adapt in real time to the cooking progress actually underway. Additional sensor or measurement inputs can further improve the appliance’s recipe execution. It is this real-time control plus real-time measurement that enables adaptive functionality in the appliance. That is important for accommodating changes in food composition, as well as for enabling revisions, replacements, and additions to recipes delivered remotely from a cloud-based service provider or the OEM. With access to a growing pool of digital details about the food to be cooked, the appliance can determine the best range of parameters to execute for achieving the desired cooking outcome.
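To illustrate what such a closed loop might look like, here is a hypothetical sketch. The class, method names, and band edges are invented for illustration and are not NXP’s actual interface; the sweep-for-lowest-reflected-power approach is a commonly described solid state RF technique:

```python
import random

class SimulatedRfModule:
    """Stand-in for a real RF cooking module; reflected power is faked with a
    coupling 'sweet spot' near 2450 MHz plus a little measurement noise."""
    def __init__(self) -> None:
        self._freq_mhz = 2450.0
        self._power_w = 0.0

    def set_frequency_mhz(self, f: float) -> None:
        self._freq_mhz = f

    def set_forward_power_w(self, p: float) -> None:
        self._power_w = p

    def read_reflected_power_w(self) -> float:
        mismatch = abs(self._freq_mhz - 2450.0) / 50.0      # 0 at the sweet spot
        return self._power_w * min(1.0, mismatch) + random.uniform(0, 0.1)

def tune_and_cook(module, target_power_w: float) -> int:
    """Sweep the 2.4-2.5 GHz ISM band at low power, settle on the frequency
    with the least reflected power (best coupling), then apply recipe power."""
    module.set_forward_power_w(10.0)                        # low-power probe
    best_f = min(range(2400, 2501),
                 key=lambda f: (module.set_frequency_mhz(f),
                                module.read_reflected_power_w())[1])
    module.set_frequency_mhz(best_f)
    module.set_forward_power_w(target_power_w)
    return best_f

module = SimulatedRfModule()
print("cooking at", tune_and_cook(module, 250.0), "MHz")    # one of four 250 W feeds
```

Repeating this measure-and-adjust cycle during cooking is what lets the appliance adapt as the food’s composition, and therefore its RF coupling, changes.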


Dan Viza is the Director of Global Product Management for RF Heating Products at NXP Semiconductor (www.nxp.com). A veteran of the electronics and semiconductor industry with more than 20 years of experience leading strategy, development, and commercialization of new technologies in fields of robotics, molecular biology, sensors, automotive radar, and RF cooking, Viza holds four U.S. patents. He graduated with highest honors from Rochester Institute of Technology and holds an MBA from the University of Edinburgh in the UK.


[1] “Major Home Appliance Market Report 2017”

[2]  “Small Home Personal Care Appliance Report 2014”

[3]  Wikipedia.org

[4] Wikipedia.org

Enterprise Drives Augmented Reality Demand

Monday, November 20th, 2017

Consumer products garner the headlines, but industry is likely to drive AR commercialization.

While many things that Google comes up with seemingly turn to gold, Google Glass wasn’t one of them. According to Forbes magazine, the company’s augmented reality (AR) headset will go down in the annals of bad product launches because it was poorly designed, confusingly marketed, and badly timed.

The voice-controlled, head-mounted display was meant to introduce the benefits of AR—a technology that enhances human perception of the physical environment with computer-generated graphics—to the masses but failed. The product’s appeal turned out to be limited to a few New York and Silicon Valley early adopters. And even they soon ditched the glasses and returned to smartphones when buggy software and distrust of filming in public places became apparent. Faced with a damp squib, Google quietly withdrew the glasses.

However, the company shed few tears over its aborted foray into AR-for-the-masses because it gained valuable experience. Armed with that knowledge, Alphabet, Inc. (Google’s parent company) has brought Google Glass back, only this time geared toward factory workers, not the average consumer.

Some would argue the first-generation device never really went away. Pioneering enterprises—initially overlooked by Google Glass marketers—put the consumer product to work in new ways. For example, AR was used to animate instructions, which proved an effective way to deskill complex manual tasks and improve quality. Further, head-mounted AR raised productivity, reduced errors, and improved safety. A report from Bloomberg even noted some firms went to the extent of hiring third-party software developers to customize the product for specific tasks.

Enter “Glass Enterprise Edition”

Such encouragement resulted in Alphabet’s launch of “Glass Enterprise Edition,” targeted at the workplace and boasting several upgrades over the consumer product, including a better camera, faster microprocessor, improved Wi-Fi, and longer battery life…plus a red LED that illuminates to let others know they’re being recorded. But perhaps the key improvement is the packaging of the electronics into a small module that can be attached to safety glasses or prescription spectacles, making it easier for workers to use. While employees are much less concerned than consumers about the aesthetics of wearables, streamlining previously chunky AR devices improves comfort and hence usability.

According to a report in Wired, sales of Glass Enterprise Edition are still modest, and many large companies are taking the product on a trial basis only. But that hasn’t stopped Alphabet’s product managers from sounding bullish about the product’s workplace prospects. “This isn’t an experiment. Now we are in full-on production with our customers and with our partners,” project lead Jay Kothari told Wired.

Alphabet is not alone in realizing AR is a little underdeveloped for the consumer but practical for the worker. Redmond-based computer software, hardware, and services giant Microsoft has also entered the fray with HoloLens, a “mixed reality” holographic computer and head-mounted display (Figure 1). And Japan’s Sony is tapping into rising industrial interest with SmartEyeGlass.

Figure 1: Microsoft’s HoloLens is finding favor with automotive makers such as Volvo to help engineers visualize new concepts in three dimensions. (Photo Courtesy https://www.volvocars.com)

Focus on Specifics

With its first-generation Google Glass, Alphabet repeated a mistake all too common with tech companies: Aiming for volume sales by targeting the consumer market. While that was a strategy that worked well for smartphones, it hasn’t proven quite so successful for wearables.

The consumer was quick to realize the smartphone had many specific uses—like communication, connectivity, photography—while the smartwatch, for example, seemed to just duplicate many of those uses while bringing few useful features of its own. For instance, a smartwatch’s fitness tracking functionality has little long-term use; most people can tell if they are fit or not by taking the stairs instead of the elevator and seeing if they’re out of breath as a result.

But smartwatches will really take off when they offer specialized functionality, such as continuous glucose monitoring or fall alerts for seniors, or serve occupations like driving public service vehicles, where operators benefit from observing notifications without removing their hands from the wheel.

Similarly, early AR did little more for consumers than shift information presentation from the handset to a heads-up display—useful, but not earthshattering enough to justify shelling out thousands of dollars. In contrast, freeing up the workers’ hands by presenting instructions directly in their line of sight is a big deal for industries where efficiency gains equal greater profits.

Impacting the Bottom Line

Enterprise is excellent at spotting where a new technology like AR can address a specific challenge, especially if the result impacts the bottom line. Robots were added to car assembly lines because they automated tasks where human error led to safety issues; machine-to-machine wireless communications was embraced because it predicted the need for maintenance before machines ground to a halt. In both cases, the technology reduced costs by eliminating the need for skilled workers.

And so it appears to be with AR. German carmakers Volkswagen and BMW have experimented with the technology for communication between assembly-line teams. Similarly, aircraft manufacturer Boeing has equipped its technicians with AR headsets to speed navigation of planes’ wiring harnesses. And AccuVein, a portable AR device that projects an accurate image of the location of peripheral veins onto a patient’s skin, is in use in hospitals across the U.S., assisting healthcare professionals to improve venipuncture.

Elevator manufacturer ThyssenKrupp has taken things even further by equipping all its field engineers with Microsoft’s HoloLens so they can look at a piece of broken elevator equipment to see what repairs are needed. Better yet, IoT-connected elevators tell technicians in advance what tools are needed to make repairs, eliminating time-consuming and costly back and forth.

Too Soon to Call

It is too early in AR’s development to tell if this generation of the technology will be a runaway success. In the consumer sector, the signs aren’t great. Virtual Reality (VR), AR’s much-hyped bigger brother, is not exactly flying off the shelves; a recent report in The Economist noted, for example, that the 2016 U.S. sales forecast for Sony’s PlayStation VR headset was cut from 2.6 million to just 750,000 shipments.

And although VR’s immersive experience might have some applications in training and education, enterprise applications will do little to boost its chances of mainstream acceptance. In contrast, AR’s long-term prospects are dramatically boosted by industry’s embrace. And, in the same way that PCs, Wi-Fi, and smartphones built on clunky, expensive, and power-sapping first-generation technology went on to become the sophisticated products we use today, industry’s investment in the technology will ensure AR headsets will become more streamlined, powerful, and efficient—and ultimately much more appealing to the consumer.

AR’s interleaving of the virtual and real worlds to improve human performance will become a compelling draw for profit-making concerns and the public alike. It’s reality, only better.

For more on the future of AR see Mouser Electronics’ ebook Augmented Reality.


Steven Keeping is a contributing writer for Mouser Electronics and gained a BEng (Hons.) degree at Brighton University, U.K., before working in the electronics divisions of Eurotherm and BOC for seven years. He then joined Electronic Production magazine and subsequently spent 13 years in senior editorial and publishing roles on electronics manufacturing, test, and design titles including What’s New in Electronics and Australian Electronics Engineering for Trinity Mirror, CMP and RBI in the U.K. and Australia. In 2006, Steven became a freelance journalist specializing in electronics. He is based in Sydney.

Originally published by Mouser Electronics https://www.mouser.com/. Reprinted with permission.

Digital Signage Complexities Addressed

Tuesday, November 7th, 2017

An overview of the parts that make up a digital signage system

Digital signage has been an important topic across the IT, commercial audio-visual, and signage industries for several years now. The benefits of replacing a static sign with a dynamic digital display are clear. However, while it’s impossible to avoid digital signage as we go about our daily lives, I still find myself surprised by the number of people who want digital signage, but don’t understand what goes into a signage system. The attitude of the casual observer could be, “Oh, it’s a flat panel and some video files! We can do that!”

Figure 1: Complete digital signage installation in a restaurant setting. (Photo Courtesy of Premier Mounts and Embed Digital.)

Yet the reality is that digital signage comprises more components than many realize. Digital signage is a web of technologies, involving several different pieces, and potentially several different manufacturers. It’s not nearly as simple as it may seem at first glance, but the good news is we can organize all this complexity into a few categories of components to aid understanding.

Don’t Overlook Content
First up is the obvious category: displays. No! Really! Now, my friends in the display manufacturing world really hate this, but we will start instead with a component that most people don’t think of as one, and that, if not done right, will cripple any chance the system has of success. That component is content. I group it in with the physical hardware because it has a cost, must be planned for, and must be selected just as carefully as any piece of electronics. Content is the vehicle that delivers your message and enables you to achieve your objectives. Without it you don’t have much of a system, so plan for it, its cost, and its need to be continually refreshed. Whether done in house or outsourced, this is one component that must not be overlooked!

The Single Most Important Product
The Content Management System, or CMS, is the heart of any digital signage system. It’s the component that enables you to distribute and manage your content and to set up all the scheduling and dayparting you will use. That makes the CMS the single most important product you will select (not getting past that content thing yet!). Now, a lot will come down to your strategy: what are your objectives, and what will the signage be used for? Different software packages (and there are hundreds!) all offer generally the same group of features, but typically add features that let them focus on a specific vertical: for example, a CMS focused on interactive content, or one focused on integration with external data sources. There are also different business models involved here: some software is on-premises, meaning you purchase it and host it yourself; other packages are Software as a Service (SaaS), hosted in the cloud for a monthly subscription fee. Neither is inherently superior, and a lot will depend on your IT policies and finances.
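Dayparting simply means scheduling different content for different blocks of the day. Here is a toy sketch of the idea; the playlist names and time windows are invented for illustration:

```python
from datetime import datetime, time

# Toy dayparting table: (start, end, playlist). Names and windows are invented.
DAYPARTS = [
    (time(6, 0), time(11, 0), "breakfast_menu"),
    (time(11, 0), time(16, 0), "lunch_menu"),
    (time(16, 0), time(23, 0), "dinner_menu"),
]

def active_playlist(now: datetime) -> str:
    """Return the playlist whose window contains the current time of day."""
    for start, end, playlist in DAYPARTS:
        if start <= now.time() < end:
            return playlist
    return "overnight_loop"  # fallback outside every window

print(active_playlist(datetime.now()))
```

A real CMS layers day-of-week rules, priorities, and per-display targeting on top of this same basic lookup.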

Once you know your software, you can select a media player. Today, as we rapidly approach the end of 2017, you have a surprising number of choices, not just traditional PCs: Android, ChromeOS, custom SoC, display-embedded… this deserves a whole discussion unto itself. Keep in mind that your software vendor will often have guidelines and recommendations here, since this device is where the software lives. Personally, I’m getting to where I prefer non-Windows devices; they tend to be lower cost and easier to manage. But don’t take that as absolute… there are always great options of all types.

The Backbone
If the CMS software is the heart, then the network is the backbone. This is the connection that lets each media player communicate back to the central server, wherever it is. This is often the part of digital signage that will require the most technical skill, especially if the Internet is involved to connect multiple sites. You need to be comfortable connecting devices to the network, configuring them, managing ports and bandwidth, and dealing with firewalls. If that sounds complicated… yes, it can be!  Implementing this may take a “guess and check” mentality, as communication rarely works perfectly the very first time you power on. Sorry, plug and play is an illusion used as marketing, not reality!

Displays: Understanding Three Things
Selecting the type of display involves understanding three things: environment, hours of use, and audience position. Understanding the environment in which the display will live helps us choose how bright it needs to be and whether we need protection against dust or moisture. Knowing the hours it will be turned on helps us select the duty cycle required. Finally, knowing the audience position helps us select how large the display (and the image shown on it) needs to be. LCD flat panels are the most common and will be the go-to for general-purpose displays, but projectors are being used quite a bit as well (especially the models that don’t use traditional lamps!). Direct-view LED displays are much more affordable than they have been, so those are now a much more common choice. Each one has its own pros and cons.
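On the audience-position point, one widely quoted signage rule of thumb is roughly one inch of character height per ten feet of viewing distance. Treat it as a starting point rather than a law; the quick check below just expresses that assumption:

```python
# Rule-of-thumb sizing check: ~1 inch of text height per 10 feet of distance.
def min_text_height_inches(viewing_distance_ft: float) -> float:
    return viewing_distance_ft / 10.0

for distance_ft in (10, 25, 50):
    height = min_text_height_inches(distance_ft)
    print(f"viewer at {distance_ft} ft -> text at least {height:.1f} in tall")
```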

We’re not quite done, so bear with me a bit longer. We also have mounting for the display and media player. Before you laugh, this is a more complex choice than you might think. You need to understand the structure you are mounting onto, where the display will go, and whether you are dealing with an unusual environment. Sometimes we need protective enclosures, or a kiosk for an interactive display. All of this makes the mounting solution key, and not something to be selected as an afterthought. Also, always buy from a top-tier mount provider… saving money here can cost time and increase the risk of mount failure.

Now that we have covered these components, you should understand the general parts of a digital signage system. If this topic intrigues you, and you want to learn more, I will present “Understanding the Parts of a Digital Signage Network,” at Digital Signage Expo 2018 on Wednesday, March 28 at 9 a.m. at the Las Vegas Convention Center. For more information on this or any educational program offered at DSE 2018 or to learn more about digital signage go to www.dse2018.com .


Jonathan Brawn is a principal of Brawn Consulting, an audio-visual consulting, educational development, and marketing firm based in Vista, California, with national exposure and connections to the major manufacturers and integrators in the AV and IT industries. Prior to this, Brawn was Director of Technical Services for Visual Appliances, an Aliso Viejo, CA-based firm that holds the patent on ZeroBurn™ plasma display technology.

