

For Medical Device Design on the IoT, Get a Solid Head Start

Monday, January 29th, 2018

The value they bring to patient care and the efficiency they create for medical personnel have medical devices proliferating rapidly on the Internet. That growth brings heavy responsibilities for accuracy, reliability, and meeting certification standards. Because needs vary so widely among patients and the devices used to care for them, medical device developers should start with a powerful and rich set of tools: a platform that offers the team a solid starting point for implementing the power-saving, processing, and security features they need.

The development of intelligent medical devices has always been a challenge. Adding to the familiar pressures of cost, time to market, size, weight and power (SWaP) optimization, safety, security, and providing the proper feature mix, there is also the challenge of certification by agencies in multiple countries, with different levels of stringency depending on the risk a device poses. Still, hospitals and medical facilities have long relied on medical electronics in the clinic and operating room, and a great deal of experience has been gained in their development.

Medical, industrial, and consumer devices all increasingly connect via the Internet, making them part of a world that can seem like the Wild West. Medical devices that are part of the IoT can shorten hospital stays and enable elderly patients to live at home longer, driving down the costs of medical care. For more active patients, the IoT’s medical “things” can also be a huge advantage: they can directly sense and interpret vital bodily functions, signal alerts when needed, and connect via the cloud to physicians and specialists for regular monitoring and emergency attention. In athletics, they can help prevent serious injury by sensing possible concussions along with other signs that a player should receive immediate attention (Figure 2).

Figure 1:  Medical IoT devices must communicate securely over one of a variety of wireless network protocols, eventually connecting to the Internet using an IP stack and ultimately to a cloud service—in this case, Microsoft Azure. And they must maintain security along the whole path. Source: Microsoft, Inc.

Among the additional challenges for wearable medical devices are even more demanding size and low-power requirements, implementing the appropriate wireless communication technology, and, perhaps most important, security. The latter is needed both to comply with existing regulations such as HIPAA and to protect the devices against hacking. Hackers could exploit access to devices to find their way into larger systems to steal and exploit data. And as gruesome as it may sound, there is also good reason to protect devices such as pacemakers from attack. The advantage of having a pacemaker connected wirelessly is being able to make adjustments and software upgrades without surgery, but that also exposes it to possible malicious attack.

Start with a Solid Platform
Fortunately, developers do not have to start from scratch, and wearable medical devices, while addressing very specialized needs and functions, have many things in common. If developers can start from a point that already addresses those shared challenges, they can concentrate on adding innovation and value, getting to work early on the intricate, specific needs of a successful medical device project. That is the concept of a “platform”: a combination of basic software and hardware that lets developers get right to adding their specific value and innovation. Such a platform must offer hardware and software components that, while shared among projects, focus on the common needs of medical devices. Its aim is to provide selected components covering a broad set of medical device features without making developers search through a complex set of unnecessary components; they should be able to quickly find and evaluate those best suited to their development goals.

Among the options in such a platform should be a choice of wireless connectivity. If a device is to be worn by a patient in a hospital setting, the link should be to the hospital’s network, possibly via Wi-Fi. If the device is to be worn by a patient during normal daily activities, then a Bluetooth link to the patient’s smartphone might be more appropriate. For sporting events, such as a marathon that covers extended areas, a wider-ranging choice might be LoRa. While devices can connect directly to the Internet, the more usual approach is to connect to a gateway device using one of the wireless protocols. The gateway then sends the data over the Internet to cloud services using Internet protocols such as Transport Layer Security (TLS), which also offers methods of securing communications beyond the gateway.
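Whichever wireless hop is chosen, the leg from the gateway to the cloud typically rides on TLS. As a minimal sketch in Python (the device ID and message framing are invented for illustration), a gateway might build a strict client-side TLS context and serialize readings like this:

```python
import json
import ssl

def make_gateway_tls_context() -> ssl.SSLContext:
    """Client-side TLS context such as a gateway might use when
    forwarding sensor data to a cloud service."""
    ctx = ssl.create_default_context()            # loads the system CA bundle
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
    ctx.check_hostname = True                     # verify the server's identity
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx

def frame_reading(device_id: str, kind: str, value: float) -> bytes:
    """Serialize one sensor reading as a compact JSON line, ready to be
    written over the TLS socket."""
    record = {"device": device_id, "kind": kind, "value": value}
    return (json.dumps(record, separators=(",", ":")) + "\n").encode()
```

Connecting is then a matter of wrapping an ordinary socket with `ctx.wrap_socket(raw_sock, server_hostname=host)` and writing each framed reading with `sendall()`; the essential point is that certificate and hostname verification stay enabled along the whole path.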

The gateway or edge device can be a specialized design or a PC in the home running the needed applications and connected to the Internet. One important design consideration is how and where to make decisions that are necessary to the patient’s well-being. For example, a fall or a blow to the body should invoke an instant response alerting the proper remote service professionals to deal with it. In other cases, the gateway can simply forward data to a cloud application where it is analyzed. Anomalous or out-of-range results can then alert a physician who can determine what steps to take. Yet again, code on the device could provide the ability to recognize the need to dispense medication, possibly via a module worn on the body. Decisions such as these will influence the allocation of resources including memory, power consumption, and processing capability and where in the device/gateway/cloud chain they are implemented. So, in addition to a rich selection of specialized peripherals and their support, the developer must select a processor, an operating system, power management functions, location and motion sensing, body sensors, security support, and a choice of wireless communication hardware and protocols.
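The allocation decision can be made concrete with a small triage sketch for motion data. The thresholds and actions below are purely illustrative, not clinical values:

```python
from dataclasses import dataclass

# Hypothetical thresholds, in g; real values come from clinical validation.
FALL_G = 3.0     # above this, treat as a fall or blow: alert immediately
FORWARD_G = 1.5  # above this, unusual motion: forward for cloud analysis

@dataclass
class Decision:
    action: str   # "local_alert", "forward", or "discard"
    reason: str

def triage_motion_sample(magnitude_g: float) -> Decision:
    """Decide, on the gateway, where a motion sample should be handled.
    Time-critical events are acted on locally; ambiguous data goes to the
    cloud; normal readings are dropped to save power and bandwidth."""
    if magnitude_g >= FALL_G:
        return Decision("local_alert", "possible fall or blow")
    if magnitude_g >= FORWARD_G:
        return Decision("forward", "anomalous motion, analyze in cloud")
    return Decision("discard", "within normal range")
```

Where each branch runs, on the device, the gateway, or in the cloud, is exactly the resource-allocation decision described above.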


Figure 2: For a device like a concussion monitor, sensors, drivers, and a certain amount of processing, security and communication capability must be built in to concentrate on a very specific set of issues both to detect the concussion and to relay information as to its possible effects.

The issue of the human interface is also one that must be thoughtfully developed. There is little room for a rich human machine interface (HMI) on the device itself but important functions can be placed on a small display on a smart watch, for example. When—and only when—practical, a richer set can also often be implemented on a smartphone. The gateway device is often the major host for the HMI because it can be quickly accessed by the patient or remotely by the physician either directly over the Internet or from applications in the cloud. Of course, other control and analysis applications running in the cloud can utilize the device HMI as well as other application-based user interfaces.

Security is a Must
As mentioned above, devices must be able to negotiate secure connections that not only protect data but also guard against hacking and malicious access. One should expect security support in the form of security-critical software components such as secure email (secure SMTP) and secure web pages (HTTPS) for implementing layered security strategies, plus TLS as noted earlier. Secure management (SNMP v3) can be used to secure both authentication and transmission of data between the management station and the SNMP agent, along with an encrypted file system.

Given the different protocols used in connecting medical IoT devices from the patient over a wireless network to edge/gateway device and then via the Internet up to the cloud services, it is vital that security be assured end-to-end over the whole route. This must ensure that data and control codes can be authenticated as passing between a sender and receiver that have both verified their identities. It means that the messages must be undistorted and uncorrupted either by communication glitches or by malicious intent. And communications must remain secure and private, which also involves encryption and decryption.
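At some layer of the stack, those integrity and authentication requirements come down to a keyed message digest. A minimal sketch using Python's standard library; the 32-byte pre-shared key and framing are assumptions, since a real system would derive per-session keys, for example through TLS:

```python
import hashlib
import hmac
from typing import Optional

TAG_LEN = 32  # bytes in an HMAC-SHA256 digest

def tag_message(shared_key: bytes, payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the receiver can verify both that the
    payload is unmodified and that it came from a holder of the key."""
    return payload + hmac.new(shared_key, payload, hashlib.sha256).digest()

def verify_message(shared_key: bytes, framed: bytes) -> Optional[bytes]:
    """Return the payload if the tag checks out, else None.
    compare_digest avoids leaking information through timing."""
    payload, tag = framed[:-TAG_LEN], framed[-TAG_LEN:]
    expected = hmac.new(shared_key, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None
```

A message corrupted by a communication glitch or altered with malicious intent fails the same check, which is why integrity and authenticity are usually handled together.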

Encrypted messages passing through the gateway from the wireless protocol to the Internet will use a standard Internet protocol like TLS, which combines a securely stored private key with public-key cryptography to negotiate a unique encryption key for each session. For both message integrity and privacy, it is important that the content as well as the private key remain secure. TLS also forms the basis for HTTPS in layered security strategies. Additional protocols for graphics functionality along with camera and video support give the developer a rich selection of options for the entire range of possible medical applications.

Another thing that is often missed is the memory model used by the underlying operating system. Today, many RTOSs are modeled on Linux, which supports dynamic loading where the code can be loaded into RAM and then run under control and protection of the operating system. However, that protection can sometimes be breached, and so a dynamic loading memory model involves definite dangers when used in critical embedded systems like medical devices.

Another model executes code from Flash memory. In order for code to be loaded in so that it can execute, it must be placed in Flash. Loading malicious code is much more difficult than just putting it into RAM. Flash-based code is a single linked image so that when it executes, it just executes from that image. There is no swapping of code in and out from RAM. RAM is of course used for temporary storage of variables and stacks such as used for context switching, but all instructions execute from Flash.

Even if an attacker could breach the security used for program updates, they could not load a program into device memory to be executed even under control of the OS because the code must be executed from Flash. The only way to modify that Flash image is to upload an entirely new image, presumably one that includes the malware along with the regular application code. That means a hacker would need a copy of the entire Flash-based application in order to modify and upload it. Such a scenario is extremely unlikely.
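The update path can be sketched as verification of a tag over the entire image plus a version check, so an attacker needs the complete application to forge an update and cannot replay an older, vulnerable image. This sketch uses an HMAC to stay self-contained; production secure boot would typically use an asymmetric signature (e.g., ECDSA), and the header layout here is hypothetical:

```python
import hashlib
import hmac
import struct
from typing import Optional

HEADER = struct.Struct(">I32s")  # version number, HMAC-SHA256 tag over the body

def pack_image(key: bytes, version: int, body: bytes) -> bytes:
    """Build a signed update blob: header (version + tag) followed by the body."""
    tag = hmac.new(key, struct.pack(">I", version) + body, hashlib.sha256).digest()
    return HEADER.pack(version, tag) + body

def accept_image(key: bytes, current_version: int, blob: bytes) -> Optional[bytes]:
    """Return the body to write to Flash, or None. The tag covers the whole
    image, and the version check blocks replaying an older image."""
    version, tag = HEADER.unpack_from(blob)
    body = blob[HEADER.size:]
    expected = hmac.new(key, struct.pack(">I", version) + body, hashlib.sha256).digest()
    if version <= current_version or not hmac.compare_digest(tag, expected):
        return None
    return body
```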

Figure 3: RoweBots Limited supplies a selection of prototyping platforms consisting of an RTOS, communications components, and supported peripheral (including sensor) drivers. There is also a selection of processor boards such as the STM32F4 (left) and STM32F7 (right) boards from STMicroelectronics.

These days, the idea of a “development platform” is widespread and well accepted. Nobody wants to start from scratch, nor do they need to. Developers may already have a fairly clear idea of the class of processor and mix of peripherals they will need for a given project and will look for a stable and versatile platform: a choice of supported MCUs and hardware devices, an RTOS environment tailored for very small systems, and a host of supported drivers, software modules, and protocols. Finding a platform whose range of features and supported elements is close to a project’s goals can go a long way toward shortening time to market and verifying the proof of concept before making a potentially expensive commitment to a specific design. The key is to have those goals firmly in mind and then look for the platform that best meets them. Fortunately, many semiconductor manufacturers and RTOS vendors now collaborate to offer such platforms, some targeted to specific markets and application areas (Figure 3).

On the other end, it is also wise to design the project for cloud connectivity out of the box. Available services include Microsoft Azure IoT Hub, AWS IoT, and IBM Watson IoT, each letting developers build, deploy, and manage applications and services through a global network of managed data centers. Microsoft Azure, for example, provides a ready-made cloud environment and connectivity for IoT devices to communicate their data, be monitored, and receive commands from authorized personnel.

Kim Rowe has started several embedded systems and Internet companies including RoweBots Limited. He has over 30 years’ experience in embedded systems development and management. With an MBA and MEng, Rowe has published and presented more than 50 papers and articles on both technical and management aspects of embedded and connected systems.

The Digitization of Cooking

Wednesday, November 22nd, 2017

Smart, connected, programmable cooking appliances are coming to market that deliver consumer value in the form of convenience, quality, and consistency by making use of digital content about the food the appliances cook. Solid state RF energy is emerging as a leading form of programmable energy to enable these benefits.

Home Cooking Appliance Market
The cooking appliance market is a large (>270M units/yr.) and relatively slow growing (3-4% CAGR) segment of the home appliance market. For the purposes of this article, cooking appliances are aggregated into three broad categories:

  1. Ovens (such as ranges and built-ins), with an annual global shipment rate of 57M units[1]
  2. Microwave Ovens, with an annual global shipment rate of 74M units[2]
  3. Small Cooking Appliances, with an approximate annual global shipment rate of 138M units[3]

Figure 1:  Among the newer, non-traditional appliances coming online is the Miele Dialog oven, which employs RF energy and interfaces to a proprietary application via WiFi (Courtesy Miele).

Appliance analysts generally cite increasing disposable income and the steady rise in the global standard of living as primary factors contributing to cooking appliance market growth. These have the greatest impact in economically developing regions such as the BRIC countries. However, other factors are shaping cooking appliance features and capabilities and beginning to influence the type of appliance consumers purchase to serve their lifestyle interests. Broad environmental factors include connectivity and cloud services, which make access to information and valuable services possible from OEMs and third parties. Individual interest in improving health and wellbeing drives up-front food sourcing decisions and can also affect the selection and use of certain cooking appliances based on their ability to deliver healthy cooking results.

Food as Digital Content?
Yes, food is being ‘digitized’ in the form of online recipes, nutrition information, sources of origin, and freshness. Recipes as digital content have been available online almost since the widespread use of the internet as consumers and foodies flocked to readily available information on the web for everything from the latest nouveau cuisine to the everyday dinner. Over the past several years, new companies and services have been emerging to bring even more digital food content to the consumer and are now working to make this same information available directly to the cooking appliances themselves. Such companies break down the composition of foods and recipes into their discrete elements and offer information on calories, fat content, the amount of sodium, etc. as well as about the food being used in a recipe, the recipe itself, and the instructions to the cook—or to the appliance—on how best to cook the food.

In many ways, this is analogous to the transition of TV content moving from analog to digital broadcast, and TVs’ transition from tubes (analog) to solid state (LCD, OLED, etc.) formats. It’s not too much of a stretch to imagine how this will enable a number of potential new uses and services including, but not limited to, guided recipe prep and execution, personalization of recipes, inventory management and food waste reduction, and appliances with automated functionality to fully execute recipes.

It’s Getting Hot in Here
A common thread among all cooking appliances is that they provide at least one source of heat (energy) in order to perform their basic task. In almost every cooking appliance, that source of heat is a resistive element of some form.

A resistive element itself rises to temperature quickly, but it must then raise the ambient temperature of the cooking cavity to the target temperature used in a recipe. Only once the ambient temperature is raised can the food absorb energy from its environment and heat up. The time needed to heat the cavity volume to the recipe’s starting temperature adds to the overall cooking timeline and is largely wasted energy. Just as the resistive element takes time to raise the ambient temperature, it also takes a long time to lower it, and doing so relies on a person monitoring the cooking process. That makes the final cooking result a very subjective outcome. Resistive elements also degrade over time, becoming less efficient and producing lower overall temperature output. The increased cooking time for a given recipe and the amount of attention required to assure a reasonable outcome burden the user.

Solid state RF cooking solutions on the other hand are noted for their ability to instantly begin to heat food as a result of the ability of RF energy to penetrate materials and to propagate heat through the dipole effect[4]. Thus, no waiting for the ambient cavity to warm to a suitable temperature is needed before cooking commences, which can appreciably reduce cooking time. When implemented in a closed loop, digitally controlled circuit, RF energy can be precisely increased and decreased with immediate effect on the food, thus resulting in the ability to precisely control the final cooking outcome.
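That closed loop can be illustrated with a toy proportional controller. The gain, heating coefficient, and first-order food model below are invented for illustration; a real sub-system closes the loop around measured forward and reflected RF power:

```python
MAX_POWER_W = 1000.0  # total heating power cited for a four-module system

def next_power(target_c: float, measured_c: float,
               gain_w_per_c: float = 50.0) -> float:
    """Proportional control: drive power up when below target and cut it
    immediately at or above it, exploiting RF energy's instant effect."""
    error = target_c - measured_c
    return min(MAX_POWER_W, max(0.0, gain_w_per_c * error))

def simulate(target_c: float, start_c: float, steps: int) -> float:
    """Toy simulation: each step, applied power raises the food temperature
    by a hypothetical heating coefficient."""
    temp = start_c
    for _ in range(steps):
        power = next_power(target_c, temp)
        temp += 0.002 * power
    return temp
```

Because power is cut the instant the target is reached, the loop approaches the setpoint without overshoot, which is precisely the kind of controlled outcome a resistive element cannot deliver.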

Figure 2:  Maximum available power for heating effectiveness and speed along with high RF gain and efficiency are among the features of RF components serving the needs of cooking appliances.

In addition, solid state devices are inherently reliable, as there are no moving parts or components that tend to degrade in performance over time. Solid state RF power transistors such as those from NXP Semiconductor are built in silicon laterally diffused metal oxide semiconductor (LDMOS) and may demonstrate 20-year lifetime durability without reduction in performance or functionality (Figure 2). RF components can be designed specifically for the consumer and commercial cooking appliance market in order to deliver the optimum performance and functionality specific to the cooking appliance application. This includes maximum available power for heating effectiveness and speed, high RF gain and efficiency for high-efficiency systems, and RF ICs for compact and cost-effective PCB design.

The Digital Cooking Appliance
At the appliance level, a significant trend underway is the transition away from the conventional appliance that supports analog cooking methods—defined as using a set temperature, set time, and continuously checking the progress. These traditional appliances have remained largely unchanged in terms of their performance or functionality for decades, and OEMs producing these appliances suffer from continuous margin pressure owing in large part to their relative commodity nature. However, newer innovative appliances coming to market are utilizing digital cooking methods which make use of sensors to provide measurement and feedback, and programmable cooking recipes which are able to access deep pools of information such as recipes, prep methods, and food composition information, online and off, to drive intelligent algorithms that enable automation and differentiated cooking results. Miele recently announced its breakthrough Dialog Oven featuring the use of RF energy in addition to convection and radiant heat, and a WiFi connection for interfacing to Miele’s proprietary application (Figure 1).

Solid state RF cooking sub-system reference designs and architectures such as NXP’s MHT31250C provide programmable, real-time, closed-loop control of the energy (heat) created and distributed in the cooking appliance. Such sub-systems must provide the necessary signal-generation, RF amplification, RF measurement, and digital control functions, as well as a means to interface with the sub-system, for instance through an application programming interface (API). Emerging standards to facilitate broad adoption of solid state RF cooking solutions into appliances are being addressed through technical associations such as the RF Energy Alliance, which is working on a cross-industry basis to develop proposed standard architectures to support solid state RF cooking solutions.

With fully programmable control over power, frequency, and other operational parameters, a solid state RF cooking sub-system can operate across as many as four modules. It can deliver a total of 1000W of heating power, making it possible to differentiate levels of cooking precision as well as use multiple energy feeds to distribute the energy for more even cooking temperatures.

Solid state RF cooking sub-systems measure RF power continuously during the cooking process, which enables the appliance to adapt in real time to the actual cooking progress underway. Additional sensor or measurement inputs can further improve the appliance’s recipe execution. It is the combination of real-time control and real-time measurement that enables adaptive functionality in the appliance. This is important for accommodating changes in food composition, as well as enabling revisions, replacements, and additions to recipes delivered remotely from a cloud-based service provider or the OEM. With access to a growing pool of digital details about the food to be cooked, the appliance can determine the best range of parameters to execute for achieving the desired cooking outcome.

Dan Viza is the Director of Global Product Management for RF Heating Products at NXP Semiconductor. A veteran of the electronics and semiconductor industry with more than 20 years of experience leading strategy, development, and commercialization of new technologies in fields of robotics, molecular biology, sensors, automotive radar, and RF cooking, Viza holds four U.S. patents. He graduated with highest honors from Rochester Institute of Technology and holds an MBA from the University of Edinburgh in the UK.



[1] “Major Home Appliance Market Report 2017”

[2]  “Small Home Personal Care Appliance Report 2014”



Enterprise Drives Augmented Reality Demand

Monday, November 20th, 2017

Consumer products garner the headlines, but industry is likely to drive AR commercialization.

While many things that Google comes up with seemingly turn to gold, Google Glass wasn’t one of them. According to Forbes magazine, the company’s augmented reality (AR) headset will go down in the annals of bad product launches because it was poorly designed, confusingly marketed, and badly timed.

The voice-controlled, head-mounted display was meant to introduce the benefits of AR—a technology that enhances human perception of the physical environment with computer-generated graphics—to the masses but failed. The product’s appeal turned out to be limited to a few New York and Silicon Valley early adopters. And even they soon ditched the glasses and returned to smartphones when buggy software and distrust of filming in public places became apparent. Faced with a damp squib, Google quietly withdrew the glasses.

However, the company shed few tears over its aborted foray into AR-for-the-masses because it gained valuable experience. Armed with that knowledge, Alphabet, Inc. (Google’s parent company) has brought Google Glass back, only this time geared toward factory workers, not the average consumer.

Some would argue the first-generation device never really went away. Pioneering enterprises, initially overlooked by Google Glass marketers, put the consumer product to work in new ways. For example, AR was used to animate instructions, which proved an effective way to deskill complex manual tasks and to improve quality. Further, head-mounted AR raised productivity, reduced errors, and improved safety. A report from Bloomberg even noted that some firms went to the extent of hiring third-party software developers to customize the product for specific tasks.

Enter “Glass Enterprise Edition”

Such encouragement resulted in Alphabet’s launch of “Glass Enterprise Edition” targeted for the workplace and boasting several upgrades over the consumer product, including a better camera, faster microprocessor, improved Wi-Fi, and longer battery life…plus a red LED that illuminates to let others know they’re being recorded. But perhaps the key improvement is the packaging of the electronics into a small module that can be attached to safety glasses or prescription spectacles, making it easier for workers to use. While employees are much less concerned than consumers about the aesthetics of wearables, streamlining previously chunky AR devices improves comfort and hence usability.

According to a report in Wired, sales of Glass Enterprise Edition are still modest, and many large companies are taking the product on a trial basis only. But that hasn’t stopped Alphabet’s product managers from sounding bullish about the product’s workplace prospects. “This isn’t an experiment. Now we are in full-on production with our customers and with our partners,” project lead Jay Kothari told Wired.

Alphabet is not alone in realizing AR is a little underdeveloped for the consumer, but practical for the worker. Seattle-based computer-software, -hardware, and -services giant Microsoft has also entered the fray with HoloLens, a “mixed reality” holographic computer and head-mounted display (Figure 1). And Japan’s Sony is tapping into rising industrial interest with SmartEyeGlass.

Figure 1: Microsoft’s HoloLens is finding favor with automotive makers such as Volvo to help engineers visualize new concepts in three dimensions. (Photo Courtesy

Focus on Specifics

With its first-generation Google Glass, Alphabet repeated a mistake all too common with tech companies: Aiming for volume sales by targeting the consumer market. While that was a strategy that worked well for smartphones, it hasn’t proven quite so successful for wearables.

The consumer was quick to realize the smartphone had many specific uses—like communication, connectivity, photography—while the smartwatch, for example, seemed to just duplicate many of those uses while bringing few useful features of its own. For instance, a smartwatch’s fitness tracking functionality has little long-term use; most people can tell if they are fit or not by taking the stairs instead of the elevator and seeing if they’re out of breath as a result.

But where smartwatches will really take off is when they offer specialized functionality such as constant glucose monitoring, fall alerts for seniors, or in occupations like driving public service vehicles where operators benefit from observing notifications without removing their hands from the wheel.

Similarly, early AR did little more for consumers than shift information presentation from the handset to a heads-up display—useful, but not earthshattering enough to justify shelling out thousands of dollars. In contrast, freeing up the workers’ hands by presenting instructions directly in their line of sight is a big deal for industries where efficiency gains equal greater profits.

Impacting the Bottom Line

Enterprise is excellent at spotting where a new technology like AR can address a specific challenge, especially if the result impacts the bottom line. Robots were added to car assembly lines because they automated tasks where human error led to safety issues; machine-to-machine wireless communications was embraced because it predicted the need for maintenance before machines ground to a halt. In both cases, the new technology reduced costs by eliminating the need for skilled workers.

And so it appears to be with AR. German carmakers Volkswagen and BMW have experimented with the technology for communication between assembly-line teams. Similarly, aircraft manufacturer Boeing has equipped its technicians with AR headsets to speed navigation of planes’ wiring harnesses. And AccuVein, a portable AR device that projects an accurate image of the location of peripheral veins onto a patient’s skin, is in use in hospitals across the U.S., assisting healthcare professionals to improve venipuncture.

Elevator manufacturer ThyssenKrupp has taken things even further by equipping all its field engineers with Microsoft’s HoloLens so they can look at a piece of broken elevator equipment to see what repairs are needed. Better yet, IoT-connected elevators tell technicians in advance what tools are needed to make repairs, eliminating time-consuming and costly back and forth.

Too Soon to Call

It is too early in AR’s development to tell if this generation of the technology will be a runaway success. In the consumer sector, the signs aren’t great. Virtual Reality (VR), AR’s much-hyped bigger brother, is not exactly flying off the shelves; a recent report in The Economist noted, for example, that the 2016 U.S. sales forecast for Sony’s PlayStation VR headset was cut from 2.6 million to just 750,000 shipments.

And although VR’s immersive experience might have some applications in training and education, enterprise applications will do little to boost its chances of mainstream acceptance. In contrast, AR’s long-term prospects are dramatically boosted by industry’s embrace. And, in the same way that PCs, Wi-Fi, and smartphones built on clunky, expensive, and power-sapping first-generation technology went on to become the sophisticated products we use today, industry’s investment in the technology will ensure AR headsets will become more streamlined, powerful, and efficient—and ultimately much more appealing to the consumer.

AR’s interleaving of the virtual and real worlds to improve human performance will become a compelling draw for profit-making concerns and the public alike. It’s reality, only better.

For more on the future of AR see Mouser Electronics’ ebook Augmented Reality.

Steven Keeping is a contributing writer for Mouser Electronics and gained a BEng (Hons.) degree at Brighton University, U.K., before working in the electronics divisions of Eurotherm and BOC for seven years. He then joined Electronic Production magazine and subsequently spent 13 years in senior editorial and publishing roles on electronics manufacturing, test, and design titles including What’s New in Electronics and Australian Electronics Engineering for Trinity Mirror, CMP and RBI in the U.K. and Australia. In 2006, Steven became a freelance journalist specializing in electronics. He is based in Sydney.

Originally published by Mouser Electronics. Reprinted with permission.

Digital Signage Complexities Addressed

Tuesday, November 7th, 2017

An overview of the parts that make up a digital signage system

Digital signage has been an important topic across the IT, commercial audio-visual, and signage industries for several years now. The benefits of replacing a static sign with a dynamic digital display are clear. However, while it’s impossible to avoid digital signage as we go about our daily lives, I still find myself surprised by the number of people who want digital signage, but don’t understand what goes into a signage system. The attitude of the casual observer could be, “Oh, it’s a flat panel and some video files! We can do that!”

Figure 1: Complete digital signage installation in a restaurant setting. (Photo Courtesy of Premier Mounts and Embed Digital.)

Yet the reality is that digital signage comprises more components than many realize. Digital signage is a web of technologies, involving several different pieces, and potentially several different manufacturers. It’s not nearly as simple as it may seem at first glance, but the good news is we can organize all this complexity into a few categories of components to aid understanding.

Don’t Overlook Content
First up is the obvious category, displays, right? No! Really! Now, my friends in the display manufacturing world really hate this, but we will start with a component that most people don’t think of as one, and that, if not done right, will cripple any chance the system has of success. That component is content. I group it in with all the physical hardware because it has a cost, must be planned for, and must be selected just as carefully as any piece of electronics. Content is the vehicle that delivers your message and enables you to achieve your objectives. Without it you don’t have much of a system, so plan for it, its cost, and its need to be continually refreshed. Whether produced in-house or outsourced, this is one component that must not be overlooked!

The Single Most Important Product
The Content Management System, or CMS, is the heart of any digital signage system. It’s the component that enables you to distribute and manage your content and to set up all the scheduling and dayparting you will use. That makes it the single most important product (not getting past that content thing yet!) that you will select. Now, a lot will come down to your strategy: what are your objectives, and what will the signage be used for? The hundreds of software packages on the market all offer generally the same core features, but each typically adds features aimed at a specific vertical, for example a CMS focused on interactive content, or one focused on integration with external data sources. There are also different business models involved: some software is on-premises, meaning you purchase it and host it yourself; other packages are Software as a Service (SaaS), hosted in the cloud for a monthly subscription fee. Neither is inherently superior, and a lot will depend on your IT policies and finances.

Once you know your software, you can select a media player. Today, as we rapidly approach the end of 2017, you have a surprising number of choices, not just traditional PCs: Android, ChromeOS, custom SoC, display-embedded… this deserves a whole discussion unto itself. Keep in mind that your software vendor will often have guidelines and recommendations here, since this device is where the software lives. Personally, I’m getting to where I prefer non-Windows devices; they tend to be lower cost and easier to manage. But don’t take that as absolute… there are always great options in all types.

The Backbone
If the CMS software is the heart, then the network is the backbone. This is the connection that lets each media player communicate back to the central server, wherever it is. This is often the part of digital signage that requires the most technical skill, especially if the Internet is involved to connect multiple sites. You need to be comfortable connecting devices to the network, configuring them, managing ports and bandwidth, and dealing with firewalls. If that sounds complicated… yes, it can be! Implementing this may take a “guess and check” mentality, as communication rarely works perfectly the very first time you power on. Sorry, plug and play is a marketing illusion, not reality!
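As a rough illustration of that “guess and check” work, a small script can confirm that a media player can actually open TCP connections to the CMS server before you start blaming the signage software. The hostname and port numbers below are placeholders, not real values; substitute whatever your CMS vendor documents.

```python
import socket

# Hypothetical CMS host and ports for illustration -- substitute your
# deployment's actual hostname and the ports your CMS vendor documents.
CMS_HOST = "cms.example.com"
REQUIRED_PORTS = [443, 8080]  # e.g., HTTPS content delivery and a management channel

def check_cms_reachability(host, ports, timeout=5.0):
    """Try a TCP connection to each port; return {port: True/False}."""
    results = {}
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results[port] = True
        except OSError:  # covers DNS failure, connection refusal, and timeout
            results[port] = False
    return results

# Example usage (run from the media player itself):
#   results = check_cms_reachability(CMS_HOST, REQUIRED_PORTS)
# Any False entry usually points to a firewall rule or port that still
# needs to be opened between the player and the server.
```

Running this from each player site quickly separates network problems from software problems, which is most of the battle.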

Displays: Understanding Three Things
Selecting the type of display involves understanding three things: environment, hours of use, and audience position. Understanding the environment in which the display will live helps us choose how bright it needs to be, and whether we need protection against dust or moisture. Knowing the hours it will be turned on helps us select the duty cycle required. Finally, knowing the audience position helps us determine how large the display (and the image shown on it) will need to be. LCD flat panels are the most common, and will be the go-to for general-purpose displays, but projectors are being used quite a bit as well (especially the models that don’t use traditional lamps!). Direct-view LED displays are much more affordable than they have been, so those are now a much more common choice. Each one has its own pros and cons.

We’re not quite done, so bear with me a bit longer. We also have mounting for the display and media player. Before you laugh at me, this is a more complex choice than you might think. You need to understand the structure you are mounting onto, where the display will go, and whether you are dealing with an unusual environment. Sometimes we need protective enclosures, or a kiosk for an interactive display. All of this makes the mounting solution key, and not something to be selected as an afterthought. Also, always buy from a top-tier mount provider… saving money here can cost time and increase the risk of mount failure.

Now that we have covered these components, you should understand the general parts of a digital signage system. If this topic intrigues you, and you want to learn more, I will present “Understanding the Parts of a Digital Signage Network,” at Digital Signage Expo 2018 on Wednesday, March 28 at 9 a.m. at the Las Vegas Convention Center. For more information on this or any educational program offered at DSE 2018 or to learn more about digital signage go to .

Jonathan Brawn is a principal of Brawn Consulting, an audio-visual consulting, educational development, and marketing firm, based in Vista, California with national exposure and connections to the major manufacturers and integrators in the AV and IT industries. Prior to this, Brawn was Director of Technical Services for Visual Appliances, an Aliso Viejo CA based firm that holds the patent on ZeroBurn™ plasma display technology.



Zigbee® 3.0 Green Power: Saving the Day for IoT Connectivity

Tuesday, November 7th, 2017

With the proliferation of apps and devices, a standard is needed to ensure simplicity on the user end, reliability of connections, and interoperability of products from a variety of manufacturers.


With its 2014 release, the Zigbee® Alliance announced the unification of its wireless standards into a single standard named Zigbee 3.0. This standard seeks to provide interoperability among the widest range of smart devices and give consumers and businesses access to innovative products and services that will work together seamlessly to enhance everyday life.

Zigbee 3.0 also includes Zigbee Green Power. Zigbee Green Power was originally developed as an ultra-low-power wireless standard to support energy-harvesting devices (i.e., devices without batteries that are able to extract their energy requirements from the environment). Green Power is especially effective for devices that are only sometimes on the network (when they have power). The Green Power standard enables these devices to join and leave the network in a secure manner, so they can be off most of the time.

As an ultra-low-power wireless technology, Green Power is also a very effective option for battery-powered devices, as it enables them to run off a battery for years. Green Power also allows low-cost end nodes to communicate with the rest of the network, specifically in situations where no meshing is required.

What Is Meshing?
Meshing has long been an intriguing concept in networking technology, so let’s take a quick look using a familiar Wi-Fi meshing scenario. The basic home Wi-Fi setup today is a cable or DSL router that wirelessly connects with our tablets and smartphones. If it doesn’t work so well, we install a repeater as an intermediate. These days there are sets of router boxes available that are preconfigured to wirelessly work together to cover our sprawling mansions or tidy cottages, as the case may be, including that room behind the garage where we escape to play video games. Reliable, speedy coverage.

So how can meshing help? The concept is a simple one. If I am in one corner of the house with my smartphone, and a laptop is closer to me than the router, then I just hop (mesh) via the laptop—as long as it is powered on. Meshing is generally self-configuring: everyone on the network helps everyone else reach the router and, via the router, the internet. And if that intermediate laptop is turned off, then my smartphone finds another device. Meshing can also be self-healing: it’s no problem if one link to the router breaks down, as the network will (hopefully) find another one.
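The hop-and-reroute behavior described above can be sketched as a breadth-first search over whichever radio links are currently alive. This is a toy illustration of the concept, not the actual Wi-Fi or Zigbee routing protocol; the node names and topology are invented for the example.

```python
from collections import deque

def shortest_route(links, source, router):
    """Breadth-first search for the fewest-hop path from source to router.
    `links` maps each node to the set of nodes currently in radio range."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == router:
            return path
        for neighbor in links.get(node, ()):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no route -- the network is partitioned

# Toy topology: the phone cannot hear the router directly.
links = {
    "phone":  {"laptop", "tv"},
    "laptop": {"phone", "router"},
    "tv":     {"phone", "router"},
    "router": {"laptop", "tv"},
}
print(shortest_route(links, "phone", "router"))  # two hops, via the laptop or the TV

# Self-healing: the laptop is switched off, so the mesh re-routes via the TV.
links.pop("laptop")
links["phone"].discard("laptop")
links["router"].discard("laptop")
print(shortest_route(links, "phone", "router"))  # ['phone', 'tv', 'router']
```

The re-route after the laptop disappears is exactly the self-healing property; the cost, as the article goes on to explain, is that every node must be willing and equipped to relay for everyone else.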

So, what’s the downside? This is one of those situations where well-intentioned, pioneering technology clashes with the day-to-day grumpy consumer, who wants reliable connectivity all the time without being bothered with much else.

Unfortunately, there are three general problems with meshing—and this is true whether we’re looking at Wi-Fi or Zigbee (or Thread). The first is intermittent failures, the second is related to battery life, and the third is cost.

Meshing Has Issues; Zigbee Green Power Has Answers
Intermittent failures are usually recognized as a nasty network problem—sometimes it works, sometimes it doesn’t. Nobody knows why, exactly, so nobody can really fix it. In the meshed Wi-Fi example above, maybe your smartphone network connection depends on whether your son leaves his game station on or off—and good luck figuring out that one.

And then there’s battery life. In a meshed Wi-Fi network, your laptop might become an intermediate node. Suddenly, your laptop is running low on battery, because someone else in the family was (perhaps unknowingly) using your laptop as a hopping point (“meshing node”) for watching YouTube videos on his tablet. The meshing network has actually created a battery issue for a device that wouldn’t have had a problem otherwise.

And finally, there’s a cost issue. Every node in the network is not only an edge node (“a node doing something”), but it also needs to be able to function as a meshing node—and it needs to be equipped to do so. In practice, this means running more software on a larger processor with more memory. And meshing nodes may also have to be “on” constantly, which requires a larger and more expensive battery, while an edge node only has to be turned on when triggered.

But here is where Zigbee 3.0 with Green Power saves the day. Zigbee enables meshing, but does not require it. Edge nodes can easily be Green Power networked, and if the area to be covered exceeds the range of a single router, multiple routers can work together to establish a backbone network that all the edge nodes can connect to, without themselves carrying the overhead of being meshing nodes.

Is Meshing an Outdated Concept?
In the context of a home, meshing is a band-aid for a poor radio that lacks range. Meshing would not be necessary with a sufficiently powerful radio, running on a coin-cell battery, that lets you reach the router—even if the router is not located at the optimal center of the home. Such radios may not have existed 10 years ago, but they exist today. This makes meshing a fringe solution for exceptional radio-coverage problems. And these days, multiple Wi-Fi radio-frequency channels are implemented more often than meshing. Zigbee can do that, too.

The self-healing “benefit” of meshing is also a bit of a relic from the days when networking equipment was not as reliable as it is today. Single points of failure were red flags, and mesh networks with multiple paths were seen as a great plus—enabling rerouting via alternative connections at the moment of breakdown. But with today’s reliable networking equipment, the need for avoiding single points of failure is more or less gone.

Still, there are many situations where meshing is a good and practical solution, e.g., where coverage is limited or where infrastructure is lacking.

How Does Green Power Work?
As a standard feature of Zigbee 3.0, Green Power’s simple networking protocol essentially brings all the complex networking features to a proxy (usually the router), while the Green Power node focuses on making sure that the essential signal—whether a temperature measurement, a command to turn on a light, or a report of whether a door or window is open or closed—reliably reaches a router for further consumption. As mentioned, Zigbee Green Power features ultra-long battery life, and in the case of energy-generating light switches, there is no need for batteries at all. Zigbee Green Power is fully integrated with Zigbee 3.0 and fully compatible with all the services that Zigbee 3.0 delivers, from installation to security, and from ease of use to maintenance.

Green Power Works in Simple and Complex Scenarios
At the simplest end, it enables low-cost implementations of standard, standalone solutions. This covers most of the Zigbee applications in the market today. For example, if you have a few lamps, a dimmer-switch and a gateway, then to connect the lamps to the internet, you only need Green Power. As simple as that—no meshing capability required.

For more extensive solutions, Green Power enables the building of a simple Zigbee star network in your home. Or there can be multiple stars dropping from a single backbone in the case of a larger building installation. Either way, Green Power eliminates the disadvantages of meshing—intermittent connections that are difficult to diagnose, and unexpected situations where sensor nodes suddenly and quickly run out of battery power.

Green Power also allows for a Zigbee infrastructure that is fully aligned with the Wi-Fi infrastructure in a building. It is useful to note that a Zigbee radio has a comparable or better range than a Wi-Fi radio. Zigbee (IEEE 802.15.4) is essentially low-power Wi-Fi (IEEE 802.11).

The practical fact is that both Zigbee 3.0 (with Green Power) and Wi-Fi integrate in a single router box, simplifying and cost-reducing the overall networking infrastructure for consumers or enterprise customers, and paving the way for the real Smart Home as part of the Internet of Things. But only Zigbee 3.0 networks use meshing when it really adds value—the real power of Green Power.

Cees Links is GM of Qorvo’s Wireless Connectivity Business Unit. Links was the founder and CEO of GreenPeak Technologies, which is now part of Qorvo. Under his responsibility, the first wireless LANs were developed, ultimately becoming household technology integrated into PCs and notebooks. He also pioneered the development of access points, home networking routers, and hotspot base stations. He was involved in the establishment of the IEEE 802.11 standardization committee and the Wi-Fi Alliance. He was also instrumental in establishing the IEEE 802.15 standardization committee to become the basis for the ZigBee® sense and control networking. He was recently recognized as Wi-Fi pioneer with the Golden Mousetrap Lifetime Achievement award. For more information, please visit .



The Internet of Things… Are We There Yet?

Wednesday, August 9th, 2017

A common driver exists for the IoT, centered on knowledge and decisions.

There is a lot of chatter about the IoT these days, with tech companies, journalists, investors and consumers all trying to figure out what it is, what it will affect, and how to make money from it.

But what exactly is the IoT? What is its core value? And are we there yet? Considering that the IoT may have as much (or more) impact on society as computers and the Internet have had, these are important questions.


What exactly is the IoT?

The name may be somewhat misleading, but probably the best way to describe the Internet of Things is as an application or as a service that uses information collected from sensors (the “things”), analyzes the data, and then does something with it (e.g., via actuators – more “things”).

The service, for instance, could be an electronic lifestyle coach, collecting data via a wristband, analyzing this data (trends) and coaching the wearer to live a healthier life. Or it can be an electronic security guard that analyzes data from motion sensors or cameras, and creates alerts. “Internet of Services” might be a more accurate description of the IoT value.

But whatever its best name may be, the IoT is typically a set of “things” connected via the cloud (Internet) to a server that stores and analyzes data (trends, alerts, etc.) and then communicates with a user via an application running on a computer, tablet, or smartphone. So it’s not the things connected to the Internet that create value. Rather, value comes from collecting, transmitting, and interpreting the data those things produce, and taking action based on that analysis, not from the things themselves.

Why all the IoT hype now?

A cynic might attribute this to technology companies needing “something new” when the first signs emerged of a saturating smartphone market. But the reality is that a few fundamental things changed, creating momentum for new emerging applications that found a home under the umbrella of IoT—from Fitbits to thermostats, smart street lights to smart parking.

The first fundamental change was that the Internet became nearly ubiquitous. Initially connecting computers, the Internet now connects homes and buildings. And with the advent of wireless technology (Wi-Fi, LTE), access to the Internet changed from a technology into a commodity.

The second fundamental change was essentially Moore’s Law rolling along, with smaller, more powerful and lower cost devices being developed to collect data.

And finally, low-power communication technologies were developed that extended the battery life for these devices from days into years, connecting them permanently and maintenance-free to the Internet.

What is the real value of the IoT?

We live in a wonderfully interesting time, when amazing things happen. Consider that in the year 1820, 90% of the population lived in abject poverty. Today, some 200 years later, that percentage has shrunk to under 10%, even though the population itself has multiplied several times. It is the miracle of the industrial revolution and many other things coming together. After World War II and the invention of the transistor, the industrial revolution seamlessly folded into the technology revolution, and we went from computers to smartphones, and from the internet to the IoT.

The common driver? It’s all about “making better decisions faster.” The industrial revolution was based on innovation and creativity, individual freedom and organization. Consider that the Hoover Dam, one of the wonders of the twentieth century, was designed with slide rules, paper, and pencils. Three decades later, we managed to get men on the moon using computers that had a fraction of the power of our smartphones.

The motivator of “making better decisions faster” drove computers into existence. Does anybody remember how to do bookkeeping without a computer? Or run a manufacturing plant? Making better decisions faster drove the Internet into existence. When was the last time you wrote a letter instead of an email? What was the last edition of the Encyclopedia Britannica before Wikipedia’s real-time updates rendered it obsolete?

Making better decisions faster is driving the IoT into existence, too, and will make it the pivotal technology of the current decade. It will make our personal lives more comfortable, safer, and more secure. We will waste less energy. The IoT will improve the quality of our products. Factories will be more efficient with raw materials and other resources. We will be able to better monitor our environment, and our impact on it. The IoT is not a break from the past; it is a natural progression in making better decisions faster, and a continuing engine for our economic growth and wealth creation, driving out poverty altogether.

Are there downsides to the IoT?

The industrial revolution and the subsequent invention of assembly line production certainly resulted in groups of winners and losers, and there was major upheaval and social unrest as we came to grips with all the changes. The technology revolution also contributed to upheaval and unrest. People were replaced by computers and lost their jobs.

To date, despite considerable pessimism about the loss of jobs to automation, overall employment has not appeared to decrease. Clearly, change has been very painful for those impacted. But overall, where jobs were lost, other jobs were created. And by economic law, jobs with low value-add disappeared and were replaced with jobs with high value-add—the “cleaning mechanism” through which economic growth and wealth creation were effected.

The IoT will follow the same pattern. It will redefine jobs and skills. It may even create unrest. There will be winners and losers. There will be people who will see opportunities. And there will be people who will fall victim because “better and faster” is not what they can absorb. In this sense, the IoT will be just the next example of the tradition of the industrial revolution – that more prosperity comes at a price.

The IoT’s network of connected devices will absorb many of the repetitive, drudge work tasks of today. And in much the same way as the post-industrial revolution period, while machines are doing the grunt work, humans will have more time to spend on solving bigger problems. Will it enable the next level of creative culture? A new generation of space explorers? A new enlightenment, perhaps?

So are we there yet?

As with many technologies, after a few years of high expectations, the IoT is slowly entering the Trough of Disillusionment phase of the “hype cycle,” that quiet phase where sobering reality starts kicking in. Usually this is also the period where the fads and the wild ideas separate from the strong and more realistic groundswell of useful applications. The good news is that, when we compare this to other technologies, we seem to have short memories of the “not quite right yet” years, when early adopters worked to help the technology through to success. The same will happen with the IoT.

The IoT is suffering today from a lack of understanding of its true value proposition. At the same time, a plethora of proprietary and open communication standards inhibits interconnectivity, creating confusion among consumers and product builders alike, keeping product prices high, and delaying market growth. On top of all that, large companies seem determined to seek the holy grail by promoting their own ecosystems.

Even if we are currently in the Trough of Disillusionment, we should not be distracted. We still have a lot to learn (maybe less about technology and more about business models for maximizing the value-add), but we are in the middle of shaping a better world for the next generation. A world with less poverty and, hopefully, fewer wars. Maybe a new Golden Age, an Enlightened world? We have a long way to go, but we will see—because we can!

Cees Links was the founder and CEO of GreenPeak Technologies, which is now part of Qorvo. Under his responsibility, the first wireless LANs were developed, ultimately becoming household technology integrated into PCs and notebooks. He also pioneered the development of access points, home networking routers, and hotspot base stations. He was involved in the establishment of the IEEE 802.11 standardization committee and the Wi-Fi Alliance. He was also instrumental in establishing the IEEE 802.15 standardization committee to become the basis for the ZigBee® sense and control networking. Since GreenPeak was acquired by Qorvo, Cees has become the General Manager of the Wireless Connectivity Business Unit in Qorvo. He was recently recognized as Wi-Fi pioneer with the Golden Mousetrap Lifetime Achievement award. For more information, please visit .

The Long-Term Evolution of the Internet of Things

Monday, June 19th, 2017

As LTE (Long Term Evolution) technology evolves to address the needs of a more connected world, the cellular landscape surrounding it grows increasingly complex.

Different countries and carriers have adopted different technologies and bands in their efforts to bring LTE to the IoT, resulting in significant market fragmentation. These rifts and complexities will require product creators to take strategic approaches to future-proofing their IoT devices. These strategies include the use of eSIMs, pin-compatible multi-band modules, and MVNO/IoT platforms.

Although the Internet of Things has received a lot of news coverage as of late, the practice of adding connectivity to machines is nothing new. Network operators have been offering GSM-based connectivity modules for many years, and cellular connectivity has a proven track record of success across a wide range of applications. What makes the Internet of Things revolutionary is not the idea of device connectivity itself, but its widespread application. In the very same way, LTE stands to revolutionize IoT not in essence, but in scale. LTE’s low cost, low bandwidth, and low power consumption will allow for IoT applications which were either not possible, or not economically viable, in the past. And with the number of internet-connected devices projected to reach over 50 billion by 2020[1], LTE is likely to become a definitive technology in the development of the Internet of Things.

But for product creators currently deploying solutions based on legacy networks, the jump to LTE involves both opportunity and risk. In the race to implement (and define) the future of LTE, countries, carriers, and hardware manufacturers have embraced a variety of different cellular technologies and bands. As a result, the LTE landscape has grown increasingly fragmented and complex.

Despite these challenges, the transition to LTE appears all but inevitable. Therefore, the onus has fallen largely upon product creators to take flexible, strategic approaches to future-proof their IoT devices.

The Future of Cellular
Sending data wirelessly is fundamental to IoT applications. It has been ever since the very first 2G cellular modules were added to assets to help reduce servicing costs. The data received could include things like stock levels (in vending machines), evidence of tampering (in payment kiosks), or hours of use (in manufacturing equipment). Data could also include simple functions such as taking an asset in or out of service, alerting an operator that a fault had been reported, or delivering over-the-air updates.

As the roll-out of LTE networks continues, a much wider range of applications will become possible. Applications with more varied data demands, such as video, audio, and control telemetry, will drive demand for more connections that are reliable, flexible, and power efficient.

Today, many IoT devices are deployed in remote locations and depend on battery power to operate. These distributed applications require much more power-efficient chipsets and modules but have relatively modest bandwidth requirements. Here, too, LTE holds the key to connectivity.

Currently, LTE is being positioned to support IoT applications by dedicating ‘resource blocks’ to low-bandwidth IoT traffic, through what are referred to as “Categories.” Most notably, IoT will employ LTE Category M1 for Machine-Type Communications (known as LTE-MTC, LTE-M or LTE Cat M1), as well as NB-LTE-M and NB-IoT (NB stands for Narrowband).

The categories targeting IoT traffic require significantly less complex chipsets, which means two things:

  • Lower operating power—enabling ultra-low power nodes such as smart sensors and actuators
  • Lower cost—allowing for a wider and more diverse range of applications

Part of the way this is achieved is by using lower bandwidths (relative to 3G), which makes these categories even better suited to IoT applications in which data exchange is limited.

Fragmentation in the Cellular Landscape
Product creators first getting started with cellular connectivity face several formidable obstacles. The cellular landscape, which was historically simplified by roaming agreements and fixed sets of supported bands, is set to become much more complicated with LTE. Product creators must contend with the planned (but not always publicly revealed) retirement of existing 2G/3G networks, as well as the international fragmentation of LTE standards and bands.

Many, but not all, carriers have already announced plans to retire their 2G networks within a decade. And as more investment capital is directed toward LTE, the future of 3G networks has also come into question.

At the same time, operators are facing challenges of their own in the area of customer retention. Supporting customers moving to LTE will involve understanding complex, intertwined relationships among chipset vendors, module makers, and carriers.

The cellular landscape has always been subject to (and complicated by) regional variations. Unfortunately, that is unlikely to change with LTE. In the past, product creators could rely on roaming agreements for international compatibility. With the advent of LTE, however, fragmentation in the bands used by various carriers will render these types of roaming agreements impossible. Some see this as a sign that product creators may need to start serving as their own Mobile Virtual Network Operators (MVNOs) and establish direct agreements with carriers. Far from fueling innovation, these issues may prove to be discouraging, or even debilitating, barriers to market entry. Each carrier agreement could take many months to negotiate and be subject to its own terms of service, pricing, and data volumes. Reselling this as a service will present further challenges for manufacturers acting as MVNOs.

Some enterprising companies are addressing this emerging problem, by creating IoT platforms that take the burden of connectivity off product creators. These platforms aim to combine the advantages of MVNOs with the resourcefulness of cloud-based service providers.

International Product Complexity
In the field of cellular communications, there’s never just one universal standard. This has caused product creators plenty of headaches over the years, but in the case of LTE for IoT, the lack of a gold standard may be justified. That’s because neither LTE-M nor NB-IoT can be supported seamlessly. Migrating to the former will require a software upgrade to existing cellular towers, while the latter will require both a software upgrade and new radio hardware.

These realities will only serve to further divide carriers along geographic and technological borders. In the U.S., Canada, and Mexico, for example, LTE-M1 has become the norm. Meanwhile, in Europe, China, and Southeast Asia, NB-IoT is favored (Figure 1). For product creators wanting to sell in multiple markets, these inconsistencies will require producing either two product ranges, or a single product range that can support both categories. Either way, manufacturers will incur additional costs.

Figure 1: Targeting markets worldwide requires a product range which can serve carriers who have opted for LTE-M1 as well as those whose choice is NB-IoT.

The Advent of the eSIM
Another innovation expected to impact MVNOs is the embedded SIM (eSIM). An eSIM allows hardware to be carrier-neutral, enabling the carrier profile to be switched in the field or provisioned after shipping. Some in the industry believe the eSIM will help build cooperation among MVNOs and thereby improve supply-chain efficiencies.

The eSIM is expected to be integral in the global roll-out of LTE-IoT devices, as it will allow products to be shipped with ‘blank’ SIMs that can then be activated in the country of destination. This will make it easier for product creators to develop and offer new products in different market segments.

The Reality of LTE
The types of applications currently being targeted by LTE-M and NB-IoT developers are sensors and actuators: devices that will require low data bandwidth and, potentially, low duty cycles. This has led to the widespread misconception that smart sensors and actuators will be able to operate for many years on a single battery cell. In reality, the longevity of remotely deployed smart sensors and actuators will depend upon the requirements of the specific use case (Figures 2 and 3).


Figures 2 and 3: Modules with footprint/pin-out and software compatibility make a straightforward approach to addressing diverse markets possible.


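The claim that longevity depends on the specific use case can be made concrete with a back-of-the-envelope duty-cycle model. All figures below (cell capacity, sleep and transmit currents, daily radio time) are illustrative assumptions, not values from any datasheet:

```python
# Back-of-the-envelope battery-life model for an LTE-M/NB-IoT node.
# All figures (cell capacity, currents, daily radio time) are
# illustrative assumptions, not values from any datasheet.

def battery_life_years(capacity_mah, sleep_ua, active_ma, active_s_per_day):
    """Convert a sleep/transmit duty cycle into years on one cell."""
    seconds_per_day = 86400.0
    sleep_s = seconds_per_day - active_s_per_day
    avg_ma = (sleep_ua / 1000.0 * sleep_s
              + active_ma * active_s_per_day) / seconds_per_day
    return capacity_mah / avg_ma / 24.0 / 365.0

# 2400 mAh cell, 5 uA sleeping, 100 mA while the radio is active:
for radio_s in (2, 60, 600):            # seconds of radio time per day
    years = battery_life_years(2400, 5, 100, radio_s)
    print(f"{radio_s:4d} s/day of radio -> {years:5.1f} years")
```

Under these assumptions, a couple of seconds of radio time a day supports a decade-plus lifetime, while ten minutes a day drains the same cell in months: the use case, not the technology, sets the longevity.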
Managing Risk and Reward
Chipset and module manufacturers are now working closely with platform providers to harness the potential of LTE for IoT. Despite the complexities associated with LTE, forward-thinking product design and stack selection can mitigate the risks associated with product development and increase ROI. To future-proof IoT products, creators must consider hardware solutions, eSIM, and MVNO/data platforms.

Understanding the limitations of hardware is critical. For example, a dual-mode modem may seem like the best option for global coverage in a single product range, but it will likely add a great deal of unnecessary cost. A better approach would be to adopt a module that offers footprint/pin-out and software compatibility for all variants. That way, a single product design can be used across a variety of markets by simply swapping modules.
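A sketch of the "one design, swappable module" idea: because every module variant shares a footprint/pin-out and software interface, targeting a market becomes a build-time lookup. All part numbers and market-to-technology mappings below are invented for illustration:

```python
# Sketch of the "one design, swappable module" approach: every module
# variant shares a footprint/pin-out and software interface, so selecting
# a target market is a simple lookup. Part numbers and mappings invented.

MODULE_VARIANTS = {
    "LTE-M1": "XYZ-M1-010",   # hypothetical LTE-M1 module
    "NB-IoT": "XYZ-N1-010",   # hypothetical NB-IoT module, same footprint
}

MARKET_TECH = {
    "US": "LTE-M1", "CA": "LTE-M1", "MX": "LTE-M1",
    "DE": "NB-IoT", "CN": "NB-IoT", "SG": "NB-IoT",
}

def module_for_market(country_code):
    """Return the drop-in module variant for a target market."""
    return MODULE_VARIANTS[MARKET_TECH[country_code]]

print(module_for_market("US"))  # the LTE-M1 variant
print(module_for_market("DE"))  # the NB-IoT variant
```

The rest of the board design and firmware stay constant; only the module part number changes per region.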

To overcome the costs and challenges associated with complex roaming agreements, some product creators are now turning to a novel alternative—third-party IoT/MVNO platforms. These platforms are making implementation easier for product creators by eliminating the need to negotiate carrier agreements across an increasingly complex global market.

IoT/MVNO platforms built on dedicated hardware are proving to be invaluable means for product creators to get their projects to market and scale more easily. These platforms provide the infrastructure required to transmit, manage, and protect data all the way from device to cloud.

There is a long history of using cellular modules to add connectivity to assets. But in the past, economic and technological limitations have made the widespread application of cellular connectivity unfeasible. LTE technology—due to its low cost, bandwidth, and energy consumption—stands to remove those barriers and make possible a multitude of new and innovative IoT applications (Figure 4).

Figure 4: LTE technology is positioned to support new IoT applications.


However, the road to that new reality is not without its challenges. Market fragmentation caused by the global adoption of many different technologies and bands has already significantly complicated the transition to LTE. And as the roll-out progresses, new, unforeseen challenges are likely to emerge. For example, a key incompatibility between Radio System Software created by Huawei and Ericsson has already threatened the future of NB-IoT—the technology which has been chosen by European Carriers for IoT[2].

Within this landscape, solutions such as eSIMs, footprint/pin-out and software compatible cellular modules, and full-stack IoT/MVNO platforms seek to provide flexibility in the face of uncertainty.

Patty Felts became Principal Product Manager for u-blox in March 2017 and is responsible for the LTE Cat M1 (R4 Series) product portfolio. Previously, she acted as Director Product Management, PMP, at u-blox San Diego for nearly five years. Felts holds a Bachelor of Chemical Engineering from Georgia Institute of Technology and a Master of Business Administration (MBA) from St. Edward's University.

Will Hart is the General Manager for Developer Tools, Particle, where he has spent more than four years bringing IoT solutions from initial prototype to mass production. He oversees Particle's supply chain and developer tools ecosystem, and is responsible for taking all new concepts and prototypes through rigorous production engineering, quality assurance testing, and final assembly. Hart also manages Particle's components, manufacturing relationships, and distribution logistics to meet the high demand for Particle's innovative products. He has a B.E. in Mechanical Engineering from the Thayer School of Engineering at Dartmouth College.



“…beautiful blend of market and mission” – What we hear from K4Connect CEO Scott Moody

Wednesday, June 7th, 2017

The IoT, home automation, and serving 1.5 billion underserved.

Editor’s note: Scott Moody is Chief Client Advocate, CEO and co-founder of K4Connect, a company that serves older adults and individuals living with disabilities by integrating disparate technologies, among them those designed for the IoT home automation space, into a single system and application. EECatalog recently asked Moody to describe his company’s mission and to address some of the (not always accurate) ideas held about a growing demographic he finds underserved by technology.

EECatalog: What are the pre-conceived notions around older adults and technology that you find to be more myth than fact?

F. Scott Moody, K4Connect


Scott Moody, K4Connect: Probably the foremost one you hear is this idea that seniors do not like technology, which I have truly never believed to be true. Yes, they do not like things designed for somebody fifty or sixty years younger than they are, but if designed with them in mind, they most certainly will use technology. The analogy I use is this: if my daughters give their grandmother a smartphone with a bunch of icons she can't see, show her a bunch of apps that she has no interest in at all, and then tell her there are 1.5 million apps she can download, which is not a number that is going to excite Grandma, well, when Grandma sets the phone aside, people say, "See, Grandma doesn't like technology." But the point is that their grandmother does not wear the same clothes as her granddaughters do, and that doesn't mean she doesn't like clothes. She just likes clothes designed for her, that are comfortable, that provide utility in her life.

The fact is that if you design something that older adults like, that they can understand, that is relevant to their lives, that provides them true utility in their lives, then I can assure you they will use it. In the end, they don’t think about it as “technology,” but simply a part of their daily lives. The fact is that technology is really only successful when you stop calling it technology, when people see it being a ubiquitous part of their everyday lives.

For many, IoT and home automation devices are simply a convenience, and for some, nothing more than a novelty, but for the demographic we serve, many of these same devices and applications can bring real value and utility to their lives. Take the example of a wireless door lock and a wireless doorbell. For most, these are more of a convenience, heck, we can just get off our butts and answer the door. But for those we serve, someone maybe with a mobility challenge, maybe in a wheelchair, or just not able to quickly make their way down the stairs, knowing who is at that door and being able to open it using our application is actually quite important.

EECatalog: How are K4Connect clients using the solutions you offer?

Moody, K4Connect: All our systems are designed around three design pillars, what we call Simpler, Healthier, and Happier. The Simpler elements revolve around the home automation devices you think of, like those door locks and doorbells, but also lights, thermostats, sensors, etc. The Healthier elements logically integrate things like activity trackers, scales, etc., but even have features such as pill reminders and wellness information. And finally, the Happier elements include things like secure video/voice calls, messaging, and picture sharing, important particularly for those that may have lost some mobility.

Our applications integrate all these features into a common application, designed specifically for those we serve. And to my point earlier, when designed with our clients in mind, they do use "technology." Interestingly enough, where deployed (our first product is K4Community, specifically designed for the residents and operators of senior living communities), 100% of the clients/residents use the home automation features. This includes things like thermostats changing automatically so it's a little cooler at night, then warming up during the day, and lights coming on automatically if the client is getting out of bed in the middle of the night.

Moreover, for the application itself, almost 75 percent of our clients use the application and, when used, use it an average of 10 to 15 minutes a day—which is pretty darn significant. Those numbers demonstrate that the whole idea that seniors don't like technology is wrong. Fact is they don't like technology built for 20-year-olds, just like a 20-year-old is not going to appreciate technologies built for a 91-year-old.

EECatalog: How does the underlying architecture contribute to the capabilities K4Connect offers?

Moody, K4Connect: Our patented architecture is the key to what we do, allowing us to offer better reliability and faster speeds. While it seems like everybody [else] rushed to the cloud years ago when it comes to IoT and home automation, we're using the concept of a fog architecture. Computing power costs have come down so much that there is no reason you can't have a very small, very inexpensive box in each home/apartment. Thus, if the Internet connection or Wi-Fi is down, your home continues to work, and given the local connection, there is very little latency.

EECatalog: What should embedded developers know about the market you serve?

Moody, K4Connect: Fact—there are over 1.5 billion people who are either over the age of 65 or living with a disability, and those numbers are going up significantly.

The challenge is that you have to look at this market a bit differently, whereas a lot of people look at this market the same way they look at other markets. They see a problem, they develop a device or application, they make the font a little bigger (i.e., "accessible"), and then they send it out hoping folks will buy. But older adults don't have the inclination to use 10-20 different applications just to control their home. Forget about it being too hard; the fact is that the more complexity you add, the more the foundational utility is diminished. You could have 10 different light bulbs in your home with 10 different apps running them, add to that door locks, doorbells, social media, video chat, pill reminders, etc., and the next thing you know you could be using 50 different apps, none of which work together. Who wants that?

To solve this, we provide a platform and integrate disparate devices, applications and even systems into a single application with a consistent and easy-to-use UI. You may have 10 different light bulbs in the room, but they all have the same UI.

We don’t make the hardware; we don’t make necessarily develop all the applications; we just bring them together into a common application built on our software platform technologies.

Beyond the older adults themselves, senior living communities don't usually have an IT staff at the community facility level, so when we walk in and show them we integrate and manage all this functionality, their interest level is amazing. People recognize the value many of these devices and applications can provide to their residents; they just didn't have a way to manage it all before. We provide that.

EECatalog: Closing thoughts?

Moody, K4Connect: We are very much a mission-centered company, focused on making the lives of the people we serve better, as we said, Simpler, Healthier and Happier. Fact is that we believe if we do that, if we do in fact make the lives of our clients, better, then those around them will benefit as well. But it all starts with bringing first order value to our clients.

In the end, we don’t sell, we serve. And if we do that well, good things will come, but it all hinges around the mission—to serve the underserved.

IoT Security: Intel EPID Starts at the Silicon

Monday, May 22nd, 2017

With a projected 100 billion IoT devices by 2020, security cannot be reliably entrusted to end users or installers deploying millions of IoT devices every month.

There are 8.4 billion Internet-connected things in use this year, up 31% from 2016, according to Gartner, which also reveals that "total spending on endpoints and services will reach almost $2 trillion in 2017."[1] The Internet of Things (IoT) is shorthand for anything connected to the Internet that can autonomously communicate to accomplish a purpose, sending and receiving data. Massive amounts of data, in combination with algorithms, artificial intelligence, or other paradigms, can glean information for decision-making on an unprecedented scale. IoT automates painstaking data-gathering and can also provide a low level of interpretation that sifts out irrelevant data. The potential for deeper visibility can result in productivity gains, scalpel-like precision for decision-making, risk reduction, and a heretofore unmatched depth and breadth of information for automation, yielding savings of time, energy, expense, and capital expenditure.

However, one of the largest IoT problems is security. Another is sheer scale: one projection indicates 100 billion IoT devices deployed by 2020, which makes initiating security at deployment a challenge. IoT solutions that require manual registration in the field are untenable and impractical. Leaving installers to implement the last step in security is overly optimistic when, for example, hundreds of IoT gateway devices need provisioning across an airport, warehouse, or stadium.

IoT security (or lack thereof) creates another issue: Information Technology (IT) network personnel are loath to add IoT devices to the company network. This means that added functionality from IoT devices purchased and installed by operations technologists (OT) (e.g., facilities management) may face significant delays in getting connected to the company network due to a lack of trust. Smart solar, windmills, and simple things like pump monitoring via smartphone will not get connected to the company Wi-Fi until IT clears the devices as secure. Vendors are left with extra steps beyond the sale to prove trustworthiness; otherwise, equipment without connections loses its "smart" status. Security has become an increasingly well-known problem as Internet-connected devices from computers to smartphones to smart light bulbs create individual points of access for hackers worldwide. At the IoT DevCon held in Santa Clara last April, the majority of keynotes were directed at IoT security. Differentiation of smart product design is simply not as interesting as innovation lower down in the stack. Intel® might have an answer, however, that starts with fundamentals in hardware and can be tailored to the myriad use cases that characterize IoT.

Intel Makes a Case at IoT DevCon
Jennifer Gilburg, Director of Strategy, IoT Security at Intel Corporation, delivered a keynote directed at Intel's Enhanced Privacy Identification (Intel® EPID) security technology, which begins at the physical layer. The Intel IoT Platform is a reference model intended to reduce the complexity of security and interoperability in developing and deploying IoT solutions covering the gamut of use cases (see Figure 1). Intel EPID provides a foundation for addressing secure, interoperable, and affordable IoT development and deployment solutions with the ability to scale to IoT growth while also meeting the most stringent privacy requirements.

Figure 1: The Intel IoT Platform Reference Security Model is a starting place for IoT security implementation. (Source: Intel white paper: A Cost-Effective Foundation for End-to-End IoT Security.)


Gilburg asserts that security in Intel’s EPID technology begins in a hardware root-of-trust and continues throughout hardware, software, and through deployment as products are released. Security cannot be reliably entrusted to end users or installers deploying millions of IoT devices every month. Security needs to be a plug-and-play event for end users and installers if it’s not to be neglected in the last mile. Intel’s effort centers around helping developers do it right with an end-to-end technology like EPID, which Intel partners such as Microchip use, beginning in the silicon. Root-of-trust offers inherently trusted elements, whether hardware or software based, in the chain of custody for deploying secure IoT.

Gilburg rightly states that effective security extends across the entire lifecycle of a product. A viable product should have a minimum of protection, which she indicates that Intel terms a “minimum viable product,” meaning “the least that you can get away with.” That the least effort starts in silicon is no surprise, considering the sophistication of hacks at the lowest level of the stack, such as at boot up or in firmware updates. In her talk, Gilburg went on to explain that “…if you do nothing else with security, you have to have these building blocks. And so we put it into our silicon. It’s protected boot, protected storage. You want hardware and software identities, you want the ability to have a hardware root of trust, but you also want to be able to store software keys in a hardened fashion, in a trusted execution environment.”

For  IT networking personnel, assuring security requires attesting the authenticity of IoT devices throughout the lifecycle, including at hardware manufacturing, throughout software integration, installation or deployment, and even within the process of creating data. Intel’s EPID uses layers of security, starting with cryptography in the bedrock silicon of the chip using advanced hardware and software security.

A benefit of EPID that even U.S.-focused markets are now considering is fundamental privacy. Unlike the persistent identity associated with Public Key Infrastructure (PKI), EPID does not require personally identifiable information to work. Privacy is not feasible when data is associated with an identifiable entity in every instance, so to gain acceptance in countries with strict privacy laws, such as Japan and Germany, the goal should always be to collect the minimum amount of data needed to do the job, and no more: only the metadata necessary to accomplish the task. Collecting anything else is simply a liability. EPID therefore does not require individual identifiers. For example, if an IoT system monitoring traffic speed has sensors that detect a car on the highway, all the monitoring system needs to know is that the car is legitimate (not someone spoofing a car) and the speed of the car. Further down the highway the same car might pass again, but the IoT system will not recognize it as the same car, only that it is again a legitimate car, and that car's speed.

EPID starts with minimum data for strict privacy use cases. PKI can always be added for use cases that require specific identification, such as positive identification for billing. (Source: White Paper, Enhancing National Cybersecurity with the Internet of Things (IoT))


Aiming IoT only at an intended target while minimizing data collection on that target also reduces network traffic while limiting security to a “need to know basis.” As mentioned above, one of the unintended consequences of traditional PKI is the ability to trace users. When you have a public key and a private key and your device requires the signature of the private key, then you know who signed it. This is how traditional PKI works. With EPID there are multiple private keys linked to one group key. With EPID, unless you intentionally add PKI on top of EPID (e.g., for billing), you cannot determine that an action was the same user authenticating twice. If EPID is a built-in technology, developers can minimize the data that they are collecting and sharing while maintaining compliance in countries with strict privacy laws. But with traditional PKI, starting with a foundation of enhanced privacy and determining later whether the use case needs additional known data is not possible.
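A toy model can illustrate the property described here: every signature verifies as coming from the group, yet two signatures cannot be linked to the same signer. Note that this sketch uses a symmetric MAC purely for illustration; real EPID is an asymmetric group-signature scheme, and the construction below is neither secure nor representative cryptography:

```python
# Toy model (NOT real cryptography) of the group-key property: a verifier
# can confirm "a valid group member signed this" without learning which
# member, and two signatures from one device cannot be correlated.
import hashlib
import hmac
import os

GROUP_KEY = b"toy-group-key"   # stands in for the group's key material

def epid_like_sign(message):
    """Each signature carries a fresh random value, so two signatures
    over different sightings cannot be linked to one signer."""
    blinding = os.urandom(16)
    tag = hmac.new(GROUP_KEY, blinding + message, hashlib.sha256).digest()
    return blinding, tag

def verify(message, blinding, tag):
    """Confirms group membership only; reveals nothing about the signer."""
    expected = hmac.new(GROUP_KEY, blinding + message, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

mile10 = epid_like_sign(b"legitimate car, 65 mph")
mile20 = epid_like_sign(b"legitimate car, 67 mph")
print(verify(b"legitimate car, 65 mph", *mile10))  # a valid member signed
print(mile10[0] != mile20[0])                      # sightings unlinkable
```

This mirrors the traffic-speed example above: each sighting verifies as legitimate, but nothing ties the two sightings to the same car.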

It’s interesting to note that EPID is in all Intel silicon. EPID is an ISO standard and a Trusted Computing Group (TCG) standard. EPID security includes two-way, mutual authentication so that the new owner needs to prove itself to the device as much as the device needs to prove itself to the new owner. Rogue devices (such as such as smart drones hovering near smart lamps) are prevented from taking over devices in an ecosystem. EPID is also available in Intel partner silicon and freely and openly available for anyone to use.

The EPID implementation process has most of the heavy lifting done before the installer ever touches a smart product, though use cases can vary. In a common scenario, EPID-provisioned silicon goes to an OEM, who adds a unique General Use Identification (GUID). (A GUID is a one-time-use identifier for the IoT device to identify itself to the authentication service later on.) At the time of execution or onboarding, a URL in the device allows it to "phone home" to the onboarding service upon power-up. An ownership credential is generated that initiates a chain of trust. The chain of trust allows each party in the supply chain to sign the certificate that went before it, so when the owner buys the IoT device they can observe a chain of trust along the entire supply chain. Similar practices exist for authorized parts distributors, pharmaceuticals, and evidence in criminal investigations. Within the EPID framework, the process will authenticate the new owner to the device, and the device can verify the new owner.

A Proof-of-Concept Use Case
To better illustrate how EPID can work, what follows is a proof-of-concept use case. As an example, consider a smart light bulb. Once the light bulb is assigned a GUID number at the OEM, an entity further down the chain, perhaps a distributor, might receive the light bulb. The distributor signs the ownership proxy, which is a digital document in the chain of trust that goes with the light bulb through the supply chain. Ultimately an end user buys the light bulb and installs it in an IoT platform management system, which we will call "Wee-wow" as a fictional example. Thus, Wee-wow will manage the light bulb once it's deployed. When the end user buys the smart light bulb, they import the ownership proxy into their Wee-wow platform management system. Wee-wow pings the EPID service to identify itself as the end-user platform that's using the new smart light bulb's unique GUID number. The Wee-wow IoT platform management system contacts the EPID service with a message stating something like, "I am GUID345. I will wait at IP address." In this example, the installer comes in the following week, hikes up a ladder, and screws in the smart light bulb. Upon power-up, the smart light bulb connects with the service, identifies itself as GUID345, and requests the identity of its new owner. The service then tells the smart light bulb to "try the owner, Wee-wow, at IP address, because it claims that it's your new owner." It's important to note that the process arranges matchmaking, so to speak, between device and owner and is not negotiating trust in the cloud.

The smart light bulb then connects to the IP address and shows its EPID signature. In this use case, there could be an attestation service that allows you to attest to the validity of the device, as in what type of device it is, or other validation characteristics. After seeing the valid EPID signature, the Wee-wow IoT platform management system sends down the owner proxy and mutual authentication is accomplished. The first signature in the ownership proxy should match the key that was put into the smart light bulb at manufacturing. If it’s a match, the service creates a secure tunnel that allows the developer to provision whatever is needed to make the device operational, for example, updated images and instructions. The service then deletes the ownership credential, onboarding service information, and the GUID, replacing the values such that if the device ever needs to be re-provisioned there are clean credentials for the smart light bulb.
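The matchmaking flow in this use case can be sketched in a few lines of Python. The rendezvous table, the proxy representation, and the owner address "" are invented for illustration; this is not Intel's actual protocol or API:

```python
# A drastically simplified simulation of the onboarding flow described
# above. Data structures, the proxy format, and the owner address
# "" are invented for illustration only.

rendezvous = {}   # onboarding service: GUID -> (owner address, proxy)

def owner_registers(guid, owner_addr, ownership_proxy):
    """The platform (the fictional 'Wee-wow') imports the ownership proxy
    and tells the service where it is waiting for its device."""
    rendezvous[guid] = (owner_addr, ownership_proxy)

def device_onboards(guid, mfg_key):
    """On power-up the device phones home, learns its claimed owner, and
    checks that the proxy's first signature matches the key installed at
    manufacturing before trusting the new owner."""
    owner_addr, proxy = rendezvous[guid]
    if proxy[0] != mfg_key:
        raise ValueError("ownership proxy does not match this device")
    return owner_addr   # mutual trust established; provisioning proceeds

# OEM key -> distributor -> end-user platform, per the article's chain:
proxy = ["mfg-key-345", "distributor-signature", "wee-wow-signature"]
owner_registers("GUID345", "", proxy)
print(device_onboards("GUID345", "mfg-key-345"))
```

The key point the sketch preserves is that the service only introduces device and owner; the trust check (proxy versus manufacturing key) happens between the two parties.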

As mentioned earlier, Intel is offering the EPID technology to others. According to the 2016 Intel white paper, A Cost-Effective Foundation for End-to-End IoT Security, “Intel is making Intel EPID technology freely available to all silicon and device manufacturers, software vendors, and the broader IoT community.” A de-facto standard for IoT security couldn’t come too soon. Intel’s realistic conclusion that installers should not be depended upon to key in codes atop a ladder with a smartphone or tablet app is spot-on.

Kicking the Can Down the Road Won’t Work
If IoT is to scale to 50 to 100 billion devices by 2020, depending on the estimate's source, group and device permissions in the EPID paradigm ease the deployment phase. If an installer is deploying 500 gateways, the installer is mainly concerned with getting the work done and should not be burdened with keying in long numbers to implement security for each device. Kicking IoT security down the road means it will need to be dealt with later, most likely after some disaster forces positive, not passive, action. IoT is vulnerable to hackers, and security is best not left to the last person on the chain. New IoT devices and innovative use cases create new opportunity, revenue, and challenges. As new system architectures are considered for IoT edge devices (which act something like a server-gateway hybrid to pre-process massive amounts of data), we can expect system designers from respected suppliers to take security seriously. As Gilburg said in her keynote, "We're working with enabling different industry partners…We are saying, 'Is there a need to securely onboard these devices?' and 99% of the time the answer is yes, so there's a lot of opportunity and a lot of interest from customers on solving this problem." Gilburg was three-deep in a cluster of engineers who eagerly peppered her with questions after her talk. Interest is high in a solution that can be tailored to individual use cases while maintaining privacy from a global viewpoint. IoT is seen as a huge money-maker, but with it comes the cost of mobilizing security.

LynnetteReese_115Lynnette Reese is Editor-in-Chief, Embedded Intel Solutions and Embedded Systems Engineering, and has been working in various roles as an electrical engineer for over two decades. She is interested in open source software and hardware, the maker movement, and in increasing the number of women working in STEM so she has a greater chance of talking about something other than football at the water cooler.

[1] "Gartner Says 8.4 Billion Connected." Gartner. N.p., 7 Feb. 2017. Web. 15 May 2017.

How to Make Your IoT Gadget Smaller and Cooler

Thursday, April 20th, 2017

Turning away from typical approaches can stretch battery life and use space more successfully.

Charging and fuel gauging ICs—at the heart of every battery management system—are critical components of ever-shrinking mobile and IoT electronic gadgets. While size is shrinking, complexity is increasing. Faster charging with minimum heat generation calls for high-efficiency charging at a high input voltage and a high charge current. This article reviews a typical approach for meeting the challenges of reduced space requirements and minimizing heat generation. It then presents a new solution that delivers more efficient power in a smaller space, enabling longer battery life and smaller form factors.


Figure 1: Smartphone at End-of-Charge

Typical Power Management Implementation

A typical charger and fuel gauge system is shown in Figure 2. In addition to the charger and the fuel gauge, it includes two safeout low dropout regulators (SFLDO) that drive the system’s USB interface. For simplicity, the external passives are not shown.

Figure 2: Typical Charger and Fuel Gauge System


All active and passive components for the system diagram shown in Figure 2 are accounted for in the solution drawing of Figure 3.

Figure 3: Typical Charger and Fuel Gauge System Footprint (55mm2)


The 1.5µH, 3.5A inductor (in a large 2520 package) and the two external LDOs present a huge challenge in terms of space. This solution occupies a board area of about 55mm2.

An Integrated Solution

The PMIC in Figure 4 (MAX77818) integrates the charger, fuel gauge, and LDOs in a single chip. This eliminates the wasted space associated with using multiple components. Another advantage of this solution is the ability to use a lower-value 0.47µH inductor that carries 3.5A in a smaller 2016 package.

Figure 4: Highly Integrated Charger and Fuel Gauge System-on-Chip


All active and passive components of the system diagram shown in Figure 4 are accounted for in the solution drawing of Figure 5.

Figure 5: Highly Integrated Charger and Fuel Gauge System Footprint (37mm2)


The total board space occupied by the components is a mere 37mm2, saving 33 percent of the real estate. In addition, the MAX77818 draws only 20µA of total quiescent current. This saves valuable battery life, enabling either a smaller system built around the smallest possible battery or longer run time between charges.
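To put the 20µA figure in perspective, a quick calculation shows how long quiescent current alone would take to drain a battery. The capacities chosen are illustrative, not from the article:

```python
# What 20 uA of quiescent current means for standby time. The battery
# capacities below are illustrative assumptions.

def standby_days(capacity_mah, iq_ua):
    """Days until quiescent current alone drains the battery."""
    return capacity_mah / (iq_ua / 1000.0) / 24.0

for cap_mah in (100, 300, 1000):   # wearable- to phone-class cells
    print(f"{cap_mah:5d} mAh / 20 uA -> {standby_days(cap_mah, 20):6.0f} days")
```

Even a small 300mAh wearable cell would take roughly 625 days to drain against 20µA, so the battery management circuitry itself barely touches the energy budget.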

More Efficiency in Less Space

A comparison between the efficiency (from CHGIN to BATT) of the MAX77818 charger solution and a competitive solution is shown in Figure 6. To provide a true comparison, both solutions use a 1µH inductor in a 2520 package. The integrated charger solution exhibits higher efficiency across most of the load current range even with a substantially higher switching frequency. At full load (3.5A), the efficiency of the integrated solution is well above 88 percent, while the competitive solution is just above 86 percent, a more than two percent difference.

Figure 6: MAX77818 vs. Competitor Efficiency


Even with the large 2520 inductor, the integrated solution is 30 percent smaller than the competitive solution.
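The practical impact of a two-point efficiency gap is heat. The article gives the 3.5A full-load current and the two efficiency figures; the 4.0V battery voltage below is an assumed nominal value, so treat the numbers as a rough sketch:

```python
# Heat implied by the ~2-point efficiency gap at full load.
# V_BATT is an assumed nominal value, not from the article.

V_BATT = 4.0        # volts (assumption)
I_CHG = 3.5         # amps, full-load charge current
P_OUT = V_BATT * I_CHG

def charger_loss_w(efficiency):
    """Watts dissipated in the charger for a given CHGIN-to-BATT efficiency."""
    return P_OUT * (1.0 / efficiency - 1.0)

for eff in (0.88, 0.86):
    print(f"at {eff:.0%} efficiency: {charger_loss_w(eff):.2f} W of heat")
print(f"difference: {charger_loss_w(0.86) - charger_loss_w(0.88):.2f} W")
```

Under these assumptions the higher-efficiency part dissipates roughly a third of a watt less, a meaningful saving in a sealed, fanless enclosure.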

MAX77818 Integrated Charger and Fuel Gauge

The MAX77818 switching charger is designed with a special constant-current (CC), constant-voltage (CV), and die-temperature regulation algorithm. The ModelGauge™ (m5) algorithm delivers battery fuel gauging with the highest accuracy available while operating with extremely low battery current.

The fuel gauge must be loaded with a factory characterization file (INI) to match the battery in use. Maxim has developed a vast battery database of cell characteristics and behaviors over a wide range of test conditions based on customer use cases. This allows Maxim to provide an initial configuration file that works well with the battery until a battery-specific characterization file is created for the highest-accuracy operation. To learn more about the terms of support for battery characterization, please contact technical support.

Additional Advantages of Integration

A fuel gauge IC includes temperature sensing so it can apply the correct charging profile for the battery at hand; a standalone charger IC normally does not. Yet JEITA, the Japanese electronics protocol that has become the standard for lithium-ion (Li+) batteries, specifies charger behavior versus temperature, for example reducing the charge current at cold temperatures. To comply, a discrete charger would need to integrate its own temperature sensor or rely on a host microprocessor to supply temperature information. The MAX77818's integration instead lets the charger directly access the fuel gauge's temperature reading, an important function when JEITA mode is enabled. This reduces silicon overhead and potentially lowers the thermistor count. Finally, with the fuel gauge and charger in the same package, there is less chance of PCB layout issues.
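The JEITA behavior described above amounts to mapping battery temperature zones to current and voltage limits. A minimal sketch follows; the zone boundaries and derating factors are typical JEITA-style values chosen for illustration, not MAX77818 datasheet settings.

```python
# Illustrative JEITA-style temperature zones: reduce charge current when
# cold and charge voltage when warm. Boundaries and derating factors are
# typical example values, not taken from the MAX77818 datasheet.
def jeita_limits(t_batt_c, i_fast=3.5, v_term=4.2):
    """Return (current_A, voltage_V) charge limits for a battery temperature."""
    if t_batt_c < 0 or t_batt_c > 60:
        return 0.0, 0.0                 # outside safe range: inhibit charging
    if t_batt_c < 10:
        return i_fast * 0.5, v_term     # cold zone: halve the charge current
    if t_batt_c > 45:
        return i_fast, v_term - 0.1     # warm zone: reduce the charge voltage
    return i_fast, v_term               # normal zone: full-rate charging

print(jeita_limits(25))   # -> (3.5, 4.2)
print(jeita_limits(5))    # -> (1.75, 4.2)
```

With an integrated fuel gauge, the temperature input to such a table comes for free; a discrete charger would need the extra thermistor or host traffic the article describes.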


We have discussed the shortcomings of a typical charger-fuel gauge implementation and introduced a new, highly integrated solution. The new solution, compared to an alternative implementation, delivers a full load efficiency advantage of more than two percent and a 30 percent reduction in PCB area, enabling longer battery life and smaller form factors.

Bakul Damle is the Mobile Power Business Management Director at Maxim Integrated. His current interests include battery and power management, specifically fuel gauges, battery charging, energy harvesting, wireless charging, and battery authentication. He has several patents in test and measurement. Bakul holds a Master of Science degree in Electrical Engineering from the California Institute of Technology and a Bachelor of Technology in Engineering Physics from the Indian Institute of Technology.

Jia Hu is an Executive Business Manager for Mobile Power at Maxim Integrated. He has more than 15 years of experience in the semiconductor industry. Jia holds a master's degree in Electrical Engineering from the University of Washington and bachelor's degrees in Electrical Engineering and Economics from the University of California, Berkeley.

Nazzareno (Reno) Rossetti, Principal Marketing Writer at Maxim Integrated, is a seasoned analog and power management professional, a published author, and holds several patents in this field. He holds a doctorate in Electrical Engineering from Politecnico di Torino, Italy.
