
Programmable Logic Expands Its Reach

Wednesday, February 20th, 2019

By Barry Manz, Mouser Electronics 


Not only is the performance of FPGAs and other logic devices becoming more formidable, these devices are incorporating functions typically performed by other types of logic, CPUs, GPUs, and DSPs. They’re the semiconductor versions of the Swiss Army Knife.

In 1969, when hundreds of thousands (including the author) partied at Woodstock, the Concorde made its first trial run, and Richard Nixon was inaugurated, another momentous event had just taken place: The XC157 mask-programmed gate array, with 12 gates and 30 input/output pins, had appeared in the 1968 edition of the venerable Motorola Semiconductor Data Book (Figure 1). It may not have shaken the world, but for the electronics industry it was like planting a flag on the moon, as it (and efforts by many others at the time) marked the point when programmable logic devices became a commercial reality.

To use a time-worn phrase, the rest is history – and what a history it has been, with dozens of different logic types created by many companies, arguably making all types of embedded systems possible.

The various logic types can be grouped into three categories based on their comparative level of complexity (Table 1). The top of the hierarchy is the domain of field-programmable devices—FPGAs and their variants—which have come a very long way since David W. Page and LuVerne Peterson initiated the concept in 1985. What distinguishes FPGAs from their nearest brethren, Complex Programmable Logic Devices (CPLDs), is their ability to perform more complex functions and the fact that they are a “blank canvas” on which their functions must be painted, rather than being endowed beforehand by the manufacturer with specific functions.


Figure 1: Motorola’s XC157 “Multi-Gate Array” as it was described in the 1968 Motorola Semiconductor Data Book. Courtesy: Jason Scott, proprietor,


This makes FPGAs extraordinarily versatile, as they can perform computing, signal processing, high-speed communication, and other functions without external peripherals. Defining the vast number of connections and cell logic functions in an FPGA was never easy and of necessity, FPGA manufacturers and design software vendors have developed software tools that make the process less onerous. Predesigned and verified intellectual property (IP) functional blocks are also available to help speed the programming process.

The initial FPGA concept was followed in the late 1980s by the results of a U.S. Naval Surface Warfare Department program in which industry participants developed a computer that implemented 600,000 reprogrammable gates. The first commercial FPGA, the XC2064, was unveiled in 1985 by Ross Freeman and Bernard Vonderschmitt, the founders of Xilinx. It was an 8 x 8 grid of configurable logic blocks (64 CLBs), each with two three-input look-up tables (LUTs).
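Conceptually, a k-input LUT is nothing more than a 2^k-entry, one-bit-wide memory whose address lines are the logic inputs; loading a truth table “programs” the function. A minimal sketch in Python (illustrative only; the names and structure are not from any vendor toolchain):

```python
# A k-input LUT is an SRAM of 2**k one-bit entries indexed by the inputs.
# Sketch: configure a 3-input LUT, like those in the XC2064's CLBs,
# to implement any Boolean function by loading its truth table.

def make_lut3(truth_table):
    """truth_table: 8 bits; entry i is the output for inputs (a,b,c)=i."""
    assert len(truth_table) == 8
    def lut(a, b, c):
        index = (a << 2) | (b << 1) | c   # the inputs form the SRAM address
        return truth_table[index]
    return lut

# "Program" the LUT as a 3-input XOR (parity), one of the 2**8
# possible functions a 3-input LUT can realize.
xor3 = make_lut3([0, 1, 1, 0, 1, 0, 0, 1])
```

Swapping in a different 8-bit truth table yields a different logic function with no change to the "hardware", which is the essence of the blank-canvas model.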

Fast forward to today, and the progress has been startling. FPGAs now integrate ARM Cortex or other processor cores; can perform more than 1 trillion floating-point operations per second (teraflops) for DSP; integrate ADCs; and offer total throughput (across all serial transceiver channels) of nearly 3 Tb/s, up to 50 million logic elements, very-high-speed memory (and lots of it), and 128-b encryption. All this and more is contained in a single device that has very low power consumption, lower latency than ever, and other impressive attributes. FPGAs can now perform so many different types of functions that they are not only a system on a chip, but an extremely flexible and complex one as well. For example, while communication systems once employed a bank of ASICs to implement their receiver front-end functions, today a single FPGA can handle them all. Unlike an ASIC, whose functions are fixed during fabrication, FPGAs can be reprogrammed, generally without hardware changes.

The Next Frontier for FPGAs

The next major step for FPGAs may be their use in reconfigurable computing, where, along with software, they provide all of the functionality of a computer, with a general-purpose processor used only for control. It’s not a new concept, having first been described in the 1960s and first demonstrated in 1991 by Tom Kean, Dr. John Gray, and Dr. David Rees of Algotronix with their CHS2X4. It was based on the company’s CAL1024 FPGA, which had 1,024 programmable cells in 1.5-µm double-metal CMOS and was the first FPGA to provide random access to its control memory and input/output signal sharing so that arrays of devices could be constructed. The achievement and the technology that enabled it were so impressive that Xilinx acquired the company in 1993.

FPGA-based reconfigurable computing has been used in some specialized high-performance systems, including cryptography. An interesting example of such a code-breaking machine is the Cost-Optimized PArallel COde Breaker (COPACOBANA), which is optimized for running cryptanalytic algorithms against ciphers like the Data Encryption Standard (DES), as well as for other parallel computing problems. It’s not general-purpose in any sense, as its communications are limited in speed and bandwidth and its clock rates are much lower than those of a CPU-based computer. However, it’s also much less expensive and is very powerful in the applications for which it is best suited. By using only FPGAs and other off-the-shelf parts, it costs a fraction of a CPU-based crypto-computer like the Electronic Frontier Foundation’s Deep Crack.

COPACOBANA (Figure 2) uses 120 FPGAs (plus or minus, depending on its specific design) and fits into three units of a 19-inch rack. It devours 48 billion DES decryptions per second while consuming only 600 W, and for control it needs only a garden-variety PC running Windows or Linux. In the Cryptographic Hardware and Embedded Systems 2006 (CHES 2006) Workshop’s secret-key challenge, COPACOBANA took 21 hr, 26 min, 29 s using 108 of its 128 processors at a throughput of 43.1852 billion keys per second. It found the key after searching through 4.7% of the key space. Hardware of this type is used in custom hardware attacks to unlock encrypted transmissions by literally guessing the key or password.
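Those figures are easy to sanity-check: at the quoted throughput, searching 4.7% of DES’s 2^56-key space takes on the order of the reported 21-plus hours. A quick back-of-envelope calculation:

```python
# Back-of-envelope check on the COPACOBANA numbers quoted above.
KEYSPACE = 2 ** 56        # DES uses a 56-bit key
RATE = 43.1852e9          # keys per second (reported throughput)
FRACTION = 0.047          # share of the key space actually searched

seconds = KEYSPACE * FRACTION / RATE
hours = seconds / 3600.0
# hours comes out near 21.8, consistent with the reported
# 21 hr, 26 min, 29 s (the 4.7% figure is itself rounded).
```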


Figure 2: COPACOBANA: FPGA-based codebreaking on the cheap. (Source:

The Wide World of FPGAs

“Logic” would dictate that the versatility of FPGAs would result in a wide range of models, each with specific and sometimes unique attributes designed to meet the needs of different applications. Some of the most demanding of these are defense radar, electronic warfare, and signals intelligence, whose demands are so severe that it’s difficult to believe that any system short of a supercomputer could possibly satisfy them. Yet FPGAs have taken the defense industry by storm, thanks to their massively-parallel processing power and I/O, comparatively low power consumption, and most recently their ability to harness the power of floating-point arithmetic, which is the key to solving many types of computing problems.

Serving these applications are powerhouse FPGAs like Altera’s Stratix 10 family, with more than four million logic elements, manufactured in the Intel 14-nm Tri-Gate process and incorporating 64-b quad-core ARM Cortex-A53 processors. Compared to their predecessors, they have four times the processor data throughput, four times the serial transceiver bandwidth, a 28-Gb/s backplane, 56-Gb/s chip-to-chip/module speed, more than 2.5 Tb/s of bandwidth for serial memory, and more than 1.3 Tb/s of bandwidth for parallel memory interfaces, with support for DDR4 memory at 3200 Mb/s. To handle the aforementioned need for floating-point operations, they offer more than 10 TFLOPS of single-precision DSP performance.

Overall, they consume 70% less power than previous-generation high-end FPGAs, with single-precision floating-point efficiency of 100 GFLOPS/W. The Stratix 10 devices are supported by Altera’s advanced development and debug tools, such as the Altera SDK for OpenCL and the SoC Embedded Design Suite.

In the mid-range, Altera’s instant-on MAX 10 FPGAs (Figure 3) integrate DSP blocks, analog blocks with 12-b ADCs and a temperature sensor, PLLs and low-skew global clocks, embedded soft-processor support, memory controllers, and up to 736 Kbytes of dual-configuration flash, allowing you to store and dynamically switch between two images on a single chip. They’re built on TSMC’s 55-nm embedded flash technology, enabling instant-on configuration so they can control power-up or initialization of other components in the system. Densities range from 2,000 to 50,000 logic elements. Other features include up to 500 user I/O pins, 18 analog input channels, and 128-b AES encryption.


Figure 3: Altera’s MAX 10 FPGA on an evaluation board. Source: Altera

Of particular interest is the Nios II, which combines a Nios II processor core, on-chip peripherals, and memory with interfaces to off-chip memory. The Nios II processor is a configurable “soft” IP core as opposed to a fixed microcontroller, meaning the processor core is programmable rather than fixed in silicon. Because it is implemented on the FPGA, it allows software and hardware engineers to work together to optimize the hardware and test the software running on it.

Even FPGAs that serve mainstream applications have daunting performance. For example, Lattice Semiconductor says that its MachXO3 FPGA family is the smallest, lowest-cost-per-I/O platform targeted at expanding system capabilities and bridging emerging connectivity interfaces using both parallel and serial I/O such as MIPI, PCIe, and Gigabit Ethernet. Its package technology eliminates bond wires, which reduces cost and increases I/O density. Depending on the model, they have from 640 to 6,900 look-up tables. Typical applications include consumer electronics, computing and storage, wireless communication, industrial control, and automotive. The design tool library includes popular logic synthesis software, pre-engineered IP, and free reference designs optimized for the MachXO3L family.

To make things easier, Terasic, which produces optimized subsystems using FPGAs (and other devices), offers the Cyclone V GX Starter Kit hardware design platform built around Altera’s Cyclone V GX FPGA. The board (Figure 4) includes hardware such as an Arduino Header, on-board USB Blaster, and audio and video capabilities along with high-speed transceivers. The company’s goal is to simplify the task of evaluating and prototyping subsystems based on the FPGA with the addition of only a Windows-based PC.


Figure 4: Terasic’s Cyclone V GX starter kit. (Source: Cyclone V GX Starter Kit User Manual)

Not the Only Player on the Board

FPGAs may be sexy (in engineer parlance), but they are not the only programmable logic devices out there; other devices can also provide a broad array of functions within a single device. Consider the “programmable system on a chip” (PSoC) architecture from Cypress Semiconductor, the industry’s only programmable embedded SoC combining a high-performance analog block, PLD-based programmable logic, memory, and a microcontroller on a single chip, one notable for its frugality with power. The most advanced PSoC variant, the PSoC-5 family, builds these mixed-signal capabilities around the latest ARM Cortex-M processors.

The CY8C56LP member of the PSoC-5 family for example (Figure 5) provides configurable blocks of analog, digital, and interconnect circuitry configured around a CPU subsystem. The combination of a CPU with an analog subsystem, digital subsystem, routing, and I/O makes it very appealing for a broad swath of consumer, industrial, and medical applications. The PSoC’s digital subsystem effectively connects a digital signal from any peripheral to any pin through its digital system interconnect, while functional flexibility is afforded via an array of small, fast, low-power universal data blocks (UDBs).


Figure 5: The major components of the CY8C56LP are the ARM Cortex-M3 CPU, digital, analog, programming, debug, and test subsystems, I/O, clocking, and power. Source: Cypress Semiconductor

It is supported by the PSoC Creator library of tested, standard digital peripherals (UART, SPI, LIN, PRS, CRC, timer, counter, PWM, AND, OR, etc.) that are mapped to the UDB array. Each UDB contains programmable logic functionality and a small state-machine engine that allows it to support a wide variety of peripherals. Other configurable digital blocks can be used for specific functions such as four 16-bit timer, counter, and PWM blocks; I2C slave, master, and multi-master; and USB and CAN 2.0. The device is usually marketed as an MCU, but it shares much of the flexibility an FPGA would have and is fairly easy to program.
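To make the UDB idea concrete, consider the CRC peripheral listed above: in hardware it is computed one bit per clock by a small shift-register datapath. A software model of that bit-serial computation follows; the CRC-8 polynomial chosen here is illustrative, not a Cypress-specific detail:

```python
# Software model of a bit-serial CRC, the kind of function a UDB's
# datapath and state machine compute one bit per clock. The polynomial
# (CRC-8, x^8 + x^2 + x + 1, i.e. 0x07) is an assumption for the sketch.

def crc8(data, poly=0x07, init=0x00):
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):            # one loop iteration ~ one clock tick
            if crc & 0x80:            # MSB set: shift and XOR in the poly
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc
```

In a PSoC the same computation would be dropped onto the UDB array from the PSoC Creator component library rather than run on the CPU, freeing the core for other work.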


If by this point the powerful and versatile nature of today’s programmable logic devices is not apparent, consider this: It is possible today to construct a complete signal capture and processing subsystem using only three cards in the OpenVPX form factor (for defense systems). The major devices include two high-end FPGAs, top-drawer ADCs and DACs (two each), an Intel Core i7 quad-core processor, SERDES transceivers, memory, and I/O. Thanks to the broad bandwidth and high sampling rate of the ADCs and the intensive processing power of the FPGAs, this three-card solution can even directly capture signals off the air from DC to 6 GHz. Accomplishing this in such form factors only a few years ago would have required a solution 5 to 10 times the size.

Achievements like this are occurring with increasing rapidity as more market sectors, from consumer electronics, to image processing, automotive and defense electronics, industrial automation and control, and medicine, are taking advantage of what these devices can do. So while FPGAs and other programmable logic devices as a category are not new, they are accomplishing feats that certainly are on the cutting edge.

Barry Manz is president of Manz Communications, Inc., a technical media relations agency he founded in 1987. He has since worked with more than 100 companies in the RF and microwave, defense, test and measurement, semiconductor, embedded systems, lightwave, and other markets. Barry writes articles for print and online trade publications, as well as white papers, application notes, symposium papers, technical references guides, and Web content. He is also a contributing editor for the Journal of Electronic Defense, editor of Military Microwave Digest, co-founder of MilCOTS Digest magazine, and was editor in chief of Microwaves & RF magazine.

Reprinted by permission from Mouser Electronics

“…brain inspired”: Q&A with Paul Washkewicz and Chet Jewan, Eta Compute

Monday, October 1st, 2018

Why microcontrollers architected in a “brain inspired” way could have a growing role in smart metering and other point-to-point communication applications, such as those relying on sub-GHz next generation networks.

Editor’s Note: “Low power solar operation means more deployments with low operational cost,” Eta Compute co-founder and VP marketing Paul Washkewicz tells EECatalog. One potential use case, according to the company’s Chet Jewan, who serves as VP sales and business development: oil pipeline monitoring. Earlier this year Eta Compute Inc. and ROHM Semiconductor announced the two have partnered to develop sensor nodes compatible with the Wi-SUN sub-GHz communication protocol—news followed by a successful Sensors Expo demonstration of ROHM’s sensor technology and Eta Compute’s low power MCUs. Edited excerpts of our conversation follow.

MJ1011 Soil Sensor Unit from LAPIS Semiconductor, a ROHM Group Company

EECatalog: What did you want to accomplish by introducing your Sensors Expo attendees to an energy-harvesting sensor evaluation board and how did it go?

Chet Jewan, Eta Compute

Paul Washkewicz, Eta Compute

Chet Jewan (CJ): To convey that it was possible to capture data—including lighting, temperature, pressure—and transmit that from the sensor node to a Wi-SUN receiver in ROHM’s booth, presenting how an energy-harvesting solution captures and transmits data in real time—all without being plugged into the wall—as well as operating from lighting in the exhibit floor, showing a real-life application.

Paul Washkewicz (PW): There was a steady stream of sensor customers coming by both the Eta Compute booth and the ROHM booth asking about details of the design, especially in relation to the issue of: “How do you effectively deploy systems that have excellent coverage but don’t require a steady stream of management, which raises operating expenses?”

EECatalog: Why pool resources with ROHM?

PW: Pooling resources makes it possible for customers to get the sensor nodes required for deploying next-generation networks. Individually, each of our companies has products that theoretically could work seamlessly together. However, while engineers hear those words “work seamlessly” often, they can find actual implementation more elusive.

For example, a key part of this joint solution is the power management for the energy-harvesting section of the design. By pooling resources, we put together a design that includes our low-power controller and ROHM sensors, but also includes the design of the power management. Anything to streamline an application’s design and prove out the application in hardware and software is a better starting point than a collection of datasheets.

CJ: They can use our reference design as an example: whether they want to change the communication protocol, change the sensors, or take an existing reference design and deploy it in an existing factory as is, they can do that.

EECatalog: For a running start to deployment as quickly as possible?

CJ: Exactly, so they can take the sensor node as is and deploy it on a trial basis, and from there they can enhance it, modify it, whatever the case may be.

EECatalog: What’s the significance of Wireless Smart Ubiquitous Network (Wi-SUN) compatibility?

PW: Wi-SUN is an open standard based on a low-power mesh network targeting smart cities’ and smart utilities’ networks. With hundreds of millions of possible deployments, making these as near “maintenance free” as possible will help the proliferation of such systems. Wi-SUN is one good example of a low-power sub GHz RF standard for these applications.

EECatalog: What features of this solution do you anticipate catching folks’ attention?

CJ: One aspect people will be interested in is how often can they capture and transmit data from the sensors, and the variable would be, for example, lighting in their environment. If this node is inside a factory, the data customers are looking for is, “Okay, if it’s only 100 to 300 lux type of lighting, how often can I poll the sensors for data, and then how often can I transmit that data?”
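The polling-rate question is, at bottom, an energy-budget calculation: the power harvested over an interval must cover the energy of each sample-and-transmit cycle plus the sleep floor. A sketch with assumed numbers (none of these are Eta Compute or ROHM figures):

```python
# Illustrative duty-cycle budget for a light-harvesting sensor node.
# Every number here is an assumption for the sketch, not a vendor spec.

HARVEST_UW = 50.0        # µW harvested from ~100-300 lux indoor light
SAMPLE_UJ = 20.0         # µJ per sensor sample
TX_UJ = 500.0            # µJ per radio transmission
SLEEP_UW = 2.0           # µW sleep-mode floor

def min_interval_s(samples_per_tx=1):
    """Shortest sustainable period between transmissions."""
    energy_per_cycle = SAMPLE_UJ * samples_per_tx + TX_UJ
    net_uw = HARVEST_UW - SLEEP_UW    # power left after the sleep floor
    return energy_per_cycle / net_uw  # µJ / µW = seconds

# With these assumptions, the node can sample and transmit roughly
# every 11 s; dimmer lighting stretches that interval proportionally.
```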

EECatalog: But we’re not talking just inside factories?

CJ: Absolutely not, for instance, take cold chain applications. The sensor node could be inside a truck and you could transmit to the cabin to alert the driver—say he’s transporting lettuce—that the temperature is going up.

EECatalog: How does sensor fusion fit in?

CJ: Say along with the temperature sensor that indicates the truckload of lettuce is getting too warm, you also have a GPS unit that is tracking location. For that truck that is moving inventory which is supposed to be refrigerated, you want to be able to look at location, temperature, and pressure and say, “Okay, what’s really happening here?” Maybe action has to be taken because the truck has traveled from a cooler, higher altitude in Flagstaff, Arizona to a warmer location, Phoenix or Tucson. You are taking data from multiple sensors and producing meaningful data for the user. Looking ahead, artificial intelligence integrated into our MCU will make possible a rapid, low-cost way of doing sensor fusion at the edge, resulting in relevant information for the user.

EECatalog: What is capturing Eta’s Compute’s attention with regard to Artificial Intelligence?

CJ: With the AI market, which is still in its early stage, a lot of the work being done is to transmit the data to the cloud in order to get it processed and then provide data back down. We, on the other hand, are working to provide AI at the edge. So, for any of these applications that are IoT sensor-driven, we want to be able to take the data from the sensor, use artificial intelligence to do sensor fusion, and act [based upon] that data without going to the cloud for processing.

We want to make an edge device that is very low power, with the performance to run AI and process locally, instead of having to transmit the data to the cloud. You want to avoid latency and connectivity issues, where, for example, you’ve got this truck and you’re sending data to the cloud. And by the time you get the data back, you’ve got a refrigeration issue, and it may be too late.

We are developing efficient neural nets, and we’ve got the low-power hardware—we are putting that all together so that AI is possible at the edge. Especially for sensor-fusion and IoT types of applications, we can sense, infer, and act locally, without cloud connectivity.

EECatalog: That’s in the context of some applications needing the kind of computing power the cloud can offer?

CJ: Yes, it’s application dependent. For some heavy-lifting performance, yes, you are going to need the cloud, and that is not the market we are going after.

In terms of sensors and IoT things, not everything is connected to the cloud, and to go ahead and connect that is going to cost money—you’ve got to get some kind of subscription service, whether it is LoRaWAN or NB-IoT, or whatever the case is.


Figure 1: IoT Soil Environment Monitoring Case Study from LAPIS Semiconductor, a ROHM Group Company

Let’s take an agriculture application, where you’ve got these sensors out in the field checking for moisture, checking for light—those types of things. It is going to be difficult to have all of these devices connected to the cloud, but you may want them connected locally, so you can send a message. Wi-SUN and LoRaWAN typically transmit a kilometer or more, and you may have a base station there within the farm—and now you are sending signals where you are seeing too much moisture or too little moisture; there may be an issue with lighting; there may be an issue with a soil sensor checking different pH levels, whatever the case may be. But you want to be able to handle that locally and not necessarily send all that data up to the cloud to get processed.

If it is a large field you may have 100 sensors there, and you would not have each one connected to the cloud, you might be pulling them together to a local place to get processed and then act, depending on what that sensor is telling you (Figure 1).

Among those 50 billion connected devices that are supposed to be here by 2020, we are trying to connect the ones that need very low power and are optimal for energy harvesting. There’s a large market where a plug isn’t readily available, and where you don’t necessarily want to rely on batteries that require changing. Locations that have energy-harvesting sources available, such as vibration, light, or thermal, are typically those where there would be enough power for our low-power MCU and sensors to operate.

EECatalog: Unlike a situation where there’s the need to check on and change batteries that could be in difficult to access locations, it’s more “set it and forget it”?

CJ: Yes, to cite another example, oil pipelines. These pipelines run for hundreds of miles across the country, and they are out in the open, so sunlight is available as an energy source, making it possible to monitor for flow, leaks, and other parameters of concern.

The sensor node can be mounted on a decent-sized magnet that can be positioned on the pipeline. Oil flowing through the pipeline will create a constant vibration, and with sensor fusion and AI you know what a typical flow would feel like. And if something stops or changes, you can pick up on that. And all of this can be done without batteries, without electricity. Replacing batteries would be very expensive, whereas you can just run solar for indefinite periods of time and be able to transmit that to a localized base station or even use an NB-IoT type of application, employing a cellular network or satellite, and transmit that data a few times a day, saying, “Everything is okay, there are no issues,” or if there is an issue, you can transmit that data right away. That’s as compared to sending people out to check the pipeline or using helicopters to fly along the pipeline to make sure there are no issues—that gets very expensive over time.
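The “know what typical flow feels like” step can be as simple as learning a baseline statistic and flagging deviations from it. A minimal stand-in for the sensor-fusion/AI step described above (not Eta Compute’s actual algorithm, and the readings are invented):

```python
# Minimal edge anomaly detector: learn the mean and spread of "normal"
# vibration readings, then flag samples that fall far from the baseline.
# A deliberately simple stand-in for on-device inference.

import statistics

def make_detector(baseline, threshold=4.0):
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    def is_anomaly(x):
        # Flag readings more than `threshold` standard deviations out.
        return abs(x - mu) > threshold * sigma
    return is_anomaly

# Train on readings taken during normal flow (units arbitrary).
normal = [1.00, 1.02, 0.98, 1.01, 0.99, 1.03, 0.97, 1.00]
flag = make_detector(normal)
```

A reading near the baseline passes quietly; a reading far below it (flow stopped) trips the alert, and only the alert, not the raw stream, needs to leave the node.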

EECatalog: Anything to add before we wrap up?

PW: Our delay insensitive asynchronous logic, or DIAL, MCU is a good fit for supporting machine learning and machine intelligence in portable devices, mainly due to its extremely low power of operation. With operating currents on the order of microamps, even coin cells last for years; alternatively, we can run on small solar cells.

CJ: Because our DIAL MCU is asynchronous, it consumes very low power. Putting together our asynchronous MCU with a brain inspired neural net which is also running asynchronously produces a solution that we do not believe, for the amount of performance we offer at low power, is found anyplace else out there.

FD-SOI Process Yields Processor on a Power-Sipping IoT Budget

Tuesday, April 10th, 2018

IoT finds new life with technology that adapts to the pattern IoT setups often demand: intermittent bursts of high performance followed by periods of ultra-low power.

For Internet of Things (IoT) devices, proficient energy management and ultra-low power consumption are critical. One area of concern for conserving power lies within the IoT device’s embedded processor.

The main factors contributing to the power efficiency of the processor include:

  • Technology node
    • Wide dynamic voltage range
    • Low leakage
  • Architecture
    • Heterogeneous processing
    • Power Islands
    • Power mode enablement

Of all the parameters, the process technology is fundamental to power efficiency. One recent process technology that has dramatically improved the landscape for power efficiency is Fully Depleted Silicon On Insulator (FD-SOI).

Processor Technology and Core Architecture
Shrinking process technologies allow for higher integration along with lower run-time power, at the cost of increased static power due to higher leakage.

Figure 1: NXP has unique FD-SOI enablement in a large dynamic gate and body biasing voltage range, low quiescent current bias generators, and enhanced Analog-to-Digital Converter (ADC) performance.

FD-SOI counters the trend of growing leakage through the following methods:

  • Leveraging existing manufacturing techniques to apply an ultra-thin buried oxide layer on top of the silicon base (see Figure 1). The transistor channel is formed with a thin layer of silicon above the oxide layer. The buried oxide layer curtails the flow of electrons between the source and drain, which naturally reduces leakage current.
  • The thinner transistor channel also allows for the use of lower Vdd voltages through better electrostatic control.

Additionally, FD-SOI’s ability to Reverse Body Bias (RBB), applying a negative voltage on the back side of the channel, can create a barrier preventing the movement of electrons.

Even though the shrinking process technology naturally allows for lower dynamic currents, FD-SOI can go a step further in reducing dynamic power through the following:

  • Wide Vdd voltage range, allowing for operation at very low power levels.
  • Forward Body Biasing (FBB) reduces the operating Vdd voltage for a given frequency. Instead of impeding the movement of electrons due to RBB, now electrons are encouraged to move.
  • FD-SOI construction results in lower parasitics, lowering the dynamic power of the transistor.
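The body-biasing trade-off can be sketched with the textbook subthreshold-leakage relation, in which leakage falls exponentially as the threshold voltage rises. The coefficients below are generic illustrative values, not NXP 28-nm FD-SOI data:

```python
# Illustrative model: subthreshold leakage ~ exp(-Vt / (n * vT)).
# Reverse body bias (RBB) raises Vt, cutting leakage; forward body
# bias (FBB) lowers Vt for speed at the cost of leakage. All
# coefficients are generic textbook assumptions, not process data.

import math

VT_THERMAL = 0.026      # thermal voltage kT/q at room temperature, volts
N = 1.3                 # subthreshold slope factor (assumed)
K_BODY = 0.085          # Vt shift per volt of body bias (assumed)

def relative_leakage(vbb):
    """Leakage relative to zero body bias; vbb < 0 means reverse bias."""
    delta_vt = -K_BODY * vbb              # RBB (vbb < 0) raises Vt
    return math.exp(-delta_vt / (N * VT_THERMAL))

# With these coefficients, 1 V of RBB cuts leakage by roughly an order
# of magnitude, while 1 V of FBB raises it by a similar factor.
```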

Figure 2: Body-biasing means that the device can be faster when required and more energy efficient when performance isn’t critical.

As process technologies have gotten smaller, other trade-offs have arisen. Indeed, designing for analog signals has become more complicated. However, FD-SOI relaxes density rules, allows higher gain, a closer match of components (which reduces the need for compensation during layout), and achieves lower 1/f noise [1].

The industry is hitting a physical wall as miniaturization continues to approach ever-smaller nanometer nodes. The physics of electricity at nanometer scale have begun to interfere with our design dreams of better, faster, smaller, cheaper, and more power-efficient chips. The smaller the architecture, the more significant the potential is for latch-up, as transistors within integrated chips are spaced physically closer to each other. Latch-up provides a potentially catastrophic alternate path for current flow, and until power is cycled to the chip, latch-up is present even after the condition that caused it is no longer present. However, the ultra-thin buried oxide layer utilized in the FD-SOI process provides immunity from latch-up.

Features Enabled by FD-SOI
Taking a markedly different approach than do vendors who rebrand cell phone or tablet chips as ‘IoT SoCs,’ NXP has designed the new i.MX 7ULP applications processor from the ground up as an IoT device—choosing a low-power 28nm FD-SOI process node, a set of power-sensitive peripherals, and an architecture featuring dual power domains based on the Cortex-A7 and the Cortex-M4 cores.

By leveraging the process technology, power friendly heterogeneous architecture, and multiple smart power modes, the processor can achieve impressive ultra-low power consumption levels, with a deep sleep suspend of 50 µW or less.
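Why the deep-sleep figure matters follows from simple duty-cycle arithmetic: once a device spends almost all of its time asleep, the suspend power sets the ceiling on battery life. The 50-µW figure comes from the text; the active power and duty cycles below are assumptions:

```python
# How bursty duty-cycling translates to average power. The 50 µW
# deep-sleep suspend figure is from the text; the active power and
# the duty cycles are illustrative assumptions.

SLEEP_W = 50e-6          # deep-sleep suspend power (from the text)
ACTIVE_W = 0.5           # power during an active burst (assumed)

def average_power(duty):
    """Average power for a given fraction of time spent active."""
    return ACTIVE_W * duty + SLEEP_W * (1.0 - duty)

# At a 0.1% duty cycle the bursts still dominate (~550 µW average);
# at 0.001% the 50 µW sleep floor is essentially all that remains,
# which is why the suspend current ultimately bounds battery life.
```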

The i.MX 7ULP offers Rich OS support (Linux, Android), as well as sophisticated Real-Time Operating System (RTOS) support (FreeRTOS), and additional features suitable for IoT or any portable use case that demands long battery life.

The i.MX 7ULP family of processors is faster when required and more energy-efficient when performance is not as critical, enabling dynamic trade-offs. Engineers no longer face a forced selection: low-power processor or high-performance processor. Rather, the selection for performance or power efficiency can be made instantaneously, as needed, without having to reconfigure.

Figure 3: The i.MX 7ULP block diagram. The i.MX 7ULP is extremely flexible, with the ability to dynamically transition from high performance to ultra-low power consumption while maintaining active operation. Bursty performance increases can be applied as needed, to offer the best of both worlds, which is especially relevant to IoT applications. (Image: NXP Semiconductors)

The i.MX 7ULP is well-suited for IoT edge devices, as well as smart home controls, building automation, portable patient monitoring, wearables, and portable scanners. The i.MX 7ULP reaches a new level in IoT by offering the high performance required for rendering rich graphical images on a power-sipping wearable.

The IoT bestows upon nearly every industry the promise of significant forward progress by granting access to data at levels heretofore unseen. Harnessing the benefits of knowledge gained through the IoT is not going to be free or arrive without hazard. Yet seeking ever-higher productivity, with the potential to progress from the unknown to the well-informed, and in acquiring a new vehicle with which to spark innovation, we can’t help but proceed.

[1] Note that 1/f noise is found in electronics, music, nature, and other areas, and cannot be filtered out of an analog circuit. Chopper stabilization can be used but introduces switching noise.

Joe Yu is the Vice President and General Manager of the Low-Power MPU & LPC MCU product lines at NXP Semiconductors. He has been in the semiconductor industry since 1988 working for companies including NXP/Philips, Freescale, Toshiba, Altera and Atmel. Yu’s work experience includes applications engineering, marketing, business development as well as general management. His passion is to develop low-power microprocessors and microcontrollers for broad market applications. As greater levels of processing and connectivity push to the edge nodes, one of the key areas of focus he has been championing is to find new ways to reduce the power consumption of these IoT devices.

Yu has a BSEE degree from Santa Clara and resides in Palo Alto, CA.

Emerging Applications Spell the End of the Battery’s Life

Tuesday, February 20th, 2018

New mobile applications, such as wearables and mobile gaming, are driving a shift in power management at the system level, as designers seek alternative ways to meet performance targets and design challenges.

Applications such as autonomous driving and artificial intelligence are driving demand for higher-performance, higher-efficiency processors. These rely on processing images in real time and dissecting each image to identify and locate elements within it, whether to detect objects or to learn behavior.

Image processing and image recognition offer the possibility of creating new operations in other markets. Nick Pandher, Director of Market Development for Radeon Professional Graphics at AMD, believes we are only seeing the tip of the graphics processing iceberg for deep learning. For example, he suggests, GPUs could be used in financial institutions and organizations to model a training framework on a financial data set, performing the analysis usually done by someone with a financial background. They can also be used to look for anomalies in staff log-ins to flag issues and highlight at-risk areas in an organization’s operations.

There are also medical uses for treatment, where anomalies can be quickly identified, and in research, where patterns can be detected across multiple data frameworks to link symptoms.

Power Adopts a Game Face
AMD has based its latest Graphics Processing Unit (GPU), the Radeon Vega Mobile, on 14nm FinFET technology to meet the form-factor and power demands of emerging markets such as mobile gaming.

Mobile gaming places different demands on a GPU than a desktop application does. For example, for mobile, the GPU has to be light in weight and small in size to integrate into mobile devices or Virtual Reality (VR) headsets. It has to be thermally efficient, since a fan adds weight and imposes space restrictions, yet it must deliver laptop-class performance.


Figure 1: Mobile gaming is expected to account for half of the revenue generated by the video game industry worldwide by 2020. Picture Credit: AMD

FinFETs are 3D Field Effect Transistors (FETs), named after their fin-like structure rising above the substrate. The transistor’s gate wraps around the fin to reduce the amount of current leaking when the device is in the off state. This approach lowers threshold voltages to improve power consumption without increasing the die size.

Scott Wasson, Senior Manager of Technical Marketing, AMD, confirms the reason for the choice of transistor: “The key thing for anyone building a chip like [Radeon Vega Mobile] is to keep voltage as low as you can. . . . Radeon WattMan [AMD’s power management, based on Radeon software which controls GPU voltage, clocks, fan speed, and temperature], can be used to tweak and tune the voltage,” he says.

By adding “a few bits of special sauce” to the earlier Polaris architecture, the company has improved switching speed and performance in the Vega mobile architecture, explains Wasson. “It is very important to always be refining the power management algorithms we build into hardware and software,” he says. “The essential strategy is to provide performance when needed and to turn down the clocks, and the power, when you don’t need the performance, in order to conserve power,” he adds.

While the Vega Mobile, announced at CES last month, is not yet VR-ready, it is built to be small and relatively low power so that it can meet VR benchmarks as AMD’s partners develop products for VR and mobile gaming.

Wearable Challenges
For wearable devices, the same restrictions on weight, size, and power apply as in mobile gaming processors. They have to be light enough to be worn during fitness activities or light and unobtrusive if used in medical monitoring. For both, they should be wireless too, so that the wearer can record or gather data without being tethered.

Products such as watches, trackers, and monitors rely on a battery for power, but the processor’s power system must be able to regulate voltage from the battery. The problem is that the battery runs down, so the system has to manage a source with a declining voltage output. Some wearable device functions need a higher voltage than the 3.2V to 4.2V typical of a rechargeable Lithium Ion battery. Many wearable products use main power rails that sit below the minimum voltage of a single-cell Lithium Ion battery, so the rails are sourced by a step-down regulator, possibly more than one.
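Why a step-down regulator copes with the declining battery can be seen from the ideal buck converter relationship D ≈ Vout/Vin: as long as the rail sits below the battery's minimum voltage, the duty cycle stays below 1 across the whole discharge range. A minimal sketch, assuming a 1.8V rail (a common wearable rail voltage chosen for illustration, not a figure from the text):

```python
def buck_duty_cycle(v_out: float, v_in: float) -> float:
    """Ideal (lossless) step-down conversion ratio D = Vout / Vin."""
    if v_out > v_in:
        raise ValueError("a buck regulator cannot step up")
    return v_out / v_in

# A 1.8V rail remains reachable across the Li-ion discharge range:
# duty cycle rises from ~0.43 at full charge to ~0.56 near empty.
for v_batt in (4.2, 3.7, 3.2):
    print(v_batt, round(buck_duty_cycle(1.8, v_batt), 3))
```

The regulator simply widens its duty cycle as the cell sags, which is exactly the "managing a source with a declining voltage output" problem described above.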

Maxim Integrated introduced the MAX14690 battery charge device last year (Figure 2), targeting low-power wearable applications. It has a linear battery charger with a smart power selector, two low-power buck regulators, and three low-power, Low DropOut (LDO) linear regulators.

Figure 2: The MAX14690’s level of integration minimizes footprint for power management in wearable devices.

If the device is connected to a power source, the power selector allows the device to operate when the battery is dead. The input current to the selector is limited, based on an I2C register, to avoid overloading the power adapter. If the charger power source cannot meet the supply needs for the whole system load, the smart power control circuit can supplement the system load with current from the battery.

To conserve power during periods of light-load operation, the synchronous step-down buck regulators offer a burst mode option alongside a fixed-frequency Pulse Width Modulation (PWM) mode for regulating the load. The output can be programmed via the I2C bus.

The LDO linear regulators can also be programmed via I2C and configured to operate as power switches to disconnect the quiescent load of the system peripherals for power management.

This is all packed into a 36-bump, 0.4mm pitch 2.72 x 2.47mm Wafer Level Package (WLP). Maxim Integrated also offers the MAX14690 Evaluation Board, an assembled and tested circuit for evaluating the device.

Battery Management in the IoT
The nature of the Internet of Things (IoT) means it has its own low-power requirements. Devices are wireless, mobile, often located remotely, and rely on batteries. In many designs, the battery is the sticking point. The battery can be expensive to replace. Additionally, the remoteness of the IoT node, either geographically or in a hard-to-reach spot in a building or factory, can make battery replacement a time-consuming exercise. Hence the concentration by many Power Management Integrated Circuit (PMIC) manufacturers on taking a system-level approach to power management.

The vision of the IoT is for hundreds of billions of nodes to still be active into the next century, despite being located in hard-to-reach, inaccessible, or hostile locations. For Dr. Peter Harrop, Chairman of market research firm IDTechEx, this means batteries will have to go (Battery Elimination in Electronics and Electrical Engineering 2018-2028). In a series of reports, he points out that batteries have “serious limitations of cost, weight, space, toxicity, flammability, explosions, energy density, power density, leakage current, reliability, maintenance and/or life.” He is clearly not a fan. He continues: “Lithium Ion batteries will dominate the market for at least 10 years, and probably much longer, yet no Lithium Ion cell is inherently safe and no Lithium Ion battery management system can ensure safety in all circumstances.”

Energy harvesting is a practical alternative for eliminating batteries. A Power Management Unit (PMU) converts the DC power drawn from an energy source into a regulated supply. For example, the ADP5091 and ADP5092 from Analog Devices (Figure 3) can be used in PhotoVoltaic (PV) cell energy harvesting, ThermoElectric Generator (TEG) energy harvesting, industrial monitoring, and self-powered wireless sensor devices, as well as portable and wearable devices.

Figure 3: Analog Devices takes a system-level approach to IoT battery management

The PMUs harvest from 6µW to 600mW, with an internal cold-start circuit that allows input voltages down to 380mV. In both devices, the charging control function protects the rechargeable energy storage by monitoring the battery voltage against the programmable charging termination voltage and the shutdown discharge voltage.
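The charging-control behavior described above amounts to a threshold comparison on the storage element's voltage. The sketch below illustrates the concept only; it is not the ADP5091/ADP5092 register-level interface, and the threshold values are hypothetical placeholders for the programmable settings.

```python
def charge_state(v_batt: float,
                 v_term: float = 4.2,       # programmable termination voltage
                 v_shutdown: float = 2.8):  # shutdown discharge voltage
    """Classify what the charging control block should allow."""
    if v_batt >= v_term:
        return "terminate_charging"   # storage is full; stop charging
    if v_batt <= v_shutdown:
        return "disable_discharge"    # protect storage from deep discharge
    return "charging_allowed"         # normal harvesting/charging window

print(charge_state(4.25))  # terminate_charging
print(charge_state(3.6))   # charging_allowed
print(charge_state(2.5))   # disable_discharge
```

Keeping the storage element inside this window is what lets a harvested-energy node run unattended for years without damaging its rechargeable cell.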

There is the option to connect a primary cell battery, managed by an internal power path management control block. This enables the power source to switch among the energy harvester, the rechargeable battery, and the primary cell battery.

Figure 4: The 24-lead LFCSP ADP5091 by Analog Devices could eliminate batteries in the Industrial IoT

The company also offers the ADP509 evaluation board, based on the ADP509 PMIC and an Alta Devices PV cell. It includes a PV panel and power management circuitry to enable devices to be powered by energy harvesting.

Caroline Hayes has been a journalist covering the electronics sector for more than 20 years. She has worked on several European titles, reporting on a variety of industries, including communications, broadcast and automotive.


Low-Density Serial Flash Is Back—and IoT’s the Reason

Friday, August 4th, 2017

Low power and enhanced system performance fuel a re-emergence.

For the past decade, we have seen an abundance of systems employing off-the-shelf MCU devices with embedded flash. However, as designers now try to future-proof their applications by enhancing performance, reducing power consumption, and extending battery life, several new challenges have emerged.

Amazon Echo, Google Home, and many other home hubs, media controllers, and building/home automation controllers are beginning to have a significant impact on our lives. These technologies bring with them new protocols and standards. Designers want to add this new functionality to their applications in the form of Over-the-Air (OTA) updates, which requires larger firmware images for these new applications to retain compatibility, functionality, and interoperability. These enhanced applications need more memory space and often exceed the flash and RAM resources of the embedded MCU. OTA capability requires extra memory to store the additional firmware code and must support up to three or four complete firmware images: the factory default image, the current image, and the newly downloaded image ready to be shadowed to the internal MCU flash. OTA updates must happen easily, reliably, securely, safely, and autonomously.
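The memory arithmetic behind holding multiple complete images is straightforward. A hedged sizing sketch follows; the 512 KB image size is an assumption for illustration, not a figure from the text, and the density table lists common low-density serial flash parts.

```python
STANDARD_DENSITIES_MBIT = [1, 2, 4, 8, 16, 32]  # common low-density parts

def ota_flash_mbit(image_kbytes: int, n_images: int = 3) -> int:
    """Smallest standard serial-flash density holding `n_images` copies."""
    needed_mbit = image_kbytes * 8 * n_images / 1024.0  # KB -> Mbit
    for density in STANDARD_DENSITIES_MBIT:
        if density >= needed_mbit:
            return density
    raise ValueError("image set exceeds largest standard low-density part")

# Three 512 KB images (factory, current, downloaded) need 12 Mbit,
# so the next standard density up is a 16 Mbit part.
print(ota_flash_mbit(512, 3))  # 16
```

This is why an MCU with only a few megabits of embedded flash typically ends up paired with a 1Mbit to 32Mbit external device once OTA support is added.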

Figure 1: New edge nodes need OTA update capability, local data storage, and system flexibility. (Courtesy Adesto Technologies)

So what about the MCUs? As MCU vendors scale to smaller geometries, the size and cost of embedded flash pose problems of their own. Higher-density flash embedded inside the MCU can increase cost while reducing performance and/or increasing power consumption. Even with the latest MCUs, engineers wonder when they will next run out of memory. We have already seen the first of a new generation of flash-less MCU devices this year: MCU vendors abandoned embedded flash and moved to a more cost-effective, higher-performance MCU-only solution paired with external memory.

Modern day IoT application challenges have brought about an increased demand for low-density serial flash. This is all happening while the market enters a phase of supply shortages, capacity restrictions, and a rash of portfolio-wide end-of-life notifications. An MCU used inside a smart sensor, smart door lock, or other IoT edge node device with 256Kbit to 8Mbit of embedded flash will typically need between 1Mbit and 32Mbit of external flash to provide external OTA capability.

Standard serial flash devices have been available for decades, but their evolution has focused on higher density, higher performance, and lower cost. Designers often select an external memory device only as a last resort, when it becomes evident they would otherwise need a larger MCU with more flash; that last-minute decision comes at higher cost, with a new PCB layout and a complete new code image. Alternatively, designers include an external memory up front to allow expansion of the current system to support new features and requirements.

At Odds
It can be argued that the current range of serial flash solutions is often at odds with application demands. Today’s serial flash devices have architectures suited to high read performance and low cost, while power consumption and programming flexibility are sacrificed. Adding these memory devices substantially increases power consumption, reduces battery life, increases MCU overhead, and reduces system performance, and the buffering needed to support the external flash during OTA programming and updates often exceeds the MCU’s embedded SRAM resources.

Figure 2: New protocols and standards are arriving in the wake of new home hubs, media controllers, and building/home automation controllers. (Courtesy Adesto Technologies)

Factors such as system security are also impacted. This is due to the need to erase and reprogram large blocks of data, the inability to permanently protect factory code images and the generic un-personalized nature of the commodity memory devices—all of which can lead to vulnerabilities in the device trust chain. In addition, components such as voltage regulators may be required to ensure correct operation.

The latest generation of low-density memory devices have architectures and features optimized to address many IoT device and edge node application demands.

  • Wide Voltage Range Operation
    • Devices supporting a wide operating voltage range (e.g., 1.65V to 4.4V) that matches the VCC range of the host MCU eliminate the need for additional voltage regulation and maximize battery life.
  • Optimized Operating Power
    • Read and programming currents are optimized to reduce energy consumption and to further maximize battery life.
  • Ultra-Low Standby Power Consumption
    • Devices with a command-driven ultra-low-power standby mode, drawing under 100 nanoamps, negate the need to switch the power supply to the memory device to conserve power when the memory is not in use, saving further external components and MCU GPIO pins.
  • OTA-Optimized Small-Page Program and Erase Structure for Data Storage
    • Small page erase capability allows code updates as small as 256 bytes to be erased and programmed with minimal MCU overhead, while improving performance and reducing update time.
  • Serial EEPROM Emulation Capabilities
    • A byte write/byte erase capability that does not require large block erase and considerable MCU resources to manage it allows simple data and system configuration to be updated a few bytes at a time.
  • One Time Programmable Lockable Secure Sectors
    • One-time-programmable sector locking capability to protect critical code and factory code images.
  • Intelligent MCU-Friendly Active Memory Interface
    • Features that allow the MCU to sleep and reduce system power while the memory is programming or erasing. The memory device becomes an active peripheral that generates its own interrupt signal when an operation completes, negating the need for the MCU to wake up and poll the memory for status, or to run an inefficient fixed delay loop for each programming or erase operation.
  • Enhanced Command Rich Serial Memory Interface
    • A rich command set that reduces MCU overhead, such as a single command that performs both an erase and a buffer program in one operation, versus having to issue an erase command, wait for and monitor the erase operation, and then issue and monitor the programming command. This saves MCU overhead and execution time and improves system performance and battery life.
  • On-Chip User Accessible Security Numbers
    • Memory devices that contain user accessible unique identity numbers to allow the device to be integrated into a system trust chain of components for enhanced security.
  • Flexible SRAM Buffers
    • Flexible SRAM buffers inside the memory device that can be read from as well as used in the traditional programming-only mode, allowing data and code storage to be loaded directly to the SRAM buffer and read back or modified later. These R/W SRAM memory buffers can even be used as extended scratch pad space to supplement the embedded SRAM and stack space on the MCU.
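To see why the small-page structure in the list above matters, consider patching a small amount of code on a flash that offers only 4 KB block erase versus one with 256-byte page erase. The timings below are illustrative placeholders, not datasheet values; the point is the read-modify-write penalty, both in update time and in the MCU SRAM scratch buffer required.

```python
def ceil_div(a: int, b: int) -> int:
    """Ceiling integer division."""
    return -(-a // b)

def block_update(patch_bytes: int, block_bytes: int = 4096,
                 erase_ms: int = 45, prog_ms_per_page: int = 1):
    """4 KB block erase: the whole block must be buffered in MCU SRAM,
    erased, and fully reprogrammed even for a tiny patch."""
    blocks = ceil_div(patch_bytes, block_bytes)
    time_ms = blocks * (erase_ms + (block_bytes // 256) * prog_ms_per_page)
    return time_ms, block_bytes   # (update time, SRAM scratch needed)

def page_update(patch_bytes: int, page_bytes: int = 256,
                erase_ms: int = 12, prog_ms_per_page: int = 1):
    """256-byte page erase: only the pages actually touched are cycled."""
    pages = ceil_div(patch_bytes, page_bytes)
    return pages * (erase_ms + prog_ms_per_page), page_bytes

print(block_update(1024))  # (61, 4096): slower, and needs a 4 KB buffer
print(page_update(1024))   # (52, 256):  faster, with a 256-byte buffer
```

The SRAM column is usually the decisive one on a small MCU: a 4 KB scratch buffer may be a large fraction of the embedded SRAM that the application itself needs during an OTA update.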

Low Density, High Interest
The rise of new IoT applications is introducing new demands and driving a resurgence of interest in low-density memory. Many of these devices are battery powered and need to be Amazon Alexa or Google Home compatible. To keep pace with evolving and growing standards, all of these new edge nodes require OTA update capability, local data storage, and system flexibility. A nominally low-power memory that forces the MCU to burn more energy managing it is a false economy. The latest Adesto serial memory product options provide intelligent, MCU-friendly features to reduce overhead, improve security options, increase device flexibility, improve performance, reduce energy consumption, and extend battery life. The memory device is a critical component in any modern system, and can now be treated as an intelligent system peripheral that works autonomously with the MCU rather than just a simple memory store that is a slave to the host MCU.

Paul Hill is Senior Director of Product Marketing for Santa Clara, CA-based Adesto Technologies. Adesto (NASDAQ:IOTS) is a leading provider of application-specific, ultra-low power, smart non-volatile memory products. Contact Paul at

Cooler, Safer Smart Home Hub: What a Difference a Diode Makes

Thursday, June 22nd, 2017

Steps taken to solve high forward voltage drop and high reverse leakage current issues.

Your customer has successfully launched a new smart home hub with a design that wirelessly controls door locks, lights, thermostats, audio, and electrical appliances, all while sending notifications to the homeowner. The soap-bar-sized gadget (Figure 1, lower left corner), powered by a wall adapter, is packed with electronics and includes backup batteries in case of a power outage. However, the sleek design is pushing the device’s thermal limit, and now the customer is concerned that the heat generated by the hub may become a problem, compromising the product’s success. The customer asks for help to seamlessly reduce the heat generated without compromising the overall design. This design solution will examine the best option for addressing this issue.

Figure 1: Smart Home Hub

Power Management Implementation
Figure 2 shows a diagram of the customer’s current smart hub system. It’s powered by a 5V wall adapter and has a non-rechargeable backup battery. Three AA alkaline batteries (1.5V x 3, 2Ah) support the 200mA average load and the always-on buck converter outputs a nominal 2.5V. The radio communicates with home appliances equipped with wireless protocols such as Z-Wave™ and ZigBee™. The Ethernet connection ensures exchange of notifications and events with the cloud, but the device will still work locally provided there is power.
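From these figures one can estimate the backup runtime. A quick sketch, assuming a 90% efficient buck converter (the efficiency figure is an assumption for illustration; the battery and load figures come from the system description above):

```python
def backup_runtime_hours(batt_ah: float, v_batt: float, v_out: float,
                         i_load_a: float, efficiency: float = 0.9) -> float:
    """Hours of backup: battery capacity divided by the battery-side
    current drawn by the buck converter supplying the load."""
    p_out_w = v_out * i_load_a                  # power delivered to the load
    i_batt_a = p_out_w / (efficiency * v_batt)  # current drawn from battery
    return batt_ah / i_batt_a

# 3 x 1.5V AA cells (4.5V, 2Ah) feeding a 2.5V, 200mA average load:
print(round(backup_runtime_hours(2.0, 4.5, 2.5, 0.2), 1))  # ~16.2 hours
```

Because the buck converter draws power, not current, from the battery, the backup time comes out longer than the naive 2Ah / 200mA = 10 hours estimate would suggest.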

Figure 2: Your Customer’s Smart Hub System

The active and passive components of the smart hub’s power circuit are shown in Figure 3. The two diodes are housed in an SOT23-3 (2.6mm x 3mm) package, outlined in red.

Figure 3: Smart Hub Power Circuit Footprint (46mm2)

Design Shortcomings
An initial thermal design analysis shows that the always-on buck regulator is operating quite efficiently. So, the focus quickly turns to the two Schottky diodes. At 200mA, these diodes develop a 600mV voltage drop, dissipating 120mW. The power delivered to the load is 2.5V x 0.2A = 500mW. This means that the diode adds a 24% power dissipation overhead.

Another concern with Schottky diodes is leakage. In normal operation, the diode connected to the alkaline battery is reverse-biased, yet it leaks a current of about 1µA back into the battery, with the leakage increasing over temperature. This leakage effectively trickle-charges the battery for an inordinately long time, until a power outage comes along to discharge it. The problem is that these are non-rechargeable batteries that carry a warning: “If recharged they may explode or leak and cause burn injury.” This is not what we intended for the customer.

What are the Options?
Clearly, it’s critical to drastically reduce the power dissipation. One option is a low-RDSON n-channel DMOS-based solution such as the one shown in Figure 4. The diodes intrinsic to the MOSFETs are indicated with dashed lines. In this configuration, the control IC biases the gate according to the voltage sensed across the MOSFET. A positive source-to-drain voltage turns the MOSFET “on,” with current flowing in reverse mode (source to drain). A negative source-to-drain voltage turns the MOSFET “off,” with the intrinsic diode reverse-biased. This solution requires two MOSFETs and two control ICs, making it bulky and expensive, and it would require a redesign of the entire PCB.

Figure 4: A Discrete Solution to the Diode OR-ing Problem

The best solution would be a small diode with dramatically lower losses than a Schottky diode and little or no reverse current. Fortunately for your customer, such a device is available.

An Ultra-Tiny, Micropower, 1A Ideal Diode with Ultra-Low Voltage Drop
Based on a low RDSON p-channel DMOS, the MAX40200 diode (Figure 5) drops a voltage about an order of magnitude lower than that of Schottky diodes. The internal circuitry senses the MOSFET drain-to-source voltage and, in addition to driving the gate, keeps the body diode reverse-biased. This additional step allows the device to behave like a true open switch when EN is pulled low, or when hitting its thermal limit. A positive drain-to-source voltage turns the MOSFET “on,” with current flowing in normal mode while the body diode is reverse-biased. A negative drain-to-source voltage turns the MOSFET “off,” with the intrinsic diode again reverse-biased. If EN is low then the device is ‘off’ independently of the VDD-OUT polarity.

Figure 5: Ideal Diode Functional Diagram

When forward-biased and enabled, the MAX40200 conducts with less than 100mV of voltage drop while carrying currents as high as 1A. The typical voltage drop is 43mV at 500mA, with the voltage drop increasing linearly at higher currents. Figure 6 shows the ideal diode’s forward voltage vs. forward current and temperature. The MAX40200 thermally protects itself and any downstream circuitry from overtemperature conditions. It operates from a supply voltage of 1.5V to 5.5V and is housed in a tiny, 0.73mm x 0.73mm, 4-bump wafer-level package (WLP).

Figure 6: Ideal Diode Forward Voltage

The tiny MAX40200 WLP package is about one order of magnitude smaller than the SOT23-3 housing the two Schottky diodes in Figure 3. Two MAX40200s fit easily in place of the SOT23-3 position on the PCB with additional room to spare. The MAX40200 application footprint is shown in Figure 7.

Figure 7: Integrated Smart Hub Power Solution with Two MAX40200s

With a 20mV drop at 200mA, the device consumes only 20mV x 200mA = 4mW of power, greatly reducing the thermal load. Additionally, the MAX40200 reverse leakage current is mostly from Cathode/OUT to ground, while the Anode/VDD reverse current is practically null (Figure 8). This effectively eliminates the unwanted trickle charge of the non-rechargeable battery. The MAX40200-based solution, easily and quickly implemented by the relieved customer, solves both the heat and leakage problems in the smart home hub device.
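The case study's numbers can be checked with the one-line conduction-loss model P = Vf × I. This is simply the arithmetic already stated in the text, not a device model:

```python
def diode_loss_mw(v_drop_mv: float, i_load_ma: float) -> float:
    """Conduction loss of a diode: forward drop times load current."""
    return v_drop_mv * i_load_ma / 1000.0

schottky_mw = diode_loss_mw(600, 200)   # 120 mW across the Schottky diodes
ideal_mw    = diode_loss_mw(20, 200)    # 4 mW across the MAX40200
load_mw     = 2.5 * 200                 # 500 mW delivered to the load

print(schottky_mw / load_mw)   # 0.24 -> the 24% dissipation overhead
print(schottky_mw / ideal_mw)  # 30.0 -> the factor-of-thirty reduction
```

The linearity of the model also shows why the ideal diode scales gracefully: halving the load current halves the drop and quarters the loss, whereas the Schottky's roughly fixed forward voltage keeps its overhead proportionally high.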

Figure 8: Leakage Current Into VDD

We discussed the constraints of a smart hub’s power management system in the context of a customer problem case study. The customer’s smart hub had severe problems directly related to the classic shortcomings of Schottky diodes, namely the high forward voltage drop and high reverse leakage current caused by these diodes. The smart home hub generated too much heat and the reverse leakage current was unwittingly trickle-charging the non-rechargeable battery. To address this, two MAX40200 diodes were used to seamlessly replace the two Schottky diodes with little modification of the PCB. This practically eliminated the leakage current into the battery and reduced the power dissipation overhead by a factor of thirty, making for a happy, satisfied customer.

Nazzareno (Reno) Rossetti, Principal Marketing Writer at Maxim Integrated, is a seasoned Analog and Power Management professional, a published author and holds several patents in this field. He holds a doctorate in Electrical Engineering from Politecnico di Torino, Italy.

Steve Logan is an executive business manager at Maxim Integrated, overseeing the company’s signal chain product line. Steve joined Maxim in 2013 and has more than 15 years of semiconductor industry experience, both in business management and applications engineering roles. Steve holds a BSEE degree from San Jose State University.

Fully Customizable Gateway SoC Platform Targets Wide Variety of IoT Applications

Wednesday, April 19th, 2017

How the design methods used to develop an IoT Gateway SoC with a customizable platform can reduce risk, schedule, and costs.

A gateway device plays a critical role in the Internet of Things (IoT) by collecting data from the sensors located at the network edge. It then filters, analyzes, and normalizes the data, and sends it to the cloud for sharing with the network. Designing a gateway SoC from scratch is a challenge that involves not only developing the SoC architecture, software, and hardware, but managing the integration and validation as well. These activities take a significant amount of time and require the involvement of a large design team, which results in longer design cycles and longer time-to-market. There are a variety of applications where these gateways are used, such as surveillance, deep learning, artificial intelligence, data storage, data security and more. Designing a custom SoC for each application from scratch is not a viable solution. Designing a single SoC for all the applications is also not feasible due to the huge investment, risk, and time consumption.

Gateway SoC design is greatly simplified when a gateway platform is utilized. A gateway reference platform offers a modular design that is fully customizable to enable multiple solutions on a single gateway SoC. This helps in reducing system BOM cost and speeding time-to-market. The reference platform approach enables efficient hardware and software partitioning, custom IP integration, and device software development during custom SoC development. The key to creating cost effective custom silicon for the IoT is the platform approach because it reduces risk, schedule, and cost.

A gateway SoC may be a simple device that captures the data from various slow-speed peripherals, then packs the data and sends it through low- to medium-speed peripherals to the cloud. The gateway SoC can also be a complex device that captures data from different high- and low-speed interfaces, performs pre- and post-processing of data, performs analytics, and then sends the resulting data to the cloud through high-speed interfaces. To build a gateway SoC suitable for different applications, it is important to perform partitioning of the silicon such that the simple gateway device is a subset of a complex gateway SoC. An intelligent approach is to keep the simple gateway SoC design as a basic building block and add other IP blocks to create complex gateway SoC variants. However, adding multiple application-specific IPs into a single SoC will increase cost, since each IP is expensive. Also, if an application does not use an IP, then the IP will be redundant for that SoC.

A gateway SoC needs to be validated for functionality, and this is achieved using a platform approach. All the IPs that need to be part of a simple gateway design are carefully selected such that the same IPs can be seamlessly reused in complex gateway designs. The fastest way to develop a gateway SoC is to use different pre-silicon design methodology platforms—including RTL, virtual, and FPGA platforms. By using these pre-silicon methodology platforms, the design can be verified in simulation. In addition, the software stack can be developed on the virtual prototyping platform, and the end use case application can be tested with real peripheral devices and software for functionality on the FPGA platform.

Software drivers are developed and tested along with RTL verification, so the bring-up effort on software development is greatly reduced. The FPGA prototyping platform aids the designer to test the design with real-life peripheral components or devices and check the functionality of the end product or design. This approach significantly boosts the confidence of the designer and the success rate of first-time working silicon at tape out.

Figure 1 shows the pre-silicon methodology platforms for successful IoT gateway SoC design.

Once the design is verified on pre-silicon methodology platforms and taped out for fabrication, the design team can prepare for silicon bring-up by enhancing the software and designing the bring-up board. This increases the efficiency of the design team, consumes less design time, and reduces design errors.

During the gateway SoC design, the design team can create multiple designs for various end applications, retaining common core blocks and adding newer blocks to create additional variants that result in new silicon parts for different applications in a short amount of time.

Figure 1: Gateway SoC Design Platforms

The key advantage of using gateway SoC platforms is that the core block can be verified once, with its software drivers written and reused as a core library. New application-specific IPs can be seamlessly integrated with the verified core library to create new gateway SoC designs. By adding new test cases and designing new daughter cards for the application-specific IPs, validation of new designs can be performed faster, with less design effort and minimal risk.

Figure 2 shows an example of a Gateway SoC consisting of generic and application-specific IP sections.

Figure 2: IoT Gateway SoC Partition


A sample IoT gateway SoC consists of two partitions: a generic IP section and an application-specific IP section. The generic IP section consists of IP blocks that are common across applications, such as the processor complex; DDR3/4 and flash memories; high-speed peripherals such as PCIe, USB, and SATA; and low-speed peripherals such as SPI, I2C, UART, CAN, and timers.

The IPs in the generic IP section are carefully selected and verified on the pre-silicon methodology platforms. On the FPGA platform, a daughter card containing all the peripherals required by the end application is designed, so the design is verified under conditions close to the end application.

Figure 3. Custom SoC-Based Smart City IoT Gateway Reference Board


Once the generic IP section design is verified, different variants of the gateway SoC can be designed by adding application-specific IPs to the generic IP section based on end applications. The design with new IPs integrated is verified on the pre-silicon methodology platforms. An add-on daughter card with specific peripheral devices can be designed to validate the newly added application-specific IPs. Several SoCs can be designed in parallel for different applications. This will reduce time-to-market, minimize bugs in the design, and lower overall design cost.

Case Study

The gateway SoC methodology platforms were utilized in an IoT gateway custom SoC design. The SoC is intended for IoT gateways in smart city applications. Figure 3 shows the smart city IoT gateway reference board built around the custom SoC. The gateway is a full-featured device that supports various types of wireless and wireline connectivity, communicates with IoT edge devices, and connects to the cloud through 3G/LTE/Wi-Fi.

Figure 4. IoT Gateway SoC Platform Software Features



Utilizing the IoT SoC platform approach detailed above is an industry best practice for designing custom SoCs for various IoT applications with reduced risk, schedule, and cost.

Naveen HN is an Engineering Manager for Open-Silicon. He oversees board design, post-silicon validation and system architecture. He also facilitates Open-Silicon’s SerDes Technology Center of Excellence and is instrumental in the company’s strategic initiatives. Naveen is an active participant in the IoT for Smart City Task Force, which is an industry body that defines IoT requirements for smart cities in India.

How Richer Content is Reshaping Mobile Design

Tuesday, March 29th, 2016

Low- and mid-range phone memory demand is heating up more rapidly than ever, but with 3D NAND, things are looking (and stacking) up.

Editor’s Note: When Intel and Micron together announced the availability of a NAND technology that tripled capacity compared to other solutions, the two firms pointed to “mobile consumer devices” as among the beneficiaries of the storage breakthrough. Flash forward to March of this year, when Micron’s Mike Rayfield, VP and GM of the company’s Mobile Business Unit, shared the Mobile World Live “How Richer Content is Reshaping Mobile Design” webinar stage with Ben Bajarin, Principal Analyst, Global Consumer Tech, Creative Strategies and moderator Steve Costello, Senior Editor, Mobile World Live. Following are selected edited excerpts from the webinar transcript, with organizational text, figures and captions.

Mike Rayfield, Micron


Mike Rayfield: [In making] observations at Mobile World Congress this year, I [saw] three things:

1. The content for mobile devices is getting a lot richer, whether it’s driven by the coming 5G pipes being a lot thicker to push a lot more data, or by high-resolution content or virtual reality. Clearly, the richness of data has put a huge strain on the mobile handset and what it’s got to be able to do.

2. [IoT] things are hubbed to your smartphone. And what we’re seeing is it’s becoming a much more important device.

3. One or two billion people don’t have a computer, don’t have a phone, and don’t have access to the Internet. There are now devices, full functionality devices that can be acquired by those folks for well under a hundred dollars that are going to allow them to have that first computer, make that first phone call and surf the Internet. Mobile devices, while they’re important to us now, are getting even more important and are going to become important to folks that will own them in the very near future.

Smartphone as Primary Computer

Ben Bajarin, Creative Strategies


Ben Bajarin: So much of the component landscape and innovation from smartphones is driving other sectors. If you look back to the 90s and early 2000s era of the PC, you’d see [that PC] components (SoCs, memory, screens) drove other elements of different industries. And now we’re seeing that same thing play out in smartphones’ [components and innovation] driving the IoT. And those same products are moving to cars. Those same products are moving to virtual reality.

The smartphone, for most of the world, is not just their only computer, but even in developed markets it is their primary computer. That has huge implications on how it’s used, on engagement levels, all the way across the board to new opportunities in software and services.

Rayfield: If you think about mobile device design, the things that it cares about are performance, energy efficiency and footprint. Networking and cloud, enterprise, IoT, automotive—all of those things care about those same three things.

Figure 1: The requirements for memory in each of these areas are critical, accelerating and evolving.


The mobile device is also now becoming a creator of content. That pushes a staggering amount of bandwidth onto the network, [forcing] the network to be more robust and higher performance. It forces more storage onto the cloud—we all want to back things up. And mobile is the reference design for automotive and IoT. All of this innovation we do, whether it be in performance, footprint or energy efficiency in mobile, is touching all of the other different markets. [It] is something that has just started to happen in the past couple of years and is going to accelerate.

Bajarin: Mobile is the catalyst for new things. Things we haven’t even thought of yet. Virtual reality is just one example of those things we weren’t talking a lot about, and now we’re talking very distinctly about how the mobile ecosystem drives the experience. That’s the emphasis of smartphones now and that whole ecosystem—driving all of these other peripheral markets. When we looked just at some usage behaviors, where we’re going in the next generation, 5G, most people are going to consume more and more and more bandwidth (Figure 1).

Burden on Memory Grows

Rayfield: Even a couple of years ago the average [handset] device had 1 GB or less of DRAM. And it was relatively small. It had 4 or 8 GB of NAND. As these devices have become your mobile computer, the burden on the memory has become significantly larger. Flagship devices on average have about 3 GB of DRAM (Figure 2) and we’re working with partners in China where they’re already working on 6 GB DRAM devices.

Figure 2: Low-end and mid-range phone memory demand is accelerating faster than ever before due to the shift to full smartphone capabilities.


Consumers figured out more is better, and not only because it’s a larger number but because they see a difference in performance.

Everybody has talked about the idea that the cloud is going to replace local storage. The reality is we’re just too impatient for that. Connectivity is not ubiquitous by any means, and ultimately we want our pictures, we want our videos, we want our movies on our phone.

Advances in NAND have allowed us to go from 32 to 64 to 128 GB of local storage, and I see that just continuing. I do a sort of a sample size of my kids and I look at their devices. They’ve got a smartphone with 64 GB NAND and there’s about 8 GB of free storage whenever I look at it. We’re going to continue to go in that direction.

Bajarin: One of the trends that we’re looking at particularly, not just in developed markets but also in emerging markets, is: What role are cloud services going to play? If I’m thinking about increasing the capacity of what I can do on a device, and I want to move that to other products to be backed up, etc., we’re looking at how many people do this today.

In consumer research, we found that 75 percent of consumers are yet to invest in cloud services [invest meaning] pay for something—pay for iCloud, pay for Dropbox, pay for cloud storage as part of a solution.

[…] local storage is still playing a huge role. […] you’re starting to see an increase again in capture. [Consider those] on Snapchat making short videos of their day, posting them to their stories. Sometimes that’s archived, sometimes it’s not. And dual camera is something you’re going to hear a lot about this year from the component landscape.

[You’ll be able to] take a 45-plus megapixel picture, [have] multi-range zoom, change the focal length, record two videos: one in slow motion and one in fast motion. We’re talking about a tremendous amount of file storage there. Even if everyone was pervasively subscribing to the cloud today, [consider] how much back-end infrastructure [is needed] to take all that’s being created. There are still some innovations in smartphone cycles, dual camera being one of them, which will put very significant demands on the capabilities of those products.

Mobile to Smart

Rayfield: There are 1-2 billion people that we really haven’t touched with mobile devices yet (Figure 3). Only a couple of years ago, it was assumed that those folks would buy a 20/30/50 dollar feature phone, which ultimately had very little capabilities, just the ability to make a phone call. Clearly, that wasn’t a very interesting market. If you can now look at devices that are well under 100 dollars that are full computers, that give them the ability to access the Internet, that give them the ability to make phone calls, give them the ability to have a computer, I think it changes the landscape a lot even over what we were seeing a couple of years ago.

Figure 3: Smartphone installed base per population.


I’ve seen three examples of phones that are well under 100 dollars (Figure 4). And they’re full functioning computers. They have 1 GB of DRAM, and 4, 8 or 16 GB of NAND, and they have great cameras and high-performance processors. Literally, these are devices that a couple of years ago would’ve been $400, $500 or $600. I think that’s a tipping point that has infrastructure folks putting better infrastructure in developing countries. That is going to open up this huge opportunity for the next 1-2 billion people.

Figure 4: Feature phones that historically had almost no memory are being replaced by very inexpensive smartphones with the same amount of memory as high-end smartphones.


Bajarin: In my analysis of these markets I break out: What did we do to get to this first 2 billion people who are online with their smartphones today? What are their behaviors like? The dynamics between the mid range and the high range are changing within this first 2 billion. We look at the first 2 billion [smartphone users] as a very distinct market.

You hear, ‘oh, smartphones are a saturated market, it’s slowing down,’ and while that’s true, I think we also have to recognize that there are a lot of people who still don’t have smartphones. And that a lot of this is an economic discussion. But the way that I visualize this is: I link our model of smartphone installed base by a number of countries that I track vs. their population (Figure 3). You look at places like China and India, even Brazil which is large, Indonesia, and particularly you look at the continent of Africa, and you see massive, massive, amounts of those populations that are yet to have a smartphone.

Now, what we also keep in mind is that most of them actually have a mobile phone. So it’s not like we’re starting from scratch.

I believe, over time, we will convert the two or so billion people today who are yet to have a smartphone but do have a mobile phone; we will convert those to become smartphone users. And you follow that up with the observation that it will become their primary computer to do things, engage in commerce, learn tips about farming, engage in trade in new ways, and become better educated. All of that has huge potential to increase the GDP of those countries, particularly less developed parts of those countries.

Planar to 3D

Rayfield: We’ve talked a lot about the smartphones, let’s go back and look at what they’re made up of. Even the 100-dollar devices have computing capabilities that were in PCs only a couple of years ago. From a DRAM standpoint, it’s already the highest performing high-volume DRAM in the industry. It’s all about energy per bit, how do we get more efficient, how do we get battery life to last longer? There’s going to be next generation options whether you put the memory in package or other kinds of things. Those are things that are being driven by the smartphones and are going to be used across other markets. From a NAND standpoint, we’ve talked about people wanting additional storage and capability.

Planar NAND has pretty much run out of steam in terms of lithography, so we’re going to 3D. We’re getting to the point where it will be very easy in one or two or four die configurations to have 64, 128 or 256 GB of memory in the same footprint that you have 4 GB now. That’s an innovation that people will use once they have the storage, and it’s being designed in devices right now. The reality is we think a lot of the mobile phone design is bound by how much memory or NAND [is available].

Bajarin: […] what happens if we move now to a video era? Where all of a sudden we see an increase in capture and creativity and sharing of a range of things, again, probably video experiences we can’t even fathom today because they haven’t been created yet.

There are huge implications across the hardware, software and services landscape, but all of these devices have to be built understanding that consumer demand will be there. If we give them more capacity, they will take advantage of that, but more importantly so will the developer ecosystem. We study millennials a lot, because I think millennials globally put the most demand on their devices [and] video is a normal part of their life and usage, and that’s just going to increase, it’s not going to stop. New behaviors around video and new demands will emerge as this generation starts to get those capabilities from a capture standpoint in terms of sensors, as well as just having the throughput in 5G and beyond to now take advantage of these services. So we look toward this next video era in light of what we saw in the photo era. Unprecedented things happened with photography, and we’re on the cusp of seeing the same thing with video.

Rayfield: One of the things that video puts a burden on is storage. We’ve now gone to 3D NAND (Figure 5). What 3D allows you to do is back off on lithography, making it a little easier to scale and scale vertically. And what we gain is capacity. So in a couple nanometers of silicon you can put all the storage you need in your mobile device.

You also gain speed. Very quickly the network is getting fast enough that it’s forcing us to get better on storage. And that is going to be the thing that continues to drive us. As we go to 3D, we get much, much bigger bandwidth with the storage.

Figure 5: Traditionally, flash has been built in a planar, or two-dimensional structure—much like a one-story building. When we wanted to add space, we had to make the data cells (the rooms in the building) smaller. With 3D, we’re building a vertical building—like a skyscraper.


In summary, the amount of data that people consume, generate and want to share is driving what the system is. The content that the developed world is generating, consuming and sharing is quickly moving to the next 1-2 billion users and again that’s going to put a lot of pressure both on the devices and their capabilities. And then if you look at a bottleneck of consumer experiences and devices, it’s all about generating, creating, and sharing content. These devices now have a capability where maybe a couple of years ago people wouldn’t have thought that they’d become as ubiquitous as we believe they’ll be around the world. 

Ultra Low Energy and the ULE Alliance

Thursday, March 24th, 2016

A new generation wireless communication technology for the IoT has applicability in cases ranging from energy savings to emergency response to the smart home.

The ULE Alliance is an organization that works to expedite the worldwide deployment and market adoption of Ultra Low Energy (ULE) products. The Alliance works with its members to quickly develop new products and services in home automation, security, and climate control by certifying standards conformance and ensuring interoperability between IoT products from different vendors. This improves service provider selection, delivers true customer satisfaction, and increases the overall size of the market for all participants. Our goal is to ensure that the proven ULE technology becomes a leading infrastructure and standard for home wireless networks, enabling a safer and more convenient life for all people.

Our organization is made up of global service providers, vendors and chipset developers dedicated to developing energy-efficient and powerful solutions for the Smart Home and Office.


Comparison to Other Wireless Options on the Market

ULE is a new-generation wireless communication technology for the IoT, based on DECT, an established technology with 20+ years of deployment. Since this is an established technology, the chipsets are reliable and costs are extremely competitive. ULE operates in dedicated, licensed, royalty-free spectrum, giving the service provider the longest wireless range: 75-100 meters inside buildings and up to 300 meters in open range, compared to competitors that range from 10-30 meters from the gateway. This greatly reduces the need for expensive repeaters throughout the home, greatly simplifies layout and installation, and reduces energy consumption and overall cost; many providers are looking at self-installation as another cost-saving option for their intelligent home solutions.


This unique band has no interference from Wi-Fi, ZigBee, Bluetooth, Z-Wave, etc., and can carry two-way voice and video with great stability. This makes possible applications such as fire systems that tell you exactly where the fire is, rather than just sounding an alarm. Such systems can also open a northbound call to 911 to provide an open channel of communication. Critical information, e.g., “my children are trapped in the back of the house,” can be conveyed in a situation where seconds matter.

The ultra-low power profile also makes great use of the battery: tests have shown that some batteries can last seven years without changing. Imagine not having to tell customers to change the alarm batteries every six months, or worrying that the system will fail due to a dead battery. Most batteries will die of corrosion before losing power. Tens of millions of home gateways use DECT; these gateways can be easily upgraded to ULE by a software upgrade, something that service providers perform routinely. This creates a new business opportunity for service providers to expand communication services into their customers’ homes by offering smart home services, re-using installed home gateways.
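A quick back-of-the-envelope calculation shows how a multi-year battery life is plausible for a heavily duty-cycled sensor. The cell capacity and average current below are assumed, illustrative figures, not measurements from any ULE product:

```python
# Back-of-the-envelope check of the multi-year battery claim.
# Both inputs are assumed values for illustration; real duty cycles
# and cell capacities vary widely by product.

cell_mah = 2400        # assumed lithium AA cell capacity, mAh
avg_current_ua = 35    # assumed average draw with deep sleep, microamps

hours = cell_mah / (avg_current_ua / 1000.0)   # mAh / mA -> hours
years = hours / (24 * 365)                     # -> roughly 7.8 years
```

With those assumptions the cell lasts on the order of seven to eight years, which is consistent with the article's claim that corrosion, not discharge, often ends the battery's life first.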



The ULE Alliance has established active work liaisons with ETSI, the Home Gateway Initiative (HGI), the AllSeen Alliance, and the Open Connectivity Foundation (OCF, formerly OIC) to help foster better interoperability, management, and bridging between all sensors and devices in the home. The DECT Forum is an active member of the ULE Alliance as well.

At CES 2016 the ULE Alliance and its partners, AllSeen and OCF, made joint demonstrations, and close cooperation is expected to continue and strengthen all parties.


Since the launch of the Ultra Low Energy certification in mid-2015, 30+ products have successfully achieved certification. We expect the number to top 100 by 2017.

  • ULE Alliance demonstrated at CES 2016 ULE working over IPv6 (6LoWPAN).
  • Members Turkcell and DSP Group announced in January a commercial IoT deployment based entirely on ULE, which will serve a 25 million household base.
  • Deutsche Telekom announced that all of its next-generation home gateways will be equipped with ULE.
  • Panasonic introduced its Home Automation kit, based on ULE technology, in the USA and Europe.

What’s Next

  • With the introduction of 6LoWPAN, the IPv6 connectivity will play an increasing role in extending the IP communication protocol use in the IoT, replacing the proprietary technologies and enabling better interoperability.
  • More companies will adopt the use of the wireless technology agnostic application layers, such as AllJoyn and IoTivity.
  • The consolidation and cooperation between different technologies should happen in order to achieve the goal of seamless integration of new devices into the existing networks, regardless of the wireless technology in use. Interoperability across various wireless technologies is imperative in order to support widespread adoption for IoT.

Ahead for the ULE Alliance

  • ULE Alliance will release 6LoWPAN support for ULE in Q2 2016
  • ULE Alliance will continue its cooperation with AllSeen and OCF, introducing two projects with each partner:
  • A bridging gateway between existing ULE and AllJoyn/IoTivity networks
  • Sensors running AllJoyn/IoTivity application layers over the ULE protocol
  • New ULE based service deployments by major European service providers
  • Increasing number of device manufacturers worldwide using ULE
  • Number of ULE based sensors/actuators surpassing 250 by end of 2017

Avi Barel is Director of Business Development, ULE Alliance. He has over 30 years of broad high-tech experience spanning software development, semiconductors, engineering, and business management.

Barel joined the ULE Alliance in April 2013 as the Director of Business Development and is actively leading the promotion of ULE technology worldwide. Prior to the ULE Alliance, he served as Corporate Vice President of Sales at DSP Group for 8 years. Before joining DSP Group, Barel established and managed Winbond’s subsidiary in Israel for 5 years, developing semiconductor products for speech processing and communication and managing a team of 50+ engineers. At National Semiconductor he held a variety of engineering, engineering management, and business management positions, involved in the development and management of award-winning, innovative projects.

Barel holds M.Sc. degree in Computer Science and B.Sc. degree in Mathematics and Physics from the Hebrew University in Jerusalem.

Formal Low-Power Verification of Power-Aware Designs

Monday, November 9th, 2015


Power reduction and management methods are now all-pervasive in system-on-chip (SoC) designs. They are used in SoCs targeted at power-critical applications ranging from mobile appliances with limited battery life to big-box electronics that consume large amounts of increasingly expensive power. Power reduction methods are now applied throughout the chip design flow, from architectural design through RTL implementation to physical design.

Power-Aware Verification Challenges

Power-aware verification must ensure not only that the power intent has been completely and correctly implemented as described in the Unified Power Format (UPF) specification [1] or the Common Power Format (CPF) specification [2], but also that the functional design continues to operate correctly after the insertion of power management circuitry.

Power estimates are made at several stages in the design flow, with accurate estimates becoming available only after physical layout. Consequently, design changes such as RTL modifications and layout reworks ripple back and forth through the design flow. This iterative power optimization increases verification and debug effort, project risk, and cost. The objective is to achieve the target power consumption while limiting the cost of doing so.

The power management scheme

Initially, an implementation-independent functional specification is devised to meet product requirements. The functionality is then partitioned across hardware and software. An initial power management scheme can be devised at this architectural level, but it defines only which functionality can or should be deactivated for any given use case, not how it should be deactivated. The functionality is then implemented in RTL, making extensive use and reuse of pre-designed silicon IP blocks, together with new RTL blocks. Some of these blocks may be under the control of software stacks.

At this stage, decisions are made about how functionality should be deactivated, using a multiplicity of power reduction methods to achieve the requisite low-power characteristics. These decisions must comprehend both hardware- and software-controlled circuit activity. Common power management methods include:

  • Clock gating
  • Dynamic voltage-frequency scaling (DVFS)
  • Power shut-off
  • Memory access optimizations
  • Multiple supply voltages
  • Substrate biasing
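For power shut-off in particular, the power intent is typically captured in a UPF file alongside the RTL. The fragment below is a simplified, illustrative sketch only: the domain, net, switch, and control signal names (`PD_DSP`, `u_dsp`, `dsp_pwr_en`, etc.) are hypothetical, and a production UPF description would carry considerably more detail:

```tcl
# Illustrative UPF sketch: a switchable power domain for a hypothetical
# block u_dsp, with isolation and retention (all names are invented).
create_power_domain PD_DSP -elements {u_dsp}

# Supply network: a switched rail derived from the always-on supply
create_supply_net VDD     -domain PD_DSP
create_supply_net VDD_DSP -domain PD_DSP
create_power_switch sw_dsp -domain PD_DSP \
    -input_supply_port  {in  VDD} \
    -output_supply_port {out VDD_DSP} \
    -control_port       {ctrl dsp_pwr_en} \
    -on_state           {on_state in {dsp_pwr_en}}

# Clamp the domain's outputs while it is off, and retain key state
set_isolation iso_dsp -domain PD_DSP -isolation_signal dsp_iso_en \
    -clamp_value 0 -applies_to outputs
set_retention ret_dsp -domain PD_DSP \
    -save_signal {dsp_save high} -restore_signal {dsp_restore high}
```

It is exactly this kind of specification that the verification flow described below must check against the RTL: that the isolation clamps, retention save/restore sequencing, and switch control behave correctly in every reachable power state.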

These early-stage power management decisions can be influenced by a physical floorplan, but they do not—and cannot—comprehend the final partitioning at P&R, where accurate power estimates are made. Consequently, the power management scheme can be modified and must be re-verified after P&R.

Clearly, the power management scheme is a moving target, and requires iterative design optimization, verification, and re-verification at every stage of the design flow—including architecture, RTL implementation, and physical design.

Additional complications

Implementing any scheme is often subject to significant additional complications, such as the impact of IP use and re-use and of DFT circuitry. A given IP block can implement several functions, each of which can be or must be switched off independently of the others, for example, by adding interfaces to the power-control registers or signals. These functions can be problematic in the case of third-party IP, where (often) only black box information about its behavior is available. In any case, the verification challenge now includes re-verifying the redesigned IP block(s) as well as verifying the power management circuitry.

In order to minimize test pattern count and test time, conventional DFT assumes that the whole chip operates with all functions up and running. That is how it operates not only on the tester, but also in field diagnostics. With power-aware design, DFT circuitry must now mesh with the design’s power management scheme in order to avoid excessive power consumption and unnecessary yield loss at final test.

Power-Aware Verification Requirements

Functional analysis, optimization, and verification throughout the design flow, complicated by inadequate visibility of third-party IP white-box functionality, mandate the following five principal requirements for implementing and verifying a low-power scheme:

  • Sufficiently accurate power estimates using representative waveforms, both pre- and post-route
  • Accurate visibility and analysis of the white box behavior of third-party IP prior to its modification and reuse
  • Deployment and ongoing optimization and verification of appropriate power reduction techniques, both pre- and post-integration
  • Exhaustive functional verification at the architectural and RT levels, both before and after the deployment of power optimization circuitry
  • Verification of hardware functionality compliance with software control sequences

The first requirement can be addressed with commercially available tools that use simulation and formal methods. The rest of this section deals with the remaining requirements.

As previously indicated, the power management scheme starts at the architectural level, so any available architectural features such as communication protocols must first be verified (see Figure 1).

Figure 1: Ongoing power-aware optimization and verification

In the subsequent functional implementation (RTL) flow, low-power management constructs are introduced at different phases in the SoC development, depending upon the data available and the optimizations required. Taking the deployment of power domains as an example, verification must ensure that:

  • Normal design functionality is not adversely affected by the addition of power domains and controls. “Before and after” checking is critical.
  • A domain recovers the correct power states at the end of the power-switching sequence, and generates no additional Xs at outputs or in given block signals.
  • A high level of coverage of power-up/power-down events, which are very control-intensive operations, is achieved.
  • Switching off a power domain does not break connectivity between IP blocks.

Therefore, taking the RTL model verified prior to the insertion of power management circuitry as the “golden” reference model, power-aware verification requires a combination of:

  • Architecture-level verification
  • IP white-box functional visualization and analysis
  • Exhaustive functional verification
  • Sequential equivalence checking
  • Control and Status Register (CSR) verification
  • X-propagation analysis
  • Connectivity checking

Limitations of Traditional Power-Aware Verification

Various tools and approaches are used for power-aware analysis and verification. This patchwork of tools and approaches clearly provides limited analysis and verification capability, and demonstrably achieves inadequate QoR. Automated structural analysis and limited, manual functional analysis can identify potential opportunities for the use of power management circuitry. Such analysis can assure consistency between the RTL design and the UPF/CPF specification, but cannot verify design correctness. At the architectural level, power analysis usually is performed manually with spreadsheets.

Power-aware simulation is used at the RTL but, like conventional simulation, is not exhaustive. This situation is exacerbated by the state-space explosion that results from inserting complex power management schemes. Simulation not only suffers significantly degraded performance, but also fails to systematically avoid X optimism and pessimism.
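X optimism and pessimism come from how unknown values propagate through logic. A minimal three-valued model makes the distinction concrete (this is an illustrative sketch of the semantics, not any tool's implementation):

```python
# Three-valued AND over {0, 1, 'x'}: a 0 input dominates regardless
# of X. Treating every X as "unknown everywhere" (pessimism) flags
# behavior that is actually well-defined; resolving X to an arbitrary
# fixed value (optimism) can hide a real bug.
X = 'x'

def and3(a, b):
    """AND with unknowns: 0 wins, otherwise X taints the result."""
    if a == 0 or b == 0:
        return 0
    if a == X or b == X:
        return X
    return 1

assert and3(0, X) == 0   # defined despite the unknown input
assert and3(1, X) == X   # genuinely unknown
assert and3(1, 1) == 1
```

Formal X-propagation analysis reasons about both possible values of each X, which is what conventional event-driven simulation cannot do systematically.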

Power-related DRC can enable limited power integrity analysis at the gate level.

Meeting Power-Aware Verification Requirements with JasperGold Apps

The JasperGold power-aware verification flow comprehensively meets power-aware verification requirements with the requisite QoR.

The front-end of the flow is the JasperGold LPV App (see Figure 2), which automatically creates power-aware transformations and automatically generates a power-aware model that identifies power domains, the power supply network and switches, isolation rules, and state retention rules. It does so by parsing and extracting relevant data from the UPF/CPF specification, the RTL code, and any user-defined assertions. It then automatically generates assertions that are used by other JasperGold Apps to verify that the power reduction and management circuitry conforms to the UPF/CPF specification and does not corrupt the original RTL functionality.

Figure 2: JasperGold power-aware verification flow

Power-aware model

The resulting power-aware model enables the analysis and verification of a wide range of power management characteristics, for example:

  • Power-domain correctness, such as the absence (or otherwise) of clock glitches and correct operation of reset logic
  • Power state table consistency, analyzing all possible power-state transitions and detecting illegal ones
  • Retention cell verification, validating the integrity of saved data in all power states
  • Power-supply network connectivity, to detect power intent errors made when defining the power-supply network
  • Power-aware correctness, ensuring equivalence between a power-aware design and the original RTL when all power domains are continuously on
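Power state table consistency, the second item above, amounts to checking that every transition a design can make is one the power intent declares legal. A minimal sketch (state names and the `illegal_transitions` helper are invented for illustration):

```python
# Sketch of power-state-table consistency checking: legal states and
# transitions form a small graph, and a proposed sequence of system
# states is checked against it.

LEGAL_TRANSITIONS = {
    ("ON", "RETENTION"), ("RETENTION", "ON"),
    ("RETENTION", "OFF"), ("OFF", "ON"),
}

def illegal_transitions(sequence):
    """Return (from, to) pairs in `sequence` that the table forbids."""
    return [(a, b) for a, b in zip(sequence, sequence[1:])
            if (a, b) not in LEGAL_TRANSITIONS]

# Jumping straight from ON to OFF skips the retention step.
assert illegal_transitions(["ON", "RETENTION", "OFF", "ON"]) == []
assert illegal_transitions(["ON", "OFF"]) == [("ON", "OFF")]
```

Formal analysis performs this check exhaustively over all reachable transition sequences rather than over sampled ones.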

LPV-generated assertions

Examples of assertions automatically generated by the JasperGold LPV App include:

  • Ensure that a power domain clock is disabled when the domain’s power supply is switched on or off
  • If a power supply net has resolution semantics, there is never more than one active driver
  • Ensure that the power supply of retention logic is on when the value of an element is restored from that logic
  • Whenever a power domain is powered down, all the isolation conditions related to this power domain are true before, during, and after power shut-off
  • No signal is isolated twice with contradictory clamp values
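The last assertion above, that no signal is isolated twice with contradictory clamp values, is essentially a consistency check over the isolation rules in the power intent. A hedged Python sketch (the rule format and `contradictory_clamps` are invented for illustration, not the UPF/CPF syntax):

```python
# Sketch of the double-isolation check: each isolation rule maps a
# signal to its clamp value, and any two rules covering the same
# signal must agree on that value.

def contradictory_clamps(rules):
    """rules: list of (signal, clamp_value) pairs.
    Returns signals isolated with conflicting clamp values."""
    seen = {}
    conflicts = set()
    for sig, clamp in rules:
        if sig in seen and seen[sig] != clamp:
            conflicts.add(sig)
        seen.setdefault(sig, clamp)
    return sorted(conflicts)

rules = [("bus_ack", 0), ("bus_req", 1), ("bus_ack", 1)]
assert contradictory_clamps(rules) == ["bus_ack"]
```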

The JasperGold Apps approach

In contrast to the general purpose, all-in-one formal verification tool approach, the JasperGold Apps approach enables step-wise adoption of formal methods. Each JasperGold App provides all of the tool functionality and formal methodology necessary to perform its intended application-specific task. This approach requires design teams to acquire only the expertise necessary for the particular task at hand, and at a pace that suits the project requirements and user expertise.

Providing an empirical measure of the effectiveness and progress of the formal verification, the JasperGold Design Coverage Verification (COV) App takes in the user's RTL, assertion properties, and constraint properties, and outputs a textual and GUI-based report showing which aspects of the DUT were verified by the formal analysis. These reports show the lines of code (“statement coverage”), conditional statements (“branch coverage”), and functional coverage points that were exercised.
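The distinction between statement and branch coverage can be illustrated with a toy bookkeeping model (this is a sketch of the concepts, not of the COV App's implementation; all names are invented):

```python
# Tiny illustration of statement vs. branch coverage: each executed
# statement and each (branch, direction) outcome is recorded,
# mimicking what a coverage report summarizes.

executed_lines = set()
branch_outcomes = set()

def classify(n):
    executed_lines.add("classify:entry")
    if n % 2 == 0:                       # branch "parity"
        branch_outcomes.add(("parity", True))
        executed_lines.add("classify:even")
        return "even"
    branch_outcomes.add(("parity", False))
    executed_lines.add("classify:odd")
    return "odd"

classify(2)
# After one call, only the taken direction of the branch is covered:
assert ("parity", True) in branch_outcomes
assert ("parity", False) not in branch_outcomes
classify(3)
assert branch_outcomes == {("parity", True), ("parity", False)}
```

Full branch coverage requires exercising both directions of every decision, which is why it is a stronger metric than statement coverage alone.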

The JasperGold Formal Property Verification (FPV) App performs exhaustive verification of (a) all RTL functionality before the insertion of power management circuitry, and (b) the power management circuitry itself. For example, it analyzes and verifies power sequencing both during block design and after integration, including sequence safety such as clock deactivation, block isolation, and power down, as well as state correctness.

The JasperGold Control and Status Register (CSR) Verification App verifies that the design complies with the CSR specification, and that the value read from a register is always correct, both before and after power management insertion.

The JasperGold Sequential Equivalence Checking (SEC) App verifies the equivalence of blocks before and after power management circuitry is inserted, as well as those blocks subject to late-stage modification. In addition, it verifies that memory optimizations do not compromise functionality. For example, where a memory is replaced by two low-power memories with a wrapper, the JasperGold SEC App verifies that the two memory models are equivalent to the original memory.
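The memory-splitting example can be sketched in miniature: a single flat memory versus two half-size memories behind a wrapper that steers accesses by address. The class names and sizes below are invented for illustration; exhaustively comparing all addresses here mirrors, on a toy scale, what sequential equivalence checking proves formally:

```python
# Flat memory vs. two low-power half-memories behind a wrapper.
DEPTH = 16  # illustrative size

class FlatMem:
    def __init__(self):
        self.data = [0] * DEPTH
    def write(self, addr, val):
        self.data[addr] = val
    def read(self, addr):
        return self.data[addr]

class SplitMem:
    """Wrapper steering accesses to one of two half-size banks."""
    def __init__(self):
        self.lo = [0] * (DEPTH // 2)
        self.hi = [0] * (DEPTH // 2)
    def _bank(self, addr):
        return self.hi if addr >= DEPTH // 2 else self.lo
    def write(self, addr, val):
        self._bank(addr)[addr % (DEPTH // 2)] = val
    def read(self, addr):
        return self._bank(addr)[addr % (DEPTH // 2)]

# Same writes to both models must yield identical reads everywhere.
flat, split = FlatMem(), SplitMem()
for addr in range(DEPTH):
    flat.write(addr, addr * 3)
    split.write(addr, addr * 3)
assert all(flat.read(a) == split.read(a) for a in range(DEPTH))
```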

The JasperGold X-Propagation App (XPROP) analyzes and verifies Xs at block outputs caused by power-down, and compares differences in output X behavior before and after application of the UPF/CPF specification.

The JasperGold Connectivity (CONN) Verification App exhaustively verifies RTL connections at the block and unit level, and after integration.

Conclusion

The Cadence JasperGold power-aware formal verification flow enables exhaustive analysis and verification of power-aware designs, achieving QoR superior to those designs produced by the traditional ad hoc patchwork of tools and approaches. Starting with the JasperGold LPV App to automatically generate a power-aware model and the appropriate assertions, the flow leverages an expandable range of additional JasperGold Apps, each targeted at a particular task. Design teams can deploy JasperGold Apps as needed and acquire expertise in a low-risk, step-wise fashion.

References and Further Information

[1] 1801-2013 – IEEE Standard for Design and Verification of Low-Power Integrated Circuits. Available at

[2] Si2 Common Power Format (CPF) Specification. Available at /?page=811.

To learn more about Cadence JasperGold Apps, contact your local sales office at

Contact Information

Cadence Design Systems, Inc.

2655 Seely Avenue
San Jose, CA, 95134

tele: 408.943.1234
fax: 408.943.0513

