“…brain inspired”: Q&A with Paul Washkewicz and Chet Jewan, Eta Compute

Monday, October 1st, 2018

Why microcontrollers architected in a “brain-inspired” way could have a growing role in smart metering and other point-to-point communication applications, such as those relying on sub-GHz next-generation networks.

Editor’s Note: “Low power solar operation means more deployments with low operational cost,” Eta Compute co-founder and VP marketing Paul Washkewicz tells EECatalog. One potential use case, according to the company’s Chet Jewan, who serves as VP sales and business development: oil pipeline monitoring. Earlier this year Eta Compute Inc. and ROHM Semiconductor announced the two have partnered to develop sensor nodes compatible with the Wi-SUN sub-GHz communication protocol—news followed by a successful Sensors Expo demonstration of ROHM’s sensor technology and Eta Compute’s low power MCUs. Edited excerpts of our conversation follow.

MJ1011 Soil Sensor Unit from LAPIS Semiconductor, a ROHM Group Company

EECatalog: What did you want to accomplish by introducing your Sensors Expo attendees to an energy-harvesting sensor evaluation board and how did it go?

Chet Jewan, Eta Compute

Paul Washkewicz, Eta Compute

Chet Jewan (CJ): To convey that it was possible to capture data—including lighting, temperature, and pressure—and transmit it from the sensor node to a Wi-SUN receiver in ROHM’s booth. The demonstration presented an energy-harvesting solution capturing and transmitting data in real time—all without being plugged into the wall—while operating from the lighting on the exhibit floor, showing a real-life application.

Paul Washkewicz (PW): There was a steady stream of sensor customers coming by both the Eta Compute booth and the ROHM booth asking about details of the design, especially in relation to the issue of: “How do you effectively deploy systems that have excellent coverage but don’t require a steady stream of management, which raises operating expenses?”

EECatalog: Why pool resources with ROHM?

PW: Pooling resources makes it possible for customers to get the sensor nodes required for deploying next-generation networks. Individually, each of our companies has products that theoretically could work seamlessly together. However, while engineers hear those words “work seamlessly” often, they can find actual implementation more elusive.

For example, a key part of this joint solution is the power management for the energy-harvesting section of the design. By pooling resources, we put together a design that includes our low-power controller and ROHM sensors, but also includes the design of the power management. Anything to streamline an application’s design and prove out the application in hardware and software is a better starting point than a collection of datasheets.

CJ: They can use our reference design as an example—whether they want to change the communication protocol, change the sensors, or take the existing reference design and deploy it in an existing factory as is.

EECatalog: For a running start to deployment as quickly as possible?

CJ: Exactly, so they can take the sensor node as is and deploy it on a trial basis, and from there they can enhance it, modify it, whatever the case may be.

EECatalog: What’s the significance of Wireless Smart Ubiquitous Network (Wi-SUN) compatibility?

PW: Wi-SUN is an open standard based on a low-power mesh network targeting smart cities’ and smart utilities’ networks. With hundreds of millions of possible deployments, making these as near “maintenance free” as possible will help the proliferation of such systems. Wi-SUN is one good example of a low-power sub-GHz RF standard for these applications.

EECatalog: What features of this solution do you anticipate catching folks’ attention?

CJ: One aspect people will be interested in is how often they can capture and transmit data from the sensors; the variable would be, for example, the lighting in their environment. If this node is inside a factory, the data customers are looking for is, “Okay, if it’s only 100 to 300 lux type of lighting, how often can I poll the sensors for data, and then how often can I transmit that data?”
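[Editor’s note: To make the tradeoff CJ describes concrete, here is a minimal back-of-the-envelope sketch in C of how such a duty-cycle budget might be computed. Every power and energy figure below is an illustrative assumption, not an Eta Compute or ROHM specification.]

```c
/* Hypothetical sizing sketch: how often can a solar-powered node poll
 * and transmit under indoor lighting? All figures are illustrative
 * assumptions, not Eta Compute or ROHM specifications. */
#include <stdio.h>

int main(void)
{
    /* Assumed harvest: a small PV cell at 100-300 lux indoor light. */
    double harvested_uw = 50.0;    /* microwatts, continuous */
    double sleep_uw     = 5.0;     /* node sleep floor */
    double sample_uj    = 200.0;   /* energy per sensor poll, microjoules */
    double transmit_uj  = 5000.0;  /* energy per sub-GHz transmit burst */

    double surplus_uw = harvested_uw - sleep_uw;  /* budget for activity */
    double event_uj   = sample_uj + transmit_uj;  /* one poll + transmit */

    /* Longest sustainable duty cycle: one event per this many seconds
     * (microjoules divided by microwatts gives seconds). */
    double interval_s = event_uj / surplus_uw;

    printf("Sustainable: one poll+transmit every %.0f s\n", interval_s);
    return 0;
}
```

With these assumed numbers, the node could sustain roughly one poll-and-transmit event every two minutes; brighter lighting shortens the interval.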

EECatalog: But we’re not talking just inside factories?

CJ: Absolutely not. Take cold chain applications, for instance. The sensor node could be inside a truck and you could transmit to the cabin to alert the driver—say he’s transporting lettuce—that the temperature is going up.

EECatalog: How does sensor fusion fit in?

CJ: Say along with the temperature sensor that indicates the truckload of lettuce is getting too warm, you also have a GPS unit that is tracking location. For that truck that is moving inventory which is supposed to be refrigerated, you want to be able to look at location, temperature, pressure and say, “Okay, what’s really happening here?” Maybe action has to be taken because the truck has traveled from a cooler, higher altitude in Flagstaff, Arizona to a warmer location, Phoenix or Tucson. You are taking data from multiple sensors and producing meaningful data for the user. Looking ahead, artificial intelligence integrated into our MCU will make possible a rapid, low-cost way of doing sensor fusion at the edge resulting in relevant information for the user.
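[Editor’s note: A minimal sketch, in C, of the kind of rule-based fusion CJ outlines—combining a temperature trend with GPS-derived altitude before alerting the driver. Thresholds, structure, and helper names are hypothetical.]

```c
/* Minimal rule-based fusion sketch: combine temperature trend with
 * location before raising an alert. All thresholds are illustrative. */
#include <stdbool.h>
#include <stdio.h>

struct reading {
    double temp_c;       /* cargo temperature */
    double altitude_m;   /* from the GPS fix */
};

static bool refrigeration_alert(struct reading prev, struct reading now)
{
    double temp_rise = now.temp_c - prev.temp_c;
    double descent   = prev.altitude_m - now.altitude_m;

    /* A small rise while descending (e.g., Flagstaff to Phoenix) may be
     * ambient; a rise with no descent suggests a refrigeration fault. */
    return temp_rise > 2.0 && descent < 100.0;
}

int main(void)
{
    struct reading prev = { 4.0, 2100.0 };
    struct reading now  = { 6.5, 2095.0 };

    if (refrigeration_alert(prev, now))
        printf("Alert driver: cargo temperature rising\n");
    return 0;
}
```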

EECatalog: What is capturing Eta Compute’s attention with regard to Artificial Intelligence?

CJ: With the AI market, which is still in its early stage, a lot of the work being done is to transmit the data to the cloud in order to get it processed and then provide data back down. We, on the other hand, are working to provide AI at the edge. So, for any of these applications that are IoT sensor-driven, we want to be able to take the data from the sensor, use artificial intelligence to do sensor fusion, and act [based upon] that data without going to the cloud for processing.

We want to make an edge device that is very low power, with the performance to run AI and process locally, instead of having to transmit the data to the cloud. You want to avoid latency and connectivity issues, where, for example, you’ve got this truck and you’re sending data to the cloud. And by the time you get the data back, you’ve got a refrigeration issue, and it may be too late.

We are developing efficient neural nets, we’ve got the low-power hardware—we are putting that all together so that AI is possible at the edge. Especially for sensor fusion and IoT types of applications, we can sense, infer, and act locally, without cloud connectivity.

EECatalog: That’s in the context of some applications needing the kind of computing power the cloud can offer?

CJ: Yes, it’s application dependent. For some heavy-lifting performance, yes, you are going to need the cloud, and that is not the market we are going after.

In terms of sensors and IoT things, not everything is connected to the cloud, and to go ahead and connect that is going to cost money—you’ve got to get some kind of subscription service, whether it is LoRaWAN or NB-IoT, or whatever the case is.

 

Figure 1: IoT Soil Environment Monitoring Case Study from LAPIS Semiconductor, a ROHM Group Company

Let’s take an agriculture application, where you’ve got these sensors out in the field checking for moisture, checking for light—those types of things. It is going to be difficult to have all of these devices connected to the cloud, but you may want them connected locally, so you can send a message. Wi-SUN and LoRaWAN typically transmit a kilometer or more, and you may have a base station there within the farm—and now you are sending signals where you are seeing too much moisture, too little moisture; there may be an issue with lighting; there may be an issue with a soil sensor checking different pH levels, whatever the case may be. But you want to be able to handle that locally and not necessarily send all that data up to the cloud to get processed.

If it is a large field you may have 100 sensors there, and you would not have each one connected to the cloud, you might be pulling them together to a local place to get processed and then act, depending on what that sensor is telling you (Figure 1).

Among those 50 billion connected devices that are supposed to be here by 2020, we are trying to connect the ones that need very low power, that are optimal for energy harvesting. There’s a large market where a plug isn’t readily available, and where you don’t necessarily want to rely on batteries that require changing. Locations that have energy-harvesting sources available—vibration, light, or thermal, for instance—are those where typically there would be enough power for our low-power MCU and sensors to operate.

EECatalog: Unlike a situation where there’s the need to check on and change batteries that could be in difficult-to-access locations, it’s more “set it and forget it”?

CJ: Yes, to cite another example, oil pipelines. These pipelines run for hundreds of miles across the country, and they are out in the open, so sunlight is available as an energy source making it possible to monitor for flow, leaks, other parameters of concern.

The sensor node can be mounted on a decent-sized magnet that can be positioned on the pipeline. Oil flowing through the pipeline will create a constant vibration and with sensor fusion and AI you know what a typical flow would feel like. And if something stops or changes, you can pick up on that. And all of this can be done without batteries, without electricity. Replacing batteries would be very expensive, whereas you can just run solar for indefinite periods of time and be able to transmit that to a localized base station or even use an NB-IoT type of application, employing a cellular network or satellite, and transmit that data a few times a day, saying, “Everything is okay, there are no issues,” or if there is an issue, you can transmit that data right away. That’s as compared to sending people out to check the pipeline or using helicopters to fly along the pipeline to make sure there are no issues—that gets very expensive over time.

EECatalog: Anything to add before we wrap up?

PW: Our delay insensitive asynchronous logic, or DIAL, MCU is a good fit for supporting machine learning and machine intelligence in portable devices, mainly due to the extremely low power of operation. With operating currents on the order of microamps, even coin cells last years; alternatively, we can run on small solar cells.

CJ: Because our DIAL MCU is asynchronous, it consumes very low power. Putting together our asynchronous MCU with a brain-inspired neural net, which also runs asynchronously, produces a solution that, for the amount of performance we offer at low power, we do not believe is found anyplace else out there.

FD-SOI Process Yields Processor on a Power-Sipping IoT Budget

Tuesday, April 10th, 2018

IoT finds new life with technology that adapts to the pattern many IoT set-ups demand: intermittent bursts of high performance followed by periods of ultra-low power.

For Internet of Things (IoT) devices, proficient energy management and ultra-low power consumption are critical. One area of concern for conserving power lies within the IoT device’s embedded processor.

The main factors contributing to the power efficiency of the processor include:

  • Technology node
    • Wide dynamic voltage range
    • Low leakage
  • Architecture
    • Heterogeneous processing
    • Power Islands
    • Power mode enablement

Of all the parameters, the process technology is fundamental to power efficiency. One recent process technology that has dramatically improved the landscape for power efficiency is Fully Depleted Silicon On Insulator (FD-SOI).

Processor Technology and Core Architecture
Shrinking process technologies allow for higher integration along with lower run time power at a cost of increased static power due to higher leakage.

Figure 1: NXP has unique FD-SOI enablement in a large dynamic gate and body biasing voltage range, low quiescent current bias generators, and enhanced Analog-to-Digital Converter (ADC) performance.

FD-SOI can counter the trend of growing leakage through the following methods:

  • Leveraging existing manufacturing techniques to apply an ultra-thin buried oxide layer on top of the silicon base (see Figure 1). The transistor channel is formed with a thin layer of silicon above the oxide layer. The buried oxide layer curtails the flow of electrons between the source and drain, which naturally reduces leakage current.
  • The thinner transistor channel also allows for the use of lower Vdd voltages through better electrostatic control.

Additionally, FD-SOI’s ability to Reverse Body Bias (RBB), applying a negative voltage on the back side of the channel, can create a barrier preventing the movement of electrons.

Even though the shrinking process technology naturally allows for lower dynamic currents, FD-SOI can go a step further in reducing dynamic power through the following (see the dynamic-power relation after this list):

  • Wide Vdd voltage range, allowing for operation at very low power levels.
  • Forward Body Biasing (FBB) reduces the operating Vdd voltage for a given frequency. Instead of impeding the movement of electrons due to RBB, now electrons are encouraged to move.
  • FD-SOI construction results in lower parasitics, lowering the dynamic power of the transistor.
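The leverage of a wide Vdd range follows from the first-order CMOS dynamic-power relation (a textbook formula, not NXP data), in which switching power scales with the square of the supply voltage:

```latex
P_{\text{dyn}} \approx \alpha\, C\, V_{dd}^{2}\, f
\qquad\Longrightarrow\qquad
\frac{P_{\text{dyn}}(V_{dd}/2)}{P_{\text{dyn}}(V_{dd})} \approx \frac{1}{4}
\quad \text{(fixed } \alpha,\ C,\ f\text{)}
```

Halving Vdd at a fixed activity factor, capacitance, and clock therefore cuts dynamic power to roughly a quarter, which is why FBB’s ability to lower the operating Vdd for a given frequency pays off so directly.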

Figure 2: Body-biasing means that the device can be faster when required and more energy efficient when performance isn’t critical.

As process technologies have gotten smaller, other trade-offs have arisen. Indeed, designing for analog signals has become more complicated. However, FD-SOI relaxes density rules, allows higher gain, a closer match of components (which reduces the need for compensation during layout), and achieves lower 1/f noise [1].

The industry is hitting a physical wall as miniaturization continues to approach ever-smaller nanometer nodes. The physics of electricity at nanometer scale have begun to interfere with our design dreams of better, faster, smaller, cheaper, and more power-efficient chips. The smaller the architecture, the more significant the potential for latch-up, as transistors within integrated chips are spaced physically closer to each other. Latch-up provides a potentially catastrophic alternate path for current flow, and it persists even after the condition that caused it is gone, until power to the chip is cycled. However, the ultra-thin buried oxide layer utilized in the FD-SOI process provides immunity from latch-up.

Features Enabled by FD-SOI
Taking a markedly different approach than do vendors who rebrand cell phone or tablet chips as ‘IoT SoCs,’ NXP has designed the new i.MX 7ULP applications processor from the ground up as an IoT device—choosing a low-power 28nm FD-SOI process node, a set of power-sensitive peripherals, and an architecture featuring dual power domains based on the Cortex-A7 and the Cortex-M4 cores.

By leveraging the process technology, power friendly heterogeneous architecture, and multiple smart power modes, the processor can achieve impressive ultra-low power consumption levels, with a deep sleep suspend of 50 µW or less.

The i.MX 7ULP offers Rich OS support (Linux, Android), as well as sophisticated Real-Time Operating System (RTOS) support (FreeRTOS), and additional features suitable for IoT or any portable use case that demands long battery life.

The i.MX 7ULP family of processors is faster when required and more energy-efficient when performance is not as critical, enabling dynamic trade-offs. Engineers no longer face a forced selection: low-power processor or high-performance processor. Rather, the selection for performance or power efficiency can be made instantaneously, as needed, without having to reconfigure.
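As a rough illustration of such a dynamic trade-off, the sketch below models a load-driven switch between an efficiency domain and a performance domain. The domain-control calls are hypothetical stubs for illustration, not the i.MX 7ULP API.

```c
/* Illustrative policy sketch only: move work between a performance
 * core and an efficiency core based on load. Stubs are hypothetical. */
#include <stdio.h>

enum domain { EFFICIENCY_M4, PERFORMANCE_A7 };

static enum domain active = EFFICIENCY_M4;

static void enter_domain(enum domain d)   /* stub for real power calls */
{
    active = d;
    printf("Switched to %s domain\n",
           d == PERFORMANCE_A7 ? "Cortex-A7" : "Cortex-M4");
}

static void on_load_sample(unsigned load_pct)
{
    /* Burst of demand: bring up the performance domain. */
    if (load_pct > 80 && active != PERFORMANCE_A7)
        enter_domain(PERFORMANCE_A7);
    /* Sustained light load: fall back to the efficiency domain. */
    else if (load_pct < 20 && active != EFFICIENCY_M4)
        enter_domain(EFFICIENCY_M4);
}

int main(void)
{
    unsigned samples[] = { 10, 15, 90, 95, 12 };
    for (unsigned i = 0; i < 5; i++)
        on_load_sample(samples[i]);
    return 0;
}
```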

Figure 3: The i.MX 7ULP block diagram. The i.MX 7ULP is extremely flexible, with the ability to dynamically transition from high performance to ultra-low power consumption while maintaining active operation. Bursty performance increases can be applied as needed, to offer the best of both worlds, which is especially relevant to IoT applications. (Image: NXP Semiconductors)

The i.MX 7ULP is well-suited for IoT edge devices, as well as smart home controls, building automation, portable patient monitoring, wearables, and portable scanners. The i.MX 7ULP reaches a new level in IoT by offering the high performance required for rendering rich graphical images on a power-sipping wearable.

The IoT bestows upon nearly every industry the promise of significant forward progress by granting access to data at levels heretofore unseen. Harnessing the benefits of knowledge gained through the IoT is not going to be free or arrive without hazard. Yet in seeking ever-higher productivity, with the potential to progress from the unknown to the well-informed, and in acquiring a new vehicle with which to spark innovation, we can’t help but proceed.

[1] Note that 1/f noise is found in electronics, music, nature, and other areas, and cannot be filtered out of an analog circuit. Chopper stabilization can be used but introduces switching noise.


Joe Yu is the Vice President and General Manager of the Low-Power MPU & LPC MCU product lines at NXP Semiconductors. He has been in the semiconductor industry since 1988 working for companies including NXP/Philips, Freescale, Toshiba, Altera and Atmel. Yu’s work experience includes applications engineering, marketing, business development as well as general management. His passion is to develop low-power microprocessors and microcontrollers for broad market applications. As greater levels of processing and connectivity push to the edge nodes, one of the key areas of focus he has been championing is to find new ways to reduce the power consumption of these IoT devices.

Yu has a BSEE degree from Santa Clara and resides in Palo Alto, CA.

Emerging Applications Spell the End of the Battery’s Life

Tuesday, February 20th, 2018

New mobile applications, such as wearables and mobile gaming, are shifting power management to the system level, prompting alternative ways to meet performance levels and design challenges.

Applications such as autonomous driving and artificial intelligence are driving a demand for higher-performance, higher-efficiency processors. These rely on processing images in real time and dissecting each image to identify and locate an element within that image to detect objects or to learn behavior.

Image processing and image recognition offer the possibility of creating new operations in other markets. Nick Pandher, Director of Market Development for Radeon Professional Graphics at AMD, believes we are only seeing the tip of the graphics processing iceberg for deep learning. For example, he suggests, GPUs could be used in financial institutions and organizations to model a training framework on a financial data set. This would perform the analysis usually done by someone with a financial background. It can also be used to look for anomalies in staff log-ins to flag issues and highlight at-risk areas in an organization’s operations.

There are also medical uses for treatment, where anomalies can be quickly identified, and in research, where patterns can be detected across multiple data frameworks to link symptoms.

Power Adopts a Game Face
AMD has based its latest Graphics Processing Unit (GPU), the Radeon Vega Mobile, on 14nm FinFET technology to meet the form factor and power demands of emerging markets, such as mobile gaming.

Mobile gaming places different demands on a GPU than a desktop application does. For example, for mobile, the GPU has to be light in weight and small in size to integrate into mobile devices or Virtual Reality (VR) headsets. It has to be thermally efficient—a fan adds weight and space restrictions—yet deliver a laptop’s performance.

 

Figure 1: Mobile gaming is expected to account for half of the revenue generated by the video game industry worldwide by 2020. Picture Credit: AMD

FinFETs are 3D Field Effect Transistors (FETs), named after their fin-like structure rising above the substrate. The transistor’s gate wraps around the fin to reduce the amount of current leaking when the device is in the off state. This approach lowers threshold voltages to improve power consumption without increasing the die size.

Scott Wasson, Senior Manager of Technical Marketing, AMD, confirms the reason for the choice of transistor: “The key thing for anyone building a chip like [Radeon Vega Mobile] is to keep voltage as low as you can. . . . Radeon WattMan [AMD’s power management, based on Radeon software which controls GPU voltage, clocks, fan speed, and temperature], can be used to tweak and tune the voltage,” he says.

By adding “a few bits of special sauce” to the earlier Polaris architecture, the company has improved switching speed and performance in the Vega mobile architecture, explains Wasson. “It is very important to always be refining the power management algorithms we build into hardware and software,” he says. “The essential strategy is to provide performance when needed and to turn down the clocks, and the power, when you don’t need the performance, in order to conserve power,” he adds.

While the Vega Mobile, announced at CES last month, is not VR-ready yet, it is built to be small and relatively low power to meet the benchmark for VR in anticipation of what AMD’s partners will develop for VR and mobile gaming.

Wearable Challenges
For wearable devices, the same restrictions on weight, size, and power apply as in mobile gaming processors. They have to be light enough to be worn during fitness activities or light and unobtrusive if used in medical monitoring. For both, they should be wireless too, so that the wearer can record or gather data without being tethered.

Products such as watches, trackers, and monitors rely on a battery for power, but the processor’s power system must be able to regulate voltage from the battery. The problem is that the battery runs down, so the system has to manage a source with a declining voltage output. Some wearable device functions need a higher voltage than the 3.2 to 4.2V typical of a rechargeable Lithium Ion battery. Many wearable products use main power rails that are below the minimum charge voltage of a single-cell Lithium Ion battery, so the rails are sourced by a step-down regulator, possibly more than one.

Maxim Integrated introduced the MAX14690 battery-charge-management device last year (Figure 2), targeting low-power, wearable applications. It has a linear battery charger with a smart power selector, two low-power buck regulators, and three low-power, Low DropOut (LDO) linear regulators.

Figure 2: The MAX14690’s level of integration minimizes footprint for power management in wearable devices.

If the device is connected to a power source, the power selector allows the device to operate when the battery is dead. The input current to the selector is limited, based on an I2C register, to avoid overloading the power adapter. If the charger power source cannot meet the supply needs for the whole system load, the smart power control circuit can supplement the system load with current from the battery.

To conserve power during periods of light load operation, the synchronous step-down buck regulators have a burst mode option and a fixed-frequency Pulse Width Modulation (PWM) mode to regulate the load. The output can be programmed using the I2C bus.

The LDO linear regulators can also be programmed via I2C and configured to operate as power switches to disconnect the quiescent load of the system peripherals for power management.
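For a sense of what such I2C programmability looks like from a host, the sketch below writes one register from Linux userspace via i2c-dev. The device address and register map shown are placeholders; the MAX14690 datasheet defines the real ones.

```c
/* Hedged sketch: programming a PMIC register over I2C from Linux
 * userspace via i2c-dev. The address and register values below are
 * placeholders, not the MAX14690's actual register map. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/i2c-dev.h>

#define PMIC_ADDR 0x28   /* hypothetical 7-bit I2C address */
#define REG_BUCK1 0x10   /* hypothetical buck1 output-voltage register */

static int pmic_write(int fd, uint8_t reg, uint8_t val)
{
    uint8_t buf[2] = { reg, val };   /* register pointer, then data */
    return write(fd, buf, 2) == 2 ? 0 : -1;
}

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);
    if (fd < 0 || ioctl(fd, I2C_SLAVE, PMIC_ADDR) < 0) {
        perror("i2c setup");
        return 1;
    }
    /* Hypothetical encoding: set the buck1 output-voltage code. */
    if (pmic_write(fd, REG_BUCK1, 0x1A) < 0)
        perror("i2c write");
    close(fd);
    return 0;
}
```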

This is all packed into a 36-bump, 0.4mm pitch 2.72 x 2.47mm Wafer Level Package (WLP). Maxim Integrated also offers the MAX14690 Evaluation Board, an assembled and tested circuit for evaluating the device.

Battery Management in the IoT
The nature of the Internet of Things (IoT) means it has its own low-power requirements. Devices are wireless, mobile, often located remotely, and rely on batteries. In many designs, the battery is the sticking point. The battery can be expensive to replace. Additionally, the remoteness of the IoT node, either geographically or in a hard-to-reach spot in a building or factory, can make battery replacement a time-consuming exercise. Hence the concentration by many Power Management Integrated Circuit (PMIC) manufacturers on a system-level approach to power management.

The vision of the IoT is for hundreds of billions of nodes to be still active into the next century, despite being located in hard-to-reach, inaccessible, or hostile locations. For Dr. Peter Harrop, Chairman of market research firm IDTechEx, this means batteries will have to go (Battery Elimination in Electronics and Electrical Engineering 2018-2028). In a series of reports, he points out that batteries have “serious limitations of cost, weight, space, toxicity, flammability, explosions, energy density, power density, leakage current, reliability, maintenance and/or life.” He is clearly not a fan. He continues: “Lithium Ion batteries will dominate the market for at least 10 years, and probably much longer, yet no Lithium Ion cell is inherently safe and no Lithium Ion battery management system can ensure safety in all circumstances.”

Energy harvesting is an alternative, practical approach to eliminating batteries. A Power Management Unit (PMU) converts the DC power from a harvested energy source into a regulated supply. For example, the ADP5091 and ADP5092 (Figure 3), by Analog Devices, can be used in PhotoVoltaic (PV) cell energy harvesting, ThermoElectric Generator (TEG) energy harvesting, industrial monitoring, and self-powered wireless sensor devices as well as portable and wearable devices.

Figure 3: Analog Devices takes a system-level approach to IoT battery management

The PMUs harvest from 6µW to 600mW, with an internal cold start circuit which allows input voltage down to 380mV. In both devices, the charging control function protects the rechargeable energy storage by monitoring the battery voltage with the programmable charging termination voltage and the shutdown discharge voltage.

There is the option to connect a primary cell battery, managed by an internal power path management control block. This enables the power source to switch among the energy harvester, the rechargeable battery, and the primary cell battery.
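The sketch below models that power-path priority in C—prefer the harvester, then the rechargeable cell, then the primary cell. The ADP5091/ADP5092 implement this selection in hardware; the thresholds here are illustrative only.

```c
/* Hypothetical firmware model of power-path priority. The real
 * ADP5091/ADP5092 do this in hardware; thresholds are illustrative. */
#include <stdio.h>

enum source { HARVESTER, RECHARGEABLE, PRIMARY_CELL };

static enum source select_source(double harvest_mv, double recharge_mv)
{
    if (harvest_mv >= 380.0)     /* cold-start floor cited in the text */
        return HARVESTER;
    if (recharge_mv >= 3000.0)   /* assumed shutdown-discharge limit */
        return RECHARGEABLE;
    return PRIMARY_CELL;         /* last resort: non-rechargeable cell */
}

int main(void)
{
    static const char *names[] = { "harvester", "rechargeable", "primary" };
    printf("bright light   -> %s\n", names[select_source(900.0, 3700.0)]);
    printf("dark, charged  -> %s\n", names[select_source(50.0, 3700.0)]);
    printf("dark, depleted -> %s\n", names[select_source(50.0, 2500.0)]);
    return 0;
}
```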

Figure 4: The 24-lead LFCSP ADP5091 by Analog Devices could eliminate batteries in the Industrial IoT

The company also offers the ADP5091 evaluation board, based on the ADP5091 PMIC and an Alta Devices PV cell. It includes a PV panel and power management to enable devices to be powered by energy harvesting.


Caroline Hayes has been a journalist covering the electronics sector for more than 20 years. She has worked on several European titles, reporting on a variety of industries, including communications, broadcast and automotive.

 

Low-Density Serial Flash Is Back—and IoT’s the Reason

Friday, August 4th, 2017

Low power and enhanced system performance fuel a re-emergence.

For the past decade, we have seen an abundance of systems employing off-the-shelf MCU devices with embedded flash. However, as designers now try to future-proof their applications by enhancing performance, reducing power consumption, and extending battery life, several new challenges have emerged.

Amazon Echo, Google Home, and many other home hubs, media controllers, and building/home automation controllers are beginning to have a significant impact on our lives. These technologies bring with them new protocols and standards. Designers want to add this new functionality to their applications in the form of Over-the-Air (OTA) updates. This requires larger firmware images for these new applications to retain compatibility, functionality, and interoperability. These enhanced applications need more memory space and often exceed the flash and RAM resources on the embedded MCU. OTA capabilities require extra memory to store the additional firmware code and must support up to three or four complete firmware images: the factory default image, the current image, and the new downloaded image ready to be shadowed to the internal MCU flash. OTA updates must happen easily, reliably, securely, safely, and autonomously.
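A minimal sketch of what such an external-flash image layout might look like in C: three slots plus a header used to validate an image before shadowing it to internal MCU flash. Offsets, sizes, and field names are assumptions for illustration.

```c
/* Sketch of the external-flash layout implied above: factory default,
 * current, and downloaded images, each validated via a small header
 * before shadowing. All offsets and fields are illustrative. */
#include <stdint.h>
#include <stdio.h>

#define IMAGE_SLOT_SIZE 0x40000u   /* assumed 256 KB per image slot */

enum slot { SLOT_FACTORY, SLOT_CURRENT, SLOT_DOWNLOAD, SLOT_COUNT };

struct image_header {
    uint32_t magic;     /* validity marker */
    uint32_t version;   /* build number, used to pick the newest image */
    uint32_t length;    /* payload size in bytes */
    uint32_t crc32;     /* integrity check run before shadowing */
};

/* Base offset of a slot within the external serial flash. */
static uint32_t slot_base(enum slot s)
{
    return (uint32_t)s * IMAGE_SLOT_SIZE;
}

int main(void)
{
    for (int s = 0; s < SLOT_COUNT; s++)
        printf("slot %d starts at 0x%06X\n", s,
               (unsigned)slot_base((enum slot)s));
    return 0;
}
```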

Figure 1: New edge nodes need OTA update capability, local data storage, and system flexibility. (Courtesy Adesto Technologies)

Problems
So what about the MCUs? As MCU vendors scale to smaller geometries, the size and cost of embedded flash pose problems of their own. Higher-density flash embedded inside the MCU can often increase cost while reducing performance and/or increasing power consumption. Even with the latest MCUs, engineers are wondering when they will again run out of memory. We have already seen the first of a new generation of flash-less MCU devices this year: MCU vendors abandoned embedded flash and moved to a more cost-effective, higher-performance MCU-only solution.

Modern day IoT application challenges have brought about an increased demand for low-density serial flash. This is all happening while the market enters a phase of supply shortages, capacity restrictions, and a rash of portfolio-wide end-of-life notifications. An MCU used inside a smart sensor, smart door lock, or other IoT edge node device with 256Kbit to 8Mbit of embedded flash will typically need between 1Mbit and 32Mbit of external flash to provide external OTA capability.

Standard serial flash devices have been available for decades, but evolution has focused on achieving higher density, higher performance, and lower cost. Designers often select the external memory device as a last resort when it is evident they need a larger MCU with more flash; this last-minute decision comes at higher cost, with a new PCB layout and complete new code image. Alternatively, designers choose to include an external memory to allow expansion of the current system to support the new features and requirements.

At Odds
It can be argued that the current range of serial flash solutions is often at odds with application demands. The serial flash devices today have architectures suited to high read performance and lower cost, while power consumption and programming flexibility are sacrificed. Adding these memory devices substantially increases power consumption, reduces battery life, increases MCU overheads, reduces system performance, and often exceeds the embedded SRAM resources available to temporarily support the external flash during OTA programming and updates.

Figure 2: New protocols and standards are arriving in the wake of new home hubs, media controllers, and building/home automation controllers. (Courtesy Adesto Technologies)

Factors such as system security are also impacted. This is due to the need to erase and reprogram large blocks of data, the inability to permanently protect factory code images and the generic un-personalized nature of the commodity memory devices—all of which can lead to vulnerabilities in the device trust chain. In addition, components such as voltage regulators may be required to ensure correct operation.

The latest generation of low-density memory devices has architectures and features optimized to address many IoT device and edge node application demands:

  • Wide Voltage Range Operation
    • Devices that support a wide operating voltage (e.g., 1.65V to 4.4V) matching the VCC range of the host MCU eliminate the need for additional voltage regulation and maximize battery life.
  • Optimized Operating Power
    • Read and programming current is optimized to improve energy consumption and to further maximize battery life.
  • Ultra-Low Standby Power Consumption
    • Devices with a user-command-driven ultra-low standby mode, consuming under 100 nanoamps, negate the need to switch the power supply to the memory device to conserve power when the memory is not being used, saving further external components and MCU GPIO pins.
  • OTA-Optimized, Data Storage-Friendly Small Page Program and Erase Structure
    • Small page erase capability allows code updates as small as 256 bytes to be erased and programmed with minimal MCU overheads, while improving performance and reducing update time.
  • Serial EEPROM Emulation Capabilities
    • A byte write/byte erase capability that does not require large block erase and considerable MCU resources to manage it allows simple data and system configuration to be updated a few bytes at a time.
  • One Time Programmable Lockable Secure Sectors
    • One-time-programmable sector locking capability to protect critical code and factory code images.
  • Intelligent MCU-Friendly Active Memory Interface
    • Features that allow the MCU to sleep and reduce system power whilst the memory is programming or erasing. The memory device becomes an active peripheral device that generates its own interrupt signal when a process or operation is completed, negating the need for the MCU to wake up and poll the memory for status, or run an inefficient fixed delay loop counter for each programming or erase operation (see the sketch after this list).
  • Enhanced, Command-Rich Serial Memory Interface
    • A rich command set reduces MCU overheads—for example, a single command that acts as both an erase and a buffer program in one operation, versus having to issue an erase command, wait for and monitor the erase operation, and then issue and monitor the programming command. This saves MCU overhead and execution time and improves system performance and battery life.
  • On-Chip User Accessible Security Numbers
    • Memory devices that contain user accessible unique identity numbers to allow the device to be integrated into a system trust chain of components for enhanced security.
  • Flexible SRAM Buffers
    • Flexible SRAM buffers inside the memory device that can be read from as well as used in the traditional programming-only mode, allowing data and code storage to be loaded directly to the SRAM buffer and read back or modified later. These R/W SRAM memory buffers can even be used as extended scratch pad space to supplement the embedded SRAM and stack space on the MCU.
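The sketch below contrasts the two completion models from the list above: a classic status-polling loop versus an interrupt-driven wait that lets the MCU sleep. The simulated driver calls are placeholders, not a specific Adesto command set.

```c
/* Illustrative contrast of polling vs. interrupt-driven completion.
 * The "driver" below simulates a busy flash; it is not real hardware. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define STATUS_BUSY 0x01
static int busy_ticks;                        /* simulated busy time */

static void spi_erase_program_page(uint32_t addr)
{
    (void)addr;
    busy_ticks = 3;                           /* page operation starts */
}
static uint8_t spi_read_status(void)          /* classic status poll */
{
    return busy_ticks-- > 0 ? STATUS_BUSY : 0;
}
static bool flash_irq_fired(void)             /* memory-side interrupt */
{
    return --busy_ticks <= 0;
}
static void mcu_sleep_until_irq(void)         /* WFI on a real MCU */
{
}

/* Legacy model: the MCU stays awake, burning power in a poll loop. */
static void update_page_polling(uint32_t addr)
{
    spi_erase_program_page(addr);
    while (spi_read_status() & STATUS_BUSY)
        ;                                     /* busy-wait */
}

/* Interrupt model: the MCU sleeps; the memory signals completion. */
static void update_page_irq(uint32_t addr)
{
    spi_erase_program_page(addr);
    while (!flash_irq_fired())
        mcu_sleep_until_irq();
}

int main(void)
{
    update_page_polling(0x1000);
    update_page_irq(0x1100);
    puts("both page updates complete");
    return 0;
}
```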

Low Density, High Interest
The rise of new IoT applications is introducing new demands and driving a resurgence of interest in low-density memory. Many of these devices are battery powered and need to be Amazon Alexa or Google Home compatible. To keep pace with evolving and growing standards, all of these new edge nodes require OTA update capability, local data storage, and system flexibility. Having a low-power memory that forces the MCU to burn more energy to manage the memory is a false economy. The latest Adesto Serial Memory product options provide intelligent, MCU-friendly features to reduce overheads, improve security options, increase device flexibility, improve performance, reduce energy consumption, and extend battery life. The memory device is a critical component in any modern system, and can now be treated as an intelligent system peripheral that works autonomously with the MCU rather than just a simple memory storage solution that is a slave to the host MCU.


Paul Hill is Senior Director of Product Marketing for Santa Clara, CA-based Adesto Technologies. Adesto (NASDAQ:IOTS) is a leading provider of application-specific, ultra-low power, smart non-volatile memory products. Contact Paul at Paul.Hill@adestotech.com.

Cooler, Safer Smart Home Hub: What a Difference a Diode Makes

Thursday, June 22nd, 2017

Steps taken to solve high forward voltage drop and high reverse leakage current issues.

Your customer has successfully launched a new smart home hub with a design that wirelessly controls door locks, lights, thermostats, audio, and electrical appliances, all while sending notifications to the homeowner. The soap-bar-sized gadget (Figure 1, lower left corner), powered by a wall adapter, is packed with electronics and includes backup batteries in case of a power outage. However, the sleek design is pushing the device’s thermal limit, and now the customer is concerned that the heat generated by the hub may become a problem, compromising the product’s success. The customer asks for help to seamlessly reduce the heat generated without compromising the overall design. This design solution will examine the best option for addressing this issue.

Figure 1: Smart Home Hub

Power Management Implementation
Figure 2 shows a diagram of the customer’s current smart hub system. It’s powered by a 5V wall adapter and has a non-rechargeable backup battery. Three AA alkaline batteries (1.5V x 3, 2Ah) support the 200mA average load and the always-on buck converter outputs a nominal 2.5V. The radio communicates with home appliances equipped with wireless protocols such as Z-Wave™ and ZigBee™. The Ethernet connection ensures exchange of notifications and events with the cloud, but the device will still work locally provided there is power.
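As a first-order sanity check on these numbers (our arithmetic, not Maxim’s), the backup runtime falls between a conservative floor that treats battery current as equal to the 200mA load current and an ideal-efficiency estimate that accounts for the buck stepping 4.5V down to 2.5V:

```latex
t_{\text{floor}} \approx \frac{2\,\text{Ah}}{0.2\,\text{A}} = 10\ \text{h},
\qquad
t_{\text{ideal buck}} \approx \frac{4.5\,\text{V}\times 2\,\text{Ah}}{2.5\,\text{V}\times 0.2\,\text{A}} = 18\ \text{h}.
```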

Figure 2: Your Customer’s Smart Hub System

The active and passive components of the smart hub’s power circuit are shown in Figure 3. The two diodes are housed in an SOT23-3 (2.6mm x 3mm) package, outlined in red.

Figure 3: Smart Hub Power Circuit Footprint (46mm2)

Design Shortcomings
An initial thermal design analysis shows that the always-on buck regulator is operating quite efficiently. So, the focus quickly turns to the two Schottky diodes. At 200mA, these diodes develop a 600mV voltage drop, dissipating 120mW. The power delivered to the load is 2.5V x 0.2A = 500mW. This means that the diode adds a 24% power dissipation overhead.

Another concern in using Schottky diodes is leakage. In normal operation, the diode connected to the alkaline battery is reverse-biased, leaking a current of about 1µA into the battery, with the leakage increasing over temperature. This leakage effectively acts like a trickle charge of the battery for an inordinately long time, until a power outage comes along to discharge the battery. The problem is that these are non-rechargeable batteries that carry a warning, “If recharged they may explode or leak and cause burn injury.” This is not what we intended for the customer.

What are the Options?
Clearly, it’s critical to drastically reduce the power dissipation. One option is to use a low RDSON n-channel DMOS-based solution such as the one shown in Figure 4. The diodes intrinsic to the MOSFETs are indicated with dashed lines. In this configuration, the control IC biases the GATE according to the voltage sensed across the MOSFET. A positive source-to-drain voltage turns the MOSFET “on” with current flowing in reverse mode (source to drain). A negative source-to-drain voltage turns the MOSFET “off,” with the intrinsic diode reverse-biased. This solution requires two MOSFETS and two control ICs, making it bulky and expensive. This is a drastic solution that would require the redesign of the entire PCB.

Figure 4: A Discrete Solution to the Diode OR-ing Problem

The best solution would be a small diode with dramatically lower losses than a Schottky diode and no or minimal reverse current. Fortunately for your customer, such a device is available.

An Ultra-Tiny, Micropower, 1A Ideal Diode with Ultra-Low Voltage Drop
Based on a low RDSON p-channel DMOS, the MAX40200 diode (Figure 5) drops a voltage about an order of magnitude lower than that of Schottky diodes. The internal circuitry senses the MOSFET drain-to-source voltage and, in addition to driving the gate, keeps the body diode reverse-biased. This additional step allows the device to behave like a true open switch when EN is pulled low, or when hitting its thermal limit. A positive drain-to-source voltage turns the MOSFET “on,” with current flowing in normal mode while the body diode is reverse-biased. A negative drain-to-source voltage turns the MOSFET “off,” with the intrinsic diode again reverse-biased. If EN is low then the device is ‘off’ independently of the VDD-OUT polarity.

Figure 5: Ideal Diode Functional Diagram

When forward-biased and enabled, the MAX40200 conducts with less than 100mV of voltage drop while carrying currents as high as 1A. The typical voltage drop is 43mV at 500mA, with the voltage drop increasing linearly at higher currents. Figure 6 shows the ideal diode’s forward voltage vs. forward current and temperature. The MAX40200 thermally protects itself and any downstream circuitry from overtemperature conditions. It operates from a supply voltage of 1.5V to 5.5V and is housed in a tiny, 0.73mm x 0.73mm, 4-bump wafer-level package (WLP).

Figure 6: Ideal Diode Forward Voltage

The tiny MAX40200 WLP package is about one order of magnitude smaller than the SOT23-3 housing the two Schottky diodes in Figure 3. Two MAX40200s fit easily in place of the SOT23-3 position on the PCB with additional room to spare. The MAX40200 application footprint is shown in Figure 7.

Figure 7: Integrated Smart Hub Power Solution with Two MAX40200s

With a 20mV drop at 200mA, the device consumes only 20mV x 200mA = 4mW of power, greatly reducing the thermal load. Additionally, the MAX40200 reverse leakage current is mostly from Cathode/OUT to ground, while the Anode/VDD reverse current is practically null (Figure 8). This effectively eliminates the unwanted trickle charge of the non-rechargeable battery. The MAX40200-based solution, easily and quickly implemented by the relieved customer, solves both the heat and leakage problems in the smart home hub device.

Figure 8: Leakage Current Into VDD

Conclusion
We discussed the constraints of a smart hub’s power management system in the context of a customer problem case study. The customer’s smart hub had severe problems directly related to the classic shortcomings of Schottky diodes, namely high forward voltage drop and high reverse leakage current. The smart home hub generated too much heat, and the reverse leakage current was unwittingly trickle-charging the non-rechargeable battery. To address this, two MAX40200 diodes seamlessly replaced the two Schottky diodes with little modification of the PCB. This practically eliminated the leakage current into the battery and reduced the power dissipation overhead by a factor of thirty, making for a happy, satisfied customer.


Nazzareno (Reno) Rossetti, Principal Marketing Writer at Maxim Integrated, is a seasoned Analog and Power Management professional, a published author and holds several patents in this field. He holds a doctorate in Electrical Engineering from Politecnico di Torino, Italy.

Steve Logan is an executive business manager at Maxim Integrated, overseeing the company’s signal chain product line. Steve joined Maxim in 2013 and has more than 15 years of semiconductor industry experience, both in business management and applications engineering roles. Steve holds a BSEE degree from San Jose State University.

Fully Customizable Gateway SoC Platform Targets Wide Variety of IoT Applications

Wednesday, April 19th, 2017

How the design methods used to develop an IoT Gateway SoC with a customizable platform can reduce risk, schedule, and costs.

A gateway device plays a critical role in the Internet of Things (IoT) by collecting data from the sensors located at the network edge. It then filters, analyzes, and normalizes the data, and sends it to the cloud for sharing with the network. Designing a gateway SoC from scratch is a challenge that involves not only developing the SoC architecture, software, and hardware, but managing the integration and validation as well. These activities take a significant amount of time and require the involvement of a large design team, which results in longer design cycles and longer time-to-market. There are a variety of applications where these gateways are used, such as surveillance, deep learning, artificial intelligence, data storage, data security and more. Designing a custom SoC for each application from scratch is not a viable solution. Designing a single SoC for all the applications is also not feasible due to the huge investment, risk, and time consumption.

Gateway SoC design is greatly simplified when a gateway platform is utilized. A gateway reference platform offers a modular design that is fully customizable to enable multiple solutions on a single gateway SoC. This helps in reducing system BOM cost and speeding time-to-market. The reference platform approach enables efficient hardware and software partitioning, custom IP integration, and device software development during custom SoC development. The key to creating cost effective custom silicon for the IoT is the platform approach because it reduces risk, schedule, and cost.

A gateway SoC may be a simple device that captures the data from various slow-speed peripherals, then packs the data and sends it through low- to medium-speed peripherals to the cloud. The gateway SoC can also be a complex device that captures data from different high- and low-speed interfaces, performs pre- and post-processing of data, performs analytics, and then sends the resulting data to the cloud through high-speed interfaces. To build a gateway SoC suitable for different applications, it is important to perform partitioning of the silicon such that the simple gateway device is a subset of a complex gateway SoC. An intelligent approach is to keep the simple gateway SoC design as a basic building block and add other IP blocks to create complex gateway SoC variants. However, adding multiple application-specific IPs into a single SoC will increase cost, since each IP is expensive. Also, if an application does not use an IP, then the IP will be redundant for that SoC.
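One way to picture this partitioning is as composition: every variant embeds the same verified generic core and adds application-specific IP on top. The C sketch below is purely conceptual; the names are illustrative, not Open-Silicon’s.

```c
/* Conceptual sketch of the partitioning argument above: variants share
 * one verified generic-IP core and differ only in added IP. Names are
 * illustrative assumptions. */
#include <stdio.h>

struct generic_core {            /* verified once, reused everywhere */
    const char *cpu;             /* processor complex */
    const char *low_speed_io;    /* SPI/I2C/UART/CAN */
    const char *high_speed_io;   /* PCIe/USB/SATA */
};

struct gateway_variant {
    struct generic_core core;    /* the common subset */
    const char *app_ip;          /* per-application addition */
};

int main(void)
{
    struct generic_core core = { "quad-core CPU", "SPI/I2C/UART/CAN",
                                 "PCIe/USB/SATA" };
    struct gateway_variant surveillance = { core, "video codec IP" };
    struct gateway_variant secure       = { core, "crypto accelerator IP" };

    printf("surveillance variant adds: %s\n", surveillance.app_ip);
    printf("secure variant adds: %s\n", secure.app_ip);
    return 0;
}
```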

A gateway SoC needs to be validated for functionality, and this is achieved using a platform approach. All the IPs that need to be part of a simple gateway design are carefully selected such that the same IPs can be seamlessly reused in complex gateway designs. The fastest way to develop a gateway SoC is to use different pre-silicon design methodology platforms—including RTL, virtual, and FPGA platforms. By using these pre-silicon methodology platforms, the design can be verified in simulation. In addition, the software stack can be developed on the virtual prototyping platform, and the end use case application can be tested with real peripheral devices and software for functionality on the FPGA platform.

Software drivers are developed and tested along with RTL verification, so the bring-up effort on software development is greatly reduced. The FPGA prototyping platform helps the designer test the design with real-life peripheral components or devices and check the functionality of the end product or design. This approach significantly boosts the confidence of the designer and the success rate of first-time working silicon at tape-out.

Figure 1 shows the pre-silicon methodology platforms for successful IoT gateway SoC design.

Once the design is verified on pre-silicon methodology platforms and taped out for fabrication, the design team can prepare for silicon bring-up by enhancing the software and designing the bring-up board. This increases the efficiency of the design team, consumes less design time, and reduces design errors.

During the gateway SoC design, the design team can create multiple designs for various end applications, retaining common core blocks and adding newer blocks to create additional variants that result in new silicon parts for different applications in a short amount of time.

Figure 1: Gateway SoC Design Platforms

The key advantage of using gateway SoC platforms is that the core block can be verified once, and its software drivers can be written and reused as a core library. New application-specific IPs can be seamlessly integrated with the verified core library and used to create new gateway SoC designs. By adding new test cases and designing new daughter cards for application-specific IPs, the validation of new designs can be performed faster, resulting in less design effort and minimal risk.

Figure 2 shows an example of a Gateway SoC consisting of generic and application-specific IP sections.

Figure 2: IoT Gateway SoC Partition

A sample IoT gateway SoC consists of two partitions, one with a generic IP section and the second consisting of an application-specific IP section. The generic IP section consists of IP blocks that are common across applications. Some of these are the processor complex; DDR3/4 and flash memories; high-speed peripherals, such as PCIe/USB/SATA; and low-speed peripherals, such as SPI/I2C/UART/CAN/timers.

The IPs in a generic IP section are carefully selected and verified on pre-silicon methodology platforms. On the FPGA platform, a daughter card is designed containing all the peripherals required in the end application, and the design is verified under conditions close to the end application.

Figure 3. Custom SoC-Based Smart City IoT Gateway Reference Board

Once the generic IP section design is verified, different variants of the gateway SoC can be designed by adding application-specific IPs to the generic IP section based on end applications. The design with new IPs integrated is verified on the pre-silicon methodology platforms. An add-on daughter card with specific peripheral devices can be designed to validate the newly added application-specific IPs. Several SoCs can be designed in parallel for different applications. This will reduce time-to-market, minimize bugs in the design, and lower overall design cost.

Case Study

The gateway SoC methodology platforms were utilized in an IoT gateway custom SoC design. The SoC is intended to be used for IoT gateways in smart city applications. Figure 3 shows the smart city IoT gateway reference board built around the custom SoC. The gateway is a full-featured device that supports various types of wireless and wireline connectivity, communicates with IoT edge devices, and connects to the cloud through 3G/LTE/WiFi.

Figure 4. IoT Gateway SoC Platform Software Features

Summary

Utilizing the IoT SoC platform approach detailed above to design custom SoCs for various IoT applications—with reduced risk, schedule, and cost—is set to become an industry best practice.


Naveen HN is an Engineering Manager for Open-Silicon. He oversees board design, post-silicon validation and system architecture. He also facilitates Open-Silicon’s SerDes Technology Center of Excellence and is instrumental in the company’s strategic initiatives. Naveen is an active participant in the IoT for Smart City Task Force, which is an industry body that defines IoT requirements for smart cities in India.

How Richer Content is Reshaping Mobile Design

Tuesday, March 29th, 2016

Low-end and mid-end phone memory demand is heating up more rapidly than ever, but with 3D NAND, things are looking (and stacking) up.

Editor’s Note: When Intel and Micron together announced the availability of a NAND technology that tripled capacity compared to other solutions, the two firms pointed to “mobile consumer devices” as among the beneficiaries of the storage breakthrough. Flash forward to March of this year, when Micron’s Mike Rayfield, VP and GM of the company’s Mobile Business Unit, shared the Mobile World Live “How Richer Content is Reshaping Mobile Design” webinar stage with Ben Bajarin, Principal Analyst, Global Consumer Tech, Creative Strategies and moderator Steve Costello, Senior Editor, Mobile World Live. Following are selected edited excerpts from the webinar transcript, with organizational text, figures and captions.

Mike Rayfield, Micron

Mike Rayfield: [In making] observations at Mobile World Congress this year, I [saw] three things:

1. The content for mobile devices is getting a lot richer. Whether it’s driven by the coming 5G pipes being a lot thicker to push a lot more data or whether it’s high resolution content or virtual reality. But, clearly the richness of data has put a huge strain on the mobile handset and what it’s got to be able to do.

2. [IoT] things are hubbed to your smartphone. And what we’re seeing is it’s becoming a much more important device.

3. One or two billion people don’t have a computer, don’t have a phone, and don’t have access to the Internet. There are now devices, full functionality devices that can be acquired by those folks for well under a hundred dollars that are going to allow them to have that first computer, make that first phone call and surf the Internet. Mobile devices, while they’re important to us now, are getting even more important and are going to become important to folks that will own them in the very near future.

Smartphone as Primary Computer

Ben Bajarin, Creative Strategies

Ben Bajarin: So much of the component landscape and innovation from smartphones is driving other sectors. If you look back to the 90s and early 2000s era of the PC, you’d see [that PC] components (SoCs, memory, screens) drove other elements of different industries. And now we’re seeing that same thing play out in smartphones’ [components and innovation] driving the IoT. And those same products are moving to cars. Those same products are moving to virtual reality.

The smartphone, for most of the world, is not just their only computer, but even in developed markets it is their primary computer. That has huge implications on how it’s used, on engagement levels, all the way across the board to new opportunities in software and services.

Rayfield: If you think about mobile device design, the things that it cares about are performance, energy efficiency and footprint. Networking and cloud, enterprise, IoT, automotive—all of those things care about those same three things.

Figure 1: The requirements for memory in each of these areas are critical, accelerating and evolving.

The mobile device is also now becoming a creator of content. That pushes a staggering amount of bandwidth onto the network, [forcing] the network to be more robust and higher performance. It forces more storage onto the cloud—we all want to back things up. And mobile is the reference design for automotive and IoT. All of this innovation we do, whether it be in performance, footprint or energy efficiency in mobile, is touching all of the other different markets. [It] is something that has just started to happen in the past couple of years and is going to accelerate.

Bajarin: Mobile is the catalyst for new things. Things we haven’t even thought of yet. Virtual reality is just one example of those things we weren’t talking a lot about, and now we’re talking very distinctly about how the mobile ecosystem drives the experience. That’s the emphasis of smartphones now and that whole ecosystem—driving all of these other peripheral markets. When we looked just at some usage behaviors, where we’re going in the next generation, 5G, most people are going to consume more and more and more bandwidth (Figure 1).

Burden on Memory Grows

Rayfield: Even a couple of years ago the average [handset] device had 1 GB or less of DRAM. And it was relatively small. It had 4 or 8 GB of NAND. As these devices have become your mobile computer, the burden on the memory has become significantly larger. Flagship devices on average have about 3 GB of DRAM (Figure 2) and we’re working with partners in China where they’re already working on 6 GB DRAM devices.

Figure 2: Low-end and mid-end phone memory demand is accelerating faster than ever before due to the shift to full smartphone capabilities.

Consumers figured out more is better, and not only because it’s a larger number but because they see a difference in performance.

Everybody has talked about the idea that the cloud is going to replace local storage. The reality is we’re just too impatient for that. Connectivity is not ubiquitous by any means, and ultimately we want our pictures, we want our videos, we want our movies on our phone.

Advances in NAND have allowed us to go from 32 to 64 to 128 GB of local storage, and I see that just continuing. I do a sort of informal sampling with my kids and look at their devices. They’ve got a smartphone with 64 GB of NAND, and there’s about 8 GB of free storage whenever I look at it. We’re going to continue to go in that direction.

Bajarin: One of the trends that we’re looking at in particular, not just in developed markets but also as we look more at emerging markets, is: What role are cloud services going to play? If I’m thinking about increasing the capacity of what I can do on a device, and I want to move that content to other products or have it backed up, the question becomes how many people actually do this today.

In consumer research, we found that 75 percent of consumers have yet to invest in cloud services—[invest meaning] pay for something: pay for iCloud, pay for Dropbox, pay for cloud storage as part of a solution.

[…] local storage is still playing a huge role. […] you’re starting to see an increase again in capture. [Consider those] on Snapchat making short videos of their day and posting them to their stories. Sometimes that’s archived, sometimes it’s not. And the dual camera is something you’re going to hear a lot about this year from the component landscape.

[You’ll be able to] take a 45-plus megapixel picture, [have] multi-range zoom, change the focal length, record two videos: one in slow motion and one in fast motion. We’re talking about a tremendous amount of file storage there. Even if everyone were pervasively subscribing to the cloud today, [consider] how much back-end infrastructure [is needed] to take in all that’s being created. There are still some innovations in smartphone cycles, the dual camera being one of them, which will put very significant demands on the capabilities of those products.

Mobile to Smart

Rayfield: There are 1-2 billion people that we really haven’t touched with mobile devices yet (Figure 3). Only a couple of years ago, it was assumed that those folks would buy a 20/30/50-dollar feature phone, which ultimately had very limited capability, just the ability to make a phone call. Clearly, that wasn’t a very interesting market. Now you can look at devices well under 100 dollars that are full computers, that give people the ability to access the Internet, to make phone calls, and to have a computer. I think that changes the landscape a lot, even compared with what we were seeing a couple of years ago.

Figure 3: Smartphone installed base per population.

I’ve seen three examples of phones that are well under 100 dollars (Figure 4). And they’re full functioning computers. They have 1 GB of DRAM, and 4, 8 or 16 GB of NAND, and they have great cameras and high-performance processors. Literally, these are devices that a couple of years ago would’ve been $400, $500 or $600. I think that’s a tipping point that has infrastructure folks putting better infrastructure in developing countries. That is going to open up this huge opportunity for the next 1-2 billion people.

Figure 4: Feature phones that historically had almost no memory are being replaced by very inexpensive smartphones with the same amount of memory as high-end smartphones.

Bajarin: In my analysis of these markets I break out: What did we do to get to this first 2 billion people who are online with their smartphones today? What are their behaviors like? The dynamics between the mid range and the high range are changing within this first 2 billion. We look at the first 2 billion [smartphone users] as a very distinct market.

You hear, ‘oh, smartphones are a saturated market, it’s slowing down,’ and while that’s true, I think we also have to recognize that there are a lot of people who still don’t have smartphones, and that a lot of this is an economic discussion. The way that I visualize this is to plot our model of smartphone installed base for the countries I track against their populations (Figure 3). You look at places like China and India, even Brazil, which is large, and Indonesia, and particularly at the continent of Africa, and you see massive, massive portions of those populations that have yet to get a smartphone.

Now, what we also keep in mind is that most of them actually have a mobile phone. So it’s not like we’re starting from scratch.

I believe that, over time, we will convert the two or so billion people who today have a mobile phone but not yet a smartphone into smartphone users. Follow that with the observation that the smartphone will become their primary computer for doing things: engaging in commerce, learning tips about farming, engaging in trade in new ways, becoming better educated. All of that has the potential to add hugely to the GDP of those countries, particularly their less developed regions.

Planar to 3D

Rayfield: We’ve talked a lot about smartphones; let’s go back and look at what they’re made of. Even the 100-dollar devices have computing capabilities that were in PCs only a couple of years ago. From a DRAM standpoint, it’s already the highest-performing high-volume DRAM in the industry. It’s all about energy per bit: how do we get more efficient, how do we get battery life to last longer? There are going to be next-generation options, whether you put the memory in package or other kinds of things. Those are things that are being driven by smartphones and are going to be used across other markets. From a NAND standpoint, we’ve talked about people wanting additional storage and capability.

Planar NAND has pretty much run out of steam in terms of lithography, so we’re going to 3D. We’re getting to the point where it will be very easy in one or two or four die configurations to have 64, 128 or 256 GB of memory in the same footprint that you have 4 GB now. That’s an innovation that people will use once they have the storage, and it’s being designed in devices right now. The reality is we think a lot of the mobile phone design is bound by how much memory or NAND [is available].

Bajarin: […] what happens if we move now to a video era, where all of a sudden we see an increase in capture and creativity and the sharing of a range of things, again, probably video experiences we can’t even fathom today because they haven’t been created yet?

There are huge implications across the hardware, software and services landscape, but all of these devices have to be built understanding that consumer demand will be there. If we give them more capacity, they will take advantage of that, but more importantly so will the developer ecosystem. We study millennials a lot, because I think millennials globally put the most demand on their devices [and] video is a normal part of their life and usage, and that’s just going to increase, it’s not going to stop. New behaviors around video and new demands will emerge as this generation starts to get those capabilities from a capture standpoint in terms of sensors, as well as just having the throughput in 5G and beyond to now take advantage of these services. So we look toward this next video era in light of what we saw in the photo era. Unprecedented things happened with photography, and we’re on the cusp of seeing the same thing with video.

Rayfield: One of the things that video puts a burden on is storage. We’ve now gone to 3D NAND (Figure 5). What 3D allows you to do is back off on lithography, making it a little easier to scale, and scale vertically. And what we gain is capacity. So in a tiny sliver of silicon you can put all the storage you need in your mobile device.

You also gain speed. Very quickly the network is getting fast enough that it’s forcing us to get better on storage, and that is going to be the thing that continues to drive us. As we go to 3D, we get much, much bigger bandwidth from the storage.
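
To put rough numbers on that planar-to-3D shift, here is a back-of-the-envelope sketch in Python. The cell and layer counts are purely illustrative assumptions, not vendor figures; the point is that capacity scales with stacked layers and bits per cell rather than with finer lithography.

```python
# Back-of-the-envelope 3D NAND capacity model; all numbers are illustrative.

def nand_die_capacity_gb(cells_per_layer: float, layers: int, bits_per_cell: int) -> float:
    """Capacity in gigabytes: cells x layers x bits-per-cell, over 8 bits/byte."""
    return cells_per_layer * layers * bits_per_cell / 8 / 1e9

# Assume a single-layer planar die that holds 4 GB at 1 bit per cell.
planar_cells = 4 * 8 * 1e9  # 32 billion cells implied by that 4 GB die

for layers, bits in [(1, 1), (16, 1), (32, 1), (64, 1)]:
    cap = nand_die_capacity_gb(planar_cells, layers, bits)
    print(f"{layers:>2} layer(s), {bits} bit/cell -> ~{cap:.0f} GB in the same footprint")
```

At these assumed figures the progression lands at 4, 64, 128, and 256 GB per die, in line with the capacities discussed above.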

Figure 5: Traditionally, flash has been built in a planar, or two-dimensional structure—much like a one-story building. When we wanted to add space, we had to make the data cells (the rooms in the building) smaller. With 3D, we’re building a vertical building—like a skyscraper.

In summary, the amount of data that people consume, generate, and want to share is driving what the system is. The content that the developed world is generating, consuming, and sharing is quickly moving to the next 1-2 billion users, and again that’s going to put a lot of pressure both on the devices and on their capabilities. And if you look at the bottleneck of consumer experiences and devices, it’s all about generating, creating, and sharing content. These devices now have capabilities that, a couple of years ago, people wouldn’t have thought would become as ubiquitous as we believe they’ll be around the world.

Ultra Low Energy and the ULE Alliance

Thursday, March 24th, 2016

A new generation wireless communication technology for the IoT has applicability in cases ranging from energy savings to emergency response to the smart home.

The ULE Alliance is an organization that works to expedite the worldwide deployment and market adoption of Ultra Low Energy (ULE) products. The ULE Alliance works with its members to quickly develop new products and services in the areas of Home Automation, Security, and Climate Control by certifying standards conformance and ensuring interoperability between the IoT products of different vendors, thereby improving service provider selection, delivering true customer satisfaction, and increasing the overall size of the market for all participants. Our goal is to ensure that the proven and superior ULE technology will be a leading infrastructure and standard for home wireless networks, enabling a safer and more convenient life for all people.

Our organization is made up of global service providers, vendors and chipset developers dedicated to developing energy-efficient and powerful solutions for the Smart Home and Office.

Comparison to Other Wireless Options on the Market

ULE is a new-generation wireless communication technology for the IoT, based on DECT, an established technology with 20+ years of deployment. Because the technology is established, the chipsets are reliable and costs are extremely competitive. ULE operates in dedicated, licensed, royalty-free spectrum, giving the service provider the longest wireless range: 75-100 meters in-building, and up to 300 meters in open range, compared with competitors that range from 10-30 meters from the gateway. This greatly reduces the need for expensive repeaters throughout the home, greatly simplifies layout and installation, and reduces energy consumption and overall cost; many providers are looking at self-installation as another cost-saving option for their intelligent home solutions.

This unique band has no interference from Wi-Fi, ZigBee, Bluetooth, Z-Wave, etc., and can carry two-way voice and video with great stability. This makes possible applications such as fire systems that tell you exactly where the fire is rather than just sounding an alarm. Such systems can also open a northbound call to 911 to provide an open channel of communication. Critical information, e.g., “my children are trapped in the back of the house,” can be conveyed in a situation where seconds matter.

The ultra-low-power design also makes great use of the battery, so much so that tests have shown some batteries can last seven years without changing. Imagine not having to tell customers to change the alarm batteries every six months, or worrying that the system will fail due to a dead battery. Most batteries will die of corrosion before losing power.
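
As a rough sanity check on that multi-year claim, battery life for a mostly-sleeping node can be estimated by dividing usable capacity by average current draw. The figures below are assumptions for illustration, not ULE Alliance test data:

```python
# Quick battery-life estimate for a mostly-sleeping ULE sensor node.
# Capacity and current figures are assumed for illustration only.

BATTERY_CAPACITY_MAH = 1500   # assumed primary lithium cell
AVG_CURRENT_UA = 20           # assumed average draw: deep sleep plus radio bursts

hours = BATTERY_CAPACITY_MAH * 1000 / AVG_CURRENT_UA   # uAh / uA = hours
years = hours / (24 * 365)
print(f"~{years:.1f} years")   # ~8.6 years; self-discharge and corrosion trim this
```

At an assumed 20 µA average, a 1500 mAh cell yields roughly eight and a half years, which is consistent with the seven-year test results once real-world losses are counted.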

Tens of millions of home gateways use DECT; these gateways can be easily upgraded to ULE by a software update, something that service providers perform routinely. This creates a new business opportunity for service providers to expand communication services into their customers’ homes, providing smart home services while re-using installed home gateways.

Liaisons

The ULE Alliance has established active working liaisons with ETSI, the Home Gateway Initiative (HGI), the AllSeen Alliance, and the Open Connectivity Foundation (OCF, formerly OIC) to help foster better interoperability, management, and bridging between all sensors and devices in the home. The DECT Forum is an active member of the ULE Alliance as well.

At CES 2016 the ULE Alliance and its partners, AllSeen and OCF, gave joint demonstrations, and the close cooperation is expected to continue and to strengthen all the parties.

Milestones

Since the launch of the Ultra Low Energy certification program in mid-2015, 30+ products have successfully achieved certification. We expect the number to top 100 by 2017.

  • At CES 2016, the ULE Alliance demonstrated ULE working over IPv6 (6LoWPAN).
  • In January, members Turkcell and DSP Group announced a commercial IoT deployment based entirely on ULE, which will serve a base of 25 million households.
  • Deutsche Telekom announced that all of its next-generation home gateways will be equipped with ULE.
  • Panasonic introduced its Home Automation kit, based on ULE technology, in the USA and Europe.

What’s Next

  • With the introduction of 6LoWPAN, IPv6 connectivity will play an increasing role in extending the IP communication protocol into the IoT, replacing proprietary technologies and enabling better interoperability.
  • More companies will adopt wireless-technology-agnostic application layers, such as AllJoyn and IoTivity.
  • Consolidation and cooperation between different technologies must happen in order to achieve seamless integration of new devices into existing networks, regardless of the wireless technology in use. Interoperability across various wireless technologies is imperative to support widespread IoT adoption.

Ahead for the ULE Alliance

  • The ULE Alliance will release 6LoWPAN support for ULE in Q2 2016
  • The ULE Alliance will continue its cooperation with AllSeen and OCF, introducing two projects with each partner:
  • A bridging gateway between existing ULE and AllJoyn/IoTivity networks
  • Sensors running AllJoyn/IoTivity application layers over the ULE protocol
  • New ULE-based service deployments by major European service providers
  • An increasing number of device manufacturers worldwide using ULE
  • The number of ULE-based sensors/actuators surpassing 250 by the end of 2017

Avi Barel is Director of Business Development, ULE Alliance. He has over 30 years of broad high-tech experience spanning software development, semiconductors, engineering, and business management.

Barel joined the ULE Alliance in April 2013 as Director of Business Development and is actively leading the promotion of ULE technology worldwide. Prior to the ULE Alliance, he served for 8 years as Corporate Vice President of Sales at DSP Group. Before joining DSP Group, Barel established and managed Winbond’s subsidiary in Israel for 5 years, developing semiconductor products for speech processing and communication and managing a team of 50+ engineers. At National Semiconductor he held a variety of engineering, engineering management, and business management positions, and was involved in the development and management of award-winning, innovative projects.

Barel holds an M.Sc. in Computer Science and a B.Sc. in Mathematics and Physics from the Hebrew University of Jerusalem.

Formal Low-Power Verification of Power-Aware Designs

Monday, November 9th, 2015

Introduction

Power reduction and management methods are now all-pervasive in system-on-chip (SoC) designs. They are used in SoCs targeted at power-critical applications ranging from mobile appliances with limited battery life to big-box electronics that consume large amounts of increasingly expensive power. Power reduction methods are now applied throughout the chip design flow, from architectural design through RTL implementation to physical design.

Power-Aware Verification Challenges

Power-aware verification must ensure not only that the power intent has been completely and correctly implemented as described in the Unified Power Format (UPF) specification [1] or the Common Power Format (CPF) specification [2], but also that the functional design continues to operate correctly after the insertion of power management circuitry.

Power estimates are made at several stages in the design flow, with accurate estimates becoming available only after physical layout. Consequently, design changes such as RTL modifications and layout reworks ripple back and forth through the design flow. This iterative power optimization increases verification and debug effort, project risk, and cost. The objective is to achieve the target power consumption while limiting the cost of doing so.

The power management scheme

Initially, an implementation-independent functional specification is devised to meet product requirements. The functionality is then partitioned across hardware and software. An initial power management scheme can be devised at this architectural level, but it defines only which functionality can or should be deactivated for any given use case, not how it should be deactivated. The functionality is then implemented in RTL, making extensive use and reuse of pre-designed silicon IP blocks, together with new RTL blocks. Some of these blocks may be under the control of software stacks.

At this stage, decisions are made about how functionality should be deactivated, using a multiplicity of power reduction methods to achieve the requisite low-power characteristics. These decisions must comprehend both hardware- and software-controlled circuit activity. Common power management methods include:

  • Clock gating
  • Dynamic voltage-frequency scaling (DVFS)
  • Power shut-off
  • Memory access optimizations
  • Multiple supply voltages
  • Substrate biasing
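
These methods attack the two components of CMOS power draw: dynamic (switching) power and static (leakage) power. The sketch below applies the standard first-order model to purely illustrative values to show where each class of technique saves energy; none of the numbers come from a real design:

```python
# First-order CMOS power model: P = a*C*V^2*f (dynamic) + V*I_leak (static).
# All values are illustrative, not drawn from any real library or design.

def power_w(activity, cap_f, volts, freq_hz, i_leak_a):
    dynamic = activity * cap_f * volts ** 2 * freq_hz
    static = volts * i_leak_a
    return dynamic + static

nominal = power_w(0.2, 1e-9, 1.0, 1.0e9, 50e-3)      # full speed: 0.25 W
clock_gated = power_w(0.0, 1e-9, 1.0, 1.0e9, 50e-3)  # no switching, leakage remains: 0.05 W
dvfs = power_w(0.2, 1e-9, 0.8, 0.5e9, 40e-3)         # V and f lowered together: ~0.096 W
shut_off = 0.0                                        # power shut-off removes leakage too

print(f"nominal={nominal:.3f} W  clock-gated={clock_gated:.3f} W  "
      f"dvfs={dvfs:.3f} W  shut-off={shut_off:.1f} W")
```

Because dynamic power scales with the square of voltage, lowering voltage and frequency together (DVFS) saves far more than slowing the clock alone, while only full shut-off eliminates leakage.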

These early-stage power management decisions can be influenced by a physical floorplan, but they do not—and cannot—comprehend the final partitioning at P&R, where accurate power estimates are made. Consequently, the power management scheme can be modified and must be re-verified after P&R.

Clearly, the power management scheme is a moving target, and requires iterative design optimization, verification, and re-verification at every stage of the design flow—including architecture, RTL implementation, and physical design.

Additional complications

Implementing any scheme is often subject to significant additional complications, such as the impact of IP use and re-use and of DFT circuitry. A given IP block can implement several functions, each of which can be, or must be, switched off independently of the others, for example by adding interfaces to the power-control registers or signals. These functions can be problematic in the case of third-party IP, where often only black-box information about its behavior is available. In any case, the verification challenge now includes re-verifying the redesigned IP block(s) as well as verifying the power management circuitry.

In order to minimize test pattern count and test time, conventional DFT assumes that the whole chip operates with all functions up and running. That is how it operates not only on the tester, but also in field diagnostics. With power-aware design, DFT circuitry must now mesh with the design’s power management scheme in order to avoid excessive power consumption and unnecessary yield loss at final test.

Power-Aware Verification Requirements

Functional analysis, optimization, and verification throughout the design flow, complicated by inadequate visibility of third-party IP white-box functionality, mandates the following five principal requirements for implementing and verifying a low-power scheme:

  • Sufficiently accurate power estimates using representative waveforms, both pre- and post-route
  • Accurate visibility and analysis of the white box behavior of third-party IP prior to its modification and reuse
  • Deployment and ongoing optimization and verification of appropriate power reduction techniques, both pre- and post-integration
  • Exhaustive functional verification at the architectural and RT levels, both before and after the deployment of power optimization circuitry
  • Verification of hardware functionality compliance with software control sequences

The first requirement can be addressed with commercially available tools that use simulation and formal methods. The rest of this section deals with the remaining requirements.

As previously indicated, the power management scheme starts at the architectural level, so any available architectural features such as communication protocols must first be verified (see Figure 1).

Figure 1: Ongoing power-aware optimization and verification

In the subsequent functional implementation (RTL) flow, low-power management constructs are introduced at different phases in the SoC development, depending upon the data available and the optimizations required. Taking the deployment of power domains as an example, verification must ensure that:

  • Normal design functionality is not adversely affected by the addition of power domains and controls. “Before and after” checking is critical.
  • A domain recovers the correct power states at the end of the power-switching sequence, and generates no additional Xs at outputs or in given block signals.
  • It achieves a high level of coverage of power-up/power-down events, which are very control-intensive operations.
  • Switching off a power domain does not break connectivity between IP blocks.

Therefore, taking the RTL model verified prior to the insertion of power management circuitry as the “golden” reference model, power-aware verification requires a combination of:

  • Architecture-level verification
  • IP white-box functional visualization and analysis
  • Exhaustive functional verification
  • Sequential equivalence checking
  • Control and Status Register (CSR) verification
  • X-propagation analysis
  • Connectivity checking

Limitations of Traditional Power-Aware Verification

Various tools and approaches are used for power-aware analysis and verification. This patchwork of tools and approaches clearly provides limited analysis and verification capability, and demonstrably achieves inadequate QoR. Automated structural analysis and limited, manual functional analysis can identify potential opportunities for the use of power management circuitry. Such analysis can assure consistency between the RTL design and the UPF/CPF specification, but cannot verify design correctness. At the architectural level, power analysis usually is performed manually with spreadsheets.

Power-aware simulation is used at the RTL, but, like conventional simulation, is not exhaustive. This situation is exacerbated by the state space explosion resulting from the insertion of complex power management schemes, which significantly degrades simulation performance. In addition, simulation fails to systematically avoid X optimism and pessimism.
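
X optimism and pessimism can be made concrete with a tiny three-valued-logic model. The sketch below is a simplified illustration of the failure modes, not the semantics of any particular simulator:

```python
# Three-valued (0/1/X) sketch of X-optimism vs. X-pessimism (illustrative).
X = "x"

def and3(a, b):
    if a == 0 or b == 0:
        return 0          # a controlling 0 resolves the output despite an X
    return X if X in (a, b) else 1

def mux3(sel, a, b):
    if sel == X:
        return a if a == b else X   # returning X when a == b would be pessimistic
    return a if sel == 1 else b

def rtl_if(cond, then_val, else_val):
    # Verilog-style `if` treats X as false: a classic source of X-optimism.
    return then_val if cond == 1 else else_val

print(and3(X, 0))      # 0: the X cannot influence the result
print(mux3(X, 1, 1))   # 1: both data inputs agree, so the X select is harmless
print(rtl_if(X, 1, 0)) # 0: optimistic; real hardware might drive 1
```

Formal analysis avoids both traps by reasoning about all values an X could take, rather than picking one as simulation must.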

Power-related DRC can enable limited power integrity analysis at the gate level.

Meeting Power-Aware Verification Requirements with JasperGold Apps

The JasperGold power-aware verification flow comprehensively meets power-aware verification requirements with the requisite QoR.

The front-end of the flow is the JasperGold LPV App (see Figure 2), which automatically creates power-aware transformations and automatically generates a power-aware model that identifies power domains, the power supply network and switches, isolation rules, and state retention rules. It does so by parsing and extracting relevant data from the UPF/CPF specification, the RTL code, and any user-defined assertions. It then automatically generates assertions that are used by other JasperGold Apps to verify that the power reduction and management circuitry conforms to the UPF/CPF specification and does not corrupt the original RTL functionality.

Figure 2: JasperGold power-aware verification flow

Power-aware model

The resulting power-aware model enables the analysis and verification of a wide range of power management characteristics, for example:

  • Power-domain correctness, such as the absence (or otherwise) of clock glitches and correct operation of reset logic
  • Power state table consistency, analyzing all possible power-state transitions and detecting illegal ones
  • Retention cell verification, validating the integrity of saved data in all power states
  • Power-supply network connectivity, to detect power intent errors made when defining the power-supply network
  • Power-aware correctness, ensuring equivalence between a power-aware design and the original RTL when all power domains are continuously on
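
As an illustration of the power-state-table check, the idea can be reduced to a toy trace monitor. The states and the legal-transition table below are hypothetical stand-ins for what a real UPF power state table would define:

```python
# Toy power-state-table consistency check; the states and legal-transition
# table here are hypothetical, not drawn from a real UPF specification.

LEGAL = {
    ("ON", "RETENTION"), ("RETENTION", "ON"),
    ("ON", "OFF"), ("OFF", "ON"), ("RETENTION", "OFF"),
}

def illegal_transitions(trace):
    """Return (from, to) pairs observed in the trace that the table forbids."""
    return [(s, d) for s, d in zip(trace, trace[1:])
            if s != d and (s, d) not in LEGAL]

trace = ["ON", "RETENTION", "ON", "OFF", "RETENTION"]
print(illegal_transitions(trace))   # [('OFF', 'RETENTION')]: must power up first
```

Where this sketch only checks one observed trace, the formal analysis enumerates every reachable transition and flags any that the specification does not allow.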

LPV-generated assertions

Examples of assertions automatically generated by the JasperGold LPV App include:

  • Ensure that a power domain clock is disabled when the domain’s power supply is switched on or off
  • If a power supply net has resolution semantics, there is never more than one active driver
  • Ensure that the power supply of retention logic is on when the value of an element is restored from that logic
  • Whenever a power domain is powered down, all the isolation conditions related to this power domain are true before, during, and after power shut-off
  • No signal is isolated twice with contradictory clamp values
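
The first of these can be pictured as a simple trace-level monitor. The Python sketch below is a simplified rendering of the property for illustration; the LPV App itself emits formal assertions, not runtime checks like this:

```python
# Trace-level rendering of the first assertion above (a sketch only):
# whenever the domain's power switch toggles, the clock must be gated off.

def clock_gated_during_switch(samples):
    """samples: iterable of (clock_enable, power_on) booleans per cycle."""
    prev_power = None
    for cycle, (clk_en, power_on) in enumerate(samples):
        switching = prev_power is not None and power_on != prev_power
        if switching and clk_en:
            return f"violation at cycle {cycle}: clock active during power switch"
        prev_power = power_on
    return "pass"

trace = [(True, True), (False, True), (False, False), (True, False)]
print(clock_gated_during_switch(trace))  # pass: clock gated before the ON->OFF edge
```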

The JasperGold Apps approach

In contrast to the general purpose, all-in-one formal verification tool approach, the JasperGold Apps approach enables step-wise adoption of formal methods. Each JasperGold App provides all of the tool functionality and formal methodology necessary to perform its intended application-specific task. This approach requires design teams to acquire only the expertise necessary for the particular task at hand, and at a pace that suits the project requirements and user expertise.

Providing an empirical measure of the effectiveness and progress of formal verification, the JasperGold Design Coverage Verification (COV) App takes in the user’s RTL, assertion properties, and constraint properties, and outputs textual and GUI-based reports showing how well aspects of the DUT were verified by the formal analysis. These reports show which lines of code (“statement coverage”), conditional statements (“branch coverage”), and functional coverage points were exercised.

The JasperGold Formal Property Verification (FPV) App performs exhaustive verification of (a) all RTL functionality before the insertion of power management circuitry, and (b) the power management circuitry itself. For example, it analyzes and verifies power sequencing both during block design and after integration, including sequence safety such as clock deactivation, block isolation, and power down, as well as state correctness.

The JasperGold Control and Status Register (CSR) Verification App verifies that the design complies with the CSR specification, and that the value read from a register is always correct, both before and after power management insertion.

The JasperGold Sequential Equivalence Checking (SEC) App verifies the equivalence of blocks before and after power management circuitry is inserted, as well as those blocks subject to late-stage modification. In addition, it verifies that memory optimizations do not compromise functionality. For example, where a memory is replaced by two low-power memories with a wrapper, the JasperGold SEC App verifies that the two memory models are equivalent to the original memory.
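
That memory-splitting example can be illustrated with a toy model: a flat reference memory and a wrapper that steers each access into one of two half-size banks. The sketch below merely samples random traffic, whereas formal sequential equivalence checking proves equivalence for all traffic; the class names and sizes here are hypothetical:

```python
import random

# Toy model of the memory-splitting example: a flat memory vs. a wrapper
# around two half-size banks, exercised with the same random traffic.

class FlatMem:
    def __init__(self, size): self.data = [0] * size
    def write(self, addr, val): self.data[addr] = val
    def read(self, addr): return self.data[addr]

class SplitMem:
    """Wrapper steering each access to one of two low-power banks."""
    def __init__(self, size):
        self.half = size // 2
        self.banks = [[0] * self.half, [0] * self.half]
    def write(self, addr, val): self.banks[addr // self.half][addr % self.half] = val
    def read(self, addr): return self.banks[addr // self.half][addr % self.half]

SIZE = 256
golden, dut = FlatMem(SIZE), SplitMem(SIZE)
for _ in range(10_000):  # random sampling; formal SEC covers all traffic
    addr = random.randrange(SIZE)
    if random.random() < 0.5:
        val = random.randrange(256)
        golden.write(addr, val); dut.write(addr, val)
    assert golden.read(addr) == dut.read(addr), f"mismatch at {addr}"
print("models agree on sampled traffic")
```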

The JasperGold X-Propagation App (XPROP) analyzes and verifies Xs at block outputs caused by power-down, and compares differences in output X behavior before and after application of the UPF/CPF specification.

The JasperGold Connectivity (CONN) Verification App exhaustively verifies RTL connections at the block and unit level, and after integration.

Conclusion

The Cadence JasperGold power-aware formal verification flow enables exhaustive analysis and verification of power-aware designs, achieving QoR superior to those designs produced by the traditional ad hoc patchwork of tools and approaches. Starting with the JasperGold LPV App to automatically generate a power-aware model and the appropriate assertions, the flow leverages an expandable range of additional JasperGold Apps, each targeted at a particular task. Design teams can deploy JasperGold Apps as needed and acquire expertise in a low-risk, step-wise fashion.

References and Further Information

[1] 1801-2013 – IEEE Standard for Design and Verification of Low-Power Integrated Circuits. Available at http://standards.ieee.org/findstds/standard/1801-2013.html.

[2] Si2 Common Power Format (CPF) Specification. Available at http://www.si2.org/?page=811.

To learn more about Cadence JasperGold Apps, contact your local sales office at http://www.cadence.com/cadence/contact_us

Contact Information

Cadence Design Systems, Inc.

2655 Seely Avenue
San Jose, CA, 95134
USA

tel: 408.943.1234
fax: 408.943.0513
www.cadence.com

SoC Makers Recognizing the High Stakes in Low Power

Thursday, May 7th, 2015

Q&A with Sonics CTO Drew Wingard

The opportunities afforded by industrial, medical, consumer and mobile sectors—especially as the IoT grows—are closely entwined with power management.

What time is it? Dr. Drew Wingard, CTO of on-chip networks maker Sonics, says that had he not powered off his smart watch before boarding an international flight, the device would not have been able to answer that question upon arrival—it couldn’t hold time for more than a day. During a recent EECatalog interview, Wingard spoke about the factors that led to the “crazy” idea of a smart watch that can’t tell time, where the opportunities to conserve power are to be found, and how battery thresholds enable markets. Edited excerpts of the interview follow.

EECatalog: What makes for a good understanding of power partitioning decisions?

Wingard, Sonics: It certainly has a lot to do with trying to understand the use models, in many cases, not just of the chip, but also of the system in which the chip is going to be used. This is much more difficult to do when building a chip that is truly general-purpose.

Something interesting about the SoC space is that we don’t see that many chips that are designed in a truly general-purpose way. In many respects we are fortunate—in others, cursed—that SoCs tend to be pretty application-specific. Say I am building a chip that is going to be the application processor inside a mobile phone. That [choice] defines a set of use cases for the end phone that we can take advantage of in trying to make these choices. Or say it’s going to be a TV processor for adding Internet capability: that would have another set of use cases associated with it. The best SoC architects build libraries of knowledge over time about what’s required of the systems the chip is going to plug into.

It is relatively rare that you get a new company trying to attack an SoC application these days. Most companies work with what they’ve learned over the years of integrating ever-larger system chips, making most designs at some level evolutionary. A company may make radical changes in its architecture, but this occurs within the framework of understanding this library of knowledge about what the likely use models are going to be.

EECatalog: Has the answer to what makes for the most effective power management strategy for a multicore SoC changed?

Wingard, Sonics: Before leakage was a problem, when the transistors used to shut off all the way, the biggest user of power in the chip was switching, and power management had everything to do with clock management.

That’s still a very important technique today, but leakage has become such an issue, there is now a whole host of additional techniques to deal with it. At the level of the underlying silicon, that has always been the case, but of course the manufacturing process doesn’t always give us the same transistors on every chip. Sometimes we are at the corner where the transistors are relatively slow. For power [considerations] slower transistors are better because they have lower current. And that means their leakage current tends to be low.

For speed, of course, fast transistors are better, but fast transistors are the ones that tend to have the highest leakage. One technique that can be deployed, although it is not very friendly for FinFETs, is to change the voltage of the well contact. You can change the electrical characteristics of a transistor by using the fourth terminal that people rarely think about. There has been a lot of work done to try to optimize the combination of frequency and leakage by playing with the voltage on that terminal. That [playing with the voltage on the terminal] was something that was very attractive for eight of the past 10 years, but now, as the focus has shifted to 3D transistors in the form of FinFETs and others, it turns out that fourth terminal is not available for us to play with any more, so we are using other techniques.

Another technique that has become very common is to optimize the supply voltage of a circuit to match the frequency at which it needs to operate, and so people talk about techniques like dynamic voltage and frequency scaling.

If software can determine that the total loading on your computer is lower than what is needed at peak, it can slow down the clock, which helps save some power, but you can save even more power and a lot of leakage by reducing the supply voltage at the same time. By switching off whole banks of circuits, instead of just stopping their clocks when they are idle, you can cut the power to them and eliminate leakage altogether—this is another technique practiced more frequently today.

The technologies I’ve just noted have substantial implementation costs associated with them, so designers have to be careful about where and when to apply them. This is where knowledge of the use cases, and of the role the different IP cores in the chip play in those use cases, becomes important.

EECatalog: What are the characteristics you would find in companies that are successful in creating ultra-low power solutions?

Wingard, Sonics: While we have been talking about power as being a design imperative for 10 years now, it is still the case for most design teams that power is not a primary design concern—that clearly does not make sense, but it is still the reality. So the people who tend to do the best designs for low power are those who think about low power continuously through their architecture, design and verification processes. There are design teams who do not wait for a characterization tool to tell them after they are done how much power they used, but who [instead] design low power in as part of the goal and continuously track how well they are doing versus their power goal.

Now, there are limitations in the ability of—and certainly in the use of—EDA tools to help in that process. The state of the art in EDA tooling around this is not anywhere near as advanced as it is around the design and verification challenge.

EECatalog: Why?

Wingard, Sonics: Because there has not been as much effort. It boils down to where the semiconductor designers are willing to spend their dollars—it’s not that EDA companies are not willing to build those tools, I think that the market for those tools has not yet fully emerged. There have been companies trying, and there have been some companies who have failed trying to provide tools for doing better architectural low-power analysis. It is not because their solutions didn’t work, but because for most organizations designing for low power remains an afterthought.

EECatalog: What will cause it not to be an afterthought?

Wingard, Sonics: One factor could be this crazy idea of these smart watches that you have to plug in everyday. I had a watch that could not hold time for more than a day, and I had to power it off before I got on an international flight because I knew that if I left it on, it would not know the time when I landed at my destination—that is kind of crazy, right?

We’ve seen that battery lifetime thresholds enable markets. [Say] I have a great wireless communications device that has to be plugged in all the time—that kind of defeats the purpose of having it be wireless.

There have been domains for many years where there was incredibly great work done around low-power design. The original digital watch industry did some fantastic work. They built chips that used transistors in strange regions of operation that normal digital people don’t think of because they could do it and make digital watches that lasted for a couple of years on a battery. Now the question is what part of this mammoth opportunity that we know by the umbrella term “Internet of Things” will drive people to look at the low power issue in a new way?

Certainly, in the first generation of chips targeting wearable applications, we saw companies who had largely failed at designing cell phone processors trying to dumb down their chips for some of these imaging-capable wearable devices with graphic displays—those have not worked very well. You can make a strong argument that form factor and battery life are two reasons why they haven’t worked very well.

Medical wearables is an area where there are considerable privacy and security concerns, but some people have discussed pretty publicly that maybe the time when wearables take off is when the insurance company determines that it saves them money to buy you something to monitor your health.

EECatalog: Where are you seeing the need for better communication among various interest groups so as to achieve a “rising tide lifts all boats” effect with regard to ultra low power?

Wingard, Sonics: Large semiconductor companies [have already] worked very productively with the EDA companies to come up with a set of power intent formats. IEEE 1801 allows for the low-level design of things that have multiple power islands and are done in a fashion that is electrically safe. While there will probably be some small enhancements over the years, this important and productive work is largely done.

Today, it is not about the implementation of these techniques, it is about helping design teams understand how many power islands should they create, how many clock domains should they create. It goes back to some analysis challenges.

Of course most SoCs are built out of a lot of reused components which are licensed in from the outside, and then some new logic, and one thing that we are almost completely bare on is: how do we come up with some standards for the interfaces that signal “hey, I am idle, you might want to shut me down” or “hey, I want to shut you down, are you okay with me doing that?” There are some basic communication interfaces that make dynamic power control much easier to implement when you are looking at an SoC that may have 150 blocks on it. Dealing with vendor- or company-proprietary interfaces or trying to figure out how to add an interface around a block that was designed without this kind of cycling ends up being quite a daunting task for the person trying to integrate an SoC, and I think as we move forward we should start to see some real work in that area.

EECatalog: Should we anticipate announcements from Sonics on the ultra-low power front?

Wingard, Sonics: I would hope so. First we have to make sure that we design our on-chip networks and our other IP products so that they use as little power as we can by aggressively employing conventional techniques. And I think our customers would report that [compared to] competing solutions we use between 40% and 80% less power for the same amount of work.

Our announcement last year that we had been awarded a patent on some active power management technology hints at an area in which we are very interested. I would encourage your readers to watch for more information from Sonics in this area!


Anne Fisher is managing editor of EECatalog.com. Her experience has included opportunities to cover a wide range of embedded solutions in the PICMG ecosystem as well as other technologies. Anne enjoys bringing embedded designers and developers solutions to technology challenges as described by their peers, as well as insight and analysis from industry leaders. She can be reached at afisher@extensionmedia.com.
