Logic Relaxes: Q&A with Eta Compute

Delay insensitivity and other steps to power saving.

Editor’s Note: Teaching logic to be less sensitive has its benefits, as EECatalog learned when we spoke with Paul Washkewicz. In June Eta Compute announced a reference design for its EtaCore Arm® Cortex®-M3, and Washkewicz, the company’s co-founder and vice president, spoke with us about Eta Compute’s delay insensitive asynchronous logic, or DIAL, design IP and related topics. “Process variations, temperature variations, voltage variations—we are delay insensitive to all those perturbations,” Washkewicz says. He adds, “The logic was specifically created to be insensitive to those and the delays that are caused by those uncertainties.” Edited excerpts of the interview follow.

EECatalog: What’s something our readers should know about the EtaCore Arm® Cortex®-M3 reference design?

Paul Washkewicz, Co-Founder and VP, Eta Compute

Paul Washkewicz, Eta Compute: Some of the development kits out there are overdesigned, with three or four different types of energy harvesting and three or four different applications, which makes for a very interesting development environment. But we found that if you want to spend time on the application and not on the hardware, and that hardware is large and bulky, you have to “squint” quite a bit to visualize how it is actually going to get deployed in the field as a sensor, whereas ours is much closer to the form factor that is required.

We took a different approach and developed a very small sensor node, about an inch by two inches, and it includes the energy harvesting. There’s a solar cell, the Arm core, ADC, RTC, DSP, and sensors, and a connection to your computer where you can download your programs onto that little board. You can develop an application on it, and, not only that, you can also envision it being deployed as it is.

Figure 1: A 12-bit, 200-kilosample-per-second A-to-D converter developed by Eta Compute. It runs at 1 or 2 microwatts.

EECatalog: On the “history of digital logic” timeline, where does Eta Compute come in?

Washkewicz, Eta Compute: What we have done at Eta Compute is create a new logic design methodology. A long time back, when digital logic was just coming out, companies like Synopsys created design compilers that take RTL and convert it into gates optimized for power, performance, and area. We came up with a new paradigm for low-power embedded systems that extends standard tools to generate digital logic that runs at much lower voltage and therefore much lower power.

And the basics of running at lower voltage included creating a logic library that can operate, in our case, all the way down to 0.25 volts and which targets power-constrained embedded systems.

When we set out to create this technology, our thinking was, “since we’re creating something revolutionary, we don’t want to be revolutionary in too many areas,” so we worked with Arm to get an Arm license for the Cortex-M3 and subsequently went to an Arm Cortex-M0.

We needed to change part of the logic design process, but at the same time, we wanted to work with a standard platform that everyone knows well. Because once we’re done converting the Arm core into our technology, it runs like an Arm, and it has to use the familiar Arm development environment and the Arm tools. If an engineer has software for the Arm Cortex-M3 already, we want that to run on our technology in exactly the same way.

EECatalog: How does that logic change look to the engineer, for example?

Washkewicz, Eta Compute: Take an engineer working on something for a coin cell battery or very small energy-harvesting solar cells, using an asynchronous Arm Cortex-M3. Say he has a program that does very simple sensor fusion to calculate location, perhaps, running on his standard design at 1.2 volts and burning a certain amount of power. What he could do is start turning the voltage down, and he would notice power savings running that same sensor fusion algorithm. As long as it’s fast enough for his application, he can lower the voltage continuously down to 0.25 volts, and that software program will run the application the same as it always has; it just might run more slowly.
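
To make the tradeoff concrete, the sketch below applies the textbook first-order model of CMOS switching power, P ≈ C·V²·f, to the scenario Washkewicz describes. The capacitance, clock rates, and resulting figures are illustrative assumptions, not Eta Compute measurements; the point is only that supply voltage enters the power equation quadratically, so dropping from 1.2 volts toward 0.25 volts buys large savings even before the slower clock is taken into account.

```python
# Illustrative sketch only: a first-order CMOS switching-power model, not Eta
# Compute's measured data. All numbers below are assumptions for this example.

def dynamic_power_watts(c_eff_farads, v_volts, f_hertz):
    """Switching power of digital logic, P ~ C * V^2 * f."""
    return c_eff_farads * v_volts ** 2 * f_hertz

C_EFF = 20e-12   # assumed effective switched capacitance: 20 pF
nominal = dynamic_power_watts(C_EFF, 1.2, 10e6)   # 1.2 V at an assumed 10 MHz clock
scaled = dynamic_power_watts(C_EFF, 0.25, 1e6)    # 0.25 V; the clock slows (assumed 1 MHz)

print(f"1.2 V, 10 MHz : {nominal * 1e6:.1f} microwatts")
print(f"0.25 V, 1 MHz : {scaled * 1e6:.2f} microwatts")
# The same sensor-fusion code runs in both cases; at the lower voltage it simply
# runs more slowly while switching power falls roughly with V^2 * f.
```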

EECatalog: And how does power savings change how a real-world example might look?

Washkewicz, Eta Compute: Embedded systems, and especially the IoT, are looking for significantly lower power. For example, in the medical world there are real-time location services at hospitals such as Johns Hopkins. At such hospitals 25,000 or 30,000 sensors could be installed to sense where the patients are, where the doctors and nurses are, where the medicines are, where the heart rate monitors are, where all this equipment is. And as people and things move around the hospital, you have a centralized computer program or application that allows somebody in charge to know where these valuable assets are. Logic that conserves energy changes the situation where, say, 30,000 sensors are out there but the batteries only last a week or two, so you must hire staff to run around figuring out which batteries are low and changing them. And on top of that you generate a lot of hazardous waste.

EECatalog: I interrupted what you were saying in answer to the digital logic timeline question, so let’s return to that.

Washkewicz, Eta Compute: With the logic change, we got power consumption down into the low single-digit microwatt range, and that is the lowest power Arm core you can find anywhere, I think by a significant margin. But if you want to do an SoC for an embedded system, you find that the analog to digital converter might dominate your power consumption.

That led our analog team off to develop what we believe is the world’s lowest power A to D converter (Figure 1). It is a 12-bit, 200-kilosample-per-second converter that also burns in the low single-digit microwatts, [so] now you have an Arm Cortex-M3 running at low single-digit microwatts, and you have an A to D converter at 200 kilosamples per second and 12 bits running at 1 or 2 microwatts. Now you are starting to get something that is really, really low power.
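
A quick back-of-the-envelope conversion puts those numbers on a per-sample basis. The arithmetic below uses only the figures quoted above (200 kilosamples per second at 1 to 2 microwatts); the per-conversion energy is derived for illustration, not a published specification.

```python
# Back-of-the-envelope arithmetic from the quoted figures (12-bit, 200 ksps,
# 1 to 2 microwatts); the per-sample energy is derived, not a published spec.

SAMPLE_RATE = 200e3            # samples per second
for power_uw in (1.0, 2.0):    # quoted 1 to 2 microwatts
    energy_pj = (power_uw * 1e-6 / SAMPLE_RATE) * 1e12
    print(f"{power_uw:.0f} uW at 200 ksps -> {energy_pj:.0f} pJ per 12-bit conversion")
```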

And that led us to the green blocks seen in Figure 1. Some folks wanted encryption. You always have a real-time clock running. You may want a DSP. Those sorts of requirements caused us to turn to our engineering teams, who then converted what Figure 1 shows in green, using the same logic technology, making it possible to scale these blocks down from the typical 1.2 volts to 0.25 volts, too.

What’s shown at the top of the bus in Figure 1 is all dynamically voltage scalable technology converted from standard RTL to our form of asynchronous logic.

The final step resulted from our asking, “Where do we get very highly efficient power management for 0.25 volts that scales all the way up to 1.2 volts?” It didn’t exist, so our analog engineers designed power management that can supply the various rails to the SoC, including the important one that scales from 0.25 volts all the way up to 1.2 volts. We get 85 to 95 percent efficiency generating that 0.25 volts.
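
To see what that converter efficiency means in practice, the sketch below assumes a hypothetical 5-microwatt load on the 0.25-volt rail and applies the quoted 85 to 95 percent efficiency; the load figure is an assumption chosen only to make the arithmetic concrete.

```python
# Minimal sketch of what 85-95 percent conversion efficiency means for the
# 0.25 V rail. The 5-microwatt load is a hypothetical number for illustration.

LOAD_UW = 5.0                      # assumed SoC load on the 0.25 V rail, microwatts
for eff in (0.85, 0.95):
    source_uw = LOAD_UW / eff      # power drawn from the harvester/battery side
    overhead = source_uw - LOAD_UW
    print(f"{eff:.0%} efficient: {source_uw:.2f} uW drawn, {overhead:.2f} uW lost in conversion")
```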

For everything Figure 1 shows in green, every spare bit of power savings has been squeezed out of those blocks, with the result being an SoC that is world class in power consumption.

EECatalog: Are you seeing hesitancy among potential customers to embrace this disruption?

Washkewicz, Eta Compute: When you do something radically different, you get the early adopters, then possibly the technology leaders, and then the followers. We try to go right after the early adopters, and what helps that effort is that the design has been formally verified from an Arm synchronous implementation to our asynchronous implementation. In addition, all the test benches exist, and we have silicon evaluation boards on which they can test software and run it through its paces. For an early adopter, that makes quick work of deciding that, yes, this is an SoC that operates like an Arm SoC, and we can scale the voltage and get the power savings. We can demonstrate that right away, right on the evaluation board.

Ultimately, these are transistors. We are not changing the process at TSMC, where Eta Compute has taped out in 90nm LP and 55nm ULP. There are no modifications, and the transistors are the same transistors at TSMC; our logic gates are just configured differently. The early adopters can verify that the code operates the way they want. They can run all their test benches so that they can quickly prove they get the benefit of power savings in their familiar development environment.

EECatalog: Could you speak a bit about the collaborative relationship with Arm?

Washkewicz, Eta Compute: We went to Arm with this idea about two years ago. Arm has a lot of licensees, so we are one of many, but we are doing something that is unique. Arm has done some development work internally along these lines, with a different method but along the [same] line of lowering the voltage.

So they were reasonably interested to see our outcome and supported us as formal verification was successfully completed. If you want to call it an Arm Cortex-M3, it has to be formally verified as an Arm product afterwards. So as long as we overcame that hurdle, they were fine with it, and they seem pretty excited about it now.

As Arm leadership has indicated, they are very interested in Arm-based sensors being deployed in billions of units, and this is one way that deployment can happen. If you have things that can [allow] batteries to last 5x or 10x longer or can run off energy harvesting, then that is the way that billions of sensors are going to get deployed. It has been a very good relationship with Arm. The good results we achieved helped, and it supports their company direction for IoT.

EECatalog: Are you agnostic with regard to cores and processors?

Washkewicz, Eta Compute: We definitely have had customers ask us about some of the other cores, from the 8051 to MIPS or even RISC-V, for example. But as a small company just coming out with its technology, it is very beneficial for us to work with Arm, due to its large customer base. No matter which core you pick, or which version of the core, you’ve got a lot of engineers you can work with to get your technology to the marketplace.

EECatalog: Where do you see energy harvesting running into challenges?

Washkewicz, Eta Compute: One of the interesting things is that transmitting the data uses a lot of power relative to local computation. You compute things locally and reduce the data that you have to transmit. So, one of the challenges for IoT is the transmission.

LoRaWAN and Narrowband IoT [NB-IoT], which comes more from the cellular companies, are protocols that are much thinner and lighter, with lower data rates, [such that] you can have low-power wide area networks for the Internet of Things. That steep challenge is being addressed, but these networks are not rolled out worldwide yet. The RF protocols and the technologies for the WAN remain one of the challenges for connecting the IoT.

Many folks, including us, are working on lowering power and improving the performance of the devices themselves, but we are looking for more deployment of the wireless piece of it.

EECatalog: Please put Eta Compute in the context of an overall industry trend or trends.

Washkewicz, Eta Compute: There is a lot of buzz in the industry about machine learning, and this is dominated by big iron, big computers, running deep neural networks. But you don’t want to send all the data back to the main computer and do big data analytics and machine learning back in the big data centers. It’s helpful if you have sensor nodes deployed or wireless networks deployed locally, and it would be nice to do some local learning there.

EECatalog: Why?

Washkewicz, Eta Compute: If they’re energy-constrained devices, like most of these small sensors are, then they don’t have a lot of power, so you get dominated by transmitting data back to the home office or the cloud in order to do machine learning. If you can do computation locally, it can be augmented by big iron, but you don’t have to transmit as much data back.

I think most folks would agree with that, but up until now one of the big issues has been that doing the calculation locally burned too much power, so you didn’t save anything. Now, however, because we have come up with this technology that is so low power, it does enable more localized computation.
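
The sketch below works through that transmit-versus-compute tradeoff with round numbers. The per-bit radio energy, per-operation compute energy, and payload sizes are all assumptions chosen for illustration rather than figures from Eta Compute or any particular radio; the point is simply that once local computation is cheap enough, shrinking the payload before transmission dominates the energy budget.

```python
# Illustrative sketch of the transmit-versus-compute-locally tradeoff. All numbers
# are assumptions chosen for illustration, not measurements from Eta Compute.

RADIO_NJ_PER_BIT = 100.0     # assumed energy to transmit one bit over a low-power WAN
MCU_PJ_PER_OP    = 20.0      # assumed energy per local arithmetic operation

raw_bits     = 8 * 1024      # assumed raw sensor payload: 1 KB
summary_bits = 8 * 32        # assumed payload after local processing: 32 bytes
local_ops    = 50_000        # assumed operations needed to reduce the data locally

send_raw   = raw_bits * RADIO_NJ_PER_BIT * 1e-9                      # joules
send_local = (summary_bits * RADIO_NJ_PER_BIT * 1e-9
              + local_ops * MCU_PJ_PER_OP * 1e-12)

print(f"Transmit raw data      : {send_raw * 1e6:.1f} uJ")
print(f"Compute, then transmit : {send_local * 1e6:.1f} uJ")
```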
