“…more AI at the endpoints.” Q&A with Brian Faith, QuickLogic



The battery-powered and power-sensitive devices at the IoT’s endpoints are the targets of QuickLogic’s recently launched AI initiative.

Editor’s Note: When company CEO Brian Faith spoke with EECatalog, he described the “cooperative ecosystem” that QuickLogic, along with General Vision, Nepes, and SensiML, is building to meet the need to “deploy more AI out at the endpoints.”

Faith wants AI deployment to be within reach even for individuals who are not data scientists. “Many times, AI deployment is really limited by that need to have a data scientist, that expert who can look at the data and know how to train the AI models. We are adding a layer of abstraction intended to let even those who are not data scientists deploy AI,” he explains.

An edited excerpt of our conversation follows; it includes the figures Faith referenced during the interview.

EECatalog: Is AI at the endpoint needed across the board?

Faith, QuickLogic: Not necessarily. If you look at Figure 1, for example, where the blue rectangles and red circles are nodes collecting sensor data, if that’s for a thermostat, the data itself does not change very rapidly. You can just throw the sets of data up in the cloud, and the cloud can determine, “Oh, it’s a little too warm, I am going to turn on the AC.” The signal goes to the cloud and back and turns on your AC, and none of that happens in real time. The amount of data being transmitted is almost insignificant. Even if you are just looking for that one green triangle, it is not very much data in the grand scheme of things.

But now there are a lot of devices with sensors that are generating a lot more data. And for reasons that could include conserving bandwidth, security, power consumption, data privacy, or any combination of those, you don’t want to send all that data back to the cloud.

Figure 1: The amount of data being generated by sensors is growing, and solutions are needed for applications that may not be latency-tolerant.

EECatalog: So one would want to send data to the cloud in some cases, but for other applications take advantage of resources that make AI at the endpoint possible?

Faith, QuickLogic: Yes, for example, the Amazon Echo that might be in your kitchen does not send everything back to the cloud. Rather, AI is taking place at the endpoint as Echo listens for the key word “Alexa.”

EECatalog: What makes it challenging to do AI at the endpoints?

Faith, QuickLogic: The IoT space is very fragmented. If you are not Alphabet or Facebook, you are not going to be investing in data scientists. You are going to want tools that allow you to look at the data, label the data, and build neural network models based on that data. And if you are an IoT OEM designing for smaller-volume, niche categories, you are going to want a good reference starting point to work from, so that you do not have to do a bunch of hardware engineering and software engineering just to get to the point of having a minimum product to start characterizing your data.

So, we are identifying some of the players in the space for endpoint AI and bringing them together in our QuickAI ecosystem to give designers a starting point.

EECatalog:  What are some of the details of that ecosystem?

Faith, QuickLogic: As noted in Figure 2, the companies with which QuickLogic has launched the ecosystem are Nepes Corporation, General Vision, and SensiML. Nepes has a 576-neuron chip, the NM500. NeuroMem, the neural network architecture in the NM500, is from General Vision.

Figure 2: The QuickAI Ecosystem

While NeuroMem is now in the NM500 and sold as a discrete neural network processor, your readers may not know that this General Vision technology was licensed by Intel and included in the Quark SE microcontroller that Intel launched in 2015 for its IoT initiative.

Spun off from the software group in Intel that supported that Quark SE processor, SensiML has the data analytic software to take data, label the data, train a model, and program the model down into the hardware. SensiML’s intense focus on data analytics in endpoint applications makes it a natural fit for our QuickAI ecosystem.

And QuickLogic has a multicore low-power microcontroller, with hardware accelerators for voice recognition and sensor fusion, as well as an embedded FPGA block for integration of other components.

EECatalog:  Saving power is one of the reasons you mentioned earlier for AI at the endpoints—how do those savings occur?

Faith, QuickLogic: The NM500 is a neural network processor with a parallel interface. That interface is nonstandard, which means you need some kind of programmable logic to connect the sensor subsystem and the microcontroller to the NM500, and FPGAs are great at doing that type of glue logic.

In the case of NeuroMem and the NM500, you may need to do the feature extraction outside of the NM500, and FPGAs are very good at doing all the different feature extractions these AI applications require.

A common feature extraction for a voice application, for example, is to compute mel-frequency cepstral coefficients (MFCC). You could do finite impulse response (FIR) filtering on time-series data or fast Fourier transform (FFT) processing. In the case of vision applications, you might be doing windowing. All of that can be done in software, but it is incredibly taxing from a MIPS and power-consumption point of view. If you can do any of those things in hardware accelerators like the embedded FPGA in our EOS S3, you can save up to 90 percent of the power for those functions. We have demonstrable evidence that that is true.
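To make that concrete, here is a minimal software sketch of the MFCC pipeline Faith mentions (window, FFT, mel filterbank, log, DCT); this is exactly the per-frame arithmetic a hardware accelerator can take off the CPU. The frame size, sample rate, and filter counts below are illustrative assumptions, not QuickLogic parameters.

```python
# Hedged sketch: a software MFCC front end in NumPy, illustrating the
# feature-extraction work that can be moved into hardware accelerators.
import numpy as np

def mel_filterbank(n_filters, n_fft, sample_rate):
    """Triangular mel filters mapped onto FFT bins."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sample_rate / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sample_rate).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising edge
        fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling edge
    return fb

def mfcc(frame, sample_rate=16000, n_filters=26, n_coeffs=13):
    """One frame of audio -> MFCC vector: window, FFT, mel, log, DCT."""
    n_fft = len(frame)
    windowed = frame * np.hamming(n_fft)
    power = np.abs(np.fft.rfft(windowed)) ** 2 / n_fft
    log_mel = np.log(mel_filterbank(n_filters, n_fft, sample_rate) @ power + 1e-10)
    # DCT-II matrix, keeping the first n_coeffs coefficients
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * n + 1) / (2 * n_filters))
    return dct @ log_mel

# Example: one 32 ms frame at 16 kHz (512 samples)
features = mfcc(np.random.randn(512))
```

Running this per frame, tens of times a second, is the steady MIPS load Faith is describing; moving the FFT and filterbank stages into fabric is where the claimed power savings come from.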

When you think about deploying AI from a system point of view, you can reduce power by 80 or 90 percent, move the functionality of two chips into one chip, and start from a platform that already has the software support.

EECatalog:  Are there any use cases you are able to cite at this time?

Faith, QuickLogic: Yes, Nepes has a semiconductor packaging and manufacturing floor and has used the NM500 to do optical inspection in its own factory. The QuickLogic EOS S3 SoC’s FPGA can connect to the NM500; you can build a camera interface using our embedded FPGA or SPI port. You can store data coming in from the camera for future use on the SD card that is connected to the Arm Cortex-M4. You can also store neural network models for programming the NM500s. And you might have different models, depending on what wafers are running through wafer inspection or what glass you are going to go on. You can load different models and reconfigure the system in the field.
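As a loose illustration of that field reconfiguration, here is a hedged sketch of the flow: trained knowledge files for each product sit on the SD card attached to the Cortex-M4, and the matching one is loaded into the NM500s when the line changes over. The file layout and the load_into_nm500() helper are hypothetical, not a documented QuickLogic or Nepes API.

```python
# Hedged sketch of field reconfiguration: pick a per-product NM500
# knowledge file from the SD card and stream it to the chip.
from pathlib import Path

MODEL_DIR = Path("/sdcard/models")  # assumed SD-card mount point

def select_model(product_id: str) -> bytes:
    """Pick the knowledge file trained for this wafer or glass type."""
    return (MODEL_DIR / f"{product_id}.nm500").read_bytes()

def load_into_nm500(blob: bytes) -> None:
    """Placeholder: on real hardware this would write the NM500's
    neuron contents through the EOS S3's programmable-logic bridge."""
    ...

# Reconfigure in the field when the line switches products
load_into_nm500(select_model("wafer_type_B"))
```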

Predictive maintenance is another area where costs can be saved with an approach that integrates the Nepes NM500, QuickLogic devices, and SensiML analytic software. On a manufacturing floor, one of the most expensive things is paying an electrician to come out and lay cable after the fact. But with little sensor modules, you can just go up with a magnet, stick one to the equipment, and know whether the equipment is operating correctly or whether some anomaly is happening that could cause it to break. So predictive maintenance in certain cases uses accelerometers to look at vibration data and know whether the equipment is operating well or not (Figure 3).

Our FFE [Flexible Fusion Engine] can take data from the accelerometer at very low power and send those data patterns to the NM500, which is programmed to know what a correctly operating vibration pattern looks like. And if it sees an anomaly, it can say, “something’s not right here,” at which point you can send an alert up to have a technician take action before something catastrophic happens.
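A minimal sketch of that monitoring loop follows, assuming FFT-magnitude signatures and a NeuroMem-style prototype-plus-influence-field match; the window size and threshold are illustrative assumptions, not values from QuickLogic or Nepes.

```python
# Hedged sketch of the Figure 3 flow: extract a spectral signature
# from each accelerometer window, then flag windows that match no
# stored "known good" pattern (prototype + influence field).
import numpy as np

def vibration_signature(window: np.ndarray) -> np.ndarray:
    """Normalized FFT magnitude of one accelerometer window."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)

def is_anomaly(signature, prototypes, influence_field=0.3):
    """Anomalous if no stored healthy pattern is within the field."""
    distances = [np.linalg.norm(signature - p) for p in prototypes]
    return min(distances) > influence_field

# Train on windows captured while the machine runs correctly...
healthy = [vibration_signature(np.random.randn(256)) for _ in range(8)]
# ...then flag live windows that match none of them.
if is_anomaly(vibration_signature(np.random.randn(256)), healthy):
    print("something's not right here")  # escalate to a technician
```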

Figure 3: Predictive Maintenance

EECatalog:  How have you approached the need designers will have for flexibility in getting AI to the endpoints?

Faith, QuickLogic: With our QuickAI HDK (Figure 4), we took a modular approach, putting the module on top of the bigger PCB. The devices on the module include additional flash memory, a magnetometer, a BLE module (Nordic nRF51822) so you can do a lot of prototyping over the BLE [Bluetooth Low Energy] Mesh network, two NM500s for the neural network processing, and a USB connection for debug and so on.

Figure 4: A Flexible Platform

We also have an expansion connector (Figure 4). With the expansion connector, if you want to use a different sensor, you can build a board and connect to it. So, say you want to use a CMOS image sensor. We can do a board with a CMOS imager and connect it to that expansion connector, so you can prototype a lot of different applications (Figure 4, left) that are not natively supported on QuickAI. With the microphones, you get the flexibility of using them for voice recognition, such as with Amazon Alexa and Tencent, or you could use them for different applications like sound. Sound can be a good data input for detecting anomalies, so you can build neural network inferencing around sounds as opposed to just voice.

With the SensiML data analytic software, you can build data models and inferencing models and apply them, for instance, to the EOS S3 SoC, as shown in the Figure 4 example.

EECatalog:  QuickLogic’s SoC, the EOS S3 shown in Figure 4, is already in production?

Faith, QuickLogic: Yes, our EOS S3 device is in mass production. It was designed for low-power, always-on sensing. It has a heterogeneous architecture, with an Arm Cortex-M4 for running an RTOS and general software code that is openly programmable by our customers and partners. The areas of hardware acceleration are the embedded FPGA, which we touched on already, and a block that makes it possible, through partner software, to recognize “Alexa,” or “Google,” or whatever the voice system’s trigger is.

EECatalog:  Anything else to add before we wrap up?

Faith, QuickLogic: To free up the Arm Cortex-M4 to do other things, and as a result lower the power consumption, we have a patented technology called the Flexible Fusion Engine (FFE), a hardware accelerator for lightweight DSP algorithms. The accelerator has a front-end sensor manager, so you can interface motion sensors, biometric sensors, and environmental sensors; all of the data extraction, the interface for the sensors, and some lightweight processing can be completely offloaded from the M4.
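To illustrate the division of labor Faith describes, here is a hedged behavioral model of the kind of lightweight, always-on DSP such an accelerator offloads: filter and decimate the sensor stream, and only wake the M4 when something crosses a threshold. This is an illustration of the concept, not FFE firmware, and all parameters are invented.

```python
# Hedged behavioral model of FFE-style offload: the sensor stream is
# low-pass filtered and decimated continuously; the main core sleeps
# until the pre-processed signal warrants waking it.
import numpy as np

def ffe_stage(raw_samples: np.ndarray, decimate_by=4, wake_threshold=1.5):
    """Decimate the sensor stream and decide whether to wake the M4."""
    kernel = np.ones(decimate_by) / decimate_by          # moving average
    filtered = np.convolve(raw_samples, kernel, mode="valid")[::decimate_by]
    wake_m4 = np.max(np.abs(filtered)) > wake_threshold  # event detect
    return filtered, wake_m4

features, wake = ffe_stage(np.random.randn(1024))
if wake:
    print("handing pre-processed data to the M4")  # M4 sleeps otherwise
```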

“General Vision and Nepes have been working closely for the last couple of years to design and manufacture a neuromorphic chip, the NM500. It has 576 identical cells (neurons), consisting of logic gates and memory, with each neuron connected in parallel to the other neurons.

Designed from scratch for true parallel computing and mass scalability, the NM500 can process and identify 576 different objects or patterns from the data—more than enough for IoT, wearable, hearable, or even many vision applications. We started mass-producing the NM500 in 2017, and it has been certified for industrial use.

To use the chip, we developed evaluation boards (NeuroShield), which follow the Arduino or Arm Mbed shield form factor. And in case you want to expand the number of neurons, we also have NeuroBrick, which can be stacked on the NeuroShield. Our BrilliantUSB solution makes it possible to develop intelligent applications quickly, as it plugs into any standard USB port so you can instantly begin developing. The Prodigy board is for those who want to develop vision applications.

Like a baby, the NM500 at first doesn’t understand the world. You need to train it with data. We provide generic knowledge-building software (Knowledge Studio) to create, evaluate, and verify the knowledge for the NM500. We use a show-and-tell method, so the only thing you need to do is point to and click on the data you want to train with. It runs on Windows, Mac, and Linux.

The NM500 core consumes 1.46 mW while running at 36 MHz. Average power consumption is 135 mW in Active mode. It is delivered in a 4.5 mm × 4.5 mm, 64-pin chip-scale package.

Image recognition, sound/signal recognition, video, data collection, and text and packet recognition are among the applications being targeted. However, the greatest demand is for vision recognition, and that is what we are emphasizing, working with many automotive and video surveillance companies, as well as in the toy and education markets.”

–Brian Faith, CEO, QuickLogic
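To give a sense of how the show-and-tell training described above maps to code, here is a hedged software analog of a NeuroMem-style radial-basis-function network, in which each committed neuron stores a prototype vector, a category, and an influence field. The 576-neuron capacity matches the NM500; the rest is an illustrative simplification, not General Vision’s implementation.

```python
# Hedged software analog of a NeuroMem-style RBF classifier:
# learn() commits a neuron per example; classify() reports the
# category of the closest firing neuron, or None for "unknown".
import numpy as np

class RBFNetwork:
    def __init__(self, capacity=576, max_influence=0.5):
        self.capacity, self.max_influence = capacity, max_influence
        self.neurons = []  # list of (prototype, category, influence_field)

    def learn(self, vector, category):
        """Show an example and tell the network its category."""
        if len(self.neurons) >= self.capacity:
            raise RuntimeError("all neurons committed")
        # Shrink fields of differing-category neurons so they no
        # longer fire on this example (a simplified learning rule).
        self.neurons = [
            (p, c, min(f, float(np.linalg.norm(vector - p))))
            if c != category else (p, c, f)
            for p, c, f in self.neurons
        ]
        self.neurons.append((np.asarray(vector, float), category,
                             self.max_influence))

    def classify(self, vector):
        """Category of the closest firing neuron, else None."""
        hits = [(float(np.linalg.norm(vector - p)), c)
                for p, c, f in self.neurons
                if np.linalg.norm(vector - p) < f]
        return min(hits)[1] if hits else None

net = RBFNetwork()
net.learn([0.0, 1.0], "ok")
net.learn([1.0, 0.0], "fault")
print(net.classify([0.1, 0.9]))  # -> "ok"
```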
