“Contextual Sensing” is Another IoT Area for ARM
Willard Tu is ARM’s director of embedded marketing—if it has anything to do with “embedded,” he’s got his finger on the pulse. In our recent discussion—on the heels of his chairing an IoT symposium at the recent Sensors Expo—he gives some insight on sensors. From MEMS microphones and accelerometers to CO2 detectors, the IoT, he says, can create smarter machines when the sensors are bolted to ARM®-enabled SoCs. Edited excerpts follow.
– Chris A. Ciufo, editor.
Chris “C2” Ciufo: On the heels of Sensors Expo, let’s talk about sensors, ARM® and the IoT.
Willard Tu, ARM: All right. I think the one thing that we really tried to focus in on with the Sensors Expo pre-con symposium track I chaired is this question: ‘What needs to happen to help enable the IoT market?’ The sensors are everywhere, and more are on their way [Figure 1]. There’s a gap in IoT discussions where everybody exclusively focuses on getting their embedded device connected to the Internet. Yet there’s more to it. We’ve talked in the past about how an appliance company, for instance, can make a connected device, but it may not actually provide an optimal return on investment.
This company might be putting anywhere from 10 to 20 dollars in bill of material [BOM] cost into a connected, IoT-enabled appliance such as a washing machine. In return, the appliance can monitor its motor and report on it in such a way that the manufacturer can say, hey, we know that motor’s going bad, and we can send out a service technician to fix it before it fails, so the customer isn’t inconvenienced with any machine downtime. Certainly that data can be sent to a cloud service so that preventive maintenance can be administered.
But that probably doesn’t necessarily mean [the appliance manufacturer] is increasing revenue. If anything, they might be cutting into their revenue, because now you’re doing more preventative maintenance versus replacement maintenance, right?
Executives might ask about putting in 10 to 20 dollars of BOM costs but seeing a reduction in service revenue. Customers may be happier—and the brand’s image may improve—but reduced revenue and increased BOM costs are not good.
C2: So you’re saying that adding those IoT sensors is a bad thing?
Tu: No: I’m saying we need to step back just a bit and build a case for the sensors doing more. I’m trying to report that appliances, whether it’s a coffee maker or a washing machine, need to aggregate more data than just the data the appliance itself needs. In other words, if it becomes a sensor node, it can listen to you, it can maybe see you, and it can potentially help the home build some contextual awareness of what is going on with the consumer. And now that becomes data you need to transmit, and another appliance’s data can correlate and aggregate with your appliance’s data to help provide that bigger picture of what’s going on with the consumer inside the home.
C2: Explain a bit more about this concept of a sensor node.
Tu: Think about it: your appliance becomes the sensor node and now it needs more sensors than just the one to monitor the motor. It needs to have a microphone, maybe a camera, and other sensors to help build this contextual awareness of what’s going on inside the home.
The camera could be used just to observe the consumer’s behavior, without needing the consumer to interact with it—if only to realize that the consumer’s in the same room with the appliance.
Somebody else can take that data and say, okay, now we know that the consumer is in the laundry room. What can we do with this information? I mean, this is where the creativity comes in, when you have more data, now suddenly there might be more services that can be brought to bear, because now you potentially understand the context of the consumer. For example, if the consumer is detected in the laundry room, is it a problem that a sensor notices the stove has been set on ‘boil’ for longer than five minutes? Possibly.
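To make the kind of cross-appliance rule Tu sketches here concrete, consider a minimal sketch in Python. Everything in it—the event types, the room names, the five-minute threshold—is hypothetical, just to show how aggregated sensor events can become a contextual decision:

```python
from dataclasses import dataclass

@dataclass
class SensorEvent:
    source: str      # e.g. "washer_camera", "stove" (hypothetical names)
    kind: str        # "presence", "burner_on", or "burner_off"
    value: str       # room name for presence events, setting for burner events
    timestamp: float # seconds since some epoch

def should_alert(events, now, boil_threshold_s=300):
    """Alert if the stove has been on 'boil' for over five minutes
    while the consumer was last seen somewhere other than the kitchen."""
    last_room = None
    boil_since = None
    for e in sorted(events, key=lambda e: e.timestamp):
        if e.kind == "presence":
            last_room = e.value            # latest sighting wins
        elif e.kind == "burner_on" and e.value == "boil":
            boil_since = boil_since or e.timestamp
        elif e.kind == "burner_off":
            boil_since = None
    return (boil_since is not None
            and now - boil_since > boil_threshold_s
            and last_room != "kitchen")
```

The point of the sketch is only that neither appliance alone can answer the question; the stove knows its burner state, the washer’s camera knows where the consumer is, and the alert emerges from correlating the two.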
C2: Do the sensor companies—ahem!—sense an opportunity in these use cases?
Tu: No, but there’s lots of opportunity [Figure 2]. Let’s talk about how the sensor companies are all predominantly focused on the mobile space. They have seen the rise of mobile over the past few years and are pondering: ‘If I can get into mobile, suddenly I’m going to have all these tremendous volumes, and I can manufacture one billion of these units and life will be great.’
C2: Certainly that has gone well with many companies, such as ST Micro, who recently reported that it was a two hundred million dollar company before shipping piles of MEMS…and now it is a billion dollar company, number one in the world in MEMS.
Tu: Right: the smartphone has enabled other markets. But what you have is a lack of expertise beyond the mobile market. In the smartphone space, if you make an accelerometer or if you make a microphone, you can plug it into a smartphone and the smartphone manufacturers have enough expertise to figure out how to make that sensor work in their system. The OEMs have the scale.
Some of these sensors have a digital output of some sort and they’re easy to integrate into a system, but there are some chemical sensors that don’t even have a digital output. Thinking back to my washing machine sensor platform example, the challenge is that a lot of these sensors are not applicable as-is to the broader IoT space, because an elevator manufacturer, for example, may lack an acoustical engineer who knows how to use microphone technology for voice recognition in an elevator. And the volume might be much smaller than mobile. You might be making 10,000 units, right?
That’s a far cry from over a million smartphone units. So to bring the expertise behind each individual sensor, whether it’s an image sensor, a microphone or an accelerometer, to these markets, you start to see some of these companies like ST or Bosch Sensortec provide what I call ‘turnkey algorithms’ or ‘turnkey capability,’ where now it’s the accelerometer that counts the steps instead of a processor elsewhere. In this ‘turnkey’ use case, a wearable guy can plug [a smarter sensor] into his system and easily create a new device. He knows the accelerometer counts steps, and he has no idea, nor any reason to know, how it works, but he knows it counts steps.
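As a rough illustration of the turnkey step counting Tu describes, here is a minimal pedometer sketch in Python. The threshold, sample rate, and peak test are illustrative assumptions, not any vendor’s algorithm; real in-sensor implementations are far more sophisticated, but the point is that the buyer never needs to know any of this:

```python
import math

def count_steps(samples, sample_rate_hz=50, threshold=11.0, min_gap_s=0.3):
    """Count steps by detecting peaks in acceleration magnitude.

    samples: list of (ax, ay, az) readings in m/s^2 (gravity included).
    A step is a magnitude sample exceeding `threshold` that is a local
    maximum, with at least `min_gap_s` since the previous step.
    """
    min_gap = int(min_gap_s * sample_rate_hz)
    # Magnitude is orientation-independent, so the wearer can hold
    # the device any way they like.
    mags = [math.sqrt(ax*ax + ay*ay + az*az) for ax, ay, az in samples]
    steps, last = 0, -min_gap
    for i in range(1, len(mags) - 1):
        is_peak = (mags[i] > threshold
                   and mags[i] >= mags[i - 1]
                   and mags[i] > mags[i + 1])
        if is_peak and i - last >= min_gap:
            steps += 1
            last = i
    return steps
```

When this logic runs on a small core inside the accelerometer package itself, the host system only ever sees a step count over a digital interface—exactly the black-box behavior Tu is describing.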
C2: We’ve touched a little bit on the large and growing automotive market. Any comments on sensors there?
Tu: Earlier I was talking about how appliances need to have more sensors so you can build contextual awareness of what’s going on inside the home. So that applies to the vehicle, as well, right? Because for a truly autonomous vehicle, for a car to be truly autonomous, it needs to anticipate what the driver is doing. In other words, if the driver just had a heart attack, the car needs to know that so it can take over.
Or if the driver has become distracted, it would need to know and take over. So all these advanced automobile capabilities, from self-driving cars to ADAS [safety systems], now almost mandate the need for things like heart rate sensors in the steering wheel or an image camera to monitor not just the driver, but the other passengers. Or a microphone to listen all the time to what is going on in the cabin. There are a lot of different use cases in there. For instance, you could do an enhanced airbag deployment if a car has crashed, because you know exactly where a passenger is sitting or how (s)he is oriented.
Do you remember the time when they had pickup trucks with the cut-off switch for the passenger airbag? Or, similarly, Grandma might have the seat pulled all the way to the front because she’s shorter or can’t see so well. However, the dual-stage airbag could do quite a bit of damage if you’re too close to the steering wheel. So that would be an opportunity for a car manufacturer to say, ‘I recognize this person, who is physically too close to the airbag at the time of the crash, let me deploy one stage, versus two stages.’
That’s the whole idea of the contextual awareness. It’s that added value service where the vehicle is serving the driver, before he even knows what he wants.
C2: I’m sold. What’s the roadblock in the way of your vision becoming a reality?
Tu: There are multiple challenges. The sensor space is diverse and very fragmented, and many of what I see as emerging IoT systems don’t have the ability to hire experts in each one of these sensor-type areas. As well, many of these sensors don’t offer the design friendliness that purely digital parts do.
So this is where the sensor companies may have to move up the value chain in a different way. Suppose a microphone sensor company decides to add value by performing automatic audio beamforming using its existing [twin] microphones. That’s an added value, because the IoT system designer doesn’t have to know how to write a beamforming algorithm, he just has to put the two products in there and [the sensor] is going to do it for you.
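The beamforming Tu mentions boils down, in its simplest form, to a classic delay-and-sum scheme: delay one channel so that a wavefront from the desired direction lines up across both microphones, then average. The Python sketch below is a simplified illustration with assumed mic spacing and sample rate, not any sensor vendor’s implementation:

```python
import math

def delay_and_sum(mic1, mic2, angle_deg, spacing_m=0.02,
                  sample_rate_hz=16000, speed_of_sound=343.0):
    """Steer a two-microphone array toward `angle_deg` (0 = broadside,
    i.e. straight ahead) by delaying one channel and averaging.

    mic1, mic2: equal-length lists of samples from the two mics.
    The sign of the delay depends on which mic the wavefront
    reaches first; negative angles flip it.
    """
    # Time difference of arrival between the two mics for this angle.
    tdoa = spacing_m * math.sin(math.radians(angle_deg)) / speed_of_sound
    shift = round(tdoa * sample_rate_hz)  # delay in whole samples
    out = []
    for i in range(len(mic1)):
        j = i - shift
        other = mic2[j] if 0 <= j < len(mic2) else 0.0
        out.append(0.5 * (mic1[i] + other))
    return out
```

Signals arriving from the steered direction add in phase while off-axis sound partially cancels. If this ran on a core embedded alongside the two microphones, the IoT designer would simply wire up an already-beamformed audio stream, which is exactly the added value Tu describes.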
My vision is that more and more sensor manufacturers must provide what I call turnkey capability because of a lack of expertise or design capability by these smaller design teams, with smaller volumes.
C2: What’s ARM’s strategy, then?
Tu: These little challenges are just another barrier to adoption for IoT. We’re trying to understand where ARM could participate in increasing the value of the digital compute capability inside the sensor. In this way, the sensor becomes a black box that might have an intelligent microphone or intelligent camera system inside. And [the box] would also feed into another ARM processor that processes the application. So if you think about Otis Elevator, as an example, you can have a voice recognition command sensor that would feed into the elevator control system.
Or go back to the accelerometer example. I can just plug the intelligent accelerometer into another processor, so now I have two. And at some point, some people may say, hey, there’s intelligence in the accelerometer; I may be able to use that instead of having to buy yet another microcontroller. But my point is that a black box capability would make it easier for a design team to add these sensing capabilities, so they don’t have to be the experts.
C2: Are there many sensor companies focused on markets besides volume mobile or automotive?
Tu: There are undoubtedly some new entrants that have lower volumes that have never gotten onto the mobile train, and they’re looking to diversify their business through innovation. Or maybe they couldn’t win in mobile, so now they’ve got to go find a new market. I think there’s a lot of sensor companies out there that, whether they lost mobile or they couldn’t hit the mobile price points—whatever—are now searching for that new market by adding some value.
I think the companies that are doing a good job are InvenSense, Bosch SensorTec, Freescale and ST Micro; those guys are certainly moving up the food chain. If you take a look at the type of sensors, such as microphones, there’s a lot of potential. You could do beamforming, echo cancellation, wind cancellation, voice recognition, all those things can be value added if they got integrated into a microphone. You can see some companies already integrating multiple sensors, adding some intelligence. mCube, as an example, has done a monolithic CMOS accelerometer, which is kind of interesting, but at very low cost. The natural progression for them is to start to incorporate the CPU intelligence in there, as well.
C2: What about chemical sensors—methane, CO2, gasoline, and so on?
Tu: Some of those sensors are very new to the market, and you don’t see them applied to the mobile market yet. I could see a mobile sensor that measures local air quality.
But there needs to be some level of replacement capability for these types of sensors; my understanding is that a lot of chemical sensors have a wear-out mechanism, and many last on the order of months.
One of the companies I worked with at Sensors Expo is Aryballe Technologies. It’s an interesting company in that it has an array of chemical sensors plugged into a Raspberry Pi board. The company isn’t really concerned about building the chemical sensor part; rather, it wants to sell you a ‘decoder ring’ for what the outputs of its olfactory and gustatory chemical sensors mean.
The company is using an ARM processor today, but it sits outside the sensor, so it’s not an intelligent sensor in the sense I described. Still, Aryballe is applying compute capability, via that ARM-based Raspberry Pi board, to help the sensors make sense (no pun intended) of what they detect.
C2: Is there anything ARM needs to develop or acquire in order to realize either the vision that you’re describing—or the next step of that vision?
Tu: I think the key is offering the variety of IP which we do, from the ARM Cortex®-M4 processor to the Cortex-M7, but maybe there’s a need in the future for more DSP capability, because a lot of these sensors do need some DSP functionality. But the thing about these various sensors we’ve been describing points to a need for scalable processing. We have that. So if you’re just trying to make a smart sensor, maybe the Cortex-M0 is a really good answer. But if you want to do a sensor hub, such as that well-equipped washing machine, then the Cortex-M4 or Cortex-M7 is probably going to do a better job for you.
We have demo boards and HDKs, and we’re investing substantially in software, some of which we’re trying to help commoditize. At Sensors Expo ARM talked about ARM mbed™ OS and all the different layers of software that ARM is planning to provide for mbed OS.
This is a perfect example of ARM helping innovate by providing the basic software that everybody needs to build a connected device. We have a whole timeline for deliverables here, and I think that’s an attractive offering to the market so that people don’t have to add more resources to create more or re-create software that already exists.
C2: Last question, and it’s perhaps more on a personal note. Does ARM have any active plan besides your ‘evangelization’ in calls like these to bring your vision to fruition and lower these barriers to entry for the sensor manufacturers?
Tu: Absolutely. So in certain markets we’re relying on our ecosystem to help us. In image processing, we have companies that make gesture recognition software, which is similar to object detection software. In the microphone area, we’ve got a small company like DSP Concepts that has a tool that helps you make acoustical algorithms using graphical building blocks, so a system is very easy to stitch together.
So here again, the ecosystem is there, the ARM ecosystem is there to help these guys get to market more easily, move up the food chain more easily. Using the microphone example, you could use DSP Concepts’ tool Audio Weaver to construct these types of algorithms that we’re talking about, so that it wouldn’t require as much engineering staff to create something. It would be easier for anyone to go create these algorithms.
This article was sponsored by ARM.