The Three Essential Ingredients for Ensuring Device Security

Embedded security is more than a feeling, notes the CTO of a company responsible for the security underpinning over a billion smartphones.

What does it take to secure a device? Beyond the nuances of the embedded world, there are three core principles at play that professionals in embedded security need to understand.

Simplicity. Legitimacy. And trust.

“Security” is an interesting term. The dictionary defines it both as “safety against criminal activity” and as “the state of feeling safe.”

Figure 1: Before the “main event” of an OEM service enrolling a small embedded device, such as the Nuvoton M2351 powered by the ARM Cortex-M23 shown here, much takes place backstage.

The perception of security is extremely important. The Web as we know it didn’t really take off until mechanisms and procedures were put in place to protect against criminal activity and give consumers that all-important confidence that it was safe to shop online. Before the advent of 128-bit SSL, the Internet was little more than a glorified encyclopedia.

The embedded world being a rather different place, there is not a monolithic global application or “public” to convince about that “state of feeling safe.” Many embedded applications have gotten along just fine with scant regard for security—either in a technical or emotional sense. However, things are changing, and we can blame the Internet for that.

The opportunities for connected devices are immense—whether “connected” means “to the Internet” or simply a local network. The security risks are there as soon as it is possible for the bad guys to inject themselves into the conversation. Many legacy systems were designed to be physically inaccessible to attack, but physical boundaries no longer apply. This includes factories connecting closed networks to the cloud and critical in-car/in-plane networks being co-opted for infotainment and other new purposes. These scenarios could open up the possibility of indirect online attacks—and even physical attacks cannot be discounted, as device owners try to subvert software-enforced limitations on features or performance.

On the Internet, security was a huge enabler. The costs were localized to a few browser and web server authors, and the benefits were there for anyone wanting to set up or use an online store. The rest, as they say, is history.

What then of the embedded world? Security may be just as necessary—but is there a similar benefit? How does a device maker even “do” security—and what would they get out of it—other than the feeling that they are less likely to be at risk? These are the questions we had penned on a whiteboard 18 months ago, when, as a company responsible for the security underpinning over a billion smartphones, we turned our attention to IoT.

The first and most obvious requirement is simplicity. SSL is an extremely complex system—but it makes it easy for anyone writing a web application to “be secure”—and for anyone using that web application to believe that it is safe. As we have all found, having a strong front door like SSL is not enough—plenty of other things need to be correct to make a system truly secure—but the principle still holds. Security may be a complex subject, but it must be tamed.

One approach is to adopt web standards, such as SSL. That is fine as far as it goes, but SSL and its companion PKI don’t really fit the embedded world very well. An average microcontroller might have 16–64 KB of flash for everything. SSL will happily eat that several times over. SSL is also a “channel-based” security scheme. It secures a pipe between a client, such as a web browser, and a server. The two ends then “chat” to establish a secure connection. For some embedded applications, this fits just fine. But for others, there may never be such an end-to-end, two-way pipe. Devices may need to send messages which are collated and eventually forwarded—more of a letter than a phone call.

Aping SSL is certainly an option for some applications, but it is also overkill—SSL supports the need for a browser to securely connect to any server in the world. Embedded devices don’t need that flexibility and, therefore, simpler and far smaller approaches are possible.

In the end, core security can be whittled down to a very small API. Essentially: “encrypt a message to give to Bob” and “decrypt a message from Bob.” Though encryption is only one aspect of security, it is the essence that developers can understand and then use to start to make their systems secure. As with SSL, the security experts behind the scenes can worry about the technobabble of integrity, forward secrecy, impersonation, replay, and all the other tricky details.
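To make the idea concrete, here is a minimal sketch of what such a two-call API could look like. The function names `encrypt_to` and `decrypt_from` are invented for illustration, and the counter-mode SHA-256 keystream is a toy stand-in for a real authenticated cipher such as AES-GCM or ChaCha20-Poly1305; the point is the shape of the API, not the primitive behind it:

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream (counter-mode SHA-256). A real implementation would
    # use a vetted AEAD cipher instead of hand-rolled crypto.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_to(shared_key: bytes, message: bytes) -> bytes:
    # "Encrypt a message to give to Bob": returns nonce || ciphertext || tag.
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(message, _keystream(shared_key, nonce, len(message))))
    tag = hmac.new(shared_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def decrypt_from(shared_key: bytes, blob: bytes) -> bytes:
    # "Decrypt a message from Bob": check integrity first, then decrypt.
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(shared_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message failed integrity check")
    return bytes(a ^ b for a, b in zip(ct, _keystream(shared_key, nonce, len(ct))))

key = os.urandom(32)  # pre-shared key between the device and "Bob"
blob = encrypt_to(key, b"temperature=21.5")
print(decrypt_from(key, blob))  # b'temperature=21.5'
```

Everything a developer needs fits in two calls; the nonce handling, integrity tag, and other “technobabble” stay behind the curtain.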

The second security requirement is to prove that a device—and a message from it—is legitimate. This takes a little more understanding but is absolutely key and unlocks the potential value of security to many device makers.

Imagine that you produce a widget that is going to connect to the Internet. This might be a fitness band or a smart meter. You build this device, sell it, and are successful. Now the “bad guys” smell that success and decide to get in on the act. Maybe they steal your plans. Maybe they get someone in your supply chain to overproduce a key component. By whatever means, they create a counterfeit. To rub salt into your wounds, this counterfeit device will still connect to your servers. You can’t tell the difference between the real and the fake bits on the wire, so you end up supporting customers who have never even bought your product (and have your reputation ruined when fake devices start to fail and you get the blame). When you consider sensors that produce faulty—or deliberately wrong—readings, the implications are enormous. A securely delivered message from an imposter is of no use to anyone.

The need for device legitimacy is fundamental to a secure Internet of Things. One way to establish it is to ensure that each SoC can generate secure messages that can be attested to have come from a secure part. Further, a simple, low-cost means of adding a digital “sticker” (also known as a Digital Hologram™) to each device during manufacture can mark that it has passed through a particular factory or stage. Like a physical hologram, the digital version cannot be copied or removed, so each device carries a digital ledger recording its life: SoC to module, module to device, device through QA, and onward through service records, purchase, and end of life, capturing whatever information is needed to distinguish legitimate devices from illegitimate ones.
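Digital Hologram is a proprietary mechanism, but the underlying idea of an append-only, tamper-evident record of a device’s life can be illustrated with a generic hash chain, in which each stage keys its entry to everything that came before. The stage names, keys, and record format below are invented for illustration:

```python
import hashlib
import hmac
import json

def append_entry(ledger: list, stage_key: bytes, record: dict) -> None:
    # Each entry commits to the previous entry's digest, so a stage
    # cannot be forged, removed, or reordered without breaking the chain.
    prev = ledger[-1]["digest"] if ledger else "genesis"
    body = json.dumps({"record": record, "prev": prev}, sort_keys=True)
    digest = hmac.new(stage_key, body.encode(), hashlib.sha256).hexdigest()
    ledger.append({"record": record, "prev": prev, "digest": digest})

def verify_chain(ledger: list, stage_keys: list) -> bool:
    # Walk the chain, recomputing each digest with that stage's key.
    prev = "genesis"
    for entry, key in zip(ledger, stage_keys):
        body = json.dumps({"record": entry["record"], "prev": prev}, sort_keys=True)
        expected = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(entry["digest"], expected):
            return False
        prev = entry["digest"]
    return True

# Hypothetical life history: SoC -> module -> QA.
soc_key, module_key, qa_key = b"soc-secret", b"module-secret", b"qa-secret"
ledger = []
append_entry(ledger, soc_key, {"stage": "SoC", "lot": "A17"})
append_entry(ledger, module_key, {"stage": "Module", "factory": "F2"})
append_entry(ledger, qa_key, {"stage": "QA", "passed": True})
print(verify_chain(ledger, [soc_key, module_key, qa_key]))  # True
```

Altering any record, or dropping a stage, changes the digests downstream and the verification fails, which is exactly the property a manufacturing ledger needs.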

The final requirement brings this all together. As with security, trust has an emotional sense—the ability to believe in something. By giving devices both a means to prove that they are legitimate and simple security tools to allow developers to deliver on the key security needs, such as secure communication and storage, we can enable trust in devices. That truly enables the Internet of Things and enables device makers to focus on what they do best.

For example, a small embedded device (a Nuvoton M2351, built on the ARM Cortex-M23) is switched on for the first time and seamlessly connects to an Amazon cloud. It can do this because it was securely provisioned with Digital Holograms and has a small “micro TEE” running inside a secure area on the device. At power-on, the device generates an AWS enrollment (in this case, in the form of a Certificate Signing Request). The library adds an attestation to prove that the request came from this device, and the request is sent to the OEM’s cloud service on Amazon. This, in turn, forwards the attestation to the security provider, which reads and decodes the holograms and returns the device history in longhand (XML). Finally, the OEM service can enroll the device, as it knows that the message can only have come from a legitimate device. This demonstrates how something deeply complicated can be made trivial—as Netscape did with SSL back at the birth of the commercial web.
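The heart of that flow is the attested enrollment request. The sketch below captures just that step, with invented names throughout: the HMAC stands in for the real attestation scheme, the CSR is a placeholder string, and the provider’s key database stands in for keys that would actually be provisioned into the device’s secure area:

```python
import hashlib
import hmac
import json

# Hypothetical provisioning: the security provider knows each legitimate
# device's attestation key. On a real device this key would live inside
# the micro TEE's secure area, never in plain application code.
PROVIDER_DB = {"device-001": b"provisioned-attestation-key"}

def device_enrollment_request(device_id: str, attestation_key: bytes) -> dict:
    # Stand-in for a real Certificate Signing Request plus attestation.
    request = {"device_id": device_id, "csr": "(CSR placeholder)"}
    payload = json.dumps(request, sort_keys=True).encode()
    attestation = hmac.new(attestation_key, payload, hashlib.sha256).hexdigest()
    return {"request": request, "attestation": attestation}

def provider_check(message: dict) -> bool:
    # The OEM cloud forwards the attestation to the security provider,
    # which verifies it against the key provisioned at manufacture.
    key = PROVIDER_DB.get(message["request"]["device_id"])
    if key is None:
        return False
    payload = json.dumps(message["request"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(message["attestation"], expected)

msg = device_enrollment_request("device-001", b"provisioned-attestation-key")
print(provider_check(msg))  # True: the OEM service can safely enroll it
```

A counterfeit device without the provisioned key cannot produce a valid attestation, so the OEM service never enrolls it, which is the property the article’s enrollment story relies on.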

Security is certainly not simple. But developing products with simplicity, legitimacy, and trust can make security easy to use.

Richard Hayton is the Chief Technology Officer of Trustonic. Prior to Trustonic, Hayton was Chief Architect for Citrix Mobility, where he was responsible for crafting the XenMobile Enterprise Mobility Suite. During almost 20 years at Citrix, he led projects ranging from embedded software to global enterprise systems, with a focus on user and developer experience. He holds a PhD in Computer Science from Cambridge University, focusing on identity federation for users, devices and services.
