Linux Containers: Smart Home Smarts
Why containers can satisfy the application isolation requirements for embedded devices in our homes, on our wrists, and more.
Though most embedded devices do just one thing and do it very well, securing that one application on the device has always been challenging, because processor, memory, and power are all constrained. Compounding the challenge, most embedded devices are now ‘connected’ and do many more things than just one. Moreover, many applications that handle our personal information now run on embedded devices. For example, many wearables serve either as endpoints that facilitate user authentication or as a key computation hop that collects data and performs transactions in healthcare and finance use cases. As more and more ‘things’ connect in our homes, we humans have become one more connected ‘thing’ inside our smart homes, and when we venture out, mobile devices become extensions of our physical beings.
Many security operations, public-key cryptography computations for example, are processor-intensive and hence also consume more power than ‘normal’ computations. In mobile embedded devices such as wearables, that additional power use means charging the device more often, thereby conflicting with the device’s primary attribute: its mobility.
While anything and everything is getting connected to the cloud, containers have brought about a major disruption in the cloud infrastructure space. Containers offer workload portability and more efficient hardware utilization, and they are perfectly suited for ephemeral microservices. GlobalPlatform’s TEE (Trusted Execution Environment) architecture does offer a solid security paradigm for isolating regular and secure applications, but that architecture relies heavily on support from the underlying hardware and OS. However, I believe containers can be used to satisfy the application isolation requirements of embedded devices.
The perfect trifecta of cloud computing, ubiquitous connectivity, and open source software stacks for processing huge amounts of data has led to the ‘smartness of everything.’ While data pertaining to every aspect of existing ‘things’ are being collected and processed in order to make those things smart, new devices are thrown in the mix to create newer interactions with day-to-day human lives or to deeply analyze the existing interactions. All of this has led to a resurgence of new hardware devices: from wearables to smart home sensors to beacons to connected self-driving cars to smart cities to drones, etc.
This second coming of the hardware revolution is further fueled by easy availability of cheap single-board computers and microcontrollers such as Raspberry Pi and Arduino. As an example, it is very easy for anyone to quickly put together a system of smart sensors at home that send alerts or notifications based on certain predefined rules, using such single-board computers and open source software such as OpenHAB.
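To make that idea concrete, here is a minimal, hypothetical sketch of the rule-based alerting that home-automation stacks such as OpenHAB implement. The sensor field names and alert messages are illustrative assumptions, not OpenHAB’s actual API:

```python
# Hypothetical sketch of rule-based smart home alerting: each sensor
# reading is checked against predefined rules, and matching rules
# produce alert messages. Field names ("temperature_c", "smoke") and
# the messages themselves are purely illustrative.

def evaluate_rules(reading, rules):
    """Return the alert messages whose rule predicate matches the reading."""
    return [message for predicate, message in rules if predicate(reading)]

# Two example rules for a living-room sensor node.
rules = [
    (lambda r: r["temperature_c"] > 30, "Temperature high: check HVAC"),
    (lambda r: r["smoke"], "Smoke detected!"),
]

alerts = evaluate_rules({"temperature_c": 34, "smoke": False}, rules)
print(alerts)  # → ['Temperature high: check HVAC']
```

A real deployment would feed readings from GPIO pins or an MQTT broker into such a rule engine and route the resulting alerts to a notification service.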
Containers enable an application development and deployment lifecycle that gives developers the flexibility to choose whatever software stack is best suited to the application. At the same time, containers let the DevOps team decide and define the underlying immutable infrastructure stack on which the containers run. This flexibility has a key implication: the application and DevOps teams can work, and test their changes, completely independently, without having to worry about the impact of their changes on the other team’s progress. This process improvement can significantly speed up release cycles, because the entire OS and application stack doesn’t have to be requalified for every change (as is the case with application software released as virtual machine [VM] images). Application developers can quickly try out changes to their application logic and software stack without having to coordinate those changes, and the OS-level support they would need, with the DevOps team.
Embedded Development and Production Perks
Containers offer multiple additional benefits during development and in production for embedded devices.
- New containerized applications can be introduced and deployed on a device without impacting any of the existing applications. The existing applications continue to rely on their specific dependencies, inside their own containers, completely unaware of the newly introduced application in their environment.
- An application and its dependencies can be optimized independently, without touching other applications or the underlying infrastructure.
- The development process can be parallelized: different teams can build and test applications independently, each choosing whatever software stack best suits its application (within the assigned memory footprint and other embedded limitations), rather than having to agree on, and hence settle for, a jack-of-all-trades-but-master-of-none stack that works for every application.
- Over-the-air updates can be applied to, and even rolled back from, containerized applications without worrying about having to physically reset hard-to-reach devices if an update doesn’t go through correctly.
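The over-the-air update-and-rollback flow can be sketched as follows. This is a hypothetical illustration: `start` and `health_check` are stand-ins for a real container runtime’s pull/run/inspect operations, not any particular tool’s API:

```python
# Hypothetical sketch of a container-style over-the-air update with
# automatic rollback: the candidate application version is started,
# health-checked, and reverted to the known-good version on failure.
# `start` and `health_check` are illustrative hooks, not a real runtime API.

def apply_update(current, candidate, start, health_check):
    """Try the candidate version; fall back to the current one on failure."""
    start(candidate)
    if health_check(candidate):
        return candidate        # update accepted
    start(current)              # roll back: restart the known-good version
    return current

# Usage: a candidate image whose health check fails is rolled back.
running = apply_update(
    current="sensor-app:1.0",
    candidate="sensor-app:1.1",
    start=lambda image: None,                           # stand-in for a container run
    health_check=lambda image: image.endswith("1.0"),   # simulate a failing 1.1
)
print(running)  # → sensor-app:1.0
```

The point is that rollback is a local, software-only operation on the device: no one has to climb a ladder to power-cycle a sensor.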
As mentioned earlier, security is becoming increasingly important for connected embedded devices, especially for something like a smart home hub that controls all the devices and sensors in a connected home. While the threats themselves are no different from those facing other connected systems, containers can be used effectively on embedded devices to address them.
A dual-layered approach of separating out well-containerized applications from the underlying infrastructure could prevent a compromise in one application from spreading to other applications or to the kernel. What’s more, the segregation of a monolithic software stack into multiple cleanly isolated parts leads to faster testing and updates for bugs and security patches, thereby reducing the window of opportunity for attacks.
External sandboxing of containers, using a Mandatory Access Control (MAC) approach such as SELinux or AppArmor, could limit an application’s capabilities to a predefined set of system calls. Alternatively, or in addition, security could be baked into the containers themselves, by placing probes in the various network, I/O, and application-layer calls of interest, to provide deep visibility into the containers’ runtime operations and to restrain an application from doing anything abnormal to its behavior.
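The allowlist idea behind such MAC profiles can be sketched as follows; the operation strings and profile contents are purely illustrative, not SELinux or AppArmor syntax:

```python
# Hypothetical sketch of the MAC allowlist concept: each containerized
# application gets a predefined profile of permitted operations, and
# anything outside that profile is denied. Entries are illustrative.

ALLOWED = {
    "read:/dev/sensor0",
    "write:/var/log/app.log",
    "connect:hub.local:8080",
}

def is_permitted(operation, profile=ALLOWED):
    """Return True only if the operation appears in the profile."""
    return operation in profile

print(is_permitted("read:/dev/sensor0"))   # → True
print(is_permitted("write:/etc/passwd"))   # → False
```

Real MAC systems enforce such profiles in the kernel, so even a compromised application cannot bypass the check from user space.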
Beyond the Traditional
Containers offer promising alternatives to the traditional approaches for developing, testing, and deploying applications on embedded devices. Containers are already being used actively to deploy server-side components in the fast-growing IoT space, so despite the hardware limitations of embedded devices, I expect the container runtime (Docker has already been ported to the Raspberry Pi by resin.io) and, most important, the developer toolchains to be migrated sooner rather than later. The bulkier container lifecycle management and orchestration layer doesn’t need to be migrated to the embedded ecosystem: the reasons for using containers on embedded devices, as explained above, differ entirely from the rationale for making them part of the compute infrastructure layer.
Asif Awan is a successful serial entrepreneur with broad business and technology expertise that spans the enterprise, healthcare and financial industries; and cloud, mobile and deep-learning technologies. He is the founder and CEO of Layered Insight, www.layeredinsight.com, a container-security startup based in the San Francisco Bay Area that has built an industry-first, container-native deep visibility and security solution.