Industrial IoT at Scale: What’s Really Needed



Why the cloud-centric architectures traditionally used in consumer IoT applications fall short for a larger class of IoT applications, especially those in the Industrial IoT (IIoT).

As the Internet of Things (IoT) continues its rapid growth, there has been no shortage of inflated expectations. Platforms promising to ease the development, deployment and management of IoT systems are now counted in the hundreds. You might be led to believe that all you have to do is pick the platform you like best, from the vendor you trust most, and go build your IoT system. Well, the story is not so simple!

In reality, nearly all of the IoT platforms available on the market today are designed to support only cloud-centric architectures. These platforms centralize the “intelligence” in the cloud and require data to be conveyed from the edge to do anything useful with it. Considering the success of the cloud-centric model in IT (and in some IoT applications such as fleet management), you may wonder: “What’s the big deal?”

Limitations of Cloud-Centric Architectures

Cloud-centric architectures are not applicable to a large class of IoT applications. Most notably, cloud-centric architectures fall short in supporting Industrial IoT (IIoT) systems and struggle with more demanding Consumer IoT applications.

The scary part is that the situation will only get worse with the predicted increase in the number of connected “things”—which could be anywhere from 50 to 200 billion by 2020, according to Cisco, IDC and others.

But it’s more than just the sheer number of “things” that is the problem. There is something more fundamental limiting the applicability of cloud-centric architectures to IoT systems. Below, I’ve broken down the key issues.

Connectivity
Cloud-centric architectures assume that sufficient connectivity exists from the “things” to the cloud. This is necessary for (1) collecting the data from the edge, and (2) pushing insight or control actions from the cloud to the edge. Yet, connectivity is hard to guarantee for several IoT/IIoT applications, such as smart autonomous consumer and agricultural vehicles. As you can imagine, connectivity may be taken for granted in metropolitan areas, but not so much in rural areas.

Bandwidth
Cloud-centric architectures assume that sufficient bandwidth exists to bring the data from the edge into the data center. The challenge here is that several IIoT applications produce incredible volumes of data. For instance, a factory can easily produce a terabyte of data per day— and these numbers will only grow with the continued digitalization of factories.
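
To see how quickly the numbers add up, here is a back-of-the-envelope sketch in Python. The sensor count, sampling rate and sample size are illustrative assumptions, not measurements from any particular plant.

# Back-of-the-envelope estimate of daily data volume for a hypothetical factory.
# The sensor count, sampling rate and sample size are assumed for illustration.

SENSORS = 10_000          # assumed number of instrumented points
SAMPLE_RATE_HZ = 100      # assumed samples per second per sensor
BYTES_PER_SAMPLE = 12     # assumed payload: timestamp + value + a little metadata

bytes_per_day = SENSORS * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * 86_400
print(f"{bytes_per_day / 1e12:.2f} TB/day")   # ~1.04 TB/day under these assumptions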

Latency
Let’s assume that the connectivity and bandwidth problems are solved. Is that sufficient? The short answer is no. There is still a large class of IIoT systems for which the latency required to send data to the cloud, make decisions and eventually send data back toward the edge to act upon those decisions may be completely incompatible with the dynamics of the underlying system. A key difference between IT and IoT/IIoT is that the latter deals with physical entities. Reaction time cannot be arbitrary. It must be compatible with the dynamics of the physical entity or process with which the application interacts. Failing to react with the proper latency can lead to system instability, infrastructure damage, or even risk to human operators.
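
A toy latency-budget check makes this concrete. The conveyor speed, tolerated drift and round-trip times below are illustrative assumptions, not figures from a real deployment.

# Can the control loop react before a moving part drifts too far?
# All numbers are illustrative assumptions.

CONVEYOR_SPEED_M_S = 2.0     # assumed speed of the moving part
TOLERATED_DRIFT_M = 0.05     # assumed distance we can tolerate before reacting
CLOUD_ROUND_TRIP_S = 0.120   # assumed sense -> cloud -> decide -> actuate delay
EDGE_ROUND_TRIP_S = 0.005    # assumed delay when the decision is made locally

deadline_s = TOLERATED_DRIFT_M / CONVEYOR_SPEED_M_S            # 25 ms budget
print(f"reaction deadline: {deadline_s * 1000:.0f} ms")
print("cloud loop meets deadline:", CLOUD_ROUND_TRIP_S <= deadline_s)  # False
print("edge loop meets deadline:", EDGE_ROUND_TRIP_S <= deadline_s)    # True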

Cost
In the age of smartphones and very cheap data plans, most people assume that the cost of connectivity is negligible. The reality is quite different in IIoT, due either to bandwidth requirements or to the sheer number of connectivity points. While in consumer applications the individual person—the consumer—pays for connectivity, in most IoT/IIoT applications, such as smart grids, it is the operator who pays the bill. As a result, the cost is usually carefully considered, as it directly affects OPEX and, consequently, margins.

Security
Finally, even assuming that all of the issues listed above are addressed, security remains a hurdle: a large class of Industrial IoT applications are either not comfortable with pushing their data to a cloud, or are prevented from doing so by regulation.

In summary, unless you can guarantee that the connectivity, bandwidth, latency, cost and security requirements of your application are compatible with a cloud-centric architecture, (1) you need a different paradigm, and (2) 99.9% of the IoT platforms available on the market are not of much use.

Fog Computing

Fog computing is emerging as the main paradigm to address the connectivity, bandwidth, latency, cost and security challenges imposed by cloud-centric architectures. The main idea behind fog computing is to provide elastic compute, storage and communication close to the “things,” so that (1) data does not need to be sent all the way to the cloud, or at least not all data and not all the time, and (2) the infrastructure is designed from the ground up to deal with cyber-physical systems (CPS) as opposed to IT systems. With fog computing, the infrastructure takes into account the constraints that interactions with the physical world impose: latency, determinism, load balancing, and fault-tolerance.

Software-Defined Automation, Digitalization and Fog Computing
As discussed earlier, cloud-centric architectures fall short in addressing a large class of IoT applications. These limitations have motivated the need for fog computing to address the connectivity, bandwidth, latency, cost and security challenges imposed by cloud-centric architectures.

Now let’s consider some additional industry trends that are further motivating this paradigm shift and formulate a more precise definition of fog computing.

Two trends that are in some way at the core of the Industrial Internet of Things revolution are Software-Defined Automation, or Software-Defined Machines, and Digital Twins.

Software-Defined Automation, a trend that is disrupting several industries, has as its raison d’être the replacement of specialized hardware implementations, such as the Programmable Logic Controller (PLC) on an industrial floor, with software running in a virtualized environment.

Digital Twins, as the name hints, are digital representations (computerized models) of a physical entity such as a compressor or a turbine, “animated” through the live data coming from their physical counterpart. Digital Twins have several applications, including monitoring, diagnostics, and prognostics. Additionally, Digital Twins provide useful insights to R&D teams for improving next-generation designs as well as continuously improving the fidelity of their models.
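
As a minimal illustration of the concept, the sketch below shows a Digital Twin “animated” by live telemetry. The class, its fields and the health heuristic are hypothetical placeholders for the far richer, physics-based models used in practice.

from dataclasses import dataclass, field
from statistics import mean

@dataclass
class CompressorTwin:
    """Minimal digital twin of a compressor, updated from live telemetry.
    Field names and the health heuristic are illustrative only."""
    asset_id: str
    vibration_mm_s: list = field(default_factory=list)
    last_temperature_c: float = float("nan")

    def ingest(self, sample: dict) -> None:
        # "Animate" the twin with a live measurement from the physical asset.
        self.last_temperature_c = sample["temperature_c"]
        self.vibration_mm_s.append(sample["vibration_mm_s"])
        del self.vibration_mm_s[:-1000]          # keep a bounded history window

    def health_indicator(self) -> float:
        # Toy prognostic: rising average vibration lowers the indicator.
        if not self.vibration_mm_s:
            return 1.0
        return max(0.0, 1.0 - mean(self.vibration_mm_s) / 10.0)

twin = CompressorTwin(asset_id="compressor-7")
twin.ingest({"temperature_c": 71.2, "vibration_mm_s": 2.3})
print(twin.health_indicator())                   # 0.77 with this single sample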

As Software-Defined Automation transforms specialized hardware into software it creates an opportunity for convergence and consolidation. Transform PLCs into software-defined PLCs, for instance, and suddenly they can be deployed on commodity hardware in a virtualized environment and decoupled from the I/O logic, which can remain closer to the source of data.
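
The sketch below illustrates this separation: a software-defined PLC whose scan cycle runs in a virtualized environment on commodity hardware, while the I/O it reads and writes stays close to the machine and is reached over the network. The RemoteIO interface and the control rule are hypothetical placeholders.

import time

class RemoteIO:
    """Stand-in for an I/O module left near the machine and reached over the
    network (e.g., through a fieldbus gateway). The interface is hypothetical."""
    def read_inputs(self) -> dict:
        return {"tank_level": 0.82}

    def write_outputs(self, outputs: dict) -> None:
        print("apply outputs:", outputs)

def control_logic(inputs: dict) -> dict:
    # The "PLC program": close the inlet valve when the tank is nearly full.
    return {"inlet_valve_open": inputs["tank_level"] < 0.90}

def scan_cycle(io: RemoteIO, period_s: float = 0.01, cycles: int = 3) -> None:
    # Classic PLC scan: read inputs, run the logic, write outputs, wait for the
    # next cycle. Only the location of the I/O has changed.
    for _ in range(cycles):
        start = time.monotonic()
        io.write_outputs(control_logic(io.read_inputs()))
        time.sleep(max(0.0, period_s - (time.monotonic() - start)))

scan_cycle(RemoteIO())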

As a result of Software-Defined Automation and Digital Twins, there is an opportunity to modernize the factory floor, consolidate its hardware, and increase availability and productivity, while also improving manageability, resilience to failure, and agility of innovation. Software-Defined Automation affords the opportunity to manage these systems like a data center. As this trend is influencing a large class of industries, it is worth highlighting that the transformations described above, along with their benefits, are not limited to industrial automation.

But there is a catch! The catch is that the large majority of these systems, whether in the industrial, transportation, or medical domains, are subject to the performance constraints already described. These systems interact with the physical world, so they must react at the pace the physical device imposes.

As a consequence, while traditional cloud infrastructures would be functionally perfect to address these use cases, they turn out to be inadequate as (1) they were not designed with these non-functional requirements in mind, and (2) they are often too heavyweight. Cloud infrastructures were designed for IT systems in which a delay in the response time may create a bored or upset customer, but will not cause a robot arm to smash against a wall or other machinery, or worse, hurt a human operator.

Fog computing is not just about applying distributed computing to the edge. Fog computing is about providing an infrastructure that—while virtualizing elastic compute, storage, and communication—also addresses the non-functional properties characteristic of these domains.

Fog computing makes it possible to provision and manage software-defined hardware (e.g., a soft PLC), Digital Twins, analytics, and anything else that might need to run on the system, while ensuring the proper non-functional requirements are met and delivering convergence, manageability, availability, agility, and efficiency improvements.

Fog computing’s flexible infrastructure is what makes it possible to deploy, monitor and manage software at the edge. Simply deploying some logic on an edge gateway isn’t fog computing; nor is traditional distributed computing.

Fog and Mist Computing

The attentive reader will have noticed that thus far we have focused on platforms that virtualize the compute, communication and storage fabric available at the edge of the system. Yet, in many IoT systems, “things” have computational, communication and storage capabilities that should also be exploited and managed uniformly. Thus, the natural question is, where does fog stop? Below the fog—on the devices—do we have something else?

Mist Computing
Mist is closer to the ground than fog, which in turn is closer to the ground than clouds. Mist computing is about bringing elastic compute, storage and communication directly to things. Thus, if we continue with the meteorological analogy, cloud infrastructure is high up in the data center, fog infrastructure is midway between the “things” and the cloud, and mist infrastructure is simply the “things.”

Mist Computing has two essential goals:

1. Enable resource harvesting by exploiting the computation, storage, and communication capabilities available on the “things.”
2. Allow arbitrary computations to be provisioned, deployed, managed and monitored on the “things.”

As you can imagine, “things” in IoT applications are extremely heterogeneous with respect to platforms, resources, and connectivity. Thus, the main challenge for mist infrastructures is to be sufficiently lightweight to establish a fabric that virtualizes compute, storage, and communication without consuming too many resources.

Cloud, Fog and Mist Computing Convergence

If we take a step back and look at a generic IoT/IIoT system from a distance, we realize that from an infrastructural perspective we have to deal with data centers in a public or private cloud, edge infrastructure, and the actual things. IoT/IIoT systems will need to exploit resources that span these three tiers and to provision, deploy, monitor, and manage applications and services across them. However, the landscape reveals complete fragmentation among the technologies used for cloud, fog, and mist computing. This fragmentation makes it hard to establish a unified end-to-end perspective on the system, and it makes it practically impossible to treat the system as a uniform and virtualized compute, storage and communication fabric.

At this point the question is, “What can we do about it?”

The first step toward addressing a problem is recognizing it. To this end, the author of this paper has spent the past year or so raising awareness of the challenges that this fragmentation may induce. The second step is to establish a vision of how the problem can be solved, so that the industry can internalize it and eventually address it.

Let’s focus for a moment on what would make sense for the user of an IoT/IIoT platform as opposed to the technical details of whether cloud, fog, or mist is the right answer.

From a high-level perspective, why should somebody designing an IoT/IIoT application care one iota whether he or she will be using the cloud, fog, or mist computing paradigm? The only thing that really matters is that the platform provides a way to provision, manage and monitor applications in such a way that they can meet their end-to-end functional and non-functional requirements. Those functional and non-functional requirements drive the allocation of applications to “things,” edge infrastructure, or cloud infrastructure; anything else is just a detail, isn’t it?

Fluid Computing

As a result, cloud, fog, and mist computing are now converging into fluid computing. Fluid computing is an architectural principle based on abstracting the topological details of the computational infrastructure. Fluid architectures provide an end-to-end fabric that can be used to seamlessly provision, deploy, manage and monitor applications, regardless of whether the underlying resource is provided by the cloud infrastructure, the fog infrastructure, or by “things.”
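
The sketch below illustrates the principle: placement is driven by the application’s end-to-end requirements, and the tier a node belongs to (mist, fog or cloud) is incidental. The types, node names and selection rule are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    """A unit of compute anywhere in the end-to-end fabric: a "thing", a fog
    node or a cloud VM. The tier is recorded for reporting, not for placement."""
    name: str
    tier: str                      # "mist" | "fog" | "cloud"
    latency_to_process_ms: float   # latency to the physical process it serves
    free_cpu_cores: float

@dataclass(frozen=True)
class AppRequirements:
    max_latency_ms: float
    cpu_cores: float

def place(app: AppRequirements, fabric: list) -> Node:
    # Requirements-driven placement: any node that satisfies the end-to-end
    # requirements is eligible, regardless of which tier it sits in.
    eligible = [n for n in fabric
                if n.latency_to_process_ms <= app.max_latency_ms
                and n.free_cpu_cores >= app.cpu_cores]
    if not eligible:
        raise RuntimeError("no node satisfies the requirements")
    return max(eligible, key=lambda n: n.free_cpu_cores)  # arbitrary tie-breaker

fabric = [
    Node("plc-gateway-3", "mist", latency_to_process_ms=2, free_cpu_cores=0.5),
    Node("cell-server-1", "fog", latency_to_process_ms=8, free_cpu_cores=6.0),
    Node("eu-west-vm-42", "cloud", latency_to_process_ms=90, free_cpu_cores=32.0),
]
print(place(AppRequirements(max_latency_ms=10, cpu_cores=2.0), fabric).name)
# -> cell-server-1: the cloud VM is too far away, the "thing" too small.

In this view, cloud, fog and mist are simply labels for where the selected resource happens to sit.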

Fluid computing unifies cloud, fog and mist computing under a single abstraction; cloud, fog and mist computing can each be seen as applying fluid computing within a specific bounded context.

The impact of this line of thought can already be seen in the OpenFog Consortium reference architecture[1], which now embraces some of the concepts of fluid architectures discussed above.

Making IoT Happen at Scale

In this paper, we have discussed the evolution of IoT architectures to support the expansion of IoT to more challenging and arguably more beneficial application domains, such as smart grids, smart factories, and autonomous vehicles. This is all good, but there is still something missing to really make IoT happen at scale: standardization. Today’s reality is that IoT platforms continue to grow in number and fragment the market, interoperability is non-existent or extremely limited, and most IoT applications are silos with respect to connectivity.

To make IoT happen, we need standards to be established at the data-exchange and data-format level. Some vertical applications seem to be standardizing on the DDS standard[2], while others are standardizing on OPC-UA[3]. DDS tends to be preferred in systems required to operate at massive scale, with high performance and demanding fault-tolerance. OPC-UA, on the other hand, is widely used in the automation industry as a way of providing interactive access to field data. Both standards, along with defining mechanisms for data sharing, provide mechanisms for defining data models. DDS additionally allows augmenting data models with QoS policies that capture the non-functional requirements of the data. This is particularly useful for applications that need to control end-to-end QoS in order to ensure proper operation or quality of experience.
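
To make the idea of a QoS-augmented data model concrete, here is a conceptual sketch. The classes are illustrative only; they do not reproduce the API of any DDS or OPC-UA implementation, which would typically express the type in IDL or an information model and the QoS in separate profiles.

from dataclasses import dataclass
from enum import Enum

class Reliability(Enum):
    BEST_EFFORT = "best_effort"
    RELIABLE = "reliable"

@dataclass(frozen=True)
class QoS:
    # A few QoS dimensions in the spirit of DDS policies; names are illustrative.
    reliability: Reliability
    deadline_ms: float     # maximum expected gap between successive updates
    history_depth: int     # samples retained per instance

@dataclass(frozen=True)
class TurbineTelemetry:
    # The data model: the structure publishers and subscribers agree on.
    turbine_id: str        # key identifying the instance
    rotor_speed_rpm: float
    bearing_temp_c: float

TURBINE_TOPIC = ("TurbineTelemetry",
                 TurbineTelemetry,
                 QoS(Reliability.RELIABLE, deadline_ms=100, history_depth=10))

The point is that the non-functional contract (reliability, deadline, history) travels with the data model itself, which is what lets applications reason about end-to-end QoS.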

Standards exist that are ready to be adopted. End users need to be more aware of the importance of interoperability and governments at a national and international level need to understand that, without interoperability, there won’t be any IoT at scale—only a massive mess of stove-pipes clumsily integrated together.


Angelo Corsaro, Ph.D., is Chief Technology Officer (CTO) at ADLINK Technology Inc. As CTO he leads the Advanced Technology Office and looks after corporate technology strategy and innovation.

Earlier, Corsaro served as CTO of PrismTech (an ADLINK company), where he directed the technology strategy and innovation for the Vortex IIoT platform. He also served as a scientist in the SELEX-SI and FINMECCANICA Technology Directorate, where he was responsible for the corporate middleware strategy, for strategic standardization, and for R&D collaborations with top universities.

Corsaro is a well-known and widely cited expert in high-performance and large-scale distributed systems, with hundreds of publications in refereed journals, conferences, workshops, and magazines.


[1] https://www.openfogconsortium.org/

[2] http://www.omg.org/spec/DDS/1.4/

[3] https://opcfoundation.org/
