Posts Tagged ‘top-story’

Industry Groups Pave Way for Field-Bus Standard

Monday, October 29th, 2018

Industrial Internet Consortium members, the IEEE, the OPC Foundation, and the Shapers Group collaborate on an open industry standard.

Industrial networks (also known as field buses) come from simpler times and were originally used to provide deterministic connectivity between PLCs, I/O points, and sensors over relatively long distances. At that time, traditional mechatronic systems used centralized architectures in which a central CPU connected over a backplane (such as a VME or ISA bus) to various I/O boards. The boards would then connect to sensors using TTL logic. As machines grew larger and faster and sensors needed to be read at higher frequencies, such architectures started to fail due to noise susceptibility.

The concept of remote I/O was developed to solve these technical issues, and various field buses evolved using technologies such as RS485 and CAN. These technologies proved effective and reliable until the next wave of machine design, where the I/O and throughput requirements increased again, forcing the industry to look at alternative means to connect.

The Evolution to Ethernet
The need for faster machines, more stringent I/O requirements, and larger amounts of data forced the industry to move to a faster medium, and it adopted Ethernet as a well-understood one. While Ethernet is fast, classic Ethernet is based on CSMA/CD, which allows any network component to transmit at any time. When two or more components transmit at the same instant, a packet collision occurs, corrupting the data. Each sender senses that a collision has occurred and retransmits its packet after some random delay, repeating the process until the packet gets through without a collision. The more components on the network, the higher the probability of a collision, which leads to larger amounts of network jitter. To minimize jitter, various automation vendors came up with protocols to manage network collisions; for competitive reasons, some of these protocols were proprietary.
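To make the collision-and-retry behavior concrete, here is a minimal Python sketch of CSMA/CD-style truncated binary exponential backoff; the slot time and station counts are illustrative values, not figures from this article.

```python
import random

SLOT_TIME_US = 51.2  # classic 10 Mbit/s Ethernet slot time, for illustration

def backoff_delay(attempt: int) -> float:
    """Return a random backoff delay (in microseconds) after `attempt` collisions.

    After the n-th collision a station waits a random number of slot times
    drawn from 0 .. 2^min(n, 10) - 1 (truncated binary exponential backoff).
    """
    max_slots = 2 ** min(attempt, 10)
    return random.randrange(max_slots) * SLOT_TIME_US

# The more stations contend, the more collisions occur and the wider the
# spread of retransmission times -- i.e., jitter grows with node count.
for attempt in range(1, 6):
    print(f"after collision {attempt}: wait {backoff_delay(attempt):8.1f} us")
```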

Some automation vendors resorted to using managed network switches, while others modified Layer 2 of the Ethernet standard, thus breaking away from the standard. Using managed switches introduces more complexity, as these switches have to be configured for the specific machine architecture. They were also expensive and introduced significant network delays that hindered the performance of motion applications. In all cases, the operational technology (OT) applications on the factory floor had to run on a separate network from the rest of the IT infrastructure, creating the IT/OT divide and significant overhead in network and machine management and maintenance.

An Opportunity to Simplify
With the creation of the new Time-Sensitive Networking (TSN) standard, factories now have the opportunity to significantly simplify their networks and their IT infrastructure. A new set of IEEE networking standards will now allow network components to communicate with real-time performance across a wide area network (WAN) without compromising data integrity or security. Three new IEEE standards are responsible for this achievement.

IEEE 802.1ASrev: This standard allows network nodes and switches to have a common sense of time across a wide area network. One such application for this standard provides the capability for a network device to publish and deliver its data to specific destinations on a network in a cyclic manner with minimal jitter.

IEEE 802.1Qbv: This standard provides capabilities and techniques to forward or queue network packets based on their destination and required arrival deadlines.

IEEE 802.1Qcc: This standard allows for the definition and configuration of a system of nodes on a network that must communicate in real time. Paths between nodes are calculated to guarantee that each network path can meet its arrival deadline requirements.

The creation and implementation of these three standards allow factory network hierarchies to collapse into one flat open network. Here it will be possible for real-time components (such as drives, sensors, and I/O banks) to co-exist on the same network with non-real-time components such as printers, office desktops, etc.
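As a rough illustration of the scheduling idea behind IEEE 802.1Qbv described above, the following Python sketch models a gate-control list that opens the queue for time-critical traffic during a fixed window of each repeating cycle; the traffic classes, window lengths, and cycle time are invented for illustration and are not taken from the standard.

```python
# A minimal model of an 802.1Qbv-style gate-control list: each entry opens a
# set of traffic-class gates for a slice of the repeating cycle.
CYCLE_NS = 1_000_000  # 1 ms cycle, illustrative

gate_control_list = [
    {"open_classes": {7},       "duration_ns": 200_000},  # time-critical traffic only
    {"open_classes": {0, 1, 2}, "duration_ns": 800_000},  # best-effort traffic
]

def gates_open_at(offset_ns: int) -> set:
    """Return which traffic classes may transmit at a given offset into the cycle."""
    t = offset_ns % CYCLE_NS
    elapsed = 0
    for entry in gate_control_list:
        if t < elapsed + entry["duration_ns"]:
            return entry["open_classes"]
        elapsed += entry["duration_ns"]
    return set()

assert sum(e["duration_ns"] for e in gate_control_list) == CYCLE_NS
print(gates_open_at(100_000))   # {7}       -> only the time-critical class
print(gates_open_at(500_000))   # {0, 1, 2} -> best-effort classes
```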

Figure 1 shows an example of a small factory network. Here we see only the OT network, where the machines are connected via standard Ethernet. The network consists mainly of controllers that communicate with one another using standard Ethernet protocols; on the machine side, they communicate using some flavor of real-time Ethernet that is not compatible with a standard office network.

Figure 1: A small factory network

Figure 2 illustrates how a network with TSN may look. Here you no longer need a separate network for automation; all the components sit on the same factory TSN network. Another side effect is that individual machines no longer need their own PLC. The PLC function may simply become a software function running on a server or in a local fog.

Figure 2: With TSN, a separate network for automation is unnecessary

The IIC TSN Testbed and the Shapers Group
The IIC TSN Testbed was originally proposed within the IIC framework as a physical platform where IIC members interested in adopting TSN could collaborate, share tools and best practices, and test TSN concepts among their devices. The original physical testbed was built and hosted at National Instruments (NI) in Austin, TX. NI regularly hosted testbed members a few times a year so they could connect their components together. The lab space provided by NI proved to be a very effective and collaborative environment where engineers from various companies jointly developed their respective TSN capabilities. Later, another testbed was hosted at Bosch Rexroth in Germany to provide a lab more accessible to European members. Other testbeds will also be added in China.

The Shapers Group is a special interest group formed by TSN testbed members who recognized that while TSN provides a very effective delivery mechanism, a standard was still needed for managing and abstracting the data being communicated. The Shapers Group focused on data definition and abstraction to provide seamless interoperability between machines and automation components on the network, based on open standards. OPC UA was chosen as the data modelling and communication technology. Several aspects led to the choice of OPC UA. These included:

Independence and Openness: The OPC UA standard is maintained by the OPC Foundation, an industry consortium that creates and maintains standards for the open connectivity of industrial automation devices and systems.

Standardization: OPC UA has a high degree of standardization with very clear definitions on how to use the standard for data modelling and communications.

Powerful Object Model: OPC UA has a very powerful object model that allows data to be abstracted using an object-oriented paradigm. OPC UA also allows objects to define “method calls” that can be activated remotely by other connected entities.

Worldwide Acceptance and Pervasiveness: OPC UA is now available on the majority of modern controllers (PLCs, Industrial PCs, etc.) provided by vendors around the world.

Abundance of Tools and Stacks: Device manufacturers and OEMs can easily start adopting OPC UA, thanks to the large number of technology vendors who provide commercial software and programming libraries. There is also a significant number of open-source components for building OPC UA servers and clients (a minimal client sketch follows this list).

Scalability: OPC UA can be built to scale from small embedded controllers with little memory to larger CPUs with abundant memory. While OPC UA has a huge number of features and functions, OEMs and device manufacturers can choose to build servers with only the functionality their application requires. A minimal server may need as little as 50 KB of RAM.

Integrated Security: Data encryption and security are defined as an integral part of the OPC UA infrastructure, allowing networked components to communicate securely, which is necessary to meet modern IIoT (Industrial Internet of Things) requirements.
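As a rough illustration of how a client consumes an OPC UA object model, here is a minimal sketch using the open-source python-opcua library; the endpoint URL, node IDs, and method name are hypothetical placeholders, not references to any specific product.

```python
from opcua import Client  # open-source python-opcua stack (pip install opcua)

# Hypothetical endpoint and node identifiers -- substitute those of a real server.
ENDPOINT = "opc.tcp://192.168.0.10:4840"

client = Client(ENDPOINT)
client.connect()
try:
    # Read a variable exposed in the server's address space.
    temperature = client.get_node("ns=2;i=1001").get_value()
    print("Temperature:", temperature)

    # Invoke a method defined on an object node ("method calls" in the object model).
    machine = client.get_node("ns=2;i=2000")
    result = machine.call_method("2:StartCycle", 3)  # e.g., run 3 cycles
    print("StartCycle returned:", result)
finally:
    client.disconnect()
```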

Figure 3 shows the original set of companies that collaborated in the Shapers Group and saw the opportunity to define OPC UA as a new machine interoperability standard in machine automation, using TSN as the real-time transport layer.

Figure 3:  A number of companies are responsible for creating a new machine interoperability standard in machine automation.

In order to efficiently meet the required interoperability performance between machines and components, the Shapers also recognized that the OPC UA client/server model was inappropriate for high-performance real-time applications. Some members worked closely with the OPC Foundation to add Pub/Sub (publish/subscribe) functionality to OPC UA. This capability allows for one-to-one or one-to-many relationships between networked nodes, letting components publish data periodically. It is based on the concept of a bus cycle, where data can be shared without being explicitly polled.

The monumental effort put forth by the various working groups at the IEEE, the OPC Foundation, the Shapers Group, and the IIC testbed members has now paved the way for a new field-bus standard based on open technology and open industry standards. Various vendors have started to develop products based on the new OPC UA/TSN standard, including network components such as I/O, industrial controllers, and TSN switches. We will soon see drives with TSN network interfaces, allowing OEMs and machine builders to deploy machines with very demanding motion performance requirements. Commissioning these machines will also become simpler thanks to the plug-and-produce capabilities of this new field bus.

The Shapers Group is also continuing to expand. On April 24th, 2018, Rockwell Automation joined the Shapers Group, recognizing that this new technology allows easy and secure sharing of information across different vendor technologies, while the TSN suite of standards helps improve latency and robustness in converged industrial networks. Adoption of this communications technology continues to grow, and it is now highly likely to become, over time, the dominant standard in industrial automation.


Sari Germanos is part of the business development and technology marketing teams at B&R Industrial Automation. He is responsible for open source technologies and open standards for machine interoperability. He also has significant experience in applying simulation technologies to improve the efficiency of developing large-scale distributed systems. Germanos chairs the working group developing the OPC UA Companion Specification for ISA TR-88. He also represents B&R at the Industrial Internet Consortium, where he co-chairs the Networking Task Group. Sari received his MS in Computer Science from Boston College.


Broader Testing Services as the Cloud and More Evolve

Tuesday, October 2nd, 2018

What’s new as the number of nodes in the data center continues to grow

Editor’s Note: “What we are seeing now, because we have run out of IPv4 addresses, is that service providers deploying networks want IPv6-only networks,” Timothy Winters, Senior Executive, Software and IP Networking at the University of New Hampshire InterOperability Laboratory (UNH-IOL), tells EECatalog. Winters spoke with me following the lab’s announcement that its broadened testing services align with the National Institute of Standards and Technology’s (NIST) revamped USGv6 profile. As UNH-IOL co-authored the new profile, I wanted to hear Winters’ perspective on the update. Edited excerpts of our conversation follow.

EECatalog: What are some of the things the NIST update of the USGv6 profile achieves?

Timothy Winters (TW): Besides including all the updates from the IETF, we wanted to break the profile up and move some of the U.S. government-specific components out of it. Making it more generic allows governments in other countries, or even just larger user groups, to use the profile. Now it’s possible to use parts of the profile more easily and interchangeably. In addition, it helps the vendor community: instead of having to test IPv6 in multiple ways, they can test once and apply the results to multiple programs.

We also added new specifications for components and capabilities which are new since the first profile. That includes things such as IoT and transition mechanisms—10 years ago the mechanisms were very much trying to make v6 work over v4. What we are seeing now, because we have run out of v4 addresses, is that service providers deploying networks want v6-only networks. And because of that we need transition mechanisms now to make v4 work over v6.

Last but not least, while the original profile mentioned applications, the updated USGv6 profile has a much better definition for promoting applications and services, including cloud services.

EECatalog: What are some of the ways the Interoperability Lab has updated its testing services to meet the requirements of the new profile?

TW: We set up testbeds here at the IOL that are on v6-only networks because we want to make sure that an application can work. Whether for managing an install or doing an update, what we don’t want is for someone to buy something, try to put it on a v6-only network, and only then realize they have to connect to a database or install software that is only available over v4.
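As a rough editorial illustration of that kind of check, the Python sketch below resolves and connects to a service over IPv6 only; the hostname is a placeholder, and this is not part of the UNH-IOL test tooling.

```python
import socket

def reachable_over_ipv6_only(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if `host` has an AAAA record and accepts a TCP connection over IPv6."""
    try:
        # Restrict resolution to IPv6; raises if the host has no AAAA record.
        infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return False
    for family, socktype, proto, _, sockaddr in infos:
        try:
            with socket.socket(family, socktype, proto) as s:
                s.settimeout(timeout)
                s.connect(sockaddr)
                return True
        except OSError:
            continue
    return False

# Placeholder hostname -- substitute the service under test.
print(reachable_over_ipv6_only("example.com"))
```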

EECatalog: Could you speak to how things have changed with the growth of cloud services?

TW: Government agencies want to be confident that the applications they purchase will continue to work when connecting to the cloud. So, while we are not testing the innards of the cloud—Amazon or Azure or Google Cloud Computing or Oracle or whatever it might be—we are testing whether an application, a web browser, or a phone here at the lab can connect to that service in a v6-only environment.

So, we set it up here at UNH-IOL as v6-only to make sure they can still get to the services. Or, if things don’t work, we can tell the application vendor, “Hey, we noticed that this feature doesn’t work on your device; you can’t do an update on the fly because you require a v4 connection.” Those are the kinds of things we need to test.

And in this case, we’re just accessing those publicly available services. We access them over a v6-only network to try to help promote external connections that support v6.

What happens in the cloud is like the kitchen we don’t see while we’re dining in the restaurant—it’s hidden away. But what we are trying to do is expose the external interface and support IPv6.

EECatalog: How does having the support of a testing infrastructure serve companies’ interests?

TW: Many companies have gone to Agile methodologies, which enable those companies to release products at a faster rate. At the same time, though, the companies need a testing infrastructure to support that more rapid release rate. So, we have made our software available to our customers. Most of them take it and roll it into their continuous integration, so that as they are developing, they are testing as they go. That’s a change for many of our customers from five years ago or so.

Previously, when we delivered the production code, we delivered it only at certain points, every so many weeks. Now, with most of our tools available to our customers, they are integrating them in-house and making sure that they meet these requirements as they go along, saving money by avoiding mistakes that could cause them problems when it comes to testing.

EECatalog: With the new IPv6 profile, will data centers find they have more latitude, flexibility, and security?

TW: Although we have seen slow adoption in the enterprise space for IPv6, in the data center we have seen quick adoption, and a lot of that has to do with the number of nodes they have.

It is a lot easier to address a large number of nodes when you have all those v6 addresses to use. And data center operators are quickly finding that they can easily segment sections of their data centers using IPv6 addresses. They can give multiple services different address prefixes, making it easier to decide the best possible path for packets. Being able to use IPv6 addresses opened up some doors for them in coming up with unique ways to communicate inside the data center.
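As a small illustration of prefix-based segmentation, the sketch below carves per-service /64 prefixes out of a single documentation /48 using Python’s standard ipaddress module; the prefix and service names are hypothetical.

```python
import ipaddress

# Hypothetical data center allocation (documentation prefix used for illustration).
site = ipaddress.ip_network("2001:db8:abcd::/48")

# Hand each service its own /64 so traffic can be steered and filtered by prefix.
services = ["storage", "compute", "analytics", "management"]
subnets = site.subnets(new_prefix=64)

for name, prefix in zip(services, subnets):
    print(f"{name:12s} -> {prefix}")
# storage      -> 2001:db8:abcd::/64
# compute      -> 2001:db8:abcd:1::/64
# ...
```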

EECatalog: Is the Industrial IoT finding it has more and more in common with the data center?

TW: We see a little bit of truth in that. Looking at the IP layer, we have seen that many technologies which in the past did not support IP have started to support connecting their IoT devices over an IP network.

From that perspective, they suddenly do get a lot closer. And anytime you are putting IP addresses on devices which can connect—whether it is to cloud services or a sensor—the more gateways you have, the more difficult it becomes. If you can get native IPv6 out to something from an IoT perspective, it can be very helpful.

Also, we have more than enough address space that can be globally routable, so connections can be end to end.

We have seen a little bit of movement in the Industrial IoT space, sometimes in areas such as IPv6 running over low-power networks. The Industrial IoT sector is a younger space for us and for v6 in general. They are starting to move over some low-powered technologies. They have run over Bluetooth or Z-Wave, for example, are now starting to move into the much lower-power NB-IoT, Sigfox, and LoRa, and are looking into making IPv6 work with those technologies.

From an IP perspective, once you begin getting serious about making things reachable from far away (and there are a lot of use cases for doing that) IPv6 naturally comes up because it’s a standardized way for things to talk to one another.

EECatalog: Anything to add before we wrap up?

TW: One of the advantages of being here at the University is that we do have a lot of young engineers. There are 100 graduate students that work here and 25 to 30 of them work on IPv6. Because of this program they are getting access to the latest embedded systems that might come in and the opportunity to do testing, whether it’s a camera, sensor, data center devices, whatever it might be—they are getting access to those and learning how to use them from a v6 perspective; how to set them up. This program helps the next generation of network engineers.

Device Lending Enables Composable Architecture

Monday, June 18th, 2018

Creating a composable infrastructure by leveraging the latest PCIe standard equates to something like…using pencils in space. Sometimes it makes sense to think up a simple solution that’s merely crafty rather than succumb to the hype.

Keeping data center infrastructure ahead of rapidly increasing demands can get expensive. Real-time analytics, 5G connectivity, IoT, and Artificial Intelligence (AI) drive growth, but all this innovation is also pushing data centers to improve at a similar pace. For data centers, innovation brings a massive influx of data, shifting technical requirements, and insatiable business demands. Data centers must flex while also meeting workload expectations, staying within an operating budget, maintaining efficiency, and leveraging innovation for a competitive edge within the data services market. Hyperscale data centers, driven by big data, IoT, and AI, carry massive networking loads supporting a considerable number of diverse external clients, and they illustrate the need for more efficient and flexible use of massive amounts of resources.

The volume and spectrum of cloud workloads add pressure that makes inflexibility a non-option. Traditional data architectures made up of servers, storage media, switches, and the like have been available in a large variety of form factors and sizes. The various pieces come together to serve a particular data workload in the data center. However, the workloads of today are changing rapidly, and traditional data center infrastructures cannot flex as fast as needed without adding many hours of labor. The complexity of traditional infrastructures has been mitigated somewhat by converged infrastructures, whereby the compute, storage, and networking fabric converge into a single solution to meet a particular workload. While converged infrastructures relieved hardware-centric challenges, they created another issue, as managing became workload-centric. Having started on the left, then swung far to the right, data center technology has found a sweet spot in composable architecture.

What is Composable Infrastructure?
Taking an application-centric approach, composable infrastructure is the answer to data center flexibility. Composable architecture is the next generation data center design—able to support rapidly changing system configurations, facilitate maximum sharing of both real and virtual infrastructure, and support new hardware technology. While similar to a converged infrastructure, composable infrastructure integrates compute, storage, and networking into a single platform by using a software-defined intelligence that maintains a pool of liquid resources. The application-centric composable infrastructure provides a new approach with which to provision and manage assets (both real and virtual). By using disaggregated programmable infrastructure as code, composable infrastructure seamlessly bridges software and hardware while eliminating management silos. The result is lower operating costs through “right-sizing,” and a higher level of flexibility.

Figure 1: Hyperscale data center networks support many different external clients. The growth of big data with IoT and machine learning/AI are pushing data center infrastructure. As of March 2017, the majority of hyperscale data centers were operated in the U.S.

Several major companies already provide what is also referred to as composable disaggregated infrastructure or composable architecture. For instance, the Intel® Rack Scale Design (RSD) architecture is a composable disaggregated architecture where “hardware resources, such as compute modules, nonvolatile memory modules, hard disk storage modules, FPGA modules, and networking modules, can be installed individually within a rack. These can be packaged as blades, sleds, chassis, drawers or larger physical configurations.”[i]  Resources include a high-bandwidth data connection through an Ethernet link to a dedicated management network, and at least one Ethernet or other high-speed fabric, such as PCI Express (PCIe). Often two networks, including the management network, are connected to top-of-rack (ToR) switches. Historically, ToR switching has been adopted for rack-at-a-time flexibility, through modular data centers. A rack can connect through ToR to other racks to create a management domain referred to as a pod. Even with composable architecture, pods can be linked together in any network topology that best suits the data center.

Figure 2: Composable architecture creates a pool of resources that can be deployed on demand in data centers.

A Lower-Cost Way to Create Composable Infrastructure
But there’s another way to compose resources on the fly to meet changing workloads, and without requiring an Intel RSD compatible rack. Comprehensive rack scale solutions are not always possible due to budget constraints. For machines with access to resources via PCIe, a lower-cost solution can extend composable architecture to existing data centers. A Norwegian company called Dolphin Interconnect Solutions has an elegant solution called device lending. Device lending is a simple software solution that allows one to reconfigure systems and reallocate resources within a PCIe fabric. Accelerators (GPUs and FPGAs), NVMe drives, network cards or other network fabric “can be added or removed without having to be physically installed in a particular system on the network.”[ii] Dolphin’s eXpressWare SmartIO software enables device lending, which creates seamless management of a pool of devices while maximizing resources. Device lending achieves both extremely low computing overhead and low latency without requiring any application-specific distribution mechanisms. With device lending software deployed in the PCIe fabric, a remote IO resource appears to applications as if it were local, putting a low-cost composable infrastructure within reach. Dolphin has been involved with industry standards (including PCI, ASI, and PCIe) since the 1990s.

Device lending works transparently across PCIe connected racks and between servers and modules with no modifications to drivers, operating systems, or software applications. Device lending enables temporary access to a PCIe device located remotely over a PCIe network. Furthermore, performance in accessing a remote device is similar to accessing a local device, since there is no software overhead in the data transfers themselves. Devices are temporarily borrowed by any system within the fabric, and for as long as necessary. When a device is no longer needed, it can be returned to local use or allocated to another system. One can control the Dolphin device lending software using a set of command line tools and options, which can be used directly or integrated into any other higher-level resource management system, such as one that might be used with Intel RSD or a different architecture. Dolphin’s device lending software does not require any particular boot order or power-on sequence. PCIe devices borrowed from a remote system can be used as if they were local devices until returned. Furthermore, Dolphin’s device lending strategy does not require explicit integration into a unified Application Programming Interface (API), since it works by taking advantage of the inherent properties of PCIe to accomplish a composable infrastructure. For more information about how device lending works with hot-adding (or hot-plugging), virtualization, non-transparent bridges (NTBs), IO Memory Management Units (IOMMUs), and DMA remapping, refer to the whitepaper, Device Lending in PCI Express Networks by Lars Kristiansen, et al. (PDF).
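To illustrate the borrow-and-return resource model described above (this is not Dolphin’s actual tooling or API), here is a purely hypothetical Python sketch of a device pool in which any host in a fabric can borrow an idle device and return it when done.

```python
from dataclasses import dataclass, field

@dataclass
class DevicePool:
    """Toy model of a fabric-wide pool of lendable PCIe devices (illustrative only)."""
    free: set = field(default_factory=set)
    borrowed: dict = field(default_factory=dict)  # device -> borrowing host

    def borrow(self, device: str, host: str) -> None:
        if device not in self.free:
            raise RuntimeError(f"{device} is not available")
        self.free.remove(device)
        self.borrowed[device] = host

    def release(self, device: str) -> None:
        host = self.borrowed.pop(device)
        self.free.add(device)
        print(f"{device} returned by {host}")

pool = DevicePool(free={"gpu0", "gpu1", "nvme0"})
pool.borrow("gpu1", host="server-a")   # server-a temporarily owns gpu1
pool.release("gpu1")                   # gpu1 goes back to the pool for reuse
```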

Device lending is an advanced application of the PCIe standard. PCIe is a stable, standard technology that is widely implemented. PCIe is also set to reach 128 GBps in full-duplex mode over 16 lanes with Gen 5. PCIe Gen 5 will be backward-compatible to prior generations and meet increasing performance needs. PCIe has latencies as low as 300ns end-to-end, dominates I/O bus technology, and has been prolific in the server, storage, mobile, and other markets. PCIe is also a significant player in connecting cloud-based devices that demand the highest performance in interconnects, such as GPU and FPGA accelerators for machine learning and AI. Hyperscale data centers number in the hundreds, with some of the world’s most massive run by Google, Facebook, Amazon, and China’s Baidu.

Figure 3: Device lending leverages the PCIe standard at the PCIe level of the stack, so integration with other APIs, special bootloaders, and power sequencing is not needed. NTB = non-transparent bridge. (Source: dolphinics.com)

Performance of Device Lending for Composable Architectures
Device lending leverages the PCIe standard to achieve low latency and high bandwidth. Performance accessing a remote device will be very similar to a local device, limited only by the speed of PCIe over longer distances, if any. Dolphin’s eXpressWare SmartIO device lending software does not require personnel to make changes to transparent devices or to the Linux kernel. Borrowed devices get inserted into the local device tree and the transparent device driver receives a “hot-plug” event signaling that a new resource is available. According to Dolphin, “If the transparent driver needs to re-map a DMA window, the re-map will be performed locally at the borrowing side, very similar to what happens in a virtualized system. The actual performance is system and device dependent.”[ii]

Figure 4: Comparison of bandwidth performance using device lending for a borrowed device (Borrowed) versus a physically local device (Local). (Source: Dolphin Interconnect Solutions)

The Cisco, Intel, and Hewlett Packard Enterprise (HPE) strategies for accomplishing composable architectures are not that different from device lending software in that they achieve the same goals. HPE promises “a hybrid IT engine for your digital transformation,” but it, too, is software-defined.[iii] Legend has it that it makes more sense to use a pencil in space than to design a pen that writes without gravity-fed ink. Urban legends aside, Dolphin’s software is definitely a clever way to use the PCIe standard to low-cost advantage for creating or complementing a composable architecture. The ability to break down fixed compute, storage, and networking fabric into a liquid pool of resources is more than desirable. Composing workloads on demand is making headlines in the IT world, and it doesn’t have to have a fancy title to get the job done. Add device lending to the buildup of excitement about composable architecture for meeting the next level of flexibility in data centers.

[i] “Intel® Rack Scale Design (Intel® RSD) Architecture White Paper.” Intel, www.intel.com/content/www/us/en/architecture-and-technology/rack-scale-design/rack-scale-design-architecture-white-paper.html.

[ii] Kristiansen, Lars, et al. “Device lending in PCI Express Networks.” 13 May 2016, pp. 1–6., www.dolphinics.com/download/WHITEPAPERS/PCI_Express_device_lending_may_2016.pdf.

[iii] https://www.hpe.com/us/en/solutions/infrastructure/composable-infrastructure.html, accessed June 4, 2018.


Lynnette Reese is Editor-in-Chief, Embedded Intel Solutions and Embedded Systems Engineering, and has been working in various roles as an electrical engineer for over two decades. She is interested in open source software and hardware, the maker movement, and in increasing the number of women working in STEM so she has a greater chance of talking about something other than football at the water cooler.

Wallflower No More: The Data Center Steps Forward to Dance Closer to Consumers

Thursday, May 10th, 2018

High-performance storage, machine learning, and AR/VR are stepping up their demands, and Intel Xeon based processing platforms are responding.

As mobile, cable, and telecom operators begin to virtualize their networks, they are also building robust data center technology. Now, as their networks become more open and commoditized, they are moving away from the long-lifecycle, proprietary hardware and software architectures of a single supplier to a more “pick-and-choose” environment allowing flexible and open integration. These trends are what make the Open Compute Project (OCP) open architecture platform increasingly attractive for both data center and edge computing architectures.

Figure 1: Data center power efficiency aims spurred the creation of the Open Compute Project (OCP). Now the arrival of the OCP Carrier Grade initiative has OCP architecture set to serve more than just the data center.

The OCP Telecom Group traces its DNA to a 2009 Facebook project aimed at designing the world’s most efficient data center. With around 300 million active users at the time, the social media giant established a data center open to any server, storage, or networking provider as long as they met the specifications. In 2011, along with Intel®, Rackspace, Goldman Sachs, and Andy Bechtolsheim, Facebook launched the Open Compute Project and incorporated the Open Compute Project Foundation.

New Territory
The telecom industry outside of the data center, however, has a slightly different technology infrastructure. In a switching office, for instance, the physical size and layout of the cabinets, cooling, power sources, cabling, and overall environment are different. This is one of the motivating market factors that led ADLINK and Radisys to collaborate and create the OCP carrier grade CG-OpenRack-19 and OpenSled Server specifications within the newly formed OCP Telecom Group. This initiative lays the foundation for OCP gear in a telecom environment (OCP Carrier Grade, or OCP-CG). In essence, OCP-CG is pushing OCP architecture outside the data center to include telecom, mobile, and customer premises equipment (CPE), effectively opening up this open-architecture environment beyond data center walls.

Another practical realm for OCP is edge computing technology or Multi-access Edge Computing (MEC). Simply put, it represents anything outside the data center and closer to the customer, or technology requiring lower latency and improved responsiveness that increases capacity at the edge. From a telecom perspective, OCP-CG for MEC will become a key technology driver for edge computing. With the exponential increase in devices on the network and requirements for low latency and high speed, ensuring a satisfactory user experience becomes paramount. OCP-CG for MEC solves these issues. It’s akin to having a data center at the cell site or in the switching office that enables dedicated servers for specific applications such as geolocation services, video transcoding and delivery, emergency services, or a real-time augmented reality experience of touring through a museum.

Multi-faceted Architecture
The Radisys OCP CG-OpenRack-19 specification details the physical nature of the frame, interconnects, and power that operators install in a central office, and it’s all plug-and-play. CG-OpenRack-19 is a scalable, carrier-grade, rack-level system that integrates high-performance compute, storage, and networking in a standard 19-inch rack. The CG-OpenRack-19-compliant DCEngine provides one of the widest choices of compute and storage sleds among currently available Network Functions Virtualization Infrastructure (NFVI) platforms. Service providers can choose any OCP-compliant sled provided by suppliers, and all of these sleds can seamlessly plug into the DCEngine rack.

Figure 2: Applications including augmented reality benefit from what Multi-access Edge Computing can offer.

OCP-CG OpenSled is ADLINK’s non-proprietary, OCP-ACCEPTED™ specification that operators can implement at the edge that fits into the CG-OpenRack-19 frame. With OpenSled, ADLINK is opening the architecture of the OCP-CG OpenRack infrastructure and defining the sled specification, enabling a multi-faceted architecture which can have plug-and-playable components within the sled. This open architecture enables any manufacturer to build different types of sleds compatible with the OCP CG-OpenRack-19 and OpenSled specifications.

The OCP-CG environment enables a much greater density of compute, storage, and switching in each frame than traditional server architectures. The ADLINK sled features the latest Intel® Xeon® Scalable Processors with Intel® C620 Series Chipsets. One single sled can accommodate up to eight CPUs with up to 24 cores per CPU. Therefore, one could have 192 virtualized cores in a 2U space with up to 288TB of storage per 2U (up to 12 storage sleds) in the OCP-CG OpenRack layout. In other words, in a 42U frame (38U usable space), one could have well over a thousand cores running thousands of different applications, all on one common architecture while exceeding existing throughput and meeting the latest open management requirements.
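To show the arithmetic behind those density figures, here is a small Python calculation; the sled count per frame is an assumption based on the 2U sled size and 38U of usable space.

```python
# Density arithmetic for the OCP-CG OpenRack layout described above.
cpus_per_sled = 8
cores_per_cpu = 24
cores_per_sled = cpus_per_sled * cores_per_cpu           # 192 cores in a 2U sled

usable_u = 38             # usable space in a 42U frame
sled_height_u = 2
sleds_per_frame = usable_u // sled_height_u               # 19 sleds (assumption)

print(cores_per_sled)                     # 192
print(sleds_per_frame * cores_per_sled)   # 3648 -> "well over a thousand cores"
```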

Figure 3: The ADLINK sled features the latest Intel® Xeon® Scalable Processors with Intel® C620 Series Chipsets.

Feature enhancements over previous versions of Intel® Xeon® Processor-based platforms include 1.5x memory bandwidth, integrated network/fabric, and optional integrated accelerators. Providers can also implement various accelerators, cryptology, enhanced I/O and even proprietary-based components for security, deep packet inspection, or routing functionality, adding unique value to their service offerings, such as next generation machine learning, artificial intelligence, and analytics.

New Revenue Stream
Another use case utilizing edge computing technologies is customer premises equipment (CPE). Providing the required processing and storage power of a server but situated adjacent to or at the customer’s premises, CPE brings the necessary power, functionality, processing speed, and a positive user experience to customers who do not want to invest in a large IT infrastructure. It also opens up a new revenue stream for operators who can now provide a box, internet access, security services, routing services, databases, and virtualized applications to enable a powerful yet cost effective solution.

ADLINK’s OCP-CG OpenSled specification is opening up the Open Compute Project to new market opportunities in telecom. OpenSled provides a common architecture, plug-and-play capabilities, and dense and next-generation compute horsepower, all in a tested environment that’s been around nearly 10 years, simplifying implementation and extending capabilities. Additionally, working as partners with other suppliers in an open architecture environment enables a broader ecosystem of partners who ultimately bring their unique products and services for mutual benefit.


Jeff Sharpe has more than 31 years of experience in the network and mobile communication industries, providing strategic direction for next generation products and platforms. As a senior strategic product manager at ADLINK, Sharpe is responsible for driving ADLINK’s global product direction in mobile networking, network functions virtualization (NFV) and software-defined networking (SDN).

Realizing Effective DPI and Cloud Computing Security

Wednesday, August 23rd, 2017

New solutions for integrating hardware and software that meet the IoT era’s security demands are arriving.

Deep packet inspection (DPI) technology offers network traffic user, application, and location information for fine-tuned traffic control. Employing software-defined networking (SDN) technology makes it possible to implement programmable network traffic, redirect traffic, and configure automatic security policy. With network function virtualization (NFV), security resource pools can be established for collaborative deployment among computing and security assets.

Figure 1: Key deployment points of unified DPI equipment. Sharing DPI equipment at key points of the telecom network, unified DPI reduces duplication of equipment deployment, allows DPI equipment and DPI applications to evolve independently of each other, and greatly enhances the ability to innovate DPI applications.

Slashing Inefficiency via Unified DPI

With the network traffic visualization DPI achieves, telecom operators can optimize their businesses for specific services, develop value-added services based on user identity, and handle security more effectively. Traditionally, however, DPI equipment has been tightly coupled with its corresponding DPI application. As the number of DPI applications increases, more and more DPI equipment must be deployed, leading to inefficiencies.

The unified DPI concept proposes to reduce inefficiency by enabling DPI equipment sharing. Unified DPI standardizes network traffic visualization. By coordinating DPI equipment deployment from a network-wide perspective, the DPI requirements are unified at key network locations, and DPI services are shared among multiple DPI applications through a suite of unified northbound APIs. This approach avoids the repeated deployment of DPI equipment (Figure 1).

The number of traffic types DPI equipment can identify, the foundation of all upper-layer DPI application innovations, affects the equipment’s value, as do its analytic capabilities, which depend on software design and, to a greater extent, on the computing performance of the DPI hardware platform. Performance must be sufficient to identify more types of traffic in real time.

Rich and Flexible Network IO
To avoid coupling between upper layer DPI applications and lower layer DPI equipment, to allow a range of deployment locations, and to be compatible with different DPI application hardware, unified DPI specifies the requirements for hardware interface support. Depending on the deployment location in the telecom network, DPI equipment needs to support ingress/egress network traffic with 1G/10G WAN/LAN, 2.5G/10G POS networks or 100GbE. For DPI equipment that needs to be concatenated in the network, the required network interfaces must be implemented natively on the DPI equipment. Implementation by using external splitters, switches, or protocol converters is disallowed, preventing the added devices from introducing extra failure risk. Therefore, in addition to sufficient ingress/egress ports and processing bandwidth, unified DPI equipment must also provide flexible network IO configuration, so the appropriate interface modules can be selected to adapt to different deployment locations.

Full Gamut of Innovation
Supporting various upper layer DPI applications calls for common protocol identification and statistical functions and strong flow control logic. To enable the fine-grained traffic control upper layer applications demand, the DPI equipment must support flow control based on the identified network traffic, including minimum bandwidth guarantee, maximum bandwidth limit, flow pass-through, and flow discarding.

Developing network security applications means DPI equipment must support white-list and black-list settings based on flow metadata such as source address, destination address, protocol number, source port, destination port, domain name, and user ID. If these functional requirements are implemented with traditional methods, functional implementation is not transparent, and available upgrade space is limited. SDN technology can be used to classify traffic flow based on multi-dimensional metadata and set different flow control strategies. Constructing DPI service logic based on SDN architecture enables the DPI analysis, statistics, control, multiplex, and security functions to be centralized and opens the possibility of a wide range of DPI application innovations.

High Availability and LAN Bypass
Unified DPI equipment must provide 99.999% high availability, with the main service components providing an appropriate hot-standby solution. Other system components, such as power supply units and fan trays, must provide appropriate redundancy and support online replacement when they fail. DPI equipment that must be concatenated in the communication link needs a LAN bypass function: when the DPI equipment suffers a power loss or a self-test failure, it can automatically switch the communication link from the main unit to the bypass unit, ensuring continuity of service.

Modular and Scalable
Data communications begin at the user and go sequentially over the access network, metro network, provincial network, and backbone network. Therefore, an overview of network traffic conditions can be determined by deploying DPI equipment at key points along the route which the data traffic travels. According to the unified DPI specification from China Mobile, these key deployment points can be summarized as: PS side, IDC export, provincial network export, inter-provincial network export, and inter-backbone network export. The required external network interfaces are different at these key points, as is network traffic size, so a single device can’t meet the different requirements at all deployment points. However, to reduce TCO and reserve upgrade capacity for future use, most DPI service providers would prefer a single scalable platform that can deal with most deployment scenarios. Adopting modular design and supporting linear expansion for the computing units solves this dilemma.

Cloud Computing Security

Traditional network security uses firewalls, unified threat management (UTM), intrusion prevention systems (IPS) or other network security products to block attacks at the network entrance boundary. This network boundary forms when networks that have different security levels are connected together. For example, the private network of an enterprise has a higher security level requirement than the public internet, and the connection point between private network and public internet forms the natural security boundary. Once, preventing an intrusion from outside the enterprise network simply required establishing reliable security protection measures at the boundary.

But with the advent of cloud computing, enterprises are moving more of their operations to the public cloud. As a result, the network boundary between private and public networks is blurring. Relying on the traditional concept of the network security boundary is no longer viable. Cloud computing needs a new generation of network security equipment.

Security Cornerstone
In network security, DPI has played an increasingly important role, and it is becoming the cornerstone of cloud computing security in the IoT era. More end-user applications are using HTTP/HTTPS protocols to exchange data. If traditional matching techniques based on TCP/IP 5-tuples are used, most of the traffic flows belonging to other applications will be misidentified as normal Web surfing. Whether for business optimization, content review, or security, it is necessary to 1) know the identity of the traffic flow using DPI analysis, and 2) perform the appropriate control strategy based on protection needs. Because the working mode of DPI analytics engines is generally concatenated, the average computational workload will increase linearly as the number of application types to be identified increases. To guarantee a smooth implementation of DPI analytics, DPI equipment must be equipped with adequate computing power based on the application types to be identified and the traffic size to be handled.
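To make the contrast between 5-tuple matching and payload-aware DPI concrete, here is a small, hypothetical Python sketch; the signatures and flows are invented for illustration and are not drawn from any real DPI engine.

```python
# A 5-tuple classifier sees only addresses, ports, and protocol, so anything on
# TCP port 443 looks like "web". A (very) simplified DPI pass also inspects the
# payload for application signatures.
PAYLOAD_SIGNATURES = {          # hypothetical byte patterns
    b"BitTorrent protocol": "bittorrent",
    b"\x16\x03\x01":         "tls-handshake",
}

def classify_5tuple(flow: dict) -> str:
    if flow["proto"] == "tcp" and flow["dst_port"] in (80, 443):
        return "web"
    return "unknown"

def classify_dpi(flow: dict) -> str:
    for signature, app in PAYLOAD_SIGNATURES.items():
        if signature in flow["payload"]:
            return app
    return classify_5tuple(flow)

flow = {"proto": "tcp", "dst_port": 443,
        "payload": b"\x13BitTorrent protocol..."}
print(classify_5tuple(flow))   # web          (misidentified)
print(classify_dpi(flow))      # bittorrent   (payload-aware)
```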

Making Virtualization Viable
Cloud computing’s “resource allocation on demand” requires that computing, storage, and network resources be taken from resource pools as needed. Resource virtualization is the fundamental technology for achieving this goal. In cloud computing, a large number of virtual machines are continually created, migrated, and destroyed. As business requirements change, the resources needed for computing, storage, and networks will vary. Therefore, network security resources that serve the cloud must also be dynamic. Network security equipment for cloud computing must also support virtualization like other cloud computing resources. NFV technology can be used to shape network security equipment into a resource pool, enabling dynamic security allocation based on business changes. In order to better support NFV, network security equipment must abandon proprietary computing technology and be built based on open computing technology, allowing it to support virtualization more easily in order to achieve “security on demand.”

Boundaries Blurring
Traditional network security equipment is deployed at the physical security boundary, monitoring traffic flow that enters and leaves the security zone and then performing the required network security tasks. Cloud computing’s multi-tenant environment and frequent virtual machine migrations mean that a security zone with a physical boundary does not exist. Even the logical boundaries of the security zone will experience constant changes as virtual machines migrate, resulting in a significant challenge for cloud computing security.

As a virtual machine migration occurs, the network security policies configured for that virtual machine must also be adjusted. If the migration does not go beyond the protective scope of the current network security appliance, then the security policies related to that virtual machine can be adjusted. However, if the virtual machine migration extends beyond the scope of the current network security appliance, then the security policies related to that virtual machine must be migrated to the new network security appliance. The configurations of traditional network security appliances are often static, localized, and need human intervention, making it difficult to implement dynamic, globalized, and automated re-configuration on traditional network security platforms. The industry has been trying to introduce SDN technology to overcome the challenges brought by a blurred security boundary in a virtualized network environment. SDN can direct targeted network traffic flow to a virtualized network security appliance through flow diverting and aggregating. And when a virtual machine migrates, SDN, with its global perspective and flexible programmability, can help to achieve the automated migration of relevant network security policies.

Isolation Issues Tackled
In order to solve the security isolation issues of a multi-tenant virtualization environment, tunneling technologies such as Virtual eXtensible Local Area Network (VXLAN) can be used extensively. VXLAN is an encapsulation technology that repackages layer 2 packets in a layer 3 protocol and can help solve the limitations of MAC table size and VLAN ID space found in top-of-rack (TOR) switches. Because network security appliances are often concatenated in the communication link, they must support VXLAN to handle the network traffic flowing through them. Removing and adding VXLAN headers consumes significant CPU resources and noticeably lowers network security equipment overall performance, but using an extra hardware acceleration unit to assist in VXLAN processing helps.
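To illustrate the encapsulation overhead being discussed, the following sketch builds a VXLAN-encapsulated frame with the open-source Scapy library (assuming Scapy is installed); the addresses and VNI are placeholders.

```python
from scapy.all import Ether, IP, UDP, raw
from scapy.layers.vxlan import VXLAN

# Inner layer-2 frame from a tenant virtual machine (placeholder addresses).
inner = Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02") / \
        IP(src="10.0.0.1", dst="10.0.0.2") / UDP(sport=1234, dport=5678)

# Outer IP/UDP header plus VXLAN header (UDP port 4789, 24-bit VNI).
outer = Ether() / IP(src="192.0.2.1", dst="192.0.2.2") / \
        UDP(sport=49152, dport=4789) / VXLAN(vni=42) / inner

print(len(raw(inner)), len(raw(outer)))  # encapsulation adds roughly 50 bytes of headers
```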

Encryption technology is being used more extensively in cloud computing to enhance security. As with VXLAN, processing encrypted network traffic will also consume significant CPU resources and can be better managed with an additional hardware unit to assist in the encryption and decryption process. When the CPU is relieved of these resource intensive tasks, it can better focus on the key task of performing DPI more effectively and efficiently.

Figure 2: The CSA-5100/5200 1U/2U rackmount network security platform addresses the needs of small and medium enterprises (SME) with a design targeting low- and mid-level security application scenarios. Through four IO expansion slots, up to 32 10G SFP+ ports can be provided on this platform.

Figure 3: The ADLINK CSA-7200 is designed to be a next generation network security appliance, featuring high-performance dual Intel® Xeon® processors E5 v3 and up to 64x 10G SFP+ ports through eight Network Interface Modules (NIMs).

Figure 4: The ADLINK CSA-7400 is a high-performance, high-density computing platform supporting four dual-processor Intel® Xeon® processor E5 v3 compute nodes interconnected by dual redundant switch modules. The CSA-7400 ensures uninterrupted service delivery through hot-swappable compute nodes and switch modules. It is ideally suited for building next generation high-performance firewalls and virtualized telecom elements.

Network Security Platforms

DPI equipment computing requirements are increasing, and SDN support is expected on DPI equipment to enhance its functionality and adaptability. In addition, network security equipment is standardizing: to leverage NFV, SDN, and big data technologies from open platforms, it is shifting from traditional proprietary computing platforms to open, COTS-based computing platforms. The Cyber Security Appliance (CSA) series of products from ADLINK Technology is designed and built to meet these trends and needs. By integrating the special requirements of next-generation network security appliances on open computing platforms, ADLINK’s CSA products can assist network security providers in constructing services that meet DPI and cloud computing security requirements in the IoT era (Table 1).

Table 1: Rising to the needs of the IoT era, ADLINK CSA products feature high-density design to solve high-capacity and high-bandwidth network security demands. CSA solutions also introduce the latest computing and communication technologies from the open computing domain, ensuring a rich set of new features that allow users to easily meet network security challenges.

Figure 5: CSA Application Ready Intelligent Platforms (ARiP) from ADLINK Technology

Conclusion

ADLINK Technology has made great efforts to fully understand the requirements of DPI and cloud computing security in the IoT era, and introduced the CSA series of computing platforms to meet these requirements. By integrating high-performance DPI processing capability and support for NFV, SDN and hardware acceleration units, the CSA series forms a solid foundation for developing the next generation of network security equipment. CSA platforms are designed with a modular concept in order to achieve maximum intercompatibility of components across the product line and reduce TCO. ADLINK has also developed and integrated the requisite software components and open source middleware, reducing the development effort required from customers. The growth of applications in cloud computing is accelerating as we enter the IoT era, bringing with it an increasing number of network security threats. ADLINK Technology is committed to providing high-performance, high-availability ARiP platforms that meet the requirements of network security for industrial IoT, and will continue to analyze new trends and challenges in the network security industry, listen to customer feedback, and provide the best network security platforms built on open computing technologies.


Qizhi Zhang is System Architect for the Network & Communication Business Center, ADLINK Technology, where he is responsible for product definition, architectural design, and technical consulting for the enterprise’s Network & Communication platforms. Dr. Zhang received his Ph.D. in Automation from Shanghai Jiao Tong University. With more than 10 years’ experience in the telecom industry, he has solid expertise in system management, high-availability systems, and network security devices.

Networking Technology Steels Itself for Emerging Markets

Thursday, August 3rd, 2017

The Bluetooth Special Interest Group (SIG) announced Bluetooth mesh technology in July. It targets industrial automation and smart buildings by extending the reach of a network, while maintaining the low energy consumption of Bluetooth technology.

The distinctive feature of Bluetooth mesh networking is that it enables many-to-many (m:m) device communication. Rather than a star topology, where one central device communicates with others in a point-to-point network (or piconet), the mesh topology allows a device to communicate with every other device in the mesh.

Bluetooth mesh networking is designed for building automation applications, such as lighting, heating, cooling, and security. It can be used to expand sensor networks and beacon deployments, and for asset tracking—locating and tracking goods in real time across an area.

The Bluetooth mesh system is based on the Bluetooth Low Energy stack. Bluetooth Low Energy is the Wireless Personal Area Network (WPAN) technology used by smartphones, tablets, and computers in smart homes, healthcare, and entertainment.

On top of the Bluetooth Low Energy stack is a bearer layer that defines how mesh Protocol Data Units (PDUs) are handled: either by advertising and scanning to send and receive PDUs (the advertising bearer), or by communicating indirectly with nodes on a mesh network that support the advertising bearer (the Generic Attribute Profile, or GATT, bearer).

Next is the network layer. This layer processes messages from the bearer layer and defines the network interface over which messages will be sent as well as the message address type and format. It can support multiple bearers.

The lower transport layer takes PDUs from the upper transport layer, where encryption, decryption, and authentication of application data take place. The lower transport layer may perform segmentation and reassembly if required.

Above the upper transport layer is the access layer, which defines the format of application data, defines and controls encryption and decryption performed in the upper transport layer, and verifies the data received from the upper transport layer before forwarding the data.

The foundation model layer implements the configuration and management of a mesh network. Finally, the model layer implements behaviors, messages, and states (e.g. on/off) to define the functionality of a particular element within a node. For example, a Light Emitting Diode (LED) luminaire may have three LED lights. Each light is viewed as one element.
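To make the layer responsibilities above concrete, the short Python sketch below walks one received message up through stand-in versions of the transport, access, and model layers. All function names, and the single-byte XOR standing in for real decryption, are invented purely for illustration and do not correspond to the Bluetooth SIG stack or any vendor SDK.

def lower_transport_reassemble(segments):
    # Lower transport layer: reassemble segments into one upper transport PDU.
    return b"".join(segments)

def upper_transport_decrypt(pdu, key_byte):
    # Upper transport layer: decrypt and authenticate application data.
    # (Real mesh uses AES-CCM; a single-byte XOR stands in purely for illustration.)
    return bytes(b ^ key_byte for b in pdu)

def access_layer_verify(data):
    # Access layer: check the application data format before forwarding it.
    if not data:
        raise ValueError("empty access payload")
    return data

def model_layer_handle(data, element_state):
    # Model layer: map the message to a behavior and state, e.g. on/off.
    element_state["on"] = bool(data[0])
    return element_state

# One message addressed to a single element (e.g. one LED in a luminaire).
segments = [bytes([0x01 ^ 0x5A])]
state = model_layer_handle(
    access_layer_verify(
        upper_transport_decrypt(lower_transport_reassemble(segments), 0x5A)),
    {"on": False})
print(state)   # {'on': True}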

Figure 1: Bluetooth mesh networking is particularly suitable for factory automation. (Source: Bluetooth SIG)

Network Range

Bluetooth SIG has opted for a managed flood message transmission system. Other mesh networks (for example, ZigBee) use a routed framework, where devices communicate along a defined path. Others, like Thread, use a flooding technique, where every device on the network communicates with every other device. Managed flooding controls which devices can pass messages on: all devices use Bluetooth Low Energy, but only mains-powered devices relay messages, saving battery power.
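As a rough illustration of that relay rule, the toy Python sketch below lets every node receive a message but re-broadcasts it only from mains-powered relay nodes, with a hop limit (TTL) and a message cache bounding the flood. The class and field names are invented for illustration and do not reflect the actual Bluetooth mesh PDU format.

from dataclasses import dataclass

@dataclass
class MeshNode:
    name: str
    mains_powered: bool
    relay_enabled: bool = True

    def handle(self, payload, ttl, cache):
        """Return the (payload, ttl) messages this node re-broadcasts, if any."""
        if payload in cache:
            return []                       # message cache: drop duplicates
        cache.add(payload)
        if ttl <= 1:
            return []                       # hop limit reached, stop relaying
        if self.mains_powered and self.relay_enabled:
            return [(payload, ttl - 1)]     # relay with decremented TTL
        return []                           # battery-powered node: receive only

sensor = MeshNode("battery-sensor", mains_powered=False)
light = MeshNode("mains-powered-light", mains_powered=True)
print(sensor.handle(b"lights on", ttl=3, cache=set()))   # []  (does not relay)
print(light.handle(b"lights on", ttl=3, cache=set()))    # [(b'lights on', 2)]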

The mesh’s multi-hop communication method extends the range of connections and allows for network scalability, while reducing power consumption due to shorter transmission distances between the nodes.

Emerging Markets

ABI Research predicts that nearly one third of the 48 billion Internet-enabled devices installed by 2021 will include Bluetooth, and the technology is expected to find new applications.

“While smartphones and audio accessories remain Bluetooth’s largest markets, the technology is becoming more attractive to low-power IoT applications,” says Andrew Zignani, Industry Analyst at ABI Research. “Though Bluetooth still faces strong competition from the other standards, mesh networking will enable new opportunities for the technology in the smart home, building automation, and emerging IoT markets in which robustness, low latency, scalability, minimal power consumption, and strong security are all additional critical requirements.”

Three characteristics are particularly important for an industrial-grade network: reliability, scalability and security.

Reliability and Scalability

The peer-to-peer communication, where nodes communicate directly with each other, makes Bluetooth mesh connectivity reliable. The structure eliminates the need for a centralized hub or gateway, or routing nodes, so there are no single points of failure. Additionally, its managed flood message relay architecture is inherently multi-path and self-healing.

The Bluetooth mesh specification allows up to 32,000 devices, or nodes, per network, which is sufficient for high-density lighting or sensor environments and lets a network scale as demands increase.

Building automation uses multicast messaging, where messages are sent to various destinations simultaneously. Bluetooth mesh’s managed flood message relay architecture and the publish/subscribe (send/process) procedure for group messaging are designed to handle the volume of multicast messaging traffic typically found in building automation environments.
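As a simplified picture of that publish/subscribe pattern, the Python sketch below registers two luminaires against one group address, so a single published message reaches both. The addresses and names are invented for illustration and are not taken from the Bluetooth mesh specification.

from collections import defaultdict

subscriptions = defaultdict(list)          # group address -> subscriber callbacks

def subscribe(group_addr, callback):
    subscriptions[group_addr].append(callback)

def publish(group_addr, message):
    # One published message reaches every subscriber of the group address.
    for callback in subscriptions[group_addr]:
        callback(message)

KITCHEN_LIGHTS = 0xC001                    # illustrative group address

subscribe(KITCHEN_LIGHTS, lambda m: print("luminaire 1:", m))
subscribe(KITCHEN_LIGHTS, lambda m: print("luminaire 2:", m))
publish(KITCHEN_LIGHTS, "on")              # both luminaires process the message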

Figure 2: Bluetooth’s low power consumption and accessibility are expected to appeal to mesh developers. (Source: Bluetooth SIG)

Security

Large wireless device networks present security challenges. Bluetooth mesh technology addresses these with several architectural features. First, devices are added to a network using 256-bit elliptic curve key exchange and out-of-band authentication. Within this provisioning process, security measures include an exchange of public keys between the provisioner and the device to be added, followed by authentication of the device and the issue of a security key, or NetKey, to add the device to the network.
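The provisioning idea described above can be sketched with generic tooling. The Python example below, which relies on the third-party cryptography package (pip install cryptography), shows the basic flow: each side generates a P-256 key pair, both derive the same shared secret via ECDH, and the provisioner then holds a session key under which it could deliver a 128-bit NetKey. The key derivation and delivery steps are simplified for illustration and do not reproduce the exact Bluetooth mesh provisioning procedure.

from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
import os

provisioner_key = ec.generate_private_key(ec.SECP256R1())
device_key = ec.generate_private_key(ec.SECP256R1())

# Each side combines its private key with the peer's public key (ECDH).
shared_1 = provisioner_key.exchange(ec.ECDH(), device_key.public_key())
shared_2 = device_key.exchange(ec.ECDH(), provisioner_key.public_key())
assert shared_1 == shared_2

# Derive a session key from the shared secret; the provisioner would use it
# to protect delivery of the 128-bit NetKey to the new device (delivery omitted).
session_key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                   info=b"provisioning-demo").derive(shared_1)
net_key = os.urandom(16)
print(len(session_key), len(net_key))      # 16 16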

In operation, all mesh communication is encrypted and authenticated with 128-bit keys. Encryption and authentication are also applied at both the network layer and the application layer, and content is secured with a separate application key for end-to-end security.
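A rough sketch of that two-level protection, again using the third-party cryptography package, is shown below: the payload is first sealed with an application key for end-to-end security, and the result is then sealed with a network-level key, so a relay can check the outer layer without being able to read the inner payload. Key and nonce handling are simplified here and do not follow the exact derivations in the Bluetooth mesh specification.

from cryptography.hazmat.primitives.ciphers.aead import AESCCM
import os

app_key = AESCCM(os.urandom(16), tag_length=8)      # 128-bit application key
net_key = AESCCM(os.urandom(16), tag_length=8)      # 128-bit network-level key
app_nonce, net_nonce = os.urandom(13), os.urandom(13)

payload = b"light on"
upper_pdu = app_key.encrypt(app_nonce, payload, None)        # end-to-end protection
network_pdu = net_key.encrypt(net_nonce, upper_pdu, None)    # hop-level protection

# A relay could authenticate the outer (network) layer without the application
# key; only the final destination can decrypt the inner payload.
recovered = app_key.decrypt(
    app_nonce, net_key.decrypt(net_nonce, network_pdu, None), None)
assert recovered == payload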

Each mesh packet is obfuscated so that identifying content is removed from the message. This prevents tracking and is particularly useful when devices move within range of other networks.

Figure 3: Silicon companies such as Toshiba Electronics have already announced Bluetooth mesh support in their Bluetooth products. (Source: Toshiba Electronics Europe)

Design Support

Silicon companies are already providing support for the Bluetooth mesh standard. Toshiba Electronics Europe has announced Bluetooth mesh support for its Bluetooth Low Energy products.

Heiner Tendyck, System LSI Marketing Manager, Toshiba Electronics Europe, believes Bluetooth mesh will introduce the technology to new areas. “This standards-based approach means that new untapped markets, such as industrial and commercial, can now leverage ever-present Bluetooth cell phones or tablets to easily control and monitor their systems,” he says.

Silicon Labs has also announced that its Blue Gecko Bluetooth Wireless Starter Kit provides Bluetooth mesh connectivity as well as Bluetooth 5 capability. The company can also provide a Bluetooth mesh stack for Android, allowing smartphones to configure and control nodes on the mesh.


Caroline Hayes has been a journalist covering the electronics sector for more than 20 years. She has worked on several European titles, reporting on a variety of industries, including communications, broadcast and automotive.

Increased Interoperability: Q&A with Don Clarke, ETSI

Thursday, July 14th, 2016

NFV’s objectives, why Telecommunications infrastructures require rigorous specifications, and more.

The European Telecommunications Standards Institute (ETSI) develops Information and Communications Technologies standards deployed worldwide for fixed, mobile, radio, broadcast and Internet. This role naturally gives the standards organization a key part in the development of Network Functions Virtualization (NFV) technologies. Don Clarke, chairman of the Network Operator Council group in the ETSI NFV Industry Specification Group, recently responded to e-mailed questions from EECatalog about the first NFV Plugtests Event, organized by the ETSI Center for Testing and Interoperability and to be held from January 23 to February 3, 2017, as well as other data center and virtualization topics. Edited excerpts follow.

EECatalog: What should our readers be aware of regarding the NFV Plugtests Event being held at the beginning of next year [January 23 to February 3, 2017]?

Don Clarke, ETSI: ETSI Plugtests are an essential source of feedback for our standardization activities, allowing us to validate and improve the quality of the specifications as they are being developed.

The first NFV Plugtest focuses on testing relevant ETSI Network Functions Virtualization (NFV) capabilities over a number of combinations of NFV infrastructure, Management and Orchestration (MANO) solutions and Virtual Network Functions (VNFs) provided by the industry and Open Source projects. This focus allows ETSI to evaluate and increase the interoperability among vendor and open source implementations.

Besides being a source of essential feedback for ETSI NFV, the NFV Plugtest is also a great opportunity for the industry and open source projects to learn how the rest of the NFV ecosystem uses their implementations.

EECatalog: Many open source communities have emerged to drive NFV implementation. Are standards still needed?

Clarke, ETSI: Open source is an excellent way for the common elements of an implementation to be created collaboratively, and for vendors to focus their individual commercial efforts on capabilities built on top of open source. But Telecommunications infrastructures require rigorous specifications to ensure interoperability and to support legacy services that are deployed at massive scale. Telecommunications networks must also meet numerous regulatory requirements including support for critical national infrastructures. Current open source governance models do not provide these guarantees. Ideally there is a model where Standards Development Organizations (SDOs) developing specifications work more quickly and hand-in-hand with open source communities.

ETSI NFV has led the way in converging and specifying operator requirements (38 operators are involved) and the ETSI NFV work is widely referenced by the industry including open source communities. ETSI consequently established the Open Source MANO (OSM) group in February 2016 to deliver an open source NFV MANO stack using best-in-class open source workflows and tools to ensure rapid development and delivery. The activity is closely aligned with the evolution of ETSI NFV and provides a regularly updated reference implementation of NFV MANO. OSM enables an ecosystem of NFV solution vendors to rapidly and cost-effectively deliver solutions to their users.

EECatalog: How would you say embedded virtualization differs from that used for data centers and enterprise IT networks?

Clarke, ETSI: I prefer to use the term Network Functions Virtualization (NFV). The objective of NFV is to use IT and Cloud techniques, including virtualization and management and orchestration, but to identify and specify additional requirements that will enable these technologies to be used to create “carrier grade” network solutions inside cloud environments. In this context, “carrier grade” means the ability to assure deterministic bandwidth, jitter and latency, and to enable configurations that can deliver the appropriate level of reliability and availability for the services being delivered via the virtualized infrastructure.

In addition, network operators require cloud infrastructures to be “distributed,” that is, extending beyond the data center. For example, instances of cloud infrastructure could be physically located in the access network, and even in the end user premises. Such virtualized infrastructures need to be managed end-to-end, which requires new standards and new tools.

EECatalog: What are some examples you have seen of embedded developers putting virtualization to innovative use?

Clarke, ETSI: We are seeing the early application of NFV to enable high-performance software implementations of network functionality previously only possible using hardware devices for such tasks as routers, firewalls and security monitoring. Implementing these functions purely in software enables automation and faster deployment, including customer self-provisioning.

EECatalog: How do you expect virtualization where the need for real-time response is also involved to look five years from now?

Clarke, ETSI: Achieving automation is key. There is still a lot of work to do to enable network operators to fully automate network design, provisioning and operations. Currently virtualized networks need a lot of manual intervention to design and deploy. This is why early NFV deployments are often in conventional data center environments where existing tools can be used. A key area of focus is to converge information modeling approaches across the industry to minimize complexity and simplify tooling and skill requirements. A collaborative multi-SDO effort is underway to do that.

EECatalog: What technology developments are you keeping especially close watch on?

Clarke, ETSI: The emergence of container technology as an alternative to virtual machines is of high interest. Containers are more resource efficient and faster to deploy than virtual machines, but there is more dependency on the host operating system version, which needs to be taken into account to ensure interoperability.

Today, commercial VNFs are often based on hardware appliances that have been re-purposed to run in a cloud environment. Such re-purposing can be inefficient in use of resources, so we are interested to see VNFs designed from the ground up to be more resource efficient and more optimized for automated deployment and operations.

