Posts Tagged ‘top-story’

Realizing Effective DPI and Cloud Computing Security

Wednesday, August 23rd, 2017

New solutions for integrating hardware and software that meet the IoT era’s security demands are arriving.

Deep packet inspection (DPI) technology offers network traffic user, application, and location information for fine-tuned traffic control. Employing software-defined networking (SDN) technology makes it possible to implement programmable network traffic, redirect traffic, and configure automatic security policy. With network function virtualization (NFV), security resource pools can be established for collaborative deployment among computing and security assets.

Figure 1: Key deployment points of unified DPI equipment. Sharing DPI equipment at key points of the telecom network, unified DPI reduces duplication of equipment deployment, allows DPI equipment and DPI applications to evolve independently of each other, and greatly enhances the ability to innovate DPI applications.

Slashing Inefficiency via Unified DPI

With the network traffic visualization DPI achieves, telecom operators can optimize their businesses for specific services, develop value-added services based on user identity, and handle security more effectively. Traditionally, however, DPI equipment has been tightly coupled with its corresponding DPI application. As the number of DPI applications increases, more and more DPI equipment must be deployed, leading to inefficiencies.

The unified DPI concept proposes to reduce this inefficiency by enabling DPI equipment sharing. Unified DPI standardizes network traffic visualization. By coordinating DPI equipment deployment from a network-wide perspective, DPI requirements are unified at key network locations, and DPI services are shared among multiple DPI applications through a suite of unified northbound APIs. This approach avoids the repeated deployment of DPI equipment (Figure 1).
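The sharing idea can be sketched in a few lines. This is a minimal, hypothetical illustration (the class and callback names are invented, not part of any unified DPI specification): one shared DPI engine publishes each inspection result through a northbound API, and multiple DPI applications consume it instead of each deploying its own probe.

```python
class UnifiedDPIService:
    """Shared DPI engine; applications subscribe via a northbound API
    rather than deploying their own inspection equipment."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        # Northbound API: any DPI application registers a callback here.
        self._subscribers.append(callback)

    def publish(self, flow_record):
        # One inspection result is fanned out to every subscribed application.
        for cb in self._subscribers:
            cb(flow_record)

# Two hypothetical applications sharing the same DPI service.
billing_events, security_events = [], []
dpi = UnifiedDPIService()
dpi.subscribe(lambda rec: billing_events.append(rec["app"]))
dpi.subscribe(lambda rec: security_events.append(rec["app"]))

dpi.publish({"app": "video-streaming", "bytes": 10_000_000})
```

The point of the pattern is that adding a third application means one more `subscribe` call, not one more rack of DPI hardware.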

The number of traffic types DPI equipment can identify, the foundation of all upper-layer DPI application innovation, affects the equipment’s value. So do its analytic capabilities, which depend on software design and, to an even greater extent, on the computing performance of the DPI hardware platform, which must be powerful enough to identify a growing range of traffic types in real time.

Rich and Flexible Network IO
To avoid coupling between upper layer DPI applications and lower layer DPI equipment, to allow a range of deployment locations, and to be compatible with different DPI application hardware, unified DPI specifies the requirements for hardware interface support. Depending on the deployment location in the telecom network, DPI equipment needs to support ingress/egress network traffic with 1G/10G WAN/LAN, 2.5G/10G POS networks, or 100GbE. For DPI equipment that needs to be concatenated in the network, the required network interfaces must be implemented natively on the DPI equipment. Implementation by using external splitters, switches, or protocol converters is disallowed, preventing the added devices from introducing extra failure risk. Therefore, in addition to sufficient ingress/egress ports and processing bandwidth, unified DPI equipment must also provide flexible network IO configuration, so the appropriate interface modules can be selected to adapt to different deployment locations.

Full Gamut of Innovation
Supporting various upper layer DPI applications calls for common protocol identification and statistical functions and strong flow control logic. To enable the fine-grained traffic control upper layer applications demand, the DPI equipment must support flow control based on the identified network traffic, including minimum bandwidth guarantee, maximum bandwidth limit, flow pass-through, and flow discarding.
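The flow-control actions named above can be sketched with a token bucket, a standard way to enforce a maximum bandwidth limit. The policy table and application names below are hypothetical, and a minimum-bandwidth guarantee is omitted because it requires a full packet scheduler rather than a per-flow check.

```python
import time

class TokenBucket:
    """Simple token bucket enforcing a maximum bandwidth limit."""
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

# Hypothetical policy table mapping identified traffic to an action:
# pass-through, maximum-bandwidth limit, or discard.
POLICIES = {
    "voip":    {"action": "pass"},                                    # flow pass-through
    "p2p":     {"action": "limit",
                "bucket": TokenBucket(1_000_000, 100_000)},           # max bandwidth limit
    "malware": {"action": "drop"},                                    # flow discarding
}

def handle_packet(app, nbytes):
    policy = POLICIES.get(app, {"action": "pass"})
    if policy["action"] == "drop":
        return "dropped"
    if policy["action"] == "limit" and not policy["bucket"].allow(nbytes):
        return "dropped"
    return "forwarded"
```

In real DPI equipment the `app` label would come from the identification engine described earlier; here it is passed in directly.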

Developing network security applications means DPI equipment must support white-list and black-list settings based on flow metadata such as source address, destination address, protocol number, source port, destination port, domain name, and user ID. If these functional requirements are implemented with traditional methods, functional implementation is not transparent, and available upgrade space is limited. SDN technology can be used to classify traffic flow based on multi-dimensional metadata and set different flow control strategies. Constructing DPI service logic based on SDN architecture enables the DPI analysis, statistics, control, multiplex, and security functions to be centralized and opens the possibility of a wide range of DPI application innovations.
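The white-list/black-list matching over multi-dimensional flow metadata can be sketched as follows. This is an illustrative classifier in the spirit of an SDN match rule, using exactly the fields the text names; the rule values and precedence (white list wins) are assumptions for the example, not part of any specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowMeta:
    """The multi-dimensional metadata named in the text."""
    src: str
    dst: str
    proto: int
    sport: int
    dport: int
    domain: str = ""
    user_id: str = ""

def matches(rule, flow):
    # A rule is a dict of field -> required value; absent fields are wildcards,
    # as in an SDN flow-table match.
    return all(getattr(flow, field) == value for field, value in rule.items())

BLACKLIST = [{"domain": "malware.example"},
             {"dst": "203.0.113.9", "dport": 4444}]
WHITELIST = [{"user_id": "admin"}]

def decide(flow):
    # Assumed precedence for the example: white list overrides black list.
    if any(matches(r, flow) for r in WHITELIST):
        return "allow"
    if any(matches(r, flow) for r in BLACKLIST):
        return "deny"
    return "allow"
```

An SDN controller would compile such rules into flow-table entries pushed to switches, which is what makes the control logic transparent and upgradable compared with the traditional hard-coded approach.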

High Availability and LAN Bypass
Unified DPI equipment must provide 99.999% high availability, with the main service components providing an appropriate hot-standby solution. Other system components, such as the power supply unit and fan tray, must provide appropriate redundancy and support online replacement in the event of failure. DPI equipment that needs to be concatenated in the communication link needs a LAN bypass function. When DPI equipment suffers a power loss or a self-test failure, it can switch the communication link from the main unit to the bypass unit automatically, ensuring continuity of service.

Modular and Scalable
Data communications begin at the user and go sequentially over the access network, metro network, provincial network, and backbone network. Therefore, an overview of network traffic conditions can be determined by deploying DPI equipment at key points along the route which the data traffic travels. According to the unified DPI specification from China Mobile, these key deployment points can be summarized as: PS side, IDC export, provincial network export, inter-provincial network export, and inter-backbone network export. The required external network interfaces are different at these key points, as is network traffic size, so a single device can’t meet the different requirements at all deployment points. However, to reduce TCO and reserve upgrade capacity for future use, most DPI service providers would prefer a single scalable platform that can deal with most deployment scenarios. Adopting modular design and supporting linear expansion for the computing units solves this dilemma.

Cloud Computing Security

Traditional network security uses firewalls, unified threat management (UTM), intrusion prevention systems (IPS) or other network security products to block attacks at the network entrance boundary. This network boundary forms when networks that have different security levels are connected together. For example, the private network of an enterprise has a higher security level requirement than the public internet, and the connection point between private network and public internet forms the natural security boundary. Once, preventing an intrusion from outside the enterprise network simply required establishing reliable security protection measures at the boundary.

But with the advent of cloud computing, enterprises are moving more of their operations to the public cloud. As a result, the network boundary between private and public networks is blurring. Relying on the traditional concept of the network security boundary is no longer viable. Cloud computing needs a new generation of network security equipment.

Security Cornerstone
In network security, DPI has played an increasingly important role, and it is becoming the cornerstone of cloud computing security in the IoT era. More end-user applications are using HTTP/HTTPS protocols to exchange data. If traditional matching techniques based on TCP/IP 5-tuples are used, most of the traffic flows belonging to other applications will be misidentified as normal Web surfing. Whether for business optimization, content review, or security, it is necessary to 1) know the identity of the traffic flow using DPI analysis, and 2) perform the appropriate control strategy based on protection needs. Because the working mode of DPI analytics engines is generally concatenated, the average computational workload will increase linearly as the number of application types to be identified increases. To guarantee a smooth implementation of DPI analytics, DPI equipment must be equipped with adequate computing power based on the application types to be identified and the traffic size to be handled.
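The difference between 5-tuple matching and payload inspection can be made concrete with a toy example. The sketch below (hostnames and the host-to-application mapping are invented for illustration) shows why port-based classification labels everything on port 80 as "web," while even shallow payload inspection of the HTTP Host header recovers the real application identity.

```python
def classify_by_port(dport):
    # Traditional 5-tuple matching: everything on port 80/443 looks like "web".
    return {80: "web", 443: "web"}.get(dport, "unknown")

def classify_by_payload(dport, payload: bytes):
    # DPI: look inside the application payload to recover the real identity.
    if payload.startswith(b"GET ") or payload.startswith(b"POST "):
        for line in payload.split(b"\r\n"):
            if line.lower().startswith(b"host:"):
                host = line.split(b":", 1)[1].strip().decode()
                # Hypothetical hostname-to-application mapping.
                if "video" in host:
                    return "video-streaming"
                return "web"
    # Fall back to the port heuristic when the payload is unrecognized.
    return classify_by_port(dport)

pkt = b"GET /stream HTTP/1.1\r\nHost: video.example.com\r\n\r\n"
```

A production DPI engine performs this kind of matching against thousands of protocol signatures at line rate, which is why the text stresses computing power: each additional application type adds to the per-packet workload.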

Making Virtualization Viable
Cloud computing’s “resource allocation on demand” requires that computing, storage, and network resources be taken from resource pools as needed. Resource virtualization is the fundamental technology for achieving this goal. In cloud computing, a large number of virtual machines are continually created, migrated, and destroyed. As business requirements change, the resources needed for computing, storage, and networks will vary. Therefore, network security resources that serve the cloud must also be dynamic. Network security equipment for cloud computing must also support virtualization like other cloud computing resources. NFV technology can be used to shape network security equipment into a resource pool, enabling dynamic security allocation based on business changes. In order to better support NFV, network security equipment must abandon proprietary computing technology and be built based on open computing technology, allowing it to support virtualization more easily in order to achieve “security on demand.”

Boundaries Blurring
Traditional network security equipment is deployed at the physical security boundary, monitoring traffic flow that enters and leaves the security zone and then performing the required network security tasks. Cloud computing’s multi-tenant environment and frequent virtual machine migrations mean that a security zone with a physical boundary does not exist. Even the logical boundaries of the security zone will experience constant changes as virtual machines migrate, resulting in a significant challenge for cloud computing security.

As a virtual machine migration occurs, the network security policies configured for that virtual machine must also be adjusted. If the migration does not go beyond the protective scope of the current network security appliance, then the security policies related to that virtual machine can be adjusted. However, if the virtual machine migration extends beyond the scope of the current network security appliance, then the security policies related to that virtual machine must be migrated to the new network security appliance. The configurations of traditional network security appliances are often static, localized, and need human intervention, making it difficult to implement dynamic, globalized, and automated re-configuration on traditional network security platforms. The industry has been trying to introduce SDN technology to overcome the challenges brought by a blurred security boundary in a virtualized network environment. SDN can direct targeted network traffic flow to a virtualized network security appliance through flow diverting and aggregating. And when a virtual machine migrates, SDN, with its global perspective and flexible programmability, can help to achieve the automated migration of relevant network security policies.
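The policy-migration step can be sketched as controller bookkeeping. This is a hypothetical, heavily simplified model (the class, appliance names, and policy format are invented): an SDN-style controller with a global view moves a VM's security policies from the old appliance to the new one when a migration crosses appliance scopes.

```python
class Controller:
    """Toy global controller tracking which appliance protects which VM."""
    def __init__(self):
        self.appliance_policies = {}   # appliance -> {vm: policy}
        self.vm_location = {}          # vm -> appliance

    def attach(self, vm, appliance, policy):
        self.vm_location[vm] = appliance
        self.appliance_policies.setdefault(appliance, {})[vm] = policy

    def migrate(self, vm, new_appliance):
        old = self.vm_location[vm]
        # Within one appliance's scope only a local adjustment is needed;
        # crossing scopes means the policy itself must move with the VM.
        policy = self.appliance_policies[old].pop(vm)
        self.appliance_policies.setdefault(new_appliance, {})[vm] = policy
        self.vm_location[vm] = new_appliance

ctl = Controller()
ctl.attach("vm1", "fw-east", {"allow": ["tcp/443"]})
ctl.migrate("vm1", "fw-west")
```

The automation the text calls for is exactly this `migrate` step happening without human intervention, driven by the controller's global view of VM placement.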

Isolation Issues Tackled
In order to solve the security isolation issues of a multi-tenant virtualization environment, tunneling technologies such as Virtual eXtensible Local Area Network (VXLAN) can be used extensively. VXLAN is an encapsulation technology that repackages layer 2 packets in a layer 3 protocol and can help solve the limitations of MAC table size and VLAN ID space found in top-of-rack (TOR) switches. Because network security appliances are often concatenated in the communication link, they must support VXLAN to handle the network traffic flowing through them. Removing and adding VXLAN headers consumes significant CPU resources and noticeably lowers network security equipment overall performance, but using an extra hardware acceleration unit to assist in VXLAN processing helps.
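The encapsulation work that burdens the CPU is mechanical but constant. The sketch below builds and strips the 8-byte VXLAN header defined in RFC 7348 (a flags byte with the VNI-valid "I" bit, three reserved bytes, a 24-bit VXLAN Network Identifier, and a final reserved byte); the outer UDP/IP headers that would wrap this in a real packet are omitted.

```python
VXLAN_FLAG_VNI = 0x08  # "I" flag: the VNI field is valid

def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    # 8-byte VXLAN header: flags(1) + reserved(3) + VNI(3) + reserved(1).
    header = bytes([VXLAN_FLAG_VNI, 0, 0, 0]) + vni.to_bytes(3, "big") + b"\x00"
    return header + inner_frame

def vxlan_decap(packet: bytes):
    assert packet[0] & VXLAN_FLAG_VNI, "VNI flag not set"
    vni = int.from_bytes(packet[4:7], "big")
    return vni, packet[8:]          # (tenant segment ID, inner L2 frame)

frame = b"\xaa" * 14                # stand-in for an inner Ethernet frame
packet = vxlan_encap(5000, frame)
```

Doing this (plus the outer UDP/IP encapsulation and checksums) for every packet at 10G line rate is what makes hardware offload attractive, as the text notes.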

Encryption technology is being used more extensively in cloud computing to enhance security. As with VXLAN, processing encrypted network traffic will also consume significant CPU resources and can be better managed with an additional hardware unit to assist in the encryption and decryption process. When the CPU is relieved of these resource intensive tasks, it can better focus on the key task of performing DPI more effectively and efficiently.

Figure 2: The CSA-5100/5200 1U/2U rackmount network security platform addresses the needs of small and medium enterprises (SME) with a design targeting low- and mid-level security application scenarios. Through four IO expansion slots, up to 32 10G SFP+ ports can be provided on this platform.

Figure 3: The ADLINK CSA-7200 is designed to be a next generation network security appliance, featuring high-performance dual Intel® Xeon® processor E5 v3 and up to 64x 10G SFP+ ports through eight Network Interface Modules (NIMs).

Figure 4: The ADLINK CSA-7400 is a high-performance, high-density computing platform supporting four dual-processor Intel® Xeon® processor E5 v3 compute nodes interconnected by dual redundant switch modules. The CSA-7400 ensures uninterrupted service delivery through hot-swappable compute nodes and switch modules. It is ideally suited for building next generation high-performance firewalls and virtualized telecom elements.

Network Security Platforms

DPI equipment computing requirements are increasing, while SDN is expected to be supported on DPI equipment to enhance its functionality and adaptability. In addition, network security equipment is standardizing. To take advantage of NFV, SDN, and big data technologies from the open computing domain, network security equipment is shifting from traditional proprietary computing platforms to open, COTS-based computing platforms. The Cyber Security Appliance (CSA) series of products from ADLINK Technology is designed and built to meet these trends and needs. By integrating the special requirements of next-generation network security appliances on open computing platforms, ADLINK’s CSA products can assist network security providers in constructing services that meet DPI and cloud computing security requirements in the IoT era (Table 1).

Table 1: Rising to the needs of the IoT era, ADLINK CSA products feature high-density design to solve high-capacity and high-bandwidth network security demands. CSA solutions also introduce the latest computing and communication technologies from the open computing domain, ensuring a rich set of new features that allow users to easily meet network security challenges.

Figure 5: CSA Application Ready Intelligent Platforms (ARiP) from ADLINK Technology

Conclusion

ADLINK Technology has made great efforts to fully understand the requirements of DPI and cloud computing security in the IoT era, and introduced the CSA series of computing platforms to meet these requirements. By integrating high-performance DPI processing capability and support for NFV, SDN, and hardware acceleration units, the CSA series forms a solid foundation for developing the next generation of network security equipment. CSA platforms are designed with a modular concept in order to achieve maximum intercompatibility of components across the product line and reduce TCO. ADLINK has also developed and integrated the requisite software components and open source middleware, reducing the development effort required by customers. The growth of applications in cloud computing is accelerating as we enter the IoT era, bringing with it an increasing number of network security threats. ADLINK Technology is committed to providing high-performance, high-availability ARiP platforms that meet the network security requirements of industrial IoT. The company will continue to analyze new trends and challenges in the network security industry, listen to customer feedback, and provide the best network security platforms built on open computing technologies.


Qizhi Zhang is System Architect for the Network & Communication Business Center, ADLINK Technology, where he is responsible for product definition, architectural design, and technical consulting for the enterprise’s Network & Communication platforms. Dr. Zhang received his Ph.D. in Automation from Shanghai Jiao Tong University. With more than 10 years’ working experience in the telecom industry, he has solid expertise in system management, high availability systems, and network security devices.

Networking Technology Steels Itself for Emerging Markets

Thursday, August 3rd, 2017

The Bluetooth Special Interest Group (SIG) announced Bluetooth mesh technology in July. It targets industrial automation and smart buildings by extending the reach of a network, while maintaining the low energy consumption of Bluetooth technology.

The distinctive feature of Bluetooth mesh networking is that it enables many-to-many (m:m) device communication. Rather than a star topology, where one central device communicates with others in a point-to-point network (or piconet), the mesh topology allows a device to communicate with every other device in the mesh.

Bluetooth mesh networking is designed for building automation applications, such as lighting, heating, cooling, and security. It can be used to extend sensor networks and beacon deployments, and for asset tracking: locating and tracking goods in real time across an area.

The Bluetooth mesh system is based on the Bluetooth Low Energy stack. Bluetooth Low Energy is the Wireless Personal Area Network (WPAN) technology used by smartphones, tablets, and computers in smart homes, healthcare, and entertainment.

On top of the Bluetooth Low Energy stack is a bearer layer that defines how mesh Protocol Data Units (PDUs) are handled: either directly, by advertising and scanning to send and receive PDUs (the advertising bearer), or indirectly, by communicating through nodes that support the advertising bearer; this is the Generic Attribute Profile (GATT) bearer.

Next is the network layer. This layer processes messages from the bearer layer and defines the network interface over which messages will be sent as well as the message address type and format. It can support multiple bearers.

The lower transport layer takes PDUs from the upper transport layer, where encryption, decryption, and authentication of application data take place. The lower transport layer may perform segmentation and reassembly if required.

Above the upper transport layer is the access layer, which defines the format of application data, defines and controls encryption and decryption performed in the upper transport layer, and verifies the data received from the upper transport layer before forwarding the data.

The foundation model layer implements the configuration and management of a mesh network. Finally, the model layer implements behaviors, messages, and states (e.g. on/off) to define the functionality of a particular element within a node. For example, a Light Emitting Diode (LED) luminaire may have three LED lights. Each light is viewed as one element.
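The element/model relationship can be sketched in a few lines. This is an illustrative model only (the class names are invented, not the SIG's API): a node contains elements, each element carries a generic on/off model, and messages drive the model's state, matching the three-light luminaire example above.

```python
class GenericOnOffModel:
    """Toy stand-in for a mesh model holding an on/off state."""
    def __init__(self):
        self.on = False

    def handle(self, message):
        # The model layer defines which messages change which states.
        if message == "SET_ON":
            self.on = True
        elif message == "SET_OFF":
            self.on = False

class Node:
    """A luminaire with several LED lights: each light is one element."""
    def __init__(self, n_elements):
        self.elements = [GenericOnOffModel() for _ in range(n_elements)]

    def deliver(self, element_index, message):
        # Messages are addressed to a specific element within the node.
        self.elements[element_index].handle(message)

luminaire = Node(3)          # three LED lights, three elements
luminaire.deliver(1, "SET_ON")
```

Because each element is individually addressable, a single luminaire can expose each of its lights to the network as a separately controllable unit.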

Figure 1: Bluetooth mesh networking is particularly suitable for factory automation. (Source: Bluetooth SIG)

Network Range

Bluetooth SIG has opted for a managed flood message transmission system. Other mesh networks (for example, ZigBee) use a routed mesh framework, where devices communicate along a defined path. Others, like Thread, use a flooding technique, where every device on the network relays messages to every other device. Managed flooding controls which devices can pass messages: all devices use Bluetooth Low Energy, but only mains-powered devices relay messages, saving battery power.
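A toy simulation makes the managed-flooding idea concrete. This is a heavily simplified, hypothetical model (real Bluetooth mesh relaying involves a TTL field, a message cache, and relay-feature configuration; the topology and names here are invented): every node receives messages, but only mains-powered nodes relay them, and a hop budget bounds propagation.

```python
class MeshNode:
    def __init__(self, name, mains_powered, neighbors=None):
        self.name = name
        self.mains_powered = mains_powered
        self.neighbors = neighbors or []
        self.seen = set()            # message cache: drop duplicates

def flood(origin, msg_id, ttl):
    delivered = []
    queue = [(origin, ttl, True)]    # (node, hops left, is this the originator?)
    while queue:
        node, ttl_left, is_origin = queue.pop(0)
        if msg_id in node.seen:
            continue                 # already handled this message
        node.seen.add(msg_id)
        delivered.append(node.name)
        # The originator always transmits; after that, only mains-powered
        # relays forward, and only while the hop budget lasts.
        if (is_origin or node.mains_powered) and ttl_left > 0:
            for nb in node.neighbors:
                queue.append((nb, ttl_left - 1, False))
    return delivered

# Battery sensor -> mains relay -> {battery lamp, mains relay2 -> far lamp}
far   = MeshNode("far-lamp", False)
r2    = MeshNode("relay2", True, [far])
lamp  = MeshNode("lamp", False)
relay = MeshNode("relay", True, [lamp, r2])
start = MeshNode("sensor", False, [relay])
```

Note that the battery-powered lamp receives the message but never relays it; the mains-powered relays carry the message the rest of the way, which is exactly the battery-saving trade-off managed flooding makes.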

The mesh’s multi-hop communication method extends the range of connections and allows for network scalability, while reducing power consumption due to shorter transmission distances between the nodes.

Emerging Markets

ABI Research predicts nearly one third of the 48 billion Internet-enabled devices installed by 2021 will include Bluetooth, opening the door to new applications.

“While smartphones and audio accessories remain Bluetooth’s largest markets, the technology is becoming more attractive to low-power IoT applications,” says Andrew Zignani, Industry Analyst at ABI Research. “Though Bluetooth still faces strong competition from the other standards, mesh networking will enable new opportunities for the technology in the smart home, building automation, and emerging IoT markets in which robustness, low latency, scalability, minimal power consumption, and strong security are all additional critical requirements.”

Three characteristics are particularly important for an industrial-grade network: reliability, scalability and security.

Reliability and Scalability

The peer-to-peer communication, where nodes communicate directly with each other, makes Bluetooth mesh connectivity reliable. The structure eliminates the need for a centralized hub or gateway, or routing nodes, so there are no single points of failure. Additionally, its managed flood message relay architecture is inherently multi-path and self-healing.

The Bluetooth mesh is specified to allow up to 32,000 devices, or nodes, per network, sufficient for high density lighting or sensor environments to scale in size as network demands increase.

Building automation uses multicast messaging, where messages are sent to various destinations simultaneously. Bluetooth mesh’s managed flood message relay architecture and the publish/subscribe (send/process) procedure for group messaging are designed to handle the volume of multicast messaging traffic typically found in building automation environments.
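The publish/subscribe pattern described above can be sketched directly. The group address and names below are illustrative (0xC000 is simply used here as a sample group address): lights subscribe to a group, and a switch publishes one message that every subscriber processes.

```python
class Network:
    """Toy publish/subscribe fabric for group (multicast) messaging."""
    def __init__(self):
        self.subscriptions = {}      # group address -> list of handlers

    def subscribe(self, group, handler):
        self.subscriptions.setdefault(group, []).append(handler)

    def publish(self, group, message):
        # One publish reaches every node subscribed to the group address.
        for handler in self.subscriptions.get(group, []):
            handler(message)

net = Network()
lobby_lights = []
for i in range(3):
    # Each light registers interest in the "lobby" group address.
    net.subscribe(0xC000, lambda msg, i=i: lobby_lights.append((i, msg)))

net.publish(0xC000, "ON")            # a wall switch publishes once
```

The sender never needs to know how many lights exist or their addresses; adding a light to the lobby is just another subscription, which is what makes the scheme scale to building-sized installations.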

Figure 2: Bluetooth’s low power consumption and accessibility are expected to appeal to mesh developers. (Source: Bluetooth SIG)

Security

Large wireless device networks present security challenges. These are addressed by Bluetooth mesh technology with several architectural features. First, devices are added to a network using a 256-bit elliptic curve and out-of-band authentication. Within this provisioning process, security measures include an exchange of public keys between the provisioner and the device to be added, followed by authentication of the device and the issue of a security key, or NetKey, to add the device.

In operation, all mesh communication is encrypted and authenticated with 128-bit keys. Encryption and authentication are implemented at both the network layer and the application layer. Content is secured with a separate application key for end-to-end security.
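The two-layer keying can be illustrated with a toy cipher. To be clear about the assumptions: Bluetooth mesh actually uses AES-CCM, while the hash-based XOR keystream below is NOT secure and merely stands in for it so the layering is visible in stdlib-only code. An application key (AppKey) encrypts the content end-to-end, and a network key (NetKey) wraps it for transport, so a relay holding only the NetKey can process the network layer without reading the content.

```python
import hashlib

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy XOR keystream derived from SHA-256 (insecure; stands in for AES-CCM).
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

net_key, app_key = b"N" * 16, b"A" * 16    # 128-bit keys
nonce = b"\x00" * 13

payload = b"SET_ON"
inner = keystream_xor(app_key, nonce, payload)    # application-layer encryption
packet = keystream_xor(net_key, nonce, inner)     # network-layer encryption

# A relay holding only the NetKey can strip the network layer for forwarding,
# but without the AppKey the application content stays opaque to it.
relayed = keystream_xor(net_key, nonce, packet)
```

The design point the layering buys is exactly this separation of trust: relays participate in the network without becoming parties to the conversation.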

Each mesh packet is obfuscated so that identifying content is removed from the message. This prevents tracking and is particularly useful when devices move within range of other networks.

Figure 3: Silicon companies such as Toshiba Electronics have already announced Bluetooth mesh support in their Bluetooth products. (Source: Toshiba Electronics Europe)

Design Support

Silicon companies are already providing support for the Bluetooth mesh standard. Toshiba Electronics Europe has announced support for its Bluetooth Low Energy products.

Heiner Tendyck, System LSI Marketing Manager, Toshiba Electronics Europe, believes Bluetooth mesh will introduce the technology to new areas. “This standards-based approach means that new untapped markets, such as industrial and commercial, can now leverage ever-present Bluetooth cell phones or tablets to easily control and monitor their systems,” he says.

Silicon Labs has also announced that its Blue Gecko Bluetooth Wireless Starter Kit provides Bluetooth mesh connectivity as well as Bluetooth 5 capability. The company can also provide a Bluetooth mesh stack for Android, allowing smartphones to configure and control nodes on the mesh.


Caroline Hayes has been a journalist covering the electronics sector for more than 20 years. She has worked on several European titles, reporting on a variety of industries, including communications, broadcast and automotive.

Increased Interoperability: Q&A with Don Clarke, ETSI

Thursday, July 14th, 2016

NFV’s objectives, why Telecommunications infrastructures require rigorous specifications, and more.

The European Telecommunications Standards Institute (ETSI) develops Information and Communications Technologies standards deployed worldwide for fixed, mobile, radio, broadcast, and Internet. This naturally gives the standards organization a key role in the development of Network Functions Virtualization (NFV) technologies. Don Clarke, chairman of the Network Operator Council group in the ETSI NFV Industry Specification Group, recently responded to e-mailed questions from EECatalog about the first NFV Plugtests Event organized by the ETSI Center for Testing and Interoperability, which will be held from January 23 to February 3, 2017, and other data center and virtualization topics. Edited excerpts follow.

EECatalog: What should our readers be aware of regarding the NFV Plugtests Event being held at the beginning of next year [January 23 to February 3, 2017]?

Don Clarke, ETSI: ETSI Plugtests are an essential source of feedback to our standardization activities, which allow us to validate and improve the quality of the specifications as they are being developed.

The first NFV Plugtest focuses on testing relevant ETSI Network Functions Virtualization (NFV) capabilities over a number of combinations of NFV infrastructure, Management and Orchestration (MANO) solutions and Virtual Network Functions (VNFs) provided by the industry and Open Source projects. This focus allows ETSI to evaluate and increase the interoperability among vendor and open source implementations.

Besides being a source of essential feedback for ETSI NFV, the NFV Plugtest is also a great opportunity for the industry and open source projects to learn how the rest of the NFV ecosystem uses their implementations.

EECatalog: Many open source communities have emerged to drive NFV implementation. Are standards still needed?

Clarke, ETSI: Open source is an excellent way for the common elements of an implementation to be created collaboratively, and for vendors to focus their individual commercial efforts on capabilities built on top of open source. But Telecommunications infrastructures require rigorous specifications to ensure interoperability and to support legacy services that are deployed at massive scale. Telecommunications networks must also meet numerous regulatory requirements including support for critical national infrastructures. Current open source governance models do not provide these guarantees. Ideally there is a model where Standards Development Organizations (SDOs) developing specifications work more quickly and hand-in-hand with open source communities.

ETSI NFV has led the way in converging and specifying operator requirements (38 operators are involved) and the ETSI NFV work is widely referenced by the industry including open source communities. ETSI consequently established the Open Source MANO (OSM) group in February 2016 to deliver an open source NFV MANO stack using best-in-class open source workflows and tools to ensure rapid development and delivery. The activity is closely aligned with the evolution of ETSI NFV and provides a regularly updated reference implementation of NFV MANO. OSM enables an ecosystem of NFV solution vendors to rapidly and cost-effectively deliver solutions to their users.

EECatalog: How would you say embedded virtualization differs from that used for data centers and enterprise IT networks?

Clarke, ETSI: I prefer to use the term Network Functions Virtualization (NFV). The objective of NFV is to use IT and Cloud techniques, including virtualization and management and orchestration, but to identify and specify additional requirements that will enable these technologies to be used to create “carrier grade” network solutions inside cloud environments. In this context, “carrier grade” means the ability to assure deterministic bandwidth, jitter and latency, and to enable configurations that can deliver the appropriate level of reliability and availability for the services being delivered via the virtualized infrastructure.

In addition, network operators require cloud infrastructures to be “distributed,” that is, extending beyond the data center. For example, instances of cloud infrastructure could be physically located in the access network, and even in the end user premises. Such virtualized infrastructures need to be managed end-to-end, which requires new standards and new tools.

EECatalog: What are some examples you have seen of embedded developers putting virtualization to innovative use?

Clarke, ETSI: We are seeing the early application of NFV to enable high-performance software implementations of network functionality previously only possible using hardware devices for such tasks as routers, firewalls and security monitoring. Implementing these functions purely in software enables automation and faster deployment, including customer self-provisioning.

EECatalog: How do you expect virtualization where the need for real-time response is also involved to look five years from now?

Clarke, ETSI: Achieving automation is key. There is still a lot of work to do to enable network operators to fully automate network design, provisioning and operations. Currently virtualized networks need a lot of manual intervention to design and deploy. This is why early NFV deployments are often in conventional data center environments where existing tools can be used. A key area of focus is to converge information modeling approaches across the industry to minimize complexity and simplify tooling and skill requirements. A collaborative multi-SDO effort is underway to do that.

EECatalog: What technology developments are you keeping especially close watch on?

Clarke, ETSI: The emergence of container technology as an alternative to virtual machines is of high interest. Containers are more resource efficient and faster to deploy than virtual machines, but there is more dependency on the host operating system version, which needs to be taken into account to ensure interoperability.

Today, commercial VNFs are often based on hardware appliances that have been re-purposed to run in a cloud environment. Such re-purposing can be inefficient in use of resources, so we are interested to see VNFs designed from the ground up to be more resource efficient and more optimized for automated deployment and operations.