Posts Tagged ‘top-story’

Ethernet Entrees as Gargantuan Appetite for Data Centers Continues

Monday, December 10th, 2018

Ethernet technology is stretching to meet the growing demands that data centers face in serving a data-centric society.

Data centers are reaching phenomenal sizes. A 3.5 million square foot data center in Las Vegas, Nevada “provides fiber optic-speed information retrieval to over 50 million customers in the USA,” according to Gigabit Magazine.[i] Another 6.3 million square foot facility in Langfang, China covers an area similar in size to the U.S. Pentagon. The Kolos Data Center in Norway is projected to be around 6.46 million square feet and run entirely from renewable energy sources.[ii]

The era of personal computers gave way to the age of the internet, then mobile devices. Now, a massive amount of data is flowing through data centers, driven by growth in mobile applications and content, the ubiquitous use of smartphones in everyday life, Artificial Intelligence (AI), and the dream of autonomous vehicles. In this new data economy, wireless 5G is a hot topic, but data centers are just as critical as “the last mile” that 5G is expected to deliver.

IHS Markit’s latest report tracking the Ethernet switching market forecasts a low- to mid-single-digit increase in demand from 2019 to 2022. The 10 Gigabit Ethernet (10GbE), 25GbE, 100GbE, 200GbE, and 400GbE segments are projected to show exceptional growth over the next few years. As of June 2018, the Ethernet switching market had grown 12 percent over the previous year, mainly due to increasing data demands and the infrastructure upgrades they trigger. “The market enjoyed its strongest growth in seven years in 2017, and the momentum continued into 2018, fueled by continuing data center upgrades and expansion, as well as growing demand for campus gear due to improving economic conditions. The transition to 25/100GE architectures in the data center is in full swing, driving strong gains in 25GE, 100GE and white box shipments,” per IHS Markit.[iii]

Figure 1: Total Ethernet revenue per switch by quarter. (Source: IHS Markit)

Although 100GbE networks are in wide use today, the next big step for data centers in accommodating an ever-increasing need for bandwidth is physical layer connectivity at an Ethernet speed of 400Gb/s. The 200Gb/s and 400Gb/s Ethernet (IEEE P802.3bs) standard was approved and fully ratified by the IEEE-SA standards board in late 2017 after about four and a half years of effort.[iv] Whereas 200GbE doubles the speed of 100GbE, 400GbE quadruples it, and it also provides a denser configuration and a proportionate cost saving per bit thanks to the higher throughput.

Standards are created by industry players working together toward the best common solution, although some Multi-Source Agreements (MSAs) have been struck among consortia of companies. These companies, anticipating market demand, couldn’t wait for the official standards and launched proprietary transceivers ahead of the ratified specification. Although the MSA companies, for the most part, followed IEEE specifications in the design and construction of the transceivers they manufactured, they are not part of the IEEE. Some of the work done by MSA companies did make it into the IEEE specifications, however.[v]

Manufacturers are aggressively pursuing 400GbE transceivers, optical modules, and Ethernet switches. 100GbE technology alone spans several architectures. For instance, “for 100G connectivity, there are currently 18 optical physical media dependent (PMD) architectures—either standardized by IEEE or under various multi-source agreements amongst a select group of companies,” according to Nexans’ 400 Gb/s Landscape: Potential Interfaces and Architectures. The report also states that the trend toward proprietary MSA devices is continuing at 400GbE, with various architectures that are not fully compliant with the IEEE standard.[vi] The demand for faster, bigger pipes for the era of big data arose sooner than the official IEEE 400GbE standard, and the market has responded.

400 GbE Means Multiplexing
To achieve speeds higher than 25Gb/s for LAN applications, optical signals must be multiplexed. The IEEE standard also calls for clock and data recovery (CDR) retiming in its criteria for physical media access. Multiplexing lower-speed signals to achieve higher throughput is accomplished with Wavelength Division Multiplexing (WDM), with parallel optics that aggregate additional fibers, or with a combination of both. 400GbE is much more complicated, and it challenges data center network managers to decide between multimode and singlemode fiber options, among other items. Again, per Nexans, “For 40G and 100G (except for 100GBASE-SR10) data rates, the signaling rate and its encoding scheme was the same for all its respective PMD architectures, and the only variables were number of WDM channels and fibers per connector. [WDM transmits multiple wavelengths on a single fiber.] At 400GbE, the landscape consists of at least four different signaling rates with both NRZ and PAM-4 encoding schemes; up to eight lanes of WDM; and up to 32 fibers per connector…in some cases, a combination of more than one multiplexing scheme is required to achieve 400Gb/s.”

A 400GbE transceiver can have as many as eight channels, using the PAM-4 encoding scheme to stretch transmissions to 50Gb/s per channel. Several 400GbE interface standards for transceivers have been released, including 400GBASE-SR16, 400GBASE-FR8, 400GBASE-LR8, and 400GBASE-DR4. Organized by distance covered (a summary sketch follows this list):

100 m: The 400GBASE-SR16 interface standard has 32 fibers (16 transmitting and 16 receiving) carrying 25Gb/s each over a range of at least 100 m. But at 32 fibers, supporting 400Gb/s gets unwieldy.

500 m: Extending for at least 500 meters, the 400GBASE-DR4 standard runs over single-mode fiber. It delivers a 4x100Gb/s PMD constructed using PAM-4 modulation on four parallel fibers running 100Gb/s in each direction. Operating at 400Gb/s is thus possible with just eight fibers, a fiber count already in common use for 40G and 100G links.

2 km: The 400GBASE-FR8 specification employs 8x50G PAM-4 WDM for distances of at least two kilometers with a single-mode fiber in each direction. The eight output signals are multiplexed onto one fiber transmitting at 400Gb/s; the receiver then de-multiplexes the signal back into eight optical channels at 50Gb/s each.

Figure 2: Block diagram for a 400Gb/s singlemode optical PMD architecture. (Source: nexansdatacenter.com)

10 km: The 400GBASE-LR8 specification is like -FR8 above but reaches a longer distance of at least 10 km using single-mode fiber.
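Summarizing the interfaces above, the short Python sketch below tabulates lane count, per-lane rate, and fiber count using the figures cited in this article, and checks that each option multiplies out to 400Gb/s. It is assembled from this article's numbers, not transcribed from the IEEE 802.3bs text.

```python
# 400GbE PMD summary built from the figures cited above (not from the
# IEEE 802.3bs standard itself). "fibers" is the total count, both directions.

pmds = {
    # name:          (lanes, Gb/s per lane, fibers, reach)
    "400GBASE-SR16": (16, 25, 32, "100 m"),
    "400GBASE-DR4":  (4, 100, 8,  "500 m"),
    "400GBASE-FR8":  (8, 50,  2,  "2 km"),
    "400GBASE-LR8":  (8, 50,  2,  "10 km"),
}

for name, (lanes, rate, fibers, reach) in pmds.items():
    assert lanes * rate == 400, name   # every option multiplies out to 400Gb/s
    print(f"{name}: {lanes} x {rate}G lanes over {fibers} fibers, reach {reach}")
```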


Lynnette Reese is Editor-in-Chief, Embedded Intel Solutions and Embedded Systems Engineering, and has been working in various roles as an electrical engineer for over two decades. She is interested in open source software and hardware, the maker movement, and in increasing the number of women working in STEM so she has a greater chance of talking about something other than football at the water cooler.

[i] https://www.gigabitmagazine.com/top10/top-10-biggest-data-centres-world

[ii] http://kolos.com/

[iii] https://technology.ihs.com/550593

[iv] http://www.ieee802.org/3/400GSG/email/msg01519.html

[v] https://www.cablinginstall.com/articles/print/volume-26/issue-6/features/technology/multimode-and-singlemode-cabling-options-for-data-centers.html

[vi] https://nexansdatacenter.com/wp-content/uploads/2018/06/400G-Ethernet-Landscape-WP_FINAL.pdf

Time-Sensitive Networking’s Journey from Layer 2 to Layer 3

Tuesday, August 21st, 2018

Why a completely new model is needed to integrate determinism into routed networks.

Introduction
In the last decade, Ethernet has made its case as the communication medium for industrial and automotive networks. The applications in these types of networks require time-critical delivery of traffic—not what Ethernet was originally designed for. A new set of technologies has evolved to ensure existing applications run on new Ethernet-based communication systems. Technologies like audio/video bridging (AVB) and time-sensitive networking (TSN) have matured and secured their place in the car network; as they have matured, they have also become more widely used in factory floor and industrial control networks.

TSN handles three basic traffic types found in automotive or industrial networks:

  • Scheduled Traffic, such as hard real-time messages or control messages that are periodic in nature—an example is the control messages coming from a sensor in an Advanced Driver Assistance System (ADAS).
  • Best Effort Traffic, i.e., traffic that is not sensitive to latency or other Quality of Service metrics, for example the analytics data uploaded from a factory network.
  • Reserved Traffic, i.e., frames allocated to different time slots but with a specified bandwidth reservation for each priority type, such as audio or video traffic for a car infotainment system or a professional broadcasting network.

To handle workloads such as those just cited, the IEEE 802.1TSN working group defined two pillars of traffic shaping: the time-aware shaper (IEEE 802.1Qbv) and the credit-based shaper (IEEE 802.1Qav).

The time-aware shaper divides the transmission path into fixed-length, repeating time cycles. These cycles are divided into time slots according to the TSN configuration agreed among the talkers, forwarders, and listeners. The different time slots can be configured and assigned to one or more of the eight Ethernet priorities. This design provides dedicated time slots through which time-critical traffic can flow without being contested by lower-priority traffic. The reserved traffic and the best-effort traffic can be assigned to other time slots not occupied by the hard real-time traffic. The credit-based shaper then places the reserved traffic at a higher priority than the best-effort traffic.
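To make the time-aware shaper more concrete, the Python sketch below models one repeating cycle as a gate control list: each entry opens the transmission gates for a set of priorities during a slice of the cycle. The cycle length, slot durations, and priority assignments are invented for illustration and are not taken from IEEE 802.1Qbv or any vendor configuration.

```python
# Minimal sketch of an 802.1Qbv-style gate control list (illustrative values).
# A cycle is a repeating window; each entry opens the transmission gates for a
# subset of the eight Ethernet priorities for a fraction of that cycle.

CYCLE_US = 1000  # hypothetical 1 ms cycle

# (duration_us, set of priorities whose gates are open during this slot)
gate_control_list = [
    (200, {7}),           # slot reserved for scheduled, hard real-time traffic
    (500, {5, 6}),        # slot for reserved (credit-shaped) audio/video streams
    (300, {0, 1, 2, 3}),  # remaining time for best-effort traffic
]

def gate_open(priority: int, t_us: float) -> bool:
    """Return True if a frame of this priority may start transmitting at
    offset t_us within the cycle (ignoring guard bands and frame length)."""
    offset = t_us % CYCLE_US
    start = 0
    for duration, open_priorities in gate_control_list:
        if start <= offset < start + duration:
            return priority in open_priorities
        start += duration
    return False

assert sum(d for d, _ in gate_control_list) == CYCLE_US
print(gate_open(7, 150))   # True: inside the scheduled-traffic slot
print(gate_open(0, 150))   # False: best-effort gates are closed here
```

A real implementation would also account for guard bands and frame transmission times so that a frame never overruns its slot.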

Extending TSN to Layer 3 Networks
As adopting Ethernet for closed, small networks has proven successful, increasing interest is surfacing from multiple industries with broadly similar needs for latency guarantees and ultra-low packet loss. The ecosystem of traffic shaping and time synchronization standards under the IEEE 802.1TSN working group focuses on providing determinism in Ethernet-based bridged networks that lie within the boundary of a single LAN segment or broadcast domain. The emerging application areas will require latency guarantees over a routed network connecting several geographical locations, i.e., across multiple LAN segments. Applications include professional audio/video, electrical utilities, building automation systems, wireless for industrial communication, 5G fronthaul, machine-to-machine communication, mining, private blockchain, and 5G network slicing. The IETF Deterministic Networking (DetNet) working group is focused on identifying use cases, defining the problem, and finding solutions, working in collaboration with the IEEE 802.1TSN working group.

Blockchain and M2M Examples
Blockchain is a digitized and decentralized method of storing data in public networks. In a blockchain consensus process, communication happens among all the nodes that store the blocks, and those nodes are separated by a public network. In general, these nodes are connected by L2 or L3 VPN connections today. The network treats these blockchain consensus messages as best effort because of the inherent nature of Ethernet traffic. The whole process could be made more efficient with deterministic, low-latency behavior.

Taking another example from the industrial machine-to-machine communication space, multiple machine sections can work together to make up a single machine. Figure 1 describes a representative architecture of an industrial control application and its associated network design (referencing the Avnu Alliance’s industrial theory of operation). In this system, a single machine, which consists of four different sections, controls an industrial process. A manufacturing site could include multiple machines like this. Each section of a machine is a subnet with a unique VLAN, and the sections are connected through an L3 network. The whole system is synchronized and coordinated to produce the final product. In this type of scenario, existing proprietary technologies, TSN profiles within the L2 bridges, and the DetNet profile within the L3 routers work in tandem.

A Probable Solution Space for Layer 3 Deterministic Networks
There are challenges with respect to using the 802.1TSN techniques in the routed domains. Some of the fundamental TSN technologies, such as the time-aware shaper, require re-computation of the entire path whenever a new flow is added or an existing flow is modified. This type of approach may not be suitable for a larger-scale routed network.

On the other hand, a geographically separated application, a professional broadcasting network say, requires the Internet, an L3 routed network, to serve as the connectivity medium between sites. There are a great number of heterogeneous devices in a large-scale network such as the Internet, and it is difficult and costly to keep precise time synchronization among all of them. In the absence of a universal sense of time, it is not possible for the network to use mechanisms such as scheduled traffic. Therefore, a completely new model is needed to integrate determinism into routed networks.

Different points of view are under discussion among the IETF DetNet working group members. The basic building blocks for providing deterministic networking in a routed network include:

  • A global view of the network for assigning an appropriate path for a flow
  • An end-to-end connectivity mechanism
  • Ability to assign QoS for individual flows and redundant paths.

A Network Management Entity may play an important role in managing the timeslots for data transmission and device resources such as available bandwidth. A Path Computation Element (PCE) function can compute and assign an end-to-end path for the TSN flows. For the end-to-end path, MPLS Pseudowire is an established technology that can provide connectivity across L2 and L3 domains. Figure 2 shows an example model of a time-critical flow, which originates in bridged TSN domain 1, crosses a public routed network, and terminates in bridged TSN domain 2. In this example, the time-critical flow carries its TSN metadata for the bridged domain in the “TSN Encap” header. In the routed domain, the flow is encapsulated with an MPLS Pseudowire label stack for reachability to TSN domain 2.
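As a rough picture of the layering just described, the sketch below nests a time-critical frame's TSN metadata inside an MPLS pseudowire label stack for the routed segment; every field name and label value is hypothetical, chosen only to show how the TSN information is preserved across the L3 domain.

```python
# Illustrative nesting of a time-critical L2 frame inside an MPLS pseudowire
# for the routed (DetNet) segment between two bridged TSN domains.
# All field names and label values here are hypothetical, not from a standard.

tsn_frame = {
    "eth_dst": "01:80:c2:00:00:0e",               # hypothetical destination MAC
    "vlan": {"pcp": 7, "vid": 100},                # priority 7: scheduled traffic
    "tsn_encap": {"stream_id": 42, "seq": 1187},   # "TSN Encap" metadata
    "payload": b"control-sample",
}

detnet_packet = {
    "mpls_labels": [2001, 3005],      # pseudowire label + transport label (made up)
    "control_word": {"seq": 1187},    # sequence number for the routed segment
    "inner_frame": tsn_frame,         # original frame carried end to end
}

# The router at the edge of TSN domain 2 pops the label stack and hands the
# inner frame, with its TSN metadata intact, back to the bridged domain.
egress_frame = detnet_packet["inner_frame"]
assert egress_frame["tsn_encap"]["stream_id"] == 42
```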

Figure 2: Modeling a time-critical flow

Conclusion
Connecting time-critical components over a public network is an extremely complex undertaking, and getting things to work at scale is a tall order. The interplay of various types of traffic adds to the complexity. From Ixia’s (now part of Keysight) extensive experience validating service provider networks, data centers, and other types of networks, we have learned how even a very robustly designed system can fail under stress conditions and negative scenarios. The designer of deterministic networks will have to keep these aspects in mind. The outlook is promising, but implementers face a huge challenge in covering every corner case. Therefore, a robust validation strategy will be key.


Avik Bhattacharya is the Product Manager for the Automotive and Industrial Ethernet validation portfolio in the Ixia Solutions Group, now part of Keysight. He has more than 12 years’ experience working on cutting-edge networking technologies.

The Sharing Industrial Model

Wednesday, December 6th, 2017

As the Internet of Things (IoT) and the Industrial IoT (IIoT) gain traction, Time Sensitive Networks (TSNs) are causing Ethernet networking to evolve, making it fit for the sharing industrial age.

Automating the manufacturing environment is the natural extension to connecting devices via the Internet of Things (IoT). Connected sensors, measurement devices, cameras and meters make up the Industrial Internet of Things (IIoT), enabling ‘smart’ and automated factory environments.

Among the benefits of the IIoT are improved efficiency, both in inventory management and in reduced downtime when a production line changes over from manufacturing one product to another. Alongside improvements to productivity and asset management, the IIoT can improve reliability: optical analytics can perform quality checking and assurance, and real-time data can be used to compile production schedules that are managed around the factory. The same data can be used to minimize waste and increase sustainability, with energy and lighting switched off during downtime or in areas that are temporarily unused. Safety can be improved as sensors report when personnel and machinery are in close proximity. Sensors can also highlight areas that may need maintenance; if maintenance can be carried out before equipment fails, that saves time and cost. The IIoT can also reduce labor costs, making production plants competitive with low-labor-cost countries and regions, and even bringing some manufacturing back to areas it had left because of rising staff costs.

Figure 1: The IIoT relies on low latency, low jitter communication to improve reliability and efficiency (Picture: Automobile production © DEPRAG SCHULZ GMBH U. CO.)

IIoT Needs Bandwidth
Businesses from agriculture to manufacturing, and from software integrators to microprocessor vendors, are embracing the IIoT. Analyst firm Markets and Markets predicted that the IIoT market will see a Compound Annual Growth Rate (CAGR) of 7.89 percent between 2016 and 2022, and that it will be worth nearly $200 billion by the end of that period.

For all of this data from sensors, optical systems, and analytical systems to come together in a single network will take more than the existing Ethernet Local Area Networks (LANs) used in factory environments today.

The ‘smart factory’ has increasing requirements for real-time data from sensors and optical data, as well as for analytics as part of the IIoT. This increases the amount of networking traffic in a time-sensitive network, where data has to be delivered securely with minimal latency and without increasing the processing load.

“The IIoT requires convergence and interoperability between IT systems and end control systems,” said Todd Walter, Chief Marketing Manager, National Instruments. Interoperability is critical to ensure that all the data collected is accessible to operators and managers for control, and that analytics are available for decision making. Control data and IT traffic have to share the network, points out Walter, which has led different vendors to work together to ensure that nodes on the network interoperate correctly. Today’s factory environments, for example, may include video data as well as monitoring data. This volume of data could saturate a network, leading to dropped packets, which can introduce errors and/or increase latency. The solution, says Walter, is a Time Sensitive Network (TSN).

Time Sensitive Network (TSN)
TSN, a set of extensions to IEEE 802.1Q, is being developed by the IEEE’s Time-Sensitive Networking Task Group. It defines the distributed time synchronization, low latency, and convergence characteristics needed for deterministic messaging on standard Ethernet. It allows data to be sent every millisecond in a deterministic and protected way, moving it from one point to another in a fixed, predictable timeframe. Using time scheduling minimizes jitter for real-time applications in the manufacturing, transportation, aerospace, automotive, and utilities sectors, where deterministic communication is needed for increased levels of connectivity. The predictability increases efficiency and allows time-synchronized, low-latency data streaming over a distributed network for real-time control and communication. It also means that components can be added without altering either the network or the equipment.

TSN allows users to synchronize devices without the need for signal-based synchronization to schedule traffic. Also made possible by TSN is the deterministic transfer of data to meet the demands of low latency and minimal jitter of closed-loop control applications, such as process and machine control.
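As a brief illustration of how devices can agree on time over the network itself rather than via a dedicated sync signal, IEEE 1588/gPTP-style protocols use a two-way timestamp exchange from which a device estimates its clock offset and the path delay. The timestamps below are invented for illustration and assume a symmetric path.

```python
# Two-way time transfer as used by IEEE 1588/gPTP-style synchronization.
# t1: master sends Sync        t2: slave receives Sync
# t3: slave sends Delay_Req    t4: master receives Delay_Req
# Assuming a symmetric path delay, the slave can estimate both quantities.

def estimate_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # one-way propagation delay
    return offset, delay

# Invented timestamps (microseconds): the slave clock runs 50 us ahead of the
# master, and the one-way path delay is 2 us.
t1, t2, t3, t4 = 1000.0, 1052.0, 1100.0, 1052.0
offset, delay = estimate_offset_and_delay(t1, t2, t3, t4)
print(offset, delay)   # -> 50.0, 2.0
```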

Many suppliers are investing in supporting TSN for the IIoT. At NIWeek 2016, National Instruments (NI) introduced an early access technology platform for TSN, developed with Cisco and Intel. Today, suppliers are building the Industrial Internet Consortium (IIC) TSN for Flexible Manufacturing testbed. The testbed is located at the NI headquarters in Austin, Texas. A second testbed is at a Bosch facility in Germany.

This year, NI announced that it has released two multi-slot Ethernet chassis that introduce time-based synchronization to advance TSN. The cDAQ-9185 and cDAQ-9189 are four- and eight-slot Ethernet chassis that provide synchronization with TSN and improve scalability of the distributed systems. They have an operating temperature range of -40 to +70 degree Celsius, shock resistance up to 50g and vibration resistance up to 5g, for operation in harsh environments.

Microprocessor Support
A large part of the IIoT is data mining to track goods and personnel, manage inventory, and reduce energy and material waste. For this reason, microprocessor vendors are also investing in the ecosystem for Ethernet protocols and networking.

At this year’s SPS IPC Drives, in Nuremberg, Germany, Renesas Electronics announced the RZ/N1 microprocessor solution kit, which is based on its RZ/N1S microprocessor, to help developers reduce the time spent integrating industrial Ethernet protocols and to accelerate industrial Ethernet development.

The microprocessor integrates an Arm® Cortex®-A7 core with 6 Mbytes of Static Random Access Memory (SRAM), in either a 324-pin or a 196-pin Low Profile Fine Ball Grid Array (LFBGA) measuring 15 x 15mm or 12 x 12mm respectively. The microprocessors reduce the peripheral component count for Programmable Logic Controller (PLC) and Human Machine Interface (HMI) applications in industrial equipment.

Figure 3: Renesas aims to reduce industrial Ethernet integration time with the RZ/N1 microprocessor solution kit.

The kit (Figure 3) has hardware and software to prototype EtherCAT, EtherNet/IP and other industrial Ethernet protocols. The company claims that it can reduce network protocol integration development time by up to six months.

The kit includes sample applications, development tools, drivers and evaluation versions of the protocol stacks.

The kit’s Central Processing Unit (CPU) development board is based on the processor and is accompanied by a software package that includes all the drivers, middleware, sample protocol stacks, U-Boot and a Linux-based Board Support Package (BSP). The kit provides instructions for building file systems for the industrial, Yocto-based Linux OS. Developers can use Express Logic’s ThreadX industrial-grade Real-Time Operating System (RTOS) for the applications sub-system as well as Linux. The former is designed specifically for deeply embedded, real-time applications and provides scheduling, communication, synchronization, timer, memory management and interrupt management services that can be tailored to the project’s requirements and used to support industrial Ethernet protocols.

There is also communications software, along with tools that generate C-code header files to ease pin configuration and further reduce development time.

Texas Instruments is also working on ways to make Ethernet connection more streamlined and has added the weight of its SimpleLink platform to that end.

It has introduced the SimpleLink MSP432 Ethernet microcontrollers to reduce automation gateway development time (Figure 4).

Figure 4: Bridging the wired and wireless worlds of an industrial Ethernet network, the SimpleLink MSP432 microcontrollers are from Texas Instruments.

The microcontrollers are based on the Arm Cortex-M4F with an integrated Media Access Controller (MAC), PHY (physical layer), Universal Serial Bus (USB), Controller Area Network (CAN) and cryptography accelerators for secure end-to-end communications.

Using the microcontroller’s integrated serial interfaces, developers can combine wired communications with wireless connectivity technology options, such as Wi-Fi, Bluetooth and Sub-1 GHz to connect end nodes to the cloud, using the SimpleLink Software Development Kit (SDK).

The microcontroller is application-code compatible with the wired and wireless Arm microcontrollers in the company’s portfolio. The same code base can be used for end nodes and intelligent gateways, saving rework and building on legacy blocks in the industrial network. Networks can mix wired and wireless technology, as the SimpleLink wireless microcontrollers can be used in gateways that are added to existing wired installations. Up to 50 secure sensor nodes can be connected to a single gateway that uses the microcontroller as a central management console. The console processes data and transfers it to the cloud via the Ethernet network. Operators can access real-time data to monitor and manage activity in a factory setting, mixing wired and wireless components on the network without interruption.

The MSP432 microcontrollers are in mass production now. As part of the SimpleLink platform, there is an accompanying development kit, the MSP432E401Y MCU LaunchPad™ development kit (MSP-EXP432E401Y).

As the ecosystem for the IIoT grows and as the Ethernet standard evolves to take into account the increased data usage and real-time data demands, the growth of the IIoT looks set on a steady course to increase productivity and efficiency.


Caroline Hayes has been a journalist covering the electronics sector for more than 20 years. She has worked on several European titles, reporting on a variety of industries, including communications, broadcast and automotive.

Power to the People—Delivering on the Promise of PoE

Tuesday, August 15th, 2017

Why finding products certified against IEEE 802.3 PoE standards has become easier.

The Ethernet community never stops innovating. We’re always pushing the boundaries, finding new ways of advancing this important technology that has become the cornerstone of today’s high-speed networks. As we embark on the next Ethernet era, the industry is hard at work on an array of innovations that will drive Ethernet to higher speeds, and into more applications and markets. Ultimately, this work will benefit end users at every level, from the data center, to the enterprise, to consumers looking to take advantage of emerging Internet of Things (IoT) and automotive products and services.

…how do you pick from an abundance of PoE products and still rest assured that you’re getting the plug-and-play interoperability and reliability that is Ethernet’s hallmark?

Figure 1: By giving power a means to travel over existing Ethernet cables, Power over Ethernet (PoE) simplifies network installation, cuts costs, and strengthens energy management—benefitting a number of applications, including automotive.

One of the important innovations gaining traction in the marketplace is Power over Ethernet (PoE). PoE reduces network installation complexities and cost, and improves energy management by enabling delivery of power over existing Ethernet cables for IP network devices like Voice over IP (VoIP) phones, cameras, lighting, and wireless access points.

Clarifying Choices
We see a wide mix of proprietary, quasi-standard, and IEEE 802.3™ standards-based PoE products popping up. While having a variety of PoE solutions to choose from is a good thing, it can also lead to some head-scratching moments of confusion—how do you pick from an abundance of PoE products and still rest assured that you’re getting the plug-and-play interoperability and reliability that is Ethernet’s hallmark?

As PoE’s many benefits continue to attract ever-greater numbers of end users, there’s an inherent need to minimize confusion over the growing diversity of products and solutions. Products based on IEEE 802.3 PoE specifications bring with them predictable power delivery, proven interoperability, and increased network safety. But, figuring out whether your PoE solution of choice meets those standards can be a tricky business. And that’s where the Ethernet Alliance comes in.

The Ethernet Alliance (EA) PoE certification program will enable end users to identify at a glance those products designed to comply with released IEEE 802.3 PoE standards. As part of this initiative, Ethernet manufacturers and vendors can submit their equipment for verification testing against the Ethernet Alliance’s PoE certification specification, which conforms to current IEEE 802.3 PoE standards. After successfully completing testing, products will be designated as PoE certified and allowed to use the program’s certification logo. They’ll also be added to a searchable public registry of EA PoE-certified products.

By empowering end users to quickly and easily find products certified against IEEE 802.3 PoE standards, we’re reducing confusion and improving their overall experience. And it’s a win-win situation for the industry too—this program opens the door to new business opportunities between powered device (PD) manufacturers and power sourcing equipment (PSE) vendors, while simultaneously helping to increase end-user trust in PoE and Ethernet.

What it comes down to is this—it doesn’t matter if you’re talking wired or wireless, everything still needs power. PoE holds the promise of being able to meet that need with minimum hassle, but maintaining interoperability and robust Ethernet performance is a must. The Ethernet Alliance PoE certification program delivers on the promise of PoE by taking the guesswork out of the equation and getting power to the right people at the right place at the right time.

Visit the Ethernet Alliance web site to learn more about the PoE certification program.


John D’Ambrosia is known in the industry for his efforts as Ethernet’s advocate. He is the Chairman of the Ethernet Alliance, an organization dedicated to the promotion of all Ethernet technologies. In his role as a Senior Principal Engineer at Huawei, D’Ambrosia participates in several IEEE industry standards efforts that are driving Ethernet’s on-going evolution and its move to higher speeds. A popular blogger on Ethernet matters, D’Ambrosia was awarded the IEEE-SA 2013 Standards Medallion and was inducted into the Light Reading Hall of Fame. His previous experience includes Dell, Force10 Networks, and Tyco Electronics.

100GbE on the Menu

Wednesday, June 8th, 2016

A revision to PICMG 3.1 caters to the hunger for bandwidth that service providers, operators and consumers share.

The increase in cloud services and Internet traffic has placed considerable demands on bandwidth and capacity in recent years. As a result, data center developers have been forced to either invest in additional equipment or consider an architecture that can be scaled to add the capacity needed.

The dramatic increase in data traffic has been driven by the proliferation of mobile devices; there will soon be more traffic originating from mobile devices than from PC-based ones. Last year, Cisco tracked this trend in its Cisco Visual Networking Index (VNI) forecast: data originating from non-PC devices, such as smartphones, tablets and TVs, accounted for 40 percent of traffic in 2014 and is projected to reach 67 percent in 2019.

There is expected to be an increase in all data traffic in that period, as Internet and cloud services are joined by IoT and Machine-to-Machine (M2M) data. Cisco believes that PC-originated data will grow at a Compound Annual Growth Rate (CAGR) of nine percent, significant but dwarfed by tablet data traffic growing at a CAGR of 67 percent, smartphones at 62 percent and M2M at 71 percent. Another significant milestone by 2019, says Cisco, is that traffic from wireless and mobile devices will exceed that from wired devices. WiFi and mobile devices will account for 66 percent of IP traffic, compared to wired devices’ 33 percent; in 2014, wired devices accounted for over half (54 percent) of traffic. In addition, an estimated 80 percent of telecommunications network traffic is video, which consumes a great deal of bandwidth and strengthens the case for 100G data transfer rates.
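To put those growth rates in perspective, the short Python sketch below compounds each CAGR over the five-year 2014-2019 forecast window; the resulting multipliers are derived arithmetic, not additional Cisco figures.

```python
# Compound each CAGR over the 5-year forecast window (2014-2019) to see how
# much each traffic category multiplies. Rates come from the Cisco VNI figures
# cited above; the multipliers themselves are simple arithmetic.

years = 5
cagr = {
    "PC": 0.09,
    "smartphone": 0.62,
    "tablet": 0.67,
    "M2M": 0.71,
}

for device, rate in cagr.items():
    multiplier = (1 + rate) ** years
    print(f"{device:10s} traffic grows ~{multiplier:.1f}x over {years} years")

# PC ~1.5x, smartphone ~11.1x, tablet ~13.0x, M2M ~14.6x: this is why mobile
# and M2M traffic quickly dwarfs PC-originated data.
```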

100G Challenges

To realize 100G operation, networks require Deep Packet Inspection (DPI) to process large data flows in real time. They also need support for Software Defined Networking (SDN) for design flexibility and network security.

Scalable systems are needed to support the network infrastructure as it grows, while upgradeable systems allow service providers to introduce services in response to customer demand.

Both SDN and Network Function Virtualization (NFV) allow features to be added to the networks, or to be reconfigured without upgrading a large portion of the hardware.

Figure 1: Doug Sandy, CTO PICMG, believes PICMG 3.1 R3.0 is a “robust alternative” for IT equipment.

AdvancedTCA was introduced by PICMG at the end of 2001 as a common hardware platform for computing and telecommunications equipment, offering the availability required for central office applications. The intervening 15 years have seen the industry progress from 10GbE to 40GbE and on to the IEEE 802.3bj-2014 standard, which adds 100 Gbit per second Physical Layer (PHY) specifications and parameters.

Incorporating 100Gbit backplane Ethernet (GbE) into the Advanced Telecom Computer Architecture (AdvancedTCA or ATCA) standard became the next step.

PICMG 3.1 Revision 3 was drawn up to accommodate 100GbE. The open standard remains true to its values of preserving the ability to combine boards, switches and backplanes from multiple vendors to create 100GbE systems.

“PICMG 3.1 R3.0 platforms provide a more robust alternative to operators who are not comfortable deploying standard IT equipment in their facilities,” explains Doug Sandy, Chief Architect/Lead Hyperscale Technologist for Artesyn’s Embedded Power Business, and Vice President of Technology/Chief Technology Officer of PICMG. “Some reasons might include backward compatibility with existing equipment, or more rugged requirements such as those found in traditional telecommunications central offices,” he continues, adding: “ATCA is also finding increasing adoption in military/aerospace applications.”

Sandy confirms that the revision, which incorporates 100-Gbit and 25-Gbit Ethernet into the AdvancedTCA platform, was adopted last month and that the specification is ready for purchase from PICMG.

As well as backward compatibility and multi-vendor interoperability, 100GbE operation throws up some technical challenges, for example: managing losses at 100G signaling rates; impedance control, which is achieved by limiting via stubs; and crosstalk control, using trace geometries and grounding through connectors.

“Primary challenges were related to high speed signal integrity, interoperation between multiple vendors and backward compatibility,” says Sandy. “Artesyn Embedded Technologies’ Embedded Computing division spearheaded this work with a connector vendor and brought the initial concept to PICMG,” relates Sandy. “Through committee, collaboration (and lots of simulation work), the solution was expanded, improved and refined. The result is the specification that we have today.”

Realizing 100GbE Operation

Artesyn offers 100G shelves based on a QuadStar backplane architecture that has four switch blades, or hubs, each fanning out to the other cards in a rack. This increases the available bandwidth compared with a dual-star architecture. Three switches can be active, with the fourth in standby mode to offer 3+1 redundancy. The technology can be scaled up to 100G, enabling each blade to deliver up to 400G, or 4 Tbit per second of aggregate system bandwidth in a non-redundant implementation, and 300G of bandwidth with redundancy.
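A back-of-the-envelope reading of those figures, assuming (as an inference, not an Artesyn specification) that each node blade has one 100G fabric channel to each of the four hubs and that a 14-slot shelf such as the Centellis 8000 below devotes four slots to hubs and ten to node blades:

```python
# Rough arithmetic behind the QuadStar figures quoted above. Assumptions (not
# vendor-confirmed): each node blade has one 100G fabric channel to each of
# the four hub blades, and a 14-slot shelf uses 4 hub slots + 10 node slots.

channel_gbps = 100
hubs_total = 4
hubs_active_redundant = 3        # 3+1 redundancy keeps one hub in standby
node_blades = 14 - hubs_total    # 10 node slots in a 14-slot shelf

per_blade_nonredundant = hubs_total * channel_gbps            # 400G per blade
per_blade_redundant = hubs_active_redundant * channel_gbps    # 300G per blade
aggregate_tbps = node_blades * per_blade_nonredundant / 1000  # 4.0 Tbps

print(per_blade_nonredundant, per_blade_redundant, aggregate_tbps)
# -> 400 300 4.0, matching the 400G/300G per blade and 4 Tbps aggregate cited.
```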

An example is the ATCA-7480 QuadStar packet processing blade. It is based on two Intel® Xeon® E5-2600 v3 family processors providing up to 28 processing cores per blade.

Figure 2: The Centellis 8000 40G/100G 14-slot ATCA system from Artesyn.

Together with connector manufacturer ERNI, the company has developed connector and backplane technology for 100GbE connectivity in an AdvancedTCA shelf. Following the launch of the Centellis 8000 14-slot systems (Figure 2) for high-availability applications, the company added the Centellis 8840 AdvancedTCA open standard server, with 100G AdvancedTCA technology integrated into a Network Equipment-Building System (NEBS)-ready platform that will accept 40G and 100G blades as they become available.

Figure 3: Advantech’s ATCA-9223 uses the Intel Atom C2000 processor for programming and Virtex-7 FPGAs for packet processing on 100GbE ports.

Also available for 100G operation, and targeting network security in carrier and large enterprise data center networks, is Advantech’s ATCA-9223 100GbE AdvancedTCA hub blade (Figure 3).

Like the Centellis, it is designed for use in both SDN and NFV. According to the company, it optimizes traffic flows and load balancing across clusters of AdvancedTCA node blades. It uses an onboard Intel Atom processor C2000 for programmability, with a Broadcom BCM56150 base fabric switch providing 70 Gbit per second of switching for PICMG 3.1 GbE backplane connectivity. Two Xilinx Virtex-7 690T FPGAs connect two 100GbE and eight 10GbE ports for inline processing of packets between the external ports and blades on the fabric interface.


Caroline Hayes has been a journalist covering the electronics sector for over 20 years. She has worked on many titles, most recently the pan-European magazine EPN.

Q&A with the Ethernet Alliance Chair John D’Ambrosia

Tuesday, March 8th, 2016

Yes, he’s chairing the IEEE 802.3™ 400 Gb/s Ethernet  Task Force, but he wants you to know it’s not just about speed.

EECatalog asked Ethernet Alliance Chair John D’Ambrosia to catch us up on what’s been happening within the Alliance and with Ethernet’s role in general.

EECatalog: You’ve noted that it took the Ethernet standard 27 years to create six speeds, yet another six speeds are arriving in the relatively short space of the next three to five years—what’s driving this?

John D’Ambrosia, Ethernet Alliance: There isn’t one particular driver. You could say that the industry recognized that there are different application spaces out there; that would be fair. You could say that the industry is leveraging existing technologies to compete in new spaces and solve new problems; 2.5 and 5 gig are examples, leveraging 10 gig technology over existing cabling to run at 2.5 and 5 gig. You could talk about the growth of hyperscale data centers and the need for speed that they have, or about the growth of bandwidth, so there are a number of different drivers.

EECatalog: Could you please speak a bit about 2.5 Gigabit Ethernet (GbE) and 5 GbE with regard to the phenomenon of more intelligence moving to the edge of the IoT and about wireless access as a market driver?

D’Ambrosia, Ethernet Alliance: Two drivers are in place here. First, you had the wireless market introducing the latest and greatest in wireless access, IEEE 802.11ac. When you look at how that is going to impact the data coming in and out of the wireless access routers, it will have a tremendous effect. The solution that you see on the back of these wireless access points is GbE, specifically GbE with Power over Ethernet [PoE].

Figure 1: John D’Ambrosia, Ethernet Alliance Chair, described for us how Power over Ethernet reliability kept networking over wireless access points up and running during a Super Bowl power outage. (commons.wikimedia.org by Au Kirk)

In 2013, when the lights went out in the Mercedes-Benz Superdome during the Super Bowl, that was the first time at a Super Bowl where upload bandwidth outpaced download bandwidth. Power in the stadium went off, but networking never went down, because all of those wireless access points were enabled through Power over Ethernet. The ruggedness and reliability of Ethernet really shone at that moment.

People might say Ethernet is dying because of wireless, and it is absolutely not true: Ethernet is evolving because of wireless. Ethernet and wireless are symbiotic: they work together. For every wireless router there is, there is an Ethernet port hooking up to the network, and it’s a great way to use that shared bandwidth over the Ethernet port. Look at it as Metcalfe’s Law: the more people that connect to that network, the more valuable that network becomes. I don’t need to have a copper wire connection to connect to it, I can do it wirelessly, so the two work together at that point.

So now I have this great infrastructure in place and I’ve got GbE. The problem is, to jump to 10 GbE, you need better cabling than what is deployed in around 90 percent of the market.

From 2003 to 2014, there were 70 billion meters of Cat5e and Cat6 cabling sold. Some people would say, “Just put in new cabling,” but economics don’t work that way. Before people do that, they are going to see if there is something better they can do with technology.

That is where 2.5 and 5 gig came in.  They were able to leverage 10GBASE-T technology and run it over the existing cabling infrastructure to do 2.5GbE and 5 GbE.

IoT is a very interesting case. IoT spans a gamut of different applications. Some are looking at the IoT space for use with low-bandwidth basic sensors, and then for the high-speed applications I’ve heard people talking about it from a video application perspective. There is also discussion about a new extended-reach, single-pair BASE-T effort. There are applications where people need to go further, but don’t necessarily need the speed.

EECatalog: Have you had times where you felt some course corrections or changes in thinking were needed?

D’Ambrosia, Ethernet Alliance: Ethernet has been undergoing a cost correction over the past few years. We started 400 GbE back in 2013; in 2014 we saw an explosion of people talking about, “You know what, we have 100 GbE based on 4 x 25Gb/s, so we should do 25 GbE as the next generation for servers, because 40 GbE just isn’t going to solve the problems.” That was [actually] the second step. The first step was the introduction of 40 GbE, when people started to realize that a 10x leap in speed is not always a good thing. It takes too long for that to become cost-effective for everybody. The willingness now to break out of that whole 10x mindset I see as a positive.

Then you saw the new projects coming in, such as automotive. It used to be that everybody focused on the higher speeds. However, there is a whole marketplace out there of applications that are not just about higher speeds. “Higher” speed is relative to the application.

And while the highest speeds such as 400 gig are important to the overall system—they allow the aggregation of data—you are not going to see 400 gig servers be a volume driver in the next year. We have to remember that speed is relative.

Every new application space will eventually want to go faster. While Base-T isn’t going to be jumping to 400 GbE, there are projects underway to go beyond 10 GbE and do 25 and 40 GbE, and at the same time Base-T is also pushing the envelope at lower speeds. Automotive is being done over a single pair, at 100 megabit and gigabit. And there are discussions underway for new projects that will be doing extended reach—they are going to be willing to sacrifice speed in order to go further; this is a need for the industrial space.

EECatalog: What Ethernet issues should embedded designers and developers keep their eyes on over the next year and why?

D’Ambrosia, Ethernet Alliance: Engineers should look at Ethernet from two perspectives: [1] What does [Ethernet connectivity] mean for my embedded design connecting to the world and [2] what does it mean to the inside of my embedded device?

There is work underway to develop 50 gigabit per second signaling both electrically and optically,  so there might be a lot of people out there today that aren’t looking at those speeds, but I am sure there are embedded designers who are going that fast.

What that means to them is that they are going to have to do some re-learning, because there is going to be a fundamental technology change as we move past 25Gbps signaling into 50Gbps signaling with PAM4 modulation. At 25Gb/s the modulation scheme is NRZ signaling, which has two levels: a zero and a one. As we go to 50Gbps, we’re looking at PAM4, where there are four levels and you are sending two bits of information in a single transmission, whereas before it was one bit with every transmission. It’s an example of working smarter.
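A minimal sketch of that difference: NRZ maps one bit per symbol onto two levels, while PAM4 maps two bits per symbol onto four amplitude levels, so the same symbol rate carries twice the data. The level values and bit mapping below are illustrative only, not taken from any electrical specification.

```python
# NRZ vs PAM4 in miniature: NRZ sends 1 bit per symbol (2 levels), PAM4 sends
# 2 bits per symbol (4 levels), doubling throughput at the same symbol rate.
# Level values and bit-to-level mapping are illustrative only.

NRZ_LEVELS = {0: -1, 1: +1}
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

bits = [1, 0, 1, 1, 0, 0, 1, 0]

nrz_symbols = [NRZ_LEVELS[b] for b in bits]                      # 8 symbols
pam4_symbols = [PAM4_LEVELS[(bits[i], bits[i + 1])]              # 4 symbols
                for i in range(0, len(bits), 2)]

print(len(nrz_symbols), nrz_symbols)    # 8 symbols to carry 8 bits
print(len(pam4_symbols), pam4_symbols)  # 4 symbols to carry the same 8 bits

# Same idea at line rate: 25Gb/s NRZ needs roughly a 25 GBd symbol rate, and
# 50Gb/s PAM4 needs about the same symbol rate (ignoring coding/FEC overhead).
```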

It also means engineers need to think about whether the test equipment they use for debug can support this sort of thing—they have to start planning for that and becoming familiar with it because it is going to happen more quickly than some will recognize.

And with regard to the outside connection, embedded engineers now have a whole host of speeds and feeds and media types that they can support, depending on what their application space is. It could be very short traces on boards, they could be working with modules, it could be going a couple of meters over copper cable, it could be going 40 kilometers over optical fiber, or it could be using parallel cable, where you have multiple fibers in parallel with each other and you are sending data over each one to achieve an aggregate speed. A single port of Ethernet could support all of these different spaces.

A chart on the forthcoming Ethernet Alliance 2016 roadmap will illustrate  the magnitude of all the different physical layer specs over all the speeds and all the media types.

EECatalog: What do you want our readers to know about the Ethernet Alliance?

D’Ambrosia, Ethernet Alliance: The Ethernet Alliance is a marketing alliance for Ethernet IEEE 802(TM) technologies—that’s it, plain and simple. We’re promoting IEEE 802 Ethernet technologies to enable the rapid adoption of Ethernet solutions in the marketplace, through marketing and education, and also through interoperability events. We get a group of companies together, throw their engineers all in a room under an NDA, and they work together to verify the interoperability between their equipment.

And while the IEEE 802.3 standards do a great job on that (and many in the Ethernet Alliance participate in IEEE 802.3), the reality is that interoperability doesn’t come just from implementation of the standards; it is something that you really do need to go out and verify, by hooking up equipment.

I can say from my own experience I am really proud of the work of IEEE 802.3, but at the end of the day it’s somewhere in the order of 250 to 400 pieces of paper—that is an impressive amount of work, but it is even more impressive when you get companies, literally from around the world, together, and you put equipment together, and it works.

Broadcom Delivers Industry’s First High-Density 25/100 Gigabit Ethernet Switch for Cloud-Scale Networks

Thursday, September 25th, 2014

Now Sampling to Customers, New StrataXGS® Tomahawk™ Series Delivers 3.2 Tbps Bandwidth with Comprehensive SDN Control and Visibility Features

  • First to deliver 32 ports of 100GE, 64 ports of 40GE/50GE or 128 ports of 25GE on a single chip
  • Significantly improves efficiency of cloud-scale networks using high-density 25/50GE data center link protocols[1]
  • In-field configurable flow processing and instrumentation engines enrich network control and visibility
  • Leverages broad ecosystem of network software, hardware, OEM, operator and application partners

Broadcom Corporation (NASDAQ: BRCM), a global innovation leader in semiconductor solutions for wired and wireless communications, today announced the immediate availability of a new line of switches optimized for cloud-scale data centers. Building on its widely deployed StrataXGS® Trident and StrataDNX™ products, the new StrataXGS® Tomahawk™ Switch Series is the industry’s highest performance Ethernet switch, delivering 3.2 Terabits per second (Tbps) switching capacity, unparalleled port density and SDN-optimized engines in a single chip. For more news, visit Broadcom’s Newsroom.

With more than 7 billion integrated transistors, the StrataXGS Tomahawk Series enables the transformation of next-generation cloud fabrics to all-25Gbps per-lane interconnect, increasing link performance by 2.5X[2]. With dense 100GE connectivity and authoritative support for new 25GE and 50GE protocol standards, the StrataXGS Tomahawk Series significantly bolsters the bandwidth capacity, scalability, and cost efficiency of today’s mega data centers and high performance computing (HPC) environments.
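As a rough sanity check on the headline numbers, the sketch below treats the switch as 128 SerDes lanes at 25Gb/s each and derives the port configurations quoted in the announcement; the lane groupings are inferred from those figures, not taken from Broadcom documentation.

```python
# Sanity-check the headline figures: 3.2 Tbps of capacity viewed as 128 SerDes
# lanes at 25Gb/s each. Lane groupings are inferred from the announcement,
# not from Broadcom documentation.

lane_gbps = 25
total_lanes = 128
capacity_tbps = total_lanes * lane_gbps / 1000
print(capacity_tbps)                  # -> 3.2 Tbps

port_configs = {       # port speed: lanes bonded per port
    "100GE": 4,        # 4 x 25G -> 32 ports
    "50GE": 2,         # 2 x 25G -> 64 ports
    "25GE": 1,         # 1 x 25G -> 128 ports
}
for speed, lanes_per_port in port_configs.items():
    print(speed, total_lanes // lanes_per_port, "ports")
```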

“Our StrataXGS Tomahawk Series will usher in the next wave of data centers running 25G and 100G Ethernet, while delivering the network visibility required to operate large-scale cloud computing, storage and HPC fabrics,” said Rajiv Ramaswami, Broadcom Executive Vice President, Infrastructure & Networking Group. “This is the culmination of a multi-year cooperative effort with our partners and customers to prepare for this transition. We are pleased to see significant industry investment in the Tomahawk 32x100GE form factor as well as the 25G/50G Ethernet specification, which Broadcom defined and co-founded as an industry standard.”

Transforming Leaf-Spine Networks to 25/100GE for Maximum Efficiency and Scale-Out

By deploying StrataXGS Tomahawk based switches, data center networks currently running 10GE at the top-of-rack (leaf) level and 40GE at the end-of-row (spine) level can upgrade to 25GE and 100GE interconnect, respectively, to accommodate growth in distributed server/storage workloads without increasing network equipment footprint or cabling complexity. A three-tier data center fabric of StrataXGS Tomahawk switches, using standard, compact, CAPEX-efficient form factors, can deliver over 15X higher network bandwidth capacity[3].

In lieu of upgrading server-to-switch connections to 40GE, a StrataXGS Tomahawk based network driving 25GE to the server reduces cabling elements within the rack by as much as 75 percent, while quadrupling the number of server and storage nodes that can be interconnected in a leaf-spine topology[4]. This dual-pronged improvement in bandwidth efficiency and port density compared to existing 40GE solutions gives modern data centers unprecedented ability to scale out their networks and achieve significant return on investment.
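One way to read the 75 percent and quadrupling figures, assuming (this is an inference, not something stated in the release) that a 40GE server link is carried over four 10G lanes and cable elements while a 25GE link needs only one:

```python
# Interpreting the 75 percent and 4x claims above. Assumption (not stated in
# the release): a 40GE server link runs over 4 x 10G lanes/cable elements,
# while a 25GE link uses a single 25G lane.

lanes_per_40ge_link = 4
lanes_per_25ge_link = 1

cabling_reduction = 1 - lanes_per_25ge_link / lanes_per_40ge_link
print(f"{cabling_reduction:.0%} fewer cable elements per server")  # -> 75%

# On a 128-lane switch, single-lane 25GE ports attach 4x as many nodes as
# four-lane 40GE ports would on a comparable 128-lane platform.
switch_lanes = 128
print(switch_lanes // lanes_per_25ge_link, "vs", switch_lanes // lanes_per_40ge_link)
# -> 128 vs 32 attachable nodes
```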

Comprehensive Visibility and Control for Software-Defined Data Centers

Optimized for Software Defined Network (SDN) application ecosystems, Broadcom’s new BroadView™ instrumentation feature set enables data center operators to have full visibility of network and switch-level analytics. With extensive application flow and debug statistics, link health and utilization monitors, streaming network congestion detection and packet tracing capabilities, the StrataXGS Tomahawk Series provides operators the telemetry to troubleshoot large-scale networks, apply controls for optimal performance, respond to potential problems before they happen and drive down OPEX.

Featuring new FleXGS™ packet processing engines, the StrataXGS Tomahawk Series enables operators to adapt to changing workloads and control their networks, with an extensive suite of user configurable functions for flow processing, security, network virtualization, measurement/monitoring, congestion management and traffic engineering. Among other benefits, FleXGS engines provide in-field configurable forwarding and classification database profiles, more than 12X greater application policy scale compared to previous generation switches, increased flexibility of packet lookups and key generation, and rich load balancing and traffic redirection controls. All these configurable capabilities are accessible to the network control plane via industry-proven software APIs and come without sacrificing network data plane throughput or latency.

StrataXGS Tomahawk Key Features

  • 3.2 Tbps multilayer Ethernet switching
  • Integrated low-power 25GHz SerDes
  • Authoritative support for 25G and 50G Ethernet Consortium specification
  • Configurable pipeline latency enabling sub 400ns port-to-port operation
  • Supports high performance storage/RDMA protocols including RoCE and RoCEv2
  • BroadView instrumentation: provides switch- and network-level telemetry
  • High-density FleXGS flow processing for configurable forwarding/match/action capabilities
  • OpenFlow 1.3+ support using Broadcom OF-DPA™
  • Comprehensive overlay and tunneling support including VXLAN, NVGRE, MPLS, SPB
  • Flexible policy enforcement for existing and new virtualization protocols
  • Enhanced Smart-Hash™ load balancing modes for leaf-spine congestion avoidance
  • Integrated Smart-Buffer™ technology with 5X greater performance versus static buffering
  • Single-chip and multi-chip HiGig™ solutions for top-of-rack and scalable chassis applications

Availability

The Broadcom StrataXGS BCM56960 Tomahawk Switch Series is now sampling.

Resources

  1. As defined by 25 Gigabit Ethernet Consortium and under standards development in IEEE 802.3

  2. As compared to existing 10Gbps per-lane interconnect

  3. As compared to previous generation 40GE switch devices

  4. As compared to an equivalent number of currently available 40GE switch platforms in the same form factor, connected in the same leaf-spine topology

About Broadcom

Broadcom Corporation (NASDAQ: BRCM), a FORTUNE 500® company, is a global leader and innovator in semiconductor solutions for wired and wireless communications. Broadcom® products seamlessly deliver voice, video, data and multimedia connectivity in the home, office and mobile environments. With the industry’s broadest portfolio of state-of-the-art system-on-a-chip solutions, Broadcom is changing the world by connecting everything®. For more information, go to www.broadcom.com.

Automotive Ethernet: No Simple Answers

Wednesday, November 27th, 2013

Industry leaders speak out on the status of automotive Ethernet and where the technology needs to go.

As Microchip Technology’s Henry Muyshondt states, “Ethernet means different things to different people.” And that makes a discussion of the adoption of Ethernet in automotive applications tricky. In our roundtable discussion, I’ve pulled Muyshondt’s explanation into a sidebar, which deserves consideration even outside the specifics of automotive applications. Along with Muyshondt, senior marketing manager for the Automotive Information Systems Division of Microchip Technology, Inc., we have Armin Lichtblau, business development director for the Automotive Network Design at Mentor Graphics (Deutschland); Joel Hoffman, automotive strategist for Intel’s expanding Automotive Solutions Division; and Nick DiFiore, director of the Automotive Segment for Xilinx. Many thanks to our panel for their thought-provoking responses!

EECatalog: What’s your view of the status of Ethernet adoption in automotive applications?


Henry Muyshondt, Microchip: For streaming data within the vehicle, the automotive industry has already pretty much settled on the MOST standard (Media Oriented Systems Transport). Over the last 13 years, it has come to be used in more than 150 automobile models already on the road, from most of the major car makers, in all regions of the world. There are more than 100 million MOST devices in use, as multiple devices are used in each vehicle. This technology can handle both Ethernet frames, as well as streaming information, all running in parallel. The higher-layer protocols used in the IT industry can communicate over the MOST standard without needing any changes, other than at the low-level link layer. Ethernet frames are sent unmodified, and other channels are available to transport audio and video data without overhead.

In terms of the actual Ethernet physical layer, it is not well suited for automotive applications due to electromagnetic compatibility issues, as well as the challenges associated with running standard Ethernet cabling within the vehicle. There are proprietary technologies, such as Broadcom’s BroadR-Reach, that could be applied, but those are not really standard Ethernet and at this point have not received the wide adoption among carmakers that some in the industry press would suggest. Many challenges would need to be solved in order to implement a low-cost, standards-based Ethernet solution in automobiles, and it will be several years before the current IEEE efforts toward defining a Gigabit physical layer for automobiles become a reality.

In short, Ethernet-style frames are definitely seeing increased use in automotive applications. The Ethernet physical layer, not so much.


Armin Lichtblau, Mentor Graphics: Particularly for new-generation E/E designs, which are often AUTOSAR systems, Ethernet in the car is a requirement. One use case is diagnostics over Ethernet, which enables high-speed end-of-line programming. The other is the adoption of high-speed applications in the vehicle, e.g., video signal processing for safety and active-safety systems (collision cameras, etc.).


Joel Hoffman, Intel: Ethernet adoption is occurring slowly in automotive. This is partially due to the entrenched nature of existing closed solutions (such as MOST, CAN, FlexRay and others) that have been developed specifically and exclusively for automotive, along with differing ideas on how to deploy the technology (such as the proprietary “OPEN” protocol by Broadcom).

These issues existed in the tech and enterprise segments as well, when Token-Ring, Asynchronous Transfer Mode (ATM) and other complex designs claimed technical advantages until more advanced silicon and software were created for the broader Ethernet market. If these issues had lingered, we would not have the cost-effective enterprise cloud and connectivity that we have today.


Nick DiFiore, Xilinx: Ethernet for automotive applications is rapidly growing among automotive OEMs to meet the growing bandwidth demands for new in-car audio and video (A/V) features. Many automobile vendors are planning to roll out Ethernet-based infotainment and advanced driver assist systems (ADAS) products in their 2015-2017 models. As a result, several high-profile automotive OEMs including GM, BMW and Hyundai have joined the AVnu Alliance, an industry forum dedicated to the advancement of professional-quality audio/video in markets that include automotive, professional AV and consumer electronics. Xilinx is one of the founders of the AVnu Alliance.

Ethernet vs. IP – Henry Muyshondt, Microchip Technology, Inc.

The term Ethernet means different things to different people. Ethernet really refers to the IEEE 802.3 standard that defines the physical and data-link layers used to connect computers. It specifies the physical medium, such as the four twisted pairs of a CAT5 cable, and the electronics they connect to on a typical office or home computer, along with a particular format for a packet of information. It does NOT refer to the higher communication protocols above the physical layer, such as the various Internet protocols and other communication mechanisms used in the IT communications world. It also does NOT refer to things like Wi-Fi, Bluetooth, or other networking physical connections, even when those connections use packets of information similar to those used in a proper Ethernet system. Many people say Ethernet when what they really mean is Internet Protocol (IP) or another packet-based communication protocol, and not the physical layer. This has led to some confusion in the industry when referring to Ethernet.
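To make that distinction concrete, here is a minimal sketch in C of the link-layer frame header that the 802.3 family actually defines; the field names are illustrative, not taken from any particular driver, and everything above this header, IP included, is just payload from Ethernet’s point of view.

#include <stdint.h>

/* Minimal sketch of an Ethernet II frame header, illustrating that
 * "Ethernet" defines a link-layer frame format, not the higher-layer
 * protocols (such as IP) carried inside it. */
struct eth_header {
    uint8_t  dest_mac[6];   /* destination hardware address              */
    uint8_t  src_mac[6];    /* source hardware address                   */
    uint16_t ethertype;     /* payload type, e.g. 0x0800 for IPv4        */
    /* payload (46-1500 bytes) and a 4-byte frame check sequence follow  */
};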

IP communication is certainly the wave of the future, especially as it relates to communicating between different domains within the automobile, or to the world outside the vehicle. This is already starting to happen, with many telematics applications. We expect these types of communications to continue to expand their presence in the automobile. We also expect packet-based communication to coexist with other more efficient streaming technologies, as engineers exploit the advantages that each technology brings to the table.

For example, when independent systems need to communicate—particularly with off-board systems, such as the Internet or OEM diagnostics systems—they can take advantage of packet communication’s characteristics, such as being able to adapt to changing interconnections between the data’s source and destination. They can also take advantage of standardized protocols that are designed to work over unreliable connections that might require retransmission of packets, and where the path between source and destination may include links that disappear, while others appear along the way, as computers are turned off or links are otherwise interrupted. These mechanisms might also be needed to communicate between domains inside the vehicle that are developed separately from each other, such as engine control and infotainment. This type of communication provides a common language for disparate groups to talk to each other. Ethernet-style frames and packets are a useful tool for this purpose.

Other applications, such as when there is a continuous flow of information between devices, can use more efficient mechanisms that don’t require the overhead of packet communications. If you have a continuously flowing stream of information—say video going from a camera in the driver-assist system to an in-cockpit display, or audio going to an amplifier—there is no need to break up the stream into separate packets, which add a significant amount of addressing and error-correction information. These packets would then have to interrupt a processor that would need to process each one, discard up to 3/4 of the transmitted overhead bits, and again assemble the data into a continuous stream to be fed to a digital-to-analog converter of some sort. Ethernet also doesn’t supply the higher-level protocols that are needed to manage communication channels, transmission errors, control of various devices, etc. Such higher-level protocols do exist, but they are no longer part of the Ethernet realm. In fact, they could be used over many other physical layers that are not called Ethernet. Packet communication of streaming data wastes a significant amount of bandwidth and very significantly increases processor performance requirements to handle the increased interrupt load along with the software stacks needed to process the packets. These stacks make determinism difficult and introduce varying amounts of latency that affects audio and video presentation.
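Muyshondt’s overhead argument is easy to quantify. The back-of-the-envelope sketch below uses the standard minimum Ethernet, IPv4 and UDP header sizes; the payload sizes are arbitrary examples, not figures from the interview. For a 64-byte audio chunk, roughly half of what goes over the wire is framing, which is exactly the waste a dedicated streaming channel avoids.

#include <stdio.h>

/* Rough per-packet cost of sending a continuous stream as
 * Ethernet/IPv4/UDP packets.  Header sizes are the standard minimums;
 * the payload sizes are arbitrary examples. */
int main(void)
{
    const double eth_overhead = 7 + 1 + 14 + 4 + 12; /* preamble, SFD, header, FCS, inter-frame gap */
    const double ip_overhead  = 20;                   /* minimal IPv4 header */
    const double udp_overhead = 8;                    /* UDP header */
    const double overhead     = eth_overhead + ip_overhead + udp_overhead;

    for (int payload = 64; payload <= 1024; payload *= 2) {
        double efficiency = payload / (payload + overhead);
        printf("payload %4d bytes -> wire efficiency %.0f%%\n",
               payload, 100.0 * efficiency);
    }
    return 0;
}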


EECatalog: Where does the technology need to go to cement Ethernet’s role in the car?

Muyshondt, Microchip: If by Ethernet you mean packet-based communication technology, that role is pretty well cemented in the car for future applications, as mentioned above. The cement is still hardening, but there is no doubt that for inter-domain communication, both inside and outside the vehicle, packet-based communication is very relevant.

Much remains to be seen about what is implemented at the physical level, to transport these packets of information. So far, there are more than 150 vehicle models that have standardized on MOST technology for their high-speed network. MOST, in turn, has been enhanced to include packet communications and a dedicated Ethernet packet channel that can move unmodified Ethernet frames. We expect Ethernet to be used for some types of communication within the vehicle, using the existing MOST infrastructure. Additionally, a next-generation, multi-gigabit version of the MOST standard is already being developed for future applications. Even if packet communication is used for all in-vehicle applications, the MOST technology’s full bandwidth can be allocated to Ethernet-style communication, providing an automotive-proven physical layer that also has the flexibility to allow streaming communication in parallel with the packet-based channels.

The MOST standard has already cemented itself with companies such as General Motors, Volkswagen and Toyota, along with most of the other major car makers of the world. These are large-volume manufacturers that have already proven the MOST technology’s cost-effectiveness for even midrange to low-end vehicles. Additionally, the MOST Ethernet Packet channel is helping car makers implement Ethernet-style communications in vehicles.

Ethernet-style communications are already in place, while the Ethernet physical layer really isn’t being used in vehicles.

Lichtblau, Mentor Graphics: Ethernet stacks need to be implemented in the embedded basic software (BSW) stack that serves as the central operating platform in car ECUs. The tooling provided for ECU development needs to be set up to configure and test the Ethernet components. IP communication needs to be complemented with higher-level protocols to accommodate automotive requirements.

Hoffman, Intel: OEMs need to take a leadership position in solving any remaining technical issues and pulling their supply chain into the ecosystem. This begins with audio systems implementing audio-video bridging (AVB), and extends to other deterministic control systems in the vehicle. While there are risks with implementing any new technology, automakers have great influence in stimulating the development of related software that will be needed. For the first cycle or two there may not be significant savings; however, the first adopter will gain the most in the end through reduced development costs, reduced equipment costs and long-term vehicle fuel savings due to vehicle weight reduction.

DiFiore, Xilinx: There are two main issues:

  1. 1Gbps physical transport in automotive environments. There’s a need for a low-cost, single, unshielded twisted-pair (UTP) solution that meets automotive EMC requirements. This need is currently met by Broadcom’s BroadR-Reach technology for 100Mbps Ethernet networks. The industry still needs an affordable 1Gbps UTP solution to really cement use of Ethernet in automotive applications, especially for A/V uses. Broadcom and other Ethernet vendors are working on technology to meet the 1Gbps requirement.
  2. Guaranteed quality of service (QoS) over Ethernet for reliable, deterministic, real-time data delivery and audio/video streaming over Ethernet in a car. This need is not fully satisfied by current IEEE Ethernet standards. However, standards dealing with bandwidth reservation, time synchronization and packet prioritization are emerging, and the AVnu Alliance is developing ways of adapting these standards for automotive needs (a minimal endpoint-side prioritization sketch follows this list). Many automotive OEMs are currently evaluating Ethernet extensions against their list of requirements. At this point, it’s not clear if OEMs will agree upon a complete, standardized Ethernet stack or if different OEMs will simply adopt variants of Ethernet AVB.
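To give a flavor of the prioritization piece only, here is a Linux-flavored endpoint sketch, not an automotive AVB stack and not tied to any panelist’s product, that marks a socket’s traffic for preferential egress treatment. Bandwidth reservation and time synchronization would require a full AVB/TSN implementation and are not shown.

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

/* Linux-specific sketch: mark a UDP socket with a high transmit priority so
 * the kernel's egress queueing (and, with a suitably configured VLAN, the
 * 802.1p PCP field) can favor this traffic.  This only hints at the packet-
 * prioritization piece of AVB; stream reservation and time sync
 * (802.1Qat/802.1AS) need a full AVB/TSN stack. */
int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int prio = 6;  /* high priority class; the exact mapping is configuration-dependent */
    if (setsockopt(fd, SOL_SOCKET, SO_PRIORITY, &prio, sizeof(prio)) < 0)
        perror("setsockopt(SO_PRIORITY)");

    /* ... send latency-sensitive audio/video datagrams on fd ... */
    close(fd);
    return 0;
}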

EECatalog: What are the challenges you expect to address?

Muyshondt, Microchip: The biggest challenge is to have the right system, at the right point in time, at the right cost point. The networking infrastructure inside vehicles has very different requirements than the typical IT infrastructure, in terms of robustness and surviving in a harsh environment. Microchip is a leader in both consumer and automotive applications, and I can tell you that it is not easy to simply take a consumer device and harden it for automotive applications. The device has to start out with the design goal of being used in automotive applications.

Bandwidth costs money. You can either spend that money on the physical connection, or you spend it on bigger processors with more computing capabilities. The challenge is to balance all sides of the equation to obtain the optimum system. This requires participation by the people designing the system. Carmakers have to be involved from the ground up in specifying the various parameters, including how the system will be supported after it rolls off the assembly line, and for many years after it rolls away from the dealership. Only carmakers and their suppliers can properly assess the correct trade-offs among the various competing objectives of a networking and communication system. Their timeframes for choosing a technology have to look five years or even a decade into the future, in order to select the appropriate technology to use. Carmakers have to be technology decision-makers, and not just the users of what other industries may provide.

Infrastructure, such as a networking system, doesn’t provide a lot of differentiation between cars. As such, it is in everyone’s best interest to use common “plumbing” to move the data behind the dashboard. Carmakers need to cooperate with each other on a technology they can truly influence, rather than relying on devices primarily built for other industries that have much higher volumes, and therefore much higher influence, than the automotive industry. Carmakers’ products have very long life cycles, measured in dozens of years rather than a few months, and therefore their choices need to have built-in reliability and robustness. They also have to have the confidence that the actual devices they select to implement a technology will still be manufactured more than a decade later, when the consumer and IT devices of today are but a long-forgotten memory.

Ethernet-style communications can help address some of these challenges, alongside streaming technologies such as the MOST standard. The Ethernet physical layer, on the other hand, just isn’t well suited for automobiles.

Lichtblau, Mentor Graphics: Mentor is committed to providing an Ethernet stack as part of its AUTOSAR solution and will fully support Ethernet in cars, and diagnostics over Ethernet, with its embedded AUTOSAR software stack as well as the AUTOSAR tooling.

Hoffman, Intel: Intel believes in standards and has benefited most when standards are widely adopted. Our participation in automotive forums including the AVnu Alliance (avnu.org), GENIVI Alliance (genivi.org), Wi-Fi Alliance (wi-fi.org) and others brings the best of the ecosystem together to agree on common methods for solving common problems. Collaboration with these groups allows Intel to produce silicon solutions such as the Intel Ethernet Controller I21X family with support for AVB. Intel also develops proof-point technologies to jump-start the industry, including reference implementations such as Open AVB (https://github.com/intel-ethernet/Open-AVB).

DiFiore, Xilinx: Xilinx solutions already address Ethernet QoS challenges, and the company has worked with Digital Design Corporation (DDC, a Xilinx Alliance Program member) to develop a complete Ethernet AVB solution for automotive applications, called EAVB, for use with Xilinx FPGAs and the Xilinx Zynq All Programmable SoC. Ethernet AVB implementations on Xilinx All Programmable platforms allow our customers to develop and field system designs well ahead of their competitors and to react quickly as the automotive uses of Ethernet and the related standards evolve and grow.


Cheryl Berglund Coupé is managing editor of EECatalog.com. Her articles have appeared in EE Times, Electronic Business, Microsoft Embedded Review and Windows Developer’s Journal and she has developed presentations for the Embedded Systems Conference and ICSPAT. She has held a variety of production, technical marketing and writing positions within technology companies and agencies in the Northwest.

Ethernet Traffic Spawns New Class of Processors

Thursday, February 23rd, 2012

Open-Architecture Packet Processors Address Increased Ethernet Traffic and Network Security Demands

Packet processors are a distinct class of purpose-built processors that have evolved in response to increased utilization and dependence on the Internet and networks in general. Most intelligent devices, including mobile phones, set-top boxes, game stations, entertainment centers and PCs, are connected to networks which are becoming more and more IP-based. While universal connectivity offers numerous advantages, it also introduces new risks and concerns. Ethernet network management, monitoring and security are becoming key issues, and packet processors are built specifically to deal with these issues.

Packet processing is the act of identifying, inspecting, extracting, manipulating or otherwise accessing the elements of a data packet. Its purpose is to gather statistics, interpret data, provide secure communications and perform traffic shaping and routing. When the data packet has been examined, a predefined action is taken based on its contents. Packet processors offer several advantages over traditional Ethernet network processors because their hardware and software are tailored for packet movement, which gives them the ability to perform at multi-gigabit line rates.

Packet Processors Improve Network Management
As networks expand and become more complex, they place an additional processing burden on communications systems. Packet processors can perform traffic shaping, security algorithms, compression, encryption and other tasks, prior to the network traffic reaching a server – and thus they represent a cost-effective alternative to upgrading the server’s host processors. Packet processing is critical to the packet-dependent applications that are at the heart of today’s networks – applications such as session border control, secure access (IPsec), firewalls, network address translation (NAT), traffic management, routers, switches and packet inspection.

The increasing demand for network security and content management has placed severe demands on network service providers. In order to truly control their networks, providers need to know where traffic originates, where it is going and most importantly what the traffic contains. This means that all layers of individual packets must be analyzed for content, a process known as “deep packet inspection” or DPI.

Deep Packet Inspection
Increasing dependence on IP networks has not been without its challenges. One of the thorniest issues has been Ethernet network security. The migration from the LAN to wider public networks introduced threats such as denial-of-service attacks, spam, worms, viruses, hackers, malware and spyware.

In response to these threats, network users and administrators turned to tactics such as VPNs, tunneling, firewalls, anti-virus software, MAC filtering and encryption for wireless networks. But most of these strategies focused on how message envelopes are passed through the network when the heart of the issue is what’s inside the envelope — the message itself. Deep packet inspection addresses this issue head-on and DPI is one of the key capabilities of packet processors.

DPI gives network managers the ability to examine a packet from Layer 2 through Layer 7, including IP packet headers, data protocol structures and the actual data content (payload) of a message. It can be used to search for protocol non-compliance, viruses, spam and network intrusions, and to apply pre-defined decision criteria (a minimal sketch of this decision step follows the list below) to determine whether an IP packet should be:

  • Blocked or passed through only a certain point in the network
  • Routed to a different destination
  • Marked or tagged (e.g., for QoS)
  • Collected as statistical information (e.g., billing or traffic related)

Controlling network usage has become another major factor behind the deployment of DPI. With DPI, it is possible for network managers to understand in granular detail the amount of traffic generated by each user and each application. Based on this information, they can make intelligent management decisions, for example identifying latency critical traffic and prioritizing it above other packets.

Figure 1: Packet Processors in IP Communication Networks

Figure 1 illustrates some of the points in today’s networks where packet processors are playing a key role.

Example Application: Policy Enforcement, Data Retention and Lawful Intercept
As networks migrate to IP-based communications, service providers require a new level of network control. Operators need to identify and classify traffic, give higher priority to latency-sensitive packets such as VoIP and streaming video, enforce policies and service agreements, monitor perceived VoIP and video quality, search for and block viruses, and collect statistics and billing information. All this is a function of DPI.
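The statistics-and-billing side of that workload can be sketched just as simply. The toy accounting table below tallies bytes per source IPv4 address; a real packet processor would track full 5-tuple flows and export the records to a collector (for example via IPFIX), but the accounting idea is the same.

#include <stdint.h>
#include <stddef.h>

/* Toy per-source-address byte counter in a small open-addressed table,
 * illustrating the statistics/billing function of DPI.  Real systems track
 * full flows and export records; this only shows the accounting idea. */
#define TABLE_SIZE 1024

struct counter { uint32_t src_ip; uint64_t bytes; int used; };
static struct counter table[TABLE_SIZE];

void account_packet(uint32_t src_ip, size_t wire_len)
{
    uint32_t h = (src_ip * 2654435761u) >> 22;          /* simple multiplicative hash -> 0..1023 */
    for (int probe = 0; probe < TABLE_SIZE; ++probe) {
        struct counter *c = &table[(h + probe) % TABLE_SIZE];
        if (!c->used) { c->used = 1; c->src_ip = src_ip; }
        if (c->src_ip == src_ip) { c->bytes += wire_len; return; }
    }
    /* table full: a real implementation would evict or export old entries */
}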

Traditionally, standard x86-based servers were able to perform this function. But as traffic reaches 10Gbit/sec data rates, traditional servers are no longer capable of sustaining such speed. Packet processors are an ideal solution for this problem.

Figure 2: Traditional Server Upgraded with Packet Processor

Figure 2 shows how a traditional “pizza box” server can be upgraded with a packet processor to sustain 10Gbit/sec data rates. Note that in the upgraded architecture, high data-rate traffic is localized to the packet processor module, while only selected data records, or packets of specific interest, are captured and forwarded to the host for further processing and retention.

Example Application: Secure Gateway, Router, Firewall
Enterprise networks now extend far beyond the walls of the actual office location. Multiple sites are connected together, remote workers are connected to the main office, everyone needs high-speed access to the Internet and VoIP (which uses the same network) dominates voice services. This level of connectivity, coupled with ever-increasing data bandwidth, greater flexibility and increased security risks, drives the need for high-performance routers, firewalls and secure gateway devices.

These devices must be capable of supporting multiple high-speed VPN tunnels that require data encryption; searching all packets for virus signatures; prioritizing latency-sensitive VoIP packets; protecting internal networks by blocking certain packets and performing routing and address translation. All this needs to happen at speeds of up to 10Gbit/sec. Using x86 servers to cope with such performance demands would require a significant number of units, which makes that solution impractical. However, packet processors are built to handle such tasks much more efficiently. Figure 3 shows how a traditional server performs this function, in contrast to a packet processor, which can achieve much higher performance.

Figure 3: General Purpose Processor vs. Packet Processor

The packet processor utilizes a number of hardware offload blocks that assist in moving packets through, allowing the processor cores to focus on other tasks. Packet processors can be used alone, for example in a MicroTCA™ chassis or a 2U-3U 19-inch packet processing appliance, or they can be added to standard x86 servers in the form of PCI Express cards, offloading performance-critical tasks such as data encryption, virus pattern search and routing. Packet processing solutions, such as those from GE Intelligent Platforms, can significantly speed up network performance and traffic analysis.

Example Application: Network Test Equipment
Network test and monitoring equipment typically consists of test probes and servers that consolidate and manage test data. High-performance test probes are most often implemented using FPGAs. There is no doubt that FPGAs offer very high performance, but they have difficulty keeping up with ever-evolving protocols. Packet processors, on the other hand, offer tremendous flexibility because they can be programmed using standard C, they can support a large number of protocols and they can be quickly and easily updated in the field. Figure 4 shows a typical FPGA test and monitoring system implementation and a proposed implementation using packet processors.
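That flexibility comes down to the fact that, in C, protocol support is data plus a handler rather than a hardware respin. The simplified dispatch sketch below (handler names and the chosen EtherTypes are examples only) shows why adding a protocol can be delivered as a field software update.

#include <stdint.h>
#include <stddef.h>

/* Simplified protocol dispatch for a C-programmable packet processor:
 * adding a protocol is a table entry plus a handler. */
typedef void (*parser_fn)(const uint8_t *payload, size_t len);

static void parse_ipv4(const uint8_t *p, size_t n) { (void)p; (void)n; /* ... */ }
static void parse_ipv6(const uint8_t *p, size_t n) { (void)p; (void)n; /* ... */ }
static void parse_mpls(const uint8_t *p, size_t n) { (void)p; (void)n; /* ... */ }

static const struct { uint16_t ethertype; parser_fn fn; } parsers[] = {
    { 0x0800, parse_ipv4 },
    { 0x86DD, parse_ipv6 },
    { 0x8847, parse_mpls },   /* added later without touching the data path */
};

void dispatch(const uint8_t *frame, size_t len)
{
    if (len < 14) return;                                    /* shorter than an Ethernet header */
    uint16_t ethertype = (uint16_t)((frame[12] << 8) | frame[13]);
    for (size_t i = 0; i < sizeof parsers / sizeof parsers[0]; ++i)
        if (parsers[i].ethertype == ethertype) {
            parsers[i].fn(frame + 14, len - 14);
            return;
        }
}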

Figure 4: FPGA Testing Gear vs. Packet Processor Testing Gear

The variety of packet processor form factors available allows for several different implementation options. MicroTCA is a very good choice for test probe implementations: a small 1U MicroTCA chassis can be easily installed and offers significant expansion options. For example, it would be possible to simply plug in additional packet processor AdvancedMC modules to gain additional test ports. Hot-swappability, which is an inherent feature of AdvancedMCs, enables new usage models, where additional modules can be added to and removed from a running test probe. On the server side, a PCI Express packet processor could optionally be used to gain extra performance.

Packet Processors Plus Software Equal Solutions
By combining the capabilities of an extensive selection of packet processors with middleware from industry partners, valuable Ethernet network applications can be built from commercial off-the-shelf products. These are precisely the kinds of applications that network designers and administrators are struggling to deploy in their existing and next-generation communications networks.

Conclusion
Increased Ethernet network traffic and demands for network security have spawned a new class of processors known as packet processors, which supplement the capabilities of the host system. Packet processors allow network service providers to see where traffic originates, where it is going and most importantly what the traffic contains.

Packet processors are able to analyze all layers of individual packets at full line speed, a process known as “deep packet inspection” or DPI. Ethernet network devices that incorporate DPI technology are increasingly being designed around open, scalable and modular architectures, and GE Intelligent Platforms offers an extensive selection of packet processors and partner software that fully addresses this need.



Rubin Dhillon is the industry manager for communications and networking solutions in the Military and Aerospace division of GE Intelligent Platforms. Rubin holds a bachelor of business degree from the Victoria University of Technology in Melbourne, Australia and has more than 18 years’ experience in embedded communications technologies for the commercial, telecommunications and military markets.

