More Able than Cable



HPEC Uses PCIe Interconnects for Enhanced Data Networking

As network applications have multiplied over the past few decades, communication protocols have followed suit, with an estimated 150 in use by the US government alone. The list includes popular protocols, like Ethernet, Fibre Channel and InfiniBand, but also a host of lesser-known ones, such as Camera Link and SpaceWire.

As new protocols have been introduced, each has come with a different benefit to implementation. For example, Fibre Channel offered reliable delivery, whereas InfiniBand offered low latency.

But because this space is so crowded, it has been especially hard for any single protocol to establish a solid ecosystem. With insufficient revenue, many communication protocols have been unable to advance their capabilities in a timely fashion or fund new product development, so how can they be expected to gain any ground?

Even protocols with strong value propositions and strong revenues have succumbed to these pressures, placing many protocols, along with their equipment lines and suppliers, on the path of extinction. Think of the once-popular Asynchronous Transfer Mode (ATM) that offered superior Quality of Service (QoS), which is now obsolete.

What is interesting to note in this volatile world of communication protocols is that PCI Express (PCIe) interconnects are gaining significant momentum in low-latency, real-time solutions: their transceiver technology has a strong performance roadmap, and they can support High Performance Embedded Computing (HPEC) low-latency data transfers over copper or optical cables. PCIe is being used to provide box-to-box external data paths between rugged embedded systems as well as the internal data path in backplane architectures, such as VPX.

In fact, many current “out-of-the-box” solutions already interconnect standard Intel-based servers in traditional commercial computing environments to shared I/O devices under Windows or Linux, recognizing that different requirements mandate different infrastructure. A good example is the Intel® Virtuous Cycle and the differing requirements for cloud/core infrastructure versus infrastructure at the edge.

Building an Economically Viable Ecosystem
Responding to the consolidation happening within the communication protocol space, network companies, such as Accipiter Systems, have seen an increased need for protocol-independent transports. These transports carry forward a protocol’s value proposition without requiring its protocol-specific products and supply chains.

Although InfiniBand has retained its stature in niche markets, such as DOE-funded supercomputers, it is experiencing a market dynamic similar to ATM’s and is therefore just as threatened. Having watched ATM become obsolete, government system engineers and chief scientists across multiple services are reluctant to design InfiniBand into next-generation systems that require 20-year lifecycles.

Figure 1: PCIe serves low-latency markets not well served by Ethernet.

A good example of a protocol-independent transport that is thriving in new market verticals is Ethernet. For example, Fibre Channel’s reliable delivery is now available as Fibre Channel over Ethernet (FCoE). The logical progression is for such protocol-independent transports to spread into every aspect of communications, but their dominance is still far off.

In the interim, the industry is searching for a low-latency, protocol-independent InfiniBand replacement, especially for inter-box interconnects, that also has an economically viable ecosystem of technologies and suppliers. PCIe is an emerging candidate, since there is a distinct point at which the market verticals served by Ethernet diverge from the low-latency verticals served by PCIe (Figure 1).

When used as a system interconnect, PCIe provides high data rates, ease of integration, low latency and low cost as well as a strong supply chain, all of which benefit rack-level system architects. PCIe-spanned systems now cover a rack and include a heterogeneous suite of processing elements, including FPGA accelerators, GPUs, and sequential processors as well as best-in-class storage elements.

Improving Performance and Data Transmission
Real-time network products, like those from Accipiter Systems, that enable PCIe to span multiple box systems ease the expansion constraints that motherboards and chassis impose through limited expansion slots and module slots, respectively, and also reduce the homogeneous mechanical form factor constraints of both (Figure 2).

Figure 2: Real-time networked platforms enable PCIe to span multiple systems (Elma Electronic Inc.)

Although the most commonly used method of moving data between separate computers is Ethernet, shared memory concepts via standards like VME have been used for years. Providing this capability inside the box is sometimes needed to efficiently move low latency data in advanced applications where timing is critical.
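To make the shared-memory model concrete, here is a minimal sketch using plain POSIX shared memory on Linux rather than any VME- or PCIe-specific product (the region name and size are assumptions for illustration): one process creates and maps a region, a peer maps the same name, and data moves as ordinary loads and stores with no per-message protocol processing.

/* Minimal POSIX shared-memory sketch (illustrative only; reflective-memory
 * and PCIe shared-memory products expose a window that is mapped much like
 * this, but through their own drivers). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_NAME "/hpec_demo_region"   /* hypothetical name */
#define REGION_SIZE 4096                  /* assumed size      */

int main(void)
{
    /* Create (or open) the named region and size it. */
    int fd = shm_open(REGION_NAME, O_CREAT | O_RDWR, 0600);
    if (fd < 0 || ftruncate(fd, REGION_SIZE) != 0) {
        perror("shm_open/ftruncate");
        return 1;
    }

    /* Map it; a peer process mapping the same name sees the same bytes. */
    char *win = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (win == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Writing is just a memory store -- no packets, headers or copies. */
    strcpy(win, "low-latency payload");
    printf("wrote: %s\n", win);

    munmap(win, REGION_SIZE);
    close(fd);
    return 0;
}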

These methods do produce results, but the application programming required to take advantage of them still carries steep development costs. Developers needed something similar to the upper layers, like IP, UDP and TCP, that allow simpler programming models. But the heavy overhead and latency of those protocols left ever-faster link speeds as the only way to compensate.

Overcoming Development Challenges
As the cost of components in embedded systems has decreased while their performance has steadily increased, a very cost-effective solution has begun to emerge. The serial point-to-point connectivity of a protocol such as PCIe provides the underlying structure: PCI and PCIe already interconnect almost everything inside today’s embedded systems.

However, PCIe is based on the PCI bus standard, whose basic design assumes that one CPU controls everything in the box in a top-down hierarchy. This presents some challenges.

If all the CPUs are to be peers, how can two of them talk to each other, even over a clean serial point-to-point protocol like PCIe? Fortunately, special bridges called non-transparent bridges (NTBs) provide a way to “bridge” elements, allowing each CPU node to keep its normal top-down PCI structure while still having “windows” into the other nodes. As previously noted, though, shared memory concepts remain non-trivial from a programming perspective.
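To give a feel for what such a “window” looks like to software, the hedged sketch below maps a PCIe BAR through the Linux sysfs resource file and treats it as memory. The device path, BAR index and aperture size are assumptions for illustration; production code would normally reach the window through the vendor’s NTB driver or middleware, which also programs the address translation beforehand.

/* Hedged sketch: touch a peer node through an NTB BAR window.
 * The sysfs path and BAR index are hypothetical; real systems rely on the
 * vendor's NTB driver or middleware to configure the translation first. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define NTB_BAR_PATH "/sys/bus/pci/devices/0000:03:00.0/resource2" /* hypothetical */
#define WINDOW_SIZE  (1UL << 20)   /* 1 MiB aperture, assumed */

int main(void)
{
    int fd = open(NTB_BAR_PATH, O_RDWR | O_SYNC);
    if (fd < 0) {
        perror("open BAR");
        return 1;
    }

    /* mmap() of the resource file exposes the BAR as memory-mapped I/O. */
    volatile uint32_t *win = mmap(NULL, WINDOW_SIZE, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);
    if (win == MAP_FAILED) {
        perror("mmap BAR");
        close(fd);
        return 1;
    }

    /* Once the NTB translation is programmed, stores land in the peer's
     * memory and loads read it back -- no packets or protocol stack. */
    win[0] = 0xC0FFEEu;
    printf("peer scratch readback: 0x%x\n", (unsigned)win[1]);

    munmap((void *)win, WINDOW_SIZE);
    close(fd);
    return 0;
}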

Standards organizations have long been at the forefront of emerging technology platforms. VPX, the workhorse of modern embedded systems, is the next-generation descendant of the VME standard and is managed by VITA, a non-profit industry organization dedicated to promoting the concept of open technology. Using the guidelines set forth by the trade association, most of today’s VPX SBC vendors have developed a middleware layer of abstraction to allow easier access to configuration and setup required within the PCI structure.

Both transparent and non-transparent bridges exist in numerous places within embedded systems. As more PCIe switching is added to system designs, manufacturers need to set up the individual configuration spaces to enable data movement and access between, and among, all the nodes. Middleware can provide the tools and architecture to manage this environment (see Figure 3 and the sketch that follows it).

Figure 3: Middleware tools, like Multiware from Interface Concept, can manage data movement and access.
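As a small illustration of what such middleware automates, the sketch below simply reads one device’s standard configuration-space header through the Linux sysfs config file (the device path is hypothetical). A middleware layer performs this kind of discovery, plus the corresponding writes that program bridge windows, across every port in the switch fabric so applications never have to.

/* Illustrative only: read one device's PCI configuration-space header via
 * sysfs. The device path is hypothetical. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char *cfg = "/sys/bus/pci/devices/0000:03:00.0/config";
    uint8_t hdr[64];   /* standard configuration header */

    int fd = open(cfg, O_RDONLY);
    if (fd < 0 || read(fd, hdr, sizeof hdr) != (ssize_t)sizeof hdr) {
        perror("config read");
        return 1;
    }

    /* Standard header layout: vendor ID at 0x00, device ID at 0x02,
     * base class code at 0x0B. */
    uint16_t vendor = (uint16_t)(hdr[0] | (hdr[1] << 8));
    uint16_t device = (uint16_t)(hdr[2] | (hdr[3] << 8));
    printf("vendor 0x%04x, device 0x%04x, base class 0x%02x\n",
           vendor, device, hdr[0x0B]);

    close(fd);
    return 0;
}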

Virtual Ethernet over PCIe benefits from this type of middleware, as it allows the use of the well-known socket programming model while taking advantage of PCIe’s speed and ever-increasing bandwidth. But for the low latency required by today’s increasingly time-critical applications, this technique is not as effective.
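The appeal of the socket model is that application code does not change at all when the “Ethernet” underneath is actually a PCIe-backed virtual interface. The short sketch below is ordinary UDP socket code; the address and port are assumptions for illustration, and nothing in the code knows whether the route to that peer crosses a copper Ethernet cable or a PCIe fabric.

/* Ordinary UDP sender: identical whether the route to 192.168.10.2 runs
 * over physical Ethernet or a virtual Ethernet interface carried on PCIe.
 * The address and port are assumptions for illustration. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in peer;
    memset(&peer, 0, sizeof peer);
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(5000);
    inet_pton(AF_INET, "192.168.10.2", &peer.sin_addr);

    const char msg[] = "hello over (virtual) Ethernet";
    if (sendto(s, msg, sizeof msg, 0,
               (struct sockaddr *)&peer, sizeof peer) < 0)
        perror("sendto");

    close(s);
    return 0;
}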

Enter programming APIs, developed to allow simpler access to a pre-configured “shared memory.” Programmers no longer need to know how to configure all the various spaces or how to set up and use the various Remote Direct Memory Access (RDMA) schemes within the architecture. The middleware packages developed by various vendors, each with their own set-up concepts and methods, provide all the tools necessary.
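To show the flavor of such an API without pointing at any particular vendor, the sketch below uses invented shm_region_* names, with stub bodies backed by a local buffer so it compiles stand-alone; real middleware would instead hand back a mapping of a window configured at system start-up. The point is the application-level sequence: open a pre-configured region by name, copy data into it, and notify the peer node.

/* Hypothetical shared-memory API sketch -- the shm_region_* names are
 * invented for illustration and belong to no particular vendor's
 * middleware. The stub bodies use a local buffer so the sketch compiles
 * stand-alone. */
#include <stdio.h>
#include <string.h>

typedef struct {
    unsigned char window[4096];       /* stand-in for the PCIe-visible region */
} shm_region_t;

static shm_region_t demo_region;      /* stand-in for a pre-configured window */

static shm_region_t *shm_region_open(const char *name)
{
    (void)name;                       /* real code looks the window up by name */
    return &demo_region;
}

static void *shm_region_ptr(shm_region_t *r) { return r->window; }

static int shm_region_notify(shm_region_t *r, int node)
{
    (void)r;                          /* real code rings a doorbell interrupt */
    printf("notify node %d\n", node);
    return 0;
}

static void shm_region_close(shm_region_t *r) { (void)r; }

/* Application-level code: no BAR, bridge or RDMA setup in sight. */
int main(void)
{
    shm_region_t *r = shm_region_open("sensor_feed");  /* hypothetical name */
    const char sample[] = "IMU frame 42";

    memcpy(shm_region_ptr(r), sample, sizeof sample);  /* plain store */
    shm_region_notify(r, 1);                           /* wake the consumer node */
    shm_region_close(r);
    return 0;
}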

Vendor-specific APIs solve the “now” problem of taking full advantage of PCIe as an interconnect in low-latency, high-bandwidth HPEC systems. But there needs to be movement toward standardizing how to work in a multiple-vendor environment that fundamentally operates at the PCIe level. For now, we find ourselves abstracting the programming effort so that multiple applications can be developed against a “fixed” but non-standard API structure that works only with the middleware of the vendor for which those APIs were developed.

A Look to Future Requirements
For some time, PCIe has been used as a low-level backbone network data path, providing a protocol with wide support and a forward path for long-term system developers. However, more standardization of the APIs above the lower-level hardware protocol is still needed.

Emerging markets may become the catalyst needed to move this definition forward. And with the ecosystem supporting PCIe constantly evolving, and the need for a low-latency, system-aware protocol growing, providing a box-to-box PCIe implementation for data-intensive applications is more urgent than ever.


As a senior field application engineer for Elma Electronic, David Hinkle applies more than 30 years of experience architecting and designing integrated systems for the military, aerospace, telecom, and industrial automation markets. His focus has been in the field, assisting customers in understanding open standards and architectures, including VITA and PICMG, and how best to take advantage of them.

 

Dan Flynn has over 20 years of experience designing computer networking solutions for commercial and government customers. His passion is understanding customer challenges and then envisioning and driving innovative computer networking products and solutions into the marketplace that solve those challenges. Flynn has led development teams to produce system, software, hardware and ASIC-level products.
