AdvancedTCA ASI Fabric

It is interesting to watch the positioning and posturing around the Fabric alternatives in the AdvancedTCA market today. We used to have the Bus Wars, but the focus has shifted: form factor discussions simply aren’t as exciting as the new Fabric Wars.
Choices in AdvancedTCA Fabric are limited to four options: Fibre Channel, InfiniBand, Ethernet and Advanced Switching Interconnect (ASI). Fibre Channel is generally accepted as a storage Fabric, InfiniBand was hot, then cooled, and is now warming up again, Ethernet continues to chug along and ASI, still in its infancy, is a topic of much discussion. I have heard “You can’t go wrong with Ethernet,” and granted, Ethernet has proven to be reliable and safe, but it reminds me of a desktop computer vendor that was also considered reliable and safe in the early 1980s. That vendor is no longer a major force in the desktop PC arena. Market shifts happen, and they happen when you least expect them.
The term “Fabric,” when applied to AdvancedTCA, refers to board-to-board interconnect routed over a backplane. The requirements are relatively straightforward: a deterministic transport (the ability to bound transport time), support for peer-to-peer connections (allowing multiple boards of the same type to exist on the Fabric), compatibility with existing drivers and reasonable cost. Today’s Fabrics meet these requirements to varying degrees, and ASI appears to be a better match for them than the alternatives.
The concept behind Advanced Switching Interconnect is direct: take the PCI Express™ physical layer and top it with a set of protocol enhancements that align ASI with the Fabric requirements.
The architects of ASI support three different traffic types through an adaptation layer called Protocol Interfaces (PIs). The first of the three PI classes is Fabric services, which defines device configuration, discovery and enumeration. The second is tunneling, which allows ASI to encapsulate protocols such as PCI Express™, Ethernet, ATM, iTDM and Fibre Channel. The third comprises native data movement protocols.
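The three PI classes can be pictured as a simple dispatch on the PI number carried in each packet. The PI numbers and table entries in this sketch are hypothetical, chosen only to illustrate the idea; they are not the actual assignments from the ASI specification:

```python
# Illustrative sketch only: the PI numbers below are invented and do NOT
# match the real Protocol Interface assignments in the ASI specification.
from enum import Enum

class PIClass(Enum):
    FABRIC_SERVICES = "fabric services"   # configuration, discovery, enumeration
    TUNNELING = "tunneling"               # encapsulated PCIe, Ethernet, ATM, ...
    NATIVE = "native data movement"       # ASI-native transport protocols

# Hypothetical mapping from PI numbers to the three classes described above.
PI_TABLE = {
    0: PIClass.FABRIC_SERVICES,
    8: PIClass.TUNNELING,
    16: PIClass.NATIVE,
}

def classify(pi_number: int) -> PIClass:
    """Dispatch a packet to a handler class based on its PI field."""
    try:
        return PI_TABLE[pi_number]
    except KeyError:
        raise ValueError(f"unknown PI {pi_number}")
```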
One of the more interesting aspects of ASI is its tunneling capability. With encapsulation, an ASI bridge device takes the incoming native protocol and converts it into ASI packets by prepending header information. The receiving ASI bridge removes this header information and passes the packet to the target device. Tunneling allows architects to support time-division multiplexed (TDM) traffic, Fibre Channel, PCI and Ethernet simultaneously over a single ASI link, so designers can preserve existing hardware and software investments and reduce platform costs. This is a powerful concept not only for AdvancedTCA but also for Advanced Mezzanine Cards, where a single ASI switch is all that is required to handle multiple protocols.
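The encapsulate/decapsulate round trip that a pair of ASI bridges performs can be sketched as follows. The header layout here (PI, destination port and payload length fields) is invented for illustration and does not match the real ASI header format:

```python
# Minimal sketch of tunneling through a pair of bridges. The 4-byte header
# layout below is hypothetical, not the actual ASI route header.
import struct

ASI_HDR = struct.Struct(">BBH")  # (pi, dest_port, payload_len) -- invented fields

def encapsulate(pi: int, dest_port: int, native_frame: bytes) -> bytes:
    """Ingress bridge: prepend a header to the incoming native-protocol frame."""
    return ASI_HDR.pack(pi, dest_port, len(native_frame)) + native_frame

def decapsulate(asi_packet: bytes) -> tuple[int, bytes]:
    """Egress bridge: strip the header and recover the original frame."""
    pi, _port, length = ASI_HDR.unpack_from(asi_packet)
    frame = asi_packet[ASI_HDR.size:ASI_HDR.size + length]
    return pi, frame
```

Because the native frame travels through the switch untouched, the same link can carry Ethernet, Fibre Channel and TDM payloads side by side, distinguished only by their PI values.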
Today’s ASI products use the current generation of PCI Express data transport at 2.5Gb/s per lane, which translates to a 10Gb/s SERDES rate on x4 AdvancedTCA links, or roughly 8Gb/s of data throughput after encoding overhead. This compares favorably against gigabit Ethernet, the current Fabric of choice. The higher bandwidth is a perfect fit for media servers, which may require four OC-12 ports (622Mb/s each), a gigabit IP interface and a 4Gb/s storage interface. The sum of these data transports quickly floods gigabit Ethernet and would likely require separate Fibre Channel and TDM transports and switches. ASI’s PIs, congestion management and QoS allow the different data types to coexist on a single Fabric interface. And because ASI is built on the PCIe data transport layer, its performance will improve as the PCI SIG works out the details of next-generation PCIe signaling approaching 6Gb/s.
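These bandwidth figures are easy to verify with back-of-the-envelope arithmetic; the only assumption added here is the 8b/10b encoding overhead of first-generation PCIe signaling:

```python
# Back-of-the-envelope check of the bandwidth figures in the text.
# Assumption: 8b/10b line encoding (first-generation PCIe), so 80% of the
# raw SERDES rate is usable data throughput.
lanes = 4
lane_rate_gbps = 2.5              # per-lane signaling rate
raw = lanes * lane_rate_gbps      # 10 Gb/s SERDES rate on a x4 link
effective = raw * 8 / 10          # ~8 Gb/s after 8b/10b encoding

# Media-server traffic mix from the text:
oc12 = 4 * 0.622                  # four OC-12 ports
gige = 1.0                        # one gigabit IP interface
storage = 4.0                     # 4 Gb/s storage interface
total = oc12 + gige + storage     # ~7.5 Gb/s: floods a single GbE link

print(raw, effective, round(total, 3))  # prints: 10.0 8.0 7.488
```

The combined load fits comfortably inside one x4 ASI link but is more than seven times what a single gigabit Ethernet fabric interface can carry.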
With change, there is always some resistance until a clear winner is declared. In evaluating the different Fabric interface options for AdvancedTCA, consider some of the obvious advantages of ASI: the ability to carry multiple protocols over a single transport, a foundation in PCIe that takes full advantage of next-generation signaling rates and the reuse of existing PCI/PCIe drivers. Market forces will ultimately determine the success or failure of the various Fabric alternatives on their merits, but ASI appears to have many benefits over the others.
Jeff Munch is the Chief Technology Officer at ADLINK Technology. He is also the Chair of the AdvancedTCA subcommittee. He has more than twenty years’ experience in hardware design, software development and engineering resource management.

Contact Information

ADLINK Technology Inc.

5215 Hellyer Ave. #110
San Jose, CA, 95138

tele: 1.408.360.0200
toll-free: 1.866.4.ADLINK
