Critical Mass and Innovation Keep PCI Express Moving Forward
PCI Express (PCIe) has passed the point of critical mass, with PCIe ports forming the standard interconnect from servers to storage, and consumer to embedded. The technology – and its growth – show no signs of slowing.
High performance, low cost and power management are the big three demands for devices ranging from consumer to deeply embedded. PCI Express delivers all three, and new protocols and standards in the works promise to extend its reach even further. EECatalog talked with John Wiedemeier, senior product marketing manager at Teledyne LeCroy Corporation, and with Larry Chisvin, vice president of strategic initiatives, and Akber Kazmi, senior director of product marketing for PCI Express switches, both at PLX Technology, to hear why they’re bullish on PCIe.
EECatalog: We’re hearing a lot about PCI Express as an SSD interface in the enterprise for its combination of high performance, low cost and power-management capabilities. How do you see this playing out in embedded applications? What are the challenges that need to be addressed?
John Wiedemeier, Teledyne LeCroy Corporation: Over the last year we have seen PCI Express adopted as a host interface by three SSD storage technologies: SATA Express, NVM Express (NVMe) and SCSI Express.
|Traces taken with the Teledyne LeCroy Protocol Analyzer|
Each of these standards, built on the PCI Express protocol, is gaining momentum in the storage marketplace, though some are moving faster than others. For example, new products based on SATA Express were introduced earlier this year. Although SATA Express has gotten an early start in consumer and enterprise, the other two technologies have strong market expectations in the enterprise. Recently, IDT announced an NVMe enterprise flash memory controller with native support for PCIe Gen 3. NVMe chips available at this stage of the game will be highly sought after by storage companies wanting to take advantage of the higher performance. SCSI Express, the most recently released of the three technologies, is based on SCSI, which is already widely accepted in most storage solutions on the market.
Each of these technologies has potential in the embedded space; their small size and high performance automatically qualify them for highly integrated embedded applications. Rotational media has had some big disadvantages in the past due to size, longevity and sensitivity to shock. As with all embedded devices, the specific environments in which these drives are used will have to be addressed through better SSD packaging and continued improvement in performance.
|Larry Chisvin||Akber Kazmi|
Larry Chisvin and Akber Kazmi, PLX Technology: The trends that are making PCIe an increasingly popular interface in the enterprise market are the same ones that will be valuable in the embedded market. Low power is important, if not critical, for embedded applications, depending upon the type of product. Clearly, embedded products that run on a battery can command a premium if they last longer on each charge. But even products that are not battery-powered benefit from lower power, since it leads to lower cooling requirements and smaller power subsystems, which in turn provide more flexibility in form factor – an often-critical factor in embedded applications – and lower overall system cost. SSDs with a direct PCIe interface also cost less because fewer components are needed: most CPUs and protocol controllers already expose PCIe, so a direct connection can be made.
The performance advantage of SSDs over HDDs is well understood, and a PCIe connection matches well with this advantage, since the bandwidth and latency of PCIe allow the full performance benefit to be realized. This is as true in high-performance embedded applications as in the enterprise market. Leaders in the SSD market selected PCIe as a primary interconnect technology due to its broad availability and its scalability to match any performance or system need. For example, PCIe has evolved from 2.5GT/s to 8GT/s per lane and is moving to 16GT/s for its next generation, in addition to allowing two, four, eight or 16 lanes to be aggregated into a single port. Beyond standard bandwidth-related enhancements, some PCIe vendors have added valuable features to their products that are key to implementing the high availability, serviceability and robustness needed for embedded systems.
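The scaling described above can be made concrete with a quick calculation. The sketch below is illustrative only; the data rates and encoding overheads (8b/10b for Gen 1/2, 128b/130b for Gen 3 and later) come from the PCIe specifications, while the function itself is a hypothetical helper that ignores packet and protocol overhead.

```python
# Illustrative per-link PCIe bandwidth: generation -> (GT/s per lane,
# line-encoding efficiency). Gen 1/2 use 8b/10b; Gen 3/4 use 128b/130b.
ENCODING = {
    1: (2.5, 8 / 10),
    2: (5.0, 8 / 10),
    3: (8.0, 128 / 130),
    4: (16.0, 128 / 130),
}

def link_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Usable line rate in Gb/s, ignoring packet/protocol overhead."""
    gt_per_s, efficiency = ENCODING[gen]
    return gt_per_s * efficiency * lanes

for gen in (1, 3):
    for lanes in (1, 4, 16):
        print(f"Gen {gen} x{lanes}: {link_bandwidth_gbps(gen, lanes):.1f} Gb/s")
```

A Gen 1 x1 link delivers about 2 Gb/s of usable bandwidth, while a Gen 3 x16 link approaches 126 Gb/s, which is the flexibility the interview describes.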
The main challenge facing the adoption of PCIe as a direct connection to SSDs is providing standardized interfaces with features similar to traditional HDD interfaces in terms of reliability and maintainability. Several developments underway will help system designers overcome some of these challenges, such as sharing of high-speed clock sources among subsystems, containment of surprise-down events and connecting PCIe subsystems through cabling. To that end, the PCI-SIG is working on new standards to help develop universal solutions through downstream port containment (DPC), Enhanced DPC and a new, lower-cost cabling specification.
EECatalog: As NVM Express-based products come to market, what is the impact on developers? What still needs to happen to assure the success of this new standard?
Wiedemeier, Teledyne LeCroy Corporation: Like all of the PCIe SSD technologies, NVM Express (NVMe) is a complex protocol that removes the overhead of traditional rotational media and improves performance.
|UNH-IOL’s NVMe compliance tests running on the Teledyne LeCroy protocol analysis tools.|
New protocol analysis tools are required that can decode NVMe traffic between the PCIe host and the NVMe device. Developers need to see the NVMe read and write commands to determine the correctness and performance of firmware, drivers and devices. A compliance test program being developed by the UNH-IOL will ensure that NVMe devices behave according to the specification and are interoperable with PCIe systems. The combination of NVMe protocol analysis and test tools for development with an effective compliance program will ensure that NVMe is a solid standard.
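To give a flavor of what such decoding involves, here is a hypothetical, heavily simplified sketch of pulling the opcode and command identifier out of the first bytes of a 64-byte NVMe submission queue entry. The opcode values for the NVM command set (Flush, Write, Read) are from the NVMe specification; everything else, including the function name and the fabricated entry, is illustrative.

```python
import struct

# NVM command set opcodes per the NVMe specification (subset).
NVM_OPCODES = {0x00: "Flush", 0x01: "Write", 0x02: "Read"}

def decode_sqe_header(sqe: bytes) -> dict:
    """Decode opcode (byte 0) and command identifier (bytes 2-3, little-endian)
    from command dword 0 of a 64-byte submission queue entry."""
    opcode = sqe[0]
    cid = struct.unpack_from("<H", sqe, 2)[0]
    return {"opcode": NVM_OPCODES.get(opcode, f"0x{opcode:02x}"), "cid": cid}

# A fabricated entry: opcode 0x02 (Read), command identifier 7.
sqe = bytes([0x02, 0x00, 0x07, 0x00]) + bytes(60)
print(decode_sqe_header(sqe))  # {'opcode': 'Read', 'cid': 7}
```

A real analyzer correlates these commands with their PCIe-level completions and data transfers, which is where the firmware and driver timing insights come from.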
Chisvin and Kazmi, PLX Technology: The primary impact on developers is better termed an opportunity. Just using PCIe-enabled SSDs offers the value that was mentioned earlier, but once you have PCIe as your primary interface to memory, there are other advantages that can be provided if developers modify their designs to enable them. For example, hooking up the SSD directly to PCIe can reduce the latency of the persistent memory subsystem due to the elimination of legacy bridges. This reduced latency can provide an improvement in overall performance, but the software may need to be modified to fully take advantage of it. Just removing the additional bridges offers some improvement in performance, but there is likely to be code that is not fully optimized now that a critical path has been reduced. The code needs to be aware that a new critical path – somewhere else – is now the limit to performance.
In addition, PCIe can be used for more than just simple interconnection. There are initiatives now being deployed that enable these PCIe-based SSDs to be used as a fabric. This enhances the flexibility of using SSDs in a virtualized environment, for example. Embedded systems often use standard Linux-based operating systems, and can thus make use of the same hardware and software that facilitate multiple hosts, and high-speed, memory-to-memory transfer.
There are several industry forums working on standards that will allow developers to adopt this new technology successfully, with the least amount of disruption to the existing infrastructure and with seamless interoperation between legacy and new products. Standards like SATA Express, SCSI over PCIe, NVMe, SFF-8639, DPC, Enhanced DPC and PCIe cabling are meant to ensure that the deployment of PCIe-based products goes smoothly.
EECatalog: As PCI Express evolves, how are developers addressing performance vs. power consumption issues, especially for handheld/mobile applications facing the demand for “all-day power”?
Wiedemeier, Teledyne LeCroy Corporation: Recently the PCI-SIG announced a liaison agreement with the MIPI Alliance for the new “PCI Express (PCIe) over M-PHY” standard. The M-PHY specification was recently released by the MIPI Alliance as a power-optimized, short-channel technology ideal for broad applications in the mobile segment. Layering PCI Express on top of the low-power M-PHY presents developers with a versatile, low-power interconnect for smartphone and tablet applications. This gives PCI Express a much-needed attribute to continue its integration into consumer and embedded applications.
Chisvin and Kazmi, PLX Technology: One of the major advantages of PCIe is its flexibility. You can tune your bandwidth needs by selecting the number of lanes that you need, and for mobile applications a single lane is often enough. But if you need just a little bit more, you can add another lane to supplement your bandwidth. You can also use the slower, Gen 1 speed to save power, or increase to Gen 2 – or beyond – when necessary. In addition, PCIe has low-power modes that offer reduced power when the link is not being used fully.
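The tuning described above amounts to choosing the slowest generation and narrowest width that still meet the bandwidth requirement. A hypothetical selector might look like the sketch below; the per-lane usable rates reflect the standard PCIe encodings, but the assumption that slower-and-narrower always means lower power is a simplification for illustration.

```python
# Per-lane usable rates in Gb/s: 8b/10b encoding for Gen 1/2,
# 128b/130b for Gen 3. Values derived from the PCIe data rates.
PER_LANE_GBPS = {1: 2.0, 2: 4.0, 3: 8.0 * 128 / 130}

def pick_link(required_gbps: float):
    """Return the (generation, lanes) pair meeting the requirement,
    preferring slower generations and fewer lanes (assumed lower power)."""
    for gen in sorted(PER_LANE_GBPS):
        for lanes in (1, 2, 4, 8, 16):
            if PER_LANE_GBPS[gen] * lanes >= required_gbps:
                return gen, lanes
    return None  # requirement exceeds what this table can supply

print(pick_link(3.0))  # (1, 2) -- a Gen 1 x2 link suffices
```

In practice the trade-off also involves link-state power management (e.g., dropping into low-power states when idle), which this static picker does not model.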
Power management and power reduction are key drivers across the industry, especially in the mobile market. The PCI-SIG has initiated work on a new specification called “PCIe over M-PHY" that would enable the industry to expand the use of PCIe into mobile application.
EECatalog: Where do you expect to see growth in the implementation of cabled PCI Express technology?
Wiedemeier, Teledyne LeCroy Corporation: Today cabled PCI Express is frequently being used to extend I/O connectivity between servers and storage systems. PCIe switches allow servers to fan out to more I/O devices. These I/O devices can be outside of the server blade or cabinet chassis and accessed with cabled PCI Express. Systems will most likely grow in their capacity and make use of this new cable technology.
Chisvin and Kazmi, PLX Technology: PCIe is ideal for cabling both inside and outside an embedded box. From the point of view of an application, the subsystems that are cabled together appear as one system, since the protocol that comes across the cable is the same PCIe that exists within the subsystem. Bridging to another interconnect in order to send the information across a cable is unnecessary, and would force the application or drivers to understand where the cable exists. So, for applications that do not need the special features of another protocol (retained for legacy reasons, for example), this is a natural and sensible thing to do.
PCIe can be sent across a copper or optical cable – PLX Technology has demonstrated this – and can thus span short distances (inside a box), medium distances (between boxes) or long distances (across the room). This can be done directly from a device or switch, through a PCIe redriver, or through a standard optical transceiver. There are standard, low-cost cables – such as QSFP+ and mini SAS HD – that have been demonstrated to reliably deliver PCIe signals at Gen 3 speeds. All of this shows that PCIe is a cost-effective, high-performance, scalable box-to-box technology.
Use of PCIe will grow in the enterprise space as it expands outside the box in data centers and is deployed as a fabric for storage and server systems. Use of PCIe cabling in consumer applications is limited at this juncture, but that is about to change as Thunderbolt and new PCIe cables are expected to create economies of scale for high-speed, consumer-grade cabling.
EECatalog: What are your expectations for PCI Express looking ahead?
Wiedemeier, Teledyne LeCroy Corporation: The new PCIe 4.0 standard, announced last year, doubles the specified data rate to 16 GT/s. This increase in performance matches the ever-increasing demand for data-rate performance by storage systems. Now that PCIe SSD devices are starting to replace rotational media in some applications, storage systems will require higher bandwidth and throughput.
Chisvin and Kazmi, PLX Technology: PCIe has already become the standard interconnect in a wide variety of markets: servers, storage, communications, consumer and embedded. We believe this trend will inevitably continue, since PCIe has passed the critical mass where every component has a PCIe port. The SSD market is moving in the direction of ubiquitous PCIe, with standards such as NVMe and SCSI Express. It has reached the point where success so far breeds further success.
The use of PCIe as a fabric is still in its early days, but PCIe competes well against Ethernet and InfiniBand in embedded applications, providing high performance at reduced cost and power. This trend will soon establish PCIe as the primary interconnect in this new market. Additionally, consumer interconnects are expected to consolidate into two or three major technologies that include PCIe.
Cheryl Berglund Coupé is editor of EECatalog.com. Her articles have appeared in EE Times, Electronic Business, Microsoft Embedded Review and Windows Developer’s Journal and she has developed presentations for the Embedded Systems Conference and ICSPAT. She has held a variety of production, technical marketing and writing positions within technology companies and agencies in the Northwest.