Q&A with PCI-SIG’s Board Member Ramin Neshati
Catching up with PCI-SIG ahead of PCI-SIG 2016 Developers Conference
Editor’s Note: Not long before the PCI-SIG’s annual DevCon, which has topics this year ranging from designing solutions in the data center to PCIe technology in automotive applications to potential PCI Express signaling beyond 16GT/s, PCI-SIG board member Dr. Ramin Neshati spoke with Embedded Intel Solutions.
Embedded Intel Solutions: Please place next year’s planned finalization of the PCI Express 4.0 specification in context for us.
Ramin Neshati, Board Member, PCI-SIG: The PCI Express® (PCIe®) 4.0 architecture represents a doubling of the bit rates over the PCIe 3.0 architecture, from 8 gigatransfers per second [GT/s] to 16 GT/s (Figure 1). What is important is that the increase in bit rate comes with no compromise to or sacrificing of PCI-SIG’s traditional technology objectives.
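The raw bit rates Neshati cites translate into usable bandwidth once line-encoding overhead is subtracted. A minimal illustrative calculation (the function name and table structure are mine, not PCI-SIG's; the rates and encodings are the publicly documented PCIe values):

```python
# Effective per-lane bandwidth for each PCIe generation, accounting
# for line-encoding overhead (8b/10b for Gen1/2, 128b/130b for Gen3/4).
PCIE_GENERATIONS = {
    # generation: (raw rate in GT/s, encoding efficiency)
    "1.0": (2.5, 8 / 10),
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
    "4.0": (16.0, 128 / 130),
}

def effective_bandwidth_gbps(generation: str, lanes: int = 1) -> float:
    """Usable bandwidth in gigabytes per second for a given link width."""
    rate_gt, efficiency = PCIE_GENERATIONS[generation]
    # One transfer carries one bit per lane; divide by 8 for bytes.
    return rate_gt * efficiency * lanes / 8

for gen in PCIE_GENERATIONS:
    print(f"PCIe {gen} x16: {effective_bandwidth_gbps(gen, 16):.1f} GB/s")
```

Because Gen3 and Gen4 share the same 128b/130b encoding, the move from 8 GT/s to 16 GT/s doubles usable bandwidth exactly, which is the doubling Neshati describes.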
These objectives are and continue to be:
Backward compatibility to all previous versions of the technology—You can take a PCI Express 4.0 system and plug a PCI Express 1.0 card into that slot, and it should work. Now, it will not work at PCI Express 4.0 speeds, obviously, because the card doesn’t support 4.0 speeds. Instead, it will work at the lowest common denominator speed—you will not see a blue screen. You will not see a freeze-up. That is the assurance of interoperability and compatibility that PCI-SIG provides.
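The "lowest common denominator" behavior can be sketched as picking the fastest data rate both link partners support. This is a simplified model, not the actual PCIe link-training state machine, and the names are mine:

```python
# Hedged sketch: link training settles on the highest rate BOTH partners
# support, which is why a PCIe 1.0 card in a PCIe 4.0 slot simply runs at
# PCIe 1.0 speed instead of failing.
RATES_GTPS = {"1.0": 2.5, "2.0": 5.0, "3.0": 8.0, "4.0": 16.0}

def negotiated_generation(slot_max: str, card_max: str) -> str:
    """Pick the fastest generation common to both link partners."""
    return min(slot_max, card_max, key=lambda g: RATES_GTPS[g])

print(negotiated_generation("4.0", "1.0"))  # a Gen1 card in a Gen4 slot links at Gen1
```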
Low cost—We do not require the ecosystem to adopt exotic materials or exotic mitigations for implementing PCI Express 4.0 technology. Some of these materials or other enablers for increasing the speed of your interconnect may be so prohibitively expensive that the little margin people have in their systems could vanish with the adoption of faster or better materials.
OEMs can benefit from using high-volume, manufacturable systems and materials and leverage the volume economics to deliver high-performance solutions at the lowest cost possible.
Low power—We are providing the PCI Express 4.0 architecture at the same level of power consumption as defined for the PCI Express 3.0 architecture. Despite the doubling of performance, there is no doubling of the power budget: it is not a 2x factor, and it draws about the same amount of power as the PCI Express 3.0 architecture.
Embedded Intel Solutions: What were some of the things that needed to be in place for the PCIe 4.0 specification to meet cost, compatibility and power efficiency goals?
Neshati, PCI-SIG: What was defined as a server channel in the PCI Express 3.0 architecture presented a difficult challenge: In order to get data from point A to point B, the channel had to go through a number of transitions or discontinuities, which led to some signal boosting across the long channel. To maintain fidelity of the signal, there had to be some enablers within that channel to make sure B got the signal that A sent, so, lots of mitigation to solve that long difficult channel.
For [developing] the PCIe 4.0 architecture, it was pretty much known by the ecosystem, and all the experts in the field, that going 16 GT/s across a long channel (20 inches and two connectors) without raising costs was going to be almost impossible. The decision was made that, in order to maintain the lowest cost profile for this architecture, PCI-SIG would shorten the reach for the server channel to about 12 to 14 inches. Anything beyond this length would require a repeating device such as a retimer or redriver.
Embedded Intel Solutions: What are some of the solutions coming into view in the storage space?
Neshati, PCI-SIG: Storage has been on a continuous, disruptive flow for the past two or three years. I don’t think we are going to see it settle down anytime soon as there continues to be innovation in the marketplace.
For example, server-based storage is becoming very prevalent, while cloud-based storage and the SaaS/PaaS/IaaS (SPI) models are emerging. As storage-based architectures continue to evolve, the devices become faster, better and less expensive.
Embedded Intel Solutions: What is the role of PCI Express technology in this evolving landscape?
Neshati, PCI-SIG: PCI Express is now the undisputed interconnect of choice for storage. SATA is on a downward trend and the transition from SATA to the PCI Express architecture has already begun. SAS-based storage in servers may continue at the maintenance level, but even that slice of the pie is transitioning to PCIe-based storage to some extent.
I am seeing numbers from analyst firms’ data, such as Forward Insights, that show the decline for SATA and the rise of PCIe adoption over the next several years [Figures 2 and 3].
Embedded Intel Solutions: What are some of the characteristics of the PCIe architecture that serve embedded applications?
Neshati, PCI-SIG: PCIe technology is very well positioned for broad adoption in the embedded industry. Applicable features run the gamut from extremely low-power implementations to the high performance of the PCIe 4.0 specification and beyond. The latency of the PCIe architecture has always been low – not as low as memory – but low enough and optimized for I/O access as much as possible.
In addition, common device enumeration and the stable software programming model are very useful in embedded applications like IoT and automotive, because they allow devices that use PCIe interconnects to declare themselves as whatever devices are involved. For instance, the software that controls the interconnect can decide, “Yes, I have an entertainment box here, I have a control mechanism there, and I can tell this [device] to do that and I can tell that [device] to do this.”
PCI Express enumerability and its well-known programming model are key for software programmers who are looking to add value on top of a PCI Express solution. Programmers already know how to do this because PCIe technology has been natively supported in all operating systems so far—from Windows to Macs to all variations of open source.
Given the broad adoption of PCIe technology and availability of IP from multiple sources, the PCIe architecture is suitable for automotive applications like controls, navigation, diagnostics, entertainment, etc. that need higher performance at lower costs. And because fully functional IP blocks have been available on the market for some time, it is easy for people to go out and buy or create new PCIe-based solutions at a relatively low cost. The amortization of the original investment has happened, so they are available at the lowest possible cost, and if they are implemented in volume, there is a good volume play there as well.
In terms of the performance requirements from IoT, automotive and embedded applications, these use cases will not stress the performance offered by the PCIe architecture. The PCIe architecture provides a much fatter pipe for these types of applications, which can modulate the bandwidth they need on a case-by-case basis (e.g., HPC, Big Data and machine learning).