Q&A with Al Yanes, PCI-SIG

The first quarter of 2019 is on its way, and so is the 1.0 version of the PCI Express® (PCIe®) 5.0 specification.

Editor’s Note: The PCIe 5.0 Version 0.7 specification was released to members in June 2018. “5.0 primarily targets 400 Gigabit Ethernet and solutions that require doubling of the bandwidth without going wider,” PCI-SIG® President and Board Chair Al Yanes tells us. Edited excerpts of our interview follow.

PCI-SIG® is focusing primarily on the speed change to accelerate our development of PCI Express® 5.0.

EECatalog:  PCI Express has that advantage of synergy because it is found in so many sectors.

Al Yanes, PCI-SIG: You hit the nail on the head. If you look at an embedded processor, it is going to use PCIe architecture; if you go to a mobile solution, you are going to have PCIe architecture. PCI Express technology is ubiquitous. For example, if you are designing a solution for enterprise, you may have picked it up on your previous assignment on storage for NVMe™.

You might realize, “I was working on storage for NVMe, and I have been moved to a mobile space and want to do an IoT solution—I can use PCI Express technology for that.” This is where the accumulated years of experience help; 20 million lines of code support PCI Express devices. It is a robust, solid infrastructure from a tools perspective, from a software perspective, and from the perspective of debugging in the lab using oscilloscopes and logic analyzers, for example. PCI Express technology is very familiar and common, so the proliferation of that knowledge makes sense.

EECatalog:  What are some of the market drivers for PCIe 4.0 and 5.0 architectures?

Al Yanes, PCI-SIG: Traditional enterprise servers as well as cloud storage via NVMe solutions. In the mobile space, we have seen increased adoption, primarily due to our L1 substates technology. L1 substates offer near-zero idle power—this is something we introduced several years ago, and I have seen several of the mobile manufacturers adopt PCI Express technology, which is very good from a volume perspective.

EECatalog: Please give us a brief summary of the PCIe specification roadmap.

Al Yanes, PCI-SIG: The PCIe 4.0 specification came out in October 2017. We have completed the 0.7 version of the PCIe 5.0 specification, and we are projecting that the final spec will be available in the first quarter of 2019. The release of the PCIe 4.0 specification took seven years, but on average we have been doubling bandwidth every three years. We are catching up with the PCIe 5.0 specification, which is anticipated to arrive in only two years.

The PCI Express 5.0 specification primarily targets 400 Gigabit Ethernet and solutions that require doubling of the bandwidth without going wider—where the only option they have is to go faster; we will remain the state of the art on bandwidth. If you do the math, 400 GbE is 50 GB/s in each direction, and a PCIe 5.0 x16 solution will give you roughly 64 GB/s in each direction, for a total of about 128 GB/s.
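The arithmetic behind those figures can be sketched as follows. This is a back-of-the-envelope check, assuming PCIe 5.0's 32 GT/s per-lane rate with 128b/130b encoding (both are published PCIe parameters, not stated in the interview itself):

```python
# Rough bandwidth comparison: PCIe 5.0 x16 vs. 400 GbE.
# Assumptions: 32 GT/s per lane (PCIe 5.0), 128b/130b line coding,
# 400 Gb/s per direction for 400 GbE, 8 bits per byte.

GT_PER_LANE = 32          # PCIe 5.0 raw transfer rate, GT/s per lane
ENCODING = 128 / 130      # 128b/130b line-code efficiency
LANES = 16                # x16 link width

# Per-direction PCIe 5.0 x16 bandwidth in GB/s
pcie5_x16_gbytes = GT_PER_LANE * ENCODING * LANES / 8

# Per-direction 400 GbE bandwidth in GB/s
gbe400_gbytes = 400 / 8

print(f"PCIe 5.0 x16: ~{pcie5_x16_gbytes:.0f} GB/s per direction")
print(f"400 GbE:       {gbe400_gbytes:.0f} GB/s per direction")
```

The encoding overhead brings the x16 figure to just under the commonly quoted 64 GB/s, comfortably above what a 400 GbE port needs in each direction.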

There is a lot of momentum already in the industry with 28 gig solutions and 56 gig solutions, so we are leveraging that. We are focusing primarily on the speed change to accelerate our development of the PCIe 5.0 specification, even adjusting some of our specification processes to enhance development.

We believe PCI Express 5.0 architecture will continue to keep us at the forefront of interconnect technology with our members and with others who utilize PCIe architecture for their solutions. The PCIe 5.0 specification will meet the bandwidth needs across a range of industries, including mobile, storage, 400 Gigabit Ethernet or InfiniBand, accelerators, and machine learning.

EECatalog: So PCI Express technology will be important for Artificial Intelligence, then?

Al Yanes, PCI-SIG: Yes. I was just reading the other day about Microsoft’s Project Catapult, and those kinds of technologies are going to primarily utilize PCIe architecture-based solutions.

At the Open Compute Project Summit, they emphasized 400 GbE; communication of data to their accelerators is all PCIe technology based. If you look at Microsoft’s Project Olympus, data is going to the GPUs via PCI Express architecture.

PCI Express technology is the main conduit for I/O into the CPU. Data going through PCIe 5.0 technology will facilitate the machine learning and AI solutions and the accelerator attachments that Microsoft, Amazon, and other heavy hitters are creating.


