CERN Upgrades Particle Accelerator with an FPGA-based PCIe-to-VME64x Bridge


Thanks to an open-sourced, FPGA-based PCIe-to-VME64x bridge, CERN’s existing particle accelerator equipment can now be upgraded with the latest processor performance and remain in operation for years to come. CERN plans to use VME until the scheduled end of the Large Hadron Collider in 2032.

CERN (an acronym derived from Conseil Européen pour la Recherche Nucléaire) is the world’s largest particle-physics research center, founded in 1954. Today, over 2,500 employees and over 12,000 visiting scientists from 85 nations research the building blocks of our universe there. Its best-known instrument is probably the 27-km Large Hadron Collider (LHC), which began operation in 2008. The LHC helps examine why our universe consists mainly of matter rather than equal parts matter and antimatter. The most advanced equipment in the world is used for such tasks, with enormous investments made every year.

Flexible crates for data acquisition and accelerator control

Thousands of crates have been installed in CERN’s particle accelerators over the years. Deployed in the support infrastructure of various particle detectors, modular crates are typically used for trigger electronics and data acquisition[i]. A crate has a typical slot-based configuration with a backplane and freely configurable modules. Such modular systems are widely used in institutions like CERN because they let circuits be re-used, deployed in multiple systems, and combined in various configurations. When an experiment ends, its crates are reused in different configurations for new experiments, protecting the long-term investment.

One type of crate is based on the VME bus, first specified in 1981 and continuously developed since. Over 900 such crates at CERN are currently used primarily to control accelerators; crates in other configurations handle data acquisition. In the Large Hadron Collider “beauty” (LHCb) experiment, for example, crates pre-process portions of the raw data from close to a million sensors so that scientists receive only the relevant data. Additional crates are found in numerous other CERN detectors. Since tasks change with every experiment, new crate configurations are constantly developed, continuously providing the latest computing performance. Around 200 new crates are expected to be installed during the planned “Long Shutdown” in 2019–2020, when equipment is repaired and completely overhauled.

Figure 1: Crates at CERN. Around 900 VME-based crates are used at CERN to control accelerators for various experiments. (Source: CERN)


One problem with VME-based systems, however, is that modern processors do not natively support communication over the VME bus; a PCIe-to-VME64x bridge is needed to interface with it. Discrete bridge components have only ever been available from a few manufacturers, and one main vendor has announced end-of-life for a current component (TSI148). The scale of this problem is clear when considering the number of Single Board Computers (SBCs) with a VME bus installed at CERN: presently, over 900 VME-based SBCs from MEN Mikro Elektronik (with Intel® Core™ Duo and Core™ 2 Duo) are in service. This volume alone would certainly not justify manufacturing a discrete component just for this purpose. Therefore, CERN’s Beams Department/Control Group began looking for sustainable alternatives by issuing a new tender.

CERN Searches for a New Bridging Solution to the VME64x Bus

Three possible options were specified for PCI/PCIe to VME64x bridging. The bidding company should:

  • have enough TSI148 chips to produce the boards specified in the contract, or
  • use the Tundra Universe II, the predecessor of TSI148, or
  • use FPGA technology. In this case, CERN required bidders to make available the complete Very High-Speed Integrated Circuit Hardware Description Language (VHDL) sources of the FPGA design through a GPL3-or-later license.

CERN knew there were companies with proprietary implementations of VME bridges using FPGAs. For example, previous generations of SBCs used at CERN (before TSI148-based boards) had a PowerPC processor with an FPGA attached for interfacing to a VME bus. Therefore, with the last option in the call for tender, CERN hoped that at least one company would open-source their bridging implementation. Yet to ensure fairness for all bidders, CERN did not indicate a preference. Ultimately, the company offering the best pricing was granted the contract.

Figure 2: IHS Markit estimates that the VME market will continue to generate millions in USD of board sales per annum. (Source: IHS Markit, Embedded Tech Trends Conference, 2018)


An Open-source PCIe-to-VME64x Bridge

The tender resulted in a solution based on FPGA technology. Hence, VHDL sources are now available under the GPL3-or-later license. The Linux driver package is also available on the PCIe-to-VME bridge project page of the Open Hardware Repository.[ii] Open sourcing the PCIe-to-VME bridge is a huge step forward for CERN and all institutions where VME use continues.

Figure 3:  PCIe-to-VME bridge block diagram. With the introduction and GPL release of the FPGA-based PCIe-VME64x bridge to the VME bus, MEN ensures long-term availability of VME-based crates beyond their use at CERN. (Source: MEN)


CERN engineers are no longer locked into or dependent upon a single vendor. If the FPGA chip becomes obsolete, access to the complete VHDL sources enables users to port the bridge to another FPGA. Institutes and companies can now buy an up-to-date product with that bridge, or build another VME SBC using identical VME bridging technology if desired. Because the same VME bridge can be used across different SBCs, the Linux kernel drivers and user-space VME APIs are freely usable by all, without royalties. Open-source assets let engineers collaborate efficiently and give them more freedom to share and re-use Linux kernel drivers for VME slave boards that, for example, CERN engineers design on their own.

New FPGA Logic Contributor

The company willing to invest the tremendous effort with CERN of designing, testing, and validating a suitable PCIe Gen 3 bridge to VME64x systems was MEN Mikro Elektronik (MEN). MEN worked together with the CERN team on publishing the PCIe-to-VME bridge as open source. The PCIe-to-VME bridge (hereafter referred to as “the bridge”) translates read/write operations in the PCIe address space into read/write transactions on the VME bus, acting as a PCIe endpoint on one side and VME bus master on the other. The bridge can generate VME single cycles and block transfers, and currently supports the following access types:

  • VME single cycles: A16, A24, A32 with any of the D8, D16, D32 data widths
  • VME block transfers (BLT): A24D16, A24D32, A32D32 plus the A24D64 and A32D64 multiplexed block transfer (MBLT)
  • CR/CSR configuration space access

VME block transfers are executed by a built-in Direct Memory Access (DMA) engine that moves blocks of data between system memory and the VME bus, bypassing the CPU. It is also possible to use the DMA engine with single cycles, which is especially useful for connected boards that don’t support the BLT access mode. In general, DMA is a faster and more efficient way to exchange multiple data words, as the CPU is free to continue normal operation until the DMA engine has finished its programmed task.

The PCIe-to-VME64x bridge also supports some features added in the VME64x extensions: it can use the geographical address pins and generate a special type of A24 access to read and write the CR/CSR configuration space of VME slaves installed in the same crate. None of the fast transfer modes (e.g., 2eVME, 2eSST) is currently supported, but these could be implemented in the future. Moreover, MEN’s VME bus module can act as either VME master or slave. This duality enables its use not only in VME SBCs that run as masters, but also on I/O and other peripheral boards connected as slaves. (For the VME SBC application, the configuration focuses only on the VME master functionality.) Currently, the whole bridging design occupies about 30 percent of the Intel Cyclone FPGA’s area, leaving plenty of space to implement new features such as 2eVME and 2eSST transfers.

Long-term availability guaranteed?

By releasing the specification along with the deployment of the first boards with the new FPGA-based PCIe-to-VME64x bridge, CERN has reached an important milestone for the long-term availability of its VME-based crates for data acquisition and accelerator control. This reference design of the FPGA IP core is also a milestone for all other existing users of VME-based systems because the availability of suitable logic is now guaranteed for all in the long term. According to current estimates, the market for the new boards will still amount to over USD 200 million in 2020.[iii]

During the project, MEN demonstrated its advanced FPGA expertise in standardized embedded computer technology and its long-standing competence in VME CPU boards. With the PCIe-to-VME bridge, customers benefit from assured long-term availability of existing installations. MEN can also offer comparable solutions upon request; for example, PCIe-to-PCI or even PCIe-to-ISA bridges, keeping OEMs’ legacy hardware available for longer and thereby extending the return on investment.

Besides engineering bridging solutions for internal legacy buses to ensure long-term availability, the company also offers FPGA-based bridges to external interfaces and buses, including UART, CAN bus, and SPI controllers with a Queued Serial Peripheral Interface (QSPI). Such an application scenario allows OEM customers to create variants extremely cost-efficiently: a single CPU board design can serve completely different applications. Even with small batch sizes, significantly more applications can be served, such as system solutions with different fieldbuses or industrial Ethernet variants. Solutions with migration requirements can also benefit, in railway engineering or aircraft construction, for example. Leveraging a single CPU board across different applications means that OEMs can use one hardware platform in all variants, which significantly simplifies service, documentation, and certification.

First Board with the New Bridge

The first board with the new open-source FPGA-based bridge to the VME bus used at CERN is the A25 SBC from MEN. The A25 is equipped with Intel’s Xeon D-1500 server CPU and, alongside the new FPGA, combines high cost efficiency with a rich feature set. With an operating temperature range of ‑40°C to +60°C, the A25 supports reduced system sizes, reliable long-term operation without forced air cooling, and manifold computing functions on a single computer board.

Figure 4: The A25 VME SBC from MEN with an FPGA-based PCIe-VME64x bridge to the VME bus is equipped with Intel’s Xeon D-1500 server CPU and combines high cost efficiency with a rich feature set. (Source: MEN)

Featuring two USB 3.0 ports, up to three Gigabit Ethernet ports, and two RS232 COMs at the front, the board offers the crucial basics of a multi-purpose industrial computer. A25 is equipped with up to 8 GB of DDR4 SDRAM and flexible mass storage extensions covered by slots for microSD and mSATA. In addition, the A25 can be equipped with one XMC/PMC mezzanine card and one PCI Express Mini Card, providing additional front I/Os (XMC/PMC).

The modular extension with I/O mezzanines on an SBC allows tailored systems to be configured from open standard components, reducing integration time and cost. The rugged board also withstands shock and vibration, ensuring reliable operation and a longer product lifetime.

Gunther Gräbner is Product Line Manager at MEN Mikro Elektronik GmbH.





