The Cost of Ownership of Hardware Emulation

In general, the cost-of-ownership (COO) of a piece of equipment encompasses the purchase price plus several additional expenses that users incur during the lifetime of the equipment –– anywhere from three to five years –– to keep it operational, usable, safe and in good condition.

Some of these expenses are obvious, others less so. All fall into four buckets:

  1. Installation cost and infrastructure expenses required to accommodate the equipment on the premises
  2. Maintenance and recurring costs, inclusive of expenses for renting or amortizing the facility, to keep the machine running and to ensure proper environmental conditions of temperature, humidity and clean air
  3. Operational expenses to use and operate the piece of equipment
  4. Learning and training expenses to bring users/engineers up to speed on the use of the machine

Because of the cumulative effects of the expenses, the COO of a piece of equipment typically amounts to a multiple of the purchase price. Frequently, the COO decreases with each new generation as a result of improvements implemented by the manufacturer. In a highly competitive environment, the equipment suppliers strive to push the envelope and propose better and better machines.

A hardware emulator is a noteworthy piece of equipment in this regard; its COO is an interesting case study for the range of parameters and criteria that contribute to the final figure.

Hardware emulation is a mandatory design verification engine for a broad range of applications, from hardware verification to early embedded software validation, and from intellectual property (IP)-level to system-level testing. Its COO has changed dramatically since its inception in the mid-1980s.

Early emulators were hard to use and endure. They were expensive to acquire, painful and time-consuming to deploy, cumbersome for design debugging, and costly to install and maintain. Further, they were limited to testing a single design at a time, in only one usage mode called in-circuit emulation (ICE), and they did not allow for remote access.

All of this made the COO of an early emulator prohibitively high. Over time, a long list of improvements alleviated emulation's drawbacks and improved its COO.

Enhancements have been broad and deep, touching all aspects, from purchasing the product to deploying, installing and maintaining it, and learning to use it. Today, the COO of a modern emulator is a fraction of what it was.

Still, not all emulators are created equal, and the differences may be significant, making a COO comparison among the modern offerings somewhat difficult. To begin, each of the current three emulation vendors –– Mentor, Cadence and Synopsys –– uses its own technology and architecture, based on one of the three approaches listed in Table 1.

EDA Company   Emulator Name    Emulation Technology
Cadence       Palladium-Z1     Processor-based
Mentor        Veloce Strato    Custom FPGA-based
Synopsys      ZeBu-Server      Commercial FPGA-based

 Table 1: Emulation providers differ in their approaches to their emulation platforms.

The three architectures weigh on the total COO rather differently, and several critical characteristics play a role in making it up.

Today, the hardware emulation market is red hot and extremely competitive. In this environment, none of the three vendors publicly discloses its selling prices. It is worth remembering that not so long ago, one widespread way to compare emulators was to normalize the purchase price on a "dollar-per-gate" basis. As a reference, in the early 1990s, the leading emulator was Quickturn's System Realizer, which sold at about $3 per gate. Almost 30 years later, the price has dropped well below one penny per gate.

As we will see, the COO may exceed the purchase price by three to five times calculated over four years.
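As a rough illustration, the four expense buckets listed earlier can be tallied against the purchase price. Every dollar figure in the sketch below is a hypothetical placeholder, not vendor data:

```python
# Hypothetical COO tally over a four-year lifetime.
# All dollar figures are made-up placeholders, not vendor data.

def total_coo(purchase, installation, annual_recurring,
              annual_operations, training, years=4):
    """Sum the four expense buckets over the equipment lifetime."""
    return (purchase
            + installation                 # 1. one-time installation/infrastructure
            + annual_recurring * years     # 2. maintenance, power, floor space
            + annual_operations * years    # 3. staff operating the machine
            + training)                    # 4. bringing engineers up to speed

# Placeholder figures in thousands of dollars:
coo = total_coo(purchase=2000, installation=500,
                annual_recurring=400, annual_operations=1000, training=300)
print(coo)          # 8400
print(coo / 2000)   # 4.2 -- within the three-to-five-times range
```

The point is not the specific numbers but the structure: the recurring buckets, multiplied over the lifetime, dominate the one-time purchase price.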

All emulators are implemented in large, heavy cabinets designed to accommodate the vast number of programmable devices necessary to map a wide range of design sizes. The weighty cabinets use up considerable space and stress the floor of the premises. They also consume large quantities of electricity and generate plenty of heat.

All offerings are scalable and expandable to accommodate large designs, up to several billion application specific integrated circuit (ASIC)-equivalent gates. Leading processor companies claim their top capacity needs reach close to 10-billion gates. Currently, the largest single-box emulator provides a design capacity of two-and-a-half-billion gates. Larger capacity requirements are accommodated by interconnecting multiple cabinets.

While there are differences in dimensions and weights among the three platforms, all possess a large footprint that leads to significant installation costs. In this regard, the commercial field programmable gate array (FPGA)-based emulator may have an edge since off-the-shelf FPGAs have larger capacities than the devices used in the other approaches.

The major difference concerns power consumption. The award for worst solution goes to the processor-based emulator, since its processor devices are active all the time. While all platforms include powerful arrays of fans, the processor-based emulator demands additional cooling via fluid circulation, which requires a piping system that constrains the placement of the emulator. For safety reasons governed by strict regulations, pipes cannot run too close to information technology (IT) equipment, restricting liquid-cooled machines to the corners of the building.

As an aside, the arrays of fans generate ample noise that requires acoustic isolation of the premises.

While these characteristics and constraints add up to increase installation cost, it is a one-time expense and ranges widely. The high end of the range is reached when premises must be built from scratch, the low end when an existing facility is already set up to accommodate the emulator.

The cost for renting or amortizing the premises is strictly dependent upon the regional location of the facility and may vary broadly. Maintenance costs to keep the emulator up and running comprise recurrent maintenance expenses and potential repair expenses. The expense to ensure proper environmental conditions boils down to a hefty monthly electricity bill, creating significant differences among platforms. The most power-thrifty emulators consume close to 10 Watts-per-million gates (W/MG). A steep additional expense is required for running A/C and lighting, and for feeding a variety of electronics and computers.

At the 10-W/MG rate, the most power-efficient emulator with a one-billion-gate capacity would consume 87,600 kilowatt-hours (kWh) in one year (10 kW x 8,760 hours). This figure may double once the additional expenses are added.
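That arithmetic is easy to check. The sketch below redoes it in Python; the 12-cent-per-kWh tariff and the 10x multiplier for a processor-based machine are illustrative assumptions, not vendor figures:

```python
# Back-of-the-envelope check of the annual energy figures.
# The 12-cent/kWh tariff is an assumed placeholder; rates vary by region.

W_PER_MILLION_GATES = 10      # most power-thrifty emulators (~10 W/MG)
CAPACITY_MG = 1000            # one billion gates = 1,000 million gates
HOURS_PER_YEAR = 8760
TARIFF_CENTS = 12             # assumed tariff, in cents per kWh

power_kw = W_PER_MILLION_GATES * CAPACITY_MG / 1000   # 10.0 kW
annual_kwh = power_kw * HOURS_PER_YEAR                # 87,600 kWh
annual_bill = annual_kwh * TARIFF_CENTS / 100         # dollars
print(annual_kwh)      # 87600.0
print(annual_bill)     # 10512.0

# A machine drawing an order of magnitude more power scales accordingly:
print(annual_bill * 10)  # 105120.0
```

Note this covers only the emulator's own draw; the article's point is that A/C, lighting and ancillary electronics can double the total.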

As stated earlier, the processor-based emulator consumes more wattage, possibly up to an order of magnitude more than custom and commercial FPGA-based machines. Also, its cooling fluids must be cleaned and replaced on a regular basis, much like changing the oil in a car.

Depending on the energy cost and the tariff offered by the energy provider in the area where the emulator sits, the energy bill for a one-billion-gate emulator may range from a couple of hundred thousand dollars to several hundred thousand dollars per year.

Repair expenses arise in the event of a failure. High values for Mean-Time-Between-Failures (MTBF) and low values for Mean-Time-To-Repair (MTTR) are essential for keeping these expenses to a minimum. In general, MTBF should be measured in several months or even a few years, and, whenever a failure occurs, MTTR should be as short as possible. Optimally, it should be less than a day.

The remarkable progress made in the technology alleviated the limitations of early emulators. Traditionally, two critical deployment areas required highly skilled and experienced personnel: design compilation and design debug. To some extent, that requirement still exists.

Both talent and experience in this endeavor are rather scarce in the industry, leading to highly compensated jobs. In high-tech areas in the U.S., such jobs may cost an employer $250,000 to $300,000 per year or so, inclusive of social contributions paid by the employer. Expenses for a team of three experts may reach $1 million per year.

The technology at a disadvantage in addressing this shortage is the commercial FPGA-based emulator, for two reasons. First, mapping a design onto an array of off-the-shelf FPGAs still needs a degree of manual assistance from experienced engineers. Second, the somewhat limited internal visibility of the FPGAs hinders and prolongs the design debugging process, calling for skilled engineers to navigate the dark side of the Moon.

Despite the significant progress made in the technology, emulators are not push-button machines. Granted, they are easier to deploy, operate and maintain than a decade ago. Still, staffing resources necessary for all of the tasks may play a critical role in the overall COO.

Training an engineer in the art of emulation deployment is expensive and time-consuming. It may require a minimum of three to six months of learning, plus an additional six months to a year to develop on-the-job experience. Given the compensation figures cited above for operational staff, a comparable level of expense applies while an engineer is in training.

Again, the commercial FPGA-based emulator stretches the learning period to twice, if not more than, the time needed to learn the other two technologies.

The success in the adoption of emulation stems from its ability to find design bugs that no other verification technology can. Bugs unearthed in the design-under-test (DUT) hardware after tape-out lead to re-spins. According to the latest data, a re-spin at 7nm may reach $30 million. Hardware and software bugs uncovered late delay the time-to-market (TTM) of the new chip. Per several studies, missing a schedule in a highly competitive market by three months shrinks product revenues by about 30%.

Emulation can help with both, and a capability essential to the job is virtual environment deployment.

A virtual environment deployment replaces the physical peripherals of an ICE setup with equivalent software models. Described at a high level of abstraction in C/C++ or SystemVerilog, virtual peripherals are processed in a host server connected to the emulator.

The shift to virtual emulation offered numerous advantages.

First, it opened the door to new use models beyond ICE and new verification tasks beyond hardware functional verification. Based on a transactional interface, a DUT mapped inside an emulator could be exercised by a software testbench executed in the host server. The software testbench could be a universal verification methodology (UVM) testbench, ideal for block-level verification; a peripheral or a set of peripherals; or a software stack executed by a fast model of an embedded CPU.

Second, the virtual approach, combined with hardware/software advances for fast retrieval of DUT data, made new tasks possible. These include database generation for power analysis in the context of software processing, design-for-testability (DFT) analysis, deterministic ICE and new Apps, each targeting a specific verification challenge. Apps expand the emulation user community beyond the traditional deployment for acceleration. More verification tasks serving more users increase the utilization of the emulator.

Further, a virtual target system can be accessed remotely 24/7 on a worldwide basis. When a user swaps one design for another or a new user signs in, there is no need for manual assistance. More to the point, emulators now support multiple concurrent users, albeit trading off total capacity for the number of users. This powerful capability requires an efficient queuing-and-scheduling mechanism that manages simultaneously submitted emulation jobs and privileges higher-priority jobs for maximum utilization.
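As a sketch of such a queuing-and-scheduling mechanism, the toy Python class below dispatches concurrently submitted jobs highest-priority first, first-in-first-out among equal priorities. The job names and priority values are hypothetical; a real emulator scheduler would also weigh machine capacity and user quotas:

```python
# Minimal priority-based job queue, a sketch of the scheduling idea only.
import heapq
import itertools

class EmulationQueue:
    """Toy scheduler: dispatch the highest-priority emulation job first."""
    def __init__(self):
        self._heap = []
        self._order = itertools.count()   # tie-breaker: FIFO within a priority

    def submit(self, job, priority):
        # heapq is a min-heap, so negate priority to pop the highest first
        heapq.heappush(self._heap, (-priority, next(self._order), job))

    def next_job(self):
        return heapq.heappop(self._heap)[2]

q = EmulationQueue()
q.submit("nightly-regression", priority=1)   # hypothetical job names
q.submit("tape-out-gate-check", priority=9)
q.submit("power-analysis-app", priority=5)
print(q.next_job())   # tape-out-gate-check
```

The negated-priority trick is a common way to turn a min-heap into the max-first dispatch order the text describes.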

As important, the absence of physical dependencies in a virtual environment improves system reliability.

The virtual mode had a positive impact on the COO. More use modes and verification tasks, remote access and multiple concurrent users dropped the COO by at least an order of magnitude from where it was in ICE mode.

Specifically, high throughput in virtual mode is essential to improve the COO, and the main beneficiary is the custom FPGA-based emulation architecture, which delivers elevated throughput. The same emulation vendor that boasts the highest throughput also offers an extended emulation Apps library that further enhances the COO.

Additionally, the more concurrent users, the lower the COO. In this regard, the processor-based emulator leads the pack.

The COO of a hardware emulator is influenced by several criteria and may exceed the purchase price of the system by a few multiples over the lifetime of the equipment.

When deployed in a data center, an emulator that is easier to use and provides faster design bring-up and simplified debug needs fewer and less experienced engineers to operate, with a substantially positive effect on the COO.

In a future piece, I will discuss in detail the requirements to deploy a hardware emulation platform in a data center with emphasis on its COO.

Dr. Lauro Rizzatti is a verification consultant and industry expert on hardware emulation. Previously, Dr. Rizzatti has held positions in management, product marketing, technical marketing, and engineering.
