Practical Applications of Cloud Computing in Semiconductor Chip Design



Cloud computing will play an increasingly significant role in FPGA design because the benefits to designers are tremendous.

Chip design engineers face a myriad of challenges in their work, be it brainstorming ways to implement a certain feature set, figuring out how to meet performance requirements (and still stay within budget), or running enough simulations and compilations to verify functionality and test coverage before manufacture. These are the processes that bring our smartphones, smart TVs, industrial robots and most other electronics to life. Over the years, much thought and hard work have gone into improving design methodologies, software tools and computing hardware for chip design, in order to shorten product time-to-market and lower development costs. Today, the cloud can help with this by dramatically accelerating chip design workflows.

Cloud computing has steadily made inroads into the enterprise, so it comes as little surprise that a compute-intensive endeavor such as semiconductor chip design would find practical applications for the cloud. The prospect of scaling computation resources on demand and running simulations and compilations in parallel is attractive, both for IT departments and for design teams. Some possible FPGA design use cases are depicted in Table 1.

Table 1: FPGA design use cases for cloud computing.

One application of cloud computing is in FPGA design, where designers offload synthesis and place-and-route tasks for more efficient processing. When an engineer is in the field, for example, he or she may be debugging multiple designs at the same time, making changes and re-running compilation builds to test those changes. Many designs consume large amounts of CPU and memory resources, making it difficult to turn around several designs in a timely fashion. With access to a remote server farm, the engineer can quickly launch several synthesis and place-and-route builds in parallel, simultaneously trying different debug strategies.
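As a minimal sketch of this fan-out pattern, the snippet below launches several place-and-route builds concurrently, each with a different strategy. The tool name `vendor_pnr` and the strategy labels are illustrative stand-ins, not real commands; in practice each build would be dispatched to a cloud node via `subprocess` or a job-submission API.

```python
from concurrent.futures import ThreadPoolExecutor

# 'run_build' stands in for invoking the FPGA vendor's place-and-route
# tool; here it only composes the command that a cloud node would run.
def run_build(design, strategy):
    return {"design": design, "strategy": strategy,
            "cmd": f"vendor_pnr {design} --strategy {strategy}"}

# Try several debug strategies in parallel rather than one after another.
strategies = ["timing_high_effort", "area_opt", "congestion_aware"]

with ThreadPoolExecutor(max_workers=len(strategies)) as pool:
    results = list(pool.map(lambda s: run_build("top.edf", s), strategies))

for r in results:
    print(r["cmd"])
```

The engineer then compares the results of all three builds at once, instead of waiting for each serial run to finish before starting the next.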

If Customer A and Customer B use different software versions, previously the engineer had to replicate the same environment, often maintaining legacy tools on his or her laptop. A compute cloud, however, is ideally suited for keeping different design environments ready for use at any time. The “fire-and-check-back-later” nature of cloud computing also makes it easier for road warriors working off-site.

At the office, design teams and application engineers face similar problems: design exploration during early design phases, simulations while RTL design entry is in progress, and timing closure iterations in later stages are steps requiring hundreds of compute hours and considerable compute power. Companies with internal server farms can manage resource needs to a certain degree but may be strained at times of peak demand, especially when multiple design teams are rushing to meet deadlines. A common pattern is for a design engineer to run an overnight compilation build or two, check the results the following morning, make some changes and compile again just before lunch. The ability to flexibly run multiple builds at once alleviates the "blocking" nature of such a workflow.

Figure 1 shows the flow of data from the user to the cloud and back in each of the three use cases mentioned in Table 1. Starting from the left, the engineer initiates a cloud compilation via a custom API and gets output files as well as analysis results back from the same cloud API layer. Depending on the design stage, different types and numbers of servers will run in the server farm. The cloud software client, API layer and server farm are new technology (and business) components for utilizing cloud computing in chip design.
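The submit-poll-fetch pattern of such an API layer can be sketched as below. The class and method names (`CloudClient`, `submit`, `status`, `fetch_results`) are hypothetical, and the in-memory job store stands in for the real server farm; a production client would upload files over a secure channel and wait for an asynchronous build to finish.

```python
import itertools

class CloudClient:
    """Illustrative stand-in for the cloud API layer in Figure 1."""

    def __init__(self):
        self._jobs = {}
        self._ids = itertools.count(1)

    def submit(self, design_files, stage):
        # In practice this would upload files over TLS and queue servers
        # sized for the design stage (synthesis vs. place-and-route).
        job_id = next(self._ids)
        self._jobs[job_id] = {"stage": stage, "state": "done",
                              "outputs": [f + ".out" for f in design_files]}
        return job_id

    def status(self, job_id):
        return self._jobs[job_id]["state"]

    def fetch_results(self, job_id):
        return self._jobs[job_id]["outputs"]

client = CloudClient()
job = client.submit(["top.v", "constraints.sdc"], stage="place_and_route")
if client.status(job) == "done":
    print(client.fetch_results(job))
```

This "fire-and-check-back-later" shape is what lets an engineer kick off a build, disconnect, and retrieve results from anywhere.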

Figure 1: Data flow between the user and the managed cloud

Based on feedback from engineers, the following attributes are key:

  • Security: Design data must be encrypted and transmitted over secure channels.
  • Ease of use: The process of offloading each design task must be integrated with the engineer’s existing workflow.
  • Reasonable transfer latencies: Design files can be tens or hundreds of megabytes, even gigabytes, in size, so upload and download speeds must be optimized.
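Touching on the security and transfer points above, a common building block is to checksum a design archive before upload so the server can verify the transfer arrived intact, independent of the encrypted channel carrying it. A minimal sketch, streaming the data in chunks as one would for multi-gigabyte files:

```python
import hashlib

def digest(data, chunk_size=1 << 20):
    # Stream the archive through SHA-256 in 1 MB chunks so even very
    # large design files never need to be hashed in one pass.
    h = hashlib.sha256()
    for i in range(0, len(data), chunk_size):
        h.update(data[i:i + chunk_size])
    return h.hexdigest()

archive = b"...design netlist bytes..."  # stand-in for a real design archive
print(digest(archive))
```

The same digest computed server-side after upload confirms nothing was corrupted or truncated in transit.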

The first attribute above is particularly important for semiconductor companies. Designers are understandably concerned about the risks of letting confidential IP leave their company networks and are implementing private and hybrid cloud approaches, where lower-priority designs get compiled in a cloud-like infrastructure. Notably, startups and SMEs tend to take the lead in applying cloud technology in their development process. Both technical and business standards and practices are being applied to address security concerns.

Initial implementations are showing that apart from offloading compute-intensive tasks to get the job done faster, parallelizing the workflow is encouraging a shift in design methodology. With more data generated, engineers now have more information on hand to make technical decisions more quickly. Companies and design teams can become more agile at evaluating and developing products because they can conduct more experiments and still support current product lines. Over time, it is not hard to imagine that, for such a complex and resource-intensive engineering workflow as semiconductor chip design, cloud computing will play an increasingly significant role.


Harnhua Ng is one of the founders of Plunify, which provides customization, automation and management capabilities on a secure, scalable, on-demand cloud computing platform preloaded with chip design software tools.  Contact him at Harnhua@plunify.com
