DVCon Highlights: Software, Complexity, and Moore’s Law



The first DVCon United States was a success. It was the 27th conference in the series and the first to carry this name, distinguishing it from DVCon Europe and DVCon India. Those two events debuted last year and, following their success, will be held again this year.

Overall attendance, including exhibit-only and technical conference attendees, was 932.

If we count exhibitor personnel, as DAC does, the total number of attendees was 1,213. The conference attracted 36 exhibitors, including 10 exhibiting for the first time and 6 headquartered outside the US. The technical presentations were very well attended, almost always with standing room only, averaging around 175 attendees per session; the DoubleTree's conference rooms simply cannot hold more. The other thing I observed was that there was almost no attendee traffic during the presentations. People took a seat and stayed for the entire presentation; almost no one came in, listened for a few minutes, and then left. In my experience this is not typical, and it indicates that the goal of DVCon, to present topics of contemporary importance, was met.

Process Technology and Software Growth

The keynote address this year was delivered by Aart de Geus, chairman and co-CEO of Synopsys. His speeches are always both unique and quite interesting. This year he chose as his topic “Smart Design from Silicon to Software”. As one might expect, Aart’s major points had to do with process technology, something he is extremely knowledgeable about. He thinks that Moore’s law, as an instrument for predicting semiconductor process advances, has about ten years of usable life. After that the industry will have to find another tool, assuming one will be required, I would add. Since, as Aart correctly points out, we are still using a 193 nm crayon to draw 10 nm features, progress is clearly impaired. Personally I do not understand the reason for continuing to use ultraviolet light in lithography, aside from the huge cost of moving to x-ray lithography. The industry has resisted the move for so long that I think even x-ray now has too short a life span to justify the investment. So, before the ten years are up, we might see some very unusual and creative approaches to building features on some new material. After all, whatever we end up using will have to deal with atoms and their structure.

For now, says Aart, most system companies are “camping” at 28 nm while evaluating “the big leap” to more advanced lithography processes. I think it will be a long time, if ever, before 10 nm processes become popular. Obviously the 28 nm process supports the area and power requirements of the vast majority of advanced consumer products. Aart did not say it, but it is a fact that a very large number of wafers are still produced using a 90 nm process. Dr. de Geus pointed out that the major factor in determining investments in product development is now economics, not available EDA technology. Of course one can observe that economics is only a second-order decision-making tool, since economics is determined in part by complexity. But Aart stopped at economics, a point he has made in previous presentations over the last twelve months. His point is well taken, since ROI depends greatly on hitting the market window.

A very interesting point made during the presentation is that the length of development schedules has not changed in the last ten years; the content has. Development of proprietary hardware has gotten shorter, thanks to improved EDA tools, but IP integration together with software integration and co-verification has used up all the time savings in the schedule.

What Dr. de Geus’s slides show is that software is growing, and will continue to grow, at about ten times the rate of hardware. Thus investment in software tools by EDA companies makes sense now. Approximately ten years ago, during a DATE conference in Paris, I asked Aart about the opportunity for EDA companies, Synopsys in particular, to invest in software tools. At that time Aart was emphatic that EDA companies did not belong in the software space. Compilers are either cheap or free, he told me, and debuggers do not offer enough economic value to be of interest. Well, without much fanfare about “investment in software”, Synopsys is now in the software business in a big way. Virtual prototyping and software co-verification are market segments Synopsys is very active in, and making a nice profit, I may add. So, whether it is a matter of definition or of new market availability, EDA companies are in the software business.

When Aart talks I always come away with reasons to think. Here are my conclusions. On the manufacturing side, we are tinkering with what we have had for years, afraid to make the leap to a more suitable technology. On the software side, we are just as conservative.

That software would grow at a much faster pace than hardware is not news to me. In all the years that I worked as a software developer or as a manager of software development, I found that software grows to use all the available hardware and is the major driver of hardware development, whether the issue is memory size and management or speed of execution. My conclusion is that nothing is new: the software industry has never put efficiency at the top of its goals; the aim is always to make the programmer’s life easier. Higher-level languages are more powerful because programmers can implement functions with minimal effort, not because the underlying hardware is used optimally. The result is that when it comes to software quality and security, users end up playing too large a part as the verification team.

Art or Science

The Wednesday proceedings were opened early in the morning by a panel with the provocative title “Art or Science”. The panelists were Janick Bergeron from Synopsys, Harry Foster from Mentor, JL Gray from Cadence, Ken Knowlson from Intel, and Bernard Murphy from Atrenta. The purpose of the panel was to figure out whether a developer is better served by using his or her own creativity in developing either hardware or software, or by following a defined and “proven” methodology without deviation.

After some introductory remarks, which seemed to show mild support for the Science approach, I pointed out that the title of the panel was wrong. It should have been “Art and Science”, since both must play a part in any good development process. That changed the nature of the panel. To begin with, there had to be a definition of what art and science mean. Here is my definition: art is a problem-specific solution achieved through creativity; science is the use of a repeatable recipe, encompassing both tools and methods, that ensures validated quality of results.

Harry Foster pointed out that it is difficult to teach creativity. This is true, but not impossible, I maintain, especially if we change our approach to education. We must move from teaching the ability to repeat memorized answers that are easy to grade on a test to teaching problem solving, an approach better for the student but more difficult to grade. Our present educational system is focused on teachers, not students.

The panel spent a significant amount of time discussing the issue of hardware/software co-verification. We really do not have a complete scientific approach to it, and we are also limited by the schedule in applying creative solutions, which themselves require verification.

I really liked what Ken Knowlson said at one point: there is a significant difference between a complicated problem and a complex one. A complicated problem is understood but difficult to solve, while a complex problem is something we do not understand a priori. This insight may be hard to grasp without an example, so here is mine: relativity is complicated; dark matter is complex.

Conclusion

Discussing all of the technical sessions would take too long and would interest only portions of the readership, so I leave such matters to those who have access to the conference proceedings. But I think that both the keynote speech and the panel provided enough understanding, as well as material for thought, to amply justify attending the conference. Too often I have heard that DVCon is a verification conference: it is not just for verification, as both the keynote and the panel prove. It is for all those who care about development and verification, in short, for those who know that a well-developed product is easier to verify, manufacture, and maintain than otherwise. So whether in India, Europe, or the US, see you at the next DVCon.


Gabe Moretti has been in EDA for 45 years. First as an individual contributor with TRW Systems and Compucorp. Then as a manager with Intel and Signetics. He has been a member of the executive management team with EIS Modeling (a company he founded), HDL Systems, and Intergraph/Veribest. From 2000 to 2005 he was technical editor for EDA at EDN. Since then Gabe has run his own consulting company, GABEonEDA. He has a B.A. in Business Administration and a Master in Computer Sciences.
