Developing the Portable Stimulus Standard

I took the opportunity to interview the leaders of the Accellera Working Group (WG) that is developing the new Portable Test & Stimulus Standard:  Faris Khundakjie, Leading Technologist at Intel and Chair of the WG; Tom Fitzpatrick, Strategic Verification Architect, Design Verification & Test Division at Mentor, a Siemens Business, the WG Vice Chair; David Brownell, Design Verification Manager at Analog Devices, the WG Secretary; and Larry Melling, Director Product Management and Marketing, System and Verification Group, Cadence.

Moretti: I understand that the purpose of the Portable Test & Stimulus Standard is productivity and quality.  We want to make designers, and especially verification engineers, more productive and increase the quality of their work.  How does the Portable Stimulus Working Group achieve these goals?

Melling: We should start with the users.  They are the ones who are the true measure of our success, and they represent the market demand that drives vendors to develop tools whose combined capabilities benefit the community.

Khundakjie: The number one thing is that this is not just about verification.  Portable Stimulus covers different platforms.  Verification is mostly mature right now.  A number of standards have been developed, deployed, and are commonly used at this point.  The value will be measured across platforms, and in support of IP bring-up and concurrency validation.

Brownell: I think that for Analog Devices specifically, we make individual components all the way up to SoCs.  All of our projects are getting much more complex, and the tools we have today do not scale.  We are not able to do verification of IP and then reuse anything other than checkers for system-level verification.  We are not able to work on one project and then take what we have done and easily reuse it on the next.  I see Portable Stimulus as enabling us to do that and scale to system level.  No one is smart enough to understand a complex system entirely.  We need tools that enable us to easily specify what we want to happen, and then the tools can take care of those items that people are not capable of doing manually.

Moretti: From what I hear there is still confusion between the standard and the tools.

Fitzpatrick: The standard stops when you have defined the set of actions you want to accomplish.  You diagram the network of things that you want to happen, and then at the leaf nodes of that graph-based representation you define the implementation for the specific behavior, and that is where the specification stops.  What the tools do is analyze the overall graph, apply all of the constraints and requirements to figure out what that representation turns into when you consider all possible scenarios, and then create the test implementation for your target.  So even if we talk about what the actual test is going to be, that is not really part of the standard.  The tools do the implementation.

Brownell: Is it fair to say Portable Stimulus is like Verilog that allows you to define the intent but then a synthesis tool implements the description?

Khundakjie: It is even better because in Portable Stimulus, for example, one does not have to write the code that says call the function to randomize.  The tool will do the randomization.  I just express the need for randomization.  The language stops at the “What,” and the “How” to accomplish the “What” is where the vendors’ tools come in.  The specification does not say how something will be implemented in a platform or environment; it is the tool’s responsibility to choose the different functions necessary to accomplish the task.  The standard does not talk about implementations; it stops at what a user wants to see happen and then the tools take over.
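To make the “What, not How” split concrete, here is a rough sketch in the flavor of the draft DSL.  It is illustrative only: the component, action, and field names are hypothetical, and exact syntax should be checked against the draft specification.  The writer declares intent (actions, random fields, constraints, an activity graph); the tool solves the constraints and generates the platform-specific test:

```
// Hypothetical DMA scenario sketched in the style of the draft DSL
component dma_c {
    action configure_a { }            // leaf action; the tool binds it to a
                                      // platform-specific implementation
    action transfer_a {
        rand bit[31:0] size;          // the tool, not the user, randomizes this
        constraint size_c { size <= 4096; }
    }
    action test_a {
        activity {                    // the graph of intended behavior
            do configure_a;
            do transfer_a;
        }
    }
}
```

A generation tool would traverse the activity, pick legal values for size, and emit, for example, UVM sequences for simulation or C code for post-silicon, without the user writing any of that implementation.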

Melling: I believe the topic has been nicely described in a couple of tutorials so far.  Until people get to use the standard, it is understandable that there will still be some confusion.  I think that some confusion exists because Portable Stimulus gives you the ability to completely describe your intent.  It does not require the user to understand what the testbench is composed of or even what its nature is.  The user stipulates what the test must do.  The standard focuses on what the test is about, not how it is implemented.  That is why people have difficulty with it because they are used to thinking of implementation requirements.  But that is where the tools come in.

Khundakjie: Engineers are so used to building the implementation to achieve the required functionality.  Portable Stimulus forces engineers to think about features and flows while the tools take care of the rest.  If you deliver IP to a customer, for example, you state what features it implements… temperature sensor, image sensor, whatever operating specification it implements.  This is what we must focus on.

Moretti: There has been discussion about graphical input versus language input.  When I think of the Portable Stimulus language, the DSL, I imagine a flow graph, a sequence of related activities.  People have said that a graph can be of help, but eventually one must write in the DSL.

Fitzpatrick: I worked on several graphical tools in the past where the role of the graphics is to allow the user to think through the problem, but the tools do not process the graphical representation; they process the textual representation that defines the semantics.  Portable Stimulus is the same.  We are defining a language and its semantics.  A vendor might provide a way to generate DSL from a graphical environment, but I believe that such a tool is only useful the first time someone uses the standard.  With experience, the user will realize that it is much more efficient to write the code.  There is no added value in the standard defining a graphical format, other than a few graphical examples in the specification that illustrate the semantics.

Moretti: Tutorial attendees seem to be confused between UVM and DSL.  Is there something in UVM that gets in the way of DSL?

Melling: Change is hard for grownups.  That is the basic problem.  People have spent a lot of time and energy learning UVM, and now there is a significant change proposed.  It is unsettling.  It is only natural that people are going to ask questions.  And Portable Stimulus will not get accepted if there is not a compelling value and reason for people to adopt it.  It will get a lot of scrutiny from exactly those people who have spent a lot of time learning UVM techniques.

Fitzpatrick: I don’t think that there is anything that gets in the way.  As Larry was saying, if a user is targeting his Portable Stimulus methodology to an existing UVM environment, the idea is that the representation of your test intent in UVM is generated automatically from the Portable Stimulus tool. The tool creates the UVM sequences for you.  A significant part of the verification team now spends a lot of time doing this with UVM.  It is usually a smaller group that puts together the testbench, and then the test writers put together the test sequences.  With Portable Stimulus, the test writers will be able to write their test in DSL and let a tool generate the appropriate UVM sequences.

Khundakjie: It is also important to understand the background of the people asking the questions.  Part of the challenge for the Portable Stimulus Working Group, beyond creating the DSL, is the culture of separation: the silos that exist today, with simulation engineers barely talking to emulation engineers, who in turn barely talk with post-silicon, FPGA, and virtual platform engineers.  When you get the question of why UVM is not good enough, it is because those people have not looked beyond their own world and responsibilities to consider the entire flow.  Every time I see a chip go through an actual product release, and I work mainly in simulation, I am humbled, because I spent so many months, sometimes years, on a project, and all I have done is get bugs out of the design by simulating five milliseconds of the possible functionality of the chip.  This is the truth: in simulation I am mimicking five milliseconds of a real chip's functionality.  I have watched the entire development cycle go through emulation and post-silicon, and seen the types of bugs the other engineers find, and I am humbled, because there is a lot of work done, and if you are not looking at the entire development cycle you tend to ask questions from within your silo and say, “UVM is good enough.  Everybody promised me that UVM was going to be enough.  Why is it not good enough now?”  This is one of the challenges we bump into.

Fitzpatrick: And the other thing for me is that I spent years telling people, “You put together your block-level environment.  When you take that block and put it into the system, you can use parts of that block-level environment to compose your system-level environment, but when you do that for a really large chip you get a lot of passive stuff that sits there doing little more than consuming simulation time.”  The ability now to move up in abstraction, and still get the same value of having specified what needs to occur while allowing the tool to create the representation at the appropriate level of abstraction, gives you the best of both worlds.

Moretti: The proposed standard is out there.  What do you expect from the early adopters?  When is it that you are going to say that you have enough feedback to write the 1.0 standard?

Fitzpatrick: The normal Accellera review period is ninety days.*  We are hoping that enough people will look at the proposal and provide feedback.  We already know that the working group needs to do additional work, even if we receive no feedback; any feedback we do receive will be incorporated into that work.  Accellera has set up a discussion forum, and everyone in the working group is ready to look at comments as they come in, provide clarification, and get additional input.  As necessary, we will add those issues to our database.  This is the process we have.  Having already done this a few times, I do not expect a large volume of comments.  The conclusion of this process will give vendors a target to shoot for.  Up until now vendors have been watching the ongoing discussion, but a solid foundation is needed, even if it is only 80% of what the ultimate standard will be.  Vendors will be confident that what is there will not change, so the tool work can proceed in a more deterministic manner.

Moretti: Can the feedback come from “language lawyers” or from test designers?

Khundakjie: What I would love to see is feedback from emulation, post-silicon, and virtual platform engineers.  We tend to lack user representation from these areas.  It would be nice if we could engage them.  But the process is open to everyone.  What I look for is not detailed comments on a word or a sentence; those things can be changed easily.  What I am looking for is any big gap.  Anyone coming up and saying, “I am sorry, but you missed so much scope here.  I am in automotive, or security, or whatever, and you are still thinking purely of digital functions and have not addressed my problem.”  That would be fantastic feedback about functions we have not thought about.  Anything outside the main functional verification space would be very important.

Brownell: What we need are real use cases.  Everyone has joined the working group with their own ideas on the purpose of Portable Stimulus and how it would be used.  The standard has been shaped to support those specific ideas and something may have been missed.  We need people to look and verify that what has been proposed will help them, and they really need to comment if they do not believe the standard is going to significantly improve how they do their job today.

Fitzpatrick: When we started the process, the first thing we did was to create a set of actual use case examples for typical scenarios.  We used those to evaluate the contributions received to see how they would address the problems.  Each of the contributors showed how their proposal would address a specific scenario.

Melling: During the comment period we will revisit the scenarios and update the requirements in light of the feedback received.

Moretti: The comment period of ninety days seems short.  On the other hand, it underscores the sense of urgency to receive feedback.

Melling: If someone called the day the comment period is over and told us that he or she has good comments but needs a couple more days, we will of course be flexible, but we need a deadline to focus the effort.

Fitzpatrick: Accellera and IEEE continue to get feedback on existing standards, so I expect that we will receive feedback after the Accellera standard is released.  It is natural for people to find issues when using the standard rather than when building a mental model and looking for problems.  I expect that for the first couple of years after the standard is released by Accellera we will be doing upgrades, and when we feel that the specification has stabilized it will be turned over to the IEEE for standardization.

Moretti: Any concluding remarks?

Khundakjie: The working group has been active for about two years.  I have seen a tremendous amount of effort, and energy like nothing I have seen before.  The amount of work that each member has contributed humbles me.  I hope that the standard will be extremely valuable and that it will revolutionize how people perform validation.  It is our responsibility to manage the specification for the next couple of years to make sure that it is clear to users and that engineers are comfortable with how to use it and appreciate its value.

Fitzpatrick: Speaking as the representative of one of the vendors that made the initial proposal, it is relatively easy for us to imagine that the specification says what we think it says.  We know what it is that we wanted to have happen.  The goal as a standards group is to create a document that will allow someone coming at it fresh to implement a tool that will do fundamentally the same things we imagined.  With my Accellera hat on that is also the goal.  It may very well be that we spend the next several months just clarifying what has already been developed without adding any new features.  If some other vendor that has not been involved in the standard development wants to implement such a tool, is there enough information to allow them to do that?  If not, we need to clarify the standard.

Melling: I am looking forward to releasing the specification and seeing what kind of response we get.  The feedback so far is encouraging, and it promises the productivity upgrade we are hoping for.

Brownell: This is my first experience on a standards committee, and I have been excited to see how open the vendors have been in offering significant pieces of their own technology to get the effort off the ground, and how, through Faris’s leadership, all of the committee members came together and worked very hard to get the standard to where it is today.  Eight months ago it was not clear we would have anything to release, so it is great to be talking to you today about the draft release, and I am really proud of what the working group has accomplished.

*After this interview took place, the public review period was extended to October 30, 2017.


