Portable Stimulus: The Making of a Standard
by Gabe Moretti
August 15, 2017
The Accellera-sponsored DVCon U.S. 2017 covered, among many other topics, the work on the proposed Portable Stimulus standard. I took the opportunity to interview the leaders of the Accellera Working Group (WG) that is developing it: Faris Khundakjie, leading technologist at Intel and Chair of the WG, and Tom Fitzpatrick, Verification Technologist in the Design Verification & Test Division at Mentor Graphics and the WG Vice Chair.
Moretti: What is the purpose of Portable Stimulus? What problem does it solve?
Fitzpatrick: There are various platforms that are used throughout verification: simulation, emulation, FPGA prototyping, and more, and there are different stakeholders throughout the process as well: architects, designers, verification engineers, software developers and debuggers. They all have different requirements; they use different languages and different approaches specific to what aspect of the system they are trying to test. It is very difficult to reuse the work one of those people does on one of those platforms with a different person or on a different platform. What we are trying to do is provide a mechanism to specify the test intent, the specific operation you want to make sure happens, in an abstract way that can then be mapped onto those different platforms. By describing it abstractly, we allow all of the different stakeholders to think about the problem by asking what the intent is rather than how it is done, so that the different languages and approaches they happen to use are abstracted away. Thus, there is a single specification of what it is that one wants to test that is sufficiently rigorous that a tool can analyze it and create the actual implementation in the target language on the specific platform.
Moretti: Does the use of Portable Stimulus happen much earlier than what people traditionally think of as testing? One does not develop a test piecemeal. Is Portable Stimulus development dealt with at the architectural level?
Khundakjie: Not necessarily. I think your question may be mixing agility with the level of abstraction. Portable Stimulus does not mean that you must get one hundred percent of your requirements understood before you write anything, and then go and write everything. You can still develop in Portable Stimulus with the same agility you have today. As you figure out new requirements or new gaps in the architecture flow, and as you understand them, you go to your Portable Stimulus library and add more code. So there is nothing about it that implies you have to know everything up front. It does encourage you to have the discussion up front so that you can get the most value out of it, unlike today's execution, which is isolated within each platform, where people just do what they think is correct for that specific platform and move on in silos. Does this answer your question?
Moretti: It does, but it seems that Portable Stimulus interjects some amount of rigorous thinking about what it means to test and what it is that you want to test.
Fitzpatrick: Yes, very much so, because the key aspect of the language we are creating is that it allows you to specify the behaviors that the device is supposed to support and the requirements that those actions have: what the inputs and outputs of a given action are, what resources in the system it requires, and how the actions relate to each other. Once you have all of that, you have a very good understanding of the capabilities of the system. This is not something that we expect the architect will use to create the system specification. The Portable Stimulus model will be developed from the system specification, possibly in tandem to a certain extent. For example, if the system specification requires a memory controller, you need to understand what operations that element can support and what the requirements are for each of those operations. Once you have that, then you can specify the individual scenarios that you want to test: if I want to make sure that I can receive data in and DMA it into memory, for example. That is what we mean by test intent, a very high-level specification of what it is that you want. Then you can add more details as you move forward. If you have an architectural SystemC model, you can say that to realize this piece of test intent, I will map these actions onto the SystemC calls I need, or I can map them onto UVM sequences, or into actual C-code driver calls at the system level. It is a separation of what operations you are trying to do and what the requirements are for those operations, versus what is going to be implemented to perform those operations on a given platform.
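As a rough illustration of that idea, a model fragment along the general lines of the DSL under development might capture such a DMA transfer like this. The syntax follows the shape of the proposal rather than a finished standard, and every name here is hypothetical:

    // A buffer type describing data an action consumes or produces.
    buffer data_buf {
        rand bit[31:0] size;
    }

    component dma_c {
        resource channel_r {}         // a physical DMA channel
        pool [4] channel_r channels;  // this engine has four of them
        bind channels *;

        pool data_buf mem;            // buffers live in a shared pool
        bind mem *;

        // The action states what it needs, not how the data is made:
        // some other action must supply src, and dst becomes
        // available to any downstream consumer.
        action mem2mem_a {
            input  data_buf  src;
            output data_buf  dst;
            lock   channel_r chan;    // exclusive claim on one channel
            constraint { dst.size == src.size; }
        }
    }

Nothing in this fragment says how a transfer is performed on any platform; it only declares the operation, its data requirements, and the resource it consumes.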
Moretti: Okay. The output of the WG is a Domain Specific Language (DSL) that must satisfy a wide range of people. You are talking about going from people doing small IP blocks with pure transactional UVM testbenches all the way to people doing bring-up testing in the lab on silicon fresh from the foundry. Specifying one representation that would be acceptable to that broad range of people must have been a real challenge.
Fitzpatrick: We addressed that by defining two input formats, two alternative formats with equivalent semantics that are completely interchangeable: the DSL, which is more SystemVerilog-like, and an alternative C++ format for people in the bring-up lab and later phases who have been using C++.
But when we say that there is a C++ equivalent input format, the important distinction between Portable Stimulus and other ways of writing stimulus is that the DSL is declarative by definition. It allows you to specify all test intents and how they relate. Then a tool can analyze that set and figure out what it needs to create for a given platform. When we say that there is a C++ input format, a lot of people may think that you just write C++ and it will execute on your platform. That is not what we are talking about. What we are talking about is a limited C++ library that can be executed to create the same model that the DSL specification would create, and then from that model the target implementation can be generated. It is a way to allow people familiar with C++ to create specifications more quickly and more naturally without having to learn a new language. But it is a subset of C++, a library with specific entries that map to the DSL language constructs. It is pretty much a one-to-one mapping: for a given line in the DSL there is a C++ library entry you can call that is exactly equivalent. It is a way of having two different input formats for creating the same model specification that the tools can use to generate the outputs. That is a very important distinction that needs to be made. Because each one of those pieces can be largely self-contained, you can combine them in a single model. If one engineer is writing in C++ and another is writing in the DSL, they each have a piece of the system represented. A third engineer can take them, put them together, and create a single coherent output for whatever target platform he is using.
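As a sketch of that combination step (all names hypothetical), two separately authored packages, one perhaps written through the C++ library and the other in the DSL, can be pulled into a single top-level scenario, since both produce the same kind of model:

    import dma_pkg::*;     // engineer one's piece, say written in the DSL
    import crypto_pkg::*;  // engineer two's piece, say written via C++

    component pss_top {
        dma_c    dma0;
        crypto_c crypto0;

        // One coherent scenario drawing on both pieces; a tool
        // generates the output for whatever target platform is chosen.
        action combined_a {
            activity {
                do dma_c::mem2mem_a;
                do crypto_c::encrypt_a;
            }
        }
    }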
Khundakjie: The main challenge shows up on multiple dimensions because this is revolutionary rather than evolutionary. If you take UVM, it came from OVM, which came predominantly from AVM and eRM. People started with a seed that kept growing and growing, and it is still growing. This is very different. If you set aside the language-input discussion for a moment, the moment you ask people to think about what they are trying to accomplish, not how, that in itself is revolutionary. Right?
Moretti: Agreed.
Khundakjie: It is not novel in general, but it is revolutionary for the community we work with: the verification engineers using simulation and emulation, the silicon validation engineers, and those doing virtual platform testing. We need to get people to say, "OK look, my company expects me to show up every day at work. They do not care if I drive a Suzuki or a BMW." What we have been doing so far is competing on how to perfect the vehicle we drive in our simulation, making it look more beautiful, more organized, more polished every day. But that is not really the game. The game is to show up at work every day. That is the outcome you want to see. So focus on the outcome. The behavior you want to see in the system when the test eventually runs: specify that, and do not specify how you will implement it. Let another layer, a set of hidden layers that are subject to change from platform to platform and live in another part of your environment, implement that part.
Moretti: Okay. You used the term “revolutionary.” Revolutions are by their nature disruptive because change is disruptive. What will be the effect of introducing Portable Stimulus to the verification engineer and the design engineer that will have to use it?
Fitzpatrick: I have gotten that question a few times already. There is still important knowledge that is going to be retained by each stakeholder. Our job is to make it easy for them to share the things that need to be shared, and to apply their specific knowledge when their turn comes to use Portable Stimulus. For an architect, for example, the job is to figure out what the design needs to do, how those things relate to each other, and maybe some performance characteristics. Those, the performance in particular, are things that can be layered into Portable Stimulus once you decide what types of operations one must be able to do. This one takes five units, another takes ten, or whatever. When you put them together, depending on how they get combined, you can figure out how long it is going to take. That performance layer is something specific to the architectural level that may need to be measured later in a different way on a different platform to get some idea of what is going on.

The verification people are the ones used to working at the block or subsystem level with a lot of RTL pieces. They still need to understand how a piece of, I will say stimulus, but in fact it is test intent, will interact with that RTL. In the UVM example we are talking about being able to generate UVM sequences from Portable Stimulus, because that is what is going to drive the traffic; but the way that traffic interacts with the design is through a UVM environment, through agents, drivers, monitors, that kind of thing, and all of that is still going to be there. We are not talking about being able to synthesize a complete testbench from the DSL. That is well beyond what anyone is envisioning and probably beyond what can be specified at that level. The infrastructure that is currently required for each platform is still going to be there, but we get to reuse the test intent. The RTL verification engineer can take the operations the architect specified, map them to his own UVM environment, and make that happen.

When you get to the system level, the software person can say that for this stuff I am going to create C code that is going to run on my processor on an emulator or FPGA prototype or whatever, or maybe the implementation of those operations is going to be a set of driver calls from the software that is going to be created. The software people are going to write the software. I do not expect that someone is going to use Portable Stimulus to create a driver, but what they are going to do is use it to say, "I have my driver, it has all these functions in it, and I am going to create scenarios that call all of those functions in interesting ways so that I can make sure the device really does what it needs to do." There are still infrastructure pieces at each level, owned by each stakeholder, but the intent is what can be reused; it is the intent that gets mapped automatically to take advantage of the infrastructure on each platform. And each time you generate, you can create multiple scenarios from a given specification of intent. If you want data to go out here, where is the data coming from? It can come from anywhere in the system. There may be multiple ways to create it. If all I care about is that the data goes out, the tool can then infer, from everything you have at the specification level, whatever operations are necessary to create the data that is going to go out.
It allows you to reuse the infrastructure you have to interact with the target implementation on a given platform, while reusing the intent to create novel scenarios that still match that intent on other platforms.
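To picture that inference step: in a sketch like the one below, only the end goal is stated, and a tool must work out which producer actions to schedule ahead of it. The syntax follows the general shape of the proposed DSL, and every name is hypothetical:

    component pss_top {
        dma_c  dma0;
        port_c port0;

        // Intent: data must go out the port. port_c::data_out_a
        // declares an input buffer, so the tool infers whatever chain
        // of actions is needed to produce that buffer, for example a
        // dma_c::mem2mem_a transfer, and schedules it ahead of the
        // output action.
        action data_out_goal_a {
            activity {
                do port_c::data_out_a with { data.size == 1024; };
            }
        }
    }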
Khundakjie: I agree with that, but I think we are asking verification engineers to think at a higher level of abstraction. It is not a perfect analogy, but people often compare this to the introduction of RTL synthesis. You did not write a detailed implementation for your gate-level library or layout; you wrote in a somewhat more abstract way and the tool took care of automating that. A bit of the same thing is happening here. Instead of having to write the actual bits for each type of target platform, you are developing a more abstract model and the tool is automating that for you. It is not a perfect analogy, but I think it is somewhat the same type of transition designers went through in moving to synthesis, and the same transition verification engineers went through from entering test vectors to UVM and now to Portable Stimulus. They must think a bit more abstractly; they must develop higher-level models. Think about "what," not "how." But there is great benefit in that. You are eliminating a bunch of tedious work that they used to have to do. You are not completely eliminating that work, and again, the point was made here: you are going to leverage the UVM testbench components. If you do not have a UVM VIP sitting there for all the ports of your chip, then you must write a Portable Stimulus model that understands how to talk to the bit-level protocol, say PCI Express. That is a very complicated thing to have to express in a model. Instead, you want to have the model talk to the VIP at a much higher transaction level and then leverage the VIP. Coverage components, scoreboards ... there are lots of pieces of existing UVM testbenches that can be leveraged in a Portable Stimulus environment. Although the thinking is revolutionary, the implementation is evolutionary in the sense that you are not throwing away everything you are doing today. You are leveraging the things that still have value with this higher level of abstraction.
Fitzpatrick: Just to build on that idea a little bit, verification engineers have already moved from directed testing to constrained-random verification. Initially it was writing random data, and now with UVM it is constrained-random transactions, which was itself a new way of thinking about things. What we are doing now is taking that idea of constrained-random transactions and bringing it up to the scenario level. As I mentioned before, suppose you have a certain set of operations that you want to happen in order to send data out. The constrained randomization now covers all of those actions, and the relations between them are effectively constraints at the scenario level. To solve those constraints you stipulate: I can do action A, which is going to create data for action B, which will send data over to action C, and then that data can be sent to the specific set of actions that I have set in my graph, my top-level representation of the intent. I want these things to happen, and for those things to happen I need to solve scenario-level constraints to produce a set of actions that will accomplish the critical things I specified. Although we are talking about abstraction, we are not enforcing a specific level of abstraction on anybody. If you do not have a VIP that can talk to your PCIe, you can model each layer of that protocol using Portable Stimulus. Eventually you will get down to a model that is very low level and deals with bits, but you can have multiple layers on top of that, and eventually you are going to have an action that says read my PCIe bus. How you choose to implement any action at any level is up to you. This is how you can go from a high-level specification targeted at an architectural model, then refine it slightly to target your RTL, and refine it in a different way to target your FPGA prototype or the silicon.
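That refinement step might look roughly like the fragment below: the abstract action is left untouched, and a platform-specific extension supplies its implementation, here as a C target template for silicon or emulation. The driver call and the exact template syntax are illustrative, not taken from any finished standard:

    // One possible mapping, kept in its own package so a different
    // platform can select a different mapping for the same action.
    extend action dma_c::mem2mem_a {
        // {{...}} references are filled with values the tool solved for.
        exec body C = """
            dma_drv_xfer({{src.size}}, {{dst.size}});
        """;
    }
    // A UVM flow would omit this extension and use one that maps the
    // action onto an existing sequence; agents, drivers, and monitors
    // in the testbench stay as they are.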
Moretti: It sounds like you had in mind how Portable Stimulus would be used given what exists today: UVM, SystemVerilog, SystemC, and so on. I understand from a couple of discussions that there was existing technology offered as a basis for Portable Stimulus. If it is such a revolutionary idea, how did you end up using existing technology?
Fitzpatrick: The problem is that each of these revolutionary ideas was proprietary. We were in the same situation standards organizations have found themselves in before, which is that we did not want to roll the dice on a single solution. Mentor took the lead in asking Accellera to form the preliminary working group. The work started in 2014 with a six-month deadline to identify requirements and complete a feasibility study. We gathered the requirements and looked at the technology that was already out there. Breker, Cadence, and Mentor each had a solution in this space, so we reasoned that, given the requirements, what the existing tools could do, additional input, and Intel as a user, it would be feasible to generate a standard. The actual working group was then formed, and we spent around six months doing preliminary work evaluating submitted proposals. One of them was a combined proposal from Cadence and Mentor: they had their own declarative language solutions that were merged offline to create what we call the DSL. Breker had an approach similar to Mentor's, based on the notion of graphs for specifying the relations between things, but using C++. We debated the language issue for a while until we decided to focus first on the capabilities we needed and figure out the language afterward. We decided that we would use the DSL to define the semantics and that afterward we would create equivalent semantics in C++. We are not doing the two things in parallel. We are focusing on the DSL for defining what the constructs are, what they mean, how they interact, and what their semantics are, and then on how we are going to use C++ to express the same semantics. We do not have two things that are almost the same but still different, and we do not have one as a superset of the other. They are two formats specifying exactly the same thing. We are using the DSL, because it is a new language, to define the semantics, since we can assign the meaning that we want; then we have a way in C++ to create a class, or whatever, that allows us to model the same semantics.
Khundakjie: Obviously you do not go this far unless you have seen glimpses of success. You have these technologies from different vendors, but even within one company there are different uses. Not everyone uses the tool: it may be used only for one type of design or one level of abstraction, and across the industry not everybody is using one technology, let alone all three, everywhere. There are interesting flavors of solutions tried and interesting successes reported. So let's make this mainstream by bringing everyone to the table and making a standard out of it. This is typical for Accellera. Accellera did not invent assertions, but it decided to standardize an assertion format after looking at multiple kinds of use from experts. Accellera did not invent constrained-random verification either; it looked at methodology and eventually converged on UVM. There are commercial solutions that have been successful to varying degrees, and enough proof points that the industry felt it was time to standardize. The process we are going through is pretty much how standards get developed. It is quite unusual for someone to define a standard for a new technology that does not already have commercial proof points, at least in EDA.
Moretti: You have worked in the digital domain. Any plans to go beyond it?
Khundakjie: Analog is part of the scope, but it is not included in the first release. To do it justice, I do not think we have analog-specific use cases yet. Where it is appropriate, I expect the scenarios to be specific to power management. Adding real to the data types will go a long way, but I do not think we have scenario-based analog use cases.
Moretti: What else?
Fitzpatrick: As mentioned earlier, the DSL specification is a declarative specification of the possible solutions. But people still tend to think about it as a procedural specification, which it definitely is not. With the existence of the C++ format as well, they tend to think that way even more. We must be very careful to help people understand that what we are talking about is a declarative specification of the set of actions that are available, how they interact, and what it is that one wants to accomplish for a given task. That by itself is not executable. It is static and analyzable, so we expect the tools to look at a set of models and determine what can be created. The tools can then look at that set and generate what can be run on a given platform, provide additional information about the set, and even do some pre-coverage analysis so that the user can determine which direction he is going before he goes to the trouble of generating the actual test.
Khundakjie: This is the reason I said "revolutionary." People want to get quickly into how the tool is supposed to do this for them. Rightfully, they want to understand how it works from A to Z. We as a standards committee have to stop at a certain point, trusting that all of the participating vendors can actually take the spec and work with it. We will have to do tutorials, do presentations, use every single form of communication we can get into, plus have the vendors working diligently behind the scenes with user companies, testing things, to get at what this is really about.
There is a natural tendency for R&D in chip companies that develop SoCs to adopt the standard first. That is where the highest value is; that is where their real problems of integration, concurrency, power management, and other flows live. With IP suppliers, I anticipate adoption will come more gradually over time, especially with IP suppliers that are not connected to other platforms. Their job is to license their RTL and depend on their customers' manufacturing, emulation, and the rest of the tool flow to come back with feedback. If they are not involved in the process, it will be more difficult to convince them to change how they do things, cross to the other side, and understand their customers' perspective. The SoC customer needs everything delivered in a form that the SoC team can execute alongside its own tests, and getting there is going to be a slower process. The standard is set up to allow that. If I am an IP provider and have a piece of RTL that does ABC, I can create a Portable Stimulus model for what ABC is supposed to do, as well as for what ABC expects from the rest of the system, so that when the SoC developer takes the IP block and integrates it into the system, they can use the model and make it part of their Portable Stimulus environment.
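The model such an IP provider ships could be as small as this hypothetical fragment, stating both what the block does and what it expects from the surrounding system (the buffer types and names are, again, purely illustrative):

    buffer cfg_buf    { rand bit[31:0] base_addr; }
    buffer result_buf { rand bit[31:0] size; }

    component abc_ip_c {
        action do_abc_a {
            input  cfg_buf    cfg;  // what ABC expects from the system
            output result_buf res;  // what ABC produces for it
        }
    }
    // The SoC developer instantiates abc_ip_c in the top-level
    // component, and do_abc_a becomes available to SoC-level scenarios.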
There are capabilities in Portable Stimulus, interesting things, that would still improve quality even if they were not portable. For example, look at partial specification, where I can leave things yet to be specified. If you look at the scheduling features, I can pin down exactly the timing relationship between different threads of execution and figure out a few things from that. There is value like that which will motivate an IP supplier that wants better quality than its competitors to say, "Look, I can give you healthier RTL because I am using these features of Portable Stimulus. Not because it is necessarily portable, but because it enables exploration that you have not been able to do in the past with the techniques available today."
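The scheduling features mentioned here would allow something like the following sketch, where the ordering and overlap of two transfers is left to the tool within whatever the data-flow and resource constraints permit (again, all names are hypothetical and the syntax follows the general shape of the proposal):

    component pss_top {
        dma_c dma0;

        action stress_a {
            activity {
                // The tool may pick any legal ordering or overlap of
                // the two transfers, exposing thread-timing relations.
                schedule {
                    do dma_c::mem2mem_a;
                    do dma_c::mem2mem_a;
                }
            }
        }
    }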
Moretti: I hope we can continue the discussion. I am anxious to see the progress you can make in the coming months.