Archive for Tom Fitzpatrick

15 June, 2017

I am happy to announce that Accellera has just released an Early Adopter version of the new Portable Stimulus Specification for Public Review. This document represents a years-long effort on the part of the Portable Stimulus Working Group (PSWG) to create an industry standard that builds on the Intelligent Testbench Automation language and technology pioneered by Questa inFact, with significant contributions from Cadence and Breker, as well as the other members of the PSWG.

The Early Adopter specification provides a comprehensive explanation of the new Portable Stimulus domain-specific language. This declarative language is designed for abstract behavioral description using actions; their inputs, outputs and resource dependencies; and their composition into use cases including data and control flows. These use cases capture test intent that can be analyzed to produce a wide range of possible legal scenarios for multiple execution platforms (e.g., virtual platforms, simulation, emulation, prototypes, silicon, etc.). There is also a semantically-equivalent C++ Class Library to specify the same declarative abstract behavior descriptions in an environment that may be more comfortable to some users. The Early Adopter specification also includes a preliminary mechanism to capture the programmer’s view of a peripheral device, independent of the underlying platform, further enhancing portability.

The PSWG is actively seeking your feedback on the Spec and has set up a Portable Stimulus Discussion Forum where you can post your questions, comments and/or suggestions on this exciting new standard. The Public Review Period will end on Friday, September 15, 2017, so this would be the perfect “summer reading” during your vacation.

To help you get started, I’ll be presenting “Portable Stimulus is Here! (Almost)” at the Verification Academy booth (#429) at DAC at 10am on Monday. The PSWG will be presenting a joint tutorial on Monday afternoon at 1:30 in room 18CD. For those of you not attending DAC, we also have a Portable Stimulus Basics course, as well as a webinar on “Automating Reusable Retargetable Scenario-level Tests with Portable Stimulus.”

Accellera will be sponsoring a breakfast and Portable Stimulus Town Hall meeting on Tuesday morning from 7:30-9:00am in Room 10AB. The breakfast is free, but you’ll need to register in advance. I’ll be moderating a discussion between the audience and several members of the PSWG.

I’ll also be participating in a Panel Discussion about Portable Stimulus on Tuesday at 10:30am in Ballroom E.

I hope to see you at one of these events, or just stop by the Verification Academy Booth and say “hi.”

17 April, 2017

Although we had a very successful Portable Stimulus tutorial at DVCon US, there were still a couple of points of confusion that I’d like to take a moment to clear up.

Difference Between Standard and Tools

The first issue is apparently where we draw the line between what the Standard defines and what a tool is expected to do. As illustrated in this picture,

the Standard defines the syntax, concepts and semantics to define an abstract Portable Stimulus model, which represents your verification intent. That’s it. As we’ve discussed, the model can actually be a partial specification of your critical verification intent, and the semantics of the model define the solution space for scenario-level randomization to be applied to generate the myriad legal scenarios that conform to your specified intent given the available behaviors and operational constraints of the target platform.

As with most standards, Portable Stimulus is being created to allow you to write your verification intent model once, and allow it to be reused (i.e. “portable”) in many different environments, including across tools from multiple vendors. The reuse comes from the tools that will be used to process the model and create executable tests that implement the specified verification intent on a variety of platforms, including UVM simulations, architectural exploration, embedded processors (both simulation/emulation models and actual silicon) and any other environment that might be required. But it is important to understand that the standard does not require a particular tool to create a particular output. Just as with constraint solving in SystemVerilog, different tools are not required to come up with the same solution to a set of constraints, but they are required to come up with a legal solution. As with SystemVerilog tools, how well a given tool solves the set of constraints to achieve coverage or other goals will be a source of competition and innovation among tools.
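To make the SystemVerilog constraint-solving analogy concrete, here is a minimal sketch (the class, field, and constraint names are hypothetical, purely for illustration): every compliant solver must return values that satisfy the constraint, but which particular legal values come back is tool-dependent.

```systemverilog
// Hypothetical transaction class: the constraint admits many legal
// solutions, and the standard does not dictate which one a given
// tool's solver will pick.
class mem_txn;
  rand bit [7:0] addr;
  rand bit [3:0] len;

  // Word-aligned address, non-zero length: many legal solutions exist.
  constraint c_legal { addr % 4 == 0; len > 0; }
endclass

module tb;
  initial begin
    mem_txn t = new();
    repeat (3) begin
      if (!t.randomize()) $error("randomize() failed");
      // Every result is legal; the particular values are tool-dependent.
      $display("addr=%0d len=%0d", t.addr, t.len);
    end
  end
endmodule
```

The same is true at the scenario level for Portable Stimulus: the model defines the solution space, and each tool is free to pick any legal scenario within it.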

The Choice of Input Language

The other issue I’d like to address is the choice of input language defined by the standard. The primary purpose of the Portable Stimulus Standard is to define a domain-specific language (DSL) that is declarative in nature to specify the verification intent. This requires a new way of thinking about the problem of stimulus and intent specification that truly does not fit in any existing language. In addition to the DSL, the Standard will also define a C++ input format that models the same semantics. Note that this does not mean that you can just write arbitrary procedural C++ code to describe your verification intent. A standard C++ compiler won’t know what to do with it since a standard C++ compiler doesn’t understand the semantics of Portable Stimulus.

Rather, the C++ input format is a C++ library that defines a set of classes and related methods to mimic the constructs and semantics of the DSL. When the C++ Portable Stimulus model is compiled, it is linked against a library implementation (which is not itself part of the Standard – only the header files) that, when executed, will create the same data structures that the Portable Stimulus tool will use to generate the target implementation. The only advantage to a C++ user in using the C++ input format for Portable Stimulus is that the general syntax will be familiar. However, the C++ user will still need to learn what each of the classes and methods does, and how to use them. In effect, the C++ user will need to learn the new Portable Stimulus “language,” whether it is the DSL or the C++ library. The choice of which format to use will be determined by a variety of factors, including the user’s familiarity with a given language, but also including the general requirements of a company’s verification & tool flow, and the ecosystem with which they are familiar.


As with all standards, the key in developing the Portable Stimulus Standard will be in standardizing only as much as is required to allow users to describe portable verification intent models, and leave it to the tool developers to take it from there. By defining the precise semantics of a portable stimulus model, it will be possible for a single model to be written according to the standard that can be processed by a variety of Portable Stimulus tools, each of which will be able to generate a consistent and coherent implementation of the verification intent, whether for a UVM simulation environment, as C code to be compiled for a processor (or processor model), or any other desired target implementation. If there are multiple legal scenarios that conform to the specified constraints, each scenario can be generated individually as a separate “test.” Because the semantics of a portable stimulus model are declarative rather than procedural, users will have to think about the specification in a new way. Whether they will find the new DSL a better vehicle for expressing this intent or will prefer using the C++ class library input format will be left up to users. Just remember: If someone tells you that you can do Portable Stimulus just by knowing C++, don’t believe it.

For more details on the new Portable Stimulus standard, please check out our new Portable Stimulus Basics course on Verification Academy.

13 March, 2017

Just getting around to gathering my thoughts about the great week we had at DVCon U.S. As Program Chair for the conference, I felt a great sense of pride that, with a great deal of help from my colleagues on the conference Steering Committee and especially the great team of experts on the Technical Program Committee, we were able to provide the attendees with a packed program of interesting, informative and entertaining events. But, as always happens, there was one topic that seemed to get the lion’s share of attention. This year, it was Portable Stimulus.

Starting with a standing-room-only crowd (even after bringing in more chairs) of nearly 200 people on Monday morning for the Accellera Day tutorial presented by the members of the Portable Stimulus Working Group (including yours truly), Portable Stimulus never seemed to be far from any of the discussions.

Full house at the DVCon U.S. 2017 Portable Stimulus Tutorial


If you weren’t able to attend the conference, Accellera will be presenting the tutorial as a series of webinars in early April, so you’ll be able to see what got everyone so excited. In addition to the tutorial, there was a “Users Talk Back” panel session on Wednesday morning that gave several user companies a chance to voice their opinions about the upcoming Portable Stimulus standard. Having been so involved in the standardization effort, I was gratified to hear the generally positive feedback by these industry leaders.

We were pleased also to have two great Portable Stimulus articles in our most recent issue of Verification Horizons. The first article is from our friends at CVC showing how they used Questa inFact to create a portable graph-based stimulus model that they used in their UVM environment to verify a memory controller design. The second is from my colleague Matthew Ballance, who is also a key technical contributor to the PSWG efforts, and discusses Automating Tests with Portable Stimulus from IP to SoC Level. In this article, you’ll learn about some of the concepts and language constructs of the proposed standard to see how the declarative nature of the standard makes it easier to specify complex scenarios for block-level verification and to combine those into SoC-level scenarios that are often difficult to express with UVM sequences.

The other exciting news I wanted to share with you is our new Portable Stimulus Basics video course on Verification Academy. We can’t yet share all the details of the upcoming standard, since things are still being finalized in the Working Group, but as things are made public, we’ll be sharing what we can so you’ll be the first to learn about this exciting new standard. As we add new sessions to the course, we’ll be sure to let you know. Please go check it out.


23 February, 2017

We recently reached yet another important milestone in the life of the Universal Verification Methodology. The IEEE 1800.2 UVM Standard was recently approved and will be published shortly. That’s great news, especially to those of us who have spent the past few years working on this effort. This IEEE effort was a bit different from the development of previous UVM versions in Accellera because our IEEE deliverable was just a “paper” specification, instead of the reference base class library implementation that we’ve traditionally provided via Accellera.

In developing the IEEE 1800.2 spec, we started with UVM1.2, which is the latest “official” version, and took advantage of the opportunity to clean up the spec. Since our efforts in Accellera always focused on developing a reference implementation for UVM, each version of the Accellera standard served as documentation of the entire UVM library, including classes and methods to provide the infrastructure for UVM that users were not expected to (and never should) call directly. Most of our work in the IEEE committee involved identifying and documenting the “user-facing” API, including adding accessor methods and other “proper” object-oriented programming practices that had been left out of previous versions. The theory is that someone should be able to take the 1800.2 spec and implement a compatible library that, even if the underlying implementation is different from the reference implementation, would still work when compiled with a user’s UVM code.

Well, now we’re back in Accellera and we’re trying to update the reference implementation so that it matches the 1800.2 spec and also includes all the infrastructure to make it work. And now we’re faced with the scourge of all new software versions: backward compatibility. Each version of UVM not only is different from the previous version but also deprecates certain features, which means that we try to write the code so that users will be notified if they’re using a feature that they shouldn’t be or that’s changed from the previous version. The idea is to allow an easy migration path from the previous version to the new version, and so far it’s worked pretty well. It doesn’t mean that users won’t have to change their code, but it helps identify where the changes need to be made.

The problem that we’re facing is that, even though UVM1.2 is the previous version, there is a large number of UVM users – likely a significant majority – who never migrated from UVM1.1d to UVM1.2. There’s even a significant number of OVM users out there. It would be exceedingly difficult for the committee to release an implementation that flags deprecated constructs from both previous releases, so we will probably only flag the 1.2 deprecated features. I would never recommend that a 1.1d user migrate to 1.2 just to migrate to 1800.2, so the 1.1d-to-1800.2 path will be a bit more difficult than a typical version change. Of course, if you followed our cookbook suggestion of writing UVM 1.1d code that is fully compatible with UVM 1.2, you’ll be in much better shape.

Now that 1800.2 is going to be the official UVM standard, once it becomes official and Accellera releases the accompanying reference implementation around the end of the year, it’ll be time to make the change. Please let us know if you’re a current 1.1d or 1.2 user, and when you think your company will be making the move. There are a few things we can do to help and we’ll do whatever we can to help you migrate quickly and easily.

And for those of you attending DVCon US February 27 – March 2, please make sure to stop by the Mentor booth on the Exhibit Floor and let us know what you think about this.

12 August, 2016

As leading proponents of Accellera’s work in the Portable Stimulus Working Group (WG) for a couple of years now, we wanted to update you on the latest significant milestone in the process. After many months of evaluation and discussion, the Working Group has decided to base its standard on the domain-specific language (DSL) contribution made by Mentor Graphics and Cadence Design Systems. The DSL combines the graph-based stimulus-specification approach used by our Questa inFact testbench automation tool with a model-based approach used by Cadence’s Perspec tool, providing several advantages to the user. By a near-unanimous majority, the members of the working group decided that this DSL is the best vehicle to use to define the semantics and syntax for Portable Stimulus.
At the same time, the WG also agreed to develop a C++ input format for Portable Stimulus that will be semantically equivalent to the agreed-upon DSL. This input format will be based on the contribution from Breker Verification Systems, which is a C++ library that they’ve been updating pretty continuously in an attempt to match the semantics of our DSL, but there’s still a lot of work to do there. It’s not yet clear how much a semantically-equivalent C++ input format would resemble the current Breker proposal, and we (Mentor and Cadence) have some ideas in this area that may help us arrive at the right solution.
The real key here is that the WG recognized that, even though a domain-specific language is a fundamental requirement for a portable stimulus solution, there is a large contingent of potential users who prefer writing their tests in C++. By having the standard support both input formats, and ensuring their interoperability and semantic equivalence, we will have portability between users and between vendors, as well as the inter- and intra-project portability that is the main technical goal of the WG.
If you’d like to participate in the Portable Stimulus Working Group (and your company is an Accellera member), you can sign up here. If you’d like your company to become an Accellera member, see here.


25 July, 2016

As I mentioned in my last UVM post, UVM allows engineers to create modular, reusable, randomized self-checking testbenches. In that post, we covered the “modularity” aspect of UVM by discussing TLM interfaces, and their value in isolating a component from others connected to it. This modularity allows a sequence to be connected to any compatible driver as long as they both are communicating via the same transaction type, or allows multiple coverage collectors to be connected to a monitor via the analysis port. This is incredibly useful in creating an environment in that it gives the environment writer the ability to mix and match components from a library into a configuration that will verify the desired DUT.

Of course, it is often the case that you may want to have multiple environments that are very similar, differing only in small ways from each other. Each environment has a specific purpose, yet shares most of its infrastructure with the others. How can we share the common parts and customize only the unique portions? This is, of course, one of the principles of object-oriented programming (OOP), but UVM actually takes it a step further.

A naïve OOP coder might be tempted to instantiate components in an environment directly, such as:
class my_env extends uvm_env;
  function void build_phase(…);
    my_driver drv = new("drv", this); //extended from uvm_driver

The environment would be similarly instantiated in a test:
class my_test extends uvm_test;
  function void build_phase(…);
    my_env env = new("env", this); //extended from uvm_env

Once you get your environment running with good transactions, it’s often useful to modify things to find out what happens when you inject errors into the transaction stream. To do this, we’ll create a new driver, called my_err_driver (extended from my_driver), and instantiate it in an environment. OOP would let us do this by extending the my_env class and overriding the build_phase() method like this:
class my_env2 extends my_env;
  function void build_phase(…);
    my_err_driver drv = new("drv", this);

Thus, the only thing different is the type of the driver. Because my_err_driver is extended from my_driver, it would have the same interfaces and all the connections between the driver and the sequencer would be the same, so we don’t have to duplicate that other code. Similarly, we could extend my_test to use the new environment:
class my_test2 extends my_test;
  function void build_phase(…);
    my_env2 env = new("env", this); //extended from my_env

So, we’ve gone from one test, one env and one driver, to two tests, two envs and two drivers (and a whole slew of new build_phase() methods), when all we really needed was the extra driver. Wouldn’t it be nice if there were a way that we could tell the environment to instantiate the my_err_driver without having to create a new extension to the environment? We use something called the factory pattern to do this in UVM.

The factory is a special class in UVM that creates an instance for you of whatever uvm_object or uvm_component type you specify. There are two important parts to using the factory. The first is registering a component with the factory, so the factory knows how to create an instance of it. This is done using the appropriate utils macro (which is just about the only macro I like using in UVM):
class my_driver extends uvm_driver;
  `uvm_component_utils(my_driver) // notice no ';'

The uvm_component_utils macro sets up the factory so that it can create an instance of the type specified in its argument. It is critical that you include this macro in all UVM classes you create that are extended (even indirectly) from the uvm_component base class. Similarly, you should use the uvm_object_utils macro to register all uvm_object extensions (such as uvm_sequence, uvm_sequence_item, etc.).
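For completeness, here is a sketch of registering a uvm_object extension with the factory (the fields shown are hypothetical, chosen to match the my_transaction type used later in these posts):

```systemverilog
// Hypothetical sequence item: the point is the `uvm_object_utils
// macro, which registers uvm_object extensions with the factory
// just as `uvm_component_utils does for components.
class my_transaction extends uvm_sequence_item;
  rand bit [7:0]  cmd;
  rand bit [31:0] addr;
  rand bit [31:0] data;

  `uvm_object_utils(my_transaction) // again, no ';'

  function new(string name = "my_transaction");;
  endfunction
endclass
```

Once registered, instances of my_transaction can also be created via my_transaction::type_id::create() and overridden by the factory, just like components.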

Now, instead of instantiating the my_driver directly, we instead use the factory’s create() method to create the instance:
class my_env extends uvm_env;
  function void build_phase(uvm_phase phase);
    drv = my_driver::type_id::create("drv", this);

The “::type_id::create()” incantation is the standard UVM idiom for invoking the factory’s static create() method. You don’t really need to know what it does, only that it returns an instance of the my_driver type, which is then assigned to the drv handle in the environment. Given this flexibility, we can now use the test to tell the environment which driver to use without having to modify the my_env code.

Instead of extending my_test to instantiate a different environment, we can instead use an extension of my_test to tell the factory to override the type of object that gets instantiated for my_driver in the environment:
class my_factory_test extends my_test;
  function void build_phase(…);
    my_driver::type_id::set_type_override(my_err_driver::get_type());
This paradigm allows us to set up a base test that instantiates the basic environment with default components, and then extend the base test to create a new test that simply consists of factory overrides and perhaps a few other things (for future blog posts) to make interesting things happen. For example, in addition to overriding the driver type, the test may also choose a new stimulus sequence to execute, or swap in a new coverage collector. The point is that the factory gives us the hook to make these changes without changing the env code because the connections between components remain the same due to the use of the TLM interfaces. You get to add new behavior to an existing testbench without changing code that works just fine. It all fits together…
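As a further refinement (a sketch of my own, not from the original snippets): in addition to type-wide overrides, the UVM factory supports instance-specific overrides, so you can swap the driver type only along a particular hierarchical path. The "env.drv" path below is assumed from the earlier examples:

```systemverilog
// Sketch: override my_driver with my_err_driver only for the
// instance at "env.drv"; other my_driver instances are unaffected.
class my_inst_test extends my_test;
  `uvm_component_utils(my_inst_test)

  function new(string name = "my_inst_test", uvm_component parent = null);, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    // Register the override before the env builds its children.
    my_driver::type_id::set_inst_override(my_err_driver::get_type(),
                                          "env.drv", this);
    super.build_phase(phase);
  endfunction
endclass
```

This is handy when an environment contains several agents of the same type and you want error injection on only one of them.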


26 May, 2016

Living on the cutting edge, as I do, I’ve been focusing most of my attention recently on the problem of Portable Stimulus. As you may know, the Accellera Portable Stimulus Working Group (PSWG) has spent the last 14 months or so (starting in March, 2015) working on developing a standard to address this important aspect of functional verification. Although I’ve served as the Vice Chair of the PSWG since its inception, the views expressed here are my own and are not intended to represent the official position of the PSWG as a whole.

As we all know, functional verification testing takes many forms, depending on where you are in the process, and the testing is done by different people with different areas of focus along the way. The idea of Portable Stimulus is that we can have a single description of the scenario(s) to be exercised that can be reused by everyone from architects to RTL verification engineers to post-Silicon validation engineers and software teams on a variety of platforms from simulation and virtual platforms to FPGA prototypes, emulation and even post-Silicon. To describe it in its most simplistic form, we’re talking about a single representation of test intent that can be used to drive a UVM-based simulation for an IP block and can also be used to generate executable C code to run on an embedded processor to drive the same IP block inside an SoC in an emulator, FPGA prototype or even in the actual chip. No one on the committee that I’ve talked to is under the illusion that this is an easy problem to solve, but it’s an important one for us to take the next step in verification productivity.

The WG is actively considering two alternate proposals: a joint proposal from Mentor Graphics and Cadence and one from Breker. While there is a considerable amount of common conceptual ground between the two proposals, there are some important conceptual (and practical) differences between the two. One point of agreement I see amongst the WG members is that to achieve the necessary level of abstraction and automation, the description of stimulus scenarios must be declarative. As shown in the picture below, the abstract model then gets processed by a tool (affectionately referred to as “secret sauce”) that produces the output for the desired target implementation of the test.

Both proposals rely on the idea of a graph to describe the scenarios. The Mentor-Cadence proposal uses a domain-specific language to specify the graph declaratively, as well as actions that represent the individual behaviors to be executed and resources, components and flow objects that describe the rules and constraints of the system under test that the “secret sauce” tool will use to generate the appropriate target implementation of the test. These concepts not only raise the level of abstraction of the test, but they also raise the level of randomization. Consider a system like:

Here we have a system that can receive data via either a USB port or a modem port. When data is received, it will be transferred to a video decoder which will decode the video. In an actual SoC, there would be a lot more going on, but we’ll keep it simple for now. Note that the only data transfer option from the USB port is a DMA transfer, while the Modem port supports both DMA and mem copy. A simple graph specification in the Mentor-Cadence proposal would look something like this:

graph {
  select {
    { USB receive;
      DMA xfer; }
    { Modem receive;
      select {
        DMA xfer;
        mem copy;
      }
    }
  }
  Decode vid;
}

Each of the actions (the ovals in the diagram) represents a unit of behavior that will be implemented on the target platform. The implementations can be associated directly with the action as part of the model or imported from existing C/C++ code using a Direct Programming Interface (DPI). I’ll save a more detailed discussion for later, but there’s one other key point I’d like to make. The declarative specification is flexible enough to describe the actions and their relationships and data flow to allow the tool to infer whatever details are necessary to complete the scenario. For example, given that the system can only support a DMA transfer from the USB port, we could write a partial specification:

graph {
  USB receive;
  Decode vid;
}

In this scenario, the important behaviors we want to exercise are the USB receive action followed by the Decode vid action. By specifying these two actions, the other parts of the model tell the secret sauce tool that the only way to get data from the USB to the video decode is via a DMA xfer action, so that action would be inferred. In fact, there could actually be multiple DMA transfers between the two actions, as long as that satisfied the rest of the system constraints specified in the model.

The flexibility of the partial specification made possible by the declarative language we’ve proposed makes it much easier to describe a set of possible scenarios and rely on the tool to choose the specific scenario and generate the output test for the target platform.

You’ll have many chances to learn more about Portable Stimulus at DAC this year. The PSWG is offering a tutorial on Monday: “How Portable Stimulus Addresses Key Verification, Test Reuse, and Portability Challenges.” You can register here. And I’ll be presenting “Get Ready for Portable Stimulus” in the Verification Academy Booth (#627) at 4pm on Tuesday. I’ll be going into much more detail about Portable Stimulus and will be happy to answer any questions you might have. Hope to see you there!


25 April, 2016

Having been deeply involved with Universal Verification Methodology (UVM) from its inception, and before that, with OVM from its secret-meetings-in-a-hidden-hotel-room beginnings, I must admit that sometimes I forget some of the truly innovative and valuable aspects of what has become the leading verification methodology for both ASIC (see here) and FPGA (see here) verification teams. So I thought it might be helpful to all of us if I took a moment to review some of the key concepts in UVM. Perhaps it will remind even those of us who may have become a bit jaded over the years just how cool UVM really is.

I have long preached that UVM allows engineers to create modular, reusable, randomized self-checking testbenches. Of course, these qualities are all inter-related. For example, modularity is the key to reuse. UVM promotes this through the use of transaction-level modeling (TLM) interfaces. By abstracting the connections using ports on the “calling” side and exports on the “implementation” side, every component in a UVM testbench is blissfully unaware of the internal details of the component(s) to which it is connected. One of the most important places where this abstraction comes in handy is between sequences and drivers.

Figure 1: Sequence-to-Driver Connection(s)

Much of the “mechanical” details are, of course, hidden by the implementation of the sequencer, and the user view of the interaction is therefore rather straightforward:

Figure 2: The Sequence-Driver API

Here’s the key: That “drive_item2bus(req)” call inside the driver can be anything. In many cases, it will be a task call inside the driver that manipulates signals inside the virtual interface, or the calls could be inline:

task run_phase(uvm_phase phase);
  forever begin
    my_transaction tx;
    seq_item_port.get_next_item(tx);
    @(posedge dut_vi.clock);
    dut_vi.cmd  = tx.cmd;
    dut_vi.addr = tx.addr; =;
    @(posedge dut_vi.clock);
    seq_item_port.item_done();
  end
endtask: run_phase

As long as the get_next_item() and item_done() calls are present in the driver, everything else is hidden from the rest of the environment, including the sequence. This opens up a world of possibilities.

One example of the value of this setup is when emulation is a consideration. In this case, the task can exist inside the interface, which can itself exist anywhere. For emulation, the interface often will be instantiated inside a protocol module, which includes other protocol-specific information:

Figure 3: Dual Top Architecture
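A minimal sketch of this pattern (the interface name and signal widths are illustrative assumptions, following the earlier driver snippet): the signal wiggling moves into a task declared inside the interface, so the driver just calls drive_item2bus() through its virtual interface handle, regardless of where the interface is instantiated.

```systemverilog
// Illustrative interface: the bus-driving task lives inside the
// interface itself. For emulation, this interface can be
// instantiated inside a protocol module on the HDL side of a
// dual-top environment without changing the driver at all.
interface dut_if(input logic clock);
  logic [7:0]  cmd;
  logic [31:0] addr;
  logic [31:0] data;

  // Encapsulates the signal-level protocol details.
  task drive_item2bus(my_transaction req);
    @(posedge clock);
    cmd  = req.cmd;
    addr = req.addr;
    data =;
    @(posedge clock);
  endtask
endinterface
```

The driver then simply calls dut_vi.drive_item2bus(req) between get_next_item() and item_done(), and the same driver code works whether the interface lives in the testbench or on the emulator.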

You can find out more about how to set up your environment like this in the UVM Cookbook. And if you’re interested in learning more about setting up your testbench to facilitate emulation, you can download a very interesting paper here.

The flexibility of the TLM interface between the sequence and the driver gives UVM users the flexibility to reuse the same tests and sequences as the project progresses from block-level simulation through emulation. All that’s needed is a mechanism to allow a single environment to instantiate different components with the same interfaces without having to change the code. That’s what the factory is for, and we’ll cover that in our next session.

I’m looking forward to hearing from the new and advanced UVM users out there!


21 March, 2016

As I’m sure I’ve mentioned before, DVCon (in the US – I haven’t made it to any of the new, international events yet) is one of my favorite weeks of the year. In addition to seeing friends and colleagues, I really enjoy seeing how the industry has progressed from year to year. As one of the early (and still enthusiastic) proponents of UVM, I was especially interested to see all the UVM-related activity at this year’s conference.

The UVM emphasis started first thing Monday morning with the tutorial “Preparing for IEEE UVM Plus: UVM Tips and Tricks,” which by my unofficial tally was the most well-attended tutorial on Monday. Judging by the audience’s attentiveness, it was apparent that they found the “Tips and Tricks” discussion, which was divided into compile-time and run-time categories, to be very helpful, although many of them are already included in the online UVM Cookbook on Verification Academy. In addition, there were three separate “UVM Applications” sessions, each of which was the most popular session in its timeslot, and 9 posters on UVM.

One poster in particular caught my eye, “Slaying the UVM Reuse Dragon: Issues and Strategies for Achieving UVM Reuse,” (viewable here) by my Mentor Graphics colleague Bob Oden and Mike Baird of Willamette HDL. Bob is the creator of our new UVM Framework reuse environment (more about that in a future post) and, besides being one of the leading UVM and SystemVerilog trainers out there, Mike holds the distinction of being the guy who taught me Verilog way back in the dark ages. These guys really know their stuff, and the paper lays out a straightforward approach to organizing, grouping, and packaging the different parts of your UVM component library to maximize their reuse from project to project. It also shows you how to architect your components and environments to make them self-contained and configurable so you’ll be able to use them in whatever context you need to.

Given the remarkable and still-growing popularity of UVM, I’m going to take some time over the next few weeks and months to highlight some of the key points of effective UVM usage here on Verification Horizons. As you know, there’s a wealth of UVM-related information on Verification Academy, but I think it might help to point out some of the more important features. Stay tuned!

By the way, you can see all of the papers and posters written by Mentor Graphics authors here. Enjoy!


2 March, 2016

Just wanted to take a minute from DVCon to let you know that the latest super-sized Verification Horizons is now available. If you read the editor’s note, you’ll be able to read about my recent family vacation to Hawaii (which was awesome!), but you should really check out the articles:

We begin this issue with two case study articles from users “in the trenches” of verification. First, our friends at Baker Hughes share “An Evaluation of the Advantages of Moving from a VHDL to a UVM Testbench,” in which they discover the advantages of self-checking randomized testing in UVM, even for FPGA designs. For those of you doing FPGA designs in VHDL, this article should allay any fears you may have about moving to UVM as most of your competitors are doing.

Our second case study comes from our friends at Qualcomm, with an assist from XtremeEDA, where they share their “First Time Unit Testing Experience Report with SVUnit.” Their methodology stresses unit testing critical testbench components to avoid the dreaded “is it a design bug or a testbench bug?” question that so often plagues verification engineers, particularly at the integration stage. As you’ll see, this approach does require some up-front effort, but the payoff is clear. If you can prevent bugs from getting through to tapeout, why wouldn’t you?

We begin a set of articles from my Mentor colleagues by introducing “The Verification Academy Patterns Library,” a new feature of the Verification Academy website that documents good design practices to solve often-recurring problems in verification. The concept of design patterns is not new, but we believe this is the first and most extensive effort to document a pattern library specifically for verification. As you’ll see, the pattern library is clearly organized into categories, making it easy to locate a pattern applicable to your specific problem. It lets you take advantage of the knowledge of a diverse team of experts, spanning assertion-based and formal verification as well as constrained-random and coverage-driven verification, across simulation, hardware-assisted verification and emulation.

Next we learn how to achieve “Increased Efficiency with Questa VRM and Jenkins Continuous Integration” by applying the software practice of Continuous Integration to verification management. Experience and common sense show that the longer a branch of code is checked out, the more it drifts away from the previous version in the repository, making it more likely that problems will occur when checking it back in. The article shows how Jenkins, a free open-source tool, can be used to monitor the source repository and use Questa’s Verification Run Manager (VRM) to handle the necessary verification tasks and supply results back to Jenkins for display in a dashboard.

Our next several articles highlight different aspects of Questa Verification IP, beginning with “Verifying Display Standards: A Comprehensive UVM-Based Verification IP Solution.” This article offers practical advice on how to set up your UVM environment to include QVIP as well as highlighting some of the benefits of QVIP in general. In “9 Effective Features of NVMe® Questa Verification IP to Help You Verify PCIe-Based SSD Storage,” you’ll get an overview of the new Non-Volatile Memory Express® (NVMe) specification and see how our new Questa NVMe VIP can help you accelerate the verification of your PCIe-based Solid State Drives that use the NVMe interface. In “MIPI C-PHY: Man of the Hour,” you’ll get an introduction to the three physical layers used in the MIPI Alliance for mobile imaging systems and the tradeoffs between them, and learn what features Questa VIP provides to assist in their verification. We wrap up the QVIP articles with “Total Recall: What to Look for in a Memory Model Library,” which provides an extremely useful analysis of the key features you should look for in evaluating a VIP Memory library. It highlights some of the unique features of the QVIP Memory Library, including on-the-fly configuration.

Our next article, “Certus Silicon Debug: Don’t Prototype Without It,” addresses that age-old question of what to do once you’ve gotten your full SoC running as an FPGA prototype in the lab and you find a problem. It highlights the many layers of the debug problem and shows how our Certus™ Silicon Debug tool provides unsurpassed visibility into the inner workings of your FPGAs and lets you see the results in the Visualizer™ Debug Environment, just as if you were running in simulation. The idea of defining trigger conditions and capturing HW signals reminds me of my days designing logic analyzers back in the 80s (yes, I’m that old), and I find it fascinating that we can now do the same thing inside an FPGA with millions of gates. This is some really great technology that you have got to check out.

Next we have the first of several articles relating to DO-254 verification. We begin with “Simplified UVM for FPGA Reliability,” where we see how the component-based nature of UVM can help with the auditing process in DO-254. This article also reiterates some of the conclusions from the Baker Hughes article.

In our Partners’ Corner, we continue our DO-254 sub-theme with a discussion of “Complex Signal Processing Verification Under DO-254 Constraints,” in which our friends at AEDVICES Consulting show how they combined assertions, UVM and functional coverage to support requirements-based verification for safety critical processes like DO-254 and ISO 26262.

Since no DO-254 project is complete without documentation, our friends at eInfochips walk us through “Simplifying Generation of DO-254 Compliant Verification Documents for Airborne Electronic Hardware (AEH) Devices.” They show us a step-by-step process to go from a Verification Case Document (VCD) to importing a testplan into a UCDB in Questa, against which you can measure your functional coverage from your UVM simulation. We follow this with a discussion of “DO-254 Compliant UVM VIP Development” from Electra IC, in which they provide a case study of putting together a UVM environment using Questa VIP and Verification Run Manager for a recently completed DO-254 project. And last but not least, we learn from our friends at Ensilica how to build a “Reusable Verification Framework,” where they use UVM to build BFMs in the interface instead of virtual interfaces in the driver to simplify block-to-top reuse of interface components.

DVCon U.S. has been a great show this week. If you weren’t able to attend, I hope you’ll be able to join us next year. In the meantime, we’ll continue to keep you up to date with all the latest information you need to expand your Verification Horizons.
