Posts Tagged ‘SystemVerilog’

22 July, 2017

There’s a wonderful quote in Brian Kernighan’s book The Elements of Programming Style: “Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?” Humor aside, our 2016 Wilson Research Group study supports the claim that debugging is a challenge today: most projects spend more time on debugging than on any other task.

One insidious aspect of debugging is that it is unpredictable if not properly managed. For example, from a management perspective, it is easy to gather metrics on various processes involved in the product development lifecycle from previous projects in order to plan for the future. However, the unpredictability of debugging becomes a manager’s nightmare. Guidelines for efficient debugging are required to improve productivity.

In reality, debugging is required in every process that spans a product’s development lifecycle: conception, architectural design, implementation, and post-silicon, as well as the tests and testbenches we create to verify and validate a design. The emergence of SystemVerilog and the UVM has necessitated new debugging skills and tools, and traditional debugging approaches adopted for simple RTL testbenches have become less productive. To address this dearth of debugging process knowledge, I’m excited to announce that the Verification Academy has just released a new UVM debugging course. The course consists of five video sessions. Topics covered in this course include:

• A historical perspective on the general debugging problem
• Ways to effectively debug memory leaks in UVM environments
• How to debug connectivity issues between UVM components
• Guidelines to effectively debug common issues related to UVM phases
• Methods to debug common issues with the UVM configuration database

As always, this course is available free to you. To learn more about our new Verification Academy debugging course, as well as all our other video courses, please visit the Verification Academy.


24 February, 2017

I wrote my last blog post a few years ago, before attending a conference, reminiscing about the 10-year history of SystemVerilog. Now I’m writing about going to another conference, DVCon, and being part of a panel reminiscing about the 15-year history of SystemVerilog and envisioning its future. My history with SystemVerilog goes back much further.

My first job out of college was working with the group of Data General (DG) engineers made famous by Tracy Kidder’s 1981 book, The Soul of a New Machine. Part of my job was writing a program that simulated the microcode of the CPUs we designed. Back then, there were no hardware description languages, and you had to hard-code everything for each CPU. If you were lucky, you could reuse some of the user-interface code between projects. Later, DG came up with a somewhat more general-purpose simulation language. It was general-purpose in the sense that it could be used for a wider range of projects based on the way DG developed hardware, but getting it to work in another company’s environment would have been a challenge. By the way, Badru Agarwala was the DG simulation developer I worked with; he later founded the Verilog simulation companies Frontline and Axiom, and he now manages the Calypto division at Mentor Graphics.

Many other processor companies like DEC, IBM and Intel had their own in-house simulation languages or were in the process of developing one because no commercially viable technologies existed. Eventually, Phil Moorby at Gateway Design began developing the Verilog language and simulator. One of the benefits of having an independent language, although not an official standard yet, was you could now share or bring in models from outside your company. This includes being able to hand off a Verilog netlist to another company for manufacturing. Another benefit was that companies could now focus on the design and verification of their products instead of the design and verification of tools that design and verify their products.

I evaluated Verilog in its first year of release back in 1985/1986. DG decided not to adopt Verilog at that point, but I liked it so much I left DG and joined Gateway Design as one of its first application engineers. Dropping another name, Karen Bartleson was one of my first customers as a CAD manager working at UTMC. She recently took the office of President and CEO of the IEEE.

Fast forward to the next decade, when Verilog became standardized as IEEE 1364-1995. But by then it had already lost ground in the verification space, and companies went back to developing their own in-house verification solutions. Sun Microsystems developed Vera and later released it as a commercial product marketed by Systems Science. Arturo Salz was one of its developers and will be on the DVCon panel with me as well. Specman was developed for National Semiconductor and a few other clients and later marketed by Verisity. Once again, we had competing independent languages, which limited the ability to share or acquire verification models. So, in 1999, a few Gateway alums and others formed a startup, which I joined a year later, hoping to consolidate design and verification back into one standard language. That language was SUPERLOG, and it became the starting point for the Accellera SystemVerilog 3.0 standard in 2002, fifteen years ago.

There are many dates you could pick for the start of SystemVerilog. You could claim it couldn’t exist until there was a simulator supporting some of the features in the standard. For me, it started when I first read an early Verilog Language Reference Manual and executed my first initial block 31 years ago. I’ve been using Verilog almost every day since, and now all of Verilog is part of SystemVerilog. I’ve been so much a part of the language’s development from its early beginnings that some of my colleagues call me “The Walking LRM”. Luckily, I don’t dream about it. I hope I never get called “The Sleeping LRM”.

So, what’s next for SystemVerilog? Are we going to repeat the cycle of fragmentation and re-consolidation? Various extensions have already begun showing up in different implementations. SystemVerilog has become so complex that no one can keep a good portion of it in their head anymore. It is very difficult to remove anything once it is in the LRM. Should we start over? We tried to do that with SUPERLOG, but no one adopted it until it was made fully backward compatible with Verilog.

The Universal Verification Methodology (UVM) was designed to cut down the complexity of learning and using SystemVerilog. Yet because the UVM itself has exploded in complexity, there is now a growing number of sub-methodologies for using it (UVM Framework and Easier UVM, to name a couple). I have also taken my own approach when teaching SystemVerilog by showing a minimal subset (see my SystemVerilog OOP for UVM Verification course).

I do believe in the KISS principle of engineering, and I hope that is what prevails in the next version of SystemVerilog, whether we start over or just make refinements. I hope you will be able to join the discussion with others and me at the DVCon panel next week, in the forums of the Verification Academy, or on Twitter.



5 January, 2017

Face facts: power supply nets are now effectively functional nets, but they are typically not defined in the design’s RTL. But proper connection and behaviors of power nets and logic – power down, retention, recovery, etc. – must be verified like any other DUT element. As such, the question is how can D&V engineers link their testbench code to the IEEE 1801 Unified Power Format (UPF) files that describe the design’s low power structures and behaviors, so verification of all that low power “stuff” can be included in the verification plan?

A real power distribution setup

Fortunately, the answer is relatively straightforward. In a nutshell, the top-level UPF supply ports and supply nets provide hooks into the design, libraries, and annotated testbenches through the UPF connect_supply_net and connect_supply_set commands, which define the complete power network connectivity. These top-level supply ports and supply nets are collectively known as supply pads or supply pins (e.g., VDD, VSS), and the UPF low power standard recommends how supply pads may be referenced in testbenches and extended to manipulate power network connectivity during simulation. Hence it becomes possible to control power ‘On’ and ‘Off’ for any power domain in the design through the supply pads referenced in the testbench.

All the necessary HDL testbench connections are made by importing the UPF packages distributed with the power-aware simulation tool. Even better, the IEEE 1801 LRM defines standard UPF packages for Verilog, SystemVerilog, and VHDL testbenches, so the appropriate package can be imported to manipulate the supply pads of the design under verification. The following are syntax examples of the UPF packages to be imported or used in the different HDL variants.
Example of UPF package setup for Verilog or SystemVerilog testbench

import UPF::*;
module testbench;

Note: UPF packages can be imported within or outside of the module-endmodule declaration.

Example UPF package setup for a VHDL testbench

library ieee;
use ieee.UPF.all;

entity dut is
end entity;

architecture arch of dut is

end arch;

The imported UPF package (import UPF::*; for Verilog/SystemVerilog, or the ieee.UPF package for VHDL) provides the functions used to drive the design’s supply pads directly from the testbench. Once these packages are referenced in the testbench, the simulator automatically locates them in its installation and makes their built-in functions available to the simulation environment. The following examples describe these functions, supply_on and supply_off, along with their arguments.

Example functions for Verilog and SystemVerilog testbenches to drive supply pads

supply_on( string pad_name, real value = 1.0, string file_info = "");

supply_off( string pad_name, string file_info = "" );

Note: Questa Power Aware Simulator (PA SIM) users do not have to deal with the third argument, string file_info = ""; Questa takes care of it automatically.

Example functions for a VHDL testbench driving supply pads

supply_on ( pad_name : IN string ; value : IN real ) return boolean;

supply_off ( pad_name : IN string ) return boolean;

Regardless of the language used, pad_name must be a string constant naming a valid top-level UPF supply port. A non-zero real value passed with it denotes power “On”; omitting the value (as supply_off does) denotes power “Off”. Questa PA-SIM obtains the top module design name from the UPF set_scope command shown below.

Now that the basic package binding and initial wiring are set up, how do you actually control the design supply pads from the testbench? This is where the aforementioned UPF connect_supply_net (or connect_supply_set) and set_scope commands come in, as the following code examples show.

Example UPF with connect_supply_net for utilizing supply pads from the testbench

set_scope cpu_top
create_power_domain PD_top

# Create top level power domain supply ports
create_supply_port VDD_A -domain PD_top
create_supply_port VDD_B -domain PD_top
create_supply_port VSS -domain PD_top

# Create supply nets
create_supply_net VDD_A -domain PD_top
create_supply_net VDD_B -domain PD_top
create_supply_net VSS -domain PD_top

# Connect top level power domain supply ports to supply nets
connect_supply_net VDD_A -ports VDD_A
connect_supply_net VDD_B -ports VDD_B
connect_supply_net VSS -ports VSS

Next, the UPF connect_supply_net specified supply ports VDD_A, VDD_B, VSS, etc. can be directly driven from the testbench as shown in the following code example.

import UPF::*;
module testbench;
  reg ISO_ctrl;

  initial begin
    ISO_ctrl = 1'b1;
    supply_on("VDD_A", 1.10); // non-zero real value (1.10) signifies power On
    supply_on("VSS", 0.0);    // UPF LRM specifies ground VSS On at 0.0
    supply_on("VDD_B", 1.10);
    supply_off("VDD_A");      // empty value argument indicates power Off
  end
endmodule

That’s all there is to it!

As you can glean from the examples, it is pretty easy to model a voltage regulator or a power management unit in the testbench with the supply_on and supply_off functions to mimic a real chip’s power operations. Of course, there are many more functions available in these UPF packages, but hopefully this article is enough to get you started.
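To make that concrete, here is a minimal sketch (not from the original post) of such a testbench power-management unit built on the two package functions; the pad names ("VDD_A", "VSS") and the power_cycle task are illustrative assumptions matching the UPF example above.

```systemverilog
// Hypothetical sketch: a tiny testbench power manager using the UPF package
import UPF::*;

module power_manager;
  // Cycle one supply pad: power it down, wait, then restore the voltage
  task automatic power_cycle(string pad, real voltage, time off_time);
    supply_off(pad);          // no value argument: power Off
    #(off_time);
    supply_on(pad, voltage);  // non-zero value: power On
  endtask

  initial begin
    supply_on("VSS", 0.0);    // ground On at 0.0, per the UPF LRM
    supply_on("VDD_A", 1.10);
    power_cycle("VDD_A", 1.10, 100ns);
  end
endmodule
```

A real power management model would add sequencing between domains and checks on retention and recovery, but the control surface is just these two functions.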

Joe Hupcey III
Progyna Khondkar
for the Questa Low Power Design & Verification product team

Related posts:

Part 11: The 2016 Wilson Research Group Functional Verification Study on ASIC/IC Low Power Trends

3 Things About UPF 3.0 You Need to Know Now

Whitepaper: Advanced Verification of Low Power Designs


31 October, 2016

ASIC/IC Language and Library Adoption Trends

This blog is a continuation of a series of blogs related to the 2016 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I focused on various verification technology adoption trends. In this blog I discuss various ASIC/IC language and library adoption trends.

Figure 1 shows the adoption trends for languages used to create RTL designs. Essentially, the adoption rates for all languages used to create RTL designs are projected to be either declining or flat over the next year.


Figure 1. ASIC/IC Languages Used for RTL Design

As previously noted, the reason some of the results sum to more than 100 percent is that some projects are using multiple languages; thus, individual projects can have multiple answers.

Figure 2 shows the adoption trends for languages used to create ASIC/IC testbenches. Essentially, the adoption rates for all languages used to create testbenches are either declining or flat. Furthermore, the data suggest that SystemVerilog adoption is starting to saturate or level off in the mid-70s range.


Figure 2. ASIC/IC Languages Used for Verification (Testbenches)

Figure 3 shows the adoption trends for various ASIC/IC testbench methodologies built using class libraries.


Figure 3. ASIC/IC Methodologies and Testbench Base-Class Libraries

Here we see a decline in adoption of all methodologies and class libraries with the exception of Accellera’s UVM, whose adoption continued to increase between 2014 and 2016. Furthermore, our study revealed that UVM is projected to continue its growth over the next year. However, like SystemVerilog, it will likely start to level off in the mid- to upper-70 percent range.

Figure 4 shows the ASIC/IC industry adoption trends for various assertion languages, and again, SystemVerilog Assertions seems to have saturated or leveled off.


Figure 4. ASIC/IC Assertion Language Adoption

In my next blog (click here) I plan to present the ASIC/IC design and verification power trends.

Quick links to the 2016 Wilson Research Group Study results


21 September, 2016

FPGA Language and Library Trends

This blog is a continuation of a series of blogs related to the 2016 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I focused on FPGA verification techniques and technologies adoption trends, as identified by the 2016 Wilson Research Group study. In this blog, I’ll present FPGA design and verification language trends.

You might note that the percentages for some of the languages I present sum to more than one hundred percent. The reason for this is that many FPGA projects today use multiple languages.

FPGA RTL Design Language Adoption Trends

Let’s begin by examining the languages used for FPGA RTL design. Figure 1 shows the trends in languages used for design, comparing the 2012, 2014, and 2016 Wilson Research Group studies, as well as the projected design language adoption within the next twelve months. Note that adoption is declining for most of the languages used for FPGA design, with the exception of Verilog and SystemVerilog.

Also, it’s important to note that this study focused on languages used for RTL design. We have conducted a few informal studies related to languages used for architectural modeling—and it’s not too big of a surprise that we see increased adoption of C/C++ and SystemC in that space. However, since those studies have (thus far) been informal and not as rigorously executed as the Wilson Research Group study, I have decided to withhold that data until a more formal study can be executed related to architectural modeling and virtual prototyping.


Figure 1. Trends in languages used for FPGA design

It’s not too big of a surprise that VHDL is the predominant language used for FPGA RTL design, although it is slowly declining when viewed as a worldwide trend. An important note here is that if you were to filter the results down by a particular market segment or region of the world, you would find different results. For example, if you only look at Europe, you would find that VHDL adoption as an FPGA design language is about 79 percent, while the world average is 62 percent. However, I believe that it is important to examine worldwide trends to get a sense of where the industry is moving in the future.

FPGA Verification Language Adoption Trends

Next, let’s look at the languages used to verify FPGA designs (that is, languages used to create simulation testbenches). Figure 2 shows the trends in languages used to create simulation testbenches, comparing the 2012, 2014, and 2016 Wilson Research Group studies, as well as the projected verification language adoption within the next twelve months.


Figure 2. Trends in languages used in verification to create FPGA simulation testbenches

What is interesting in 2016 is that SystemVerilog overtook VHDL as the language of choice for building FPGA testbenches. But please note that the same comment related to design language adoption applies to verification language adoption. That is, if you were to filter the results down by a particular market segment or region of the world, you would find different results. For example, if you only look at Europe, you would find that VHDL adoption as an FPGA verification language is about 66 percent (greater than the worldwide average), while SystemVerilog adoption is 41 percent (less than the worldwide average).

FPGA Testbench Methodology Class Library Adoption Trends

Now let’s look at testbench methodology and class library adoption for FPGA designs. Figure 3 shows the trends in methodology and class library adoption, comparing the 2012, 2014, and 2016 Wilson Research Group studies, as well as the projected methodology and class library adoption within the next twelve months.


Figure 3. FPGA methodology and class library adoption trends

Today, we see a basically flat or downward trend in adoption of all testbench methodologies and class libraries, with the exception of UVM, which has been growing at a healthy 10.7 percent compounded annual growth rate. The study participants were also asked what they plan to use within the next 12 months; based on the responses, UVM adoption is projected to increase an additional 12.5 percent.

By the way, to be fair, we did get a few write-in methodologies, such as the VHDL-based OSVVM and UVVM. I did not list them in the previous figure since it would be difficult to predict an accurate adoption percentage: they were not listed as selection options on the original question, which resulted in only a few write-in answers. Nonetheless, the data suggest that the industry’s momentum and focus have moved to SystemVerilog and UVM.

FPGA Assertion Language and Library Adoption Trends

Finally, let’s examine assertion language and library adoption for FPGA designs. The 2016 Wilson Research Group study found that 47 percent of all the FPGA projects have adopted assertion-based verification (ABV) as part of their verification strategy. The data presented in this section shows the assertion language and library adoption trends related to those participants who have adopted ABV.

Figure 4 shows the trends in assertion language and library adoption, comparing the 2012, 2014, and 2016 Wilson Research Group studies and the projected adoption trends within the next 12 months. The adoption of SVA continues to increase, while other assertion languages and libraries show no significant change.


Figure 4. Trends in assertion language and library adoption for FPGA designs

In my next blog (click here), I will shift the focus of this series of blogs and start to present the ASIC/IC findings from the 2016 Wilson Research Group Functional Verification Study.

Quick links to the 2016 Wilson Research Group Study results


8 August, 2016

This is the first in a series of blogs that presents the findings from our new 2016 Wilson Research Group Functional Verification Study. Similar to my previous 2014 Wilson Research Group functional verification study blogs, I plan to begin this set of blogs with an exclusive focus on FPGA trends. Why? For the following reasons:

  1. Some of the more interesting trends in our 2016 study are related to FPGA designs. The 2016 ASIC/IC functional verification trends are overall fairly flat, which is another indication of a mature market.
  2. Unlike the traditional ASIC/IC market, there have historically been very few published studies on FPGA functional verification trends. We started studying the FPGA market segment in the 2010 study, and we have now collected sufficient data to confidently present industry trends related to this market segment.
  3. Today’s FPGA designs have grown in complexity—and many now resemble complete systems. The task of verifying SoC-class designs is daunting, which has forced many FPGA projects to mature their verification process due to rising complexity. The FPGA-focused data I present in this set of blogs will support this claim.

My plan is to release the ASIC/IC functional verification trends through a set of blogs after I finish presenting the FPGA trends.


In 2002 and 2004, Collett International Research, Inc. conducted its well-known ASIC/IC functional verification studies, which provided invaluable insight into the state of the electronic industry and its design and verification trends at that point in time. However, after the 2004 study, no additional Collett studies were conducted, which left a void in identifying industry trends. To address this dearth of knowledge, Mentor Graphics commissioned five functional-verification-focused studies in 2007, 2010, 2012, 2014, and 2016. These were worldwide, double-blind, functional verification studies covering all electronic industry market segments. To our knowledge, the 2014 and 2016 studies are two of the largest functional verification studies ever conducted. This set of blogs presents the findings from our 2016 study and provides invaluable insight into the state of the electronic industry today in terms of both design and verification trends.

Study Background

Our study was modeled after the original 2002 and 2004 Collett International Research, Inc. studies. In other words, we endeavored to preserve the original wording of the Collett questions whenever possible to facilitate trend analysis. To ensure anonymity, we commissioned Wilson Research Group to execute our study. The purpose of preserving anonymity was to prevent biasing the participants’ responses. Furthermore, to ensure that our study would be executed as a double-blind study, the compilation and analysis of the results did not take into account the identity of the participants.

For the purpose of our study we used a multiple sampling frame approach that was constructed from eight independent lists that we acquired. This enabled us to cover all regions of the world—as well as cover all relevant electronic industry market segments. It is important to note that we decided not to include our own account team’s customer list in the sampling frame. This was done in a deliberate attempt to prevent biasing the final results. My next blog in this series will discuss other potential bias concerns when conducting a large industry study and describe what we did to address these concerns.

After cleaning the data to remove inconsistent or random responses (e.g., someone who answered “a” to every question), the final sample size consisted of 1703 eligible participants (i.e., n=1703). This is approximately 90% the size of our 2014 study (i.e., 2014 n=1886). To put this figure in perspective, the famous 2004 Collett study sample size consisted of 201 eligible participants.

Unlike the 2002 and 2004 Collett IC/ASIC functional verification studies, which focused only on the ASIC/IC market segment, our studies were expanded in 2010 to include the FPGA market segment. We have partitioned the analysis of these two different market segments separately, to provide a clear focus on each. One other difference between our studies and the Collett studies is that our study covered all regions of the world, while the original Collett studies were conducted only in North America (US and Canada). We have the ability to compile the results both globally and regionally, but for the purpose of this set of blogs I am presenting only the globally compiled results.

Confidence Interval

All surveys are subject to sampling errors. To quantify this error in probabilistic terms, we calculate a confidence interval. For example, we determined the “overall” margin of error for our study to be ±2.36% at a 95% confidence interval. In other words, this confidence interval tells us that if we were to take repeated samples of size n=1703 from a population, 95% of the samples would fall inside our margin of error ±2.36%, and only 5% of the samples would fall outside. Obviously, response rate per individual question will impact the margin of error. However, all data presented in this blog has a margin of error of less than ±5%, unless otherwise noted.

Study Participants

This section provides background on the makeup of the study.

Figure 1 shows the percentage of overall study FPGA and ASIC/IC participants by market segment. It is important to note that this figure does not represent silicon volume by market segment.


Figure 1: FPGA and ASIC/IC study participants by market segment

Figure 2 shows the percentage of eligible FPGA and ASIC/IC study participants by job description. An example of an eligible participant would be a self-identified design or verification engineer, or engineering manager, who is actively working within the electronics industry. Overall, design and verification engineers accounted for 60 percent of the study participants.


Figure 2: FPGA and ASIC/IC study participants job title description

Before I start presenting the findings from our 2016 functional verification study, I plan to discuss in my next blog (click here) general bias concerns associated with all survey-based studies—and what we did to minimize these concerns.

Quick links to the 2016 Wilson Research Group Study results


25 July, 2016

As I mentioned in my last UVM post, UVM allows engineers to create modular, reusable, randomized self-checking testbenches. In that post, we covered the “modularity” aspect of UVM by discussing TLM interfaces, and their value in isolating a component from others connected to it. This modularity allows a sequence to be connected to any compatible driver as long as they both are communicating via the same transaction type, or allows multiple coverage collectors to be connected to a monitor via the analysis port. This is incredibly useful in creating an environment in that it gives the environment writer the ability to mix and match components from a library into a configuration that will verify the desired DUT.

Of course, it is often the case that you may want to have multiple environments that are very similar, but may only differ in small ways from each other. Every environment has a specific purpose, and also shares most of its infrastructure with the others. How can we share the common parts and only customize the unique portions? This is, of course, one of the principles of object-oriented programming (OOP), but UVM actually takes it a step further.

A naïve OOP coder might be tempted to instantiate components in an environment directly, such as:
class my_env extends uvm_env;
  function void build_phase(…);
    my_driver drv = new("drv", this); //extended from uvm_driver

The environment would be similarly instantiated in a test:
class my_test extends uvm_test;
  function void build_phase(…);
    my_env env = new("env", this); //extended from uvm_env

Once you get your environment running with good transactions, it’s often useful to modify things to find out what happens when you inject errors into the transaction stream. To do this, we’ll create a new driver called my_err_driver (extended from my_driver) and instantiate it in an environment. OOP lets us do this by extending the my_env class and overriding the build_phase() method like this:
class my_env2 extends my_env;
  function void build_phase(…);
    my_err_driver drv = new("drv", this);

Thus, the only thing different is the type of the driver. Because my_err_driver is extended from my_driver, it would have the same interfaces and all the connections between the driver and the sequencer would be the same, so we don’t have to duplicate that other code. Similarly, we could extend my_test to use the new environment:
class my_test2 extends my_test;
  function void build_phase(…);
    my_env2 env = new("env", this); //extended from my_env

So, we’ve gone from one test, one env and one driver, to two tests, two envs and two drivers (and a whole slew of new build_phase() methods), when all we really needed was the extra driver. Wouldn’t it be nice if there were a way that we could tell the environment to instantiate the my_err_driver without having to create a new extension to the environment? We use something called the factory pattern to do this in UVM.

The factory is a special class in UVM that creates an instance for you of whatever uvm_object or uvm_component type you specify. There are two important parts to using the factory. The first is registering a component with the factory, so the factory knows how to create an instance of it. This is done using the appropriate utils macro (which is just about the only macro I like using in UVM):
class my_driver extends uvm_driver;
  `uvm_component_utils(my_driver) // notice no ';'

The uvm_component_utils macro sets up the factory so that it can create an instance of the type specified in its argument. It is critical that you include this macro in all UVM classes you create that are extended (even indirectly) from the uvm_component base class. Similarly, you should use the uvm_object_utils macro to register all uvm_object extensions (such as uvm_sequence, uvm_sequence_item, etc.).
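For the uvm_object side, registration looks much the same. Here is a hedged sketch of a factory-registered sequence item; the my_item class and its data field are illustrative, not from the original post:

```systemverilog
// Illustrative sequence item, registered with the factory via uvm_object_utils
class my_item extends uvm_sequence_item;
  rand bit [7:0] data;

  `uvm_object_utils(my_item)

  // uvm_object constructors take only a name argument (no parent)
  function new(string name = "my_item");
    super.new(name);
  endfunction
endclass
```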

Now, instead of instantiating the my_driver directly, we instead use the factory’s create() method to create the instance:
class my_env extends uvm_env;
  function void build_phase(uvm_phase phase);
    drv = my_driver::type_id::create("drv", this);

The “::type_id::create()” incantation is the standard UVM idiom for invoking the factory’s static create() method. You don’t really need to know what it does, only that it returns an instance of the my_driver type, which is then assigned to the drv handle in the environment. Given this flexibility, we can now use the test to tell the environment which driver to use without having to modify the my_env code.
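The same idiom applies one level up. Here is a sketch of my_test reworked to create its environment through the factory instead of calling new() directly (the constructor is included for completeness; assume my_env is factory-registered as well):

```systemverilog
class my_test extends uvm_test;
  `uvm_component_utils(my_test)

  my_env env;

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    // The factory decides the actual type constructed here, so a derived
    // test can swap in a different environment without touching this code.
    env = my_env::type_id::create("env", this);
  endfunction
endclass
```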

Instead of extending my_test to instantiate a different environment, we can instead use an extension of my_test to tell the factory to override the type of object that gets instantiated for my_driver in the environment:
class my_factory_test extends my_test;
  function void build_phase(uvm_phase phase);
    my_driver::type_id::set_type_override(my_err_driver::get_type());
    super.build_phase(phase);

This paradigm allows us to set up a base test that instantiates the basic environment with default components, and then extend the base test to create a new test that simply consists of factory overrides and perhaps a few other things (for future blog posts) to make interesting things happen. For example, in addition to overriding the driver type, the test may also choose a new stimulus sequence to execute, or swap in a new coverage collector. The point is that the factory gives us the hook to make these changes without changing the env code because the connections between components remain the same due to the use of the TLM interfaces. You get to add new behavior to an existing testbench without changing code that works just fine. It all fits together…
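The factory also supports replacing the driver in only one part of the testbench rather than everywhere, via an instance override. A minimal sketch, where the instance path "env.agent1.*" is purely illustrative:

```systemverilog
class my_inst_override_test extends my_test;
  `uvm_component_utils(my_inst_override_test)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    // Replace my_driver with my_err_driver only under env.agent1
    my_driver::type_id::set_inst_override(my_err_driver::get_type(),
                                          "env.agent1.*", this);
    super.build_phase(phase);
  endfunction
endclass
```

At run time, you select whichever test you want with +UVM_TESTNAME on the simulator command line, so the same compiled testbench can run with or without the override.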


24 May, 2016

Join us at the 53rd Design Automation Conference

DAC is always a time of jam-packed activity with multiple events that merit your time and attention.  As you prepare your own personal calendars and try your best to reduce or eliminate conflicts, let me share with you some candidate events that you may wish to consider having on your calendar.  I will highlight opportunities to learn more about ongoing and emerging standards from Accellera and IEEE.  I will focus on a few sessions at the Verification Academy booth (#627) that feature Partner presentations.  And I will spotlight some venues where other industry collaboration will be detailed.  You will also find me at many of these events as well.


Accellera will host its traditional Tuesday morning breakfast.  Registration is required – or you might not find a seat.  As always, breakfast is free.  The morning will feature a “Town Hall” style meeting that will cover UVM (also known as IEEE P1800.2) and other technical challenges that could help evolve UVM into other areas.  To find out more and learn about all things UVM, register here.


The Verification Academy is “partner-central” for us this year.  Each day will feature partner presentations that highlight evolving design and verification methodologies, standards support and evolution, and product integrations.  Verification Academy is booth #627, which is centrally located and easy to find.  Partner presentations include:

  • Back to the Stone Ages for Advanced Verification
    Monday June 6th
    2:00 PM | Neil Johnson – XtremeEDA

    Modern development approaches are leaving quality gaps that advanced verification techniques fail to address… and the gaps are growing in spite of new innovation. It’s time for a fun, frank and highly interactive discussion around the shortcomings of today’s advanced verification methods.

  • SystemVerilog Assertions – Bind files & Best Known Practices
    Monday June 6th
    3:00 PM | Cliff Cummings – Sunburst Design

    SystemVerilog Assertions (SVA) can be added directly to the RTL code or be added indirectly through bindfiles. 13 years of professional SVA usage strongly suggests that Best Known Practices use bindfiles to add assertions to RTL code.

  • Specification to Realization flow using ISequenceSpec™ and Questa® InFact
    Tuesday June 7th
    10:00 AM | Anupam Bakshi – Agnisys, Inc.

    Using an Ethernet Controller design, we show how complete verification can be done in an automated manner, saving time while improving quality. The integration of the two tools will be shown. InFact creates tests for a variety of scenarios, which is more efficient and exhaustive than a pure constrained-random methodology. ISequenceSpec forms a layer of abstraction around the IP/SoC from a specification.

  • Safety Critical Verification
    Wednesday June 8th
    10:00 AM | Mike Bartley – TVS

    The traditional environments for safety-related hardware and software such as avionics, rail and nuclear have been joined by others (such as automotive and medical devices) as systems become increasingly complex and ever more reliant on embedded software. In tandem, further industry-specific safety standards (including ISO 26262 for automotive applications and IEC 62304 for medical device software) have been introduced to ensure that hardware and software in these application areas have been developed and tested to achieve a defined level of integrity. In this presentation, we will explain some of these changes and how they can be implemented.

  • Using a Chessboard Challenge to Discover Real-world Formal Techniques
    Wednesday June 8th
    3:00 PM | Vigyan Singhal & Prashant Aggarwal – Oski Technology

    In December 2015, Oski challenged formal users to solve a chessboard problem. This was an opportunity to show how nifty formal techniques might be used to solve a fun puzzle. Design verification engineers from a variety of semiconductor companies and research labs participated in the contest. The techniques submitted by participants presented a number of worthy solutions, with varying degrees of success.

Industry Collaboration

Debug Data API: “Cadence and Mentor Demonstrate Collaboration for open Debug Data API in Action”  It was just a year ago that the project to create an open debug data API was announced at DAC 52.  Since then, several possible implementation styles have been reviewed, an agreed specification has been created, and early working prototypes have been demonstrated.  On Tuesday, June 7th at 2:00pm we will host a session at the Verification Academy (Booth #627).  You are encouraged to register for the free session – but walkups are always welcome!  You can find more information here.

Portable Stimulus Tutorial: “How Portable Stimulus Addresses Key Verification, Test Reuse, and Portability Challenges”  As part of the official DAC program, there will be a tutorial on the emerging standardization work in Accellera.  The tutorial is Monday, June 6th from 1:30pm – 3:00pm in the Austin Convention Center, Room 15.  You can register here for the tutorial.  There is a fee for this event.  Want to know more about the tutorial?  You can find more information here.


It is always good to end the day on a light note.  To that end, on Monday June 6th, we invite you to “grab a cold one” at the Verification Academy booth and continue discussions and networking with your colleagues.  If past years’ experience is any guide, you may want to get here early for your drink!  There is no registration to guarantee a drink, unfortunately!  So, come early; stay late!  See you in Austin!

And if you miss me at any of the locations above, tweet me @dennisbrophy – your message is sure to reach me right away.


25 April, 2016

Having been deeply involved with Universal Verification Methodology (UVM) from its inception, and before that, with OVM from its secret-meetings-in-a-hidden-hotel-room beginnings, I must admit that sometimes I forget some of the truly innovative and valuable aspects of what has become the leading verification methodology for both ASIC (see here) and FPGA (see here) verification teams. So I thought it might be helpful to all of us if I took a moment to review some of the key concepts in UVM. Perhaps it will remind even those of us who may have become a bit jaded over the years of just how cool UVM really is.

I have long preached that UVM allows engineers to create modular, reusable, randomized self-checking testbenches. Of course, these qualities are all inter-related. For example, modularity is the key to reuse. UVM promotes this through the use of transaction-level modeling (TLM) interfaces. By abstracting the connections using ports on the “calling” side and exports on the “implementation” side, every component in a UVM testbench is blissfully unaware of the internal details of the component(s) to which it is connected. One of the most important places where this abstraction comes in handy is between sequences and drivers.
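Before narrowing in on sequences and drivers, here is what a port/export pairing looks like in its simplest form; a minimal sketch (the my_subscriber name, and the monitor it connects to, are illustrative, not code from this post):

```systemverilog
class my_subscriber extends uvm_subscriber #(my_transaction);
  `uvm_component_utils(my_subscriber)

  int unsigned n_seen; // count of observed transactions

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  // uvm_subscriber supplies analysis_export; we only implement write()
  function void write(my_transaction t);
    n_seen++;
  endfunction
endclass

// In the env's connect_phase, the monitor's analysis port plugs into
// the subscriber's export; neither side knows the other's internals:
//   mon.ap.connect(sub.analysis_export);
```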

Figure 1: Sequence-to-Driver Connection(s)

Most of the “mechanical” details are, of course, hidden by the implementation of the sequencer, and the user view of the interaction is therefore rather straightforward:

Figure 2: The Sequence-Driver API

Here’s the key: That “drive_item2bus(req)” call inside the driver can be anything. In many cases, it will be a task call inside the driver that manipulates signals inside the virtual interface, or the signal assignments could be written inline:

task run_phase(uvm_phase phase);
  forever begin
    my_transaction tx;
    seq_item_port.get_next_item(tx);
    @(posedge dut_vi.clock);
    dut_vi.cmd  = tx.cmd;
    dut_vi.addr = tx.addr;
    dut_vi.data = tx.data;
    @(posedge dut_vi.clock);
    seq_item_port.item_done();
  end
endtask: run_phase

As long as the get_next_item() and item_done() calls are present in the driver, everything else is hidden from the rest of the environment, including the sequence. This opens up a world of possibilities.
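On the other side of that TLM connection, the sequence’s view is just as simple; a minimal sketch (the my_sequence name and the item count are illustrative):

```systemverilog
class my_sequence extends uvm_sequence #(my_transaction);
  `uvm_object_utils(my_sequence)

  function new(string name = "my_sequence");
    super.new(name);
  endfunction

  task body();
    repeat (10) begin
      req = my_transaction::type_id::create("req");
      start_item(req);   // blocks until the driver calls get_next_item()
      if (!req.randomize())
        `uvm_error("SEQ", "randomize failed")
      finish_item(req);  // returns once the driver calls item_done()
    end
  endtask
endclass
```

The handshake is entirely hidden in start_item()/finish_item(), which is why the sequence never needs to know how the driver wiggles the pins.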

One example of the value of this setup is when emulation is a consideration. In this case, the task can exist inside the interface, which can itself exist anywhere. For emulation, the interface often will be instantiated inside a protocol module, which includes other protocol-specific information:

Figure 3: Dual Top Architecture

You can find out more about how to set up your environment like this in the UVM Cookbook. And if you’re interested in learning more about setting up your testbench to facilitate emulation, you can download a very interesting paper here.

The flexibility of the TLM interface between the sequence and the driver allows UVM users to reuse the same tests and sequences as the project progresses from block-level simulation through emulation. All that’s needed is a mechanism to allow a single environment to instantiate different components with the same interfaces without having to change the code. That’s what the factory is for, and we’ll cover that in our next session.

I’m looking forward to hearing from the new and advanced UVM users out there!


20 April, 2016

Using SystemVerilog to model RTL behavior in a pinch or anytime

A couple weeks ago, sitting here in California on a rainy Friday afternoon. No, the drought is not over, but the hills are green again, and the reservoirs are filling up. Daydreaming about the power and flexibility of SystemVerilog and some of the trouble people can get themselves into with it. Wishing my RTL models would show up, so I could test my testbench…

I start to think about writing some SystemVerilog… SystemVerilog is a powerful language. You can write some powerful code with little or no pain. It’s just software.

In the early days of SystemVerilog, I coded some C code and some SystemVerilog code that did the same thing. Some function calls and loops and if-then-else and some adds and multiplies. Just code. Then I disassembled both sides and compared the generated code. It was basically the same generated code! Certainly I could expect the same performance. (And I got it).

That result made me feel very comfortable writing SystemVerilog code anytime I needed some “code”. I didn’t need any special C or C++ code, I could just write SystemVerilog. There are plenty of good reasons to use C and C++ alongside your SystemVerilog, but you certainly don’t need to go there FIRST.

With this in mind on that rainy Friday afternoon two weeks ago, I decided to use SystemVerilog as a stand-in or early implementation for some simple missing RTL. What I mean is that I was going to implement the functionality using high-level SystemVerilog code – not RTL. This is a case where exact timing is not so important, and complete functionality is not so important. What IS important is getting some tests running before the weekend.

In this example, we just want a block implemented that acts like a switch – two input ports and two output ports. That code is simple enough, we can just write some SystemVerilog code using queues.

Imagine a switch with two ports in and two ports out. The ports each have an 8 bit data payload, an “output address” and a ready signal.

Figure 1: 2x2 Switch

The protocol is simple. If ready is high on the positive edge of the clock, then a transfer takes place. This transfer takes just one clock cycle. (This is just a simple example). The typedef below defines each “port” connection – or each packet that is transferred on the bus.

typedef struct packed {
    bit ready;       // Valid data ready
    bit output_port; // Send to which output? (0: out1, 1: out2)
    bit [7:0] data;  // Data payload
} packet_t;

Now we can build a switch, with a simple clock and four ports (two input and two output). This switch is not too smart. It just shuttles “packets” from input port to output port, choosing one of four possible routes: 1→ 1, 1→ 2, 2→ 1 and 2→ 2. The switch ports are defined using the packet_t typedef – each port has a ready, a data payload and an output destination.

module switch (input clk,
     input packet_t in1,  packet_t in2,
    output packet_t out1, packet_t out2);

The module is implemented using two simple queues. We are NOT doing any reordering of the queues, nor are we supporting prioritized transfers. We could, but it is Friday afternoon, after all.

  packet_t  in_q[$];
  packet_t out_q[$];

When a packet comes into a port, it is put on the input queue. [i.e. On the clock edge, for that port, if the ready signal is high, then capture the connection into the input queue (ready, data payload and output_port)]. Do this for both input ports – in1 and in2.

  always @(posedge clk)
    if (in1.ready == 1)
      in_q.push_front(in1);
  always @(posedge clk)
    if (in2.ready == 1)
      in_q.push_front(in2);

When the size of the input queue is NOT zero, then a packet is popped off and put on the output queue for “other” processing.

  always begin: Input_Queue
    packet_t p;
    wait (in_q.size() != 0);
    p = in_q.pop_back();
    repeat (1) @(posedge clk);
    out_q.push_front(p);
  end

When the size of the output queue is NOT zero, then a packet is popped off and sent out of the module – it goes out onto the bus. Check to see if it goes out on out1 or on out2.

  always begin: Output_Queue
    packet_t p;
    wait (out_q.size() != 0);
    p = out_q.pop_back();
    case (p.output_port) // Which output port?
      0: begin
        out1 = p;
        out1.ready = 1;
        @(posedge clk);
        out1.ready = 0;
        @(negedge clk);
      end
      1: begin
        out2 = p;
        out2.ready = 1;
        @(posedge clk);
        out2.ready = 0;
        @(negedge clk);
      end
    endcase
  end

The next higher level that instantiates the switches is simple. Just simple connections. The packed struct takes care of the details. A couple of switches connected together using the packed structs is quite easy to write.

  packet_t in1, in2, out1, out2;
  packet_t n1, n2;
  switch ss_A(clk, in1, in2,   n1,   n2);
  switch ss_B(clk,  n1,  n2, out1, out2);

We now have a switch implementation with certain kinds of behavior. We can implement a priority scheme, or re-order the queues as we wish. And it is still Friday afternoon! We used user defined types (typedef of packed structs), queues and multiple threads. Pretty good bang for the buck. And coded and tested in less than a day.
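To see those queues fill and drain, a quick smoke test along these lines is enough (a sketch; it assumes packet_t and switch are compiled in the same scope, and the stimulus values are arbitrary):

```systemverilog
module tb;
  bit clk;
  packet_t in1, in2, out1, out2;
  packet_t n1, n2;

  switch ss_A(clk, in1, in2,   n1,   n2);
  switch ss_B(clk,  n1,  n2, out1, out2);

  always #5 clk = ~clk;

  initial begin
    repeat (8) begin
      @(negedge clk);  // drive between the sampling edges
      in1.ready       = 1;
      in1.output_port = $urandom_range(1); // 0 -> out1, 1 -> out2
      in1.data        = $urandom;
    end
    @(negedge clk);
    in1.ready = 0;
    #500 $finish;
  end
endmodule
```

Since packet_t is a packed struct of 2-state bits, in2 sits harmlessly at all zeros (ready low) while we pound on in1.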

The moral of the story is this. If you need to implement some missing functionality, SystemVerilog is certainly capable. It’s not as exciting as a presidential race. It’s not as exciting as going away to college for the first time, nor as exciting as racing Teslas on the highway, but in a pinch on a rainy Friday afternoon it sure beats going home early. Or does it?

It’s up and running, and I see the queue sizes in the switch modules growing and draining, and I call it a day.

Figure 2: Analog Waves for Queue Sizes

You can find some ideas on the Verification Academy along these same lines using other SystemVerilog high-level constructs like classes, associative arrays, dynamic arrays and processes by clicking here. Look for “No RTL Yet? No Problem UVM Testing a SystemVerilog Fabric Model”. If you *really* want to write RTL instead of high-level SystemVerilog, you can find some great tips and tricks at Sunburst Design, especially the FIFO 1 & 2 papers.

Happy Friday! More rain predicted for this coming Friday.

