Posts Tagged ‘testbench’

22 July, 2017

There’s a wonderful quote in Brian Kernighan’s book The Elements of Programming Style, where he says “Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?” Humor aside, our 2016 Wilson Research Group study supports the claim that debugging is a challenge today: most projects spend more time on debugging than on any other task.

One insidious aspect of debugging is that it is unpredictable if not properly managed. For example, from a management perspective, it is easy to gather metrics on various processes involved in the product development lifecycle from previous projects in order to plan for the future. However, the unpredictability of debugging becomes a manager’s nightmare. Guidelines for efficient debugging are required to improve productivity.

In reality, debugging is required in every process that spans a product’s development lifecycle: conception, architectural design, implementation, post-silicon, and even the tests and testbenches we create to verify/validate a design. The emergence of SystemVerilog and UVM has necessitated the development of new debugging skills and tools, and traditional debugging approaches adopted for simple RTL testbenches have become less productive. To address this dearth of debugging process knowledge, I’m excited to announce that the Verification Academy has just released a new UVM debugging course. The course consists of five video sessions covering the following topics:

• A historical perspective on the general debugging problem
• Ways to effectively debug memory leaks in UVM environments
• How to debug connectivity issues between UVM components
• Guidelines to effectively debug common issues related to UVM phases
• Methods to debug common issues with the UVM configuration database

As always, this course is available free to you. To learn more about our new Verification Academy debugging course, as well as all our other video courses, please visit www.verificationacademy.com.


5 January, 2017

Face facts: power supply nets are now effectively functional nets, but they are typically not defined in the design’s RTL. But proper connection and behaviors of power nets and logic – power down, retention, recovery, etc. – must be verified like any other DUT element. As such, the question is how can D&V engineers link their testbench code to the IEEE 1801 Unified Power Format (UPF) files that describe the design’s low power structures and behaviors, so verification of all that low power “stuff” can be included in the verification plan?

A real power distribution setup

Fortunately, the answer is relatively straightforward. In a nutshell, the top-level UPF supply ports and supply nets provide hooks for the design, libraries, and annotated testbenches through the UPF connect_supply_net and connect_supply_set commands – these define the complete power network connectivity. These top-level UPF supply ports and supply nets are collectively known as supply pads or supply pins (e.g. VDD, VSS, etc.), and the UPF low power standard recommends how supply pads may be referenced in testbenches to manipulate power network connectivity during simulation. Hence it becomes possible to control power ‘On’ and ‘Off’ for any power domain in the design through the supply pads referenced in the testbench.

All the necessary HDL testbench connections are made by importing the UPF packages that ship with the power-aware simulation tool. Even better, the IEEE 1801 LRM defines standard UPF packages for Verilog, SystemVerilog, and VHDL testbenches, so the appropriate package can be imported to manipulate the supply pads of the design under verification. The following are syntax examples of the UPF package import for different HDL variants.
Example UPF package setup for a Verilog or SystemVerilog testbench

import UPF::*;
module testbench;
...
endmodule

Note: UPF packages can be imported within or outside of the module-endmodule declaration.

Example UPF package setup for a VHDL testbench

library ieee;
use ieee.UPF.all;

entity dut is
...
end entity;

architecture arch of dut is

begin
...
end arch;

The “import UPF::*” package and the “use ieee.UPF.all;” library clause embed the functions used to drive the design supply pads directly from the testbench. Once these packages are referenced in the testbench, the simulator automatically locates them in its installation tree and makes their built-in functions available in the simulation environment. The following examples show these functions, namely supply_on and supply_off, with their arguments.

Example functions for Verilog and SystemVerilog testbenches to drive supply pads

supply_on( string pad_name, real value = 1.0, string file_info = "");

supply_off( string pad_name, string file_info = "" );

Note: Questa Power Aware Simulator (PA-SIM) users do not have to deal with the third argument, string file_info = “” – Questa takes care of it automatically.

Example functions for a VHDL testbench driving supply pads

function supply_on ( pad_name : IN string ; value : IN real ) return boolean;

function supply_off ( pad_name : IN string ) return boolean;

Regardless of the language used, pad_name must be a string constant naming a valid top-level UPF supply port, passed along with a non-zero real value to denote power “On” or an omitted value argument to denote power “Off”. Questa PA-SIM obtains the top module design name from the UPF set_scope command shown below.

Now that the basic package binding and initial wiring is set up, how do you actually control the design supply pads from a testbench? This is where the aforementioned UPF connect_supply_net or connect_supply_set and set_scope commands come in, as per the following code examples.

Example UPF with connect_supply_net for utilizing supply pads from the testbench

set_scope cpu_top
create_power_domain PD_top
......

# IMPLEMENTATION UPF Snippet
# Create top level power domain supply ports
create_supply_port VDD_A -domain PD_top
create_supply_port VDD_B -domain PD_top
create_supply_port VSS -domain PD_top

# Create supply nets
create_supply_net VDD_A -domain PD_top
create_supply_net VDD_B -domain PD_top
create_supply_net VSS -domain PD_top

# Connect top level power domain supply ports to supply nets
connect_supply_net VDD_A -ports VDD_A
connect_supply_net VDD_B -ports VDD_B
connect_supply_net VSS -ports VSS

Next, the supply ports specified by the UPF connect_supply_net commands – VDD_A, VDD_B, VSS, etc. – can be driven directly from the testbench, as shown in the following code example.

import UPF::*;
module testbench;
...
reg ISO_ctrl;
...
initial begin
#100
ISO_ctrl = 1'b1;
supply_on ("VDD_A", 1.10); // non-zero real value (1.10 volts) signifies Power On
supply_on ("VSS", 0.0);    // UPF LRM specifies ground VSS On at 0.0
...
#200
supply_on ("VDD_B", 1.10);
...
#400
supply_off ("VDD_A");      // omitted value argument indicates Power Off
...
end
endmodule

That’s all there is to it!

As you can glean from the examples, it is pretty easy to model a voltage regulator or a power management unit in the testbench with the supply_on and supply_off functions to mimic a real chip’s power operations. Of course there are many more functions available in these UPF packages, but hopefully this article is enough to get you started.
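For example, a minimal power-management-unit sketch might wrap those calls in a reusable task. Everything below (the pad names, voltages, delays, and the pmu_tb/power_cycle names) is illustrative only, following the supply_on/supply_off calling style shown above:

import UPF::*;
module pmu_tb;

// Hypothetical power-cycle helper built on the UPF package functions.
task automatic power_cycle (string pad, real volts);
  supply_off (pad);        // omitted value argument: power Off
  #100;
  supply_on (pad, volts);  // non-zero value: power On at nominal voltage
endtask

initial begin
  supply_on ("VSS", 0.0);    // ground On, per the UPF LRM
  supply_on ("VDD_A", 1.10);
  #500;
  power_cycle ("VDD_A", 1.10);
end
endmodule

A test sequence can then call power_cycle around any traffic of interest to exercise power-down and recovery behavior.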

Joe Hupcey III
Progyna Khondkar
for the Questa Low Power Design & Verification product team



31 October, 2016

ASIC/IC Language and Library Adoption Trends

This blog is a continuation of a series of blogs related to the 2016 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I focused on various verification technology adoption trends. In this blog I plan to discuss various ASIC/IC language and library adoption trends.

Figure 1 shows the adoption trends for languages used to create RTL designs. Essentially, the adoption rates for all languages used to create RTL designs are projected to be either declining or flat over the next year.


Figure 1. ASIC/IC Languages Used for RTL Design

As previously noted, the reason some of the results sum to more than 100 percent is that some projects are using multiple languages; thus, individual projects can have multiple answers.

Figure 2 shows the adoption trends for languages used to create ASIC/IC testbenches. Essentially, the adoption rates for all languages used to create testbenches are either declining or flat. Furthermore, the data suggest that SystemVerilog adoption is starting to saturate or level off in the mid-70s range.


Figure 2. ASIC/IC Languages Used for Verification (Testbenches)

Figure 3 shows the adoption trends for various ASIC/IC testbench methodologies built using class libraries.


Figure 3. ASIC/IC Methodologies and Testbench Base-Class Libraries

Here we see a decline in adoption of all methodologies and class libraries with the exception of Accellera’s UVM, whose adoption continued to increase between 2014 and 2016. Furthermore, our study revealed that UVM is projected to continue its growth over the next year. However, like SystemVerilog, it will likely start to level off in the mid- to upper-70 percent range.

Figure 4 shows the ASIC/IC industry adoption trends for various assertion languages, and again, SystemVerilog Assertions seems to have saturated or leveled off.


Figure 4. ASIC/IC Assertion Language Adoption

In my next blog (click here) I plan to present the ASIC/IC design and verification power trends.

Quick links to the 2016 Wilson Research Group Study results


8 August, 2016

This is the first in a series of blogs that presents the findings from our new 2016 Wilson Research Group Functional Verification Study. Similar to my previous 2014 Wilson Research Group functional verification study blogs, I plan to begin this set of blogs with an exclusive focus on FPGA trends. Why? For the following reasons:

  1. Some of the more interesting trends in our 2016 study are related to FPGA designs. The 2016 ASIC/IC functional verification trends are overall fairly flat, which is another indication of a mature market.
  2. Unlike the traditional ASIC/IC market, there have historically been very few studies published on FPGA functional verification trends. We started studying the FPGA market segment back in the 2010 study, and we have now collected sufficient data to confidently present industry trends related to this market segment.
  3. Today’s FPGA designs have grown in complexity—and many now resemble complete systems. The task of verifying SoC-class designs is daunting, which has forced many FPGA projects to mature their verification process due to rising complexity. The FPGA-focused data I present in this set of blogs will support this claim.

My plan is to release the ASIC/IC functional verification trends through a set of blogs after I finish presenting the FPGA trends.

Introduction

In 2002 and 2004, Collett International Research, Inc. conducted its well-known ASIC/IC functional verification studies, which provided invaluable insight into the state of the electronic industry and its trends in design and verification at that point in time. However, after the 2004 study, no additional Collett studies were conducted, which left a void in identifying industry trends. To address this dearth of knowledge, five functional verification focused studies were commissioned by Mentor Graphics in 2007, 2010, 2012, 2014, and 2016. These were world-wide, double-blind, functional verification studies, covering all electronic industry market segments. To our knowledge, the 2014 and 2016 studies are two of the largest functional verification studies ever conducted. This set of blogs presents the findings from our 2016 study and provides invaluable insight into the state of the electronic industry today in terms of both design and verification trends.

Study Background

Our study was modeled after the original 2002 and 2004 Collett International Research, Inc. studies. In other words, we endeavored to preserve the original wording of the Collett questions whenever possible to facilitate trend analysis. To ensure anonymity, we commissioned Wilson Research Group to execute our study. The purpose of preserving anonymity was to prevent biasing the participants’ responses. Furthermore, to ensure that our study would be executed as a double-blind study, the compilation and analysis of the results did not take into account the identity of the participants.

For the purpose of our study we used a multiple sampling frame approach that was constructed from eight independent lists that we acquired. This enabled us to cover all regions of the world—as well as cover all relevant electronic industry market segments. It is important to note that we decided not to include our own account team’s customer list in the sampling frame. This was done in a deliberate attempt to prevent biasing the final results. My next blog in this series will discuss other potential bias concerns when conducting a large industry study and describe what we did to address these concerns.

After data cleaning to remove inconsistent or random responses (e.g., someone who only answered “a” on all questions), the final sample size consisted of 1703 eligible participants (i.e., n=1703). This was approximately 90% the size of our 2014 study (i.e., 2014 n=1886). To put this figure in perspective, the famous 2004 Ron Collett International study sample size consisted of 201 eligible participants.

Unlike the 2002 and 2004 Collett IC/ASIC functional verification studies, which focused only on the ASIC/IC market segment, our studies were expanded in 2010 to include the FPGA market segment. We have partitioned the analysis of these two different market segments separately, to provide a clear focus on each. One other difference between our studies and the Collett studies is that our study covered all regions of the world, while the original Collett studies were conducted only in North America (US and Canada). We have the ability to compile the results both globally and regionally, but for the purpose of this set of blogs I am presenting only the globally compiled results.

Confidence Interval

All surveys are subject to sampling errors. To quantify this error in probabilistic terms, we calculate a confidence interval. For example, we determined the “overall” margin of error for our study to be ±2.36% at a 95% confidence interval. In other words, this confidence interval tells us that if we were to take repeated samples of size n=1703 from a population, 95% of the samples would fall inside our margin of error ±2.36%, and only 5% of the samples would fall outside. Obviously, response rate per individual question will impact the margin of error. However, all data presented in this blog has a margin of error of less than ±5%, unless otherwise noted.
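For readers who want to sanity-check that figure, it follows from the standard margin-of-error formula for a proportion, using the worst-case assumption p = 0.5 at the 95% confidence level (z = 1.96); the small difference from the stated ±2.36% is down to rounding:

\[ \text{MoE} = z\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \times 0.5}{1703}} \approx \pm 2.37\% \]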

Study Participants

This section provides background on the makeup of the study.

Figure 1 shows the percentage of overall study FPGA and ASIC/IC participants by market segment. It is important to note that this figure does not represent silicon volume by market segment.


Figure 1: FPGA and ASIC/IC study participants by market segment

Figure 2 shows the percentage of overall study eligible FPGA and ASIC/IC participants by their job description. An example of an eligible participant would be a self-identified design or verification engineer, or engineering manager, who is actively working within the electronics industry. Overall, design and verification engineers accounted for 60 percent of the study participants.


Figure 2: FPGA and ASIC/IC study participants by job title

Before I start presenting the findings from our 2016 functional verification study, I plan to discuss in my next blog (click here) general bias concerns associated with all survey-based studies—and what we did to minimize these concerns.

Quick links to the 2016 Wilson Research Group Study results


25 July, 2016

As I mentioned in my last UVM post, UVM allows engineers to create modular, reusable, randomized self-checking testbenches. In that post, we covered the “modularity” aspect of UVM by discussing TLM interfaces and their value in isolating a component from others connected to it. This modularity allows a sequence to be connected to any compatible driver as long as both communicate via the same transaction type, or allows multiple coverage collectors to be connected to a monitor via the analysis port. This is incredibly useful because it gives the environment writer the ability to mix and match components from a library into a configuration that will verify the desired DUT.
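As a minimal sketch of that analysis-port fan-out (all of the component and class names here are illustrative, not from any particular library):

class bus_env extends uvm_env;
  bus_monitor  mon;           // hypothetical monitor with analysis port 'ap'
  bus_coverage cov_a, cov_b;  // two hypothetical coverage collectors
  …
  function void connect_phase(uvm_phase phase);
    // The monitor's single analysis port broadcasts every observed
    // transaction to each connected subscriber.
    mon.ap.connect(cov_a.analysis_export);
    mon.ap.connect(cov_b.analysis_export);
  endfunction
endclass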

Of course, it is often the case that you may want to have multiple environments that are very similar, but may only differ in small ways from each other. Every environment has a specific purpose, and also shares most of its infrastructure with the others. How can we share the common parts and only customize the unique portions? This is, of course, one of the principles of object-oriented programming (OOP), but UVM actually takes it a step further.

A naïve OOP coder might be tempted to instantiate components in an environment directly, such as:
class my_env extends uvm_env;
  function void build_phase(…);
    my_driver drv = new("drv", this); //extended from uvm_driver
    …
  endfunction
  …
endclass

The environment would be similarly instantiated in a test:
class my_test extends uvm_test;
  function void build_phase(…);
    my_env env = new("env", this); //extended from uvm_env
  endfunction
  …
endclass

Once you get your environment running with good transactions, it’s often useful to modify things to find out what happens when you inject errors into the transaction stream. To do this, we’ll create a new driver, called my_err_driver (extended from my_driver) and instantiate it in an environment. OOP would let us do this by extending the my_env class and overriding the build_phase() method like this:
class my_env2 extends my_env;
  function void build_phase(…);
    my_err_driver drv = new("drv", this);
    …
  endfunction
  …
endclass

Thus, the only thing different is the type of the driver. Because my_err_driver is extended from my_driver, it would have the same interfaces and all the connections between the driver and the sequencer would be the same, so we don’t have to duplicate that other code. Similarly, we could extend my_test to use the new environment:
class my_test2 extends my_test;
  function void build_phase(…);
    my_env2 env = new("env", this); //extended from my_env
    …
  endfunction
  …
endclass

So, we’ve gone from one test, one env and one driver, to two tests, two envs and two drivers (and a whole slew of new build_phase() methods), when all we really needed was the extra driver. Wouldn’t it be nice if there were a way that we could tell the environment to instantiate the my_err_driver without having to create a new extension to the environment? We use something called the factory pattern to do this in UVM.

The factory is a special class in UVM that creates an instance for you of whatever uvm_object or uvm_component type you specify. There are two important parts to using the factory. The first is registering a component with the factory, so the factory knows how to create an instance of it. This is done using the appropriate utils macro (which is just about the only macro I like using in UVM):
class my_driver extends uvm_driver;
  `uvm_component_utils(my_driver) // notice no ';'
  …
endclass

The uvm_component_utils macro sets up the factory so that it can create an instance of the type specified in its argument. It is critical that you include this macro in all UVM classes you create that are extended (even indirectly) from the uvm_component base class. Similarly, you should use the uvm_object_utils macro to register all uvm_object extensions (such as uvm_sequence, uvm_sequence_item, etc.).
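For instance, a hypothetical sequence item would register with the object variant of the macro (note that objects, unlike components, have no parent argument in their constructor):

class my_item extends uvm_sequence_item;
  rand bit [7:0] data;
  `uvm_object_utils(my_item) // object variant of the registration macro
  function new(string name = "my_item");
    super.new(name);
  endfunction
endclass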

Now, instead of instantiating the my_driver directly, we instead use the factory’s create() method to create the instance:
class my_env extends uvm_env;
  `uvm_component_utils(my_env)
  my_driver drv; // driver handle; the factory fills it in via create()
  function void build_phase(uvm_phase phase);
    drv = my_driver::type_id::create("drv", this);
    …
  endfunction
  …
endclass

The “::type_id::create()” incantation is the standard UVM idiom for invoking the factory’s static create() method. You don’t really need to know what it does, only that it returns an instance of the my_driver type, which is then assigned to the drv handle in the environment. Given this flexibility, we can now use the test to tell the environment which driver to use without having to modify the my_env code.

Instead of extending my_test to instantiate a different environment, we can instead use an extension of my_test to tell the factory to override the type of object that gets instantiated for my_driver in the environment:
class my_factory_test extends my_test;
  `uvm_component_utils(my_factory_test)
  function void build_phase(…);
    my_driver::type_id::set_type_override(my_err_driver::get_type());
    super.build_phase(…);
  endfunction
endclass
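Incidentally, the factory can also scope an override to a single instance rather than to every instance of a type. A sketch under assumed names (the hierarchical path "env.agent0.drv" is hypothetical) might look like:

class my_inst_test extends my_test;
  `uvm_component_utils(my_inst_test)
  function void build_phase(uvm_phase phase);
    // Replace only the driver at this path; any other my_driver
    // instances in the environment keep their default type.
    my_driver::type_id::set_inst_override(my_err_driver::get_type(),
                                          "env.agent0.drv", this);
    super.build_phase(phase);
  endfunction
endclass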

This paradigm allows us to set up a base test that instantiates the basic environment with default components, and then extend the base test to create a new test that simply consists of factory overrides and perhaps a few other things (for future blog posts) to make interesting things happen. For example, in addition to overriding the driver type, the test may also choose a new stimulus sequence to execute, or swap in a new coverage collector. The point is that the factory gives us the hook to make these changes without changing the env code because the connections between components remain the same due to the use of the TLM interfaces. You get to add new behavior to an existing testbench without changing code that works just fine. It all fits together…


3 June, 2015

FPGA Language and Library Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I focused on FPGA verification techniques and technologies adoption trends, as identified by the 2014 Wilson Research Group study. In this blog, I’ll present FPGA design and verification language trends, as identified by the Wilson Research Group study.

You might note that the percentage for some of the language and library data that I present sums to more than one hundred percent. The reason for this is that many FPGA projects today use multiple languages.

FPGA RTL Design Language Adoption Trends

Let’s begin by examining the languages used for FPGA RTL design. Figure 1 shows the trends in terms of languages used for design, by comparing the 2012 Wilson Research Group study (in dark blue), the 2014 Wilson Research Group study (in light blue), as well as the projected design language adoption trends within the next twelve months (in purple). Note that the language adoption is declining for most of the languages used for FPGA design with the exception of Verilog and SystemVerilog.

Also, it’s important to note that this study focused on languages used for RTL design. We have conducted a few informal studies related to languages used for architectural modeling—and it’s not too big of a surprise that we see increased adoption of C/C++ and SystemC in that space. However, since those studies have (thus far) been informal and not as rigorously executed as the Wilson Research Group study, I have decided to withhold that data until a more formal study can be executed related to architectural modeling and virtual prototyping.

Figure 1. Trends in languages used for FPGA design

It’s not too big of a surprise that VHDL is the predominant language used for FPGA RTL design, although the projected trend is that Verilog will likely overtake VHDL as the predominant language for FPGA design in the near future.

FPGA Verification Language Adoption Trends

Next, let’s look at the languages used to verify FPGA designs (that is, languages used to create simulation testbenches). Figure 2 shows the trends in terms of languages used to create simulation testbenches by comparing the 2012 Wilson Research Group study (in dark blue), the 2014 Wilson Research Group study (in light blue), as well as the projected verification language adoption trends within the next twelve months (in purple).

Figure 2. Trends in languages used in verification to create FPGA simulation testbenches

FPGA Testbench Methodology Class Library Adoption Trends

Now let’s look at testbench methodology and class library adoption for FPGA designs. Figure 3 shows the trends in terms of methodology and class library adoption by comparing the 2012 Wilson Research Group study (in dark blue), the 2014 Wilson Research Group study (in light blue), as well as the projected verification language adoption trends within the next twelve months (in purple).

Figure 3. FPGA methodology and class library adoption trends

Today, we see a downward trend in terms of adoption of all testbench methodologies and class libraries with the exception of UVM, which has increased by 28 percent since 2012. The study participants were also asked what they plan to use within the next 12 months, and based on the responses, UVM is projected to increase an additional 20 percent.

FPGA Assertion Language and Library Adoption Trends

Finally, let’s examine assertion language and library adoption for FPGA designs. The 2014 Wilson Research Group study found that 44 percent of all the FPGA projects have adopted assertion-based verification (ABV) as part of their verification strategy. The data presented in this section shows the assertion language and library adoption trends related to those participants who have adopted ABV.

Figure 4 shows the trends in terms of assertion language and library adoption by comparing the 2010 Wilson Research Group study (in dark blue), the 2012 Wilson Research Group study (in green), and the projected adoption trends within the next 12 months (in purple). The adoption of SVA continues to increase, while other assertion languages and libraries either remain flat or decline.

Figure 4. Trends in assertion language and library adoption for FPGA designs

In my next blog (click here), I will shift the focus of this series of blogs and start to present the ASIC/IC findings from the 2014 Wilson Research Group Functional Verification Study.

Quick links to the 2014 Wilson Research Group Study results


21 May, 2015

Still having fun doing UVM and Class based debug?

Maybe a debug contest will help. I had a contest with a user not too long ago.

We’ll call him Bob. Bob debugged his UVM testbench using that favorite technique – “logfile” debug. He spent a lot of time inserting, moving and removing $display statements, all while re-running simulation over and over. He’d generate a logfile, analyze it (read: run or tune up a Perl script) and cross-check with waveforms and source code. He’d get close, only to realize he needed more $display. More simulation.

Hopefully, just one more $display in one more file… More simulation… Repeat…

He wanted to know if there was a better way to climb around through his UVM Testbench.

<UVM Agent Schematic – One way to climb around>

A Better Way to debug Bob’s Testbench – The Contest!

Bob challenged me to a contest. He wanted to know how fast we could find the bug using Mentor’s Visualizer™ Debug Environment and a post simulation database. One simulation. One shot.

We ran Bob’s simulation, capturing the waveform database. Generating that database required two extra switches:

vopt -debug +designfile …
vsim -qwavedb=+signal+class …

Then we ran the debugger in post-simulation mode:

visualizer +designfile +wavefile

When Bob first saw his RTL signals AND his UVM Class based testbench in the waveform window together, he got a big smile – literally getting up out of his seat.

Our contest was this. Who would be fastest to find the bug? Logfile debug or class based debug? This contest was all hindsight. Bob had already figured out what the bug was and had fixed it before we ever met. In the end the bug was a simple coding mistake in the way that a SystemVerilog queue was being used. Just a simple coding error. But I’m getting ahead of myself.

Digging through the testbench

We set up a remote link so that we both could see the post-simulation debug session. Bob provided clues about the design and I drove the debugger.

We chased class handles around his design, from driver, across to monitor and into the scoreboard where the problem existed. There was a failure where a transaction contained N sub-transactions. The last two sub-transactions had errors, but only for a certain kind of transaction. And the error happened very late in an otherwise fully functional simulation.

<A UVM Driver transaction with derived classes and the parent sequence>

We started at the driver that was driving that transaction. We looked at the sequence_item that the driver received. But we had no idea from looking at the driver source code, what the ACTUAL type of the sequence_item was. Some derived type sequence_item was coming through. We also had no idea which sequence was generating this transaction. There were many sequences running on this driver.

<UVM Class Inheritance Diagram>

We used the waveform and the class inheritance diagram to figure out which class we needed to look at. Really easy. Just put the driver in the waveform and expand the transaction ‘t’ to see the derived type and the parent sequence.

<The transaction ‘t’ contains a class of type “sequence_item_A_fa”>

<The parent sequence is of type “sequence_A”>

Tic – Toc

In about 60 minutes we were at the point of the bug. Bob had spent about 2 weeks getting to this point using his logfiles. Winner!

In our 60 minutes, we saw that a derived class was coming into the driver. We traced the inheritance of that class to find a base class which implemented the SystemVerilog queue processing. That code was the place where the error was. After inspecting the loop control we found and fixed the error.
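Bob’s actual code isn’t shown in this post, but as a purely hypothetical illustration of the genre, a classic SystemVerilog queue loop-control bug looks like this:

module queue_bug;
  int q[$];
  initial begin
    q = '{1, 2, 2, 3};
    // BUG: delete(i) shifts later elements left, so the element that
    // slides into slot i is never examined when i increments.
    for (int i = 0; i < q.size(); i++)
      if (q[i] % 2 == 0)
        q.delete(i);
    // q is now '{1, 2, 3} -- the adjacent even element escaped deletion.

    // One fix: walk the queue backwards so deletions cannot disturb
    // indices that have not been visited yet.
    q = '{1, 2, 2, 3};
    for (int i = q.size() - 1; i >= 0; i--)
      if (q[i] % 2 == 0)
        q.delete(i);
    // q is now '{1, 3}
  end
endmodule

Errors like this surface only when matching elements happen to be adjacent, which is exactly the kind of bug that shows up very late in an otherwise healthy simulation.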

Coffee break time.

Still having fun.

Testbenches are complex pieces of software. Logfiles are very important debug tools, but with debug tools like Visualizer, post simulation testbench debug can be more than just examining predetermined print statements. You can actually explore the UVM data structure and class based testbench. And you won’t need weeks to do it.

Come to the Verification Academy Booth at DAC in San Francisco June 8, 9 and 10 to hear more about UVM Debug and talk in person about your UVM Debug problems. See you then!


26 February, 2015

A colleague recently asked me: Has anything changed? Do design teams tape-out nowadays without GLS (Gate-Level Simulation)? And if so, does their silicon actually work?

In his day (and mine), teams prepared in 3 phases: hierarchical gate-level netlist to weed out X-propagation issues, then full chip-level gate simulation (unit delay) to come out of reset and exercise all I/Os, and weed out any other X’s, then finally a run with SDF back-annotation on the clocktree-inserted final netlist.


After much discussion about the actual value of GLS and the desirability of eliminating the pain of having to do it from our design flows, my firm conclusion:

Yes! Gate-level simulation is still required, at subsystem level and full chip level.

Its usage has been minimized over the years – firstly by adding LEC (Logical Equivalence Checking) and STA (Static Timing Analysis) to the RTL-to-GDSII design flow in the 90s, and secondly by employing static analysis of common failure modes that were traditionally caught during GLS – x-prop, clock-domain-crossing errors, power management errors, ATPG and BIST functionality, using tools like Questa® AutoCheck, in the last decade.

So there should not be any setup/hold or CDC issues remaining by this stage.  However, there are a number of reasons why I would always retain GLS:

  1. Financial prudence.  You tape out to foundry at your own risk, and GLS is the closest representation you can get to the printed design that you can do final due diligence on before you write that check.  Are you willing to risk millions by not doing GLS?
  2. It is the last resort to find any packaging issues that may be masked by use of inaccurate behavioral models higher up the flow, or erroneous STA due to bad false path or multi-cycle path definitions.  Also, simple packaging errors due to inverted enable signals can remain undetected by bad models.
  3. It ensures that the actual bring-up sequence of your first silicon will work when it hits the production tester after fabrication.  Teams have found bugs that would have completely bricked the device during the sequence of first power-up, scan test, and then blowing the configuration and security fuses on the tester, had they not run a final, accurate bring-up test with all Design-For-Verification modes turned off.
  4. In block-level verification, maybe you are doing a datapath compilation flow for your DSP core which flips pipeline stages around, so normal LEC tools are challenged.  How can you be sure?
  5. The final stages of processing can cause unexpected transformations of your design that may or may not be caught by LEC and STA, e.g. during scan chain insertion, or clocktree insertion, or power island retention/translation cell insertion.  You should not have any new setup/hold problems if the extraction and STA do their jobs, but what if there are gross errors affecting clock enables, or tool errors, or data processing errors?  First silicon with stuck clocks is no fun.  Again, why take the risk?  Just one simulation, of the bare metal design, coming up from power-on, wiggling all pads at least once, exercising all test modes at least once, is all that is required.
  6. When you have design deltas done at the physical netlist level: e.g. last minute ECOs (Engineering Change Orders), metal layer fixes, spare gate hookup, you can’t go back to an RTL representation to validate those.  Gates are all you have.
  7. You may need to simulate the production test vectors and burn-in test vectors for your first silicon, across process corners.  Your foundry may insist on this.
  8. Finally, you need to sleep at night while your chip is in the fab!

There are still misconceptions:

  • There is no need to repeat lots of RTL regression tests at the gate level.  Don’t do that.  It takes an age to run those tests, so identify the tiny percentage of your regression suite that needs to rerun on GLS, and make it count.
  • Don’t wait until tapeout week before doing GLS – prepare for it very early in your flow by doing the 3 preparation steps mentioned above as soon as practical, so that all X-pessimism issues are sorted out well before crunch time.
  • The biggest misconception of all: “designs today are too big to simulate.”  Avoid that kind of scaremongering.  Buy a faster computer with more memory.  Spend the right amount of money to offset the risk you are about to undertake when you print a 20nm mask set.

Yes, it is possible to tape out silicon that works without GLS.  But no, you should not consider taking that risk.  And no, there is no justification for viewing GLS as “old school” and just hoping it will go away.

Now, the above is just one opinion, and reflects recent design/verification work I have done with major semiconductor companies.  I anticipate that large designs will be harder and harder to simulate and that we may need to find solutions for gate-level signoff using an emulator.  I also found some interesting recent papers, resources, and opinions; I don’t necessarily agree with all the content, but it makes for interesting reading.

I’d be interested to know what your company does differently nowadays.  Do you sleep at night?

If you are attending DVCon next week, check out some of Mentor’s many presentations and events as described by Harry Foster, and please come and find me in the Mentor Graphics booth (801), I would be happy to hear about your challenges in Design/Verification/UVM and especially Debug.
Thanks for reading,
Gordon


24 November, 2014

SystemVerilog Testbench Debug – Are we having fun yet?

Fun

Debug should be fun. Watching waveforms march by, seeing ERRORS and WARNINGS pop out in a transcript file, tracing drivers back to their source, understanding race conditions between simulators and between source code changes – and my favorite – debugging random stability issues. Fun.

Old School – logfiles and interactive

Or at least it should be fun. It used to be fun. I’d setup my collection of scripts to run tests and examine logfiles. Push the button and go for coffee or go home. The next day I’d examine log files and figure out what happened. Usually I’d have to jump into interactive simulation and debug on the fly. Set some breakpoints and watch what happened. That was then. My tests and RTL were all Verilog. Life was good. I was in control of what was going on, and could get my head around it.

New School – logfiles, interactive and class handles

Fast-forward to today. Still have scripts to run tests. Still have log files. Still push the button and get coffee or go home. Still jump into interactive simulation. Still set breakpoints. But now my tests are SystemVerilog class-based – usually UVM. My tests are C code. My tests are constrained random tests. Debug just got harder. I can’t fit the whole testbench + RTL into my head at once. I need help.

Debugging your class based testbench

I prefer to do as much debug as possible in “post-sim” mode. I want to run simulation and capture as much as possible. Then debug my wavefile and source code. What to do about my SystemVerilog class based testbench? Easy. Capture my classes in the wave database. Show them to me in the wave window.

<UVM Testbench class hierarchy window and those same classes in the wave window>

Wave Window

But that’s not possible. Is it? What IS possible?

What? Objects in the wave database? Yes. Objects and their members in the wave database.

Examine the values of class member variables in post-sim mode. Use the waveform window for classes and class member variables just like signals.

What about the handles that are in my classes? Can I chase them to other objects? Yes. Follow class handle “pointers” to other objects – essentially exploring the OBJECT SPACE that existed at THAT time during simulation. But I’m in post sim!

Can I see all the sequence items that hit my driver? Yes. How? Just put the driver “handle” into the wave window and “open” it. You can see the virtual interface handle (if you have one). You can see the transactions that went through the driver (the driver did a ‘get_next_item (t)’ 100,000 times!).

<Transaction handle ‘t’ from the driver in the wave window, with the driver’s virtual interface>

Driver and ‘t’ in Wave Window

In the wave window? Yes. All 100,000 of them? Yes.

Now I’m having fun again. That’s great. I can see what’s going on inside my objects. In post-sim mode.

What’s NOT possible?

Will it babysit? No. One thing at a time.

Are you having fun yet?

Find more details in the Verification Horizons article “Old School vs. New School – Visualizer” and on Verification Academy: “Verification and Debug: Old School Meets New School”.

You can find all the sessions on New School verification techniques via the following link:

https://verificationacademy.com/seminars/academy-live


30 October, 2013

MENTOR GRAPHICS AT ARM TECHCON

This week ARM® TechCon® 2013 is being held at the Santa Clara Convention Center from Tuesday, October 29 through Thursday, October 31, but don’t worry, there’s nothing to be scared about.  The theme is “Where Intelligence Counts”, and in fact as a platinum sponsor of the event, Mentor Graphics is excited to present no fewer than ten technical and training sessions about using intelligent technology to design and verify ARM-based designs.

My personal favorite is scheduled for Halloween Day at 1:30pm, where I’ll tell you about a trick that Altera used to shave several months off their schedule, while verifying the functionality and performance of an ARM AXI™ fabric interconnect subsystem.  And the real treat is that they achieved first silicon success as well.  In keeping with the event’s theme, they used something called “intelligent” testbench automation.

And whether you’re designing multi-core designs with AXI fabrics, wireless designs with AMBA® 4 ACE™ extensions, or even enterprise computing systems with ARM’s latest AMBA® 5 CHI™ architecture, these sessions show you how to take advantage of the very latest simulation and formal technology to verify SoC connectivity, ensure correct interconnect functional operation, and even analyze on-chip network performance.

On Tuesday at 10:30am, Gordon Allan described how an intelligent performance analysis solution can leverage the power of an SQL database to analyze and verify interconnect performance in ways that traditional verification techniques cannot.  He showed a wide range of dynamic visual representations produced by SoC regressions that can be quickly and easily manipulated by engineers to verify performance to avoid expensive overdesign.

Right after Gordon’s session, Ping Yeung discussed using intelligent formal verification to automate SoC connectivity, overcoming observability and controllability challenges faced by simulation-only solutions.  Formal verification can examine all possible scenarios exhaustively, verifying on-chip bus connectivity, pin multiplexing of constrained interfaces, connectivity of clock and reset signals, as well as power control and scan test signal connectivity.

On Wednesday, Mark Peryer shows how to verify AMBA interconnect performance using intelligent database analysis and intelligent testbench automation for traffic scenario generation.  These techniques enable automatic testbench instrumentation for configurable ARM-based interconnect subsystems, as well as highly-efficient dense, medium, sparse, and varied bus traffic generation that covers even the most difficult to achieve corner-case conditions.

And finally also on Halloween, Andy Meyer offers an intelligent workshop for those that are designing high performance systems with hierarchical and distributed caches, using either ARM’s AMBA 5 CHI architecture or ARM’s AMBA 4 ACE architecture.  He’ll cover topics including how caching works, how to improve caching performance, and how to verify cache coherency.

For more information about these sessions, be sure to visit the ARM TechCon program website.  Or if you miss any of them, and would like to learn about how this intelligent technology can help you verify your ARM designs, don’t be afraid to email me at mark_olen@mentor.com.   Happy Halloween!

