Posts Tagged ‘Verification Academy’

16 November, 2015

Thus far we have talked about the importance of having a VIP that is easy to connect to the DUT (part 1) and about the flexibility to configure the VIP to your requirements and use its built-in or pre-packaged sequences (part 2). In this final part of the series, we will talk about the built-in features of a VIP that help with debug.

If you have a UVM-based testbench with one or more VIPs, the testbench can be more complex than the DUT itself, and debugging such an environment can be a major challenge. Debugging UVM VIP-based environments can be thought of as having three layers:

  1. UVM’s built-in debug mechanism
  2. Simulator with Class Based Debug
  3. VIP with built-in debug features

These are some of the features that Mentor’s VIP provides as built-in debug mechanisms:

VIP Query Commands:

These commands let you query the state of the VIP in the testbench at any given time, in batch or CLI mode, and print a summary of the VIP or its current configuration to the simulation log.
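As a sketch of how such queries are typically invoked from the testbench (the agent handle and the print_* method names below are illustrative placeholders, not the actual Mentor VIP API):

// Hypothetical sketch: dump VIP state on demand.
// "my_vip_agent" and the print_* methods are placeholders.
task automatic dump_vip_state(my_vip_agent vip_agent);
  vip_agent.print_summary();  // one-shot summary of the VIP's current state
  vip_agent.print_config();   // currently active configuration values
endtask

The same queries can usually be issued interactively at the simulator prompt mid-simulation, which is where they earn their keep during debug.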


For example, in PCI Express the VIP can output BAR (Base Address Register) information to show the addressing set up in the system, as well as the PCIe configuration space showing the device capabilities.


Error Messaging & Reporting:

Error messaging is very important, as it is the first thing the user checks while debugging. The error messages are encoded to differentiate between methodology errors, protocol errors, and infrastructure errors. The VIP also provides the flexibility to customize the report message and the logging mechanism.



While the VIP is running, a built-in set of assertions checks for protocol violations to verify compliance with the specification. When these fire, they produce a message that can be printed to the transcript or piped through the UVM reporting mechanism. The text of the message includes the interface instance name, a description of what went wrong, and a reference back to the specification to help when looking up further details. Each assertion error is configurable: it can be enabled or disabled and have its severity level changed.
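For example, using the standard UVM report controls (a minimal sketch: “env.vip” and the message ID “PCIE_PROTO_ERR” are hypothetical placeholders for the VIP instance handle and an assertion message ID, not documented Mentor identifiers):

class my_test extends uvm_test;
  `uvm_component_utils(my_test)
  my_env env;  // hypothetical environment containing the VIP instance "vip"
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
  function void start_of_simulation_phase(uvm_phase phase);
    // Demote one protocol assertion message to a warning for this run...
    env.vip.set_report_severity_id_override(UVM_ERROR, "PCIE_PROTO_ERR", UVM_WARNING);
    // ...or suppress its reporting entirely.
    env.vip.set_report_id_action("PCIE_PROTO_ERR", UVM_NO_ACTION);
  endfunction
endclass

Both set_report_severity_id_override() and set_report_id_action() are standard uvm_report_object methods, so this pattern works with any component that routes its assertion messages through UVM reporting.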

Protocol Debug:

Another important aspect of the VIP is to help with protocol debug.  Mentor VIP is transaction based, and those transactions are available for creating stimulus as well as for analysis, where they can be used in the testbench for scoreboarding and coverage collection.

Transaction Logging:

Transactions can also be logged to a text file, printed out via the standard UVM print mechanism, or output to a tracker file by a provided analysis component. The following sample from a transaction log file shows the attribute information that is printed, along with its format:
   AXI Clk Cycle = 10.00 ns; AXI Clk Frequency = 100.00 MHz; Data bus width = 32 bits


Here, per-transaction data is printed to the log file, including whether the transaction is a read or a write, the transaction ID, the starting address, the accept time of the address phase, the data of each beat, the write strobes, the data-burst accept time, the transaction response, the response-phase accept time, and the length, burst type, and burst size of the transaction.
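Mentor’s VIP supplies its own analysis components for this, but the general pattern is simple to picture. A minimal sketch, assuming a hypothetical axi_transaction class (this is not the actual Mentor logger):

class axi_trans_logger extends uvm_subscriber #(axi_transaction);
  `uvm_component_utils(axi_trans_logger)
  int fd;
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
  function void start_of_simulation_phase(uvm_phase phase);
    fd = $fopen("axi_trans.log", "w");
  endfunction
  // Called for every transaction broadcast on the analysis port this
  // subscriber is connected to; sprint() uses the standard UVM printer.
  function void write(axi_transaction t);
    $fdisplay(fd, "%s", t.sprint());
  endfunction
  function void final_phase(uvm_phase phase);
    if (fd) $fclose(fd);
  endfunction
endclass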

The VIP can also output text-based log or tracker files. This can typically be done both at the protocol level and, for protocols such as PCI Express, at the symbol level to help debug link training or other state machines. In such a tracker we can see the symbols for an OS PLP transmission, a TLP transmission, and a DLLP transmission on the bus.


Transaction Linking:


Just logging transactions isn’t sufficient when debugging a cache coherent interconnect (CCI). An originating master request transaction results in snoops to other cached masters and a slave access as needed. When debugging system-level stimulus, it becomes difficult to identify which snoop transactions are related to a specific originating request. A Cache Coherency Interconnect Monitor (CCIM) helps overcome this debugging issue by providing a transaction-linking component that connects to all the interfaces around a CCI. The CCIM provides a top-level parent sequence item that links to all related child sequence items, such as the originating request, the snoops to cached masters, and the slave access.
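The linking idea itself is straightforward. A hypothetical sketch of the shape of such a parent item (these are not the actual CCIM classes):

class cci_linked_txn extends uvm_sequence_item;
  `uvm_object_utils(cci_linked_txn)
  uvm_sequence_item originating_request;  // the master request that started it all
  uvm_sequence_item snoops[$];            // snoops issued to the other cached masters
  uvm_sequence_item slave_access;         // the downstream slave access, if any
  function new(string name = "cci_linked_txn");
    super.new(name);
  endfunction
endclass

Given one parent handle, a scoreboard or debugger can walk from the originating request to every snoop and slave access it caused, instead of fishing for them in separate logs.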

Protocol Stack Debug:


Along with the transactions, the VIP also records relationship information: relations to other transactions through the protocol stack, and relations to the signals themselves. This allows you to quickly move from transaction-level to signal-level debug, and to highlight not just which signals participated in any given transaction, but also the time at which they did.

I hope this series has provided you with a few insights into what makes a VIP easy to instantiate, connect, configure, and start driving stimulus with. I would really like to hear about your VIP usage experiences.


8 June, 2015

Do you have a really tough verification problem – one that takes seemingly forever for a testbench simulation to solve – and are left wondering whether an automated formal application would be better suited for the task?

Are you curious about formal or clock-domain crossing verification, but are overwhelmed by all the results you get from a Google search?

Are you worried that adding in low power circuitry with a UPF file will completely mess up your CDC analysis?

Good news: inspired by the success of the UVM courses on the Verification Academy website, the Questa Formal and CDC team has created all new courses on a whole variety of formal and CDC-related subjects that address these questions and more.  New topics that are covered include:

* What’s a formal app, and what are the benefits of the approach?

* Reviews of automated formal apps for bug hunting, exhaustive connectivity checking and register verification, X-state analysis, and more

* New topics in CDC verification, such as the need for reconvergence analysis, and power-aware CDC verification

* How to get started with direct property checking, including test planning for formal, SVA coding tricks that get the most out of the formal analysis engines AND ensure reuse with simulation and emulation, how to set up the analysis to rapidly reach a solution, and how to measure formal coverage and estimate whether you have enough assertions

The best part: all of this content is available NOW on the Verification Academy website, and it’s all FREE!


Joe Hupcey III,
on behalf of the Questa Formal and CDC team

P.S. If you’re coming to DAC in San Francisco, be sure to come by the Verification Academy booth (#2408) for live presentations, end-user case studies, and demos on the full range of verification topics – UVM, low power, portable stimulus, formal, CDC, hardware-software co-verification, and more.  Follow this link for all the details and schedule of events (including “Formal & CDC Day” on June 10!).


2 June, 2015

This year we are trying something new at the Verification Academy booth during next week’s 2015 Design Automation Conference.  We’ve decided to host an interactive panel on the controversial topic of Agile development. I say controversial because you typically find two camps of engineers when discussing the subject of Agile development—the believers and the non-believers.

My colleague Neil Johnson, principal consultant from XtremeEDA Corporation and a leading expert in Agile development, will provide some context for the topic with a short background on Agile methods to kick the panel off. Then I plan to join Neil on the panel, which will be moderated by Mentor’s own world-renowned Dennis Brophy.  Our intent is to have a healthy, interactive discussion with both the believers and the non-believers in the audience.

So, why is the subject of Agile development even worthy of discussion at DAC? Well, not to entirely give away my position on the subject…but I think it’s worthwhile to note some of the recent findings related to the root cause of logical and functional flaws from the 2014 Wilson Research Group Functional Verification Study (see figure below).

Clearly, design errors are a major factor contributing to bugs. Yet a growing concern is the number of issues surrounding the specification that are leading to logical and functional flaws.  In reality, there is no such thing as a perfect specification—and few projects can afford to wait to start development until perfection is achieved. Furthermore, in many market segments, late-stage changes in the specification are common practice to ensure that the final product is competitive in a rapidly changing market. Could Agile development, in which requirements and solutions evolve through collaboration between self-organizing, cross-functional teams, be the saving grace?  Please join us on June 8th at 5pm in the Verification Academy booth at DAC and hear what the experts are saying!


21 May, 2015

Still having fun doing UVM and Class based debug?

Maybe a debug contest will help. I had a contest with a user not too long ago.

We’ll call him Bob. Bob debugged his UVM testbench using that favorite technique – “logfile” debug. He spent a lot of time inserting, moving, and removing $display statements, all while re-running simulation over and over. He’d generate a logfile, analyze it (read: run or tune up a Perl script), and cross-check with waveforms and source code. He’d get close, only to realize he needed more $display. More simulation.

Hopefully, just one more $display in one more file… More simulation… Repeat…

He wanted to know if there was a better way to climb around through his UVM Testbench.

<UVM Agent Schematic – One way to climb around>

A Better Way to debug Bob’s Testbench – The Contest!

Bob challenged me to a contest. He wanted to know how fast we could find the bug using Mentor’s Visualizer™ Debug Environment and a post simulation database. One simulation. One shot.

We ran Bob’s simulation, capturing the waveform database. Generating that database required two extra switches:

vopt -debug +designfile …       # write out a design database for post-simulation debug
vsim -qwavedb=+signal+class …   # capture signals and class objects into the waveform database

Then we ran the debugger in post-simulation mode:

visualizer +designfile +wavefile   # open the design and waveform databases post-simulation

When Bob first saw his RTL signals AND his UVM Class based testbench in the waveform window together, he got a big smile – literally getting up out of his seat.

Our contest was this: who would be fastest to find the bug, logfile debug or class-based debug? The contest was all hindsight. Bob had already figured out what the bug was and had fixed it before we ever met. In the end the bug was a simple coding mistake in the way a SystemVerilog queue was being used. But I’m getting ahead of myself.

Digging through the testbench

We set up a remote link so that we both could see the post-simulation debug session. Bob provided clues about the design and I drove the debugger.

We chased class handles around his design, from the driver, across to the monitor, and into the scoreboard where the problem existed. There was a failure in which a transaction contained N sub-transactions. The last two sub-transactions had errors, but only for a certain kind of transaction. And the error happened very late in an otherwise fully functional simulation.

<A UVM Driver transaction with derived classes and the parent sequence>

We started at the driver that was driving that transaction. We looked at the sequence_item that the driver received. But from the driver source code we had no idea what the ACTUAL type of the sequence_item was; some derived-type sequence_item was coming through. We also had no idea which sequence was generating this transaction. There were many sequences running on this driver.
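With logfile debug, answering those questions means editing the driver and re-running. A sketch of the kind of code Bob would have had to add (assuming a standard uvm_driver; “base_seq_item” is a hypothetical base type, while get_type_name() and get_parent_sequence() are standard UVM calls):

task run_phase(uvm_phase phase);
  base_seq_item t;  // base-type handle; the actual object is some derived type
  forever begin
    seq_item_port.get_next_item(t);
    // Report the actual derived type of 't' and the sequence that sent it.
    `uvm_info("DRV", $sformatf("item type=%s from sequence=%s",
                               t.get_type_name(),
                               t.get_parent_sequence().get_full_name()),
              UVM_MEDIUM)
    // ... drive the transaction ...
    seq_item_port.item_done();
  end
endtask

With the post-simulation database, none of that editing or re-simulating was needed.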

<UVM Class Inheritance Diagram>

We used the waveform and the class inheritance diagram to figure out which class we needed to look at. Really easy. Just put the driver in the waveform and expand the transaction ‘t’ to see the derived type and the parent sequence.

<The transaction ‘t’ contains a class of type “sequence_item_A_fa”>

<The parent sequence is of type “sequence_A”>

Tic – Toc

In about 60 minutes we were at the point of the bug. Bob had spent about 2 weeks getting to this point using his logfiles. Winner!

In our 60 minutes, we saw that a derived class was coming into the driver. We traced the inheritance of that class to find a base class which implemented the SystemVerilog queue processing. That code was where the error lived. After inspecting the loop control we found and fixed the error.
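Bob’s actual code isn’t shown in this post, but a classic form of this kind of queue loop-control mistake looks something like this (hypothetical; process() stands in for the real per-element work):

module queue_bug_demo;
  int q[$];
  function void process(int v);
    $display("processing %0d", v);
  endfunction
  initial begin
    q = {1, 2, 3, 4, 5};
    // Buggy: pop_front() shrinks the queue while i counts up, so the
    // loop exits early and the last elements are silently never processed.
    for (int i = 0; i < q.size(); i++)
      process(q.pop_front());
    // Fixed loop control: drain the queue until it is empty.
    q = {1, 2, 3, 4, 5};
    while (q.size() > 0)
      process(q.pop_front());
  end
endmodule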

Coffee break time.

Still having fun.

Testbenches are complex pieces of software. Logfiles are very important debug tools, but with a debugger like Visualizer, post-simulation testbench debug can be more than just examining predetermined print statements. You can actually explore the UVM data structures of a class-based testbench. And you won’t need weeks to do it.

Come to the Verification Academy Booth at DAC in San Francisco June 8, 9 and 10 to hear more about UVM Debug and talk in person about your UVM Debug problems. See you then!


7 May, 2015

For all things verification, you will want to stop by the Verification Academy booth #2408 at DAC to interact with experts exploring the challenges of IC design and verification.  At the top of each hour, the Verification Academy will feature a presentation followed by a lively conversation.  Presentations will not be repeated so each hour will be unique.

We have themed each of the days as well:

  • Monday is “Debug Day”
  • Tuesday is “Standards & FPGA Day”
  • Wednesday is “Formal Verification Day”

Naturally, you will find a few exceptions to those rules when you look at the program in detail.  Please register for Verification Academy sessions here: Monday Registration | Tuesday Registration | Wednesday Registration.  [NOTE: the Verification Academy sessions are highlighted with a blue background when you visit the registration site.]  A concise listing of all the Verification Academy sessions can be found here.

We will feature an end-of-day reception on Monday at the Verification Academy booth after the last presentation.  Neil Johnson (XtremeEDA) and Mentor’s Harry Foster will explore Agile Evolution in SoC Verification in that last session, which begins at 5pm.  Neil is a proponent of this methodology as a means to help build in design quality and simplify the task of verification.  In addition to being an advocate for it, he is also a practitioner of it.  He is an open-source hardware developer and Moderator at  We think the conversation that follows this informative session will be a lively one, and we invite everyone to continue it over cocktails and hors d’oeuvres at 5:30pm.

We are sponsoring other events outside of the Verification Academy as well.  Tuesday is truly “Standards Day” at DAC.  In addition to the standards theme at the Verification Academy booth, you can kick off the day at the Accellera Breakfast and later in the day attend the IEEE DASC, Accellera and Si2 System Level Low Power Workshop.  Here is a partial list of Standards Day activities:


If you have not yet registered for DAC, do so now.  If you do not plan to register for the full technical conference, many conference events are fee-free if you select the “I LOVE DAC” registration option before May 19th!  In fact, all the “Standards Day” events listed above are free with early I LOVE DAC registration. Simply click here and you will be taken to the “I LOVE DAC” page to register.  Register before May 19th, as after that date a $95 minimum fee sets in.

See you at DAC!


1 April, 2015

FPGA Effort Verification Trends (Continued)

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I focused on the controversial topic of effort spent in FPGA verification. This blog continues that discussion. I stated in my previous blog that I don’t believe there is a simple answer to the question, “how much effort was spent on verification in your last FPGA project?” I believe that it is necessary to look at multiple data points to truly get a sense of the real effort involved in verification today. So, let’s look at a few additional findings from the study.

Time FPGA designers spend in verification

For projects that have a separation of teams (i.e., design engineers and verification engineers), it’s important to note that FPGA verification engineers are not the only project members involved in functional verification. FPGA design engineers spend a significant amount of their time in verification too, as shown in Figure 1.


Figure 1. Average (mean) time FPGA design engineers spend in design vs. verification.

You might note that (on average) FPGA design engineers actually spend slightly more time doing verification than design. We are not showing trends here since we have insufficient data on these questions for FPGA designs from our previous study. We anticipate being able to show trends after our next study (currently scheduled for 2016).

Even if the FPGA project has a separation of teams, the designers are still involved in the verification process, with their involvement ranging from:

  • Small sandbox testing to explore various aspects of the implementation
  • Full functional testing of IP blocks and SoC integration
  • Debugging verification problems identified by a separate verification team

In fact, getting a better understanding of exactly where FPGA designers spend their time has led us to conduct a series of follow-on discussions with various FPGA projects from various market segments. Through this process we have learned of a concern among many project managers related to the increasing amount of debugging time spent on a project (both pre-lab and lab debugging time). This is one area of FPGA verification that we plan to continue to explore through a series of in-depth discussions with multiple FPGA projects around the world.

Percentage of time FPGA verification engineers spend on various tasks

Next, let’s look at the mean time FPGA verification engineers spend performing various tasks related to their specific project. You might note that verification engineers spend most of their time in debugging. Ideally, if all the tasks were optimized, then you would expect this. Yet, unfortunately, the time spent in debugging can vary significantly from project to project, which presents scheduling challenges for managers during a project’s verification planning process.


Figure 2. Average (mean) time verification engineers spend on various tasks

In our 2012 study we found that FPGA verification engineers spent about 37% of their time involved in debugging tasks. Between 2012 and 2014 there was a 16 percent increase in the amount of time spent in debugging (that is, from about 37 percent to roughly 43 percent of their time). Hence, the data suggest that debugging effort is increasing for both FPGA design and verification engineers.

In my next blog (click here) I present our study findings in terms of FPGA schedules, iterations in the lab, and classification of functional bugs.

Quick links to the 2014 Wilson Research Group Study results


17 March, 2015


With a name like “Fitzpatrick,” you knew I’d be celebrating today, right?

Well, there’s no better way to celebrate this fine day than to announce that our latest edition of Verification Horizons is available online! Now that Spring is almost here, there’s a bit less snow on the ground than there was when I wrote my introduction, but everything is still covered. I’m considering spray-painting it all green in honor of the occasion, so at least it looks like I have a lawn again.

In this issue of Verification Horizons, I’d particularly like to draw your attention to “Successive Refinement: A Methodology for Incremental Specification of Power Intent,” by my friend and colleague Erich Marschner and several of our friends at ARM® Ltd. In this article, you’ll find out how the Unified Power Format (UPF) specification can be used to specify and verify your power architecture abstractly, and then add implementation information later in the process. This methodology is still relatively new in the industry, so if you’re thinking about making your next design PowerAware, you’ll want to read this article to be up on the very latest approach.

In addition to that, we’ve also got Harry Foster discussing some of the results from his latest industry study in “Does Design Size Influence First Silicon Success?” Harry is also blogging about his survey results on Verification Horizons here and here (with more to come).

Our friends at L&T Technology Services Ltd. share some of their experience in doing PowerAware design in “PowerAware RTL Verification of USB 3.0 IPs,” in which you’ll see how UPF can let you explore two different power management architectures for the same RTL.

Next, History class is in session, with Dr. Lauro Rizzatti, long-time EDA guru, giving us part 1 of a 3-part lesson in “Hardware Emulation: Three Decades of Evolution.”

Our friends at Oracle® are up next with “Evolving the Use of Formal Model Checking in SoC Design Verification,” in which they share a case study of their use of formal methods as the central piece in verifying an SoC design they recently completed with first-pass silicon success. By the way, I’d also like to take this opportunity to congratulate the author of this article, Ram Narayan, for his Best Paper award at DVCon(US) 2015. Well done, Ram!

We round out the issue with our famous “Partners’ Corner” section, which includes two articles. In “Small, Maintainable Tests,” our friends at Sondrel IC Design Services show you a few tricks on how to make use of UVM virtual sequences to raise the level of abstraction of your tests. In “Functional Coverage Development Tips: Do’s and Don’ts,” our friends at eInfochips give you a great overview of functional coverage, especially the covergroup and related features in SystemVerilog.

I’d also like to take a moment to thank all of you who came by our Verification Academy booth at DVCon to say hi. I found it incredibly humbling and gratifying to hear from so many of you who have learned new verification skills from the Verification Academy. That’s a big part of why we do what we do, and I appreciate you letting us know about it.

Now, it’s time to celebrate St. Patrick’s Day for real!


11 March, 2015

FPGA Verification Effort Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I focused on FPGA design trends. In this blog, I present findings from our study related to the effort spent in verification.

Directly asking study participants how much effort they spend in verification will not work. The reason is that it’s hard to find a paper or article on verification that doesn’t start with the phrase: “Seventy percent of a project’s effort is spent in verification…” In other words, the industry is already biased to respond with this effort value. Yet there are really no credible references that quantify this value.

I don’t believe that there is a simple answer to the question, “How much effort was spent on verification in your last project?” In fact, I believe that it is necessary to look at multiple data points derived from multiple questions to truly get a sense of effort spent in verification. And that’s what we did in our functional verification study.

Total FPGA Project Time Spent in Verification

To try to assess the effort spent in verification, let’s begin by looking at one data point, which is the total project time spent in verification. Figure 1 shows the trends in total percentage of FPGA project time spent in verification by comparing the 2012 Wilson Research Group study (in dark blue), and the 2014 Wilson Research Group study (in light blue).

Figure 1. Percentage of FPGA project time spent in verification

Between 2012 and 2014 the industry saw a seven percent increase in the average time an FPGA project spends in verification. Historically, FPGA projects have spent less time in verification than ASIC/IC projects. The FPGA project strategy has traditionally been to get to the lab as soon as possible, and then iterate on issues in the lab. In a future blog I’ll show data indicating that this strategy does not necessarily yield good results in terms of meeting project schedule or quality objectives. Also, this lab-focused approach to FPGA verification becomes less effective as FPGA complexity increases.

Peak Number of Design and Verification Engineers

Perhaps one of the biggest challenges in design and verification today is identifying solutions to increase productivity and control engineering headcount. To illustrate the need for productivity improvement, we discuss the trend in terms of increasing engineering headcount for FPGA projects. Figure 2 shows the mean peak number of design and verification engineers working on an FPGA project. Again, this is an industry average since some projects have many engineers while other projects have few.

Figure 2. Mean peak number of engineers working on an FPGA project

You can see that the compounded annual growth rate (CAGR) for the peak number of FPGA design engineers between 2012 and 2014 was 4.9 percent, while the CAGR for the peak number of FPGA verification engineers was 20.9 percent. This huge demand for verification engineers on FPGA projects is one indicator of growing verification complexity in FPGA designs. Also, note that the ratio of design engineers to verification engineers is approaching 1-to-1. A similar trend occurred on traditional ASIC/IC designs in 2012.
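For reference, CAGR over this two-year span is computed as

CAGR = (V_2014 / V_2012)^(1/2) - 1

so a 20.9 percent CAGR implies V_2014 ≈ 1.209² ≈ 1.46 × V_2012; in other words, the mean peak FPGA verification headcount grew by roughly 46 percent over the two years (the underlying headcounts themselves are not quoted here).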

In my next blog (click here) I focus on the time that FPGA design and verification engineers spend on various tasks.

Quick links to the 2014 Wilson Research Group Study results


23 February, 2015

It’s my favorite time of year again—DVCon!  And I believe that the DVCon 2015 technical program committee has put together one of the technically strongest DVCons in years. In this blog I plan to highlight a few DVCon events that you might want to put on your calendar.


First, at this year’s conference the Verification Academy has a dedicated booth (#301), and I hope you stop by to say hello to me, my friend Tom Fitzpatrick, and an amazing lineup of other Verification Academy subject matter experts.

Next, on Wednesday morning, March 4, I have the honor of participating on a verification panel titled “Art or Science?” Here, my fellow panelists and I will debate the view that verification today is more of an art than a science—and one which is perceived as difficult to master. To learn my position on this topic, you’ll have to stop by!

Also on Wednesday, at the Mentor-sponsored lunch, my colleague Steve Bailey and I will give an informative and entertaining talk we’ve titled “From Tightly Coupled (Loosely Bolted) to Verification Convergence.” Here we discuss the state of verification past, present, and future while examining the results from our recent worldwide industry study, which I started blogging about a few weeks ago (click here for more details). Our talk will examine how advanced techniques are taking hold in mainstream design and provide insights on the recent convergence of verification solutions to meet today’s growing challenges.

Finally, there are two tutorials I’d like to encourage you to attend while at DVCon this year:

  1. Advanced, High-Throughput Debug from Architectural Modeling Through Post-Silicon SoC Validation (click here for more details)
  2. Dead or Alive: Using Automated Formal Techniques to Characterize Dead Code, Reveal Paths to Hit Uncovered States, and Reach Coverage Closure Faster (click here for more details)

I look forward to meeting you at DVCon 2015!


11 February, 2015

Accellera Approves Creation of Portable Stimulus Working Group

At DVCon 2014, Mentor Graphics proposed that Accellera launch an exploratory exercise, called a Proposed Working Group (PWG), to determine whether there was sufficient interest and need to create a standard in this area.  To help motivate consideration of this activity, we indicated we would offer the graph-based test specification embodied in our inFact verification tool.

Rapid adoption of our technology has been the trend, especially when it is used within a SystemVerilog UVM testbench environment.  One of the major benefits of UVM has been the portable nature of the testbench, which facilitates design verification within and across companies.  The proprietary nature of our graph-based test specification language limits its easy use within the industry, leading users to suggest we look to standardize it in keeping with the fundamental UVM principle of testbench portability.

After about a year of discussion in Accellera, the group announced it had concluded there should be an official standards project in this area.  Industry participants have likewise offered quotes of support for the formation of the Accellera Portable Stimulus Working Group.

The challenges to efficient and effective verification continue to grow.  If we stop where we are today in verification algorithm advances and standards, the trend of requiring more people, time, or compute resources will continue to grow unabated at exponential rates.

For Mentor Graphics’ part, the verification team here has gone to market with innovative technology that has shown a remarkable ability to improve verification productivity and efficiency.  The specification we offer to Accellera to seed this project is the same one embodied in the technology we used when we partnered with TSMC to validate advanced functional verification technology, which we announced in 2011.  From that announcement, we shared that tests conducted by AppliedMicro on designs destined for TSMC shortened “time-to-coverage by over 100x.”

One need not wonder if it is possible to shrink a month’s worth of verification tests into less than an 8-hour work day.  It is: at 100x, roughly 720 hours (a month of round-the-clock testing) compress into about 7.2 hours.  To find out how our specific use of this technology works and what motivates us to support standardization of Portable Stimulus in Accellera, I invite you to visit the Verification Academy, where a session on Intelligent Testbench Automation shows what can be done.

And for those who would like to help develop the standard, and who may have technology to further underpin it, consider attending the first organizational meeting of the Portable Stimulus Working Group at DVCon 2015, March 5th from 6pm-9pm.  Contact Accellera for member-only meeting details, or catch me at DVCon 2015 and I can share more information with you.

