Verification Horizons BLOG

This blog provides an online forum with weekly updates on concepts, values, standards, methodologies and examples to help you understand what advanced functional verification technologies can do and how to apply them most effectively. We look forward to your comments and suggestions on the posts to make this a useful tool.

1 July, 2015

Continuing our journey toward productive verification with VIP: the first step is to make the connections between the DUT and the testbench. This includes connecting the signals, the clock and reset, and setting any parameters. This is what we covered in part 1 of the series.

Let us look at the other steps needed to get productive, taking PCIe VIP as an example.

[Image: No2KnowVIP_2_1]

Protocol-specific agents enable easy instantiation of agents in your testbench, with fewer lines of configuration code. This is a customization of the generic agent. The PCIe agent will perform link training automatically, without us needing to start a sequence.

[Image: No2KnowVIP_2_2]
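As a rough illustration, instantiating a protocol-specific agent might look like the following sketch. The class names (`pcie_agent`, `pcie_agent_cfg`) and field names are invented for illustration and are not the actual QVIP API.

```systemverilog
// Hedged sketch: a UVM environment creating a protocol-specific PCIe agent.
// All type names here are hypothetical placeholders.
class pcie_env extends uvm_env;
  `uvm_component_utils(pcie_env)

  pcie_agent     agent;   // protocol-specific agent (vs. a generic agent)
  pcie_agent_cfg cfg;     // its configuration object

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    cfg   = pcie_agent_cfg::type_id::create("cfg");
    agent = pcie_agent::type_id::create("agent", this);
    uvm_config_db#(pcie_agent_cfg)::set(this, "agent", "cfg", cfg);
    // No sequence is needed for link training; the agent starts it itself.
  endfunction
endclass
```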

Next we move on to configuring the VIP. Here we can take advantage of a configuration policy that has already been defined for the specific design IP we are using and that is available alongside the VIP.

It is again important for the VIP to be intuitive, and here you can see that we have used a collection of structs to define the configuration. These are then declared as variables in a configuration class for the agent, which is used in the testbench to set the configuration.

[Image: No2KnowVIP_2_3]
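A struct-based configuration of the kind described above could be sketched as follows. The type and field names are assumptions for illustration only; the real QVIP types will differ.

```systemverilog
// Hypothetical sketch of structs collected into an agent configuration class.
typedef enum {ROOT_COMPLEX, ENDPOINT} pcie_device_e;

typedef struct {
  bit serial_interface;   // connect at the serial pins
  bit internal_clk_rst;   // VIP generates clock and reset internally
} pcie_link_cfg_t;

typedef struct {
  bit scoreboard;         // built-in scoreboard on/off
  bit coverage;           // built-in coverage on/off
  bit txn_logging;        // transaction logging on/off
} pcie_analysis_cfg_t;

class pcie_agent_cfg extends uvm_object;
  `uvm_object_utils(pcie_agent_cfg)
  pcie_device_e       device_type;
  int                 gen;       // PCIe generation: 1, 2 or 3
  pcie_link_cfg_t     link;      // the structs become class variables
  pcie_analysis_cfg_t analysis;

  function new(string name = "pcie_agent_cfg");
    super.new(name);
  endfunction
endclass
```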

As you can see here, the user code which sets the configuration for a test is then intuitive and easy to read. In this particular example, we have set our agent to be a GEN2 root complex, connected at the serial interface and using an internally generated clock and reset. We have disabled the built-in scoreboard and coverage, but enabled transaction logging.

[Image: No2KnowVIP_2_4]
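Test code setting that configuration might read something like this minimal sketch. Again, every name here is invented to illustrate the idea, not taken from the actual VIP.

```systemverilog
// Hedged sketch of test-level configuration: GEN2 root complex, serial
// interface, internal clock/reset, scoreboard and coverage off, logging on.
function void pcie_test::configure_agent(pcie_agent_cfg cfg);
  cfg.device_type           = ROOT_COMPLEX;
  cfg.gen                   = 2;
  cfg.link.serial_interface = 1;
  cfg.link.internal_clk_rst = 1;
  cfg.analysis.scoreboard   = 0;  // disable built-in scoreboard
  cfg.analysis.coverage     = 0;  // disable built-in coverage
  cfg.analysis.txn_logging  = 1;  // enable transaction logging
endfunction
```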

Once we are done with configuration, we move on to PCIe link training and enumeration, which the VIP can take care of for us. As mentioned earlier, the agent performs link training automatically, without us needing to start a sequence. The VIP also provides a sequence to complete enumeration, which blocks until link training is complete.

This means that in the test, we just need to create and start this sequence, the “bring-up sequence” in the code here.

[Image: No2KnowVIP_2_5]
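Starting the bring-up sequence from a test could look like the sketch below. The sequence class and sequencer path are placeholder names, not the real QVIP identifiers.

```systemverilog
// Hypothetical sketch: create and start the bring-up sequence in run_phase.
task pcie_test::run_phase(uvm_phase phase);
  pcie_bring_up_seq bring_up;
  phase.raise_objection(this);
  bring_up = pcie_bring_up_seq::type_id::create("bring_up");
  bring_up.start(env.agent.sequencer); // blocks until link training and
                                       // enumeration are complete
  phase.drop_objection(this);
endtask
```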

Once that sequence ends, link up and enumeration are complete and we can start a user defined sequence that would typically perform reads and writes to do some application specific verification.

VIPs include various sequence APIs for rapid test creation, such as memory, configuration and I/O reads and writes, as well as various scenario and error-injection sequences. Finally, we can use the transaction-layer sequences to begin application-specific verification by performing reads and writes across the bus.
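A user sequence built on such a sequence API might be sketched as follows. The `pcie_mem_write_seq`/`pcie_mem_read_seq` names and their fields are assumptions made for illustration.

```systemverilog
// Hedged sketch: a user sequence composing hypothetical memory read/write
// sequences from the VIP's sequence library.
class my_rdwr_seq extends uvm_sequence #(uvm_sequence_item);
  `uvm_object_utils(my_rdwr_seq)

  function new(string name = "my_rdwr_seq");
    super.new(name);
  endfunction

  task body();
    pcie_mem_write_seq wr;
    pcie_mem_read_seq  rd;

    // Write a known value, then read it back across the bus.
    wr = pcie_mem_write_seq::type_id::create("wr");
    if (!wr.randomize() with { addr == 64'h1000; data == 32'hCAFE_F00D; })
      `uvm_error("RNDF", "write randomize failed")
    wr.start(m_sequencer);

    rd = pcie_mem_read_seq::type_id::create("rd");
    if (!rd.randomize() with { addr == 64'h1000; })
      `uvm_error("RNDF", "read randomize failed")
    rd.start(m_sequencer);
  endtask
endclass
```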

Using the automation provided by the VIP, it is possible to complete the entire testbench and link-up process in hours rather than days, allowing users to become productive much earlier.

I would really like to hear how you use the various VIPs. In part 3 of the series, we will talk about the debug features of a VIP.


25 June, 2015

There are intrinsic limitations in the current approach to estimating dynamic power consumption. Briefly, the approach consists of a file-based flow with two steps. First, a simulator or emulator tracks the switching activity, either cumulatively for the entire run in a switching activity interchange format (SAIF) file, or on a cycle-by-cycle basis for each signal in a signal database file such as FSDB or VCD. Then a power estimation tool, fed the SAIF file, calculates the average power consumption of the whole circuit, or, fed the FSDB file, computes the peak power in time and space across the design.


Figure 1: Conventional power analysis is grounded in the two-step, file-based approach.
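The first step of this conventional flow can be sketched with the standard Verilog `$dump*` system tasks, which record cycle-by-cycle activity to a VCD file. The module and instance names are illustrative; SAIF and FSDB capture work through tool-specific commands instead.

```systemverilog
// Step 1 of the conventional flow: the simulator records switching
// activity, here into a VCD signal database via standard system tasks.
module tb;
  // dut u_dut(/* DUT hookup elided */);
  initial begin
    $dumpfile("activity.vcd"); // cycle-by-cycle signal database
    $dumpvars(0, tb);          // dump the whole testbench hierarchy
  end
endmodule
// Step 2: the VCD (or SAIF/FSDB) file is then loaded into a separate
// power estimation tool -- which is where the file-size and runtime
// problems described below arise.
```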

The method may be acceptable when the design-under-test (DUT) is relatively small, in the ballpark of a few million gates or less, and the analysis is performed within a limited time window of up to a million or so clock cycles. Such time windows are typical when the DUT is tested with adaptive functional testbenches.

When applied to modern, large SoC designs of tens or hundreds of millions of gates executing embedded software, such as booting an operating system and running application programs that require billions of cycles, three problems defeat the conventional approach:

  1. The sizes of SAIF and, even more so, FSDB/VCD files become massive and unmanageable
  2. The file generation process slows to a crawl that extends to hours, possibly exceeding a day
  3. File loading into a power estimation tool extends to several days or more than a week

New software for the Veloce emulation platform eliminates the core problems affecting the conventional approach to estimating power consumption. It eliminates the two-step, file-based flow by tightly integrating the emulator with the power analysis tool.

In this new approach, an Activity Plot maps, in one simple chart, the global design switching activity over time as it occurs, booting an OS and running live applications.

[Image: Activity Plot]

Chart 1: The Activity Plot identifies focus areas over long runs.

It identifies time frames of high switching activity that may pose power risks to the design. While this type of chart is not unique, its creation is an order of magnitude faster than the generation of file-based power charts. As a data point, Veloce takes 15 minutes to generate an Activity Plot of a 100-million-gate design over 75 million design clock cycles. Power analysis tools could take more than a week to generate similar information, and they may not be able to handle such a large volume of data at all.

Of course, the next questions are “where” are those peaks happening in the DUT and “what” is causing them? This is addressed by replacing the file-based approach with a Dynamic Read Waveform API for the signal data.

Dynamic Read Waveform API Flow

Once high-switching activity time frames are identified at the top level of the design, the design team can zoom into those frames. Users can dig deep into the hierarchy of the design and the embedded software to uncover the main source of such high-switching activity.

The Dynamic Read Waveform API replaces the cumbersome SAIF/FSDB/VCD file generation process by live-streaming switching data from the emulator into the power analysis tool. All operations run concurrently: emulating the SoC design, capturing switching data, reading the switching data into the power analysis tool, and generating power numbers. The net effect is a jump in overall performance when booting an OS and running real applications.

As a side benefit, the Dynamic Read Waveform API also delivers improved accuracy compared to SAIF-based average flows, because conditional controls on switching are incorporated automatically.

The bottom line is that the Activity Plot and the Dynamic Read Waveform API enable power analysis and power exploration at the system level that are not possible with a file-based flow.


8 June, 2015

Do you have a really tough verification problem – one that takes seemingly forever for a testbench simulation to solve – and are left wondering whether an automated formal application would be better suited for the task?

Are you curious about formal or clock-domain crossing verification, but are overwhelmed by all the results you get from a Google search?

Are you worried that adding in low power circuitry with a UPF file will completely mess up your CDC analysis?

Good news: inspired by the success of the UVM courses on the Verification Academy website, the Questa Formal and CDC team has created all new courses on a whole variety of formal and CDC-related subjects that address these questions and more.  New topics that are covered include:

* What’s a formal app, and what are the benefits of the approach?

* Reviews of automated formal apps for bug hunting, exhaustive connectivity checking and register verification, X-state analysis, and more

* New topics in CDC verification, such as the need for reconvergence analysis, and power-aware CDC verification

* How to get started with direct property checking, including: test planning for formal, SVA coding tricks that get the most out of the formal analysis engines AND ensure reuse with simulation and emulation, how to set up the analysis to rapidly reach a solution, and how to measure formal coverage and estimate whether you have enough assertions
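As a flavor of the SVA material, here is a minimal handshake assertion written so that the same property serves formal, simulation and coverage. The signal names (`req`, `ack`, `clk`, `rst_n`) and the 4-cycle bound are assumptions for the example, not from any course.

```systemverilog
// Minimal SVA sketch: every request must be acknowledged within 1 to 4 cycles.
module req_ack_checker (input logic clk, rst_n, req, ack);
  property req_gets_ack;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] ack;
  endproperty

  // The one property is reused three ways: as a formal proof target, as a
  // simulation/emulation assertion, and (via cover) as a coverage point.
  assert_req_ack: assert property (req_gets_ack)
    else $error("req not acknowledged within 4 cycles");
  cover_req_ack: cover property (req_gets_ack);
endmodule
```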

The best part: all of this content is available NOW at www.verificationacademy.com, and it’s all FREE!

Enjoy!

Joe Hupcey III,
on behalf of the Questa Formal and CDC team

P.S. If you’re coming to the DAC in San Francisco, be sure to come by the Verification Academy booth (#2408) for live presentations, end-user case studies, and demos on the full range of verification topics – UVM, low power, portable stimulus, formal, CDC, hardware-software co-verification, and more.  Follow this link for all the details & schedule of events (including “Formal & CDC Day” on June 10!): http://www.mentor.com/events/design-automation-conference/schedule


4 June, 2015

Learn more about DDA at DAC

At DAC, Mentor Graphics and Cadence Design Systems are coming together to usher in another level of productivity in verification results data access and portability, with a modern design debug data application programming interface standard. We call this emerging standard the Debug Data API, or DDA for short. We want to share more details with you in person at DAC. Join us on Tuesday, June 9th, at the Verification Academy Booth (#2408) at 5:00pm for a joint presentation and unveiling. To get a bit of background and a hint of what’s to come, please read on.

History: It Started with VCD

In the beginning we had VCD as the universal standard format for exchanging simulation results, as part of the IEEE 1364 (Verilog) standard. Anyone trying to use VCD today on large SoCs or complex FPGAs knows that file size has all but excluded this portion of the IEEE standard from modern design verification practice. So the question is: when will it be replaced?

To ask that question today seems fine. But I was skeptical even in the mid-1990s, when we at Mentor Graphics created Extended VCD to support the IEEE 1076.4 (VITAL) gate-level simulation standard. At that time the largest designs were around 1 million gates. While Extended VCD never became an official IEEE standard, we shared it with our ASIC vendor and FPGA partners, along with our major competitors, to ensure that debug data access and portability for VITAL users was on par with Verilog. But Extended VCD suffers the same fate: it is almost impossible to use with modern large designs.

Today: VCD Replaced by a Proprietary World

VCD and Extended VCD have remained static for about 20 years. But commercial simulator, emulator and other verification technology suppliers have not stopped innovating to advance support for larger design sizes with larger result data sets. As we move to 2 billion gate designs and beyond, the dependence on these private and closed technological advances and innovations has never been more important.

But that proprietary dependence comes with a cost. We stand at a crossroads where consumers of verification results information either lose the open and unencumbered use offered by VCD, or they need a path forward that preserves their current benefits while protecting and encouraging producers of such information to continue to innovate by private means. The only alternative is fully integrated solutions from a single supplier, which rarely get consumer endorsement in a best-of-breed, mix-and-match world.

Near Future: Federating the Proprietary World with DDA

Federating proprietary solutions almost sounds like something that is impossible to do. But Mentor and Cadence will share their emerging work on a standard to federate the different sources of verification results that come from private sources, with unencumbered access for the consumer. The Debug Data API standard will offer consumers the VCD benefits of interoperability, data portability and openness, while preserving the benefits of private innovation for tool and solution producers. It will not impose data format translations from one format to another as a means to promote data portability. It will not require the means by which a supplier stores verification results to be exposed. It will offer the best of both worlds to producers and consumers. I guess in some cases you can have your cake and eat it too! There are more details to share, and the best place to start is to meet us at DAC.

Mentor and Cadence Share DDA Details at DAC

  • Location: Verification Academy Booth (#2408)
  • Date: Tuesday, June 9th
  • Time: 5:00pm PT
    More Information >

We will discuss the details of DDA and show a proof-of-concept demonstration highlighting each company’s simulator and results viewer in action. Since there is no other session following this one at the Verification Academy booth, we will also be around afterwards to discuss next steps with everyone present.

You can read more about this from my colleague and competitor, Cadence’s Adam Sherer, on his blog at the Cadence site here. He brings his own perspective to it.

See you at DAC!


3 June, 2015

I am pleased to announce that, beginning today, the Accellera Portable Stimulus Working Group (PSWG) is accepting technology contributions to assist in the creation of a Portable Test and Stimulus standard. More details can be found at http://www.accellera.org/.

This milestone marks a critical point in our efforts to bring a Portable Stimulus standard to the industry. Beginning in May of 2014, several companies came together to form the first Proposed Working Group in Accellera to evaluate the feasibility and need for such a standard. After six months of diligent work, the PWG created a set of 120 requirements as part of a proposal to the Accellera Board of Directors that a Working Group be formed. The PSWG began at DVCon this year and for the past several months has been working to define a set of goals, milestones and design objectives to guide the standardization effort. Now it’s your turn to get in on the act.

Contributions will be accepted until August 5th, after which the WG will be evaluating the contributions and deciding which one(s) to use as the basis for the standard.

If you’re at DAC this week, stop by the Verification Academy booth (#2408) and see a full slate of technical presentations including one where I’ll be sharing some of our thoughts on what the standard should look like. To register for any/all of these sessions, please go here. See you at DAC!


3 June, 2015

FPGA Language and Library Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I focused on FPGA verification techniques and technologies adoption trends, as identified by the 2014 Wilson Research Group study. In this blog, I’ll present FPGA design and verification language trends, as identified by the Wilson Research Group study.

You might note that the percentage for some of the language and library data that I present sums to more than one hundred percent. The reason for this is that many FPGA projects today use multiple languages.

FPGA RTL Design Language Adoption Trends

Let’s begin by examining the languages used for FPGA RTL design. Figure 1 shows the trends in terms of languages used for design, by comparing the 2012 Wilson Research Group study (in dark blue), the 2014 Wilson Research Group study (in light blue), as well as the projected design language adoption trends within the next twelve months (in purple). Note that the language adoption is declining for most of the languages used for FPGA design with the exception of Verilog and SystemVerilog.

Also, it’s important to note that this study focused on languages used for RTL design. We have conducted a few informal studies related to languages used for architectural modeling—and it’s not too big of a surprise that we see increased adoption of C/C++ and SystemC in that space. However, since those studies have (thus far) been informal and not as rigorously executed as the Wilson Research Group study, I have decided to withhold that data until a more formal study can be executed related to architectural modeling and virtual prototyping.

Figure 1. Trends in languages used for FPGA design

It’s not too big of a surprise that VHDL is the predominant language used for FPGA RTL design, although the projected trend is that Verilog will likely overtake VHDL as the predominant language for FPGA design in the near future.

FPGA Verification Language Adoption Trends

Next, let’s look at the languages used to verify FPGA designs (that is, languages used to create simulation testbenches). Figure 2 shows the trends in terms of languages used to create simulation testbenches by comparing the 2012 Wilson Research Group study (in dark blue), the 2014 Wilson Research Group study (in light blue), as well as the projected verification language adoption trends within the next twelve months (in purple).

Figure 2. Trends in languages used in verification to create FPGA simulation testbenches

FPGA Testbench Methodology Class Library Adoption Trends

Now let’s look at testbench methodology and class library adoption for FPGA designs. Figure 3 shows the trends in terms of methodology and class library adoption by comparing the 2012 Wilson Research Group study (in dark blue), the 2014 Wilson Research Group study (in light blue), as well as the projected verification language adoption trends within the next twelve months (in purple).

Figure 3. FPGA methodology and class library adoption trends

Today, we see a downward trend in terms of adoption of all testbench methodologies and class libraries with the exception of UVM, which has increased by 28 percent since 2012. The study participants were also asked what they plan to use within the next 12 months, and based on the responses, UVM is projected to increase an additional 20 percent.

FPGA Assertion Language and Library Adoption Trends

Finally, let’s examine assertion language and library adoption for FPGA designs. The 2014 Wilson Research Group study found that 44 percent of all the FPGA projects have adopted assertion-based verification (ABV) as part of their verification strategy. The data presented in this section shows the assertion language and library adoption trends related to those participants who have adopted ABV.

Figure 4 shows the trends in terms of assertion language and library adoption by comparing the 2010 Wilson Research Group study (in dark blue), the 2012 Wilson Research Group study (in green), and the projected adoption trends within the next 12 months (in purple). The adoption of SVA continues to increase, while other assertion languages and libraries either remain flat or decline.

Figure 4. Trends in assertion language and library adoption for FPGA designs

In my next blog (click here), I will continue presenting findings from the 2014 Wilson Research Group Functional Verification Study.

Quick links to the 2014 Wilson Research Group Study results


2 June, 2015

This year we are trying something new at the Verification Academy booth during next week’s 2015 Design Automation Conference.  We’ve decided to host an interactive panel on the controversial topic of Agile development. I say controversial because you typically find two camps of engineers when discussing the subject of Agile development—the believers and the non-believers.

My colleague Neil Johnson, principal consultant from XtremeEDA Corporation and a leading expert in Agile development, will provide some context for the topic with a short background on Agile methods to kick the panel off. Then I plan to join Neil on the panel, which will be moderated by Mentor’s own world-renowned Dennis Brophy.  Our intent is to have a healthy, interactive discussion with both the believers and the non-believers in the audience.

So, why is the subject of Agile development even worthy of discussion at DAC? Well, not to entirely give away my position on the subject…but I think it’s worthwhile to note some of the recent findings related to root cause of logical and functional flaws from the 2014 Wilson Research Group Functional Verification Study (see figure below).

Clearly, design errors are a major factor contributing to bugs. Yet a growing concern is the number of issues surrounding the specification that lead to logical and functional flaws. In reality, there is no such thing as a perfect specification, and few projects can afford to wait to start development until perfection is achieved. Furthermore, in many market segments, late-stage changes to the specification are common practice to ensure that the final product is competitive in a rapidly changing market. Could Agile development, in which requirements and solutions evolve through collaboration between self-organizing, cross-functional teams, be the saving grace? Please join us on June 8th at 5pm in the Verification Academy booth at DAC and hear what the experts are saying!


21 May, 2015

Still having fun doing UVM and Class based debug?

Maybe a debug contest will help. I had a contest with a user not too long ago.

We’ll call him Bob. Bob debugged his UVM testbench using that favorite technique: “logfile” debug. He spent a lot of time inserting, moving and removing $display statements, all while re-running simulation over and over. He’d generate a logfile, analyze it (read: run or tune up a Perl script) and cross-check with waveforms and source code. He’d get close, only to realize he needed more $display statements. More simulation.

Hopefully, just one more $display in one more file… More simulation… Repeat…

He wanted to know if there was a better way to climb around through his UVM Testbench.

<UVM Agent Schematic – One way to climb around>

A Better Way to debug Bob’s Testbench – The Contest!

Bob challenged me to a contest. He wanted to know how fast we could find the bug using Mentor’s Visualizer™ Debug Environment and a post simulation database. One simulation. One shot.

We ran Bob’s simulation, capturing the waveform database. Generating that database required two extra switches:

vopt -debug +designfile …
vsim -qwavedb=+signal+class …

Then we ran the debugger in post-simulation mode:

visualizer +designfile +wavefile

When Bob first saw his RTL signals AND his UVM Class based testbench in the waveform window together, he got a big smile – literally getting up out of his seat.

Our contest was this: who would be fastest to find the bug, logfile debug or class-based debug? The contest was all hindsight. Bob had already figured out what the bug was and had fixed it before we ever met. In the end, the bug was a simple coding mistake in the way a SystemVerilog queue was being used. But I’m getting ahead of myself.

Digging through the testbench

We set up a remote link so that we both could see the post-simulation debug session. Bob provided clues about the design and I drove the debugger.

We chased class handles around his design, from driver, across to monitor and into the scoreboard where the problem existed. There was a failure where a transaction contained N sub-transactions. The last two sub-transactions had errors, but only for a certain kind of transaction. And the error happened very late in an otherwise fully functional simulation.

<A UVM Driver transaction with derived classes and the parent sequence>

We started at the driver that was driving that transaction. We looked at the sequence_item that the driver received. But we had no idea from looking at the driver source code, what the ACTUAL type of the sequence_item was. Some derived type sequence_item was coming through. We also had no idea which sequence was generating this transaction. There were many sequences running on this driver.


<UVM Class Inheritance Diagram>

We used the waveform and the class inheritance diagram to figure out which class we needed to look at. Really easy. Just put the driver in the waveform and expand the transaction ‘t’ to see the derived type and the parent sequence.


<The transaction ‘t’ contains a class of type “sequence_item_A_fa”>

<The parent sequence is of type “sequence_A”>

Tic – Toc

In about 60 minutes we were at the point of the bug. Bob had spent about 2 weeks getting to this point using his logfiles. Winner!

In our 60 minutes, we saw that a derived class was coming into the driver. We traced the inheritance of that class to find a base class which implemented the SystemVerilog queue processing. That code was the place where the error was. After inspecting the loop control we found and fixed the error.
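For the curious, here is an illustrative example of the general kind of queue loop-control bug described, not Bob's actual code: deleting from a SystemVerilog queue while iterating forward by index silently skips elements.

```systemverilog
// Illustrative only: a classic SystemVerilog queue loop-control bug.
module queue_bug_demo;
  int q[$];
  initial begin
    q = '{2, 4, 6, 1};
    // Buggy loop: after q.delete(i) the next element shifts down into
    // slot i, but i++ steps past it, so adjacent matches are skipped.
    for (int i = 0; i < q.size(); i++)
      if (q[i] % 2 == 0) q.delete(i);
    // q is now '{4, 1} -- the 4 was skipped, not '{1} as intended.

    q = '{2, 4, 6, 1};
    // One fix: iterate backwards, so deletions never shift unvisited items.
    for (int i = q.size() - 1; i >= 0; i--)
      if (q[i] % 2 == 0) q.delete(i);
    // q is now '{1}, as intended.
  end
endmodule
```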

Coffee break time.

Still having fun.

Testbenches are complex pieces of software. Logfiles are very important debug tools, but with debug tools like Visualizer, post simulation testbench debug can be more than just examining predetermined print statements. You can actually explore the UVM data structure and class based testbench. And you won’t need weeks to do it.

Come to the Verification Academy Booth at DAC in San Francisco June 8, 9 and 10 to hear more about UVM Debug and talk in person about your UVM Debug problems. See you then!


16 May, 2015

In a recent post on deepchip.com, John Cooley wrote about “Who Knew VIP?”. In addition, Mark Olen wrote about the same topic on the Verification Horizons blog. So are VIPs becoming more and more popular? Yes!

Here are the big reasons why I believe we are seeing this trend:

  • Ease of Integration
  • Ready-made Configurations and Sequences
  • Debug Capabilities

I will be digging deeper into each of these reasons in a three-part series on the Verification Horizons blog. Today, in part 1, our focus is easy integration, or EZ-VIP.

While developing VIPs there are various trade-offs to consider: ease of use versus amount of configuration, and protocol-specific features versus commonality across protocols. A couple of years ago, when I first got my hands on Mentor’s VIP, there were some features I really liked, some I wasn’t familiar with, and some I needed to learn. Over the last couple of years there have been strides of improvement, with ease of use (EZ-VIP) a big one on that list.

EZ-VIP aims to make it easier to:

  • Make connections between QVIP interfaces and DUT signals
  • Integrate and configure a QVIP in a UVM testbench

[Image: EZ-VIP_Productivity]

This makes it easier for customers to become productive in hours rather than days.

Connectivity Modules:

Earlier, we gave users flexibility in the direction of the VIP ports based on the use mode, but the user needed to write certain glue logic in addition to setting the port directions.

[Image: QVIP_ConnModule]

Now we have created connectivity modules that enable easy connection during integration. A connectivity module is a wrapper around the VIP interface containing the needed glue logic, with ports that carry protocol-standard names and have the right directions for the mode (e.g., Master, Slave, EP, RC). These enable the user to quickly integrate the QVIP with the DUT.

[Image: EZ-VIP_Productivity_1]
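In outline, such a connectivity module might look like the sketch below. The module, interface and port names are hypothetical stand-ins for the actual QVIP deliverables.

```systemverilog
// Hedged sketch of a connectivity module: a wrapper around the VIP
// interface with protocol-named ports in the right direction for RC mode.
module pcie_qvip_rc_connect (
  input  wire       refclk,
  input  wire       perst_n,
  output wire [3:0] txp,   // serial lanes driven by the VIP in RC mode
  input  wire [3:0] rxp    // serial lanes driven by the DUT endpoint
);
  pcie_qvip_if qvip_if();  // the underlying generic VIP interface

  // Glue logic that users previously wrote by hand now lives here.
  assign qvip_if.refclk  = refclk;
  assign qvip_if.perst_n = perst_n;
  assign txp             = qvip_if.txp;
  assign qvip_if.rxp     = rxp;
endmodule
```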

Quick Starter Kits:

These kits are specific to PCIe: customized, pre-packaged, easy-to-use verification environments for all major IP vendors, covering the serial and parallel interfaces of PCIe 1.0, 2.0, 3.0, 4.0 and mPCIe, and usable to verify PHY, Root Complex and Endpoint designs. Users also get examples that can be used as reference. These kits have dramatically reduced the bring-up time for PCIe QVIP with these IPs to less than a day.

In part 2 of this series, I will talk about QVIP configuration and sequences. Stay tuned, and I look forward to hearing your feedback on this series of posts on VIP.


11 May, 2015

FPGA Verification Technology Adoption Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I focused on the effectiveness of verification in terms of FPGA project schedule and bug escapes. In this blog, I present verification techniques and technologies adoption trends, as identified by the 2014 Wilson Research Group study.

An interesting trend we see in the FPGA space is a continual maturing of its functional verification processes. In fact, we find that the FPGA design space is about where the ASIC/IC design space was five years ago in terms of verification maturity—and it is catching up quickly. A question you might ask is, “What is driving this trend?” In Part 1 of this blog series I showed rising design complexity with the adoption of more advanced FPGA designs, as well as multiple embedded processor architectures targeted at FPGA designs. In addition, I’ve presented trend data that showed an increase in total project time and effort spent in verification (Part 2 and Part 3). My belief is that the industry creating FPGA designs is being forced to mature its functional verification processes to address today’s increasing complexity.

FPGA Simulation Technique Adoption Trends

Let’s begin by comparing FPGA adoption trends related to various simulation techniques from both the 2012 and 2014 Wilson Research Group studies, as shown in Figure 1.

Figure 1. Simulation-based technique adoption trends for FPGA designs

You can clearly see that the industry is increasing its adoption of various functional verification techniques for FPGA-targeted designs. This past year I have spent a significant amount of time in discussions with FPGA project managers around the world. During these discussions, most managers mention the drive to improve the verification process within their projects due to rising complexity. The Wilson Research Group data suggests that these claims are valid.

FPGA Formal Technology Adoption Trends

Figure 2 shows the adoption percentages for formal property checking and automatic formal techniques.

Figure 2. FPGA Formal Technology Adoption

Our study looked at two forms of formal technology adoption (i.e., formal property checking and automatic formal verification solutions). Examples of automatic formal verification solutions include X safety checks, deadlock detection, reset analysis, and so on.  The key difference is that for formal property checking the user writes a set of assertions that they wish to prove.  Automatic formal verification solutions do not require the user to write assertions.

In my next blog (click here), I’ll focus on FPGA design and verification language adoption trends, as identified by the 2014 Wilson Research Group study.

Quick links to the 2014 Wilson Research Group Study results

