Verification Horizons BLOG

This blog provides an online forum with weekly updates on concepts, values, standards, methodologies and examples to assist with the understanding of what advanced functional verification technologies can do and how to most effectively apply them. We're looking forward to your comments and suggestions on the posts to make this a useful tool.

3 June, 2015

I am pleased to announce that, beginning today, the Accellera Portable Stimulus Working Group (PSWG) is accepting technology contributions to assist in the creation of a Portable Test and Stimulus standard. More details can be found at http://www.accellera.org/.

This milestone marks a critical point in our efforts to bring a Portable Stimulus standard to the industry. Beginning in May of 2014, several companies came together to form the first Proposed Working Group in Accellera to evaluate the feasibility and need for such a standard. After six months of diligent work, the PWG created a set of 120 requirements as part of a proposal to the Accellera Board of Directors that a Working Group be formed. The PSWG began at DVCon this year and for the past several months has been working to define a set of goals, milestones and design objectives to guide the standardization effort. Now it’s your turn to get in on the act.

Contributions will be accepted until August 5th, after which the WG will be evaluating the contributions and deciding which one(s) to use as the basis for the standard.

If you’re at DAC this week, stop by the Verification Academy booth (#2408) and see a full slate of technical presentations including one where I’ll be sharing some of our thoughts on what the standard should look like. To register for any/all of these sessions, please go here. See you at DAC!


3 June, 2015

FPGA Language and Library Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I focused on FPGA verification techniques and technologies adoption trends, as identified by the 2014 Wilson Research Group study. In this blog, I’ll present FPGA design and verification language trends, as identified by the Wilson Research Group study.

You might note that the percentages for some of the language and library data that I present sum to more than one hundred percent. The reason is that many FPGA projects today use multiple languages.

FPGA RTL Design Language Adoption Trends

Let’s begin by examining the languages used for FPGA RTL design. Figure 1 shows the trends in terms of languages used for design by comparing the 2012 Wilson Research Group study (in dark blue), the 2014 Wilson Research Group study (in light blue), and the projected design language adoption trends within the next twelve months (in purple). Note that adoption is declining for most of the languages used for FPGA design, with the exception of Verilog and SystemVerilog.

Also, it’s important to note that this study focused on languages used for RTL design. We have conducted a few informal studies related to languages used for architectural modeling—and it’s not too big of a surprise that we see increased adoption of C/C++ and SystemC in that space. However, since those studies have (thus far) been informal and not as rigorously executed as the Wilson Research Group study, I have decided to withhold that data until a more formal study can be executed related to architectural modeling and virtual prototyping.

Figure 1. Trends in languages used for FPGA design

It’s not too big of a surprise that VHDL is the predominant language used for FPGA RTL design, although the projected trend is that Verilog will likely overtake VHDL as the predominant FPGA design language in the near future.

FPGA Verification Language Adoption Trends

Next, let’s look at the languages used to verify FPGA designs (that is, languages used to create simulation testbenches). Figure 2 shows the trends in terms of languages used to create simulation testbenches by comparing the 2012 Wilson Research Group study (in dark blue), the 2014 Wilson Research Group study (in light blue), and the projected verification language adoption trends within the next twelve months (in purple).

Figure 2. Trends in languages used in verification to create FPGA simulation testbenches

FPGA Testbench Methodology Class Library Adoption Trends

Now let’s look at testbench methodology and class library adoption for FPGA designs. Figure 3 shows the trends in terms of methodology and class library adoption by comparing the 2012 Wilson Research Group study (in dark blue), the 2014 Wilson Research Group study (in light blue), and the projected methodology and class library adoption trends within the next twelve months (in purple).

Figure 3. FPGA methodology and class library adoption trends

Today, we see a downward trend in terms of adoption of all testbench methodologies and class libraries with the exception of UVM, which has increased by 28 percent since 2012. The study participants were also asked what they plan to use within the next 12 months, and based on the responses, UVM is projected to increase an additional 20 percent.

FPGA Assertion Language and Library Adoption Trends

Finally, let’s examine assertion language and library adoption for FPGA designs. The 2014 Wilson Research Group study found that 44 percent of all the FPGA projects have adopted assertion-based verification (ABV) as part of their verification strategy. The data presented in this section shows the assertion language and library adoption trends related to those participants who have adopted ABV.

Figure 4 shows the trends in terms of assertion language and library adoption by comparing the 2010 Wilson Research Group study (in dark blue), the 2012 Wilson Research Group study (in green), and the projected adoption trends within the next 12 months (in purple). The adoption of SVA continues to increase, while other assertion languages and libraries either remain flat or decline.

Figure 4. Trends in assertion language and library adoption for FPGA designs

In my next blog (click here), I will shift the focus of this series of blogs and start to present the ASIC/IC findings from the 2014 Wilson Research Group Functional Verification Study.


2 June, 2015

This year we are trying something new at the Verification Academy booth during next week’s 2015 Design Automation Conference.  We’ve decided to host an interactive panel on the controversial topic of Agile development. I say controversial because you typically find two camps of engineers when discussing the subject of Agile development—the believers and the non-believers.

My colleague Neil Johnson, principal consultant from XtremeEDA Corporation and a leading expert in Agile development, will provide some context for the topic with a short background on Agile methods to kick the panel off. Then I plan to join Neil on the panel, which will be moderated by Mentor’s own world-renowned Dennis Brophy.  Our intent is to have a healthy, interactive discussion with both the believers and the non-believers in the audience.

So, why is the subject of Agile development even worthy of discussion at DAC? Well, not to entirely give away my position on the subject…but I think it’s worthwhile to note some of the recent findings related to the root causes of logical and functional flaws from the 2014 Wilson Research Group Functional Verification Study (see figure below).

Clearly, design errors are a major factor contributing to bugs. Yet a growing concern is the number of issues surrounding the specification that are leading to logical and functional flaws. In reality, there is no such thing as a perfect specification—and few projects can afford to wait to start development until perfection is achieved. Furthermore, in many market segments, late-stage changes in the specification are common practice to ensure that the final product is competitive in a rapidly changing market. Could Agile development, in which requirements and solutions evolve through collaboration between self-organizing and cross-functional teams, be the saving grace? Please join us on June 8th at 5pm in the Verification Academy booth at DAC and hear what the experts are saying!


21 May, 2015

Still having fun doing UVM and Class based debug?

Maybe a debug contest will help. I had a contest with a user not too long ago.

We’ll call him Bob. Bob debugged his UVM testbench using that favorite technique – “logfile” debug. He spent a lot of time inserting, moving and removing $display statements, all while re-running simulation over and over. He’d generate a logfile, analyze it (read: run or tune up a Perl script) and cross-check with waveforms and source code. He’d get close, only to realize he needed more $display. More simulation.

Hopefully, just one more $display in one more file… More simulation… Repeat…
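For anyone who hasn’t lived this loop, here’s a minimal sketch of the pattern. The component, fields and message are hypothetical, not Bob’s actual code:

`include "uvm_macros.svh"
import uvm_pkg::*;

class my_monitor extends uvm_monitor;
  `uvm_component_utils(my_monitor)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void observe(bit [31:0] addr, bit [31:0] data);
    // Probe #17, added on re-run #42...
    $display("[%0t] %s: addr=%0h data=%0h",
             $time, get_full_name(), addr, data);
  endfunction
endclass

Every new probe means another full simulation and another pass over the logfile.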

He wanted to know if there was a better way to climb around through his UVM Testbench.

<UVM Agent Schematic – One way to climb around>

A Better Way to debug Bob’s Testbench – The Contest!

Bob challenged me to a contest. He wanted to know how fast we could find the bug using Mentor’s Visualizer™ Debug Environment and a post-simulation database. One simulation. One shot.

We ran Bob’s simulation, capturing the waveform database. Generating that database required two extra switches:

vopt -debug +designfile …
vsim -qwavedb=+signal+class …

Then we ran the debugger in post-simulation mode:

visualizer +designfile +wavefile

When Bob first saw his RTL signals AND his UVM Class based testbench in the waveform window together, he got a big smile – literally getting up out of his seat.

Our contest was this. Who would be fastest to find the bug? Logfile debug or class-based debug? This contest was all hindsight. Bob had already figured out what the bug was and had fixed it before we ever met. In the end, the bug was a simple coding mistake in the way a SystemVerilog queue was being used. But I’m getting ahead of myself.

Digging through the testbench

We set up a remote link so that we both could see the post-simulation debug session. Bob provided clues about the design and I drove the debugger.

We chased class handles around his design, from driver, across to monitor and into the scoreboard where the problem existed. There was a failure where a transaction contained N sub-transactions. The last two sub-transactions had errors, but only for a certain kind of transaction. And the error happened very late in an otherwise fully functional simulation.

<A UVM Driver transaction with derived classes and the parent sequence>

We started at the driver that was driving that transaction. We looked at the sequence_item that the driver received. But we had no idea, from looking at the driver source code, what the ACTUAL type of the sequence_item was. Some derived-type sequence_item was coming through. We also had no idea which sequence was generating this transaction. There were many sequences running on this driver.
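To see why the source alone couldn’t tell us, consider a minimal sketch, with hypothetical class names, of a driver written against a base transaction type:

`include "uvm_macros.svh"
import uvm_pkg::*;

class sequence_item_base extends uvm_sequence_item;
  `uvm_object_utils(sequence_item_base)
  function new(string name = "sequence_item_base");
    super.new(name);
  endfunction
endclass

class sequence_item_A extends sequence_item_base;
  `uvm_object_utils(sequence_item_A)
  function new(string name = "sequence_item_A");
    super.new(name);
  endfunction
endclass

class my_driver extends uvm_driver #(sequence_item_base);
  `uvm_component_utils(my_driver)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req);
      // 'req' is declared as sequence_item_base, but at run time the
      // factory may hand us any derived type -- which one, and which
      // sequence created it, is invisible in this source file.
      seq_item_port.item_done();
    end
  endtask
endclass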

<UVM Class Inheritance Diagram>

We used the waveform and the class inheritance diagram to figure out which class we needed to look at. Really easy. Just put the driver in the waveform and expand the transaction ‘t’ to see the derived type and the parent sequence.

<The transaction ‘t’ contains a class of type “sequence_item_A_fa”>

<The parent sequence is of type “sequence_A”>

Tic – Toc

In about 60 minutes we were at the point of the bug. Bob had spent about 2 weeks getting to this point using his logfiles. Winner!

In our 60 minutes, we saw that a derived class was coming into the driver. We traced the inheritance of that class to find a base class which implemented the SystemVerilog queue processing. That code was the place where the error was. After inspecting the loop control we found and fixed the error.
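As an illustration only (this is not Bob’s actual code), the class of bug looks something like this: a loop bound that silently drops the last two queue entries.

module queue_bug_sketch;
  int data_q[$] = '{10, 20, 30, 40, 50};

  initial begin
    // Buggy: the bound 'size() - 2' skips the last two entries --
    // the "last two sub-transactions" symptom.
    for (int i = 0; i < data_q.size() - 2; i++)
      $display("buggy loop processed %0d", data_q[i]);

    // Fixed: foreach visits every element of the queue.
    foreach (data_q[i])
      $display("fixed loop processed %0d", data_q[i]);
  end
endmodule

Simple to fix once you can see it; the hard part was knowing where to look.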

Coffee break time.

Still having fun.

Testbenches are complex pieces of software. Logfiles are very important debug tools, but with debug tools like Visualizer, post-simulation testbench debug can be more than just examining predetermined print statements. You can actually explore the UVM data structure and class-based testbench. And you won’t need weeks to do it.

Come to the Verification Academy Booth at DAC in San Francisco June 8, 9 and 10 to hear more about UVM Debug and talk in person about your UVM Debug problems. See you then!


16 May, 2015

In a recent post on deepchip.com, John Cooley wrote about “Who Knew VIP?”. In addition, Mark Olen wrote about this same topic on the Verification Horizons blog. So are VIPs becoming more and more popular? Yes!

Here are the big reasons why I believe we are seeing this trend:

  • Ease of Integration
  • Ready-made Configurations and Sequences
  • Debug Capabilities

I will be digging deeper into each of these reasons in a three-part series on the Verification Horizons BLOG. Today, in part 1, our focus is Ease of Integration, or EZ-VIP.

While developing VIPs, there are various trade-offs to consider: ease of use vs. amount of configuration, protocol-specific features vs. commonality across protocols. A couple of years ago, when I first got my hands on Mentor’s VIP, there were some features I really liked, some I wasn’t familiar with, and some I needed to learn. Over the last couple of years, there have been big strides in ease of use, with EZ-VIP a major item on that list.

EZ-VIP aims to make it easier to:

  • Make connections between QVIP interfaces and DUT signals
  • Integrate and configure a QVIP in a UVM testbench


This makes it easier for customers to become productive in hours rather than days.

Connectivity Modules:

Earlier, we gave users flexibility in the direction of the VIP’s ports based on the use mode. Apart from setting the port directions, the user also needed to write certain glue logic.


Now we have created new connectivity modules that enable easy hookup during integration. Each connectivity module is a wrapper around the VIP interface that contains the needed glue logic, with ports that have protocol-standard names and the right directions for the selected mode (e.g., Master, Slave, EP, RC). These enable the user to quickly integrate the QVIP with the DUT.
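To give a feel for the idea, here is a sketch of what such a wrapper might look like. The module and port names below are invented for illustration; they are not the actual QVIP API:

// Hypothetical master-mode connectivity module: protocol-standard
// port names, mode-correct directions, glue logic hidden inside.
module my_vip_connect_master (
  input  wire        clk,
  input  wire        rst_n,
  output wire        valid,   // master drives toward the DUT
  output wire [31:0] data,
  input  wire        ready    // DUT drives back to the master
);
  // The VIP interface instance and its glue logic live here; the
  // integrator only sees the standard-named ports above.
endmodule

// Integration then reduces to a direct port map:
// my_vip_connect_master u_vip (.clk(clk), .rst_n(rst_n),
//                              .valid(dut_valid), .data(dut_data),
//                              .ready(dut_ready));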


Quick Starter Kits:

These kits are specific to PCIe: customized, pre-packaged, easy-to-use verification environments for all major IP vendors, covering the serial and parallel interfaces of PCIe 1.0, 2.0, 3.0, 4.0 and mPCIe, which can be used to verify PHY, Root Complex and Endpoint designs. Users also get examples that can serve as a reference. These kits have dramatically reduced bring-up time for PCIe QVIP with these IPs to less than a day.

In part 2 of this series, I will talk about QVIP Configuration and Sequences. Stay tuned, and I look forward to hearing your feedback on this series of posts on VIP.


11 May, 2015

FPGA Verification Technology Adoption Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I focused on the effectiveness of verification in terms of FPGA project schedule and bug escapes. In this blog, I present verification techniques and technologies adoption trends, as identified by the 2014 Wilson Research Group study.

An interesting trend we see in the FPGA space is a continual maturing of its functional verification processes. In fact, we find that the FPGA design space is about where the ASIC/IC design space was five years ago in terms of verification maturity—and it is catching up quickly. A question you might ask is, “What is driving this trend?” In Part 1 of this blog series I showed rising design complexity with the adoption of more advanced FPGA designs, as well as multiple embedded processor architectures targeted at FPGA designs. In addition, I’ve presented trend data that showed an increase in total project time and effort spent in verification (Part 2 and Part 3). My belief is that the industry creating FPGA designs is being forced to mature its functional verification processes to address today’s increasing complexity.

FPGA Simulation Technique Adoption Trends

Let’s begin by comparing FPGA adoption trends related to various simulation techniques from both the 2012 and 2014 Wilson Research Group studies, as shown in Figure 1.

Figure 1. Simulation-based technique adoption trends for FPGA designs

You can clearly see that the industry is increasing its adoption of various functional verification techniques for FPGA-targeted designs. This past year I have spent a significant amount of time in discussions with FPGA project managers around the world. During these discussions, most managers mention the drive to improve the verification process within their projects due to rising complexity. The Wilson Research Group data suggest that these claims are valid.

FPGA Formal Technology Adoption Trends

Figure 2 shows the adoption percentages for formal property checking and auto-formal techniques.

Figure 2. FPGA Formal Technology Adoption

Our study looked at two forms of formal technology adoption (i.e., formal property checking and automatic formal verification solutions). Examples of automatic formal verification solutions include X safety checks, deadlock detection, reset analysis, and so on.  The key difference is that for formal property checking the user writes a set of assertions that they wish to prove.  Automatic formal verification solutions do not require the user to write assertions.
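For example, a formal property-checking user might hand the tool a set of assertions like this generic sketch (the module, signal names and protocol rule are illustrative, not from the study):

module arb_props (input logic clk, rst_n, req, grant);
  // Once a request is raised, a grant must arrive within 1 to 4 cycles.
  property p_req_gets_grant;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] grant;
  endproperty
  a_req_gets_grant: assert property (p_req_gets_grant);
endmodule

An automatic formal tool, by contrast, derives checks like X safety or deadlock freedom directly from the design, with no user-written properties at all.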

In my next blog (click here), I’ll focus on FPGA design and verification language adoption trends, as identified by the 2014 Wilson Research Group study.

Quick links to the 2014 Wilson Research Group Study results

, , , , ,

11 May, 2015

Because Clock Domain Crossing (CDC) verification has been around for well over a decade, it’s tempting to think that CDC has attained the status of “solved problem”. However, with today’s SoCs employing over 50 independent clocks, the reality is that CDC verification is only getting more challenging. This is why Mentor R&D seeks to stay ahead of the curve by attending cutting-edge academic events that most of us have never heard of. Case in point: the recent 21st IEEE International Symposium on Asynchronous Circuits and Systems — “ASYNC” for short.

At this year’s ASYNC, Mentor CDC R&D lead Chris Kwok reached out to the academic community with a presentation on “Hunting Asynchronous CDC Violations in the Wild”. In a nutshell, Chris updated the researchers on the state of CDC in the commercial EDA world. After providing this market snapshot, Chris went on to describe how the audience’s innovations will be most welcome as the number of clock domains – and the interactivity and dependencies between multiple domains like clock and reset signaling — continues to increase.
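For readers who don’t live and breathe this stuff, the canonical structure at the heart of CDC analysis is the humble two-flop synchronizer. A minimal sketch (illustrative code, not anything from Chris’ presentation or a tool):

module sync_2ff (
  input  logic clk_dst,  // destination-domain clock
  input  logic rst_n,
  input  logic d_src,    // single bit launched from another clock domain
  output logic q_dst     // synchronized copy, safe in clk_dst's domain
);
  logic meta;            // first stage may go metastable; second filters it

  always_ff @(posedge clk_dst or negedge rst_n)
    if (!rst_n) {q_dst, meta} <= '0;
    else        {q_dst, meta} <= {meta, d_src};
endmodule

A CDC tool has to establish, among other things, that every single-bit crossing passes through a structure like this, and that multi-bit crossings don’t quietly rely on one where it isn’t safe. Multiply that by 50-plus interacting clock and reset domains, and it’s clear why the problem is far from solved.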


It was readily apparent that Chris’ remarks were well received, given the detailed questions he fielded immediately after his presentation and the stream of visitors to the Mentor Graphics demo table.


Indeed, as my colleague noted, “The attendees I talked to had some really intriguing ideas around the core analysis technologies that will be needed for CDC verification at advanced nodes. It’s sparked some new thinking about some particularly thorny issues my team has been working on.”

In summary, the event was an interesting reminder of all the science that goes into the core of each Questa CDC release, enabling Mentor to stay ahead of our customers’ toughest technical challenges.

Until next time, may your clock domains be synchronized, and reset signaling be properly buffered,

Joe Hupcey III

Reference link: the ASYNC conference website http://ee.usc.edu/async2015/

P.S. Do you dream about improving your flip-flop’s tau figure? Do you calculate MTBFs in your spare time? Are you frustrated that your colleagues can’t appreciate that synchronizers and data flip-flops ARE different? If you answered “yes” to any one of these questions, the Questa CDC team would like to invite you to apply for an opening in R&D in Mentor’s Fremont, CA office:
http://chk.tbe.taleo.net/chk04/ats/careers/requisition.jsp?org=MENTOR&cws=1&rid=3462


7 May, 2015

For all things verification, you will want to stop by the Verification Academy booth #2408 at DAC to interact with experts exploring the challenges of IC design and verification.  At the top of each hour, the Verification Academy will feature a presentation followed by a lively conversation.  Presentations will not be repeated so each hour will be unique.

We have themed each of the days as well:

  • Monday is “Debug Day”
  • Tuesday is “Standards & FPGA Day”
  • Wednesday is “Formal Verification Day”

Naturally, you will find a few exceptions to those rules when you look at the program in detail.  Please register for Verification Academy sessions here: Monday Registration | Tuesday Registration | Wednesday Registration.  [NOTE: the Verification Academy sessions are highlighted with a blue background when you visit the registration site.]  A concise listing of all the Verification Academy sessions can be found here.

We will feature an end-of-day reception on Monday at the Verification Academy booth after the last presentation. Neil Johnson (XtremeEDA) and Mentor’s Harry Foster will explore Agile Evolution in SoC Verification in that last session. The session begins at 5pm. Neil is a proponent of this methodology as a means to help build in design quality and simplify the task of verification. In addition to being an advocate for this, he is also a practitioner of it. He is an open-source hardware developer and Moderator at www.AgileSoC.com. We think the conversation that follows this informative session will be a lively one, which we invite everyone to continue over cocktails and hors d’oeuvres at 5:30pm.

We are sponsoring other events outside of the Verification Academy as well.  Tuesday is truly “Standards Day” at DAC.  In addition to the standards theme at the Verification Academy booth, you can kick off the day at the Accellera Breakfast and later in the day attend the IEEE DASC, Accellera and Si2 System Level Low Power Workshop.  Here is a partial list of Standards Day activities:

Registration

If you have not yet registered for DAC, do so now. If you do not have plans to register for the full technical conference, many conference events are fee-free if you select the “I LOVE DAC” registration option before May 19th! In fact, all the “Standards Day” events listed above are free with early I Love DAC registration. Simply click here and you will be taken to the “I Love DAC” location to register. Register before May 19th, as after that date a $95 minimum fee applies.

See you at DAC!


21 April, 2015

FPGA Verification Effectiveness Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I focused on the amount of effort spent in FPGA verification. We have seen in previous blogs that a significant amount of effort is being applied to FPGA functional verification. In this blog I focus on the effectiveness of verification in terms of FPGA project schedule and bug escapes.

FPGA Schedules

Figure 1 presents the design completion time compared to the project’s original schedule. What was a surprise in the 2014 findings is that we saw an improvement in the number of FPGA projects meeting schedule—compared to 2012. It is unclear why we are seeing this trend now. Perhaps managers are getting better at scheduling—or are becoming more pessimistic with their schedules. Or perhaps it is due to the increased amount of reuse (both design and verification IP). Or is the increased amount of FPGA verification effort prior to “getting to the lab” starting to pay off for some projects? This data point raises some interesting questions worth exploring further. Regardless, a significant number of FPGA projects still miss their originally planned schedule.


Figure 1. FPGA design completion time compared to the project’s original schedule

FPGA Lab Iterations

ASIC/IC projects track the number of required spins that occur prior to market production. In fact, this can be a useful metric for determining the overall verification effectiveness of an ASIC/IC project. Unfortunately, we lack such a metric for FPGA projects. For the 2014 study, we decided to ask about the average number of lab iterations required before the design went into production, again to try and get a sense of a project’s verification effectiveness. The results are shown in Figure 2. However, I’m not convinced that FPGA lab iterations are analogous to ASIC/IC respins as a verification effectiveness metric. Perhaps a better metric for future studies would be the number of bugs that escape into production and are found in the field.


Figure 2. Number of FPGA iterations in the lab (no trend data available)

FPGA Bug classification

For the 2014 study, we asked the FPGA project participants to identify the types of flaws that were contributing to rework in the lab. In Figure 3, I show the two leading causes of rework: logical and functional bugs, and clocking bugs. The data seem to suggest that these issues are growing, perhaps due to the design of larger and more complex FPGAs. Again, this is a data point worth exploring further.


Figure 3. Types of Flaws Resulting in FPGA Rework

In Figure 4, I show trends in terms of main contributing factors leading to logic and functional flaws—and you can see that design errors are the main cause of functional flaws.  But note that a significant amount of flaws are related to some aspect of the specification—such as changes in the specification—or incorrect or incomplete specifications. Problems associated with the specification process are a common theme I often hear when visiting FPGA customers.


Figure 4. Root cause of FPGA functional flaws

In my next blog (click here), I plan to present the findings from our study on FPGA verification technology adoption trends.


16 April, 2015


I was fortunate to be able to attend DVCon this year. One of my favorite aspects of the DVCon show is the paper and poster sessions. DVCon is a very hands-on show, with the focus being practical applications of new verification techniques. It’s great to be able to listen during the paper sessions as industry experts present new techniques and approaches they have spent countless hours developing, and to interact in a more informal manner with the poster presenters. Once DVCon is over, the content of these papers maintains its value, and I frequently find myself revisiting papers from previous DVCon conferences.

I was happy to be at DVCon this year presenting a poster paper on software-driven hardware verification. Software-driven verification of hardware has been around for a very long time, of course. Going back to the era when systems were composed of discrete packages wired together on a board, running some amount of software on the processor has been a great way to verify that the components of the system have been correctly integrated. Today, as the interactions between software running on multiple processors and hardware IP become more and more complex, software-driven hardware verification continues to be relevant.

There are many challenges in software-driven hardware verification. Some challenges, such as automating creation of stimulus, are addressed by existing tools and are within the scope of the Accellera Portable Stimulus Working Group. Other challenges are more foundational, such as how test functionality is encapsulated and connected to maximize test-creation productivity, and maximize reuse of elements of test functionality. My paper, Jump-Start Software-Driven Hardware Verification with a Verification Framework, proposes a set of key features and capabilities required by a verification framework targeted at software-driven hardware verification.

Continuing the theme of reuse, I’m excited to announce that a collection of papers, poster papers, and interviews from DVCon are now available on Verification Academy. Whether or not you were able to attend DVCon, you can read papers on topics ranging from regression management to formal techniques to software-driven hardware verification. In addition, you can listen to the presenters of poster papers introduce their poster, and see interviews with industry figures. You can find these resources and more at the following link:

https://verificationacademy.com/news/featured-presentations-dvcon-2015
