Posts Tagged ‘Verification Methodology’

5 September, 2017

Time sure does fly: DVCon India 2017 is just around the corner, yet it feels like DVCon India 2016 just concluded. DVCon India 2017 has kept me busy this year as Vice-Chair of the conference, but I have thoroughly enjoyed my participation and my association with DVCon. The entire team endeavors to put together the best possible technical conference for the ever-growing and widening industry in India.

I thought I would give you a glimpse of the journey the team goes through to put together DVCon in India. Our work starts right after the conference ends, by consolidating and brainstorming the feedback received and forming the steering committee for the next year. A committee member can hold a position for two years, so some continue in the same role, some take on a new responsibility, and new members join in to contribute. Next, the other committees are formed: Technical Programs Committee (TPC), Tutorials, Panel, Conference, and Promotions. Then comes the call for papers, tutorials, and panels, and once submissions are in, committee members are assigned papers and tutorials for review.

Once the reviews are submitted, we get together to select the papers and tutorials. As part of the Design & Verification (DV) TPC, I have seen and been involved in various debates over selecting the best papers, posters, and tutorials. The members of the TPC and Tutorials committees take on the additional responsibility of chairing sessions and shepherding papers and tutorials; the shepherds work with the authors to review the final submissions and presentations. In parallel, the technical, conference, and promotions committees work throughout the year to finalize the venue and conference logistics, secure sponsors and exhibitors, finalize the keynote and invited speakers, complete the agenda, promote the event, send out reminders for the various deadlines, and enable registrations. These two simple paragraphs cannot do justice to all the hard work, dedication, and commitment of the different teams involved.

Like previous years, this year's event is a two-day event. The first day is for keynotes, invited talks, panel discussions, and tutorials; the second day is for keynotes, invited talks, and the papers and posters in the ESL and DV tracks. Participants are free to attend any track and can move between tracks.

Below please find a list of key presentations from Mentor, A Siemens Business:

Keynote – Innovations in Computing, Networking, and Communications: Driving the Next Big Wave in Verification
Friday, September 15, 2017
9:45am – 10:30am
DV Track, Grand Ballroom, The Leela Palace

Panel – Hardware Emulation’s Starring Role in SoC Verification
Thursday, September 14, 2017
12:10pm – 1:00pm
DV Track, Grand Ballroom, The Leela Palace

DV Tutorial – Low-Power Methodology for Making Micro-Architectural and Sequential Changes at RTL to Achieve Predictable Power Savings Throughout the Design Flow
Thursday, September 14, 2017
2:00pm – 3:30pm
DV Track, Turret, The Leela Palace

DV Tutorial – Back to Basics: Doing Formal the Right Way
Thursday, September 14, 2017
4:00pm – 5:30pm
DV Track, Grand Ballroom, The Leela Palace

Accellera Tutorial: An Introduction to the Accellera Portable Stimulus Standard
Thursday, September 14, 2017
4:00pm – 5:30pm
ESL Track, Royal Ballroom, The Leela Palace

I trust you will like this short glimpse into the DVCon India 2017 journey. I am very sure the conference will translate into what we intended: a Design & Engineering Platform for Learning, Networking, and Promotion for Participants, Exhibitors, and Sponsors. I look forward to seeing you at DVCon India 2017, and if you see me at the show, please say hello and let me know what you think of the conference.


21 September, 2016

FPGA Language and Library Trends

This blog is a continuation of a series of blogs related to the 2016 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I focused on FPGA verification techniques and technologies adoption trends, as identified by the 2016 Wilson Research Group study. In this blog, I’ll present FPGA design and verification language trends.

You might note that some of the language percentages I present sum to more than one hundred percent. The reason is that many FPGA projects today use multiple languages.

FPGA RTL Design Language Adoption Trends

Let’s begin by examining the languages used for FPGA RTL design. Figure 1 shows the trends in languages used for design, comparing the 2012, 2014, and 2016 Wilson Research Group studies, as well as the projected design language adoption within the next twelve months. Note that adoption is declining for most of the languages used for FPGA design, with the exception of Verilog and SystemVerilog.

Also, it’s important to note that this study focused on languages used for RTL design. We have conducted a few informal studies related to languages used for architectural modeling—and it’s not too big of a surprise that we see increased adoption of C/C++ and SystemC in that space. However, since those studies have (thus far) been informal and not as rigorously executed as the Wilson Research Group study, I have decided to withhold that data until a more formal study can be executed related to architectural modeling and virtual prototyping.


Figure 1. Trends in languages used for FPGA design

It’s not too big of a surprise that VHDL is the predominant language used for FPGA RTL design, although it is slowly declining when viewed as a worldwide trend. An important note here is that if you were to filter the results down by a particular market segment or region of the world, you would find different results. For example, if you only look at Europe, you would find that VHDL adoption as an FPGA design language is about 79 percent, while the world average is 62 percent. However, I believe that it is important to examine worldwide trends to get a sense of where the industry is moving in the future.

FPGA Verification Language Adoption Trends

Next, let’s look at the languages used to verify FPGA designs (that is, languages used to create simulation testbenches). Figure 2 shows the trends in languages used to create simulation testbenches, comparing the 2012, 2014, and 2016 Wilson Research Group studies, as well as the projected verification language adoption within the next twelve months.


Figure 2. Trends in languages used in verification to create FPGA simulation testbenches

What is interesting in 2016 is that SystemVerilog overtook VHDL as the language of choice for building FPGA testbenches. But please note that the same comment related to design language adoption applies to verification language adoption. That is, if you were to filter the results down by a particular market segment or region of the world, you would find different results. For example, if you only look at Europe, you would find that VHDL adoption as an FPGA verification language is about 66 percent (greater than the worldwide average), while SystemVerilog adoption is 41 percent (less than the worldwide average).

FPGA Testbench Methodology Class Library Adoption Trends

Now let’s look at testbench methodology and class library adoption for FPGA designs. Figure 3 shows the trends in methodology and class library adoption, comparing the 2012, 2014, and 2016 Wilson Research Group studies, as well as the projected adoption within the next twelve months.


Figure 3. FPGA methodology and class library adoption trends

Today, we see a basically flat or downward trend in adoption of all testbench methodologies and class libraries, with the exception of UVM, which has been growing at a healthy 10.7 percent compound annual growth rate. The study participants were also asked what they plan to use within the next 12 months, and based on the responses, UVM is projected to increase an additional 12.5 percent.
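For reference, the compound annual growth rate quoted above is just the standard CAGR definition applied across the four years between the 2012 and 2016 studies (the underlying adoption percentages are those plotted in Figure 3 and are not restated here):

    \text{CAGR} = \left(\frac{A_{2016}}{A_{2012}}\right)^{1/4} - 1 \approx 10.7\%
    \quad\Longrightarrow\quad A_{2016} \approx (1.107)^{4}\,A_{2012} \approx 1.5\,A_{2012}

where A denotes the UVM adoption percentage in a given study year.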

By the way, to be fair, we did get a few write-in methodologies, such as OSVVM and UVVM, which are based on VHDL. I did not list them in the previous figure since it would be difficult to predict an accurate adoption percentage: they were not listed as selection options on the original question, which resulted in only a few write-in answers. Nonetheless, the data suggest that the industry momentum and focus have moved to SystemVerilog and UVM.

FPGA Assertion Language and Library Adoption Trends

Finally, let’s examine assertion language and library adoption for FPGA designs. The 2016 Wilson Research Group study found that 47 percent of all the FPGA projects have adopted assertion-based verification (ABV) as part of their verification strategy. The data presented in this section shows the assertion language and library adoption trends related to those participants who have adopted ABV.

Figure 4 shows the trends in assertion language and library adoption, comparing the 2012, 2014, and 2016 Wilson Research Group studies, and the projected adoption within the next 12 months. The adoption of SVA continues to increase, while other assertion languages and libraries show no significant change.


Figure 4. Trends in assertion language and library adoption for FPGA designs

In my next blog (click here), I will shift the focus of this series of blogs and start to present the ASIC/IC findings from the 2016 Wilson Research Group Functional Verification Study.



8 August, 2016

This is the first in a series of blogs that presents the findings from our new 2016 Wilson Research Group Functional Verification Study. Similar to my previous 2014 Wilson Research Group functional verification study blogs, I plan to begin this set of blogs with an exclusive focus on FPGA trends. Why? For the following reasons:

  1. Some of the more interesting trends in our 2016 study are related to FPGA designs. The 2016 ASIC/IC functional verification trends are overall fairly flat, which is another indication of a mature market.
  2. Unlike the traditional ASIC/IC market, there have historically been very few published studies on FPGA functional verification trends. We started studying the FPGA market segment back in the 2010 study, and we have now collected sufficient data to confidently present industry trends related to this market segment.
  3. Today’s FPGA designs have grown in complexity, and many now resemble complete systems. The task of verifying SoC-class designs is daunting, and rising complexity has forced many FPGA projects to mature their verification processes. The FPGA-focused data I present in this set of blogs will support this claim.

My plan is to release the ASIC/IC functional verification trends through a set of blogs after I finish presenting the FPGA trends.

Introduction

In 2002 and 2004, Collett International Research, Inc. conducted its well-known ASIC/IC functional verification studies, which provided invaluable insight into the state of the electronic industry and its trends in design and verification at that point in time. However, after the 2004 study, no additional Collett studies were conducted, which left a void in identifying industry trends. To address this dearth of knowledge, five functional verification focused studies were commissioned by Mentor Graphics in 2007, 2010, 2012, 2014, and 2016. These were worldwide, double-blind, functional verification studies, covering all electronic industry market segments. To our knowledge, the 2014 and 2016 studies are two of the largest functional verification studies ever conducted. This set of blogs presents the findings from our 2016 study and provides invaluable insight into the state of the electronic industry today in terms of both design and verification trends.

Study Background

Our study was modeled after the original 2002 and 2004 Collett International Research, Inc. studies. In other words, we endeavored to preserve the original wording of the Collett questions whenever possible to facilitate trend analysis. To ensure anonymity, we commissioned Wilson Research Group to execute our study. The purpose of preserving anonymity was to prevent biasing the participants’ responses. Furthermore, to ensure that our study would be executed as a double-blind study, the compilation and analysis of the results did not take into account the identity of the participants.

For the purpose of our study we used a multiple sampling frame approach that was constructed from eight independent lists that we acquired. This enabled us to cover all regions of the world—as well as cover all relevant electronic industry market segments. It is important to note that we decided not to include our own account team’s customer list in the sampling frame. This was done in a deliberate attempt to prevent biasing the final results. My next blog in this series will discuss other potential bias concerns when conducting a large industry study and describe what we did to address these concerns.

After cleaning the data to remove inconsistent or random responses (e.g., someone who only answered “a” on all questions), the final sample size consisted of 1703 eligible participants (i.e., n=1703). This was approximately 90% the size of our 2014 study (i.e., 2014 n=1886). However, to put this figure in perspective, the famous 2004 Ron Collett International study sample size consisted of 201 eligible participants.

Unlike the 2002 and 2004 Collett IC/ASIC functional verification studies, which focused only on the ASIC/IC market segment, our studies were expanded in 2010 to include the FPGA market segment. We have partitioned the analysis of these two different market segments separately, to provide a clear focus on each. One other difference between our studies and the Collett studies is that our study covered all regions of the world, while the original Collett studies were conducted only in North America (US and Canada). We have the ability to compile the results both globally and regionally, but for the purpose of this set of blogs I am presenting only the globally compiled results.

Confidence Interval

All surveys are subject to sampling errors. To quantify this error in probabilistic terms, we calculate a confidence interval. For example, we determined the “overall” margin of error for our study to be ±2.36% at a 95% confidence interval. In other words, this confidence interval tells us that if we were to take repeated samples of size n=1703 from a population, 95% of the samples would fall inside our margin of error ±2.36%, and only 5% of the samples would fall outside. Obviously, response rate per individual question will impact the margin of error. However, all data presented in this blog has a margin of error of less than ±5%, unless otherwise noted.
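For readers who want the arithmetic, the quoted margin of error follows from the standard formula at a 95% confidence level, assuming the usual worst-case response proportion p = 0.5 (this is the textbook calculation restated here, not the study's internal methodology):

    \text{MOE} = z\,\sqrt{\frac{p\,(1-p)}{n}} = 1.96\,\sqrt{\frac{0.5 \times 0.5}{1703}} \approx 0.0237

which agrees with the ±2.36% figure above to within rounding.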

Study Participants

This section provides background on the makeup of the study.

Figure 1 shows the percentage of overall study FPGA and ASIC/IC participants by market segment. It is important to note that this figure does not represent silicon volume by market segment.


Figure 1: FPGA and ASIC/IC study participants by market segment

Figure 2 shows the percentage of eligible FPGA and ASIC/IC study participants by their job description. An example of an eligible participant would be a self-identified design or verification engineer, or engineering manager, who is actively working within the electronics industry. Overall, design and verification engineers accounted for 60 percent of the study participants.


Figure 2: FPGA and ASIC/IC study participants by job title

Before I start presenting the findings from our 2016 functional verification study, I plan to discuss in my next blog (click here) general bias concerns associated with all survey-based studies—and what we did to minimize these concerns.



16 November, 2015

Thus far we have talked about the importance of having a VIP that is easy to connect to the DUT (in part 1) and having the flexibility to configure the VIP to your requirements and use the built-in or pre-packaged sequences (in part 2). In this final part of the series, we will talk about the various built-in features of a VIP that help with debug.

If you have a UVM-based testbench with one or multiple VIPs, your testbench could be more complex than your DUT, and debugging this environment can be a major challenge. Debugging UVM VIP based environments can be thought of as having three layers:

  1. UVM’s built-in debug mechanisms (a minimal sketch follows this list)
  2. Simulator with class-based debug
  3. VIP with built-in debug features
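As a minimal sketch of that first layer, the following uses only standard UVM hooks and command-line plusargs; it is generic UVM, not anything VIP-specific:

    // Generic UVM built-in debug: print the testbench topology and flag
    // configuration settings that were set but never read.
    class debug_test extends uvm_test;
      `uvm_component_utils(debug_test)
      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction
      task run_phase(uvm_phase phase);
        uvm_root::get().print_topology();  // dump the component hierarchy
        check_config_usage();              // report unused config settings
      endtask
    endclass

    // Standard UVM plusargs that add further visibility at the command line:
    //   +UVM_VERBOSITY=UVM_HIGH    raise message verbosity globally
    //   +UVM_OBJECTION_TRACE       trace objection raise/drop activity
    //   +UVM_CONFIG_DB_TRACE       trace uvm_config_db set/get calls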

These are some of the features that Mentor VIPs provide as built-in debug mechanisms:

VIP Query Commands:

These commands let you query the state of the VIP in the testbench at any given time, in batch or CLI mode, and print a summary of the VIP or its current configuration to the simulation log.


For example, in PCI Express, the VIP can output BAR (Base Address Register) information, to show the addressing set up in the system, as well as the PCIe configuration space, showing the device capabilities.


Error Messaging & Reporting:

Error messaging is very important, as it is the first thing the user checks while debugging. The error messages are encoded to differentiate between methodology errors, protocol errors, and infrastructure errors. The VIP also provides the flexibility to customize the report messages and the logging mechanism.
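As an illustration of that flexibility, here is a minimal sketch using the standard uvm_report_catcher hook; the message ID shown is hypothetical, not an actual VIP message ID:

    // Demote one (hypothetical) protocol warning ID to an info message.
    class demote_proto_warning extends uvm_report_catcher;
      function new(string name = "demote_proto_warning");
        super.new(name);
      endfunction
      function action_e catch();
        if (get_severity() == UVM_WARNING && get_id() == "VIP_PROTO_TIMING")
          set_severity(UVM_INFO);
        return THROW;  // let the (possibly modified) message propagate
      endfunction
    endclass

    // Registered globally, e.g. in a test's build_phase:
    //   demote_proto_warning c = new();
    //   uvm_report_cb::add(null, c);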

Assertions:


While the VIP is running, a built-in set of assertions checks for protocol violations to verify compliance with the specification. When these fire, they result in a message that can be printed to the transcript or piped through the UVM reporting mechanism. The text of the message includes the interface instance name, a description of what went wrong, and a reference back to the specification to help when looking up further details. Each assertion error is configurable: it can be enabled or disabled and have its severity level changed.
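To give a flavor of such checks, here is a hypothetical protocol assertion in SVA, modeled on the AMBA AXI handshake rule that a valid signal must stay asserted until it is accepted (this is illustrative, not Mentor VIP source code):

    // Hypothetical AXI-style handshake check: once ARVALID is asserted it
    // must remain asserted until the cycle after ARREADY is seen.
    module axi_ar_check (input logic aclk, aresetn, arvalid, arready);
      property p_arvalid_held;
        @(posedge aclk) disable iff (!aresetn)
          arvalid && !arready |=> arvalid;
      endproperty
      a_arvalid_held: assert property (p_arvalid_held)
        else $error("%m: ARVALID deasserted before the ARREADY handshake completed");
    endmodule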

Protocol Debug:

Another important aspect of a VIP is helping with protocol debug. Mentor VIP is transaction based, and those transactions are available for creating stimulus as well as for analysis, where they can be used in the testbench for scoreboarding and coverage collection.

Transaction Logging:

Transactions can also be logged to a text file, printed out via the standard UVM print mechanism, or output to a tracker file by a provided analysis component. Following is a sample transaction log, beginning with the header that records the attribute format:
   AXI Clk Cycle = 10.00 ns; AXI Clk Frequency = 100.00 MHz; Data bus width = 32 bits


Here, per-transaction data is printed in the log file, including whether the transaction is a read or a write, the transaction ID, the starting address, the accept time of the address phase, the data of each beat, the write strobes, the data-burst accept time, the transaction response, the response-phase accept time, and the length, burst type, and burst size of the transaction.
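An analysis component of the kind mentioned above could be sketched as follows, assuming a hypothetical axi_item sequence item that implements convert2string() (this is not the actual VIP component):

    // Subscribe to the monitor's analysis port and write one line per
    // transaction to a tracker file.
    class axi_tracker extends uvm_subscriber #(axi_item);
      `uvm_component_utils(axi_tracker)
      int fd;
      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction
      function void start_of_simulation_phase(uvm_phase phase);
        fd = $fopen("axi_tracker.log", "w");
      endfunction
      function void write(axi_item t);
        $fdisplay(fd, "%0t  %s", $time, t.convert2string());
      endfunction
    endclass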

The VIP can also output text-based log or tracker files. This can typically be done at both the protocol level and the symbol level, for protocols such as PCI Express, to help debug link training or other state machines; a symbol-level tracker can show, for example, an OS PLP transmission, a TLP transmission, and a DLLP transmission on the bus.


Transaction Linking:


Just logging transactions isn’t sufficient when debugging a cache coherent interconnect (CCI). An originating master request results in snoops to other cached masters and a slave access as needed. When debugging system-level stimulus, it becomes difficult to identify which snoop transactions are related to a specific originating request. A Cache Coherency Interconnect Monitor (CCIM) helps overcome this debugging issue by providing a transaction-linking component that connects to all the interfaces around a CCI. The CCIM provides a top-level parent sequence item that links to all related child sequence items, such as the originating request, the snoops to cached masters, and the slave access.
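A data structure for this kind of linking might be sketched as follows (these class and field names are hypothetical, not the actual CCIM types):

    // A parent item that ties together all the pieces of one coherent access.
    class cci_linked_txn extends uvm_sequence_item;
      `uvm_object_utils(cci_linked_txn)
      uvm_sequence_item originating_request;  // master request that started the flow
      uvm_sequence_item snoops[$];            // snoops issued to the other cached masters
      uvm_sequence_item slave_access;         // resulting slave transaction, if any
      function new(string name = "cci_linked_txn");
        super.new(name);
      endfunction
    endclass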

Protocol Stack Debug:


Along with the transactions, the VIP also records relationship information: relations to other transactions through the protocol stack, and also to the signals themselves. This allows you to move quickly from transaction-level to signal-level debug, highlighting not just which signals participated in any given transaction, but also the time at which they participated.

I hope this series has provided you with a few insights into what makes a VIP easy to instantiate, connect, configure, and use to start driving stimulus. I would really like to hear about your VIP usage experiences.


22 August, 2015

Impact of Design Size on First Silicon Success

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I presented verification results in terms of schedules, number of required spins, and classification of functional bugs. In this blog, I conclude the series on the 2014 Wilson Research Group Functional Verification Study by providing a deeper analysis of respins by design size.

It’s generally assumed that the larger the design, the greater the likelihood of bugs. Yet a question worth answering is how effective projects are at finding these bugs prior to tapeout.

In Figure 1, we first extract the 2014 data from the required-number-of-spins trends presented in my previous blog (click here), and then partition this data into sets based on design size (that is, designs less than 5 million gates, designs between 5 and 80 million gates, and designs greater than 80 million gates). This led to perhaps one of the most startling findings from our 2014 study: the data suggest that the smaller the design, the lower the likelihood of achieving first silicon success! While 34 percent of the designs over 80 million gates achieve first silicon success, only 27 percent of the designs less than 5 million gates are able to do so. The difference is statistically significant.


Figure 1. Number of spins by design size

To understand what factors might be contributing to this phenomenon, we decided to apply the same partitioning technique while examining verification technology adoption trends.

Figure 2 shows the adoption trends for various verification techniques from 2007 through 2014, which include code coverage, assertions, functional coverage, and constrained-random simulation.

One observation we can make from these adoption trends is that the electronic design industry is maturing its verification processes. This maturity is likely due to the need to address the challenge of verifying designs with growing complexity.


Figure 2. Verification Technology Adoption Trends

In Figure 3 we extract the 2014 data from the various verification technology adoption trends presented in Figure 2, and then partition this data into sets based on design size (that is, designs less than 5 million gates, designs between 5 and 80 million gates, and designs greater than 80 million gates).


Figure 3. Verification Technology Adoption by Design Size

Across the board we see that designs less than 5 million gates are less likely to adopt code coverage, assertions, functional coverage, and constrained-random simulation. Hence, if you correlate this data with the number of spins by design size (as shown in Figure 1), then the data suggest that the verification maturity of an organization has a significant influence on its ability to achieve first silicon success.

As a side note, you might have noticed that there is less adoption of constrained-random simulation for designs greater than 80 million gates. There are a few factors contributing to this behavior: (1) constrained-random works well at the IP and subsystem level, but does not scale to the full-chip level for large designs; (2) a number of projects working on large designs focus predominantly on integrating existing or purchased IP. These types of projects spend more of their verification effort on integration and system validation tasks, where constrained-random simulation is rarely applied.

So, to conclude this blog series: in general, the industry is maturing its verification processes, as witnessed by the verification technology adoption trends. However, we found that smaller designs were less likely to adopt what are generally viewed as industry best verification practices and techniques. Similarly, we found that projects working on smaller designs tend to have a smaller ratio of peak verification engineers to peak designers. Could fewer available verification resources, combined with the lack of adoption of more advanced verification techniques, account for fewer small designs achieving first silicon success? The data suggest that this might be one contributing factor. It’s certainly something worth considering.



27 July, 2015

ASIC/IC Language and Library Adoption Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I presented our study findings on various verification technology adoption trends. In this blog, I focus on language and library adoption trends.

As previously noted, the reason some of the results sum to more than 100 percent is that some projects are using multiple languages; thus, individual projects can have multiple answers.

Figure 1 shows the adoption trends for languages used to create RTL designs. Essentially, the adoption rates for all languages used to create RTL designs are projected to be either declining or flat over the next year, with the exception of SystemVerilog.


Figure 1. ASIC/IC Languages Used for RTL Design

Figure 2 shows the adoption trends for languages used to create ASIC/IC testbenches. Essentially, the adoption rates for all languages used to create testbenches are either declining or flat, with the exception of SystemVerilog. Nonetheless, the data suggest that SystemVerilog adoption is starting to saturate or level off at about 75 percent.


Figure 2. ASIC/IC Languages Used for Verification (Testbenches)

Figure 3 shows the adoption trends for various ASIC/IC testbench methodologies built using class libraries.


Figure 3. ASIC/IC Methodologies and Testbench Base-Class Libraries

Here we see a decline in adoption of all methodologies and class libraries with the exception of Accellera’s UVM, whose adoption increased by 56 percent between 2012 and 2014. Furthermore, our study revealed that UVM is projected to grow an additional 13 percent within the next year.

Figure 4 shows the ASIC/IC industry adoption trends for various assertion languages, and again, SystemVerilog Assertions seems to have saturated or leveled off.


Figure 4. ASIC/IC Assertion Language Adoption

In my next blog (click here) I plan to present the ASIC/IC design and verification power trends.



3 June, 2015

FPGA Language and Library Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I focused on FPGA verification techniques and technologies adoption trends, as identified by the 2014 Wilson Research Group study. In this blog, I’ll present FPGA design and verification language trends, as identified by the Wilson Research Group study.

You might note that some of the language and library percentages I present sum to more than one hundred percent. The reason is that many FPGA projects today use multiple languages.

FPGA RTL Design Language Adoption Trends

Let’s begin by examining the languages used for FPGA RTL design. Figure 1 shows the trends in terms of languages used for design, by comparing the 2012 Wilson Research Group study (in dark blue), the 2014 Wilson Research Group study (in light blue), as well as the projected design language adoption trends within the next twelve months (in purple). Note that the language adoption is declining for most of the languages used for FPGA design with the exception of Verilog and SystemVerilog.

Also, it’s important to note that this study focused on languages used for RTL design. We have conducted a few informal studies related to languages used for architectural modeling—and it’s not too big of a surprise that we see increased adoption of C/C++ and SystemC in that space. However, since those studies have (thus far) been informal and not as rigorously executed as the Wilson Research Group study, I have decided to withhold that data until a more formal study can be executed related to architectural modeling and virtual prototyping.

Figure 1. Trends in languages used for FPGA design

It’s not too big of a surprise that VHDL is the predominant language used for FPGA RTL design, although the projected trend is that Verilog will likely overtake VHDL as the predominant FPGA design language in the near future.

FPGA Verification Language Adoption Trends

Next, let’s look at the languages used to verify FPGA designs (that is, languages used to create simulation testbenches). Figure 2 shows the trends in terms of languages used to create simulation testbenches by comparing the 2012 Wilson Research Group study (in dark blue), the 2014 Wilson Research Group study (in light blue), as well as the projected verification language adoption trends within the next twelve months (in purple).

Figure 2. Trends in languages used in verification to create FPGA simulation testbenches

FPGA Testbench Methodology Class Library Adoption Trends

Now let’s look at testbench methodology and class library adoption for FPGA designs. Figure 3 shows the trends in terms of methodology and class library adoption by comparing the 2012 Wilson Research Group study (in dark blue), the 2014 Wilson Research Group study (in light blue), as well as the projected verification language adoption trends within the next twelve months (in purple).

Figure 3. FPGA methodology and class library adoption trends

Today, we see a downward trend in terms of adoption of all testbench methodologies and class libraries with the exception of UVM, which has increased by 28 percent since 2012. The study participants were also asked what they plan to use within the next 12 months, and based on the responses, UVM is projected to increase an additional 20 percent.

FPGA Assertion Language and Library Adoption Trends

Finally, let’s examine assertion language and library adoption for FPGA designs. The 2014 Wilson Research Group study found that 44 percent of all the FPGA projects have adopted assertion-based verification (ABV) as part of their verification strategy. The data presented in this section shows the assertion language and library adoption trends related to those participants who have adopted ABV.

Figure 4 shows the trends in terms of assertion language and library adoption by comparing the 2010 Wilson Research Group study (in dark blue), the 2012 Wilson Research Group study (in green), and the projected adoption trends within the next 12 months (in purple). The adoption of SVA continues to increase, while other assertion languages and libraries either remain flat or decline.

Figure 4. Trends in assertion language and library adoption for FPGA designs

In my next blog (click here), I will shift the focus of this series of blogs and start to present the ASIC/IC findings from the 2014 Wilson Research Group Functional Verification Study.



26 February, 2015

A colleague recently asked me: Has anything changed? Do design teams tape out nowadays without GLS (Gate-Level Simulation)? And if so, does their silicon actually work?

In his day (and mine), teams prepared in three phases: first, hierarchical gate-level netlist simulation to weed out X-propagation issues; then full chip-level gate simulation (unit delay) to come out of reset, exercise all I/Os, and weed out any other X’s; and finally a run with SDF back-annotation on the clocktree-inserted final netlist.
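The X-weeding passes in those first two phases are often backed by simple checkers; a minimal sketch might look like the following, where the hierarchical path to the DUT output is hypothetical:

    // Flag any X/Z that reaches a key output bus once reset has deasserted.
    module x_monitor (input logic clk, rst_n);
      always @(posedge clk) begin
        if (rst_n && $isunknown(tb_top.dut.data_out))
          $error("X detected on dut.data_out at %0t after reset release", $time);
      end
    endmodule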


After much discussion about the actual value of GLS, and the desirability of eliminating the pain of having to do it from our design flows, my firm conclusion is this:

Yes! Gate-level simulation is still required, at subsystem level and full chip level.

Its usage has been minimized over the years: first by adding LEC (Logical Equivalence Checking) and STA (Static Timing Analysis) to the RTL-to-GDSII design flow in the 90s, and second, in the last decade, by employing static analysis of common failure modes that were traditionally caught during GLS (X-prop, clock-domain-crossing errors, power management errors, ATPG and BIST functionality), using tools like Questa® AutoCheck.

So there should not be any setup/hold or CDC issues remaining by this stage.  However, there are a number of reasons why I would always retain GLS:

  1. Financial prudence.  You tape out to foundry at your own risk, and GLS is the closest representation you can get to the printed design that you can do final due diligence on before you write that check.  Are you willing to risk millions by not doing GLS?
  2. It is the last resort to find any packaging issues that may be masked by use of inaccurate behavioral models higher up the flow, or erroneous STA due to bad false path or multi-cycle path definitions.  Also, simple packaging errors due to inverted enable signals can remain undetected by bad models.
  3. It ensures the actual bring-up sequence of your first silicon will work when it hits the production tester after fabrication. Teams have found bugs that would have caused the tester sequence of first power-up, scan-test, and then blowing configuration and security fuses to completely brick the device, had they not run a final, accurate bring-up test with all Design-For-Verification modes turned off.
  4. In block-level verification, maybe you are doing a datapath compilation flow for your DSP core which flips pipeline stages around, so normal LEC tools are challenged.  How can you be sure?
  5. The final stages of processing can cause unexpected transformations of your design that may or may not be caught by LEC and STA, e.g. during scan chain insertion, clocktree insertion, or power island retention/translation cell insertion. You should not have any new setup/hold problems if the extraction and STA do their job, but what if there are gross errors affecting clock enables, or tool errors, or data processing errors? First silicon with stuck clocks is no fun. Again, why take the risk? Just one simulation of the bare metal design, coming up from power-on, wiggling all pads at least once and exercising all test modes at least once, is all that is required.
  6. When you have design deltas done at the physical netlist level: e.g. last minute ECOs (Engineering Change Orders), metal layer fixes, spare gate hookup, you can’t go back to an RTL representation to validate those.  Gates are all you have.
  7. You may need to simulate the production test vectors and burn-in test vectors for your first silicon, across process corners.  Your foundry may insist on this.
  8. Finally, you need to sleep at night while your chip is in the fab!

There are still misconceptions:

  • There is no need to repeat lots of RTL regression tests at the gate level. Don’t do that. It takes an age to run those tests, so identify the tiny percentage of your regression suite that needs to rerun on GLS, and make it count.
  • Don’t wait until tapeout week before doing GLS. Prepare for it very early in your flow by doing the three preparation steps mentioned above as soon as practical, so that all X-pessimism issues are sorted out well before crunch time.
  • The biggest misconception of all: “designs today are too big to simulate.” Avoid that kind of scaremongering. Buy a faster computer with more memory. Spend the right amount of money to offset the risk you are about to undertake when you print a 20nm mask set.

Yes, it is possible to tape out silicon that works without GLS.  But no, you should not consider taking that risk.  And no, there is no justification for viewing GLS as “old school” and just hoping it will go away.

Now, the above is just one opinion, and reflects recent design/verification work I have done with major semiconductor companies. I anticipate that large designs will be harder and harder to simulate, and that we may need to find solutions for gate-level signoff using an emulator. I have also found some interesting recent papers, resources, and opinions; I don’t necessarily agree with all the content, but it makes for interesting reading.

I’d be interested to know what your company does differently nowadays.  Do you sleep at night?

If you are attending DVCon next week, check out some of Mentor’s many presentations and events as described by Harry Foster, and please come and find me in the Mentor Graphics booth (801); I would be happy to hear about your challenges in Design/Verification/UVM, and especially Debug.
Thanks for reading,
Gordon


12 August, 2013

Language and Library Trends (Continued)

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 8 click here), I focused on design and verification language trends, as identified by the Wilson Research Group study. This blog presents additional trends related to verification language and library adoption trends.

You might note that for some of the language and library data I present, the percentage sums to more than 100 percent. The reason for this is that some participants’ projects use multiple languages or multiple testbench methodologies.

Testbench Methodology Class Library Adoption

Now let’s look at testbench methodology and class library adoption for IC/ASIC designs. Figure 1 shows the trends in terms of methodology and class library adoption by comparing the 2010 Wilson Research Group study (in blue) with the 2012 study (in green). Today, we see a downward trend in terms of adoption of all testbench methodologies and class libraries with the exception of UVM, which has increased by 486 percent since the fall of 2010. The study participants were also asked what they plan to use within the next 12 months, and based on the responses, UVM is projected to increase an additional 46 percent.

Figure 1. Methodology and class library trends

Figure 2 shows the adoption of testbench methodologies and class libraries for FPGA designs (in red). We do not have sufficient data to show prior adoption trends in the FPGA space, but we anticipate that our future studies will enable us to do this. However, we did ask the FPGA study participants which testbench methodologies and class libraries they were planning to adopt within the next 12 months. Based on these responses, we anticipate that UVM adoption will increase by 40 percent, and OVM adoption by 24 percent, in the FPGA space.

Figure 2. Methodology and class library trends

Assertion Languages and Libraries

Finally, let’s examine assertion language and library adoption for IC/ASIC designs. The Wilson Research Group study found that 63 percent of all the IC/ASIC participants have adopted assertion-based verification (ABV) as part of their verification strategy. The data presented in this section shows the assertion language and library adoption trend related to those participants who have adopted ABV.

Figure 3 shows the trends in terms of assertion language and library adoption by comparing the 2010 Wilson Research Group study (in blue), the 2012 Wilson Research Group study (in green), and the projected adoption trends within the next 12 months (in purple). The adoption of SVA continues to increase, while other assertion languages and libraries either remain flat or decline.

Figure 3. Assertion language and library adoption for Non-FPGA designs

Figure 4 shows the adoption of assertion language trends for FPGA designs (in red). Again, we do not have sufficient data to show prior adoption trends in the FPGA space, but we anticipate that our future studies will enable us to do this. We did ask the FPGA study participants which assertion languages and libraries they planned to adopt within the next 12 months. Based on these responses, we anticipate an increase in adoption for OVL, SVA, and PSL in the FPGA space within the next 12 months.

Figure 4. Trends in assertion language and library adoption for FPGA designs

In my next blog (click here), I plan to focus on the adoption of various verification technologies and techniques used in the industry, as identified by the 2012 Wilson Research Group study.


26 July, 2013

You don’t need a block diagram to know that multi-core SoC designs are here to stay. A representative example is ARM’s AMBA 4 ACE architecture, which is particularly effective for mobile design applications, offering an optimized mix of high-performance processing and low power consumption. But with software’s increasing role in overall design functionality, verification engineers are now tasked with verifying not just proper HW functionality, but proper HW functionality under control of application SW. So how do you verify HW/SW interactions during system-level verification?

For most verification teams, the current alternatives are like choosing between a walk through the desert and drinking from a fire hose. In the desert, you can manually write test programs in C, compile them and load them into system memory, and then initialize the embedded processors and execute the programs. Seems straightforward, but now try it for multiple embedded cores, and make sure you confirm your power-up sequence and optimal low-power management (remember, we’re testing a mobile-market design), correct memory mapping, peripheral connectivity, mode selection, and basically anything that your design is intended to do before its battery runs out. You can get lost pretty quickly. Eventually you remember that you weren’t hired to write multi-threaded software programs, but that there’s an entire staff of software developers down the hall who were. So you boot your design’s operating system, load the SW drivers, run the design’s target application programs, and fully verify that all’s well between the HW and the SW at the system level.

But here comes the fire hose.  By this time, you’ve moved from your RTL simulator to an emulator, because just simulating Linux booting up takes weeks to months.  But what happens when your emulator runs into a system level failure after billions of clock cycles and several days of emulation?  There’s no way to avoid full HW/SW verification at the system level, but wouldn’t it be nice to find most of the HW/SW interaction bugs earlier in the process, when they’re easier to debug?

There’s an easier way to bridge the gap between the desert and the fire hose. It’s called intelligent Software Driven Verification (iSDV). iSDV automates the generation of embedded C test programs for multi-core processor execution. These tests generate thousands of high-value processor instructions that verify HW/SW interactions. Bugs discovered take much less time to debug, and the embedded C test programs can run in both simulation and emulation environments, easing the transition from one to the other. Check out the on-line web seminar at the link below to learn about using intelligent Software Driven Verification as a way to uncover the majority of your system-level design bugs after RTL-level simulation, but before full system-level emulation.

http://www.mentor.com/products/fv/multimedia/automating-software-driven-hardware-verification-with-questa-infact

