Posts Tagged ‘functional coverage’

17 November, 2014

Few verification tasks are more challenging than trying to achieve code coverage goals for a complex system that, by design, has numerous layers of configuration options and modes of operation.  When the verification effort gets underway and the coverage holes start appearing, even the most creative and thorough UVM testbench architect can get bogged down devising new tests, whether constrained-random or highly directed, to reach the uncovered areas.

At the recent ARM® TechCon, Nguyen Le, a Principal Design Verification Engineer in the Interactive Entertainment Business Unit at Microsoft Corp., documented a real-world case study on this exact situation.  Specifically, in the paper titled “Advanced Verification Management and Coverage Closure Techniques”, Nguyen outlined his initial pain in verification management and coverage closure, and how he conquered both challenges: speeding up his regression run time by 3x while simultaneously moving the overall coverage needle up to 97%, and saving 4 man-months in the process!  Here are the highlights:

* DUT in question
— SoC with multi-million gate internal IP blocks
— Consumer electronics end-market = very high volume production = very high cost of failure!

* Verification flow
— Constrained-random, coverage-driven approach using UVM, with IP block-level testbenches as well as an SoC-level testbench
— Rigorous testplan requirements tracking, supported by a variety of coverage metrics including functional coverage with SystemVerilog covergroups, assertion coverage with SVA cover directives, and code coverage on statements, branches, expressions, conditions, and FSMs (a minimal sketch of the first two constructs appears below)
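
For readers less familiar with these metrics, here is a minimal, hypothetical sketch (not taken from the paper) of the first two constructs: a SystemVerilog covergroup and an SVA cover directive.

  // Hypothetical example: functional coverage on a configuration mode,
  // plus assertion coverage on a request/grant handshake.
  module coverage_sketch (input logic clk, rst_n,
                          input logic [1:0] mode,
                          input logic req, grant);

    covergroup mode_cg @(posedge clk);
      coverpoint mode {
        bins normal       = {2'b00};
        bins low_power    = {2'b01};
        bins test_modes[] = {[2'b10:2'b11]};
      }
    endgroup

    mode_cg cg = new();

    // How often is a request granted on the very next cycle?
    c_req_grant: cover property (@(posedge clk) disable iff (!rst_n)
                                 req ##1 grant);
  endmodule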

* Sign-off requirements
— All test requirements tracked through to completion
— 100% functional, assertion and code coverage

* Pain points
— Code coverage: code coverage holes can come from a variety of expected and unforeseen sources: dead code can come from unused functions in reused IP blocks, from specific configuration settings, or from a bug in the code (a small sketch of configuration-dependent dead code follows this list).  Given the rapid pace of the customer’s development cycle, it’s all too easy for dead code to slip into the DUT due to frequent changes in the RTL or different interpretations of the spec.  “Unexplainably dead” code coverage areas were manually inspected, and the exclusions for properly unreachable code were manually addressed with the addition of pragmas.  Both procedures were time consuming and error prone.
— Verification management: the verification cycle and the generated data were managed through manually-maintained scripting.  Optimizing the results display, throughput, and tool control became a growing maintenance burden.
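
To make the dead-code pain point concrete, here is the small hypothetical sketch promised above (not from the paper): when the parameter disables the feature, the parity-calculation branch can never execute, so it shows up as a code coverage hole that formal analysis can prove unreachable.

  // Hypothetical example of configuration-dependent dead code.
  module parity_unit #(parameter bit ENABLE_PARITY = 0)
                      (input  logic       clk,
                       input  logic       valid,
                       input  logic [7:0] data,
                       output logic       parity);
    always_ff @(posedge clk) begin
      if (ENABLE_PARITY) begin
        if (valid)
          parity <= ^data;   // unreachable when ENABLE_PARITY == 0
      end
      else begin
        parity <= 1'b0;
      end
    end
  endmodule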

* New automation
— Questa Verification Manager: built around the Unified Coverage Database (UCDB) standard, the tool supports a dynamic verification plan cross-linked with the functional coverage points and code coverage of the DUT.  In this way the dispersed project teams now had a unified view which told them at a glance which tests were contributing the most value, and which areas of the DUT needed more attention.  In parallel, the included administrative features enabled efficient control of large regressions, merging of results, and quick triage of failures.

— Questa CoverCheck: this tool reads code coverage results from simulation in UCDB, and then leverages formal technology under the hood to mathematically prove that no stimulus could ever activate the code in question. If it’s OK for a given block of code to be dead due to a particular configuration choice, etc., the user can automatically generate waivers to refine the code coverage results.  Additionally, the tool can identify segments of code that, though difficult to reach, might someday be exercised in silicon. In such cases, CoverCheck helps point the way to testbench enhancements to better reach these parts of the design.

— The above tools used in concert (along with Questasim) enabled a very straightforward coverage score improvement process as follows:
1 – Run full regression and merge the UCDB files
2 – Run Questa CoverCheck with the master UCDB created in (1)
3 – Use CoverCheck to generate exclusions for “legitimate” unreachable holes, and apply said exclusions to the UCDB
4 – Use CoverCheck to generate waveforms for reachable holes, and share these with the testbench developer(s) to refine the stimulus
5 – Report the new & improved coverage results in Verification Manager

* Results
— Automation with Verification Manager enabled Microsoft to reduce the variation of test sequences from 10x runtime down to a focused 2x variation.  Additionally, using the coverage reporting to rank and optimize their tests, they increased their regression throughput by 3x!
— With CoverCheck, the Microsoft engineers improved code coverage by 10 – 15% in most hand-coded RTL blocks, saw up to 20% coverage improvement for auto-generated RTL code, and in a matter of hours were able to increase their overall coverage number from 87% to 97%!
— Bottom-line: the customer estimated that they saved 4 man-months on one project with this process

2014 Microsoft presentation at ARM TechCon: CoverCheck ROI

Taking a step back, success stories like this one, where automated, formal-based applications that require no prior knowledge of formal or assertion-based verification leverage the exhaustive nature of formal analysis to tame once-intractable problems, are becoming more common by the day.  In this case, Mentor’s formal-based CoverCheck is clearly the right tool for this specific verification need, literally filling in the gaps in a traditional UVM testbench verification flow.  Hence, I believe the overall moral of the story is a simple rule of thumb: when you are grappling with a “last mile problem” of unearthing all the unexpected, yet potentially damaging corner cases, consider a formal-based application as the best tool for the job.  Wouldn’t you agree?

Joe Hupcey III

Reference links:

Direct link to the presentation slides: https://verificationacademy.com/Advanced-Verification-Management-Presentation

ARM Techcon 2014 Proceedings: http://www.armtechcon.com/

Official paper citation:
Advanced Verification Management and Coverage Closure Techniques, Nguyen Le, Microsoft; Harsh Patel, Roger Sabbagh, Darron May, Josef Derner, Mentor Graphics


30 October, 2013

MENTOR GRAPHICS AT ARM TECHCON

This week ARM® TechCon® 2013 is being held at the Santa Clara Convention Center from Tuesday October 29 through Thursday October 31st, but don’t worry, there’s nothing to be scared about.  The theme is “Where Intelligence Counts”, and in fact as a platinum sponsor of the event, Mentor Graphics is excited to present no less than ten technical and training sessions about using intelligent technology to design and verify ARM-based designs.

My personal favorite is scheduled for Halloween Day at 1:30pm, where I’ll tell you about a trick that Altera used to shave several months off their schedule, while verifying the functionality and performance of an ARM AXI™ fabric interconnect subsystem.  And the real treat is that they achieved first silicon success as well.  In keeping with the event’s theme, they used something called “intelligent” testbench automation.

And whether you’re designing multi-core designs with AXI fabrics, wireless designs with AMBA® 4 ACE™ extensions, or even enterprise computing systems with ARM’s latest AMBA® 5 CHI™ architecture, these sessions show you how to take advantage of the very latest simulation and formal technology to verify SoC connectivity, ensure correct interconnect functional operation, and even analyze on-chip network performance.

On Tuesday at 10:30am, Gordon Allan described how an intelligent performance analysis solution can leverage the power of an SQL database to analyze and verify interconnect performance in ways that traditional verification techniques cannot.  He showed a wide range of dynamic visual representations produced from SoC regressions that engineers can quickly and easily manipulate to verify performance and avoid expensive overdesign.

Right after Gordon’s session, Ping Yeung discussed using intelligent formal verification to automate SoC connectivity, overcoming observability and controllability challenges faced by simulation-only solutions.  Formal verification can examine all possible scenarios exhaustively, verifying on-chip bus connectivity, pin multiplexing of constrained interfaces, connectivity of clock and reset signals, as well as power control and scan test signal connectivity.

On Wednesday, Mark Peryer shows how to verify AMBA interconnect performance using intelligent database analysis and intelligent testbench automation for traffic scenario generation.  These techniques enable automatic testbench instrumentation for configurable ARM-based interconnect subsystems, as well as highly-efficient dense, medium, sparse, and varied bus traffic generation that covers even the most difficult to achieve corner-case conditions.

Finally, also on Halloween, Andy Meyer offers an intelligent workshop for those who are designing high performance systems with hierarchical and distributed caches, using either ARM’s AMBA 5 CHI architecture or ARM’s AMBA 4 ACE architecture.  He’ll cover topics including how caching works, how to improve caching performance, and how to verify cache coherency.

For more information about these sessions, be sure to visit the ARM TechCon program website.  Or if you miss any of them, and would like to learn about how this intelligent technology can help you verify your ARM designs, don’t be afraid to email me at mark_olen@mentor.com.   Happy Halloween!


19 August, 2013

Verification Techniques & Technologies Adoption Trends

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for background on the study, click here).

In my previous blog (Part 9 click here), I focused on some of the 2012 Wilson Research Group findings related to design and verification language and library trends. In this blog, I present verification techniques and technologies adoption trends, as identified by the 2012 Wilson Research Group study.

An interesting trend we are starting to see is that the electronic industry is maturing its functional verification processes, whether designs are targeted at IC/ASIC or FPGA implementations. This blog provides data to support this claim. An interesting question you might ask is, “What is driving this trend?” In some of my earlier blogs (click here for Part 1 and Part 2) I showed that design complexity is increasing in terms of design sizes and the number of embedded processors. In addition, I’ve presented trend data that showed an increase in total project time and effort spent in verification (click here for Part 5 and Part 6). My belief is that the industry is being forced to mature its functional verification processes to address this increasing complexity and effort.

Simulation Techniques Adoption Trends

Let’s begin by comparing  non-FPGA adoption trends related to various simulation techniques from the 2007 Far West Research study  (in blue) with the 2012 Wilson Research Group study  (in green), as shown in Figure 1.

Figure 1. Simulation-based technique adoption trends for non-FPGA designs

You can see that the study finds the industry increasing its adoption of various functional verification techniques for non-FPGA targeted designs. Clearly the industry is maturing its processes as I previously claimed.

For example, in 2007, the Far West Research Group found that only 48 percent of the industry performed code coverage. This surprised me. After all, HDL-based code coverage is a technology that has been around since the early 1990’s. However, I did informally verify the 2007 results through numerous customer visits and discussions. In 2012, we see that the industry adoption of code coverage has increased to 70 percent.

In 2007, the Far West Research Group study found that 37 percent of the industry had adopted assertions for use in simulation. In 2012, we find that industry adoption of assertions had increased to 63 percent. I believe that the maturing of the various assertion language standards has contributed to this increased adoption.

In 2007, the Far West Research Group study found that 40 percent of the industry had adopted functional coverage for use in simulation. In 2012, industry adoption of functional coverage had increased to 66 percent. Part of this increase in functional coverage adoption has been driven by the increased adoption of constrained-random simulation, since you really can’t effectively do constrained-random simulation without doing functional coverage.

Now let’s compare FPGA adoption trends related to various simulation techniques from the 2010 Wilson Research Group study (in pink) with the 2012 Wilson Research Group study (in red), as shown in Figure 2.

Figure 2. Simulation-based technique adoption trends for FPGA designs

Again, you can clearly see that the industry is increasing its adoption of various functional verification techniques for FPGA targeted designs. This past year I have spent a significant amount of time in discussions with FPGA project managers around the world. During these discussions, most managers mention the drive to improve the verification process within their projects due to the rising complexity of this class of designs. The Wilson Research Group data supports these claims.

In fact, Figure 3 illustrates this maturing trend in the FPGA space, where we saw a 15 percent increase in the adoption of RTL simulation and an 8.5 percent increase in the adoption of code coverage. For complex FPGA designs, the traditional approach of “burn and churn” and debug in the lab is no longer a viable option. Nonetheless, it is still somewhat alarming that 31 percent of the FPGA study participants work on projects that perform no RTL simulation.

Figure 3. FPGA projects maturing their verification processes

Signoff Criteria Trends

We saw earlier in this blog the increased adoption of coverage techniques in the industry. Coverage has become a major component of a project’s verification signoff criteria. In Figure 4, we see how coverage has increased in importance in verification signoff criteria within the past five years, while other decision attributes have declined in terms of importance.

Figure 4. Non-FPGA functional verification signoff criteria trends

We see the same trends for FPGA designs, as shown in Figure 5.

Figure 5. FPGA functional verification signoff criteria trends

In my next blog (click here), I plan to continue the discussion related to adoption of various verification technologies and techniques as identified by the 2012 Wilson Research Group study.


5 August, 2013

Language and Library Trends

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 7 click here), I focused on some of the 2012 Wilson Research Group findings related to testbench characteristics and simulation strategies. In this blog, I present design and verification language trends, as identified by the Wilson Research Group study.

You might note that for some of the language and library data I present, the percentage sums to more than one hundred percent. The reason for this is that some participants’ projects use multiple languages.

RTL Design Languages

Let’s begin by examining the languages used for RTL design. Figure 1 shows the trends in terms of languages used for design, by comparing the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), the 2012 Wilson Research Group study (in green), as well as the projected design language adoption trends within the next twelve months (in purple) as identified by the study participants. Note that the design language adoption is declining for most of the languages with the exception of SystemVerilog whose adoption continues to increase.

Also, it’s important to note that this study focused on languages used for RTL design. We have conducted a few informal studies related to languages used for architectural modeling—and it’s not too big of a surprise that we see increased adoption of C/C++ and SystemC in that space. However, since those studies have (thus far) been informal and not as rigorously executed as the Wilson Research Group study, I have decided to withhold that data until a more formal blind study can be executed related to architectural modeling and virtual prototyping.

Figure 1. Trends in languages used for Non-FPGA design

Let’s now look at the languages used specifically for FPGA RTL design. Figure 2 shows the trends in terms of languages used for FPGA design, by comparing the 2012 Wilson Research Group study (in red) with the projected design language adoption trends within the next twelve months (in purple).

Figure 2. Trends in languages used for FPGA design

It’s not too big of a surprise that VHDL is the predominant language used for FPGA RTL design, although we are starting to see increased interest in SystemVerilog.

Verification Languages

Next, let’s look at the languages used to verify Non-FPGA designs (that is, languages used to create simulation testbenches). Figure 3 shows the trends in terms of languages used to create simulation testbenches by comparing the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green).

Figure 3. Trends in languages used in verification to create Non-FPGA simulation testbenches

The study revealed that verification language adoption is declining for most of the languages with the exception of SystemVerilog whose adoption is increasing. In fact, SystemVerilog adoption increased by 8.3 percent between 2010 and 2012.

Figure 4 provides a different analysis of the data by partitioning the projects by design size, and then calculating the adoption of SystemVerilog for creating testbenches by size. The design size partitions are represented as: less than 5M gates, 5M to 20M gates, and greater than 20M gates. Obviously, we find that the larger the design size, the greater the adoption of SystemVerilog for creating testbenches. Yet, probably the most interesting observation we can make from examining Figure 4 is related to smaller designs that are less than 5M gates. Here we see that 58.8 percent of the industry has adopted SystemVerilog for verification. In other words, it is safe to say that SystemVerilog for verification has become mainstream today and not just limited to early adopters or leading-edge design projects.

Figure 4. SystemVerilog (for verification) adoption by design size

Let’s now look at the languages used to verify FPGA designs (that is, to create FPGA simulation testbenches). Figure 5 shows these trends by comparing the 2012 Wilson Research Group study (in red) with the projected verification language adoption trends within the next twelve months (in purple).

Figure 5. Trends in languages used in verification to create FPGA simulation testbenches

In my next blog (click here), I’ll continue the discussion on design and verification language trends as revealed by the 2012 Wilson Research Group Functional Verification Study.


23 April, 2013

This is the first in a series of blogs that presents the results from the 2012 Wilson Research Group Functional Verification Study.

Study Overview

In 2002 and 2004, Ron Collett International, Inc. conducted its well known ASIC/IC functional verification studies, which provided invaluable insight into the state of the electronic industry and its trends in design and verification. However, after the 2004 study, no other industry studies were conducted, which left a void in identifying industry trends.

To address this void, Mentor Graphics commissioned Far West Research to conduct an industry study on functional verification in the fall of 2007. Then in the fall of 2010, Mentor commissioned Wilson Research Group to conduct another functional verification study. Both of these studies were conducted as blind studies to avoid influencing the results. This means that the survey participants did not know that the study was commissioned by Mentor Graphics. In addition, to support trend analysis on the data, both studies followed the same format and questions (when possible) as the original 2002 and 2004 Collett studies.

In the fall of 2012, Mentor Graphics commissioned Wilson Research Group again to conduct a new functional verification study. This study was also a blind study and follows the same format as the Collett, Far West Research, and previous Wilson Research Group studies. The 2012 Wilson Research Group study is one of the largest functional verification studies ever conducted. The overall confidence level of the study was calculated to be 95% with a margin of error of 4.05%.

Unlike the previous Collett and Far West Research studies that were conducted only in North America, both the 2010 and 2012 Wilson Research Group studies were worldwide studies. The regions targeted were:

  • North America: Canada, United States
  • Europe/Israel: Finland, France, Germany, Israel, Italy, Sweden, UK
  • Asia (minus India): China, Korea, Japan, Taiwan
  • India

The survey results are compiled both globally and regionally for analysis.

Another difference between the Wilson Research Group and previous industry studies is that both of the Wilson Research Group studies also included FPGA projects. Hence for the first time, we are able to present some emerging trends in the FPGA functional verification space.

Figure 1 shows the percentage makeup of survey participants by their job description. The red bars represent the FPGA participants while the green bars represent the non-FPGA (i.e., IC/ASIC) participants.

 

Figure 1: Survey participants’ job title description

Figure 2 shows the percentage makeup of survey participants by company type. Again, the red bars represent the FPGA participants while the green bars represent the non-FPGA (i.e., IC/ASIC) participants.

Figure 2: Survey participants’ company description

In a future set of blogs, over the course of the next few months, I plan to present the highlights from the 2012 Wilson Research Group study along with my analysis, comments, and obviously, opinions. A few interesting observations emerged from the study, which include:

  1. FPGA projects are beginning to adopt advanced verification techniques due to increased design complexity.
  2. The effort spent on verification is increasing.
  3. The industry is converging on common processes driven by maturing industry standards.

A few final comments concerning the 2012 Wilson Research Group study.  As I mentioned, the study was based on the original 2002 and 2004 Collett studies.  To ensure consistency in terms of proper interpretation (or potential error related to misinterpretation of the questions), we have avoided changing or modifying the questions over the years, with the exception of questions that relate to shrinking geometry sizes and gate counts. One other exception relates to introducing a few new questions about verification techniques that were not a major concern ten years ago (such as low-power functional verification).  Ensuring consistency in the line of questioning enables us to have high confidence in the trends that emerge over the years.

Also, the method by which the study pool was created follows the same process as the original Collett studies.  It is important to note that the data presented in this series of blogs does not represent trends related to silicon volume (that is, a few projects could dominate in terms of the volume of manufactured silicon and not represent the broader industry).  The data in this series of blogs represents trends related to the study pool, which is a fair proxy for active design projects.

My next blog presents current design trends that were identified by the survey. This will be followed by a set of blogs focused on the functional verification results.

Also, to learn more about the 2012 Wilson Research Group study, view my pre-recorded Functional Verification Study web seminar, which is located on the Verification Academy website.

Quick links to the 2012 Wilson Research Group Study results (so far…)


7 February, 2013

The latest revision of the IEEE 1800-2012 SystemVerilog Language Reference Manual (LRM) is about to hit the press; though I doubt people will be printing the 1300+ pages on their own from the soon to be readily available online version. Here’s a little background into what’s in all those pages.

The first SystemVerilog LRM came from Accellera in 2002 as a set of extensions to the IEEE 1364-2001 LRM. This first LRM was called version 3.0 because it was considered the third generation of Verilog. Accellera released a few more versions and turned version 3.1a over to the IEEE in 2004. The IEEE released the 1800-2005 SystemVerilog LRM as a set of extensions to the 1364-2005 Verilog LRM, which became the last revision of the 1364 LRM. Four years later, the IEEE combined the SystemVerilog extensions with the Verilog LRM producing a single 1800-2009 SystemVerilog LRM.

Now, a short three years later, the SystemVerilog IEEE 1800-2012 LRM is ready, having addressed 225 issues. The majority of these issues are clarifications and corrections to the existing LRM. However, a few enhancements, ranging from the simple removal of the restriction on non-blocking assignments to class members to the major addition of multiple class interface inheritance, made their way into the new LRM. A number of those enhancements will undoubtedly be presented at the upcoming Design & Verification Conference.

I’d like to demonstrate two enhancements that should be of value to most verification engineers. They address two of the more commonly asked SystemVerilog questions I receive: How do I generate an array of unique values? and How do I create covergroup bins to get toggle or one-hot functional coverage?

Generating unique array of random values

Many verification scenarios require creating sets of random instructions or addresses with no repeating values, usually represented as elements in a dynamic array. Earlier versions of SystemVerilog required you either to use nested foreach loops to constrain every pair of array elements to be unequal, or to repeatedly randomize one element at a time, constraining each new element not to be in the list of already generated values.

The new unique constraint lets you use one statement to constrain a set of variables or array elements to have unique values. When randomized, a class using this constraint can generate a set of ten unique values from 0 to 15.
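
As a minimal sketch of such a class (the class and variable names here are my own, not from the LRM):

  // Ten 4-bit elements constrained to be mutually unique: each call to
  // randomize() yields ten distinct values in the range 0 to 15.
  class unique_array;
    rand bit [3:0] a[10];
    constraint u { unique {a}; }
  endclass

A call to randomize() on an instance of this class then fills the array with ten distinct values.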

You can also add other non-random variables to the set of unique values, which has the effect of excluding the values of those variables from the randomized result. When randomized, such a class generates a set of ten unique values excluding the values 0, 7 and 15.
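
Again as a sketch with assumed names, the non-random array is simply listed alongside the random one inside the unique constraint:

  // The ten random elements must be mutually unique and must also differ
  // from the non-random values 0, 7, and 15.
  class unique_array_exclude;
    rand bit [3:0] a[10];
         bit [3:0] excludes[3] = '{0, 7, 15};  // state variables, not randomized
    constraint u { unique {a, excludes}; }
  endclass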

Complex coverpoint bin expressions

The previous SystemVerilog syntax for specifying functional coverage bins was very limiting. Unless you could explicitly state the individual bin values or range of bin values in your coverpoint definition, or could figure out a way to instantiate multiple copies of your covergroup passing in a different bin value as an argument, you were out of luck. This also made defining coverage crosses extremely difficult.

The new SystemVerilog bin syntax lets you specify a bin expression that is evaluated over the range of possible values of the coverpoint expression. The bin expression acts like a constraint, and the set of coverpoint values where the bin expression is true becomes the set of bins. The coverpoint below generates a set of bin values between 0 and 127 that are divisible by 3. The range is 0 to 127 because sbyte is a 7-bit variable.
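
Here is a minimal sketch of such a coverpoint (the enclosing module and covergroup names are my assumptions; the with bin syntax itself is from IEEE 1800-2012):

  module mod3_cov;
    bit [6:0] sbyte;   // 7-bit variable, so the coverpoint range is 0 to 127

    covergroup sbyte_cg;
      coverpoint sbyte {
        // the bin expression keeps only the values divisible by 3
        bins mod3[] = {[0:127]} with (item % 3 == 0);
      }
    endgroup

    sbyte_cg cg = new();   // sample with cg.sample() wherever appropriate
  endmodule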

Probably the most powerful feature is the coverpoint bin set that simply allows you to define an array of values that you want as bins. This is useful for specifying one-hot encodings, toggle coverage of a register, or any complex algorithm that can generate the set of bin values you want. The code below builds a list of onehot values in the encodings array, and then constructs the protocol_cg covergroup using the array as a set of bin values.
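
A minimal reconstruction of what such code could look like (the encodings array and protocol_cg covergroup follow the names used above; the 8-bit data variable and surrounding module are my assumptions):

  module protocol_cov;
    bit [7:0] data;
    bit [7:0] encodings[$];           // the list of desired bin values

    covergroup protocol_cg;
      coverpoint data {
        bins onehot[] = encodings;    // bin set taken from an array (new in 1800-2012)
      }
    endgroup

    protocol_cg cg;

    initial begin
      // build the list of one-hot values, then construct the covergroup
      for (int i = 0; i < 8; i++)
        encodings.push_back(8'b1 << i);
      cg = new();
    end
  endmodule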

Available in the latest version of Questa

By the way, every feature discussed in this post is available in the latest version of Questa, 10.2.


20 November, 2012

Verification Academy Adds Major New Technical Resource

The Verification Academy adds another major methodology cookbook to focus on effective coverage adoption.  The Coverage Cookbook describes the different types of coverage that are available to track your verification process progress, how to create a functional coverage model from a specification, and provides examples to implement functional coverage for different types of designs.

Verification Academy “full access” members have access to the free Coverage Cookbook and the UVM/OVM Cookbooks as well.  Are you a registered full access member?  If not, register now to become a full access member.  (Restrictions apply.)

Coverage is not a new topic.  It was one of the major additions to the SystemVerilog (IEEE Std. 1800™-2009) standard.  But the SystemVerilog functional coverage extensions were left to the verification engineer to use in such a way as to return meaningful measurements of how much of the design specification was being tested.  The Universal Verification Methodology (UVM) offers greater structure for coverage than SystemVerilog alone, but it, too, is still only a piece of the puzzle.

As verification teams have come to generate greater amounts of information from use of SystemVerilog, UVM and other verification tools, the data from the verification runs needs to be easily used to drive coverage closure.  Within the Mentor Graphics Questa verification platform, this resulted in the development of the Unified Coverage Database (UCDB) and associated verification management and planning features.

Since verification teams use a variety of tools and technology from many sources, it was imperative that verification information could be easily shared and combined to help drive faster coverage closure across the industry.  This is why Mentor Graphics donated its UCDB API to Accellera, where it became the Unified Coverage Interoperability Standard (UCIS).

It would be great to think that we are done; but we’re not.  Tools and data are just two of the three dimensions of any IC design project.  A comprehensive approach to verification management that handles all of this adds the third dimension.  The Mentor Graphics Questa Verification Management features handle all of this.

Now the question is how best to adopt and use all the capabilities at hand, from the standards to the verification technology at your fingertips.

The Verification Academy Coverage Cookbook is one of the important tools you now have to help pull all the information into a single place where you can learn the theory and put that theory into practice.  The Coverage Cookbook is much like the OVM/UVM Cookbooks in that it is web friendly, while supporting the ability for you to generate a PDF file of the whole document in case you want to have a printed copy or have it available for offline reference.

The Theory section covers:

  • What is coverage?
  • Kinds of coverage
  • Code Coverage
  • Functional Coverage
  • Specification to coverage
  • Coding for analysis

The Practice section shows three examples you can use today:

  • Bus protocol coverage using ARM® APB3
  • Block level coverage using UART
  • Datapath coverage using BiQuad IIR Filter

The Coverage Cookbook is a live document. You can expect continued extensions and contributions to enhance it.  As Harry Foster, Mentor Graphics’ Chief Scientist Verification put it, “Methodology is the bridge between tools and technologies, which creates a productive, predictable, and repeatable solution.”  We should expect that our collective use of this technology will help hone the methodology which is the heart of the Coverage Cookbook.  And with this use, we should expect the Coverage Cookbook to evolve as we achieve greater verification productivity.

Let us know what you think about the Coverage Cookbook and what we might be able to do to improve it.  In the meantime, Happy Coverage Closing!


28 June, 2011

iTBA Introduction

If you’ve been to DAC or DVCon during the past couple of years, you’ve probably at least heard of something new called “Intelligent Testbench Automation”.  Well, it’s actually not really all that new, as the underlying principles have been used in compiler testing and some types of software testing for the past three decades, but its application to electronic design verification is certainly new, and exciting.

The value proposition of iTBA is fairly simple and straightforward.  Just like constrained random testing, iTBA generates tons of stimuli for functional verification.  But iTBA is so efficient, that it achieves the targeted functional coverage one to two orders of magnitude faster than CRT.  So what would you do if you could achieve your current simulation goals 10X to 100X faster?

You could finish your verification earlier, especially when it seems like you’re getting new IP drops every day.  I’ve seen IP verification teams reduce their simulations from several days on several CPUs (using CRT) to a couple of hours on a single CPU (with iTBA).  No longer can IP designers send RTL revisions faster than we can verify them.

But for me, I’d ultimately use the time savings to expand my testing goals.  Today’s designs are so complex that typically only a fraction of their functionality gets tested anyway.  And one of the biggest challenges is trading off what functionality to test, and what not to test.  (We’ll show you how iTBA can help you here, in a future blog post.)  Well, if I can achieve my initial target coverage in one-tenth of the time, then I’d use at least part of the time saving to expand my coverage, and go after some of the functionality that originally I didn’t think I’d have time to test.

On Line Illustration

If you check out this link – http://www.verificationacademy.com/infact  – you’ll find an interactive example of a side by side comparison of constrained random testing and intelligent testbench automation.  It’s an Adobe Flash Demonstration, and it lets you run your own simulations.  Try it, it’s fun.

The example shows a target coverage of 576 equally weighted test cases in a 24×24 grid.  You can adjust the dials at the top for the number and speed of simulators to use, and then click on “start”.  Both CRT and iTBA simulations run in parallel at the same speed, cycle for cycle, and each time a new test case is simulated the number in its cell is incremented by one, and the color of the cell changes.  Notice that the iTBA simulation on the right achieves 100% coverage very quickly, covering every unique test case efficiently.  But notice that the CRT simulation on the left eventually achieves 100% coverage painfully and slowly, with much unwanted redundancy.  You can also click on “show chart” to see a coverage chart of your simulation.

Math Facts

You probably knew that random testing repeats, but you probably didn’t know by how much.  It turns out that the redundancy factor is expressed in the equation “ T = N ln N + C “, where “T” is the number of tests that must be generated to achieve 100% coverage of “N” different cases, and “C” is a small constant.  So using the natural logarithm of 576, we can calculate that given equally weighted cases, the random simulation will require an average of about 3661 tests to achieve our goal.  Sometimes it’s more, sometimes it’s less, given the unpredictability of random testing.  In the meantime the iTBA simulation achieves 100% coverage in just 576 tests, a reduction of 84%.
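
If you would rather see it in simulation than in algebra, here is a small self-contained sketch (my own, not part of the online demo) that draws random test cases until all 576 are hit and prints how many draws it took; averaged over many runs the count lands in the neighborhood of the N ln N estimate.

  // Empirical check of the T = N ln N + C estimate for N = 576.
  module coupon_collector;
    localparam int N = 576;
    int unsigned hits[N];   // hit count per test case
    int covered, trials;

    initial begin
      while (covered < N) begin
        int unsigned t;
        t = $urandom_range(N-1);   // pick one of the N test cases at random
        if (hits[t] == 0) covered++;
        hits[t]++;
        trials++;
      end
      // typically on the order of N*ln(N) draws, i.e. a few thousand
      $display("Covered all %0d cases after %0d random draws", N, trials);
    end
  endmodule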

Experiment at Home

You probably already have an excellent six-sided demonstration vehicle somewhere at home.  Try rolling a single die repeatedly, simulating a random test generator.  How many times does it take you to “cover” all six unique test cases?  T = N ln N + C says it should take about 11 times or more.  You might get lucky and hit 8, 9, or 10.  But chances are you’ll still be rolling at 11, 12, 13, or even more.  If you used iTBA to generate the test cases, it would take you six rolls, and you’d be done.  Now in this example, getting to coverage twice as fast may not be that exciting to you.  But if you extrapolate these results to your RTL design’s test plan, the savings can become quite interesting.

Quiz Question

So here’s a quick question for you.  What’s the minimum number of unique functional test cases needed to realize at least a 10X gain in efficiency with iTBA compared to what you could get with CRT?  (Hint – You can figure it out with three taps on a scientific calculator.)  It’s probably a pretty small number compared to the number of functions your design can actually perform, meaning that there’s at least a 10X improvement in testing efficiency awaiting you with iTBA.

More Information

Hopefully at this point you’re at least a little bit interested?  Like some others, you may be skeptical at this point.  Could this technology really offer a 10X improvement in functional verification?  Check out the Verification Academy at this site – http://www.verificationacademy.com/course-modules/dynamic-verification/intelligent-testbench-automation – to see the first academy sessions that will introduce you to Intelligent Testbench Automation.  Or you can even Google “intelligent testbench automation”, and see what you find.  Thanks for reading . . .


26 July, 2010

For years one of the objectives in EDA has been to make formal property checking easy to use and its results easy to understand. With the Automatic formal check feature in the June release of the 0-In Formal tool version 3.0, I think we have made significant progress in this area.

The feature, which predefines a set of assertion rules to look for design issues automatically, makes formal technology accessible to users who are not yet ready to write properties in SystemVerilog Assertions (SVA) or the Property Specification Language (PSL). To make it easier to comprehend problems in the design, the tool highlights the violations back to the RTL code.

Automatic formal check focuses on three areas inadequately addressed by dynamic simulation:

The first area is functional coverage. Today, when constrained random simulation fails to achieve the targeted coverage goal, engineers have to fine tune the environment or add new tests. These efforts, often attempted relatively late in the verification cycle, can consume vast amounts of time and resources while still failing to reach parts of the design. In contrast, automatic formal check can be used to identify unreachable code early in the verification cycle. These targets can be eliminated from the coverage model. As a result, the coverage measurement is more accurate and you know when you are done.

The next area is design initialization. If a design cannot be initialized reliably in silicon, it will not function correctly. An obvious precursor then is making sure all the registers are initialized correctly at RTL. If X’es are used, we need to monitor the X creation, propagation and usage cycle. Dynamic simulation does not interpret X’es the way silicon does, since silicon has only 1s and 0s. Automatic formal check is ideal for verifying register initialization under different modes or configurations. Then, with internal assertions and formal technologies, we can check that although X’es are created, they are not used by downstream registers.
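
As a rough illustration (hand-written, with hypothetical signal names, rather than the checks the tool derives automatically), this is the kind of initialization property being described: once reset is released, the state register must hold a known, non-X value.

  // Sketch of a register-initialization check.
  module init_check (input logic       clk,
                     input logic       rst_n,
                     input logic [3:0] state);
    // After reset deasserts, 'state' must not contain X or Z bits.
    a_state_known: assert property (
      @(posedge clk) $rose(rst_n) |-> !$isunknown(state)
    );
  endmodule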

The final area is corner case design issues. Time and time again, designers unintentionally write code that violates logical correctness. Examples include combinational loops, full case violations, parallel case violations, undriven logic, finite-state machine (FSM) deadlocks and FSM livelocks. Unless tests are written to specifically target these corner-case design issues, they are difficult to exercise. On the other hand, by formally analyzing the design semantics, automatic formal check identifies these design issues statically and creates the stimuli to highlight them to the users.
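
Again as a hand-written sketch with hypothetical names, an FSM deadlock check can be phrased as a cover target: if formal analysis proves there is no trace in which the FSM ever leaves the BUSY state once entered, that state is a potential deadlock.

  // Sketch of an FSM deadlock check expressed as a cover property.
  module fsm_deadlock_check (input logic       clk,
                             input logic [1:0] state);
    localparam logic [1:0] BUSY = 2'b10;
    // Coverable only if the FSM can eventually leave BUSY after entering it.
    c_leaves_busy: cover property (
      @(posedge clk) (state == BUSY) ##[1:$] (state != BUSY)
    );
  endmodule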

If you are interested in learning more about the automatic formal check feature in 0-In Formal, please feel free to register for our upcoming seminar in San Jose.

 

