Posts Tagged ‘functional coverage’
This is the first in a series of blogs that presents the results from the 2012 Wilson Research Group Functional Verification Study.
In 2002 and 2004, Ron Collett International, Inc. conducted its well-known ASIC/IC functional verification studies, which provided invaluable insight into the state of the electronics industry and its trends in design and verification. However, after the 2004 study, no further industry studies were conducted, which left a void in identifying industry trends.
To address this void, Mentor Graphics commissioned Far West Research to conduct an industry study on functional verification in the fall of 2007. Then in the fall of 2010, Mentor commissioned Wilson Research Group to conduct another functional verification study. Both of these studies were conducted as blind studies to avoid influencing the results. This means that the survey participants did not know that the study was commissioned by Mentor Graphics. In addition, to support trend analysis on the data, both studies followed the same format and questions (when possible) as the original 2002 and 2004 Collett studies.
In the fall of 2012, Mentor Graphics commissioned Wilson Research Group again to conduct a new functional verification study. This study was also a blind study and follows the same format as the Collett, Far West Research, and previous Wilson Research Group studies. The 2012 Wilson Research Group study is one of the largest functional verification studies ever conducted. The overall confidence level of the study was calculated to be 95% with a margin of error of 4.05%.
Unlike the previous Collett and Far West Research studies that were conducted only in North America, both the 2010 and 2012 Wilson Research Group studies were worldwide studies. The regions targeted were:
- North America: Canada, United States
- Asia (minus India): China, Korea, Japan, Taiwan
The survey results are compiled both globally and regionally for analysis.
Another difference between the Wilson Research Group and previous industry studies is that both of the Wilson Research Group studies also included FPGA projects. Hence, for the first time, we are able to present some emerging trends in the FPGA functional verification space.
Figure 1 shows the percentage makeup of survey participants by their job description. The red bars represent the FPGA participants, while the green bars represent the non-FPGA (i.e., IC/ASIC) participants.
Figure 1: Survey participants’ job title description
Figure 2 shows the percentage makeup of survey participants by company type. Again, the red bars represent the FPGA participants, while the green bars represent the non-FPGA (i.e., IC/ASIC) participants.
Figure 2: Survey participants’ company description
In a future set of blogs, over the course of the next few months, I plan to present the highlights from the 2012 Wilson Research Group study along with my analysis, comments, and, obviously, opinions. A few interesting observations emerged from the study, including:
- FPGA projects are beginning to adopt advanced verification techniques due to increased design complexity.
- The effort spent on verification is increasing.
- The industry is converging on common processes driven by maturing industry standards.
My next blog presents current design trends that were identified by the survey. This will be followed by a set of blogs focused on the functional verification results.
Also, to learn more about the 2012 Wilson Research Group study, view my pre-recorded Functional Verification Study web seminar, which is available on the Verification Academy website.
Quick links to the 2012 Wilson Research Group Study results (so far…)
- Part 1 – Design Trends
Tags: accellera, Assertion-Based Verification, formal verification, functional coverage, functional verification, IEEE, Simulation, Standards, SystemVerilog, UVM, Verification Academy, Verification Methodology, verilog, vhdl
The latest revision of the IEEE 1800-2012 SystemVerilog Language Reference Manual (LRM) is about to hit the press, though I doubt people will be printing the 1300+ pages on their own from the soon-to-be readily available online version. Here’s a little background on what’s in all those pages.
The first SystemVerilog LRM came from Accellera in 2002 as a set of extensions to the IEEE 1364-2001 LRM. This first LRM was called version 3.0 because it was considered the third generation of Verilog. Accellera released a few more versions and turned version 3.1a over to the IEEE in 2004. The IEEE released the 1800-2005 SystemVerilog LRM as a set of extensions to the 1364-2005 Verilog LRM, which became the last revision of the 1364 LRM. Four years later, the IEEE combined the SystemVerilog extensions with the Verilog LRM producing a single 1800-2009 SystemVerilog LRM.
Now, a short three years later, the SystemVerilog IEEE 1800-2012 LRM is ready, having addressed 225 issues. The majority of these issues are clarifications and corrections to the existing LRM. However, a few enhancements, ranging from the simple removal of the restriction on non-blocking assignments to class members to the major addition of multiple class interface inheritance, made their way into the new LRM. A number of those enhancements will undoubtedly be presented at the upcoming Design & Verification Conference.
I’d like to demonstrate two enhancements that should be of value to most verification engineers. They address two of the more commonly asked SystemVerilog questions I receive: How do I generate an array of unique values? And how do I create covergroup bins to get toggle or one-hot functional coverage?
Generating a unique array of random values
Many verification scenarios require creating sets of random instructions or addresses with no repeating values, usually represented as elements in a dynamic array. Earlier versions of SystemVerilog required you either to use nested foreach loops to constrain all combinations of array elements so that they would not be equal to each other, or to repeatedly randomize one element at a time, constraining each new element not to be in the list of already generated values.
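For reference, a minimal sketch of the older nested-foreach approach (the class and member names here are my own, for illustration):

```systemverilog
// Pre-1800-2012 style: pairwise inequality via nested foreach loops
class unique_array_old;
  rand bit [3:0] data[10];           // ten 4-bit elements
  constraint all_different {
    foreach (data[i])
      foreach (data[j])
        if (i != j) data[i] != data[j];
  }
endclass
```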
The new unique constraint lets you use one statement to constrain a set of variables or array elements to have unique values. When randomized, the class sketched below generates a set of ten unique values from 0 to 15.
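A minimal sketch of the new form, assuming 4-bit elements so the legal values are 0 to 15 (names again illustrative):

```systemverilog
// 1800-2012 style: one unique constraint replaces the nested loops
class unique_array;
  rand bit [3:0] data[10];           // ten 4-bit elements, values 0..15
  constraint all_different { unique {data}; }
endclass

module top;
  initial begin
    unique_array a = new();
    assert (a.randomize());
    $display("%p", a.data);          // ten distinct values from 0..15
  end
endmodule
```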
You can also add other non-random variables to the set of unique values, which has the effect of excluding the values of those variables from the set of unique values. When randomized, the class below generates a set of ten unique values excluding the values 0, 7, and 15.
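A sketch of that variation; the three state variables are mine, chosen to exclude 0, 7, and 15:

```systemverilog
class unique_array_excl;
  rand bit [3:0] data[10];
  bit [3:0] excl0 = 0, excl7 = 7, excl15 = 15;   // non-random variables
  // Including the non-random variables in the unique set keeps their
  // values out of the randomized elements.
  constraint all_different { unique {data, excl0, excl7, excl15}; }
endclass
```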
Complex coverpoint bin expressions
The previous SystemVerilog syntax for specifying functional coverage bins was very limiting. Unless you could explicitly state the individual bin values or range of bin values in your coverpoint definition, or could figure out a way to instantiate multiple copies of your covergroup passing in a different bin value as an argument, you were out of luck. This also made defining coverage crosses extremely difficult.
The new SystemVerilog bin syntax lets you specify a bin expression that is evaluated over the range of possible values of the coverpoint expression. The bin expression acts like a constraint, and the set of coverpoint values for which the bin expression is true becomes the set of bins. The coverpoint below generates a set of bin values between 0 and 127 that are divisible by 3. The range is 0 to 127 because sbyte is a 7-bit variable.
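A minimal sketch of that coverpoint, using the new with clause (the covergroup wrapper is my own):

```systemverilog
bit [6:0] sbyte;                     // 7-bit variable: values 0..127

covergroup div3_cg;
  coverpoint sbyte {
    // one bin per value in 0..127 for which the expression is true
    bins div_by_3[] = {[0:$]} with (item % 3 == 0);
  }
endgroup
```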
Probably the most powerful feature is the coverpoint bin set, which simply allows you to define an array of values that you want as bins. This is useful for specifying one-hot encodings, toggle coverage of a register, or any complex algorithm that can generate the set of bin values you want. The code below builds a list of one-hot values in the encodings array, and then constructs the protocol_cg covergroup using the array as a set of bin values.
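A sketch of what that code might look like; encodings and protocol_cg come from the post, while the sampled signal sig and the surrounding module are my assumptions:

```systemverilog
module top;
  bit [7:0] sig;
  bit [7:0] encodings[];             // dynamic array of desired bin values

  covergroup protocol_cg;
    coverpoint sig {
      bins onehot[] = encodings;     // one bin per array element
    }
  endgroup

  initial begin
    protocol_cg cg;
    encodings = new[8];
    foreach (encodings[i])
      encodings[i] = 8'b1 << i;      // build the one-hot values
    cg = new();                      // bin values are fixed when the
  end                                // covergroup instance is constructed
endmodule
```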
Available in the latest version of Questa
By the way, every feature discussed in this post is available in the latest version of Questa, 10.2.
Verification Academy Adds Major New Technical Resource
The Verification Academy adds another major methodology cookbook, this one focused on effective coverage adoption. The Coverage Cookbook describes the different types of coverage available to track the progress of your verification process, explains how to create a functional coverage model from a specification, and provides examples that implement functional coverage for different types of designs.
Verification Academy “full access” members have access to the free Coverage Cookbook and the UVM/OVM Cookbooks as well. Are you a registered full access member? If not, register now. (Restrictions apply.)
Coverage is not a new topic. It was one of the major additions to the SystemVerilog (IEEE Std. 1800™-2009) standard. But the SystemVerilog functional coverage extensions were left to the verification engineer to use in such a way as to return meaningful measurements of how much of the design specification was being tested. The Universal Verification Methodology (UVM) offers greater structure for coverage than SystemVerilog alone, but it, too, is still only a piece of the puzzle.
As verification teams have come to generate ever greater amounts of information from SystemVerilog, UVM, and other verification tools, the data from verification runs needs to be readily usable for driving coverage closure. Within the Mentor Graphics Questa verification platform, this resulted in the development of the Unified Coverage Database (UCDB) and its associated verification management and planning features.
Since verification teams use a variety of tools and technology from many sources, it was imperative that verification information could be easily shared and combined to help drive faster coverage closure across the industry. This is why Mentor Graphics donated its UCDB API to Accellera, where it became the Unified Coverage Interoperability Standard (UCIS).
It would be great to think that we are done, but we’re not. Tools and data are just two of the three dimensions of any IC design project; a comprehensive approach to verification management adds the third. The Mentor Graphics Questa Verification Management features handle all of this.
Now the question is how to best adopt and use all the capabilities at hand, from the standards to the verification technology at your fingertips.
The Verification Academy Coverage Cookbook is one of the important tools you now have to pull all this information into a single place, where you can learn the theory and put that theory into practice. The Coverage Cookbook is much like the OVM/UVM Cookbooks in that it is web friendly while also letting you generate a PDF of the whole document, in case you want a printed copy or an offline reference.
The Theory section covers:
The Practice section shows three examples you can use today:
The Coverage Cookbook is a living document; you can expect continued extensions and contributions to enhance it. As Harry Foster, Mentor Graphics’ Chief Scientist Verification, put it, “Methodology is the bridge between tools and technologies, which creates a productive, predictable, and repeatable solution.” We should expect our collective use of this technology to help hone the methodology at the heart of the Coverage Cookbook, and the Coverage Cookbook to evolve as we achieve greater verification productivity.
Let us know what you think about the Coverage Cookbook and what we might be able to do to improve it. In the meantime, Happy Coverage Closing!
If you’ve been to DAC or DVCon during the past couple of years, you’ve probably at least heard of something new called “Intelligent Testbench Automation”. Well, it’s actually not really all that new, as the underlying principles have been used in compiler testing and some types of software testing for the past three decades, but its application to electronic design verification is certainly new, and exciting.
The value proposition of iTBA is fairly simple and straightforward. Just like constrained random testing (CRT), iTBA generates tons of stimuli for functional verification. But iTBA is so efficient that it achieves the targeted functional coverage one to two orders of magnitude faster than CRT. So what would you do if you could achieve your current simulation goals 10X to 100X faster?
You could finish your verification earlier, especially when it seems like you’re getting new IP drops every day. I’ve seen IP verification teams reduce their simulations from several days on several CPUs (using CRT) to a couple of hours on a single CPU (with iTBA). No longer can IP designers send RTL revisions faster than we can verify them.
But for me, I’d ultimately use the time savings to expand my testing goals. Today’s designs are so complex that typically only a fraction of their functionality gets tested anyway. And one of the biggest challenges is trading off what functionality to test, and what not to test. (We’ll show you how iTBA can help you here, in a future blog post.) Well, if I can achieve my initial target coverage in one-tenth of the time, then I’d use at least part of the time savings to expand my coverage, and go after some of the functionality that I originally didn’t think I’d have time to test.
Online Illustration
If you check out this link – http://www.verificationacademy.com/infact – you’ll find an interactive, side-by-side comparison of constrained random testing and intelligent testbench automation. It’s an Adobe Flash demonstration, and it lets you run your own simulations. Try it, it’s fun.
The example shows a target coverage of 576 equally weighted test cases in a 24×24 grid. You can adjust the dials at the top for the number and speed of simulators to use, and then click “start”. Both the CRT and iTBA simulations run in parallel at the same speed, cycle for cycle, and each time a new test case is simulated, the number in its cell is incremented by one and the color of the cell changes. Notice that the iTBA simulation on the right achieves 100% coverage very quickly, covering every unique test case efficiently, while the CRT simulation on the left reaches 100% coverage slowly and painfully, with much unwanted redundancy. You can also click “show chart” to see a coverage chart of your simulation.
You probably knew that random testing repeats itself, but you probably didn’t know by how much. It turns out that the redundancy factor is expressed by the equation T = N ln N + C, where T is the number of tests that must be generated to achieve 100% coverage of N different cases and C is a small constant. So, using the natural logarithm of 576, we can calculate that, given equally weighted cases, the random simulation will require an average of about 3661 tests to achieve our goal. Sometimes it’s more, sometimes it’s less, given the unpredictability of random testing. Meanwhile, the iTBA simulation achieves 100% coverage in just 576 tests, a reduction of 84%.
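For reference, the arithmetic behind those numbers:

$$T \approx N \ln N = 576 \times \ln 576 \approx 576 \times 6.36 \approx 3661, \qquad 1 - \frac{576}{3661} \approx 84\%.$$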
Experiment at Home
You probably already have an excellent six-sided demonstration vehicle somewhere at home. Try rolling a single die repeatedly, simulating a random test generator. How many rolls does it take to “cover” all six unique test cases? T = N ln N + C says it should take about 11 or more. You might get lucky and hit 8, 9, or 10, but chances are you’ll still be rolling at 11, 12, 13, or even more. If you used iTBA to generate the test cases, it would take you six rolls and you’d be done. Now, in this example, getting to coverage twice as fast may not be that exciting to you. But if you extrapolate these results to your RTL design’s test plan, the savings become quite interesting.
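If you’d rather let a simulator do the rolling, here is a small sketch (entirely my own, not from the original post) that averages the roll count over many trials:

```systemverilog
// Coupon-collector experiment: roll a die until all six faces are seen,
// then average over many trials and compare with T = N ln N + C.
module coupon_collector;
  initial begin
    int total_rolls = 0;
    int trials = 10_000;
    for (int t = 0; t < trials; t++) begin
      automatic bit seen[6];                     // faces rolled so far
      automatic int covered = 0, rolls = 0;
      while (covered < 6) begin
        automatic int face = $urandom_range(5);  // random face, 0..5
        rolls++;
        if (!seen[face]) begin
          seen[face] = 1;
          covered++;
        end
      end
      total_rolls += rolls;
    end
    $display("average rolls to cover all six faces: %0d",
             total_rolls / trials);
  end
endmodule
```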
So here’s a quick question for you. What’s the minimum number of unique functional test cases needed to realize at least a 10X gain in efficiency with iTBA compared to what you could get with CRT? (Hint – You can figure it out with three taps on a scientific calculator.) It’s probably a pretty small number compared to the number of functions your design can actually perform, meaning that there’s at least a 10X improvement in testing efficiency awaiting you with iTBA.
Hopefully, at this point you’re at least a little bit interested. Or, like some others, you may be skeptical: could this technology really offer a 10X improvement in functional verification? Check out the Verification Academy at this site – http://www.verificationacademy.com/course-modules/dynamic-verification/intelligent-testbench-automation – to see the first academy sessions that will introduce you to Intelligent Testbench Automation. Or you can even Google “intelligent testbench automation” and see what you find. Thanks for reading . . .
For years, one of the objectives in EDA has been to make formal property checking easy to use and its results easy to understand. With the automatic formal check feature in the June release of the 0-In Formal tool, version 3.0, I think we have made significant progress in this area.
The feature, which predefines a set of assertion rules to look for design issues automatically, makes formal technology accessible to users who are not yet ready to write properties in SystemVerilog Assertions (SVA) or the Property Specification Language (PSL). To make it easier to comprehend problems in the design, the tool highlights violations directly in the RTL code.
Automatic formal check focuses on three areas inadequately addressed by dynamic simulation:
The first area is functional coverage. Today, when constrained random simulation fails to achieve the targeted coverage goal, engineers have to fine-tune the environment or add new tests. These efforts, often attempted relatively late in the verification cycle, can consume vast amounts of time and resources while still failing to reach parts of the design. In contrast, automatic formal check can be used to identify unreachable code early in the verification cycle. These targets can then be eliminated from the coverage model. As a result, the coverage measurement is more accurate, and you know when you are done.
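As a contrived illustration (my own, not from the tool’s documentation), here is the kind of branch that formal analysis can prove unreachable:

```systemverilog
module dead_branch(input logic [3:0] a, output logic y);
  always_comb begin
    // a[3] being set implies a >= 8, so the condition can never hold;
    // the branch is unreachable and can be excluded from the coverage model
    if (a < 4'd8 && a[3])
      y = 1'b1;
    else
      y = 1'b0;
  end
endmodule
```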
The next area is design initialization. If a design cannot be initialized reliably in silicon, it will not function correctly. An obvious precursor, then, is making sure all the registers are initialized correctly at RTL. If X’es are used, we need to monitor the X creation, propagation, and usage cycle. Dynamic simulation does not interpret X’es exactly as silicon does, since silicon has only 1s and 0s. Automatic formal check is ideal for verifying register initialization under different modes or configurations. Then, with internal assertions and formal technologies, we can check that although X’es are created, they are not consumed by downstream registers.
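A minimal sketch (again, my own illustration) of the pattern such a check flags: a register with no reset whose power-up X can reach downstream logic:

```systemverilog
module x_source(input logic clk, rst_n, d, output logic q);
  logic r1, r2;
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) r1 <= 1'b0;   // properly reset
    else        r1 <= d;
  always_ff @(posedge clk)
    r2 <= r1;                 // never reset: r2 powers up as X, and that X
  assign q = r2;              // reaches q until r1 is first sampled
endmodule
```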
The final area is corner-case design issues. Time and time again, designers unintentionally write code that violates logical correctness. Examples include combinational loops, full-case violations, parallel-case violations, undriven logic, finite-state machine (FSM) deadlocks, and FSM livelocks. Unless tests are written to specifically target them, such corner-case issues are difficult to exercise. By formally analyzing the design semantics, on the other hand, automatic formal check identifies these issues statically and creates the stimuli to highlight them to the user.
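For instance, here is a contrived combinational loop (my own example) of the sort that static analysis catches easily but directed tests rarely exercise:

```systemverilog
module comb_loop(input logic a, b, output logic y);
  logic s;
  assign s = (s & a) | b;   // s depends on itself: a combinational loop
  assign y = s;
endmodule
```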
If you are interested in learning more about the automatic formal check feature in 0-In Formal, please feel free to register for our upcoming seminar in San Jose.