Archive for June, 2011

28 June, 2011

iTBA Introduction

If you’ve been to DAC or DVCon during the past couple of years, you’ve probably at least heard of something new called “Intelligent Testbench Automation”.  Well, it’s actually not really all that new, as the underlying principles have been used in compiler testing and some types of software testing for the past three decades, but its application to electronic design verification is certainly new, and exciting.

The value proposition of iTBA is fairly simple and straightforward.  Just like constrained random testing (CRT), iTBA generates tons of stimuli for functional verification.  But iTBA is so efficient that it achieves the targeted functional coverage one to two orders of magnitude faster than CRT.  So what would you do if you could achieve your current simulation goals 10X to 100X faster?

You could finish your verification earlier, especially when it seems like you’re getting new IP drops every day.  I’ve seen IP verification teams reduce their simulations from several days on several CPUs (using CRT) to a couple of hours on a single CPU (with iTBA).  No longer can IP designers send RTL revisions faster than we can verify them.

But for me, I’d ultimately use the time savings to expand my testing goals.  Today’s designs are so complex that typically only a fraction of their functionality gets tested anyway.  And one of the biggest challenges is trading off what functionality to test and what not to test.  (We’ll show you how iTBA can help you here in a future blog post.)  Well, if I can achieve my initial target coverage in one-tenth of the time, then I’d use at least part of the time savings to expand my coverage and go after some of the functionality that I originally didn’t think I’d have time to test.

Online Illustration

If you check out this link – http://www.verificationacademy.com/infact – you’ll find an interactive, side-by-side comparison of constrained random testing and intelligent testbench automation.  It’s an Adobe Flash demonstration that lets you run your own simulations.  Try it – it’s fun.

The example shows a target coverage of 576 equally weighted test cases in a 24×24 grid.  You can adjust the dials at the top for the number and speed of simulators to use, and then click on “start”.  Both the CRT and iTBA simulations run in parallel at the same speed, cycle for cycle, and each time a new test case is simulated, the number in its cell is incremented by one and the color of the cell changes.  Notice that the iTBA simulation on the right achieves 100% coverage very quickly, covering every unique test case efficiently.  The CRT simulation on the left eventually achieves 100% coverage too, but painfully slowly and with much unwanted redundancy.  You can also click on “show chart” to see a coverage chart of your simulation.

Math Facts

You probably knew that random testing repeats itself, but you probably didn’t know by how much.  It turns out that the redundancy factor is expressed by the equation T = N ln N + C, where “T” is the number of tests that must be generated to achieve 100% coverage of “N” different cases, and “C” is a small constant.  So using the natural logarithm of 576, we can calculate that, given equally weighted cases, the random simulation will require an average of about 3661 tests to achieve our goal.  Sometimes it’s more, sometimes it’s less, given the unpredictability of random testing.  Meanwhile, the iTBA simulation achieves 100% coverage in just 576 tests, a reduction of 84%.
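Working the arithmetic for this example (my own back-of-the-envelope check, ignoring the small constant C):

T \approx N \ln N = 576 \times \ln(576) \approx 576 \times 6.36 \approx 3661
\qquad\text{and}\qquad
1 - \frac{576}{3661} \approx 84\%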

Experiment at Home

You probably already have an excellent six-sided demonstration vehicle somewhere at home.  Try rolling a single die repeatedly, simulating a random test generator.  How many rolls does it take you to “cover” all six unique test cases?  T = N ln N + C says it should take about 11 rolls or more.  You might get lucky and hit 8, 9, or 10.  But chances are you’ll still be rolling at 11, 12, 13, or even more.  If you used iTBA to generate the test cases, it would take you six rolls, and you’d be done.  Now in this example, getting to coverage twice as fast may not be that exciting to you.  But if you extrapolate these results to your RTL design’s test plan, the savings can become quite interesting.
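If you’d rather let a simulator do the rolling, below is a quick, self-contained sketch of the same experiment – my own illustration, not part of any tool or demo mentioned above – that should run in any SystemVerilog simulator:

module dice_coverage;
  // Roll a fair die (a random "test generator" with six equally weighted
  // cases) until every face has been seen at least once, then report how
  // many rolls it took.
  initial begin
    bit seen [6];
    int covered = 0;
    int rolls   = 0;
    while (covered < 6) begin
      int face = $urandom_range(5);   // random face, 0..5
      rolls++;
      if (!seen[face]) begin
        seen[face] = 1'b1;
        covered++;
      end
    end
    $display("All 6 faces covered after %0d rolls", rolls);
  end
endmodule

Run it a handful of times and compare your roll counts to the T = N ln N + C estimate; an “intelligent” generator would always finish in exactly six.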

Quiz Question

So here’s a quick question for you.  What’s the minimum number of unique functional test cases needed to realize at least a 10X gain in efficiency with iTBA compared to what you could get with CRT?  (Hint – You can figure it out with three taps on a scientific calculator.)  It’s probably a pretty small number compared to the number of functions your design can actually perform, meaning that there’s at least a 10X improvement in testing efficiency awaiting you with iTBA.
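(If you’d like to check your answer, here is one way to work the hint, using the T ≈ N ln N approximation from above:

\frac{T_{\mathrm{CRT}}}{T_{\mathrm{iTBA}}} \approx \frac{N \ln N}{N} = \ln N \ge 10
\qquad\Longrightarrow\qquad
N \ge e^{10} \approx 22{,}026

unique test cases – which is indeed just a few calculator taps away.)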

More Information

Hopefully you’re at least a little bit interested at this point.  Or, like some others, you may be skeptical.  Could this technology really offer a 10X improvement in functional verification?  Check out the Verification Academy at this site – http://www.verificationacademy.com/course-modules/dynamic-verification/intelligent-testbench-automation – to see the first academy sessions that introduce you to Intelligent Testbench Automation.  Or you can even Google “intelligent testbench automation” and see what you find.  Thanks for reading . . .


26 June, 2011

Verification Techniques & Technologies Adoption Trends

This blog is a continuation of a series of blogs, which present the highlights from the 2010 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 8 click here), I focused on some of the 2010 Wilson Research Group findings related to design and verification language trends. In this blog, I present verification techniques and technologies adoption trends, as identified by the 2010 Wilson Research Group study.

One of the claims I made in the prologue to this series of blogs is that we are seeing a trend of increased industry adoption of advanced functional verification techniques, which is supported by the data I present in this blog. An interesting question you might ask is “what is driving this trend?” In some of my earlier blogs (click here for Part 1 and Part 2) I showed an industry trend of increasing design complexity, in terms of design sizes and number of embedded processors. In addition, I’ve presented trend data that showed an increase in total project time and effort spent in verification (click here for Part 4). My belief is that the industry is being forced to mature its functional verification process to address increasing complexity and effort.

Simulation Techniques Adoption Trends

Let’s begin by comparing  the adoption trends related to various simulation techniques as shown in Figure 1, where the data from the 2007 Far West Research study  is shown in blue and the data from 2010 Wilson Research Group study is shown in green.  

Figure 1. Simulation-based technique adoption trends

You can see that the study finds the industry increasing its adoption of various functional verification techniques.

For example, in 2007, the Far West Research Group found that only 48 percent of the industry performed code coverage. This surprised me. After all, HDL-based code coverage is a technology that has been around since the early 1990’s. However, I did informally verify the 2007 results through numerous customer visits and discussions. In 2010, we see that the industry adoption of code coverage has increased to 72 percent.

Now, a side comment: In this blog, I do not plan to discuss either the strengths or weaknesses of the various verification techniques that were studied (such as code coverage, whose strengths and weaknesses have been argued and debated for years)—perhaps in a separate series of future blogs. In this series of blogs, I plan to focus only on the findings from the 2010 Wilson Research Group study.

In 2007, the Far West Research Group study found that 37 percent of the industry had adopted assertions for use in simulation. In 2010, we find that industry adoption of assertions had increased to 72 percent. I believe that the maturing of the various assertion language standards has contributed to this increased adoption.

In 2007, the Far West Research Group study found that 40 percent of the industry had adopted functional coverage for use in simulation. In 2010, the industry adoption of functional coverage had increased to 69 percent. Part of this increase in functional coverage adoption has been driven by the increased adoption of constrained-random simulation, since you really can’t effectively do constrained-random simulation without doing functional coverage.

In fact, we see from the Far West Research Group 2007 study that 41 percent of the industry had adopted constrained-random simulation techniques. In 2010, the industry adoption had increased to 69 percent. I believe that this increase in constrained-random adoption has been driven by the increased adoption of the various base-class library methodologies, as I presented in a previous blog (click here for Part 8).

Formal Property Checking Adoption Trends

Figure 2 shows the trends in terms of formal property checking adoption by comparing the 2007 Far West Research study (in blue) with the 2010 Wilson Research Group study (in green). The industry adoption of formal property checking has increased by an amazing 53 percent in the past three years. Again, this is another data point that supports my claim that the industry is starting to mature its adoption of advanced functional verification techniques.

 

Figure 2. Trends in formal property checking adoption

Another way to analyze the results is to partition a project’s adoption of formal property checking by design size, as shown in Figure 3, where less than 1M gates  is shown in blue, 1M to 20M gates is shown in orange, and greater than 20M gates is shown in red. Obviously, the larger the design, the more effort is generally spent in verification. Hence, it’s not too surprising to see the increased adoption of formal property checking for larger designs.

Figure 3. Trends in formal property checking adoption by design size

Acceleration/Emulation & FPGA Prototyping Adoption Trends

The amount of time spent in a simulation regression is an increasing concern for many projects. Intuitively, we tend to think that design size determines simulation performance. However, there are two equally important factors that must be considered: the number of tests in the simulation regression suite, and the length of each test in terms of clock cycles.

For example, a project might have a small or moderate-sized design, yet verification of this design requires a long-running test (e.g., a video input stream). Hence, in this example, the simulation regression time is influenced by the number of clock cycles required for the test, and not necessarily by the design size itself.

In blog 6 of this series, I presented industry data on the number of tests created to verify a design in simulation (i.e., the regression suite). The findings obviously varied dramatically, from a handful of tests to thousands of tests in a regression suite, depending on the design. In Figure 4 below, I show the findings for a project’s regression time, which also varies dramatically, from short regression times for some projects to multiple days for others. The median simulation regression time in 2010 is about 16 hours.

Figure 4. Simulation regression time trends

One technique that is often used to speed up simulation regression (whether due to very long tests or lots of tests) is hardware-assisted acceleration or emulation. In addition, FPGA prototyping, while historically used as a platform for software development, has recently served a role in SoC integration validation.

Figure 5 shows the adoption trend for both HW assisted acceleration/emulation and FPGA prototyping by comparing the 2007 Far West Research study (in blue) with the 2010 Wilson Research Group study (in green). We have seen a 75 percent increase in the adoption of HW assisted acceleration/emulation over the past three years. 

Figure 5. HW assisted acceleration/emulation and FPGA Prototyping trends

I was surprised to see that the adoption of FPGA prototyping did not increase over the past three years, considering that we found an increase in SoC development over the same period. So, I decided to dig deeper into this anomaly.

In Figure 6, you’ll see that I partitioned the HW assisted acceleration/emulation and FPGA prototyping adoption data by design size: less than 1M gates (in blue), 1M to 20M gates (in yellow), and greater than 20M gates (in red). This revealed that the adoption of HW assisted acceleration/emulation continued to increase as design sizes increased. However, the adoption of FPGA prototyping rapidly dropped off as design sizes increased beyond 20M gates.   

Figure 6. Acceleration/emulation & FPGA Prototyping adoption by design size

So, what’s going on? One problem with FPGA prototyping of large designs is that there is an increased engineering effort required to partition designs across multiple FPGAs. In fact, what I have found is that FPGA prototyping of very large designs is often a major engineering effort in itself, and that many projects are seeking alternative solutions to address this problem.

In my next blog, I plan to present the 2010 Wilson Research Group study findings related to various project results in terms of schedule and required spins before production.


24 June, 2011

Well, another DAC is behind us, and you know what that means. That’s right, the super-sized DAC issue of Verification Horizons is now available online. You can download the full issue or individual articles from the Verification Horizons tab at Verification Academy.

Over the next few days, I’ll be highlighting some of the articles to give you a taste of the great content available directly to subscribers (now over 50,000 strong) of the newsletter. In the meantime, you can get an overview of the issue by reading my Editor’s Note, through which you can also commiserate about lawn care (go read it to see what I mean).

Our next issue will be coming out in October, so if you’d like to contribute an article that’ll be seen by tens of thousands of your colleagues, drop me a line. For those of you who would like to subscribe, you may do so here.

As always, your comments are welcome. If there are particular topics you’d like to see covered in the future, we’ll do our best to accommodate.

Thanks,
Tom Fitzpatrick
Editor, Verification Horizons


21 June, 2011

System Standards Worlds Initiate Unification


Accellera, who brought us SystemVerilog, and the Open SystemC Initiative (OSCI), who brought us SystemC, have made known their intent to unite to form a single front-end electronic design automation (EDA) standards organization.  You can read their joint press release here.

While this may come as a surprise to many, one thing has remained constant for many years: the two organizations have had a long-standing policy of collaborative interaction as both have evolved their standards programs.  At a DATE 2004 panel titled “SystemC and SystemVerilog: Where do they fit?  Where are they going?,” technical members of the two communities gathered to ponder answers to those questions.  A few months later, at DAC 2004, when I was chair of Accellera and Guido Arnout was chair of OSCI, we stood before a large assembly of SystemC users to point out what was not so obvious to many: SystemVerilog and SystemC complement each other.


Guido and I dispelled any notion of a “language war” and focused on the value each language delivered to the design and verification community.  A lot has transpired since then.  Both SystemC and SystemVerilog are now IEEE standards, known as IEEE Std. 1666™ and IEEE Std. 1800™ respectively.  And both OSCI and Accellera have continued to evolve their standards work programs in significant and meaningful ways.

In this evolution, it became clear to me that each organization was “completing” the other.  OSCI developed the popular Transaction Level Modeling (TLM) standards, and Accellera adopted TLM in its Universal Verification Methodology (UVM™).  As the technical teams from each organization leveraged each other’s work, it began to make more sense to initiate discussions to unite the two groups to address further front-end EDA standards challenges – as one.  And, indeed, the two organizations recognized this and have taken steps to determine how best to combine operations into a single organization.

In the months ahead, the unified organization will emerge, but for now, it is business as usual for the standards development teams in OSCI and Accellera.

What do you think about the unification?


17 June, 2011

How do your favorites rank?

Have you ever wondered how popular the different IEEE standards for electronic design automation are? Have you ever wondered which ones attract the least interest? When buying books online, popular book-buying websites rank customer purchases. Many newspapers publish lists that you can consult to determine what is most popular and what is in highest demand. But if you have purchased any IEEE standards, you will know this information is not available from the IEEE Store or the IEEE Xplore platform.

On May 4th, the IEEE Standards Association announced its collaboration with Techstreet to create the New IEEE Standards Store.  Until now, anyone who wanted to order a single standard had to use a more complex system that even made it hard to share a permanent link to one’s favorite standard with others.  Just look at the Accellera homepage for an example of where to get the SystemVerilog (IEEE Std. 1800™) standard.  At the time of this writing, it simply points to www.ieee.org.  [I will share with Accellera the fact that the IEEE’s new site now has fixed links that can be used to help others find the most current SystemVerilog standard.]

But back to the question of which IEEE EDA standards are the most popular… Any guesses?

Before I delve into those details, let me say the ranking is by ordinal only.  The New IEEE Standards Store shares no information on the actual number of standards purchased.  So the difference between #1 and #10 could be just 10 copies.  It probably isn’t, but it could be.  But speaking of #10, why is it even on the list?  The IP-XACT standard (IEEE Std. 1685™) is available for free under the IEEE Get Program.  Under this program you can download a PDF of the IEEE standard for free.  If you want a printed version, you can print your own copy from the free one you download.  Back in December 2010, Accellera reported that since the IEEE started to offer IP-XACT for free, there had been 1200 downloads.  It also looks like many people did not want the hassle of printing and simply ordered the printed version directly from the IEEE.  The other IEEE EDA standard offered for free is SystemC®, and this is probably the reason it is in 32nd place.  It is very popular in terms of the number of free downloads.

And yes, if you search for those two standards on the New IEEE Standards Store, you will find you can order print copies there, and if you read the small print below, you will see there is a link that takes you to the free online versions.

Harry Foster has issued several research reports on the popularity of various languages and formats over the past several months.  In his last blog, he discussed which of the design and verification languages rank high and which, well, not so high.  And I think it is worth sharing the correlation between his findings and these more “anecdotal” results from the New IEEE Standards Store.  I have been party to many of the standards at the top (Verilog/SystemVerilog) and to the “least highest” (no, I can’t say the least liked), VITAL 2000.  For vindication, I will note that VITAL-95 comes in at #18.  On the whole, it appears to me that the New IEEE Standards Store’s ordinal ranking of EDA standards matches the scientific data from the research Harry has reported.

Below is the full ranking of IEEE EDA standards.  Where are your favorites?

1 IEEE 1364-2001 Verilog Hardware Description Language
2 IEEE 1800-2009 SystemVerilog–Unified Hardware Design, Specification, and Verification Language
3 IEEE 1076-2002 VHDL Language Reference Manual
4 IEEE 1076-1993 VHDL Language Reference Manual
5 IEEE 1499-1998 Interface for Hardware Description Models of Electronic Components
6 IEEE 1364-1995 Hardware Description Language Based on the Verilog® Hardware Description Language
7 IEEE 1800-2005 SystemVerilog: Unified Hardware Design, Specification and Verification Language
8 IEEE 1076.2-1996 VHDL Mathematical Packages
9 IEEE 1076.1-1999 VHDL Analog and Mixed-Signal Extensions
10 IEEE 1685-2009 IP-XACT, Standard Structure for Packaging, Integrating, and Reusing IP within Tool Flows
11 IEEE 1850-2005 Property Specification Language (PSL)
12 IEEE 1076c-2007 VHDL Language Reference Manual – Procedural Language Application Interface
13 IEEE 1164-1993 Multivalue Logic System for VHDL Model Interoperability (Std_logic_1164)
14 IEEE 1850-2010 Property Specification Language (PSL)
15 IEEE 1076.6-2004 VHDL Register Transfer Level (RTL) Synthesis
16 IEEE 1801-2009 Design and Verification of Low Power Integrated Circuits
17 IEEE 1481-2009 Integrated Circuit (IC) Open Library Architecture (OLA)
18 IEEE 1076.4-1995 VITAL Application-Specific Integrated Circuit (ASIC) Modeling Specification
19 IEEE/IEC 61691-5-2004 IEC 61691-5 Ed.1 (IEEE Std 1076.4(TM)-2000): Behavioural Languages – Part 5: Standard VITAL ASIC (Application Specific Integrated Circuit) Modeling Specification
20 IEEE 1647-2008 Functional Verification Language e
21 IEEE 1076.1.1-2011 VHDL Analog and Mixed-Signal Extensions — Packages for Multiple Energy Domain Support
22 IEEE/IEC 61691-7-2009 Behavioural languages – Part 7: SystemC Language Reference Manual
23 IEEE 1076-1987 VHDL Language Reference Manual
24 IEEE 1076.1.1-2004 VHDL Analog and Mixed-Signal Extensions—Packages for Multiple Energy Domain Support
25 IEEE 1076.3-1997 VHDL Synthesis Packages
26 IEEE/IEC 61523-3-2004 IEC 61523-3 Ed.1 (IEEE Std 1497(TM)-2001): Delay and Power Calculation Standards – Part 3: Standard Delay Format (SDF) for the Electronic Design Process
27 IEEE 1076/INT-1991 Interpretations: IEEE Std 1076-1987, IEEE Standard VHDL Language Reference Manual
28 IEEE/IEC 62531-2007 IEC 62531 Ed. 1 (2007-11) (IEEE Std 1850-2005): Standard for Property Specification Language (PSL)
29 IEEE 1076.6-1999 VHDL Register Transfer Level Synthesis
30 IEEE 1647-2006 Functional Verification Language “e”
31 IEEE/IEC 61691-6-2009 Behavioural languages – Part 6: VHDL Analog and Mixed-Signal Extensions
32 IEEE 1666-2005 SystemC® Language Reference Manual
33 IEEE/IEC 61691-1-1-2004 IEC 61691-1-1 Ed.1 (IEEE Std 1076(TM)-2002): Behavioural Languages – Part 1-1: VHDL Language Reference Manual
34 IEEE 1364-2005 Verilog Hardware Description Language
35 IEEE/IEC 61691-4-2004 IEC 61691-4 Ed.1 (IEEE Std 1364(TM)-2001): Behavioural Languages – Part 4: Verilog® Hardware Description Language
36 IEEE 1076.4-2000 VITAL ASIC (Application Specific Integrated Circuit) Modeling Specification

Learn more about the New IEEE Standards Store

There is much more to the New IEEE Standards Store than just the rankings of the standards we use in electronic design automation.  As I mentioned, it is easier to share fixed links to IEEE standards.  And if you want to track IEEE standards development – without having to register your email address with the actual committee developing a standard just to know when it is done – you can register at the New IEEE Standards Store to be notified when a new standard is ready.

Check out the short, one-minute video below to learn more about the New IEEE Standards Store.


2 June, 2011

When we looked at requirements for UVM in Accellera, one of the highest priority requests from users was to have a single standard Register Modeling Package. After much discussion in the committee, Mentor and Synopsys worked together to create the UVM Register package, which was subsequently adopted as part of UVM.

In developing the UVM register layer, we combined the VMM RAL data model with a sequence-based API that lets you write UVM sequences in terms of register read/write transactions, independent of the specific address or bus protocol implemented on the desired interface. The collaboration was productive and efficient and gave UVM a head start in addressing a substantial installed base of register users while keeping a consistent use model for OVM-savvy users as they consider moving to UVM.
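To make that a bit more concrete, here is a minimal sketch of what such a sequence looks like – my own illustration, using a hypothetical register model class “my_reg_block” containing a register “ctrl”, not code from the standard itself:

class ctrl_init_seq extends uvm_reg_sequence;
  `uvm_object_utils(ctrl_init_seq)

  function new(string name = "ctrl_init_seq");
    super.new(name);
  endfunction

  virtual task body();
    uvm_status_e   status;
    uvm_reg_data_t value;
    my_reg_block   regmodel;

    // "model" is the register block handle filled in by whoever starts
    // the sequence; no bus addresses or protocol details appear here.
    $cast(regmodel, model);

    write_reg(regmodel.ctrl, status, 32'h0000_0001);  // register write
    read_reg (regmodel.ctrl, status, value);          // register read
  endtask
endclass

Because the sequence talks only to register handles, retargeting it to a different bus is a matter of mapping the register model onto a different adapter and sequencer – nothing in the sequence itself changes.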

So we asked ourselves, if having a standard register package is so important, why not make it available to OVM users without requiring them to move everything else to UVM? Our answer was to release a UVM Register package that does, in fact, work with OVM 2.1.2. As it says in the Release Notes, “This package is a near verbatim copy of the register layer portion of the Accellera Universal Verification Methodology (UVM 1.1) reference library. Minor additions and modifications  were made to enable its use as a standalone package and be compatible  with OVM 2.1.2. ” Having this register package means that OVM users now have a standard, robust and vendor-independent register solution that will not require rewrites or use-model changes if/when they decide to migrate to UVM.

To use the UVM Register package with OVM, you’ll need to import the uvm_reg_pkg and include the uvm_reg_macros.svh file:

`include "ovm_macros.svh"
`include "uvm_reg_macros.svh"
package my_reg_model;
  import ovm_pkg::*;
  import uvm_reg_pkg::*;
  ...
endpackage

That’s it! If you’re using a code generator that creates UVM register code directly, you can use that code as-is with the UVM Register Package and OVM 2.1.2, with the minor caveat of requiring the import and include statements mentioned above.
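For reference, here is roughly what that register code looks like – a hand-written, minimal sketch using the same hypothetical “my_reg_block”/“ctrl” names as the sequence sketch earlier, not the output of any particular generator:

// A hypothetical 32-bit control register with a single 1-bit "enable" field
class ctrl_reg extends uvm_reg;
  rand uvm_reg_field enable;

  function new(string name = "ctrl_reg");
    super.new(name, 32, UVM_NO_COVERAGE);
  endfunction

  virtual function void build();
    enable = uvm_reg_field::type_id::create("enable");
    // configure(parent, size, lsb_pos, access, volatile, reset,
    //           has_reset, is_rand, individually_accessible)
    enable.configure(this, 1, 0, "RW", 0, 1'b0, 1, 1, 1);
  endfunction
endclass

// A hypothetical register block containing the register above
class my_reg_block extends uvm_reg_block;
  rand ctrl_reg ctrl;

  function new(string name = "my_reg_block");
    super.new(name, UVM_NO_COVERAGE);
  endfunction

  virtual function void build();
    default_map = create_map("default_map", 'h0, 4, UVM_LITTLE_ENDIAN);
    ctrl = new("ctrl");
    ctrl.configure(this);
    ctrl.build();
    default_map.add_reg(ctrl, 'h0, "RW");
  endfunction
endclass

Code like this is what would go in the body of the my_reg_model package shown above, and it should compile the same way whether the surrounding testbench is OVM 2.1.2 or UVM.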

The OVM 2.1.2 and UVM Register package kits are available at the Verification Academy here.

Enjoy!

-Tom
