Posts Tagged ‘functional verification’

17 November, 2014

Few verification tasks are more challenging than trying to achieve code coverage goals for a complex system that, by design, has numerous layers of configuration options and modes of operation. When the verification effort gets underway and the coverage holes start appearing, even the most creative and thorough UVM testbench architect can get bogged down devising new tests – whether constrained-random or highly directed – to reach the uncovered areas.

At the recent ARM® TechCon, Nguyen Le, a Principal Design Verification Engineer in the Interactive Entertainment Business Unit at Microsoft Corp., documented a real-world case study on this exact situation. Specifically, in the paper titled “Advanced Verification Management and Coverage Closure Techniques”, Nguyen outlined his initial pain in verification management and in improving coverage closure metrics, and how he conquered both challenges – speeding up his regression run time by 3x, simultaneously moving the overall coverage needle up to 97%, and saving 4 man-months in the process! Here are the highlights:

* DUT in question
— SoC with multi-million gate internal IP blocks
— Consumer electronics end-market = very high volume production = very high cost of failure!

* Verification flow
— Constrained-random, coverage driven approach using UVM, with IP block-level testbenches as well as  SoC level
— Rigorous testplan requirements tracking, supported by a variety of coverage metrics including functional coverage with SystemVerilog covergroups, assertion coverage with SVA cover directives, and code coverage on statements, branches, expressions, conditions, and FSMs (a minimal sketch of the first two metrics follows this item)
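
For readers who have not written these metrics themselves, here is a minimal SystemVerilog sketch of a covergroup and an SVA cover directive of the kind such a testplan would track. The module and signal names (op_mode, req, grant) are hypothetical illustrations, not taken from the Microsoft design.

module coverage_example (
  input logic       clk,
  input logic [1:0] op_mode,
  input logic       req,
  input logic       grant
);

  // Functional coverage: record which operating modes were exercised
  covergroup mode_cg @(posedge clk);
    mode_cp : coverpoint op_mode {
      bins normal = {2'b00};
      bins lowpwr = {2'b01};
      bins test   = {2'b10, 2'b11};
    }
  endgroup
  mode_cg mode_cg_inst = new();

  // Assertion coverage: count how often a request is granted one cycle later
  req_granted : cover property (@(posedge clk) req |=> grant);

endmodule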

* Sign-off requirements
— All test requirements tracked through to completion
— 100% functional, assertion and code coverage

* Pain points
— Code coverage: code coverage holes can come from a variety of expected and unforeseen sources: dead code can be due to unused functions in reused IP blocks, to specific configuration settings, or to a bug in the code. Given the rapid pace of the customer’s development cycle, it’s all too easy for dead code to slip into the DUT through frequent RTL changes or differing interpretations of the spec. “Unexplainably dead” code coverage areas were manually inspected, and exclusions for properly unreachable code were manually addressed by adding pragmas (a hedged illustration follows this list). Both procedures were time-consuming and error-prone
— Verification management: the verification cycle and the generated data were managed through manually maintained scripting. Optimizing the results display, throughput, and tool control became a growing maintenance burden.
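
To make the manual exclusion step concrete, here is a hedged sketch of the kind of configuration-dependent code a designer might bracket with coverage pragmas. Both the RTL and the pragma spelling are illustrative assumptions only; pragma syntax differs from simulator to simulator, so consult your tool's documentation before relying on it.

module ip_block_fragment (
  input  logic [1:0] cfg_mode,
  input  logic [7:0] data_a, data_b, data_c,
  output logic [7:0] data_out
);

  // The "coverage off/on" pragma spelling below is illustrative only;
  // each simulator defines its own pragma and exclusion mechanism.
  always_comb begin
    unique case (cfg_mode)
      2'b00: data_out = data_a;
      2'b01: data_out = data_b;
      // coverage off
      2'b10: data_out = data_c;  // mode never enabled in this SoC configuration
      // coverage on
      default: data_out = '0;
    endcase
  end

endmodule

Tools such as CoverCheck aim to remove exactly this kind of manual bookkeeping by proving unreachability and generating the waivers automatically, as described below.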

* New automation
— Questa Verification Manager: built around the Unified Coverage Database (UCDB) standard, the tool supports a dynamic verification plan cross-linked with the functional coverage points and code coverage of the DUT.  In this way the dispersed project teams now had a unified view which told them at a glance which tests were contributing the most value, and which areas of the DUT needed more attention.  In parallel, the included administrative features enabled efficient control of large regressions, merging of results, and quick triage of failures.

— Questa CoverCheck: this tool reads code coverage results from simulation in UCDB, and then leverages formal technology under the hood to mathematically prove that no stimulus could ever activate the code in question. If it’s OK for a given block of code to be dead due to a particular configuration choice, etc., the user can automatically generate waivers to refine the code coverage results. The tool can also identify segments of code that, though difficult to reach, might someday be exercised in silicon. In such cases, CoverCheck helps point the way to testbench enhancements to better reach these parts of the design.

— The above tools used in concert (along with Questasim) enabled a very straightforward coverage score improvement process as follows:
1 – Run full regression and merge the UCDB files
2 – Run Questa CoverCheck with the master UCDB created in (1)
3 – Use CoverCheck to generate exclusions for “legitimate” unreachable holes, and apply said exclusions to the UCDB
4 – Use CoverCheck to generate waveforms for reachable holes, and share these with the testbench developer(s) to refine the stimulus
5 – Report the new & improved coverage results in Verification Manager

* Results
— Automation with Verification Manager enabled Microsoft to reduce the variation of test sequences from 10x runtime down to a focused 2x variation.  Additionally, using the coverage reporting to rank and optimize their tests, they increased their regression throughput by 3x!
— With CoverCheck, the Microsoft engineers improved code coverage by 10 – 15% in most hand-coded RTL blocks, saw up to 20% coverage improvement for auto-generated RTL code, and in a matter of hours were able to increase their overall coverage number from 87% to 97%!
— Bottom-line: the customer estimated that they saved 4 man-months on one project with this process

Figure: CoverCheck ROI, from the 2014 Microsoft presentation at ARM TechCon

Taking a step back, success stories like this one – where automated, formal-based applications that require no prior knowledge of formal or assertion-based verification leverage the exhaustive nature of formal analysis to tame once-intractable problems – are becoming more common by the day. In this case, Mentor’s formal-based CoverCheck is clearly the right tool for this specific verification need, literally filling in the gaps in a traditional UVM testbench verification flow. Hence, I believe the overall moral of the story is a simple rule of thumb: when you are grappling with the “last mile problem” of unearthing all the unexpected, yet potentially damaging, corner cases, consider a formal-based application as the best tool for the job. Wouldn’t you agree?

Joe Hupcey III

Reference links:

Direct link to the presentation slides: https://verificationacademy.com/Advanced-Verification-Management-Presentation

ARM Techcon 2014 Proceedings: http://www.armtechcon.com/

Official paper citation:
Advanced Verification Management and Coverage Closure Techniques, Nguyen Le, Microsoft; Harsh Patel, Roger Sabbagh, Darron May, Josef Derner, Mentor Graphics


5 November, 2014

Between 2006 and 2014, the average number of IPs integrated into an advanced SoC increased from about 30 to over 120. In the same period, the average number of embedded processors found in an advanced SoC increased from one to as many as 20. However, increased design size is only one dimension of the growing verification complexity challenge. Beyond this growing-functionality phenomenon are new layers of requirements that must be verified. Many of these verification requirements did not exist ten years ago, such as multiple asynchronous clock domains, interacting power domains, security domains, and complex HW/SW dependencies. Add all these challenges together, and you have the perfect storm brewing.

It’s not just the challenges in design and verification that have been changing, of course. New technologies have been developed to address emerging verification challenges. For example, new automated ways of applying formal verification allow engineers who are not formal experts to take advantage of the significant benefits of formal verification. New technology for stimulus generation has also been developed that allows verification engineers to develop complex stimulus scenarios 10x more efficiently than with directed tests and to execute those tests 10x more efficiently than with pure-random generation.

It’s not just technology, of course. Along with new technologies, new methodologies are needed to make adoption of new technologies efficient and repeatable. The UVM is one example of these new methodologies that make it easier to build complex and modular testbench environments by enabling reuse – both of verification components and knowledge.

The Verification Academy website provides great resources for learning about new technologies and methodologies that make verification more effective and efficient. This year, we tried something new and took Verification Academy on the road with live events in Austin, Santa Clara, and Denver. It was great to see so many verification engineers and managers attending to learn about new verification techniques and share their experiences applying these techniques with their colleagues.


If you weren’t able to attend one of the live events – or if you did attend and really want to see a particular session again – you’re in luck. The presentations from the Verification Academy Live seminars are now available on the Verification Academy site:

  • Navigating the Perfect Storm: New School Verification Solutions
  • New School Coverage Closure
  • New School Connectivity Checking
  • New School Stimulus Generation Techniques
  • New School Thinking for Fast and Efficient Verification using EZ-VIP
  • Verification and Debug: Old School Meets New School
  • New Low Power Verification Techniques
  • Establishing a company-wide verification reuse library with UVM
  • Full SoC Emulation from Device Drivers to Peripheral Interfaces

You can find all the sessions via the following link:

https://verificationacademy.com/seminars/academy-live


7 May, 2014

My Feb. 4 post introduced Mentor Graphics’ three-step FPGA verification process intended to help design teams get out of the reprogrammable lab more effectively. Since then, I’ve engaged FPGA vendors, design managers and engineers to explain the process, paying special attention to the merits and technical detail for injecting automation into any FPGA verification environment, the hallmark of Mentor’s process. The feedback from these conversations helped me to develop a series of technical webinars, now available for free and on-demand. Check them out and let us know what you think in the comments below. My hope is the webinars might serve as a starting point for your own conversations on verification of FPGAs, demand for which seems to continue to grow as process nodes shrink.

Injecting Automation into Verification – FPGA Market Trends

Injecting Automation into Verification – Code Coverage

Injecting Automation into Verification – Assertions

Injecting Automation into Verification – Improved Throughput


25 April, 2014

DVCon 2014 Conference Proceedings Published

With record attendance announced for DVCon 2014, one might wonder whether there is really a need to put some of the “Accellera Day” tutorial videos online. With more than 1,000 professionals attending in some capacity, it would be easy to conclude that everyone who needs to know about UVM, and about the developments in its updated version, probably already knows. But a look at just the LinkedIn design and verification forums shows there are tens of thousands who would have benefited had they attended DVCon. Thus, sharing this information more broadly is in order.

UVM Tutorial Video

“UVM – What’s Now and What’s Next” is the title of the DVCon 2014 tutorial on UVM. It covered use cases and pragmatic topics of the current UVM 1.1 standard as well as advanced topics for the next update, UVM 1.2. The presenters covered sequence creation, register layer use, TLM-based communication, test execution, run-time phases, and messaging enhancements.

The tutorial was split into five separate sections delivered by five speakers as follows:

  • Working Group Update: Adam Sherer, Accellera (7 min.)
  • Overview and Library Concepts: John Aynsley, Doulos (36 min.)
  • Stimulus Generation: Shawn Honess, Synopsys (21 min.)
  • UVM Register Layer: Tom Fitzpatrick, Mentor Graphics (36 min.)
  • UVM 1.2 Introduction: Uwe Simm, Cadence Design Systems (25 min.)

You can find more information about the online tutorial videos here. Registration is required, but there is no charge for access. Once you have registered, you will get links to each of the five sections. You can stream them or download them for offline access as you wish. They are suitable for viewing on your computer or mobile devices.

DVCon 2014 Proceedings

DVCon 2014 was a full conference; it was more than just the Accellera Day UVM Tutorial. And in keeping with DVCon tradition, the conference proceedings are made available to all, without charge, several months after the conference. If you visit the DVCon history area, you will find that the 2014 proceedings have been published. What I like about the DVCon proceedings is that not only are the papers published, but the slides presented at the conference often accompany them.

As an example, if you were interested in the DVCon 2014 Best Oral Presentation paper and presentation (Kelly D. Larson from NVIDIA, “Determining Test Quality through Dynamic Runtime Monitoring of SystemVerilog Assertions,” by the way), you will now find both the paper and the presentation available online here.

For all those who did not make it to DVCon 2014, or who were there and could not see everything, the proceedings are now online and the first of the Accellera Day tutorial videos has been published. Accellera is busy readying its other tutorial videos. I’ll share information on their availability as they appear in the weeks and months ahead.


4 February, 2014

Marketing teams at FPGA vendors have been busy as the silicon nanometer geometry race escalates. Altera is “delivering the unimaginable” while Xilinx is offering “all programmable SoCs” to design centers. It’s clear that the SoC has become more accessible to a broader market today and that FPGA vendors have staked out a solid technology roadmap for the near future. Do marketing messages surrounding the geometry race affect the day-to-day life of engineers, and if so, how – especially when it comes to verification?

An excellent whitepaper from Altera, “The Breakthrough Advantage for FPGAs with Tri-Gate Technology,” covers Altera’s Stratix 10 FPGAs and SoCs. The paper describes verification challenges in this new expanded market this way: “Although current generation FPGAs require a rigorous simulation verification methodology rivaling ASICs, the additional lab testing and ability to reprogram FPGAs save substantial manpower investment. The overall cost of ownership must be considered when comparing an FPGA whose component price is higher than an ASIC of similar complexity.” I believe you can use this statement to engage your management in a discussion about better verification processes.

Xilinx also has excellent published technical resources. Its recent UltraScale backgrounder describes how they are solving the challenges in implementing a design with their reprogrammable silicon. Clearly Xilinx has made an impressive investment to make it easier to implement a design with its FPGA UltraScale products. Improvements include ASIC-like clocking and annealing dataflow bottlenecks without compromising performance. Xilinx also describes improvements when using its Vivado design suite, particularly when it comes to in-lab design bring up.

For other FPGA insights, it’s also worth checking out Electronics Engineering Journal’s recent article “Proliferating Programmability in 2014,” which weighs in on the long-term future of FPGA tool flows even though, as Kevin Morris sees it, EDA seems to have abandoned the market. (Kevin, I’m here to tell you you’re wrong.)

Do you think it’s inevitable that your FPGA team will first struggle to make it across the verification finish line before adopting a more process-oriented verification flow like the ASIC market demands? It’s not. I base this conclusion on the many conversations I’ve had with FPGA designers, their managers, sales engineers and many other talented people in this market over the years. Yes, there are significant challenges in FPGA design, but not all of them are technology related. With some emotion, one engineer remarked that debugging the same type of issue over and over in the hardware lab and expecting a different outcome was insane. (He’s right.) Others say they need specific ROI information for their management to even accept their need for change. Still others state that had they only known the solutions I talked about in my seminar a year ago, they would not have spent months and months bringing up their design in the lab.

With my peers here at Mentor Graphics, I have developed a three-step verification flow that includes coverage, assertions and improved throughput. I’ll write about this flow and related issues in the weeks ahead here on this blog. The flow is built on fundamental verification technologies that benefit the broad FPGA market. The goal, in developing the technology and writing about it here, has been to provide practical solutions and help more FPGA teams cross the verification gap.

In the meantime, what are your stories? Are you able to influence your management into adopting advanced technology to aid lab bring-up? Is your management’s bias towards lower cost and faster implementation (at the expense of verification)? Let me know in the comments or, if you prefer, by e-mail: joe_rodriguez@mentor.com.


30 October, 2013

MENTOR GRAPHICS AT ARM TECHCON

This week ARM® TechCon® 2013 is being held at the Santa Clara Convention Center from Tuesday, October 29 through Thursday, October 31 – but don’t worry, there’s nothing to be scared about. The theme is “Where Intelligence Counts”, and in fact, as a platinum sponsor of the event, Mentor Graphics is excited to present no fewer than ten technical and training sessions about using intelligent technology to design and verify ARM-based designs.

My personal favorite is scheduled for Halloween Day at 1:30pm, where I’ll tell you about a trick that Altera used to shave several months off their schedule, while verifying the functionality and performance of an ARM AXI™ fabric interconnect subsystem.  And the real treat is that they achieved first silicon success as well.  In keeping with the event’s theme, they used something called “intelligent” testbench automation.

And whether you’re designing multi-core designs with AXI fabrics, wireless designs with AMBA® 4 ACE™ extensions, or even enterprise computing systems with ARM’s latest AMBA® 5 CHI™ architecture, these sessions show you how to take advantage of the very latest simulation and formal technology to verify SoC connectivity, ensure correct interconnect functional operation, and even analyze on-chip network performance.

On Tuesday at 10:30am, Gordon Allan described how an intelligent performance analysis solution can leverage the power of an SQL database to analyze and verify interconnect performance in ways that traditional verification techniques cannot. He showed a wide range of dynamic visual representations produced by SoC regressions that can be quickly and easily manipulated by engineers to verify performance and avoid expensive overdesign.

Right after Gordon’s session, Ping Yeung discussed using intelligent formal verification to automate SoC connectivity checking, overcoming the observability and controllability challenges faced by simulation-only solutions. Formal verification can examine all possible scenarios exhaustively, verifying on-chip bus connectivity, pin multiplexing of constrained interfaces, and connectivity of clock and reset signals, as well as power control and scan test signal connectivity.
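
To make the connectivity-checking idea concrete, here is a minimal SystemVerilog sketch of the kind of property a formal tool can prove exhaustively. The pad, mux-select, and block-level signal names are hypothetical and not taken from any particular ARM-based SoC.

module connectivity_checks (
  input logic       clk,
  input logic [1:0] pinmux_sel,
  input logic       uart_tx,        // block-level source signal
  input logic       pad_gpio7_out   // top-level pad the mux should drive
);

  // When the pin mux selects function 0, the chip pad must follow the
  // UART block's TX output. A formal tool can prove this for every
  // reachable state, which simulation alone cannot guarantee.
  uart_tx_connected : assert property (@(posedge clk)
    (pinmux_sel == 2'b00) |-> (pad_gpio7_out == uart_tx));

endmodule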

On Wednesday, Mark Peryer shows how to verify AMBA interconnect performance using intelligent database analysis and intelligent testbench automation for traffic scenario generation.  These techniques enable automatic testbench instrumentation for configurable ARM-based interconnect subsystems, as well as highly-efficient dense, medium, sparse, and varied bus traffic generation that covers even the most difficult to achieve corner-case conditions.

And finally, also on Halloween, Andy Meyer offers an intelligent workshop for those who are designing high performance systems with hierarchical and distributed caches, using either ARM’s AMBA 5 CHI architecture or ARM’s AMBA 4 ACE architecture. He’ll cover topics including how caching works, how to improve caching performance, and how to verify cache coherency.

For more information about these sessions, be sure to visit the ARM TechCon program website.  Or if you miss any of them, and would like to learn about how this intelligent technology can help you verify your ARM designs, don’t be afraid to email me at mark_olen@mentor.com.   Happy Halloween!


19 August, 2013

Verification Techniques & Technologies Adoption Trends

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for background on the study, click here).

In my previous blog (Part 9 click here), I focused on some of the 2012 Wilson Research Group findings related to design and verification language and library trends. In this blog, I present verification techniques and technologies adoption trends, as identified by the 2012 Wilson Research Group study.

An interesting trend we are starting to see is that the electronic industry is maturing its functional verification processes, whether designs are targeted at IC/ASIC or FPGA implementations. This blog provides data to support this claim. An interesting question you might ask is, “What is driving this trend?” In some of my earlier blogs (click here for Part 1 and Part 2) I showed that design complexity is increasing in terms of design sizes and the number of embedded processors. In addition, I’ve presented trend data that showed an increase in total project time and effort spent in verification (click here for Part 5 and Part 6). My belief is that the industry is being forced to mature its functional verification processes to address increasing complexity and effort.

Simulation Techniques Adoption Trends

Let’s begin by comparing non-FPGA adoption trends related to various simulation techniques from the 2007 Far West Research study (in blue) with the 2012 Wilson Research Group study (in green), as shown in Figure 1.

Figure 1. Simulation-based technique adoption trends for non-FPGA designs

You can see that the study finds the industry increasing its adoption of various functional verification techniques for non-FPGA targeted designs. Clearly the industry is maturing its processes as I previously claimed.

For example, in 2007, the Far West Research Group found that only 48 percent of the industry performed code coverage. This surprised me. After all, HDL-based code coverage is a technology that has been around since the early 1990s. However, I did informally verify the 2007 results through numerous customer visits and discussions. In 2012, we see that the industry adoption of code coverage has increased to 70 percent.

In 2007, the Far West Research Group study found that 37 percent of the industry had adopted assertions for use in simulation. In 2012, we find that industry adoption of assertions had increased to 63 percent. I believe that the maturing of the various assertion language standards has contributed to this increased adoption.

In 2007, the Far West Research Group study found that 40 percent of the industry had adopted functional coverage for use in simulation. In 2012, the industry adoption of functional coverage had increased to 66 percent. Part of this increase in functional coverage adoption has been driven by the increased adoption of constrained-random simulation, since you really can’t effectively do constrained-random simulation without doing functional coverage.
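
The link between constrained-random stimulus and functional coverage is easy to see in a small sketch. The packet fields, constraints, and bins below are hypothetical, but they show why randomized stimulus without a sampled covergroup leaves you with no measure of what was actually exercised.

class packet;
  rand bit [3:0]  kind;
  rand bit [15:0] len;

  // Constrain the generator to legal stimulus only
  constraint legal_c { kind inside {[0:9]}; len inside {[1:1500]}; }

  // Functional coverage embedded with the stimulus: sampling after each
  // randomization records which scenarios were actually hit
  covergroup pkt_cg;
    kind_cp : coverpoint kind;
    len_cp  : coverpoint len { bins small = {[1:64]};
                               bins med   = {[65:512]};
                               bins large = {[513:1500]}; }
  endgroup

  function new();
    pkt_cg = new();  // embedded covergroups are constructed in the class constructor
  endfunction

  function void sample_cov();
    pkt_cg.sample();
  endfunction
endclass

module tb;
  packet pkt = new();
  initial begin
    repeat (1000) begin
      void'(pkt.randomize());
      pkt.sample_cov();  // without this call, the random stimulus goes unmeasured
    end
  end
endmodule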

Now let’s compare FPGA adoption trends related to various simulation techniques from the 2010 Far West Research study (in pink) with the 2012 Wilson Research Group study (in red), as shown in Figure 2.

Figure 2. Simulation-based technique adoption trends for FPGA designs

Again, you can clearly see that the industry is increasing its adoption of various functional verification techniques for FPGA targeted designs. This past year I have spent a significant amount of time in discussions with FPGA project managers around the world. During these discussions, most managers mention the drive to improve the verification process within their projects due to the rising complexity of this class of designs. The Wilson Research Group data supports these claims.

In fact, Figure 3 illustrates this maturing trend in the FPGA space, where we saw a 15 percent increase in the adoption of RTL simulation and an 8.5 percent increase in the adoption of code coverage. For complex FPGA designs, the traditional approach of “burn and churn” and debug in the lab is no longer a viable option. Nonetheless, it is still somewhat alarming that 31 percent of the FPGA study participants work on projects that perform no RTL simulation.

Figure 3. FPGA projects maturing their verification processes

Signoff Criteria Trends

We saw earlier in this blog the increased adoption of coverage techniques in the industry. Coverage has become a major component of a project’s verification signoff criteria. In Figure 4, we see how coverage has increased in importance in verification signoff criteria within the past five years, while other decision attributes have declined in terms of importance.

Figure 4. Non-FPGA functional verification signoff criteria trends

We see the same trends for FPGA designs, as shown in Figure 5.

Figure 5. FPGA functional verification signoff criteria trends

In my next blog (click here), I plan to continue the discussion related to adoption of various verification technologies and techniques as identified by the 2012 Wilson Research Group study.


12 August, 2013

Language and Library Trends (Continued)

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 8 click here), I focused on design and verification language trends, as identified by the Wilson Research Group study. This blog presents additional trends related to verification language and library adoption trends.

You might note that for some of the language and library data I present, the percentage sums to more than 100 percent. The reason for this is that some participants’ projects use multiple languages or multiple testbench methodologies.

Testbench Methodology Class Library Adoption

Now let’s look at testbench methodology and class library adoption for IC/ASIC designs. Figure 1 shows the trends in terms of methodology and class library adoption by comparing the 2010 Wilson Research Group study (in blue) with the 2012 study (in green). Today, we see a downward trend in terms of adoption of all testbench methodologies and class libraries with the exception of UVM, which has increased by 486 percent since the fall of 2010. The study participants were also asked what they plan to use within the next 12 months, and based on the responses, UVM is projected to increase an additional 46 percent.

Figure 1. Methodology and class library trends

Figure 2 shows the adoption of testbench methodologies and class libraries for FPGA designs (in red). We do not have sufficient data to show prior adoption trends in the FPGA space, but we anticipate that our future studies will enable us to do this. However, we did ask the FPGA study participants which testbench methodologies and class libraries they were planning to adopt within the next 12 months. Based on these responses, we anticipate that UVM adoption will increase by 40 percent, and OVM adoption by 24 percent, in the FPGA space.

Figure 2. Methodology and class library trends

Assertion Languages and Libraries

Finally, let’s examine assertion language and library adoption for IC/ASIC designs. The Wilson Research Group study found that 63 percent of all the IC/ASIC participants have adopted assertion-based verification (ABV) as part of their verification strategy. The data presented in this section shows the assertion language and library adoption trend related to those participants who have adopted ABV.

Figure 3 shows the trends in terms of assertion language and library adoption by comparing the 2010 Wilson Research Group study (in blue), the 2012 Wilson Research Group study (in green), and the projected adoption trends within the next 12 months (in purple). The adoption of SVA continues to increase, while other assertion languages and libraries either remain flat or decline.

Figure 3. Assertion language and library adoption for Non-FPGA designs

Figure 4 shows the adoption of assertion language trends for FPGA designs (in red). Again, we do not have sufficient data to show prior adoption trends in the FPGA space, but we anticipate that our future studies will enable us to do this. We did ask the FPGA study participants which assertion languages and libraries they planned to adopt within the next 12 months. Based on these responses, we anticipate an increase in adoption for OVL, SVA, and PSL in the FPGA space within the next 12 months.

Figure 4. Trends in assertion language and library adoption for FPGA designs

In my next blog (click here), I plan to focus on the adoption of various verification technologies and techniques used in the industry, as identified by the 2012 Wilson Research Group study.


5 August, 2013

Language and Library Trends

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 7 click here), I focused on some of the 2012 Wilson Research Group findings related to testbench characteristics and simulation strategies. In this blog, I present design and verification language trends, as identified by the Wilson Research Group study.

You might note that for some of the language and library data I present, the percentage sums to more than one hundred percent. The reason for this is that some participants’ projects use multiple languages.

RTL Design Languages

Let’s begin by examining the languages used for RTL design. Figure 1 shows the trends in terms of languages used for design, by comparing the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green), as well as the projected design language adoption trends within the next twelve months (in purple) as identified by the study participants. Note that design language adoption is declining for most of the languages, with the exception of SystemVerilog, whose adoption continues to increase.

Also, it’s important to note that this study focused on languages used for RTL design. We have conducted a few informal studies related to languages used for architectural modeling—and it’s not too big of a surprise that we see increased adoption of C/C++ and SystemC in that space. However, since those studies have (thus far) been informal and not as rigorously executed as the Wilson Research Group study, I have decided to withhold that data until a more formal blind study can be executed related to architectural modeling and virtual prototyping.

Figure 1. Trends in languages used for Non-FPGA design

Let’s now look at the languages used specifically for FPGA RTL design. Figure 2 shows the trends in terms of languages used for FPGA design, by comparing the 2012 Wilson Research Group study (in red) with the projected design language adoption trends within the next twelve months (in purple).

Figure 2. Trends in languages used for FPGA design

It’s not too big of a surprise that VHDL is the predominant language used for FPGA RTL design, although we are starting to see increased interest in SystemVerilog.

Verification Languages

Next, let’s look at the languages used to verify Non-FPGA designs (that is, languages used to create simulation testbenches). Figure 3 shows the trends in terms of languages used to create simulation testbenches by comparing the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green).

Figure 3. Trends in languages used in verification to create Non-FPGA simulation testbenches

The study revealed that verification language adoption is declining for most of the languages with the exception of SystemVerilog whose adoption is increasing. In fact, SystemVerilog adoption increased by 8.3 percent between 2010 and 2012.

Figure 4 provides a different analysis of the data by partitioning the projects by design size, and then calculating the adoption of SystemVerilog for creating testbenches by size. The design size partitions are represented as: less than 5M gates, 5M to 20M gates, and greater than 20M gates. Obviously, we find that the larger the design size, the greater the adoption of SystemVerilog for creating testbenches. Yet, probably the most interesting observation we can make from examining Figure 4 is related to smaller designs that are less than 5M gates. Here we see that 58.8 percent of the industry has adopted SystemVerilog for verification. In other words, it is safe to say that SystemVerilog for verification has become mainstream today and not just limited to early adopters or leading-edge design projects.

Figure 4. SystemVerilog (for verification) adoption by design size

Let’s now look at the languages used specifically to verify FPGA designs, that is, to create FPGA simulation testbenches. Figure 5 shows these trends by comparing the 2012 Wilson Research Group study (in red) with the projected verification language adoption trends within the next twelve months (in purple).

Figure 5. Trends in languages used in verification to create FPGA simulation testbenches

In my next blog (click here), I’ll continue the discussion on design and verification language trends as revealed by the 2012 Wilson Research Group Functional Verification Study.


29 July, 2013

Testbench Characteristics and Simulation Strategies

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for background on the study, click here).

In my previous blog (click here), I focused on the controversial topic of effort spent in verification. In this blog, I focus on some of the 2012 Wilson Research Group findings related to testbench characteristics and simulation strategies. Although I am shifting the focus away from verification effort, I believe that the data I present in this blog is related to my previous blog and really needs to be considered when calculating effort.

Time Spent in full-chip versus Subsystem-Level Simulation

Let’s begin by looking at Figure 1, which shows the percentage of time (on average) that a project spends in full-chip or SoC integration-level verification versus subsystem and IP block-level verification. The mean time performing full chip verification is represented by the dark green bar, while the mean time performing subsystem verification is represented by the light green bar. Keep in mind that this graph represents the industry average. Some projects spend more time in full-chip verification, while other projects spend less time.

Figure 1. Mean time spent in full chip versus subsystem simulation

Number of Tests Created to Verify the Design in Simulation

Next, let’s look at Figure 2, which shows the number of tests various projects create to verify their designs using simulation. The graph represents the findings from the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). Note that the curves look remarkably similar over the past five years. The median number of tests created to verify the design is within the range of (>200 – 500) tests. It is interesting to see a sharp percentage increase in the number of participants who claimed that fewer tests (1 – 100) were created to verify a design in 2012. It’s hard to determine exactly why this was the case—perhaps it is due to the increased use of constrained random (which I will talk about shortly). Or perhaps there has been an increased use of legacy tests. The study was not designed to go deeper into this issue and try to uncover the root cause. This is something I intend to study informally next year through discussions with various industry thought leaders.

Figure 2. Number of tests created to verify a design in simulation

Percentage of Directed Tests versus Constrained-Random Tests

Now let’s compare the percentage of directed testing that is performed on a project to the percentage of constrained-random testing. Of course, in reality there is a wide range in the amount of directed and constrained-random testing that is actually performed on various projects. For example, some projects spend all of their time doing directed testing, while other projects combine techniques and spend part of their time doing directed testing—and the other part doing constrained-random. For our comparison, we will look at the industry average, as shown in Figure 3. The average percentage of tests that were directed is represented by the dark green bar, while the average percentage of tests that are constrained-random is represented by the light green bar.

Figure 3. Mean directed versus constrained-random testing performed on a project

Notice how the percentage mix of directed versus constrained-random testing has changed over the past two years. Today we see that, on average, a project performs more constrained-random simulation. In fact, between 2010 and 2012 there has been a 39 percent increase in the use of constrained-random simulation on a project. One driving force behind this increase has been the maturing and acceptance of both the SystemVerilog and UVM standards—since these two standards facilitate an easier implementation of a constrained-random testbench. In addition, today we find that an entire ecosystem has emerged around both the SystemVerilog and UVM standards. This ecosystem consists of tools, verification IP, and industry expertise, such as consulting and training.
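
As a small illustration of how the two standards ease constrained-random testbench construction, here is a hedged sketch of a UVM sequence item. The transaction fields, constraint, and address range are hypothetical, but the pattern of rand fields, constraints, and factory registration is what the methodology standardizes so that each project does not have to reinvent it.

import uvm_pkg::*;
`include "uvm_macros.svh"

// Hypothetical bus transaction: the UVM base class and macros supply the
// factory registration and field automation, so only the DUT-specific
// randomizable content needs to be written by the project team.
class bus_txn extends uvm_sequence_item;
  rand bit [31:0] addr;
  rand bit [31:0] data;
  rand bit        is_write;

  // Keep randomized addresses inside the (hypothetical) peripheral aperture
  constraint addr_range_c { addr inside {[32'h4000_0000 : 32'h4000_FFFF]}; }

  `uvm_object_utils_begin(bus_txn)
    `uvm_field_int(addr,     UVM_ALL_ON)
    `uvm_field_int(data,     UVM_ALL_ON)
    `uvm_field_int(is_write, UVM_ALL_ON)
  `uvm_object_utils_end

  function new(string name = "bus_txn");
    super.new(name);
  endfunction
endclass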

Nonetheless, even with the increased adoption of constrained-random simulation on a project, you will find that constrained-random simulation is generally only performed at the IP block or subsystem level. For the full SoC level simulation, directed testing and processor-driven verification are the prominent simulation-based techniques in use today.

Simulation Regression Time

Now let’s look at the time that various projects spend in a simulation regression. Figure 4 shows the trends in terms of simulation regression time by comparing the 2007 Far West Research study (in gray) with the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). There really hasn’t been a significant change in the time spent in a simulation regression within the past three years. You will find that some teams spend days or even weeks in a regression. Yet today, the industry median is between 8 and 16 hours, and for many projects, there has been a decrease in regression time over the past few years. Of course, this is another example of where deeper analysis is required to truly understand what is going on. To begin with, these questions should probably be refined to better understand simulation times related to IP versus SoC integration-level regressions. We will likely do that in future studies—with the understanding that we will not be able to show trends (or at least not initially).

Figure 4. Simulation regression time trends

In my next blog (click here), I’ll focus on design and verification language trends, as identified by the 2012 Wilson Research Group study.

