Posts Tagged ‘UVM’

9 October, 2014

DVCon India, held in September 2014 in Bangalore, built on the Indian SystemC User Group meeting events, adding a Design & Verification track alongside the system-level design (ESL) track that has been popular for many years. The main stage played host to the keynote presentations, opening ceremonies and best paper and poster awards.

Several DVCon India keynote presentations, which I will cover in more depth later, touched on the emerging use of virtual platforms in system design and the growing impact India has on design verification. In particular, Mentor’s CEO, Wally Rhines, contrasted Wilson Research survey data on design verification from India with that from the rest of the world. Strong adoption of SystemVerilog and its popular methodology, the Universal Verification Methodology (UVM), was clear from the survey results Wally shared.

But even beyond SystemVerilog and UVM, the discussion of what could come next anchored the first day of DVCon India: Accellera’s exploration of “portable stimulus.”  Accellera has a group exploring whether the industry is ready to start a standards project on this concept.  And when DVCon India attendees were offered an opportunity to learn about it on the first day, the multi-company (Mentor Graphics, Breker & CVC) tutorial on the topic was standing room only.

DVCon Europe – The Stage is Set!

A tutorial slot at DVCon Europe will be devoted to the same topic that proved popular at DVCon India.  DVCon Europe attendees will find that Tutorial T9, “Creating Portable Tests with a Graph-Based Test Specification,” covers this topic.  Technical representatives from Mentor Graphics and Breker will cover aspects of portable stimulus and offer examples of how it can work.  And early application of the technology will be covered by a representative from IBM.  To cover the topic appropriately, we have modified the presenters listed in the official printed program, and full details are available online.  The presenters will be, in this order:

  • Holger Horbach, IBM, Germany
  • Frederic Krampac, Breker, France
  • Staffan Berg, Mentor Graphics, Sweden

Please join us for this tutorial and the ensuing discussion.  Verification productivity is a pressing issue, and our ability to better control and create stimulus is a step toward addressing the verification challenges we all face.

One last note: the concept of “portable stimulus” is language agnostic, so no matter which language you use for design and verification, the intention is that this technology will be able to help.  The tutorial will help you understand how using a graph-based approach enables the highest degree of verification re-use, from IP block to sub-system to full-system level verification. You will see how it supports verification in SystemVerilog, Verilog, VHDL, C, C++, assembly, and even other non-traditional base languages. And it can also be extended from simulation to emulation to FPGA prototyping, and even silicon validation.

I look forward to seeing you at DVCon Europe in Munich!  And if you have not yet registered, please do so to secure your seat.


9 July, 2014

Accellera has announced the completion of a multi-year effort to produce the latest edition of the Universal Verification Methodology (UVM).  In completing this effort, the UVM 1.2 Class Reference Document was approved as an Accellera standard, and the UVM Working Group has supplied an accompanying open-source reference implementation.  Questa supports UVM 1.2.

In addition to the resources you can download from Accellera, additional information on UVM 1.2, including HTML documentation, can be found at the Verification Academy.

If you are a user of UVM 1.1 and have not been part of the UVM 1.2 development effort, you should know your peers have been busy in the years since the stabilization and completion of UVM 1.1, driving global adoption of UVM and working to add, enhance and extend it.  In UVM 1.2, messaging is now object-oriented, sequences can automatically raise and drop objections, the register layer can now control transaction order within bursts, and numerous UVM 1.1 bugs have been fixed to improve quality.
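To give a flavor of one of those changes in code, below is a minimal sketch of the automatic objection mechanism, using the UVM 1.2 set_automatic_phase_objection() call on a sequence. The class and item names are hypothetical, invented purely for illustration.

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class my_item extends uvm_sequence_item;
      rand bit [7:0] data;
      `uvm_object_utils(my_item)
      function new(string name = "my_item"); super.new(name); endfunction
    endclass

    class my_seq extends uvm_sequence #(my_item);
      `uvm_object_utils(my_seq)
      function new(string name = "my_seq");
        super.new(name);
        // UVM 1.2: the sequence raises a phase objection when it starts and
        // drops it when it finishes -- no manual raise/drop bookkeeping.
        set_automatic_phase_objection(1);
      endfunction
      task body();
        my_item item;
        `uvm_do(item)
      endtask
    endclass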

Backward Incompatibility

All these changes come with a cost to the current UVM 1.1 user community.  When Accellera announced UVM 1.2 availability, it also disclosed that some of the new features introduce backward incompatibility.  To reduce those issues, Accellera is making release notes and a one-way conversion script part of the UVM 1.2 kit to ease the migration path forward.

If you follow the Verification Academy Cookbook rules, you will probably not see any impact from the backward compatibility issues.  And if you control your total verification environment, you will probably find it simpler to migrate forward as well.  Those who depend on outside resources (like verification IP) will need to make sure those resources migrate to UVM 1.2 before they can migrate themselves.  Mixing UVM 1.1 and UVM 1.2 was not considered by the Accellera UVM Working Group and is fraught with unknown issues; we consider the migration an all-or-nothing proposition.  If you have multi-division, multi-company projects underway, it would be prudent to plan your move to UVM 1.2 with care, at the conclusion of projects and when all suppliers and participating teams can migrate together.

Public Review Period

Accellera seeks your input and feedback on UVM 1.2.  To support this, a public review forum on the Accellera website has been established to allow users to catalog issues, ask questions and generally offer feedback to help improve UVM 1.2 quality.

The public review process will end on October 1, 2014.  We encourage users to take the time now to test UVM 1.2 in their own environments and share their feedback to expedite the migration to UVM 1.2.

Path to IEEE

Public feedback, along with further Accellera member testing, will be taken into account in updating UVM 1.2 prior to a committed hand-off to the IEEE later this year for further standardization.  As this path unfolds, I will share updates on the standardization effort in the IEEE.

Verification Academy DAC 2014 UVM 1.2 Presentation

You will find many resources around the world on UVM 1.2.  At DAC 2014, the Verification Academy booth sponsored a session on UVM 1.2 titled  “UVM: What’s New, What’s Next, and Why You Care.”  If you did not attend DAC, you can still download the presentation and watch a video replay of it if you are a Verification Academy “full access” member (free registration required; restrictions apply).

The presentation by Tom Fitzpatrick goes into detail on UVM 1.2.  Importantly, Tom’s presentation includes a discussion of what you should care about today.  You may find that software is a big issue, and his thesis challenges one to ask whether UVM 1.2 is stuck in the past rather than addressing what should be addressed next.  I invite you to download the presentation, watch the video, and share your thoughts with me. What do you think?


25 April, 2014

DVCon 2014 Conference Proceedings Published

With record attendance announced for DVCon 2014, one might wonder if there is really a need to put some of the “Accellera Day” tutorial videos online.  With more than 1,000 professionals attending in some capacity, it would be easy to conclude that everyone who needs to know about UVM, and the developments in its updated version, probably does.  But looking at just the LinkedIn design and verification forums, one realizes there are tens of thousands who would have benefited had they attended DVCon.  Thus, sharing this information more broadly is in order.

UVM Tutorial Video

“UVM – What’s Now and What’s Next” is the title of the DVCon 2014 tutorial on UVM.  It covered use cases and pragmatic topics of the current UVM 1.1 standard as well as advanced topics for the next update, UVM 1.2.  The presenters covered sequence creation, register layer use, TLM-based communication, test execution, run-time phases and messaging enhancements.

The tutorial was split into five separate sections delivered by five speakers as follows:

  • Working Group Update: Adam Sherer, Accellera (7 min.)
  • Overview and Library Concepts: John Aynsley, Doulos (36 min.)
  • Stimulus Generation: Shawn Honess, Synopsys (21 min.)
  • UVM Register Layer: Tom Fitzpatrick, Mentor Graphics (36 min.)
  • UVM 1.2 Introduction: Uwe Simm, Cadence Design Systems (25 min.)

You can find out more information about the online tutorial videos here.  Registration is required, but there is no charge for access.  Once you have registered, you will get links to each of the five sections.  You can stream them or download them for offline access as you wish.  They are suitable for viewing on your computer or mobile devices.

DVCon 2014 Proceedings

DVCon 2014 was a full conference; it was more than just the Accellera Day UVM Tutorial.  And in keeping with DVCon tradition, the conference proceedings are made available to all, without charge, several months after the conference.  If you visit the DVCon history area, you will find the 2014 proceedings have been published.  What I like about the DVCon proceedings is that not only are the papers published, but the slides presented at the conference often accompany them.

As an example, if you were interested in the DVCon 2014 Best Oral Presentation paper and presentation (Kelly D. Larson from NVIDIA, “Determining Test Quality through Dynamic Runtime Monitoring of SystemVerilog Assertions,” by the way), you will now find both the paper and the presentation available online here.

For all those who did not make it to DVCon 2014, or who were there and could not see everything, the proceedings are now online and the first of the Accellera Day tutorial videos is published. Accellera is busy readying its other tutorial videos.  I’ll share information on their availability as they appear in the weeks and months ahead.


10 April, 2014

It’s always fun to take the wraps off of solutions we have been hard at work developing.  The global team of Mentor Graphics engineers has spent considerable time and energy to bring the next level of SoC design and verification productivity to what seems to be a never-ending response to Moore’s Law.  As silicon feature sizes get smaller, design sizes get larger and the verification problem mushrooms.  But you know that.  These changes are the constants that drive the need for continued innovation.  Our next level of innovation for design verification is embodied in the Mentor Enterprise Verification Platform (EVP), which we recently announced.

Gary Smith recently published Keeping Up with the Emulation Market, which lays out the fact that verification platforms are unifying, with emulation now a pivotal element not just for microprocessor design success but for Multi-Platform Based SoC design success as well.  The need to bring software debug into the loop with early hardware concepts is a verification challenge that must be supported too.  Pradeep Chakraborty reported on the point made by Anil Gupta of Applied Micro at the UVM 1.2 Day in Bangalore, where Anil implored, “Think about the block, the subsystem and the top.”  The point was that software is often overlooked or under-tested prior to committing to hardware implementation, implying that our focus on UVM leaves us verifying no higher than where UVM takes us – and that is not the “top” of the SoC, which mandates that software be part of the verification plan.

Path to Success

With the Mentor EVP, we address these issues.  We bring simulation and emulation together in a unified platform.  Software debug on conceptual hardware is supported to address verification at the “top.”  Gary’s report concludes by wondering how easy access to emulation will be supported for the masses.  That too is solved in the Mentor EVP, using VirtuaLAB, which can be hosted in data centers along with the emulator, versus complex, one-off lab setups that lock an emulator to a design and lock your global team of software developers out of collaborating.  The Mentor EVP brings emulation to the masses in a 24×7 world.

With big designs come big data and complex debug tasks.  These debug tasks are handled by the new Mentor Visualizer Debug Environment, which has native UVM and SystemVerilog class-based debug capabilities and low-power UPF debug support to easily pinpoint design errors. All of this works in both interactive and post-simulation modes for simulation and emulation.  To keep the software team productive, and to get to SoC signoff sooner, the new Veloce OS3 global emulation resourcing technology moves software debug think-time offline to Mentor’s Codelink software debug tool.

And there’s more!  But I’ll leave that for you to discover.  When you have time, visit us here to learn more about the Mentor Enterprise Verification Platform.

Path to Standards

As the move to support Multi-Platform Based SoC design evolves, so do the standards that underpin it.  And as I’ve reported in this blog on the comments of others – and as our own experience shows that UVM can only go so far in Multi-Platform Based SoC verification – we concluded the time is right for the industry to explore the need for new standards.

We announced at DVCon 2014 an offer to take our graph-based test specification into an Accellera committee to help move beyond the limitations of today’s standards.  As our investment in tools, technology and platforms continues, we are keenly aware users want their design and verification data to be as portable as possible.  The Accellera user community members echoed the need to discuss portable stimulus that can take you up and down the design hierarchy from block, to subsystem, to system (“top”) and support the concurrent design of hardware and software.

In support of this, Accellera approved the formation of a Portable Stimulus Specification Proposed Working Group (PWG) to study the validity of and need for a portable stimulus specification.  To that end, join me at the kickoff meeting to launch this activity on Wednesday, May 7, 2014 from 10:00am to 4:00pm Pacific time at the offices of Mentor Graphics in Fremont, CA USA.  If you would like to attend, or would like time on the agenda to discuss technology that would advance the development of a Portable Stimulus Specification or to discuss your objectives/requirements for this group, contact me and I will put you in touch with the meeting organizer.  Accellera PWG meetings are open to all and do not require Accellera membership status to attend.


3 March, 2014

DVCon is always one of my favorite events in our industry, and I am proud to let you know that the latest issue of Verification Horizons is available “hot off the presses” at the Verification Academy to mark the occasion. For those of you attending the conference, please consider this issue as an addendum to the great technical program being offered (especially paper 8.1, “Of Camels and Committees: Standards Should Enable Innovation, Not Strangle It” by Dave Rich and yours truly). For those of you not able to join us at DVCon this year, consider this your consolation prize.

Although they are fewer in number, I’m sure you’ll find the articles in Verification Horizons as informative and useful as any you’ll see at DVCon. In particular, I’d like to make sure you check out these articles by our partners:

  • “Don’t Forget the Little Things That Can Make Verification Easier” by our friend Stu Sutherland of Sutherland HDL
  • “Taming Power-Aware Bugs with Questa Ultra” by SmartPlay Technologies
  • “Using Mentor Questa for pre-silicon validation of IEEE 1149.1-2013 based Silicon Instruments” by Intellitech
  • “Dealing With UVM and OVM Sequences” by eInfochips

If you’re at DVCon, please make sure to stop by the Mentor Graphics booth (#501) to say hi. Please join us on Wednesday for our luncheon presentation at noon, right after Session 8, in which I’ll present my paper mentioned above (that’s right, I’m not above shameless self-promotion). And we’ll wrap up the week with two Mentor-sponsored tutorials on Thursday.

Both of these tutorials feature a mix of Mentor presenters and customers to offer some practical examples that will give you some new ideas for improving your verification process. I hope to see you at DVCon.


11 February, 2014

One of the nice things about DVCon is the update one can get from the developers of IEEE and Accellera standards.  And this year’s DVCon is no exception.  The four days of DVCon begin and end with tutorials that cover updates to popular standards like UVM, UPF, SystemC and more.  For our part, Mentor Graphics is participating in the development and delivery of these updates with our peers.

I have written in the past about the productivity challenges before us in addressing the verification crisis, and about the emergence of machine-to-machine communication and the Internet of Things driving power aware design and verification.  To advance the demands for improved verification and help address the verification crisis, the next round in the Universal Verification Methodology (UVM) standard is being readied for industry adoption.  UVM 1.2, the emerging update, will be covered in some detail in a Monday morning tutorial to help you learn “What’s Now and What’s Next.”  Mentor Graphics’ Tom Fitzpatrick, an Accellera Working Group representative, will present in this tutorial.

UVM 1.2 is an active development project of Accellera and has not yet been released so there is no official standard available for download and use yet.  I’ll share standardization details as they happen.

At the same time on Monday, those who are concerned with power aware design and verification can attend the tutorial on the Unified Power Format (UPF), or, as it is officially called, IEEE 1801™-2013.  The tutorial will cover the full spectrum of UPF capabilities and methodology, from basic to advanced applications.  So if you are new to UPF and want to learn, this is a great tutorial to attend.  And if you are already an expert, the advanced applications of UPF highlighted by companies who have adopted it make this valuable for you as well.  Mentor Graphics’ Erich Marschner, IEEE 1801 Working Group vice-chair, will participate in this tutorial.

UPF is an official IEEE standard.  Have you downloaded your copy yet?  Accellera has worked with the IEEE to provide no-charge access to the official standard.  You can find the UPF standard here.

In the afternoon, there will be a session on case studies in SystemC.  User and vendor presentations will explore use of this standard.  SystemC offers much in the verification space, not just in technology but in learning how to bridge the RTL world with the transaction-level modeling world.  Mentor Graphics’ John Stickley will review what we have learned and how you can apply it to your most pressing verification needs.

SystemC is an official IEEE standard.  Have you downloaded your copy yet?  Under the Accellera agreement with the IEEE, you can download the SystemC standard here.

There is a lot more to DVCon than just the use of current standards and planning adoption of emerging standards.  I encourage you to check out the whole agenda and join me at DVCon 2014 March 3-6.

Mentor Graphics presentations during the conference include:

  • Tuesday Paper Sessions
    • Amit Srivastava – Stepping Into UPF 2.1 World: Easy Solution to Complex Power Estimation
    • Kenneth Bakalar – Interpreting UPF For A Mixed-Signal Design Under Test
    • Gordon Allan – Tried and Tested Speedups for Software-Driven SoC Simulation
  • Tuesday Poster Sessions
    • Rich Edelman – Debugging Communicating Systems: The Blame Game – Blurring the Line Between Performance Analysis and Debug
    • Matthew Ballance – Tackling Random Blind Spots with Strategy-Driven Stimulus Generation
    • Gaurav K. Verma – Supercharge Your Verification Using Rapid Expression Coverage as the Basis of a MC/DC-Compliant Coverage Methodology
    • Andreas Meyer – So You Think You Have Good Stimulus: System-Level Distributed Metrics Analysis and Results
    • Rich Edelman – UVM SchmooVM – I Want My C Tests!
    • Thom Ellis – Are You Really Confident That You Are Getting the Very Best From Your Verification Resources?
    • Jitesh Bansal – Is Your Power Aware Design Really X-Aware?
  • Wednesday Paper Sessions
    • Avidan Efody – Wiretap Your SoC: Why Scattering Verification IPs Throughout Your Design Is A Smart Thing To Do
    • Tom Fitzpatrick – Of Camels and Committees: Standards Should Enable Innovation, Not Strangle It

Mentor Graphics will host its traditional lunch at DVCon on Wednesday, on the theme of Accelerating Verification.  And we have lively panel participants for the Tuesday and Wednesday panels.  And, as always, the Exhibit, CEO Keynote and Panels are open to all at no charge – you just have to REGISTER!

I look forward to seeing you there!


4 October, 2013

We are truly living in the age of SoC design, where 78 percent of all designs today contain one or more embedded processors.  In fact, 56 percent of all designs contain two or more embedded processors, which brings a whole new level of verification challenges—requiring unique solutions.

A great example of this is STMicroelectronics, who recently shared their experience and solution in addressing verification challenges due to rising complexity. In 2012, STMicroelectronics began a pilot project to build what it called the Eagle Reference Design, or ERD. The goal was to see if it would be possible to stitch together three ARM products — a Cortex-A15, a Cortex-A7 and a DMC-400 — into one highly flexible platform, one that customers might eventually be able to tweak based on nothing more than an XML description of the system.

Engineers at STMicroelectronics sought to understand and benchmark the Eagle Reference Design. To speed this benchmarking along, they wanted a verification environment that would link software-based simulation and hardware-based emulation in a common flow.

Their solution was unique, and their story is worth reading. They first built a simulation testbench that relied heavily on verification IP (VIP). Next, the team connected this testbench to a Veloce emulation system via TestBench XPress (TBX) co-modeling software. Running verification required separating all blocks of design code into two domains — synthesizable code, including all RTL, for running on the emulator; and all other modules, which run in the HVL portion of the environment on the simulator (which is connected to the emulator). Throughout the project, the team worked closely with Mentor Graphics to fine-tune the new co-emulation verification environment, which requires that all SoC components be mapped exactly the same way in simulation and emulation.
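For readers unfamiliar with this style of partitioning, the sketch below shows the general dual-top pattern the description implies: a synthesizable top for the emulator and an untimed testbench top for the simulator, communicating only through transaction-level task calls. This is a generic illustration with invented module and signal names — not ST’s actual environment or the TBX API.

    // hdl_top.sv -- synthesizable domain, compiled for the emulator
    module dut(input logic clk, input logic [31:0] data, input logic valid);
      // Stand-in for the real RTL under test.
      always @(posedge clk) if (valid) $display("DUT received %h", data);
    endmodule

    module bfm(input logic clk, output logic [31:0] data, output logic valid);
      // Synthesizable transactor: the per-cycle signal activity stays here,
      // on the emulator side of the boundary.
      initial valid = 1'b0;
      task automatic send_word(input logic [31:0] w);
        @(posedge clk);
        data  <= w;
        valid <= 1'b1;
        @(posedge clk);
        valid <= 1'b0;
      endtask
    endmodule

    module hdl_top;
      logic clk = 0;
      always #5 clk = ~clk;          // clock generation stays on the emulator side
      logic [31:0] data;
      logic        valid;
      bfm u_bfm(.clk, .data, .valid);
      dut u_dut(.clk, .data, .valid);
    endmodule

    // hvl_top.sv -- untimed testbench domain, run on the simulator host
    module hvl_top;
      initial begin
        // Cross the HVL/HDL boundary only with whole transactions; in a real
        // co-emulation flow this call would travel over the co-model channel.
        repeat (4) hdl_top.u_bfm.send_word($urandom());
        $finish;
      end
    endmodule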

Because the reference design was not bound to any particular project, the main goal was not to arrive at the complete verification of the design but rather to do performance analysis and establish verification methodologies and techniques that would work in the future. In this they succeeded, agreeing that when they eventually try this sort of combined approach on a real project, they will be able to port the verification environment to the emulator more or less seamlessly.

This is a great success story worth reading on how STMicroelectronics combined Questa simulation, Mentor verification IP (VIP), and Veloce emulation to speed up their benchmarking verification process. Check out the full story here!


19 August, 2013

Verification Techniques & Technologies Adoption Trends

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for background on the study, click here).

In my previous blog (Part 9 click here), I focused on some of the 2012 Wilson Research Group findings related to design and verification language and library trends. In this blog, I present verification techniques and technologies adoption trends, as identified by the 2012 Wilson Research Group study.

An interesting trend we are starting to see is that the electronic industry is maturing its functional verification processes, whether they are targeting their designs at IC/ASIC or FPGA implementations. This blog provides data to support this claim. An interesting question you might ask is, “What is driving this trend?” In some of my earlier blogs (click here for Part 1 and Part 2) I showed that design complexity is increasing in terms of design sizes and number of embedded processors. In addition, I’ve presented trend data that showed an increase in total project time and effort spent in verification (click here for Part 5 and Part 6). My belief is that the industry is being forced to mature its functional verification processes to address increasing complexity and effort.

Simulation Techniques Adoption Trends

Let’s begin by comparing non-FPGA adoption trends related to various simulation techniques from the 2007 Far West Research study (in blue) with the 2012 Wilson Research Group study (in green), as shown in Figure 1.

Figure 1. Simulation-based technique adoption trends for non-FPGA designs

You can see that the study finds the industry increasing its adoption of various functional verification techniques for non-FPGA targeted designs. Clearly the industry is maturing its processes as I previously claimed.

For example, in 2007, the Far West Research Group found that only 48 percent of the industry performed code coverage. This surprised me. After all, HDL-based code coverage is a technology that has been around since the early 1990s. However, I did informally verify the 2007 results through numerous customer visits and discussions. In 2012, we see that industry adoption of code coverage has increased to 70 percent.

In 2007, the Far West Research Group study found that 37 percent of the industry had adopted assertions for use in simulation. In 2012, we find that industry adoption of assertions had increased to 63 percent. I believe that the maturing of the various assertion language standards has contributed to this increased adoption.
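For readers newer to the technique, “assertions for use in simulation” typically means small SystemVerilog Assertions (SVA) properties checked on every clock. Below is a minimal sketch, with hypothetical signal names and a made-up protocol rule:

    module handshake_checker(input logic clk, rst_n, req, ack);
      // Hypothetical rule: every request must be acknowledged within
      // one to four clock cycles.
      property p_req_ack;
        @(posedge clk) disable iff (!rst_n)
          req |-> ##[1:4] ack;
      endproperty

      a_req_ack: assert property (p_req_ack)
        else $error("req not acknowledged within 4 cycles");
    endmodule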

In 2007, the Far West Research Group study found that 40 percent of the industry had adopted functional coverage for use in simulation. In 2012, industry adoption of functional coverage had increased to 66 percent. Part of this increase in functional coverage adoption has been driven by the increased adoption of constrained-random simulation, since you really can’t effectively do constrained-random simulation without doing functional coverage.
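To see why the two techniques travel together, consider the minimal sketch below (all names hypothetical): constrained-random generation picks the stimulus, and the covergroup is the only record of which cases were actually exercised.

    class bus_txn;
      rand bit       is_write;
      rand bit [7:0] addr;
      constraint addr_legal { addr inside {[0:239]}; }  // hypothetical address map
    endclass

    module cov_demo;
      bus_txn txn = new();

      // Functional coverage: which randomized combinations did we exercise?
      covergroup txn_cg;
        cp_kind: coverpoint txn.is_write;
        cp_addr: coverpoint txn.addr { bins lo = {[0:127]}; bins hi = {[128:239]}; }
        cross cp_kind, cp_addr;
      endgroup
      txn_cg cg = new();

      initial begin
        repeat (200) begin
          void'(txn.randomize());
          cg.sample();
        end
        $display("functional coverage: %0.1f%%", cg.get_inst_coverage());
      end
    endmodule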

Now let’s compare FPGA adoption trends related to various simulation techniques from the 2010 Far West Research study (in pink) with the 2012 Wilson Research Group study (in red), as shown in Figure 2.

Figure 2. Simulation-based technique adoption trends for FPGA designs

Again, you can clearly see that the industry is increasing its adoption of various functional verification techniques for FPGA targeted designs. This past year I have spent a significant amount of time in discussions with FPGA project managers around the world. During these discussions, most managers mention the drive to improve the verification process within their projects due to the rising complexity of this class of designs. The Wilson Research Group data supports these claims.

In fact, Figure 3 illustrates this maturing trend in the FPGA space, where we saw a 15 percent increase in the adoption of RTL simulation and an 8.5 percent increase in the adoption of code coverage. For complex FPGA designs, the traditional approach of “burn and churn” and debug in the lab is no longer a viable option. Nonetheless, it is still somewhat alarming that 31 percent of the FPGA study participants work on projects that perform no RTL simulation.

Figure 3. FPGA projects maturing their verification processes

Signoff Criteria Trends

We saw earlier in this blog the increased adoption of coverage techniques in the industry. Coverage has become a major component of a project’s verification signoff criteria. In Figure 4, we see how coverage has increased in importance in verification signoff criteria within the past five years, while other decision attributes have declined in terms of importance.

Figure 4. Non-FPGA functional verification signoff criteria trends

We see the same trends for FPGA designs, as shown in Figure 5.

Figure 5. FPGA functional verification signoff criteria trends

In my next blog (click here), I plan to continue the discussion related to adoption of various verification technologies and techniques as identified by the 2012 Wilson Research Group study.


12 August, 2013

Language and Library Trends (Continued)

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 8 click here), I focused on design and verification language trends, as identified by the Wilson Research Group study. This blog presents additional findings related to verification language and library adoption.

You might note that for some of the language and library data I present, the percentage sums to more than 100 percent. The reason for this is that some participants’ projects use multiple languages or multiple testbench methodologies.

Testbench Methodology Class Library Adoption

Now let’s look at testbench methodology and class library adoption for IC/ASIC designs. Figure 1 shows the trends in terms of methodology and class library adoption by comparing the 2010 Wilson Research Group study (in blue) with the 2012 study (in green). Today, we see a downward trend in terms of adoption of all testbench methodologies and class libraries with the exception of UVM, which has increased by 486 percent since the fall of 2010. The study participants were also asked what they plan to use within the next 12 months, and based on the responses, UVM is projected to increase an additional 46 percent.

Figure 1. Methodology and class library adoption trends for IC/ASIC designs

Figure 2 shows the adoption of testbench methodologies and class libraries for FPGA designs (in red). We do not have sufficient data to show prior adoption trends in the FPGA space, but we anticipate that our future studies will enable us to do this. However, we did ask the FPGA study participants which testbench methodologies and class libraries they were planning to adopt within the next 12 months. Based on these responses, we anticipate that UVM adoption will increase by 40 percent, and OVM by 24 percent, in the FPGA space.

Figure 2. Methodology and class library adoption trends for FPGA designs

Assertion Languages and Libraries

Finally, let’s examine assertion language and library adoption for IC/ASIC designs. The Wilson Research Group study found that 63 percent of all the IC/ASIC participants have adopted assertion-based verification (ABV) as part of their verification strategy. The data presented in this section shows the assertion language and library adoption trend related to those participants who have adopted ABV.

Figure 3 shows the trends in terms of assertion language and library adoption by comparing the 2010 Wilson Research Group study (in blue), the 2012 Wilson Research Group study (in green), and the projected adoption trends within the next 12 months (in purple). The adoption of SVA continues to increase, while other assertion languages and libraries either remain flat or decline.

Figure 3. Assertion language and library adoption for Non-FPGA designs

Figure 4 shows the adoption of assertion language trends for FPGA designs (in red). Again, we do not have sufficient data to show prior adoption trends in the FPGA space, but we anticipate that our future studies will enable us to do this. We did ask the FPGA study participants which assertion languages and libraries they planned to adopt within the next 12 months. Based on these responses, we anticipate an increase in adoption for OVL, SVA, and PSL in the FPGA space within the next 12 months.

Figure 4. Trends in assertion language and library adoption for FPGA designs

In my next blog (click here), I plan to focus on the adoption of various verification technologies and techniques used in the industry, as identified by the 2012 Wilson Research Group study.


29 July, 2013

Testbench Characteristics and Simulation Strategies

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for background on the study, click here).

In my previous blog (click here), I focused on the controversial topic of effort spent in verification. In this blog, I focus on some of the 2012 Wilson Research Group findings related to testbench characteristics and simulation strategies. Although I am shifting the focus away from verification effort, I believe that the data I present in this blog is related to my previous blog and really needs to be considered when calculating effort.

Time Spent in Full-Chip versus Subsystem-Level Simulation

Let’s begin by looking at Figure 1, which shows the percentage of time (on average) that a project spends in full-chip or SoC integration-level verification versus subsystem and IP block-level verification. The mean time performing full chip verification is represented by the dark green bar, while the mean time performing subsystem verification is represented by the light green bar. Keep in mind that this graph represents the industry average. Some projects spend more time in full-chip verification, while other projects spend less time.

Figure 1. Mean time spent in full chip versus subsystem simulation

Number of Tests Created to Verify the Design in Simulation

Next, let’s look at Figure 2, which shows the number of tests various projects create to verify their designs using simulation. The graph represents the findings from the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). Note that the curves look remarkably similar over the past five years. The median number of tests created to verify the design is within the range of (>200 – 500) tests. It is interesting to see a sharp percentage increase in the number of participants who claimed that fewer tests (1 – 100) were created to verify a design in 2012. It’s hard to determine exactly why this was the case—perhaps it is due to the increased use of constrained-random (which I will talk about shortly). Or perhaps there has been an increased use of legacy tests. The study was not designed to go deeper into this issue and uncover the root cause. This is something I intend to study informally this next year through discussions with various industry thought leaders.

Figure 2. Number of tests created to verify a design in simulation

Percentage of Directed Tests versus Constrained-Random Tests

Now let’s compare the percentage of directed testing that is performed on a project to the percentage of constrained-random testing. Of course, in reality there is a wide range in the amount of directed and constrained-random testing that is actually performed on various projects. For example, some projects spend all of their time doing directed testing, while other projects combine techniques and spend part of their time doing directed testing—and the other part doing constrained-random. For our comparison, we will look at the industry average, as shown in Figure 3. The average percentage of tests that were directed is represented by the dark green bar, while the average percentage of tests that are constrained-random is represented by the light green bar.

Figure 3. Mean directed versus constrained-random testing performed on a project

Notice how the percentage mix of directed versus constrained-random testing has changed over the past two years. Today we see that, on average, a project performs more constrained-random simulation. In fact, between 2010 and 2012 there has been a 39 percent increase in the use of constrained-random simulation on a project. One driving force behind this increase has been the maturing and acceptance of both the SystemVerilog and UVM standards—since the two standards facilitate an easier implementation of a constrained-random testbench. In addition, today we find that an entire ecosystem has emerged around both the SystemVerilog and UVM standards. This ecosystem consists of tools, verification IP, and industry expertise, such as consulting and training.

Nonetheless, even with the increased adoption of constrained-random simulation on a project, you will find that constrained-random simulation is generally only performed at the IP block or subsystem level. For the full SoC level simulation, directed testing and processor-driven verification are the prominent simulation-based techniques in use today.
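To make the directed-versus-random contrast concrete, the sketch below (with invented class and field names) shows how a single UVM sequence item can serve both styles: fully random generation for breadth, plus inline constraints to pin values down when a directed-style case is needed.

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class alu_item extends uvm_sequence_item;
      rand bit [7:0] a, b;
      rand bit [1:0] op;                                // hypothetical: 0=add 1=sub 2=mul 3=div
      constraint no_div_by_zero { (op == 3) -> (b != 0); }
      `uvm_object_utils(alu_item)
      function new(string name = "alu_item"); super.new(name); endfunction
    endclass

    class mixed_seq extends uvm_sequence #(alu_item);
      `uvm_object_utils(mixed_seq)
      function new(string name = "mixed_seq"); super.new(name); endfunction
      task body();
        alu_item item;
        // Constrained-random: the solver picks any legal a, b, op.
        repeat (20) `uvm_do(item)
        // Directed-style: the same item, with values pinned by inline constraints.
        `uvm_do_with(item, { op == 3; a == 8'hFF; })
      endtask
    endclass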

Simulation Regression Time

Now let’s look at the time that various projects spend in a simulation regression. Figure 4 shows the trends in terms of simulation regression time by comparing the 2007 Far West Research study (in gray) with the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). There really hasn’t been a significant change in the time spent in a simulation regression within the past three years. You will find that some teams spend days or even weeks in a regression. Yet today, the industry median is between 8 and 16 hours, and for many projects, there has been a decrease in regression time over the past few years. Of course, this is another example of where deeper analysis is required to truly understand what is going on. To begin with, these questions should probably be refined to better understand simulation times related to IP versus SoC integration-level regressions. We will likely do that in future studies—with the understanding that we will not be able to show trends (or at least not initially).

Figure 4. Simulation regression time trends

In my next blog (click here), I’ll focus on design and verification language trends, as identified by the 2012 Wilson Research Group study.

