Posts Tagged ‘Accellera’

10 April, 2014

It's always fun to take the wraps off of solutions we have been hard at work developing.  The global team of Mentor Graphics engineers has spent considerable time and energy to bring the next level of SoC design and verification productivity to what seems to be a never-ending response to Moore's Law.  As silicon feature sizes get smaller, design sizes get larger and the verification problem mushrooms.  But you know that.  These changes are the constants that drive the need for continued innovation.  Our next level of innovation for design verification is embodied in the recently announced Mentor Enterprise Verification Platform (EVP).

Gary Smith recently published Keeping Up with the Emulation Market, which lays out the fact that verification platforms are unifying, with emulation now a pivotal element not just for microprocessor design success, but for Multi-Platform Based SoC design success as well.  The need to bring software debug into the loop with early hardware concepts is a verification challenge that must be supported too.  Pradeep Chakraborty reported on the point made by Anil Gupta of Applied Micro at the UVM 1.2 Day in Bangalore, where Anil implored, "Think about the block, the subsystem and the top."  His point was that software is often overlooked or under-tested prior to committing to hardware implementation, implying that our focus on UVM lets us verify no higher than where UVM takes us – and that is not the "top" of the SoC, which mandates that software be part of the verification plan.

Path to Success

With the Mentor EVP, we address these issues.  We bring simulation and emulation together in a unified platform.  Software debug on conceptual hardware is supported to address verification at the "top."  Gary's report concludes by wondering how easy access to emulation will be provided for the masses; that too is solved in the Mentor EVP using VirtuaLAB, which can be hosted in data centers along with the emulator rather than in complex, one-off lab setups that lock an emulator to a design and lock your global team of software developers out of collaborating.  The Mentor EVP moves to emulation for the masses in a 24×7 world.

With big designs come big data and complex debug tasks.  These complex debug tasks are all easily handled by the new Mentor Visualizer Debug Environment, which has native UVM and SystemVerilog class-based debug capabilities and low-power UPF debug support to easily pinpoint design errors.  All of this works in both interactive and post-simulation modes for simulation and emulation.  To keep the software team productive and get to SoC signoff sooner, the new Veloce OS3 global emulation resourcing technology moves software debug think-time offline to Mentor's Codelink software debug tool.

And there’s more!  But I’ll leave that for you to discover.  When you have time, visit us here, to learn more about the Mentor Enterprise Verification Platform.

Path to Standards

As the move to support Multi-Platform Based SoC evolves, so do the standards that underpin it.  As I've reported on the comments of others in this blog – and as our own experience shows that UVM can only go so far in Multi-Platform Based SoC verification – we concluded the time is right for the industry to explore the need for new standards.

We announced at DVCon 2014 an offer to take our graph-based test specification into an Accellera committee to help move beyond the limitations of today's standards.  As our investment in tools, technology and platforms continues, we are keenly aware that users want their design and verification data to be as portable as possible.  Accellera user community members echoed the need to discuss portable stimulus that can take you up and down the design hierarchy from block, to subsystem, to system ("top") and support the concurrent design of hardware and software.

In support of this, Accellera approved the formation of a Portable Stimulus Specification Proposed Working Group (PWG) to study the validity of and need for a portable stimulus specification.  To that end, join me at the kickoff meeting to launch this activity on Wednesday, May 7, 2014 from 10:00am to 4:00pm Pacific time at the offices of Mentor Graphics in Fremont, CA USA.  If you would like to attend, or would like time on the agenda to discuss technology that would advance the development of a Portable Stimulus Specification or to discuss your objectives and requirements for this group, contact me and I will put you in touch with the meeting organizer.  Accellera PWG meetings are open to all and do not require Accellera membership to attend.

25 February, 2014

As DVCon expands, we at Mentor Graphics have grown our sponsored sessions as well.  Would you expect less?

In DVCon's recent past, it was a tradition for the North American SystemC User Group (NASCUG) to sponsor a day of activity before the official start of the conference.  When OSCI merged with Accellera, the day before the official conference start grew to become Accellera Day, with a broader set of meetings and activities covering many of Accellera's standards.  This has all grown into a more official part of the DVCon program.  On Monday at DVCon – or Accellera Day, as many still call it – the tradeshow now opens as well.  I covered this in detail in an earlier blog, so I won't repeat myself now.

The pre-conference education and meet-up to discuss the latest in standards development is joined by an end-of-conference tutorial series that has expanded from three parallel sessions to four.  Instead of the one tutorial we at Mentor Graphics would otherwise sponsor at DVCon, we will offer two in this expanded series.  Given the impact verification has on design, it seems only right that more time be devoted to topics that address it.  One half-day tutorial is just too short to give the subject its due respect.

The two Mentor Graphics sponsored tutorials at DVCon, to be run in series, will devote a day to exploring the application of current verification technology by us and by users like you.  If you are already attending DVCon, you are making your tutorial selections now.  And for those who might only be interested in attending the tutorials themselves, DVCon offers a tutorials-only package ($145/tutorial).  Mentor's two tutorials are:

The first tutorial references "smooth sailing," not because this will be a "no-pirate zone," although I can tell you that since International Talk Like a Pirate Day falls in late September, one won't have to worry about a morning of pirate talk! [Interesting fun fact: Mentor Graphics' headquarters in Wilsonville, OR USA is a short 50 miles (~80 km) north of the creators of this parodic holiday.]  The smooth sailing comes from the ability to easily use multiple engines, spanning simulation, formal, emulation, and FPGA prototyping, to address your block- to system-level verification needs.

The second tutorial is all about formal.  Or, to put it more colloquially, we will answer the question: Whatsup with formal?  No, I doubt we will find more slang terms for formal technology being used and created in the tutorial.  But the tutorial will certainly look at more focused applications of formal technology.  As a pioneer in focused formal applications (like clock domain crossing), we have seen how these targeted applications greatly simplify use and expand access to the technology for verification teams, with RTL design checks, X-state verification, and more joining the list.  Maybe we should ask Whatsapp with formal!  But wait!  That slang question is already taken – and Facebook affirmed ownership with its recent $19B purchase.  Oh well, I lament.  Join me at this tutorial and we can explore something suitable and not yet taken as a replacement.  I can't think of a better way to close DVCon than to see if we can invent another $19B term (or app).

23 February, 2014

UVM 1.2 Release is Imminent

As vice chair of DVCon 2014, I can share with you that the Universal Verification Methodology (UVM) remains a topic of great interest.  It sets the pace for tutorials and given the pending release by Accellera, learning what is new in UVM 1.2 is a compelling reason to attend DVCon.

The Accellera Day tutorial series on Monday at DVCon is popular, with UVM being a session of great interest.  Aside from the "verification crisis" driving the need to explore this industry standard, the first major update to it is another reason for this interest.  The UVM tutorial is meant for the novice and expert alike.  UVM experts can expect to walk away with more information on the new UVM 1.2 features and how they might plan to deploy them.

Naturally, I suggest you consider registering for the conference to attend this tutorial.  (There are still a few seats left; but you will need to hurry!)

UVM Working Group Discussions

As a member of the Accellera UVM Working Group, I have asked the team to consider adopting the SystemC development scheme of an open public review of a pending release of open source code.  While the merger of OSCI and Accellera to form Accellera Systems Initiative inherited the OSCI style of public review, Accellera has not fully embraced it for all its projects.

To disclose a bit of insider conversation I had with the UVM WG this past week: I asked the group to confirm that we were going to bypass the "official" public review option and go to an internal 30-day review cycle only, then release to the public.  While the conclusion was to stay on the 30-day internal review path, the group also noted that anyone familiar with Git might be able to locate the source code (and many have) and do testing.

Since the bleeding-edge users know they can access the code as it is being developed, why not share the Git commands so everyone can gain access?  The group has done just that.  When last-minute changes for Release Candidate 4 were put in place, the Git script offering access for early review was shared publicly.  You can find this public message here, thanks to UVM WG member Adiel Khan (from Synopsys).

If you are a seasoned UVM user and are attending DVCon the week of March 3rd, I would encourage you to do some testing now so you can connect with the developers first hand.  And even if you are not attending DVCon but want to migrate to UVM 1.2, you might want to get an early start to determine what you might need to do to adopt this release.

If you are not going to attend the DVCon UVM tutorial and want a short update on what this version will offer, the UVM WG secretary, Adam Sherer (from Cadence), put together a brief slide set, presented at the TVS DVClub event in September 2013, that you can download.  You may find it a useful companion to the download of the open source code.

Even if you are not attending DVCon, UVM adoption is globally substantial and it is worth reflecting on the need for broader testing.  In the first releases of UVM this may not have been as important, as few were using it and testing was largely limited to the main developers.  However, as its popularity has grown and adoption increased, it is probably a good idea for the Accellera UVM Working Group to consider the impact of a new release on teams actively using it now.  While the UVM WG drives to closure on its release candidate and the UVM 1.2 standard, you are offered the opportunity to give us feedback.  For those who have time, please do!

Mentor Commentary on Standards Development

Lastly, for those attending DVCon, check out our own Tom Fitzpatrick's Wednesday morning paper – Of Camels and Committees: Standards Should Enable Innovation, Not Strangle It.  His commentary on the development process may shed some additional light on how technology additions, changes and enhancements are judged for inclusion in updates to standards like UVM.

Resources:
- UVM 1.2 New Feature Presentation (Sept 2013): Download Here (Free)
- UVM 1.2 Public Review Instructions (Feb 2014): Download Here (Free)
- Mentor Commentary at DVCon: Register Here ($)

6 January, 2014

The UCIS Story

It is no secret that growing design sizes place a double burden on verification.  Two factors that are easy to measure are the time it takes to simulate a design and the size of the dataset that contains the results of the verification runs.  Simulation times are growing and the datasets are getting larger.  While time and attention are given to accelerated verification through emulation, or to alternate verification methods, to reduce run times, the impact of larger datasets on verification closure is less explored.  How does one find bugs within datasets that are so large?  How can verification results from simulation, emulation, formal and more be brought together to help drive verification closure?  How can one link failures in verification back to requirements?

The Accellera standards organization took a multi-year journey to help address these issues, which culminated in the creation of the Unified Coverage Interoperability Standard (UCIS).  You can get your free copy here if you would like to read and use it.  Mentor Graphics contributed a significant starting point to the standard and collaborated with major competitors and users to add to and extend it from there.  But now that the standard is done, what does one do with it?

While that was a rhetorical question when the standard was completed in 2012, today it begs an answer.

From my perspective there are two classes of UCIS users.  The more immediate users are those building verification tools that must contend with design and verification complexity now.  With UCIS they have the initial underpinnings to add product features that allow a level of data portability that was not possible before the standard.  The second class of users are those who will use the UCIS Application Programming Interface (API) to build functions that perform simple and complex tasks on these large datasets.  This last class of user, who might exchange UCIS API code with one another, has yet to materialize.  But the stage is set for them.

To highlight what the first class of UCIS adopters has been doing, DVClub in Europe will tackle the question of what one can do with UCIS on Monday, 13 January 2014.  Darron May, Product Manager at Mentor Graphics, will speak for us on our application of the standard.  His session is titled Blending Metrics from Multiple Verification Engines to Improve Productivity.  You can find more details about the DVClub event (speakers and presentation abstracts) and register here to attend in person or via remote access.  The event will be held 12:00-14:00 GMT and is free.

19 September, 2013

It’s hard for me to believe that SystemVerilog 3.1 was released just over 10 years ago. The 3.1 version added Object-Oriented Programming features for testbench development to a language predominately used for RTL design synthesis. Making debug easier was one of the driving forces in unifying testbench and design features into a single language. The semantics for evaluating expressions and executing statements would be the same in the testbench and design. Setting breakpoints and stepping through the code would be seamless. That should have made it easier for either a verification or a design engineer to understand a complete verification environment. Or maybe it would enable either one to at least understand enough of the environment to isolate a particular problem.

Ten years later, I have yet to see that promise fulfilled. Most design engineers still debug their simulations the same way they debug in the lab: they look at waveforms. During simulation, they rarely look at the design source code, and certainly never look at the testbench code (unless it's just basic pin wiggling like a waveform). Verification engineers are not much different. They rely on waveform debugging because that is what they were brought up on, and many do not even realize source-level debugging is available to them. However, the test and testbench are more like a piece of software than a hardware description, and there are many things about a modern testbench that are difficult to display in a waveform (e.g., call stacks, local variables, and random constraints). And methodologies like the UVM add many layers of source-level complexity that most users do not have the time to wade through.
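
To make the contrast concrete, below is a minimal, hypothetical SystemVerilog fragment (the class, task, and field names are invented for illustration and not drawn from any particular environment). Everything interesting in it, from the class objects and constraints to the local variables and call stack of the driver task, lives only in testbench memory and never appears as a signal in a waveform; this is exactly the state that class-aware, source-level debug is meant to expose.

```systemverilog
// Hypothetical example for illustration only; names and fields are invented.
class bus_transaction;
  rand bit [31:0] addr;
  rand bit [31:0] data;
  rand bit        is_write;

  // Constraints like this shape the stimulus but produce no signal activity
  // of their own, so they are invisible in a waveform viewer.
  constraint addr_range_c { addr inside {[32'h0000_1000 : 32'h0000_1FFF]}; }
endclass

class driver;
  // The call stack and local variables of this task live only in
  // testbench memory; class-aware, source-level debug is needed to see them.
  task run(int unsigned num_items);
    bus_transaction tr = new();
    repeat (num_items) begin
      if (!tr.randomize())
        $error("randomization failed");
      $display("drive addr=%h data=%h write=%0b", tr.addr, tr.data, tr.is_write);
    end
  endtask
endclass

module tb;
  initial begin
    driver drv = new();
    drv.run(4);
  end
endmodule
```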

Next week I will be presenting as part of an Industry Special Session during the Forum on Specification & Design Languages (FDL, September 24-26, 2013) that will discuss these issues and try to get more involvement from the academic and user communities to help resolve them. Was combining constructs from many languages into one a success? Can tools provide representations of source-level constructs in an easier graphical form? Hopefully, we will not need another decade to find out.

8 September, 2013

Schedules, respins, and bug classification

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 11 click here), I focused on some of the 2012 Wilson Research Group findings related to formal verification, acceleration/emulation, and FPGA prototyping trends. In this blog, I present verification results findings in terms of schedules, respins, and classification of functional bugs.

We have seen in previous blogs that a significant amount of effort is being applied to functional verification. An important question the various studies have tried to answer is whether this increasing effort is paying off.

Schedules

Figure 1 presents design completion time compared to the project's original schedule. What is interesting is that we really have not seen a change in this trend in over five years: 67 percent of all projects are behind schedule with respect to the original plan. One could argue that designs have increased in complexity in terms of gate counts, embedded processors, and the amount of embedded software between 2007 and 2012. Yet the ability to meet project schedules has not worsened.

Figure 1. Non-FPGA design completion time compared to the project’s original schedule

What's interesting is that FPGA designs follow this same trend, as shown in Figure 2.

Figure 2. Non-FPGA vs. FPGA design completion time relative to the original plan

Respins

Other verification data points worth looking at relate to the number of spins required between the start of a project and final production. Figure 3 shows this industry trend all the way back to the 2004 Collett study. Again, even though designs have increased in complexity, the data suggest that projects are not getting any worse in terms of the number of spins before production. If anything, there appears to have been a slight improvement recently in the share of projects requiring three or more spins.

Figure 3. Number of spins required from start of project until production

Bug classification

Figure 4 shows various categories of flaws that are contributing to respins. Note that the sum is greater than 100% on this graph, which is because a respin can be triggered by multiple flaws. 

Figure 4. Types of flaws contributing to respins

Although logic and functional flaws remain the leading causes of respins, the data suggest that there has been some improvement in this area over the past eight years. Is this due to the increased amount of reuse that is occurring in the industry? Or is the industry maturing its verification processes? Or is something entirely different going on? This data point raises some interesting questions worth exploring.

Figure 5 examines the root cause of functional bugs by various categories. The data suggest an improvement in logic errors over an eleven-year period and, potentially, a worsening of problems related to changing specifications. Problems associated with changing, incorrect, and incomplete specifications are a common theme I often hear when visiting customers.

Figure 5. Root cause of functional flaws

In my next blog (click here), I plan to wrap up this series of blogs in what I call the Epilogue—which will discuss potential gotchas and cautions on interpreting certain aspects of the data and thoughts about how the data might be used constructively.

19 August, 2013

Verification Techniques & Technologies Adoption Trends

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for background on the study, click here).

In my previous blog (Part 9 click here), I focused on some of the 2012 Wilson Research Group findings related to design and verification language and library trends. In this blog, I present verification techniques and technologies adoption trends, as identified by the 2012 Wilson Research Group study.

An interesting trend we are starting to see is that the electronic industry is maturing its functional verification processes, whether designs are targeted at IC/ASIC or FPGA implementations. This blog provides data to support this claim. An interesting question you might ask is, "What is driving this trend?" In some of my earlier blogs (click here for Part 1 and Part 2) I showed that design complexity is increasing in terms of design sizes and number of embedded processors. In addition, I've presented trend data that showed an increase in total project time and effort spent in verification (click here for Part 5 and Part 6). My belief is that the industry is being forced to mature its functional verification processes to address increasing complexity and effort.

Simulation Techniques Adoption Trends

Let’s begin by comparing  non-FPGA adoption trends related to various simulation techniques from the 2007 Far West Research study  (in blue) with the 2012 Wilson Research Group study  (in green), as shown in Figure 1.

Figure 1. Simulation-based technique adoption trends for non-FPGA designs

You can see that the study finds the industry increasing its adoption of various functional verification techniques for non-FPGA targeted designs. Clearly the industry is maturing its processes as I previously claimed.

For example, in 2007, the Far West Research Group found that only 48 percent of the industry performed code coverage. This surprised me. After all, HDL-based code coverage is a technology that has been around since the early 1990’s. However, I did informally verify the 2007 results through numerous customer visits and discussions. In 2012, we see that the industry adoption of code coverage has increased to 70 percent.

In 2007, the Far West Research Group study found that 37 percent of the industry had adopted assertions for use in simulation. In 2012, we find that industry adoption of assertions had increased to 63 percent. I believe that the maturing of the various assertion language standards has contributed to this increased adoption.
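
For readers newer to assertion-based verification, the sketch below shows the kind of concurrent SystemVerilog assertion typically written for use in simulation; the clock, reset, and handshake signal names are illustrative assumptions, not taken from the study.

```systemverilog
// Illustrative SVA check: every request must be granted within 1 to 4 cycles.
// Signal names (clk, rst_n, req, gnt) are assumptions for this sketch.
module req_gnt_checker (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic gnt
);
  property p_req_gets_gnt;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] gnt;
  endproperty

  a_req_gets_gnt: assert property (p_req_gets_gnt)
    else $error("req was not granted within 4 cycles");
endmodule
```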

In 2007, the Far West Research Group study found that 40 percent of the industry had adopted functional coverage for use in simulation. In 2010, the industry adoption of functional coverage had increased to 66 percent. Part of this increase in functional coverage adoption has been driven by the increased adoption of constrained-random simulation, since you really can’t effectively do constrained-random simulation without doing functional coverage.
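
A short sketch may help show why the two techniques go hand in hand: constrained-random stimulus is only as good as the functional coverage that records which of the generated cases were actually exercised. The transaction fields, coverage bins, and iteration count below are assumptions made purely for illustration.

```systemverilog
// Illustrative pairing of constrained-random stimulus with functional coverage.
// The opcode range, bins, and iteration count are assumptions for this sketch.
class alu_op;
  rand bit [3:0] opcode;
  rand bit [7:0] a, b;
  constraint legal_opcode_c { opcode inside {[0:9]}; }
endclass

module cr_with_coverage;
  // Coverage records which randomized cases were actually hit.
  covergroup op_cg with function sample(bit [3:0] op, bit [7:0] operand_a);
    coverpoint op        { bins ops[] = {[0:9]}; }
    coverpoint operand_a { bins zero = {0}; bins max_val = {8'hFF}; }
  endgroup

  op_cg cov = new();

  initial begin
    alu_op item = new();
    repeat (200) begin
      if (!item.randomize())
        $error("randomization failed");
      cov.sample(item.opcode, item.a);
    end
    $display("functional coverage = %0.2f%%", cov.get_coverage());
  end
endmodule
```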

Now let's look at FPGA adoption trends related to various simulation techniques, comparing the 2010 Far West Research study (in pink) with the 2012 Wilson Research Group study (in red).

Figure 2. Simulation-based technique adoption trends for FPGA designs

Again, you can clearly see that the industry is increasing its adoption of various functional verification techniques for FPGA targeted designs. This past year I have spent a significant amount of time in discussions with FPGA project managers around the world. During these discussions, most managers mention the drive to improve verification processes within their projects due to the rising complexity of this class of designs. The Wilson Research Group data supports these claims.

In fact, Figure 3 illustrates this maturing trend in the FPGA space, where we saw a 15 percent increase in the adoption of RTL simulation and an 8.5 percent increase in the adoption of code coverage. For complex FPGA designs, the traditional approach of “burn and churn” and debug in the lab is no longer a viable option. Nonetheless, it is still somewhat alarming that 31 percent of the FPGA study participants work on projects that perform no RTL simulation.

Figure 3. FPGA projects maturing their verification processes

Signoff Criteria Trends

We saw earlier in this blog the increased adoption of coverage techniques in the industry. Coverage has become a major component of a project’s verification signoff criteria. In Figure 4, we see how coverage has increased in importance in verification signoff criteria within the past five years, while other decision attributes have declined in terms of importance.

Figure 4. Non-FPGA functional verification signoff criteria trends

We see the same trends for FPGA designs, as shown in Figure 5.

Figure 5. FPGA functional verification signoff criteria trends

In my next blog (click here), I plan to continue the discussion related to adoption of various verification technologies and techniques as identified by the 2012 Wilson Research Group study.

12 August, 2013

Language and Library Trends (Continued)

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 8 click here), I focused on design and verification language trends, as identified by the Wilson Research Group study. This blog presents additional trends related to verification language and library adoption trends.

You might note that for some of the language and library data I present, the percentage sums to more than 100 percent. The reason for this is that some participants’ projects use multiple languages or multiple testbench methodologies.

Testbench Methodology Class Library Adoption

Now let’s look at testbench methodology and class library adoption for IC/ASIC designs. Figure 1 shows the trends in terms of methodology and class library adoption by comparing the 2010 Wilson Research Group study (in blue) with the 2012 study (in green). Today, we see a downward trend in terms of adoption of all testbench methodologies and class libraries with the exception of UVM, which has increased by 486 percent since the fall of 2010. The study participants were also asked what they plan to use within the next 12 months, and based on the responses, UVM is projected to increase an additional 46 percent.

Figure 1. Methodology and class library trends

Figure 2 shows the adoption of testbench methodologies and class libraries for FPGA designs (in red). We do not have sufficient data to show prior adoption trends in the FPGA space, but we anticipate that our future studies will enable us to do this. However, we did ask the FPGA study participants which testbench methodologies and class libraries they were planning to adopt within the next 12 months. Based on these responses, we anticipate that UVM adoption will increase by 40 percent and OVM adoption by 24 percent in the FPGA space.

Figure 2. Methodology and class library trends

Assertion Languages and Libraries

Finally, let’s examine assertion language and library adoption for IC/ASIC designs. The Wilson Research Group study found that 63 percent of all the IC/ASIC participants have adopted assertion-based verification (ABV) as part of their verification strategy. The data presented in this section shows the assertion language and library adoption trend related to those participants who have adopted ABV.

Figure 3 shows the trends in terms of assertion language and library adoption by comparing the 2010 Wilson Research Group study (in blue), the 2012 Wilson Research Group study (in green), and the projected adoption trends within the next 12 months (in purple). The adoption of SVA continues to increase, while other assertion languages and libraries either remain flat or decline.

Figure 3. Assertion language and library adoption for Non-FPGA designs

Figure 4 shows the adoption of assertion language trends for FPGA designs (in red). Again, we do not have sufficient data to show prior adoption trends in the FPGA space, but we anticipate that our future studies will enable us to do this. We did ask the FPGA study participants which assertion languages and libraries they planned to adopt within the next 12 months. Based on these responses, we anticipate an increase in adoption for OVL, SVA, and PSL in the FPGA space within the next 12 months.

Figure 4. Trends in assertion language and library adoption for FPGA designs

In my next blog (click here), I plan to focus on the adoption of various verification technologies and techniques used in the industry, as identified by the 2012 Wilson Research Group study.

5 August, 2013

Language and Library Trends

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 7 click here), I focused on some of the 2012 Wilson Research Group findings related to testbench characteristics and simulation strategies. In this blog, I present design and verification language trends, as identified by the Wilson Research Group study.

You might note that for some of the language and library data I present, the percentage sums to more than one hundred percent. The reason for this is that some participants’ projects use multiple languages.

RTL Design Languages

Let's begin by examining the languages used for RTL design. Figure 1 shows the trends in languages used for design by comparing the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green), as well as the projected design language adoption trends within the next twelve months (in purple) as identified by the study participants. Note that design language adoption is declining for most languages, with the exception of SystemVerilog, whose adoption continues to increase.

Also, it’s important to note that this study focused on languages used for RTL design. We have conducted a few informal studies related to languages used for architectural modeling—and it’s not too big of a surprise that we see increased adoption of C/C++ and SystemC in that space. However, since those studies have (thus far) been informal and not as rigorously executed as the Wilson Research Group study, I have decided to withhold that data until a more formal blind study can be executed related to architectural modeling and virtual prototyping.

Figure 1. Trends in languages used for Non-FPGA design

Let’s now look at the languages used specifically for FPGA RTL design. Figure 2 shows the trends in terms of languages used for FPGA design, by comparing the 2012 Wilson Research Group study (in red) with the projected design language adoption trends within the next twelve months (in purple).

Figure 2. Languages used for FPGA design

It’s not too big of a surprise that VHDL is the predominant language used for FPGA RTL design, although we are starting to see increased interest in SystemVerilog.

Verification Languages

Next, let’s look at the languages used to verify Non-FPGA designs (that is, languages used to create simulation testbenches). Figure 3 shows the trends in terms of languages used to create simulation testbenches by comparing the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green).

Figure 3. Trends in languages used in verification to create Non-FPGA simulation testbenches

The study revealed that verification language adoption is declining for most of the languages with the exception of SystemVerilog whose adoption is increasing. In fact, SystemVerilog adoption increased by 8.3 percent between 2010 and 2012.

Figure 4 provides a different analysis of the data by partitioning the projects by design size, and then calculating the adoption of SystemVerilog for creating testbenches by size. The design size partitions are represented as: less than 5M gates, 5M to 20M gates, and greater than 20M gates. Obviously, we find that the larger the design size, the greater the adoption of SystemVerilog for creating testbenches. Yet, probably the most interesting observation we can make from examining Figure 4 is related to smaller designs that are less than 5M gates. Here we see that 58.8 percent of the industry has adopted SystemVerilog for verification. In other words, it is safe to say that SystemVerilog for verification has become mainstream today and not just limited to early adopters or leading-edge design projects.

Figure 4. SystemVerilog (for verification) adoption by design size

Let's now look at the verification languages used specifically for FPGA designs, that is, the languages used to create FPGA simulation testbenches. Figure 5 shows these trends by comparing the 2012 Wilson Research Group study (in red) with the projected verification language adoption trends within the next twelve months (in purple).

Figure 5. Trends in languages used in verification to create FPGA simulation testbenches

In my next blog (click here), I’ll continue the discussion on design and verification language trends as revealed by the 2012 Wilson Research Group Functional Verification Study.

29 July, 2013

Testbench Characteristics and Simulation Strategies

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for background on the study, click here).

In my previous blog (click here), I focused on the controversial topic of effort spent in verification. In this blog, I focus on some of the 2012 Wilson Research Group findings related to testbench characteristics and simulation strategies. Although I am shifting the focus away from verification effort, I believe that the data I present in this blog is related to my previous blog and really needs to be considered when calculating effort.

Time Spent in full-chip versus Subsystem-Level Simulation

Let’s begin by looking at Figure 1, which shows the percentage of time (on average) that a project spends in full-chip or SoC integration-level verification versus subsystem and IP block-level verification. The mean time performing full chip verification is represented by the dark green bar, while the mean time performing subsystem verification is represented by the light green bar. Keep in mind that this graph represents the industry average. Some projects spend more time in full-chip verification, while other projects spend less time.

Figure 1. Mean time spent in full chip versus subsystem simulation

Number of Tests Created to Verify the Design in Simulation

Next, let's look at Figure 2, which shows the number of tests various projects create to verify their designs using simulation. The graph represents the findings from the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). Note that the curves look remarkably similar over the past five years. The median number of tests created to verify the design is within the range of (>200 – 500) tests. It is interesting to see a sharp percentage increase in the number of participants who claimed that fewer tests (1 – 100) were created to verify a design in 2012. It's hard to determine exactly why this was the case—perhaps it is due to the increased use of constrained random (which I will talk about shortly). Or perhaps there has been an increased use of legacy tests. The study was not designed to go deeper into this issue and try to uncover the root cause. This is something I intend to study informally next year through discussions with various industry thought leaders.

Figure 2. Number of tests created to verify a design in simulation

Percentage of Directed Tests versus Constrained-Random Tests

Now let’s compare the percentage of directed testing that is performed on a project to the percentage of constrained-random testing. Of course, in reality there is a wide range in the amount of directed and constrained-random testing that is actually performed on various projects. For example, some projects spend all of their time doing directed testing, while other projects combine techniques and spend part of their time doing directed testing—and the other part doing constrained-random. For our comparison, we will look at the industry average, as shown in Figure 3. The average percentage of tests that were directed is represented by the dark green bar, while the average percentage of tests that are constrained-random is represented by the light green bar.

Figure 3. Mean directed versus constrained-random testing performed on a project

Notice how the percentage mix of directed versus constrained-random testing has changed over the past two years. Today we see that, on average, a project performs more constrained-random simulation. In fact, between 2010 and 2012 there was a 39 percent increase in the use of constrained-random simulation on a project. One driving force behind this increase has been the maturing and acceptance of both the SystemVerilog and UVM standards, since these two standards facilitate an easier implementation of a constrained-random testbench. In addition, today we find that an entire ecosystem has emerged around both the SystemVerilog and UVM standards. This ecosystem consists of tools, verification IP, and industry expertise, such as consulting and training.
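
To give a feel for what "easier implementation" means in practice, here is a hedged sketch of a constrained-random UVM sequence item; the item name, fields, address window, and write/read bias are illustrative assumptions. The SystemVerilog language supplies rand, constraint, and randomize(), while UVM wraps the item in its factory, sequence, and reporting machinery.

```systemverilog
// Hedged sketch: a constrained-random UVM sequence item.
// Field names, address window, and write/read bias are illustrative assumptions.
import uvm_pkg::*;
`include "uvm_macros.svh"

class mem_item extends uvm_sequence_item;
  rand bit [15:0] addr;
  rand bit [31:0] data;
  rand bit        write;

  // Keep addresses in a legal window and bias toward writes.
  constraint c_addr  { addr inside {[16'h0100 : 16'h0FFF]}; }
  constraint c_write { write dist { 1 := 8, 0 := 2 }; }

  `uvm_object_utils(mem_item)

  function new(string name = "mem_item");
    super.new(name);
  endfunction
endclass
```

A sequence would then repeatedly randomize and send such items to a driver, with functional coverage confirming which corners of the constrained space were actually reached.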

Nonetheless, even with the increased adoption of constrained-random simulation on a project, you will find that constrained-random simulation is generally only performed at the IP block or subsystem level. For the full SoC level simulation, directed testing and processor-driven verification are the prominent simulation-based techniques in use today.

Simulation Regression Time

Now let’s look at the time that various projects spend in a simulation regression. Figure 4 shows the trends in terms of simulation regression time by comparing the 2007 Far West Research study (in gray) with the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). There really hasn’t been a significant change in the time spent in a simulation regression within the past three years. You will find that some teams spend days or even weeks in a regression. Yet today, the industry median is between 8 and 16 hours, and for many projects, there has been a decrease in regression time over the past few years. Of course, this is another example of where deeper analysis is required to truly understand what is going on. To begin with, these questions should probably be refined to better understand simulation times related to IP versus SoC integration-level regressions. We will likely do that in future studies—with the understanding that we will not be able to show trends (or at least not initially).

Figure 4. Simulation regression time trends

In my next blog (click here), I’ll focus on design and verification language trends, as identified by the 2012 Wilson Research Group study.
