Posts Tagged ‘formal verification’

25 February, 2014

As DVCon expands, we at Mentor Graphics have grown our sponsored sessions as well.  Would you expect less?

In DVCon’s recent past, it was a tradition for the North American SystemC User Group (NASCUG) to sponsor a day of activity before the official start of the conference.  When OSCI merged with Accellera, the day before the official conference start grew to become Accellera Day, with a broader set of meetings and activities covering many of Accellera’s standards.  This has all grown into a more official part of the DVCon program.  On Monday at DVCon (or Accellera Day, as many still call it), the tradeshow now joins in opening.  I covered this in detail in an earlier blog, so I won’t repeat myself now.

The pre-conference education and meet-up to discuss the latest in standards development is joined by an end-of-conference tutorial series that has expanded from three to four parallel sessions.  Instead of the one tutorial we at Mentor Graphics would otherwise sponsor at DVCon, we will offer two in this expanded series. Given the impact verification has on design, it seems only right that more time be devoted to topics that address it.  One half-day tutorial is just too short to give the subject its due respect.

The two Mentor Graphics sponsored tutorials at DVCon, to be run back to back, will devote a day to exploring the application of current verification technology by us and users like you.  If you are already attending DVCon, you are making your tutorial selections now.  And for those who might only be interested in attending the tutorials themselves, DVCon offers a tutorials-only package ($145/Tutorial).  Mentor’s two tutorials are:

The first tutorial references “smooth sailing,” not because this will be a “no-pirate zone,” although I can tell you that since International Talk Like a Pirate Day is in late September, one won’t have to worry about a morning of pirate talk! [Interesting Fun Fact: Mentor Graphics’ headquarters in Wilsonville, OR USA is a short 50 miles (~80 km) north of the creators of this parodic holiday.]  The smooth sailing comes from the ability to easily use multiple engines (simulation, formal, emulation, and FPGA prototyping) to address your block-to-system-level verification needs.

The second tutorial is all about formal.  Or, in a more colloquial way to say it, we will answer the question: Whatsup with formal?  No, I doubt we will find more slang terms for formal technology being used and created in the tutorial.  But the tutorial will certainly look at more focused applications of formal technology.  As a pioneer in focused formal applications (like clock domain crossing checking), we have seen how these applications greatly simplify use and expand access to the technology for verification teams, with RTL design checks, X-state verification, and more joining the list.  Maybe we should ask Whatsapp with formal! But wait!  That slang question is already taken – and Facebook affirmed ownership with a $19B purchase of it recently.  Oh well, I lament.  Join me at this tutorial and we can explore something suitable and not yet taken as a replacement.  I can’t think of a better way to close DVCon than to see if we can invent another $19B term (or app).


30 October, 2013

MENTOR GRAPHICS AT ARM TECHCON

This week ARM® TechCon® 2013 is being held at the Santa Clara Convention Center from Tuesday, October 29 through Thursday, October 31, but don’t worry, there’s nothing to be scared about.  The theme is “Where Intelligence Counts”, and in fact, as a platinum sponsor of the event, Mentor Graphics is excited to present no fewer than ten technical and training sessions about using intelligent technology to design and verify ARM-based designs.

My personal favorite is scheduled for Halloween Day at 1:30pm, where I’ll tell you about a trick that Altera used to shave several months off their schedule, while verifying the functionality and performance of an ARM AXI™ fabric interconnect subsystem.  And the real treat is that they achieved first silicon success as well.  In keeping with the event’s theme, they used something called “intelligent” testbench automation.

And whether you’re designing multi-core designs with AXI fabrics, wireless designs with AMBA® 4 ACE™ extensions, or even enterprise computing systems with ARM’s latest AMBA® 5 CHI™ architecture, these sessions show you how to take advantage of the very latest simulation and formal technology to verify SoC connectivity, ensure correct interconnect functional operation, and even analyze on-chip network performance.

On Tuesday at 10:30am, Gordon Allan described how an intelligent performance analysis solution can leverage the power of an SQL database to analyze and verify interconnect performance in ways that traditional verification techniques cannot.  He showed a wide range of dynamic visual representations produced by SoC regressions that can be quickly and easily manipulated by engineers to verify performance and avoid expensive overdesign.

Right after Gordon’s session, Ping Yeung discussed using intelligent formal verification to automate SoC connectivity, overcoming observability and controllability challenges faced by simulation-only solutions.  Formal verification can examine all possible scenarios exhaustively, verifying on-chip bus connectivity, pin multiplexing of constrained interfaces, connectivity of clock and reset signals, as well as power control and scan test signal connectivity.

On Wednesday, Mark Peryer shows how to verify AMBA interconnect performance using intelligent database analysis and intelligent testbench automation for traffic scenario generation.  These techniques enable automatic testbench instrumentation for configurable ARM-based interconnect subsystems, as well as highly-efficient dense, medium, sparse, and varied bus traffic generation that covers even the most difficult to achieve corner-case conditions.

And finally, also on Halloween, Andy Meyer offers an intelligent workshop for those who are designing high performance systems with hierarchical and distributed caches, using either ARM’s AMBA 5 CHI architecture or ARM’s AMBA 4 ACE architecture.  He’ll cover topics including how caching works, how to improve caching performance, and how to verify cache coherency.

For more information about these sessions, be sure to visit the ARM TechCon program website.  Or if you miss any of them, and would like to learn about how this intelligent technology can help you verify your ARM designs, don’t be afraid to email me at mark_olen@mentor.com.   Happy Halloween!


26 August, 2013

Verification Techniques & Technologies Adoption Trends (Continued)

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 10 click here), I presented verification techniques and technologies adoption trends, as identified by the 2012 Wilson Research Group study. In this blog, I continue those discussions and focus on formal verification, acceleration/emulation, and FPGA prototyping.

For years, the term “formal verification” has bugged me since it is quite often misunderstood in the industry. The problem originated back in the mid-1990s with the emergence of formal equivalence checking tools from various EDA vendors, such as Chrysalis Symbolic Design. These tools were introduced to the market as formal verification, which is technically a true statement. However, there is a range of tools available under the formal verification category, such as formal property checkers and equivalence checkers.

So, what’s the problem? The question related to formal property checking in prior studies could have been misinterpreted by some participants to mean equivalence checking, which reduces the confidence in the results. To prevent this misinterpretation, we decided to change the question in 2012 to clarify that we were talking about the formal verification of assertions and clearly state “not equivalence checking” in the question.

One other thing we wanted to learn in the formal verification space during this study was what percentage of the market was using these auto-formal analysis tools (such as X safety checks, deadlock detection, reset analysis, etc.) versus formal property checking tools. The previous studies never made this distinction.

The fact that we changed the question related to formal property checking while adding in auto-formal in the 2012 study means that there is no meaningful way to compare this study’s formal verification results to the formal verification results from prior studies.

Formal Technology Adoption Trends

Figure 1 shows the adoption percentages for formal property checking and auto-formal techniques.

Figure 1. Formal Technology Adoption

We found that about five percent of the participants who are applying auto-formal techniques are not doing formal property checking. This means that the combined adoption of formal property checking and auto-formal techniques is about 32 percent. As a point of reference, the 2007 Far West Research study found 19 percent adoption for formal verification, and the 2010 study found the adoption at 29 percent. Both the 2007 and 2010 studies included the potential erroneous responses associated with formal equivalence checking, as well as auto-formal usage.
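As a rough sanity check on how those numbers fit together, here is a minimal sketch of the arithmetic. Only the five-point auto-formal-only slice and the roughly 32 percent total come from the text above; the 27 percent formal property checking share is inferred for illustration, not quoted from the study.

```python
# Back-of-the-envelope check of the combined formal adoption figure.
# The 27% property-checking share is inferred for illustration; only the
# ~5-point auto-formal-only slice and the ~32% total come from the text.
property_checking = 0.27    # inferred share doing formal property checking
auto_formal_only = 0.05     # share doing auto-formal but not property checking

combined = property_checking + auto_formal_only
print(f"Combined formal adoption: {combined:.0%}")  # -> 32%
```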

Figure 2 provides a different analysis of the formal property adoption data by partitioning the results by design sizes. The design size partitions are represented as: less than 5M gates, 5M to 20M gates, and greater than 20M gates.

Figure 2. Formal property checking adoption by design size

Acceleration/Emulation & FPGA Prototyping Adoption Trends

The amount of time spent in a simulation regression is an increasing concern for many projects. Intuitively, we tend to think that the design size influences simulation performance. However, there are two equally important factors that must be considered: number of tests in the simulation regression suite and the length of each test in terms of clock cycles.

For example, a project might have a small or moderate-sized design, yet verification of this design requires a long running test (e.g., a video input stream). Hence, in this example, the simulation regression time is influenced by the number of clock cycles required for the test and not necessarily the design size itself.
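To make that reasoning concrete, here is a small illustrative sketch. The test count, cycle count, and simulator throughput below are made-up assumptions rather than study data; the point is simply that regression time tracks tests times cycles, not raw design size.

```python
# Illustrative only: regression time scales with (number of tests x cycles
# per test) / simulator throughput, largely independent of design size.
def regression_hours(num_tests, cycles_per_test, cycles_per_second):
    """Rough serial regression wall-clock time in hours."""
    total_cycles = num_tests * cycles_per_test
    return total_cycles / cycles_per_second / 3600.0

# A small design driven by a long video-stream test still regresses slowly:
hours = regression_hours(num_tests=20,
                         cycles_per_test=50_000_000,   # long streaming test
                         cycles_per_second=20_000)     # assumed sim throughput
print(f"Estimated regression time: {hours:.1f} hours")  # ~13.9 hours
```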

Figure 3 shows the number of directed tests created to verify a design in simulation (i.e., the regression suite). The findings obviously varied dramatically from a handful of tests to thousands of tests in a regression suite, depending on the design.

Figure 3. Number of directed tests created to verify a design

The increase in tests in the range of 1-100 is interesting to note. Is this due to the increase in adoption of constrained-random verification techniques in the past few years? Or is something else going on here? This line of questioning illustrates the value of reviewing various industry studies: the value lies not so much in the absolute numbers a study presents as in the questions the new data raises.

Next, let’s look at regression times as shown in Figure 4. As you can see, regression time also varies dramatically, from short regression times for some projects to multiple days for other projects. The median simulation regression time is about 16-24 hours. Here, we also see an increase in shorter regression times. Again, this data raises some interesting questions that are worth exploring.

Figure 4. Simulation regression time trends

One technique that is often used to speed up simulation regressions (whether due to very long tests or a large number of tests) is hardware-assisted acceleration or emulation. In addition, FPGA prototyping, while historically used as a platform for software development, has recently served a role in SoC integration validation.

Figure 5 shows the adoption trend for both HW-assisted acceleration/emulation and FPGA prototyping by comparing the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). We see a continual rise in HW acceleration and emulation. This is not only due to the need to verify larger designs, or designs with long test times. HW acceleration and emulation have become the key platform for SoC integration verification, where both hardware and software are integrated into a system for the first time. In addition, emulation is being used increasingly as a software development platform.

Figure 5. HW-assisted acceleration/emulation and FPGA Prototyping trends

Note that the adoption of FPGA prototyping has remained flat (or decreased slightly as the 2012 data suggest). This might seem counter-intuitive since we previously saw a trend in terms of the increase in SoC class designs. So what’s going on?

Figure 6 partitions the data for HW-assisted acceleration/emulation and FPGA prototyping adoption by design size: less than 1M gates, 1M to 20M gates, and greater than 20M gates. Notice that the adoption of HW-assisted acceleration/emulation continues to increase as design sizes increase. However, the adoption of FPGA prototyping rapidly drops off as design sizes increase beyond 20M gates. 

Figure 6. Acceleration/emulation and FPGA prototyping adoption by design size

This graph illustrates one of the problems with FPGA prototyping of very large designs, which is that there is an increased engineering effort required to partition designs across multiple FPGAs. In fact, what I have found is that FPGA prototyping of very large designs is often a major engineering effort in itself, and that many projects are seeking alternative solutions to address this problem.

In my next blog (click here), I will present the final data I plan to share from the Wilson Research Group study. This blog will focus on results in terms of meeting schedules, required spins, and classes of bugs contributing to respins. I will then wrap up this series of blogs in what I call the Epilogue—which will discuss potential gotchas and cautions on interpreting certain aspects of the data and thoughts about how the data could be used constructively.


29 July, 2013

Testbench Characteristics and Simulation Strategies

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for background on the study, click here).

In my previous blog (click here), I focused on the controversial topic of effort spent in verification. In this blog, I focus on some of the 2012 Wilson Research Group findings related to testbench characteristics and simulation strategies. Although I am shifting the focus away from verification effort, I believe that the data I present in this blog is related to my previous blog and really needs to be considered when calculating effort.

Time Spent in full-chip versus Subsystem-Level Simulation

Let’s begin by looking at Figure 1, which shows the percentage of time (on average) that a project spends in full-chip or SoC integration-level verification versus subsystem and IP block-level verification. The mean time performing full chip verification is represented by the dark green bar, while the mean time performing subsystem verification is represented by the light green bar. Keep in mind that this graph represents the industry average. Some projects spend more time in full-chip verification, while other projects spend less time.

Figure 1. Mean time spent in full chip versus subsystem simulation

Number of Tests Created to Verify the Design in Simulation

Next, let’s look at Figure 2, which shows the number of tests various projects create to verify their designs using simulation. The graph represents the findings from the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). Note that the curves look remarkably similar over the past five years. The median number of tests created to verify the design is within the range of (>200 – 500) tests. It is interesting to see a sharp percentage increase in the number of participants who claimed that fewer tests (1 – 100) were created to verify a design in 2012. It’s hard to determine exactly why this was the case; perhaps it is due to the increased use of constrained random (which I will talk about shortly), or perhaps there has been an increased use of legacy tests. The study was not designed to go deeper into this issue and try to uncover the root cause. This is something I intend to study informally next year through discussions with various industry thought leaders.

Figure 2. Number of tests created to verify a design in simulation

Percentage of Directed Tests versus Constrained-Random Tests

Now let’s compare the percentage of directed testing that is performed on a project to the percentage of constrained-random testing. Of course, in reality there is a wide range in the amount of directed and constrained-random testing that is actually performed on various projects. For example, some projects spend all of their time doing directed testing, while other projects combine techniques and spend part of their time doing directed testing—and the other part doing constrained-random. For our comparison, we will look at the industry average, as shown in Figure 3. The average percentage of tests that were directed is represented by the dark green bar, while the average percentage of tests that are constrained-random is represented by the light green bar.

Figure 3. Mean directed versus constrained-random testing performed on a project

Notice how the percentage mix of directed versus constrained-random testing has changed over the past two years. Today we see that, on average, a project performs more constrained-random simulation. In fact, between 2010 and 2012 there has been a 39 percent increase in the use of constrained-random simulation on a project. One driving force behind this increase has been the maturing and acceptance of both the SystemVerilog and UVM standards, since these two standards facilitate easier implementation of a constrained-random testbench. In addition, today we find that an entire ecosystem has emerged around both the SystemVerilog and UVM standards. This ecosystem consists of tools, verification IP, and industry expertise, such as consulting and training.

Nonetheless, even with the increased adoption of constrained-random simulation on a project, you will find that constrained-random simulation is generally only performed at the IP block or subsystem level. For the full SoC level simulation, directed testing and processor-driven verification are the prominent simulation-based techniques in use today.

Simulation Regression Time

Now let’s look at the time that various projects spend in a simulation regression. Figure 4 shows the trends in terms of simulation regression time by comparing the 2007 Far West Research study (in gray) with the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). There really hasn’t been a significant change in the time spent in a simulation regression within the past three years. You will find that some teams spend days or even weeks in a regression. Yet today, the industry median is between 8 and 16 hours, and for many projects, there has been a decrease in regression time over the past few years. Of course, this is another example of where deeper analysis is required to truly understand what is going on. To begin with, these questions should probably be refined to better understand simulation times related to IP versus SoC integration-level regressions. We will likely do that in future studies—with the understanding that we will not be able to show trends (or at least not initially).

Figure 4. Simulation regression time trends

In my next blog (click here), I’ll focus on design and verification language trends, as identified by the 2012 Wilson Research Group study.


22 July, 2013

Effort Spent On Verification (Continued)

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (click here), I focused on the controversial topic of effort spent in verification. This blog continues that discussion.

I stated in my previous blog that I don’t believe there is a simple answer to the question, “how much effort was spent on verification in your last project?” I believe that it is necessary to look at multiple data points to truly get a sense of the real effort involved in verification today. So, let’s look at a few additional findings from the study.

Time designers spend in verification

It’s important to note that verification engineers are not the only project members involved in functional verification. Design engineers spend a significant amount of their time in verification too, as shown in Figure 1.

Figure 1. Average (mean) time design engineers spend in design vs. verification

In fact, you might note that design engineers now actually spend more time doing verification than design. This time expenditure has shifted in the last five years. In fact, the amount of time that design engineers spend doing verification has increased by 15 percent since 2007, while the amount of time they spend doing design has decreased by about 13 percent.
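For what it’s worth, the arithmetic behind that shift can be sketched as follows; the 46 percent starting point is taken from the 2007 figure quoted in the 4 April 2011 post later in this archive, and the result is only an approximation.

```python
# Rough check of the "15 percent increase since 2007" statement, using the
# 46% mean (2007) reported in the 4 April 2011 post further down this page.
design_eng_verif_2007 = 0.46
design_eng_verif_2012 = design_eng_verif_2007 * 1.15   # 15% relative increase

print(f"Approximate 2012 share spent in verification: {design_eng_verif_2012:.0%}")
# -> about 53%, i.e., more time in verification than in design
```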

The designer’s involvement in verification ranges from:

  • Small sandbox testing to explore various aspects of the implementation
  • Full functional testing of IP blocks and SoC integration
  • Debugging verification problems identified by a separate verification team

Percentage of time verification engineers spend on various tasks

Next, let’s look at the mean time verification engineers spend in performing various tasks related to their specific project. You might note that verification engineers spend most of their time in debugging. Ideally, if all the tasks were optimized, then you would expect this. Yet, unfortunately, the time spent in debugging can vary significantly from project-to-project, which presents scheduling challenges for managers during a project’s verification planning process.

Figure 2. Average (mean) time verification engineers spend on various tasks

Number of formal analysis, FPGA prototyping, and emulation engineers

Functional verification is not limited to simulation-based techniques. Hence, it’s important to gather data related to other functional verification techniques, such as the number of verification engineers involved in formal analysis, FPGA prototyping, and emulation.

Figure 3 presents the trends in terms of the number of verification engineers focused on formal analysis on a project. In 2007, the mean number of verification engineers focused on formal analysis on a project was 1.68, while in 2010 the mean number increased to 1.84. For some reason, we did see a slight decrease in the mean number of verification engineers who focus on formal in 2012. Regardless, the curve is remarkably consistent for the past five years.

Figure 3. Median number of verification engineers focused on formal analysis

Although FPGA prototyping is a common technique used to create platforms for software development, it is also sometimes used by projects for SoC integration verification and system validation. Figure 4 presents the trends in terms of the number of verification engineers focused on FPGA prototyping. In 2007, the mean number of verification engineers focused on FPGA prototyping on a project was 1.42, while in 2010 the mean number was 1.86. In 2012 we saw a slight decline in the mean number of verification engineers focused on FPGA prototyping. However, the curve has been remarkably similar for the past five years.

Figure 4. Number of verification engineers focused on FPGA prototyping

Figure 5 presents the trends in terms of the number of verification engineers focused on hardware-assisted acceleration and emulation. In 2007, the mean number of verification engineers focused on hardware-assisted acceleration and emulation on a project was 1.31, while in 2010 the mean number was 1.86. In 2012, we see a slight decrease in the mean number of verification engineers who focus on hardware-assisted acceleration and emulation.

Figure 5. Number of verification engineers focused on emulation

Again, notice how the curve has been consistent over the past five years. In other words, we are not seeing any big trends in terms of increased verification engineers focused predominantly on formal, FPGA prototyping, or hardware-assisted acceleration and emulation. This trend was certainly not true for general verification engineers who focus on simulation-based techniques, as I presented in my previous blog, where we saw a 75 percent increase in the peak number of verification engineers involved on a project within the past five years.

A few more thoughts on verification effort

So, can I conclusively state that 70 percent of a project’s effort is spent in verification today as some people have claimed? No. In fact, even after reviewing the data on different aspects of today’s verification process, I would still find it difficult to state quantitatively what the effort is. Yet, the data that I’ve presented so far seems to indicate that the effort (whatever it is) is increasing. And there is still additional data relevant to the verification effort discussion that I plan to present in upcoming blogs. However, in my next blog (click here), I shift the discussion from verification effort, and focus on some of the 2012 Wilson Research Group findings related to testbench characteristics and simulation strategies.


8 July, 2013

Reuse Trends

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I focused on clocking and power management.  In this blog, I focus on design and verification reuse trends. As I mentioned in my prologue blog to this series (click here), one interesting trend that emerged from the study is that reuse adoption is increasing.

Design Composition Trends

Figure 1 shows the mean design composition trends graph, which compares the 2007 Far West Research study (in blue) with the 2012 Wilson Research Group study (in green).

New logic development has decreased by 34 percent in the last five years, while external IP adoption has increased by 69 percent. This increase in adoption has been driven by IP demand required for SoC development, such as embedded processor cores and standard interface cores. 

Figure 1. Mean design composition trends

Figure 2 compares today’s design composition between FPGA designs (in red) and Non-FPGA designs (in green). Currently, more new designs (i.e., new RTL) are created for FPGA versus Non-FPGA designs. However, as FPGAs get larger in terms of transistors, reuse will become even more important to address the design productivity gap that could arise between the number of transistors that can be manufactured on an FPGA and the time available to create the design for a given project.

Figure 2. Mean composition comparison between FPGA and Non-FPGA designs.

 

Verification Testbench Composition Trends

Figure 3 shows the mean testbench composition trends graph, which compares the 2007 Far West Research study (in blue) with the 2012 Wilson Research Group study (in green).

Notice that new verification code development has decreased by 24 percent in the last five years, while external verification IP adoption has increased by 138 percent. This increase has been driven by the emergence of standard on-chip and off-chip bus architectures.

Figure 3. Mean testbench composition trends

Figure 4 compares today’s testbench composition between FPGA (in red) and Non-FPGA (in green) designs. Again, we see that more new code is written today for FPGA than Non-FPGA testbenches, and I expect this will change over time to be more in line with Non-FPGA designs. 

Figure 4. Mean testbench composition comparison between FPGA and Non-FPGA designs

In my next blog (click here), I’ll shift my focus from design trends to project resource trends. I’ll also present our findings on the project effort spent in verification.


23 April, 2013

This is the first in a series of blogs that presents the results from the 2012 Wilson Research Group Functional Verification Study.

Study Overview

In 2002 and 2004, Ron Collett International, Inc. conducted its well-known ASIC/IC functional verification studies, which provided invaluable insight into the state of the electronic industry and its trends in design and verification. However, after the 2004 study, no other industry studies were conducted, which left a void in identifying industry trends.

To address this void, Mentor Graphics commissioned Far West Research to conduct an industry study on functional verification in the fall of 2007. Then in the fall of 2010, Mentor commissioned Wilson Research Group to conduct another functional verification study. Both of these studies were conducted as blind studies to avoid influencing the results. This means that the survey participants did not know that the study was commissioned by Mentor Graphics. In addition, to support trend analysis on the data, both studies followed the same format and questions (when possible) as the original 2002 and 2004 Collett studies.

In the fall of 2012, Mentor Graphics commissioned Wilson Research Group again to conduct a new functional verification study. This study was also a blind study and follows the same format as the Collett, Far West Research, and previous Wilson Research Group studies. The 2012 Wilson Research Group study is one of the largest functional verification studies ever conducted. The overall confidence level of the study was calculated to be 95% with a margin of error of 4.05%.
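As a side note on what a 4.05 percent margin of error at 95 percent confidence implies, here is a minimal sketch using the standard worst-case margin-of-error formula; the implied sample size is a derived estimate for illustration, not a number published by the study.

```python
# Standard worst-case margin-of-error formula: moe = z * sqrt(p*(1-p)/n).
# Solving for n with p = 0.5 gives a rough idea of the survey's sample size;
# this is an estimate for illustration, not a figure from the study itself.
import math

z_95 = 1.96        # z-score for a 95% confidence level
p = 0.5            # worst-case proportion
moe = 0.0405       # 4.05% margin of error

n = (z_95 ** 2) * p * (1 - p) / moe ** 2
print(f"Implied sample size: about {math.ceil(n)} respondents")  # ~586
```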

Unlike the previous Collett and Far West Research studies that were conducted only in North America, both the 2010 and 2012 Wilson Research Group studies were worldwide studies. The regions targeted were:

  • North America: Canada, United States
  • Europe/Israel: Finland, France, Germany, Israel, Italy, Sweden, UK
  • Asia (minus India): China, Korea, Japan, Taiwan
  • India

The survey results are compiled both globally and regionally for analysis.

Another difference between the Wilson Research Group and previous industry studies is that both of the Wilson Research Group studies also included FPGA projects. Hence for the first time, we are able to present some emerging trends in the FPGA functional verification space.

Figure 1 shows the percentage makeup of survey participants by their job description. The red bars represent the FPGA participants while the green bars represent the non-FPGA (i.e., IC/ASIC) participants.

 

Figure 1: Survey participants’ job title description

Figure 2 shows the percentage makeup of survey participants by company type. Again, the red bars represent the FPGA participants while the green bars represent the non-FPGA (i.e., IC/ASIC) participants.

Figure 2: Survey participants’ company description

In a future set of blogs, over the course of the next few months, I plan to present the highlights from the 2012 Wilson Research Group study along with my analysis, comments, and obviously, opinions. A few interesting observations emerged from the study, which include:

  1. FPGA projects are beginning to adopt advanced verification techniques due to increased design complexity.
  2. The effort spent on verification is increasing.
  3. The industry is converging on common processes driven by maturing industry standards.

A few final comments concerning the 2012 Wilson Research Group study. As I mentioned, the study was based on the original 2002 and 2004 Collett studies. To ensure consistency in terms of proper interpretation (and to avoid potential error related to misinterpretation of the questions), we have avoided changing or modifying the questions over the years, with the exception of questions that relate to shrinking geometry sizes and gate counts. One other exception relates to introducing a few new questions related to verification techniques that were not a major concern ten years ago (such as low-power functional verification). Ensuring consistency in the line of questioning enables us to have high confidence in the trends that emerge over the years.

Also, the method by which the study pool was created follows the same process as the original Collett studies. It is important to note that the data presented in this series of blogs does not represent trends related to silicon volume (that is, a few projects could dominate in terms of the volume of manufactured silicon and not represent the broader industry). The data in this series of blogs represents trends related to the study pool, which is a fair proxy for active design projects.

My next blog presents current design trends that were identified by the survey. This will be followed by a set of blogs focused on the functional verification results.

Also, to learn more about the 2012 Wilson Research Group study, view my pre-recorded Functional Verification Study web-seminar, which is available on the Verification Academy website.

Quick links to the 2012 Wilson Research Group Study results (so far…)


18 October, 2012

When it comes to formal methods, many engineers are skeptics. Perhaps this is due to value propositions that have been pitched over the years that have over-promised yet under-delivered in terms of results. Or perhaps it is due to the advanced skills that have traditionally been required to achieve predictable and reliable results. After all, historically this was the case—dating back to the mid-nineties when formal techniques were only adopted by companies that could afford a dedicated team of formal experts with PhDs.

So, what’s changed today? The emergence of functional verification solutions targeted at specific problem domains, which blend simulation with formal-based techniques in a seamless way to improve results. In other words, the application of formal-based technology is not just for experts anymore! In fact, everyone can reap the benefits of formal analysis today with very little effort.

One example of this blending of simulation with formal-based techniques is in the area of accelerating the process of code coverage closure with the new Questa CoverCheck solution. Closing code coverage typically involves many engineering weeks of effort to manually review code coverage holes to determine if they are unreachable and can be safely ignored, or to figure out exactly how to handcraft special tests to cover them during simulation. Questa CoverCheck makes it easy for non-expert users to leverage formal-based technology to complete this process by automatically identifying the set of unreachable coverage items in a design, and then guiding the user to create tests for the reachable items that have not yet been covered. This process, illustrated in the figure below, is push-button, low-effort, and requires no expertise with formal techniques. In addition, no assertions or assertion-language expertise are required. It is a beautiful example of how formal-based technology is blended with simulation to form a solution that improves both productivity and quality of results.

Another example of how formal-based technology is being used today to complement simulation is with AutoCheck, which is part of the Questa Formal solution. For example, there is a class of bugs that cannot be found using RTL simulation due to a simulation effect known as X-state optimism. These bugs might be found during gate-level simulation, but that occurs very late in the design flow, when they are costlier to fix. By using AutoCheck, engineers are able to identify and correct X-state issues early in the design flow, before simulation occurs. In addition to X-state issues, AutoCheck uses formal-based technology to verify a wide range of common RTL errors that are difficult or impossible to find during RTL simulation. It is another example of a push-button, low-effort solution where assertion-language and formal expertise is not required. What’s new in the latest Questa Formal release are significant improvements in engine performance and capacity, along with multicore support.
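To illustrate the X-state optimism effect mentioned above, here is a toy model, written in Python purely for illustration (it is not how any Questa tool works), of why an unknown value can silently take the "safe" branch in RTL simulation while a more pessimistic interpretation exposes the problem.

```python
# Toy model of X-state optimism: RTL simulation treats an unknown ('x')
# condition as false, so logic fed by an uninitialized register can look
# correct; a pessimistic (gate-level/formal style) view propagates the unknown.
X = "x"  # unknown value

def rtl_if(cond, then_val, else_val):
    # Optimistic: anything that is not an explicit 1 falls through to 'else'.
    return then_val if cond == 1 else else_val

def pessimistic_if(cond, then_val, else_val):
    # Pessimistic: an unknown condition yields an unknown result.
    if cond == X:
        return X
    return then_val if cond == 1 else else_val

uninitialized_flag = X
print(rtl_if(uninitialized_flag, 1, 0))          # 0  -> bug masked in RTL sim
print(pessimistic_if(uninitialized_flag, 1, 0))  # 'x' -> issue made visible
```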

Questa CDC is one more example of how formal-based technology is being used today to complement simulation. Today, we see that about 94% of all designs have multiple asynchronous clock domains. Verifying that a signal originating from one clock domain will safely be registered in a different asynchronous clock domain is not possible using traditional RTL simulation, since state element setup and hold times are not modeled, which means that metastability issues will not be verified. Again, these bugs might be found later in the flow during gate-level simulation, where they are costlier to fix. Static timing analysis, although effective at finding timing issues within a single clock domain or between synchronous clock domains, is unable to identify issues across asynchronous clock domains. This is an area where formal-based technology, such as Questa CDC, can help. What’s new in the latest Questa CDC release is support for unlimited design sizes through hierarchical CDC analysis, along with a 5X improvement in performance.

To learn what’s new with Questa Formal-Based Technology, see our recent press release. Or check out the Questa CoverCheck, Questa CDC, and Questa Formal links.


26 June, 2011

Verification Techniques & Technologies Adoption Trends

This blog is a continuation of a series of blogs, which present the highlights from the 2010 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 8 click here), I focused on some of the 2010 Wilson Research Group findings related to design and verification language trends. In this blog, I present verification techniques and technologies adoption trends, as identified by the 2010 Wilson Research Group study.

One of the claims I made in the prologue to this series of blogs is that we are seeing a trend in increased industry adoption of advanced functional verification techniques, which is supported by the data I present in this blog. An interesting question you might ask is “what is driving this trend?” In some of my earlier blogs (click here for Part 1 and Part 2) I showed an industry trend in that design complexity is increasing in terms of design sizes and number of embedded processors. In addition, I’ve presented trend data that showed an increase in total project time and effort spent in verification (click here for Part 4). My belief is that the industry is being forced to mature its functional verification process to address increasing complexity and effort.

Simulation Techniques Adoption Trends

Let’s begin by comparing the adoption trends related to various simulation techniques as shown in Figure 1, where the data from the 2007 Far West Research study is shown in blue and the data from the 2010 Wilson Research Group study is shown in green.

Figure 1. Simulation-based technique adoption trends

You can see that the study finds the industry increasing its adoption of various functional verification techniques.

For example, in 2007, the Far West Research Group found that only 48 percent of the industry performed code coverage. This surprised me. After all, HDL-based code coverage is a technology that has been around since the early 1990’s. However, I did informally verify the 2007 results through numerous customer visits and discussions. In 2010, we see that the industry adoption of code coverage has increased to 72 percent.

Now, a side comment: In this blog, I do not plan to discuss either the strengths or weaknesses of the various verification techniques that were studied (such as code coverage, whose strengths and weaknesses have been argued and debated for years)—perhaps in a separate series of future blogs. In this series of blogs, I plan to focus only on the findings from the 2010 Wilson Research Group study.

In 2007, the Far West Research Group study found that 37 percent of the industry had adopted assertions for use in simulation. In 2010, we find that industry adoption of assertions had increased to 72 percent. I believe that the maturing of the various assertion language standards has contributed to this increased adoption.

In 2007, the Far West Research Group study found that 40 percent of the industry had adopted functional coverage for use in simulation. In 2010, the industry adoption of functional coverage had increased to 69 percent. Part of this increase in functional coverage adoption has been driven by the increased adoption of constrained-random simulation, since you really can’t effectively do constrained-random simulation without doing functional coverage.

In fact, we see from the Far West Research Group 2007 study that 41 percent of the industry had adopted constrained-random simulation techniques. In 2010, the industry adoption had increased to 69 percent. I believe that this increase in constrained-random adoption has been driven by the increased adoption of the various base-class library methodologies, as I presented in a previous blog (click here for Part 8).

Formal Property Checking Adoption Trends

Figure 2 shows the trends in terms of formal property checking adoption by comparing the 2007 Far West Research study (in blue) with the 2010 Wilson Research Group study (in green). The industry adoption of formal property checking has increased by an amazing 53 percent in the past three years. Again, this is another data point that supports my claim that the industry is starting to mature its adoption of advanced functional verification techniques.

 

Figure 2. Trends in formal property checking adoption
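As a quick cross-check of that 53 percent figure, the arithmetic lines up with the 19 percent (2007) and 29 percent (2010) formal verification adoption numbers quoted in the 26 August 2013 post earlier in this archive; the short sketch below just makes that relative-increase calculation explicit.

```python
# Relative increase implied by the adoption figures quoted elsewhere on this
# page (19% in 2007, 29% in 2010); the result matches the ~53% cited here.
adoption_2007 = 0.19
adoption_2010 = 0.29

relative_increase = (adoption_2010 - adoption_2007) / adoption_2007
print(f"Relative increase in formal property checking: {relative_increase:.0%}")
# -> approximately 53%
```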

Another way to analyze the results is to partition a project’s adoption of formal property checking by design size, as shown in Figure 3, where less than 1M gates  is shown in blue, 1M to 20M gates is shown in orange, and greater than 20M gates is shown in red. Obviously, the larger the design, the more effort is generally spent in verification. Hence, it’s not too surprising to see the increased adoption of formal property checking for larger designs.

Figure 3. Trends in formal property checking adoption by design size

Acceleration/Emulation & FPGA Prototyping Adoption Trends

The amount of time spent in a simulation regression is an increasing concern for many projects. Intuitively, we tend to think that the design size influences simulation performance. However, there are two equally important factors that must be considered: the number of tests in the simulation regression suite, and the length of each test in terms of clock cycles.

For example, a project might have a small or moderate-sized design, yet verification of this design requires a long running test (e.g., a video input stream). Hence, in this example, the simulation regression time is influenced by the number of clock cycles required for the test, and not necessarily the design size itself.

In blog 6 of this series, I presented industry data on the number of tests created to verify a design in simulation (i.e., the regression suite). The findings obviously varied dramatically from a handful of tests to thousands of tests in a regression suite, depending on the design. In Figure 4 below, I show the findings for a project’s regression time, which also varies dramatically from short regression times for some projects to multiple days for other projects. The median simulation regression time was about 16 hours in 2010.

Figure 4. Simulation regression time trends

One technique that is often used to speed up simulation regressions (whether due to very long tests or a large number of tests) is hardware-assisted acceleration or emulation. In addition, FPGA prototyping, while historically used as a platform for software development, has recently served a role in SoC integration validation.

Figure 5 shows the adoption trend for both HW assisted acceleration/emulation and FPGA prototyping by comparing the 2007 Far West Research study (in blue) with the 2010 Wilson Research Group study (in green). We have seen a 75 percent increase in the adoption of HW assisted acceleration/emulation over the past three years. 

Figure 5. HW assisted acceleration/emulation and FPGA Prototyping trends

I was surprised to see that the adoption of FPGA prototyping did not increase over the past three years, considering that we found an increase in SoC development over the same period. So, I decided to dig deeper into this anomaly.

In Figure 6, you’ll see that I partitioned the HW assisted acceleration/emulation and FPGA prototyping adoption data by design size: less than 1M gates (in blue), 1M to 20M gates (in yellow), and greater than 20M gates (in red). This revealed that the adoption of HW assisted acceleration/emulation continued to increase as design sizes increased. However, the adoption of FPGA prototyping rapidly dropped off as design sizes increased beyond 20M gates.   

Figure 6. Acceleration/emulation & FPGA Prototyping adoption by design size

So, what’s going on? One problem with FPGA prototyping of large designs is that there is an increased engineering effort required to partition designs across multiple FPGAs. In fact, what I have found is that FPGA prototyping of very large designs is often a major engineering effort in itself, and that many projects are seeking alternative solutions to address this problem.

In my next blog, I plan to present the 2010 Wilson Research Group study findings related to various project results in terms of schedule and required spins before production.


4 April, 2011

 

Effort Spent On Verification (Continued)

This blog is a continuation of a series of blogs, which present the highlights from the 2010 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (click here), I focused on the controversial topic of effort spent in verification. This blog continues this discussion.

I stated in my previous blog that I don’t believe there is a simple answer to the question, “how much effort was spent on verification in your last project?” I believe that it is necessary to look at multiple data points to truly get a sense of the real effort involved in verification today. So, let’s look at a few additional findings from the study.

Time designers spend in verification

It’s important to note that verification engineers are not the only project members involved in functional verification. Design engineers spend a significant amount of their time in verification too, as shown in Figure 1.


Figure 1. Mean time designers spend in design vs. verification

In fact, one finding from our study is that the mean time a design engineer spends in verification has increased from an average of 46 percent in 2007, to 50 percent in 2010. The involvement of designers in verification ranges from:

  • Small sandbox testing to explore various aspects of the implementation
  • Full functional testing of IP blocks and SoC integration
  • Debugging verification problems identified by a separate verification team

Percentage of time verification engineers spend on various tasks

Next, let’s look at the mean time verification engineers spend on various tasks related to their specific project. You might note that verification engineers spend most of their time in debugging. Ideally, if all the tasks were optimized, then you would expect this. Yet, unfortunately, the time spent in debugging can vary significantly from project-to-project, which presents scheduling challenges for managers during a project’s verification planning process.


Figure 2. Mean time verification engineers spend on different tasks

Number of formal analysis, FPGA prototyping, and emulation engineers

Functional verification is not limited only to simulation-based techniques. Hence, it’s important to gather data related to other functional verification techniques, such as the number of verification engineers involved in formal analysis, FPGA prototyping, and emulation.

Figure 3 presents the trends in terms of number of verification engineers focused on formal analysis. In 2007, the median number of verification engineers focused on formal analysis on a project was 1.68, while in 2010 the median number increased to 1.84.


Figure 3. Median number of verification engineers focused on formal analysis

Figure 4 presents the trends in terms of number of verification engineers focused on FPGA prototyping. In 2007, the median number of verification engineers focused on FPGA prototyping on a project was 1.42, while in 2010 the median number increased to 2.04. Although FPGA prototyping is a common technique used to create platforms for software development, it can be used for SoC integration verification and system validation.


Figure 4. Number of verification engineers focused on FPGA prototyping

Figure 5 presents the trends in terms of number of verification engineers focused on hardware assisted acceleration and emulation. In 2007, the median number of verification engineers focused on hardware assisted acceleration and emulation on a project was 1.31, while in 2010 the median number increased to 1.86.


Figure 5. Number of verification engineers focused on emulation

A few more thoughts on verification effort

So, can I conclusively state that 70 percent of a project’s effort is spent in verification today?  No.  In fact, even after reviewing the data on different aspects of today’s verification process, I would still find it difficult to state quantitatively what the effort is. Yet, the data that I’ve presented so far seems to indicate that the effort (whatever it is) is increasing. And there is still additional data relevant to the verification effort discussion that I plan to present in upcoming blogs. However, in my next blog (click here), I shift the discussion from verification effort, and focus on some of the 2010 Wilson Research Group findings related to testbench characteristics and simulation strategies.

