Posts Tagged ‘Simulation’
This is the first in a series of blogs that presents the results from the 2012 Wilson Research Group Functional Verification Study.
In 2002 and 2004, Ron Collett International, Inc. conducted its well-known ASIC/IC functional verification studies, which provided invaluable insight into the state of the electronics industry and its trends in design and verification. However, after the 2004 study, no other industry studies were conducted, which left a void in identifying industry trends.
To address this void, Mentor Graphics commissioned Far West Research to conduct an industry study on functional verification in the fall of 2007. Then in the fall of 2010, Mentor commissioned Wilson Research Group to conduct another functional verification study. Both of these studies were conducted as blind studies to avoid influencing the results. This means that the survey participants did not know that the study was commissioned by Mentor Graphics. In addition, to support trend analysis on the data, both studies followed the same format and questions (when possible) as the original 2002 and 2004 Collett studies.
In the fall of 2012, Mentor Graphics commissioned Wilson Research Group again to conduct a new functional verification study. This study was also a blind study and follows the same format as the Collett, Far West Research, and previous Wilson Research Group studies. The 2012 Wilson Research Group study is one of the largest functional verification studies ever conducted. The overall confidence level of the study was calculated to be 95% with a margin of error of 4.05%.
Unlike the previous Collett and Far West Research studies that were conducted only in North America, both the 2010 and 2012 Wilson Research Group studies were worldwide studies. The regions targeted were:
- North America: Canada, United States
- Asia (minus India): China, Korea, Japan, Taiwan
The survey results are compiled both globally and regionally for analysis.
Another difference between the Wilson Research Group and previous industry studies is that both of the Wilson Research Group studies also included FPGA projects. Hence for the first time, we are able to present some emerging trends in the FPGA functional verification space.
Figure 1 shows the percentage makeup of survey participants by their job description. The red bars represent the FPGA participants, while the green bars represent the non-FPGA (i.e., IC/ASIC) participants.
Figure 1: Survey participants’ job title description
Figure 2 shows the percentage makeup of survey participants by company type. Again, the red bars represent the FPGA participants, while the green bars represent the non-FPGA (i.e., IC/ASIC) participants.
Figure 2: Survey participants’ company description
In a future set of blogs, over the course of the next few months, I plan to present the highlights from the 2012 Wilson Research Group study along with my analysis, comments, and obviously, opinions. A few interesting observations emerged from the study, which include:
- FPGA projects are beginning to adopt advanced verification techniques due to increased design complexity.
- The effort spent on verification is increasing.
- The industry is converging on common processes driven by maturing industry standards.
My next blog presents current design trends that were identified by the survey. This will be followed by a set of blogs focused on the functional verification results.
Also, to learn more about the 2012 Wilson Research Group study, view my pre-recorded Functional Verification Study web seminar, which is located on the Verification Academy website.
Quick links to the 2012 Wilson Research Group Study results (so far…)
- Part 1 – Design Trends
When it comes to formal methods, many engineers are skeptics. Perhaps this is due to value propositions that have been pitched over the years that have over-promised yet under-delivered in terms of results. Or perhaps it is due to the advanced skills that have traditionally been required to achieve predictable and reliable results. After all, historically this was the case—dating back to the mid-nineties when formal techniques were only adopted by companies that could afford a dedicated team of formal experts with PhDs.
So, what’s changed today? The emergence of functional verification solutions targeted at specific problem domains, which blend simulation with formal-based techniques in a seamless way to improve results. In other words, the application of formal-based technology is not just for experts anymore! In fact, everyone can reap the benefits of formal analysis today with very little effort.
One example of this blending of simulation with formal-based techniques is in the area of accelerating the process of code coverage closure with the new Questa CoverCheck solution. Closing code coverage typically involves many engineering weeks of effort spent manually reviewing code coverage holes to determine whether they are unreachable and can be safely ignored, or figuring out exactly how to handcraft special tests to cover them during simulation. Questa CoverCheck makes it easy for non-expert users to leverage formal-based technology to complete this process by automatically identifying the set of unreachable coverage items in a design, and then guiding the user to create tests for the reachable items that have not been covered yet. This process, illustrated in the figure below, is push-button, low-effort, and requires no expertise with formal techniques. In addition, no assertions, and no expertise in assertion languages, are required. It is a beautiful example of how formal-based technology is blended with simulation to form a solution that improves both productivity and quality of results.
Another example of how formal-based technology is being used today to complement simulation is AutoCheck, which is part of the Questa Formal solution. For example, there is a class of bugs that cannot be found using RTL simulation due to a simulation effect known as X-state optimism. These bugs might be found during gate-level simulation, but this occurs very late in the design flow, when they are costlier to fix. By using AutoCheck, engineers are able to identify and correct X-state issues early in the design flow, before simulation occurs. In addition to X-state issues, AutoCheck uses formal-based technology to check for a wide range of common RTL errors that are difficult or impossible to find during RTL simulation. It is another example of a push-button, low-effort solution where assertion-language and formal expertise are not required. What’s new in the latest Questa Formal release are significant improvements in engine performance and capacity, along with multicore support.
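To make X-state optimism a little more concrete, here is a tiny, hypothetical Python sketch. It is not Questa, not AutoCheck, and not how any real simulator is implemented; it simply contrasts an optimistic Verilog-style “if” (which quietly treats an unknown value like 0) with a pessimistic evaluation that propagates the unknown:

```python
# Toy three-value logic: 0, 1, or 'x' (unknown).
# An uninitialized flop drives a select signal. With optimistic RTL-style
# evaluation the 'x' silently behaves like 0, so a bug can be masked.

def mux_optimistic(sel, a, b):
    # Verilog-style "if (sel)": anything that is not 1 (including 'x')
    # falls through to the else branch, hiding that sel is unknown.
    return a if sel == 1 else b

def mux_pessimistic(sel, a, b):
    # Propagate the unknown: if sel is 'x' and the inputs differ,
    # the output is genuinely unknown.
    if sel == 'x':
        return a if a == b else 'x'
    return a if sel == 1 else b

uninitialized_sel = 'x'                            # e.g., a flop that was never reset
print(mux_optimistic(uninitialized_sel, 1, 0))     # -> 0: looks "fine" in RTL simulation
print(mux_pessimistic(uninitialized_sel, 1, 0))    # -> 'x': the uninitialized state is visible
```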
Questa CDC is one more example of how formal-based technology is being used today to complement simulation. Today, we see that about 94 percent of all designs have multiple asynchronous clock domains. Verifying that a signal originating from one clock domain will safely be registered in a different, asynchronous clock domain is not possible using traditional RTL simulation, since state-element setup and hold times are not modeled, which means that metastability issues will not be verified. Again, these bugs might be found later in the flow during gate-level simulation, where they are costlier to fix. Static timing analysis, although effective at finding timing issues within a single clock domain or across synchronous clock domains, is unable to identify issues across asynchronous clock domains. This is an area where formal-based technology, such as Questa CDC, can help. What’s new in the latest Questa CDC release is support for unlimited design sizes through hierarchical CDC analysis, along with a 5X improvement in performance.
Instant Replay Offers Multiple Views at Any Speed
If you’ve watched any professional sporting event on television lately, you’ve seen the pressure put on referees and umpires. They have to make split-second decisions in real-time, having viewed ultra-high-speed action just a single time. But watching at home on television, we get the luxury of viewing multiple replays of events in question in high-definition super-slow-motion, one frame at a time, and even in reverse. We also get to see many different views of these controversial events, from the front, the back, the side, up close, or far away. Sometimes it seems there must be twenty different cameras at every sporting event.
Wouldn’t it be nice if you could apply this same principle to your SoC-level simulations? What if you had instant replay from multiple viewing angles in your functional verification toolbox? It turns out that such a technology indeed exists, and it’s called “Codelink Replay”.
Codelink Replay enables verification engineers to use instant replay with multiple viewing angles to quickly and accurately debug even the most complex SoC level simulation failures. This is becoming increasingly important, as we see in Harry Foster’s blog series about the 2010 Wilson Research Group Functional Verification Study that over half of all new design starts now contain multiple embedded processors. If you’re responsible for verifying a design with multiple embedded cores such as ARM’s new Cortex A15 and Cortex A7 processors, this technology will have a dramatic impact for you.
Multi-Core SoC Design Verification
Multi-core designs present a whole new level of verification challenges. Achieving functional coverage of your IP blocks at the RTL level has become merely a pre-requisite now – as they say, “necessary but not sufficient”. Welcome to the world of SoC-level verification, where you use your design’s software as a testbench. After all, a testbench’s role is to mimic the design’s target environment so as to test its functionality, and how better to accomplish this than to execute the design’s software against its hardware, albeit during simulation?
Some verification teams have already dabbled in this world. Perhaps you’ve written a handful of tests in C or assembly code, loaded them into memory, initialized your processor, and executed them. This is indeed the best way to verify SoC level functionality including power optimization management, clocking domain control, bus traffic arbitration schemes, driver-to-peripheral compatibility, and more, as none of these aspects of an SoC design can be appropriately verified at the RTL IP block level.
However, imagine running a software testbench program only to see that the processor stopped executing code two hours into the simulation. What do you do next? Debugging a “software as a testbench” simulation can be daunting, especially when the software developers say “the software is good” and the hardware designers say “the hardware is fine”. Until recently, you could count on weeks to debug these types of failures. And the problem is compounded in today’s SoC designs, where multiple processors run software test programs from memory.
This is where Codelink Replay comes in. It enables you to replay your simulation in slow motion or fast forward, while observing many different views including hardware views (waveforms, CPU register values, program counter, call stack, bus transactions, and four-state logic) and software views (memory, source code, decompiled code, variable values, and output) – all remaining in perfect synchrony, whether you’re playing forward or backward, single-step, slow-motion, or fast speed. So when your simulation fails, just start at that point in time, and replay backwards to the root of the problem. It’s non-invasive. It doesn’t require any modifications to your design or to your tests.
Debugging SoC Designs Quickly and Accurately
So if you’re under pressure to make fast and accurate decisions when your SoC-level tests fail, you can relate to the challenges faced by professional sports referees and umpires. But with Codelink Replay, you can be assured that there are about 20 different virtual “cameras” tracing and logging your processors during simulation, giving you the same instant replay benefit we get when we watch sporting events on television. If you’re interested in learning more about this new technology, check out the web seminar at the URL below, which introduces Codelink Replay and shows how it supports the entire ARM family of processors, including the latest Cortex A-Series, Cortex R-Series, and Cortex M-Series.
Who Doesn’t Like Faster?
In my last blog post I introduced new technology called Intelligent Testbench Automation (“iTBA”). It’s generating lots of interest in the industry because, just like constrained random testing (“CRT”), it can generate tons of tests for functional verification. But it has unique efficiencies that allow you to achieve coverage 10X to 100X faster. And who doesn’t like faster? Well, since the last post I’ve received many questions of interest from readers, but one seems to stick out enough to “cover” it here in a follow-up post.
Several readers commented that they like the concept of randomness, because it has the ability to generate sequences of sequences; perhaps even a single sequence executed multiple times in a row.[1] And they were willing to suffer some extra redundancy as an unfortunate but necessary trade-off.
While this benefit of random testing is understandable, there’s no need to worry, as iTBA has you covered here. If you checked out this link - http://www.verificationacademy.com/infact2 - you found an interactive example of a side-by-side comparison of CRT and iTBA. The intent of the example was to show what happens when you use CRT to generate tests randomly versus when you use iTBA to generate tests without redundancy.
However, in a real application of iTBA, it’s equally likely that you’d manage your redundancy, not necessarily eliminate it completely. We’ve improved the on-line illustration now to include two (of the many) additional features of iTBA.
Coverage First – Then Random
One is the ability to run a simulation with high-coverage, non-redundant tests first, followed immediately by random tests. Try it again, but this time check the box labeled “Run after all coverage is met”. What you’ll find is that iTBA achieves your targeted coverage in the first 576 tests, at which time CRT will have achieved somewhere around 50% coverage at best. But notice that iTBA doesn’t stop at 100% coverage. It continues on, generating tests randomly. By the time CRT gets to about 70% coverage, iTBA has achieved 100%, and has also generated scores of additional tests randomly. You can have the best of both worlds. You can click on the “suspend”, “resume”, and “show chart” buttons during the simulation to see the progress of each.
Interleave Coverage and Random
Two is the ability to run a simulation randomly, but to clip the redundancy rather than eliminate it. Move the “inFact coverage goal” bar to set the clip level (try 2 or 3 or 4), and restart the simulation. Now you’ll see iTBA generating random tests, but managing the redundancy to whatever level you chose. The result is simulation with a managed amount of redundancy that still achieves 100% of your target coverage, including every corner-case.
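To illustrate the idea of clipping redundancy rather than eliminating it, here is a small Python toy model. It is only a sketch of the concept, with made-up names, and it does not reflect how inFact’s graph-based engine actually works: test cases are drawn at random, but any case that has already been run “clip” times is skipped, so you still reach 100% coverage with only a bounded amount of repetition.

```python
import random

def run_with_clipped_redundancy(n_cases=576, clip=3, seed=0):
    """Draw test cases at random, but never run any single case more than
    `clip` times; stop once every case has been covered at least once."""
    rng = random.Random(seed)
    hits = [0] * n_cases
    total_tests = 0
    while min(hits) == 0:              # keep going until 100% coverage
        case = rng.randrange(n_cases)
        if hits[case] >= clip:         # redundancy is managed, not eliminated
            continue                   # (a purely random generator would rerun it)
        hits[case] += 1
        total_tests += 1
    return total_tests

print(run_with_clipped_redundancy(clip=1))   # no redundancy: exactly 576 tests
print(run_with_clipped_redundancy(clip=3))   # some repetition allowed, still full coverage
```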
iTBA generates tons of tests, but lets you decide how much to control them. If you’re interested in learning more about how iTBA can help you achieve your functional verification goals faster, you might consider attending the Tech Design Forum in Santa Clara on September 8th. There’s a track dedicated to achieving coverage closure. Check out this URL for more information: http://www.techdesignforums.com/events/santa-clara/event.cfm
[1] By the way, if achieving your test goals is predicated on certain specific sequences of sequences, our experts can show you how to create an iTBA graph that will achieve those goals much faster than relying on redundancy. But that’s another story for another time.
If you’ve been to DAC or DVCon during the past couple of years, you’ve probably at least heard of something new called “Intelligent Testbench Automation”. Well, it’s actually not really all that new, as the underlying principles have been used in compiler testing and some types of software testing for the past three decades, but its application to electronic design verification is certainly new, and exciting.
The value proposition of iTBA is fairly simple and straightforward. Just like constrained random testing, iTBA generates tons of stimuli for functional verification. But iTBA is so efficient, that it achieves the targeted functional coverage one to two orders of magnitude faster than CRT. So what would you do if you could achieve your current simulation goals 10X to 100X faster?
You could finish your verification earlier, especially when it seems like you’re getting new IP drops every day. I’ve seen IP verification teams reduce their simulations from several days on several CPUs (using CRT) to a couple of hours on a single CPU (with iTBA). No longer can IP designers send RTL revisions faster than we can verify them.
But for me, I’d ultimately use the time savings to expand my testing goals. Today’s designs are so complex that typically only a fraction of their functionality gets tested anyway. And one of the biggest challenges is trading off what functionality to test, and what not to test. (We’ll show you how iTBA can help you here, in a future blog post.) Well, if I can achieve my initial target coverage in one-tenth of the time, then I’d use at least part of the time savings to expand my coverage, and go after some of the functionality that originally I didn’t think I’d have time to test.
On-Line Illustration
If you check out this link – http://www.verificationacademy.com/infact – you’ll find an interactive example of a side-by-side comparison of constrained random testing and intelligent testbench automation. It’s an Adobe Flash demonstration, and it lets you run your own simulations. Try it, it’s fun.
The example shows a target coverage of 576 equally weighted test cases in a 24×24 grid. You can adjust the dials at the top for the number and speed of simulators to use, and then click on “start”. Both CRT and iTBA simulations run in parallel at the same speed, cycle for cycle, and each time a new test case is simulated the number in its cell is incremented by one, and the color of the cell changes. Notice that the iTBA simulation on the right achieves 100% coverage very quickly, covering every unique test case efficiently. Notice, though, that the CRT simulation on the left only eventually achieves 100% coverage, painfully slowly and with much unwanted redundancy. You can also click on “show chart” to see a coverage chart of your simulation.
You probably knew that random testing repeats, but you probably didn’t know by how much. It turns out that the redundancy factor is expressed by the equation T = N ln N + C, where T is the number of tests that must be generated to achieve 100% coverage of N different cases, and C is a small constant. So, using the natural logarithm of 576, we can calculate that, given equally weighted cases, the random simulation will require an average of about 3661 tests to achieve our goal. Sometimes it’s more, sometimes it’s less, given the unpredictability of random testing. In the meantime, the iTBA simulation achieves 100% coverage in just 576 tests, a reduction of 84%.
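If you want to check the arithmetic yourself, here is a short Python sketch. It is just an illustration of the formula above (it has nothing to do with the tools themselves): it evaluates T = N ln N for the 576-case example and then sanity-checks it with a quick random-coverage simulation.

```python
import math
import random

N = 576                                  # unique, equally weighted test cases

# Expected number of purely random tests to reach 100% coverage: T = N ln N + C
t_random = N * math.log(N)
print(round(t_random))                   # ~3661 tests for CRT, vs. exactly 576 for iTBA
print(round(100 * (1 - N / t_random)))   # ~84 percent fewer tests

# Quick Monte Carlo check: draw random cases until all N have been hit at least once
def tests_to_full_coverage(n, rng):
    covered, count = set(), 0
    while len(covered) < n:
        covered.add(rng.randrange(n))
        count += 1
    return count

rng = random.Random(1)
runs = [tests_to_full_coverage(N, rng) for _ in range(100)]
print(sum(runs) // len(runs))            # lands in the same ballpark (the constant term adds a bit)
```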
Experiment at Home
You probably already have an excellent six-sided demonstration vehicle somewhere at home. Try rolling a single die repeatedly, simulating a random test generator. How many times does it take you to “cover” all six unique test cases? T = N ln N + C says it should take about 11 times or more. You might get lucky and hit 8, 9, or 10. But chances are you’ll still be rolling at 11, 12, 13, or even more. If you used iTBA to generate the test cases, it would take you six rolls, and you’d be done. Now in this example, getting to coverage twice as fast may not be that exciting to you. But if you extrapolate these results to your RTL design’s test plan, the savings can become quite interesting.
So here’s a quick question for you. What’s the minimum number of unique functional test cases needed to realize at least a 10X gain in efficiency with iTBA compared to what you could get with CRT? (Hint – You can figure it out with three taps on a scientific calculator.) It’s probably a pretty small number compared to the number of functions your design can actually perform, meaning that there’s at least a 10X improvement in testing efficiency awaiting you with iTBA.
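If you would rather not hunt for a die or a calculator, the little Python sketch below simulates the rolling experiment and also works out the 10X question, under the assumption (from the T = N ln N + C discussion above) that the CRT-to-iTBA ratio is roughly (N ln N)/N = ln N:

```python
import math
import random

# Roll a fair die until all six faces have appeared at least once (the CRT analogue);
# an iTBA-style generator would need exactly six rolls.
def rolls_to_cover_all_faces(rng):
    seen, rolls = set(), 0
    while len(seen) < 6:
        seen.add(rng.randint(1, 6))
        rolls += 1
    return rolls

rng = random.Random()
trials = [rolls_to_cover_all_faces(rng) for _ in range(1000)]
print(sum(trials) / len(trials))   # usually well over six rolls, often a dozen or more

# The 10X question: if the gain is roughly ln N, then ln N >= 10 means N >= e^10
print(round(math.exp(10)))         # ~22026 unique, equally weighted test cases
```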
Hopefully, at this point, you’re at least a little bit interested. Like some others, you may also be skeptical. Could this technology really offer a 10X improvement in functional verification? Check out the Verification Academy at this site – http://www.verificationacademy.com/course-modules/dynamic-verification/intelligent-testbench-automation – to see the first academy sessions that will introduce you to Intelligent Testbench Automation. Or you can even Google “intelligent testbench automation” and see what you find. Thanks for reading . . .
This blog is a continuation of a series of blogs, which present the highlights from the 2010 Wilson Research Group Functional Verification Study (for a background on the study, click here).
In my previous blog (Part 8 click here), I focused on some of the 2010 Wilson Research Group findings related to design and verification language trends. In this blog, I present verification techniques and technologies adoption trends, as identified by the 2010 Wilson Research Group study.
One of the claims I made in the prologue to this series of blogs is that we are seeing a trend of increased industry adoption of advanced functional verification techniques, which is supported by the data I present in this blog. An interesting question you might ask is “what is driving this trend?” In some of my earlier blogs (click here for Part 1 and Part 2) I showed an industry trend in that design complexity is increasing in terms of design sizes and number of embedded processors. In addition, I’ve presented trend data that showed an increase in total project time and effort spent in verification (click here for Part 4). My belief is that the industry is being forced to mature its functional verification process to address increasing complexity and effort.
Simulation Techniques Adoption Trends
Let’s begin by comparing the adoption trends related to various simulation techniques as shown in Figure 1, where the data from the 2007 Far West Research study is shown in blue and the data from the 2010 Wilson Research Group study is shown in green.
Figure 1. Simulation-based technique adoption trends
You can see that the study finds the industry increasing its adoption of various functional verification techniques.
For example, in 2007, the Far West Research Group found that only 48 percent of the industry performed code coverage. This surprised me. After all, HDL-based code coverage is a technology that has been around since the early 1990s. However, I did informally verify the 2007 results through numerous customer visits and discussions. In 2010, we see that the industry adoption of code coverage has increased to 72 percent.
Now, a side comment: In this blog, I do not plan to discuss either the strengths or weaknesses of the various verification techniques that were studied (such as code coverage, whose strengths and weaknesses have been argued and debated for years)—perhaps in a separate series of future blogs. In this series of blogs, I plan to focus only on the findings from the 2010 Wilson Research Group study.
In 2007, the Far West Research Group study found that 37 percent of the industry had adopted assertions for use in simulation. In 2010, we find that industry adoption of assertions had increased to 72 percent. I believe that the maturing of the various assertion language standards has contributed to this increased adoption.
In 2007, the Far West Research Group study found that 40 percent of the industry had adopted functional coverage for use in simulation. In 2010, the industry adoption of functional coverage had increased to 69 percent. Part of this increase in functional coverage adoption has been driven by the increased adoption of constrained-random simulation, since you really can’t effectively do constrained-random simulation without doing functional coverage.
In fact, we see from the Far West Research Group 2007 study that 41 percent of the industry had adopted constrained-random simulation techniques. In 2010, the industry adoption had increased to 69 percent. I believe that this increase in constrained-random adoption has been driven by the increased adoption of the various base-class library methodologies, as I presented in a previous blog (click here for Part 8).
Formal Property Checking Adoption Trends
Figure 2 shows the trends in terms of formal property checking adoption by comparing the 2007 Far West Research study (in blue) with the 2010 Wilson Research Group study (in green). The industry adoption of formal property checking has increased by an amazing 53 percent in the past three years. Again, this is another data point that supports my claim that the industry is starting to mature its adoption of advanced functional verification techniques.
Another way to analyze the results is to partition a project’s adoption of formal property checking by design size, as shown in Figure 3, where less than 1M gates is shown in blue, 1M to 20M gates is shown in orange, and greater than 20M gates is shown in red. Obviously, the larger the design, the more effort is generally spent in verification. Hence, it’s not too surprising to see the increased adoption of formal property checking for larger designs.
Figure 3. Trends in formal property checking adoption by design size
Acceleration/Emulation & FPGA Prototyping Adoption Trends
The amount of time spent in a simulation regression is an increasing concern for many projects. Intuitively, we tend to think that design size influences simulation performance. However, there are two equally important factors that must be considered: the number of tests in the simulation regression suite, and the length of each test in terms of clock cycles.
For example, a project might have a small or moderate-sized design, yet verification of this design requires a long-running test (e.g., a video input stream). Hence, in this example, the simulation regression time is influenced by the number of clock cycles required for the test, and not necessarily by the design size itself.
In blog 6 of this series, I presented industry data on the number of tests created to verify a design in simulation (i.e., the regression suite). The findings obviously varied dramatically, from a handful of tests to thousands of tests in a regression suite, depending on the design. In Figure 4 below, I show the findings for a project’s regression time, which also varies dramatically, from short regression times for some projects to multiple days for other projects. The median simulation regression time in 2010 is about 16 hours.
Figure 4. Simulation regression time trends
One technique that is often used to speed up a simulation regression (due to very long tests, lots of tests, or both) is hardware-assisted acceleration or emulation. In addition, FPGA prototyping, while historically used as a platform for software development, has recently served a role in SoC integration validation.
Figure 5 shows the adoption trend for both HW assisted acceleration/emulation and FPGA prototyping by comparing the 2007 Far West Research study (in blue) with the 2010 Wilson Research Group study (in green). We have seen a 75 percent increase in the adoption of HW assisted acceleration/emulation over the past three years.
Figure 5. HW assisted acceleration/emulation and FPGA Prototyping trends
I was surprised to see that the adoption of FPGA prototyping did not increase over the past three years, considering that we found an increase in SoC development over the same period. So, I decided to dig deeper into this anomaly.
In Figure 6, you’ll see that I partitioned the HW assisted acceleration/emulation and FPGA prototyping adoption data by design size: less than 1M gates (in blue), 1M to 20M gates (in yellow), and greater than 20M gates (in red). This revealed that the adoption of HW assisted acceleration/emulation continued to increase as design sizes increased. However, the adoption of FPGA prototyping rapidly dropped off as design sizes increased beyond 20M gates.
Figure 6. Acceleration/emulation & FPGA Prototyping adoption by design size
So, what’s going on? One problem with FPGA prototyping of large designs is that there is an increased engineering effort required to partition designs across multiple FPGAs. In fact, what I have found is that FPGA prototyping of very large designs is often a major engineering effort in itself, and that many projects are seeking alternative solutions to address this problem.
In my next blog, I plan to present the 2010 Wilson Research Group study findings related to various project results in terms of schedule and required spins before production.
Testbench Characteristics and Simulation Strategies (Continued)
This blog is a continuation of a series of blogs, which present the highlights from the 2010 Wilson Research Group Functional Verification Study (for a background on the study, click here).
In my previous blog (Part 6 click here), I focused on some of the 2010 Wilson Research Group findings related to testbench characteristics and simulation strategies. In this blog, I continue this discussion, and present additional findings specifically related to the number of tests created by a project, as well as the length of time spent in a simulation regression run for various projects.
Percentage of directed tests created by a project
Let’s begin by examining the percentage of directed tests that were created by a project, as shown in Figure 1. Here, we compare the results for FPGA designs (in grey) and non-FPGA designs (in green).
Figure 1. Percentage of directed testing by a project
Obviously, the study results are all over the spectrum, where some projects create more directed tests than others. The study data revealed that a higher percentage of FPGA projects rely exclusively on directed tests.
Figure 2. Median percentage of directed testing by a project by region
You can see from the results that India seems to spend less time focused on directed testing compared with other regions, which means that India spends more time with alternative stimulus generation methods (such as constrained-random, processor-driven, or graph-based techniques).
Let’s look at the percentage of directed testing by design size, for non-FPGA projects. The median results are shown in Figure 3, where the design size partitions are represented as: less than 1M gates (in blue), 1M to 20M gates (in orange), and greater than 20M gates (in red).
Figure 3. Median percentage of directed testing by a project by design size
As design sizes increase, there is less reliance on directed testing.
Percentage of project tests that were random or constrained random
Next, let’s look at the percentage of tests that were random or constrained random across multiple projects. Figure 4 compares the results between FPGA designs (in grey) and non-FPGA designs (in green).
Figure 4. Percentage of random or constrained-random testing by a project
And again, the study results indicate that projects are all over the spectrum in their usage of random or constrained-random stimulus generation. Some projects do more, while other projects do less.
Figure 5 shows the median percentage of random or constrained-random testing by region: North America (in blue), Europe/Israel (in green), Asia (in yellow), and India (in red).
Figure 5. Median percentage of random or constrained-random testing by region
You can see that the median percentage of random or constrained-random testing by a project is higher in India than in other regions of the world.
Let’s look at the percentage of random or constrained-random testing by design size, for non-FPGA projects. The median results are shown in Figure 6, where the design size partitions are represented as: less than 1M gates (in blue), 1M to 20M gates (in orange), and greater than 20M gates (in red).
Figure 6. Median percentage of random or constrained-random testing by design size
Smaller designs tend to do less random or constrained-random testing.
Simulation regression time
Now, let’s look at the time that various projects spend in a simulation regression. Figure 7 compares the simulation regression time between FPGA designs (in grey) and non-FPGA designs (in green) from our recent study.
Figure 7. Time spent in a simulation regression by project
And again, we see that FPGA projects tend to spend less time in a simulation regression run compared to non-FPGA projects.
Figure 8 shows the trends in terms of simulation regression time by comparing the 2007 Far West Research study (in blue) with the 2010 Wilson Research Group study (in green). There really hasn’t been a significant change in the time spent in a simulation regression within the past three years. You will find that some teams spend days or even weeks in a regression. Yet, the industry median is about 16 hours for both the 2007 and 2010 studies.
Figure 8. Simulation regression time trends
Figure 9 shows the median simulation regression time by region: North America (in blue), Europe/Israel (in green), Asia (in yellow), and India (in red).
Figure 9. Median simulation regression time by region
Finally, Figure 10 shows the median simulation regression time by design size, where the design size partitions are represented as: less than 1M gates (in blue), 1M to 20M gates (in orange), and greater than 20M gates (in red).
Figure 10. Median simulation regression time by design size
Obviously, project teams working on smaller designs spend less time in a simulation regression run compared to project teams working on larger designs.
In my next blog (click here), I’ll focus on design and verification language trends, as identified by the 2010 Wilson Research Group study.
Testbench Characteristics and Simulation Strategies
This blog is a continuation of a series of blogs that present the highlights from the 2010 Wilson Research Group Functional Verification Study (for background on the study, click here).
In my previous blog (click here), I focused on the controversial topic of effort spent in verification. In this blog, I focus on some of the 2010 Wilson Research Group findings related to testbench characteristics and simulation strategies. Although I am shifting the focus away from verification effort, I believe that the data I present in this blog is related to the discussion of overall effort and really needs to be considered.
Time spent in top-level simulation
Let’s begin by examining the percentage of time a project spends in top-level simulation today. Figure 1 compares the time spent in top-level simulation between FPGA designs (in grey) and non-FPGA designs (in green) from our recent study.
Figure 1. Percentage time spent in top-level simulation
Obviously, the study results show that projects are all over the spectrum, where some projects spend less time in top-level simulation, while others spend much more.
I decided to partition the data by design size (using the same partition that I’ve presented in previous blogs), and then compare the time spent in top-level verification by design size for Non-FPGA designs. Figure 2 shows the results, where the design size partitions are represented as: less than 1M gates (in blue), 1M to 20M gates (in orange), and greater than 20M gates (in red).
Figure 2. Percentage of time spent in top-level simulation by design size
You can see from the results that, unlike previous analyses where partitioning the data by design size revealed clear differences, design size didn’t seem to matter here for designs between 1M and 20M gates and designs greater than 20M gates. That is, both groups seem to spend about the same amount of time in top-level verification.
Time spent in subsystem-level simulation
Next, let’s look at the percentage of time that a project spends in subsystem-level simulation (e.g., clusters, blocks, etc.). Figure 3 compares the time spent in subsystem-level simulation between FPGA designs (in grey) and non-FPGA designs (in green) from our recent study.
Figure 3. Percentage of time spent in subsystem-level simulation
Non-FPGA designs seem to spend more time in subsystem-level simulation when compared to FPGA designs. Again, this is probably not too surprising to anyone familiar with traditional FPGA development, although, as the size and complexity of FPGA designs continue to increase, I suspect we will see more subsystem-level verification. Unfortunately, we can’t present trends for FPGA designs, as I mentioned in the background blog for the Wilson Research Group Functional Verification Study. However, future studies should be able to leverage the data we collected in the Wilson Research Group study and present FPGA trends.
Figure 4 shows the Non-FPGA results for the time spent in subsystem-level verification, where the data was partitioned by design size: less than 1M gates (in blue), 1M to 20M gates (in orange), and greater than 20M gates (in red). There doesn’t appear to be any big surprise when viewing the data with this partition.
Figure 4. Percentage of time spent in subsystem-level simulation by design size
Number of tests created to verify the design in simulation
Now let’s look at the percentage of projects in terms of the number of tests they create to verify a design using simulation. Figure 5 compares the number of tests created to verify a design between FPGA designs (in grey) and non-FPGA designs (in green) from our recent study.
Figure 5. Number of tests created to verify a design in simulation
Again, the data is not too surprising. We see that engineers working on FPGA designs tend to create fewer tests to verify their design in simulation.
Figure 6 shows the trend in terms of the number of tests created to verify a design in simulation by comparing the 2007 Far West Research study (in blue) with the 2010 Wilson Research Group study (in green). You will note that there has been an increase in the number of tests in the last three years. In fact, although there is a wide variation in the number of tests created by a project, the median calculation trend has grown from 342 to 398 tests.
Figure 6. Number of tests created to verify a design in simulation trends
I also did a regional analysis of the number of tests a non-FPGA project creates to verify a design in simulation, and the median results are shown in Figure 7: North America (in blue), Europe/Israel (in green), Asia (in yellow), and India (in red). You can see that the median calculation is higher in Asia and India than in North America and Europe/Israel.
Figure 7. Number of tests created to verify a design in simulation by region
Finally, I did a design size analysis of the number of tests a non-FPGA project creates to verify a design in simulation, and the median results are shown in Figure 8, where the design size partitions are represented as: less than 1M gates (in blue), 1M to 20M gates (in orange), and greater than 20M gates (in red).
Figure 8. Number of tests created to verify a design in simulation by design size
Obviously, larger designs require more tests to verify them in simulation.
In my next blog (click here), I’ll continue focusing on testbench characteristics and simulation strategies, and I’ll present additional data from the 2010 Wilson Research Group study.
What does the word performance mean to you?
Speed? Well, obviously speed is an important characteristic. Yet, if the team is running in the wrong direction, it really doesn’t matter how fast they are going.
How about accomplishment? After all, we do assess an employee’s or project team’s accomplishments using a process we refer to as a performance review.
What about efficiency, which is a ratio comparing the amount of work accomplished to the effort or cost put into the process? Certainly, from a project perspective, effort and cost should be an important consideration.
Finally, perhaps quality of results is a characteristic we should consider. After all, poor results are of little use.
From a verification perspective, I think it is necessary to focus on the real problem, that is, the project’s verification objectives:
- Reduce risk: find more bugs sooner
- Know when we are done: increase confidence
- Improve project productivity and efficiency: get more work done
Now, whenever I hear the phrase “get more work done,” I’m often reminded of Henry Ford, who was the founder of the Ford Motor Company. Henry is probably best known as the father of the modern assembly lines used in mass production, and he revolutionized transportation specifically, and American industry in general. Henry once said, “If I had asked people what they wanted, they would have said faster horses.” This quote provides a classic example of the importance of focusing on the real problem, and thinking outside the box.
In fact, Henry Ford’s faster horses example is often used in advanced courses on product marketing and requirements gathering. The typical example of focusing on the real problem generally involves a dialogue between Henry and a farmer, as follows:
Henry: So, why do you want faster horses?
Farmer: I need to get to the store in less time.
Henry: And why do you need to get to the store in less time?
Farmer: Because I need to get more work done on the farm.
As you can see, the farmer really didn’t need faster horses; he needed a solution that would allow him to get more work done on the farm. Faster horses are certainly one solution, but thinking outside the box, there are other more efficient solutions that would yield higher quality results.
Now, before I move on to discuss ways to improve verification performance, I would like to give one more example of thinking outside the box to improve performance. And for this example, I’ve chosen the famous Intel 8088 microprocessor. I was just an engineering student when the 8088 was released in 1979, and like so many geeks of my generation, I couldn’t wait to get my hands on one.
The 8088 had a maximum clock speed of approximately 5 MHz. It took multiple clock cycles to complete an instruction (on average, about 15). Furthermore, a 16-bit multiplication required about 80 clock cycles. So the question is, how could we improve the 8088’s performance to get more work done?
Well, one approach would be to speed up the clock. However, this would only provide incremental improvements compared to what could be achieved by thinking outside the box and architecting a more clever solution that took advantage of Moore’s Law. In fact, over time, that is exactly what happened.
First, the multiplier performance can be improved by moving to a single-cycle multiplier, such as a Wallace Tree, Baugh-Wooley, or Dadda architecture. These architectures calculate multiple partial products in parallel. Second, the average number of clock cycles per instruction can be reduced by moving to pipelined architectures, where multiple instruction executions overlap, giving a net effect of one instruction completing every clock cycle (as an ideal case example).
The point is that we have moved to solutions that get more work done by “increasing the amount of work per cycle,” instead of just a brute force approach to the problem.
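As a rough, back-of-the-envelope illustration of that point, here is a quick calculation using the approximate 8088 figures quoted above and an idealized one-instruction-per-cycle pipeline (real machines fall short of the ideal, of course):

```python
# Approximate figures quoted above
clock_hz = 5_000_000          # ~5 MHz clock
cpi_original = 15             # ~15 clocks per instruction, on average
mul_cycles = 80               # ~80 clocks for a 16-bit multiply

mips_original  = clock_hz / cpi_original / 1e6   # ~0.33 million instructions per second
mips_pipelined = clock_hz / 1.0 / 1e6            # ~5.0 MIPS with an ideal 1-CPI pipeline

print(round(mips_pipelined / mips_original))     # ~15x more work per second at the same clock
print(mul_cycles)                                # a single-cycle multiplier turns ~80 clocks into 1
```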
In my next blog, I’ll discuss why performance even matters, followed by thoughts on improving verification performance.
PROLOGUE: Over the weekend, I was thinking about a recent visit I had with an advanced ASIC team manager who told me that they had optimized most aspects of their verification flow to such an extent that most of their remaining effort was spent in debugging. So, I decided to work up a draft blog on debugging. However, this morning, when I was preparing to post my blog, I noticed that Richard Goering had beaten me to the punch and had posted a blog on debugging about two weeks ago. Having reviewed his blog, I think we are both in agreement—debugging is a huge bottleneck in the flow. I think that debugging must be looked at as a solution, and not a tool feature. However, there are many aspects of debugging beyond traditional simulation triage of design models and testbench components—ranging from embedded software, to power and performance analysis, to code and functional coverage closure, etc. There really isn’t a unified solution—debugging must be considered an integral part of each aspect of design and verification.
ACT 1: My original blog from this weekend….
“Bloody instructions, which, being taught, return to plague the inventor….”
William Shakespeare, Macbeth, act 1, scene 7
All right, even Shakespeare had issues with debugging. But before I get into all of that, let me set the stage with a little background info…
First, let me say that I love my job. My role at Mentor Graphics consists of a diverse set of tasks. Yet, probably my most rewarding work involves studying and assessing today’s electronics industry. The objective of this work is to help Mentor identify discontinuities in today’s EDA solutions, as well as understand emerging verification challenges. But what I like most about my work is that it allows me to participate in detailed discussions with various project teams and multiple industry thought leaders across multiple market segments.
A couple of years ago, I was performing a detailed verification assessment for an ASIC project team. As I usually do when I conduct these kinds of assessments, I asked the team what was the biggest bottleneck in their flow. One enthusiastic, young engineer started waving his hand vigorously at me and said: “I know, I know…..it’s layoffs!” Okay, so after the group recovered itself from an outburst of nervous chuckles, I pressed forward with my question. It turned out that the group unanimously agreed that debugging was generally a significant, yet often underestimated, effort associated with their flow. Perhaps this shouldn’t surprise anyone when you consider that the Collett International 2003 IC/ASIC Design Closure study found that 42 percent of the verification effort was consumed in writing tests and creating testbenches, while 58 percent was consumed in debugging. More recently, a 2007 Far West Research study, chartered by Mentor Graphics, found that 52 percent of a dedicated verification engineer’s effort was consumed in debugging.
The problem with debugging is that the effort is not always obvious, since it applies to all aspects of the design and verification flow and often involves many different stakeholders: for example, architectural modeling, RTL coding, testbench implementation, transaction modeling, embedded software, coverage modeling and closure, and on and on and on. What makes it particularly insidious is that it is extremely difficult to predict or schedule. In fact, what you will find is that a mature organization relies on historical data extracted from its previous projects’ debugging effort metrics in order to estimate its future project effort. However, due to the unpredictable nature of debugging, history doesn’t always repeat itself. And unfortunately, there is no silver bullet in terms of a single debugging tool or strategy. Multiple solutions are required, ranging from RTL implementation debugging, to OVM object-oriented testbench component debugging, to embedded software debugging capabilities, to coverage closure. Fortunately, multiple good solutions have emerged, ranging from assertions for reducing RTL debugging effort, to SystemVerilog dynamic structures analysis and debugging, to processor-driven verification debugging solutions for embedded software verification, to the intelligent testbench for automating coverage closure.
EPILOGUE: I opened this blog humorously with a quote from Shakespeare. Yet, today’s debugging effort is no laughing matter, and it contributes significantly to a project’s overall design and verification effort. I’ll conclude this blog with a sobering quote from Brian Kernighan (the K in the K&R C language) who once pointed out:
Debugging is twice as hard as writing the code in the first place.
I’m curious about your thoughts. Does debugging consume a significant amount of effort in your flow? If not, what is the biggest bottleneck in your flow?