Graph-Based Intelligent Testbench Automation
While intelligent testbench automation is still reasonably new when measured in EDA years, this graph-based verification technology is being adopted by more and more verification teams every day. And the interest is global: verification teams from Europe, North America, and the Pacific Rim are now using iTBA to help them verify their newest electronic designs in less time and with fewer resources. (If you haven’t adopted it yet, your competitors probably have.) If you have yet to learn how this new technology can help you achieve higher levels of verification despite increasing design complexity, I’d suggest you check out a recent article in the June 2012 issue of Verification Horizons titled “Is Intelligent Testbench Automation For You?” The article explains where iTBA is best applied and will produce optimal results, and how design applications with a large verification space, functionally oriented coverage goals, and unbalanced conditions can often see a 100X acceleration in coverage closure. For more detail about these and other considerations, you’ll have to read the article.
And while you’re there, you might also notice that the entire June 2012 issue of Verification Horizons is devoted to helping you achieve the highest levels of coverage as efficiently as possible. Editor and fellow verification technologist Tom Fitzpatrick succinctly adapts Murphy’s Law to verification, writing “If It Isn’t Covered, It Doesn’t Work”. And any experienced verification engineer (or manager) knows just how true this is, making it critical that we thoughtfully prioritize our verification goals, and achieve them as quickly and efficiently as possible. The June 2012 issue offers nine high quality articles, with a particular focus on coverage.
Another proof that iTBA is catching on globally is the upcoming TVS DVClub event being held next Monday, 2 July 2012, in Bristol, Cambridge, and Grenoble. The title of the event is “Graph-Based Verification”, and three industry experts will discuss different ways you can take advantage of what graph-based intelligent testbench automation has to offer. My colleague and fellow verification technologist Staffan Berg leads off the event with a proof of his own: he will present how graph-based iTBA can significantly shorten your time-to-coverage. Staffan will show you how to use graph-based verification to define your stimulus space and coverage goals, highlighting examples from some of the verification teams that have already adopted this technology, as I mentioned above. He’ll also show how you can introduce iTBA into your existing verification environment, so you can realize these benefits without disrupting your existing process. I have already registered and plan to attend, but I’ll have to do some adapting of my own, as the event runs from 11:30am to 2:00pm BST in the UK. But I’ve seen Staffan present before, and both he and intelligent testbench automation are worth getting up early for. Hope to see you there, remotely speaking.
Instant Replay Offers Multiple Views at Any Speed
If you’ve watched any professional sporting event on television lately, you’ve seen the pressure put on referees and umpires. They have to make split-second decisions in real-time, having viewed ultra-high-speed action just a single time. But watching at home on television, we get the luxury of viewing multiple replays of events in question in high-definition super-slow-motion, one frame at a time, and even in reverse. We also get to see many different views of these controversial events, from the front, the back, the side, up close, or far away. Sometimes it seems there must be twenty different cameras at every sporting event.
Wouldn’t it be nice if you could apply this same principle to your SoC level simulations? What if you had instant replay from multiple viewing angles in your functional verification toolbox? It turns out that such a technology indeed exists, and it’s called “Codelink Replay”.
Codelink Replay enables verification engineers to use instant replay with multiple viewing angles to quickly and accurately debug even the most complex SoC level simulation failures. This is becoming increasingly important: as Harry Foster’s blog series about the 2010 Wilson Research Group Functional Verification Study shows, over half of all new design starts now contain multiple embedded processors. If you’re responsible for verifying a design with multiple embedded cores such as ARM’s new Cortex A15 and Cortex A7 processors, this technology will have a dramatic impact for you.
Multi-Core SoC Design Verification
Multi-core designs present a whole new level of verification challenges. Achieving functional coverage of your IP blocks at the RTL level has become merely a pre-requisite now – as they say, “necessary but not sufficient”. Welcome to the world of SoC level verification, where you use your design’s software as a testbench. After all, a testbench’s role is to mimic the design’s target environment in order to test its functionality, and what better way to accomplish this than to execute the design’s software against its hardware, albeit during simulation?
Some verification teams have already dabbled in this world. Perhaps you’ve written a handful of tests in C or assembly code, loaded them into memory, initialized your processor, and executed them. This is indeed the best way to verify SoC level functionality including power optimization management, clocking domain control, bus traffic arbitration schemes, driver-to-peripheral compatibility, and more, as none of these aspects of an SoC design can be appropriately verified at the RTL IP block level.
However, imagine running a software testbench program only to see that the processor stopped executing code two hours into the simulation. What do you do next? Debugging “software as a testbench” simulation can be daunting. Especially when the software developers say “the software is good”, and the hardware designers say “the hardware is fine”. Until recently, you could count on weeks to debug these types of failures. And the problem is compounded with today’s SoC designs with multiple processors running software test programs from memory.
This is where Codelink Replay comes in. It enables you to replay your simulation in slow motion or fast forward, while observing many different views including hardware views (waveforms, CPU register values, program counter, call stack, bus transactions, and four-state logic) and software views (memory, source code, decompiled code, variable values, and output) – all remaining in perfect synchrony, whether you’re playing forward or backward, single-step, slow-motion, or fast speed. So when your simulation fails, just start at that point in time, and replay backwards to the root of the problem. It’s non-invasive. It doesn’t require any modifications to your design or to your tests.
Debugging SoC Designs Quickly and Accurately
So if you’re under pressure to make fast and accurate decisions when your SoC level tests fail, you can relate to the challenges faced by professional sports referees and umpires. But with Codelink Replay, you can be assured that there are about 20 different virtual “cameras” tracing and logging your processors during simulation, giving you the same instant replay benefit we get when we watch sporting events on television. If you’re interested in learning more about this new technology, check out the web seminar at the URL below, which introduces Codelink Replay and shows how it supports the entire ARM family of processors, including even the latest Cortex A-Series, Cortex R-Series, and Cortex M-Series.
Who Doesn’t Like Faster?
In my last blog post I introduced a new technology called Intelligent Testbench Automation (“iTBA”). It’s generating lots of interest in the industry because, just like constrained random testing (“CRT”), it can generate tons of tests for functional verification. But it has unique efficiencies that allow you to achieve coverage 10X to 100X faster. And who doesn’t like faster? Well, since the last post I’ve received many questions from interested readers, but one seems to stand out enough to “cover” it here in a follow-up post.
Several readers commented that they like the concept of randomness because it can generate sequences of sequences; perhaps even a single sequence executed multiple times in a row.¹ And they were willing to suffer some extra redundancy as an unfortunate but necessary trade-off.
While this benefit of random testing is understandable, there’s no need to worry, as iTBA has you covered here. If you checked out this link – http://www.verificationacademy.com/infact2 – you found an interactive example with a side-by-side comparison of CRT and iTBA. The intent of the example was to compare what happens when you use CRT to generate tests randomly versus when you use iTBA to generate tests without redundancy.
However, in a real application of iTBA, it’s equally likely that you’d manage your redundancy rather than eliminate it completely. We’ve now improved the on-line illustration to include two (of the many) additional features of iTBA.
Coverage First – Then Random
One is the ability to run a simulation with high coverage non-redundant tests first, followed immediately by random tests. Try it again, but this time check the box labeled “Run after all coverage is met”. What you’ll find is that iTBA achieves your targeted coverage in the first 576 tests, at which time CRT will have achieved somewhere around 50% coverage at best. But notice that iTBA doesn’t stop at 100% coverage. It continues on, generating tests randomly. By the time CRT gets to about 70% coverage, iTBA has achieved 100%, and has also generated scores of additional tests randomly. You can have the best of both worlds. You can click on the “suspend”, “resume”, and “show chart” buttons during the simulation to see the progress of each.
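The behavior of this “coverage first, then random” mode is easy to model. Here’s a small Python sketch – my own illustration of the idea, not the actual inFact engine – that visits every test case exactly once in a randomized order, then keeps generating purely at random:

```python
import random

def coverage_first_then_random(num_cases, total_tests, seed=None):
    """Yield test-case indices: each of num_cases exactly once in a
    randomized order (the non-redundant sweep), then purely random
    cases until total_tests have been generated."""
    rng = random.Random(seed)
    ordered = list(range(num_cases))
    rng.shuffle(ordered)            # non-redundant sweep reaches 100% coverage
    for case in ordered:
        yield case
    for _ in range(total_tests - num_cases):
        yield rng.randrange(num_cases)   # then continue generating randomly

tests = list(coverage_first_then_random(576, 1000, seed=1))
assert len(set(tests[:576])) == 576      # full coverage after exactly 576 tests
```

Run against the 576-case grid from the demo, the first 576 tests hit every case exactly once, and everything after that is conventional random stimulus.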
Interleave Coverage and Random
Two is the ability to run a simulation randomly, but to clip the redundancy rather than eliminate it. Move the “inFact coverage goal” bar to set the clip level (try 2, 3, or 4), and restart the simulation. Now you’ll see iTBA generating random tests, but managing the redundancy to whatever level you chose. The result is a simulation with a managed amount of redundancy that still achieves 100% of your target coverage, including every corner case.
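Redundancy clipping can be modeled the same way. In this Python sketch (again, my own illustration rather than the tool itself), tests are drawn at random, but any case that has already been hit `clip` times is discarded, so the run reaches 100% coverage with a bounded amount of repetition:

```python
import random

def clipped_random(num_cases, clip, seed=None):
    """Generate random test cases, discarding any case already seen
    `clip` times; stop once every case has reached the clip level."""
    rng = random.Random(seed)
    counts = [0] * num_cases
    while min(counts) < clip:            # until every case hits the clip level
        case = rng.randrange(num_cases)
        if counts[case] < clip:          # clip, rather than eliminate, redundancy
            counts[case] += 1
            yield case

tests = list(clipped_random(24, clip=3, seed=7))
assert len(tests) == 24 * 3              # each case covered exactly clip times
```

With a clip level of 3, every one of the 24 cases is generated exactly three times, in random order – managed redundancy with guaranteed corner-case coverage.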
iTBA generates tons of tests, but lets you decide how much to control them. If you’re interested in learning more about how iTBA can help you achieve your functional verification goals faster, you might consider attending the Tech Design Forum in Santa Clara on September 8th. There’s a track dedicated to achieving coverage closure. Check out this URL for more information: http://www.techdesignforums.com/events/santa-clara/event.cfm
¹ – By the way, if achieving your test goals is predicated on certain specific sequences of sequences, our experts can show you how to create an iTBA graph that will achieve those goals much faster than relying on redundancy. But that’s another story for another time.
If you’ve been to DAC or DVCon during the past couple of years, you’ve probably at least heard of something new called “Intelligent Testbench Automation”. Well, it’s actually not really all that new, as the underlying principles have been used in compiler testing and some types of software testing for the past three decades, but its application to electronic design verification is certainly new, and exciting.
The value proposition of iTBA is fairly simple and straightforward. Just like constrained random testing, iTBA generates tons of stimuli for functional verification. But iTBA is so efficient that it achieves the targeted functional coverage one to two orders of magnitude faster than CRT. So what would you do if you could achieve your current simulation goals 10X to 100X faster?
You could finish your verification earlier, especially when it seems like you’re getting new IP drops every day. I’ve seen IP verification teams reduce their simulations from several days on several CPUs (using CRT) to a couple of hours on a single CPU (with iTBA). No longer can IP designers send RTL revisions faster than we can verify them.
But for me, I’d ultimately use the time savings to expand my testing goals. Today’s designs are so complex that typically only a fraction of their functionality gets tested anyway. And one of the biggest challenges is trading off what functionality to test and what not to test. (We’ll show you how iTBA can help you here, in a future blog post.) Well, if I can achieve my initial target coverage in one-tenth of the time, then I’d use at least part of the time savings to expand my coverage and go after some of the functionality that I originally didn’t think I’d have time to test.
On Line Illustration
If you check out this link – http://www.verificationacademy.com/infact – you’ll find an interactive example with a side-by-side comparison of constrained random testing and intelligent testbench automation. It’s an Adobe Flash demonstration, and it lets you run your own simulations. Try it – it’s fun.
The example shows a target coverage of 576 equally weighted test cases in a 24×24 grid. You can adjust the dials at the top for the number and speed of simulators to use, and then click on “start”. Both CRT and iTBA simulations run in parallel at the same speed, cycle for cycle, and each time a new test case is simulated, the number in its cell is incremented by one and the color of the cell changes. Notice that the iTBA simulation on the right achieves 100% coverage very quickly, covering every unique test case efficiently. The CRT simulation on the left eventually achieves 100% coverage too, but slowly and painfully, with much unwanted redundancy. You can also click on “show chart” to see a coverage chart of your simulation.
You probably knew that random testing repeats, but you probably didn’t know by how much. It turns out that the redundancy factor is expressed by the equation T = N ln N + C, where T is the number of tests that must be generated to achieve 100% coverage of N different cases, and C is a small constant. So using the natural logarithm of 576, we can calculate that given equally weighted cases, the random simulation will require an average of about 3661 tests to achieve our goal. Sometimes it’s more, sometimes it’s less, given the unpredictability of random testing. In the meantime, the iTBA simulation achieves 100% coverage in just 576 tests, a reduction of 84%.
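You can check the arithmetic yourself. The short Python snippet below evaluates T = N ln N for N = 576 (dropping the constant C):

```python
import math

N = 576                          # equally weighted test cases (24 x 24 grid)
T = N * math.log(N)              # T = N ln N + C, ignoring the constant C
print(round(T))                  # -> 3661 random tests, on average
print(round(100 * (1 - N / T)))  # -> 84 (% fewer tests with 576 non-redundant)
```

That is, random generation needs roughly 3661 tests where non-redundant generation needs exactly 576.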
Experiment at Home
You probably already have an excellent six-sided demonstration vehicle somewhere at home. Try rolling a single die repeatedly, simulating a random test generator. How many rolls does it take to “cover” all six unique test cases? T = N ln N + C says it should take about 11 rolls or more. You might get lucky and hit 8, 9, or 10. But chances are you’ll still be rolling at 11, 12, 13, or even more. If you used iTBA to generate the test cases, it would take you six rolls, and you’d be done. Now in this example, getting to coverage twice as fast may not be that exciting to you. But if you extrapolate these results to your RTL design’s test plan, the savings become quite interesting.
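If you’d rather let a computer do the rolling, the Monte Carlo sketch below simulates the experiment. Note that the average comes out around 14 to 15 rolls: the constant C in T = N ln N + C adds a few rolls on top of the 11 from N ln N alone, which is exactly why you’re usually still rolling at 11, 12, 13, or even more.

```python
import random

def rolls_to_cover_all_faces(rng):
    """Roll a fair die until all six faces have appeared; return the count."""
    seen, rolls = set(), 0
    while len(seen) < 6:
        seen.add(rng.randint(1, 6))
        rolls += 1
    return rolls

rng = random.Random(42)
trials = [rolls_to_cover_all_faces(rng) for _ in range(10_000)]
avg = sum(trials) / len(trials)
print(avg)        # averages around 14-15 rolls across many trials
```

An iTBA-style generator, by contrast, would always finish in exactly six rolls.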
So here’s a quick question for you. What’s the minimum number of unique functional test cases needed to realize at least a 10X gain in efficiency with iTBA compared to what you could get with CRT? (Hint – You can figure it out with three taps on a scientific calculator.) It’s probably a pretty small number compared to the number of functions your design can actually perform, meaning that there’s at least a 10X improvement in testing efficiency awaiting you with iTBA.
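For the curious, here are the “three taps”. The efficiency gain of non-redundant generation over random is roughly T/N = ln N, so a 10X gain needs ln N ≥ 10, i.e. N ≥ e¹⁰:

```python
import math

# Gain factor is T/N = ln N (ignoring C), so a 10X gain needs ln N >= 10,
# which means N >= e**10 unique test cases:
print(round(math.exp(10)))    # -> 22026
```

About 22,000 unique test cases – a small number indeed compared to the functionality of most RTL designs.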
Hopefully at this point you’re at least a little bit interested. Or, like some others, perhaps you’re skeptical: could this technology really offer a 10X improvement in functional verification? Check out the Verification Academy at this site – http://www.verificationacademy.com/course-modules/dynamic-verification/intelligent-testbench-automation – to see the first academy sessions that introduce Intelligent Testbench Automation. Or you can even Google “intelligent testbench automation” and see what you find. Thanks for reading . . .
This blog is a continuation of a series of blogs, which present the highlights from the 2010 Wilson Research Group Functional Verification Study (for a background on the study, click here).
In my previous blog (Part 8 click here), I focused on some of the 2010 Wilson Research Group findings related to design and verification language trends. In this blog, I present verification techniques and technologies adoption trends, as identified by the 2010 Wilson Research Group study.
One of the claims I made in the prologue to this series of blogs is that we are seeing a trend of increasing industry adoption of advanced functional verification techniques, which is supported by the data I present in this blog. An interesting question you might ask is “what is driving this trend?” In some of my earlier blogs (click here for Part 1 and Part 2) I showed an industry trend of increasing design complexity in terms of design sizes and number of embedded processors. In addition, I’ve presented trend data that showed an increase in total project time and effort spent in verification (click here for Part 4). My belief is that the industry is being forced to mature its functional verification process to address increasing complexity and effort.
Simulation Techniques Adoption Trends
Let’s begin by comparing the adoption trends related to various simulation techniques as shown in Figure 1, where the data from the 2007 Far West Research study is shown in blue and the data from 2010 Wilson Research Group study is shown in green.
Figure 1. Simulation-based technique adoption trends
You can see that the study finds the industry increasing its adoption of various functional verification techniques.
For example, in 2007, the Far West Research Group found that only 48 percent of the industry performed code coverage. This surprised me. After all, HDL-based code coverage is a technology that has been around since the early 1990s. However, I did informally verify the 2007 results through numerous customer visits and discussions. In 2010, we see that the industry adoption of code coverage has increased to 72 percent.
Now, a side comment: In this blog, I do not plan to discuss either the strengths or weaknesses of the various verification techniques that were studied (such as code coverage, whose strengths and weaknesses have been argued and debated for years)—perhaps in a separate series of future blogs. In this series of blogs, I plan to focus only on the findings from the 2010 Wilson Research Group study.
In 2007, the Far West Research Group study found that 37 percent of the industry had adopted assertions for use in simulation. In 2010, we find that industry adoption of assertions had increased to 72 percent. I believe that the maturing of the various assertion language standards has contributed to this increased adoption.
In 2007, the Far West Research Group study found that 40 percent of the industry had adopted functional coverage for use in simulation. In 2010, the industry adoption of functional coverage had increased to 69 percent. Part of this increase in functional coverage adoption has been driven by the increased adoption of constrained-random simulation, since you really can’t effectively do constrained-random simulation without doing functional coverage.
In fact, we see from the Far West Research Group 2007 study that 41 percent of the industry had adopted constrained-random simulation techniques. In 2010, the industry adoption had increased to 69 percent. I believe that this increase in constrained-random adoption has been driven by the increased adoption of the various base-class library methodologies, as I presented in a previous blog (click here for Part 8).
Formal Property Checking Adoption Trends
Figure 2 shows the trends in terms of formal property checking adoption by comparing the 2007 Far West Research study (in blue) with the 2010 Wilson Research Group study (in green). The industry adoption of formal property checking has increased by an amazing 53 percent in the past three years. Again, this is another data point that supports my claim that the industry is starting to mature its adoption of advanced functional verification techniques.
Another way to analyze the results is to partition a project’s adoption of formal property checking by design size, as shown in Figure 3, where less than 1M gates is shown in blue, 1M to 20M gates is shown in orange, and greater than 20M gates is shown in red. Obviously, the larger the design, the more effort is generally spent in verification. Hence, it’s not too surprising to see the increased adoption of formal property checking for larger designs.
Figure 3. Trends in formal property checking adoption by design size
Acceleration/Emulation & FPGA Prototyping Adoption Trends
The amount of time spent in a simulation regression is an increasing concern for many projects. Intuitively, we tend to think that design size influences simulation performance. However, there are two equally important factors that must be considered: the number of tests in the simulation regression suite, and the length of each test in terms of clock cycles.
For example, a project might have a small or moderate-sized design, yet verification of this design requires a long-running test (e.g., a video input stream). Hence, in this example, the simulation regression time is influenced by the number of clock cycles required for the test, and not necessarily by the design size itself.
In blog 6 of this series, I presented industry data on the number of tests created to verify a design in simulation (i.e., the regression suite). The findings obviously varied dramatically, from a handful of tests to thousands of tests in a regression suite, depending on the design. In Figure 4 below, I show the findings for a project’s regression time, which also varies dramatically, from short regression times for some projects to multiple days for others. The median simulation regression time in 2010 is about 16 hours.
Figure 4. Simulation regression time trends
One technique that is often used to speed up simulation regressions (whether due to very long tests, lots of tests, or both) is hardware-assisted acceleration or emulation. In addition, FPGA prototyping, while historically used as a platform for software development, has recently served a role in SoC integration validation.
Figure 5 shows the adoption trend for both HW assisted acceleration/emulation and FPGA prototyping by comparing the 2007 Far West Research study (in blue) with the 2010 Wilson Research Group study (in green). We have seen a 75 percent increase in the adoption of HW assisted acceleration/emulation over the past three years.
Figure 5. HW assisted acceleration/emulation and FPGA Prototyping trends
I was surprised to see that the adoption of FPGA prototyping did not increase over the past three years, considering that we found an increase in SoC development over the same period. So, I decided to dig deeper into this anomaly.
In Figure 6, you’ll see that I partitioned the HW assisted acceleration/emulation and FPGA prototyping adoption data by design size: less than 1M gates (in blue), 1M to 20M gates (in orange), and greater than 20M gates (in red). This revealed that the adoption of HW assisted acceleration/emulation continued to increase as design sizes increased. However, the adoption of FPGA prototyping rapidly dropped off as design sizes increased beyond 20M gates.
Figure 6. Acceleration/emulation & FPGA Prototyping adoption by design size
So, what’s going on? One problem with FPGA prototyping of large designs is that there is an increased engineering effort required to partition designs across multiple FPGAs. In fact, what I have found is that FPGA prototyping of very large designs is often a major engineering effort in itself, and that many projects are seeking alternative solutions to address this problem.
In my next blog, I plan to present the 2010 Wilson Research Group study findings related to various project results in terms of schedule and required spins before production.
Testbench Characteristics and Simulation Strategies (Continued)
This blog is a continuation of a series of blogs, which present the highlights from the 2010 Wilson Research Group Functional Verification Study (for a background on the study, click here).
In my previous blog (Part 6 click here), I focused on some of the 2010 Wilson Research Group findings related to testbench characteristics and simulation strategies. In this blog, I continue this discussion, and present additional findings specifically related to the number of tests created by a project, as well as the length of time spent in a simulation regression run for various projects.
Percentage directed tests created by a project
Let’s begin by examining the percentage of directed tests that were created by a project, as shown in Figure 1. Here, we compare the results for FPGA designs (in grey) and non-FPGA designs (in green).
Figure 1. Percentage of directed testing by a project
Obviously, the study results are all over the spectrum, where some projects create more directed tests than others. The study data revealed that a higher percentage of FPGA projects rely exclusively on directed tests.
Figure 2. Median percentage of directed testing by a project by region
You can see from the results that India seems to spend less time focused on directed testing compared with other regions, which means that India spends more time with alternative stimulus generation methods (such as constrained-random, processor-driven, or graph-based techniques).
Let’s look at the percentage of directed testing by design size, for non-FPGA projects. The median results are shown in Figure 3, where the design size partitions are represented as: less than 1M gates (in blue), 1M to 20M gates (in orange), and greater than 20M gates (in red).
Figure 3. Median percentage of directed testing by a project by design size
As design sizes increase, there is less reliance on directed testing.
Percentage of project tests that were random or constrained random
Next, let’s look at the percentage of tests that were random or constrained random across multiple projects. Figure 4 compares the results between FPGA designs (in grey) and non-FPGA designs (in green).
Figure 4. Percentage of random or constrained-random testing by a project
And again, the study results indicate that projects are all over the spectrum in their usage of random or constrained-random stimulus generation. Some projects do more, while other projects do less.
Figure 5 shows the median percentage of random or constrained-random testing by region, where North America is shown in blue, Europe/Israel in green, Asia in yellow, and India in red.
Figure 5. Median percentage of random or constrained-random testing by region
You can see that the median percentage of random or constrained-random testing by a project is higher in India than in other regions of the world.
Let’s look at the percentage of random or constrained-random testing by design size, for non-FPGA projects. The median results are shown in Figure 6, where the design size partitions are represented as: less than 1M gates (in blue), 1M to 20M gates (in orange), and greater than 20M gates (in red).
Figure 6. Median percentage of random or constrained-random testing by design size
Smaller designs tend to do less random or constrained-random testing.
Simulation regression time
Now, let’s look at the time that various projects spend in a simulation regression. Figure 7 compares the simulation regression time between FPGA designs (in grey) and non-FPGA designs (in green) from our recent study.
Figure 7. Time spent in a simulation regression by project
And again, we see that FPGA projects tend to spend less time in a simulation regression run compared to non-FPGA projects.
Figure 8 shows the trends in terms of simulation regression time by comparing the 2007 Far West Research study (in blue) with the 2010 Wilson Research Group study (in green). There really hasn’t been a significant change in the time spent in a simulation regression within the past three years. You will find that some teams spend days or even weeks in a regression. Yet, the industry median is about 16 hours for both the 2007 and 2010 studies.
Figure 8. Simulation regression time trends
Figure 9 shows the median simulation regression time by region, where North America is shown in blue, Europe/Israel in green, Asia in yellow, and India in red.
Figure 9. Median simulation regression time by regions
Finally, Figure 10 shows the median simulation regression time by design size, where the design size partitions are represented as: less than 1M gates (in blue), 1M to 20M gates (in orange), and greater than 20M gates (in red).
Figure 10. Median simulation regression time by design size
Obviously, project teams working on smaller designs spend less time in a simulation regression run compared to project teams working on larger designs.
In my next blog (click here), I’ll focus on design and verification language trends, as identified by the 2010 Wilson Research Group study.
Testbench Characteristics and Simulation Strategies
This blog is a continuation of a series of blogs that present the highlights from the 2010 Wilson Research Group Functional Verification Study (for background on the study, click here).
In my previous blog (click here), I focused on the controversial topic of effort spent in verification. In this blog, I focus on some of the 2010 Wilson Research Group findings related to testbench characteristics and simulation strategies. Although I am shifting the focus away from verification effort, I believe that the data I present in this blog is related to the discussion of overall effort and really needs to be considered.
Time spent in top-level simulation
Let’s begin by examining the percentage of time a project spends in top-level simulation today. Figure 1 compares the time spent in top-level simulation between FPGA designs (in grey) and non-FPGA designs (in green) from our recent study.
Figure 1. Percentage time spent in top-level simulation
Obviously, the study results show that projects are all over the spectrum, where some projects spend less time in top-level simulation, while others spend much more.
I decided to partition the data by design size (using the same partition that I’ve presented in previous blogs), and then compare the time spent in top-level verification by design size for Non-FPGA designs. Figure 2 shows the results, where the design size partitions are represented as: less than 1M gates (in blue), 1M to 20M gates (in orange), and greater than 20M gates (in red).
Figure 2. Percentage of time spent in top-level simulation by design size
You can see from the results that, unlike previous analyses where partitioning by design size revealed differences, design size didn’t seem to matter here for designs between 1M and 20M gates and designs greater than 20M gates. That is, they seem to spend about the same amount of time in top-level verification.
Time spent in subsystem-level simulation
Next, let’s look at the percentage of time that a project spends in subsystem-level simulation (e.g., clusters, blocks, etc.). Figure 3 compares the time spent in subsystem-level simulation between FPGA designs (in grey) and non-FPGA designs (in green) from our recent study.
Figure 3. Percentage of time spent in subsystem-level simulation
Non-FPGA designs seem to spend more time in subsystem-level simulation when compared to FPGA designs. Again, this is probably not too surprising to anyone familiar with traditional FPGA development. However, as the size and complexity of FPGA designs continue to increase, I suspect we will see more subsystem-level verification. Unfortunately, we can’t present the trends for FPGA designs, as I mentioned in the background blog for the Wilson Research Group Functional Verification Study. However, future studies should be able to leverage the data we collected in the Wilson Research Group study and present FPGA trends.
Figure 4 shows the non-FPGA results for the time spent in subsystem-level verification, where the data was partitioned by design size: less than 1M gates (in blue), 1M to 20M gates (in orange), and greater than 20M gates (in red). There doesn’t appear to be any big surprise when viewing the data with this partition.
Figure 4. Percentage of time spent in subsystem-level simulation by design size
Number of tests created to verify the design in simulation
Now let’s look at the percentage of projects in terms of the number of tests they create to verify a design using simulation. Figure 5 compares the number of tests created to verify a design between FPGA designs (in grey) and non-FPGA designs (in green) from our recent study.
Figure 5. Number of tests created to verify a design in simulation
Again, the data is not too surprising. We see that engineers working on FPGA designs tend to create fewer tests to verify their design in simulation.
Figure 6 shows the trend in the number of tests created to verify a design in simulation by comparing the 2007 Far West Research study (in blue) with the 2010 Wilson Research Group study (in green). You will note that there has been an increase in the number of tests in the last three years. In fact, although there is a wide variation in the number of tests created by a project, the median has grown from 342 to 398 tests.
Figure 6. Number of tests created to verify a design in simulation trends
I also did a regional analysis of the number of tests a non-FPGA project creates to verify a design in simulation, and the median results are shown in Figure 7, with North America in blue, Europe/Israel in green, Asia in yellow, and India in red. You can see that the median is higher in Asia and India than in North America and Europe/Israel.
Figure 7. Number of tests created to verify a design in simulation by region
Finally, I did a design size analysis of the number of tests a non-FPGA project creates to verify a design in simulation, and the median results are shown in Figure 8, where the design size partitions are represented as: less than 1M gates (in blue), 1M to 20M gates (in orange), and greater than 20M gates (in red).
Figure 8. Number of tests created to verify a design in simulation by design size
Obviously, larger designs require more tests to verify them in simulation.
In my next blog (click here), I’ll continue focusing on testbench characteristics and simulation strategies, and I’ll present additional data from the 2010 Wilson Research Group study.
In an earlier blog post, I discussed an OVM-based sequence layering technique that Mentor verification technologists created and presented at DVCon 2010. This package has been updated and tested to work with UVM 1.0 EA and is ready for download.
As a reminder, the UVM Layering 1.0 Package, like the OVM one, provides the means to add layers of tests (sequences) without modifying the underlying testbench and without extending components or using the factory to override implementations. The package also includes the DVCon paper and presentation that describe it in more detail, in case you did not attend DVCon.
Users have found that layered sequences can make verification life easier, since sequences and sequencers are natively parallel and have arbitration and other communication process hooks already built in. The package is a companion to the UVM 2.0 Register Package, which was also updated from OVM to UVM.
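To illustrate the general layering idea (with illustrative names only, not the actual API of the UVM Layering 1.0 Package), a minimal sketch might look like this: a translator sequence runs on the lower-level sequencer, pulls high-level items from an upper-layer sequencer, and maps each one onto lower-level transactions, so no components are extended and no factory overrides are needed:

```systemverilog
// Hypothetical sketch of sequence layering (illustrative names only;
// factory registration macros omitted for brevity).
class packet_item extends uvm_sequence_item;   // upper-layer item
  rand byte payload[];
  function new(string name = "packet_item"); super.new(name); endfunction
endclass

class bus_item extends uvm_sequence_item;      // lower-layer item
  rand byte data;
  function new(string name = "bus_item"); super.new(name); endfunction
endclass

// Translator sequence: started on the bus sequencer by the test, it
// pulls packet items from the upper-layer sequencer and converts each
// one into a stream of bus transactions.
class packet_to_bus_seq extends uvm_sequence #(bus_item);
  uvm_sequencer #(packet_item) upper_sqr;      // handle set by the test
  function new(string name = "packet_to_bus_seq"); super.new(name); endfunction
  task body();
    packet_item pkt;
    bus_item    b;
    forever begin
      upper_sqr.get_next_item(pkt);            // pull a high-level item
      foreach (pkt.payload[i]) begin           // map it onto bus items
        b = new("b");
        start_item(b);
        b.data = pkt.payload[i];
        finish_item(b);
      end
      upper_sqr.item_done();                   // complete the upper item
    end
  endtask
endclass
```

Because the translator is just another sequence, upper-layer tests can be written purely in terms of packets and reused over different lower-level protocols.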
Download OVM Configuration and Virtual Interface Extensions from OVMWorld.org
Creating configurable testbench elements is critical for reuse. If you write some OVM code in one particular testbench and never intend to use it in any other testbench, then there is no need to make it configurable. As soon as you wish to take code and turn it into reusable IP which can be used in a variety of applications, not all of which are immediately known, then you need to think about how to make the code configurable. Making code configurable means that you need to think about the breadth of applications where it will be used and the degrees of freedom you want to make available to the user of this IP.
The author of any verification IP needs to think about how to make that VIP sufficiently flexible to be used in a variety of different scenarios. In order to achieve this, OVM components need to have a number of settings associated with them that can be varied by the testbench integrator. These settings may include things such as error injection rates, protocol modes supported or not supported, and whether an agent is active or passive, to name but a few.
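As a rough sketch of what this looks like in practice (agent and field names here are illustrative, not from the extensions package), a VIP author can expose such settings through OVM’s standard configuration API, and the integrator then varies them from the test without touching the VIP source:

```systemverilog
// Minimal sketch of a configurable OVM agent (illustrative names).
class my_agent extends ovm_agent;
  int is_active  = OVM_ACTIVE;  // active by default
  int error_rate = 0;           // error injection rate, in percent

  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction

  function void build();
    super.build();
    // Pick up any settings the testbench integrator has applied.
    void'(get_config_int("is_active",  is_active));
    void'(get_config_int("error_rate", error_rate));
    if (is_active == OVM_ACTIVE) begin
      // construct the driver and sequencer here
    end
  endfunction
endclass

// In the test, the integrator configures instances by hierarchical path,
// with no changes to the VIP itself:
//   set_config_int("env.agent0", "is_active",  OVM_PASSIVE);
//   set_config_int("env.*",      "error_rate", 5);
```

The key point is that the degrees of freedom are decided by the VIP author, while the values are decided later by each integrator.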
Future OVM Directions
If you download the OVM Configuration and Virtual Interface Extensions, you will find information on future directions for OVM in the documentation directory. In particular, detaching the configuration scoping mechanism from the OVM component hierarchy is an active area of investigation which might enhance the existing configuration mechanisms in important ways. This would allow one to naturally use the scoping mechanism with “behavioral VIP,” such as sequences, in addition to the current “structural scoping” mechanism that works with OVM components.
Additionally, the current configuration database is limited to storing strings, integers, and objects derived from ovm_object. Another area of active investigation is enabling the database to store values of any type to solve the efficiency problems when storing and retrieving integral types.
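Today, the usual workaround for this limitation is to wrap an arbitrary type in a parameterized class derived from ovm_object so it can ride through the existing object mechanism; the sketch below shows the idea (names are illustrative, and this mirrors the container pattern commonly used for virtual interfaces):

```systemverilog
// Wrap any type T in an ovm_object so it can pass through the existing
// configuration database (illustrative names; factory macro omitted).
class container #(type T = int) extends ovm_object;
  T val;
  function new(string name = "container"); super.new(name); endfunction
endclass

// Storing a virtual interface (or any other type) via set_config_object:
//   container #(virtual my_bus_if) c = new("bus_if_cfg");
//   c.val = top.bus_if;
//   set_config_object("env.agent0.*", "vif", c, 0);  // 0: don't clone
//
// Retrieving it in a component's build():
//   ovm_object tmp;
//   container #(virtual my_bus_if) c;
//   if (get_config_object("vif", tmp, 0) && $cast(c, tmp))
//     vif = c.val;
```

A natively typed database would eliminate both this wrapper boilerplate and the cost of streaming integral values through the generic mechanism.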
It is also possible to improve and expand the existing wildcarding mechanisms to support the full regular expression syntax.
We invite you to join us in this endeavor and share your thoughts here or on OVM World. To get started, download the OVM Configuration and Virtual Interface Extensions to learn more.