Intelligent Testbench Automation Delivers 10X to 100X Faster Functional Verification
If you’ve been to DAC or DVCon during the past couple of years, you’ve probably at least heard of something called “Intelligent Testbench Automation” (iTBA). It’s actually not all that new: the underlying principles have been used in compiler testing and some types of software testing for the past three decades. But their application to electronic design verification is certainly new, and exciting.
The value proposition of iTBA is simple and straightforward. Just like constrained random testing (CRT), iTBA generates tons of stimuli for functional verification. But iTBA is so efficient that it achieves the targeted functional coverage one to two orders of magnitude faster than CRT. So what would you do if you could achieve your current simulation goals 10X to 100X faster?
You could finish your verification earlier, especially when it seems like you’re getting new IP drops every day. I’ve seen IP verification teams reduce their simulations from several days on several CPUs (using CRT) to a couple of hours on a single CPU (with iTBA). No longer can IP designers send RTL revisions faster than we can verify them.
But for me, I’d ultimately use the time savings to expand my testing goals. Today’s designs are so complex that typically only a fraction of their functionality gets tested anyway. One of the biggest challenges is deciding which functionality to test, and which not to test. (We’ll show you how iTBA can help you here in a future blog post.) Well, if I could achieve my initial target coverage in one-tenth of the time, I’d use at least part of the savings to expand my coverage and go after some of the functionality I originally didn’t think I’d have time to test.
Online Illustration
If you check out this link – http://www.verificationacademy.com/infact – you’ll find an interactive, side-by-side comparison of constrained random testing and intelligent testbench automation. It’s an Adobe Flash demonstration that lets you run your own simulations. Try it; it’s fun.
The example shows a target coverage of 576 equally weighted test cases in a 24×24 grid. You can adjust the dials at the top for the number and speed of simulators to use, and then click “start”. Both CRT and iTBA simulations run in parallel at the same speed, cycle for cycle; each time a new test case is simulated, the number in its cell is incremented by one and the color of the cell changes. Notice that the iTBA simulation on the right achieves 100% coverage very quickly, covering every unique test case exactly once. The CRT simulation on the left eventually achieves 100% coverage too, but painfully slowly, with much unwanted redundancy. You can also click “show chart” to see a coverage chart of your simulation.
You probably knew that random testing repeats, but you probably didn’t know by how much. This is the classic “coupon collector’s problem”: to cover all of N equally weighted cases, a random generator needs an expected T = N × H(N) ≈ N ln N + γN tests, where H(N) is the N-th harmonic number and γ ≈ 0.577 is the Euler–Mascheroni constant. Plugging in N = 576, the random simulation will require an average of about 3,994 tests to achieve our goal (the N ln N term alone accounts for about 3,661 of them). Sometimes it’s more, sometimes it’s less, given the unpredictability of random testing. In the meantime, the iTBA simulation achieves 100% coverage in just 576 tests, a reduction of about 86%.
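If you’d rather check that arithmetic than take it on faith, here’s a short Python sketch (my own illustration, not part of the Flash demo) that computes the expected test count for 576 equally weighted cases and simulates CRT-style random selection to compare against the 576 tests a non-repeating generator needs. All names here are mine.

```python
import random

def random_tests_to_full_coverage(n_cases, rng):
    """Simulate CRT-style random selection: count draws until every case is hit."""
    covered = set()
    draws = 0
    while len(covered) < n_cases:
        covered.add(rng.randrange(n_cases))
        draws += 1
    return draws

N = 576  # the 24x24 grid from the demo

# Expected draws for full coverage: N * H(N), the coupon collector's result
expected = N * sum(1.0 / k for k in range(1, N + 1))
print(f"expected random tests: {expected:.0f}")  # roughly 3994

rng = random.Random(2011)
trials = [random_tests_to_full_coverage(N, rng) for _ in range(200)]
print(f"simulated average:     {sum(trials) / len(trials):.0f}")

# A non-repeating (iTBA-style) generator needs exactly N tests.
print(f"iTBA tests:            {N}")
```

Run it a few times with different seeds and you’ll see the simulated average land near the expected value, while individual runs swing well above and below it.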
Experiment at Home
You probably already have an excellent six-sided demonstration vehicle somewhere at home. Try rolling a single die repeatedly, simulating a random test generator. How many times does it take you to “cover” all six unique test cases? On average it should take about 15 rolls (the exact expectation is 6 × H(6) ≈ 14.7, where H(6) is the sixth harmonic number). You might get lucky and finish in 8, 9, or 10, but chances are you’ll still be rolling at 12, 13, 14, or even more. If you used iTBA to generate the test cases, it would take you exactly six rolls, and you’d be done. Now in this example, getting to coverage roughly two and a half times faster may not be that exciting to you. But if you extrapolate these results to your RTL design’s test plan, the savings can become quite interesting.
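If your dice have gone missing, here’s a quick Monte Carlo version of the same experiment (again a sketch of my own, not from the original post):

```python
import random

def rolls_to_cover_all_faces(rng):
    """Roll a fair die until every face 1..6 has appeared at least once."""
    seen = set()
    rolls = 0
    while len(seen) < 6:
        seen.add(rng.randint(1, 6))
        rolls += 1
    return rolls

rng = random.Random(6)
results = [rolls_to_cover_all_faces(rng) for _ in range(10_000)]
average = sum(results) / len(results)
print(f"average rolls to cover all six faces: {average:.1f}")  # close to 6*H(6) = 14.7
```

Note the spread: the fastest runs finish in exactly 6 rolls, but the slow tail stretches out to 40 or more, which is exactly the unpredictability the interactive demo shows on the CRT side.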
So here’s a quick question for you. What’s the minimum number of unique functional test cases needed to realize at least a 10X gain in efficiency with iTBA compared to what you could get with CRT? (Hint – by the formula above, the gain factor is roughly ln N, so you can figure it out with three taps on a scientific calculator.) It’s probably a pretty small number compared to the number of functions your design can actually perform, meaning that there’s at least a 10X improvement in testing efficiency awaiting you with iTBA.
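If you’d rather not reach for the calculator, the same three taps look like this in Python. This uses the leading-order estimate T ≈ N ln N, so the CRT-to-iTBA ratio is about ln N and a 10X gain needs ln N ≥ 10 (the γN correction term would only shrink the answer, so this is a conservative bound):

```python
import math

# Gain of a non-repeating generator over random selection is roughly ln(N),
# the leading term of the coupon collector's N*ln(N) + gamma*N.
# Solving ln(N) >= 10 gives N >= e^10.
n_for_10x = math.ceil(math.exp(10))
print(n_for_10x)  # 22027 unique test cases
```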
Hopefully at this point you’re at least a little bit interested. Or, like some others, you may be skeptical: could this technology really offer a 10X improvement in functional verification? Check out the Verification Academy at http://www.verificationacademy.com/course-modules/dynamic-verification/intelligent-testbench-automation to see the first academy sessions introducing Intelligent Testbench Automation. Or you can even Google “intelligent testbench automation” and see what you find. Thanks for reading . . .
Posted June 28th, 2011, by Mark Olen