Posts Tagged ‘Intelligent Testbench Automation’

30 October, 2013

MENTOR GRAPHICS AT ARM TECHCON

This week ARM® TechCon® 2013 is being held at the Santa Clara Convention Center, from Tuesday, October 29 through Thursday, October 31, but don’t worry, there’s nothing to be scared about.  The theme is “Where Intelligence Counts”, and in fact, as a platinum sponsor of the event, Mentor Graphics is excited to present no fewer than ten technical and training sessions about using intelligent technology to design and verify ARM-based designs.

My personal favorite is scheduled for Halloween Day at 1:30pm, where I’ll tell you about a trick that Altera used to shave several months off their schedule, while verifying the functionality and performance of an ARM AXI™ fabric interconnect subsystem.  And the real treat is that they achieved first silicon success as well.  In keeping with the event’s theme, they used something called “intelligent” testbench automation.

And whether you’re designing multi-core designs with AXI fabrics, wireless designs with AMBA® 4 ACE™ extensions, or even enterprise computing systems with ARM’s latest AMBA® 5 CHI™ architecture, these sessions show you how to take advantage of the very latest simulation and formal technology to verify SoC connectivity, ensure correct interconnect functional operation, and even analyze on-chip network performance.

On Tuesday at 10:30am, Gordon Allan described how an intelligent performance analysis solution can leverage the power of an SQL database to analyze and verify interconnect performance in ways that traditional verification techniques cannot.  He showed a wide range of dynamic visual representations produced by SoC regressions that can be quickly and easily manipulated by engineers to verify performance and avoid expensive overdesign.
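To make Gordon’s idea concrete, here’s a minimal sketch in Python with sqlite3, my own illustration rather than his actual solution; the transaction table schema and field names are hypothetical, invented purely to show the style of question a SQL database makes easy.

import sqlite3

# Hypothetical schema: one row per completed interconnect transaction,
# logged by the testbench during an SoC regression.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE txn (
    master    TEXT,     -- initiating master (e.g. 'cpu0', 'dma')
    slave     TEXT,     -- target slave (e.g. 'ddr', 'sram')
    start_ns  INTEGER,  -- request timestamp
    end_ns    INTEGER,  -- response timestamp
    bytes     INTEGER   -- payload size
)""")

# A few fabricated transactions standing in for regression data.
conn.executemany("INSERT INTO txn VALUES (?,?,?,?,?)", [
    ("cpu0", "ddr",  100, 180,  64),
    ("cpu0", "ddr",  200, 310,  64),
    ("dma",  "ddr",  150, 400, 256),
    ("dma",  "sram", 500, 560,  32),
])

# Per-master latency and bandwidth: one line of SQL, versus a pile of
# ad-hoc scripting against a waveform dump or log file.
for row in conn.execute("""
    SELECT master,
           AVG(end_ns - start_ns) AS avg_latency_ns,
           MAX(end_ns - start_ns) AS worst_latency_ns,
           SUM(bytes) * 1.0 / (MAX(end_ns) - MIN(start_ns)) AS bytes_per_ns
    FROM txn GROUP BY master"""):
    print(row)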

Right after Gordon’s session, Ping Yeung discussed using intelligent formal verification to automate SoC connectivity, overcoming observability and controllability challenges faced by simulation-only solutions.  Formal verification can examine all possible scenarios exhaustively, verifying on-chip bus connectivity, pin multiplexing of constrained interfaces, connectivity of clock and reset signals, as well as power control and scan test signal connectivity.

On Wednesday, Mark Peryer shows how to verify AMBA interconnect performance using intelligent database analysis and intelligent testbench automation for traffic scenario generation.  These techniques enable automatic testbench instrumentation for configurable ARM-based interconnect subsystems, as well as highly efficient generation of dense, medium, sparse, and varied bus traffic that covers even the most difficult-to-achieve corner-case conditions.

And finally, also on Halloween, Andy Meyer offers an intelligent workshop for those who are designing high performance systems with hierarchical and distributed caches, using either ARM’s AMBA 5 CHI or AMBA 4 ACE architecture.  He’ll cover topics including how caching works, how to improve caching performance, and how to verify cache coherency.

For more information about these sessions, be sure to visit the ARM TechCon program website.  Or if you miss any of them and would like to learn how this intelligent technology can help you verify your ARM designs, don’t be afraid to email me at mark_olen@mentor.com.  Happy Halloween!


26 July, 2013

You don’t need a graphic like the one below to know that multi-core SoC designs are here to stay.  This one happens to be based on ARM’s AMBA 4 ACE architecture, which is particularly effective for mobile design applications, offering an optimized mix of high performance processing and low power consumption.  But with software’s increasing role in overall design functionality, verification engineers are now tasked with verifying not just proper HW functionality, but proper HW functionality under control of application SW.  So how do you verify HW/SW interactions during system level verification?

For most verification teams, the current alternatives are like choosing between a walk through the desert or drinking from a fire hose.  In the desert, you can manually write test programs in C, compile them and load them into system memory, and then initialize the embedded processors and execute the programs.  Seems straightforward, but now try it for multiple embedded cores, and make sure you confirm your power-up sequence and optimal low power management (remember, we’re testing a mobile market design), correct memory mapping, peripheral connectivity, mode selection, and basically anything else your design is intended to do before its battery runs out.  You can get lost pretty quickly.  Eventually you remember that you weren’t hired to write multi-threaded software programs, but that there’s an entire staff of software developers down the hall who were.  So you boot your design’s operating system, load the SW drivers, run the design’s target application programs, and fully verify that all’s well between the HW and the SW at the system level.

But here comes the fire hose.  By this time, you’ve moved from your RTL simulator to an emulator, because just simulating Linux booting up takes weeks to months.  But what happens when your emulator runs into a system level failure after billions of clock cycles and several days of emulation?  There’s no way to avoid full HW/SW verification at the system level, but wouldn’t it be nice to find most of the HW/SW interaction bugs earlier in the process, when they’re easier to debug?

There’s an easier way to bridge the gap between the desert and the fire hose.  It’s called “intelligent Software Driven Verification” (iSDV).  iSDV automates the generation of embedded C test programs for multi-core processor execution.  These tests generate thousands of high-value processor instructions that verify HW/SW interactions.  Bugs discovered take much less time to debug, and the embedded C test programs can run in both simulation and emulation environments, easing the transition from one to the other.  Check out the on-line web seminar at the link below to learn about using intelligent Software Driven Verification as a way to uncover the majority of your system-level design bugs after RTL simulation, but before full system-level emulation.

http://www.mentor.com/products/fv/multimedia/automating-software-driven-hardware-verification-with-questa-infact
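To give a flavor of what generated embedded C can look like, here’s a toy sketch of the idea, my own illustration rather than Questa inFact’s actual output format: a small Python generator picks register accesses from a list of legal operations and emits a reproducible C test per core.  The register names, addresses, and function names are all hypothetical.

import random

# Hypothetical register map and legal operations for a toy peripheral;
# a real iSDV tool derives these from a rule graph, not a flat list.
REGS = {"CTRL": 0x40000000, "STATUS": 0x40000004, "DATA": 0x40000008}
OPS = [("write", "CTRL"), ("read", "STATUS"), ("write", "DATA"), ("read", "DATA")]

def emit_core_test(core_id, n_ops, seed):
    """Emit a C test function for one core (illustrative only)."""
    rng = random.Random(seed)
    lines = [f"void test_core{core_id}(void) {{",
             "    volatile unsigned *r;"]
    for _ in range(n_ops):
        op, reg = rng.choice(OPS)
        addr = REGS[reg]
        if op == "write":
            val = rng.randrange(1 << 32)
            lines.append(f"    r = (volatile unsigned *)0x{addr:08X}; *r = 0x{val:08X}u;  /* {reg} */")
        else:
            lines.append(f"    r = (volatile unsigned *)0x{addr:08X}; (void)*r;  /* {reg} */")
    lines.append("}")
    return "\n".join(lines)

# One test program per core, with different seeds so the cores stress
# the fabric with different (but reproducible) traffic.
for core in range(2):
    print(emit_core_test(core, n_ops=4, seed=42 + core))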


28 June, 2012

Graph-Based Intelligent Testbench Automation
While intelligent testbench automation is still relatively new when measured in EDA years, this graph-based verification technology is being adopted by more and more verification teams every day.  And the interest is global.  Verification teams from Europe, North America, and the Pacific Rim are now using iTBA to help them verify their newest electronic designs in less time and with fewer resources.  (If you haven’t adopted it yet, your competitors probably have.)  If you have yet to learn how this new technology can help you achieve higher levels of verification despite increasing design complexity, I’d suggest you check out a recent article in the June 2012 issue of Verification Horizons titled “Is Intelligent Testbench Automation For You?”  The article focuses on where iTBA is best applied and will help you most, and explains how design applications with a large verification space, functionally oriented coverage goals, and unbalanced conditions can often see a 100X acceleration in coverage closure.  For more detail about these and other considerations, you’ll have to read the article.

Fitzpatrick’s Corollary
And while you’re there, you might also notice that the entire June 2012 issue of Verification Horizons is devoted to helping you achieve the highest levels of coverage as efficiently as possible.  Editor and fellow verification technologist Tom Fitzpatrick succinctly adapts Murphy’s Law to verification, writing “If It Isn’t Covered, It Doesn’t Work”.   And any experienced verification engineer (or manager) knows just how true this is, making it critical that we thoughtfully prioritize our verification goals, and achieve them as quickly and efficiently as possible.  The June 2012 issue offers nine high quality articles, with a particular focus on coverage.

Berg’s Proof
Another proof that iTBA is catching on globally is the upcoming TVS DVClub event being held next Monday, 2 July 2012, in Bristol, Cambridge, and Grenoble.  The title of the event is “Graph-Based Verification”, and three industry experts will discuss different ways you can take advantage of what graph-based intelligent testbench automation has to offer.  My colleague and fellow verification technologist Staffan Berg leads off the event with a proof of his own: he will present how graph-based iTBA can significantly shorten your time-to-coverage.  Staffan will show you how to use graph-based verification to define your stimulus space and coverage goals, highlighting examples from some of the verification teams that have already adopted this technology, as I mentioned above.  He’ll also show how you can introduce iTBA into your existing verification environment, so you can realize these benefits without disrupting your existing process.  I have already registered and plan to attend the TVS DVClub event, but I’ll have to do some adapting of my own, as the event runs from 11:30am to 2:00pm BST.  But I’ve seen Staffan present before, and both he and intelligent testbench automation are worth getting up early for.  Hope to see you there, remotely speaking.


21 February, 2012

Is my car trying to tell me something?

This past Friday was the beginning of a two-day internal functional verification meeting at Mentor Graphics corporate headquarters on Intelligent Testbench Automation (iTBA).  (Mentor’s iTBA product, Questa inFact, is hot and getting hotter.)  After getting to my car to return home at the end of the first day, I was thinking that the large interest in this technology, demonstrated by a standing-room-only training event, has got to be a tipping point indication for iTBA.

I turned my car on.  (Actually, I “pushed” it on as there is no place to put a key to turn anymore.)

Moments after starting my car, a winter storm alert interrupted the music on the radio and displayed two notices.  One I am familiar with: it appears when the temperature falls and snow begins to collect on the mountain passes.  I’m not going to drive in the direction of the snow, so no problem.  The other alert was of grave concern.  It was a tornado watch.  And the tornado watch was not off in some other direction many miles away; it was “0 miles” from me.  I looked up and scanned the horizon: dark black clouds in one direction and sun in the other.  I changed the radio channel to a local AM evening drive station, but there was no mention of a tornado watch.  I headed in the direction of the sun.  It seemed the safest direction to head.  But before I did, I snapped a quick picture as proof I actually read “Tornado Watch” on the car’s navigation screen.

iTBA to the Rescue?

I returned to pondering whether functional verification has simply gotten too big for current techniques, such that iTBA is going from a nice-to-have to a must-have.

Several years back it was popular to brag about the compute farms & ranches one had.  With 5,000 machines here and another 5,000 machines there, it seemed a fair demonstration of one’s design and verification prowess.  But this gave way to 50,000 multicore machines, and who talks about that with pride now?  All such talk is out of necessity.  And what about the next step?  Who has 500,000 or 5,000,000 on the drawing board or in their data centers?  Looking around, it seems very few admit to more than 100,000, and even fewer have more than 500,000.

Verification may be in crisis, as many will say, but if you hold verification technology constant, it is not merely in crisis, it is on a collision course with disaster.  Addressing this crisis has been the theme of many of Mentor Graphics CEO Wally Rhines’ keynotes at DVCon.  His 2011 keynote was taken to heart by many who attended.  His call to improve the “Velocity of Verification” by several orders of magnitude has been followed by several examples over the year.

One example was shared several months after DVCon, when Mentor Graphics and TSMC announced we had partnered to validate advanced functional verification technology.  While not all test results at TSMC or our common customer, AppliedMicro, were revealed, one of the slower tests still demonstrated the value of iTBA to shorten time-to-coverage by over 100x.  Just days after that announcement, we disclosed that Mentor’s Veloce emulation platform offered a 400x improvement for OVM/UVM-driven verification.

100x and 400x seem like large numbers, but they appear even bigger when you put them into the context of the time being measured.  With current constrained random techniques, a project that takes 6 weeks of simulator run time to reach 100% closure can reach it in about 10 hours with Questa inFact, or about 2.5 hours with Veloce.  Instead of using complex scripts to peek in on a simulation run over the course of a month and a half, a verification team could actually leave work for the day, return the next morning, and have a full, complete, and exhaustive verification run.  And when even faster turnaround time is needed, emulation returns results during the work day.
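The arithmetic behind those turnaround times is easy to check; here’s the back-of-the-envelope version in Python, assuming round-the-clock simulation.

hours = 6 * 7 * 24      # 6 weeks of wall-clock simulation = 1008 hours
print(hours / 100)      # ~10 hours at a 100x gain (Questa inFact)
print(hours / 400)      # ~2.5 hours at a 400x gain (Veloce)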

SoC Verification: A Balance of Simulation, iTBA & Emulation

Wally’s DVCon 2011 keynote referenced eight customer results coming from Mentor’s Questa inFact tool.  Many more teams have since discovered what it can do for them, and with each success come requests from still more who want to see for themselves.

But changing the “Velocity of SoC Verification” has not rested on one technique alone.  Stop by the Mentor Graphics DVCon booth and we can share with you the advances we have made to address system-level verification since last year.

Crossing The Chasm

Which brings me to the point of the “Tornado Watch.”  As I pondered the iTBA tipping point, and how “little things can make a big difference,” as Malcolm Gladwell’s book puts it, my car must have been channeling Geoffrey Moore of “Crossing The Chasm” fame instead.  That must be why it issued the Tornado Watch.  Could it be that iTBA is set to cross the chasm from early adopters to the early majority?

And thankfully, I don’t think my car is programmed to issue tipping point warnings, nor do I want to see if it can.

In the end, it will be the benefit of hindsight that lets us know whether we are crossing the chasm into the tornado now, soon, or not at all.  But for Mentor’s part, full and advanced support of iTBA technology with Questa inFact is ready now, and we are set to cross the chasm into the tornado.  My colleague, Mark Olen, blogs about iTBA here.  If you have not yet had a chance to read his blog on iTBA delivering 10x to 100x faster functional verification, it is worth the time to do so.  You can look for him to give frequent updates on iTBA and to comment on the positive impact it has on SoC design and verification teams in the months ahead.

I look forward to seeing you at DVCon.


26 July, 2011

Who Doesn’t Like Faster?

In my last blog post I introduced new technology called Intelligent Testbench Automation (“iTBA”).  It’s generating lots of interest in the industry because, just like constrained random testing (“CRT”), it can generate tons of tests for functional verification.  But it has unique efficiencies that allow you to achieve coverage 10X to 100X faster.  And who doesn’t like faster?  Since the last post I’ve received many questions from readers, but one sticks out enough to “cover” it here in a follow-up post.

Several readers commented that they like the concept of randomness, because it has the ability to generate sequences of sequences, perhaps even a single sequence executed multiple times in a row.¹  And they were willing to suffer some extra redundancy as an unfortunate but necessary trade-off.

Interactive Example

While this benefit of random testing is understandable, there’s no need to worry, as iTBA has you covered here.  If you checked out this link - http://www.verificationacademy.com/infact2 - you found an interactive example with a side by side comparison of CRT and iTBA.  The intent of the example was to compare what happens when you use CRT to generate tests randomly versus when you use iTBA to generate tests without redundancy.

However, in a real application of iTBA, it’s equally likely that you’d manage your redundancy, not necessarily eliminate it completely.  We’ve improved the on-line illustration to include two (of the many) additional features of iTBA.

Coverage First – Then Random

One is the ability to run a simulation with high coverage non-redundant tests first, followed immediately by random tests.  Try it again, but this time check the box labeled “Run after all coverage is met”.  What you’ll find is that iTBA achieves your targeted coverage in the first 576 tests, at which time CRT will have achieved only around 63% coverage on average (1 − 1/e of the cases, since some are hit repeatedly while others are not hit at all).  But notice that iTBA doesn’t stop at 100% coverage.  It continues on, generating tests randomly.  By the time CRT gets to about 70% coverage, iTBA has achieved 100%, and has also generated scores of additional tests randomly.  You can have the best of both worlds.  You can click on the “suspend”, “resume”, and “show chart” buttons during the simulation to see the progress of each.

Interleave Coverage and Random

Two is the ability to run a simulation randomly, but to clip the redundancy rather than eliminate it.  Move the “inFact coverage goal” bar to set the clip level (try 2 or 3 or 4), and restart the simulation.  Now you’ll see iTBA generating random tests, but managing the redundancy to whatever level you chose.  The result is simulation with a managed amount of redundancy that still achieves 100% of your target coverage, including every corner-case.
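To see what managed redundancy means in practice, here’s a rough sketch in Python, a toy model of my own rather than inFact’s actual algorithm: test cases are still drawn at random, but any case that has already been simulated “clip” times is skipped and redrawn, so redundancy stays bounded while the run continues until every case is covered.

import random

def clipped_random_coverage(n_cases=576, clip=3, seed=1):
    """Toy model of redundancy clipping: random test selection, but no
    case is simulated more than `clip` times.  Returns the number of
    tests actually run to reach 100% coverage (every case hit once)."""
    rng = random.Random(seed)
    hits = [0] * n_cases
    tests_run = covered = 0
    while covered < n_cases:
        case = rng.randrange(n_cases)
        if hits[case] >= clip:
            continue                 # at the clip level: pick another case
        hits[case] += 1
        tests_run += 1
        if hits[case] == 1:
            covered += 1
    return tests_run

for clip in (1, 2, 3, 4):
    print(clip, clipped_random_coverage(clip=clip))

With clip=1, the model degenerates to fully non-redundant generation (576 tests for 576 cases); larger clip values buy back the randomness readers asked about, at a bounded cost in extra tests.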

iTBA generates tons of tests, but lets you decide how tightly to control them.  If you’re interested in learning more about how iTBA can help you achieve your functional verification goals faster, you might consider attending the Tech Design Forum in Santa Clara on September 8th.  There’s a track dedicated to achieving coverage closure.  Check out this URL for more information about it.  http://www.techdesignforums.com/events/santa-clara/event.cfm

¹ By the way, if achieving your test goals is predicated on certain specific sequences of sequences, our experts can show you how to create an iTBA graph that will achieve those goals much faster than relying on redundancy.  But that’s another story for another time.


28 June, 2011

iTBA Introduction

If you’ve been to DAC or DVCon during the past couple of years, you’ve probably at least heard of something new called “Intelligent Testbench Automation”.  Well, it’s actually not all that new, as the underlying principles have been used in compiler testing and some types of software testing for the past three decades, but its application to electronic design verification is certainly new and exciting.

The value proposition of iTBA is fairly simple and straightforward.  Just like constrained random testing, iTBA generates tons of stimuli for functional verification.  But iTBA is so efficient, that it achieves the targeted functional coverage one to two orders of magnitude faster than CRT.  So what would you do if you could achieve your current simulation goals 10X to 100X faster?

You could finish your verification earlier, especially when it seems like you’re getting new IP drops every day.  I’ve seen IP verification teams reduce their simulations from several days on several CPUs (using CRT) to a couple of hours on a single CPU (with iTBA).  No longer can IP designers send RTL revisions faster than we can verify them.

But for me, I’d ultimately use the time savings to expand my testing goals.  Today’s designs are so complex that typically only a fraction of their functionality gets tested anyway.  And one of the biggest challenges is trading off what functionality to test, and what not to test.  (We’ll show you how iTBA can help you here, in a future blog post.)  Well, if I can achieve my initial target coverage in one-tenth of the time, then I’d use at least part of the time savings to expand my coverage, and go after some of the functionality that I originally didn’t think I’d have time to test.

On Line Illustration

If you check out this link – http://www.verificationacademy.com/infact – you’ll find an interactive example of a side by side comparison of constrained random testing and intelligent testbench automation.  It’s an Adobe Flash demonstration, and it lets you run your own simulations.  Try it, it’s fun.

The example shows a target coverage of 576 equally weighted test cases in a 24×24 grid.  You can adjust the dials at the top for the number and speed of simulators to use, and then click on “start”.  Both CRT and iTBA simulations run in parallel at the same speed, cycle for cycle, and each time a new test case is simulated, the number in its cell is incremented by one and the color of the cell changes.  Notice that the iTBA simulation on the right achieves 100% coverage very quickly, covering every unique test case efficiently.  The CRT simulation on the left, by contrast, eventually reaches 100% coverage, but slowly and painfully, with much unwanted redundancy.  You can also click on “show chart” to see a coverage chart of your simulation.

Math Facts

You probably knew that random testing repeats, but you probably didn’t know by how much.  Covering N equally weighted cases with purely random tests is the classic “coupon collector’s problem”: the expected number of tests T that must be generated to achieve 100% coverage of N different cases is T = N·H_N ≈ N ln N + 0.577·N, where H_N is the N-th harmonic number.  So for our 576 equally weighted cases, the random simulation will require an average of about 3,990 tests to achieve our goal.  Sometimes it’s more, sometimes it’s less, given the unpredictability of random testing.  In the meantime the iTBA simulation achieves 100% coverage in just 576 tests, a reduction of about 86%.
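If you’d rather verify that expectation than take my word for it, here’s a quick Monte Carlo sketch in Python; it handles the 576-cell demo grid, and with n = 6 it also previews the die-rolling experiment described next.

import random

def tests_to_full_coverage(n_cases, rng):
    """Random testing: count draws until every one of n_cases is hit."""
    covered, draws = set(), 0
    while len(covered) < n_cases:
        covered.add(rng.randrange(n_cases))
        draws += 1
    return draws

rng = random.Random(0)
for n in (6, 576):
    trials = [tests_to_full_coverage(n, rng) for _ in range(500)]
    print(n, sum(trials) / len(trials))  # averages near 14.7 and 3,990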

Experiment at Home

You probably already have an excellent six-sided demonstration vehicle somewhere at home.  Try rolling a single die repeatedly, simulating a random test generator.  How many times does it take you to “cover” all six unique test cases?  T = N·H_N says it should take about 15 rolls on average.  You might get lucky and finish in 9, 10, or 11.  But chances are you’ll still be rolling at 15, 16, 17, or even more.  If you used iTBA to generate the test cases, it would take you six rolls, and you’d be done.  Now in this example, getting to coverage about two and a half times as fast may not be that exciting to you.  But if you extrapolate these results to your RTL design’s test plan, the savings can become quite interesting.

Quiz Question

So here’s a quick question for you.  What’s the minimum number of unique functional test cases needed to realize at least a 10X gain in efficiency with iTBA compared to what you could get with CRT?  (Hint – the gain factor for N equally weighted cases is roughly H_N ≈ ln N + 0.577, so a scientific calculator gets you there in a few taps.)  It’s probably a pretty small number compared to the number of functions your design can actually perform, meaning that there’s at least a 10X improvement in testing efficiency awaiting you with iTBA.
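And if you’d rather let Python do the tapping (spoiler alert), here’s the check, using the harmonic-number approximation above:

import math

# A 10X gain needs H_N ≈ ln N + 0.5772 >= 10,
# so N must be at least e^(10 - 0.5772).
print(round(math.exp(10 - 0.5772)))  # about 12,400 unique test cases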

More Information

Hopefully at this point you’re at least a little bit interested.  Or, like some others, you may be skeptical.  Could this technology really offer a 10X improvement in functional verification?  Check out the Verification Academy at this site – http://www.verificationacademy.com/course-modules/dynamic-verification/intelligent-testbench-automation – to see the first academy sessions that will introduce you to Intelligent Testbench Automation.  Or you can even Google “intelligent testbench automation” and see what you find.  Thanks for reading . . .

