Archive for Mark Olen

30 October, 2013

MENTOR GRAPHICS AT ARM TECHCON

This week ARM® TechCon® 2013 is being held at the Santa Clara Convention Center from Tuesday, October 29 through Thursday, October 31, but don’t worry, there’s nothing to be scared about.  The theme is “Where Intelligence Counts”, and as a platinum sponsor of the event, Mentor Graphics is excited to present no fewer than ten technical and training sessions about using intelligent technology to design and verify ARM-based designs.

My personal favorite is scheduled for Halloween Day at 1:30pm, where I’ll tell you about a trick that Altera used to shave several months off their schedule, while verifying the functionality and performance of an ARM AXI™ fabric interconnect subsystem.  And the real treat is that they achieved first silicon success as well.  In keeping with the event’s theme, they used something called “intelligent” testbench automation.

And whether you’re designing multi-core designs with AXI fabrics, wireless designs with AMBA® 4 ACE™ extensions, or even enterprise computing systems with ARM’s latest AMBA® 5 CHI™ architecture, these sessions show you how to take advantage of the very latest simulation and formal technology to verify SoC connectivity, ensure correct interconnect functional operation, and even analyze on-chip network performance.

On Tuesday at 10:30am, Gordon Allan described how an intelligent performance analysis solution can leverage the power of an SQL database to analyze and verify interconnect performance in ways that traditional verification techniques cannot.  He showed a wide range of dynamic visual representations, produced from SoC regressions, that engineers can quickly and easily manipulate to verify performance and avoid expensive overdesign.
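
To give a feel for why a SQL database is such a natural fit here, below is a minimal, purely illustrative Python/SQLite sketch.  The axi_txn table and its columns are my invention, not Mentor’s actual schema or tooling; the point is simply that once every bus transaction from a regression is logged as a row, a performance question becomes a one-line query.

```python
import sqlite3

# Hypothetical transaction log: one row per completed AXI read, stamped
# with its master, start/end cycle, and payload size.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE axi_txn
              (master TEXT, start_cycle INT, end_cycle INT, num_bytes INT)""")
db.executemany("INSERT INTO axi_txn VALUES (?, ?, ?, ?)", [
    ("cpu0", 100, 118, 64), ("cpu0", 140, 162, 64),
    ("dma0", 105, 190, 256), ("dma0", 300, 410, 256),
])

# Average latency and total bytes moved per master, worst latency first.
for master, avg_lat, total in db.execute(
        """SELECT master, AVG(end_cycle - start_cycle), SUM(num_bytes)
           FROM axi_txn GROUP BY master ORDER BY 2 DESC"""):
    print(f"{master}: {avg_lat:.1f} cycles average latency, {total} bytes")
```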

Right after Gordon’s session, Ping Yeung discussed using intelligent formal verification to automate SoC connectivity verification, overcoming the observability and controllability challenges faced by simulation-only solutions.  Formal verification can examine all possible scenarios exhaustively, verifying on-chip bus connectivity, pin multiplexing of constrained interfaces, connectivity of clock and reset signals, and power control and scan test signal connectivity.

On Wednesday, Mark Peryer shows how to verify AMBA interconnect performance using intelligent database analysis and intelligent testbench automation for traffic scenario generation.  These techniques enable automatic testbench instrumentation for configurable ARM-based interconnect subsystems, as well as highly efficient generation of dense, medium, sparse, and varied bus traffic that covers even the most difficult-to-achieve corner-case conditions.

And finally, also on Halloween, Andy Meyer offers an intelligent workshop for those who are designing high-performance systems with hierarchical and distributed caches, using either ARM’s AMBA 5 CHI architecture or ARM’s AMBA 4 ACE architecture.  He’ll cover topics including how caching works, how to improve caching performance, and how to verify cache coherency.

For more information about these sessions, be sure to visit the ARM TechCon program website.  Or if you miss any of them and would like to learn how this intelligent technology can help you verify your ARM designs, don’t be afraid to email me at mark_olen@mentor.com.  Happy Halloween!


26 July, 2013

You don’t need a graphic like the one below to know that multi-core SoC designs are here to stay.  This one happens to be based on ARM’s AMBA 4 ACE architecture, which is particularly effective for mobile design applications, offering an optimized mix of high-performance processing and low power consumption.  But with software’s increasing role in overall design functionality, verification engineers are now tasked with verifying not just proper HW functionality, but proper HW functionality under control of application SW.  So how do you verify HW/SW interactions during system-level verification?

For most verification teams, the current alternatives are like choosing between a walk through the desert and drinking from a fire hose.  In the desert, you can manually write test programs in C, compile them and load them into system memory, and then initialize the embedded processors and execute the programs.  Seems straightforward, but now try it for multiple embedded cores, and make sure you confirm your power-up sequence and optimal low-power management (remember, we’re testing a mobile market design), correct memory mapping, peripheral connectivity, mode selection, and basically anything else your design is intended to do before its battery runs out.  You can get lost pretty quickly.  Eventually you remember that you weren’t hired to write multi-threaded software programs, but that there’s an entire staff of software developers down the hall who were.  So you boot your design’s operating system, load the SW drivers, run the design’s target application programs, and fully verify that all’s well between the HW and the SW at the system level.

But here comes the fire hose.  By this time, you’ve moved from your RTL simulator to an emulator, because just simulating a Linux boot-up takes weeks to months.  But what happens when your emulator runs into a system-level failure after billions of clock cycles and several days of emulation?  There’s no way to avoid full HW/SW verification at the system level, but wouldn’t it be nice to find most of the HW/SW interaction bugs earlier in the process, when they’re easier to debug?

There’s an easier way to bridge the gap between the desert and the fire hose.  It’s called “intelligent Software Driven Verification”.  iSDV automates the generation of embedded C test programs for multi-core processor execution.  These tests generate thousands of high-value processor instructions that verify HW/SW interactions.  Bugs discovered take much less time to debug, and the embedded C test programs run in both simulation and emulation environments, easing the transition from one to the other.  Check out the on-line web seminar at the link below to learn about using intelligent Software Driven Verification to uncover the majority of your system-level design bugs after RTL simulation, but before full system-level emulation.

http://www.mentor.com/products/fv/multimedia/automating-software-driven-hardware-verification-with-questa-infact
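
To give a flavor of what “generating embedded C test programs” means in practice, here’s a toy Python sketch that emits a self-checking C test for a hypothetical memory-mapped peripheral.  The register map, the write32/check32 helpers, and the random selection are all illustrative inventions; real iSDV derives its stimulus from a description of legal scenarios rather than unconstrained random picks.

```python
import random

REG_BASE = 0x4000_0000          # hypothetical peripheral base address
REGS = {"CTRL": 0x00, "MODE": 0x04, "DMA_LEN": 0x08}

def emit_c_test(num_ops: int, seed: int) -> str:
    """Emit a C program that writes random values to hypothetical
    peripheral registers and reads them back to check the HW/SW path."""
    rng = random.Random(seed)
    body = []
    for _ in range(num_ops):
        name, off = rng.choice(sorted(REGS.items()))
        val = rng.getrandbits(32)
        body.append(f"    write32(BASE + {off:#x}, {val:#010x}u); /* {name} */")
        body.append(f"    check32(BASE + {off:#x}, {val:#010x}u);")
    return "\n".join([
        "#include <stdint.h>",
        f"#define BASE {REG_BASE:#x}u",
        "void write32(uint32_t a, uint32_t v);  /* provided by the BSP */",
        "void check32(uint32_t a, uint32_t v);",
        "int main(void) {",
        *body,
        "    return 0;",
        "}",
    ])

print(emit_c_test(num_ops=4, seed=1))
```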


26 July, 2012

A system-level verification engineer once told me that his company consumes over 50% of its emulation capacity debugging failures. According to him, there was just no way around consuming emulators while debugging their SoC design emulation runs. In fact, when failures occur during emulation, verification engineers often turn to live debugging with JTAG interfaces to the Design Under Test. This enables one engineer to debug one problem at a time, while consuming expensive emulation capacity for extended periods. After all, when some of the intricate interactions between system software and design hardware fail, it can take days if not weeks to debug. To say this is painful, slow, and expensive would be an understatement.

Would you be interested in learning about a better alternative for debugging SoC emulation runs? Veloce Codelink offers instant replay capability for emulation. This allows multiple engineers to debug multiple problems at the same time, without consuming any emulation capacity, leaving the emulators to be used where they’re most needed – running more regression tests. And Veloce Codelink is non-invasive – no additional clock cycles are needed to extract emulation data.

If you consume as much time debugging emulation failures as the system-level verification engineer above, Veloce Codelink could double your emulation capacity, too. To learn more about Veloce Codelink’s “virtual emulation” that enables “DVR” control of emulation runs, check out our On-Demand Web Seminar titled “Off-line Debug of Multi-Core SoCs with Veloce Emulation”. In this web seminar you’ll also learn about Veloce Codelink’s “flight data recording” technology that enables long emulation runs to be debugged without requiring huge amounts of memory to store all of the data.

http://www.mentor.com/products/fv/multimedia/veloce-codelink-web-seminar


28 June, 2012

Graph-Based Intelligent Testbench Automation
While intelligent testbench automation is still reasonably new when measured in EDA years, this graph-based verification technology is being adopted by more and more verification teams every day.  And the interest is global.  Verification teams from Europe, North America, and the Pacific Rim are now using iTBA to help them verify their newest electronic designs in less time and with fewer resources.  (If you haven’t adopted it yet, your competitors probably have.)  If you have yet to learn how this new technology can help you achieve higher levels of verification coverage despite increasing design complexity, I’d suggest you check out a recent article in the June 2012 issue of Verification Horizons titled “Is Intelligent Testbench Automation For You?”  The article focuses on where iTBA is best applied, where it will produce optimal results, and how design applications with a large verification space, functionally oriented coverage goals, and unbalanced conditions can often experience a 100X gain in coverage closure acceleration.  For more detail about these and other considerations, you’ll have to read the article.

Fitzpatrick’s Corollary
And while you’re there, you might also notice that the entire June 2012 issue of Verification Horizons is devoted to helping you achieve the highest levels of coverage as efficiently as possible.  Editor and fellow verification technologist Tom Fitzpatrick succinctly adapts Murphy’s Law to verification, writing “If It Isn’t Covered, It Doesn’t Work”.   And any experienced verification engineer (or manager) knows just how true this is, making it critical that we thoughtfully prioritize our verification goals, and achieve them as quickly and efficiently as possible.  The June 2012 issue offers nine high quality articles, with a particular focus on coverage.

Berg’s Proof
Another proof that iTBA is catching on globally is the upcoming TVS DVClub event being held next Monday, 2 July 2012, in Bristol, Cambridge, and Grenoble.  The title of the event is “Graph-Based Verification”, and three industry experts will discuss different ways you can take advantage of what graph-based intelligent testbench automation has to offer.  My colleague and fellow verification technologist Staffan Berg leads off the event with a proof of his own, presenting how graph-based iTBA can significantly shorten your time-to-coverage.  Staffan will show you how to use graph-based verification to define your stimulus space and coverage goals, highlighting examples from some of the verification teams that have already adopted this technology, as I mentioned above.  He’ll also show how you can introduce iTBA into your existing verification environment, so you can realize these benefits without disrupting your existing process.  I have already registered and plan to attend the TVS DVClub event, but I’ll have to do some adapting of my own, as the event runs from 11:30am to 2:00pm BST.  But I’ve seen Staffan present before, and both he and intelligent testbench automation are worth getting up early for.  Hope to see you there, remotely speaking.


13 December, 2011

Instant Replay Offers Multiple Views at Any Speed

If you’ve watched any professional sporting event on television lately, you’ve seen the pressure put on referees and umpires.  They have to make split-second decisions in real-time, having viewed ultra-high-speed action just a single time.  But watching at home on television, we get the luxury of viewing multiple replays of events in question in high-definition super-slow-motion, one frame at a time, and even in reverse.  We also get to see many different views of these controversial events, from the front, the back, the side, up close, or far away.  Sometimes it seems there must be twenty different cameras at every sporting event.

Wouldn’t it be nice if you could apply this same principle to your SoC level simulations?  What if you had instant replay from multiple viewing angles in your functional verification toolbox?  It turns out that such a technology indeed exists, and it’s called “Codelink Replay”.

Codelink Replay enables verification engineers to use instant replay with multiple viewing angles to quickly and accurately debug even the most complex SoC level simulation failures.  This is becoming increasingly important: as Harry Foster’s blog series about the 2010 Wilson Research Group Functional Verification Study shows, over half of all new design starts now contain multiple embedded processors.  If you’re responsible for verifying a design with multiple embedded cores such as ARM’s new Cortex-A15 and Cortex-A7 processors, this technology will have a dramatic impact for you.

Multi-Core SoC Design Verification

Multi-core designs present a whole new level of verification challenges.  Achieving functional coverage of your IP blocks at the RTL level has become merely a prerequisite now – as they say, “necessary but not sufficient”.  Welcome to the world of SoC level verification, where you use your design’s software as a testbench.  After all, a testbench’s role is to mimic the design’s target environment in order to test its functionality, and how better to accomplish this than to execute the design’s software against its hardware, albeit during simulation?

Some verification teams have already dabbled in this world.   Perhaps you’ve written a handful of tests in C or assembly code, loaded them into memory, initialized your processor, and executed them.  This is indeed the best way to verify SoC level functionality including power optimization management, clocking domain control, bus traffic arbitration schemes, driver-to-peripheral compatibility, and more, as none of these aspects of an SoC design can be appropriately verified at the RTL IP block level.

However, imagine running a software testbench program only to see that the processor stopped executing code two hours into the simulation.  What do you do next?  Debugging “software as a testbench” simulation can be daunting.  Especially when the software developers say “the software is good”, and the hardware designers say “the hardware is fine”.  Until recently, you could count on weeks to debug these types of failures.  And the problem is compounded with today’s SoC designs with multiple processors running software test programs from memory.

This is where Codelink Replay comes in.  It enables you to replay your simulation in slow motion or fast forward, while observing many different views, including hardware views (waveforms, CPU register values, program counter, call stack, bus transactions, and four-state logic) and software views (memory, source code, decompiled code, variable values, and output), all remaining in perfect synchrony whether you’re playing forward or backward, single-stepping, in slow motion, or at full speed.  So when your simulation fails, just start at that point in time and replay backwards to the root of the problem.  It’s non-invasive.  It doesn’t require any modifications to your design or to your tests.
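
To make the “record once, replay at will” idea concrete, here’s a conceptual Python sketch.  It is emphatically not Codelink’s implementation; the class and field names are invented for illustration, and a real tool reconstructs state from a lightweight processor trace rather than storing full snapshots.

```python
# Conceptual sketch only: log processor-visible state during the live run,
# then debug offline with a cursor that steps forward or backward freely.
class ReplayTrace:
    def __init__(self):
        self.snapshots = []          # one recorded entry per cycle
        self.cursor = 0

    def record(self, cycle, pc, regs):
        self.snapshots.append({"cycle": cycle, "pc": pc, "regs": dict(regs)})

    def step(self, n=1):             # n may be negative: replay backwards
        self.cursor = max(0, min(len(self.snapshots) - 1, self.cursor + n))
        return self.snapshots[self.cursor]

# Record during the (simulated) run...
trace = ReplayTrace()
for cycle in range(1000):
    trace.record(cycle, pc=0x8000 + 4 * cycle, regs={"r0": cycle % 7})

# ...then jump to the failing cycle and walk backwards to the root cause,
# without re-running the simulation at all.
trace.cursor = len(trace.snapshots) - 1
print(trace.step(-1))                # one step back from the point of failure
```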

Debugging SoC Designs Quickly and Accurately

So if you’re under pressure to make fast and accurate decisions when your SoC level tests fail, you can relate to the challenges faced by professional sports referees and umpires.  But with Codelink Replay, you can be assured that there are about 20 different virtual “cameras” tracing and logging your processors during simulation, giving you the same instant replay benefit we get when we watch sporting events on television.  If you’re interested in learning more about this new technology, check out the web seminar at the URL below, which introduces Codelink Replay and shows how it supports the entire ARM family of processors, including the latest Cortex A-Series, Cortex R-Series, and Cortex M-Series.

http://www.mentor.com/products/fv/multimedia/verifying-complex-soc-designs-with-questa-codelink


26 July, 2011

Who Doesn’t Like Faster?

In my last blog post I introduced a new technology called Intelligent Testbench Automation (“iTBA”).  It’s generating lots of interest in the industry because, just like constrained random testing (“CRT”), it can generate tons of tests for functional verification.  But it has unique efficiencies that allow you to achieve coverage 10X to 100X faster.  And who doesn’t like faster?  Well, since the last post I’ve received many questions from interested readers, but one seems to stick out enough to “cover” it here in a follow-up post.

Several readers commented that they like the concept of randomness because it has the ability to generate sequences of sequences, perhaps even a single sequence executed multiple times in a row.¹  And they were willing to suffer some extra redundancy as an unfortunate but necessary trade-off.

Interactive Example

While this benefit of random testing is understandable, there’s no need to worry, as iTBA has you covered here.  If you checked out this link – http://www.verificationacademy.com/infact – you found an interactive example of a side-by-side comparison of CRT and iTBA.  The intent of the example was to show what happens when you use CRT to generate tests randomly versus when you use iTBA to generate tests without redundancy.

However, in a real application of iTBA, it’s equally likely that you’d manage your redundancy rather than eliminate it completely.  We’ve now improved the on-line illustration to include two (of the many) additional features of iTBA.

Coverage First – Then Random

One is the ability to run a simulation with high-coverage, non-redundant tests first, followed immediately by random tests.  Try it again, but this time check the box labeled “Run after all coverage is met”.  What you’ll find is that iTBA achieves your targeted coverage in the first 576 tests, by which time CRT will have achieved somewhere around 50% coverage at best.  But notice that iTBA doesn’t stop at 100% coverage.  It continues on, generating tests randomly.  By the time CRT gets to about 70% coverage, iTBA has achieved 100% and has also generated scores of additional tests randomly.  You can have the best of both worlds.  You can click on the “suspend”, “resume”, and “show chart” buttons during the simulation to see the progress of each.

Interleave Coverage and Random

Two is the ability to run a simulation randomly, but to clip the redundancy rather than eliminate it.  Move the “inFact coverage goal” bar to set the clip level (try 2, 3, or 4), and restart the simulation.  Now you’ll see iTBA generating random tests, but managing the redundancy to whatever level you chose.  The result is a simulation with a managed amount of redundancy that still achieves 100% of your target coverage, including every corner case.  A rough sketch of the idea appears below.
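
Here’s that clipping idea as a short Python sketch (illustrative only; the function name is mine, and this is not inFact’s actual algorithm).  Pure CRT corresponds to an unlimited clip level, while a clip of 1 reduces to fully non-redundant generation:

```python
import random

def tests_to_cover(num_cases=576, clip=3, seed=0):
    """Generate random tests, but retire any case already hit `clip` times;
    return how many tests it took to reach 100% coverage."""
    rng = random.Random(seed)
    hits = [0] * num_cases
    tests = 0
    while min(hits) == 0:                            # until full coverage
        eligible = [c for c in range(num_cases) if hits[c] < clip]
        hits[rng.choice(eligible)] += 1
        tests += 1
    return tests

for clip in (1, 2, 4):
    print(f"clip={clip}: 100% of 576 cases in {tests_to_cover(clip=clip)} tests")
```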

iTBA generates tons of tests, but lets you decide how much to control them.  If you’re interested in learning more about how iTBA can help you achieve your functional verification goals faster, you might consider attending the Tech Design Forum in Santa Clara on September 8th.  There’s a track dedicated to achieving coverage closure.  Check out this URL for more information:  http://www.techdesignforums.com/events/santa-clara/event.cfm

¹ By the way, if achieving your test goals is predicated on certain specific sequences of sequences, our experts can show you how to create an iTBA graph that will achieve those goals much faster than relying on redundancy.  But that’s another story for another time.


28 June, 2011

iTBA Introduction

If you’ve been to DAC or DVCon during the past couple of years, you’ve probably at least heard of something new called “Intelligent Testbench Automation”.  Well, it’s actually not all that new, as the underlying principles have been used in compiler testing and some types of software testing for the past three decades, but its application to electronic design verification is certainly new, and exciting.

The value proposition of iTBA is fairly simple and straightforward.  Just like constrained random testing, iTBA generates tons of stimuli for functional verification.  But iTBA is so efficient that it achieves the targeted functional coverage one to two orders of magnitude faster than CRT.  So what would you do if you could achieve your current simulation goals 10X to 100X faster?

You could finish your verification earlier, especially when it seems like you’re getting new IP drops every day.  I’ve seen IP verification teams reduce their simulations from several days on several CPUs (using CRT) to a couple of hours on a single CPU (with iTBA).  No longer can IP designers send RTL revisions faster than we can verify them.

But for me, I’d ultimately use the time savings to expand my testing goals.  Today’s designs are so complex that typically only a fraction of their functionality gets tested anyway.  And one of the biggest challenges is trading off what functionality to test, and what not to test.  (We’ll show you how iTBA can help you here, in a future blog post.)  Well, if I can achieve my initial target coverage in one-tenth of the time, then I’d use at least part of the time saving to expand my coverage, and go after some of the functionality that originally I didn’t think I’d have time to test.

On-Line Illustration

If you check out this link – http://www.verificationacademy.com/infact – you’ll find an interactive example of a side-by-side comparison of constrained random testing and intelligent testbench automation.  It’s an Adobe Flash demonstration, and it lets you run your own simulations.  Try it; it’s fun.

The example shows a target coverage of 576 equally weighted test cases in a 24×24 grid.  You can adjust the dials at the top for the number and speed of simulators to use, and then click on “start”.  Both the CRT and iTBA simulations run in parallel at the same speed, cycle for cycle, and each time a new test case is simulated, the number in its cell is incremented by one and the color of the cell changes.  Notice that the iTBA simulation on the right achieves 100% coverage very quickly, covering every unique test case efficiently.  The CRT simulation on the left eventually achieves 100% coverage too, but slowly and painfully, with much unwanted redundancy.  You can also click on “show chart” to see a coverage chart of your simulation.

Math Facts

You probably knew that random testing repeats, but you probably didn’t know by how much.  This is the classic coupon collector’s problem: the expected number of tests “T” that must be generated to achieve 100% coverage of “N” equally weighted cases is T = N × H(N) ≈ N (ln N + γ), where H(N) is the N-th harmonic number and γ ≈ 0.5772 is the Euler–Mascheroni constant (the derivation is sketched below).  For N = 576 equally weighted cases, the random simulation will require an average of about 3,993 tests to achieve our goal.  Sometimes it’s more, sometimes it’s less, given the unpredictability of random testing.  In the meantime the iTBA simulation achieves 100% coverage in just 576 tests, a reduction of about 86%.
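
For the curious, the derivation is one line: after k − 1 distinct cases have been covered, the wait for the next new case is geometric, and summing those waits gives the harmonic series.

```latex
% Expected random tests T to cover all N equally weighted cases:
% the wait for the k-th new case succeeds with probability (N-k+1)/N,
% so its expected length is N/(N-k+1).
\[
  \mathbb{E}[T] \;=\; \sum_{k=1}^{N} \frac{N}{N-k+1}
              \;=\; N \, H_N \;\approx\; N \left( \ln N + \gamma \right),
  \qquad \gamma \approx 0.5772.
\]
% For N = 576: 576 (ln 576 + 0.5772) ~ 3993 tests, versus exactly 576 for iTBA.
```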

Experiment at Home

You probably already have an excellent six-sided demonstration vehicle somewhere at home.  Try rolling a single die repeatedly, simulating a random test generator.  How many times does it take you to “cover” all six unique test cases?  T = N × H(N) says it should take about 15 rolls on average (6 × (1 + 1/2 + … + 1/6) ≈ 14.7).  You might get lucky and finish in 9 or 10.  But chances are you’ll still be rolling at 15, 16, 17, or even more.  If you used iTBA to generate the test cases, it would take you six rolls, and you’d be done.  Now in this example, getting to coverage roughly two and a half times as fast may not be that exciting to you.  But if you extrapolate these results to your RTL design’s test plan, the savings can become quite interesting.
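
If your dice have gone missing, a few lines of Python (a quick illustrative sketch, not any Mentor tool) will run the same experiment a hundred thousand times:

```python
import random

# Estimate how many random rolls it takes to see all six faces of a die
# at least once, averaged over many trials.
def rolls_to_cover(sides=6):
    seen, rolls = set(), 0
    while len(seen) < sides:
        seen.add(random.randint(1, sides))
        rolls += 1
    return rolls

trials = [rolls_to_cover() for _ in range(100_000)]
print(f"average rolls to cover all 6 faces: {sum(trials) / len(trials):.1f}")
# Prints roughly 14.7, matching 6 * H(6) from the formula above;
# a directed generator needs exactly 6 rolls.
```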

Quiz Question

So here’s a quick question for you.  What’s the minimum number of unique functional test cases needed to realize at least a 10X gain in efficiency with iTBA compared to what you could get with CRT?  (Hint – You can figure it out with three taps on a scientific calculator.)  It’s probably a pretty small number compared to the number of functions your design can actually perform, meaning that there’s at least a 10X improvement in testing efficiency awaiting you with iTBA.
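
And if you’d rather check your answer than reach for the calculator (spoiler warning), the break-even point falls straight out of the formula above; shown here under both the simple N ln N approximation and the harmonic-number version:

```python
import math

# A 10X gain means N * H(N) >= 10 * N, i.e. H(N) >= 10, which ln N
# approximates, so solve ln N = 10 (or ln N + gamma = 10).
print(math.exp(10))               # ~22,026 cases using T = N ln N
print(math.exp(10 - 0.5772))      # ~12,367 cases using T = N (ln N + gamma)
```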

More Information

Hopefully at this point you’re at least a little bit interested.  Or, like some others, you may be skeptical.  Could this technology really offer a 10X improvement in functional verification?  Check out the Verification Academy at this site – http://www.verificationacademy.com/course-modules/dynamic-verification/intelligent-testbench-automation – to see the first academy sessions that will introduce you to Intelligent Testbench Automation.  Or you can even Google “intelligent testbench automation” and see what you find.  Thanks for reading . . .

