Posts Tagged ‘testbench’

30 October, 2013

MENTOR GRAPHICS AT ARM TECHCON

This week ARM® TechCon® 2013 is being held at the Santa Clara Convention Center from Tuesday October 29 through Thursday October 31st, but don’t worry, there’s nothing to be scared about.  The theme is “Where Intelligence Counts”, and in fact as a platinum sponsor of the event, Mentor Graphics is excited to present no less than ten technical and training sessions about using intelligent technology to design and verify ARM-based designs.

My personal favorite is scheduled for Halloween Day at 1:30pm, where I’ll tell you about a trick that Altera used to shave several months off their schedule, while verifying the functionality and performance of an ARM AXI™ fabric interconnect subsystem.  And the real treat is that they achieved first silicon success as well.  In keeping with the event’s theme, they used something called “intelligent” testbench automation.

And whether you’re designing multi-core designs with AXI fabrics, wireless designs with AMBA® 4 ACE™ extensions, or even enterprise computing systems with ARM’s latest AMBA® 5 CHI™ architecture, these sessions show you how to take advantage of the very latest simulation and formal technology to verify SoC connectivity, ensure correct interconnect functional operation, and even analyze on-chip network performance.

On Tuesday at 10:30am, Gordon Allan described how an intelligent performance analysis solution can leverage the power of an SQL database to analyze and verify interconnect performance in ways that traditional verification techniques cannot.  He showed a wide range of dynamic visual representations produced by SoC regressions that can be quickly and easily manipulated by engineers to verify performance and avoid expensive overdesign.

Right after Gordon’s session, Ping Yeung discussed using intelligent formal verification to automate SoC connectivity, overcoming observability and controllability challenges faced by simulation-only solutions.  Formal verification can examine all possible scenarios exhaustively, verifying on-chip bus connectivity, pin multiplexing of constrained interfaces, connectivity of clock and reset signals, as well as power control and scan test signal connectivity.
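
For readers who haven’t seen this style of checking before, here is a minimal sketch (with entirely hypothetical signal names, not taken from the session) of the kind of connectivity properties a formal tool can prove exhaustively: one for pin multiplexing and one for reset connectivity.  A checker module like this would typically be bound to the SoC top level and handed to the formal engine.

// Hypothetical connectivity checker; signal names are illustrative only.
module soc_connectivity_checks (
  input logic       clk,
  input logic       chip_rst_n,   // chip-level reset
  input logic       ip_rst_n,     // reset as seen by an IP block
  input logic [1:0] pad_mux_sel,  // pin-mux select for a shared pad
  input logic       uart_tx,      // UART block TX output
  input logic       pad_out       // chip-level pad
);
  // Pin-mux connectivity: when the mux selects UART mode, the pad follows TX.
  a_uart_tx_connected: assert property (
    @(posedge clk) (pad_mux_sel == 2'b01) |-> (pad_out == uart_tx)
  );

  // Reset connectivity: asserting the chip-level reset must assert the IP reset.
  a_reset_connected: assert property (
    @(posedge clk) (!chip_rst_n) |-> (!ip_rst_n)
  );
endmodule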

On Wednesday, Mark Peryer shows how to verify AMBA interconnect performance using intelligent database analysis and intelligent testbench automation for traffic scenario generation.  These techniques enable automatic testbench instrumentation for configurable ARM-based interconnect subsystems, as well as highly efficient generation of dense, medium, sparse, and varied bus traffic that covers even the most difficult-to-achieve corner-case conditions.

And finally also on Halloween, Andy Meyer offers an intelligent workshop for those that are designing high performance systems with hierarchical and distributed caches, using either ARM’s AMBA 5 CHI architecture or ARM’s AMBA 4 ACE architecture.  He’ll cover topics including how caching works, how to improve caching performance, and how to verify cache coherency.

For more information about these sessions, be sure to visit the ARM TechCon program website.  Or if you miss any of them, and would like to learn about how this intelligent technology can help you verify your ARM designs, don’t be afraid to email me at mark_olen@mentor.com.   Happy Halloween!


19 September, 2013

It’s hard for me to believe that SystemVerilog 3.1 was released just over 10 years ago. The 3.1 version added Object-Oriented Programming features for testbench development to a language predominantly used for RTL design synthesis. Making debug easier was one of the driving forces in unifying testbench and design features into a single language. The semantics for evaluating expressions and executing statements would be the same in the testbench and design. Setting breakpoints and stepping through the code would be seamless. That should have made it easier for either a verification or a design engineer to understand a complete verification environment. Or maybe it would enable either one to at least understand enough of the environment to isolate a particular problem.

Ten years later, I have yet to see that promise fulfilled. Most design engineers still debug their simulations the same way they debug in the lab: they look at waveforms. During simulation, they rarely look at the design source code, and certainly never look at the testbench code (unless it’s just basic pin wiggling like a waveform). Verification engineers are not much different. They rely on waveform debugging because that is what they were brought up on, and many do not even realize source-level debugging is available to them. However, the test/testbench is more like a piece of software than a hardware description, and there are many things about a modern testbench that are difficult to display in a waveform (e.g. call stacks, local variables, and random constraints). And methodologies like the UVM add many layers of source-level complexity that most users do not have the time to wade through.

Next week I will be presenting as part of an Industry Special Session during the Forum on Specification & Design Languages (FDL, September 24-26, 2013) that will discuss these issues and try to get more involvement from the academic and user communities to help resolve them. Was combining constructs from many languages into one a success? Can tools provide representations of source-level constructs in an easier graphical form? We hopefully will not need another decade.


19 August, 2013

Verification Techniques & Technologies Adoption Trends

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for background on the study, click here).

In my previous blog (Part 9 click here), I focused on some of the 2012 Wilson Research Group findings related to design and verification language and library trends. In this blog, I present verification techniques and technologies adoption trends, as identified by the 2012 Wilson Research Group study.

An interesting trend we are starting to see is that the electronic industry is maturing its functional verification processes, whether projects are targeting their designs at IC/ASIC or FPGA implementations. This blog provides data to support this claim. An interesting question you might ask is, “What is driving this trend?” In some of my earlier blogs (click here for Part 1 and Part 2) I showed that design complexity is increasing in terms of design sizes and number of embedded processors. In addition, I’ve presented trend data that showed an increase in total project time and effort spent in verification (click here for Part 5 and Part 6). My belief is that the industry is being forced to mature its functional verification processes to address increasing complexity and effort.

Simulation Techniques Adoption Trends

Let’s begin by comparing non-FPGA adoption trends related to various simulation techniques from the 2007 Far West Research study (in blue) with the 2012 Wilson Research Group study (in green), as shown in Figure 1.

Figure 1. Simulation-based technique adoption trends for non-FPGA designs

You can see that the study finds the industry increasing its adoption of various functional verification techniques for non-FPGA targeted designs. Clearly the industry is maturing its processes as I previously claimed.

For example, in 2007, the Far West Research Group found that only 48 percent of the industry performed code coverage. This surprised me. After all, HDL-based code coverage is a technology that has been around since the early 1990’s. However, I did informally verify the 2007 results through numerous customer visits and discussions. In 2012, we see that the industry adoption of code coverage has increased to 70 percent.

In 2007, the Far West Research Group study found that 37 percent of the industry had adopted assertions for use in simulation. In 2012, we find that industry adoption of assertions had increased to 63 percent. I believe that the maturing of the various assertion language standards has contributed to this increased adoption.

In 2007, the Far West Research Group study found that 40 percent of the industry had adopted functional coverage for use in simulation. In 2012, the industry adoption of functional coverage had increased to 66 percent. Part of this increase in functional coverage adoption has been driven by the increased adoption of constrained-random simulation, since you really can’t effectively do constrained-random simulation without doing functional coverage.

Now let’s compare FPGA adoption trends related to various simulation techniques from the 2010 Far West Research study (in pink) with the 2012 Wilson Research Group study (in red), as shown in Figure 2.

Figure 2. Simulation-based technique adoption trends for FPGA designs

Again, you can clearly see that the industry is increasing its adoption of various functional verification techniques for FPGA targeted designs. This past year I have spent a significant amount of time in discussions with FPGA project managers around the world. During these discussions, most managers mention the drive to improve the verification process within their projects due to the rising complexity of this class of designs. The Wilson Research Group data supports these claims.

In fact, Figure 3 illustrates this maturing trend in the FPGA space, where we saw a 15 percent increase in the adoption of RTL simulation and an 8.5 percent increase in the adoption of code coverage. For complex FPGA designs, the traditional approach of “burn and churn” and debug in the lab is no longer a viable option. Nonetheless, it is still somewhat alarming that 31 percent of the FPGA study participants work on projects that perform no RTL simulation.

Figure 3. FPGA projects maturing their verification processes

Signoff Criteria Trends

We saw earlier in this blog the increased adoption of coverage techniques in the industry. Coverage has become a major component of a project’s verification signoff criteria. In Figure 4, we see how coverage has increased in importance in verification signoff criteria within the past five years, while other decision attributes have declined in terms of importance.

Figure 4. Non-FPGA functional verification signoff criteria trends

We see the same trends for FPGA designs, as shown in Figure 5.

Figure 5. FPGA functional verification signoff criteria trends

In my next blog (click here), I plan to continue the discussion related to adoption of various verification technologies and techniques as identified by the 2012 Wilson Research Group study.


5 August, 2013

Language and Library Trends

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 7 click here), I focused on some of the 2012 Wilson Research Group findings related to testbench characteristics and simulation strategies. In this blog, I present design and verification language trends, as identified by the Wilson Research Group study.

You might note that for some of the language and library data I present, the percentage sums to more than one hundred percent. The reason for this is that some participants’ projects use multiple languages.

RTL Design Languages

Let’s begin by examining the languages used for RTL design. Figure 1 shows the trends in terms of languages used for design, by comparing the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green), as well as the projected design language adoption trends within the next twelve months (in purple) as identified by the study participants. Note that design language adoption is declining for most of the languages, with the exception of SystemVerilog, whose adoption continues to increase.

Also, it’s important to note that this study focused on languages used for RTL design. We have conducted a few informal studies related to languages used for architectural modeling—and it’s not too big of a surprise that we see increased adoption of C/C++ and SystemC in that space. However, since those studies have (thus far) been informal and not as rigorously executed as the Wilson Research Group study, I have decided to withhold that data until a more formal blind study can be executed related to architectural modeling and virtual prototyping.

Figure 1. Trends in languages used for Non-FPGA design

Let’s now look at the languages used specifically for FPGA RTL design. Figure 2 shows the trends in terms of languages used for FPGA design, by comparing the 2012 Wilson Research Group study (in red) with the projected design language adoption trends within the next twelve months (in purple).

Figure 2. Languages used for FPGA design

It’s not too big of a surprise that VHDL is the predominant language used for FPGA RTL design, although we are starting to see increased interest in SystemVerilog.

Verification Languages

Next, let’s look at the languages used to verify Non-FPGA designs (that is, languages used to create simulation testbenches). Figure 3 shows the trends in terms of languages used to create simulation testbenches by comparing the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green).

Figure 3. Trends in languages used in verification to create Non-FPGA simulation testbenches

The study revealed that verification language adoption is declining for most of the languages, with the exception of SystemVerilog, whose adoption is increasing. In fact, SystemVerilog adoption increased by 8.3 percent between 2010 and 2012.

Figure 4 provides a different analysis of the data by partitioning the projects by design size, and then calculating the adoption of SystemVerilog for creating testbenches by size. The design size partitions are represented as: less than 5M gates, 5M to 20M gates, and greater than 20M gates. Obviously, we find that the larger the design size, the greater the adoption of SystemVerilog for creating testbenches. Yet, probably the most interesting observation we can make from examining Figure 4 is related to smaller designs that are less than 5M gates. Here we see that 58.8 percent of the industry has adopted SystemVerilog for verification. In other words, it is safe to say that SystemVerilog for verification has become mainstream today and not just limited to early adopters or leading-edge design projects.

Figure 4. SystemVerilog (for verification) adoption by design size

Let’s now look at the languages used to verify FPGA designs (that is, languages used to create FPGA simulation testbenches). Figure 5 shows the trends by comparing the 2012 Wilson Research Group study (in red) with the projected verification language adoption trends within the next twelve months (in purple).

Figure 5. Trends in languages used in verification to create FPGA simulation testbenches

In my next blog (click here), I’ll continue the discussion on design and verification language trends as revealed by the 2012 Wilson Research Group Functional Verification Study.


29 July, 2013

Testbench Characteristics and Simulation Strategies

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for background on the study, click here).

In my previous blog (click here), I focused on the controversial topic of effort spent in verification. In this blog, I focus on some of the 2012 Wilson Research Group findings related to testbench characteristics and simulation strategies. Although I am shifting the focus away from verification effort, I believe that the data I present in this blog is related to my previous blog and really needs to be considered when calculating effort.

Time Spent in Full-Chip versus Subsystem-Level Simulation

Let’s begin by looking at Figure 1, which shows the percentage of time (on average) that a project spends in full-chip or SoC integration-level verification versus subsystem and IP block-level verification. The mean time performing full chip verification is represented by the dark green bar, while the mean time performing subsystem verification is represented by the light green bar. Keep in mind that this graph represents the industry average. Some projects spend more time in full-chip verification, while other projects spend less time.

Figure 1. Mean time spent in full chip versus subsystem simulation

Number of Tests Created to Verify the Design in Simulation

Next, let’s look at Figure 2, which shows the number of tests various projects create to verify their designs using simulation. The graph represents the findings from the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). Note that the curves look remarkably similar over the past five years. The median number of tests created to verify the design is within the range of (>200 – 500) tests. It is interesting to see a sharp percentage increase in the number of participants who claimed that fewer tests (1 – 100) were created to verify a design in 2012. It’s hard to determine exactly why this was the case; perhaps it is due to the increased use of constrained-random techniques (which I will talk about shortly). Or perhaps there has been an increased use of legacy tests. The study was not designed to go deeper into this issue and try to uncover the root cause. This is something I intend to study informally over the next year through discussions with various industry thought leaders.

Figure 2. Number of tests created to verify a design in simulation

Percentage of Directed Tests versus Constrained-Random Tests

Now let’s compare the percentage of directed testing that is performed on a project to the percentage of constrained-random testing. Of course, in reality there is a wide range in the amount of directed and constrained-random testing that is actually performed on various projects. For example, some projects spend all of their time doing directed testing, while other projects combine techniques and spend part of their time doing directed testing—and the other part doing constrained-random. For our comparison, we will look at the industry average, as shown in Figure 3. The average percentage of tests that were directed is represented by the dark green bar, while the average percentage of tests that are constrained-random is represented by the light green bar.

Figure 3. Mean directed versus constrained-random testing performed on a project

Notice how the percentage mix of directed versus constrained-random testing has changed over the past two years. Today we see that, on average, a project performs more constrained-random simulation. In fact, between 2010 and 2012 there has been a 39 percent increase in the use of constrained-random simulation on a project. One driving force behind this increase has been the maturing and acceptance of both the SystemVerilog and UVM standards, since the two standards facilitate an easier implementation of a constrained-random testbench. In addition, today we find that an entire ecosystem has emerged around both the SystemVerilog and UVM standards. This ecosystem consists of tools, verification IP, and industry expertise, such as consulting and training.
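
For readers who have not yet written constrained-random stimulus, the SystemVerilog sketch below shows the basic idea in its simplest form (field names are purely illustrative, and the UVM machinery is omitted): legal ranges are declared as constraints, and the solver picks the individual values each time randomize() is called.

// Minimal constrained-random transaction; names are illustrative only.
class bus_txn;
  rand bit [31:0] addr;
  rand bit [3:0]  burst_len;
  rand bit        is_write;
  // Declare what a legal transaction looks like; the solver does the rest.
  constraint c_addr  { addr inside {[32'h0000_0000 : 32'h0000_FFFF]}; }
  constraint c_burst { burst_len inside {[1:8]}; }
endclass

module tb;
  initial begin
    bus_txn t = new();
    repeat (5) begin
      void'(t.randomize());   // each call yields a new legal, random transaction
      $display("addr=%h burst=%0d write=%0b", t.addr, t.burst_len, t.is_write);
    end
  end
endmodule

In a UVM environment the same idea is wrapped in sequence items and sequences, which is one reason the two standards are so often adopted together.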

Nonetheless, even with the increased adoption of constrained-random simulation on a project, you will find that constrained-random simulation is generally only performed at the IP block or subsystem level. For the full SoC level simulation, directed testing and processor-driven verification are the prominent simulation-based techniques in use today.

Simulation Regression Time

Now let’s look at the time that various projects spend in a simulation regression. Figure 4 shows the trends in terms of simulation regression time by comparing the 2007 Far West Research study (in gray) with the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). There really hasn’t been a significant change in the time spent in a simulation regression within the past three years. You will find that some teams spend days or even weeks in a regression. Yet today, the industry median is between 8 and 16 hours, and for many projects, there has been a decrease in regression time over the past few years. Of course, this is another example of where deeper analysis is required to truly understand what is going on. To begin with, these questions should probably be refined to better understand simulation times related to IP versus SoC integration-level regressions. We will likely do that in future studies—with the understanding that we will not be able to show trends (or at least not initially).

Figure 4. Simulation regression time trends

In my next blog (click here), I’ll focus on design and verification language trends, as identified by the 2012 Wilson Research Group study.


28 June, 2012

Graph-Based Intelligent Testbench Automation

While intelligent testbench automation is still reasonably new when measured in EDA years, this graph-based verification technology is being adopted by more and more verification teams every day.  And the interest is global.  Verification teams from Europe, North America, and the Pacific Rim are now using iTBA to help them verify their newest electronic designs in less time and with fewer resources.  (If you haven’t adopted it yet, your competitors probably have.)  If you have yet to learn how this new technology can help you achieve higher levels of verification, despite increasing design complexity, I’d suggest you check out a recent article in the June 2012 issue of Verification Horizons titled “Is Intelligent Testbench Automation For You?”  The article focuses on where iTBA is best applied and where it will help you most by producing optimal results, and how design applications with a large verification space, functionally oriented coverage goals, and unbalanced conditions can often experience a 100X gain in coverage closure acceleration.  For more detail about these and other considerations, you’ll have to read the article.

Fitzpatrick’s Corollary

And while you’re there, you might also notice that the entire June 2012 issue of Verification Horizons is devoted to helping you achieve the highest levels of coverage as efficiently as possible.  Editor and fellow verification technologist Tom Fitzpatrick succinctly adapts Murphy’s Law to verification, writing “If It Isn’t Covered, It Doesn’t Work”.   And any experienced verification engineer (or manager) knows just how true this is, making it critical that we thoughtfully prioritize our verification goals, and achieve them as quickly and efficiently as possible.  The June 2012 issue offers nine high quality articles, with a particular focus on coverage.

Berg’s Proof

Another proof that iTBA is catching on globally is the upcoming TVS DVClub event being held next Monday, 2 July 2012, in Bristol, Cambridge, and Grenoble.  The title of the event is “Graph-Based Verification”, and three industry experts will discuss different ways you can take advantage of what graph-based intelligent testbench automation has to offer.  My colleague and fellow verification technologist Staffan Berg leads off the event with a proof of his own, as he will present how graph-based iTBA can significantly shorten your time-to-coverage.  Staffan will show you how to use graph-based verification to define your stimulus space and coverage goals, by highlighting examples from some of the verification teams that have already adopted this technology, as I mentioned above.  He’ll also show how you can introduce iTBA into your existing verification environment, so you can realize these benefits without disrupting your existing process.  I have already registered and plan to attend the TVS DVClub event, but I’ll have to do some adapting of my own as the event runs from 11:30am to 2:00pm BST in the UK.  But I’ve seen Staffan present before, and both he and intelligent testbench automation are worth getting up early for.  Hope to see you there, remotely speaking.


13 December, 2011

Instant Replay Offers Multiple Views at Any Speed

If you’ve watched any professional sporting event on television lately, you’ve seen the pressure put on referees and umpires.  They have to make split-second decisions in real-time, having viewed ultra-high-speed action just a single time.  But watching at home on television, we get the luxury of viewing multiple replays of events in question in high-definition super-slow-motion, one frame at a time, and even in reverse.  We also get to see many different views of these controversial events, from the front, the back, the side, up close, or far away.  Sometimes it seems there must be twenty different cameras at every sporting event.

Wouldn’t it be nice if you could apply this same principle to your SoC level simulations?  What if you had instant replay from multiple viewing angles in your functional verification toolbox?  It turns out that such a technology indeed exists, and it’s called “Codelink Replay”.

Codelink Replay enables verification engineers to use instant replay with multiple viewing angles to quickly and accurately debug even the most complex SoC level simulation failures.  This is becoming increasingly important, as we see in Harry Foster’s blog series about the 2010 Wilson Research Group Functional Verification Study that over half of all new design starts now contain multiple embedded processors.  If you’re responsible for verifying a design with multiple embedded cores such as ARM’s new Cortex A15 and Cortex A7 processors, this technology will have a dramatic impact for you.

Multi-Core SoC Design Verification

Multi-core designs present a whole new level of verification challenges.  Achieving functional coverage of your IP blocks at the RTL level has become merely a pre-requisite now – as they say, “necessary but not sufficient”.  Welcome to the world of SoC level verification, where you use your design’s software as a testbench.  After all, a testbench’s role is to mimic the design’s target environment so as to test its functionality; how better to accomplish this than to execute the design’s software against its hardware, albeit during simulation?

Some verification teams have already dabbled in this world.   Perhaps you’ve written a handful of tests in C or assembly code, loaded them into memory, initialized your processor, and executed them.  This is indeed the best way to verify SoC level functionality including power optimization management, clocking domain control, bus traffic arbitration schemes, driver-to-peripheral compatibility, and more, as none of these aspects of an SoC design can be appropriately verified at the RTL IP block level.

However, imagine running a software testbench program only to see that the processor stopped executing code two hours into the simulation.  What do you do next?  Debugging “software as a testbench” simulation can be daunting.  Especially when the software developers say “the software is good”, and the hardware designers say “the hardware is fine”.  Until recently, you could count on weeks to debug these types of failures.  And the problem is compounded with today’s SoC designs with multiple processors running software test programs from memory.

This is where Codelink Replay comes in.  It enables you to replay your simulation in slow motion or fast forward, while observing many different views including hardware views (waveforms, CPU register values, program counter, call stack, bus transactions, and four-state logic) and software views (memory, source code, decompiled code, variable values, and output) – all remaining in perfect synchrony, whether you’re playing forward or backward, single-step, slow-motion, or fast speed.  So when your simulation fails, just start at that point in time, and replay backwards to the root of the problem.  It’s non-invasive.  It doesn’t require any modifications to your design or to your tests.

Debugging SoC Designs Quickly and Accurately

So if you’re under pressure to make fast and accurate decisions when your SoC level tests fail, you can relate to the challenges faced by professional sports referees and umpires.  But with Codelink Replay, you can be assured that there are about 20 different virtual “cameras” tracing and logging your processors during simulation, giving you the same instant replay benefit we get when we watch sporting events on television.  If you’re interested to learn more about this new technology, check out the web seminar at the URL below, which introduces Codelink Replay and shows how it supports the entire ARM family of processors, including even the latest Cortex A-Series, Cortex R-Series, and Cortex M-Series.

http://www.mentor.com/products/fv/multimedia/verifying-complex-soc-designs-with-questa-codelink


26 July, 2011

Who Doesn’t Like Faster?

In my last blog post I introduced new technology called Intelligent Testbench Automation (“iTBA”).  It’s generating lots of interest in the industry because, just like constrained random testing (“CRT”), it can generate tons of tests for functional verification.  But it has unique efficiencies that allow you to achieve coverage 10X to 100X faster.  And who doesn’t like faster?  Well, since the last post I’ve received many questions of interest from readers, but one seems to stick out enough to “cover” it here in a follow-up post.

Several readers commented that they like the concept of randomness, because it has the ability to generate sequences of sequences; perhaps even a single sequence executed multiple times in a row. 1 And they were willing to suffer some extra redundancy as an unfortunate but necessary trade-off.

Interactive Example

While this benefit of random testing is understandable, there’s no need to worry as iTBA has you covered here.  If you checked out this link - http://www.verificationacademy.com/infact2 - you found an interactive example of a side by side comparison of CRT and iTBA.  The intent of the example was to show comparisons of what happens when you use CRT to generate tests randomly versus when you use iTBA to generate tests without redundancy.

However in a real application of iTBA, it’s equally likely that you’d manage your redundancy, not necessarily eliminate it completely.  We’ve improved the on-line illustration now to include two (of the many) additional features of iTBA.

Coverage First – Then Random

One is the ability to run a simulation with high coverage non-redundant tests first, followed immediately by random tests.  Try it again, but this time check the box labeled “Run after all coverage is met”.  What you’ll find is that iTBA achieves your targeted coverage in the first 576 tests, at which time CRT will have achieved somewhere around 50% coverage at best.  But notice that iTBA doesn’t stop at 100% coverage.  It continues on, generating tests randomly.  By the time CRT gets to about 70% coverage, iTBA has achieved 100%, and has also generated scores of additional tests randomly.  You can have the best of both worlds.  You can click on the “suspend”, “resume”, and “show chart” buttons during the simulation to see the progress of each.

Interleave Coverage and Random

Two is the ability to run a simulation randomly, but to clip the redundancy rather than eliminate it.  Move the “inFact coverage goal” bar to set the clip level (try 2 or 3 or 4), and restart the simulation.  Now you’ll see iTBA generating random tests, but managing the redundancy to whatever level you chose.  The result is simulation with a managed amount of redundancy that still achieves 100% of your target coverage, including every corner-case.

iTBA generates tons of tests, but lets you decide how much to control them.  If you’re interested to learn more about how iTBA can help you achieve your functional verification goals faster, you might consider attending the Tech Design Forum in Santa Clara on September 8th.  There’s a track dedicated to achieving coverage closure.  Check out this URL for more information about it.  http://www.techdesignforums.com/events/santa-clara/event.cfm

1 – By the way, if achieving your test goals is predicated on certain specific sequences of sequences, our experts can show you how to create an iTBA graph that will achieve those goals much faster than relying on redundancy.  But that’s another story for another time.


28 June, 2011

iTBA Introduction

If you’ve been to DAC or DVCon during the past couple of years, you’ve probably at least heard of something new called “Intelligent Testbench Automation”.  Well, it’s actually not really all that new, as the underlying principles have been used in compiler testing and some types of software testing for the past three decades, but its application to electronic design verification is certainly new, and exciting.

The value proposition of iTBA is fairly simple and straightforward.  Just like constrained random testing, iTBA generates tons of stimuli for functional verification.  But iTBA is so efficient, that it achieves the targeted functional coverage one to two orders of magnitude faster than CRT.  So what would you do if you could achieve your current simulation goals 10X to 100X faster?

You could finish your verification earlier, especially when it seems like you’re getting new IP drops every day.  I’ve seen IP verification teams reduce their simulations from several days on several CPUs (using CRT) to a couple of hours on a single CPU (with iTBA).  No longer can IP designers send RTL revisions faster than we can verify them.

But for me, I’d ultimately use the time savings to expand my testing goals.  Today’s designs are so complex that typically only a fraction of their functionality gets tested anyway.  And one of the biggest challenges is trading off what functionality to test, and what not to test.  (We’ll show you how iTBA can help you here, in a future blog post.)  Well, if I can achieve my initial target coverage in one-tenth of the time, then I’d use at least part of the time saving to expand my coverage, and go after some of the functionality that originally I didn’t think I’d have time to test.

On Line Illustration

If you check out this link – http://www.verificationacademy.com/infact  – you’ll find an interactive example of a side by side comparison of constrained random testing and intelligent testbench automation.  It’s an Adobe Flash Demonstration, and it lets you run your own simulations.  Try it, it’s fun.

The example shows a target coverage of 576 equally weighted test cases in a 24×24 grid.  You can adjust the dials at the top for the number and speed of simulators to use, and then click on “start”.  Both CRT and iTBA simulations run in parallel at the same speed, cycle for cycle, and each time a new test case is simulated the number in its cell is incremented by one, and the color of the cell changes.  Notice that the iTBA simulation on the right achieves 100% coverage very quickly, covering every unique test case efficiently.  But notice that the CRT simulation on the left eventually achieves 100% coverage painfully and slowly, with much unwanted redundancy.  You can also click on “show chart” to see a coverage chart of your simulation.

Math Facts

You probably knew that random testing repeats, but you probably didn’t know by how much.  It turns out that the redundancy factor is expressed in the equation “T = N ln N + C”, where “T” is the number of tests that must be generated to achieve 100% coverage of “N” different cases, and “C” is a small constant.  So using the natural logarithm of 576, we can calculate that, given equally weighted cases, the random simulation will require an average of about 3661 tests to achieve our goal.  Sometimes it’s more, sometimes it’s less, given the unpredictability of random testing.  In the meantime the iTBA simulation achieves 100% coverage in just 576 tests, a reduction of 84%.
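
For readers who want to see the intermediate step, the arithmetic (treating the small constant C as negligible) works out like this:

T ≈ N ln N = 576 × ln(576) ≈ 576 × 6.36 ≈ 3661 tests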

Experiment at Home

You probably already have an excellent six-sided demonstration vehicle somewhere at home.  Try rolling a single die repeatedly, simulating a random test generator.  How many times does it take you to “cover” all six unique test cases?  T = N ln N + C says it should take about 11 times or more.  You might get lucky and hit 8, 9, or 10.  But chances are you’ll still be rolling at 11, 12, 13, or even more.  If you used iTBA to generate the test cases, it would take you six rolls, and you’d be done.  Now in this example, getting to coverage twice as fast may not be that exciting to you.  But if you extrapolate these results to your RTL design’s test plan, the savings can become quite interesting.
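
If you would rather let a simulator do the rolling, here is a small SystemVerilog sketch of the same experiment (any IEEE 1800 simulator should do): it randomizes a die face and counts rolls until a covergroup reports that all six bins have been hit.

// Roll a constrained-random die until functional coverage reaches 100%.
module die_coverage_demo;

  class die_t;
    rand int unsigned face;
    constraint legal { face inside {[1:6]}; }
  endclass

  // One bin per face; 100% instance coverage means every face has come up.
  covergroup faces_cg with function sample(int unsigned f);
    coverpoint f { bins face[] = {[1:6]}; }
  endgroup

  faces_cg cg = new();

  initial begin
    die_t d = new();
    int rolls = 0;
    while (cg.get_inst_coverage() < 100.0) begin
      void'(d.randomize());
      cg.sample(d.face);
      rolls++;
    end
    $display("Covered all six faces after %0d random rolls", rolls);
  end
endmodule

Run it a few times and you will see the roll count bounce around: occasionally you finish in six, seven, or eight rolls, but more often than not you are still rolling at twelve or beyond.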

Quiz Question

So here’s a quick question for you.  What’s the minimum number of unique functional test cases needed to realize at least a 10X gain in efficiency with iTBA compared to what you could get with CRT?  (Hint – You can figure it out with three taps on a scientific calculator.)  It’s probably a pretty small number compared to the number of functions your design can actually perform, meaning that there’s at least a 10X improvement in testing efficiency awaiting you with iTBA.
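
If you want to check your own answer against the formula above, here is one way to set up the arithmetic (a sketch, using the same simplification that CRT needs roughly N ln N tests while non-redundant generation needs N):

T(CRT) / T(iTBA) ≈ (N ln N) / N = ln N ≥ 10, which gives N ≥ e^10 ≈ 22,026 unique test cases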

More Information

Hopefully at this point you’re at least a little bit interested?  Like some others, you may be skeptical at this point.  Could this technology really offer a 10X improvement in functional verification?  Check out the Verification Academy at this site – http://www.verificationacademy.com/course-modules/dynamic-verification/intelligent-testbench-automation – to see the first academy sessions that will introduce you to Intelligent Testbench Automation.  Or you can even Google “intelligent testbench automation”, and see what you find.  Thanks for reading . . .


26 June, 2011

Verification Techniques & Technologies Adoption Trends

This blog is a continuation of a series of blogs, which present the highlights from the 2010 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 8 click here), I focused on some of the 2010 Wilson Research Group findings related to design and verification language trends. In this blog, I present verification techniques and technologies adoption trends, as identified by the 2010 Wilson Research Group study.

One of the claims I made in the prologue to this series of blogs is that we are seeing a trend of increased industry adoption of advanced functional verification techniques, which is supported by the data I present in this blog. An interesting question you might ask is “what is driving this trend?” In some of my earlier blogs (click here for Part 1 and Part 2) I showed an industry trend of increasing design complexity in terms of design sizes and number of embedded processors. In addition, I’ve presented trend data that showed an increase in total project time and effort spent in verification (click here for Part 4). My belief is that the industry is being forced to mature its functional verification process to address increasing complexity and effort.

Simulation Techniques Adoption Trends

Let’s begin by comparing the adoption trends related to various simulation techniques as shown in Figure 1, where the data from the 2007 Far West Research study is shown in blue and the data from the 2010 Wilson Research Group study is shown in green.

Figure 1. Simulation-based technique adoption trends

You can see that the study finds the industry increasing its adoption of various functional verification techniques.

For example, in 2007, the Far West Research Group found that only 48 percent of the industry performed code coverage. This surprised me. After all, HDL-based code coverage is a technology that has been around since the early 1990’s. However, I did informally verify the 2007 results through numerous customer visits and discussions. In 2010, we see that the industry adoption of code coverage has increased to 72 percent.

Now, a side comment: In this blog, I do not plan to discuss either the strengths or weaknesses of the various verification techniques that were studied (such as code coverage, whose strengths and weaknesses have been argued and debated for years)—perhaps in a separate series of future blogs. In this series of blogs, I plan to focus only on the findings from the 2010 Wilson Research Group study.

In 2007, the Far West Research Group study found that 37 percent of the industry had adopted assertions for use in simulation. In 2010, we find that industry adoption of assertions had increased to 72 percent. I believe that the maturing of the various assertion language standards has contributed to this increased adoption.

In 2007, the Far West Research Group study found that 40 percent of the industry had adopted functional coverage for use in simulation. In 2010, the industry adoption of functional coverage had increased to 69 percent. Part of this increase in functional coverage adoption has been driven by the increased adoption of constrained-random simulation, since you really can’t effectively do constrained-random simulation without doing functional coverage.

In fact, we see from the Far West Research Group 2007 study that 41 percent of the industry had adopted constrained-random simulation techniques. In 2010, the industry adoption had increased to 69 percent. I believe that this increase in constrained-random adoption has been driven by the increased adoption of the various base-class library methodologies, as I presented in a previous blog (click here for Part 8).

Formal Property Checking Adoption Trends

Figure 2 shows the trends in terms of formal property checking adoption by comparing the 2007 Far West Research study (in blue) with the 2010 Wilson Research Group study (in green). The industry adoption of formal property checking has increased by an amazing 53 percent in the past three years. Again, this is another data point that supports my claim that the industry is starting to mature its adoption of advanced functional verification techniques.

 

Figure 2. Trends in formal property checking adoption

Another way to analyze the results is to partition a project’s adoption of formal property checking by design size, as shown in Figure 3, where less than 1M gates  is shown in blue, 1M to 20M gates is shown in orange, and greater than 20M gates is shown in red. Obviously, the larger the design, the more effort is generally spent in verification. Hence, it’s not too surprising to see the increased adoption of formal property checking for larger designs.

Figure 3. Trends in formal property checking adoption by design size

Acceleration/Emulation & FPGA Prototyping Adoption Trends

The amount of time spent in a simulation regression is an increasing concern for many projects. Intuitively, we tend to think that the design size influences simulation performance. However, there are two equally important factors that must be considered: the number of tests in the simulation regression suite, and the length of each test in terms of clock cycles.

For example, a project might have a small or moderate-sized design, yet verification of this design requires a long-running test (e.g., a video input stream). Hence, in this example, the simulation regression time is influenced by the number of clock cycles required for the test, and not necessarily the design size itself.

In blog 6 of this series, I presented industry data on the number of tests created to verify a design in simulation (i.e., the regression suite). The findings obviously varied dramatically, from a handful of tests to thousands of tests in a regression suite, depending on the design. In Figure 4 below, I show the findings for a project’s regression time, which also varies dramatically, from short regression times for some projects to multiple days for other projects. The median simulation regression time is about 16 hours in 2010.

Figure 4. Simulation regression time trends

One technique that is often used to speed up simulation regressions (whether due to very long tests or lots of tests) is hardware-assisted acceleration or emulation. In addition, FPGA prototyping, while historically used as a platform for software development, has recently served a role in SoC integration validation.

Figure 5 shows the adoption trend for both HW assisted acceleration/emulation and FPGA prototyping by comparing the 2007 Far West Research study (in blue) with the 2010 Wilson Research Group study (in green). We have seen a 75 percent increase in the adoption of HW assisted acceleration/emulation over the past three years. 

Figure 5. HW assisted acceleration/emulation and FPGA Prototyping trends

I was surprised to see that the adoption of FPGA prototyping did not increase over the past three years, considering that we found an increase in SoC development over the same period. So, I decided to dig deeper into this anomaly.

In Figure 6, you’ll see that I partitioned the HW assisted acceleration/emulation and FPGA prototyping adoption data by design size: less than 1M gates (in blue), 1M to 20M gates (in yellow), and greater than 20M gates (in red). This revealed that the adoption of HW assisted acceleration/emulation continued to increase as design sizes increased. However, the adoption of FPGA prototyping rapidly dropped off as design sizes increased beyond 20M gates.   

Figure 6. Acceleration/emulation & FPGA Prototyping adoption by design size

So, what’s going on? One problem with FPGA prototyping of large designs is that there is an increased engineering effort required to partition designs across multiple FPGAs. In fact, what I have found is that FPGA prototyping of very large designs is often a major engineering effort in itself, and that many projects are seeking alternative solutions to address this problem.

In my next blog, I plan to present the 2010 Wilson Research Group study findings related to various project results in terms of schedule and required spins before production.

