Intelligent Testbench Automation Blog

The intelligent testbench automation (iTBA) blog provides information pertaining to the latest advances in functional verification that focus on accelerating functional coverage closure and generating stimulus more efficiently to enhance verification productivity. Using a graph-based methodology, such as inFact, can help to free up resources so the design team can focus on targeting hard-to-reach corner cases or expanding the tests to cover additional functionality. What it all boils down to is GRAPHS RULE. Follow us as we explore the possibilities iTBA offers.

21 May, 2014

If there were a tool that enabled you to create tests 10x more productively than you can today, would you change your entire verification language and testbench environment tomorrow? Not likely. Despite our love of technology that makes us more productive and efficient, lots of thought and planning goes into making big shifts in verification methodology. Now, what if that same 10x boost in test creation productivity could be obtained by just bolting a new tool onto your existing verification language and environment?

Intelligent Testbench Automation allows just such a boost in verification productivity while making only minor changes to your existing verification environment. In a UVM environment, the Questa inFact technology for iTBA imports and controls existing sequence items and configuration classes to achieve faster coverage closure and more-comprehensive testing. But what about verification environments based on SystemC or VHDL? The good news is that Questa inFact can be bolted onto these environments with just a few lines of code.

Learn more about how Intelligent Testbench Automation integrates into a VHDL testbench environment in this article from the DVCon issue of Verification Horizons.

Do you use a SystemC or VHDL testbench for verification? How are you automating test creation in your verification process?

5 January, 2010

Quite often I visit customers who have a well-established directed test verification environment.  Although they know their company’s long-term success could benefit from an advanced verification methodology, their current verification organization is not ready to make the transition, and they are often too busy with their current projects. The majority of these customers have a Verilog environment, with a minority using either VHDL or SystemC.

As we look closer at these directed test environments, we often find:

  • Many of the directed tests verify specific DUT functions and there are no compelling reasons to change them.
  • Legacy tests need to be supported, so the verification environment must be considered when evaluating new flows or methodologies.
  • A subset of the directed tests is rather complex, exercising multiple DUT functions together.  Such tests often contain complex procedural code that exercises combinations of DUT functions.  These tests can be difficult to write and maintain, but are important because they attempt to exercise the DUT in more typical customer use models.  Sometimes these tests employ some amount of randomness to data or control fields to add variability to the tests.
  • A formalized scheme to measure test coverage is often lacking, so it is difficult to assess how effective the combination of tests is in exercising important DUT functionality.

What options are available to help such customers improve their verification environment? One viable low-impact option to increase their effectiveness and productivity is an intelligent testbench automation tool.  A systematic coverage-driven stimulus generation tool, such as inFact, can adapt to existing methodologies and is a useful step towards the architecture of an advanced verification environment like OVM.  It addresses the more complex directed test architectures that target combinations of DUT functions in a natural way, and does so by enabling re-use of significant portions of the existing directed test environment.  It includes a formalized scheme for measuring stimulus coverage, thereby addressing a significant shortcoming of many directed test environments.

So what is it about intelligent testbench automation that makes this possible?

Consider a common directed test code structure having three nested for loops, each having a range of values.  This structure generates all for-loop index values in combination, but does so in a fixed ordering like 111, 112, 113, 121, etc.  If test adjacency ordering were important, this for-loop implementation would be insufficient, as it would not detect a bug triggered by test adjacencies like 312, 111, and many others.  A rather complex directed test would be needed to generate these cases, either in random order or with all adjacency combinations expressed.
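
Here is a minimal, compilable sketch of that structure (the module name and loop bounds are invented for illustration); it walks every index combination, but always in the same fixed order:

    module nested_loop_test;
      // Three nested loops walk every index combination, always in the same order:
      // 111, 112, 113, 121, ... so adjacencies like 312 followed by 111 never occur.
      initial begin
        for (int i = 1; i <= 3; i++)
          for (int j = 1; j <= 3; j++)
            for (int k = 1; k <= 3; k++)
              $display("stimulus case %0d%0d%0d", i, j, k);
      end
    endmodule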

Next, consider a more complex directed test having nested loops with conditional statements selecting different code branches depending on previously selected stimulus options, DUT responses, or both.  This code may also use calls to random() functions to add to the number of stimulus cases generated, increasing test variability.   Tasks and functions are typically called from within the procedural code to implement the lower-level functionality needed for the tests.  The test is written with certain verification goals in mind, but we don’t commonly see any formalized coverage metrics implemented that measure how well the test covers the different conditions implied by the code structure.
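
A compilable caricature of that style might look like the sketch below, with $display standing in for the real driver tasks (everything here is invented for illustration). Note that nothing in the code measures which of the implied conditions have actually been exercised:

    module complex_directed_test;
      initial begin
        for (int ch = 0; ch < 4; ch++) begin
          for (int burst = 0; burst < 8; burst++) begin
            // Branch selection depends on a previously chosen stimulus option
            if (burst >= 4)
              $display("configure DMA transfer: ch=%0d burst=%0d", ch, burst);
            else
              $display("configure PIO transfer: ch=%0d", ch);
            // Bolted-on randomness adds variability, but no coverage metric counts it
            $display("send packet: ch=%0d len=%0d", ch, $urandom_range(1500, 64));
          end
        end
      end
    endmodule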

These types of test architectures map nicely to a graph-based verification tool because the procedural code maps directly to a graph, and lower-level tasks and functions can be called directly from nodes in the graph.  The structure of the graph is easy to understand since it depicts the various choices at different levels in a protocol, as the graph example below taken from an I2C testbench illustrates.

[Graph: i2cmaster rule graph from an I2C testbench]

Lower-level testbench code implemented as tasks or functions can be called directly from the blue graph “action” nodes, thereby facilitating significant re-use of the existing testbench.  When new functionality needs to be added to a test, the graph structure is easily extended by adding branches to the graph for the new DUT functionality being tested.

Since inFact supports various testbench coding styles (Verilog modules, interfaces, SystemVerilog and SystemC classes, OVM, “e,” Vera …), it integrates easily into existing environments.  Portions of the testbench environment unrelated to the block(s) managed by inFact remain unchanged, giving users flexibility.  Users planning for a migration to an advanced methodology have the option of re-generating their inFact rule graph as an OVM sequence.

During simulation, inFact decides which branches of the graph to traverse based on user-defined traversal goals.  Hard-coded selection of choices in nested “for” loops or conditional logic is replaced by user-configurable graph traversal strategies, which select graph branches randomly or systematically under control of a path traversal algorithm.  Stimulus coverage (either node or path) can be tabulated by inFact during traversal. Because the stimulus coverage is built into the tool, the user does not have to learn a new language or methodology, yet can realize the benefits of knowing their stimulus coverage.
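
To make the contrast concrete, here is a purely conceptual SystemVerilog sketch, emphatically not inFact's actual algorithm or rule language, of a decision node that can pick its outgoing branch either randomly or systematically while tabulating node coverage:

    class decision_node;
      string branches[$];            // outgoing branch labels of this decision node
      int    hit_count[string];      // traversal count per branch

      function new(string names[$]);
        branches = names;
        foreach (branches[i]) hit_count[branches[i]] = 0;
      endfunction

      // Random strategy: may keep revisiting branches that are already covered.
      function string pick_random();
        pick_random = branches[$urandom_range(branches.size()-1)];
        hit_count[pick_random]++;
      endfunction

      // Systematic strategy: prefer branches that have not been covered yet.
      function string pick_systematic();
        string uncovered[$];
        foreach (branches[i])
          if (hit_count[branches[i]] == 0) uncovered.push_back(branches[i]);
        if (uncovered.size() > 0)
          pick_systematic = uncovered[$urandom_range(uncovered.size()-1)];
        else
          pick_systematic = branches[$urandom_range(branches.size()-1)];
        hit_count[pick_systematic]++;
      endfunction

      // Node coverage: percentage of branches traversed at least once.
      function real node_coverage();
        int covered = 0;
        foreach (branches[i]) if (hit_count[branches[i]] > 0) covered++;
        return 100.0 * covered / branches.size();
      endfunction
    endclass

In the tool itself the traversal strategies, path coverage tabulation, and graph description are all handled for you; the sketch is only meant to show why a traversal that tracks what it has already covered closes stimulus coverage faster than blind random selection.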

This approach to directed testing gives users an option to improve the performance of their existing directed test environment and helps prepare them for a subsequent migration to a verification methodology such as OVM.

4 November, 2009

During discussions with customers about verification and functional coverage closure techniques, one thing that has consistently surprised me is how much reliance is sometimes placed on the unquantified benefits of random testing. There are many well-quantified benefits to constrained-random stimulus generation. Using algebraic constraints to declaratively describe a stimulus domain and automation to create specific stimulus dramatically boosts verification efficiency and results in verification of corner cases that humans would be unlikely to consider. More difficult to quantify, however, are the benefits of redundant stimulus. It is definitely beneficial in provoking some sequential behavior, but how much redundancy is enough and how much is just wasteful?

I work with a tool that accelerates functional coverage closure by efficiently targeting stimulus that will ‘cover’ the coverage model. The typical result is an order-of-magnitude improvement in time-to-coverage closure. The core productivity benefit of reducing time-to-coverage is typically understood by the customers I talk with. As designs have become more complex, verification requirements have increased, coverage models have grown, and coverage closure has become challenging, unpredictable and time-consuming. So, some automation in achieving coverage closure is a welcome addition to one’s verification toolkit.

However, there is concern over what might be lost in the stimulus-optimization process. Is all that ‘redundant’ stimulus really important and not so redundant after all?


This concern seems quite valid. After all, coverage metrics are a subjective measure of verification quality. Achieving coverage of a specific set of coverage metrics proves that a given set of functionality was exercised, but doesn’t prove that this is the only functionality that needed to be exercised. The coverage model is a dynamic, moving target, and is likely to change several times across the typical verification cycle. The coverage model may be expanded as new features are added. The coverage model may be trimmed, or certain areas re-prioritized, as the schedule runs out or as certain coverage is deemed less important or cost-prohibitive. Finally, the coverage model may be expanded to include functional areas where bugs were found. From a coverage-driven verification perspective, verification that isn’t documented in the coverage model effectively does not exist. Verification that uncovered a bug should be repeated as the design is refined and changed. The only way to guarantee this happens is to augment the coverage model.


Across the verification cycle, two activities are taking place in parallel. Tests are added and simulation is run to target coverage closure. Meanwhile, bugs discovered during verification are analyzed to determine whether the coverage model should be expanded to functionality surrounding the bug – an activity I like to refer to as bug prospecting. As an example, let’s say we discover a bug with large packets in heavy traffic conditions. We might want to exercise different combinations of traffic conditions and small, medium, and large packets to see if the coverage model should be enhanced. Stimulus described using a declarative description, such as constraints, does a good job of producing unexpected cases. When stimulus is produced randomly, however, 90% of the stimulus, on average, is redundant. This makes systematic bug prospecting difficult and time consuming.
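
In SystemVerilog terms, that augmentation might look something like the hypothetical covergroup below (the names, enum values, and bins are all invented for illustration); it crosses packet size with traffic condition so future regressions are guaranteed to revisit the area around the bug:

    typedef enum {LIGHT, MODERATE, HEAVY} traffic_e;

    class prospecting_cov;
      int       pkt_len;
      traffic_e traffic;

      covergroup bug_area_cg;
        len_cp     : coverpoint pkt_len {
          bins small  = {[1:64]};
          bins medium = {[65:1500]};
          bins large  = {[1501:9000]};
        }
        traffic_cp : coverpoint traffic;            // one bin per traffic condition
        len_x_traff: cross len_cp, traffic_cp;      // nine bins around the original bug
      endgroup

      function new();
        bug_area_cg = new();
      endfunction

      function void sample(int len, traffic_e t);
        pkt_len = len;
        traffic = t;
        bug_area_cg.sample();
      endfunction
    endclass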


A coverage-aware stimulus generation tool like inFact allows the verification engineer to flexibly tailor stimulus according to the requirements of the job at hand. When time-to-coverage closure is the most important thing (regression runs, for example), coverage-aware stimulus provides the ultimate convergence between coverage-closure efficiency and bug prospecting. When it’s time to do some bug prospecting, redundancy can be limited (but not eliminated) and the stimulus targeted a bit more loosely. Then, the coverage model can be enhanced to ensure future regression runs efficiently produce the newly-interesting stimulus. So, far from limiting verification, coverage-aware stimulus actually gives the verification engineer new and improved tools to tackle bug prospecting and coverage closure.


15 September, 2009

As a marketing guy, spending time with real engineers is always a welcome sanity check. Sometimes it’s easy to abstract a bit too much and forget some of the practical realities of the verification process. A few of my recent sanity-check moments have occurred around the concept of stimulus-coverage closure. In a coverage-driven verification flow that uses constrained-random stimulus, stimulus coverage is very important to ensure that all expected (and critical) stimulus has been generated. Achieving coverage closure in this area seems like it should be almost trivial – especially compared to hitting specific response coverage or triggering assertions in the design. However, several recent customer engagements have highlighted some of the difficulties around just achieving coverage closure for stimulus.

When using randomly-generated stimulus, we can predict the number of expected stimulus items to achieve stimulus coverage closure using a classic problem from probability theory called the Coupon Collector’s Problem. The subject of this problem is a game in which the object is to collect a full set of coupons from a limitless uniformly-distributed random collection. Early in the game, it is easy to fill empty slots in the coupon collection, since the probability is high that each new coupon selection is different from the previously-selected coupons. However, as the coupon collection approaches completeness, each new selection has a high probability of being a duplicate of a previously-selected coupon. After a bit of mathematical derivation, the expected number of selections needed to complete a set of n coupons is shown to be n·H_n (where H_n is the nth harmonic number), which grows as O(n·log(n)). Given a collection that contains 250 elements, we would expect to have to make roughly 1,500 random selections to fill the set. With a typical-size coverage model, uniformly-distributed random stimulus results in a 10-20% stimulus efficiency rate. Put another way, coverage closure could be achieved 5-10 times faster if non-redundant stimulus were used.
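
For anyone who wants to check that arithmetic, the expectation is easy to compute directly; the small self-contained snippet below sums the harmonic series for a 250-element collection:

    module coupon_collector_estimate;
      // Expected random draws to fill all n bins is n * H_n, where H_n is the
      // nth harmonic number (1 + 1/2 + ... + 1/n).
      initial begin
        int  n = 250;
        real harmonic = 0.0;
        for (int k = 1; k <= n; k++)
          harmonic += 1.0 / k;
        $display("Expected draws to fill %0d bins: %f", n, n * harmonic);
        // Prints roughly 1525 for n = 250; the n*log(n) term alone is about 1380.
      end
    endmodule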

However, life is often not as simple as math and marketing folks predict. There are two primary aspects of stimulus and coverage models that complicate stimulus coverage. First, it is very common for the stimulus model and the coverage model to be mismatched. After all, they serve very different purposes. The stimulus model typically describes the entire valid stimulus space for the design. The coverage model, on the other hand, describes a much smaller set of stimulus that should exercise critical design functionality. Because a random stimulus generator has no knowledge of the stimulus coverage model, it’s often very difficult to hit the corner cases identified in the coverage model.
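
A hypothetical example (the class and field names are invented, not taken from any customer environment) shows the mismatch: the constraint admits the entire legal packet-length space, while the coverage model singles out a handful of ranges and corner cases that a uniform random solver will rarely land on:

    class packet_item;
      rand bit [13:0] length;                // stimulus model: any length 1..16000 is legal
      constraint legal_len { length inside {[1:16000]}; }

      // Coverage model: a much narrower view of what "interesting" means
      covergroup len_cg;
        coverpoint length {
          bins tiny   = {1};
          bins small  = {[2:64]};
          bins medium = {[65:1500]};
          bins jumbo  = {[9000:15999]};
          bins maxlen = {16000};             // hit on roughly 1 in 16000 random picks
        }
      endgroup

      function new();
        len_cg = new();
      endfunction
    endclass

Every randomize() call produces legal stimulus, but nothing in the stimulus model steers the solver toward the tiny and maxlen bins, so those corner cases are typically the last to close.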

The constraint structures used to describe stimulus also complicate stimulus-coverage closure. Recently, I’ve seen quite a few cases where the stimulus space is partitioned and re-partitioned by a chain of constraints. In these cases, it isn’t uncommon to see random stimulus with less than 1% efficiency. In other words, random stimulus could be over an order of magnitude less efficient than we would predict using the Coupon Collector’s Problem. Just achieving coverage closure is painfully difficult or practically impossible given the available computing resources.
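
Here is a contrived sketch of the kind of constraint chain I mean (the names and ranges are invented, and solver distributions vary by tool, but the effect is representative): the mode==15 slice contains only a tiny fraction of the legal solutions, so any coverage bin tied to it closes painfully slowly under random generation:

    class chained_item;
      rand bit [3:0]  mode;
      rand bit [11:0] addr;
      rand bit [7:0]  len;

      // Each constraint re-partitions the space carved out by the previous one
      constraint c_addr { (mode == 15) -> addr inside {[4080:4095]};   // 16 addresses
                          (mode != 15) -> addr inside {[0:4079]}; }    // 4080 addresses
      constraint c_len  { (addr >= 4080) -> len == 255;
                          (addr <  4080) -> len inside {[1:254]}; }
    endclass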

In all of these cases, having a coverage-driven stimulus generation tool that is aware of both the stimulus model and the coverage model results in huge productivity gains in achieving stimulus coverage. Many of my customers believe that having coverage-driven stimulus is almost imperative to complete the required verification task.


11 August, 2009

Hi, I’m Mike Andrews, and I’m a Technical Marketing Engineer (TME) at Mentor Graphics, working on our Intelligent Testbench Automation (iTBA) product, namely inFact. The inFact tool employs rule-based (sometimes called graph-based) techniques to generate stimulus during testbench simulation, as a more efficient alternative to constrained random.

During a recent conversation about the complexity of creating the rule graphs that are the essential input of our tool, I half joked that it is like a logic puzzle, but easier than Sudoku. Of course, as the saying goes, many a true word is spoken in jest. For a certain type of application, specifically the pseudo-random selection of values for the various fields of the test sequences that get sent to the testbench drivers, the analogy holds up rather well.

The puzzle is basically about numbers and ordering, and the solutions in many cases can be trivial, in others a little less so. In almost every case there is a very simple solution that provides value and could probably even be produced via Perl scripts or other simple forms of automation from the variable constraints and cover groups. Often there are other more complex solutions to the same puzzle that further refine the efficiency of the stimulus, perhaps from a standard 10x improvement over constrained random up to 20x or 50x or even higher.

I had been working on just such a puzzle the day before and was quite pleased with the result. Discarding the obvious easy solution (I am supposed to be the expert after all), I started, much like Sudoku, with the low-hanging fruit, and quickly drew up the simple rules for a few of the variables and their associated cover groups. I then looked for trickier cases where the same variable was used in multiple cross-coverage groups, meaning that these may or may not have to appear in more than one place in the graph (this being the ‘ordering’ part of the puzzle). In some cases, different sub-ranges of one variable might be crossed with different sub-ranges of the others, adding a 3rd dimension to the relatively straightforward logic problem.

After about 45 minutes of (dare I say) puzzling ‘fun,’ with a couple of short bouts of head-scratching and occasional interruptions, I had a satisfactory solution that could potentially save a few more days or maybe weeks of simulation over the course of the verification project. The original trivial solution would probably have sufficed but, as usual, I found the process entertaining, since as a TME I am, of course, a geek in marketers’ clothing.

Later on, during a bus ride to the airport, the marketing side of me came up with an idea – maybe we could sell a cut-down version of the product within the quite lucrative computer game market. We have a quite colorful development environment, so we’re good on the graphics, and the game-play seems to be engaging for lovers of logic puzzles like me. I can picture bored engineers sitting in their airline seats trading their obsession for Solitaire or Minesweeper for our inFact puzzle-game (we might need a new product name for that market though – suggestions anyone?). We could also do an online version, or maybe even a Facebook app.

Once again the marketing side of my nature came to the fore – we need some hook to stop the game from being a short-lived fad. What we need is a “High Score” concept to appeal to the competitive nature of our graph game addicts-to-be. A minor enhancement could calculate the statistical estimate of how many test sequences a constrained random stimulus generator would take to reach the same coverage goal, and then divide that by the number of required graph paths, and award the result as the user’s ‘score’ (representing their potential verification speed-up). Gamers could then ‘level-up’ as they raise their high score and get rewarded with more challenging puzzles. High scorers could be listed on a leader board for bragging rights (the current high score for the ‘constrained random : inFact’ ratio is around 250:1). Now all I need is a schedule from engineering…. and while I’m at it, let’s put the Wii port on the roadmap….
