Posts Tagged ‘functional verification’
In my previous blog, I introduced the 2012 Wilson Research Group Functional Verification Study (click here). The objective of my previous blog was to provide background on this large, worldwide industry study. I will present the key findings from this study in a set of upcoming blogs.
This blog begins the process of revealing the 2012 Wilson Research Group study findings by first focusing on current design trends. Let’s begin by examining process geometry adoption trends, as shown in Figure 1. Here, you will see trend comparisons between the 2007 Far West Research study (gray line), the 2010 Wilson Research Group study (blue line), and the 2012 Wilson Research Group study (green line).
Figure 1. Process geometry trends
Worldwide, the median process geometry size from the 2007 Far West Research study was about 90nm, while the median process geometry size was about 65nm in 2010. Today, the mean process geometry size for a typical project is about 45nm, although you can see that over a third of projects today are designing below 32nm.
In addition to moving to smaller process geometries, the industry is also moving to larger design sizes (which should not be a surprise), as measured in number of gates of logic and datapath, excluding memories. Figure 2 compares design sizes from the 2002 Collett study (dark blue line), the 2007 Far West Research study (gray line), the 2010 Wilson Research Group study (light blue line), and the 2012 Wilson Research Group study (green line).
Figure 2. Number of gates of logic and datapath trends, excluding memories
The study revealed that about a third of today's non-FPGA designs are smaller than 5M gates, a third range in size from 5M to 20M gates, and about a third are larger than 20M gates.
It’s important to note here that the data on the mean design size trends does not reflect volume in terms of semiconductor production. For example, there could be fewer projects designing at a small geometry, yet those projects could account for higher production volume.
In Figure 3, I show the mean design size trends between the 2002 Collett study (dark blue line), the 2007 Far West Research study (gray line), the 2010 Wilson Research Group study (light blue line), and the 2012 Wilson Research Group study (green line). Obviously, gate counts have increased over the years, yet a significant number of designs continue to be developed with gate counts both smaller and larger than the mean suggests. Another observation is that, as you would expect, the mean gate count trend essentially follows Moore’s law.
Figure 3. Mean design size trends
Figure 4 presents the current design implementation trends for non-FPGAs as identified by the survey participants.
Figure 4. Non-FPGA current design implementation trends
The data in Figure 4 present trends in design implementation approaches for non-FPGA designs, spanning the 2002 Collett study (dark blue bar), the 2004 Collett study (dark green bar), the 2007 Far West Research study (gray bar), the 2010 Wilson Research Group study (blue bar), and the 2012 Wilson Research Group study (green bar). Note that the study seems to indicate a downward trend in standard cell design implementation.
For the 2012 study, we decided that we wanted to get a sense of the percentage of FPGA projects that target the very complex programmable SoC FPGAs that have recently emerged, which is shown in Figure 5. Examples of these programmable SoC FPGAs include Xilinx’s Zynq, Altera’s Arria/Cyclone SoC families, and Microsemi’s SmartFusion.
Figure 5. FPGA design implementation trends
In my next blog (click here), I’ll continue discussing current design trends, focusing specifically on embedded processors, power, and clock domains.
A unique concept that most beginners have trouble grasping about the Verilog, and now the SystemVerilog, Hardware Description Language (HDL) is the difference between wires (nets) and regs (variables). This is something every experienced RTL designer should be familiar with, but there are now many verification engineers with no prior Verilog experience trying to pick up SystemVerilog for their testbench. Verification methodology courses tend to concentrate on the object-oriented programming aspects of testbench design and do not cover this topic, thinking it is for designers only. Not true. If you have to communicate with a DUT, then you need to understand the difference between wires and regs (nets and variables).
Anyone tasked with designing or verifying a piece of hardware should have some basic programming skills and understand the concept of a variable. If not, you had better stop right here and brush up on some programming basics. The key concept to take away from programming is that you write a value into a variable and that value is saved until the next assignment to that variable. This is referred to as a procedural assignment, which is part of executing an ordered set of statements. An HDL may add some notion of time between assignments and other statements. The last assignment determines the current value of the variable.
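As a minimal sketch of that behavior (the variable name and values are just for illustration), the last procedural assignment is the one that sticks:

logic [7:0] count;
initial begin
  count = 8'd5;      // procedural assignment
  #10 count = 8'd7;  // last write wins: count holds 7 until the next assignment
end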
Table: Combinatorial Logic vs. Sequential Logic
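As a rough sketch of the two cases that table contrasts (signal names assumed for illustration), the same reg keyword covers both, depending on how the always block is written:

// Combinatorial logic: no clock edge in the sensitivity list, so no register is inferred
reg c;
always @(a or b)
  c = a & b;

// Sequential logic: the posedge-clocked block infers a flip-flop
reg q;
always @(posedge clk)
  q <= d;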
Initially, Verilog used the keyword reg to declare variables representing sequential hardware registers. Eventually, synthesis tools began to use reg to represent both sequential and combinational hardware, as shown above, and the Verilog documentation was changed to say that reg is just what is used to declare a variable. SystemVerilog renamed reg to logic to avoid confusion with a register – it is just a data type (specifically, a 1-bit, 4-state data type). However, people get confused because of all the old material that refers to reg. Just forget about reg and use logic from now on.
Another distinctive characteristic of an HDL is that it models massive numbers of parallel processes. At the lowest level of digital design, every primitive gate (AND, OR, DFF) is an independent concurrent process. Modules are containers representing processes modeled at different levels of abstraction. Groups of primitives and modules pass values to each other via networks of signals. In Verilog, a wire declaration represents a network (net) of connections, with each connection either driving a value or responding to the resolved value being driven on the net. The output of each of these concurrent processes drives a net in what is called a continuous assignment, because the process continually updates the value it wants to drive on the net. There are various ways to declare a continuous assignment, all of which represent permanent behaviors:
wire A, B, C;
assign A = B | C;       // continuous assignment construct
or (A, B, C);           // gate-level instance terminal connection
mymodule m1 (A, B, C);  // module instance port connection
Although these are all different forms of continuous assignment constructs, none of them directly assign a value to the net like a procedural assignment would. All of the values being concurrently driven onto the net are passed into a built-in resolution function. The result of that resolution function is based on the strengths of each driver, representing the hardware technology in use. For example, an interrupt request signal might use the wired-or (wor) kind of net to indicate that at least one device is driving a '1'; otherwise it will resolve to a '0'. Some signals will have weaker pull-up/down resistors that will be overridden by the values of a stronger driver. Most technologies do not allow driving different values on the same net, and the net will resolve to an unknown 'x' when that happens. In this case only one driver is actively assigning a '0' or '1' and the other drivers are effectively turned off by driving a high-impedance or 'z' state. The consequence of this is that a bi-directional port must be modeled using a net in order to have multiple drivers on either side of the port.
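Here is a minimal sketch of a net with more than one driver (all signal names are assumed for illustration):

wire [7:0] data_bus;                    // resolved net shared by two drivers
assign data_bus = en_a ? drv_a : 8'bz;  // driver A releases the bus by driving 'z'
assign data_bus = en_b ? drv_b : 8'bz;  // driver B does the same
// If en_a and en_b are ever high at the same time with different data,
// the conflicting bits resolve to 'x'.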
See my recent DVCon paper for an example of modeling bidirectional signals along with other tips for connecting the Testbench to your DUT.
It turns out that the vast majority of nets in a design will only have a single driver, so no strength information or resolution function is needed. SystemVerilog added a feature that allows a single continuous assignment to drive a variable. The expression driving the continuous assignment is assigned to the variable every time the expression changes its value. As soon as you have more than one driver or need strength information, you must go back to using a net. You cannot mix procedural and continuous assignments to the same variable. The reason for that restriction is that there is no way to resolve the “last write wins” semantics of a procedural assignment with a driver that wants to continually assign the variable (i.e. when is the last write finished and the continuous assignment supposed to take over?).
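A small sketch of that rule, again with assumed names:

logic y;
assign y = a & b;  // legal: a single continuous assignment may drive a variable
// initial y = 0;  // illegal: the same variable cannot also be assigned procedurally

wire w;
assign w = a & b;  // a net accepts any number of continuous assignments...
assign w = c | d;  // ...and its built-in resolution function resolves them (possibly to 'x')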
In summary, you should now be using logic for 4-state variables (or bit for 2-state variables) to represent all of your single-driver signals. Any signal that has, or could have, more than one driver should be declared as a wire.
This is the first in a series of blogs that presents the results from the 2012 Wilson Research Group Functional Verification Study.
In 2002 and 2004, Ron Collett International, Inc. conducted its well known ASIC/IC functional verification studies, which provided invaluable insight into the state of the electronic industry and its trends in design and verification. However, after the 2004 study, no other industry studies were conducted, which left a void in identifying industry trends.
To address this void, Mentor Graphics commissioned Far West Research to conduct an industry study on functional verification in the fall of 2007. Then in the fall of 2010, Mentor commissioned Wilson Research Group to conduct another functional verification study. Both of these studies were conducted as blind studies to avoid influencing the results. This means that the survey participants did not know that the study was commissioned by Mentor Graphics. In addition, to support trend analysis on the data, both studies followed the same format and questions (when possible) as the original 2002 and 2004 Collett studies.
In the fall of 2012, Mentor Graphics commissioned Wilson Research Group again to conduct a new functional verification study. This study was also a blind study and follows the same format as the Collett, Far West Research, and previous Wilson Research Group studies. The 2012 Wilson Research Group study is one of the largest functional verification studies ever conducted. The overall confidence level of the study was calculated to be 95% with a margin of error of 4.05%.
Unlike the previous Collett and Far West Research studies that were conducted only in North America, both the 2010 and 2012 Wilson Research Group studies were worldwide studies. The regions targeted were:
- North America: Canada, United States
- Asia (minus India): China, Korea, Japan, Taiwan
The survey results are compiled both globally and regionally for analysis.
Another difference between the Wilson Research Group and previous industry studies is that both of the Wilson Research Group studies also included FPGA projects. Hence for the first time, we are able to present some emerging trends in the FPGA functional verification space.
Figure 1 shows the percentage makeup of survey participants by their job description. The red bars represent the FPGA participants while the green bars represent the non-FPGA (i.e., IC/ASIC) participants.
Figure 1: Survey participants’ job title description
Figure 2 shows the percentage makeup of survey participants by company type. Again, the red bars represent the FPGA participants while the green bars represent the non-FPGA (i.e., IC/ASIC) participants.
Figure 2: Survey participants’ company description
In a future set of blogs, over the course of the next few months, I plan to present the highlights from the 2012 Wilson Research Group study along with my analysis, comments, and obviously, opinions. A few interesting observations emerged from the study, which include:
- FPGA projects are beginning to adopt advanced verification techniques due to increased design complexity.
- The effort spent on verification is increasing.
- The industry is converging on common processes driven by maturing industry standards.
My next blog presents current design trends that were identified by the survey. This will be followed by a set of blogs focused on the functional verification results.
Also, to learn more about the 2012 Wilson Research Group study, view my pre-recorded Functional Verification Study web seminar, which is located on the Verification Academy website.
Quick links to the 2012 Wilson Research Group Study results (so far…)
- Part 1 – Design Trends
Tags: accellera, Assertion-Based Verification, formal verification, functional coverage, functional verification, IEEE, Simulation, Standards, SystemVerilog, UVM, Verification Academy, Verification Methodology, verilog, vhdl
Today at this week’s DVCon 2013 conference, the IEEE Standards Association (IEEE-SA) and Accellera Systems Initiative (Accellera) have jointly announced the public availability of the IEEE 1800 SystemVerilog Language Reference Manual at no charge through the IEEE Get Program.
As I posted a few weeks ago, the 1800-2012 is not a major revision of the standard, but it does contain a few enhancements that will be of interest to design and verification engineers alike. However, providing the standard as a freely available download is major news.
Even though the relative cost of the LRM was minor compared to the cost of most projects utilizing the standard, there seemed to be a barrier in most engineers’ minds to justifying the expense. So most just continued to use the last freely available SystemVerilog 3.1a LRM, which was nine years old and very obsolete for such a rapidly changing technology.
When it comes to formal methods, many engineers are skeptics. Perhaps this is due to value propositions that have been pitched over the years that have over-promised yet under-delivered in terms of results. Or perhaps it is due to the advanced skills that have traditionally been required to achieve predictable and reliable results. After all, historically this was the case—dating back to the mid-nineties when formal techniques were only adopted by companies that could afford a dedicated team of formal experts with PhDs.
So, what’s changed today? The emergence of functional verification solutions targeted at specific problem domains, which blend simulation with formal-based techniques in a seamless way to improve results. In other words, the application of formal-based technology is not just for experts anymore! In fact, everyone can reap the benefits of formal analysis today with very little effort.
One example of this blending of simulation with formal-based techniques is in accelerating the process of code coverage closure with the new Questa CoverCheck solution. Closing code coverage typically involves many engineering weeks of effort to manually review code coverage holes, either to determine whether they are unreachable and can be safely ignored, or to figure out exactly how to handcraft special tests to cover them during simulation. Questa CoverCheck makes it easy for non-expert users to leverage formal-based technology to complete this process: it automatically identifies the set of unreachable coverage items in a design and then guides the user in creating tests for the reachable items that have not yet been covered. This process, illustrated in the figure below, is push-button, low-effort, and requires no expertise with formal techniques. In addition, no assertions are required, nor is expertise in assertion languages. It is a beautiful example of how formal-based technology is blended with simulation to form a solution that improves both productivity and quality of results.
Another example of how formal-based technology is being used today to complement simulation is AutoCheck, which is part of the Questa Formal solution. For example, there is a class of bugs that cannot be found using RTL simulation due to a simulation effect known as X-state optimism. These bugs might be found during gate-level simulation, but that occurs very late in the design flow, when bugs are costlier to fix. By using AutoCheck, engineers are able to identify and correct X-state issues early in the design flow, before simulation occurs. In addition to X-state issues, AutoCheck uses formal-based technology to check for a wide range of common RTL errors that are difficult or impossible to find during RTL simulation. It is another example of a push-button, low-effort solution where assertion-language and formal expertise are not required. New in the latest Questa Formal release are significant improvements in engine performance and capacity, along with multicore support.
Questa CDC is one more example of how formal-based technology is being used today to complement simulation. Today, we see that about 94% of all designs have multiple asynchronous clock domains. Verifying that a signal originating in one clock domain will be safely registered in a different, asynchronous clock domain is not possible using traditional RTL simulation, since state element setup and hold times are not modeled, which means that metastability issues will not be verified. Again, these bugs might be found later in the flow during gate-level simulation, where they are costlier to fix. Static timing analysis, although effective at finding timing issues within a single clock domain or across synchronous clock domains, is unable to identify issues across asynchronous clock domains. This is an area where formal-based technology, such as Questa CDC, can help. What’s new in the latest Questa CDC release is support for unlimited design sizes through hierarchical CDC analysis, along with a 5X improvement in performance.
A system-level verification engineer once told me that his company consumes over 50% of its emulation capacity debugging failures. According to him, there was just no way around consuming emulators while debugging their SoC design emulation runs. In fact, when failures occur during emulation, verification engineers often turn to live debugging with JTAG interfaces to the Design Under Test. This enables one engineer to debug one problem at a time, while consuming expensive emulation capacity for extended periods of time. After all, when some of the intricate interactions between system software and design hardware fail, it can take days if not weeks to debug. To say this is painful, slow, and expensive would be an understatement.
Would you be interested to learn about a better alternative for debugging SoC emulation runs? Veloce Codelink offers instant replay capability for emulation. This allows multiple engineers to debug multiple problems at the same time, without consuming any emulation capacity, leaving the emulators to be used where they’re most needed – running more regression tests. And Veloce Codelink is non-invasive – no additional clock cycles needed to extract emulation data.
If you consume as much time debugging emulation failures as the system-level verification engineer above, Veloce Codelink could double your emulation capacity, too. To learn more about Veloce Codelink’s “virtual emulation” capability that enables “DVR” control of emulation runs, check out our On-Demand Web Seminar titled “Off-line Debug of Multi-Core SoCs with Veloce Emulation”. In this web seminar you’ll also learn about Veloce Codelink’s “flight data recording” technology, which enables long emulation runs to be debugged without requiring huge amounts of memory to store all of the data.
Graph-Based Intelligent Testbench Automation
While intelligent testbench automation (iTBA) is still reasonably new when measured in EDA years, this graph-based verification technology is being adopted by more and more verification teams every day. And the interest is global: verification teams from Europe, North America, and the Pacific Rim are now using iTBA to help them verify their newest electronic designs in less time and with fewer resources. (If you haven’t adopted it yet, your competitors probably have.) If you have yet to learn how this new technology can help you achieve higher levels of verification despite increasing design complexity, I’d suggest you check out a recent article in the June 2012 issue of Verification Horizons titled “Is Intelligent Testbench Automation For You?” The article focuses on where iTBA is best applied and where it will help you most by producing optimal results, and on how design applications with a large verification space, functionally oriented coverage goals, and unbalanced conditions can often see a 100X gain in coverage closure acceleration. For more detail about these and other considerations, you’ll have to read the article.
And while you’re there, you might also notice that the entire June 2012 issue of Verification Horizons is devoted to helping you achieve the highest levels of coverage as efficiently as possible. Editor and fellow verification technologist Tom Fitzpatrick succinctly adapts Murphy’s Law to verification, writing “If It Isn’t Covered, It Doesn’t Work”. And any experienced verification engineer (or manager) knows just how true this is, making it critical that we thoughtfully prioritize our verification goals, and achieve them as quickly and efficiently as possible. The June 2012 issue offers nine high quality articles, with a particular focus on coverage.
Another proof that iTBA is catching on globally is the upcoming TVS DVClub event being held next Monday, 2 July 2012, in Bristol, Cambridge, and Grenoble. The title of the event is “Graph-Based Verification”, and three industry experts will discuss different ways you can take advantage of what graph-based intelligent testbench automation has to offer. My colleague and fellow verification technologist Staffan Berg leads off the event with a proof of his own, presenting how graph-based iTBA can significantly shorten your time-to-coverage. Staffan will show you how to use graph-based verification to define your stimulus space and coverage goals, highlighting examples from some of the verification teams that have already adopted this technology, as I mentioned above. He’ll also show how you can introduce iTBA into your existing verification environment, so you can realize these benefits without disrupting your existing process. I have already registered and plan to attend the TVS DVClub event, but I’ll have to do some adapting of my own, as the event runs from 11:30am to 2:00pm BST in the UK. But I’ve seen Staffan present before, and both he and intelligent testbench automation are worth getting up early for. Hope to see you there, remotely speaking.
In his recent post on UVM: Some Thoughts Before DVCon, Dennis outlined some great ideas about what we think should happen next for UVM. His 3rd point, “UVM needs to bridge the system domain,” is particularly relevant given the newly-formed Accellera Systems Initiative. This is actually an area we’ve been contemplating for a while here at Mentor, and as Dennis indicated, we shared our thoughts on this topic at our last face-to-face with the VIP-TSC. With demand coming from our users, and some positive feedback on our proposal, we have just released UVM Connect, an open-source library that provides TLM1 and TLM2 connectivity and object passing between SystemC and SystemVerilog models and components, as well as a UVM Command API for accessing and controlling UVM simulation from SystemC (or C or C++).
Mentor has always believed that SystemVerilog and SystemC each have their own strengths and that the most productive way to combine them in a system-level environment is to preserve the strengths of each while allowing the free exchange of data between them. Instead of trying to re-implement UVM in SystemC, or to extend SystemC to try and recreate SystemVerilog functional coverage or constrained-random stimulus, UVM Connect provides the framework needed to interoperate between languages. This lets you:
- Reuse your SystemC architectural models as reference models in UVM verification
- Reuse your stimulus generation agents in SystemVerilog to verify models in SystemC
- Have access to a wider array of VIP since you are no longer confined to a single language
- Utilize and interact with the UVM infrastructure from SystemC, including waiting for and controlling UVM phase transitions, setting and getting configuration, issuing UVM-style reports, setting factory type and instance overrides, and more
UVM Connect provides object-based data transfer across the language boundary via TLM1 and TLM2 interfaces, which are natively supported in both languages. It works out-of-the-box with UVM 1.1a and later and lets you use your existing TLM models, regardless of language, in a mixed-language context without modification. In a nutshell, UVM Connect fulfills the principles and purpose of the TLM interface standard, letting you design independent models that communicate without directly referring to each other. The models thus work equally well in both native and mixed-language environments. I encourage you to download the kit and give it a try. In the spirit of “co-op-etition,” I also encourage our competitors to qualify the library on their simulators.
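To give a feel for what this looks like in practice, here is a rough sketch of the SystemVerilog side of a cross-language TLM connection. The component, socket, and lookup-string names ("producer", "out", "stim") are assumptions, and the calls follow my reading of the UVM Connect kit’s examples rather than being copied from them; check the shipped User’s Guide for the exact API.

import uvm_pkg::*;
`include "uvm_macros.svh"
import uvmc_pkg::*;  // UVM Connect package shipped with the kit (name assumed)

// Hypothetical UVM component with a TLM2 blocking initiator socket
class producer extends uvm_component;
  uvm_tlm_b_initiator_socket #() out;  // default payload: uvm_tlm_generic_payload
  `uvm_component_utils(producer)
  function new(string name, uvm_component parent);
    super.new(name, parent);
    out = new("out", this);
  endfunction
endclass

module sv_main;
  producer prod;
  initial begin
    prod = new("prod", null);
    // Register the socket under a lookup string; the SystemC side registers its
    // target socket under the same string (e.g., uvmc_connect(cons.in, "stim");)
    // and UVM Connect binds the two across the language boundary.
    uvmc_tlm::connect(prod.out, "stim");
    run_test();
  end
endmodule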
In addition to the great material in the UVM/OVM Online Methodology Cookbook on Verification Academy, the kit also includes an HTML User’s Guide, based on extensive, well-documented examples, with detailed information on all aspects of the API. Please make sure to stop by the Mentor booth at DVCon and let us know what you think.
Instant Replay Offers Multiple Views at Any Speed
If you’ve watched any professional sporting event on television lately, you’ve seen the pressure put on referees and umpires. They have to make split-second decisions in real-time, having viewed ultra-high-speed action just a single time. But watching at home on television, we get the luxury of viewing multiple replays of events in question in high-definition super-slow-motion, one frame at a time, and even in reverse. We also get to see many different views of these controversial events, from the front, the back, the side, up close, or far away. Sometimes it seems there must be twenty different cameras at every sporting event.
Wouldn’t it be nice if you could apply this same principle to your SoC level simulations? What if you had instant replay from multiple viewing angles in your functional verification toolbox? It turns out that such a technology indeed exists, and it’s called “Codelink Replay”.
Codelink Replay enables verification engineers to use instant replay with multiple viewing angles to quickly and accurately debug even the most complex SoC level simulation failures. This is becoming increasingly important, as we see in Harry Foster’s blog series about the 2010 Wilson Research Group Functional Verification Study that over half of all new design starts now contain multiple embedded processors. If you’re responsible for verifying a design with multiple embedded cores such as ARM’s new Cortex A15 and Cortex A7 processors, this technology will have a dramatic impact for you.
Multi-Core SoC Design Verification
Multi-core designs present a whole new level of verification challenges. Achieving functional coverage of your IP blocks at the RTL level has become merely a prerequisite now – as they say, “necessary but not sufficient”. Welcome to the world of SoC level verification, where you use your design’s software as a testbench. After all, a testbench’s role is to mimic the design’s target environment so as to test its functionality, so what better way to accomplish this than to execute the design’s software against its hardware, albeit during simulation?
Some verification teams have already dabbled in this world. Perhaps you’ve written a handful of tests in C or assembly code, loaded them into memory, initialized your processor, and executed them. This is indeed the best way to verify SoC level functionality including power optimization management, clocking domain control, bus traffic arbitration schemes, driver-to-peripheral compatibility, and more, as none of these aspects of an SoC design can be appropriately verified at the RTL IP block level.
However, imagine running a software testbench program only to see that the processor stopped executing code two hours into the simulation. What do you do next? Debugging “software as a testbench” simulation can be daunting. Especially when the software developers say “the software is good”, and the hardware designers say “the hardware is fine”. Until recently, you could count on weeks to debug these types of failures. And the problem is compounded with today’s SoC designs with multiple processors running software test programs from memory.
This is where Codelink Replay comes in. It enables you to replay your simulation in slow motion or fast forward, while observing many different views including hardware views (waveforms, CPU register values, program counter, call stack, bus transactions, and four-state logic) and software views (memory, source code, decompiled code, variable values, and output) – all remaining in perfect synchrony, whether you’re playing forward or backward, single-step, slow-motion, or fast speed. So when your simulation fails, just start at that point in time, and replay backwards to the root of the problem. It’s non-invasive. It doesn’t require any modifications to your design or to your tests.
Debugging SoC Designs Quickly and Accurately
So if you’re under pressure to make fast and accurate decisions when your SoC level tests fail, you can relate to the challenges faced by professional sports referees and umpires. But with Codelink Replay, you can be assured that there are about 20 different virtual “cameras” tracing and logging your processors during simulation, giving you the same instant replay benefit we get when we watch sporting events on television. If you’re interested in learning more about this new technology, check out the web seminar at the URL below, which introduces Codelink Replay and shows how it supports the entire ARM family of processors, including even the latest Cortex A-Series, Cortex R-Series, and Cortex M-Series.
Who Doesn’t Like Faster?
In my last blog post I introduced a new technology called Intelligent Testbench Automation (“iTBA”). It’s generating lots of interest in the industry because, just like constrained random testing (“CRT”), it can generate tons of tests for functional verification. But it has unique efficiencies that allow you to achieve coverage 10X to 100X faster. And who doesn’t like faster? Well, since the last post I’ve received many questions of interest from readers, but one seems to stick out enough to “cover” it here in a follow-up post.
Several readers commented that they like the concept of randomness because it has the ability to generate sequences of sequences; perhaps even a single sequence executed multiple times in a row.[1] And they were willing to suffer some extra redundancy as an unfortunate but necessary trade-off.
While this benefit of random testing is understandable, there’s no need to worry, as iTBA has you covered here. If you checked out this link - http://www.verificationacademy.com/infact2 - you found an interactive example with a side-by-side comparison of CRT and iTBA. The intent of the example was to show what happens when you use CRT to generate tests randomly versus when you use iTBA to generate tests without redundancy.
However, in a real application of iTBA, it’s equally likely that you’d manage your redundancy, not necessarily eliminate it completely. We’ve improved the on-line illustration to include two (of the many) additional features of iTBA.
Coverage First – Then Random
One is the ability to run a simulation with high-coverage, non-redundant tests first, followed immediately by random tests. Try it again, but this time check the box labeled “Run after all coverage is met”. What you’ll find is that iTBA achieves your targeted coverage in the first 576 tests, at which time CRT will have achieved somewhere around 50% coverage at best. But notice that iTBA doesn’t stop at 100% coverage. It continues on, generating tests randomly. By the time CRT gets to about 70% coverage, iTBA has achieved 100% and has also generated scores of additional tests randomly. You can have the best of both worlds. You can click on the “suspend”, “resume”, and “show chart” buttons during the simulation to see the progress of each.
Interleave Coverage and Random
Two is the ability to run a simulation randomly, but to clip the redundancy rather than eliminate it. Move the “inFact coverage goal” bar to set the clip level (try 2 or 3 or 4), and restart the simulation. Now you’ll see iTBA generating random tests, but managing the redundancy to whatever level you chose. The result is simulation with a managed amount of redundancy that still achieves 100% of your target coverage, including every corner-case.
iTBA generates tons of tests, but lets you decide how much to control them. If you’re interested to learn more about how iTBA can help you achieve your functional verification goals faster, you might consider attending the Tech Design Forum in Santa Clara on September 8th. There’s a track dedicated to achieving coverage closure. Check out this URL for more information about it. http://www.techdesignforums.com/events/santa-clara/event.cfm
[1] By the way, if achieving your test goals is predicated on certain specific sequences of sequences, our experts can show you how to create an iTBA graph that will achieve those goals much faster than relying on redundancy. But that’s another story for another time.
About the Verification Horizons Blog
This blog provides an online forum with weekly updates on concepts, values, standards, methodologies, and examples to assist with understanding what advanced functional verification technologies can do and how to apply them most effectively. We're looking forward to your comments and suggestions on the posts to make this a useful tool.