Language and Library Trends
This blog is a continuation of a series of blogs, which present the highlights from the 2010 Wilson Research Group Functional Verification Study (for a background on the study, click here).
In my previous blog (Part 7 click here), I focused on some of the 2010 Wilson Research Group findings related to testbench characteristics and simulation strategies. In this blog, I present design and verification language trends, as identified by the Wilson Research Group study.
You might note that for some of the language and library data I present, the percentages sum to more than one hundred percent. The reason is that some participants’ projects use multiple languages and multiple methodologies.
Let’s begin by examining the languages used for design, as shown in Figure 1. Here, we compare the results for languages used to design FPGAs (in grey) with languages used to design non-FPGAs (in green).
Figure 1. Languages used for design
Not too surprisingly, we see that VHDL is the most popular language used for the design of FPGAs, while Verilog and SystemVerilog are the most popular languages used for the design of non-FPGAs.
Figure 2 shows the trends in terms of languages used for design by comparing the 2007 Far West Research study (in blue) with the 2010 Wilson Research Group study (in green), as well as the projected design language adoption trends within the next twelve months (in purple). Note that design language adoption is declining for most languages, with the exception of SystemVerilog, whose adoption is increasing.
Figure 2. Trends in languages used for design
Next, let’s look at the languages used for verification (that is, languages used to create simulation testbenches). Figure 3 compares the results between FPGA designs (in grey) and non-FPGA designs (in green).
Figure 3. Languages used in verification to create simulation testbenches
And again, it’s not too surprising to see that VHDL is the most popular language used to create verification testbenches for FPGAs, while SystemVerilog is the most popular language used to create testbenches for non-FPGAs.
Figure 4 shows the trends in terms of languages used to create simulation testbenches by comparing the 2007 Far West Research study (in blue) with the 2010 Wilson Research Group study (in green), as well as the projected language adoption trends within the next twelve months (in purple). Note that verification language adoption is declining for most languages, with the exception of SystemVerilog, whose adoption is increasing.
Figure 4. Trends in languages used in verification to create simulation testbenches
Now, let’s look at methodology and class library adoption. Figure 5 shows the future trends in terms of methodology and class library adoption by comparing the 2010 Wilson Research Group study (in green) with the projected adoption trends within the next twelve months (in purple). Previous studies did not include data on methodology and class library adoption, so we are unable to show previous trends.
Figure 5. Methodology and class library future trends
The study indicates that OVM and UVM are the only methodologies whose adoption is projected to grow in the next twelve months.
Assertion Languages and Libraries
Finally, let’s examine assertion language and library adoption, as shown in Figure 6. Here, we compare the results for FPGA designs (in grey) and non-FPGA designs (in green).
Figure 6. Assertion language and library adoption
SystemVerilog Assertions (SVA) is the most popular assertion language used for both FPGA and non-FPGA designs.
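For readers who haven’t seen SVA, here is a minimal sketch of the kind of property it expresses; the signal names (req, gnt) and the four-cycle bound are illustrative assumptions, not anything drawn from the study data.

```systemverilog
// Minimal SVA sketch: every request must be granted within four cycles.
// Signal names and the cycle bound are illustrative assumptions.
module handshake_checker (input logic clk, rst_n, req, gnt);

  property p_req_then_gnt;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] gnt;   // after req, gnt must follow within 1 to 4 cycles
  endproperty

  a_req_then_gnt: assert property (p_req_then_gnt)
    else $error("req was not granted within four cycles");

endmodule
```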
Figure 7 shows the trends in terms of assertion language and library adoption by comparing the 2007 Far West Research study (in blue) with the 2010 Wilson Research Group study (in green), as well as the projected adoption trends within the next twelve months (in purple). Note that the adoption of most assertion languages is declining, with the exception of SVA, whose adoption is increasing.
Figure 7. Trends in assertion language and library adoption
In my next blog (click here), I plan to focus on the adoption of various verification technologies and techniques used in the industry, as identified by the 2010 Wilson Research Group study.
Effort Spent On Verification
This blog is a continuation of a series of blogs, which present the highlights from the 2010 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I focused on design and verification reuse trends. In this blog, I focus on the controversial topic of the amount of effort spent in verification.
I have been on the technical program committee for many conferences over the past few years (DVCon, DAC, DATE, FDL, HLDVT, MTV . . .), and it seems that there was not a single verification paper that I reviewed that didn’t start with the phrase: “Seventy percent of a project’s effort is spent in verification…blah blah blah.” Yet I’ve always wondered, where did this number come from? There has never been a reliable reference to the origin of this number, and certainly no credible studies that I am aware of.
I don’t believe that there is a simple answer to the question, “How much effort was spent on verification in your last project?” In fact, I believe that it is necessary to look at multiple data points, derived from multiple questions, to truly get a sense of effort spent in verification.
Total Project Time Spent In Verification
To try to assess the effort spent in verification, let’s begin by looking at one data point, which is the total project time spent in verification. Figure 1 shows the trends in total project time spent in verification by comparing the 2007 Far West Research study (in blue) with the 2010 Wilson Research Group study (in green).
Figure 1. Percentage of total project time spent in verification
Notice that in 2007, the median total project time spent in verification was calculated to be 50 percent, while the number increased to 55 percent in 2010. Our recent study seems to indicate that the time spent in verification is increasing.
Peak Number of Verification Engineers
Next, let’s look at another data point, the peak number of verification engineers on a project. Figure 2 compares the peak number of verification engineers involved on FPGA designs (in grey) and non-FPGA designs (in green) from our recent study.
Figure 2. Peak number of verification engineers
It’s not surprising that projects involving non-FPGA designs tend to have a higher number of peak verification engineers compared with FPGA designs.
Figure 3 shows the trends in peak number of verification engineers for non-FPGA designs by comparing the 2007 Far West Research study (in blue) with the 2010 Wilson Research Group study (in green).
Figure 3. Peak number of verification engineer trends
I decided that another interesting way to look at the data is to partition the set regionally and calculate the median peak number of verification engineers on a project by region. The results for North America, Europe/Israel, Asia, and India are shown in Figure 4.
Figure 4. Peak number of verification engineers by region
Notice how, on average, India seems to have more peak verification engineers involved on a project. India certainly has developed a core set of verification expertise over the past few years.
The next analysis I decided to perform was to partition the data by design size, and then compare the median peak number of verification engineers. Figure 5 shows the results, where the design size partitions are represented as: less than 1M gates (in blue), 1M to 20M gates (in orange), and greater than 20M gates (in red).
Figure 5. Peak number of verification engineers by design size
Although I am focusing on effort spent in verification at the moment, let’s take a look at the peak number of design engineers involved on a project today. Figure 6 compares the peak number of design engineers involved on FPGA designs (in grey) and non-FPGA designs (in green).
Figure 6. Peak number of design engineers
Next, in Figure 7 I show the trends in peak number of design engineers for non-FPGA designs by comparing the 2007 Far West Research study (in blue) with the 2010 Wilson Research Group study (in green).
Figure 7. Peak number of design engineer trends
You might note that there has not been a significant increase in design engineers in the past three years, although design sizes have increased. This is partially due to increased adoption of internal and external IP (as I discussed in my previous blog), as well as continued productivity improvements due to automation.
After I saw this data, I thought it would be interesting to compare the median increase in verification engineers to the median increase in design engineers from 2007 to 2010. The results were shocking, as shown in Figure 8, where we see a four percent increase in peak number of design engineers in the last three years compared to a 58 percent increase in peak number of verification engineers. Clearly, verification productivity improvements are needed in the industry to address this problem.
Figure 8. Peak number of design vs. verification engineer trends
In my next blog (click here), I’ll continue the discussion on effort spent in verification as revealed by the 2010 Wilson Research Group Functional Verification Study.
In my previous blog, I introduced the 2010 Wilson Research Group Functional Verification Study (click here). The objective of my previous blog was to provide a background on this large, worldwide industry study. The key findings from this study will be presented in a set of upcoming blogs.
This blog begins the process of revealing the 2010 Wilson Research Group study findings by first focusing on current design trends. Let’s begin by examining process geometry adoption trends, as shown in Figure 1. Here, you will see trend comparisons between the 2007 Far West Research study (in blue), and the 2010 Wilson Research Group study (in green).
Figure 1. Process geometry trends
Worldwide, the median process geometry from the 2007 Far West Research study was about 90nm, while today the median process geometry is about 65nm. Regionally, Asia seems to be a little more aggressive in its move to smaller process geometries, with a median process geometry of 45nm.
In addition to the industry moving to smaller process geometries, the industry is also moving to larger design sizes as measured in number of gates of logic and datapath, excluding memories (which should not be a surprise). Figure 2 compares design sizes from the 2002 Collett study (in orange), the 2007 Far West Research study (in blue), and the 2010 Wilson Research Group study (in green).
Figure 2. Number of gates of logic and datapath trends, excluding memories
The study revealed that about 30 percent of IC/ASIC designs today are less than 1M gates, about 40 percent range in size from 1M to 20M gates, and about 30 percent are larger than 20M gates.
When compiling and analyzing the data from the study, in addition to calculating the mean on various aspects of the data, I decided to calculate the median for trend analysis. In Figure 3, I show the median design size trends between the 2002 Collett study (in orange), the 2007 Far West Research study (in blue), and the 2010 Wilson Research Group study (in green). My objective in calculating the median is that the resulting value partitions the data into equal halves, and enables us to easily see that half the designs developed today are less than 6.1M gates, while the other half are greater than 6.1M gates. Obviously, we can see that gate counts have increased over the years, yet there is still a significant number of designs being developed with smaller gate counts as indicated by the median calculation.
Figure 3. Median design size trends
Figure 4 presents the current design implementation approaches as identified by the survey participants, which include both FPGA and non-FPGA implementations.
Figure 4. Design implementation approaches
Figure 5 presents trends in design implementation approaches for non-FPGA designs, comparing the 2002 Collett study (in pink), the 2004 Collett study (in orange), the 2007 Far West Research study (in blue), and the 2010 Wilson Research Group study (in green). The study seems to indicate a downward trend in standard cell design implementation.
Figure 5. Non-FPGA design implementation trends
We are not able to present trends for FPGA implementations, since none of the prior studies included FPGA survey participants.
In my next blog (click here), I’ll continue discussing current design trends, focusing specifically on embedded processors, power, and clock domains.
What does the word performance mean to you?
Speed? Well, obviously speed is an important characteristic. Yet, if the team is running in the wrong direction, it really doesn’t matter how fast they are going.
How about accomplishment? After all, we do assess an employee’s or project team’s accomplishments using a process we refer to as a performance review.
What about efficiency, which is a ratio comparing the amount of work accomplished to the effort or cost put into the process? Certainly, from a project perspective, effort and cost should be an important consideration.
Finally, perhaps quality of results is a characteristic we should consider. After all, poor results are of little use.
From a verification perspective, I think it is necessary to focus on the real problem, that is, the project’s verification objectives:
- Reduce risk: find more bugs sooner
- Know when we are done: increase confidence
- Improve project productivity and efficiency: get more work done
Now, whenever I hear the phrase “get more work done,” I’m often reminded of Henry Ford, the founder of the Ford Motor Company. Henry is probably best known as the father of the modern assembly line used in mass production, and he revolutionized transportation specifically and American industry in general. Henry once said, “If I had asked people what they wanted, they would have said faster horses.” This quote provides a classic example of the importance of focusing on the real problem, and thinking outside the box.
In fact, Henry Ford’s faster horses example is often used in advanced courses on product marketing and requirements gathering. The typical example of focusing on the real problem generally involves a dialogue between Henry and a farmer, as follows:
Henry: So, why do you want faster horses?
Farmer: I need to get to the store in less time.
Henry: And why do you need to get to the store in less time?
Farmer: Because I need to get more work done on the farm.
As you can see, the farmer really didn’t need faster horses; he needed a solution that would allow him to get more work done on the farm. Faster horses are certainly one solution, but thinking outside the box, there are other, more efficient solutions that would yield higher quality results.
Now, before I move on to discuss ways to improve verification performance, I would like to give one more example of thinking outside the box to improve performance. And for this example, I’ve chosen the famous Intel 8088 microprocessor. I was just an engineering student when the 8088 was released in 1979, and like so many geeks of my generation, I couldn’t wait to get my hands on one.
The 8088 had a maximum clock speed of approximately 5 MHz. It took multiple clock cycles to complete an instruction (on average, about 15). Furthermore, a 16-bit multiplication required about 80 clock cycles. So the question is, how could we improve the 8088’s performance to get more work done?
Well, one approach would be to speed up the clock. However, this would only provide incremental improvements compared to what could be achieved by thinking outside the box and architecting a more clever solution that took advantage of Moore’s Law. In fact, over time, that is exactly what happened.
First, the multiplier performance can be improved by moving to a single-cycle multiplier, such as a Wallace Tree, Baugh-Wooley, or Dadda architecture. These architectures calculate multiple partial products in parallel. Second, the average number of clock cycles per instruction can be reduced by moving to pipelined architectures, where multiple instruction executions overlap, giving a net effect of one instruction completing every clock cycle (as an ideal case example).
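To put back-of-the-envelope numbers on this, using the approximate 8088 figures above, instruction throughput is simply clock frequency divided by the average number of clocks per instruction:

$$
\text{throughput} = \frac{f_{clk}}{\text{CPI}}
\;\Rightarrow\;
\frac{5\,\text{MHz}}{15} \approx 0.33\ \text{MIPS}
\quad\text{vs.}\quad
\frac{5\,\text{MHz}}{1} = 5\ \text{MIPS}
$$

In other words, driving the average CPI down toward one yields roughly a 15x improvement at the same clock speed, whereas doubling the clock alone would merely double throughput.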
The point is that we have moved to solutions that get more work done by “increasing the amount of work per cycle,” instead of just a brute force approach to the problem.
In my next blog, I’ll discuss why performance even matters, followed by thoughts on improving verification performance.
I’ve always loved the Chinese proverb, “Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime.” Yet, why merely settle for fish when you can have sushi! My point is, to remain strategically relevant in today’s competitive landscape, it is necessary to constantly reinvent ourselves and evolve our technical skills. To that end, we have created the Verification Academy to help you evolve your advanced functional verification skills.
Since we launched the Verification Academy, we have had numerous requests for training on the Open Verification Methodology (OVM). Hence, in February we launched a new module titled Basic OVM, which has been received with overwhelming enthusiasm; it’s currently our most viewed module. The Basic OVM module consists of 2.5 hours of content divided into eight 20-minute sessions. The module is primarily aimed at existing VHDL and Verilog engineers who recognize they have a functional verification problem, but have little or no experience with constrained-random verification or object-oriented programming. Our goal for the Basic OVM module is to raise your skill level to the point where you have sufficient confidence in your own technical understanding to start adopting advanced functional verification techniques.
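For readers wondering what “constrained-random” means in practice, the idea is to declare random transaction fields plus constraints that keep the stimulus legal, then let the simulator generate the values. Here is a minimal plain-SystemVerilog sketch; the packet fields and constraints are hypothetical, not taken from the course material.

```systemverilog
// Minimal constrained-random sketch; fields and constraints are hypothetical.
class packet;
  rand bit [7:0]  addr;
  rand bit [31:0] data;
  rand int unsigned len;

  // Keep generated stimulus legal: bounded length, no accesses to address 0.
  constraint c_legal { len inside {[1:16]}; addr != 8'h00; }
endclass

module tb;
  initial begin
    packet p = new();
    repeat (10) begin
      if (!p.randomize())
        $error("randomization failed");
      $display("addr=%0h data=%0h len=%0d", p.addr, p.data, p.len);
    end
  end
endmodule
```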
This month, we are excited to announce the next step in evolving your Open Verification Methodology skills. Our new module, titled Advanced OVM (&UVM), provides a higher level of OVM understanding beyond what is presented in our previously released Basic OVM module. What’s particularly exciting about this release is that it addresses your numerous requests concerning advanced functional verification and creating contemporary testbenches using the OVM. The Advanced OVM module is presented by our own subject matter expert, Tom Fitzpatrick, who has been a driving force behind OVM development and standardization. Tom is one of my favorite technical presenters; he is both informative and entertaining, so I’m sure you will really enjoy our new Advanced OVM module.
Now, as shown in Table 1, the Verification Academy covers a wide variety of topics, which enables you to start evolving your advanced functional verification skills.
Table 1. Verification Academy Modules
| Module | Description |
| --- | --- |
| Evolving Verification Capabilities | Provides a framework for all the modules within the Verification Academy, while introducing a tool for assessing and improving an organization’s advanced functional verification capability |
| Assertion-Based Verification | Provides a comprehensive introduction to ABV techniques, including an introduction to SystemVerilog Assertions |
| Clock-Domain Crossing Verification | Provides an understanding of the clock-domain crossing problem, in terms of metastability and reconvergence, and then introduces verification solutions |
| FPGA Verification | Although targeted at FPGA engineers, provides an excellent introduction for anyone interested in learning various functional verification techniques |
| Basic OVM | Provides a step-by-step introduction to the basics of OVM |
| Advanced OVM (&UVM) | Provides the next level of understanding beyond the skills introduced in the Basic OVM module |
Another exciting announcement is that we have added a language caption option to most (and eventually all) Verification Academy modules. The caption options include Chinese (Simplified and Traditional), Japanese, and Russian.
In the next few weeks we have another exciting announcement related to the Verification Academy. Stay tuned!
I would like to encourage you to check out all our new and existing content at the Verification Academy by visiting www.verificationacademy.com.
PROLOGUE: Over the weekend, I was thinking about a recent visit I had with an advanced ASIC team manager who told me that they had optimized most aspects of their verification flow to such an extent that most of their remaining effort was spent in debugging. So, I decided to work up a draft blog on debugging. However, this morning, when I was preparing to post my blog, I noticed that Richard Goering had beaten me to the punch and had posted a blog on debugging about two weeks ago. Having reviewed his blog, I think we are both in agreement: debugging is a huge bottleneck in the flow. I think that debugging must be looked at as a solution, and not a tool feature. However, there are many aspects of debugging beyond traditional simulation triage of design models and testbench components, ranging from embedded software, to power and performance analysis, to code and functional coverage closure, etc. There really isn’t a unified solution; debugging must be considered an integral part of each aspect of design and verification.
ACT 1: My original blog from this weekend….
“Bloody instructions, which, being taught, return to plague the inventor….”
William Shakespeare, Macbeth, act 1, scene 7
All right, even Shakespeare had issues with debugging. But before I get into all of that, let me set the stage with a little background info…
First, let me say that I love my job. My role at Mentor Graphics consists of a diverse set of tasks. Yet, probably my most rewarding work involves studying and assessing today’s electronics industry. The objective of this work is to help Mentor identify discontinuities in today’s EDA solutions, as well as understand emerging verification challenges. But what I like most about my work is that it allows me to participate in detailed discussions with various project teams and multiple industry thought leaders across multiple market segments.
A couple of years ago, I was performing a detailed verification assessment for an ASIC project team. As I usually do when I conduct these kinds of assessments, I asked the team what was the biggest bottleneck in their flow. This one enthusiastic, young engineer started waving his hand vigorously at me and said: “I know, I know…..it’s layoffs!” Okay, so after the group recovered itself from an outburst of nervous chuckles, I pressed forward with my question. It turned out that the group unanimously agreed that debugging was generally a significant, yet often underestimated, effort associated with their flow. Perhaps this shouldn’t surprise anyone, when you consider that the Collett International 2003 IC/ASIC Design Closure study found that 42 percent of the verification effort was consumed in writing tests and creating testbenches, while 58 percent was consumed in debugging. More recently, a 2007 Far West Research study, chartered by Mentor Graphics, found that 52 percent of a dedicated verification engineer’s effort was consumed in debugging.
The problem with debugging is that the effort is not always obvious, since it applies to all aspects of the design and verification flow and often involves many different stakeholders: architectural modeling, RTL coding, testbench implementation, transaction modeling, embedded software, coverage modeling and closure, and on and on. What makes it particularly insidious is that it is extremely difficult to predict or schedule. In fact, what you will find is that a mature organization relies on historical data extracted from its previous projects’ debugging-effort metrics in order to estimate future project effort. However, due to the unpredictable nature of debugging, history doesn’t always repeat itself. And unfortunately, there is no silver bullet in terms of a single debugging tool or strategy. Multiple solutions are required, ranging from RTL implementation debugging, to OVM object-oriented testbench component debugging, to embedded software debugging capabilities, to coverage closure. Fortunately, multiple good solutions have emerged, ranging from assertions for reducing RTL debugging effort, to SystemVerilog dynamic-structure analysis and debugging, to processor-driven verification debugging solutions for embedded software verification, to the intelligent testbench for automating coverage closure.
EPILOGUE: I opened this blog humorously with a quote from Shakespeare. Yet, today’s debugging effort is no laughing matter, and it contributes significantly to a project’s overall design and verification effort. I’ll conclude this blog with a sobering quote from Brian Kernighan (the K in the K&R C language) who once pointed out:
Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.
I’m curious about your thoughts. Does debugging consume a significant amount of effort in your flow? If not, what is the biggest bottleneck in your flow?
I’m excited. I’ve had the pleasure of knowing Cliff Cummings for many years, and I was honored a couple of years ago to have him write the foreword in a book that I published on assertions. Now, we have joined forces to do a set of seminars titled: “Assertion-Based Verification for FPGA and IC Design.” The first seminar will take place on January 19, 2010 in Santa Clara, CA, and you can register online by clicking here.
This six-hour seminar is organized into four sessions. My session is titled “Industry Perspective and Opportunities in Assertion-Based Verification,” and I intend to provide a survey of today’s ABV landscape, ranging from various industry myths to realities. In fact, I specifically plan on addressing the issues raised in my recent blog titled “Evolution is a tinkerer.” In addition, I’ll talk about what characterizes organizations that have successfully adopted ABV, and then contrast them against organizations that are struggling or have failed in their attempts to integrate ABV into their flow.
The second and third sessions will focus on “Advanced Debugging with Assertions” and “Effective Coverage Using Assertions.”
We will conclude the seminar with an extended session covering “Basic SystemVerilog Assertions Training” by our SystemVerilog guru Cliff Cummings! His presentation details practical SystemVerilog assertion tricks that you can apply today to your own work, as well as methodological recommendations to improve efficiency when adopting ABV.
I hope everyone has a peaceful, happy holiday, and I look forward to meeting you on January 19 at the ABV seminar in Santa Clara, CA!
I see that Synopsys has finally released VMM1.2. Congratulations, guys. There will be plenty of opportunity over the coming weeks to discuss the relative merits of OVM vs. the OVM features that have been “borrowed” and jammed into this new version of VMM (factory, phasing, hierarchy…), but I’d like to talk a bit in this post about Synopsys’ unique approach to version numbering.
Let me just say that it’s patently obvious that Synopsys chose to hide the fact that this is a dramatically different VMM by calling it version 1.2 instead of 2.0, which is what they should have called it. Even though the accepted practice in our industry is to increase the major version number for a change of this magnitude, Synopsys is trying to convince everyone that it’s an incremental change to the methodology.
The fact is that the biggest advantage VMM had over OVM was the fact that it had been around longer. OVM has the advantages of being more full-featured, flexible, modular and reusable. Once VMM users understand the extent to which they’re going to have to rewrite their existing VMM code to take advantage of the new 2.0 features, the continuity argument will be gone and they may as well take a look at OVM too. And new users will now choose between a stable, proven OVM and a brand-spankin’-new VMM. The tables have turned, and Synopsys doesn’t want you to know this.
In fact, we’ve already had one customer try to compile their existing VMM1.1 code against the VMM2.0 (I mean 1.2) library without success, which is not a good sign for backward compatibility. A quick look at the first three lines of the sv/std_lib/vmm.sv file shows why:
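(The snippet itself did not survive in this copy of the post; based on the description that follows, the guard looked something along these lines — a paraphrase for illustration, not the verbatim file contents.)

```systemverilog
// Paraphrased for illustration; not the verbatim contents of sv/std_lib/vmm.sv.
`ifdef VMM_11
  `include "vmm_11.sv"   // the old VMM 1.1 library code (hypothetical file name)
`else
  `include "vmm_12.sv"   // a completely different library (hypothetical file name)
`endif
```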
In other words, if you want to use your existing VMM1.1 code, you +define+VMM_11 to get the old library code, otherwise you get a completely different library! Tell me that’s not a major release!
Perhaps an alternate metric would be helpful. I have, sitting on my desk, a copy of the Verification Methodology Manual for SystemVerilog. It is 503 pages long. The VMM1.2 User Guide, which incorporates the original book, along with all the new 2.0 (rats, did it again) features, weighs in at a whopping 1408 pages! That’s nearly a 3x increase in material.
By contrast, the OVM User Guide is only 158 pages, so even when combined with the OVM Reference Guide (384 pages), you’ve still got nearly 2.5x more stuff to go through with VMM. We could even throw in The OVM Cookbook (235 pages) and OVM is still half the size of VMM2.0.
It will be up to you to decide whether to take a chance on the new VMM2.0 or go with the more stable OVM. By the way, don’t be surprised to see an OVM2.1 rather soon that adds some new features to address user requests we’ve gotten. These new enhancements are completely backward-compatible with existing code, unlike VMM2.0.
Come to think of it, the only justification for calling VMM1.2 a minor release is that it doesn’t really advance the state of the art at all. Since all they’re doing is adding functionality to VMM that OVM has had for over a year, I guess it’s OK to call it VMM1.2 after all.
As they say, “Imitation is the sincerest form of flattery.”