Archive for March, 2011
Design Trends (Continued)
In Part 1 of this series of blogs, I focused on design trends (click here) as identified by the 2010 Wilson Research Group Functional Verification Study (click here). In this blog, I continue presenting the study findings related to design trends, with a focus on embedded processor, power management, and clock domain trends.
In Figure 1, we see the percentage of today’s designs by the number of embedded processor cores. It’s interesting to note that 78 percent of all non-FPGA designs (as shown in green) contain one or more embedded processors and could be classified as an SoC, which by nature is complex to verify. Yet, even 55 percent of all FPGA designs contain one or more embedded processors.
Figure 1. Number of embedded processor cores
Figure 2 shows the trends in terms of number of embedded processor cores for non-FPGA designs. The comparison includes the 2004 Collett study (in orange), the 2007 Far West Research study (in blue), and the 2010 Wilson Research Group study (in green).
We are unable to show FPGA trend data, since none of the prior industry studies included FPGA participants. However, future studies should be able to show FPGA trends, since the 2010 Wilson Research Group study did include FPGA participants.
Figure 2. Number of embedded processor core trends
The median number of embedded processor cores in 2004 was about 1.06. This number increased to 1.46 in 2007. Today, the median number of embedded processor cores is 2.14.
Another interesting analysis on the study data is to partition it into design sizes (for example, less than 1M gates, 1M to 20M gates, greater than 20M gates), and then calculate the median number of embedded processors per partitioned set. The results are shown in Figure 3, and as you would expect, the larger the design, the more embedded processor cores.
Figure 3. Median embedded processor cores by design size
Platform-based SoC design approaches, containing multiple embedded processor cores with lots of third-party and internally developed IP, have driven the demand for common bus architectures. In Figure 4 we see the percentage of today’s designs by the type of on-chip bus architecture for both FPGA (in grey) and non-FPGA (in green) designs.
Figure 5 shows the trends in on-chip bus architecture adoption. The comparison includes the 2007 Far West Research study (in blue) and the 2010 Wilson Research Group study (in green). The various ARM AMBA bus architectures were not partitioned out between the 2007 and 2010 studies. However, it is interesting to note that there was about a 241 percent increase in designs using the ARM AMBA bus architecture.
Figure 5. On-chip bus architecture adoption trends
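For readers who want to sanity-check that kind of figure, a percent increase is just the relative change between the two studies’ values. The numbers below are hypothetical placeholders chosen only to illustrate the arithmetic, not the actual study counts:

```python
# Percent increase between two survey values. The adoption counts here
# are hypothetical placeholders, not the Wilson Research Group data.
adoption_2007 = 100  # designs using ARM AMBA in the earlier study
adoption_2010 = 341  # designs using ARM AMBA in the later study

percent_increase = (adoption_2010 - adoption_2007) / adoption_2007 * 100
print(f"percent increase: {percent_increase:.0f}%")  # prints: percent increase: 241%
```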
One interesting way to analyze the study data is to partition the responses by geographical region. The results are shown in Figure 6. The regional comparisons are North America (in blue), Europe/Israel (in green), Asia minus India (in orange), and India (in red).
Notice how Asia appears to lead the world in the development of designs containing ARM processors.
Figure 6. On-chip bus architecture adoption by region
Another interesting analysis is to partition the data by design sizes. The results are shown in Figure 7 with the following design size partitions: less than 1M gates (in blue), 1M to 20M gates (in orange), and greater than 20M gates (in red).
Figure 7. On-chip bus architecture adoption by design size
In Figure 8 we see the percentage of today’s designs by the number of embedded DSP cores for both FPGA designs (in grey) and non-FPGA designs (in green).
Figure 8. Number of embedded DSP cores
Figure 9 shows the trends in terms of the number of embedded DSP cores for non-FPGA designs. The comparison includes the 2007 Far West Research study (in blue), and the 2010 Wilson Research Group study (in green).
Figure 9. Number of embedded DSP core trends
Independent Asynchronous Clock Domains
Figure 10 shows the percentage of designs developed today by the number of independent asynchronous clock domains. The asynchronous clock domain data for FPGA designs is shown in grey, while the data for non-FPGA designs is shown in green.
Figure 10. Number of independent asynchronous clock domains
Figure 11 shows the trends in number of independent asynchronous clock domains for non-FPGA designs. The comparison includes the 2002 Collett study (in orange), the 2004 Collett study (in pink), the 2007 Far West Research study (in blue), and the 2010 Wilson Research Group study (in green).
It’s interesting to note that, although the number of clock domains is increasing over time, the sweet spot in terms of number of independent asynchronous clock domains seems to remain between two and 11 clock domains, and hasn’t changed significantly in the past nine years.
Figure 11. Number of independent asynchronous clock domain trends
Figure 12 partitions the study data by geographical region, and shows the median calculation for the number of independent asynchronous clock domains. The regional comparisons are North America (in blue), Europe/Israel (in green), Asia minus India (in orange), and India (in red).
Notice how Asia appears to lead the world in the median number of independent asynchronous clock domains.
Figure 12. Median number of independent clock domains by regions
Figure 13 provides a different analysis of the data by partitioning the data into design sizes, and then calculating the median number of independent asynchronous clock domains. The design size partitions are represented as: less than 1M gates (in blue), 1M to 20M gates (in orange), and greater than 20M gates (in red).
Figure 13. Median number of independent clock domains by design size
Figure 14 shows the percentage of designs that actively manage power by process geometry size. You will note that at 45nm, the study indicates that there is an increasing need to actively manage power.
Figure 14. Designs that actively manage power by process geometry
The size of the design, regardless of its process geometry, influences the decision to actively manage power, as shown in Figure 15. The design size partitions are represented as follows: less than 1M gates (in blue), 1M to 20M gates (in orange), and greater than 20M gates (in red).
Figure 15. Designs that actively manage power by design size
Although there are many techniques that are used to manage power, Figure 16 shows the percentage of use for the top eight techniques that were identified through the study. It’s important to note that many designs will implement multiple power management solutions on a single chip.
Figure 16. Top eight techniques used to actively manage power
In my next blog (click here), I’ll present data on design and verification reuse trends.
In my previous blog, I introduced the 2010 Wilson Research Group Functional Verification Study (click here). The objective of my previous blog was to provide a background on this large, worldwide industry study. The key findings from this study will be presented in a set of upcoming blogs.
This blog begins the process of revealing the 2010 Wilson Research Group study findings by first focusing on current design trends. Let’s begin by examining process geometry adoption trends, as shown in Figure 1. Here, you will see trend comparisons between the 2007 Far West Research study (in blue), and the 2010 Wilson Research Group study (in green).
Figure 1. Process geometry trends
Worldwide, the median process geometry size from the 2007 Far West Research study was about 90nm, while today the median process geometry size is about 65nm. Regionally, Asia seems to be a little more aggressive in its move to smaller process geometries; there, the median process geometry size was found to be 45nm.
In addition to the industry moving to smaller process geometries, the industry is also moving to larger design sizes as measured in number of gates of logic and datapath, excluding memories (which should not be a surprise). Figure 2 compares design sizes from the 2002 Collett study (in orange), the 2007 Far West Research study (in blue), and the 2010 Wilson Research Group study (in green).
Figure 2. Number of gates of logic and datapath trends, excluding memories
The study revealed that about 30 percent of IC/ASIC designs today are less than 1M gates, while 40 percent range between 1M and 20M gates, and about 30 percent of all designs are larger than 20M gates.
When compiling and analyzing the data from the study, in addition to calculating the mean on various aspects of the data, I decided to calculate the median for trend analysis. In Figure 3, I show the median design size trends between the 2002 Collett study (in orange), the 2007 Far West Research study (in blue), and the 2010 Wilson Research Group study (in green). My objective in calculating the median is that the resulting value partitions the data into equal halves, and enables us to easily see that half the designs developed today are less than 6.1M gates, while the other half are greater than 6.1M gates. Obviously, we can see that gate counts have increased over the years, yet there is still a significant number of designs being developed with smaller gate counts as indicated by the median calculation.
Figure 3. Median design size trends
Figure 4 presents the current design implementation approaches as identified by the survey participants, which includes both FPGA and non-FPGA implementations.
The data in Figure 5 presents trends in design implementation approaches for non-FPGA designs, ranging from the 2002 Collett study (in pink) and the 2004 Collett study (in orange) to the 2007 Far West Research study (in blue) and the 2010 Wilson Research Group study (in green). The study seems to indicate a downward trend in standard-cell design implementation.
Figure 5. Non-FPGA design implementation trends
We are not able to present trends for FPGA implementations, since none of the prior studies included FPGA survey participants.
In my next blog (click here), I’ll continue discussing current design trends, focusing specifically on embedded processors, power, and clock domains.
In 2002 and 2004, Ron Collett International, Inc. conducted its well-known ASIC/IC functional verification studies, which provided invaluable insight into the electronic industry’s state and trends in design and verification at the time. However, after the 2004 study, no further industry studies were conducted, which left a void in identifying industry trends.
To address this void, Mentor Graphics commissioned Far West Research to conduct a new industry study on functional verification in the fall of 2007. The study was conducted as a blind study to avoid influencing the results. This means that the survey participants did not know that the study was commissioned by Mentor Graphics. In addition, to support trend analysis on the data, the survey followed the same format and questions that were asked in the 2002 and 2004 Collett studies.
In the fall of 2010, Mentor Graphics commissioned Wilson Research Group to conduct a new functional verification study. This study is a blind study and follows the same format as the Collett and Far West Research studies. The 2010 Wilson Research Group study is one of the largest functional verification studies ever conducted. It is about 3.5 times larger than the Collett studies, and twice as large as the Far West Research study. The overall confidence level of the study was calculated to be 95% with a margin of error of 4.1%.
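As a back-of-the-envelope check (my own arithmetic, not a figure from the study), the standard margin-of-error formula for a proportion can be inverted to estimate the sample size implied by a 95% confidence level and a 4.1% margin of error:

```python
# Margin of error for a proportion: MoE = z * sqrt(p * (1 - p) / n),
# using the worst case p = 0.5 and z = 1.96 for a 95% confidence level.
z = 1.96
p = 0.5
moe = 0.041

# Invert the formula to solve for the sample size n.
n = (z ** 2) * p * (1 - p) / moe ** 2
print(f"implied sample size: about {n:.0f} participants")  # about 571
```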
Unlike the previous Collett and Far West Research studies that were conducted in North America only, the 2010 Wilson Research Group study is a worldwide study. The regions targeted are:
- North America: Canada, United States
- Europe/Israel: Finland, France, Germany, Israel, Italy, Sweden, UK
- Asia (minus India): China, Korea, Japan, Taiwan
- India
The survey results are compiled both globally and regionally for analysis.
Another difference with this study is that it includes FPGA engineers. When I started to process the study results, I decided to compile the combined FPGA and non-FPGA data (when appropriate) and also to compile each separately for analysis. Obviously, for trend analysis I can only show the non-FPGA (IC/ASIC) data, since no previous study included FPGA participants.
When compiling and analyzing the data from the study, in addition to calculating the mean on various aspects of the data, I decided to calculate the median for trend analysis. My objective in calculating the median is that the resulting value partitions the data into equal halves, which at times is more insightful when discussing trends.
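To illustrate why the median can be more insightful than the mean for skewed data like design sizes, here is a small sketch using made-up gate counts (not the study data). A few very large designs pull the mean upward, while the median still splits the population into equal halves:

```python
import statistics

# Hypothetical design sizes in millions of gates (not study data):
# mostly small designs plus a couple of very large ones.
design_sizes = [0.5, 0.8, 1.2, 2.0, 3.5, 6.1, 7.0, 9.5, 40.0, 95.0]

mean = statistics.mean(design_sizes)      # pulled upward by the large designs
median = statistics.median(design_sizes)  # partitions the data into equal halves

below = sum(1 for s in design_sizes if s < median)
above = sum(1 for s in design_sizes if s > median)
print(f"mean = {mean:.2f}M gates, median = {median:.2f}M gates")
print(f"{below} designs below the median, {above} above")
```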
Figure 1 shows the percentage makeup of survey participants by company type. The grey bars represent the FPGA participants, while the green bars represent the non-FPGA (i.e., IC/ASIC) participants.
Figure 1: Survey participants company description
Figure 2 shows the percentage makeup of survey participants by their job description. Again, the grey bars represent the FPGA participants, while the green bars represent the non-FPGA (i.e., IC/ASIC) participants.
Figure 2: Survey participants job title description
In a future set of blogs, I plan to present the highlights from the 2010 Wilson Research Group study along with my analysis, comments, and obviously, opinions. A few interesting observations emerged from the study, which include:
- Reuse adoption is increasing.
- The effort spent on verification is increasing.
- The industry is adopting more advanced functional verification techniques.
My next blog (click here) presents current design trends that were identified by the survey. This will be followed by a set of blogs focused on the functional verification results.
Quick links to the 2010 Wilson Research Group Study results (so far…)
- Part 1 – Design Trends
- Part 2 – Design Trends (Continued)
- Part 3 – Reuse
- Part 4 – Effort Spent In Verification
- Part 5 – Effort Spent In Verification (Continued)
- Part 6 – Testbench Characteristics and Simulation Strategies
- Part 7 – Testbench Characteristics and Simulation Strategies (Continued)
- Part 8 – Language and Library Trends
- Part 9 – Verification Techniques and Technologies Adoption Trends
More to come!!!
Wally Rhines DVCon 2011 Keynote Highlights Survey on Verification Languages
OK, maybe it is not the Dawning of the Age of Aquarius, but Wally Rhines’ DVCon 2011 keynote did have a slide titled “SystemVerilog in the Ascendancy.” It is not a word I see or use much. In fact, Google Labs’ “Books Ngram Viewer” shows “ascendancy” has been in decline since around 1825.
It struck me that the title was tending toward the allegoric, if not mostly there, conjuring possible metaphoric, astrological meanings; I began to wonder if planetary positioning would be offered on the next slide to bolster SystemVerilog’s ascendancy. I asked myself: Is SystemVerilog’s “ascendancy” a move to a new spiritual level? Has it transcended all other languages to garner greater social importance for design and verification? Is this a representation of other trends? Or, perhaps, I was having a flashback to the hippie era. After all, I was hearing in my mind that Hair song with the phrase
When the moon is in the second house
and Jupiter aligned with Mars…
But I was too young in the hippie era of 1967 to have a real flashback. And Wally’s keynote was not some hippie mumbo jumbo. I am also more than certain any of the engineers in the room at DVCon with some physics background could tell us Jupiter aligns with Mars several times a year and the few who might have astrological training (I’ve got to meet them!) could share with us the Moon is in the 7th House for about two hours every day.
Wally’s DVCon 2011 keynote was presented in three parts. The third and last part was on language transitions. When he got to that section he started it by presenting a slide on language transition titled “SystemVerilog in the Ascendancy.”
When some things go up, others go down. It is no surprise that VERA, which seeded the SystemVerilog standard, has reached a low predicted use of 3% in 2011. Joining this decline is the other language of that era that battled with VERA, “e.” “e” use was at 16% in 2007 and 15% in 2010, but users plan a greater than 25% reduction in use from 2010 to 2011. This is a rather dramatic drop in one year, given how steady it held from 2007 until now.
Wally also discussed the adoption of languages by geography. SystemVerilog has a strong global presence with particular strength in Asia and India. The “e” language shows focused geographic use in Europe/Israel followed by India. VHDL’s use also has focused geographic use with Europe/Israel leading followed by North America. It is interesting to note some languages have broad global appeal while others have only regional adoption.
Wally also touched on the adoption trends in testbench base-class libraries. Accellera’s UVM shows the largest growth from 2010 use to predicted use in 2011. It should grow from 7% to 27% in the next 12 months. While many projects adopted UVM’s progenitor, OVM, there appears to be no letup in OVM use over the next 12 months either. In fact, there is some small growth predicted, from 42% to 47%. Ongoing projects are the most probable reason that the OVM transition to UVM does not appear to start in the next 12 months. One can postulate that once projects end, teams can consider a transition from OVM to UVM. What this means for Mentor is that OVM support is going to be critical for customer success for some time.
What is declining? “Other methodologies,” such as in-house or homebrew drop fastest as the last holdouts adopt the Accellera industry standard. All the other methodologies show small declines in the coming year.
The survey results Wally shared confirm the world is tending towards dominant use of IEEE Std. 1800™ (SystemVerilog) and Accellera UVM™. If the world is aligning on these standards, can we predict the standards wars are over? Looks like another Hair musical flashback:
Then peace will guide the planets.
And love will steer the stars
There are more survey results in Wally’s keynote. I will offer additional commentary in subsequent posts. Maybe you see additional information and meaning in those numbers. If so, I invite you to share your views and opinions of them. And no, you don’t need to dim the lights, turn on the black lights, download and listen to Hair’s Aquarius to divine your view.
by Rich Edelman and Dave Rich
The UVM is a derivative of OVM 2.1.1. It has a similar use model and is run in generally the same way.
One significant change is that the UVM requires a DPI compiled library in order to enable regular expression matching, backdoor access and other functionality.
When running UVM based testbenches, we recommend using the built-in, pre-compiled UVM and DPI compiled libraries. This will remove the need to install any compilers or create a “build” environment.
One other issue to mention: if you are converting from OVM to UVM and you use stop_request() and/or global_stop_request(), then you will need to use the following plusarg; otherwise your testbench will end prematurely without awaiting your stop_request().
vsim +UVM_USE_OVM_RUN_SEMANTIC +UVM_TESTNAME=hello …
Simulating with UVM Out-Of-The-Box with Questa
The UVM base class libraries can be used out of the box with Questa 10.0b or higher very easily. There is no need to compile the SystemVerilog UVM package or the C DPI source code yourself. The Questa 10.0b release, and every release after it, contains a pre-compiled DPI library as well as a pre-compiled UVM library. The only dependency is that your host system has glibc-2.3.4 or later installed. Questa 10.0c Windows users only: please read this important note about the location of the DPI libraries.
You can easily use these steps:
vlog hello.sv
vsim hello …
Notice that we don’t have to specify +incdir+$UVM_HOME/src or $UVM_HOME/src/uvm_pkg.sv to vlog, or add a -sv_lib switch to the vsim command to load the uvm_dpi shared object.
Controlling UVM Versions
Each release of Questa comes with multiple versions of the UVM pre-compiled and ready to load. By default, a fresh install of Questa will load the latest version of UVM that is available in the release. If an older version of UVM is needed, this version can be selected in one of two ways.
Modify the modelsim.ini File
The modelsim.ini file contains a line that defines a library mapping for Questa: the mtiUvm line. It looks something like this:
mtiUvm = $MODEL_TECH/../uvm-1.1b
This example is pointing to the UVM 1.1b release included inside the Questa release. If we wanted to downgrade to UVM 1.1a, then we would simply modify the line to look like this:
mtiUvm = $MODEL_TECH/../uvm-1.1a
Command Line Switch
The Questa commands can also accept a switch on the command line to tell them which libraries to look for. This switch overrides what is specified in the modelsim.ini file if there is a conflict. The switch is -L. If this switch is used, then all Questa commands, with the exception of vlib, will need to use it.
vlib work
vlog hello.sv -L $QUESTA_HOME/uvm-1.1a
vsim hello -L $QUESTA_HOME/uvm-1.1a ...
If you are using some other platform, or you want to compile your own DPI library, please follow the directions below.
If you use an earlier Questa installation, like 6.6d or 10.0, then you must supply the +incdir, and you must compile the UVM.
For example, with 10.0a on linux, you can do
vsim -c -sv_lib $UVM_HOME/lib/uvm_dpi …
If you use your own UVM download, or you use Questa 6.6d or 10.0, you need to do the following:
vlog +incdir+$UVM_HOME/src $UVM_HOME/src/uvm_pkg.sv
mkdir -p $UVM_HOME/lib
g++ -m32 -fPIC -DQUESTA -g -W -shared $UVM_HOME/src/dpi/uvm_dpi.cc -o $UVM_HOME/lib/uvm_dpi.so
vlog +incdir+$UVM_HOME/src hello.sv
vsim -c -sv_lib $UVM_HOME/lib/uvm_dpi …
Building the UVM DPI Shared Object Yourself
If you don’t use the built-in, pre-compiled UVM, then you must provide the vlog +incdir+ and you must compile the UVM yourself, including the DPI library.
In $UVM_HOME/examples, there is a Makefile.questa which can compile and link your DPI shared object.
For Linux (linux):
setenv MTI_HOME /u/release/10.0a/questasim/
make -f Makefile.questa dpi_lib
> mkdir -p ../lib
> g++ -m32 -fPIC -DQUESTA -g -W -shared
> ../src/dpi/uvm_dpi.cc -o ../lib/uvm_dpi.so
For Linux 64 (linux_x86_64)
setenv MTI_HOME /u/release/10.0a/questasim/
make LIBNAME=uvm_dpi64 BITS=64 -f Makefile.questa dpi_lib
> mkdir -p ../lib
> g++ -m64 -fPIC -DQUESTA -g -W -shared
> ../src/dpi/uvm_dpi.cc -o ../lib/uvm_dpi64.so
For Windows (win32):
setenv MTI_HOME /u/release/10.0a/questasim/
make -f Makefile.questa dpi_libWin
> mkdir -p ../lib
> g++ -g -DQUESTA -W -shared
> -Bsymbolic -Ic:/QuestaSim_10.0a/include
> ../src/dpi/uvm_dpi.cc -o ../lib/uvm_dpi.dll
> c:/QuestaSim_10.0a/win32/mtipli.dll -lregex
Note: For Windows, you must use the GCC provided on the Questa download page: (questasim-gcc-4.2.1-mingw32vc9.zip)
Save the zip file to /tmp/questasim-gcc-4.2.1-mingw32vc9.zip and unpack it inside the MTI_HOME; this creates the GCC directories in the MTI_HOME.
Using the UVM DPI Shared Object
You should add the -sv_lib switch to your vsim invocation. You do not need to specify the extension; vsim will look for .so on linux and linux_x86_64, and .dll on Windows.
vsim -sv_lib $UVM_HOME/lib/uvm_dpi -do "run -all; quit -f"
vsim -sv_lib $UVM_HOME/lib/uvm_dpi64 -do "run -all; quit -f"
cp $UVM_HOME/lib/uvm_dpi.dll .
vsim -sv_lib uvm_dpi -do "run -all; quit -f"
Running the examples from the UVM 1.1 Release
If you want to run the examples from the UVM 1.1 release, you need to get the Open Source kit – it contains the examples.
1. Download the UVM tar.gz and unpack it.
- Go to http://verificationacademy.com/verification-methodology - the download link is in the “UVM/OVM Downloads & Contributions” box.
- On the Accellera download page, click on “Download UVM”
2. Set your UVM_HOME to point to the UVM installation.
- setenv UVM_HOME /tmp/uvm-<version#>
3. Go to the example that you want to run.
- cd $UVM_HOME/examples/simple/hello_world
4. Invoke make for your platform:
- For Windows (win32)
cd $UVM_HOME/examples/simple/hello_world
make DPILIB_TARGET=dpi_libWin -f Makefile.questa all
# Note: for Windows, you need a "development area" with make, gcc/g++, etc. Using Cygwin is the easiest solution.
- For Linux (linux)
cd $UVM_HOME/examples/simple/hello_world
make -f Makefile.questa all
- For Linux 64 (linux_x86_64)
cd $UVM_HOME/examples/simple/hello_world
make BITS=64 -f Makefile.questa all
Migration from OVM to UVM
An OVM design can be migrated to UVM using a script, and many OVM designs will convert without any hand-coded changes or other intervention. It is a good idea to first get your design running on the latest version of OVM, 2.1.2, before starting the migration process.
These designs can be converted from OVM to UVM using the distributed conversion script. In certain cases, hand-coded changes might be required.
Using the ovm2uvm script, you can do a “dry run” to see what must be changed. There are many options to the script. Before using it, you should study it carefully and run it in dry-run mode until you are comfortable with it. In all cases, make a backup copy of your source code before you use the script to replace-in-place.
By default it does not change files.
Here is a simple script which copies the OVM code and then applies the conversion:
# Copy my ovm-source to a new place.
(cd ovm-source; tar cf - .) | (mkdir -p uvm-source; cd uvm-source; tar xf -)
# Do a dry-run
$UVM_HOME/bin/ovm2uvm.pl -top_dir uvm-source
# Examine the *.patch file
# If satisfied with the analysis, change in place
$UVM_HOME/bin/ovm2uvm.pl -top_dir uvm-source -write
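At its core, this kind of conversion is mostly systematic identifier renaming. The Python sketch below is my own heavily simplified illustration of the idea, not the actual logic of ovm2uvm.pl, which handles many more cases and options:

```python
import re

def convert_ovm_to_uvm(source: str) -> str:
    """Rewrite ovm_*/OVM_* identifier prefixes to uvm_*/UVM_*.

    A toy illustration only; the real ovm2uvm.pl script performs a far
    more careful, option-driven conversion.
    """
    source = re.sub(r"\bovm_", "uvm_", source)
    source = re.sub(r"\bOVM_", "UVM_", source)
    return source

sv = "class hello_test extends ovm_test;\n  `ovm_component_utils(hello_test)\nendclass"
print(convert_ovm_to_uvm(sv))
```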
If you are migrating to the UVM from OVM, you are NOT required to use this script, but you must do a conversion by some means.
Once your OVM design is converted to UVM, you are almost ready to run.
The UVM requires that you use some DPI code. Additionally, the UVM defines a different semantic for run(). If you are using an OVM design converted to UVM, and you use stop_request() or global_stop_request(), then you need to add a switch:
vsim +UVM_USE_OVM_RUN_SEMANTIC +UVM_TESTNAME=hello …
To avoid using this switch, you need to change your OVM design so that it does not use stop_request() or global_stop_request(). Instead, control your test and testbench with objections: raise an objection as the first thing in your run tasks, and lower it where you previously issued your stop requests.
More information about migrating from OVM to UVM can be found in the Verification Academy Cookbook (registration required).