Archive for March, 2011

31 March, 2011

Design Trends (Continued)

In Part 1 of this series of blogs, I focused on design trends (click here) as identified by the 2010 Wilson Research Group Functional Verification Study (click here). In this blog, I continue presenting the study findings related to design trends, with a focus on embedded processor, power management, and clock domain trends.

Embedded Processors

In Figure 1, we see the percentage of today’s designs by the number of embedded processor cores. It’s interesting to note that 78 percent of all non-FPGA designs (as shown in green) contain one or more embedded processors and could be classified as SoCs, which by nature are complex to verify. Yet even 55 percent of all FPGA designs contain one or more embedded processors.


Figure 1. Number of embedded processor cores

Figure 2 shows the trends in terms of number of embedded processor cores for non-FPGA designs. The comparison includes the 2004 Collett study (in orange), the 2007 Far West Research study (in blue), and the 2010 Wilson Research Group study (in green).

We are unable to show FPGA trend data since none of the prior industry studies included FPGA participants. However, future studies should be able to show FPGA trends since the 2010 Wilson Research Group study did include FPGA participants.


Figure 2. Number of embedded processor core trends

The median number of embedded processor cores in 2004 was about 1.06. This number increased in 2007 to 1.46. Today, the median number of embedded processor cores is 2.14.

Another interesting analysis on the study data is to partition it into design sizes (for example, less than 1M gates, 1M to 20M gates, greater than 20M gates), and then calculate the median number of embedded processors per partitioned set. The results are shown in Figure 3, and as you would expect, the larger the design, the more embedded processor cores.
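To make this kind of partitioned analysis concrete, here is a small Python sketch. The survey responses below are made up for illustration; they are not the actual study data.

```python
from statistics import median

# Hypothetical (gate_count, embedded_cores) responses -- illustrative only,
# not the 2010 Wilson Research Group data.
responses = [
    (500_000, 1), (800_000, 1), (3_000_000, 2), (9_000_000, 2),
    (15_000_000, 3), (25_000_000, 4), (60_000_000, 6),
]

def bin_label(gates):
    """Map a design to the three size partitions used in the study."""
    if gates < 1_000_000:
        return "<1M"
    if gates <= 20_000_000:
        return "1M-20M"
    return ">20M"

# Group the core counts by design-size partition.
bins = {}
for gates, cores in responses:
    bins.setdefault(bin_label(gates), []).append(cores)

# Median embedded processor cores per partition.
medians = {label: median(cores) for label, cores in bins.items()}
print(medians)  # the larger the bin, the higher the median core count
```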


 

Figure 3. Median embedded processor cores by design size

Platform-based SoC design approaches, containing multiple embedded processor cores with lots of third-party and internally developed IP, have driven the demand for common bus architectures. In Figure 4 we see the percentage of today’s designs by the type of on-chip bus architecture for both FPGA (in grey) and non-FPGA (in green) designs.


Figure 4. On-chip bus architecture adoption

Figure 5 shows the trends in on-chip bus architecture adoption. The comparison includes the 2007 Far West Research study (in blue) and the 2010 Wilson Research Group study (in green). The studies did not partition out the various ARM AMBA bus architectures between 2007 and 2010. However, it is interesting to note that there was about a 241 percent increase in designs using the ARM AMBA bus architecture.


Figure 5. On-chip bus architecture adoption trends

One interesting way to analyze the study data is to partition the responses by geographical region. The results are shown in Figure 6. The regional comparisons are North America (in blue), Europe/Israel (in green), Asia minus India (in orange), and India (in red).

Notice how Asia appears to lead the rest of the world in the development of designs containing ARM processors.


Figure 6. On-chip bus architecture adoption by region

Another interesting analysis is to partition the data by design sizes. The results are shown in Figure 7 with the following design size partitions: less than 1M gates (in blue), 1M to 20M gates (in orange), and greater than 20M gates (in red).


Figure 7. On-chip bus architecture adoption by design size

In Figure 8 we see the percentage of today’s designs by the number of embedded DSP cores for both FPGA designs (in grey) and non-FPGA designs (in green).


Figure 8. Number of embedded DSP cores

Figure 9 shows the trends in terms of the number of embedded DSP cores for non-FPGA designs. The comparison includes the 2007 Far West Research study (in blue), and the 2010 Wilson Research Group study (in green).


Figure 9. Number of embedded DSP core trends

Independent Asynchronous Clock Domains

Figure 10 shows the percentage of designs developed today by the number of independent asynchronous clock domains. The clock domain data for FPGA designs is shown in grey, while the data for non-FPGA designs is shown in green.


Figure 10. Number of independent asynchronous clock domains

Figure 11 shows the trends in number of independent asynchronous clock domains for non-FPGA designs. The comparison includes the 2002 Collett study (in orange), the 2004 Collett study (in pink), the 2007 Far West Research study (in blue), and the 2010 Wilson Research Group study (in green).

It’s interesting to note that, although the number of clock domains is increasing over time, the sweet spot in terms of number of independent asynchronous clock domains seems to remain between two and 11 clock domains, and hasn’t changed significantly in the past nine years.


Figure 11. Number of independent asynchronous clock domain trends

Figure 12 partitions the study data by geographical region and shows the median number of independent asynchronous clock domains. The regional comparisons are North America (in blue), Europe/Israel (in green), Asia minus India (in orange), and India (in red).

Notice how Asia appears to lead the world in the median number of independent asynchronous clock domains.

 


Figure 12. Median number of independent clock domains by region

Figure 13 provides a different analysis of the data by partitioning the data into design sizes, and then calculating the median number of independent asynchronous clock domains. The design size partitions are represented as: less than 1M gates (in blue), 1M to 20M gates (in orange), and greater than 20M gates (in red).


 

Figure 13. Median number of independent clock domains by design size

Power Management

Figure 14 shows the percentage of designs that actively manage power by process geometry size. You will note that at 45nm, the study indicates that there is an increasing need to actively manage power.


Figure 14. Designs that actively manage power by process geometry

The size of the design, regardless of its process geometry, influences the decision to actively manage power, as shown in Figure 15. The design size partitions are represented as follows: less than 1M gates (in blue), 1M to 20M gates (in orange), and greater than 20M gates (in red).


Figure 15. Designs that actively manage power by design size

Although there are many techniques that are used to manage power, Figure 16 shows the percentage of use for the top eight techniques that were identified through the study. It’s important to note that many designs will implement multiple power management solutions on a single chip.


Figure 16. Top eight techniques used to actively manage power

In my next blog (click here), I’ll present data on design and verification reuse trends.


30 March, 2011

Design Trends

In my previous blog, I introduced the 2010 Wilson Research Group Functional Verification Study (click here). The objective of my previous blog was to provide a background on this large, worldwide industry study. The key findings from this study will be presented in a set of upcoming blogs. 

This blog begins the process of revealing the 2010 Wilson Research Group study findings by first focusing on current design trends.  Let’s begin by examining process geometry adoption trends, as shown in Figure 1.  Here, you will see trend comparisons between the 2007 Far West Research study (in blue), and the 2010 Wilson Research Group study (in green).


 

Figure 1. Process geometry trends

Worldwide, the median process geometry size in the 2007 Far West Research study was about 90nm, while today the median process geometry size is about 65nm. Regionally, Asia seems to be a little more aggressive in its move to smaller process geometries; there, the median process geometry size was found to be 45nm.

In addition to the industry moving to smaller process geometries, the industry is also moving to larger design sizes as measured in number of gates of logic and datapath, excluding memories (which should not be a surprise). Figure 2 compares design sizes from the 2002 Collett study (in orange), the 2007 Far West Research study (in blue), and the 2010 Wilson Research Group study (in green).


Figure 2. Number of gates of logic and datapath trends, excluding memories

The study revealed that about 30 percent of the IC/ASIC designs today are less than 1M gates, while 40 percent range in size between 1M to 20M gates, and about 30 percent of all designs are larger than 20M gates.

When compiling and analyzing the data from the study, in addition to calculating the mean on various aspects of the data, I decided to calculate the median for trend analysis. In Figure 3, I show the median design size trends between the 2002 Collett study (in orange), the 2007 Far West Research study (in blue), and the 2010 Wilson Research Group study (in green). My objective in calculating the median is that the resulting value partitions the data into equal halves, and enables us to easily see that half the designs developed today are less than 6.1M gates, while the other half are greater than 6.1M gates. Obviously, we can see that gate counts have increased over the years, yet there is still a significant number of designs being developed with smaller gate counts as indicated by the median calculation.


Figure 3. Median design size trends

Figure 4 presents the current design implementation approaches as identified by the survey participants, which include both FPGA and non-FPGA implementations.

Figure 4. Current design implementation approach

Figure 5 presents trends in design implementation approaches for non-FPGA designs, ranging from the 2002 Collett study (in pink), the 2004 Collett study (in orange), and the 2007 Far West Research study (in blue), to the 2010 Wilson Research Group study (in green). The study seems to indicate that there is a downward trend in standard-cell design implementation.


Figure 5. Non-FPGA design implementation trends

We are not able to present trends for FPGA implementations, since none of the prior studies included FPGA survey participants.

In my next blog (click here), I’ll continue discussing current design trends, focusing specifically on embedded processors, power, and clock domains.


30 March, 2011

Study Overview

In 2002 and 2004, Ron Collett International, Inc. conducted its well-known ASIC/IC functional verification studies, which provided invaluable insight into the electronic industry’s state and trends in design and verification at that point in time. However, after the 2004 study, no other industry studies were conducted, which left a void in identifying industry trends.

To address this void, Mentor Graphics commissioned Far West Research to conduct a new industry study on functional verification in the fall of 2007. The study was conducted as a blind study to avoid influencing the results. This means that the survey participants did not know that the study was commissioned by Mentor Graphics. In addition, to support trend analysis on the data, the survey followed the same format and questions that were asked in the 2002 and 2004 Collett studies.

In the fall of 2010, Mentor Graphics commissioned Wilson Research Group to conduct a new functional verification study. This study is a blind study and follows the same format as the Collett and Far West Research studies. The 2010 Wilson Research Group study is one of the largest functional verification studies ever conducted. It is about 3.5 times larger than the Collett studies, and twice as large as the Far West Research study. The overall confidence level of the study was calculated to be 95% with a margin of error of 4.1%.
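As a back-of-the-envelope illustration (my own arithmetic, not a figure reported by the study), the sample size implied by a 95% confidence level and a 4.1% margin of error can be estimated from the standard proportion formula n = z²·p(1−p)/e², using the worst-case p = 0.5:

```python
import math

z = 1.96   # z-score for a 95% confidence level
e = 0.041  # 4.1% margin of error
p = 0.5    # worst-case proportion, which maximizes the required sample size

# n = z^2 * p * (1 - p) / e^2, rounded up to a whole respondent
n = math.ceil(z**2 * p * (1 - p) / e**2)
print(n)  # 572
```

That is, these confidence numbers are consistent with a survey population on the order of a few hundred respondents.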

Unlike the previous Collett and Far West Research studies that were conducted in North America only, the 2010 Wilson Research Group study is a worldwide study. The regions targeted are:

  • North America: Canada, United States
  • Europe/Israel: Finland, France, Germany, Israel, Italy, Sweden, UK
  • Asia (minus India): China, Korea, Japan, Taiwan
  • India

The survey results are compiled both globally and regionally for analysis.

Another difference with this study is that it includes FPGA engineers. When I started to process the study results, I decided to compile the FPGA and non-FPGA data both combined (when appropriate) and separately for analysis. Obviously, for trend analysis I can only show the non-FPGA (IC/ASIC) data since no previous study included FPGA participants.

When compiling and analyzing the data from the study, in addition to calculating the mean on various aspects of the data, I decided to calculate the median for trend analysis. My objective in calculating the median is that the resulting value partitions the data into equal halves, which at times is more insightful when discussing trends.
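To illustrate why the median can be more insightful than the mean (the numbers below are invented for illustration, not taken from the study), a few very large designs pull the mean well above the typical design, while the median still splits the population into equal halves:

```python
from statistics import mean, median

# Hypothetical design sizes in millions of gates -- a skewed distribution.
gate_counts = [0.5, 1, 2, 4, 6, 9, 15, 80, 120]

print(mean(gate_counts))    # about 26.4 -- pulled up by the two largest designs
print(median(gate_counts))  # 6 -- half the designs are smaller, half larger
```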

Figure 1 shows the percentage makeup of survey participants by company type. The grey bars represent the FPGA participants, while the green bars represent the non-FPGA (i.e., IC/ASIC) participants.


 

Figure 1: Survey participants’ company description

Figure 2 shows the percentage makeup of survey participants by their job description. Again, the grey bars represent the FPGA participants, while the green bars represent the non-FPGA (i.e., IC/ASIC) participants.


Figure 2: Survey participants’ job title description

In a future set of blogs, I plan to present the highlights from the 2010 Wilson Research Group study along with my analysis, comments, and obviously, opinions. A few interesting observations emerged from the study, which include:

  1. Reuse adoption is increasing.
  2. The effort spent on verification is increasing.
  3. The industry is adopting more advanced functional verification techniques.

My next blog (click here) presents current design trends that were identified by the survey. This will be followed by a set of blogs focused on the functional verification results.

Quick links to the 2010 Wilson Research Group Study results (so far…)

More to come!!!


25 March, 2011

Wally Rhines DVCon 2011 Keynote Highlights Survey on Verification Languages

OK, maybe it is not the Dawning of the Age of Aquarius, but Wally Rhines’ DVCon 2011 keynote did have a slide titled “SystemVerilog in the Ascendancy.”  It is not a word I see or use much.  In fact, Google labs’ “Book Ngram Viewer” shows ascendancy has been in decline since around 1825.

It struck me that the title was tending towards the allegoric, if not mostly there, due to its conjuring possible metaphoric, astrological meaning, as I began to wonder if planetary positioning was going to be offered on the next slide to bolster SystemVerilog’s ascendancy. I asked myself: Is SystemVerilog’s “ascendancy” a move to a new spiritual level? Has it transcended all other languages to garner greater social importance for design and verification? Is this a greater representation of another trend? Or, perhaps, I was having a flashback to the hippie era. After all, I was hearing in my mind that Hair song with the phrase

When the moon is in the second house
and Jupiter aligned with Mars…

But I was too young in the hippie era of 1967 to have a real flashback.  And Wally’s keynote was not some hippie mumbo jumbo.  I am also more than certain any of the engineers in the room at DVCon with some physics background could tell us Jupiter aligns with Mars several times a year and the few who might have astrological training (I’ve got to meet them!) could share with us the Moon is in the 7th House for about two hours every day.

Wally’s DVCon 2011 keynote was presented in three parts.  The third and last part was on language transitions.  When he got to that section he started it by presenting a slide on language transition titled “SystemVerilog in the Ascendancy.”

When Wally last keynoted DVCon in 2008, he presented information that SystemVerilog had been adopted by 24% of survey respondents in 2007. For 2010 that number is 60%, and it will be 74% in 2011.

When some things go up, others go down.  It is no surprise that VERA, which seeded the SystemVerilog standard, has reached a low level of predicted use in 2011 of 3%.  Joining this decline is the other language of that day that battled with VERA, “e.”  “e” use was at 16% in 2007 and 15% in 2010, but users plan a greater than 25% reduction in use from 2010 to 2011.  This is a rather dramatic drop in one year, given it has held so steady from 2007 until now.

Wally also discussed the adoption of languages by geography. SystemVerilog has a strong global presence with particular strength in Asia and India. The “e” language shows focused geographic use in Europe/Israel followed by India. VHDL also shows focused geographic use, with Europe/Israel leading followed by North America. It is interesting to note some languages have broad global appeal while others have only regional adoption.

Wally also touched on the adoption trends in testbench base-class libraries. Accellera’s UVM shows the largest growth from 2010 use to predicted use in 2011. It should grow from 7% to 27% in the next 12 months. While many projects adopted UVM’s progenitor, OVM, there appears to be no letup in use of OVM either over the next 12 months. In fact, there is some small growth predicted, from 42% to 47%. Ongoing projects are the most probable reason that the OVM transition to UVM does not appear to start in the next 12 months. One can postulate that once projects end, teams can consider a transition from OVM to UVM. What it means to Mentor is that OVM support is going to be critical for customer success for some time.

What is declining?  “Other methodologies,” such as in-house or homebrew drop fastest as the last holdouts adopt the Accellera industry standard.  All the other methodologies show small declines in the coming year.

The survey results Wally shared confirm the world is tending towards dominant use of IEEE Std. 1800™ (SystemVerilog) and Accellera UVM™.   If the world is aligning on these standards, can we predict the standards wars are over?  Looks like another Hair musical flashback:

Then peace will guide the planets.
And love will steer the stars

There are more survey results in Wally’s keynote.  I will offer additional commentary in subsequent posts.  Maybe you see additional information and meaning in those numbers.  If so, I invite you to share your views and opinions of them.  And no, you don’t need to dim the lights, turn on the black lights, download and listen to Hair’s Aquarius to divine your view.


8 March, 2011

by Rich Edelman and Dave Rich

Introduction

The UVM is a derivative of OVM 2.1.1. It has a similar use model and is run in generally the same way.

One significant change is that the UVM requires a DPI-compiled library in order to enable regular expression matching, backdoor access, and other functionality.

When running UVM-based testbenches, we recommend using the built-in, pre-compiled UVM and DPI libraries. This removes the need to install any compilers or create a “build” environment.

One other issue to mention: if you are converting from OVM to UVM and you use stop_request() and/or global_stop_request(), then you will need to use the following plusarg; otherwise your testbench will end prematurely without awaiting your stop_request().

vsim +UVM_USE_OVM_RUN_SEMANTIC +UVM_TESTNAME=hello …

Simulating with UVM Out-Of-The-Box with Questa

The UVM base class libraries can be used out of the box with Questa 10.0b or higher very easily. There is no need to compile the SystemVerilog UVM package or the C DPI source code yourself. The Questa 10.0b release, and every release afterwards, contains a pre-compiled DPI library as well as a pre-compiled UVM library. The only dependency is that your host system requires glibc-2.3.4 or later. Questa 10.0c Windows users only: please read this important note about the location of the DPI libraries.

You can easily use these steps:

vlib work
vlog hello.sv
vsim hello …

Notice that we don’t have to specify +incdir+$(UVM_HOME)/src or $(UVM_HOME)/src/uvm_pkg.sv to vlog, or add a -sv_lib option to the vsim command to load the uvm_dpi shared object.

Controlling UVM Versions

Each release of Questa comes with multiple versions of the UVM pre-compiled and ready to load.  By default, a fresh install of Questa will load the latest version of UVM that is available in the release.  If an older version of UVM is needed, this version can be selected in one of two ways.

Modify the modelsim.ini File

The modelsim.ini file contains a line that defines a library mapping for Questa: the mtiUvm line. It looks something like this:

mtiUvm = $MODEL_TECH/../uvm-1.1b

This example is pointing to the UVM 1.1b release included inside the Questa release.  If we wanted to downgrade to UVM 1.1a, then we would simply modify the line to look like this:

mtiUvm = $MODEL_TECH/../uvm-1.1a

Command Line Switch

The Questa commands can also accept a switch on the command line to tell them which libraries to look for. This switch overrides what is specified in the modelsim.ini file if there is a conflict. The switch is ‘-L’. If this switch is used, then all Questa commands, with the exception of vlib, will need to use it.

vlib work
vlog hello.sv -L $QUESTA_HOME/uvm-1.1a
vsim hello -L $QUESTA_HOME/uvm-1.1a ...

If you are using some other platform, or you want to compile your own DPI library, please follow the directions below.

If you use an earlier Questa installation, like 6.6d or 10.0, then you must supply the +incdir, and you must compile the UVM.

For example, with 10.0a on Linux, you can do:

vlib work
vlog hello.sv
vsim -c -sv_lib $UVM_HOME/lib/uvm_dpi …

If you use your own UVM download, or you use Questa 6.6d or 10.0, you need to do the following:

vlib work
vlog +incdir+$UVM_HOME/src $UVM_HOME/src/uvm_pkg.sv
mkdir -p $UVM_HOME/lib
g++ -m32 -fPIC -DQUESTA -g -W -shared \
  -I/u/release/10.0a/questasim//include \
  $UVM_HOME/src/dpi/uvm_dpi.cc \
  -o $UVM_HOME/lib/uvm_dpi.so
vlog +incdir+$UVM_HOME/src hello.sv
vsim -c -sv_lib $UVM_HOME/lib/uvm_dpi …

Building the UVM DPI Shared Object Yourself

If you don’t use the built-in, pre-compiled UVM, then you must provide the vlog +incdir+ and you must compile the UVM yourself, including the DPI library.

In $UVM_HOME/examples, there is a Makefile.questa which can compile and link your DPI shared object.

For Linux (linux):

cd $UVM_HOME/examples
setenv MTI_HOME /u/release/10.0a/questasim/
make -f Makefile.questa dpi_lib

> mkdir -p ../lib
> g++ -m32 -fPIC -DQUESTA -g -W -shared
>   -I/u/release/10.0a/questasim//include
>   ../src/dpi/uvm_dpi.cc -o ../lib/uvm_dpi.so

For Linux 64 (linux_x86_64):

cd $UVM_HOME/examples
setenv MTI_HOME /u/release/10.0a/questasim/
make LIBNAME=uvm_dpi64 BITS=64 -f Makefile.questa dpi_lib

> mkdir -p ../lib
> g++ -m64 -fPIC -DQUESTA -g -W -shared
>   -I/u/release/10.0a/questasim//include
>   ../src/dpi/uvm_dpi.cc -o ../lib/uvm_dpi64.so

For Windows (win32):

cd $UVM_HOME/examples
setenv MTI_HOME /u/release/10.0a/questasim/
make -f Makefile.questa dpi_libWin

> mkdir -p ../lib
> c:/QuestaSim_10.0a/gcc-4.2.1-mingw32vc9/bin/g++.exe
>   -g -DQUESTA -W -shared
>   -Bsymbolic -Ic:/QuestaSim_10.0a/include
>   ../src/dpi/uvm_dpi.cc -o
>   ../lib/uvm_dpi.dll
>   c:/QuestaSim_10.0a/win32/mtipli.dll -lregex

Note: For Windows, you must use the GCC provided on the Questa download page: (questasim-gcc-4.2.1-mingw32vc9.zip)

Save to /tmp/questasim-gcc-4.2.1-mingw32vc9.zip
cd $MTI_HOME
unzip /tmp/questasim-gcc-4.2.1-mingw32vc9.zip
<creates the GCC directories in the MTI_HOME>

Using the UVM DPI Shared Object

You should add the -sv_lib switch to your vsim invocation. You do not need to specify the extension; vsim will look for ‘.so’ on linux and linux_x86_64, and ‘.dll’ on Windows.

linux:

vsim -sv_lib $UVM_HOME/lib/uvm_dpi -do "run -all; quit -f"

linux_x86_64:

vsim -sv_lib $UVM_HOME/lib/uvm_dpi64 -do "run -all; quit -f"

win32:

cp $UVM_HOME/lib/uvm_dpi.dll .
vsim -sv_lib uvm_dpi -do "run -all; quit -f"

Running the examples from the UVM 1.1 Release

If you want to run the examples from the UVM 1.1 release, you need to get the Open Source kit – it contains the examples.

1. Download the UVM tar.gz and unpack it.

2. Set your UVM_HOME to point to the UVM installation.

  • setenv UVM_HOME /tmp/uvm-<version#>

3. Go to the example that you want to run.

  • cd $UVM_HOME/examples/simple/hello_world

4. Invoke make for your platform:

  • For Windows (win32)
cd $UVM_HOME/examples/simple/hello_world
make DPILIB_TARGET=dpi_libWin -f Makefile.questa all
# Note: for Windows, you need a "development area" with make, gcc/g++, etc. Using Cygwin is the easiest solution
  • For Linux (linux)
cd $UVM_HOME/examples/simple/hello_world
make -f Makefile.questa all
  • For Linux 64 (linux_x86_64)
cd $UVM_HOME/examples/simple/hello_world
make BITS=64 -f Makefile.questa all

Migration from OVM to UVM

An OVM design can be migrated to UVM using a script. Many OVM designs will convert without any hand-coded changes or other intervention. It is a good idea to get your design running on the latest version of OVM, 2.1.2, before starting the migration process.

These designs can be converted from OVM to UVM using the distributed conversion script:

cd $MY_TEST_BENCH
$UVM_HOME/bin/ovm2uvm

In certain cases, hand-coded changes might be required.

Using the ovm2uvm script, you can do a “dry run” to see what must be changed. There are many options to the script. Before using it, you should study it carefully and run it in dry-run mode until you are comfortable with it. In all cases, make a backup copy of your source code before you use the script to replace in place.

By default it does not change files.

Here is a simple script which copies the OVM code, then applies the conversion script.

# Copy my ovm-source to a new place.
(cd ovm-source; tar cf - .) | (mkdir -p uvm-source; cd uvm-source; tar xf -)

# Do a dry-run
$UVM_HOME/bin/ovm2uvm.pl -top_dir uvm-source

# Examine the *.patch file
# ...

# If satisfied with the analysis, change in place
$UVM_HOME/bin/ovm2uvm.pl -top_dir uvm-source -write

If you are migrating to the UVM from OVM, you are NOT required to use this script, but you must do a conversion by some means.

Once your OVM design is converted to UVM, you are almost ready to run.

The UVM requires that you use some DPI code. Additionally, the UVM defines a different semantic for run(). If you are using an OVM design converted to UVM, and you use stop_request() or global_stop_request(), then you need to add a switch:

vsim +UVM_USE_OVM_RUN_SEMANTIC +UVM_TESTNAME=hello …

In order to not use this switch, you need to change your OVM design so that it does not use stop_request() or global_stop_request(). Instead, control your test and testbench by raising objections as the first thing in your run tasks, and then lowering those objections where you previously had your stop requests.

More information about migrating from OVM to UVM can be found in the Verification Academy Cookbook (registration required).

