Posts Tagged ‘UVM’

22 July, 2017

There’s a wonderful quote in Brian Kernighan’s book The Elements of Programming Style, where he says “Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?” Humor aside, our 2016 Wilson Research Group study supports the claim that debugging is a challenge today, with most projects spending more time on debugging than on any other task.

One insidious aspect of debugging is that it is unpredictable if not properly managed. For example, from a management perspective, it is easy to gather metrics on various processes involved in the product development lifecycle from previous projects in order to plan for the future. However, the unpredictability of debugging becomes a manager’s nightmare. Guidelines for efficient debugging are required to improve productivity.

In reality, debugging is required in every process across a product’s development lifecycle: conception, architectural design, implementation, post-silicon, and even the tests and testbenches we create to verify/validate a design. The emergence of SystemVerilog and UVM has necessitated the development of new debugging skills and tools, and traditional debugging approaches that worked for simple RTL testbenches have become less productive. To address this dearth of new debugging process knowledge, I’m excited to announce that the Verification Academy has just released a new UVM debugging course. The course consists of five video sessions. Topics that are covered in this course include:

• A historical perspective on the general debugging problem
• Ways to effectively debug memory leaks in UVM environments
• How to debug connectivity issues between UVM components
• Guidelines to effectively debug common issues related to UVM phases
• Methods to debug common issues with the UVM configuration database

As always, this course is available free to you. To learn more about our new Verification Academy debugging course, as well as all our other video courses, please visit www.verificationacademy.com.


24 February, 2017

My last blog post was written a few years ago before attending a conference when I was reminiscing about the 10-year history of SystemVerilog. Now I’m writing about going to another conference, DVCon, and being part of a panel reminiscing about the 15-year history of SystemVerilog and envisioning its future. My history with SystemVerilog goes back much further.

My first job out of college was working with the group of Data General (DG) engineers made famous in Tracy Kidder’s 1981 book, The Soul of a New Machine. Part of my job was writing a program that simulated the microcode of the CPUs we designed. Back then, there were no hardware description languages and you had to hard-code everything for each CPU. If you were lucky you could reuse some of the code for the user interface between projects. Later, DG came up with a somewhat more general-purpose simulation language. It was general-purpose in the sense that it could be used for a wider range of projects based on the way DG developed hardware, but getting it to work in another company’s environment would have been a challenge. By the way, Badru Agarwala was the DG simulation developer I worked with who later founded the Verilog simulation companies Frontline and Axiom. He now manages the Calypto division at Mentor Graphics.

Many other processor companies like DEC, IBM and Intel had their own in-house simulation languages or were in the process of developing one because no commercially viable technologies existed. Eventually, Phil Moorby at Gateway Design began developing the Verilog language and simulator. One of the benefits of having an independent language, although not an official standard yet, was you could now share or bring in models from outside your company. This includes being able to hand off a Verilog netlist to another company for manufacturing. Another benefit was that companies could now focus on the design and verification of their products instead of the design and verification of tools that design and verify their products.

I evaluated Verilog in its first year of release back in 1985/1986. DG decided not to adopt Verilog at that point, but I liked it so much I left DG and joined Gateway Design as one of its first application engineers. Dropping another name, Karen Bartleson was one of my first customers as a CAD manager working at UTMC. She recently took the office of President and CEO of the IEEE.

Fast forward to the next decade, when Verilog became standardized as IEEE 1364-1995. But by then it had already lost ground in the verification space, and companies went back to developing their own in-house verification solutions. Sun Microsystems developed Vera and later released it as a commercial product marketed by Systems Science. Arturo Salz was one of its developers and will be on the DVCon panel with me as well. Specman was developed for National Semiconductor and a few other clients and was later marketed by Verisity. Once again, we had the problem of competing independent languages, which limited the ability to share or acquire verification models. So, in 1999, a few Gateway alums and others formed a startup, which I joined a year later, hoping to consolidate design and verification back into one standard language. That language was SUPERLOG, and it became the starting point for the Accellera SystemVerilog 3.0 standard in 2002, fifteen years ago.

There are many dates you could pick for the start of SystemVerilog. You could claim it couldn’t exist until there was a simulator supporting some of the features in the standard. For me, it started when I first read an early Verilog Language Reference Manual and executed my first initial block 31 years ago. I’ve been using Verilog almost every day since, and now all of Verilog is part of SystemVerilog. I’ve been so much a part of the development of the language from its early beginnings that some of my colleagues call me “The Walking LRM”. Luckily, I don’t dream about it. I hope I never get called “The Sleeping LRM”.

So, what’s next for SystemVerilog? Are we going to repeat the cycle of fragmentation and re-consolidation? Various extensions have already begun showing up in different implementations. SystemVerilog has become so complex that no one can keep a good portion of it in their head anymore. It is very difficult to remove anything once it is in the LRM. Should we start over? We tried to do that with SUPERLOG, but no one adopted it until it was made fully backward compatible with Verilog.

The Universal Verification Methodology (UVM) was designed to cut down the complexity of learning and using SystemVerilog. But the UVM itself has exploded in complexity, and there is now a growing number of sub-methodologies for using it (UVM Framework and Easier UVM, to name a couple). I have also taken my own approach when teaching people SystemVerilog by showing a minimal subset (see my SystemVerilog OOP for UVM Verification course).

I do believe in the KISS principle of engineering, and I hope that is what prevails in the next version of SystemVerilog, whether we start over or just make refinements. I hope you will be able to join the discussion with others and me at the DVCon panel next week, in the forums in the Verification Academy, or on Twitter.

-Dave


4 January, 2017

Happy Holidays!

Hopefully, wherever you are you are enjoying some time off.

At our house, we’re planning a large dinner, including Prime Rib, mashed potatoes and gravy, sweet potatoes, green beans, butternut squash, Brussels sprouts, salad and some delicious Parker House rolls. Who knows what else? And of course, pies for dessert. Probably indigestion.

Check out Martha Stewart’s Prime Rib: http://www.marthastewart.com/971784/prime-rib-roast

This great bounty of food is one thing our family looks forward to at this time of year. Some good eats and good time together.

Another great bounty that you might be looking forward to is the UVM Register package. It’s been around since the beginnings of the UVM, 6 years ago, but it’s getting a lot more interest recently. We’re seeing increases in questions, and we’re spending more time helping customers build out their UVM Register verification infrastructure.

But many of the customers aren’t quite sure what to do with the UVM Register package. What do I do about my quirky register models? What kind of coverage do I need? Do I really need to know that all the bits toggled in a system-level simulation? Callbacks? Do I really have to write all that code? I have a special address map – how do I model that? What about memory? No answers here, just a word of caution. Avoid indigestion.

The UVM Register package is big. It weighs in at about a quarter of the UVM.

The uvm-1.2 register source code is 26 files, 21,668 lines, 450 functions, 154 tasks, 48 classes. About 28% of the total lines.

The UVM Register model is sophisticated.

In the UVM 1.2 User Guide, the register documentation is the largest chapter at 50 pages out of 190 total. (Not to mention the almost 200 pages in the UVM 1.2 Class Reference manual out of 938 pages total).

Modeling a 32 bit register with a collection of classes is a lot of overhead.

For a small block it works fine. For a larger block it can become problematic. For a system, it is a real problem – a very large memory footprint for register modeling.
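To make that overhead concrete, here’s a minimal sketch of what one 32-bit register looks like as a collection of classes. The register and field names here are hypothetical; the uvm_reg/uvm_reg_field calls are the standard ones:

class ctrl_reg extends uvm_reg;
  `uvm_object_utils(ctrl_reg)
  rand uvm_reg_field enable;
  rand uvm_reg_field mode;

  function new(string name = "ctrl_reg");
    super.new(name, 32, UVM_NO_COVERAGE); // one class object for the register itself
  endfunction

  virtual function void build();
    // ...plus one more class object per field, each with its own configure() call
    enable = uvm_reg_field::type_id::create("enable");
    enable.configure(this, 1, 0, "RW", 0, 1'b0, 1, 1, 0);
    mode = uvm_reg_field::type_id::create("mode");
    mode.configure(this, 4, 1, "RW", 0, 4'h0, 1, 1, 0);
  endfunction
endclass

Multiply that by thousands of registers in a system-level model and the memory footprint problem is easy to see.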

I’m sure you must have questions too. How do you use the UVM Register package to solve your verification issues? Do you do some things at block level and other things at the system level?

It’s been about 10 years since the AVM came on the scene, followed by OVM and UVM. I’m hoping we can continue innovating, but without the indigestion of code bloat.

 AVM-3.0  :  38 files,  6,598 lines,   275 functions,  24 tasks, 135 classes
 OVM-2.1.2: 106 files, 37,786 lines, 1,230 functions, 159 tasks, 275 classes
 UVM-1.1d : 133 files, 67,965 lines, 1,893 functions, 318 tasks, 375 classes
 UVM-1.2  : 145 files, 75,642 lines, 2,203 functions, 318 tasks, 411 classes

The UVM Register package is big and interesting, just like my Prime Rib dinner. I have a plan to avoid indigestion. Do you?

How are you using the UVM Register package? And how do you avoid indigestion?

Thanks and Happy Holidays


15 December, 2016

Technical Program is Live

For the past several months, the DVCon U.S. Steering Committee has been meeting to craft a compelling event of technical papers, panels, keynotes, poster sessions and more for you.  With the hard work of authors who supply this content and the Technical Program Committee that reviews and selects from this content, a 4-day event schedule is now published.  You can find the event schedule here.

I am pleased to chair DVCon U.S. 2017 and work with such an august body of people – from the electronic design automation industry, design and verification practitioners and professionals from large systems houses to small consultancies – all of whom work hard to make this happen for you.  As has been the tradition of DVCon U.S. the past several years, the event starts with Accellera Day on Monday (Feb 27th), followed by two days of paper presentations, keynotes, panels and an exhibition.  The exhibition starts Monday, Accellera Day.  The last day of DVCon U.S. features a full day of tutorials split into half-day parts.

Accellera Day

DVCon U.S. will feature something for advanced users as well as those who are newer to the field.  The conference will showcase emerging standards and updates to standards already in wide use.  On Monday, Accellera Day, DVCon U.S. begins with a tutorial devoted to work underway within Accellera on a new standard, “Portable Stimulus,” that is set to give design and verification engineers a boost in overall design and verification productivity.  Given the work by the Accellera Portable Stimulus Working Group to put as much of the standard in place as it can, this tutorial, Creating Portable Stimulus Models with the Upcoming Accellera Standard, is sure to be an important educational opportunity.  If you are a user of UVM (Universal Verification Methodology), you will find the Portable Stimulus standard is set to remove many of the limitations of reuse at the subsystem and full-chip level and to address the lack of portability across execution platforms.  Are you ready for Portable Stimulus?  You will be ready after attending this tutorial.

At the Monday luncheon, I anticipate a moderated panel discussion hosted by Accellera on the emerging Portable Stimulus standard, building on what you learned in the morning session.  As lunch ends, two parallel tutorials will start: one on IEEE P1800.2™ (aka UVM) and the other on SystemC design and verification advances.  Accellera Day is a great event to learn about the latest in the evolution of standards coming from Accellera and the IEEE.

Special Session

DVCon U.S. will make one departure from prior years’ programs and offer a special session on Tuesday on Trends in Functional Verification: A 2016 Industry Study, presented by Harry Foster.  Harry has been reporting on the 2016 Wilson Research Group Study here at the Verification Horizons BLOG, and he has shared regional information at DVCon Europe and DVCon India on adoption and use of design and verification tools, technology and standards.  At DVCon U.S. he will pull all this together to show trends and offer predictions for the future.

There is much more to DVCon U.S. 2017 that I think you will find useful.  I leave it to you to explore the program and discover this for yourself.  And if you can make it to DVCon U.S., registration is open, with discounted advance rates available until January 26th.  I hope to see you there!


31 October, 2016

ASIC/IC Language and Library Adoption Trends

This blog is a continuation of a series of blogs related to the 2016 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I focused on various verification technology adoption trends. In this blog I plan to discuss various ASIC/IC language and library adoption trends.

Figure 1 shows the adoption trends for languages used to create RTL designs. Essentially, the adoption rates for all languages used to create RTL designs are projected to be either declining or flat over the next year.


Figure 1. ASIC/IC Languages Used for RTL Design

As previously noted, the reason some of the results sum to more than 100 percent is that some projects are using multiple languages; thus, individual projects can have multiple answers.

Figure 2 shows the adoption trends for languages used to create ASIC/IC testbenches. Essentially, the adoption rates for all languages used to create testbenches are either declining or flat. Furthermore, the data suggest that SystemVerilog adoption is starting to saturate or level off in the mid-70s range.


Figure 2. ASIC/IC Languages Used for Verification (Testbenches)

Figure 3 shows the adoption trends for various ASIC/IC testbench methodologies built using class libraries.


Figure 3. ASIC/IC Methodologies and Testbench Base-Class Libraries

Here we see a decline in adoption of all methodologies and class libraries with the exception of Accellera’s UVM, whose adoption continued to increase between 2014 and 2016. Furthermore, our study revealed that UVM is projected to continue its growth over the next year. However, like SystemVerilog, it will likely start to level off in the mid- to upper-70 percent range.

Figure 4 shows the ASIC/IC industry adoption trends for various assertion languages, and again, SystemVerilog Assertions seems to have saturated or leveled off.


Figure 4. ASIC/IC Assertion Language Adoption

In my next blog (click here) I plan to present the ASIC/IC design and verification power trends.

Quick links to the 2016 Wilson Research Group Study results


7 October, 2016

UVM and Better Debug – The UVM Factory and Config conspire against me

Sitting in my chair pulling out what’s left of my hair, trying to find my bug. Well, my customer’s bug, but now my bug too.

The customer has written a simple “replacement” monitor in UVM. This new monitor adds coverage. With a simple factory override the new monitor SHOULD be in place, collecting coverage, but it is not. The UVM testbench compiles and runs just fine. But no coverage is collected. No error message. A UVM Mystery. Causing UVM Misery…

I check what the internal names of the classes are. No help. Things look fine.

Visualizer 25> classinfo descriptive tb_pkg::monitor::monitor__1
# Class /tb_pkg::monitor::monitor__1 maps to monitor #( 8)
Visualizer 26> classinfo descriptive tb_pkg::monitor_with_cvg::monitor_with_cvg__1
# Class /tb_pkg::monitor_with_cvg::monitor_with_cvg__1 maps to monitor_with_cvg #( 8)

So I jump into interactive simulation. I’ll set some breakpoints INSIDE the UVM factory – just where I want to be on a Friday.

Trapped. Sitting in the debugger. Setting breakpoints. But a tricky outcome.

It turns out that there are two problems. The first problem is that parameterized classes are not first class citizens in the UVM Factory. Both of my monitors have the same name in the factory – “<unknown>”. You can see this if you do factory.print(). The second problem is that the rules of type compatible assignments are complicated, and this user had written some code that made it hard to see.
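If you want to see those “<unknown>” names for yourself, here’s a minimal sketch of calling factory.print() from a component (UVM 1.2 API shown; UVM 1.1 exposes a global factory handle instead of uvm_factory::get()):

function void end_of_elaboration_phase(uvm_phase phase);
  uvm_factory f = uvm_factory::get();
  // all_types=1 prints the registered types along with the override tables;
  // parameterized classes registered via the *_param_utils macros print as "<unknown>"
  f.print(1);
endfunction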

UVM Factory Source Code

The RED box is going to fail. The type requested is NOT the same as the orig_type. The ORANGE box is true. The orig_type_name IS “<unknown>”. So the BLUE box will NOT be executed. (The BLUE box is a successful lookup for the factory override. We want the BLUE box!). We just failed to find the override. Failure means NO override, just use the original type! Aha! We missed the override and just used the non-code-coverage enabled monitor. But why didn’t it match?

It should have matched on the RED box! The type should have been the same. How could it be different?

In the test, we set the override, to replace the regular monitor with the coverage enabled monitor:

monitor#(BITWIDTH)::type_id::set_type_override(monitor_with_cvg#(BITWIDTH)::get_type());

In the agent, I have:

monitor#(BITWIDTH) m;
function void build_phase(uvm_phase phase);
  m = monitor#(BITWIDTH)::type_id::create("m", this);
endfunction

From the debugger, let’s try dumping the database with this tcl script:

# puts "full_inst_path [examine {m_type_overrides[0].full_inst_path}]"
# puts "orig_type_name [examine {m_type_overrides[0].orig_type_name}]"
# puts "ovrd_type_name [examine {m_type_overrides[0].ovrd_type_name}]"
# puts "      selected [examine {m_type_overrides[0].selected}]"
# puts "     orig_type [examine {m_type_overrides[0].orig_type}]"
# puts "     ovrd_type [examine {m_type_overrides[0].ovrd_type}]"

Huh? Lots of <unknown> AND we see that indeed, the orig_type is not the same as the requested_type. According to the factory THIS override is not applicable for the requested_type. But we know it is.

# full_inst_path *
# orig_type_name <unknown>
# ovrd_type_name <unknown>
#       selected 1'h0
#      orig_type {{<unknown>} @uvm_component_registry__8@1}
#      ovrd_type {{<unknown>} @uvm_component_registry__6@1}
# requested_type {{<unknown>} @uvm_component_registry__7@1}

Hmm. Something heretical! Let’s replace those nasty uvm_component_param_utils macros with a few lines of SystemVerilog. Look what happens when we DON’T use the UVM macros… No more “<unknown>”. We still didn’t fix our problem, but now debug is getting better. And we get a nice SystemVerilog assignment error message.
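For reference, the manual registration looks roughly like this (a sketch, not the customer’s exact code). Note that the type-name string must be supplied per specialization, which is exactly the bookkeeping the macro avoids and why it registers “<unknown>” instead:

class monitor_with_cvg #(int BITWIDTH = 8) extends monitor #(BITWIDTH);
  // Manual registration: the string gives the factory a real lookup name.
  // Each specialization needs its own string; "8" is hard-coded here.
  typedef uvm_component_registry #(monitor_with_cvg#(BITWIDTH),
                                   "monitor_with_cvg#(8)") type_id;

  static function type_id get_type();
    return type_id::get();
  endfunction

  virtual function uvm_object_wrapper get_object_type();
    return type_id::get();
  endfunction

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
endclass

With that in place, the same dump now shows real names: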

# full_inst_path *
# orig_type_name monitor#(8)
# ovrd_type_name monitor_with_coverage#(8)
#       selected 1'h0
#      orig_type {{monitor#(8)} @uvm_component_registry__7@1}
#      ovrd_type {{monitor_with_coverage#(8)} @uvm_component_registry__8@1}
# requested_type {{monitor#(8)} @uvm_component_registry__6@1}

Or just use the GUI instead of the command line:

UVM Factory Type Override Structure

That’s getting to be a useful debug…

What happened in this case is that the factory failed to be applied and ADDITIONALLY the UVM produced no error message. The code ran fine. This is a big problem – how can you tell what is running if your factory overrides don’t take?

See our paper in DVCON India – “Paper 6.1 Does the Factory Say Override?” for all the gory details. Spoiler alert! The user code fix is simple. Change “class monitor#(UINT32 BITWIDTH) …” to “class monitor#(BITWIDTH) …”. But boy was that hard to find.

Better debug here means fixing the UVM factory bugs and adding instrumentation for transparency. The paper has details on ways to improve debug.

Speaking of trouble in debug land. Have you ever had your config setting get it wrong?

Oops. I put the config setting on the wrong hierarchy.

Oops. I set the config AFTER I did the get.

Oops. I did a config get in a very expensive loop (a clock loop).

Stop by DVCON Europe – “Paper 5.2 Go Figure – UVM Config – The Good, The Bad, The Debug” to hear about a simple suggestion to add UVM configuration debug. But I’ll give you a preview.

The config database is a data structure. We put things in it (we “set” configurations). We get things out of it (we “get” configurations). For example, we might set the configuration for an AXI bus, including the desired traffic density. Later, in our testbench, we might get the configuration, and build the testbench to generate this traffic density. What if we get the wrong traffic density? How will we know? How to debug it?
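In code, that set/get flow looks roughly like this (the component paths and the “traffic_density” field name are made up for illustration):

// In the test: set the desired traffic density for the AXI agent(s)
uvm_config_db#(int)::set(this, "env.axi_agent*", "traffic_density", 75);

// Later, in a component under env.axi_agent*: get it back, and check
// the return value, because a silent miss is exactly the problem here
int td;
if (!uvm_config_db#(int)::get(this, "", "traffic_density", td))
  `uvm_warning("CFG", "traffic_density was never set; using a default")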

For this setting and getting, how about adding three arguments (CALLING_CONTEXT, FILE, and LINE below)?

static function void set(uvm_component cntxt,
  string inst_name,
  string field_name,
  T value,
  input uvm_object CALLING_CONTEXT = null,
  input string FILE = "",
  input int    LINE = 0
);

And

static function bit get(uvm_component cntxt,
  string inst_name,
  string field_name,
  inout T value,
  input uvm_object CALLING_CONTEXT = null,
  input string FILE = "",
  input int    LINE = 0
);

With these additions, every time we do a set, we remember where that set came from: the file and line number, and the calling context (the object handle) where the set executed. Every time we do a get, we can report the file and line number and the calling context (the object handle) where the get executed. Now we have really good visibility into the flow of sets and gets. The UVM does provide some debug switches, but they shed no light when you are trying to debug what happened in some hairball if-then-else. And this is just one small piece of configuration. In addition to adding the extra arguments above, the decision-making code inside the UVM config needs to be instrumented so that we can understand which decision point caused a match to succeed or fail. The code below needs to be expanded (instrumented) to improve transparency for debug. (Hint: See the paper at DVCON Europe).

function uvm_resource_types::rsrc_q_t lookup_name(string scope = "",
  string name,
  uvm_resource_base type_handle = null,
  bit rpterr = 1);
  uvm_resource_types::rsrc_q_t rq;
  uvm_resource_types::rsrc_q_t q = new();
  uvm_resource_base rsrc;
  uvm_resource_base r;

  // resources with empty names are anonymous and do not exist in the name map
  if(name == "")
    return q;

  // Does an entry in the name map exist with the specified name?
  // If not, then we're done
  if((rpterr && !spell_check(name)) || (!rpterr && !rtab.exists(name))) begin
    return q;
  end
  rsrc = null;
  rq = rtab[name];
  for(int i=0; i<rq.size(); ++i) begin
    r = rq.get(i);
    // does the type and scope match?
    if(((type_handle == null) || (r.get_type_handle() == type_handle)) &&
      r.match_scope(scope))
        q.push_back(r);
  end
  return q;
endfunction
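On the calling side, SystemVerilog’s standard `__FILE__ and `__LINE__ compiler directives supply the new arguments. A sketch, assuming the extended get() signature above is in place:

// Every get() now stamps itself with its caller and source location
int td;
if (!uvm_config_db#(int)::get(this, "", "traffic_density", td,
                              this, `__FILE__, `__LINE__))
  `uvm_warning("CFG", "traffic_density lookup missed")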

I hope you enjoyed this slog through debug. But luckily for all of us, UVM factory debug and UVM config debug are USUALLY something we do infrequently, usually at testbench bring-up time. When factory or config fail, usually something will be dreadfully wrong with our simulation – like running 10 times longer than expected, generating more or less traffic than expected, or failing to collect coverage. Usually, since the failure is so obvious, we know something is wrong and fix it, never needing to debug the factory or config again. (Until someone changes the factory or config.)

Meanwhile, if you have some time to kill, and you are interested in factory debug and assignment compatibility, you can play around with this code:

class M #(BITWIDTH = 16);  // note: BITWIDTH is deliberately left untyped
  function new();
    $display("%s::new()", $typename(this));
  endfunction
endclass

module top();
  parameter int           INT8 = 8;
  parameter int unsigned UINT8 = 8;

  // same value, different parameter types: are these the same specialization?
  M #(INT8)  msigned8;
  M #(UINT8) munsigned8;

  initial begin
    msigned8 = new();
    munsigned8 = new();
    // if your tool treats M#(INT8) and M#(UINT8) as the same type, this
    // assignment compiles; if not, you get the assignment-compatibility
    // error discussed above
    munsigned8 = msigned8;
    $display("%s", $typename(msigned8));
    $display("%s", $typename(munsigned8));
  end
endmodule

Time to think about lunch. With a good debugger and some knowledge of the UVM you can be successful getting your factory override debugged and your configuration database debugged. But you might just lose some hair.


22 September, 2016

Join us for the Verification Academy Live Seminar on Enterprise Debug & Analysis

Your designs are larger and more complex than ever, and your verification solutions are generating more information that needs to be managed and analyzed.  Your need to build and validate systems with pre-built design IP from multiple sources places time-to-market burdens on you that need to be addressed.  Debugging your system, from the RTL design running on simulation farms to emulators and FPGA prototypes, and eventually the post-silicon implementation, drives even more complexity.  And the adoption of newer methodologies like UVM, often embraced in unstructured ways, poses its own productivity burdens.

This pressure shows itself in our biennial industry survey results, which illustrate that there are now more verification engineers than design engineers on a team (a recent phenomenon) and that the time spent on debug now approaches 40% of an engineer’s total project time budget.

Clearly, improving debug productivity for an enterprise flow from block to system pre-silicon verification, virtual prototyping, emulation, as well as post-silicon validation is critical to stay on schedule and at the same time meet your end product quality goals.

We invite you to join us for a comprehensive seminar to learn the very latest verification techniques to address these challenges.  Harry Foster, Mentor Graphics Chief Verification Scientist, will review the 2016 Wilson Research Group Functional Verification Study in his featured keynote to open the seminar.  The seminar will review enterprise-level requirements, solutions and offer additional end-user keynotes that will help address your key challenges.  Click here for more information about the seminar and how to register.  Event details are below:

Verification Academy Live Seminar

  • Location: Santa Clara, CA USA
  • Date: Thursday – October 6, 2016
  • Agenda:
    • 08:30 – 09:00 Check in and Registration
    • 09:00 – 09:50 Industry Trends in Today’s Functional Verification Landscape
    • 09:50 – 10:10 Enterprise Verification Required
    • 10:15 – 11:00 Enterprise Debug for Simulation & Formal
    • 11:00 – 11:15 Break
    • 11:15 – 12:00 Shortcut to Productive Enterprise Verification with VIP, a UVM framework and a configuration GUI
    • 12:00 – 12:40 Lunch
    • 12:40 – 13:10 User Keynote Session
    • 13:10 – 13:40 Enterprise System Level Analysis
    • 13:40 – 14:00 Break
    • 14:00 – 14:40 System-Level Debug with Emulation
    • 14:40 – 15:10 User Keynote Session
    • 15:10 – 15:50 FPGA Prototyping: Maximize your Enterprise Debug Productivity
    • 15:50 – 16:00 Closing Remarks and Prize Drawing


21 September, 2016

FPGA Language and Library Trends

This blog is a continuation of a series of blogs related to the 2016 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I focused on FPGA verification techniques and technologies adoption trends, as identified by the 2016 Wilson Research Group study. In this blog, I’ll present FPGA design and verification language trends.

You might note that the percentages for some of the languages that I present sum to more than one hundred percent. The reason for this is that many FPGA projects today use multiple languages.

FPGA RTL Design Language Adoption Trends

Let’s begin by examining the languages used for FPGA RTL design. Figure 1 shows the trends in terms of languages used for design, comparing the 2012, 2014, and 2016 Wilson Research Group studies, as well as the projected design language adoption trends within the next twelve months. Note that language adoption is declining for most of the languages used for FPGA design, with the exception of Verilog and SystemVerilog.

Also, it’s important to note that this study focused on languages used for RTL design. We have conducted a few informal studies related to languages used for architectural modeling—and it’s not too big of a surprise that we see increased adoption of C/C++ and SystemC in that space. However, since those studies have (thus far) been informal and not as rigorously executed as the Wilson Research Group study, I have decided to withhold that data until a more formal study can be executed related to architectural modeling and virtual prototyping.


Figure 1. Trends in languages used for FPGA design

It’s not too big of a surprise that VHDL is the predominant language used for FPGA RTL design, although it is slowly declining when viewed as a worldwide trend. An important note here is that if you were to filter the results down by a particular market segment or region of the world, you would find different results. For example, if you only look at Europe, you would find that VHDL adoption as an FPGA design language is about 79 percent, while the world average is 62 percent. However, I believe that it is important to examine worldwide trends to get a sense of where the industry is moving in the future.

FPGA Verification Language Adoption Trends

Next, let’s look at the languages used to verify FPGA designs (that is, languages used to create simulation testbenches). Figure 2 shows the trends in terms of languages used to create simulation testbenches, comparing the 2012, 2014, and 2016 Wilson Research Group studies, as well as the projected verification language adoption trends within the next twelve months.


Figure 2. Trends in languages used in verification to create FPGA simulation testbenches

What is interesting in 2016 is that SystemVerilog overtook VHDL as the language of choice for building FPGA testbenches. But please note that the same comment related to design language adoption applies to verification language adoption. That is, if you were to filter the results down by a particular market segment or region of the world, you would find different results. For example, if you only look at Europe, you would find that VHDL adoption as an FPGA verification language is about 66 percent (greater than the worldwide average), while SystemVerilog adoption is 41 percent (less than the worldwide average).

FPGA Testbench Methodology Class Library Adoption Trends

Now let’s look at testbench methodology and class library adoption for FPGA designs. Figure 3 shows the trends in terms of methodology and class library adoption, comparing the 2012, 2014, and 2016 Wilson Research Group studies, as well as the projected adoption trends within the next twelve months.


Figure 3. FPGA methodology and class library adoption trends

Today, we see basically a flat or downward trend in terms of adoption of all testbench methodologies and class libraries with the exception of UVM, which has been growing at a healthy 10.7 percent compounded annual growth rate. The study participants were also asked what they plan to use within the next 12 months, and based on the responses, UVM adoption is projected to increase an additional 12.5 percent.

By the way, to be fair, we did get a few write-in methodologies, such as OSVVM and UVVM, which are based on VHDL. I did not list them in the previous figure since it would be difficult to predict an accurate adoption percentage; they were not listed as selection options on the original question, which resulted in only a few write-in answers. Nonetheless, the data suggest that the industry momentum and focus have moved to SystemVerilog and UVM.

FPGA Assertion Language and Library Adoption Trends

Finally, let’s examine assertion language and library adoption for FPGA designs. The 2016 Wilson Research Group study found that 47 percent of all the FPGA projects have adopted assertion-based verification (ABV) as part of their verification strategy. The data presented in this section shows the assertion language and library adoption trends related to those participants who have adopted ABV.

Figure 4 shows the trends in terms of assertion language and library adoption, comparing the 2012, 2014, and 2016 Wilson Research Group studies, and the projected adoption trends within the next 12 months. The adoption of SVA continues to increase, while other assertion languages and libraries are not showing significant changes.


Figure 4. Trends in assertion language and library adoption for FPGA designs

In my next blog (click here), I will shift the focus of this series of blogs and start to present the ASIC/IC findings from the 2016 Wilson Research Group Functional Verification Study.

Quick links to the 2016 Wilson Research Group Study results


18 August, 2016

A great technical program awaits you for DVCon India 2016!  The DVCon India Steering Committee and Technical Program Committee have put together another outstanding program.  The two-day event splits itself into two main technical tracks: one for the Design Verification professional [DV Track] and the other for the Electronic System Design professional [ESL Track].  The conference will be held on Thursday & Friday, 15-16 September 2016 at the Leela Palace in Bangalore.  The conference opens with industry keynotes and a round of technical tutorials on the first day.  Wally Rhines, Mentor Graphics CEO, will deliver the first keynote of the morning on “Design Verification – Challenging Yesterday, Today and Tomorrow.”

Mentor Graphics at DVCon India

In addition to Wally’s keynote, Mentor Graphics has sponsored several tutorials which, when combined with the other conference tutorials, share information, techniques and tips and tricks that can be applied to your current design and verification challenges.

The conference’s other technical elements (Posters, Panels & Papers) will likewise feature Mentor Graphics participants.  You should visit the DVCon India website for the full details on the comprehensive and deep program that has been put together.  The breadth of topics makes it an outstanding program.

Accellera Portable Stimulus Standard (PSS)

The hit of the first DVCon India was the early discussion about the emerging standardization activity in Accellera on “Portable Stimulus.”  In fact, at the second DVCon India a follow-up presentation on PSS standardization was requested and given as well (Leveraging Portable Stimulus Across Domains and Disciplines).  This year will be no exception to cover the PSS topic.

The Accellera Tutorial for DVCon India 2016 is on the emerging Portable Stimulus Standard.  The last thing any design and verification team wants to do is rewrite a test as a design progresses along the path from concept to silicon.  The Accellera PSS tutorial will share with you the concepts being ratified in the standard to bring you the next generation of verification productivity and efficiency and help you avoid this.  Don’t be surprised if the PSS tutorial is standing room only.  I suggest that if you want a seat, you come early to the room.

Register

To attend DVCon India, you must register.  Discounted registration rates are available through 30 August 2016.  Click here to register!  I look forward to seeing you at DVCon India 2016!  If you can’t join us in person, track the Mentor team on social media or on Twitter with hashtag #DVCon.


8 August, 2016

This is the first in a series of blogs that presents the findings from our new 2016 Wilson Research Group Functional Verification Study. Similar to my previous 2014 Wilson Research Group functional verification study blogs, I plan to begin this set of blogs with an exclusive focus on FPGA trends. Why? For the following reasons:

  1. Some of the more interesting trends in our 2016 study are related to FPGA designs. The 2016 ASIC/IC functional verification trends are overall fairly flat, which is another indication of a mature market.
  2. Unlike the traditional ASIC/IC market, there have historically been very few studies published on FPGA functional verification trends. We started studying the FPGA market segment back in the 2010 study, and we now have collected sufficient data to confidently present industry trends related to this market segment.
  3. Today’s FPGA designs have grown in complexity—and many now resemble complete systems. The task of verifying SoC-class designs is daunting, which has forced many FPGA projects to mature their verification process due to rising complexity. The FPGA-focused data I present in this set of blogs will support this claim.

My plan is to release the ASIC/IC functional verification trends through a set of blogs after I finish presenting the FPGA trends.

Introduction

In 2002 and 2004, Collett International Research, Inc. conducted its well-known ASIC/IC functional verification studies, which provided invaluable insight into the state of the electronic industry and its trends in design and verification at that point in time. However, after the 2004 study, no additional Collett studies were conducted, which left a void in identifying industry trends. To address this dearth of knowledge, five functional verification focused studies were commissioned by Mentor Graphics in 2007, 2010, 2012, 2014, and 2016. These were worldwide, double-blind, functional verification studies, covering all electronic industry market segments. To our knowledge, the 2014 and 2016 studies are two of the largest functional verification studies ever conducted. This set of blogs presents the findings from our 2016 study and provides invaluable insight into the state of the electronic industry today in terms of both design and verification trends.

Study Background

Our study was modeled after the original 2002 and 2004 Collett International Research, Inc. studies. In other words, we endeavored to preserve the original wording of the Collett questions whenever possible to facilitate trend analysis. To ensure anonymity, we commissioned Wilson Research Group to execute our study. The purpose of preserving anonymity was to prevent biasing the participants’ responses. Furthermore, to ensure that our study would be executed as a double-blind study, the compilation and analysis of the results did not take into account the identity of the participants.

For the purpose of our study we used a multiple sampling frame approach that was constructed from eight independent lists that we acquired. This enabled us to cover all regions of the world—as well as cover all relevant electronic industry market segments. It is important to note that we decided not to include our own account team’s customer list in the sampling frame. This was done in a deliberate attempt to prevent biasing the final results. My next blog in this series will discuss other potential bias concerns when conducting a large industry study and describe what we did to address these concerns.

After cleaning the data to remove inconsistent or random responses (e.g., someone who only answered “a” on all questions), the final sample size consisted of 1703 eligible participants (i.e., n=1703). This was approximately 90% the size of our 2014 study (i.e., 2014 n=1886). To put this figure in perspective, the famous 2004 Ron Collett International study sample size consisted of 201 eligible participants.

Unlike the 2002 and 2004 Collett IC/ASIC functional verification studies, which focused only on the ASIC/IC market segment, our studies were expanded in 2010 to include the FPGA market segment. We have partitioned the analysis of these two different market segments separately, to provide a clear focus on each. One other difference between our studies and the Collett studies is that our study covered all regions of the world, while the original Collett studies were conducted only in North America (US and Canada). We have the ability to compile the results both globally and regionally, but for the purpose of this set of blogs I am presenting only the globally compiled results.

Confidence Interval

All surveys are subject to sampling errors. To quantify this error in probabilistic terms, we calculate a confidence interval. For example, we determined the “overall” margin of error for our study to be ±2.36% at a 95% confidence interval. In other words, this confidence interval tells us that if we were to take repeated samples of size n=1703 from a population, 95% of the samples would fall inside our margin of error ±2.36%, and only 5% of the samples would fall outside. Obviously, response rate per individual question will impact the margin of error. However, all data presented in this blog has a margin of error of less than ±5%, unless otherwise noted.
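For the curious, the quoted figure follows from the standard worst-case margin-of-error formula, assuming maximum variability (p = 0.5) and z = 1.96 for a 95% confidence level:

\text{MoE} = z\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.25}{1703}} \approx 0.0237

which matches the quoted ±2.36% to within rounding of the z value used.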

Study Participants

This section provides background on the makeup of the study.

Figure 1 shows the percentage of overall study FPGA and ASIC/IC participants by market segment. It is important to note that this figure does not represent silicon volume by market segment.


Figure 1: FPGA and ASIC/IC study participants by market segment

Figure 2 shows the percentage of overall study eligible FPGA and ASIC/IC participants by their job description. An example of an eligible participant would be a self-identified design or verification engineer, or engineering manager, who is actively working within the electronics industry. Overall, design and verification engineers accounted for 60 percent of the study participants.


Figure 2: FPGA and ASIC/IC study participants by job title

Before I start presenting the findings from our 2016 functional verification study, I plan to discuss in my next blog (click here) general bias concerns associated with all survey-based studies—and what we did to minimize these concerns.

Quick links to the 2016 Wilson Research Group Study results

