Posts Tagged ‘Verification Methodology’

12 August, 2013

Language and Library Trends (Continued)

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 8 click here), I focused on design and verification language trends, as identified by the Wilson Research Group study. This blog presents additional trends related to verification language and library adoption trends.

You might note that for some of the language and library data I present, the percentage sums to more than 100 percent. The reason for this is that some participants’ projects use multiple languages or multiple testbench methodologies.

Testbench Methodology and Class Library Adoption

Now let’s look at testbench methodology and class library adoption for IC/ASIC designs. Figure 1 shows the trends in terms of methodology and class library adoption by comparing the 2010 Wilson Research Group study (in blue) with the 2012 study (in green). Today, we see a downward trend in terms of adoption of all testbench methodologies and class libraries with the exception of UVM, which has increased by 486 percent since the fall of 2010. The study participants were also asked what they plan to use within the next 12 months, and based on the responses, UVM is projected to increase an additional 46 percent.

Figure 1. Methodology and class library trends

Figure 2 shows the adoption of testbench methodologies and class libraries for FPGA designs (in red). We do not have sufficient data to show prior adoption trends in the FPGA space, but we anticipate that our future studies will enable us to do this. However, we did ask the FPGA study participants which testbench methodologies and class libraries they were planning to adopt within the next 12 months. Based on these responses, we anticipate that UVM adoption will increase by 40 percent, and OVM adoption by 24 percent, in the FPGA space.

Figure 2. Methodology and class library adoption for FPGA designs

Assertion Languages and Libraries

Finally, let’s examine assertion language and library adoption for IC/ASIC designs. The Wilson Research Group study found that 63 percent of all the IC/ASIC participants have adopted assertion-based verification (ABV) as part of their verification strategy. The data presented in this section shows the assertion language and library adoption trend related to those participants who have adopted ABV.

Figure 3 shows the trends in terms of assertion language and library adoption by comparing the 2010 Wilson Research Group study (in blue), the 2012 Wilson Research Group study (in green), and the projected adoption trends within the next 12 months (in purple). The adoption of SVA continues to increase, while other assertion languages and libraries either remain flat or decline.

Figure 3. Assertion language and library adoption for Non-FPGA designs

Figure 4 shows the adoption of assertion language trends for FPGA designs (in red). Again, we do not have sufficient data to show prior adoption trends in the FPGA space, but we anticipate that our future studies will enable us to do this. We did ask the FPGA study participants which assertion languages and libraries they planned to adopt within the next 12 months. Based on these responses, we anticipate an increase in adoption for OVL, SVA, and PSL in the FPGA space within the next 12 months.

Figure 4. Trends in assertion language and library adoption for FPGA designs

In my next blog (click here), I plan to focus on the adoption of various verification technologies and techniques used in the industry, as identified by the 2012 Wilson Research Group study.


26 July, 2013

You don’t need a graphic like the one below to know that multi-core SoC designs are here to stay. This one happens to be based on ARM’s AMBA 4 ACE architecture, which is particularly effective for mobile applications, offering an optimized mix of high-performance processing and low power consumption. But with software’s increasing role in overall design functionality, verification engineers are now tasked with verifying not just proper HW functionality, but proper HW functionality under control of application SW. So how do you verify HW/SW interactions during system-level verification?

For most verification teams, the current alternatives are like choosing between a walk through the desert and drinking from a fire hose. In the desert, you can manually write test programs in C, compile them and load them into system memory, and then initialize the embedded processors and execute the programs. That seems straightforward, but now try it for multiple embedded cores, and make sure you confirm your power-up sequence and optimal low-power management (remember, we’re testing a mobile-market design), correct memory mapping, peripheral connectivity, mode selection, and basically anything your design is intended to do before its battery runs out. You can get lost pretty quickly. Eventually you remember that you weren’t hired to write multi-threaded software programs, but that there’s an entire staff of software developers down the hall who were. So you boot your design’s operating system, load the SW drivers, run the design’s target application programs, and fully verify that all’s well between the HW and the SW at the system level.

But here comes the fire hose.  By this time, you’ve moved from your RTL simulator to an emulator, because just simulating Linux booting up takes weeks to months.  But what happens when your emulator runs into a system level failure after billions of clock cycles and several days of emulation?  There’s no way to avoid full HW/SW verification at the system level, but wouldn’t it be nice to find most of the HW/SW interaction bugs earlier in the process, when they’re easier to debug?

There’s an easier way to bridge the gap between the desert and the fire hose. It’s called “intelligent Software Driven Verification” (iSDV). iSDV automates the generation of embedded C test programs for multi-core processor execution. These tests generate thousands of high-value processor instructions that exercise HW/SW interactions. Bugs discovered this way take much less time to debug, and the embedded C test programs run in both simulation and emulation environments, easing the transition from one to the other. Check out the on-line web seminar at the link below to learn about using intelligent Software Driven Verification to uncover the majority of your system-level design bugs after RTL simulation, but before full system-level emulation.

http://www.mentor.com/products/fv/multimedia/automating-software-driven-hardware-verification-with-questa-infact


22 July, 2013

Effort Spent On Verification (Continued)

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (click here), I focused on the controversial topic of effort spent in verification. This blog continues that discussion.

I stated in my previous blog that I don’t believe there is a simple answer to the question, “how much effort was spent on verification in your last project?” I believe that it is necessary to look at multiple data points to truly get a sense of the real effort involved in verification today. So, let’s look at a few additional findings from the study.

Time designers spend in verification

It’s important to note that verification engineers are not the only project members involved in functional verification. Design engineers spend a significant amount of their time in verification too, as shown in Figure 1.

Figure 1. Average (mean) time design engineers spend in design vs. verification

In fact, you might note that design engineers now actually spend more time doing verification than design. This time expenditure has shifted over the last five years: the amount of time design engineers spend doing verification has increased by 15 percent since 2007, while the amount of time they spend doing design has decreased by about 13 percent.

The designer’s involvement in verification includes:

  • Small sandbox testing to explore various aspects of the implementation
  • Full functional testing of IP blocks and SoC integration
  • Debugging verification problems identified by a separate verification team

Percentage of time verification engineers spend on various tasks

Next, let’s look at the mean time verification engineers spend performing various tasks related to their specific project. You might note that verification engineers spend most of their time in debugging, which is what you would expect if all the other tasks were well optimized. Unfortunately, the time spent in debugging can vary significantly from project to project, which presents scheduling challenges for managers during a project’s verification planning process.

Figure 2. Average (mean) time verification engineers spend on various tasks

Number of formal analysis, FPGA prototyping, and emulation engineers

Functional verification is not limited to simulation-based techniques. Hence, it’s important to gather data related to other functional verification techniques, such as the number of verification engineers involved in formal analysis, FPGA prototyping, and emulation.

Figure 3 presents the trends in terms of the number of verification engineers focused on formal analysis on a project. In 2007, the mean number of verification engineers focused on formal analysis on a project was 1.68, while in 2010 the mean increased to 1.84. For some reason, we did see a slight decrease in the mean number of verification engineers who focus on formal in 2012. Regardless, the curve has been remarkably consistent for the past five years.

Figure 3. Mean number of verification engineers focused on formal analysis

Although FPGA prototyping is a common technique used to create platforms for software development, it is also sometimes used by projects for SoC integration verification and system validation. Figure 4 presents the trends in terms of the number of verification engineers focused on FPGA prototyping. In 2007, the mean number of verification engineers focused on FPGA prototyping on a project was 1.42, while in 2010 the mean number was 1.86. In 2012 we saw a slight decline in mean number of verification engineers focused on FPGA prototyping. However, the curve has been remarkably similar for the past five years.

Figure 4. Number of verification engineers focused on FPGA prototyping

Figure 5 presents the trends in terms of the number of verification engineers focused on hardware-assisted acceleration and emulation. In 2007, the mean number of verification engineers focused on hardware-assisted acceleration and emulation on a project was 1.31, while in 2010 the mean number was 1.86. In 2012, we see a slight decrease in the mean number of verification engineers who focus on hardware-assisted acceleration and emulation.

Figure 5. Number of verification engineers focused on emulation

Again, notice how the curve has been consistent over the past five years. In other words, we are not seeing any big trends in terms of increased numbers of verification engineers focused predominantly on formal, FPGA prototyping, or hardware-assisted acceleration and emulation. This was certainly not true for general verification engineers who focus on simulation-based techniques: as I presented in my previous blog, we saw a 75 percent increase in the peak number of verification engineers involved on a project within the past five years.

A few more thoughts on verification effort

So, can I conclusively state that 70 percent of a project’s effort is spent in verification today as some people have claimed? No. In fact, even after reviewing the data on different aspects of today’s verification process, I would still find it difficult to state quantitatively what the effort is. Yet, the data that I’ve presented so far seems to indicate that the effort (whatever it is) is increasing. And there is still additional data relevant to the verification effort discussion that I plan to present in upcoming blogs. However, in my next blog (click here), I shift the discussion from verification effort, and focus on some of the 2012 Wilson Research Group findings related to testbench characteristics and simulation strategies.


15 July, 2013


Effort Spent in Verification

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I focused on design and verification reuse trends. In this blog, I focus on the controversial topic of the amount of effort spent in verification.

Directly asking study participants how much effort they spend in verification will not work. The reason is that it’s hard to find a paper or article on verification that doesn’t start with the phrase: “Seventy percent of a project’s effort is spent in verification…” In other words, the industry is already biased to respond with this effort value. Yet there are really no credible references to quantify it.

I don’t believe that there is a simple answer to the question, “How much effort was spent on verification in your last project?” In fact, I believe that it is necessary to look at multiple data points derived from multiple questions to truly get a sense of effort spent in verification. And that’s what we did in our functional verification study.

Total Project Time Spent in Verification

To try to assess the effort spent in verification, let’s begin by looking at one data point, which is the total project time spent in verification. Figure 1 shows the trends in total percentage of project time spent in verification for non-FPGA designs by comparing the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). 

Figure 1. Percentage of total project time spent in verification for Non-FPGA designs

The graph clearly shows that there are some projects that spend a significant percentage of project time in verification (>80%), while other projects spend significantly less time. Notice that in 2007, the average (mean) project time spent in verification was 49 percent, while the average increased to 56 percent in 2010 and remained the same in 2012.

Figure 2 shows the trends in total percentage of project time spent in verification for FPGA designs by comparing the 2010 Wilson Research Group study (in pink) and the 2012 Wilson Research Group study (in red).


Figure 2. Percentage of total project time spent in verification for FPGA designs

You might note that many FPGA projects tend to spend less time in verification than non-FPGA projects. Traditionally, the strategy for FPGA designs has been to get to the lab as soon as possible and debug issues in the lab. In a future blog I’ll show data that indicates this strategy does not necessarily yield good results in terms of meeting project schedule or quality objectives.

Peak Number of Design and Verification Engineers

Next, let’s look at another data point, the average (mean) peak number of engineers involved on a project. Figure 3 compares the growth in recent years for the average peak number of design engineers (in light green) and verification engineers (in dark green) working on a typical non-FPGA project.


Figure 3. Peak number of design vs. verification engineer trends for non-FPGA projects

Note that there has not been a significant increase in design engineers in the past five years, although design sizes have continued to increase at a Moore’s Law rate. This is partially due to increased adoption of internal and external IP (as I discussed in my previous blog) as well as continued productivity improvements due to automation.

However, the mean peak number of verification engineers working on non-FPGA projects has increased by 75% within the last five years. In fact, today we see (on average) a one-to-one ratio for a project’s peak number of design and verification engineers.

Figure 4 provides a different analysis of the data by partitioning the projects by design size and then calculating the mean peak number of verification engineers for each partition. The design size partitions are: less than 5M gates, 5M to 20M gates, and greater than 20M gates.


Figure 4. Mean peak number of verification engineer trends by design size for non-FPGA projects

Figure 5 shows the average (mean) peak number of design engineers (in red) and verification engineers (in pink) working on a typical FPGA project.


Figure 5. Peak number of design vs. verification engineer trends for FPGA projects

Also, note that the ratio of design engineers versus verification engineers hasn’t changed within the last two years for FPGA projects. Typically, design engineers on FPGA projects are responsible for verification too, and you will find many projects that do not have verification engineers. This trend, however, will likely change as FPGA designs become more complex. We are already seeing this on some very complex FPGA projects today.

In my next blog (click here), I’ll continue the discussion on effort spent in verification as revealed by the 2012 Wilson Research Group Functional Verification Study.


8 July, 2013

Reuse Trends

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I focused on clocking and power management.  In this blog, I focus on design and verification reuse trends. As I mentioned in my prologue blog to this series (click here), one interesting trend that emerged from the study is that reuse adoption is increasing.

Design Composition Trends

Figure 1 shows the mean design composition trends graph, which compares the 2007 Far West Research study (in blue) with the 2012 Wilson Research Group study (in green).

New logic development has decreased by 34 percent in the last five years, while external IP adoption has increased by 69 percent. This increase in adoption has been driven by the demand for IP required for SoC development, such as embedded processor cores and standard interface cores.

Figure 1. Mean design composition trends

Figure 2 compares today’s design composition between FPGA designs (in red) and Non-FPGA designs (in green). Currently, a larger share of the design (i.e., new RTL) is newly created for FPGA designs than for Non-FPGA designs. However, as FPGAs grow in transistor count, reuse will become even more important to close the productivity gap between the number of transistors that can be manufactured on an FPGA and the number a project team has time to design.

Figure 2. Mean composition comparison between FPGA and Non-FPGA designs.


Verification Testbench Composition Trends

Figure 3 shows the mean testbench composition trends graph, which compares the 2007 Far West Research study (in blue) with the 2012 Wilson Research Group study (in green).

Notice that new verification code development has decreased by 24 percent in the last five years, while external verification IP adoption has increased by 138 percent. This increase has been driven by the emergence of standard on-chip and off-chip bus architectures.

Figure 3. Mean testbench composition trends

Figure 4 compares today’s testbench composition between FPGA (in red) and Non-FPGA (in green) designs. Again, we see that more new code is written today for FPGA than Non-FPGA testbenches, and I expect this will change over time to be more in line with Non-FPGA designs. 

Figure 4. Mean testbench composition comparison between FPGA and Non-FPGA designs

In my next blog (click here), I’ll shift my focus from design trends to project resource trends. I’ll also present our findings on the project effort spent in verification.


28 June, 2013

Clocking and Power Trends

In Part 2 of this series of blogs, I continued the discussion focused on design trends (click here) as identified by the 2012 Wilson Research Group Functional Verification Study (click here). In this blog, I continue presenting the study findings related to design trends, with a focus on clocking and power trends.

Independent Asynchronous Clock Domains

Figure 1 shows the percentage of designs developed today by the number of independent asynchronous clock domains. The asynchronous clock domain data for FPGA designs is shown in red, while the data for the non-FPGA designs is shown in green.


Figure 1. Number of independent asynchronous clock domains

Figure 2 shows the trends in number of independent asynchronous clock domains for non-FPGA designs. The comparison includes the 2002 Collett study (in dark green), the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green).

Figure 2. Trends: Number of independent asynchronous clock domains

It’s interesting to note that, although the number of clock domains is increasing over time, the sweet spot in terms of number of independent asynchronous clock domains seems to remain between 2 and 20, and it hasn’t changed significantly in the past ten years.

Figure 3 provides a different analysis of the data by partitioning the projects by design size and then calculating the mean number of independent asynchronous clock domains for each partition. The design size partitions are: less than 5M gates, 5M to 20M gates, and greater than 20M gates.

Figure 3. Mean number of independent clock domains by design size

Power Management

Today, we see that about 67 percent of design projects actively manage power with a wide variety of techniques, ranging from simple clock gating to complex hypervisor/OS-controlled power management schemes. For the 2012 Wilson Research Group study, we decided to take a closer look at power management as it relates to functional verification, so I can share some interesting results with you here. However, since this aspect of functional verification has never been studied in previous surveys, I am not able to show trends. Our goal is to carry these same questions forward in our future studies so that we can identify trends.
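For readers less familiar with the simplest of these techniques, here is a minimal latch-based clock-gating sketch in SystemVerilog. This is my own illustration, not something from the study, and production designs typically instantiate the standard-cell library’s integrated clock-gating cell rather than coding the gate by hand:

// Minimal latch-based clock gate (illustrative only).
module clock_gate (
  input  logic clk,
  input  logic en,     // functional enable, e.g., from a power controller
  output logic gclk    // gated clock driving the downstream logic
);
  logic en_latched;

  // The level-sensitive latch holds 'en' stable while clk is high,
  // preventing glitches on the gated clock.
  always_latch
    if (!clk) en_latched <= en;

  assign gclk = clk & en_latched;
endmodule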

For the 67 percent of design projects that actively manage power, Figure 4 shows the various aspects of the power-managed design that they verify.

Figure 4. Aspects of power-managed design that are verified

In our study, we asked what percentage of simulation was power-aware (that is, verifying some functional aspect of the power-management scheme), and the results are shown in Figure 5. We were surprised to learn that about 10 percent of all designs that actively manage power perform no power-aware simulation to verify the power management scheme.

Figure 5. Percentage of simulation that verified some aspect of power management

In addition, we asked what percent of verification resources were focused on power management verification, and the results are shown in Figure 6. You will note that the curve is very similar to the percentage of total simulations that were power-aware, which you would expect. Again, we see that about 10 percent of the projects that actively manage power provide no verification resources to verify the power-management scheme.


Figure 6. Percentage of verification resources focused on power management

Figure 7 shows the different types of simulation-based functional testing approaches that are currently applied to verifying power management. It’s not a surprise that most power-aware simulation is based on directed-testing approaches since often (but not always) power-aware simulations are performed at the SoC integration level where directed testing is common.


Figure 7. Simulation-based functional testing approaches applied to power management verification

Since power intent cannot be directly described in an RTL model, alternative supporting notations have recently emerged to capture it. In the 2012 study, we wanted to get a sense of where the industry stands in adopting these notations. For projects that actively manage power, Figure 8 shows the various notations that have been adopted to describe power intent. Some projects are actively using multiple standards (such as different versions of UPF or a combination of CPF and UPF), which is why the adoption results do not sum to 100 percent.


Figure 8. Notation used to describe power intent

In my next blog (click here), I’ll present data on design and verification reuse trends.


26 June, 2013

Design Trends (Continued)

In Part 1 of this series of blogs, I focused on design trends (click here) as identified by the 2012 Wilson Research Group Functional Verification Study (click here). In this blog, I continue presenting the study findings related to design trends, with a focus on embedded processor, DSP, and on-chip bussing trends.

Embedded Processors

In Figure 1, we see the percentage of today’s designs by the number of embedded processor cores. It’s interesting to note that 79 percent of all non-FPGA designs (in green) contain one or more embedded processors and could be classified as SoCs, which are inherently more difficult to verify than designs without embedded processors. Also note that 55 percent of all FPGA designs (in red) contain one or more embedded processors.

Figure 1. Number of embedded processor cores

Figure 2 shows the trends in terms of the number of embedded processor cores for non-FPGA designs. The comparison includes the 2004 Collett study (in dark green), the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green).

Figure 2. Trends: Number of embedded processor cores

For reference, between the 2010 and 2012 Wilson Research Group study, we did not see a significant change in the number of embedded processors for FPGA designs. The results look essentially the same as the red curve in Figure 1.

Another way to look at the data is to calculate the mean number of embedded processors that are being designed in by SoC projects around the world. In Figure 3, you can see the continual rise in the mean number of embedded processor cores, where the mean was about 1.06 in 2004 (in dark green). This mean increased in 2007 (in gray) to 1.46. Then, it increased again in 2010 (in blue) to 2.14. Today (in green) the mean number of embedded processors is 2.25. Of course, this calculation represents the industry average: some projects are creating designs with many embedded processors, while other projects are creating designs with few or none.

It’s also important to note here that the analysis is per project, and it does not represent the number of embedded processors in terms of silicon volume (i.e., production). Some projects might be creating designs that result in high volume, while other projects are creating designs with low volume. 

Figure 3. Trends: Mean number of embedded processor cores

Another interesting way to look at the data is to partition it into design sizes (for example, less than 5M gates, 5M to 20M gates, greater than 20M gates), and then calculate the mean number of embedded processors by design size. The results are shown in Figure 4, and as you would expect, the larger the design, the more embedded processor cores.

Figure 4. Non-FPGA mean embedded processor cores by design size

Platform-based SoC design approaches (i.e., designs containing multiple embedded processor cores with lots of third-party and internally developed IP) have driven the demand for common bus architectures. In Figure 5 we see the percentage of today’s designs by the type of on-chip bus architecture for both FPGA (in red) and non-FPGA (in green) designs.

Figure 5. On-chip bus architecture adoption

Figure 6 shows the trends in terms of on-chip bus architecture adoption for Non-FPGA designs. The comparison includes the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). Note that there was a reported increase of about 250 percent in Non-FPGA design projects using the ARM AMBA bus architecture between 2007 and 2012.

Figure 6. Trends: Non-FPGA on-chip bus architecture adoption  

Figure 7 shows the trends in terms of on-chip bus architecture adoption for FPGA designs. The comparison includes the 2010 Wilson Research Group study (in pink), and the 2012 Wilson Research Group study (in red). Note that there was about a 163 percent increase in FPGA design projects using the ARM AMBA bus architecture between the years 2010 and 2012. 

Figure 7. FPGA on-chip bus architecture adoption trends

In Figure 8 we see the percentage of today’s designs by the number of embedded DSP cores for both FPGA designs (in red) and non-FPGA designs (in green).

Figure 8. Number of embedded DSP cores

Figure 9 shows the trends in terms of the number of embedded DSP cores for non-FPGA designs. The comparison includes the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green).


Figure 9. Trends: Number of embedded DSP cores

In my next blog (click here), I’ll present clocking and power trends.


23 April, 2013

This is the first in a series of blogs that presents the results from the 2012 Wilson Research Group Functional Verification Study.

Study Overview

In 2002 and 2004, Ron Collett International, Inc. conducted its well-known ASIC/IC functional verification studies, which provided invaluable insight into the state of the electronics industry and its trends in design and verification. However, after the 2004 study, no other industry studies were conducted, which left a void in identifying industry trends.

To address this void, Mentor Graphics commissioned Far West Research to conduct an industry study on functional verification in the fall of 2007. Then in the fall of 2010, Mentor commissioned Wilson Research Group to conduct another functional verification study. Both of these studies were conducted as blind studies to avoid influencing the results. This means that the survey participants did not know that the study was commissioned by Mentor Graphics. In addition, to support trend analysis on the data, both studies followed the same format and questions (when possible) as the original 2002 and 2004 Collett studies.

In the fall of 2012, Mentor Graphics commissioned Wilson Research Group again to conduct a new functional verification study. This study was also a blind study and follows the same format as the Collett, Far West Research, and previous Wilson Research Group studies. The 2012 Wilson Research Group study is one of the largest functional verification studies ever conducted. The overall confidence level of the study was calculated to be 95% with a margin of error of 4.05%.
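As a rough sanity check on those numbers (my own arithmetic, not a figure from the study), the margin of error for a simple random sample at the worst-case proportion p = 0.5 is MOE = z × sqrt(p(1 − p)/n). Solving for n with z = 1.96 (95% confidence) and MOE = 0.0405 gives n ≈ (1.96² × 0.25) / 0.0405² ≈ 585, which suggests a study pool on the order of 600 participants.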

Unlike the previous Collett and Far West Research studies that were conducted only in North America, both the 2010 and 2012 Wilson Research Group studies were worldwide studies. The regions targeted were:

  • North America: Canada, United States
  • Europe/Israel: Finland, France, Germany, Israel, Italy, Sweden, UK
  • Asia (minus India): China, Korea, Japan, Taiwan
  • India

The survey results are compiled both globally and regionally for analysis.

Another difference between the Wilson Research Group and previous industry studies is that both of the Wilson Research Group studies also included FPGA projects. Hence for the first time, we are able to present some emerging trends in the FPGA functional verification space.

Figure 1 shows the percentage makeup of survey participants by their job description. The red bars represent the FPGA participants, while the green bars represent the non-FPGA (i.e., IC/ASIC) participants.


Figure 1: Survey participants’ job title description

Figure 2 shows the percentage makeup of survey participants by company type. Again, the red bars represent the FPGA participants, while the green bars represent the non-FPGA (i.e., IC/ASIC) participants.

Figure 2: Survey participants’ company description

In a future set of blogs, over the course of the next few months, I plan to present the highlights from the 2012 Wilson Research Group study along with my analysis, comments, and obviously, opinions. A few interesting observations emerged from the study, which include:

  1. FPGA projects are beginning to adopt advanced verification techniques due to increased design complexity.
  2. The effort spent on verification is increasing.
  3. The industry is converging on common processes driven by maturing industry standards.

A few final comments concerning the 2012 Wilson Research Group study. As I mentioned, the study was based on the original 2002 and 2004 Collett studies. To ensure consistency in terms of proper interpretation (and to avoid potential error related to misinterpretation of the questions), we have avoided changing or modifying the questions over the years, with the exception of questions that relate to shrinking geometry sizes and growing gate counts. The other exception is the introduction of a few new questions related to verification techniques that were not a major concern ten years ago (such as low-power functional verification). Ensuring consistency in the line of questioning enables us to have high confidence in the trends that emerge over the years.

Also, the method by which the study pool was created follows the same process as the original Collett studies. It is important to note that the data presented in this series of blogs does not represent trends related to silicon volume (that is, a few projects could dominate in terms of the volume of manufactured silicon and not represent the broader industry). The data in this series of blogs represents trends related to the study pool, which is a fair proxy for active design projects.

My next blog presents current design trends that were identified by the survey. This will be followed by a set of blogs focused on the functional verification results.

Also, to learn more about the 2012 Wilson Research Group study, view my pre-recorded Functional Verification Study web-seminar, which is available on the Verification Academy website.

Quick links to the 2012 Wilson Research Group Study results (so far…)


22 February, 2012

In his recent post on UVM: Some Thoughts Before DVCon, Dennis outlined some great ideas about what we think should happen next for UVM. His 3rd point, “UVM needs to bridge the system domain,” is particularly relevant given the newly-formed Accellera Systems Initiative. This is actually an area we’ve been contemplating for a while here at Mentor, and as Dennis indicated, we shared our thoughts on this topic at our last face-to-face with the VIP-TSC.  With demand coming from our users, and some positive feedback on our proposal, we have just released UVM Connect, an open-source library that provides TLM1 and TLM2 connectivity and object passing between SystemC and SystemVerilog models and components, as well as a UVM Command API for accessing and controlling UVM simulation from SystemC (or C or C++).


You can find much more information on the UVM Connect page of Verification Academy.

Mentor has always believed that SystemVerilog and SystemC each have their own strengths and that the most productive way to combine them in a system-level environment is to preserve the strengths of each while allowing the free exchange of data between them. Instead of trying to re-implement UVM in SystemC, or to extend SystemC to try and recreate SystemVerilog functional coverage or constrained-random stimulus, UVM Connect provides the framework needed to interoperate between languages. This lets you:

  • Reuse your SystemC architectural models as reference models in UVM verification
  • Reuse your stimulus generation agents in SystemVerilog to verify models in SystemC
  • Have access to a wider array of VIP since you are no longer confined to a single language
  • Utilize and interact with the UVM infrastructure from SystemC, including waiting for and controlling UVM phase transitions, setting and getting configuration, issuing UVM-style reports, setting factory type and instance overrides, and more

UVM Connect provides object-based data transfer across the language boundary via TLM1 and TLM2 interfaces, which are natively supported in both languages. It works out-of-the-box with UVM 1.1a and later and lets you use your existing TLM models, regardless of language, in a mixed-language context without modification. In a nutshell, UVM Connect fulfills the principles and purpose of the TLM interface standard, letting you design independent models that communicate without directly referring to each other. The models thus work equally well in both native and mixed-language environments. I encourage you to download the kit and give it a try. In the spirit of “co-op-etition” I also encourage our competitors to qualify the library on their simulators.
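To make the connect-by-lookup-string model concrete, here is a minimal sketch of the SystemVerilog side. Treat it as an approximation based on the UVM Connect documentation and examples: the packet class, the producer component, and the "stim" lookup string are all hypothetical, and you should confirm the exact uvmc_tlm signature against the kit’s User’s Guide.

// SystemVerilog side of a mixed-language TLM connection (sketch).
import uvm_pkg::*;
import uvmc_pkg::*;  // UVM Connect package

class env extends uvm_env;
  `uvm_component_utils(env)

  producer prod;  // hypothetical component with a TLM2 initiator socket 'out'

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    prod = producer::type_id::create("prod", this);
  endfunction

  function void connect_phase(uvm_phase phase);
    // Register prod.out under the lookup string "stim"; a SystemC
    // component registered with the same string is bound to it by
    // UVM Connect during elaboration.
    uvmc_tlm #(packet)::connect(prod.out, "stim");
  endfunction
endclass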

In addition to the great material in the UVM/OVM Online Methodology Cookbook on Verification Academy, the kit also includes an HTML User’s Guide, based on extensive, well-documented examples, that includes detailed information on all aspects of the API. Please make sure to stop by the Mentor booth at DVCon and let us know what you think.


11 November, 2011

Adopting SystemVerilog can be challenging to some, and learning the UVM at the same time might seem overwhelming. There is no getting over the fact that if you are going to develop any reasonably sized testbench in SystemVerilog, you need to learn how to declare and construct a class. You also need to learn a few Object-Oriented programming principles so you can extend a UVM class into something for your particular needs.
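To make that concrete, here is a minimal sketch of declaring a class, registering it with the factory, and constructing it the UVM way; the class name and fields are purely illustrative:

import uvm_pkg::*;
`include "uvm_macros.svh"

// Extend a UVM base class into something for your particular needs
class my_item extends uvm_sequence_item;
  rand bit [7:0] addr;  // illustrative transaction fields
  rand bit [7:0] data;

  `uvm_object_utils(my_item)  // register the class with the UVM factory

  function new(string name = "my_item");
    super.new(name);
  endfunction
endclass

// Construct through the factory rather than calling new() directly,
// so tests can substitute derived types without editing this code:
//   my_item item = my_item::type_id::create("item");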

Once you learn those principles, adopting the UVM can significantly reduce the amount of time it takes to build your testbench, because it provides the infrastructure to handle many of the common tasks used in functional verification today: the factory, component phasing, TLM-based communication, configuration, sequences, and reporting, to name a few.

By using a common, industry-standard verification methodology and set of practices, engineers gain the ability to build modular testbenches from reusable verification IP developed by project teams internal or external to their company. Another benefit of the UVM is that it is extensively documented, with a considerable amount of tutorial and example material readily available. Mentor Graphics provides the Verification Academy Cookbook and the Cookbook Recipe of the Month Seminar Series to get you started.

One of the significant features of the UVM that differentiates it from the OVM is its Register Layer (the gap was significant enough that Mentor back-ported the UVM Register Layer to the OVM for those users not yet able to migrate to the UVM). The compelling use model for the UVM Register Layer is that it abstracts away much of the UVM that one needs to learn as a test writer. You write much of your test as you would in software:


// Read the SPI control register through the register model; the register
// layer translates the call into transactions on the DUT's bus interface
spi_rm.ctrl.read(status, read_data, .parent(this));

// Write the SPI control register through the same abstract API
spi_rm.ctrl.write(status, write_data, .parent(this));

Here we are issuing read and write commands to the control register of an SPI register model. All of the underlying translations to a specific DUT interface with its specific protocol are handled by the Register Layer, with configuration information set up by the testbench architect.

Our October Recipe of the Month gives a brief introduction to the UVM Register Layer. Our November Recipe will provide more details on implementing them in your environment.

Dave Rich

