Posts Tagged ‘Verification Methodology’

3 June, 2015

FPGA Language and Library Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I focused on FPGA verification techniques and technologies adoption trends, as identified by the 2014 Wilson Research Group study. In this blog, I’ll present FPGA design and verification language trends, as identified by the Wilson Research Group study.

You might note that the percentage for some of the language and library data that I present sums to more than one hundred percent. The reason for this is that many FPGA projects today use multiple languages.

FPGA RTL Design Language Adoption Trends

Let’s begin by examining the languages used for FPGA RTL design. Figure 1 shows the trends in terms of languages used for design, by comparing the 2012 Wilson Research Group study (in dark blue), the 2014 Wilson Research Group study (in light blue), as well as the projected design language adoption trends within the next twelve months (in purple). Note that the language adoption is declining for most of the languages used for FPGA design with the exception of Verilog and SystemVerilog.

Also, it’s important to note that this study focused on languages used for RTL design. We have conducted a few informal studies related to languages used for architectural modeling—and it’s not too big of a surprise that we see increased adoption of C/C++ and SystemC in that space. However, since those studies have (thus far) been informal and not as rigorously executed as the Wilson Research Group study, I have decided to withhold that data until a more formal study can be executed related to architectural modeling and virtual prototyping.

Figure 1. Trends in languages used for FPGA design

It’s not too big of a surprise that VHDL is the predominant language used for FPGA RTL design, although the projected trend suggests that Verilog will likely overtake VHDL as the predominant FPGA design language in the near future.

FPGA Verification Language Adoption Trends

Next, let’s look at the languages used to verify FPGA designs (that is, languages used to create simulation testbenches). Figure 2 shows the trends in terms of languages used to create simulation testbenches by comparing the 2012 Wilson Research Group study (in dark blue), the 2014 Wilson Research Group study (in light blue), as well as the projected verification language adoption trends within the next twelve months (in purple).

Figure 2. Trends in languages used in verification to create FPGA simulation testbenches

FPGA Testbench Methodology Class Library Adoption Trends

Now let’s look at testbench methodology and class library adoption for FPGA designs. Figure 3 shows the trends in terms of methodology and class library adoption by comparing the 2012 Wilson Research Group study (in dark blue), the 2014 Wilson Research Group study (in light blue), as well as the projected methodology and class library adoption trends within the next twelve months (in purple).

Figure 3. FPGA methodology and class library adoption trends

Today, we see a downward trend in terms of adoption of all testbench methodologies and class libraries with the exception of UVM, which has increased by 28 percent since 2012. The study participants were also asked what they plan to use within the next 12 months, and based on the responses, UVM is projected to increase an additional 20 percent.
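For readers who have not yet worked with one of these class libraries, the sketch below shows roughly what a minimal UVM test looks like. The class, module, and message names are hypothetical, chosen only for illustration; a real environment would add agents, sequences, and a scoreboard on top of this skeleton.

```systemverilog
// Minimal UVM sketch (illustrative only; names are hypothetical).
import uvm_pkg::*;
`include "uvm_macros.svh"

class smoke_test extends uvm_test;
  `uvm_component_utils(smoke_test)

  function new(string name = "smoke_test", uvm_component parent = null);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    phase.raise_objection(this);                     // keep the test alive
    `uvm_info("SMOKE", "UVM testbench is alive", UVM_LOW)
    #100ns;                                          // stand-in for real stimulus
    phase.drop_objection(this);                      // allow the test to end
  endtask
endclass

module tb_top;
  initial run_test("smoke_test");                    // UVM selects the test by name
endmodule
```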

FPGA Assertion Language and Library Adoption Trends

Finally, let’s examine assertion language and library adoption for FPGA designs. The 2014 Wilson Research Group study found that 44 percent of all the FPGA projects have adopted assertion-based verification (ABV) as part of their verification strategy. The data presented in this section shows the assertion language and library adoption trends related to those participants who have adopted ABV.

Figure 4 shows the trends in terms of assertion language and library adoption by comparing the 2010 Wilson Research Group study (in dark blue), the 2012 Wilson Research Group study (in green), and the projected adoption trends within the next 12 months (in purple). The adoption of SVA continues to increase, while other assertion languages and libraries either remain flat or decline.

Figure 4. Trends in assertion language and library adoption for FPGA designs
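As a point of reference for what these adoption figures cover, the fragment below is a minimal, hypothetical SVA check of the kind projects typically start with; the signal names are invented purely for illustration.

```systemverilog
// Hypothetical SystemVerilog Assertion (SVA) example: once 'req' is
// asserted, 'gnt' must follow within 1 to 4 clock cycles.
module req_gnt_checker (input logic clk, rst_n, req, gnt);

  property p_req_gets_gnt;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] gnt;
  endproperty

  a_req_gets_gnt: assert property (p_req_gets_gnt)
    else $error("req was not granted within 4 cycles");

  // The same property can double as a functional coverage point.
  c_req_gets_gnt: cover property (p_req_gets_gnt);
endmodule
```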

In my next blog (click here), I will continue presenting findings from the 2014 Wilson Research Group Functional Verification Study.

Quick links to the 2014 Wilson Research Group Study results


26 February, 2015

A colleague recently asked me: Has anything changed? Do design teams tape-out nowadays without GLS (Gate-Level Simulation)? And if so, does their silicon actually work?

In his day (and mine), teams prepared for tape-out in three phases: a hierarchical gate-level netlist simulation to weed out X-propagation issues; then a full chip-level gate simulation (unit delay) to come out of reset, exercise all I/Os, and weed out any remaining X’s; then finally a run with SDF back-annotation on the clock-tree-inserted final netlist.
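To make that third phase a little more concrete, here is a rough sketch of a gate-level testbench fragment with SDF back-annotation and a simple X check. The netlist name, hierarchy, and signal names are placeholders, and the exact compile options depend on your simulator.

```systemverilog
// Illustrative gate-level simulation fragment for the SDF-annotated phase.
// 'chip_top', 'chip_top.sdf', and 'core_en' are placeholders for the real design.
module tb_gls;
  logic clk = 0, rst_n = 0;
  always #5 clk = ~clk;                              // free-running reference clock

  chip_top u_chip (.clk(clk), .rst_n(rst_n) /* ...pad connections... */);

  initial begin
    // Annotate post-layout timing onto the gate instances (IEEE 1800 system task).
    $sdf_annotate("chip_top.sdf", u_chip, , "sdf.log", "MAXIMUM");
    rst_n = 0;
    repeat (20) @(posedge clk);                      // come out of reset cleanly
    rst_n = 1;
    // ...wiggle every pad and exercise each test mode at least once...
  end

  // Simple X check on a critical control signal during bring-up.
  always @(posedge clk)
    if (rst_n && $isunknown(u_chip.core_en))
      $error("X detected on core_en after reset");
endmodule
```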


After much discussion about the actual value of GLS and the desirability of eliminating the pain of having to do it from our design flows, my firm conclusion:

Yes! Gate-level simulation is still required, at subsystem level and full chip level.

Its usage has been minimized over the years: first by adding LEC (Logical Equivalence Checking) and STA (Static Timing Analysis) to the RTL-to-GDSII design flow in the 90s, and second, over the last decade, by employing static analysis of common failure modes that were traditionally caught during GLS (X-propagation, clock-domain-crossing errors, power management errors, ATPG and BIST functionality) using tools like Questa® AutoCheck.

So there should not be any setup/hold or CDC issues remaining by this stage.  However, there are a number of reasons why I would always retain GLS:

  1. Financial prudence.  You tape out to foundry at your own risk, and GLS is the closest representation you can get to the printed design that you can do final due diligence on before you write that check.  Are you willing to risk millions by not doing GLS?
  2. It is the last resort to find any packaging issues that may be masked by use of inaccurate behavioral models higher up the flow, or erroneous STA due to bad false path or multi-cycle path definitions.  Also, simple packaging errors due to inverted enable signals can remain undetected by bad models.
  3. It ensures that the actual bring-up sequence of your first silicon will work when it hits the production tester after fabrication.  Teams have found bugs that would have caused the sequence of first power-up, scan test, and then blowing some configuration and security fuses on the tester to completely brick the device, had they not run a final, accurate bring-up test with all Design-For-Verification modes turned off.
  4. In block-level verification, maybe you are doing a datapath compilation flow for your DSP core which flips pipeline stages around, so normal LEC tools are challenged.  How can you be sure?
  5. The final stages of processing can cause unexpected transformations of your design that may or may not be caught by LEC and STA, e.g. during scan chain insertion, clocktree insertion, or power island retention/translation cell insertion.  You should not have any new setup/hold problems if the extraction and STA do their job, but what if there are gross errors affecting clock enables, tool errors, or data processing errors?  First silicon with stuck clocks is no fun.  Again, why take the risk?  Just one simulation of the bare metal design, coming up from power-on, wiggling all pads at least once and exercising all test modes at least once, is all that is required.
  6. When you have design deltas done at the physical netlist level: e.g. last minute ECOs (Engineering Change Orders), metal layer fixes, spare gate hookup, you can’t go back to an RTL representation to validate those.  Gates are all you have.
  7. You may need to simulate the production test vectors and burn-in test vectors for your first silicon, across process corners.  Your foundry may insist on this.
  8. Finally, you need to sleep at night while your chip is in the fab!

There are still misconceptions:

  • There is no need to repeat lots of RTL regression tests at gate level.  Don’t do that.  It takes an age to run those tests, so identify the tiny percentage of your regression suite that needs to be rerun on GLS, to make it count.
  • Don’t wait until tapeout week before doing GLS – prepare for it very early in your flow by doing the 3 preparation steps mentioned above as soon as practical, so that all X-pessimism issues are sorted out well before crunch time.
  • The biggest misconception of all: “designs today are too big to simulate.”  Avoid that kind of scaremongering.  Buy a faster computer with more memory.  Spend the right amount of money to offset the risk you are about to undertake when you print a 20nm mask set.

Yes, it is possible to tape out silicon that works without GLS.  But no, you should not consider taking that risk.  And no, there is no justification for viewing GLS as “old school” and just hoping it will go away.

Now, the above is just one opinion, and reflects recent design/verification work I have done with major semiconductor companies.  I anticipate that large designs will be harder and harder to simulate and that we may need to find solutions for gate-level signoff using an emulator.  I also found some interesting recent papers, resources, and opinion – I don’t necessarily agree with all the content but it makes for interesting reading:

I’d be interested to know what your company does differently nowadays.  Do you sleep at night?

If you are attending DVCon next week, check out some of Mentor’s many presentations and events as described by Harry Foster, and please come and find me in the Mentor Graphics booth (801). I would be happy to hear about your challenges in Design/Verification/UVM and especially Debug.
Thanks for reading,
Gordon


12 August, 2013

Language and Library Trends (Continued)

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 8 click here), I focused on design and verification language trends, as identified by the Wilson Research Group study. This blog presents additional trends related to verification language and library adoption trends.

You might note that for some of the language and library data I present, the percentage sums to more than 100 percent. The reason for this is that some participants’ projects use multiple languages or multiple testbench methodologies.

Testbench Methodology Class Library Adoption

Now let’s look at testbench methodology and class library adoption for IC/ASIC designs. Figure 1 shows the trends in terms of methodology and class library adoption by comparing the 2010 Wilson Research Group study (in blue) with the 2012 study (in green). Today, we see a downward trend in terms of adoption of all testbench methodologies and class libraries with the exception of UVM, which has increased by 486 percent since the fall of 2010. The study participants were also asked what they plan to use within the next 12 months, and based on the responses, UVM is projected to increase an additional 46 percent.

Figure 1. Methodology and class library trends

Figure 2 shows the adoption of testbench methodologies and class libraries for FPGA designs (in red). We do not have sufficient data to show prior adoption trends in the FPGA space, but we anticipate that our future studies will enable us to do this. However, we did ask the FPGA study participants which testbench methodologies and class libraries they were planning to adopt within the next 12 months. Based on these responses, we anticipate that UVM adoption will increase by 40 percent and OVM adoption by 24 percent in the FPGA space.

Figure 2. FPGA methodology and class library adoption

Assertion Languages and Libraries

Finally, let’s examine assertion language and library adoption for IC/ASIC designs. The Wilson Research Group study found that 63 percent of all the IC/ASIC participants have adopted assertion-based verification (ABV) as part of their verification strategy. The data presented in this section shows the assertion language and library adoption trend related to those participants who have adopted ABV.

Figure 3 shows the trends in terms of assertion language and library adoption by comparing the 2010 Wilson Research Group study (in blue), the 2012 Wilson Research Group study (in green), and the projected adoption trends within the next 12 months (in purple). The adoption of SVA continues to increase, while other assertion languages and libraries either remain flat or decline.

Figure 3. Assertion language and library adoption for Non-FPGA designs

Figure 4 shows the adoption of assertion language trends for FPGA designs (in red). Again, we do not have sufficient data to show prior adoption trends in the FPGA space, but we anticipate that our future studies will enable us to do this. We did ask the FPGA study participants which assertion languages and libraries they planned to adopt within the next 12 months. Based on these responses, we anticipate an increase in adoption for OVL, SVA, and PSL in the FPGA space within the next 12 months.

Figure 4. Trends in assertion language and library adoption for FPGA designs

In my next blog (click here), I plan to focus on the adoption of various verification technologies and techniques used in the industry, as identified by the 2012 Wilson Research Group study.


26 July, 2013

You don’t need a graphic like the one below to know that multi-core SoC designs are here to stay.  This one happens to be based on ARM’s AMBA 4 ACE architecture which is particularly effective for mobile design applications, offering an optimized mix of high performance processing and low power consumption.  But with software’s increasing role in overall design functionality, verification engineers are now tasked with verifying not just proper HW functionality, but proper HW functionality under control of application SW.  So how do you verify HW/SW interactions during system level verification?

 For most verification teams, the current alternatives are like choosing between a walk through the desert or drinking from a fire hose.  In the desert, you can manually write test programs in C, compile them and load them into system memory, and then initialize the embedded processors and execute the programs.  Seems straightforward, but now try it for multiple embedded cores and make sure you confirm your power up sequence and optimal low power management (remember, we’re testing a mobile market design), correct memory mapping, peripheral connectivity, mode selection, and basically anything that your design is intended to do before its battery runs out.  You can get lost pretty quickly.  Eventually you remember that you weren’t hired to write multi-threaded software programs, but that there’s an entire staff of software developers down the hall who were.  So you boot your design’s operating system, load the SW drivers, and run the design’s target application programs, and fully verify that all’s well between the HW and the SW at the system level.

But here comes the fire hose.  By this time, you’ve moved from your RTL simulator to an emulator, because just simulating Linux booting up takes weeks to months.  But what happens when your emulator runs into a system level failure after billions of clock cycles and several days of emulation?  There’s no way to avoid full HW/SW verification at the system level, but wouldn’t it be nice to find most of the HW/SW interaction bugs earlier in the process, when they’re easier to debug?

There’s an easier way to bridge the gap between the desert and the fire hose.  It’s called intelligent Software Driven Verification (iSDV).  iSDV automates the generation of embedded C test programs for multi-core processor execution.  These tests generate thousands of high-value processor instructions that verify HW/SW interactions.  Bugs discovered take much less time to debug, and the embedded C test programs can run in both simulation and emulation environments, easing the transition from one to the other.  Check out the on-line web seminar at the link below to learn about using intelligent Software Driven Verification as a way to uncover the majority of your system-level design bugs after RTL-level simulation, but before full system-level emulation.

http://www.mentor.com/products/fv/multimedia/automating-software-driven-hardware-verification-with-questa-infact


22 July, 2013

Effort Spent On Verification (Continued)

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (click here), I focused on the controversial topic of effort spent in verification. This blog continues that discussion.

I stated in my previous blog that I don’t believe there is a simple answer to the question, “how much effort was spent on verification in your last project?” I believe that it is necessary to look at multiple data points to truly get a sense of the real effort involved in verification today. So, let’s look at a few additional findings from the study.

Time designers spend in verification

It’s important to note that verification engineers are not the only project members involved in functional verification. Design engineers spend a significant amount of their time in verification too, as shown in Figure 1.

Figure 1. Average (mean) time design engineers spend in design vs. verification

In fact, you might note that design engineers now actually spend more time doing verification than design. This time expenditure has shifted in the last five years. In fact, the amount of time that design engineers spend doing verification has increased by 15 percent since 2007, while the amount of time they spend doing design has decreased by about 13 percent.

The designer’s involvement in verification ranges from:

  • Small sandbox testing to explore various aspects of the implementation
  • Full functional testing of IP blocks and SoC integration
  • Debugging verification problems identified by a separate verification team

Percentage of time verification engineers spend on various tasks

Next, let’s look at the mean time verification engineers spend performing various tasks related to their specific project, as shown in Figure 2. You might note that verification engineers spend most of their time in debugging. Ideally, if all the tasks were optimized, this is what you would expect. Yet, unfortunately, the time spent in debugging can vary significantly from project to project, which presents scheduling challenges for managers during a project’s verification planning process.

Figure 2. Average (mean) time verification engineers spend on various tasks

Number of formal analysis, FPGA prototyping, and emulation Engineers

Functional verification is not limited to simulation-based techniques. Hence, it’s important to gather data related to other functional verification techniques, such as the number of verification engineers involved in formal analysis, FPGA prototyping, and emulation.

Figure 3 presents the trends in terms of the number of verification engineers focused on formal analysis on a project. In 2007, the mean number of verification engineers focused on formal analysis on a project was 1.68, while in 2010 the mean number increased to 1.84. For some reason, we did see a slight decrease in the mean number of verification engineers who focus on formal in 2012. Regardless, the curve is remarkably consistent for the past five years.

Figure 3. Median number of verification engineers focused on formal analysis

Although FPGA prototyping is a common technique used to create platforms for software development, it is also sometimes used by projects for SoC integration verification and system validation. Figure 4 presents the trends in terms of the number of verification engineers focused on FPGA prototyping. In 2007, the mean number of verification engineers focused on FPGA prototyping on a project was 1.42, while in 2010 the mean number was 1.86. In 2012 we saw a slight decline in mean number of verification engineers focused on FPGA prototyping. However, the curve has been remarkably similar for the past five years.

Figure 4. Number of verification engineers focused on FPGA prototyping

Figure 5 presents the trends in terms of the number of verification engineers focused on hardware-assisted acceleration and emulation. In 2007, the mean number of verification engineers focused on hardware-assisted acceleration and emulation on a project was 1.31, while in 2010 the mean number was 1.86. In 2012, we see a slight decrease in the mean number of verification engineers who focus on hardware-assisted acceleration and emulation.

Figure 5. Number of verification engineers focused on emulation

Again, notice how the curve has been consistent over the past five years. In other words, we are not seeing any big trends in terms of increased verification engineers focused predominantly on formal, FPGA prototyping, and hardware-assisted acceleration and emulation. This trend was certainly not true for general verification engineers who focus on simulation-based techniques, as I presented in my previous blog, where we saw a 75 percent increase in the peak number of verification engineers involved on a project within the past five years.

A few more thoughts on verification effort

So, can I conclusively state that 70 percent of a project’s effort is spent in verification today as some people have claimed? No. In fact, even after reviewing the data on different aspects of today’s verification process, I would still find it difficult to state quantitatively what the effort is. Yet, the data that I’ve presented so far seems to indicate that the effort (whatever it is) is increasing. And there is still additional data relevant to the verification effort discussion that I plan to present in upcoming blogs. However, in my next blog (click here), I shift the discussion from verification effort, and focus on some of the 2012 Wilson Research Group findings related to testbench characteristics and simulation strategies.


15 July, 2013

 

Effort Spent in Verification

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I focused on design and verification reuse trends. In this blog, I focus on the controversial topic of the amount of effort spent in verification.

Directly asking study participants how much effort they spend in verification will not work. The reason is that it’s hard to find a paper or article on verification that doesn’t start with the phrase: “Seventy percent of a project’s effort is spent in verification…” In other words, the industry is already biased to respond with this effort value. Yet, there are really no credible references to quantify this value.

I don’t believe that there is a simple answer to the question, “How much effort was spent on verification in your last project?” In fact, I believe that it is necessary to look at multiple data points derived from multiple questions to truly get a sense of effort spent in verification. And that’s what we did in our functional verification study.

Total Project Time Spent in Verification

To try to assess the effort spent in verification, let’s begin by looking at one data point, which is the total project time spent in verification. Figure 1 shows the trends in total percentage of project time spent in verification for non-FPGA designs by comparing the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). 

Figure 1. Percentage of total project time spent in verification for Non-FPGA designs

The graph clearly shows that there are some projects that spend a significant percentage of project time in verification (>80%), while other projects spend significantly less time. Notice that in 2007, the average (mean) project time spent in verification was 49 percent, while the average increased to 56 percent in 2010 and remained the same in 2012.

Figure 2 shows the trends in total percentage of project time spent in verification for FPGA designs by comparing the 2010 Wilson Research Group study (in pink) and the 2012 Wilson Research Group study (in red).

 

Figure 2. Percentage of total project time spent in verification for FPGA designs

You might note that many FPGA projects tend to spend less time in verification than non-FPGA projects. Traditionally, the strategy for FPGA designs has been to get to the lab as soon as possible and debug issues in the lab. In a future blog I’ll show data that indicates this strategy does not necessarily yield good results in terms of meeting project schedule or quality objectives.

Peak Number of Design and Verification Engineers

Next, let’s look at another data point, the average (mean) peak number of engineers involved on a project. Figure 3 compares the growth in recent years for the average peak number of design engineers (in light green) and verification engineers (in dark green) working on a typical non-FPGA project.

 

Figure 3. Peak number of design vs. verification engineer trends for non-FPGA projects

Note that there has not been a significant increase in design engineers in the past five years, although design sizes have continued to increase at a Moore’s Law rate. This is partially due to increased adoption of internal and external IP (as I discussed in my previous blog) as well as continued productivity improvements due to automation.

However, the mean peak number of verification engineers working on non-FPGA projects has increased by 75% within the last five years. In fact, today we see (on average) a one-to-one ratio for a project’s peak number of design and verification engineers.

Figure 4 provides a different analysis of the data by partitioning the projects by design size, and then calculating the mean peak number of verification engineers for each partition. The design size partitions are represented as: less than 5M gates, 5M to 20M gates, and greater than 20M gates.

 

Figure 4. Mean peak number of verification engineer trends by design size for non-FPGA projects

Figure 5 shows the average (mean) peak number of design engineers (in red) and verification engineers (in pink) working on a typical FPGA project.

 

Figure 5. Peak number of design vs. verification engineer trends for FPGA projects

Also, note that the ratio of design engineers versus verification engineers hasn’t changed within the last two years for FPGA projects. Typically, design engineers on FPGA projects are responsible for verification too, and you will find many projects that do not have verification engineers. This trend, however, will likely change as FPGA designs become more complex. We are already seeing this on some very complex FPGA projects today.

In my next blog (click here), I’ll continue the discussion on effort spent in verification as revealed by the 2012 Wilson Research Group Functional Verification Study.


8 July, 2013

Reuse Trends

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I focused on clocking and power management.  In this blog, I focus on design and verification reuse trends. As I mentioned in my prologue blog to this series (click here), one interesting trend that emerged from the study is that reuse adoption is increasing.

Design Composition Trends

Figure 1 shows the mean design composition trends graph, which compares the 2007 Far West Research study (in blue) with the 2012 Wilson Research Group study (in green).

New logic development has decreased by 34 percent in the last five years, while external IP adoption has increased by 69 percent. This increase in adoption has been driven by IP demand required for SoC development, such as embedded processor cores and standard interface cores. 

Figure 1. Mean design composition trends

Figure 2 compares today’s design composition between FPGA designs (in red) with Non-FPGA designs (in green). Currently, more new designs (i.e., new RTL) are created for FPGA versus Non-FPGA designs. However, as FPGAs get larger in terms of transistors, reuse will become even more important to address the design productivity gap that could arise between the number of transistors that can be manufactured on an FPGA and the amount of time available to design them for a given project.

Figure 2. Mean composition comparison between FPGA and Non-FPGA designs.

 

Verification Testbench Composition Trends

Figure 3 shows the mean testbench composition trends graph, which compares the 2007 Far West Research study (in blue) with the 2012 Wilson Research Group study (in green).

Notice that new verification code development has decreased by 24 percent in the last five years, while external verification IP adoption has increased by 138 percent. This increase has been driven by the emergence of standard on-chip and off-chip bus architectures.

Figure 3. Mean testbench composition trends

Figure 4 compares today’s testbench composition between FPGA (in red) and Non-FPGA (in green) designs. Again, we see that more new code is written today for FPGA than Non-FPGA testbenches, and I expect this will change over time to be more in line with Non-FPGA designs. 

Figure 4. Mean testbench composition comparison between FPGA and Non-FPGA designs

In my next blog (click here), I’ll shift my focus from design trends to project resource trends. I’ll also present our findings on the project effort spent in verification.


28 June, 2013

Clocking and Power Trends

In Part 2 of this series of blogs, I continued the discussion focused on design trends (click here) as identified by the 2012 Wilson Research Group Functional Verification Study (click here). In this blog, I continue presenting the study findings related to design trends, with a focus on clocking and power trends.

Independent Asynchronous Clock Domains

Figure 1 shows the percentage of designs developed today by the number of independent asynchronous clock domains. The asynchronous clock domain data for FPGA designs is shown in red, while the data for the non-FPGA designs is shown in green.

 

Figure 1. Number of independent asynchronous clock domains

Figure 2 shows the trends in number of independent asynchronous clock domains for non-FPGA designs. The comparison includes the 2002 Collett study (in dark green), the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green).

Figure 2. Trends: Number of independent asynchronous clock domains

It’s interesting to note that, although the number of clock domains is increasing over time, the sweet spot in terms of number of independent asynchronous clock domains seems to remain between 2 and 20, and it hasn’t changed significantly in the past ten years.
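With most projects juggling somewhere between 2 and 20 independent clock domains, every crossing between those domains needs explicit synchronization. As a reminder of what that involves, here is a minimal sketch of the classic two-flip-flop synchronizer for a single-bit crossing; it is illustrative only, and multi-bit data requires a handshake or an asynchronous FIFO instead.

```systemverilog
// Classic two-flip-flop synchronizer for a single-bit signal crossing into
// the 'clk_dst' domain (illustrative sketch; names are hypothetical).
module sync_2ff (
  input  logic clk_dst,   // destination clock domain
  input  logic rst_n,     // asynchronous reset for this domain
  input  logic d_async,   // signal launched from another clock domain
  output logic d_sync     // safe to use in the clk_dst domain
);
  logic meta;             // first stage may go metastable

  always_ff @(posedge clk_dst or negedge rst_n)
    if (!rst_n) begin
      meta   <= 1'b0;
      d_sync <= 1'b0;
    end else begin
      meta   <= d_async;  // may capture a metastable value
      d_sync <= meta;     // extra cycle gives it time to settle
    end
endmodule
```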

Figure 3 provides a different analysis of the data by partitioning the projects by design size, and then calculating the mean number of independent asynchronous clock domains for each partition. The design size partitions are represented as: less than 5M gates, 5M to 20M gates, and greater than 20M gates.

Figure 3. Mean number of independent clock domains by design size

Power Management

Today, we see that about 67 percent of design projects actively manage power with a wide variety of techniques, ranging from simple clock-gating, to complex hypervisor/OS-controlled power management schemes. We decided for the 2012 Wilson Research Group study that we wanted to take a closer look at power management related to functional verification. Hence, I can share some interesting results with you here. However, since this aspect of functional verification has never been studied in previous surveys, I will not be able to show trends. Our goal is to carry these same questions forward in our future studies so that we can identify trends.

For those 67 percent of design projects that actively manage power, Figure 4 shows the various aspects of their power-managed design that they verify.

Figure 4. Aspects of power-managed design that are verified

In our study, we asked what percentage of simulation was power-aware (that is, verifying some functional aspect of the power-management scheme), and the results are shown in Figure 5. We were surprised to learn that about 10 percent of all designs that actively manage power perform no power-aware simulation to verify the power management scheme.

Figure 5. Percentage of simulation that verified some aspect of power management
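To make “verifying some functional aspect of the power-management scheme” a little more concrete, here is one hypothetical example of the kind of check a power-aware simulation might run; the control signal names (pwr_on, iso_en) are invented for illustration and do not come from the study.

```systemverilog
// Hypothetical power-aware checks: isolation must be enabled before a
// domain's supply is removed, and must stay enabled while it is off.
// Signal names (pwr_on, iso_en) are placeholders for a real power controller.
module pm_checks (input logic clk, rst_n, pwr_on, iso_en);

  // Isolation must already have been enabled in the cycle before power drops.
  a_iso_before_off: assert property (
    @(posedge clk) disable iff (!rst_n)
      $fell(pwr_on) |-> $past(iso_en)
  ) else $error("Power removed while isolation was disabled");

  // Isolation must not be released while the domain is still powered down.
  a_iso_held_while_off: assert property (
    @(posedge clk) disable iff (!rst_n)
      !pwr_on |-> iso_en
  ) else $error("Isolation released while domain was powered off");
endmodule
```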

In addition, we asked what percent of verification resources were focused on power management verification, and the results are shown in Figure 6. You will note that the curve is very similar to the percentage of total simulations that were power-aware, which you would expect. Again, we see that about 10 percent of the projects that actively manage power provide no verification resources to verify the power-management scheme.

 

Figure 6. Percentage of verification resources focused on power management

Figure 7 shows the different types of simulation-based functional testing approaches that are currently applied to verifying power management. It’s not a surprise that most power-aware simulation is based on directed-testing approaches since often (but not always) power-aware simulations are performed at the SoC integration level where directed testing is common.

 

Figure 7. Simulation-based approaches used to verify power management

Since the power intent cannot be directly described in an RTL model, alternative supporting notations have recently emerged to capture the power intent. In the 2012 study, we wanted to get a sense of where the industry stands in adopting the notation. For projects that actively manage power, Figure 8 shows the various notations that have been adopted to describe the power intent. Some projects are actively using multiple standards (such as different versions of UPF or a combination of CPF and UPF). That’s why the adoption results do not sum to 100 percent.

 

Figure 8. Notation used to describe power intent

In my next blog (click here), I’ll present data on design and verification reuse trends.


26 June, 2013

Design Trends (Continued)

In Part 1 of this series of blogs, I focused on design trends (click here) as identified by the 2012 Wilson Research Group Functional Verification Study (click here). In this blog, I continue presenting the study findings related to design trends, with a focus on embedded processor, DSP, and on-chip bussing trends.

Embedded Processors

In Figure 1, we see the percentage of today’s designs by the number of embedded processor cores. It’s interesting to note that 79 percent of all non-FPGA designs (in green) contain one or more embedded processors and could be classified as SoCs, which are inherently more difficult to verify than designs without embedded processors. Also note that 55 percent of all FPGA designs (in red) contain one or more embedded processors.

Figure 1. Number of embedded processor cores

Figure 2 shows the trends in terms of the number of embedded processor cores for non-FPGA designs. The comparison includes the 2004 Collett study (in dark green), the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green).

Figure 2. Trends: Number of embedded processor cores

For reference, between the 2010 and 2012 Wilson Research Group study, we did not see a significant change in the number of embedded processors for FPGA designs. The results look essentially the same as the red curve in Figure 1.

Another way to look at the data is to calculate the mean number of embedded processors that are being designed in by SoC projects around the world. In Figure 3, you can see the continual rise in the mean number of embedded processor cores, where the mean was about 1.06 in 2004 (in dark green). This mean increased in 2007 (in gray) to 1.46. Then, it increased again in 2010 (in blue) to 2.14. Today (in green) the mean number of embedded processors is 2.25. Of course, this calculation represents the industry average, where some projects are creating designs with many embedded processors, while other projects are creating designs with few or none.

It’s also important to note here that the analysis is per project, and it does not represent the number of embedded processors in terms of silicon volume (i.e., production). Some projects might be creating designs that result in high volume, while other projects are creating designs with low volume. 

Figure 3. Trends: Mean number of embedded processor cores

Another interesting way to look at the data is to partition it into design sizes (for example, less than 5M gates, 5M to 20M gates, greater than 20M gates), and then calculate the mean number of embedded processors by design size. The results are shown in Figure 4, and as you would expect, the larger the design, the more embedded processor cores.

Figure 4. Non-FPGA mean embedded processor cores by design size

Platform-based SoC design approaches (i.e., designs containing multiple embedded processor cores with lots of third-party and internally developed IP) have driven the demand for common bus architectures. In Figure 5 we see the percentage of today’s designs by the type of on-chip bus architecture for both FPGA (in red) and non-FPGA (in green) designs.

Figure 5. On-chip bus architecture adoption

Figure 6 shows the trends in terms of on-chip bus architecture adoption for Non-FPGA designs. The comparison includes the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). Note that there was about a 250 percent reported increase in Non-FPGA design projects using the ARM AMBA bus architecture between the years 2007 and 2012.

Figure 6. Trends: Non-FPGA on-chip bus architecture adoption  

Figure 7 shows the trends in terms of on-chip bus architecture adoption for FPGA designs. The comparison includes the 2010 Wilson Research Group study (in pink), and the 2012 Wilson Research Group study (in red). Note that there was about a 163 percent increase in FPGA design projects using the ARM AMBA bus architecture between the years 2010 and 2012. 

Figure 7. FPGA on-chip bus architecture adoption trends

In Figure 8 we see the percentage of today’s designs by the number of embedded DSP cores for both FPGA designs (in red) and non-FPGA designs (in green).

Figure 8. Number of embedded DSP cores

Figure 9 shows the trends in terms of the number of embedded DSP cores for non-FPGA designs. The comparison includes the 2007 Far West Research study (in grey), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green).

 

Figure 9. Trends: Number of embedded DSP cores

In my next blog (click here), I’ll present clocking and power trends.


23 April, 2013

This is the first in a series of blogs that presents the results from the 2012 Wilson Research Group Functional Verification Study.

Study Overview

In 2002 and 2004, Ron Collett International, Inc. conducted its well known ASIC/IC functional verification studies, which provided invaluable insight into the state of the electronic industry and its trends in design and verification. However, after the 2004 study, no other industry studies were conducted, which left a void in identifying industry trends.

To address this void, Mentor Graphics commissioned Far West Research to conduct an industry study on functional verification in the fall of 2007. Then in the fall of 2010, Mentor commissioned Wilson Research Group to conduct another functional verification study. Both of these studies were conducted as blind studies to avoid influencing the results. This means that the survey participants did not know that the study was commissioned by Mentor Graphics. In addition, to support trend analysis on the data, both studies followed the same format and questions (when possible) as the original 2002 and 2004 Collett studies.

In the fall of 2012, Mentor Graphics commissioned Wilson Research Group again to conduct a new functional verification study. This study was also a blind study and follows the same format as the Collett, Far West Research, and previous Wilson Research Group studies. The 2012 Wilson Research Group study is one of the largest functional verification studies ever conducted. The overall confidence level of the study was calculated to be 95% with a margin of error of 4.05%.
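For readers curious how those two numbers relate, the standard worst-case margin-of-error formula (assuming a simple random sample and a worst-case proportion of p = 0.5) lets you back out the approximate number of respondents. This is my own back-of-the-envelope arithmetic, not a figure reported by Wilson Research Group:

$$ n \;=\; \frac{z^{2}\,p(1-p)}{\mathrm{MOE}^{2}} \;=\; \frac{1.96^{2}\times 0.5\times 0.5}{0.0405^{2}} \;\approx\; 585 \ \text{respondents} $$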

Unlike the previous Collett and Far West Research studies that were conducted only in North America, both the 2010 and 2012 Wilson Research Group studies were worldwide studies. The regions targeted were:

  • North America: Canada, United States
  • Europe/Israel: Finland, France, Germany, Israel, Italy, Sweden, UK
  • Asia (minus India): China, Korea, Japan, Taiwan
  • India

The survey results are compiled both globally and regionally for analysis.

Another difference between the Wilson Research Group and previous industry studies is that both of the Wilson Research Group studies also included FPGA projects. Hence for the first time, we are able to present some emerging trends in the FPGA functional verification space.

Figure 1 shows the percentage makeup of survey participants by their job description. The red bars represent the FPGA participants while the green bars represent the non-FPGA (i.e., IC/ASIC) participants.

 

Figure 1: Survey participants’ job title description

Figure 2 shows the percentage makeup of survey participants by company type. Again, the red bars represent the FPGA participants while the green bars represent the non-FPGA (i.e., IC/ASIC) participants.

Figure 2: Survey participants’ company description

In a future set of blogs, over the course of the next few months, I plan to present the highlights from the 2012 Wilson Research Group study along with my analysis, comments, and obviously, opinions. A few interesting observations emerged from the study, which include:

  1. FPGA projects are beginning to adopt advanced verification techniques due to increased design complexity.
  2. The effort spent on verification is increasing.
  3. The industry is converging on common processes driven by maturing industry standards.

A few final comments concerning the 2012 Wilson Research Group Study.  As I mentioned, the study was based on the original 2002 and 2004 Collett studies.  To ensure consistency in terms of proper interpretation (or potential error related to misinterpretation of the questions), we have avoided changing or modifying the questions over the years, with the exception of questions that relate to shrinking geometry sizes and gate counts. One other exception relates to introducing a few new questions about verification techniques that were not a major concern ten years ago (such as low-power functional verification).  Ensuring consistency in the line of questioning enables us to have high confidence in the trends that emerge over the years.

Also, the method by which the study pool was created follows the same process as the original Collett studies.  It is important to note that the data presented in this series of blogs does not represent trends related to silicon volume (that is, a few projects could dominate in terms of the volume of manufactured silicon and not represent the broader industry).  The data in this series of blogs represents trends related to the study pool, which is a fair proxy for active design projects.

My next blog presents current design trends that were identified by the survey. This will be followed by a set of blogs focused on the functional verification results.

Also, to learn more about the 2012 Wilson Research Group study, view my pre-recorded Functional Verification Study web seminar, which is located on the Verification Academy website.

Quick links to the 2012 Wilson Research Group Study results (so far…)

