Posts Tagged ‘IEEE’

8 September, 2013

Schedules, respins, and bug classification

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 11 click here), I focused on some of the 2012 Wilson Research Group findings related to formal verification, acceleration/emulation, and FPGA prototyping trends. In this blog, I present the verification results in terms of schedules, respins, and the classification of functional bugs.

We have seen in previous blogs that a significant amount of effort is being applied to functional verification. An important question the various studies have tried to answer is whether this increasing effort is paying off.

Schedules

Figure 1 presents the design completion time compared to the project’s original schedule. What is interesting is that we really have not seen a change in this trend in over five years. That is, 67 percent of all projects are behind schedule with respect to the original plan. One could argue that designs have increased in complexity between 2007 and 2012 in terms of gate counts, embedded processors, and the amount of embedded software. Yet the industry’s ability to meet project schedules has not worsened.

Figure 1. Non-FPGA design completion time compared to the project’s original schedule

What’s interesting is that FPGA designs follow this same trend, as shown in Figure 2.

Figure 2. Non-FPGA vs. FPGA design completion time relative to the original plan

Respins

Other verification data points worth looking at relate to the number of spins required between the start of a project and final production. Figure 3 shows this industry trend all the way back to the 2004 Collett study. Again, even though designs have increased in complexity, the data suggest that projects are not getting any worse in terms of the number of spins before production. If anything, there appears to be a slight improvement recently in this trend in projects requiring three or more spins.

Figure 3. Number of spins required from start of project until production

Bug classification

Figure 4 shows the various categories of flaws that contribute to respins. Note that the sum is greater than 100 percent on this graph because a single respin can be triggered by multiple flaws.

Figure 4. Types of flaws contributing to respins

Although logic and functional flaws remain the leading causes of respins, the data suggest that there has been some improvement in this area over the past eight years. Is this due to the increased amount of reuse that is occurring in the industry? Or is the industry maturing its verification processes? Or is something entirely different going on? This data point raises some interesting questions worth exploring.

Figure 5 examines the root cause of functional bugs by various categories. The data suggest an improvement in logic errors over an eleven-year period and, potentially, a worsening of problems related to changing specifications. Problems associated with changing, incorrect, and incomplete specifications are a common theme I often hear when visiting customers.

Figure 5. Root cause of functional flaws

In my next blog (click here), I plan to wrap up this series of blogs in what I call the Epilogue—which will discuss potential gotchas and cautions on interpreting certain aspects of the data and thoughts about how the data might be used constructively.


19 August, 2013

Verification Techniques & Technologies Adoption Trends

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for background on the study, click here).

In my previous blog (Part 9 click here), I focused on some of the 2012 Wilson Research Group findings related to design and verification language and library trends. In this blog, I present verification techniques and technologies adoption trends, as identified by the 2012 Wilson Research Group study.

An interesting trend we are starting to see is that the electronic industry is maturing its functional verification processes, whether they are targeting their designs at IC/ASIC or FPGA implementations. This blog provides data to support this claim. An interesting question you might ask is, “What is driving this trend?” In some of my earlier blogs (click here for Part 1 and Part 2) I showed that design complexity is increasing in terms of design sizes and number of embedded processors. In addition, I’ve presented trend data that showed an increase in total project time and effort spent in verification (click here for Part 5 and Part 6). My belief is that the industry is being forced to mature its functional verification processes to address increasing complexity and effort.

Simulation Techniques Adoption Trends

Let’s begin by comparing non-FPGA adoption trends related to various simulation techniques from the 2007 Far West Research study (in blue) with the 2012 Wilson Research Group study (in green), as shown in Figure 1.

Figure 1. Simulation-based technique adoption trends for non-FPGA designs

You can see that the study finds the industry increasing its adoption of various functional verification techniques for non-FPGA targeted designs. Clearly the industry is maturing its processes as I previously claimed.

For example, in 2007, the Far West Research Group found that only 48 percent of the industry performed code coverage. This surprised me. After all, HDL-based code coverage is a technology that has been around since the early 1990s. However, I did informally verify the 2007 results through numerous customer visits and discussions. In 2012, we see that the industry adoption of code coverage has increased to 70 percent.

In 2007, the Far West Research Group study found that 37 percent of the industry had adopted assertions for use in simulation. In 2012, we find that industry adoption of assertions had increased to 63 percent. I believe that the maturing of the various assertion language standards has contributed to this increased adoption.
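
For readers less familiar with the technique, the sketch below shows the flavor of assertion a project might embed in simulation. It is a generic illustration rather than an example from the study; the handshake signals (req, gnt) and the timing window are invented for the purpose.

    // A minimal SystemVerilog Assertions (SVA) checker; signal names are hypothetical.
    module handshake_checker (input logic clk, rst_n, req, gnt);

      // Every request must be granted within 1 to 4 clock cycles.
      property p_req_gets_gnt;
        @(posedge clk) disable iff (!rst_n)
          req |-> ##[1:4] gnt;
      endproperty

      assert property (p_req_gets_gnt)
        else $error("req was not granted within 4 cycles");

    endmodule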

In 2007, the Far West Research Group study found that 40 percent of the industry had adopted functional coverage for use in simulation. In 2012, the industry adoption of functional coverage had increased to 66 percent. Part of this increase in functional coverage adoption has been driven by the increased adoption of constrained-random simulation, since you really can’t effectively do constrained-random simulation without doing functional coverage.
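
To make that dependency concrete: constrained-random stimulus decides what gets exercised, so a functional coverage model is what tells you whether the interesting cases were actually hit. Below is a minimal sketch of that pairing; the transaction fields and bins are invented for illustration.

    // Hypothetical transaction with random fields.
    class packet;
      rand bit [3:0] opcode;
      rand bit [7:0] len;
    endclass

    // Functional coverage model: records which cases the random stimulus hit.
    covergroup pkt_cov with function sample (packet p);
      cp_opcode : coverpoint p.opcode;
      cp_len    : coverpoint p.len {
        bins small  = {[0:15]};
        bins medium = {[16:127]};
        bins large  = {[128:255]};
      }
      cross cp_opcode, cp_len;  // which opcode/length combinations were seen
    endgroup

    module tb;
      packet  p   = new();
      pkt_cov cov = new();
      initial begin
        repeat (1000) begin
          void'(p.randomize());  // constrained-random stimulus
          cov.sample(p);         // measure what was actually exercised
        end
        $display("functional coverage = %0.2f%%", cov.get_inst_coverage());
      end
    endmodule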

Now let’s look at FPGA adoption trends related to various simulation techniques by comparing the 2010 Wilson Research Group study (in pink) with the 2012 Wilson Research Group study (in red).

Figure 2. Simulation-based technique adoption trends for FPGA designs

Again, you can clearly see that the industry is increasing its adoption of various functional verification techniques for FPGA targeted designs. This past year I have spent a significant amount of time in discussions with FPGA project managers around the world. During these discussions, most managers mention the drive to improve verification processes within their projects due to the rising complexity of this class of designs. The Wilson Research Group data supports these claims.

In fact, Figure 3 illustrates this maturing trend in the FPGA space, where we saw a 15 percent increase in the adoption of RTL simulation and an 8.5 percent increase in the adoption of code coverage. For complex FPGA designs, the traditional approach of “burn and churn” and debug in the lab is no longer a viable option. Nonetheless, it is still somewhat alarming that 31 percent of the FPGA study participants work on projects that perform no RTL simulation.

Figure 3. FPGA projects maturing their verification processes

Signoff Criteria Trends

We saw earlier in this blog the increased adoption of coverage techniques in the industry. Coverage has become a major component of a project’s verification signoff criteria. In Figure 4, we see how coverage has increased in importance in verification signoff criteria within the past five years, while other decision attributes have declined in terms of importance.

Figure 4. Non-FPGA functional verification signoff criteria trends

We see the same trends for FPGA designs, as shown in Figure 5.

Figure 5. FPGA functional verification signoff criteria trends

In my next blog (click here), I plan to continue the discussion related to adoption of various verification technologies and techniques as identified by the 2012 Wilson Research Group study.


12 August, 2013

Language and Library Trends (Continued)

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 8 click here), I focused on design and verification language trends, as identified by the Wilson Research Group study. This blog presents additional trends related to verification language and library adoption trends.

You might note that for some of the language and library data I present, the percentage sums to more than 100 percent. The reason for this is that some participants’ projects use multiple languages or multiple testbench methodologies.

Testbench Methodology Class Library Adoption

Now let’s look at testbench methodology and class library adoption for IC/ASIC designs. Figure 1 shows the trends in terms of methodology and class library adoption by comparing the 2010 Wilson Research Group study (in blue) with the 2012 study (in green). Today, we see a downward trend in terms of adoption of all testbench methodologies and class libraries with the exception of UVM, which has increased by 486 percent since the fall of 2010. The study participants were also asked what they plan to use within the next 12 months, and based on the responses, UVM is projected to increase an additional 46 percent.

Figure 1. Methodology and class library adoption trends for IC/ASIC designs
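
For readers who have not worked with these class libraries, the fragment below sketches the minimal shape of a UVM testbench: a test class that builds an environment, launched by run_test(). The component names are my own; this is an illustration of the pattern, not code from the study.

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    // A minimal environment; agents, scoreboard, and coverage would be built here.
    class my_env extends uvm_env;
      `uvm_component_utils(my_env)
      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction
    endclass

    // The test creates and configures the environment.
    class my_test extends uvm_test;
      `uvm_component_utils(my_test)
      my_env env;
      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction
      function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        env = my_env::type_id::create("env", this);
      endfunction
    endclass

    // Simulation entry point: run_test selects the test by name.
    module top;
      initial run_test("my_test");
    endmodule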

Figure 2 shows the adoption of testbench methodologies and class libraries for FPGA designs (in red). We do not have sufficient data to show prior adoption trends in the FPGA space, but we anticipate that our future studies will enable us to do this. However, we did ask the FPGA study participants which testbench methodologies and class libraries they were planning to adopt within the next 12 months. Based on these responses, we anticipate that UVM adoption will increase by 40 percent, and OVM adoption by 24 percent, in the FPGA space.

Figure 2. Methodology and class library adoption for FPGA designs

Assertion Languages and Libraries

Finally, let’s examine assertion language and library adoption for IC/ASIC designs. The Wilson Research Group study found that 63 percent of all the IC/ASIC participants have adopted assertion-based verification (ABV) as part of their verification strategy. The data presented in this section shows the assertion language and library adoption trend related to those participants who have adopted ABV.

Figure 3 shows the trends in terms of assertion language and library adoption by comparing the 2010 Wilson Research Group study (in blue), the 2012 Wilson Research Group study (in green), and the projected adoption trends within the next 12 months (in purple). The adoption of SVA continues to increase, while other assertion languages and libraries either remain flat or decline.

Figure 3. Assertion language and library adoption for Non-FPGA designs

Figure 4 shows the adoption of assertion language trends for FPGA designs (in red). Again, we do not have sufficient data to show prior adoption trends in the FPGA space, but we anticipate that our future studies will enable us to do this. We did ask the FPGA study participants which assertion languages and libraries they planned to adopt within the next 12 months. Based on these responses, we anticipate an increase in adoption for OVL, SVA, and PSL in the FPGA space within the next 12 months.

Figure 4. Trends in assertion language and library adoption for FPGA designs

In my next blog (click here), I plan to focus on the adoption of various verification technologies and techniques used in the industry, as identified by the 2012 Wilson Research Group study.


5 August, 2013

Language and Library Trends

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 7 click here), I focused on some of the 2012 Wilson Research Group findings related to testbench characteristics and simulation strategies. In this blog, I present design and verification language trends, as identified by the Wilson Research Group study.

You might note that for some of the language and library data I present, the percentage sums to more than one hundred percent. The reason for this is that some participants’ projects use multiple languages.

RTL Design Languages

Let’s begin by examining the languages used for RTL design. Figure 1 shows the trends in terms of languages used for design, by comparing the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), the 2012 Wilson Research Group study (in green), as well as the projected design language adoption trends within the next twelve months (in purple) as identified by the study participants. Note that design language adoption is declining for most of the languages with the exception of SystemVerilog, whose adoption continues to increase.

Also, it’s important to note that this study focused on languages used for RTL design. We have conducted a few informal studies related to languages used for architectural modeling—and it’s not too big of a surprise that we see increased adoption of C/C++ and SystemC in that space. However, since those studies have (thus far) been informal and not as rigorously executed as the Wilson Research Group study, I have decided to withhold that data until a more formal blind study can be executed related to architectural modeling and virtual prototyping.

Figure 1. Trends in languages used for Non-FPGA design

Let’s now look at the languages used specifically for FPGA RTL design. Figure 2 shows the trends in terms of languages used for FPGA design, by comparing the 2012 Wilson Research Group study (in red) with the projected design language adoption trends within the next twelve months (in purple).

Figure 2. Trends in languages used for FPGA design

It’s not too big of a surprise that VHDL is the predominant language used for FPGA RTL design, although we are starting to see increased interest in SystemVerilog.

Verification Languages

Next, let’s look at the languages used to verify Non-FPGA designs (that is, languages used to create simulation testbenches). Figure 3 shows the trends in terms of languages used to create simulation testbenches by comparing the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green).

Figure 3. Trends in languages used in verification to create Non-FPGA simulation testbenches

The study revealed that verification language adoption is declining for most of the languages with the exception of SystemVerilog, whose adoption is increasing. In fact, SystemVerilog adoption increased by 8.3 percent between 2010 and 2012.

Figure 4 provides a different analysis of the data by partitioning the projects by design size, and then calculating the adoption of SystemVerilog for creating testbenches by size. The design size partitions are represented as: less than 5M gates, 5M to 20M gates, and greater than 20M gates. Obviously, we find that the larger the design size, the greater the adoption of SystemVerilog for creating testbenches. Yet, probably the most interesting observation we can make from examining Figure 4 is related to smaller designs that are less than 5M gates. Here we see that 58.8 percent of the industry has adopted SystemVerilog for verification. In other words, it is safe to say that SystemVerilog for verification has become mainstream today and not just limited to early adopters or leading-edge design projects.

Figure 4. SystemVerilog (for verification) adoption by design size

Let’s now look at the languages used to create simulation testbenches specifically for FPGA designs. Figure 5 shows the trends in terms of languages used to create FPGA testbenches, by comparing the 2012 Wilson Research Group study (in red) with the projected verification language adoption trends within the next twelve months (in purple).

Figure 5. Trends in languages used in verification to create FPGA simulation testbenches

In my next blog (click here), I’ll continue the discussion on design and verification language trends as revealed by the 2012 Wilson Research Group Functional Verification Study.


29 July, 2013

Testbench Characteristics and Simulation Strategies

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for background on the study, click here).

In my previous blog (click here), I focused on the controversial topic of effort spent in verification. In this blog, I focus on some of the 2012 Wilson Research Group findings related to testbench characteristics and simulation strategies. Although I am shifting the focus away from verification effort, I believe that the data I present in this blog is related to my previous blog and really needs to be considered when calculating effort.

Time Spent in Full-Chip versus Subsystem-Level Simulation

Let’s begin by looking at Figure 1, which shows the percentage of time (on average) that a project spends in full-chip or SoC integration-level verification versus subsystem and IP block-level verification. The mean time performing full chip verification is represented by the dark green bar, while the mean time performing subsystem verification is represented by the light green bar. Keep in mind that this graph represents the industry average. Some projects spend more time in full-chip verification, while other projects spend less time.

Figure 1. Mean time spent in full chip versus subsystem simulation

Number of Tests Created to Verify the Design in Simulation

Next, let’s look at Figure 2, which shows the number of tests various projects create to verify their designs using simulation. The graph represents the findings from the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). Note that the curves look remarkably similar over the past five years. The median number of tests created to verify the design is within the range of (>200 – 500) tests. It is interesting to see a sharp percentage increase in the number of participants who claimed that fewer tests (1 – 100) were created to verify a design in 2012. It’s hard to determine exactly why this was the case—perhaps it is due to the increased use of constrained random (which I will talk about shortly). Or perhaps there has been an increased use of legacy tests. The study was not designed to go deeper into this issue and try to uncover the root cause. This is something I intend to study informally next year through discussions with various industry thought leaders.

Figure 2. Number of tests created to verify a design in simulation

Percentage of Directed Tests versus Constrained-Random Tests

Now let’s compare the percentage of directed testing that is performed on a project to the percentage of constrained-random testing. Of course, in reality there is a wide range in the amount of directed and constrained-random testing that is actually performed on various projects. For example, some projects spend all of their time doing directed testing, while other projects combine techniques and spend part of their time doing directed testing—and the other part doing constrained-random. For our comparison, we will look at the industry average, as shown in Figure 3. The average percentage of tests that were directed is represented by the dark green bar, while the average percentage of tests that are constrained-random is represented by the light green bar.

Figure 3. Mean directed versus constrained-random testing performed on a project

Notice how the percentage mix of directed versus constrained-random testing has changed over the past two years. Today we see that, on average, a project performs more constrained-random simulation. In fact, between 2010 and 2012 there has been a 39 percent increase in the use of constrained-random simulation on a project. One driving force behind this increase has been the maturing and acceptance of both the SystemVerilog and UVM standards—since these two standards facilitate an easier implementation of a constrained-random testbench. In addition, today we find that an entire ecosystem has emerged around both the SystemVerilog and UVM standards. This ecosystem consists of tools, verification IP, and industry expertise, such as consulting and training.

Nonetheless, even with the increased adoption of constrained-random simulation on a project, you will find that constrained-random simulation is generally only performed at the IP block or subsystem level. For the full SoC level simulation, directed testing and processor-driven verification are the prominent simulation-based techniques in use today.
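
To make the terminology concrete: a constrained-random test randomizes transaction fields under declarative constraints rather than enumerating each stimulus by hand. The sketch below shows the basic SystemVerilog machinery; the bus fields, constraints, and weights are invented for illustration.

    // Hypothetical constrained-random bus transaction.
    class bus_txn;
      rand bit [31:0] addr;
      rand bit [1:0]  burst;
      rand bit [7:0]  data [];

      constraint c_aligned { addr[1:0] == 2'b00; }                   // word-aligned addresses
      constraint c_len     { data.size() inside {[1:16]}; }          // bounded burst length
      constraint c_weights { burst dist {0 := 6, 1 := 3, 2 := 1}; }  // bias toward short bursts
    endclass

    module tb;
      initial begin
        bus_txn t = new();
        repeat (100) begin
          if (!t.randomize()) $fatal(1, "randomization failed");
          // drive t onto the bus here
        end
      end
    endmodule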

Simulation Regression Time

Now let’s look at the time that various projects spend in a simulation regression. Figure 4 shows the trends in terms of simulation regression time by comparing the 2007 Far West Research study (in gray) with the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). There really hasn’t been a significant change in the time spent in a simulation regression within the past three years. You will find that some teams spend days or even weeks in a regression. Yet today, the industry median is between 8 and 16 hours, and for many projects, there has been a decrease in regression time over the past few years. Of course, this is another example of where deeper analysis is required to truly understand what is going on. To begin with, these questions should probably be refined to better understand simulation times related to IP versus SoC integration-level regressions. We will likely do that in future studies—with the understanding that we will not be able to show trends (or at least not initially).

Figure 4. Simulation regression time trends

In my next blog (click here), I’ll focus on design and verification language trends, as identified by the 2012 Wilson Research Group study.


28 June, 2013

Clocking and Power Trends

In Part 2 of this series of blogs, I continued the discussion focused on design trends (click here) as identified by the 2012 Wilson Research Group Functional Verification Study (click here). In this blog, I continue presenting the study findings related to design trends, with a focus on clocking and power trends.

Independent Asynchronous Clock Domains

Figure 1 shows the percentage of designs developed today by the number of independent asynchronous clock domains. The asynchronous clock domain data for FPGA designs is shown in red, while the data for the non-FPGA designs is shown in green.

Figure 1. Number of independent asynchronous clock domains

Figure 2 shows the trends in number of independent asynchronous clock domains for non-FPGA designs. The comparison includes the 2002 Collett study (in dark green), the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green).

Figure 2. Trends: Number of independent asynchronous clock domains

It’s interesting to note that, although the number of clock domains is increasing over time, the sweet spot in terms of number of independent asynchronous clock domains seems to remain between 2 and 20, and it hasn’t changed significantly in the past ten years.
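
One reason the domain count matters for verification is that every signal crossing between independent asynchronous domains needs synchronization logic, and every such crossing is a potential metastability bug. The classic building block is the two-flop synchronizer, sketched generically below.

    // Classic two-flop synchronizer for a single-bit clock-domain crossing.
    module sync_2ff (
      input  logic clk_dst,  // destination-domain clock
      input  logic rst_n,
      input  logic d_async,  // signal arriving from another clock domain
      output logic q_sync
    );
      logic meta;
      always_ff @(posedge clk_dst or negedge rst_n) begin
        if (!rst_n) begin
          meta   <= 1'b0;
          q_sync <= 1'b0;
        end else begin
          meta   <= d_async;  // first flop may go metastable
          q_sync <= meta;     // second flop presents a settled value
        end
      end
    endmodule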

Figure 3 provides a different analysis of the data by partitioning the projects by design size, and then calculating the mean number of independent asynchronous clock domains by design size. The design size partitions are represented as: less than 5M gates, 5M to 20M gates, and greater than 20M gates.

Figure 3. Mean number of independent clock domains by design size

Power Management

Today, we see that about 67 percent of design projects actively manage power with a wide variety of techniques, ranging from simple clock-gating to complex hypervisor/OS-controlled power management schemes. We decided for the 2012 Wilson Research Group study that we wanted to take a closer look at power management related to functional verification. Hence, I can share some interesting results with you here. However, since this aspect of functional verification has never been studied in previous surveys, I will not be able to show trends. Our goal is to carry these same questions forward in our future studies so that we can identify trends.

For those 67 percent of design projects that actively manage power, Figure 4 shows the various aspects of their power-managed designs that they verify.

Figure 4. Aspects of power-managed design that are verified

In our study, we asked what percentage of simulation was power-aware (that is, verifying some functional aspect of the power-management scheme), and the results are shown in Figure 5. We were surprised to learn that about 10 percent of all designs that actively manage power perform no power-aware simulation to verify the power management scheme.

Figure 5. Percentage of simulation that verified some aspect of power management

In addition, we asked what percent of verification resources were focused on power management verification, and the results are shown in Figure 6. You will note that the curve is very similar to the percentage of total simulations that were power-aware, which you would expect. Again, we see that about 10 percent of the projects that actively manage power provide no verification resources to verify the power-management scheme.

Figure 6. Percentage of verification resources focused on power management

Figure 7 shows the different types of simulation-based functional testing approaches that are currently applied to verifying power management. It’s not a surprise that most power-aware simulation is based on directed-testing approaches since often (but not always) power-aware simulations are performed at the SoC integration level where directed testing is common.

Figure 7. Simulation-based testing approaches applied to verifying power management
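
As a concrete, and entirely hypothetical, illustration of what such a power-aware check can look like, the assertion below verifies that a block’s output is clamped while its power domain is off. The signal names (pwr_en, iso_en, blk_out) are invented for the example.

    // Hypothetical power-management checker; all signal names are invented.
    module pm_checker (
      input logic       clk,
      input logic       pwr_en,   // domain power enable
      input logic       iso_en,   // isolation enable
      input logic [7:0] blk_out   // output of the power-gated block
    );
      // While the domain is powered down, isolation must be active and the
      // output clamped to its isolation value (zero in this sketch).
      property p_isolated_when_off;
        @(posedge clk) !pwr_en |-> (iso_en && blk_out == '0);
      endproperty
      assert property (p_isolated_when_off)
        else $error("output not isolated during power-down");
    endmodule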

Since the power intent cannot be directly described in an RTL model, alternative supporting notations have recently emerged to capture the power intent. In the 2012 study, we wanted to get a sense of where the industry stands in adopting these notations. For projects that actively manage power, Figure 8 shows the various notations that have been adopted to describe the power intent. Some projects are actively using multiple standards (such as different versions of UPF or a combination of CPF and UPF). That’s why the adoption results do not sum to 100 percent.

Figure 8. Notation used to describe power intent

In my next blog (click here), I’ll present data on design and verification reuse trends.


8 May, 2013

Design Trends

In my previous blog, I introduced the 2012 Wilson Research Group Functional Verification Study (click here). The objective of my previous blog was to provide background on this large, worldwide industry study. I will present the key findings from this study in a set of upcoming blogs. 

This blog begins the process of revealing the 2012 Wilson Research Group study findings by first focusing on current design trends. Let’s begin by examining process geometry adoption trends, as shown in Figure 1. Here, you will see trend comparisons between the 2007 Far West Research study (gray line), the 2010 Wilson Research Group study (blue line), and the 2012 Wilson Research Group study (green line).

Figure 1. Process geometry trends

Worldwide, the median process geometry size from the 2007 Far West Research study was about 90nm, while the median process geometry size was about 65nm in 2010. Today, the median process geometry size for a typical project is about 45nm—although you can see that over a third of projects today are designing below 32nm.

In addition to the industry moving to smaller process geometries, the industry is also moving to larger design sizes as measured in number of gates of logic and datapath, excluding memories (which should not be a surprise). Figure 2 compares design sizes from the 2002 Collett study (dark blue line), the 2007 Far West Research study (gray line), the 2010 Wilson Research Group study (light blue line), and the 2012 Wilson Research Group study (green line).

Figure 2. Number of gates of logic and datapath trends, excluding memories

The study revealed that about a third of the non-FPGA designs today are less than 5M gates, while a third range in size from 5M to 20M gates, and about a third of all designs are larger than 20M gates.

It’s important to note here that the data on the mean design size trends does not reflect volume in terms of semiconductor production. For example, you could have fewer projects designing at a small geometry, yet they have higher volume in terms of production.

In Figure 3, I show the mean design size trends between the 2002 Collett study (dark blue line), the 2007 Far West Research study (gray line), the 2010 Wilson Research Group study (light blue line), and the 2012 Wilson Research Group study (green line). Obviously, gate counts have increased over the years, yet a significant number of designs continue to be developed with smaller (and larger) gate counts as indicated by the mean calculation. Another observation is that, as you would expect, the mean gate count trend is essentially following Moore’s law.

Figure 3. Mean design size trends

Figure 4 presents the current design implementation trends for non-FPGAs as identified by the survey participants.

Figure 4. Non-FPGA current design implementation trends

The data in Figure 4 presents trends in design implementation approaches for non-FPGA designs, ranging from the 2002 Collett study (dark blue bar), the 2004 Collett study (dark green bar), the 2007 Far West Research study (gray bar), the 2010 Wilson Research Group study (blue bar), and the 2012 Wilson Research Group study (green bar). Note that the study seems to indicate that there is a downward trend in standard cell design implementation.

Figure 5. FPGA design implementation trends

For the 2012 study, we decided that we wanted to get a sense of the percentage of FPGA projects that target the very complex programmable SoC FPGAs that have recently emerged, as shown in Figure 5. Examples of these programmable SoC FPGAs include Xilinx’s Zynq, Altera’s Arria/Cyclone, and Microsemi’s SmartFusion.

In my next blog (click here), I’ll continue discussing current design trends, focusing specifically on embedded processors, power, and clock domains.


23 April, 2013

This is the first in a series of blogs that presents the results from the 2012 Wilson Research Group Functional Verification Study.

Study Overview

In 2002 and 2004, Ron Collett International, Inc. conducted its well-known ASIC/IC functional verification studies, which provided invaluable insight into the state of the electronic industry and its trends in design and verification. However, after the 2004 study, no further industry studies were conducted, which left a void in identifying industry trends.

To address this void, Mentor Graphics commissioned Far West Research to conduct an industry study on functional verification in the fall of 2007. Then in the fall of 2010, Mentor commissioned Wilson Research Group to conduct another functional verification study. Both of these studies were conducted as blind studies to avoid influencing the results. This means that the survey participants did not know that the study was commissioned by Mentor Graphics. In addition, to support trend analysis on the data, both studies followed the same format and questions (when possible) as the original 2002 and 2004 Collett studies.

In the fall of 2012, Mentor Graphics commissioned Wilson Research Group again to conduct a new functional verification study. This study was also a blind study and follows the same format as the Collett, Far West Research, and previous Wilson Research Group studies. The 2012 Wilson Research Group study is one of the largest functional verification studies ever conducted. The overall confidence level of the study was calculated to be 95% with a margin of error of 4.05%.
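
For readers who want to sanity-check those statistics: applying the standard margin-of-error formula for a proportion at maximum variance (p = 0.5, with z = 1.96 for 95 percent confidence), an assumption of mine since the study does not publish its sample-size arithmetic, the quoted 4.05 percent margin implies a study pool on the order of 585 respondents.

    MOE = z\sqrt{\frac{p(1-p)}{n}}
    \quad\Longrightarrow\quad
    n = p(1-p)\left(\frac{z}{MOE}\right)^{2}
      = 0.25\left(\frac{1.96}{0.0405}\right)^{2}
      \approx 585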

Unlike the previous Collett and Far West Research studies that were conducted only in North America, both the 2010 and 2012 Wilson Research Group studies were worldwide studies. The regions targeted were:

  • North America: Canada, United States
  • Europe/Israel: Finland, France, Germany, Israel, Italy, Sweden, UK
  • Asia (minus India): China, Korea, Japan, Taiwan
  • India

The survey results are compiled both globally and regionally for analysis.

Another difference between the Wilson Research Group and previous industry studies is that both of the Wilson Research Group studies also included FPGA projects. Hence for the first time, we are able to present some emerging trends in the FPGA functional verification space.

Figure 1 shows the percentage makeup of survey participants by their job description. The red bars represent the FPGA participants, while the green bars represent the non-FPGA (i.e., IC/ASIC) participants.

Figure 1: Survey participants’ job title description

Figure 2 shows the percentage makeup of survey participants by company type. Again, the red bars represent the FPGA participants, while the green bars represent the non-FPGA (i.e., IC/ASIC) participants.

Figure 2: Survey participants’ company description

In a future set of blogs, over the course of the next few months, I plan to present the highlights from the 2012 Wilson Research Group study along with my analysis, comments, and obviously, opinions. A few interesting observations emerged from the study, which include:

  1. FPGA projects are beginning to adopt advanced verification techniques due to increased design complexity.
  2. The effort spent on verification is increasing.
  3. The industry is converging on common processes driven by maturing industry standards.

A few final comments concerning the 2012 Wilson Research Group study. As I mentioned, the study was based on the original 2002 and 2004 Collett studies. To ensure consistency in terms of proper interpretation (or potential error related to misinterpretation of the questions), we have avoided changing or modifying the questions over the years—with the exception of questions that relate to shrinking geometry sizes and gate counts. One other exception relates to introducing a few new questions related to verification techniques that were not a major concern ten years ago (such as low-power functional verification). Ensuring consistency in the line of questioning enables us to have high confidence in the trends that emerge over the years.

Also, the method by which the study pool was created follows the same process as the original Collett studies. It is important to note that the data presented in this series of blogs does not represent trends related to silicon volume (that is, a few projects could dominate in terms of the volume of manufactured silicon and not represent the broader industry). The data in this series of blogs represents trends related to the study pool—which is a fair proxy for active design projects.

My next blog presents current design trends that were identified by the survey. This will be followed by a set of blogs focused on the functional verification results.

Also, to learn more about the 2012 Wilson Research Group study, view my pre-recorded Functional Verification Study web seminar, which is located on the Verification Academy website.

Quick links to the 2012 Wilson Research Group Study results (so far…)


25 February, 2013

Today at this week’s DVCon 2013 conference, the IEEE Standards Association (IEEE-SA) and Accellera Systems Initiative (Accellera) have jointly announced the public availability of the IEEE 1800 SystemVerilog Language Reference Manual at no charge through the IEEE Get Program.

As I posted a few weeks ago, the 1800-2012 is not a major revision of the standard, but it does contain a few enhancements that will be of interest to design and verification engineers alike. However, providing the standard as a freely available download is major news.

Even though the relative cost of the LRM was minor compared to the cost of most projects utilizing the standard, there seemed to be a barrier in most engineers’ minds in justifying the expense. So most just continued to use the last freely available SystemVerilog 3.1a LRM, which was nine years old and very obsolete for such a rapidly changing technology.

Thanks Accellera!


31 October, 2012

Ready for 100 billion “things” connected by the Internet?

The IEEE Standards Association (SA) Corporate Advisory Group (CAG) has been working to bring industry input into the standards development organization on the emerging Internet of Things (IoT) trend that will connect billions of devices with each other.

As you can imagine, the impact will reach from the service structure down to the development of the connected devices themselves, affecting everything from the EDA tools used to create, verify, and test those devices to the protocols that will need to be in place to facilitate their communication.

This past summer, oneM2M was launched to bring together groups dedicated to producing technical specifications for the M2M Service Layer. The impact on the IEEE, which is responsible for ongoing Internet standardization, is likewise large and not totally known.

I was reminded of the IoT impact this week by ARM’s EVP, Simon Segars, whose ARM Techcon keynote presentation noted that the IoT is a merging of our digital and physical worlds. He also said predictions are that data from smartphones is “exploding at a 100% growth rate a year for the next 4-5 years.” To make the point even more stunning, Simon shared that Facebook expects 1-2 billion pictures will be taken and uploaded to its website around Halloween 2012. The good news for those who did not have the time to make it to Santa Clara, CA, USA for ARM Techcon is that his presentation has been made available for viewing on YouTube. You can find it here.

The IoT conversation continues around the globe.

IEEE IoT Workshop: You are invited!

IEEE has restored service to their Internet connection at www.ieee.org. However, connection from IEEE staff locations is tentative due to the widespread devastation of Hurricane Sandy in the New Jersey, USA area where they live and work. There may be delays in getting official invitations out on the IoT workshop. The IEEE workshop on the Internet of Things has been put together in conjunction with several of the CAG member companies, with direct leadership from our STMicroelectronics representative and input from representatives from Broadcom, GE Medical, Ericsson, Qualcomm, and others. The IEEE SA staff and IoT Workshop leadership have asked those who are connected to share workshop information. I am doing that here.

You are invited to attend and participate in the workshop.  Details on the event are:

Event Description:

The event will feature a combination of keynote speeches, product showcase and panel sessions with the goal to:

  • identify collaboration opportunities and standardization gaps related to IoT;
  • help industry foster the growth of IoT markets;
  • leverage IEEE’s value and platform for IoT industry-wide consensus development; and
  • help industry with the creation of a vibrant IoT ecosystem.

Date: 13 November 2012
Location: Milan, Italy
Fee: Free

Keynotes include:

  • Service Provider’s View of the IoT World (SP)
  • End to End Systems Security (ST)
  • IEEE-SA – Perfect Platform for the New Millennia of Consensus Development

Panel Topics include:

  • GW as an Enabler of the New Services in the IoT World
  • Monetizing Services in the IoT World
  • Security in the IoT World
  • Standards: what we have and what is missing, convergence in the technology world, and collaboration opportunities

Website: http://standards.ieee.org/events/iot/index.html

WEATHER ALERT!
31 October 2012 4:25 p.m. PDT
Access to ieee.org has been restored. That was quick! You can now access IoT Workshop details from IEEE directly.
31 October 2012 3:00 p.m. PDT
Due to the impact of Hurricane Sandy, power to IEEE servers has been lost and backup power sources have been depleted. Access to the IEEE website for more information, registration, and additional details is not available at this moment. The workshop will be held. If the servers return to the Internet, I will update this notice. And if their absence appears to be something that will last longer than another day or so, I will update this blog with alternate contact information for those who would like more detailed information on how to register and where to go to attend the event.

