Archive for Harry Foster

3 June, 2015

FPGA Language and Library Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I focused on FPGA verification techniques and technologies adoption trends, as identified by the 2014 Wilson Research Group study. In this blog, I’ll present FPGA design and verification language trends, as identified by the Wilson Research Group study.

You might note that the percentages for some of the language and library data that I present sum to more than one hundred percent. The reason is that many FPGA projects today use multiple languages.

FPGA RTL Design Language Adoption Trends

Let’s begin by examining the languages used for FPGA RTL design. Figure 1 shows the trends in terms of languages used for design, by comparing the 2012 Wilson Research Group study (in dark blue), the 2014 Wilson Research Group study (in light blue), as well as the projected design language adoption trends within the next twelve months (in purple). Note that the language adoption is declining for most of the languages used for FPGA design with the exception of Verilog and SystemVerilog.

Also, it’s important to note that this study focused on languages used for RTL design. We have conducted a few informal studies related to languages used for architectural modeling—and it’s not too big of a surprise that we see increased adoption of C/C++ and SystemC in that space. However, since those studies have (thus far) been informal and not as rigorously executed as the Wilson Research Group study, I have decided to withhold that data until a more formal study can be executed related to architectural modeling and virtual prototyping.

Figure 1. Trends in languages used for FPGA design

It’s not too big of a surprise that VHDL is the predominant language used for FPGA RTL design, although the projected trend is that Verilog will likely overtake VHDL as the predominant FPGA design language in the near future.

FPGA Verification Language Adoption Trends

Next, let’s look at the languages used to verify FPGA designs (that is, languages used to create simulation testbenches). Figure 2 shows the trends in terms of languages used to create simulation testbenches by comparing the 2012 Wilson Research Group study (in dark blue), the 2014 Wilson Research Group study (in light blue), as well as the projected verification language adoption trends within the next twelve months (in purple).

Figure 2. Trends in languages used in verification to create FPGA simulation testbenches

FPGA Testbench Methodology Class Library Adoption Trends

Now let’s look at testbench methodology and class library adoption for FPGA designs. Figure 3 shows the trends in terms of methodology and class library adoption by comparing the 2012 Wilson Research Group study (in dark blue), the 2014 Wilson Research Group study (in light blue), as well as the projected methodology and class library adoption trends within the next twelve months (in purple).

Figure 3. FPGA methodology and class library adoption trends

Today, we see a downward trend in terms of adoption of all testbench methodologies and class libraries with the exception of UVM, which has increased by 28 percent since 2012. The study participants were also asked what they plan to use within the next 12 months, and based on the responses, UVM is projected to increase an additional 20 percent.

FPGA Assertion Language and Library Adoption Trends

Finally, let’s examine assertion language and library adoption for FPGA designs. The 2014 Wilson Research Group study found that 44 percent of all the FPGA projects have adopted assertion-based verification (ABV) as part of their verification strategy. The data presented in this section shows the assertion language and library adoption trends related to those participants who have adopted ABV.

Figure 4 shows the trends in terms of assertion language and library adoption by comparing the 2012 Wilson Research Group study (in dark blue), the 2014 Wilson Research Group study (in light blue), and the projected adoption trends within the next 12 months (in purple). The adoption of SVA continues to increase, while other assertion languages and libraries either remain flat or decline.

Figure 4. Trends in assertion language and library adoption for FPGA designs

In my next blog (click here), I will continue presenting findings from the 2014 Wilson Research Group Functional Verification Study.

Quick links to the 2014 Wilson Research Group Study results


2 June, 2015

This year we are trying something new at the Verification Academy booth during next week’s 2015 Design Automation Conference.  We’ve decided to host an interactive panel on the controversial topic of Agile development. I say controversial because you typically find two camps of engineers when discussing the subject of Agile development—the believers and the non-believers.

My colleague Neil Johnson, principal consultant from XtremeEDA Corporation and a leading expert in Agile development, will provide some context for the topic with a short background on Agile methods to kick the panel off. Then I plan to join Neil on the panel, which will be moderated by Mentor’s own world-renowned Dennis Brophy.  Our intent is to have a healthy, interactive discussion with both the believers and the non-believers in the audience.

So, why is the subject of Agile development even worthy of discussion at DAC? Well, not to entirely give away my position on the subject…but I think it’s worthwhile to note some of the recent findings related to the root cause of logical and functional flaws from the 2014 Wilson Research Group Functional Verification Study (see figure below).

Clearly, design errors are a major factor contributing to bugs. Yet a growing concern is the number of issues surrounding the specification that are leading to logical and functional flaws. In reality, there is no such thing as the perfect specification—and few projects can afford to wait to start development until perfection is achieved. Furthermore, in many market segments, late-stage changes in the specification are common practice to ensure that the final product is competitive in a rapidly changing market. Could Agile development, in which requirements and solutions evolve through collaboration between self-organizing, cross-functional teams, be the saving grace? Please join us on June 8th at 5pm in the Verification Academy booth at DAC and hear what the experts are saying!


11 May, 2015

FPGA Verification Technology Adoption Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I focused on the effectiveness of verification in terms of FPGA project schedule and bug escapes. In this blog, I present verification techniques and technologies adoption trends, as identified by the 2014 Wilson Research Group study.

An interesting trend we see in the FPGA space is a continual maturing of its functional verification processes. In fact, we find that the FPGA design space is about where the ASIC/IC design space was five years ago in terms of verification maturity—and it is catching up quickly. A question you might ask is, “What is driving this trend?” In Part 1 of this blog series I showed rising design complexity with the adoption of more advanced FPGA designs, as well as multiple embedded processor architectures targeted at FPGA designs. In addition, I’ve presented trend data that showed an increase in total project time and effort spent in verification (Part 2 and Part 3). My belief is that the industry creating FPGA designs is being forced to mature its functional verification processes to address today’s increasing complexity.

FPGA Simulation Technique Adoption Trends

Let’s begin by comparing FPGA adoption trends related to various simulation techniques from both the 2012 and 2014 Wilson Research Group studies, as shown in Figure 1.

Figure 1. Simulation-based technique adoption trends for FPGA designs

You can clearly see that the industry is increasing its adoption of various functional verification techniques for FPGA-targeted designs. This past year I have spent a significant amount of time in discussions with FPGA project managers around the world. During these discussions, most managers mention the drive to improve the verification process within their projects due to rising complexity. The Wilson Research Group data suggest that these claims are valid.

FPGA Formal Technology Adoption Trends

Figure 2 shows the adoption percentages for formal property checking and automatic formal techniques.

Figure 2. FPGA Formal Technology Adoption

Our study looked at two forms of formal technology adoption (i.e., formal property checking and automatic formal verification solutions). Examples of automatic formal verification solutions include X safety checks, deadlock detection, reset analysis, and so on.  The key difference is that for formal property checking the user writes a set of assertions that they wish to prove.  Automatic formal verification solutions do not require the user to write assertions.
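To make the distinction concrete, here is a minimal sketch of the kind of user-written assertion that formal property checking requires. The module, signal names, and the four-cycle bound are hypothetical examples for illustration only, not something from the study:

  module req_gnt_checker (
    input logic clk,
    input logic rst_n,
    input logic req,
    input logic gnt
  );

    // Hypothetical requirement: every request must be granted within 1 to 4 clock cycles.
    property p_req_gets_gnt;
      @(posedge clk) disable iff (!rst_n)
        req |-> ##[1:4] gnt;
    endproperty

    // A formal property-checking tool attempts to prove this assertion exhaustively;
    // in simulation, the same assertion simply flags any violating trace.
    a_req_gets_gnt: assert property (p_req_gets_gnt);

  endmodule

An automatic formal solution, by contrast, derives checks such as X safety or deadlock detection directly from the design, with no user-written properties required.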

In my next blog (click here), I’ll focus on FPGA design and verification language adoption trends, as identified by the 2014 Wilson Research Group study.

Quick links to the 2014 Wilson Research Group Study results


21 April, 2015

FPGA Verification Effectiveness Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I focused on the amount of effort spent in FPGA verification. We have seen in previous blogs that a significant amount of effort is being applied to FPGA functional verification. In this blog I focus on the effectiveness of verification in terms of FPGA project schedule and bug escapes.

FPGA Schedules

Figure 1 presents the design completion time compared to the project’s original schedule. A surprise in the 2014 findings is that we saw an improvement in the number of FPGA projects meeting schedule compared to 2012. It is unclear why we are seeing this trend now. Perhaps managers are getting better at scheduling—or are becoming more pessimistic with their schedules. Or perhaps it is due to the increased amount of reuse (both design and verification IP). Or is the increased amount of FPGA verification effort prior to “getting to the lab” starting to pay off for some projects? This data point raises some interesting questions worth exploring further. Regardless, a significant number of FPGA projects still miss their originally planned schedule.


Figure 1. FPGA design completion time compared to the project’s original schedule

FPGA Lab Iterations

ASIC/IC projects track the number of required spins that occur prior to market production. In fact, this can be a useful metric for determining the overall verification effectiveness of an ASIC/IC project. Unfortunately, we lack such a metric for FPGA projects. For the 2014 study, we decided to ask a question related to the average number of lab iterations required before the design went into production, again to try to get a sense of a project’s verification effectiveness. The results are shown in Figure 2. However, I’m not convinced that FPGA lab iterations are analogous to ASIC/IC respins as a verification effectiveness metric. Perhaps a better metric to consider for future studies would be the number of bugs that escape into production and are found in the field.


Figure 2. Number of FPGA iterations in the lab (no trend data available)

FPGA Bug classification

For the 2014 study, we asked the FPGA project participants to identify the types of flaws that were contributing to rework in the lab. In Figure 3, I show the two leading causes of rework: logical and functional bugs, as well as clocking bugs. The data seem to suggest that these issues are growing, perhaps due to the design of larger and more complex FPGAs. Again, this is a data point worth exploring further.


Figure 3. Types of Flaws Resulting in FPGA Rework

In Figure 4, I show trends in terms of the main contributing factors leading to logic and functional flaws—and you can see that design errors are the main cause of functional flaws. But note that a significant number of flaws are related to some aspect of the specification—such as changes in the specification, or incorrect or incomplete specifications. Problems associated with the specification process are a common theme I often hear when visiting FPGA customers.


Figure 4. Root cause of FPGA functional flaws

In my next blog (click here), I plan to present the findings from our study for FPGA verification technology adoption trends.

Quick links to the 2014 Wilson Research Group Study results


1 April, 2015

FPGA Verification Effort Trends (Continued)

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I focused on the controversial topic of effort spent in FPGA verification. This blog continues that discussion. I stated in my previous blog that I don’t believe there is a simple answer to the question, “how much effort was spent on verification in your last FPGA project?” I believe that it is necessary to look at multiple data points to truly get a sense of the real effort involved in verification today. So, let’s look at a few additional findings from the study.

Time FPGA designers spend in verification

For projects that have a separation of teams (i.e., design engineers and verification engineers), it’s important to note that FPGA verification engineers are not the only project members involved in functional verification. FPGA design engineers spend a significant amount of their time in verification too, as shown in Figure 1.


Figure 1. Average (mean) time FPGA design engineers spend in design vs. verification.

You might note that (on average) FPGA design engineers actually spend slightly more time doing verification than design. We are not showing trends here since we have insufficient data on this question for FPGA designs from our previous study. We anticipate being able to show trends after our next study (currently scheduled for 2016).

Even if the FPGA project has a separation of teams, the designers are still involved in the verification process, ranging from:

  • Small sandbox testing to explore various aspects of the implementation
  • Full functional testing of IP blocks and SoC integration
  • Debugging verification problems identified by a separate verification team

In fact, getting a better understanding of exactly where FPGA designers spend their time has led us to conduct a series of follow-on discussions with various FPGA projects from various market segments. Through this process we have learned that many project managers are concerned about the increasing amount of debugging time spent on a project (both pre-lab and in-lab debugging time). This is one area of FPGA verification that we plan to continue to explore through a series of in-depth discussions with multiple FPGA projects around the world.

Percentage of time FPGA verification engineers spend on various tasks

Next, let’s look at the mean time FPGA verification engineers spend in performing various tasks related to their specific project. You might note that verification engineers spend most of their time in debugging. Ideally, if all the tasks were optimized, then you would expect this. Yet, unfortunately, the time spent in debugging can vary significantly from project-to-project, which presents scheduling challenges for managers during a project’s verification planning process.


Figure 2. Average (mean) time verification engineers spend on various tasks

In our 2012 study we found that FPGA verification engineers spent about 37% of their time involved in debugging tasks. There was a 16 percent increase in the amount of time spent in debugging between 2012 and 2014. Hence, the data suggest that debugging effort is increasing for both FPGA design and verification engineers.

In my next blog (click here) I present our study findings in terms of FPGA schedules, iterations in the lab, and classification of functional bugs.

Quick links to the 2014 Wilson Research Group Study results


11 March, 2015

FPGA Verification Effort Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I focused on FPGA design trends. In this blog, I present findings from our study related to the effort spent in verification.

Directly asking study participants how much effort they spend in verification will not work. The reason is that it’s hard to find a paper or article on verification that doesn’t start with the phrase: “Seventy percent of a project’s effort is spent in verification…” In other words, the industry is already biased to respond with this effort value. Yet there are really no credible references to quantify this value.

I don’t believe that there is a simple answer to the question, “How much effort was spent on verification in your last project?” In fact, I believe that it is necessary to look at multiple data points derived from multiple questions to truly get a sense of effort spent in verification. And that’s what we did in our functional verification study.

Total FPGA Project Time Spent in Verification

To try to assess the effort spent in verification, let’s begin by looking at one data point, which is the total project time spent in verification. Figure 1 shows the trends in total percentage of FPGA project time spent in verification by comparing the 2012 Wilson Research Group study (in dark blue), and the 2014 Wilson Research Group study (in light blue).

Figure 1. Percentage of FPGA project time spent in verification

Between the years 2012 and 2014 the industry did see a seven percent increase in the average time an FPGA project spends in verification. Historically, FPGA projects have spent less time in verification than ASIC/IC projects. The FPGA project strategy has traditionally been to get to the lab as soon as possible, and then iterate on issues in the lab. In a future blog I’ll show data that indicates this strategy does not necessarily yield good results in terms of meeting project schedule or quality objectives. Also, this lab-focused approach to FPGA verification becomes less effective as FPGA complexity increases.

Peak Number of Design and Verification Engineers

Perhaps one of the biggest challenges in design and verification today is identifying solutions to increase productivity and control engineering headcount. To illustrate the need for productivity improvement, we discuss the trend in terms of increasing engineering headcount for FPGA projects. Figure 2 shows the mean peak number of design and verification engineers working on an FPGA project. Again, this is an industry average since some projects have many engineers while other projects have few.

Figure 2. Mean peak number of engineers working on an FPGA project

You can see that the compounded annual growth rate (CAGR) for the peak number of FPGA design engineers between 2012 and 2014 was 4.9 percent, while the CAGR for the peak number of FPGA verification engineers was 20.9 percent. This huge demand for verification engineers on FPGA projects is one indicator of growing verification complexity in FPGA designs. Also, note that the ratio of design engineers to verification engineers is approaching 1-to-1. A similar trend happened on traditional ASIC/IC designs in 2012.
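For reference, the CAGR values quoted above follow the standard compound-growth formula applied over the two-year span between the studies, where N is the mean peak head count shown in Figure 2:

  CAGR = (N_2014 / N_2012)^(1/2) − 1

So, for example, a 20.9 percent CAGR for verification engineers corresponds to roughly a 46 percent increase in mean peak head count over the full two-year period (1.209² − 1 ≈ 0.46), while the 4.9 percent CAGR for design engineers corresponds to roughly a 10 percent increase.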

In my next blog (click here) I focus on the time that FPGA design and verification engineers spend on various tasks.

Quick links to the 2014 Wilson Research Group Study results


23 February, 2015

It’s my favorite time of year again—DVCon! And I believe that the DVCon 2015 technical program committee has put together one of the best technical programs in years. In this blog I plan on highlighting a few DVCon events that you might want to put on your calendar.


First, at this year’s conference the Verification Academy has a dedicated booth (#301), and I hope you stop by to say hello to me, my friend Tom Fitzpatrick, and an amazing lineup of other Verification Academy subject matter experts.

Next, on Wednesday morning, March 4, I have the honor of participating on a verification panel titled “Art or Science.” Here, my fellow panelists and I will debate the issue that verification today is considered by some to be more of an art than a science—and one which is perceived as difficult to master. To learn my position on this topic, you’ll have to stop by!

Also on Wednesday, at the Mentor-sponsored lunch, my colleague Steve Bailey and I have put together an informative and entertaining talk we’ve titled “From Tightly Coupled (Loosely Bolted) to Verification Convergence.” Here, we discuss the state of verification past, present, and future while examining the results from our recent worldwide industry study, which I started blogging about a few weeks ago (click here for more details). Our talk will examine how advanced techniques are taking hold in mainstream design and provide insights on the recent convergence of verification solutions to meet today’s growing challenges.

Finally, there are two tutorials I’d like to encourage you to attend while at DVCon this year:

  1. Advanced, High-Throughput Debug from Architectural Modeling Through Post-Silicon SoC Validation (click here for more details)
  2. Dead or Alive: Using Automated Formal Techniques to Characterize Dead Code, Reveal Paths to Hit Uncovered States, and Reach Coverage Closure Faster (click here for more details)

I look forward to meeting you at DVCon 2015!


8 February, 2015

FPGA Design Trends

In my previous blog, I introduced the 2014 Wilson Research Group Functional Verification Study (click here). The objective of my previous blog was to provide an overview on our large, worldwide industry study. The key findings from this study will be presented in a set of upcoming blogs. In this blog, I present trends related to various aspects of FPGA design to illustrate growing design complexity.

Let’s begin by examining embedded processor trends targeted at a general FPGA implementation. Our 2014 study found that 56% of all FPGA designs contained one or more embedded processors, as shown in Figure 1. Although we did not see an overall growth in the number of FPGAs containing one or more embedded processors between 2012 and 2014, we did see an increase in the number of FPGA projects creating designs containing more than one embedded processor.


Figure 1. Number of embedded processors in FPGA trends

SoC class designs (i.e., designs containing embedded processors) add a new layer of verification complexity to the verification process that did not exist with traditional non-SoC class designs due to hardware and software interactions, new coherency architectures, and the emergence of complex network-on-a-chip interconnect.

In addition to embedded processors targeted at the general FPGA class of designs, there has been a recent emergence of specific programmable SoC FPGA implementations, such as Xilinx’s Zynq, Altera’s Arria/Cyclone, and Microsemi’s SmartFusion. Figure 2 shows the adoption trends for these programmable SoC FPGAs, which you can see grew by over 93 percent between 2012 and 2014. Keep in mind that this trend data does not represent volume production—it represents the number of FPGA projects that are creating designs targeted at a programmable SoC class of FPGA.


Figure 2. Type of FPGA implementation trends

As the industry moves to SoC class designs, regardless of targeted FPGA implementation, FPGA projects are starting to increase their adoption of industry standard on-chip bus protocols—versus proprietary bus protocols. Figure 3 shows the current adoption of AMBA and other on-chip bus protocols for FPGA designs as identified by our new study. Note, the reason we are not showing trends here is that the 2012 study did not separate out the various AMBA protocols, which is something we decided to do for our 2014 study. Hence, we cannot do an apples-to-apples comparison between 2012 and 2014 for FPGA on-chip bus protocol adoption.


Figure 3. FPGA on-chip bus protocol adoption

Another aspect of SoC class design is the emergence of IP-based design practices, which are fundamental for improving design productivity. Figure 4 shows FPGA design composition trends—and we see that there has been a decline in new logic created by FPGA project teams. At the same time, we see an increase in the adoption of both internally developed and externally acquired IP.


Figure 4. FPGA design composition trends

In my next blog (click here), I’ll focus on verification effort trends related to FPGA designs.

Quick links to the 2014 Wilson Research Group Study results


21 January, 2015

This blog is a continuation of a series of blogs that present the highlights from the 2014 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In this blog I discuss the issue of study bias, and what we did to address these concerns.

MINIMIZING STUDY BIAS

When architecting a study, three main concerns must be addressed to ensure valid results: sample validity bias, non-response bias, and stakeholder bias. Each of these concerns is discussed in the following sections, as well as the steps we took to minimize these bias concerns.

Sample Validity Bias

To ensure that a study is unbiased, it’s critical that every member of a studied population have an equal chance of participating. An example of a biased study would be when a technical conference surveys its participants. The data might raise some interesting questions, but unfortunately, it does not represent members of the population who were unable to participate in the conference. The same bias can occur if a journal or online publication limits its surveys to only its subscribers.

A classic example of sample validity bias is the famous Literary Digest poll for the 1936 United States presidential election, in which the magazine surveyed over two million people. This was a huge study for this period in time. The sampling frame of the study was chosen from the magazine’s subscriber list, phone books, and car registrations. The problem with this approach was that the study did not represent the actual voter population, since it was a luxury to have a magazine subscription, a phone, or a car during the Great Depression. As a result of this biased sample, the poll inaccurately predicted that Republican Alf Landon would defeat Democrat Franklin Roosevelt in the 1936 presidential election.

For our study, we carefully chose a broad set of independent lists that, when combined, represented all regions of the world and all electronic design market segments. We reviewed the participant results in terms of market segments to ensure no segment or region representation was inadvertently excluded or under-represented.

Non-Response Bias

Non-response bias in a study occurs when a randomly sampled individual cannot be contacted or refuses to participate in a survey. For example, spam and unsolicited mail filters remove an individual from the possibility of receiving an invitation to participate in a study, which can bias results. It is important to validate that sufficient responses occurred across all lists that make up the sample frame. Hence, we reviewed the final results to ensure that no single list of respondents that made up the sample frame dominated the final results.

Another potential source of non-response bias is a lack of language translation, which we learned during our 2012 study. The 2012 study generally had good representation from all regions of the world, with the exception of an initially very poor level of participation from Japan. To solve this problem, we took two actions:

  1. We translated both the invitation and the survey into Japanese.
  2. We acquired additional engineering lists directly from Japan to augment our existing survey invitation list.

This resulted in a balanced representation from Japan. Based on that experience, we took the same approach to solve the language problem for the 2014 study.

Stakeholder Bias

Stakeholder bias occurs when someone who has a vested interest in survey results can complete an online study survey multiple times and urge others to complete the survey in order to influence the results. To address this problem, a special code was generated for each study participation invitation that was sent out. The code could only be used once to fill out the survey questions, preventing someone from taking the study multiple times or sharing the invitation with someone else.

2010 Study Bias

While architecting the 2012 study, we did discover a non-response bias associated with the 2010 study. Although multiple lists across multiple market segments and across multiple regions of the world were used during the 2010 study, we discovered that a single list dominated the responses, which consisted of participants who worked on more advanced projects and whose functional verification processes tend to be mature. Hence, for this series of blogs we have decided not to publish any of the 2010 results as part of verification technology adoption trend analysis.

The 2007, 2012, and 2014 studies were well balanced and did not exhibit the non-response bias previously described for the 2010 data. Hence, we have confidence in the general industry trends presented in this series of blogs.

Quick links to the 2014 Wilson Research Group Study results


21 January, 2015

This is the first in a series of blogs that presents the findings from our new 2014 Wilson Research Group Functional Verification Study. However, unlike my previous Wilson Research Group functional verification study blogs, which focused on the ASIC/IC market, I plan to begin this set of blogs with an exclusive focus on FPGA trends. Why? For the following reasons:

  1. Unlike the traditional ASIC/IC market, there have historically been very few studies published on FPGA functional verification trends. We started studying the FPGA market segment back in the 2010 study, and we now have collected sufficient data to confidently present industry trends related to this market segment.
  2. Today’s FPGA designs have grown in complexity—and many now resemble complete systems. The task of verifying SoC-class designs is daunting, which has forced many FPGA projects to mature their verification process due to rising complexity. The FPGA-focused data I present in this set of blogs will support this claim.

My plan is to release the ASIC/IC functional verification trends through a set of blogs after I finish presenting the FPGA trends.

Introduction

In 2002 and 2004, Collett International Research, Inc. conducted its well-known ASIC/IC functional verification studies, which provided invaluable insight into the state of the electronic industry and its trends in design and verification at that point in time. However, after the 2004 study, no additional Collett studies were conducted, which left a void in identifying industry trends. To address this dearth of knowledge, four studies were commissioned by Mentor Graphics in 2007, 2010, 2012, and 2014, which focused on functional verification. These were world-wide, double-blind, functional verification studies, covering all electronic industry market segments. To our knowledge, the 2014 study was the largest functional verification study ever conducted. This set of blogs presents the findings from our 2014 study and provides invaluable insight into the state of the electronic industry today in terms of both design and verification trends.

Study Background

Our study was modeled after the original 2002 and 2004 Collett International Research, Inc. studies. In other words, we endeavored to preserve the original wording of the Collett questions whenever possible to facilitate trend analysis. To ensure anonymity, we commissioned Wilson Research Group to execute our study. The purpose of preserving anonymity was to prevent biasing the participants’ responses. Furthermore, to ensure that our study would be executed as a double-blind study, the compilation and analysis of the results did not take into account the identity of the participants.

For the purpose of our study we used a multiple sampling frame approach that was constructed from eight independent lists that we acquired. This enabled us to cover all regions of the world—as well as cover all relevant electronic industry market segments. It is important to note that we decided not to include our own account team’s customer list in the sampling frame. This was done in a deliberate attempt to prevent biasing the final results. My next blog in this series will discuss other potential bias concerns when conducting a large industry study and describe what we did to address these concerns.

After cleaning the data to remove inconsistent or random responses (e.g., someone who only answered “a” on all questions), the final sample size consisted of 1886 eligible participants (i.e., n=1886). To put this figure in perspective, the 2004 Collett study sample size consisted of 201 eligible participants.

Unlike the 2002 and 2004 Collett IC/ASIC functional verification studies, which focused only on the ASIC/IC market segment, our studies were expanded in 2010 to include the FPGA market segment. We have partitioned the analysis of these two different market segments separately, to provide a clear focus on each. One other difference between our studies and the Collett studies is that our study covered all regions of the world, while the original Collett studies were conducted only in North America (US and Canada). We have the ability to compile the results both globally and regionally, but for the purpose of this set of blogs I am presenting only the globally compiled results.

Confidence Interval

All surveys are subject to sampling error. To quantify this error in probabilistic terms, we calculate a confidence interval. For our study, we determined the overall margin of error to be ±2.19% at a 95% confidence level. In other words, if we were to take repeated samples of size n=1886 from the population, 95% of the time the results would fall within ±2.19% of the results we would obtain by surveying the entire population.
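For readers interested in the arithmetic, the quoted margin of error is in line with the standard worst-case calculation for a simple random sample, which assumes a 50/50 response proportion; any small difference from the textbook value below presumably reflects the exact assumptions used in the study’s own calculation:

  ME = z · sqrt(p(1 − p) / n) = 1.96 · sqrt(0.5 · 0.5 / 1886) ≈ 0.0226, or roughly ±2.3%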

Study Participants

This section provides background on the makeup of the study.

Figure 1 shows the percentage of overall study participants by market segment.


Figure 1: Study participants by market segment

Figure 2 shows the percentage of eligible study participants by job description. An example of an eligible participant would be a self-identified design or verification engineer, or an engineering manager, who is actively working within the electronics industry. Overall, design and verification engineers accounted for 60 percent of the study participants.


Figure 2: Study participants by job title

Before I start presenting the findings from our 2014 functional verification study, I plan to discuss in my next blog (click here) general bias concerns associated with all survey-based studies—and what we did to minimize these concerns.

Quick links to the 2014 Wilson Research Group Study results

