Archive for Harry Foster

23 February, 2015

It’s my favorite time of year again: DVCon! And I believe that the DVCon 2015 technical program committee has put together one of the strongest technical programs in years. In this blog I plan to highlight a few DVCon events that you might want to put on your calendar.

First, at this year’s conference the Verification Academy has a dedicated booth (#301), and I hope you stop by to say hello to me, my friend Tom Fitzpatrick, and an amazing lineup of other Verification Academy subject matter experts.

Next, on Wednesday morning, March 4, I have the honor of participating in a verification panel titled “Art or Science?” Here, my fellow panelists and I will debate the view that verification today is considered by some to be more of an art than a science, and one that is perceived as difficult to master. To learn my position on this topic, you’ll have to stop by!

Also on Wednesday, at the Mentor-sponsored lunch, my colleague Steve Bailey and I have put together an informative and entertaining talk we’ve titled “From Tightly Coupled (Loosely Bolted) to Verification Convergence.” Here, we discuss the state of verification past, present, and future while examining the results from our recent worldwide industry study, which I started blogging about a few weeks ago (click here for more details). Our talk will examine how advanced techniques are taking hold in mainstream design and provide insights on the recent convergence of verification solutions to meet today’s growing challenges.

Finally, there are two tutorials I’d like to encourage you to attend while at DVCon this year:

  1. Advanced, High-Throughput Debug from Architectural Modeling Through Post-Silicon SoC Validation (click here for more details)
  2. Dead or Alive: Using Automated Formal Techniques to Characterize Dead Code, Reveal Paths to Hit Uncovered States, and Reach Coverage Closure Faster (click here for more details)

I look forward to meeting you at DVCon 2015!

8 February, 2015

FPGA Design Trends

In my previous blog, I introduced the 2014 Wilson Research Group Functional Verification Study (click here). The objective of my previous blog was to provide an overview of our large, worldwide industry study. The key findings from this study will be presented in a set of upcoming blogs. In this blog, I present trends related to various aspects of FPGA design to illustrate growing design complexity.

Let’s begin by examining embedded processor trends targeted at a general FPGA implementation. Our 2014 study found that 56% of all FPGA designs contained one or more embedded processors, as shown in Figure 1. Although we did not see an overall growth in the number of FPGAs containing one or more embedded processors between 2012 and 2014, we did see an increase in the number of FPGA projects creating designs containing more than one embedded processor.

Figure 1. Number of embedded processors in FPGA trends

SoC class designs (i.e., designs containing embedded processors) add a new layer of verification complexity to the verification process that did not exist with traditional non-SoC class designs due to hardware and software interactions, new coherency architectures, and the emergence of complex network-on-a-chip interconnect.

In addition to embedded processors targeted at the general FPGA class of designs, there has been a recent emergence of specific programmable SoC FPGA implementations, such as Xilinx’s Zynq, Altera’s Arria/Cyclone, and Microsemi’s SmartFusion. Figure 2 shows the adoption trends for these programmable SoC FPGAs, which you can see grew by over 93 percent between 2012 and 2014. Keep in mind that this trend data does not represent volume production; it represents the number of FPGA projects that are creating designs targeted at a programmable SoC class of FPGA.

Figure 2. Type of FPGA implementation trends

As the industry moves to SoC class designs, regardless of targeted FPGA implementation, FPGA projects are starting to increase their adoption of industry standard on-chip bus protocols—versus proprietary bus protocols. Figure 3 shows the current adoption of AMBA and other on-chip bus protocols for FPGA designs as identified by our new study. Note, the reason we are not showing trends here is that the 2012 study did not separate out the various AMBA protocols, which is something we decided to do for our 2014 study. Hence, we cannot do an apples-to-apples comparison between 2012 and 2014 for FPGA on-chip bus protocol adoption.

Figure 3. FPGA on-chip bus protocol adoption

Another aspect of SoC class design is the emergence of IP-based design practices, which is fundamental to improving design productivity. Figure 4 shows FPGA design composition trends, where we see a decline in new logic created by FPGA project teams. At the same time, we see an increase in the adoption of both internally developed and externally acquired IP.

Figure 4. FPGA design composition trends

In my next blog (click here), I’ll focus on verification effort trends related to FPGA designs.

Quick links to the 2014 Wilson Research Group Study results (so far…)

21 January, 2015

This blog is a continuation of a series of blogs that present the highlights from the 2014 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In this blog I discuss the issue of study bias, and what we did to address these concerns.

MINIMIZING STUDY BIAS

When architecting a study, three main concerns must be addressed to ensure valid results: sample validity bias, non-response bias, and stakeholder bias. Each of these concerns is discussed in the following sections, as well as the steps we took to minimize these bias concerns.

Sample Validity Bias

To ensure that a study is unbiased, it’s critical that every member of a studied population have an equal chance of participating. An example of a biased study would be when a technical conference surveys its participants. The data might raise some interesting questions, but unfortunately, it does not represent members of the population who were unable to participate in the conference. The same bias can occur if a journal or online publication limits its surveys to only its subscribers.

A classic example of sample validity bias is the famous Literary Digest poll in the 1936 United States presidential election, where the magazine surveyed over two million people. This was a huge study for that period in time. The sampling frame of the study was chosen from the magazine’s subscriber list, phone books, and car registrations. However, the problem with this approach was that the study did not represent the actual voter population, since it was a luxury to have a subscription to a magazine, or a phone, or a car during The Great Depression. As a result of this biased sample, the poll inaccurately predicted that Republican Alf Landon would defeat Democrat Franklin Roosevelt in the 1936 presidential election.

For our study, we carefully chose a broad set of independent lists that, when combined, represented all regions of the world and all electronic design market segments. We reviewed the participant results in terms of market segments to ensure no segment or region representation was inadvertently excluded or under-represented.

Non-Response Bias

Non-response bias in a study occurs when a randomly sampled individual cannot be contacted or refuses to participate in a survey. For example, spam and unsolicited mail filters remove an individual from the possibility of receiving an invitation to participate in a study, which can bias results. It is important to validate that sufficient responses occurred across all lists that make up the sample frame. Hence, we reviewed the final results to ensure that no single list of respondents that made up the sample frame dominated the final results.

Another potential non-response bias is due to lack of language translation, which we learned during our 2012 study. The 2012 study generally had good representation from all regions of the world, with the exception of an initially very poor level of participation from Japan. To solve this problem, we took two actions:

  1. We translated both the invitation and the survey into Japanese.
  2. We acquired additional engineering lists directly from Japan to augment our existing survey invitation list.

This resulted in a balanced representation from Japan. Based on that experience, we took the same approach to solve the language problem for the 2014 study.

Stakeholder Bias

Stakeholder bias occurs when someone who has a vested interest in survey results can complete an online study survey multiple times and urge others to complete the survey in order to influence the results. To address this problem, a special code was generated for each study participation invitation that was sent out. The code could only be used once to fill out the survey questions, preventing someone from taking the study multiple times or sharing the invitation with someone else.
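
To make this mechanism concrete, here is a minimal sketch of how single-use invitation codes can be issued and redeemed. It is purely illustrative: the function names and the in-memory store are my own assumptions, not the actual system used for the study.

    import secrets

    # Hypothetical illustration of single-use survey invitation codes.
    # Each invitation gets a random, hard-to-guess token; redeeming it
    # a second time fails.
    issued_codes = {}        # code -> invitee (codes that have been sent out)
    redeemed_codes = set()   # codes that have already been used

    def issue_code(invitee_email):
        """Generate a unique code for one survey invitation."""
        code = secrets.token_urlsafe(16)
        issued_codes[code] = invitee_email
        return code

    def redeem_code(code):
        """Allow the survey to be taken at most once per code."""
        if code not in issued_codes or code in redeemed_codes:
            return False  # unknown code, or already used
        redeemed_codes.add(code)
        return True

    invite = issue_code("engineer@example.com")
    assert redeem_code(invite) is True    # first submission accepted
    assert redeem_code(invite) is False   # second attempt rejected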

2010 Study Bias

While architecting the 2012 study, we did discover a non-response bias associated with the 2010 study. Although multiple lists across multiple market segments and across multiple regions of the world were used during the 2010 study, we discovered that a single list dominated the responses, which consisted of participants who worked on more advanced projects and whose functional verification processes tend to be mature. Hence, for this series of blogs we have decided not to publish any of the 2010 results as part of verification technology adoption trend analysis.

The 2007, 2012, and 2014 studies were well balanced and did not exhibit the non-response bias previously described for the 2010 data. Hence, we have confidence in talking about the general industry trends presented in this series of blogs.

Quick links to the 2014 Wilson Research Group Study results (so far…)

21 January, 2015

This is the first in a series of blogs that presents the findings from our new 2014 Wilson Research Group Functional Verification Study. However, unlike my previous Wilson Research Group functional verification study blogs, which focused on the ASIC/IC market, I plan to begin this set of blogs with an exclusive focus on FPGA trends. Why? For the following reasons:

  1. Unlike the traditional ASIC/IC market, there have historically been very few published studies on FPGA functional verification trends. We started studying the FPGA market segment back in the 2010 study, and we now have collected sufficient data to confidently present industry trends related to this market segment.
  2. Today’s FPGA designs have grown in complexity—and many now resemble complete systems. The task of verifying SoC-class designs is daunting, which has forced many FPGA projects to mature their verification process due to rising complexity. The FPGA-focused data I present in this set of blogs will support this claim.

My plan is to release the ASIC/IC functional verification trends through a set of blogs after I finish presenting the FPGA trends.

Introduction

In 2002 and 2004, Collett International Research, Inc. conducted its well-known ASIC/IC functional verification studies, which provided invaluable insight into the state of the electronic industry and its trends in design and verification at that point in time. However, after the 2004 study, no additional Collett studies were conducted, which left a void in identifying industry trends. To address this dearth of knowledge, four studies were commissioned by Mentor Graphics in 2007, 2010, 2012, and 2014, which focused on functional verification. These were world-wide, double-blind, functional verification studies, covering all electronic industry market segments. To our knowledge, the 2014 study was the largest functional verification study ever conducted. This set of blogs presents the findings from our 2014 study and provides invaluable insight into the state of the electronic industry today in terms of both design and verification trends.

Study Background

Our study was modeled after the original 2002 and 2004 Collett International Research, Inc. studies. In other words, we endeavored to preserve the original wording of the Collett questions whenever possible to facilitate trend analysis. To ensure anonymity, we commissioned Wilson Research Group to execute our study. The purpose of preserving anonymity was to prevent biasing the participants’ responses. Furthermore, to ensure that our study would be executed as a double-blind study, the compilation and analysis of the results did not take into account the identity of the participants.

For the purpose of our study we used a multiple sampling frame approach that was constructed from eight independent lists that we acquired. This enabled us to cover all regions of the world—as well as cover all relevant electronic industry market segments. It is important to note that we decided not to include our own account team’s customer list in the sampling frame. This was done in a deliberate attempt to prevent biasing the final results. My next blog in this series will discuss other potential bias concerns when conducting a large industry study and describe what we did to address these concerns.

After cleaning the data to remove inconsistent or random responses (e.g., someone who only answered “a” on all questions), the final sample size consisted of 1886 eligible participants (i.e., n=1886). To put this figure in perspective, the 2004 Collett study sample size consisted of 201 eligible participants.
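
As an aside, one simple way to flag the kind of random responses described above (for example, a participant who selected the same option for every question) is a straight-lining check like the sketch below. The column names and the threshold are assumptions for illustration only, not the actual cleaning rules used in the study.

    import pandas as pd

    # Hypothetical illustration of removing "straight-liner" responses,
    # i.e., participants who gave the same answer to every question.
    responses = pd.DataFrame({
        "q1": ["a", "b", "a", "c"],
        "q2": ["a", "c", "b", "c"],
        "q3": ["a", "a", "d", "c"],
    })

    # A respondent with only one unique answer across all questions is suspect.
    unique_answers = responses.nunique(axis=1)
    cleaned = responses[unique_answers > 1]

    print(f"Eligible responses: {len(cleaned)} of {len(responses)}")  # 2 of 4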

Unlike the 2002 and 2004 Collett IC/ASIC functional verification studies, which focused only on the ASIC/IC market segment, our studies were expanded in 2010 to include the FPGA market segment. We analyze these two market segments separately to provide a clear focus on each. One other difference between our studies and the Collett studies is that our studies covered all regions of the world, while the original Collett studies were conducted only in North America (US and Canada). We have the ability to compile the results both globally and regionally, but for the purpose of this set of blogs I am presenting only the globally compiled results.

Confidence Interval

All surveys are subject to sampling error. To quantify this error in probabilistic terms, we calculate a confidence interval. For example, we determined the overall margin of error for our study to be ±2.19% at a 95% confidence level. In other words, if we were to take repeated samples of size n=1886 from the population, 95% of the resulting estimates would fall within ±2.19% of the true population value, and only 5% would fall outside that range.
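
For readers who want to see where a figure like ±2.19% comes from, the sketch below applies the standard margin-of-error formula for a sampled proportion at a 95% confidence level, assuming the conservative worst case p = 0.5. This is only an approximation of the study’s calculation; any corrections the actual analysis applied (for example, a finite-population or design-effect adjustment) are not described here, which is likely why the simple formula lands near, but not exactly on, ±2.19%.

    import math

    # Standard margin of error for a sampled proportion at 95% confidence.
    # Assumes the worst-case proportion p = 0.5; n is the reported sample size.
    n = 1886
    z = 1.96   # z-score for a 95% confidence level
    p = 0.5

    margin_of_error = z * math.sqrt(p * (1 - p) / n)
    print(f"Margin of error: +/-{margin_of_error * 100:.2f}%")  # about +/-2.26%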

Study Participants

This section provides background on the makeup of the study.

Figure 1 shows the percentage of overall study participants by market segment.

Figure 1: Study participants by market segment

Figure 2 shows the percentage of overall eligible study participants by their job description. An example of an eligible participant would be a self-identified design or verification engineer, or an engineering manager, who is actively working within the electronics industry. Overall, design and verification engineers accounted for 60 percent of the study participants.

Figure 2: Study participants by job title

Before I start presenting the findings from our 2014 functional verification study, I plan to discuss in my next blog (click here) general bias concerns associated with all survey-based studies—and what we did to minimize these concerns.

Quick links to the 2014 Wilson Research Group Study results (so far…)

17 March, 2014

As some of you might be aware, the Verification Academy has a video course dedicated to the topic of Assertion-Based Verification (ABV). In fact, this was the first video course we released for the academy almost six years ago. To date, it has remained one of our most popular courses. Yet one of the most common requests for improving this course has been to add content that focuses more on the verification process and provides guidelines on how to effectively integrate ABV into an existing flow. At the Verification Academy, we always welcome your feedback on content improvement. But before I talk about our new, updated ABV course, let me explain why I’m such a big believer in this technology.

Ensuring functional correctness continues to pose one of the greatest challenges for today’s SoC design teams. Indeed, recent industry studies have found that more time is spent in functional verification than any other project task. [1] But if you dig a little deeper into the trends revealed by these studies, you will find that debugging is the fastest-growing component of the overall verification process, and that on average, it consumes 34 percent of the verification engineer’s total effort. To make matters worse, the industry is witnessing increasing pressure to shorten the overall development cycle. Clearly, new design and verification techniques that improve productivity—combined with a focus on maturing functional verification process capabilities within an organization—are required.

Assertion-based verification, although certainly not an end-all to the verification challenge, directly addresses the debugging challenge. For example, those who have effectively adopted an ABV methodology have seen a significant reduction in simulation debugging time (as much as 50 percent [2][3]) due to improved observability. In addition, organizations that have embraced an ABV methodology are able to take advantage of more advanced verification techniques, such as formal property checking, thus improving their overall verification quality and results.

While the process of writing assertions is fairly well understood by those skilled in the art—or whose skill can be easily acquired through a wealth of published papers and books that focus on language syntax and semantics—the process of creating a repeatable ABV methodology that integrates into an existing verification flow is not. And this is where our new ABV course will help. We’ve added a new session titled “Maturing a Project’s ABV Process Capabilities” that provides a set of actionable guidelines and recommendations that are process focused. These recommendations are based on years of ABV experience from multiple projects in various market segments, and they should help you achieve effective adoption of ABV on your project.

To learn more about the new ABV course, visit the Verification Academy (www.verificationacademy.com).

References

[1]     Prologue: The 2012 Wilson Research Group Functional Verification Study (here)

[2]     Y. Abarbanel, I. Beer, L. Gluhovsky, S. Keidar, Y. Wolfsthal, “FoCs—Automatic Generation of Simulation Checkers from Formal Specifications,” in Proc. 12th International Conference on Computer Aided Verification, pp. 414-427 (2000)

[3]     B. Turumella, M. Sharma, “Assertion-based verification of a 32-thread SPARC™ CMT microprocessor,” in Proceedings of the 45th Design Automation Conference (DAC 2008), pp. 256-261 (2008)

28 November, 2013

Wow! I’ve been on the road since August, and finally found a spare moment to get back to this blog. I started this blog series with a prologue that gave a little bit of background on the 2012 Wilson Research Group study. And with this epilogue, I will draw the series to a conclusion with some insight into the study process. Many people are cynical about industry studies in general (and particularly ones based on surveys) and believe that they are inaccurate, unreliable, and biased. However, I believe that the benefit of an industry study is not necessarily the quantitative values that the answers reveal, but the new questions they raise.

With that said, it is important to understand that the 2007 FarWest Research study and the 2010 and 2012 Wilson Research Group studies followed the format of the original 2002 and 2004 Ron Collett International studies, which have certain limitations. For example, the Collett studies were very block-, IP-, and RTL-focused studies. The data that these studies revealed certainly is of value and important, but it doesn’t represent some of the challenges in SoC integration verification and system validation that have recently emerged. In fact, many of the techniques used for block and subsystem verification that these surveys studied (such as constrained-random, functional coverage, general formal property checking) do not scale well to the SoC integration and system-level validation space. I believe that future studies should be expanded to include these emerging challenges.

Another criticism of all the previous studies is that the presented data is aggregated across all market segments, ranging from mil/aero and mobile to consumer, networking, computers, and so forth. Hence, it can be difficult for those in a particular market segment to benchmark themselves against or relate to the study data. This is a valid argument. With our two recent Wilson Research studies, it is possible to filter the data down for presentation to a specific market segment; that was not possible with the previous studies.

Nonetheless, there is still value in observing general industry trends between the multiple studies as long as the format of the studies is consistent. Consistency is something we strived for across the studies. For example, whenever possible, we tried to maintain the exact wording of the questions originally used in the Collett studies.

Minimizing Study Biases

When architecting a study, there are three main concerns that must be addressed to ensure valid results: (1) sample validity, (2) non-response bias, and (3) stakeholder bias. I’ll briefly review each of these concerns in the following paragraphs and discuss the steps we took to try to minimize them.

(1)    Sample validity or undercoverage bias: To ensure that a study is unbiased, it’s critical that every member of a studied population have an equal chance of participating. An example of a biased study would be when a technical conference surveys its participants. The data might raise some interesting questions, but unfortunately, it doesn’t represent members of the population who were unable to participate in the conference. The same bias can occur if a journal or online publication simply surveys its subscribers.

A classic example of this problem is the famous Literary Digest poll in the 1936 presidential election, where the magazine surveyed over two million people. This was a huge study for that period in time. The pool (or make-up) of the study was chosen from the magazine’s subscriber list, phone books, and car registrations. However, the problem with this approach was that the study did not represent the actual voter population, since it was a luxury to have a subscription to a magazine, or a phone, or a car during The Great Depression. As a result of this biased sample, the poll inaccurately predicted that Republican Alf Landon would defeat Democrat Franklin Roosevelt in the 1936 presidential election.

For the 2012 Wilson Research Group study, we carefully chose a broad set of lists that, when combined, represented all regions of the world and all electronic design market segments. We reviewed the participant results in terms of market segments to ensure no segment or region representation was inadvertently excluded or under-represented.

(2)    Non-response bias: Non-response bias in surveys occurs when a randomly sampled individual cannot be contacted or refuses to participate in a survey. For example, spam and unsolicited mail filters remove an individual from the possibility of receiving an invitation to participate in a survey, which can bias results. It is important to validate that sufficient responses occurred across all lists that make up the study pool. Hence, we reviewed the final results to ensure that no single list of respondents that made up the participant pool dominated the final results.

Another potential non-response bias is due to lack of language translation. In fact, we learned this during the 2012 Wilson Research Group study. The study generally had good representation from all regions of the world, with the exception of an initially very poor level of participation from Japan. To solve this problem, we took two actions: (1) we translated both the invitation and the survey into Japanese, (2) we acquired additional engineering lists directly from Japan to augment our existing survey invitation list. This resulted in a balanced representation from Japan.

(3)    Stakeholder bias: Stakeholder bias occurs when someone who has a vested interest in survey results can complete an online survey multiple times and urge others to complete the survey in order to influence the results. To address this problem, a special code was generated for each study participation invitation that was sent out. The code could only be used once to fill out the survey, preventing someone from taking the study multiple times, or sharing the invitation with someone else.

2010 Study Bias

After analyzing the results from the 2012 study we are confident that the study was balanced across market segments and regions of the world, which was our goal. However, while architecting the 2012 study, we did discover a non-response bias associated with the 2010 study. Although multiple lists across multiple market segments and across multiple regions of the world were used during the 2010 study, we discovered that a single list dominated the responses, which consisted of participants who worked on more advanced projects and whose functional verification processes tend to be mature. For example, as a result of this bias, if you look at Figure 3 concerning languages used to create testbenches, the industry-wide adoption of SystemVerilog is likely less than what was shown for 2010. This means that the industry adoption between 2010 and 2012 would have increased slightly more than indicated, and the growth between 2007 and 2010 would have been slightly less.

The 2007 study, like the 2012 study, was well balanced and did not exhibit the non-response bias previously described for the 2010 data. Hence, we have confidence in talking about general industry trends between 2007 and 2012. The 2010 data can still be useful as a reference point during discussion if you keep in mind that it represents a more process-mature segment of the population.

Our plan is to commission a new study in 2014. At that point, we should have sufficient data to start showing trends in the FPGA space (something we have not been able to do yet) and have a clearer picture of emerging trends in the non-FPGA space. However, as previously stated, the emerging challenges today are occurring in the SoC integration verification and system-level validation space. We will either reduce the scope of our existing IP/RTL-focused study to make room for new questions in these spaces, or conduct a separate, rigorous study for these new emerging challenges.

Final word—I hope the data presented in this set of blogs has provided some insight on general trends. But more importantly, I hope it has inspired you with questions about your own processes that you might want to investigate.

4 October, 2013

We are truly living in the age of SoC design, where 78 percent of all designs today contain one or more embedded processors.  In fact, 56 percent of all designs contain two or more embedded processors, which brings a whole new level of verification challenges—requiring unique solutions.

A great example of this is STMicroelectronics, which recently shared its experience and solution in addressing verification challenges due to rising complexity. In 2012, STMicroelectronics began a pilot project to build what it called the Eagle Reference Design, or ERD. The goal was to see if it would be possible to stitch together three ARM products (a Cortex-A15, a Cortex-A7, and a DMC 400) into one highly flexible platform, one that customers might eventually be able to tweak based on nothing more than an XML description of the system.

Engineers at STMicroelectronics sought to understand and benchmark the Eagle Reference Design. To speed this benchmarking along, they wanted a verification environment that would link software-based simulation and hardware-based emulation in a common flow.

Their solution was unique, and their story is worth reading. They first built a simulation testbench that relied heavily on verification IP (VIP). Next, the team connected this testbench to a Veloce emulation system via TestBench XPress (TBX) co-modeling software. Running verification required separating all blocks of design code into two domains: synthesizable code, including all RTL, which runs on the emulator; and all other modules, which run in the HDL portion of the environment on the simulator (connected to the emulator). Throughout the project, the team worked closely with Mentor Graphics to fine-tune the new co-emulation verification environment, which requires that all SoC components be mapped exactly the same way in simulation and emulation.

Because the reference design was not bound to any particular project, the main goal was not to arrive at the complete verification of the design but rather to do performance analysis and establish verification methodologies and techniques that would work in the future. In this they succeeded, agreeing that when they eventually try this sort of combined approach on a real project, they will be able to port the verification environment to the emulator more or less seamlessly.

This is a great success story worth reading on how STMicroelectronics combined Questa simulation, Mentor verification IP (VIP), and Veloce emulation to speed up their benchmarking verification process. Check out the full story here!

8 September, 2013

Schedules, respins, and bug classification

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 11 click here), I focused on some of the 2012 Wilson Research Group findings related to formal verification, acceleration/emulation, and FPGA prototyping trends. In this blog, I present verification results findings in terms of schedules, respins, and classification of functional bugs.

We have seen in previous blogs that a significant amount of effort is being applied to functional verification. An important question the various studies have tried to answer is whether this increasing effort is paying off.

Schedules

Figure 1 presents the design completion time compared to the project’s original schedule. What is interesting is that we really have not seen a change in this trend in over five years. That is, 67 percent of all projects are behind schedule with respect to the original plan. One could argue that designs have increased in complexity in terms of gate counts, embedded processors, and lots of software between 2007 and 2012. Yet, achieving project schedules has not worsened.

Figure 1. Non-FPGA design completion time compared to the project’s original schedule

What’s interesting is that FPGA designs follow this same trend, as shown in Figure 2.

Figure 2. Non-FPGA vs. FPGA design completion time relative to the original plan

Respins

Other verification data points worth looking at relate to the number of spins required between the start of a project and final production. Figure 3 shows this industry trend all the way back to the 2004 Collett study. Again, even though designs have increased in complexity, the data suggest that projects are not getting any worse in terms of the number of spins before production. If anything, there appears to be a slight improvement recently in this trend in projects requiring three or more spins.

Figure 3. Number of spins required from start of project until production

Bug classification

Figure 4 shows various categories of flaws that are contributing to respins. Note that the sum is greater than 100% on this graph, which is because a respin can be triggered by multiple flaws. 

Figure 4. Types of flaws contributing to respins

Although logic and functional flaws remain the leading causes of respins, the data suggest that there has been some improvement in this area over the past eight years. Is this due to the increased amount of reuse that is occurring in the industry? Or is the industry maturing its verification processes? Or is something entirely different going on? This data point raises some interesting questions worth exploring.

Figure 5 examines the root causes of functional bugs by various categories. The data suggest an improvement in logic errors over an eleven-year period and, potentially, a worsening of problems related to changing specifications. Problems associated with changing, incorrect, and incomplete specifications are a common theme I often hear when visiting customers.

Figure 5. Root cause of functional flaws

In my next blog (click here), I plan to wrap up this series of blogs in what I call the Epilogue—which will discuss potential gotchas and cautions on interpreting certain aspects of the data and thoughts about how the data might be used constructively.

26 August, 2013

Verification Techniques & Technologies Adoption Trends (Continued)

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 10 click here), I presented verification techniques and technologies adoption trends, as identified by the 2012 Wilson Research Group study. In this blog, I continue those discussions and focus on formal verification, acceleration/emulation, and FPGA prototyping.

For years, the term “formal verification” has bugged me, since it is quite often misunderstood in the industry. The problem originated back in the mid-1990s with the emergence of formal equivalence checking tools from various EDA vendors, such as Chrysalis Symbolic Design. These tools were introduced to the market as formal verification, which is technically a true statement. However, there is a range of tools available under the category of formal verification, such as formal property checkers and equivalence checkers.

So, what’s the problem? The question related to formal property checking in prior studies could have been misinterpreted by some participants to mean equivalence checking, which reduces the confidence in the results. To prevent this misinterpretation, we decided to change the question in 2012 to clarify that we were talking about the formal verification of assertions and clearly state “not equivalence checking” in the question.

One other thing we wanted to learn in the formal verification space during this study was what percentage of the market was using these auto-formal analysis tools (such as X safety checks, deadlock detection, reset analysis, etc.) versus formal property checking tools. The previous studies never made this distinction.

The fact that we changed the question related to formal property checking while adding in auto-formal in the 2012 study means that there is no meaningful way to compare this study’s formal verification results to the formal verification results from prior studies.

Formal Technology Adoption Trends

Figure 1 shows the adoption percentages for formal property checking and auto-formal techniques.

Figure 1. Formal Technology Adoption

We found that about five percent of the participants who are applying auto-formal techniques are not doing formal property checking. This means that the combined adoption of formal property checking and auto-formal techniques is about 32 percent. As a point of reference, the 2007 FarWest Research study found 19 percent adoption for formal verification—and the 2010 study found the adoption at 29 percent. Both the 2007 and 2010 studies included the potential erroneous responses associated with formal equivalence checking, as well as auto-formal usage.

Figure 2 provides a different analysis of the formal property adoption data by partitioning the results by design sizes. The design size partitions are represented as: less than 5M gates, 5M to 20M gates, and greater than 20M gates.

Figure 2. Formal property checking adoption by design size

Acceleration/Emulation & FPGA Prototyping Adoption Trends

The amount of time spent in a simulation regression is an increasing concern for many projects. Intuitively, we tend to think that the design size influences simulation performance. However, there are two equally important factors that must be considered: number of tests in the simulation regression suite and the length of each test in terms of clock cycles.

For example, a project might have a small or moderate-sized design, yet verification of this design requires a long running test (e.g., a video input stream). Hence, in this example, the simulation regression time is influenced by the number of clock cycles required for the test and not necessarily the design size itself.
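
To make this point concrete, here is a back-of-envelope sketch of how regression wall-clock time falls out of test count, cycles per test, and simulator throughput. Every number in it is hypothetical and chosen only to illustrate that a modest-sized design with long or numerous tests can still produce a regression that runs for the better part of a day.

    # Back-of-envelope regression-time estimate. All numbers are hypothetical
    # and only illustrate the relationship between test count, cycles per test,
    # simulator throughput, and wall-clock time.
    num_tests = 500                 # tests in the regression suite
    cycles_per_test = 5_000_000     # e.g., a long-running input stream
    sim_cycles_per_second = 2_000   # simulator throughput for this design
    parallel_jobs = 16              # tests run concurrently on a compute farm

    total_cycles = num_tests * cycles_per_test
    hours = total_cycles / sim_cycles_per_second / parallel_jobs / 3600
    print(f"Estimated regression time: {hours:.1f} hours")  # about 21.7 hours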

Figure 3 shows the number of directed tests created to verify a design in simulation (i.e., the regression suite). The findings obviously varied dramatically from a handful of tests to thousands of tests in a regression suite, depending on the design.

Figure 3. Number of directed tests created to verify a design

The increase in tests in the range of 1-100 is interesting to note. Is this due to the increase in adoption of constrained-random verification techniques in the past few years? Or possibly, something else is going on here. This line of questioning illustrates the value of reviewing various industry studies. That is, it is not so much in the absolute values a study presents, but the questions the new data raises.

Next, let’s look at regression times, as shown in Figure 4. As you can see, regression time also varies dramatically, from short regression times for some projects to multiple days for other projects. The median simulation regression time is about 16-24 hours. Here, we also see an increase in shorter regression times. Again, this data raises some interesting questions that are worth exploring.

Figure 4. Simulation regression time trends

One technique that is often used to speed up simulation regressions (whether due to very long tests or a large number of tests) is hardware-assisted acceleration or emulation. In addition, FPGA prototyping, while historically used as a platform for software development, has recently served a role in SoC integration validation.

Figure 5 shows the adoption trend for both HW-assisted acceleration/emulation and FPGA prototyping by comparing the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). We see a continual rise in HW acceleration and emulation. This is not only due to the need to verify larger designs, or designs with long test times. HW acceleration and emulation have become the key platform for SoC integration verification, where both hardware and software are integrated into a system for the first time. In addition, emulation is being used increasingly as a software development platform.

Figure 5. HW-assisted acceleration/emulation and FPGA Prototyping trends

Note that the adoption of FPGA prototyping has remained flat (or decreased slightly as the 2012 data suggest). This might seem counter-intuitive since we previously saw a trend in terms of the increase in SoC class designs. So what’s going on?

Figure 6 partitions the data for HW-assisted acceleration/emulation and FPGA prototyping adoption by design size: less than 1M gates, 1M to 20M gates, and greater than 20M gates. Notice that the adoption of HW-assisted acceleration/emulation continues to increase as design sizes increase. However, the adoption of FPGA prototyping rapidly drops off as design sizes increase beyond 20M gates. 

Figure 6. Acceleration/emulation and FPGA prototyping adoption by design size

This graph illustrates one of the problems with FPGA prototyping of very large designs, which is that there is an increased engineering effort required to partition designs across multiple FPGAs. In fact, what I have found is that FPGA prototyping of very large designs is often a major engineering effort in itself, and that many projects are seeking alternative solutions to address this problem.

In my next blog (click here), I will present the final data I plan to share from the Wilson Research Group study. This blog will focus on results in terms of meeting schedules, required spins, and classes of bugs contributing to respins. I will then wrap up this series of blogs in what I call the Epilogue—which will discuss potential gotchas and cautions on interpreting certain aspects of the data and thoughts about how the data could be used constructively.

19 August, 2013

Verification Techniques & Technologies Adoption Trends

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for background on the study, click here).

In my previous blog (Part 9 click here), I focused on some of the 2012 Wilson Research Group findings related to design and verification language and library trends. In this blog, I present verification techniques and technologies adoption trends, as identified by the 2012 Wilson Research Group study.

An interesting trend we are starting to see is that the electronic industry is maturing its functional verification processes, whether they are targeting their designs at IC/ASIC or FPGA implementations. This blog provides data to support this claim. An interesting question you might ask is, “What is driving this trend?” In some of my earlier blogs (click here for Part 1 and Part 2) I showed that design complexity is increasing in terms of design sizes and number of embedded processors. In addition, I’ve presented trend data that showed an increase in total project time and effort spent in verification (click here for Part 5 and Part 6). My belief is that the industry is being forced to mature its functional verification processes to address increasing complexity and effort.

Simulation Techniques Adoption Trends

Let’s begin by comparing  non-FPGA adoption trends related to various simulation techniques from the 2007 Far West Research study  (in blue) with the 2012 Wilson Research Group study  (in green), as shown in Figure 1.

Figure 1. Simulation-based technique adoption trends for non-FPGA designs

You can see that the study finds the industry increasing its adoption of various functional verification techniques for non-FPGA targeted designs. Clearly the industry is maturing its processes as I previously claimed.

For example, in 2007, the Far West Research Group found that only 48 percent of the industry performed code coverage. This surprised me. After all, HDL-based code coverage is a technology that has been around since the early 1990’s. However, I did informally verify the 2007 results through numerous customer visits and discussions. In 2012, we see that the industry adoption of code coverage has increased to 70 percent.

In 2007, the Far West Research Group study found that 37 percent of the industry had adopted assertions for use in simulation. In 2012, we find that industry adoption of assertions had increased to 63 percent. I believe that the maturing of the various assertion language standards has contributed to this increased adoption.

In 2007, the Far West Research Group study found that 40 percent of the industry had adopted functional coverage for use in simulation. In 2010, the industry adoption of functional coverage had increased to 66 percent. Part of this increase in functional coverage adoption has been driven by the increased adoption of constrained-random simulation, since you really can’t effectively do constrained-random simulation without doing functional coverage.

Now let’s look at FPGA adoption trends related to various simulation techniques, comparing the 2010 Wilson Research Group study (in pink) with the 2012 Wilson Research Group study (in red).

Figure 2. Simulation-based technique adoption trends for FPGA designs

Again, you can clearly see that the industry is increasing its adoption of various functional verification techniques for FPGA targeted designs. This past year I have spent a significant amount of time in discussions with FPGA project managers around the world. During these discussions, most managers mention the drive to improve the verification process within their projects due to the rising complexity of this class of designs. The Wilson Research Group data supports these claims.

In fact, Figure 3 illustrates this maturing trend in the FPGA space, where we saw a 15 percent increase in the adoption of RTL simulation and an 8.5 percent increase in the adoption of code coverage. For complex FPGA designs, the traditional approach of “burn and churn” and debug in the lab is no longer a viable option. Nonetheless, it is still somewhat alarming that 31 percent of the FPGA study participants work on projects that perform no RTL simulation.

Figure 3. FPGA projects maturing their verification processes

Signoff Criteria Trends

We saw earlier in this blog the increased adoption of coverage techniques in the industry. Coverage has become a major component of a project’s verification signoff criteria. In Figure 4, we see how coverage has increased in importance in verification signoff criteria within the past five years, while other decision attributes have declined in terms of importance.

Figure 4. Non-FPGA functional verification signoff criteria trends

We see the same trends for FPGA designs, as shown in Figure 5.

Figure 5. FPGA functional verification signoff criteria trends

In my next blog (click here), I plan to continue the discussion related to adoption of various verification technologies and techniques as identified by the 2012 Wilson Research Group study.
