Verification Horizons BLOG

This blog provides an online forum for weekly updates on concepts, values, standards, methodologies and examples to help you understand what advanced functional verification technologies can do and how to apply them most effectively. We're looking forward to your comments and suggestions on the posts to make this a useful tool.

11 February, 2014

One of the nice things about DVCon is the update one can get from the developers of IEEE and Accellera standards.  And this year’s DVCon is no exception.  The four days of DVCon begin and end with tutorials that cover updates to popular standards like UVM, UPF, SystemC and more.  For our part, Mentor Graphics is participating in the development and delivery of these updates alongside our peers.

I have written in the past about the productivity challenges before us in addressing the verification crisis, and about the emergence of machine-to-machine communication and the Internet of Things driving power aware design and verification.  To meet the demand for improved verification and help address the verification crisis, the next round of the Universal Verification Methodology (UVM) standard is being readied for industry adoption.  UVM 1.2, the emerging update, will be covered in some detail in a Monday morning tutorial to help you learn “What’s Now and What’s Next.”  Mentor Graphics’ Tom Fitzpatrick, an Accellera Working Group representative, will present in this tutorial.

UVM 1.2 is an active Accellera development project and has not yet been released, so there is no official standard available for download and use.  I’ll share standardization details as they happen.

At the same time on Monday, those who are concerned with power aware design and verification can attend the tutorial on the Unified Power Format (UPF), or as it is officially called, IEEE 1801™-2013.  The tutorial will cover the full spectrum of UPF capabilities and methodology, from basic to advanced applications.  So if you are new to UPF and want to learn, this is a great tutorial to attend.  And if you are already an expert, the advanced applications of UPF highlighted by companies who have adopted the standard make it valuable for you as well.  Mentor Graphics’ Erich Marschner, vice-chair of the IEEE 1801 Working Group, will participate in this tutorial.

UPF is an official IEEE standard.  Have you downloaded your copy yet?  Accellera has worked with the IEEE to provide no-charge access to the official standard for you.  You can find the UPF standard here.

In the afternoon, there will be a session on case studies in SystemC.  User and vendor presentations will explore use of this standard.  SystemC offers much in the verification space, not just in technology but also in lessons on how to bridge the RTL world with the transaction-level modeling world.  Mentor Graphics’ John Stickley will review what we have learned and how you can apply it to your most pressing verification needs.

SystemC is an official IEEE standard.  Have you downloaded your copy yet?  Under the Accellera agreement with the IEEE, you can download the SystemC standard here.

There is a lot more to DVCon than just the use of current standards and planning the adoption of emerging ones.  I encourage you to check out the whole agenda and join me at DVCon 2014, March 3-6.

Mentor Graphics presentations during the conference include:

  • Tuesday Paper Sessions
    • Amit Srivastava – Stepping Into UPF 2.1 World: Easy Solution to Complex
      Power Estimation
    • Kenneth Bakalar – Interpreting UPF For A Mixed-Signal Design Under Test
    • Gordon Allan – Tried and Tested Speedups for Software-Driven SoC Simulation
  • Tuesday Poster Sessions
    • Rich Edelman – Debugging Communicating Systems: The Blame Game – Blurring
      the Line Between Performance Analysis and Debug
    • Matthew Ballance – Tackling Random Blind Spots with Strategy-Driven Stimulus Generation
    • Gaurav K. Verma – Supercharge Your Verification Using Rapid Expression Coverage as the Basis of a MC/DC-Compliant Coverage Methodology
    • Andreas Meyer – So You Think You Have Good Stimulus: System-Level Distributed Metrics Analysis and Results
    • Rich Edelman – UVM SchmooVM – I Want My C Tests!
    • Thom Ellis – Are You Really Confident That You Are Getting the Very Best From Your Verification Resources?
    • Jitesh Bansal – Is Your Power Aware Design Really X-Aware
  • Wednesday Paper Sessions
    • Avidan Efody – Wiretap Your SoC: Why Scattering Verification IPs Throughout Your Design Is A Smart Thing To Do
    • Tom Fitzpatrick – Of Camels and Committees: Standards Should Enable Innovation, Not Strangle It

Mentor Graphics will host its traditional lunch at DVCon on Wednesday, on the theme of Accelerating Verification.  And we have lively panel participants for the Tuesday and Wednesday panels.  As always, the Exhibit, CEO Keynote and Panels are open to all at no charge – you just have to REGISTER!

I look forward to seeing you there!


4 February, 2014

Marketing teams at FPGA vendors have been busy as the silicon nanometer geometry race escalates. Altera is “delivering the unimaginable” while Xilinx is offering “all programmable SoCs” to design centers. It’s clear that the SoC has become more accessible to a broader market today and that FPGA vendors have staked out a solid technology roadmap for the near future. But do the marketing messages surrounding the geometry race affect the day-to-day life of engineers, and if so, how – especially when it comes to verification?

An excellent whitepaper from Altera, “The Breakthrough Advantage for FPGAs with Tri-Gate Technology,” covers Altera’s Stratix 10 FPGAs and SoCs. The paper describes verification challenges in this new expanded market this way: “Although current generation FPGAs require a rigorous simulation verification methodology rivaling ASICs, the additional lab testing and ability to reprogram FPGAs save substantial manpower investment. The overall cost of ownership must be considered when comparing an FPGA whose component price is higher than an ASIC of similar complexity.” I believe you can use this statement to engage your management in a discussion about better verification processes.

Xilinx also has excellent published technical resources. Its recent UltraScale backgrounder describes how they are solving the challenges in implementing a design with their reprogrammable silicon. Clearly Xilinx has made an impressive investment to make it easier to implement a design with its FPGA UltraScale products. Improvements include ASIC-like clocking and annealing dataflow bottlenecks without compromising performance. Xilinx also describes improvements when using its Vivado design suite, particularly when it comes to in-lab design bring up.

For other FPGA insights, it’s also worth checking out Electronics Engineering Journal’s recent article “Proliferating Programmability in 2014,” which makes the case for a bright long-term future for FPGA tool flows even though, as Kevin Morris sees it, EDA seems to have abandoned the market. (Kevin, I’m here to tell you you’re wrong.)

Do you think it’s inevitable that your FPGA team will first struggle to make it across the verification finish line before adopting a more process-oriented verification flow like the ASIC market demands? It’s not. I base this conclusion on the many conversations I’ve had with FPGA designers, their managers, sales engineers and many other talented people in this market over the years. Yes, there are significant challenges in FPGA design, but not all of them are technology related. With some emotion, one engineer remarked that debugging the same type of issue over and over in the hardware lab and expecting a different outcome was insane. (He’s right.) Others say they need specific ROI information for their management to even accept their need for change. Still others state that had they only known the solutions I talked about in my seminar a year ago, they would not have spent months and months bringing up their design in the lab.

With my peers here at Mentor Graphics, I have developed a three-step verification flow that includes coverage, assertions and improved throughput. I’ll write about this flow and related issues in the weeks ahead here on this blog. The flow is built on fundamental verification technologies that benefit the broad FPGA market. The goal, in developing the technology and writing about it here, has been to provide practical solutions and help more FPGA teams close the verification gap.
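To give a flavor of the first two steps, here is a minimal SystemVerilog sketch of the kind of functional coverage and assertion code such a flow is built on. The FIFO example and signal names are hypothetical, chosen purely for illustration, and are not part of the flow itself.

// Illustrative only: a tiny covergroup and two assertions for a hypothetical FIFO
module fifo_checks #(parameter DEPTH = 16) (
  input logic                   clk, rst_n,
  input logic                   wr_en, rd_en,
  input logic [$clog2(DEPTH):0] count
);
  // Step 1: functional coverage -- have we ever exercised the empty/full corners?
  covergroup fifo_cg @(posedge clk);
    cp_count : coverpoint count {
      bins empty = {0};
      bins full  = {DEPTH};
      bins mid   = {[1:DEPTH-1]};
    }
  endgroup
  fifo_cg cg = new();

  // Step 2: assertions -- never write when full, never read when empty
  a_no_overflow  : assert property (@(posedge clk) disable iff (!rst_n)
                                    !(wr_en && count == DEPTH));
  a_no_underflow : assert property (@(posedge clk) disable iff (!rst_n)
                                    !(rd_en && count == 0));
endmodule

Step three, improved throughput, is more about how builds and regressions are run than about testbench code, so it is not shown here.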

In the meantime, what are your stories? Are you able to influence your management into adopting advanced technology to aid lab bring-up? Is your management’s bias towards lower cost and faster implementation (at the expense of verification)? Let me know in the comments or, if you prefer, by e-mail: joe_rodriguez@mentor.com.


6 January, 2014

The UCIS Story

It is no secret that as design sizes grow, the burden on verification grows disproportionately.  Two factors that are easy to measure are the time it takes to simulate a design and the size of the dataset that contains the results of the verification runs. Simulation times are growing and the datasets are getting larger.  While time and attention are given to accelerating verification through emulation, or alternate verification methods, to reduce run times, less explored is the impact of larger datasets on verification closure.  How does one find bugs within datasets that are so large?  How can verification results from simulation, emulation, formal and more be brought together to help drive verification closure?  How can one link failures in verification back to requirements?

The Accellera standards organization took a multi-year journey to help address these issues and arrived at the creation of the Unified Coverage Interoperability Standard (UCIS).  You can get your free copy here if you would like to read and use it.  Mentor Graphics contributed a significant starting point to the standard and collaborated with major competitors and users to add to and extend it from there.  But now that the standard is done, what does one do with it?

While that was a rhetorical question when the standard was completed in 2012, today it begs an answer.

From my perspective there are two classes of UCIS users.  The more immediate users are those who are building verification tools that must contend with design and verification complexity now.  With UCIS they have the initial underpinnings to add product features that allow a level of data portability that was not possible before the standard.  The second class of users are those who will use the UCIS Application Programming Interface (API) to build functions that perform simple and complex tasks on these large datasets.  This second class of users, who might exchange UCIS API code with each other, has yet to materialize.  But the stage is set for them.

To highlight what the first class of UCIS adopters has been doing, DVClub in Europe will tackle the question of what one can do with UCIS on Monday, 13 January 2014.  Darron May, Product Manager at Mentor Graphics, will speak for us on our application of the standard.  His session is titled Blending Metrics from Multiple Verification Engines to Improve Productivity.  You can find out more details about the DVClub event (speakers and presentation abstracts) and register here to attend in person or via remote access.  The event will be held 12:00-14:00 GMT and is free.


28 November, 2013

Wow! I’ve been on the road since August, and finally found a spare moment to get back to this blog. I started this blog series with a prologue that gave a little bit of background on the 2012 Wilson Research Group study. And with this epilogue, I will draw the series to a conclusion with some insight into the study process. Many people are cynical about industry studies in general (and particularly ones based on surveys) and believe that they are inaccurate, unreliable, and biased. However, I believe that the benefit of an industry study is not necessarily the quantitative values that the answers reveal, but the new questions they raise.

With that said, it is important to understand that the 2007 FarWest Research study and the 2010 and 2012 Wilson Research Group studies followed the format of the original 2002 and 2004 Ron Collett International studies, which have certain limitations. For example, the Collett studies were very block-, IP-, and RTL-focused. The data that these studies revealed is certainly valuable and important, but it doesn’t represent some of the challenges in SoC integration verification and system validation that have recently emerged. In fact, many of the techniques used for block and subsystem verification that these surveys studied (such as constrained-random stimulus, functional coverage, and general formal property checking) do not scale well to the SoC integration and system-level validation space. I believe that future studies should be expanded to include these emerging challenges.

Another criticism of all the previous studies is that the presented data is aggregated across all market segments—mil/aero, mobile, consumer, networking, computers, and so forth. Hence, it can be difficult for those in a particular market segment to benchmark themselves against or relate to the study data. This is a valid argument. With our two recent Wilson Research studies, it is possible to filter the data down for presentation to a specific market segment; this was not possible with the previous studies.

Nonetheless, there is still value in observing general industry trends between the multiple studies as long as the format of the studies is consistent. Consistency is something we strived for across the studies. For example, whenever possible, we tried to maintain the exact wording of the questions originally used in the Collett studies.

Minimizing Study Biases

When architecting a study, there are three main concerns that must be addressed to ensure valid results: (1) sample validity, (2) non-response bias, and (3) stakeholder bias. I’ll briefly review each of these concerns in the following paragraphs and discuss the steps we took to try to minimize them.

(1)    Sample validity or undercoverage bias:  To ensure that a study is unbiased, it’s critical that every member of a studied population have an equal chance of participating. An example of a biased study would be a technical conference surveying its participants. The data might raise some interesting questions, but unfortunately, it doesn’t represent the members of the population who were unable to participate in the conference. The same bias can occur if a journal or online publication simply surveys its subscribers.

A classic example of this problem is the famous Literary Digest poll in the 1936 presidential election, for which the magazine surveyed over two million people. This was a huge study for the period. The pool (or make-up) of the study was chosen from the magazine’s subscriber list, phone books, and car registrations. The problem with this approach was that the study did not represent the actual voter population, since it was a luxury to have a magazine subscription, a phone, or a car during the Great Depression. As a result of this biased sample, the poll inaccurately predicted that Republican Alf Landon would defeat Democrat Franklin Roosevelt in the 1936 presidential election.

For the 2012 Wilson Research Group study, we carefully chose a broad set of lists that, when combined, represented all regions of the world and all electronic design market segments. We reviewed the participant results in terms of market segments to ensure no segment or region representation was inadvertently excluded or under-represented.

(2)    Non-response bias: Non-response bias occurs when a randomly sampled individual cannot be contacted or refuses to participate in a survey. For example, spam and unsolicited mail filters can remove an individual from the possibility of receiving an invitation to participate in a survey, which can bias results. It is important to validate that sufficient responses occurred across all the lists that make up the study pool. Hence, we reviewed the final results to ensure that no single list of respondents that made up the participant pool dominated the final results.

Another potential non-response bias is due to lack of language translation. In fact, we learned this during the 2010 Wilson Research Group study. The study generally had good representation from all regions of the world, with the exception of an initially very poor level of participation from Japan. To solve this problem, we took two actions: (1) we translated both the invitation and the survey into Japanese, (2) we acquired additional engineering lists directly from Japan to augment our existing survey invitation list. This resulted in a balanced representation from Japan. Based on that experience, we took the same approach to solve the language problem for the 2012 study.

(3)    Stakeholder bias: Stakeholder bias occurs when someone who has a vested interest in survey results can complete an online survey multiple times and urge others to complete the survey in order to influence the results. To address this problem, a special code was generated for each study participation invitation that was sent out. The code could only be used once to fill out the survey, preventing someone from taking the study multiple times, or sharing the invitation with someone else.

2010 Study Bias

After analyzing the results from the 2012 study, we are confident that the study was balanced across market segments and regions of the world, which was our goal. However, while architecting the 2012 study, we did discover a non-response bias associated with the 2010 study. Although multiple lists across multiple market segments and regions of the world were used during the 2010 study, we discovered that a single list dominated the responses, one consisting of participants who worked on more advanced projects and whose functional verification processes tended to be mature. For example, as a result of this bias, if you look at Figure 3 concerning languages used to create testbenches, the industry-wide adoption of SystemVerilog is likely less than what was shown for 2010. This means that the industry adoption between 2010 and 2012 would have increased slightly more than indicated, and the growth between 2007 and 2010 would have been slightly less.

The 2007 study, like the 2012 study, was well balanced and did not exhibit the non-response bias previously described for the 2010 data. Hence, we have confidence in talking about general industry trends between 2007 and 2012. The 2010 data can still be useful as a reference point during discussion if you keep in mind that it represents a more process-mature segment of the population.

Our plan is to commission a new study in 2014. At that point, we should have sufficient data to start showing trends in the FPGA space (something we have not been able to do yet) and have a clearer picture of emerging trends in the non-FPGA space. However, as previously stated, the emerging challenges today are occurring in the SoC integration verification and system-level validation space. We will either reduce the scope of our existing IP/RTL-focused study to make room for new questions in these spaces, or conduct a separate, rigorous study for these new emerging challenges.

Final word—I hope the data presented in this set of blogs has provided some insight on general trends. But more importantly, I hope it has inspired you with questions about your own processes that you might want to investigate.

15 November, 2013

Wanted to let you all know that the October 2013 issue of Verification Horizons is available online. You can view the articles or download the issue here. In addition to a little paternal bragging in the Introduction, I wanted to call your attention in particular to a few of the articles written by some Mentor colleagues:

  • Software-Driven Testing of AXI Bus in a Dual Core ARM System by Mark Olen, Mentor Graphics
    In this article, Mark presents an architecture for verifying the functionality and performance of a complex AXI bus fabric using a combination of SystemVerilog and C software-driven test techniques, where the operation of the C code is automatically coordinated with additional UVM stimulus to ensure that you’re hitting corner cases of your software as well as your hardware. (A minimal, hypothetical DPI-based sketch of this kind of hardware/software coordination appears after this list.)
  • Caching in on Analysis by Mark Peryer, Mentor Graphics
    Our “other Mark” explains how to verify complex interconnect subsystems in Questa through testbench and instrumentation generation, as well as automated stimulus to target interconnect functionality and cache coherency.
  • DDR SDRAM Bus Monitoring using Mentor Verification IP by Nikhil Jain, Mentor Graphics
    Here, Nikhil explains how Mentor’s DDR VIP can be used as a bus monitor, taking advantage of built-in coverage and assertions, to ensure proper protocol behavior.
  • Life Isn’t Fair, So Use Formal by Roger Sabbagh, Mentor Graphics
    Roger will show you how to use Questa CoverCheck to help you reach (or usually eliminate) that last 10% of code coverage that always seems to take so long.
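As a companion to the first article summary above, here is a minimal, hypothetical sketch of how C test code can be hooked into a SystemVerilog environment through DPI. It illustrates the general software-driven idea only; the routine names (c_sw_test, sv_write_reg) are invented for this sketch, it assumes a companion C file is compiled and linked, and it is not the architecture Mark describes in his article.

// Hypothetical DPI hookup: a C test routine drives stimulus through an SV task
module sw_test_bridge;
  import "DPI-C" context task c_sw_test();   // C entry point, assumed to exist in a linked C file
  export "DPI-C" task sv_write_reg;          // exported so the C code can call back into SV

  task sv_write_reg(input int unsigned addr, input int unsigned data);
    // In a real environment this call would be turned into an AXI sequence item
    // and routed through a UVM sequencer; here it simply logs the access.
    $display("SW write: addr=0x%0h data=0x%0h", addr, data);
  endtask

  initial c_sw_test();   // run the software-driven portion of the test
endmodule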

I had to write my introduction before the Red Sox actually made it to the World Series, but I just have to say that

  1. I picked the Sox to beat the Tigers in 6 games in the ALCS (including calling Big Papi’s grand slam in game two – ask my son), and
  2. I picked the Sox to beat the Cardinals in 6 games to win the World Series.

Just wanted you all to know that.

-Tom

30 October, 2013

MENTOR GRAPHICS AT ARM TECHCON

This week ARM® TechCon® 2013 is being held at the Santa Clara Convention Center from Tuesday, October 29 through Thursday, October 31, but don’t worry, there’s nothing to be scared about.  The theme is “Where Intelligence Counts,” and in fact, as a platinum sponsor of the event, Mentor Graphics is excited to present no fewer than ten technical and training sessions about using intelligent technology to design and verify ARM-based designs.

My personal favorite is scheduled for Halloween Day at 1:30pm, where I’ll tell you about a trick that Altera used to shave several months off their schedule, while verifying the functionality and performance of an ARM AXI™ fabric interconnect subsystem.  And the real treat is that they achieved first silicon success as well.  In keeping with the event’s theme, they used something called “intelligent” testbench automation.

And whether you’re designing multi-core designs with AXI fabrics, wireless designs with AMBA® 4 ACE™ extensions, or even enterprise computing systems with ARM’s latest AMBA® 5 CHI™ architecture, these sessions show you how to take advantage of the very latest simulation and formal technology to verify SoC connectivity, ensure correct interconnect functional operation, and even analyze on-chip network performance.

On Tuesday at 10:30am, Gordon Allan described how an intelligent performance analysis solution can leverage the power of an SQL database to analyze and verify interconnect performance in ways that traditional verification techniques cannot.  He showed a wide range of dynamic visual representations produced by SoC regressions that can be quickly and easily manipulated by engineers to verify performance and avoid expensive overdesign.

Right after Gordon’s session, Ping Yeung discussed using intelligent formal verification to automate SoC connectivity, overcoming observability and controllability challenges faced by simulation-only solutions.  Formal verification can examine all possible scenarios exhaustively, verifying on-chip bus connectivity, pin multiplexing of constrained interfaces, connectivity of clock and reset signals, as well as power control and scan test signal connectivity.
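For a flavor of what such connectivity checks look like when written as properties, here is a small illustrative sketch. The signal names are hypothetical and this is not the formal setup from Ping’s session; the point is that a formal tool can prove assertions like these exhaustively, whereas simulation only checks the cases it happens to hit.

// Illustrative connectivity assertions (hypothetical signals)
module connectivity_checks (
  input logic clk, rst_n,
  input logic mode_sel,             // pin-mux select
  input logic uart_tx, spi_mosi,    // alternate functions behind the mux
  input logic pad_out,              // the shared chip pad
  input logic core_irq, gic_irq_in  // a straight-through connection to check
);
  // Straight-through connectivity: the interrupt must reach the GIC unchanged
  a_irq_connected : assert property (@(posedge clk) disable iff (!rst_n)
                                     gic_irq_in == core_irq);

  // Pin multiplexing: the pad must follow whichever function is selected
  a_pad_mux : assert property (@(posedge clk) disable iff (!rst_n)
                               pad_out == (mode_sel ? spi_mosi : uart_tx));
endmodule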

On Wednesday, Mark Peryer shows how to verify AMBA interconnect performance using intelligent database analysis and intelligent testbench automation for traffic scenario generation.  These techniques enable automatic testbench instrumentation for configurable ARM-based interconnect subsystems, as well as highly-efficient dense, medium, sparse, and varied bus traffic generation that covers even the most difficult to achieve corner-case conditions.

And finally, also on Halloween, Andy Meyer offers an intelligent workshop for those who are designing high performance systems with hierarchical and distributed caches, using either ARM’s AMBA 5 CHI architecture or ARM’s AMBA 4 ACE architecture.  He’ll cover topics including how caching works, how to improve caching performance, and how to verify cache coherency.

For more information about these sessions, be sure to visit the ARM TechCon program website.  Or if you miss any of them, and would like to learn about how this intelligent technology can help you verify your ARM designs, don’t be afraid to email me at mark_olen@mentor.com.   Happy Halloween!


15 October, 2013

Low Power Flow Kicks Off Symposium

In the world of electronic design automation, as an idea takes hold and works its way from thought to silicon, numerous tools are used by engineers to help bring a good idea to product fruition.  Standards play a key role in helping move your design information from high-level concepts into the netlists that can be realized in silicon.  The IEEE Standards Association is holding a Symposium on EDA Interoperability to help members of the electronics/semiconductor design and verification community better understand the landscape of EDA and IP standards and the role they play in addressing interoperability.

Another key component is the set of programs and business relationships we foster to promote tool connectivity and interoperability.  Questa users rely on the Questa Vanguard Partnership program, which gives their trusted tool and technology partners access to our verification technology so they can craft leading-edge design and verification flows with technology from numerous sources.  If your users want you to connect with Questa, we invite you to explore the benefits of this program.  Even better, join us at the IEEE SA Symposium on EDA Interoperability, where we can also discuss this in person – Register Here!

Event Details
Date: 24 October 2013
Time: 9:00 a.m. – 6:00 p.m. PT
Location: Techmart – 5201 Great America Parkway, Santa Clara, CA 95054-1125
Cost: Free!
Program: http://standards.ieee.org/events/edasymposium/program.html

One of the more pressing issues in design and verification today is low power.  The IEEE SA Symposium on EDA Interoperability kicks off the morning with its first session on “Interoperability Challenges: Power Management in Silicon.”  The session will feature an opening presentation on the state of standardization by the Vice Chair of the IEEE P1801 Working Group (and Mentor Graphics Verification Architect) as well as two presentations from ARM on the use of the IEEE 1801 (UPF) standard.

11:00 a.m. – 12:00 p.m. Session 1: Interoperability Challenges: Power Management in Silicon

  • IEEE 1801 Low Power Format: Impact and Opportunities
    Erich Marschner, Vice Chair of IEEE P1801 Working Group, Verification Architect, Mentor Graphics
  • Power Intent Constraints: Using IEEE 1801 to improve the quality of soft IP
    Stuart Riches, Project Manager, ARM
  • Power Intent Verification: Using IEEE 1801 for the verification of the ARM Cortex-A53 processor
    Adnan Khan, Senior Engineer, ARM

The event is sponsored by Mentor Graphics and Synopsys and we have made sure the symposium is free to attend.  You just need to register.  There are other great aspects to the event, not just the ability to have a conversation on the state of standards for low power design and verification in the morning.  In fact, the end of the event will take a look at EDA 2020 and what is needed in the future.  This will be a very interactive session that will open the conversation to all attendees.  I can’t wait to learn what you have to share!  See you at the Techmart on the 24th.


4 October, 2013

We are truly living in the age of SoC design, where 78 percent of all designs today contain one or more embedded processors.  In fact, 56 percent of all designs contain two or more embedded processors, which brings a whole new level of verification challenges—requiring unique solutions.

A great example of this is STMicroelectronics, which recently shared its experience and solution in addressing verification challenges due to rising complexity. In 2012, STMicroelectronics began a pilot project to build what it called the Eagle Reference Design, or ERD. The goal was to see if it would be possible to stitch together three ARM products — a Cortex-A15, a Cortex-A7 and a DMC-400 — into one highly flexible platform, one that customers might eventually be able to tweak based on nothing more than an XML description of the system.

Engineers at STMicroelectronics sought to understand and benchmark the Eagle Reference Design. To speed this benchmarking along, they wanted a verification environment that would link software-based simulation and hardware-based emulation in a common flow.

Their solution was unique, and their story is worth reading. They first built a simulation testbench that relied heavily on verification IP (VIP). Next, the team connected this testbench to a Veloce emulation system via TestBench XPress (TBX) co-modeling software. Running verification required separating all blocks of design code into two domains — synthesizable code, including all RTL, for running on the emulator, and all other modules, which run in the HDL portion of the environment on the simulator (which is connected to the emulator). Throughout the project, the team worked closely with Mentor Graphics to fine-tune the new co-emulation verification environment, which requires that all SoC components be mapped exactly the same way in simulation and emulation.
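To illustrate the partitioning principle only (this is not ST’s environment, and it does not use the actual TBX co-modeling interface), here is a minimal dual-top sketch: everything under the HDL top is meant to be synthesizable and could run on the emulator, while the untimed testbench code under the HVL top stays on the simulator and reaches the hardware only through a task-based transactor.

// Partitioning sketch: synthesizable domain vs. behavioral testbench domain
module hdl_top;
  logic clk = 0;
  always #5 clk = ~clk;            // in a real co-emulation flow, clocks come from the emulator's clocking scheme

  logic        valid = 0;
  logic [31:0] data;
  // (DUT instantiation omitted; it would connect to clk, valid and data)

  // The transactor task is the only way the testbench side touches the pins
  task automatic send(input logic [31:0] d);
    @(posedge clk);
    valid <= 1'b1;
    data  <= d;
    @(posedge clk);
    valid <= 1'b0;
  endtask
endmodule

module hvl_top;
  // Untimed, transaction-level code that stays on the simulator
  initial begin
    for (int i = 0; i < 4; i++)
      hdl_top.send($urandom());    // in co-emulation this cross-call goes over the co-modeling channel
    $finish;
  end
endmodule

In an actual co-emulation flow the call from hvl_top into hdl_top would travel over the co-modeling channel rather than a plain hierarchical reference, but the partitioning discipline is the same.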

Because the reference design was not bound to any particular project, the main goal was not to arrive at the complete verification of the design but rather to do performance analysis and establish verification methodologies and techniques that would work in the future. In this they succeeded, agreeing that when they eventually try this sort of combined approach on a real project, they will be able to port the verification environment to the emulator more or less seamlessly.

This is a great success story worth reading on how STMicroelectronics combined Questa simulation, Mentor verification IP (VIP), and Veloce emulation to speed up their benchmarking verification process. Check out the full story here!


19 September, 2013

It’s hard for me to believe that SystemVerilog 3.1 was released just over 10 years ago. The 3.1 version added Object-Oriented Programming features for testbench development to a language predominantly used for RTL design and synthesis. Making debug easier was one of the driving forces in unifying testbench and design features into a single language. The semantics for evaluating expressions and executing statements would be the same in the testbench and the design. Setting breakpoints and stepping through the code would be seamless. That should have made it easier for either a verification or a design engineer to understand a complete verification environment. Or maybe it would enable either one to at least understand enough of the environment to isolate a particular problem.

Ten years later, I have yet to see that promise fulfilled. Most design engineers still debug their simulations the same way they debug in the lab: they look at waveforms. During simulation, they rarely look at the design source code, and certainly never look at the testbench code (unless it’s just basic pin wiggling, like a waveform). Verification engineers are not much different. They rely on waveform debugging because that is what they were brought up on, and many do not even realize source-level debugging is available to them. However, the test/testbench is more like a piece of software than a hardware description, and there are many things about a modern testbench that are difficult to display in a waveform (e.g., call stacks, local variables, and random constraints). And methodologies like the UVM add many layers of source-level complexity that most users do not have the time to wade through.
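To make that point concrete, consider this small, purely illustrative class: the constraint, the local variable, and the nested function call are all natural things to inspect in a source-level debugger, yet none of them exists as a signal you could drag into a waveform window.

// Illustrative only: testbench state that has no waveform representation
class packet;
  rand bit [7:0]  len;
  rand bit [15:0] addr;
  constraint c_len { len inside {[1:64]}; }   // random constraint: not a signal

  function int header_bytes();
    int overhead = 4;                         // local variable: lives on the call stack
    return overhead + (addr[15] ? 2 : 0);
  endfunction

  function int total_bytes();
    return len + header_bytes();              // call stack: total_bytes -> header_bytes
  endfunction
endclass

module tb;
  initial begin
    packet p = new();
    assert (p.randomize());
    $display("total bytes = %0d", p.total_bytes());
  end
endmodule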

Next week I will be presenting as part of an Industry Special Session during the Forum on Specification & Design Languages (FDL, September 24-26, 2013) that will discuss these issues and try to get more involvement from the academic and user communities to help resolve them. Was combining constructs from many languages into one a success? Can tools provide representations of source-level constructs in an easier graphical form? We hopefully will not need another decade to find out.


8 September, 2013

Schedules, respins, and bug classification

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 11 click here), I focused on some of the 2012 Wilson Research Group findings related to formal verification, acceleration/emulation, and FPGA prototyping trends. In this blog, I present verification results findings in terms of schedules, respins, and classification of functional bugs.

We have seen in previous blogs that a significant amount of effort is being applied to functional verification. An important question the various studies have tried to answer is whether this increasing effort is paying off.

Schedules

Figure 1 presents the design completion time compared to the project’s original schedule. What is interesting is that we really have not seen a change in this trend in over five years. That is, 67 percent of all projects are behind schedule with respect to the original plan. One could argue that designs have increased in complexity in terms of gate counts, embedded processors, and lots of software between 2007 and 2012. Yet, achieving project schedules has not worsened.

Figure 1. Non-FPGA design completion time compared to the project’s original schedule

What’s interesting is that FPGA designs follow this same trend, as shown in Figure 2.

Figure 2. Non-FPGA vs. FPGA design completion time relative to the original plan

Respins

Other verification data points worth looking at relate to the number of spins required between the start of a project and final production. Figure 3 shows this industry trend all the way back to the 2004 Collett study. Again, even though designs have increased in complexity, the data suggest that projects are not getting any worse in terms of the number of spins before production. If anything, there appears to be a slight improvement recently in this trend in projects requiring three or more spins.

Figure 3. Number of spins required from start of project until production

Bug classification

Figure 4 shows the various categories of flaws that contribute to respins. Note that the sum is greater than 100% on this graph because a respin can be triggered by multiple types of flaws.

Figure 4. Types of flaws contributing to respins

Although logic and functional flaws remain the leading causes of respins, the data suggest that there has been some improvement in this area over the past eight years. Is this due to the increased amount of reuse that is occurring in the industry? Or is the industry maturing its verification processes? Or is something entirely different going on? This data point raises some interesting questions worth exploring.

Figure 5 examines the root causes of functional bugs by various categories. The data suggest an improvement in logic errors over an eleven-year period and, potentially, a worsening of problems related to changing specifications. Problems associated with changing, incorrect, and incomplete specifications are a common theme I often hear when visiting customers.

Figure 5. Root cause of functional flaws

In my next blog (click here), I plan to wrap up this series of blogs in what I call the Epilogue—which will discuss potential gotchas and cautions on interpreting certain aspects of the data and thoughts about how the data might be used constructively.

