Verification Horizons BLOG

This blog will provide an online forum with weekly updates on concepts, values, standards, methodologies, and examples to help you understand what advanced functional verification technologies can do and how to apply them most effectively. We look forward to your comments and suggestions on the posts to make this a useful tool.

26 February, 2015

A colleague recently asked me: Has anything changed? Do design teams tape-out nowadays without GLS (Gate-Level Simulation)? And if so, does their silicon actually work?

In his day (and mine), teams prepared in three phases: first, hierarchical gate-level netlist simulation to weed out X-propagation issues; then full chip-level gate simulation (unit delay) to come out of reset, exercise all I/Os, and weed out any remaining X’s; and finally a run with SDF back-annotation on the clocktree-inserted final netlist.
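As a reminder of what that third phase looks like mechanically, the SDF timing data is typically applied to the gate-level netlist from the testbench via the standard $sdf_annotate system task (or the equivalent simulator option). A minimal sketch, with placeholder file, module, port and clock-period choices:

  // Skeleton of the SDF back-annotated gate-level run (phase 3 above).
  // File names, module names, ports and the clock period are placeholders.
  module tb_gls;
    logic clk = 0;
    logic rst_n;

    // Instantiate the clocktree-inserted final netlist, not the RTL.
    chip_top dut (.clk(clk), .rst_n(rst_n) /* , ...all other pads... */);

    // Back-annotate worst-case delays from the signoff SDF onto the DUT
    // instance before time zero (IEEE 1800 standard system task).
    initial $sdf_annotate("chip_top_final.sdf", dut, , "sdf_annotate.log", "MAXIMUM");

    always #5 clk = ~clk;          // placeholder clock generator

    initial begin
      rst_n = 0;
      repeat (10) @(posedge clk);
      rst_n = 1;                   // come out of reset, then run the bring-up checks
    end
  endmodule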


After much discussion about the actual value of GLS and the desirability of eliminating the pain of having to do it from our design flows, my firm conclusion:

Yes! Gate-level simulation is still required, at subsystem level and full chip level.

Its usage has been minimized over the years – first by adding LEC (Logical Equivalence Checking) and STA (Static Timing Analysis) to the RTL-to-GDSII design flow in the 90s, and then, over the last decade, by employing static analysis of common failure modes that were traditionally caught during GLS – X-propagation, clock-domain-crossing errors, power management errors, and ATPG/BIST functionality – using tools like Questa® AutoCheck.
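For a flavor of the checks that have migrated from GLS into static and formal analysis, here is a hedged sketch of the kind of X-check and synchronizer check one would otherwise write by hand; signal and module names are hypothetical, and tools in this category derive equivalent checks automatically rather than requiring code like this:

  // Hand-written examples of checks that static/formal tools now automate.
  // Module, port and signal names are illustrative only.
  module sanity_checks (input logic clk, rst_n,
                        input logic ctrl_en, req_sync_ff1, req_sync_ff2);

    // X-check: once reset is released, a key control signal must never be unknown.
    assert property (@(posedge clk) disable iff (!rst_n) !$isunknown(ctrl_en))
      else $error("ctrl_en is X/Z after reset release");

    // CDC structural check: stage 2 of a two-flop synchronizer must simply be
    // stage 1 delayed by one destination-clock cycle (no combinational shortcut).
    assert property (@(posedge clk) disable iff (!rst_n)
                     req_sync_ff2 == $past(req_sync_ff1))
      else $error("synchronizer stage 2 does not follow stage 1 by one cycle");

  endmodule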

So there should not be any setup/hold or CDC issues remaining by this stage.  However, there are a number of reasons why I would always retain GLS:

  1. Financial prudence.  You tape out to foundry at your own risk, and GLS is the closest representation you can get to the printed design that you can do final due diligence on before you write that check.  Are you willing to risk millions by not doing GLS?
  2. It is the last resort to find any packaging issues that may be masked by use of inaccurate behavioral models higher up the flow, or erroneous STA due to bad false path or multi-cycle path definitions.  Also, simple packaging errors due to inverted enable signals can remain undetected by bad models.
  3. It ensures that the actual bring-up sequence of your first silicon will work when it hits the production tester after fabrication.  Teams have found bugs that would have caused the sequence of first power-up, scan test, and then blowing configuration and security fuses on the tester to completely brick the device, had they not run a final, accurate bring-up test with all Design-For-Verification modes turned off.
  4. In block-level verification, maybe you are doing a datapath compilation flow for your DSP core which flips pipeline stages around, so normal LEC tools are challenged.  How can you be sure?
  5. The final stages of processing can cause unexpected transformations of your design that may or may not be caught by LEC and STA, e.g. during scan chain insertion, clocktree insertion, or power island retention/translation cell insertion.  You should not have any new setup/hold problems if extraction and STA do their jobs, but what if there are gross errors affecting clock enables, or tool errors, or data processing errors?  First silicon with stuck clocks is no fun.  Again, why take the risk?  Just one simulation of the bare metal design, coming up from power-on, wiggling all pads at least once and exercising all test modes at least once, is all that is required (see the sketch after this list).
  6. When you have design deltas done at the physical netlist level: e.g. last minute ECOs (Engineering Change Orders), metal layer fixes, spare gate hookup, you can’t go back to an RTL representation to validate those.  Gates are all you have.
  7. You may need to simulate the production test vectors and burn-in test vectors for your first silicon, across process corners.  Your foundry may insist on this.
  8. Finally, you need to sleep at night while your chip is in the fab!
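To make point 5 above concrete, the “just one simulation” is nothing more elaborate than the following shape of test, run on the final gate-level netlist. This is a minimal sketch only; all module, pad and test-mode names, pad counts and cycle counts are placeholders:

  // Skeleton of the single bare-metal bring-up simulation from point 5.
  // The DUT is the final gate-level netlist; all names are placeholders.
  module tb_bringup;
    logic clk = 0, por_n;
    logic [63:0] pads;     // stand-in for the real (bidirectional) pad list, driven as inputs here for brevity
    logic scan_en, test_mode;

    chip_top dut (.clk(clk), .por_n(por_n), .pads(pads),
                  .scan_en(scan_en), .test_mode(test_mode));

    always #5 clk = ~clk;

    initial begin
      // 1. Power-on reset with all Design-For-Verification shortcuts off.
      scan_en = 0; test_mode = 0; pads = '0; por_n = 0;
      repeat (20) @(posedge clk);
      por_n = 1;

      // 2. Wiggle every pad at least once.
      pads = '1; repeat (4) @(posedge clk);
      pads = '0; repeat (4) @(posedge clk);

      // 3. Exercise every test mode at least once (scan, BIST, fuse access, ...).
      test_mode = 1; scan_en = 1;
      repeat (100) @(posedge clk);

      $display("bring-up sequence completed");
      $finish;
    end
  endmodule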

There are still misconceptions:

  • There is no need to repeat lots of RTL regression tests at gate level.  Don’t do that.  It takes an age to run those tests, so identify the tiny percentage of your regression suite that needs to be rerun on GLS, to make it count.
  • Don’t wait until tapeout week before doing GLS – prepare for it very early in your flow by doing the three preparation steps mentioned above as soon as practical, so that all X-pessimism issues are sorted out well before crunch time (a small illustration of the RTL-versus-gates X behavior follows this list).
  • The biggest misconception of all: “designs today are too big to simulate.”  Avoid that kind of scaremongering.  Buy a faster computer with more memory.  Spend the right amount of money to offset the risk you are about to undertake when you print a 20nm mask set.
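The X issues referred to above come in two flavors: RTL X-optimism, where RTL simulation quietly resolves an unknown that the gates (and the silicon) will not, and gate-level X-pessimism, where the gate simulation reports an X that real hardware would resolve. A tiny illustration of the first, and more dangerous, flavor, using hypothetical signals:

  // RTL X-optimism in a nutshell: 'sel' is X because its flop was never reset.
  module mux_rtl (input logic sel, a, b, output logic y);
    always_comb begin
      if (sel) y = a;   // sel === 1'bx is treated as false in RTL simulation...
      else     y = b;   // ...so y "cleanly" takes b, and the unknown is hidden
    end
  endmodule

  // The synthesized gates compute y = (sel & a) | (~sel & b); with sel = X and
  // a != b the gate-level result is X, which is why the X only shows up in GLS.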

Yes, it is possible to tape out silicon that works without GLS.  But no, you should not consider taking that risk.  And no, there is no justification for viewing GLS as “old school” and just hoping it will go away.

Now, the above is just one opinion, and reflects recent design/verification work I have done with major semiconductor companies.  I anticipate that large designs will become harder and harder to simulate and that we may need to find solutions for gate-level signoff using an emulator.  I have also found some interesting recent papers, resources, and opinions – I don’t necessarily agree with all the content, but it makes for interesting reading.

I’d be interested to know what your company does differently nowadays.  Do you sleep at night?

If you are attending DVCon next week, check out some of Mentor’s many presentations and events as described by Harry Foster, and please come and find me in the Mentor Graphics booth (801) – I would be happy to hear about your challenges in Design/Verification/UVM and especially Debug.
Thanks for reading,
Gordon


23 February, 2015

It’s my favorite time of year again—DVCon!  And I believe that the DVCon 2015 technical program committee has put together one of the strongest technical programs in years. In this blog I plan on highlighting a few DVCon events that you might want to put on your calendar.


First, at this year’s conference the Verification Academy has a dedicated booth (#301), and I hope you stop by to say hello to me, my friend Tom Fitzpatrick, and an amazing lineup of other Verification Academy subject matter experts.

Next, on Wednesday morning, March 4, I have the honor of participating on a verification panel titled “Art or Science?” Here, my fellow panelists and I will debate the issue that verification today is considered by some to be more of an art than a science—and one which is perceived as difficult to master. To learn my position on this topic, you’ll have to stop by!

Also on Wednesday, at the Mentor-sponsored lunch, my colleague Steve Bailey and I have put together an informative and entertaining talk we’ve titled “From Tightly Coupled (Loosely Bolted) to Verification Convergence.” Here, we discuss the state of verification past, present, and future while examining the results from our recent world-wide industry study, which I started blogging about a few weeks ago (click here for more details). Our talk will examine how advanced techniques are taking hold in mainstream design and provide insights on the recent convergence of verification solutions to meet today’s growing challenges.

Finally, there are two tutorials I’d like to encourage you to attend while at DVCon this year:

  1. Advanced, High-Throughput Debug from Architectural Modeling Through Post-Silicon SoC Validation (click here for more details)
  2. Dead or Alive: Using Automated Formal Techniques to Characterize Dead Code, Reveal Paths to Hit Uncovered States, and Reach Coverage Closure Faster (click here for more details)

I look forward to meeting you at DVCon 2015!


19 February, 2015

It’s amazing how quickly a year goes by. DVCon 2014 seems like it was just a few months ago, and here we are rolling up on DVCon 2015. As my colleague Dennis Brophy blogged earlier, it was just a year ago that Mentor Graphics proposed that Accellera launch a Proposed Working Group (PWG) to explore whether sufficient need and interest existed in the industry to standardize a portable stimulus specification. After nearly a year of requirements gathering and discussion among the participants of the Portable Stimulus PWG, the PWG announced that it had concluded there was sufficient interest and need to justify an official standards project.

When portable stimulus is discussed, it is often in the context of SoC-level verification – both because of how critical SoC- and system-level verification is today, and because of the stresses an SoC-level environment places on a portable stimulus solution. SoC-level verification must be done in simulation, emulation, and silicon, and is typically driven via the embedded processors in the design – in other words, using the design to verify itself. In an SoC-level environment, portable stimulus introduces a degree of stimulus-generation automation not possible using directed tests, while enabling the generated stimulus to be portable across engines and tailored to the performance characteristics of those engines.
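To give a flavor of the idea, the scenario is captured once as a declarative, constrained model and then rendered for whichever engine is in use. The eventual input format is exactly what the working group will define, so the sketch below is purely illustrative – a hypothetical SystemVerilog rendering of an abstract “DMA transfer” scenario, not inFact syntax and not the Accellera format:

  // Hypothetical constrained scenario model, invented for illustration only.
  class dma_xfer_scenario;
    rand int unsigned src_addr, dst_addr, num_bytes;
    rand bit          use_irq;                     // completion by interrupt or polling

    constraint legal_c {
      num_bytes inside {[4:4096]};
      src_addr % 4 == 0;
      dst_addr % 4 == 0;
      src_addr + num_bytes <= 32'h1000_0000;       // placeholder memory map
      dst_addr + num_bytes <= 32'h1000_0000;
    }
  endclass

  // In simulation this might drive a UVM sequence; on an emulator or in silicon
  // a portable stimulus tool would generate the equivalent embedded C test.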

While SoC-level verification may illustrate an extreme example of the requirements for portable stimulus, there are many other cases where portable stimulus is extremely valuable. One of these cases is illustrated by a paper co-authored by Boris Hristov from Ciena and my colleague, Mike Andrews. Portable Stimulus Models for C/SystemC, UVM and Emulation discusses how portable stimulus can be applied to verify a C/SystemC design that will be targeted at High-Level Synthesis (HLS), and then how the same stimulus model can be reused to verify the RTL output of the HLS tool in a UVM environment, in both simulation and emulation. While a very different target than SoC-level verification, the benefits of applying portable stimulus are much the same – consistent stimulus across languages and engines, and a high degree of automation.

So, if you’re attending DVCon be sure to check out the presentation for Portable Stimulus Models for C/SystemC, UVM and Emulation. It’s on Tuesday afternoon in the Multi-Language session: http://dvcon.org/content/event-details?id=180-7

Come find me in the Mentor Graphics booth (booth 801), and I’ll be happy to discuss verification in general and Portable Stimulus, specifically, in more depth with you.


11 February, 2015

Accellera Approves Creation of Portable Stimulus Working Group

At DVCon 2014, Mentor Graphics proposed that Accellera launch an exploratory exercise, called a Proposed Working Group (PWG), to determine if there was sufficient interest and need to create a standard in this area.  To help motivate the consideration of this activity, we indicated we would offer our graph-based test specification embodied in our inFact verification tool.

Rapid adoption of our technology has been the trend, especially when used in conjunction with a SystemVerilog UVM testbench environment.  One of the major benefits of UVM has been the portable nature of the testbench, which facilitates design verification within and across companies.  The proprietary nature of our graph-based test specification language limits its easy use across the industry, leading users to suggest we look to standardize it, in keeping with the fundamental UVM principle of testbench portability.

After about a year of discussion in Accellera, the group announced it had concluded there should be an official standards project in this area.  Industry participants have likewise offered quotes of support for the formation of the Accellera Portable Stimulus Working Group.

The challenges to efficient and effective verification continue to grow.  If we stop where we are today in verification algorithm advances and standards, the need for more people, time, and compute resources will continue to grow unabated at exponential rates.

For Mentor Graphics’ part, the verification team here has gone to market with innovative technology that has shown a remarkable ability to improve verification productivity and efficiency.  The specification we offer to Accellera to seed this project is the same one embodied in the technology we used when we partnered with TSMC to validate advanced functional verification technology, which we announced in 2011.  In that announcement, we shared that tests conducted by AppliedMicro on designs destined for TSMC shortened “time-to-coverage by over 100x.”

One need not wonder if it is possible to shrink a month’s worth of verification tests into less than an 8 hour work day.  It is.  To find out how our specific use of this technology works and what motivates us to support standardization of Portable Stimulus in Accellera, I invite you to visit the Verification Academy where a session on Intelligent Testbench Automation shows what can be done.

And for those who would like to help in the development of the standard and may have technology to further underpin it, you should consider attending the first organizational meeting of the Portable Stimulus Working Group at DVCon 2015 March 5th from 6pm-9pm.  Contact Accellera for member-only meeting details or catch me at DVCon 2015 and I can share more information with you.


8 February, 2015

FPGA Design Trends

In my previous blog, I introduced the 2014 Wilson Research Group Functional Verification Study (click here). The objective of my previous blog was to provide an overview on our large, worldwide industry study. The key findings from this study will be presented in a set of upcoming blogs. In this blog, I present trends related to various aspects of FPGA design to illustrate growing design complexity.

Let’s begin by examining embedded processor trends targeted at a general FPGA implementation. Our 2014 study found that 56% of all FPGA designs contained one or more embedded processors, as shown in Figure 1. Although we did not see an overall growth in the number of FPGAs containing one or more embedded processors between 2012 and 2014, we did see an increase in the number of FPGA projects creating designs containing more than one embedded processor.


Figure 1. Number of embedded processors in FPGA trends

SoC-class designs (i.e., designs containing embedded processors) add a layer of complexity to the verification process that did not exist with traditional non-SoC-class designs, due to hardware/software interactions, new coherency architectures, and the emergence of complex network-on-chip interconnects.

In addition to embedded processors targeted at the general FPGA class of designs, there has been a recent emergence of specific programmable SoC FPGA implementations, such as Xilinx’s Zynq, Altera’s Arria/Cyclone SoCs, and Microsemi’s SmartFusion. Figure 2 shows the adoption trends for these programmable SoC FPGAs, which you can see grew by over 93 percent between 2012 and 2014. Keep in mind that this trend data does not represent volume production—it represents the number of FPGA projects that are creating designs targeted at a programmable SoC class of FPGA.


Figure 2. Type of FPGA implementation trends

As the industry moves to SoC class designs, regardless of targeted FPGA implementation, FPGA projects are starting to increase their adoption of industry standard on-chip bus protocols—versus proprietary bus protocols. Figure 3 shows the current adoption of AMBA and other on-chip bus protocols for FPGA designs as identified by our new study. Note, the reason we are not showing trends here is that the 2012 study did not separate out the various AMBA protocols, which is something we decided to do for our 2014 study. Hence, we cannot do an apples-to-apples comparison between 2012 and 2014 for FPGA on-chip bus protocol adoption.


Figure 3. FPGA on-chip bus protocol adoption

Another aspect of SoC-class design is the emergence of IP-based design practices, which are fundamental for improving design productivity. Figure 4 shows FPGA design composition trends—and we see that there has been a decline in new logic created by FPGA project teams. At the same time, we see an increase in the adoption of both internally developed and externally acquired IP.


Figure 4. FPGA design composition trends

In my next blog (click here), I’ll focus on verification effort trends related to FPGA designs.

Quick links to the 2014 Wilson Research Group Study results (so far…)


21 January, 2015

This blog is a continuation of a series of blogs that present the highlights from the 2014 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In this blog I discuss the issue of study bias, and what we did to address these concerns.

MINIMIZING STUDY BIAS

When architecting a study, three main concerns must be addressed to ensure valid results: sample validity bias, non-response bias, and stakeholder bias. Each of these concerns is discussed in the following sections, as well as the steps we took to minimize these bias concerns.

Sample Validity Bias

To ensure that a study is unbiased, it’s critical that every member of a studied population have an equal chance of participating. An example of a biased study would be when a technical conference surveys its participants. The data might raise some interesting questions, but unfortunately, it does not represent members of the population who were unable to participate in the conference. The same bias can occur if a journal or online publication limits its surveys to only its subscribers.

A classic example of sample validity bias is the famous Literary Digest poll in the 1936 United States presidential election, where the magazine surveyed over two million people. This was a huge study for this period in time. The sampling frame of the study was chosen from the magazine’s subscriber list, phone books, and car registrations. However, the problem with this approach was that the study did not represent the actual voter population, since it was a luxury to have a magazine subscription, a phone, or a car during The Great Depression. As a result of this biased sample, the poll inaccurately predicted that Republican Alf Landon would defeat Democrat Franklin Roosevelt in the 1936 presidential election.

For our study, we carefully chose a broad set of independent lists that, when combined, represented all regions of the world and all electronic design market segments. We reviewed the participant results in terms of market segments to ensure no segment or region representation was inadvertently excluded or under-represented.

Non-Response Bias

Non-response bias in a study occurs when a randomly sampled individual cannot be contacted or refuses to participate in a survey. For example, spam and unsolicited mail filters remove an individual from the possibility of receiving an invitation to participate in a study, which can bias results. It is important to validate that sufficient responses occurred across all lists that make up the sample frame. Hence, we reviewed the final results to ensure that no single list of respondents in the sample frame dominated them.

Another potential non-response bias is due to lack of language translation, which we learned during our 2012 study. The 2012 study generally had good representation from all regions of the world, with the exception of an initially very poor level of participation from Japan. To solve this problem, we took two actions:

  1. We translated both the invitation and the survey into Japanese.
  2. We acquired additional engineering lists directly from Japan to augment our existing survey invitation list.

This resulted in a balanced representation from Japan. Based on that experience, we took the same approach to solve the language problem for the 2014 study.

Stakeholder Bias

Stakeholder bias occurs when someone who has a vested interest in survey results can complete an online study survey multiple times and urge others to complete the survey in order to influence the results. To address this problem, a special code was generated for each study participation invitation that was sent out. The code could only be used once to fill out the survey questions, preventing someone from taking the study multiple times or sharing the invitation with someone else.

2010 Study Bias

While architecting the 2012 study, we did discover a non-response bias associated with the 2010 study. Although multiple lists across multiple market segments and across multiple regions of the world were used during the 2010 study, we discovered that a single list dominated the responses, which consisted of participants who worked on more advanced projects and whose functional verification processes tend to be mature. Hence, for this series of blogs we have decided not to publish any of the 2010 results as part of verification technology adoption trend analysis.

The 2007, 2012, and 2014 studies were well balanced and did not exhibit the non-response bias previously described for the 2010 data. Hence, we have confidence in talking about the general industry trends presented in this series of blogs.

Quick links to the 2014 Wilson Research Group Study results (so far…)


21 January, 2015

This is the first in a series of blogs that presents the findings from our new 2014 Wilson Research Group Functional Verification Study. However, unlike my previous Wilson Research Group functional verification study blogs, which focused on the ASIC/IC market, I plan to begin this set of blogs with an exclusive focus on FPGA trends. Why? For the following reasons:

  1. Unlike the traditional ASIC/IC market, there have historically been very few studies published on FPGA functional verification trends. We started studying the FPGA market segment back in the 2010 study, and we have now collected sufficient data to confidently present industry trends related to this market segment.
  2. Today’s FPGA designs have grown in complexity—and many now resemble complete systems. The task of verifying SoC-class designs is daunting, which has forced many FPGA projects to mature their verification process due to rising complexity. The FPGA-focused data I present in this set of blogs will support this claim.

My plan is to release the ASIC/IC functional verification trends through a set of blogs after I finish presenting the FPGA trends.

Introduction

In 2002 and 2004, Collett International Research, Inc. conducted its well-known ASIC/IC functional verification studies, which provided invaluable insight into the state of the electronic industry and its trends in design and verification at that point in time. However, after the 2004 study, no additional Collett studies were conducted, which left a void in identifying industry trends. To address this dearth of knowledge, four studies were commissioned by Mentor Graphics in 2007, 2010, 2012, and 2014, which focused on functional verification. These were world-wide, double-blind, functional verification studies, covering all electronic industry market segments. To our knowledge, the 2014 study was the largest functional verification study ever conducted. This set of blogs presents the findings from our 2014 study and provides invaluable insight into the state of the electronic industry today in terms of both design and verification trends.

Study Background

Our study was modeled after the original 2002 and 2004 Collett International Research, Inc. studies. In other words, we endeavored to preserve the original wording of the Collett questions whenever possible to facilitate trend analysis. To ensure anonymity, we commissioned Wilson Research Group to execute our study. The purpose of preserving anonymity was to prevent biasing the participants’ responses. Furthermore, to ensure that our study would be executed as a double-blind study, the compilation and analysis of the results did not take into account the identity of the participants.

For the purpose of our study we used a multiple sampling frame approach that was constructed from eight independent lists that we acquired. This enabled us to cover all regions of the world—as well as cover all relevant electronic industry market segments. It is important to note that we decided not to include our own account team’s customer list in the sampling frame. This was done in a deliberate attempt to prevent biasing the final results. My next blog in this series will discuss other potential bias concerns when conducting a large industry study and describe what we did to address these concerns.

After data cleaning the results to remove inconsistent or random responses (e.g., someone who only answered “a” on all questions), the final sample size consisted of 1886 eligible participants (i.e., n=1886). To put this figure in perspective, the 2004 Collett study sample size consisted of 201 eligible participants.

Unlike the 2002 and 2004 Collett IC/ASIC functional verification studies, which focused only on the ASIC/IC market segment, our studies were expanded in 2010 to include the FPGA market segment. We have partitioned the analysis of these two different market segments separately, to provide a clear focus on each. One other difference between our studies and the Collett studies is that our study covered all regions of the world, while the original Collett studies were conducted only in North America (US and Canada). We have the ability to compile the results both globally and regionally, but for the purpose of this set of blogs I am presenting only the globally compiled results.

Confidence Interval

All surveys are subject to sampling errors. To quantify this error in probabilistic terms, we calculate a confidence interval. For example, we determined the overall margin of error for our study to be ±2.19% at a 95% confidence interval. In other words, if we were to take repeated samples of size n=1886 from the population, 95% of those samples would yield results within ±2.19% of the true population values, and only 5% would fall outside that range.
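For reference, the large-sample calculation behind a margin of error like this is the standard formula below; the exact ±2.19% depends on the response proportions assumed (with the most conservative assumption p = 0.5 and n = 1886, the formula gives roughly ±2.3%):

  \text{MoE} = z \sqrt{\frac{p(1-p)}{n}}, \qquad z = 1.96 \text{ at the } 95\% \text{ confidence level}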

Study Participants

This section provides background on the makeup of the study.

Figure 1 shows the percentage of overall study participants by market segment.


Figure 1: Study participants by market segment

Figure 2 shows the percentage of overall study eligible participants by their job description. An example of an eligible participant would be a self-identified design or verification engineer, or an engineering manager, who is actively working within the electronics industry. Overall, design and verification engineers accounted for 60 percent of the study participants.


Figure 2: Study participants by job title

Before I start presenting the findings from our 2014 functional verification study, I plan to discuss in my next blog (click here) general bias concerns associated with all survey-based studies—and what we did to minimize these concerns.

Quick links to the 2014 Wilson Research Group Study results (so far…)


14 January, 2015

“Who Knew?” about verification IP (VIP) was the theme of a recent DeepChip post by John Cooley on December 18.  More specifically, the article states, “Who knew VIP was big and that Wally had a good piece of it?”  We knew.

We knew that ASIC and FPGA design engineers can choose to buy design IP from several alternative sources or build their own, but that does not help with the problem of verification.  We knew that you don’t really want to rely on the same source that designed your IP to test it.  We knew that you don’t want to write and maintain bus functional models (BFMs) or more complete VIP for standard protocols.  Not that you couldn’t, but why would you if you don’t have to?

We also knew that verification teams want easy-to-use VIP that is built on a standard foundation of SystemVerilog, compliant with a protocol’s specification, and is easily configurable to your implementation.  That way it integrates into your verification environment just as easily as if you had built it yourself.
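In UVM terms, “easily configurable” usually boils down to something like the following sketch: the VIP agent is created and parameterized through the standard factory and configuration database, with no edits to the VIP source. The class and field names below are invented for illustration and are not the actual Questa VIP API:

  // Generic illustration of dropping a pre-built VIP agent into a UVM env.
  // Assumes: import uvm_pkg::*; `include "uvm_macros.svh"
  // "vip_pcie_agent" and its configuration fields are hypothetical names.
  class my_env extends uvm_env;
    `uvm_component_utils(my_env)

    vip_pcie_agent        pcie_agent;   // the purchased VIP component
    vip_pcie_agent_config pcie_cfg;     // its configuration object

    function new(string name, uvm_component parent);
      super.new(name, parent);
    endfunction

    function void build_phase(uvm_phase phase);
      super.build_phase(phase);
      pcie_cfg = vip_pcie_agent_config::type_id::create("pcie_cfg");
      pcie_cfg.lane_width = 4;          // tailor the VIP to this implementation
      pcie_cfg.is_active  = UVM_ACTIVE;
      uvm_config_db#(vip_pcie_agent_config)::set(this, "pcie_agent", "cfg", pcie_cfg);
      pcie_agent = vip_pcie_agent::type_id::create("pcie_agent", this);
    endfunction
  endclass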

Leading design IP providers such as ARM®, PLDA, and Northwest Logic knew that Mentor Graphics’ VIP is built on standards, is protocol compliant, and is easy to use.  In fact, you can read more about what Jim Wallace, systems and software group director at ARM; Stephane Hauradou, CTO of PLDA; and Brian Daellenbach, president of Northwest Logic, have to say about Mentor Graphics’ recently introduced EZ-VIP technology for PCIe 4.0 (at this website http://www.mentor.com/company/news/mentor-verification-ip-pcie-4 ), and why they know that their customers can rely on it as well.

Verification engineers knew, too.  You can read comments from many of them (at Cooley’s website http://www.deepchip.com/items/dac14-06.html ) about their opinions on VIP.  In addition, Mercury Systems also knew.  “Mentor Graphics PCIe VIP is fully compliant with the PCIe protocol specification and with UVM coding guidelines. We found that we could drop it into our existing environment and get it up and running very quickly,” said Nick Solimini, Consulting DV Engineer at Mercury Systems. “Mentor’s support for their VIP is excellent. All our technical questions were answered promptly, so we were able to be productive throughout the project.”

So, now you know: Mentor Graphics’ Questa VIP is built on standard SystemVerilog UVM, is specification compliant, is easy to get up and running, and is an integral part of many successful verification environments today.  If you’d like to learn more about Questa VIP and Mentor Graphics’ EZ-VIP technology, send me an email, and I’ll let you in on what (thanks to Cooley and our customers) is no longer the best kept secret in verification.  Who knew?


6 January, 2015

2014 was an exciting year for formal verification to say the least, and below I call out a sampling of noteworthy conference papers that contributed to this energy. Specifically, they show how formal property checking itself is simultaneously scaling to tackle ever larger verification problems, and also being extended to cover domains that were once exclusively the province of simulation.

———-

Conference: DVCon 2014

Title: Sign-off With Bounded Formal Verification Proofs

Highlights: This paper totally busts the myth that a formal proof must be fully solved in order to be valuable for verification.  Instead, the authors show how “bounded proofs” – i.e. formal proofs that haven’t reached a complete solution yet – can give engineers enough information to confidently make sign-off decisions.  Specifically, they show that as long as a given formal analysis isn’t failing as it proceeds deeper into the state space, and as long as it touches the cover points you wanted to hit, “no news is good news” and the design is likely sound (a small sketch of the idea follows this entry).

Authors: NamDo Kim and Junhyuk Park of Samsung Electronics Co., Ltd.; HarGovind Singh and Vigyan Singhal of Oski Technology, Inc.

Proceedings: http://events.dvcon.org/events/proceedings.aspx?id=163-2
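As a hedged sketch of what such a sign-off target looks like in practice (module and signal names invented for illustration), the assertion below may only come back from the formal tool as “proven to a bounded depth”, yet if the accompanying cover properties are reached within that depth, the bounded result still carries real sign-off weight:

  // Illustrative bounded-proof target. The assertion may be reported as
  // "proven to a bounded depth" rather than fully proven; the covers confirm
  // the analysis reached interesting states within that depth.
  module arb_checks (input logic clk, rst_n, req, grant, fifo_full);
    property no_grant_without_request;
      @(posedge clk) disable iff (!rst_n) grant |-> $past(req);
    endproperty
    assert property (no_grant_without_request);

    // Covers that demonstrate meaningful traffic is reachable by the analysis.
    cover property (@(posedge clk) req ##[1:4] grant);
    cover property (@(posedge clk) req && fifo_full);
  endmodule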

———-

Conference: Mentor Graphics’ User2User Conference, San Jose, CA, April 2014

Title: Targeted Approach to Formal Model Checking:  A Case Study

Highlights: This paper documents a real-world SoC verification case study in which formal was focused on three major verification flows: bug hunting, “assurance” (a/k/a proofs of key functions), and exhaustive static & temporal connectivity checking.  The results were impressive: in 50% of the time vs. the simulation approaches they supplanted, these formal applications found 4 RTL bugs missed by other methods, and 6 more RTL bugs in cache verification – 1 of which would have been a chip killer had it escaped.

Author: Ram Narayan, Oracle Labs

Proceedings: http://supportnet.mentor.com/member/u2u/2014-san-jose.cfm

———-

Conference: Mentor Graphics’ User2User Conference, Bangalore India, October 2014

Title: Gear up! Speed up! – Dead Code Detection Through Reachability Analysis (D2CTRA) for Coverage Convergence

Highlights: For D&V engineers, few things in verification are more frustrating than watching the code coverage score plateau well short of your coverage spec. No matter how many clever directed tests you write, no matter how many different random seeds you try, the coverage score simply flat-lines. In parallel, specifying waivers to deliberately exclude unused IP configurations from the analysis can be a tedious and error-prone manual process.  In this presentation the authors show how they used Questa CoverCheck to solve this problem on a very control-intensive design.  In a nutshell, they were able to rapidly identify dead code areas, easily set coverage waivers on properly dead areas to keep them from inaccurately pulling down the coverage score, and investigate and fix the trouble spots (a trivial example of such dead code follows this entry).  (Note: this presentation won the “Best Paper” award for the conference’s functional verification track!)

Authors: Pandithurai Sangaiyah, Anand Kumar of Qualcomm India Pvt Ltd

Proceedings: https://verificationacademy.com/resource/42639
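For a trivial illustration of the kind of “properly dead” code such reachability analysis identifies and waives, here is a hypothetical snippet (names and tie-off assumption invented for illustration):

  // Hypothetical example of properly dead code. If this IP is always integrated
  // with 'mode' tied off to 2'b00, the other branches are unreachable: no random
  // seed will ever cover them. Reachability analysis proves them dead so they
  // can be waived rather than dragging down the code-coverage score.
  module mode_decode (input  logic [1:0] mode,
                      input  logic [7:0] data_in,
                      output logic [7:0] data_out);
    always_comb begin
      case (mode)
        2'b00:   data_out = data_in;
        2'b01:   data_out = ~data_in;   // dead when mode is tied to 2'b00
        default: data_out = '0;         // dead when mode is tied to 2'b00
      endcase
    end
  endmodule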

———-

Again, the above is just a sampling of the great work and innovations being made in the formal and assertion-based verification domain in 2014.  If the anecdotes and success stories that I’m regularly receiving in my InBox (let alone the upcoming conference papers and posters that I’m aware of) are any indication, 2015 promises to be an even bigger year for formal-based solutions!

Joe Hupcey III

P.S. Bonus: I’ve already written about the ARM Techcon 2014 paper on Advanced Verification Management and Coverage Closure Techniques by Nguyen Le, Microsoft, et al.; but it bears re-referencing my prior Verification Horizons post on this paper: ARM® Techcon Paper Report: How Microsoft Saved 4 Man-Months Meeting Their Coverage Closure Goals Using Automated Verification Management & Formal Apps

12 December, 2014

Just in time for Christmas and other year-end holidays, I am pleased to announce that the latest issue of Verification Horizons is now available on-line. As usual, we have a great lineup of articles that I’m sure you’ll find informative.

By the way, if you’d like to subscribe to our quarterly Verification Horizons newsletter, you may do so here.

Whichever holiday(s) you celebrate at this special time of year, I hope you experience the peace, joy and hope that my family and I will share this Christmas.
