Verification Horizons BLOG

This blog provides an online forum for weekly updates on concepts, values, standards, methodologies, and examples that help explain what advanced functional verification technologies can do and how to apply them most effectively. We look forward to your comments and suggestions on the posts to make this a useful tool.

11 May, 2015

Because Clock Domain Crossing (CDC) verification has been around for well over a decade, it's tempting to think that CDC has attained the status of "solved problem". However, with today's SoCs employing over 50 independent clocks, the reality is that CDC verification is only getting more challenging. This is why Mentor R&D seeks to stay ahead of the curve by attending cutting-edge academic events that most of us have never heard of. Case in point: the recent 21st IEEE International Symposium on Asynchronous Circuits and Systems, "ASYNC" for short.
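For anyone who hasn't lived this pain firsthand, the textbook mitigation for a single-bit crossing is the humble two-flop synchronizer. Below is a purely illustrative SystemVerilog sketch of my own, not tied to any particular Questa CDC check; the hard part, of course, is proving that every crossing in a 50-plus-clock SoC is actually protected, and that multi-bit crossings use proper handshakes or async FIFOs.

    // Purely illustrative two-flop synchronizer for a single-bit CDC.
    // A sketch only: real designs also need synthesis constraints, and
    // multi-bit buses require handshakes or asynchronous FIFOs.
    module sync_2ff (
      input  logic clk_dst,   // destination-domain clock
      input  logic rst_n,     // destination-domain asynchronous reset
      input  logic d_async,   // signal arriving from another clock domain
      output logic d_sync     // version safe to use in the clk_dst domain
    );
      logic meta;             // first stage: may go metastable

      always_ff @(posedge clk_dst or negedge rst_n) begin
        if (!rst_n) begin
          meta   <= 1'b0;
          d_sync <= 1'b0;
        end else begin
          meta   <= d_async;  // capture the asynchronous input
          d_sync <= meta;     // second flop gives metastability time to settle
        end
      end
    endmodule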

At this year's ASYNC, Mentor CDC R&D lead Chris Kwok reached out to the academic community with a presentation on "Hunting Asynchronous CDC Violations in the Wild". In a nutshell, Chris updated the researchers on the state of CDC in the commercial EDA world. After providing this market snapshot, Chris went on to describe how the audience's innovations will be most welcome as the number of clock domains, and the interactions and dependencies between domains such as clock and reset signaling, continue to increase.


It was readily apparent that Chris' remarks were well received, given the detailed questions he fielded immediately after his presentation and the stream of visitors to the Mentor Graphics demo table.


Indeed, as my colleague noted, "The attendees I talked to had some really intriguing ideas around the core analysis technologies that will be needed for CDC verification at advanced nodes. It's sparked some new thinking about some particularly thorny issues my team has been working on."

In summary, the event was an interesting reminder of all the science that goes into the core of each Questa CDC release, enabling Mentor to stay ahead of our customers’ toughest technical challenges.

Until next time, may your clock domains be synchronized, and reset signaling be properly buffered,

Joe Hupcey III

Reference link: the ASYNC conference website http://ee.usc.edu/async2015/

P.S. Do you dream about improving your flip-flop's tau figure? Do you calculate MTBFs in your spare time? Are you frustrated that your colleagues can't appreciate that synchronizers and data flip-flops ARE different? If you answered "yes" to any one of these questions, the Questa CDC team would like to invite you to apply for an opening in R&D in Mentor's Fremont, CA office:
http://chk.tbe.taleo.net/chk04/ats/careers/requisition.jsp?org=MENTOR&cws=1&rid=3462


7 May, 2015

For all things verification, you will want to stop by the Verification Academy booth #2408 at DAC to interact with experts exploring the challenges of IC design and verification.  At the top of each hour, the Verification Academy will feature a presentation followed by a lively conversation.  Presentations will not be repeated so each hour will be unique.

We have themed each of the days as well:

  • Monday is "Debug Day"
  • Tuesday is "Standards & FPGA Day"
  • Wednesday is "Formal Verification Day"

Naturally, you will find a few exceptions to those rules when you look at the program in detail.  Please register for Verification Academy sessions here: Monday Registration | Tuesday Registration | Wednesday Registration.  [NOTE: the Verification Academy sessions are highlighted with a blue background when you visit the registration site.]  A concise listing of all the Verification Academy sessions can be found here.

We will feature an end-of-the-day reception on Monday at the Verification Academy booth after the last presentation.  Neil Johnson (XtremeEDA) and Mentor's Harry Foster will explore Agile Evolution in SoC Verification in that last session, which begins at 5pm.  Neil is a proponent of this methodology as a means to help build in design quality and simplify the task of verification.  He is not only an advocate of the approach but also a practitioner: he is an open-source hardware developer and a moderator at www.AgileSoC.com.  We expect the conversation that follows this informative session to be a lively one, and we invite everyone to continue it over cocktails and hors d'oeuvres at 5:30pm.

We are sponsoring other events outside of the Verification Academy as well.  Tuesday is truly “Standards Day” at DAC.  In addition to the standards theme at the Verification Academy booth, you can kick off the day at the Accellera Breakfast and later in the day attend the IEEE DASC, Accellera and Si2 System Level Low Power Workshop.  Here is a partial list of Standards Day activities:

Registration

If you have not yet registered for DAC, do so now.  Even if you do not plan to register for the full technical conference, many conference events are free if you select the "I LOVE DAC" registration option before May 19th!  In fact, all the "Standards Day" events listed above are free with early I Love DAC registration. Simply click here and you will be taken to the "I Love DAC" registration page.  Register before May 19th; after that date a $95 minimum fee applies.

See you at DAC!


21 April, 2015

FPGA Verification Effectiveness Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I focused on the amount of effort spent in FPGA verification. We have seen in previous blogs that a significant amount of effort is being applied to FPGA functional verification. In this blog I focus on the effectiveness of verification in terms of FPGA project schedule and bug escapes.

FPGA Schedules

Figure 1 presents the design completion time compared to the project's original schedule. A surprise in the 2014 findings is that we saw an improvement in the number of FPGA projects meeting schedule compared to 2012. It is unclear why we are seeing this trend now.  Perhaps managers are getting better at scheduling, or are becoming more pessimistic with their schedules.  Or perhaps it is due to the increased amount of reuse (both design and verification IP). Or is the increased amount of FPGA verification effort prior to "getting to the lab" starting to pay off for some projects? This data point raises some interesting questions worth exploring further. Regardless, a significant number of FPGA projects still miss their originally planned schedule.


Figure 1. FPGA design completion time compared to the project’s original schedule

FPGA Lab Iterations

ASIC/IC projects track the number of required spins that occur prior to market production.  In fact, this can be a useful metric for determining the overall verification effectiveness of an ASIC/IC project.  Unfortunately, we lack such a metric for FPGA projects.  For the 2014 study, we decided to ask about the average number of lab iterations required before the design went into production, again to try to get a sense of the project's verification effectiveness.  The results are shown in Figure 2. However, I'm not convinced that FPGA lab iterations are analogous to ASIC/IC respins as a verification effectiveness metric.  Perhaps a better metric would be the number of bugs that escape into production and are found in the field; this is something we should consider for future studies.


Figure 2. Number of FPGA iterations in the lab (no trend data available)

FPGA Bug Classification

For the 2014 study, we asked the FPGA project participants to identify the types of flaws that were contributing to rework in the lab. In Figure 3, I show the two leading causes of rework, which are logic and functional bugs, as well as clocking bugs. The data seem to suggest that these issues are growing, perhaps due to the design of larger and more complex FPGAs. Again, this is a data point worth exploring further.


Figure 3. Types of Flaws Resulting in FPGA Rework

In Figure 4, I show trends in terms of the main contributing factors leading to logic and functional flaws, and you can see that design errors are the main cause of functional flaws.  But note that a significant number of flaws are related to some aspect of the specification, such as changes in the specification, or incorrect or incomplete specifications. Problems associated with the specification process are a common theme I often hear about when visiting FPGA customers.


Figure 4. Root cause of FPGA functional flaws

In my next blog (click here), I plan to present the findings from our study for FPGA verification technology adoption trends.

Quick links to the 2014 Wilson Research Group Study results


16 April, 2015


I was fortunate to be able to attend DVCon this year. One of my favorite aspects of the DVCon show is the paper and poster sessions.  DVCon is a very hands-on show, with the focus being practical applications of new verification techniques. It's great to listen during the paper sessions as industry experts present new techniques and approaches they have spent countless hours developing, and to interact in a more informal manner with the poster presenters. Once DVCon is over, the content of these papers retains its value, and I frequently find myself revisiting papers from previous DVCon conferences.

I was happy to be at DVCon this year presenting a poster paper on software-driven hardware verification. Software-driven verification of hardware has been around for a very long time, of course. Going back to the era when systems were composed of discrete packages wired together on a board, running some amount of software on the processor has been a great way to verify that the components of the system have been correctly integrated. Today, as the interactions between software running on multiple processors and hardware IP become more and more complex, software-driven hardware verification continues to be relevant.

There are many challenges in software-driven hardware verification. Some challenges, such as automating creation of stimulus, are addressed by existing tools and are within the scope of the Accellera Portable Stimulus Working Group. Other challenges are more foundational, such as how test functionality is encapsulated and connected to maximize test-creation productivity, and maximize reuse of elements of test functionality. My paper, Jump-Start Software-Driven Hardware Verification with a Verification Framework, proposes a set of key features and capabilities required by a verification framework targeted at software-driven hardware verification.

Continuing the theme of reuse, I’m excited to announce that a collection of papers, poster papers, and interviews from DVCon are now available on Verification Academy. Whether or not you were able to attend DVCon, you can read papers on topics ranging from regression management to formal techniques to software-driven hardware verification. In addition, you can listen to the presenters of poster papers introduce their poster, and see interviews with industry figures. You can find these resources and more at the following link:

https://verificationacademy.com/news/featured-presentations-dvcon-2015

16 April, 2015

Do automated formal apps really help D&V engineers "cross the chasm" and start using formal verification directly? In Part 1 of this case study on Oracle's "Project RAPID", the Oracle team's appetite for using formal verification was whetted by impressive results from the Questa Connectivity Check and Questa Register Check apps. Picking up the story where we left off, award-winning author Ram Narayan explains how success with these automated formal apps inspired the team to try their hand at using formal technology directly with the Questa Property Checking (PropCheck) app for classical model/property checking. Ram writes:

"Some of the IP units … were good candidates for formal verification. That's because it was very reasonable to expect to be able to prove the complete functionality of these units formally. We decided to target these units with the Assurance strategy."

and

"Spurred by the success of applying [the apps], we considered applying formal methods in a bug hunting mode for units that were already being verified with simulation. Unlike Assurance, Bug Hunting doesn't attempt to prove the entire functionality, but rather targets specific areas where simulation is not providing enough confidence that all corner case bugs have been discovered."

The results of their assurance and bug hunting strategies speak for themselves: Table 1 in the article reports that the team found 79 bugs with these formal verification techniques!

Given this success with formal, the team gained the confidence to apply formal in more DUT areas where formal would be more effective than simulation – i.e. “using the best tool for the job” as necessary. Indeed, a common thread throughout the whole story is how formal and simulation were often used in tandem to simultaneously leverage the unique strengths of each technology to improve the overall quality of verification. The article’s conclusion begins with this observation:

"Formal verification is a highly effective and efficient approach to finding bugs. Simulation is the only means available to compute functional coverage towards verification closure. In this project we attempted to strike a balance between the two methodologies and to operate within the strengths of each approach towards meeting the project's goals."

The bottom-line: formal has “gone mainstream” in this team’s current and future projects:

"The most significant accomplishment to me is the shift in the outlook of the design team towards formal. According to one designer whose unit was targeted with Bug Hunting, 'I was initially skeptical about what formal could do. From what I have seen, I want to target my next design first with formal and find most of the bugs.' … The time savings and improved design quality that formal verification brings are welcome benefits. We plan to continue to push the boundaries of what is covered by formal in future projects."

Granted, the road from zero formal to full adoption might not have been quite as smooth as this engaging article describes. Still, their declared intent to keep using formal apps in conjunction with formal property checking on future projects, to say nothing of this project's impressive results, appears to prove the original thesis conclusively.  Namely, once formal's considerable power and benefits are introduced by a series of formal apps, there is no going back, and formal becomes a permanent part of the user's verification tool kit.

Does Ram’s/Oracle’s journey resonate with you? Have you had the same experience or seen something similar at your employer or clients?  Please share your thoughts in the comments below, or contact me offline.

Until next time, may your coverage be high and your power consumption be low,

Joe Hupcey III

P.S. FYI, the author of the Verification Horizons article described above (and the related award-winning DVCon 2014 poster) was also a co-author of the 2015 DVCon USA Best Paper, 10.1 “I Created the Verification Gap” by Ram Narayan and Tom Symons of Oracle Labs.  Congratulations Ram and Tom!

Reference Links:

Verification Horizons, March 2015, Volume 11, Issue 1:
Evolving the Use of Formal Model Checking in SoC Design Verification
Ram Narayan, Oracle Corp.

https://verificationacademy.com/verification-horizons/march-2015-volume-11-issue-1/Evolving-the-Use-of-Formal-Model-Checking-in-SoC-Design-Verification

—–

DVCon USA, March 2014, 1P.2:
The Future of Formal Model Checking is NOW! Leveraging Formal Methods for RAPID System On Chip Verification, (Poster Presentation Honorable Mention)
Ram Narayan, Oracle Corp.

http://events.dvcon.org/events/proceedings.aspx?id=163-1-P


9 April, 2015

One of the biggest developments in the formal verification world in the past several years has been the industry-wide growth of formal-based "apps" — automated applications that leverage formal's exhaustive verification technology "under the hood" to focus on specific verification tasks well suited to formal algorithms. But do formal apps really help D&V engineers "cross the chasm" and start using formal verification directly?  (Or if you prefer, are apps an effective "Trojan Horse"?)  A recent article in Verification Horizons by Oracle's Ram Narayan titled "Evolving the Use of Formal Model Checking in SoC Design Verification", about the evolution of the verification methodology employed on Oracle's "Project RAPID", suggests the answer is "yes".


In a nutshell, the clear benefits Ram's team received from formal apps inspired them to try their hand at formal model checking, and their results exceeded all expectations. I recommend you read the article in its entirety because it's a great real-world case study, rich with anecdotes from the front-line engineer himself. (Indeed, this article was inspired by Ram's award-winning DVCon 2014 poster, but I digress.) For the purposes of this post, allow me to focus exclusively on the highlights pertaining to the "crossing the chasm" thesis. Consider the following excerpts.

* First, they started from scratch:

“At the outset of the project, there were no specific plans to use formal verification on RAPID. We did not have any infrastructure in place for running formal tools, and neither did we have anyone on the team with any noteworthy experience using these tools.”

* The first app they tried exceeded all expectations: Like many customers, Oracle got their feet wet with formal-driven SoC connectivity checking. And like 100% of Questa Connectivity Check app customers, they came away impressed:

"Our goal was to catch trivial design errors through formal methods without having to rely on lengthy and in some cases, random SoC simulations. Given our modest expectations at the outset, we would have been satisfied if we just verified these SoC connectivity checks with formal tools.  … SoC Connectivity checks were written to verify the correct connectivity between critical SoC signals like interrupts, events and other control/datapath signals. These checks are trivial to define and are of high value. Proving these connections saved us significant cycles in SoC simulations."
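To give a flavor of what such a check looks like, here is a hand-written SVA sketch of my own with hypothetical signal names; it is illustrative only, and is not the form the Questa Connectivity Check app actually generates. Writing hundreds of these by hand is exactly the tedium an automated connectivity app removes.

    // Illustrative connectivity check (hypothetical names): the IP's
    // interrupt output must show up, unmodified, at the SoC interrupt
    // controller input one soc_clk cycle later.
    module conn_checks (
      input logic soc_clk,
      input logic ip_a_irq_out,      // driver end of the connection
      input logic irq_ctrl_irq_in3   // intended receiver end
    );
      a_irq_connected: assert property (
        @(posedge soc_clk) ip_a_irq_out |=> irq_ctrl_irq_in3
      ) else $error("ip_a irq is not connected to irq_ctrl input 3");
    endmodule

    // Typically attached with a bind so the RTL stays untouched, e.g.:
    //   bind soc_top conn_checks u_conn_checks (
    //     .soc_clk          (clk),
    //     .ip_a_irq_out     (u_ip_a.irq_out),
    //     .irq_ctrl_irq_in3 (u_irq_ctrl.irq_in[3]));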

This is not just a gut feeling on the author's part: the bottom row of Table 2 in the article (showing the Questa Connectivity Check app cutting the schedule by 66%) backs up the above quote with real project data.


Article Table 2: Formal Verification Time Savings on Oracle’s Project RAPID – formal-based connectivity verification with the Questa Connectivity Check app delivers 66% schedule savings


* Another app is tried, and it's also wildly successful: the Questa Register Check app was applied next. Not only did it take care of the immediate control and status register verification task, it also enabled more effective downstream verification:

“The Register Access Verification established controllability and observability of the registers in the unit from its interface. The IP core logic verification could now safely use the control registers as inputs to properties on the rest of the logic they drive. In addition to these registers, we chose a few internal nodes in the design as observation and control points in our properties. These points gave us additional controllability and observability to the design and reduced the complexity of the cones of logic being analyzed around them. We proved the correctness (observability) of these points prior to enjoying the benefits of using them (controllability) for other properties. This approach made it easier to write properties on the entire unit without any compromise on the efficacy of the overall unit verification.”
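The technique Ram describes can be pictured roughly like this; the following is an illustrative SystemVerilog sketch of my own with hypothetical names, not an excerpt of the actual RAPID properties. Once the register-access checks have proven a control register, it can be constrained as a legal, stable input while properties on the logic it drives are proven.

    // Illustrative only (hypothetical names). mode_reg is assumed to have
    // been proven correct by the register-access checks, so here it is
    // constrained rather than re-verified, shrinking the cone of logic
    // the property checker has to analyze.
    module unit_props (
      input logic       clk,
      input logic       rst_n,
      input logic [1:0] mode_reg,   // already-proven control register
      input logic       req,
      input logic       grant
    );
      // Treat the proven register as a constrained, stable input ...
      asm_mode_legal : assume property (
        @(posedge clk) disable iff (!rst_n) mode_reg inside {2'b00, 2'b01});
      asm_mode_stable: assume property (
        @(posedge clk) disable iff (!rst_n) $stable(mode_reg));

      // ... and prove properties on the downstream logic it drives.
      a_req_gets_grant: assert property (
        @(posedge clk) disable iff (!rst_n)
          (req && mode_reg == 2'b01) |-> ##[1:4] grant);
    endmodule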

At this point in the story, the Oracle team is still confining their use of formal to the stable of available automated formal apps. However, as we'll see in Part 2 of this case study, this success bred curiosity about the underlying technology …

Until next time, may your coverage be high and your power consumption be low,

Joe Hupcey III

P.S. FYI, the author of the Verification Horizons article described above (and the related award-winning DVCon 2014 poster) was also a co-author of the 2015 DVCon USA Best Paper, 10.1 “I Created the Verification Gap” by Ram Narayan and Tom Symons of Oracle Labs.  Congratulations Ram and Tom!

Reference Links:

Verification Horizons, March 2015, Volume 11, Issue 1:
Evolving the Use of Formal Model Checking in SoC Design Verification
Ram Narayan, Oracle Corp.

https://verificationacademy.com/verification-horizons/march-2015-volume-11-issue-1/Evolving-the-Use-of-Formal-Model-Checking-in-SoC-Design-Verification

—–

DVCon USA, March 2014, 1P.2:
The Future of Formal Model Checking is NOW! Leveraging Formal Methods for RAPID System On Chip Verification, (Poster Presentation Honorable Mention)
Ram Narayan, Oracle Corp.

http://events.dvcon.org/events/proceedings.aspx?id=163-1-P


3 April, 2015

It is always good to pause to recognize the companies and individuals with whom we collaborate to create the verification flows and solutions that allow the simplest and most complex devices and systems to come to life.  It is this time of year when the fruit of that collaboration is generally shared publicly, due in no small part to the nearing of the annual trek to the Design Automation Conference (DAC).  As we get closer to that week in June this year, I will discuss it even more.  But now I would like to offer a look back at two major milestones around this time of the year that shaped our future.

20 Years Ago

On April 3, 1995, we announced "Device Vendors Providing Library Support to Mentor."  Our ModelSim simulator gained support from 12 ASIC and programmable logic vendors.  Until then, Mentor's gate-level simulation was provided by QuickSim and its large collection of ASIC vendor libraries and flows.  With the emergence of VITAL (VHDL Initiative Towards ASIC Libraries) and its IEEE standards project (1076.4), we continued our efforts to spread knowledge about VITAL and to educate and help the rest of the ASIC vendor community so they could bring their own simulation libraries for ModelSim to market.

As we added Verilog to the language mix, those Verilog libraries were likewise qualified and offered to the mutual customers we shared with our valued ASIC vendor partners.  ModelSim grew to be a very popular product, and the experience taught us the value of shared collaboration.

10 Years Ago

In mid-May 2005, we launched our Questa Vanguard Partnership (QVP) program modeled on the ModelSim program.  SystemVerilog 3.1a had been released by Accellera and was in the final stages of IEEE certification, which was to come in November 2005.  But to get a jump on solidifying business relationships with our partners and to encourage support of SystemVerilog, we began to work with companies around the world who expressed an interest in building a vibrant ecosystem.  A lot was accomplished in the six months between the launch of the QVP program and the approval of the first IEEE SystemVerilog 1800-2005 standard.

But it was good to pause then too and celebrate the standard with our new Questa partners, our mainstay semiconductor library partners, and competitors in Japan.  Upon IEEE approval of the standard, Accellera, in conjunction with the Big-3 EDA companies and CQ Publishing (Japan), held a "Happy Birthday" celebration reception.  I have to offer special thanks to my friends at Synopsys for the idea.  And, yes, we all know that this November will be lucky 10 years for SystemVerilog, and we have already started to discuss what can be done at the annual fall standards meetings in Japan to celebrate this milestone.

Tomorrow (DAC)

As I mentioned, the great thing about this time of the year is the planning for DAC.  Many good things have happened in the last year.  Last year, at Mentor Graphics' urging and with our public commitment to donate technology, Accellera started a "Proposed Working Group" on Portable Stimulus to determine the viability of a standards project.  Accellera formally approved the formation of the Portable Stimulus Working Group in December 2014.  At the Verification Academy booth at DAC, we will certainly offer updates on this work and affirm our sustained commitment to the development of this standard.  I will share full details about what, when, and where for the Verification Academy booth at DAC later.

But wait!  There will probably be more.  I can assure you, I will post a few more times during this final two-month journey to DAC.  As the daily program for the Verification Academy booth is finalized, I will share its content and my thoughts on it.  And as industry events, like the Accellera DAC Breakfast, are finalized, I will make them part of my commentary on DAC 2015 as well.  It seems this DAC will be a busy DAC.

But this is something you can do now!  If you don't know yet whether you want to attend the technical program, you should at a minimum secure a free pass to the exhibit floor and access to some open industry events.  If you register by May 19th, you can choose the "I Love DAC" registration – compliments of ATopTech, Atrenta, and Calypto.  After May 19th, it is no longer free.  So why not register now?  I look forward to seeing you at DAC.


1 April, 2015

FPGA Effort Verification Trends (Continued)

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I focused on the controversial topic of effort spent in FPGA verification. This blog continues that discussion. I stated in my previous blog that I don’t believe there is a simple answer to the question, “how much effort was spent on verification in your last FPGA project?” I believe that it is necessary to look at multiple data points to truly get a sense of the real effort involved in verification today. So, let’s look at a few additional findings from the study.

Time FPGA designers spend in verification

For projects that have a separation of teams (i.e., design engineers and verification engineers), it’s important to note that FPGA verification engineers are not the only project members involved in functional verification. FPGA design engineers spend a significant amount of their time in verification too, as shown in Figure 1.


Figure 1. Average (mean) time FPGA design engineers spend in design vs. verification.

You might note that, on average, FPGA design engineers actually spend slightly more time doing verification than design. We are not showing trends here since we have insufficient data from our previous study related to these questions for FPGA designs. We anticipate being able to show trends after our next study (currently scheduled for 2016).

Even if the FPGA project has a separation of teams, the designers are still involved in the verification process, ranging from:

  • Small sandbox testing to explore various aspects of the implementation
  • Full functional testing of IP blocks and SoC integration
  • Debugging verification problems identified by a separate verification team

In fact, getting a better understanding of exactly where FPGA designers spend their time has led us to conduct a series of follow-on discussions with various FPGA projects across multiple market segments. Through this process we have learned that many project managers are concerned about the increasing amount of debugging time spent on a project (both pre-lab and in-lab debugging time). This is one area of FPGA verification that we plan to continue to explore through a series of in-depth discussions with multiple FPGA projects around the world.

Percentage of time FPGA verification engineers spend on various tasks

Next, let's look at the mean time FPGA verification engineers spend performing various tasks related to their specific project, as shown in Figure 2. You might note that verification engineers spend most of their time in debugging. Ideally, if all the tasks were optimized, you would expect this. Yet, unfortunately, the time spent in debugging can vary significantly from project to project, which presents scheduling challenges for managers during a project's verification planning process.


Figure 2. Average (mean) time verification engineers spend in various tasks

In our 2012 study we found that FPGA verification engineers spent about 37% of their time involved in debugging tasks. There was a 16 percent increase in the amount of time spent in debugging between 2012 and 2014. Hence, the data suggest that debugging effort is increasing for both FPGA design and verification engineers.

In my next blog (click here) I present our study findings in terms of FPGA schedules, iterations in the lab, and classification of functional bugs.

Quick links to the 2014 Wilson Research Group Study results


17 March, 2015


With a name like “Fitzpatrick,” you knew I’d be celebrating today, right?

Well, there’s no better way to celebrate this fine day than to announce that our latest edition of Verification Horizons is available online! Now that Spring is almost here, there’s a bit less snow on the ground than there was when I wrote my introduction, but everything is still covered. I’m considering spray-painting it all green in honor of the occasion, so at least it looks like I have a lawn again.

In this issue of Verification Horizons, I’d particularly like to draw your attention to “Successive Refinement: A Methodology for Incremental Specification of Power Intent,” by my friend and colleague Erich Marschner and several of our friends at ARM® Ltd. In this article, you’ll find out how the Unified Power Format (UPF) specification can be used to specify and verify your power architecture abstractly, and then add implementation information later in the process. This methodology is still relatively new in the industry, so if you’re thinking about making your next design PowerAware, you’ll want to read this article to be up on the very latest approach.

In addition to that, we’ve also got Harry Foster discussing some of the results from his latest industry study in “Does Design Size Influence First Silicon Success?” Harry is also blogging about his survey results on Verification Horizons here and here (with more to come).

Our friends at L&T Technology Services Ltd. share some of their experience in doing PowerAware design in “PowerAware RTL Verification of USB 3.0 IPs,” in which you’ll see how UPF can let you explore two different power management architectures for the same RTL.

Next, History class is in session, with Dr. Lauro Rizzatti, long-time EDA guru, giving us part 1 of a 3-part lesson in “Hardware Emulation: Three Decades of Evolution.”

Our friends at Oracle® are up next with “Evolving the Use of Formal Model Checking in SoC Design Verification,” in which they share a case study of their use of formal methods as the central piece in verifying an SoC design they recently completed with first-pass silicon success. By the way, I’d also like to take this opportunity to congratulate the author of this article, Ram Narayan, for his Best Paper award at DVCon(US) 2015. Well done, Ram!

We round out the issue with our famous “Partners’ Corner” section, which includes two articles. In “Small, Maintainable Tests,” our friends at Sondrel IC Design Services show you a few tricks on how to make use of UVM virtual sequences to raise the level of abstraction of your tests. In “Functional Coverage Development Tips: Do’s and Don’ts,” our friends at eInfochips give you a great overview of functional coverage, especially the covergroup and related features in SystemVerilog.
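If you haven't worked with covergroups before, the flavor is roughly this; the following is a minimal sketch of my own with hypothetical packet fields, not an excerpt from the eInfochips article.

    // Minimal functional-coverage sketch (hypothetical packet fields)
    // showing coverpoints, user-defined bins, and a cross.
    class packet;
      rand bit [1:0] kind;    // 0: read, 1: write, 2: atomic
      rand bit [7:0] length;  // payload length in bytes
    endclass

    class packet_coverage;
      packet pkt;

      covergroup cg;
        cp_kind : coverpoint pkt.kind {
          bins read   = {0};
          bins write  = {1};
          bins atomic = {2};
        }
        cp_len : coverpoint pkt.length {
          bins small  = {[1:16]};
          bins medium = {[17:128]};
          bins large  = {[129:255]};
        }
        kind_x_len : cross cp_kind, cp_len;  // which kinds seen at which sizes
      endgroup

      function new();
        cg = new();
      endfunction

      // Call from a monitor/scoreboard whenever a packet is observed.
      function void sample_packet(packet p);
        pkt = p;
        cg.sample();
      endfunction
    endclass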

I’d also like to take a moment to thank all of you who came by our Verification Academy booth at DVCon to say hi. I found it incredibly humbling and gratifying to hear from so many of you who have learned new verification skills from the Verification Academy. That’s a big part of why we do what we do, and I appreciate you letting us know about it.

Now, it’s time to celebrate St. Patrick’s Day for real!


11 March, 2015

FPGA Verification Effort Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I focused on FPGA design trends. In this blog, I present findings from our study related to the effort spent in verification.

Directly asking study participants how much effort they spend in verification will not work. The reason is that it's hard to find a paper or article on verification that doesn't start with the phrase: "Seventy percent of a project's effort is spent in verification…" In other words, the industry is already biased to respond with this effort value. Yet, there are really no credible references to quantify this value.

I don’t believe that there is a simple answer to the question, “How much effort was spent on verification in your last project?” In fact, I believe that it is necessary to look at multiple data points derived from multiple questions to truly get a sense of effort spent in verification. And that’s what we did in our functional verification study.

Total FPGA Project Time Spent in Verification

To try to assess the effort spent in verification, let’s begin by looking at one data point, which is the total project time spent in verification. Figure 1 shows the trends in total percentage of FPGA project time spent in verification by comparing the 2012 Wilson Research Group study (in dark blue), and the 2014 Wilson Research Group study (in light blue).

Figure 1. Percentage of FPGA project time spent in verification

Between 2012 and 2014, the industry saw a seven percent increase in the average time an FPGA project spends in verification. Historically, FPGA projects have spent less time in verification than ASIC/IC projects. The FPGA project strategy has traditionally been to get to the lab as soon as possible, and then iterate on issues in the lab. In a future blog I'll show data indicating that this strategy does not necessarily yield good results in terms of meeting project schedule or quality objectives. Also, this lab-focused approach to FPGA verification becomes less effective as FPGA complexity increases.

Peak Number of Design and Verification Engineers

Perhaps one of the biggest challenges in design and verification today is identifying solutions to increase productivity and control engineering headcount. To illustrate the need for productivity improvement, we discuss the trend in terms of increasing engineering headcount for FPGA projects. Figure 2 shows the mean peak number of design and verification engineers working on an FPGA project. Again, this is an industry average since some projects have many engineers while other projects have few.

Figure 2. Mean peak number of engineers working on an FPGA project

You can see that the compound annual growth rate (CAGR) for the peak number of FPGA design engineers between 2012 and 2014 was 4.9 percent, while the CAGR for the peak number of FPGA verification engineers was 20.9 percent. This huge demand for verification engineers on FPGA projects is one indicator of growing verification complexity in FPGA designs. Also, note that the ratio of design engineers to verification engineers is approaching 1-to-1. A similar trend occurred on traditional ASIC/IC designs in 2012.

In my next blog (click here) I focus on the time that FPGA design and verification engineers spend on various tasks.

Quick links to the 2014 Wilson Research Group Study results

