Posts Tagged ‘formal verification’

11 May, 2015

FPGA Verification Technology Adoption Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I focused on the effectiveness of verification in terms of FPGA project schedule and bug escapes. In this blog, I present verification techniques and technologies adoption trends, as identified by the 2014 Wilson Research Group study.

An interesting trend we see in the FPGA space is a continual maturing of its functional verification processes. In fact, we find that the FPGA design space is about where the ASIC/IC design space was five years ago in terms of verification maturity—and it is catching up quickly. A question you might ask is, “What is driving this trend?” In Part 1 of this blog series I showed rising design complexity with the adoption of more advanced FPGA designs, as well as multiple embedded processor architectures targeted at FPGA designs. In addition, I’ve presented trend data that showed an increase in total project time and effort spent in verification (Part 2 and Part 3). My belief is that the industry creating FPGA designs is being forced to mature its functional verification processes to address today’s increasing complexity.

FPGA Simulation Technique Adoption Trends

Let’s begin by comparing FPGA adoption trends related to various simulation techniques from both the 2012 and 2014 Wilson Research Group studies, as shown in Figure 1.

Figure 1. Simulation-based technique adoption trends for FPGA designs

You can clearly see that the industry is increasing its adoption of various functional verification techniques for FPGA-targeted designs. This past year I have spent a significant amount of time in discussions with FPGA project managers around the world. During these discussions, most managers mention the drive to improve verification processes within their projects due to rising complexity. The Wilson Research Group data suggest that these claims are valid.

FPGA Formal Technology Adoption Trends

Figure 2 shows the adoption percentages for formal property checking and auto-formal techniques.

Figure 2. FPGA Formal Technology Adoption

Our study looked at two forms of formal technology adoption: formal property checking and automatic formal verification solutions. Examples of automatic formal verification solutions include X safety checks, deadlock detection, reset analysis, and so on. The key difference is that formal property checking requires the user to write a set of assertions that they wish to prove, while automatic formal verification solutions require no user-written assertions at all.
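
To make the distinction concrete, below is a minimal sketch of the kind of user-written SystemVerilog assertion that formal property checking consumes; the signals (clk, rst_n, req, ack) and the four-cycle bound are hypothetical placeholders, not taken from the study.

  // User-written property: every request must be acknowledged
  // within 1 to 4 cycles.  A formal property checker attempts to
  // prove this holds for every possible legal input sequence.
  property p_req_gets_ack;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] ack;
  endproperty
  a_req_gets_ack: assert property (p_req_gets_ack);

An automatic formal tool, by contrast, hunts for issues such as X-propagation or deadlock without requiring any such property from the user.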

In my next blog (click here), I’ll focus on FPGA design and verification language adoption trends, as identified by the 2014 Wilson Research Group study.

Quick links to the 2014 Wilson Research Group Study results


7 May, 2015

For all things verification, you will want to stop by the Verification Academy booth #2408 at DAC to interact with experts exploring the challenges of IC design and verification.  At the top of each hour, the Verification Academy will feature a presentation followed by a lively conversation.  Presentations will not be repeated so each hour will be unique.

We have themed each of the days as well:

  • Monday is “Debug Day”
  • Tuesday is “Standards & FPGA Day”
  • Wednesday is “Formal Verification Day”

Naturally, you will find a few exceptions to those rules when you look at the program in detail.  Please register for Verification Academy sessions here: Monday Registration | Tuesday Registration | Wednesday Registration.  [NOTE: the Verification Academy sessions are highlighted with a blue background when you visit the registration site.]  A concise listing of all the Verification Academy sessions can be found here.

We will feature an end-of-day reception on Monday at the Verification Academy booth after the last presentation.  Neil Johnson (XtremeEDA) and Mentor’s Harry Foster will explore Agile Evolution in SoC Verification in that last session.  The session begins at 5pm.  Neil is a proponent of this methodology as a means to help build in design quality and simplify the task of verification.  In addition to being an advocate for this, he is also a practitioner of it.  He is an open-source hardware developer and Moderator at www.AgileSoC.com.  We think the conversation that follows this informative session will be a lively one, and we invite everyone to continue it over cocktails and hors d’oeuvres at 5:30pm.

We are sponsoring other events outside of the Verification Academy as well.  Tuesday is truly “Standards Day” at DAC.  In addition to the standards theme at the Verification Academy booth, you can kick off the day at the Accellera Breakfast and later in the day attend the IEEE DASC, Accellera and Si2 System Level Low Power Workshop.  Here is a partial list of Standards Day activities:

Registration

If you have not yet registered for DAC, do so now.  If you do not have plans to register for the full technical conference, many conference events are free if you select the “I LOVE DAC” registration option before May 19th!  In fact, all the “Standards Day” events listed above are free with early I LOVE DAC registration.  Simply click here and you will be taken to the “I LOVE DAC” page to register.  Register before May 19th; after that date a $95 minimum fee applies.

See you at DAC!


16 April, 2015

Do automated formal apps really help D&V engineers “cross the chasm” and start using formal verification directly? In Part 1 of this case study on Oracle’s “Project RAPID”, the Oracle team’s appetite for using formal verification was whetted by impressive results from the Questa Connectivity Check and Questa Register Check apps. Picking up the story where we left off, award-winning author Ram Narayan explains how success with these automated formal apps inspired the team to try their hands at using formal technology directly with the Questa Property Checking (PropCheck) app for classical model/property checking. Ram writes:

“Some of the IP units … were good candidates for formal verification. That’s because it was very reasonable to expect to be able to prove the complete functionality of these units formally. We decided to target these units with the Assurance strategy.”

and

“Spurred by the success of applying [the apps], we considered applying formal methods in a bug hunting mode for units that were already being verified with simulation. Unlike Assurance, Bug Hunting doesn’t attempt to prove the entire functionality, but rather targets specific areas where simulation is not providing enough confidence that all corner case bugs have been discovered.”
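
To illustrate the bug-hunting style, here is a minimal sketch of the kind of narrow, corner-case assertion it relies on; the arbiter signals are hypothetical placeholders, not taken from the Oracle project.

  // Bug hunting targets a specific concern rather than a full
  // proof: e.g., that an arbiter never asserts two grant lines at
  // once, a corner case random simulation may be unlikely to hit.
  a_grant_onehot: assert property (
    @(posedge clk) disable iff (!rst_n)
      $onehot0(grant)
  );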

The results of their assurance and bug hunting strategies speak for themselves: Table 1 in the article reports that the team found 79 bugs with these formal verification techniques!

Given this success with formal, the team gained the confidence to apply formal in more DUT areas where formal would be more effective than simulation – i.e. “using the best tool for the job” as necessary. Indeed, a common thread throughout the whole story is how formal and simulation were often used in tandem to simultaneously leverage the unique strengths of each technology to improve the overall quality of verification. The article’s conclusion begins with this observation:

“Formal verification is a highly effective and efficient approach to finding bugs. Simulation is the only means available to compute functional coverage towards verification closure. In this project we attempted to strike a balance between the two methodologies and to operate within the strengths of each approach towards meeting the project’s goals.”

The bottom-line: formal has “gone mainstream” in this team’s current and future projects:

“The most significant accomplishment to me is the shift in the outlook of the design team towards formal. According to one designer whose unit was targeted with Bug Hunting, ‘I was initially skeptical about what formal could do. From what I have seen, I want to target my next design first with formal and find most of the bugs.’ … The time savings and improved design quality that formal verification brings are welcome benefits. We plan to continue to push the boundaries of what is covered by formal in future projects.”

Granted, the road from zero formal to full adoption might not have been quite as smooth as this engaging article describes.  Still, their declared intent to keep using formal apps in conjunction with formal property checking – let alone the project’s impressive results – appears to conclusively prove the original thesis.  Namely, once formal’s considerable power and benefits are introduced by a series of formal apps, there is no going back, and formal becomes a permanent part of the user’s verification tool kit.

Does Ram’s/Oracle’s journey resonate with you? Have you had the same experience or seen something similar at your employer or clients?  Please share your thoughts in the comments below, or contact me offline.

Until next time, may your coverage be high and your power consumption be low,

Joe Hupcey III

P.S. FYI, the author of the Verification Horizons article described above (and the related award-winning DVCon 2014 poster) was also a co-author of the 2015 DVCon USA Best Paper, 10.1 “I Created the Verification Gap” by Ram Narayan and Tom Symons of Oracle Labs.  Congratulations Ram and Tom!

Reference Links:

Verification Horizons, March 2015, Volume 11, Issue 1:
Evolving the Use of Formal Model Checking in SoC Design Verification
Ram Narayan, Oracle Corp.

https://verificationacademy.com/verification-horizons/march-2015-volume-11-issue-1/Evolving-the-Use-of-Formal-Model-Checking-in-SoC-Design-Verification

—–

DVCon USA, March 2014, 1P.2:
The Future of Formal Model Checking is NOW! Leveraging Formal Methods for RAPID System On Chip Verification, (Poster Presentation Honorable Mention)
Ram Narayan, Oracle Corp.

http://events.dvcon.org/events/proceedings.aspx?id=163-1-P


9 April, 2015

One of the biggest developments in the formal verification world in the past several years has been the industry-wide growth of formal-based “apps” — automated applications that leverage formal’s exhaustive verification technology “under the hood” to focus on specific verification tasks well suited to formal algorithms. But do formal apps really help D&V engineers “cross the chasm” and start using formal verification directly?  (Or if you prefer, are apps an effective “Trojan Horse”?)  A recent Verification Horizons article by Oracle’s Ram Narayan titled “Evolving the Use of Formal Model Checking in SoC Design Verification”, about the evolution of the verification methodology employed on Oracle’s “Project RAPID”, suggests the answer is “yes”.


In a nutshell, the clear benefits Ram’s team received from formal apps inspired them to try their hand at formal model checking, and their results exceeded all expectations. I recommend you read the article in its entirety because it’s a great real-world case study, rich with anecdotes from the front-line engineer himself. (Indeed, this article was inspired by Ram’s award-winning DVCon 2014 poster, but I digress.) For the purposes of this post, allow me to focus exclusively on the highlights pertaining to the “crossing the chasm” thesis. Consider the following excerpts.

* First, they started from scratch:

“At the outset of the project, there were no specific plans to use formal verification on RAPID. We did not have any infrastructure in place for running formal tools, and neither did we have anyone on the team with any noteworthy experience using these tools.”

* The first app they tried exceeded all expectations: Like many customers, Oracle got their feet wet with formal-driven SoC connectivity checking. And like 100% of Questa Connectivity Check app customers, they came away impressed:

“Our goal was to catch trivial design errors through formal methods without having to rely on lengthy and in some cases, random SoC simulations. Given our modest expectations at the outset, we would have been satisfied if we just verified these SoC connectivity checks with formal tools. … SoC Connectivity checks were written to verify the correct connectivity between critical SoC signals like interrupts, events and other control/datapath signals. These checks are trivial to define and are of high value. Proving these connections saved us significant cycles in SoC simulations.”

This is not just a gut feeling on the author’s part: the bottom row of Table 2 in the article (showing the Questa Connectivity Check app cutting the schedule by 66%) backs up the above quote with real project data.


Article Table 2: Formal Verification Time Savings on Oracle’s Project RAPID – formal-based connectivity verification with the Questa Connectivity Check app delivers 66% schedule savings
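
Conceptually, each check the app generates and proves amounts to a point-to-point assertion like the following minimal sketch; the signal names are hypothetical placeholders, not taken from RAPID.

  // Conceptual form of a connectivity check: a peripheral's
  // interrupt output must reach the corresponding interrupt
  // controller input unmodified.
  a_uart_irq_connected: assert property (
    @(posedge clk) uart_irq_out == intc_irq_in[3]
  );

Writing a handful of these by hand is easy; the value of the app is deriving and proving thousands of them directly from a machine-readable connectivity specification.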


* Another app is tried, and it’s also wildly successful: the Questa Register Check app was the next formal app to be applied. Not only did it take care of the immediate control & status register verification task, but it also enabled more effective downstream verification:

“The Register Access Verification established controllability and observability of the registers in the unit from its interface. The IP core logic verification could now safely use the control registers as inputs to properties on the rest of the logic they drive. In addition to these registers, we chose a few internal nodes in the design as observation and control points in our properties. These points gave us additional controllability and observability to the design and reduced the complexity of the cones of logic being analyzed around them. We proved the correctness (observability) of these points prior to enjoying the benefits of using them (controllability) for other properties. This approach made it easier to write properties on the entire unit without any compromise on the efficacy of the overall unit verification.”
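
In property terms, the pattern Ram describes might look like the following minimal sketch, where all names (wr_en, addr, MODE_ADDR, mode_reg, and so on) are hypothetical placeholders: first prove the register behavior from the interface, then constrain the proven register while verifying the logic it drives.

  // Step 1 (observability): prove the register captures bus writes.
  a_mode_reg_write: assert property (
    @(posedge clk) wr_en && (addr == MODE_ADDR)
      |=> mode_reg == $past(wdata)
  );

  // Step 2 (controllability): having proven the register, constrain
  // it to legal values while proving properties on downstream logic.
  m_mode_reg_legal: assume property (
    @(posedge clk) mode_reg inside {MODE_IDLE, MODE_RUN}
  );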

At this point in the story, the Oracle team is still confining their use of formal to the stable of available automated formal apps. However, as we’ll see in Part 2 of this case study, this success bred curiosity in the underlying technology …

Until next time, may your coverage be high and your power consumption be low,

Joe Hupcey III

P.S. FYI, the author of the Verification Horizons article described above (and the related award-winning DVCon 2014 poster) was also a co-author of the 2015 DVCon USA Best Paper, 10.1 “I Created the Verification Gap” by Ram Narayan and Tom Symons of Oracle Labs.  Congratulations Ram and Tom!

Reference Links:

Verification Horizons, March 2015, Volume 11, Issue 1:
Evolving the Use of Formal Model Checking in SoC Design Verification
Ram Narayan, Oracle Corp.

https://verificationacademy.com/verification-horizons/march-2015-volume-11-issue-1/Evolving-the-Use-of-Formal-Model-Checking-in-SoC-Design-Verification

—–

DVCon USA, March 2014, 1P.2:
The Future of Formal Model Checking is NOW! Leveraging Formal Methods for RAPID System On Chip Verification, (Poster Presentation Honorable Mention)
Ram Narayan, Oracle Corp.

http://events.dvcon.org/events/proceedings.aspx?id=163-1-P


17 March, 2015


With a name like “Fitzpatrick,” you knew I’d be celebrating today, right?

Well, there’s no better way to celebrate this fine day than to announce that our latest edition of Verification Horizons is available online! Now that Spring is almost here, there’s a bit less snow on the ground than there was when I wrote my introduction, but everything is still covered. I’m considering spray-painting it all green in honor of the occasion, so at least it looks like I have a lawn again.

In this issue of Verification Horizons, I’d particularly like to draw your attention to “Successive Refinement: A Methodology for Incremental Specification of Power Intent,” by my friend and colleague Erich Marschner and several of our friends at ARM® Ltd. In this article, you’ll find out how the Unified Power Format (UPF) specification can be used to specify and verify your power architecture abstractly, and then add implementation information later in the process. This methodology is still relatively new in the industry, so if you’re thinking about making your next design PowerAware, you’ll want to read this article to be up on the very latest approach.

In addition to that, we’ve also got Harry Foster discussing some of the results from his latest industry study in “Does Design Size Influence First Silicon Success?” Harry is also blogging about his survey results on Verification Horizons here and here (with more to come).

Our friends at L&T Technology Services Ltd. share some of their experience in doing PowerAware design in “PowerAware RTL Verification of USB 3.0 IPs,” in which you’ll see how UPF can let you explore two different power management architectures for the same RTL.

Next, History class is in session, with Dr. Lauro Rizzatti, long-time EDA guru, giving us part 1 of a 3-part lesson in “Hardware Emulation: Three Decades of Evolution.”

Our friends at Oracle® are up next with “Evolving the Use of Formal Model Checking in SoC Design Verification,” in which they share a case study of their use of formal methods as the central piece in verifying an SoC design they recently completed with first-pass silicon success. By the way, I’d also like to take this opportunity to congratulate the author of this article, Ram Narayan, for his Best Paper award at DVCon(US) 2015. Well done, Ram!

We round out the issue with our famous “Partners’ Corner” section, which includes two articles. In “Small, Maintainable Tests,” our friends at Sondrel IC Design Services show you a few tricks on how to make use of UVM virtual sequences to raise the level of abstraction of your tests. In “Functional Coverage Development Tips: Do’s and Don’ts,” our friends at eInfochips give you a great overview of functional coverage, especially the covergroup and related features in SystemVerilog.

I’d also like to take a moment to thank all of you who came by our Verification Academy booth at DVCon to say hi. I found it incredibly humbling and gratifying to hear from so many of you who have learned new verification skills from the Verification Academy. That’s a big part of why we do what we do, and I appreciate you letting us know about it.

Now, it’s time to celebrate St. Patrick’s Day for real!


17 November, 2014

Few verification tasks are more challenging than trying to achieve code coverage goals for a complex system that, by design, has numerous layers of configuration options and modes of operation.  When the verification effort gets underway and the coverage holes start appearing, even the most creative and thorough UVM testbench architect can be bogged down devising new tests – either constrained-random or highly directed – to reach uncovered areas.

At the recent ARM® TechCon, Nguyen Le, a Principal Design Verification Engineer in the Interactive Entertainment Business Unit at Microsoft Corp., documented a real-world case study on this exact situation.  Specifically, in the paper titled “Advanced Verification Management and Coverage Closure Techniques”, Nguyen outlined his initial pain in verification management and coverage closure, and how he conquered both challenges – speeding up his regression run time by 3x, while simultaneously moving the overall coverage needle up to 97% and saving 4 man-months in the process!  Here are the highlights:

* DUT in question
— SoC with multi-million gate internal IP blocks
— Consumer electronics end-market = very high volume production = very high cost of failure!

* Verification flow
— Constrained-random, coverage-driven approach using UVM, with IP block-level testbenches as well as an SoC-level testbench
— Rigorous testplan requirements tracking, supported by a variety of coverage metrics including functional coverage with SystemVerilog covergroups, assertion coverage with SVA cover properties, and code coverage on statements, branches, expressions, conditions, and FSMs (see the brief sketch following these highlights)

* Sign-off requirements
— All test requirements tracked through to completion
— 100% functional, assertion and code coverage

* Pain points
— Code coverage: code coverage holes can come from a variety of expected and unforeseen sources: dead code can be due to unused functions in reused IP blocks, to specific configuration settings, or to a bug in the code.  Given the rapid pace of the customer’s development cycle, it’s all too easy for dead code to slip into the DUT due to frequent RTL changes or differing interpretations of the spec.  “Unexplainably dead” code coverage areas were manually inspected, and exclusions for properly unreachable code were manually addressed by adding pragmas.  Both procedures were time-consuming and error-prone.
— Verification management: the verification cycle and the generated data were managed through manually-maintained scripting.  Optimizing the results display, throughput, and tool control became a growing maintenance burden.

* New automation
— Questa Verification Manager: built around the Unified Coverage Database (UCDB) standard, the tool supports a dynamic verification plan cross-linked with the functional coverage points and code coverage of the DUT.  In this way the dispersed project teams now had a unified view which told them at a glance which tests were contributing the most value, and which areas of the DUT needed more attention.  In parallel, the included administrative features enabled efficient control of large regressions, merging of results, and quick triage of failures.

— Questa CoverCheck: this tool reads code coverage results from simulation in UCDB, and then leverages formal technology under the hood to mathematically prove that no stimulus could ever activate the code in question.  If it’s OK for a given block of code to be dead due to a particular configuration choice, etc., the user can automatically generate waivers to refine the code coverage results.  Additionally, the tool can identify segments of code that, though difficult to reach, might someday be exercised in silicon.  In such cases, CoverCheck helps point the way to testbench enhancements to better reach these parts of the design.

— The above tools used in concert (along with Questa simulation) enabled a very straightforward coverage score improvement process, as follows:
1 – Run full regression and merge the UCDB files
2 – Run Questa CoverCheck with the master UCDB created in (1)
3 – Use CoverCheck to generate exclusions for “legitimate” unreachable holes, and apply said exclusions to the UCDB
4 – Use CoverCheck to generate waveforms for reachable holes, and share these with the testbench developer(s) to refine the stimulus
5 – Report the new & improved coverage results in Verification Manager

* Results
— Automation with Verification Manager enabled Microsoft to reduce the runtime variation across test sequences from 10x down to a focused 2x.  Additionally, using the coverage reporting to rank and optimize their tests, they increased their regression throughput by 3x!
— With CoverCheck, the Microsoft engineers improved code coverage by 10 – 15% in most hand-coded RTL blocks, saw up to 20% coverage improvement for auto-generated RTL code, and in a matter of hours were able to increase their overall coverage number from 87% to 97%!
— Bottom-line: the customer estimated that they saved 4 man-months on one project with this process

Figure: CoverCheck ROI, from the Microsoft presentation at ARM TechCon 2014
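
As promised above, here is a minimal sketch of the two user-defined coverage kinds listed in the verification flow: a SystemVerilog covergroup for functional coverage and an SVA cover property for assertion coverage. The names (pkt_len, wr_en) are hypothetical placeholders, not from Microsoft’s design.

  // Functional coverage: record which packet-size categories the
  // regression actually exercised.
  covergroup cg_pkt_len @(posedge clk);
    cp_len: coverpoint pkt_len {
      bins small  = {[1:64]};
      bins medium = {[65:512]};
      bins large  = {[513:1518]};
    }
  endgroup

  // Assertion coverage: record that a back-to-back write was
  // observed at least once during the regression.
  c_b2b_write: cover property (@(posedge clk) wr_en ##1 wr_en);

Both kinds land in the same UCDB as the code coverage results, which is what lets Verification Manager present one unified view.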

Taking a step back, success stories like this one – where automated, formal-based applications leverage the exhaustive nature of formal analysis to tame once intractable problems, requiring no prior knowledge of formal or assertion-based verification – are becoming more common by the day.  In this case, Mentor’s formal-based CoverCheck is clearly the right tool for this specific verification need, literally filling in the gaps in a traditional UVM testbench verification flow.  Hence, I believe the overall moral of the story is a simple rule of thumb: when you are grappling with the “last mile problem” of unearthing all the unexpected, yet potentially damaging corner cases, consider a formal-based application as the best tool for the job.  Wouldn’t you agree?

Joe Hupcey III

Reference links:

Direct link to the presentation slides: https://verificationacademy.com/Advanced-Verification-Management-Presentation

ARM Techcon 2014 Proceedings: http://www.armtechcon.com/

Official paper citation:
Advanced Verification Management and Coverage Closure Techniques, Nguyen Le, Microsoft; Harsh Patel, Roger Sabbagh, Darron May, Josef Derner, Mentor Graphics


5 November, 2014

Between 2006 and 2014, the average number of IPs integrated into an advanced SoC increased from about 30 to over 120. In the same period, the average number of embedded processors found in an advanced SoC increased from one to as many as 20. However, increased design size is only one dimension of the growing verification complexity challenge. Beyond this growing-functionality phenomenon are new layers of requirements that must be verified. Many of these verification requirements did not exist ten years ago, such as multiple asynchronous clock domains, interacting power domains, security domains, and complex HW/SW dependencies. Add all these challenges together, and you have the perfect storm brewing.

It’s not just the challenges in design and verification that have been changing, of course. New technologies have been developed to address emerging verification challenges. For example, new automated ways of applying formal verification have been developed that allow engineers who are not formal experts to take advantage of the significant benefits of formal verification. New technologies for stimulus generation have also been developed that allow verification engineers to develop complex stimulus scenarios 10x more efficiently than with directed tests, and to execute those tests 10x more efficiently than with pure random generation.
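
As a generic illustration of that constrained-random style (a sketch under assumed names, not any particular tool’s output), a stimulus class declares random fields plus constraints, and each randomize() call yields a fresh, legal transaction:

  // One declarative description yields many varied, legal stimuli.
  class bus_txn;
    rand bit [31:0] addr;
    rand bit [7:0]  data;
    rand bit        is_write;
    // Keep addresses in a legal window; bias writes over reads 7:3.
    constraint c_addr  { addr inside {[32'h0000_1000:32'h0000_1FFF]}; }
    constraint c_write { is_write dist {1 := 7, 0 := 3}; }
  endclass

  // Usage sketch:
  //   bus_txn t = new();
  //   repeat (100) if (t.randomize()) drive(t);  // drive() is hypothetical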

It’s not just technology, of course. Along with new technologies, new methodologies are needed to make adoption of new technologies efficient and repeatable. The UVM is one example of these new methodologies that make it easier to build complex and modular testbench environments by enabling reuse – both of verification components and knowledge.

The Verification Academy website provides great resources for learning about new technologies and methodologies that make verification more effective and efficient. This year, we tried something new and took Verification Academy on the road with live events in Austin, Santa Clara, and Denver. It was great to see so many verification engineers and managers attending to learn about new verification techniques and share their experiences applying these techniques with their colleagues.


If you weren’t able to attend one of the live events – or if you did attend and really want to see a particular session again – you’re in luck. The presentations from the Verification Academy Live seminars are now available on the Verification Academy site:

  • Navigating the Perfect Storm: New School Verification Solutions
  • New School Coverage Closure
  • New School Connectivity Checking
  • New School Stimulus Generation Techniques
  • New School Thinking for Fast and Efficient Verification using EZ-VIP
  • Verification and Debug: Old School Meets New School
  • New Low Power Verification Techniques
  • Establishing a company-wide verification reuse library with UVM
  • Full SoC Emulation from Device Drivers to Peripheral Interfaces

You can find all the sessions via the following link:

https://verificationacademy.com/seminars/academy-live


25 February, 2014

As DVCon expands, we at Mentor Graphics have grown our sponsored sessions as well.  Would you expect less?

In DVCon’s recent past, it was a tradition for the North American SystemC User Group (NASCUG) to sponsor a day of activity before the official start of the conference.  When OSCI merged with Accellera, the day before the official conference start grew to become Accellera Day, with a broader set of meetings and activities covering many of Accellera’s standards.  This has all grown into a more official part of the DVCon program.  On the Monday of DVCon (or, as many still call it, Accellera Day) the tradeshow now joins in opening.  I covered this in detail in an earlier blog, so I won’t repeat myself now.

The pre-conference education and meet-up to discuss the latest in standards development is joined by an end-of-conference tutorial series that has expanded from three parallel sessions to four.  Instead of the one tutorial we at Mentor Graphics would otherwise sponsor at DVCon, we will offer two in this expanded series.  Given the impact verification has on design, it seems only right that more time be devoted to topics that address it.  One half-day tutorial is just too short to give the subject its due respect.

The two Mentor Graphics-sponsored tutorials at DVCon, to be run in series, will devote a day to exploring the application of current verification technology by us and users like you.  If you are already attending DVCon, you are making your tutorial selections now.  And for those who might only be interested in attending the tutorials themselves, DVCon offers a tutorials-only package ($145/tutorial).  Mentor’s two tutorials are:

The first tutorial references “smooth sailing,” not because this will be a “no-pirate zone,” although I can tell you that since International Talk Like a Pirate Day is in late September, one won’t have to worry about a morning of pirate talk! [Interesting Fun Fact: Mentor Graphics’ headquarters in Wilsonville, OR USA is a short 50 miles (~80 km) north of the creators of this parodic holiday.]  The smooth sailing comes from the ability to easily use multiple engines – simulation, formal, emulation, FPGA prototyping – to address your block- to system-level verification needs.

The second tutorial is all about formal.  Or, in a more colloquial way to say it, we will answer the question: Whatsup with formal?  No, I doubt we will find more slang terms for formal technology being used and created in the tutorial.  But the tutorial will certainly look at more focused applications of formal technology.  As a pioneer in focused formal applications (like clock domain crossing) the creation of these focused formal applications has greatly simplified use and expanded technology access to verification teams with RTL design checks, X-state verification, and more joining the list.  Maybe we should ask Whatsapp with formal! But wait!  That slang question is already taken – and Facebook affirmed ownership with a $19B purchase of it recently.  Oh well, I lament.  Join me at this tutorial and we can explore something suitable and not yet taken as a replacement.  I can’t think of a better way to close DVCon than to see if we can invent another $19B term (or app).


30 October, 2013

MENTOR GRAPHICS AT ARM TECHCON

This week ARM® TechCon® 2013 is being held at the Santa Clara Convention Center from Tuesday, October 29 through Thursday, October 31 – but don’t worry, there’s nothing to be scared about.  The theme is “Where Intelligence Counts”, and in fact, as a platinum sponsor of the event, Mentor Graphics is excited to present no less than ten technical and training sessions about using intelligent technology to design and verify ARM-based designs.

My personal favorite is scheduled for Halloween Day at 1:30pm, where I’ll tell you about a trick that Altera used to shave several months off their schedule, while verifying the functionality and performance of an ARM AXI™ fabric interconnect subsystem.  And the real treat is that they achieved first silicon success as well.  In keeping with the event’s theme, they used something called “intelligent” testbench automation.

And whether you’re designing multi-core designs with AXI fabrics, wireless designs with AMBA® 4 ACE™ extensions, or even enterprise computing systems with ARM’s latest AMBA® 5 CHI™ architecture, these sessions show you how to take advantage of the very latest simulation and formal technology to verify SoC connectivity, ensure correct interconnect functional operation, and even analyze on-chip network performance.

On Tuesday at 10:30am, Gordon Allan described how an intelligent performance analysis solution can leverage the power of an SQL database to analyze and verify interconnect performance in ways that traditional verification techniques cannot.  He showed a wide range of dynamic visual representations, produced from SoC regressions, that can be quickly and easily manipulated by engineers to verify performance and avoid expensive overdesign.

Right after Gordon’s session, Ping Yeung discussed using intelligent formal verification to automate SoC connectivity, overcoming observability and controllability challenges faced by simulation-only solutions.  Formal verification can examine all possible scenarios exhaustively, verifying on-chip bus connectivity, pin multiplexing of constrained interfaces, connectivity of clock and reset signals, as well as power control and scan test signal connectivity.

On Wednesday, Mark Peryer shows how to verify AMBA interconnect performance using intelligent database analysis and intelligent testbench automation for traffic scenario generation.  These techniques enable automatic testbench instrumentation for configurable ARM-based interconnect subsystems, as well as highly-efficient dense, medium, sparse, and varied bus traffic generation that covers even the most difficult to achieve corner-case conditions.

And finally, also on Halloween, Andy Meyer offers an intelligent workshop for those who are designing high performance systems with hierarchical and distributed caches, using either ARM’s AMBA 5 CHI architecture or ARM’s AMBA 4 ACE architecture.  He’ll cover topics including how caching works, how to improve caching performance, and how to verify cache coherency.

For more information about these sessions, be sure to visit the ARM TechCon program website.  Or if you miss any of them, and would like to learn about how this intelligent technology can help you verify your ARM designs, don’t be afraid to email me at mark_olen@mentor.com.   Happy Halloween!


26 August, 2013

Verification Techniques & Technologies Adoption Trends (Continued)

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 10 click here), I presented verification techniques and technologies adoption trends, as identified by the 2012 Wilson Research Group study. In this blog, I continue those discussions and focus on formal verification, acceleration/emulation, and FPGA prototyping.

For years, the term “formal verification” has bugged me because it is so often misunderstood in the industry. The problem originated back in the mid-1990s with the emergence of formal equivalence checking tools from various EDA vendors, such as Chrysalis Symbolic Design. These tools were introduced to the market as formal verification, which is technically a true statement. However, a range of tools falls under the category of formal verification, such as formal property checkers and equivalence checkers.

So, what’s the problem? The question related to formal property checking in prior studies could have been misinterpreted by some participants to mean equivalence checking, which reduces the confidence in the results. To prevent this misinterpretation, we decided to change the question in 2012 to clarify that we were talking about the formal verification of assertions and clearly state “not equivalence checking” in the question.

One other thing we wanted to learn in the formal verification space during this study was what percentage of the market was using these auto-formal analysis tools (such as X safety checks, deadlock detection, reset analysis, etc.) versus formal property checking tools. The previous studies never made this distinction.

The fact that we changed the question related to formal property checking while adding in auto-formal in the 2012 study means that there is no meaningful way to compare this study’s formal verification results to the formal verification results from prior studies.

Formal Technology Adoption Trends

Figure 1 shows the adoption percentages for formal property checking and auto-formal techniques.

Figure 1. Formal Technology Adoption

We found that about five percent of the participants who are applying auto-formal techniques are not doing formal property checking. This means that the combined adoption of formal property checking and auto-formal techniques is about 32 percent. As a point of reference, the 2007 FarWest Research study found 19 percent adoption for formal verification—and the 2010 study found the adoption at 29 percent. Both the 2007 and 2010 studies included the potential erroneous responses associated with formal equivalence checking, as well as auto-formal usage.

Figure 2 provides a different analysis of the formal property adoption data by partitioning the results by design sizes. The design size partitions are represented as: less than 5M gates, 5M to 20M gates, and greater than 20M gates.

Figure 2. Formal property checking adoption by design size

Acceleration/Emulation & FPGA Prototyping Adoption Trends

The amount of time spent in a simulation regression is an increasing concern for many projects. Intuitively, we tend to think that the design size influences simulation performance. However, there are two equally important factors that must be considered: number of tests in the simulation regression suite and the length of each test in terms of clock cycles.

For example, a project might have a small or moderate-sized design, yet verification of this design requires a long running test (e.g., a video input stream). Hence, in this example, the simulation regression time is influenced by the number of clock cycles required for the test and not necessarily the design size itself.

Figure 3 shows the number of directed tests created to verify a design in simulation (i.e., the regression suite). The findings obviously varied dramatically from a handful of tests to thousands of tests in a regression suite, depending on the design.

Figure 3. Number of directed tests created to verify a design

The increase in tests in the range of 1-100 is interesting to note. Is this due to the increased adoption of constrained-random verification techniques in the past few years? Or is something else going on here? This line of questioning illustrates the value of reviewing various industry studies. That is, the value lies not so much in the absolute numbers a study presents as in the questions the new data raise.

Next, let’s look at regression times, as shown in Figure 4. As you can see, regression time also varies dramatically, from short regression times for some projects to multiple days for other projects. The median simulation regression time is about 16-24 hours. Here, we also see an increase in shorter regression times. Again, this data raises some interesting questions that are worth exploring.

Figure 4. Simulation regression time trends

One technique that is often used to speed up simulation regressions (whether due to very long tests, a large number of tests, or both) is hardware-assisted acceleration or emulation. In addition, FPGA prototyping, while historically used as a platform for software development, has recently taken on a role in SoC integration validation.

Figure 5 shows the adoption trend for both HW-assisted acceleration/emulation and FPGA prototyping by comparing the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). We see a continual rise in HW acceleration and emulation. This is not only due to the need to verify larger designs, or designs with long test times; HW acceleration and emulation have become the key platform for SoC integration verification, where both hardware and software are integrated into a system for the first time. In addition, emulation is increasingly being used as a software development platform.

Figure 5. HW-assisted acceleration/emulation and FPGA Prototyping trends

Note that the adoption of FPGA prototyping has remained flat (or decreased slightly as the 2012 data suggest). This might seem counter-intuitive since we previously saw a trend in terms of the increase in SoC class designs. So what’s going on?

Figure 6 partitions the data for HW-assisted acceleration/emulation and FPGA prototyping adoption by design size: less than 1M gates, 1M to 20M gates, and greater than 20M gates. Notice that the adoption of HW-assisted acceleration/emulation continues to increase as design sizes increase. However, the adoption of FPGA prototyping rapidly drops off as design sizes increase beyond 20M gates. 

Figure 6. Acceleration/emulation and FPGA prototyping adoption by design size

This graph illustrates one of the problems with FPGA prototyping of very large designs, which is that there is an increased engineering effort required to partition designs across multiple FPGAs. In fact, what I have found is that FPGA prototyping of very large designs is often a major engineering effort in itself, and that many projects are seeking alternative solutions to address this problem.

In my next blog (click here), I will present the final data I plan to share from the Wilson Research Group study. This blog will focus on results in terms of meeting schedules, required spins, and classes of bugs contributing to respins. I will then wrap up this series of blogs in what I call the Epilogue—which will discuss potential gotchas and cautions on interpreting certain aspects of the data and thoughts about how the data could be used constructively.

