Posts Tagged ‘functional verification’

27 July, 2015

ASIC/IC Language and Library Adoption Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I presented our study findings on various verification technology adoption trends. In this blog, I focus on language and library adoption trends.

As previously noted, the reason some of the results sum to more than 100 percent is that some projects are using multiple languages; thus, individual projects can have multiple answers.

Figure 1 shows the adoption trends for languages used to create RTL designs. Essentially, the adoption rates for all languages used to create RTL designs are projected to be either declining or flat over the next year, with the exception of SystemVerilog.


Figure 1. ASIC/IC Languages Used for RTL Design

Figure 2 shows the adoption trends for languages used to create ASIC/IC testbenches. Essentially, the adoption rates for all languages used to create testbenches are either declining or flat, with the exception of SystemVerilog. Nonetheless, the data suggest that SystemVerilog adoption is starting to saturate or level off at about 75 percent.


Figure 2. ASIC/IC Languages Used for Verification (Testbenches)

Figure 3 shows the adoption trends for various ASIC/IC testbench methodologies built using class libraries.


Figure 3. ASIC/IC Methodologies and Testbench Base-Class Libraries

Here we see a decline in adoption of all methodologies and class libraries with the exception of Accellera’s UVM, whose adoption increased by 56 percent between 2012 and 2014. Furthermore, our study revealed that UVM is projected to grow an additional 13 percent within the next year.
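
For readers less familiar with what a testbench methodology "built using a class library" looks like in practice, here is a minimal sketch of a UVM test, assuming a UVM-enabled simulator and the standard Accellera UVM package; the component and test names are hypothetical.

  import uvm_pkg::*;
  `include "uvm_macros.svh"

  class my_env extends uvm_env;                    // hypothetical environment
    `uvm_component_utils(my_env)
    function new(string name, uvm_component parent);
      super.new(name, parent);
    endfunction
  endclass

  class my_test extends uvm_test;                  // hypothetical top-level test
    `uvm_component_utils(my_test)
    my_env env;
    function new(string name, uvm_component parent);
      super.new(name, parent);
    endfunction
    function void build_phase(uvm_phase phase);
      super.build_phase(phase);
      env = my_env::type_id::create("env", this);  // factory-based construction
    endfunction
  endclass

  module tb_top;
    initial run_test("my_test");                   // hand control to UVM phasing
  endmodule

The appeal of a standard base-class library is that the phasing, factory, and reporting machinery shown above is the same from project to project, which is one reason methodology adoption has consolidated around UVM.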

Figure 4 shows the ASIC/IC industry adoption trends for various assertion languages, and again, SystemVerilog Assertions seems to have saturated or leveled off.


Figure 4. ASIC/IC Assertion Language Adoption

In my next blog (click here) I plan to present the ASIC/IC design and verification power trends.

Quick links to the 2014 Wilson Research Group Study results


19 July, 2015

ASIC/IC Verification Technology Adoption Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I focused on the growing ASIC/IC design project resource trends due to rising design complexity. In this blog I examine various verification technology adoption trends.

Dynamic Verification Techniques

Figure 1 shows the ASIC/IC adoption trends for various simulation-based techniques from 2007 through 2014, which include code coverage, assertions, functional coverage, and constrained-random simulation.


Figure 1. ASIC/IC Dynamic Verification Technology Adoption Trends

One observation from these adoption trends is that the electronic design industry is maturing its verification processes. This maturity is likely due to the growing complexity of designs discussed in my previous blogs. Another observation is that constrained-random simulation adoption appears to be leveling off. This trend is likely due to the scaling limitations of constrained-random simulation. The technique generally works well at the IP block or subsystem level in simulation, but it does not scale to the entire SoC integration level.
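
To make the scaling point concrete, here is a minimal sketch of block-level constrained-random stimulus; the class, fields, and constraints are hypothetical.

  class packet_item;
    rand bit [3:0]  dest_port;
    rand bit [10:0] length;

    // Keep lengths legal and bias traffic toward the two "corner" ports.
    constraint c_len  { length inside {[1:1024]}; }
    constraint c_port { dest_port dist { 0 := 4, [1:14] :/ 10, 15 := 4 }; }
  endclass

  module tb;
    initial begin
      packet_item item = new();
      repeat (10) begin
        if (!item.randomize()) $error("randomization failed");
        $display("port=%0d len=%0d", item.dest_port, item.length);
      end
    end
  endmodule

Randomizing a handful of fields like this is tractable at the block level; at full-SoC scope the constraint space, and the simulation cycles needed to reach interesting system states, grow much faster, which is consistent with the leveling-off seen in the data.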

ASIC/IC Static Verification Techniques

Figure 2 shows the ASIC/IC adoption trends for formal property checking (e.g., model checking), as well as automatic formal applications (e.g., SoC integration connectivity checking, deadlock detection, X semantic safety checks, coverage reachability analysis, and many other properties that can be automatically extracted and then formally proven). Formal property checking has traditionally been a high-effort process requiring specialized skills and expertise. However, the recent emergence of automatic formal applications provides narrowly focused solutions that do not require specialized skills to adopt. While formal property checking adoption experienced incremental growth between 2012 and 2014, the adoption of automatic formal applications increased by 62 percent. In general, formal solutions (i.e., formal property checking combined with automatic formal applications) are one of the fastest growing segments in functional verification.


Figure 2. ASIC/IC Formal Technology Adoption
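
To make the distinction concrete, below is a minimal sketch of the kind of hand-written assertions a formal property-checking flow exhaustively proves; by contrast, the automatic formal apps extract equivalent checks (connectivity, deadlock, X safety) without any user-written properties. The module and signal names are hypothetical.

  // Hypothetical arbiter properties a user might hand-write for formal proof.
  module arb_props (
    input logic clk,
    input logic rst_n,
    input logic req0, req1,
    input logic gnt0, gnt1
  );
    // Safety: grants are mutually exclusive on every cycle.
    a_mutex: assert property (@(posedge clk) disable iff (!rst_n)
                              !(gnt0 && gnt1));

    // Bounded liveness: a request is granted within 8 cycles.
    a_grant0: assert property (@(posedge clk) disable iff (!rst_n)
                               req0 |-> ##[1:8] gnt0);
  endmodule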

Emulation and FPGA Prototyping

Historically, the simulation market has depended on processor frequency scaling as one means of continual improvement in simulation performance. However, as processor frequency scaling levels off, simulation-based techniques are unable to keep up with today’s growing complexity. This is particularly true when simulating large designs that include both software and embedded processor core models. Hence, acceleration techniques are now required to extend ASIC/IC verification performance for very large designs. In fact, emulation and FPGA prototyping have become key platforms for SoC integration verification where both hardware and software are integrated into a system for the first time. In addition to SoC verification, emulation and FPGA prototyping are also used today as a platform for software development.

Today, 35 percent of the industry has adopted emulation, while 33 percent of the industry has adopted FPGA prototyping. Figure 3 describes various reasons why projects are using these techniques. You might note that the results do not sum to 100 percent since multiple answers were accepted from each study participant. Also, we are unable to show trend analysis here since previous studies did not examine this aspect of functional verification.


Figure 3. Why Was Emulation or FPGA Prototyping Used?

Figure 4 partitions the data for emulation and FPGA prototyping adoption by design size as follows: less than 5M gates, 5M to 80M gates, and greater than 80M gates. Notice that the adoption of emulation continues to increase as design sizes increase. However, the adoption of FPGA prototyping drops off rapidly as design sizes increase beyond 80M gates. In fact, the drop-off point is more likely around 40M gates or so, since this is the average capacity limit of many of today’s FPGAs. This graph illustrates one of the problems with adopting FPGA prototyping for very large designs: the increased engineering effort required to partition designs across multiple FPGAs. However, better FPGA partitioning solutions are now emerging from EDA vendors to address these challenges. In addition, better FPGA debugging solutions are also emerging to address today’s lab visibility challenges. Hence, I anticipate seeing an increase in adoption of FPGA prototyping for larger gate counts going forward.


Figure 4. Emulation and FPGA Prototyping Adoption by Design Size

In my next blog (click here) I plan to discuss various ASIC/IC language and library adoption trends.

Quick links to the 2014 Wilson Research Group Study results


13 July, 2015

ASIC/IC Resource Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I focused on ASIC/IC design trends and rising complexity. In this blog, I plan to discuss the growing ASIC/IC design project resource trends due to rising design complexity.

Figure 1 shows the percentage of total project time spent in verification. As you would expect, the results span a wide spectrum: some projects spend less time in verification, while others spend more. The average total project time spent in verification in 2014 was 57 percent, which did not change significantly from 2012. However, notice the increase in the percentage of projects that spend more than 80 percent of their time in verification.

Figure 1. Percentage of ASIC/IC Project Time Spent in Verification

Perhaps one of the biggest challenges in design and verification today is identifying solutions to increase productivity and control engineering headcount. To illustrate the need for productivity improvement, we discuss the trend in terms of increasing engineering headcount. Figure 2 shows the mean peak number of engineers working on a project. Again, this is an industry average since some projects have many engineers while other projects have few. You can see that the mean peak number of verification engineers today is greater than the mean peak number of design engineers. In other words, there are, on average, more verification engineers working on a project than design engineers. This situation has changed significantly since 2007.

Figure 2. Mean Number of Peak Engineers per ASIC/IC Project

Another way to comprehend the impact of today’s project headcount trends is to calculate the compounded annual growth rate (CAGR) for both design and verification engineers. Between 2007 and 2014 the industry experienced a 3.7 percent CAGR for design engineers and a 12.8 percent CAGR for verification engineers, as shown in Figure 3. Clearly, the double-digit increase in required verification engineers has become a major project cost management concern, and is one indicator of growing verification effort.

Figure 3. Peak Engineers CAGR per ASIC/IC Project
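
For readers who want to reproduce these numbers, the CAGR values follow from the standard compound-growth formula, sketched below in LaTeX; H here is simply the mean peak headcount at the start and end of the period, not a value quoted from the study.

  \[
  \mathrm{CAGR} = \left( \frac{H_{2014}}{H_{2007}} \right)^{1/7} - 1
  \]

Inverting the formula, a 3.7 percent CAGR over the seven-year span implies roughly 1.037^7, or about 1.3x, growth in mean peak design engineers, while a 12.8 percent CAGR implies roughly 1.128^7, or about 2.3x, growth in mean peak verification engineers.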

But verification engineers are not the only project stakeholders involved in the verification process. Design engineers spend a significant amount of their time in verification too, as shown in Figure 4. In 2014, design engineers spent on average 53 percent of their time involved in design activities and 47 percent of their time in verification.

Figure 4. Where ASIC/IC Design Engineers Spend Their Time

However, this is a reversal of the trend observed in the 2010 and 2012 studies, which indicated that design engineers spent more time in verification activities than in design activities. The data suggest that design effort has risen significantly in the last two years when you take into account that: (a) design engineers are spending more time in design, and (b) there was a nine percent CAGR in required design engineers between 2012 and 2014 (shown in Figure 2), which is a steeper increase than the overall 3.7 percent CAGR for design engineers spanning 2007 through 2014. We will discuss a few factors that might be contributing to this increased design effort in upcoming blogs.

Figure 5 shows where verification engineers spend their time (on average). We do not show trends here since this aspect of project resources was not studied prior to 2012, and there were no significant changes in the results between 2012 and 2014. Our study found that verification engineers spend more of their time in debugging than in any other activity. Debug therefore remains an important research area, and future solutions here will be necessary for improving productivity and predictability within a project.

Figure 5. Where ASIC/IC Verification Engineers Spend Their Time

In my next blog (click here) I plan to discuss various ASIC/IC verification solutions adoption trends.

Quick links to the 2014 Wilson Research Group Study results


8 July, 2015

ASIC/IC Design Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here).  In my previous set of blogs, I focused on FPGA design and verification trends.  I now will shift the focus of this series of blogs from FPGA trends to ASIC/IC trends.

In this blog, I present trends related to various aspects of design to illustrate growing design complexity. Figure 1 shows the trends from the 2007, 2012, and 2014 studies in terms of active ASIC/IC design projects by design size (gates of logic and datapath, excluding memories). The 2014 study added more resolution in identifying larger design sizes (up to 500M or more gates), while the 2012 study’s upper bound was limited to 60M or more gates.

Figure 1. ASIC/IC Design Sizes

The key takeaway here is that the electronic industry continues to move to larger designs. In fact, 31 percent of today’s design projects are working on designs over 80M gates, while 17 percent of today’s design projects are working on designs over 500M gates.

But increased design size is only one dimension of the growing complexity challenge. What has changed significantly in design since the original Collett studies is the dramatic movement to SoC-class designs. In 2004, Collett found that 52 percent of designs included one or more embedded processors. Our 2014 study found that the number of designs with embedded processors had increased to 71 percent. Furthermore, 45 percent of all designs today contain two or more embedded processors, while 12 percent of today’s designs include eight or more embedded processors. SoC-class designs add a new layer of complexity to the verification process that did not exist with traditional non-SoC-class designs due to hardware and software interactions, new coherency architectures, and the emergence of complex network-on-a-chip interconnect.

In addition to the increasing number of embedded processors contained within an SoC, it is not uncommon to find on the order of 120 integrated IP blocks within today’s more advanced SoCs. Many of these IP blocks have their own clocking requirements, which often present new verification challenges due to metastability issues involving signals that cross between multiple asynchronous clock domains.

In Figure 2, we see that 93 percent of all ASIC/IC design projects today are working on designs that have two or more asynchronous clock domains.

Figure 2. Number of Asynchronous Clock Domains in ASIC/IC Designs

One of the challenges with verifying clock domain crossing issues is that there is a class of metastability bugs that cannot be demonstrated in simulation on an RTL model. To simulate these issues requires a gate-level model with timing, which is often not available until later stages in the design flow. However, emerging static clock-domain crossing (CDC) verification tools can identify clock domain issues directly on an RTL model at earlier stages in the design flow.
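
As an illustration of the structural fixes such CDC tools check for, here is a generic two-flip-flop synchronizer for a single-bit crossing; the module and signal names are illustrative, not taken from any particular tool's library.

  // Generic two-flip-flop synchronizer for a single-bit asynchronous crossing.
  module bit_sync (
    input  logic dst_clk,
    input  logic dst_rst_n,
    input  logic async_in,   // launched from another (asynchronous) clock domain
    output logic sync_out    // safe to consume in the destination domain
  );
    logic meta;              // first stage may go metastable; never use it directly

    always_ff @(posedge dst_clk or negedge dst_rst_n) begin
      if (!dst_rst_n) begin
        meta     <= 1'b0;
        sync_out <= 1'b0;
      end else begin
        meta     <= async_in;
        sync_out <= meta;
      end
    end
  endmodule

Static CDC tools typically flag single-bit crossings that lack this kind of structure, as well as multi-bit crossings that need handshakes or FIFOs, directly on the RTL model, long before a timing-annotated gate-level model exists.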

In my next blog (click here) I plan to discuss the growing ASIC/IC design project resource trends due to rising design complexity.

Quick links to the 2014 Wilson Research Group Study results


2 June, 2015

This year we are trying something new at the Verification Academy booth during next week’s 2015 Design Automation Conference.  We’ve decided to host an interactive panel on the controversial topic of Agile development. I say controversial because you typically find two camps of engineers when discussing the subject of Agile development—the believers and the non-believers.

My colleague Neil Johnson, principal consultant from XtremeEDA Corporation and a leading expert in Agile development, will provide some context for the topic with a short background on Agile methods to kick the panel off. Then I plan to join Neil on the panel, which will be moderated by Mentor’s own world-renowned Dennis Brophy.  Our intent is to have a healthy, interactive discussion with both the believers and the non-believers in the audience.

So, why is the subject of Agile development even worthy of discussion at DAC? Well, not to entirely give away my position on the subject…but I think it’s worthwhile to note some of the recent findings related to the root cause of logical and functional flaws from the 2014 Wilson Research Group Functional Verification Study (see figure below).

Clearly, design errors are a major factor contributing to bugs. Yet a growing concern is the number of issues surrounding the specification that are leading to logical and functional flaws.  In reality, there is no such thing as a perfect specification—and few projects can afford to wait to start development until perfection is achieved. Furthermore, in many market segments, late-stage changes in the specification are common practice to ensure that the final product is competitive in a rapidly changing market. Could Agile development, in which requirements and solutions evolve through collaboration between self-organizing and cross-functional teams, be the saving grace?  Please join us on June 8th at 5pm in the Verification Academy booth at DAC and hear what the experts are saying!


16 May, 2015

In a recent post on deepchip.com, John Cooley wrote about “Who Knew VIP?”. In addition, Mark Olen wrote about this same topic on the Verification Horizons blog. So are VIPs becoming more and more popular? Yes!

Here are the big reasons why I believe we are seeing this trend:

  • Ease of Integration
  • Ready-made Configurations and Sequences
  • Debug Capabilities

I will be digging deeper into each of these reasons and topics in a three-part series on the Verification Horizons blog. Today, in part 1, our focus is ease of integration, or EZ-VIP.

While developing VIP there are various trade-offs to consider: ease of use vs. amount of configuration, and protocol-specific features vs. commonality across protocols. A couple of years ago, when I first got my hands on Mentor’s VIP, there were some features that I really liked, some that I wasn’t familiar with, and some that I needed to learn. Over the last couple of years, there have been strides of improvement, with ease of use (EZ-VIP) a big one on that list.

EZ-VIP aims to make it easier to:

  • Make connections between QVIP interfaces and DUT signals
  • Integrate and configure a QVIP in a UVM testbench


This makes it easier for customers to become productive in hours rather than days.

Connectivity Modules:

Earlier, we gave users flexibility in choosing the direction of the VIP ports based on the use mode. The user needed to write certain glue logic in addition to setting the port directions.


Now we have created new connectivity modules that enable easy connectivity during integration. Each connectivity module is a wrapper around the VIP interface that contains the needed glue logic, with ports that carry protocol-standard names and the correct directions for the selected mode (e.g., Master, Slave, EP, RC). These modules enable the user to quickly integrate the QVIP with the DUT, as in the sketch below.
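
To illustrate the idea (this is a hypothetical sketch, not the actual QVIP code or naming), a connectivity module is essentially a thin wrapper whose ports already carry protocol-standard names and fixed directions for one mode, so DUT hookup reduces to a simple port map:

  // Hypothetical sketch of a master-mode connectivity module; the real QVIP
  // modules ship with the product and differ in names and contents.
  module axi4lite_master_conn #(parameter ADDR_W = 32) (
    input  logic              aclk,
    input  logic              aresetn,
    output logic [ADDR_W-1:0] awaddr,   // protocol-standard name, fixed direction
    output logic              awvalid,
    input  logic              awready
    // ... remaining write/read channel ports would follow the same pattern
  );
    // The wrapper would instantiate the VIP interface here and contain the glue
    // logic that the user previously had to write by hand for each direction.
  endmodule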


Quick Starter Kits:

These kits are specific to PCIe. They are customized, pre-packaged, easy-to-use verification environments for the serial and parallel interfaces of PCIe 1.0, 2.0, 3.0, 4.0 and mPCIe, tailored for all major IP vendors, and they can be used to verify PHY, Root Complex, and Endpoint designs. Users also get examples that can be used as a reference. These kits have dramatically reduced the bring-up time for PCIe QVIP with these IPs to less than a day.

In part 2 of this series, I will talk about QVIP configuration and sequences. Stay tuned, and I look forward to hearing your feedback on this series of posts on VIP.


11 May, 2015

FPGA Verification Technology Adoption Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I focused on the effectiveness of verification in terms of FPGA project schedule and bug escapes. In this blog, I present verification techniques and technologies adoption trends, as identified by the 2014 Wilson Research Group study.

An interesting trend we see in the FPGA space is a continual maturing of its functional verification processes. In fact, we find that the FPGA design space is about where the ASIC/IC design space was five years ago in terms of verification maturity—and it is catching up quickly. A question you might ask is, “What is driving this trend?” In Part 1 of this blog series I showed rising design complexity with the adoption of more advanced FPGA designs, as well as multiple embedded processor architectures targeted at FPGA designs. In addition, I’ve presented trend data that showed an increase in total project time and effort spent in verification (Part 2 and Part 3). My belief is that the industry creating FPGA designs is being forced to mature its functional verification processes to address today’s increasing complexity.

FPGA Simulation Technique Adoption Trends

Let’s begin by comparing FPGA adoption trends related to various simulation techniques from both the 2012 and 2014 Wilson Research Group studies, as shown in Figure 1.

Figure 1. Simulation-based technique adoption trends for FPGA designs

You can clearly see that the industry is increasing its adoption of various functional verification techniques for FPGA-targeted designs. This past year I have spent a significant amount of time in discussions with FPGA project managers around the world. During these discussions, most managers mention the drive to improve the verification process within their projects due to rising complexity. The Wilson Research Group data suggest that these claims are valid.

FPGA Formal Technology Adoption Trends

Figure 2 shows the adoption percentages for formal property checking and automatic formal techniques.

Figure 2. FPGA Formal Technology Adoption

Our study looked at two forms of formal technology adoption (i.e., formal property checking and automatic formal verification solutions). Examples of automatic formal verification solutions include X safety checks, deadlock detection, reset analysis, and so on.  The key difference is that for formal property checking the user writes a set of assertions that they wish to prove.  Automatic formal verification solutions do not require the user to write assertions.

In my next blog (click here), I’ll focus on FPGA design and verification language adoption trends, as identified by the 2014 Wilson Research Group study.

Quick links to the 2014 Wilson Research Group Study results


21 April, 2015

FPGA Verification Effectiveness Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I focused on the amount of effort spent in FPGA verification. We have seen in previous blogs that a significant amount of effort is being applied to FPGA functional verification. In this blog I focus on the effectiveness of verification in terms of FPGA project schedule and bug escapes.

FPGA Schedules

Figure 1 presents the design completion time compared to the project’s original schedule. One surprise in the 2014 findings is that we saw an improvement in the number of FPGA projects meeting schedule compared to 2012. It is unclear why we are seeing this trend now. Perhaps managers are getting better at scheduling—or are becoming more pessimistic with their schedules. Or perhaps it is due to the increased amount of reuse (both design and verification IP). Or is the increased amount of FPGA verification effort prior to “getting to the lab” starting to pay off for some projects? This data point raises some interesting questions worth exploring further. Regardless, a significant number of FPGA projects still miss their originally planned schedule.


Figure 1. FPGA design completion time compared to the project’s original schedule

FPGA Lab Iterations

ASIC/IC projects track the number of required spins that occur prior to market production. In fact, this can be a useful metric for determining the overall verification effectiveness of an ASIC/IC project. Unfortunately, we lack such a metric for FPGA projects. For the 2014 study, we decided to ask about the average number of lab iterations required before the design went into production. Again, this was done to try to get a sense of a project’s verification effectiveness. The results are shown in Figure 2. However, I’m not convinced that the number of FPGA lab iterations is analogous to ASIC/IC respins as a verification effectiveness metric. Perhaps a better metric would be the number of bugs that escape into production and are found in the field. This is something we should consider in future studies.


Figure 2. Number of FPGA iterations in the lab (no trend data available)

FPGA Bug classification

For the 2014 study, we asked the FPGA project participants to identify the types of flaws that were contributing to rework in the lab. In Figure 3, I show the two leading causes of rework, which are logical and functional bugs, as well as clocking bugs. The data seem to suggest that these issues are growing, perhaps due to the design of larger and more complex FPGAs. Again, this is a data point worth exploring further.


Figure 3. Types of Flaws Resulting in FPGA Rework

In Figure 4, I show trends in terms of the main contributing factors leading to logic and functional flaws—and you can see that design errors are the main cause of functional flaws. But note that a significant number of flaws are related to some aspect of the specification—such as changes in the specification, or incorrect or incomplete specifications. Problems associated with the specification process are a common theme I often hear when visiting FPGA customers.


Figure 4. Root cause of FPGA functional flaws

In my next blog (click here), I plan to present the findings from our study on FPGA verification technology adoption trends.

Quick links to the 2014 Wilson Research Group Study results


16 April, 2015

Do automated formal apps really help D&V engineers “cross the chasm” and start using formal verification directly? In Part 1 of this case study on Oracle’s “Project RAPID”, the Oracle team’s appetite for using formal verification was whetted by impressive results from the Questa Connectivity Check and Questa Register Check apps. Picking up the story where we left off, award-winning author Ram Narayan explains how success with these automated formal apps inspired the team to try their hands at using formal technology directly with the Questa Property Checking (PropCheck) app for classical model/property checking. Ram writes:

“Some of the IP units … were good candidates for formal verification. That’s because it was very reasonable to expect to be able to prove the complete functionality of these units formally. We decided to target these units with the Assurance strategy.”

and

“Spurred by the success of applying [the apps], we considered applying formal methods in a bug hunting mode for units that were already being verified with simulation. Unlike Assurance, Bug Hunting doesn’t attempt to prove the entire functionality, but rather targets specific areas where simulation is not providing enough confidence that all corner case bugs have been discovered.”

The results of their assurance and bug hunting strategies speak for themselves: Table 1 in the article reports that the team found 79 bugs with these formal verification techniques!

Given this success with formal, the team gained the confidence to apply formal in more DUT areas where formal would be more effective than simulation – i.e. “using the best tool for the job” as necessary. Indeed, a common thread throughout the whole story is how formal and simulation were often used in tandem to simultaneously leverage the unique strengths of each technology to improve the overall quality of verification. The article’s conclusion begins with this observation:

“Formal verification is a highly effective and efficient approach to finding bugs. Simulation is the only means available to compute functional coverage towards verification closure. In this project we attempted to strike a balance between the two methodologies and to operate within the strengths of each approach towards meeting the project’s goals.”

The bottom-line: formal has “gone mainstream” in this team’s current and future projects:

“The most significant accomplishment to me is the shift in the outlook of the design team towards formal. According to one designer whose unit was targeted with Bug Hunting, ‘I was initially skeptical about what formal could do. From what I have seen, I want to target my next design first with formal and find most of the bugs.’ … The time savings and improved design quality that formal verification brings are welcome benefits. We plan to continue to push the boundaries of what is covered by formal in future projects.”

Granted, the road from zero formal to full adoption might not have been quite as smooth as this engaging article describes. Still, their declared intent to keep using formal apps in conjunction with formal property checking – let alone their project’s impressive results – appears to conclusively prove the original thesis. Namely, once formal’s considerable power and benefits are introduced by a series of formal apps, there is no going back, and formal becomes a permanent part of the user’s verification tool kit.

Does Ram’s/Oracle’s journey resonate with you? Have you had the same experience or seen something similar at your employer or clients?  Please share your thoughts in the comments below, or contact me offline.

Until next time, may your coverage be high and your power consumption be low,

Joe Hupcey III

P.S. FYI, the author of the Verification Horizons article described above (and the related award-winning DVCon 2014 poster) was also a co-author of the 2015 DVCon USA Best Paper, 10.1 “I Created the Verification Gap” by Ram Narayan and Tom Symons of Oracle Labs.  Congratulations Ram and Tom!

Reference Links:

Verification Horizons, March 2015, Volume 11, Issue 1:
Evolving the Use of Formal Model Checking in SoC Design Verification
Ram Narayan, Oracle Corp.

https://verificationacademy.com/verification-horizons/march-2015-volume-11-issue-1/Evolving-the-Use-of-Formal-Model-Checking-in-SoC-Design-Verification

—–

DVCon USA, March 2014, 1P.2:
The Future of Formal Model Checking is NOW! Leveraging Formal Methods for RAPID System On Chip Verification, (Poster Presentation Honorable Mention)
Ram Narayan, Oracle Corp.

http://events.dvcon.org/events/proceedings.aspx?id=163-1-P


9 April, 2015

One of the biggest developments in the formal verification world in the past several years has been the industry-wide growth of formal-based “apps” — automated applications that leverage formal’s exhaustive verification technology “under the hood” to focus on specific verification tasks well suited to formal algorithms. But do formal apps really help D&V engineers “cross the chasm” and start using formal verification directly?  (Or if you prefer, are apps an effective “Trojan Horse”?)  A recent article in Verification Horizons by Oracle’s Ram Narayan titled “Evolving the Use of Formal Model Checking in SoC Design Verification,” about the evolution of the verification methodology employed on Oracle’s “Project RAPID,” suggests the answer is “yes”.


In a nutshell, the clear benefits Ram’s team received from formal apps inspired them to try their hand at formal model checking, and their results exceeded all expectations. I recommend you read the article in its entirety because it’s a great real-world case study, rich with anecdotes from the front-line engineer himself. (Indeed, this article was inspired by Ram’s award-winning DVCon 2014 poster, but I digress.) But for the purposes of this post, allow me to focus exclusively on the highlights pertaining to the “crossing the chasm” thesis. Consider the following excerpts.

* First, they started from scratch:

“At the outset of the project, there were no specific plans to use formal verification on RAPID. We did not have any infrastructure in place for running formal tools, and neither did we have anyone on the team with any noteworthy experience using these tools.”

* The first app they tried exceeded all expectations: Like many customers, Oracle got their feet wet with formal-driven SoC connectivity checking. And like 100% of Questa Connectivity Check app customers, they came away impressed:

“Our goal was to catch trivial design errors through formal methods without having to rely on lengthy and in some cases, random SoC simulations. Given our modest expectations at the outset, we would have been satisfied if we just verified these SoC connectivity checks with formal tools.  … SoC Connectivity checks were written to verify the correct connectivity between critical SoC signals like interrupts, events and other control/datapath signals. These checks are trivial to define and are of high value. Proving these connections saved us significant cycles in SoC simulations.”

This is not just a gut feeling on the author’s part: the bottom row of Table 2 in the article (showing the Questa Connectivity Check app cutting the schedule by 66%) backs up the above quote with real project data.


Article Table 2: Formal Verification Time Savings on Oracle’s Project RAPID – formal-based connectivity verification with the Questa Connectivity Check app delivers 66% schedule savings
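
For readers new to connectivity checking, the sketch below shows what one such check conceptually reduces to; note that the Questa Connectivity Check app generates these proofs from a connectivity specification rather than requiring hand-written assertions, and the signal names here are hypothetical.

  // Conceptual sketch only; signal names are hypothetical, and in practice the
  // app derives such checks from a connectivity spec rather than hand-written SVA.
  module soc_conn_check (
    input logic clk,
    input logic uart_irq_src,   // interrupt at the IP boundary
    input logic intc_irq_in7,   // expected destination pin at the interrupt controller
    input logic path_enable     // qualifying condition for the connection
  );
    // When the path is enabled, the destination must track the source
    // (a registered path would add a ##N delay to the consequent).
    a_uart_irq_conn: assert property (@(posedge clk)
        path_enable |-> (intc_irq_in7 == uart_irq_src));
  endmodule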


* Another app is tried, and it’s also wildly successful: the Questa Register Check app was the next formal app to be applied. Not only did it take care of the immediate control and status register verification task, but it also enabled more effective downstream verification:

“The Register Access Verification established controllability and observability of the registers in the unit from its interface. The IP core logic verification could now safely use the control registers as inputs to properties on the rest of the logic they drive. In addition to these registers, we chose a few internal nodes in the design as observation and control points in our properties. These points gave us additional controllability and observability to the design and reduced the complexity of the cones of logic being analyzed around them. We proved the correctness (observability) of these points prior to enjoying the benefits of using them (controllability) for other properties. This approach made it easier to write properties on the entire unit without any compromise on the efficacy of the overall unit verification.”

At this point in the story, the Oracle team is still confining their use of formal to the stable of available automated formal apps. However, as we’ll see in Part 2 of this case study, this success bred curiosity in the underlying technology …

Until next time, may your coverage be high and your power consumption be low,

Joe Hupcey III

P.S. FYI, the author of the Verification Horizons article described above (and the related award-winning DVCon 2014 poster) was also a co-author of the 2015 DVCon USA Best Paper, 10.1 “I Created the Verification Gap” by Ram Narayan and Tom Symons of Oracle Labs.  Congratulations Ram and Tom!

Reference Links:

Verification Horizons, March 2015, Volume 11, Issue 1:
Evolving the Use of Formal Model Checking in SoC Design Verification
Ram Narayan, Oracle Corp.

https://verificationacademy.com/verification-horizons/march-2015-volume-11-issue-1/Evolving-the-Use-of-Formal-Model-Checking-in-SoC-Design-Verification

—–

DVCon USA, March 2014, 1P.2:
The Future of Formal Model Checking is NOW! Leveraging Formal Methods for RAPID System On Chip Verification, (Poster Presentation Honorable Mention)
Ram Narayan, Oracle Corp.

http://events.dvcon.org/events/proceedings.aspx?id=163-1-P

