Posts Tagged ‘formal verification’

22 August, 2015

Impact of Design Size on First Silicon Success

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I presented verification results in terms of schedules, number of required spins, and classification of functional bugs. In this blog, I conclude the series on the 2014 Wilson Research Group Functional Verification Study by providing a deeper analysis of respins by design size.

It’s generally assumed that the larger the design, the greater the likelihood of bugs. Yet a question worth answering is how effective projects are at finding these bugs prior to tapeout.

In Figure 1, we first extract the 2014 data from the required number of spins trends presented in my previous blog (click here), and then partition this data into sets based on design size (that is, designs less than 5 million gates, designs between 5 and 80 million gates, and designs greater than 80 million gates). This led to perhaps one of the most startling findings from our 2014 study: the data suggest that the smaller the design, the lower the likelihood of achieving first silicon success! While 34 percent of the designs over 80 million gates achieve first silicon success, only 27 percent of the designs less than 5 million gates are able to do so. The difference is statistically significant.


Figure 1. Number of spins by design size

To understand what factors might be contributing to this phenomenon, we decided to apply the same partitioning technique while examining verification technology adoption trends.

Figure 2 shows the adoption trends for various verification techniques from 2007 through 2014, which include code coverage, assertions, functional coverage, and constrained-random simulation.

One observation we can make from these adoption trends is that the electronic design industry is maturing its verification processes. This maturity is likely due to the need to address the challenge of verifying designs with growing complexity.


Figure 2. Verification Technology Adoption Trends

In Figure 3 we extract the 2014 data from the various verification technology adoptions trends presented in Figure 2, and then partition this data into sets based on design size (that is, designs less than 5 million gates, designs between 5 and 80 million gates, and designs greater than 80 million gates).


Figure 3. Verification Technology Adoption by Design Size

Across the board we see that designs less than 5 million gates are less likely to adopt code coverage, assertions, functional coverage, and constrained-random simulation. Hence, if you correlate this data with the number of spins by design size (as shown in Figure 1), then the data suggest that the verification maturity of an organization has a significant influence on its ability to achieve first silicon success.

As a side note, you might have noticed that there is less adoption of constrained-random simulation for designs greater than 80 million gates. A few factors contribute to this: (1) constrained-random simulation works well at the IP and subsystem level, but does not scale to the full-chip level for large designs; and (2) a number of projects working on large designs focus predominantly on integrating existing or purchased IP. Hence, these projects concentrate their verification effort on integration and system validation tasks, where constrained-random simulation is rarely applied.

So, to conclude this blog series: in general, the industry is maturing its verification processes, as witnessed by the verification technology adoption trends. However, we found that smaller designs were less likely to adopt what are generally viewed as industry best verification practices and techniques. Similarly, we found that projects working on smaller designs tend to have a smaller ratio of peak verification engineers to peak designers. Could fewer available verification resources, combined with the lack of adoption of more advanced verification techniques, account for fewer small designs achieving first silicon success? The data suggest that this might be one contributing factor. It’s certainly something worth considering.

Quick links to the 2014 Wilson Research Group Study results


18 August, 2015

In Part 1 of this series, inspired by security researchers who were able to take over a new Jeep and drive it into a ditch, I asserted that in the future all vehicles will need to encrypt their internal control and data bus traffic with an encryption key. This key would be stored in a secure memory element of some sort – a separate memory chip or a register bank inside a system on a chip (SoC). As such, Design & Verification (D&V) engineers will need to verify that this secure storage can’t be compromised.

White hat hacking and constrained-random test benches don’t scale and aren’t exhaustive, so in this post I’ll describe how formal verification technology can be brought to bear.

First, the verification challenge here can be boiled down to two concerns:

(A) Confidentiality: can the key be read by an unauthorized party, or accidentally “leak” to the outputs?

(B) Integrity: can the key be altered, overwritten, or erased by the bad guys (or due to some unforeseen hardware or firmware bug)?

The only way to exhaustively verify (A) and (B) with only a few hours of compute time on common, low-cost servers is by employing formal verification technology. In a nutshell, “formal verification uses mathematical formal methods to prove or disprove the correctness of a system’s design with respect to formal specifications expressed as properties.”(1)

Returning to our automotive example, the “formal specification” is that (A) and (B) above can never happen, i.e. the key can only be read and edited by authorized parties through specific, secure pathways – anything else is a design flaw that must be fixed before going into production.
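To make the formal specification concrete, here is a minimal SVA sketch of what properties (A) and (B) can look like, assuming a hypothetical design with a 128-bit key register, an observable output bus, and a single authorized-write enable. All signal names and the deliberately simplified checks are illustrative assumptions, not taken from any real design.

```systemverilog
// Minimal illustrative sketch -- all names are hypothetical assumptions.
module key_props (
  input logic         clk, rst_n,
  input logic [127:0] key_reg,         // the stored encryption key
  input logic         key_valid,       // high once a real key is programmed
  input logic [127:0] data_out,        // an observable output bus
  input logic         secure_write_en  // high only on the authorized write path
);
  // (A) Confidentiality: the programmed key value must never appear verbatim
  // on the output. (A real check must also consider partial/encoded leakage.)
  a_no_key_leak: assert property (@(posedge clk) disable iff (!rst_n)
    key_valid |-> data_out != key_reg);

  // (B) Integrity: once valid, the key may only change via the authorized path.
  a_no_tamper: assert property (@(posedge clk) disable iff (!rst_n)
    (key_valid && !secure_write_en) |=> $stable(key_reg));
endmodule
```

A formal tool then either proves such properties hold for all reachable states and input sequences, or produces a counterexample trace showing exactly how the key leaks or gets clobbered.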

So what can D&V engineers do at the RTL level to apply formal technology to this verification challenge – especially if they have never used formal tools or written SystemVerilog Assertions (SVA) before? Luckily, Mentor has developed a fully automated solution that exhaustively verifies that only the paths you specify can reach security- or safety-critical storage elements – i.e., it formally proves the confidentiality and integrity of your DUT’s “root of trust”. The best part is that no knowledge of formal or property specification languages is required.

Questa Secure Check app block diagram

Specifically, taking as input your RTL plus cleartext, human- and machine-readable Tcl code that specifies the secure/safety-critical storage and allowed access paths, the Questa Secure Check app automates formal technology to exhaustively verify that the “root of trust” – i.e., the storage for the system’s encryption keys – cannot be read or tampered with via unauthorized paths.

To expedite the analysis and/or minimize formal compile and run time, the app supports “black boxing” of clearly extraneous IPs and paths to keep the focus on the secure channels alone. The result: an exhaustive proof of your design’s integrity and/or clear counterexamples showing how your specification can be violated.


Questa Secure Check app GUI example: users click on the “Insecure Path” of concern and the app generates a schematic of the path and waveforms related to the signals involved

In summary, only a sound hardware-based solution based on securely stored encryption keys will establish a true root of trust. Only an exhaustive formal analysis can verify this with mathematical certainty, and thus the Questa Secure Check app was created to help customers address this challenge.

I look forward to hearing your feedback and comments on what you are doing to address this challenge.

Keep your eyes on the road and your hands upon the wheel,

Joe Hupcey III

References:

(1) Using Formal Methods to Verify Complex Designs, IBM Haifa Research Lab, 2007
https://www.research.ibm.com/haifa/projects/verification/RB_Homepage/papers/wp_formal_verification_1.pdf

P.S. Shameless commercial pitch: the lead customer of the Questa Secure Check app is in the consumer electronics space, where their products are subject to world-wide attack, 24/7/365. Suffice it to say that if Secure Check can help harden this customer’s system against determined, continuous attacks, it can help automakers, medical device manufacturers, smart phone designers, aircraft companies, etc. A new Verification Academy course on this topic is coming out this autumn; in the meantime, to learn more feel free to contact me offline, or ask questions in the comments section below.


5 August, 2015

[Preface: everywhere it refers to automobiles in this post, you can also swap in “X-Ray machine”, “pacemaker”, and “aircraft”]

The dark side of our connected future is here: from the comfort of a living room sofa, security researchers were able to remotely disable the brakes and transmission of a new Jeep Cherokee – literally driving the vehicle into a ditch (“Hackers Remotely Kill A Jeep On The Highway – With Me In It”, Wired, 7-21-15). Another group of researchers was able to hack into a car’s braking and other critical systems via the digital audio broadcast (DAB) infotainment system (“Now car hackers can bust in through your motor’s DAB RADIO”, The Register, 7-24-2015). In this form of attack, multiple vehicles could be affected simultaneously.


Security researcher Charlie Miller attempts to reverse the Jeep out of a ditch after its brakes were remotely disabled. Source: Wired Magazine

Fortunately no one has been hurt in these experiments, and manufacturers have been quick to respond with patches. But these two stories (and a growing number of others like them) demonstrate just how insecure today’s automobile electronics are.

So what can be done to prevent this?

First, I’m not here to argue that there is a single “silver bullet”.  To combat the numerous direct and side-channel attacks, there need to be multiple, overlapping solutions that provide the necessary defense in depth. That said, the #1 priority is to secure the “root of trust”, from which everything else – the hardware, firmware, OS, and application layer’s security – is derived. If the root of trust can be compromised, then the whole system is vulnerable.

So how exactly am I defining the “root of trust”? I assert that in the near future the root of trust will effectively be an encryption key – a digital signature – encoded into the electronics of every vehicle. Hence, I argue that the data packets transiting the vehicle’s interior networks will need to be “signed” with the vehicle’s signature and decrypted by the receiving sensor packs, Engine Control Units (ECUs), radios, etc.

That’s right: encrypt all packets in all the interior data networks. Every. Single. One.

Is this overkill? Look again at the picture of the Jeep in the ditch.

Unfortunately, most automobiles being sold today cannot support this proposal, since popular bus protocols like CAN and LIN simply do not have the bandwidth or protocol architecture for it.  However, real-time packet encryption/decryption is certainly possible with the increasingly popular Automotive Ethernet standard (whose early adopters include BMW, Hyundai, and Volkswagen). In such an advanced system, the automaker would embed a unique encryption key in each vehicle in the factory (exactly like set-top-box and game console makers do today). The key would be stored in a secure memory element of some sort – a separate memory chip or a register bank inside a system on a chip (SoC). As such, D&V engineers will need to verify that this secure storage can’t be compromised.
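As a thought experiment only, a bare-bones RTL sketch of such a one-time-programmable key store might look like the following; the interface and the lock-after-first-write scheme are my own simplifying assumptions, not a reference design.

```systemverilog
// Illustrative sketch of one-time-programmable key storage; not a reference
// design -- all names and the locking scheme are hypothetical.
module key_store (
  input  logic         clk, rst_n,
  input  logic         program_en,  // asserted once, on the factory programming path
  input  logic [127:0] key_in,
  output logic [127:0] key_out
);
  logic locked;  // set permanently after the first write

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      key_out <= '0;
      locked  <= 1'b0;
    end else if (program_en && !locked) begin
      key_out <= key_in;  // accept the key exactly once
      locked  <= 1'b1;    // and refuse all further writes
    end
  end
endmodule
```

The question that follows is whether any reachable input sequence can alter or expose key_out once locked is set – which is exactly the verification challenge discussed next.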

So what’s the best verification methodology to ensure that this secure storage can’t be (A) read by an unauthorized party or accidentally “leak” to the outputs; or (B) be altered, overwritten, or erased by the bad guys?

Unfortunately, the classical approach of employing squads of “white hat” hackers to try to crack the system drastically decreases in effectiveness as circuit complexity increases. Similarly, even a really well-designed constrained-random simulation environment is not exhaustive either. Consequently, a mathematical, “formal” analysis is required. The good news is that my group has been successful partnering with customers to develop exactly this type of solution. I’ll explain in Part 2 of this series …

Until then, keep your eyes on the road and your hands upon the wheel; and I look forward to hearing your comments.

Joe Hupcey III

P.S. I grant there are ways to isolate software processes from each other to forestall hacking (indeed, my colleagues down the hall in Mentor’s Embedded Systems Division coach customers on how to do this every day). However, I assert that only a sound hardware-based solution built on securely stored encryption keys will establish a secure root of trust, as well as give consumers peace of mind.

In a somewhat related note, I predict that in the near future computers/phones/tablets will start implementing and promoting hardware switches to physically disconnect their built-in microphones and cameras. Similarly, just as all cars now have air bag shutoff switches, there should also be a mechanical switch on the dashboard that disconnects all the vehicle’s external antennas. Granted, this will not completely eliminate clever, unforeseen side-channel attacks; but it will go a long way. Besides, the marketing value will be fantastic.


19 July, 2015

ASIC/IC Verification Technology Adoption Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I focused on the growing ASIC/IC design project resource trends due to rising design complexity. In this blog I examine various verification technology adoption trends.

Dynamic Verification Techniques

Figure 1 shows the ASIC/IC adoption trends for various simulation-based techniques from 2007 through 2014, which include code coverage, assertions, functional coverage, and constrained-random simulation.


Figure 1. ASIC/IC Dynamic Verification Technology Adoption Trends

One observation from these adoption trends is that the electronic design industry is maturing its verification processes. This maturity is likely due to the growing complexity of designs as discussed in the previous section. Another observation is that constrained-random simulation adoption appears to be leveling off. This trend is likely due to the scaling limitations of constrained-random simulation. This technique generally works well at the IP block or subsystem level in simulation, but does not scale to the entire SoC integration level.

ASIC/IC Static Verification Techniques

Figure 2 shows the ASIC/IC adoption trends for formal property checking (e.g., model checking), as well as automatic formal applications (e.g., SoC integration connectivity checking, deadlock detection, X semantic safety checks, coverage reachability analysis, and many other properties that can be automatically extracted and then formally proven). Formal property checking traditionally has been a high-effort process requiring specialized skills and expertise. However, the recent emergence of automatic formal applications provides narrowly focused solutions that do not require specialized skills to adopt. While formal property checking adoption experienced only incremental growth between 2012 and 2014, the adoption of automatic formal applications increased by 62 percent. In general, formal solutions (i.e., formal property checking combined with automatic formal applications) are one of the fastest growing segments in functional verification.


Figure 2. ASIC/IC Formal Technology Adoption

Emulation and FPGA Prototyping

Historically, the simulation market has depended on processor frequency scaling as one means of continual improvement in simulation performance. However, as processor frequency scaling levels off, simulation-based techniques are unable to keep up with today’s growing complexity. This is particularly true when simulating large designs that include both software and embedded processor core models. Hence, acceleration techniques are now required to extend ASIC/IC verification performance for very large designs. In fact, emulation and FPGA prototyping have become key platforms for SoC integration verification where both hardware and software are integrated into a system for the first time. In addition to SoC verification, emulation and FPGA prototyping are also used today as a platform for software development.

Today, 35 percent of the industry has adopted emulation, while 33 percent of the industry has adopted FPGA prototyping. Figure 3 describes various reasons why projects are using these techniques. You might note that the results do not sum to 100 percent since multiple answers were accepted from each study participant. Also, we are unable to show trend analysis here since previous studies did not examine this aspect of functional verification.


Figure 3. Why Was Emulation or FPGA Prototyping Used?

Figure 4 partitions the data for emulation and FPGA prototyping adoption by design size as follows: less than 5M gates, 5M to 80M gates, and greater than 80M gates. Notice that the adoption of emulation continues to increase as design sizes increase. However, the adoption of FPGA prototyping drops off rapidly as design sizes increase beyond 80M gates. Actually, the drop-off point is more likely around 40M gates or so, since this is the average capacity limit of many of today’s FPGAs. This graph illustrates one of the problems with adopting FPGA prototyping for very large designs: the increased engineering effort required to partition the design across multiple FPGAs. However, better FPGA partitioning solutions are now emerging from EDA to address these challenges, as are better FPGA debugging solutions that address today’s lab visibility challenges. Hence, I anticipate seeing an increase in adoption of FPGA prototyping for larger gate counts as time goes forward.


Figure 4. Emulation and FPGA Prototyping Adoption by Design Size

In my next blog (click here) I plan to discuss various ASIC/IC language and library adoption trends.

Quick links to the 2014 Wilson Research Group Study results


8 June, 2015

Do you have a really tough verification problem – one that takes seemingly forever for a testbench simulation to solve – and are left wondering whether an automated formal application would be better suited for the task?

Are you curious about formal or clock-domain crossing verification, but are overwhelmed by all the results you get from a Google search?

Are you worried that adding in low power circuitry with a UPF file will completely mess up your CDC analysis?

Good news: inspired by the success of the UVM courses on the Verification Academy website, the Questa Formal and CDC team has created all new courses on a whole variety of formal and CDC-related subjects that address these questions and more.  New topics that are covered include:

* What’s a formal app, and what are the benefits of the approach?

* Reviews of automated formal apps for bug hunting, exhaustive connectivity checking and register verification, X-state analysis, and more

* New topics in CDC verification, such as the need for reconvergence analysis, and power-aware CDC verification

* How to get started with direct property checking, including test planning for formal, SVA coding tricks that get the most out of the formal analysis engines AND ensure reuse with simulation and emulation (see the sketch after this list), how to set up the analysis for rapidly reaching a solution, and how to measure formal coverage and estimate whether you have enough assertions
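As a small taste of that last topic, here is a sketch of an assertion module written so the identical code can run in formal, simulation, and (within the synthesizable subset) emulation; the request/grant handshake and all names are invented for illustration.

```systemverilog
// Illustrative request/grant handshake checker, written for reuse across
// engines: no formal-only constructs, standard clock/reset disabling.
module req_gnt_checker #(parameter int MAX_WAIT = 16) (
  input logic clk, rst_n,
  input logic req, gnt
);
  // Every request must be granted within MAX_WAIT cycles.
  a_req_granted: assert property (@(posedge clk) disable iff (!rst_n)
    req |-> ##[1:MAX_WAIT] gnt);

  // No grant without an outstanding request.
  a_no_spurious_gnt: assert property (@(posedge clk) disable iff (!rst_n)
    gnt |-> req);

  // A cover property doubles as a formal reachability target and a
  // simulation functional-coverage point.
  c_gnt_then_req: cover property (@(posedge clk) disable iff (!rst_n)
    gnt ##1 req);
endmodule
```

Attaching such a checker with bind keeps the design source untouched, which is part of what makes the same properties reusable across engines.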

The best part: all of this content is available NOW at www.verificationacademy.com, and it’s all FREE!

Enjoy!

Joe Hupcey III,
on behalf of the Questa Formal and CDC team

P.S. If you’re coming to the DAC in San Francisco, be sure to come by the Verification Academy booth (#2408) for live presentations, end-user case studies, and demos on the full range of verification topics – UVM, low power, portable stimulus, formal, CDC, hardware-software co-verification, and more.  Follow this link for all the details & schedule of events (including “Formal & CDC Day” on June 10!): http://www.mentor.com/events/design-automation-conference/schedule


11 May, 2015

FPGA Verification Technology Adoption Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I focused on the effectiveness of verification in terms of FPGA project schedule and bug escapes. In this blog, I present verification technique and technology adoption trends, as identified by the 2014 Wilson Research Group study.

An interesting trend we see in the FPGA space is a continual maturing of its functional verification processes. In fact, we find that the FPGA design space is about where the ASIC/IC design space was five years ago in terms of verification maturity—and it is catching up quickly. A question you might ask is, “What is driving this trend?” In Part 1 of this blog series I showed rising design complexity with the adoption of more advanced FPGA designs, as well as multiple embedded processor architectures targeted at FPGA designs. In addition, I’ve presented trend data that showed an increase in total project time and effort spent in verification (Part 2 and Part 3). My belief is that the industry creating FPGA designs is being forced to mature its functional verification processes to address today’s increasing complexity.

FPGA Simulation Technique Adoption Trends

Let’s begin by comparing FPGA adoption trends related to various simulation techniques from both the 2012 and 2014 Wilson Research Group studies, as shown in Figure 1.

Figure 1. Simulation-based technique adoption trends for FPGA designs

You can clearly see that the industry is increasing its adoption of various functional verification techniques for FPGA-targeted designs. This past year I have spent a significant amount of time in discussions with FPGA project managers around the world. During these discussions, most managers mention the drive to improve verification processes within their projects due to rising complexity. The Wilson Research Group data suggest that these claims are valid.

FPGA Formal Technology Adoption Trends

Figure 2 shows the adoption percentages for formal property checking and automatic formal techniques.

Figure 2. FPGA Formal Technology Adoption

Our study looked at two forms of formal technology adoption (i.e., formal property checking and automatic formal verification solutions). Examples of automatic formal verification solutions include X safety checks, deadlock detection, reset analysis, and so on.  The key difference is that formal property checking requires the user to write a set of assertions that they wish to prove, while automatic formal verification solutions do not.
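For example, a user-written property for formal property checking might look like the hypothetical arbiter check below; an automatic formal solution would instead derive comparable checks (for X safety, deadlock, and so on) directly from the RTL structure, with no assertions written by the user.

```systemverilog
// Hypothetical user-written assertion for formal property checking:
// the arbiter must never grant more than one requester at a time.
module arb_props #(parameter int N = 4) (
  input logic         clk, rst_n,
  input logic [N-1:0] grant
);
  a_grant_onehot: assert property (@(posedge clk) disable iff (!rst_n)
    $onehot0(grant));
endmodule
```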

In my next blog (click here), I’ll focus on FPGA design and verification language adoption trends, as identified by the 2014 Wilson Research Group study.

Quick links to the 2014 Wilson Research Group Study results


7 May, 2015

For all things verification, you will want to stop by the Verification Academy booth #2408 at DAC to interact with experts exploring the challenges of IC design and verification.  At the top of each hour, the Verification Academy will feature a presentation followed by a lively conversation.  Presentations will not be repeated so each hour will be unique.

We have themed each of the days as well:

  • Monday is “Debug Day”
  • Tuesday is “Standards & FPGA Day”
  • Wednesday is “Formal Verification Day”

Naturally, you will find a few exceptions to those rules when you look at the program in detail.  Please register for Verification Academy sessions here: Monday Registration | Tuesday Registration | Wednesday Registration.  [NOTE: the Verification Academy sessions are highlighted with a blue background when you visit the registration site.]  A concise listing of all the Verification Academy sessions can be found here.

We will feature an end-of-the-day reception on Monday at the Verification Academy booth after the last presentation.  Neil Johnson (XtremeEDA) and Mentor’s Harry Foster will explore Agile Evolution in SoC Verification in that last session, which begins at 5pm.  Neil is a proponent of this methodology as a means to help build in design quality and simplify the task of verification.  In addition to being an advocate, he is also a practitioner: an open-source hardware developer and moderator at www.AgileSoC.com.  We expect the conversation following this informative session to be a lively one, and we invite everyone to continue it over cocktails and hors d’oeuvres at 5:30pm.

We are sponsoring other events outside of the Verification Academy as well.  Tuesday is truly “Standards Day” at DAC.  In addition to the standards theme at the Verification Academy booth, you can kick off the day at the Accellera Breakfast and later in the day attend the IEEE DASC, Accellera and Si2 System Level Low Power Workshop.

Registration

If you have not yet registered for DAC, do so now.  If you do not have plans to register for the full technical conference, many conference events are free if you select the “I LOVE DAC” registration option before May 19th!  In fact, all the “Standards Day” events mentioned above are free with early I Love DAC registration.  Simply click here and you will be taken to the “I Love DAC” registration page.  Register before May 19th, as after that date a $95 minimum fee applies.

See you at DAC!


16 April, 2015

Do automated formal apps really help D&V engineers “cross the chasm” and start using formal verification directly? In Part 1 of this case study on Oracle’s “Project RAPID”, the Oracle team’s appetite for formal verification was whetted by impressive results from the Questa Connectivity Check and Questa Register Check apps. Picking up the story where we left off, award-winning author Ram Narayan explains how success with these automated formal apps inspired the team to try their hand at using formal technology directly, with the Questa Property Checking (PropCheck) app for classical model/property checking. Ram writes:

“Some of the IP units … were good candidates for formal verification. That’s because it was very reasonable to expect to be able to prove the complete functionality of these units formally. We decided to target these units with the Assurance strategy.”

and

“Spurred by the success of applying [the apps], we considered applying formal methods in a bug hunting mode for units that were already being verified with simulation. Unlike Assurance, Bug Hunting doesn’t attempt to prove the entire functionality, but rather targets specific areas where simulation is not providing enough confidence that all corner case bugs have been discovered.”

The results of their assurance and bug hunting strategies speak for themselves: Table 1 in the article reports that the team found 79 bugs with these formal verification techniques!

Given this success, the team gained the confidence to apply formal in more DUT areas where it would be more effective than simulation – i.e., “using the best tool for the job” as necessary. Indeed, a common thread throughout the whole story is how formal and simulation were often used in tandem, simultaneously leveraging the unique strengths of each technology to improve the overall quality of verification. The article’s conclusion begins with this observation:

“Formal verification is a highly effective and efficient approach to finding bugs. Simulation is the only means available to compute functional coverage towards verification closure. In this project we attempted to strike a balance between the two methodologies and to operate within the strengths of each approach towards meeting the project’s goals.”

The bottom-line: formal has “gone mainstream” in this team’s current and future projects:

“The most significant accomplishment to me is the shift in the outlook of the design team towards formal. According to one designer whose unit was targeted with Bug Hunting, ‘I was initially skeptical about what formal could do. From what I have seen, I want to target my next design first with formal and find most of the bugs.’ … The time savings and improved design quality that formal verification brings are welcome benefits. We plan to continue to push the boundaries of what is covered by formal in future projects.”

Granted, the road from zero formal to full adoption might not have been quite as smooth as this engaging article describes. Still, their declared intent to keep using formal apps in conjunction with formal property checking – let alone their project’s impressive results – appears to conclusively prove the original thesis: once formal’s considerable power and benefits are introduced by a series of formal apps, there is no going back, and formal becomes a permanent part of the user’s verification tool kit.

Does Ram’s/Oracle’s journey resonate with you? Have you had the same experience or seen something similar at your employer or clients?  Please share your thoughts in the comments below, or contact me offline.

Until next time, may your coverage be high and your power consumption be low,

Joe Hupcey III

P.S. FYI, the author of the Verification Horizons article described above (and the related award-winning DVCon 2014 poster) was also a co-author of the 2015 DVCon USA Best Paper, 10.1 “I Created the Verification Gap” by Ram Narayan and Tom Symons of Oracle Labs.  Congratulations Ram and Tom!

Reference Links:

Verification Horizons, March 2015, Volume 11, Issue 1:
Evolving the Use of Formal Model Checking in SoC Design Verification
Ram Narayan, Oracle Corp.

https://verificationacademy.com/verification-horizons/march-2015-volume-11-issue-1/Evolving-the-Use-of-Formal-Model-Checking-in-SoC-Design-Verification

—–

DVCon USA, March 2014, 1P.2:
The Future of Formal Model Checking is NOW! Leveraging Formal Methods for RAPID System On Chip Verification, (Poster Presentation Honorable Mention)
Ram Narayan, Oracle Corp.

http://events.dvcon.org/events/proceedings.aspx?id=163-1-P


9 April, 2015

One of the biggest developments in the formal verification world in the past several years has been the industry-wide growth of formal-based “apps” – automated applications that leverage formal’s exhaustive verification technology “under the hood” to focus on specific verification tasks well suited to formal algorithms. But do formal apps really help D&V engineers “cross the chasm” and start using formal verification directly?  (Or if you prefer, are apps an effective “Trojan Horse”?)  A recent article in Verification Horizons by Oracle’s Ram Narayan, titled “Evolving the Use of Formal Model Checking in SoC Design Verification,” about the evolution of the verification methodology employed on Oracle’s “Project RAPID”, suggests the answer is “yes”.


In a nutshell, the clear benefits Ram’s team received from formal apps inspired them to try their hand at formal model checking, and their results exceeded all expectations. I recommend you read the article in its entirety because it’s a great real-world case study, rich with anecdotes from the front-line engineer himself. (Indeed, this article was inspired by Ram’s award-winning DVCon 2014 poster, but I digress.) For the purposes of this post, allow me to focus exclusively on the highlights pertaining to the “crossing the chasm” thesis. Consider the following excerpts.

* First, they started from scratch:

“At the outset of the project, there were no specific plans to use formal verification on RAPID. We did not have any infrastructure in place for running formal tools, and neither did we have anyone on the team with any noteworthy experience using these tools.”

* The first app they tried exceeded all expectations: Like many customers, Oracle got their feet wet with formal-driven SoC connectivity checking. And like 100% of Questa Connectivity Check app customers, they came away impressed:

“Our goal was to catch trivial design errors through formal methods without having to rely on lengthy and in some cases, random SoC simulations. Given our modest expectations at the outset, we would have been satisfied if we just verified these SoC connectivity checks with formal tools.  … SoC Connectivity checks were written to verify the correct connectivity between critical SoC signals like interrupts, events and other control/datapath signals. These checks are trivial to define and are of high value. Proving these connections saved us significant cycles in SoC simulations.”

This is not just a gut feeling on the author’s part: the bottom row of Table 2 in the article (showing the Questa Connectivity Check app cutting the schedule by 66%) backs up the above quote with real project data.


Article Table 2: Formal Verification Time Savings on Oracle’s Project RAPID – formal-based connectivity verification with the Questa Connectivity Check app delivers 66% schedule savings
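For readers who haven’t seen one, the essence of a connectivity check is a point-to-point equivalence per connection, conceptually like the sketch below; in practice the app generates and manages such checks from a connectivity specification, so nothing like this is hand-written. All names here are invented.

```systemverilog
// Conceptual form of a single connectivity check, meant to be instantiated
// (or bound) at the SoC top level. All names are hypothetical.
module conn_checks (
  input logic clk, rst_n,
  input logic uart0_irq_out,   // source: IP-level interrupt output
  input logic intr_ctrl_irq3   // destination: interrupt controller input
);
  // The destination pin must always follow the source pin.
  a_irq3_connected: assert property (@(posedge clk) disable iff (!rst_n)
    intr_ctrl_irq3 == uart0_irq_out);
endmodule
```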


* Another app is tried, and it’s also wildly successful:
The Questa Register Check app was the next formal app to be applied. Not only did it take care of the immediate control & status register verification task, but it also enabled more effective downstream verification:

“The Register Access Verification established controllability and observability of the registers in the unit from its interface. The IP core logic verification could now safely use the control registers as inputs to properties on the rest of the logic they drive. In addition to these registers, we chose a few internal nodes in the design as observation and control points in our properties. These points gave us additional controllability and observability to the design and reduced the complexity of the cones of logic being analyzed around them. We proved the correctness (observability) of these points prior to enjoying the benefits of using them (controllability) for other properties. This approach made it easier to write properties on the entire unit without any compromise on the efficacy of the overall unit verification.”
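In rough terms, the approach Ram describes looks like the following sketch: once a control register’s access path has been proven, the register can be treated as a constrained free input while proving properties on the logic it drives. Everything here – the mode field, the FIFO signals, the legal-value set – is invented for illustration.

```systemverilog
// Hypothetical sketch of the approach described above. Once register access
// is proven, the control register is assumed to hold legal values while
// properties are proven on the downstream logic. All names are invented.
module core_props (
  input logic       clk, rst_n,
  input logic [1:0] mode,      // control register field, access already proven
  input logic       fifo_full, // internal observation point
  input logic       wr_en
);
  localparam logic [1:0] MODE_IDLE = 2'b00, MODE_RUN = 2'b01;

  // Constrain the proven register to its legal programmed values.
  asm_legal_mode: assume property (@(posedge clk) disable iff (!rst_n)
    mode inside {MODE_IDLE, MODE_RUN});

  // Property on the logic the register drives, using it as a free input.
  a_no_overflow_write: assert property (@(posedge clk) disable iff (!rst_n)
    (mode == MODE_RUN && fifo_full) |-> !wr_en);
endmodule
```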

At this point in the story, the Oracle team is still confining their use of formal to the stable of available automated formal apps. However, as we’ll see in Part 2 of this case study, this success bred curiosity in the underlying technology …

Until next time, may your coverage be high and your power consumption be low,

Joe Hupcey III

P.S. FYI, the author of the Verification Horizons article described above (and the related award-winning DVCon 2014 poster) was also a co-author of the 2015 DVCon USA Best Paper, 10.1 “I Created the Verification Gap” by Ram Narayan and Tom Symons of Oracle Labs.  Congratulations Ram and Tom!

Reference Links:

Verification Horizons, March 2015, Volume 11, Issue 1:
Evolving the Use of Formal Model Checking in SoC Design Verification
Ram Narayan, Oracle Corp.

https://verificationacademy.com/verification-horizons/march-2015-volume-11-issue-1/Evolving-the-Use-of-Formal-Model-Checking-in-SoC-Design-Verification

—–

DVCon USA, March 2014, 1P.2:
The Future of Formal Model Checking is NOW! Leveraging Formal Methods for RAPID System On Chip Verification, (Poster Presentation Honorable Mention)
Ram Narayan, Oracle Corp.

http://events.dvcon.org/events/proceedings.aspx?id=163-1-P


17 March, 2015


With a name like “Fitzpatrick,” you knew I’d be celebrating today, right?

Well, there’s no better way to celebrate this fine day than to announce that our latest edition of Verification Horizons is available online! Now that Spring is almost here, there’s a bit less snow on the ground than there was when I wrote my introduction, but everything is still covered. I’m considering spray-painting it all green in honor of the occasion, so at least it looks like I have a lawn again.

In this issue of Verification Horizons, I’d particularly like to draw your attention to “Successive Refinement: A Methodology for Incremental Specification of Power Intent,” by my friend and colleague Erich Marschner and several of our friends at ARM® Ltd. In this article, you’ll find out how the Unified Power Format (UPF) specification can be used to specify and verify your power architecture abstractly, and then add implementation information later in the process. This methodology is still relatively new in the industry, so if you’re thinking about making your next design PowerAware, you’ll want to read this article to be up on the very latest approach.

In addition to that, we’ve also got Harry Foster discussing some of the results from his latest industry study in “Does Design Size Influence First Silicon Success?” Harry is also blogging about his survey results on Verification Horizons here and here (with more to come).

Our friends at L&T Technology Services Ltd. share some of their experience in doing PowerAware design in “PowerAware RTL Verification of USB 3.0 IPs,” in which you’ll see how UPF can let you explore two different power management architectures for the same RTL.

Next, History class is in session, with Dr. Lauro Rizzatti, long-time EDA guru, giving us part 1 of a 3-part lesson in “Hardware Emulation: Three Decades of Evolution.”

Our friends at Oracle® are up next with “Evolving the Use of Formal Model Checking in SoC Design Verification,” in which they share a case study of their use of formal methods as the central piece in verifying an SoC design they recently completed with first-pass silicon success. By the way, I’d also like to take this opportunity to congratulate the author of this article, Ram Narayan, for his Best Paper award at DVCon(US) 2015. Well done, Ram!

We round out the issue with our famous “Partners’ Corner” section, which includes two articles. In “Small, Maintainable Tests,” our friends at Sondrel IC Design Services show you a few tricks on how to make use of UVM virtual sequences to raise the level of abstraction of your tests. In “Functional Coverage Development Tips: Do’s and Don’ts,” our friends at eInfochips give you a great overview of functional coverage, especially the covergroup and related features in SystemVerilog.

I’d also like to take a moment to thank all of you who came by our Verification Academy booth at DVCon to say hi. I found it incredibly humbling and gratifying to hear from so many of you who have learned new verification skills from the Verification Academy. That’s a big part of why we do what we do, and I appreciate you letting us know about it.

Now, it’s time to celebrate St. Patrick’s Day for real!

