Posts Tagged ‘formal verification’

8 May, 2017

At the recent DVCon in Shanghai, China, my colleague Jin Hou delivered the tutorial “Back to Basics: Doing Formal the Right Way”. Jin is an expert in formal and CDC methodologies, and over her career she has trained hundreds of engineers to get up to speed with formal and leverage its strengths as part of a complete enterprise-class verification flow. I’ve been to DVCons in the US, EU, and India, so I was eager to hear her first-hand account of the latest incarnation of this successful conference series. Below is a review of her experience.

Joe: Tell me more about your tutorial

Jin: In recent years there has been a lot of activity around automated formal apps. This is a good thing, not only because they help any engineer take advantage of formal’s power, but also because their success has inspired interest in property checking. Consequently, this tutorial was created to show engineers who are new to formal what is involved in the planning, setup, execution, and results analysis of formal property checking. Wherever it made sense, I made analogies to testbench simulation flows to explain formal methodology, and showed how formal and simulation can be used in combination.

——

Joe: Did you get to survey the attendees at all? If so, what were the results?

Jin: Yes, but first let me say the tutorial was standing room only – there were at least 40 people in a room set up for half that number!

At the beginning I asked for a show of hands for the following questions:

“How many people are using formal now?” Only 3 people raised their hands.

“How many of you are thinking of using formal in 6 months?”  Just 7 hands went up.

In my 1-on-1 discussions after the tutorial concluded, an attendee told me, “Assertions are still pretty new to Chinese engineers. Except for [a major semiconductor supplier], most customers don’t have any assertion and formal experts.” However, I also heard some great news on this point: a professor from an area university told me his department was starting a course on SystemVerilog – including writing SVA — this coming academic year!

In general, it was clear to me that there is a great opportunity for formal usage to grow in China!

—–

Joe: Were there any particular comments or questions from the attendees?

Jin: The overall theme of the Q&A was the benefits that property checking provides over other verification techniques. Indeed, I was directly asked, “Why do you need formal?” It’s a fair question – especially since property checking with hand-written assertions (vs. automated formal apps) does impose a learning curve on its users, and it is not always obvious what formal is capable of doing better than the alternatives.

Of course we set up the tutorial to address exactly these concerns; so over the course of the presentation I showed how formal can address many serious verification tasks long before a simulation testbench would be available. For starters, hand-written assertions themselves help designers communicate their concerns to the verification team, bridging the gaps between them. With such an “executable specification” (which, by the way, with the right syntax can be easily reused in simulation and emulation), verification tasks that would take forever in simulation can be quickly solved by formal. Perhaps the best received example of this was when I spoke of how formal can deliver exhaustive proofs of liveness and safety properties.

[Ed. Note: “Liveness” == something good must eventually occur. “Safety” == something bad must never occur, i.e., good behavior should be always true. A well regarded, in-depth review of these valuable formal analyses is available here https://www.doc.ic.ac.uk/~jnm/book/firstbook/pdf/ch7.pdf ]
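
For readers who have not yet written SVA, below is a minimal sketch of a safety property and a liveness property for a hypothetical request/grant handshake — the module and signal names are invented purely for illustration:

    module handshake_checks (input logic clk, rst_n, req, gnt);
      // Safety: a grant must never appear without an outstanding request.
      a_no_spurious_gnt: assert property
        (@(posedge clk) disable iff (!rst_n) gnt |-> req);
      // Liveness: every request must eventually be granted.
      a_req_granted: assert property
        (@(posedge clk) disable iff (!rst_n) req |-> s_eventually gnt);
    endmodule

(The s_eventually operator is the “strong eventually” introduced in IEEE 1800-2009 — exactly the kind of obligation formal can prove exhaustively but simulation can only spot-check.)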

In general, the fact that you can do such significant verification with exhaustive results, before you write a single line of testbench code, means that formal should be part of any serious verification project.

—–

Joe: Looking back on the experience, how has it inspired you?

Jin: It’s given me great ideas on how we can further enhance our Verification Academy courses to help new formal users to get up to speed with the key concepts and work flows. In particular, we need to better promote “formal coverage” – what it tells you, how it helps the formal engineer measure their personal progress, and how it feeds into the overall verification project status alongside simulation code and functional coverage reporting. 

—–

Joe: Thanks, Jin!  Final thoughts?

Jin: While I enjoy supporting customers here in the San Francisco Bay Area, I also look forward to introducing more engineers to formal verification across China!

—–

If you went to DVCon in Shanghai — or in San Jose — this spring, please share your experiences in the comments below.

Until DAC, may your power consumption be low and your coverage be high!

Joe Hupcey III

Reference links:

DVCon Shanghai 2017 proceedings — https://dvcon-china.org/dvcon-china-history-archive

Safety & Liveness Properties, Chapter 7, Concurrency: State Models & Java Programs, Jeff Magee and Jeff Kramer, https://www.doc.ic.ac.uk/~jnm/book/firstbook/pdf/ch7.pdf

Related Verification Academy courses:

Getting Started with Formal-Based Technology
https://verificationacademy.com/courses/Getting-Started-with-Formal-Based-Technology

Formal Assertion-Based Verification
https://verificationacademy.com/courses/Formal-Assertion-Based-Verification

Formal-Based Technology: Automatic Formal Solutions
https://verificationacademy.com/courses/Formal-Based-Technology-Automatic-Formal-Solutions

Assertion-Based Verification
https://verificationacademy.com/courses/assertion-based-verification


3 January, 2017

Deeper Dive into First Silicon Success and Safety Critical Designs

This blog is a continuation of a series of blogs related to the 2016 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I presented verification results in terms of schedules, number of required spins, and classification of functional bugs. In this blog, I conclude the series by having a little fun with some of the findings from our recent study.

You might recall that in our 2014 study we did a deeper dive into the findings and made a non-intuitive observation related to design size and first silicon success: the smaller the design, the lower the likelihood of achieving first silicon success (see the 2014 conclusion blog for details). This observation still holds true in 2016.

For our 2016 study, we decided to do a deeper dive related to the following:

  1. Verification maturity and silicon success, and
  2. Safety critical designs and silicon successes.

Verification Maturity and Silicon Success

Figure 1 presents the findings for ASIC/IC first silicon success, and Figure 2 presents the findings for FPGA non-trivial bug escapes into production. It is important to note that only 33 percent of ASIC/IC projects are able to achieve first silicon success, and only 22 percent of FPGA projects go into production without a non-trivial bug.


Figure 1. ASIC/IC required spins before final production


Figure 2. FPGA non-trivial bug escapes into production

A question worth asking is if there might be some correlation between project success (in terms of preventing bugs) and verification maturity. To answer this question we looked at verification technology adoption trends related to a project’s silicon success.

Figure 3 presents the adoption of various verification techniques on ASIC/IC projects, and then correlates these results against achieving first silicon success. The data suggest that the more mature an ASIC/IC project is in its adoption of verification technology, the greater its likelihood of achieving first silicon success.


Figure 3. ASIC/IC spins and verification maturity.

Similarly, in Figure 4 we examine the adoption of various verification techniques on FPGA projects, and then correlate these results against preventing non-trivial bug escapes into production. Again, the data suggest that the more mature an FPGA project is in its adoption of verification technology, the greater the likelihood that a non-trivial bug will not escape into production.


Figure 4. FPGA non-trivial bug escapes into production and verification maturity.

Safety Critical Designs and Silicon Success

The second aspect of our 2016 study that we decided to examine a little deeper relates to safety critical designs and their silicon success. Intuitively, one might think that the rigid and structured process required to adhere to one of the safety critical development standards (such as DO-254 for mil/aero, ISO 26262 for automotive, or IEC 60601 for medical) would yield higher quality in terms of preventing bugs and achieving silicon success.

Figure 5 shows the percentage of ASIC/IC and FPGA projects that claimed to be working on a safety critical design.


Figure 5. Percentage of projects working on safety critical designs

Keep in mind that the findings in Figure 5 do not represent volume in terms of silicon production—the data represent projects that claim to work under one of the safety critical development process standards. Using this partition between projects working on non-safety critical and safety critical designs, we decided to see how these two classes of projects compared in terms of preventing bugs.

Figure 6 compares the number of required spins for safety critical and non-safety critical ASIC/IC designs, while Figure 7 compares non-trivial bug escapes for safety critical and non-safety critical FPGA designs.


Figure 6. Required ASIC/IC spins for safety critical vs. non-safety critical designs


Figure 7. Non-trivial bug escapes for safety critical vs. non-safety critical FPGA designs

Clearly, the data suggest that a development process adopted to ensure safety does not necessarily ensure quality. Perhaps this is non-intuitive. However, to be fair, many of the safety critical features implemented in today’s designs are quite complex and increase the verification burden.

This concludes the findings from the 2016 Wilson Research Group Study.

Quick links to the 2016 Wilson Research Group Study results


15 November, 2016

While internet connected vehicles remain a popular target for hackers, the new breed of “smart” devices has the potential to invite the bad guys inside your home and force you to pay them to leave. Specifically, in a DefCon 2016 demo, security researchers Andrew Tierney and Ken Munro showed how an internet connected thermostat could be taken over and reloaded with “ransomware”. Consequently, the bad guys could literally leave you in the cold until you pay up.

Security Researchers Andrew Tierney and Ken Munro show their ransomware proof of concept at DefCon 2016. Source: Motherboard, http://goo.gl/YZkyGy

Fortunately, this was just a proof of concept, but this story and a depressingly large number of others like it demonstrate just how insecure today’s consumer electronic devices are.

So what can be done to prevent this?

First, as I’ve said before, there is no single “silver bullet”. To combat the numerous direct and side channel attacks, multiple, overlapping solutions are needed to provide the necessary defense in depth. That said, the #1 priority is to secure the “root of trust”, from which everything else – the hardware, firmware, OS, and application layer’s security – is derived. If the root of trust can be compromised, then the whole system is vulnerable.

So how exactly am I defining the “root of trust”? It is an encryption key – a digital signature for each device – that will be encoded into the electronics of every “IoT” unit. Hence, I argue that before the device accepts any commands from the outside world, the private key stored in the device must be combined with the public key appended to the incoming data to authenticate its legitimacy. In such a system, just like set-top-box and game console makers do today, the smart thermostat manufacturer would embed a unique encryption key in each device in the factory. The key would be stored in a secure memory element of some sort – a separate memory chip or a register bank inside a system on a chip (SoC). As such, D&V engineers will need to verify that this secure storage can’t be compromised.

So what’s the best verification methodology to ensure that this secure storage can’t be (A) read by an unauthorized party or accidentally “leaked” to the outputs, or (B) altered, overwritten, or erased by the bad guys?

Unfortunately, the classical approach of employing squads of “white hat” hackers to try and crack the system rapidly decreases in effectiveness as circuit complexity increases. Similarly, even a really well designed constrained-random simulation environment is not exhaustive either. The only way to exhaustively verify (A) and (B) with only a few hours of compute time on common, low-cost servers is by employing formal verification technology. In a nutshell, “Formal verification uses mathematical formal methods to prove or disprove the correctness of a system’s design with respect to formal specifications expressed as properties.”(1)

Returning to our smart thermostat example, the “formal specification” is that (A) and (B) above can never happen, i.e. the key can only be read and edited by authorized parties through specific, secure pathways – anything else is a design flaw that must be fixed before going into production.
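
To make requirement (B) concrete, a hand-written property for it might look something like the sketch below — the module and signal names are invented for illustration, and as described next, the app itself does not require you to write any such assertions:

    module key_write_check
      (input logic clk, rst_n,
       input logic key_we,        // key storage write enable
       input logic secure_mode);  // asserted only during authorized provisioning
      // Requirement (B): the key storage may only be written while the device
      // is in its authorized provisioning mode; any other write path is a flaw.
      a_key_integrity: assert property
        (@(posedge clk) disable iff (!rst_n) key_we |-> secure_mode);
    endmodule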

So what can Design & Verification engineers do at the RTL level to apply formal technology to this verification challenge – especially if they have never used formal tools or written SystemVerilog Assertions (SVA) before? Luckily, Mentor has developed a fully-automated solution that exhaustively verifies that only the paths you specify can reach security- or safety-critical storage elements – i.e. a tool that can mathematically prove the confidentiality and integrity of your DUT’s “root of trust”. The best part is that no knowledge of formal or property specification languages is required.

Questa Secure Check workflow block diagram

Specifically, using your RTL plus cleartext, human- and machine-readable Tcl code specifying the secure / safety-critical storage and allowed access paths as input, the Questa Secure Check app automates formal technology to exhaustively verify that the “root of trust” – i.e. the storage for the system’s encryption keys – cannot be read or tampered with via unauthorized paths.

To expedite the analysis and/or minimize formal compile and run time, the app supports “black boxing” of clearly extraneous IPs and paths to keep the focus on the secure channels alone. The result: an exhaustive proof of your design’s integrity and/or clear counterexamples showing how your specification can be violated.

Secure Check GUI example: users click on the “Insecure Path” of concern and the app generates a schematic of the path and related waveforms of the signals involved

In summary, only a sound hardware-based solution based on securely stored encryption keys will establish a true root of trust; and only an exhaustive formal analysis can verify this with mathematical certainty. The Questa Secure Check app was created to help customers address this challenge.

I look forward to hearing your feedback and comments on what you are doing to verify the integrity and confidentiality of your secure storage.

Until next time, stay cool, or stay warm – whichever you prefer.

Joe Hupcey III

P.S. On a somewhat related note, I predict that in the near future such devices will start implementing and promoting hardware switches to physically disconnect their communications links. In the thermostat case, once the regular heating & cooling patterns are programmed in, the user could decide to disconnect the device from the internet, re-enabling communications only upon notification of a firmware upgrade from the manufacturer, before embarking on a trip, etc. Granted, this will not completely eliminate clever, unforeseen side channel attacks; but it will go a long way. Besides, the marketing value will be fantastic.

P.P.S. Shameless commercial pitch: the lead customer of the Questa Secure Check app is in the consumer electronics space, where their products are subject to world-wide attack, 24/7/365. Suffice it to say that if Secure Check can help harden this customer’s systems against determined, continuous attacks, it can help automakers, medical device manufacturers, smart phone designers, aircraft companies, etc. A new Verification Academy course on this topic is coming out soon; in the meantime, to learn more feel free to contact me offline, or ask questions in the comments section below.

P.P.P.S. If you want to learn more about the underlying formal technology, on November 16 at 8am Pacific there will be a webinar on “Back to Basics with Formal Verification: How to Shorten Your Schedule with Interactive Formal Debug and Design Exploration”. No worries if you miss the live broadcast — the presentation recording will be posted for later viewing.

Reference Links:

The Hacker News article on this: http://thehackernews.com/2016/08/hacking-thermostat.html

(1) Using Formal Methods to Verify Complex Designs, IBM Haifa Research Lab, 2007


31 October, 2016

ASIC/IC Language and Library Adoption Trends

This blog is a continuation of a series of blogs related to the 2016 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I focused on various verification technology adoption trends. In this blog I plan to discuss various ASIC/IC language and library adoption trends.

Figure 1 shows the adoption trends for languages used to create RTL designs. Essentially, the adoption rates for all languages used to create RTL designs are projected to be either declining or flat over the next year.

BLOG-2016-WRG-figure-10-1

Figure 1. ASIC/IC Languages Used for RTL Design

As previously noted, the reason some of the results sum to more than 100 percent is that some projects are using multiple languages; thus, individual projects can have multiple answers.

Figure 2 shows the adoption trends for languages used to create ASIC/IC testbenches. Essentially, the adoption rates for all languages used to create testbenches are either declining or flat. Furthermore, the data suggest that SystemVerilog adoption is starting to saturate or level off in the mid-70s range.

BLOG-2016-WRG-figure-10-2

Figure 2. ASIC/IC Languages Used for Verification (Testbenches)

Figure 3 shows the adoption trends for various ASIC/IC testbench methodologies built using class libraries.

BLOG-2016-WRG-figure-10-3

Figure 3. ASIC/IC Methodologies and Testbench Base-Class Libraries

Here we see a decline in adoption of all methodologies and class libraries with the exception of Accellera’s UVM, whose adoption continued to increase between 2014 and 2016. Furthermore, our study revealed that UVM is projected to continue its growth over the next year. However, like SystemVerilog, it will likely start to level off in the mid- to upper-70 percent range.

Figure 4 shows the ASIC/IC industry adoption trends for various assertion languages, and again, SystemVerilog Assertions seems to have saturated or leveled off.

BLOG-2016-WRG-figure-10-4

Figure 4. ASIC/IC Assertion Language Adoption

In my next blog (click here) I plan to present the ASIC/IC design and verification power trends.

Quick links to the 2016 Wilson Research Group Study results


11 September, 2016

FPGA Verification Technology Adoption Trends

This blog is a continuation of a series of blogs related to the 2016 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I focused on the effectiveness of verification in terms of FPGA project schedule and bug escapes. In this blog, I present verification techniques and technologies adoption trends, as identified by the 2016 Wilson Research Group study.

An interesting trend we see in the FPGA space is the continual maturing of its functional verification processes. In fact, we find that the FPGA design space is about where the ASIC/IC design space was five years ago in terms of verification maturity—and it is catching up quickly. A question you might ask is, “What is driving this trend?” In Part 1 of this blog series I showed rising design complexity, with the adoption of more advanced FPGA designs as well as multiple embedded processor architectures targeted at FPGA designs. In addition, I’ve presented trend data showing an increase in total project time and effort spent in verification (Part 2 and Part 3). My belief is that the industry creating FPGA designs is being forced to mature its functional verification processes to address today’s increasing complexity.

FPGA Simulation Technique Adoption Trends

Let’s begin by comparing FPGA adoption trends related to various simulation techniques across the 2012, 2014, and 2016 Wilson Research Group studies, as shown in Figure 1.


Figure 1. Simulation-based technique adoption trends for FPGA designs

You can clearly see that the industry is increasing its adoption of various functional verification techniques for FPGA-targeted designs. This past year I have spent a significant amount of time in discussions with FPGA project managers around the world. During these discussions, most managers mentioned a drive to improve the verification process within their projects due to rising complexity. The Wilson Research Group data suggest that these claims are valid.

FPGA Formal Technology Adoption Trends

Figure 2 shows the adoption trends for formal property checking and auto-formal techniques.


Figure 2. FPGA Formal Technology Adoption

Our study looked at two forms of formal technology adoption (i.e., formal property checking and automatic formal verification solutions). Examples of automatic formal verification solutions (also referred to as formal apps) include X safety checks, deadlock detection, reset analysis, and so on. The key difference is that for formal property checking, the user writes a set of assertions that they wish to prove, while automatic formal verification solutions do not require the user to write assertions. Again, the key takeaway here is that FPGA projects are maturing their verification processes to address growing design complexity.
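
To illustrate the difference between the two approaches, below is a hypothetical example of the kind of assertion a user hand-writes for property checking — say, that a FIFO must never be pushed while full. The module and signal names are invented; an automatic formal app would run a comparable check with no user-written properties at all:

    module fifo_overflow_check (input logic clk, rst_n, push, fifo_full);
      // User-written property: pushing into a full FIFO is illegal.
      a_no_overflow: assert property
        (@(posedge clk) disable iff (!rst_n) fifo_full |-> !push);
    endmodule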

In my next blog (click here), I’ll focus on FPGA design and verification language adoption trends, as identified by the 2016 Wilson Research Group study.

Quick links to the 2016 Wilson Research Group Study results


8 August, 2016

This is the first in a series of blogs that presents the findings from our new 2016 Wilson Research Group Functional Verification Study. Similar to my previous 2014 Wilson Research Group functional verification study blogs, I plan to begin this set of blogs with an exclusive focus on FPGA trends. Why? For the following reasons:

  1. Some of the more interesting trends in our 2016 study are related to FPGA designs. The 2016 ASIC/IC functional verification trends are overall fairly flat, which is another indication of a mature market.
  2. Unlike the traditional ASIC/IC market, there have historically been very few studies published on FPGA functional verification trends. We started studying the FPGA market segment back in the 2010 study, and we now have collected sufficient data to confidently present industry trends related to this market segment.
  3. Today’s FPGA designs have grown in complexity—and many now resemble complete systems. The task of verifying SoC-class designs is daunting, which has forced many FPGA projects to mature their verification processes. The FPGA-focused data I present in this set of blogs will support this claim.

My plan is to release the ASIC/IC functional verification trends through a set of blogs after I finish presenting the FPGA trends.

Introduction

In 2002 and 2004, Collett International Research, Inc. conducted its well-known ASIC/IC functional verification studies, which provided invaluable insight into the state of the electronic industry and its trends in design and verification at that point in time. However, after the 2004 study, no additional Collett studies were conducted, which left a void in identifying industry trends. To address this dearth of knowledge, five functional verification focused studies were commissioned by Mentor Graphics in 2007, 2010, 2012, 2014, and 2016. These were world-wide, double-blind, functional verification studies, covering all electronic industry market segments. To our knowledge, the 2014 and 2016 studies are two of the largest functional verification studies ever conducted. This set of blogs presents the findings from our 2016 study and provides invaluable insight into the state of the electronic industry today in terms of both design and verification trends.

Study Background

Our study was modeled after the original 2002 and 2004 Collett International Research, Inc. studies. In other words, we endeavored to preserve the original wording of the Collett questions whenever possible to facilitate trend analysis. To ensure anonymity, we commissioned Wilson Research Group to execute our study. The purpose of preserving anonymity was to prevent biasing the participants’ responses. Furthermore, to ensure that our study would be executed as a double-blind study, the compilation and analysis of the results did not take into account the identity of the participants.

For the purpose of our study we used a multiple sampling frame approach that was constructed from eight independent lists that we acquired. This enabled us to cover all regions of the world—as well as cover all relevant electronic industry market segments. It is important to note that we decided not to include our own account team’s customer list in the sampling frame. This was done in a deliberate attempt to prevent biasing the final results. My next blog in this series will discuss other potential bias concerns when conducting a large industry study and describe what we did to address these concerns.

After cleaning the data to remove inconsistent or random responses (e.g., someone who only answered “a” on all questions), the final sample size consisted of 1703 eligible participants (i.e., n=1703). This is approximately 90% of the size of our 2014 study (i.e., 2014 n=1886). However, to put this figure in perspective, the famous 2004 Ron Collett International study sample size consisted of 201 eligible participants.

Unlike the 2002 and 2004 Collett IC/ASIC functional verification studies, which focused only on the ASIC/IC market segment, our studies were expanded in 2010 to include the FPGA market segment. We have partitioned the analysis of these two different market segments separately, to provide a clear focus on each. One other difference between our studies and the Collett studies is that our study covered all regions of the world, while the original Collett studies were conducted only in North America (US and Canada). We have the ability to compile the results both globally and regionally, but for the purpose of this set of blogs I am presenting only the globally compiled results.

Confidence Interval

All surveys are subject to sampling errors. To quantify this error in probabilistic terms, we calculate a confidence interval. For example, we determined the “overall” margin of error for our study to be ±2.36% at a 95% confidence level. In other words, if we were to take repeated samples of size n=1703 from the population, 95% of the resulting estimates would fall within ±2.36% of the true value, and only 5% would fall outside. Obviously, the response rate per individual question will impact the margin of error. However, all data presented in this blog has a margin of error of less than ±5%, unless otherwise noted.
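
As a sanity check on that figure, the standard normal-approximation formula for a sampled proportion — assuming the conventional worst-case response split of p = 0.5 and z = 1.96 for 95% confidence — reproduces the reported value to within rounding:

    margin of error = z * sqrt( p * (1 - p) / n )
                    = 1.96 * sqrt( 0.5 * 0.5 / 1703 )
                    ≈ ±2.37%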

Study Participants

This section provides background on the makeup of the study.

Figure 1 shows the percentage of overall study FPGA and ASIC/IC participants by market segment. It is important to note that this figure does not represent silicon volume by market segment.


Figure 1: FPGA and ASIC/IC study participants by market segment

Figure 2 shows the percentage of overall study eligible FPGA and ASIC/IC participants by their job description. An example of an eligible participant would be a self-identified design or verification engineer, or engineering manager, who is actively working within the electronics industry. Overall, design and verification engineers accounted for 60 percent of the study participants.


Figure 2: FPGA and ASIC/IC study participants by job description

Before I start presenting the findings from our 2016 functional verification study, I plan to discuss in my next blog (click here) general bias concerns associated with all survey-based studies—and what we did to minimize these concerns.

Quick links to the 2016 Wilson Research Group Study results


10 May, 2016

A few weeks ago I had the honor of presenting a paper related to my prior Verification Horizons blog posts on “How Formal Techniques Can Keep Hackers from Driving You into a Ditch” (Part 1, Part 2) at the annual Society of Automotive Engineers (SAE) World Congress in Detroit, MI. Being an IEEE member for many years, it was intriguing to enter this parallel universe of professionals equally interested in advancing the state of their art. This year in particular, the incredible momentum behind automotive automation gave this conference a palpable energy – below are but 5 aspects of this.

1 – The auto industry is now taking vehicle security very seriously — Apparently the infamous “Jeep hack” story and subsequent recall was a real watershed: automakers and their suppliers took it as a wake-up call to allocate substantial new R&D resources to this area. Indeed, informally scanning the audience’s badges and striking up conversations in the “Cyber Security” and related “Safety Critical” conference tracks revealed that these sessions were well attended by representatives from all walks of the industry, and the expert panel discussions and papers themselves were very substantive.

2 – Learn to “think maliciously” – on the panel “With Connectivity, Comes Risks – Cybersecurity and Safety”, one of the panelists implored design & verification engineers to always consider how someone could misuse a diagnostic port or routine for evil and not good. Example: a necessary command – such as one that disables the anti-lock brakes so the pressure can be bled off before maintenance – could be manipulated by a Trojan Horse to disable the brakes at a deliberately unsafe moment, or when the vehicle crosses a particular GPS geofence. (In a related Verification Horizons post, ISO 26262 fault analysis – worst case is really the worst, my colleague Avidan Efody explores a variant of this issue.)

3 – The CAN bus must go, but it’s going to be a painful transition – The Controller Area Network (CAN) is the bus protocol that connects the internal control systems of most cars. Indeed, even a cursory stroll across the SAE World Congress expo floor will expose you to a wide variety of CAN-related offerings. Before cars were connected to the internet, CAN served vehicle requirements well; but by computer networking standards it’s not that fast (40 Kbit/s to 125 Kbit/s) and the payload size is very small (a grand total of 64 bits). This small payload is CAN’s Achilles’ heel – there are simply not enough bits to embed a digital signature or other security-related data, leaving the whole bus vulnerable. Hence, it’s trivial to do things like a basic “replay attack” (e.g. recording a door unlock sequence, then “replaying” the signals when you want to steal the car without a trace). My personal bet to replace CAN is the emerging Automotive Ethernet standard (already embraced by BMW, Jaguar, and VW), because all the cyber security work in the computer and mobile networking worlds can be brought to bear.

4 – A new rev of ISO 26262 is on the horizon — this is old news for anyone in the Functional Safety field, but semiconductor makers and EDA suppliers should be aware that the upcoming revision has specific provisions for “Semiconductor Functional Safety”.

5 – The auto industry as a whole is reinvigorated – as noted above, thanks to the whole movement toward more automated vehicles, there was energy crackling through the conference. Throughout the sessions I attended, and in the conversations on the expo floor, the engineers and managers from a myriad of disciplines were eagerly unpacking the challenges and brainstorming solutions. Case in point: on a panel on “Collaborate. Create. Commercialize. The Next Gen Supplier Network”, the VP of Purchasing for Toyota North America, a 20+ year veteran of the car business, summarized it best, “There has never been a better time to be in automotive!”

Until the fully autonomous car is in production, keep your eyes on the road and your hands on the wheel,

Joe Hupcey III

P.S. My colleagues are presenting at the Functional Safety and ISO26262 track at the upcoming IESF Automotive Conference on June 1 in Dearborn, MI – the complete agenda is posted here: https://www.mentor.com/events/iesf/automotive-conference


29 March, 2016

If you were wondering whether formal verification is becoming a cornerstone of mainstream verification flows, several events at the recent DVCon USA 2016 should leave you without any doubt.  Consider the evidence:

* The consensus of the panel on “Emulation + Static Verification Will Replace Simulation”

First, out of all the possible topics for a panel on “futures”, this one was chosen (vs. a panel on something really futuristic, like “Is Quantum Computing Ready for EDA?”), suggesting that formal is collectively at the top of people’s minds when they step back and assess the effectiveness of their verification flows today.


Of course, the panel’s animating question was chosen to be deliberately provocative – harkening back to the first wave of formal, with its claims of formal being able to eliminate the need for simulation. However, the panelists – who were heavily weighted toward formal expertise – didn’t renew or update this boast. Instead, they spoke of how formal can be incredibly effective in solving quite a few types of verification challenges, and flatly awful at others. Same goes for simulation and emulation. The consensus that emerged can be paraphrased as, “use the right tool for the job” and “because there are so many different jobs to be done, every verification environment should use formal, simulation, and emulation to some degree”.  The larger point here is that in the eyes of these verification experts, formal apps and classical property checking have clearly matured into being an essential component of a complete verification tool kit.


* The formal-specific elements of the EDA CEO keynote

Mentor’s CEO Wally Rhines gave a very expansive keynote – part history lesson, part “state of the art today” snapshot, and part future outlook (Article; Wally’s slides).  In synchronicity with the aforementioned panel discussion (that followed right after his speech, so he couldn’t likely have known the independent panel’s consensus beforehand), woven into his remarks were multiple examples of how formal is (A) the exact right tool for the job for several critical verification tasks (e.g. CDC, SoC Connectivity verification, code coverage unreachability & closure, X-corruption, secure path verification, etc.), and (B) is an integral part of a complete verification flow comprised of virtual prototyping, formal, simulation, emulation, and FPGA prototyping.


* The Thursday afternoon tutorial on pure formal basic training was a full house

I don’t mean to brag [ok, I do, just a little], but the tutorial I produced on “Back to Basics: Doing Formal the Right Way” on the last day in the last time slot of the conference – i.e. a historically challenging time slot to draw a sizable audience – was a full house! Apparently, our intuition was correct: there IS a new generation of D&V engineers who are very interested in learning classical formal property checking (whether they were inspired by success with an automated formal app, or sensed the technology is truly ready for prime time). Granted, there were quite a few familiar faces from prior events who no doubt knew of my colleagues Doug Smith and Mark Eslinger’s abilities to distill complex topics into concise, actionable examples. But speaking as a veteran DVCon tutorial producer and presenter, I would have been happy with a gentleman’s 40-45 attendees. Thus we were positively stunned when we filled the room with 70+ people, and held them throughout the length of the program.

Do you agree — are we in a new wave of formal?  Please share your thoughts in the comments below, or contact me offline.

Joe Hupcey III


1 February, 2016

With both DVCon and the annual Super Bowl championship football game coming up, one can’t help but think of … formal verification!

Seriously: goal posts and scoring drives aren’t just for football – conceptually similar things can be a winning strategy for using formal analysis to penetrate the state space of a large DUT, and drive toward the areas you are seeking to reach. To paraphrase, run a series of formal “goal posting” plays. Here is how it works:

* After so many formally analyzed clock cycles – 10’s, 100’s, or 1000’s depending on the DUT – even the most scalable formal verification engines take longer and longer to compute the next steps of the analysis. The football analogy is when a team can only get so far down the field, and has to concede that it’s better to kick a field goal for 3 points rather than run a risky 4th down play that would waste valuable time, leave them in worse field position, and yield no points at all. Conversely, if you kick enough field goals, each of those 3 pointers begin to add up …

* Translating from football jargon back into formal verification terms: find a cover point which is at a depth of analysis that formal can hit (10’s to low dozens of clock cycles), then use that as a new jumping off point from which to explore deeper into the design. Repeat until you reach the deeply positioned target of interest – Touchdown!


“Goal posting” formal runs compared to a single formal run to hit a desired target deep in a DUT’s state space

* Referring to the figure above, the intermediate goals (typically specified with COVER statements) are selected to enable successive formal runs to explore deeper into the design and ultimately hit the desired target. With this technique a user can quickly leapfrog from point to point and explore deeper into the design than a single formal run could.
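
A minimal sketch of such an intermediate goal post is below — the module, state, and signal names are invented for illustration, and the exact mechanism for seeding the next run from the reached trace (an initial-state constraint, a saved reset sequence, etc.) varies by tool:

    module goal_post_checks
      #(parameter logic [3:0] CAL_DONE = 4'h7)  // invented state encoding
       (input logic clk, rst_n,
        input logic [3:0] ctrl_state);
      // Intermediate "goal post": a state that a shallow formal run can hit,
      // e.g. a controller reaching its calibration-done state.
      c_goal_1: cover property
        (@(posedge clk) disable iff (!rst_n) ctrl_state == CAL_DONE);
    endmodule

Once formal reports the cover as hit, the trace that reaches it becomes the starting state for the next, deeper run, and the process repeats until the deep target is covered.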

This is an effective, time-tested bug hunting technique (especially when you are working to reproduce post-silicon bugs). Indeed, a great real-world case study of this methodology is captured in the 2015 DVCon USA paper, “11.3, Every Cloud – Post-Silicon Bug Spurs Formal Verification Adoption.” In this paper the authors describe how they employed this technique to find the root cause of a corner-case bug in a DDR3 design – a bug that had escaped all previous verification methods.

If this year’s game is like many prior Super Bowls, the outcome will be obvious by half time.  Hence, you might as well turn off the TV and download this paper so you’ll be ready to apply this methodology on Monday.

Go Formal, GO!!!

Joe Hupcey III
on behalf of the Questa Formal and CDC team


22 August, 2015

Impact of Design Size on First Silicon Success

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I presented verification results in terms of schedules, number of required spins, and classification of functional bugs. In this blog, I conclude the series on the 2014 Wilson Research Group Functional Verification Study by providing a deeper analysis of respins by design size.

It’s generally assumed that the larger the design, the greater the likelihood of bugs. Yet a question worth answering is how effective projects are at finding these bugs prior to tapeout.

In Figure 1, we first extract the 2014 data from the required number of spins trends presented in my previous blog (click here), and then partition this data into sets based on design size (that is, designs less than 5 million gates, designs between 5 and 80 million gates, and designs greater than 80 million gates). This led to perhaps one of the most startling findings from our 2014 study: the data suggest that the smaller the design, the lower the likelihood of achieving first silicon success! While 34 percent of the designs over 80 million gates achieve first silicon success, only 27 percent of the designs less than 5 million gates are able to do so. The difference is statistically significant.


Figure 1. Number of spins by design size

To understand what factors might be contributing to this phenomenon, we decided to apply the same partitioning technique while examining verification technology adoption trends.

Figure 2 shows the adoption trends for various verification techniques from 2007 through 2014, which include code coverage, assertions, functional coverage, and constrained-random simulation.

One observation we can make from these adoption trends is that the electronic design industry is maturing its verification processes. This maturity is likely due to the need to address the challenge of verifying designs with growing complexity.


Figure 2. Verification Technology Adoption Trends

In Figure 3 we extract the 2014 data from the various verification technology adoptions trends presented in Figure 2, and then partition this data into sets based on design size (that is, designs less than 5 million gates, designs between 5 and 80 million gates, and designs greater than 80 million gates).


Figure 3. Verification Technology Adoption by Design

Across the board we see that designs less than 5 million gates are less likely to adopt code coverage, assertions, functional coverage, and constrained-random simulation. Hence, if you correlate this data with the number of spins by design size (as shown in Figure 1), then the data suggest that the verification maturity of an organization has a significant influence on its ability to achieve first silicon success.

As a side note, you might have noticed that there is less adoption of constrained-random simulation for designs greater than 80 million gates. There are a few factors contributing to this behavior: (1) constrained-random works well at the IP and subsystem level, but does not scale to the full-chip level for large designs; and (2) many projects working on large designs focus predominantly on integrating existing or purchased IPs, so they spend more of their verification effort on integration and system validation tasks, where constrained-random simulation is rarely applied.

So, to conclude this blog series: in general, the industry is maturing its verification processes, as witnessed by the verification technology adoption trends. However, we found that smaller designs were less likely to adopt what are generally viewed as industry best verification practices and techniques. Similarly, we found that projects working on smaller designs tend to have a smaller ratio of peak verification engineers to peak designers. Could fewer available verification resources, combined with the lack of adoption of more advanced verification techniques, account for fewer small designs achieving first silicon success? The data suggest that this might be one contributing factor. It’s certainly something worth considering.

Quick links to the 2014 Wilson Research Group Study results

