Posts Tagged ‘functional verification’

23 February, 2015

It’s my favorite time of year again—DVCon!  I believe the DVCon 2015 technical program committee has put together one of the strongest technical programs in years. In this blog I highlight a few DVCon events that you might want to put on your calendar.


First, at this year’s conference the Verification Academy has a dedicated booth (#301), and I hope you stop by to say hello to me, my friend Tom Fitzpatrick, and an amazing lineup of other Verification Academy subject matter experts.

Next, on Wednesday morning, March 4, I have the honor of participating in a verification panel titled “Art or Science?” Here, my fellow panelists and I will debate the view that verification today is more of an art than a science—and one that is perceived as difficult to master. To learn my position on this topic, you’ll have to stop by!

Also on Wednesday, at the Mentor-sponsored lunch, my colleague Steve Bailey and I will deliver an informative and entertaining talk we’ve titled “From Tightly Coupled (Loosely Bolted) to Verification Convergence.” Here, we discuss the state of verification past, present, and future while examining the results from our recent worldwide industry study, which I started blogging about a few weeks ago (click here for more details). Our talk will examine how advanced techniques are taking hold in mainstream design and provide insights on the recent convergence of verification solutions to meet today’s growing challenges.

Finally, there are two tutorials I’d like to encourage you to attend while at DVCon this year:

  1. Advanced, High-Throughput Debug from Architectural Modeling Through Post-Silicon SoC Validation (click here for more details)
  2. Dead or Alive: Using Automated Formal Techniques to Characterize Dead Code, Reveal Paths to Hit Uncovered States, and Reach Coverage Closure Faster (click here for more details)

I look forward to meeting you at DVCon 2015!


8 February, 2015

FPGA Design Trends

In my previous blog, I introduced the 2014 Wilson Research Group Functional Verification Study (click here). The objective of my previous blog was to provide an overview on our large, worldwide industry study. The key findings from this study will be presented in a set of upcoming blogs. In this blog, I present trends related to various aspects of FPGA design to illustrate growing design complexity.

Let’s begin by examining embedded processor trends targeted at a general FPGA implementation. Our 2014 study found that 56% of all FPGA designs contained one or more embedded processors, as shown in Figure 1. Although we did not see an overall growth in the number of FPGAs containing one or more embedded processors between 2012 and 2014, we did see an increase in the number of FPGA projects creating designs containing more than one embedded processor.


Figure 1. Number of embedded processors in FPGA trends

SoC-class designs (i.e., designs containing embedded processors) add a layer of verification complexity that did not exist with traditional non-SoC-class designs, due to hardware/software interactions, new coherency architectures, and the emergence of complex network-on-chip interconnects.

In addition to embedded processors targeted at the general FPGA class of designs, there has been a recent emergence of specific programmable SoC FPGA implementations, such as Xilinx’s Zynq, Altera’s Arria/Cyclone SoC FPGAs, and Microsemi’s SmartFusion. Figure 2 shows the adoption trends for these programmable SoC FPGAs, which you can see grew by over 93 percent between 2012 and 2014. Keep in mind that this trend data does not represent volume production—it represents the number of FPGA projects that are creating designs targeted at a programmable SoC class of FPGA.


Figure 2. Type of FPGA implementation trends

As the industry moves to SoC class designs, regardless of targeted FPGA implementation, FPGA projects are starting to increase their adoption of industry standard on-chip bus protocols—versus proprietary bus protocols. Figure 3 shows the current adoption of AMBA and other on-chip bus protocols for FPGA designs as identified by our new study. Note, the reason we are not showing trends here is that the 2012 study did not separate out the various AMBA protocols, which is something we decided to do for our 2014 study. Hence, we cannot do an apples-to-apples comparison between 2012 and 2014 for FPGA on-chip bus protocol adoption.


Figure 3. FPGA on-chip bus protocol adoption

Another aspect of SoC-class design is the emergence of IP-based design practices, which is fundamental for improving design productivity. Figure 4 shows FPGA design composition trends—and we see that there has been a decline in new logic created by FPGA project teams. At the same time, we see an increase in the adoption of both internally developed and externally acquired IP.


Figure 4. FPGA design composition trends

In my next blog (click here), I’ll focus on verification effort trends related to FPGA designs.

Quick links to the 2014 Wilson Research Group Study results (so far…)


21 January, 2015

This blog is a continuation of a series of blogs that present the highlights from the 2014 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In this blog I discuss the issue of study bias and what we did to address it.

MINIMIZING STUDY BIAS

When architecting a study, three main concerns must be addressed to ensure valid results: sample validity bias, non-response bias, and stakeholder bias. Each of these concerns is discussed in the following sections, along with the steps we took to minimize them.

Sample Validity Bias

To ensure that a study is unbiased, it’s critical that every member of a studied population have an equal chance of participating. An example of a biased study would be when a technical conference surveys its participants. The data might raise some interesting questions, but unfortunately, it does not represent members of the population who were unable to participate in the conference. The same bias can occur if a journal or online publication limits its surveys to only its subscribers.

A classic example of sample validity bias is the famous Literary Digest poll for the 1936 United States presidential election, in which the magazine surveyed over two million people. This was a huge study for that period in time. The sampling frame of the study was chosen from the magazine’s subscriber list, phone books, and car registrations. However, the problem with this approach was that the study did not represent the actual voter population, since it was a luxury to have a subscription to a magazine, or a phone, or a car during The Great Depression. As a result of this biased sample, the poll inaccurately predicted that the Republican, Alf Landon, would defeat the Democrat, Franklin Roosevelt, in the 1936 presidential election.

For our study, we carefully chose a broad set of independent lists that, when combined, represented all regions of the world and all electronic design market segments. We reviewed the participant results in terms of market segments to ensure no segment or region representation was inadvertently excluded or under-represented.

Non-Response Bias

Non-response bias in a study occurs when a randomly sampled individual cannot be contacted or refuses to participate in a survey. For example, spam and unsolicited mail filters remove an individual from the possibility of receiving an invitation to participate in a study, which can bias results. It is important to validate that sufficient responses occurred across all the lists that make up the sampling frame. Hence, we reviewed the final results to ensure that no single list of respondents in the sampling frame dominated the final results.

Another potential non-response bias is due to lack of language translation, which we learned during our 2012 study. The 2012 study generally had good representation from all regions of the world, with the exception of an initially very poor level of participation from Japan. To solve this problem, we took two actions:

  1. We translated both the invitation and the survey into Japanese.
  2. We acquired additional engineering lists directly from Japan to augment our existing survey invitation list.

This resulted in a balanced representation from Japan. Based on that experience, we took the same approach to solve the language problem for the 2014 study.

Stakeholder Bias

Stakeholder bias occurs when someone who has a vested interest in survey results can complete an online study survey multiple times and urge others to complete the survey in order to influence the results. To address this problem, a special code was generated for each study participation invitation that was sent out. The code could only be used once to fill out the survey questions, preventing someone from taking the study multiple times or sharing the invitation with someone else.

2010 Study Bias

While architecting the 2012 study, we did discover a non-response bias associated with the 2010 study. Although multiple lists across multiple market segments and multiple regions of the world were used during the 2010 study, we discovered that a single list dominated the responses, which consisted of participants who worked on more advanced projects and whose functional verification processes tended to be mature. Hence, for this series of blogs we have decided not to publish any of the 2010 results as part of the verification technology adoption trend analysis.

The 2007, 2012, and 2014 studies were well balanced and did not exhibit the non-response bias previously described for the 2010 data. Hence, we have confidence in the general industry trends presented in this series of blogs.

Quick links to the 2014 Wilson Research Group Study results (so far…)


21 January, 2015

This is the first in a series of blogs that presents the findings from our new 2014 Wilson Research Group Functional Verification Study. However, unlike my previous Wilson Research Group functional verification study blogs, which focused on the ASIC/IC market, I plan to begin this set of blogs with an exclusive focus on FPGA trends. Why? For the following reasons:

  1. Unlike for the traditional ASIC/IC market, there have historically been very few published studies on FPGA functional verification trends. We started studying the FPGA market segment back in the 2010 study, and we now have collected sufficient data to confidently present industry trends related to this market segment.
  2. Today’s FPGA designs have grown in complexity—and many now resemble complete systems. The task of verifying SoC-class designs is daunting, and this rising complexity has forced many FPGA projects to mature their verification processes. The FPGA-focused data I present in this set of blogs will support this claim.

My plan is to release the ASIC/IC functional verification trends through a set of blogs after I finish presenting the FPGA trends.

Introduction

In 2002 and 2004, Collett International Research, Inc. conducted its well-known ASIC/IC functional verification studies, which provided invaluable insight into the state of the electronic industry and its trends in design and verification at that point in time. However, after the 2004 study, no additional Collett studies were conducted, which left a void in identifying industry trends. To address this dearth of knowledge, four studies were commissioned by Mentor Graphics in 2007, 2010, 2012, and 2014, which focused on functional verification. These were world-wide, double-blind, functional verification studies, covering all electronic industry market segments. To our knowledge, the 2014 study was the largest functional verification study ever conducted. This set of blogs presents the findings from our 2014 study and provides invaluable insight into the state of the electronic industry today in terms of both design and verification trends.

Study Background

Our study was modeled after the original 2002 and 2004 Collett International Research, Inc. studies. In other words, we endeavored to preserve the original wording of the Collett questions whenever possible to facilitate trend analysis. To ensure anonymity, we commissioned Wilson Research Group to execute our study. The purpose of preserving anonymity was to prevent biasing the participants’ responses. Furthermore, to ensure that our study would be executed as a double-blind study, the compilation and analysis of the results did not take into account the identity of the participants.

For the purpose of our study we used a multiple sampling frame approach that was constructed from eight independent lists that we acquired. This enabled us to cover all regions of the world—as well as cover all relevant electronic industry market segments. It is important to note that we decided not to include our own account team’s customer list in the sampling frame. This was done in a deliberate attempt to prevent biasing the final results. My next blog in this series will discuss other potential bias concerns when conducting a large industry study and describe what we did to address these concerns.

After cleaning the results to remove inconsistent or random responses (e.g., someone who only answered “a” on all questions), the final sample size consisted of 1886 eligible participants (i.e., n=1886). To put this figure in perspective, the 2004 Collett study sample size consisted of 201 eligible participants.

Unlike the 2002 and 2004 Collett ASIC/IC functional verification studies, which focused only on the ASIC/IC market segment, our studies were expanded in 2010 to include the FPGA market segment. We analyze these two market segments separately to provide a clear focus on each. One other difference between our studies and the Collett studies is that our studies covered all regions of the world, while the original Collett studies were conducted only in North America (US and Canada). We have the ability to compile the results both globally and regionally, but for the purpose of this set of blogs I am presenting only the globally compiled results.

Confidence Interval

All surveys are subject to sampling error. To quantify this error in probabilistic terms, we calculate a confidence interval. For example, we determined the overall margin of error for our study to be ±2.19% at a 95% confidence level. In other words, if we were to take repeated samples of size n=1886 from the population, 95% of the resulting estimates would fall within ±2.19 percentage points of the true population value, and only 5% would fall outside that range.
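
For readers curious where a number like this comes from, the textbook worst-case margin-of-error calculation for a simple random sample is shown below. This is my own illustration, not the study’s methodology write-up; the published ±2.19% presumably reflects details of the study’s methodology (for example, the actual response proportions or a finite-population correction) rather than the worst-case textbook assumption p = 0.5 used here:

    \text{MOE} \;=\; z \sqrt{\frac{p\,(1-p)}{n}}
               \;=\; 1.96 \sqrt{\frac{0.5 \times 0.5}{1886}}
               \;\approx\; 0.0226 \;\; (\text{about } \pm 2.3 \text{ percentage points at 95\% confidence})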

Study Participants

This section provides background on the makeup of the study.

Figure 1 shows the percentage of overall study participants by market segment.


Figure 1: Study participants by market segment

Figure 2 shows the percentage of overall eligible study participants by job description. An example of an eligible participant would be a self-identified design or verification engineer, or an engineering manager, who is actively working within the electronics industry. Overall, design and verification engineers accounted for 60 percent of the study participants.


Figure 2: Study participants by job title

Before I start presenting the findings from our 2014 functional verification study, I plan to discuss in my next blog (click here) general bias concerns associated with all survey-based studies—and what we did to minimize these concerns.

Quick links to the 2014 Wilson Research Group Study results (so far…)


14 January, 2015

“Who Knew?” about verification IP (VIP) was the theme of a recent DeepChip post by John Cooley on December 18.  More specifically, the article states, “Who knew VIP was big and that Wally had a good piece of it?”  We knew.

We knew that ASIC and FPGA design engineers can choose to buy design IP from several alternative sources or build their own, but that does not help with the problem of verification.  We knew that you don’t really want to rely on the same source that designed your IP to test it.  We knew that you don’t want to write and maintain bus functional models (BFMs) or more complete VIP for standard protocols.  Not that you couldn’t, but why would you if you don’t have to?

We also knew that verification teams want easy-to-use VIP that is built on a standard foundation of SystemVerilog, compliant with a protocol’s specification, and is easily configurable to your implementation.  That way it integrates into your verification environment just as easily as if you had built it yourself.

Leading design IP providers such as ARM®, PLDA, and Northwest Logic knew that Mentor Graphics’ VIP is built on standards, is protocol compliant, and is easy to use.  In fact, you can read more about what Jim Wallace, systems and software group director at ARM; Stephane Hauradou, CTO of PLDA; and Brian Daellenbach, president of Northwest Logic, have to say about Mentor Graphics’ recently introduced EZ-VIP technology for PCIe 4.0 (at this website http://www.mentor.com/company/news/mentor-verification-ip-pcie-4 ), and why they know that their customers can rely on it as well.

Verification engineers knew, too.  You can read comments from many of them (at Cooley’s website http://www.deepchip.com/items/dac14-06.html ), about their opinions on VIP.  In addition, Mercury Systems also knew.  “Mentor Graphics PCIe VIP is fully compliant with the PCIe protocol specification and with UVM coding guidelines. We found that we could drop it into our existing environment and get it up and running very quickly”, said Nick Solimini, Consulting DV Engineer at Mercury Systems. “Mentor’s support for their VIP is excellent. All our technical questions were answered promptly so we were able to be productive throughout the project”.

So, now you know: Mentor Graphics’ Questa VIP is built on standard SystemVerilog and UVM, is specification compliant, is easy to get up and running, and is an integral part of many successful verification environments today.  If you’d like to learn more about Questa VIP and Mentor Graphics’ EZ-VIP technology, send me an email, and I’ll let you in on what (thanks to Cooley and our customers) is no longer the best kept secret in verification.  Who knew?


17 November, 2014

Few verification tasks are more challenging than trying to achieve code coverage goals for a complex system that, by design, has numerous layers of configuration options and modes of operation.  When the verification effort gets underway and the coverage holes start appearing, even the most creative and thorough UVM testbench architect can be bogged down devising new tests – either constrained-random or even highly directed tests — to reach uncovered areas.

At the recent ARM® Techcon, Nguyen Le, a Principal Design Verification Engineer in the Interactive Entertainment Business Unit at Microsoft Corp., documented a real-world case study on this exact situation.  Specifically, in the paper titled “Advanced Verification Management and Coverage Closure Techniques,” Nguyen outlined his initial pain in verification management and in improving coverage closure metrics, and how he conquered both of these challenges – speeding up his regression run time by 3x, while simultaneously moving the overall coverage needle up to 97%, and saving 4 man-months in the process!  Here are the highlights:

* DUT in question
— SoC with multi-million gate internal IP blocks
— Consumer electronics end-market = very high volume production = very high cost of failure!

* Verification flow
— Constrained-random, coverage-driven approach using UVM, with IP block-level testbenches as well as an SoC-level testbench
— Rigorous testplan requirements tracking, supported by a variety of coverage metrics including functional coverage with SystemVerilog covergroups, assertion coverage with SVA cover directives, and code coverage on statements, branches, expressions, conditions, and FSMs (a brief illustrative SystemVerilog sketch of these coverage constructs appears after this outline)

* Sign-off requirements
— All test requirements tracked through to completion
— 100% functional, assertion and code coverage

* Pain points
— Code coverage: code coverage holes can come from a variety of expected and unforeseen sources: dead code can be due to unused functions in reused IP blocks, specific configuration settings, or a bug in the code.  Given the rapid pace of the customer’s development cycle, it’s all too easy for dead code to slip into the DUT due to the frequent changes in the RTL, or due to different interpretations of the spec.  “Unexplainably dead” code coverage areas were manually inspected, and the exclusions for properly unreachable code were manually addressed with the addition of pragmas.  Both procedures were time consuming and error prone.
— Verification management: the verification cycle and the generated data were managed through manually-maintained scripting.  Optimizing the results display, throughput, and tool control became a growing maintenance burden.

* New automation
— Questa Verification Manager: built around the Unified Coverage Database (UCDB) standard, the tool supports a dynamic verification plan cross-linked with the functional coverage points and code coverage of the DUT.  In this way the dispersed project teams now had a unified view which told them at a glance which tests were contributing the most value, and which areas of the DUT needed more attention.  In parallel, the included administrative features enabled efficient control of large regressions, merging of results, and quick triage of failures.

— Questa CoverCheck: this tool reads code coverage results from simulation in UCDB, and then leverages formal technology under the hood to mathematically prove that no stimulus could ever activate the code in question. If it’s OK for a given block of code to be dead due to a particular configuration choice, etc., the user can automatically generate waivers to refine the code coverage results.  Additionally, the tool can identify segments of code that, though difficult to reach, might someday be exercised in silicon. In such cases, CoverCheck helps point the way to testbench enhancements to better reach these parts of the design.

— The above tools used in concert (along with Questa simulation) enabled a very straightforward coverage score improvement process as follows:
1 – Run full regression and merge the UCDB files
2 – Run Questa CoverCheck with the master UCDB created in (1)
3 – Use CoverCheck to generate exclusions for “legitimate” unreachable holes, and apply said exclusions to the UCDB
4 – Use CoverCheck to generate waveforms for reachable holes, and share these with the testbench developer(s) to refine the stimulus
5 – Report the new & improved coverage results in Verification Manager

* Results
— Automation with Verification Manager enabled Microsoft to reduce the run-time variation across test sequences from 10x down to a focused 2x.  Additionally, by using the coverage reporting to rank and optimize their tests, they increased their regression throughput by 3x!
— With CoverCheck, the Microsoft engineers improved code coverage by 10 – 15% in most hand-coded RTL blocks, saw up to 20% coverage improvement for auto-generated RTL code, and in a matter of hours were able to increase their overall coverage number from 87% to 97%!
— Bottom-line: the customer estimated that they saved 4 man-months on one project with this process
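
To make the coverage constructs referenced in the outline concrete, here is a minimal, illustrative SystemVerilog sketch of a functional covergroup plus an SVA cover directive. The signal names, modes, and bins are hypothetical examples of mine, not code from the Microsoft testbench:

    // Minimal illustrative sketch -- hypothetical signals, not the Microsoft design.
    module coverage_sketch (
      input logic       clk,
      input logic       rst_n,
      input logic [1:0] cfg_mode,   // hypothetical configuration field
      input logic       req,
      input logic       gnt
    );

      // Functional coverage: record which configuration modes were exercised.
      covergroup cfg_cg @(posedge clk);
        option.per_instance = 1;
        cp_mode : coverpoint cfg_mode {
          bins low_power = {2'b00};
          bins normal    = {2'b01};
          bins turbo     = {2'b10};
          illegal_bins reserved = {2'b11};  // reserved encoding must never appear
        }
      endgroup

      cfg_cg cg_inst = new();

      // Assertion coverage: observe a request followed by a grant within 8 cycles.
      cov_req_gnt : cover property (@(posedge clk) disable iff (!rst_n)
                                    req ##[1:8] gnt);

    endmodule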

Figure: CoverCheck ROI summary from Microsoft’s 2014 ARM Techcon presentation

Taking a step back, success stories like this one, where automated, formal-based applications leverage the exhaustive nature of formal analysis to tame once intractable problems (and require no prior knowledge of formal or assertion-based verification), are becoming more common by the day.  In this case, Mentor’s formal-based CoverCheck is clearly the right tool for this specific verification need, literally filling in the gaps in a traditional UVM testbench verification flow.  Hence, I believe the overall moral of the story is a simple rule of thumb: when you are grappling with a “last mile problem” of unearthing all the unexpected, yet potentially damaging corner cases, consider a formal-based application as the best tool for the job.  Wouldn’t you agree?

Joe Hupcey III

Reference links:

Direct link to the presentation slides: https://verificationacademy.com/Advanced-Verification-Management-Presentation

ARM Techcon 2014 Proceedings: http://www.armtechcon.com/

Official paper citation:
Advanced Verification Management and Coverage Closure Techniques, Nguyen Le, Microsoft; Harsh Patel, Roger Sabbagh, Darron May, Josef Derner, Mentor Graphics


5 November, 2014

Between 2006 and 2014, the average number of IPs integrated into an advanced SoC increased from about 30 to over 120. In the same period, the average number of embedded processors found in an advanced SoC increased from one to as many as 20. However, increased design size is only one dimension of the growing verification complexity challenge. Beyond this growing-functionality phenomenon are new layers of requirements that must be verified. Many of these verification requirements did not exist ten years ago, such as multiple asynchronous clock domains, interacting power domains, security domains, and complex HW/SW dependencies. Add all these challenges together, and you have the perfect storm brewing.

It’s not just the challenges in design and verification that have been changing, of course. New technologies have been developed to address emerging verification challenges. For example, new automated ways of applying formal verification have been developed that allow engineers who are not formal experts to take advantage of its significant benefits. New technologies for stimulus generation have also been developed that allow verification engineers to develop complex stimulus scenarios 10x more efficiently than with directed tests and execute those tests 10x more efficiently than with pure-random generation.

It’s not just technology, of course. Along with new technologies, new methodologies are needed to make adoption of new technologies efficient and repeatable. The UVM is one example of these new methodologies that make it easier to build complex and modular testbench environments by enabling reuse – both of verification components and knowledge.

The Verification Academy website provides great resources for learning about new technologies and methodologies that make verification more effective and efficient. This year, we tried something new and took Verification Academy on the road with live events in Austin, Santa Clara, and Denver. It was great to see so many verification engineers and managers attending to learn about new verification techniques and share their experiences applying these techniques with their colleagues.


If you weren’t able to attend one of the live events – or if you did attend and really want to see a particular session again – you’re in luck. The presentations from the Verification Academy Live seminars are now available on the Verification Academy site:

  • Navigating the Perfect Storm: New School Verification Solutions
  • New School Coverage Closure
  • New School Connectivity Checking
  • New School Stimulus Generation Techniques
  • New School Thinking for Fast and Efficient Verification using EZ-VIP
  • Verification and Debug: Old School Meets New School
  • New Low Power Verification Techniques
  • Establishing a company-wide verification reuse library with UVM
  • Full SoC Emulation from Device Drivers to Peripheral Interfaces

You can find all the sessions via the following link:

https://verificationacademy.com/seminars/academy-live


7 May, 2014

My Feb. 4 post introduced Mentor Graphics’ three-step FPGA verification process intended to help design teams get out of the reprogrammable lab more effectively. Since then, I’ve engaged FPGA vendors, design managers and engineers to explain the process, paying special attention to the merits and technical detail for injecting automation into any FPGA verification environment, the hallmark of Mentor’s process. The feedback from these conversations helped me to develop a series of technical webinars, now available for free and on-demand. Check them out and let us know what you think in the comments below. My hope is the webinars might serve as a starting point for your own conversations on verification of FPGAs, demand for which seems to continue to grow as process nodes shrink.

Injecting Automation into Verification – FPGA Market Trends

Injecting Automation into Verification – Code Coverage

Injecting Automation into Verification – Assertions

Injecting Automation into Verification – Improved Throughput


25 April, 2014

DVCon 2014 Conference Proceedings Published

With record attendance announced for DVCon 2014, one might wonder if there is really a need to put some of the “Accellera Day” tutorial videos online.  With more than 1,000 professionals attending in some capacity, it would be easy to conclude that everyone who needs to know about UVM, and the developments in the updated version of it, probably already knows.  Looking at just the LinkedIn design and verification forums, one will realize there are tens of thousands who would have benefited if they had attended DVCon.  Thus, sharing this information more broadly is in order.

UVM Tutorial Video

“UVM – What’s Now and What’s Next” is the title of the DVCon 2014 tutorial on UVM.  It covered use cases and pragmatic topics of the current UVM 1.1 standard as well as advanced topics for the next update, UVM 1.2.  The presenters covered sequence creation, register layer use, TLM-based communication, test execution, run-time phases and messaging enhancements.

The tutorial was split into five separate sections delivered by five speakers as follows:

  • Working Group Update: Adam Sherer, Accellera (7 min.)
  • Overview and Library Concepts: John Aynsley, Doulos (36 min.)
  • Stimulus Generation: Shawn Honess, Synopsys (21 min.)
  • UVM Register Layer: Tom Fitzpatrick, Mentor Graphics (36 min.)
  • UVM 1.2 Introduction: Uwe Simm, Cadence Design Systems (25 min.)
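
For readers who are new to UVM and want a feel for the stimulus-generation topics the tutorial walks through, here is a minimal sequence skeleton. This is my own sketch of the general UVM pattern, not code from the tutorial, and the transaction fields and names are hypothetical:

    // Minimal illustrative UVM sequence sketch -- not taken from the tutorial material.
    `include "uvm_macros.svh"
    import uvm_pkg::*;

    // Hypothetical bus transaction item
    class bus_item extends uvm_sequence_item;
      rand bit [31:0] addr;
      rand bit [31:0] data;
      rand bit        write;

      `uvm_object_utils_begin(bus_item)
        `uvm_field_int(addr,  UVM_ALL_ON)
        `uvm_field_int(data,  UVM_ALL_ON)
        `uvm_field_int(write, UVM_ALL_ON)
      `uvm_object_utils_end

      function new(string name = "bus_item");
        super.new(name);
      endfunction
    endclass

    // A simple sequence that sends a handful of randomized transactions
    class bus_rand_seq extends uvm_sequence #(bus_item);
      `uvm_object_utils(bus_rand_seq)

      function new(string name = "bus_rand_seq");
        super.new(name);
      endfunction

      virtual task body();
        bus_item item;
        repeat (8) begin
          item = bus_item::type_id::create("item");
          start_item(item);
          if (!item.randomize() with { addr inside {[32'h0 : 32'hFF]}; })
            `uvm_error("RANDFAIL", "transaction randomization failed")
          finish_item(item);
        end
      endtask
    endclass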

You can find out more information about the online tutorial videos here.  Registration is required, but there is no charge for access.  Once you have registered, you will get links to each of the five sections.  You can stream them or download them for offline access as you wish.  They are suitable for viewing on your computer or mobile devices.

DVCon 2014 Proceedings

DVCon 2014 was a full conference; it was more than just the Accellera Day UVM Tutorial.  And in keeping with DVCon tradition, the conference proceedings are made available to all, without charge, several months after the conference.  If you visit the DVCon history area, you will find the 2014 proceedings have been published.  What I like about the DVCon proceedings is that not only are the papers published, but the slides presented at the conference often accompany the papers.

As an example, if you were interested in the DVCon 2014 Best Oral Presentation paper and presentation (Kelly D. Larson of NVIDIA, “Determining Test Quality through Dynamic Runtime Monitoring of SystemVerilog Assertions,” by the way), you will now find both the paper and presentation available online here.

For all those who did not make it to DVCon 2014, or who were there and could not see everything, the proceedings are now online and the first of the Accellera Day tutorial videos has been published. Accellera is busy readying its other tutorial videos.  I’ll share information on their availability as they appear in the weeks and months ahead.


4 February, 2014

Marketing teams at FPGA vendors have been busy as the silicon nanometer geometry race escalates. Altera is “delivering the unimaginable” while Xilinx is offering “all programmable SoCs” to design centers. It’s clear that the SoC has become more accessible to a broader market today and that FPGA vendors have staked out a solid technology roadmap for the near future. Do marketing messages surrounding the geometry race affect the day-to-day life of engineers, and if so, how – especially when it comes to verification?

An excellent whitepaper from Altera, “The Breakthrough Advantage for FPGAs with Tri-Gate Technology,” covers Altera’s Stratix 10 FPGAs and SoCs. The paper describes verification challenges in this new expanded market this way: “Although current generation FPGAs require a rigorous simulation verification methodology rivaling ASICs, the additional lab testing and ability to reprogram FPGAs save substantial manpower investment. The overall cost of ownership must be considered when comparing an FPGA whose component price is higher than an ASIC of similar complexity.” I believe you can use this statement to engage your management in a discussion about better verification processes.

Xilinx also has excellent published technical resources. Its recent UltraScale backgrounder describes how they are solving the challenges in implementing a design with their reprogrammable silicon. Clearly Xilinx has made an impressive investment to make it easier to implement a design with its FPGA UltraScale products. Improvements include ASIC-like clocking and annealing dataflow bottlenecks without compromising performance. Xilinx also describes improvements when using its Vivado design suite, particularly when it comes to in-lab design bring up.

For other FPGA insights, it’s also worth checking out Electronics Engineering Journal’s recent article “Proliferating Programmability in 2014,” which claims that the long-term future of FPGAs hinges on tool flows even though, as Kevin Morris sees it, EDA seems to have abandoned the market. (Kevin, I’m here to tell you you’re wrong.)

Do you think it’s inevitable that your FPGA team will first struggle to make it across the verification finish line before adopting a more process-oriented verification flow like the ASIC market demands? It’s not. I base this conclusion on the many conversations I’ve had with FPGA designers, their managers, sales engineers and many other talented people in this market over the years. Yes, there are significant challenges in FPGA design, but not all of them are technology related. With some emotion, one engineer remarked that debugging the same type of issue over and over in the hardware lab and expecting a different outcome was insane. (He’s right.) Others say they need specific ROI information for their management to even accept their need for change. Still others state that had they only known about the solutions I talked about in my seminar a year ago, they would not have spent months and months bringing up their design in the lab.

With my peers here at Mentor Graphics, I have developed a three-step verification flow that includes coverage, assertions and improved throughput. I’ll write about this flow and related issues in the weeks ahead here on this blog. The flow is built on fundamental verification technologies that benefit the broad FPGA market. The goal, in developing the technology and writing about it here, has been to provide practical solutions and help more FPGA teams cross the verification gap.
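
To give a concrete flavor of the assertion step in that flow, below is a small, self-contained SystemVerilog assertion module of the kind that pays for itself during lab bring-up. It is my own illustrative sketch using a hypothetical FIFO interface, not material from the webinars:

    // Illustrative SVA sketch -- hypothetical FIFO control signals.
    module fifo_checks (
      input logic clk,
      input logic rst_n,
      input logic push,
      input logic pop,
      input logic full,
      input logic empty
    );

      // Never push into a full FIFO.
      a_no_overflow : assert property (@(posedge clk) disable iff (!rst_n)
                                       full |-> !push)
        else $error("push asserted while FIFO is full");

      // Never pop from an empty FIFO.
      a_no_underflow : assert property (@(posedge clk) disable iff (!rst_n)
                                        empty |-> !pop)
        else $error("pop asserted while FIFO is empty");

    endmodule

    // The checker can be attached to the RTL without modifying it, for example:
    //   bind fifo fifo_checks u_fifo_checks (.*);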

In the meantime, what are your stories? Are you able to influence your management into adopting advanced technology to aid lab bring-up? Is your management’s bias towards lower cost and faster implementation (at the expense of verification)? Let me know in the comments or, if you prefer, by e-mail: joe_rodriguez@mentor.com.

