Posts Tagged ‘Verification’

25 August, 2015

I have always wanted to contribute to the growing verification engineering community in India, which Mentor’s CEO Wally Rhines calls “the largest verification market in the world”. So when I first accompanied the affable Dennis Brophy to the IEEE India office back in April of 2014 to discuss the possibility of holding a DVCon in India, I knew I was in the right place at the right time, with an opportunity to contribute to this community.

It has been over a year since that meeting, and I don’t have to write about how big a success the first-ever DVCon India in 2014 was. I’m glad I played a small part by serving on the Technical Program Committee for the DV track, reviewing abstracts, a responsibility I thoroughly enjoyed. This year, in addition to being on the TPC, I am contributing as the Chair for Tutorials and Posters. I am eagerly looking forward to the second edition of the Verification Extravaganza, on 10th and 11th September 2015, and to the amazing agenda we have planned for attendees.

Day 1 of the conference is dedicated to keynotes, panel discussions and tutorials, while day 2 is dedicated fully to papers, with a DV track and an ESL track plus a panel. Participants are free to attend any track and can move between tracks. This year we received many sponsored tutorial submissions; hence, there will be three parallel tutorial tracks, one on the DV side and two on the ESL side.

Below is a list of the sessions at which Mentor Graphics will be presenting:

  • Keynote from Harry Foster discussing the growing complexity across the entire design ecosystem
    Thursday, September 10, 2015
    9:45am – 10:30am
    Grand Ball Room, The Leela Palace
  • Creating SystemVerilog UVM Testbenches for Simulation and Emulation Platform Portability to Boost Block-to-System Verification Productivity
    Thursday, September 10, 2015
    1:30pm – 3:00pm
    DV Track, Diya, The Leela Palace
  • Expediting the code coverage closure using Static Formal Techniques – A proven approach at block and SoC Levels!
    Thursday, September 10, 2015
    1:30pm – 3:00pm
    DV Track, Grand Ball Room, The Leela Palace

The papers on day 2 are split into three parallel tracks: one DV track and two ESL tracks. Within the DV track, one area is dedicated to UVM/SystemVerilog. The other DV categories cover Portable Stimulus and graph-based stimulus; AMS; SoC and stimulus generation; emulation, acceleration and prototyping; and a general selected-papers category. The surprise among the categories is Portable Stimulus, which was a tutorial last year but has continued to draw high interest; this year’s sessions will build on that initial tutorial.

Overall, there is an exciting mix of keynotes, tutorials, panels, papers and posters, which will make for two exceptional days of learning, networking and fun. I look forward to seeing you at DVCon India 2015, and if you see me at the show, please come say hello and let me know what you think of the conference.


10 August, 2015

ASIC/IC Power Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I presented our study findings on various verification language and library adoption trends. In this blog, I focus on power trends.

Today, we see that about 73 percent of design projects actively manage power, with techniques ranging from simple clock-gating to complex hypervisor/OS-controlled power-management schemes. What is interesting in our 2014 study is that the data indicate a 19 percent increase over the last two years in designs that actively manage power (see Figure 1).


Figure 1. ASIC/IC projects working on designs that actively manage power
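For readers less familiar with the simple end of that range, below is a minimal sketch of a latch-based clock gate (my own illustration, not taken from the study). In practice an integrated clock-gating cell from the standard-cell library is instantiated rather than hand-written logic, but the principle is the same: the enable is latched while the clock is low so the gated clock cannot glitch.

    // Minimal illustrative clock gate (SystemVerilog).
    // Latching 'en' while the clock is low prevents glitches on 'gclk'
    // when the enable changes in the middle of the high phase.
    module clk_gate (
      input  logic clk,
      input  logic en,
      output logic gclk
    );
      logic en_latched;

      always_latch
        if (!clk) en_latched <= en;   // transparent during the low phase

      assign gclk = clk & en_latched; // gate the clock with the safe enable
    endmodule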

Figure 2 shows the various aspects of power management that design projects must verify (for the 73 percent of design projects that actively manage power). The data from our study suggest that many projects are moving to more complex power-management schemes that involve software control. This adds a new layer of complexity to a project’s verification challenge, since these more complex schemes often require emulation to verify fully.


Figure 2. Aspects of power-managed design that are verified

Since power intent cannot be directly described in an RTL model, alternative supporting notations have recently emerged to capture it. In the 2014 study, we wanted to get a sense of where the industry stands in adopting these notations. For projects that actively manage power, Figure 3 shows the various power-intent standards that have been adopted. Some projects actively use multiple standards (such as different versions of UPF, or a combination of CPF and UPF), which is why the adoption results do not sum to 100 percent.


Figure 3. Notation used to describe power intent
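To make these notations concrete, below is a minimal UPF sketch (my own illustration; the domain, net, and signal names are invented for the example). It captures the kind of intent that has no home in the RTL itself: a switchable power domain whose outputs are clamped low by an isolation strategy while the domain is powered down.

    # Minimal illustrative UPF (IEEE 1801) power-intent sketch.
    create_power_domain PD_CORE -elements {u_core}
    create_supply_net VDD_SW -domain PD_CORE
    set_isolation iso_core -domain PD_CORE \
        -applies_to outputs \
        -clamp_value 0 \
        -isolation_signal iso_en \
        -isolation_sense high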

In an earlier blog in this series, I provided data that suggest a significant amount of effort is being applied to ASIC/IC functional verification. An important question the various studies have tried to answer is whether this increasing effort is paying off. In my next blog (click here), I present verification results findings in terms of schedules, number of required spins, and classification of functional bugs.

Quick links to the 2014 Wilson Research Group Study results


3 June, 2015

FPGA Language and Library Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I focused on FPGA verification techniques and technologies adoption trends, as identified by the 2014 Wilson Research Group study. In this blog, I’ll present FPGA design and verification language trends, as identified by the Wilson Research Group study.

You might note that the percentages for some of the language and library data I present sum to more than one hundred percent. The reason is that many FPGA projects today use multiple languages.

FPGA RTL Design Language Adoption Trends

Let’s begin by examining the languages used for FPGA RTL design. Figure 1 shows the trends in languages used for design by comparing the 2012 Wilson Research Group study (in dark blue), the 2014 Wilson Research Group study (in light blue), as well as the projected design language adoption trends within the next twelve months (in purple). Note that adoption is declining for most FPGA design languages, with the exception of Verilog and SystemVerilog.

Also, it’s important to note that this study focused on languages used for RTL design. We have conducted a few informal studies related to languages used for architectural modeling—and it’s not too big of a surprise that we see increased adoption of C/C++ and SystemC in that space. However, since those studies have (thus far) been informal and not as rigorously executed as the Wilson Research Group study, I have decided to withhold that data until a more formal study can be executed related to architectural modeling and virtual prototyping.

Figure 1. Trends in languages used for FPGA design

It’s not too big of a surprise that VHDL is the predominant language used for FPGA RTL design, although the projected trend is that Verilog will likely overtake VHDL as the predominant FPGA design language in the near future.

FPGA Verification Language Adoption Trends

Next, let’s look at the languages used to verify FPGA designs (that is, languages used to create simulation testbenches). Figure 2 shows the trends in terms of languages used to create simulation testbenches by comparing the 2012 Wilson Research Group study (in dark blue), the 2014 Wilson Research Group study (in light blue), as well as the projected verification language adoption trends within the next twelve months (in purple).

Figure 2. Trends in languages used in verification to create FPGA simulation testbenches

FPGA Testbench Methodology Class Library Adoption Trends

Now let’s look at testbench methodology and class library adoption for FPGA designs. Figure 3 shows the trends in methodology and class library adoption by comparing the 2012 Wilson Research Group study (in dark blue), the 2014 Wilson Research Group study (in light blue), as well as the projected methodology and class library adoption trends within the next twelve months (in purple).

Figure 3. FPGA methodology and class library adoption trends

Today, we see a downward trend in terms of adoption of all testbench methodologies and class libraries with the exception of UVM, which has increased by 28 percent since 2012. The study participants were also asked what they plan to use within the next 12 months, and based on the responses, UVM is projected to increase an additional 20 percent.

FPGA Assertion Language and Library Adoption Trends

Finally, let’s examine assertion language and library adoption for FPGA designs. The 2014 Wilson Research Group study found that 44 percent of all the FPGA projects have adopted assertion-based verification (ABV) as part of their verification strategy. The data presented in this section shows the assertion language and library adoption trends related to those participants who have adopted ABV.

Figure 4 shows the trends in assertion language and library adoption by comparing the 2012 Wilson Research Group study (in dark blue), the 2014 Wilson Research Group study (in light blue), and the projected adoption trends within the next 12 months (in purple). The adoption of SVA continues to increase, while other assertion languages and libraries either remain flat or decline.

Figure 4. Trends in assertion language and library adoption for FPGA designs
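For readers who have not yet adopted ABV, below is a minimal SVA sketch (my own illustration, not from the study) that gives a flavor of the language: a small checker verifying that an arbiter’s grant vector is at most one-hot on every clock.

    // Minimal illustrative SVA checker (SystemVerilog).
    module arb_checker (
      input logic       clk,
      input logic       rst_n,
      input logic [3:0] gnt    // one grant line per requester
    );
      // At most one grant may be asserted on any rising clock edge.
      assert property (@(posedge clk) disable iff (!rst_n) $onehot0(gnt))
        else $error("arb_checker: multiple grants asserted");
    endmodule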

In my next blog (click here), I will shift the focus of this series of blogs and start to present the ASIC/IC findings from the 2014 Wilson Research Group Functional Verification Study.

Quick links to the 2014 Wilson Research Group Study results


2 June, 2015

This year we are trying something new at the Verification Academy booth during next week’s 2015 Design Automation Conference.  We’ve decided to host an interactive panel on the controversial topic of Agile development. I say controversial because you typically find two camps of engineers when discussing the subject of Agile development—the believers and the non-believers.

My colleague Neil Johnson, principal consultant from XtremeEDA Corporation and a leading expert in Agile development, will provide some context for the topic with a short background on Agile methods to kick the panel off. Then I plan to join Neil on the panel, which will be moderated by Mentor’s own world-renowned Dennis Brophy.  Our intent is to have a healthy, interactive discussion with both the believers and the non-believers in the audience.

So, why is the subject of Agile development even worthy of discussion at DAC? Well, not to entirely give away my position on the subject…but I think it’s worthwhile to note some of the recent findings related to root cause of logical and functional flaws from the 2014 Wilson Research Group Functional Verification Study (see figure below).

Clearly, design errors are a major factor contributing to bugs. Yet a growing concern is the number of issues surrounding the specification that lead to logical and functional flaws. In reality, there is no such thing as a perfect specification, and few projects can afford to wait to start development until perfection is achieved. Furthermore, in many market segments, late-stage changes to the specification are common practice to ensure that the final product is competitive in a rapidly changing market. Could Agile development, in which requirements and solutions evolve through collaboration between self-organizing, cross-functional teams, be the saving grace? Please join us on June 8th at 5pm in the Verification Academy booth at DAC and hear what the experts are saying!


21 April, 2015

FPGA Verification Effectiveness Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I focused on the amount of effort spent in FPGA verification. We have seen in previous blogs that a significant amount of effort is being applied to FPGA functional verification. In this blog I focus on the effectiveness of verification in terms of FPGA project schedule and bug escapes.

FPGA Schedules

Figure 1 presents design completion time compared to the project’s original schedule. A surprise in the 2014 findings is that we saw an improvement in the number of FPGA projects meeting schedule compared to 2012. It is unclear why we are seeing this trend now. Perhaps managers are getting better at scheduling, or are becoming more pessimistic with their schedules. Or perhaps it is due to the increased amount of reuse (of both design and verification IP). Or is the increased FPGA verification effort prior to “getting to the lab” starting to pay off for some projects? This data point raises some interesting questions worth exploring further. Regardless, a significant number of FPGA projects still miss their originally planned schedule.


Figure 1. FPGA design completion time compared to the project’s original schedule

FPGA Lab Iterations

ASIC/IC projects track the number of required spins that occur prior to market production. In fact, this can be a useful metric for determining the overall verification effectiveness of an ASIC/IC project. Unfortunately, we lack such a metric for FPGA projects. For the 2014 study, we decided to ask about the average number of lab iterations required before the design went into production, again to get a sense of a project’s verification effectiveness. The results are shown in Figure 2. However, I’m not convinced that FPGA lab iterations are analogous to ASIC/IC respins as a verification effectiveness metric. Perhaps a better metric for future studies would be the number of bugs that escape into production and are found in the field.


Figure 2. Number of FPGA iterations in the lab (no trend data available)

FPGA Bug Classification

For the 2014 study, we asked the FPGA project participants to identify the types of flaws that were contributing to rework in the lab. In Figure 3, I show the two leading causes of rework: logical and functional bugs, and clocking bugs. The data seem to suggest that these issues are growing, perhaps due to the design of larger and more complex FPGAs. Again, this is a data point worth exploring further.


Figure 3. Types of Flaws Resulting in FPGA Rework

In Figure 4, I show trends in the main factors contributing to logic and functional flaws, and you can see that design errors are the leading cause. But note that a significant number of flaws relate to some aspect of the specification, such as changes in the specification, or incorrect or incomplete specifications. Problems associated with the specification process are a common theme I often hear when visiting FPGA customers.


Figure 4. Root cause of FPGA functional flaws

In my next blog (click here), I will present the findings from our study on FPGA verification technology adoption trends.

Quick links to the 2014 Wilson Research Group Study results


1 April, 2015

FPGA Verification Effort Trends (Continued)

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here). In my previous blog (click here), I focused on the controversial topic of effort spent in FPGA verification. This blog continues that discussion. I stated in my previous blog that I don’t believe there is a simple answer to the question, “how much effort was spent on verification in your last FPGA project?” I believe that it is necessary to look at multiple data points to truly get a sense of the real effort involved in verification today. So, let’s look at a few additional findings from the study.

Time FPGA designers spend in verification

For projects that have a separation of teams (i.e., design engineers and verification engineers), it’s important to note that FPGA verification engineers are not the only project members involved in functional verification. FPGA design engineers spend a significant amount of their time in verification too, as shown in Figure 1.


Figure 1. Average (mean) time FPGA design engineers spend in design vs. verification.

You might note that, on average, FPGA design engineers actually spend slightly more time doing verification than design. We are not showing trends here because our previous study lacks sufficient data on these questions for FPGA designs. We anticipate being able to show trends after our next study (currently scheduled for 2016).

Even if the FPGA project has a separation of teams, the designers are still involved in the verification process, ranging from:

  • Small sandbox testing to explore various aspects of the implementation
  • Full functional testing of IP blocks and SoC integration
  • Debugging verification problems identified by a separate verification team

In fact, the desire to understand exactly where FPGA designers spend their time has led us to conduct a series of follow-on discussions with FPGA projects from various market segments. Through this process we have learned that many project managers are concerned about the increasing amount of debugging time spent on a project (both pre-lab and in-lab). This is one area of FPGA verification that we plan to continue to explore through in-depth discussions with multiple FPGA projects around the world.

Percentage of time FPGA verification engineers spend on various tasks

Next, let’s look at the mean time FPGA verification engineers spend performing the various tasks related to their specific project, shown in Figure 2. You might note that verification engineers spend most of their time in debugging. Ideally, if all the tasks were optimized, you would expect this. Unfortunately, the time spent in debugging can vary significantly from project to project, which presents scheduling challenges for managers during a project’s verification planning process.


Figure 2. Average (mean) time verification engineers spend on various tasks

In our 2012 study we found that FPGA verification engineers spent about 37 percent of their time in debugging tasks. Between 2012 and 2014, there was a 16 percent increase in the amount of time spent in debugging. Hence, the data suggest that debugging effort is increasing for both FPGA design and verification engineers.

In my next blog (click here) I present our study findings in terms of FPGA schedules, iterations in the lab, and classification of functional bugs.

Quick links to the 2014 Wilson Research Group Study results


17 November, 2014

Few verification tasks are more challenging than trying to achieve code coverage goals for a complex system that, by design, has numerous layers of configuration options and modes of operation. When the verification effort gets underway and the coverage holes start appearing, even the most creative and thorough UVM testbench architect can get bogged down devising new tests, either constrained-random or highly directed, to reach the uncovered areas.

At the recent ARM® TechCon, Nguyen Le, a Principal Design Verification Engineer in the Interactive Entertainment Business Unit at Microsoft Corp., documented a real-world case study on this exact situation. Specifically, in the paper titled “Advanced Verification Management and Coverage Closure Techniques”, Nguyen outlined his initial pain in verification management and in improving coverage closure metrics, and how he conquered both challenges, speeding up his regression run time by 3x while simultaneously moving the overall coverage needle up to 97%, and saving 4 man-months in the process! Here are the highlights:

* DUT in question
— SoC with multi-million gate internal IP blocks
— Consumer electronics end-market = very high volume production = very high cost of failure!

* Verification flow
— Constrained-random, coverage-driven approach using UVM, with IP block-level testbenches as well as an SoC-level testbench
— Rigorous testplan requirements tracking, supported by a variety of coverage metrics including functional coverage with SystemVerilog covergroups, assertion coverage with SVA cover properties, and code coverage on statements, branches, expressions, conditions, and FSMs (a minimal sketch of these three coverage types appears after the results below)

* Sign-off requirements
— All test requirements tracked through to completion
— 100% functional, assertion and code coverage

* Pain points
— Code coverage: code coverage holes can come from a variety of expected and unforeseen sources. Dead code can arise from unused functions in reused IP blocks, from specific configuration settings (also illustrated in the sketch below), or from a bug in the code. Given the rapid pace of the customer’s development cycle, it’s all too easy for dead code to slip into the DUT through frequent RTL changes or differing interpretations of the spec. “Unexplainably dead” code coverage areas were manually inspected, and exclusions for properly unreachable code were manually addressed by adding pragmas. Both procedures were time consuming and error prone.
— Verification management: the verification cycle and the generated data were managed through manually-maintained scripting.  Optimizing the results display, throughput, and tool control became a growing maintenance burden.

* New automation
— Questa Verification Manager: built around the Unified Coverage Database (UCDB) standard, the tool supports a dynamic verification plan cross-linked with the functional coverage points and code coverage of the DUT.  In this way the dispersed project teams now had a unified view which told them at a glance which tests were contributing the most value, and which areas of the DUT needed more attention.  In parallel, the included administrative features enabled efficient control of large regressions, merging of results, and quick triage of failures.

— Questa CoverCheck: this tool reads code coverage results from simulation in UCDB, and then leverages formal technology under the hood to mathematically prove that no stimulus could ever activate the code in question. If it’s OK for a given block of code to be dead due to a particular configuration choice, etc., the user can automatically generate waivers to refine the code coverage results. Additionally, the tool can identify segments of code that, though difficult to reach, might someday be exercised in silicon. In such cases, CoverCheck helps point the way to testbench enhancements to better reach these parts of the design.

— The above tools used in concert (along with Questasim) enabled a very straightforward coverage score improvement process as follows:
1 – Run full regression and merge the UCDB files
2 – Run Questa CoverCheck with the master UCDB created in (1)
3 – Use CoverCheck to generate exclusions for “legitimate” unreachable holes, and apply said exclusions to the UCDB
4 – Use CoverCheck to generate waveforms for reachable holes, and share these with the testbench developer(s) to refine the stimulus
5 – Report the new & improved coverage results in Verification Manager

* Results
— Automation with Verification Manager enabled Microsoft to reduce the runtime variation of test sequences from 10x down to a focused 2x. Additionally, by using the coverage reporting to rank and optimize their tests, they increased their regression throughput by 3x!
— With CoverCheck, the Microsoft engineers improved code coverage by 10 – 15% in most hand-coded RTL blocks, saw up to 20% coverage improvement for auto-generated RTL code, and in a matter of hours were able to increase their overall coverage number from 87% to 97%!
— Bottom-line: the customer estimated that they saved 4 man-months on one project with this process

[Figure: CoverCheck ROI slide from Microsoft’s 2014 ARM TechCon presentation]
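To ground the terminology above, below is a minimal SystemVerilog sketch of my own (not from Microsoft’s paper) showing the three coverage types the team tracked, plus a configuration-dependent branch of the kind a formal tool such as CoverCheck can prove unreachable and waive:

    // Illustrative only: functional, assertion, and code coverage in one
    // module, with a parameter setting that creates provably dead code.
    module cov_sketch #(parameter bit ECC_EN = 0) (
      input  logic       clk,
      input  logic       rst_n,
      input  logic [1:0] mode,
      input  logic       req,
      input  logic       gnt
    );
      // Functional coverage: a SystemVerilog covergroup sampling 'mode'.
      covergroup mode_cg @(posedge clk);
        coverpoint mode;
      endgroup
      mode_cg cg = new();

      // Assertion coverage: an SVA cover for a req-to-gnt handshake.
      cover property (@(posedge clk) disable iff (!rst_n) req ##[1:4] gnt);

      // Code coverage: when ECC_EN == 0 the branch below can never
      // execute, so formal analysis can prove it dead and waive it.
      logic ecc_fixed;
      always_ff @(posedge clk or negedge rst_n)
        if (!rst_n)             ecc_fixed <= 1'b0;
        else if (ECC_EN && req) ecc_fixed <= 1'b1;
    endmodule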

Taking a step back, success stories like this one, where automated, formal-based applications leverage the exhaustive nature of formal analysis to tame once-intractable problems (while requiring no prior knowledge of formal or assertion-based verification), are becoming more common by the day. In this case, Mentor’s formal-based CoverCheck is clearly the right tool for this specific verification need, literally filling in the gaps in a traditional UVM testbench verification flow. Hence, I believe the overall moral of the story is a simple rule of thumb: when you are grappling with the “last mile problem” of unearthing all the unexpected, yet potentially damaging corner cases, consider a formal-based application as the best tool for the job. Wouldn’t you agree?

Joe Hupcey III

Reference links:

Direct link to the presentation slides: https://verificationacademy.com/Advanced-Verification-Management-Presentation

ARM Techcon 2014 Proceedings: http://www.armtechcon.com/

Official paper citation:
Advanced Verification Management and Coverage Closure Techniques, Nguyen Le, Microsoft; Harsh Patel, Roger Sabbagh, Darron May, Josef Derner, Mentor Graphics


7 May, 2014

My Feb. 4 post introduced Mentor Graphics’ three-step FPGA verification process, intended to help design teams get out of the reprogrammable lab more effectively. Since then, I’ve engaged FPGA vendors, design managers and engineers to explain the process, paying special attention to the merits and technical details of injecting automation into any FPGA verification environment, the hallmark of Mentor’s process. The feedback from these conversations helped me develop a series of technical webinars, now available for free and on demand. Check them out and let us know what you think in the comments below. My hope is that the webinars might serve as a starting point for your own conversations on FPGA verification, demand for which seems to continue to grow as process nodes shrink.

Injecting Automation into Verification – FPGA Market Trends

Injecting Automation into Verification – Code Coverage

Injecting Automation into Verification – Assertions

Injecting Automation into Verification – Improved Throughput


27 February, 2014

Psst!  I’ll let you in on some news…

While DVCon calls the free portion of the conference “Exhibits Only,” let me share a little secret with you: you also gain access to the conference panels and the keynote presentation.

For those in Silicon Valley and local to DVCon, I invite you to register for the FREE side of the conference, not just for the conference exhibition that will offer drinks and appetizers in the evening hours, but for the industry conversation that will take place in the panels and the CEO keynote. The two panels will also feature Mentor Graphics speakers, so you can hear our opinions on the topics as well.

How do you secure your FREE pass?  That’s the simple part!  Go here and start the registration process by clicking the “REGISTER NOW” button in the upper right.  After entering your contact information and completing a brief survey, you will be asked to select the part of the conference you wish to attend.  Select “Exhibit Only” for no charge.  Then “checkout” to complete your registration and you are done!  Of course, you can just show up and do this onsite.  But why waste time in line when you can do this from your computer or mobile device?

See you there!  You can find us at our Mentor Graphics booth.  We are booth 501.  (P.S., if you cannot spare the time to attend but would like to see a running commentary on the sessions, panels and other happenings follow me on Twitter: @dennisbrophy or look for the conference hashtag #DVCon.)

Now here is what you can get for free:

Panels

Is Software the Missing Piece In Verification?

Moderator: Ed Sperling – Semiconductor Engineering
Panelists: Tom Anderson – Breker
Kenneth Knowlson – Intel
Steve Chappell – Synopsys
Sandeep Pendharkar – Vayavya Labs
Frank Schirrmeister – Cadence Design Systems
Mark Olen – Mentor Graphics
Location: Oak Ballroom
Date & Time: Wednesday, 5 March 2014, 8:30am – 9:45am

Did We Create the Verification Gap?

Moderator: John Blyler – Extension Media
Panelists: Janick Bergeron – Synopsys
Jim Caravella – NXP
Harry Foster – Mentor Graphics
John Goodenough – ARM
Bill Grundmann – Xilinx
Mike Stellfox – Cadence Design Systems
Location: Oak Ballroom
Date & Time: Wednesday, 5 March 2014, 1:30pm – 3:00pm

Keynote

An Executive View of Trends and Technologies in Electronics
Lip-Bu Tan, President & CEO, Cadence Design Systems
Oak Ballroom
Tuesday – 4 March 2014 2:00pm – 2:30pm

Exhibition

Monday: 5:00pm – 7:00pm (Booth Crawl included; attendees can enter to win a $500 gift card!)
Tuesday: 2:30pm – 6:00pm (Reception 5:00pm – 6:00pm)
Wednesday: 2:00pm – 6:00pm (Reception 5:00pm – 6:00pm)


4 February, 2014

Marketing teams at FPGA vendors have been busy as the silicon nanometer geometry race escalates. Altera is “delivering the unimaginable” while Xilinx is offering “all programmable SoCs” to design centers. It’s clear that the SoC has become more accessible to a broader market today and that FPGA vendors have staked out a solid technology roadmap for the near future. But do the marketing messages surrounding the geometry race affect the day-to-day lives of engineers, and if so, how – especially when it comes to verification?

An excellent whitepaper from Altera, “The Breakthrough Advantage for FPGAs with Tri-Gate Technology,” covers Altera’s Stratix 10 FPGAs and SoCs. The paper describes verification challenges in this newly expanded market this way: “Although current generation FPGAs require a rigorous simulation verification methodology rivaling ASICs, the additional lab testing and ability to reprogram FPGAs save substantial manpower investment. The overall cost of ownership must be considered when comparing an FPGA whose component price is higher than an ASIC of similar complexity.” I believe you can use this statement to engage your management in a discussion about better verification processes.

Xilinx also has excellent published technical resources. Its recent UltraScale backgrounder describes how the company is solving the challenges of implementing a design in its reprogrammable silicon. Clearly Xilinx has made an impressive investment in making it easier to implement a design with its UltraScale FPGA products. Improvements include ASIC-like clocking and annealing dataflow bottlenecks without compromising performance. Xilinx also describes improvements in its Vivado design suite, particularly when it comes to in-lab design bring-up.

For other FPGA insights, it’s also worth checking out Electronics Engineering Journal’s recent article “Proliferating Programmability in 2014,” which weighs in on the long-term future of FPGA tool flows even though, as Kevin Morris sees it, EDA seems to have abandoned the market. (Kevin, I’m here to tell you you’re wrong.)

Do you think it’s inevitable that your FPGA team will first struggle to make it across the verification finish line before adopting a more process-oriented verification flow like the one the ASIC market demands? It’s not. I base this conclusion on the many conversations I’ve had with FPGA designers, their managers, sales engineers and many other talented people in this market over the years. Yes, there are significant challenges in FPGA design, but not all of them are technology related. With some emotion, one engineer remarked that debugging the same type of issue over and over in the hardware lab and expecting a different outcome was insane. (He’s right.) Others say they need specific ROI information for their management to even accept the need for change. Still others state that had they only known about the solutions I presented in my seminar a year ago, they would not have spent months and months bringing up their design in the lab.

With my peers here at Mentor Graphics, I have developed a three-step verification flow that includes coverage, assertions and improved throughput. I’ll write about this flow and related issues in the weeks ahead here on this blog. The flow is built on fundamental verification technologies that benefit the broad FPGA market. The goal, in developing the technology and writing about it here, has been to provide practical solutions and help more FPGA teams cross the verification gap.

In the meantime, what are your stories? Are you able to influence your management into adopting advanced technology to aid lab bring-up? Is your management’s bias towards lower cost and faster implementation (at the expense of verification)? Let me know in the comments or, if you prefer, by e-mail: joe_rodriguez@mentor.com.

