Posts Tagged ‘Verification Academy’

22 July, 2017

There’s a wonderful quote in Brian Kernighan’s book The Elements of Programming Style, where he says, “Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?” Humor aside, our 2016 Wilson Research Group study supports the claim that debugging is a challenge today: most projects spend more time on debugging than on any other task.

One insidious aspect of debugging is that it is unpredictable if not properly managed. From a management perspective, it is easy to gather metrics from previous projects on the various processes in the product development lifecycle in order to plan for the future. The unpredictability of debugging, however, is a manager’s nightmare. Guidelines for efficient debugging are required to improve productivity.

In reality, debugging is required in every process across a product’s development lifecycle: conception, architectural design, implementation, post-silicon, and even the tests and testbenches we create to verify/validate a design. The emergence of SystemVerilog and UVM has necessitated new debugging skills and tools, and the traditional debugging approaches adopted for simple RTL testbenches have become less productive. To address this dearth of debugging process knowledge, I’m excited to announce that the Verification Academy has just released a new UVM debugging course. The course consists of five video sessions. Topics covered in this course include:

• A historical perspective on the general debugging problem
• Ways to effectively debug memory leaks in UVM environments
• How to debug connectivity issues between UVM components
• Guidelines to effectively debug common issues related to UVM phases
• Methods to debug common issues with the UVM configuration database
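On that last topic, one of the most common configuration database pitfalls is a type or instance-path mismatch between set() and get(). Here is a minimal sketch of the kind of issue involved (class and field names are hypothetical, not taken from the course):

```systemverilog
// In the test's build_phase: publish a knob for the agents below.
uvm_config_db #(int)::set(this, "env.agent*", "num_items", 42);

// In a driver's build_phase: this get() silently returns 0 if the
// type parameter (here int) or the instance path does not match the
// set() above -- so always check the return value.
int num_items;
if (!uvm_config_db #(int)::get(this, "", "num_items", num_items))
  `uvm_error("CFG", "num_items not found in config DB")
```

When a get() fails unexpectedly, the +UVM_CONFIG_DB_TRACE plusarg on the simulator command line logs every set()/get(), which is usually the fastest way to spot the mismatch.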

As always, this course is available free to you. To learn more about our new Verification Academy debugging course, as well as all our other video courses, please visit


1 June, 2017

Accellera’s Emerging Portable Stimulus Standard Is Pervasive at DAC 54

For the past few years, Accellera’s Portable Stimulus Working Group has been hard at work on a new standard that elevates stimulus generation to improve overall verification productivity.  As the call to attend the annual Accellera breakfast at DAC 54 informs us, the Accellera Portable Stimulus early adopter release is planned to be made available prior to DAC.  I’m certain a download of the early adopter release will make for good reading for those traveling to DAC who have not participated in the development of the standard.  I predict it will be a “page-turner.”  If you download and read it when ready, you will be in a much better position to attend the following events in and around DAC (some of which require a DAC registration fee and some of which are free).  Here are some of the DAC 54 Portable Stimulus activities:

Monday June 19th

Tuesday June 20th

Wednesday June 21st

And for those who will not be attending DAC, I will update this blog with information on how you can download the Accellera Portable Stimulus early adopter release.  There are also online educational videos about the emerging Portable Stimulus standard.  You can find two sessions at the Mentor Verification Academy on Portable Stimulus Basics, and the DVCon U.S. 2017 technical tutorial, Creating Portable Stimulus Models with the Upcoming Accellera Standard, presented in three parts, is located at the Accellera website.  Both online educational video offerings require registration.


8 May, 2017

At the recent DVCon in Shanghai, China, my colleague Jin Hou delivered the tutorial “Back to Basics: Doing Formal the Right Way”. Jin is an expert in formal and CDC methodologies, and over her career she has trained hundreds of engineers to get up to speed with formal and leverage its strengths as part of a complete enterprise-class verification flow. I’ve been to DVCons in the US, EU, and India, so I was eager to hear her first-hand account of the latest incarnation of this successful conference series. Below is a review of her experience.

Joe: Tell me more about your tutorial

Jin: In recent years there has been a lot of activity around automated formal apps. This is a good thing, not only because they help any engineer take advantage of formal’s power, but because their success has also inspired interest in property checking. Consequently, this tutorial was created to show engineers who are new to formal what is involved in the planning, setup, execution, and results analysis of formal property checking. Wherever it made sense, I drew analogies to testbench simulation flows to explain formal methodology, and showed how formal and simulation can be used in combination.


Joe: Did you get to survey the attendees at all? If so, what were the results?

Jin: Yes, but first let me say the tutorial was standing room only – there were at least 40 people in a room set up for half that number!

At the beginning I asked for a show of hands for the following questions:

“How many people are using formal now?” Only 3 people raised their hands.

“How many of you are thinking of using formal in 6 months?”  Just 7 hands went up.

In my 1-on-1 discussions after the tutorial concluded, an attendee told me, “Assertions are still pretty new to Chinese engineers. Except for [a major semiconductor supplier], most customers don’t have any assertion and formal experts.” However, I also heard some great news on this point: a professor from an area university told me his department was starting a course on SystemVerilog – including writing SVA — this coming academic year!

In general, it was clear to me that there is a great opportunity for formal usage to grow in China!


Joe: Were there any particular comments or questions from the attendees?

Jin: The overall theme of the Q&A I received was around the benefits that property checking provided over other verification techniques. Indeed, I was directly asked “Why do you need formal?” It’s a fair question – especially since property checking with hand-written assertions (vs. automated formal apps) does impose a learning curve on its users; and it is not always obvious what formal is capable of doing better than the alternatives.

Of course, we set up the tutorial to address exactly these concerns; so over the course of the presentation I showed how formal can address many serious verification tasks long before a simulation testbench would be available. For starters, hand-written assertions themselves help designers communicate their concerns to the verification team, bridging the gaps between them. With such an “executable specification” (which, by the way, with the right syntax can easily be reused in simulation and emulation), verification tasks that would take forever in simulation can be quickly solved by formal. Perhaps the most well-received example of this was when I spoke of how formal can deliver exhaustive proofs of liveness and safety properties.

[Ed. note: “Liveness” == something good must eventually occur. “Safety” == something bad must never occur, i.e., good behavior should always be true. A well-regarded, in-depth review of these valuable formal analyses is available here.]
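To make the distinction concrete, here is what the two kinds of properties typically look like in SVA (the signal names are hypothetical):

```systemverilog
// Safety: something bad must never happen --
// the arbiter never asserts grant without a pending request.
assert property (@(posedge clk) disable iff (!rst_n)
                 gnt |-> req);

// Liveness: something good must eventually happen --
// every request is eventually granted (strong operator, SVA-2009).
assert property (@(posedge clk) disable iff (!rst_n)
                 req |-> s_eventually gnt);
```

Simulation can only show a liveness failure within the length of a run; formal can prove it exhaustively, which is why these proofs resonated with the audience.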

In general, the fact that you can do such significant verification with exhaustive results, before you write a single line of testbench code, means that formal should be part of any serious verification project.


Joe: Looking back on the experience, how has it inspired you?

Jin: It’s given me great ideas on how we can further enhance our Verification Academy courses to help new formal users to get up to speed with the key concepts and work flows. In particular, we need to better promote “formal coverage” – what it tells you, how it helps the formal engineer measure their personal progress, and how it feeds into the overall verification project status alongside simulation code and functional coverage reporting. 


Joe: Thanks, Jin!  Final thoughts?

Jin: While I enjoy supporting customers here in the San Francisco Bay Area, I also look forward to introducing more engineers to formal verification across China!


If you went to DVCon in Shanghai — or in San Jose – this spring, please share your experiences in the comments below.

Until DAC, may your power consumption be low and your coverage be high!

Joe Hupcey III

Reference links:

DVCon Shanghai 2017 proceedings —

Safety & Liveness Properties, Chapter 7, Concurrency: State Models & Java Programs, Jeff Magee and Jeff Kramer,

Related Verification Academy courses:

Getting Started with Formal-Based Technology

Formal Assertion-Based Verification

Formal-Based Technology: Automatic Formal Solutions

Assertion-Based Verification


13 March, 2017

Just getting around to gathering my thoughts about the great week we had at DVCon U.S. As Program Chair for the conference, I felt a great sense of pride that, with a great deal of help from my colleagues on the conference Steering Committee and especially the great team of experts on the Technical Program Committee, we were able to provide the attendees with a packed program of interesting, informative and entertaining events. But, as always happens, there was one topic that seemed to get the lion’s share of attention. This year, it was Portable Stimulus.

Starting with a standing-room-only crowd (even after bringing in more chairs) of nearly 200 people on Monday morning for the Accellera Day tutorial presented by the members of the Portable Stimulus Working Group (including yours truly), Portable Stimulus never seemed to be far from any of the discussions.

Full house at the DVCon U.S. 2017 Portable Stimulus Tutorial


If you weren’t able to attend the conference, Accellera will be presenting the tutorial as a series of webinars in early April, so you’ll be able to see what got everyone so excited. In addition to the tutorial, there was a “Users Talk Back” panel session on Wednesday morning that gave several user companies a chance to voice their opinions about the upcoming Portable Stimulus standard. Having been so involved in the standardization effort, I was gratified to hear the generally positive feedback by these industry leaders.

We were also pleased to have two great Portable Stimulus articles in our most recent issue of Verification Horizons. The first article is from our friends at CVC, showing how they used Questa inFact to create a portable graph-based stimulus model that they used in their UVM environment to verify a memory controller design. The second is from my colleague Matthew Ballance, who is also a key technical contributor to the PSWG efforts, and discusses Automating Tests with Portable Stimulus from IP to SoC Level. In this article, you’ll learn about some of the concepts and language constructs of the proposed standard, and see how its declarative nature makes it easier to specify complex scenarios for block-level verification and to combine those into SoC-level scenarios that are often difficult to express with UVM sequences.

The other exciting news I wanted to share with you is our new Portable Stimulus Basics video course on Verification Academy. We can’t yet share all the details of the upcoming standard, since things are still being finalized in the Working Group, but as things are made public, we’ll be sharing what we can so you’ll be the first to learn about this exciting new standard. As we add new sessions to the course, we’ll be sure to let you know. Please go check it out.


24 May, 2016

Join us at the 53rd Design Automation Conference

DAC is always a time of jam-packed activity with multiple events that merit your time and attention.  As you prepare your own personal calendars and try your best to reduce or eliminate conflicts, let me share with you some candidate events that you may wish to consider having on your calendar.  I will highlight opportunities to learn more about ongoing and emerging standards from Accellera and IEEE.  I will focus on a few sessions at the Verification Academy booth (#627) that feature Partner presentations.  And I will spotlight some venues where other industry collaboration will be detailed.  You will also find me at many of these events as well.


Accellera will host its traditional Tuesday morning breakfast.  Registration is required – or you might not find a seat.  As always, breakfast is free.  The morning will feature a “Town Hall” style meeting covering UVM (also known as IEEE P1800.2) and other technical challenges that could help evolve UVM into other areas.  To find out more and learn about all things UVM, register here.


The Verification Academy is “partner-central” for us this year.  Each day will feature partner presentations that highlight evolving design and verification methodologies, standards support and evolution, and product integrations.  Verification Academy is booth #627, which is centrally located and easy to find.  Partner presentations include:

  • Back to the Stone Ages for Advanced Verification
    Monday June 6th
    2:00 PM | Neil Johnson – XtremeEDA

    Modern development approaches are leaving quality gaps that advanced verification techniques fail to address… and the gaps are growing in spite of new innovation. It’s time for a fun, frank and highly interactive discussion around the shortcomings of today’s advanced verification methods.

  • SystemVerilog Assertions – Bind files & Best Known Practices
    Monday June 6th
    3:00 PM | Cliff Cummings – Sunburst Design

SystemVerilog Assertions (SVA) can be added directly to the RTL code or added indirectly through bind files. Thirteen years of professional SVA usage strongly suggests that best known practices use bind files to add assertions to RTL code.

  • Specification to Realization flow using ISequenceSpec™ and Questa® InFact
    Tuesday June 7th
    10:00 AM | Anupam Bakshi – Agnisys, Inc.

Using an Ethernet Controller design, we show how complete verification can be done in an automated manner, saving time while improving quality. The integration of the two tools will be shown: InFact creates tests for a variety of scenarios in a way that is more efficient and exhaustive than a pure constrained-random methodology, while ISequenceSpec forms a layer of abstraction around the IP/SoC from a specification.

  • Safety Critical Verification
    Wednesday June 8th
    10:00 AM | Mike Bartley – TVS

    The traditional environments for safety-related hardware and software such as avionics, rail and nuclear have been joined by others (such as automotive and medical devices) as systems become increasingly complex and ever more reliant on embedded software. In tandem, further industry-specific safety standards (including ISO 26262 for automotive applications and IEC 62304 for medical device software) have been introduced to ensure that hardware and software in these application areas has been developed and tested to achieve a defined level of integrity. In this presentation, we will be explaining some of these changes and how they can be implemented.

  • Using a Chessboard Challenge to Discover Real-world Formal Techniques
    Wednesday June 8th
    3:00 PM | Vigyan Singhal & Prashant Aggarwal – Oski Technology

    In December 2015, Oski challenged formal users to solve a chessboard problem. This was an opportunity to show how nifty formal techniques might be used to solve a fun puzzle. Design verification engineers from a variety of semiconductor companies and research labs participated in the contest. The techniques submitted by participants presented a number of worthy solutions, with varying degrees of success.

Industry Collaboration

Debug Data API: “Cadence and Mentor Demonstrate Collaboration for Open Debug Data API in Action.”  It was just a year ago that the project to create an open debug data API was announced at DAC 52.  Since then, several possible implementation styles have been reviewed, an agreed specification created, and early working prototypes demonstrated.  On Tuesday, June 7th at 2:00pm we will host a session at the Verification Academy booth (#627).  You are encouraged to register for the free session – but walk-ups are always welcome!  You can find more information here.

Portable Stimulus Tutorial: “How Portable Stimulus Addresses Key Verification, Test Reuse, and Portability Challenges”  As part of the official DAC program, there will be a tutorial on the emerging standardization work in Accellera.  The tutorial is Monday, June 6th from 1:30pm – 3:00pm in the Austin Convention Center, Room 15.  You can register here for the tutorial.  There is a fee for this event.  Want to know more about the tutorial?  You can find more information here.


It is always good to end the day on a light note.  To that end, on Monday June 6th, we will invite you to “grab a cold one” at the Verification Academy booth and continue discussions and networking with your colleagues.  If past years’ experience is any guide, you may want to get here early for your drink!  There is no registration to guarantee a drink, unfortunately!  So, come early; stay late!  See you in Austin!

And if you miss me at any of the locations above, tweet me @dennisbrophy – your message is sure to reach me right away.


25 April, 2016

Having been deeply involved with Universal Verification Methodology (UVM) from its inception, and before that, with OVM from its secret-meetings-in-a-hidden-hotel-room beginnings, I must admit that sometimes I forget some of the truly innovative and valuable aspects of what has become the leading verification methodology for both ASIC (see here) and FPGA (see here) verification teams. So I thought it might be helpful to all of us if I took a moment to review some of the key concepts in UVM. Perhaps it will help even those of us who may have become a bit jaded over the years just how cool UVM really is.

I have long preached that UVM allows engineers to create modular, reusable, randomized self-checking testbenches. Of course, these qualities are all inter-related. For example, modularity is the key to reuse. UVM promotes this through the use of transaction-level modeling (TLM) interfaces. By abstracting the connections using ports on the “calling” side and exports on the “implementation” side, every component in a UVM testbench is blissfully unaware of the internal details of the component(s) to which it is connected. One of the most important places where this abstraction comes in handy is between sequences and drivers.

seq2driverconnection Figure 1: Sequence-to-Driver Connection(s)

Much of the “mechanical” details are, of course, hidden by the implementation of the sequencer, and the user view of the interaction is therefore rather straightforward:

seqdriverapi Figure 2: The Sequence-Driver API

Here’s the key: that “drive_item2bus(req)” call inside the driver can be anything. In many cases, it will be a task call inside the driver that manipulates signals through the virtual interface, or the calls could be inline:

task run_phase(uvm_phase phase);
  forever begin
    my_transaction tx;
    seq_item_port.get_next_item(tx);  // block until the sequence provides an item
    @(posedge dut_vi.clock);
    dut_vi.cmd  = tx.cmd;
    dut_vi.addr = tx.addr; =;
    @(posedge dut_vi.clock);
    seq_item_port.item_done();        // signal completion back to the sequence
  end
endtask: run_phase

As long as the get_next_item() and item_done() calls are present in the driver, everything else is hidden from the rest of the environment, including the sequence. This opens up a world of possibilities.
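For completeness, the sequence side of that handshake is equally oblivious to the bus details; a minimal sketch, with illustrative class names matching the driver above:

```systemverilog
class my_sequence extends uvm_sequence #(my_transaction);
  `uvm_object_utils(my_sequence)

  function new(string name = "my_sequence");;
  endfunction

  task body();
    repeat (10) begin
      req = my_transaction::type_id::create("req");
      start_item(req);    // blocks until the driver calls get_next_item()
      if (!req.randomize())
        `uvm_error("SEQ", "randomize failed")
      finish_item(req);   // blocks until the driver calls item_done()
    end
  endtask
endclass
```

Neither side knows whether the other lives in the same simulation, behind a different driver implementation, or on an emulator.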

One example of the value of this setup is when emulation is a consideration. In this case, the task can exist inside the interface, which can itself exist anywhere. For emulation, the interface often will be instantiated inside a protocol module, which includes other protocol-specific information:

dualtop Figure 3: Dual Top Architecture
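In that dual-top arrangement, the drive task lives in the interface itself, so the same driver code runs unchanged whether the interface sits in a simulation-only top or inside a protocol module on the emulator side. A sketch of what that might look like (names are illustrative, not taken from the Cookbook):

```systemverilog
interface dut_if (input logic clock);
  logic [7:0]  cmd;
  logic [31:0] addr;
  logic [31:0] data;

  // The pin-wiggling lives here; the driver simply calls
  // dut_vi.drive_item2bus(req) through its virtual interface handle.
  task drive_item2bus(my_transaction tx);
    @(posedge clock);
    cmd  <= tx.cmd;
    addr <= tx.addr;
    data <=;
    @(posedge clock);
  endtask
endinterface
```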

You can find out more about how to set up your environment like this in the UVM Cookbook. And if you’re interested in learning more about setting up your testbench to facilitate emulation, you can download a very interesting paper here.

The flexibility of the TLM interface between the sequence and the driver gives UVM users the ability to reuse the same tests and sequences as the project progresses from block-level simulation through emulation. All that’s needed is a mechanism that allows a single environment to instantiate different components with the same interfaces without changing the code. That’s what the factory is for, and we’ll cover it in our next session.
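As a small preview, the factory override that swaps in, say, an emulation-ready driver is essentially a one-liner in the test; a hedged sketch with hypothetical class names:

```systemverilog
class emu_test extends base_test;
  `uvm_component_utils(emu_test)

  function new(string name, uvm_component parent);, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    // Create emu_driver wherever the environment asks for my_driver --
    // no environment code changes required.
    my_driver::type_id::set_type_override(emu_driver::get_type());
  endfunction
endclass
```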

I’m looking forward to hearing from the new and advanced UVM users out there!


16 March, 2016

If you have been involved in either software or advanced verification for any length of time, then you probably have heard the term Design Patterns. In fact, the literature for many of today’s testbench verification methodologies (such as UVM) often references various software or object-oriented patterns in its discussions. For example, the UVM Cookbook (available out on the Verification Academy) references the observer pattern when discussing the Analysis Port. One problem with the discussion of patterns in existing publications is that the solutions they describe are difficult to search, reference, and leverage, since the publications are distributed across multiple heterogeneous platforms and databases and documented in varied formats. In addition, most published examples of design patterns deal with the software implementation details of constructing a testbench. To address these concerns, we have decided to extend the application of patterns across the entire domain of verification (i.e., from specification to methodology to implementation, and across multiple verification engines such as formal, simulation, and emulation) and have just released a comprehensive pattern library out on the Verification Academy.
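To recall that observer example, the Analysis Port gives a monitor a one-to-many broadcast to subscribers it knows nothing about; a minimal sketch with illustrative names:

```systemverilog
class my_monitor extends uvm_monitor;
  `uvm_component_utils(my_monitor)

  // The "subject" side of the observer pattern.
  uvm_analysis_port #(my_transaction) ap;

  function new(string name, uvm_component parent);, parent);
    ap = new("ap", this);
  endfunction

  // Broadcast to every connected subscriber (scoreboard, coverage
  // collector, ...) without knowing who -- or how many -- they are.
  function void publish(my_transaction tx);
    ap.write(tx);
  endfunction
endclass
```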

But first, we should answer the question, “What is a pattern?” In the process of designing something (e.g., a building, a software program, or an airplane) the designer often makes numerous decisions about how to solve specific problems. It would be nice if the knowledge gained from solving a specific problem could be shared, and this is where patterns help out. That is, if the designer can identify common attributes contributing to the derived solution in such a way that it can be applied to other similar recurring problems, then the resulting generalized problem-solution pair is known as a pattern. Documenting patterns provides a method of describing good design practices within a field of expertise and enables designers to improve the quality of their own designs by reusing a proven solution on a recurring problem. And that is precisely what the Verification Academy Patterns Library is all about: sharing provable solutions to recurring problems in an easily discoverable, referenceable, and relatable format.

Design patterns are not a new concept. In fact, they originated as a contemporary architectural concept from Christopher Alexander in 1977, and they have been applied to the design of buildings and urban planning.  In 1987, Kent Beck and Ward Cunningham proposed the idea of applying patterns to programming.  However, it was Gamma et al., also known as the Gang of Four (GoF) who popularized the concept of patterns in computer science after publishing their book Design Patterns: Elements of Reusable Object-Oriented Software in 1994.

How We Decided to Organize Our Patterns Library

Our Verification Academy Pattern Library contains a collection of pattern entries, where each documented entry provides a solution to a single problem. To facilitate learning, ease of use, and quick access when searching for verification pattern content, we gave careful thought to organizing the library into searchable categories whose pattern solutions are related and exhibit similar characteristics. Since our goal in creating verification patterns is to broaden the application of patterns beyond the software domain, we decided that our categories should align at a high level with the digital design and verification process. Hence, we have identified two main verification pattern categories, which should be familiar to any design and verification engineer working in this domain: Specification Patterns and Implementation Patterns, as illustrated in the following figure.

Creating a Community of Pattern Expertise

For the Verification Academy Patterns Library, we felt it important to set goals for the pattern creation process and for how to effectively populate the library. The reality is that verification is a diverse field, and it often requires expertise in varied areas, such as methodologies, technologies, tools, and languages. No single person is a master of every aspect of verification. Thus, to create patterns across the broad field of verification, we built a team made up of experts in assertion-based verification, formal verification, constrained-random and coverage-driven verification, UVM, hardware-assisted verification, and emulation. However, even with this diverse team of experts, we recognize that additional verification expertise is still required for solving verification problems in specific application domains. Hence, for our verification patterns library, we set a goal that the pattern creation process should harness the power of online social communities made up of a diverse set of verification experts working in multiple application domains. In turn, this community of experts would foster collective problem solving for the creation of novel patterns and provide alternative, optimized solutions for existing pattern content. To achieve these goals, we developed a web-based infrastructure that allows new content to be contributed in a consistent format by this community of experts, and decided to release our library on the Verification Academy, since it is an existing online social community with over 35,000 design and verification engineers. In addition, the Verification Academy provides an existing online infrastructure, which enabled the creation of a patterns knowledge base that is easily discoverable, referenceable, and relatable.

To learn more about the Verification Academy Patterns Library, check out


16 November, 2015

Thus far we have talked about the importance of having a VIP that is easy to connect to the DUT (part 1) and having the flexibility to configure the VIP to your requirements and use the built-in or pre-packaged sequences (part 2). In this final part of the series, we will talk about the various built-in features of a VIP that help with debug.

If you have a UVM-based testbench with one or more VIPs, your testbench could be more complex than your DUT, and debugging this environment could be a major challenge. Debugging UVM VIP-based environments can be thought of as having three layers:

  1. UVM’s built-in debug mechanism
  2. Simulator with Class Based Debug
  3. VIP with built-in debug features
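As an example of layer 1, UVM itself ships with a handful of built-in debug hooks; a sketch of some common ones (shown as they might appear inside a uvm_component of your testbench):

```systemverilog
// Dump the elaborated testbench structure once it is fully built.
function void end_of_elaboration_phase(uvm_phase phase);
  uvm_top.print_topology();   // print the full component hierarchy
  uvm_factory::get().print(); // show registered types and active overrides
endfunction

// Useful run-time plusargs (passed on the simulator command line):
//   +UVM_PHASE_TRACE      -- log phase execution
//   +UVM_OBJECTION_TRACE  -- log objection raise/drop activity
//   +UVM_CONFIG_DB_TRACE  -- log every config DB set()/get()
```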

These are some of the features that Mentor’s VIP provide as built-in debug mechanisms:

VIP Query Commands:

These query commands provide the ability to query the state of the VIP in the testbench at any given time, in batch or CLI mode, to get a summary of the VIP or its current configuration and print it to the simulation log.


For example, in PCI Express, the VIP can output B.A.R. information, to show the addressing set up in the system (above) as well as the PCIe configuration space showing the device capabilities (below).


Error Messaging & Reporting:

Error messaging is very important, as it is the first thing the user checks while debugging. The error messages are properly encoded to differentiate between methodology errors, protocol errors, and infrastructure errors. The VIP also provides the flexibility to customize the report message and logging mechanism.



While the VIP is running, a built-in set of assertions checks for protocol violations to verify compliance with the specification.  When these fire, they produce a message that can be printed to the transcript or piped through the UVM reporting mechanism.  The text of the message includes the interface instance name, a description of what went wrong, and a reference back to the specification to help when looking up further details.  Each assertion error is configurable: it can be enabled or disabled and have its severity level changed.

 Protocol Debug:

Another important aspect of the VIP is to help with protocol debug.  Mentor VIP is transaction based, and those transactions are available for creating stimulus as well as for analysis, where they can be used in the testbench for scoreboarding and coverage collection.

Transaction Logging:

Transactions can also be logged to a text file, printed out via the standard UVM print mechanism, or output to a tracker file by a provided analysis component. The following is a sample line from a transaction log file, showing the attribute information that is printed along with its format:
   AXI Clk Cycle = 10.00 ns; AXI Clk Frequency = 100.00 MHz; Data bus width = 32 bits


Here, per-transaction data is printed in the log file, including whether the transaction is a read or a write, the transaction ID, starting address, address-phase accept time, data for each beat, write strobes, data-burst accept time, transaction response, response-phase accept time, transaction length, burst type, and burst size.

The VIP can also output text-based log or tracker files.  This can typically be done both at the protocol level and, for protocols such as PCI Express, at the symbol level to help debug link training or other state machines.  Here we can see the symbols for an OS PLP transmission, a TLP transmission, and a DLLP transmission on the bus.


Transaction Linking:


Just logging transactions isn’t sufficient when debugging a cache-coherent interconnect (CCI). An originating master request transaction results in snoops to other cached masters and a slave access as needed. When debugging system-level stimulus, it becomes difficult to identify which snoop transactions are related to a specific originating request. A Cache Coherency Interconnect Monitor (CCIM) helps overcome this debugging issue by providing a transaction-linking component that connects to all the interfaces around a CCI. The CCIM provides a top-level parent sequence item that links to all related child sequence items, such as the originating request, snoops to cached masters, and slave accesses.

Protocol Stack Debug:


Along with the transactions, the VIP also records relationship information: relations to other transactions through the protocol stack, and to the signals themselves.  This allows you to move quickly from transaction-level to signal-level debug, and to highlight not just which signals participated in any given transaction, but also when they did.

I hope this series has provided you with a few insights into what makes a VIP easy to instantiate, connect, configure, and start driving stimulus with. I would really like to hear about your VIP usage experiences.


8 June, 2015

Do you have a really tough verification problem – one that takes seemingly forever for a testbench simulation to solve – and are left wondering whether an automated formal application would be better suited for the task?

Are you curious about formal or clock-domain crossing verification, but are overwhelmed by all the results you get from a Google search?

Are you worried that adding in low power circuitry with a UPF file will completely mess up your CDC analysis?

Good news: inspired by the success of the UVM courses on the Verification Academy website, the Questa Formal and CDC team has created all new courses on a whole variety of formal and CDC-related subjects that address these questions and more.  New topics that are covered include:

* What’s a formal app, and what are the benefits of the approach?

* Reviews of automated formal apps for bug hunting, exhaustive connectivity checking and register verification, X-state analysis, and more

* New topics in CDC verification, such as the need for reconvergence analysis, and power-aware CDC verification

* How to get started with direct property checking including: test planning for formal, SVA coding tricks that get the most out of the formal analysis engines AND ensure reuse with simulation and emulation, how to setup the analysis for rapidly reaching a solution, and how to measure formal coverage and estimate whether you have enough assertions

The best part: all of this content is available NOW at, and it’s all FREE!


Joe Hupcey III,
on behalf of the Questa Formal and CDC team

P.S. If you’re coming to the DAC in San Francisco, be sure to come by the Verification Academy booth (#2408) for live presentations, end-user case studies, and demos on the full range of verification topics – UVM, low power, portable stimulus, formal, CDC, hardware-software co-verification, and more.  Follow this link for all the details & schedule of events (including “Formal & CDC Day” on June 10!):


2 June, 2015

This year we are trying something new at the Verification Academy booth during next week’s 2015 Design Automation Conference.  We’ve decided to host an interactive panel on the controversial topic of Agile development. I say controversial because you typically find two camps of engineers when discussing the subject of Agile development—the believers and the non-believers.

My colleague Neil Johnson, principal consultant from XtremeEDA Corporation and a leading expert in Agile development, will provide some context for the topic with a short background on Agile methods to kick the panel off. Then I plan to join Neil on the panel, which will be moderated by Mentor’s own world-renowned Dennis Brophy.  Our intent is to have a healthy, interactive discussion with both the believers and the non-believers in the audience.

So, why is the subject of Agile development even worthy of discussion at DAC? Well, not to entirely give away my position on the subject…but I think it’s worthwhile to note some of the recent findings related to root cause of logical and functional flaws from the 2014 Wilson Research Group Functional Verification Study (see figure below).

Clearly, design errors are a major factor contributing to bugs. Yet a growing concern is the number of issues surrounding the specification that lead to logical and functional flaws.  In reality, there is no such thing as a perfect specification, and few projects can afford to wait to start development until perfection is achieved. Furthermore, in many market segments, late-stage changes to the specification are common practice to ensure that the final product is competitive in a rapidly changing market. Could Agile development, in which requirements and solutions evolve through collaboration between self-organizing, cross-functional teams, be the saving grace?  Please join us on June 8th at 5pm in the Verification Academy booth at DAC and hear what the experts are saying!

