Verification Horizons BLOG

This blog is an online forum providing weekly updates on concepts, values, standards, methodologies, and examples to help you understand what advanced functional verification technologies can do and how to apply them most effectively. We're looking forward to your comments and suggestions on the posts to make this a useful tool.

18 November, 2015

Given the dramatic increase in the scalability of formal engines over the past five years, “formal testbenches” have grown to comprise hundreds, if not thousands, of assertions, constraint properties, and cover properties. As with simulation-based constrained-random verification, the number of constraints, and the overlap and dependencies among them, can quickly exceed what a single engineer can envision. Put another way, as the number of constraint properties grows from dozens to hundreds, it's easy for them to constrain the input state space such that some legal input scenarios are omitted, causing assertions to be vacuous and/or cover statements to be unreachable. Fortunately, there is a clear procedure you can follow to untangle your formal constraints and move forward with verification.

First, temporarily remove ALL the constraints, run Questa PropCheck, and see which assertions are reported as “vacuous” – i.e., the property itself is inconsistent such that it can never be true; a trivial example is mistakenly ANDing a signal with its own complement. Naturally, the first step is to debug and fix any obvious errors. Vacuity can also stem from inconsistency with the design's operation. Whatever the source, these cases must be fixed before any external constraints enter the picture.

Next, bring back all the constraints and rerun PropCheck. This time some perfectly good properties may still be reported as “vacuous”. If the properties ran clean before, without the constraints, chances are you are over-constraining the design with constraint properties that conflict with each other, producing a false positive – i.e., a property appears “proven” only because the stimulus that would exercise it has been constrained away. This is where a very straightforward methodology described by Anshul Jain of Oski Technology comes in.

In his Verification Horizons article titled “Minimizing Constraints to Debug Vacuous Proofs”, Anshul outlines an easy-to-implement “divide and conquer” methodology that identifies a “minimum failing subset” (MFS) of constraints in a minimum number of formal runs. In a nutshell, the recommended process is to split the constraints in half, execute a formal run with only one of the halves, and see if the formal tool still reports a vacuous proof and/or shows over-constraining. Then try the other half and see if vacuity/over-constraining is detected. If the properties run clean on one half, you know the issue is in the other half. Keep recursively halving in this way until you've isolated the offending constraint properties.

As the article shows, even if you have 1,000 constraint properties and a single constraint is the culprit, the worst case is only 15 formal runs to find your needle in the haystack.
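This bisection is easy to script around any formal tool. Below is a minimal sketch; the `is_vacuous(subset)` callback is hypothetical and stands in for launching a formal run with only that subset of constraints enabled, and the sketch assumes a single offending constraint:

```python
def find_culprit(constraints, is_vacuous):
    """Bisect a list of constraints to find the single one whose
    presence makes the proof vacuous. is_vacuous(subset) stands in
    for a formal run with only that subset of constraints enabled."""
    while len(constraints) > 1:
        half = constraints[:len(constraints) // 2]
        if is_vacuous(half):
            constraints = half          # culprit is in this half
        else:
            # First half ran clean, so the culprit is in the other half.
            constraints = constraints[len(constraints) // 2:]
    return constraints[0]
```

With 1,000 constraints this converges in roughly ten runs, comfortably within the article's worst-case figure of 15.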

I trust this tip will help you reach your verification goals faster – please share your questions and experiences in the Comments section.

Happy verifying!

Joe Hupcey III,
on behalf of the Questa Formal and CDC team


16 November, 2015

Thus far we have talked about the importance of having a VIP that is easy to connect to the DUT (part 1) and having the flexibility to configure the VIP to your requirements and use its built-in or pre-packaged sequences (part 2). In this final part of the series, we will talk about the built-in features of a VIP that help with debug.

If you have a UVM-based testbench with one or more VIPs, your testbench could be more complex than your DUT, and debugging this environment could be a major challenge. Debugging UVM VIP-based environments can be thought of as having three layers:

  1. UVM’s built-in debug mechanism
  2. Simulator with Class Based Debug
  3. VIP with built-in debug features

These are some of the features that Mentor's VIP provides as built-in debug mechanisms:

VIP Query Commands:

These commands let you query the state of the VIP in the testbench at any given time, in batch or CLI mode, and print a summary of the VIP or its current configuration to the simulation log.


For example, in PCI Express, the VIP can output B.A.R. information, to show the addressing set up in the system (above) as well as the PCIe configuration space showing the device capabilities (below).


Error Messaging & Reporting:

Error messaging is very important, as it is the first thing the user checks while debugging. The error messages are encoded to differentiate between methodology errors, protocol errors, and infrastructure errors, and the VIP provides the flexibility to customize the report message and logging mechanism.



While the VIP is running, a built-in set of assertions checks for protocol violations to verify compliance with the specification.  When these fire, they produce a message that can be printed to the transcript or piped through the UVM reporting mechanism.  The text of the message includes the interface instance name, a description of what went wrong, and a reference back to the specification to help when looking up further details.  Each assertion error is configurable: it can be enabled or disabled, and its severity level can be changed.

Protocol Debug:

Another important aspect of the VIP is to help with protocol debug.  Mentor VIP is transaction based, and those transactions are available for creating stimulus as well as for analysis, where they can be used in the testbench for scoreboarding and coverage collection.

Transaction Logging:

Transactions can also be logged to a text file, printed out via the standard UVM print mechanism, or output to a tracker file by a provided analysis component. Below is a sample transaction log showing the attribute information that is printed, along with its format:
   AXI Clk Cycle = 10.00 ns; AXI Clk Frequency = 100.00 MHz; Data bus width = 32 bits


Here, per-transaction data is printed in the log file, including whether the transaction is a read or a write, the transaction ID, starting address, address-phase accept time, data of each beat, write strobes, data-burst accept time, transaction response, response-phase accept time, transaction length, burst type, and burst size.

The VIP can also output text-based log, or tracker, files.  This can typically be done both at the protocol level and, for protocols such as PCI Express, at the symbol level to help debug link training or other state machines.  Here we can see the symbols for an OS PLP transmission, a TLP transmission, and a DLLP transmission on the bus.


Transaction Linking:


Just logging transactions isn't sufficient when debugging a cache coherent interconnect (CCI). An originating master request transaction results in snoops to other cached masters and a slave access as needed. When debugging system-level stimulus, it becomes difficult to identify which snoop transactions are related to a specific originating request. A Cache Coherency Interconnect Monitor (CCIM) helps overcome this debugging issue by providing a transaction-linking component that connects to all the interfaces around a CCI. The CCIM provides a top-level parent sequence item that links to all related child sequence items, such as the originating request, snoops to cached masters, and the slave access.

Protocol Stack Debug:


Along with the transactions, the VIP also records relationship information: relations to other transactions up and down the protocol stack, and to the signals themselves.  This allows you to move quickly from transaction-level to signal-level debug, highlighting not just which signals participated in a given transaction, but also when they did so.

I hope this series has provided you with a few insights into what makes a VIP easy to instantiate, connect, configure, and start driving stimulus with. I would really like to hear about your VIP usage experiences.


4 November, 2015

A Portable Stimulus Specification tends to bring to mind applications where a given verification scenario needs to be reused across multiple verification engines, such as simulation, emulation, and post-silicon, or must be reused between block-level and SoC-level verification. These are, of course, key application areas for the Portable Stimulus Specification being developed by the Accellera Portable Stimulus Working Group (PSWG). But a Portable Stimulus Specification is useful well beyond cases where verification portability is critical.

In the latest issue of Verification Horizons, Staffan Berg and Mike Andrews write about using the graph-based portable stimulus description supported by the Questa inFact tool to model instruction sets.

What’s Unique about Instruction Sets?

Modeling an instruction set in order to generate instruction streams poses some unique challenges compared to generating transaction-oriented stimulus.

First off, instruction sets use distinct, but overlapping, fields to describe the attributes of the different instruction formats. Depending on the instruction format, different fields are significant. This is very different from transaction-oriented stimulus, where all transaction attributes are always significant. In addition to presenting a modeling challenge, having all these duplicate data fields presents a performance challenge for constraint solvers.
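To make the overlap concrete, here is a toy decoder, loosely modeled on RISC-V's R- and I-type formats (the encoding details are illustrative, not authoritative): the same bit positions mean different things depending on the format selected by the opcode.

```python
def bits(word, hi, lo):
    """Extract the inclusive bit field word[hi:lo]."""
    return (word >> lo) & ((1 << (hi - lo + 1)) - 1)

def decode(word):
    """Pick out only the fields significant for each format.
    Bits 31:20 hold rs2 (and more) in R-type but a single
    immediate in I-type -- distinct, overlapping fields."""
    opcode = bits(word, 6, 0)
    if opcode == 0b0110011:                      # register-register op
        return {"fmt": "R", "rd": bits(word, 11, 7),
                "rs1": bits(word, 19, 15), "rs2": bits(word, 24, 20)}
    return {"fmt": "I", "rd": bits(word, 11, 7),
            "rs1": bits(word, 19, 15), "imm": bits(word, 31, 20)}
```

A generator that understands this structure only needs to constrain `rs2` for R-type words and `imm` for I-type words, instead of dragging every duplicate field through the solver for every instruction.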

The state space of instruction sets is also enormous. This makes modeling coverage goals challenging, and reaching those coverage goals even more challenging!

How does a Portable Stimulus Specification Help?

A graph-based portable stimulus model allows the model writer to capture the natural structure of the instruction set, and only deal with the fields of interest for each instruction format. In the screenshot below, you can see how each format-specific branch of a graph contains the opcode fields of interest and only those.


In addition to making modeling simpler, this helps with tool performance, since the tool knows which opcode fields are actually significant and which are irrelevant for a given instruction type.

A graph-based Portable Stimulus Specification is an object-oriented description, which makes reuse through inheritance easy! This makes it easy to chain together multiple instructions and capture cross-instruction constraints to model useful corner cases that arise from implementation details, such as pipelines.

So, if you’re interested in how a Portable Stimulus Specification can be applied to make instruction-stream generation simpler, have a look at the article A New Stimulus Model for CPU Instruction Sets in the latest issue of Verification Horizons.

If you’re interested in learning more about Questa inFact, the graph-based portable stimulus tool from Mentor Graphics, please watch New School Stimulus Generation Techniques:


26 October, 2015

Random hardware faults – i.e., individual gates going nuts and driving a value they're not supposed to – are practically expected in every electronic device, at a very low probability. When we talk about mobile or home-entertainment devices, we can live with their impact. But when we talk about safety-critical designs, such as automotive or medical, we could well die from it. That explains why the ISO 26262 automotive safety standard is obsessed with analyzing and minimizing the risk they pose. While some may view that obsession as pure pain, I think it's an exciting new challenge. I'm thrilled to join the Horizons BLOG team and get an opportunity to convince our readers of this view. If I do my job properly, I'll get to blog much more on ISO 26262, so keep your fingers crossed.


Gates are a lot like bikes. A bike could go wrong in endless ways – I once had a bike so old it literally broke in two with me on it – but bikes usually fail in a few common ways: 70% flat tires, 15% chain-ring corrosion, 13% brakes, 2% everything else. Any bike shop could give you those numbers, and they'll be largely similar. Which of these problems you get often depends on how you ride and the kind of bike you have. The exact same goes for gates: though they could go wrong in endless ways, they usually go wrong in just a few, which largely depend on environmental conditions and the production process. The most common “failure modes” for gates are single-event and stuck-at, which basically mean the gate gets a wrong value for one cycle or indefinitely. Your fab and some scientific measurements can give you the probability of each.

Some bike “faults” will be “safe” and others “unsafe”. With a flat tire you still get to stop on the roadside and curse, but not if you lose your brakes downhill. Some faults will be safe in one state and unsafe in another – lose your brakes on a flat road and you're probably fine. At a high level, ISO requires that you look at the faults the gates in your design could have, then make sure the “unsafe” fault probability is below a certain number. Sticking to our bike example, we could say flat-tire and chain-ring problems are “safe”, and, assuming every trip has a downhill leg, we're left with the 13% brake faults as “unsafe”, plus everything hiding in the remaining 2%.

15% unsafe faults is way too much for some ISO certifications, so what do we do? The expensive way is to put in a redundant brake system. The smart way is to refine our analysis and check how much of our riding is really downhill, and whether every brake fault there is really that bad. This can be a complicated thing to do, but it would surely be cheaper than shipping an additional brake system with every bike. When we come to complex ICs, “smart” needs to be “very, very smart” and “cheaper” might mean you get to keep your job. That explains why, as I said, I find this such a challenging problem to solve. If you still think “fault analysis” is pure pain, I hope you see by now that “no fault analysis” can be much worse.
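The bookkeeping behind this kind of analysis is plain arithmetic. Here is a toy sketch using the bike failure-mode rates above; the 40% downhill share is a made-up refinement input, purely for illustration:

```python
# Failure-mode rates (fraction of all faults) and, per mode, the
# fraction of those faults that are unsafe under worst-case assumptions.
failure_modes = {
    "flat_tire":  (0.70, 0.0),  # always safe: stop at the roadside and curse
    "chain_ring": (0.15, 0.0),  # always safe
    "brakes":     (0.13, 1.0),  # worst case: every brake fault happens downhill
    "other":      (0.02, 1.0),  # unknown, so assume the worst
}

# Coarse analysis: worst-case unsafe fault fraction.
unsafe = sum(rate * frac for rate, frac in failure_modes.values())

# Refined analysis: suppose only 40% of riding is downhill (made-up
# number), so only that share of brake faults is actually dangerous.
refined = sum(
    rate * frac * (0.40 if mode == "brakes" else 1.0)
    for mode, (rate, frac) in failure_modes.items()
)
```

The same pattern scales to gates: per failure mode, multiply the fault rate by the fraction of operating states in which that fault violates a safety goal, then sum.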

For more information on getting your ISO 26262 faults straight, please review my full article on Verification Academy.

I look forward to your comments.


25 October, 2015

Verification Academy Brings “UVM Live” to the Santa Clara Convention Center

For everyone involved in the functional verification of electronic systems: you know about the Universal Verification Methodology (UVM) and are probably using it in one fashion or another.  And if you have been reading this blog, you have undoubtedly seen posts by Harry Foster on the adoption and use of UVM by the FPGA and ASIC/SoC community.  It has clearly become the world's most popular and accepted verification methodology.  Oddly, despite this popularity, there has not been a UVM-only event to bring UVM users together this year.  We believe it is time for UVM users to come together to explore its use and share productivity tips and tricks with each other.  You are invited to register and attend.  The details of the event are:

          Event: UVM Forum – Verification Academy Live Seminar
          Location: Santa Clara Convention Center, Santa Clara, CA USA
          Date: 17 November 2015
          Time: 8:30 a.m. – 4:00 p.m. PT
          More Information & Agenda: Click Here
          Register: Click Here

Experts Learn Something New

If you are a UVM expert and already know just about everything about UVM, you might be interested in some new topics that will be introduced and expanded upon.  Here are four:

The first is UVM Framework.  UVM Framework supports reuse across projects, sites, and companies, from block to chip to system, for both simulation and emulation.  Those using it have seen at least a four-week reduction in their verification schedules.

The second is Verification IP.  VIP can help you overcome your IP verification challenges.  One session will explore integrating VIP into a UVM environment with examples based on protocols such as AMBA®, MIPI® and PCI Express®.  If you are not an expert on a specific protocol, you can use VIP to drive stimulus and verify protocol compliance for you.

The third is Automating Scenario-Level UVM Tests with Portable Stimulus.  In this session you will learn to rise above the transaction level to make scenario creation more productive.  You will learn how to leverage lower-level descriptions, such as sequence items, into larger scenarios, and how to use graph-based methods to efficiently and predictably exercise the scenario space and deliver high-quality verification results.  It should also be noted that an ongoing Accellera Working Group is exploring standardization of Portable Stimulus.  While the working group's details are not part of the session, UVM Forum attendees might consider augmenting their knowledge by visiting the Accellera Portable Stimulus group.

The fourth is Improved UVM Testbench Debug Productivity and Visibility.  Those who debug UVM on a daily basis might hear the common question “Are we having fun yet?”  The debug of UVM can be particularly difficult.  We will have a session showing you how to navigate complex UVM environments to quickly find your way around the code – whether it's your own or inherited.  You will see how SystemVerilog/UVM dynamic class activity is as easy to debug as RTL signals.  Want to learn how to solve the top 10 common UVM bring-up issues with config_db, the factory, and sequence execution?  Attend and you will learn.

Novices Welcome (and will learn something too!)

While I can’t promise that if you come as a novice you will leave as an expert, you can learn about UVM in the morning as one of the sessions is a technology overview to ensure you won’t be lost when the experts speak.  If you know very little about UVM, the UVM Forum will help you.  There will be a couple presentations from UVM users.  One session is on how UVM enabled advanced storage IP silicon success (presented by Micron) and another session on UVM and emulation to ease the path to advanced verification and analysis (presented by Qualcomm).

Still want to know more before you attend?  You can also boost your UVM knowledge by attending an online UVM Basics course at Verification Academy.  Visit here to learn more about the UVM Basics course.  The Basic UVM course consists of 8 sessions with over an hour of instructional content. This course is primarily aimed at existing VHDL and Verilog engineers or managers who recognize they have a functional verification problem but have little or no experience with constrained random verification or object-oriented programming. The goal of the course is to raise the level of UVM knowledge to the point where users have sufficient confidence in their own technical understanding that it becomes less of a barrier to adoption – and makes the UVM Forum 2015 more meaningful for you.

I look forward to seeing you there.


21 October, 2015

Join us to review the first public review of the Debug Data API specification!

At DAC 2015 we introduced Verification Academy attendees to a “New Verification Debug API” project.  Since that time we have held several teleconferences and a face-to-face meeting, extending and refining what was presented at DAC to match use cases for portable debug data based on user feedback.  The public group participating in this feedback and refinement is now reviewing the first version of the specification.  While we invite everyone to download a copy and review it, you must be a member of the group to download it and post questions and comments.  Everyone is welcome to join, and there is no fee to participate, or to just observe.

The Debug Data API is a more modern way to share waveform information than VCD.  If VCD still works for you, don't worry – we are not doing anything to change that flow.  But we are looking to extend and augment the traditional live-simulation VPI scheme with one that also works for post-simulation datasets.  The specification is now being juxtaposed against use models to ensure that post-simulation waveform information can be used the way you want to use it, and more efficiently than what is available today.  The plan is to avoid the pitfalls of multi-step translation processes and allow you to author a verification function, or set of functions, once and perform that function on any dataset from any producer.

We invite you to join with us in building this.  Once you have joined, login and click this link to download the first version of the specification that is now out for review. In addition to being able to download the specification you will also be sent meeting notifications to join in with us if you wish.


7 October, 2015

Design and verification flows are multifaceted and predominantly built by bringing together tools and technology from multiple sources.   The tools from these sources build upon IEEE standards – several IEEE standards.  What started with VHDL (IEEE 1076™) and Verilog/SystemVerilog (IEEE 1800™) and their documented interfaces has grown.  As more IEEE standards emerged, and as tools and technology combined these standards in innovative and differentiated ways, it became clear the industry would benefit from an ongoing, open, public discussion on interoperability.  The IEEE Standards Association (IEEE-SA) continues this tradition, started by my friends at Synopsys, with the IEEE-SA EDA & IP Interoperability Symposium.  And for 2015, I'm pleased to chair the event.

Anyone working on or using design and verification flows that depend on tool interoperability as well as design and verification intellectual property (IP) working together will benefit from attending this symposium.  The symposium will be held Wednesday, 14 October 2015, at the offices of Cadence Design Systems in San Jose, CA USA.  You can find more information about the event at the links below:

  • Register: Click here.
  • Event Information: Click here.
  • Event Program: Click here.

A keynote presentation by Dan Armbrust, CEO of Silicon Catalyst, opens the event with a talk on Realizing the next growth wave for semiconductors – A new approach to enable innovative startups.  If you are one of the Silicon Valley innovators, you might like to hear what Dan shares on this next growth wave.  From my perspective, I suspect it will include being more energy-conscious in how we design.  The work on current and emerging IEEE standards that address those energy concerns will follow.  We will review the conclusions from the DAC Low Power Workshop, and leadership from the IEEE low-power standards groups will discuss what they are doing in that context.

We then take a lunch break and celebrate 10 Years of SystemVerilog.  The first IEEE SystemVerilog standard (IEEE Std. 1800™-2005) was published in November 2005, so it seems fitting we celebrate this accomplishment.  Joining many of the participants in the IEEE SystemVerilog standardization effort for this celebration will be participants from the Accellera group that incubated it before it became an IEEE standard.  We won't stop with just celebrating SystemVerilog.  We will also share information on standards projects that have leveraged SystemVerilog, like UVM, which has recently become a full-fledged IEEE standards project (IEEE P1800.2).  With so many people who have worked on completed and successful IEEE standards in attendance, Accellera offered to bring its Portable Stimulus Working Group members over for a lunch break during their three-day face-to-face Silicon Valley meeting – to mingle with them, learn from them, and hopefully be inspired by them as well.  Maybe some of the experience of building industry-relevant standards can be shared between the SystemVerilog participants and Accellera's newer teams.

We will then return to energy-related issues, with our first topic area being power modeling for IP.  Chris Rowen, Cadence Fellow, will take us through recent experiences with the issues his teams have faced driving ever-higher levels of power efficiency from designs using ever more design IP.  Tales from the trenches never get old, and they offer insight into what we might do in the development of better standards to address those issues.  While Chris will point to many issues in the use of design IP, I believe these issues are only compounded when it comes to the Internet of Things (IoT).  We have assembled a great afternoon panel to discuss whether IoT is the “ultimate power challenge”.  I can't wait to hear what they say.

Lastly, when we pull all these systems together, LSI package-board issues pose a design interoperability challenge as well.  The IEEE Computer Society's (CS) Design Automation Standards Committee (DASC) has completed another standard developed primarily outside of North America.  The DASC has a long history of global participation and significant standards development outside of North America, as is the case for VHDL-AMS (IEEE 1076.1).  We will hear from the IEEE 2401™-2015 leadership on their newly minted IEEE standard and the LSI package-board issues it addresses.

We don't have time to highlight all the EDA & IP standards work in the IEEE, but our principal theme – addressing issues of power in modern design and verification – led us to focus on a subset of it.  So, if your favorite standard or topic area does not appear in the program, let me know and we can add it to our list to consider next year.  And when I say “we,” the work to put together an event like this takes a lot of people, all of whom are interested in what we should do next year and in your input to us.  For me, in addition to collecting that input, I need to thank those who did all the work to make this happen.  I've often said that, as chair, you let the others do all the work.  It has been great to collaborate with my IEEE-SA friends and my peers at the other two of the Big-3 EDA companies.  It has also been great to get input and advice on the Steering Committee from two of the world's largest silicon suppliers (Intel & TSMC), and to include, for the first time, support from standards incubators Accellera Systems Initiative and Si2.


22 September, 2015

Back to school! Now that summer is over and the kids are settled into their classrooms, it's a great time for grown-ups to go back to school themselves by taking advantage of new Verification Academy events and courses. Specifically, the Questa Formal and CDC instructors have been working hard over the past spring and summer to create new training materials for novices and intermediate students alike. We welcome you to take advantage of the following educational resources.

* Attend the upcoming Verification Academy Live: Formal Seminars in Fremont, CA on Tuesday, October 6, or in Austin, TX on Thursday, October 8.  The agenda and registration links for these free events (lunch included) are posted here:

* If you can't make it to an in-person seminar, note that over the summer Verification Academy instructors have been busy adding all-new courses on Formal- and CDC-related topics – spanning automated applications to direct use of formal:

Getting Started with Formal-Based Technology

Formal Assertion-Based Verification

Formal-Based Technology: Automatic Formal Solutions

Clock-Domain Crossing Verification and Power Aware CDC Verification

* If you were not one of the hundreds of visitors to the Verification Academy booth at DAC 2015, the good news is that all the “Formal Day” presentations are online now.  Abstracts, slides, and videos are available here:

* For current Questa Formal and CDC users, remember that there are numerous quick start guides, product usage tutorials, and methodology app notes that are shipped with the product.

“Home page” for all tutorials and quick start guides: $QHOME/share/doc/index.html
Tutorials: $QHOME/share/examples/tutorials
Quick start guides: $QHOME/share/examples/doc/pdfdocs

* Last but not least, SupportNet hosts the latest app notes, tutorials, and product documentation:

Hurry – the bell is ringing, and you don’t want to be late for class!

Joe Hupcey III,
on behalf of the Questa Formal and CDC team

P.S. Outside of Verification Academy, there is a new book out on formal verification by experienced formal practitioners Erik Seligman, Tom Schubert, and M V Achutha Kiran Kumar, titled Formal Verification: An Essential Toolkit for Modern VLSI Design. My colleagues have reviewed it, and they have high praise for its prose and examples.

8 September, 2015

We are proud to announce that Mentor Graphics, along with Cadence and Breker, is making a joint contribution of technology to the Accellera Portable Stimulus Working Group (PSWG) [see the Press Release]. I’d like to take this opportunity to provide some background information and hopefully answer any questions you may have.

As you may recall, Mentor Graphics was instrumental in pushing our industry towards a Portable Stimulus standard with the formation of a Proposed Working Group in May of last year, which led to the formation of the PSWG in January of this year. The PSWG has identified more than 100 specific requirements for a standard and has spent the past several months developing a set of “Usage Examples” that will be used to help us evaluate technical contributions to the standard.

The PSWG has been open to accepting technology contributions for the past few months, but that window will be closing next week, on September 16th. Once contributions are received, the WG will evaluate them all based on the requirements and usage examples and will decide to accept or reject each contribution. Because of the difficulty in choosing among multiple contributions, Accellera’s Policy and Procedures state that “Accellera prefers not to have competing contributions. It is recommended that complementary contributions are worked out among different Contributors,” and that’s exactly what we’ve done.

We had always, of course, been planning to contribute our Questa inFact-based graph specification language (tweaked a bit based on the new requirements), and were fully expecting that Cadence and Breker, who each have products in this space, would make their own contributions. When faced with a situation like this, I like to fall back on the First Commandment of Effective Standards (thanks to Karen Bartleson), which is to cooperate on standards and compete on tools.

Rather than wait and fight it out in the Working Group, where unfortunately marketing and politics can sometimes detract from the technical value of a standard, we approached Breker and Cadence about working together and I think you’ll find our contribution to be “greater than the sum of its parts.” We all hope that it will serve as a strong basis for the standard and will help streamline the process. Of course, with additional input from the members of the Working Group, there are likely to be additional tweaks as we go forward, but by eliminating the “ours vs. theirs” issues beforehand, it is our hope that we can reach consensus on a final standard more quickly.

We would like to thank Breker and Cadence for their willingness to work with us on this important standard and look forward to healthy “co-op-etition” as we move forward. If you’re interested in participating, you can find out more information at the Accellera website.


25 August, 2015

I have always wanted to contribute to the growing verification engineering community in India, which Mentor's CEO Wally Rhines calls “the largest verification market in the world”. So when I first accompanied the affable Dennis Brophy to the IEEE India office back in April of 2014 to discuss the possibility of having a DVCon in India, I knew I was at the right place at the right time, and that this was an opportunity to contribute to this community.

It has been over a year since that meeting, and I don't have to write about how big a success the first-ever DVCon India in 2014 was. I'm glad I played a small part by serving on the Technical Program Committee for the DV track, reviewing abstracts – a responsibility I thoroughly enjoyed. This year, in addition to being on the TPC, I am contributing as the Chair for Tutorials and Posters. I am eagerly looking forward to the second edition of the Verification Extravaganza, on the 10th and 11th of September 2015, and the amazing agenda we have planned for attendees.

Day 1 of the conference is dedicated to keynotes, panel discussions, and tutorials, while day 2 is dedicated fully to papers, with a DV track and a panel in addition to papers in an ESL track. Participants are free to attend any track and can move between tracks. This year we received many sponsored tutorial submissions; hence, there will be three parallel tutorial tracks, one on the DV side and two on the ESL side.

Below is a list of the sessions at which Mentor Graphics will be presenting:

  • Keynote from Harry Foster discussing the growing complexity across the entire design ecosystem
    Thursday, September 10, 2015
    9:45am – 10:30am
    Grand Ball Room, The Leela Palace
    More Information >
    Register for this event >
  • Creating SystemVerilog UVM Testbenches for Simulation and Emulation Platform Portability to Boost Block-to-System Verification Productivity
    Thursday, September 10, 2015
    1:30pm – 3:00pm
    DV Track, Diya, The Leela Palace
    More Information >
    Register for this event >
  • Expediting the code coverage closure using Static Formal Techniques – A proven approach at block and SoC Levels!
    Thursday, September 10, 2015
    1:30pm – 3:00pm
    DV Track, Grand Ball Room, The Leela Palace
    More Information >
    Register for this event >

The papers on day 2 are split into three parallel tracks: one DV track and two ESL tracks. Within the DV track, one area is dedicated to UVM/SV. The other categories within the DV track cover Portable Stimulus & Graph-Based Stimulus; AMS; SoC & Stimulus Generation; Emulation, Acceleration, and Prototyping; and a general selected category. The surprise among the categories is Portable Stimulus, which was only a tutorial last year but has continued to draw high interest; this year's sessions will build on that initial tutorial.

Overall there is an exciting mix of keynotes, tutorials, panels, papers, and posters, which will make for two exceptional days of learning, networking, and fun. I look forward to seeing you at DVCon India 2015 – and if you see me at the show, please come say hello and let me know what you think of the conference.

