Verification Horizons BLOG

This blog provides an online forum with weekly updates on concepts, values, standards, methodologies, and examples to help you understand what advanced functional verification technologies can do and how to apply them most effectively. We look forward to your comments and suggestions on the posts to make this a useful tool.

22 September, 2015

Back to School! Now that summer is over and the kids are settled into their classrooms, it’s a great time for grown-ups to go back to school themselves by taking advantage of new Verification Academy events and courses. Specifically, the Questa Formal and CDC instructors have been working hard over the past spring and summer to create new training materials for novices and intermediate students alike. We welcome you to take advantage of the following educational resources.

* Attend the upcoming Verification Academy Live: Formal Seminars in Fremont, CA on Tuesday, October 6 or in Austin, TX on Thursday, October 8. The agenda and registration links for these free events (lunch included) are posted on the formal technology seminar page.

* If you can’t make it to an in-person seminar, note that over the summer Verification Academy instructors have been busy adding all-new courses on Formal and CDC-related topics, spanning automated applications to direct use of formal:

Getting Started with Formal-Based Technology

Formal Assertion-Based Verification

Formal-Based Technology: Automatic Formal Solutions

Clock-Domain Crossing Verification and Power Aware CDC Verification

* If you were not one of the hundreds of visitors to the Verification Academy booth at DAC 2015, the good news is that all the “Formal Day” presentations are now online. Abstracts, slides, and videos are available here:

* For current Questa Formal and CDC users, remember that there are numerous quick start guides, product usage tutorials, and methodology app notes that are shipped with the product.

“Home page” for all tutorials and quick start guides: $QHOME/share/doc/index.html
Tutorials: $QHOME/share/examples/tutorials
Quick start guides: $QHOME/share/examples/doc/pdfdocs

* Last but not least, SupportNet hosts the latest app notes, tutorials, and product documentation:

Hurry – the bell is ringing, and you don’t want to be late for class!

Joe Hupcey III,
on behalf of the Questa Formal and CDC team

P.S. Outside of the Verification Academy, there is a new book out on formal verification by experienced formal practitioners Erik Seligman, Tom Schubert, and M V Achutha Kiran Kumar titled, Formal Verification: An Essential Toolkit for Modern VLSI Design. My colleagues have reviewed it, and they have high praise for its prose and examples.

8 September, 2015

We are proud to announce that Mentor Graphics, along with Cadence and Breker, is making a joint contribution of technology to the Accellera Portable Stimulus Working Group (PSWG) [see the Press Release]. I’d like to take this opportunity to provide some background information and hopefully answer any questions you may have.

As you may recall, Mentor Graphics was instrumental in pushing our industry towards a Portable Stimulus standard with the formation of a Proposed Working Group in May of last year, which led to the formation of the PSWG this past January. The PSWG has identified more than 100 specific requirements for a standard and has spent the past several months developing a set of “Usage Examples” that will be used to help evaluate technical contributions to the standard.

The PSWG has been open to accepting technology contributions for the past few months, but that window will be closing next week, on September 16th. Once contributions are received, the WG will evaluate them all based on the requirements and usage examples and will decide to accept or reject each contribution. Because of the difficulty in choosing among multiple contributions, Accellera’s Policy and Procedures state that “Accellera prefers not to have competing contributions. It is recommended that complementary contributions are worked out among different Contributors,” and that’s exactly what we’ve done.

We had always, of course, been planning to contribute our Questa inFact-based graph specification language (tweaked a bit based on the new requirements), and were fully expecting that Cadence and Breker, who each have products in this space, would make their own contributions. When faced with a situation like this, I like to fall back on the First Commandment of Effective Standards (thanks to Karen Bartleson), which is to cooperate on standards and compete on tools.

Rather than wait and fight it out in the Working Group, where unfortunately marketing and politics can sometimes detract from the technical value of a standard, we approached Breker and Cadence about working together and I think you’ll find our contribution to be “greater than the sum of its parts.” We all hope that it will serve as a strong basis for the standard and will help streamline the process. Of course, with additional input from the members of the Working Group, there are likely to be additional tweaks as we go forward, but by eliminating the “ours vs. theirs” issues beforehand, it is our hope that we can reach consensus on a final standard more quickly.

We would like to thank Breker and Cadence for their willingness to work with us on this important standard and look forward to healthy “co-op-etition” as we move forward. If you’re interested in participating, you can find out more information at the Accellera website.


25 August, 2015

I have always wanted to contribute to the growing verification engineering community in India, which Mentor’s CEO Wally Rhines calls “the largest verification market in the world”. So when I first accompanied the affable Dennis Brophy to the IEEE India office back in April of 2014 to discuss the possibility of having a DVCon in India, I knew I was in the right place at the right time, with an opportunity to contribute to this community.

It has been two years since that meeting, and I don’t have to write about how big a success the first-ever DVCon India in 2014 was. I’m glad I played a small part by serving on the Technical Program Committee for the DV track, reviewing abstracts, a responsibility I thoroughly enjoyed. This year, in addition to serving on the TPC, I am contributing as the Chair for Tutorials and Posters. I am eagerly looking forward to the second edition of this verification extravaganza on 10th and 11th September 2015 and the amazing agenda we have planned for attendees.

Day 1 of the conference is dedicated to keynotes, panel discussions, and tutorials, while day 2 is dedicated fully to papers, with a DV track and a panel in addition to papers in an ESL track. Participants are free to attend any track and can move between tracks. This year we received many sponsored tutorial submissions, so there will be three parallel tutorial tracks: one on the DV side and two on the ESL side.

Below please find a list of the sessions Mentor Graphics will be presenting:

  • Keynote from Harry Foster discussing the growing complexity across the entire design ecosystem
    Thursday, September 10, 2015
    9:45am – 10:30am
    Grand Ball Room, The Leela Palace
    More Information >
    Register for this event >
  • Creating SystemVerilog UVM Testbenches for Simulation and Emulation Platform Portability to Boost Block-to-System Verification Productivity
    Thursday, September 10, 2015
    1:30pm – 3:00pm
    DV Track, Diya, The Leela Palace
    More Information >
    Register for this event >
  • Expediting the code coverage closure using Static Formal Techniques – A proven approach at block and SoC Levels!
    Thursday, September 10, 2015
    1:30pm – 3:00pm
    DV Track, Grand Ball Room, The Leela Palace
    More Information >
    Register for this event >

The papers on day 2 are split into three parallel tracks: one DV track and two ESL tracks. Within the DV track, one area is dedicated to UVM/SV. The other DV categories cover Portable Stimulus and Graph-Based Stimulus; AMS; SoC and Stimulus Generation; Emulation, Acceleration, and Prototyping; and a general selected category. The surprise among the categories is Portable Stimulus, which was covered in a tutorial last year but has continued to draw high interest, so this year’s sessions will build on that initial tutorial.

Overall there is an exciting mix of keynotes, tutorials, panels, papers, and posters, which will make for two exceptional days of learning, networking, and fun. I look forward to seeing you at DVCon India 2015, and if you see me at the show, please come say hello and let me know what you think of the conference.


22 August, 2015

Impact of Design Size on First Silicon Success

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I presented verification results in terms of schedules, number of required spins, and classification of functional bugs. In this blog, I conclude the series on the 2014 Wilson Research Group Functional Verification Study by providing a deeper analysis of respins by design size.

It’s generally assumed that the larger the design, the greater the likelihood of bugs. Yet a question worth answering is how effective projects are at finding these bugs prior to tapeout.

In Figure 1, we first extract the 2014 data from the required number of spins trends presented in my previous blog (click here), and then partition this data into sets based on design size (that is, designs less than 5 million gates, designs between 5 and 80 million gates, and designs greater than 80 million gates). This led to perhaps one of the most startling findings from our 2014 study: the data suggest that the smaller the design, the lower the likelihood of achieving first silicon success! While 34 percent of the designs over 80 million gates achieve first silicon success, only 27 percent of the designs less than 5 million gates are able to do so. The difference is statistically significant.
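For readers curious how a claim like “statistically significant” can be checked, here is a minimal two-proportion z-test sketch in Python. The 34 percent and 27 percent success rates come from the text above, but the sample sizes of 500 projects per group are purely hypothetical placeholders, since the post does not publish the study’s actual respondent counts.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """z statistic for H0: the two underlying success rates are equal."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# 34% vs. 27% first-silicon success, with HYPOTHETICAL sample sizes of
# 500 projects per group (the actual study counts are not published here).
z = two_proportion_z(170, 500, 135, 500)
print(z)  # |z| > 1.96 would be significant at the 5% level
```

With these assumed group sizes the statistic comes out above 1.96, which is consistent with the significance claim; with much smaller samples the same percentages might not be significant.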


Figure 1. Number of spins by design size

To understand what factors might be contributing to this phenomenon, we decided to apply the same partitioning technique while examining verification technology adoption trends.

Figure 2 shows the adoption trends for various verification techniques from 2007 through 2014, which include code coverage, assertions, functional coverage, and constrained-random simulation.

One observation we can make from these adoption trends is that the electronic design industry is maturing its verification processes. This maturity is likely due to the need to address the challenge of verifying designs with growing complexity.


Figure 2. Verification Technology Adoption Trends

In Figure 3 we extract the 2014 data from the various verification technology adoptions trends presented in Figure 2, and then partition this data into sets based on design size (that is, designs less than 5 million gates, designs between 5 and 80 million gates, and designs greater than 80 million gates).


Figure 3. Verification Technology Adoption by Design

Across the board we see that designs less than 5 million gates are less likely to adopt code coverage, assertions, functional coverage, and constrained-random simulation. Hence, if you correlate this data with the number of spins by design size (as shown in Figure 1), then the data suggest that the verification maturity of an organization has a significant influence on its ability to achieve first silicon success.

As a side note, you might have noticed that there is less adoption of constrained-random simulation for designs greater than 80 million gates. A few factors contribute to this: (1) constrained-random simulation works well at the IP and subsystem level, but does not scale to the full-chip level for large designs; and (2) a number of projects working on large designs focus predominantly on integrating existing or purchased IP. Hence, these projects devote more of their verification effort to integration and system validation tasks, where constrained-random simulation is rarely applied.

So, to conclude this blog series: in general, the industry is maturing its verification processes, as witnessed by the verification technology adoption trends. However, we found that smaller designs were less likely to adopt what are generally viewed as industry-best verification practices and techniques. Similarly, we found that projects working on smaller designs tend to have a smaller ratio of peak verification engineers to peak designers. Could fewer available verification resources, combined with the lack of adoption of more advanced verification techniques, account for fewer small designs achieving first silicon success? The data suggest that this might be one contributing factor. It’s certainly something worth considering.

Quick links to the 2014 Wilson Research Group Study results


18 August, 2015

In Part 1 of this series, inspired by security researchers who were able to take over a new Jeep and drive it into a ditch, I asserted that in the future all vehicles will need to encrypt their internal control and data bus traffic with an encryption key. This key would be stored in a secure memory element of some sort – a separate memory chip or a register bank inside a system on a chip (SoC). As such, Design & Verification (D&V) engineers will need to verify that this secure storage can’t be compromised.

White hat hacking and constrained-random test benches don’t scale and aren’t exhaustive, so in this post I’ll describe how formal verification technology can be brought to bear.

First, the verification challenge here can be boiled down to two concerns:

(A) Confidentiality: can the key be read by an unauthorized party, or accidentally “leak” to the outputs?

(B) Integrity: can the key be altered, overwritten, or erased by the bad guys (or due to some unforeseen hardware or firmware bug)?

The only way to exhaustively verify (A) and (B) with only a few hours of compute time on common, low-cost servers is by employing formal verification technology. In a nutshell, “formal verification uses mathematical formal methods to prove or disprove the correctness of a system’s design with respect to formal specifications expressed as properties.”(1)
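As a toy illustration of what “prove or disprove correctness with respect to properties” means, the sketch below exhaustively enumerates every short input sequence to a small key-store model and checks a confidentiality property: the key is never readable unless the exact unlock sequence was presented. This is illustrative Python, not a formal tool, and the two-word unlock protocol is invented for the example; a real formal engine proves such properties symbolically over all reachable states rather than by brute-force enumeration.

```python
from itertools import product

SECRET = 0xA5
UNLOCK = (0x3, 0x9)  # hypothetical two-word unlock sequence

class KeyStore:
    """Toy model of a secure key register guarded by an unlock FSM."""
    def __init__(self):
        self.stage = 0
        self.unlocked = False

    def write(self, word):
        if self.unlocked:
            return
        if word == UNLOCK[self.stage]:
            self.stage += 1
            if self.stage == len(UNLOCK):
                self.unlocked = True
        else:
            self.stage = 0  # any wrong word resets unlock progress

    def read(self):
        # Confidentiality property: the key must never appear while locked.
        return SECRET if self.unlocked else 0

def contains_unlock(seq):
    return any(tuple(seq[i:i + 2]) == UNLOCK for i in range(len(seq) - 1))

# Exhaustively enumerate every input sequence up to length 4 over a
# 4-word alphabet, recording any sequence that reads out the key
# without ever having presented the unlock sequence.
violations = []
for n in range(5):
    for seq in product((0x1, 0x3, 0x9, 0xF), repeat=n):
        ks = KeyStore()
        for w in seq:
            ks.write(w)
        if ks.read() == SECRET and not contains_unlock(seq):
            violations.append(seq)

print(violations)  # an empty list means the property held over this space
```

If the model had a back door (say, a debug word that set `unlocked` directly), the enumeration would return the offending sequence as a counterexample, which is exactly the kind of evidence a formal tool produces.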

Returning to our automotive example, the “formal specification” is that (A) and (B) above can never happen, i.e. the key can only be read and edited by authorized parties through specific, secure pathways – anything else is a design flaw that must be fixed before going into production.

So what can D&V engineers do at the RTL level to apply formal technology to this verification challenge – especially if they have never used formal tools or written SystemVerilog Assertions (SVA) before? Luckily, Mentor has developed a fully automated solution that exhaustively verifies that only the paths you specify can reach security- or safety-critical storage elements – i.e., it formally proves the confidentiality and integrity of your DUT’s “root of trust”. The best part is that no knowledge of formal or property specification languages is required.

Questa Secure Check app block diagram

Specifically, taking your RTL plus cleartext, human- and machine-readable Tcl code that specifies the secure/safety-critical storage and allowed access paths as input, the Questa Secure Check app applies formal technology to exhaustively verify that the “root of trust” – i.e., the storage for the system’s encryption keys – cannot be read or tampered with via unauthorized paths.

To expedite the analysis and/or minimize formal compile and run time, the app supports “black boxing” of clearly extraneous IPs and paths to keep the focus on the secure channels alone. The result: an exhaustive proof of your design’s integrity and/or clear counterexamples showing how your specification can be violated.

Questa Secure Check screen shot

Questa Secure Check app GUI example: users click on the “Insecure Path” of concern and the app generates a schematic of the path and waveforms related to the signals involved

In summary, only a sound hardware-based solution based on securely stored encryption keys will establish a true root of trust. Only an exhaustive formal analysis can verify this with mathematical certainty, and thus the Questa Secure Check app was created to help customers address this challenge.

I look forward to hearing your feedback and comments on what you are doing to address this challenge.

Keep your eyes on the road and your hands upon the wheel,

Joe Hupcey III


(1) Using Formal Methods to Verify Complex Designs, IBM Haifa Research Lab, 2007

P.S. Shameless commercial pitch: the lead customer of the Questa Secure Check app is in the consumer electronics space, where its products are subject to worldwide attack, 24/7/365. Suffice it to say that if Secure Check can help harden this customer’s system against determined, continuous attacks, it can help automakers, medical device manufacturers, smartphone designers, aircraft companies, etc. To learn more in advance of a new Verification Academy course on this topic coming out this autumn, feel free to contact me offline or ask questions in the comments section below.


17 August, 2015

ASIC/IC Verification Results

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here).  In my previous blogs, I provided data that suggest a significant amount of effort is being applied to functional verification. An important question the various studies have tried to answer is whether this increasing effort is paying off. In this blog, I present verification results in terms of schedules, number of required spins, and classification of functional bugs.


Figure 1. Design Completion Compared to Original Schedule

Figure 1 presents the design completion time compared to the project’s original schedule. The data suggest that in 2014 there was a slight improvement in projects meeting their original schedule: in the 2007 and 2012 studies, 67 percent of projects were behind schedule, compared to 61 percent in 2014. It is unclear if this improvement is due to the industry becoming more conservative in project planning or simply better at scheduling. Regardless, meeting the originally planned schedule is still a challenge for most of the industry.


Figure 2. Required Number of Spins

Other results trends worth examining relate to the number of spins required between the start of a project and final production. Figure 2 shows this industry trend from 2007 through 2014. Even though designs have increased in complexity, the data suggest that projects are not getting any worse in terms of the number of required spins before production. Still, only about 30 percent of today’s projects are able to achieve first silicon success.

Figure 3 shows various categories of flaws that are contributing to respins. Again, you might note that the sum is greater than 100 percent on this graph, which is because multiple flaws can trigger a respin.


Figure 3. Types of Flaws Resulting in Respins

Logic and functional flaws remain the leading causes of respins. However, the data suggest that there has been a slight improvement in this area over the past seven years.

Figure 4 examines the root cause of logical or functional flaws (previously identified in Figure 3) by various categories. The data suggest design errors are the leading cause of functional flaws, and the situation is worsening. In addition, problems associated with changing, incorrect, and incomplete specifications are a common theme often voiced by many verification engineers and project managers.


Figure 4. Root Cause of Functional Flaws

In my next blog (click here), I provide a deeper analysis of respins by design size.

Quick links to the 2014 Wilson Research Group Study results

12 August, 2015

Content delivery through the Internet gateway is ever changing and evolving. Today’s delivery mechanisms are more efficient, less power consuming, and better performing than previous generations. Competition is fierce among companies who want to support this gateway. It’s a crowded field, and the chips driving high-capacity networks are large and massively complex. In fact, these network switches and routers routinely have more than 500,000 gates. Project teams aim for a large number of ports and expanded throughput while decreasing latency and beefing up security and ease of use.

Verifying the design of these chips requires a broad set of verification solutions, including hardware emulation. One verification team recently used hardware emulation in in-circuit-emulation (ICE) mode to test an SoC design with a 128-port Ethernet interface and a variable bandwidth of 1/10/40/100/120 Gbps. This team elected to use hardware emulation because it could test the design with real traffic, with one Ethernet tester per port. A speed rate adapter was inserted between the fast tester and the slow emulated design under test (DUT), since a direct connection is not possible across such different speed domains. The setup comprised 128 Ethernet testers, 128 Ethernet speed adapters, and heaps of cables. Sadly, the entire setup could support only a single user, who had to work in an emulation lab.

Another verification team took an entirely different approach using the Mentor Ethernet VirtuaLAB, where Ethernet testers are modeled in software running under Linux on a workstation connected to the emulator. The model, an accurate representation of the actual physical tester, is based on intellectual property (IP) blocks that have already been implemented.

The virtual tester includes an Ethernet Packet Generator and Monitor (EPGM) that generates, transmits and monitors Ethernet packets within the DUT and can configure GMII, XGMII, XLGMII/CGMII and CXGMII interfaces for 1G, 10G, 40G/100G and 120G. VirtuaLAB software conducts off-line analysis of the traffic, provides statistics, and supports a variety of other functions.
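To make the packet generator/monitor’s job concrete, here is a minimal Python sketch of building and parsing an Ethernet II frame (6-byte destination MAC, 6-byte source MAC, 2-byte EtherType, payload, zero-padded to the 60-byte minimum). This is purely illustrative and is not VirtuaLAB or EPGM code.

```python
import struct

def build_eth_frame(dst_mac, src_mac, ethertype, payload):
    """Build a minimal Ethernet II frame: dst MAC, src MAC, EtherType, payload."""
    frame = dst_mac + src_mac + struct.pack("!H", ethertype) + payload
    # Pad to the 60-byte minimum frame size (excluding the 4-byte FCS,
    # which the MAC hardware normally appends).
    if len(frame) < 60:
        frame += b"\x00" * (60 - len(frame))
    return frame

def parse_eth_frame(frame):
    """Split a frame back into (dst, src, ethertype, payload-plus-padding)."""
    dst, src = frame[:6], frame[6:12]
    (ethertype,) = struct.unpack("!H", frame[12:14])
    return dst, src, ethertype, frame[14:]

# Example: a broadcast IPv4 frame (EtherType 0x0800) with a toy payload.
dst = bytes.fromhex("ffffffffffff")   # broadcast destination
src = bytes.fromhex("020000000001")   # locally administered source address
frame = build_eth_frame(dst, src, 0x0800, b"hello")
```

A tester like the EPGM generates streams of such frames at line rate, then checks on the monitor side that every field of each received frame matches what was transmitted.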

An interface between the VirtuaLAB virtual tester and the DUT has one instance of VirtuaLAB-DPI communicating to a Virtual Ethernet xRTL (extended Register Transfer Level) transactor connected to a Null-PHY linked to the DUT. One xRTL transactor is required for each port of any xMII supported type.

Multiple VirtuaLAB applications can be bundled together across multiple workstations–– known as a multi co-model –– to support large port-count configurations. High Speed Link (HSL) cards are used to connect co-model channels from workstations to the emulator. This tightly integrated transport mechanism is tuned for maximum wall clock performance and transparent to the testbench. Data plane emulation throughput scales linearly with the port count because of this parallel runtime and debug architecture.

Reconfiguring the virtual tester to perform various functions is done through remote access to a workstation, a stable and reliable piece of equipment less costly than a complex Ethernet tester with equivalent functionality. The workstation can support multiple concurrent users. VirtuaLAB can be used as an enterprise-wide resource in a datacenter, using Enterprise Server’s IT management capabilities.

If you want to know more, download the Accelerating Networking Products to Market Using Ethernet VirtuaLAB whitepaper.


10 August, 2015

ASIC/IC Power Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I presented our study findings on various verification language and library adoption trends. In this blog, I focus on power trends.

Today, we see that about 73 percent of design projects actively manage power with a wide variety of techniques, ranging from simple clock-gating, to complex hypervisor/OS-controlled power management schemes. What is interesting from our 2014 study is that the data indicates that there has been a 19% increase in the last two years in the designs that actively manage power (see Figure 1).


Figure 1. ASIC/IC projects working on designs that actively manage power

Figure 2 shows the various aspects of power management that design projects must verify (for those 73 percent of design projects that actively manage power). The data from our study suggest that many projects are moving to more complex power-management schemes that involve software control. This adds a new layer of complexity to a project’s verification challenge, since these more complex power-management schemes often require emulation to fully verify.


Figure 2. Aspects of power-managed design that are verified

Since the power intent cannot be directly described in an RTL model, alternative supporting notations have recently emerged to capture the power intent. In the 2014 study, we wanted to get a sense of where the industry stands in adopting these various notations. For projects that actively manage power, Figure 3 shows the various standards used to describe power intent that have been adopted. Some projects are actively using multiple standards (such as different versions of UPF or a combination of CPF and UPF). That’s why the adoption results do not sum to 100 percent.


Figure 3. Notation used to describe power intent

In an earlier blog in this series, I provided data that suggest a significant amount of effort is being applied to ASIC/IC functional verification. An important question the various studies have tried to answer is whether this increasing effort is paying off. In my next blog (click here), I present verification results findings in terms of schedules, number of required spins, and classification of functional bugs.

Quick links to the 2014 Wilson Research Group Study results


5 August, 2015

[Preface: everywhere it refers to automobiles in this post, you can also swap in “X-Ray machine”, “pacemaker”, and “aircraft”]

The dark side of our connected future is here: from the comfort of a living room sofa, security researchers were able to remotely disable the brakes and transmission of a new Jeep Cherokee — literally driving the vehicle into a ditch (Hackers Remotely Kill A Jeep On The Highway – With Me In It, Wired, 7-21-15). Another group of researchers were able to hack into a car’s braking and other critical systems via the digital audio broadcast (DAB) infotainment system (“Now car hackers can bust in through your motor’s DAB RADIO”, The Register, 7-24-2015). In this form of attack, multiple vehicles could be affected simultaneously.


Security researcher Charlie Miller attempts to reverse the Jeep out of a ditch after its brakes were remotely disabled. Source: Wired Magazine

Fortunately no one has been hurt in these experiments, and manufacturers have been quick to respond with patches. But these two stories (and a growing number of others like them) demonstrate just how insecure today’s automobile electronics are.

So what can be done to prevent this?

First, I’m not here to argue that there is a single “silver bullet.” To combat the numerous direct and side-channel attacks, there need to be multiple, overlapping solutions that provide the necessary defense in depth. That said, the #1 priority is to secure the “root of trust”, from which everything else – the hardware, firmware, OS, and application layer’s security – is derived. If the root of trust can be compromised, then the whole system is vulnerable.

So how exactly am I defining the “root of trust”? I assert that in the near future the root of trust will effectively be an encryption key – a digital signature for each vehicle — that will be encoded into the electronics of every vehicle. Hence, I argue that the data packets transiting the interior networks of the vehicle will need to be “signed” with the vehicle’s signature and decrypted by the receiving sensor packs/Engine Control Units (ECUs)/radios/etc.

That’s right: encrypt all packets in all the interior data networks. Every. Single. One.

Is this overkill? Look again at the picture of the Jeep in the ditch.
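As a sketch of what per-packet signing could look like, the Python below appends a truncated HMAC-SHA256 tag, keyed by a per-vehicle secret, to each bus payload and verifies it at the receiver. This illustrates packet authentication with standard primitives; it is not any automaker’s actual scheme, the message and key names are invented for the example, and a production design would also need replay protection and possibly payload encryption.

```python
import hashlib
import hmac
import os

VEHICLE_KEY = os.urandom(32)  # per-vehicle secret, provisioned at the factory

def sign_packet(key, payload):
    """Append a truncated HMAC-SHA256 tag so receivers can authenticate
    the payload and detect tampering."""
    tag = hmac.new(key, payload, hashlib.sha256).digest()[:8]
    return payload + tag

def verify_packet(key, packet):
    """Return (authentic?, payload); compare tags in constant time
    to resist timing side channels."""
    payload, tag = packet[:-8], packet[-8:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()[:8]
    return hmac.compare_digest(tag, expected), payload

# A receiving ECU accepts only packets whose tag checks out.
packet = sign_packet(VEHICLE_KEY, b"BRAKE:ENGAGE")
ok, payload = verify_packet(VEHICLE_KEY, packet)
```

An attacker who injects or modifies a packet without knowing the vehicle key cannot produce a valid tag, so the receiving ECU simply discards the traffic.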

Unfortunately, most automobiles being sold today cannot support this proposal, since popular bus protocols like CAN and LIN simply do not have the bandwidth or protocol architecture for it. However, real-time packet encryption/decryption is certainly possible with the increasingly popular Automotive Ethernet standard (whose early adopters include BMW, Hyundai, and Volkswagen). In such an advanced system, the automaker would embed a unique encryption key in each vehicle at the factory (exactly like set-top-box and game console makers do today). The key would be stored in a secure memory element of some sort – a separate memory chip or a register bank inside a system on a chip (SoC). As such, D&V engineers will need to verify that this secure storage can’t be compromised.

So what’s the best verification methodology to ensure that this secure storage can’t be (A) read by an unauthorized party or accidentally “leak” to the outputs; or (B) be altered, overwritten, or erased by the bad guys?

Unfortunately, the classical approach of employing squads of “white hat” hackers to try and crack the system drastically decreases in effectiveness as circuit complexity increases. Similarly, even a really well designed constrained-random simulation environment is not exhaustive either. Consequently, a mathematical, “formal” analysis is required. The good news is that my group has been successful partnering with customers to develop exactly this type of solution. I’ll explain in part 2 of this series …

Until then, keep your eyes on the road and your hands upon the wheel; I look forward to hearing your comments.

Joe Hupcey III

P.S. I grant there are ways to isolate software processes from each other to forestall hacking (indeed, my colleagues down the hall in Mentor’s Embedded Systems Division coach customers on how to do this every day.) However, I assert that only a sound hardware-based solution based on securely stored encryption keys will establish a secure root of trust, as well as give consumers peace of mind.

In a somewhat related note, I predict that in the near future computers/phones/tablets will start implementing and promoting hardware switches that physically disconnect their built-in microphones and cameras. Similarly, just as all cars now have air bag shutoff switches, there should also be a mechanical switch on the dashboard that disconnects all the vehicle’s external antennas. Granted, this will not completely eliminate clever, unforeseen side-channel attacks, but it will go a long way. Besides, the marketing value will be fantastic.


30 July, 2015

Accellera Handoffs UVM to IEEE

It has been a long path from Mentor’s AVM to IEEE P1800.2.  But the moment has arrived: Accellera has formally announced UVM 1.2 will be submitted as a contribution to the IEEE P1800.2™ working group.

Verification Methodology Beginnings

As the IEEE finalized approval of the initial release of SystemVerilog (IEEE Std. 1800™) in 2005, I floated the idea of the need for a methodology that would be a companion to it.  At the time there was little to no industry desire to explore this opportunity in earnest – apart from interest by Mentor Graphics – so we launched our Advanced Verification Methodology (AVM) and set a new direction for an open functional verification methodology.  We built implementations of AVM based on SystemVerilog and SystemC (IEEE Std. 1666™).  We also pioneered an open-source mechanism based on the Apache 2.0 license which is now the accepted license to foster global and rapid open-source adoption in the EDA industry.  And as others joined with us in this journey, AVM grew to become OVM, then UVM.  Now UVM is set to become an IEEE standard.  The IEEE has assigned it project number 1800.2.

Path to IEEE

To say we are pleased to see UVM move to the IEEE is an understatement.  We congratulate the Accellera UVM team on its accomplishment and look forward to participating in this phase of UVM’s standardization. From our first public announcement on May 8, 2006, when we introduced the world to AVM and announced support for it from 19 of our Questa Vanguard Partners, to our announced collaboration with Cadence Design Systems on the development of the Open Verification Methodology (OVM) on August 16, 2007, to the eventual announcement on January 8, 2010 that Accellera adopted OVM as the basis of its Universal Verification Methodology, we have guided its development and supported a path for the Big-3 EDA to voice positive public support.  We are thrilled Accellera has announced its delivery of UVM to the IEEE for ongoing standardization and maintenance.

IEEE Standardization

What comes next?  The IEEE P1800.2 (UVM) project has announced a Call for Participation and kickoff meeting to be held August 6, 2015 from 9am – 11am PDT.  The first meeting will be held via teleconference.  In order to attend, you will need to register for the meeting.  Membership in the IEEE project will be “entity-based” with one company, one vote.  The call for participation has details on membership requirements in order to observe or actively participate.  The 1800.2 project will only focus on the written specification and not the open-source base class library (BCL).  The Accellera UVM TSC will continue to update the BCL.  Accellera has committed to keep the BCL implementation current with changes proposed and approved by the IEEE 1800.2 working group.  This is just like the arrangement Accellera has with the IEEE for SystemC.

Join us at the upcoming meeting and remember to register in order to attend!


