Posts Tagged ‘emulation’

25 October, 2015

Verification Academy Brings “UVM Live” to the Santa Clara Convention Center

If you are involved in the functional verification of electronic systems, you know about the Universal Verification Methodology (UVM) and are probably using it in one fashion or another.  And if you have been reading this blog, you have undoubtedly seen Harry Foster’s posts on the adoption and use of UVM by the FPGA and ASIC/SoC community.  UVM has clearly become the world’s most popular and accepted verification methodology.  Oddly, despite this popularity, there has not been a UVM-only event to bring UVM users together this year.  We believe it is time for UVM users to come together to explore its use and share productivity tips and tricks with each other.  You are invited to register and attend.  The details of the event are:

          Event: UVM Forum – Verification Academy Live Seminar
          Location: Santa Clara Convention Center, Santa Clara, CA USA
          Date: 17 November 2015
          Time: 8:30 a.m. – 4:00 p.m. PT
          More Information & Agenda: Click Here
          Register: Click Here

Experts Learn Something New

If you are a UVM expert and already know just about everything about UVM, you might still be interested in some new topics that will be introduced and expanded upon.  Here are four:

The first is the UVM Framework.  The UVM Framework supports reuse across projects, sites and companies, from block to chip to system, for both simulation and emulation.  Teams using it have seen at least a four-week reduction in their verification schedules.

The second is Verification IP.  VIP can help you overcome your IP verification challenges.  One session will explore integrating VIP into a UVM environment with examples based on protocols such as AMBA®, MIPI® and PCI Express®.  If you are not an expert on a specific protocol, you can use VIP to drive stimulus and verify protocol compliance for you.
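If you have not seen VIP-based code before, the sketch below shows the general shape of dropping a VIP agent into a UVM environment.  It is a minimal sketch only: the class names (vip_eth_agent, vip_eth_config) are placeholders for illustration, not any particular vendor’s API.

    // Hypothetical sketch: vip_eth_agent and vip_eth_config stand in for
    // whatever agent and configuration classes your VIP actually provides.
    class my_env extends uvm_env;
      `uvm_component_utils(my_env)

      vip_eth_agent m_eth_agent;   // protocol agent delivered by the VIP

      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction

      function void build_phase(uvm_phase phase);
        vip_eth_config cfg;
        super.build_phase(phase);
        cfg = vip_eth_config::type_id::create("cfg");
        cfg.is_active = UVM_ACTIVE;   // drive stimulus, not just monitor
        uvm_config_db#(vip_eth_config)::set(this, "m_eth_agent", "cfg", cfg);
        m_eth_agent = vip_eth_agent::type_id::create("m_eth_agent", this);
      endfunction
    endclass

The point is that the protocol knowledge lives inside the agent; your environment only configures and connects it.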

The third is Automating Scenario-Level UVM Tests with Portable Stimulus.  In this session you will learn to rise above the transaction level to make scenario creation more productive: how to leverage lower-level descriptions, such as sequence items, into larger scenarios, and how to use graph-based methods to efficiently and predictably exercise the scenario space and deliver high-quality verification results.  It should also be noted that an ongoing Accellera working group is exploring standardization of Portable Stimulus.  While the working group’s details are not part of the session, UVM Forum attendees might consider augmenting their knowledge by visiting the Accellera Portable Stimulus group.
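To give a feel for what “leveraging lower-level descriptions into larger scenarios” means in plain UVM terms today, here is a minimal virtual-sequence-style sketch; the sub-sequence names are hypothetical.

    // Sketch of a scenario composed from existing block-level sequences.
    // dma_xfer_seq and irq_check_seq are invented lower-level sequences.
    class dma_then_irq_scenario extends uvm_sequence #(uvm_sequence_item);
      `uvm_object_utils(dma_then_irq_scenario)

      function new(string name = "dma_then_irq_scenario");
        super.new(name);
      endfunction

      virtual task body();
        dma_xfer_seq  dma = dma_xfer_seq::type_id::create("dma");
        irq_check_seq irq = irq_check_seq::type_id::create("irq");
        dma.start(m_sequencer, this);   // step 1: reuse a block-level sequence
        irq.start(m_sequencer, this);   // step 2: check the resulting interrupt
      endtask
    endclass

Graph-based tools automate the interesting part – choosing and ordering such steps to cover the scenario space – rather than leaving it to hand-written bodies like this one.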

The fourth is Improved UVM Testbench Debug Productivity and Visibility.  Those who debug UVM on a daily basis know the wry question “Are we having fun yet?”  UVM debug can be particularly difficult, so we will have a session showing you how to navigate complex UVM environments and quickly find your way around the code – whether it’s your own or inherited.  You will see how SystemVerilog/UVM dynamic class activity can be made as easy to debug as RTL signals.  Want to learn how to solve the top 10 common UVM bring-up issues with config_db, the factory, and sequence execution?  Attend and you will learn.
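As a taste of the config_db portion, here is one of those classic bring-up issues in miniature; the interface name, field name and paths are illustrative only.

    // Classic bring-up bug: the path string given to set() must match the
    // hierarchy seen by get(), and get()'s return value must be checked.
    // bus_if, "vif" and the paths below are illustrative, not from any kit.

    // In the test or top module:
    uvm_config_db#(virtual bus_if)::set(this, "env.agent.*", "vif", vif);

    // In the driver's build_phase -- never ignore the return value:
    if (!uvm_config_db#(virtual bus_if)::get(this, "", "vif", vif))
      `uvm_fatal("NOVIF", "virtual interface not set for this component")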

Novices Welcome (and will learn something too!)

While I can’t promise that if you come as a novice you will leave as an expert, you can learn about UVM in the morning: one of the sessions is a technology overview to ensure you won’t be lost when the experts speak.  If you know very little about UVM, the UVM Forum will help you.  There will also be a couple of presentations from UVM users: one session on how UVM enabled advanced storage IP silicon success (presented by Micron) and another on using UVM and emulation to ease the path to advanced verification and analysis (presented by Qualcomm).

Still want to know more before you attend?  You can also boost your UVM knowledge by taking the online UVM Basics course at Verification Academy.  Visit here to learn more about the UVM Basics course.  The course consists of eight sessions with over an hour of instructional content, aimed primarily at existing VHDL and Verilog engineers or managers who recognize they have a functional verification problem but have little or no experience with constrained-random verification or object-oriented programming.  Its goal is to raise your UVM knowledge to the point where your own technical understanding is no longer a barrier to adoption – and to make UVM Forum 2015 more meaningful for you.

I look forward to seeing you there.


25 August, 2015

I have always wanted to contribute to the growing verification engineering community in India, which Mentor’s CEO Wally Rhines calls “the largest verification market in the world”. So when I first accompanied the affable Dennis Brophy to the IEEE India office back in April 2014 to discuss the possibility of holding a DVCon in India, I knew I was in the right place at the right time, with an opportunity to contribute to this community.

It has been well over a year since that meeting, and I don’t have to write about how big a success the first-ever DVCon India was in 2014. I’m glad I played a small part by serving on the Technical Program Committee for the DV track, reviewing abstracts – a responsibility I thoroughly enjoyed. This year, in addition to being on the TPC, I am contributing as the Chair for Tutorials and Posters, and I am eagerly looking forward to the second edition of the Verification Extravaganza on 10 and 11 September 2015 and the amazing agenda we have planned for attendees.

Day 1 of the conference is dedicated to keynotes, panel discussions and tutorials, while day 2 is dedicated fully to papers, with a DV track and a panel in addition to papers in an ESL track. Participants are free to attend any track and can move between tracks. This year we received many sponsored tutorial submissions; hence, there will be three parallel tutorial tracks: one on the DV side and two on the ESL side.

Below is a list of the sessions at which Mentor Graphics will be presenting:

  • Keynote from Harry Foster discussing the growing complexity across the entire design ecosystem
    Thursday, September 10, 2015
    9:45am – 10:30am
    Grand Ball Room, The Leela Palace
    More Information >
    Register for this event >
  • Creating SystemVerilog UVM Testbenches for Simulation and Emulation Platform Portability to Boost Block-to-System Verification Productivity
    Thursday, September 10, 2015
    1:30pm – 3:00pm
    DV Track, Diya, The Leela Palace
    More Information >
    Register for this event >
  • Expediting the code coverage closure using Static Formal Techniques – A proven approach at block and SoC Levels!
    Thursday, September 10, 2015
    1:30pm – 3:00pm
    DV Track, Grand Ball Room, The Leela Palace
    More Information >
    Register for this event >

The papers on day 2 are split into three parallel tracks: one DV track and two ESL tracks. Within the DV track, one area is dedicated to UVM/SV; the other categories cover Portable Stimulus & Graph-Based Stimulus; AMS; SoC & Stimulus Generation; Emulation, Acceleration and Prototyping; and a general selected category. The surprise among the categories is Portable Stimulus, which was only a tutorial last year but has remained of such high interest that this year’s sessions will build on that initial tutorial.

Overall there is an exciting mix of keynotes, tutorials, panels, papers and posters, which will make for two exceptional days of learning, networking and fun. I look forward to seeing you at DVCon India 2015 – and if you see me at the show, please come say hello and let me know what you think of the conference.


12 August, 2015

Content delivery through the Internet gateway is ever changing and evolving. Today’s delivery mechanisms are more efficient, less power hungry, and better performing than previous generations. Competition is fierce among the companies who want to support this gateway: it is a crowded field, and the chips driving high-capacity networks are large and massively complex.  In fact, these network switches and routers routinely have more than 500,000 gates. Project teams aim for more ports and greater throughput while decreasing latency and beefing up security and ease of use.

Verifying the design of these chips requires a broad set of verification solutions, including hardware emulation. One verification team recently used hardware emulation in in-circuit-emulation (ICE) mode to test an SoC design with a 128-port Ethernet interface and a variable bandwidth of 1/10/40/100/120 Gbps. This team chose hardware emulation because it could exercise the design with real traffic, using one Ethernet tester per port. A speed-rate adapter was inserted between each fast tester and the slow emulated design under test (DUT), since a direct connection is not possible across such different speed domains. The setup comprised 128 Ethernet testers, 128 Ethernet speed adapters and heaps of cables. Sadly, the entire setup could support only a single user, working in the emulation lab.

Another verification team took an entirely different approach using the Mentor Ethernet VirtuaLAB, where Ethernet testers are modeled in software running under Linux on a workstation connected to the emulator. The model, an accurate representation of the actual physical tester, is based on intellectual property (IP) blocks that have already been implemented.

The virtual tester includes an Ethernet Packet Generator and Monitor (EPGM) that generates, transmits and monitors Ethernet packets within the DUT and can configure GMII, XGMII, XLGMII/CGMII and CXGMII interfaces for 1G, 10G, 40G/100G and 120G. VirtuaLAB software conducts off-line analysis of the traffic, provides statistics, and supports a variety of other functions.

The interface between the VirtuaLAB virtual tester and the DUT consists of one instance of VirtuaLAB-DPI communicating with a Virtual Ethernet xRTL (extended Register Transfer Level) transactor, connected to a Null-PHY linked to the DUT. One xRTL transactor is required for each port of any supported xMII type.
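The details of the VirtuaLAB-DPI link are product internals, but conceptually the software/hardware boundary looks like a DPI import on the transactor side. The sketch below is an assumed illustration only; epgm_next_frame and send_to_dut are invented names, not the actual VirtuaLAB interface.

    // Assumed illustration of a software-to-transactor boundary (names invented):
    // the C side models the Ethernet tester, the SV side is the xRTL transactor.
    import "DPI-C" function int epgm_next_frame(output byte frame [1518],
                                                output int  frame_len);

    task drive_port();
      byte frame [1518];
      int  len;
      forever begin
        if (epgm_next_frame(frame, len) == 0) break;  // 0 = no traffic queued
        send_to_dut(frame, len);   // assumed BFM task driving the xMII pins
      end
    endtask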

Multiple VirtuaLAB applications can be bundled together across multiple workstations (a configuration known as a multi co-model) to support large port-count configurations. High Speed Link (HSL) cards connect the co-model channels from the workstations to the emulator. This tightly integrated transport mechanism is tuned for maximum wall-clock performance and is transparent to the testbench. Thanks to this parallel runtime and debug architecture, data-plane emulation throughput scales linearly with the port count.

Reconfiguring the virtual tester to perform various functions is done through remote access to a workstation – a stable, reliable piece of equipment that costs less than a complex Ethernet tester of equivalent functionality and that can support multiple users concurrently. VirtuaLAB can thus be used as an enterprise-wide resource in a datacenter, managed through Enterprise Server’s IT management capabilities.

If you want to know more, download the Accelerating Networking Products to Market Using Ethernet VirtuaLAB whitepaper.


10 August, 2015

ASIC/IC Power Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I presented our study findings on various verification language and library adoption trends. In this blog, I focus on power trends.

Today, about 73 percent of design projects actively manage power, with techniques ranging from simple clock-gating to complex hypervisor/OS-controlled power-management schemes. What is interesting from our 2014 study is that the data indicate a 19 percent increase over the last two years in designs that actively manage power (see Figure 1).


Figure 1. ASIC/IC projects working on designs that actively manage power
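For readers new to the low end of that spectrum, simple clock-gating amounts to a glitch-safe AND of a clock with a latched enable. A generic textbook sketch follows – not any library’s integrated clock-gating cell.

    // Generic latch-based clock gate -- a textbook sketch, not a library cell.
    module clk_gate (
      input  logic clk,
      input  logic enable,   // enable from the power-management logic
      output logic gclk      // gated clock to the downstream registers
    );
      logic en_lat;
      // Latch the enable while the clock is low so gclk cannot glitch
      always_latch
        if (!clk) en_lat = enable;
      assign gclk = clk & en_lat;
    endmodule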

Figure 2 shows the various aspects of power management that those 73 percent of design projects must verify. The data from our study suggest that many projects are moving to more complex power-management schemes involving software control. This adds a new layer of complexity to a project’s verification challenge, since these more complex schemes often require emulation to fully verify.


Figure 2. Aspects of power-managed design that are verified

Since the power intent cannot be directly described in an RTL model, alternative supporting notations have recently emerged to capture the power intent. In the 2014 study, we wanted to get a sense of where the industry stands in adopting these various notations. For projects that actively manage power, Figure 3 shows the various standards used to describe power intent that have been adopted. Some projects are actively using multiple standards (such as different versions of UPF or a combination of CPF and UPF). That’s why the adoption results do not sum to 100 percent.


Figure 3. Notation used to describe power intent

In an earlier blog in this series, I provided data that suggest a significant amount of effort is being applied to ASIC/IC functional verification. An important question the various studies have tried to answer is whether this increasing effort is paying off. In my next blog (click here), I present verification results findings in terms of schedules, number of required spins, and classification of functional bugs.

Quick links to the 2014 Wilson Research Group Study results


29 July, 2015

If you were not one of the hundreds of visitors to the Verification Academy booth at DAC 2015 and missed the opportunity to pick up a printed copy of the DAC 2015 issue of Verification Horizons, don’t worry: you can download it as well.

Questa Vanguard Partners Highlighted

Eight of the eleven articles were authored or co-authored by our partners and represent a wide range of topics.  There are two articles on DO-254 (Partners: eInfochips and Verisense).  There is an article on formal and ABV of MBIST MCPs (Partner: FishTail Design Automation), and another on how to start formal analysis “right” (Partner: Oski).  For UVM users, reuse of MATLAB® and Simulink® functions is covered (Partner: MathWorks).  Continuing for UVM users, intelligent testbench automation with UVM and Questa® is explored (Partner: Codasip Ltd.).  For the Agile community, unit testing your way to a reliable testbench is explored (Partner: XtremeEDA, with user company NVIDIA).  Lastly, noted emulation consultant Lauro Rizzatti shares part 2 of his three decades of emulation evolution, and a customer paper (Marvell) covers techniques to accelerate RTL simulation.

All of this is inside the 60-page mega-issue of Verification Horizons, with 11 articles in all.  Direct links to each of the articles are shared below along with the article titles and authors.  The editor’s introduction by Tom Fitzpatrick gives even more detail and background on this issue.  If you don’t already have some summer or vacation reading, get your electronic copy today!


19 July, 2015

ASIC/IC Verification Technology Adoption Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I focused on the growing ASIC/IC design project resource trends due to rising design complexity. In this blog I examine various verification technology adoption trends.

Dynamic Verification Techniques

Figure 1 shows the ASIC/IC adoption trends for various simulation-based techniques from 2007 through 2014, which include code coverage, assertions, functional coverage, and constrained-random simulation.


Figure 1. ASIC/IC Dynamic Verification Technology Adoption Trends

One observation from these adoption trends is that the electronic design industry is maturing its verification processes. This maturity is likely due to the growing complexity of designs as discussed in the previous section. Another observation is that constrained-random simulation adoption appears to be leveling off. This trend is likely due to the scaling limitations of constrained-random simulation. This technique generally works well at the IP block or subsystem level in simulation, but does not scale to the entire SoC integration level.

ASIC/IC Static Verification Techniques

Figure 2 shows the ASIC/IC adoption trends for formal property checking (e.g., model checking), as well as automatic formal applications (e.g., SoC integration connectivity checking, deadlock detection, X semantic safety checks, coverage reachability analysis, and many other properties that can be automatically extracted and then formally proven). Formal property checking traditionally has been a high-effort process requiring specialized skills and expertise. However, the recent emergence of automatic formal applications provides narrowly focused solutions that do not require specialized skills to adopt. While formal property checking adoption experienced only incremental growth between 2012 and 2014, adoption of automatic formal applications increased by 62 percent. In general, formal solutions (i.e., formal property checking combined with automatic formal applications) are one of the fastest growing segments in functional verification.


Figure 2. ASIC/IC Formal Technology Adoption
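To make “formal property checking” concrete for readers who have not used it, below is the flavor of SystemVerilog assertion such tools prove or disprove exhaustively; the signal names are invented for illustration.

    // The kind of property a formal tool proves exhaustively (signals invented):
    // every request must be granted within 8 cycles, except across reset.
    property req_gets_grant;
      @(posedge clk) disable iff (!rst_n)
        req |-> ##[1:8] gnt;
    endproperty
    assert property (req_gets_grant);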

Emulation and FPGA Prototyping

Historically, the simulation market has depended on processor frequency scaling as one means of continual improvement in simulation performance. However, as processor frequency scaling levels off, simulation-based techniques are unable to keep up with today’s growing complexity. This is particularly true when simulating large designs that include both software and embedded processor core models. Hence, acceleration techniques are now required to extend ASIC/IC verification performance for very large designs. In fact, emulation and FPGA prototyping have become key platforms for SoC integration verification where both hardware and software are integrated into a system for the first time. In addition to SoC verification, emulation and FPGA prototyping are also used today as a platform for software development.

Today, 35 percent of the industry has adopted emulation, while 33 percent of the industry has adopted FPGA prototyping. Figure 3 describes various reasons why projects are using these techniques. You might note that the results do not sum to 100 percent since multiple answers were accepted from each study participant. Also, we are unable to show trend analysis here since previous studies did not examine this aspect of functional verification.


Figure 3. Why Was Emulation or FPGA Prototyping Used?

Figure 4 partitions the data for emulation and FPGA prototyping adoption by design size: less than 5M gates, 5M to 80M gates, and greater than 80M gates. Notice that the adoption of emulation continues to increase as design sizes increase, while the adoption of FPGA prototyping rapidly drops off beyond 80M gates. The drop-off point is actually more likely around 40M gates or so, since this is the average capacity limit of many of today’s FPGAs. This graph illustrates one of the problems with adopting FPGA prototyping for very large designs: the increased engineering effort required to partition designs across multiple FPGAs. However, better FPGA partitioning solutions are now emerging from EDA to address these challenges, as are better FPGA debugging solutions for today’s lab visibility challenges. Hence, I anticipate seeing increased adoption of FPGA prototyping for larger gate counts as time goes forward.


Figure 4. Emulation and FPGA Prototyping Adoption by Design Size

In my next blog (click here) I plan to discuss various ASIC/IC language and library adoption trends.

Quick links to the 2014 Wilson Research Group Study results


25 June, 2015

There are intrinsic limitations in the current approach to estimating dynamic power consumption. Briefly, the approach is a file-based flow with two steps. First, a simulator or emulator tracks the switching activity, either cumulatively for the entire run in a switching activity interchange format (SAIF) file, or cycle by cycle for each signal in a signal database file such as FSDB or VCD. Then a power estimation tool, fed the SAIF file, calculates the average power consumption of the whole circuit, or, fed the FSDB file, computes the peak power in time and space across the design.


Figure 1: Conventional power analysis is grounded in the two-step, file-based approach.
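As a reminder of what step one looks like in practice, switching activity is commonly captured with the standard IEEE 1800 dump tasks. SAIF capture is tool-specific, so only the generic VCD path is sketched here, and the hierarchy name tb_top.dut is hypothetical.

    // Step one of the conventional flow: capture switching activity to a VCD
    // file using the standard dump system tasks.
    initial begin
      $dumpfile("dut_activity.vcd");
      $dumpvars(0, tb_top.dut);   // 0 = dump everything below the DUT instance
      #1_000_000;                 // keep the window small -- these files grow fast
      $dumpoff;
    end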

The method may be acceptable when the design under test (DUT) is relatively small, in the ballpark of a few million gates or less, and the analysis is performed within a limited time window of up to a million or so clock cycles. Such time windows are typical when the DUT is tested with adaptive functional testbenches.

When applied to modern, large SoC designs of tens or hundreds of millions of gates executing embedded software – booting an operating system and running application programs that require billions of cycles – three problems defeat the conventional approach:

  1. The sizes of SAIF and, even more so, FSDB/VCD files become massive and unmanageable
  2. The file generation process slows to a crawl that extends to hours, possibly exceeding a day
  3. File loading into a power estimation tool extends to several days or more than a week

New software for the Veloce emulation platform eliminates the core problems affecting the conventional approach to power estimation. It replaces the two-step, file-based flow by tightly integrating the emulator with the power analysis tool.

In this new approach, an Activity Plot maps in one simple chart the global design switching activity over time as it occurs, from booting an OS through running live applications.

Chart 1: The Activity Plot identifies focus areas over long runs.

It identifies time frames of high switching activity that may pose power threats to the design. While such a chart is not unique to this flow, its creation is an order of magnitude faster than the generation of file-based power charts. As a data point, Veloce takes 15 minutes to generate an Activity Plot of a 100-million-gate design over 75 million design clock cycles. Power analysis tools could consume more than a week to generate similar information; moreover, they may not be able to handle such a large volume of data.

Of course, the next questions are “where” in the DUT are those peaks happening, and “what” is causing them? This is addressed by replacing the file-based approach with a Dynamic Read Waveform API for the signal data.

Dynamic Read Waveform API Flow

Once high-switching activity time frames are identified at the top level of the design, the design team can zoom into those frames. Users can dig deep into the hierarchy of the design and the embedded software to uncover the main source of such high-switching activity.

The Dynamic Read Waveform API replaces the cumbersome SAIF/FSDB/VCD file generation process by live-streaming switching data from the emulator into the power analysis tool. All operations run concurrently: the emulator executes the SoC design and captures switching data while the power analysis tool reads that data and generates power numbers. The net effect is a jump in overall performance when booting an OS and running real applications.

As a side benefit, the Dynamic Read Waveform API also delivers improved accuracy compared to SAIF-based average-power flows, because conditional controls on switching are incorporated automatically.

The bottom line is that the Activity Plot and the Dynamic Read Waveform API enable power analysis and power exploration at the system level that are not possible with a file-based flow.


5 November, 2014

Between 2006 and 2014, the average number of IPs integrated into an advanced SoC increased from about 30 to over 120. In the same period, the average number of embedded processors found in an advanced SoC increased from one to as many as 20. However, increased design size is only one dimension of the growing verification complexity challenge. Beyond this growing-functionality phenomenon are new layers of requirements that must be verified. Many of these verification requirements did not exist ten years ago, such as multiple asynchronous clock domains, interacting power domains, security domains, and complex HW/SW dependencies. Add all these challenges together, and you have the perfect storm brewing.

It’s not just the challenges in design and verification that have been changing, of course. New technologies have been developed to address emerging verification challenges. For example, new automated ways of applying formal verification have been developed that allow non-experts to take advantage of its significant benefits. New technologies for stimulus generation have also been developed that allow verification engineers to develop complex stimulus scenarios 10x more efficiently than with directed tests and to execute those tests 10x more efficiently than with pure-random generation.

It’s not just technology, of course. Along with new technologies, new methodologies are needed to make adoption of new technologies efficient and repeatable. The UVM is one example of these new methodologies that make it easier to build complex and modular testbench environments by enabling reuse – both of verification components and knowledge.
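Reuse in UVM hinges largely on the factory: an environment written once can be retargeted from a test without editing the environment itself. A minimal sketch of the idea, with invented class names:

    // Minimal factory sketch: the env creates base_driver via type_id, so a
    // test can swap in a derived driver without touching the env source.
    class base_driver extends uvm_driver #(uvm_sequence_item);
      `uvm_component_utils(base_driver)
      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction
    endclass

    class fast_driver extends base_driver;   // project-specific specialization
      `uvm_component_utils(fast_driver)
      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction
    endclass

    // In the test's build_phase:
    //   base_driver::type_id::set_type_override(fast_driver::get_type());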

The Verification Academy website provides great resources for learning about new technologies and methodologies that make verification more effective and efficient. This year, we tried something new and took Verification Academy on the road with live events in Austin, Santa Clara, and Denver. It was great to see so many verification engineers and managers attending to learn about new verification techniques and share their experiences applying these techniques with their colleagues.


If you weren’t able to attend one of the live events – or if you did attend and really want to see a particular session again – you’re in luck. The presentations from the Verification Academy Live seminars are now available on the Verification Academy site:

  • Navigating the Perfect Storm: New School Verification Solutions
  • New School Coverage Closure
  • New School Connectivity Checking
  • New School Stimulus Generation Techniques
  • New School Thinking for Fast and Efficient Verification using EZ-VIP
  • Verification and Debug: Old School Meets New School
  • New Low Power Verification Techniques
  • Establishing a company-wide verification reuse library with UVM
  • Full SoC Emulation from Device Drivers to Peripheral Interfaces

You can find all the sessions via the following link:


25 February, 2014

As DVCon expands, we at Mentor Graphics have grown our sponsored sessions as well.  Would you expect less?

In DVCon’s recent past, it was a tradition for the North American SystemC User Group (NASCUG) to sponsor a day of activity before the official start of the conference.  When OSCI merged with Accellera, the day before the official conference start grew to become Accellera Day, with a broader set of meetings and activities covering many of Accellera’s standards.  This has all grown into a more official part of the DVCon program.  On Monday at DVCon – or Accellera Day, as many still call it – the tradeshow now joins in opening.  I covered this in detail in an earlier blog, so I won’t repeat myself now.

The pre-conference education and meet-up to discuss the latest in standards development is joined by an end-of-conference tutorial series that has expanded from three parallel sessions to four.  Instead of the one tutorial we at Mentor Graphics would otherwise sponsor at DVCon, we will offer two in this expanded series.  Given the impact verification has on design, it seems only right that more time be devoted to topics that address it; one half-day tutorial is just too short to give the subject its due respect.

The two Mentor Graphics sponsored tutorials at DVCon, run back to back, will devote a day to exploring the application of current verification technology by us and by users like you.  If you are already attending DVCon, you are making your tutorial selections now.  And for those who might only be interested in the tutorials themselves, DVCon offers a tutorials-only package ($145/tutorial).  Mentor’s two tutorials are:

The first tutorial references “smooth sailing,” not because this will be a “no-pirate zone” – although I can tell you that, since International Talk Like a Pirate Day is in late September, one won’t have to worry about a morning of pirate talk! [Interesting fun fact: Mentor Graphics’ headquarters in Wilsonville, OR USA is a short 50 miles (~80 km) north of the creators of this parodic holiday.]  The smooth sailing comes from the ability to easily use multiple engines – simulation, formal, emulation and FPGA prototyping – to address your block- to system-level verification needs.

The second tutorial is all about formal.  Or, to put it more colloquially, we will answer the question: Whatsup with formal?  No, I doubt the tutorial will find more slang terms for formal technology being used and created.  But it will certainly look at more focused applications of formal technology.  As a pioneer in focused formal applications (like clock-domain crossing checking), we have seen these apps greatly simplify use and expand access to the technology for verification teams, with RTL design checks, X-state verification, and more joining the list.  Maybe we should ask Whatsapp with formal!  But wait!  That slang question is already taken – and Facebook affirmed ownership with its recent $19B purchase.  Oh well, I lament.  Join me at this tutorial and we can explore something suitable and not yet taken as a replacement.  I can’t think of a better way to close DVCon than to see if we can invent another $19B term (or app).


4 October, 2013

We are truly living in the age of SoC design, where 78 percent of all designs today contain one or more embedded processors.  In fact, 56 percent of all designs contain two or more embedded processors, which brings a whole new level of verification challenges—requiring unique solutions.

A great example of this comes from STMicroelectronics, who recently shared their experience and solution in addressing verification challenges driven by rising complexity. In 2012, STMicroelectronics began a pilot project to build what it called the Eagle Reference Design, or ERD. The goal was to see if it would be possible to stitch together three ARM products – a Cortex-A15, a Cortex-A7 and a DMC-400 – into one highly flexible platform, one that customers might eventually be able to tweak based on nothing more than an XML description of the system.

Engineers at STMicroelectronics sought to understand and benchmark the Eagle Reference Design. To speed this benchmarking along, they wanted a verification environment that would link software-based simulation and hardware-based emulation in a common flow.

Their solution was unique, and their story is worth reading. They first built a simulation testbench that relied heavily on verification IP (VIP). Next, the team connected this testbench to a Veloce emulation system via TestBench XPress (TBX) co-modeling software. Running verification required separating the design code into two domains: synthesizable code, including all RTL, to run on the emulator, and all other modules to run in the HDL portion of the environment on the simulator (which is connected to the emulator). Throughout the project, the team worked closely with Mentor Graphics to fine-tune the new co-emulation verification environment, which requires that all SoC components be mapped exactly the same way in simulation and emulation.
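That synthesizable/behavioral split is commonly realized as a dual-top arrangement. The sketch below illustrates the idea only – it is not ST’s actual code, and the module and test names are invented.

    // Dual-top sketch of the split (illustrative only -- not ST's code).
    // Everything under hdl_top must be synthesizable and runs on the emulator;
    // hvl_top holds the untimed class-based testbench and runs on the simulator.
    module hdl_top;
      soc_rtl dut ();                       // synthesizable design (name invented)
      eth_bfm bfm (/* DUT port hookup */);  // synthesizable transactor BFM
    endmodule

    module hvl_top;
      import uvm_pkg::*;
      initial run_test("erd_benchmark_test");  // test name invented
    endmodule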

Because the reference design was not bound to any particular project, the main goal was not to arrive at the complete verification of the design but rather to do performance analysis and establish verification methodologies and techniques that would work in the future. In this they succeeded, agreeing that when they eventually try this sort of combined approach on a real project, they will be able to port the verification environment to the emulator more or less seamlessly.

This is a great success story worth reading on how STMicroelectronics combined Questa simulation, Mentor verification IP (VIP), and Veloce emulation to speed up their benchmarking verification process. Check out the full story here!

