Posts Tagged ‘SoC’

6 December, 2016

Emulation technology has been around for a long time—more than four decades by my count—and industry observers believe more than ever that it’s a key ingredient in the IC verification strategy, albeit one undergoing a rebirth. The question is, what is this new era of emulation all about, and why is hardware emulation, which remained on the fringes of the IC design ecosystem with a small customer base for many years, now becoming a mainstream tool for system-on-chip (SoC) verification? The answer can be found in the advent of bigger and more complex chips that often contain multiple processor cores and exceed 100 million gates.

In a nutshell, the register-transfer-level (RTL) simulator, a go-to verification tool, is being challenged as design capacity surpasses 100 million gates. Gate counts keep growing on the strength of the semiconductor scaling roadmap, but single-processor simulation performance no longer keeps pace. After all, there is only so much you can do with multi-threading. Nor do hardware-description-language (HDL) simulators running in parallel over PC farms offer a viable alternative, because design-under-test (DUT) behavior is largely sequential in nature.
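To put a number on the multi-threading point (an illustration of my own, not part of the original argument): by Amdahl’s law, if only a fraction p of the simulation workload can execute in parallel across N threads or machines, the achievable speedup is

  $S(N) = \frac{1}{(1-p) + p/N}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1-p}$

so a largely sequential DUT (small p) caps the benefit of even an unbounded PC farm.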

On the other hand, hardware emulation, once the staple of large IC designs like processors and graphics chips, is now becoming a popular verification tool precisely because it runs faster than HDL simulators for full-chip verification. A hardware emulator can verify a large SoC design 10 times, and sometimes far more than 10 times, faster than software simulation.


The view of the key verification modes supported by Mentor Graphics’ Veloce emulator platform.

Hardware emulation has been steadily evolving over the past decade or so: the cost of ownership has come down, while emulation tools have become easier to install and operate. With the changing equation of emulator ROI and SoC design imperatives, an increasing number of IC designers are inclined to use emulation for debugging the hardware and testing hardware/software integration. Moreover, emulation tools are becoming more versatile, ranging from in-circuit emulation (ICE), where physical devices are cabled to the emulator, to more innovative co-emulation solutions like Veloce VirtuaLAB, which virtualizes the interfaces amid the growing functionality of today’s SoC designs.

Software simulation or hardware emulation

A simulator models the behavior of the SoC or system-level design, while an emulator creates an actual implementation of the design. Here, it’s important to note that both software simulators and hardware emulators are employed for design verification, in which a compiler converts the design model, known as the design under test (DUT), into a data structure stored in memory.

However, in the case of simulation, a software algorithm processes the data representing a design model written in a design language, while an emulator processes that data structure using a compute engine built from a processor array. And although hardware emulation is growing beyond a $300 million market, that doesn’t mean it’s the end of the road for HDL simulation tools.


The Veloce emulator, which supports both traditional ICE and transaction-based verification, runs verification of SoCs with multiple protocol interfaces.

HDL-based software simulation will most likely remain the verification engine of choice, especially at the early stages of the verification process—for instance, at the IP and subsystem levels—as it is an economical, easy-to-use, and quick-to-set-up EDA tool. Emulation, on the other hand, will gain traction in larger SoC designs that encompass millions of verification cycles and where hardware bugs are difficult to find. In other words, the two EDA tool markets for SoC and system-level design verification will co-exist for the foreseeable future.


10 October, 2016

ASIC/IC Verification Technology Adoption Trends

This blog is a continuation of a series of blogs related to the 2016 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I focused on the growing ASIC/IC design project resource trends due to rising design complexity. In this blog I examine various verification technology adoption trends.

Dynamic Verification Techniques

Figure 1 shows the ASIC/IC adoption trends for various simulation-based techniques from 2007 through 2016, which include code coverage, assertions, functional coverage, and constrained-random simulation.


Figure 1. ASIC/IC Dynamic Verification Technology Adoption Trends

One observation from these adoption trends is that the ASIC/IC electronic design industry has matured its verification processes. This maturity is likely due to the growing complexity of designs, as discussed in the previous section. Another observation is that constrained-random simulation and code coverage adoption appear to have declined. However, as I mentioned in Part 7 (click here), one interesting observation from this year’s study is that there was a large increase in design projects working on designs of less than 100K gates, perhaps due to an increased number of projects working on smaller sensor chips for IoT devices. Nonetheless, it is important to keep in mind that very small projects typically do not apply advanced verification techniques, which can bias the overall industry verification technique adoption trends in some cases. Hence, in reality the adoption of constrained-random simulation and code coverage has actually leveled off (versus declined) if you exclude these very small devices.

Another reason constrained-random simulation has leveled off is its scaling limitations. Constrained-random simulation generally works well at the IP block or subsystem level, but it does not scale to the full SoC integration level.
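To make this concrete, here is a minimal sketch of block-level constrained-random stimulus (the transaction class, fields, and constraint are hypothetical, not taken from the study). A constraint solver can readily explore the legal combinations for one IP block like this, but composing such constraints into full-SoC scenarios is where the approach runs out of steam:

  class axi_write_txn;
    rand bit [31:0] addr;
    rand bit [3:0]  len;   // burst length minus one, i.e. 1 to 16 beats

    // Keep each burst word-aligned and inside a single 4 KB page --
    // the sort of protocol rule a constraint captures in one line.
    constraint c_legal_burst {
      addr[1:0] == 2'b00;
      addr[11:0] + ((len + 1) * 4) <= 4096;
    }
  endclass

  module block_tb;
    initial begin
      axi_write_txn t = new();
      repeat (1000) begin
        if (!t.randomize())
          $error("randomization failed");
        // drive t onto the block-level interface here
      end
    end
  endmodule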

ASIC/IC Static Verification Techniques

Figure 2 shows the ASIC/IC adoption trends for formal property checking (e.g., model checking), as well as automatic formal applications (e.g., SoC integration connectivity checking, deadlock detection, X semantic safety checks, coverage reachability analysis, and many other properties that can be automatically extracted and then formally proven). Formal property checking has traditionally been a high-effort process requiring specialized skills and expertise. However, the recent emergence of automatic formal applications provides narrowly focused solutions that do not require specialized skills to adopt. While formal property checking adoption experienced incremental growth between 2012 and 2014, the adoption of automatic formal applications increased by 62 percent during that period. What was interesting in 2016 was that formal property checking saw a 31 percent increase since the 2014 study, while automatic formal applications were essentially flat, which I suspect is a temporary phenomenon. Regardless, if you calculate the compounded annual growth rate between 2012 and 2016, you see healthy adoption growth for both, as shown in Figure 2.
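For readers unfamiliar with the distinction, the following sketch (my own, with hypothetical signal names) contrasts the two. The first assertion is the kind of hand-written property a formal property checking flow proves, which requires some SVA expertise; the second illustrates the kind of narrowly focused check, here pin-mux connectivity, that an automatic formal application extracts and proves without user-written properties:

  module soc_formal_checks (
    input logic       clk, rst_n,
    input logic       req, gnt,
    input logic [1:0] pinmux_sel,
    input logic       pad_gpio3_in, uart_rx_i
  );
    localparam logic [1:0] UART_MODE = 2'b01;

    // Hand-written property, typical of formal property checking:
    // every request must be granted within 8 cycles.
    property p_req_grant;
      @(posedge clk) disable iff (!rst_n) req |-> ##[1:8] gnt;
    endproperty
    a_req_grant: assert property (p_req_grant);

    // The kind of check an automatic formal app derives on its own:
    // when the pin mux selects the UART, the pad must reach the IP input.
    a_uart_rx_conn: assert property (
      @(posedge clk) disable iff (!rst_n)
        (pinmux_sel == UART_MODE) |-> (uart_rx_i == pad_gpio3_in)
    );
  endmodule

  // Typically bound to the SoC top level, e.g.:
  //   bind soc_top soc_formal_checks u_chk (.*);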


Figure 2. ASIC/IC Formal Technology Adoption

Emulation and FPGA Prototyping

Historically, the simulation market has depended on processor frequency scaling as one means of continual improvement in simulation performance. However, as processor frequency scaling levels off, simulation-based techniques are unable to keep up with today’s growing complexity. This is particularly true when simulating large designs that include both software and embedded processor core models. Hence, acceleration techniques are now required to extend SoC verification performance for very large designs. In fact, emulation and FPGA prototyping have become key platforms for SoC integration verification where both hardware and software are integrated into a system for the first time. In addition to SoC verification, emulation and FPGA prototyping are also used today as a platform for software development.

Figure 3 describes various reasons why projects are using emulation, while Figure 4 describes why FPGA Prototyping was performed. You might note that the results do not sum to 100 percent since multiple answers were accepted from each study participant.


Figure 3. Why Was Emulation Performed?


Figure 4. Why Was FPGA Prototyping Performed?

Figure 5 partitions the data for emulation and FPGA prototyping adoption by design size: less than 5M gates, 5M to 80M gates, and greater than 80M gates. Notice that the adoption of emulation continues to increase as design sizes increase. However, the adoption of FPGA prototyping levels off as design sizes grow beyond 80M gates. Actually, the level-off point is more likely around 40M gates or so, since this is the average capacity limit of many of today’s FPGAs. This graph illustrates one of the problems with adopting FPGA prototyping for very large designs: the increased engineering effort required to partition designs across multiple FPGAs. However, better FPGA partitioning solutions are now emerging from EDA vendors to address these challenges, and better FPGA debugging solutions are also emerging to address today’s lab visibility challenges. Hence, I anticipate seeing an increase in adoption of FPGA prototyping for larger gate counts as time goes forward.


Figure 5. Emulation and FPGA Prototyping Adoption by Design Size

In my next blog (click here) I plan to discuss various ASIC/IC language and library adoption trends.

Quick links to the 2016 Wilson Research Group Study results


25 September, 2016

ASIC/IC Design Trends

This blog is a continuation of a series of blogs related to the 2016 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I focused on FPGA design and verification trends.  I now will shift the focus of this series of blogs from FPGA trends to ASIC/IC trends.

In this blog, I present trends related to various aspects of design to illustrate growing design complexity. Figure 1 shows the trends from the 2014 and 2016 studies in terms of active ASIC/IC design projects by design size (gates of logic and datapath, excluding memories). Keep in mind that Figure 1 represents design projects, not silicon volume.

One interesting observation from this year’s study is that there appeared to be a large increase in design projects working on designs of less than 100K gates. This was somewhat of a surprise. Perhaps it is due to a number of projects working on smaller sensor chips for IoT devices. At any rate, it is important to keep this in mind when observing some of the verification technology adoption trends. The reason is that these very small projects typically do not apply advanced verification techniques, which can bias the overall industry verification technique adoption trends in some cases.


Figure 1. ASIC/IC Design Sizes

The key takeaway from Figure 1 is that the electronic industry continues to move to larger designs. In fact, 31 percent of today’s design projects are working on designs over 80M gates, while 20 percent of today’s design projects are working on designs over 500M gates.

But increased design size is only one dimension of the growing complexity challenge. What has changed significantly in design since the original Collett studies is the dramatic movement to the SoC class of designs. In 2004, Collett found that 52 percent of design projects were working on designs that include one or more embedded processors. Our 2012 study found that the number of design projects working on designs with embedded processors had increased to 71 percent, and it has remained fairly consistent since then, as shown in Figure 2.

Another interesting trend is the increase in the number of embedded processors in a single SoC. For example, 49 percent of design projects today are working on designs that contain two or more embedded processors, while 16 percent of today’s designs include eight or more embedded processors. SoC-class designs add a new layer of complexity to the verification process that did not exist with traditional non-SoC-class designs, due to hardware and software interactions, new coherency architectures, and the emergence of complex network-on-chip interconnects.


Figure 2. ASIC/IC Designs with Embedded Processors

In addition to the increasing number of embedded processors contained within an SoC, it is not uncommon to find on the order of 120 integrated IP blocks within today’s more advanced SoCs. Many of these IP blocks have their own clocking requirements, which often present new verification challenges due to metastability issues involving signals that cross between multiple asynchronous clock domains.

In Figure 3, we see that 91 percent of all ASIC/IC design projects today are working on designs that have two or more asynchronous clock domains.


Figure 3. Number of Asynchronous Clock Domains in ASIC/IC Designs

One of the challenges with verifying clock-domain-crossing issues is that there is a class of metastability bugs that cannot be demonstrated in simulation on an RTL model. Simulating these issues requires a gate-level model with timing, which is often not available until later stages of the design flow. However, static clock-domain-crossing (CDC) verification tools have emerged that automatically identify clock-domain issues directly on an RTL model at earlier stages of the design flow.
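As a rough illustration (not taken from the study), this is the structure a CDC tool expects to find on a single-bit crossing. The metastability of the first flip-flop cannot be reproduced on an RTL model, which is why the tool checks for the synchronizer structurally rather than by simulation:

  module bit_sync (
    input  logic clk_dst,   // destination clock domain
    input  logic rst_n,
    input  logic d_async,   // launched from an asynchronous clock domain
    output logic q_sync
  );
    logic meta;
    always_ff @(posedge clk_dst or negedge rst_n) begin
      if (!rst_n) begin
        meta   <= 1'b0;
        q_sync <= 1'b0;
      end else begin
        meta   <= d_async;  // first stage may go metastable
        q_sync <= meta;     // second stage gives it a cycle to settle
      end
    end
  endmodule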

In my next blog (click here) I plan to discuss the growing ASIC/IC design project resource trends due to rising design complexity.

Quick links to the 2016 Wilson Research Group Study results


8 August, 2016

This is the first in a series of blogs that presents the findings from our new 2016 Wilson Research Group Functional Verification Study. Similar to my previous 2014 Wilson Research Group functional verification study blogs, I plan to begin this set of blogs with an exclusive focus on FPGA trends. Why? For the following reasons:

  1. Some of the more interesting trends in our 2016 study are related to FPGA designs. The 2016 ASIC/IC functional verification trends are overall fairly flat, which is another indication of a mature market.
  2. Unlike the traditional ASIC/IC market, there have historically been very few published studies on FPGA functional verification trends. We began studying the FPGA market segment in our 2010 study, and we have now collected sufficient data to confidently present industry trends related to this market segment.
  3. Today’s FPGA designs have grown in complexity—and many now resemble complete systems. The task of verifying SoC-class designs is daunting, which has forced many FPGA projects to mature their verification process due to rising complexity. The FPGA-focused data I present in this set of blogs will support this claim.

My plan is to release the ASIC/IC functional verification trends through a set of blogs after I finish presenting the FPGA trends.

Introduction

In 2002 and 2004, Collett International Research, Inc. conducted its well-known ASIC/IC functional verification studies, which provided invaluable insight into the state of the electronic industry and its trends in design and verification at that point in time. However, after the 2004 study, no additional Collett studies were conducted, which left a void in identifying industry trends. To address this dearth of knowledge, Mentor Graphics commissioned five functional-verification-focused studies in 2007, 2010, 2012, 2014, and 2016. These were world-wide, double-blind functional verification studies covering all electronic industry market segments. To our knowledge, the 2014 and 2016 studies are two of the largest functional verification studies ever conducted. This set of blogs presents the findings from our 2016 study and provides invaluable insight into the state of the electronic industry today in terms of both design and verification trends.

Study Background

Our study was modeled after the original 2002 and 2004 Collett International Research, Inc. studies. In other words, we endeavored to preserve the original wording of the Collett questions whenever possible to facilitate trend analysis. To ensure anonymity, we commissioned Wilson Research Group to execute our study. The purpose of preserving anonymity was to prevent biasing the participants’ responses. Furthermore, to ensure that our study would be executed as a double-blind study, the compilation and analysis of the results did not take into account the identity of the participants.

For the purpose of our study we used a multiple sampling frame approach that was constructed from eight independent lists that we acquired. This enabled us to cover all regions of the world—as well as cover all relevant electronic industry market segments. It is important to note that we decided not to include our own account team’s customer list in the sampling frame. This was done in a deliberate attempt to prevent biasing the final results. My next blog in this series will discuss other potential bias concerns when conducting a large industry study and describe what we did to address these concerns.

After cleaning the data to remove inconsistent or random responses (e.g., someone who only answered “a” on all questions), the final sample size consisted of 1703 eligible participants (i.e., n=1703). This was approximately 90% of the size of our 2014 study (i.e., 2014 n=1886). To put this figure in perspective, the famous 2004 Ron Collett International study had a sample size of 201 eligible participants.

Unlike the 2002 and 2004 Collett IC/ASIC functional verification studies, which focused only on the ASIC/IC market segment, our studies were expanded in 2010 to include the FPGA market segment. We have partitioned the analysis of these two different market segments separately, to provide a clear focus on each. One other difference between our studies and the Collett studies is that our study covered all regions of the world, while the original Collett studies were conducted only in North America (US and Canada). We have the ability to compile the results both globally and regionally, but for the purpose of this set of blogs I am presenting only the globally compiled results.

Confidence Interval

All surveys are subject to sampling error. To quantify this error in probabilistic terms, we calculate a confidence interval. For example, we determined the “overall” margin of error for our study to be ±2.36% at a 95% confidence level. In other words, if we were to take repeated samples of size n=1703 from the population, 95% of the resulting estimates would fall within ±2.36% of the true population value, and only 5% would fall outside that range. Obviously, the response rate for each individual question will impact its margin of error. However, all data presented in this blog has a margin of error of less than ±5%, unless otherwise noted.
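For readers who want to reproduce the figure, the standard margin-of-error calculation (my own arithmetic, assuming the usual worst-case proportion p = 0.5 and a 95% z-value of 1.96) is

  $\text{MOE} = z\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \times 0.5}{1703}} \approx 0.024$

which is roughly ±2.4%, in line with the reported ±2.36%.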

Study Participants

This section provides background on the makeup of the study.

Figure 1 shows the percentage of overall study FPGA and ASIC/IC participants by market segment. It is important to note that this figure does not represent silicon volume by market segment.


Figure 1: FPGA and ASIC/IC study participants by market segment

Figure 2 shows the percentage of overall study eligible FPGA and ASIC/IC participants by their job description. An example of an eligible participant would be a self-identified design or verification engineer, or an engineering manager, who is actively working within the electronics industry. Overall, design and verification engineers accounted for 60 percent of the study participants.


Figure 2: FPGA and ASIC/IC study participants by job title

Before I start presenting the findings from our 2016 functional verification study, I plan to discuss in my next blog (click here) general bias concerns associated with all survey-based studies—and what we did to minimize these concerns.

Quick links to the 2016 Wilson Research Group Study results


26 February, 2015

A colleague recently asked me: Has anything changed? Do design teams tape-out nowadays without GLS (Gate-Level Simulation)? And if so, does their silicon actually work?

In his day (and mine), teams prepared in three phases: first, hierarchical gate-level netlist simulation to weed out X-propagation issues; then full chip-level gate simulation (unit delay) to come out of reset, exercise all I/Os, and weed out any remaining X’s; and finally a run with SDF back-annotation on the clocktree-inserted final netlist.
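For the latter two phases, here is a minimal sketch of what such a run can look like (my own illustration; the chip_top netlist, the SDF file name, and the status_ok signal are hypothetical). The same bare-metal bring-up test serves both gate-level passes: compile with unit delays for the second phase, and add the $sdf_annotate call for the timing-accurate final pass:

  `timescale 1ns/1ps
  module gls_tb;
    logic clk = 0, rst_n = 0;
    always #5 clk = ~clk;                  // 100 MHz reference clock

    chip_top dut (.clk(clk), .rst_n(rst_n) /* , pads ... */);

    initial begin
      // Back-annotate post-layout timing onto the final netlist.
      // Omit this call for the unit-delay run.
      $sdf_annotate("chip_top.pnr.sdf", dut);

      #100 rst_n = 1;                      // come out of reset
      #10_000;
      // Crude X check on a critical output after bring-up.
      if ($isunknown(dut.status_ok))
        $error("X detected on status_ok after reset");
      $finish;
    end
  endmodule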


After much discussion about the actual value of GLS and the desirability of eliminating the pain of having to do it from our design flows, my firm conclusion is this:

Yes! Gate-level simulation is still required, at subsystem level and full chip level.

Its usage has been minimized over the years – first by adding LEC (Logical Equivalence Checking) and STA (Static Timing Analysis) to the RTL-to-GDSII design flow in the 90s, and then, in the last decade, by employing static analysis of common failure modes that were traditionally caught during GLS – X-propagation, clock-domain-crossing errors, power-management errors, and ATPG/BIST functionality – using tools like Questa® AutoCheck.

So there should not be any setup/hold or CDC issues remaining by this stage.  However, there are a number of reasons why I would always retain GLS:

  1. Financial prudence.  You tape out to foundry at your own risk, and GLS is the closest representation you can get to the printed design that you can do final due diligence on before you write that check.  Are you willing to risk millions by not doing GLS?
  2. It is the last resort to find any packaging issues that may be masked by use of inaccurate behavioral models higher up the flow, or erroneous STA due to bad false path or multi-cycle path definitions.  Also, simple packaging errors due to inverted enable signals can remain undetected by bad models.
  3. Ensure that the actual bring-up sequence of your first silicon works when it hits the production tester after fabrication.  Teams have found bugs that would have completely bricked the device during the sequence of first power-up, scan test, and then blowing configuration and security fuses on the tester, had they not run a final, accurate bring-up test with all Design-For-Verification modes turned off.
  4. In block-level verification, maybe you are doing a datapath compilation flow for your DSP core which flips pipeline stages around, so normal LEC tools are challenged.  How can you be sure?
  5. The final stages of processing can cause unexpected transformations of your design that may or may not be caught by LEC and STA, e.g. during scan chain insertion, or clocktree insertion, or power island retention/translation cell insertion.  You should not have any new setup/hold problems if the extraction and STA do their job, but what if there are gross errors affecting clock enables, or tool errors, or data-processing errors?  First silicon with stuck clocks is no fun.  Again, why take the risk?  Just one simulation of the bare-metal design, coming up from power-on, wiggling all pads at least once and exercising all test modes at least once, is all that is required.
  6. When you have design deltas done at the physical netlist level: e.g. last minute ECOs (Engineering Change Orders), metal layer fixes, spare gate hookup, you can’t go back to an RTL representation to validate those.  Gates are all you have.
  7. You may need to simulate the production test vectors and burn-in test vectors for your first silicon, across process corners.  Your foundry may insist on this.
  8. Finally, you need to sleep at night while your chip is in the fab!

There are still misconceptions:

  • There is no need to repeat lots of RTL regression tests at gate level.  Don’t do that.  It takes an age to run those tests, so identify the tiny percentage of your regression suite that needs to be rerun on GLS, and make it count.
  • Don’t wait until tapeout week before doing GLS – prepare for it very early in your flow by doing the 3 preparation steps mentioned above as soon as practical, so that all X-pessimism issues are sorted out well before crunch time.
  • The biggest misconception of all: “designs today are too big to simulate.”  Avoid that kind of scaremongering.  Buy a faster computer with more memory.  Spend the right amount of money to offset the risk you are about to undertake when you print a 20nm mask set.

Yes, it is possible to tape out silicon that works without GLS.  But no, you should not consider taking that risk.  And no, there is no justification for viewing GLS as “old school” and just hoping it will go away.

Now, the above is just one opinion, and reflects recent design/verification work I have done with major semiconductor companies.  I anticipate that large designs will be harder and harder to simulate and that we may need to find solutions for gate-level signoff using an emulator.  I also found some interesting recent papers, resources, and opinions – I don’t necessarily agree with all the content, but it makes for interesting reading.

I’d be interested to know what your company does differently nowadays.  Do you sleep at night?

If you are attending DVCon next week, check out some of Mentor’s many presentations and events as described by Harry Foster, and please come and find me in the Mentor Graphics booth (801).  I would be happy to hear about your challenges in Design/Verification/UVM and especially Debug.
Thanks for reading,
Gordon


17 November, 2014

Few verification tasks are more challenging than trying to achieve code coverage goals for a complex system that, by design, has numerous layers of configuration options and modes of operation.  When the verification effort gets underway and the coverage holes start appearing, even the most creative and thorough UVM testbench architect can be bogged down devising new tests – either constrained-random or even highly directed tests — to reach uncovered areas.

At the recent ARM® TechCon, Nguyen Le, a Principal Design Verification Engineer in the Interactive Entertainment Business Unit at Microsoft Corp., documented a real-world case study on this exact situation.  Specifically, in the paper titled “Advanced Verification Management and Coverage Closure Techniques”, Nguyen outlined his initial pain in verification management and in improving coverage closure metrics, and how he conquered both challenges – speeding up his regression run time by 3x, simultaneously moving the overall coverage needle up to 97%, and saving 4 man-months in the process!  Here are the highlights:

* DUT in question
— SoC with multi-million gate internal IP blocks
— Consumer electronics end-market = very high volume production = very high cost of failure!

* Verification flow
— Constrained-random, coverage-driven approach using UVM, with IP block-level testbenches as well as an SoC-level testbench
— Rigorous testplan requirements tracking, supported by a variety of coverage metrics, including functional coverage with SystemVerilog covergroups, assertion coverage with SVA cover properties, and code coverage on statements, branches, expressions, conditions, and FSMs (see the short coverage sketch after the results summary below)

* Sign-off requirements
— All test requirements tracked through to completion
— 100% functional, assertion and code coverage

* Pain points
— Code coverage: code coverage holes can come from a variety of expected and unforeseen sources: dead code can be due to unused functions in reused IP blocks, from specific configuration settings, or a bug in the code.  Given the rapid pace of the customer’s development cycle, it’s all too easy for dead code to slip into the DUT due to the frequent changes in the RTL, or due to different interpretations of the spec.  “Unexplainably dead” code coverage areas were manually inspected, and the exclusions for properly unreachable code were manually addressed with the addition of pragmas.  Both procedures were time consuming and error prone
— Verification management: the verification cycle and the generated data were managed through manually-maintained scripting.  Optimizing the results display, throughput, and tool control became a growing maintenance burden.

* New automation
— Questa Verification Manager: built around the Unified Coverage Database (UCDB) standard, the tool supports a dynamic verification plan cross-linked with the functional coverage points and code coverage of the DUT.  In this way the dispersed project teams now had a unified view which told them at a glance which tests were contributing the most value, and which areas of the DUT needed more attention.  In parallel, the included administrative features enabled efficient control of large regressions, merging of results, and quick triage of failures.

— Questa CoverCheck: this tool reads code coverage results from simulation in UCDB, and then leverages formal technology under the hood to mathematically prove that no stimulus could ever activate the code in question. If it’s OK for a given block of code to be dead due to a particular configuration choice, etc., the user can automatically generate waivers to refine the code coverage results.  Additionally, the tool can also identify segments of code that, though difficult to reach, might someday be exercised in silicon. In such cases, CoverCheck helps point the way to testbench enhancements to better reach these parts of the design.

— The above tools used in concert (along with Questasim) enabled a very straightforward coverage score improvement process as follows:
1 – Run full regression and merge the UCDB files
2 – Run Questa CoverCheck with the master UCDB created in (1)
3 – Use CoverCheck to generate exclusions for “legitimate” unreachable holes, and apply said exclusions to the UCDB
4 – Use CoverCheck to generate waveforms for reachable holes, and share these with the testbench developer(s) to refine the stimulus
5 – Report the new & improved coverage results in Verification Manager

* Results
— Automation with Verification Manager enabled Microsoft to reduce the variation of test sequences from 10x runtime down to a focused 2x variation.  Additionally, using the coverage reporting to rank and optimize their tests, they increased their regression throughput by 3x!
— With CoverCheck, the Microsoft engineers improved code coverage by 10 – 15% in most hand-coded RTL blocks, saw up to 20% coverage improvement for auto-generated RTL code, and in a matter of hours were able to increase their overall coverage number from 87% to 97%!
— Bottom-line: the customer estimated that they saved 4 man-months on one project with this process
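As referenced above, here is a minimal sketch of the coverage constructs such a flow tracks (my own illustration; the signals, bins, and scenario are hypothetical, not Microsoft’s code). The covergroup feeds functional coverage, the cover property feeds assertion coverage, and both roll up into the UCDB alongside code coverage:

  module coverage_example (
    input logic       clk, rst_n,
    input logic       req, gnt,
    input logic [4:0] txn_len,
    input logic       txn_kind   // 0 = read, 1 = write (hypothetical encoding)
  );
    // Functional coverage: a covergroup sampled every clock.
    covergroup cg_burst @(posedge clk);
      cp_len  : coverpoint txn_len  { bins short_b = {[1:4]}; bins long_b = {[5:16]}; }
      cp_kind : coverpoint txn_kind { bins rd = {0}; bins wr = {1}; }
      cx_len_kind : cross cp_len, cp_kind;
    endgroup
    cg_burst cg = new();

    // Assertion coverage: an SVA cover that records a held-off request scenario.
    c_retry: cover property (@(posedge clk) disable iff (!rst_n)
                             (req && !gnt) ##1 (req && !gnt) [*3] ##1 gnt);
  endmodule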

(Figure: CoverCheck ROI summary from Microsoft’s 2014 presentation at ARM TechCon)

Taking a step back, success stories like this one, where automated, formal-based applications leverage the exhaustive nature of formal analysis to tame once intractable problems (and which require no prior knowledge of formal or assertion-based verification), are becoming more common by the day.  In this case, Mentor’s formal-based CoverCheck is clearly the right tool for this specific verification need, literally filling in the gaps in a traditional UVM testbench verification flow.  Hence, I believe the overall moral of the story is a simple rule of thumb: when you are grappling with the “last mile problem” of unearthing all the unexpected, yet potentially damaging, corner cases, consider a formal-based application as the best tool for the job.  Wouldn’t you agree?

Joe Hupcey III

Reference links:

Direct link to the presentation slides: https://verificationacademy.com/Advanced-Verification-Management-Presentation

ARM Techcon 2014 Proceedings: http://www.armtechcon.com/

Official paper citation:
Advanced Verification Management and Coverage Closure Techniques, Nguyen Le, Microsoft; Harsh Patel, Roger Sabbagh, Darron May, Josef Derner, Mentor Graphics


5 November, 2014

Between 2006 and 2014, the average number of IPs integrated into an advanced SoC increased from about 30 to over 120. In the same period, the average number of embedded processors found in an advanced SoC increased from one to as many as 20. However, increased design size is only one dimension of the growing verification complexity challenge. Beyond this growing-functionality phenomenon are new layers of requirements that must be verified. Many of these verification requirements did not exist ten years ago, such as multiple asynchronous clock domains, interacting power domains, security domains, and complex HW/SW dependencies. Add all these challenges together, and you have the perfect storm brewing.

It’s not just the challenges in design and verification that have been changing, of course. New technologies have been developed to address emerging verification challenges. For example, new automated ways of applying formal verification have been developed that allow engineers who are not formal experts to take advantage of the significant benefits of formal verification. New technologies for stimulus generation have also been developed that allow verification engineers to develop complex stimulus scenarios 10x more efficiently than with directed tests and to execute those tests 10x more efficiently than with pure-random generation.

It’s not just technology, of course. Along with new technologies, new methodologies are needed to make adoption of new technologies efficient and repeatable. The UVM is one example of these new methodologies that make it easier to build complex and modular testbench environments by enabling reuse – both of verification components and knowledge.
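As a small illustration of that reuse (a sketch under assumed names, not taken from any particular testbench), a block-level agent written once can be instantiated several times in an SoC-level environment and switched to passive, monitor-only duty through the configuration database:

  import uvm_pkg::*;
  `include "uvm_macros.svh"

  // A block-level agent written once...
  class axi_agent extends uvm_agent;
    `uvm_component_utils(axi_agent)
    function new(string name, uvm_component parent); super.new(name, parent); endfunction
  endclass

  // ...and reused unchanged inside an SoC-level environment.
  class soc_env extends uvm_env;
    `uvm_component_utils(soc_env)
    axi_agent cpu_if_agent, dma_if_agent;
    function new(string name, uvm_component parent); super.new(name, parent); endfunction
    function void build_phase(uvm_phase phase);
      super.build_phase(phase);
      cpu_if_agent = axi_agent::type_id::create("cpu_if_agent", this);
      dma_if_agent = axi_agent::type_id::create("dma_if_agent", this);
      // Reuse the same agent in passive, monitor-only mode at SoC level.
      uvm_config_db#(uvm_active_passive_enum)::set(this, "dma_if_agent", "is_active", UVM_PASSIVE);
    endfunction
  endclass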

The Verification Academy website provides great resources for learning about new technologies and methodologies that make verification more effective and efficient. This year, we tried something new and took Verification Academy on the road with live events in Austin, Santa Clara, and Denver. It was great to see so many verification engineers and managers attending to learn about new verification techniques and share their experiences applying these techniques with their colleagues.


If you weren’t able to attend one of the live events – or if you did attend and really want to see a particular session again – you’re in luck. The presentations from the Verification Academy Live seminars are now available on the Verification Academy site:

  • Navigating the Perfect Storm: New School Verification Solutions
  • New School Coverage Closure
  • New School Connectivity Checking
  • New School Stimulus Generation Techniques
  • New School Thinking for Fast and Efficient Verification using EZ-VIP
  • Verification and Debug: Old School Meets New School
  • New Low Power Verification Techniques
  • Establishing a company-wide verification reuse library with UVM
  • Full SoC Emulation from Device Drivers to Peripheral Interfaces

You can find all the sessions via the following link:

https://verificationacademy.com/seminars/academy-live


30 October, 2013

MENTOR GRAPHICS AT ARM TECHCON

This week ARM® TechCon® 2013 is being held at the Santa Clara Convention Center from Tuesday October 29 through Thursday October 31st, but don’t worry, there’s nothing to be scared about.  The theme is “Where Intelligence Counts”, and in fact as a platinum sponsor of the event, Mentor Graphics is excited to present no less than ten technical and training sessions about using intelligent technology to design and verify ARM-based designs.

My personal favorite is scheduled for Halloween Day at 1:30pm, where I’ll tell you about a trick that Altera used to shave several months off their schedule, while verifying the functionality and performance of an ARM AXI™ fabric interconnect subsystem.  And the real treat is that they achieved first silicon success as well.  In keeping with the event’s theme, they used something called “intelligent” testbench automation.

And whether you’re designing multi-core designs with AXI fabrics, wireless designs with AMBA® 4 ACE™ extensions, or even enterprise computing systems with ARM’s latest AMBA® 5 CHI™ architecture, these sessions show you how to take advantage of the very latest simulation and formal technology to verify SoC connectivity, ensure correct interconnect functional operation, and even analyze on-chip network performance.

On Tuesday at 10:30am, Gordon Allan described how an intelligent performance analysis solution can leverage the power of an SQL database to analyze and verify interconnect performance in ways that traditional verification techniques cannot.  He showed a wide range of dynamic visual representations produced by SoC regressions that can be quickly and easily manipulated by engineers to verify performance to avoid expensive overdesign.

Right after Gordon’s session, Ping Yeung discussed using intelligent formal verification to automate SoC connectivity, overcoming observability and controllability challenges faced by simulation-only solutions.  Formal verification can examine all possible scenarios exhaustively, verifying on-chip bus connectivity, pin multiplexing of constrained interfaces, connectivity of clock and reset signals, as well as power control and scan test signal connectivity.

On Wednesday, Mark Peryer shows how to verify AMBA interconnect performance using intelligent database analysis and intelligent testbench automation for traffic scenario generation.  These techniques enable automatic testbench instrumentation for configurable ARM-based interconnect subsystems, as well as highly-efficient dense, medium, sparse, and varied bus traffic generation that covers even the most difficult to achieve corner-case conditions.

And finally also on Halloween, Andy Meyer offers an intelligent workshop for those that are designing high performance systems with hierarchical and distributed caches, using either ARM’s AMBA 5 CHI architecture or ARM’s AMBA 4 ACE architecture.  He’ll cover topics including how caching works, how to improve caching performance, and how to verify cache coherency.

For more information about these sessions, be sure to visit the ARM TechCon program website.  Or if you miss any of them, and would like to learn about how this intelligent technology can help you verify your ARM designs, don’t be afraid to email me at mark_olen@mentor.com.   Happy Halloween!


26 June, 2013

Design Trends (Continued)

In Part 1 of this series of blogs, I focused on design trends (click here) as identified by the 2012 Wilson Research Group Functional Verification Study (click here). In this blog, I continue presenting the study findings related to design trends, with a focus on embedded processor, DSP, and on-chip bussing trends.

Embedded Processors

In Figure 1, we see the percentage of today’s designs by the number of embedded processor cores. It’s interesting to note that 79 percent of all non-FPGA designs (in green) contain one or more embedded processors and could be classified as SoCs, which are inherently more difficult to verify than designs without embedded processors. Also note that 55 percent of all FPGA designs (in red) contain one or more embedded processors.

Figure 1. Number of embedded processor cores

Figure 2 shows the trends in terms of the number of embedded processor cores for non-FPGA designs. The comparison includes the 2004 Collett study (in dark green), the 2007 Far West Research study (in gray), and the 2010 Wilson Research Group study (in green). 

Figure 2. Trends: Number of embedded processor cores

For reference, between the 2010 and 2012 Wilson Research Group study, we did not see a significant change in the number of embedded processors for FPGA designs. The results look essentially the same as the red curve in Figure 1.

Another way to look at the data is to calculate the mean number of embedded processors being designed in by SoC projects around the world. In Figure 3, you can see the continual rise in the mean number of embedded processor cores: the mean was about 1.06 in 2004 (in dark green), increased to 1.46 in 2007 (in gray), increased again to 2.14 in 2010 (in blue), and today (in green) stands at 2.25. Of course, this calculation represents the industry average—where some projects are creating designs with many embedded processors, while other projects are creating designs with few or none.

It’s also important to note here that the analysis is per project, and it does not represent the number of embedded processors in terms of silicon volume (i.e., production). Some projects might be creating designs that result in high volume, while other projects are creating designs with low volume. 

Figure 3. Trends: Mean number of embedded processor cores

Another interesting way to look at the data is to partition it into design sizes (for example, less than 5M gates, 5M to 20M gates, greater than 20M gates), and then calculate the mean number of embedded processors by design size. The results are shown in Figure 4, and as you would expect, the larger the design, the more embedded processor cores.

Figure 4. Non-FPGA mean embedded processor cores by design size

Platform-based SoC design approaches (i.e., designs containing multiple embedded processor cores with lots of third-party and internally developed IP) have driven the demand for common bus architectures. In Figure 5 we see the percentage of today’s designs by the type of on-chip bus architecture for both FPGA (in red) and non-FPGA (in green) designs.

Figure 5. On-chip bus architecture adoption

Figure 6 shows the trends in terms of on-chip bus architecture adoption for non-FPGA designs. The comparison includes the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). Note that there was about a 250 percent reported increase in non-FPGA design projects using the ARM AMBA bus architecture between the years 2007 and 2012.

Figure 6. Trends: Non-FPGA on-chip bus architecture adoption  

Figure 7 shows the trends in terms of on-chip bus architecture adoption for FPGA designs. The comparison includes the 2010 Wilson Research Group study (in pink), and the 2012 Wilson Research Group study (in red). Note that there was about a 163 percent increase in FPGA design projects using the ARM AMBA bus architecture between the years 2010 and 2012. 

Figure 7. FPGA on-chip bus architecture adoption trends

In Figure 8 we see the percentage of today’s designs by the number of embedded DSP cores for both FPGA designs (in red) and non-FPGA designs (in green).

Figure 8. Number of embedded DSP cores

Figure 9 shows the trends in terms of the number of embedded DSP cores for non-FPGA designs. The comparison includes the 2007 Far West Research study (in grey), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green).

 

Figure 9. Trends: Number of embedded DSP cores

In my next blog (click here), I’ll present clocking and power trends.


26 July, 2012

A system-level verification engineer once told me that his company consumes over 50% of its emulation capacity debugging failures. According to him there was just no way around consuming emulators while debugging their SoC design emulation runs. In fact when failures occur during emulation, verification engineers often turn to live debugging with JTAG interfaces to the Design Under Test. This enables one engineer to debug one problem at a time, while consuming expensive emulation capacity for extended periods of time. After all, when some of the intricate interactions between system software and design hardware fail, it can take days if not weeks to debug. To say this is painful, slow, and expensive would be an understatement.

Would you be interested in learning about a better alternative for debugging SoC emulation runs? Veloce Codelink offers an instant-replay capability for emulation. This allows multiple engineers to debug multiple problems at the same time, without consuming any emulation capacity, leaving the emulators to be used where they’re most needed – running more regression tests. And Veloce Codelink is non-invasive – no additional clock cycles are needed to extract emulation data.

If you spend as much time debugging emulation failures as the system-level verification engineer above, Veloce Codelink could double your emulation capacity, too. To learn more about Veloce Codelink’s “virtual emulation” that enables “DVR” control of emulation runs, check out our On-Demand Web Seminar titled “Off-line Debug of Multi-Core SoCs with Veloce Emulation”. In this web seminar you’ll also learn about Veloce Codelink’s “flight data recording” technology, which enables long emulation runs to be debugged without requiring huge amounts of memory to store all of the data.

http://www.mentor.com/products/fv/multimedia/veloce-codelink-web-seminar

