Posts Tagged ‘ARM’

17 November, 2014

Few verification tasks are more challenging than trying to achieve code coverage goals for a complex system that, by design, has numerous layers of configuration options and modes of operation.  When the verification effort gets underway and the coverage holes start appearing, even the most creative and thorough UVM testbench architect can be bogged down devising new tests – either constrained-random or even highly directed tests — to reach uncovered areas.

At the recent ARM® TechCon, Nguyen Le, a Principal Design Verification Engineer in the Interactive Entertainment Business Unit at Microsoft Corp., documented a real-world case study on this exact situation.  Specifically, in the paper titled “Advanced Verification Management and Coverage Closure Techniques”, Nguyen outlined his initial pain in verification management and improving coverage closure metrics, and how he conquered both these challenges – speeding up his regression run time by 3x, while simultaneously moving the overall coverage needle up to 97%, and saving 4 man-months in the process!  Here are the highlights:

* DUT in question
— SoC with multi-million gate internal IP blocks
— Consumer electronics end-market = very high volume production = very high cost of failure!

* Verification flow
— Constrained-random, coverage-driven approach using UVM, with IP block-level testbenches as well as SoC-level testbenches
— Rigorous testplan requirements tracking, supported by a variety of coverage metrics including functional coverage with SystemVerilog covergroups, assertion coverage with SVA cover directives, and code coverage on statements, branches, expressions, conditions, and FSMs

* Sign-off requirements
— All test requirements tracked through to completion
— 100% functional, assertion and code coverage

* Pain points
— Code coverage: code coverage holes can come from a variety of expected and unforeseen sources: dead code can be due to unused functions in reused IP blocks, to specific configuration settings, or to a bug in the code.  Given the rapid pace of the customer’s development cycle, it’s all too easy for dead code to slip into the DUT due to the frequent changes in the RTL, or due to different interpretations of the spec.  “Unexplainably dead” code coverage areas were manually inspected, and the exclusions for properly unreachable code were manually addressed with the addition of pragmas.  Both procedures were time-consuming and error-prone
— Verification management: the verification cycle and the generated data were managed through manually-maintained scripting.  Optimizing the results display, throughput, and tool control became a growing maintenance burden.

* New automation
— Questa Verification Manager: built around the Unified Coverage Database (UCDB) standard, the tool supports a dynamic verification plan cross-linked with the functional coverage points and code coverage of the DUT.  In this way the dispersed project teams now had a unified view which told them at a glance which tests were contributing the most value, and which areas of the DUT needed more attention.  In parallel, the included administrative features enabled efficient control of large regressions, merging of results, and quick triage of failures.

— Questa CoverCheck: this tool reads code coverage results from simulation in UCDB, and then leverages formal technology under-the-hood to mathematically prove that no stimulus could ever activate the code in question. If it’s OK for a given block of code to be dead due to a particular configuration choice, etc., the user can automatically generate waivers to refine the code coverage results.  Additionally, the tool can identify segments of code that, though difficult to reach, might someday be exercised in silicon. In such cases, CoverCheck helps point the way to testbench enhancements to better reach these parts of the design.
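The core idea behind such formal unreachability analysis can be illustrated on a toy model: exhaustively enumerate every state and input of a small FSM and show that a branch tied off by a configuration constant can never execute. The sketch below is purely illustrative (a hypothetical “deep sleep” mode, not Questa CoverCheck’s actual algorithm):

```python
# Toy illustration of formal unreachability analysis (not Questa
# CoverCheck's actual algorithm): exhaustively enumerate a small FSM
# and show that a branch tied off by a configuration constant is dead.

CFG_DEEP_SLEEP = False  # hypothetical configuration constant, tied off

def next_state(state, req):
    """Tiny FSM: IDLE -> BUSY -> DONE -> IDLE, plus an optional mode."""
    if state == "IDLE":
        if req and CFG_DEEP_SLEEP:
            return "DEEP_SLEEP"   # dead code when CFG_DEEP_SLEEP is False
        return "BUSY" if req else "IDLE"
    if state == "BUSY":
        return "DONE"
    return "IDLE"                 # DONE (or DEEP_SLEEP) returns to IDLE

def reachable_states():
    """Sweep all states and inputs, like a formal reachability proof."""
    seen, frontier = {"IDLE"}, ["IDLE"]
    while frontier:
        state = frontier.pop()
        for req in (False, True):
            nxt = next_state(state, req)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

print("DEEP_SLEEP reachable:", "DEEP_SLEEP" in reachable_states())
```

Because the sweep is exhaustive over the whole state space, a state that never appears in the result is provably unreachable, which is exactly the kind of evidence that justifies a coverage waiver.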

— The above tools used in concert (along with Questasim) enabled a very straightforward coverage score improvement process as follows:
1 – Run full regression and merge the UCDB files
2 – Run Questa CoverCheck with the master UCDB created in (1)
3 – Use CoverCheck to generate exclusions for “legitimate” unreachable holes, and apply said exclusions to the UCDB
4 – Use CoverCheck to generate waveforms for reachable holes, and share these with the testbench developer(s) to refine the stimulus
5 – Report the new & improved coverage results in Verification Manager
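Leaving the tool specifics aside, the arithmetic of steps 1, 3, and 5 can be sketched with a toy model in which a “UCDB” is reduced to a dictionary of per-bin hit counts. The bin names and the two-run regression below are invented for illustration; this is not the actual UCDB API:

```python
# Toy model of steps 1, 3, and 5: a "UCDB" reduced to per-bin hit
# counts.  Bin names and runs are invented; this is not the UCDB API.
def merge(runs):
    """Step 1: union per-bin hit counts across regression runs."""
    merged = {}
    for run in runs:
        for cov_bin, hits in run.items():
            merged[cov_bin] = merged.get(cov_bin, 0) + hits
    return merged

def coverage(merged, exclusions=frozenset()):
    """Steps 3/5: percent of non-excluded bins hit at least once."""
    scored = {b: h for b, h in merged.items() if b not in exclusions}
    hit = sum(1 for h in scored.values() if h > 0)
    return 100.0 * hit / len(scored)

runs = [{"stmt_1": 4, "stmt_2": 0, "cfg_dead": 0},
        {"stmt_1": 1, "stmt_2": 7, "cfg_dead": 0}]
merged = merge(runs)
print(coverage(merged))                           # unreachable bin drags the score down
print(coverage(merged, exclusions={"cfg_dead"}))  # waiver applied: 100.0
```

The point of the sketch is simply that excluding a formally-proven-dead bin raises the score without touching the stimulus, while a reachable hole (step 4) still requires new tests.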

* Results
— Automation with Verification Manager enabled Microsoft to reduce the run-time variation across test sequences from 10x down to a focused 2x.  Additionally, using the coverage reporting to rank and optimize their tests, they increased their regression throughput by 3x!
— With CoverCheck, the Microsoft engineers improved code coverage by 10 – 15% in most hand-coded RTL blocks, saw up to 20% coverage improvement for auto-generated RTL code, and in a matter of hours were able to increase their overall coverage number from 87% to 97%!
— Bottom-line: the customer estimated that they saved 4 man-months on one project with this process

2014 Microsoft presentation at ARM TechCon: CoverCheck ROI

Taking a step back, success stories like this one, where automated, formal-based applications (which require no prior knowledge of formal or assertion-based verification) leverage the exhaustive nature of formal analysis to tame once intractable problems, are becoming more common by the day.  In this case, Mentor’s formal-based CoverCheck is clearly the right tool for this specific verification need, literally filling in the gaps in a traditional UVM testbench verification flow.  Hence, I believe the overall moral of the story is a simple rule of thumb: when you are grappling with a “last mile problem” of unearthing all the unexpected, yet potentially damaging corner cases, consider a formal-based application as the best tool for the job.  Wouldn’t you agree?

Joe Hupcey III

Reference links:

Official paper citation:
Advanced Verification Management and Coverage Closure Techniques, Nguyen Le, Microsoft; Harsh Patel, Roger Sabbagh, Darron May, Josef Derner, Mentor Graphics


30 October, 2013


This week ARM® TechCon® 2013 is being held at the Santa Clara Convention Center from Tuesday October 29 through Thursday October 31st, but don’t worry, there’s nothing to be scared about.  The theme is “Where Intelligence Counts”, and in fact as a platinum sponsor of the event, Mentor Graphics is excited to present no less than ten technical and training sessions about using intelligent technology to design and verify ARM-based designs.

My personal favorite is scheduled for Halloween Day at 1:30pm, where I’ll tell you about a trick that Altera used to shave several months off their schedule, while verifying the functionality and performance of an ARM AXI™ fabric interconnect subsystem.  And the real treat is that they achieved first silicon success as well.  In keeping with the event’s theme, they used something called “intelligent” testbench automation.

And whether you’re designing multi-core designs with AXI fabrics, wireless designs with AMBA® 4 ACE™ extensions, or even enterprise computing systems with ARM’s latest AMBA® 5 CHI™ architecture, these sessions show you how to take advantage of the very latest simulation and formal technology to verify SoC connectivity, ensure correct interconnect functional operation, and even analyze on-chip network performance.

On Tuesday at 10:30am, Gordon Allan described how an intelligent performance analysis solution can leverage the power of an SQL database to analyze and verify interconnect performance in ways that traditional verification techniques cannot.  He showed a wide range of dynamic visual representations produced by SoC regressions that can be quickly and easily manipulated by engineers to verify performance to avoid expensive overdesign.

Right after Gordon’s session, Ping Yeung discussed using intelligent formal verification to automate SoC connectivity, overcoming observability and controllability challenges faced by simulation-only solutions.  Formal verification can examine all possible scenarios exhaustively, verifying on-chip bus connectivity, pin multiplexing of constrained interfaces, connectivity of clock and reset signals, as well as power control and scan test signal connectivity.
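As a toy illustration of the kind of check such connectivity verification performs, one can compare extracted design connections against a connectivity spec, including a pin-mux select condition. All signal and pin names below are invented, and real formal connectivity apps work on RTL rather than Python dictionaries; this only shows the shape of the problem:

```python
# Toy sketch of a connectivity check: compare extracted connections
# against a spec, including a pin-mux select condition.  All names are
# invented; real formal connectivity apps operate on RTL, not dicts.
design = {  # destination pin -> (source pin, condition on mux selects)
    "uart.rx":  ("pad.uart_rx", lambda sel: sel["pin_mux"] == "UART"),
    "spi.miso": ("pad.uart_rx", lambda sel: sel["pin_mux"] == "SPI"),
}

spec = [  # (source, destination, mux settings under which they must connect)
    ("pad.uart_rx", "uart.rx", {"pin_mux": "UART"}),
]

def check(spec, design):
    """Return the spec entries the extracted design fails to satisfy."""
    failures = []
    for src, dst, sel in spec:
        got = design.get(dst)
        if got is None or got[0] != src or not got[1](sel):
            failures.append((src, dst))
    return failures

print(check(spec, design))  # an empty list means the spec is satisfied
```

Formal tools do this exhaustively over every mux setting and every connection, which is why they overcome the observability and controllability limits of simulation-only approaches.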

On Wednesday, Mark Peryer shows how to verify AMBA interconnect performance using intelligent database analysis and intelligent testbench automation for traffic scenario generation.  These techniques enable automatic testbench instrumentation for configurable ARM-based interconnect subsystems, as well as highly-efficient dense, medium, sparse, and varied bus traffic generation that covers even the most difficult to achieve corner-case conditions.

And finally also on Halloween, Andy Meyer offers an intelligent workshop for those that are designing high performance systems with hierarchical and distributed caches, using either ARM’s AMBA 5 CHI architecture or ARM’s AMBA 4 ACE architecture.  He’ll cover topics including how caching works, how to improve caching performance, and how to verify cache coherency.

For more information about these sessions, be sure to visit the ARM TechCon program website.  Or if you miss any of them, and would like to learn about how this intelligent technology can help you verify your ARM designs, don’t be afraid to email me at   Happy Halloween!


15 October, 2013

Low Power Flow Kicks Off Symposium

In the world of electronic design automation, as an idea takes hold and works its way from thought to silicon, numerous tools are used by engineers to help bring a good idea to product fruition.  Standards play a key role in moving design information from high-level concepts into the netlists that can be realized in silicon.  The IEEE Standards Association is holding a Symposium on EDA Interoperability to help members of the electronics/semiconductor design and verification community better understand the landscape of EDA and IP standards and the role they play to address interoperability.

Another key component is the set of programs and business relationships we foster to promote tool connectivity and interoperability.  Questa users rely on the Questa Vanguard Partnership program, which gives their trusted tool and technology partners access to our verification technology so they can craft leading-edge design and verification flows with technology from numerous sources.  If your users want you to connect with Questa, we invite you to explore the benefits of this program.  Even better, join us at the IEEE SA Symposium on EDA Interoperability, where we can also discuss this in person – Register Here!

Event Details
Date: 24 October 2013
Time: 9:00 a.m. – 6:00 p.m. PT
Location: Techmart – 5201 Great America Parkway, Santa Clara, CA 95054-1125
Cost: Free!

One of the more pressing issues in design and verification today is addressing low power.  The IEEE SA Symposium on EDA Interoperability kicks off the morning with its first session on “Interoperability Challenges: Power Management in Silicon.”  The session will feature an opening presentation on the state of standardization by the Vice Chair of the IEEE P1801 Working Group (and Mentor Graphics Verification Architect) as well as two presentations from ARM on the use of the IEEE 1801 (UPF) standard.

11:00 a.m. – 12:00 p.m. Session 1: Interoperability Challenges: Power Management in Silicon
IEEE 1801 Low Power Format: Impact and Opportunities
Erich Marschner, Vice Chair of IEEE P1801 Working Group, Verification Architect, Mentor Graphics
Power Intent Constraints: Using IEEE 1801 to improve the quality of soft IP
Stuart Riches, Project Manager, ARM
Power Intent Verification: Using IEEE 1801 for the verification of the ARM Cortex-A53 processor
Adnan Khan, Senior Engineer, ARM

The event is sponsored by Mentor Graphics and Synopsys and we have made sure the symposium is free to attend.  You just need to register.  There are other great aspects to the event, not just the ability to have a conversation on the state of standards for low power design and verification in the morning.  In fact, the end of the event will take a look at EDA 2020 and what is needed in the future.  This will be a very interactive session that will open the conversation to all attendees.  I can’t wait to learn what you have to share!  See you at the Techmart on the 24th.


26 June, 2013

Design Trends (Continued)

In Part 1 of this series of blogs, I focused on design trends (click here) as identified by the 2012 Wilson Research Group Functional Verification Study (click here). In this blog, I continue presenting the study findings related to design trends, with a focus on embedded processor, DSP, and on-chip bussing trends.

Embedded Processors

In Figure 1, we see the percentage of today’s designs by the number of embedded processor cores. It’s interesting to note that 79 percent of all non-FPGA designs (in green) contain one or more embedded processors and could be classified as SoCs, which are inherently more difficult to verify than designs without embedded processors. Also note that 55 percent of all FPGA designs (in red) contain one or more embedded processors.

Figure 1. Number of embedded processor cores

Figure 2 shows the trends in terms of the number of embedded processor cores for non-FPGA designs. The comparison includes the 2004 Collett study (in dark green), the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green).

Figure 2. Trends: Number of embedded processor cores

For reference, between the 2010 and 2012 Wilson Research Group study, we did not see a significant change in the number of embedded processors for FPGA designs. The results look essentially the same as the red curve in Figure 1.

Another way to look at the data is to calculate the mean number of embedded processors that are being designed in by SoC projects around the world. In Figure 3, you can see the continual rise in the mean number of embedded processor cores, where the mean was about 1.06 in 2004 (in dark green). This mean increased in 2007 (in gray) to 1.46. Then, it increased again in 2010 (in blue) to 2.14. Today (in green) the mean number of embedded processors is 2.25. Of course, this calculation represents the industry average—where some projects are creating designs with many embedded processors, while other projects are creating designs with few or none.
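For clarity, the per-project mean is just a weighted average over the distribution of projects by core count. The sketch below uses an invented distribution purely to show the method; only the calculation itself mirrors the study:

```python
# The per-project mean behind Figure 3 is a weighted average over the
# distribution of projects by embedded-core count.  The distribution
# below is invented for illustration; only the method mirrors the study.
def mean_cores(distribution):
    """distribution maps core count -> fraction of surveyed projects."""
    assert abs(sum(distribution.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(cores * frac for cores, frac in distribution.items())

example = {0: 0.21, 1: 0.30, 2: 0.25, 3: 0.12, 4: 0.12}
print(round(mean_cores(example), 2))  # 1.64 for this invented distribution
```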

It’s also important to note here that the analysis is per project, and it does not represent the number of embedded processors in terms of silicon volume (i.e., production). Some projects might be creating designs that result in high volume, while other projects are creating designs with low volume. 

Figure 3. Trends: Mean number of embedded processor cores

Another interesting way to look at the data is to partition it into design sizes (for example, less than 5M gates, 5M to 20M gates, greater than 20M gates), and then calculate the mean number of embedded processors by design size. The results are shown in Figure 4, and as you would expect, the larger the design, the more embedded processor cores.

Figure 4. Non-FPGA mean embedded processor cores by design size

Platform-based SoC design approaches (i.e., designs containing multiple embedded processor cores with lots of third-party and internally developed IP) have driven the demand for common bus architectures. In Figure 5 we see the percentage of today’s designs by the type of on-chip bus architecture for both FPGA (in red) and non-FPGA (in green) designs.

Figure 5. On-chip bus architecture adoption

Figure 6 shows the trends in terms of on-chip bus architecture adoption for non-FPGA designs. The comparison includes the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). Note that there was about a 250 percent reported increase in non-FPGA design projects using the ARM AMBA bus architecture between the years 2007 and 2012.

Figure 6. Trends: Non-FPGA on-chip bus architecture adoption  

Figure 7 shows the trends in terms of on-chip bus architecture adoption for FPGA designs. The comparison includes the 2010 Wilson Research Group study (in pink), and the 2012 Wilson Research Group study (in red). Note that there was about a 163 percent increase in FPGA design projects using the ARM AMBA bus architecture between the years 2010 and 2012. 

Figure 7. FPGA on-chip bus architecture adoption trends

In Figure 8 we see the percentage of today’s designs by the number of embedded DSP cores for both FPGA designs (in red) and non-FPGA designs (in green).

Figure 8. Number of embedded DSP cores

Figure 9 shows the trends in terms of the number of embedded DSP cores for non-FPGA designs. The comparison includes the 2007 Far West Research study (in grey), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green).


Figure 9. Trends: Number of embedded DSP cores

In my next blog (click here), I’ll present clocking and power trends.


31 October, 2012

Ready for 100 billion “things” connected by the Internet?

The IEEE Standards Association (SA) Corporate Advisory Group (CAG) has been working to bring industry input into the standards development organization on the emerging Internet of Things (IoT) trend that will connect billions of devices with each other.

As you can imagine, the impact will reach from the service structure down to the development of the connected devices themselves, and it will extend to the tools the EDA industry provides to create, verify and test those devices, as well as to the protocols that will need to be in place to facilitate this connectivity.

This past summer, oneM2M was launched to bring together groups dedicated to producing technical specifications for the M2M Service Layer.  The impact on the IEEE, which is responsible for ongoing Internet standardization, is likewise large and not totally known.

I was reminded of the IoT impact this week by ARM’s EVP, Simon Segars.  His ARM TechCon keynote presentation this week noted that the IoT is a merging of our digital and physical worlds.  He also said predictions are that data from smartphones is “exploding at a 100% growth rate a year for the next 4-5 years.”   To make the point even more stunning, Simon shared that Facebook expects 1-2 billion pictures will be taken and uploaded to their website around Halloween 2012.  The good news for those who did not have the time to make it to Santa Clara, CA USA for ARM TechCon is that his presentation has been made available for viewing on YouTube.  You can find it here.

The IoT conversation continues around the globe.

IEEE IoT Workshop: You are invited!

IEEE has restored service to their Internet connection.  However, connection from IEEE staff locations is tentative due to the widespread devastation of Hurricane Sandy in the New Jersey USA area where they live and work.  There may be delays in getting official invitations out for the IoT workshop.  The IEEE workshop on the Internet of Things has been put together in conjunction with several of the CAG member companies, with direct leadership from our STMicroelectronics representative and input from representatives from Broadcom, GE Medical, Ericsson, Qualcomm and others.  The IEEE SA staff and IoT Workshop leadership have asked those who are connected to share workshop information.  I am doing that here.

You are invited to attend and participate in the workshop.  Details on the event are:

Event Description:

The event will feature a combination of keynote speeches, product showcase and panel sessions with the goal to:

  • identify collaboration opportunities and standardization gaps related to IoT;
  • help industry foster the growth of IoT markets;
  • leverage IEEE’s value and platform for IoT industry-wide consensus development; and
  • help industry with the creation of a vibrant IoT ecosystem.

Date: 13 November 2012
Location: Milan, Italy
Fee: Free

Keynotes include:

  • Service Provider’s View of the IoT World (SP)
  • End to End Systems Security (ST)
  • IEEE-SA – Perfect Platform for the New Millennia of Consensus Development

Panel Topics include:

  • GW as an Enabler of the New Services in the IoT World
  • Monetizing Services in the IoT World
  • Security in the IoT World
  • Standards: what we have and what is missing; convergence in the technology world; collaboration opportunities


31 October 2012 4:25 p.m. PDT
Access has been restored.  That was quick! You can now access IoT Workshop details from IEEE directly.
31 October 2012 3:00 p.m. PDT
Due to the impact of Hurricane Sandy, power to IEEE servers has been lost and backup power sources have been depleted.  Access to the IEEE website for more information, registration and additional details is not available at this moment.  The workshop will be held.  If the servers return to the Internet, I will update this notice.  And if their absence appears to be something that will last longer than another day or so, I will update this blog with alternate contact information for those who would like more detailed information on how to register and where to go to attend the event.


20 July, 2012

Live & In-Person at DAC 2012!

Verification Academy, the brainchild of Harry Foster, Chief Verification Scientist at Mentor Graphics, was live from the Design Automation Conference tradeshow floor this year.  Harry is pictured to the right giving an update on his popular verification survey from the DAC tradeshow floor.

The Verification Academy, predominantly a web-based resource, is a popular site for verification information, with more than 11,000 members registered for forum access on topics ranging from OVM/UVM and SystemVerilog to Analog/Mixed-Signal design.  The popular OVM/UVM Cookbook, which used to be available as a print edition, is now a live online resource there as well.  A whole host of educational modules and seminars can be found there too.

If you know about the Verification Academy, you know all about the content mentioned above and that there is much more to be found there.  For those who don’t know as much about it, Harry took a break from being at the Verification Academy booth at DAC to discuss the Verification Academy with Luke Collins, Technology Journalist, Tech Design Forum.  (Flash is required to watch Harry discuss Verification Academy with Luke.)

The Verification Academy at DAC was a great venue to connect in person with other Verification Academy users to discuss standards, methodologies, flows and other industry trends.  Each hour there were short presentations by Verification Academy members that proved to be a popular way to start some interesting conversations.  While we realize not all Verification Academy members were able to attend DAC in person, we know many have expressed an interest in some of the presentations.  Verification Academy “Total Access” members now have access to many of the presentations.






  • Thales Alenia Space
  • Test & Verification Solutions
  • Willamette HDL
  • Sunburst Design
  • Mentor Graphics

Total Access members can also download all the presentations in a .zip file.  Happy reading to all those who were unable to visit us at DAC and thank you to all who were able to stop by and visit.


13 December, 2011

Instant Replay Offers Multiple Views at Any Speed

If you’ve watched any professional sporting event on television lately, you’ve seen the pressure put on referees and umpires.  They have to make split-second decisions in real-time, having viewed ultra-high-speed action just a single time.  But watching at home on television, we get the luxury of viewing multiple replays of events in question in high-definition super-slow-motion, one frame at a time, and even in reverse.  We also get to see many different views of these controversial events, from the front, the back, the side, up close, or far away.  Sometimes it seems there must be twenty different cameras at every sporting event.

Wouldn’t it be nice if you could apply this same principle to your SoC level simulations?  What if you had instant replay from multiple viewing angles in your functional verification toolbox?  It turns out that such a technology indeed exists, and it’s called “Codelink Replay”.

Codelink Replay enables verification engineers to use instant replay with multiple viewing angles to quickly and accurately debug even the most complex SoC level simulation failures.  This is becoming increasingly important: as Harry Foster’s blog series on the 2010 Wilson Research Group Functional Verification Study shows, over half of all new design starts now contain multiple embedded processors.  If you’re responsible for verifying a design with multiple embedded cores such as ARM’s new Cortex A15 and Cortex A7 processors, this technology will have a dramatic impact for you.

Multi-Core SoC Design Verification

Multi-core designs present a whole new level of verification challenges.  Achieving functional coverage of your IP blocks at the RTL level has become merely a pre-requisite now – as they say, “necessary but not sufficient”.  Welcome to the world of SoC level verification, where you use your design’s software as a testbench.  After all, a testbench’s role is to mimic the design’s target environment so as to test its functionality, and how better to accomplish this than to execute the design’s software against its hardware, albeit during simulation?

Some verification teams have already dabbled in this world.   Perhaps you’ve written a handful of tests in C or assembly code, loaded them into memory, initialized your processor, and executed them.  This is indeed the best way to verify SoC level functionality including power optimization management, clocking domain control, bus traffic arbitration schemes, driver-to-peripheral compatibility, and more, as none of these aspects of an SoC design can be appropriately verified at the RTL IP block level.

However, imagine running a software testbench program only to see that the processor stopped executing code two hours into the simulation.  What do you do next?  Debugging “software as a testbench” simulation can be daunting.  Especially when the software developers say “the software is good”, and the hardware designers say “the hardware is fine”.  Until recently, you could count on weeks to debug these types of failures.  And the problem is compounded with today’s SoC designs with multiple processors running software test programs from memory.

This is where Codelink Replay comes in.  It enables you to replay your simulation in slow motion or fast forward, while observing many different views including hardware views (waveforms, CPU register values, program counter, call stack, bus transactions, and four-state logic) and software views (memory, source code, decompiled code, variable values, and output) – all remaining in perfect synchrony, whether you’re playing forward or backward, single-step, slow-motion, or fast speed.  So when your simulation fails, just start at that point in time, and replay backwards to the root of the problem.  It’s non-invasive.  It doesn’t require any modifications to your design or to your tests.
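Conceptually, this style of replay can be modeled as a log of per-timestep “frames”, each holding every view at once, with a cursor that moves in either direction. The sketch below is a hypothetical model of the idea, not Codelink’s implementation:

```python
# Hypothetical model of the replay idea (not Codelink's implementation):
# record synchronized hardware/software "views" as one frame per
# timestep, then step a cursor through them in either direction.
class ReplayLog:
    def __init__(self):
        self.frames = []   # one dict of views per recorded timestep
        self.cursor = -1

    def record(self, **views):
        """Capture all views for the current timestep, e.g. pc=..., regs=..."""
        self.frames.append(views)

    def step(self, delta):
        """Move forward (positive) or backward (negative); views stay in sync."""
        self.cursor = max(0, min(len(self.frames) - 1, self.cursor + delta))
        return self.frames[self.cursor]

log = ReplayLog()
log.record(pc=0x100, src_line=10)
log.record(pc=0x104, src_line=11)
log.record(pc=0x1FC, src_line=99)    # suppose the simulation fails here
last = log.step(len(log.frames))     # jump forward to the failure frame
prev = log.step(-1)                  # then replay one step backward
print(hex(last["pc"]), hex(prev["pc"]))
```

Because every view lives in the same frame, stepping the cursor backward keeps the hardware and software perspectives in lockstep, which is the property that makes root-causing a late failure tractable.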

Debugging SoC Designs Quickly and Accurately

So if you’re under pressure to make fast and accurate decisions when your SoC level tests fail, you can relate to the challenges faced by professional sports referees and umpires.  But with Codelink Replay, you can be assured that there are about 20 different virtual “cameras” tracing and logging your processors during simulation, giving you the same instant replay benefit we get when we watch sporting events on television.  If you’re interested in learning more about this new technology, check out the web seminar at the URL below, which introduces Codelink Replay and shows how it supports the entire ARM family of processors, including even the latest Cortex A-Series, Cortex R-Series, and Cortex M-Series.

