Posts Tagged ‘SoC’

30 October, 2013

MENTOR GRAPHICS AT ARM TECHCON

This week ARM® TechCon® 2013 is being held at the Santa Clara Convention Center from Tuesday, October 29 through Thursday, October 31, but don’t worry, there’s nothing to be scared about.  The theme is “Where Intelligence Counts”, and as a platinum sponsor of the event, Mentor Graphics is excited to present no fewer than ten technical and training sessions about using intelligent technology to design and verify ARM-based designs.

My personal favorite is scheduled for Halloween Day at 1:30pm, where I’ll tell you about a trick that Altera used to shave several months off their schedule, while verifying the functionality and performance of an ARM AXI™ fabric interconnect subsystem.  And the real treat is that they achieved first silicon success as well.  In keeping with the event’s theme, they used something called “intelligent” testbench automation.

And whether you’re designing multi-core designs with AXI fabrics, wireless designs with AMBA® 4 ACE™ extensions, or even enterprise computing systems with ARM’s latest AMBA® 5 CHI™ architecture, these sessions show you how to take advantage of the very latest simulation and formal technology to verify SoC connectivity, ensure correct interconnect functional operation, and even analyze on-chip network performance.

On Tuesday at 10:30am, Gordon Allan described how an intelligent performance analysis solution can leverage the power of an SQL database to analyze and verify interconnect performance in ways that traditional verification techniques cannot.  He showed a wide range of dynamic visual representations produced from SoC regressions that engineers can quickly and easily manipulate to verify performance and avoid expensive overdesign.
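This post doesn’t show what such an analysis looks like under the hood, but the underlying idea – log every interconnect transaction into an SQL database during regression, then mine it interactively – is easy to sketch in Python. The schema and column names below are assumptions for illustration, not the actual tool’s database:

```python
import sqlite3

# Hypothetical schema: one row per completed interconnect transaction,
# logged by the testbench during an SoC regression run.
conn = sqlite3.connect("regression.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS transactions (
        master   TEXT,     -- initiating master (e.g. 'cpu0', 'dma')
        slave    TEXT,     -- target slave (e.g. 'ddr', 'sram')
        start_ns INTEGER,  -- request timestamp
        end_ns   INTEGER,  -- response timestamp
        bytes    INTEGER   -- payload size
    )""")

# The kind of query an engineer can tweak interactively after the regression
# has finished: average latency and rough throughput per master/slave path.
rows = conn.execute("""
    SELECT master,
           slave,
           AVG(end_ns - start_ns)                AS avg_latency_ns,
           SUM(bytes) * 1.0 /
             (MAX(end_ns) - MIN(start_ns) + 1)   AS bytes_per_ns
    FROM transactions
    GROUP BY master, slave
    ORDER BY avg_latency_ns DESC
""").fetchall()

for master, slave, latency, bw in rows:
    print(f"{master} -> {slave}: {latency:.1f} ns avg latency, {bw:.3f} B/ns")
```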

Right after Gordon’s session, Ping Yeung discussed using intelligent formal verification to automate SoC connectivity, overcoming observability and controllability challenges faced by simulation-only solutions.  Formal verification can examine all possible scenarios exhaustively, verifying on-chip bus connectivity, pin multiplexing of constrained interfaces, connectivity of clock and reset signals, as well as power control and scan test signal connectivity.
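The post doesn’t detail how those formal connectivity checks are written, so the Python snippet below is only a toy illustration of the intent: a connectivity specification expressed as data, checked over every pin-mux setting of a made-up pad. The mux model, signal names, and spec are all invented; a real flow hands such a spec to a formal engine rather than enumerating cases in a script.

```python
# Toy illustration only: a real flow hands a connectivity specification to a
# formal engine, which proves it exhaustively rather than running a script.

# Hypothetical pin mux: 'sel' chooses which internal signal drives the pad.
def pad_driver(sel: int) -> str:
    return {0: "uart0_tx", 1: "spi0_mosi", 2: "gpio_out3"}[sel]

# Connectivity spec as data: (mux setting, expected source driving the pad).
connectivity_spec = [
    (0, "uart0_tx"),
    (1, "spi0_mosi"),
    (2, "gpio_out3"),
]

# Check every specified configuration of this (tiny) configuration space.
for sel, expected in connectivity_spec:
    actual = pad_driver(sel)
    assert actual == expected, (
        f"Connectivity violation: sel={sel} drives {actual}, expected {expected}")

print("All specified pad connections hold for every mux setting.")
```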

On Wednesday, Mark Peryer shows how to verify AMBA interconnect performance using intelligent database analysis and intelligent testbench automation for traffic scenario generation.  These techniques enable automatic testbench instrumentation for configurable ARM-based interconnect subsystems, as well as highly efficient generation of dense, medium, sparse, and varied bus traffic that covers even the most difficult-to-achieve corner-case conditions.
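Questa inFact’s graph-based scenario generation is far more sophisticated than any hand-rolled loop, but a rough Python sketch of coverage-driven traffic selection conveys the flavor: cross traffic density with burst type and master, and always exercise an uncovered combination before falling back to random traffic. All dimension names and the testbench hook below are illustrative assumptions.

```python
import itertools
import random

# Hypothetical scenario dimensions for stressing an AMBA interconnect.
densities   = ["dense", "medium", "sparse", "varied"]
burst_types = ["INCR4", "INCR8", "WRAP4", "FIXED"]
masters     = ["cpu0", "cpu1", "dma", "gpu"]

# Every cross combination we want to see at least once (the coverage goal).
uncovered = set(itertools.product(densities, burst_types, masters))

def next_scenario():
    """Prefer a combination that has not been exercised yet; fall back to
    purely random traffic once the cross-coverage goal has been met."""
    if uncovered:
        scenario = random.choice(tuple(uncovered))
        uncovered.discard(scenario)
        return scenario
    return (random.choice(densities),
            random.choice(burst_types),
            random.choice(masters))

# Drive the (hypothetical) testbench until the cross is covered.
count = 0
while uncovered:
    density, burst, master = next_scenario()
    # testbench.start_traffic(master, density=density, burst=burst)  # placeholder
    count += 1

print(f"Reached the coverage goal in {count} generated scenarios.")
```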

And finally, also on Halloween, Andy Meyer offers an intelligent workshop for those who are designing high performance systems with hierarchical and distributed caches, using either ARM’s AMBA 5 CHI architecture or ARM’s AMBA 4 ACE architecture.  He’ll cover topics including how caching works, how to improve caching performance, and how to verify cache coherency.

For more information about these sessions, be sure to visit the ARM TechCon program website.  Or if you miss any of them and would like to learn how this intelligent technology can help you verify your ARM designs, don’t be afraid to email me at mark_olen@mentor.com.  Happy Halloween!


26 June, 2013

Design Trends (Continued)

In Part 1 of this series of blogs, I focused on design trends (click here) as identified by the 2012 Wilson Research Group Functional Verification Study (click here). In this blog, I continue presenting the study findings related to design trends, with a focus on embedded processor, DSP, and on-chip bussing trends.

Embedded Processors

In Figure 1, we see the percentage of today’s designs by the number of embedded processor cores. It’s interesting to note that 79 percent of all non-FPGA designs (in green) contain one or more embedded processors and could be classified as SoCs, which are inherently more difficult to verify than designs without embedded processors. Also note that 55 percent of all FPGA designs (in red) contain one or more embedded processors.

Figure 1. Number of embedded processor cores

Figure 2 shows the trends in terms of the number of embedded processor cores for non-FPGA designs. The comparison includes the 2004 Collett study (in dark green), the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green).

Figure 2. Trends: Number of embedded processor cores

For reference, between the 2010 and 2012 Wilson Research Group study, we did not see a significant change in the number of embedded processors for FPGA designs. The results look essentially the same as the red curve in Figure 1.

Another way to look at the data is to calculate the mean number of embedded processors being designed in by SoC projects around the world. In Figure 3, you can see the continual rise in the mean number of embedded processor cores, where the mean was about 1.06 in 2004 (in dark green). This mean increased in 2007 (in gray) to 1.46. Then, it increased again in 2010 (in blue) to 2.14. Today (in green) the mean number of embedded processors is 2.25. Of course, this calculation represents the industry average, where some projects are creating designs with many embedded processors, while other projects are creating designs with few or none.

It’s also important to note here that the analysis is per project, and it does not represent the number of embedded processors in terms of silicon volume (i.e., production). Some projects might be creating designs that result in high volume, while other projects are creating designs with low volume. 
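For readers who want to see how such a per-project mean falls out of the survey data, the short Python calculation below weights each reported core count by the fraction of projects reporting it. The distribution used here is invented purely for illustration; it is not the study’s actual data.

```python
# Hypothetical survey distribution: fraction of projects reporting N embedded
# processor cores. These values are illustrative, not the study's actual data.
distribution = {0: 0.21, 1: 0.25, 2: 0.24, 3: 0.12, 4: 0.10, 8: 0.08}

assert abs(sum(distribution.values()) - 1.0) < 1e-9  # fractions must sum to 1

# The per-project mean is just the core count weighted by each fraction.
mean_cores = sum(cores * fraction for cores, fraction in distribution.items())
print(f"Mean embedded processor cores per project: {mean_cores:.2f}")
```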

Figure 3. Trends: Mean number of embedded processor cores

Another interesting way to look at the data is to partition it into design sizes (for example, less than 5M gates, 5M to 20M gates, greater than 20M gates), and then calculate the mean number of embedded processors by design size. The results are shown in Figure 4, and as you would expect, the larger the design, the more embedded processor cores.

Figure 4. Non-FPGA mean embedded processor cores by design size

Platform-based SoC design approaches (i.e., designs containing multiple embedded processor cores with lots of third-party and internally developed IP) have driven the demand for common bus architectures. In Figure 5 we see the percentage of today’s designs by the type of on-chip bus architecture for both FPGA (in red) and non-FPGA (in green) designs.

Figure 5. On-chip bus architecture adoption

Figure 6 shows the trends in terms of on-chip bus architecture adoption for non-FPGA designs. The comparison includes the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). Note that there was a reported increase of about 250 percent in non-FPGA design projects using the ARM AMBA bus architecture between 2007 and 2012.

Figure 6. Trends: Non-FPGA on-chip bus architecture adoption  

Figure 7 shows the trends in terms of on-chip bus architecture adoption for FPGA designs. The comparison includes the 2010 Wilson Research Group study (in pink), and the 2012 Wilson Research Group study (in red). Note that there was about a 163 percent increase in FPGA design projects using the ARM AMBA bus architecture between the years 2010 and 2012. 

Figure 7. FPGA on-chip bus architecture adoption trends

In Figure 8 we see the percentage of today’s designs by the number of embedded DSP cores for both FPGA designs (in red) and non-FPGA designs (in green).

Figure 8. Number of embedded DSP cores

Figure 9 shows the trends in terms of the number of embedded DSP cores for non-FPGA designs. The comparison includes the 2007 Far West Research study (in grey), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green).

 

Figure 9. Trends: Number of embedded DSP cores

In my next blog (click here), I’ll present clocking and power trends.


26 July, 2012

A system-level verification engineer once told me that his company consumes over 50% of its emulation capacity debugging failures. According to him, there was just no way around consuming emulators while debugging their SoC design emulation runs. In fact, when failures occur during emulation, verification engineers often turn to live debugging with JTAG interfaces to the Design Under Test. This enables one engineer to debug one problem at a time, while consuming expensive emulation capacity for extended periods. After all, when some of the intricate interactions between system software and design hardware fail, it can take days if not weeks to debug. To say this is painful, slow, and expensive would be an understatement.

Would you be interested in learning about a better alternative for debugging SoC emulation runs? Veloce Codelink offers instant replay capability for emulation. This allows multiple engineers to debug multiple problems at the same time, without consuming any emulation capacity, leaving the emulators to be used where they’re most needed – running more regression tests. And Veloce Codelink is non-invasive – no additional clock cycles are needed to extract emulation data.

If you spend as much time debugging emulation failures as the system-level verification engineer above, Veloce Codelink could double your emulation capacity, too. To learn more about Veloce Codelink’s “virtual emulation” that enables “DVR” control of emulation runs, check out our On-Demand Web Seminar titled “Off-line Debug of Multi-Core SoCs with Veloce Emulation”. In this web seminar you’ll also learn about Veloce Codelink’s “flight data recording” technology, which enables long emulation runs to be debugged without requiring huge amounts of memory to store all of the data.

http://www.mentor.com/products/fv/multimedia/veloce-codelink-web-seminar
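Mentor doesn’t spell out how the flight data recording works in this post, but the general idea – snapshot the state periodically, log only the external inputs in between, and reconstruct any point of interest offline – can be sketched as follows. Everything here (class names, the checkpoint interval, the step_model hook) is a hypothetical illustration, not Codelink’s actual implementation.

```python
# Hypothetical sketch of flight-data-recorder style replay, not Codelink's
# actual implementation: periodic checkpoints bound the memory footprint,
# and any later point is reconstructed offline (no emulator required) by
# re-executing a model forward from the nearest checkpoint.

CHECKPOINT_INTERVAL = 1_000_000  # cycles between snapshots (illustrative)

class FlightRecorder:
    def __init__(self):
        self.checkpoints = {}  # cycle -> full state snapshot
        self.io_log = []       # compact log of external inputs per cycle

    def record(self, cycle, state, inputs):
        """Called while the emulator is running: cheap logging, rare snapshots."""
        if cycle % CHECKPOINT_INTERVAL == 0:
            self.checkpoints[cycle] = dict(state)  # copy, not a reference
        self.io_log.append((cycle, inputs))

    def replay(self, target_cycle, step_model):
        """Rebuild the state at target_cycle offline by stepping a software
        model forward from the nearest earlier checkpoint."""
        base = max(c for c in self.checkpoints if c <= target_cycle)
        state = dict(self.checkpoints[base])
        for cycle, inputs in self.io_log:
            if base < cycle <= target_cycle:
                state = step_model(state, inputs)
        return state
```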


21 February, 2012

Is my car trying to tell me something?

This past Friday was the beginning of a two-day internal functional verification meeting on Intelligent Testbench Automation (iTBA) at Mentor Graphics corporate headquarters.  (Mentor’s iTBA product, Questa inFact, is hot and getting hotter.) After getting to my car to return home at the end of the first day, I was thinking that the large interest in this technology – demonstrated by a standing-room-only training event – has got to be a tipping point indication for iTBA.

I turned my car on.  (Actually, I “pushed” it on as there is no place to put a key to turn anymore.)

Moments after starting my car, a winter storm alert interrupted the music on the radio and displayed two notices.  One I am familiar with, appearing when the temperature falls and snow begins to collect on the mountain passes.  I’m not going to drive in the direction of the snow, so no problem.  The other alert was of grave concern.  It was a tornado watch.  And the tornado watch was not off in some other direction many miles away; it was “0 miles” from me.  I looked up and scanned the horizon: dark black sky in one direction and sun in the other.  I changed the radio channel to a local AM evening drive station, but there was no mention of a tornado watch.  I headed in the direction of the sun.  It seemed the safest direction to head.  But before I did, I snapped a quick picture as proof that I actually read “Tornado Watch” on the car’s navigation screen.

iTBA to the Rescue?

I returned to pondering whether functional verification has simply gotten too big for current techniques, and whether that is why iTBA is going from a nice-to-have to a must-have.

Several years back it was popular to brag about the compute farms & ranches one had.  With 5,000 machines here and another 5,000 machines there, it seemed a sane demonstration of one’s design and verification prowess.  But this gave way to 50,000 multicore machines, and who talks about that with pride?  All such talk is now out of necessity.  And what about the next step?  Who has 500,000 or 5,000,000 on the drawing board or in their data centers?  Looking around, it seems very few admit to more than 100,000, and even fewer have more than 500,000.

Verification may be in crisis, as many will say, but if you hold verification technology constant, it is not merely in crisis, it is on a collision course with disaster.  Addressing this crisis has been the theme of many of Mentor Graphics CEO Wally Rhines’ keynotes at DVCon.  His 2011 keynote was taken to heart by many who attended.  The call to improve the “Velocity of Verification” by several orders of magnitude has been followed by several examples over the past year.

One example was shared several months after DVCon, when Mentor Graphics and TSMC announced we had partnered to validate advanced functional verification technology.  While not all test results at TSMC or our common customer, AppliedMicro, were revealed, one of the slower tests demonstrated the value of iTBA in shortening time-to-coverage by over 100x.  Just days after that announcement, we disclosed that Mentor’s Veloce emulation platform offered a 400x improvement for OVM/UVM-driven verification.

100x and 400x seem like large numbers, but they appear even bigger when you put them into the context of the time being measured.  With current constrained-random techniques, a project that takes 6 weeks of simulator run time to reach 100% closure can reach it in about 10 hours with Questa inFact or about 2.5 hours with Veloce.  Instead of using complex scripts to peek in on a simulation run over the course of a month and a half, a verification team could actually leave work for the day, return the next morning, and have a full, complete, and exhaustive verification run.  And when even faster turnaround time is needed, emulation returns results during the work day.
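The arithmetic behind those turnaround times is easy to check; assuming the 6-week baseline means continuous, around-the-clock simulation, the quoted 100x and 400x factors land almost exactly on the 10-hour and 2.5-hour figures:

```python
# Back-of-the-envelope check of the quoted turnaround times, assuming the
# 6-week baseline means continuous, around-the-clock simulation.
baseline_hours = 6 * 7 * 24           # 6 weeks = 1008 hours

infact_hours = baseline_hours / 100   # ~100x with intelligent testbench automation
veloce_hours = baseline_hours / 400   # ~400x with emulation

print(f"Baseline simulation:   {baseline_hours} hours")
print(f"Questa inFact (100x): ~{infact_hours:.1f} hours")  # about 10 hours
print(f"Veloce (400x):        ~{veloce_hours:.1f} hours")  # about 2.5 hours
```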

SoC Verification: A Balance of Simulation, iTBA & Emulation

Wally’s DVCon 2011 keynote referenced 8 customer results coming from Mentor’s Questa inFact tool.  Many more have since discovered what this technology can do for them, and with each success come requests from others to see what it can do for them.

But changing the “Velocity of SoC Verification” has not rested on one technique alone.  Stop by the Mentor Graphics DVCon booth and we can share with you the advances we have made to address system-level verification since last year.

Crossing The Chasm

Which brings me to the point of the “Tornado Watch.”  As I pondered the iTBA tipping point and how “little things can make big differences,” as described in Malcolm Gladwell’s book, my car must have been channeling Geoffrey Moore of “Crossing The Chasm” fame instead.  That must be why it issued the Tornado Watch.  Could it be that iTBA is set to cross the chasm from early adopters to the early majority?

And thankfully, I don’t think my car is programmed to issue tipping point warnings, nor do I want to see if it can.

In the end, it will be the benefit of hindsight that lets us know whether we are crossing the chasm into the tornado, now or soon.  But for Mentor’s part, full and advanced support of iTBA technology with Questa inFact is ready now, and we are set to cross the chasm into the tornado.  My colleague, Mark Olen, blogs about iTBA here.  If you have not had a chance yet to read his blog on iTBA delivering 10x to 100x faster functional verification, it is worth the time to do so.  You can look for him to give frequent updates on iTBA and comment on the positive impact it has on SoC design and verification teams in the months ahead.

I look forward to seeing you at DVCon.


13 December, 2011

Instant Replay Offers Multiple Views at Any Speed

If you’ve watched any professional sporting event on television lately, you’ve seen the pressure put on referees and umpires.  They have to make split-second decisions in real-time, having viewed ultra-high-speed action just a single time.  But watching at home on television, we get the luxury of viewing multiple replays of events in question in high-definition super-slow-motion, one frame at a time, and even in reverse.  We also get to see many different views of these controversial events, from the front, the back, the side, up close, or far away.  Sometimes it seems there must be twenty different cameras at every sporting event.

Wouldn’t it be nice if you could apply this same principle to your SoC level simulations?  What if you had instant replay from multiple viewing angles in your functional verification toolbox?  It turns out that such a technology indeed exists, and it’s called “Codelink Replay”.

Codelink Replay enables verification engineers to use instant replay with multiple viewing angles to quickly and accurately debug even the most complex SoC level simulation failures.  This is becoming increasingly important, as we see in Harry Foster’s blog series about the 2010 Wilson Research Group Functional Verification Study that over half of all new design starts now contain multiple embedded processors.  If you’re responsible for verifying a design with multiple embedded cores such as ARM’s new Cortex A15 and Cortex A7 processors, this technology will have a dramatic impact for you.

Multi-Core SoC Design Verification

Multi-core designs present a whole new level of verification challenges.  Achieving functional coverage of your IP blocks at the RTL level has become merely a prerequisite now – as they say, “necessary but not sufficient”.  Welcome to the world of SoC level verification, where you use your design’s software as a testbench.  After all, a testbench’s role is to mimic the design’s target environment in order to test its functionality; how better to accomplish this than to execute the design’s software against its hardware, albeit during simulation?

Some verification teams have already dabbled in this world.   Perhaps you’ve written a handful of tests in C or assembly code, loaded them into memory, initialized your processor, and executed them.  This is indeed the best way to verify SoC level functionality including power optimization management, clocking domain control, bus traffic arbitration schemes, driver-to-peripheral compatibility, and more, as none of these aspects of an SoC design can be appropriately verified at the RTL IP block level.

However, imagine running a software testbench program only to see that the processor stopped executing code two hours into the simulation.  What do you do next?  Debugging a “software as a testbench” simulation can be daunting, especially when the software developers say “the software is good” and the hardware designers say “the hardware is fine”.  Until recently, you could count on weeks to debug these types of failures.  And the problem is compounded in today’s SoC designs, with multiple processors running software test programs from memory.

This is where Codelink Replay comes in.  It enables you to replay your simulation in slow motion or fast forward, while observing many different views including hardware views (waveforms, CPU register values, program counter, call stack, bus transactions, and four-state logic) and software views (memory, source code, decompiled code, variable values, and output) – all remaining in perfect synchrony, whether you’re playing forward or backward, single-stepping, in slow motion, or at full speed.  So when your simulation fails, just start at that point in time and replay backwards to the root of the problem.  It’s non-invasive.  It doesn’t require any modifications to your design or to your tests.
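The post doesn’t describe Codelink Replay’s internal data structures, but the reason backward stepping can be as cheap as forward stepping is easy to picture: once every cycle of interest has been logged, all of the views are just projections of whichever trace record you’re currently looking at. The Python sketch below is a hypothetical illustration of that idea, not Codelink’s actual API.

```python
from dataclasses import dataclass, field

# Hypothetical per-cycle record tying the hardware and software views together.
@dataclass
class TraceRecord:
    cycle: int
    pc: int                                      # program counter
    registers: dict = field(default_factory=dict)
    source_line: str = ""                        # source / decompiled view
    bus_activity: list = field(default_factory=list)

class ReplaySession:
    """DVR-style navigation over a recorded trace: every view stays in sync
    because each is just a different projection of the current record."""

    def __init__(self, trace):
        self.trace = trace             # list of TraceRecord, captured once
        self.pos = len(trace) - 1      # start at the failure point

    def current(self):
        return self.trace[self.pos]

    def step_back(self, n=1):
        self.pos = max(self.pos - n, 0)
        return self.current()

    def step_forward(self, n=1):
        self.pos = min(self.pos + n, len(self.trace) - 1)
        return self.current()

# Usage sketch: start at the failure and walk backward toward the root cause.
# session = ReplaySession(recorded_trace)
# record = session.step_back(100)
# print(record.cycle, hex(record.pc), record.source_line)
```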

Debugging SoC Designs Quickly and Accurately

So if you’re under pressure to make fast and accurate decisions when your SoC level tests fail, you can relate to the challenges faced by professional sports referees and umpires.  But with Codelink Replay, you can be assured that there are about 20 different virtual “cameras” tracing and logging your processors during simulation, giving you the same instant replay benefit we get when we watch sporting events on television.  If you’re interested in learning more about this new technology, check out the web seminar at the URL below, which introduces Codelink Replay and shows how it supports the entire ARM family of processors, including the latest Cortex A-Series, Cortex R-Series, and Cortex M-Series.

http://www.mentor.com/products/fv/multimedia/verifying-complex-soc-designs-with-questa-codelink


15 April, 2011

Watch DVCon Co-Located Event Presentations

Two presentations from the second annual SystemC Day at DVCon 2011 are available now.  The first presentation is the keynote by Jim Hogan, serial EDA entrepreneur at Vista Ventures, LLC, and the second is an introduction to the emerging IEEE Std. 1666™ SystemC standard by John Aynsley of Doulos.  SystemC Day brought users together to discuss the current state of the market for ESL design and the pending content of the SystemC standard that is currently in final ballot by the IEEE.

To view the video presentations, you will need to register with the Open SystemC Initiative.

Jim Hogan, Vista Ventures LLC, California, USA
Keynote Presentation: “Navigating the SoC Era”

Abstract: SoCs are becoming ubiquitous in semiconductor development. Further, these SoCs are no longer processor-centric, and they are differentiated through the integration of design elements such as multi-CPU, multi-core, DSP cores, hardware accelerators, peripherals and software.

Industry expert and private investor Jim Hogan will discuss the semiconductor industry’s growing adoption of SoC design, and its reliance on diverse sources of hardware and software IP, developed both internally and externally.

John Aynsley, Doulos Ltd., UK
The New IEEE 1666 SystemC Standard

Abstract: The IEEE SystemC Standard is currently being revised and updated, with the new standard due to be published later in 2011. This new version of the SystemC standard will for the first time include the TLM-1 and TLM-2.0 libraries. Meanwhile, OSCI is working to ensure that the SystemC Proof-of-Concept simulator tracks any changes to the IEEE standard. This presentation will give a concise technical summary of the most important new and revised features in the SystemC standard, will give a behind-the-scenes insight into the rationale behind the changes, and will show examples to illustrate the new features in action.

