Posts Tagged ‘SystemVerilog’

24 November, 2014

SystemVerilog Testbench Debug – Are we having fun yet?

Fun

Debug should be fun. Watching waveforms march by, seeing ERRORS and WARNINGS pop out in a transcript file, tracing drivers back to their source, understanding race conditions between simulators and between source code changes – and my favorite – debugging random stability issues. Fun.

Old School – logfiles and interactive

Or at least it should be fun. It used to be fun. I’d set up my collection of scripts to run tests and examine logfiles. Push the button and go for coffee or go home. The next day I’d examine log files and figure out what happened. Usually I’d have to jump into interactive simulation and debug on the fly. Set some breakpoints and watch what happened. That was then. My tests and RTL were all Verilog. Life was good. I was in control of what was going on and could get my head around it.

New School – logfiles, interactive and class handles

Fast-forward to today. Still have scripts to run tests. Still have log files. Still push the button and get coffee or go home. Still jump into interactive simulation. Still set breakpoints. But now my tests are SystemVerilog class-based – usually UVM. My tests are C code. My tests are constrained random tests. Debug just got harder. I can’t fit the whole testbench + RTL into my head at once. I need help.

Debugging your class based testbench

I prefer to do as much debug as possible in “post-sim” mode. I want to run simulation and capture as much as possible. Then debug my wavefile and source code. What to do about my SystemVerilog class based testbench? Easy. Capture my classes in the wave database. Show them to me in the wave window.

<UVM Testbench class hierarchy window and those same classes in the wave window>

Wave Window

But that’s not possible. Is it? What IS possible?

What? Objects in the wave database? Yes. Objects and their members in the wave database.

Examine the values of class member variables in post-sim mode. Use the waveform window for classes and class member variables just like signals.

What about the handles that are in my classes? Can I chase them to other objects? Yes. Follow class handle “pointers” to other objects – essentially exploring the OBJECT SPACE that existed at THAT time during simulation. But I’m in post sim!

Can I see all the sequence items that hit my driver? Yes. How? Just put the driver “handle” into the wave window and “open” it. You can see the virtual interface handle (if you have one). You can see the transactions that went through the driver (the driver did a ‘get_next_item(t)’ 100,000 times!).

<Transaction handle ‘t’ from the driver in the wave window, with the driver’s virtual interface>

Driver and ‘t’ in Wave Window

In the wave window? Yes. All 100,000 of them? Yes.
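For readers who don’t write UVM drivers every day, here is a minimal sketch of the kind of driver loop described above – the item, interface, and class names are hypothetical, but the get_next_item()/item_done() handshake is the standard UVM pattern, and every ‘t’ it fetches is an object you can inspect post-sim:

import uvm_pkg::*;
`include "uvm_macros.svh"

// Hypothetical interface and sequence item, for illustration only.
interface my_if;
  logic clk;
endinterface

class my_item extends uvm_sequence_item;
  rand bit [31:0] addr;
  `uvm_object_utils(my_item)
  function new(string name = "my_item");
    super.new(name);
  endfunction
endclass

class my_driver extends uvm_driver #(my_item);
  `uvm_component_utils(my_driver)

  virtual my_if vif;  // the virtual interface handle you can open in the wave window

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    my_item t;  // each fetched 't' is one of the transactions seen under the driver
    forever begin
      seq_item_port.get_next_item(t);  // this call may execute 100,000 times in a run
      // ... drive t onto vif here ...
      seq_item_port.item_done();
    end
  endtask
endclass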

Now I’m having fun again. That’s great. I can see what’s going on inside my objects. In post-sim mode.

What’s NOT possible?

Will it babysit? No. One thing at a time.

Are you having fun yet?

Find more details in the Verification Horizons article, Old School vs. New School – Visualizer, and on Verification Academy in the session Verification and Debug: Old School Meets New School.

You can find all the sessions on New School verification techniques via the following link:

https://verificationacademy.com/seminars/academy-live


9 October, 2014

DVCon India, held in September 2014 in Bangalore, built on the Indian SystemC User Group meeting events by adding a Design & Verification track alongside the system-level design (ESL) track that has been popular for many years. The main stage played host to the keynote presentations, opening ceremonies and best paper and poster awards.

Several DVCon India keynote presentations, which I will go into in more depth later, touched on the emerging use of virtual platforms in system design and the growing impact India has on design verification. In particular, Mentor’s CEO, Wally Rhines, contrasted Wilson Research survey data on design verification from India and the rest of the world. A strong adoption of SystemVerilog and its popular methodology, the Universal Verification Methodology (UVM), was clear from the survey results Wally shared.

But even beyond SystemVerilog and UVM, the discussion of what could come next anchored the first day of DVCon India: Accellera’s exploration of “portable stimulus.” Accellera has a group exploring whether the industry is ready to start a standards project on this concept. And on the first day, when DVCon India attendees were offered an opportunity to learn about this, the multi-company (Mentor Graphics, Breker & CVC) tutorial on the topic was standing room only.

DVCon Europe – The Stage is Set!

A tutorial slot at DVCon Europe will be devoted to the same topic that was popular at DVCon India. DVCon Europe attendees will find that Tutorial T9, “Creating Portable Tests with a Graph-Based Test Specification,” covers this topic. Technical representatives from Mentor Graphics and Breker will cover aspects of portable stimulus and offer examples of how it can work, and early application of the technology will be covered by a representative from IBM. To cover the topic appropriately, we have modified the presenters listed in the official printed program; full details are available online. The presenters will be, in this order:

  • Holger Horbach, IBM, Germany
  • Frederic Krampac, Breker, France
  • Staffan Berg, Mentor Graphics, Sweden

Please join us for this tutorial and the ensuing conversation and discussion. Verification productivity is a pressing issue, and our ability to better control and create stimulus is a step toward addressing the verification challenges we all face.

One last note: the concept of “portable stimulus” is language agnostic, so no matter which language you use for design and verification, the intention is that this technology will be able to help. The tutorial will help you understand how using a graph-based approach enables the highest degree of verification re-use, from IP block to sub-system to full-system level verification. You will see how it supports verification in SystemVerilog, Verilog, VHDL, C/C++, assembly, and even other non-traditional base languages. And it can also be extended from simulation to emulation to FPGA prototyping, and even silicon validation.

I look forward to seeing you at DVCon Europe in Munich!  And if you have not yet registered, please do so to secure your seat.


11 September, 2014

From those just beginning to study electronic systems design to the practicing engineer, this is the time of year when those taking their first steps to learn VHDL or Verilog/SystemVerilog join the academic “back to school” crowd, and those who use design & verification languages in practice are honing their skills at industry events around the world.

A new academic year has started and the Mentor Higher Education Program (HEP) is well set to help students at more than 1,200 colleges and universities secure access to the same commercial tools and technology used by industry. It is a real win-win when students learn using the same tools they will use after graduating. Early exposure and use mean better-skilled, more productive engineers for employers.

The functional verification team at Mentor Graphics knows that many students would prefer to have a local copy of ModelSim on their personal computer to do their course work and smaller projects as they learn VHDL or Verilog. To help facilitate that, we make the ModelSim PE Student Edition available for download without charge. More than 10,000 students around the world now use ModelSim PE Student Edition, in addition to the commercial grade tools they can access in their university labs.

For the practicing engineer, the Verification Academy offers an online community of more than 25,000 design and verification engineers who exchange ideas on a wide variety of issues across the numerous standards and methodologies. If you are not a member of the Verification Academy, I recommend you join. You will also find the Verification Academy at DAC for one-on-one discussions, and more recently at the Verification Academy Live daylong seminars, which came to Austin and which will be in Santa Clara as of the writing of this blog. There is still time to register for the Santa Clara event, and I invite you to attend.

As design and verification is global, Accellera realized that DVCon should explore the needs of the global design and verification engineer population as well. For 2014, DVCon Europe and DVCon India were born from already successful SystemC User Group events. These user-led conferences are held so engineers in these regions can more easily come together to share experiences and knowledge and ultimately become more productive.

Students and practicing engineers alike can benefit from fee-free access to some of the popular IEEE EDA standards. While I don’t think reading them alone is the ultimate way to educate yourself, they make great companions to daily design and verification activities. Accellera has worked with the IEEE to place several EDA standards in the IEEE Standards Association’s “Get™” program. Almost 16,000 copies of the SystemC standard (1666) and just about the same number of SystemVerilog standards (1800) have been downloaded as of the end of August 2014. Have you downloaded your free copies yet?

The chart below shows the distribution of nearly 45,000 downloads that have occurred since 2010. Stay tuned for breaking news on updates to the EDA standards in the Get program. When updated, they will replace the versions available now. So if you want both the current versions and the ones coming out shortly, you had better download your copies now. If the electronic version is not sufficient for you, the IEEE continues to sell printed versions.

<Chart: distribution of IEEE Get program standards downloads since 2010>

From students to practicing engineers, the season of learning has started.  I encourage you to find your right venue or style of learning and connect with others to advance and improve your design and verification productivity.


7 May, 2014

My Feb. 4 post introduced Mentor Graphics’ three-step FPGA verification process intended to help design teams get out of the reprogrammable lab more effectively. Since then, I’ve engaged FPGA vendors, design managers and engineers to explain the process, paying special attention to the merits and technical detail for injecting automation into any FPGA verification environment, the hallmark of Mentor’s process. The feedback from these conversations helped me to develop a series of technical webinars, now available for free and on-demand. Check them out and let us know what you think in the comments below. My hope is the webinars might serve as a starting point for your own conversations on verification of FPGAs, demand for which seems to continue to grow as process nodes shrink.

Injecting Automation into Verification – FPGA Market Trends

Injecting Automation into Verification – Code Coverage

Injecting Automation into Verification – Assertions

Injecting Automation into Verification – Improved Throughput


10 April, 2014

It’s always fun to take the wraps off of solutions we have been hard at work developing. The global team of Mentor Graphics engineers has spent considerable time and energy to bring the next level of SoC design and verification productivity to what seems to be a never-ending response to Moore’s Law. As silicon feature sizes get smaller, design sizes get larger and the verification problem mushrooms. But you know that. These changes are the constants that drive the need for continued innovation. Our next level of innovation for design verification is embodied in the Mentor Enterprise Verification Platform (EVP), which we recently announced.

Gary Smith recently published Keeping Up with the Emulation Market, laying out the fact that verification platforms are unifying, with emulation now a pivotal element not just for microprocessor design success, but for Multi-Platform Based SoC design success as well. The need to bring software debug into the loop with early hardware concepts is a verification challenge that must be supported too. Pradeep Chakraborty reported on the point made by Anil Gupta of Applied Micro at the UVM 1.2 Day in Bangalore, where Anil implored, “Think about the block, the subsystem and the top.” The point was that software is often overlooked or under-tested prior to committing to hardware implementation, implying that our focus on UVM leaves us verifying no higher than where UVM takes us – and that is not the “top” of the SoC, which mandates that software be part of the verification plan.

Path to Success

With the Mentor EVP, we address these issues. We bring simulation and emulation together in a unified platform. Software debug on conceptual hardware is supported to address verification at the “top.” And Gary’s report concludes by wondering how easy access to emulation will be supported for the masses. That too is solved in the Mentor EVP, using VirtuaLAB that can be hosted in data centers along with the emulator, versus complex, one-off lab setups that lock an emulator to a design and lock out your global team of software developers from collaborating. The Mentor EVP moves to emulation for the masses in a 24×7 world.

With big designs come big data and complex debug tasks. These complex debug tasks are all easily handled by the new Mentor Visualizer Debug Environment, which has native UVM and SystemVerilog class-based debug capabilities and low-power UPF debug support to easily pinpoint design errors. All of this works in both interactive and post-simulation modes for simulation and emulation. To keep the software team productive and get to SoC signoff sooner, the innovative new Veloce OS3 global emulation resourcing technology moves software debug think-time offline to Mentor’s Codelink software debug tool.

And there’s more! But I’ll leave that for you to discover. When you have time, visit us here to learn more about the Mentor Enterprise Verification Platform.

Path to Standards

As the move to support Multi-Platform Based SoC design evolves, so do the standards that underpin it. And given what I’ve reported on the comments of others in this blog – and the understanding from our experience that UVM can only go so far in Multi-Platform Based SoC verification – we concluded the time is right for the industry to explore the need for new standards.

We announced at DVCon 2014 an offer to take our graph-based test specification into an Accellera committee to help move beyond the limitations today’s standards have.  As our investment in tools, technology and platforms continues, we are keenly aware users want their design and verification data to be as portable as possible.  The Accellera user community members echoed the need to discuss portable stimulus that can take you up and down the design hierarchy from block, to subsystem, to system (“top”) and support the concurrent design of hardware and software.

In support of this, Accellera approved the formation of a Portable Stimulus Specification Proposed Working Group (PWG) to study the validity of and need for a portable stimulus specification. To that end, join me at the kickoff meeting to launch this activity on Wednesday, May 7, 2014 from 10:00am to 4:00pm Pacific time at the offices of Mentor Graphics in Fremont, CA USA. If you would like to attend, or would like time on the agenda to discuss technology that would advance the development of a Portable Stimulus Specification or to discuss your objectives/requirements for this group, contact me and I will put you in touch with the meeting organizer. Accellera PWG meetings are open to all and do not require Accellera membership to attend.


4 February, 2014

Marketing teams at FPGA vendors have been busy as the silicon nanometer geometry race escalates. Altera is “delivering the unimaginable” while Xilinx is offering “all programmable SoCs” to design centers. It’s clear that the SoC has become more accessible to a broader market today and that FPGA vendors have staked out a solid technology roadmap for the near future. But do marketing messages surrounding the geometry race affect the day-to-day life of engineers, and if so, how – especially when it comes to verification?

An excellent whitepaper from Altera, “The Breakthrough Advantage for FPGAs with Tri-Gate Technology,” covers Altera’s Stratix 10 FPGAs and SoCs. The paper describes verification challenges in this newly expanded market this way: “Although current generation FPGAs require a rigorous simulation verification methodology rivaling ASICs, the additional lab testing and ability to reprogram FPGAs save substantial manpower investment. The overall cost of ownership must be considered when comparing an FPGA whose component price is higher than an ASIC of similar complexity.” I believe you can use this statement to engage your management in a discussion about better verification processes.

Xilinx also has excellent published technical resources. Its recent UltraScale backgrounder describes how the company is solving the challenges of implementing a design with its reprogrammable silicon. Clearly Xilinx has made an impressive investment to make it easier to implement a design with its FPGA UltraScale products. Improvements include ASIC-like clocking and the annealing of dataflow bottlenecks without compromising performance. Xilinx also describes improvements when using its Vivado design suite, particularly when it comes to in-lab design bring-up.

For other FPGA insights, it’s also worth checking out Electronics Engineering Journal’s recent article “Proliferating Programmability in 2014,” which argues that the long-term future of FPGAs lies in tool flows even though, as Kevin Morris sees it, EDA seems to have abandoned the market. (Kevin, I’m here to tell you you’re wrong.)

Do you think it’s inevitable that your FPGA team will first struggle to make it across the verification finish line before adopting a more process-oriented verification flow like the ASIC market demands? It’s not. I base this conclusion on the many conversations I’ve had with FPGA designers, their managers, sales engineers and many other talented people in this market over the years. Yes, there are significant challenges in FPGA design, but not all of them are technology related. With some emotion, one engineer remarked that debugging the same type of issue over and over in the hardware lab and expecting a different outcome was insane. (He’s right.) Others say they need specific ROI information for their management to even accept their need for change. Still others state that had they only known the solutions I talked about in my seminar a year ago, they would not have spent months and months bringing up their design in the lab.

With my peers here at Mentor Graphics, I have developed a three-step verification flow that includes coverage, assertions and improved throughput. I’ll write about this flow and related issues in the weeks ahead here on this blog. The flow is built on fundamental verification technologies that benefit the broad FPGA market. The goal, in developing the technology and writing about it here, has been to provide practical solutions and help more FPGA teams cross the verification gap.

In the meantime, what are your stories? Are you able to influence your management into adopting advanced technology to aid lab bring-up? Is your management biased towards lower cost and faster implementation (at the expense of verification)? Let me know in the comments or, if you prefer, by e-mail: joe_rodriguez@mentor.com.


4 October, 2013

We are truly living in the age of SoC design, where 78 percent of all designs today contain one or more embedded processors.  In fact, 56 percent of all designs contain two or more embedded processors, which brings a whole new level of verification challenges—requiring unique solutions.

A great example of this is STMicroelectronics, which recently shared its experience and solution in addressing verification challenges due to rising complexity. In 2012, STMicroelectronics began a pilot project to build what it called the Eagle Reference Design, or ERD. The goal was to see if it would be possible to stitch together three ARM products — a Cortex-A15, a Cortex-A7 and a DMC-400 — into one highly flexible platform, one that customers might eventually be able to tweak based on nothing more than an XML description of the system.

Engineers at STMicroelectronics sought to understand and benchmark the Eagle Reference Design. To speed this benchmarking along, they wanted a verification environment that would link software-based simulation and hardware-based emulation in a common flow.

Their solution was unique, and their story is worth reading. They first built a simulation testbench that relied heavily on verification IP (VIP). Next, the team connected this testbench to a Veloce emulation system via TestBench XPress (TBX) co-modeling software. Running verification required separating all blocks of design code into two domains — synthesizable code, including all RTL, running on the emulator; and all other modules running on the testbench portion of the environment on the simulator (which is connected to the emulator). Throughout the project, the team worked closely with Mentor Graphics to fine-tune the new co-emulation verification environment, which requires that all SoC components be mapped exactly the same way in simulation and emulation.
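For readers unfamiliar with this kind of partitioning, co-emulation flows like the one described typically use a “dual-top” arrangement: a synthesizable top that goes to the emulator and an untimed, class-based top that stays on the simulator. Here is a minimal sketch of that split – the interface, DUT and top names are hypothetical, not ST’s:

// Synthesizable domain: everything here must be mappable onto the emulator.
interface bus_if (input bit clk);
  logic [31:0] addr;
  logic        valid;
endinterface

module soc_dut (input bit clk, bus_if bus);
  // placeholder for the synthesizable RTL under test
endmodule

module hdl_top;
  bit clk;
  always #5 clk = ~clk;                // self-contained, emulator-friendly clocking

  bus_if  bus (clk);                   // pin-level interface into the DUT
  soc_dut dut (.clk(clk), .bus(bus));  // all RTL lives on this side
endmodule

// Untimed domain: class-based testbench code that stays on the simulator.
module hvl_top;
  import uvm_pkg::*;
  initial begin
    // Hand the pin-level interface to the class-based environment, then run.
    uvm_config_db #(virtual bus_if)::set(null, "*", "vif", hdl_top.bus);
    run_test();
  end
endmodule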

Because the reference design was not bound to any particular project, the main goal was not to arrive at the complete verification of the design but rather to do performance analysis and establish verification methodologies and techniques that would work in the future. In this they succeeded, agreeing that when they eventually try this sort of combined approach on a real project, they will be able to port the verification environment to the emulator more or less seamlessly.

This is a great success story worth reading on how STMicroelectronics combined Questa simulation, Mentor verification IP (VIP), and Veloce emulation to speed up their benchmarking verification process. Check out the full story here!


19 September, 2013

It’s hard for me to believe that SystemVerilog 3.1 was released just over 10 years ago. The 3.1 version added Object-Oriented Programming features for testbench development to a language predominantly used for RTL design synthesis. Making debug easier was one of the driving forces in unifying testbench and design features into a single language. The semantics for evaluating expressions and executing statements would be the same in the testbench and design. Setting breakpoints and stepping through the code would be seamless. That should have made it easier for either a verification or a design engineer to understand a complete verification environment. Or maybe it would enable either one to at least understand enough of the environment to isolate a particular problem.

Ten years later, I have yet to see that promise fulfilled. Most design engineers still debug their simulations the same way they debug in the lab: they look at waveforms. During simulation, they rarely look at the design source code, and certainly never look at the testbench code (unless it’s just basic pin wiggling, like a waveform). Verification engineers are not much different. They rely on waveform debugging because that is what they were brought up on, and many do not even realize source-level debugging is available to them. However, the test/testbench is more like a piece of software than a hardware description, and there are many things about a modern testbench that are difficult to display in a waveform (e.g., call stacks, local variables, and random constraints). And methodologies like the UVM add many layers of source-level complexity that most users do not have the time to wade through.
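To make that concrete, consider a hypothetical sequence item like the sketch below. Its constraints are resolved anew at each randomize() call, and its fields and method-local variables live inside objects and call stacks, so there is simply no signal for a waveform viewer to draw:

// Hypothetical sequence item: nothing here maps naturally onto a waveform.
class bus_item;
  rand bit [31:0]   addr;
  rand bit [31:0]   data;
  rand int unsigned burst_len;

  // Constraints are resolved at randomize() time; a waveform has no
  // "signal" that explains why a particular value was chosen.
  constraint c_legal { burst_len inside {[1:16]};
                       addr[1:0] == 2'b00; }

  // A local variable in a method exists only on the call stack --
  // another thing a waveform cannot show.
  function int unsigned beats_remaining(int unsigned sent);
    int unsigned remaining = burst_len - sent;
    return remaining;
  endfunction
endclass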

Next week I will be presenting as part of an Industry Special Session during the Forum on Specification & Design Languages (FDL, September 24-26, 2013) that will discuss these issues and try to get more involvement from the academic and user communities to help resolve them. Was combining constructs from many languages into one a success? Can tools provide representations of source-level constructs in an easier graphical form? Hopefully we will not need another decade.


26 August, 2013

Verification Techniques & Technologies Adoption Trends (Continued)

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 10 click here), I presented verification techniques and technologies adoption trends, as identified by the 2012 Wilson Research Group study. In this blog, I continue those discussions and focus on formal verification, acceleration/emulation, and FPGA prototyping.

For years, the term “formal verification” has bugged me since it is quite often misunderstood in the industry. The problem originated back in the mid-1990s with the emergence of formal equivalence checking tools from various EDA vendors, such as Chrysalis Symbolic Design. These tools were introduced to the market as formal verification, which is technically a true statement. However, there is a range of tools available under the category of formal verification, such as formal property checkers and equivalence checkers.

So, what’s the problem? The question related to formal property checking in prior studies could have been misinterpreted by some participants to mean equivalence checking, which reduces the confidence in the results. To prevent this misinterpretation, we decided to change the question in 2012 to clarify that we were talking about the formal verification of assertions and clearly state “not equivalence checking” in the question.
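For readers less familiar with the distinction, here is a hedged sketch (the module and signal names are hypothetical) of the kind of assertion a formal property checker tries to prove exhaustively – a very different activity from equivalence checking, which compares two representations of a design:

// Hypothetical handshake property for a formal property checker.
module handshake_props (input logic clk, rst_n, req, ack);
  // Once req rises, it must stay high until ack arrives within 1 to 4 cycles.
  property p_req_held_until_ack;
    @(posedge clk) disable iff (!rst_n)
      $rose(req) |-> req throughout (##[1:4] ack);
  endproperty
  a_req_held_until_ack: assert property (p_req_held_until_ack);
endmodule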

One other thing we wanted to learn in the formal verification space during this study was what percentage of the market was using auto-formal analysis tools (such as X safety checks, deadlock detection, reset analysis, etc.) versus formal property checking tools. The previous studies never made this distinction.

The fact that we changed the question related to formal property checking while adding in auto-formal in the 2012 study means that there is no meaningful way to compare this study’s formal verification results to the formal verification results from prior studies.

Formal Technology Adoption Trends

Figure 1 shows the adoption percentages for formal property checking and auto-formal techniques.

Figure 1. Formal Technology Adoption

We found that about five percent of the participants who are applying auto-formal techniques are not doing formal property checking. This means that the combined adoption of formal property checking and auto-formal techniques is about 32 percent. As a point of reference, the 2007 Far West Research study found 19 percent adoption for formal verification—and the 2010 study found the adoption at 29 percent. Both the 2007 and 2010 studies included the potential erroneous responses associated with formal equivalence checking, as well as auto-formal usage.

Figure 2 provides a different analysis of the formal property adoption data by partitioning the results by design sizes. The design size partitions are represented as: less than 5M gates, 5M to 20M gates, and greater than 20M gates.

Figure 2. Formal property checking adoption by design size

Acceleration/Emulation & FPGA Prototyping Adoption Trends

The amount of time spent in a simulation regression is an increasing concern for many projects. Intuitively, we tend to think that the design size influences simulation performance. However, there are two equally important factors that must be considered: number of tests in the simulation regression suite and the length of each test in terms of clock cycles.

For example, a project might have a small or moderate-sized design, yet verification of this design requires a long running test (e.g., a video input stream). Hence, in this example, the simulation regression time is influenced by the number of clock cycles required for the test and not necessarily the design size itself.
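To put rough numbers on this (the figures below are made up purely for illustration): a regression of 1,000 tests averaging 10 million clock cycles each totals 10 billion cycles. At a simulator throughput of 10,000 cycles per second, that is about one million seconds – roughly 11.5 days – of serial simulation, regardless of design size. Halve the cycle count per test or the number of tests, and the regression time halves with it.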

Figure 3 shows the number of directed tests created to verify a design in simulation (i.e., the regression suite). The findings obviously varied dramatically from a handful of tests to thousands of tests in a regression suite, depending on the design.

Figure 3. Number of directed tests created to verify a design

The increase in tests in the range of 1-100 is interesting to note. Is this due to the increased adoption of constrained-random verification techniques in the past few years? Or possibly something else is going on here. This line of questioning illustrates the value of reviewing various industry studies. That is, the value lies not so much in the absolute numbers a study presents as in the questions the new data raises.

Next, let’s look at regression times, as shown in Figure 4. As you can see, regression time also varies dramatically, from short regression times for some projects to multiple days for other projects. The median simulation regression time is about 16-24 hours. Here, we also see an increase in shorter regression times. Again, this data raises some interesting questions that are worth exploring.

Figure 4. Simulation regression time trends

One technique that is often used to speed up simulation regressions (whether due to very long tests or lots of tests) is hardware-assisted acceleration or emulation. In addition, FPGA prototyping, while historically used as a platform for software development, has recently served a role in SoC integration validation.

Figure 5 shows the adoption trend for both HW-assisted acceleration/emulation and FPGA prototyping by comparing the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). We see a continual rise in HW acceleration and emulation. This is not only due to the need to verify larger designs, or designs with long test times. HW acceleration and emulation have become the key platform for SoC integration verification, where both hardware and software are integrated into a system for the first time. In addition, emulation is being used increasingly as a software development platform.

Figure 5. HW-assisted acceleration/emulation and FPGA Prototyping trends

Note that the adoption of FPGA prototyping has remained flat (or decreased slightly, as the 2012 data suggests). This might seem counter-intuitive since we previously saw a trend toward more SoC-class designs. So what’s going on?

Figure 6 partitions the data for HW-assisted acceleration/emulation and FPGA prototyping adoption by design size: less than 1M gates, 1M to 20M gates, and greater than 20M gates. Notice that the adoption of HW-assisted acceleration/emulation continues to increase as design sizes increase. However, the adoption of FPGA prototyping rapidly drops off as design sizes increase beyond 20M gates. 

Figure 6. Acceleration/emulation and FPGA prototyping adoption by design size

This graph illustrates one of the problems with FPGA prototyping of very large designs: the increased engineering effort required to partition designs across multiple FPGAs. In fact, what I have found is that FPGA prototyping of very large designs is often a major engineering effort in itself, and many projects are seeking alternative solutions to address this problem.

In my next blog (click here), I will present the final data I plan to share from the Wilson Research Group study. This blog will focus on results in terms of meeting schedules, required spins, and classes of bugs contributing to respins. I will then wrap up this series of blogs in what I call the Epilogue—which will discuss potential gotchas and cautions on interpreting certain aspects of the data and thoughts about how the data could be used constructively.


19 August, 2013

Verification Techniques & Technologies Adoption Trends

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for background on the study, click here).

In my previous blog (Part 9 click here), I focused on some of the 2012 Wilson Research Group findings related to design and verification language and library trends. In this blog, I present verification techniques and technologies adoption trends, as identified by the 2012 Wilson Research Group study.

An interesting trend we are starting to see is that the electronics industry is maturing its functional verification processes, whether designs are targeted at IC/ASIC or FPGA implementations. This blog provides data to support this claim. An interesting question you might ask is, “What is driving this trend?” In some of my earlier blogs (click here for Part 1 and Part 2) I showed that design complexity is increasing in terms of design sizes and number of embedded processors. In addition, I’ve presented trend data that showed an increase in total project time and effort spent in verification (click here for Part 5 and Part 6). My belief is that the industry is being forced to mature its functional verification processes to address increasing complexity and effort.

Simulation Techniques Adoption Trends

Let’s begin by comparing non-FPGA adoption trends related to various simulation techniques from the 2007 Far West Research study (in blue) with the 2012 Wilson Research Group study (in green), as shown in Figure 1.

Figure 1. Simulation-based technique adoption trends for non-FPGA designs

You can see that the study finds the industry increasing its adoption of various functional verification techniques for non-FPGA targeted designs. Clearly the industry is maturing its processes as I previously claimed.

For example, in 2007, the Far West Research Group found that only 48 percent of the industry performed code coverage. This surprised me. After all, HDL-based code coverage is a technology that has been around since the early 1990s. However, I did informally verify the 2007 results through numerous customer visits and discussions. In 2012, we see that industry adoption of code coverage has increased to 70 percent.

In 2007, the Far West Research Group study found that 37 percent of the industry had adopted assertions for use in simulation. In 2012, we find that industry adoption of assertions had increased to 63 percent. I believe that the maturing of the various assertion language standards has contributed to this increased adoption.

In 2007, the Far West Research Group study found that 40 percent of the industry had adopted functional coverage for use in simulation. By 2012, industry adoption of functional coverage had increased to 66 percent. Part of this increase in functional coverage adoption has been driven by the increased adoption of constrained-random simulation, since you really can’t effectively do constrained-random simulation without doing functional coverage.
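As a hedged illustration of why the two go together (the class and field names below are hypothetical): the constraints decide what a test may generate, while the covergroup is the only record of what the randomization actually produced.

// Hypothetical example: constrained-random stimulus plus the functional
// coverage needed to know which random cases were actually exercised.
class pkt;
  rand bit [1:0] kind;
  rand bit [7:0] len;
  constraint c_len { len inside {[4:64]}; }

  covergroup cg;
    coverpoint kind;                          // were all four kinds generated?
    coverpoint len { bins small = {[4:15]};
                     bins large = {[16:64]}; }
    cross kind, len;                          // every kind at every size?
  endgroup

  function new();
    cg = new();                               // embedded covergroup instance
  endfunction
endclass

module tb;
  initial begin
    pkt p = new();
    repeat (1000) begin
      void'(p.randomize());
      p.cg.sample();  // without sampling, we'd never know what the random hit
    end
    $display("functional coverage = %0.2f%%", p.cg.get_coverage());
  end
endmodule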

Now let’s look at FPGA adoption trends related to various simulation techniques, comparing the 2010 Wilson Research Group study (in pink) with the 2012 Wilson Research Group study (in red).

Figure 2. Simulation-based technique adoption trends for FPGA designs

Again, you can clearly see that the industry is increasing its adoption of various functional verification techniques for FPGA targeted designs. This past year I have spent a significant amount of time in discussions with FPGA project managers around the world. During these discussions, most managers mention the drive to improve verification processes within their projects due to the rising complexity of this class of designs. The Wilson Research Group data supports these claims.

In fact, Figure 3 illustrates this maturing trend in the FPGA space, where we saw a 15 percent increase in the adoption of RTL simulation and an 8.5 percent increase in the adoption of code coverage. For complex FPGA designs, the traditional approach of “burn and churn” and debug in the lab is no longer a viable option. Nonetheless, it is still somewhat alarming that 31 percent of the FPGA study participants work on projects that perform no RTL simulation.

Figure 3. FPGA projects maturing their verification processes

Signoff Criteria Trends

We saw earlier in this blog the increased adoption of coverage techniques in the industry. Coverage has become a major component of a project’s verification signoff criteria. In Figure 4, we see how coverage has increased in importance in verification signoff criteria within the past five years, while other decision attributes have declined in terms of importance.

Figure 4. Non-FPGA functional verification signoff criteria trends

We see the same trends for FPGA designs, as shown in Figure 5.

Figure 5. FPGA functional verification signoff criteria trends

In my next blog (click here), I plan to continue the discussion related to adoption of various verification technologies and techniques as identified by the 2012 Wilson Research Group study.

