Posts Tagged ‘SystemVerilog’

10 April, 2014

It’s always fun to take the wraps off of solutions we have been hard at work developing.  The global team of Mentor Graphics engineers has spent considerable time and energy to bring the next level of SoC design and verification productivity to what seems to be a never-ending response to Moore’s Law.  As silicon feature sizes get smaller, design sizes get larger and the verification problem mushrooms.  But you know that.  These changes are the constants that drive the need for continued innovation.  Our next level of innovation for design verification is embodied in the Mentor Enterprise Verification Platform (EVP), which we recently announced.

Gary Smith recently published Keeping Up with the Emulation Market, which lays out the fact that verification platforms are unifying, with emulation now a pivotal element, not just for microprocessor design success, but for Multi-Platform Based SoC design success as well.  The need to bring software debug into the loop with early hardware concepts is a verification challenge that must be supported as well.  Pradeep Chakraborty reported on the point made by Anil Gupta of Applied Micro at the UVM 1.2 Day in Bangalore, where Anil implored: “Think about the block, the subsystem and the top.”  The point was that software is often overlooked or under-tested prior to committing to hardware implementation, implying that a focus on UVM alone leaves us verifying no higher than where UVM takes us – and that is not the “top” of the SoC, which mandates that software be part of the verification plan.

Path to Success

With the Mentor EVP, we address these issues.  We bring simulation and emulation together in a unified platform.  Software debug on conceptual hardware is supported to address verification at the “top.”  And while Gary’s report concludes by wondering how easy access to emulation will be provided for the masses, that too is solved in the Mentor EVP using VirtuaLAB, which can be hosted in data centers along with the emulator rather than in complex, one-off lab setups that lock an emulator to a design and lock your global team of software developers out of collaborating.  The Mentor EVP moves to emulation for the masses in a 24×7 world.

With big designs come big data and complex debug tasks.  These complex debug tasks are all easily handled by the new Mentor Visualizer Debug Environment, which has native UVM and SystemVerilog class-based debug capabilities and low-power UPF debug support to easily pinpoint design errors.  All of this works in both interactive and post-simulation modes for simulation and emulation.  To keep the software team productive, and get to SoC signoff sooner, the new Veloce OS3 global emulation resourcing technology moves software debug think-time offline to Mentor’s Codelink software debug tool.

And there’s more!  But I’ll leave that for you to discover.  When you have time, visit us here to learn more about the Mentor Enterprise Verification Platform.

Path to Standards

As the move to support Multi-Platform Based SoC evolves, so do the standards that underpin it.  And based on the comments of others I’ve reported on in this blog – and the understanding from our own experience that UVM can only go so far in Multi-Platform Based SoC verification – we concluded the time is right for the industry to explore the need for new standards.

We announced at DVCon 2014 an offer to take our graph-based test specification into an Accellera committee to help move beyond the limitations today’s standards have.  As our investment in tools, technology and platforms continues, we are keenly aware users want their design and verification data to be as portable as possible.  The Accellera user community members echoed the need to discuss portable stimulus that can take you up and down the design hierarchy from block, to subsystem, to system (“top”) and support the concurrent design of hardware and software.

In support of this, Accellera approved the formation of a Portable Stimulus Specification Proposed Working Group (PWG) to study the validity and need for a portable stimulus specification.  To that end, join me at the kickoff meeting to launch this activity on Wednesday, May 7, 2014 from 10:00am to 4:00pm Pacific time at the offices of Mentor Graphics in Fremont, CA USA.  If you would like to attend, or would like time on the agenda to discuss technology that would advance the development of a Portable Stimulus Specification or discuss your objectives/requirements for this group, contact me and I will put you in touch with the meeting organizer.  Accellera PWG meetings are open to all and do not require Accellera membership status to attend.

4 February, 2014

Marketing teams at FPGA vendors have been busy as the silicon nanometer geometry race escalates. Altera is “delivering the unimaginable” while Xilinx is offering “all programmable SoCs” to design centers. It’s clear that the SoC has become more accessible to a broader market today and that FPGA vendors have staked out a solid technology roadmap for the near future. Do marketing messages surrounding the geometry race affect the day-to-day life of engineers, and if so, how – especially when it comes to verification?

An excellent whitepaper from Altera, “The Breakthrough Advantage for FPGAs with Tri-Gate Technology,” covers Altera’s Stratix 10 FPGAs and SoCs. The paper describes verification challenges in this new expanded market this way: “Although current generation FPGAs require a rigorous simulation verification methodology rivaling ASICs, the additional lab testing and ability to reprogram FPGAs save substantial manpower investment. The overall cost of ownership must be considered when comparing an FPGA whose component price is higher than an ASIC of similar complexity.” I believe you can use this statement to engage your management in a discussion about better verification processes.

Xilinx also has excellent published technical resources. Its recent UltraScale backgrounder describes how they are solving the challenges in implementing a design with their reprogrammable silicon. Clearly Xilinx has made an impressive investment to make it easier to implement a design with its FPGA UltraScale products. Improvements include ASIC-like clocking and annealing dataflow bottlenecks without compromising performance. Xilinx also describes improvements when using its Vivado design suite, particularly when it comes to in-lab design bring up.

For other FPGA insights, it’s also worth checking out Electronics Engineering Journal’s recent article “Proliferating Programmability in 2014,” which claims that the long-term future of FPGAs depends on tool flows even though, as Kevin Morris sees it, EDA seems to have abandoned the market. (Kevin, I’m here to tell you you’re wrong.)

Do you think it’s inevitable that your FPGA team will first struggle to make it across the verification finish line before adopting a more process-oriented verification flow like the ASIC market demands? It’s not. I base this conclusion on the many conversations I’ve had with FPGA designers, their managers, sales engineers and many other talented people in this market over the years. Yes, there are significant challenges in FPGA design, but not all of them are technology related. With some emotion, one engineer remarked that debugging the same type of issue over and over in the hardware lab and expecting a different outcome was insane. (He’s right.) Others say they need specific ROI information for their management to even accept their need for change. Still others state that had they only known the solutions I talked about in my seminar a year ago, they would not have spent months and months bringing up their design in the lab.

With my peers here at Mentor Graphics, I have developed a three-step verification flow that includes coverage, assertions and improved throughput. I’ll write about this flow and related issues in the weeks ahead here on this blog. The flow is built on fundamental verification technologies that benefit the broad FPGA market. The goal, in developing the technology and writing about it here, has been to provide practical solutions and help more FPGA teams cross the verification gap.

In the meantime, what are your stories? Are you able to influence your management into adopting advanced technology to aid lab bring-up? Is your management biased towards lower cost and faster implementation (at the expense of verification)? Let me know in the comments or, if you prefer, by e-mail: joe_rodriguez@mentor.com.

4 October, 2013

We are truly living in the age of SoC design, where 78 percent of all designs today contain one or more embedded processors.  In fact, 56 percent of all designs contain two or more embedded processors, which brings a whole new level of verification challenges—requiring unique solutions.

A great example of this is STMicroelectronics, which recently shared its experience and solution in addressing verification challenges due to rising complexity. In 2012, STMicroelectronics began a pilot project to build what it called the Eagle Reference Design, or ERD. The goal was to see if it would be possible to stitch together three ARM products — a Cortex-A15, a Cortex-A7 and a DMC-400 — into one highly flexible platform, one that customers might eventually be able to tweak based on nothing more than an XML description of the system.

Engineers at STMicroelectronics sought to understand and benchmark the Eagle Reference Design. To speed this benchmarking along, they wanted a verification environment that would link software-based simulation and hardware-based emulation in a common flow.

Their solution was unique, and their story worth reading. They first built a simulation testbench that relied heavily on verification IP (VIP). Next, the team connected this testbench to a Veloce emulation system via TestBench XPress (TBX) co-modeling software. Running verification required separating all blocks of design code into two domains — synthesizable code, including all RTL, for running on the emulator; and all other modules, which run in the testbench (HVL) portion of the environment on the simulator (which is connected to the emulator). Throughout the project, the team worked closely with Mentor Graphics to fine-tune the new co-emulation verification environment, which requires that all SoC components be mapped exactly the same way in simulation and emulation.
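
To make that partition concrete, here is a minimal, hypothetical sketch of the dual-top arrangement this kind of co-emulation flow typically uses; it is not code from the ST project, and every module, interface, and task name is illustrative. The synthesizable side (DUT plus a simple transactor) would compile onto the emulator, while the class-based testbench side would run on the simulator and interact only through transaction-level task calls.

    // Synthesizable side: would be compiled for the emulator.
    interface mem_if(input logic clk);        // simple pin-level interface
      logic        valid;
      logic [31:0] addr, data;
    endinterface

    module hdl_top;
      logic clk = 0;
      always #5 clk = ~clk;                   // free-running clock on the emulator side
      mem_if bus(clk);
      // soc_dut u_dut(.bus(bus));            // the RTL under test would attach here

      // Transactor task: all pin wiggling stays on this side.
      task automatic write(input logic [31:0] a, input logic [31:0] d);
        @(posedge clk);
        bus.valid <= 1; bus.addr <= a; bus.data <= d;
        @(posedge clk);
        bus.valid <= 0;
      endtask
    endmodule

    // Testbench side: untimed, class-based code running on the simulator,
    // talking to the emulator only through the transactor's task-level API.
    module hvl_top;
      initial begin
        for (int i = 0; i < 4; i++)
          hdl_top.write(i*4, $urandom());     // transaction-level calls only
      end
    endmodule

The point of the split is that only transaction-level traffic crosses the simulator/emulator boundary rather than pin-level activity on every clock, which is what makes the co-modeling approach fast enough to be useful.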

Because the reference design was not bound to any particular project, the main goal was not to arrive at the complete verification of the design but rather to do performance analysis and establish verification methodologies and techniques that would work in the future. In this they succeeded, agreeing that when they eventually try this sort of combined approach on a real project, they will be able to port the verification environment to the emulator more or less seamlessly.

This is a great success story worth reading on how STMicroelectronics combined Questa simulation, Mentor verification IP (VIP), and Veloce emulation to speed up their benchmarking verification process. Check out the full story here!

19 September, 2013

It’s hard for me to believe that SystemVerilog 3.1 was released just over 10 years ago. The 3.1 version added Object-Oriented Programming features for testbench development to a language predominately used for RTL design synthesis. Making debug easier was one of the driving forces in unifying testbench and design features into a single language. The semantics for evaluating expressions and executing statements would be the same in the testbench and design. Setting breakpoints and stepping through the code would be seamless. That should have made it easier for either a verification or a design engineer to understand a complete verification environment. Or maybe it would enable either one to at least understand enough of the environment to isolate a particular problem.

Ten years later, I have yet to see that promise fulfilled. Most design engineers still debug their simulations the same way they debug in the lab: they look at waveforms. During simulation, they rarely look at the design source code, and certainly never look at the testbench code (unless it’s just basic pin wiggling like a waveform). Verification engineers are not much different. They rely on waveform debugging because that is what they were brought up on, and many do not even realize source-level debugging is available to them. However, the test/testbench is more like a piece of software than a hardware description, and there are many things about a modern testbench that are difficult to display in a waveform (e.g., call stacks, local variables, and random constraints). And methodologies like the UVM add many layers of source-level complexity that most users do not have the time to wade through.
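
As a small illustration (my own sketch, not taken from the session), consider how little of the following testbench fragment a waveform can show: the class handle, its fields, and the constraints that determined each random value are invisible unless you can debug at the source level.

    class packet;
      rand bit [7:0]  len;
      rand bit [15:0] addr;
      // Which constraint forced len into this range? A waveform can't tell you.
      constraint c_legal { len inside {[1:64]}; addr[1:0] == 2'b00; }
    endclass

    module tb;
      packet p = new();
      initial begin
        repeat (3) begin
          if (!p.randomize()) $error("randomization failed");
          // p, its fields, and the call stack live only in testbench memory;
          // none of this appears as a signal you could trace in a waveform.
          $display("len=%0d addr=0x%0h", p.len, p.addr);
        end
      end
    endmodule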

Next week I will be presenting as part of an Industry Special Session during the Forum on specification & Design Languages (FDL September 24-26,2013) that will discuss these issues and try to get more involvement from the academic and user communities to help resolve them. Was combining constructs from many languages into one a success? Can tools provide representations of source-level constructs in an easier graphical form? We hopefully will not need another decade.

26 August, 2013

Verification Techniques & Technologies Adoption Trends (Continued)

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 10 click here), I presented verification techniques and technologies adoption trends, as identified by the 2012 Wilson Research Group study. In this blog, I continue those discussions and focus on formal verification, acceleration/emulation, and FPGA prototyping.

For years, the term “formal verification” has bugged me since it is quite often misunderstood in the industry. The problem originated back in the mid-1990s with the emergence of formal equivalence checking tools from various EDA vendors, such as Chrysalis Symbolic Design. These tools were introduced to the market as formal verification, which is technically a true statement. However, a range of tools falls under the category of formal verification, such as formal property checkers and equivalence checkers.

So, what’s the problem? The question related to formal property checking in prior studies could have been misinterpreted by some participants to mean equivalence checking, which reduces the confidence in the results. To prevent this misinterpretation, we decided to change the question in 2012 to clarify that we were talking about the formal verification of assertions and clearly state “not equivalence checking” in the question.
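
For readers less familiar with the distinction, here is a hedged sketch of the kind of SystemVerilog assertion a formal property checker targets: it proves (or finds a counterexample to) a behavioral claim about the design, which is a very different activity from checking that two netlists are logically equivalent. Signal and module names are illustrative.

    // Illustrative properties for a simple request/grant arbiter.
    module arbiter_props(input logic clk, rst_n, req, gnt);
      // Every request must be granted within 1 to 4 cycles.
      property p_grant_latency;
        @(posedge clk) disable iff (!rst_n) req |-> ##[1:4] gnt;
      endproperty

      // A grant never appears without a request in the same cycle
      // (assumes a combinational grant in this sketch).
      property p_no_spurious_gnt;
        @(posedge clk) disable iff (!rst_n) gnt |-> req;
      endproperty

      a_latency:  assert property (p_grant_latency);
      a_spurious: assert property (p_no_spurious_gnt);
    endmodule

A property checker exhaustively explores the design’s state space against assertions like these; equivalence checking, by contrast, compares two implementations and never examines behavioral intent at all.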

One other thing we wanted to learn in the formal verification space during this study was what percentage of the market was using these auto-formal analysis tools (such as X safety checks, deadlock detection, reset analysis, etc.) versus formal property checking tools. The previous studies never made this distinction.

The fact that we changed the question related to formal property checking while adding in auto-formal in the 2012 study means that there is no meaningful way to compare this study’s formal verification results to the formal verification results from prior studies.

Formal Technology Adoption Trends

Figure 1 shows the adoption percentages for formal property checking and auto-formal techniques.

Figure 1. Formal Technology Adoption

We found that about five percent of the participants who are applying auto-formal techniques are not doing formal property checking. This means that the combined adoption of formal property checking and auto-formal techniques is about 32 percent. As a point of reference, the 2007 Far West Research study found 19 percent adoption for formal verification—and the 2010 study found the adoption at 29 percent. Both the 2007 and 2010 studies included the potentially erroneous responses associated with formal equivalence checking, as well as auto-formal usage.

Figure 2 provides a different analysis of the formal property adoption data by partitioning the results by design sizes. The design size partitions are represented as: less than 5M gates, 5M to 20M gates, and greater than 20M gates.

Figure 2. Formal property checking adoption by design size

Acceleration/Emulation & FPGA Prototyping Adoption Trends

The amount of time spent in a simulation regression is an increasing concern for many projects. Intuitively, we tend to think that the design size influences simulation performance. However, there are two equally important factors that must be considered: number of tests in the simulation regression suite and the length of each test in terms of clock cycles.

For example, a project might have a small or moderate-sized design, yet verification of this design requires a long running test (e.g., a video input stream). Hence, in this example, the simulation regression time is influenced by the number of clock cycles required for the test and not necessarily the design size itself.
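
A hedged back-of-the-envelope example (the numbers are mine, not from the study) makes the point: a suite of 500 tests that each run 10 million clock cycles on a simulator achieving 1,000 cycles per second needs roughly 500 × 10,000,000 ÷ 1,000 = 5,000,000 CPU-seconds—nearly 58 days of single-machine compute, or more than half a day even spread across a 100-machine farm. Doubling the length of a few long tests moves that number far more than a modest change in design size would.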

Figure 3 shows the number of directed tests created to verify a design in simulation (i.e., the regression suite). The findings obviously varied dramatically from a handful of tests to thousands of tests in a regression suite, depending on the design.

Figure 3. Number of directed tests created to verify a design

The increase in tests in the range of 1-100 is interesting to note. Is this due to the increase in adoption of constrained-random verification techniques in the past few years? Or possibly, something else is going on here. This line of questioning illustrates the value of reviewing various industry studies. That is, it is not so much in the absolute values a study presents, but the questions the new data raises.

Next, let’s look at regression times as shown in Figure 4. As you can see, regression time also varies dramatically, from short regression times for some projects to multiple days for other projects. The median simulation regression time is about 16-24 hours. Here, we also see an increase in shorter regression times. Again, this data raises some interesting questions that are worth exploring.

Figure 4. Simulation regression time trends

One technique that is often used to speed up simulation regressions (whether due to very long tests or a large number of tests) is hardware-assisted acceleration or emulation. In addition, FPGA prototyping, while historically used as a platform for software development, has recently served a role in SoC integration validation.

Figure 5 shows the adoption trend for both HW-assisted acceleration/emulation and FPGA prototyping by comparing the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). We see a continual rise in HW acceleration and emulation. This is not only due to the need to verify larger designs, or designs with long test times. HW acceleration and emulation has become the key platform for SoC Integration verification, where both hardware and software are integrated into a system for the first time. In addition, emulation is being used increasingly as a software development platform.

Figure 5. HW-assisted acceleration/emulation and FPGA Prototyping trends

Note that the adoption of FPGA prototyping has remained flat (or decreased slightly as the 2012 data suggest). This might seem counter-intuitive since we previously saw a trend in terms of the increase in SoC class designs. So what’s going on?

Figure 6 partitions the data for HW-assisted acceleration/emulation and FPGA prototyping adoption by design size: less than 1M gates, 1M to 20M gates, and greater than 20M gates. Notice that the adoption of HW-assisted acceleration/emulation continues to increase as design sizes increase. However, the adoption of FPGA prototyping rapidly drops off as design sizes increase beyond 20M gates. 

Figure 6. Acceleration/emulation and FPGA prototyping adoption by design size

This graph illustrates one of the problems with FPGA prototyping of very large designs, which is that there is an increased engineering effort required to partition designs across multiple FPGAs. In fact, what I have found is that FPGA prototyping of very large designs is often a major engineering effort in itself, and that many projects are seeking alternative solutions to address this problem.

In my next blog (click here), I will present the final data I plan to share from the Wilson Research Group study. This blog will focus on results in terms of meeting schedules, required spins, and classes of bugs contributing to respins. I will then wrap up this series of blogs in what I call the Epilogue—which will discuss potential gotchas and cautions on interpreting certain aspects of the data and thoughts about how the data could be used constructively.

19 August, 2013

Verification Techniques & Technologies Adoption Trends

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for background on the study, click here).

In my previous blog (Part 9 click here), I focused on some of the 2012 Wilson Research Group findings related to design and verification language and library trends. In this blog, I present verification techniques and technologies adoption trends, as identified by the 2012 Wilson Research Group study.

An interesting trend we are starting to see is that the electronic industry is maturing its functional verification processes, whether they are targeting their designs at IC/ASIC or FPGA implementations. This blog provides data to support this claim. An interesting question you might ask is, “What is driving this trend?” In some of my earlier blogs (click here for Part 1 and Part 2) I showed that design complexity is increasing in terms of design sizes and number of embedded processors. In addition, I’ve presented trend data that showed an increase in total project time and effort spent in verification (click here for Part 5 and Part 6). My belief is that the industry is being forced to mature its functional verification processes to address increasing complexity and effort.

Simulation Techniques Adoption Trends

Let’s begin by comparing non-FPGA adoption trends related to various simulation techniques from the 2007 Far West Research study (in blue) with the 2012 Wilson Research Group study (in green), as shown in Figure 1.

Figure 1. Simulation-based technique adoption trends for non-FPGA designs

You can see that the study finds the industry increasing its adoption of various functional verification techniques for non-FPGA targeted designs. Clearly the industry is maturing its processes as I previously claimed.

For example, in 2007, the Far West Research Group found that only 48 percent of the industry performed code coverage. This surprised me. After all, HDL-based code coverage is a technology that has been around since the early 1990’s. However, I did informally verify the 2007 results through numerous customer visits and discussions. In 2012, we see that the industry adoption of code coverage has increased to 70 percent.

In 2007, the Far West Research Group study found that 37 percent of the industry had adopted assertions for use in simulation. In 2012, we find that industry adoption of assertions had increased to 63 percent. I believe that the maturing of the various assertion language standards has contributed to this increased adoption.

In 2007, the Far West Research Group study found that 40 percent of the industry had adopted functional coverage for use in simulation. In 2010, the industry adoption of functional coverage had increased to 66 percent. Part of this increase in functional coverage adoption has been driven by the increased adoption of constrained-random simulation, since you really can’t effectively do constrained-random simulation without doing functional coverage.
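
To make that dependency concrete, here is a minimal sketch (all names are illustrative) of why the two go together: the constraints define what stimulus can be generated, while the covergroup records what was actually exercised—which is the only way to know whether random generation hit the cases you care about.

    class bus_txn;
      rand bit [1:0] kind;   // 0 = read, 1 = write, 2 = burst
      rand bit [7:0] len;
      constraint c_kind { kind inside {[0:2]}; }
      constraint c_len  { len  inside {[1:64]}; }

      // Functional coverage: did randomization ever produce a long burst?
      covergroup cg;
        cp_kind    : coverpoint kind { bins read = {0}; bins write = {1}; bins burst = {2}; }
        cp_len     : coverpoint len  { bins short_len = {[1:8]}; bins long_len = {[9:64]}; }
        kind_x_len : cross cp_kind, cp_len;
      endgroup

      function new(); cg = new(); endfunction
    endclass

    module tb;
      initial begin
        bus_txn t = new();
        repeat (1000) begin
          void'(t.randomize());
          t.cg.sample();
        end
        $display("functional coverage = %0.2f%%", t.cg.get_coverage());
      end
    endmodule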

Now let’s look at FPGA adoption trends related to various simulation techniques by comparing the 2010 Wilson Research Group study (in pink) with the 2012 Wilson Research Group study (in red).

Figure 2. Simulation-based technique adoption trends for FPGA designs

Again, you can clearly see that the industry is increasing its adoption of various functional verification techniques for FPGA targeted designs. This past year I have spent a significant amount of time in discussions with FPGA project managers around the world. During these discussions, most managers mention the drive to improve verification processes within their projects due to the rising complexity of this class of designs. The Wilson Research Group data supports these claims.

In fact, Figure 3 illustrates this maturing trend in the FPGA space, where we saw a 15 percent increase in the adoption of RTL simulation and an 8.5 percent increase in the adoption of code coverage. For complex FPGA designs, the traditional approach of “burn and churn” and debug in the lab is no longer a viable option. Nonetheless, it is still somewhat alarming that 31 percent of the FPGA study participants work on projects that perform no RTL simulation.

Figure 3. FPGA projects maturing their verification processes

Signoff Criteria Trends

We saw earlier in this blog the increased adoption of coverage techniques in the industry. Coverage has become a major component of a project’s verification signoff criteria. In Figure 4, we see how coverage has increased in importance in verification signoff criteria within the past five years, while other decision attributes have declined in terms of importance.

Figure 4. Non-FPGA functional verification signoff criteria trends

We see the same trends for FPGA designs, as shown in Figure 5.

Figure 5. FPGA functional verification signoff criteria trends

In my next blog (click here), I plan to continue the discussion related to adoption of various verification technologies and techniques as identified by the 2012 Wilson Research Group study.

12 August, 2013

Language and Library Trends (Continued)

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 8 click here), I focused on design and verification language trends, as identified by the Wilson Research Group study. This blog presents additional trends related to verification language and library adoption trends.

You might note that for some of the language and library data I present, the percentage sums to more than 100 percent. The reason for this is that some participants’ projects use multiple languages or multiple testbench methodologies.

Testbench Methodology Class Library Adoption

Now let’s look at testbench methodology and class library adoption for IC/ASIC designs. Figure 1 shows the trends in terms of methodology and class library adoption by comparing the 2010 Wilson Research Group study (in blue) with the 2012 study (in green). Today, we see a downward trend in terms of adoption of all testbench methodologies and class libraries with the exception of UVM, which has increased by 486 percent since the fall of 2010. The study participants were also asked what they plan to use within the next 12 months, and based on the responses, UVM is projected to increase an additional 46 percent.

Figure 1. Methodology and class library trends

Figure 2 shows the adoption of testbench methodologies and class libraries for FPGA designs (in red). We do not have sufficient data to show prior adoption trends in the FPGA space, but we anticipate that our future studies will enable us to do this. However, we did ask the FPGA study participants which testbench methodologies and class libraries they were planning to adopt within the next 12 months. Based on these responses, we anticipate that UVM adoption will increase by 40 percent, and OVM adoption by 24 percent, in the FPGA space.

Figure 2. Methodology and class library trends for FPGA designs

Assertion Languages and Libraries

Finally, let’s examine assertion language and library adoption for IC/ASIC designs. The Wilson Research Group study found that 63 percent of all the IC/ASIC participants have adopted assertion-based verification (ABV) as part of their verification strategy. The data presented in this section shows the assertion language and library adoption trend related to those participants who have adopted ABV.

Figure 3 shows the trends in terms of assertion language and library adoption by comparing the 2010 Wilson Research Group study (in blue), the 2012 Wilson Research Group study (in green), and the projected adoption trends within the next 12 months (in purple). The adoption of SVA continues to increase, while other assertion languages and libraries either remain flat or decline.

Figure 3. Assertion language and library adoption for Non-FPGA designs

Figure 4 shows the adoption of assertion language trends for FPGA designs (in red). Again, we do not have sufficient data to show prior adoption trends in the FPGA space, but we anticipate that our future studies will enable us to do this. We did ask the FPGA study participants which assertion languages and libraries they planned to adopt within the next 12 months. Based on these responses, we anticipate an increase in adoption for OVL, SVA, and PSL in the FPGA space within the next 12 months.

Figure 4. Trends in assertion language and library adoption for FPGA designs

In my next blog (click here), I plan to focus on the adoption of various verification technologies and techniques used in the industry, as identified by the 2012 Wilson Research Group study.

5 August, 2013

Language and Library Trends

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 7 click here), I focused on some of the 2012 Wilson Research Group findings related to testbench characteristics and simulation strategies. In this blog, I present design and verification language trends, as identified by the Wilson Research Group study.

You might note that for some of the language and library data I present, the percentage sums to more than one hundred percent. The reason for this is that some participants’ projects use multiple languages.

RTL Design Languages

Let’s begin by examining the languages used for RTL design. Figure 1 shows the trends in terms of languages used for design, by comparing the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), the 2012 Wilson Research Group study (in green), as well as the projected design language adoption trends within the next twelve months (in purple) as identified by the study participants. Note that the design language adoption is declining for most of the languages with the exception of SystemVerilog whose adoption continues to increase.

Also, it’s important to note that this study focused on languages used for RTL design. We have conducted a few informal studies related to languages used for architectural modeling—and it’s not too big of a surprise that we see increased adoption of C/C++ and SystemC in that space. However, since those studies have (thus far) been informal and not as rigorously executed as the Wilson Research Group study, I have decided to withhold that data until a more formal blind study can be executed related to architectural modeling and virtual prototyping.

Figure 1. Trends in languages used for Non-FPGA design

Let’s now look at the languages used specifically for FPGA RTL design. Figure 2 shows the trends in terms of languages used for FPGA design, by comparing the 2012 Wilson Research Group study (in red) with the projected design language adoption trends within the next twelve months (in purple).

Figure 2. Trends in languages used for FPGA design

It’s not too big of a surprise that VHDL is the predominant language used for FPGA RTL design, although we are starting to see increased interest in SystemVerilog.

Verification Languages

Next, let’s look at the languages used to verify Non-FPGA designs (that is, languages used to create simulation testbenches). Figure 3 shows the trends in terms of languages used to create simulation testbenches by comparing the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green).

Figure 3. Trends in languages used in verification to create Non-FPGA simulation testbenches

The study revealed that verification language adoption is declining for most of the languages with the exception of SystemVerilog whose adoption is increasing. In fact, SystemVerilog adoption increased by 8.3 percent between 2010 and 2012.

Figure 4 provides a different analysis of the data by partitioning the projects by design size, and then calculating the adoption of SystemVerilog for creating testbenches by size. The design size partitions are represented as: less than 5M gates, 5M to 20M gates, and greater than 20M gates. Obviously, we find that the larger the design size, the greater the adoption of SystemVerilog for creating testbenches. Yet, probably the most interesting observation we can make from examining Figure 4 is related to smaller designs that are less than 5M gates. Here we see that 58.8 percent of the industry has adopted SystemVerilog for verification. In other words, it is safe to say that SystemVerilog for verification has become mainstream today and not just limited to early adopters or leading-edge design projects.

Figure 4. SystemVerilog (for verification) adoption by design size

Let’s now look at the languages used to verify FPGA designs (that is, languages used to create FPGA simulation testbenches). Figure 5 shows the trends in terms of verification languages used for FPGA designs, by comparing the 2012 Wilson Research Group study (in red) with the projected verification language adoption trends within the next twelve months (in purple).

Figure 5. Trends in languages used in verification to create FPGA simulation testbenches

In my next blog (click here), I’ll continue the discussion on design and verification language trends as revealed by the 2012 Wilson Research Group Functional Verification Study.

29 July, 2013

Testbench Characteristics and Simulation Strategies

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for background on the study, click here).

In my previous blog (click here), I focused on the controversial topic of effort spent in verification. In this blog, I focus on some of the 2012 Wilson Research Group findings related to testbench characteristics and simulation strategies. Although I am shifting the focus away from verification effort, I believe that the data I present in this blog is related to my previous blog and really needs to be considered when calculating effort.

Time Spent in full-chip versus Subsystem-Level Simulation

Let’s begin by looking at Figure 1, which shows the percentage of time (on average) that a project spends in full-chip or SoC integration-level verification versus subsystem and IP block-level verification. The mean time performing full chip verification is represented by the dark green bar, while the mean time performing subsystem verification is represented by the light green bar. Keep in mind that this graph represents the industry average. Some projects spend more time in full-chip verification, while other projects spend less time.

Figure 1. Mean time spent in full chip versus subsystem simulation

Number of Tests Created to Verify the Design in Simulation

Next, let’s look at Figure 2, which shows the number of tests various projects create to verify their designs using simulation. The graph represents the findings from the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). Note that the curves look remarkably similar over the past five years. The median number of tests created to verify the design is within the range of (>200 – 500) tests. It is interesting to see a sharp percentage increase in the number of participants who claimed that fewer tests (1 – 100) were created to verify a design in 2012. It’s hard to determine exactly why this was the case—perhaps it is due to the increased use of constrained random (which I will talk about shortly). Or perhaps there has been an increased use of legacy tests. The study was not designed to go deeper into this issue and try to uncover the root cause. This is something I intend to study informally next year through discussions with various industry thought leaders.

Figure 2. Number of tests created to verify a design in simulation

Percentage of Directed Tests versus Constrained-Random Tests

Now let’s compare the percentage of directed testing that is performed on a project to the percentage of constrained-random testing. Of course, in reality there is a wide range in the amount of directed and constrained-random testing that is actually performed on various projects. For example, some projects spend all of their time doing directed testing, while other projects combine techniques and spend part of their time doing directed testing—and the other part doing constrained-random. For our comparison, we will look at the industry average, as shown in Figure 3. The average percentage of tests that were directed is represented by the dark green bar, while the average percentage of tests that are constrained-random is represented by the light green bar.

Figure 3. Mean directed versus constrained-random testing performed on a project

Notice how the percentage mix of directed versus constrained-random testing has changed over the past two years. Today we see that, on average, a project performs more constrained-random simulation. In fact, between 2010 and 2012 there has been a 39 percent increase in the use of constrained-random simulation on a project. One driving force behind this increase has been the maturing and acceptance of both the SystemVerilog and UVM standards—since the two standards facilitate an easier implementation of a constrained-random testbench. In addition, today we find that an entire ecosystem has emerged around both the SystemVerilog and UVM standards. This ecosystem consists of tools, verification IP, and industry expertise, such as consulting and training.
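
As a hedged sketch of why the standards make this easier (class and type names are illustrative, not from the study), a UVM sequence item carries the constraints, and a sequence simply randomizes and sends it; the same item can then be reused as stimulus at the block, subsystem, or SoC level.

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class mem_item extends uvm_sequence_item;
      rand bit        write;
      rand bit [31:0] addr, data;
      constraint c_aligned { addr[1:0] == 2'b00; }   // word-aligned accesses only
      `uvm_object_utils(mem_item)
      function new(string name = "mem_item"); super.new(name); endfunction
    endclass

    class mem_random_seq extends uvm_sequence #(mem_item);
      `uvm_object_utils(mem_random_seq)
      function new(string name = "mem_random_seq"); super.new(name); endfunction
      task body();
        repeat (100) begin
          mem_item item = mem_item::type_id::create("item");
          start_item(item);
          // Inline constraint: bias this particular test toward writes.
          if (!item.randomize() with { write dist {1 := 3, 0 := 1}; })
            `uvm_error("RAND", "randomization failed")
          finish_item(item);
        end
      endtask
    endclass

The ecosystem mentioned above—verification IP, tools, consulting and training—largely exists to supply and exercise items and sequences along these lines.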

Nonetheless, even with the increased adoption of constrained-random simulation on a project, you will find that constrained-random simulation is generally only performed at the IP block or subsystem level. For the full SoC level simulation, directed testing and processor-driven verification are the prominent simulation-based techniques in use today.

Simulation Regression Time

Now let’s look at the time that various projects spend in a simulation regression. Figure 4 shows the trends in terms of simulation regression time by comparing the 2007 Far West Research study (in gray) with the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). There really hasn’t been a significant change in the time spent in a simulation regression within the past three years. You will find that some teams spend days or even weeks in a regression. Yet today, the industry median is between 8 and 16 hours, and for many projects, there has been a decrease in regression time over the past few years. Of course, this is another example of where deeper analysis is required to truly understand what is going on. To begin with, these questions should probably be refined to better understand simulation times related to IP versus SoC integration-level regressions. We will likely do that in future studies—with the understanding that we will not be able to show trends (or at least not initially).

Figure 4. Simulation regression time trends

In my next blog (click here), I’ll focus on design and verification language trends, as identified by the 2012 Wilson Research Group study.

22 July, 2013

Effort Spent On Verification (Continued)

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (click here), I focused on the controversial topic of effort spent in verification. This blog continues that discussion.

I stated in my previous blog that I don’t believe there is a simple answer to the question, “how much effort was spent on verification in your last project?” I believe that it is necessary to look at multiple data points to truly get a sense of the real effort involved in verification today. So, let’s look at a few additional findings from the study.

Time designers spend in verification

It’s important to note that verification engineers are not the only project members involved in functional verification. Design engineers spend a significant amount of their time in verification too, as shown in Figure 1.

Figure 1. Average (mean) time design engineers spend in design vs. verification

In fact, you might note that design engineers now actually spend more time doing verification than design. This time expenditure has shifted in the last five years. In fact, the amount of time that design engineers spend doing verification has increased by 15 percent since 2007, while the amount of time they spend doing design has decreased by about 13 percent.

The designer’s involvement in verification ranges from:

  • Small sandbox testing to explore various aspects of the implementation
  • Full functional testing of IP blocks and SoC integration
  • Debugging verification problems identified by a separate verification team

Percentage of time verification engineers spend in various tasks

Next, let’s look at the mean time verification engineers spend in performing various tasks related to their specific project. You might note that verification engineers spend most of their time in debugging. Ideally, if all the tasks were optimized, then you would expect this. Yet, unfortunately, the time spent in debugging can vary significantly from project-to-project, which presents scheduling challenges for managers during a project’s verification planning process.

Figure 2. Average (mean) time verification engineers spend in various tasks

Number of formal analysis, FPGA prototyping, and emulation Engineers

Functional verification is not limited to simulation-based techniques. Hence, it’s important to gather data related to other functional verification techniques, such as the number of verification engineers involved in formal analysis, FPGA prototyping, and emulation.

Figure 3 presents the trends in terms of the number of verification engineers focused on formal analysis on a project. In 2007, the mean number of verification engineers focused on formal analysis on a project was 1.68, while in 2010 the mean number increased to 1.84. For some reason, we did see a slight decrease in the mean number of verification engineers who focus on formal in 2012. Regardless, the curve is remarkably consistent for the past five years.

Figure 3. Median number of verification engineers focused on formal analysis

Although FPGA prototyping is a common technique used to create platforms for software development, it is also sometimes used by projects for SoC integration verification and system validation. Figure 4 presents the trends in terms of the number of verification engineers focused on FPGA prototyping. In 2007, the mean number of verification engineers focused on FPGA prototyping on a project was 1.42, while in 2010 the mean number was 1.86. In 2012 we saw a slight decline in mean number of verification engineers focused on FPGA prototyping. However, the curve has been remarkably similar for the past five years.

Figure 4. Number of verification engineers focused on FPGA prototyping

Figure 5 presents the trends in terms of the number of verification engineers focused on hardware-assisted acceleration and emulation. In 2007, the mean number of verification engineers focused on hardware-assisted acceleration and emulation on a project was 1.31, while in 2010 the mean number was 1.86. In 2012, we see a slight decrease in the mean number of verification engineers who focus on hardware-assisted acceleration and emulation.

Figure 5. Number of verification engineers focused on emulation

Again, notice how the curve has been consistent over the past five years. In other words, we are not seeing any big trends in terms of an increase in verification engineers focused predominantly on formal, FPGA prototyping, or hardware-assisted acceleration and emulation. This trend was certainly not true for general verification engineers who focus on simulation-based techniques, as I presented in my previous blog, where we saw a 75 percent increase in the peak number of verification engineers involved on a project within the past five years.

A few more thoughts on verification effort

So, can I conclusively state that 70 percent of a project’s effort is spent in verification today as some people have claimed? No. In fact, even after reviewing the data on different aspects of today’s verification process, I would still find it difficult to state quantitatively what the effort is. Yet, the data that I’ve presented so far seems to indicate that the effort (whatever it is) is increasing. And there is still additional data relevant to the verification effort discussion that I plan to present in upcoming blogs. However, in my next blog (click here), I shift the discussion from verification effort, and focus on some of the 2012 Wilson Research Group findings related to testbench characteristics and simulation strategies.
