Posts Tagged ‘IEEE 1800’

11 September, 2014

From those just beginning to study electronic systems design to practicing engineers, this is the time of year for learning: students taking their first steps with VHDL, Verilog, and SystemVerilog join the academic “back to school” crowd, while those who use design and verification languages professionally hone their skills at industry events around the world.

A new academic year has started and the Mentor Higher Education Program (HEP) is well set to help students at more than 1200 colleges and universities secure access to the same commercial tools and technology used by industry.  It is a real win-win when students learn using the same tools they will use after graduating.  Early exposure and use mean better-skilled, more productive engineers for employers.

The functional verification team at Mentor Graphics knows that many students would prefer to have a local copy of ModelSim on their personal computer to do their course work and smaller projects as they learn VHDL or Verilog.  To help facilitate that, we make the ModelSim PE Student Edition available for download free of charge.  More than 10,000 students around the world now use ModelSim PE Student Edition, in addition to the commercial-grade tools they can access in their university labs.

For the practicing engineer, the Verification Academy offers an online community of more than 25,000 design and verification engineers who exchange ideas on a wide variety of issues across the numerous standards and methodologies.  If you are not a member of the Verification Academy, I recommend you join.  You will also find the Verification Academy at DAC for one-on-one discussions, and more recently at Verification Academy Live daylong seminars, which have already come to Austin and, as of this writing, will be in Santa Clara next.  There is still time to register for the Santa Clara event, and I invite you to attend.

As design and verification is global, Accellera realized that DVCon should address the needs of the global design and verification engineer population as well.  For 2014, DVCon Europe and DVCon India were born from already successful SystemC user group events.  These user-led conferences bring engineers in those regions together to share experiences and knowledge and, ultimately, to become more productive.

Students and practicing engineers alike can benefit from fee-free access to some of the popular IEEE EDA standards.   While I don’t think reading them alone is the ultimate way to educate yourself, they make great companions to daily design and verification activities.  Accellera has worked with the IEEE to place several EDA standards in the IEEE Standards Association’s “Get™” program.  Almost 16,000 copies of the SystemC standard (1666) and just about the same number of SystemVerilog standards (1800) had been downloaded as of the end of August 2014.  Have you downloaded your free copies yet?

The chart below shows the distribution of nearly 45,000 downloads which have occurred since 2010.  Stay tuned for breaking news on some updates to the EDA standards in the Get program.  When updated, they will replace the versions available now.  So if you want to have both the current versions and the ones coming out shortly, you had better download your copies now.  If the electronic version is not sufficient for you, the IEEE continues to sell printed versions.

[Chart: distribution of IEEE Get program standard downloads since 2010]

From students to practicing engineers, the season of learning has started.  I encourage you to find your right venue or style of learning and connect with others to advance and improve your design and verification productivity.


8 September, 2013

Schedules, respins, and bug classification

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 11 click here), I focused on some of the 2012 Wilson Research Group findings related to formal verification, acceleration/emulation, and FPGA prototyping trends. In this blog, I present verification results findings in terms of schedules, respins, and classification of functional bugs.

We have seen in previous blogs that a significant amount of effort is being applied to functional verification. An important question the various studies have tried to answer is whether this increasing effort is paying off.

Schedules

Figure 1 presents the design completion time compared to the project’s original schedule. What is interesting is that we really have not seen a change in this trend in over five years. That is, 67 percent of all projects are behind schedule with respect to the original plan. One could argue that designs have increased in complexity in terms of gate counts, embedded processors, and lots of software between 2007 and 2012. Yet, achieving project schedules has not worsened.

Figure 1. Non-FPGA design completion time compared to the project’s original schedule

What’s interesting is that FPGA designs follow this same trend, as shown in figure 2.

Figure 2. Non-FPGA vs. FPGA design completion time relative to the original plan

Respins

Other verification data points worth looking at relate to the number of spins required between the start of a project and final production. Figure 3 shows this industry trend all the way back to the 2004 Collett study. Again, even though designs have increased in complexity, the data suggest that projects are not getting any worse in terms of the number of spins before production. If anything, there appears to be a slight improvement recently in this trend in projects requiring three or more spins.

Figure 3. Number of spins required from start of project until production

Bug classification

Figure 4 shows the various categories of flaws that contribute to respins. Note that the sum is greater than 100% on this graph because a respin can be triggered by multiple flaws.

Figure 4. Types of flaws contributing to respins

Although logic and functional flaws remain the leading causes of respins, the data suggest that there has been some improvement in this area over the past eight years. Is this due to the increased amount of reuse that is occurring in the industry? Or is the industry maturing its verification processes? Or is something entirely different going on? This data point raises some interesting questions worth exploring.

Figure 5 examines the root cause of functional bugs by various categories. The data suggest an improvement in logic errors over an eleven-year period and, potentially, a worsening of problems related to changing specifications. Problems associated with changing, incorrect, and incomplete specifications are a common theme I often hear when visiting customers.

Figure 5. Root cause of functional flaws

In my next blog (click here), I plan to wrap up this series of blogs in what I call the Epilogue—which will discuss potential gotchas and cautions on interpreting certain aspects of the data and thoughts about how the data might be used constructively.


5 August, 2013

Language and Library Trends

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 7 click here), I focused on some of the 2012 Wilson Research Group findings related to testbench characteristics and simulation strategies. In this blog, I present design and verification language trends, as identified by the Wilson Research Group study.

You might note that for some of the language and library data I present, the percentage sums to more than one hundred percent. The reason for this is that some participants’ projects use multiple languages.

RTL Design Languages

Let’s begin by examining the languages used for RTL design. Figure 1 shows the trends in terms of languages used for design, by comparing the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), the 2012 Wilson Research Group study (in green), as well as the projected design language adoption trends within the next twelve months (in purple) as identified by the study participants. Note that the design language adoption is declining for most of the languages with the exception of SystemVerilog whose adoption continues to increase.

Also, it’s important to note that this study focused on languages used for RTL design. We have conducted a few informal studies related to languages used for architectural modeling—and it’s not too big of a surprise that we see increased adoption of C/C++ and SystemC in that space. However, since those studies have (thus far) been informal and not as rigorously executed as the Wilson Research Group study, I have decided to withhold that data until a more formal blind study can be executed related to architectural modeling and virtual prototyping.

Figure 1. Trends in languages used for Non-FPGA design

Let’s now look at the languages used specifically for FPGA RTL design. Figure 2 shows the trends in terms of languages used for FPGA design, by comparing the 2012 Wilson Research Group study (in red) with the projected design language adoption trends within the next twelve months (in purple).

Figure 2. Trends in languages used for FPGA design

It’s not too big of a surprise that VHDL is the predominant language used for FPGA RTL design, although we are starting to see increased interest in SystemVerilog.

Verification Languages

Next, let’s look at the languages used to verify Non-FPGA designs (that is, languages used to create simulation testbenches). Figure 3 shows the trends in terms of languages used to create simulation testbenches by comparing the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green).

Figure 3. Trends in languages used in verification to create Non-FPGA simulation testbenches

The study revealed that verification language adoption is declining for most of the languages with the exception of SystemVerilog whose adoption is increasing. In fact, SystemVerilog adoption increased by 8.3 percent between 2010 and 2012.

Figure 4 provides a different analysis of the data by partitioning the projects by design size, and then calculating the adoption of SystemVerilog for creating testbenches by size. The design size partitions are represented as: less than 5M gates, 5M to 20M gates, and greater than 20M gates. Obviously, we find that the larger the design size, the greater the adoption of SystemVerilog for creating testbenches. Yet, probably the most interesting observation we can make from examining Figure 4 is related to smaller designs that are less than 5M gates. Here we see that 58.8 percent of the industry has adopted SystemVerilog for verification. In other words, it is safe to say that SystemVerilog for verification has become mainstream today and not just limited to early adopters or leading-edge design projects.

Figure 4. SystemVerilog (for verification) adoption by design size

Let’s now look at the languages used to verify FPGA designs (that is, to create FPGA simulation testbenches). Figure 5 shows these trends, comparing the 2012 Wilson Research Group study (in red) with the projected verification language adoption trends within the next twelve months (in purple).

Figure 5. Trends in languages used in verification to create FPGA simulation testbenches

In my next blog (click here), I’ll continue the discussion on design and verification language trends as revealed by the 2012 Wilson Research Group Functional Verification Study.


29 July, 2013

Testbench Characteristics and Simulation Strategies

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for background on the study, click here).

In my previous blog (click here), I focused on the controversial topic of effort spent in verification. In this blog, I focus on some of the 2012 Wilson Research Group findings related to testbench characteristics and simulation strategies. Although I am shifting the focus away from verification effort, I believe that the data I present in this blog is related to my previous blog and really needs to be considered when calculating effort.

Time Spent in full-chip versus Subsystem-Level Simulation

Let’s begin by looking at Figure 1, which shows the percentage of time (on average) that a project spends in full-chip or SoC integration-level verification versus subsystem and IP block-level verification. The mean time performing full chip verification is represented by the dark green bar, while the mean time performing subsystem verification is represented by the light green bar. Keep in mind that this graph represents the industry average. Some projects spend more time in full-chip verification, while other projects spend less time.

Figure 1. Mean time spent in full chip versus subsystem simulation

Number of Tests Created to Verify the Design in Simulation

Next, let’s look at Figure 2, which shows the number of tests various projects create to verify their designs using simulation. The graph represents the findings from the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). Note that the curves look remarkably similar over the past five years. The median number of tests created to verify the design is within the range of (>200 – 500) tests. It is interesting to see a sharp percentage increase in the number of participants who claimed that fewer tests (1 – 100) were created to verify a design in 2012. It’s hard to determine exactly why this was the case—perhaps it is due to the increased use of constrained random (which I will talk about shortly). Or perhaps there has been an increased use of legacy tests. The study was not designed to go deeper into this issue and uncover the root cause. This is something I intend to study informally next year through discussions with various industry thought leaders.

Figure 2. Number of tests created to verify a design in simulation

Percentage of Directed Tests versus Constrained-Random Tests

Now let’s compare the percentage of directed testing that is performed on a project to the percentage of constrained-random testing. Of course, in reality there is a wide range in the amount of directed and constrained-random testing that is actually performed on various projects. For example, some projects spend all of their time doing directed testing, while other projects combine techniques and spend part of their time doing directed testing—and the other part doing constrained-random. For our comparison, we will look at the industry average, as shown in Figure 3. The average percentage of tests that were directed is represented by the dark green bar, while the average percentage of tests that are constrained-random is represented by the light green bar.

Figure 3. Mean directed versus constrained-random testing performed on a project

Notice how the percentage mix of directed versus constrained-random testing has changed over the past two years. Today we see that, on average, a project performs more constrained-random simulation. In fact, between 2010 and 2012 there has been a 39 percent increase in the use of constrained-random simulation on a project. One driving force behind this increase has been the maturing and acceptance of both the SystemVerilog and UVM standards—since these two standards facilitate an easier implementation of a constrained-random testbench. In addition, today we find that an entire ecosystem has emerged around both the SystemVerilog and UVM standards. This ecosystem consists of tools, verification IP, and industry expertise, such as consulting and training.
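To make this concrete, below is a minimal sketch of the kind of constrained-random stimulus item that SystemVerilog and UVM make straightforward to write. The class, field, and constraint names are illustrative assumptions, not taken from the study or any participant’s code.

import uvm_pkg::*;
`include "uvm_macros.svh"

// Hypothetical constrained-random stimulus item.
class bus_item extends uvm_sequence_item;
  rand bit [31:0] addr;
  rand bit [7:0]  data;
  rand bit        write;

  // Keep addresses inside a legal peripheral window and bias toward writes.
  constraint addr_c  { addr inside {[32'h1000_0000 : 32'h1000_FFFF]}; }
  constraint write_c { write dist { 1 := 3, 0 := 1 }; }

  `uvm_object_utils(bus_item)

  function new(string name = "bus_item");
    super.new(name);
  endfunction
endclass : bus_item

A sequence then calls randomize() on each item, letting the constraint solver explore the legal stimulus space rather than enumerating every test by hand.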

Nonetheless, even with the increased adoption of constrained-random simulation on a project, you will find that constrained-random simulation is generally only performed at the IP block or subsystem level. For the full SoC level simulation, directed testing and processor-driven verification are the prominent simulation-based techniques in use today.

Simulation Regression Time

Now let’s look at the time that various projects spend in a simulation regression. Figure 4 shows the trends in terms of simulation regression time by comparing the 2007 Far West Research study (in gray) with the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). There really hasn’t been a significant change in the time spent in a simulation regression within the past three years. You will find that some teams spend days or even weeks in a regression. Yet today, the industry median is between 8 and 16 hours, and for many projects, there has been a decrease in regression time over the past few years. Of course, this is another example of where deeper analysis is required to truly understand what is going on. To begin with, these questions should probably be refined to better understand simulation times related to IP versus SoC integration-level regressions. We will likely do that in future studies—with the understanding that we will not be able to show trends (or at least not initially).

Figure 4. Simulation regression time trends

In my next blog (click here), I’ll focus on design and verification language trends, as identified by the 2012 Wilson Research Group study.


22 July, 2013

Effort Spent On Verification (Continued)

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (click here), I focused on the controversial topic of effort spent in verification. This blog continues that discussion.

I stated in my previous blog that I don’t believe there is a simple answer to the question, “how much effort was spent on verification in your last project?” I believe that it is necessary to look at multiple data points to truly get a sense of the real effort involved in verification today. So, let’s look at a few additional findings from the study.

Time designers spend in verification

It’s important to note that verification engineers are not the only project members involved in functional verification. Design engineers spend a significant amount of their time in verification too, as shown in Figure 1.

Figure 1. Average (mean) time design engineers spend in design vs. verification

In fact, you might note that design engineers now actually spend more time doing verification than design. This time expenditure has shifted in the last five years. In fact, the amount of time that design engineers spend doing verification has increased by 15 percent since 2007, while the amount of time they spend doing design has decreased by about 13 percent.

The designer’s involvement in verification ranges from:

  • Small sandbox testing to explore various aspects of the implementation
  • Full functional testing of IP blocks and SoC integration
  • Debugging verification problems identified by a separate verification team

Percentage of time verification engineers spend on various tasks

Next, let’s look at the mean time verification engineers spend in performing various tasks related to their specific project. You might note that verification engineers spend most of their time in debugging. Ideally, if all the tasks were optimized, then you would expect this. Yet, unfortunately, the time spent in debugging can vary significantly from project-to-project, which presents scheduling challenges for managers during a project’s verification planning process.

Figure 2. Average (mean) time verification engineers spend on various tasks

Number of formal analysis, FPGA prototyping, and emulation Engineers

Functional verification is not limited to simulation-based techniques. Hence, it’s important to gather data related to other functional verification techniques, such as the number of verification engineers involved in formal analysis, FPGA prototyping, and emulation.

Figure 3 presents the trends in terms of the number of verification engineers focused on formal analysis on a project. In 2007, the mean number of verification engineers focused on formal analysis on a project was 1.68, while in 2010 the mean number increased to 1.84. For some reason, we did see a slight decrease in the mean number of verification engineers who focus on formal in 2012. Regardless, the curve has been remarkably consistent for the past five years.

Figure 3. Number of verification engineers focused on formal analysis

Although FPGA prototyping is a common technique used to create platforms for software development, it is also sometimes used by projects for SoC integration verification and system validation. Figure 4 presents the trends in terms of the number of verification engineers focused on FPGA prototyping. In 2007, the mean number of verification engineers focused on FPGA prototyping on a project was 1.42, while in 2010 the mean number was 1.86. In 2012 we saw a slight decline in mean number of verification engineers focused on FPGA prototyping. However, the curve has been remarkably similar for the past five years.

Figure 4. Number of verification engineers focused on FPGA prototyping

Figure 5 presents the trends in terms of the number of verification engineers focused on hardware-assisted acceleration and emulation. In 2007, the mean number of verification engineers focused on hardware-assisted acceleration and emulation on a project was 1.31, while in 2010 the mean number was 1.86. In 2012, we see a slight decrease in the mean number of verification engineers who focus on hardware-assisted acceleration and emulation.

Figure 5. Number of verification engineers focused on emulation

Again, notice how the curve has been consistent over the past five years. In other words, we are not seeing any big trends in terms of increased numbers of verification engineers focused predominantly on formal, FPGA prototyping, or hardware-assisted acceleration and emulation. This was certainly not true for general verification engineers who focus on simulation-based techniques, as I presented in my previous blog, where we saw a 75 percent increase in the peak number of verification engineers involved on a project within the past five years.

A few more thoughts on verification effort

So, can I conclusively state that 70 percent of a project’s effort is spent in verification today as some people have claimed? No. In fact, even after reviewing the data on different aspects of today’s verification process, I would still find it difficult to state quantitatively what the effort is. Yet, the data that I’ve presented so far seems to indicate that the effort (whatever it is) is increasing. And there is still additional data relevant to the verification effort discussion that I plan to present in upcoming blogs. However, in my next blog (click here), I shift the discussion from verification effort, and focus on some of the 2012 Wilson Research Group findings related to testbench characteristics and simulation strategies.


8 May, 2013

Design Trends

In my previous blog, I introduced the 2012 Wilson Research Group Functional Verification Study (click here). The objective of my previous blog was to provide background on this large, worldwide industry study. I will present the key findings from this study in a set of upcoming blogs. 

This blog begins the process of revealing the 2012 Wilson Research Group study findings by first focusing on current design trends.  Let’s begin by examining process geometry adoption trends, as shown in Figure 1.  Here, you will see trend comparisons between the 2007 Far West Research study (gray line), the 2010 Wilson Research Group study (blue line), and the 2012 Wilson Research Group study (green line).

Figure 1. Process geometry trends

Worldwide, the median process geometry size from the 2007 Far West Research study was about 90nm, while the median process geometry size was about 65nm in 2010. Today, the mean process geometry size for a typical project is about 45nm—although you can see that over a third of projects today are designing below 32nm.

In addition to the industry moving to smaller process geometries, the industry is also moving to larger design sizes as measured in number of gates of logic and datapath, excluding memories (which should not be a surprise). Figure 2 compares design sizes from the 2002 Collett study (dark blue line), the 2007 Far West Research study (gray line), the 2010 Wilson Research Group study (light blue line), and the 2012 Wilson Research Group study (green line).

Figure 2. Number of gates of logic and datapath trends, excluding memories

The study revealed that about a third of the non-FPGA designs today are less than 5M gates, while a third range in size between 5M to 20M gates, and about a third of all designs are larger than 20M gates.

It’s important to note here that the data on the mean design size trends does not reflect volume in terms of semiconductor production. For example, you could have fewer projects designing at a small geometry, yet they have higher volume in terms of production.

In Figure 3, I show the mean design size trends between the 2002 Collett study (dark blue line), the 2007 Far West Research study (gray line), the 2010 Wilson Research Group study (light blue line), and the 2012 Wilson Research Group study (green line). Obviously, gate counts have increased over the years, yet designs continue to be developed across a wide range of gate counts, both smaller and larger than the mean. Another observation is that, as you would expect, the mean gate count trend essentially follows Moore’s law.

Figure 3. Mean design size trends

Figure 4 presents the current design implementation trends for non-FPGAs as identified by the survey participants.

 

Figure 4. Non-FPGA current design implementation trends

The data in Figure 4 presents trends in design implementation approaches for non-FPGA designs, from the 2002 Collett study (dark blue bar), the 2004 Collett study (dark green bar), the 2007 Far West Research study (gray bar), the 2010 Wilson Research Group study (blue bar), and the 2012 Wilson Research Group study (green bar). Note that the study seems to indicate a downward trend in standard cell design implementation.

Figure 5. FPGA design implementation trends

For the 2012 study, we decided that we wanted to get a sense of the percentage of FPGA projects that target the very complex programmable SoC FPGAs that have recently emerged, as shown in Figure 5. Examples of these programmable SoC FPGAs include Xilinx’s Zynq, Altera’s Arria/Cyclone SoC families, and Microsemi’s SmartFusion.

In my next blog (click here), I’ll continue discussing current design trends, focusing specifically on embedded processors, power, and clock domains.


25 February, 2013

Download the standard now – at no charge!

The IEEE has published the latest update to the SystemVerilog standard.  And courtesy of Accellera, the standard is available for download without charge directly from the IEEE.


The latest update to the SystemVerilog standard is now ready for download.  It joins other EDA standards, like SystemC, in the IEEE Get™ program, which grants the public access to view and download current individual standards at no charge as PDFs.  (If you want an older, superseded version of the standard, a printed copy, or a CD-ROM, you can purchase those formats from the IEEE for a fee.)

Over the years, Accellera came to understand that many people continued to use the freely available version that seeded the initial IEEE 1800 SystemVerilog standard.  Since that version is significantly out of date, Accellera collaborated with the IEEE Standards Association to ensure the latest version of the SystemVerilog standard would be freely available in electronic form to all who wish to download it.  Accellera now hopes the old 3.1a versions that everyone has and uses can finally be retired to the archives.

The new version of the standard should be used by the UVM (Universal Verification Methodology) community as the definitive specification of the SystemVerilog standard upon which UVM is built.   It goes very well with the UVM Cookbook and the Coverage Cookbook.

From Mentor’s perspective, it also makes a good companion to the Questa verification platform and complements our latest product update in which we announced support for the IEEE 1800-2012 SystemVerilog standard among other things.

If you have not done so already, download your copy now by clicking here.


20 November, 2012

Verification Academy Adds Major New Technical Resource

The Verification Academy has added another major methodology cookbook, this one focused on effective coverage adoption.  The Coverage Cookbook describes the different types of coverage available to track your verification progress, explains how to create a functional coverage model from a specification, and provides examples that implement functional coverage for different types of designs.

Verification Academy “full access” members have access to the free Coverage Cookbook and the UVM/OVM Cookbooks as well.  Are you a registered full access member?  If not, register now to become a full access member.  (Restrictions apply.)

Coverage is not a new topic.  It was one of the major additions to the SystemVerilog (IEEE Std. 1800™-2009) standard.  But the SystemVerilog functional coverage extensions were left to the verification engineer to use in such a way as to return meaningful measurements of how much of the design specification was being tested.  The Universal Verification Methodology (UVM) offers greater structure for coverage than raw SystemVerilog, but it, too, is still only a piece of the puzzle.
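For readers new to the functional coverage extensions, here is a minimal covergroup sketch; the signal names, encodings, and bins are hypothetical, chosen only to show how coverage points can be tied back to a specification.

// Illustrative covergroup; names and bins are hypothetical.
module cov_example;
  bit       clk;
  bit [1:0] op;   // 00=ADD, 01=SUB, 10=MUL, 11=reserved per the (assumed) spec
  bit [7:0] len;

  covergroup op_cg @(posedge clk);
    cp_op : coverpoint op {
      bins add = {2'b00};
      bins sub = {2'b01};
      bins mul = {2'b10};
      illegal_bins rsvd = {2'b11};  // the spec says 11 must never occur
    }
    cp_len : coverpoint len {
      bins short_xfer = {[1:16]};
      bins long_xfer  = {[17:255]};
    }
    op_x_len : cross cp_op, cp_len;  // every operation at every length class
  endgroup

  op_cg cg = new();
endmodule

Writing the bins is the easy part; deciding which points and crosses actually measure the specification is where a methodology, and the Coverage Cookbook, earn their keep.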

As verification teams have come to generate greater amounts of information from their use of SystemVerilog, UVM, and other verification tools, the data from verification runs needs to be easily used to drive coverage closure.  Within the Mentor Graphics Questa verification platform, this resulted in the development of the Unified Coverage Database (UCDB) and its associated verification management and planning features.

Since verification teams use a variety of tools and technology from many sources, it was imperative that verification information could be easily shared and combined to help drive faster coverage closure across the industry.  This is why Mentor Graphics donated its UCDB API to Accellera, where it became the Unified Coverage Interoperability Standard (UCIS).

It would be great to think that we are done, but we’re not.  Tools and data are just two of the three dimensions of any IC design project; a comprehensive approach to verification management adds the third.  The Mentor Graphics Questa Verification Management features handle all of this.

Now the question is how best to adopt and use all the capabilities at hand, from the standards to the verification technology at your fingertips.

The Verification Academy Coverage Cookbook is one of the important tools you now have to help pull all the information into a single place where you can learn the theory and put that theory into practice.  The Coverage Cookbook is much like the OVM/UVM Cookbooks in that it is web friendly, while supporting the ability for you to generate a PDF file of the whole document in case you want to have a printed copy or have it available for offline reference.

The Theory section covers:

  • What is coverage?
  • Kinds of coverage
  • Code Coverage
  • Functional Coverage
  • Specification to coverage
  • Coding for analysis

The Practice section shows three examples you can use today:

  • Bus protocol coverage using ARM® APB3
  • Block level coverage using UART
  • Datapath coverage using BiQuad IIR Filter

The Coverage Cookbook is a live document. You can expect continued extensions and contributions to enhance it.  As Harry Foster, Mentor Graphics’ Chief Scientist Verification put it, “Methodology is the bridge between tools and technologies, which creates a productive, predictable, and repeatable solution.”  We should expect that our collective use of this technology will help hone the methodology which is the heart of the Coverage Cookbook.  And with this use, we should expect the Coverage Cookbook to evolve as we achieve greater verification productivity.

Let us know what you think about the Coverage Cookbook and what we might be able to do to improve it.  In the meantime, Happy Coverage Closing!


13 May, 2011

Language and Library Trends

This blog is a continuation of a series of blogs, which present the highlights from the 2010 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 7 click here), I focused on some of the 2010 Wilson Research Group findings related to testbench characteristics and simulation strategies. In this blog, I present design and verification language trends, as identified by the Wilson Research Group study.

You might note that for some of the language and library data I present, the percentage sums to more than one hundred percent. The reason for this is that some participants’ projects use multiple languages and multiple methodologies.

Design Languages

Let’s begin by examining the languages used for design, as shown in Figure 1.  Here, we compare the results for languages used to design FPGAs (in grey) with languages used to design non-FPGAs (in green).


Figure 1. Languages used for design

Not too surprisingly, we see that VHDL is the most popular language used for the design of FPGAs, while Verilog and SystemVerilog are the most popular languages used for the design of non-FPGAs.

Figure 2 shows the trends in terms of languages used for design, by comparing the 2007 Far West Research study (in blue) with the 2010 Wilson Research Group study (in green), as well as the projected design language adoption trends within the next twelve months (in purple). Note that the design language adoption is declining for most of the languages with the exception of SystemVerilog whose adoption is increasing.


Figure 2. Trends in languages used for design

Verification Languages

Next, let’s look at the languages used for verification (that is, languages used to create simulation testbenches). Figure 3 compares the results between FPGA designs (in grey) and non-FPGA designs (in green).

Figure 3. Languages used in verification to create simulation testbenches

And again, it’s not too surprising to see that VHDL is the most popular language used to create verification testbenches for FPGAs, while SystemVerilog is the most popular language used to create testbenches for non-FPGAs.

Figure 4 shows the trends in terms of languages used to create simulation testbenches by comparing the 2007 Far West Research study (in blue) with the 2010 Wilson Research Group study (in green), as well as the projected language adoption trends within the next twelve months (in purple). Note that verification language adoption is declining for most of the languages with the exception of SystemVerilog whose adoption is increasing.


Figure 4. Trends in languages used in verification to create simulation testbenches

Now, let’s look at methodology and class library adoption. Figure 5 shows the future trends in terms of methodology and class library adoption by comparing the 2010 Wilson Research Group study (in green) with the projected adoption trends within the next twelve months (in purple). Previous studies did not include data on methodology and class library adoption, so we are unable to show previous trends.


Figure 5. Methodology and class library future trends

The study indicates that the only methodologies whose adoption is projected to grow in the next twelve months are OVM and UVM.

Assertion Languages and Libraries

Finally, let’s examine assertion language and library adoption, as shown in Figure 6.  Here, we compare the results for FPGA designs (in grey) and non-FPGA designs (in green).


Figure 6. Assertion language and library adoption

SystemVerilog Assertions (SVA) is the most popular assertion language used for both FPGA and non-FPGA designs.
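As a reminder of what SVA looks like in practice, here is a small illustrative property; the clock, reset, and handshake signal names are hypothetical.

// Hypothetical handshake check: a request must be granted within 1 to 4 cycles.
module sva_example(input logic clk, rst_n, req, gnt);
  property p_req_gnt;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] gnt;
  endproperty

  a_req_gnt : assert property (p_req_gnt)
    else $error("req not granted within 4 cycles");
endmodule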

Figure 7 shows the trends in terms of assertion language and library adoption by comparing the 2007 Far West Research study (in blue) with the 2010 Wilson Research Group study (in green), as well as the projected adoption trends within the next twelve months (in purple). Note that the adoption of most of the assertion languages is declining, with the exception of SVA, whose adoption is increasing.


Figure 7. Trends in assertion language and library adoption

In my next blog (click here), I plan to focus on the adoption of various verification technologies and techniques used in the industry, as identified by the 2010 Wilson Research Group study.


13 July, 2010

Another frequently asked question: Should I import my classes from a package or `include them? To answer this properly, you need to know more about SystemVerilog’s type system, especially the difference between its strong and weak typing systems.

In programming languages, weak typing is characterized by implicit or ad-hoc conversions between values of different data types without explicit casting. Verilog’s bit vectors, or integral types, exhibit this weak typing by implicitly padding and truncating values to the proper bit lengths – at least proper by Verilog standards. If you perform a bitwise AND of a 7-bit and an 8-bit vector, Verilog implicitly zero-pads an 8th bit onto the 7-bit operand and returns an 8-bit result. In contrast, in VHDL you would have to state explicitly whether you wanted the 7-bit operand padded or the 8-bit operand truncated so that the expression’s operands are of equal size.
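A small sketch of that implicit padding (the module and signal names are mine, for illustration):

module weak_typing_demo;
  logic [6:0] a = 7'b111_1111;   // 7-bit operand
  logic [7:0] b = 8'b1010_1010;  // 8-bit operand
  logic [7:0] y;

  initial begin
    y = a & b;               // 'a' is implicitly zero-extended to 8 bits
    $display("y = %b", y);   // prints y = 00101010
  end
endmodule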

With a few exceptions, all other types in SystemVerilog follow strong typing rules. Strong typing rules require explicit conversions or casts when assigning or expressing operands of unequal types. And understanding what SystemVerilog considers equivalent types is key to understanding the effect of importing a class from a package versus including it from a file.
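Enumerated types are a good example of the strong typing rules: a raw integral value cannot be assigned to an enum without an explicit cast. The type and variable names below are illustrative.

module strong_typing_demo;
  typedef enum logic [1:0] {IDLE, BUSY, DONE} state_e;
  state_e s;
  int     code = 1;

  initial begin
    // s = code;           // illegal: no implicit int-to-enum conversion
    s = state_e'(code);    // the explicit static cast is required
    $display("s = %s", s.name());  // prints s = BUSY
  end
endmodule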

Inheritance aside, SystemVerilog uses the name of a type alone to determine type equivalence of a class. For example, suppose I have these two class definitions A and B below:

class A;
  int i;
endclass : A

class B;
  int i;
endclass : B

SystemVerilog considers these two class definitions unequal types because they have different names, even though their contents, or class bodies, are identical. The name of a class includes more than just the simple names A and B; the names also include the scope where the definition is declared. When you declare a class in a package, the package name becomes a prefix to the class name:

package P;
  class A;
    int i;
  endclass : A

  A a1;
endpackage : P

package Q;
  class A;
    int i;
  endclass : A

  A a1;
endpackage : Q

Now there are two definitions of class A, one called P::A and the other called Q::A. And the variables P::a1 and Q::a1 are type incompatible referencing two different class A’s. Re-writing the above example using an include file creates the same situation – two incompatible class definitions.
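Before looking at the `include version, here is a minimal sketch of that incompatibility; the module name is mine, and packages P and Q are as declared above.

module pkg_clash_demo;
  initial begin
    P::a1 = new();
    // Q::a1 = P::a1;  // compile error: P::A and Q::A are different types
  end
endmodule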

// File A.sv
class A;
  int i;
endclass : A

// File P.sv
package P;
  `include "A.sv"
  A a1;
endpackage : P

// File Q.sv
package Q;
  `include "A.sv"
  A a1;
endpackage : Q

After `including class A into each package, you wind up with two definitions of class A. Using `include is just a shortcut for cutting and pasting text from a file. Importing a name from a package does not duplicate text; it makes that name visible from another package without copying the definition.

// File A.sv
class A;
  int i;
endclass : A

// File P.sv
package P;
  `include "A.sv"
endpackage : P

// File R.sv
package R;
  import P::A;
  A a1;
endpackage : R

// File S.sv
package S;
  import P::A;
  A a1;
endpackage : S

Class A is declared in package P, and only in package P. The variables R::a1 and S::a1 are type compatible because they are both of type P::A. The fact that class A was `included from another file is no longer relevant once the text has been expanded in place; what matters is where the expanded text ends up.
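A quick sketch of the compatible case (again, the module name is mine, and packages P, R, and S are as declared above):

module pkg_share_demo;
  initial begin
    R::a1 = new();
    S::a1 = R::a1;  // legal: both variables are of the single type P::A
  end
endmodule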

When you get compiler errors claiming that two types are incompatible even though they appear to have the same name, make sure you consider the scope where the types are declared as part of the full name. Class names declared in a module are prefixed by the module instance name, so the same module instantiated multiple times will create unique class names, all incompatible types.
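That module case can be sketched as follows, with hypothetical module names:

module m;
  class C;
    int i;
  endclass : C
  C c = new();
endmodule : m

module top;
  m u1();
  m u2();
  // u1.c and u2.c are of types top.u1.C and top.u2.C -- two distinct,
  // assignment-incompatible class types, even though the bodies match.
endmodule : top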

For further information about packages, check out the June Verification Horizons article entitled “Using SystemVerilog Packages in Real Verification Projects”.

Dave Rich

