Posts Tagged ‘1800’

19 September, 2013

It’s hard for me to believe that SystemVerilog 3.1 was released just over 10 years ago. The 3.1 version added Object-Oriented Programming features for testbench development to a language predominantly used for RTL design and synthesis. Making debug easier was one of the driving forces in unifying testbench and design features into a single language. The semantics for evaluating expressions and executing statements would be the same in the testbench and design. Setting breakpoints and stepping through the code would be seamless. That should have made it easier for either a verification or a design engineer to understand a complete verification environment. Or maybe it would enable either one to at least understand enough of the environment to isolate a particular problem.

Ten years later, I have yet to see that promise fulfilled. Most design engineers still debug their simulations the same way they debug in the lab: they look at waveforms. During simulation, they rarely look at the design source code, and certainly never look at the testbench code (unless it’s just basic pin wiggling like a waveform). Verification engineers are not much different. They rely on waveform debugging because that is what they were brought up on, and many do not even realize source-level debugging is available to them. However, the test/testbench is more like a piece of software than a hardware description, and there are many things about a modern testbench that are difficult to display in a waveform (e.g., call stacks, local variables, and random constraints). And methodologies like the UVM add many layers of source-level complexity that most users do not have the time to wade through.

Next week I will be presenting as part of an Industry Special Session during the Forum on specification & Design Languages (FDL, September 24-26, 2013) that will discuss these issues and try to get more involvement from the academic and user communities to help resolve them. Was combining constructs from many languages into one a success? Can tools provide representations of source-level constructs in an easier graphical form? Hopefully, we will not need another decade.


12 August, 2013

Language and Library Trends (Continued)

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 8 click here), I focused on design and verification language trends, as identified by the Wilson Research Group study. This blog presents additional trends related to verification language and library adoption trends.

You might note that for some of the language and library data I present, the percentage sums to more than 100 percent. The reason for this is that some participants’ projects use multiple languages or multiple testbench methodologies.

Testbench Methodology Class Library Adoption

Now let’s look at testbench methodology and class library adoption for IC/ASIC designs. Figure 1 shows the trends in terms of methodology and class library adoption by comparing the 2010 Wilson Research Group study (in blue) with the 2012 study (in green). Today, we see a downward trend in terms of adoption of all testbench methodologies and class libraries with the exception of UVM, which has increased by 486 percent since the fall of 2010. The study participants were also asked what they plan to use within the next 12 months, and based on the responses, UVM is projected to increase an additional 46 percent.

Figure 1. Methodology and class library trends

Figure 2 shows the adoption of testbench methodologies and class libraries for FPGA designs (in red). We do not have sufficient data to show prior adoption trends in the FPGA space, but we anticipate that our future studies will enable us to do this. However, we did ask the FPGA study participants which testbench methodologies and class libraries they were planning to adopt within the next 12 months. Based on these responses, we anticipate that UVM adoption will increase by 40 percent, and OVM adoption by 24 percent, in the FPGA space.

Figure 2. Methodology and class library trends

Assertion Languages and Libraries

Finally, let’s examine assertion language and library adoption for IC/ASIC designs. The Wilson Research Group study found that 63 percent of all the IC/ASIC participants have adopted assertion-based verification (ABV) as part of their verification strategy. The data presented in this section shows the assertion language and library adoption trend related to those participants who have adopted ABV.

Figure 3 shows the trends in terms of assertion language and library adoption by comparing the 2010 Wilson Research Group study (in blue), the 2012 Wilson Research Group study (in green), and the projected adoption trends within the next 12 months (in purple). The adoption of SVA continues to increase, while other assertion languages and libraries either remain flat or decline.

Figure 3. Assertion language and library adoption for Non-FPGA designs

Figure 4 shows the adoption of assertion language trends for FPGA designs (in red). Again, we do not have sufficient data to show prior adoption trends in the FPGA space, but we anticipate that our future studies will enable us to do this. We did ask the FPGA study participants which assertion languages and libraries they planned to adopt within the next 12 months. Based on these responses, we anticipate an increase in adoption for OVL, SVA, and PSL in the FPGA space within the next 12 months.

Figure 4. Trends in assertion language and library adoption for FPGA designs

In my next blog (click here), I plan to focus on the adoption of various verification technologies and techniques used in the industry, as identified by the 2012 Wilson Research Group study.


5 August, 2013

Language and Library Trends

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for a background on the study, click here).

In my previous blog (Part 7 click here), I focused on some of the 2012 Wilson Research Group findings related to testbench characteristics and simulation strategies. In this blog, I present design and verification language trends, as identified by the Wilson Research Group study.

You might note that for some of the language and library data I present, the percentage sums to more than one hundred percent. The reason for this is that some participants’ projects use multiple languages.

RTL Design Languages

Let’s begin by examining the languages used for RTL design. Figure 1 shows the trends in terms of languages used for design, by comparing the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), the 2012 Wilson Research Group study (in green), as well as the projected design language adoption trends within the next twelve months (in purple) as identified by the study participants. Note that design language adoption is declining for most languages, with the exception of SystemVerilog, whose adoption continues to increase.

Also, it’s important to note that this study focused on languages used for RTL design. We have conducted a few informal studies related to languages used for architectural modeling—and it’s not too big of a surprise that we see increased adoption of C/C++ and SystemC in that space. However, since those studies have (thus far) been informal and not as rigorously executed as the Wilson Research Group study, I have decided to withhold that data until a more formal blind study can be executed related to architectural modeling and virtual prototyping.

Figure 1. Trends in languages used for Non-FPGA design

Let’s now look at the languages used specifically for FPGA RTL design. Figure 2 shows the trends in terms of languages used for FPGA design, by comparing the 2012 Wilson Research Group study (in red) with the projected design language adoption trends within the next twelve months (in purple).

Figure 2. Trends in languages used for FPGA design

It’s not too big of a surprise that VHDL is the predominant language used for FPGA RTL design, although we are starting to see increased interest in SystemVerilog.

Verification Languages

Next, let’s look at the languages used to verify Non-FPGA designs (that is, languages used to create simulation testbenches). Figure 3 shows the trends in terms of languages used to create simulation testbenches by comparing the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green).

Figure 3. Trends in languages used in verification to create Non-FPGA simulation testbenches

The study revealed that verification language adoption is declining for most languages, with the exception of SystemVerilog, whose adoption is increasing. In fact, SystemVerilog adoption increased by 8.3 percent between 2010 and 2012.

Figure 4 provides a different analysis of the data by partitioning the projects by design size, and then calculating the adoption of SystemVerilog for creating testbenches by size. The design size partitions are represented as: less than 5M gates, 5M to 20M gates, and greater than 20M gates. Obviously, we find that the larger the design size, the greater the adoption of SystemVerilog for creating testbenches. Yet, probably the most interesting observation we can make from examining Figure 4 is related to smaller designs that are less than 5M gates. Here we see that 58.8 percent of the industry has adopted SystemVerilog for verification. In other words, it is safe to say that SystemVerilog for verification has become mainstream today and not just limited to early adopters or leading-edge design projects.

Figure 4. SystemVerilog (for verification) adoption by design size

Let’s now look at the languages used specifically to verify FPGA designs (that is, to create FPGA simulation testbenches). Figure 5 shows the trends in terms of languages used to create FPGA testbenches, by comparing the 2012 Wilson Research Group study (in red) with the projected verification language adoption trends within the next twelve months (in purple).

Figure 5. Trends in languages used in verification to create FPGA simulation testbenches

In my next blog (click here), I’ll continue the discussion on design and verification language trends as revealed by the 2012 Wilson Research Group Functional Verification Study.


29 July, 2013

Testbench Characteristics and Simulation Strategies

This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (for background on the study, click here).

In my previous blog (click here), I focused on the controversial topic of effort spent in verification. In this blog, I focus on some of the 2012 Wilson Research Group findings related to testbench characteristics and simulation strategies. Although I am shifting the focus away from verification effort, I believe that the data I present in this blog is related to my previous blog and really needs to be considered when calculating effort.

Time Spent in Full-Chip versus Subsystem-Level Simulation

Let’s begin by looking at Figure 1, which shows the percentage of time (on average) that a project spends in full-chip or SoC integration-level verification versus subsystem and IP block-level verification. The mean time performing full chip verification is represented by the dark green bar, while the mean time performing subsystem verification is represented by the light green bar. Keep in mind that this graph represents the industry average. Some projects spend more time in full-chip verification, while other projects spend less time.

Figure 1. Mean time spent in full chip versus subsystem simulation

Number of Tests Created to Verify the Design in Simulation

Next, let’s look at Figure 2, which shows the number of tests various projects create to verify their designs using simulation. The graph represents the findings from the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). Note that the curves look remarkably similar over the past five years. The median number of tests created to verify the design is within the range of (>200 – 500) tests. It is interesting to see a sharp percentage increase in the number of participants who claimed that fewer tests (1 – 100) were created to verify a design in 2012. It’s hard to determine exactly why this was the case—perhaps it is due to the increased use of constrained random (which I will talk about shortly). Or perhaps there has been an increased use of legacy tests. The study was not designed to go deeper into this issue and uncover the root cause. This is something I intend to study informally next year through discussions with various industry thought leaders.

Figure 2. Number of tests created to verify a design in simulation

Percentage of Directed Tests versus Constrained-Random Tests

Now let’s compare the percentage of directed testing that is performed on a project to the percentage of constrained-random testing. Of course, in reality there is a wide range in the amount of directed and constrained-random testing that is actually performed on various projects. For example, some projects spend all of their time doing directed testing, while other projects combine techniques and spend part of their time doing directed testing—and the other part doing constrained-random. For our comparison, we will look at the industry average, as shown in Figure 3. The average percentage of tests that were directed is represented by the dark green bar, while the average percentage of tests that are constrained-random is represented by the light green bar.

Figure 3. Mean directed versus constrained-random testing performed on a project

Notice how the percentage mix of directed versus constrained-random testing has changed over the past two years. Today we see that, on average, a project performs more constrained-random simulation. In fact, between 2010 and 2012 there has been a 39 percent increase in the use of constrained-random simulation on a project. One driving force behind this increase has been the maturing and acceptance of both the SystemVerilog and UVM standards—since these two standards facilitate an easier implementation of a constrained-random testbench. In addition, today we find that an entire ecosystem has emerged around both the SystemVerilog and UVM standards. This ecosystem consists of tools, verification IP, and industry expertise, such as consulting and training.
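
For readers less familiar with the technique, here is a minimal sketch of what constrained-random stimulus looks like in SystemVerilog (my own illustration; the class, constraint, and variable names are hypothetical):

class bus_txn;
  rand bit [31:0] addr;
  rand bit        is_write;
  // constrain randomized addresses to a legal window
  constraint c_addr { addr inside {[32'h0000_1000 : 32'h0000_1FFF]}; }
endclass

// inside a procedural block of the testbench:
bus_txn txn = new();
repeat (100) begin
  if (!txn.randomize())   // the solver picks legal values on each call
    $error("randomize failed");
  // drive txn onto the bus here
end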

Nonetheless, even with the increased adoption of constrained-random simulation on a project, you will find that constrained-random simulation is generally only performed at the IP block or subsystem level. For the full SoC level simulation, directed testing and processor-driven verification are the prominent simulation-based techniques in use today.

Simulation Regression Time

Now let’s look at the time that various projects spend in a simulation regression. Figure 4 shows the trends in terms of simulation regression time by comparing the 2007 Far West Research study (in gray) with the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). There really hasn’t been a significant change in the time spent in a simulation regression within the past three years. You will find that some teams spend days or even weeks in a regression. Yet today, the industry median is between 8 and 16 hours, and for many projects, there has been a decrease in regression time over the past few years. Of course, this is another example of where deeper analysis is required to truly understand what is going on. To begin with, these questions should probably be refined to better understand simulation times related to IP versus SoC integration-level regressions. We will likely do that in future studies—with the understanding that we will not be able to show trends (or at least not initially).

Figure 4. Simulation regression time trends

In my next blog (click here), I’ll focus on design and verification language trends, as identified by the 2012 Wilson Research Group study.


16 July, 2013

It is often said that the English language is one of the most difficult languages to learn: inconsistent spelling rules, and the same words used with different meanings in different contexts. “Why does a farmer produce produce?”

Working on the SystemVerilog IEEE 1800 standard, I understand that even more clearly now. The word class is used many times in different contexts to mean slightly different things that the reader is expected to understand. With brevity, but a little more attention to clarity, I will explain some of these different contexts.

Class Types

When you declare a class, you are declaring a set of members and a set of methods that operate on those members.

class MyClass;
  bit [7:0] member1;
  bit member2;
  function void method;
    $display("members are %h %b", member1, member2);
  endfunction
endclass

Consider this a user-defined type, like a typedef. We are declaring the form and behavior of a type, but nothing is allocated to store the values of this type.

Class Objects

A class object is a particular instance of a class type. Each instance is an allocation of the members defined by the class type. The only way to create an object is to call the class constructor using the built-in new() method of a class. We classify class objects as dynamic objects because executing a procedural statement is the only way to create them.

Class Handles

Each time you call the new() method, it constructs a new class object and the method returns a class handle to the class object. A handle is an indirect reference to a class object, like a pointer to an address in memory. Unlike pointers in other languages such as C/C++, you are very limited in what you can do with a handle in SystemVerilog.
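
To make “very limited” concrete, here is a short sketch (my own illustration, not from the original post) of essentially everything you can do with a handle:

MyClass h1, h2;
h1 = new();        // new() returns a handle to a freshly constructed object
h2 = h1;           // copy the handle – not the object it references
if (h1 == h2)      // compare two handles for equality
  $display("h1 and h2 reference the same object");
h2 = null;         // drop a reference by assigning the special value null
// There is no handle arithmetic, no casting to an integer address, and no
// address-of operator, as there would be with a C/C++ pointer.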

Class Variables

A class variable is where you store the class handle that references a particular class object of a particular class type. Declaring a class variable does not create a class object, only the space to hold the class handles. This contrasts with other data types where the declaration of a variable creates an object of that type and gives you a symbolic name to reference those objects. For example:

typedef struct {bit [7:0] member1; bit member2;} MyStruct;
MyStruct StructVar1, StructVar2;

This creates and allocates the space for two MyStruct type objects and you can use StructVar1.member1 to access one of its members. On the other hand:

MyClass ClassVar1, ClassVar2;

This declares two MyClass class variables, but only allocates the space to hold handles to MyClass objects, not the objects themselves. If you tried to access ClassVar1.member1 now, you would get a null handle reference error because the initial value of a class variable is the special value null. One of the nice things about handles instead of pointers is that they remove the possibility of corrupt object references that access uninitialized or de-allocated memory.
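
In practice this means a first dereference should be guarded; a small sketch (my own, not from the original post):

if (ClassVar1 == null)   // class variables start out as null
  ClassVar1 = new();     // construct an object before the first use
$display("%h", ClassVar1.member1);  // now safe to dereference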

Class Types, Objects, Handles, and Variables

Once you have a class variable, you can call the new() method to construct a class object:

ClassVar1 = new();

This calls the constructor of the MyClass type, which returns a handle to a MyClass object, and then stores that handle in the MyClass variable ClassVar1. You can now access ClassVar1.member1 because ClassVar1 references an actual object. If you then do:

ClassVar2 = ClassVar1;

Both class variables now reference the same class object – but there is still only one object of MyClass. ClassVar1.member1 and ClassVar2.member1 refer to the same class member.
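
A short illustration of the aliasing this creates (my own addition):

ClassVar1.member1 = 8'hAA;
$display("%h", ClassVar2.member1);  // displays aa – a write through ClassVar1
                                    // is visible through ClassVar2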

Class Dismissed

Just remember to add a few extra words when mentioning classes and it will make things much clearer to the reader.

There are whole other classes of class thingies I could have explored, but I hope this class on classes will motivate you to learn more of the subtle meanings behind the words.


5 December, 2012

IEEE Std. 1800™-2012 Officially Ratified

The IEEE Standards Association (SA) Standards Board (SASB) officially approved the latest SystemVerilog revision, Draft 6, as an IEEE standard.  The SASB Review Committee (RevCom) agenda and the SASB agenda included review and formal approval of the latest work by the IEEE Computer Society Design Automation Standards Committee’s (DASC) SystemVerilog Working Group at their December 2012 meeting series.

What’s New?

The new standard has many new features, numerous clarifications and various corrections to improve the standard and keep pace with electronic system design and verification.  DVCon 2012 included a session presentation, “Keeping Up with Chip – The Proposed SystemVerilog 2012 Standard Makes Verifying Ever-Increasing Design Complexity More Efficient,” that detailed the standard.  The paper was written by Stuart Sutherland (Sutherland HDL, Inc.) and Tom Fitzpatrick (Mentor Graphics).  You can find a copy of the paper here at the DVCon 2012 archive and the presentation can be found at Sutherland HDL’s site here.

For users of Mentor Graphics’ Questa Verification Platform, many of the major SystemVerilog 2012 features can be used today, like multiple inheritance.  As Stu and Tom said in their presentation, “This is BIG!”  If you read their full paper, they discuss some ways this new feature might be useful for a UVM testbench.
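
For readers who have not yet seen the paper, SystemVerilog 2012 provides multiple inheritance through interface classes rather than C++-style multiple base classes; here is a minimal sketch (my own, with hypothetical names):

interface class printable;
  pure virtual function void print();
endclass

interface class packable;
  pure virtual function bit [7:0] pack();
endclass

// a single class may implement any number of interface classes
class frame implements printable, packable;
  bit [7:0] data;
  virtual function void print();
    $display("frame data = %h", data);
  endfunction
  virtual function bit [7:0] pack();
    return data;
  endfunction
endclass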

Major work was done to augment the current notion of constraints in SystemVerilog.  In past versions of the standard, all constraints were what are now known as hard constraints: all the conditions of the constraints had to be met, otherwise there would be an error.  There was no built-in method to relax the need to satisfy the constraints.  Given that multiple constraints are the norm for testbenches today, the potential for conflicts between them is high.  To alleviate this, the SystemVerilog Working Group introduced soft constraints to the standard.  If you are interested in the details of what was proposed to be added to the standard, you can reference the full proposal here that is included in the standard.  Stu and Tom said that “This is also a big enhancement!”
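
A minimal sketch of the soft constraint idea (my own example; the names are hypothetical):

class packet;
  rand bit [7:0] length;
  // soft: satisfied only when nothing else contradicts it
  constraint c_len { soft length == 8; }
endclass

packet p = new();
void'(p.randomize());                         // length == 8 – the soft constraint holds
void'(p.randomize() with { length > 100; }); // the inline hard constraint wins;
                                              // the soft constraint is discarded, no error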

Availability

IEEE 1800™-2012 has only now been approved.  The standard itself is not yet ready to be published.  Plans are to have it published before DVCon 2013, which is scheduled for late February 2013.  I will share publication information as it becomes available.  And I hope you join me at DVCon 2013, where we can celebrate the unveiling of the published standard.

While the IEEE publication will be the authoritative source on the standard, I have pointed to the presentation and paper by Stu Sutherland and Tom Fitzpatrick for information on the new standard that you can reference now.  For those who depend on assertions, you will find SystemVerilog-2012 has a major update, with enhancements for properties and sequences in the areas of immediate assertions, data type support, argument passing, vacuity definitions, global clock resolution, inferred clocking in sequences and much more.  You may find the SystemVerilog Assertions Handbook 3rd Edition by Ben Cohen, et al. to be of value as well.  You can find more information about it on Amazon.com here.

The Story Continues…

There is much more to the SystemVerilog-2012 story, and I will share more of it in the months ahead.  The global team of experts who put this standard together has been an outstanding collection of individuals, ranging from producers and suppliers of electronic design automation software to consumers of that technology, who have ensured the language can be used to design and verify the most demanding of electronic systems.

Stay tuned!  For now, I encourage you to get informed!


10 September, 2012

OVM Bridges SystemVerilog and SystemC Languages

When UVM Connect was first released, the multilingual connection between the IEEE Std. 1800™ (SystemVerilog) and IEEE Std. 1666™ (SystemC) standards bridged the two languages to allow design and verification engineers to access UVM from SystemC or SystemVerilog and exploit each language’s native advantages.  OVM users wondered if it was possible to support them as well, since UVM is derived from OVM.

It is possible and UVM Connect has been extended to allow OVM users to enjoy the same benefits.  An update to UVM Connect now allows it to be compiled to run with the OVM.  And since the extensions are based on IEEE standards, they can be used in your simulator of choice.

OVM Thrives

The thriving OVM community comes as no surprise.  Last year, Harry Foster blogged about research on the use and adoption of verification methodologies.  The research was done after UVM was established as an Accellera standard, and showed OVM continued its leading position, as shown in one of the charts from Harry’s blog (see below).  The chart even showed OVM was predicted to have modest growth in adoption as well.

Mentor continues to bring many of the UVM additions back to the OVM user community in a way that does not disturb the upgrade path from OVM to UVM.  The major addition to UVM in the first round of Accellera standardization was the addition of a register and memory package.  This was back ported to OVM.  (The OVM register and memory kit can be found here, if you are interested.)  Now, UVM Connect has been extended to provide full OVM use.

Download

The UVM Connect 2.2 kit supports multilingual use of OVM and can be found at the Verification Academy and the Accellera UVM World contributions download site.

If you find issues or have other suggestions that we should consider, you can always share your input at the OVM Forum or UVM Forum.  In addition to interacting with other users, the Verification Academy is a good site for online resources like the UVM/OVM Cookbook, basic and advanced OVM/UVM training, and more.


22 March, 2012

Remembering Don Loughry


On a business trip to India in 2009, I was asked to come by the Mentor office in Noida to meet with some “freshers” and other participants in Mentor’s Displaced Worker Program who were in the middle of SystemVerilog training.  As one of the many people who have been engaged in the development of the SystemVerilog standard (aka IEEE Std 1800™-2009) over the past decade, I found they were curious to know how I became involved in its development.

“How did you get involved in standards?” I was asked.

“My work on SystemVerilog comes from an early exposure to IEEE standards, much like you are getting today,” I told them.

In the late 1970s, a visiting lecturer from Hewlett-Packard spent a year at UC Davis, where I went to school.  One of the courses I took was on hardware interfaces to computers and borrowed from the Hewlett-Packard Interface Bus (HP-IB).  While we all called the protocol HP-IB, it was already an IEEE standard.  Today it is known as IEEE Std 488.1™-2003.

In addition to the normal material that had to be purchased for the class, I also had to buy a copy of the IEEE standard.  My first thought was that the standard was expensive!  Looking inside the standard, it appeared as if someone had used an IBM Selectric typewriter to write it and inserted hand-drawn state diagrams.  Maybe I bought a draft of the standard instead.  It was nothing like the IEEE standards of today.

Recently I visited IEEE Xplore and downloaded the current standard, and the content, as I would expect, looks nothing like the one I bought for my class.  The print is professional, as all the standards look today.  Even the state diagrams are computer generated.

This was the first IEEE standard I bought, studied and used to build prototype interfaces.  While one might have expected we would have spent 100% of our course time on the application of what we were learning, we did not.  We got a dose of indoctrination on the importance of standards.  “There may be times in your professional career when you may need to volunteer on standards development: Do it,” we were told.

This is the story I related to those learning SystemVerilog in Noida.  I told them the knowledge they gain may prove to be indispensable in the work they do in the years ahead.  But thank you for the question on how I got involved in standards, as it reminds me I should encourage you to be mindful of standards in your future.  Let me pass on what I learned from Hewlett-Packard: if there is a time in your professional career when you may need to volunteer for standards development, do it.

My Mentor, In Pectore

In late 2006, my home phone rings.  I answer.  “Hi, this is Don Loughry calling on behalf of the IEEE and I have some good news to share with you.”  “What is the good news?” I ask.  “You have been elected to the IEEE Standards Association Board of Governors.  As past chair it is my privilege to bring you this news,” he says.  […] “Thank you, I look forward to serving,” I said as I concluded the call.

Many weeks later, my office phone rings and I answer.  “Hi, this is Don Loughry calling.  Dennis, is this you?” he asks.  “Yes, this is Dennis,” I say.  “Did you see the email I sent to you asking if you would join the Charles Proteus Steinmetz awards committee?” he asked.  “No, I can’t recall seeing that email.  Does your email come in with your first or last name listed?” I asked.  “Neither,” Don told me.  “You will see my email address as ‘Sunkist,’” he said.  “Oh, I thought I got some message from the ‘orange company’ and did not read it.  Let me do that now,” I said.  And, yes, I joined the committee.  [From this moment on, Don Loughry was known to me as Sunkist, though I never told him.]

Not too long ago, I related the story of getting involved in standards – the story above – with the now chair of the IEEE SA BoG, Steve Mills.  Steve is with Hewlett-Packard Co. and told me that standardization of HP-IB/IEEE 488 was the work of Don Loughry.  He was also instrumental in setting a corporate culture that was pro-standardization, and Steve told me the encouragement I got to “think standards” while in college is “all Don.”

Interesting, I thought.  How I got here has a lot to do with what Don Loughry has done.  This was not self-evident to me; it was kept secret, in pectore, from me – and from Don for that matter.  Don, my mentor, in pectore.

As you have read the title of the blog, you know there is some sad news to share.  This is it:

Don passed away about a month ago.  And as I write this, family and friends plan to gather this weekend to remember him.  While his life will be recounted by personal and professional accomplishments extraordinaire – and Don’s are certainly substantial by any measure – his ripples on the pond of life continue to radiate and touch many.  In my case, his call to volunteer for standards has become my endeavor.  As Don issued his call to action, so have I – with those I met in Noida in 2009, and as I do now with you, dear reader of this blog.

Expression of Gratitude

While Don led the development of IEEE 488, he was also key to the development of IEEE 802.3 (the Ethernet LAN standard) that connects hundreds of millions of machines around the world today.  We should all be grateful for that.

He launched the IEEE Standards Association and served as its first president.  We all benefit from his vision.  Standards developers around the globe are grateful for this.

And as for Don appointing me to be a member of the Charles Proteus Steinmetz committee, I went on to be its chair for a couple years.  I am grateful for his trust.

As an aside, Don was given the 2003 Steinmetz award.  Having served on the committee and as its chair, I was offered one action of privilege this year.  That was to appoint myself to be a member of the committee one last time as its past chair.  I appointed myself.  Thank you, Don, for your initial appointment to this committee.

The week before last, while in India, after concluding a long week of meetings for the IEEE SA Corporate Advisory Group, it was bittersweet as I dialed into my last Steinmetz committee meeting.  I could not finish the call in my hotel room before having to check out and share a ride to the Bangalore airport.  Therefore I continued the call on my mobile phone in the car.  I thank my friend from Broadcom for sharing his car to the airport with me.  And, knowing Broadcom may like 802.3 a bit, perhaps I can be forgiven for this minor annoyance – knowing the rest of the story now.  After all, “How did I get here?”  How did I come to be on the phone for this call at this moment?  In large measure by Don, the same person who helped sow the seeds that Broadcom reaps today with 802.3.

To Sunkist

Yes, I know why Don’s email address has “sunkist” in it.  I came to learn why when we were on the Steinmetz committee together, where he participated as “past chair.”  And no, it is not about oranges.  However, oranges will be one of those things that will remind me of him.  So why is his email address that way?  Well, let’s say that is one thing I will keep in pectore.


10 November, 2011

IEEE Announces Revision to IEEE 1666™ – Adds Transaction-Level Modeling Support

The IEEE has taken a significant step forward to address standards for advanced system-on-chip (SoC) designs.  The IEEE announced that the new revision of the SystemC standard, known as IEEE 1666™-2011, has been approved.  While it is a revision of the current SystemC standard, IEEE 1666™-2005, the major new feature added was Transaction-Level Modeling (TLM), which is new to an IEEE standard.

For many years now, the TLM specification and accompanying open source code has been incubating in the Open SystemC Initiative (OSCI).  OSCI’s TLM Working Group has developed the TLM 1.0 and TLM 2.0 specifications, both of which are part of the revised IEEE 1666 standard.  TLM is important to SystemC, but it has also been leveraged outside of it.

We at Mentor Graphics pioneered the use of TLM in SystemVerilog (IEEE 1800™-2009) when our seminal open-source work on the Advanced Verification Methodology (AVM) brought an implementation to the verification community based on SystemVerilog.  This lives on today, as AVM motivated the Open Verification Methodology (OVM), which became the basis for Accellera’s Universal Verification Methodology (UVM).

If you don’t already know what TLM is and how the verification community is using it in OVM and UVM, the Verification Academy has a lot of written material and video training modules that will help you learn how this important new IEEE standard is used from simulation to emulation and has boosted verification productivity.  The “Understanding TLM” module is featured in the Advanced UVM section, so if you are still a novice to UVM, you may wish to start with the Basic material first.  This module is presented by fellow Verification Horizons Blogger, Tom Fitzpatrick and offers subtitles in English, Russian, Japanese and Chinese (Traditional & Simplified) to help drive rapid global adoption.
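
As a taste of what the TLM style looks like on the SystemVerilog/UVM side, here is a minimal sketch of a TLM-1 blocking put connection (my own illustration; my_txn and the component names are hypothetical):

class producer extends uvm_component;
  uvm_blocking_put_port #(my_txn) put_port;
  function new(string name, uvm_component parent);
    super.new(name, parent);
    put_port = new("put_port", this);
  endfunction
  task run_phase(uvm_phase phase);
    my_txn t = new();
    put_port.put(t);   // a transaction-level send – no pins, no clock edges
  endtask
endclass

// in the enclosing environment's connect_phase:
//   prod.put_port.connect(cons.put_export);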

As we brought TLM into modern verification methodology practice with a SystemVerilog implementation, it also became apparent that there was an opportunity for the creator of TLM, OSCI, and an adopter of it in UVM, Accellera, to discuss what they could do together.  And as I’ve blogged before, those two organizations announced their intention to unite before the end of 2011, as others have seen the potential when both are brought together.  I expect to see more great ideas come from these two groups when they join forces, just like the TLM work that is now an IEEE standard.

For those who want a copy of the revised IEEE 1666 standard, it is still in final IEEE editorial review as they do their last formatting.  I will share with you when it is ready to use, as well as how to get it and where to find it.


5 October, 2011

Is Legacy Holding You Back?

Harry Foster, Mentor’s Verification Chief Scientist, will take center stage to give live presentations on pressing SoC verification issues, highlighting recent research he has been reporting on in his numerous blog posts. The first event will be held in San Jose, CA USA (18 October 2011) and the second event will be held in Reading, UK (15 November 2011).

Harry has been reporting on the 2010 Wilson Research Group Functional Verification Study, which has shown a rapid market move towards the broadly supported SystemVerilog (IEEE 1800) language standard and ubiquitous support of the OVM/UVM methodologies. While humans have a general disdain for change, human nature also seems to wait for the “crowd effect” before making a change. It appears the market is in the throes of this strain as it moves in a direction that leaves legacy behind.

To learn firsthand from Harry, I recommend attending two upcoming events where he will speak:

Date: 18 October 2011 (Tuesday)
Event: Design & Verification in the SoC Era
Location: DoubleTree – San Jose, CA USA
Website: http://www.mentor.com/events/verification/
Cost: Free; registration restrictions apply

Date: 15 November 2011 (Tuesday)
Event: Verification Futures: The Next Five Years
Location: Hilton Hotel, Reading, UK
Website: http://verificationfutures2011.eventbrite.com/
Cost: Free

Legacy set for replacement?

Have you ever noticed that one restaurant alone may get little traffic, but if there are many restaurants clustered together, they garner much greater traffic than any would alone? The crowd effect demonstrates its power and user benefit with choice and bounty. After DVCon 2011, I blogged about Wally Rhines’ keynote address and pointed to one slide that showed SystemVerilog is the clear language winner, and to another that showed OVM/UVM, built on top of SystemVerilog, as the clear methodology winner.

This has impact on legacy. And those with entrenched legacy may find it hard to adopt market-driven standards practices quickly. This is to be expected.

When Accellera began its Verification IP Technical Subcommittee (VIP-TSC), I argued that the first step was to preserve legacy investment and offer a path to reuse that which had proven valuable in the past. The vote to move in this direction was close, with consumer input saying all efforts should focus on a single industry-supported base class library and standard. My point was we could build it, but if there was no path from where consumers were, there would be limited uptake. In a short time, a proof that OVM and VMM could interoperate demonstrated that we knew how to do this. It also gave hope that other proprietary and single-supplier solutions could take this work and adapt it for their paths forward.

With that finished, the Accellera VIP-TSC set to create the Universal Verification Methodology (UVM) standard. This has now been completed, short of finishing one commitment to expand the Phasing scheme and address a few lingering issues. While Accellera could focus on completing this work, users and owners of legacy verification languages and proprietary environments have come to realize a startling truth: the market has moved away from them. And, proprietary and single-solution suppliers have offered little in terms of paths forward. They now look for Accellera to address legacy preservation requirements and do it for them.

While this was to be expected, their shock has exposed the fact that more work could have been done on building the bridges to legacy’s past in the initial phase rather than now when the market demands time and focus on its adopted standards practice instead.

Why bring all this up?

We now find the Accellera VIP-TSC has a bifurcated focus. Part of the focus is to complete the content promises for UVM 1.0 and the other is to preserve legacy investment. But can Accellera overcome the crowd effect? The crowd effect, after all, has taken hold. In terms of product choice, legacy offers one product from a single supplier to SystemVerilog’s multiple competitive suppliers. When it comes to bounty, the availability of legacy verification IP has fewer and fewer sources while OVM/UVM offer an expanding bounty.

In the face of this rapid market move, one can expect single solution suppliers will extol features of their solution over the market’s choice. Users faced with the grim prospect of having to adapt to market changes will praise the past in hopes others will depart from the crowd. I am at a loss to think of a time when actions like this have worked to change the market. Maybe someone knows of examples and can share them.

In fact, I was a user who praised the technical benefits of one format over another. I made further investments in it. I even moved to a new job in a new area, only to find the community I moved to seemed to favor my selected format equally with what was to be the market winner. In time, in very short time, even my new community gave way to the market and the crowd. Can you guess what that format was?

I will share the details of this with you next week when I discuss how one might actually bring value to legacy while allowing the market to continue its move forward. In the meantime, if you are close to the San Jose, CA or Reading, UK events, I suggest you register to attend.

