Posts Tagged ‘RTL’
For years one of the objectives in EDA has been to make formal property checking easy to use and its results easy to understand. With the Automatic formal check feature in the June release of the 0-In Formal tool version 3.0, I think we have made significant progress in this area.
The feature, which predefines a set of assertion rules to look for design issues automatically, makes formal technology accessible to users who are not yet ready to write properties in SystemVerilog Assertions (SVA) or the Property Specification Language (PSL). To make design problems easier to comprehend, the tool highlights the violations directly in the RTL code.
Automatic formal check focuses on three areas inadequately addressed by dynamic simulation:
The first area is functional coverage. Today, when constrained random simulation fails to achieve the targeted coverage goal, engineers have to fine-tune the environment or add new tests. These efforts, often attempted relatively late in the verification cycle, can consume vast amounts of time and resources while still failing to reach parts of the design. In contrast, automatic formal check can be used to identify unreachable code early in the verification cycle, and those targets can then be eliminated from the coverage model. As a result, the coverage measurement is more accurate and you know when you are done.
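As a concrete, hypothetical illustration, the third branch below can never be taken, so formal reachability analysis can prove it dead and its coverage targets can be excluded (the module and signal names are made up for the example):

// Hypothetical fragment: the third branch is unreachable because any value
// of sel below 2 is already handled by the first branch, so a formal
// reachability check can prove it dead and exclude its coverage targets.
module pick (
  input  logic [1:0] sel,
  input  logic [7:0] a, b, c, d,
  output logic [7:0] y
);
  always_comb begin
    if (sel < 2)
      y = a;
    else if (sel == 2)
      y = b;
    else if (sel < 2)   // never true here: unreachable code
      y = c;
    else
      y = d;
  end
endmodule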
The next area is design initialization. If a design cannot be initialized reliably in silicon, it will not function correctly. An obvious precursor, then, is making sure all the registers are initialized correctly at the RTL. If X’es are used, we need to monitor the X creation, propagation, and usage cycle. Dynamic simulation does not interpret X’es the way silicon does, since silicon has only 1s and 0s. Automatic formal check is ideal for verifying register initialization under different modes or configurations. Then, with internal assertions and formal technologies, we can check that even though X’es are created, they are never consumed by downstream registers.
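Below is a minimal sketch of the kind of initialization check this implies, written as a user-level SVA assertion rather than anything the tool itself generates; the register, clock, and reset names are hypothetical:

// A register with a known reset value, plus an assertion that it never
// carries an X into downstream logic once reset has deasserted.
module init_check (input logic clk, rst_n, d, output logic q);
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) q <= 1'b0;   // reset gives q a known value
    else        q <= d;

  assert property (@(posedge clk) disable iff (!rst_n) !$isunknown(q))
    else $error("q is X after reset");
endmodule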
The final area is corner-case design issues. Time and time again, designers unintentionally write code that violates logical correctness. Examples include combinational loops, full case violations, parallel case violations, undriven logic, finite-state machine (FSM) deadlocks, and FSM livelocks. Unless tests are written to specifically target these corner-case issues, they are difficult to exercise. On the other hand, by formally analyzing the design semantics, automatic formal check identifies these design issues statically and creates the stimuli to highlight them to the user.
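For instance, a hypothetical incomplete case like the one below silently infers a latch because one state value is never handled; an automatic check can flag it statically and show the offending value:

// Hypothetical example of a corner-case issue: the case statement has no
// default and no arm for 2'b11, so 'grant' keeps its previous value in that
// state, inferring an unintended latch in what should be combinational logic.
module grant_dec (input logic [1:0] state, output logic grant);
  always_comb
    case (state)          // incomplete case: 2'b11 is unhandled
      2'b00: grant = 1'b0;
      2'b01: grant = 1'b1;
      2'b10: grant = 1'b0;
    endcase
endmodule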
If you would like to know more about the automatic formal check feature in 0-In Formal, please feel free to register for our upcoming seminar in San Jose.
After spending years verifying ASICs with dynamic simulation, I started working on static verification 10 years ago in a startup called 0-In Design Automation. I firmly believe that static verification can complement dynamic simulation. Static verification uses synthesis and formal technologies to find bugs in the design. It does not rely on simulation stimulus. You do not need to exercise the bugs, propagate the results, and check the outputs to detect them.
Static verification includes RTL lint, static checks, formal checks, and automated and assertion-based formal property checking. To read more on static verification, you can take a look at my white paper: Getting Started With Static Verification. If you are interested in formal methods, you can take a look at Harry Foster’s white paper: Why Now for Formal Property Checking. Both can be found in the Knowledge Center of DAC.com.
In the future, we are going to talk about individual static verification technologies and their applications in areas such as RTL verification, clock domain crossing verification, low power verification, timing constraint verification, etc. Your feedback and comments are most welcome.
Noted EDA analyst and guru Gary Smith delivered the keynote address: “ESL: Where We Are and Where We’re Going”
OSCI sponsored the first annual SystemC Day at DVCon 2010. The presentations were video-recorded and are available for free for those who missed DVCon or who may wish to see them again. Gary Smith’s presentation (registration required) and OSCI chair Eric Lish’s OSCI Update lead the video set from SystemC Day.
The 12th North American SystemC Users Group (NASCUG) meeting was part of SystemC Day at DVCon and featured technical presentations on architectural modeling, verification, and analog/mixed-signal design using SystemC.
OSCI ON YOUTUBE: More videos of users and their perspectives on SystemC events and activities can be found via the OSCI channel on YouTube: http://www.youtube.com/officialsystemc
Click here for a complete listing of the technical presentations, which are summarized below.
The Metaport: A Technique for Managing Code Complexity
Jack Donovan, HighIP Design, Texas, USA
The metaport is a coding style that can effectively manage code complexity for complex ESL models, especially models that are intended for high-level synthesis. This presentation will give an overview of the metaport concept and dive into the details of a possible implementation.
OCP Socket Modeling with TLM-2.0
Hervé Alexanian, Sonics Inc., California, USA
This presentation discusses work performed by the OCP-IP Committee, specifically the OCP socket modeling built upon the TLM-2.0 standard.
ADL Synthesis using ArchC
Samuel Goto, Master’s student at UNICAMP
The design and implementation of processors is a complex task. Architecture Description Languages (ADLs) were created to extend existing HDLs, to ease the process of developing and prototyping an architecture by providing a set of tools and algorithms to optimize and automate some of the tedious parts. While much has been done on the specification and business levels of ADLs, there is a huge gap between ADL specifications/simulators and real life processors written in RTL. This project addresses the issues of bringing an ADL description to the RTL level, and reports the development of an extension of ArchC to support this level.
Look Ma, No Clocks! Improving Model Performance
David Black, XtremeEDA, Texas, USA
This tutorial-style presentation illustrates some techniques for avoiding the inclusion of clocks in SystemC simulations and provides results of simple experiments showing the simulation performance benefits. Concepts discussed include synchronization, clock-free timers, and the effects of clocks on performance. A proposal is made for a simple SystemC class that can simplify coding when clocks are thought to be needed.
TLM-driven Design and Verification Methodology
Brian Bailey, independent consultant
SystemC is well on the road to adoption in a number of areas within the Electronic System Level (ESL) space, but many of those are separate islands today. Virtual prototyping has seen a huge leap forward with the standardization of TLM 2.0. SystemC is also being used successfully for high-level synthesis at the module level, but to make SystemC pervasive, there must be a link between the applications. In addition, to reap the maximum productivity gains from a migration to a higher level of abstraction, the verification methodology must also change in significant ways. In this presentation we will explore a new TLM-driven design and verification methodology that is being developed within Cadence, critiqued by their customers, and documented in a book, which will be released over a number of months as pieces of it mature.
I have lots of blog entries about 95% ready to publish. This entry is from an e-mail I wrote a few months ago when somebody asked about SystemVerilog coding guidelines. I thought it would make a good article. It’s been sitting as a draft because I always have trouble finding the right title or opening words to catch people’s attention. I couldn’t come up with anything better, so here it is: SystemVerilog Coding Guidelines
My official position about this is that you pick a style and stick to it.
Which style you pick is far less important than developing the required culture that will follow it. People can spend hours arguing over the merits of camelCaps versus under_score, but once the decision is made, people on the project need to document it, adhere to it, and police it. Otherwise you’re just wasting everyone’s time.
Why are coding guidelines so important? So someone else can read your code, or maybe you can read your own code six months after you wrote it. Good comments and documentation are an important part, but you still need to be able to read the code.
This concept of reading someone else’s code without understanding the underlying conventions reminds me of the classic short article “Meihem In Ce Klasrum” by Dolton Edwards. The article was written as a satirical response to George Bernard Shaw’s final wishes to simplify the English language. See how easy the last paragraph is to read once you understand the new writing conventions. Reading your code should be just like that (although not as cryptic).
My advice would be to pick a style from industry and adapt it to fit SystemVerilog. For example, from Google: http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml or from the Free Software Foundation: http://www.gnu.org/prep/standards/
All of these rules for naming conventions, indentation, and file organization still apply to SystemVerilog. Additional rules would be for unsupported constructs, which should be in the release notes for the tool(s) you are using. Finally, there are a few constructs to avoid, like program blocks and wildcard indexes for associative arrays, most of which might be worth hours of debate, each.
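As one small, hypothetical illustration of the associative-array point (the names are made up):

// The wildcard index form compiles, but leaving the index type unspecified
// restricts how the array can be used and invites tool-to-tool differences;
// an explicit index type is the safer guideline.
module assoc_demo;
  int scoreboard_wild [*];    // wildcard index: a construct to avoid
  int scoreboard     [int];   // explicit index type: preferred

  initial begin
    scoreboard[32'h10] = 5;
    foreach (scoreboard[addr])   // iteration is straightforward with a typed index
      $display("addr=%0h data=%0d", addr, scoreboard[addr]);
  end
endmodule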
I’ve been around simulation and synthesis languages for a while, since back when you needed an NDA to see the Verilog LRM, and again with SUPERLOG, the predecessor to SystemVerilog. It’s easy for those like me to get caught up in the features of the language and forget that any programming language is just a tool. With any technology, people pick the tools they think will get the job finished most effectively. Tools evolve to meet the challenges and requirements of their users. Verilog and VHDL have clearly evolved to become the prevailing languages for hardware design.
But before the language wars came the methodology wars. At the time when Verilog and VHDL were being introduced in the late 1980s, most hardware design was done by schematic gate-level entry. We would come to our clients with our simulators and synthesis tools and try to change their design methodology by writing RTL. They would bring their best engineers to compete with our tools – and the engineer would always win by producing a design with better area and timing! However, once the productivity of synthesizing large designs with practical quality of results prevailed over the manual effort, the methodology shift was an easier sell.
Fast forward a decade – although designs have grown exponentially as predicted by Moore’s Law, RTL design has not changed in any significant way during that time. Why? Because it takes the same number of lines of RTL code to write an 8-bit adder as it does a 64-bit adder. OK, so that’s an over-simplification, but the number of lines of RTL code written by a single design engineer has remained manageable. However, for every registered bit added to the design, the state space doubles, and the transition space for testing all permutations of the state space quadruples. Again, that’s a simplification, but it’s the correct order of magnitude.
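Spelled out, the back-of-the-envelope arithmetic behind that claim is:

% With n bits of state, one more register bit doubles the reachable state
% space and quadruples the number of possible state-to-state transitions.
\[
  |S_n| = 2^n, \qquad |S_{n+1}| = 2^{n+1} = 2\,|S_n|
\]
\[
  |T_n| = |S_n|^2 = 4^n, \qquad |T_{n+1}| = 4^{n+1} = 4\,|T_n|
\]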
During that period, a number of technologies have addressed the increasing complexities of verification, such as constrained random generation, coverage-driven verification, and object-oriented programming. These technologies require a change in verification methodology away from writing a linear set of test patterns.
Winning verification engineers over to these new methodologies is taking the same path as with those earlier design engineers. A single verification engineer may write a single test that exercises a specific piece of functionality much quicker than it takes a constrained random test to reach that same function. But eventually, a constrained random test will exercise more functionality faster than an engineer can write individual tests. Functional Coverage fits into Constrained Random generation to help measure the quality of your tests by telling you whether your constraints are working to exercise the functionality you are required to hit. It takes the randomness out of Constrained Random generation.
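A minimal sketch of how those two pieces fit together, with purely illustrative names, might look like this:

// A constrained random transaction with an embedded covergroup that measures
// whether the constrained stimulus actually hits the cases we care about.
class bus_txn;
  rand bit [3:0] addr;
  rand bit       write;
  constraint legal_addr { addr inside {[0:9]}; }   // keep stimulus in the legal range

  covergroup cg;
    coverpoint addr  { bins low = {[0:4]}; bins high = {[5:9]}; }
    coverpoint write;
    cross addr, write;   // did we hit every addr-range/write combination?
  endgroup

  function new();
    cg = new();
  endfunction
endclass

module tb;
  initial begin
    bus_txn t = new();
    repeat (100) begin
      void'(t.randomize());   // constrained random stimulus
      t.cg.sample();          // functional coverage measures its quality
    end
  end
endmodule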
Since the verification environment is more software design than hardware, Object-Oriented Programming is a technology that helps you write re-usable code, which in turn keeps your verification code manageable.
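For example, a derived class can retarget the stimulus for a new test without touching the rest of the environment (reusing the hypothetical bus_txn class from the sketch above):

// Object-oriented reuse: the added constraint narrows the stimulus to writes;
// everything else about the transaction and environment is inherited.
class write_only_txn extends bus_txn;
  constraint writes_only { write == 1'b1; }
endclass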
SystemVerilog has become the prevailing language that incorporates all these technologies. But does that mean the need for writing directed tests goes away? No. Designers still lay out transistors or gates by hand where it’s critical to their project. Verification engineers should use the technology that is best suited to verify their design.
Firmware tests will continue to be written in C/C++, and SystemVerilog’s DPI can help link C-based tests to the RTL. SystemC has become the prevailing modeling language for DSP and algorithm-based design. Most simulation tools can seamlessly link those models to RTL to either drive the test or compare results.
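A minimal sketch of such a DPI link, with a made-up C function name, might look like this:

// Import a C firmware test through the DPI and call it from the testbench.
module fw_tb;
  import "DPI-C" function int run_firmware_test(input int test_id);

  initial begin
    int status;
    status = run_firmware_test(1);   // call into the C side
    if (status != 0)
      $error("firmware test 1 failed with status %0d", status);
  end
endmodule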
So next time you’re thinking about which language to choose to verify your design, step back and think about the methodology first.
About Verification Horizons BLOG
This blog provides an online forum with weekly updates on concepts, values, standards, methodologies, and examples to assist with understanding what advanced functional verification technologies can do and how to most effectively apply them. We're looking forward to your comments and suggestions on the posts to make this a useful tool.