Assuring Software Quality

In my last blog I discussed the importance of support and the value it provides in the physical verification space.  As indicated, one of the key components of providing that support is an infrastructure that helps to ensure quality software releases in the first place.  In this blog, I will provide more insight into the procedures within the Calibre organization that help to maintain the high standards of quality that Calibre has become known for.

For Calibre, quality starts at the very beginning.  When new features or functions are conceived and planned, marketing takes the initial role in scoping the expected behavior.  Depending upon the complexity of the functionality, this is often done through project teams that consist of marketing, development, product engineering, and QA, and often customer support and documentation as well.  Throughout the definition process, the customer's goals and requirements are kept first and foremost.  With those goals in mind, various implementation proposals are explored.

While R&D develops the functionality, the developers typically incorporate any tests that come from customer support or from customers directly, as well as add their own validation tests.

Ultimately, however, product engineering is responsible for verifying that the final solution meets the initial requirements and does not break any existing functionality.  Functional tests can come from customers, marketing, customer support, R&D, and product engineering's own generated test cases.  All of these tests are collected to form an initial baseline test suite for the specific set of functionality.

Once an agreed-upon implementation is put into the code, testing can begin.  For each new function, the functional tests are run and validated.  In addition, all historic functional tests must also be run and validated.  This ensures that new functionality has not introduced an unexpected change to any existing behavior.
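The pattern described above is standard regression-baseline testing: every test's output is compared against a stored "golden" result, so any behavioral drift is caught immediately.  A minimal sketch of the idea, in Python with hypothetical test names and a stubbed-out runner (this is illustrative only, not Calibre's actual harness):

```python
# Regression-baseline sketch: run each functional test and compare
# its result against the stored "golden" baseline.
# Test names and run_test are hypothetical stand-ins.

def run_test(name):
    # Stand-in for invoking the real tool on a test case and
    # summarizing its output (e.g., a violation count or checksum).
    results = {"drc_spacing_basic": 0, "lvs_netlist_compare": 0}
    return results[name]

# Golden results captured from a previously validated release.
BASELINE = {"drc_spacing_basic": 0, "lvs_netlist_compare": 0}

def run_regression(tests, baseline):
    """Return a list of (test, expected, actual) for every mismatch."""
    failures = []
    for name in tests:
        actual = run_test(name)
        expected = baseline[name]
        if actual != expected:
            failures.append((name, expected, actual))
    return failures

failures = run_regression(BASELINE.keys(), BASELINE)
print("PASS" if not failures else f"FAIL: {failures}")  # → PASS
```

New functionality adds new entries to the baseline; existing entries must keep matching release after release.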

Unlike many solutions, Calibre is a platform of offerings.  It consists of a single processing engine with solutions built on top of it for DRC, LVS, parasitic extraction, reticle enhancement technology, mask data prep and fracture, and the list goes on.  It is important to ensure that a change made for one offering, such as DRC, does not have an adverse effect on another application, such as OPC.  This means even further testing and validation.

Because a product like Calibre can be run in several modes (flat, hierarchical, multi-threaded, distributed, hyper, …) it is critical to validate that all modes deliver the same expected behavior release to release.  This means that all those functional tests must be run and compared in every possible configuration.  In addition, these configurations must also be run across all supported hardware and OS platforms.  This translates to literally thousands of validation runs that must be performed for each new release.
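The combinatorial growth described above is easy to see by expanding the full cross-product of tests, modes, and platforms.  A quick sketch (the counts, mode names, and platform names are invented for illustration, not an official list):

```python
# Sketch of the full validation matrix: every functional test in
# every execution mode on every supported platform.
# All names and counts below are illustrative placeholders.
import itertools

MODES = ["flat", "hierarchical", "multi-threaded", "distributed"]
PLATFORMS = ["linux-x86_64", "solaris-sparc"]
TESTS = [f"functional_{i}" for i in range(250)]  # hypothetical suite size

# One entry per (test, mode, platform) combination that must be run.
matrix = list(itertools.product(TESTS, MODES, PLATFORMS))
print(len(matrix))  # → 2000  (250 tests x 4 modes x 2 platforms)
```

Even with these deliberately small numbers the matrix reaches thousands of runs; with a real suite and the full platform list, the totals grow far larger.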

In addition to functional testing, it is also critical to validate performance and capacity.  With Calibre, the goal is to continue to lead the industry in performance, and in most releases performance improvements can be anticipated.  At an absolute minimum, it is critical to ensure that performance and capacity have not degraded.  To validate that these goals have been met, large design tests, specifically targeted to tax the performance limits, are also run.  Typically these consist of large designs and rule files from key partner customers.  By providing such cases, customers can feel confident that similar future designs will continue to see performance gains.

With each new release providing key enhancements for every tool offering in the Calibre platform, this testing becomes a critical component of the release process.  For the Calibre platform, there are four new releases planned each year, typically released in the middle of each calendar quarter.  While it is not expected that every customer will upgrade to a new release every quarter, it is important that new releases are available frequently, so that new functionality is available promptly when a customer does choose to transition.

Of course, even with all of the testing that is done, it is impossible to guarantee that no bugs remain.  To help address that, in addition to each official release, there are typically two to three update releases for Calibre.  These update releases build upon a previous official release and consist of bug fixes and minor enhancements.  They give users a way to adopt new functionality or gain access to bug fixes without the additional risk associated with other changes.

For the Mentor Graphics Calibre platform, the responsibility associated with sign-off quality and accuracy is taken very seriously.  When you add up all of the official releases and all of the update releases provided, the number of tests required per year for Calibre quickly climbs to the order of hundreds of thousands, consuming several thousand CPUs at near 100% utilization around the clock!  Can your other vendors claim the same?  When they fail, will they be there to bail you out?  In my next blog, we’ll examine in more detail how Mentor Graphics has organized to provide the best support possible in the event that you do have a problem.

TTFN,

John Ferguson


Posted March 15th, 2010


