Joe Davis' Blog
Almost 10 years ago, as the industry was starting to adopt model-based OPC and other resolution enhancement techniques on a large scale, the ITRS got out its looking glass and saw an “explosion” in the size of the files used to describe chip layouts. As a result, a group of industry companies collaborated to create a SEMI spec for the OASIS format for layout data. The format was officially approved by SEMI in late 2005.
In 2006, Peggy Aycinena did a nice interview/article called “Midnight at the Oasis” on EDA Cafe (link to article). She interviewed a number of experts from industry, including Tom Grebinski and Mentor’s own Steffen Schulze, Director of Marketing for Calibre’s Mask Data Prep solutions. One of the interesting questions in that interview was about the shelf life of OASIS — how long would it last? The panel guessed “Yes, 10 years … until 2011, at least”. Now that it is 2011, it is striking to see that OASIS has really been adopted as a standard only in the last few years and is still gaining speed. We have definitely moved from “midnight” to “dawn”, and we can expect to see OASIS start pushing back from the manufacturing areas where it has become dominant into the design areas where data size is swiftly becoming a problem.
There are many reasons why the industry has taken almost as long to really start adopting OASIS as the original panelists thought the format would last. As pointed out in James A. Rodger’s paper on the diffusion of software innovation (see link), how innovations fit into organizational structures and processes can be as important to adoption as the direct benefits of the innovation, or more so. My colleagues and I took a look at the actual adoption curve of OASIS in the industry and the forces affecting its adoption. The original paper was presented at the 2010 European Mask Conference in Grenoble and is available online from the SPIE, and a general-audience article was published on the DAC Knowledge Center. If you are interested in how OASIS has been adopted in the industry, or in the diffusion of innovations in general, I encourage you to take a look.
The bottom line is that OASIS has become the de facto standard for post-tapeout layout files at the most advanced nodes and is rapidly moving back up the design flow. When your full-chip GDSII is 30+ GB and the OASIS is 1 GB, you save a huge amount of time by not having to move those extra 29 GB in and out of your tools every day. Today, the leading-edge chips are much bigger than 30 GB in GDSII, and OASIS really saves the day. Eventually, design flows will have to deal with OASIS even earlier in the design process.
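To put those savings in perspective, here is a back-of-the-envelope sketch using the file sizes quoted above; the 100 MB/s I/O throughput figure is my own assumption, not a number from any benchmark:

```python
# Rough model: data saved per file copy by shipping a 1 GB OASIS file
# instead of a 30 GB GDSII file, at an assumed 100 MB/s of I/O bandwidth.
GIB = 1024**3          # bytes in a gibibyte
MIB = 1024**2          # bytes in a mebibyte

saved_bytes = 30 * GIB - 1 * GIB           # 29 GiB less data to move
throughput = 100 * MIB                     # assumed bytes per second
seconds_saved = saved_bytes / throughput   # per copy, each way

minutes_saved = seconds_saved / 60         # about 5 minutes per copy
```

Five minutes per copy does not sound like much until you multiply it by every tool hand-off, every day, for every engineer on the project.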
Recently, I attended the latest OpenAccess (OA) conference put on by Si2. Attendance this year seemed to be up from last year. Whether the increased attendance was due to the increased adoption that we’ve seen in the industry or the fact that the conference was free this year is unclear. However, it is crystal clear that OA is no longer just a promise, and that adoption has moved from the true early adopters into the mainstream market. Adoption is still in its early stages when you view the industry as a whole, but both our own experiences and the presentations at the conference show that customers want to use OA “out of the box” for their design flows. Perhaps the strongest statement came from Mark Magnum of MicroMagic, who said that OA support was a customer requirement for at least 10 of the evaluations for their 3D layout tool.
Much of the activity revolves around the battle for a piece of the custom design market. Cadence has been aggressively deploying their OA-based version of Virtuoso for several years now, and all of the other players either have OA products on the market or in development. Eric Leavitt of Synopsys introduced their new OA-based custom design tool, cleverly named Galaxy Custom Designer. Eric gets the award for the best quote of the day — “GUI — such a small word, such a lot of code.” Couldn’t agree more, Eric.
Of course, tool developers are finding out that just being on OA doesn’t automatically mean that your tool can completely reproduce everything that is in Virtuoso. There are some very key bits of the database that Cadence was smart enough to keep to themselves. You can be sure that the developers at Cadence will be amusing themselves for years with a lively game of “keep away” with those key pieces, to ensure that Virtuoso compatibility is an ongoing challenge for their competitors.
Another interesting talk was from Michaela Guiney, Change Team Co-Architect from Cadence. While she covered all of the expected topics such as bugs fixed, new enhancements, and so forth, the part that I found most interesting was about constraints for 32nm. Cadence is currently working with customers to define and implement new constraint types for 32nm processes and expects to contribute them in mid-2010. According to Michaela, the focus is on representing those constraints relevant to routers, à la LEF/DEF rules. When asked whether the constraints would also cover the DFM constraints that Jake Burma talked about, she answered (paraphrased), “Yes, we should cover the DFM rules and more for the routers.” The constraints sound like an elegant solution, although it seems they will always be behind the technology curve, since each new constraint type requires new code — which has to go through development, testing, donation, deployment, and so on. I am interested to see how this evolves.
Finally, my favorite talk of the day was from Luigi Capodieci, now at GLOBALFOUNDRIES. Even though his talk wasn’t really about OpenAccess, his presentation was my favorite for two reasons. First, he talked in depth about how DFM-related issues can affect yield and performance variability in real life, and he set a vision for the foundry to provide “IDM-like” DFM collaboration to foundry customers. This is a tall order, but Luigi and his team have a lot of experience with providing the infrastructure to get bleeding-edge designs to yield well. The second reason is that Luigi showed screenshots of Calibre RVE and Calibre LFD in his examples of how GLOBALFOUNDRIES has implemented their DFM flows. We in the Calibre team have been collaborating closely with AMD, the progenitor of GLOBALFOUNDRIES, for many years (see refs: 2006, 2009_a, 2009_b, Gabe on EDA) and look forward to continuing that work as GLOBALFOUNDRIES moves into the foundry business.
As the old joke goes: “Doc, it hurts when I do this.” “Then don’t do that.” Periodically, we get a complaint from someone who is becoming concerned about the time it takes to stream out GDSII from their P&R tool in order to run Calibre. We keep making Calibre faster and faster, so eventually the stream-out time starts to look big and hairy. In the typical final verification loop, you may have to go through the whole stream, verify, fix cycle a few times. If the verification time is 2 hours and it takes more than 20 minutes to stream out, you start to get concerned about the stream time.
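A quick sketch makes the point, using the 20-minute and 2-hour figures from above; the four-iteration count is an illustrative assumption about how many passes a typical final-verification loop takes:

```python
# Model the stream -> verify -> fix loop: every iteration pays the
# stream-out cost on top of the verification run itself.
def loop_minutes(stream_min, verify_min, iterations):
    """Total wall-clock minutes spent streaming and verifying."""
    return iterations * (stream_min + verify_min)

stream_min, verify_min, iterations = 20, 120, 4
total = loop_minutes(stream_min, verify_min, iterations)
streaming = iterations * stream_min
share = streaming / total   # fraction of the loop spent just streaming
# 80 of 560 minutes, about 14%, goes to stream-out alone
```

One sitting in seven spent watching a progress bar is exactly the kind of overhead that starts phone calls.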
The root of the problem is that, if you are doing final verification, you need to merge the top level from the P&R tool with the cells that contain the base layers. In some modern design flows, this can mean merging literally hundreds of files with the top level and streaming it all out. P&R tools are famous for being slow at exactly this.
Luckily, there are several cures for this ill. First, instead of doing the merging in the P&R tool, you can stream out only the top level from the P&R tool and use Calibre to merge the libraries on input. This is very easy and can be implemented very quickly. If you want to go even further, you can introduce another step in the middle, where you use Calibre DESIGNrev’s filemerge utility to merge the hundreds of input files and then push the result into Calibre DRC.
The table below shows the results from just such an example with a real customer test case.
- Merge in P&R and stream out: 120 min
- Merge on input to Calibre: 60 min
- Merge with Calibre DESIGNrev filemerge: 8 min
So, just using the right tool for the job gives a 15x improvement in the time to do the merging that you have to do before you even get to running verification.
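The 15x figure comes straight from the table; a tiny check using the measured times:

```python
# Measured merge times (in minutes) from the customer test case above.
merge_minutes = {
    "merge in P&R and stream out": 120,
    "merge on input to Calibre": 60,
    "Calibre DESIGNrev filemerge": 8,
}

speedup = (merge_minutes["merge in P&R and stream out"]
           / merge_minutes["Calibre DESIGNrev filemerge"])
# 120 / 8 = 15x faster than merging in the P&R tool
```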
Now that almost all of the major custom design tools run on OpenAccess, we often get asked about how well Calibre supports OpenAccess (OA). The truth is that Calibre has supported reading polygonal data from OA since February 2007, and we have kept up with the new releases of OA as they come along. What has really driven adoption of OA in the last year or so has been the release of Virtuoso on OA, the availability of PCELL caching from Ciranova and Cadence, and the IPL effort. Now, the Synopsys and SpringSoft custom design tools are getting on the bandwagon, and suddenly there are enough customers using or evaluating OA-based design tools that the question of Calibre compatibility comes up.
First of all, be confident that the existing stream-based (GDSII) verification flow is not going away. OA adoption is still in a nascent phase and we expect that customers will always prefer to do final verification on the same file that they will deliver to the foundry — the GDSII or OASIS file.
The basic steps to getting Calibre running on OA are the following:
1. Tell Calibre that you want to use OA and point it to the library and topcell
2. Create a lib.defs file to define where the library and topcell are located
3. (optional) Configure the version of OA and the compiler your database used
4. (optional) Configure the location of custom libraries for OA plug-ins
Items #1, #2: Basic set-up. If you don’t have PCELLs in your design, running Calibre from OA is as simple as making a few changes to your rule file or Calibre Interactive and creating a lib.defs file. In Calibre Interactive, you can specify OpenAccess as the source format on the Inputs pane. Alternatively, you can add the following statements to your rule file:
LAYOUT SYSTEM OA
LAYOUT PATH “libraryName”
LAYOUT PRIMARY “topcellName”
where of course “libraryName” is the name of your OA library and “topcellName” is the name of your topcell. Then, you simply have to create a lib.defs file in your home or run directory. A typical lib.defs file can be as simple as the one line “DEFINE libraryName ./lib/topcellName”. These steps cover items 1 and 2 above.
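To make the correspondence concrete, here is a hypothetical pairing of the two pieces, using the placeholder names “myLib” and “top”; the library name in LAYOUT PATH must match the name you DEFINE in lib.defs, or Calibre will not find the library:

```
// Rule file additions: read the design from OA instead of GDSII
LAYOUT SYSTEM OA
LAYOUT PATH "myLib"
LAYOUT PRIMARY "top"
```

The matching lib.defs in the run directory would then contain the single line “DEFINE myLib ./oa/myLib” (assuming, for this sketch, that the library data lives under ./oa).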
Item #3: OA version. Calibre currently ships with the 22.04p007 version of OA. In most cases, this will work for you even if you are using an older version of OA. However, you may be using a very new or pre-release version of OA. In this case, you will need to set the OA_HOME environment variable to point to your OA installation.
Item #4: PCELLs. Most people using custom design tools will also be using one or more forms of PCELLs. The dominant PCELL provider is, of course, Cadence (SKILL-based). However, there are now TCL-based (SpringSoft, Synopsys) and Python-based (Ciranova) PCELLs. You must have a plug-in from one of these providers linked into the OA library to enable Calibre to read the PCELLs from the database. Conceptually, this isn’t difficult, but you have to get the path to the plug-in and the plug-in libraries set up properly, which can be a little challenging when you are used to simply pointing Calibre at a single GDSII file.
If you are having problems getting Calibre to read your PCELLs out of the database, there are a couple of steps to take. First, make sure that the “Abort on PCELLs” option in Calibre is turned off. Second, make sure that you can use the oa2strm utility to create a GDSII file containing the evaluated PCELLs from the database. The oa2strm utility ships with the OA libraries and can be found in the MGC_HOME tree or in your own installation of OA. If your set-up is right, oa2strm will create a valid GDSII file, and Calibre should not have any OA-related problems. On the other hand, if oa2strm doesn’t create a valid GDSII file with all of the evaluated PCELLs, then the problem is in your paths and set-up, not in Calibre.
I hope that this is helpful. We also have a more detailed AppNote that is available if you want more information.
Let us know if you are using Calibre with OA; we would love to hear about your experiences.
DAC is less than two months away… and the phone is starting to ring again: “We are doing demos and realize that we are showing Calibre everywhere. Do you want to participate in our demos?” Of course we do!
Our approach has always been to make Calibre available in every design tool and on every database. This approach is good for everyone. Designs have to be certified “clean” (more on that in a subsequent post) before they can go to the next stage — be it IP creation and characterization, P&R, chip assembly, or tape-out. The only way to be truly successful in physical verification is to be available everywhere in the flow and in everyone’s design tools. Voila — universal integration. That is the product side of the equation — why we work hard to be a great integration partner. But what’s in it for you?
As usual, the “what’s in it for you” depends on your perspective. If you are a designer — someone trying to actually lay down polygons and get a chip out the door — the value is that you have one production solution throughout the flow and in every tool. If you are in a big company and your focus is on running DRC and dispositioning the results to those who need to fix them, you get the foundry sign-off deck, run it on the same tool (Calibre) that your foundry does, and look at the results in a viewer using RVE. If your company changes viewers, no problem — RVE still works. If it doesn’t, just talk to your vendor and we’ll be happy to work with them through OpenDoor.
If you are in a small company and have to do everything yourself, the value is higher. When you are adding those “special” standard cells to the library, you just go to the “Calibre” menu and everything works. When you are doing your SP&R, same thing — Calibre menu and go. Need to do incremental verification? No problem, that’s just a click away. Run only the metal checks? No problem, that’s on the select check menu. Now, you go to chip finishing. Pull in the routing layers, the IP, the pad rings, then back to the Calibre menu and RVE again. It is the same interface for running Calibre and for debugging the results at every stage in the development and in each of the tools that you use along the way. For custom design, you can take your pick of Mentor’s ICStation, Virtuoso®, Laker, Galaxy Custom Designer, or even Tanner or Mentor DESIGNrev. For SP&R, you have a Calibre menu in Olympus, Cadence Encounter, Synopsys IC Compiler, and Magma Blast Fusion.
For the designer, having the same interface across the tools is a convenience that enables them to get more work done with less effort. For the CAD department, universal integration means lower training and support costs. For the project manager, using the same physical verification tool throughout the flow means fewer (there are always some!) problems at tape-out. This is the value that you get across the organization from Calibre integration. It is a value that is hard to measure and never shows up in a benchmark, but it makes a difference in getting your job done.
If you are using a design tool that isn’t integrated with Calibre at the tool level, let me know …
About Joe Davis' Blog
Flows and integrated solutions using Calibre