Process Variation: The Use of In-Die Variation

Thanks everyone for voting on my posting:

Fun and edgy parasitic extraction blog?

Since I didn’t exactly get consensus on what topic I should work on next, I thought I’d pick two topics that a few of you wanted. Here it goes:

Topic 15: Why was my previous car named Bob?

Answer:

My old white Toyota Camry was called Bob, because it was sooooooo boring that I named it the most boring name I could think of. Unfortunately, Bob is also my father-in-law’s name. But it’s not commentary on him at all, just the car. :)

My husband’s red Ford F-150 is called “Cotton-Eyed Joe” because it’s a hick name for a hick truck.

My current white Ford Escape is nameless. It did originally have the name “The Cloud Car” because it housed my Care Bears and it was white, but it just didn’t have the right vibe. Hey, maybe I’ll have a “Name My White Ford Escape” contest. What do you think I should name it? The winner will get a lovely Calibre prize package, including a white Calibre dress shirt, Calibre golf balls, and a Calibre stopwatch. Just respond to this blog with your suggested name, and you’ll be entered into the contest.

Now for something a little more serious…

Topic 11: Process variation – the use of in-die variation

Process variation refers to the thickness and width variation that occurs during the chip fabrication process. If in-die process variations are not modeled accurately, the potential for silicon failure is very high. When doing extraction at the cell and block level, it is possible to complete extraction with process variation models such as in-die variation tables, but since metal fill is only inserted at the full-chip level, calculations such as density only make sense at the full-chip level. Therefore, at the early cell- and block-level stages of the design flow, it makes sense to calculate in-die variation using an estimated density; then, after full-chip metal fill insertion, a more accurate extraction can take place.

Parasitic resistance, capacitance, and inductance are calculated based on the drawn dimensions of the polygons and on process information such as metal thickness, dielectric thickness, and the dielectric constant. But control of fabrication tolerances has not kept up with the fast rate of technology shrinks. Chemical mechanical polishing (CMP) is a technique used to manufacture copper interconnect: because a slurry is used to grind down the copper, wider conductors will lose more copper than narrower ones. Since the CMP process changes the thickness, the parasitic resistance, capacitance, and inductance of that metal will shift.
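As a rough illustration of why this matters for extraction, here is a minimal sketch of how a wire's resistance shifts when CMP erosion thins the conductor. The dimensions and resistivity below are illustrative values, not data from any real process.

```python
# Illustrative only: how CMP thickness loss shifts wire resistance.
# R = rho * L / (W * t), so a thinner conductor is proportionally more resistive.

RHO_CU = 1.7e-8   # approximate bulk copper resistivity, ohm-meters

def wire_resistance(length_m, width_m, thickness_m):
    return RHO_CU * length_m / (width_m * thickness_m)

# Drawn dimensions: 100 um long, 1 um wide, 0.5 um thick copper line.
nominal = wire_resistance(100e-6, 1e-6, 0.5e-6)
eroded  = wire_resistance(100e-6, 1e-6, 0.5e-6 * 0.8)   # 20% CMP thickness loss

print(f"nominal: {nominal:.2f} ohm, eroded: {eroded:.2f} ohm")
# 20% less thickness -> 25% more resistance; the capacitance to neighboring
# layers also shifts, because the interconnect stack geometry has changed.
```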

Another reason the manufactured interconnect differs from the drawn dimensions is the limitations of the manufacturing process, which is constrained in part by the wavelength of light used in photolithography. To overcome this limitation, optical proximity correction (OPC) is used: OPC is a method for manufacturing structures with dimensions smaller than the wavelength of the light used to illuminate the wafer. Even with OPC, the manufactured dimensions will vary from the drawn dimensions, and like CMP, these OPC effects alter the parasitic resistance, capacitance, and inductance of the interconnect.

There are three main methods used to feed process variations into a parasitic extraction engine: process corners, in-die variation, and statistical analysis.

With process corners, several different extraction rule files are created, depending on the changes to the process layer information. For example, the metal, polysilicon, and dielectric layers will each have a minimum and maximum thickness, depending on the variability of the process. Using combinations of the typical, minimum, and maximum thicknesses, different extraction rule files can be created; then, using these process corner files, several extraction and re-simulation runs can be completed.
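To make the corner flow concrete, here is a minimal sketch of a corner-sweep driver. The tool names, rule file names, and command-line options are hypothetical placeholders, not any vendor's actual commands.

```python
# Hypothetical corner-sweep driver: one extraction rule file per process
# corner, each extraction followed by a re-simulation.

CORNER_RULE_FILES = {
    "typ": "rules_typ.calibre",   # typical metal and dielectric thicknesses
    "min": "rules_min.calibre",   # minimum thicknesses
    "max": "rules_max.calibre",   # maximum thicknesses
}

def corner_commands(corner, rule_file, layout="top.gds"):
    """Build the (placeholder) extraction and simulation command lines."""
    netlist = f"top_{corner}.spf"
    extract = f"extract_tool -rules {rule_file} -layout {layout} -out {netlist}"
    simulate = f"simulate_tool {netlist}"
    return [extract, simulate]

for corner, rules in CORNER_RULE_FILES.items():
    for cmd in corner_commands(corner, rules):
        print(cmd)   # in a real flow, these jobs would be run and the results compared
```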

The next method is to use either table-based or equation-based in-die variation. These tables or equations model the changes to both the manufactured conductor width and thickness due to three factors: drawn conductor width, spacing to the nearest conductor, and local density. The tables or equations are built from metrics measured on test chips. Local density calculations only really make sense after metal fill has been inserted, so it is important for a parasitic extraction tool to be able both to use an estimated local density and to calculate the actual local density from the real metal fill polygons. Metal fill prevents slumping in the vacant areas during CMP, but it impacts capacitance, so it is also important to compute the coupling capacitance between two signal lines across floating fill.
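To make the table-based idea concrete, here is a minimal sketch of looking up a manufactured thickness from drawn width and local density. Spacing to the nearest conductor is left out to keep the table small, nearest-bin lookup stands in for a real tool's interpolation, and all table values are invented.

```python
# Toy in-die variation table: maps (drawn width, local density) to a
# manufactured-thickness multiplier. All values are invented for illustration.

import bisect

WIDTH_BINS   = [0.1, 0.5, 1.0, 5.0]   # drawn width, um
DENSITY_BINS = [0.2, 0.5, 0.8]        # local metal density

# THICKNESS_TABLE[width_bin][density_bin]: wider lines in denser regions erode more.
THICKNESS_TABLE = [
    [0.97, 0.95, 0.93],
    [0.96, 0.93, 0.90],
    [0.94, 0.91, 0.87],
    [0.92, 0.88, 0.84],
]

def nearest_bin(bins, value):
    """Index of the bin edge closest to value (crude stand-in for interpolation)."""
    idx = bisect.bisect_left(bins, value)
    if idx == 0:
        return 0
    if idx == len(bins):
        return len(bins) - 1
    return idx if abs(bins[idx] - value) < abs(bins[idx - 1] - value) else idx - 1

def manufactured_thickness(drawn_thickness, width, density):
    w = nearest_bin(WIDTH_BINS, width)
    d = nearest_bin(DENSITY_BINS, density)
    return drawn_thickness * THICKNESS_TABLE[w][d]

# A wide line in a dense region loses more thickness than a narrow, sparse one.
print(manufactured_thickness(0.5, width=5.0, density=0.8))   # ~0.42 um
print(manufactured_thickness(0.5, width=0.1, density=0.2))   # ~0.485 um
```

The same lookup can be driven first with an estimated density and later with the actual density computed from the real metal fill polygons.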

The third method is to use a statistical approach for modeling manufacturing variations. The process parameters to consider are gate length, gate oxide thickness, metal width, metal thickness, and dielectric thickness. Transistor-level simulation with Monte Carlo analysis can be done using a normal distribution for each of these process variables; this approach requires parasitic information as well as process variability information. In addition to the Monte Carlo simulation approach, there are two additional approaches specific to statistical static timing analysis: the path-based approach and the topological approach. All three of these methods aim to find the statistical distribution of the delay values for all of the critical nets.
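Here is a minimal sketch of the Monte Carlo idea; a toy RC delay expression stands in for a real transistor-level simulation, and the nominal values and sigmas are invented for illustration.

```python
# Toy Monte Carlo over process parameters. A real flow would run a circuit
# simulation per sample; a simple R*C delay expression stands in for it here.

import random

N_SAMPLES = 10_000

# Nominal values and 1-sigma variations (all numbers invented for illustration).
NOMINAL = {"metal_width": 1.0, "metal_thickness": 0.5, "dielectric_thickness": 0.3}
SIGMA   = {"metal_width": 0.03, "metal_thickness": 0.04, "dielectric_thickness": 0.02}

def toy_delay(width, thickness, dielectric):
    """Stand-in for a simulation: delay ~ R * C of one wire segment."""
    resistance  = 1.0 / (width * thickness)   # thinner or narrower -> more R
    capacitance = width / dielectric          # thinner dielectric -> more C
    return resistance * capacitance

delays = []
for _ in range(N_SAMPLES):
    sample = {k: random.gauss(NOMINAL[k], SIGMA[k]) for k in NOMINAL}
    delays.append(toy_delay(sample["metal_width"],
                            sample["metal_thickness"],
                            sample["dielectric_thickness"]))

delays.sort()
mean = sum(delays) / len(delays)
p99  = delays[int(0.99 * len(delays))]
print(f"mean delay: {mean:.3f}, 99th percentile: {p99:.3f} (arbitrary units)")
```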

Here’s a question for everyone:

Are you concerned about in-die variation? If you are, what are you doing to model in-die variation?

Posted June 3rd, 2009

Comments

Commented on June 5, 2009 at 6:31 pm
By Michael Rifani

Another factor that might be useful in the statistical modeling is transistor threshold voltage variation, which is largely a function of dopant density variations.

Commented on June 8, 2009 at 11:07 am
By Karen Chow

That is true; the threshold voltage is an extremely important variable to consider in modeling. That modeling would be done inside the transistor model. Thanks for the comment! :)

Commented on June 10, 2009 at 1:05 pm
By Harry Gries

I think “Harry” would have been a more boring name for your car.

Harry

Commented on June 11, 2009 at 12:59 am
By Harry Gries

My experience with in-die variation for standard digital synchronous designs is to use multiple extraction corners (CMAX, CMIN) and on-chip variation (OCV), and then pray. In all seriousness, choosing an OCV percentage is very difficult without process variation data that the semi vendors are reluctant to share, for obvious competitive reasons. That means we are left with choosing an arbitrary percentage (e.g. 5%) and hoping we can still close timing. To tell the truth, I've never seen a chip fail to tape out due to OCV timing violations, which is testimony to how much faith is put in this method.

For more timing critical circuits such as DDR, I’ve seen mismatch corner sims performed. That is, one corner is slow P transistors and fast N. Then slow N and fast P. But this is time consuming and requires Spice netlists, something that not all vendors supply. And you can only do it on a portion of the design with a limited simulation.

Monte Carlo is the last choice due to the long sim times. Good luck getting your vendor to supply these models.

harry
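A minimal sketch of the kind of flat OCV derating Harry describes, with an arbitrary 5% figure and a toy setup check; the numbers and the function are illustrative, not any signoff tool's actual method.

```python
# Illustrative OCV setup check: derate the launch (data) path slow and the
# capture clock path fast by a flat percentage, then recheck the slack.

def setup_slack(launch_delay, capture_delay, clock_period, ocv_pct=0.05):
    """Pessimistic setup check with a flat OCV derate (toy model)."""
    derated_launch  = launch_delay * (1 + ocv_pct)   # data path assumed slower
    derated_capture = capture_delay * (1 - ocv_pct)  # capture clock assumed faster
    return (clock_period + derated_capture) - derated_launch

# A path that meets timing nominally (slack 0.2) can fail once the derate is applied.
print(setup_slack(launch_delay=5.0, capture_delay=0.2, clock_period=5.0))  # negative slack
```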

Commented on June 12, 2009 at 5:48 pm
By Daniel Payne

I’m curious when IC designers will start using frequency-dependent parasitics like S-Parameters instead of the simplistic, static LRC values.

Commented on June 17, 2009 at 2:45 pm
By Karen Chow

RF designers do commonly use S-parameters, most often for things like characterizing spiral inductors, including the K factors between them, but I still haven't seen S-parameter usage in analog design.

Commented on June 17, 2009 at 3:12 pm
By Michael Rifani

I agree that Monte Carlo should be a last resort. A lossy technique to speed up statistical variation simulation is to curve-fit the quantity of interest to a polynomial. The free variables of the polynomial are your design parameters. The curve fitting uses Plackett-Burman points or a Latin hypercube. Then truncate the polynomial, keeping only the most dominant parameters. For most designs, the efficiency improvement from the truncation outweighs the loss of information (accuracy).

Commented on June 28, 2009 at 7:54 pm
By Karen Chow

How have you used this in your design work? You mention that the free variables of the polynomial are the design parameters.

Commented on July 2, 2009 at 7:03 pm
By Michael Rifani

For example, one wants to characterize the write-ability speed of a cache bit. Each bit has 6 transistors. The free variables are the channel widths and lengths of the transistors, and their threshold voltages. A brute-force Monte Carlo would have had 6×3=18 dimensions. Now a skeptic might say that a polynomial in 18 dimensions representing the write-ability speed, with all the 2nd-order cross-terms, ends up with more than 300 terms, which is impractical, let alone 3rd-order cross-terms and so forth . . . But usually the polynomial reveals that only, say, 6 terms are dominant, so you can safely truncate the rest. The method by which one identifies the dominant terms uses Plackett-Burman and/or Latin hypercube points in the 18-dimensional space.
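For what it's worth, here is a minimal sketch of the response-surface idea described above: draw a Latin-hypercube-style sample of the parameter space, fit a second-order polynomial, and keep only the dominant terms. The "measurement" is a made-up function standing in for a real write-ability simulation, and only 3 parameters are used to keep the example small.

```python
# Response-surface sketch: Latin-hypercube sampling, 2nd-order polynomial fit,
# then truncation to the dominant terms (the "measurement" is made up).

import itertools
import numpy as np

rng = np.random.default_rng(0)
N_PARAMS, N_SAMPLES = 3, 60

def latin_hypercube(n_samples, n_params):
    """One stratified sample per bin in each dimension, randomly permuted."""
    bins = np.tile(np.arange(n_samples), (n_params, 1))
    u = (rng.permuted(bins, axis=1).T + rng.random((n_samples, n_params))) / n_samples
    return 2.0 * u - 1.0   # scale to [-1, 1]

def fake_measurement(x):
    """Stand-in for a simulation: dominated by x0, x1 and the x0*x1 cross-term."""
    return 3.0 * x[:, 0] - 2.0 * x[:, 1] + 1.5 * x[:, 0] * x[:, 1] + 0.01 * x[:, 2]

def quadratic_terms(x):
    """Build the [1, xi, xi*xj] columns of a full 2nd-order polynomial."""
    cols, names = [np.ones(len(x))], ["1"]
    for i in range(x.shape[1]):
        cols.append(x[:, i]); names.append(f"x{i}")
    for i, j in itertools.combinations_with_replacement(range(x.shape[1]), 2):
        cols.append(x[:, i] * x[:, j]); names.append(f"x{i}*x{j}")
    return np.column_stack(cols), names

x = latin_hypercube(N_SAMPLES, N_PARAMS)
y = fake_measurement(x)
a, names = quadratic_terms(x)
coeffs, *_ = np.linalg.lstsq(a, y, rcond=None)

# Truncate: keep only the terms whose fitted coefficients dominate.
dominant = [(n, round(c, 2)) for n, c in zip(names, coeffs) if abs(c) > 0.1]
print(dominant)   # expect roughly x0, x1 and x0*x1 to survive
```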
