DAC Highlight: Unified EDA Roadmap? and Thoughts on “the Next DFM”

So, I’ve “volunteered” to provide the occasional highlight of my DAC experience this year for Mentor Graphics.  I was a little concerned about this, as I was afraid it was going to be a rather lackluster event.  Unfortunately, I have to say that so far my expectations have been dead on.  But, due to a little serendipity, I did stumble upon something that at least sparked some thought and interest.

On Monday morning, I received a phone call from a colleague.  He’d planned to attend the EDA Roadmap Workshop hosted by Juan-Antonio Carballo of IBM and Andrew Kahng of UCSD, but he’d been pulled away and asked if I would substitute.  Unfortunately, I was only free to attend the first hour.  But, I have to say, just that first hour was enough to spark some interesting discussion.

Just the concept alone is interesting.  Can we, and should we, align the EDA vendors to work on the same technologies so we are better prepared with enablement software when the next generation of process technology rolls out?  Let me first say, I think this is a discussion that warrants such meetings.  I hope and expect that it will continue.  That said, so far I’m not convinced we should, or ever will, get there.

Why do I say that?  First, let’s admit it: EDA is shrinking.  There are four (soon to be three??) big players.  Each has its own area of expertise.  Each pays attention to and does its best to be involved with the various industry technology roadmaps.  But it’s only natural that, as the roadmaps identify problematic areas that need EDA solutions, each EDA provider would focus on an implementation that builds from and integrates closely with its existing areas of strength.  For example, for a problem impacting designers, it is only natural that Synopsys would build from the P&R perspective, Cadence from the custom design space, and Mentor from the physical verification domain.

In my opinion, this is not a bad thing; in fact, it is a very good thing.  Why?  Because as each vendor works from its own areas of knowledge, it intuitively solves the problems its users care about, while possibly missing concerns from the spaces it is less involved in.  But since one of the other vendors will have come from that direction, they will have found those holes.  In the end, I believe this approach eventually leads to all the vendors better understanding the requirements across the broad user spectrum.  At that point, it just becomes a matter of who can implement fastest and deliver a quality solution to market best.  On the other hand, if we take the approach of driving the EDA direction by committee, we are much more likely to end up with three or more vendors providing very similar solutions, all with the exact same holes and problems.

But, as I expressed, having a more centralized forum to discuss the issues and combine the needs and requirements across different customers, presented commonly to the EDA community, is a good thing.  I think it will help get the ball rolling with all vendors.  I just hope we’ll do it in a way that still allows each vendor to diverge as they see best.

Now for a little side discussion … :)

Most of the first hour was a summary of the various IC technology roadmaps, presented by Dr. Alan Allan of Intel, with a particular focus on the ITRS, but also some interesting commentary on where it diverges from other roadmaps, including TWG.  While a bit like drinking from a fire hose, I found this discussion fascinating.  One thought, in particular, kept coming back to me.

One of the first things Dr. Allan discussed was how the implementation of Moore’s Law has changed over time.  Moore’s Law was initially aimed at improving IC performance, with a target of a 30% speed-up at each generation.  Historically, this was achieved through a process node shrink.  By shrinking the transistors, transistor performance improved; and because device performance was the primary limiter of overall IC performance, designs were thus inherently sped up.
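As a rough illustration of where that 30% figure comes from, here is a back-of-the-envelope sketch.  It assumes classic Dennard-style scaling, where gate delay tracks feature size; the 0.7x linear shrink per generation is the traditional rule of thumb, not a figure from the talk.

```python
# Back-of-the-envelope scaling sketch. Assumes classic Dennard-style
# scaling, where gate delay shrinks with feature size; the 0.7x factor
# per generation is the traditional rule of thumb, not a quoted figure.

LINEAR_SHRINK = 0.7                 # feature-size factor per generation

delay_factor = LINEAR_SHRINK        # gate delay tracks the linear shrink
speedup = 1.0 / delay_factor        # frequency scales inversely with delay

print(f"delay reduction per node: ~{(1 - delay_factor) * 100:.0f}%")  # ~30%
print(f"frequency speed-up:       ~{speedup:.2f}x")                   # ~1.43x
```

A ~30% delay reduction corresponds to roughly a 1.4x frequency gain, which is where the per-generation speed-up target comes from.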

But, eventually, a simple shrink was no longer enough.  All the physical issues that could once be dismissed as noise, like performance loss due to interconnect parasitics or leakage current, started to creep in.  Ultimately, in addition to the device size shrinks, other techniques, including new interconnect layers, new dielectric materials, and so on, were implemented to help provide the needed performance.

As Dr. Allan wrapped up his summary, one of the main differences between ITRS and TWG popped out: TWG’s greater emphasis on System in Package (SiP) and techniques like Through Silicon Vias (TSV) to connect multiple chips together in one package.  Here I got the impression that many in the room were unconvinced this was as important as the focus on the next process node.  Naturally, I find myself, once again, the contrarian.

Why do I say this?  It all comes down to economics.  What I think people forget is that in the good old days a pure process node shrink not only provided a performance bump, it also represented an economic advantage.  If you were at 0.25 micron, a move to 0.18 micron meant you could get more chips per wafer.  As a result, the total cost per chip was reduced.  But now, with the significant increase in the cost of moving to a new node, this no longer seems to be the case.
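To make that concrete, here is a toy calculation of the shrink economics.  All the numbers are hypothetical assumptions for illustration (the die area, wafer size, and wafer cost are made up, and yield and edge effects are ignored); the point is simply that halving the die area roughly halves the cost per die when the wafer cost stays flat.

```python
import math

# Toy model of the shrink economics described above. All numbers are
# hypothetical assumptions; yield, edge dies, and mask costs are ignored.

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> float:
    """Rough die count: usable wafer area divided by die area."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return wafer_area / die_area_mm2

WAFER_COST = 3000.0      # assumed flat wafer cost, in dollars
DIE_AREA_250 = 100.0     # hypothetical die area at 0.25 micron, in mm^2

# A 0.25 -> 0.18 micron move shrinks linear dimensions by 0.18/0.25 = 0.72,
# so die area scales by 0.72^2 -- roughly half.
die_area_180 = DIE_AREA_250 * (0.18 / 0.25) ** 2

for label, area in [("0.25um", DIE_AREA_250), ("0.18um", die_area_180)]:
    n = dies_per_wafer(200, area)  # 200 mm wafers, typical for that era
    print(f"{label}: ~{n:.0f} dies/wafer, ~${WAFER_COST / n:.2f} per die")
```

With these made-up numbers, the die count roughly doubles and the per-die cost roughly halves; the catch is that today the wafer cost no longer stays flat across node transitions.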

That said, I’m predicting a shift in the way the industry works.  For several years now we’ve been going down the SoC road.  More and more design components get integrated into a single chip, all targeting the same or at least complementary process nodes.  But, does this really make sense?  Why pay to have every transistor at 28nm, if only some components are performance critical?  What if you could create a package that efficiently connected a digital core at 40nm with a memory at 60nm and a high performance graphics processor at 28nm?  If we, as an industry, can provide a means for designers to connect timing critical components made from a chip processed at an advanced node, with less timing critical components processed at an older node, we may provide a more economic approach to delivering product to consumers.
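A toy cost model makes the trade-off visible.  Everything below is a made-up assumption purely for illustration (the per-mm² costs, block areas, and packaging overhead are hypothetical placeholders, not real foundry figures); the takeaway is just that the mixed-node option wins whenever the savings on the non-critical blocks exceed the packaging overhead.

```python
# Toy cost comparison for the monolithic-vs-mixed-node question above.
# Every number is a hypothetical placeholder, not a real foundry figure.

COST_PER_MM2 = {"28nm": 0.30, "40nm": 0.15, "60nm": 0.08}  # $/mm^2, assumed

# Hypothetical component areas, in mm^2.
blocks = {"gpu": 40.0, "digital_core": 60.0, "memory": 80.0}

# Monolithic SoC: every block pays the advanced-node price.
soc_cost = sum(area * COST_PER_MM2["28nm"] for area in blocks.values())

# Mixed-node SiP: each die is fabbed at the cheapest node that meets its
# performance need, plus an assumed TSV/packaging overhead per package.
PACKAGING_OVERHEAD = 10.0  # dollars, assumed
sip_cost = (blocks["gpu"] * COST_PER_MM2["28nm"]
            + blocks["digital_core"] * COST_PER_MM2["40nm"]
            + blocks["memory"] * COST_PER_MM2["60nm"]
            + PACKAGING_OVERHEAD)

print(f"monolithic 28nm SoC: ${soc_cost:.2f}")   # $54.00 with these numbers
print(f"mixed-node SiP:      ${sip_cost:.2f}")   # $37.40 with these numbers
```

Of course, whether the packaging overhead actually stays small enough is exactly the open question of the next paragraph.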

Keep in mind, this is a big “if”.  There is a lot of work to be done.  Technologies like TSV, where a very large “via” is drilled through the substrate itself, allowing multiple chips to be stacked and connected through bump pads, still have many unknowns.  How do you model the impact on performance?  How much variability will there be in manufacturing?  How do you manage the heat flow and other introduced problems?  How do we make it more consistent?  Eventually the answers will come.  But there is still the possibility that, when they do, they will reveal that a proper implementation is equally or even more expensive than keeping everything on a single chip.

Rest assured, this is not a topic of passing interest.  Like the DFM buzz that started about five years ago, the SiP and TSV discussions are here to stay, along with some heated arguments both for and against.  I predict it will be one of the hot topics at DAC next year.  That is, of course, assuming there is a DAC next year!  If it can provide more discussions like the one I attended, I think there will be.  But if DAC is relying on vendor donations to carry the weight, somebody had better start figuring out how to make actual customers care about DAC again!

That’s my 2 cents.  TTFN,

Ferg

Posted July 28th, 2009, by John Ferguson


Comments

3 comments on this post

Commented on March 27, 2010 at 11:18 pm
By UofT Eng

The most critical challenge currently is development cost; EDA becomes a critical issue for 3D IC migration.

Commented on March 29, 2010 at 12:12 pm
By John Ferguson

Agreed.

Commented on June 28, 2010 at 3:55 pm
By SKMurphy » DAC 2009 Blog Coverage Roundup

[...] John Ferguson on “Unified EDA Roadmap? and Thoughts on the Next DFM” [...]
