Posts Tagged ‘HyperLynx’

7 February, 2013

Last week at DesignCon 2013, I gave an “Analysis Anchored to Reality” presentation highlighting and validating the new features of HyperLynx 9.0. Teledyne LeCroy, Molex, CCN, and Picosecond Pulse Labs partnered with us to provide measurement equipment, hardware to test, and a pattern generator for the demonstration. We started with a correlation study so customers could see that the analysis results matched reality. This live analysis was performed in two steps.

First, I used HyperLynx 9.0 to create a virtual prototype of the multi-board system, which included 2D and 3D models for the trace interconnect, vias, and connectors. Using this virtual prototype of the passive channel, I generated an S-parameter model for the entire channel, which we then correlated to the measured S-parameter model. Next, I showed the time-domain analysis, in which we examined the bathtub plots and eye density plots and correlated them to the measured data.
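That passive-channel correlation step ultimately comes down to comparing two insertion-loss curves. As a rough sketch of the comparison (with made-up data standing in for the simulated and measured S21 magnitudes; in practice both would come from Touchstone files produced by the field solver and the VNA), a few lines of Python suffice:

```python
import math

def insertion_loss_db(s21_mag):
    """Convert an |S21| magnitude into insertion loss in dB (positive = loss)."""
    return -20.0 * math.log10(s21_mag)

# Hypothetical data: a smooth simulated loss slope vs. a slightly rippled
# "measured" copy of the same channel (1% ripple stands in for measurement noise).
freqs_ghz = [0.1 + 0.2 * k for k in range(50)]        # 0.1 to 9.9 GHz sweep
sim = [10 ** (-0.5 * f / 20) for f in freqs_ghz]      # ~0.5 dB/GHz loss slope
meas = [s * (1 + 0.01 * math.cos(f)) for s, f in zip(sim, freqs_ghz)]

# Worst-case disagreement between the two insertion-loss curves, in dB
dev = max(abs(insertion_loss_db(a) - insertion_loss_db(b))
          for a, b in zip(sim, meas))
print(round(dev, 3))
```

A sub-tenth-of-a-dB worst-case deviation like this toy example produces is the kind of agreement a correlation study aims to demonstrate.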

Take a moment to watch my entire demonstration during an interview with Mark Thompson on PCBDesign007.

-Chuck Ferry
Senior Product Marketing Manager

31 January, 2013

Over the past 16 years I’ve had the pleasure of being part of the development of HyperLynx, and I’m excited about the latest release, HyperLynx 9.0, as we simultaneously celebrate the product’s 25th birthday and its 10th year with Mentor Graphics.

HyperLynx was founded in 1988, when cell phones were the size of a lunch box, FPGAs were just hitting the streets with the Xilinx 2064 touting 64 logic cells, VHDL and Verilog usage was basically zero, and Intel had yet to announce the 486. Things have definitely changed since then, and so has HyperLynx. Our first tool, LineSim for DOS, was developed at night in a basement in Redmond, Washington by Steve Kaufer and Kellee Crisafulli, who at the time worked at Data I/O.

Today, HyperLynx has grown into a comprehensive product line that includes advanced SERDES channel analysis, AC and DC Power Integrity, full DDRx Signoff Verification, Electrical Design Rule Checking, and Full-Wave 3D Electromagnetics for model generation of discontinuities. HyperLynx has become the leading solution for SI, PI, thermal, and 3D EM simulation and analysis, widely used by electrical engineers and PCB layout designers around the world. As our customers push interconnect performance, they need more accurate results much sooner in the design process.

HyperLynx 9.0 is one of the fastest releases to date, with more than 50 new and improved features covering advanced 3D channel and trace modeling, improved DDR sign-off verification, and simulation performance accelerated by up to five times. Here’s why:

  • The Fastest Simulator in the Industry
  • State-of-the-Art 3D Planar Trace Extraction Technology
    • For accurate modeling of signal path discontinuities
  • Extensive Results Reporting
    • So you can understand your exact timing margins
  • Batch Extraction of S Parameters
    • For performing multiple channel simulations
  • DDR3L/U Support
    • Mentor provides the latest device support
  • DDR Wizard Enhancements
    • Give you more measurements for more accurate simulations

These key benefits of HyperLynx 9.0 will increase the accuracy of your simulations for faster time-to-market, fewer spins, and higher quality results. With the right PCB analysis technology, you will be able to deliver the faster, higher-bandwidth interconnect products your customers demand. For 25 years, the Mentor Graphics HyperLynx team has proven it can deliver the technology when you need it! The new release will ship in March 2013 and will interface with all major PCB layout tools, including Mentor’s Expedition Enterprise, Board Station, and PADS, as well as Cadence Allegro and Zuken CR. For more information about HyperLynx 9.0, be sure to take a moment to watch the anniversary video on the HyperLynx YouTube channel.

-Dave Kohlmeier
Senior Product Line Director, PCB Analysis Products

6 June, 2012

What?  Isn’t that backwards?  Technically, yes.  The board is merely a pathway through which ICs talk to each other and receive the power to do so.  However, if the power distribution network (PDN) of the board is inadequately designed, it can actually heat up the ICs.

ICs are supposed to be the main source of heat on a PCB.  Heat is conducted from the ICs to the board through their pins; sometimes a metal slug or thermal glue is placed at the base of the component to enhance this effect.  This method of component cooling is sufficient for most components (the really power-hungry ones will also require a heat sink).  So the hottest locations on the board are right beneath the components.  And what is beneath a component?  In the case of a BGA, there is usually a dense pinfield.  This pinfield creates a web of copper that chokes the current feeding the IC.  The current has to find its way through the narrow pathways created by the “Swiss cheese” effect of all the pinfield antipads in the power and ground planes.  These areas of high current density dissipate power in the form of heat, which means a voltage drop at the IC as well as additional heat in that area of the board.
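The constriction effect is easy to estimate to first order. The sketch below assumes 1 oz copper and invented neck dimensions (a real pinfield calls for a DC drop simulation), but the arithmetic shows why a narrow path between antipads matters:

```python
# Back-of-the-envelope sketch of the "Swiss cheese" effect: antipads squeeze
# plane current into a narrow neck, raising resistance, IR drop, and heat.
# The current and geometry below are illustrative, not from a real design.

RHO_CU = 1.68e-8          # copper resistivity, ohm*m
THICKNESS_1OZ = 35e-6     # 1 oz copper plane thickness, m

def sheet_resistance(thickness_m):
    """Resistance of one square of copper plane, in ohms."""
    return RHO_CU / thickness_m

def neck_drop_and_heat(current_a, length_mm, width_mm):
    """IR drop (V) and dissipated power (W) across a narrowed plane region."""
    squares = length_mm / width_mm          # plane geometry counted in "squares"
    r = sheet_resistance(THICKNESS_1OZ) * squares
    return current_a * r, current_a ** 2 * r

# 20 A feeding a BGA through a neck 5 mm long but only 1 mm wide between antipads
drop, heat = neck_drop_and_heat(20.0, 5.0, 1.0)
print(f"{drop * 1e3:.1f} mV drop, {heat * 1e3:.0f} mW of heat")
```

Nearly a watt dissipated right under the BGA, plus tens of millivolts stolen from the supply rail, all from one narrow neck of copper.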

This article discusses the problem in more detail.

Don’t heat up your ICs even more by designing an inadequate PDN.
How do you avoid that?  Simulate it!


3 May, 2012

Everybody knows you are supposed to turn off your phone and other electronic devices when you are on a plane.  You can leave it on during the flight, but it has to be off for takeoff and landing.  I like to remind people in case they “forget”.  I tend not to make a big deal during takeoff, but landing has me a little more on edge.

The problem is coupled noise.  Sure, most modern planes should be designed so that electronic interference in the cabin cannot affect the plane’s electronics, but you never know.  And is it worth the risk?  I don’t think so.  What worries me most is that many handheld electronics take design shortcuts to minimize cost.  Where many should probably use boards with 4 or more layers, often they use only two.  That means that the PCB traces, instead of coupling noise onto a solid plane (no pun intended (seriously, I meant an actual ground plane (no, not a grounded plane (although that could be the result) but an actual solid sheet of copper in the PCB (which yes, should of course be grounded, anyway, I digress….)))), are coupling noise out into the world.  Such coupling is a special kind of “crosstalk” commonly referred to as electromagnetic interference, or EMI.  It is but one of the many mechanisms by which unwanted noise can be coupled from one place to another in electronic designs.  You can read more about noise coupling in the following article:

And please, turn off your phone when you are on a plane (more precisely, an airplane) during takeoff and landing.  Unless you can see the radiated emissions (which you can’t, even if you squint) you won’t know they are there.

And speaking of radiated emissions… well, no, that’s really the subject for another blog.  It has to do with restricting airport restaurants from serving beans…


2 May, 2012

In the lab, simultaneous switching noise (SSN) and crosstalk look the same.  Both appear as unwanted pulses of energy that line up with the (aggressor) signal edges.  However, the mode of energy coupling is very different.  In the case of crosstalk, the pulses line up with the edges because the signal edges are coupling energy onto the victim signal through electric (and magnetic) fields.  This occurs from one trace to another, and increases the closer those traces are.  SSN, however, couples noise through the power distribution network (PDN).  If the impedance of the board PDN is too high at the IC power pins, the switching current of the I/O buffers will induce a voltage onto the other I/O lines.  And because these current demands occur as the signals switch, the resulting SSN appears as a pulse that lines up with the signal edges.
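As a first-order illustration of the SSN mechanism (every value below is an assumption for illustration, not a measurement), the classic V = L·di/dt estimate looks like this:

```python
# Rough delta-I noise sketch: if the PDN looks inductive at the switching
# frequency, simultaneously switching buffers induce V = L * di/dt on the rails.

def ssn_estimate(n_buffers, di_per_buffer_a, rise_time_s, pdn_inductance_h):
    """Peak supply noise (V) from N output buffers switching together."""
    di_dt = n_buffers * di_per_buffer_a / rise_time_s
    return pdn_inductance_h * di_dt

# 32 outputs each drawing 20 mA, 0.5 ns edges, 10 pH effective PDN inductance
v_noise = ssn_estimate(32, 0.020, 0.5e-9, 10e-12)
print(f"{v_noise * 1e3:.1f} mV")
```

Even a few picohenries of effective PDN inductance produce noise on the order of ten millivolts when a wide bus switches at once, and that noise rides on every quiet I/O sharing the rail.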

How can these two phenomena be distinguished?  Well, in the lab, you could try to toggle only the nearest two bits to the victim signal.  Most of the crosstalk on a single layer will come from the nearest two aggressors.  And with only two bits toggling, if the problem is indeed SSN, there should be a significant reduction in the coupled noise. The easier solution, however, is to run SI and PI simulations during the design phase to ensure such problems are avoided in the first place.

To learn more about various sources of noise coupling in your PCB designs, and how to prevent them, take a look at this article:


1 May, 2012

Crosstalk is everywhere.  Really, in a more general sense, noise coupling is everywhere.  Usually the method of noise coupling is traditional “crosstalk” – the unwanted transfer of noise from one place to another through coupled electric fields.  This most often occurs on PCB designs with dense routing, and on wide parallel busses.  Even on newer SERDES busses, however, it is still an issue, as many such busses have multiple lanes, such as PCI Express.  And crosstalk is also an issue on SERDES busses when they are routed close to slower, much higher voltage signals such as 3.3V and 5V signals.  Crosstalk can also occur in a similar fashion between higher-voltage switching power supplies and sensitive lines like resets.

But it is not only crosstalk that causes noise coupling.  Shared return paths are another common method of noise coupling.  This occurs most often in connectors without sufficient ground pins.  Since ground pins act as the return paths in connectors, an insufficient number of ground pins will cause shared return paths and hence coupling between signals travelling through the connectors.  A similar type of situation can occur in boards without enough stitching vias near signal layer transitions.
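A back-of-the-envelope way to see why ground-pin count matters: assume each ground pin has some partial inductance, shared by the signals assigned to it. The pin inductance, current swing, and edge rate below are illustrative guesses, not data for any particular connector:

```python
# Sketch of shared-return coupling in a connector: signals that share one
# ground pin also share its inductance, so one signal's return current
# induces noise on its neighbors. All values are illustrative assumptions.

def shared_return_noise(pin_inductance_h, di_a, rise_time_s, signals_per_ground):
    """Noise (V) coupled onto a victim sharing one ground pin with aggressors."""
    aggressors = signals_per_ground - 1     # everyone else on that pin
    return pin_inductance_h * aggressors * di_a / rise_time_s

# 4 nH ground pin, 10 mA swings with 1 ns edges:
# 8 signals per ground pin vs. a more generous 2 signals per ground pin
noisy = shared_return_noise(4e-9, 0.010, 1e-9, 8)
clean = shared_return_noise(4e-9, 0.010, 1e-9, 2)
print(f"{noisy * 1e3:.0f} mV vs {clean * 1e3:.0f} mV")
```

In this toy model, going from one ground per eight signals to one per two cuts the coupled noise sevenfold, which is why connector pinouts deserve as much attention as the traces feeding them.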

And coupling can also occur through the PDN.  An inadequately designed PDN can directly result in simultaneous switching noise, or SSN.

All of these coupling problems can be identified and resolved through simulation in HyperLynx.
To learn more about how to control this noise, take a look at this article:


3 April, 2012

Found a signal integrity problem in the lab?  How do you go about fixing it?  Well, if it’s a SERDES bus, you can’t do much rework, because rework will most likely degrade the signal even further.  Maybe you can play with some driver strength or pre-emphasis settings.  Or is it a slower, parallel bus?  Maybe you can rework in some necessary termination.  This is where post-layout SI simulation is useful.  In simulation, you can mimic the problematic situation and try out solutions without even touching a soldering iron.  And what’s more, you can simulate all the nets on the board to make sure they don’t have similar problems.
In fact, you wouldn’t be in such a situation if you had done a full-board SI verification before sending the board to the fab house.  Better yet, if you had done some pre-layout simulation to identify the necessary constraints on the critical busses, there might not have been a problem to find in post-layout simulation at all.  At every step of the design cycle, changes become orders of magnitude more costly and time-consuming.  That’s why doing the bulk of your signal integrity (and power integrity) work toward the beginning of the design cycle really pays off.
You can read more about it here:


31 March, 2012

Running at 6 GHz is actually kind of scary regardless, but especially so with your eyes closed.  And I mean that more figuratively than literally.  Obviously, if the eye diagrams on your design’s serial links are closed, there is cause for fear, but the fear of the unknown can be even greater, especially if you are running at multi-GHz speeds.  That is where a complement of pre-layout and post-layout signal integrity simulation can help.
Take a look at this article discussing the differences:

Pre-layout simulations are a great way to see if a design is even feasible.  For instance, if you are trying to run a 6 Gbps link like Serial ATA or SAS through several boards and a long backplane, the signal might not make it unless you choose the appropriate connector, board stackup, and trace geometries.  Pre-layout simulation is a great way of understanding the limitations of a certain bus architecture and the margins of your system.  This opens your eyes to what your design is actually doing.  It also leads to a better understanding of what might become a problem once the system is built.  In fact, post-layout simulation is even more useful in that regard, as it gives the most accurate view of what is going on at the receiver.  Post-layout simulation is often more useful than actual measurement: multi-GHz busses cannot be measured while they are running in-system; they usually need to be measured into some sort of test fixture.  So, having a post-layout simulation handy to see what is going on in the actual design, including the effects of equalization at the receiver, is invaluable.
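A pre-layout feasibility check like the one described above can start as simple loss-budget arithmetic at the Nyquist frequency (half the bit rate). The per-inch loss, connector loss, and receiver budget below are rough assumptions for illustration, not vendor numbers:

```python
# Total channel insertion loss at Nyquist vs. an assumed equalization budget.

def channel_loss_db(trace_in, loss_per_in_db, n_connectors, loss_per_conn_db):
    """Total insertion loss (dB) at the Nyquist frequency (bit rate / 2)."""
    return trace_in * loss_per_in_db + n_connectors * loss_per_conn_db

# 6 Gbps SAS (3 GHz Nyquist): 30" of trace at ~0.9 dB/in, through 4 connectors
loss = channel_loss_db(30.0, 0.9, 4, 1.5)
budget = 30.0  # assumed loss the receiver's equalization can recover
verdict = "feasible" if loss <= budget else "marginal - rethink the stackup"
print(f"{loss:.0f} dB against a {budget:.0f} dB budget: {verdict}")
```

Arithmetic like this only flags trouble; it takes a proper channel simulation, with real S-parameters and receiver equalization models, to say whether the eye actually opens.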


29 March, 2012

It’s never too late to fix a design problem.  Well, maybe if the product is shipping, that might be classified as “too late”.  But during the design phase, whether you’ve laid out your board or not, it’s a good time to make sure there are no design issues.  When it comes to signal integrity, that means performing pre-layout or post-layout simulations.  I think most experts agree that pre-layout is the best time to do signal integrity.  There are those, however, who feel pre-layout simulations are a lot of time wasted on “what might be”, but the counter-argument is that if you don’t simulate, how do you know what constraints to apply to your board layout as it is being laid out?  I say as long as you do EITHER before the product ships, you’ve done well.  But when it comes to getting products out in the fastest and most efficient manner, a mixture of both pre-layout and post-layout simulation will best suit your needs.

What’s the difference?  Well, other than the obvious fact that post-layout simulations are done after layout, I would classify the main difference as this: pre-layout simulation is aimed more at exploring a solution space and creating design constraints, while post-layout simulation is aimed at verifying that those constraints were met.  For a more in-depth discussion of the differences, take a look at this article in Electronic Design magazine:


7 March, 2012

Parallel busses are a pain to implement.  They really are.  Sure, they are slower than blazing-fast SERDES busses, but they introduce a lot more problems.  SERDES busses introduce a new set of problems because they are so fast, but they are also differential and serial, which eliminates a bunch of problems.  Parallel busses are single-ended, so they tend to draw a lot more power.  So that means you have to worry about designing a good power distribution network (PDN), and worry about things like simultaneous switching noise.  Any layer transitions require ample stitching vias (or stitching capacitors) as well, so the vias and PDN are inter-related.  Not to mention all the complicated timing relationships that need to be maintained…

The original DDR was probably the toughest parallel bus to implement successfully.  DDR2 got faster, but also implemented a number of changes to make implementation easier – changes like using slew-rate derating to get a better picture of your timing margin, and allowing for 2T timing on the heavily-loaded address bus.  And DDR3 added the new fly-by address routing and write-leveling.  Really, these changes were necessary to operate at faster speeds, but also helped make things easier.  Easier, that is, if you understand how to include all of them in your analysis of the bus.
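Slew-rate derating itself is essentially a table lookup applied to the base setup/hold requirement. The sketch below mimics the shape of a JEDEC-style derating table, but the picosecond adjustments and the slew-rate grid are invented for illustration, not taken from any datasheet:

```python
# Hypothetical derating table: delta-tDS in ps, indexed by DQ slew rate then
# DQS slew rate, both in V/ns. Real tables come from the JEDEC spec and the
# device datasheet; these entries are illustrative stand-ins.
DERATE_PS = {
    2.0: {2.0: 88, 1.0: 63},
    1.0: {2.0: 25, 1.0: 0},   # 1 V/ns vs. 1 V/ns is the reference point
    0.5: {2.0: -15, 1.0: -40},
}

def derated_setup_ps(base_tds_ps, dq_slew_vns, dqs_slew_vns):
    """Effective setup requirement after applying slew-rate derating."""
    return base_tds_ps + DERATE_PS[dq_slew_vns][dqs_slew_vns]

# Base tDS of 125 ps, with fast 2 V/ns edges on both DQ and DQS
print(derated_setup_ps(125, 2.0, 2.0))
```

A DDR-aware simulator applies this lookup per pin, per transition, using the slew rates it actually measures at the receiver, which is exactly the bookkeeping that makes hand analysis of these busses so painful.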

If you are interested in finding out more about the challenges facing DDR3/4 and SERDES busses, take a look at this article in New Electronics magazine:

