# Beginning at the Beginning …3 Improving SIMPLE

_{n}’ – the correction is denoted v_{n}’) and the changes in pressure on either side of the velocity (p_{N}’ and p_{P}’). The aim is then to make a series of pressure corrections which will lead to velocity corrections so as to eliminate the continuity errors for each grid cell.

For the grid cell at P this leads to an equation (effectively a Poisson equation for pressure) that relates the pressure changes at P, N, S, E, W (for 2D) to the continuity error at P. When the pressure correction equation is solved for all grid nodes, and the velocities are updated, then continuity is satisfied at each grid cell. So – job done! Or is it?
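For readers who want to see it written out: in the standard finite-volume notation (a sketch in Patankar’s notation, not reproduced from the post itself), the 2D pressure-correction equation for the cell at P has the form:

```latex
a_P\, p'_P = a_E\, p'_E + a_W\, p'_W + a_N\, p'_N + a_S\, p'_S + b
```

where b is the mass imbalance (the continuity error) of the cell, and the a coefficients here are those of the pressure-correction equation. Driving b to zero for every cell is what “satisfying continuity” means above.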

Of course the answer is no. Because of approximations made in the relationship between velocity and pressure corrections (e.g. between v_{n}’ and p_{N}’ and p_{P}’) – which, remember, comes from an approximated form of the momentum equation for v_{n} – when solution of the full momentum equations is repeated at the next iteration, the velocities change again to non-continuity-satisfying values. These then need to be corrected again (via pressure corrections) – and so on. Hence the need for iteration. This is fine, so long as the iterative solution proceeds systematically to convergence.

I found, however, that in my case the approximations to the momentum equations embodied in the velocity-correction/pressure-correction relationship were the source of the instability that I was observing.

The problem comes in approximating the effects of velocity corrections at neighbouring locations on the velocity in question (say v_{n}). The momentum equation tells us that v_{n} is affected via convection and diffusion (i.e. viscosity – and also, when using a turbulent viscosity model, by turbulent transport) by any changes in its immediate neighbours (v_{s}, v_{nn}, v_{ne}, v_{nw}). If, for example, v_{nn} increases, it will tend to induce an increase in v_{n}, even in the absence of any changes in pressure – and so on. The way I thought of it is that there is a sort of “elastic” connecting these velocities, so that a change in one will tend to cause a similar change in all the others.
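The “elastic” is visible in the discretized momentum equation for v_{n}, which in standard staggered-grid finite-volume notation (an illustration, not quoted from the post) reads:

```latex
a_n\, v_n = \sum_{nb} a_{nb}\, v_{nb} + S + A_n\,(p_P - p_N)
```

where the sum runs over the neighbouring velocities (v_{s}, v_{nn}, v_{ne}, v_{nw}), A_n is the cell-face area multiplying the pressure difference, and S collects the remaining source terms. The neighbour coefficients a_{nb} carry the convection and diffusion influences – they are the “elastic”.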

The trouble is that these links cannot be included in full in the pressure correction algorithm. They must be approximated in some way. In the standard SIMPLE, as devised by Suhas Patankar, the practice is to neglect these effects – that is, to presume that neighbouring velocity corrections are zero. This is what I found was leading to the instabilities that I observed.
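In equation form (again standard notation, as an illustration): subtracting the approximated momentum equation from the exact one gives a correction relationship

```latex
a_n\, v'_n = \sum_{nb} a_{nb}\, v'_{nb} + A_n\,(p'_P - p'_N)
```

and standard SIMPLE drops the neighbour-correction sum entirely, leaving

```latex
v'_n = \frac{A_n}{a_n}\,(p'_P - p'_N)
```

It is this dropped sum that carries the “elastic” effect described above.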

In any systematically-changing flow field (in my case, for example, the flow develops steadily as the solution proceeds downstream) the velocity changes occurring at any stage in the solution will tend to be in the same direction and of roughly similar magnitude. This means that the “elastic” will tend to cause velocity corrections in the required direction even without pressure corrections. Neglecting this effect means that the pressure-correction equation will “think” that it needs to make bigger pressure changes than is really the case. This leads to an over-prediction of the pressure correction (sometimes a severe one), and to the instability and oscillation that I observed.
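A toy numerical illustration of the over-prediction (the coefficient values below are invented for the example, not taken from the post): for the same required velocity correction at one staggered face, the two assumptions imply very different pressure corrections.

```python
# Toy illustration: how big a pressure correction each scheme "thinks"
# it needs to achieve the same velocity correction v' at one face.
# All numbers are invented for illustration.

a_n = 10.0        # central momentum coefficient for v_n
sum_anb = 8.0     # sum of neighbour coefficients (the "elastic" links)
A = 1.0           # cell-face area multiplying the pressure difference
v_corr = 0.1      # velocity correction actually required by continuity

# Standard SIMPLE: neighbour corrections assumed zero,
# so v' = (A / a_n) * dp'  ->  dp' = v' * a_n / A
dp_simple = v_corr * a_n / A

# Modified SIMPLE (later SIMPLEC): neighbour corrections assumed equal
# to the central one, so v' = A * dp' / (a_n - sum_anb)
dp_simplec = v_corr * (a_n - sum_anb) / A

print(f"SIMPLE  pressure correction: {dp_simple:.2f}")   # prints 1.00
print(f"SIMPLEC pressure correction: {dp_simplec:.2f}")  # prints 0.20
# When neighbours move together, SIMPLE's estimate is 5x too large here.
```

The stronger the neighbour coupling (the closer `sum_anb` gets to `a_n`), the worse the over-prediction in standard SIMPLE becomes.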

The solution that I devised was to replace the assumption that “neighbouring velocity corrections are zero” by the assumption that “neighbouring velocity corrections will, on average, be the same as the central velocity correction” (i.e. the average of the changes in v_{nn}, v_{s}, v_{ne} and v_{nw} will be the same as v_{n}’). Of course this is still an assumption, and is “wrong” – but, for many flow situations, it proves to be a better assumption than the one in the standard SIMPLE algorithm.
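Under this assumption the neighbour sum can be folded into the central coefficient (standard notation, as an illustrative sketch):

```latex
\sum_{nb} a_{nb}\, v'_{nb} \;\approx\; \Bigl(\sum_{nb} a_{nb}\Bigr) v'_n
\quad\Longrightarrow\quad
v'_n = \frac{A_n}{a_n - \sum_{nb} a_{nb}}\,(p'_P - p'_N)
```

This is the velocity-correction formula usually quoted for SIMPLEC. Because the denominator is smaller than a_n, a given velocity correction now calls for a smaller pressure correction than in standard SIMPLE.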

I tried this for my square duct case, and found that I got unconditional stability for any value of step length, with no need for under-relaxation. So – success – it worked! I therefore used this treatment for all my PhD computations – it was described as “Modified SIMPLE” in my PhD thesis. It became known in the research group at Imperial College and later at CHAM, and was used fairly widely by both groups. At CHAM I recall that it was referred to as “the DGT modification”. Then, by the time PHOENICS was developed at CHAM around 1980, it had been replaced by Spalding’s SIMPLEST algorithm, which addresses this problem, and others, in a rather different way.

But that is not quite the end of the story. Exactly the same modification to SIMPLE was later devised, quite independently, by Van Doormaal and Raithby, and was published in 1984 as the “SIMPLEC” algorithm. It has since become widely known and used under this name.

**Postscript**


## Post Author

Posted August 21st, 2009, by **David Tatchell**

## About David Tatchell’s Blog

Reflections on experiences and lessons learned during 40 years in the CFD business. Thoughts on the present state of the CFD industry and future trends, in the broader context of the CAE, CAD and EDA industries.

## 4 comments on this post

Commented on August 24, 2009 at 2:51 am

By Lindsay

They are like a series of stories, interesting and attractive, even for a CFD newbie.

Commented on September 18, 2009 at 6:32 pm

By Naweed Khan

I came across your blog just by chance. I started reading the first part of series with a cursory glance, but could not resist myself until I followed the advice that the King of Hearts gave to the White Rabbit – “Begin at the beginning, go on to the end, and then stop.” !!

This series of blogs offered me the insight I could not get from any text on CFD that I have read till now.

Commented on September 21, 2009 at 1:27 am

By David Tatchell

Naweed

Thanks for your feedback. I’m glad that you found these blogs interesting.

Commented on July 29, 2011 at 4:58 am

By Low Lee Leong

Thanks for the great blog of CFD. Appreciate!