Beginning at the Beginning … 3: Improving SIMPLE

In my last blog I explained how Suhas Patankar created the SIMPLE algorithm in 1970, and how the 3D Boundary Layer Group at Imperial College then adopted SIMPLE for all its future work.
I first applied SIMPLE to the problem that I had been working on with the SIVA method – laminar flow in the inlet region of a rectangular- (mainly square-) sectioned duct. This led me to identify some deficiencies in SIMPLE in its standard form, and to devise an improved formulation.

3D parabolic flows of the kind that we were working on are solved by a marching integration, in which the solution starts from the upstream boundary (in my case the inlet plane of the duct, where the flow is presumed uniform), and proceeds step by step downstream (with iteration to convergence at each downstream plane) until the region of interest has been covered (or, as in my case, until the flow becomes fully developed). At this stage the solution is complete – there is no need to repeat the “solution sweep” because, by the nature of the parabolic, boundary-layer form of the equations, no influences from downstream to upstream (in the main flow direction) are allowed. In other words, it has much in common with a transient solution, in which the solution proceeds step-by-step in the “time direction”.
When I first tried SIMPLE for my square-duct case I found that it worked well so long as the interval by which the solution was advanced in the downstream direction at each forward step was relatively small. The critical step length seemed to be of the order of the grid size in the lateral direction (typically a small fraction – say a few percent – of the duct width). If the step size was larger than this, the solution was unstable, and required heavy damping to retain stability.
I investigated possible causes for this, and came up with the following explanation:
The way that SIMPLE works involves creating relationships between the change (correction) to a velocity (say vn – the correction is denoted vn’) and the changes in pressure on either side of the velocity (pN’ and pP’). The aim is then to make a series of pressure corrections which will lead to velocity corrections so as to eliminate the continuity errors for each grid cell. 
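The v′–p′ link at the heart of this can be sketched in a few lines. This is my own toy illustration in Python – the names d_n, a_n, A_n and all the numerical values are invented for the example, not taken from any actual code of the period:

```python
# Sketch of the SIMPLE velocity-correction relation (invented values;
# a_n and A_n stand in for hypothetical discretisation coefficients).
def velocity_correction(d_n, p_corr_P, p_corr_N):
    """v_n' = d_n * (p_P' - p_N'): raising the pressure correction at P
    relative to N pushes the n-face velocity from P towards N."""
    return d_n * (p_corr_P - p_corr_N)

A_n = 1.0          # face area
a_n = 2.0          # central coefficient of the vn momentum equation
d_n = A_n / a_n    # the approximate influence coefficient

v_n_corr = velocity_correction(d_n, p_corr_P=5.0, p_corr_N=3.0)
print(v_n_corr)    # 0.5 * (5.0 - 3.0) = 1.0
```

The point is simply that each velocity correction is tied, through one coefficient, to the pressure corrections on either side of it.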
[Figure: The Grid Locations and Notation for SIMPLE]
For the grid cell at P this leads to an equation (effectively a Poisson equation for pressure) that relates the pressure changes at P, N, S, E, W (for 2D) to the continuity error at P. When the pressure correction equation is solved for all grid nodes, and the velocities are updated, then continuity is satisfied at each grid cell. So – job done! Or is it?
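To make the structure concrete, here is a toy sketch – entirely invented coefficients and mass-error field, not a real solver – of a pressure-correction equation of this Poisson-like form being solved by Gauss-Seidel sweeps on a small 2D grid:

```python
import numpy as np

# Toy pressure-correction solve: a_P p'_P = a_N p'_N + a_S p'_S
#                                         + a_E p'_E + a_W p'_W + b_P,
# with unit neighbour coefficients (so a_P = 4) and p' = 0 on the boundary.
n = 8
b = np.zeros((n, n))            # continuity (mass) error in each cell
b[4, 4] = 1.0                   # one cell given an invented mass imbalance
p = np.zeros((n, n))            # pressure correction field

for _ in range(500):            # Gauss-Seidel sweeps over interior cells
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            p[i, j] = (p[i + 1, j] + p[i - 1, j]
                       + p[i, j + 1] + p[i, j - 1] + b[i, j]) / 4.0

# residual of the pressure-correction equation in the interior
res = np.abs(4.0 * p[1:-1, 1:-1]
             - p[2:, 1:-1] - p[:-2, 1:-1]
             - p[1:-1, 2:] - p[1:-1, :-2]
             - b[1:-1, 1:-1]).max()
```

Once this equation is satisfied everywhere, the corresponding velocity corrections remove the continuity error in every cell – which is the “job done!” point in the text.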

Of course the answer is no. Because of the approximations made in the relationship between velocity and pressure corrections (e.g. between vn’ and pN’ and pP’) – which, remember, comes from an approximated form of the momentum equation for vn – the velocities, when the solution of the full momentum equations is repeated at the next iteration, change again to non-continuity-satisfying values. These then need to be corrected again (via pressure corrections) – and so on. Hence the need for iteration. This is fine, so long as the iterative solution proceeds systematically to convergence.
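That iterate-and-correct cycle can be caricatured in a one-face toy model – all names and numbers here are invented for illustration, and the “momentum solve” is just a linearised stand-in:

```python
# One-face caricature of the SIMPLE outer iteration (invented toy model).
# Inlet velocity u_in is fixed; the unknown outlet-face velocity u_e is
# driven by a single interior pressure p_P (outlet pressure fixed at 0).
u_in = 1.0
d = 0.5            # approximate influence coefficient: u_e' = d * p_P'
p_P = 0.0          # initial guess for the interior pressure
alpha_p = 0.7      # under-relaxation factor on the pressure correction

for it in range(100):
    u_e = d * p_P              # "momentum solve" (linearised stand-in)
    mass_err = u_in - u_e      # continuity error for the interior cell
    if abs(mass_err) < 1e-12:  # converged: continuity satisfied
        break
    p_corr = mass_err / d      # pressure correction that would cancel it
    p_P += alpha_p * p_corr    # under-relaxed update, then repeat
```

Each pass recomputes the velocity, measures the remaining continuity error, and corrects the pressure – shrinking the error geometrically until the cycle converges.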

I found however that in my case the approximations to the momentum equations in the velocity-correction/pressure-correction relationship were leading to the instability that I was observing.

The problem comes in approximating the effects of velocity corrections at neighbouring locations on the velocity in question (say vn). The momentum equation tells us that vn is affected via convection and diffusion (i.e. viscosity – and also, when using a turbulent viscosity model, by turbulent transport) by any changes in its immediate neighbours (vs, vnn, vne, vnw). If, for example, vnn increases, it will tend to induce an increase in vn, even in the absence of any changes in pressures – and so on. The way I thought of it is that there is a sort of “elastic” connecting these velocities, so that a change in one will tend to cause a similar change in all the others.

The trouble is that these links cannot be included in full in the pressure correction algorithm. They must be approximated in some way. In the standard SIMPLE, as devised by Suhas Patankar, the practice is to neglect these effects – that is, to presume that neighbouring velocity corrections are zero. This is what I found was leading to the instabilities that I observed.

In any systematically-changing flow field (in my case, for example, the flow develops steadily as the solution proceeds downstream) the velocity changes occurring at any stage in the solution will tend to be in the same direction and of roughly similar magnitude. This means that the “elastic” will tend to cause velocity corrections in the required direction even without pressure corrections. Neglecting this effect means that the pressure-correction equation will “think” that it needs to make bigger pressure changes than is really the case. This leads to an over-prediction of the pressure correction (sometimes a severe one), and to the instability and oscillation that I observed.

The solution that I devised was to replace the assumption that “neighbouring velocity corrections are zero” by the assumption that “neighbouring velocity corrections will, on average, be the same as the central velocity correction” (i.e. the average of the changes in vnn, vs, vne and vnw will be the same as vn’). Of course this is still an assumption, and is “wrong” – but, for many flow situations, it proves to be a better assumption than that in the standard SIMPLE algorithm.
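In terms of the velocity-correction coefficient, the whole modification amounts to changing one denominator. A sketch with invented numbers (a_P, sum_a_nb and A are hypothetical discretisation coefficients, chosen purely for illustration):

```python
# SIMPLE vs. the modified form: only the denominator of the
# velocity-correction coefficient changes (all values invented).
A = 1.0          # face area
a_P = 5.0        # central momentum coefficient
sum_a_nb = 4.0   # sum of the neighbour ("elastic") coefficients

d_simple = A / a_P                # neighbour corrections assumed zero
d_simplec = A / (a_P - sum_a_nb)  # neighbour corrections ~ central one

# For the same target velocity correction v', the required pressure
# correction is p' = v' / d -- so standard SIMPLE asks for a larger p'.
v_target = 0.1
p_simple = v_target / d_simple     # 0.5
p_simplec = v_target / d_simplec   # 0.1
```

Because the modified coefficient is larger, a given continuity error is removed with a smaller pressure correction – which is exactly why the over-correction, and the resulting oscillation, disappear.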

I tried this for my square-duct case, and found that I got unconditional stability for any value of step length, with no need for under-relaxation. So – success – it worked! I therefore used this treatment for all my PhD computations – it was described as “Modified SIMPLE” in my PhD thesis. It became known in the research group at Imperial College and later at CHAM, and was used fairly widely by both groups. At CHAM I recall that it was referred to as “the DGT modification”. Then, by the time PHOENICS was developed at CHAM around 1980, it had been replaced by Spalding’s SIMPLEST algorithm, which addresses this problem and others in a rather different way.

But that is not quite the end of the story. Exactly the same modification to SIMPLE was later devised, quite independently, by Van Doormaal and Raithby, and was published in 1984 as the “SIMPLEC” algorithm. It has since become widely known and used under this name.

I am aware that my last few blogs have been rather “technically dense” (I feel a bit like I’m writing another PhD thesis!) – but I’m afraid that this is the nature of the work I was involved in at Imperial College in the early 1970s. I am hoping that future blogs will provide some light relief!


Posted August 21st, 2009
About David Tatchell’s Blog

Reflections on experiences and lessons learned during 40 years in the CFD business. Thoughts on the present state of the CFD industry and future trends, in the broader context of the CAE, CAD and EDA industries.


4 comments on this post

Commented on August 24, 2009 at 2:51 am
By Lindsay

They are like a series of stories, interesting and attractive, even for a CFD newbie.

Commented on September 18, 2009 at 6:32 pm
By Naweed Khan

I came across your blog just by chance. I started reading the first part of series with a cursory glance, but could not resist myself until I followed the advice that the King of Hearts gave to the White Rabbit – “Begin at the beginning, go on to the end, and then stop.” !!

This series of blogs offered me the insight I could not get from any text on CFD that I have read till now.

Commented on September 21, 2009 at 1:27 am
By David Tatchell

Thanks for your feedback. I’m glad that you found these blogs interesting.

Commented on July 29, 2011 at 4:58 am
By Low Lee Leong

Thanks for the great blog of CFD. Appreciate!
