[petsc-users] Tuning the parallel performance of a 3D FEM CFD code

Henning Sauerland uerland at gmail.com
Fri May 13 08:50:43 CDT 2011


Let me first of all explain the problem I'm considering in a bit more detail. I'm working on two-phase flow problems in the low Reynolds number regime (laminar flow). The flow field is described by the incompressible Navier-Stokes equations, and the phase interface is tracked implicitly using the level-set method. This leads to a strongly coupled problem of the flow field and the level-set field. That is, during one time step the Navier-Stokes equations are solved in a series of Picard iterations, and subsequently the interface (level-set field) is advected in the flow field. These two steps are repeated until both the flow field and the level-set field are converged. A typical output of my current testcase for one time step looks like this (showing the relative norm of the solution vector and the number of solver iterations):


       KSP Iterations: 170
    Picard iteration step 1:  1.000000e+00
       KSP Iterations: 151
    Picard iteration step 2:  6.972740e-07
       KSP Iterations: 4
  Level-set iteration step 1: 2.619094e-06
       KSP Iterations: 166
    Picard iteration step 1:  1.124124e-06
       KSP Iterations: 4
  Level-set iteration step 2: 5.252072e-11
Time step 1 of 1, time: 0.005000
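
To make the structure of that output explicit: one time step corresponds roughly to the sketch below. The assembly and advection routines, the RelativeChange helper and all variable names are placeholders for illustration only (this is not my actual code); only the loop structure is meant to match the iteration counts and norms printed above.

#include <petscksp.h>

/* hypothetical problem-specific routines, assumed to exist elsewhere */
extern PetscErrorCode AssembleNavierStokes(Mat A, Vec b, Vec x_levelset);
extern PetscErrorCode AdvectLevelSet(Mat A, Vec b, Vec x_flow);
extern PetscReal      RelativeChange(Vec x_new, Vec x_old);

PetscErrorCode TimeStep(KSP ksp_ns, Mat A_ns, Vec b_ns, Vec x_ns, Vec x_ns_old,
                        KSP ksp_ls, Mat A_ls, Vec b_ls, Vec x_ls, Vec x_ls_old,
                        PetscReal tol_picard, PetscReal tol_coupling)
{
  PetscErrorCode ierr;
  PetscInt       its;
  PetscReal      res_coupling = 1.0;

  while (res_coupling > tol_coupling) {               /* flow / level-set coupling loop */
    PetscReal res_picard = 1.0;
    while (res_picard > tol_picard) {                 /* Picard iterations on Navier-Stokes */
      ierr = AssembleNavierStokes(A_ns, b_ns, x_ls); CHKERRQ(ierr);
      ierr = VecCopy(x_ns, x_ns_old); CHKERRQ(ierr);
      ierr = KSPSolve(ksp_ns, b_ns, x_ns); CHKERRQ(ierr);
      ierr = KSPGetIterationNumber(ksp_ns, &its); CHKERRQ(ierr);
      ierr = PetscPrintf(PETSC_COMM_WORLD, "       KSP Iterations: %d\n", (int)its); CHKERRQ(ierr);
      res_picard = RelativeChange(x_ns, x_ns_old);    /* the "Picard iteration step" norms */
    }
    ierr = AdvectLevelSet(A_ls, b_ls, x_ns); CHKERRQ(ierr);  /* advect interface in the new flow field */
    ierr = VecCopy(x_ls, x_ls_old); CHKERRQ(ierr);
    ierr = KSPSolve(ksp_ls, b_ls, x_ls); CHKERRQ(ierr);
    res_coupling = RelativeChange(x_ls, x_ls_old);    /* the "Level-set iteration step" norms */
  }
  return 0;
}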

Excuse me for not mentioning it in the first place. The log_summary output on its own may be misleading. For comparison, I think one should probably concentrate on the iteration counts for a single Picard iteration only.

The problem is discretized using FEM (more precisely XFEM) with stabilized, trilinear hexahedral elements. As the XFEM approximation space, as well as the physical properties at the nodes, is time-dependent, the resulting system may change quite significantly between time steps. Furthermore, the system matrix tends to be ill-conditioned, which can luckily be greatly improved using a diagonal scaling.
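
As an aside, one way to apply such a diagonal scaling is PETSc's built-in symmetric diagonal scaling of the KSP (runtime option -ksp_diagonal_scale). The small sketch below is only an illustration; EnableDiagonalScaling is a placeholder name and not necessarily what my code does.

#include <petscksp.h>

/* Illustration only: enable PETSc's symmetric diagonal scaling on an existing KSP
   (same effect as the runtime option -ksp_diagonal_scale). */
PetscErrorCode EnableDiagonalScaling(KSP ksp)
{
  PetscErrorCode ierr;
  ierr = KSPSetDiagonalScale(ksp, PETSC_TRUE); CHKERRQ(ierr);
  ierr = KSPSetDiagonalScaleFix(ksp, PETSC_TRUE); CHKERRQ(ierr); /* scale the matrix and RHS back after the solve */
  return 0;
}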


On 12.05.2011, at 16:02, Jed Brown wrote:

> On Thu, May 12, 2011 at 15:41, Henning Sauerland <uerland at gmail.com> wrote:
> Applying -sub_pc_type lu helped a lot in 2D, but in 3D, apart from reducing the number of iterations, the whole solution takes more than 10 times longer.
> 
> Does -sub_pc_type ilu -sub_pc_factor_levels 2 (default is 0) help relative to the default? Direct subdomain solves in 3D are very expensive. How much does the system change between time steps?
ILU(2) requires less than half the number of KSP iterations, but it scales similarly to ILU(0) and requires about 1/3 more time.
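
(I simply passed the options on the command line. For reference, and assuming the default block Jacobi preconditioner, the programmatic equivalent of -sub_pc_type ilu -sub_pc_factor_levels 2 would look roughly like the sketch below; SetSubdomainILU2 is a placeholder name.)

#include <petscksp.h>

/* Illustration only: assumes ksp already has its operators set and uses
   the (default) block Jacobi preconditioner. */
PetscErrorCode SetSubdomainILU2(KSP ksp)
{
  PetscErrorCode ierr;
  PC             pc;
  KSP           *subksp;
  PetscInt       i, nlocal, first;

  ierr = KSPGetPC(ksp, &pc); CHKERRQ(ierr);
  ierr = PCSetType(pc, PCBJACOBI); CHKERRQ(ierr);
  ierr = KSPSetUp(ksp); CHKERRQ(ierr);                  /* sub-KSPs are created during setup */
  ierr = PCBJacobiGetSubKSP(pc, &nlocal, &first, &subksp); CHKERRQ(ierr);
  for (i = 0; i < nlocal; i++) {
    PC subpc;
    ierr = KSPGetPC(subksp[i], &subpc); CHKERRQ(ierr);
    ierr = PCSetType(subpc, PCILU); CHKERRQ(ierr);
    ierr = PCFactorSetLevels(subpc, 2); CHKERRQ(ierr);  /* ILU(2): allow two levels of fill */
  }
  return 0;
}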

> 
> What "CFD" formulation is this (physics, discretization) and what regime (Reynolds and Mach numbers, etc)?
> 
> I attached the log_summary output for a problem with about 240000 unknowns (1 time step) using 4, 8 and 16 Intel Xeon E5450 processors (InfiniBand-connected). As far as I can see, the number of iterations seems to be the major issue here, or am I missing something?
> 
> Needing more iterations is the algorithmic part of the problem, 
I guess you are talking about the nonlinear iterations? I was always referring to the KSP iterations, and I thought that the growth of the KSP iteration count with an increasing number of processors is more or less solely related to the iterative solver and preconditioner.

> but the relative cost of orthogonalization is going up. You may want to see if the iteration count can stay reasonable with -ksp_type ibcgs. If this works algorithmically, it may ease the pain. Beyond that, the algorithmic scaling needs to be improved. How does the iteration count scale if you use a direct solver? (I acknowledge that it is not practical, but it provides some insight towards the underlying problem.)
ibcgs is slightly faster, requiring fewer KSP iterations than lgmres. Unfortunately, the iteration count scales very similarly to lgmres, and generally, in my experience, the lack of robustness of BiCGStab-type solvers turns out to be problematic for tougher testcases.


Thanks
Henning


