[petsc-users] Tuning the parallel performance of a 3D FEM CFD code

Henning Sauerland uerland at gmail.com
Sun May 15 08:23:58 CDT 2011

>     I think this is a more important point than you and Jed may give it credit for. It is not clear to me that the worsening convergence (of the linear solve) with the number of subdomains is the chief issue you should be concerned about. An equally important issue is that ILU(0) is not a good preconditioner (and while using a more direct solver, LU or ILU(2), helps the convergence, it is too expensive). Are you using the AIJ matrix format (and hence point ILU)? See below.
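The subdomain-solver variants mentioned above can be compared from the PETSc command line without code changes. A minimal sketch, assuming the default parallel setup (block Jacobi with a local factorization on each subdomain); the executable name and process count are placeholders:

```shell
# Baseline: PETSc's parallel default, block Jacobi with ILU(0) per subdomain
mpiexec -n 8 ./app -ksp_type gmres -pc_type bjacobi -sub_pc_type ilu

# Stronger but costlier: ILU(2) per subdomain
mpiexec -n 8 ./app -ksp_type gmres -pc_type bjacobi -sub_pc_type ilu -sub_pc_factor_levels 2

# Convergence reference: exact LU per subdomain (usually too expensive at scale)
mpiexec -n 8 ./app -ksp_type gmres -pc_type bjacobi -sub_pc_type lu
```

Adding `-ksp_monitor` shows the residual history, which makes the convergence comparison between the variants directly visible.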

>    Is Jed correct on this? If you have equal-order elements, so that all the degrees of freedom live at the same points, then you can switch to the BAIJ matrix format and ILU automatically becomes point-block ILU, which may work much better, as Jed indicates. And you can make this change before mucking with coarse-grid solves or multilevel methods. I'm guessing that the point-block ILU will be a good bit better, and it is easily checked: just create a BAIJ matrix, set the block size appropriately, and leave the rest of the code unchanged (well, interlace the variables if they are not currently interlaced).
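Switching to BAIJ requires the unknowns at each node to be stored contiguously, which is what "interlace the variables" means here. A minimal sketch of that reordering step, with an assumed three-field (u, v, p) layout for illustration; the function name and field layout are not from the thread:

```python
def interlace(field_ordered, n_nodes, n_fields):
    """Reorder field-blocked unknowns [u0..uN, v0..vN, p0..pN] into the
    node-interlaced order [u0, v0, p0, u1, v1, p1, ...] that a BAIJ
    matrix with block size n_fields expects."""
    assert len(field_ordered) == n_nodes * n_fields
    out = []
    for node in range(n_nodes):
        for f in range(n_fields):
            # field f of this node sits at offset f*n_nodes + node
            # in the field-blocked ordering
            out.append(field_ordered[f * n_nodes + node])
    return out

# Example: 2 nodes, 3 fields (u, v, p)
print(interlace(["u0", "u1", "v0", "v1", "p0", "p1"], 2, 3))
# -> ['u0', 'v0', 'p0', 'u1', 'v1', 'p1']
```

With the unknowns interlaced, the code can create the matrix as BAIJ with block size 3 and leave the assembly logic otherwise unchanged, which is the easy check suggested above.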
Unfortunately, it is not that easy to switch to BAIJ. I only have a local XFEM basis. That is, I have variable numbers of degrees of freedom throughout the domain (nodes of elements cut by the fluid interface carry one additional "XFEM unknown"). At the moment, I don't know how, or whether, I can handle the XFEM block when using BAIJ.

