[petsc-users] which is the best PC for GMRES Poisson solver?

Zhang zyzhang at nuaa.edu.cn
Sat Aug 24 00:07:41 CDT 2013


Hi,

Recently I wrote a projection-method code for solving incompressible flow. It includes a Poisson solve for the pressure.
I then ported the code to PETSc.
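
Roughly, the pressure KSP is set up like this (a simplified sketch with my own names, not the exact nsproj code), so that the solver and preconditioner can be chosen from the command line:

#include <petscksp.h>

/* Build the KSP for the pressure Poisson system once; A is the (constant)
   discrete Laplacian. Names here are illustrative only. */
PetscErrorCode CreatePressureKSP(Mat A, KSP *ksp)
{
  PetscErrorCode ierr;

  ierr = KSPCreate(PETSC_COMM_WORLD, ksp);CHKERRQ(ierr);
  /* The matrix never changes between time steps, so the preconditioner can be
     reused (PETSc 3.4.x still takes the MatStructure flag here). */
  ierr = KSPSetOperators(*ksp, A, A, SAME_PRECONDITIONER);CHKERRQ(ierr);
  ierr = KSPSetType(*ksp, KSPGMRES);CHKERRQ(ierr);   /* default; overridden by -ksp_type */
  ierr = KSPSetTolerances(*ksp, 1.e-2, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(*ksp);CHKERRQ(ierr);      /* picks up -ksp_type, -pc_type, -ksp_rtol, ... */
  return 0;
}

Each time step then just calls KSPSolve(ksp, rhs, pressure) with the same KSP.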

However, for the 3D lid-driven cavity case, the PETSc version is not yet faster.

I tried different preconditioner combinations with the GMRES solver. Among them, GMRES+ILU and GMRES+SOR were the fastest.
On an 80x80x80 grid, the serial GMRES+SOR run took 185.816307 sec. However, on 120x120x120 it diverged, and so did GMRES+ILU.
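
I have not yet looked into why it diverges; I suppose adding something like

  -ksp_converged_reason -ksp_monitor_true_residual

to the run would show how the residual behaves and why the solve stops.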

Then I ran a parallel comparison, as follows:


#############################################################################################
# with PETSc-3.4.2, time comparison (sec)
# grid (80,80,80), 200 steps, dt=0.002
#
# mpirun -np 8  ./nsproj ../input/sample_80.zzy.dat -ksp_type gmres -pc_type hypre -ksp_rtol 1.e-2
#   debug version  177.695939 sec
#   opt   version  106.694733 sec
#
# mpirun -np 12 ./nsproj ../input/sample_80.zzy.dat -ksp_type gmres -pc_type hypre -ksp_rtol 1.e-2
#   debug version  514.718544 sec
#   opt   version  331.114555 sec
#
# mpirun -np 16 ./nsproj ../input/sample_80.zzy.dat -ksp_type gmres -pc_type hypre -ksp_rtol 1.e-2
#   debug version  796.765428 sec
#   opt   version  686.151788 sec

I do know that sometimes a speed problem is due to my own code rather than PETSc, so any suggestion
about which combination is best for this kind of computation is welcome. Solving Poisson and Helmholtz equations is, after all, very common in numerical work.
Thank you in advance.
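
If it helps, I can also send the output of a profiling run so one can see where the time actually goes, e.g. (if I remember the option name in 3.4 correctly):

#mpirun -np 8 ./nsproj ../input/sample_80.zzy.dat -ksp_type gmres -pc_type hypre -ksp_rtol 1.e-2 -log_summary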

BTW, I also tried SuperLU_DIST as the PC:
#mpirun -np 16 ./nsproj  ../input/sample_20.zzy.dat -ksp_type gmres -pc_type lu -pc_factor_mat_solver_package superlu_dist -ksp_rtol 1.e-2

But with 16 processes, every grid larger than 20x20x20 runs extremely slowly.
Since I have never used a direct solver as a PC before, is it true that making good use of direct LU as a preconditioner
requires a much larger number of processes, so that the local factorization work on each process becomes small enough to be practical?

Cheers,

Zhenyu