[petsc-users] which is the best PC for GMRES Poisson solver?

Zhang zyzhang at nuaa.edu.cn
Mon Aug 26 02:33:22 CDT 2013


Hi, Matt
BTW,

Could you give me any suggestions on the direct-solver preconditioners, such as SuperLU_DIST? Thanks

Zhenyu


-----Original Message-----
From: "Matthew Knepley" <knepley at gmail.com>
Sent: 2013-08-24 18:20:00 (Saturday)
To: Zhang <zyzhang at nuaa.edu.cn>
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] which is the best PC for GMRES Poisson solver?


On Sat, Aug 24, 2013 at 12:07 AM, Zhang <zyzhang at nuaa.edu.cn> wrote:

Hi,

Recently I wrote a projection-method code for solving incompressible flow. It includes a Poisson solve for the pressure.
I ported the code to be based on PETSc.



Use -pc_type gamg


  Matt
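
A minimal command-line sketch of this suggestion, assuming the pressure Poisson system is symmetric positive definite so that CG can replace GMRES (the executable and input file are taken from the timings below; the GAMG options shown are just common smoothed-aggregation settings, not anything specific to nsproj):

mpirun -np 8 ./nsproj ../input/sample_80.zzy.dat -ksp_type cg -pc_type gamg -pc_gamg_agg_nsmooths 1 -ksp_rtol 1.e-2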
 
However, for the 3D lid-driven flow case, the PETSc version is not yet faster.

I tried different combinations of preconditioners with the GMRES solver. Among them, GMRES+ILU and GMRES+SOR were the fastest.
For an 80x80x80 grid, the serial GMRES+SOR run took 185.816307 secs. However, for the 120x120x120 case it diverged, and so did GMRES+ILU.
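
If it helps to see why the larger grid diverges, a minimal sketch of the standard PETSc monitoring options (the input file name here is only a placeholder; everything else is a stock option):

./nsproj ../input/sample_120.zzy.dat -ksp_type gmres -pc_type sor -ksp_rtol 1.e-2 -ksp_monitor_true_residual -ksp_converged_reason

-ksp_converged_reason reports whether the solve stopped on the tolerance, the iteration limit, or a breakdown, which usually narrows the problem down quickly.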

Then I tried a parallel comparison, as follows:


#############################################################################################
# with Petsc-3.4.2, time comparison (sec)
# size (80,80,80), 200 steps, dt=0.002
#debug version  177.695939 sec
#opt   version  106.694733 sec
#mpirun -np 8 ./nsproj  ../input/sample_80.zzy.dat -ksp_type gmres -pc_type hypre -ksp_rtol 1.e-2
#debug version  514.718544 sec
#opt   version  331.114555 sec
#mpirun -np 12 ./nsproj  ../input/sample_80.zzy.dat -ksp_type gmres -pc_type hypre -ksp_rtol 1.e-2
#debug version  796.765428 sec
#opt   version  686.151788 sec
#mpirun -np 16 ./nsproj  ../input/sample_80.zzy.dat -ksp_type gmres -pc_type hypre -ksp_rtol 1.e-2

I do know that sometimes the speed problem is not due to PETSc but to my own code, so any suggestion
about which combination is best for such a computation is welcome. Solving Poisson and Helmholtz equations is very common in numerical work.
Thank you first.
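
To see whether the time is going into the Poisson solve or into the rest of the code, one low-effort check (a standard PETSc 3.4 profiling flag, nothing specific to nsproj) is to append -log_summary and compare the KSPSolve time against the total:

mpirun -np 8 ./nsproj ../input/sample_80.zzy.dat -ksp_type gmres -pc_type hypre -ksp_rtol 1.e-2 -log_summary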

BTW, I also tried to use SuperLU_DIST as the PC.
#mpirun -np 16 ./nsproj  ../input/sample_20.zzy.dat -ksp_type gmres -pc_type lu -pc_factor_mat_solver_package superlu_dist -ksp_rtol 1.e-2

But with 16 processes, everything except the 20x20x20 grid runs extremely slowly.
Since I have never used a direct PC before, is it true that using direct LU well as a preconditioner
requires a much larger number of processes, so that the direct-solver work on each process is small enough to be affordable?
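
For reference, a sketch of how SuperLU_DIST is usually driven as a full direct solve rather than as a preconditioner inside GMRES (option names as in PETSc 3.4, where the factorization package is selected with -pc_factor_mat_solver_package):

mpirun -np 16 ./nsproj ../input/sample_20.zzy.dat -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package superlu_dist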

Cheers,

Zhenyu






--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

