[petsc-users] Memory and Speed Issue of using MG as preconditioner
Dave May
dave.mayhem23 at gmail.com
Wed Nov 6 02:07:43 CST 2013
Hey Alan,
1/ One difference in the memory footprint is likely coming from your coarse
grid solver, which is a redundant LU.
The 2-level case has a coarse grid problem with 70785 unknowns, whilst the
5-level case has a coarse grid problem with only 225 unknowns.
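If that coarse problem turns out to be the culprit, the coarse-level solver
can also be changed from the command line. A sketch, assuming the default
mg_coarse_ prefix (check what -ksp_view actually reports):
-mg_coarse_ksp_type preonly
-mg_coarse_pc_type redundant
-mg_coarse_redundant_pc_type lu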
2/ The solve time difference will be affected by your coarse grid size. Add
the command line argument
-pc_mg_log
to profile the setup time spent on the coarse grid and all other levels.
See
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCMG.html
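For example, reusing the options you already pass (just a sketch):
mpiexec -np 32 ./ex45 -pc_type mg -ksp_type cg -da_refine 2 -pc_mg_galerkin -pc_mg_log -log_summary
The extra stages in the -log_summary output then show the time spent per level.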
3/ You can change the smoother on all levels by using the command line
argument with the appropriate prefix, e.g.
-mg_levels_ksp_type cg
Note that the prefix is displayed in the output of -ksp_view.
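For instance, to keep Chebyshev/Jacobi on all levels but override a single
level (a sketch; the per-level prefixes below are the usual mg_levels_<n>_
ones, so confirm them in your -ksp_view output):
-mg_levels_ksp_type chebyshev -mg_levels_pc_type jacobi
-mg_levels_1_ksp_type richardson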
Also, your mesh size can be altered at run time using arguments like
-da_grid_x 5
so you shouldn't have to modify the source code each time.
See
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMDACreate3d.html
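For instance, a minimal sketch mirroring the call in your modified ex45.c
(the negative sizes are only defaults, which -da_grid_x/y/z can override at
run time):

  ierr = DMDACreate3d(PETSC_COMM_WORLD,
                      DMDA_BOUNDARY_NONE,DMDA_BOUNDARY_NONE,DMDA_BOUNDARY_NONE,
                      DMDA_STENCIL_STAR,
                      -9,-5,-5,                /* default grid; overridable from the command line */
                      PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,
                      1,1,0,0,0,&da);CHKERRQ(ierr);

Then both of your runs can share the same executable, e.g.
  mpiexec -np 32 ./ex45 -da_grid_x 65 -da_grid_y 33 -da_grid_z 33 -da_refine 2 ...
  mpiexec -np 32 ./ex45 -da_grid_x 9 -da_grid_y 5 -da_grid_z 5 -da_refine 5 ...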
Cheers,
Dave
On 6 November 2013 04:21, Alan <zhenglun.wei at gmail.com> wrote:
> Dear all,
> I hope you're having a nice day.
> Recently, I came across a problem when using MG as a preconditioner.
> Basically, to achieve the same finest mesh with -pc_type mg, the memory
> usage for -da_refine 2 is much higher than that for -da_refine 5. To my
> limited knowledge, more refinement should consume more memory, which
> contradicts the behavior of -pc_type mg in PETSc.
> Here, I provide two output files. Both are from
> /src/ksp/ksp/example/tutorial/ex45.c, run with 32 processes.
> The command line for out-level2 is
> mpiexec -np 32 ./ex45 -pc_type mg -ksp_type cg -da_refine 2
> -pc_mg_galerkin -ksp_rtol 1.0e-7 -mg_levels_pc_type jacobi
> -mg_levels_ksp_type chebyshev -dm_view -log_summary -pc_mg_log
> -pc_mg_monitor -ksp_view -ksp_monitor > out &
> and in ex45.c, the DMDACreate3d call is changed to:
> ierr = DMDACreate3d(PETSC_COMM_WORLD,DMDA_BOUNDARY_NONE,DMDA_BOUNDARY_NONE,
>        DMDA_BOUNDARY_NONE,DMDA_STENCIL_STAR,-65,-33,-33,
>        PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,1,1,0,0,0,&da);CHKERRQ(ierr);
> On the other hand, the command line for out-level5 is
> mpiexec -np 32 ./ex45 -pc_type mg -ksp_type cg -da_refine 5
> -pc_mg_galerkin -ksp_rtol 1.0e-7 -mg_levels_pc_type jacobi
> -mg_levels_ksp_type chebyshev -dm_view -log_summary -pc_mg_log
> -pc_mg_monitor -ksp_view -ksp_monitor > out &
> and in ex45.c, the DMDACreate3d call is changed to:
> ierr = DMDACreate3d(PETSC_COMM_WORLD,DMDA_BOUNDARY_NONE,DMDA_BOUNDARY_NONE,
>        DMDA_BOUNDARY_NONE,DMDA_STENCIL_STAR,-9,-5,-5,
>        PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,1,1,0,0,0,&da);CHKERRQ(ierr);
> In summary, the final finest mesh obtained in both cases is
> 257*129*129, as documented in both files. However, out-level2 shows
> that the Matrix object requested 822871308 bytes of memory, while
> out-level5 only needed 36052932.
> Furthermore, although the KSP solver takes 5 iterations in both files,
> the wall time elapsed for out-level2 is around 150s, while out-level5
> takes only about 4.7s.
> Lastly, there is a minor question. In both files, under 'Down solver
> (pre-smoother) on level 1' and 'Down solver (pre-smoother) on level 2',
> the types of "KSP Object: (mg_levels_1_est_)" and "KSP Object:
> (mg_levels_2_est_)" are both 'gmres'. Since I'm using a uniform Cartesian
> mesh, would it help to speed up the solver if 'gmres' were replaced by
> 'cg' here? If so, which PETSc option changes the type of this KSP object?
>
> Sincerely appreciate it,
> Alan