[petsc-users] Poor multigrid convergence in parallel

Dave May dave.mayhem23 at gmail.com
Mon Jul 21 12:33:25 CDT 2014


Hi Lawrence,

I agree that you shouldn't expect magical things to happen when using SOR in
parallel, but I'm a bit surprised you see such variation for a Poisson problem.

Take src/ksp/ksp/examples/tutorials/ex29.c, for example.

Running with 1 and 16 cores, I get very similar convergence histories:

mpiexec -n 1 ./ex29 -pc_type mg -pc_mg_levels 3 -da_grid_x 33 -da_grid_y 33 \
  -mg_coarse_ksp_type cg -mg_coarse_pc_type jacobi -mg_coarse_ksp_max_it 100 \
  -mg_levels_pc_type sor -ksp_type fgmres -ksp_monitor

  0 KSP Residual norm 6.680980151738e-03
  1 KSP Residual norm 2.600644743629e-04
  2 KSP Residual norm 7.722227428855e-06
  3 KSP Residual norm 2.001120894208e-07
  4 KSP Residual norm 6.663821440723e-09

mpiexec -n 16 ./ex29 -pc_type mg -pc_mg_levels 3 -da_grid_x 33 -da_grid_y 33 \
  -mg_coarse_ksp_type cg -mg_coarse_pc_type jacobi -mg_coarse_ksp_max_it 100 \
  -mg_levels_pc_type sor -ksp_type fgmres -ksp_monitor

  0 KSP Residual norm 6.680980151738e-03
  1 KSP Residual norm 4.555242291797e-04
  2 KSP Residual norm 1.508911073478e-05
  3 KSP Residual norm 3.520772689849e-07
  4 KSP Residual norm 1.128900788683e-08
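
In case it's useful, the same solver configuration can also be set up through
the API instead of the options database. Below is a rough, untested sketch
(ConfigureMGSmoothers is just a name I made up); it assumes the KSP's PC is
already a PCMG with the levels created, e.g. after KSPSetFromOptions() with
-pc_type mg -pc_mg_levels 3:

#include <petscksp.h>

/* Configure a CG/Jacobi coarse solve and Chebyshev/SOR level smoothers
   on an existing PCMG preconditioner. */
PetscErrorCode ConfigureMGSmoothers(KSP ksp)
{
  PC             pc,lpc;
  KSP            coarse,smoother;
  PetscInt       nlevels,l;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCMGGetLevels(pc,&nlevels);CHKERRQ(ierr);

  /* coarse grid: CG preconditioned with Jacobi, at most 100 iterations */
  ierr = PCMGGetCoarseSolve(pc,&coarse);CHKERRQ(ierr);
  ierr = KSPSetType(coarse,KSPCG);CHKERRQ(ierr);
  ierr = KSPGetPC(coarse,&lpc);CHKERRQ(ierr);
  ierr = PCSetType(lpc,PCJACOBI);CHKERRQ(ierr);
  ierr = KSPSetTolerances(coarse,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT,100);CHKERRQ(ierr);

  /* levels 1..nlevels-1: 2 Chebyshev iterations preconditioned with (local) SOR */
  for (l = 1; l < nlevels; l++) {
    ierr = PCMGGetSmoother(pc,l,&smoother);CHKERRQ(ierr);
    ierr = KSPSetType(smoother,KSPCHEBYSHEV);CHKERRQ(ierr);
    ierr = KSPSetTolerances(smoother,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT,2);CHKERRQ(ierr);
    ierr = KSPGetPC(smoother,&lpc);CHKERRQ(ierr);
    ierr = PCSetType(lpc,PCSOR);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}

For quick experiments the command-line options above are the easier route; the
API version is mainly useful if you want to hard-wire the smoother choices in
your code.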


On 21 July 2014 19:16, Lawrence Mitchell <lawrence.mitchell at imperial.ac.uk>
wrote:

> To follow up,
>
> On 21 Jul 2014, at 13:11, Lawrence Mitchell <lawrence.mitchell at imperial.ac.uk> wrote:
>
> >
> > On 21 Jul 2014, at 12:52, Dave May <dave.mayhem23 at gmail.com> wrote:
> >
> >>
> >> -pc_type mg -mg_levels_ksp_type richardson -mg_levels_pc_type jacobi -mg_levels_ksp_max_it 2
> >>
> >> then I get identical convergence in serial and parallel
> >>
> >>
> >> Good. That's the correct result.
> >>
> >> if, however, I run with
> >>
> >> -pc_type mg -mg_levels_ksp_type chebyshev -mg_levels_pc_type sor -mg_levels_ksp_max_it 2
> >> (the default according to -ksp_view)
> >>
> >> then I get very different convergence in serial and parallel, as described.
> >>
> >>
> >> It's normal that the behaviour is different. The PETSc SOR
> >> implementation is not parallel. It only performs SOR on your local
> >> subdomain.
>
> I think I've convinced myself that I was just getting unlucky with the
> Chebyshev+SOR smoothing combination. When I looked again, modifying the SOR
> weighting, or choosing something other than 2 processes for the particular
> problem where I was seeing bad 2-process convergence, gives good convergence
> again.
>
> I'll keep an eye on it in case it turns out I have been doing something
> stupid.  Thanks for the various pointers on debugging though.
>
> Cheers,
>
> Lawrence
>
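
One more note on the point above that PETSc's SOR only sweeps over the local
subdomain: the knobs that change how the parallel smoother behaves are the SOR
weighting and the number of local sweeps. Via the options database those are,
for example (the values here are just placeholders, not recommendations):

  -mg_levels_pc_sor_omega 1.5 -mg_levels_pc_sor_its 1 -mg_levels_pc_sor_lits 2

or equivalently PCSORSetOmega() and PCSORSetIterations() on each level's
smoother PC. -ksp_view will show you what the smoother ended up with.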