[petsc-users] Poor multigrid convergence in parallel

Dave May dave.mayhem23 at gmail.com
Mon Jul 21 06:52:27 CDT 2014


> -pc_type mg -mg_levels_ksp_type richardson -mg_levels_pc_type jacobi
> -mg_levels_ksp_max_it 2
>
> then I get identical convergence in serial and parallel
>
>
Good. That's the correct result.
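
For reference, here's roughly how I'd run that comparison so the residual
histories can be checked line by line (your_solver.py and the process count
are placeholders for your own driver and setup):

  # serial run
  python your_solver.py -pc_type mg -mg_levels_ksp_type richardson \
      -mg_levels_pc_type jacobi -mg_levels_ksp_max_it 2 \
      -ksp_monitor_true_residual -ksp_converged_reason

  # parallel run; with this smoother the residual history should match the serial one
  mpiexec -n 4 python your_solver.py -pc_type mg -mg_levels_ksp_type richardson \
      -mg_levels_pc_type jacobi -mg_levels_ksp_max_it 2 \
      -ksp_monitor_true_residual -ksp_converged_reason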


> if, however, I run with
>
> -pc_type mg -mg_levels_ksp_type chebyshev -mg_levels_pc_type sor
> -mg_levels_ksp_max_it 2
> (the default according to -ksp_view)
>
> then I get very differing convergence in serial and parallel as described.
>
>
It's normal that the behaviour is different. The PETSc SOR implementation
is not parallel. It only performs SOR on your local subdomain.
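
In parallel, PCSOR corresponds to processor-block Jacobi with SOR sweeps on
each block, so one way to see the subdomain-wise behaviour explicitly is to
compare against that combination spelled out. A sketch, again with
your_solver.py and the process count as placeholders:

  # SOR smoother (local sweeps only when run in parallel)
  mpiexec -n 4 python your_solver.py -pc_type mg -mg_levels_ksp_type chebyshev \
      -mg_levels_pc_type sor -mg_levels_ksp_max_it 2 -ksp_monitor_true_residual

  # roughly the same thing written out as block Jacobi with SOR on each subdomain
  mpiexec -n 4 python your_solver.py -pc_type mg -mg_levels_ksp_type chebyshev \
      -mg_levels_pc_type bjacobi -mg_levels_sub_pc_type sor \
      -mg_levels_ksp_max_it 2 -ksp_monitor_true_residual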



> > 1) How did you configure the coarse grid solver in the serial and
> parallel test? Are they consistent?
>
> I initially just used the default (which is LU in serial and redundant
> with LU in parallel). When I rerun with:
>
> -pc_type mg -ksp_type fgmres -mg_coarse_ksp_type gmres -mg_coarse_pc_type
> jacobi -mg_coarse_ksp_max_it 100
>
> which should be the same in serial and parallel, I again see bad behaviour
> in parallel.
>
>
I see that this is a nested Krylov solve. Using fgmres for the outer solve
is sometimes not enough. I've had problems where I needed to use the more
stable orthogonalization routine (modified Gram-Schmidt) in gmres.

Do you also observe different convergence behaviour (serial versus
parallel) with these choices? (A concrete run is sketched after the list.)
1) -mg_coarse_ksp_type gmres -mg_coarse_pc_type jacobi
-mg_coarse_ksp_max_it 1

2) -mg_coarse_ksp_type gmres -mg_coarse_pc_type jacobi
-mg_coarse_ksp_max_it 100 -mg_coarse_ksp_gmres_modifiedgramschmidt

3) -mg_coarse_ksp_type cg -mg_coarse_pc_type jacobi -mg_coarse_ksp_max_it
100
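
As a concrete example, here is how I'd run test 2 with the coarse solve
instrumented so we can see whether the inner gmres is actually converging
(your_solver.py and the process count are placeholders):

  mpiexec -n 4 python your_solver.py -pc_type mg -ksp_type fgmres \
      -mg_coarse_ksp_type gmres -mg_coarse_pc_type jacobi \
      -mg_coarse_ksp_max_it 100 -mg_coarse_ksp_gmres_modifiedgramschmidt \
      -mg_coarse_ksp_converged_reason -ksp_monitor_true_residual -ksp_view

Tests 1 and 3 are the same command with the corresponding coarse options
swapped in.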



> > 2) Does using one level with PCMG and a chebyshev smoother give the same
> answers in serial and parallel? If you precondition with jacobi, the
> residuals in serial and parallel should be very similar. If this test
> didn't pass, try a 1 level method again without the preconditioner
> (i.e. just apply chebyshev). If the residuals are really different between
> these runs, there is likely something wrong/inconsistent with the
> definition of the operator on each level, or the way the boundary
> conditions are imposed.
>
> For these tests, I use the following options:
>
> -pc_type mg -ksp_type fgmres -pc_mg_levels 2 -mg_coarse_ksp_type gmres
> -mg_coarse_pc_type jacobi -mg_coarse_ksp_max_it 100
>
> I then tried the following options for the smoother:
>
> -mg_levels_ksp_type chebyshev -mg_levels_pc_type sor -mg_levels_ksp_max_it
> 2
>
> Works in serial, doesn't converge well in parallel.
>
>
Sure - see above.


> -mg_levels_ksp_type chebyshev -mg_levels_pc_type jacobi
> -mg_levels_ksp_max_it 2
>
> Converges very slowly in both serial and parallel (but has the same
> convergence in both cases).
>
> -mg_levels_ksp_type chebyshev -mg_levels_pc_type none
> -mg_levels_ksp_max_it 2
>
> Converges well in both serial and parallel (identical convergence
> behaviour).
>
>
Sure - this wasn't a convergence test. I just wanted to see that the
methods which should be identical in serial and parallel are in fact
behaving as expected. It seems they are. So I'm inclined to think the
problem is associated with having nested Krylov solves.
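
One way to test that hypothesis (hypothetical driver name and process count
again) is to take the inner Krylov solve out of the coarse level entirely,
i.e. go back to a direct coarse solve and keep the smoother that behaved
identically in serial and parallel:

  mpiexec -n 4 python your_solver.py -pc_type mg -ksp_type fgmres \
      -mg_levels_ksp_type richardson -mg_levels_pc_type jacobi \
      -mg_levels_ksp_max_it 2 \
      -mg_coarse_ksp_type preonly -mg_coarse_pc_type redundant \
      -mg_coarse_redundant_pc_type lu -ksp_monitor_true_residual

If serial and parallel then agree, the nested Krylov coarse solve is the
likely culprit.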


Cheers,
  Dave


>
> > 3) Is the code valgrind clean?
>
> It's Python-based, so it's a little difficult to say. It appears to be so.
>
> Cheers,
>
> Lawrence
>