[petsc-users] Poor multigrid convergence in parallel
Lawrence Mitchell
lawrence.mitchell at imperial.ac.uk
Mon Jul 21 06:41:35 CDT 2014
On 21 Jul 2014, at 11:50, Dave May <dave.mayhem23 at gmail.com> wrote:
> Hi Lawrence,
>
> Hmm, this sounds odd. The convergence obtained with chebyshev should be essentially identical in serial and parallel when using a jacobi preconditioner.
So I was maybe a bit unclear in my previous mail:
If I run with:
-pc_type mg -mg_levels_ksp_type richardson -mg_levels_pc_type jacobi -mg_levels_ksp_max_it 2
then I get identical convergence in serial and parallel.
If, however, I run with:
-pc_type mg -mg_levels_ksp_type chebyshev -mg_levels_pc_type sor -mg_levels_ksp_max_it 2
(the default according to -ksp_view), then I get very different convergence in serial and parallel, as described.
> 1) How did you configure the coarse grid solver in the serial and parallel test? Are they consistent?
I initially just used the default (which is LU in serial and redundant with LU in parallel). When I rerun with:
-pc_type mg -ksp_type fgmres -mg_coarse_ksp_type gmres -mg_coarse_pc_type jacobi -mg_coarse_ksp_max_it 100
which should be the same in serial and parallel, I again see bad behaviour in parallel.
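For concreteness, the full parallel invocation for that test looks something like the following (the script name and process count here are just placeholders for my actual driver):

    mpiexec -n 4 python solve.py \
        -ksp_type fgmres -pc_type mg \
        -mg_coarse_ksp_type gmres -mg_coarse_pc_type jacobi -mg_coarse_ksp_max_it 100 \
        -mg_levels_ksp_type chebyshev -mg_levels_pc_type sor -mg_levels_ksp_max_it 2 \
        -ksp_monitor_true_residual -ksp_converged_reason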
> 2) Does using one level with PCMG and a chebyshev smoother give the same answers in serial and parallel? If you precondition with jacobi, the residuals in serial and parallel should be very similar. If this test didn't pass, try a 1 level method again without the preconditioner (i.e. just apply chebyshev). If the residuals are really different between these runs, there is likely something wrong/inconsistent with the definition of the operator on each level, or the way the boundary conditions are imposed.
For these tests, I use the following options:
-pc_type mg -ksp_type fgmres -pc_mg_levels 2 -mg_coarse_ksp_type gmres -mg_coarse_pc_type jacobi -mg_coarse_ksp_max_it 100
I then tried the following options for the smoother:
-mg_levels_ksp_type chebyshev -mg_levels_pc_type sor -mg_levels_ksp_max_it 2
Works in serial, doesn't converge well in parallel.
-mg_levels_ksp_type chebyshev -mg_levels_pc_type jacobi -mg_levels_ksp_max_it 2
Converges very slowly in both serial and parallel (but has the same convergence in both cases).
-mg_levels_ksp_type chebyshev -mg_levels_pc_type none -mg_levels_ksp_max_it 2
Converges well in both serial and parallel (identical convergence behaviour).
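In case it's useful to have something self-contained to poke at: the following petsc4py sketch is purely illustrative (it is not my actual code, which is more involved; it assumes a DMDA-discretised 2D Poisson problem), but it has the same solver structure and can be run with exactly the option sets above, e.g. with the invocation given earlier:

    import sys
    import petsc4py
    petsc4py.init(sys.argv)
    from petsc4py import PETSc


    def rhs(ksp, b):
        # Constant forcing; the boundary rows of the operator are identity rows.
        b.set(1.0)


    def operators(ksp, J, A):
        # Assemble a 5-point Laplacian on this level's DMDA (J and A are the
        # same matrix here, so assembling A is enough).
        da = ksp.getDM()
        mx, my = da.getSizes()
        hx, hy = 1.0/mx, 1.0/my
        (xs, xe), (ys, ye) = da.getRanges()
        row, col = PETSc.Mat.Stencil(), PETSc.Mat.Stencil()
        for j in range(ys, ye):
            for i in range(xs, xe):
                row.index = (i, j)
                if i == 0 or j == 0 or i == mx - 1 or j == my - 1:
                    A.setValueStencil(row, row, 1.0)    # Dirichlet boundary row
                else:
                    for di, dj, v in ((0, -1, -hx/hy), (-1, 0, -hy/hx),
                                      (1, 0, -hy/hx), (0, 1, -hx/hy)):
                        col.index = (i + di, j + dj)
                        A.setValueStencil(row, col, v)
                    A.setValueStencil(row, row, 2.0*(hx/hy + hy/hx))
        A.assemble()


    # 65x65 grid so that one level of coarsening (pc_mg_levels 2) is possible.
    da = PETSc.DMDA().create(dim=2, sizes=(65, 65), stencil_width=1)
    ksp = PETSc.KSP().create()
    ksp.setDM(da)
    ksp.setComputeRHS(rhs)
    ksp.setComputeOperators(operators)
    ksp.setFromOptions()
    ksp.solve(None, None)    # DM-managed right-hand side and solution
    PETSc.Sys.Print("iterations:", ksp.getIterationNumber())

That at least lets the chebyshev/sor combination be compared on 1 and N processes independently of my application code.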
> 3) Is the code valgrind clean?
It's Python-based, so it's a little difficult to say, but it appears to be.
Cheers,
Lawrence