[petsc-users] Do more solver iterations = more communication?

Justin Chang jychang48 at gmail.com
Thu Feb 18 13:56:10 CST 2016


Hi all,

For a poisson problem with roughly 1 million dofs (using second-order
elements), I solved the problem using two different solver/preconditioner
combinations: CG/ILU and CG/GAMG.

ILU takes roughly 82 solver iterations whereas GAMG takes 14 iterations
(wall clock times are roughly 15 and 46 seconds, respectively). I have
seen from previous mailing threads that there is a strong correlation
between solver iterations and communication (which can degrade
strong-scaling). It makes sense to me if I strictly use one of
these preconditioners to solve two different problems and compare the
number of respective iterations, but what about solving the same problem
with two different preconditioners?

If GAMG takes 14 iterations whereas ILU takes 82 iterations, does this
necessarily mean GAMG communicates less overall? I would think that the
"bandwidth" consumed within a single GAMG iteration would be much
greater than that within a single ILU iteration. Is there a way to
determine this definitively?

I see from log_summary that we have this information:

MPI Messages:         5.000e+00      1.00000   5.000e+00  5.000e+00
MPI Message Lengths:  5.816e+07      1.00000   1.163e+07  5.816e+07
MPI Reductions:       2.000e+01      1.00000

Can this information be used to determine the "bandwidth"? If so, can
PETSc report the same information for other preconditioner packages
like HYPRE's BoomerAMG or Trilinos' ML?
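For what it's worth, one way I could compare the two is to run the same
problem twice, changing only the preconditioner, and diff the resulting
logs (the executable name, option names, and process count below are
placeholders for my actual setup):

```shell
# Same problem, same process count; only the preconditioner differs.
# Compare the "MPI Messages", "MPI Message Lengths", and
# "MPI Reductions" summary lines, plus the per-event rows
# (MatMult, VecScatter, etc.), between the two logs.
mpiexec -n 16 ./poisson -ksp_type cg -pc_type ilu  -log_summary > log_ilu.txt
mpiexec -n 16 ./poisson -ksp_type cg -pc_type gamg -log_summary > log_gamg.txt
```

Would this per-event breakdown be the right way to attribute the
communication to each preconditioner?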

Thanks,
Justin
