<div dir="ltr"><div>Hi all,<br><br></div>For a poisson problem with roughly 1 million dofs (using second-order elements), I solved the problem using two different solver/preconditioner combinations: CG/ILU and CG/GAMG. <br><div><br></div><div>ILU takes roughly 82 solver iterations whereas with GAMG it takes 14 iterations (wall clock time is roughly 15 and 46 seconds respectively). I have seen from previous mailing threads that there is a strong correlation between solver iterations and communication (which could lead to less strong-scaling scalability). It makes sense to me if I strictly use one of these preconditioners to solve two different problems and compare the number of respective iterations, but what about solving the same problem with two different preconditioners?</div><div><br></div><div>If GAMG takes 14 iterations whereas ILU takes 82 iterations, does this necessarily mean GAMG has less communication? I would think that the "bandwidth" that happens within a single GAMG iteration would be much greater than that within a single ILU iteration. Is there a way to officially determine this?</div><div><br></div><div>I see from log_summary that we have this information:</div><div><pre class="">MPI Messages: 5.000e+00 1.00000 5.000e+00 5.000e+00
MPI Message Lengths: 5.816e+07 1.00000 1.163e+07 5.816e+07
MPI Reductions: 2.000e+01 1.00000

Can this information be used to quantify the communication volume? If so, does PETSc have the ability to report the same figures for other preconditioner packages like HYPRE's BoomerAMG or Trilinos' ML?

Thanks,
Justin
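
P.S. For what it's worth, below is a rough, untested sketch of how I imagine isolating just the KSPSolve in its own logging stage, so that the MPI Messages / Message Lengths / Reductions counts would cover only the solve rather than setup and assembly. The toy 1D Laplacian, the problem size n, and the stage name "Solve" are placeholders rather than my actual Poisson code:

/* Minimal sketch (placeholder operator, not my real assembly): put KSPSolve
 * in its own PetscLogStage so -log_summary reports MPI Messages / Message
 * Lengths / Reductions for the solve as a separate stage. */
#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat           A;
  Vec           x, b;
  KSP           ksp;
  PetscLogStage solve_stage;
  PetscInt      i, n = 1000, Istart, Iend;

  PetscInitialize(&argc, &argv, NULL, NULL);

  /* Toy 1D Laplacian just so the sketch runs end to end. */
  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
  MatSetFromOptions(A);
  MatSetUp(A);
  MatGetOwnershipRange(A, &Istart, &Iend);
  for (i = Istart; i < Iend; i++) {
    if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
    if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
    MatSetValue(A, i, i, 2.0, INSERT_VALUES);
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  MatCreateVecs(A, &x, &b);
  VecSet(b, 1.0);

  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPSetFromOptions(ksp);  /* picks up -ksp_type / -pc_type from the command line */

  /* Everything between Push and Pop is attributed to the "Solve" stage. */
  PetscLogStageRegister("Solve", &solve_stage);
  PetscLogStagePush(solve_stage);
  KSPSolve(ksp, b, x);
  PetscLogStagePop();

  KSPDestroy(&ksp);
  VecDestroy(&x);
  VecDestroy(&b);
  MatDestroy(&A);
  PetscFinalize();
  return 0;
}

The idea would then be to run the same executable once with -ksp_type cg -pc_type gamg -log_summary and once with -ksp_type cg -pc_type ilu -log_summary (and, I assume, with -pc_type hypre -pc_hypre_type boomeramg or -pc_type ml for the external packages) and compare the "Solve" stage summaries.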