[petsc-users] Guidance on GAMG preconditioning

Matthew Knepley knepley at gmail.com
Sat Jun 6 06:13:24 CDT 2015


On Sat, Jun 6, 2015 at 4:29 AM, Justin Chang <jychang48 at gmail.com> wrote:

> Matt and Mark, thank you both for your responses.
>
> The reason I brought up GAMG is that it seems to be the preconditioner of
> choice for elliptic problems. However, I am using CG/Jacobi for my larger
> problems and the solver converges (with -ksp_atol and -ksp_rtol set to
> 1e-8). Using GAMG I get roughly the same wall-clock time, but
> significantly fewer solver iterations.
>
> As I also mentioned in another mail, the ultimate purpose is to compare
> how this "correction" methodology using the TAO solver (with bound
> constraints) performs against the original methodology using the KSP
> solver (without constraints). I have the arithmetic intensity (AI) for
> BLMVM and CG/Jacobi, and they are roughly 0.3 and 0.2 respectively (do
> these sound about right?). Although the AI is higher for TAO, the ratio
> of actual FLOPS/s to AI*STREAM BW is smaller, though I am not sure what
> conclusions to draw from that. This was also partly why I wanted to see
> what kind of metrics another KSP solver/preconditioner produces.
>
> The point being: if I were to draw such comparisons between TAO and KSP,
> would I get crucified if people found out I am using CG/Jacobi and not
> GAMG?
>
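
(On the AI ratio: what you are computing is the roofline fraction, i.e.
achieved FLOP/s divided by the bandwidth bound min(peak FLOP/s, AI x STREAM
BW). A smaller fraction for BLMVM just means its kernels run further below
the memory-bandwidth limit; by itself it does not say which method is
better.)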

Here is what someone like me, reviewing your paper, would say first. I can
believe that a well-conditioned problem would converge using CG/Jacobi.
However, if the highest-order derivative looks like the Laplacian, then the
condition number of the equations will grow as O(h^{-2}), and even with CG
the iteration count will grow as O(h^{-1}), so the number of iterations
should increase as the square root of the problem size (in 2D), whereas
GAMG should stay constant. Thus at some size GAMG will be more efficient. I
would want to see where the crossover is for your problem. If you do not
get the O(h^{-1}) dependence, I would suspect a problem in the formulation.
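
Spelled out, the standard estimate is

    kappa(A) = O(h^{-2})   for a Laplacian-type operator,
    CG iterations ~ sqrt(kappa(A)) = O(h^{-1}),
    N = O(h^{-2}) in 2D   =>   iterations = O(N^{1/2}),

so in a uniform refinement study (watching -ksp_monitor or
-ksp_converged_reason across a sequence of meshes) the CG/Jacobi iteration
count should roughly double with each refinement, while GAMG's should stay
flat.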

  Thanks,

     Matt


> Thanks,
> Justin
>
> On Fri, Jun 5, 2015 at 2:02 PM, Mark Adams <mfadams at lbl.gov> wrote:
>
>>
>>> The overwhelming cost of AMG is the Galerkin triple-product RAP.
>>>
>> That is overstating it a bit.  It can be, if you have a hard 3D operator
>> and coarsening slowly is best.
>>
>> A rule of thumb is that you spend 50% of the time in the solver and 50%
>> in the setup, which is often mostly the RAP (in 3D; 2D is much faster).
>> That way you are within 2x of optimal, and it often works out that way
>> anyway.
>>
>> Mark
>>
>
>
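
To see the solve/setup split Mark describes, the PETSc log breaks it out
directly; a minimal sketch, with ./myapp standing in for your application
binary:

    # -log_summary prints a table of times per event at exit
    ./myapp -ksp_type cg -pc_type gamg -log_summary

    # In that table, compare:
    #   PCSetUp   - GAMG setup (coarsening + Galerkin products)
    #   MatPtAP*  - the RAP itself (PETSc forms it as P^T A P)
    #   KSPSolve  - the iterations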


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener