[petsc-users] Guidance on GAMG preconditioning

Mark Adams mfadams at lbl.gov
Sun Jun 7 11:11:01 CDT 2015


On Sat, Jun 6, 2015 at 4:00 PM, Young, Matthew, Adam <may at bu.edu> wrote:

>  Forgive me for being like a child who wanders into the middle of a
> movie...
>
>  I've been attempting to follow this conversation from a beginner's level
> because I am trying to solve an elliptic PDE with variable coefficients.
> Both the operator and the RHS change at each time step and the operator has
> off-diagonal terms that become dominant
>

Yikes.


> as the instability of interest grows.
>

As Matt says, out-of-the-box multigrid will not solve all elliptic problems
fast.  Is the problem even elliptic if the off-diagonal terms are dominant?

Anyway, another way of looking at it is: if the Green's function decays
quickly you can exploit that with a local process plus a coarse grid
correction.  If you have a funny Green's function you need a funny method
to deal with it.
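The local-process-plus-coarse-grid-correction idea above can be sketched in a
few lines of NumPy. This is a toy two-grid cycle on a 1D Poisson model problem,
not anything PETSc-specific; the grid size, damping factor, smoothing counts,
and linear-interpolation prolongator are all illustrative choices:

```python
import numpy as np

def poisson1d(n):
    """1D Poisson matrix (Dirichlet BCs), a model elliptic operator."""
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))

def two_grid(A, b, x, P):
    """One two-grid cycle: damped-Jacobi smoothing + coarse-grid correction."""
    D = np.diag(A)
    for _ in range(2):                       # pre-smoothing (local process)
        x = x + (2.0 / 3.0) * (b - A @ x) / D
    Ac = P.T @ A @ P                         # Galerkin coarse operator RAP
    x = x + P @ np.linalg.solve(Ac, P.T @ (b - A @ x))  # coarse correction
    for _ in range(2):                       # post-smoothing
        x = x + (2.0 / 3.0) * (b - A @ x) / D
    return x

n = 31                                       # fine grid; 15 coarse points
A = poisson1d(n)
# linear-interpolation prolongator from the coarse grid to the fine grid
P = np.zeros((n, (n - 1) // 2))
for j in range(P.shape[1]):
    i = 2 * j + 1                            # coarse point at fine index i
    P[i - 1, j], P[i, j], P[i + 1, j] = 0.5, 1.0, 0.5

b = np.ones(n)
x = np.zeros(n)
for _ in range(20):
    x = two_grid(A, b, x, P)
err = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
print(err)  # relative residual drops many orders of magnitude in 20 cycles
```

Because the Green's function of this model problem decays, a purely local
smoother plus a small coarse solve is enough; that is the structure AMG tries
to build automatically.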



> I read somewhere that a direct method is the best for this but I'm
> intrigued by Justin's comment that GAMG seems to be "the preconditioner to
> use for elliptic problems". I don't want to hijack this conversation but
> it seems like a good chance to ask for your collective advice on resources
> for understanding my problem. Any thoughts?
>
>  --Matt
>
>   --------------------------------------------------------------
> Matthew Young
> Graduate Student
> Boston University Dept. of Astronomy
> --------------------------------------------------------------
>
>    ------------------------------
> *From:* petsc-users-bounces at mcs.anl.gov [petsc-users-bounces at mcs.anl.gov]
> on behalf of Justin Chang [jychang48 at gmail.com]
> *Sent:* Saturday, June 06, 2015 5:29 AM
> *To:* Mark Adams
> *Cc:* petsc-users
> *Subject:* Re: [petsc-users] Guidance on GAMG preconditioning
>
>   Matt and Mark thank you guys for your responses.
>
> The reason I brought up GAMG was because it seems to me that this is the
> preconditioner to use for elliptic problems. However, I am using CG/Jacobi
> for my larger problems and the solver converges (with -ksp_atol and
> -ksp_rtol set to 1e-8). Using GAMG I get roughly the same wall-clock time,
> but significantly fewer solver iterations.
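For reference, CG with a Jacobi (diagonal) preconditioner, as run above with
-ksp_type cg -pc_type jacobi -ksp_rtol 1e-8 -ksp_atol 1e-8, can be sketched in
plain NumPy. The test matrix and the convergence test below are illustrative
stand-ins (PETSc's actual test differs in detail, e.g. it defaults to the
preconditioned residual norm):

```python
import numpy as np

def pcg(A, b, rtol=1e-8, atol=1e-8, maxit=1000):
    """Preconditioned CG with a Jacobi (diagonal) preconditioner."""
    Dinv = 1.0 / np.diag(A)                 # Jacobi: apply inverse diagonal
    x = np.zeros_like(b)
    r = b - A @ x
    z = Dinv * r
    p = z.copy()
    rz = r @ z
    bnorm = np.linalg.norm(b)
    for it in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < max(rtol * bnorm, atol):
            return x, it + 1
        z = Dinv * r                        # reapply the preconditioner
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# small SPD test problem (1D Laplacian)
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, its = pcg(A, b)
print(its, np.linalg.norm(b - A @ x))
```

Per iteration this is one matvec, one diagonal scaling, and a few dot products,
which is why CG/Jacobi iterations are so cheap compared to a GAMG V-cycle even
when GAMG needs far fewer of them.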
>
> As I also kind of mentioned in another mail, the ultimate purpose is to
> compare how this "correction" methodology using the TAO solver (with
> bounded constraints) performs compared to the original methodology using
> the KSP solver (without constraints). I have the AI (arithmetic intensity)
> for BLMVM and CG/Jacobi and they are roughly 0.3 and 0.2 respectively (do
> these sound about right?). Although the AI is higher for TAO, the ratio of
> actual FLOPS/s
> over the AI*STREAMS BW is smaller, though I am not sure what conclusions to
> make of that. This was also partly why I wanted to see what kind of metrics
> another KSP solver/preconditioner produces.
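As a sanity check on numbers like these, the bound being referenced is the
roofline model: attainable FLOP rate is min(peak FLOPS, AI x STREAMS
bandwidth). The machine numbers and "achieved" rates below are hypothetical
placeholders; only the AI values 0.3 and 0.2 come from the discussion above:

```python
# Roofline model: attainable FLOP rate <= min(peak FLOPS, AI * bandwidth).
# The peak, bandwidth, and achieved rates below are illustrative, not
# measurements of any real machine or run.
peak_flops = 500e9          # hypothetical peak, FLOPS
stream_bw = 50e9            # hypothetical STREAMS bandwidth, bytes/s

def roofline_bound(ai, peak=peak_flops, bw=stream_bw):
    """Upper bound on achieved FLOPS at arithmetic intensity ai (FLOPs/byte)."""
    return min(peak, ai * bw)

for name, ai, achieved in [("BLMVM", 0.3, 9e9), ("CG/Jacobi", 0.2, 8e9)]:
    bound = roofline_bound(ai)
    print(f"{name}: bound {bound:.2e} FLOPS, efficiency {achieved / bound:.0%}")
```

At AI ~0.2-0.3 both solvers sit far left of the ridge point, i.e. they are
memory-bandwidth bound, so achieved-FLOPS/(AI*BW) is exactly the efficiency
ratio worth comparing.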
>
>  Point being, if I were to draw such comparisons between TAO and KSP,
> would I get crucified if people find out I am using CG/Jacobi and not GAMG?
>
>  Thanks,
> Justin
>
> On Fri, Jun 5, 2015 at 2:02 PM, Mark Adams <mfadams at lbl.gov> wrote:
>
>>
>>>>
>>>  The overwhelming cost of AMG is the Galerkin triple-product RAP.
>>>
>>>
>>  That is overstating it a bit.  It can be if you have a hard 3D operator
>> and coarsening slowly is best.
>>
>>  Rule of thumb is you spend 50% of the time in the solver and 50% in the
>> setup, which is often mostly RAP (in 3D; 2D is much faster).  That way you
>> are within 2x of optimal, and it often works out that way anyway.
>>
>>  Mark
>>
>
>
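The Galerkin triple product RAP that Mark refers to (MatPtAP in PETSc, with
R = P^T) can be illustrated with a dense NumPy stand-in; the toy operator,
aggregation-style prolongator, and sizes below are illustrative only, and the
real cost question is about sparse matrix products, which this dense sketch
does not capture:

```python
import numpy as np

# Galerkin triple product RAP (with R = P^T): the AMG setup kernel.
# PETSc computes this sparsely via MatPtAP(); this is a dense toy stand-in.
n, nc = 8, 4
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # fine operator
P = np.zeros((n, nc))                                     # piecewise-constant,
for j in range(nc):                                       # aggregation-style
    P[2 * j:2 * j + 2, j] = 1.0                           # prolongator
Ac = P.T @ A @ P                                          # coarse operator
# Ac inherits symmetry and (here) positive definiteness from A
print(np.allclose(Ac, Ac.T), np.all(np.linalg.eigvalsh(Ac) > 0))
```

Coarsening slowly (nc close to n) keeps convergence strong but makes this
product, and the coarse operators it produces, much more expensive, which is
the trade-off behind the 50/50 setup-vs-solve rule of thumb above.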

