[petsc-users] Guidance on GAMG preconditioning

Mark Adams mfadams at lbl.gov
Sun Jun 7 11:26:38 CDT 2015


On Sat, Jun 6, 2015 at 8:21 PM, Matthew Knepley <knepley at gmail.com> wrote:

> On Sat, Jun 6, 2015 at 6:02 PM, Young, Matthew, Adam <may at bu.edu> wrote:
>
>>  This is a problem from ionospheric plasma physics. The simulation
>> treats ions via a particle-in-cell method and electrons as an inertialess
>> fluid, the justification being that ionospheric ions are 10^4 times more
>> massive than electrons. We further assume that the plasma is effectively
>> neutral on the length scale of interest (i.e. quasi-neutral), and those
>> assumptions allow us to write an elliptic equation for the electrostatic
>> potential, phi: Div[n(x) T Grad(phi)] = RHS. n(x) is the quasi-neutral plasma
>> density, which is updated via an ion gather at each time step, and T is a
>> tensor of constant coefficients that looks like {{1, kappa, 0},{-kappa, 1,
>> 0},{0, 0, 1+kappa^2}}, where kappa is the ratio of gyrofrequency to
>> collision frequency for electrons (~100 for our problem)*. The RHS is a
>> function of density, ion current (or flux, both of which are related to
>> density), and constant electron fluid parameters. Eq 1 of the attached
>> paper shows this equation for the 2-D problem in the plane perpendicular to
>> the ambient magnetic field.
>>
>
> It's hard to say anything without knowing what the n(x) functions look
> like.
>

I would think it's pretty smooth unless it is a perturbation; this does not
sound like it has shocks or anything ... but T scares the hell out of me!
Very anisotropic in the vertical direction, with a sort of skew thing going
on in the horizontal plane?
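
For reference, here is the operator from Matthew's message written out, with
T split into its symmetric and skew parts (a sketch that only rearranges what
is given above):

    \nabla \cdot [\, n(x)\, T\, \nabla \phi \,] = \mathrm{RHS}, \qquad
    T = \begin{pmatrix} 1 & \kappa & 0 \\ -\kappa & 1 & 0 \\ 0 & 0 & 1+\kappa^2 \end{pmatrix}
      = \mathrm{diag}(1,\, 1,\, 1+\kappa^2)
      + \begin{pmatrix} 0 & \kappa & 0 \\ -\kappa & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}

With kappa ~ 100 the vertical diagonal entry is ~10^4 times the horizontal
ones, and the skew part couples the two horizontal directions.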

If you are seeing the convergence rate tank as kappa is increased, then you
will probably find that ASM smoothers help.  Making good subdomains for ASM
is a problem.  If you have small subdomains per process you can use those
(just use bjacobi as the smoother).  GAMG can let you use the GAMG
aggregates as blocks (this has not been tested for ages and we don't have a
regression test for it).  Try just using processor ASM with as many
processors as you can.  If this helps your convergence rate (a lot), then
this might be the way to go and we can look at GAMG blocks.
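
A minimal set of runtime options for trying this (a sketch: ./your_app stands
in for the actual executable, and option names should be double-checked
against your PETSc version with -help):

    # GAMG with its default smoother, as a baseline
    # (gmres used since T's skew part generally makes the operator nonsymmetric)
    ./your_app -ksp_type gmres -pc_type gamg -ksp_monitor -ksp_converged_reason

    # GAMG with a one-block-per-process ASM smoother on the levels
    ./your_app -ksp_type gmres -pc_type gamg \
        -mg_levels_ksp_type richardson -mg_levels_pc_type asm \
        -ksp_monitor -ksp_converged_reason

    # or block Jacobi on the levels if the per-process pieces are small
    ./your_app -ksp_type gmres -pc_type gamg \
        -mg_levels_ksp_type richardson -mg_levels_pc_type bjacobi \
        -ksp_monitor -ksp_converged_reason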

Mark


> I would say now that it is easy to try GAMG, so
> that is what I would do first.
>
>   Thanks,
>
>     Matt
>
>
>> --Matt
>>
>>  *It was a little unfair of me to say earlier that the off-diagonal
>> terms grow as the simulation progresses. It's the density gradients that
>> grow, and they are multiplied by kappa.
>>
>>
>>
>>  --------------------------------------------------------------
>> Matthew Young
>> Graduate Student
>> Boston University Dept. of Astronomy
>> --------------------------------------------------------------
>>
>>    ------------------------------
>> *From:* Matthew Knepley [knepley at gmail.com]
>> *Sent:* Saturday, June 06, 2015 6:12 PM
>> *To:* Young, Matthew, Adam
>> *Cc:* Justin Chang; Mark Adams; petsc-users
>>
>> *Subject:* Re: [petsc-users] Guidance on GAMG preconditioning
>>
>>    On Sat, Jun 6, 2015 at 3:00 PM, Young, Matthew, Adam <may at bu.edu>
>> wrote:
>>
>>>  Forgive me for being like a child who wanders into the middle of a
>>> movie...
>>>
>>>  I've been attempting to follow this conversation from a beginner's
>>> level because I am trying to solve an elliptic PDE with variable
>>> coefficients. Both the operator and the RHS change at each time step and
>>> the operator has off-diagonal terms that become dominant as the instability
>>> of interest grows. I read somewhere that a direct method is the best for
>>> this but I'm intrigued by Justin's comment that GAMG seems to be "the
>>> preconditioner to use for elliptic problems". I don't want to hijack this
>>> conversation but it seems like a good chance to ask for your collective
>>> advice on resources for understanding my problem. Any thoughts?
>>>
>>
>>  The problem here is that fast methods do not depend on the operator
>> being elliptic so much as they depend on the operator
>> falling off away from the diagonal (satisfying a Calderón-Zygmund bound;
>> there are lots of ways of expressing this). When
>> this ceases to be true, these methods stop being fast.
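>>
>>  One standard way of writing such an off-diagonal decay condition for a
>> kernel K(x,y) in d dimensions is (a sketch; there are many equivalent
>> formulations of this):
>>
>>    |K(x,y)| \le C / |x-y|^{d},
>>    |\nabla_x K(x,y)| + |\nabla_y K(x,y)| \le C / |x-y|^{d+1},  for x \ne y.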
>>
>>  So the answer is, when you have complicated coefficient structure,
>> there are no general methods and you need to know more
>> about exactly what is going on. Where is your problem from?
>>
>>    Matt
>>
>>
>>>  --Matt
>>>
>>>   --------------------------------------------------------------
>>> Matthew Young
>>> Graduate Student
>>> Boston University Dept. of Astronomy
>>> --------------------------------------------------------------
>>>
>>>    ------------------------------
>>> *From:* petsc-users-bounces at mcs.anl.gov [petsc-users-bounces at mcs.anl.gov]
>>> on behalf of Justin Chang [jychang48 at gmail.com]
>>> *Sent:* Saturday, June 06, 2015 5:29 AM
>>> *To:* Mark Adams
>>> *Cc:* petsc-users
>>> *Subject:* Re: [petsc-users] Guidance on GAMG preconditioning
>>>
>>>    Matt and Mark thank you guys for your responses.
>>>
>>> The reason I brought up GAMG was because it seems to me that this is the
>>> preconditioner to use for elliptic problems. However, I am using CG/Jacobi
>>> for my larger problems and the solver converges (with -ksp_atol and
>>> -ksp_rtol set to 1e-8). Using GAMG I get roughly the same wall-clock time,
>>> but significantly fewer solver iterations.
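>>>
>>> For concreteness, the two configurations being compared are roughly (a
>>> sketch; tolerances as above, everything else left at defaults):
>>>
>>>   # CG with point Jacobi
>>>   -ksp_type cg -pc_type jacobi -ksp_rtol 1e-8 -ksp_atol 1e-8
>>>
>>>   # CG with GAMG in place of Jacobi
>>>   -ksp_type cg -pc_type gamg -ksp_rtol 1e-8 -ksp_atol 1e-8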
>>>
>>> As I also kind of mentioned in another mail, the ultimate purpose is to
>>> compare how this "correction" methodology using the TAO solver (with
>>> bounded constraints) performs relative to the original methodology using
>>> the KSP solver (without constraints). I have the AI for BLMVM and CG/Jacobi,
>>> and they are roughly 0.3 and 0.2 respectively (do these sound about
>>> right?). Although the AI is higher for TAO, the ratio of actual FLOPS/s
>>> over the AI*STREAMS BW is smaller, though I am not sure what conclusions to
>>> make of that. This was also partly why I wanted to see what kind of metrics
>>> another KSP solver/preconditioner produces.
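>>>
>>> As a rough roofline-style check (the bandwidth number below is made up for
>>> illustration; the measured STREAMS value for the machine should be used):
>>>
>>>   attainable FLOPS/s  ~  AI * STREAMS bandwidth
>>>   e.g.  0.2 flops/byte * 40e9 bytes/s (hypothetical)  ~  8e9 flops/s per node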
>>>
>>>  Point being, if I were to draw such comparisons between TAO and KSP,
>>> would I get crucified if people find out I am using CG/Jacobi and not GAMG?
>>>
>>>  Thanks,
>>> Justin
>>>
>>> On Fri, Jun 5, 2015 at 2:02 PM, Mark Adams <mfadams at lbl.gov> wrote:
>>>
>>>>
>>>>>>
>>>>>  The overwhelming cost of AMG is the Galerkin triple-product RAP.
>>>>>
>>>>>
>>>>  That is overstating it a bit.  It can be if you have a hard 3D
>>>> operator and coarsening slowly is best.
>>>>
>>>>  Rule of thumb is you spend 50% of the time in the solver and 50% in the
>>>> setup, which is often mostly RAP (in 3D; 2D is much faster).  That way you
>>>> are within 2x of optimal, and it often works out that way anyway.
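>>>>
>>>>  One way to check where a run sits relative to this rule of thumb (a
>>>> sketch; the executable name is a placeholder, and -log_summary is the
>>>> option in PETSc releases of this era, -log_view in later ones):
>>>>
>>>>    ./your_app <your solver options> -log_summary
>>>>    # compare the time reported for PCSetUp (and the MatPtAP* events,
>>>>    # which are the RAP product) with the time reported for KSPSolve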
>>>>
>>>>  Mark
>>>>
>>>
>>>
>>
>>
>>  --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>
>
>
>

