[petsc-dev] [issue1595] Issues of limited number of MPI communicators when having many instances of hypre boomerAMG with Moose

Derek Gaston friedmud at gmail.com
Tue Apr 3 17:46:17 CDT 2018


Firstly, thank you, everyone, for taking the time to talk about this issue!
We definitely appreciate it.

Jed has this right - we would ideally not want to set up and tear down the
preconditioner for each solve at every iteration.

For VERY large runs (10k+ simultaneous solves) using this method we have no
choice - and have done exactly what has been suggested: setup-solve-teardown
for each system in serial.  But that's an extreme case where we can't fit all
of the matrices in memory at the same time.  For the "normal" runs (10-1000
solves) that's not the case.

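To make the extreme-case pattern above concrete, here is a minimal sketch in
PETSc terms (the function name, loop bounds, and the A/b/x arrays are
illustrative, not MOOSE's actual code): each sub-solve's KSP, and with it the
BoomerAMG hierarchy and hypre's communicator use, is created and destroyed
around its own solve, so only one is alive at a time.

  #include <petscksp.h>

  /* Solve nsolves independent systems strictly one-by-one.  Creating and
     destroying the KSP around each solve means only one BoomerAMG
     hierarchy (and one hypre-side communicator) exists at any moment. */
  PetscErrorCode SolveOneByOne(PetscInt nsolves, Mat A[], Vec b[], Vec x[])
  {
    PetscErrorCode ierr;
    PetscInt       i;

    PetscFunctionBeginUser;
    for (i = 0; i < nsolves; i++) {
      KSP ksp;
      PC  pc;

      ierr = KSPCreate(PetscObjectComm((PetscObject)A[i]), &ksp);CHKERRQ(ierr);
      ierr = KSPSetOperators(ksp, A[i], A[i]);CHKERRQ(ierr);
      ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
      ierr = PCSetType(pc, PCHYPRE);CHKERRQ(ierr);
      ierr = PCHYPRESetType(pc, "boomeramg");CHKERRQ(ierr);
      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
      ierr = KSPSolve(ksp, b[i], x[i]);CHKERRQ(ierr); /* AMG setup happens here */
      ierr = KSPDestroy(&ksp);CHKERRQ(ierr);          /* teardown frees the hierarchy */
    }
    PetscFunctionReturn(0);
  }

The trade-off is the one described above: the AMG setup is repeated for every
solve instead of being reused across outer iterations.
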
Derek

On Tue, Apr 3, 2018 at 4:37 PM Jed Brown <jed at jedbrown.org> wrote:

> This is likely in the context of a preconditioner for a composite solve,
> so the many BoomerAMG solvers will persist.  They all get set up and
> then applied in sequence to the many scalar systems at each iteration of
> an outer method.
>
> If hypre doesn't want to dup its communicators internally, then PETSc
> will dup a communicator for hypre, attach it to the outer communicator,
> and look for it each time a new hypre object is created.  It's easy for
> PETSc (we've done this for PETSc since the early days), but a headache
> for other libraries.
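
The dup-and-attach pattern described in the paragraph above can be sketched
with plain MPI attribute caching (the keyval, helper name, and cleanup
handling here are illustrative, not PETSc's actual internals):

  #include <mpi.h>
  #include <stdlib.h>

  /* Keyval used to cache a duplicated "inner" communicator on the user's
     outer communicator; created once, then reused for every hypre object. */
  static int inner_comm_keyval = MPI_KEYVAL_INVALID;

  /* Duplicate the outer communicator the first time it is seen, cache the
     dup as an attribute, and return the cached copy on every later call. */
  static MPI_Comm get_inner_comm(MPI_Comm outer)
  {
    MPI_Comm *stored;
    void     *attr;
    int       found;

    if (inner_comm_keyval == MPI_KEYVAL_INVALID)
      MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, MPI_COMM_NULL_DELETE_FN,
                             &inner_comm_keyval, NULL);

    MPI_Comm_get_attr(outer, inner_comm_keyval, &attr, &found);
    if (found) return *(MPI_Comm *)attr;

    stored = (MPI_Comm *)malloc(sizeof(MPI_Comm));
    MPI_Comm_dup(outer, stored);
    MPI_Comm_set_attr(outer, inner_comm_keyval, stored);
    return *stored;  /* freeing the dup (via a delete callback) is omitted */
  }

Every library object created on the same outer communicator then lands on the
single duplicated communicator, so the dup count stays at one per outer
communicator rather than one per solver.
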
>
> "Li, Ruipeng" <li50 at llnl.gov> writes:
>
> > Hi, Fande,
> >
> > "For instance, if an application has multiple fields and each field
> will be solved by a HYPRE solver, then we end up with multiple hypre
> solves. The multiple hypre solves are not run simultaneously; instead,
> they are run one-by-one. If we assign the same communicator to the
> multiple hypre solves, is there any potential for tag conflicts?"
> >
> > In this scenario, is it just the solve phase of BoomerAMG that runs
> one after another, or the AMG setup as well? Since BoomerAMG takes the
> communicator from the A matrix, if the entire hypre solve (setup +
> solve) runs serially, there should be no conflicts.
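
For reference, the no-conflict case described above looks roughly like the
following with hypre's ParCSR interface (the function name and arguments are
illustrative): each setup+solve runs to completion on the shared
communicator, which hypre takes from the matrix, before the next one starts.

  #include "HYPRE_utilities.h"
  #include "HYPRE_parcsr_ls.h"

  /* Two BoomerAMG solves whose matrices live on the same communicator,
     executed strictly one after the other: the first setup+solve finishes
     before the second begins, so their messages never interleave. */
  void solve_two_fields(HYPRE_ParCSRMatrix A1, HYPRE_ParVector b1, HYPRE_ParVector x1,
                        HYPRE_ParCSRMatrix A2, HYPRE_ParVector b2, HYPRE_ParVector x2)
  {
    HYPRE_Solver amg;

    HYPRE_BoomerAMGCreate(&amg);
    HYPRE_BoomerAMGSetup(amg, A1, b1, x1);
    HYPRE_BoomerAMGSolve(amg, A1, b1, x1);
    HYPRE_BoomerAMGDestroy(amg);

    HYPRE_BoomerAMGCreate(&amg);
    HYPRE_BoomerAMGSetup(amg, A2, b2, x2);
    HYPRE_BoomerAMGSolve(amg, A2, b2, x2);
    HYPRE_BoomerAMGDestroy(amg);
  }
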
> >
> > -Ruipeng
> >
> > . -- .- .. .-.. / ..-. .-. --- -- / .-. ..- .. .--. . -. --. / .-.. ..
> >  Ruipeng Li
> >  Center for Applied Scientific Computing
> >  Lawrence Livermore National Laboratory
> >  P.O. Box 808, L-561
> >  Livermore, CA 94551
> >  phone - (925) 422-6037,  email - li50 at llnl.gov
> >
> >
> > ________________________________________
> > From: Kong, Fande <fande.kong at inl.gov>
> > Sent: Tuesday, April 3, 2018 1:34 PM
> > To: hypre-support
> > Cc: Barry Smith; Derek Gaston; Li, Ruipeng; Osei-Kuffuor, Daniel; For
> users of the development version of PETSc; Schroder, Jacob B.;
> tzanio at llnl.gov; umyang at llnl.gov; Wang, Lu
> > Subject: Re: [issue1595] Issues of limited number of MPI communicators
> when having many instances of hypre boomerAMG with Moose
> >
> >
> >
> > On Tue, Apr 3, 2018 at 2:12 PM, Rob Falgout hypre Tracker
> <hypre-support at llnl.gov> wrote:
> >
> > Rob Falgout <rfalgout at llnl.gov> added the comment:
> >
> > Hi Barry,
> >
> > Can you explain the scenario where multiple hypre solves would be
> working in parallel with the same communicator?  Thanks!
> >
> > Hi Rob,
> >
> > For instance, if an application has multiple fields and each field
> will be solved by a HYPRE solver, then we end up with multiple hypre
> solves. The multiple hypre solves are not run simultaneously; instead,
> they are run one-by-one. If we assign the same communicator to the
> multiple hypre solves, is there any potential for tag conflicts?
> >
> > Fande,
> >
> >
> >
> > -Rob
> >
> > ____________________________________________
> > hypre Issue Tracker <hypre-support at llnl.gov>
> > <http://cascb1.llnl.gov/hypre/issue1595>
> > ____________________________________________
>