[petsc-dev] [issue1595] Issues of limited number of MPI communicators when having many instances of hypre boomerAMG with Moose

Jed Brown jed at jedbrown.org
Tue Apr 3 16:26:08 CDT 2018


So hypre won't conflict with itself, but may conflict with something the
user is doing.  In the longer term, will it remain the user's
responsibility to MPI_Comm_dup before calling hypre or would hypre do
that internally?
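
For concreteness, the user-side pattern under discussion is roughly the
following.  This is only a sketch: the function name solve_with_library
is made up, and the actual hypre solver setup is left as a comment
rather than real hypre API calls.

#include <mpi.h>

/* Sketch only: duplicate the communicator the application owns and hand
 * the duplicate to the library, so library traffic cannot collide with
 * the application's own messages on the original communicator. */
void solve_with_library(MPI_Comm user_comm)
{
  MPI_Comm lib_comm;
  MPI_Comm_dup(user_comm, &lib_comm);   /* collective over user_comm */

  /* ... create and run the library solver on lib_comm instead of
   *     user_comm, e.g. pass lib_comm to hypre's IJ/BoomerAMG setup ... */

  /* Once the solver has been destroyed, release the duplicate. */
  MPI_Comm_free(&lib_comm);
}

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);
  solve_with_library(MPI_COMM_WORLD);
  MPI_Finalize();
  return 0;
}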

(PETSc dups the user's communicator internally and attaches the private
duplicate as an attribute to the user's communicator.  That way it's
always safe and users don't need to know about the issue.)
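
A rough illustration of that attribute trick using plain MPI calls
follows.  This is not PETSc's actual implementation; the real code in
PetscCommDuplicate also manages tag counters, reference counts, and a
delete callback that frees the inner communicator when the user's
communicator goes away.

#include <mpi.h>
#include <stdlib.h>

/* Illustrative keyval; PETSc's real bookkeeping is more involved. */
static int inner_comm_keyval = MPI_KEYVAL_INVALID;

/* Return a private duplicate of user_comm, creating it and caching it
 * as an attribute on user_comm the first time it is requested. */
static MPI_Comm get_inner_comm(MPI_Comm user_comm)
{
  MPI_Comm *inner;
  int       found;

  if (inner_comm_keyval == MPI_KEYVAL_INVALID)
    MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, MPI_COMM_NULL_DELETE_FN,
                           &inner_comm_keyval, NULL);

  MPI_Comm_get_attr(user_comm, inner_comm_keyval, &inner, &found);
  if (!found) {
    inner = malloc(sizeof(MPI_Comm));
    MPI_Comm_dup(user_comm, inner);                 /* private duplicate */
    MPI_Comm_set_attr(user_comm, inner_comm_keyval, inner);
  }
  return *inner;   /* all library traffic goes over this communicator */
}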

Rob Falgout hypre Tracker <hypre-support at llnl.gov> writes:

> Rob Falgout <rfalgout at llnl.gov> added the comment:
>
> Hi Fande,
>
> If the solves are one-by-one, then there should not be an issue with communicators conflicting with each other.  All communications to/from a process will have completed before the next solve is called.  Also, with maybe only one exception (our assumed partition algorithm), sends and receives are deterministic, so through a combination of tags and the MPI guarantee that communications from any one process will be received in the order they are sent, I don't see a problem with messages getting mixed up.  With the assumed partition, we are also careful to use tags to differentiate communications and prevent things from getting mixed.
>
> Maybe I'm missing something and there is a problem somewhere in hypre.  If you come across this, though, please let us know so we can fix it.
>
> Hope this helps!
>
> -Rob
>
> ____________________________________________
> hypre Issue Tracker <hypre-support at llnl.gov>
> <http://cascb1.llnl.gov/hypre/issue1595>
> ____________________________________________
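
As a standalone illustration of the tag-plus-ordering argument in Rob's
message (the tag values and the two phases are invented for the example;
none of this is hypre code): MPI matches a receive on (source, tag,
communicator), and messages from one sender on the same communicator and
tag are received in the order they were sent, so distinct tags keep
distinct exchanges from mixing even on a shared communicator.

#include <mpi.h>

#define TAG_PHASE_A 101   /* illustrative tag values */
#define TAG_PHASE_B 102

int main(int argc, char **argv)
{
  int rank, size, a = 0, b = 0;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  if (size >= 2) {
    if (rank == 0) {
      a = 1; b = 2;
      /* Two logically distinct exchanges on the same communicator,
         separated only by tag. */
      MPI_Send(&a, 1, MPI_INT, 1, TAG_PHASE_A, MPI_COMM_WORLD);
      MPI_Send(&b, 1, MPI_INT, 1, TAG_PHASE_B, MPI_COMM_WORLD);
    } else if (rank == 1) {
      MPI_Request reqs[2];
      /* Post both receives up front; tag matching ensures each message
         lands in the intended buffer. */
      MPI_Irecv(&a, 1, MPI_INT, 0, TAG_PHASE_A, MPI_COMM_WORLD, &reqs[0]);
      MPI_Irecv(&b, 1, MPI_INT, 0, TAG_PHASE_B, MPI_COMM_WORLD, &reqs[1]);
      MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    }
  }

  MPI_Finalize();
  return 0;
}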

