[petsc-dev] [issue1595] Issues with the limited number of MPI communicators when using many instances of hypre BoomerAMG with MOOSE

Rob Falgout hypre Tracker hypre-support at llnl.gov
Tue Apr 3 15:46:03 CDT 2018


Rob Falgout <rfalgout at llnl.gov> added the comment:

Hi Fande,

If the solves are done one at a time, then there should not be an issue with communicators conflicting with each other: all communication to/from a process will have completed before the next solve is called.  Also, with maybe only one exception (our assumed partition algorithm), sends and receives are deterministic, so through a combination of tags and the MPI guarantee that messages from any one process are received in the order they are sent, I don't see how messages could get mixed up.  With the assumed partition, we are also careful to use tags to differentiate communications and prevent things from getting mixed.
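
To illustrate the point about tags and ordering (a minimal sketch in plain MPI, not hypre code; the tag names and values are made up): between a fixed sender and receiver on the same communicator, MPI's non-overtaking rule delivers messages with the same tag in the order they were sent, and distinct tags keep logically different exchanges from matching each other, even if the receives are posted in a different order.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Hypothetical tags for two logically distinct exchanges. */
        const int TAG_SETUP = 100;
        const int TAG_SOLVE = 200;

        if (rank == 0) {
            int a = 1, b = 2;
            MPI_Request reqs[2];
            /* Non-blocking sends so the send order cannot deadlock
               against the receive order on rank 1. */
            MPI_Isend(&a, 1, MPI_INT, 1, TAG_SETUP, MPI_COMM_WORLD, &reqs[0]);
            MPI_Isend(&b, 1, MPI_INT, 1, TAG_SOLVE, MPI_COMM_WORLD, &reqs[1]);
            MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        } else if (rank == 1) {
            int a, b;
            /* Receives posted in the opposite order from the sends:
               the tags ensure each receive still matches the intended
               message. */
            MPI_Recv(&b, 1, MPI_INT, 0, TAG_SOLVE, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Recv(&a, 1, MPI_INT, 0, TAG_SETUP, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1: a=%d (setup tag), b=%d (solve tag)\n", a, b);
        }

        MPI_Finalize();
        return 0;
    }

Run with two ranks (e.g. mpiexec -n 2 ./a.out).  The same idea applies when successive solves reuse a single communicator: as long as each exchange's tags and sources match deterministically, a receive belonging to one solve cannot pick up a message intended for another.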

Maybe I'm missing something and there is a problem somewhere in hypre.  If you come across this, though, please let us know so we can fix it.

Hope this helps!

-Rob

____________________________________________
hypre Issue Tracker <hypre-support at llnl.gov>
<http://cascb1.llnl.gov/hypre/issue1595>
____________________________________________

