[petsc-dev] [issue1595] Issues of limited number of MPI communicators when having many instances of hypre boomerAMG with Moose

Rob Falgout hypre Tracker hypre-support at llnl.gov
Wed Apr 4 11:06:03 CDT 2018


Rob Falgout <rfalgout at llnl.gov> added the comment:

Hi All,

Some comments and questions:

1. MPI_Comm_create() is used to create a subcommunicator that involves only the currently active MPI tasks, so that the Allgather() happens only over that subset (a sketch of this pattern is included after this list).  I don't think we can create this once, attach it to a parent communicator, and reuse it in a different solve based on a different AMG setup, because the set of active tasks will likely be different.  If the AMG setup is the same, the same communicator will be used.  When the AMG Destroy() is called, that communicator is freed.

2. Ulrike has already provided a way to get around the Comm_create() by changing one of the AMG parameters.  This is something that PETSc has control over and can do.

3. The idea of dup'ing a communicator, attaching it as an attribute to a parent communicator, and using it as hypre's internal communicator makes sense to me (a second sketch after this list illustrates the pattern).  I think it would require quite a few code changes to implement.  My next comment is important to consider as well.

4. Regarding pending point-to-point messages, you are right that this could cause problems even without producing a deadlock.  I had not thought about this scenario.  However, as long as all of the corresponding user send requests have also been issued (in a non-blocking manner), is there still really a problem here?  MPI guarantees non-overtaking message order, so any receives that hypre posts from the same task with the same tag should not interfere with an already posted (but not yet completed) receive that the user has posted, right?
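
To make item 1 concrete, here is a minimal sketch in C of the subcommunicator pattern.  It is not hypre's actual code: the active_subcomm() name and the is_active flag are hypothetical, standing in for "this rank still owns data at the current AMG level".

/* Sketch: build a subcommunicator containing only the "active" ranks,
 * so that collectives such as MPI_Allgather run over that subset only.
 * This is a collective call over 'parent'; inactive ranks receive
 * MPI_COMM_NULL.  Hypothetical helper, not hypre code. */
#include <mpi.h>
#include <stdlib.h>

MPI_Comm active_subcomm(MPI_Comm parent, int is_active)
{
    int nranks;
    MPI_Comm_size(parent, &nranks);

    /* Let every rank learn which ranks are active. */
    int *flags = (int *) malloc(nranks * sizeof(int));
    MPI_Allgather(&is_active, 1, MPI_INT, flags, 1, MPI_INT, parent);

    /* Collect the active ranks into a group. */
    int *active = (int *) malloc(nranks * sizeof(int));
    int nactive = 0;
    for (int i = 0; i < nranks; i++)
        if (flags[i]) active[nactive++] = i;

    MPI_Group parent_group, active_group;
    MPI_Comm_group(parent, &parent_group);
    MPI_Group_incl(parent_group, nactive, active, &active_group);

    MPI_Comm sub;
    MPI_Comm_create(parent, active_group, &sub);

    MPI_Group_free(&active_group);
    MPI_Group_free(&parent_group);
    free(active);
    free(flags);
    return sub;   /* free with MPI_Comm_free() when the setup is destroyed */
}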

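Here is a second minimal sketch in C of the attribute idea from item 3, assuming a hypothetical get_inner_comm() helper: the user's communicator is dup'ed once, the duplicate is cached as an attribute on the parent, and every solver instance then pulls out the same duplicate, so only one extra communicator is consumed per parent.  This mirrors the general MPI attribute-caching pattern rather than anything hypre currently does.

#include <mpi.h>
#include <stdlib.h>

static int inner_comm_keyval = MPI_KEYVAL_INVALID;

/* Delete callback: free the cached duplicate when the parent
 * communicator is freed. */
static int free_inner_comm(MPI_Comm comm, int keyval, void *attr, void *extra)
{
    MPI_Comm *inner = (MPI_Comm *) attr;
    (void) comm; (void) keyval; (void) extra;
    MPI_Comm_free(inner);
    free(inner);
    return MPI_SUCCESS;
}

/* Return the cached internal communicator for 'parent', creating it
 * with a single MPI_Comm_dup() on first use. */
MPI_Comm get_inner_comm(MPI_Comm parent)
{
    if (inner_comm_keyval == MPI_KEYVAL_INVALID)
        MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, free_inner_comm,
                               &inner_comm_keyval, NULL);

    MPI_Comm *inner;
    int found;
    MPI_Comm_get_attr(parent, inner_comm_keyval, &inner, &found);
    if (!found) {
        inner = (MPI_Comm *) malloc(sizeof(MPI_Comm));
        MPI_Comm_dup(parent, inner);
        MPI_Comm_set_attr(parent, inner_comm_keyval, inner);
    }
    return *inner;
}

A side benefit is that MPI matches messages per communicator, so traffic on the dup'ed communicator can never match the pending user receives discussed in item 4.
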
-Rob

____________________________________________
hypre Issue Tracker <hypre-support at llnl.gov>
<http://cascb1.llnl.gov/hypre/issue1595>
____________________________________________

