[petsc-dev] [issue1595] Issues of limited number of MPI communicators when having many instances of hypre boomerAMG with Moose
Smith, Barry F.
bsmith at mcs.anl.gov
Tue Apr 3 14:56:27 CDT 2018
> On Apr 3, 2018, at 1:46 PM, Rob Falgout hypre Tracker <hypre-support at llnl.gov> wrote:
>
>
> Rob Falgout <rfalgout at llnl.gov> added the comment:
>
> Hi Barry,
>
> It looks like the only time we call MPI_Comm_create is to build a communicator for the coarsest grid solve using Gaussian elimination. There are probably alternatives that do not require creating a sub-communicator.
When the sub-communicator is of size 1, you can use MPI_COMM_SELF instead of creating a new communicator each time.
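For example, something along these lines would do it (a rough sketch only, not hypre's actual code; coarse_comm_for_group is a made-up helper, and it assumes every rank builds the same coarse group so they all take the same branch):

#include <mpi.h>

/* Sketch only: pick a communicator for the coarsest-grid solve without
 * calling MPI_Comm_create when the coarse group has a single rank. */
static MPI_Comm coarse_comm_for_group(MPI_Comm comm, MPI_Group coarse_group)
{
  int      grp_size, rank_in_group;
  MPI_Comm coarse_comm;

  MPI_Group_size(coarse_group, &grp_size);
  if (grp_size == 1) {
    /* Single-rank coarse solve: no new context ID is needed.  The one
     * member uses MPI_COMM_SELF; everyone else gets MPI_COMM_NULL. */
    MPI_Group_rank(coarse_group, &rank_in_group);
    return (rank_in_group == MPI_UNDEFINED) ? MPI_COMM_NULL : MPI_COMM_SELF;
  }
  /* Larger coarse groups still fall back to a sub-communicator;
   * MPI_Comm_create is collective, so all ranks of comm must reach it. */
  MPI_Comm_create(comm, coarse_group, &coarse_comm);
  return coarse_comm;
}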
> Ulrike or someone else more familiar with the code should comment.
>
> I don't see a need to do a Comm_dup() before calling hypre.
If I have 1000 hypre solvers on the same communicator, how do I know that hypre won't send messages between the different solvers and hence get them mixed up? In other words, how do you handle tags to prevent conflicts between different matrices? What communicators do you actually do the communication on, and where do you get them? From the hypre matrix?
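For reference, the pattern I have in mind is the usual one where a library dups the user's communicator and does all of its own traffic on the private duplicate, so its tags can never collide with the caller's or another package's. A sketch of that idea (illustrative only; the SolverComm struct and helper names are not PETSc's or hypre's API):

#include <mpi.h>

/* Illustrative only: a library-private handle that owns a duplicated
 * communicator so its message tags cannot collide with anyone else's. */
typedef struct {
  MPI_Comm comm;     /* private duplicate; all library traffic uses this */
  int      next_tag; /* simple tag counter within the private comm */
} SolverComm;

static void solver_comm_create(MPI_Comm user_comm, SolverComm *sc)
{
  /* The duplicate has its own context ID, so messages sent on it can
   * never be matched by receives posted on user_comm or another dup. */
  MPI_Comm_dup(user_comm, &sc->comm);
  sc->next_tag = 0;
}

static int solver_comm_get_tag(SolverComm *sc)
{
  /* Hand out distinct tags for different objects sharing this comm. */
  return sc->next_tag++;
}

static void solver_comm_destroy(SolverComm *sc)
{
  MPI_Comm_free(&sc->comm);
}

The catch, and the reason for this thread, is that every MPI_Comm_dup() consumes one of the implementation's limited context IDs; PETSc therefore caches a single inner duplicate per user communicator and hands out tags from it rather than duplicating for every solver object.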
>
> Hope this helps.
>
> -Rob
>
> ----------
> status: unread -> chatting
>
> ____________________________________________
> hypre Issue Tracker <hypre-support at llnl.gov>
> <http://cascb1.llnl.gov/hypre/issue1595>
> ____________________________________________