[petsc-dev] Issues of limited number of MPI communicators when having many instances of hypre boomerAMG with Moose
Smith, Barry F.
bsmith at mcs.anl.gov
Tue Apr 3 14:18:05 CDT 2018
hypre developers,
Some MPI implementations only support around 2,000 MPI communicators, and this can cause difficulties for MOOSE users who have many instances of hypre BoomerAMG alive at the same time. Thus I have some questions for you.
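For context, the limit is easy to demonstrate with a trivial standalone probe along the following lines (this is purely illustrative, not PETSc or hypre code; the loop bound and names are made up, and the observed count is implementation dependent):

    /* comm_limit_probe.c: duplicate MPI_COMM_WORLD until the MPI
     * implementation runs out of communicator contexts. */
    #include <mpi.h>
    #include <stdio.h>

    #define MAX_TRIES 100000

    int main(int argc, char **argv)
    {
      static MPI_Comm comms[MAX_TRIES];
      int             i, ierr, count = 0;

      MPI_Init(&argc, &argv);
      /* Return error codes instead of aborting so the failure is visible. */
      MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

      for (i = 0; i < MAX_TRIES; i++) {
        ierr = MPI_Comm_dup(MPI_COMM_WORLD, &comms[i]);
        if (ierr != MPI_SUCCESS) break;
        count++;
      }
      printf("MPI_Comm_dup succeeded %d times before failing\n", count);

      for (i = 0; i < count; i++) MPI_Comm_free(&comms[i]);
      MPI_Finalize();
      return 0;
    }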
For each HYPRE matrix we do an MPI_Comm_dup() so that hypre has a unique communicator that won't conflict with PETSc communicators or with the hypre communicators associated with other matrices. Do we need to do this? Is hypre smart enough that we could pass the same communicator for all the matrices (assuming the matrices live on the same communicator)?
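To make the question concrete, here are the two alternatives in a simplified sketch (function names and the square ilower/iupper layout are illustrative, not the actual PETSc source):

    #include <mpi.h>
    #include <HYPRE_IJ_mv.h>

    /* Current pattern: every matrix gets its own duplicated communicator,
     * so N matrices consume N communicator contexts on top of PETSc's own. */
    void create_with_dup(MPI_Comm petsc_comm, HYPRE_BigInt ilower, HYPRE_BigInt iupper,
                         MPI_Comm *hypre_comm, HYPRE_IJMatrix *A)
    {
      MPI_Comm_dup(petsc_comm, hypre_comm);   /* one context per matrix */
      HYPRE_IJMatrixCreate(*hypre_comm, ilower, iupper, ilower, iupper, A);
    }

    /* The question: if hypre tags its messages safely, could all matrices on
     * the same process set share one communicator duplicated just once? */
    void create_shared(MPI_Comm shared_hypre_comm, HYPRE_BigInt ilower, HYPRE_BigInt iupper,
                       HYPRE_IJMatrix *A)
    {
      HYPRE_IJMatrixCreate(shared_hypre_comm, ilower, iupper, ilower, iupper, A);
    }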
It appears that each BoomerAMG instance calls MPI_Comm_create() down in the guts, even on one process, where the new communicator is the same size as the original communicator. Is this necessary? Could you detect when the sub-communicator is the same size as the original communicator and skip the MPI_Comm_create()?
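Something like the following guard is what I have in mind (a hypothetical sketch, not hypre's actual code; it assumes the group is a subgroup of comm's group, so equal size means all ranks are included):

    #include <mpi.h>

    MPI_Comm get_sub_comm(MPI_Comm comm, MPI_Group sub_group)
    {
      int      comm_size, group_size;
      MPI_Comm sub_comm;

      MPI_Comm_size(comm, &comm_size);
      MPI_Group_size(sub_group, &group_size);

      if (group_size == comm_size) {
        /* The "sub"-communicator would contain every rank of comm:
         * skip MPI_Comm_create() and reuse comm, saving a context. */
        return comm;
      }
      MPI_Comm_create(comm, sub_group, &sub_comm);
      return sub_comm;
    }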
Because of the limitations above, MOOSE users are restricted to about 1,000 instances of hypre BoomerAMG at the same time.
Any thoughts on how this limitation could be increased?
Thanks
Barry