[petsc-users] Reaching limit number of communicator with Spectrum MPI

Jed Brown jed at jedbrown.org
Thu Aug 19 08:01:55 CDT 2021


Junchao Zhang <junchao.zhang at gmail.com> writes:

> Hi, Feimi,
>   I need to consult Jed (cc'ed).
>   Jed, is this an example of
> https://lists.mcs.anl.gov/mailman/htdig/petsc-dev/2018-April/thread.html#22663?
> If Feimi really can not free matrices, then we just need to attach a
> hypre-comm to a petsc inner comm, and pass that to hypre.

Are there a bunch of solves as in that case?

My understanding is that you should be able to MPI_Comm_dup/MPI_Comm_free as many times as you like, but implementations limit how many communicators can co-exist at any one time. The many-at-once case is what we encountered in that 2018 thread.
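
To illustrate the distinction, here is a minimal sketch (loop counts are made up; none of this is from Feimi's code). The first loop can run indefinitely because only one duplicate is alive at a time; the second loop is the pattern that eventually exhausts the implementation's finite pool of communicator contexts, and where it stops is implementation-dependent.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);
  /* Return error codes instead of aborting, so we can see the failure. */
  MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

  /* Fine: dup and free in sequence; one duplicate alive at a time. */
  for (int i = 0; i < 100000; i++) {
    MPI_Comm tmp;
    MPI_Comm_dup(MPI_COMM_WORLD, &tmp);
    MPI_Comm_free(&tmp);
  }

  /* The failure mode: many duplicates alive at once. */
  enum { N = 100000 };
  static MPI_Comm comms[N];
  int made = 0;
  for (int i = 0; i < N; i++) {
    if (MPI_Comm_dup(MPI_COMM_WORLD, &comms[i]) != MPI_SUCCESS) break;
    made++;
  }
  printf("co-existing duplicates before failure: %d\n", made);
  for (int i = 0; i < made; i++) MPI_Comm_free(&comms[i]);

  MPI_Finalize();
  return 0;
}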

One way to check would be to use a debugger or tracer to examine the stack each time (P)MPI_Comm_dup and (P)MPI_Comm_free are called; a sketch of that follows the two cases below.

case 1: we'll find lots of dups without frees (until the end) because the user really wants lots of communicators existing at the same time.

case 2: dups go unfreed because of a reference-counting issue or inessential references keeping them alive.
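
A small PMPI shim can do that logging without a full debugger session. This is an illustrative sketch, not a polished tool; it uses glibc's backtrace, and the counter is not thread-safe:

#include <mpi.h>
#include <execinfo.h>
#include <stdio.h>

static int live_comms = 0;  /* net dups minus frees seen so far (not thread-safe) */

static void log_call(const char *name)
{
  void *frames[32];
  int n = backtrace(frames, 32);
  fprintf(stderr, "[%s] live communicators: %d\n", name, live_comms);
  backtrace_symbols_fd(frames, n, 2);  /* symbolized stack to stderr */
}

/* Intercept via the PMPI profiling interface. */
int MPI_Comm_dup(MPI_Comm comm, MPI_Comm *newcomm)
{
  int ierr = PMPI_Comm_dup(comm, newcomm);
  if (ierr == MPI_SUCCESS) live_comms++;
  log_call("MPI_Comm_dup");
  return ierr;
}

int MPI_Comm_free(MPI_Comm *comm)
{
  live_comms--;
  log_call("MPI_Comm_free");
  return PMPI_Comm_free(comm);
}

Build it into a shared library with something like mpicc -shared -fPIC -o commtrace.so commtrace.c and run the application with LD_PRELOAD=./commtrace.so. In case 1 the live count climbs steadily with distinct allocation sites; in case 2 you'll see dups whose matching frees never arrive even though the owning objects are logically dead.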


In case 1, I think the solution is as outlined in the thread: PETSc can create an inner comm for Hypre. I think I'd prefer to attach it to the outer comm instead of the PETSc inner comm, but perhaps a case could be made either way.
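
The mechanism I have in mind is standard MPI attribute caching, roughly like this (the keyval and helper name are illustrative, not what PETSc would actually ship):

#include <mpi.h>
#include <stdlib.h>

static int inner_keyval = MPI_KEYVAL_INVALID;  /* hypothetical keyval */

/* Delete callback: free the cached duplicate when the outer comm is freed. */
static int delete_inner(MPI_Comm comm, int keyval, void *attr, void *extra)
{
  MPI_Comm *inner = (MPI_Comm *)attr;
  MPI_Comm_free(inner);
  free(inner);
  return MPI_SUCCESS;
}

/* Return a cached duplicate of outer, creating it on first use, so every
 * hypre solver on outer shares one duplicate instead of dup'ing its own. */
MPI_Comm get_inner_comm(MPI_Comm outer)
{
  MPI_Comm *inner;
  int found;

  if (inner_keyval == MPI_KEYVAL_INVALID)
    MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, delete_inner,
                           &inner_keyval, NULL);
  MPI_Comm_get_attr(outer, inner_keyval, &inner, &found);
  if (!found) {
    inner = malloc(sizeof *inner);
    MPI_Comm_dup(outer, inner);
    MPI_Comm_set_attr(outer, inner_keyval, inner);
  }
  return *inner;
}

Attaching to the outer comm means the duplicate's lifetime follows the user's communicator, and the total number of extra communicators is one per user comm rather than one per solver.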

