<div dir="ltr">Feimi,<div> I'm able to reproduce the problem. I will have a look. Thanks a lot for the example. <br clear="all"><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">--Junchao Zhang</div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Aug 20, 2021 at 2:02 PM Feimi Yu <<a href="mailto:yuf2@rpi.edu">yuf2@rpi.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<p>Sorry, I forgot to destroy the matrix after the loop, but anyway,
the in-loop preconditioners are destroyed. I updated the code here
and in the Google Drive.<br>
</p>
<p>Feimi<br>
</p>
<div>On 8/20/21 2:54 PM, Feimi Yu wrote:<br>
</div>
<blockquote type="cite">
<p>Hi Barry and Junchao,</p>
<p>Actually I did a simple MPI "dup and free" test before with
Spectrum MPI, but that one did not have any problem. I'm not a
PETSc programmer as I mainly use deal.ii's PETSc wrappers, but I
managed to write a minimal program based on
petsc/src/mat/tests/ex98.c to reproduce my problem. This piece
of code creates and destroys 10,000 instances of hypre ParaSails
preconditioners (my own code uses Euclid, but I don't
think it matters). It runs fine with OpenMPI but reports the
out-of-communicator error with Spectrum MPI. The code is attached in
the email. In case the attachment is not available, I also
uploaded a copy on my google drive:</p>
<p><a href="https://drive.google.com/drive/folders/1DCf7lNlks8GjazvoP7c211ojNHLwFKL6?usp=sharing" target="_blank">https://drive.google.com/drive/folders/1DCf7lNlks8GjazvoP7c211ojNHLwFKL6?usp=sharing</a></p>
<p>Thanks!</p>
<p>Feimi<br>
</p>
<div>On 8/20/21 9:58 AM, Junchao Zhang
wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">
<div>Feimi, if it is easy to reproduce, could you give
instructions on how to reproduce that? <br>
</div>
<div><br>
</div>
<div>PS: Spectrum MPI is based on OpenMPI. I don't understand
why it has the problem but OpenMPI does not. It could be a
bug in PETSc or the user's code. For reference counting on
MPI_Comm, we already have the PETSc inner comm; I think we can
reuse that. </div>
<div><br>
</div>
<div>
<div dir="ltr">
<div dir="ltr">--Junchao Zhang</div>
</div>
</div>
<br>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Fri, Aug 20, 2021 at
12:33 AM Barry Smith <<a href="mailto:bsmith@petsc.dev" target="_blank">bsmith@petsc.dev</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<div><br>
</div>
It sounds like maybe the Spectrum MPI_Comm_free() is not
returning the comm to the "pool" as available for future
use; a very buggy MPI implementation. This can easily be
checked in a tiny standalone MPI program that simply comm
dups and frees thousands of times in a loop. Could even be
a configure test (that requires running an MPI program). I
do not remember if we ever tested this possibility; maybe
we did and I forgot.
<div><br>
</div>
<div> If this is the problem, we can provide a "work-around" that
attaches the new comm (to be passed to
hypre) to the old comm as an attribute, with a reference count also
stored in the attribute. When the hypre matrix is created, the
count (for the new comm) is set to 1; when the hypre
matrix is freed, the count is set to zero (but the comm
is not freed). On the next call to create a hypre
matrix, the attribute is found with a count of zero, so
PETSc knows it can pass the same comm again to the new
hypre matrix.</div>
<div><br>
</div>
<div>This will only allow one hypre matrix at a time to
be created from the original comm. To allow multiple
simultaneous hypre matrices, one could keep multiple comms
and counts in the attribute and check them until
one finds an available comm to reuse (or create yet
another one if all the current ones are busy with hypre
matrices). It is the same model as DMGetXXVector(),
where vectors are checked out and then checked back in to be
available later. This would solve the currently reported
problem (if it is a buggy MPI that does not properly
free comms), but not the MOOSE problem, where
10,000 comms are needed at the same time. </div>
<div><br>
</div>
<div> Barry</div>
<div><br>
</div>
<div><br>
</div>
<div><br>
</div>
<div><br>
<div><br>
<blockquote type="cite">
<div>On Aug 19, 2021, at 3:29 PM, Junchao Zhang <<a href="mailto:junchao.zhang@gmail.com" target="_blank">junchao.zhang@gmail.com</a>>
wrote:</div>
<br>
<div>
<div dir="ltr">
<div dir="ltr"><br>
<br>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Thu, Aug
19, 2021 at 2:08 PM Feimi Yu <<a href="mailto:yuf2@rpi.edu" target="_blank">yuf2@rpi.edu</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi Jed,<br>
<br>
In my case, I only have 2 hypre
preconditioners at the same time, and <br>
they do not solve simultaneously, so it
might not be case 1.<br>
<br>
I checked the stack for all the calls of
MPI_Comm_dup/MPI_Comm_free on <br>
my own machine (with OpenMPI); as far as I can <br>
tell, all the communicators are freed. I could not test it with
Spectrum MPI on the clusters <br>
immediately because all the dependencies
were built in release mode. <br>
However, as I mentioned, I haven't had this
problem with OpenMPI before, <br>
so I'm not sure if this is really an MPI
implementation problem, or just <br>
because Spectrum MPI has a lower limit on the
number of communicators, <br>
and/or this also depends on how many MPI
ranks are used, as only 2 out <br>
of 40 ranks reported the error.<br>
</blockquote>
<div>You can add printf around
MPI_Comm_dup/MPI_Comm_free sites on the two
ranks, e.g., if (myrank == 38) printf(...),
to see whether the dup/free calls are paired.</div>
<div> </div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
As a workaround, I replaced the MPI_Comm_dup() at
petsc/src/mat/impls/hypre/mhypre.c:2120 with
a copy assignment, and also <br>
removed the MPI_Comm_free() in the hypre
destroyer. My code runs fine <br>
with Spectrum MPI now, but I don't think
this is a long-term solution.<br>
<br>
Thanks!<br>
<br>
Feimi<br>
<br>
On 8/19/21 9:01 AM, Jed Brown wrote:<br>
> Junchao Zhang <<a href="mailto:junchao.zhang@gmail.com" target="_blank">junchao.zhang@gmail.com</a>>
writes:<br>
><br>
>> Hi, Feimi,<br>
>> I need to consult Jed (cc'ed).<br>
>> Jed, is this an example of<br>
>> <a href="https://lists.mcs.anl.gov/mailman/htdig/petsc-dev/2018-April/thread.html#22663" rel="noreferrer" target="_blank">https://lists.mcs.anl.gov/mailman/htdig/petsc-dev/2018-April/thread.html#22663</a>?<br>
>> If Feimi really can not free
matrices, then we just need to attach a<br>
>> hypre-comm to a petsc inner comm,
and pass that to hypre.<br>
> Are there a bunch of solves as in that
case?<br>
><br>
> My understanding is that one should be
able to MPI_Comm_dup/MPI_Comm_free as many
times as you like, but the implementation
has limits on how many communicators can
co-exist at any one time. The many-at-once
is what we encountered in that 2018 thread.<br>
><br>
> One way to check would be to use a
debugger or tracer to examine the stack
every time (P)MPI_Comm_dup and
(P)MPI_Comm_free are called.<br>
><br>
> case 1: we'll find lots of dups without
frees (until the end) because the user
really wants lots of these existing at the
same time.<br>
><br>
> case 2: dups are unfreed because of
reference counting issue/inessential
references<br>
><br>
><br>
> In case 1, I think the solution is as
outlined in the thread, PETSc can create an
inner-comm for Hypre. I think I'd prefer to
attach it to the outer comm instead of the
PETSc inner comm, but perhaps a case could
be made either way.<br>
</blockquote>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</blockquote>
</div>
</blockquote>
</blockquote>
</div>
</blockquote></div>