[petsc-users] Memory growth issue

Lawrence Mitchell wence at gmx.li
Thu May 30 06:48:45 CDT 2019


Hi Sanjay,

> On 30 May 2019, at 08:58, Sanjay Govindjee via petsc-users <petsc-users at mcs.anl.gov> wrote:
> 
> The problem seems to persist but with a different signature.  Graphs attached as before.
> 
> Totals with MPICH (NB: single run)
> 
> For the CG/Jacobi          data_exchange_total = 41,385,984; kspsolve_total = 38,289,408
> For the GMRES/BJACOBI      data_exchange_total = 41,324,544; kspsolve_total = 41,324,544
> 
> Just reading the MPI docs, I am wondering if I need some sort of MPI_Wait/MPI_Waitall before my MPI_Barrier in the data exchange routine?
> I would have thought that with the blocking receives and the MPI_Barrier everything would have fully completed and been cleaned up before
> all processes exited the routine, but perhaps I am wrong on that.


Skimming the Fortran code you sent, you do:

do i = 1, ...
   call MPI_Isend(..., req, ierr)
end do

do i = 1, ...
   call MPI_Recv(..., ierr)
end do

But you never call MPI_Wait (or MPI_Waitall) on the requests you get back from those Isends, so the MPI library never frees the internal data structures it created for them.

The usual pattern for these non-blocking communications is to allocate a request array of length nsend+nrecv and then do:

do i = 1, nsend
   call MPI_Isend(..., req(i), ierr)
end do
do j = 1, nrecv
   call MPI_Irecv(..., req(nsend+j), ierr)
end do

call MPI_Waitall(nsend+nrecv, req, MPI_STATUSES_IGNORE, ierr)

I note also that there's no need for the Barrier at the end of the routine: this kind of communication already provides neighbourwise synchronisation, so there is no need to add (unnecessary) global synchronisation on top.
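
For concreteness, here is a minimal self-contained Fortran sketch of that pattern; the names (nsend, nrecv, dest, src, n, sbuf, rbuf) and the buffer layout are placeholders I have made up, not taken from your routine:

subroutine data_exchange(nsend, nrecv, dest, src, n, sbuf, rbuf, comm)
  implicit none
  include 'mpif.h'
  integer          :: nsend, nrecv, n, comm
  integer          :: dest(nsend), src(nrecv)
  double precision :: sbuf(n,nsend), rbuf(n,nrecv)
  integer          :: req(nsend+nrecv)
  integer          :: i, j, ierr

  ! post all non-blocking sends and receives, keeping every request
  do i = 1, nsend
     call MPI_Isend(sbuf(1,i), n, MPI_DOUBLE_PRECISION, dest(i), 0, &
                    comm, req(i), ierr)
  end do
  do j = 1, nrecv
     call MPI_Irecv(rbuf(1,j), n, MPI_DOUBLE_PRECISION, src(j), 0, &
                    comm, req(nsend+j), ierr)
  end do

  ! completing every request lets MPI free its internal bookkeeping;
  ! no MPI_Barrier is needed after this
  call MPI_Waitall(nsend+nrecv, req, MPI_STATUSES_IGNORE, ierr)
end subroutine data_exchange
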

As an aside, is there a reason you don't use PETSc's VecScatter to manage this global-to-local exchange?
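
If you want to go that route, a rough sketch is below; the names (nghost, ghost_idx, xglobal, xlocal) are made up for illustration, and the exact include/module names and null constants depend on your PETSc version:

subroutine setup_ghost_scatter(xglobal, xlocal, nghost, ghost_idx, ctx)
#include <petsc/finclude/petscis.h>
#include <petsc/finclude/petscvec.h>
  use petscis
  use petscvec
  implicit none
  Vec            :: xglobal            ! distributed global vector
  Vec            :: xlocal             ! local (ghosted) work vector
  PetscInt       :: nghost
  PetscInt       :: ghost_idx(nghost)  ! 0-based global indices this rank needs
  VecScatter     :: ctx
  IS             :: isfrom
  PetscErrorCode :: ierr

  ! index set listing which entries of the global vector this rank needs
  call ISCreateGeneral(PETSC_COMM_WORLD, nghost, ghost_idx, &
                       PETSC_COPY_VALUES, isfrom, ierr)
  ! reusable scatter: the listed entries of xglobal -> xlocal, in order
  call VecScatterCreate(xglobal, isfrom, xlocal, PETSC_NULL_IS, ctx, ierr)
  call ISDestroy(isfrom, ierr)
end subroutine setup_ghost_scatter

Each exchange then reduces to

  call VecScatterBegin(ctx, xglobal, xlocal, INSERT_VALUES, SCATTER_FORWARD, ierr)
  call VecScatterEnd(ctx, xglobal, xlocal, INSERT_VALUES, SCATTER_FORWARD, ierr)

and PETSc posts, completes, and frees all the MPI requests for you.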

Cheers,

Lawrence

