[petsc-users] petsc4py - Spike in memory usage when loading a matrix in parallel

Matthew Knepley knepley at gmail.com
Thu Oct 7 09:09:15 CDT 2021


On Thu, Oct 7, 2021 at 10:03 AM Barry Smith <bsmith at petsc.dev> wrote:

>
>    How many ranks are you using? Is it a sparse matrix with MPIAIJ?
>
>    The intention is that for parallel runs the first rank reads in its own
> part of the matrix, then reads in the part of the next rank and sends it,
> then reads the part of the third rank and sends it, etc. So there should not
> be too much of a blip in memory usage. You can run valgrind with the option
> for tracking memory usage to see exactly where in the code the blip occurs;
> it could be that a regression in the code has made it require more memory.
> But internal MPI buffers might explain some of the blip.
>
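
(For illustration, a minimal sketch of the staged read described above, written
with mpi4py rather than PETSc's internals; the file layout, chunk size, and tag
are made up, and this is not the actual MatLoad code. Rank 0 reads one rank's
portion at a time and sends it on, so its peak memory stays near a single
portion rather than the whole matrix.)

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
entries_per_rank = 1_000_000            # placeholder partitioning

if rank == 0:
    with open("matrix.bin", "rb") as f:              # placeholder file layout
        my_part = np.fromfile(f, dtype=np.float64, count=entries_per_rank)
        for dest in range(1, size):
            chunk = np.fromfile(f, dtype=np.float64, count=entries_per_rank)
            comm.Send(chunk, dest=dest, tag=77)      # chunk buffer is dropped each iteration
else:
    my_part = np.empty(entries_per_rank, dtype=np.float64)
    comm.Recv(my_part, source=0, tag=77)

(The valgrind option mentioned above would presumably be a heap profiler such as
massif, i.e. valgrind --tool=massif, which records where allocations peak.)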

Is it possible that we free the memory, but it has not yet been returned to the
OS and so still counts against the process? How are you measuring memory usage?

  Thanks,

     Matt
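
(One way to check, assuming the Linux workstation mentioned in the original
message: compare the process's current resident set size with its high-water
mark. A small sketch reading both from /proc/self/status follows; the field
names are standard Linux, nothing PETSc-specific.)

def memory_report(path="/proc/self/status"):
    # VmRSS is what the process holds right now; VmHWM is its peak so far.
    report = {}
    with open(path) as f:
        for line in f:
            if line.startswith(("VmRSS", "VmHWM")):
                key, value = line.split(":", 1)
                report[key] = value.strip()
    return report

print(memory_report())   # e.g. {'VmHWM': '... kB', 'VmRSS': '... kB'}

(If VmHWM ends up near twice the matrix size while a later VmRSS reading is back
near the matrix size, the extra memory was allocated and then released, which
would match the observed blip; monitors such as top only show the current RSS.)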


>   Barry
>
>
> > On Oct 7, 2021, at 9:50 AM, Michael Werner <michael.werner at dlr.de>
> wrote:
> >
> > Hello,
> >
> > I noticed that there is a peak in memory consumption when I load an
> > existing matrix into PETSc. The matrix was previously created by an
> > external program and saved in the PETSc binary format.
> > The code I'm using in petsc4py is simple:
> >
> > from petsc4py import PETSc
> >
> > viewer = PETSc.Viewer().createBinary("<path/to/existing/matrix>", "r",
> >                                      comm=PETSc.COMM_WORLD)
> > A = PETSc.Mat().create(comm=PETSc.COMM_WORLD)
> > A.load(viewer)
> >
> > When I run this code in serial, the memory consumption of the process is
> > about 50GB RAM, similar to the file size of the saved matrix. However,
> > if I run the code in parallel, for a few seconds the memory consumption
> > of the process doubles to around 100GB RAM, before dropping back down to
> > around 50GB RAM. So it seems as if, for some reason, the matrix is
> > copied after it is read into memory. Is there a way to avoid this
> > behaviour? Currently, it is a clear bottleneck in my code.
> >
> > I tried setting the size of the matrix and explicitly preallocating the
> > necessary NNZ (with A.setSizes(dim) and A.setPreallocationNNZ(nnz),
> > respectively) before loading, but that didn't help.
> >
> > As mentioned above, I'm using petsc4py together with PETSc-3.16 on a
> > Linux workstation.
> >
> > Best regards,
> > Michael Werner
> >
> > --
> >
> > ____________________________________________________
> >
> > Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR)
> > Institut für Aerodynamik und Strömungstechnik | Bunsenstr. 10 | 37073
> Göttingen
> >
> > Michael Werner
> > Telefon 0551 709-2627 | Telefax 0551 709-2811 | Michael.Werner at dlr.de
> > DLR.de
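
(Regarding the setSizes()/setPreallocationNNZ() attempt in the quoted message, a
minimal sketch of how that pre-sizing might look in petsc4py; the matrix type,
global size, and per-row nonzero count are placeholders, and nothing in this
thread establishes that preallocation changes the memory behaviour of the load.)

from petsc4py import PETSc

comm = PETSc.COMM_WORLD

n_global = 1000         # placeholder global size
nnz_per_row = 30        # placeholder per-row nonzero estimate

viewer = PETSc.Viewer().createBinary("<path/to/existing/matrix>", "r", comm=comm)

A = PETSc.Mat().create(comm=comm)
A.setType(PETSc.Mat.Type.MPIAIJ)      # assumed type; not confirmed in the thread
A.setSizes(((PETSc.DECIDE, n_global), (PETSc.DECIDE, n_global)))
A.setPreallocationNNZ(nnz_per_row)
A.load(viewer)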

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

