[petsc-users] petsc4py mpi matrix size

Smith, Barry F. bsmith at mcs.anl.gov
Fri Jan 10 09:21:35 CST 2020


  Yes, with, for example, MATMPIAIJ, the matrix entries are distributed among the processes; first verify that you are using an MPI matrix, not a Seq one, since a Seq matrix keeps an entire copy of the matrix on each process.
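  For example, a quick petsc4py sketch along these lines (variable names and the nnz value are just illustrative) can confirm the type and the row distribution on each rank:

    from petsc4py import PETSc

    comm = PETSc.COMM_WORLD
    n = 100

    # createAIJ with a communicator of size > 1 gives an 'mpiaij' matrix;
    # on a single process it is 'seqaij'.
    A = PETSc.Mat().createAIJ([n, n], nnz=3, comm=comm)

    # Each rank owns a contiguous block of rows; print the type and the range.
    rstart, rend = A.getOwnershipRange()
    print(f"rank {comm.getRank()}: type={A.getType()}, rows {rstart}..{rend - 1}")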

  But the parallel matrices do come with some overhead for metadata, so for small matrices like yours the memory can seem to grow unrealistically. Try a much bigger matrix, say 100 times as big, and look at the memory usage then. You should see that the metadata is now a much smaller percentage of the memory usage.

  Also be careful if you use top or other such tools for determining memory usage; since malloc()ed memory is often not returned to the OS, they can indicate much higher memory usage than is really taking place. You can run PETSc with -log_view -log_view_memory to get a good idea of where PETSc is allocating memory and how much.
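  If it helps, one way to pass those options from a petsc4py script is to initialize PETSc with the command-line arguments; a minimal sketch (the tiny matrix is only there so the log has something to report):

    import sys
    import petsc4py
    petsc4py.init(sys.argv)          # forward command-line options to PETSc
    from petsc4py import PETSc

    # A small matrix just so there is something for -log_view to report.
    A = PETSc.Mat().createAIJ([100, 100], nnz=3, comm=PETSc.COMM_WORLD)
    A.assemble()

    # Run as, for example:
    #   mpiexec -n 4 python thisscript.py -log_view -log_view_memory
    # The logging summary is printed when PETSc finalizes at interpreter exit.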

   Barry


> On Jan 10, 2020, at 7:52 AM, Lukas Razinkovas <lukasrazinkovas at gmail.com> wrote:
> 
> Hello,
> 
> I am trying to use petsc4py and slepc4py for parallel sparse matrix diagonalization.
> However, I am a bit confused about the increase in matrix size when I switch from a single process to multiple processes. For example, a 100 x 100 matrix with 298 nonzero elements consumes
> 8820 bytes of memory (mat.getInfo()["memory"]), but on two processes it consumes 20552 bytes and on four, 33528. My matrix is taken from slepc4py/demo/ex1.py,
> where the nonzero elements lie on three diagonals.
> 
> Why does memory usage increase with the number of MPI processes?
> I thought that each process stores its own rows, so total memory should stay the same. Or are some elements stored globally?
> 
> Lukas
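
For reference, a minimal petsc4py sketch in the spirit of slepc4py/demo/ex1.py (tridiagonal -1, 2, -1 matrix; sizes chosen to match the numbers quoted above) that reports the memory figure in question:

    from petsc4py import PETSc

    comm = PETSc.COMM_WORLD
    n = 100                                   # 100 x 100, 298 nonzeros

    A = PETSc.Mat().createAIJ([n, n], nnz=3, comm=comm)
    rstart, rend = A.getOwnershipRange()
    for i in range(rstart, rend):
        A.setValue(i, i, 2.0)                 # main diagonal
        if i > 0:
            A.setValue(i, i - 1, -1.0)        # sub-diagonal
        if i < n - 1:
            A.setValue(i, i + 1, -1.0)        # super-diagonal
    A.assemble()

    # getInfo() returns statistics including 'memory' (in bytes); on several
    # ranks this figure also reflects the per-process metadata discussed above.
    info = A.getInfo()
    PETSc.Sys.Print(f"memory = {info['memory']:.0f} bytes, "
                    f"nz_used = {info['nz_used']:.0f}")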


