[petsc-users] petsc4py mpi matrix size

Matthew Knepley knepley at gmail.com
Fri Jan 10 15:14:11 CST 2020


On Fri, Jan 10, 2020 at 7:23 AM Lukas Razinkovas <lukasrazinkovas at gmail.com>
wrote:

> Thank you very much!
>
> I already checked that it is an MPIAIJ matrix, and for the size I use the
> MatGetInfo routine.
> You are right. With a matrix of dimension 100000x10000 I get sizes:
>
> - serial: 5.603 MB
> - 4 proc.: 7.626 MB
> - 36 proc.: 7.834 MB
>
> That looks fine to me. Thank you again for such a quick response.
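
For later readers of the thread, here is a minimal petsc4py sketch of this kind of check. It is only a sketch: A stands for an already assembled matrix, it assumes a petsc4py build that exposes Mat.InfoType, and the figures are PETSc's own estimates, i.e. the same data MatGetInfo() reports in C.

    from petsc4py import PETSc

    def report_matrix_memory(A):
        """Print the matrix type and PETSc's memory estimate for A."""
        comm = A.getComm()
        # Expect 'mpiaij' on several processes, 'seqaij' on a single one.
        PETSc.Sys.Print("matrix type:", A.getType(), comm=comm)
        # Per-process and summed memory, in bytes, as reported by MatGetInfo.
        local = A.getInfo(PETSc.Mat.InfoType.LOCAL)["memory"]
        total = A.getInfo(PETSc.Mat.InfoType.GLOBAL_SUM)["memory"]
        PETSc.Sys.syncPrint("[%d] local: %d bytes" % (comm.getRank(), local),
                            comm=comm)
        PETSc.Sys.syncFlush(comm=comm)
        PETSc.Sys.Print("total: %d bytes" % total, comm=comm)
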
>
> I am really impressed with the Python interface to PETSc and SLEPc.
> I think it is missing detailed documentation, and that discouraged me from
> using it initially, so I was writing C code and then wrapping it with
> Python. I am still confused about how, for example, to set MUMPS
> parameters from Python code, but that is a different topic.
>

We would discourage you from setting solver parameters in the code; rather, use
the command line interface to the MUMPS parameters.
However, you can also put them in the code itself using PetscOptionsSetValue().
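
For instance, -mat_mumps_icntl_14 50 on the command line sets ICNTL(14). A rough petsc4py sketch of the in-code route follows; it is only a sketch, assuming a PETSc/petsc4py build with MUMPS enabled, and the tridiagonal test matrix is just a stand-in.

    from petsc4py import PETSc

    # Set a MUMPS parameter through the options database; this is the
    # petsc4py counterpart of PetscOptionsSetValue() and has the same
    # effect as -mat_mumps_icntl_14 50 on the command line.
    opts = PETSc.Options()
    opts["mat_mumps_icntl_14"] = 50   # ICNTL(14): working-space increase (%)

    comm = PETSc.COMM_WORLD
    n = 100
    A = PETSc.Mat().createAIJ([n, n], comm=comm)
    A.setUp()
    rstart, rend = A.getOwnershipRange()
    for i in range(rstart, rend):     # simple tridiagonal stand-in matrix
        A.setValue(i, i, 2.0)
        if i > 0:
            A.setValue(i, i - 1, -1.0)
        if i < n - 1:
            A.setValue(i, i + 1, -1.0)
    A.assemble()

    # Direct solve with MUMPS; setFromOptions() is where the options set
    # above (or given on the command line) are actually applied.
    ksp = PETSc.KSP().create(comm)
    ksp.setOperators(A)
    ksp.setType("preonly")
    pc = ksp.getPC()
    pc.setType("lu")
    pc.setFactorSolverType("mumps")
    ksp.setFromOptions()

    b = A.createVecRight()
    b.set(1.0)
    x = A.createVecLeft()
    ksp.solve(b, x)

The same pattern should carry over to slepc4py: set the options before calling setFromOptions() on the eigensolver, and the KSP/PC inside the spectral transformation will pick them up.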

  Thanks,

    Matt


> Lukas
>
> On Fri, Jan 10, 2020 at 5:21 PM Smith, Barry F. <bsmith at mcs.anl.gov>
> wrote:
>
>>
>>   Yes, with, for example, MATMPIAIJ, the matrix entries are distributed
>> among the processes; first verify that you are using an MPI matrix, not a
>> Seq one, since a Seq matrix keeps an entire copy on each process.
>>
>>   But the parallel matrices do come with some overhead for metadata, so
>> for small matrices like yours the memory can seem to grow unrealistically.
>> Try a much bigger matrix, say 100 times as big, and look at the memory
>> usage then. You should see that the metadata is now a much smaller
>> percentage of the memory usage.
>>
>>   Also be careful if you use top or other such tools for determining
>> memory usage; since malloc()ed memory is often not returned to the OS, they
>> can indicate much higher memory usage than is really taking place. You can
>> run PETSc with -log_view -log_view_memory to get a good idea of where PETSc
>> is allocating memory and how much.
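
As a concrete illustration (a sketch only; the script name and the mpiexec launcher line are placeholders), those flags can be passed straight through a petsc4py script's command line, because petsc4py forwards its arguments to PETSc at initialization:

    import sys
    import petsc4py
    # Forward the command line to PETSc; this must happen before the
    # "from petsc4py import PETSc" import. Running, for example,
    #     mpiexec -n 4 python script.py -log_view -log_view_memory
    # makes PETSc print its logging/memory summary when the program exits.
    petsc4py.init(sys.argv)
    from petsc4py import PETSc

    # ... build matrices and solve as usual ...
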
>>
>>    Barry
>>
>>
>> > On Jan 10, 2020, at 7:52 AM, Lukas Razinkovas
>> > <lukasrazinkovas at gmail.com> wrote:
>> >
>> > Hello,
>> >
>> > I am trying to use petsc4py and slepc4py for parallel sparse matrix
>> > diagonalization. However, I am a bit confused about the increase in
>> > matrix size when I switch from a single processor to multiple
>> > processors. For example, a 100 x 100 matrix with 298 nonzero elements
>> > consumes 8820 bytes of memory (mat.getInfo()["memory"]); however, on two
>> > processes it consumes 20552 bytes, and on four, 33528. My matrix is
>> > taken from slepc4py/demo/ex1.py, where the nonzero elements are on
>> > three diagonals.
>> >
>> > Why does the memory usage increase with the number of MPI processes?
>> > I thought that each process stores its own rows, so the total should
>> > stay the same. Or are some elements stored globally?
>> >
>> > Lukas
>>
>>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

