[petsc-users] Equivalent of all_reduce for sparse matrices

marco restelli mrestelli at gmail.com
Thu May 8 14:45:03 CDT 2014


2014-05-08 21:13 GMT+0200, Matthew Knepley <knepley at gmail.com>:
> On Thu, May 8, 2014 at 2:06 PM, marco restelli <mrestelli at gmail.com> wrote:
>
>> 2014-05-08 18:29 GMT+0200, Matthew Knepley <knepley at gmail.com>:
>> > On Thu, May 8, 2014 at 11:25 AM, marco restelli <mrestelli at gmail.com>
>> > wrote:
>> >
>> >> Hi,
>> >>    I have a Cartesian communicator and some matrices distributed along
>> >> the "x" direction. I would like to compute an all_reduce operation for
>> >> these matrices in the y direction, and I wonder whether there is a
>> >> PETSc function for this.
>> >>
>> >>
>> >> More precisely:
>> >>
>> >> a matrix A is distributed among processors  0, 1, 2
>> >> another A is distributed among processors   3, 4, 5
>> >> another A is distributed among processors   6, 7, 8
>> >> ...
>> >>
>> >> The x direction is 0,1,2, while the y direction is 0,3,6,...
>> >>
>> >> I would like to compute a matrix  B = "sum of the matrices A"; one
>> >> copy of B should be distributed among processors 0,1,2, another copy
>> >> among 3,4,5, and so on.
>> >>
>> >> One way of doing this is to get the matrix coefficients, broadcast
>> >> them along the y direction and sum them into the matrix B; maybe,
>> >> however, there is already a PETSc function that does this.
>> >>
>> >
>> > There is nothing like this in PETSc. There are many tools for this
>> > using dense matrices in Elemental, but I have not seen anything for
>> > sparse matrices.
>> >
>> >    Matt
>> >
>>
>> OK, thank you.
>>
>> Now, to do it myself, is MatGetRow the best way to get all the local
>> nonzero entries of a matrix?
>
>
> I think MatGetSubMatrices() is probably better.
>
>    Matt

Matt, thanks, but I don't understand this. What I want is to get three
arrays (i,j,coeff) with all the local nonzero coefficients, so that I
can send them around with MPI.
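
For reference, a rough sketch of that extraction with MatGetRow /
MatRestoreRow (ExtractLocalCOO is a made-up helper name; it assumes an
AIJ-type matrix and hands back three newly allocated arrays that the
caller frees with PetscFree):

#include <petscmat.h>

/* Extract the locally owned nonzeros of A as COO triples (ia,ja,va). */
PetscErrorCode ExtractLocalCOO(Mat A,PetscInt *nnz_local,
                               PetscInt **ia,PetscInt **ja,PetscScalar **va)
{
  PetscInt          rstart,rend,r,c,ncols,nnz = 0,k = 0;
  const PetscInt    *cols;
  const PetscScalar *vals;
  PetscErrorCode    ierr;

  PetscFunctionBeginUser;
  ierr = MatGetOwnershipRange(A,&rstart,&rend);CHKERRQ(ierr);

  /* first pass: count the local nonzeros */
  for (r = rstart; r < rend; r++) {
    ierr = MatGetRow(A,r,&ncols,NULL,NULL);CHKERRQ(ierr);
    nnz += ncols;
    ierr = MatRestoreRow(A,r,&ncols,NULL,NULL);CHKERRQ(ierr);
  }
  ierr = PetscMalloc1(nnz,ia);CHKERRQ(ierr);
  ierr = PetscMalloc1(nnz,ja);CHKERRQ(ierr);
  ierr = PetscMalloc1(nnz,va);CHKERRQ(ierr);

  /* second pass: copy global row index, column index and value */
  for (r = rstart; r < rend; r++) {
    ierr = MatGetRow(A,r,&ncols,&cols,&vals);CHKERRQ(ierr);
    for (c = 0; c < ncols; c++) {
      (*ia)[k] = r; (*ja)[k] = cols[c]; (*va)[k] = vals[c]; k++;
    }
    ierr = MatRestoreRow(A,r,&ncols,&cols,&vals);CHKERRQ(ierr);
  }
  *nnz_local = nnz;
  PetscFunctionReturn(0);
}

MatGetRow only gives access to locally owned rows, which is exactly what
is needed here; each row has to be released with MatRestoreRow before
the next one is requested.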

MatGetSubMatrices would give me PETSc objects, which I cannot pass to
MPI, right?
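
One way the MPI part could look, assuming all the matrices share the
same global size and row distribution (so the processes of one y group
own matching row blocks), that B has been created with the same layout
as A and preallocated for the union of the sparsity patterns, and that
the second dimension of the Cartesian communicator is "y" (SumAlongY
and the other names below are illustrative):

#include <petscmat.h>

/* Gather the COO triples of all processes in the same y group and add
   them into B; ADD_VALUES accumulates entries with the same (i,j). */
PetscErrorCode SumAlongY(MPI_Comm cart,Mat B,PetscInt nnz_local,
                         PetscInt *ia,PetscInt *ja,PetscScalar *va)
{
  MPI_Comm       ycomm;
  int            remain[2] = {0,1},ysize,p,nloc,*counts,*displs;
  PetscInt       *ia_all,*ja_all,ntot = 0,k;
  PetscScalar    *va_all;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  MPI_Cart_sub(cart,remain,&ycomm);  /* sub-communicators 0,3,6,... etc. */
  MPI_Comm_size(ycomm,&ysize);

  /* how many triples does each process of the y group contribute? */
  nloc = (int)nnz_local;
  ierr = PetscMalloc2(ysize,&counts,ysize,&displs);CHKERRQ(ierr);
  MPI_Allgather(&nloc,1,MPI_INT,counts,1,MPI_INT,ycomm);
  for (p = 0; p < ysize; p++) {displs[p] = (int)ntot; ntot += counts[p];}

  /* gather the triples themselves */
  ierr = PetscMalloc3(ntot,&ia_all,ntot,&ja_all,ntot,&va_all);CHKERRQ(ierr);
  MPI_Allgatherv(ia,nloc,MPIU_INT,ia_all,counts,displs,MPIU_INT,ycomm);
  MPI_Allgatherv(ja,nloc,MPIU_INT,ja_all,counts,displs,MPIU_INT,ycomm);
  MPI_Allgatherv(va,nloc,MPIU_SCALAR,va_all,counts,displs,MPIU_SCALAR,ycomm);

  for (k = 0; k < ntot; k++) {
    ierr = MatSetValues(B,1,&ia_all[k],1,&ja_all[k],&va_all[k],ADD_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(B,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(B,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  ierr = PetscFree3(ia_all,ja_all,va_all);CHKERRQ(ierr);
  ierr = PetscFree2(counts,displs);CHKERRQ(ierr);
  MPI_Comm_free(&ycomm);
  PetscFunctionReturn(0);
}

If the sparsity pattern were identical on every process of the y group,
a single MPI_Allreduce on the value array would already do; gathering
full (i,j,coeff) triples avoids that assumption at the cost of more
communication.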

Marco

