[petsc-users] About MatGetOwnershipRange()

Jed Brown jedbrown at mcs.anl.gov
Thu Sep 12 18:51:21 CDT 2013

Joon Hee Choi <choi240 at purdue.edu> writes:

> Hi Jed,
> Thank you for your reply. I tried to follow your method, but I think
> my code still has something wrong because "localsize" is the same as
> global row size("I"). 

Did you run in parallel?  On a single process, PETSC_DECIDE makes
PetscSplitOwnership() assign every row to that process, so localsize == I
is exactly what you should see.

> Could you let me know what I am missing? My new code is as follows:
> PetscErrorCode SetUpMatrix(Mat *X, vector< std::tr1::tuple< PetscInt, PetscInt, PetscInt > > tuples, PetscInt I, PetscInt J)
> {
>    PetscInt i, j;
>    PetscScalar val;
>    MatCreate(PETSC_COMM_WORLD, X);
>    MatSetType(*X, MATMPIAIJ);
>    PetscInt begin, end, localsize=PETSC_DECIDE;
>    PetscSplitOwnership(PETSC_COMM_WORLD, &localsize, &I);
>    MPI_Scan(&localsize, &end, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD); /* MPIU_INT matches PetscInt; MPI_INT breaks with 64-bit indices */
>    begin = end - localsize;
>    PetscPrintf(PETSC_COMM_WORLD, "Local Size: %D, begin: %D, end: %D \n", localsize, begin, end);
>    ...
> }
> Also, the tuples are (row index, column index, value) triples of the
> sparse matrix, which are read from a file.

Using files this way is a massive bottleneck that you'll have to
eliminate if you want your code to be scalable.

> The tuples are sorted and distributed. 

When you distribute, are you sure that each process really gets the
entire row, or would it be possible to cut in the middle of a row?