[petsc-users] About MatGetOwnershipRange()
Jed Brown
jedbrown at mcs.anl.gov
Thu Sep 12 03:04:17 CDT 2013
Joon Hee Choi <choi240 at purdue.edu> writes:
> Hello,
>
> I am trying to set up a large MATMPIAIJ matrix from tuples. Each tuple
> is a triple of (row, column, value). I will use 12~20 machines for this
> implementation. However, I think I need to know which rows each process
> owns before preallocation, because for large matrices the exact
> preallocation is important for good performance. Thus, I tried to use
> MatGetOwnershipRange(), but I found out that I cannot use it before
> preallocation. Could you let me know how to change my code? I am
> attaching the code.
>
> Thank you,
> Joon
>
>
> PetscErrorCode SetUpMatrix(Mat *X, vector< std::tr1::tuple< PetscInt, PetscInt, PetscInt > > tups, PetscInt I, PetscInt J)
> {
> PetscInt i, j;
> PetscScalar val;
>
> MatCreate(PETSC_COMM_WORLD, X);
> MatSetType(*X, MATMPIAIJ);
> MatSetSizes(*X, PETSC_DECIDE, PETSC_DECIDE, I, J);
>
> PetscInt begin, end;
> MatGetOwnershipRange(*X, &begin, &end);
Use PetscSplitOwnership() to determine the local row/column sizes, then
MPI_Scan (or MPI_Exscan) to find your starting offset; a short sketch of
that approach follows the sequence below. Alternatively, use the
following sequence (from PetscLayoutCreate).
Notes: Typical calling sequence
   PetscLayoutCreate(MPI_Comm,PetscLayout *);
   PetscLayoutSetBlockSize(PetscLayout,1);
   PetscLayoutSetSize(PetscLayout,N); or PetscLayoutSetLocalSize(PetscLayout,n);
   PetscLayoutSetUp(PetscLayout);
Optionally use any of the following:
   PetscLayoutGetSize(PetscLayout,PetscInt *N); or PetscLayoutGetLocalSize(PetscLayout,PetscInt *n);
   PetscLayoutGetRange(PetscLayout,PetscInt *rstart,PetscInt *rend); or PetscLayoutGetRanges(PetscLayout,const PetscInt *range[]);
   PetscLayoutDestroy(PetscLayout*);
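
For example, a minimal sketch of the MPI_Scan approach (untested; it
reuses the global sizes I and J from your function and passes the
computed local size to MatSetSizes so the Mat uses the same split):

   PetscInt localsize = PETSC_DECIDE, begin, end;
   /* Compute the same local row count that PETSC_DECIDE would produce */
   PetscSplitOwnership(PETSC_COMM_WORLD, &localsize, &I);
   /* Inclusive prefix sum of local sizes gives the end of our range */
   MPI_Scan(&localsize, &end, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD);
   begin = end - localsize;
   /* Pass localsize explicitly so the matrix matches this distribution */
   MatSetSizes(*X, localsize, PETSC_DECIDE, I, J);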
> PetscInt localsize = end - begin;
>
> PetscInt d_nnz[localsize];
> PetscInt o_nnz[localsize];
This allocation will overflow your stack. You probably want to put it
on the heap.
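
For instance (a sketch, not from your original code), std::vector puts
the counts on the heap and also zero-initializes them, which the stack
arrays above never do before being incremented:

   /* requires #include <vector>; heap-allocated, zeroed counters */
   std::vector<PetscInt> d_nnz(localsize, 0), o_nnz(localsize, 0);
   /* later, pass the raw pointers to the preallocation call: */
   MatMPIAIJSetPreallocation(*X, 0, &d_nnz[0], 0, &o_nnz[0]);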
> for (int x=0; x<tups.size(); x++) {
> i = std::tr1::get<0>(tups[x]);
> if (i>=end) break;
Are your tuples sorted to begin with? Are they specified redundantly or
distributed? Are any of the tuples nonlocal? If they are distributed
without matching the row distribution, then you need to communicate
d_nnz and o_nnz; a rough sketch of one way to do that appears after the
end of your function below.
> if (i>=begin) {
> j = std::tr1::get<1>(tups[x]);
> if (j>=begin && j<end) d_nnz[i-begin]++;
> else o_nnz[i-begin]++;
> }
> }
> MatMPIAIJSetPreallocation(*X, 0, d_nnz, 0, o_nnz);
>
> for (int x=0; x<tups.size(); x++) {
> i = std::tr1::get<0>(tups[x]);
> if (i>=end) break;
> if (i>=begin) {
> j = std::tr1::get<1>(tups[x]);
> val = std::tr1::get<2>(tups[x]);
> MatSetValues(*X, 1, &i, 1, &j, &val, INSERT_VALUES);
> }
> }
> MatAssemblyBegin(*X, MAT_FINAL_ASSEMBLY);
> MatAssemblyEnd(*X, MAT_FINAL_ASSEMBLY);
> return 0;
> }
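
Regarding the distributed-tuples question above, here is a rough sketch
(my own, untested, and not part of the original code) of communicating
the counts when the tuples do not match the row distribution. It assumes
a square matrix, so each owner's diagonal-block column range equals its
row range; it gathers all row starts so every rank can classify its
entries, then uses MPI_Reduce_scatter to sum the contributions and
deliver each rank the counts for its own rows. The helper name
CountNNZDistributed is hypothetical.

   #include <petscmat.h>
   #include <algorithm>
   #include <vector>
   #include <tr1/tuple>   /* GCC/libstdc++ location of std::tr1::tuple */

   /* Hypothetical helper (sketch): sum per-row nonzero counts across
      ranks when tuples are distributed arbitrarily.  Memory is O(global
      rows) per rank, which is often acceptable just for counting. */
   static PetscErrorCode CountNNZDistributed(MPI_Comm comm, PetscInt I,
       PetscInt begin, PetscInt localsize,
       const std::vector< std::tr1::tuple< PetscInt, PetscInt, PetscInt > > &tups,
       std::vector<PetscInt> &d_nnz, std::vector<PetscInt> &o_nnz)
   {
     int size, r;
     MPI_Comm_size(comm, &size);

     /* Every rank learns where every other rank's rows start */
     std::vector<PetscInt> rowstart(size+1);
     MPI_Allgather(&begin, 1, MPIU_INT, &rowstart[0], 1, MPIU_INT, comm);
     rowstart[size] = I;

     /* Classify each local tuple against the owner's row range (square
        matrix assumed, so it doubles as the diagonal column range) */
     std::vector<PetscInt> d_all(I, 0), o_all(I, 0);
     for (size_t x = 0; x < tups.size(); x++) {
       PetscInt i = std::tr1::get<0>(tups[x]);
       PetscInt j = std::tr1::get<1>(tups[x]);
       int owner = (int)(std::upper_bound(rowstart.begin(), rowstart.end(), i)
                         - rowstart.begin()) - 1;
       if (j >= rowstart[owner] && j < rowstart[owner+1]) d_all[i]++;
       else                                               o_all[i]++;
     }

     /* Sum everyone's contributions; each rank receives its own rows */
     std::vector<int> recvcounts(size);
     for (r = 0; r < size; r++) recvcounts[r] = (int)(rowstart[r+1] - rowstart[r]);
     d_nnz.assign(localsize, 0);
     o_nnz.assign(localsize, 0);
     MPI_Reduce_scatter(&d_all[0], &d_nnz[0], &recvcounts[0], MPIU_INT, MPI_SUM, comm);
     MPI_Reduce_scatter(&o_all[0], &o_nnz[0], &recvcounts[0], MPIU_INT, MPI_SUM, comm);
     return 0;
   }

You would call this between MatSetSizes and the preallocation call, then
pass &d_nnz[0] and &o_nnz[0] to MatMPIAIJSetPreallocation as above.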