[petsc-users] Problem when getting matrix values owned by other processor

Hong hzhang at mcs.anl.gov
Fri Apr 15 12:05:07 CDT 2016


Yaoyu :
Thanks for sending me your code and matrices. I tested it and found the
answer.

When ISs are created with
#define MY_COMMUNICATOR (PETSC_COMM_WORLD)
the parallel subMatrixIS_Row is stored as follows (shown by ISView()):

IS Object: 2 MPI processes
  type: general
[0] Number of indices in set 6
[0] 0 0
[0] 1 190
[0] 2 0
[0] 3 10
[0] 4 9
[0] 5 1
[1] Number of indices in set 6
[1] 0 95
[1] 1 475
[1] 2 275
[1] 3 295
[1] 4 284
[1] 5 286

i.e., your local ISs are concatenated into a parallel IS.
In the MatGetSubMatrices_MPIAIJ() implementation, PETSc only uses the local
size and local indices of the input IS, which explains why you got the same
answer.
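For reference, here is a minimal sketch of the recommended usage, with the ISs created on PETSC_COMM_SELF. The index values and the helper-function name are made up for illustration; it assumes an existing 570x3 MPIDENSE matrix A, as in your test.

```c
#include <petscmat.h>

/* Sketch: extract one local 6x3 sequential submatrix per process
   from a parallel dense matrix A. The row indices below are
   placeholders, not taken from the original code.                */
PetscErrorCode GetLocalSubMatrix(Mat A, Mat **submat)
{
  IS             isrow, iscol;
  PetscInt       rows[6] = {0, 190, 10, 9, 1, 95};  /* no duplicates */
  PetscInt       cols[3] = {0, 1, 2};
  PetscErrorCode ierr;

  /* ISs passed to MatGetSubMatrices() must be sequential:
     create them on PETSC_COMM_SELF, not PETSC_COMM_WORLD.  */
  ierr = ISCreateGeneral(PETSC_COMM_SELF, 6, rows, PETSC_COPY_VALUES, &isrow);CHKERRQ(ierr);
  ierr = ISCreateGeneral(PETSC_COMM_SELF, 3, cols, PETSC_COPY_VALUES, &iscol);CHKERRQ(ierr);
  ierr = ISSort(isrow);CHKERRQ(ierr);

  /* Returns an array of sequential (SEQDENSE) submatrices,
     one per requested (isrow, iscol) pair.                  */
  ierr = MatGetSubMatrices(A, 1, &isrow, &iscol, MAT_INITIAL_MATRIX, submat);CHKERRQ(ierr);

  ierr = ISDestroy(&isrow);CHKERRQ(ierr);
  ierr = ISDestroy(&iscol);CHKERRQ(ierr);
  return 0;
}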

>
> The situation for me is that I am trying to write a piece of code to
> do some simple CFD simulations using the finite volume method. The row IS
> I pass to MatGetSubMatrices() contains the neighbor element indices of
> the current element being processed. The reason for the duplicated
> indices in the row IS is that I simply put a ZERO index at every
> place where the actual neighbor is the boundary of the fluid
> domain. You see, it is all because I'm too lazy to deal with the
> differences between the boundary elements and the inner elements while
> gathering information from the parallel matrix. I think I should
> stop doing this, and I am working on refining the code. Do you have
> any suggestions for handling the boundary elements?
>

Sorry, I'm not an expert on the finite volume method. Others may be able to
help you with this issue. Intuitively, the matrix is assembled from
elements, so there should be a more efficient way to retrieve element data
without going through the matrix.
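On the boundary-element question, one common pattern (a sketch of my own, not from your code) is to mark boundary faces in the neighbor list with a sentinel such as -1, and build the row IS only from the valid neighbor indices, so no dummy ZERO entries are needed:

```c
#include <petscmat.h>

/* Sketch: build a row IS from a per-element neighbor list in which
   boundary faces are marked with -1 (an assumed convention). Only
   real neighbors enter the IS, so there are no dummy duplicates.  */
PetscErrorCode BuildNeighborIS(const PetscInt *nbr, PetscInt nnbr, IS *isrow)
{
  PetscInt       valid[16];  /* assumes nnbr <= 16 for this sketch */
  PetscInt       i, n = 0;
  PetscErrorCode ierr;

  for (i = 0; i < nnbr; i++) {
    if (nbr[i] >= 0) valid[n++] = nbr[i];  /* skip boundary faces */
  }
  ierr = ISCreateGeneral(PETSC_COMM_SELF, n, valid, PETSC_COPY_VALUES, isrow);CHKERRQ(ierr);
  ierr = ISSort(*isrow);CHKERRQ(ierr);
  /* Boundary contributions can then be applied separately from
     stored boundary-condition data, rather than read from the matrix. */
  return 0;
}
```

This keeps the extracted submatrix free of duplicate rows and makes the boundary handling explicit.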

Hong

>
> 2016-04-15 2:01 GMT+08:00 Hong <hzhang at mcs.anl.gov>:
> > Yaoyu :
> > Can you send me the code for calling MatGetSubMatrices()?
> > I want to check how you created the MPI ISs.
> >
> > Checking MatGetSubMatrices() for the MPIAIJ matrix format, it seems
> > duplicate IS indices would work. I guess duplicate rows might be OK, but
> > duplicate columns might run into trouble in our implementation.
> >
> > By definition, a submatrix of A should not have duplicate rows/columns
> > of A, should it?
> >
> > Hong
> >
> > Hi Hong,
> >>
> >>
> >> I think what you said about the ISs for MatGetSubMatrices() is right.
> >>
> >> Today I tried running my program to get a submatrix with
> >> MatGetSubMatrices() twice: once with ISs created with
> >> PETSC_COMM_WORLD and once with PETSC_COMM_SELF. I was
> >> creating a 6x3 submatrix on each processor from a 570x3 parallel dense
> >> matrix. There are only 2 processors in total. The results are as
> >> follows:
> >>
> >> ======== submatrix with ISs created with PETSC_COMM_WORLD =========
> >> Processor 0
> >> Mat Object: 1 MPI processes
> >>   type: seqdense
> >> 2.7074996615625420e+10 6.8399452804377966e+11 -1.2730861976538324e+08
> >> 2.7074996615625420e+10 6.8399452804377966e+11 -1.2730861976538324e+08
> >> 2.7359996580000423e+10 8.2079343365253589e+11 6.3654303672968522e+06
> >> 2.9639996295000458e+10 1.9151846785225828e+12 -1.0184686034752984e+08
> >> 2.9924996259375465e+10 1.1947308948674746e+12 -3.8181422587255287e+08
> >> 5.4149993231250839e+10 2.6675786593707410e+13 -4.9650360873402004e+09
> >>
> >> Processor 1
> >> Mat Object: 1 MPI processes
> >>   type: seqdense
> >> 5.4149993231250839e+10 1.9183199555758315e+12 5.9880080526417112e+08
> >> 7.8374990203126236e+10 5.7441876402563477e+12 1.7367770705944958e+09
> >> 8.0939989882501266e+10 5.7347670250898613e+12 1.7900992494213123e+09
> >> 8.1509989811251282e+10 5.7751527083651416e+12 1.8027055821637158e+09
> >> 8.4074989490626328e+10 6.1543675577970762e+12 1.8557943339637902e+09
> >> 1.0829998646250171e+11 9.5915997778791719e+12 2.9940040263208551e+09
> >>
> >> ==== end of submatrix with ISs created with PETSC_COMM_WORLD ======
> >>
> >> ======== submatrix with ISs created with PETSC_COMM_SELF =========
> >> Processor 0
> >> Mat Object: 1 MPI processes
> >>   type: seqdense
> >> 2.7074996615625420e+10 6.8399452804377966e+11 -1.2730861976538324e+08
> >> 2.7074996615625420e+10 6.8399452804377966e+11 -1.2730861976538324e+08
> >> 2.7359996580000423e+10 8.2079343365253589e+11 6.3654303672968522e+06
> >> 2.9639996295000458e+10 1.9151846785225828e+12 -1.0184686034752984e+08
> >> 2.9924996259375465e+10 1.1947308948674746e+12 -3.8181422587255287e+08
> >> 5.4149993231250839e+10 2.6675786593707410e+13 -4.9650360873402004e+09
> >>
> >> Processor 1
> >> Mat Object: 1 MPI processes
> >>   type: seqdense
> >> 5.4149993231250839e+10 1.9183199555758315e+12 5.9880080526417112e+08
> >> 7.8374990203126236e+10 5.7441876402563477e+12 1.7367770705944958e+09
> >> 8.0939989882501266e+10 5.7347670250898613e+12 1.7900992494213123e+09
> >> 8.1509989811251282e+10 5.7751527083651416e+12 1.8027055821637158e+09
> >> 8.4074989490626328e+10 6.1543675577970762e+12 1.8557943339637902e+09
> >> 1.0829998646250171e+11 9.5915997778791719e+12 2.9940040263208551e+09
> >>
> >> ==== end of submatrix with ISs created with PETSC_COMM_SELF ======
> >>
> >> The results are identical. I agree with you that only
> >> sequential ISs should be used when creating a submatrix with
> >> MatGetSubMatrices(). The above results may just be a coincidence.
> >>
> >> I have taken your suggestion and am now using only sequential ISs
> >> with MatGetSubMatrices().
> >>
> >> Yet another question: Can I use an IS which has duplicate entries?
> >> The documentation of MatGetSubMatrices() says that "The index sets may
> >> not have duplicate entries", so I think no duplicated entries are
> >> allowed in the IS. But again, I just tried an IS which has duplicate
> >> entries, and the resulting submatrices on each processor seemed to be
> >> correct. In fact, in the sample submatrices I showed above, the
> >> submatrix owned by Processor 0 has its row IS specified as [x, x, a,
> >> b, c, d], where 'x' denotes the duplicated entry in the IS. I then got
> >> two identical rows in the submatrix owned by Processor 0.
> >>
> >> But I think I was doing this incorrectly.
> >>
> >> Thanks.
> >>
> >> HU Yaoyu
> >>
> >> >
> >> > Yaoyu:
> >> >
> >> >  "MatGetSubMatrices() can extract ONLY sequential submatrices
> >> >    (from both sequential and parallel matrices). Use MatGetSubMatrix()
> >> >    to extract a parallel submatrix."
> >> >
> >> > Using a parallel IS with MatGetSubMatrices() is definitely
> >> > incorrect, unless you only use one process.
> >> >
> >> >>
> >> >> Furthermore, I checked the submatrices I obtained from
> >> >> MatGetSubMatrices() with both parallel ISs and sequential ISs. The
> >> >> submatrices are identical. Does this mean that I could actually use
> >> >> a parallel IS (created with PETSC_COMM_WORLD)?
> >> >>
> >> >
> >> > What did you compare with? I do not understand what submatrices
> >> > would be obtained with a parallel IS using more than one process.
> >> >
> >> > Hong
> >> >
> >> >>
> >> >> > Yaoyu :
> >> >> > MatGetSubMatrices() returns sequential matrices.
> >> >> > IS must be sequential, created with PETSC_COMM_SELF.
> >> >> > See petsc/src/mat/examples/tests/ex42.c
> >> >> >
> >> >> > Check your submatrices.
> >> >> >
> >> >> > Hong
> >> >> >
> >> >> > Hi everyone,
> >> >> >>
> >> >> I am trying to get the values of a parallel matrix that are owned
> >> >> by other processors.
> >> >> >>
> >> >> I tried to create a submatrix using MatGetSubMatrices(), and then
> >> >> called MatGetRow() on the submatrix. But MatGetRow() gave me the
> >> >> following error message:
> >> >> >>
> >> >> >> ===== Error message begins =====
> >> >> >>
> >> >> >> No support for this operation for this object type
> >> >> >> only local rows
> >> >> >>
> >> >> >> ===== Error message ends =====
> >> >> >>
> >> >> >> The parallel matrix is a parallel dense matrix. The ISs for
> >> >> >> MatGetSubMatrices() are created using ISCreateGeneral() and
> >> >> >> PETSC_COMM_WORLD. The row IS is sorted by ISSort().
> >> >> >>
> >> >> What mistake did I make while using the above functions? Is there
> >> >> a better way to access matrix values owned by other processors?
> >> >> >>
> >> >> >> Thanks!
> >> >> >>
> >> >> >> HU Yaoyu
> >> >> >>
>