[petsc-users] Parallel matrix-vector multiplication

Gianluca Meneghello gianmail at gmail.com
Thu Nov 12 16:38:09 CST 2015


Hi Barry,

thanks for your help. I am going to have a look at the code right away; I am
curious to better understand how PETSc works.

In the meantime I have tried a different approach, modifying
DMCreateInterpolation_DA_2D_Q0 to provide the mapping (B) between two DMs
where one is not a refinement of the other. It looks like it is working.
If that would be useful, I will clean it up and mail it to you.

Thanks again for your help,

Gianluca

2015-11-12 12:25 GMT-08:00 Barry Smith <bsmith at mcs.anl.gov>:

>  I forgot the attached code
>
> > On Nov 12, 2015, at 2:24 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> >
> >
> >   This is kind of tedious to get right. Plus it unfortunately will not
> > work for all possible partitionings.
> >
> >   Issues:
> >
> > 1) MatSetValues() works with global numbering, but the PETSc global
> > numbering is per process, so with a DMDA it does not normally match the
> > "natural" ordering. You are trying to use global numbering in the natural
> > ordering. To get it to work you need to use local numbering and
> > MatSetValuesLocal().
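> >
> > For example, roughly along these lines (an untested sketch, not the
> > attached code; B is the rectangular matrix, dax and daf the DMDAs of x
> > and f, and the local indices i, j are placeholders for the fill loop):
> >
> >    ISLocalToGlobalMapping rmap, cmap;
> >    PetscInt               i = 0, j = 0;   /* local row/column indices */
> >    PetscScalar            one = 1.0;
> >
> >    /* rows of B follow the DMDA of x, columns follow the DMDA of f */
> >    DMGetLocalToGlobalMapping(dax, &rmap);
> >    DMGetLocalToGlobalMapping(daf, &cmap);
> >    MatSetLocalToGlobalMapping(B, rmap, cmap);
> >
> >    /* a single entry as illustration; in practice this sits in a loop
> >       over the locally owned nodes of daf */
> >    MatSetValuesLocal(B, 1, &i, 1, &j, &one, INSERT_VALUES);
> >    MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY);
> >    MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY);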
> >
> > 2) It is next to impossible to debug as written. So to help debug, I put
> > values using the local numbering into the two vectors f and x and printed
> > those vectors. This makes it easy to see when values end up in the wrong
> > place.
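> >
> > Something along these lines (a sketch reconstructing the idea, not the
> > attached code; i and j are again the local indices used when filling B):
> >
> >    PetscMPIInt rank;
> >    PetscScalar val;
> >
> >    MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
> >    val = (PetscScalar)(rank + 1);          /* recognizable per-process value */
> >    VecSetValuesLocal(x, 1, &i, &val, INSERT_VALUES);
> >    VecSetValuesLocal(f, 1, &j, &val, INSERT_VALUES);
> >    VecAssemblyBegin(x);  VecAssemblyEnd(x);
> >    VecAssemblyBegin(f);  VecAssemblyEnd(f);
> >    VecView(x, PETSC_VIEWER_STDOUT_WORLD);
> >    VecView(f, PETSC_VIEWER_STDOUT_WORLD);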
> >
> > 3) VecView() for DMDA prints out the vectors in the global natural
> > ordering, so it is easy to see whether the vectors have the correct values
> > in the correct locations. MatView() for DMDA, however, prints the matrix
> > only in the PETSc per-process ordering, so one needs to translate manually
> > to make sure the matrix is correct.
> >
> >  Anyway, I've attached your code with small Mx=5 and My=4; it runs
> > correctly with 1, 2, and 4 processes. Here is the output:
> >
> >
> >   $ petscmpiexec -n 1  ./ex5
> > Vec Object: 1 MPI processes
> >  type: seq
> > Vec Object:Vec_0x84000000_0 1 MPI processes
> >  type: mpi
> > Process [0]
> > 0.
> > 0.
> > 0.
> > 0.
> > 0.
> > 0.
> > 1.
> > 1.
> > 1.
> > 0.
> > 0.
> > 1.
> > 1.
> > 1.
> > 0.
> > 0.
> > 0.
> > 0.
> > 0.
> > 0.
> > Vec Object: 1 MPI processes
> >  type: seq
> > Vec Object:Vec_0x84000000_1 1 MPI processes
> >  type: mpi
> > Process [0]
> > 1.
> > 1.
> > 1.
> > 1.
> > 1.
> > 1.
> > Mat Object: 1 MPI processes
> >  type: seqaij
> > row 0:
> > row 1:
> > row 2:
> > row 3:
> > row 4:
> > row 5:
> > row 6: (0, 1.)
> > row 7: (1, 1.)
> > row 8: (2, 1.)
> > row 9:
> > row 10:
> > row 11: (3, 1.)
> > row 12: (4, 1.)
> > row 13: (5, 1.)
> > row 14:
> > row 15:
> > row 16:
> > row 17:
> > row 18:
> > row 19:
> > ~/Src/petsc/test-dir (master=) arch-double
> > $ petscmpiexec -n 2  ./ex5
> > Vec Object: 2 MPI processes
> >  type: mpi
> > Vec Object:Vec_0x84000004_0 2 MPI processes
> >  type: mpi
> > Process [0]
> > 0.
> > 0.
> > 0.
> > 0.
> > 0.
> > 0.
> > 1.
> > 1.
> > 2.
> > 0.
> > 0.
> > 1.
> > Process [1]
> > 1.
> > 2.
> > 0.
> > 0.
> > 0.
> > 0.
> > 0.
> > 0.
> > Vec Object: 2 MPI processes
> >  type: mpi
> > Vec Object:Vec_0x84000004_1 2 MPI processes
> >  type: mpi
> > Process [0]
> > 1.
> > 1.
> > 2.
> > 1.
> > Process [1]
> > 1.
> > 2.
> > Mat Object: 2 MPI processes
> >  type: mpiaij
> > row 0:
> > row 1:
> > row 2:
> > row 3:
> > row 4: (0, 1.)
> > row 5: (1, 1.)
> > row 6:
> > row 7: (2, 1.)
> > row 8: (3, 1.)
> > row 9:
> > row 10:
> > row 11:
> > row 12:
> > row 13:
> > row 14: (4, 2.)
> > row 15:
> > row 16: (5, 2.)
> > row 17:
> > row 18:
> > row 19:
> > ~/Src/petsc/test-dir (master=) arch-double
> > $ petscmpiexec -n 4  ./ex5
> > Vec Object: 4 MPI processes
> >  type: mpi
> > Vec Object:Vec_0x84000004_0 4 MPI processes
> >  type: mpi
> > Process [0]
> > 0.
> > 0.
> > 0.
> > 0.
> > 0.
> > 0.
> > Process [1]
> > 1.
> > 1.
> > 2.
> > 0.
> > Process [2]
> > 0.
> > 3.
> > 3.
> > 4.
> > 0.
> > 0.
> > Process [3]
> > 0.
> > 0.
> > 0.
> > 0.
> > Vec Object: 4 MPI processes
> >  type: mpi
> > Vec Object:Vec_0x84000004_1 4 MPI processes
> >  type: mpi
> > Process [0]
> > 1.
> > 1.
> > Process [1]
> > 2.
> > Process [2]
> > 3.
> > 3.
> > Process [3]
> > 4.
> > Mat Object: 4 MPI processes
> >  type: mpiaij
> > row 0:
> > row 1:
> > row 2:
> > row 3:
> > row 4: (0, 1.)
> > row 5: (1, 1.)
> > row 6:
> > row 7:
> > row 8: (2, 2.)
> > row 9:
> > row 10:
> > row 11: (3, 3.)
> > row 12: (4, 3.)
> > row 13:
> > row 14:
> > row 15:
> > row 16: (5, 4.)
> > row 17:
> > row 18:
> > row 19:
> >
> >
> > It does NOT run correctly with 3 processes
> >
> > $ petscmpiexec -n 3  ./ex5
> > [1]PETSC ERROR: --------------------- Error Message
> --------------------------------------------------------------
> > [1]PETSC ERROR: Argument out of range
> > [1]PETSC ERROR: Local index 2 too large 1 (max) at 0
> > [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> > [1]PETSC ERROR: Petsc Development GIT revision: v3.6.2-1539-gf82f7f7
> GIT Date: 2015-11-09 20:26:06 -0600
> > [1]PETSC ERROR: ./ex5 on a arch-double named
> visitor098-088.wl.anl-external.org by barrysmith Thu Nov 12 14:18:56 2015
> > [1]PETSC ERROR: Configure options --download-hwloc --download-hypre
> --download-mpich --with-afterimage PETSC_ARCH=arch-double
> > [1]PETSC ERROR: #1 ISLocalToGlobalMappingApply() line 423 in
> /Users/barrysmith/Src/PETSc/src/vec/is/utils/isltog.c
> > [1]PETSC ERROR: #2 MatSetValuesLocal() line 2020 in
> /Users/barrysmith/Src/PETSc/src/mat/interface/matrix.c
> > [1]PETSC ERROR: --------------------- Error Message
> --------------------------------------------------------------
> > [1]PETSC ERROR: Argument out of range
> > [1]PETSC ERROR: Local index 2 too large 1 (max) at 0
> > [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> > [1]PETSC ERROR: Petsc Development GIT revision: v3.6.2-1539-gf82f7f7
> GIT Date: 2015-11-09 20:26:06 -0600
> > [1]PETSC ERROR: ./ex5 on a arch-double named
> visitor098-088.wl.anl-external.org by barrysmith Thu Nov 12 14:18:56 2015
> > [1]PETSC ERROR: Configure options --download-hwloc --download-hypre
> --download-mpich --with-afterimage PETSC_ARCH=arch-double
> > [1]PETSC ERROR: #3 ISLocalToGlobalMappingApply() line 423 in
> /Users/barrysmith/Src/PETSc/src/vec/is/utils/isltog.c
> > [1]PETSC ERROR: #4 VecSetValuesLocal() line 1058 in
> /Users/barrysmith/Src/PETSc/src/vec/vec/interface/rvector.c
> > Vec Object: 3 MPI processes
> >  type: mpi
> > Vec Object:Vec_0x84000004_0 3 MPI processes
> >  type: mpi
> > Process [0]
> > 0.
> > 0.
> > 0.
> > 0.
> > 0.
> > 0.
> > 1.
> > 2.
> > Process [1]
> > 2.
> > 0.
> > 0.
> > 1.
> > 2.
> > 2.
> > 0.
> > 0.
> > Process [2]
> > 0.
> > 0.
> > 0.
> > 0.
> > Vec Object: 3 MPI processes
> >  type: mpi
> > Vec Object:Vec_0x84000004_1 3 MPI processes
> >  type: mpi
> > Process [0]
> > 1.
> > 2.
> > Process [1]
> > 0.
> > 1.
> > Process [2]
> > 4.
> > 0.
> > Mat Object: 3 MPI processes
> >  type: mpiaij
> > row 0:
> > row 1:
> > row 2:
> > row 3: (0, 1.)
> > row 4:
> > row 5: (1, 1.)
> > row 6:
> > row 7:
> > row 8:
> > row 9:
> > row 10: (2, 2.)
> > row 11: (3, 2.)
> > row 12: (3, 2.)
> > row 13:
> > row 14:
> > row 15:
> > row 16:
> > row 17:
> > row 18:
> > row 19:
> >
> > The reason is that in this case the DMDAs are decomposed into three
> > strips: for x the strips are xi = 0,1, then xi = 2,3, then xi = 4; for f
> > the strips are fi = 0, fi = 1, fi = 2. So there is no way to get a
> > consistent local numbering for both x and f at the same time with 3
> > strips. Like the DMDA interpolation routines, your code can therefore only
> > work for certain decompositions. Forget what I said about lx and ly; I
> > don't think it is relevant for what you are trying to do.
> >
> >  Barry
> >
> >
> >> On Nov 12, 2015, at 12:23 PM, Gianluca Meneghello <gianmail at gmail.com>
> wrote:
> >>
> >> Hi Barry,
> >>
> >> sorry, but I still cannot make it work. I guess what I need is something
> >> similar to MatRestrict/MatInterpolate (and B is something similar to what
> >> is created by DMCreateInterpolation, except for the fact that the nonzero
> >> entries are distributed differently).
> >>
> >> Am I mistaken? Is there any example I could start from?
> >>
> >> Thanks again,
> >>
> >> Gianluca
> >>
> >> On Wed, Nov 11, 2015 at 10:19 PM, Barry Smith <bsmith at mcs.anl.gov>
> wrote:
> >> DMDAGetOwnershipRanges
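> >>
> >> (For reference, an untested sketch of one way this could look; bs, Mx,
> >> Nx, dax and daf are the names from the code further down:)
> >>
> >>    PetscInt        m, n;
> >>    const PetscInt *lx, *ly;
> >>    PetscInt       *lxf, *lyf;
> >>
> >>    /* process grid and ownership ranges of the outer (x) DMDA */
> >>    DMDAGetInfo(dax, NULL, NULL, NULL, NULL, &m, &n, NULL, NULL,
> >>                NULL, NULL, NULL, NULL, NULL);
> >>    DMDAGetOwnershipRanges(dax, &lx, &ly, NULL);
> >>
> >>    /* copy the ranges and shrink the end strips by the boundary width bs */
> >>    PetscMalloc1(m, &lxf);  PetscMemcpy(lxf, lx, m*sizeof(PetscInt));
> >>    PetscMalloc1(n, &lyf);  PetscMemcpy(lyf, ly, n*sizeof(PetscInt));
> >>    lxf[0] -= bs;  lxf[m-1] -= bs;
> >>    lyf[0] -= bs;  lyf[n-1] -= bs;
> >>
> >>    DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
> >>                 DMDA_STENCIL_BOX, Mx-2*bs, Nx-2*bs, m, n, 1, 0,
> >>                 lxf, lyf, &daf);
> >>    PetscFree(lxf);  PetscFree(lyf);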
> >>
> >>> On Nov 11, 2015, at 10:47 PM, Gianluca Meneghello <gianmail at gmail.com>
> wrote:
> >>>
> >>> Hi,
> >>>
> >>> thanks for the very quick reply.
> >>>
> >>> One more question: is there a way to get the lx and ly from the first
> >>> DM and use them (modified) for the second DM? DMDAGetInfo does not seem
> >>> to provide this information.
> >>>
> >>> Thanks again for your help
> >>>
> >>> Gianluca
> >>>
> >>> On Wed, Nov 11, 2015 at 8:12 PM, Barry Smith <bsmith at mcs.anl.gov>
> wrote:
> >>>
> >>>  When you create the two DMs you must set the lx, ly arguments (the
> >>> ones you set to 0) in your code carefully to ensure that the vectors for
> >>> the two DMs you create have compatible layouts for the matrix-vector
> >>> product.
> >>>
> >>>   You can run a very small problem with 2 processors and print out
> >>> the vectors to see the layout, to make sure you get it correct.
> >>>
> >>>   The two DMs don't have any magical way of knowing that you created
> >>> another DMDA and want it to be compatible automatically.
> >>>
> >>>  Barry
> >>>
> >>>  DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_GHOSTED, DM_BOUNDARY_GHOSTED,
> >>>               DMDA_STENCIL_BOX, Mx, Nx, PETSC_DECIDE, PETSC_DECIDE,
> >>>               1, 0, 0, 0, &dax);
> >>>  DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
> >>>               DMDA_STENCIL_BOX, Mx-2*bs, Nx-2*bs, PETSC_DECIDE, PETSC_DECIDE,
> >>>               1, 0, 0, 0, &daf);
> >>>
> >>>> On Nov 11, 2015, at 10:05 PM, Gianluca Meneghello <gianmail at gmail.com>
> wrote:
> >>>>
> >>>> Hi,
> >>>>
> >>>> I am trying to do something apparently really simple but with no
> success.
> >>>>
> >>>> I need to perform a matrix-vector multiplication x = B f, where the
> >>>> length of x is bigger than the length of f (or vice versa). Thus, B
> >>>> cannot be created using DMCreateMatrix.
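> >>>>
> >>>> What I have in mind is something like the following sketch (placeholder
> >>>> names, not the attached test.cpp; dax and daf stand for the two DMs),
> >>>> with B sized by hand so its row layout matches x and its column layout
> >>>> matches f:
> >>>>
> >>>>    Vec      x, f;
> >>>>    Mat      B;
> >>>>    PetscInt nx, nf;
> >>>>
> >>>>    DMCreateGlobalVector(dax, &x);
> >>>>    DMCreateGlobalVector(daf, &f);
> >>>>    VecGetLocalSize(x, &nx);            /* local rows of B    */
> >>>>    VecGetLocalSize(f, &nf);            /* local columns of B */
> >>>>
> >>>>    MatCreate(PETSC_COMM_WORLD, &B);
> >>>>    MatSetSizes(B, nx, nf, PETSC_DETERMINE, PETSC_DETERMINE);
> >>>>    MatSetType(B, MATAIJ);
> >>>>    MatSetUp(B);
> >>>>    /* ... fill and assemble B, then apply it: x = B f */
> >>>>    MatMult(B, f, x);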
> >>>>
> >>>> Both x and f are obtained from different DMs, the smaller covering
> only a subdomain of the larger. The application is to apply a control f to
> a system, e.g. \dot{x} = A x + B f.
> >>>>
> >>>> The problem is, when running on more than one core, the vector x is
> not organized as I would expect (everything works fine on a single core).
> >>>>
> >>>> I attach a short example where B is intended to map f to the interior
> of x.
> >>>>
> >>>> mpirun -n 1 ./test -draw_pause -1     works fine while
> >>>> mpirun -n 2 ./test -draw_pause -1     shows the problem
> >>>>
> >>>> I have not found any example with non-square matrices in the src
> >>>> folder; any help is very welcome.
> >>>>
> >>>> Thanks for your time,
> >>>>
> >>>> Gianluca
> >>>> <test.cpp>
> >>>
> >>>
> >>
> >>
> >
>
>

