[petsc-users] I am wondering if there is a way to implement SPMM

Hong hzhang at mcs.anl.gov
Wed Aug 5 10:28:34 CDT 2015


Cong,
For the first loop:

do stepIdx= 2, step_k
    blockShift = KArrayOffset + (stepIdx-1) * (local_RRow * local_RCol)
    call MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, &
                        nDim, bsize, KArray(blockShift+1), Km(stepIdx), ierr)
    ...
end do

Do you use Km(stepIdx) here?
If not, replace MatCreateDense() with
MatMatMult(A,Km(stepIdx-1),MAT_INITIAL_MATRIX,...).
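
For example, a sketch of that loop (with MAT_INITIAL_MATRIX, PETSc
allocates each Km(stepIdx) itself, so it no longer aliases the storage
of K):

  do stepIdx = 2, step_k
    ! PETSc creates Km(stepIdx) with the structure of A*Km(stepIdx-1);
    ! do not create it beforehand
    call MatMatMult(A, Km(stepIdx-1), MAT_INITIAL_MATRIX, &
                    PETSC_DEFAULT_INTEGER, Km(stepIdx), ierr)
  end do
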
Is matrix A dense or sparse?

Hong


On Wed, Aug 5, 2015 at 9:43 AM, Cong Li <solvercorleone at gmail.com> wrote:

> Hong,
>
> Thanks for your answer.
> However, in my problem I have a pre-allocated matrix K, and its columns
> are associated with Km(1), ..., Km(step_k) respectively. What I want to do
> is to update Km(2) with the result of A*Km(1), then update Km(3) with the
> product of A and the updated Km(2), and so on.
>
> So, I think I need to use MAT_REUSE_MATRIX from the beginning: even the
> first time I call
> MatMatMult(A, Km(stepIdx-1), MAT_REUSE_MATRIX, PETSC_DEFAULT_INTEGER, Km(stepIdx), ierr),
> Km(stepIdx) has actually already been allocated (in K).
>
> Do you think it is possible to do this, and could you please suggest
> some possible ways?
>
> Thanks
>
> Cong Li
>
> On Wed, Aug 5, 2015 at 11:23 PM, Hong <hzhang at mcs.anl.gov> wrote:
>
>> Cong:
>> You cannot use "MAT_REUSE_MATRIX" on arbitrary matrix product.
>> The correct process is
>>
>> call MatMatMult(A,Km(stepIdx-1), MAT_INITIAL_MATRIX,PETSC_
>> DEFAULT_INTEGER,C, ierr)
>> call MatMatMult(A,Km(stepIdx-1), MAT_REUSE_MATRIX,PETSC_
>> DEFAULT_INTEGER,C, ierr)
>> i.e., C has data structure of A*Km(stepIdx-1) and is created in the
>> first call. C can be reused in the 2nd call when A or Km(stepIdx-1)
>> changed values, but not the structures.
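>>
>> For example, if the same product is recomputed repeatedly (a sketch;
>> B and nIter are placeholder names, and only the very first call uses
>> MAT_INITIAL_MATRIX):
>>
>> call MatMatMult(A, B, MAT_INITIAL_MATRIX, PETSC_DEFAULT_INTEGER, C, ierr)
>> do iter = 1, nIter
>>   ! ... A and/or B change values here; the structures stay fixed ...
>>   call MatMatMult(A, B, MAT_REUSE_MATRIX, PETSC_DEFAULT_INTEGER, C, ierr)
>> end do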
>>
>> In your case, Km(stepIdx) = A*Km(stepIdx-1), so you should call
>>
>> call MatMatMult(A, Km(stepIdx-1), MAT_INITIAL_MATRIX, &
>>                 PETSC_DEFAULT_INTEGER, Km(stepIdx), ierr)
>>
>> directly.
>>
>> Hong
>>
>> On Wed, Aug 5, 2015 at 4:42 AM, Cong Li <solvercorleone at gmail.com> wrote:
>>
>>> Hi
>>>
>>> I tried the method you suggested. However, I got the error message.
>>> My code and message are below.
>>>
>>> K is the big matrix containing column matrices.
>>>
>>> code:
>>>
>>> call MatGetArray(K,KArray,KArrayOffset,ierr)
>>>
>>> call MatGetLocalSize(R,local_RRow,local_RCol,ierr)
>>>
>>> call MatGetArray(R,RArray,RArrayOffset,ierr)
>>>
>>> call MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, &
>>>                     nDim, bsize, KArray(KArrayOffset+1), Km(1), ierr)
>>>
>>>   localRsize = local_RRow * local_RCol
>>>   do genIdx= 1, localRsize
>>>     KArray(KArrayOffset + genIdx) = RArray(RArrayOffset + genIdx)
>>>   end do
>>>
>>>   call MatRestoreArray(R,RArray,RArrayOffset,ierr)
>>>
>>>   call MatAssemblyBegin(Km(1), MAT_FINAL_ASSEMBLY, ierr)
>>>   call MatAssemblyEnd  (Km(1), MAT_FINAL_ASSEMBLY, ierr)
>>>
>>>   do stepIdx= 2, step_k
>>>
>>>     blockShift = KArrayOffset + (stepIdx-1) * (local_RRow * local_RCol)
>>>
>>>     call MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, &
>>>                         nDim, bsize, KArray(blockShift+1), Km(stepIdx), ierr)
>>>     call MatAssemblyBegin(Km(stepIdx), MAT_FINAL_ASSEMBLY, ierr)
>>>     call MatAssemblyEnd  (Km(stepIdx), MAT_FINAL_ASSEMBLY, ierr)
>>>   end do
>>>
>>>   call MatRestoreArray(K,KArray,KArrayOffset,ierr)
>>>
>>>   do stepIdx= 2, step_k
>>>     call MatMatMult(A, Km(stepIdx-1), MAT_REUSE_MATRIX, &
>>>                     PETSC_DEFAULT_INTEGER, Km(stepIdx), ierr)
>>>   end do
>>>
>>>
>>> And I got the error message as below:
>>>
>>>
>>> [0]PETSC ERROR:
>>> ------------------------------------------------------------------------
>>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
>>> probably memory access out of range
>>> [0]PETSC ERROR: Try option -start_in_debugger or
>>> -on_error_attach_debugger
>>> [0]PETSC ERROR: or see
>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
>>> [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac
>>> OS X to find memory corruption errors
>>> [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link,
>>> and run
>>> [0]PETSC ERROR: to get more information on the crash.
>>> [0]PETSC ERROR: --------------------- Error Message
>>> ------------------------------------
>>> [0]PETSC ERROR: Signal received!
>>> [0]PETSC ERROR:
>>> ------------------------------------------------------------------------
>>> [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 7, Sat May 11
>>> 22:15:24 CDT 2013
>>> [0]PETSC ERROR: See docs/changes/index.html for recent updates.
>>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
>>> [0]PETSC ERROR: See docs/index.html for manual pages.
>>> [1]PETSC ERROR:
>>> ------------------------------------------------------------------------
>>> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
>>> probably memory access out of range
>>> [0]PETSC ERROR:
>>> ------------------------------------------------------------------------
>>> [0]PETSC ERROR: ./kmath.bcbcg on a arch-fuji named p01-024 by a03293 Wed
>>> Aug  5 18:24:40 2015
>>> [0]PETSC ERROR: Libraries linked from
>>> /volume1/home/ra000005/a03293/kmathlibbuild/petsc-3.3-p7/arch-fujitsu-sparc64fx-opt/lib
>>> [0]PETSC ERROR: Configure run at Tue Jul 28 19:23:51 2015
>>> [0]PETSC ERROR: Configure options --known-level1-dcache-size=32768
>>> --known-level1-dcache-linesize=32 --known-level1-dcache-assoc=0
>>> --known-memcmp-ok=1 --known-sizeof-char=1 --known-sizeof-void-p=8
>>> --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8
>>> --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-double=8
>>> --known-sizeof-size_t=8 --known-bits-per-byte=8 --known-sizeof-MPI_Comm=8
>>> --known-sizeof-MPI_Fint=4 --known-mpi-long-double=1
>>> --known-mpi-c-double-complex=1 --with-cc=mpifccpx --CFLAGS="-mt -Xg"
>>> --COPTFLAGS=-Kfast,openmp --with-cxx=mpiFCCpx --CXXFLAGS=-mt
>>> --CXXOPTFLAGS=-Kfast,openmp --with-fc=mpifrtpx --FFLAGS=-Kthreadsafe
>>> --FOPTFLAGS=-Kfast,openmp --with-blas-lapack-lib="-SCALAPACK -SSL2"
>>> --with-x=0 --with-c++-support --with-batch=1 --with-info=1
>>> --with-debugging=0 --known-mpi-shared-libraries=0 --with-valgrind=0
>>> [0]PETSC ERROR:
>>> ------------------------------------------------------------------------
>>> [0]PETSC ERROR: User provided function() line 0 in unknown directory
>>> unknown file
>>>
>>> --------------------------------------------------------------------------
>>> [mpi::mpi-api::mpi-abort]
>>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
>>> with errorcode 59.
>>>
>>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>>> You may or may not see output from other processes, depending on
>>> exactly when Open MPI kills them.
>>>
>>> --------------------------------------------------------------------------
>>> [p01-024:26516]
>>> /opt/FJSVtclang/GM-1.2.0-18/lib64/libmpi.so.0(orte_errmgr_base_error_abort+0x84)
>>> [0xffffffff0091f684]
>>> [p01-024:26516]
>>> /opt/FJSVtclang/GM-1.2.0-18/lib64/libmpi.so.0(ompi_mpi_abort+0x51c)
>>> [0xffffffff006c389c]
>>> [p01-024:26516]
>>> /opt/FJSVtclang/GM-1.2.0-18/lib64/libmpi.so.0(MPI_Abort+0x6c)
>>> [0xffffffff006db3ac]
>>> [p01-024:26516]
>>> /opt/FJSVtclang/GM-1.2.0-18/lib64/libtrtmet_c.so.1(MPI_Abort+0x2c)
>>> [0xffffffff00281bf0]
>>> [p01-024:26516] ./kmath.bcbcg [0x1bf620]
>>> [p01-024:26516] ./kmath.bcbcg [0x1bf20c]
>>> [p01-024:26516] /lib64/libc.so.6(killpg+0x48) [0xffffffff02d52600]
>>> [p01-024:26516] [(nil)]
>>> [p01-024:26516] ./kmath.bcbcg [0x1a2054]
>>> [p01-024:26516] ./kmath.bcbcg [0x1064f8]
>>> [p01-024:26516] ./kmath.bcbcg(MAIN__+0x9dc) [0x105d1c]
>>> [p01-024:26516] ./kmath.bcbcg(main+0xec) [0x8a329c]
>>> [p01-024:26516] /lib64/libc.so.6(__libc_start_main+0x194)
>>> [0xffffffff02d3b81c]
>>> [p01-024:26516] ./kmath.bcbcg [0x1051ec]
>>> [0]PETSC ERROR:
>>> ------------------------------------------------------------------------
>>> [0]PETSC ERROR: Caught signal number 15 Terminate: Some process (or the
>>> batch system) has told this process to end
>>> [0]PETSC ERROR: Try option -start_in_debugger or
>>> -on_error_attach_debugger
>>> [0]PETSC ERROR: Signal received!
>>> [... the remainder of this report repeats the same FAQ pointers,
>>> version stamp, and configure options as the report above ...]
>>> [0]PETSC ERROR: User provided function() line 0 in unknown directory
>>> unknown file
>>> [ERR.] PLE 0019 plexec One of MPI processes was
>>> aborted.(rank=0)(nid=0x020a0028)(CODE=1938,793745140674134016,15104)
>>>
>>> However, if I change from
>>>
>>> call MatMatMult(A, Km(stepIdx-1), MAT_REUSE_MATRIX, &
>>>                 PETSC_DEFAULT_INTEGER, Km(stepIdx), ierr)
>>>
>>> to
>>>
>>> call MatMatMult(A, Km(stepIdx-1), MAT_INITIAL_MATRIX, &
>>>                 PETSC_DEFAULT_INTEGER, Km(stepIdx), ierr)
>>>
>>> everything is fine.
>>>
>>> Could you please suggest a way to solve this?
>>>
>>> Thanks
>>>
>>> Cong Li
>>>
>>> On Wed, Aug 5, 2015 at 10:53 AM, Cong Li <solvercorleone at gmail.com>
>>> wrote:
>>>
>>>> Thank you very much for your help and suggestions.
>>>> With your help, I can finally continue my project.
>>>>
>>>> Regards
>>>>
>>>> Cong Li
>>>>
>>>>
>>>>
>>>> On Wed, Aug 5, 2015 at 3:09 AM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>>>>
>>>>>
>>>>>   From the manual page:  Unless scall is MAT_REUSE_MATRIX C will be
>>>>> created.
>>>>>
>>>>>   Since you want to use the C that is passed in you should use
>>>>> MAT_REUSE_MATRIX.
>>>>>
>>>>>   Note that since your B and C matrices are dense, the sparsity
>>>>> pattern of C is not relevant.
>>>>>
>>>>>   Barry
>>>>>
>>>>> > On Aug 4, 2015, at 11:59 AM, Cong Li <solvercorleone at gmail.com>
>>>>> wrote:
>>>>> >
>>>>> > Thanks very much. This answer is very helpful.
>>>>> > And I have a follow-up question.
>>>>> > If I create B1, B2, ... the way you suggested and then use
>>>>> > MatMatMult to do SPMM:
>>>>> > PetscErrorCode MatMatMult(Mat A,Mat B,MatReuse scall,PetscReal fill,Mat *C)
>>>>> > should I use MAT_REUSE_MATRIX for the MatReuse argument?
>>>>> >
>>>>> > Thanks
>>>>> >
>>>>> > Cong Li
>>>>> >
>>>>> > On Wed, Aug 5, 2015 at 1:27 AM, Barry Smith <bsmith at mcs.anl.gov>
>>>>> wrote:
>>>>> >
>>>>> > > On Aug 4, 2015, at 4:09 AM, Cong Li <solvercorleone at gmail.com>
>>>>> wrote:
>>>>> > >
>>>>> > > I am sorry, I should have explained it more clearly.
>>>>> > > Actually I want to compute a recurrence.
>>>>> > >
>>>>> > > I want to first compute A*X1=B1, then calculate A*B1=B2,
>>>>> > > A*B2=B3, and so on.
>>>>> > > Finally I want to combine all these results into a bigger matrix
>>>>> > > C=[B1,B2,...]
>>>>> >
>>>>> >    First create C with MatCreateDense(...,&C). Then call
>>>>> > MatDenseGetArray(C,&array); then create B1 with
>>>>> > MatCreateDense(...,array,&B1); then create B2 with
>>>>> > MatCreateDense(...,array+shift,&B2), etc., where shift equals the
>>>>> > number of __local__ rows in B1 times the number of columns in B1;
>>>>> > then create B3 with a larger shift, etc.
>>>>> >
>>>>> >    Note that you are "sharing" the array space of C with B1, B2, B3,
>>>>> > ...; each Bi holds its own set of columns of the C matrix.
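>>>>> >
>>>>> >    A Fortran sketch of this construction (using the same PETSc 3.3
>>>>> > MatGetArray interface as the code earlier in this thread; nDim, bsize,
>>>>> > step_k as there; the other variable names are illustrative):
>>>>> >
>>>>> >   call MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, &
>>>>> >                       nDim, step_k*bsize, PETSC_NULL_SCALAR, C, ierr)
>>>>> >   ! the local row count of C determines the shift between blocks
>>>>> >   call MatGetLocalSize(C, localRows, localCols, ierr)
>>>>> >   call MatGetArray(C, Carray, Coffset, ierr)
>>>>> >   do i = 1, step_k
>>>>> >     ! B(i) aliases the i-th block of bsize columns of C
>>>>> >     shift = (i-1) * localRows * bsize
>>>>> >     call MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, &
>>>>> >                         nDim, bsize, Carray(Coffset+shift+1), B(i), ierr)
>>>>> >     call MatAssemblyBegin(B(i), MAT_FINAL_ASSEMBLY, ierr)
>>>>> >     call MatAssemblyEnd(B(i), MAT_FINAL_ASSEMBLY, ierr)
>>>>> >   end do
>>>>> >   call MatRestoreArray(C, Carray, Coffset, ierr)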
>>>>> >
>>>>> >   Barry
>>>>> >
>>>>> >
>>>>> >
>>>>> > >
>>>>> > > Is there any way to do this efficiently?
>>>>> > >
>>>>> > >
>>>>> > >
>>>>> > > On Tue, Aug 4, 2015 at 5:45 PM, Patrick Sanan
>>>>> > > <patrick.sanan at gmail.com> wrote:
>>>>> > > On Tue, Aug 04, 2015 at 03:42:14PM +0900, Cong Li wrote:
>>>>> > > > Thanks for your reply.
>>>>> > > >
>>>>> > > > I have another question.
>>>>> > > > I want to do SPMM several times and combine the result matrices
>>>>> > > > into one bigger matrix.
>>>>> > > > For example, I first calculate A*X1=B1, A*X2=B2, ...
>>>>> > > > then I want to combine B1, B2, ... to get a C, where C=[B1,B2,...]
>>>>> > > >
>>>>> > > > Could you please suggest a way to do this.
>>>>> > > This is just linear algebra, nothing to do with PETSc specifically.
>>>>> > > A * [X1, X2, ... ] = [AX1, AX2, ...]
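>>>>> > >
>>>>> > > For example (a sketch): store X1, X2, ... as the columns of a
>>>>> > > single dense matrix X; one product then yields C = [B1,B2,...]
>>>>> > > directly:
>>>>> > >
>>>>> > > call MatMatMult(A, X, MAT_INITIAL_MATRIX, PETSC_DEFAULT_INTEGER, C, ierr)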
>>>>> > > >
>>>>> > > > Thanks
>>>>> > > >
>>>>> > > > Cong Li
>>>>> > > >
>>>>> > > > On Tue, Aug 4, 2015 at 3:27 PM, Jed Brown <jed at jedbrown.org>
>>>>> wrote:
>>>>> > > >
>>>>> > > > > Cong Li <solvercorleone at gmail.com> writes:
>>>>> > > > >
>>>>> > > > > > Hello,
>>>>> > > > > >
>>>>> > > > > > I am a PhD student using PETSc for my research.
>>>>> > > > > > I am wondering if there is a way to implement SPMM (sparse
>>>>> > > > > > matrix-matrix multiplication) using PETSc.
>>>>> > > > >
>>>>> > > > >
>>>>> > > > >
>>>>> > > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatMatMult.html
>>>>> > > > >
>>>>> > >
>>>>> >
>>>>> >
>>>>>
>>>>>
>>>>
>>>
>>
>