[petsc-users] I am wondering if there is a way to implement SPMM

Hong hzhang at mcs.anl.gov
Thu Aug 6 10:09:02 CDT 2015


Barry:
>
>   Hong, we want to reuse the space in the Km(stepIdx-1) from which it was
> created, which means that MAT_INITIAL_MATRIX cannot be used. Since the
> result is always dense, it is not the difficult case where a symbolic
> computation needs to be done initially, so, at least in theory, he should
> not have to use MAT_INITIAL_MATRIX the first time through.
>

PETSc's implementation of MatMatMult() assumes the user calls
MatMatMultSymbolic() first. There we select the MatMatMultNumeric()
routine that will follow and, most importantly, create the specific data
structures that MatMatMultNumeric() reuses. In
MatMatMultSymbolic_MPIAIJ_MPIDense(), we create a 'container'.

Without first going through the MAT_INITIAL_MATRIX path, this information
is missing and the code simply crashes.
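
For illustration, a minimal sketch of the intended call sequence (A, B, C
are placeholder names, and PETSC_DEFAULT_DOUBLE_PRECISION stands in for
the fill argument; substitute whatever fill value suits your problem):

  Mat            :: A, B, C
  PetscErrorCode :: ierr

  ! First call: MAT_INITIAL_MATRIX runs the symbolic phase, creates C,
  ! and attaches the data structures MatMatMultNumeric() will reuse.
  call MatMatMult(A, B, MAT_INITIAL_MATRIX, &
                  PETSC_DEFAULT_DOUBLE_PRECISION, C, ierr)

  ! Later calls that pass the SAME C reuse those structures;
  ! MAT_REUSE_MATRIX is only valid for a C produced this way.
  call MatMatMult(A, B, MAT_REUSE_MATRIX, &
                  PETSC_DEFAULT_DOUBLE_PRECISION, C, ierr)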

Hong


> >
> > Hong
> >
> >
> > On Wed, Aug 5, 2015 at 8:56 PM, Cong Li <solvercorleone at gmail.com> wrote:
> > The entire source code files are attached.
> >
> > Also, I copy and paste the code here in this email.
> >
> > thanks
> >
> > program test
> >
> >   implicit none
> >
> > #include <finclude/petscsys.h>
> > #include <finclude/petscvec.h>
> > #include <finclude/petscmat.h>
> > #include <finclude/petscviewer.h>
> >
> >
> >   PetscViewer    :: view
> >   ! sparse matrix
> >   Mat            :: A
> >   ! distributed dense matrix of size n x m
> >   Mat            :: B, X, R, QDlt, AQDlt
> >   ! distributed dense matrix of size n x (m x k)
> >   Mat            :: Q, K, AQ_p, AQ
> >   ! local dense matrix (every process keeps an identical copy), (m x k) x (m x k)
> >   Mat            :: AConjPara, QtAQ, QtAQ_p, Dlt
> >
> >   PetscInt       :: nDim, mDim, rhsNDim, rhsMDim, ierr, maxIter, iter, step_k, bsize
> >   PetscInt       :: ownRowS,ownRowE
> >   PetscScalar, allocatable :: XInit(:,:)
> >   PetscInt       :: XInitI, XInitJ
> >   PetscScalar    :: v=1.0
> >   PetscBool      :: flg
> >   PetscMPIInt    :: size, rank
> >
> >   character(128) ::  fin, rhsfin
> >
> >
> >   call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
> >   call MPI_Comm_size(PETSC_COMM_WORLD,size,ierr)
> >   call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)
> >
> >   ! read binary matrix file
> >   call PetscOptionsGetString(PETSC_NULL_CHARACTER,'-f',fin,flg,ierr)
> >   call PetscOptionsGetString(PETSC_NULL_CHARACTER,'-r',rhsfin,flg,ierr)
> >
> >   call PetscOptionsGetInt(PETSC_NULL_CHARACTER,'-i',maxIter,flg,ierr)
> >   call PetscOptionsGetInt(PETSC_NULL_CHARACTER,'-k',step_k,flg,ierr)
> >   call PetscOptionsGetInt(PETSC_NULL_CHARACTER,'-w',bsize,flg,ierr)
> >
> >
> >   call PetscViewerBinaryOpen(PETSC_COMM_WORLD,fin,FILE_MODE_READ,view,ierr)
> >   call MatCreate(PETSC_COMM_WORLD,A,ierr)
> >   call MatSetType(A,MATAIJ,ierr)
> >   call MatLoad(A,view,ierr)
> >   call PetscViewerDestroy(view,ierr)
> >   ! for the time being, assume mDim == nDim is true
> >   call MatGetSize(A, nDim, mDim, ierr)
> >
> >   if (rank == 0) then
> >     print*,'Mat Size = ', nDim, mDim
> >   end if
> >
> >   call MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY, ierr)
> >   call MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY, ierr)
> >   call MatGetOwnershipRange(A,ownRowS,ownRowE, ierr)
> >
> >   ! create right-hand-side matrix
> >   ! for the time being, choose row-wise decomposition
> >   ! for the time being, assume mod(nDim, size) == 0
> >   call MatCreateDense(PETSC_COMM_WORLD, (ownRowE - ownRowS), &
> >                       bsize, nDim, bsize,PETSC_NULL_SCALAR, B, ierr)
> >   call PetscViewerBinaryOpen(PETSC_COMM_WORLD,rhsfin,FILE_MODE_READ,view,ierr)
> >   call MatLoad(B,view,ierr)
> >   call PetscViewerDestroy(view,ierr)
> >   call MatGetSize(B, rhsMDim, rhsNDim, ierr)
> >   if (rank == 0) then
> >     print*,'MRHS Size actually are:', rhsMDim, rhsNDim
> >     print*,'MRHS Size should be:', nDim, bsize
> >   end if
> >   call MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY, ierr)
> >   call MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY, ierr)
> >
> >   ! initial value guess X
> >   allocate(XInit(nDim,bsize))
> >   do XInitI=1, nDim
> >     do XInitJ=1, bsize
> >       XInit(XInitI,XInitJ) = 1.0
> >     end do
> >   end do
> >
> >   call MatCreateDense(PETSC_COMM_WORLD, (ownRowE - ownRowS), &
> >                       bsize, nDim, bsize,XInit, X, ierr)
> >
> >   call MatAssemblyBegin(X, MAT_FINAL_ASSEMBLY, ierr)
> >   call MatAssemblyEnd  (X, MAT_FINAL_ASSEMBLY, ierr)
> >
> >
> >   !  B, X, R, QDlt, AQDlt
> >   call MatDuplicate(B, MAT_DO_NOT_COPY_VALUES, R, ierr)
> >   call MatAssemblyBegin(R, MAT_FINAL_ASSEMBLY, ierr)
> >   call MatAssemblyEnd  (R, MAT_FINAL_ASSEMBLY, ierr)
> >
> >   call MatDuplicate(B, MAT_DO_NOT_COPY_VALUES, QDlt, ierr)
> >   call MatAssemblyBegin(QDlt, MAT_FINAL_ASSEMBLY, ierr)
> >   call MatAssemblyEnd  (QDlt, MAT_FINAL_ASSEMBLY, ierr)
> >
> >   call MatDuplicate(B, MAT_DO_NOT_COPY_VALUES, AQDlt, ierr)
> >   call MatAssemblyBegin(AQDlt, MAT_FINAL_ASSEMBLY, ierr)
> >   call MatAssemblyEnd  (AQDlt, MAT_FINAL_ASSEMBLY, ierr)
> >
> > ! Q, K, AQ_p, AQ of size n x (m x k)
> >   call MatCreateDense(PETSC_COMM_WORLD, (ownRowE - ownRowS), &
> >                       (bsize*step_k), nDim, (bsize*step_k), PETSC_NULL_SCALAR, Q, ierr)
> >   call MatAssemblyBegin(Q, MAT_FINAL_ASSEMBLY, ierr)
> >   call MatAssemblyEnd(Q, MAT_FINAL_ASSEMBLY, ierr)
> >
> >   call MatDuplicate(Q, MAT_DO_NOT_COPY_VALUES, K, ierr)
> >   call MatAssemblyBegin(K, MAT_FINAL_ASSEMBLY, ierr)
> >   call MatAssemblyEnd(K, MAT_FINAL_ASSEMBLY, ierr)
> >
> >   call MatDuplicate(Q, MAT_DO_NOT_COPY_VALUES, AQ_p, ierr)
> >   call MatAssemblyBegin(AQ_p, MAT_FINAL_ASSEMBLY, ierr)
> >   call MatAssemblyEnd(AQ_p, MAT_FINAL_ASSEMBLY, ierr)
> >
> >   call MatDuplicate(Q, MAT_DO_NOT_COPY_VALUES, AQ, ierr)
> >   call MatAssemblyBegin(AQ, MAT_FINAL_ASSEMBLY, ierr)
> >   call MatAssemblyEnd(AQ, MAT_FINAL_ASSEMBLY, ierr)
> >
> > ! QtAQ, QtAQ_p, Dlt of size (m x k) x (m x k)
> >   call MatCreateSeqDense(PETSC_COMM_SELF,(bsize*step_k),(bsize*step_k),&
> >                          PETSC_NULL_SCALAR, QtAQ, ierr)
> >   call MatAssemblyBegin(QtAQ, MAT_FINAL_ASSEMBLY, ierr)
> >   call MatAssemblyEnd  (QtAQ, MAT_FINAL_ASSEMBLY, ierr)
> >
> >   call MatDuplicate(QtAQ, MAT_DO_NOT_COPY_VALUES, QtAQ_p    , ierr)
> >   call MatAssemblyBegin(QtAQ_p, MAT_FINAL_ASSEMBLY, ierr)
> >   call MatAssemblyEnd  (QtAQ_p, MAT_FINAL_ASSEMBLY, ierr)
> >
> >   call MatDuplicate(QtAQ, MAT_DO_NOT_COPY_VALUES, Dlt       , ierr)
> >   call MatAssemblyBegin(Dlt, MAT_FINAL_ASSEMBLY, ierr)
> >   call MatAssemblyEnd  (Dlt, MAT_FINAL_ASSEMBLY, ierr)
> >
> >   call MatDuplicate(QtAQ, MAT_DO_NOT_COPY_VALUES, AConjPara , ierr)
> >   call MatAssemblyBegin(AConjPara, MAT_FINAL_ASSEMBLY, ierr)
> >   call MatAssemblyEnd  (AConjPara, MAT_FINAL_ASSEMBLY, ierr)
> >
> > ! calculation for R
> >
> > ! call matrix powers kernel
> >   call mpk_monomial (K, A, R, step_k, rank,size)
> >
> > ! destroy matrices
> >   deallocate(XInit)
> >
> >   call MatDestroy(B, ierr)
> >   call MatDestroy(X, ierr)
> >   call MatDestroy(R, ierr)
> >   call MatDestroy(QDlt, ierr)
> >   call MatDestroy(AQDlt, ierr)
> >   call MatDestroy(Q, ierr)
> >   call MatDestroy(K, ierr)
> >   call MatDestroy(AQ_p, ierr)
> >   call MatDestroy(AQ, ierr)
> >   call MatDestroy(QtAQ, ierr)
> >   call MatDestroy(QtAQ_p, ierr)
> >   call MatDestroy(Dlt, ierr)
> >
> >
> >   call PetscFinalize(ierr)
> >
> >   stop
> >
> > end program test
> >
> >
> > subroutine mpk_monomial (K, A, R, step_k, rank, sizeMPI)
> >       implicit none
> >
> > #include <finclude/petscsys.h>
> > #include <finclude/petscvec.h>
> > #include <finclude/petscmat.h>
> > #include <finclude/petscviewer.h>
> >
> >       Mat            :: K, Km(step_k)
> >       Mat            :: A, R
> >       PetscMPIInt    :: sizeMPI, rank
> >       PetscInt       :: nDim, bsize, step_k, local_RRow, local_RCol, genIdx
> >       PetscInt       :: ierr
> >       PetscInt       :: stepIdx, blockShift, localRsize
> >   PetscScalar    :: KArray(1), RArray(1), PetscScalarSize
> >   PetscOffset    :: KArrayOffset, RArrayOffset
> >
> >       call MatGetSize(R, nDim, bsize, ierr)
> >       if (rank == 0) then
> >         print*,'Mat Size = ', nDim, bsize
> >       end if
> >
> >   call MatGetArray(K,KArray,KArrayOffset,ierr)
> >
> >   call MatGetLocalSize(R,local_RRow,local_RCol,ierr)
> > !   print *, "local_RRow,local_RCol", local_RRow,local_RCol
> >
> >   ! get array from R to add values to Km(1)
> >   call MatGetArray(R,RArray,RArrayOffset,ierr)
> >
> >   call MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, &
> >                       PETSC_DECIDE, nDim, bsize, KArray(KArrayOffset + 1), Km(1), ierr)
> >
> >
> > !   call PetscMemmove(KArray(KArrayOffset + 1), RArray(RArrayOffset + 1) &
> > !                 , local_RRow * local_RCol * STORAGE_SIZE(PetscScalarSize), ierr)
> >
> >   localRsize = local_RRow * local_RCol
> >   do genIdx= 1, localRsize
> >     KArray(KArrayOffset + genIdx) = RArray(RArrayOffset + genIdx)
> >   end do
> >
> >
> >   call MatRestoreArray(R,RArray,RArrayOffset,ierr)
> >
> >   call MatAssemblyBegin(Km(1), MAT_FINAL_ASSEMBLY, ierr)
> >   call MatAssemblyEnd  (Km(1), MAT_FINAL_ASSEMBLY, ierr)
> >
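> >   ! Note: each Km(stepIdx) below is a dense matrix whose local storage
> >   ! aliases the stepIdx-th block of columns inside K's array, so filling
> >   ! Km(stepIdx) writes directly into the corresponding block of K.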
> >   do stepIdx= 2, step_k
> >
> >     blockShift = KArrayOffset + (stepIdx-1) * (local_RRow * local_RCol)
> >
> >     call MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, &
> >                         PETSC_DECIDE, nDim, bsize, KArray(blockShift+1), Km(stepIdx), ierr)
> >     call MatAssemblyBegin(Km(stepIdx), MAT_FINAL_ASSEMBLY, ierr)
> >     call MatAssemblyEnd  (Km(stepIdx), MAT_FINAL_ASSEMBLY, ierr)
> >
> >   end do
> >
> >   call MatRestoreArray(K,KArray,KArrayOffset,ierr)
> >
> > !   do stepIdx= 2, step_k
> >   do stepIdx= 2,2
> >
> >     call MatMatMult(A, Km(stepIdx-1), MAT_REUSE_MATRIX, PETSC_DEFAULT_INTEGER, Km(stepIdx), ierr)
> > !     call MatMatMult(A, Km(stepIdx-1), MAT_INITIAL_MATRIX, PETSC_DEFAULT_INTEGER, Km(stepIdx), ierr)
> >   end do
> >
> > !   call MatView(K,PETSC_VIEWER_STDOUT_WORLD,ierr)
> >
> > end subroutine mpk_monomial
> >
> >
> >
> > Cong Li
> >
> > On Thu, Aug 6, 2015 at 3:30 AM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> >
> >    Send the entire code so that we can compile it and run it ourselves to see what is going wrong.
> >
> >   Barry
> >
> > > On Aug 5, 2015, at 4:42 AM, Cong Li <solvercorleone at gmail.com> wrote:
> > >
> > > Hi
> > >
> > > I tried the method you suggested; however, I got an error message.
> > > My code and message are below.
> > >
> > > K is the big matrix containing column matrices.
> > >
> > > code:
> > >
> > > call MatGetArray(K,KArray,KArrayOffset,ierr)
> > >
> > > call MatGetLocalSize(R,local_RRow,local_RCol,ierr)
> > >
> > > call MatGetArray(R,RArray,RArrayOffset,ierr)
> > >
> > > call MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, &
> > >                     PETSC_DECIDE, nDim, bsize, KArray(KArrayOffset + 1), Km(1), ierr)
> > >
> > >   localRsize = local_RRow * local_RCol
> > >   do genIdx= 1, localRsize
> > >     KArray(KArrayOffset + genIdx) = RArray(RArrayOffset + genIdx)
> > >   end do
> > >
> > >   call MatRestoreArray(R,RArray,RArrayOffset,ierr)
> > >
> > >   call MatAssemblyBegin(Km(1), MAT_FINAL_ASSEMBLY, ierr)
> > >   call MatAssemblyEnd  (Km(1), MAT_FINAL_ASSEMBLY, ierr)
> > >
> > >   do stepIdx= 2, step_k
> > >
> > >     blockShift = KArrayOffset + (stepIdx-1) * (local_RRow * local_RCol)
> > >
> > >     call MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, &
> > >                         PETSC_DECIDE, nDim, bsize, KArray(blockShift+1), Km(stepIdx), ierr)
> > >     call MatAssemblyBegin(Km(stepIdx), MAT_FINAL_ASSEMBLY, ierr)
> > >     call MatAssemblyEnd  (Km(stepIdx), MAT_FINAL_ASSEMBLY, ierr)
> > >   end do
> > >
> > >   call MatRestoreArray(K,KArray,KArrayOffset,ierr)
> > >
> > >   do stepIdx= 2, step_k
> > >
> > >     call MatMatMult(A, Km(stepIdx-1), MAT_REUSE_MATRIX, PETSC_DEFAULT_INTEGER, Km(stepIdx), ierr)
> > >   end do
> > >
> > >
> > > And I got the error message as below:
> > >
> > >
> > > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
> probably memory access out of range
> > > [0]PETSC ERROR: Try option -start_in_debugger or
> -on_error_attach_debugger
> > > [0]PETSC ERROR: or see
> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC
> ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find
> memory corruption errors
> > > [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link,
> and run
> > > [0]PETSC ERROR: to get more information on the crash.
> > > [0]PETSC ERROR: --------------------- Error Message
> ------------------------------------
> > > [0]PETSC ERROR: Signal received!
> > > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 7, Sat May 11
> 22:15:24 CDT 2013
> > > [0]PETSC ERROR: See docs/changes/index.html for recent updates.
> > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
> > > [0]PETSC ERROR: See docs/index.html for manual pages.
> > > [0]PETSC ERROR: --------------------[1]PETSC ERROR:
> ------------------------------------------------------------------------
> > > [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
> probably memory access out of range
> > > ----------------------------------------------------
> > > [0]PETSC ERROR: ./kmath.bcbcg on a arch-fuji named p01-024 by a03293
> Wed Aug  5 18:24:40 2015
> > > [0]PETSC ERROR: Libraries linked from
> /volume1/home/ra000005/a03293/kmathlibbuild/petsc-3.3-p7/arch-fujitsu-sparc64fx-opt/lib
> > > [0]PETSC ERROR: Configure run at Tue Jul 28 19:23:51 2015
> > > [0]PETSC ERROR: Configure options --known-level1-dcache-size=32768
> --known-level1-dcache-linesize=32 --known-level1-dcache-assoc=0
> --known-memcmp-ok=1 --known-sizeof-char=1 --known-sizeof-void-p=8
> --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8
> --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-double=8
> --known-sizeof-size_t=8 --known-bits-per-byte=8 --known-sizeof-MPI_Comm=8
> --known-sizeof-MPI_Fint=4 --known-mpi-long-double=1
> --known-mpi-c-double-complex=1 --with-cc=mpifccpx --CFLAGS="-mt -Xg"
> --COPTFLAGS=-Kfast,openmp --with-cxx=mpiFCCpx --CXXFLAGS=-mt
> --CXXOPTFLAGS=-Kfast,openmp --with-fc=mpifrtpx --FFLAGS=-Kthreadsafe
> --FOPTFLAGS=-Kfast,openmp --with-blas-lapack-lib="-SCALAPACK -SSL2"
> --with-x=0 --with-c++-support --with-batch=1 --with-info=1
> --with-debugging=0 --known-mpi-shared-libraries=0 --with-valgrind=0
> > > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > > [0]PETSC ERROR: User provided function() line 0 in unknown directory
> unknown file
> > >
> --------------------------------------------------------------------------
> > > [mpi::mpi-api::mpi-abort]
> > > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> > > with errorcode 59.
> > >
> > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> > > You may or may not see output from other processes, depending on
> > > exactly when Open MPI kills them.
> > >
> --------------------------------------------------------------------------
> > > [p01-024:26516]
> /opt/FJSVtclang/GM-1.2.0-18/lib64/libmpi.so.0(orte_errmgr_base_error_abort+0x84)
> [0xffffffff0091f684]
> > > [p01-024:26516]
> /opt/FJSVtclang/GM-1.2.0-18/lib64/libmpi.so.0(ompi_mpi_abort+0x51c)
> [0xffffffff006c389c]
> > > [p01-024:26516]
> /opt/FJSVtclang/GM-1.2.0-18/lib64/libmpi.so.0(MPI_Abort+0x6c)
> [0xffffffff006db3ac]
> > > [p01-024:26516]
> /opt/FJSVtclang/GM-1.2.0-18/lib64/libtrtmet_c.so.1(MPI_Abort+0x2c)
> [0xffffffff00281bf0]
> > > [p01-024:26516] ./kmath.bcbcg [0x1bf620]
> > > [p01-024:26516] ./kmath.bcbcg [0x1bf20c]
> > > [p01-024:26516] /lib64/libc.so.6(killpg+0x48) [0xffffffff02d52600]
> > > [p01-024:26516] [(nil)]
> > > [p01-024:26516] ./kmath.bcbcg [0x1a2054]
> > > [p01-024:26516] ./kmath.bcbcg [0x1064f8]
> > > [p01-024:26516] ./kmath.bcbcg(MAIN__+0x9dc) [0x105d1c]
> > > [p01-024:26516] ./kmath.bcbcg(main+0xec) [0x8a329c]
> > > [p01-024:26516] /lib64/libc.so.6(__libc_start_main+0x194)
> [0xffffffff02d3b81c]
> > > [p01-024:26516] ./kmath.bcbcg [0x1051ec]
> > > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > > [0]PETSC ERROR: Caught signal number 15 Terminate: Somet process (or
> the batch system) has told this process to end
> > > [0]PETSC ERROR: Try option -start_in_debugger or
> -on_error_attach_debugger
> > > [0]PETSC ERROR: or see
> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC
> ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find
> memory corruption errors
> > > [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link,
> and run
> > > [0]PETSC ERROR: to get more information on the crash.
> > > [0]PETSC ERROR: --------------------- Error Message
> ------------------------------------
> > > [0]PETSC ERROR: Signal received!
> > > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 7, Sat May 11
> 22:15:24 CDT 2013
> > > [0]PETSC ERROR: See docs/changes/index.html for recent updates.
> > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
> > > [0]PETSC ERROR: See docs/index.html for manual pages.
> > > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > > [0]PETSC ERROR: ./kmath.bcbcg on a arch-fuji named p01-024 by a03293
> Wed Aug  5 18:24:40 2015
> > > [0]PETSC ERROR: Libraries linked from
> /volume1/home/ra000005/a03293/kmathlibbuild/petsc-3.3-p7/arch-fujitsu-sparc64fx-opt/lib
> > > [0]PETSC ERROR: Configure run at Tue Jul 28 19:23:51 2015
> > > [0]PETSC ERROR: Configure options --known-level1-dcache-size=32768
> --known-level1-dcache-linesize=32 --known-level1-dcache-assoc=0
> --known-memcmp-ok=1 --known-sizeof-char=1 --known-sizeof-void-p=8
> --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8
> --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-double=8
> --known-sizeof-size_t=8 --known-bits-per-byte=8 --known-sizeof-MPI_Comm=8
> --known-sizeof-MPI_Fint=4 --known-mpi-long-double=1
> --known-mpi-c-double-complex=1 --with-cc=mpifccpx --CFLAGS="-mt -Xg"
> --COPTFLAGS=-Kfast,openmp --with-cxx=mpiFCCpx --CXXFLAGS=-mt
> --CXXOPTFLAGS=-Kfast,openmp --with-fc=mpifrtpx --FFLAGS=-Kthreadsafe
> --FOPTFLAGS=-Kfast,openmp --with-blas-lapack-lib="-SCALAPACK -SSL2"
> --with-x=0 --with-c++-support --with-batch=1 --with-info=1
> --with-debugging=0 --known-mpi-shared-libraries=0 --with-valgrind=0
> > > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > > [0]PETSC ERROR: User provided function() line 0 in unknown directory
> unknown file
> > > [ERR.] PLE 0019 plexec One of MPI processes was
> aborted.(rank=0)(nid=0x020a0028)(CODE=1938,793745140674134016,15104)
> > >
> > > However, if I change from
> > > call MatMatMult(A,Km(stepIdx-1),MAT_REUSE_MATRIX,PETSC_DEFAULT_INTEGER,Km(stepIdx),ierr)
> > > to
> > > call MatMatMult(A,Km(stepIdx-1),MAT_INITIAL_MATRIX,PETSC_DEFAULT_INTEGER,Km(stepIdx),ierr)
> > >
> > > everything is fine.
> > >
> > > Could you please suggest a way to solve this?
> > >
> > > Thanks
> > >
> > > Cong Li
> > >
> > > On Wed, Aug 5, 2015 at 10:53 AM, Cong Li <solvercorleone at gmail.com> wrote:
> > > Thank you very much for your help and suggestions.
> > > With your help, finally I could continue my project.
> > >
> > > Regards
> > >
> > > Cong Li
> > >
> > >
> > >
> > > On Wed, Aug 5, 2015 at 3:09 AM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> > >
> > >   From the manual page: Unless scall is MAT_REUSE_MATRIX, C will be
> > > created.
> > >
> > >   Since you want to use the C that is passed in, you should use
> > > MAT_REUSE_MATRIX.
> > >
> > >   Note that since your B and C matrices are dense, the issue of the
> > > sparsity pattern of C is not relevant.
> > >
> > >   Barry
> > >
> > > > On Aug 4, 2015, at 11:59 AM, Cong Li <solvercorleone at gmail.com> wrote:
> > > >
> > > > Thanks very much. This answer is very helpful.
> > > > And I have a following question.
> > > > If I create B1, B2, ... in the way you suggested and then use
> > > > MatMatMult to do SPMM,
> > > > PetscErrorCode  MatMatMult(Mat A,Mat B,MatReuse scall,PetscReal fill,Mat *C)
> > > > Should I use MAT_REUSE_MATRIX for the MatReuse argument?
> > > >
> > > > Thanks
> > > >
> > > > Cong Li
> > > >
> > > > On Wed, Aug 5, 2015 at 1:27 AM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> > > >
> > > > > On Aug 4, 2015, at 4:09 AM, Cong Li <solvercorleone at gmail.com> wrote:
> > > > >
> > > > > I am sorry that I should have explained it more clearly.
> > > > > Actually I want to compute a recurrence.
> > > > >
> > > > > I want to first compute A*X1=B1, and then calculate A*B1=B2,
> > > > > A*B2=B3, and so on.
> > > > > Finally I want to combine all these results into a bigger matrix
> > > > > C=[B1,B2,...].
> > > >
> > > >    First create C with MatCreateDense(...,&C). Then call
> > > > MatDenseGetArray(C,&array); then create B1 with
> > > > MatCreateDense(...,array,&B1); then create B2 with
> > > > MatCreateDense(...,array+shift,&B2), etc., where shift equals the
> > > > number of __local__ rows in B1 times the number of columns in B1;
> > > > then create B3 with a larger shift, etc.
> > > >
> > > >    Note that you are "sharing" the array space of C with B1, B2, B3,
> > > > ...; each Bi contains its columns of the C matrix.
> > > >
> > > >   Barry
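
For illustration, a minimal Fortran sketch of this layout, using the
PETSc 3.3-era MatGetArray/MatRestoreArray interface seen in the code
above (C, B1, B2, nLocal, bsize, and nDim are placeholder names; assembly
calls are omitted):

  Mat            :: C, B1, B2
  PetscScalar    :: array(1)
  PetscOffset    :: offset
  PetscInt       :: nLocal, bsize, nDim, shift
  PetscErrorCode :: ierr

  ! C is nDim x (2*bsize); B1 and B2 view its first and second block
  ! of columns through the same local storage.
  call MatCreateDense(PETSC_COMM_WORLD, nLocal, 2*bsize, nDim, 2*bsize, &
                      PETSC_NULL_SCALAR, C, ierr)
  call MatGetArray(C, array, offset, ierr)
  ! shift = number of local rows of B1 times its number of columns
  shift = nLocal * bsize
  call MatCreateDense(PETSC_COMM_WORLD, nLocal, bsize, nDim, bsize, &
                      array(offset+1), B1, ierr)
  call MatCreateDense(PETSC_COMM_WORLD, nLocal, bsize, nDim, bsize, &
                      array(offset+1+shift), B2, ierr)
  call MatRestoreArray(C, array, offset, ierr)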
> > > >
> > > >
> > > >
> > > > >
> > > > > Is there any way to do this efficiently?
> > > > >
> > > > >
> > > > >
> > > > > On Tue, Aug 4, 2015 at 5:45 PM, Patrick Sanan <patrick.sanan at gmail.com> wrote:
> > > > > On Tue, Aug 04, 2015 at 03:42:14PM +0900, Cong Li wrote:
> > > > > > Thanks for your reply.
> > > > > >
> > > > > > I have an other question.
> > > > > > I want to do SPMM several times and combine result matrices into one bigger matrix.
> > > > > > for example
> > > > > > I firstly calculate AX1=B1, AX2=B2 ...
> > > > > > then I want to combine B1, B2.. to get a C, where C=[B1,B2...]
> > > > > >
> > > > > > Could you please suggest a way to do this?
> > > > > This is just linear algebra, nothing to do with PETSc specifically.
> > > > > A * [X1, X2, ... ] = [AX1, AX2, ...]
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > Cong Li
> > > > > >
> > > > > > On Tue, Aug 4, 2015 at 3:27 PM, Jed Brown <jed at jedbrown.org> wrote:
> > > > > >
> > > > > > > Cong Li <solvercorleone at gmail.com> writes:
> > > > > > >
> > > > > > > > Hello,
> > > > > > > >
> > > > > > > > I am a PhD student using PETSc for my research.
> > > > > > > > I am wondering if there is a way to implement SPMM (Sparse
> > > > > > > > matrix-matrix multiplication) by using PETSc.
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatMatMult.html
> > > > > > >
> > > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>