[petsc-users] DMCreateMatrix from one DMDA to an another

Matthew Knepley knepley at gmail.com
Wed Dec 23 09:35:44 CST 2015


On Wed, Dec 23, 2015 at 12:34 AM, Timothée Nicolas <
timothee.nicolas at gmail.com> wrote:

> Hello,
>
> I found that MatSetUp, or a preallocation routine such as
> MatMPIAIJSetPreallocation, has to be called before MatSetValuesStencil,
> and if MatMPIAIJSetPreallocation is used, then MatSetType must
> accordingly be set to MATMPIAIJ. That solves the above problem. With
> this, and using my own routine to set the preallocation as suggested by
> Dave, I recover the same behavior for my matrices as when I create them
> with DMCreateMatrix, but in the *square* case only (and by the way the
> whole allocation business seems to be faster this way than with
> DMCreateMatrix).
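>
> In C terms, the sequence that now works for me in the square case is,
> schematically, the following sketch (xm, nnz, nnz_od, ltog, dims, starts
> and dof are set elsewhere in my code):
>
> Mat A;
> MatCreate(PETSC_COMM_WORLD,&A);
> MatSetSizes(A,xm,xm,PETSC_DETERMINE,PETSC_DETERMINE);
> MatSetType(A,MATMPIAIJ);
> MatMPIAIJSetPreallocation(A,0,nnz,0,nnz_od); /* my own counting routine */
> MatSetLocalToGlobalMapping(A,ltog,ltog);
> MatSetStencil(A,dim,dims,starts,dof);
> /* MatSetValuesStencil() then behaves as with DMCreateMatrix() */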
>
> However, I still have problems in the rectangular case. The parallel
> case does not work, and on one processor it works and gives the expected
> result only if I use ADD_VALUES in MatSetValuesStencil, even though the
> entries are supposed to be empty!
>
> In the multiprocessor case, I get an "argument out of range" error like
> this:
>
> [1]PETSC ERROR: --------------------- Error Message
> --------------------------------------------------------------
> [1]PETSC ERROR: Argument out of range
> [1]PETSC ERROR: New nonzero at (3833,342512) caused a malloc
> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to turn
> off this check
> [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> [1]PETSC ERROR: Petsc Release Version 3.6.0, Jun, 09, 2015
> [1]PETSC ERROR: ./miips on a arch-linux2-c-opt named helios91 by tnicolas
> Wed Dec 23 15:15:52 2015
> [1]PETSC ERROR: Configure options
> --prefix=/csc/softs/anl/petsc-3.6.0/intel-15.0.0.090/bullxmpi-1.2.8.2/real
> --with-debugging=0 --with-x=0 --with-cc=mpicc --with-fc=mpif90
> --with-cxx=mpicxx --with-fortran --known-mpi-shared-libraries=1
> --with-scalar-type=real --with-precision=double --CFLAGS="-g -O3 -mavx
> -mkl" --CXXFLAGS="-g -O3 -mavx -mkl" --FFLAGS="-g -O3 -mavx -mkl"
> [1]PETSC ERROR: #2265 MatSetValues_MPIAIJ() line 616 in
> /csc/releases/buildlog/anl/petsc-3.6.0/intel-15.0.0.090/bullxmpi-1.2.8.2/real/petsc-3.6.0/src/mat/impls/aij/mpi/mpiaij.c
> [1]PETSC ERROR: #2266 MatSetValues() line 1175 in
> /csc/releases/buildlog/anl/petsc-3.6.0/intel-15.0.0.090/bullxmpi-1.2.8.2/real/petsc-3.6.0/src/mat/interface/matrix.c
> [1]PETSC ERROR: #2267 MatSetValuesLocal() line 2023 in
> /csc/releases/buildlog/anl/petsc-3.6.0/intel-15.0.0.090/bullxmpi-1.2.8.2/real/petsc-3.6.0/src/mat/interface/matrix.c
> [1]PETSC ERROR: #2268 MatSetValuesStencil() line 1423 in
> /csc/releases/buildlog/anl/petsc-3.6.0/intel-15.0.0.090/bullxmpi-1.2.8.2/real/petsc-3.6.0/src/mat/interface/matrix.c
>
> This suggests my allocation is incorrect; however, using the -info
> option, I see that
>
> [1] MatAssemblyEnd_SeqAIJ(): Matrix size: 209088 X 348480; storage space:
> 20645544 unneeded,6170106 used
> [1] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
> [1] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 31
>
> This seems to contradict the above error, so I am thinking it is a
> different problem.
>

Make this problem simpler. That is the only way to tell what is going on.
Do a 4x4 grid, so you have maybe 16 rows and can print everything out.
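
For instance, assuming your code calls DMSetFromOptions() so that the grid
size can be changed from the command line, something like

  ./miips -da_grid_x 4 -da_grid_y 4 -mat_view

would run on a tiny grid and print the entire matrix for inspection.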

  Matt


> I am wondering whether the problem comes from the fact that ideally the
> matrix should be built from 3x5 blocks, but this does not seem to be
> allowed yet:
> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatMPIBAIJSetPreallocation.html
>
> "*bs* - size of block, the blocks are ALWAYS square. One can use
> MatSetBlockSizes
> <http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatSetBlockSizes.html#MatSetBlockSizes>()
> to set a different row and column blocksize but the row blocksize always
> defines the size of the blocks. The column blocksize sets the blocksize of
> the vectors obtained with MatCreateVecs
> <http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateVecs.html#MatCreateVecs>()
> "
>
> So I cannot call MatSetBlockSizes(A,3,5,ierr) and then
> MatMPIBAIJSetPreallocation, because it would still assume square blocks
> (of size 3) if I understand correctly. So I still don't see how to define
> a rectangular matrix which conveniently takes a vector from one DMDA and,
> upon applying the operation, turns it into a vector on another DMDA with
> the same layout (except for the number of DOFs).
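>
> For concreteness, the rectangular version I am attempting looks
> schematically like the sketch below (lnodes is the local number of DMDA
> nodes; ltog5 and ltog3 would come from the 5-dof and 3-dof DMDAs). It is
> this version that fails in parallel:
>
> MatCreate(PETSC_COMM_WORLD,&A);
> MatSetSizes(A,lnodes*5,lnodes*3,PETSC_DETERMINE,PETSC_DETERMINE);
> MatSetType(A,MATMPIAIJ);
> MatMPIAIJSetPreallocation(A,0,nnz,0,nnz_od);
> MatSetLocalToGlobalMapping(A,ltog5,ltog3); /* row map, then column map */
> MatSetStencil(A,ndim,dims,starts,ifive);   /* row-space stencil, dof = 5 */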
>
> Best
>
> Timothee
>
>
>
> 2015-12-23 6:05 GMT+09:00 Barry Smith <bsmith at mcs.anl.gov>:
>
>>
>>   First run with valgrind, and if that doesn't find anything, then run
>> in the debugger and check the data structures where it crashes. There is
>> nothing obviously wrong; it is some subtle mistake that can only be found
>> with valgrind.
>>
>>   Barry
>>
>> > On Dec 22, 2015, at 12:02 AM, Timothée Nicolas <
>> timothee.nicolas at gmail.com> wrote:
>> >
>> > Hi,
>> >
>> > I'd like to take advantage of the stencil structure of the DMDA to
>> > write into my matrix. I have already written all my routines so that I
>> > can use MatSetValuesStencil, so it is most convenient for me; otherwise
>> > I would have to rethink the whole structure.
>> >
>> > Since I don't create my rectangular matrix with DMCreateMatrix but
>> > with MatCreate and MatSetSizes instead, the manual tells me I first
>> > have to call MatSetStencil and MatSetLocalToGlobalMapping before being
>> > allowed to call MatSetValuesStencil. So before moving on to the full
>> > example, I tried to do this on a simple square matrix, but I can't get
>> > it to work. It must be an obvious mistake, but I can't find it. My
>> > code is
>> >
>> >   PetscErrorCode :: ierr
>> >   DM :: da
>> >   Mat :: A
>> >   PetscInt :: xm
>> >   ISLocalToGlobalMapping :: ltog
>> >   PetscInt :: dims(1), starts(1)
>> >   PetscInt :: dim
>> >   MatStencil :: row(4,1), col(4,1)
>> >   PetscScalar :: v
>> >
>> >   call DMDACreate1d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,iten,ione,ione, &
>> >        &            PETSC_NULL_INTEGER,da,ierr)
>> >   call DMDAGetCorners(da,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER, &
>> >        &              PETSC_NULL_INTEGER,xm,PETSC_NULL_INTEGER, &
>> >        &              PETSC_NULL_INTEGER,ierr)
>> >
>> >   call MatCreate(PETSC_COMM_WORLD,A,ierr)
>> >
>> >   call MatSetSizes(A,xm,xm,PETSC_DETERMINE,PETSC_DETERMINE,ierr)
>> >
>> >   call DMGetLocalToGlobalMapping(da,ltog,ierr)
>> >   call MatSetLocalToGlobalMapping(A,ltog,ltog,ierr)
>> >   call DMDAGetGhostCorners(da,starts(1),PETSC_NULL_INTEGER, &
>> >        &                   PETSC_NULL_INTEGER,dims(1), &
>> >        &                   PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,ierr)
>> >   dim = ione
>> >   call MatSetStencil(A,dim,dims,starts,ione,ierr)
>> >
>> >   row(MatStencil_i,1) = 0
>> >   col(MatStencil_i,1) = 0
>> >
>> >   call MatSetValuesStencil(A,ione,row,ione,col,zero,INSERT_VALUES,ierr)
>> >
>> > But I keep getting a segmentation fault error.
>> >
>> > [0]PETSC ERROR:
>> ------------------------------------------------------------------------
>> > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
>> probably memory access out of range
>> > [0]PETSC ERROR: Try option -start_in_debugger or
>> -on_error_attach_debugger
>> > [0]PETSC ERROR: or see
>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
>> > [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac
>> OS X to find memory corruption errors
>> > [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link,
>> and run
>> > [0]PETSC ERROR: to get more information on the crash.
>> > [0]PETSC ERROR: --------------------- Error Message
>> --------------------------------------------------------------
>> > [0]PETSC ERROR: Signal received
>> > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
>> for trouble shooting.
>> > [0]PETSC ERROR: Petsc Release Version 3.6.1, Jul, 22, 2015
>> > [0]PETSC ERROR: ./miips on a arch-darwin-c-debug named
>> iMac27Nicolas.nifs.ac.jp by timotheenicolas Tue Dec 22 14:57:46 2015
>> > [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++
>> --with-fc=gfortran --download-fblaslapack --download-mpich
>> --with-debugging=no
>> > [0]PETSC ERROR: #1 User provided function() line 0 in  unknown file
>> > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0
>> > [cli_0]: aborting job:
>> > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0
>> >
>> >
>> ===================================================================================
>> > =   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
>> > =   PID 9812 RUNNING AT iMac27Nicolas.nifs.ac.jp
>> > =   EXIT CODE: 59
>> > =   CLEANING UP REMAINING PROCESSES
>> > =   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
>> >
>> ===================================================================================
>> >
>> > If I replace everything with call DMCreateMatrix(da,A,ierr), it works
>> > fine.
>> >
>> > What information did I miss from the manual? What am I doing wrong?
>> >
>> > Best
>> >
>> > Timothee
>> >
>> >
>> >
>> >
>> >
>> > 2015-12-06 23:16 GMT+09:00 Timothée Nicolas <timothee.nicolas at gmail.com
>> >:
>> > Wow, great answer, thanks. I should probably be able to do it like
>> > this; I'll try my best!
>> >
>> > I actually do want the assembled matrix now. I have already coded the
>> > matrix-free version and it works fine, but I want to use multigrid,
>> > and I figured that in order to be most efficient at the coarse level
>> > it is best to use -ksp_type preonly -pc_type lu, which requires a true
>> > matrix (the matrix I am talking about is the product of a 3 dof -->
>> > 5 dof matrix with a 5 dof --> 3 dof matrix, which ends up square). But
>> > I want to retain a memory-light implementation, so I use a matrix-free
>> > formulation at all levels but the coarsest, which is also light by
>> > definition, so I should not suffer much in terms of memory.
>> >
>> > Thanks
>> >
>> > Timothée
>> >
>> >
>> >
>> > 2015-12-06 22:29 GMT+09:00 Dave May <dave.mayhem23 at gmail.com>:
>> >
>> >
>> > On 6 December 2015 at 11:19, Timothée Nicolas <
>> timothee.nicolas at gmail.com> wrote:
>> > Hi,
>> >
>> > The answer to the first question is yes. I was puzzled about the
>> > second part of the answer, but I realize I was not clear enough. I
>> > don't want to just transfer information between two DMDAs; I want to
>> > perform an operation on vectors with a rectangular matrix.
>> >
>> > Okay - I misunderstood.
>> >
>> >
>> > To be more explicit, the input of the operation is a vector with 3
>> > components (hence dof = 3), and the result of the operator is put into
>> > one vector equation plus 2 scalar equations, which makes dof = 5.
>> >
>> > So either I cannot fill my matrix (if the number of rows exceeds what
>> > is allowed by the DMDA), or I can fill my matrix but then the matrix
>> > and vectors are not compatible, as in this error:
>> >
>> >
>> > Do you need to explicitly assemble this rectangular operator?
>> > Expressing this operator via a matrix-free representation would be
>> > relatively straightforward.
>> >
>> > However, since the DMDAs have the same parallel layout, you are in
>> > good shape to define the rectangular matrix if you really want the
>> > assembled thing. Your row space would be defined by the DMDA with
>> > block size 5 and the column space would be defined by the DMDA with
>> > block size 3. For example, the matrix would be created in the
>> > following way (assuming a 2D DMDA):
>> >
>> > Mat A;
>> > PetscInt m,n,M,N,nodes,lnodes;
>> >
>> > DMDAGetInfo(dm3,NULL,&M,&N,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL);
>> > nodes = M * N;
>> > DMDAGetCorners(dm3,NULL,NULL,NULL,&m,&n,NULL);
>> > lnodes = m * n;
>> > MatCreate(PETSC_COMM_WORLD,&A);
>> > MatSetSizes(A,lnodes*5,lnodes*3,nodes*5,nodes*3);
>> >
>> > The sparsity pattern is defined by the connectivity of the DMDA points
>> AND the different block sizes.
>> > To define the non-zero structure, you need to traverse the part of the
>> > dmda defined by the natural indices given by si <= i < si+m,
>> > sj <= j < sj+n and check whether the 5x3 block associated with each
>> > i,j is living within the diagonal or off-diagonal block of the matrix.
>> > For a given i,j the off-diagonal is defined by any i',j' associated
>> > with your stencil which is not in the range si <= i' < si+m,
>> > sj <= j' < sj+n (e.g. it is a ghost node).
>> >
>> > Your arrays counting the non-zero entries on the diagonal (nnz[]) and
>> > off-diagonal (nnz_od[]) can be indexed using i + j * m. This
>> > corresponds to the indexing used by the DMDA. Your non-zero counting
>> > function would look something like the pseudocode below:
>> >
>> > PetscInt *nnz,*nnz_od;
>> >
>> > PetscMalloc(sizeof(PetscInt)*m*n*5,&nnz);
>> > PetscMalloc(sizeof(PetscInt)*m*n*5,&nnz_od);
>> >
>> > PetscMemzero(nnz,sizeof(PetscInt)*m*n*5);
>> > PetscMemzero(nnz_od,sizeof(PetscInt)*m*n*5);
>> >
>> > for (j=0; j<n; j++) {
>> > for (i=0; i<m; i++) {
>> >   PetscInt idx,d,cnti,cntg;
>> >   PetscBool is_ghost;
>> >
>> >   idx = i + j * m;
>> >
>> >   cnti = 0;
>> >   cntg = 0;
>> >   for (all_points_in_stencil) { /* User logic to define this loop */
>> >
>> >    is_ghost = PETSC_FALSE;
>> >    /*User logic goes here to test if stencil point is a ghost point */
>> >
>> >     /* accumulate counts for the stencil */
>> >     if (is_ghost) { cntg++; } /* values living on a neighbour rank */
>> >     else { cnti++; } /* values local to current rank */
>> >   }
>> >
>> >   /* assume all dofs in row and col space in the block are coupled to
>> >      each other */
>> >   for (d=0; d<5; d++) {
>> >     nnz[5*idx+d]    = 3 * cnti;
>> >     nnz_od[5*idx+d] = 3 * cntg;
>> >   }
>> >
>> > }
>> > }
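>> >
>> > With the counts filled in, the preallocation itself would then be set
>> > with something like this sketch:
>> >
>> > MatSetType(A,MATAIJ);
>> > MatSeqAIJSetPreallocation(A,0,nnz);
>> > MatMPIAIJSetPreallocation(A,0,nnz,0,nnz_od);
>> > PetscFree(nnz);
>> > PetscFree(nnz_od);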
>> >
>> > Thanks,
>> >   Dave
>> >
>> >
>> >
>> > [0]PETSC ERROR: --------------------- Error Message
>> --------------------------------------------------------------
>> > [0]PETSC ERROR: Nonconforming object sizes
>> > [0]PETSC ERROR: Mat mat,Vec y: global dim 4900 2940
>> > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
>> for trouble shooting.
>> > [0]PETSC ERROR: Petsc Release Version 3.6.1, Jul, 22, 2015
>> > [0]PETSC ERROR: ./miips on a arch-linux2-c-debug named Carl-9000 by
>> timothee Sun Dec  6 19:17:20 2015
>> > [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++
>> --with-fc=gfortran --download-fblaslapack --download-mpich
>> > [0]PETSC ERROR: #1 MatMult() line 2215 in
>> /home/timothee/Documents/petsc-3.6.1/src/mat/interface/matrix.c
>> >
>> > So I assume this means the matrix is required to be square. Is there a
>> > way to change this?
>> >
>> > Best
>> >
>> > Timothée
>> >
>> >
>> >
>> > 2015-12-05 20:41 GMT+09:00 Dave May <dave.mayhem23 at gmail.com>:
>> >
>> >
>> > On 5 December 2015 at 11:15, Timothée Nicolas <
>> timothee.nicolas at gmail.com> wrote:
>> > Hi,
>> >
>> > I need to create a rectangular matrix which takes vectors structured
>> > by one DMDA and transforms them into vectors structured by another
>> > DMDA. It does not look like there is a routine to do this, since
>> > apparently DMCreateMatrix assumes a square matrix. What is the clean
>> > way to do this?
>> >
>> > Do the DMDA's have exactly the same parallel layout and only differ by
>> the number of DOFs?
>> >
>> > If yes, and you only want to move entries between vectors associated
>> > with the two DMDAs, you can perform the transfer between vectors
>> > locally (rank-wise) in a very simple manner, using the local number of
>> > points from the DMDA, the known block sizes (DOFs) associated with
>> > your DMDAs, and the size of the subspace you ultimately want to
>> > construct.
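>> >
>> > For example, copying the first 3 components of an 8-dof DMDA vector
>> > into a 3-dof DMDA vector could look something like this sketch (2D
>> > case; da8/X8 and da3/X3 denote the two DMDAs and their global vectors):
>> >
>> > PetscScalar ***x8,***x3; /* indexed as [j][i][dof] in 2D */
>> > PetscInt i,j,d,xs,ys,xm,ym;
>> >
>> > DMDAGetCorners(da8,&xs,&ys,NULL,&xm,&ym,NULL);
>> > DMDAVecGetArrayDOF(da8,X8,&x8);
>> > DMDAVecGetArrayDOF(da3,X3,&x3);
>> > for (j=ys; j<ys+ym; j++) {
>> >   for (i=xs; i<xs+xm; i++) {
>> >     for (d=0; d<3; d++) { x3[j][i][d] = x8[j][i][d]; }
>> >   }
>> > }
>> > DMDAVecRestoreArrayDOF(da8,X8,&x8);
>> > DMDAVecRestoreArrayDOF(da3,X3,&x3);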
>> >
>> > Thanks,
>> >   Dave
>> >
>> >
>> > The problem is that I have a DMDA with 8 degrees of freedom (dof), and
>> > my matrix sends vectors living in a 3 dof subspace to the other, 5 dof
>> > subspace. I can define a square matrix living in the large 8 dof
>> > space, but this is of course very inefficient in terms of memory and
>> > time.
>> >
>> > Thanks
>> >
>> > Best
>> >
>> > Timothée
>> >
>> >
>> >
>> >
>> >
>>
>>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener