[petsc-users] DMCreateMatrix from one DMDA to an another
Barry Smith
bsmith at mcs.anl.gov
Tue Dec 22 15:05:51 CST 2015
First run with valgrind, and if that doesn't find anything, then run in the debugger and check the data structures where it crashes. There is nothing obviously wrong; it must be some subtle mistake that can only be found with valgrind.
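For example, assuming the MPICH mpiexec that your --download-mpich install provides, something along the lines of

    mpiexec -n 1 valgrind --tool=memcheck -q --num-callers=20 ./miips -malloc off

should point at any memory corruption.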
Barry
> On Dec 22, 2015, at 12:02 AM, Timothée Nicolas <timothee.nicolas at gmail.com> wrote:
>
> Hi,
>
> I'd like to take advantage of the stencil structure of the DMDA to write into my matrix. I have already written all my routines so that I can use MatSetValuesStencil, so it is most convenient for me; otherwise I would have to rethink the whole structure.
>
> Since I don't create my rectangular matrix with DMCreateMatrix but with MatCreate and MatSetSizes instead, the manual tells me I first have to call MatSetStencil and MatSetLocalToGlobalMapping before being allowed to call MatSetValuesStencil. So before moving on to the full example, I tried to do this on a simple square matrix, but I can't get it to work. It must be an obvious mistake, but I can't find it. My code is
>
> PetscErrorCode :: ierr
> DM :: da
> Mat :: A
> PetscInt :: xm
> ISLocalToGlobalMapping :: ltog
> PetscInt :: dims(1), starts(1)
> PetscInt :: dim
> MatStencil :: row(4,1), col(4,1)
> PetscScalar :: v
>
> call DMDACreate1d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,iten,ione,ione,PETSC_NULL_INTEGER,da,ierr)
> call DMDAGetCorners(da,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER, &
> & xm,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,ierr)
>
> call MatCreate(PETSC_COMM_WORLD,A,ierr)
>
> call MatSetSizes(A,xm,xm,PETSC_DETERMINE,PETSC_DETERMINE,ierr)
>
> call DMGetLocalToGlobalMapping(da,ltog,ierr)
> call MatSetLocalToGlobalMapping(A,ltog,ltog,ierr)
> call DMDAGetGhostCorners(da,starts(1),PETSC_NULL_INTEGER,PETSC_NULL_INTEGER, &
> & dims(1),PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,ierr)
> dim=ione
> call MatSetStencil(A,dim,dims,starts,ione,ierr)
>
> row(MatStencil_i,1) = 0
> col(MatStencil_i,1) = 0
>
> call MatSetValuesStencil(A,ione,row,ione,col,zero,INSERT_VALUES,ierr)
>
> But I keep getting a segmentation fault error.
>
> [0]PETSC ERROR: ------------------------------------------------------------------------
> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
> [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
> [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
> [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
> [0]PETSC ERROR: to get more information on the crash.
> [0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
> [0]PETSC ERROR: Signal received
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.6.1, Jul, 22, 2015
> [0]PETSC ERROR: ./miips on a arch-darwin-c-debug named iMac27Nicolas.nifs.ac.jp by timotheenicolas Tue Dec 22 14:57:46 2015
> [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack --download-mpich --with-debugging=no
> [0]PETSC ERROR: #1 User provided function() line 0 in unknown file
> application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0
> [cli_0]: aborting job:
> application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0
>
> ===================================================================================
> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
> = PID 9812 RUNNING AT iMac27Nicolas.nifs.ac.jp
> = EXIT CODE: 59
> = CLEANING UP REMAINING PROCESSES
> = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
> ===================================================================================
>
> If I replace everything with call DMCreateMatrix(da,A,ierr) it works fine.
>
> What information did I miss from the manual? What am I doing wrong?
>
> Best
>
> Timothee
>
>
>
>
>
> 2015-12-06 23:16 GMT+09:00 Timothée Nicolas <timothee.nicolas at gmail.com>:
> Wow, great answer, thanks! I should probably be able to do it like this; I'll try my best!
>
> I actually do want the assembled matrix now. I have already coded the matrix-free version and it works fine, but I want to use multigrid, and I figured that to be most efficient at the coarse level it is best to use -ksp_type preonly -pc_type lu, which requires a true matrix (the matrix I am talking about is the product of a 3 dof --> 5 dof matrix with a 5 dof --> 3 dof matrix, which ends up square). But I want to retain a memory-light implementation, so I use a matrix-free formulation at all levels but the last, which is light by definition, so I should not suffer much in terms of memory.
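>
> A minimal sketch of configuring the coarse level in code (assuming a PCMG preconditioner object pc; the names coarse and coarse_pc are just for illustration, and the same effect is obtained with the options -mg_coarse_ksp_type preonly -mg_coarse_pc_type lu) could be:
>
> KSP coarse;
> PC  coarse_pc;
>
> PCMGGetCoarseSolve(pc,&coarse);   /* KSP used on the coarsest level */
> KSPSetType(coarse,KSPPREONLY);    /* a single application of the preconditioner */
> KSPGetPC(coarse,&coarse_pc);
> PCSetType(coarse_pc,PCLU);        /* direct solve, which needs an assembled matrix */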
>
> Thanks
>
> Timothée
>
>
>
> 2015-12-06 22:29 GMT+09:00 Dave May <dave.mayhem23 at gmail.com>:
>
>
> On 6 December 2015 at 11:19, Timothée Nicolas <timothee.nicolas at gmail.com> wrote:
> Hi,
>
> The answer to the first question is yes. I was puzzled by the second part of the answer, but I realize I was not clear enough. I don't want to just transfer information between 2 DMDAs; I want to perform an operation on vectors with a rectangular matrix.
>
> Okay - I misunderstood.
>
>
> To be more explicit, the input vector of the operation is a vector with 3 components (hence dof=3), and the result of the operator is put into one vector equation + 2 scalar equations, which makes dof = 5.
>
> So, either I cannot fill my matrix (if the number of rows exceeds what is allowed by the DMDA), or I can fill my matrix but then the matrix and vectors are not compatible, as in this error:
>
>
> Do you need to explicitly assemble this rectangular operator?
> Expressing this operator via a matrix-free representation would be relatively straightforward.
>
> However, since the DMDA's have the same parallel layout, you are in good shape to define the rectangular matrix if you really want the assembled thing. Your row space would be defined by the DMDA with block size 5 and the column space would be defined by the DMDA with block size 3. For example, the matrix could be created in the following way (assuming a 2D DMDA):
>
> Mat A;
> PetscInt m,n,M,N,nodes,lnodes;
>
> DMDAGetInfo(dm3,NULL,&M,&N,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL);
> nodes = M * N;                                   /* global number of DMDA points */
> DMDAGetCorners(dm3,NULL,NULL,NULL,&m,&n,NULL);
> lnodes = m * n;                                  /* locally owned DMDA points */
> MatCreate(PETSC_COMM_WORLD,&A);
> MatSetSizes(A,lnodes*5,lnodes*3,nodes*5,nodes*3);
>
> The sparsity pattern is defined by the connectivity of the DMDA points AND the different block sizes.
> To define the non-zero structure, you need to traverse the part of the dmda defined by the natural indices given by si <= i < si+m, sj <= j < sj+n and check whether the 5x3 block associated with each i,j is living within the diagonal or off-diagonal block of the matrix. For a given i,j the off-diagonal is defined by any i',j' associated with your stencil which is not in the range si <= i' < si+m, sj <= j' < sj+n (i.e. it is a ghost node).
>
> Your arrays counting the non-zero entries on the diagonal (nnz[]) and off-diagonal (nnz_od[]) can be indexed using i + j * m. This corresponds to the indexing used by the DMDA. Your non-zero counting function would look something like the pseudocode below:
>
> PetscInt *nnz,*nnz_od,i,j;
>
> PetscMalloc(sizeof(PetscInt)*m*n*5,&nnz);
> PetscMalloc(sizeof(PetscInt)*m*n*5,&nnz_od);
>
> PetscMemzero(nnz,sizeof(PetscInt)*m*n*5);
> PetscMemzero(nnz_od,sizeof(PetscInt)*m*n*5);
>
> for (j=0; j<n; j++) {
>   for (i=0; i<m; i++) {
>     PetscInt  idx,d,cnti,cntg;
>     PetscBool is_ghost;
>
>     idx = i + j * m;
>
>     cnti = 0;
>     cntg = 0;
>     for (all_points_in_stencil) { /* User logic to define this loop */
>
>       is_ghost = PETSC_FALSE;
>       /* User logic goes here to test if stencil point is a ghost point */
>
>       /* accumulate counts for the stencil */
>       if (is_ghost) { cntg++; } /* values living on a neighbour rank */
>       else          { cnti++; } /* values local to current rank */
>     }
>
>     /* assume all dofs in row and col space in block are coupled to each other */
>     for (d=0; d<5; d++) {
>       nnz[5*idx+d]    = 3 * cnti;
>       nnz_od[5*idx+d] = 3 * cntg;
>     }
>   }
> }
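>
> Once the counts are known, the preallocation could then be set with something like the following (assuming an AIJ matrix; the preallocation call that does not match the actual matrix type is ignored, so setting both is safe):
>
> MatSetType(A,MATAIJ);
> MatSeqAIJSetPreallocation(A,0,nnz);
> MatMPIAIJSetPreallocation(A,0,nnz,0,nnz_od);
>
> PetscFree(nnz);
> PetscFree(nnz_od);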
>
> Thanks,
> Dave
>
>
>
> [0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
> [0]PETSC ERROR: Nonconforming object sizes
> [0]PETSC ERROR: Mat mat,Vec y: global dim 4900 2940
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.6.1, Jul, 22, 2015
> [0]PETSC ERROR: ./miips on a arch-linux2-c-debug named Carl-9000 by timothee Sun Dec 6 19:17:20 2015
> [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack --download-mpich
> [0]PETSC ERROR: #1 MatMult() line 2215 in /home/timothee/Documents/petsc-3.6.1/src/mat/interface/matrix.c
>
> So I take it this means the matrix is assumed to be square. Is there a way to change this?
>
> Best
>
> Timothée
>
>
>
> 2015-12-05 20:41 GMT+09:00 Dave May <dave.mayhem23 at gmail.com>:
>
>
> On 5 December 2015 at 11:15, Timothée Nicolas <timothee.nicolas at gmail.com> wrote:
> Hi,
>
> I need to create a rectangular matrix which takes vectors structured by a DMDA and transforms them into vectors structured by another DMDA. It does not look like there is a routine to do this, since apparently DMCreateMatrix assumes a square matrix. What is the clean way to do this?
>
> Do the DMDA's have exactly the same parallel layout and only differ by the number of DOFs?
>
> If yes, and you only want to move entries between vectors associated with the two DMDA's, you can perform the transfer between vectors locally (rank-wise) in a very simple manner, using the local number of points from the DMDA, the known block sizes (DOFs) associated with your DMDAs, and the size of the subspace you ultimately want to construct.
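>
> For instance, a minimal sketch of such a rank-local transfer (assuming two 2D DMDAs dm3 (dof=3) and dm5 (dof=5) with identical layouts, global vectors X3 and X5, and the convention that the 3 components are copied into the first 3 of the 5; all of these names are just for illustration) could look like:
>
> PetscScalar ***x3,***x5;
> PetscInt    xs,ys,xm,ym,i,j,d;
>
> DMDAVecGetArrayDOF(dm3,X3,&x3);
> DMDAVecGetArrayDOF(dm5,X5,&x5);
> DMDAGetCorners(dm3,&xs,&ys,NULL,&xm,&ym,NULL);
> for (j=ys; j<ys+ym; j++) {
>   for (i=xs; i<xs+xm; i++) {
>     for (d=0; d<3; d++) { x5[j][i][d] = x3[j][i][d]; } /* copy the shared components */
>   }
> }
> DMDAVecRestoreArrayDOF(dm3,X3,&x3);
> DMDAVecRestoreArrayDOF(dm5,X5,&x5);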
>
> Thanks,
> Dave
>
>
> The problem is that I have a DMDA with 8 degrees of freedom (dof), and my matrix sends vectors living in a subspace with 3 dof to the other, 5 dof, subspace. I can define a square matrix living in the large, 8 dof subspace, but this is of course very inefficient in terms of memory and time.
>
> Thanks
>
> Best
>
> Timothée
>
>
>
>
>