[petsc-users] DMCreateMatrix from one DMDA to an another

Timothée Nicolas timothee.nicolas at gmail.com
Sun Dec 6 08:16:03 CST 2015


Wow, great answer, thanks! I should probably be able to do it like this;
I'll try my best!

I actually do want the assembled matrix now. I have already coded the
matrix-free version and it works fine, but I want to use multigrid, and I
figured that in order to be most efficient at the coarse level it is best to
use -ksp_type preonly -pc_type lu, which requires a true matrix (the matrix
I am talking about is the product of a 3 dof --> 5 dof matrix with a
5 dof --> 3 dof matrix, which ends up square). But I want to retain a
memory-light implementation, so I use the matrix-free formulation at all
levels but the coarsest, which is light by definition anyway, so I should
not suffer much in terms of memory.
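
For reference, the kind of setup I have in mind looks roughly like this
(only a sketch; nlevels, Acoarse and the shell operators on the finer levels
are placeholders for my own number of levels, assembled coarse matrix and
matrix-free operators):

KSP ksp,coarse;
PC  pc,coarse_pc;

KSPCreate(PETSC_COMM_WORLD,&ksp);
KSPGetPC(ksp,&pc);
PCSetType(pc,PCMG);
PCMGSetLevels(pc,nlevels,NULL);
/* ... set the matrix-free (shell) operators on the finer levels ... */
PCMGGetCoarseSolve(pc,&coarse);
KSPSetType(coarse,KSPPREONLY);
KSPGetPC(coarse,&coarse_pc);
PCSetType(coarse_pc,PCLU);
KSPSetOperators(coarse,Acoarse,Acoarse); /* the assembled coarse-level matrix */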

Thanks

Timothée



2015-12-06 22:29 GMT+09:00 Dave May <dave.mayhem23 at gmail.com>:

>
>
> On 6 December 2015 at 11:19, Timothée Nicolas <timothee.nicolas at gmail.com>
> wrote:
>
>> Hi,
>>
>> The answer to the first question is yes. I was puzzled about the second
>> part of the answer, but I realize I was not clear enough. I don't want to
>> just transfer information between 2 DMDAs, I want to perform an operation
>> on vectors with a rectangular matrix.
>>
>
> Okay - I misunderstood.
>
>
>
>> To be more explicit, the input vector of the operation is a vector with 3
>> components (hence dof=3), and the result of the operator is put into one
>> vector equation + 2 scalar equations, which makes dof = 5.
>>
>> So, either I cannot fill my matrix (if the number of rows exceeds what is
>> allowed by the DMDA), or I can fill my matrix but then the matrix and
>> vectors are not compatible, as in this error:
>>
>
>
> Do you need to explicitly assemble this rectangular operator?
> Expressing this operator via a matrix-free representation would be
> relatively straightforward.
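>
> For instance, a matrix-free version could be set up along these lines (a
> sketch only; lnodes/nodes are the local/global numbers of grid points as
> computed below, and ctx/ApplyRectOp are placeholder names for your context
> and MatMult routine):
>
> Mat S;
> MatCreateShell(PETSC_COMM_WORLD,lnodes*5,lnodes*3,nodes*5,nodes*3,ctx,&S);
> MatShellSetOperation(S,MATOP_MULT,(void (*)(void))ApplyRectOp);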
>
> However, since the DMDAs have the same parallel layout, you are in good
> shape to define the rectangular matrix if you really want the assembled
> thing. Your row space would be defined by the DMDA with block size 5 and
> the column space would be defined by the DMDA with block size 3. For
> example, the matrix could be created in the following way (assuming a
> 2D DMDA):
>
> Mat      A;
> PetscInt si,sj,m,n,M,N,nodes,lnodes;
>
> /* global and rank-local numbers of grid points (identical for both DMDAs) */
> DMDAGetInfo(dm3,NULL,&M,&N,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL);
> nodes = M * N;
> DMDAGetCorners(dm3,&si,&sj,NULL,&m,&n,NULL);
> lnodes = m * n;
>
> MatCreate(PETSC_COMM_WORLD,&A);
> MatSetSizes(A,lnodes*5,lnodes*3,nodes*5,nodes*3);
> MatSetType(A,MATAIJ);
>
> The sparsity pattern is defined by the connectivity of the DMDA points AND
> the different block sizes.
> To define the non-zero structure, you need to traverse the part of the
> dmda defined by the natural indices given by si <= i < si+m, sj <= j < sj+n
> and check whether the 5x3 block associated with each i,j lives within the
> diagonal or off-diagonal block of the matrix. For a given i,j, the
> off-diagonal is defined by any i',j' associated with your stencil which is
> not in the range si <= i' < si+m, sj <= j' < sj+n (i.e. it is a ghost
> node).
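>
> As a concrete sketch of that test (assuming (ip,jp) is the global index of
> a stencil neighbour of (i,j), and si,sj,m,n come from DMDAGetCorners as
> above):
>
> is_ghost = (PetscBool)(ip < si || ip >= si+m || jp < sj || jp >= sj+n);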
>
> Your arrays counting the non-zero entries on the diagonal (nnz[]) and off
> diag (nnz_od[]) can be indexed using i + j * m. This corresponds to the
> indexing used by the DMDA. Your non-zero counting function would look
> something like the pseudo code below
>
> PetscInt i,j;
> PetscInt *nnz,*nnz_od;
>
> PetscMalloc(sizeof(PetscInt)*m*n*5,&nnz);
> PetscMalloc(sizeof(PetscInt)*m*n*5,&nnz_od);
>
> PetscMemzero(nnz,sizeof(PetscInt)*m*n*5);
> PetscMemzero(nnz_od,sizeof(PetscInt)*m*n*5);
>
> for (j=0; j<n; j++) {
>   for (i=0; i<m; i++) {
>     PetscInt  idx,d,cnti,cntg;
>     PetscBool is_ghost;
>
>     idx = i + j * m;
>
>     cnti = 0;
>     cntg = 0;
>     for (all_points_in_stencil) { /* User logic to define this loop */
>
>       is_ghost = PETSC_FALSE;
>       /* User logic goes here to test if stencil point is a ghost point */
>
>       /* accumulate counts for the stencil */
>       if (is_ghost) { cntg++; } /* values living on a neighbour rank */
>       else { cnti++; } /* values local to current rank */
>     }
>
>     /* assume all dofs in the row and col space of the block are coupled
>        to each other */
>     for (d=0; d<5; d++) {
>       nnz[5*idx+d]    = 3 * cnti;
>       nnz_od[5*idx+d] = 3 * cntg;
>     }
>   }
> }
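>
> With these counts you can then preallocate and free the temporary arrays,
> e.g. (a sketch, assuming the standard AIJ matrix type set above):
>
> MatSeqAIJSetPreallocation(A,0,nnz);
> MatMPIAIJSetPreallocation(A,0,nnz,0,nnz_od);
> PetscFree(nnz);
> PetscFree(nnz_od);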
>
> Thanks,
>   Dave
>
>
>>
>>
>> [0]PETSC ERROR: --------------------- Error Message
>> --------------------------------------------------------------
>> [0]PETSC ERROR: Nonconforming object sizes
>> [0]PETSC ERROR: Mat mat,Vec y: global dim 4900 2940
>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
>> for trouble shooting.
>> [0]PETSC ERROR: Petsc Release Version 3.6.1, Jul, 22, 2015
>> [0]PETSC ERROR: ./miips on a arch-linux2-c-debug named Carl-9000 by
>> timothee Sun Dec  6 19:17:20 2015
>> [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++
>> --with-fc=gfortran --download-fblaslapack --download-mpich
>> [0]PETSC ERROR: #1 MatMult() line 2215 in
>> /home/timothee/Documents/petsc-3.6.1/src/mat/interface/matrix.c
>>
>> So I assume this means the matrix is expected to be square. Is there a way
>> to change this?
>>
>> Best
>>
>> Timothée
>>
>>
>>
>> 2015-12-05 20:41 GMT+09:00 Dave May <dave.mayhem23 at gmail.com>:
>>
>>>
>>>
>>> On 5 December 2015 at 11:15, Timothée Nicolas <
>>> timothee.nicolas at gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> I need to create a rectangular matrix which takes vectors structured by
>>>> a DMDA and transforms them into vectors structured by another DMDA. It
>>>> does not look like there is a routine to do this, since apparently
>>>> DMCreateMatrix assumes a square matrix. What is the clean way to do this?
>>>>
>>>
>>> Do the DMDA's have exactly the same parallel layout and only differ by
>>> the number of DOFs?
>>>
>>> If yes, and you only want to move entries between vectors associated
>>> with the two DMDA's, you can perform the transfer between vectors locally
>>> (rank-wise) in a very simple manner using the local number of points from
>>> DMDA and the known block sizes (DOFs) associated with your DMDA and the
>>> size of the subspace you ultimately want to construct.
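>>>
>>> For example, a minimal sketch of such a rank-local copy (assuming 2D
>>> DMDAs da8 with dof=8 and da3 with dof=3 that share the same layout, and
>>> vectors X8 and X3 obtained from them; all names are placeholders):
>>>
>>> PetscScalar ***x8,***x3;
>>> PetscInt    si,sj,m,n,i,j,c;
>>>
>>> DMDAVecGetArrayDOF(da8,X8,&x8);
>>> DMDAVecGetArrayDOF(da3,X3,&x3);
>>> DMDAGetCorners(da8,&si,&sj,NULL,&m,&n,NULL);
>>> for (j=sj; j<sj+n; j++) {
>>>   for (i=si; i<si+m; i++) {
>>>     for (c=0; c<3; c++) x3[j][i][c] = x8[j][i][c]; /* copy the first 3 dofs */
>>>   }
>>> }
>>> DMDAVecRestoreArrayDOF(da3,X3,&x3);
>>> DMDAVecRestoreArrayDOF(da8,X8,&x8);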
>>>
>>> Thanks,
>>>   Dave
>>>
>>>
>>>> The problem is I have a DMDA with 8 degrees of freedom (dof) and my
>>>> matrix sends vectors living in a subspace with 3 dof to the other 5 dof
>>>> subspace. I can define a square matrix living in the large, 8 dof subspace,
>>>> but this is of course very inefficient in terms of memory and time.
>>>>
>>>> Thanks
>>>>
>>>> Best
>>>>
>>>> Timothée
>>>>
>>>
>>>
>>
>