[petsc-users] Customizing MatSetValuesBlocked(...)

Jinquan Zhong jzhong at scsolutions.com
Wed Aug 8 16:17:35 CDT 2012


Thanks, Jed.

Your solution seems comprehensive.  I would love to adopt it in the next phase of implementation, but I am running out of time before this Friday.  What I am trying to do is use a quick fix to improve the performance of building the sparse matrix based on the lines we have as follows:

  ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
  ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,m*n,m*n);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatSetUp(A);CHKERRQ(ierr);

  /* ... compute Ii, J, and v ... */
  ierr = MatSetValues(A,1,&Ii,1,&J,&v,INSERT_VALUES);CHKERRQ(ierr);
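
For reference, a minimal sketch of the same assembly with AIJ preallocation added, the usual quick fix when MatSetValues() insertion is slow because no preallocation was given; nnz_per_row here is a hypothetical placeholder, not a value from this thread:

  Mat            A;
  PetscErrorCode ierr;
  PetscInt       nnz_per_row = 10;  /* hypothetical upper bound on nonzeros per row */

  ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
  ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,m*n,m*n);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  /* preallocate before inserting values; the call matching the actual matrix type
     takes effect and the other is ignored, so MatSetUp() is no longer needed */
  ierr = MatSeqAIJSetPreallocation(A,nnz_per_row,PETSC_NULL);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(A,nnz_per_row,PETSC_NULL,nnz_per_row,PETSC_NULL);CHKERRQ(ierr);

  /* ... compute Ii, J, and v, then insert as before ... */
  ierr = MatSetValues(A,1,&Ii,1,&J,&v,INSERT_VALUES);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);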


“For my application, I have LDA=LDB=3 for local A and N=9 in the following:

  ierr = MatCreateAIJ(PETSC_COMM_WORLD,LDA,LDB,N,N,LDA,0,0,0,&A);CHKERRQ(ierr);


I got the following error message:



[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Nonconforming object sizes!
[0]PETSC ERROR: Sum of local lengths 27 does not equal global length 9, my local length 3
  likely a call to VecSetSizes() or MatSetSizes() is wrong.





It does NOT appear that PETSc created the 9x9 matrix. It tried to build a 27x27 one, since I am using 9 procs and each specified a local size of 3.  Is there a quick way to resolve this local size vs. global size issue?”
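
For reference, a minimal size-consistent sketch of that call, assuming the intent is a 9x9 global matrix spread over the 9 processes; the per-row preallocation counts (3) are placeholders, not values from this thread:

  Mat            A;
  PetscErrorCode ierr;
  PetscInt       N = 9;   /* global size from the message above */

  /* Option 1: let PETSc choose the local sizes so that they sum to N */
  ierr = MatCreateAIJ(PETSC_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,N,N,
                      3,PETSC_NULL,3,PETSC_NULL,&A);CHKERRQ(ierr);

  /* Option 2: give local sizes explicitly; over 9 processes they must sum to N,
     i.e. 1 row/column per process here, not 3 */
  /* ierr = MatCreateAIJ(PETSC_COMM_WORLD,1,1,N,N,3,PETSC_NULL,3,PETSC_NULL,&A);CHKERRQ(ierr); */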





Thanks a lot anyway.



Jinquan



From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Jed Brown
Sent: Wednesday, August 08, 2012 2:05 PM
To: PETSc users list
Subject: Re: [petsc-users] Customizing MatSetValuesBlocked(...)

On Wed, Aug 8, 2012 at 2:53 PM, Jinquan Zhong <jzhong at scsolutions.com> wrote:
The dense matrix is so big (~800 GB) that it would cost too much to move it around in distributed memory.  What I am trying to do is keep the 800 GB already in core from ScaLAPACK untouched and repartition its indices.  The next step would be to repartition the indices for each block after I complete the construction of the sparse matrix.  This sparse matrix is mainly populated with the 800 GB of entries from the dense matrix, along with about 3 GB of entries from other matrices.

If that were true as you say, wouldn’t that involve a lot of data exchange across distributed memory?  Or am I missing something huge?

You can change indexing so that Elemental's matrix uses the same memory. PETSc uses yet another indexing. You can look at petsc-dev/src/mat/impls/elemental/matelem.cxx for the index translation between the ordering PETSc presents to the user and the one Elemental uses internally. Anyway, if you configure an Elemental matrix to use your memory, while the (simpler) PETSc ordering has contiguous vectors [1], then you can combine it with traditional sparse matrices using MATNEST.

[1] Note that PETSc's ownership ranges really fundamentally refer to the Vec distribution; the matrix entries can be stored elsewhere.


There is nothing especially hard about doing the above, but you will have to do some careful indexing math. Look at the MATNEST man page and the Elemental documentation.
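
For illustration only, a minimal MatCreateNest() sketch along these lines; the block names, the 2x2 layout, and the zero off-diagonal blocks are assumptions, not details from this thread:

  Mat            A_dense, A_sparse, A_nest;   /* e.g. a MATELEMENTAL view of the existing storage and a MATAIJ matrix */
  Mat            blocks[4];
  PetscErrorCode ierr;

  /* ... create A_dense and A_sparse with conforming row/column layouts ... */

  /* 2x2 block layout; PETSC_NULL submatrices are treated as zero blocks, and
     passing PETSC_NULL index sets lets PETSc derive them from the submatrices */
  blocks[0] = A_dense;    blocks[1] = PETSC_NULL;
  blocks[2] = PETSC_NULL; blocks[3] = A_sparse;
  ierr = MatCreateNest(PETSC_COMM_WORLD,2,PETSC_NULL,2,PETSC_NULL,blocks,&A_nest);CHKERRQ(ierr);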

