[petsc-users] Customizing MatSetValuesBlocked(...)

Jinquan Zhong jzhong at scsolutions.com
Wed Aug 8 15:53:20 CDT 2012


Jed,

Your comments get to the core of the application.

The dense matrix is so big (~800 GB) that it would cost too much to move it around in distributed memory.  What I am trying to do is keep the 800 GB already in core from ScaLAPACK untouched and repartition its indices.  The next step would be to repartition the indices for each block after I complete the construction of the sparse matrix.  This sparse matrix is populated mainly with the 800 GB of entries from the dense matrix, along with about 3 GB of entries from other matrices.

If it is as you say, wouldn't that mean a lot of data exchange across distributed memory?  Or am I missing something huge?

Jinquan



From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Jed Brown
Sent: Wednesday, August 08, 2012 1:42 PM
To: PETSc users list
Subject: Re: [petsc-users] Customizing MatSetValuesBlocked(...)

On Wed, Aug 8, 2012 at 2:25 PM, Jinquan Zhong <jzhong at scsolutions.com> wrote:
If I understand you correctly, ScaLAPACK blocks don't have anything to do with the sparse matrix structure.

******************************************************************************************************************************************

You are correct.  What I meant was how to define the diagonal and off-diagonal parts of each submatrix A(LDA, LDB).  For example, in the following matrix,

             Proc0       Proc1     Proc2
            1  2  0  |  0  3  0  |  0  4
    Proc0   0  5  6  |  7  0  0  |  8  0
            9  0 10  | 11  0  0  | 12  0
    -------------------------------------
           13  0 14  | 15 16 17  |  0  0
    Proc3   0 18  0  | 19 20 21  |  0  0      <=== owned by Proc 5
            0  0  0  | 22 23  0  | 24  0
    -------------------------------------
    Proc6  25 26 27  |  0  0 28  | 29  0
           30  0  0  | 31 32 33  |  0 34

I am not sure how to fill out the values for d_nz, d_nnz, o_nz, o_nnz properly for the subblock (0 0; 0 0; 24 0) owned by Proc 5, since those counts are defined in terms of diagonal and off-diagonal parts.
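
For reference, d_nz/d_nnz and o_nz/o_nnz are only defined for PETSc's own layout, in which each process owns a contiguous block of whole rows and the "diagonal" block is that row range intersected with the locally owned column range.  A small sketch for the matrix above, assuming a contiguous row partition in which the first process owns rows 0-2 (not the 2D layout pictured):

    /* Sketch only: preallocation counts for the first process under a
       contiguous row partition (rows 0-2 local, "diagonal" block = columns
       0-2), read off the 8x8 matrix pictured above. */
    PetscInt d_nnz[3] = {2, 2, 2};  /* per-row nonzeros with column index in 0..2      */
    PetscInt o_nnz[3] = {2, 2, 2};  /* per-row nonzeros with column index outside 0..2 */

A 2D sub-block such as (0 0; 0 0; 24 0) has no such split, because MPIAIJ stores whole rows on each process; the diagonal/off-diagonal distinction refers to the locally owned column range, not to a process grid.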

Throw your 2D block cyclic nonsense out the window. These are sparse matrices and that layout would be terrible. Logically permute your matrices all you want, then define a global ordering and chunk it into contiguous blocks of rows (no partition of columns). Work this out with a pencil and paper. You should have a function that translates row/column pairs from your ordering to our ordering. Now compute the sparsity pattern in the new ordering. (Usually you can figure this out on paper as well.) Then preallocate and call MatSetValues() with the new (row,column) locations.
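
A minimal sketch of that workflow in PETSc C, assuming hypothetical helpers AppNewRow(), AppNewCol(), and AppCountRow() that stand in for the pencil-and-paper index translation and the per-row nonzero counts of the real application:

    #include <petscmat.h>

    /* Hypothetical helpers: translate from the application's (ScaLAPACK)
       ordering to the new contiguous ordering, and report per-row nonzero
       counts in that new ordering. */
    extern PetscInt AppNewRow(PetscInt iold);
    extern PetscInt AppNewCol(PetscInt jold);
    extern void     AppCountRow(PetscInt inew, PetscInt cstart, PetscInt cend,
                                PetscInt *ndiag, PetscInt *noffdiag);

    PetscErrorCode AssembleSketch(MPI_Comm comm, PetscInt N, Mat *newmat)
    {
      Mat            A;
      PetscInt       nlocal = PETSC_DECIDE, rstart, rend, i;
      PetscInt       *d_nnz, *o_nnz;
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      /* 1. Contiguous block-row partition of the N x N matrix (no column partition). */
      ierr = PetscSplitOwnership(comm, &nlocal, &N);CHKERRQ(ierr);
      MPI_Scan(&nlocal, &rend, 1, MPIU_INT, MPI_SUM, comm);
      rstart = rend - nlocal;

      /* 2. Per-row counts in the NEW ordering, split at the locally owned
            column range [rstart, rend). */
      ierr = PetscMalloc(nlocal*sizeof(PetscInt), &d_nnz);CHKERRQ(ierr);
      ierr = PetscMalloc(nlocal*sizeof(PetscInt), &o_nnz);CHKERRQ(ierr);
      for (i = 0; i < nlocal; i++) AppCountRow(rstart+i, rstart, rend, &d_nnz[i], &o_nnz[i]);

      /* 3. Create and preallocate the parallel AIJ matrix. */
      ierr = MatCreate(comm, &A);CHKERRQ(ierr);
      ierr = MatSetSizes(A, nlocal, nlocal, N, N);CHKERRQ(ierr);
      ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
      ierr = MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);CHKERRQ(ierr);

      /* 4. Each process inserts the entries it already holds at translated
            (row,column) locations; PETSc forwards anything that belongs to
            another process's rows during assembly.  Schematically:

            for each locally held entry (iold, jold, v) {
              PetscInt inew = AppNewRow(iold), jnew = AppNewCol(jold);
              ierr = MatSetValues(A, 1, &inew, 1, &jnew, &v, INSERT_VALUES);CHKERRQ(ierr);
            }
      */
      ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

      ierr = PetscFree(d_nnz);CHKERRQ(ierr);
      ierr = PetscFree(o_nnz);CHKERRQ(ierr);
      *newmat = A;
      PetscFunctionReturn(0);
    }

How much data actually moves at assembly time then depends on how well the chosen row partition lines up with where the dense entries already live, since PETSc forwards any values inserted into rows owned by another process.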