most efficient way to use local CSR matrix?

Barry Smith bsmith at mcs.anl.gov
Tue Oct 20 20:32:35 CDT 2009


    Here's the deal. In parallel, PETSc does not use a single CSR
matrix on each process to hold the MPIAIJ matrix. Hence if you store
the matrix on a process as a single CSR matrix, it has to be copied
into the PETSc data structure. MatMPIAIJSetPreallocationCSR() does
the copy very efficiently. But you are right: until you free YOUR
rowptr[], colind[], and a[] arrays, there are two copies of the matrix.
One could argue that this is ok, because any preconditioner worth
anything will take up at least that much memory later anyway. For
example, the PETSc default preconditioner, block Jacobi with ILU(0)
on each block, takes pretty much the same amount of memory.

    For people who use SOR or some other preconditioner that requires
very little memory, this is not satisfactory, because they feel they
could run larger problems without the copy.

    I strongly recommend you use MatMPIAIJSetPreallocationCSR() and
live with any memory limitation. The reason is that doing anything
more involves orders of magnitude more pain for generally little payoff.
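    For concreteness, here is a minimal sketch of that path (the
helper name and its argument list are hypothetical, and the rest of
the program, including PetscInitialize(), is omitted). It assumes
rowptr is 0-based with length ldim+1, colind holds global column
indices, and the three arrays were allocated with malloc():

#include <stdlib.h>
#include <petscmat.h>

PetscErrorCode BuildMatrixFromCSR(MPI_Comm comm,PetscInt ldim,
                                  PetscInt *rowptr,PetscInt *colind,
                                  PetscScalar *a,Mat *A)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MatCreate(comm,A);CHKERRQ(ierr);
  ierr = MatSetSizes(*A,ldim,ldim,PETSC_DETERMINE,PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatSetType(*A,MATMPIAIJ);CHKERRQ(ierr);

  /* copies rowptr/colind/a into PETSc's split (diagonal plus
     off-diagonal) AIJ storage; the user arrays are not kept */
  ierr = MatMPIAIJSetPreallocationCSR(*A,rowptr,colind,a);CHKERRQ(ierr);

  /* freeing the application's CSR arrays here is what gets you back
     down to a single copy of the matrix */
  free(rowptr); free(colind); free(a);
  PetscFunctionReturn(0);
}

After this returns, PETSc holds the only long-lived copy of the matrix.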
If you like pain, then you can
1) change your code to not build directly into CSR; instead, call
MatSetValuesLocal() as you compute the entries. With this you need to
figure out the preallocation yourself, which can be painful code (see
the preallocation sketch after this list).
2) write an entire matrix class that stores the matrix the way you
store it, not the way PETSc stores it. This is a huge project, since
you have to write your own matrix-vector product, ILU factorization, etc.
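    For option 1), the preallocation can be computed directly from the
CSR graph you already build. Below is one possible sketch, assuming a
square matrix whose column ownership matches its row ownership, that A
has already had its sizes and MPIAIJ type set, and that ltog[] maps
local column indices to global ones (the same indices you would use to
build the mapping passed to MatSetLocalToGlobalMapping()). The helper
name is made up:

#include <stdlib.h>
#include <petscmat.h>

PetscErrorCode PreallocateFromCSRGraph(Mat A,PetscInt ldim,
                                       const PetscInt *rowptr,
                                       const PetscInt *colind,
                                       const PetscInt *ltog)
{
  PetscErrorCode ierr;
  PetscInt       i,k,rstart,rend,*d_nnz,*o_nnz;

  PetscFunctionBegin;
  /* the owned rows define the diagonal block for a square matrix */
  ierr  = MatGetOwnershipRange(A,&rstart,&rend);CHKERRQ(ierr);
  d_nnz = (PetscInt*)malloc(ldim*sizeof(PetscInt));
  o_nnz = (PetscInt*)malloc(ldim*sizeof(PetscInt));
  for (i=0; i<ldim; i++) {
    d_nnz[i] = o_nnz[i] = 0;
    for (k=rowptr[i]; k<rowptr[i+1]; k++) {
      PetscInt gcol = ltog[colind[k]];                /* local -> global */
      if (gcol >= rstart && gcol < rend) d_nnz[i]++;  /* diagonal block  */
      else                               o_nnz[i]++;  /* off-process     */
    }
  }
  ierr = MatMPIAIJSetPreallocation(A,0,d_nnz,0,o_nnz);CHKERRQ(ierr);
  free(d_nnz); free(o_nnz);
  PetscFunctionReturn(0);
}

With the preallocation exact, you can drop the local a[] array entirely
and call MatSetValuesLocal() (typically with ADD_VALUES) as each
element's contributions are computed, so no separate CSR copy of the
values ever exists.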

    Barry



On Oct 20, 2009, at 8:04 PM, Chris Kees wrote:

> Hi,
>
> Our unstructured finite element code does a fairly standard  
> overlapping decomposition that allows it to calculate all the
> non-zero column entries for the rows owned by the processor (rows
> 0...ldim-1 in the local numbering).  We assemble them into a local  
> CSR matrix and then copy them into a PETSc matrix like this:
>
> for (int i=0;i<ldim;i++)
> {
>   irow[0] = i;
>   MatSetValuesLocal(PETSCMAT(par_L),1,irow,rowptr[i+1]-rowptr[i],
>                     &colind[rowptr[i]],&a[rowptr[i]],INSERT_VALUES);
> }
>
> where rowptr and colind are the CSR indexing arrays in the local  
> numbering.
>
> I would like to eliminate the duplicate memory (and the copy above).  
> Is there a recommended way to let PETSc reference array 'a' directly  
> or a way to get rid of 'a' and translate our CSR assembly routines  
> to use low level PETSc data structures?
>
> So far I've added MatMPIAIJSetPreallocationCSR to try to speed up  
> the first solve, but the documentation is clear that 'a' is not a  
> pointer to the AIJ storage so I still need the copy. I tried using  
> MatCreateMPIAIJWithSplitArrays with the 'o' arrays set to zero but I  
> got indexing errors, and the documentation seems to imply that the
> local matrix is really not a standard CSR anyway. If that's
> true then it seems like the options I have are to write a shell  
> matrix for parallel CSR storage or write some new assembly routines  
> that use MatSetValuesLocal. I'm more concerned about the memory than  
> the overhead of MatSetValuesLocal, but it would certainly be easier  
> on me to use the CSR-based assembly routines we already have.
>
> Thanks,
> Chris
>



