[petsc-users] Matrix and vector type selection & memory allocation for efficient matrix import?

Smith, Barry F. bsmith at mcs.anl.gov
Wed Apr 18 11:37:51 CDT 2018


  If you can only generate the nonzero allocation sequentially, you can only solve sequentially, which means your matrix type is MATSEQAIJ, your vector type is VECSEQ, and your communicator is PETSC_COMM_SELF.
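
   For example, a minimal sequential setup along those lines might look like the following (untested sketch; N is the system size from your description):

       Mat M;
       Vec b;
       MatCreate(PETSC_COMM_SELF, &M);
       MatSetSizes(M, N, N, N, N);
       MatSetType(M, MATSEQAIJ);
       VecCreate(PETSC_COMM_SELF, &b);
       VecSetSizes(b, N, N);
       VecSetType(b, VECSEQ);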

   If you pass an array for nnz, what you pass for nz is irrelevant; you might as well pass 0.
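
   That is, something along these lines (sketch; it assumes nnz[] has already been filled with the per-row counts):

       PetscInt *nnz;
       PetscMalloc1(N, &nnz);
       /* fill nnz[0..N-1] with the number of nonzeros in each row */
       MatSeqAIJSetPreallocation(M, 0, nnz);   /* the nz argument (0 here) is ignored because nnz is non-NULL */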

   Barry


> On Apr 18, 2018, at 10:48 AM, Klaus Burkart <k_burkart at yahoo.com> wrote:
> 
> My starting point is a huge sparse matrix which can be symmetric or asymmetric and a rhs vector. There's no defined local or block structure at all, just row and column indices and the values and an array style rhs vector together describing the entire linear system to be solved. With quite some effort, I should be able to create an array nnz[N] containing the number of nonzeros per row in the global matrix for memory allocation which would leave me with MatSeqAIJSetPreallocation(M, 0, nnz); as the only option for efficient memory allocation ie. a MATSEQAIJ matrix and VECSEQ. I assume here, that 0 indicates different numbers of nonzero values in each row, the exact number being stored in the nnz array. Regarding this detail but one example assume a constant number of nz per row so I am not sure whether I should write 0 or NULL for nz?
> 
> I started with:
> 
>     MatCreate(PETSC_COMM_WORLD, &M);
>     MatSetSizes(M, PETSC_DECIDE, PETSC_DECIDE, N, N);
>     MatSetFromOptions(M);
> 
> taken from a paper, and assume the latter would set the matrix type to MATSEQAIJ, which might conflict with PETSC_COMM_WORLD. Maybe the decomposition took place at an earlier stage and the authors of the paper were able to retrieve the local data and structure. 
> 
> What type of matrix and vector should I use for my application, e.g. MATSEQAIJ and VECSEQ, to be able to use MatSeqAIJSetPreallocation(M, 0, nnz); for efficient memory allocation?
> 
> In this case, where would the decomposition / MPI process allocation take place?
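
A rough, untested sketch of the usual two-pass pattern for that starting point (count nonzeros per row, preallocate, then insert); rows[], cols[], vals[] and nz_total are hypothetical names for your entry data:

    /* rows[], cols[], vals[] of length nz_total: hypothetical arrays holding the (row, column, value) entries */
    PetscInt *nnz;
    PetscMalloc1(N, &nnz);
    for (PetscInt i = 0; i < N; i++) nnz[i] = 0;
    for (PetscInt k = 0; k < nz_total; k++) nnz[rows[k]]++;      /* count entries per row */
    MatSeqAIJSetPreallocation(M, 0, nnz);
    for (PetscInt k = 0; k < nz_total; k++)
      MatSetValues(M, 1, &rows[k], 1, &cols[k], &vals[k], INSERT_VALUES);
    MatAssemblyBegin(M, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(M, MAT_FINAL_ASSEMBLY);
    PetscFree(nnz);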


