[petsc-users] MatMPIBAIJSetPreallocationCSR i and j indices
Barry Smith
bsmith at mcs.anl.gov
Fri Sep 27 14:23:09 CDT 2013
Just use MatCreateMPIBAIJWithArrays(); it saves thinking and work.
The i and j indices are with respect to blocks, not the individual entries in the matrix.
Run on one process with just a couple of grid points and then use MatView() to look at the matrix and make sure it is correct.
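For example (an untested sketch, assuming the Fortran binding mirrors the C call and reusing nprows, ngrows, block_size, ia, ja, and check_petsc_error from the message below; "values" is a hypothetical array holding the 5x5 blocks in the same order as ja, ia/ja are 0-based block indices, and the ordering expected within each block should be checked against the manual page):

    ! Create, preallocate, and fill the BAIJ matrix in a single call.
    ! ia(1:nbrows+1) and ja are the block CSR arrays (0-based), where
    ! nbrows = nprows/block_size is the number of local block rows.
    call MatCreateMPIBAIJWithArrays(PETSC_COMM_WORLD, block_size,        &
         nprows, nprows, ngrows, ngrows, ia, ja, values,                 &
         petsc_pc_mat, i_err)
    call check_petsc_error(i_err)

    ! Sanity check: on one process with a couple of grid points,
    ! dump the matrix and inspect the block structure by eye.
    call MatView(petsc_pc_mat, PETSC_VIEWER_STDOUT_WORLD, i_err)
    call check_petsc_error(i_err)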
On Sep 27, 2013, at 12:20 PM, Matteo Parsani <parsani.matteo at gmail.com> wrote:
> Dear PETSc users and developers,
> I am assembling a PC matrix in parallel. For instance, with just one element in the mesh, the structure of the matrix is
>
> [5x5 nonzero] [5x5 nonzero] [5x5 nonzero] [5x5 nonzero] [5x5 ZEROs] [5x5 ZEROs] [5x5 nonzero] [5x5 ZEROs] [5x5 ZEROs]
> [5x5 nonzero] [5x5 nonzero] [5x5 nonzero] [5x5 ZEROs] [5x5 nonzero] [5x5 ZEROs] [5x5 ZEROs] [5x5 nonzero] [5x5 ZEROs]
> [5x5 nonzero] [5x5 nonzero] [5x5 nonzero] [5x5 ZEROs] [5x5 ZEROs] [5x5 nonzero] [5x5 ZEROs] [5x5 ZEROs] [5x5 nonzero]
> [5x5 nonzero] [5x5 ZEROs] [5x5 ZEROs] [5x5 nonzero] [5x5 nonzero] [5x5 nonzero] [5x5 nonzero] [5x5 ZEROs] [5x5 ZEROs]
> [5x5 ZEROs] [5x5 nonzero] [5x5 ZEROs] [5x5 nonzero] [5x5 nonzero] [5x5 nonzero] [5x5 ZEROs] [5x5 nonzero] [5x5 ZEROs]
> [5x5 ZEROs] [5x5 ZEROs] [5x5 nonzero] [5x5 nonzero] [5x5 nonzero] [5x5 nonzero] [5x5 ZEROs] [5x5 ZEROs] [5x5 nonzero]
> [5x5 nonzero] [5x5 ZEROs] [5x5 ZEROs] [5x5 nonzero] [5x5 ZEROs] [5x5 ZEROs] [5x5 nonzero] [5x5 nonzero] [5x5 nonzero]
> [5x5 ZEROs] [5x5 nonzero] [5x5 ZEROs] [5x5 ZEROs] [5x5 nonzero] [5x5 ZEROs] [5x5 nonzero] [5x5 nonzero] [5x5 nonzero]
> [5x5 ZEROs] [5x5 ZEROs] [5x5 nonzero] [5x5 ZEROs] [5x5 ZEROs] [5x5 nonzero] [5x5 nonzero] [5x5 nonzero] [5x5 nonzero]
>
>
> With multiple elements in the mesh, the above matrix will be the structure of the diagonal blocks of the PC matrix. If I call the block above [A], then the full PC matrix (assembled across all the processors) will look like
>
> [A] 0 0 0 0 0 0 0
> 0 [A] 0 0 0 0 0 0
> 0 0 [A] 0 0 0 0 0
> 0 0 0 [A] 0 0 0 0
> PC= 0 0 0 0 [A] 0 0 0 -> PC matrix if I have 8 elements in my mesh.
> 0 0 0 0 0 [A] 0 0
> 0 0 0 0 0 0 [A] 0
> 0 0 0 0 0 0 0 [A]
>
> Now in my code I have constructed the CSR arrays that give the positions of the nonzero blocks in the part of the PC matrix constructed by each processor.
>
> I am calling the following PETSc functions:
>
> call MatCreate(PETSC_COMM_WORLD,petsc_pc_mat,i_err)
> call check_petsc_error(i_err)
>
> call MatSetType(petsc_pc_mat,MATMPIBAIJ,i_err)
> call check_petsc_error(i_err)
>
> call MatSetSizes(petsc_pc_mat,nprows,nprows,ngrows,ngrows,i_err) ! nprows is the number of local rows which is equal to the number of local columns
> call check_petsc_error(i_err)
>
> call MatSetFromOptions(petsc_pc_mat,i_err)
> call check_petsc_error(i_err)
>
> call MatSetUp(petsc_pc_mat,i_err)
> call check_petsc_error(i_err)
>
> block_size = nequations
> call MatMPIBAIJSetPreallocationCSR(petsc_pc_mat, block_size, ia, ja, PETSC_NULL_SCALAR, i_err)
> call check_petsc_error(i_err)
>
> 1) In the latter call, should ia and ja be the CSR vectors that give the positions of the nonzero blocks in the matrix, or the CSR vectors that give the positions of the individual nonzero scalar entries in the PC matrix?
>
> 2) My nonzero blocks are stored in a 3-dimensional array: a(1:5,1:5,Number_of_blocks). I am thinking of trying two approaches:
> a) Looping over the blocks and setting a single block in the matrix at each iteration. However, I have seen that I should flatten the array before passing the values. Is there a way to avoid flattening the array and just pass a(1:5,1:5,block_index)?
>
> b) If I am not wrong, having the CSR structure, I could call MatSetValuesBlocked just once and set the entire matrix owned by one process. Am I right?
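In case it helps, a rough sketch of setting one 5x5 block at a time with MatSetValuesBlocked() (untested, declarations omitted; block indices are global, 0-based block indices, and global_block_row/global_block_col stand for whatever lookup your CSR structure provides). Telling PETSc the inserted values are column-oriented lets you pass the Fortran block a(:,:,b) directly, without flattening or transposing by hand:

    ! Fortran arrays are column-major, so switch off the default
    ! row-oriented interpretation of values passed to MatSetValues*().
    call MatSetOption(petsc_pc_mat, MAT_ROW_ORIENTED, PETSC_FALSE, i_err)
    call check_petsc_error(i_err)

    do b = 1, Number_of_blocks
       brow(1) = global_block_row(b)   ! 0-based global block row of block b
       bcol(1) = global_block_col(b)   ! 0-based global block column of block b
       call MatSetValuesBlocked(petsc_pc_mat, 1, brow, 1, bcol,          &
            a(:,:,b), INSERT_VALUES, i_err)
       call check_petsc_error(i_err)
    end do

    call MatAssemblyBegin(petsc_pc_mat, MAT_FINAL_ASSEMBLY, i_err)
    call check_petsc_error(i_err)
    call MatAssemblyEnd(petsc_pc_mat, MAT_FINAL_ASSEMBLY, i_err)
    call check_petsc_error(i_err)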
>
> Thank you.
>
> Best Regards
>
> --
> Matteo