[petsc-users] MatMPIBAIJSetPreallocationCSR i and j indices

Matteo Parsani parsani.matteo at gmail.com
Fri Sep 27 12:20:09 CDT 2013


Dear PETSc users and developers,
I am assembling a PC matrix in parallel. For instance, with just one
element in the mesh the structure of the matrix is

N N N   N 0 0   N 0 0
N N N   0 N 0   0 N 0
N N N   0 0 N   0 0 N
N 0 0   N N N   N 0 0
0 N 0   N N N   0 N 0
0 0 N   N N N   0 0 N
N 0 0   N 0 0   N N N
0 N 0   0 N 0   N N N
0 0 N   0 0 N   N N N

where N denotes a 5x5 nonzero block and 0 denotes a 5x5 block of zeros.



With multiple elements in the mesh, the above matrix will be the structure
of the diagonal blocks of the PC matrix. If I call the block above [A],
then the full PC matrix (assembled across all the processors) will look like

       [A]  0   0   0   0   0   0   0
        0  [A]  0   0   0   0   0   0
        0   0  [A]  0   0   0   0   0
        0   0   0  [A]  0   0   0   0
 PC =   0   0   0   0  [A]  0   0   0     -> PC matrix if I have 8 elements in my mesh
        0   0   0   0   0  [A]  0   0
        0   0   0   0   0   0  [A]  0
        0   0   0   0   0   0   0  [A]

Now in my code I have constructed the CSR structure which gives the positions
of the nonzero blocks (the N blocks above) in the part of the PC matrix
constructed by each processor.
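
For concreteness, here is a small sketch of what block-level CSR arrays for
this pattern could look like (my own illustration, not the actual arrays in my
code; the PetscInt declarations and the values are assumptions). Each of the 9
block rows has 5 nonzero 5x5 blocks, 45 blocks in total, and as far as I
understand the indices should be 0-based even from Fortran:

PetscInt :: ia(10), ja(45)

! Block-row pointers and block-column indices for the first three block
! rows of the 9x9 block pattern shown above (0-based block indices).
ia(1:4)  = (/ 0, 5, 10, 15 /)
ja(1:15) = (/ 0, 1, 2, 3, 6,  &   ! block row 0:  N N N  N 0 0  N 0 0
              0, 1, 2, 4, 7,  &   ! block row 1:  N N N  0 N 0  0 N 0
              0, 1, 2, 5, 8 /)    ! block row 2:  N N N  0 0 N  0 0 N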

I am calling the following PETSc functions:

call MatCreate(PETSC_COMM_WORLD,petsc_pc_mat,i_err)
call check_petsc_error(i_err)

call MatSetType(petsc_pc_mat,MATMPIBAIJ,i_err)
call check_petsc_error(i_err)

! nprows is the number of local rows, which equals the number of local columns
call MatSetSizes(petsc_pc_mat,nprows,nprows,ngrows,ngrows,i_err)
call check_petsc_error(i_err)

call MatSetFromOptions(petsc_pc_mat,i_err)
call check_petsc_error(i_err)

call MatSetUp(petsc_pc_mat,i_err)
call check_petsc_error(i_err)

block_size = nequations
call MatMPIBAIJSetPreallocationCSR(petsc_pc_mat,block_size,ia,ja, &
                                   PETSC_NULL_SCALAR,i_err)
call check_petsc_error(i_err)

1) In the latter call, should ia and ja be the CSR vectors that give the
positions of the nonzero blocks in the matrix, or should they be the CSR
vectors that give the positions of the individual nonzero scalar entries in
the PC matrix?

2) My nonzero blocks are stored in a 3-dimensional array: a(1:5,1:5,
Number_of_blocks). I am thinking of trying two approaches:
     a) Loop over the number of blocks and set a single block in the matrix
at each step (see the sketch after these two questions). However, I have seen
that I should flatten the array before passing the values. Is there a way to
avoid flattening the array and just pass it as a(1:5,1:5,block_index)?

     b) If I am not wrong, having the CSR structure I could call
MatSetValuesBlocked just once and set the entire matrix portion owned by one
process. Am I right?
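
For approach a), this is the kind of loop I have in mind (just a sketch on my
side: the row_blk/col_blk index arrays, the loop variable k, and the use of
MAT_ROW_ORIENTED so that the column-major Fortran block a(:,:,k) can be passed
directly are assumptions I would need to verify):

! Sketch of approach a): insert one 5x5 block at a time.
! row_blk(k)/col_blk(k) are assumed 0-based block indices of the k-th block.
PetscInt :: idxm(1), idxn(1), k

! Tell PETSc the values are column-major, i.e. the natural Fortran layout,
! so a(:,:,k) could be passed without transposing (assumption to verify).
call MatSetOption(petsc_pc_mat, MAT_ROW_ORIENTED, PETSC_FALSE, i_err)
call check_petsc_error(i_err)

do k = 1, Number_of_blocks
   idxm(1) = row_blk(k)
   idxn(1) = col_blk(k)
   call MatSetValuesBlocked(petsc_pc_mat, 1, idxm, 1, idxn, &
                            a(:,:,k), INSERT_VALUES, i_err)
   call check_petsc_error(i_err)
end do

call MatAssemblyBegin(petsc_pc_mat, MAT_FINAL_ASSEMBLY, i_err)
call check_petsc_error(i_err)
call MatAssemblyEnd(petsc_pc_mat, MAT_FINAL_ASSEMBLY, i_err)
call check_petsc_error(i_err)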

Thank you.

Best Regards







-- 
Matteo

