[petsc-users] Parallel matrix assembly questions

Hamid M. spam.wax at gmail.com
Wed Jan 12 16:53:43 CST 2011


Hello,

I need to build a dense matrix in parallel, and looking at the examples in
ksp/examples/tutorials there seem to be two approaches. In ex3.c,
MPI_Comm_rank and MPI_Comm_size are used to compute the index range for
the current processor, while ex5.c uses MatGetOwnershipRange to obtain
the index range.

1- Is there any advantage to using either of these methods, or will they
behave the same when it comes to performance and portability ?
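For concreteness, here is roughly what I have in mind (an untested sketch
against a recent PETSc, with error checking omitted; N, the fill values,
and the use of MATDENSE are just placeholders):

/* Untested sketch (recent PETSc, error checking omitted): assemble an
 * N x N dense matrix using the row range the matrix itself reports.
 * N and the fill values are arbitrary placeholders. */
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat         A;
  PetscInt    N = 100, rstart, rend, i, j;
  PetscMPIInt rank, size;
  PetscScalar v;

  PetscInitialize(&argc, &argv, NULL, NULL);

  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N); /* let PETSc split the rows */
  MatSetType(A, MATDENSE);
  MatSetUp(A);

  /* ex3.c style: rank/size are available for computing a row block by hand,
   * but the hand-computed split has to agree with the layout PETSc chose. */
  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
  MPI_Comm_size(PETSC_COMM_WORLD, &size);

  /* ex5.c style: ask the matrix which rows this process actually owns. */
  MatGetOwnershipRange(A, &rstart, &rend);

  for (i = rstart; i < rend; i++) {
    for (j = 0; j < N; j++) {
      v = 1.0 / (i + j + 1);               /* placeholder entry */
      MatSetValues(A, 1, &i, 1, &j, &v, INSERT_VALUES);
    }
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  MatDestroy(&A);
  PetscFinalize();
  return 0;
}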

Based on the comments in ex5.c, PETSc partitions the matrix into
contiguous chunks of rows.
2- Is there a way of changing this scheme using PETSc routines (see the
sketch below for the kind of explicit row split I mean) ?
3- If not, can I partition my matrix in a different way, say with a 2D
block-cyclic scheme, and still be able to use PETSc to solve it ?
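The only control I have found so far is an untested variant of the sketch
above, where each process passes an explicit (still contiguous) local row
count to MatSetSizes instead of PETSC_DECIDE; the equal split with the
remainder on the last rank is just an example:

/* Untested sketch: choose my own contiguous row chunks by giving MatSetSizes
 * explicit local row counts instead of PETSC_DECIDE.  The equal split with
 * the remainder on the last rank is only an example. */
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat         A;
  PetscInt    N = 1000, nlocal;
  PetscMPIInt rank, size;

  PetscInitialize(&argc, &argv, NULL, NULL);
  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
  MPI_Comm_size(PETSC_COMM_WORLD, &size);

  nlocal = N / size;
  if (rank == size - 1) nlocal += N % size;  /* last rank takes the remainder */

  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, nlocal, PETSC_DECIDE, N, N); /* explicit local row count */
  MatSetType(A, MATDENSE);
  MatSetUp(A);
  /* ... fill the owned rows and assemble as in the first sketch ... */
  MatDestroy(&A);
  PetscFinalize();
  return 0;
}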

Since my matrix is dense, I am going to start with direct solvers, and I
am concerned about whether the partitioning scheme will affect the
solver's performance.

thanks in advance,
Hamid
