[petsc-users] Parallel matrix assembly questions
Barry Smith
bsmith at mcs.anl.gov
Wed Jan 12 17:04:48 CST 2011
On Jan 12, 2011, at 4:53 PM, Hamid M. wrote:
> Hello,
>
> I need to build a dense matrix in parallel, and looking at the examples
> in ksp/examples/tutorials there seem to be two approaches:
> ex3.c uses MPI_Comm_rank and MPI_Comm_size to compute the index range
> for the current processor, while ex5.c uses MatGetOwnershipRange to
> figure out the index range.
>
> 1- Is there any advantage to using either of these methods, or will
> they behave the same when it comes to performance and portability?
Use MatGetOwnershipRange because it works with any Mat layout.
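
For example, a minimal sketch (the entry values are made up, and it uses the current PETSc error-checking style) that assembles a dense MPI matrix by looping only over the locally owned rows:

    #include <petscmat.h>

    int main(int argc,char **argv)
    {
      Mat            A;
      PetscInt       i,j,n = 8,rstart,rend;
      PetscScalar    v;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc,&argv,(char*)0,(char*)0);CHKERRQ(ierr);

      /* Create an n x n dense matrix; PETSc picks the (contiguous) row layout */
      ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
      ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n);CHKERRQ(ierr);
      ierr = MatSetType(A,MATDENSE);CHKERRQ(ierr);
      ierr = MatSetUp(A);CHKERRQ(ierr);

      /* Ask which rows this process owns; works for any Mat layout */
      ierr = MatGetOwnershipRange(A,&rstart,&rend);CHKERRQ(ierr);
      for (i=rstart; i<rend; i++) {
        for (j=0; j<n; j++) {
          v    = 1.0/(i+j+1);   /* made-up (Hilbert-like) entries */
          ierr = MatSetValue(A,i,j,v,INSERT_VALUES);CHKERRQ(ierr);
        }
      }
      ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

      ierr = MatDestroy(&A);CHKERRQ(ierr);
      ierr = PetscFinalize();
      return 0;
    }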
>
> Based on the comments in ex5.c, PETSc partitions the matrix into
> contiguous chunks of rows.
> 2- Is there a way of changing this scheme using PETSc routines?
No, the distribution is always by contiguous blocks of rows.
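
You can, however, control how many of those contiguous rows each process owns by passing an explicit local size to MatSetSizes() instead of PETSC_DECIDE. A sketch, hard-wired to two processes with a made-up 3/7 split of 10 rows:

    #include <petscmat.h>

    int main(int argc,char **argv)
    {
      Mat            A;
      PetscMPIInt    rank,size;
      PetscInt       N = 10,nlocal,rstart,rend;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc,&argv,(char*)0,(char*)0);CHKERRQ(ierr);
      ierr = MPI_Comm_rank(PETSC_COMM_WORLD,&rank);CHKERRQ(ierr);
      ierr = MPI_Comm_size(PETSC_COMM_WORLD,&size);CHKERRQ(ierr);
      if (size != 2) {  /* this sketch hard-wires a two-process split */
        ierr = PetscPrintf(PETSC_COMM_WORLD,"Run with exactly 2 MPI processes\n");CHKERRQ(ierr);
        ierr = PetscFinalize();
        return 1;
      }

      nlocal = (rank == 0) ? 3 : 7;  /* uneven, but still contiguous: rows 0-2 and 3-9 */

      ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
      ierr = MatSetSizes(A,nlocal,PETSC_DECIDE,N,N);CHKERRQ(ierr);
      ierr = MatSetType(A,MATDENSE);CHKERRQ(ierr);
      ierr = MatSetUp(A);CHKERRQ(ierr);

      /* Confirm the ownership range matches the requested split */
      ierr = MatGetOwnershipRange(A,&rstart,&rend);CHKERRQ(ierr);
      ierr = PetscPrintf(PETSC_COMM_SELF,"[%d] owns rows %d..%d\n",(int)rank,(int)rstart,(int)(rend-1));CHKERRQ(ierr);

      ierr = MatDestroy(&A);CHKERRQ(ierr);
      ierr = PetscFinalize();
      return 0;
    }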
> 3- If not, can I partition my matrix in a different way, say a 2D
> block-cyclic scheme, and still be able to use PETSc to solve it?
>
No.
> Since my matrix is dense I am going to start with direct solvers and I
> am concerned whether the partitioning scheme will affect the solver's
> performance.
>
It will affect performance. Unless you hope/plan to use iterative solvers, there is no good reason to use PETSc for dense matrices with direct solvers; that is another world with a different set of software.
Barry
> thanks in advance,
> Hamid