Using parMetis in petsc for ordering

Barry Smith bsmith at mcs.anl.gov
Wed Jan 10 18:00:49 CST 2007


  Dimitri,

   No, I think this is not the correct way to look at things. Load
balancing the original matrix is not necessarily a good thing for
doing an LU factorization (in fact, it is likely just to make the LU
factorization have much more fill and require many more floating-point
operations).

  Packages like SuperLU_dist and MUMPS have their own internal ordering
routines that are specifically designed to produce a good ordering for
the parallel LU factorization; you should just let these solvers
use them (which they do automatically).
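For example, the external direct solvers can be selected entirely at run time through PETSc options. This is a hedged sketch using the ex10 example from the thread; the exact option name depends on the PETSc release (the spelling below is the modern one, older releases used -pc_factor_mat_solver_package or a -mat_type conversion instead):

```shell
# Use SuperLU_dist's parallel LU, letting it apply its own internal ordering:
mpiexec -n 4 ./ex10 -ksp_type preonly -pc_type lu \
    -pc_factor_mat_solver_type superlu_dist

# Or the same with MUMPS:
mpiexec -n 4 ./ex10 -ksp_type preonly -pc_type lu \
    -pc_factor_mat_solver_type mumps
```

With -ksp_type preonly the Krylov loop is skipped and the direct solve is the whole solver, which is the usual setup for these packages.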

   Barry


On Thu, 11 Jan 2007, Dimitri Lecas wrote:

> Barry Smith wrote:
> >   1) The PETSc LU and Cholesky solvers only run sequentially.
> >   2) The parallel LU and Cholesky solvers PETSc interfaces to, SuperLU_dist,
> >      MUMPS, Spooles, DSCPACK do NOT accept an external ordering provided for
> >      them.
> >     Hence we do not have any setup for doing parallel matrix orderings for
> > factorizations, since we cannot use them. We could allow calling a parallel
> > ordering but I'm not sure what it would be useful for.
> > 
> >    Barry
> > 
> >   
> OK, I see that I was looking in the wrong direction.
> 
> But in ksp/examples/tutorials/ex10.c, partitioning is used on the linear
> system matrix. I don't understand why.
> 
> What I understand is that with MatPartitioning we try to partition the graph
> built from the matrix (the vertices are the rows/columns, with an edge
> between i and j if aij or aji is a nonzero value). But in my mind, a good
> partitioning for solving a linear system with an iterative algorithm should
> load balance the nonzero values between processors, so we have to use
> weights (the number of nonzero values in each row) to get a good
> partitioning.
> Do I have it right?
> 
> 
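The weighting idea Dimitri describes (vertex weight = nonzeros per row, so that each process gets a similar share of the matrix entries) can be illustrated with a small standalone sketch. This is a plain greedy balancer, not the PETSc/ParMETIS API, and the function name is made up for illustration; a real partitioner such as ParMETIS balances these weights while also minimizing the edge cut (communication), which this sketch ignores:

```python
def partition_rows_by_nnz(row_nnz, nprocs):
    """Assign each row to a process, greedily balancing total nonzeros.

    row_nnz -- row_nnz[i] is the number of nonzeros in row i
               (the per-vertex weight from the discussion)
    nprocs  -- number of processes
    Returns (owner, loads): owner[i] is the process owning row i,
    loads[p] is the total nonzero count assigned to process p.
    """
    loads = [0] * nprocs
    owner = [0] * len(row_nnz)
    # Place the heaviest rows first so the greedy balance is tighter.
    for i in sorted(range(len(row_nnz)), key=lambda i: -row_nnz[i]):
        p = min(range(nprocs), key=lambda p: loads[p])
        owner[i] = p
        loads[p] += row_nnz[i]
    return owner, loads

# Example: 6 rows with uneven nonzero counts split over 2 processes.
owner, loads = partition_rows_by_nnz([10, 1, 1, 8, 2, 2], 2)
print(loads)  # -> [12, 12]: each process holds half of the 24 nonzeros
```

A row-count-only partition of the same matrix (3 rows each, in order) would give loads of 12 and 12 here only by luck; in general the weighted version is what evens out the per-process work of a matrix-vector product.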
