[petsc-users] Sparse triangular solver

Barry Smith bsmith at mcs.anl.gov
Mon Mar 9 11:26:18 CDT 2015


> On Mar 9, 2015, at 11:12 AM, Hoang-Vu Dang <dang.hvu at gmail.com> wrote:
> 
> Hi Barry,
> 
> >  Are you free to pick whatever (parallel) data structure for the lower and upper triangular parts you want? Or are they given to you?
> 
> Yes, I am free to pick any data structure; of course, it must be something I can work with, or be able to convert to from a different format (say CSR / COO). I do not care yet about how efficient the conversion part is.

   Then you can either design your own or adopt one used by SuperLU_Dist, MUMPS, or Pastix. PETSc doesn't directly provide such data structures or code.
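
   If you design your own, one simple layout (purely illustrative, not a PETSc data structure) is a row-block distribution: each rank owns a contiguous band of rows in CSR form and records which entries of x it needs from other ranks before its local sweep can complete. A minimal C sketch of such a per-rank structure:

    /* Hypothetical per-rank storage for a distributed triangular matrix;
       all names are illustrative. */
    typedef struct {
      int     rstart, rend;  /* global rows [rstart, rend) owned here      */
      int    *rowptr;        /* CSR row pointers, length (rend-rstart)+1   */
      int    *colidx;        /* global column indices of the nonzeros      */
      double *val;           /* nonzero values; diagonal kept last per row */
      int     nghost;        /* # of off-rank x entries this rank needs    */
      int    *ghostidx;      /* their global indices, for MPI exchange     */
    } TriDist;

   Deciding when the ghost entries arrive, and overlapping that communication with the local sweep, is the hard part; that is exactly what each package's factor-specific format encodes.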

  Barry

> 
> >  Are you doing many triangular solves at the same time with different right hand sides, or are you doing them one at a time?
> 
> My current intention is to work with one vector at a time.
> 
> 
> On Mon, Mar 9, 2015 at 10:38 AM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> 
> > On Mar 8, 2015, at 10:40 PM, Hoang-Vu Dang <dang.hvu at gmail.com> wrote:
> >
> > Sorry for causing the confusion; I should have clarified the term "triangular solver".
> >
> > What I mean is that I do not need a factorization from a general matrix to LU form.
> >
> >  I already have the matrix in lower/upper triangular form. So the solver is really just backward/forward substitution.
> 
>   The details of the backward/forward substitution depend to a great degree on the (parallel) data structure used to store the lower and upper triangular parts. This is why the lower/upper triangular solvers are tied to the same code that does the factorization; for example, I cannot factor with MUMPS and then do the triangular solves with SuperLU_Dist, since they assume different and complicated (parallel) data structures.
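> 
>    To make that dependence concrete, here is a minimal sequential sketch of forward substitution for a lower triangular matrix stored in CSR, assuming (as this sketch does, not as any package requires) that the diagonal entry is stored last in each row:
> 
>     /* Solve L x = b, L lower triangular in CSR, diagonal last per row. */
>     void forward_solve(int n, const int *rowptr, const int *colidx,
>                        const double *val, const double *b, double *x)
>     {
>       for (int i = 0; i < n; i++) {
>         double sum  = b[i];
>         int    diag = rowptr[i+1] - 1;          /* last entry = L(i,i) */
>         for (int k = rowptr[i]; k < diag; k++)
>           sum -= val[k] * x[colidx[k]];         /* needs earlier x[j]  */
>         x[i] = sum / val[diag];
>       }
>     }
> 
>    In parallel, x[colidx[k]] may live on another process, so the solver must also know the distribution, the communication schedule, and the level sets of the sweep; change the storage format and all of that changes with it.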
> 
>    Are you free to pick whatever (parallel) data structure for the lower and upper triangular parts you want? Or are they given to you? Are you doing many triangular solves at the same time with different right hand sides, or are you doing them one at a time?
> 
>   Barry
> 
> >
> > See here, I got the terminology from http://www.mcs.anl.gov/papers/P1658.pdf
> >
> > The algorithm is "sparse triangular solve", unless I misunderstood the paper. The paper specifically mentions PETSc, so that's where I'm starting from.
> >
> > I hope that makes things clearer,
> >
> > Cheers,
> > Vu
> >
> > On Sun, Mar 8, 2015 at 9:28 PM, Hong <hzhang at mcs.anl.gov> wrote:
> > Hoang-Vu :
> > If I do not need the full solver/factorization but just the backward subs, do I need any special treatment? Is there a way to hint to the solver to apply only the last step, to reduce overhead?
> >
> > What do you mean by "do not need the full solver/factorization"?
> > Do you need incomplete matrix factorization, e.g., ILU, instead of full factorization?
> > The backward subs are steps AFTER matrix factorization.
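> >
> > Concretely, in PETSc the factorization and the sweeps are separate calls: you factor once, and MatSolve() then performs only the forward/backward substitution and can be repeated for new right-hand sides. A sketch with the built-in sequential LU (A, b, x assumed already created; error checking omitted):
> >
> >     Mat           F;
> >     IS            rowperm, colperm;
> >     MatFactorInfo info;
> >
> >     MatFactorInfoInitialize(&info);
> >     MatGetOrdering(A, MATORDERINGND, &rowperm, &colperm);
> >     MatGetFactor(A, MATSOLVERPETSC, MAT_FACTOR_LU, &F);
> >     MatLUFactorSymbolic(F, A, rowperm, colperm, &info);
> >     MatLUFactorNumeric(F, A, &info);  /* the expensive part             */
> >     MatSolve(F, b, x);                /* only the two triangular sweeps */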
> >
> > Hong
> >
> > On Mar 8, 2015 6:26 PM, "Barry Smith" <bsmith at mcs.anl.gov> wrote:
> >
> >   PETSc provides sparse parallel LU (and Cholesky) factorizations and solves via the external packages SuperLU_Dist, MUMPS, and Pastix. You need to first configure PETSc to use one or more of those packages, for example: ./configure --download-superlu_dist --download-metis --download-parmetis
> >
> >   It is generally best to use the linear solvers via the PETSc KSP interface (even for direct solvers such as LU). So you create a KSP object, provide the matrix object, and call KSPSolve(). You can control the solver used via the options database; to use the installed SuperLU_Dist you would use -pc_type lu -pc_factor_mat_solver_package superlu_dist
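> >
> >   In code the same sequence looks like this (a minimal sketch: assumes A, b, x already exist, PETSc is initialized, and error checking is omitted):
> >
> >     KSP ksp;
> >     PC  pc;
> >
> >     KSPCreate(PETSC_COMM_WORLD, &ksp);
> >     KSPSetOperators(ksp, A, A);
> >     KSPSetType(ksp, KSPPREONLY);   /* apply the factorization only     */
> >     KSPGetPC(ksp, &pc);
> >     PCSetType(pc, PCLU);
> >     PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST);
> >     KSPSetFromOptions(ksp);        /* command-line options can override */
> >     KSPSolve(ksp, b, x);
> >     KSPDestroy(&ksp);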
> >
> >   The MatrixMarket format is no good for parallel computing, so you must first convert the file from MatrixMarket format to the PETSc binary format (see http://www.mcs.anl.gov/petsc/documentation/faq.html#sparse-matrix-ascii-format ); then you can use MatLoad() to load the matrix in parallel and pass it to the KSP solver. For example, src/ksp/ksp/examples/tutorials/ex10.c does this.
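> >
> >   The load step itself is short; a sketch (the filename is illustrative; error checking omitted):
> >
> >     PetscViewer viewer;
> >     Mat         A;
> >
> >     PetscViewerBinaryOpen(PETSC_COMM_WORLD, "matrix.dat",
> >                           FILE_MODE_READ, &viewer);
> >     MatCreate(PETSC_COMM_WORLD, &A);
> >     MatSetFromOptions(A);          /* e.g. -mat_type mpiaij          */
> >     MatLoad(A, viewer);            /* rows distributed across ranks  */
> >     PetscViewerDestroy(&viewer);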
> >
> >
> >   Barry
> >
> > > On Mar 8, 2015, at 6:08 PM, Hoang-Vu Dang <dang.hvu at gmail.com> wrote:
> > >
> > > Hi,
> > >
> > > I would like to use PETSc to perform parallel backward/forward substitution for sparse triangular matrices on a distributed-memory cluster (with MPI).
> > >
> > > Could someone give me some pointers on how to do this, or on whether PETSc is a good fit for this task?
> > >
> > > I think there is a MatSolve method, but I am unsure whether it uses a good algorithm for sparse triangular matrices, and how to provide input in MatrixMarket / CSR format.
> > >
> > > Thank you
> > > Vu
> >
> >
> >
> 
> 
