[petsc-users] Algorithms to remove null spaces in a singular system
Kong, Fande
fande.kong at inl.gov
Tue Oct 11 12:01:34 CDT 2016
On Tue, Oct 11, 2016 at 10:39 AM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
> > On Oct 11, 2016, at 9:33 AM, Kong, Fande <fande.kong at inl.gov> wrote:
> >
> > Barry, Thanks so much for your explanation. It helps me a lot.
> >
> > On Mon, Oct 10, 2016 at 4:00 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> >
> > > On Oct 10, 2016, at 4:01 PM, Kong, Fande <fande.kong at inl.gov> wrote:
> > >
> > > Hi All,
> > >
> > > I know how to remove the null spaces from a singular system by
> > > creating a MatNullSpace and attaching it to the Mat.
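For concreteness, a minimal sketch of that attachment step, assuming the
null space is the constant vector (as for a pure Neumann problem); for a
general null space one passes an array of orthonormal Vecs instead:

    Mat            A;        /* the singular operator, assembled elsewhere */
    MatNullSpace   nullsp;
    PetscErrorCode ierr;

    /* null space = span of the constant vector; attach it to the Mat so
       the KSP picks it up automatically */
    ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_TRUE,0,NULL,&nullsp);CHKERRQ(ierr);
    ierr = MatSetNullSpace(A,nullsp);CHKERRQ(ierr);
    ierr = MatNullSpaceDestroy(&nullsp);CHKERRQ(ierr);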
> > >
> > > I was really wondering what the philosophy behind this is. What exact
> > > algorithms are we using in PETSc right now? Where do we deal with
> > > this: the preconditioner, the linear solver, or the nonlinear solver?
> >
> > It is in the Krylov solver.
> >
> >      The idea is very simple. Say you have a singular A with null space
> > N (that is, all vectors Ny are in the null space of A, so N is tall and
> > skinny) and you want to solve A x = b where b is in the range of A. This
> > problem has an infinite number of solutions Ny + x* since A (Ny + x*) =
> > ANy + Ax* = Ax* = b, where x* is the "minimum norm solution"; that is,
> > Ax* = b and x* has the smallest norm of all solutions.
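Restating that in equations: since AN = 0 and Ax* = b,

    A(Ny + x*) = ANy + Ax* = 0 + b = b   for every y,

so the solution set is the affine family { x* + Ny : y arbitrary }, and x*
is its minimum-norm member.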
> >
> >      With left preconditioning B A x = B b, GMRES, for example, normally
> > builds the solution as alpha_1 Bb + alpha_2 BABb + alpha_3 BABABb +
> > ...., but the B operator will likely introduce some component in the
> > direction of the null space, so as GMRES continues, the "solution"
> > computed grows larger and larger, with a large component in the null
> > space of A. Hence we simply modify GMRES a tiny bit by building the
> > solution from alpha_1 (I-N)Bb + alpha_2 (I-N)BABb + alpha_3
> >
> > Does "I" mean an identity matrix? Could you possibly send me a link for
> this GMRES implementation, that is, how PETSc does this in the actual code?
>
> Yes.
>
>    It is in the helper routine KSP_PCApplyBAorAB():
>
> #undef __FUNCT__
> #define __FUNCT__ "KSP_PCApplyBAorAB"
> PETSC_STATIC_INLINE PetscErrorCode KSP_PCApplyBAorAB(KSP ksp,Vec x,Vec y,Vec w)
> {
>   PetscErrorCode ierr;
>
>   PetscFunctionBegin;
>   if (!ksp->transpose_solve) {
>     ierr = PCApplyBAorAB(ksp->pc,ksp->pc_side,x,y,w);CHKERRQ(ierr);
>     ierr = KSP_RemoveNullSpace(ksp,y);CHKERRQ(ierr);
>   } else {
>     ierr = PCApplyBAorABTranspose(ksp->pc,ksp->pc_side,x,y,w);CHKERRQ(ierr);
>   }
>   PetscFunctionReturn(0);
> }
>
PETSC_STATIC_INLINE PetscErrorCode KSP_RemoveNullSpace(KSP ksp,Vec y)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  if (ksp->pc_side == PC_LEFT) {
    Mat          A;
    MatNullSpace nullsp;

    ierr = PCGetOperators(ksp->pc,&A,NULL);CHKERRQ(ierr);
    ierr = MatGetNullSpace(A,&nullsp);CHKERRQ(ierr);
    if (nullsp) {
      ierr = MatNullSpaceRemove(nullsp,y);CHKERRQ(ierr);
    }
  }
  PetscFunctionReturn(0);
}
"ksp->pc_side == PC_LEFT" deals with the left preconditioning Krylov
methods only? How about the right preconditioning ones? Are they just
magically right for the right preconditioning Krylov methods?
Fande Kong,
>
>    There is no code for this directly in GMRES or the other methods.
>
> >
> > (I-N)BABABb + .... That is, we remove from each new direction anything
> > in the direction of the null space. Hence the null space doesn't appear
> > directly in the preconditioner, just in the KSP method. If you attach a
> > null space to the matrix, the KSP automatically uses it to do the
> > removal above.
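In code, the (I-N) step is just an orthogonal projection. A minimal sketch
of what the removal amounts to, assuming an orthonormal basis q[0..n-1] of
the null space (a hypothetical helper for illustration; in PETSc,
MatNullSpaceRemove() performs the equivalent operation):

    #include <petscvec.h>

    /* y <- y - sum_i (q_i' y) q_i, assuming the q_i are orthonormal */
    static PetscErrorCode RemoveNullSpaceComponent(PetscInt n,const Vec q[],Vec y)
    {
      PetscErrorCode ierr;
      PetscScalar    dot;
      PetscInt       i;

      PetscFunctionBegin;
      for (i = 0; i < n; i++) {
        ierr = VecDot(y,q[i],&dot);CHKERRQ(ierr);   /* dot = (q_i, y) */
        ierr = VecAXPY(y,-dot,q[i]);CHKERRQ(ierr);  /* y -= dot * q_i */
      }
      PetscFunctionReturn(0);
    }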
> >
> >      With right preconditioning the solution is built from alpha_1 b +
> > alpha_2 ABb + alpha_3 ABABb + ...., and again we apply (I-N) to each
> > term to remove any part that is in the null space of A.
> >
> >      Now consider the case A y = b where b is NOT in the range of A. So
> > the problem has no "true" solution, but one can find a least squares
> > solution by rewriting b = b_par + b_perp, where b_par is in the range of
> > A and b_perp is orthogonal to the range of A, and solving instead A x =
> > b_par. If you provide the transpose null space with
> > MatSetTransposeNullSpace(), then KSP automatically uses it to remove
> > b_perp from the right hand side before starting the KSP iterations.
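A minimal sketch of that setup, assuming the null space of A' is spanned by
a single known unit vector nt (a hypothetical name; MatNullSpaceCreate()
expects orthonormal vectors):

    Mat            A;            /* the singular operator */
    Vec            nt;           /* unit vector spanning the null space of A' */
    MatNullSpace   transnullsp;
    PetscErrorCode ierr;

    /* attach the transpose null space so the KSP removes b_perp from the
       right hand side before iterating */
    ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_FALSE,1,&nt,&transnullsp);CHKERRQ(ierr);
    ierr = MatSetTransposeNullSpace(A,transnullsp);CHKERRQ(ierr);
    ierr = MatNullSpaceDestroy(&transnullsp);CHKERRQ(ierr);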
> >
> >      The manual pages for MatSetNullSpace() and
> > MatSetTransposeNullSpace() discuss this and explain how it relates to
> > the fundamental theorem of linear algebra.
> >
> > Note that for symmetric matrices the two null spaces are the same.
> >
> > Barry
> >
> >
> >      A different note: this "trick" is not a cure-all for a totally
> > inappropriate preconditioner. For example, if one uses a direct (sparse
> > or dense) solver or ILU(k) as the preconditioner, one can end up with a
> > very bad solver, because the direct solver will likely produce a very
> > small pivot at some point, and the triangular solves applied in the
> > preconditioner can then produce HUGE, nonphysical changes in the
> > solution, so the preconditioner basically produces garbage. On the
> > other hand, sometimes it works out ok.
> >
> > What preconditioners are appropriate? asm, bjacobi, amg? I have an
> > example which shows that lu and ilu indeed work, but asm and bjacobi do
> > not work at all. That is why I am asking about the algorithms. I am
> > trying to figure out a default preconditioner for several singular
> > systems.
>
>    Hmm, normally asm and bjacobi would be fine with this unless one or
> more of the subblocks are themselves singular (which normally won't
> happen). AMG can also work fine sometimes.
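(Each of these can be tried at runtime without code changes, e.g. via the
options -pc_type asm, -pc_type bjacobi, or -pc_type gamg.)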
>
> Can you send a sample code?
>
> Barry
>
> >
> > Thanks again.
> >
> >
> > Fande Kong,
> >