[petsc-users] Algorithms to remove null spaces in a singular system

Kong, Fande fande.kong at inl.gov
Tue Oct 11 17:07:15 CDT 2016


Barry,

I am trying to reproduce this issue using a pure PETSc code. VecLoad does
not work for me, and I do not know why. Anyway, I can reproduce the issue
with a very small system. Here is the relevant information:

Matrix A:
Mat Object:() 2 MPI processes
  type: mpiaij
row 0: (0, 1.)
row 1: (0, -0.820827)  (1, 1.51669)  (2, -0.820827)
row 2: (1, -0.820827)  (2, 1.51669)  (3, -0.820827)
row 3: (2, -0.820827)  (3, 1.51669)  (4, -0.820827)
row 4: (3, -0.820827)  (4, 1.51669)  (5, -0.820827)
row 5: (4, -0.820827)  (5, 1.51669)  (6, -0.820827)
row 6: (5, -0.820827)  (6, 1.51669)  (7, -0.820827)
row 7: (6, -0.820827)  (7, 1.51669)  (8, -0.820827)
row 8: (8, 1.)


Right hand side b:

Vec Object: 2 MPI processes
  type: mpi
Process [0]
0.
-0.356693
-0.50444
-0.356693
-5.55112e-17
Process [1]
0.356693
0.50444
0.356693
0.


Mat Null space N(A):

Vec Object: 2 MPI processes
  type: mpi
Process [0]
0.
0.191342
0.353553
0.46194
0.5
Process [1]
0.46194
0.353553
0.191342
6.12323e-17


Please run with two MPI processes using -ksp_pc_side right -pc_type bjacobi
and then -ksp_pc_side left -pc_type bjacobi. They produce different solutions.
The one obtained with "left" is correct (we have an analytical solution).
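
In case it helps, here is a minimal sketch of the kind of pure-PETSc driver I
have in mind (a sketch only, not the exact code; the matrix, rhs, and
null-space entries are the rounded values printed above, and
-pc_type/-ksp_pc_side are taken from the command line):

#include <petscksp.h>

int main(int argc,char **argv)
{
  Mat            A;
  Vec            x,b,nvec;
  MatNullSpace   nullsp;
  KSP            ksp;
  PetscInt       i,rstart,rend,n = 9;
  PetscScalar    diag = 1.51669,off = -0.820827;            /* rounded entries from the output above */
  PetscScalar    bv[9] = {0.,-0.356693,-0.50444,-0.356693,-5.55112e-17,
                          0.356693,0.50444,0.356693,0.};
  PetscScalar    nv[9] = {0.,0.191342,0.353553,0.46194,0.5,
                          0.46194,0.353553,0.191342,6.12323e-17};
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc,&argv,NULL,NULL);if (ierr) return ierr;

  /* assemble the 9x9 operator printed above */
  ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
  ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatSetUp(A);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A,&rstart,&rend);CHKERRQ(ierr);
  for (i=rstart; i<rend; i++) {
    if (i == 0 || i == n-1) {                               /* boundary rows */
      ierr = MatSetValue(A,i,i,1.0,INSERT_VALUES);CHKERRQ(ierr);
    } else {                                                /* interior tridiagonal rows */
      ierr = MatSetValue(A,i,i-1,off,INSERT_VALUES);CHKERRQ(ierr);
      ierr = MatSetValue(A,i,i,diag,INSERT_VALUES);CHKERRQ(ierr);
      ierr = MatSetValue(A,i,i+1,off,INSERT_VALUES);CHKERRQ(ierr);
    }
  }
  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  /* fill the rhs and the (already normalized) null-space vector printed above */
  ierr = MatCreateVecs(A,&x,&b);CHKERRQ(ierr);
  ierr = VecDuplicate(b,&nvec);CHKERRQ(ierr);
  for (i=rstart; i<rend; i++) {
    ierr = VecSetValue(b,i,bv[i],INSERT_VALUES);CHKERRQ(ierr);
    ierr = VecSetValue(nvec,i,nv[i],INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = VecAssemblyBegin(b);CHKERRQ(ierr);
  ierr = VecAssemblyEnd(b);CHKERRQ(ierr);
  ierr = VecAssemblyBegin(nvec);CHKERRQ(ierr);
  ierr = VecAssemblyEnd(nvec);CHKERRQ(ierr);

  /* attach N(A) so the KSP projects it out of each Krylov direction */
  ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_FALSE,1,&nvec,&nullsp);CHKERRQ(ierr);
  ierr = MatSetNullSpace(A,nullsp);CHKERRQ(ierr);

  ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);              /* -pc_type bjacobi -ksp_pc_side left|right */
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
  ierr = VecView(x,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);

  ierr = MatNullSpaceDestroy(&nullsp);CHKERRQ(ierr);
  ierr = VecDestroy(&nvec);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}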

I also attached the data for the matrix, rhs, and nullspace, but I am not sure
whether you can read them. I can load mat.dat, but I could not read rhs.dat
and nullspace.dat.
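
For reference, the binary files would normally be read back roughly like this
(a sketch, assuming they were written with MatView/VecView through a PETSc
binary viewer; the same pattern would apply to nullspace.dat):

  PetscViewer    viewer;
  Mat            A;
  Vec            b;
  PetscErrorCode ierr;

  /* matrix written with MatView() to a PETSc binary viewer */
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"mat.dat",FILE_MODE_READ,&viewer);CHKERRQ(ierr);
  ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
  ierr = MatLoad(A,viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

  /* vector written with VecView() to a PETSc binary viewer */
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"rhs.dat",FILE_MODE_READ,&viewer);CHKERRQ(ierr);
  ierr = VecCreate(PETSC_COMM_WORLD,&b);CHKERRQ(ierr);
  ierr = VecLoad(b,viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);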

Fande,



On Tue, Oct 11, 2016 at 3:44 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:

>
>   Fande,
>
>      Could you send me (petsc-maint at mcs.anl.gov) a nonsymmetric matrix
> you have that has a different null space for A and A'. This would be one
> that is failing with right preconditioning. The smaller the better, but
> whatever size you have. Run the code with -ksp_view_mat binary and send
> the resulting file called binaryoutput.
>
>    I need a test matrix to update the PETSc code for this case.
>
>
>    Barry
>
> > On Oct 11, 2016, at 3:04 PM, Kong, Fande <fande.kong at inl.gov> wrote:
> >
> >
> >
> > On Tue, Oct 11, 2016 at 12:18 PM, Barry Smith <bsmith at mcs.anl.gov>
> wrote:
> >
> > > On Oct 11, 2016, at 12:01 PM, Kong, Fande <fande.kong at inl.gov> wrote:
> > >
> > >
> > >
> > > On Tue, Oct 11, 2016 at 10:39 AM, Barry Smith <bsmith at mcs.anl.gov>
> wrote:
> > >
> > > > On Oct 11, 2016, at 9:33 AM, Kong, Fande <fande.kong at inl.gov> wrote:
> > > >
> > > > Barry, Thanks so much for your explanation. It helps me a lot.
> > > >
> > > > On Mon, Oct 10, 2016 at 4:00 PM, Barry Smith <bsmith at mcs.anl.gov>
> wrote:
> > > >
> > > > > On Oct 10, 2016, at 4:01 PM, Kong, Fande <fande.kong at inl.gov>
> wrote:
> > > > >
> > > > > Hi All,
> > > > >
> > > > > I know how to remove the null space from a singular system by
> > > > > creating a MatNullSpace and attaching it to the Mat.
> > > > >
> > > > > I was really wondering: what is the philosophy behind this? What
> > > > > exact algorithms are we using in PETSc right now? Where do we deal
> > > > > with this: the preconditioner, the linear solver, or the nonlinear
> > > > > solver?
> > > >
> > > >    It is in the Krylov solver.
> > > >
> > > >    The idea is very simple. Say you have a singular A with null
> > > > space N (that is, all vectors Ny are in the null space of A, so N is
> > > > tall and skinny) and you want to solve A x = b where b is in the
> > > > range of A. This problem has an infinite number of solutions Ny + x*,
> > > > since A(Ny + x*) = ANy + Ax* = Ax* = b, where x* is the "minimum norm
> > > > solution"; that is, Ax* = b and x* has the smallest norm of all
> > > > solutions.
> > > >
> > > >       With left preconditioning B A x = B b, GMRES, for example,
> > > > normally computes the solution as alpha_1 Bb + alpha_2 BABb +
> > > > alpha_3 BABABb + ..., but the B operator will likely introduce some
> > > > component in the direction of the null space, so as GMRES continues,
> > > > the "solution" computed will grow larger and larger, with a large
> > > > component in the null space of A. Hence we simply modify GMRES a tiny
> > > > bit by building the solution from alpha_1 (I-N)Bb + alpha_2 (I-N)BABb
> > > > + alpha_3
> > > >
> > > >  Does "I" mean an identity matrix? Could you possibly send me a link
> for this GMRES implementation, that is, how PETSc does this in the actual
> code?
> > >
> > >    Yes.
> > >
> > >     It is in the helper routine KSP_PCApplyBAorAB()
> > > #undef __FUNCT__
> > > #define __FUNCT__ "KSP_PCApplyBAorAB"
> > > PETSC_STATIC_INLINE PetscErrorCode KSP_PCApplyBAorAB(KSP ksp,Vec x,Vec y,Vec w)
> > > {
> > >   PetscErrorCode ierr;
> > >   PetscFunctionBegin;
> > >   if (!ksp->transpose_solve) {
> > >     ierr = PCApplyBAorAB(ksp->pc,ksp->pc_side,x,y,w);CHKERRQ(ierr);
> > >     ierr = KSP_RemoveNullSpace(ksp,y);CHKERRQ(ierr);
> > >   } else {
> > >     ierr = PCApplyBAorABTranspose(ksp->pc,ksp->pc_side,x,y,w);CHKERRQ(ierr);
> > >   }
> > >   PetscFunctionReturn(0);
> > > }
> > >
> > >
> > > PETSC_STATIC_INLINE PetscErrorCode KSP_RemoveNullSpace(KSP ksp,Vec y)
> > > {
> > >   PetscErrorCode ierr;
> > >   PetscFunctionBegin;
> > >   if (ksp->pc_side == PC_LEFT) {
> > >     Mat          A;
> > >     MatNullSpace nullsp;
> > >     ierr = PCGetOperators(ksp->pc,&A,NULL);CHKERRQ(ierr);
> > >     ierr = MatGetNullSpace(A,&nullsp);CHKERRQ(ierr);
> > >     if (nullsp) {
> > >       ierr = MatNullSpaceRemove(nullsp,y);CHKERRQ(ierr);
> > >     }
> > >   }
> > >   PetscFunctionReturn(0);
> > > }
> > >
> > > "ksp->pc_side == PC_LEFT" deals with the left preconditioning Krylov
> methods only? How about the right preconditioning ones? Are  they just
> magically right for the right preconditioning Krylov methods?
> >
> >    This is a good question. I am working on a branch now where I will
> add some more comprehensive testing of the various cases and fix anything
> that comes up.
> >
> >    Were you having trouble with ASM and bjacobi only for right
> preconditioning?
> >
> >
> > Yes. ASM and bjacobi work fine for left preconditioning but NOT for RIGHT
> > preconditioning. bjacobi converges but produces a wrong solution. ASM
> > needs more iterations; however, the solution is right.
> >
> >
> >
> >     Note that when A is symmetric the range of A is orthogonal to the null
> > space of A, so yes, I think in that case it is just "magically right", but
> > if A is not symmetric then I don't think it is "magically right". I'll work
> > on it.
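> >
> > In symbols (by the fundamental theorem of linear algebra):
> >
> >   \mathcal{R}(A) = \mathcal{N}(A^T)^{\perp}, \qquad
> >   A = A^T \;\Longrightarrow\; \mathcal{R}(A) = \mathcal{N}(A)^{\perp},
> >
> > so for symmetric A anything in the range of A is automatically orthogonal
> > to the null space of A, while for nonsymmetric A it need not be.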
> >
> >
> >    Barry
> >
> > >
> > > Fande Kong,
> > >
> > >
> > > There is no code directly in the GMRES or other methods.
> > >
> > > >
> > > > (I-N)BABABb + ...; that is, we remove from each new direction
> > > > anything in the direction of the null space. Hence the null space
> > > > doesn't appear directly in the preconditioner, just in the KSP method.
> > > > If you attach a null space to the matrix, the KSP just automatically
> > > > uses it to do the removal above.
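> > > >
> > > > Concretely, if the columns of N are orthonormal, the "(I-N)" applied to
> > > > each direction v above is the orthogonal projection
> > > >
> > > >   v \leftarrow v - N(N^T v) = (I - N N^T)\,v,
> > > >
> > > > which is what MatNullSpaceRemove() applies to each new Krylov direction.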
> > > >
> > > >     With right preconditioning the solution is built from alpha_1 b
>  + alpha_2 ABb + alpha_3 ABABb + .... and again we apply (I-N) to each term
> to remove any part that is in the null space of A.
> > > >
> > > >    Now consider the case A y = b where b is NOT in the range of A.
> > > > Then the problem has no "true" solution, but one can find a least
> > > > squares solution by rewriting b = b_par + b_perp, where b_par is in
> > > > the range of A and b_perp is orthogonal to the range of A, and solving
> > > > instead A x = b_par. If you provide a null space for A' with
> > > > MatSetTransposeNullSpace(), then the KSP automatically uses it to
> > > > remove b_perp from the right hand side before starting the KSP
> > > > iterations.
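> > > >
> > > > A minimal sketch of attaching both null spaces (nullvec and tnullvec
> > > > are placeholder normalized vectors spanning the null spaces of A and
> > > > A'; this fragment is a sketch, not code from PETSc itself):
> > > >
> > > >   Mat            A;           /* the singular operator */
> > > >   Vec            nullvec;     /* placeholder vector spanning N(A)  */
> > > >   Vec            tnullvec;    /* placeholder vector spanning N(A') */
> > > >   MatNullSpace   nullsp,tnullsp;
> > > >   PetscErrorCode ierr;
> > > >
> > > >   ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_FALSE,1,&nullvec,&nullsp);CHKERRQ(ierr);
> > > >   ierr = MatSetNullSpace(A,nullsp);CHKERRQ(ierr);           /* projected out of each Krylov direction */
> > > >   ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_FALSE,1,&tnullvec,&tnullsp);CHKERRQ(ierr);
> > > >   ierr = MatSetTransposeNullSpace(A,tnullsp);CHKERRQ(ierr); /* used to remove b_perp from the rhs */
> > > >   ierr = MatNullSpaceDestroy(&nullsp);CHKERRQ(ierr);
> > > >   ierr = MatNullSpaceDestroy(&tnullsp);CHKERRQ(ierr);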
> > > >
> > > >   The manual pages for MatNullSpaceAttach() and
> > > > MatTransposeNullSpaceAttach() discuss this and explain how it relates
> > > > to the fundamental theorem of linear algebra.
> > > >
> > > >   Note that for symmetric matrices the two null spaces are the same.
> > > >
> > > >   Barry
> > > >
> > > >
> > > >    A different note: this "trick" is not a cure-all for a totally
> > > > inappropriate preconditioner. For example, if one uses as a
> > > > preconditioner a direct (sparse or dense) solver or an ILU(k), one can
> > > > end up with a very bad solver, because the direct solver will likely
> > > > produce a very small pivot at some point; thus the triangular solves
> > > > applied in the preconditioner can produce HUGE changes in the solution
> > > > (that are not physical), and so the preconditioner basically produces
> > > > garbage. On the other hand, sometimes it works out OK.
> > > >
> > > > What preconditioners  are appropriate? asm, bjacobi, amg? I have an
> example which shows  lu and ilu indeed work, but asm and bjacobi do not at
> all. That is why I am asking questions about algorithms. I am trying to
> figure out a default preconditioner for several singular systems.
> > >
> > >    Hmm, normally asm and bjacobi would be fine with this unless one or
> > > more of the subblocks are themselves singular (which normally won't
> > > happen). AMG can also work fine sometimes.
> > >
> > >    Can you send a sample code?
> > >
> > >   Barry
> > >
> > > >
> > > > Thanks again.
> > > >
> > > >
> > > > Fande Kong,
> > > >
> > > >
> > > >
> > > > >
> > > > >
> > > > > Fande Kong,
> >
> >
>
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: mat.dat
Type: application/octet-stream
Size: 328 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20161011/2bf74d87/attachment.obj>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: nullspace.dat
Type: application/octet-stream
Size: 80 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20161011/2bf74d87/attachment-0001.obj>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: rhs.dat
Type: application/octet-stream
Size: 80 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20161011/2bf74d87/attachment-0002.obj>

