[petsc-users] Algorithms to remove null spaces in a singular system

Barry Smith bsmith at mcs.anl.gov
Mon Oct 17 14:05:55 CDT 2016


> On Oct 17, 2016, at 10:47 AM, Kong, Fande <fande.kong at inl.gov> wrote:
> 
> 
> 
> On Thu, Oct 13, 2016 at 8:21 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> 
>   Fande,
> 
>   What SNES method are you using?  If you use SNESKSPONLY I think it is ok, it will solve for the norm-minimizing least-squares solution during the one KSPSolve() and then return.
> 
> The problem we are currently working on is linear, but it could be extended to a nonlinear one.  Yes, you are right: "ksponly" indeed works and returns the right solution. But the residual norm could still confuse users because it is not close to zero.
> 
>  
> 
>   Yes, if you use SNESNEWTONLS or the others, though, the SNES solver will, as you say, think that progress has not been made.
> 
>    I do not like what you propose to do, changing the right hand side of the system the user provides is a nasty and surprising side effect.
> 
> I do not like this way either. The reason I posted this code here is that I want to let you know what is inconsistent between the nonlinear solvers and the linear solvers.

   You could have SNESSolve_KSPONLY subtract off the null space in the right hand side initially just like KSPSolve() does with code like

  /* Project the right-hand side onto the range of A so the system is consistent */
  ierr = MatGetTransposeNullSpace(pmat,&nullsp);CHKERRQ(ierr);
  if (nullsp) {
    ierr = VecDuplicate(ksp->vec_rhs,&btmp);CHKERRQ(ierr);
    ierr = VecCopy(ksp->vec_rhs,btmp);CHKERRQ(ierr);
    ierr = MatNullSpaceRemove(nullsp,btmp);CHKERRQ(ierr);
    vec_rhs      = ksp->vec_rhs;   /* stash the user's right-hand side ...  */
    ksp->vec_rhs = btmp;           /* ... and solve with the projected copy */
  }
  /* (after the solve, ksp->vec_rhs must be restored from vec_rhs and btmp destroyed) */

It is not perfect, see my comment below, but it gets what you want and "kind of" makes the residuals decrease as in the case of calling KSPSolve() directly.

> 
>  
> 
>     What is your goal? To make it look like the SNES system has had a residual norm reduction?
> 
> Yes, I would like to make SNES have a residual reduction.  Possibly, we could add something in the converged_test function? For example, the null-space component could be temporarily subtracted from the residual vector when evaluating the residual norm, if the system has a null space?

    There is likely always to be confusion (in the linear case) or with any least-squares-type solver. The true residual does not really decrease past a certain point, but if the solver only sees the consistent part then it looks like the residual is decreasing.

   The problem is that none of this stuff (the PETSc model and API) was designed for the generality of inconsistent least-squares problems, and we have just been bolting on more support over time without enhancing the model. For example, we could introduce the concept of a consistent residual and an inconsistent residual and have the default monitors display both when they differ; instead we just display "the residual norm" without clarifying which residual norm.  We should think about this and wait for Jed to come up with the ideal design :-)


  Barry




> 
>  
> 
>    We could generalize your question and ask what about solving for nonlinear problems: find the minimal-norm solution of min_x || F(x) - b||. This may or may not belong in Tao; currently SNES doesn't do any kind of nonlinear least squares.
> 
> 
> It would be great if we could add this kind of solver. Tao does have one, I think.  I would like to contribute something like this later (of course, if you are ok with this algorithm), when we move to nonlinear problems in our applications.
> 
> Fande Kong,
>  
> 
>   Barry
> 
> 
> > On Oct 13, 2016, at 5:20 PM, Kong, Fande <fande.kong at inl.gov> wrote:
> >
> > One more question.
> >
> > Suppose that we are solving the singular linear system Ax = b. N(A) is the null space of A, and N(A^T) is the null space of the transpose of A.
> >
> > The linear system is solved using SNES, that is, F(x) = Ax - b = Ax - b_r - b_n. Here b_n is in N(A^T), and b_r is in R(A).  During each nonlinear iteration, a linear system A \delta x = F(x) is solved. N(A) is applied to the Krylov space during the linear iteration. Before the actual solve "(*ksp->ops->solve)(ksp)" for \delta x, a temporary copy of F(x) is made, F_tmp, and N(A^T) is applied to F_tmp. We then get a \delta x, and F(x+\delta x) = A(x+\delta x) - b_r - b_n.
> >
> > F(x+\delta x) always contains the vector b_n, and then the algorithm never converges because the norm of F is at least ||b_n||.
> >
> > Should we apply N(A^T) to F instead of F_tmp so that b_n can be removed from F?
> >
> > MatGetTransposeNullSpace(pmat,&nullsp);
> > if (nullsp) {
> >    VecDuplicate(ksp->vec_rhs,&btmp);
> >    VecCopy(ksp->vec_rhs,btmp);
> >    MatNullSpaceRemove(nullsp,btmp);
> >    vec_rhs      = ksp->vec_rhs;
> >    ksp->vec_rhs = btmp;
> > }
> >
> > should  be changed to
> >
> > MatGetTransposeNullSpace(pmat,&nullsp);
> > if (nullsp) {
> >    MatNullSpaceRemove(nullsp,ksp->vec_rhs);
> > }
> > ???
> >
> > Or other solutions to this issue?
> >
> >
> > Fande Kong,
> >
> >
> >
> >
> >
> > On Thu, Oct 13, 2016 at 8:23 AM, Matthew Knepley <knepley at gmail.com> wrote:
> > On Thu, Oct 13, 2016 at 9:06 AM, Kong, Fande <fande.kong at inl.gov> wrote:
> >
> >
> > On Wed, Oct 12, 2016 at 10:21 PM, Jed Brown <jed at jedbrown.org> wrote:
> > Barry Smith <bsmith at mcs.anl.gov> writes:
> > >   I would make that a separate routine that the users would call first.
> >
> > We have VecMDot and VecMAXPY.  I would propose adding
> >
> >   VecQR(PetscInt nvecs,Vec *vecs,PetscScalar *R);
> >
> > (where R can be NULL).
> >
> > What does R mean here?
> >
> > It means the coefficients of the old basis vectors in the new basis.
> >
> >   Matt
> >
> > If nobody is working on this, I will take a try at it.
> >
> > Fande,
> >
> >
> > Does anyone use the "Vecs" type?
> >
> >
> >
> >
> > --
> > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> > -- Norbert Wiener
> >
> 
> 


