[petsc-users] Fwd: Implementing a homotopy solver

Dave May dave.mayhem23 at gmail.com
Sat Sep 29 19:12:55 CDT 2018


On Sat, 29 Sep 2018 at 16:09, Matthew Knepley <knepley at gmail.com> wrote:

> On Sat, Sep 29, 2018 at 9:47 AM zakaryah <zakaryah at gmail.com> wrote:
>
>> Hi Matt - thanks for all your help.
>>
>> Let's say I want exactly the same solver for the tangent vector and the
>> SNES update, so I should reuse the KSP.
>>
>
If you want to do this, there is no need to call KSPSetFromOptions()
inside your Jacobian evaluator - just call SNESSetFromOptions() once on
the outer SNES object.
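
For example, a minimal setup sketch (the vector, matrix and context names
here are placeholders, not taken from your code):

  SNES snes;
  SNESCreate(PETSC_COMM_WORLD, &snes);
  SNESSetFunction(snes, r, FormFunction, &user);
  SNESSetJacobian(snes, A, P, FormJacobian, &user);
  SNESSetFromOptions(snes); /* -ksp_ and -pc_ options are then honored by the inner KSP and PC as well */

Inside FormJacobian() you then only need SNESGetKSP(), KSPSetOperators()
and KSPSolve() for the tangent solve.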



>> My attempt to do this looks like the summary of FormJacobian() in the
>> previous message:
>>
>>    - assemble Jacobian
>>    - assemble RHS
>>    - get KSP from the SNES passed to FormJacobian()
>>    - KSPSetFromOptions()
>>    - KSPSetOperators()
>>    - KSPSolve()
>>
>> I'm not sure that's the right approach, and it doesn't fully work: the
>> KSPSolve() in the summary, i.e. the one for the tangent vector, seems to
>> work fine, but the next KSPSolve() looks strange - it appears to use a
>> preconditioner even with -pc_type none, etc.  This makes me think I am
>> doing something seriously wrong.
>>
>
> 1) To get things going, just make them separate. Then we can optimize.
>
> 2) Run with -ksp_view -ksp_monitor_true_residual -ksp_converged_reason to see
> what is happening.
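>
> For example (substituting your actual executable and arguments), something like:
>
>   ./mysolver -pc_type none -ksp_view -ksp_monitor_true_residual -ksp_converged_reason
>
> then compare the output of the tangent solve and the SNES update solve.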
>
>   Matt
>
>
>> On Sat, Sep 29, 2018, 8:16 AM Matthew Knepley <knepley at gmail.com> wrote:
>>
>>> On Fri, Sep 28, 2018 at 11:13 PM zakaryah <zakaryah at gmail.com> wrote:
>>>
>>>> I'm working on a homotopy solver which follows a zero curve through
>>>> solution space.  A central aspect of this method is to calculate the vector
>>>> tangent to the curve, predict the next point, then correct using iterations
>>>> of, e.g., Newton's method, taken orthogonal to the tangent vector.
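>>>>
>>>> In symbols (my notation), one step is roughly: find the tangent t at
>>>> the current point y on the curve H(y) = 0, i.e. solve H'(y) t = 0 and
>>>> normalize so ||t|| = 1, predict y_pred = y + h*t for a step size h,
>>>> then run Newton-type corrector iterations constrained to the
>>>> hyperplane orthogonal to t.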
>>>>
>>>> Previously, we discussed the possibilities of implementing this within
>>>> PETSc's SNES.  Within my FormJacobian() function, I construct the linear
>>>> system which defines the tangent vector, solve it, then add the vector to
>>>> the nullspace of the Jacobian.  I think that in principle this can work,
>>>> but I suspect I'm doing something wrong.
>>>>
>>>> Here's a summary of the code within FormJacobian() (a rough code
>>>> sketch, in the same order, follows the list):
>>>>
>>>>    - Set values and assemble Jacobian matrix A - this is working fine
>>>>    - Set values and assemble RHS vector b for linear system defining
>>>>    tangent vector n - this is working fine
>>>>    - SNESGetKSP(snes,&my_ksp) - I thought it made sense to use the KSP
>>>>    associated with the SNES, hoping that PCs which use a factorization could
>>>>    be reused when the SNES calls KSPSolve() to calculate the update
>>>>    - KSPSetFromOptions(my_ksp) - not sure this is necessary, but one of
>>>>    my problems is setting options for this KSP from the command line, and
>>>>    even with this call it doesn't seem to work properly
>>>>    - MatSetNullSpace(A,NULL) - remove any existing null space from
>>>>    Jacobian
>>>>    - KSPSetOperators(my_ksp,A,P) - P is the other matrix in
>>>>    FormJacobian()
>>>>    - VecSet(n,0) - set initial guess to zero
>>>>    - KSPSolve(my_ksp,b,n) - these solves appear to work, i.e. use the
>>>>    options passed from the command line with -ksp_XXX or -pc_XXX
>>>>    - VecNormalize(n,NULL)
>>>>    - MatNullSpaceCreate(PetscObjectComm((PetscObject)A),PETSC_FALSE,1,&n,&nullsp)
>>>>    - MatSetNullSpace(A,nullsp)
>>>>    - MatNullSpaceDestroy(&nullsp)
>>>>    - return
>>>>
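>>>> In code, the tangent-vector part of that list is roughly the following
>>>> (same object names as above; error checking omitted):
>>>>
>>>>   SNESGetKSP(snes, &my_ksp);
>>>>   KSPSetFromOptions(my_ksp);
>>>>   MatSetNullSpace(A, NULL);   /* remove any existing null space */
>>>>   KSPSetOperators(my_ksp, A, P);
>>>>   VecSet(n, 0.0);             /* zero initial guess */
>>>>   KSPSolve(my_ksp, b, n);     /* solve for the tangent vector n */
>>>>   VecNormalize(n, NULL);
>>>>   MatNullSpaceCreate(PetscObjectComm((PetscObject)A), PETSC_FALSE, 1, &n, &nullsp);
>>>>   MatSetNullSpace(A, nullsp);
>>>>   MatNullSpaceDestroy(&nullsp);
>>>>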
>>>> The immediate problem is that the subsequent KSPSolve(), i.e. the one
>>>> called internally by SNESSolve(), behaves strangely.  For example, if I use
>>>> -pc_type none -ksp_monitor -ksp_monitor_true_residual, then the KSPSolve()
>>>> that I call within FormJacobian() looks correct - "preconditioned" norm and
>>>> true norm are identical, and both converge as I expect (i.e. slowly but
>>>> geometrically).  However, the subsequent KSPSolve(), internal to the
>>>> SNESSolve(), has large differences between the preconditioned norm and the
>>>> true norm.  In addition, the KSP does not converge in the true residual,
>>>> but I'll have a hard time debugging that without knowing how to properly
>>>> set the options.
>>>>
>>>
>>> We need to clear up the usage first. If you want EXACTLY the same solver
>>> for both solves, then reuse the KSP; otherwise do not. Does it work then?
>>>
>>>   Thanks,
>>>
>>>      Matt
>>>
>>>
>>>> I hope someone can help me see what I'm doing wrong.
>>>>
>>>> On Sun, Jul 22, 2018 at 9:09 PM zakaryah <zakaryah at gmail.com> wrote:
>>>>
>>>>> Thanks Matt and Barry,
>>>>>
>>>>> Matt - if I do the calculation in FormJacobian(), which makes by far
>>>>> the most sense and is what you suggested, do I need to set the
>>>>> operators of the SNES's KSP back to whatever they were before I set them?  The
>>>>> linear system I want to solve within FormJacobian() involves the Jacobian
>>>>> matrix itself, and I want to remove the "nullspace" from that same matrix
>>>>> within FormFunction().
>>>>>
>>>>> Barry - I'm trying to implement a homotopy solver.  In short, I have a
>>>>> system of n nonlinear equations in n variables, F(x), which is hard to
>>>>> solve because the Jacobian tends to become singular using standard
>>>>> methods.  I want to add an auxiliary variable, lambda, to create a
>>>>> homotopy: H(lambda,x) = lambda*F(x) + (1-lambda)G(x), where G is "easy to
>>>>> solve", and the idea is that the Jacobian of the n+1 variable system will
>>>>> not become singular along the curve H(lambda,x) = 0.
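>>>>>
>>>>> Explicitly, differentiating H gives dH/dlambda = F(x) - G(x) and
>>>>> dH/dx = lambda*F'(x) + (1-lambda)*G'(x), so the derivative of H with
>>>>> respect to (lambda,x) is the n x (n+1) matrix
>>>>> [ F(x) - G(x) | lambda*F'(x) + (1-lambda)*G'(x) ].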
>>>>>
>>>>> The method involves adding an equation to H so that the Jacobian H' is
>>>>> square.  The "submatrix" refers to the n x (n+1) matrix which represents
>>>>> the Jacobian without the added equation, whereas my FormJacobian() forms
>>>>> the entire (n+1) x (n+1) matrix H'.  I only refer to the submatrix because
>>>>> it has a null space, and I want to find a vector u spanning that null space
>>>>> by solving a linear system designed for this purpose, H' u = b, where b is
>>>>> not the zero vector.  H' itself has no null space, but I want to remove the
>>>>> projection of u from my SNES solution vector, since u IS in the null space
>>>>> of the submatrix.
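>>>>>
>>>>> If I also need to strip the projection of u out of a vector myself,
>>>>> e.g. inside FormFunction() as I mentioned above, I assume I can keep
>>>>> the MatNullSpace in my user struct (here as the hypothetical field
>>>>> user->unull) and do roughly:
>>>>>
>>>>>   /* remove the component of vec along u */
>>>>>   MatNullSpaceRemove(user->unull, vec);
>>>>>
>>>>> Please correct me if that is not the intended use.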
>>>>>
>>>>> The RHS vector b can be calculated outside the SNES solve.  I guess my
>>>>> FormJacobian() should look like this:
>>>>>
>>>>> PetscErrorCode FormJacobian(SNES snes, Vec x, Mat Amat, Mat Pmat, void *ctx) {
>>>>>   KSP ksp;
>>>>>   MatNullSpace unull;
>>>>>   user_struct *user = (user_struct*)ctx;
>>>>>
>>>>>   /* calculate and assemble Amat */
>>>>>
>>>>>   if (Pmat != Amat) {
>>>>>     /* calculate and assemble Pmat */
>>>>>   }
>>>>>
>>>>>   SNESGetKSP(snes, &ksp);
>>>>>   KSPSetOperators(ksp, Amat, Pmat);
>>>>>   /* solve H' u = b for the null vector u of the submatrix */
>>>>>   KSPSolve(ksp, user->b, user->u);
>>>>>
>>>>>   /* attach u as the null space of Amat so it is projected out */
>>>>>   MatNullSpaceCreate(PetscObjectComm((PetscObject)Amat), PETSC_FALSE,
>>>>> 1, &(user->u), &unull);
>>>>>   MatSetNullSpace(Amat, unull);
>>>>>   MatNullSpaceDestroy(&unull);
>>>>>   return 0;
>>>>> }
>>>>>
>>>>> Does this look right?
>>>>>
>>>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://www.cse.buffalo.edu/~knepley/
>>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>