[petsc-users] Fwd: Implementing a homotopy solver

zakaryah zakaryah at gmail.com
Fri Oct 5 15:54:45 CDT 2018


Sorry again about the formatting - I struggle with notation in email.

I think I see what you are saying - my augmented matrix should differ in
its last row between the one I use to calculate the nullspace vector u and
the one the SNES uses as the Jacobian.  If I construct the SNES Jacobian A
so that u is truly in its nullspace, then I can just attach u as the
nullspace of A and let PETSc take care of the projection?  I will try this
- thanks for the excellent suggestion.

On Fri, Oct 5, 2018 at 4:42 PM Matthew Knepley <knepley at gmail.com> wrote:

> On Fri, Oct 5, 2018 at 4:15 PM zakaryah <zakaryah at gmail.com> wrote:
>
>> Thanks Matt - I was trying to write it in linear algebra notation, but
>> the nature of the problem might produce some confusion.  I was also
>> originally a bit confused by a fairly major typo in the paper I was
>> following, but I'm almost sure I have it clear now.  I'll try to be
>> concise - the meat is in the last two paragraphs.
>>
>> My system of nonlinear equations, F(x)=0, has dimension m.  Traditional
>> methods for solving it, like Newton's method, tend to reach points at
>> which the Jacobian becomes singular.  I'm trying the homotopy trick,
>> which involves introducing an additional scalar variable, s, and solving
>> the system H(x,s) = sF(x) + (1-s)(x - x_0) = 0, starting at s=0 and
>> following the zero curve to, hopefully, a solution at s=1.
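>>
>> To make that concrete, here is a minimal sketch of how I evaluate H with
>> Vec operations.  MyF() is a hypothetical stand-in for my actual residual
>> routine, and I assume real scalars:
>>
>>   #include <petscvec.h>
>>
>>   extern PetscErrorCode MyF(Vec, Vec, void*);  /* stand-in for F(x) */
>>
>>   /* H(x,s) = s*F(x) + (1-s)*(x - x0) */
>>   PetscErrorCode FormHomotopy(Vec x, PetscScalar s, Vec x0, Vec H, void *ctx)
>>   {
>>     PetscErrorCode ierr;
>>
>>     PetscFunctionBegin;
>>     ierr = MyF(x, H, ctx);CHKERRQ(ierr);             /* H <- F(x)     */
>>     ierr = VecScale(H, s);CHKERRQ(ierr);             /* H <- s F(x)   */
>>     ierr = VecAXPY(H, 1.0 - s, x);CHKERRQ(ierr);     /* H += (1-s) x  */
>>     ierr = VecAXPY(H, -(1.0 - s), x0);CHKERRQ(ierr); /* H -= (1-s) x0 */
>>     PetscFunctionReturn(0);
>>   }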
>>
>> The homotopy map H only has m components, so its Jacobian J is m x
>> (m+1), and has rank m.  I need to find a vector u in its null space: J u
>> = 0.  Note that u is size m+1.  Rather than solve J u = 0, I make an
>> augmented matrix A, which is (m+1)x(m+1), and consists of J in its first m
>> rows, then a somewhat arbitrary vector in the last row.  Then the vector
>> u is the solution to A u = b, where b is zero everywhere except its last
>> element.  This solve seems to be working fine.
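>>
>> For reference, this is roughly how I fill the last row of A and build b
>> (a sketch of my actual code; sequential case, with v whatever last-row
>> vector I choose and c = 1):
>>
>>   /* A has J in rows 0..m-1; set row m to v and b to e_{m+1} */
>>   static PetscErrorCode AugmentSystem(Mat A, Vec b, const PetscScalar *v, PetscInt m)
>>   {
>>     PetscErrorCode ierr;
>>     PetscInt       row = m, j, *cols;
>>
>>     PetscFunctionBegin;
>>     ierr = PetscMalloc1(m+1, &cols);CHKERRQ(ierr);
>>     for (j = 0; j < m+1; ++j) cols[j] = j;
>>     ierr = MatSetValues(A, 1, &row, m+1, cols, v, INSERT_VALUES);CHKERRQ(ierr);
>>     ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
>>     ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
>>     ierr = VecSet(b, 0.0);CHKERRQ(ierr);
>>     ierr = VecSetValue(b, m, 1.0, INSERT_VALUES);CHKERRQ(ierr); /* any c != 0 */
>>     ierr = VecAssemblyBegin(b);CHKERRQ(ierr);
>>     ierr = VecAssemblyEnd(b);CHKERRQ(ierr);
>>     ierr = PetscFree(cols);CHKERRQ(ierr);
>>     PetscFunctionReturn(0);
>>   }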
>>
>> The SNES should then solve for a particular solution y of the Newton
>> update, J y = -H(x,s), with the right-hand side evaluated at the current
>> iterate.  A (unique) particular solution can also be found by solving
>> A y = [-H(x,s) 0]^T.  Finally - and this is the step I can't figure out
>> how to implement properly - I want to find the norm-minimizing solution
>> z for the Newton update:
>> z = y - u (y^T u)/(u^T u).
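>>
>> The projection itself is simple with Vec operations - a sketch, assuming
>> real scalars, which overwrites y with z:
>>
>>   PetscScalar yu, uu;
>>
>>   ierr = VecDot(y, u, &yu);CHKERRQ(ierr);
>>   ierr = VecDot(u, u, &uu);CHKERRQ(ierr);     /* uu = 1 if u is normalized */
>>   ierr = VecAXPY(y, -yu/uu, u);CHKERRQ(ierr); /* y <- y - u (y^T u)/(u^T u) */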
>>
>> I doubt there is any way to do this elegantly within the existing
>> routines, because u is not in the nullspace of A.  Rather, it's in the
>> nullspace of J, which is the submatrix I care about.  I think what I need
>> is a way to just apply the orthogonal projection in the preceding
>> paragraph, after the SNES solves the linear system but before the Newton
>> update is applied.  I don't know the SNES internals well at all.  Is it
>> possible to hook this in somehow, or put it in an existing function?
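>>
>> One idea I had, in case attaching the nullspace doesn't work out: I
>> believe a line-search pre-check runs after the linear solve but before
>> the update is applied, so perhaps the projection can go there.  An
>> untested sketch, with u passed through the context:
>>
>>   static PetscErrorCode ProjectUpdate(SNESLineSearch ls, Vec X, Vec Y,
>>                                       PetscBool *changed, void *ctx)
>>   {
>>     Vec            u = (Vec)ctx;  /* tangent vector, J u = 0 */
>>     PetscScalar    yu, uu;
>>     PetscErrorCode ierr;
>>
>>     PetscFunctionBegin;
>>     ierr = VecDot(Y, u, &yu);CHKERRQ(ierr);
>>     ierr = VecDot(u, u, &uu);CHKERRQ(ierr);
>>     ierr = VecAXPY(Y, -yu/uu, u);CHKERRQ(ierr); /* project u out of the update */
>>     *changed = PETSC_TRUE;
>>     PetscFunctionReturn(0);
>>   }
>>
>>   /* registration, after SNESGetLineSearch(snes, &ls): */
>>   ierr = SNESLineSearchSetPreCheck(ls, ProjectUpdate, (void*)u);CHKERRQ(ierr);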
>>
>
> I do not claim to fully understand the above (it would be easier for me
> if the equations were written explicitly on their own lines). However, the
> issue with nullspaces is soluble, I think. So you say that
>
>   J \in R^{m \times (m+1)} is full rank
>
> so that its nullspace has dimension 1 and is spanned by u,
>
>   J u = 0.
>
> Now you augment J to get a square matrix A by adding a row,
>
>   / J   \       / J u   \     / 0 \
>   |     |  u =  |       |  =  |   |
>   \ v^T /       \ v^T u /     \ 0 /
>
> if v is orthogonal to u. Thus, just choose v from the m-dimensional
> space orthogonal to u.
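>
> For example, something like this sketch (real arithmetic) produces such a
> v by projecting a random vector against u:
>
>   Vec         v;
>   PetscScalar vu, uu;
>
>   ierr = VecDuplicate(u, &v);CHKERRQ(ierr);
>   ierr = VecSetRandom(v, NULL);CHKERRQ(ierr);
>   ierr = VecDot(v, u, &vu);CHKERRQ(ierr);
>   ierr = VecDot(u, u, &uu);CHKERRQ(ierr);
>   ierr = VecAXPY(v, -vu/uu, u);CHKERRQ(ierr);  /* now v^T u = 0 */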
>
>   Thanks,
>
>      Matt
>
>
>> Thanks again for all your help.
>>
>>
>> On Oct 1, 2018 6:49 AM, "Matthew Knepley" <knepley at gmail.com> wrote:
>>
>> On Sun, Sep 30, 2018 at 6:06 PM zakaryah <zakaryah at gmail.com> wrote:
>>
>>> OK, thanks.
>>>
>>> I'm using a composite DM, DM_packer.  To make a separate KSP, I did the
>>> following in the FormJacobian() routine, after assembling the Jacobian
>>> matrix A and the RHS for the tangent vector, b (a consolidated sketch
>>> follows the list):
>>>
>>>    - KSPCreate(PETSC_COMM_WORLD,&ksp)
>>>    - KSPSetDM(ksp,DM_packer)
>>>    - KSPSetDMActive(ksp,PETSC_FALSE) - because I want to set the
>>>    operators
>>>    - KSPSetOperators(ksp,A,P)
>>>    - VecSet(n,0) - set initial guess to zero
>>>    - KSPSolve(ksp,b,n) - this solve works correctly
>>>    - VecNormalize(n,NULL)
>>>    -
>>>    MatNullSpaceCreate(PetscObjectComm((PetscObject)A),PETSC_FALSE,1,&n,&nullsp)
>>>    - MatSetNullSpace(A,nullsp)
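>>>
>>> Written out (error checking abbreviated, cleanup included), the
>>> sequence is:
>>>
>>>   KSP          ksp;
>>>   Vec          n;
>>>   MatNullSpace nullsp;
>>>
>>>   ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
>>>   ierr = KSPSetDM(ksp, DM_packer);CHKERRQ(ierr);
>>>   ierr = KSPSetDMActive(ksp, PETSC_FALSE);CHKERRQ(ierr); /* I set the operators */
>>>   ierr = KSPSetOperators(ksp, A, P);CHKERRQ(ierr);
>>>   ierr = VecDuplicate(b, &n);CHKERRQ(ierr);
>>>   ierr = VecSet(n, 0.0);CHKERRQ(ierr);                   /* zero initial guess */
>>>   ierr = KSPSolve(ksp, b, n);CHKERRQ(ierr);              /* tangent vector     */
>>>   ierr = VecNormalize(n, NULL);CHKERRQ(ierr);
>>>   ierr = MatNullSpaceCreate(PetscObjectComm((PetscObject)A), PETSC_FALSE, 1, &n, &nullsp);CHKERRQ(ierr);
>>>   ierr = MatSetNullSpace(A, nullsp);CHKERRQ(ierr);
>>>   ierr = MatNullSpaceDestroy(&nullsp);CHKERRQ(ierr);
>>>   ierr = KSPDestroy(&ksp);CHKERRQ(ierr);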
>>>
>>> Then, with -snes_type newtonls -pc_type none -ksp_monitor
>>> -ksp_monitor_true_residual -ksp_view, the output for the KSPSolve described
>>> above, i.e. for the tangent vector, looks correct:
>>>
>>>   0 KSP preconditioned resid norm 1.000000000000e+03 true resid norm
>>> 1.000000000000e+03 ||r(i)||/||b|| 1.000000000000e+00
>>> ...
>>>
>>> 185 KSP preconditioned resid norm 9.900713131874e-03 true resid norm
>>> 9.900713131904e-03 ||r(i)||/||b|| 9.900713131904e-06
>>>
>>> Linear solve converged due to CONVERGED_RTOL iterations 185
>>>
>>> KSP Object: 1 MPI processes
>>>
>>>   type: gmres
>>>
>>>     restart=30, using Classical (unmodified) Gram-Schmidt
>>> Orthogonalization with no iterative refinement
>>>
>>>     happy breakdown tolerance 1e-30
>>>
>>>   maximum iterations=10000, initial guess is zero
>>>
>>>   tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>>
>>>   left preconditioning
>>>
>>>   using PRECONDITIONED norm type for convergence test
>>>
>>> PC Object: 1 MPI processes
>>>
>>>   type: none
>>>
>>>   linear system matrix = precond matrix:
>>>
>>>   Mat Object: 1 MPI processes
>>>
>>>     type: seqaij
>>>
>>>     rows=78247, cols=78247
>>>
>>>     total: nonzeros=6063481, allocated nonzeros=6063481
>>>
>>>     total number of mallocs used during MatSetValues calls =0
>>>
>>>       using I-node routines: found 26083 nodes, limit used is 5
>>>
>>>
>>> But the next KSP, i.e. the one in the SNES, doesn't converge in the true
>>> residual:
>>>
>>>
>>>   0 KSP preconditioned resid norm 2.045599092896e-04 true resid norm
>>> 2.803828296212e-04 ||r(i)||/||b|| 1.000000000000e+00
>>>
>>> 191 KSP preconditioned resid norm 2.009941278534e-09 true resid norm
>>> 1.636010142734e-04 ||r(i)||/||b|| 5.834915586465e-01
>>>
>>>
>>> KSP Object: 1 MPI processes
>>>
>>>   type: gmres
>>>
>>>     restart=30, using Classical (unmodified) Gram-Schmidt
>>> Orthogonalization with no iterative refinement
>>>
>>>     happy breakdown tolerance 1e-30
>>>
>>>   maximum iterations=10000, initial guess is zero
>>>
>>>   tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>>
>>>   left preconditioning
>>>
>>>   using PRECONDITIONED norm type for convergence test
>>>
>>> PC Object: 1 MPI processes
>>>
>>>   type: none
>>>
>>>   linear system matrix = precond matrix:
>>>
>>>   Mat Object: 1 MPI processes
>>>
>>>     type: seqaij
>>>
>>>     rows=78247, cols=78247
>>>
>>>     total: nonzeros=6063481, allocated nonzeros=6063481
>>>
>>>     total number of mallocs used during MatSetValues calls =0
>>>
>>>       has attached null space
>>>
>>>       using I-node routines: found 26083 nodes, limit used is 5
>>>
>>>
>>> My guess about what's going on is that the tangent vector n isn't really
>>> in the nullspace of A.
>>>
>>
>> I do not understand the rest, but this is the problem. Maybe it would
>> make more sense if you wrote things
>> in linear algebraic notation.
>>
>>   Thanks,
>>
>>      Matt
>>
>>
>>> Rather, it's in the nullspace of the m x (m+1) submatrix of A.  So,
>>> A n = c e_{m+1}, where c is an arbitrary constant and e_{m+1} is the
>>> (m+1)-th basis vector.  The nonlinear function also has an added row,
>>> F_{m+1}, which is set to zero in the FormFunction() routine.  I don't
>>> care about the value of F_{m+1}, but I suppose that if it's included in
>>> the true residual, and NOT in the "preconditioned" residual, even with
>>> pc_type none, then it will be tricky to diagnose the performance.
>>> Should I be using my own routine to evaluate the residual, so that
>>> F_{m+1} is not included?
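>>>
>>> If it comes to that, I was thinking of a custom monitor along these
>>> lines (an untested sketch; idx is the global index of the F_{m+1} row,
>>> and I run on a single process):
>>>
>>>   static PetscErrorCode MonitorTrimmed(KSP ksp, PetscInt it, PetscReal rnorm, void *ctx)
>>>   {
>>>     PetscInt       idx = *(PetscInt*)ctx;
>>>     Vec            r;
>>>     PetscReal      norm;
>>>     PetscErrorCode ierr;
>>>
>>>     PetscFunctionBegin;
>>>     ierr = KSPBuildResidual(ksp, NULL, NULL, &r);CHKERRQ(ierr);   /* true residual */
>>>     ierr = VecSetValue(r, idx, 0.0, INSERT_VALUES);CHKERRQ(ierr); /* drop F_{m+1} */
>>>     ierr = VecAssemblyBegin(r);CHKERRQ(ierr);
>>>     ierr = VecAssemblyEnd(r);CHKERRQ(ierr);
>>>     ierr = VecNorm(r, NORM_2, &norm);CHKERRQ(ierr);
>>>     ierr = PetscPrintf(PETSC_COMM_WORLD, "%3D KSP trimmed true resid norm %14.12e\n", it, (double)norm);CHKERRQ(ierr);
>>>     ierr = VecDestroy(&r);CHKERRQ(ierr);
>>>     PetscFunctionReturn(0);
>>>   }
>>>
>>>   /* registration: ierr = KSPMonitorSet(ksp, MonitorTrimmed, &idx, NULL); */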
>>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>>
>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>