<div dir="ltr"><div>Perhaps Barry is right that you want Picard, but suppose you really want Newton.</div><div><br></div>"<span style="font-family:Helvetica,Arial,sans-serif,serif,EmojiFont;font-size:14.6667px">This problem can be solved efficiently using the Sherman-Morrison formula" Well, maybe. The main assumption here is that inverting K is cheap. I see two things you can do in a straightforward way:</span><div><span style="font-family:Helvetica,Arial,sans-serif,serif,EmojiFont;font-size:14.6667px"><br></span></div><div><span style="font-family:Helvetica,Arial,sans-serif,serif,EmojiFont;font-size:14.6667px"> 1) Use MatCreateLRC() </span><a href="https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateLRC.html">https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateLRC.html</a> to create the Jacobian</div><div> and solve using an iterative method. If you pass just K was the preconditioning matrix, you can use common PCs.</div><div><br></div><div> 2) We only implemented MatMult() for LRC, but you could stick your SMW code in for MatSolve_LRC if you really want to factor K. We would</div><div> of course help you do this.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Feb 5, 2020 at 1:36 AM Smith, Barry F. via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
On Wed, Feb 5, 2020 at 1:36 AM Smith, Barry F. via petsc-users <petsc-users@mcs.anl.gov> wrote:

I am not sure of everything in your email, but it sounds like you want to use a "Picard" iteration to solve [K(u) − kaa^T]Δu = −F(u). That is, solve

    A(u^{n}) (u^{n+1} - u^{n}) = F(u^{n}) - A(u^{n}) u^{n},    where A(u) = K(u) - kaa^T

PETSc provides code for this with SNESSetPicard() (see the manual pages). I don't know if petsc4py has bindings for this.

Adding missing Python bindings is not terribly difficult, and you may be able to do it yourself if this is the approach you want.
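If the binding turns out to be missing, the same iteration can also be set up with the generic SNES callbacks that petsc4py does expose, since the Picard update above is just Newton with the true Jacobian replaced by A(u) = K(u) - kaa^T. A very rough, untested sketch; assemble_residual, assemble_A, and the objects A, K, u are placeholders for your own assembly code and PETSc objects:

    from petsc4py import PETSc

    def form_function(snes, x, r):
        # r <- A(x) x - F(x), using your own assembly routine (placeholder name)
        assemble_residual(x, r)

    def form_picard_matrix(snes, x, J, P):
        # Assemble A(x) = K(x) - k a a^T into J (and K alone into P, if they differ);
        # Newton with this matrix in place of the true Jacobian is the Picard
        # iteration written above (placeholder assembly helper).
        assemble_A(x, J, P)

    snes = PETSc.SNES().create()
    r = K.createVecLeft()                  # residual work vector
    snes.setFunction(form_function, r)
    snes.setJacobian(form_picard_matrix, A, K)   # A is applied, K is used to build the PC
    snes.setFromOptions()
    snes.solve(None, u)                    # u: solution vector, also the initial guess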

   Barry

> On Feb 4, 2020, at 5:07 PM, Olek Niewiarowski <aan2@princeton.edu> wrote:
>
> Hello,
> I am a FEniCS user but new to petsc4py. I am trying to modify the KSP solver through the SNES object to implement the Sherman-Morrison formula (e.g. http://fourier.eng.hmc.edu/e176/lectures/algebra/node6.html). I am solving a nonlinear system of the form [K(u) − kaa^T]Δu = −F(u). Here the Jacobian matrix K is modified by the term kaa^T, where k is a scalar. Notably, K is sparse, while the term kaa^T results in a dense matrix. This problem can be solved efficiently using the Sherman-Morrison formula:
>
>     [K − kaa^T]^{-1} = K^{-1} − (k K^{-1} a a^T K^{-1}) / (1 + k a^T K^{-1} a)
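Spelling out how this formula is applied (this is what the MatrixFreePC.apply() method quoted further below implements): for a given right-hand side x, compute y = K^{-1} x, then alpha = k a^T y / (1 + k a^T K^{-1} a), and finally y <- y - alpha K^{-1} a. Since K^{-1} a and the denominator do not change, each application costs one solve with K plus a dot product and an axpy.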
> I have managed to successfully implement this at the linear solve level (by modifying the KSP solver) inside a custom Newton solver in Python by following an incomplete tutorial at https://www.firedrakeproject.org/petsc-interface.html#defining-a-preconditioner :
> while (norm(delU) > alpha):  # while not converged
>
>     self.update_F()            # call to method to update r.h.s. form
>     self.update_K()            # call to update the Jacobian form
>     K = assemble(self.K)       # assemble the Jacobian matrix
>     F = assemble(self.F)       # assemble the r.h.s. vector
>     a = assemble(self.a_form)  # assemble the a_form (see Sherman-Morrison formula)
>
>     for bc in self.mem.bc:     # apply boundary conditions
>         bc.apply(K, F)
>         bc.apply(K, a)
>
>     B = PETSc.Mat().create()
>
>     # Assemble the bilinear form that defines A and get the concrete
>     # PETSc matrix
>     A = as_backend_type(K).mat()  # get the PETSc objects for K and a
>     u = as_backend_type(a).vec()
>
>     # Build the matrix "context"  # see firedrake docs
>     Bctx = MatrixFreeB(A, u, u, self.k)
>
>     # Set up B
>     # B is the same size as A
>     B.setSizes(*A.getSizes())
>     B.setType(B.Type.PYTHON)
>     B.setPythonContext(Bctx)
>     B.setUp()
>
>     ksp = PETSc.KSP().create()  # create the KSP linear solver object
>     ksp.setOperators(B)
>     ksp.setUp()
>     pc = ksp.pc
>     pc.setType(pc.Type.PYTHON)
>     pc.setPythonContext(MatrixFreePC())
>     ksp.setFromOptions()
>
>     solution = delU             # the incremental displacement at this iteration
>
>     b = as_backend_type(-F).vec()
>     delu = solution.vector().vec()
>
>     ksp.solve(b, delu)
>
>     self.mem.u.vector().axpy(0.25, self.delU.vector())  # poor man's linesearch
>     counter += 1
> Here is the corresponding petsc4py code adapted from the firedrake docs:
>
> class MatrixFreePC(object):
>
>     def setUp(self, pc):
>         B, P = pc.getOperators()
>         # extract the MatrixFreeB object from B
>         ctx = B.getPythonContext()
>         self.A = ctx.A
>         self.u = ctx.u
>         self.v = ctx.v
>         self.k = ctx.k
>         # Here we build the PC object that uses the concrete,
>         # assembled matrix A. We will use this to apply the action
>         # of A^{-1}
>         self.pc = PETSc.PC().create()
>         self.pc.setOptionsPrefix("mf_")
>         self.pc.setOperators(self.A)
>         self.pc.setFromOptions()
>         # Since u and v do not change, we can build the denominator
>         # and the action of A^{-1} on u only once, in the setup
>         # phase.
>         tmp = self.A.createVecLeft()
>         self.pc.apply(self.u, tmp)
>         self._Ainvu = tmp
>         self._denom = 1 + self.k*self.v.dot(self._Ainvu)
>
>     def apply(self, pc, x, y):
>         # y <- A^{-1} x
>         self.pc.apply(x, y)
>         # alpha <- k (v^T A^{-1} x) / (1 + k v^T A^{-1} u)
>         alpha = (self.k*self.v.dot(y)) / self._denom
>         # y <- y - alpha * A^{-1} u
>         y.axpy(-alpha, self._Ainvu)
>
>
> class MatrixFreeB(object):
>
>     def __init__(self, A, u, v, k):
>         self.A = A
>         self.u = u
>         self.v = v
>         self.k = k
>
>     def mult(self, mat, x, y):
>         # y <- A x
>         self.A.mult(x, y)
>
>         # alpha <- v^T x
>         alpha = self.v.dot(x)
>
>         # y <- y + alpha*u
>         y.axpy(alpha, self.u)
> However, this approach is not efficient, as it requires many iterations due to the Newton step being fixed, so I would like to implement it using SNES and use a line search. Unfortunately, I have not been able to find any documentation/tutorial on how to do so. Provided I have the FEniCS forms for F, K, and a, I'd like to do something along the lines of:
>
>     solver = PETScSNESSolver()  # the FEniCS SNES wrapper
>     snes = solver.snes()        # the petsc4py SNES object
>     ## ??
>     ksp = snes.getKSP()
>     # set ksp options similar to above
>     solver.solve()
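One way the "## ??" part might be filled in, sketched with hypothetical pieces: keep the MatrixFreeB/MatrixFreePC classes above, hand the Python-context matrix B (with K as the preconditioning matrix, both assembled as in the loop above) to the SNES through a Jacobian callback, and switch the SNES's KSP over to the Python preconditioner. Whether this composes cleanly with PETScSNESSolver's own callbacks is untested, and update_K_and_a() is a placeholder for re-assembling K and a at the current iterate:

    snes = solver.snes()                     # petsc4py SNES from the FEniCS wrapper

    def form_jacobian(snes_, x, J, P):
        # Re-assemble K and a in place at the current iterate so that the
        # MatrixFreeB context held by J (= B) and the matrix P (= K) are
        # up to date (placeholder helper).
        update_K_and_a(x)

    snes.setJacobian(form_jacobian, B, K)    # B applied by the KSP, K handed to the PC

    ksp = snes.getKSP()
    pc = ksp.getPC()
    pc.setType(PETSc.PC.Type.PYTHON)
    pc.setPythonContext(MatrixFreePC())
    ksp.setFromOptions()
    solver.solve()

With the operators set this way, the SNES's own line search would take the place of the fixed 0.25 damping in the hand-written loop.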
>
> I would be very grateful if anyone could help or point me to a reference or demo that does something similar (or maybe a completely different way of solving the problem!).
> Many thanks in advance!
> Alex

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/