On Mon, 14 Jun 2021 at 17:27, Anton Popov <popov@uni-mainz.de> wrote:

> On 14.06.21 15:04, Dave May wrote:
>
>> Hi Anton,
>
> Hi Dave,
>
>> On Mon, 14 Jun 2021 at 14:47, Anton Popov <popov@uni-mainz.de> wrote:
>>
>>> Hi Barry & Matt,
>>>
>>> thanks for your quick response. These options were exactly what I needed and expected:
>>>
>>> -pc_mg_galerkin pmat
>>> -pc_use_amat false
>>>
>>> I just assumed that this is the default behavior of the PC object.
>>>
>>> So to clarify my case: I don't use nonlinear multigrid. Galerkin coarsening is expected to deal with Pmat only, and it is enough if Amat implements a matrix-vector product for the Krylov accelerator.
>>>
>>> Matt, the reason for switching Amat during the iteration is the quite common Picard-Newton combination. The Jacobian matrix gives accurate updates close to the solution, but is rather unstable far from the solution. The Picard matrix (an approximate Jacobian) is quite the opposite: it is stable, but converges slowly. So the idea is to begin the iteration with the Picard matrix and switch to the Jacobian later.
>>>
>>> If assembled matrices are used, the standard SNES interface is just perfect: I can decide how to fill the matrices. But I would rather not bother with Jacobian assembly and want to use the built-in MFFD approximation instead. I did quite a few tests previously and figured out that MFFD is practically the same as a closed-form matrix-free Jacobian for the later stages of the iteration, and the Picard matrix still does a good job as a preconditioner. But it is important to start the iteration with Picard and only change to MFFD later.
>>>
>>> Is my workaround with the shell matrix acceptable, or is there a better solution?
>>
>> Given what you write, it sounds like you already have a good heuristic for when to switch from Picard to Newton. Hence I think the simplest option is just to use two separate SNES objects - one for performing Picard and one for Newton.
>
> Yes, I considered this option initially. Sometimes it is necessary to switch back and forth between the methods, so it becomes a bit messy to organize this in the code.
>
> But maybe if Newton fails after Picard, I should just reduce the time step and restart, instead of switching back to Picard. Thanks, Dave.

Oh yeah, for a nasty multiphysics problem a single SNES is likely not the end of the story! There is almost certainly an entire outer loop of several nonlinear solver strategies slapped together in some problem-specific manner in an effort to get the monolithic nonlinear residual down. Aborting the time step and dropping dt is often the thing you fall back to when all of those fail.

Thanks,
Dave

> Thanks,
> Anton
>
>> The stopping condition for the Picard object would encode your heuristic to abort earlier when the solution was deemed sufficiently accurate.
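>>
>> Roughly what I have in mind, as an untested sketch (the threshold switch_tol and the two solver objects snes_picard / snes_newton are placeholders you would set up yourself):
>>
>>   #include <petscsnes.h>
>>
>>   typedef struct {
>>     PetscReal switch_tol; /* hand over to Newton below this residual norm */
>>   } SwitchCtx;
>>
>>   /* Convergence test for the Picard SNES: run the default tests, but also
>>      stop early once the residual is small enough for Newton to take over. */
>>   static PetscErrorCode PicardConverged(SNES snes, PetscInt it, PetscReal xnorm, PetscReal snorm, PetscReal fnorm, SNESConvergedReason *reason, void *ctx)
>>   {
>>     SwitchCtx     *sw = (SwitchCtx *)ctx;
>>     PetscErrorCode ierr;
>>
>>     PetscFunctionBeginUser;
>>     ierr = SNESConvergedDefault(snes, it, xnorm, snorm, fnorm, reason, NULL);CHKERRQ(ierr);
>>     if (*reason == SNES_CONVERGED_ITERATING && fnorm < sw->switch_tol) *reason = SNES_CONVERGED_FNORM_ABS;
>>     PetscFunctionReturn(0);
>>   }
>>
>>   /* in the application: snes_picard uses the Picard operator, snes_newton uses MFFD */
>>   SwitchCtx sw = {1.0e-4}; /* hypothetical switching tolerance */
>>   ierr = SNESSetConvergenceTest(snes_picard, PicardConverged, &sw, NULL);CHKERRQ(ierr);
>>   ierr = SNESSolve(snes_picard, NULL, x);CHKERRQ(ierr); /* pre-iterate with Picard */
>>   ierr = SNESSolve(snes_newton, NULL, x);CHKERRQ(ierr); /* finish with Newton */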
>>
>> Thanks,
>> Dave
>>
>>> Thanks,
>>> Anton
>>>
>>> On 13.06.21 20:52, Barry Smith wrote:
>>>
>>>> Anton,
>>>>
>>>> -pc_mg_galerkin pmat
>>>>
>>>> Though it seems simple, there is some subtlety in swapping out matrices with SNES.
>>>>
>>>> When using multigrid with SNES there are at least five distinct uses of the Jacobian operator:
>>>>
>>>> 1. Perform the matrix-vector product in the line search to check the Wolfe sufficient decrease convergence criterion
>>>> 2. Perform the matrix-vector product for the Krylov accelerator of the system
>>>> 3. Perform smoothing on the finest level of MG
>>>> 4. Perform the matrix-vector product needed on the finest level of MG to compute the residual that will be restricted to the coarser level of multigrid
>>>> 5. Perform the Galerkin matrix triple product when using Galerkin products to compute the coarser-grid operators
>>>>
>>>> When one swaps out the mat, which of these does one wish to change? The first two seem to naturally go together, as do the last three. In your case I presume you want to swap for the first two, but always use pmat for the last three? To achieve this you will also need -pc_use_amat false.
>>>>
>>>> If you run with -info and -snes_view it will print out some of the information about which operator it is using for each case, but probably not all of them.
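>>>>
>>>> If you prefer to hard-wire this rather than pass command-line options, a rough sketch of the equivalent (inserted into the global options database before SNESSetFromOptions() is called) would be:
>>>>
>>>>   /* build the Galerkin coarse operators from pmat ... */
>>>>   ierr = PetscOptionsSetValue(NULL, "-pc_mg_galerkin", "pmat");CHKERRQ(ierr);
>>>>   /* ... and use pmat (not Amat) when applying the preconditioner */
>>>>   ierr = PetscOptionsSetValue(NULL, "-pc_use_amat", "false");CHKERRQ(ierr);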
>>>>
>>>> Note: if the pmat is actually an accurate computation of the Jacobian then it is likely best not to ever use a matrix-free product. It is only when pmat is approximated in some specific way that using the matrix-free product would be useful to ensure the "Newton" method actually computes a Newton step.
>>>>
>>>> Barry
>>>>
>>>> On Jun 13, 2021, at 11:21 AM, Matthew Knepley <knepley@gmail.com> wrote:
>>>>
>>>>> On Sun, Jun 13, 2021 at 10:55 AM Anton Popov <popov@uni-mainz.de> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I want a simple(?) thing. I want to decide and be able to assign the Jacobian matrix (Amat) on the fly within the FormJacobian function (i.e. during the Newton iteration) to one of the following options:
>>>>>>
>>>>>> 1) the built-in MFFD approximation
>>>>>> 2) the assembled preconditioner matrix (Pmat)
>>>>>>
>>>>>> I have not found this scenario demonstrated in any example, therefore I'm asking how to do that.
>>>>>>
>>>>>> Currently I do the following:
>>>>>>
>>>>>> 1) set up Amat as a shell matrix with a MATOP_MULT operation that simply retrieves a matrix object from its context and calls MatMult on it
>>>>>>
>>>>>> 2) if I need MFFD, I put a matrix generated with MatCreateSNESMF in the Amat context (of course I also call MatMFFDComputeJacobian before that)
>>>>>>
>>>>>> 3) if I need Pmat, I simply put Pmat in the Amat context
>>>>>>
>>>>>> 4) call MatAssemblyBegin/End on Amat
>>>>>>
>>>>>> So far so good.
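>>>>>>
>>>>>> In code, the essence of this workaround is roughly the following (a simplified, untested sketch; the Picard assembly into Pmat, the local size nloc, and the use_mffd switching flag are application-specific):
>>>>>>
>>>>>>   #include <petscsnes.h>
>>>>>>
>>>>>>   typedef struct {
>>>>>>     Mat       Jmffd;    /* created once with MatCreateSNESMF(snes,&Jmffd) */
>>>>>>     PetscBool use_mffd; /* the Picard -> MFFD switching decision */
>>>>>>   } AppCtx;
>>>>>>
>>>>>>   /* MATOP_MULT of the shell Amat: delegate to whatever Mat sits in the context */
>>>>>>   static PetscErrorCode ShellMult(Mat A, Vec x, Vec y)
>>>>>>   {
>>>>>>     Mat            inner;
>>>>>>     PetscErrorCode ierr;
>>>>>>
>>>>>>     PetscFunctionBeginUser;
>>>>>>     ierr = MatShellGetContext(A, &inner);CHKERRQ(ierr);
>>>>>>     ierr = MatMult(inner, x, y);CHKERRQ(ierr);
>>>>>>     PetscFunctionReturn(0);
>>>>>>   }
>>>>>>
>>>>>>   static PetscErrorCode FormJacobian(SNES snes, Vec x, Mat Amat, Mat Pmat, void *ptr)
>>>>>>   {
>>>>>>     AppCtx        *app = (AppCtx *)ptr;
>>>>>>     PetscErrorCode ierr;
>>>>>>
>>>>>>     PetscFunctionBeginUser;
>>>>>>     /* ... assemble the Picard operator into Pmat here ... */
>>>>>>     if (app->use_mffd) {
>>>>>>       /* refresh the MFFD base point, then expose the MFFD matrix through Amat */
>>>>>>       ierr = MatMFFDComputeJacobian(snes, x, app->Jmffd, app->Jmffd, NULL);CHKERRQ(ierr);
>>>>>>       ierr = MatShellSetContext(Amat, app->Jmffd);CHKERRQ(ierr);
>>>>>>     } else {
>>>>>>       /* early (Picard) stage: Amat simply forwards to Pmat */
>>>>>>       ierr = MatShellSetContext(Amat, Pmat);CHKERRQ(ierr);
>>>>>>     }
>>>>>>     ierr = MatAssemblyBegin(Amat, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
>>>>>>     ierr = MatAssemblyEnd(Amat, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
>>>>>>     PetscFunctionReturn(0);
>>>>>>   }
>>>>>>
>>>>>>   /* setup: Amat is a shell matrix whose MatMult is ShellMult */
>>>>>>   ierr = MatCreateShell(PETSC_COMM_WORLD, nloc, nloc, PETSC_DETERMINE, PETSC_DETERMINE, Pmat, &Amat);CHKERRQ(ierr);
>>>>>>   ierr = MatShellSetOperation(Amat, MATOP_MULT, (void (*)(void))ShellMult);CHKERRQ(ierr);
>>>>>>   ierr = SNESSetJacobian(snes, Amat, Pmat, FormJacobian, &app);CHKERRQ(ierr);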
>>>>>>
>>>>>> However, the shell Amat and the assembled Pmat generate a problem if Galerkin multigrid is requested as a preconditioner (I just test on 1 CPU):
>>>>>>
>>>>>> [0]PETSC ERROR: MatPtAP requires A, shell, to be compatible with P, seqaij (Misses composed function MatPtAP_shell_seqaij_C)
>>>>>> [0]PETSC ERROR: #1 MatPtAP()
>>>>>> [0]PETSC ERROR: #2 MatGalerkin()
>>>>>> [0]PETSC ERROR: #3 PCSetUp_MG()
>>>>>> [0]PETSC ERROR: #4 PCSetUp()
>>>>>> [0]PETSC ERROR: #5 KSPSetUp()
>>>>>> [0]PETSC ERROR: #6 KSPSolve()
>>>>>> [0]PETSC ERROR: #7 SNESSolve_NEWTONLS()
>>>>>> [0]PETSC ERROR: #8 SNESSolve()
>>>>>>
>>>>>> It seems like PETSc tries to apply Galerkin coarsening to the shell Amat matrix instead of the assembled Pmat. Why?
>>>>>>
>>>>>> Pmat is internally generated by SNES using a DMDA attached to SNES, so it has everything needed to define Galerkin coarsening. And it actually works if I just get rid of the shell Amat.
>>>>>>
>>>>>> I can get around this problem by wrapping a PC object in a shell matrix and passing it as Pmat. This is a rather ugly approach, so I wonder how to resolve the situation in a better way, if it is possible.
>>>>>
>>>>> Hi Anton,
>>>>>
>>>>> You are correct that there is no specific test, so I can believe that a bug might be lurking here. I believe the way it is supposed to work is that you set the type of Galerkin coarsening that you want:
>>>>>
>>>>> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCMGSetGalerkin.html
>>>>>
>>>>> So I am thinking you want 'pmat' only, and you would be in charge of making the coarse MFFD operator and telling PCMG about it. I could imagine that you might want us to automatically do that, but we would need some way to know that it is MFFD, and with the scheme above we do not have that.
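>>>>>
>>>>> For the 'pmat' setting, a minimal sketch (pulling the PC out of the SNES's KSP; the PC must be PCMG for this to apply) would be:
>>>>>
>>>>>   KSP ksp;
>>>>>   PC  pc;
>>>>>
>>>>>   ierr = SNESGetKSP(snes, &ksp);CHKERRQ(ierr);
>>>>>   ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
>>>>>   ierr = PCSetType(pc, PCMG);CHKERRQ(ierr);
>>>>>   /* build coarse operators by Galerkin products on Pmat only, never on the shell Amat */
>>>>>   ierr = PCMGSetGalerkin(pc, PC_MG_GALERKIN_PMAT);CHKERRQ(ierr);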
>>>>>
>>>>> What is the advantage of switching representations during the Newton iteration?
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Matt
>>>>>
>>>>>> Thanks.
>>>>>> Anton
>>>>>
>>>>> --
>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>>>>> -- Norbert Wiener
>>>>>
>>>>> https://www.cse.buffalo.edu/~knepley/