[petsc-users] SNESSetFunction and MatMFFDSetFunction

Song Gao song.gao2 at mail.mcgill.ca
Wed Jan 8 10:47:16 CST 2014


Dear Barry,

Thanks for the reply. I have basically implemented your code. I have two
questions.

The first is that I'm working in Fortran, so I can't use MatShellSetContext
to attach the context structure to the shell matrix. Instead I made the
variable I want to attach, myJctx, a global. Is there another way to do
that?
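
To make the question concrete, here is a minimal sketch of one way to hold
the context globally, a module with the save attribute (the module name and
the finclude paths are just placeholders for whatever the build actually
uses; this assumes the PETSc 3.4-era Fortran headers):

      module myjctx_mod
#include "finclude/petscsysdef.h"
#include "finclude/petscvecdef.h"
#include "finclude/petscmatdef.h"
      implicit none
      type MyJContext
        Mat mf
        Vec D
        Vec work
      end type MyJContext
      type(MyJContext), save :: myJctx
      end module myjctx_mod

Then mymultply and flowsoljac can simply "use myjctx_mod" to see myJctx,
instead of relying on a common block.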

The second question is about a test I did. With the D vector set to zero,
the shell multiply reduces to y = J x + D x = J x, so I expected the code
in which I explicitly set the matrix-free Jacobian and the code that uses
the runtime option -snes_mf to give the same residual history. But they
don't.

Here is the history for
-snes_monitor -ksp_max_it 5 -snes_converged_reason -snes_max_it 2
-ksp_converged_reason -ksp_monitor -snes_max_linear_solve_fail 300
-pc_type none -snes_view -snes_linesearch_type basic

  0 SNES Function norm 4.272952196300e-02
    0 KSP Residual norm 4.272952196300e-02
    1 KSP Residual norm 4.234712668718e-02
    2 KSP Residual norm 3.683301946690e-02
    3 KSP Residual norm 3.465586805169e-02
    4 KSP Residual norm 3.452667066800e-02
    5 KSP Residual norm 3.451739518719e-02
  Linear solve did not converge due to DIVERGED_ITS iterations 5
  1 SNES Function norm 4.203973403992e-02
    0 KSP Residual norm 4.203973403992e-02
    1 KSP Residual norm 4.203070641961e-02
    2 KSP Residual norm 4.202387940443e-02
    3 KSP Residual norm 4.183739347023e-02
    4 KSP Residual norm 4.183629424897e-02
    5 KSP Residual norm 4.159456024825e-02
  Linear solve did not converge due to DIVERGED_ITS iterations 5
  2 SNES Function norm 4.200901009970e-02
Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 2
Here is the history for
-snes_mf -snes_monitor -ksp_max_it 5 -snes_converged_reason
-snes_max_it 2 -ksp_converged_reason -ksp_monitor
-snes_max_linear_solve_fail 300 -pc_type none -snes_view
-snes_linesearch_type basic

  0 SNES Function norm 4.272952196300e-02
    0 KSP Residual norm 4.272952196300e-02
    1 KSP Residual norm 4.270267664569e-02
    2 KSP Residual norm 3.690026921954e-02
    3 KSP Residual norm 3.681740616743e-02
    4 KSP Residual norm 3.464377294985e-02
    5 KSP Residual norm 3.464376048536e-02
  Linear solve did not converge due to DIVERGED_ITS iterations 5
  1 SNES Function norm 3.461633424373e-02
    0 KSP Residual norm 3.461633424373e-02
    1 KSP Residual norm 3.461632119472e-02
    2 KSP Residual norm 3.406130197963e-02
    3 KSP Residual norm 3.406122155751e-02
    4 KSP Residual norm 3.403393397001e-02
    5 KSP Residual norm 3.403367748538e-02
  Linear solve did not converge due to DIVERGED_ITS iterations 5
  2 SNES Function norm 3.403367847002e-02
Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 2

We can see that the residual norms already differ at SNES iteration 0,
KSP iteration 1. Did I do something wrong here?

The code looks like this:

      type MyJContext
        Mat mf
        Vec D
        Vec work
      end type MyJContext
c
      type(MyJContext) myJctx

--------------------------------------------------------------------------
          call SNESCreate(PETSC_COMM_WORLD, snes, ierpetsc)
          call SNESSetFunction(snes, pet_rhs_snes, flowsolrhs, ctx,
     @       ierpetsc)
c
c     matrix-free operator differenced from the same residual routine
          call MatCreateSNESMF(snes, myJctx%mf, ierpetsc)
          call MatMFFDSetFunction(myJctx%mf, flowsolrhs, ctx, ierpetsc)
          call VecDuplicate(pet_solu_snes, myJctx%D, ierpetsc)
          call VecDuplicate(pet_solu_snes, myJctx%work, ierpetsc)
c     D is set to zero for this test
          call VecSet(myJctx%D, 0.0D-3, ierpetsc)
c     shell matrix whose MATOP_MULT applies the matrix-free product
          call MatCreateShell(PETSC_COMM_WORLD, pet_nfff, pet_nfff,
     @       PETSC_DETERMINE, PETSC_DETERMINE, ctx, myJ, ierpetsc)
          call MatShellSetOperation(myJ, MATOP_MULT, mymultply,
     @       ierpetsc)
          call SNESSetJacobian(snes, myJ, pet_mat_pre,
     @      flowsoljac, ctx, ierpetsc)

--------------------------------------------------------------------------
      subroutine mymultply ( A, x, y, ierpet )
      Mat  :: A
      Vec  :: x, y
      PetscErrorCode :: ierpet
c
c     apply the matrix-free product: y = J x (D is zero in this test)
      call MatMult(myJctx%mf, x, y, ierpet)
c
      end
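
In case it is relevant, here is a sketch of the full product y = J x + D x
that I would use once D is nonzero; it is simply a Fortran translation of
the C multiply in your reply below (the subroutine name and the Fortran
argument order are my assumptions):

      subroutine mymultply_full ( A, x, y, ierpet )
      Mat  :: A
      Vec  :: x, y
      PetscErrorCode :: ierpet
c     matrix-free product: y = J x
      call MatMult(myJctx%mf, x, y, ierpet)
c     work = D .* x, then y = y + work, i.e. y = J x + D x
      call VecPointwiseMult(myJctx%work, myJctx%D, x, ierpet)
      call VecAXPY(y, 1.0d0, myJctx%work, ierpet)
      end
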
--------------------------------------------------------------------------
      subroutine flowsoljac ( snes, pet_solu_snes, pet_mat_snes,
     @  pet_mat_pre, flag,  ctxx,   ierrpet )

c     explicitly assemble the pet_mat_pre matrix here
c     .........
c     .........

c     set the base point for the matrix-free differencing
      call MatMFFDSetBase(myJctx%mf, pet_solu_snes, PETSC_NULL_INTEGER,
     @   ierrpet)
      end
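
For comparison with the C example below, which calls SNESGetFunction and
then passes the function vector to MatMFFDSetBase, here is a sketch of
passing the residual vector instead of PETSC_NULL_INTEGER (this assumes
pet_rhs_snes, the vector I gave to SNESSetFunction, holds the current
residual when flowsoljac is called):

c     sketch: use the current residual as the differencing base value
      call MatMFFDSetBase(myJctx%mf, pet_solu_snes, pet_rhs_snes,
     @   ierrpet)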




On Fri, Jan 3, 2014 at 6:46 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:

>
>   Dario,
>
>      Your discussion below (SOR, ILU(200)) seems to imply that you are
> providing some actual explicit representation of the Jacobian, not just
> doing something completely matrix free. Is this correct? But the PETSc
> MatMFFD() is completely matrix free; it provides only a matrix-vector
> product and no access to the matrix entries, hence I am slightly confused.
>
>     If you wish to use for the Jacobian something like D + J and do it
> completely matrix free, then rather than monkeying with “changing the
> function” I would
> use the “correct” function to compute J x using a matrix-free multiply and
> then apply D as an additional operation. Hence you would do
> something like
>
>       typedef struct {  /* data structure to store the usual matrix free
> matrix and the additional diagonal matrix */
>           Mat mf;
>           Vec D;
>           Vec work;
>       } MyJContext;
>
>       MyJContext myJctx;
>
>       MatCreateSNESMF(snes,&myJctx.mf);   /* create the usual MFFD matrix
> using the real nonlinear function */
>
> MatMFFDSetFunction(myJctx.mf,yournonlinearfunction,nonlinearfunctionctx);
>       VecCreate(comm,&myJctx.D);
>       /*  set the correct sizes for D and fill up with your diagonal
> matrix entries */
>       VecDuplicate(myJctx.D,&myJctx.work);
>       MatCreateShell(comm,…. &myJ);
>       MatShellSetOperation(myJ,MATOP_MULT, mymultiply);
>       MatShellSetContext(myJ,&myJctx);
>       SNESSetJacobian(snes,myJ,myJ, myJFunction,NULL);
>
>       where
>
>       PetscErrorCode mymultply(Mat A,Vec x,Vec y)  /* computes y = J x +
> D x */
>       {
>           MyJContext *myJctx;
>
>           MatShellGetContext(A,&myJctx);
>           MatMult(myJctx->mf,x,y);
>           VecPointwiseMult(myJctx->work,myJctx->D,x);  /* work = D .* x */
>           VecAXPY(y,1.0,myJctx->work);                 /* y = J x + D x */
>           return 0;
>       }
>
>      and
>
>       PetscErrorCode myJFunction(SNES snes,Vec x,Mat *A,Mat
> *B,MatStructure *str,void* ctx)
>
>       /* this is called for each new “Jacobian” to set the point at which
> it is computed */
>       {
>          MyJContext *myJctx;
>           Vec f;
>           MatShellGetContext(*A,&myJctx);
>           SNESGetFunction(snes,&f,NULL,NULL);
>           MatMFFDSetBase(myJctx->mf,x,f);
>
>          /* change the D entries if they depend on the current solution
> etc */
>          return 0;
>        }
>
>        Sorry, now that I have typed it out it looks a bit more complicated
> than it really is. It does what you want but without any trickery or
> confusing code.
>
>        But, of course, since it is completely matrix free you cannot use
> SOR with it. Of course by making D suitably large you can make it as well
> conditioned as you want and thus get rapid linear convergence (though that
> may slow down or ruin the nonlinear convergence).
>
>     Hope this helps,
>
>     Barry
>
>
>
>
>
> On Jan 3, 2014, at 12:18 PM, Dario Isola <dario.isola at newmerical.com>
> wrote:
>
> > Dear, Barry and Jed,
> >
> > Thanks for your replies.
> >
> > We understand your doubts, so let me put our question into context.
> In CFD it is standard practice to solve the non-linear conservation
> equations for steady flows by means of an inexact Newton method.  The
> original Jacobian matrix is modified by adding terms on the diagonal which
> are proportional to the Courant number and to the lumped mass matrix. This
> allows us to obtain two things: it "relaxes" the solution update and
> increases the diagonal dominance of the matrix itself.
> >
> > The latter is key when simple preconditioners are adopted, in our case
> point Jacobi or SOR.  Indeed, if the original matrix were used, the
> GMRES method would converge only on very regular meshes and only when
> adopting ILU preconditioners with a very high level of fill-in. As a result,
> a higher number of non-linear iterations is traded for a simpler linear
> system to be solved.
> >
> > While exploring the SNES+MF capabilities we found out that we could
> successfully solve the linear system only with ILU(200) or so. Of course we
> do not want to touch the function used to evaluate the residual, which
> determines the final solution. However, we think that a suitable
> modification of the function that PETSc differences to compute the
> matrix-vector product would allow us to obtain a behavior similar to the
> inexact Newton method.
> >
> > Best regards,
> > Dario
> >
> >
> > On 01/03/2014 12:32 PM, Song Gao wrote:
> >>
> >>
> >> ---------- Forwarded message ----------
> >> From: Jed Brown <jedbrown at mcs.anl.gov>
> >> Date: Thu, Jan 2, 2014 at 10:20 AM
> >> Subject: Re: [petsc-users] SNESSetFunction and MatMFFDSetFunction
> >> To: Song Gao <song.gao2 at mail.mcgill.ca>, Barry Smith <
> bsmith at mcs.anl.gov>
> >> Cc: petsc-users <petsc-users at mcs.anl.gov>
> >>
> >>
> >> Song Gao <song.gao2 at mail.mcgill.ca> writes:
> >>
> >> > Thanks, Barry.
> >> >
> >> > I mean 2) providing a function that I want PETSc to difference to
> evaluate
> >> > the matrix vector product.
> >> >
> >> > I want to make a slight modification of the matrix after PETSc
> evaluate the
> >> > matrix vector product.
> >>
> >> Performing a matrix-vector product is not supposed to modify the matrix.
> >> It's unlikely that you really want this.
> >>
> >>
> >> On Wed, Jan 1, 2014 at 3:01 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> >>
> >> On Jan 1, 2014, at 11:09 AM, Song Gao <song.gao2 at mail.mcgill.ca> wrote:
> >>
> >> > Dear all,
> >> >
> >> > Happy new year!
> >> >
> >> > I'm using the matrix-free method to solve NS equations. I call the
> SNESSetFunction to set the RHS function. I think SNES also uses that RHS
> function to evaluate the matrix vector product.
> >>
> >>    Yes, PETSc differences this function to evaluate the matrix vector
> product.
> >>
> >> >
> >> > But I want to set one function to evaluate the residual, and another
> different function to evaluate the matrix vector product.
> >>
> >>    Are you providing a function that
> >>
> >>     1) actually evaluates the matrix vector product or are you
> >>
> >>     2) providing a function that you want PETSc to difference to
> evaluate the matrix vector product?
> >>
> >> > How can I do that? Does MatMFFDSetFunction do this job?
> >>
> >>     For 2) yes, but if the function you provide is different from the
> function provided with SNESSetFunction, then the matrix-vector product
> obtained from differencing it will not be the “correct” Jacobian for the
> SNESSetFunction() you are providing, so I don’t see why you would do it.
> >>
> >>    For 1) you should use MatCreateShell() and
> MatShellSetOperation(mat,MATOP_MULT, ….) and then pass that matrix to
> SNESSetJacobian(); PETSc will then use that “matrix” to do its
> matrix-vector products.
> >>
> >>    Barry
> >>
> >>
> >>
> >> >
> >> > Any suggestion is appreciated. Thank you.
> >> >
> >> >
> >> > Song
> >>
> >
>
>