<div dir="ltr"><pre>Dear Barry,
Thanks for the reply. I implemented your code and have two
questions.
The first is that I'm working in Fortran, so I can't use MatShellSetContext
to attach the structure. I therefore made the variable I want to set,
myJctx, global. Is there another way to do that?
The second question concerns a test I ran. With the D vec set to zero, I
expect the code in which I explicitly set the matrix-free Jacobian and the
code that uses the runtime option -snes_mf to give the same residual
history. But they don't.
Here is the history for <br>-snes_monitor -ksp_max_it 5 -snes_converged_reason -snes_max_it 2 -ksp_converged_reason -ksp_monitor -snes_max_linear_solve_fail 300 -pc_type none -snes_view -snes_linesearch_type basic<br>
0 SNES Function norm 4.272952196300e-02 <br> 0 KSP Residual norm 4.272952196300e-02 <br> 1 KSP Residual norm <span style="color:rgb(255,0,0)">4.234712668718e-02</span> <br> 2 KSP Residual norm 3.683301946690e-02 <br>
3 KSP Residual norm 3.465586805169e-02 <br> 4 KSP Residual norm 3.452667066800e-02 <br> 5 KSP Residual norm 3.451739518719e-02 <br> Linear solve did not converge due to DIVERGED_ITS iterations 5<br> 1 SNES Function norm 4.203973403992e-02 <br>
0 KSP Residual norm 4.203973403992e-02 <br> 1 KSP Residual norm 4.203070641961e-02 <br> 2 KSP Residual norm 4.202387940443e-02 <br> 3 KSP Residual norm 4.183739347023e-02 <br> 4 KSP Residual norm 4.183629424897e-02 <br>
5 KSP Residual norm 4.159456024825e-02 <br> Linear solve did not converge due to DIVERGED_ITS iterations 5<br> 2 SNES Function norm 4.200901009970e-02 <br>Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 2
<br>And here is the history for <br>-snes_mf -snes_monitor -ksp_max_it 5 -snes_converged_reason -snes_max_it 2 -ksp_converged_reason -ksp_monitor -snes_max_linear_solve_fail 300 -pc_type none -snes_view -snes_linesearch_type basic<br>
<br> 0 SNES Function norm 4.272952196300e-02 <br> 0 KSP Residual norm 4.272952196300e-02 <br> 1 KSP Residual norm <span style="color:rgb(255,0,0)">4.270267664569e-02</span> <br> 2 KSP Residual norm 3.690026921954e-02 <br>
3 KSP Residual norm 3.681740616743e-02 <br> 4 KSP Residual norm 3.464377294985e-02 <br> 5 KSP Residual norm 3.464376048536e-02 <br> Linear solve did not converge due to DIVERGED_ITS iterations 5<br> 1 SNES Function norm 3.461633424373e-02 <br>
0 KSP Residual norm 3.461633424373e-02 <br> 1 KSP Residual norm 3.461632119472e-02 <br> 2 KSP Residual norm 3.406130197963e-02 <br> 3 KSP Residual norm 3.406122155751e-02 <br> 4 KSP Residual norm 3.403393397001e-02 <br>
5 KSP Residual norm 3.403367748538e-02 <br> Linear solve did not converge due to DIVERGED_ITS iterations 5<br> 2 SNES Function norm 3.403367847002e-02 <br>Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 2<br>
<br>We can see that at SNES iteration 0, KSP iteration 1, the residual norms already differ. Did I do something wrong here?<br><br>The code looks like this:<br><br> type MyJContext<br> Mat mf<br> Vec D<br> Vec work<br> end type MyJContext<br>
c <br> type(MyJContext) myJctx<br><br>--------------------------------------------------------------------------<br> call SNESCreate(PETSC_COMM_WORLD, snes, ierpetsc)<br> call SNESSetFunction(snes, pet_rhs_snes, flowsolrhs, ctx, <br>
@ ierpetsc)<br>c<br> call MatCreateSNESMF(snes, myJctx%mf, ierpetsc)<br> call MatMFFDSetFunction(myJctx%mf, flowsolrhs, ctx, ierpetsc)<br> call VecDuplicate(pet_solu_snes, myJctx%D, ierpetsc)<br>
call VecDuplicate(pet_solu_snes, myJctx%work, ierpetsc)<br> call VecSet(myJctx%D, 0.0D-3, ierpetsc)<br> call MatCreateShell(PETSC_COMM_WORLD, pet_nfff, pet_nfff,<br> @ PETSC_DETERMINE, PETSC_DETERMINE, ctx, myJ, ierpetsc)<br>
call MatShellSetOperation(myJ, MATOP_MULT, mymultply,<br> @ ierpetsc)<br> call SNESSetJacobian(snes, myJ, pet_mat_pre, <br> @ flowsoljac, ctx, ierpetsc)<br><br>--------------------------------------------------------------------------
<br> subroutine mymultply ( A, x, y, ierpet)<br> Mat :: A<br> Vec :: x, y<br> PetscErrorCode :: ierpet<br>c<br> call MatMult(myJctx%mf,x,y, ierpet)<br>c<br> end <br>--------------------------------------------------------------------------
<br> subroutine flowsoljac ( snes, pet_solu_snes, pet_mat_snes,<br> @ pet_mat_pre, flag, ctxx, ierrpet )<br><br>c explicitly assemble pet_mat_pre matrix here <br>c ......... <br>c .........<br><br> call MatMFFDSetBase(myJctx%mf, pet_solu_snes, PETSC_NULL_INTEGER,<br>
@ ierrpet)<br> end<br><br><br></pre>
<div class="gmail_extra"><br><br><div class="gmail_quote">On Fri, Jan 3, 2014 at 6:46 PM, Barry Smith <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
Dario,<br>
<br>
Your discussion below (SOR, ILU(200)) seems to imply that you are providing some actual explicit representation of the Jacobian, not just doing something completely matrix free. Is this correct? But the PETSc MatMFFD() is completely matrix free, it provides only a matrix-vector product and no access to the matrix entries, hence I am slightly confused.<br>
<br>
If you wish to use for the Jacobian something like D + J and do it completely matrix free, then rather than monkeying with “changing the function” I would<br>
use the “correct” function to compute J x with a matrix-free multiply and then apply D as an additional operation. Hence you would do something like<br>
<br>
typedef struct { /* data structure to store the usual matrix free matrix and the additional diagonal matrix */<br>
Mat mf;<br>
Vec D;<br>
Vec work;<br>
} MyJContext;<br>
<br>
MyJContext myJctx;<br>
<br>
MatCreateSNESMF(snes,&myJctx.mf); /* create the usual MFFD matrix using the real nonlinear function */<br>
MatMFFDSetFunction(myJctx.mf,yournonlinearfunction,nonlinearfunctionctx);<br>
VecCreate(comm,&myJctx.D);<br>
/* set the correct sizes for D and fill up with your diagonal matrix entries */<br>
VecDuplicate(myJctx.D,&myJctx.work);<br>
MatCreateShell(comm,…. &myJ);<br>
MatShellSetOperation(myJ,MATOP_MULT, mymultply);<br>
MatShellSetContext(myJ,&myJctx);<br>
SNESSetJacobian(snes,myJ,myJ, myJFunction,NULL);<br>
<br>
where<br>
<br>
PetscErrorCode mymultply(Mat A,Vec x,Vec y) /* computes y = J x + D x */<br>
{<br>
MyJContext *myJctx;<br>
<br>
MatShellGetContext(A,&myJctx);<br>
MatMult(myJctx->mf,x,y);<br>
VecPointwiseMult(myJctx->work,myJctx->D,x);<br>
VecAXPY(y,1.0,myJctx->work);<br>
return 0;<br>
}<br>
<br>
and<br>
<br>
PetscErrorCode myJFunction(SNES snes,Vec x,Mat *A,Mat *B,MatStructure *str,void* ctx)<br>
<br>
/* this is called for each new “Jacobian” to set the point at which it is computed */<br>
{<br>
MyJContext *myJctx;<br>
Vec f;<br>
MatShellGetContext(*A,&myJctx);<br>
SNESGetFunction(snes,&f,NULL,NULL);<br>
MatMFFDSetBase(myJctx->mf,x,f);<br>
<br>
/* change the D entries if they depend on the current solution etc */<br>
return 0;<br>
}<br>
<br>
Sorry, now that I have typed it out it looks a bit more complicated than it really is. It does what you want but without any trickery or confusing code.<br>
<br>
But, of course, since it is completely matrix free you cannot use SOR with it. Of course by making D suitably large you can make it as well conditioned as you want and thus get rapid linear convergence (though that may slow down or ruin the nonlinear convergence).<br>
<br>
Hope this helps,<br>
<br>
Barry<br>
<div class=""><div class="h5"><br>
<br>
<br>
<br>
<br>
On Jan 3, 2014, at 12:18 PM, Dario Isola <<a href="mailto:dario.isola@newmerical.com">dario.isola@newmerical.com</a>> wrote:<br>
<br>
> Dear, Barry and Jed,<br>
><br>
> Thanks for your replies.<br>
><br>
> We understand your doubts, so let me put our question into context. In CFD it is standard practice to solve the non-linear conservation equations for steady flows by means of an inexact Newton method. The original Jacobian matrix is modified by adding terms on the diagonal which are proportional to the Courant number and to the lumped mass matrix. This achieves two things: it "relaxes" the solution update and increases the diagonal dominance of the matrix itself.<br>
><br>
> The latter is key when simple preconditioners are adopted, in our case point Jacobi or SOR. Indeed, if the original matrix were used, the GMRES method would converge only on very regular meshes and only when adopting ILU preconditioners with a very high level of fill-in. As a result, a higher number of non-linear iterations is traded for a simpler linear system to solve.<br>
><br>
> While exploring the SNES+MF capabilities we found out that we could successfully solve the linear system only with ILU(200) or so. Of course we do not want to touch the function used to evaluate the residual, which determines the final solution. However, we think that a suitable modification of the function that PETSc differences to compute the matrix-vector product would allow us to obtain a behavior similar to the inexact Newton method.<br>
><br>
> Best regards,<br>
> Dario<br>
><br>
><br>
> On 01/03/2014 12:32 PM, Song Gao wrote:<br>
>><br>
>><br>
>> ---------- Forwarded message ----------<br>
>> From: Jed Brown <<a href="mailto:jedbrown@mcs.anl.gov">jedbrown@mcs.anl.gov</a>><br>
>> Date: Thu, Jan 2, 2014 at 10:20 AM<br>
>> Subject: Re: [petsc-users] SNESSetFunction and MatMFFDSetFunction<br>
>> To: Song Gao <<a href="mailto:song.gao2@mail.mcgill.ca">song.gao2@mail.mcgill.ca</a>>, Barry Smith <<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>><br>
>> Cc: petsc-users <<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>><br>
>><br>
>><br>
>> Song Gao <<a href="mailto:song.gao2@mail.mcgill.ca">song.gao2@mail.mcgill.ca</a>> writes:<br>
>><br>
>> > Thanks, Barry.<br>
>> ><br>
>> > I mean 2) providing a function that I want PETSc to difference to evaluate<br>
>> > the matrix vector product.<br>
>> ><br>
>> > I want to make a slight modification of the matrix after PETSc evaluate the<br>
>> > matrix vector product.<br>
>><br>
>> Performing a matrix-vector product is not supposed to modify the matrix.<br>
>> It's unlikely that you really want this.<br>
>><br>
>><br>
>> On Wed, Jan 1, 2014 at 3:01 PM, Barry Smith <<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>> wrote:<br>
>><br>
>> On Jan 1, 2014, at 11:09 AM, Song Gao <<a href="mailto:song.gao2@mail.mcgill.ca">song.gao2@mail.mcgill.ca</a>> wrote:<br>
>><br>
>> > Dear all,<br>
>> ><br>
>> > Happy new year!<br>
>> ><br>
>> > I'm using the matrix-free method to solve NS equations. I call the SNESSetFunction to set the RHS function. I think SNES also uses that RHS function to evaluate the matrix vector product.<br>
>><br>
>> Yes, PETSc differences this function to evaluate the matrix vector product.<br>
>><br>
>> ><br>
>> > But I want to set one function to evaluate the residual, and another different function to evaluate the matrix vector product.<br>
>><br>
>> Are you providing a function that<br>
>><br>
>> 1) actually evaluates the matrix vector product or are you<br>
>><br>
>> 2) providing a function that you want PETSc to difference to evaluate the matrix vector product?<br>
>><br>
>> > How can I do that? Does MatMFFDSetFunction do this job?<br>
>><br>
>> For 2) yes, but if the function you provide is different than the function provided with SNESSetFunction then the matrix-vector product obtained from differencing it will not be “correct” Jacobian for the SNESSetFunction() you are providing so I don’t see why you would do it.<br>
>><br>
>> For 1) you should use MatCreateShell() and MatShellSetOperation(mat,MATOP_MULT, ….) and then pass that matrix to SNESSetJacobian(), then PETSc will use that “matrix” to do its matrix-vector products.<br>
>><br>
>> Barry<br>
>><br>
>><br>
>><br>
>> ><br>
>> > Any suggestion is appreciated. Thank you.<br>
>> ><br>
>> ><br>
>> > Song<br>
>><br>
><br>
<br>
</div></div></blockquote></div><br></div></div>