[petsc-users] PetscFV and TS implicit

Thibault Bridel-Bertomeu thibault.bridelbertomeu at gmail.com
Thu Aug 27 01:15:07 CDT 2020


Sorry Barry for the late reply.

On Tue, Aug 25, 2020 at 15:19, Barry Smith <bsmith at petsc.dev> wrote:

>
>    Yes, your use of the coloring is what I was thinking of. I don't think
> you need any of the calls to the coloring code, since it is managed
> in SNESComputeJacobianDefaultColor() if you don't provide it initially. Did
> that not work, just using -snes_fd_color?
>

Yes, it works with the command-line flag too; I just wanted to write the
lines down somewhere in case I needed them, and in the end I left them there.


>    Regarding direct solvers. Add the arguments
>
> --download-superlu_dist --download-metis --download-parmetis
> --download-mumps --download-scalapack --download-ptscotch
>
> to ./configure
>
> Then when you run the code you can use -pc_type lu
> -pc_factor_mat_solver_type superlu_dist or mumps
>

Ah, thanks! I haven't experimented much with the direct solvers in PETSc;
I have mostly used iterative solvers so far, but I'll have a look.
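
(For my own notes, a run would then look something like the line below,
with ./mysolver standing in for my actual executable and the TS type given
only as an example:)

./mysolver -ts_type beuler -snes_fd_color -pc_type lu -pc_factor_mat_solver_type superlu_dist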

With this first implementation, using the Jacobian obtained by finite
differencing with coloring, I was able to solve, with implicit time
stepping, problems involving the Euler equations or the Navier-Stokes
equations (which contain a parabolic term), in both 2D and axisymmetric
form. It is definitely a win for me. I played around with the different
SNES, KSP and PC types, and it turns out that using respectively newtonls,
gmres and sor with 5 iterations is probably the most robust combination;
with it I was able to achieve start-up CFL numbers around 50 (based solely
on the non-parabolic part of the systems of equations).
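
For reference, a sketch of the corresponding options would be something
like the line below (I read "sor with 5 iterations" as -pc_sor_its 5; the
exact option names should be checked against the PETSc version in use):

-snes_type newtonls -ksp_type gmres -pc_type sor -pc_sor_its 5 -snes_fd_color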

Now, coming back to why I first sent a message to this mailing list: I am
still trying to create the Jacobian and the preconditioner myself. As you
can see in the attached PDF, I am still using my PetscJacobianFunction_JFNK
and my PetscIJacobian routines, but they have evolved since last time.
Following your advice, Barry, I carefully rewrote the JFNK method to take
global vectors as input, and I feel it should be okay now, even though I
don't really have a way of testing it: with the systems of equations I am
trying to solve, using no preconditioner leads to SNES divergence. So I
wrote PetscComputePreconMatImpl to get the preconditioner, the idea being
that I would like to get something like alpha * Id - grad (dF/dU), where
grad represents the spatial differentiation operator and dF/dU is, this
time, the exact Jacobian in each cell, alpha being a shift parameter
introduced by the implicit time stepping method. The exact dF/dU is
computed from the state in a cell using the static NavierStokes2DJFunction.
To compute the gradient of this Jacobian I took inspiration from the way
the gradient is computed in DMPlex_reconstruct_gradient_internal, but I am
not quite sure of what I am doing there.
If you don't mind, could you please tell me what you think of the way I
approach this preconditioner computation?
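
To fix ideas, here is, schematically, the structure I am aiming for - a
sketch only, where the dm and shift fields of my context and the block
insertion inside the cell loop stand in for my actual code:

PetscErrorCode PetscComputePreconMatImpl(Mat P, Vec U, PETScContext *ctx)
{
  PetscErrorCode     ierr;
  PetscInt           cStart, cEnd, c;
  const PetscScalar *u;

  PetscFunctionBegin;
  ierr = VecGetArrayRead(U, &u); CHKERRQ(ierr);
  /* loop over the cells (height-0 points of the Plex) */
  ierr = DMPlexGetHeightStratum(ctx->dm, 0, &cStart, &cEnd); CHKERRQ(ierr);
  for (c = cStart; c < cEnd; ++c) {
    PetscScalar J[16]; /* 4x4 exact flux Jacobian dF/dU from the state in cell c */
    /* fill J with the static NavierStokes2DJFunction, then, for each      */
    /* neighboring cell, scale by the one-sided difference weight and      */
    /* insert the block: MatSetValuesBlocked(P,1,&row,1,&col,J,ADD_VALUES) */
  }
  ierr = VecRestoreArrayRead(U, &u); CHKERRQ(ierr);
  ierr = MatAssemblyBegin(P, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  ierr = MatAssemblyEnd(P, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  ierr = MatShift(P, ctx->shift); CHKERRQ(ierr); /* adds alpha * Id */
  PetscFunctionReturn(0);
}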

Thank you so much for your help once again,

Best regards,

Thibault



> Barry
>
>
>
>
> On Aug 25, 2020, at 2:06 AM, Thibault Bridel-Bertomeu <
> thibault.bridelbertomeu at gmail.com> wrote:
>
> Hello everyone,
>
> Barry, I followed your recommendations and came up with the pieces of code
> that are in the attached PDF - mostly pages 1 & 3 are important, page 2 is
> almost entirely commented.
>
> I tried to use DMCreateColoring, as the doc says it may produce a more
> accurate coloring; however, it is not implemented for a Plex yet, hence the
> call to MatColoringCreate that you will see. I left the DMHasColoring test
> in case a later PETSc release allows generating the coloring from a Plex.
>
> Also, you'll see in the input file that, contrary to what you suggested, I
> am using the jacobi PC. That is simply because the way I compiled my PETSc
> does not support PC LU or PC CHOLESKY (per the seg fault message in the
> console). Do I need scalapack or mumps or something else?
>
> Altogether this implementation works and produces physically correct
> results. Now I have to try to increase the CFL number a lot to see how
> robust this approach is.
>
> All in all, what do you think of this implementation - is it what you had
> in mind?
>
> Thank you for your help,
>
> Thibault
>
> On Mon, Aug 24, 2020 at 22:16, Barry Smith <bsmith at petsc.dev> wrote:
>
>>
>>
>> On Aug 24, 2020, at 2:20 PM, Thibault Bridel-Bertomeu <
>> thibault.bridelbertomeu at gmail.com> wrote:
>>
>> Good evening everyone,
>>
>> Thanks Barry for your answer.
>>
>> On Mon, Aug 24, 2020 at 18:51, Barry Smith <bsmith at petsc.dev> wrote:
>>
>>>
>>>
>>> On Aug 24, 2020, at 11:38 AM, Thibault Bridel-Bertomeu <
>>> thibault.bridelbertomeu at gmail.com> wrote:
>>>
>>> Thank you Barry for taking the time to go through the code !
>>>
>>> I indeed figured out this afternoon that the function related to the
>>> matrix-vector product always handles global vectors. I corrected mine
>>> so that it compiles, but I have a feeling it won't run properly without a
>>> preconditioner.
>>>
>>> Anyway, you are right, my PetscJacobianFunction_JFNK() aims at doing
>>> some basic finite differencing; user->RHS_ref is my F(U) if you see the
>>> system as dU/dt = F(U). As for the DMGlobalToLocal(), it was there mainly
>>> because I had not realized the vectors I was manipulating were global.
>>> I will take your advice and try with just SNESSetUseMatrixFree.
>>> I haven't quite fully understood what it does "under the hood" though:
>>> is calling SNESSetUseMatrixFree(snes,PETSC_TRUE,PETSC_FALSE) before the
>>> TSSolve call enough to ensure that the implicit matrix is computed?
>>> Does it use the function we set as a RHS to build the matrix?
>>>
>>>
>>>    All it does is "replace" the A matrix with one automatically created
>>> for the job using MatCreateMFFD(). It does not touch the B matrix and it
>>> does not build the matrix, but yes, it does use the function you provide
>>> to do the differencing.
>>>
>>
>> OK, thank you. This MFFD matrix is then called by the TS to construct the
>> linear system that will be solved to advance the system of equations,
>> right?
>>
>>>
>>> To create the preconditioner I will do as you suggest too, thank you.
>>> This matrix has to be as close as possible to the inverse of the implicit
>>> matrix to ensure that the eigenvalues of the system are as close to 1 as
>>> possible. Given that the implicit matrix is built "automatically" thanks
>>> to the SNES matrix-free capability, can we use that matrix as a starting
>>> point for building the preconditioner?
>>>
>>>
>>>    No the MatrixFree doesn't build a matrix, it can only do
>>> matrix-vector products with differencing.
>>>
>>
>> My bad, wrong word. Yes, of course it's all matrix-free, hence it's just
>> a functional; however, maybe the inner mechanisms can be accessed and used
>> for the preconditioner?
>>
>>
>>    Probably not, it really only can do matrix-vector products.
>>
>>> You were talking about the coloring capabilities in PETSc - is that
>>> where it can be applied?
>>>
>>>
>>>   Yes, you can use that. See MatFDColoringCreate(), but since you are
>>> using a DM, in theory you can use -snes_fd_color and PETSc will manage
>>> everything for you, so you don't have to write any code for Jacobians at
>>> all. Again, it uses your function to compute differences, with coloring
>>> to make building the Jacobian efficient.
>>>
>>
>> I read a bit about the coloring you are mentioning. As I understand it,
>> it is another option for matrix-free Jacobian behavior during the
>> Newton-Krylov iterations, right? Either we use SNESSetUseMatrixFree()
>> alone, and it works using "basic" finite differencing, or we use
>> SNESSetUseMatrixFree + MatFDColoringCreate
>> & SNESComputeJacobianDefaultColor as an option to SNESSetJacobian to get
>> the finite differencing based on coloring. Is that right?
>> Then, if I come back to my preconditioner problem: once you have set up
>> the implicit matrix with one or the other of the aforementioned
>> matrix-free ways, how would you go about setting up the preconditioner? In
>> a matrix-free way too, or rather as a real matrix that we assemble
>> ourselves this time, as you seemed to mean with the previous MatAIJ
>> DMCreateMatrix?
>>
>> Sorry if it seems like I am nagging, but I would really like to
>> understand how to manipulate the matrix-free methods and structures in
>> PETSc to run a time-implicit finite volume computation - it's so
>> promising!
>>
>>
>>   There are many, many possibilities, as we discussed in previous
>> emails, most with various limitations.
>>
>>   When you use -snes_fd_color (or put code like MatFDColoringCreate into
>> the source, which is unnecessary since it does the same thing as
>> -snes_fd_color), you get back the true Jacobian (approximated, so with
>> fewer digits than analytic), so you can use any preconditioner that you
>> could use if you had built the true Jacobian yourself.
>>
>>   I always recommend starting with -pc_type lu and making sure you are
>> getting the correct answers to your problem, and then worrying about the
>> preconditioner. Faster preconditioners are JUST optimizations, nothing
>> more; they should not change the quality of the solution to your PDE/ODE,
>> and you absolutely need to make sure you are getting correct quality
>> answers before fiddling with the preconditioner.
>>
>>   Once you have the solution correct and have figured out a good
>> preconditioner (assuming using the true Jacobian works for your
>> discretization), then you can think about optimizing the computation of
>> the Jacobian by doing it analytically, finite volume by finite volume. But
>> you shouldn't do any of that until you are sure that your implicit TS
>> integrator for FV produces good numerical answers.
>>
>>   Barry
>>
>>
>>
>>
>>
>> Thanks again,
>>
>>
>> Thibault
>>
>>
>>>   Barry
>>>
>>> Internally it uses SNESComputeJacobianDefaultColor() if you are
>>> interested in what it does.
>>>
>>>
>>>
>>>
>>>
>>> Thank you so much again,
>>>
>>> Thibault
>>>
>>>
>>> On Mon, Aug 24, 2020 at 15:45, Barry Smith <bsmith at petsc.dev> wrote:
>>>
>>>>
>>>> I think the attached is wrong.
>>>>
>>>> <Untitled.png>
>>>>
>>>> The input to the matrix-vector product for the Jacobian is always
>>>> global vectors, which means that on each process the dimension is not
>>>> the size of the DMGetLocalVector(); it should be the VecGetLocalSize()
>>>> of the DMGetGlobalVector().
>>>>
>>>> But you may be able to skip all this and have the DM create the shell
>>>> matrix, setting its sizes appropriately, so that you only need to supply
>>>> the MATOP:
>>>>
>>>> DMSetMatType(dm,MATSHELL);
>>>> DMCreateMatrix(dm,&A);
>>>>
>>>> In fact, I also don't understand the PetscJacobianFunction_JFNK()
>>>> function. It seems to be doing finite differencing on
>>>> DMPlexTSComputeRHSFunctionFVM(), assuming the current function value is
>>>> in usr->RHS_ref. How is this different from just letting PETSc/SNES use
>>>> finite differences to do the matrix-vector product? Your code seems
>>>> rather complicated, with the DMGlobalToLocal() whose purpose there I
>>>> don't understand.
>>>>
>>>> I think you can just call
>>>>
>>>> TSGetSNES()
>>>> SNESSetUseMatrixFree(snes,PETSC_TRUE,PETSC_FALSE);
>>>>
>>>> and it will set up an internal matrix that does the finite differencing
>>>> for you. Then you never need a shell matrix.
>>>>
>>>>
>>>> Also, to create the preconditioner matrix B, this should work:
>>>>
>>>> DMSetMatType(dm,MATAIJ);
>>>> DMCreateMatrix(dm,&B);
>>>>
>>>> no need for you to figure out the sizes.
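>>>>
>>>> Putting the two together, the setup would look roughly like the sketch
>>>> below (MyJacobianMult stands for your shell multiply routine, and error
>>>> checking is omitted for brevity):
>>>>
>>>> DMSetMatType(dm, MATSHELL);
>>>> DMCreateMatrix(dm, &A);
>>>> MatShellSetOperation(A, MATOP_MULT, (void (*)(void)) MyJacobianMult);
>>>> DMSetMatType(dm, MATAIJ);
>>>> DMCreateMatrix(dm, &B);
>>>> TSSetIJacobian(ts, A, B, PetscIJacobian, user);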
>>>>
>>>>
>>>> Note that both A and B need to have the same dimensions on each process
>>>> as the global vectors, which I don't think your current code ensures.
>>>>
>>>>
>>>>
>>>> Barry
>>>>
>>>>
>>>>
>>>> On Aug 24, 2020, at 12:56 AM, Thibault Bridel-Bertomeu <
>>>> thibault.bridelbertomeu at gmail.com> wrote:
>>>>
>>>> Barry, first of all, thank you very much for your detailed answer; I
>>>> keep reading it to let it soak in - I might come back to you for more
>>>> details if you do not mind.
>>>>
>>>> In the meantime, to fuel the conversation, I attach to this e-mail two
>>>> PDFs containing the pieces of the code that concern what we are
>>>> discussing. In the *timedisc.pdf you'll find how I handle the
>>>> initialization of the TS object, and in the *petscdefs.pdf you'll find
>>>> the method that calls TSSolve as well as the methods that are linked to
>>>> the TS (the timestep adapt, the Jacobian, etc.). [Sorry for the quality;
>>>> I cannot do better than that sort of PDF.]
>>>>
>>>> Based on what is in the structured code I sent you the other day, I
>>>> rewrote PetscJacobianFunction_JFNK. I think it should be all right, but
>>>> although it compiles, execution raises a seg fault, I think when I do
>>>> ierr = TSSetIJacobian(ts, A, A, PetscIJacobian, user);
>>>> saying that A does not have the right dimensions. This is quite new; I
>>>> am still looking into where exactly the error is raised. What do you
>>>> think of this implementation though - does it look correct in your
>>>> expert eyes?
>>>> As for what we have really discussed so far, it's that
>>>> PetscComputePreconMatImpl that I do not know how to implement (with the
>>>> derivative of the Jacobian based on the FVM object).
>>>>
>>>> I understand now that what I am showing you today might not be the
>>>> right way to go if one wants to really use PetscFV, but I just wanted to
>>>> add those code lines to the conversation to get your feedback.
>>>>
>>>> Thank you again for your help,
>>>>
>>>> Thibault
>>>>
>>>>
>>>> On Fri, Aug 21, 2020 at 19:25, Barry Smith <bsmith at petsc.dev> wrote:
>>>>
>>>>
>>>>>
>>>>> On Aug 21, 2020, at 10:58 AM, Thibault Bridel-Bertomeu <
>>>>> thibault.bridelbertomeu at gmail.com> wrote:
>>>>>
>>>>> Thank you Barry for the tip! I'll make sure to do that when
>>>>> everything is set.
>>>>> What I also meant is: will there be no more direct way to set the
>>>>> preconditioner than to go through SNESSetJacobian after having
>>>>> assembled everything by hand? Like, in my case, or in the more general
>>>>> case of fluid dynamics equations, the preconditioner is not a fun
>>>>> matrix to assemble, because for every cell the derivative of the
>>>>> physical flux Jacobian has to be taken and put in the right block of
>>>>> the matrix - finite element style, if you will. Is there a way to do
>>>>> that with PETSc methods, maybe short-circuiting the FEM-based methods?
>>>>>
>>>>>
>>>>>   Thibault
>>>>>
>>>>>    I am not sure what you mean but there are a couple of things that
>>>>> may be helpful.
>>>>>
>>>>>    PCSHELL
>>>>> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCSHELL.html
>>>>> allows you to build your own preconditioner (which can, and often will,
>>>>> use one or more of its own Mats, and KSPs or PCs inside it, or even use
>>>>> another PetscFV etc. to build some of the sub-matrices for you if that
>>>>> is appropriate). This approach means you never need to construct a
>>>>> "global" PETSc matrix from which PETSc builds the preconditioner. But
>>>>> you should only do this if the conventional approach is not reasonable
>>>>> for your problem.
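>>>>>
>>>>> For concreteness, the basic PCSHELL pattern is roughly the following
>>>>> sketch, MyPCApply being your own routine with signature
>>>>> PetscErrorCode MyPCApply(PC,Vec,Vec):
>>>>>
>>>>> SNESGetKSP(snes, &ksp);
>>>>> KSPGetPC(ksp, &pc);
>>>>> PCSetType(pc, PCSHELL);
>>>>> PCShellSetContext(pc, user);
>>>>> PCShellSetApply(pc, MyPCApply);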
>>>>>
>>>>>    MATNEST
>>>>> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATNEST.html
>>>>> allows you to build a global matrix from parts you build separately,
>>>>> even skipping parts you decide you don't need in the preconditioner.
>>>>> Conceptually it is the same as creating a global matrix and filling it
>>>>> up, but the process is a bit different, and it is suitable for
>>>>> "multi-physics" or "multi-equation" type applications.
>>>>>
>>>>>     Of course, what you put into PCSHELL and MATNEST will affect the
>>>>> convergence of the nonlinear solver. As Jed noted, what you put in the
>>>>> "Jacobian" does not have to be mathematically the same as what you put
>>>>> into the TSSetI/RHSFunction, with the caveat that it does have to have
>>>>> appropriate spectral properties to result in a good preconditioner for
>>>>> the "true" Jacobian.
>>>>>
>>>>>     Couple of other notes:
>>>>>
>>>>> The entire business of the "Jacobian", matrix-free or not (with, for
>>>>> example, -snes_fd_operator), is tricky because, as Jed noted, your
>>>>> finite volume scheme may have non-differentiable terms such as if ()
>>>>> tests. There is a concept of sub-differential for this type of thing,
>>>>> but I know absolutely nothing about that and it is probably not worth
>>>>> investigating.
>>>>>
>>>>> In this situation you can avoid the "true" Jacobian completely (both
>>>>> for the matrix-vector product and the preconditioner) and use something
>>>>> else, as Jed suggested: a lower-order scheme that is differentiable.
>>>>> This can work well for solving the nonlinear system, or not, depending
>>>>> on how suitable it is for your original "function".
>>>>>
>>>>>
>>>>> 1) In theory at least, you can have the Jacobian matrix-vector product
>>>>> computed directly using the DMPLEX/PetscFV infrastructure (it would
>>>>> apply the Jacobian locally, matrix-free, using code similar to the code
>>>>> that evaluates the FV "function"). I do not know if any of this code is
>>>>> written; it would be more efficient than -snes_mf_operator, which
>>>>> evaluates the FV "function" and does traditional differencing to
>>>>> compute the Jacobian action. Again, it has the problem of
>>>>> non-differentiability if the function is not differentiable. But it
>>>>> could be done for a different (lower-order) scheme that is
>>>>> differentiable.
>>>>>
>>>>> 2) You can have PETSc compute the Jacobian explicitly with coloring
>>>>> and from that build the preconditioner; this allows you to avoid the
>>>>> hassle of writing the code for the derivatives yourself. It uses finite
>>>>> differences on your function, and coloring of the graph, to compute
>>>>> many columns of the Jacobian simultaneously, and it can be pretty
>>>>> efficient. Again, if the function is not differentiable, there can be
>>>>> issues with what the result means and whether it will work in a
>>>>> nonlinear solver. SNESComputeJacobianDefaultColor
>>>>> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESComputeJacobianDefaultColor.html
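>>>>>
>>>>> In code, with a DM attached to the SNES, this is roughly just the
>>>>> following sketch:
>>>>>
>>>>> DMSetMatType(dm, MATAIJ);
>>>>> DMCreateMatrix(dm, &J);
>>>>> SNESSetJacobian(snes, J, J, SNESComputeJacobianDefaultColor, NULL);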
>>>>>
>>>>> 3) Much more outlandish is to skip Newton and Jacobians completely and
>>>>> use the full approximation scheme, SNESFAS
>>>>> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNESFAS/SNESFAS.html;
>>>>> this requires a grid hierarchy and an appropriate way to interpolate
>>>>> your finite volume solutions up through the grid hierarchy. Probably
>>>>> not worth investigating unless you have lots of time on your hands and
>>>>> a keen interest in this kind of stuff: https://arxiv.org/pdf/1607.04254.pdf
>>>>>
>>>>> So to summarize, and Matt and Jed can correct my mistakes.
>>>>>
>>>>> 1)   Form the full Jacobian from the original "function" using an
>>>>> analytic approach; use it for both the matrix-vector product and to
>>>>> build the preconditioner. Problematic if the full Jacobian is not well
>>>>> defined mathematically. Tough to code, usually not practical.
>>>>>
>>>>> 2)   Apply the full Jacobian matrix-free (any way) and
>>>>>
>>>>>  a) build another "approximate" Jacobian (using any technique,
>>>>> analytic or finite differences using matrix coloring on a new
>>>>> "lower-order" "function"). This can still run into trouble if the
>>>>> original Jacobian is not well defined.
>>>>>
>>>>>  b)   "write your own preconditioner" that internally can use anything
>>>>> in PETSc that approximately solves the Jacobian. Same potential problems if
>>>>> original Jacobian is not differential, plus convergence will depend on how
>>>>> good your own preconditioner approximates the inverse of the true Jacobian.
>>>>>
>>>>> 3) Use a lower-order Jacobian (computed any way you want) for both the
>>>>> matrix-vector product and the preconditioner. The problem of
>>>>> differentiability is gone, but convergence of the nonlinear solver
>>>>> depends on how appropriate the lower-order Jacobian is for the original
>>>>> "function".
>>>>>
>>>>>     a) Form the "lower-order" Jacobian analytically or with coloring
>>>>> and use it for both the matrix-vector product and building the
>>>>> preconditioner. Note that switching between this and 2a is trivial.
>>>>>
>>>>>      b) Apply the "lower-order" Jacobian matrix-free and provide your
>>>>> own PCSHELL. Note that switching between this and 2b is trivial.
>>>>>
>>>>>    Barry
>>>>>
>>>>> I would first try computing the "true" Jacobian via coloring; if that
>>>>> works and gives satisfactory results (fast enough), then stop.
>>>>>
>>>>> Then I would do 2a/2b by writing my "function" using PetscFV, writing
>>>>> the "lower-order function" via PetscFV too, and using matrix coloring
>>>>> to get the Jacobian from the second "lower-order function". If this
>>>>> works well (either with 2a or 3a or both), then stop, or you can
>>>>> compute the "lower-order" Jacobian analytically (again using PetscFV)
>>>>> for a more efficient evaluation of the Jacobian.
>>>>>
>>>>>
>>>>
>>>>>
>>>>>
>>>>> Thanks!
>>>>>
>>>>> Thibault
>>>>>
>>>>> On Fri, Aug 21, 2020 at 17:22, Barry Smith <bsmith at petsc.dev> wrote:
>>>>>
>>>>>
>>>>>>
>>>>>> On Aug 21, 2020, at 8:35 AM, Thibault Bridel-Bertomeu <
>>>>>> thibault.bridelbertomeu at gmail.com> wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Aug 21, 2020 at 15:23, Matthew Knepley <knepley at gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> On Fri, Aug 21, 2020 at 9:10 AM Thibault Bridel-Bertomeu <
>>>>>>> thibault.bridelbertomeu at gmail.com> wrote:
>>>>>>>
>>>>>>>> Sorry, I sent too soon, I hit the wrong key.
>>>>>>>>
>>>>>>>> I wanted to say that context.npoints is the local number of cells.
>>>>>>>>
>>>>>>>> PetscRHSFunctionImpl generates the hyperbolic part of the
>>>>>>>> right-hand side.
>>>>>>>> Then we have:
>>>>>>>>
>>>>>>>> PetscErrorCode PetscIJacobian(
>>>>>>>>   TS        ts,   /*!< Time stepping object (see PETSc TS) */
>>>>>>>>   PetscReal t,    /*!< Current time */
>>>>>>>>   Vec       Y,    /*!< Solution vector */
>>>>>>>>   Vec       Ydot, /*!< Time-derivative of solution vector */
>>>>>>>>   PetscReal a,    /*!< Shift */
>>>>>>>>   Mat       A,    /*!< Jacobian matrix */
>>>>>>>>   Mat       B,    /*!< Preconditioning matrix */
>>>>>>>>   void     *ctxt  /*!< Application context */
>>>>>>>> )
>>>>>>>> {
>>>>>>>>   PETScContext *context = (PETScContext*) ctxt;
>>>>>>>>   HyPar        *solver  = context->solver;
>>>>>>>>   _DECLARE_IERR_;
>>>>>>>>
>>>>>>>>   PetscFunctionBegin;
>>>>>>>>   solver->count_IJacobian++;
>>>>>>>>   context->shift = a;
>>>>>>>>   context->waqt  = t;
>>>>>>>>   /* Construct preconditioning matrix */
>>>>>>>>   if (context->flag_use_precon) { IERR PetscComputePreconMatImpl(B,Y,context); CHECKERR(ierr); }
>>>>>>>>
>>>>>>>>   PetscFunctionReturn(0);
>>>>>>>> }
>>>>>>>>
>>>>>>>> and PetscJacobianFunction_JFNK, which I bind to the matrix shell,
>>>>>>>> computes the action of the Jacobian on a vector: say U0 is the
>>>>>>>> reference state and Y the vector upon which to apply the JFNK
>>>>>>>> method; then PetscJacobianFunction_JFNK returns shift * Y -
>>>>>>>> 1/epsilon * (F(U0 + epsilon*Y) - F(U0)), where F evaluates the
>>>>>>>> hyperbolic flux (shift comes from the TS).
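>>>>>>>>
>>>>>>>> In code, the action I give to the shell matrix is essentially the
>>>>>>>> following sketch (the work vectors and the ts / U0 / FU0 / waqt /
>>>>>>>> shift / jfnk_eps fields of my context are placeholders for my
>>>>>>>> actual data):
>>>>>>>>
>>>>>>>> PetscErrorCode PetscJacobianFunction_JFNK(Mat A, Vec Y, Vec JY)
>>>>>>>> {
>>>>>>>>   PETScContext  *ctx;
>>>>>>>>   PetscErrorCode ierr;
>>>>>>>>
>>>>>>>>   PetscFunctionBegin;
>>>>>>>>   ierr = MatShellGetContext(A, &ctx); CHKERRQ(ierr);
>>>>>>>>   /* Upert = U0 + eps * Y */
>>>>>>>>   ierr = VecWAXPY(ctx->Upert, ctx->jfnk_eps, Y, ctx->U0); CHKERRQ(ierr);
>>>>>>>>   /* FUpert = F(Upert); FU0 = F(U0) was stored when the step began */
>>>>>>>>   ierr = PetscRHSFunctionImpl(ctx->ts, ctx->waqt, ctx->Upert, ctx->FUpert, ctx); CHKERRQ(ierr);
>>>>>>>>   /* JY = shift * Y - (FUpert - FU0) / eps */
>>>>>>>>   ierr = VecCopy(ctx->FUpert, JY); CHKERRQ(ierr);
>>>>>>>>   ierr = VecAXPY(JY, -1.0, ctx->FU0); CHKERRQ(ierr);
>>>>>>>>   ierr = VecScale(JY, -1.0/ctx->jfnk_eps); CHKERRQ(ierr);
>>>>>>>>   ierr = VecAXPY(JY, ctx->shift, Y); CHKERRQ(ierr);
>>>>>>>>   PetscFunctionReturn(0);
>>>>>>>> }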
>>>>>>>> The preconditioning matrix I compute as an approximation to the
>>>>>>>> actual Jacobian, that is, shift * Identity - Derivative(dF/dU), where
>>>>>>>> dF/dU is, in each cell, a 4x4 matrix that is known exactly for the
>>>>>>>> system of equations I am solving, i.e. the Euler equations. For the
>>>>>>>> structured grid, I can loop over the cells and do that 'Derivative'
>>>>>>>> at first order by simply taking a finite-difference-like
>>>>>>>> approximation with the neighboring cells, Derivative(phi) = phi_i -
>>>>>>>> phi_{i-1}, and I assemble the B matrix block by block (JFunction is
>>>>>>>> the dF/dU):
>>>>>>>>
>>>>>>>> /* diagonal element */
>>>>>>>> for (v=0; v<nvars; v++) { rows[v] = nvars*pg + v; cols[v] = nvars*pg + v; }
>>>>>>>> ierr = solver->JFunction(values,(u+nvars*p),solver->physics,dir,0);
>>>>>>>> _ArrayScale1D_(values,(dxinv*iblank),(nvars*nvars));
>>>>>>>> ierr = MatSetValues(Pmat,nvars,rows,nvars,cols,values,ADD_VALUES); CHKERRQ(ierr);
>>>>>>>>
>>>>>>>> /* left neighbor */
>>>>>>>> if (pgL >= 0) {
>>>>>>>>   for (v=0; v<nvars; v++) { rows[v] = nvars*pg + v; cols[v] = nvars*pgL + v; }
>>>>>>>>   ierr = solver->JFunction(values,(u+nvars*pL),solver->physics,dir,1);
>>>>>>>>   _ArrayScale1D_(values,(-dxinv*iblank),(nvars*nvars));
>>>>>>>>   ierr = MatSetValues(Pmat,nvars,rows,nvars,cols,values,ADD_VALUES); CHKERRQ(ierr);
>>>>>>>> }
>>>>>>>>
>>>>>>>> /* right neighbor */
>>>>>>>> if (pgR >= 0) {
>>>>>>>>   for (v=0; v<nvars; v++) { rows[v] = nvars*pg + v; cols[v] = nvars*pgR + v; }
>>>>>>>>   ierr = solver->JFunction(values,(u+nvars*pR),solver->physics,dir,-1);
>>>>>>>>   _ArrayScale1D_(values,(-dxinv*iblank),(nvars*nvars));
>>>>>>>>   ierr = MatSetValues(Pmat,nvars,rows,nvars,cols,values,ADD_VALUES); CHKERRQ(ierr);
>>>>>>>> }
>>>>>>>> I do not know if I am clear here ...
>>>>>>>> Anyway, I am trying to figure out how to do this shell matrix and
>>>>>>>> this preconditioner using all the FV and DMPlex artillery.
>>>>>>>>
>>>>>>>
>>>>>>> Okay, that is very clear. We should be able to get the JFNK just
>>>>>>> with -snes_mf_operator, and put the approximate J construction in
>>>>>>> DMPlexComputeJacobian_Internal().
>>>>>>> There is an FV section already, and we could just add this. I would
>>>>>>> need to understand those entries in the pointwise Riemann sense in
>>>>>>> which the other stuff is now written.
>>>>>>>
>>>>>>
>>>>>> OK, I had a quick look, and if I understood correctly it would do the
>>>>>> job. Setting the snes_mf_operator flag would mean, however, that we
>>>>>> have to go through SNESSetJacobian to set the Jacobian and the
>>>>>> preconditioning matrix, wouldn't it?
>>>>>>
>>>>>>
>>>>>>   Thibault,
>>>>>>
>>>>>>    Since the TS implicit methods end up using SNES internally, the
>>>>>> option should be available to you without requiring you to call the
>>>>>> SNES routines directly.
>>>>>>
>>>>>>    Once you have finalized your approach, and if for the implicit
>>>>>> case you always work in the snes mf operator mode, you can hardwire
>>>>>>
>>>>>>     TSGetSNES(ts,&snes);
>>>>>>     SNESSetUseMatrixFree(snes,PETSC_TRUE,PETSC_FALSE);
>>>>>>
>>>>>>     in your code so you don't need to always provide the option
>>>>>> -snes_mf_operator.
>>>>>>
>>>>>
>>>>>>    Barry
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> There might be calls to the Riemann solver to evaluate the dRHS/dU
>>>>>> part, yes, but maybe it's possible to re-use what was computed for the
>>>>>> RHS^n?
>>>>>> In the FV section the Jacobian is set to identity, which I missed
>>>>>> before, but it could explain why, when I used the following:
>>>>>>
>>>>>> TSSetType(ts, TSBEULER);
>>>>>> DMTSSetIFunctionLocal(dm, DMPlexTSComputeIFunctionFEM, &ctx);
>>>>>> DMTSSetIJacobianLocal(dm, DMPlexTSComputeIJacobianFEM, &ctx);
>>>>>>
>>>>>> with my FV discretization, nothing happened, right?
>>>>>>
>>>>>> Thank you,
>>>>>>
>>>>>> Thibault
>>>>>>
>>>>>>   Thanks,
>>>>>>>
>>>>>>>      Matt
>>>>>>>
>>>>>>>
>>>>>>>> On Fri, Aug 21, 2020 at 14:55, Thibault Bridel-Bertomeu <
>>>>>>>> thibault.bridelbertomeu at gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> Thanks Matthew and Jed for your input.
>>>>>>>>> I indeed envision an implicit solver in the sense Jed mentioned -
>>>>>>>>> Jiri Blazek's book is a nice intro to this concept.
>>>>>>>>>
>>>>>>>>> Matthew, I do not know exactly what to change right now because,
>>>>>>>>> although I understand globally what the DMPlexComputeXXXX_Internal
>>>>>>>>> methods do, I cannot say for sure line by line what is happening.
>>>>>>>>> In a structured code, I have an implicit FVM solver with PETSc, but
>>>>>>>>> I do not use any of the FV structure, not even a DM - I just use C
>>>>>>>>> arrays that I transform into PETSc Vecs and Mats, build my
>>>>>>>>> IJacobian and my preconditioner, and give all that to a TS, and it
>>>>>>>>> runs. I cannot figure out how to do it with the FV and the DM and
>>>>>>>>> all the underlying "shortcuts" that I want to use.
>>>>>>>>>
>>>>>>>>> Here is the top method for the structured code:
>>>>>>>>>
>>>>>>>>> int total_size = context.npoints * solver->nvars;
>>>>>>>>> ierr = TSSetRHSFunction(ts,PETSC_NULL,PetscRHSFunctionImpl,&context); CHKERRQ(ierr);
>>>>>>>>> SNES snes;
>>>>>>>>> KSP ksp;
>>>>>>>>> PC pc;
>>>>>>>>> SNESType snestype;
>>>>>>>>> ierr = TSGetSNES(ts,&snes); CHKERRQ(ierr);
>>>>>>>>> ierr = SNESGetType(snes,&snestype); CHKERRQ(ierr);
>>>>>>>>>
>>>>>>>>> flag_mat_a = 1;
>>>>>>>>> ierr = MatCreateShell(MPI_COMM_WORLD,total_size,total_size,PETSC_DETERMINE,
>>>>>>>>>                       PETSC_DETERMINE,&context,&A); CHKERRQ(ierr);
>>>>>>>>> context.jfnk_eps = 1e-7;
>>>>>>>>> ierr = PetscOptionsGetReal(NULL,NULL,"-jfnk_epsilon",&context.jfnk_eps,NULL); CHKERRQ(ierr);
>>>>>>>>> ierr = MatShellSetOperation(A,MATOP_MULT,(void (*)(void))PetscJacobianFunction_JFNK); CHKERRQ(ierr);
>>>>>>>>> ierr = MatSetUp(A); CHKERRQ(ierr);
>>>>>>>>>
>>>>>>>>> context.flag_use_precon = 0;
>>>>>>>>> ierr = PetscOptionsGetBool(PETSC_NULL,PETSC_NULL,"-with_pc",(PetscBool*)(&context.flag_use_precon),PETSC_NULL); CHKERRQ(ierr);
>>>>>>>>>
>>>>>>>>> /* Set up preconditioner matrix */
>>>>>>>>> flag_mat_b = 1;
>>>>>>>>> ierr = MatCreateAIJ(MPI_COMM_WORLD,total_size,total_size,PETSC_DETERMINE,PETSC_DETERMINE,
>>>>>>>>>                     (solver->ndims*2+1)*solver->nvars,NULL,
>>>>>>>>>                     2*solver->ndims*solver->nvars,NULL,&B); CHKERRQ(ierr);
>>>>>>>>> ierr = MatSetBlockSize(B,solver->nvars); CHKERRQ(ierr);
>>>>>>>>> /* Set the IJacobian function for TS */
>>>>>>>>> ierr = TSSetIJacobian(ts,A,B,PetscIJacobian,&context); CHKERRQ(ierr);
>>>>>>>>>
>>>>>>>>> Thibault Bridel-Bertomeu
>>>>>>>>> Eng, MSc, PhD
>>>>>>>>> Research Engineer
>>>>>>>>> CEA/CESTA
>>>>>>>>> 33114 LE BARP
>>>>>>>>> Tel.: (+33)557046924
>>>>>>>>> Mob.: (+33)611025322
>>>>>>>>> Mail: thibault.bridelbertomeu at gmail.com
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Thu, Aug 20, 2020 at 18:43, Jed Brown <jed at jedbrown.org>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Matthew Knepley <knepley at gmail.com> writes:
>>>>>>>>>>
>>>>>>>>>> > I could never get the FVM stuff to make sense to me for implicit
>>>>>>>>>> > methods. Here is my problem understanding. If you have an FVM
>>>>>>>>>> > method, it decides to move "stuff" from one cell to its
>>>>>>>>>> > neighboring cells depending on the solution to the Riemann
>>>>>>>>>> > problem on each face, which computed the flux. This is fine
>>>>>>>>>> > unless the timestep is so big that material can flow through into
>>>>>>>>>> > the cells beyond the neighbor. Then I should have considered the
>>>>>>>>>> > effect of the Riemann problem for those interfaces. That would be
>>>>>>>>>> > in the Jacobian, but I don't know how to compute that Jacobian. I
>>>>>>>>>> > guess you could do everything matrix-free, but without a
>>>>>>>>>> > preconditioner it seems hard.
>>>>>>>>>>
>>>>>>>>>> So long as we're using method of lines, the flux is just
>>>>>>>>>> instantaneous flux, not integrated over some time step. It has the
>>>>>>>>>> same meaning for implicit and explicit.
>>>>>>>>>>
>>>>>>>>>> An explicit method would be unstable if you took such a large time
>>>>>>>>>> step (CFL) and an implicit method will not simultaneously be SSP
>>>>>>>>>> and higher than first order, but it's still a consistent
>>>>>>>>>> discretization of the problem.
>>>>>>>>>>
>>>>>>>>>> It's common (done in FUN3D and others) to precondition with a
>>>>>>>>>> first-order method, where gradient reconstruction/limiting is
>>>>>>>>>> skipped. That's what I'd recommend because limiting creates nasty
>>>>>>>>>> nonlinearities and the resulting discretizations lack h-ellipticity
>>>>>>>>>> which makes them very hard to solve.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> What most experimenters take for granted before they begin their
>>>>>>> experiments is infinitely more interesting than any results to which their
>>>>>>> experiments lead.
>>>>>>> -- Norbert Wiener
>>>>>>>
>>>>>>> https://www.cse.buffalo.edu/~knepley/
>>>>>>>
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>> --
>>>>> Thibault Bridel-Bertomeu
>>>>> Eng, MSc, PhD
>>>>> Research Engineer
>>>>> CEA/CESTA
>>>>> 33114 LE BARP
>>>>> Tel.: (+33)557046924
>>>>> Mob.: (+33)611025322
>>>>> Mail: thibault.bridelbertomeu at gmail.com
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>> <petsc_fvm_part_timedisc_v2.pdf><petsc_fvm_part_petscdefs_v2.pdf>
>>>>
>>>>
>>>>
>>>
>>> --
>>> Thibault Bridel-Bertomeu
>>> Eng, MSc, PhD
>>> Research Engineer
>>> CEA/CESTA
>>> 33114 LE BARP
>>> Tel.: (+33)557046924
>>> Mob.: (+33)611025322
>>> Mail: thibault.bridelbertomeu at gmail.com
>>>
>>>
>>>
>> <implicit_with_coloring.pdf>
>
>
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: equationOfState_modded_h.pdf
Type: application/pdf
Size: 299857 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20200827/80a51f8a/attachment-0002.pdf>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: timeDiscretization_PETSc_modded.pdf
Type: application/pdf
Size: 603726 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20200827/80a51f8a/attachment-0003.pdf>

