[petsc-users] PetscFV and TS implicit

Matthew Knepley knepley at gmail.com
Fri Aug 21 08:09:35 CDT 2020


On Fri, Aug 21, 2020 at 8:56 AM Thibault Bridel-Bertomeu <thibault.bridelbertomeu at gmail.com> wrote:

> Hi,
>
> Thanks Matthew and Jed for your input.
> I indeed envision an implicit solver in the sense Jed mentioned - Jiri
> Blazek's book is a nice intro to this concept.
>

Unfortunately, I still do not really understand. I have a step in the FV
code where I update the state based on the fluxes. If you violate CFL,
this step is completely wrong.
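
To make that concrete, here is a minimal, self-contained sketch (mine, not
from the PETSc source) of such an update step for 1-D upwind advection.
Each cell exchanges material only with its immediate face neighbors, so one
step can move material at most one cell; if a*dt/dx > 1 the true solution
moves further than that, and the update is wrong.

#include <stdio.h>

/* Illustrative 1-D upwind FV update; here CFL = a*dt/dx = 0.5. */
int main(void)
{
  enum {N = 8};
  double u[N]    = {1, 1, 1, 1, 0, 0, 0, 0}; /* piecewise-constant state */
  double flux[N + 1];                        /* one flux per face        */
  const double a = 1.0, dx = 0.1, dt = 0.05;

  flux[0] = a * u[0];                                  /* crude inflow BC  */
  for (int f = 1; f <= N; ++f) flux[f] = a * u[f - 1]; /* upwind face flux */

  for (int c = 0; c < N; ++c)              /* the update step in question */
    u[c] -= (dt / dx) * (flux[c + 1] - flux[c]);

  for (int c = 0; c < N; ++c) printf("%g ", u[c]);
  printf("\n");
  return 0;
}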


> Matthew, I do not know exactly what to change right now because although I
> understand globally what the DMPlexComputeXXXX_Internal methods do, I
> cannot say for sure line by line what is happening.
> In a structured code, I have an implicit FVM solver with PETSc but I do
> not use any of the FV structure, not even a DM - I just use C arrays that I
> transform to PETSc Vec and Mat and build my IJacobian and my preconditioner
> and give all that to a TS and it runs. I cannot figure out how to do it
> with the FV and the DM and all the underlying "shortcuts" that I want to
> use.
>

I can explain what I am doing, and maybe we can either change it to what
you want, or allow you to use the pieces in your code to make things
simpler.
Here is the comment from DMPlexComputeResidual_Internal():

  /* FVM */
  /* Get geometric data */
  /* If using gradients */
  /*   Compute gradient data */
  /*   Loop over domain faces */
  /*     Count computational faces */
  /*     Reconstruct cell gradient */
  /*   Loop over domain cells */
  /*     Limit cell gradients */
  /* Handle boundary values */
  /* Loop over domain faces */
  /*   Read out field, centroid, normal, volume for each side of face */
  /*   Riemann solve over faces */
  /* Loop over domain faces */
  /*   Accumulate fluxes to cells */

Obviously you might not need all the steps, and the last step is the one I
cannot understand for your case.
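
For reference, a hand-written paraphrase of that last step (my sketch of
the idea, not the actual PETSc source): the Riemann flux on each face is
scattered into the residuals of the two cells sharing the face, with
opposite signs.

typedef struct { int left, right; double area; } Face;

/* Sketch of "Accumulate fluxes to cells": each face flux leaves one cell
   and enters the other, scaled by face area over cell volume. */
void AccumulateFluxes(int nFaces, const Face *face, const double *flux,
                      const double *cellVolume, double *residual)
{
  for (int f = 0; f < nFaces; ++f) {
    residual[face[f].left]  -= face[f].area / cellVolume[face[f].left]  * flux[f];
    residual[face[f].right] += face[f].area / cellVolume[face[f].right] * flux[f];
  }
}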

  Thanks,

     Matt


> Here is the top method for the structured code:
>
> int total_size = context.npoints * solver->nvars;
> ierr = TSSetRHSFunction(ts, PETSC_NULL, PetscRHSFunctionImpl, &context); CHKERRQ(ierr);
>
> SNES snes;
> KSP ksp;
> PC pc;
> SNESType snestype;
> ierr = TSGetSNES(ts, &snes); CHKERRQ(ierr);
> ierr = SNESGetType(snes, &snestype); CHKERRQ(ierr);
>
> /* Matrix-free Jacobian shell */
> flag_mat_a = 1;
> ierr = MatCreateShell(MPI_COMM_WORLD, total_size, total_size,
>                       PETSC_DETERMINE, PETSC_DETERMINE, &context, &A); CHKERRQ(ierr);
> context.jfnk_eps = 1e-7;
> ierr = PetscOptionsGetReal(NULL, NULL, "-jfnk_epsilon", &context.jfnk_eps, NULL); CHKERRQ(ierr);
> ierr = MatShellSetOperation(A, MATOP_MULT, (void (*)(void))PetscJacobianFunction_JFNK); CHKERRQ(ierr);
> ierr = MatSetUp(A); CHKERRQ(ierr);
>
> context.flag_use_precon = 0;
> ierr = PetscOptionsGetBool(PETSC_NULL, PETSC_NULL, "-with_pc",
>                            (PetscBool*)(&context.flag_use_precon), PETSC_NULL); CHKERRQ(ierr);
>
> /* Set up preconditioner matrix */
> flag_mat_b = 1;
> ierr = MatCreateAIJ(MPI_COMM_WORLD, total_size, total_size,
>                     PETSC_DETERMINE, PETSC_DETERMINE,
>                     (solver->ndims*2+1)*solver->nvars, NULL,
>                     2*solver->ndims*solver->nvars, NULL, &B); CHKERRQ(ierr);
> ierr = MatSetBlockSize(B, solver->nvars); CHKERRQ(ierr);
>
> /* Set the IJacobian function for TS (A is applied matrix-free, B preconditions) */
> ierr = TSSetIJacobian(ts, A, B, PetscIJacobian, &context); CHKERRQ(ierr);
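
The email does not show PetscJacobianFunction_JFNK itself, but a shell
MatMult of this kind is typically the one-sided finite-difference Jacobian
action J(u) y ~ (F(u + eps*y) - F(u)) / eps. A minimal sketch, assuming a
hypothetical Context struct holding the current state u, the stored
residual Fu = F(u), and jfnk_eps, and an assumed RHS evaluator ComputeRHS():

#include <petsc.h>

/* Hypothetical user context; the actual one is not shown in the email. */
typedef struct {
  Vec       u, Fu;     /* current state and stored F(u)          */
  PetscReal jfnk_eps;  /* differencing parameter (-jfnk_epsilon) */
} Context;

extern PetscErrorCode ComputeRHS(Vec, Vec, Context*); /* assumed RHS evaluator */

/* Matrix-free Jacobian action: Jy = (F(u + eps*y) - F(u)) / eps */
PetscErrorCode PetscJacobianFunction_JFNK(Mat A, Vec y, Vec Jy)
{
  Context       *ctx;
  Vec            u_pert;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MatShellGetContext(A, &ctx); CHKERRQ(ierr);
  ierr = VecDuplicate(ctx->u, &u_pert); CHKERRQ(ierr);
  ierr = VecWAXPY(u_pert, ctx->jfnk_eps, y, ctx->u); CHKERRQ(ierr); /* u + eps*y  */
  ierr = ComputeRHS(u_pert, Jy, ctx); CHKERRQ(ierr);                /* F(u+eps*y) */
  ierr = VecAXPY(Jy, -1.0, ctx->Fu); CHKERRQ(ierr);                 /* - F(u)     */
  ierr = VecScale(Jy, 1.0 / ctx->jfnk_eps); CHKERRQ(ierr);
  ierr = VecDestroy(&u_pert); CHKERRQ(ierr);
  PetscFunctionReturn(0);
}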
>
> Thibault Bridel-Bertomeu
> Eng, MSc, PhD
> Research Engineer
> CEA/CESTA
> 33114 LE BARP
> Tel.: (+33)557046924
> Mob.: (+33)611025322
> Mail: thibault.bridelbertomeu at gmail.com
>
>
> On Thu, Aug 20, 2020 at 18:43, Jed Brown <jed at jedbrown.org> wrote:
>
>> Matthew Knepley <knepley at gmail.com> writes:
>>
>> > I could never get the FVM stuff to make sense to me for implicit
>> > methods. Here is my problem understanding. If you have an FVM method,
>> > it decides to move "stuff" from one cell to its neighboring cells
>> > depending on the solution to the Riemann problem on each face, which
>> > computes the flux. This is fine unless the timestep is so big that
>> > material can flow through into the cells beyond the neighbor. Then I
>> > should have considered the effect of the Riemann problem for those
>> > interfaces. That would be in the Jacobian, but I don't know how to
>> > compute that Jacobian. I guess you could do everything matrix-free,
>> > but without a preconditioner it seems hard.
>>
>> So long as we're using method of lines, the flux is just instantaneous
>> flux, not integrated over some time step.  It has the same meaning for
>> implicit and explicit.
>>
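
In symbols (notation mine, assuming a cell-centered FV scheme with Riemann
flux \hat{F}): the spatial discretization defines an instantaneous residual
R(u), and an implicit step such as backward Euler is simply layered on top
of it:

\frac{\mathrm{d}u_i}{\mathrm{d}t} = R_i(u)
  = -\frac{1}{|\Omega_i|} \sum_{f \in \partial\Omega_i} |f|\,
    \hat{F}\bigl(u_L^f, u_R^f\bigr) \cdot n_f,
\qquad
u_i^{n+1} - u_i^n - \Delta t\, R_i\bigl(u^{n+1}\bigr) = 0.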
>> An explicit method would be unstable if you took such a large time step
>> (CFL) and an implicit method will not simultaneously be SSP and higher than
>> first order, but it's still a consistent discretization of the problem.
>>
>> It's common (done in FUN3D and others) to precondition with a first-order
>> method, where gradient reconstruction/limiting is skipped.  That's what I'd
>> recommend, because limiting creates nasty nonlinearities and the resulting
>> discretizations lack h-ellipticity, which makes them very hard to solve.
>>
>
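
A sketch of the pattern Jed recommends, expressed with the same
TSSetIJacobian() call used in the structured code above. Here the shell
matrix A stays matrix-free (exact Jacobian action by finite differences),
while the assembled matrix B holds shift*I - dR1/du for the first-order
residual R1 with reconstruction and limiting skipped, and is what the
preconditioner sees. AssembleFirstOrderJacobian() is hypothetical:

#include <petsc.h>

extern PetscErrorCode AssembleFirstOrderJacobian(Vec, PetscReal, Mat, void*); /* hypothetical */

static PetscErrorCode PetscIJacobian(TS ts, PetscReal t, Vec u, Vec udot,
                                     PetscReal shift, Mat A, Mat B, void *ctx)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  /* Nothing to assemble for A: its MatMult was set with
     MatShellSetOperation() above. */
  ierr = AssembleFirstOrderJacobian(u, shift, B, ctx); CHKERRQ(ierr);
  ierr = MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  ierr = MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  PetscFunctionReturn(0);
}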

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/