[petsc-users] PetscFV and TS implicit
Thibault Bridel-Bertomeu
thibault.bridelbertomeu at gmail.com
Fri Aug 21 08:21:07 CDT 2020
Thanks, Matthew, for your very fast answer. Sorry, I sent another mail at the
same time because my first one wasn't complete.
On Fri, Aug 21, 2020 at 3:09 PM Matthew Knepley <knepley at gmail.com> wrote:
> On Fri, Aug 21, 2020 at 8:56 AM Thibault Bridel-Bertomeu <
> thibault.bridelbertomeu at gmail.com> wrote:
>
>> Hi,
>>
>> Thanks Matthew and Jed for your input.
>> I indeed envision an implicit solver in the sense Jed mentioned - Jiri
>> Blazek's book is a nice intro to this concept.
>>
>
> Unfortunately, I still do not really understand. I have a step in the FV
> code where I update the state based on the fluxes. If you violate the CFL
> condition, this step is completely wrong.
>
>
>> Matthew, I do not know exactly what to change right now because, although
>> I understand globally what the DMPlexComputeXXXX_Internal methods do, I
>> cannot say for sure what is happening line by line.
>> In a structured code, I have an implicit FVM solver with PETSc, but I do
>> not use any of the FV structure, not even a DM - I just use C arrays that I
>> transform into PETSc Vec and Mat, build my IJacobian and my preconditioner,
>> and give all that to a TS, and it runs. I cannot figure out how to do the
>> same with the FV and the DM and all the underlying "shortcuts" that I want
>> to use.
>>
>
> I can explain what I am doing, and maybe we can either change it to what
> you want, or allow you to use the pieces in your code to make things
> simpler.
> Here is the comment from DMPlexComputeResidual_Internal():
>
> /* FVM */
> /* Get geometric data */
> /* If using gradients */
> /* Compute gradient data */
> /* Loop over domain faces */
> /* Count computational faces */
> /* Reconstruct cell gradient */
> /* Loop over domain cells */
> /* Limit cell gradients */
> /* Handle boundary values */
> /* Loop over domain faces */
> /* Read out field, centroid, normal, volume for each side of face */
> /* Riemann solve over faces */
> /* Loop over domain faces */
> /* Accumulate fluxes to cells */
>
Yes, I saw the comments at the end of the method, and they helped me
understand.
Indeed, at the end of this you have the RHS of the system in the dU/dt = RHS
sense. To be precise, it is the RHS at time t^n: with a first-order explicit
Euler scheme we would write U^(n+1) - U^n = dt * RHS^n.
With an implicit solver, however, we want to write (U^(n+1) - U^n)/dt =
RHS^(n+1), as you know. Since we cannot really evaluate RHS^(n+1), we
linearize it about RHS^n and write
RHS^(n+1) = RHS^n + [dRHS/dU]^n * (U^(n+1) - U^n).
Substituting, we obtain something like
[alpha * Identity - [dRHS/dU]^n] * (U^(n+1) - U^n) = RHS^n,
where alpha depends on the time step (alpha = 1/dt for backward Euler).
This left-hand-side matrix is what I would like to compute before advancing
the system in time, whereas RHS^n is what DMPlexComputeResidual_Internal
gives.
Then there is the matter of preconditioning this system to make the whole
thing more stable, but if I could already figure out how to generate that
LHS, it would be a great step in the right direction.
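In PETSc TS terms, that operator is what the IJacobian callback is expected to provide: with F(t, U, Udot) = Udot - RHS(U), TS requests J = shift * dF/dUdot + dF/dU = shift * Identity - dRHS/dU, where the shift supplied by TS plays the role of alpha above. A minimal sketch, assuming a hypothetical ComputeRHSJacobian() helper that assembles dRHS/dU into the preconditioning matrix:

static PetscErrorCode MyIJacobian(TS ts, PetscReal t, Vec U, Vec Udot,
                                  PetscReal shift, Mat J, Mat Jpre, void *ctx)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  /* Hypothetical user routine assembling dRHS/dU at the current state U */
  ierr = ComputeRHSJacobian(U, Jpre, ctx); CHKERRQ(ierr);
  /* Turn it into shift*I - dRHS/dU */
  ierr = MatScale(Jpre, -1.0); CHKERRQ(ierr);
  ierr = MatShift(Jpre, shift); CHKERRQ(ierr);
  if (J != Jpre) { /* for a matrix-free J these assembly calls are no-ops */
    ierr = MatAssemblyBegin(J, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
    ierr = MatAssemblyEnd(J, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}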
Thanks again a lot!
Thibault
> Obviously you might not need all the steps, and the last step is the one I
> cannot understand for your case.
>
> Thanks,
>
> Matt
>
>
>> Here is the top-level setup in the structured code:
>>
>> int total_size = context.npoints * solver->nvars;
>>
>> /* RHS function evaluating RHS^n for the TS */
>> ierr = TSSetRHSFunction(ts,PETSC_NULL,PetscRHSFunctionImpl,&context);
>> CHKERRQ(ierr);
>>
>> SNES     snes;
>> KSP      ksp;
>> PC       pc;
>> SNESType snestype;
>> ierr = TSGetSNES(ts,&snes); CHKERRQ(ierr);
>> ierr = SNESGetType(snes,&snestype); CHKERRQ(ierr);
>>
>> /* Matrix-free (JFNK) shell operator for the Jacobian */
>> flag_mat_a = 1;
>> ierr = MatCreateShell(MPI_COMM_WORLD,total_size,total_size,
>>                       PETSC_DETERMINE,PETSC_DETERMINE,&context,&A); CHKERRQ(ierr);
>> context.jfnk_eps = 1e-7;
>> ierr = PetscOptionsGetReal(NULL,NULL,"-jfnk_epsilon",&context.jfnk_eps,
>>                            NULL); CHKERRQ(ierr);
>> ierr = MatShellSetOperation(A,MATOP_MULT,
>>                             (void (*)(void))PetscJacobianFunction_JFNK); CHKERRQ(ierr);
>> ierr = MatSetUp(A); CHKERRQ(ierr);
>>
>> context.flag_use_precon = 0;
>> ierr = PetscOptionsGetBool(PETSC_NULL,PETSC_NULL,"-with_pc",
>>                            (PetscBool *)(&context.flag_use_precon),PETSC_NULL); CHKERRQ(ierr);
>>
>> /* Set up the assembled (AIJ) preconditioner matrix */
>> flag_mat_b = 1;
>> ierr = MatCreateAIJ(MPI_COMM_WORLD,total_size,total_size,
>>                     PETSC_DETERMINE,PETSC_DETERMINE,
>>                     (solver->ndims*2+1)*solver->nvars,NULL,
>>                     2*solver->ndims*solver->nvars,NULL,&B); CHKERRQ(ierr);
>> ierr = MatSetBlockSize(B,solver->nvars); CHKERRQ(ierr);
>>
>> /* Set the IJacobian for TS: A is the shell operator, B the preconditioning matrix */
>> ierr = TSSetIJacobian(ts,A,B,PetscIJacobian,&context); CHKERRQ(ierr);
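The snippet registers PetscJacobianFunction_JFNK as the shell's MULT operation but does not show its body. Schematically, such a matrix-free apply computes y ~ shift*v - [RHS(U + eps*v) - RHS(U)]/eps. A minimal sketch, in which the AppCtx fields (U, RHS, shift) and the ComputeRHS() helper are assumptions for illustration only:

/* Hypothetical matrix-free apply: y = shift*v - (RHS(U + eps*v) - RHS(U))/eps */
PetscErrorCode PetscJacobianFunction_JFNK(Mat A, Vec v, Vec y)
{
  AppCtx        *ctx;        /* assumed context holding U, RHS(U), shift, jfnk_eps */
  Vec            Upert, Rpert;
  PetscReal      eps;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatShellGetContext(A, &ctx); CHKERRQ(ierr);
  eps  = ctx->jfnk_eps;

  /* Work vectors (a real implementation would cache these in the context) */
  ierr = VecDuplicate(ctx->U, &Upert); CHKERRQ(ierr);
  ierr = VecDuplicate(ctx->U, &Rpert); CHKERRQ(ierr);

  /* Perturbed state: Upert = U + eps * v */
  ierr = VecWAXPY(Upert, eps, v, ctx->U); CHKERRQ(ierr);
  /* Evaluate the RHS at the perturbed state (user routine) */
  ierr = ComputeRHS(Upert, Rpert, ctx); CHKERRQ(ierr);

  ierr = VecCopy(Rpert, y); CHKERRQ(ierr);
  ierr = VecAXPY(y, -1.0, ctx->RHS); CHKERRQ(ierr);  /* y = RHS(U+eps*v) - RHS(U)      */
  ierr = VecScale(y, -1.0/eps); CHKERRQ(ierr);       /* y = -(dRHS/dU)*v, approximately */
  ierr = VecAXPY(y, ctx->shift, v); CHKERRQ(ierr);   /* y = shift*v - (dRHS/dU)*v       */

  ierr = VecDestroy(&Upert); CHKERRQ(ierr);
  ierr = VecDestroy(&Rpert); CHKERRQ(ierr);
  PetscFunctionReturn(0);
}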
>>
>> Thibault Bridel-Bertomeu
>> —
>> Eng, MSc, PhD
>> Research Engineer
>> CEA/CESTA
>> 33114 LE BARP
>> Tel.: (+33)557046924
>> Mob.: (+33)611025322
>> Mail: thibault.bridelbertomeu at gmail.com
>>
>>
>> On Thu, Aug 20, 2020 at 6:43 PM Jed Brown <jed at jedbrown.org> wrote:
>>
>>> Matthew Knepley <knepley at gmail.com> writes:
>>>
>>> > I could never get the FVM stuff to make sense to me for implicit
>>> > methods. Here is my problem understanding. If you have an FVM method,
>>> > it decides to move "stuff" from one cell to its neighboring cells
>>> > depending on the solution to the Riemann problem on each face, which
>>> > computes the flux. This is fine unless the timestep is so big that
>>> > material can flow through into the cells beyond the neighbor. Then I
>>> > should have considered the effect of the Riemann problem for those
>>> > interfaces. That would be in the Jacobian, but I don't know how to
>>> > compute that Jacobian. I guess you could do everything matrix-free,
>>> > but without a preconditioner it seems hard.
>>>
>>> So long as we're using method of lines, the flux is just instantaneous
>>> flux, not integrated over some time step. It has the same meaning for
>>> implicit and explicit.
>>>
>>> An explicit method would be unstable if you took such a large time step
>>> (CFL), and an implicit method will not simultaneously be SSP and higher
>>> than first order, but it's still a consistent discretization of the
>>> problem.
>>>
>>> It's common (done in FUN3D and others) to precondition with a first-order
>>> method, where gradient reconstruction/limiting is skipped. That's what I'd
>>> recommend, because limiting creates nasty nonlinearities and the resulting
>>> discretizations lack h-ellipticity, which makes them very hard to solve.
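One way to express that first-order preconditioner with the PetscFV objects is to build a second discretization with gradient reconstruction and limiting turned off; a minimal sketch (nComp and dim stand for the number of fields and the spatial dimension, and wiring this object into the preconditioner assembly is not shown):

/* A second PetscFV used only for the preconditioner: first order, no
   gradient reconstruction, no limiting. */
PetscFV      fvPre;
PetscLimiter lim;

ierr = PetscFVCreate(PETSC_COMM_WORLD, &fvPre); CHKERRQ(ierr);
ierr = PetscFVSetType(fvPre, PETSCFVUPWIND); CHKERRQ(ierr);
ierr = PetscFVSetNumComponents(fvPre, nComp); CHKERRQ(ierr);
ierr = PetscFVSetSpatialDimension(fvPre, dim); CHKERRQ(ierr);
ierr = PetscFVSetComputeGradients(fvPre, PETSC_FALSE); CHKERRQ(ierr);  /* skip reconstruction */
ierr = PetscFVGetLimiter(fvPre, &lim); CHKERRQ(ierr);
ierr = PetscLimiterSetType(lim, PETSCLIMITERNONE); CHKERRQ(ierr);      /* skip limiting */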
>>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>