[petsc-users] Scalable Solver for Incompressible Flow

Alexander Lindsay alexlindsay239 at gmail.com
Thu Jul 6 19:30:24 CDT 2023


This is an interesting article that compares a multi-level ILU algorithm to
approximate commutator and augmented Lagrange methods:
https://doi.org/10.1002/fld.5039

On Wed, Jun 28, 2023 at 11:37 AM Alexander Lindsay <alexlindsay239 at gmail.com>
wrote:

> I do believe that based on the results in
> https://doi.org/10.1137/040608817 we should be able to make LSC, with
> proper scaling, compare very favorably with PCD.
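>
> As a sketch of how one might set up that comparison in PETSc (hedged:
> the fieldsplit prefixes below assume fields numbered 0 = velocity and
> 1 = pressure, and the inner "lsc_" prefix follows the PCLSC manual
> page; your field names may differ):
>
>   -pc_type fieldsplit -pc_fieldsplit_type schur
>   -pc_fieldsplit_schur_fact_type full
>   -fieldsplit_1_pc_type lsc
>   -fieldsplit_1_lsc_ksp_type preonly -fieldsplit_1_lsc_pc_type hypre
>
> where the -fieldsplit_1_lsc_ prefix controls the inner Poisson-type
> solves that PCLSC performs.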
>
> On Tue, Jun 27, 2023 at 10:41 AM Alexander Lindsay <
> alexlindsay239 at gmail.com> wrote:
>
>> I've opened https://gitlab.com/petsc/petsc/-/merge_requests/6642 which
>> adds a couple more scaling applications of the inverse of the diagonal of A.
>>
>> On Mon, Jun 26, 2023 at 6:06 PM Alexander Lindsay <
>> alexlindsay239 at gmail.com> wrote:
>>
>>> I guess that, similar to the discussions about selfp, the approximation
>>> of the velocity mass matrix by the diagonal of the velocity sub-matrix will
>>> improve when running a transient as opposed to a steady calculation,
>>> especially if the time derivative is lumped... just thinking while typing.
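>>>
>>> Sketching that thought out: with a lumped time derivative the momentum
>>> block looks like A = M_L/dt + K, with M_L the (diagonal) lumped mass
>>> matrix and K the advection/diffusion terms, so
>>>
>>>   diag(A) = M_L/dt + diag(K)
>>>
>>> and for small dt, diag(A)^-1 ≈ dt M_L^-1, i.e. a well-scaled mass
>>> matrix inverse; in a steady solve the M_L/dt term is absent entirely.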
>>>
>>> On Mon, Jun 26, 2023 at 6:03 PM Alexander Lindsay <
>>> alexlindsay239 at gmail.com> wrote:
>>>
>>>> Returning to Sebastian's question about the correctness of the current
>>>> LSC implementation: in the taxonomy paper that Jed linked to (which talks
>>>> about SIMPLE, PCD, and LSC), equation 21 shows four applications of the
>>>> inverse of the velocity mass matrix. In the PETSc implementation there are
>>>> at most two applications of the reciprocal of the diagonal of A (an
>>>> approximation to the velocity mass matrix without more plumbing, as already
>>>> pointed out). It seems that, for codes in which there are possible scaling
>>>> differences between the velocity and pressure equations, this difference in
>>>> the number of inverse applications could be significant? I know Jed said
>>>> that these scalings wouldn't really matter if you have a uniform grid, but
>>>> I'm not 100% convinced yet.
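>>>>
>>>> For concreteness, the scaled LSC approximation in that literature (with
>>>> A the momentum block, B the divergence operator, and T a diagonal
>>>> approximation to the velocity mass matrix, here T = diag(A)) reads
>>>> roughly
>>>>
>>>>   S^-1 ≈ (B T^-1 B^T)^-1 (B T^-1 A T^-1 B^T) (B T^-1 B^T)^-1
>>>>
>>>> which is where the four applications of T^-1 come from.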
>>>>
>>>> I might try fiddling around with adding two more reciprocal
>>>> applications.
>>>>
>>>> On Fri, Jun 23, 2023 at 1:09 PM Pierre Jolivet <pierre.jolivet at lip6.fr>
>>>> wrote:
>>>>
>>>>>
>>>>> On 23 Jun 2023, at 10:06 PM, Pierre Jolivet <pierre.jolivet at lip6.fr>
>>>>> wrote:
>>>>>
>>>>>
>>>>> On 23 Jun 2023, at 9:39 PM, Alexander Lindsay <
>>>>> alexlindsay239 at gmail.com> wrote:
>>>>>
>>>>> Ah, I see that if I use Pierre's new 'full' option for
>>>>> -mat_schur_complement_ainv_type
>>>>>
>>>>>
>>>>> That was not initially done by me
>>>>>
>>>>>
>>>>> Oops, sorry for the noise, looks like it was done by me indeed
>>>>> in 9399e4fd88c6621aad8fe9558ce84df37bd6fada…
>>>>>
>>>>> Thanks,
>>>>> Pierre
>>>>>
>>>>> (though I recently tweaked MatSchurComplementComputeExplicitOperator()
>>>>> a bit to use KSPMatSolve(), so that if you have a small Schur complement,
>>>>> which is not really the case for NS, this could be a viable option; it was
>>>>> previously painfully slow).
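>>>>>
>>>>> A minimal sketch of that usage from user code (hedged: A00, A01, A10,
>>>>> and A11 are placeholder names for already-assembled sub-blocks):
>>>>>
>>>>>   Mat S, Sexplicit;
>>>>>   /* S applies A11 - A10 inv(A00) A01 via inner solves on A00 */
>>>>>   PetscCall(MatCreateSchurComplement(A00, A00, A01, A10, A11, &S));
>>>>>   /* forms the Schur complement explicitly, now via KSPMatSolve() */
>>>>>   PetscCall(MatSchurComplementComputeExplicitOperator(S, &Sexplicit));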
>>>>>
>>>>> Thanks,
>>>>> Pierre
>>>>>
>>>>> that I get a single iteration for the Schur complement solve with LU.
>>>>> That's a nice testing option.
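>>>>>
>>>>> (Spelled out, the testing combination is roughly the following; hedged,
>>>>> since the exact fieldsplit prefixes depend on the field names:
>>>>>
>>>>>   -pc_type fieldsplit -pc_fieldsplit_type schur
>>>>>   -pc_fieldsplit_schur_fact_type full
>>>>>   -pc_fieldsplit_schur_precondition selfp
>>>>>   -mat_schur_complement_ainv_type full
>>>>>   -fieldsplit_1_ksp_type gmres -fieldsplit_1_pc_type lu
>>>>>
>>>>> With ainv_type full the Schur preconditioner is built from an exact
>>>>> inverse of A00, so LU on it makes the Schur solve converge in one
>>>>> iteration.)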
>>>>>
>>>>> On Fri, Jun 23, 2023 at 12:02 PM Alexander Lindsay <
>>>>> alexlindsay239 at gmail.com> wrote:
>>>>>
>>>>>> I guess it is because the inverse of the diagonal form of A00 becomes
>>>>>> a poor representation of the inverse of A00? I guess naively I would have
>>>>>> thought that the blockdiag form of A00 is A00.
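>>>>>>
>>>>>> (To make that concrete for a 2-component velocity node, as I understand
>>>>>> the ainv options: the per-node block of A00 is
>>>>>>
>>>>>>   [ a_uu  a_uv ]
>>>>>>   [ a_vu  a_vv ]
>>>>>>
>>>>>> "diag" keeps only a_uu and a_vv, while "blockdiag" inverts the full 2x2
>>>>>> node block exactly. Neither is A00 itself: both discard all inter-node
>>>>>> coupling, which is where the advective and diffusive stencil lives.)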
>>>>>>
>>>>>> On Fri, Jun 23, 2023 at 10:18 AM Alexander Lindsay <
>>>>>> alexlindsay239 at gmail.com> wrote:
>>>>>>
>>>>>>> Hi Jed, I will come back with answers to all of your questions at
>>>>>>> some point. I mostly just deal with MOOSE users who come to me and tell me
>>>>>>> their solve is converging slowly, asking me how to fix it. So I generally
>>>>>>> assume they have chosen an appropriate mesh and problem size for the problem
>>>>>>> they want to solve and added appropriate turbulence modeling (although my
>>>>>>> general assumption is often violated).
>>>>>>>
>>>>>>> > And to confirm, are you doing a nonlinearly implicit
>>>>>>> velocity-pressure solve?
>>>>>>>
>>>>>>> Yes, this is our default.
>>>>>>>
>>>>>>> A general question: it seems to be well known that the quality
>>>>>>> of selfp degrades with increasing advection. Why is that?
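>>>>>>>
>>>>>>> (For context, per the PCFieldSplitSetSchurPre() docs, selfp builds the
>>>>>>> Schur preconditioner as
>>>>>>>
>>>>>>>   Sp = A11 - A10 inv(diag(A00)) A01
>>>>>>>
>>>>>>> so my naive guess is that as advection strengthens, more of A00 moves
>>>>>>> into off-diagonal transport coupling that diag(A00) cannot see.)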
>>>>>>>
>>>>>>> On Wed, Jun 7, 2023 at 8:01 PM Jed Brown <jed at jedbrown.org> wrote:
>>>>>>>
>>>>>>>> Alexander Lindsay <alexlindsay239 at gmail.com> writes:
>>>>>>>>
>>>>>>>> > This has been a great discussion to follow. Regarding
>>>>>>>> >
>>>>>>>> >> when time stepping, you have enough mass matrix that cheaper
>>>>>>>> preconditioners are good enough
>>>>>>>> >
>>>>>>>> > I'm curious what some algebraic recommendations might be for high
>>>>>>>> Re in
>>>>>>>> > transients.
>>>>>>>>
>>>>>>>> What mesh aspect ratio and streamline CFL number? Assuming your
>>>>>>>> model is turbulent, can you say anything about momentum thickness Reynolds
>>>>>>>> number Re_θ? What is your wall normal spacing in plus units? (Wall resolved
>>>>>>>> or wall modeled?)
>>>>>>>>
>>>>>>>> And to confirm, are you doing a nonlinearly implicit
>>>>>>>> velocity-pressure solve?
>>>>>>>>
>>>>>>>> > I've found one-level DD to be ineffective when applied
>>>>>>>> monolithically or to the momentum block of a split, as it scales with the
>>>>>>>> mesh size.
>>>>>>>>
>>>>>>>> I wouldn't put too much weight on "scaling with mesh size" per se.
>>>>>>>> You want an efficient solver for the coarsest mesh that delivers sufficient
>>>>>>>> accuracy in your flow regime. Constants matter.
>>>>>>>>
>>>>>>>> Refining the mesh while holding time steps constant changes the
>>>>>>>> advective CFL number as well as cell Peclet/cell Reynolds numbers. A
>>>>>>>> meaningful scaling study is to increase Reynolds number (e.g., by growing
>>>>>>>> the domain) while keeping mesh size matched in terms of plus units in the
>>>>>>>> viscous sublayer and Kolmogorov length in the outer boundary layer. That
>>>>>>>> turns out not to be a very automatic study to do, but it's what matters,
>>>>>>>> and you can spend a lot of time chasing ghosts with naive scaling studies.
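>>>>>>>>
>>>>>>>> (In symbols: hold the wall-normal spacing in plus units, y+ = y u_tau /
>>>>>>>> nu, fixed in the viscous sublayer and keep the Kolmogorov length eta =
>>>>>>>> (nu^3 / epsilon)^(1/4) resolved in the outer layer, while growing the
>>>>>>>> domain so the bulk Reynolds number Re = U L / nu increases.)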
>>>>>>>>
>>>>>>>
>>>>>
>>>>>