[petsc-users] Scalable Solver for Incompressible Flow
Jed Brown
jed at jedbrown.org
Fri Jul 28 21:04:30 CDT 2023
See src/snes/tutorials/ex70.c for the code that I think was used for that paper.
Alexander Lindsay <alexlindsay239 at gmail.com> writes:
> Sorry for the spam. Looks like these authors have published multiple papers on the subject.
>
> Combining the Augmented Lagrangian Preconditioner with the Simple Schur Complement Approximation | SIAM Journal on
> Scientific Computing (doi.org)
>
> On Jul 28, 2023, at 12:59 PM, Alexander Lindsay <alexlindsay239 at gmail.com> wrote:
>
> Do you know of anyone who has applied the augmented Lagrangian methodology to a finite volume discretization?
>
> On Jul 6, 2023, at 6:25 PM, Matthew Knepley <knepley at gmail.com> wrote:
>
> On Thu, Jul 6, 2023 at 8:30 PM Alexander Lindsay <alexlindsay239 at gmail.com> wrote:
>
> This is an interesting article that compares a multi-level ILU algorithm to approximate commutator and augmented
> Lagrangian methods: https://doi.org/10.1002/fld.5039
>
> That is for incompressible NS. The results are not better than https://arxiv.org/abs/1810.03315, and that PC is considerably
> simpler and already implemented in PETSc. There is an update to this
>
>
> https://epubs.siam.org/doi/abs/10.1137/21M1430698?casa_token=Fp_XhuZStZ0AAAAA:YDhnkW9XvAom_b8KocWz-hBEI7FAt46aw3ICa0FvCrOVCtYr9bwvtqJ4aBOTkDSvANKh6YTQEw
>
>
> which removes the need for complicated elements.
>
> You might need stuff like ILU for compressible flow, but I think incompressible is solved.
>
> Thanks,
>
> Matt
>
> On Wed, Jun 28, 2023 at 11:37 AM Alexander Lindsay <alexlindsay239 at gmail.com> wrote:
>
> I do believe that based on the results in https://doi.org/10.1137/040608817 we should be able to make LSC, with
> proper scaling, compare very favorably with PCD.
>
> On Tue, Jun 27, 2023 at 10:41 AM Alexander Lindsay <alexlindsay239 at gmail.com> wrote:
>
> I've opened https://gitlab.com/petsc/petsc/-/merge_requests/6642 which adds a couple more scaling
> applications of the inverse of the diagonal of A.
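>
> For anyone who wants to try it, the option set looks roughly like this (the
> LSC scaling option name is from memory and the extra applications only exist
> on the MR branch, so treat this as a sketch rather than gospel):
>
>   -pc_type fieldsplit -pc_fieldsplit_type schur
>   -pc_fieldsplit_schur_fact_type upper
>   -fieldsplit_1_ksp_type gmres
>   -fieldsplit_1_pc_type lsc
>   -fieldsplit_1_pc_lsc_scale_diag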
>
> On Mon, Jun 26, 2023 at 6:06 PM Alexander Lindsay <alexlindsay239 at gmail.com> wrote:
>
> I guess that, similar to the discussions about selfp, the approximation of the velocity mass
> matrix by the diagonal of the velocity sub-matrix will improve when running a transient as
> opposed to a steady calculation, especially if the time derivative is lumped... Just thinking
> while typing.
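>
> Writing that out: with backward Euler the velocity block is roughly
>
>   A_00 = (1/dt) M + K(u),
>
> so as dt shrinks, diag(A_00) -> (1/dt) diag(M), i.e. the lumped mass matrix
> over dt, and the diagonal becomes an increasingly good (in the lumped limit,
> exact) stand-in for the mass matrix.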
>
> On Mon, Jun 26, 2023 at 6:03 PM Alexander Lindsay <alexlindsay239 at gmail.com> wrote:
>
> Returning to Sebastian's question about the correctness of the current LSC implementation: in the
> taxonomy paper that Jed linked to (which talks about SIMPLE, PCD, and LSC), equation 21 shows four
> applications of the inverse of the velocity mass matrix. In the PETSc implementation there are at
> most two applications of the reciprocal of the diagonal of A (an approximation to the velocity mass
> matrix without more plumbing, as already pointed out). It seems like for code implementations in
> which there are possible scaling differences between the velocity and pressure equations, this
> difference in the number of inverse applications could be significant? I know Jed said that these
> scalings wouldn't really matter if you have a uniform grid, but I'm not 100% convinced yet.
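>
> For reference, what I read from equation 21 there (modulo signs) is
>
>   S^{-1} ~ (B Q^{-1} B^T)^{-1} (B Q^{-1} A Q^{-1} B^T) (B Q^{-1} B^T)^{-1},
>
> with Q the (approximate) velocity mass matrix: two applications of Q^{-1} in
> the middle operator plus one inside each of the two Poisson-type operators,
> hence four in total.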
>
> I might try fiddling around with adding two more reciprocal applications.
>
> On Fri, Jun 23, 2023 at 1:09 PM Pierre Jolivet <pierre.jolivet at lip6.fr> wrote:
>
> On 23 Jun 2023, at 10:06 PM, Pierre Jolivet <pierre.jolivet at lip6.fr> wrote:
>
> On 23 Jun 2023, at 9:39 PM, Alexander Lindsay <alexlindsay239 at gmail.com> wrote:
>
> Ah, I see that if I use Pierre's new 'full' option for -mat_schur_complement_ainv_type
>
> That was not initially done by me
>
> Oops, sorry for the noise, looks like it was done by me indeed in
> 9399e4fd88c6621aad8fe9558ce84df37bd6fada
>
> Thanks,
> Pierre
>
> (though I recently tweaked MatSchurComplementComputeExplicitOperator() a bit to use
> KSPMatSolve(), so that if you have a small Schur complement — which is not really the case
> for NS — this could be a viable option; it was previously painfully slow).
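>
> A minimal sketch of that path in C, assuming you already have a
> MATSCHURCOMPLEMENT Mat S set up (error handling abbreviated):
>
>   Mat Sexplicit;
>   PetscCall(MatSchurComplementComputeExplicitOperator(S, &Sexplicit));
>   /* Sexplicit holds an assembled A11 - A10 inv(A00) A01; only */
>   /* sensible when the Schur complement is small.              */
>   PetscCall(MatDestroy(&Sexplicit));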
>
> Thanks,
> Pierre
>
> that I get a single iteration for the Schur complement solve with LU. That's a nice testing
> option.
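>
> (Concretely, the options look something like this — names to the best of my
> recollection, so treat as a sketch:
>
>   -pc_type fieldsplit -pc_fieldsplit_type schur
>   -pc_fieldsplit_schur_fact_type full
>   -pc_fieldsplit_schur_precondition selfp
>   -mat_schur_complement_ainv_type full
>   -fieldsplit_1_pc_type lu
>
> so the assembled Schur preconditioner is built with the full inverse of A00
> and the inner LU makes that solve exact.)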
>
> On Fri, Jun 23, 2023 at 12:02 PM Alexander Lindsay <alexlindsay239 at gmail.com>
> wrote:
>
> I guess it is because the inverse of the diagonal form of A00 becomes a poor
> representation of the inverse of A00? Naively I would have thought that the
> blockdiag form of A00 is A00.
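>
> (In symbols: selfp assembles Sp = A11 - A10 diag(A00)^{-1} A01 as the
> preconditioning matrix for S = A11 - A10 A00^{-1} A01, so everything hinges
> on diag(A00)^{-1} resembling A00^{-1}; with strong advection A00 picks up
> large off-diagonal coupling, so its diagonal carries less and less of the
> action of A00.)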
>
> On Fri, Jun 23, 2023 at 10:18 AM Alexander Lindsay <alexlindsay239 at gmail.com>
> wrote:
>
> Hi Jed, I will come back with answers to all of your questions at some point. I
> mostly just deal with MOOSE users who come to me and tell me their solve is
> converging slowly, asking me how to fix it. So I generally assume they have
> built an appropriate mesh and problem size for the problem they want to solve
> and added appropriate turbulence modeling (although my general assumption
> is often violated).
>
> > And to confirm, are you doing a nonlinearly implicit velocity-pressure solve?
>
> Yes, this is our default.
>
> A general question: it seems to be well known that the quality of selfp
> degrades with increasing advection. Why is that?
>
> On Wed, Jun 7, 2023 at 8:01 PM Jed Brown <jed at jedbrown.org> wrote:
>
> Alexander Lindsay <alexlindsay239 at gmail.com> writes:
>
> > This has been a great discussion to follow. Regarding
> >
> >> when time stepping, you have enough mass matrix that cheaper preconditioners are good enough
> >
> > I'm curious what some algebraic recommendations might be for high Re in transients.
>
> What mesh aspect ratio and streamline CFL number? Assuming your model
> is turbulent, can you say anything about momentum thickness Reynolds
> number Re_θ? What is your wall normal spacing in plus units? (Wall
> resolved or wall modeled?)
>
> And to confirm, are you doing a nonlinearly implicit velocity-pressure
> solve?
>
> > I've found one-level DD to be ineffective when applied monolithically or to the momentum block of a split, as it scales with the mesh size.
>
> I wouldn't put too much weight on "scaling with mesh size" per se. You
> want an efficient solver for the coarsest mesh that delivers sufficient
> accuracy in your flow regime. Constants matter.
>
> Refining the mesh while holding time steps constant changes the advective
> CFL number as well as cell Peclet/cell Reynolds numbers. A meaningful
> scaling study is to increase Reynolds number (e.g., by growing the domain)
> while keeping mesh size matched in terms of plus units in the viscous
> sublayer and Kolmogorov length in the outer boundary layer. That turns
> out to not be a very automatic study to do, but it's what matters and you
> can spend a lot of time chasing ghosts with naive scaling studies.
>
> --
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any
> results to which their experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/