<div dir="ltr">Maybe that example was a jumping-off point for some of those studies, but it looks to me like that example has been around since ~2012 and initially only touched on SIMPLE, as opposed to incorporating SIMPLE into an augmented Lagrangian scheme.<div><br></div><div>But it does seem that at some point Carola Kruse added Golub-Kahan bidiagonalization tests to ex70. I don't know much about that, although it seems to be related to AL methods ... but it requires that the matrix be symmetric?</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Jul 28, 2023 at 7:04 PM Jed Brown <<a href="mailto:jed@jedbrown.org">jed@jedbrown.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">See src/snes/tutorials/ex70.c for the code that I think was used for that paper.<br>
<br>
Alexander Lindsay <<a href="mailto:alexlindsay239@gmail.com" target="_blank">alexlindsay239@gmail.com</a>> writes:<br>
<br>
> Sorry for the spam. Looks like these authors have published multiple papers on the subject <br>
><br>
> Combining the Augmented Lagrangian Preconditioner with the Simple Schur Complement Approximation | SIAM Journal on<br>
> Scientific Computing<br>
><br>
> On Jul 28, 2023, at 12:59 PM, Alexander Lindsay <<a href="mailto:alexlindsay239@gmail.com" target="_blank">alexlindsay239@gmail.com</a>> wrote:<br>
><br>
> Do you know of anyone who has applied the augmented Lagrange methodology to a finite volume discretization?<br>
><br>
> On Jul 6, 2023, at 6:25 PM, Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<br>
><br>
> On Thu, Jul 6, 2023 at 8:30 PM Alexander Lindsay <<a href="mailto:alexlindsay239@gmail.com" target="_blank">alexlindsay239@gmail.com</a>> wrote:<br>
><br>
> This is an interesting article that compares a multi-level ILU algorithm to approximate commutator and augmented<br>
> Lagrange methods: <a href="https://doi.org/10.1002/fld.5039" rel="noreferrer" target="_blank">https://doi.org/10.1002/fld.5039</a><br>
><br>
> That is for incompressible NS. The results are not better than <a href="https://arxiv.org/abs/1810.03315" rel="noreferrer" target="_blank">https://arxiv.org/abs/1810.03315</a>, and that PC is considerably<br>
> simpler and already implemented in PETSc. There is an update to this:<br>
><br>
> <br>
> <a href="https://epubs.siam.org/doi/abs/10.1137/21M1430698" rel="noreferrer" target="_blank">https://epubs.siam.org/doi/abs/10.1137/21M1430698</a><br>
> <br>
><br>
> which removes the need for complicated elements.<br>
><br>
> You might need stuff like ILU for compressible flow, but I think incompressible is solved.<br>
><br>
> Thanks,<br>
><br>
> Matt<br>
> <br>
> On Wed, Jun 28, 2023 at 11:37 AM Alexander Lindsay <<a href="mailto:alexlindsay239@gmail.com" target="_blank">alexlindsay239@gmail.com</a>> wrote:<br>
><br>
> I do believe that based on the results in <a href="https://doi.org/10.1137/040608817" rel="noreferrer" target="_blank">https://doi.org/10.1137/040608817</a> we should be able to make LSC, with<br>
> proper scaling, compare very favorably with PCD<br>
><br>
> On Tue, Jun 27, 2023 at 10:41 AM Alexander Lindsay <<a href="mailto:alexlindsay239@gmail.com" target="_blank">alexlindsay239@gmail.com</a>> wrote:<br>
><br>
> I've opened <a href="https://gitlab.com/petsc/petsc/-/merge_requests/6642" rel="noreferrer" target="_blank">https://gitlab.com/petsc/petsc/-/merge_requests/6642</a> which adds a couple more scaling<br>
> applications of the inverse of the diagonal of A<br>
><br>
> On Mon, Jun 26, 2023 at 6:06 PM Alexander Lindsay <<a href="mailto:alexlindsay239@gmail.com" target="_blank">alexlindsay239@gmail.com</a>> wrote:<br>
><br>
> I guess that, similar to the discussions about selfp, the approximation of the velocity mass matrix by the<br>
> diagonal of the velocity sub-matrix will improve when running a transient as opposed to a steady<br>
> calculation, especially if the time derivative is lumped.... Just thinking while typing<br>
><br>
> On Mon, Jun 26, 2023 at 6:03 PM Alexander Lindsay <<a href="mailto:alexlindsay239@gmail.com" target="_blank">alexlindsay239@gmail.com</a>> wrote:<br>
><br>
> Returning to Sebastian's question about the correctness of the current LSC implementation: in the<br>
> taxonomy paper that Jed linked to (which talks about SIMPLE, PCD, and LSC), equation 21 shows four<br>
> applications of the inverse of the velocity mass matrix. In the PETSc implementation there are at<br>
> most two applications of the reciprocal of the diagonal of A (an approximation to the velocity mass<br>
> matrix without more plumbing, as already pointed out). It seems like for code implementations in<br>
> which there are possible scaling differences between the velocity and pressure equations, this<br>
> difference in the number of inverse applications could be significant? I know Jed said that these<br>
> scalings wouldn't really matter if you have a uniform grid, but I'm not 100% convinced yet.<br>
><br>
> I might try fiddling around with adding two more reciprocal applications.<br>
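The distinction being discussed — four applications of the (approximate) mass-matrix inverse versus two — can be sketched on a toy problem. This is an illustrative construction, not PETSc's actual code path, and it assumes the scaled LSC formula S^{-1} ≈ (B D^{-1} B^T)^{-1} (B D^{-1} A D^{-1} B^T) (B D^{-1} B^T)^{-1} with D = diag(A); one pressure DOF makes every operator a scalar:

```python
# Toy sketch: scaled LSC (four applications of D^{-1} = inv(diag(A))) versus
# a two-application variant whose outer factors drop the scaling.
A = [[2.0, 0.0], [0.0, 8.0]]   # velocity block; taken diagonal so diag(A) = A
B = [1.0, 1.0]                 # "divergence" operator, 1 x 2
Dinv = [1.0 / A[0][0], 1.0 / A[1][1]]

def triple(left, right):
    """B * diag(left) * A * diag(right) * B^T as a scalar."""
    return sum(B[i] * left[i] * A[i][j] * right[j] * B[j]
               for i in range(2) for j in range(2))

# Exact Schur complement S = B A^{-1} B^T (A is diagonal, so A^{-1} = Dinv).
S = sum(B[i] * B[i] * Dinv[i] for i in range(2))
S_inv = 1.0 / S

# Four applications: (B D^{-1} B^T)^{-1} (B D^{-1} A D^{-1} B^T) (B D^{-1} B^T)^{-1}
BDBt = sum(B[i] * B[i] * Dinv[i] for i in range(2))
lsc4 = (1.0 / BDBt) * triple(Dinv, Dinv) * (1.0 / BDBt)

# Two applications: outer factors use plain (B B^T)^{-1} with no scaling.
BBt = sum(b * b for b in B)
lsc2 = (1.0 / BBt) * triple(Dinv, Dinv) * (1.0 / BBt)

print(S_inv, lsc4, lsc2)
```

With the fully scaled (four-application) form, the approximation is exact whenever D = A; the two-application variant is off by the scaling of the outer factors, which is exactly the kind of velocity/pressure scaling difference mentioned above.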
><br>
> On Fri, Jun 23, 2023 at 1:09 PM Pierre Jolivet <<a href="mailto:pierre.jolivet@lip6.fr" target="_blank">pierre.jolivet@lip6.fr</a>> wrote:<br>
><br>
> On 23 Jun 2023, at 10:06 PM, Pierre Jolivet <<a href="mailto:pierre.jolivet@lip6.fr" target="_blank">pierre.jolivet@lip6.fr</a>> wrote:<br>
><br>
> On 23 Jun 2023, at 9:39 PM, Alexander Lindsay <<a href="mailto:alexlindsay239@gmail.com" target="_blank">alexlindsay239@gmail.com</a>> wrote:<br>
><br>
> Ah, I see that if I use Pierre's new 'full' option for -mat_schur_complement_ainv_type<br>
><br>
> That was not initially done by me<br>
><br>
> Oops, sorry for the noise, looks like it was done by me indeed in<br>
> 9399e4fd88c6621aad8fe9558ce84df37bd6fada…<br>
><br>
> Thanks,<br>
> Pierre<br>
><br>
> (though I recently tweaked MatSchurComplementComputeExplicitOperator() a bit to use<br>
> KSPMatSolve(), so that if you have a small Schur complement — which is not really the case<br>
> for NS — this could be a viable option, it was previously painfully slow).<br>
><br>
> Thanks,<br>
> Pierre<br>
><br>
> that I get a single iteration for the Schur complement solve with LU. That's a nice testing<br>
> option<br>
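For reference, the kind of options stack being discussed might look like the following (a sketch only — the fieldsplit prefixes depend on how the application names its splits, so treat the exact prefixes as illustrative):

```shell
# Hypothetical testing configuration: Schur fieldsplit where selfp is
# assembled with the true inverse of A00, so LU on the Schur block should
# converge in one iteration. Far too expensive for real problems.
-pc_type fieldsplit
-pc_fieldsplit_type schur
-pc_fieldsplit_schur_precondition selfp
-mat_schur_complement_ainv_type full
-fieldsplit_1_pc_type lu
```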
><br>
> On Fri, Jun 23, 2023 at 12:02 PM Alexander Lindsay <<a href="mailto:alexlindsay239@gmail.com" target="_blank">alexlindsay239@gmail.com</a>><br>
> wrote:<br>
><br>
> I guess it is because the inverse of the diagonal form of A00 becomes a poor<br>
> representation of the inverse of A00? Naively I would have thought that the<br>
> blockdiag form of A00 is A00<br>
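A small sketch of that intuition (an illustrative 2x2 construction, not anything from the PETSc source): as skew, advection-like off-diagonal coupling grows, inv(diag(A)) drifts away from inv(A):

```python
# How far inv(diag(A)) is from inv(A) as off-diagonal coupling c grows.
# 2x2 so the inverse is available in closed form.
errs = []
for c in (0.0, 0.5, 0.9):
    A = [[1.0, c], [-c, 1.0]]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # = 1 + c^2
    Ainv = [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]
    Dinv = [[1.0 / A[0][0], 0.0], [0.0, 1.0 / A[1][1]]]
    errs.append(max(abs(Ainv[i][j] - Dinv[i][j])
                    for i in range(2) for j in range(2)))
print(errs)  # entrywise error grows monotonically with the coupling
```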
><br>
> On Fri, Jun 23, 2023 at 10:18 AM Alexander Lindsay <<a href="mailto:alexlindsay239@gmail.com" target="_blank">alexlindsay239@gmail.com</a>><br>
> wrote:<br>
><br>
> Hi Jed, I will come back with answers to all of your questions at some point. I<br>
> mostly just deal with MOOSE users who come to me and tell me their solve is<br>
> converging slowly, asking me how to fix it. So I generally assume they have<br>
> built an appropriate mesh and problem size for the problem they want to solve<br>
> and added appropriate turbulence modeling (although my general assumption<br>
> is often violated).<br>
><br>
> > And to confirm, are you doing a nonlinearly implicit velocity-pressure solve?<br>
><br>
> Yes, this is our default.<br>
><br>
> A general question: it seems to be well known that the quality of selfp<br>
> degrades with increasing advection. Why is that?<br>
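One hedged toy picture of that degradation (illustrative only — a 2x2 velocity block with skew coupling c standing in for advection, one pressure DOF so the Schur complement and its selfp approximation are scalars): selfp replaces inv(A) with inv(diag(A)), and advection lives entirely in the off-diagonal part that diag(A) discards.

```python
# Ratio of the exact Schur complement S = B inv(A) B^T to the selfp
# approximation Sp = B inv(diag(A)) B^T, as the skew coupling c grows.
B = [1.0, 1.0]
ratios = []
for c in (0.0, 1.0, 3.0):
    A = [[1.0, c], [-c, 1.0]]
    det = 1.0 + c * c
    Ainv = [[1.0 / det, -c / det], [c / det, 1.0 / det]]
    S = sum(B[i] * Ainv[i][j] * B[j] for i in range(2) for j in range(2))
    Sp = sum(B[i] * B[i] / A[i][i] for i in range(2))
    ratios.append(S / Sp)   # ideally ~1; here it decays like 1/(1 + c^2)
print(ratios)
```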
><br>
> On Wed, Jun 7, 2023 at 8:01 PM Jed Brown <<a href="mailto:jed@jedbrown.org" target="_blank">jed@jedbrown.org</a>> wrote:<br>
><br>
> Alexander Lindsay <<a href="mailto:alexlindsay239@gmail.com" target="_blank">alexlindsay239@gmail.com</a>> writes:<br>
><br>
> > This has been a great discussion to follow. Regarding<br>
> ><br>
> >> when time stepping, you have enough mass matrix that cheaper<br>
> preconditioners are good enough<br>
> ><br>
> > I'm curious what some algebraic recommendations might be for high Re<br>
> in<br>
> > transients. <br>
><br>
> What mesh aspect ratio and streamline CFL number? Assuming your model<br>
> is turbulent, can you say anything about momentum thickness Reynolds<br>
> number Re_θ? What is your wall normal spacing in plus units? (Wall<br>
> resolved or wall modeled?)<br>
><br>
> And to confirm, are you doing a nonlinearly implicit velocity-pressure<br>
> solve?<br>
><br>
> > I've found one-level DD to be ineffective when applied monolithically or<br>
> to the momentum block of a split, as it scales with the mesh size. <br>
><br>
> I wouldn't put too much weight on "scaling with mesh size" per se. You<br>
> want an efficient solver for the coarsest mesh that delivers sufficient<br>
> accuracy in your flow regime. Constants matter.<br>
><br>
> Refining the mesh while holding time steps constant changes the advective<br>
> CFL number as well as cell Peclet/cell Reynolds numbers. A meaningful<br>
> scaling study is to increase Reynolds number (e.g., by growing the domain)<br>
> while keeping mesh size matched in terms of plus units in the viscous<br>
> sublayer and Kolmogorov length in the outer boundary layer. That turns<br>
> out to not be a very automatic study to do, but it's what matters and you<br>
> can spend a lot of time chasing ghosts with naive scaling studies.<br>
><br>
> -- <br>
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any<br>
> results to which their experiments lead.<br>
> -- Norbert Wiener<br>
><br>
> <a href="https://www.cse.buffalo.edu/~knepley/" rel="noreferrer" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br>
</blockquote></div>