<div dir="ltr">I guess that similar to the discussions about selfp, the approximation of the velocity mass matrix by the diagonal of the velocity sub-matrix will improve when running a transient as opposed to a steady calculation, especially if the time derivative is lumped.... Just thinking while typing</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Jun 26, 2023 at 6:03 PM Alexander Lindsay <<a href="mailto:alexlindsay239@gmail.com">alexlindsay239@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Returning to Sebastian's question about the correctness of the current LSC implementation: in the taxonomy paper that Jed linked to (which talks about SIMPLE, PCD, and LSC), equation 21 shows four applications of the inverse of the velocity mass matrix. In the PETSc implementation there are at most two applications of the reciprocal of the diagonal of A (an approximation to the velocity mass matrix without more plumbing, as already pointed out). It seems like for code implementations in which there are possible scaling differences between the velocity and pressure equations, that this difference in the number of inverse applications could be significant? I know Jed said that these scalings wouldn't really matter if you have a uniform grid, but I'm not 100% convinced yet.<div><br></div><div>I might try fiddling around with adding two more reciprocal applications.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Jun 23, 2023 at 1:09 PM Pierre Jolivet <<a href="mailto:pierre.jolivet@lip6.fr" target="_blank">pierre.jolivet@lip6.fr</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div><br><div><blockquote type="cite"><div>On 23 Jun 2023, at 10:06 PM, Pierre Jolivet <<a href="mailto:pierre.jolivet@lip6.fr" target="_blank">pierre.jolivet@lip6.fr</a>> wrote:</div><br><div><div><br><div><blockquote type="cite"><div>On 23 Jun 2023, at 9:39 PM, Alexander Lindsay <<a href="mailto:alexlindsay239@gmail.com" target="_blank">alexlindsay239@gmail.com</a>> wrote:</div><br><div><div dir="ltr">Ah, I see that if I use Pierre's new 'full' option for -mat_schur_complement_ainv_type</div></div></blockquote><div><br></div><div>That was not initially done by me</div></div></div></div></blockquote><div><br></div><div>Oops, sorry for the noise, looks like it was done by me indeed in 9399e4fd88c6621aad8fe9558ce84df37bd6fada…</div><div><br></div><div>Thanks,</div><div>Pierre</div><br><blockquote type="cite"><div><div><div><div> (though I recently tweaked MatSchurComplementComputeExplicitOperator() a bit to use KSPMatSolve(), so that if you have a small Schur complement — which is not really the case for NS — this could be a viable option, it was previously painfully slow).</div><div><br></div><div>Thanks,</div><div>Pierre</div><br><blockquote type="cite"><div><div dir="ltr"> that I get a single iteration for the Schur complement solve with LU. 
On Fri, Jun 23, 2023 at 1:09 PM Pierre Jolivet <pierre.jolivet@lip6.fr> wrote:

On 23 Jun 2023, at 10:06 PM, Pierre Jolivet <pierre.jolivet@lip6.fr> wrote:

> On 23 Jun 2023, at 9:39 PM, Alexander Lindsay <alexlindsay239@gmail.com> wrote:
>
>> Ah, I see that if I use Pierre's new 'full' option for -mat_schur_complement_ainv_type
>
> That was not initially done by me

Oops, sorry for the noise, looks like it was done by me indeed in 9399e4fd88c6621aad8fe9558ce84df37bd6fada…

Thanks,
Pierre

> (though I recently tweaked MatSchurComplementComputeExplicitOperator() a bit to use KSPMatSolve(), so that if you have a small Schur complement — which is not really the case for NS — this could be a viable option, it was previously painfully slow).
>
> Thanks,
> Pierre
>
>> that I get a single iteration for the Schur complement solve with LU. That's a nice testing option.
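>>
>> For anyone who wants to reproduce this check, the options look roughly like the following (a sketch; the fieldsplit prefix depends on how your splits are named):
>>
>>   -pc_type fieldsplit
>>   -pc_fieldsplit_type schur
>>   -pc_fieldsplit_schur_fact_type full
>>   -pc_fieldsplit_schur_precondition selfp
>>   -mat_schur_complement_ainv_type full
>>   -fieldsplit_1_ksp_type gmres
>>   -fieldsplit_1_pc_type lu
>>
>> With ainv_type full, the assembled preconditioning matrix is (up to the accuracy of the inner solves) the true Schur complement, so LU on it gives the single iteration.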
On Fri, Jun 23, 2023 at 12:02 PM Alexander Lindsay <alexlindsay239@gmail.com> wrote:

I guess it is because the inverse of the diagonal of A00 becomes a poor representation of the inverse of A00? Naively, I would have thought that the blockdiag form of A00 is A00.
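For my own bookkeeping, the variants as I read the MatSchurComplementAinvType docs (a paraphrase, so double-check me) all assemble

    Sp = A11 - A10 * ainv(A00) * A01

with

    diag:      ainv(A00) = inverse of the diagonal of A00 (the default)
    lump:      ainv(A00) = inverse of A00 lumped by row sums
    blockdiag: ainv(A00) = inverse of the point-block (block-size) diagonal of A00
    full:      ainv(A00) = the actual inverse of A00 (expensive, but good for testing)

So blockdiag is the small per-node block diagonal, not the whole velocity block.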
On Fri, Jun 23, 2023 at 10:18 AM Alexander Lindsay <alexlindsay239@gmail.com> wrote:

Hi Jed, I will come back with answers to all of your questions at some point. I mostly just deal with MOOSE users who come to me and tell me their solve is converging slowly, asking me how to fix it. So I generally assume they have built an appropriate mesh and problem size for the problem they want to solve and added appropriate turbulence modeling (although my general assumption is often violated).

> And to confirm, are you doing a nonlinearly implicit velocity-pressure solve?

Yes, this is our default.

A general question: it seems to be well known that the quality of selfp degrades with increasing advection. Why is that?
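To make sure we are talking about the same thing, my understanding is that selfp assembles

    S_p = A_{11} - A_{10}\,\mathrm{diag}(A_{00})^{-1} A_{01},

which for incompressible NS (A11 = 0, A00 = F the momentum operator, A01 = B^T, A10 = B, up to sign conventions) is S_p = -B\,\mathrm{diag}(F)^{-1} B^T. So the question is really why diag(F)^{-1} becomes a worse stand-in for F^{-1} as advection grows.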
On Wed, Jun 7, 2023 at 8:01 PM Jed Brown <jed@jedbrown.org> wrote:

Alexander Lindsay <alexlindsay239@gmail.com> writes:

> This has been a great discussion to follow. Regarding
>
>> when time stepping, you have enough mass matrix that cheaper preconditioners are good enough
>
> I'm curious what some algebraic recommendations might be for high Re in transients.
What mesh aspect ratio and streamline CFL number? Assuming your model is turbulent, can you say anything about the momentum-thickness Reynolds number Re_θ? What is your wall-normal spacing in plus units? (Wall-resolved or wall-modeled?)
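(For reference, by plus units I mean the usual viscous scaling: with wall shear stress \tau_w and friction velocity

    u_\tau = \sqrt{\tau_w/\rho}, \qquad \Delta y^+ = u_\tau\,\Delta y/\nu,

and by streamline CFL I mean |u|\,\Delta t/h measured along the flow direction.)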

And to confirm, are you doing a nonlinearly implicit velocity-pressure solve?

> I've found one-level DD to be ineffective when applied monolithically or to the momentum block of a split, as it scales with the mesh size.

I wouldn't put too much weight on "scaling with mesh size" per se. You want an efficient solver for the coarsest mesh that delivers sufficient accuracy in your flow regime. Constants matter.

Refining the mesh while holding time steps constant changes the advective CFL number as well as the cell Péclet/cell Reynolds numbers. A meaningful scaling study is to increase the Reynolds number (e.g., by growing the domain) while keeping the mesh size matched in terms of plus units in the viscous sublayer and the Kolmogorov length in the outer boundary layer. That turns out not to be a very automatic study to do, but it's what matters, and you can spend a lot of time chasing ghosts with naive scaling studies.
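(For reference, the outer-layer resolution target is the Kolmogorov length

    \eta = (\nu^3/\varepsilon)^{1/4},

with \varepsilon the turbulent dissipation rate; the matched study holds \Delta y^+ fixed at the wall and \Delta x/\eta fixed in the outer layer while the Reynolds number grows, rather than simply refining h everywhere.)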