Thanks Mark.

> This makes it hard to give much advice. You really just need to test
> things and use what works best.

Yeah, arriving at the current setup was the result of a lot of rather
aimless testing and trial and error.

> I see you are using 20 smoothing steps. That is very high. Generally you
> want to use the v-cycle more (i.e., a lower number of smoothing steps and
> more iterations).

This was partly a reaction to seeing many cases that needed far too many
outer GMRES iterations / orthogonalization directions, and an attempt to
coerce AMG into doing more work per cycle.

> You are beyond what AMG is designed for. If you press this problem it
> will break any solver and will break generic AMG relatively early.

For what it's worth, I regularly solve much larger problems (1M-100M
unknowns, unsteady) with this discretization and AMG setup on 500+ cores,
with convergence that is dramatically better than ILU/ASM. This just
happens to be the first time I have experimented with such an extremely
low Mach number, which is known to come with a whole host of issues and
generally needs low-Mach preconditioners; I was just a bit surprised by
this specific failure mechanism.

Thanks for the point on jacobi vs. bjacobi.
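In case it is useful to anyone searching the archive later, this is
roughly the change I plan to try based on the advice above. It is an
untested sketch: "./mysolver" is a placeholder for my application, the
option values are guesses I still have to tune, and I am assuming my
smoothing-step count and smoother correspond to the usual
-mg_levels_ksp_max_it / -mg_levels_pc_type options.

    # what I have been running: heavy smoothing inside full MG
    # (placeholder executable and process count)
    mpiexec -n 64 ./mysolver -ksp_type fgmres -pc_type gamg \
        -pc_mg_type full \
        -mg_levels_ksp_max_it 20 -mg_levels_pc_type bjacobi

    # what I understand Mark to be suggesting: rely on the v-cycle,
    # only a couple of smoothing steps, and a pointwise (jacobi) smoother
    mpiexec -n 64 ./mysolver -ksp_type fgmres -pc_type gamg \
        -pc_mg_type multiplicative \
        -mg_levels_ksp_max_it 2 -mg_levels_pc_type jacobi

If that keeps the preconditioner cheap enough per application, I can give
the outer GMRES a larger restart (-ksp_gmres_restart) rather than piling
work into the smoother.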
On Wed, Mar 13, 2019 at 9:00 PM Mark Adams <mfadams@lbl.gov> wrote:

>> Any thoughts here? Is there anything obviously wrong with my setup?
>
> Fast and robust solvers for NS require specialized methods that are not
> provided in PETSc, and those methods tend to require tighter integration
> with the meshing and discretization than the algebraic interface
> supports.
>
> I see you are using 20 smoothing steps. That is very high. Generally you
> want to use the v-cycle more (i.e., a lower number of smoothing steps and
> more iterations).
>
> And full MG is a bit tricky. I would not use it, but if it helps, fine.
>
>> Any way to reduce the dependence of the convergence iterations on the
>> parallelism?
>
> This comes from the bjacobi smoother. Use jacobi and you will not have a
> parallelism problem; jacobi is what bjacobi becomes in the limit of
> parallelism anyway.
>
>> -- obviously I expect the iteration count to be higher in parallel, but
>> I didn't expect such catastrophic failure.
>
> You are beyond what AMG is designed for. If you press this problem it
> will break any solver and will break generic AMG relatively early.
>
> This makes it hard to give much advice. You really just need to test
> things and use what works best. There are special-purpose methods that
> you can implement in PETSc, but that is a topic for a significant
> project.
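P.S. To quantify the parallelism dependence before and after switching the
smoother, I am planning a small sweep like the one below, watching the
true residual history at each process count (again just a sketch; the
executable name and process counts are placeholders):

    for n in 1 4 16 64; do
        mpiexec -n $n ./mysolver -pc_type gamg \
            -mg_levels_pc_type jacobi \
            -ksp_monitor_true_residual -ksp_converged_reason -ksp_view
    done

Since point jacobi does not depend on the partitioning, any iteration
growth that remains in this sweep has to come from something other than
the smoother (e.g. the parallel coarsening or the physics itself).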