<div dir="ltr"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span class="gmail-im" style="font-size:12.8px"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span style="font-size:12.8px"><br class="gmail-Apple-interchange-newline">You want to use the Schur complement PCs for NS. </span><span style="font-size:12.8px">We have some support for PC where you give us a mass matrix.</span></blockquote><div><br></div><div>I'll look into it, but I'm not aware of Schur complement being used for compressible/coupled NS, only for incompressible/segregated.</div></div></blockquote><div><br></div><div><br></div></span><div style="font-size:12.8px">Really? OK, you've looked into it more than me.</div></blockquote><div><br></div><div>Correct me if I'm wrong, I haven't looked into it much at all. I just haven't encountered Schur complement in CFD outside of segregated SIMPLE-type algorithms (like in the ex70/PCFIELDSPLIT example.) Those segregated pressure-correction methods tend to get progressively worse as you become less elliptic / non-zero Mach.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span style="font-size:12.8px">sor</span></blockquote><div><br></div><div>Makes sense, explicit RK methods are pretty decent smoothers for the nonlinear geometric multigrid approach. Thanks for the help, hopefully at least using proper smoothers in GAMG gets me a non-wrong preconditioner.</div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Nov 8, 2017 at 8:40 PM, Mark Adams <span dir="ltr"><<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><span class="">On Wed, Nov 8, 2017 at 8:02 PM, Mark Lohry <span dir="ltr"><<a href="mailto:mlohry@gmail.com" target="_blank">mlohry@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><span><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span style="font-size:12.8px">Do you mean the element Jacobian matrix has 35 "vertices"? You have 5x5 dense blocks (or at least you don't care about resolving any sparsity or you want to use block preconditioners. So your element Jacobians are [35x5]^2 dense(ish) matrices. This is not particularly useful for the solvers.</span></blockquote><div><br></div></span><div>Yes, the local element jacobians are (35x5)^2 ( (N+1)(N+2)(N+3)/6 vertices per Nth order element for 3D). They'd be 100% dense. So the real blocks of the full system jacobian are of that size, and not 5x5. Block ILU is pretty popular for these discretizations for obvious reasons.</div></div></blockquote><div><br></div></span><div>OK, but don't set that blocks size. </div><span class=""><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><span><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span style="font-size:12.8px">This is on the full NS problem I assume. You need special solvers for this. Scalable solvers for NS is not trivial.</span></blockquote><div> </div></span><div>Yes, compressible 3D NS. I'm aware scaling isn't trivial, but this isn't a scaling problem, I'm obviously computing a wrong preconditioner since it's actively hurting the linear solve.</div></div></blockquote><div><br></div></span><div>By scaling I mean not a direct solver.</div><span class=""><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><span><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span style="font-size:12.8px">Yes, you should use Gauss-Seidel for non-symmetric systems. Or indefinite systems</span></blockquote><div><br></div></span><div>Thanks!</div></div></blockquote><div><br></div></span><div>sor</div><span class=""><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><span><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span style="font-size:12.8px">You want to use the Schur complement PCs for NS. </span><span style="font-size:12.8px">We have some support for PC where you give us a mass matrix.</span></blockquote><div><br></div></span><div>I'll look into it, but I'm not aware of Schur complement being used for compressible/coupled NS, only for incompressible/segregated. </div></div></blockquote><div><br></div><div><br></div></span><div>Really? OK, you've looked into it more than me.</div><span class=""><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>This is DG where the mass matrix is a block diagonal constant you typically just invert once at the beginning; is there a PC where this can be exploited?</div></div></blockquote><div><br></div></span><div>Don't know but being able to explicitly invert this might let you compute an exact Schur complement matrix and then use algebraic solvers (like LU and AMG).</div><div><div class="h5"><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><br></div></div><div class="m_1860391484408827047HOEnZb"><div class="m_1860391484408827047h5"><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Nov 8, 2017 at 7:04 PM, Mark Adams <span dir="ltr"><<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><span>On Tue, Nov 7, 2017 at 3:00 PM, Mark Lohry <span dir="ltr"><<a href="mailto:mlohry@gmail.com" target="_blank">mlohry@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">I've now got gamg running on matrix-free newton-krylov with the jacobian provided by coloring finite differences (thanks again for the help). 3D Poisson with 4th order DG or higher (35^2 blocks), gamg with default settings is giving textbook convergence, which is great. Of course coupled compressible navier-stokes is harder, and convergence is bad-to-nonexistent.<div><br></div><div><div><div><br></div><div>1) Doc says "Equations must be ordered in
“vertex-major” ordering"; in my discretization, each "node" has 5 coupled degrees of freedom (density, 3 x momentum, energy). I'm ordering my unknowns:</div><div><br></div><div>rho_i, rhou_i, rhov_i, rhow_i, Et_i, rho_i+1, rhou_i+1, ... e.g. row-major matrix order if you wrote the unknowns [{rho}, {rhou}, ... ].</div><div><br></div><div>and then setting block size to 5. Is that correct? </div></div></div></div></blockquote><div><br></div></span><div>yes</div><span><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><div>I've also tried using the actual sparsity of the matrix </div></div></div></div></blockquote><div><br></div></span><div>Sorry, but using for what?</div><span><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><div>which has larger dense blocks (e.g. [35x5]^2), but neither seemed to help.</div></div></div></div></blockquote><div><br></div></span><div>Do you mean the element Jacobian matrix has 35 "vertices"? You have 5x5 dense blocks (or at least you don't care about resolving any sparsity or you want to use block preconditioners. So your element Jacobians are [35x5]^2 dense(ish) matrices. This is not particularly useful for the solvers.</div><span><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><div><br></div><div><br></div><div>2) With default settings, and with -pc_gamg_square_graph, pc_gamg_sym_graph, agg_nsmooths 0 mentioned in the manual, the eigenvalue estimates explode on the coarsest level, which I don't see with poisson:</div></div><div><br></div><div><div> Down solver (pre-smoother) on level 1 ------------------------------<wbr>-</div><div> KSP Object: (mg_levels_1_) 32 MPI processes</div><div> type: chebyshev</div><div> eigenvalue estimates used: min = 0.18994, max = 2.08935</div><div> eigenvalues estimate via gmres min 0.00933256, max 1.8994</div></div><div><div> Down solver (pre-smoother) on level 2 ------------------------------<wbr>-</div><div> KSP Object: (mg_levels_2_) 32 MPI processes</div><div> type: chebyshev</div><div> eigenvalue estimates used: min = 0.165969, max = 1.82566</div><div> eigenvalues estimate via gmres min 0.0290728, max 1.65969</div></div><div><div> Down solver (pre-smoother) on level 3 ------------------------------<wbr>-</div><div> KSP Object: (mg_levels_3_) 32 MPI processes</div><div> type: chebyshev</div><div> eigenvalue estimates used: min = 0.146479, max = 1.61126</div><div> eigenvalues estimate via gmres min 0.204673, max 1.46479</div></div><div><div> Down solver (pre-smoother) on level 4 ------------------------------<wbr>-</div><div> KSP Object: (mg_levels_4_) 32 MPI processes</div><div> type: chebyshev</div><div> eigenvalue estimates used: min = 6.81977e+09, max = 7.50175e+10</div><div> eigenvalues estimate via gmres min -2.76436e+12, max 6.81977e+10</div></div><div><br></div><div>What's happening here? (Full -ksp_view below)</div></div></div></blockquote><div><br></div></span><div>This is on the full NS problem I assume. You need special solvers for this. Scalable solvers for NS is not trivial.</div><span><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><br></div><div>3) I'm not very familiar with chebyshev smoothers, but they're only for SPD systems (?). </div></div></div></blockquote><div><br></div></span><div>Yes, you should use Gauss-Seidel for non-symmetric systems. Or indefinite systems</div><span><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div>Is this an inappropriate smoother for this problem?</div><div><br></div><div>4) With gmres, the preconditioned residual is ~10 orders larger than the true residual; and the preconditioned residual drops while the true residual rises. I'm assuming this means something very wrong?<br></div></div></div></blockquote><div><br></div></span><div>Yes, your preconditioner is no good.</div><span><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div></div><div><br></div><div>5) -pc_type hyper -pc_hypre_type boomeramg also works perfectly for the poisson case, but hits NaN on the first cycle for NS.</div><div><br></div><div><br></div></div></div></blockquote><div><br></div></span><div>You want to use the Schur complement PCs for NS. We have some support for PC where you give us a mass matrix. These are motivated by assuming there is a "commutation" (that is not true). These were developed by, among others, Andy Wathan, if you want to do a literature search. You will probably have to read the literature to understand the options and issues for your problem. Look at the PETSc manual and you should find a description of PETSc support for these PCs and some discussion of them and references. This should get you started.</div><span class="m_1860391484408827047m_1347187794803649338HOEnZb"><font color="#888888"><div><br></div><div>Mark</div><div><br></div><div><br></div><div><br></div><div> </div></font></span></div></div></div>
</blockquote></div><br></div>
</div></div></blockquote></div></div></div><br></div></div>
</blockquote></div><br></div>