<div dir="ltr"><div class="gmail_default" style="color:rgb(0,0,0)"><span style="color:rgb(34,34,34)">On 26 October 2016 at 09:38, Mark Adams </span><span dir="ltr" style="color:rgb(34,34,34)"><<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>></span><span style="color:rgb(34,34,34)"> wrote:</span><br></div><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr">Please run with -info and grep on GAMG and send that. (-info is very noisy).<div><br></div></div></blockquote><div><div class="gmail_default" style="color:rgb(0,0,0)">I cat the grep at the end of the log file (see attachment petsc-3.7.4-n2.log).<br></div><div class="gmail_default" style="color:rgb(0,0,0)">Also, increasing the local number of iterations in SOR, as suggested by Barry, removed the indefinite preconditioner (file petsc-3.7.4-n2-lits2.log).</div></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div></div><div>I'm not sure what is going on here. Divergence with parallelism. Here are some suggestions.</div><div><br></div><div>Note, you do not need to set the null space for a scalar (Poisson) problem unless you have some special null space. And not getting it set (with the 6 rigid body modes) for the velocity (elasticity) equation will only degrade convergence rates.</div><div><br></div><div>There was a bug for a while (early 3.7 versions) where the coarse grid was not squeezed onto one processor, which could result in very bad convergence, but not divergence, on multiple processors (the -info output will report the number of 'active pes'). Perhaps this bug is causing divergence for you. We had another subtle bug where the eigen estimates used a bad seed vector, which gives a bad eigen estimate. This would cause divergence but it should not be a parallelism issue (these two bugs were both regressions in around 3.7)</div><div><br></div><div>Divergence usually comes from a bad eigen estimate in a Chebyshev smoother, but this is not highly correlated with parallelism. The -info data will report the eigen estimates but that is not terribly useful but you can see if it changes (gets larger) with better parameters. 
> I'm not sure what is going on here: divergence with parallelism. Here are some suggestions.
>
> Note, you do not need to set the null space for a scalar (Poisson) problem unless you have some special null space. Not setting it (with the 6 rigid-body modes) for the velocity (elasticity) equation will only degrade convergence rates.
>
> There was a bug for a while (early 3.7 versions) where the coarse grid was not squeezed onto one processor, which could result in very bad convergence, but not divergence, on multiple processors (the -info output reports the number of 'active pes'). Perhaps this bug is causing divergence for you. We had another subtle bug where the eigen estimates used a bad seed vector, which gives a bad eigen estimate. This would cause divergence, but it should not be a parallelism issue. (These two bugs were both regressions in around 3.7.)
>
> Divergence usually comes from a bad eigen estimate in a Chebyshev smoother, but this is not highly correlated with parallelism. The -info data reports the eigen estimates; that is not terribly useful by itself, but you can see whether the estimate changes (gets larger) with better parameters. Add these parameters, with the correct prefix, and use -options_left to make sure there are no unused options:
>
> -mg_levels_ksp_type chebyshev
> -mg_levels_esteig_ksp_type cg
> -mg_levels_esteig_ksp_max_it 10
> -mg_levels_ksp_chebyshev_esteig 0,.1,0,1.05

petsc-3.7.4-n2-chebyshev.log contains the output when using the default KSP Chebyshev.
When estimating the eigenvalues using CG with the translations [0, 0.1; 0, 1.05] (previously the default GMRES with translations [0, 0.1; 0, 1.1]), the maximum eigenvalue estimate decreases from 1.0931 to 1.04366 and the indefinite preconditioner appears earlier, after 2 iterations (3 previously).
I attached the log (see petsc-3.7.4-n2-chebyshev.log).

> Chebyshev is the default; as Barry suggested, replace it with gmres or richardson (see below) and verify that this fixes the divergence problem.

Using gmres (-poisson_mg_levels_ksp_type gmres) fixes the divergence problem (file petsc-3.7.4-n2-gmres.log).
Same observation with richardson (file petsc-3.7.4-n2-richardson.log).

> If your matrix is symmetric positive definite, use '-mg_levels_esteig_ksp_type cg'; if not, use the default gmres.

I checked, and I still get an indefinite preconditioner when using gmres to estimate the eigenvalues.

> Increase/decrease '-mg_levels_esteig_ksp_max_it 10'; you should see the estimates increase and converge with higher max_it. Setting this to a huge number, like 100, should fix the bad seed vector problem mentioned above.
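
The sweeps below used options of this form (a sketch, with the -poisson_ prefix of our Poisson solver):

-poisson_mg_levels_esteig_ksp_type cg
-poisson_mg_levels_esteig_ksp_max_it 50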
I played with the maximum number of iterations. Here are the (min, max) eigenvalue estimates for the two levels:
- max_it 5: (0.0975079, 1.02383) on level 1, (0.0975647, 1.02443) on level 2
- max_it 10: (0.0991546, 1.04112), (0.0993962, 1.04366)
- max_it 20: (0.0995918, 1.04571), (0.115723, 1.21509)
- max_it 50: (0.0995651, 1.04543), (0.133744, 1.40431)
- max_it 100: (0.0995651, 1.04543), (0.133744, 1.40431)

Note that all those runs ended up with an indefinite preconditioner, except when increasing the maximum number of iterations to 50 (or to 100, which did not improve the eigenvalue estimates further).

> If eigen estimates are a pain, as with non-SPD systems, then richardson is an option (instead of chebyshev):
>
> -mg_levels_ksp_type richardson
> -mg_levels_ksp_richardson_scale 0.6
>
> You then need to play with the scaling (that is essentially what chebyshev does for you).
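
With our prefix, that would presumably read:

-poisson_mg_levels_ksp_type richardson
-poisson_mg_levels_ksp_richardson_scale 0.6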
> On Tue, Oct 25, 2016 at 10:22 PM, Matthew Knepley <knepley@gmail.com> wrote:

On Tue, Oct 25, 2016 at 9:20 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:

   Olivier,

   Ok, so I've run the code in the debugger, but I don't think the problem is with the null space. The code is correctly removing the null space on all the levels of multigrid.

   I think the error comes from changes in the behavior of GAMG. GAMG is moving relatively rapidly, with different defaults and even different code with each release.

   To check this, I added the option -poisson_mg_levels_pc_sor_lits 2 and it stopped complaining about KSP_DIVERGED_INDEFINITE_PC. I've seen this before: the smoother is "too weak", so the net result is that the action of the preconditioner is indefinite. Mark Adams probably has better suggestions on how to make the preconditioner behave. Note you could also use a KSP of richardson or gmres instead of cg, since they don't care about this indefinite business.

I think old GAMG squared the graph by default. You can see in the 3.7 output that it does not.

   Matt
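
(To recover the old squaring behavior, the relevant 3.7 option would presumably be -poisson_pc_gamg_square_graph <n>, where n is the number of levels on which the graph is squared; not tested here.)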
   Barry

> On Oct 25, 2016, at 5:39 PM, Olivier Mesnard <olivier.mesnard8@gmail.com> wrote:
>
> On 25 October 2016 at 17:51, Barry Smith <bsmith@mcs.anl.gov> wrote:
>
>    Olivier,
>
>    In theory you do not need to change anything else. Are you using a different matrix object for the velocity_ksp object than for the poisson_ksp object?
>
> The matrix is different for the velocity_ksp and the poisson_ksp.
>
>    The code change in PETSc is very small, but we have a report from another CFD user who also had problems with the change, so there may be some subtle bug that we can't figure out causing things to misbehave.
>
>    First, run the 3.7.4 code with -poisson_ksp_view and verify that, when it prints the matrix information, it prints something like "has attached null space". If it does not print that, it means the null space is somehow not properly getting attached to the matrix.
>
> When running with 3.7.4 and -poisson_ksp_view, the output shows that the nullspace is not attached to the KSP (as it was with 3.5.4); however, the print statement is now under the Mat info (which is expected when moving from KSPSetNullSpace to MatSetNullSpace?).
>
>    Though older versions had MatSetNullSpace(), they didn't necessarily associate it with the KSP, so it was not expected to work as a replacement for KSPSetNullSpace() with older versions.
>
>    Because our other user had great difficulty trying to debug the issue, feel free to send us your code at petsc-maint@mcs.anl.gov, with instructions on building and running, and we can try to track down the problem. Better than hours and hours spent on fruitless email. We will, of course, not distribute the code, and will delete it when we are finished with it.
>
> The code is open-source and hosted on GitHub (https://github.com/barbagroup/PetIBM).
> I just pushed the branches `feature-compatible-petsc-3.7` and `revert-compatible-petsc-3.5` that I used to observe this problem.
>
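> To reproduce (a sketch, assuming one of the branches above):
>
> git clone https://github.com/barbagroup/PetIBM.git
> cd PetIBM
> git checkout feature-compatible-petsc-3.7
>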
> PETSc (both 3.5.4 and 3.7.4) was configured as follows:
> export PETSC_ARCH="linux-gnu-dbg"
> ./configure --PETSC_ARCH=$PETSC_ARCH \
>     --with-cc=gcc \
>     --with-cxx=g++ \
>     --with-fc=gfortran \
>     --COPTFLAGS="-O0" \
>     --CXXOPTFLAGS="-O0" \
>     --FOPTFLAGS="-O0" \
>     --with-debugging=1 \
>     --download-fblaslapack \
>     --download-mpich \
>     --download-hypre \
>     --download-yaml \
>     --with-x=1
>
> Our code was built using the following commands:
> mkdir petibm-build
> cd petibm-build
> export PETSC_DIR=<directory of PETSc>
> export PETSC_ARCH="linux-gnu-dbg"
> export PETIBM_DIR=<directory of PetIBM git repo>
> $PETIBM_DIR/configure --prefix=$PWD \
>     CXX=$PETSC_DIR/$PETSC_ARCH/bin/mpicxx \
>     CXXFLAGS="-g -O0 -std=c++11"
> make all
> make install
>
> Then:
> cd examples
> make examples
>
> The example of the lid-driven cavity I was talking about can be found in the folder `examples/2d/convergence/lidDrivenCavity20/20/`.
>
> To run it:
> mpiexec -n N <path-to-petibm-build>/bin/petibm2d -directory <path-to-example>
>
> Let me know if you need more info. Thank you.
>
>    Barry
>
> > On Oct 25, 2016, at 4:38 PM, Olivier Mesnard <olivier.mesnard8@gmail.com> wrote:
> >
> > Hi all,
> >
> > We develop a CFD code, based on the PETSc library, that solves the Navier-Stokes equations using the fractional-step method from Perot (1993).
> > At each time-step, we solve two systems: one for the velocity field, the other, a Poisson system, for the pressure field.
> > One of our test-cases is a 2D lid-driven cavity flow (Re=100) on a 20x20 grid, using 1 or 2 procs.
> > For the Poisson system, we usually use CG preconditioned with GAMG.
> >
> > So far, we have been using PETSc-3.5.4, and we would like to update the code to the latest release: 3.7.4.
> >
> > As suggested in the changelog of 3.6, we replaced the routine `KSPSetNullSpace()` with `MatSetNullSpace()`.
> >
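> > A minimal sketch of what that replacement looks like (assuming, as in our case, the constant null space of the pressure Poisson operator A; error checking omitted):
> >
> >     #include <petscksp.h>
> >
> >     /* 3.7 style: attach the constant null space to the Poisson matrix A;
> >        this replaces the old KSPSetNullSpace(ksp, nsp) call. */
> >     MatNullSpace nsp;
> >     MatNullSpaceCreate(PetscObjectComm((PetscObject)A), PETSC_TRUE, 0, NULL, &nsp);
> >     MatSetNullSpace(A, nsp);
> >     MatNullSpaceDestroy(&nsp);   /* the Mat keeps its own reference */
> >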
> > Here is the list of options we use to configure the two solvers:
> > * Velocity solver: prefix `-velocity_`
> > -velocity_ksp_type bcgs
> > -velocity_ksp_rtol 1.0E-08
> > -velocity_ksp_atol 0.0
> > -velocity_ksp_max_it 10000
> > -velocity_pc_type jacobi
> > -velocity_ksp_view
> > -velocity_ksp_monitor_true_residual
> > -velocity_ksp_converged_reason
> > * Poisson solver: prefix `-poisson_`
> > -poisson_ksp_type cg
> > -poisson_ksp_rtol 1.0E-08
> > -poisson_ksp_atol 0.0
> > -poisson_ksp_max_it 20000
> > -poisson_pc_type gamg
> > -poisson_pc_gamg_type agg
> > -poisson_pc_gamg_agg_nsmooths 1
> > -poisson_ksp_view
> > -poisson_ksp_monitor_true_residual
> > -poisson_ksp_converged_reason
> >
> > With 3.5.4, the case runs normally on 1 or 2 procs.
> > With 3.7.4, the case runs normally on 1 proc but not on 2: the Poisson solver diverges because of an indefinite preconditioner (only with 2 procs).
> >
> > We also saw that the routine `MatSetNullSpace()` was already available in 3.5.4.
> > With 3.5.4, replacing `KSPSetNullSpace()` with `MatSetNullSpace()` led to the Poisson solver diverging because of an indefinite matrix (on 1 and 2 procs).
> >
> > Thus, we were wondering if we needed to update something else for the KSP, and not just change the name of the routine?
> >
> > I have attached the output files from the different cases:
> > * `run-petsc-3.5.4-n1.log` (3.5.4, `KSPSetNullSpace()`, n=1)
> > * `run-petsc-3.5.4-n2.log`
> > * `run-petsc-3.5.4-nsp-n1.log` (3.5.4, `MatSetNullSpace()`, n=1)
> > * `run-petsc-3.5.4-nsp-n2.log`
> > * `run-petsc-3.7.4-n1.log` (3.7.4, `MatSetNullSpace()`, n=1)
> > * `run-petsc-3.7.4-n2.log`
> >
> > Thank you for your help,
> > Olivier
> > <run-petsc-3.5.4-n1.log><run-petsc-3.5.4-n2.log><run-petsc-3.5.4-nsp-n1.log><run-petsc-3.5.4-nsp-n2.log><run-petsc-3.7.4-n1.log><run-petsc-3.7.4-n2.log>

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
   -- Norbert Wiener