Mark,<br><br>I've just run on one core, which allows ML to produce a one-row coarse grid. The problem converges. I'm a bit confused.<br><br>John<br><br><div class="gmail_quote">On Fri, Mar 30, 2012 at 10:24 AM, Jed Brown <span dir="ltr"><<a href="mailto:jedbrown@mcs.anl.gov">jedbrown@mcs.anl.gov</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><p>-mg_coarse_pc_type svd?</p>
<p>(Use redundant for parallel.)</p><div class="HOEnZb"><div class="h5">
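<div><br></div><div>A minimal sketch of what those two suggestions look like as command-line options (the executable name ./ex29 is a placeholder, not from this thread):</div><pre># serial: solve the coarse grid with SVD, which tolerates the singular system
./ex29 -pc_type mg -mg_coarse_pc_type svd

# parallel: wrap the coarse solve in redundant, with SVD inside it
mpiexec -n 4 ./ex29 -pc_type mg \
    -mg_coarse_pc_type redundant \
    -mg_coarse_redundant_pc_type svd</pre>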
<div class="gmail_quote">On Mar 30, 2012 9:21 AM, "Mark F. Adams" <<a href="mailto:mark.adams@columbia.edu" target="_blank">mark.adams@columbia.edu</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div style="word-wrap:break-word"><br><div><div>On Mar 30, 2012, at 10:52 AM, John Mousel wrote:</div><br><blockquote type="cite">Mark,<br><br>I've run GAMG twice with different coarse grid sizes of 2 and 8 with 1 sweep of SOR on the coarse grid. For a size of 8 it converges nicely, but for a size of 2, I think the null space is causing too many problems. </blockquote>
<div><br></div><div>Yes, the iterative method is seeing the null space because of floating-point error.</div><br><blockquote type="cite">If GAMG were to coarsen to a size of 1, then there would be no hope because only the null space would remain, right? This doesn't ever seem to occur with ML because there are at least as many rows as processors.<br>
</blockquote><div><br></div><div>Yes, that seems like a good assumption. The right thing to do here would probably be to do an SVD and filter out the very low modes explicitly. For now, I guess tweaking -pc_gamg_coarse_eq_limit n is all that can be done. Not very satisfying. We will think about this ... any thoughts, anyone?</div>
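<div><br></div><div>A sketch of the workaround (the value 10 is illustrative, not from this thread):</div><pre># keep GAMG from coarsening below roughly 10 equations, so the
# coarse-grid smoother does not see only the null space
-pc_type gamg -pc_gamg_coarse_eq_limit 10</pre>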
<div><br></div><div>Mark</div><br><blockquote type="cite">
<br>John<br><br><div class="gmail_quote">On Fri, Mar 30, 2012 at 8:42 AM, Mark F. Adams <span dir="ltr"><<a href="mailto:mark.adams@columbia.edu" target="_blank">mark.adams@columbia.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div style="word-wrap:break-word"><br><div><div><div>On Mar 29, 2012, at 2:40 PM, John Mousel wrote:</div><br><blockquote type="cite">I'm attempting to solve a non-symmetric discretization of a 3D Poisson problem. The problem is singular. I've attached the results of KSPView from runs with ML and GAMG. When I run ML, I get convergence in 30 iterations. When I attempt to use the same settings with GAMG, I'm not getting convergence at all. The two things I notice are:<br>
<br>1. GAMG is using KSPType preonly, even though I've set it to be Richardson in my command line options.<br></blockquote><div><br></div></div><div>PETSc seems to switch the coarse grid solver to GMRES in Setup. This seems to be a bug, and I unwisely decided to override this manually. I will undo this in the next check-in. This should not be the problem, however.</div>
<div><br><blockquote type="cite">2. ML only coarsens down to 4 rows while GAMG coarsens to 2. My problem is singular, and whenever I try to use LU, I get zero pivot problems. To mitigate this, I've been using Richardson with SOR on the coarse matrix. Could the smaller coarse grid size of GAMG be causing problems with SOR. If so, is there a way to put a lower limit on the coarse grid size?<br>
<br></blockquote><div><br></div></div><div>I'm thinking that with a 2x2 coarse grid, 8 iterations of SOR is picking up the null space. Maybe try just one SOR iteration on the coarse grid.</div><div><br></div><div>Also, can you run with -options_left so that I can see your arguments? One known bug is that mat_diagonal_scale breaks GAMG, but it should also break ML.</div>
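<div><br></div><div>For example, appended to the existing run command (a sketch):</div><pre># print the solver configuration and report any options that were
# set on the command line but never used by the solver
-ksp_view -options_left</pre>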
<div><br></div><div>Mark</div><br><blockquote type="cite"><div>John<br><br><div class="gmail_quote">On Thu, Mar 29, 2012 at 11:03 AM, Jed Brown <span dir="ltr"><<a href="mailto:jedbrown@mcs.anl.gov" target="_blank">jedbrown@mcs.anl.gov</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div><div class="gmail_quote">On Thu, Mar 29, 2012 at 09:18, John Mousel <span dir="ltr"><<a href="mailto:john.mousel@gmail.com" target="_blank">john.mousel@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>[0]PETSC ERROR: Error in external library!<br>[0]PETSC ERROR: Cannot disable floating point exceptions!</div></blockquote></div><div><br></div></div><div>Looks like something is strange with your environment because fesetenv() is returning an error. I have disabled the call if the trap mode is not changing.</div>
<br><div><a href="http://petsc.cs.iit.edu/petsc/petsc-dev/rev/352b4c19e451" target="_blank">http://petsc.cs.iit.edu/petsc/petsc-dev/rev/352b4c19e451</a></div>
</blockquote></div><br>
</div><span><KSPView_GAMG.txt></span><span><KSPView_ML.txt></span></blockquote></div><br></div></blockquote></div><br>
<span><KSPView_ML.txt></span><span><KSPView_GAMG.txt></span></blockquote></div><br></div></blockquote></div>
</div></div></blockquote></div><br>