<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Dec 30, 2014 at 3:31 PM, Jed Brown <span dir="ltr"><<a href="mailto:jed@jedbrown.org" target="_blank">jed@jedbrown.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">I have a pretty well-conditioned problem, eigenvalues in (0.2, 1.55)<br>
with bjacobi/ilu. It converges in 20-25 iterations with GMRES or with<br>
Chebyshev targeting this range (eigenvalues are almost uniformly<br>
distributed). I'd like to make an attempt to do better using GAMG, but<br>
the mass term is big enough that I don't want a real coarse grid.<br>
Instead, I want to coarsen only once or twice "in-place" and not have<br>
any real coarse grid. But it looks like GAMG is hardwired to put the<br>
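(For reference, the Chebyshev run described above corresponds roughly to options like these; the eigenvalue bounds come from the numbers quoted and the bjacobi/ilu smoother is as stated, everything else is an assumption:)

    -ksp_type chebyshev -ksp_chebyshev_eigenvalues 0.2,1.55 \
    -pc_type bjacobi -sub_pc_type ilu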
Yea, time to fix this. This comes up if you have a singular problem and want to use an iterative coarse grid solver, or want to use a parallel LU solver, etc.

It would be pretty easy to fix your problem by removing the hard-coded switch to one proc on the last level ((PetscBool)(level==pc_gamg->Nlevels-2)), but there are other problems that we should fix.

The only reason I can think of that you would want more than one proc on the coarse grid is to use a non-default coarse grid solver (the default is bjacobi/lu). Perhaps I could *not* go to one proc if the coarse grid solver PC type is set (more below). I don't like adding yet another switch for this if I can avoid it.
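Just as an illustration of that use case (every specific choice here is an assumption, including whether -pc_mg_levels is the right way to cap the number of GAMG levels), a parallel coarse grid with a non-default solver might look like:

    -pc_type gamg -pc_mg_levels 2 \
    -mg_coarse_ksp_type chebyshev \
    -mg_coarse_pc_type bjacobi -mg_coarse_sub_pc_type ilu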
I now see there is another problem. I recently took out the code (in the branch mark/gamg-coarseksp) that was setting the coarse grid KSP:

    ierr = PCSetUp_MG(pc);CHKERRQ(ierr);

    /* PCSetUp_MG seems to insist on setting this to GMRES */
    ierr = KSPSetType(mglevels[0]->smoothd, KSPPREONLY);CHKERRQ(ierr);

The problem is that GMRES is going to do a few reductions even if it has an exact solver, and this is bad on a large problem because it uses PETSC_COMM_WORLD.

I don't know if PCSetUp_MG still sets the coarse grid solver to GMRES, but we should fix this now as well.

One might have '-mg_coarse_pc_type lu' but be using a parallel LU, so just checking the PC type would be fragile.

Perhaps I could turn off this hard switch to one processor if -mg_coarse_pc_type, -mg_coarse_ksp_type, or -mg_coarse_pc_factor_mat_solver_package is set? And also put the KSPPREONLY back in for that case?
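A rough sketch of what that check could look like (hypothetical code, not what is in gamg.c; the PetscOptionsHasName calls ignore the PC's own options prefix, and the exact API details would need checking):

    PetscBool flg, user_coarse = PETSC_FALSE;

    /* hypothetical: treat an explicitly configured coarse grid solver as a
       request to keep the coarsest grid distributed */
    ierr = PetscOptionsHasName(NULL,"-mg_coarse_ksp_type",&flg);CHKERRQ(ierr);
    if (flg) user_coarse = PETSC_TRUE;
    ierr = PetscOptionsHasName(NULL,"-mg_coarse_pc_type",&flg);CHKERRQ(ierr);
    if (flg) user_coarse = PETSC_TRUE;
    ierr = PetscOptionsHasName(NULL,"-mg_coarse_pc_factor_mat_solver_package",&flg);CHKERRQ(ierr);
    if (flg) user_coarse = PETSC_TRUE;

    if (user_coarse) {
      /* skip the hard switch of the coarsest grid to one process here ... */
      /* ... and restore the PREONLY default so an otherwise unconfigured
         coarse KSP does not do GMRES reductions on PETSC_COMM_WORLD */
      ierr = KSPSetType(mglevels[0]->smoothd, KSPPREONLY);CHKERRQ(ierr);
    }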