Thanks everyone for the help.

On item 1 (using an Amat different from Pmat with geometric multigrid), I tried Barry's suggestion, but it did not seem to resolve the issue. For example, in ksp ex25.c, I tried adding the following lines after line 112:

  if (J == jac) {
    ierr = PetscPrintf(PETSC_COMM_WORLD,"Creating a new Amat\n");CHKERRQ(ierr);
    ierr = DMCreateMatrix(da,&J);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp,J,jac);CHKERRQ(ierr);
  }
  ierr = MatShift(J,1.0);CHKERRQ(ierr);

This change should make Amat (J) different from Pmat (jac): DMCreateMatrix returns a zero matrix, so after the MatShift, Amat is the identity, and the KSP solution should be completely different from that of the original ex25.c. But viewing the solution vector, the solution is unchanged. It seems PETSc is ignoring the Amat created in this approach.

Matt K's suggestion of switching from KSP to SNES does work, allowing Amat to differ from Pmat on the finest multigrid level; a sketch of that route is below. (On coarser levels, PETSc still seems to force Amat=Pmat on entry to ComputeMatrix.)

On Jed's comment: the application I have in mind is indeed a convection-dominated equation, namely a steady linear 3D convection-diffusion equation with smoothly varying anisotropic coefficients and recirculating convection. GAMG and hypre BoomerAMG have worked acceptably on it when I discretize with low-order upwind differences in Pmat and set Amat=Pmat, but I'd like higher-order accuracy. Using GMRES with a higher-order discretization in Amat and a low-order Pmat also works, but the number of KSP iterations grows large as the diffusion becomes small relative to the convection, even with -pc_type lu. So I'm working to see whether geometric multigrid with defect correction at each level can do better.
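
For reference, here is a minimal sketch of the SNES route that works for me. The structure is illustrative rather than copied from my code: FormJacobian/FormFunction, the 1D DMDA, and the placeholder identity assembly all stand in for the real high-order and low-order upwind stencils.

  #include <petscsnes.h>

  /* SNES hands the Jacobian callback Amat and Pmat as separate matrices,
     so the high-order operator can go in Amat while the low-order upwind
     operator goes in Pmat.  The MatShift lines are placeholders for the
     real stencil assembly. */
  PetscErrorCode FormJacobian(SNES snes,Vec x,Mat Amat,Mat Pmat,void *ctx)
  {
    PetscErrorCode ierr;
    PetscFunctionBeginUser;
    ierr = MatZeroEntries(Amat);CHKERRQ(ierr);
    ierr = MatShift(Amat,1.0);CHKERRQ(ierr);   /* high-order operator goes here */
    ierr = MatZeroEntries(Pmat);CHKERRQ(ierr);
    ierr = MatShift(Pmat,1.0);CHKERRQ(ierr);   /* low-order operator goes here  */
    PetscFunctionReturn(0);
  }

  /* Residual F(x) = Amat*x - b, with b = (1,...,1) just for this sketch. */
  PetscErrorCode FormFunction(SNES snes,Vec x,Vec F,void *ctx)
  {
    Mat            Amat;
    PetscErrorCode ierr;
    PetscFunctionBeginUser;
    ierr = SNESGetJacobian(snes,&Amat,NULL,NULL,NULL);CHKERRQ(ierr);
    ierr = MatMult(Amat,x,F);CHKERRQ(ierr);
    ierr = VecShift(F,-1.0);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

  int main(int argc,char **argv)
  {
    PetscErrorCode ierr;
    SNES           snes;
    DM             da;
    Mat            Amat,Pmat;
    Vec            x;

    ierr = PetscInitialize(&argc,&argv,NULL,NULL);if (ierr) return ierr;
    ierr = DMDACreate1d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,128,1,1,NULL,&da);CHKERRQ(ierr);
    ierr = DMSetUp(da);CHKERRQ(ierr);
    ierr = DMCreateMatrix(da,&Amat);CHKERRQ(ierr);   /* high-order Amat          */
    ierr = DMCreateMatrix(da,&Pmat);CHKERRQ(ierr);   /* low-order Pmat, distinct */
    ierr = DMCreateGlobalVector(da,&x);CHKERRQ(ierr);
    ierr = SNESCreate(PETSC_COMM_WORLD,&snes);CHKERRQ(ierr);
    ierr = SNESSetDM(snes,da);CHKERRQ(ierr);
    ierr = SNESSetType(snes,SNESKSPONLY);CHKERRQ(ierr); /* linear problem: one Newton step */
    ierr = SNESSetFunction(snes,NULL,FormFunction,NULL);CHKERRQ(ierr);
    ierr = SNESSetJacobian(snes,Amat,Pmat,FormJacobian,NULL);CHKERRQ(ierr);
    ierr = SNESSetFromOptions(snes);CHKERRQ(ierr);
    ierr = SNESSolve(snes,NULL,x);CHKERRQ(ierr);
    ierr = VecDestroy(&x);CHKERRQ(ierr);
    ierr = MatDestroy(&Amat);CHKERRQ(ierr);
    ierr = MatDestroy(&Pmat);CHKERRQ(ierr);
    ierr = SNESDestroy(&snes);CHKERRQ(ierr);
    ierr = DMDestroy(&da);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }

Running with -snes_view is a handy way to check which Amat and Pmat each solver level actually ended up with.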
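
And here is the kind of thing I have in mind for the defect-correction hierarchy, i.e. building coarse-level Amat operators by Galerkin projection (P^T A P) of the fine-grid high-order matrix while keeping each level's low-order Pmat for the smoothers. This is only a sketch: it assumes the PCMG hierarchy is already set up (e.g. after KSPSetUp), Ahigh is a placeholder for the assembled high-order fine-grid matrix, and reference counting/cleanup is omitted.

  PC       pc;
  Mat      Alevel,Acoarse,P,Plevel;
  KSP      smoother;
  PetscInt l,nlevels;

  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCMGGetLevels(pc,&nlevels);CHKERRQ(ierr);
  Alevel = Ahigh;                              /* finest-level high-order operator */
  for (l=nlevels-1; l>0; l--) {
    ierr = PCMGGetInterpolation(pc,l,&P);CHKERRQ(ierr);         /* level l-1 -> l */
    ierr = MatPtAP(Alevel,P,MAT_INITIAL_MATRIX,2.0,&Acoarse);CHKERRQ(ierr);
    ierr = PCMGGetSmoother(pc,l-1,&smoother);CHKERRQ(ierr);
    ierr = KSPGetOperators(smoother,NULL,&Plevel);CHKERRQ(ierr);
    ierr = KSPSetOperators(smoother,Acoarse,Plevel);CHKERRQ(ierr); /* Amat != Pmat */
    Alevel = Acoarse;                          /* project down to the next level  */
  }

Whether PCMG actually keeps these per-level Amats through another setup pass is part of what I am still trying to establish, given the Amat=Pmat behavior on coarse levels noted above.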

Thanks,
Matt Landreman


On Thu, Feb 23, 2017 at 5:23 PM, Jed Brown <jed@jedbrown.org> wrote:
> Matt Landreman <matt.landreman@gmail.com> writes:
>> 3. Is it at all sensible to do this second kind of defect correction with
>> _algebraic_ multigrid? Perhaps Amat for each level could be formed from the
>> high-order matrix at the fine level by the Galerkin operator R A P, after
>> getting all the restriction matrices created by gamg for Pmat?
>
> Note that defect correction is most commonly used for
> transport-dominated processes for which the high order discretization is
> not h-elliptic. AMG heuristics are typically really bad for such
> problems so stabilizing a smoother isn't really a relevant issue. Also,
> it is usually used for strongly nonlinear problems where AMG's setup
> costs are likely overkill. This is not to say that defect correction
> AMG won't work, but there is reason to believe that it doesn't matter
> for many important problems.