[petsc-dev] Should -pc_type mg be a linear preconditioner by default?

Mark F. Adams mark.adams at columbia.edu
Tue Oct 25 19:51:31 CDT 2011


On Oct 25, 2011, at 8:13 PM, Jed Brown wrote:

> On Tue, Oct 25, 2011 at 17:48, Mark F. Adams <mark.adams at columbia.edu> wrote:
>> As Jed says, Cheb is great for SPD problems, and his estimator stuff is excellent and should be made the default for smoothers.  One problem is that the lower range should depend on the coarsening rate (i.e., 3D cubic elements should use a lower bound than 2D quads, though the Cheb polynomial is at least smooth at that end).  I might use a lower factor than 0.1 (that's a 2D quad and is at the low end of what anyone would do), but it's not a big deal.
> 
> For the unstructured problem we were playing with a couple weeks ago, there was a nice broad region where the iteration count was essentially the same. Playing with other problems, larger values seem to be better.
> 
> The following seems to be a pretty hard problem. I'm using grid sequencing here because something is needed for globalization, but I only care about the finer level linear solves.
> 
> mpiexec -n 4 ./ex50 -lidvelocity 1e3 -prandtl 1e4 -snes_grid_sequence 4 -snes_monitor_short -ksp_monitor -pc_type mg -mg_levels_ksp_type chebychev -mg_levels_ksp_max_it 3 -mg_levels_ksp_chebychev_estimate_eigenvalues 0,0.5,0,1.1 -mg_levels_pc_type sor
> 

But SOR has a damping coefficient, so are you really doing G-S (i.e., SOR with \omega = 1.0)?

And is this true G-S?  MATAIJ does not have a parallel G-S, but I'm guessing you are using a DM here ...

Does pure G-S not work well for this problem?  That is, does Cheb help?

I've seen lots of people use G-S for lid-driven cavities (not to imply that it will work for you), but maybe they damp it and don't make a big deal of it.
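
(For reference, in case damping does turn out to matter: a quick sketch of how it could be set, assuming the PCSOR interface; error checking is omitted, and pc/level here stand for your PCMG object and a level index.  Note that in parallel PCSOR only does process-local sweeps, i.e., block Jacobi with SOR inside each block, so it is not a true global G-S either.)

  KSP smoother;
  PC  spc;
  PCMGGetSmoother(pc, level, &smoother);  /* smoother KSP on the given MG level */
  KSPGetPC(smoother, &spc);
  PCSetType(spc, PCSOR);
  PCSORSetOmega(spc, 0.8);                /* omega = 1.0 recovers undamped G-S/SOR */

or, equivalently, something like -mg_levels_pc_sor_omega 0.8 on the command line on top of -mg_levels_pc_type sor.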

> This problem diverges if you bring the lower Cheby estimate down to 0.1. This configuration is also beating anything I've tried that has inner Krylov iterations (including ASM with direct subdomain solves).
> 
> Ex48 is a funny 3D problem, but 0,0.1,0,1.1 seems to be a good set of parameters there as well. (I was considering a reasonably anisotropic problem where I did not semi-coarsen, and preconditioning with ICC to bind the tightly coupled direction together. I couldn't get PBJacobi or SOR smoothing to work here.) The following can be solved much faster using (aggressive) semi-coarsening, but this configuration is not bad for isotropic coarsening.
> 
> mpiexec -n 4 ./ex48 -M 6 -P 4 -thi_hom x -dmmg_grid_sequence -pc_mg_type multiplicative -thi_L 80e3 -thi_nlevels 4 -ksp_monitor -snes_monitor -ksp_type fgmres -mg_levels_ksp_type chebychev -mg_levels_ksp_max_it 3 -mg_levels_ksp_chebychev_estimate_eigenvalues 0,0.1,0,1.1 -mg_levels_sub_pc_type icc
> 
> 
> I'd be interested to see your problems where 0.1 is too large. It wouldn't surprise me with aggressive coarsening in 3D, but 0.1 seems to work well for the standard 2^d coarsening that is the default for geometric multigrid.

I wouldn't go so far as to say that it is too large.  I'm pretty sure you would see some degradation in ex56 if the ratio were hard-wired to 0.1, but it would not be bad.  As I said, the Cheb polynomial is pretty smooth and flat at that end.  GAMG looks at the coarsening rate between levels to set this ratio.
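
(To make the ratio talk concrete, as I read the -mg_levels_ksp_chebychev_estimate_eigenvalues a,b,c,d option: the estimated extreme eigenvalues (emin, emax) are mapped to the interval Chebyshev targets, [a*emin + b*emax, c*emin + d*emax], so 0,0.1,0,1.1 targets [0.1*emax, 1.1*emax] and 0,0.5,0,1.1 targets [0.5*emax, 1.1*emax].  A trivial sketch, just to pin down the convention:)

  #include <petscsys.h>

  /* Map the estimated extreme eigenvalues (emin, emax) and the transform
     coefficients a,b,c,d to the interval Chebyshev is asked to damp.
     GAMG effectively varies the "b" factor per level from the observed
     coarsening rate instead of hard-wiring 0.1. */
  static void ChebTargetInterval(PetscReal emin, PetscReal emax,
                                 PetscReal a, PetscReal b, PetscReal c, PetscReal d,
                                 PetscReal *lo, PetscReal *hi)
  {
    *lo = a*emin + b*emax;
    *hi = c*emin + d*emax;
  }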

> 
>> For very unsymmetric problems, that is, problems with hyperbolic terms, you want to use Gauss-Seidel, and sometimes you want to damp it.  (I would still use Cheb for unsymmetric elliptic operators, like what you can get in FV or other minor perturbations of an SPD matrix.)  Setting a damping parameter is too complex to figure out automatically, it is not always needed, and we do not have a parallel Gauss-Seidel in PETSc.  I actually have a parallel G-S; it is complex, but I could move it over to PETSc if there is demand for it.
> 
> Is it specialized for BAIJ(3)? Maybe we should look at your approach and see if we can find a way to make it simple enough for the rest of us to understand.

No, it works with any block size from 1 to 6.  You can see the code in prometheus/src/gauss_seid.C (or something like that).

It's pretty easy to understand (see my SC01 paper on my web page), but it's kind of a bitch to implement.  And you don't really need to understand it -- it does G-S, which is very well defined.  If you care about equation ordering, then you need to look at the algorithm: on one processor it is lexicographic, on N processors it is a standard coloring kind of algorithm, and in between it is a hybrid.  It's all MPI code, so it would not be too hard to port it to PETSc, but it uses a lot of hash tables and linked lists, so C++ is useful.
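
(For anyone reading the archive who just wants to see what the smoother does locally: below is a toy CSR sketch of one forward SOR/G-S sweep, which is essentially what runs on each processor's interior unknowns; the coloring/hybrid scheduling only changes the order in which the processor-boundary equations get this same update.  This is an illustration, not the prometheus code.)

  /* One forward SOR sweep over a CSR matrix (row pointers ia[], column
     indices ja[], values aa[]):
         x_i <- (1-omega)*x_i + omega*(b_i - sum_{j != i} a_ij*x_j)/a_ii,
     visiting rows in increasing order so already-updated x_j (j < i) are
     used; omega = 1.0 is plain lexicographic Gauss-Seidel. */
  void sor_sweep(int n, const int ia[], const int ja[], const double aa[],
                 const double b[], double x[], double omega)
  {
    for (int i = 0; i < n; i++) {
      double sigma = 0.0, diag = 0.0;
      for (int k = ia[i]; k < ia[i+1]; k++) {
        if (ja[k] == i) diag = aa[k];
        else            sigma += aa[k]*x[ja[k]];
      }
      if (diag != 0.0) x[i] = (1.0 - omega)*x[i] + omega*(b[i] - sigma)/diag;
    }
  }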
