On Oct 25, 2011, at 8:13 PM, Jed Brown wrote:

> On Tue, Oct 25, 2011 at 17:48, Mark F. Adams <mark.adams@columbia.edu> wrote:
>
>> As Jed says, Cheb is great for SPD problems, and his estimator stuff is excellent and should be made the default for smoothers. One problem is that the lower range should depend on the coarsening rate (i.e., 3D cubic elements should use a lower value than 2D quads, but the Cheb polynomial is at least smooth at that end). I might use a lower factor than 0.1 (that's a 2D quad and is at the low end of what anyone would do), but it's not a big deal.
>
> For the unstructured problem we were playing with a couple weeks ago, there was a nice broad region where the iteration count was essentially the same. Playing with other problems, larger values seem to be better.
>
> The following seems to be a pretty hard problem. I'm using grid sequencing here because something is needed for globalization, but I only care about the finer level linear solves.
>
> mpiexec -n 4 ./ex50 -lidvelocity 1e3 -prandtl 1e4 -snes_grid_sequence 4 -snes_monitor_short -ksp_monitor -pc_type mg -mg_levels_ksp_type chebychev -mg_levels_ksp_max_it 3 -mg_levels_ksp_chebychev_estimate_eigenvalues 0,0.5,0,1.1 -mg_levels_pc_type sor
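
Just to make sure I am reading that option right: I believe the four values a,b,c,d in -mg_levels_ksp_chebychev_estimate_eigenvalues map the Krylov-estimated eigenvalues (emin, emax) to the target interval [a*emin + b*emax, c*emin + d*emax], so 0,0.5,0,1.1 smooths on [0.5*emax, 1.1*emax]. For reference, a rough sketch of hard-wiring the same bounds on the level smoothers from code (illustration only: the helper name is made up, emax is assumed to be an estimate of the largest eigenvalue of the preconditioned level operator, and the spelling follows the petsc-3.2 Chebychev interface):

#include <petscksp.h>

/* Sketch: force Chebychev smoothing on the target interval
   [0.5*emax, 1.1*emax] on every level of a PCMG preconditioner.
   Level 0 is the coarse solve, so it is skipped. */
static PetscErrorCode SetChebBounds(PC pc, PetscReal emax)
{
  PetscErrorCode ierr;
  PetscInt       nlevels, l;
  KSP            smoother;

  PetscFunctionBegin;
  ierr = PCMGGetLevels(pc, &nlevels);CHKERRQ(ierr);
  for (l = 1; l < nlevels; l++) {
    ierr = PCMGGetSmoother(pc, l, &smoother);CHKERRQ(ierr);
    ierr = KSPSetType(smoother, KSPCHEBYCHEV);CHKERRQ(ierr);
    /* argument order is (ksp, emax, emin) */
    ierr = KSPChebychevSetEigenvalues(smoother, 1.1*emax, 0.5*emax);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}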

But SOR has a damping coefficient, so are you really doing G-S (i.e., SOR with \omega = 1.0)? (A tiny sketch of what I mean by the damped sweep is at the end of this mail.)

And is this true G-S? MATAIJ does not have a parallel G-S, but I'm guessing you are using DM here ...

Does pure G-S not work well for this problem? That is, does Cheb help?

I've seen lots of people use G-S for lid-driven cavities (not to imply that it will work for you), but maybe they damp it and don't make a big deal of it.

> This problem diverges if you bring the lower Cheby estimate down to 0.1. This configuration is also beating anything I've tried that has inner Krylov iterations (including ASM with direct subdomain solves).
>
> Ex48 is a funny 3D problem, but 0,0.1,0,1.1 seems to be a good set of parameters there as well. (I was considering a reasonably anisotropic problem where I did not semi-coarsen and instead precondition with ICC to bind the tightly coupled direction together. I couldn't get PBJacobi or SOR smoothing to work here.) The following can be solved much faster using (aggressive) semi-coarsening, but this configuration is not bad for isotropic coarsening.
>
> mpiexec -n 4 ./ex48 -M 6 -P 4 -thi_hom x -dmmg_grid_sequence -pc_mg_type multiplicative -thi_L 80e3 -thi_nlevels 4 -ksp_monitor -snes_monitor -ksp_type fgmres -mg_levels_ksp_type chebychev -mg_levels_ksp_max_it 3 -mg_levels_ksp_chebychev_estimate_eigenvalues 0,0.1,0,1.1 -mg_levels_sub_pc_type icc
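
By the way -- just a side thought about checking the setup, not something you asked for -- tacking -ksp_view onto one of these runs should print the full nested solver configuration, including the per-level Chebychev eigenvalue bounds and the SOR omega that were actually used, e.g.

  mpiexec -n 4 ./ex48 <same options as above> -ksp_view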

> I'd be interested to see your problems where 0.1 is too large. It wouldn't surprise me with aggressive coarsening in 3D, but 0.1 seems to be a good default for the standard 2^d coarsening that is our default for geometric multigrid.

I wouldn't go as far as saying that it is too large. I'm pretty sure you would see some degradation in ex56 if the ratio were hard-wired to 0.1, but it would not be bad. As I said, the Cheb poly is pretty smooth and flat at that end. GAMG looks at the coarsening rate between levels to set this ratio.

>> For very unsymmetric problems, that is, problems with hyperbolic terms, you want to use Gauss-Seidel, and sometimes you want to damp it. (I would still use Cheb for unsymmetric elliptic operators, like what you can get in FV or other minor perturbations of an SPD matrix.) The damping parameter is too complex to figure out automatically, it is not always needed, and we do not have a parallel Gauss-Seidel in PETSc. I actually have a parallel G-S; it is complex, but I could move it over to PETSc if there is demand for it.
>
> Is it specialized for BAIJ(3)? Maybe we should look at your approach and see if we can find a way to make it simple enough for the rest of us to understand.

No, it works with any block size from 1 to 6. You can see the code in prometheus/src/gauss_seid.C (or something like that).

It's pretty easy to understand (see my SC01 paper on my web page), but it's kind of a bitch to implement. And you don't really need to understand it -- it does G-S, which is very well defined. If you care about the equation ordering then you need to look at the algorithm: on one processor it's lexicographic, on N processors it's a standard coloring kind of algorithm, and in between it's a hybrid of the two. It's all MPI code, so it would not be too hard to port it to PETSc, but it uses a lot of hash tables and linked lists, so C++ is useful.
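
P.S. Since the damping question came up above: here is a tiny serial sketch of the damped sweep I mean. This is NOT the Prometheus code, just a toy CSR loop so the role of \omega is concrete (\omega = 1.0 gives plain Gauss-Seidel; the parallel algorithm only changes the order in which the rows are updated).

#include <stddef.h>

/* One forward SOR sweep, in lexicographic order, on a toy CSR matrix
   (rowptr/colind/val); assumes every row stores a nonzero diagonal entry.
   x is updated in place, so later rows already see the new values. */
void sor_sweep(size_t n, const size_t *rowptr, const size_t *colind,
               const double *val, const double *b, double *x, double omega)
{
  for (size_t i = 0; i < n; i++) {
    double sum = b[i], diag = 1.0;
    for (size_t k = rowptr[i]; k < rowptr[i+1]; k++) {
      if (colind[k] == i) diag = val[k];               /* remember a_ii */
      else                sum -= val[k] * x[colind[k]];
    }
    x[i] = (1.0 - omega)*x[i] + omega*sum/diag;        /* omega = 1.0 => Gauss-Seidel */
  }
}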