On Oct 25, 2011, at 9:43 PM, Jed Brown wrote:

> On Tue, Oct 25, 2011 at 19:51, Mark F. Adams <mark.adams@columbia.edu> wrote:
>
>> But SOR has a damping coefficient, so are you really doing G-S (ie, SOR with \omega = 1.0)?
>
> Our SOR is G-S unless you set a non-default damping parameter (the default is omega=1).
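
(For anyone following along, and hedging on the exact option names in your PETSc version: I believe that damping parameter is just the usual SOR omega option, so on the multigrid levels it would be spelled roughly

    -mg_levels_pc_type sor -mg_levels_pc_sor_omega 1.0

and omega = 1.0 is plain G-S.)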

>> And is this true G-S? MATAIJ does not have a parallel G-S but I'm guessing you are using DM here ...
>
> No, it is additive between subdomains.
>
>> Does pure G-S not work well for this problem? That is, does Cheb help?
>
> I see qualitatively the same thing in serial, so Cheb is definitely helping relative to pure G-S.
>
>> I've seen lots of people use G-S for lid-driven cavities (not to imply that it will work for you), but maybe they damp them and don't make a big deal of it.
>
> Probably depends on the parameter range. Cheb seems much less sensitive to the lower estimate than SOR is to the damping factor. That is, I can try 2 or 3 values and find a nice broad minimum. With SOR, the "good" damping value seems so twitchy that I'd rather not mess with it.

Humm, so a first-order Cheb (1 iteration) is the same as a certain damped SOR, right? So you must be doing several iterations. Not a big deal, but I'd like to see this: run, say, two Cheb iterations and print out the damping factor, then run 2 SOR iterations with the average as the damping factor, and see how they compare. I've never seen the optimal-polynomial effect of Cheb come into play experimentally.
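
Concretely (and hedging on the exact spelling of these options for whatever PETSc you are running), the comparison I have in mind is something like

    # two Chebyshev iterations, SOR applied on each level
    -mg_levels_ksp_type chebyshev -mg_levels_ksp_max_it 2 -mg_levels_pc_type sor

versus

    # two Richardson/SOR iterations with the averaged damping factor
    -mg_levels_ksp_type richardson -mg_levels_ksp_max_it 2 \
        -mg_levels_pc_type sor -mg_levels_pc_sor_omega <avg_omega>

where <avg_omega> is just a placeholder for whatever average factor falls out of the Cheb run. The "first-order Cheb is a damped SOR" claim is only the observation that a degree-1 Chebyshev step on the interval [\lambda_min, \lambda_max] scales the (SOR-)preconditioned residual by 2/(\lambda_min + \lambda_max), i.e. it is one damped sweep.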
<div> </div><blockquote class="gmail_quote" style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0.8ex; border-left-width: 1px; border-left-color: rgb(204, 204, 204); border-left-style: solid; padding-left: 1ex; position: static; z-index: auto; "><div><div class="im"><div></div></div><div>I wouldn't go as far as saying that it is too large. You would see a degradation in ex56 if the ratio was hard wired to 0.1 I'm pretty sure but would not be bad. As I said the Cheb poly is pretty smooth and flat at that end. GAMG looks at the coarsening rate between levels to set this ratio.</div>
>
> Does it also look at the dimension?

No, just N.

> That is, if you coarsen a 1D problem by a factor of 8^1 = 8, the smoother will have a much bigger chunk of the spectrum to bite off than if you coarsen a 3D problem by 2^3 = 8.

Yes, that's a good point.
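
To put a rough number on it (back-of-the-envelope only, assuming a Laplacian-like spectrum where a mode's eigenvalue grows like the square of its frequency): coarsening 1D by 8 leaves the smoother responsible for all the modes the coarse grid can't represent, i.e. frequencies above roughly 1/8 of the maximum, which means eigenvalues a factor of ~64 below the top of the spectrum. Coarsening 3D by 2 per dimension -- the same 8x reduction in N -- only asks it to handle frequencies above roughly 1/2 of the maximum per direction, a factor of ~4 below the top (give or take constants and cross terms). Same coarsening rate in N, very different chunk of the spectrum, so an N-only ratio will be off one way or the other depending on the dimension.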
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;"><div><div class="im"></div></div><div>No it works with any block size 1-6. You can see the code in prometheus/src/gauss_seid.C (or something like that).</div>
>>
>> It's pretty easy to understand (see my SC01 paper on my web page), but it's kind of a bitch to implement. And you don't really need to understand it -- it does G-S, very well defined. If you care about equation ordering, then you need to look at the algorithm. On one processor it's lexicographical, on N processors it's a standard coloring kind of algorithm, and it's a hybrid in between. It's all MPI code, so it would not be too hard to port it to PETSc, but it uses a lot of hash tables and linked lists, so C++ is useful.
>
> I downloaded that paper when we talked about it this summer. It looks like the parallel coloring is a meaningful chunk of it? The INL guys are also asking for a parallel coloring, so it's something we should do regardless.

No, actually coloring is not important in the traditional sense. First, coloring can come in (theoretically) as a *processor* coloring, and if my algorithm has one vertex per processor then it is the same as a traditional coloring scheme. But if the domains are large enough (3^d for (bi/tri)linear elements) then there is a separation in the dependencies (see the paper) and coloring is not needed.

And even if you had one vertex per process, you could randomize the process IDs (assuming they were not random initially) and you would get a log term in the complexity, which is not bad (graph theorists love randomization, just love it).
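
If it helps make the "hybrid between lexicographic and colored" comment concrete, here is a throwaway serial sketch in plain C of just the ordering idea: the unknowns that sit on subdomain interfaces get swept in a colored (red/black) order, then each subdomain interior is swept lexicographically. It is emphatically not the Prometheus code -- no dependency analysis, no MPI, and the problem, sizes, and splitting are all made up for illustration:

    /* Toy sketch, NOT the Prometheus algorithm: hybrid ordering for
     * Gauss-Seidel on a 1D Poisson problem (-u'' = f, u = 0 at the ends).
     * P contiguous "subdomains" stand in for processes; interface unknowns
     * are swept by color, interiors lexicographically. */
    #include <stdio.h>

    #define N 64   /* total unknowns                */
    #define P 4    /* number of pretend "processes" */

    /* one Gauss-Seidel update of unknown i for the standard 3-point stencil */
    static void gs_update(double *u, const double *f, double h2, int i)
    {
      double left  = (i > 0)     ? u[i - 1] : 0.0;
      double right = (i < N - 1) ? u[i + 1] : 0.0;
      u[i] = 0.5 * (left + right + h2 * f[i]);
    }

    int main(void)
    {
      double u[N] = {0.0}, f[N], h = 1.0 / (N + 1), h2 = h * h, rnorm = 0.0;
      int    size = N / P;

      for (int i = 0; i < N; i++) f[i] = 1.0;   /* constant forcing */

      for (int sweep = 0; sweep < 50; sweep++) {
        /* 1) interface unknowns (first and last of each subdomain),
              even-colored subdomains first, then odd */
        for (int color = 0; color < 2; color++)
          for (int p = color; p < P; p += 2) {
            gs_update(u, f, h2, p * size);            /* left interface  */
            gs_update(u, f, h2, p * size + size - 1); /* right interface */
          }
        /* 2) interior of each subdomain, plain lexicographic G-S */
        for (int p = 0; p < P; p++)
          for (int i = p * size + 1; i < p * size + size - 1; i++)
            gs_update(u, f, h2, i);
      }

      /* squared residual norm, just to show the sweeps are doing something */
      for (int i = 0; i < N; i++) {
        double left  = (i > 0)     ? u[i - 1] : 0.0;
        double right = (i < N - 1) ? u[i + 1] : 0.0;
        double r = f[i] - (2.0 * u[i] - left - right) / h2;
        rnorm += r * r;
      }
      printf("||r||^2 ~ %g after 50 hybrid sweeps\n", rnorm);
      return 0;
    }

In the real thing the coloring is on processors (or on the interface dependency graph), not on these toy index sets, and the point above is exactly that once the subdomains are big enough the colored part is a small fraction of the work.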