[petsc-dev] Should -pc_type mg be a linear preconditioner by default?

Mark F. Adams mark.adams at columbia.edu
Wed Oct 26 08:56:44 CDT 2011


On Oct 25, 2011, at 9:43 PM, Jed Brown wrote:

> On Tue, Oct 25, 2011 at 19:51, Mark F. Adams <mark.adams at columbia.edu> wrote:
> But SOR has a damping coefficient, so are you really doing G-S (i.e., SOR with \omega = 1.0)?
> 
> Our SOR is G-S unless you set a non-default damping parameter (the default is omega=1).
>  
> 
> And is this true G-S?  MATAIJ does not have a parallel G-S but I'm guessing you are using DM here ...
> 
> No, it is additive between subdomains.
>  
> 
> Does pure G-S not work well for this problem?  That is, does Cheb help?
> 
> I see qualitatively the same thing in serial, so Cheb is definitely helping relative to pure G-S.
>  
> 
> I've seen lots of people use G-S for lid-driven cavities (not to imply that it will work for you), but maybe they damp it and don't make a big deal of it.
> 
> Probably depends on the parameter range. Cheb seems much less sensitive to the lower estimate than SOR is to the damping factor. That is, I can try 2 or 3 values and find a nice broad minimum. With SOR, the "good" damping value seems to be so twitchy, I'd rather not mess with it.

Hmm, so a first-order Cheb (1 iteration) is the same as a certain damped SOR, right?  So you must be doing several iterations.  Not a big deal, but I'd like to see what happens if you did, say, two Cheb iterations and printed out the damping factor, then ran 2 SOR iterations with the average as the damping factor, and compared the two.  I've never seen the optimal-polynomial effect of Cheb come into play experimentally.
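
For reference, if I have the algebra right, one Chebyshev iteration on the interval [emin, emax] is just Richardson with scale \omega = 2/(emin + emax) applied to the SOR-preconditioned operator -- that is the "certain damped SOR" I mean above.  The option names below are from memory, so double-check them against the manual pages, but with whatever -pc_type mg/gamg setup you already have, the comparison would look something like:

  # two Cheb iterations per smoothing step; -ksp_view should report the
  # eigenvalue bounds the smoother ends up using
  -mg_levels_ksp_type chebyshev -mg_levels_ksp_max_it 2 -mg_levels_pc_type sor -ksp_view

  # two damped, SOR-preconditioned Richardson iterations with the equivalent scale
  -mg_levels_ksp_type richardson -mg_levels_ksp_max_it 2 -mg_levels_pc_type sor \
      -mg_levels_ksp_richardson_scale <2/(emin+emax)>

(SOR's own damping stays at the default, -mg_levels_pc_sor_omega 1.0, i.e. plain G-S, in both runs.)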


>   
> I wouldn't go as far as saying that it is too large.  You would see a degradation in ex56 if the ratio was hard-wired to 0.1, I'm pretty sure, but it would not be bad.  As I said, the Cheb poly is pretty smooth and flat at that end.  GAMG looks at the coarsening rate between levels to set this ratio.
> 
> Does it also look at the dimension?

No, just N.

> That is, if you coarsen a 1D problem by a factor of 8^1=8, the smoother will have a much bigger chunk of the spectrum to bite off than if you coarsen a 3D problem by 2^3=8.

Yes, that's a good point.
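
To put a number on it (back-of-the-envelope, assuming a Laplacian-like spectrum where the eigenvalue goes like the square of the frequency): coarsening by a factor of r in each dimension leaves the smoother responsible for roughly the band [\lambda_max / r^2, \lambda_max].  So the same total coarsening rate of 8 puts the lower bound near \lambda_max/64 in 1D (r=8) but only near \lambda_max/4 in 3D (r=2) -- a factor of 16 that looking at N alone cannot see.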

>  
> No, it works with any block size 1-6.  You can see the code in prometheus/src/gauss_seid.C (or something like that).
> 
> It's pretty easy to understand (see my SC01 paper on my web page), but it's kind of a bitch to implement.  And you don't really need to understand it -- it does G-S, very well defined.  If you care about equation ordering then you need to look at the algorithm.  On one processor it's lexicographical, on N processors it's a standard coloring kind of algorithm, and in between it's a hybrid.  It's all MPI code, so it would not be too hard to port it to PETSc, but it uses a lot of hash tables and linked lists, so C++ is useful.
> 
> I downloaded that paper when we talked about it this summer. It looks like the parallel coloring is a meaningful chunk of it? The INL guys are also asking for a parallel coloring, so it's something we should do regardless.

No, actually, coloring is not important in the traditional sense.  First, coloring can come in (theoretically) as a *processor* coloring, and if my algorithm has one vertex per processor then it is the same as a traditional coloring scheme.  But if the domains are large enough (3^d for (bi/tri)linear elements) then there is a separation in the dependencies (see the paper) and coloring is not needed.

And even if you had one vertex per process, you could randomize the process IDs (assuming they were not random initially) and you would get a log term in the complexity, which is not bad (graph theorists love randomization.  Just love it).
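
If the randomized trick is not familiar: give every vertex (or process) a random priority, and in each synchronous round let every uncolored vertex whose priority beats all of its uncolored neighbors pick a color; in expectation the number of rounds only has that log-type term in it -- the Luby / Jones-Plassmann argument.  Here is a throwaway serial toy, a 1-D chain graph with everything made up and nothing taken from the actual Prometheus code, just to show the round structure I mean:

/* Throwaway toy: Jones-Plassmann style random-priority coloring of a 1-D
   chain graph, run as synchronous rounds, to show where the log term in the
   number of rounds comes from.  Nothing here is Prometheus or PETSc code. */
#include <stdio.h>
#include <stdlib.h>

#define N 64  /* number of vertices in the chain 0 - 1 - 2 - ... - (N-1) */

int main(void)
{
  double w[N];      /* random priority, standing in for a randomized proc ID */
  int    color[N];  /* 0 = not yet colored */
  int    pick[N];
  int    rounds = 0, uncolored = N, i;

  srand(1);
  for (i = 0; i < N; i++) { w[i] = (double)rand()/RAND_MAX; color[i] = 0; }

  while (uncolored > 0) {
    rounds++;
    /* One "parallel" round: an uncolored vertex is selected if its priority
       beats every uncolored neighbor, so two neighbors can never both win. */
    for (i = 0; i < N; i++) {
      int left, right;
      pick[i] = 0;
      if (color[i]) continue;
      left  = (i > 0   && !color[i-1]) ? (w[i] > w[i-1]) : 1;
      right = (i < N-1 && !color[i+1]) ? (w[i] > w[i+1]) : 1;
      pick[i] = left && right;
    }
    /* Each selected vertex takes the smallest color not used by a neighbor. */
    for (i = 0; i < N; i++) {
      int c = 1;
      if (!pick[i]) continue;
      while ((i > 0 && color[i-1] == c) || (i < N-1 && color[i+1] == c)) c++;
      color[i] = c;
      uncolored--;
    }
  }
  printf("colored the %d-vertex chain in %d rounds\n", N, rounds);
  return 0;
}

The real code of course works on the matrix graph with whole subdomains per process and MPI in place of the serial loops; this is only the selection rule.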
