<div class="gmail_quote">On Sun, Oct 9, 2011 at 15:50, Jakub Sistek <span dir="ltr"><<a href="mailto:sistek@math.cas.cz">sistek@math.cas.cz</a>></span> wrote: </div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div text="#000000" bgcolor="#ffffff"><div class="im"><blockquote type="cite"><div class="gmail_quote">
</div>
</blockquote></div>
> Well, let me share my experience here. Mathematically, they are just
> a different language, but not from the point of view of
> implementation. In this respect, I like the work of C. Dohrmann
> (2003, 2007), which is to my eye more implementation-aware. I have
> already tried three ways of implementing the solution of these
> saddle-point problems, change of basis among them (A). The other two
> were splitting the constraints into groups (i) and (ii), enforcing
> group (i) by modifying the system as for Dirichlet boundary
> conditions and group (ii) by Lagrange multipliers (B), and solving
> the saddle-point system as such by a direct method with suitable
> pivoting (C). Let me summarize (up to my understanding of the
> approaches) my current (and very personal) impression of (A), (B),
> and (C):
>
> (A)
> (+) allows implementing the preconditioner by solving a system with
> a large distributed matrix (assembled within subdomains and at
> coarse dofs)
> (-) the change of basis produces a lot of fill in the matrix, which
> becomes very expensive especially for faces among subdomains, making
> direct solves after the change considerably more expensive
> (-) does not provide an explicit coarse problem, which is the basis
> for the multilevel method
> (-) not simple to generalize to arbitrary weighted averages on
> edges/faces (this may be my limited understanding)
>
> (B)
> (+) averages on faces do not present a problem
> (+) allows using a variety of solvers for the block K (inexact,
> iterative, ...)
> (-) requires two solves with K in each application of the
> preconditioner (one to construct the RHS for the Lagrange multiplier
> problem, another for the actual solve with the corrected RHS); see
> the sketch below
> (-) requires enough point constraints to regularize K for direct
> solvers
>
> (C)
> (+) averages on faces do not present a problem
> (+) simplicity of implementation - no distinction between groups (i)
> and (ii) is needed
> (-) quite restrictive with respect to subdomain solvers, which need
> to be robust for saddle-point systems
>
> Since inexact solvers now seem quite important to me, I would be
> inclined to try (B) the next time I implement this. This is in my
> opinion pretty much the approach of Dohrmann (2007) in "An
> approximate BDDC preconditioner".

This sounds sensible. My concern is that iterative solvers for saddle-point problems are much more delicate.
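For concreteness, the two-solve elimination in (B) looks roughly like the dense sketch below (NumPy; the names are illustrative, and K is assumed to be already regularized by the group (i) point constraints so that it is invertible):

import numpy as np

def solve_constrained(K, C, f, g):
    """Solve the constrained subdomain problem
        [ K  C^T ] [u]   [f]
        [ C   0  ] [l] = [g]
    by eliminating the Lagrange multipliers l."""
    # Setup: small Schur complement S = C K^{-1} C^T. The solves
    # with K for the columns of C^T can be amortized, since C stays
    # fixed over all applications of the preconditioner.
    S = C @ np.linalg.solve(K, C.T)
    # Solve 1 with K: builds the RHS of the multiplier problem.
    w = np.linalg.solve(K, f)
    lam = np.linalg.solve(S, C @ w - g)
    # Solve 2 with K: the actual solve with the corrected RHS.
    u = np.linalg.solve(K, f - C.T @ lam)
    return u, lam

The multiplier system is small (one row per remaining constraint), which is presumably what leaves room for the inexact or iterative solvers on the block K mentioned above.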
<div text="#000000" bgcolor="#ffffff"><div class="im"><blockquote type="cite"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0pt 0pt 0pt 0.8ex;border-left:1px solid rgb(204, 204, 204);padding-left:1ex">
<div text="#000000" bgcolor="#ffffff"></div>
</blockquote>
<div><br>
</div>
>> NullSpace is specifically for singular operators (e.g. the global
>> problem is floating). NearNullSpace just describes the low-energy
>> modes (which need to be suitably represented in the coarse space to
>> obtain a scalable method).
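In PETSc terms (sketched against the present-day petsc4py interface, so take the exact calls as an assumption rather than a fixed API), attaching the two spaces looks roughly like this for a floating 1D Neumann Laplacian:

from petsc4py import PETSc

# Singular (floating) operator: 1D Laplacian with Neumann BCs.
n = 4
A = PETSc.Mat().createAIJ([n, n])
A.setUp()
for i in range(n):
    diag = 0.0
    if i > 0:
        A.setValue(i, i - 1, -1.0)
        diag += 1.0
    if i < n - 1:
        A.setValue(i, i + 1, -1.0)
        diag += 1.0
    A.setValue(i, i, diag)
A.assemble()

# The constant vector spans the exact nullspace here.
nsp = PETSc.NullSpace().create(constant=True)
A.setNullSpace(nsp)      # the operator really is singular
A.setNearNullSpace(nsp)  # low-energy modes for the coarse space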
> Thank you for the explanation. These would probably correspond to
> the subdomain coarse basis functions, which in BDDC are computed
> from these saddle-point problems rather than from exact knowledge.
> The coarse space here contains all the subdomain nullspaces and is
> typically much larger than the exact nullspace of the subdomain
> matrices (e.g. for elasticity in 3D with cubic subdomains, using
> arithmetic averages on faces and edges, each subdomain has
> (6 faces + 12 edges + 8 vertices) * 3 = 26*3 = 78 coarse basis
> functions, compared to the dimension of 6 of the exact nullspace).

Well yes, but remember that faces are shared two ways, and edges and vertices are shared many ways, so each subdomain contributes an average of much less than 78 dofs to the coarse space. So you need many subdomain solves, but the coarse problem need not be much bigger.
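As an aside, for 3D elasticity that dimension-6 exact nullspace consists of the rigid body modes (three translations and three infinitesimal rotations). A minimal NumPy sketch of building them from nodal coordinates (the helper name is mine, purely illustrative):

import numpy as np

def rigid_body_modes(coords):
    """Columns span the nullspace of an unconstrained 3D elastic
    subdomain. coords: (n_nodes, 3); returns (3*n_nodes, 6) with
    dofs interleaved as (x0, y0, z0, x1, ...)."""
    n = coords.shape[0]
    x, y, z = coords[:, 0], coords[:, 1], coords[:, 2]
    modes = np.zeros((3 * n, 6))
    modes[0::3, 0] = 1.0                      # translation in x
    modes[1::3, 1] = 1.0                      # translation in y
    modes[2::3, 2] = 1.0                      # translation in z
    modes[0::3, 3], modes[1::3, 3] = -y, x    # rotation about z
    modes[1::3, 4], modes[2::3, 4] = -z, y    # rotation about x
    modes[0::3, 5], modes[2::3, 5] = z, -x    # rotation about y
    return modes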
> OK. As I mentioned, I think that jumps inside subdomains are
> naturally taken care of by the subdomain solves, so exact solvers
> handle these very well. Jumps aligned with the interfaces (for
> example, coefficients constant on each subdomain) are handled well
> by the stiffness-scaled averaging operator. The remaining group
> presents trouble. In a recent paper
> (http://dx.doi.org/10.1016/j.matcom.2011.03.014) we created a small
> test problem (100k dofs) for an elasticity computation, where much
> stiffer 'rods' embedded in a softer material punch through the faces
> of subdomains. There, the contrast in coefficients was only 10^5,
> and the resulting estimated condition number, even after
> preconditioning, was ~2x10^4 when using arithmetic averages on
> faces. I know it is not a complete test of the phenomenon, but one
> can conclude that the standard type of coarse problem does not
> suffice here. I remember that by playing with the difference in
> coefficients, we were essentially able to make the solver stagnate.
> I should look into it again in a month or so, so I can keep you
> updated with somewhat more systematic tests. The papers by Pechstein
> and Scheichl (2009a,b) already contain systematic tests.

Thanks for the references. For a few problems that I run into, aligning subdomains is just not practical.
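One common form of the stiffness-scaled averaging mentioned above weights each subdomain's contribution at a shared interface dof by the corresponding diagonal entry of its local stiffness matrix. A minimal sketch (NumPy; this is one variant of the scaling, not necessarily the exact operator used in the paper):

import numpy as np

def stiffness_scaled_average(values, diag_entries):
    """Average the values contributed by the subdomains sharing one
    interface dof, weighted by the local stiffness diagonals."""
    weights = diag_entries / diag_entries.sum()
    return np.dot(weights, values)

# With a 1e5 coefficient contrast, the stiff subdomain dominates:
print(stiffness_scaled_average(np.array([1.0, 2.0]),
                               np.array([1.0, 1.0e5])))  # ~2.0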