<div class="gmail_quote">On Fri, May 11, 2012 at 6:11 PM, Stefano Zampini <span dir="ltr"><<a href="mailto:stefano.zampini@gmail.com" target="_blank">stefano.zampini@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="im"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><p>Isn't PtAP still the right operation, you just have a particular implementation that takes advantage of structure?</p>
>
> Yes it is, but it is an expensive operation (P is dense); in BDDC, once you have solved the local problems to create P, you get the columns of the coarse matrix almost directly (and at very low cost). The latter can be obtained (as it is implemented in the code) as C^T \Lambda, where C is the local sparse matrix of constraints and \Lambda is a small, dense matrix of Lagrange multipliers.
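
Just to fix notation for anyone following along, in PETSc terms the two routes would look roughly like the sketch below. The names (A, P, C, Lambda, coarse_*) are placeholders, not the actual BDDC code, and I am assuming MatTransposeMatMult handles the sparse-times-dense combination here:

  Mat coarse_generic, coarse_bddc;

  /* generic route: coarse = P^T * A * P (expensive when P is dense) */
  ierr = MatPtAP(A, P, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &coarse_generic);CHKERRQ(ierr);

  /* BDDC shortcut: coarse = C^T * Lambda, reusing the Lagrange multipliers
     that come out of the local constrained solves */
  ierr = MatTransposeMatMult(C, Lambda, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &coarse_bddc);CHKERRQ(ierr);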
<div class="im">
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">

>> I know you can also assemble B A^{-1} B^T, which is the same thing, and maybe we should provide a generic op for that.
>
> What is B? The jump operator?

Your C above.
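
If we did provide a generic op for that, I would imagine something along the lines of the sketch below: solve with each column of B^T and apply B to the result, accumulating into a small dense matrix. The helper name AssembleBAinvBt, the sequential setting, and the dense result are all my assumptions for the sketch, not an existing PETSc interface:

  #include <petscksp.h>

  /* Sketch: S = B * A^{-1} * B^T, where ksp wraps the (local, sequential) matrix A
     and B is an m x n constraint-like matrix with m small. */
  PetscErrorCode AssembleBAinvBt(KSP ksp, Mat B, Mat *S)
  {
    PetscErrorCode ierr;
    PetscInt       m, n, i, j;
    Vec            b, x, s;
    Mat            Bt;

    PetscFunctionBegin;
    ierr = MatGetSize(B, &m, &n);CHKERRQ(ierr);
    ierr = MatTranspose(B, MAT_INITIAL_MATRIX, &Bt);CHKERRQ(ierr);
    ierr = MatCreateSeqDense(PETSC_COMM_SELF, m, m, PETSC_NULL, S);CHKERRQ(ierr);
    ierr = MatGetVecs(B, &x, &s);CHKERRQ(ierr);            /* x of size n, s of size m */
    ierr = VecDuplicate(x, &b);CHKERRQ(ierr);
    for (j = 0; j < m; j++) {
      PetscScalar *sv;
      ierr = MatGetColumnVector(Bt, b, j);CHKERRQ(ierr);   /* b = j-th column of B^T */
      ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);            /* x = A^{-1} b */
      ierr = MatMult(B, x, s);CHKERRQ(ierr);               /* s = j-th column of S */
      ierr = VecGetArray(s, &sv);CHKERRQ(ierr);
      for (i = 0; i < m; i++) {
        ierr = MatSetValue(*S, i, j, sv[i], INSERT_VALUES);CHKERRQ(ierr);
      }
      ierr = VecRestoreArray(s, &sv);CHKERRQ(ierr);
    }
    ierr = MatAssemblyBegin(*S, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(*S, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = VecDestroy(&b);CHKERRQ(ierr);
    ierr = VecDestroy(&x);CHKERRQ(ierr);
    ierr = VecDestroy(&s);CHKERRQ(ierr);
    ierr = MatDestroy(&Bt);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }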

I have other algorithms in mind where the interpolants would be constructed somewhat differently. I may need to think a bit about what the right common operation is for that case. I just feel like we may be getting too tightly dependent on the specific BDDC algorithm (which has exponential condition number growth in the number of levels), which we probably want to generalize in the future.