> Okay. Presumably the change of basis can be done with a MatPtAP if desirable.
<div class="im"><div></div></div></div></blockquote><div><br>One we have the constraint matrix, we can easily obtain the change of basis matrix T (as in Klawonn-Widlund papers).<br>Note that the change of basis approach will be very effective for exact applications with reduced iterations. I think we should include in the new matrix class the possibility of doing iterations on the reduced space instead of the whole space of dofs.<br>

>> Actually in PCBDDC the local coarse matrix is computed using a theoretical equivalence of the PtAP operation between the coarse basis and the unassembled MATIS matrix (the coarse basis functions are continuous only at vertex and constraint dofs). The PtAP (where P is dense) is just avoided because of its computational cost.

> Are you basically just doing a local PtAP or do you use the equivalence
>
>   K \Psi = C^T \Lambda   (notation of Dohrmann's Eq 2)
>
> or something else?

I'm using the equivalence (a short sketch is below).

You said the new matrix class would support either more subdomains per core or more cores per subdomain. In the latter case, threaded or MPI matrices (on subcomms)?
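
To spell out the sketch (assuming the normalization C \Psi = I; the sign may differ from Dohrmann's convention):

  \Psi^T K \Psi = \Psi^T C^T \Lambda = (C \Psi)^T \Lambda = \Lambda,

so the local coarse matrix can be read off from the multipliers \Lambda without ever forming the dense product \Psi^T K \Psi explicitly.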

-- 
Stefano