<div class="gmail_quote">On Mon, Nov 21, 2011 at 09:41, Thomas Witkowski <span dir="ltr"><<a href="mailto:Thomas.Witkowski@tu-dresden.de">Thomas.Witkowski@tu-dresden.de</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div id=":5zy">In my case the Schur complemt should be quite sparse, </div></blockquote><div><br></div><div>So semantically, your Kbb is a parallel block-diagonal matrix. In my opinion, you don't actually want to store it that way because then you are only allowed to solve with the whole thing, which would make the algorithm more synchronous than necessary. So I would store each block in its own matrix with its own local communicator (MPI_COMM_SELF, of if you are being more general, some suitable subcommunicator).</div>
> so I want to build it explicitly. My main problem is still how to compute
>
>   inverse(Kbb) * Kba
>
> Sorry for asking again, but none of the solutions seems to be satisfying. If I understood you (and Jed) right, there are two general ways: either I define inverse(Kbb) as a Mat object and use MatMatSolve, or I wrap it in a KSP and use KSPSolve. The first option seems fine, but one of you noted that it is not possible to reuse the LU factorization.
No, both ways reuse the LU factorization; a sketch of both variants follows below.

> That would be a huge drawback, as I have to use inverse(Kbb) in different contexts. When defining inverse(Kbb) via a KSP, as I do at the moment (and yes, I want to use direct solvers only here), I must store Kba either column-wise or in a dense way. Neither is really feasible.
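Here is that sketch (all names are placeholders, and details such as the extra MatStructure argument of KSPSetOperators depend on the PETSc version). In both variants Kbb is factored once and the factorization is reused for every subsequent solve.

#include <petscksp.h>

/* Variant 1: explicit factor object plus MatMatSolve.  Kbb_local is one
   sequential block of Kbb, Kba_dense a SeqDense matrix holding the relevant
   columns of Kba, X_dense a SeqDense matrix of the same size for the result. */
PetscErrorCode SolveWithFactor(Mat Kbb_local, Mat Kba_dense, Mat X_dense, Mat *F)
{
  PetscErrorCode ierr;
  IS             rperm, cperm;
  MatFactorInfo  info;

  ierr = MatGetOrdering(Kbb_local, MATORDERINGND, &rperm, &cperm); CHKERRQ(ierr);
  ierr = MatFactorInfoInitialize(&info); CHKERRQ(ierr);
  ierr = MatGetFactor(Kbb_local, MATSOLVERPETSC, MAT_FACTOR_LU, F); CHKERRQ(ierr);
  ierr = MatLUFactorSymbolic(*F, Kbb_local, rperm, cperm, &info); CHKERRQ(ierr);
  ierr = MatLUFactorNumeric(*F, Kbb_local, &info); CHKERRQ(ierr);
  /* *F can now be reused for as many solves as needed: */
  ierr = MatMatSolve(*F, Kba_dense, X_dense); CHKERRQ(ierr); /* X = inv(Kbb_local)*Kba */
  ierr = ISDestroy(&rperm); CHKERRQ(ierr);
  ierr = ISDestroy(&cperm); CHKERRQ(ierr);
  return 0;
}

/* Variant 2: KSP configured as a direct solve; the LU factorization is built
   at the first KSPSolve and reused by every following solve with this KSP. */
PetscErrorCode SolveWithKSP(Mat Kbb_local, Vec b, Vec x, KSP *ksp)
{
  PetscErrorCode ierr;
  PC             pc;

  ierr = KSPCreate(PETSC_COMM_SELF, ksp); CHKERRQ(ierr);
  ierr = KSPSetOperators(*ksp, Kbb_local, Kbb_local); CHKERRQ(ierr); /* older PETSc takes an extra MatStructure flag */
  ierr = KSPSetType(*ksp, KSPPREONLY); CHKERRQ(ierr);
  ierr = KSPGetPC(*ksp, &pc); CHKERRQ(ierr);
  ierr = PCSetType(pc, PCLU); CHKERRQ(ierr);
  ierr = KSPSolve(*ksp, b, x); CHKERRQ(ierr); /* factors Kbb_local on the first call only */
  return 0;
}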
You extract the piece of Kba that is relevant to each piece of Kbb. This will have only a few columns and is naturally stored columnwise (either as an array of column vectors or as MATDENSE).
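A sketch of that step, with hypothetical names: F is the LU factor of one Kbb block (as in the first variant above), nb its size, and nc the small number of Kba columns coupling to it.

#include <petscmat.h>

/* Gather the relevant columns of Kba into a small SeqDense matrix so they can
   serve directly as the multiple right-hand sides of MatMatSolve. */
PetscErrorCode SolveBlockColumns(Mat F, PetscInt nb, PetscInt nc,
                                 Mat *Kba_block, Mat *X_block)
{
  PetscErrorCode ierr;

  ierr = MatCreateSeqDense(PETSC_COMM_SELF, nb, nc, NULL, Kba_block); CHKERRQ(ierr);
  /* ... fill the nc relevant columns of Kba columnwise, e.g. with
     MatSetValues() or through MatDenseGetArray() ... */
  ierr = MatAssemblyBegin(*Kba_block, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  ierr = MatAssemblyEnd(*Kba_block, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);

  ierr = MatCreateSeqDense(PETSC_COMM_SELF, nb, nc, NULL, X_block); CHKERRQ(ierr);
  ierr = MatMatSolve(F, *Kba_block, *X_block); CHKERRQ(ierr); /* X_block = inv(Kbb_block)*Kba_block */
  return 0;
}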
After solving these blocks, you will have another tall skinny matrix (either as Vecs or MATDENSE) corresponding to each block of Kbb. Now you multiply by the appropriate blocks of Kab and put the (sparse, low-dimensional per block) result back into a global sparse matrix (for the coarse problem).
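A sketch of this last step, again with hypothetical names (how the result is combined with Kaa, including the sign, depends on how you form the coarse problem):

#include <petscmat.h>

/* Multiply the local Kab block with the solved block X = inv(Kbb_block)*Kba_block
   and add the small dense result into the global sparse matrix S for the coarse
   problem.  grow/gcol are the global row and column indices of this block. */
PetscErrorCode AddBlockToCoarse(Mat Kab_block, Mat X_block, Mat S,
                                PetscInt nrow, const PetscInt grow[],
                                PetscInt ncol, const PetscInt gcol[])
{
  PetscErrorCode ierr;
  Mat            P;   /* P = Kab_block * X_block, small and dense */
  PetscScalar   *a;
  PetscInt       j;

  ierr = MatMatMult(Kab_block, X_block, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &P); CHKERRQ(ierr);
  ierr = MatDenseGetArray(P, &a); CHKERRQ(ierr);  /* dense storage is column-major */
  for (j = 0; j < ncol; j++) {                    /* insert one column at a time */
    ierr = MatSetValues(S, nrow, grow, 1, &gcol[j], a + j*nrow, ADD_VALUES); CHKERRQ(ierr);
  }
  ierr = MatDenseRestoreArray(P, &a); CHKERRQ(ierr);
  ierr = MatDestroy(&P); CHKERRQ(ierr);
  /* after all blocks have been added: MatAssemblyBegin/End(S, MAT_FINAL_ASSEMBLY) */
  return 0;
}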