Thomas, we should move this discussion to petsc-dev. Are you subscribed to that list?

On Wed, Apr 20, 2011 at 13:55, Thomas Witkowski <thomas.witkowski@tu-dresden.de> wrote:
> There is one small thing in the implementation details of FETI-DP that I cannot figure out. Maybe some of you could help me understand it, though it is not directly related to PETSc. None of the publications says anything about how to distribute the Lagrange multipliers over the processors. Is there a good way to do it, or can it be done arbitrarily?
All their work that I have seen assumes a fully redundant set of Lagrange multipliers. In that context, each Lagrange multiplier only ever couples two subdomains together. Either process can then take ownership of that single Lagrange multiplier.
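For concreteness, one way to fix that ownership is to give each multiplier to the lower of the two ranks and let PETSc derive the global numbering from the local counts. This is only a sketch of one possible convention, not something from the papers; ninterface_dofs and neighbor_rank[] are made-up names for the number of constrained interface dofs on this process and the rank of the neighbor sharing each of them.

  PetscInt j, nlambda_local = 0;
  for (j = 0; j < ninterface_dofs; j++) {
    if (rank < neighbor_rank[j]) nlambda_local++;   /* lower rank owns the multiplier */
  }
  /* The local count fixes a standard row partition of the multiplier space
     (the vector called XLambdaGlobal further down). */
  VecCreateMPI(PETSC_COMM_WORLD, nlambda_local, PETSC_DETERMINE, &XLambdaGlobal);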
> And should the jump operators B^i be directly assembled, or should they be implemented in a matrix-free way?
Usually these constraints are sparse, so I think it is no problem to assume that they are always assembled.
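To make "assembled" concrete: with redundant multipliers, each row of B typically has just one +1 and one -1 picking out the two copies of a shared interface dof, so insertion is cheap. A rough sketch (row, dof_plus, and dof_minus are made-up names for the global multiplier index and the two dof indices in the K_BB numbering):

  PetscInt    cols[2] = {dof_plus, dof_minus};
  PetscScalar vals[2] = {1.0, -1.0};
  MatSetValues(B, 1, &row, 2, cols, vals, INSERT_VALUES);
  /* ...after all constraint rows have been inserted: */
  MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY);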
> I'm confused because in the work of Klawonn/Rheinbach it is claimed that the following operator can be solved in a purely local way:
>
>   F = \sum_{i=1}^{N} B^i (K_{BB}^i)^{-1} (B^i)^T

Did they use "F" for this thing? Usually F is the FETI-DP operator, which involves a Schur complement of the entire partially assembled operator in the dual space. In any case, this thing is not purely local, since the jump operators B^i need neighboring values, so it has the same communication as a MatMult.
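For reference, the dual operator that I have usually seen called F in the FETI-DP papers looks, modulo notation, something like

  F = B_B K_{BB}^{-1} B_B^T + B_B K_{BB}^{-1} K_{B\Pi} \widetilde{S}_{\Pi\Pi}^{-1} K_{\Pi B} K_{BB}^{-1} B_B^T,
  \qquad \widetilde{S}_{\Pi\Pi} = \widetilde{K}_{\Pi\Pi} - K_{\Pi B} K_{BB}^{-1} K_{B\Pi},

where the first term is exactly the subdomain-wise sum above and the second term carries the coupling through the assembled primal coarse problem; take the exact placement of tildes and signs with a grain of salt.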
> With B^i the jump operators and K_BB^i the discretization of the subdomains with the primal nodes.

I think you mean "with the primal nodes removed".
> From the notation it follows that EACH local solve takes the whole vector of Lagrange multipliers. But this is not suitable for a good parallel implementation. Any hint on this topic would be helpful for me to understand this problem.
I can't tell from their papers how B is stored. It would be natural to simply store B as a normal assembled matrix with a standard row partition of the Lagrange multipliers. Then you would apply the subdomain solve operator using

  MatMultTranspose(B,XLambdaGlobal,XGlobal);
  for (i=0; i<nlocalsub; i++) {
    Vec XSubdomain,YSubdomain;
    VecGetSubVector(XGlobal,sublocal[i],&XSubdomain);   /* no copy if subdomains are contiguous */
    VecGetSubVector(YGlobal,sublocal[i],&YSubdomain);   /* also no copy */
    KSPSolve(kspK_BB[i],XSubdomain,YSubdomain);         /* purely local solve, often KSPPREONLY and PCLU */
    VecRestoreSubVector(XGlobal,sublocal[i],&XSubdomain);
    VecRestoreSubVector(YGlobal,sublocal[i],&YSubdomain);
  }
  MatMult(B,YGlobal,YLambdaGlobal);

All the communication is in the MatMultTranspose and MatMult. The "Global" vectors here are global with respect to K_BB (interior and interface dofs, primal dofs removed). I don't think there is any need to ever store K_BB as a parallel matrix; it would be a separate matrix per subdomain (in the general case, subdomains could be parallel on subcommunicators).
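For what it's worth, the kspK_BB[i] above could be set up along these lines. This is just a sketch, assuming one sequential matrix per local subdomain; K_BB_sub[i] is a made-up name for that matrix, and I'm using the KSPSetOperators variant that takes a MatStructure flag.

  PetscInt i;
  PC       pc;
  for (i = 0; i < nlocalsub; i++) {
    KSPCreate(PETSC_COMM_SELF, &kspK_BB[i]);                   /* purely local solver */
    KSPSetOperators(kspK_BB[i], K_BB_sub[i], K_BB_sub[i], SAME_NONZERO_PATTERN);
    KSPSetType(kspK_BB[i], KSPPREONLY);                        /* just apply the preconditioner */
    KSPGetPC(kspK_BB[i], &pc);
    PCSetType(pc, PCLU);                                       /* direct subdomain factorization */
    KSPSetUp(kspK_BB[i]);                                      /* factor up front rather than in the first solve */
  }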
This code should handle nlocalsub subdomains owned by the local communicator, typically PETSC_COMM_SELF. The index sets (IS) in sublocal represent the global (space of K_BB) dofs; usually these are contiguous sets, so they can be represented very cheaply.
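For example, if subdomain i owns the contiguous block [rstart, rstart+nsub) of the global K_BB numbering (rstart and nsub are made-up names for the offset and size), the index set is just a stride:

  ISCreateStride(PETSC_COMM_SELF, nsub, rstart, 1, &sublocal[i]);   /* {rstart, ..., rstart+nsub-1} */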
Barry, would you do it differently?