[petsc-users] FETI-DP
Jed Brown
jed at 59A2.org
Wed Apr 20 07:43:46 CDT 2011
Thomas, we should move this discussion to petsc-dev. Are you subscribed to
that list?
On Wed, Apr 20, 2011 at 13:55, Thomas Witkowski <thomas.witkowski at tu-dresden.de> wrote:
> There is one small thing about the implementation details of FETI-DP that I
> cannot figure out. Maybe some of you could help me understand it, though
> it is not directly related to PETSc. None of the publications says anything
> about how to distribute the Lagrange multipliers over the processors. Is
> there any good way to do it, or can it be done arbitrarily?
>
All their work that I have seen assumes a fully redundant set of Lagrange
multipliers. In that context, each Lagrange multiplier only ever couples two
subdomains together, so either of those two processes can take ownership of
that single Lagrange multiplier.
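In practice you just need some deterministic tie-breaking rule. A minimal
sketch, assuming you already know for each candidate multiplier the ranks of
the two subdomains it couples (ncandidates, rankA, rankB, lambda_id, and owned
below are hypothetical names for that bookkeeping, not anything from the papers):

PetscMPIInt rank;
PetscInt    j,nowned = 0;
MPI_Comm_rank(PETSC_COMM_WORLD,&rank);
for (j=0; j<ncandidates; j++) {
  PetscMPIInt owner = PetscMin(rankA[j],rankB[j]); // lower of the two coupled ranks owns the multiplier
  if (owner == rank) owned[nowned++] = lambda_id[j]; // these become this process's rows of B
}

Any other consistent rule (higher rank, alternating, load-balanced) works just
as well, since the choice only affects the row partition of B.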
> And should be the jump operators B^i be directly assembled or should they
> be implemented in a matrix-free way?
>
Usually these constraints are sparse, so I think it is no problem to assume
that they are always assembled.
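For a fully redundant multiplier set, each row of B typically has just two
nonzeros, +1 and -1, matching the two subdomain copies of a shared interface
dof. A sketch of that assembly, with my own placeholder names (nlambda_local,
nB_local, lambda_start, dof_copy_a, dof_copy_b) for the interface bookkeeping:

Mat      B;
PetscInt j;
MatCreate(PETSC_COMM_WORLD,&B);
MatSetSizes(B,nlambda_local,nB_local,PETSC_DETERMINE,PETSC_DETERMINE);
MatSetType(B,MATAIJ);
MatSeqAIJSetPreallocation(B,2,NULL);        // at most two nonzeros per row
MatMPIAIJSetPreallocation(B,2,NULL,2,NULL);
for (j=0; j<nlambda_local; j++) {
  PetscInt    row     = lambda_start + j;              // global index of this multiplier
  PetscInt    cols[2] = {dof_copy_a[j],dof_copy_b[j]}; // global dofs of the two copies
  PetscScalar vals[2] = {1.0,-1.0};                    // enforce u_a - u_b = 0
  MatSetValues(B,1,&row,2,cols,vals,INSERT_VALUES);
}
MatAssemblyBegin(B,MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(B,MAT_FINAL_ASSEMBLY);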
> I'm confused because in the work of Klawonn/Rheinbach, it is claimed that
> the following operator can be solved in a purely local way:
>
> F = \sum_{i=1}^{N} B^i inv(K_BB^i) trans(B^i)
>
Did they use "F" for this thing? Usually F is the FETI-DP operator, which
involves a Schur complement of the entire partially assembled operator in
the dual space. In any case, this thing is not purely local: the jump
operators B^i need neighboring values, so it has the same communication as a
MatMult.
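For reference, the dual operator that usually goes by the name F in the
FETI-DP literature, written here in the thread's notation (check it against
Klawonn/Rheinbach's exact definitions), is

F = B_B inv(K_BB) trans(B_B)
    + B_B inv(K_BB) K_BPi inv(S_PiPi) K_PiB inv(K_BB) trans(B_B),

with the primal Schur complement of the partially assembled operator

S_PiPi = K_PiPi - K_PiB inv(K_BB) K_BPi    (Pi blocks assembled across subdomains).

The first term is the sum you wrote; the second couples all subdomains through
the assembled primal problem, which is where the Schur complement comes in.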
> With B^i the jump operators and K_BB^i the discretization of the
> subdomains with the primal nodes.
>
I think you mean "with the primal nodes removed".
> From the notation it follows that EACH local solve takes the whole vector
> of Lagrange multipliers. But this is not practical for a good parallel
> implementation. Any hint on this topic would help me understand
> this problem.
>
I can't tell from their papers how B is stored. It would be natural to
simply store B as a normal assembled matrix with a standard row partition of
the Lagrange multipliers. Then you would apply the subdomain solve operator
using
MatMultTranspose(B,XLambdaGlobal,XGlobal);
for (i=0; i<nlocalsub; i++) {
  Vec XSubdomain,YSubdomain;
  VecGetSubVector(XGlobal,sublocal[i],&XSubdomain); // no copy if subdomains are contiguous
  VecGetSubVector(YGlobal,sublocal[i],&YSubdomain); // also no copy
  KSPSolve(kspK_BB[i],XSubdomain,YSubdomain);       // purely local solve, often KSPPREONLY and PCLU
  VecRestoreSubVector(XGlobal,sublocal[i],&XSubdomain);
  VecRestoreSubVector(YGlobal,sublocal[i],&YSubdomain);
}
MatMult(B,YGlobal,YLambdaGlobal);
All the communication is in the MatMultTranspose and MatMult. The "Global"
vectors here are global with respect to K_BB (interior and interface dofs,
primal dofs removed). I don't think there is any need to ever store K_BB as a
parallel matrix; it would be a separate matrix per subdomain (in the general
case, subdomains could be parallel on subcommunicators).
This code should handle nlocalsub subdomains owned by the local
communicator, typically PETSC_COMM_SELF. The index sets (IS) in sublocal
represent the global (space of K_BB) dofs; usually these are contiguous sets,
so they can be represented very cheaply.
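A minimal setup sketch for the pieces that loop assumes, with my own
placeholder names: substart[i] and subsize[i] for each local subdomain's
offset and length in the K_BB space, and K_BB_seq[i] for that subdomain's
sequential matrix (recent PETSc releases use the two-argument KSPSetOperators
shown here; older ones take an extra MatStructure flag):

for (i=0; i<nlocalsub; i++) {
  PC pc;
  ISCreateStride(PETSC_COMM_SELF,subsize[i],substart[i],1,&sublocal[i]); // contiguous dofs: a stride IS costs almost nothing
  KSPCreate(PETSC_COMM_SELF,&kspK_BB[i]);
  KSPSetOperators(kspK_BB[i],K_BB_seq[i],K_BB_seq[i]);
  KSPSetType(kspK_BB[i],KSPPREONLY);   // one application of the preconditioner, i.e. a direct solve
  KSPGetPC(kspK_BB[i],&pc);
  PCSetType(pc,PCLU);                  // sparse LU factorization of K_BB^i
  KSPSetUp(kspK_BB[i]);                // factor up front instead of on the first solve
}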
Barry, would you do it differently?