[petsc-dev] [petsc-users] FETI-DP

Thomas Witkowski thomas.witkowski at tu-dresden.de
Wed Apr 20 10:05:59 CDT 2011


Jed Brown wrote:
>
>     I'm confused because in the work of Klawonn/Rheinbach, it is
>     claimed that the following operator can be solved in a purely local way:
>
>     F = \sum_{i=1}^{N} B^i inv(K_BB^i) trans(B^i)
>
>
> Did they use "F" for this thing? Usually F is the FETI-DP operator 
> which involves a Schur complement of the entire partially assembled 
> operator in the dual space. In any case, this thing is not purely 
> local since the jump operators B^i need neighboring values so it has 
> the same communication as a MatMult.
"F" is here the usual FETI-DP operator. I just omitted the coarse space 
part for more readability.
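
For reference, a rough sketch of the full operator with the coarse (primal) part
included, in block form loosely following the Klawonn/Rheinbach papers (the exact
symbols for the partially assembled primal blocks are an assumption here, with Pi
denoting the primal dofs):

    F = B_B inv(K_BB) trans(B_B)
        + B_B inv(K_BB) K_BPi inv(S_PiPi) K_PiB inv(K_BB) trans(B_B),

    with the primal Schur complement  S_PiPi = K_PiPi - K_PiB inv(K_BB) K_BPi.

The first term is the purely subdomain-wise sum quoted above; the second term is
the omitted coarse space part.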
>
>
>     With B^i the jump operators and K_BB^i the discretization of the
>     subdomains with the primal nodes.
>
>
> I think you mean "with the primal nodes removed".
Yes!
>  
>
>     From the notation it follows that EACH local solve takes the whole
>     vector of Lagrange multipliers. But that is not practical for a
>     good parallel implementation. Any hint on this topic would help
>     me understand this problem.
>
>
> I can't tell from their papers how B is stored. It would be natural to 
> simply store B as a normal assembled matrix with a standard row 
> partition of the Lagrange multipliers. Then you would apply the 
> subdomain solve operator using
>
> MatMultTranspose(B,XLambdaGlobal,XGlobal);
> for (i=0; i<nlocalsub; i++) {
>   Vec XSubdomain,YSubdomain;
>   VecGetSubVector(XGlobal,sublocal[i],&XSubdomain); // no copy if subdomains are contiguous
>   VecGetSubVector(YGlobal,sublocal[i],&YSubdomain); // also no copy
>   KSPSolve(kspK_BB[i],XSubdomain,YSubdomain); // purely local solve, often KSPPREONLY and PCLU
>   VecRestoreSubVector(XGlobal,sublocal[i],&XSubdomain);
>   VecRestoreSubVector(YGlobal,sublocal[i],&YSubdomain);
> }
> MatMult(B,YGlobal,YLambdaGlobal);
>
> All the communication is in the MatMultTranspose and MatMult. The
> "Global" vectors here are global with respect to K_BB (interior and
> interface dofs, primal dofs removed). I don't think there is ever a
> need to store K_BB as a parallel matrix; it would be a separate matrix
> per subdomain (in the general case, subdomains could be parallel on
> subcommunicators).
>
> This code should handle nlocalsub subdomains owned by the local
> communicator, typically PETSC_COMM_SELF. The index sets (IS) in
> sublocal represent the global (space of K_BB) dofs; usually these are
> contiguous sets, so they can be represented very cheaply.
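
For concreteness, a rough, self-contained sketch of how the sublocal index sets
and the kspK_BB solvers used in the loop above might be created (the names
nsubdofs, offset, and K_BBsub, and the assumption of one contiguous block of
K_BB dofs per subdomain, are hypothetical; the two-argument KSPSetOperators is
the current PETSc calling sequence):

#include <petscksp.h>

/* Hypothetical setup of the per-subdomain index sets and local solvers.
   nsubdofs[i] and offset[i] describe the (assumed contiguous) block of
   global K_BB dofs of local subdomain i; K_BBsub[i] is its sequential
   subdomain matrix, assumed to be assembled already. */
static PetscErrorCode SetupSubdomainSolvers(PetscInt nlocalsub,
                                            const PetscInt nsubdofs[],
                                            const PetscInt offset[],
                                            Mat K_BBsub[],
                                            IS sublocal[],
                                            KSP kspK_BB[])
{
  PetscErrorCode ierr;
  PetscInt       i;

  for (i = 0; i < nlocalsub; i++) {
    PC pc;
    /* contiguous range of global K_BB dofs belonging to subdomain i */
    ierr = ISCreateStride(PETSC_COMM_SELF,nsubdofs[i],offset[i],1,&sublocal[i]);CHKERRQ(ierr);
    /* purely local solver: a single LU factorization per subdomain */
    ierr = KSPCreate(PETSC_COMM_SELF,&kspK_BB[i]);CHKERRQ(ierr);
    ierr = KSPSetOperators(kspK_BB[i],K_BBsub[i],K_BBsub[i]);CHKERRQ(ierr);
    ierr = KSPSetType(kspK_BB[i],KSPPREONLY);CHKERRQ(ierr);
    ierr = KSPGetPC(kspK_BB[i],&pc);CHKERRQ(ierr);
    ierr = PCSetType(pc,PCLU);CHKERRQ(ierr);
    ierr = KSPSetUp(kspK_BB[i]);CHKERRQ(ierr);
  }
  return 0;
}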
Following the mathematical representation, wouldn't it make more sense to
store B^T instead of B, because the local matrices B_i^T are purely local and
would have no off-diagonal elements? Then the very first multiplication,
MatMult(B,XLambdaGlobal,XGlobal), could be done in a local way. But I'm
not sure what would then happen with the very last matrix multiplication,
MatMultTranspose(B,YGlobal,YLambdaGlobal). Would it potentially result
in an all-to-all communication?
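
For concreteness, a rough sketch of that alternative, reusing the variable names
from the code above (Btrans is a hypothetical name introduced here, formed once
with MatTranspose):

Mat Btrans;
MatTranspose(B,MAT_INITIAL_MATRIX,&Btrans); // store trans(B) explicitly

MatMult(Btrans,XLambdaGlobal,XGlobal);          // replaces MatMultTranspose(B,...)
// ... the purely local subdomain solves from the loop above ...
MatMultTranspose(Btrans,YGlobal,YLambdaGlobal); // replaces MatMult(B,...)

As far as I understand the MPIAIJ implementation, MatMult and MatMultTranspose use
the same scatter in opposite directions, so neither ordering should turn into an
all-to-all as long as B only couples Lagrange multipliers of neighboring subdomains.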


