[petsc-users] FETI-DP
Thomas Witkowski
thomas.witkowski at tu-dresden.de
Wed Apr 20 06:55:25 CDT 2011
There is one small thing about the implementation details of FETI-DP that I
cannot figure out. Maybe some of you could help me understand it, even
though it is not directly related to PETSc. None of the publications says
anything about how to distribute the Lagrange multipliers over the
processors. Is there a good way to do it, or can it be done arbitrarily?
And should the jump operators B^i be assembled explicitly, or should
they be implemented in a matrix-free way? I am confused because in the
work of Klawonn/Rheinbach it is claimed that the following operator can
be applied in a purely local way:
F = \sum_{i=1}^{N} B^i inv(K_BB^i) trans(B^i)
with B^i the jump operators and K_BB^i the subdomain stiffness matrices
restricted to the non-primal variables. From the notation it looks as if
EACH local solve takes the whole vector of Lagrange multipliers, but that
would not be practical in a good parallel implementation. Any hint on this
topic would help me understand the problem.
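
To make the question concrete, here is a rough sketch of how I imagine a
matrix-free application of F could look in PETSc. This is not working
code: all names (FetiCtx, FetiApplyF, the scatter setup) are made up, the
primal coarse problem is left out completely, and I simply assume that the
VecScatter gathers only the multipliers that couple to the local
subdomain, which is exactly the point I am unsure about.

/* Sketch only: matrix-free application of
 *   F = \sum_{i=1}^{N} B^i inv(K_BB^i) trans(B^i)
 * The context (all names hypothetical) holds, per process, the local jump
 * operator B^i, a KSP set up with K_BB^i, and a VecScatter that maps the
 * global multiplier vector to the multipliers coupling to this subdomain.
 * The primal coarse correction is omitted. */
#include <petscksp.h>

typedef struct {
  Mat        B_local;      /* local jump operator B^i                  */
  KSP        ksp_KBB;      /* local solver for K_BB^i                  */
  VecScatter lambda_scat;  /* global lambda -> locally coupled lambda  */
  Vec        lambda_loc, rhs_loc, sol_loc, jump_loc;
} FetiCtx;

static PetscErrorCode FetiApplyF(Mat F, Vec lambda, Vec y)
{
  FetiCtx       *ctx;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatShellGetContext(F, &ctx);CHKERRQ(ierr);
  /* Gather only the multipliers that touch this subdomain. */
  ierr = VecScatterBegin(ctx->lambda_scat, lambda, ctx->lambda_loc,
                         INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(ctx->lambda_scat, lambda, ctx->lambda_loc,
                       INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  /* rhs_loc = trans(B^i) lambda_loc */
  ierr = MatMultTranspose(ctx->B_local, ctx->lambda_loc, ctx->rhs_loc);CHKERRQ(ierr);
  /* sol_loc = inv(K_BB^i) rhs_loc, a purely local solve */
  ierr = KSPSolve(ctx->ksp_KBB, ctx->rhs_loc, ctx->sol_loc);CHKERRQ(ierr);
  /* jump_loc = B^i sol_loc */
  ierr = MatMult(ctx->B_local, ctx->sol_loc, ctx->jump_loc);CHKERRQ(ierr);
  /* Add the local contribution back into the global multiplier vector. */
  ierr = VecZeroEntries(y);CHKERRQ(ierr);
  ierr = VecScatterBegin(ctx->lambda_scat, ctx->jump_loc, y,
                         ADD_VALUES, SCATTER_REVERSE);CHKERRQ(ierr);
  ierr = VecScatterEnd(ctx->lambda_scat, ctx->jump_loc, y,
                       ADD_VALUES, SCATTER_REVERSE);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/* The shell matrix would then be created roughly like
 *   MatCreateShell(comm, nloc, nloc, N, N, &ctx, &F);
 *   MatShellSetOperation(F, MATOP_MULT, (void (*)(void)) FetiApplyF);
 * and handed to a Krylov method on the multiplier space. */
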
Thomas
Jed Brown wrote:
> On Thu, Apr 14, 2011 at 15:18, Thomas Witkowski
> <thomas.witkowski at tu-dresden.de> wrote:
>
> Has any of you implemented the FETI-DP method in PETSc? I am
> thinking about doing this for my FEM code, but first I want to
> evaluate the implementation effort.
>
>
> There are a few implementations out there. Probably the most notable is
> Axel Klawonn and Oliver Rheinbach's implementation, which has been
> scaled up to very large problems and computers. My understanding is
> that Xuemin Tu did some work on BDDC (equivalent to FETI-DP) using
> PETSc. I am not aware of anyone releasing a working FETI-DP
> implementation using PETSc, but of course you're welcome to ask these
> people if they would share code with you.
>
>
> What sort of problems do you want it for (physics and mesh)? How are
> you currently assembling your systems? A fully general FETI-DP
> implementation is a lot of work. For a specific class of problems and
> variant of FETI-DP, it will still take some effort, but should not be
> too much.
>
> There was a start to a FETI-DP implementation in PETSc quite a while
> ago, but it died due to bitrot and differing ideas about how we would
> like to implement it. You can get that code from Mercurial:
>
> http://petsc.cs.iit.edu/petsc/petsc-dev/rev/021f379b5eea
>
>
> The fundamental ingredient of these methods is a "partially assembled"
> matrix. For a library implementation, the challenges are
>
> 1. How does the user provide the information necessary to decide what
> the coarse space looks like? (It's different for scalar problems,
> compressible elasticity, and Stokes, and tricky to do with no
> geometric information from the user.) The coefficient structure in the
> problem matters a lot when deciding which coarse basis functions to
> use, see http://dx.doi.org/10.1016/j.cma.2006.03.023
> (one way a user might attach such information is sketched below the
> quoted message).
>
> 2. How do you handle primal basis functions with large support (e.g.
> rigid body modes of a face)? Two choices here:
> http://www.cs.nyu.edu/cs/faculty/widlund/FETI-DP-elasticity_TR.pdf .
>
> 3. How do you make it easy for the user to provide the required
> matrix? Ideally, the user would just use plain MatSetValuesLocal() and
> run with -mat_type partially-assembled -pc_type fetidp instead of,
> say, -mat_type baij -pc_type asm. It should work for multiple
> subdomains per process and for subdomains spanning multiple processes.
> This can now be done by implementing MatGetLocalSubMatrix(). The local
> blocks of the partially assembled system should be able to use
> different formats (e.g. SBAIJ). (An assembly sketch written in this
> local-index style follows below the quoted message.)
>
> 4. How do you handle more than two levels? This is very important if
> you want to use more than about 1000 subdomains in 3D, because the
> coarse problem
> just gets too big (unless the coarse problem happens to be
> well-conditioned enough that you can use algebraic multigrid).
>
>
> I've wanted to implement FETI-DP in PETSc for almost two years, but
> it's never been a high priority. I think I now know how to get enough
> flexibility to make it worthwhile to me. I'd be happy to discuss
> implementation issues with you.
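
Regarding point 1 in the message above: one conceivable way for the user
to supply the missing geometric/elasticity information would be to attach
nodal coordinates, or a near-null space built from them (e.g. rigid body
modes), to the matrix. The sketch below is only an illustration of that
idea: it uses MatNullSpaceCreateRigidBody() and MatSetNearNullSpace(), the
latter of which is only available in newer PETSc releases, and A and
coords are assumed to exist already.

#include <petscmat.h>

/* Sketch: attach rigid body modes built from nodal coordinates as a
 * near-null space, from which a FETI-DP/BDDC setup could read the
 * candidate coarse basis functions for elasticity. */
static PetscErrorCode AttachRigidBodyModes(Mat A, Vec coords)
{
  MatNullSpace   nearnull;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatNullSpaceCreateRigidBody(coords, &nearnull);CHKERRQ(ierr);
  ierr = MatSetNearNullSpace(A, nearnull);CHKERRQ(ierr);
  ierr = MatNullSpaceDestroy(&nearnull);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}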
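
Regarding point 3: the appeal of MatSetValuesLocal() seems to be that the
element loop never needs to know whether the matrix is fully or only
partially assembled. Below is a rough sketch of such an assembly loop. The
element data, the local-to-global numbering, and the fixed 4 dofs per
element are placeholders, and since the partially-assembled matrix type
mentioned above does not exist, MATAIJ stands in for it; preallocation is
skipped for brevity.

#include <petscmat.h>

/* Sketch: an element assembly loop written only in terms of local dof
 * indices.  The same loop could feed a fully assembled AIJ matrix or,
 * eventually, a partially assembled format suitable for FETI-DP; only
 * MatSetType() and the local-to-global mapping would have to change. */
static PetscErrorCode AssembleLocal(MPI_Comm comm, PetscInt nlocal,
                                    const PetscInt l2g[],         /* local-to-global dof map  */
                                    PetscInt nelem,
                                    const PetscInt elemdofs[],    /* 4 local dofs per element */
                                    const PetscScalar elemmats[], /* 4x4 element matrices     */
                                    Mat *Aout)
{
  Mat                    A;
  ISLocalToGlobalMapping map;
  PetscInt               e;
  PetscErrorCode         ierr;

  PetscFunctionBeginUser;
  ierr = MatCreate(comm, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, nlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);  /* later: a partially assembled type */
  ierr = MatSetUp(A);CHKERRQ(ierr);
  ierr = MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE);CHKERRQ(ierr);

  /* Tell the matrix how local dof numbers map to global ones. */
  ierr = ISLocalToGlobalMappingCreate(comm, 1, nlocal, l2g, PETSC_COPY_VALUES, &map);CHKERRQ(ierr);
  ierr = MatSetLocalToGlobalMapping(A, map, map);CHKERRQ(ierr);
  ierr = ISLocalToGlobalMappingDestroy(&map);CHKERRQ(ierr);

  /* Element loop: only local indices appear here. */
  for (e = 0; e < nelem; e++) {
    ierr = MatSetValuesLocal(A, 4, elemdofs + 4*e, 4, elemdofs + 4*e,
                             elemmats + 16*e, ADD_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  *Aout = A;
  PetscFunctionReturn(0);
}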