[petsc-users] FETI-DP

Thomas Witkowski thomas.witkowski at tu-dresden.de
Fri Apr 15 05:40:37 CDT 2011


Jed Brown wrote:
> On Thu, Apr 14, 2011 at 15:18, Thomas Witkowski 
> <thomas.witkowski at tu-dresden.de 
> <mailto:thomas.witkowski at tu-dresden.de>> wrote:
>
>     Has anybody of you implemented the FETI-DP method in PETSc? I
>     think about to do this for my FEM code, but first I want to
>     evaluate the effort of the implementation.
>
>
> There are a few implementations out there. Probably most notable is 
> Axel Klawonn and Oliver Rheinbach's implementation which has been 
> scaled up to very large problems and computers. My understanding is 
> that Xuemin Tu did some work on BDDC (equivalent to FETI-DP) using 
> PETSc. I am not aware of anyone releasing a working FETI-DP 
> implementation using PETSc, but of course you're welcome to ask these 
> people if they would share code with you.
I know the work of Klawonn and Rheinbach, but I was not aware that they 
implemented their algorithms with PETSc.
>
> What sort of problems do you want it for (physics and mesh)? How are 
> you currently assembling your systems? A fully general FETI-DP 
> implementation is a lot of work. For a specific class of problems and 
> variant of FETI-DP, it will still take some effort, but should not be 
> too much.
My work is on a very general finite element toolbox (AMDiS) that solves 
a broad class of PDEs. The code is already parallelized, i.e., we have 
truly distributed 2D (triangle) and 3D (tetrahedral) adaptive meshes, 
mesh partitioning for load balancing with ParMETIS and Zoltan, and a 
PETSc interface. For PETSc, there are two different modes at the moment: 
either a so-called global matrix solver or a Schur complement approach. 
The first one assembles a single global parallel matrix, which we mostly 
use with MUMPS or SuperLU on small and mid-size problems. I would like 
to implement a broad class of domain decomposition approaches in AMDiS, 
so that the user can choose the method that is most appropriate for the 
problem at hand.
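Just to illustrate what the global matrix mode amounts to on the PETSc 
side: it is essentially one parallel matrix plus a KSP whose solver is 
picked at run time. A simplified sketch (not the actual AMDiS code, and 
call signatures may differ slightly between PETSc versions):

  #include <petscksp.h>

  /* Simplified sketch: solve the globally assembled system A x = b with a
     direct solver chosen at run time, e.g. with the options
       -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps
     (or superlu_dist instead of mumps). */
  PetscErrorCode solve_global(Mat A, Vec b, Vec x)
  {
    KSP            ksp;
    PetscErrorCode ierr;

    ierr = KSPCreate(PETSC_COMM_WORLD, &ksp); CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN); CHKERRQ(ierr);
    ierr = KSPSetFromOptions(ksp); CHKERRQ(ierr); /* reads the options above */
    ierr = KSPSolve(ksp, b, x); CHKERRQ(ierr);
    ierr = KSPDestroy(&ksp); CHKERRQ(ierr);
    return 0;
  }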
>
> There was a start to a FETI-DP implementation in PETSc quite a while 
> ago, but it died due to bitrot and differing ideas of how we would 
> like to implement it. You can get that code from Mercurial:
>
>  http://petsc.cs.iit.edu/petsc/petsc-dev/rev/021f379b5eea
Okay, good to know! I will have a look at it; maybe I can extract some 
ideas for my own implementation.
>
>
> The fundamental ingredient of these methods is a "partially assembled" 
> matrix. For a library implementation, the challenges are
What do you mean by "partially assembled"? Do you mean that only a 
subset of the subdomain unknowns must be assembled into a parallel 
distributed matrix, while most of them can be kept in local matrices?
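To make my question concrete, my naive picture of such an operator would 
be something like the following, where only the small primal block is a 
truly distributed matrix and everything else stays per subdomain (a 
conceptual sketch with made-up names and sizes, certainly not how one 
would actually implement it):

  #include <petscmat.h>

  /* Naive picture of a "partially assembled" operator: per subdomain, the
     couplings among the non-primal (interior/dual) unknowns and their
     coupling to the primal unknowns stay in sequential matrices; only the
     small primal block is assembled into one parallel matrix.  All names
     and sizes are placeholders; preallocation is omitted. */
  typedef struct {
    Mat A_rr; /* local:  non-primal x non-primal, one per subdomain     */
    Mat A_rp; /* local:  non-primal x primal (global primal numbering)  */
    Mat A_pp; /* global: primal x primal, assembled over all subdomains */
  } PartiallyAssembled;

  PetscErrorCode create_partially_assembled(MPI_Comm comm, PetscInt n_r,
                                            PetscInt n_p_loc, PetscInt n_p_glob,
                                            PartiallyAssembled *P)
  {
    PetscErrorCode ierr;

    ierr = MatCreateSeqAIJ(PETSC_COMM_SELF, n_r, n_r, PETSC_DEFAULT, NULL,
                           &P->A_rr); CHKERRQ(ierr);
    ierr = MatCreateSeqAIJ(PETSC_COMM_SELF, n_r, n_p_glob, PETSC_DEFAULT, NULL,
                           &P->A_rp); CHKERRQ(ierr);
    /* the only truly distributed piece */
    ierr = MatCreateMPIAIJ(comm, n_p_loc, n_p_loc, n_p_glob, n_p_glob,
                           PETSC_DEFAULT, NULL, PETSC_DEFAULT, NULL,
                           &P->A_pp); CHKERRQ(ierr);
    return 0;
  }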
>
> 1. How does the user provide the information necessary to decide what 
> the coarse space looks like? (It's different for scalar problems, 
> compressible elasticity, and Stokes, and tricky to do with no 
> geometric information from the user.) The coefficient structure in the 
> problem matters a lot when deciding which coarse basis functions to 
> use, see http://dx.doi.org/10.1016/j.cma.2006.03.023
Do you think this is really possible without providing at least some 
geometric information? In my code, at least, I can provide arbitrary 
geometric information about the nodes to other libraries at very low 
computational cost.
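For example, handing over the coordinates of the locally owned unknowns 
would cost me almost nothing; something along these lines, where I am 
only guessing that an interface like PCSetCoordinates() would be the 
right hook for a FETI-DP preconditioner:

  #include <petscpc.h>

  /* Sketch: pass the coordinates of the locally owned unknowns to the
     preconditioner.  That PCSetCoordinates() (dimension, number of local
     points, interlaced coordinate array) would be the right hook for
     FETI-DP is my assumption, not an existing feature. */
  PetscErrorCode give_coordinates(PC pc, PetscInt dim, PetscInt nloc,
                                  PetscReal *coords /* nloc*dim values */)
  {
    PetscErrorCode ierr;

    ierr = PCSetCoordinates(pc, dim, nloc, coords); CHKERRQ(ierr);
    return 0;
  }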
>
> 2. How do you handle primal basis functions with large support (e.g. 
> rigid body modes of a face)? Two choices here: 
> http://www.cs.nyu.edu/cs/faculty/widlund/FETI-DP-elasticity_TR.pdf .
>
> 3. How do you make it easy for the user to provide the required 
> matrix? Ideally, the user would just use plain MatSetValuesLocal() and 
> run with -mat_type partially-assembled -pc_type fetidp instead of, say, 
> -mat_type baij -pc_type asm. It should work for multiple subdomains 
> per process and subdomains spanning multiple processes. This can now 
> be done by implementing MatGetLocalSubMatrix(). The local blocks of 
> the partially assembled system should be able to use different formats 
> (e.g. SBAIJ).
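If I understand this proposal correctly, the application would keep its 
element loop unchanged and only address the matrix through a 
local-to-global mapping, so that -mat_type decides what actually gets 
assembled. A rough sketch of how I read it (simplified to scalar P1 
triangles; exact signatures may differ between PETSc versions):

  #include <petscmat.h>

  /* Rough sketch of the assembly as I understand point 3: the code only
     calls MatSetValuesLocal() against a local-to-global mapping, so whether
     the Mat is AIJ, BAIJ or some "partially assembled" type is decided by
     -mat_type at run time.  Element data are placeholders (3 dofs and a
     3x3 element matrix per triangle). */
  PetscErrorCode assemble(Mat A, ISLocalToGlobalMapping map, PetscInt nelem,
                          const PetscInt elem_dofs[], const PetscScalar elem_mats[])
  {
    PetscErrorCode ierr;
    PetscInt       e;

    ierr = MatSetLocalToGlobalMapping(A, map, map); CHKERRQ(ierr);
    for (e = 0; e < nelem; e++) {
      ierr = MatSetValuesLocal(A, 3, &elem_dofs[3*e], 3, &elem_dofs[3*e],
                               &elem_mats[9*e], ADD_VALUES); CHKERRQ(ierr);
    }
    ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
    return 0;
  }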
I like this idea, but it is somewhat the same situation as with 
PCFieldSplit: to make use of it, I have to provide at least the splits 
before I can run the preconditioner (see the sketch below for how I do 
this today). It will be the same for FETI-DP: somehow the user will need 
to specify the coarse space. Doing this in a general way is a very 
challenging task, from my point of view.
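For comparison, this is roughly how the splits reach PCFieldSplit in my 
code today; I would expect a FETI-DP preconditioner to need an analogous 
hint, e.g. an index set marking the primal (coarse) unknowns. The 
FETI-DP call below is made up purely to illustrate the point and does 
not exist:

  #include <petscpc.h>

  /* Roughly how I hand the splits to PCFieldSplit today (the exact
     signature of PCFieldSplitSetIS() depends on the PETSc version).
     A FETI-DP PC would presumably need a comparable hint; the
     commented-out call is purely hypothetical. */
  PetscErrorCode set_splits(PC pc, IS is_velocity, IS is_pressure, IS is_primal)
  {
    PetscErrorCode ierr;

    ierr = PCFieldSplitSetIS(pc, "velocity", is_velocity); CHKERRQ(ierr);
    ierr = PCFieldSplitSetIS(pc, "pressure", is_pressure); CHKERRQ(ierr);
    /* hypothetical analogue for a FETI-DP coarse space, not an existing call:
       ierr = PCFETIDPSetPrimalIS(pc, is_primal); CHKERRQ(ierr); */
    (void)is_primal; /* only used by the hypothetical call above */
    return 0;
  }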
>
> 4. How do you handle more than two levels? This is very important to 
> use more than about 1000 subdomains in 3D because the coarse problem 
> just gets too big (unless the coarse problem happens to be 
> well-conditioned enough that you can use algebraic multigrid).
Good question. Eventually, my code should definitely run on more than 
1000 nodes in 3D. We have some PDEs that we would like to run on 
O(10^5) nodes (the phase-field crystal equation, which is a sixth-order 
nonlinear parabolic PDE).
>
> I've wanted to implement FETI-DP in PETSc for almost two years, but 
> it's never been a high priority. I think I now know how to get enough 
> flexibility to make it worthwhile to me. I'd be happy to discuss 
> implementation issues with you.
Implementing FETI-DP in PETSc in a general way is very challenging, but 
it would be a feature of interest for most people who want to run their 
codes on really large numbers of nodes. If there are already people who 
have implemented it in PETSc, it would be best to contact them to 
discuss these things.

Thomas


