[petsc-dev] Using PCFieldSplitSetIS

Thomas Witkowski thomas.witkowski at tu-dresden.de
Wed Mar 16 01:37:38 CDT 2011


Jed Brown wrote:
> On Mon, Mar 14, 2011 at 12:32, Thomas Witkowski
> <thomas.witkowski at tu-dresden.de> wrote:
>
>     Should I define blocks or splits for the subdomains and the
>     interior nodes? And what is the best way to force PETSc to make
>     some LU factorization on each subdomain and to store it (it is
>     needed to create the reduced Schur system, to define the action of
>     the Schur complement operator and to solve the subdomain unknowns
>     in the last step) and to use it later?
>
>
> Okay, define two splits. The first consists of all the interior nodes, 
> the second has all the interface nodes. Now use -pc_fieldsplit_type 
> schur -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type bjacobi 
> -fieldsplit_0_sub_pc_type lu. Remember to look at -ksp_view and -help 
> for options. You have a choice of how to precondition the Schur 
> complement; by default it just uses the interface matrix itself (which 
> is usually nearly diagonal).
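(For reference, this is roughly the setup I now use in my code,
following the options above -- error checking is omitted, is_interior
and is_interface are index sets built from my own mesh numbering, so
those names are mine, and the exact call signatures may differ slightly
between PETSc versions:)

  #include <petscksp.h>

  /* Solve A x = b with a two-split field split:
     split 0 = interior nodes, split 1 = interface nodes */
  PetscErrorCode SolveWithFieldSplit(Mat A, IS is_interior, IS is_interface,
                                     Vec b, Vec x)
  {
    KSP ksp;
    PC  pc;

    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN);
    KSPGetPC(ksp, &pc);
    PCSetType(pc, PCFIELDSPLIT);
    PCFieldSplitSetIS(pc, is_interior);   /* split 0: interior nodes  */
    PCFieldSplitSetIS(pc, is_interface);  /* split 1: interface nodes */
    /* (newer versions name the splits: PCFieldSplitSetIS(pc, "0", is)) */
    KSPSetFromOptions(ksp);               /* picks up the options below */
    KSPSolve(ksp, b, x);
    return 0;
  }

run with

  -pc_fieldsplit_type schur -fieldsplit_0_ksp_type preonly \
  -fieldsplit_0_pc_type bjacobi -fieldsplit_0_sub_pc_type lu -ksp_view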
Thanks for the explanations! With the setup above, it works fine in my 
code, but I have two questions about it; maybe you can help me with them:
- First, is the LU factorization of block A_00 computed only once?
- I have run the code with -fieldsplit_1_ksp_monitor to get more 
information about the internal solves. I expected output from one 
iterative solver (the one for the Schur complement system), but I got 
three, each of which needs around 20 iterations for my example. Could 
you explain what is actually being solved there? When I look at the 
manual section on the implementation of -pc_fieldsplit_type schur, 
there are three KSP objects, but two of them solve with A_00, which 
should take only one iteration.
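(In case it matters, this is how I am trying to look at the inner
solvers from the code; I am assuming that PCFieldSplitGetSubKSP, called
after the preconditioner has been set up, is the right way to do this:)

  PetscInt i, nsplits;
  KSP      *subksp;

  /* one KSP per split; the array has to be freed by the caller */
  PCFieldSplitGetSubKSP(pc, &nsplits, &subksp);
  for (i = 0; i < nsplits; i++)
    KSPView(subksp[i], PETSC_VIEWER_STDOUT_WORLD);
  PetscFree(subksp);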

Thomas
>
> That is, performing block Jacobi with direct subdomain solves on the 
> (parallel) interior matrix will be the same as a direct solve with 
> this matrix because all the subdomains are actually uncoupled.
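(If I understand this correctly: with one subdomain per process, A_00
is block diagonal, A_00 = diag(A_1, ..., A_p), so A_00^{-1} =
diag(A_1^{-1}, ..., A_p^{-1}), which is exactly what block Jacobi with
exact LU block solves applies, i.e. preonly/bjacobi/lu amounts to an
exact solve with A_00.)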
>
> My point about exposing less concurrency had to do with always needing 
> to solve problems with the parallel interior-node matrix which could 
> actually be stored separately since the systems are not truly coupled. 
> This is most relevant with multiple subdomains per process or if you 
> are forming an explicit Schur complement (to build a coarse level 
> operator, such as with FETI-DP/BDDC).
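(And to connect this with the part of my original question about
defining the action of the Schur complement operator by hand: I assume
something along these lines would work once the four sub-blocks have
been extracted -- the block names A00, A01, A10, A11 are mine, and I
have not checked this against the interface of the release I am using:)

  Mat S;
  KSP ksp00;
  PC  pc00;

  /* S applies A11 - A10 * inv(A00) * A01; the inverse is realized by an
     inner KSP solve with A00, which can again be block Jacobi with LU
     on each (uncoupled) interior block */
  MatCreateSchurComplement(A00, A00, A01, A10, A11, &S);
  MatSchurComplementGetKSP(S, &ksp00);
  KSPSetType(ksp00, KSPPREONLY);
  KSPGetPC(ksp00, &pc00);
  PCSetType(pc00, PCBJACOBI);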



