On Mon, Mar 14, 2011 at 12:32, Thomas Witkowski <thomas.witkowski@tu-dresden.de> wrote:
> Should I define blocks or splits for the subdomains and the interior nodes? And what is the best way to force PETSc to make an LU factorization on each subdomain and to store it (it is needed to create the reduced Schur system, to define the action of the Schur complement operator, and to solve for the subdomain unknowns in the last step) and to use it later?
Okay, define two splits. The first consists of all the interior nodes, the second has all the interface nodes. Now use -pc_fieldsplit_type schur -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type bjacobi -fieldsplit_0_sub_pc_type lu. Remember to look at -ksp_view and -help for options. You have a choice of how to precondition the Schur complement; by default it just uses the interface matrix itself (which is usually nearly diagonal).
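For concreteness, here is a minimal sketch of how the two splits could be defined from code. The index sets is_interior and is_interface, and the routine name setup_schur, are placeholders for whatever your application actually builds, so read this as an illustration rather than a drop-in implementation:

#include <petscksp.h>

/* Sketch: attach two index-set splits ("0" = interior, "1" = interface)
   to the fieldsplit preconditioner of a KSP. The index sets is_interior
   and is_interface are assumed to be constructed elsewhere. */
PetscErrorCode setup_schur(KSP ksp, IS is_interior, IS is_interface)
{
  PC             pc;
  PetscErrorCode ierr;

  ierr = KSPGetPC(ksp, &pc); CHKERRQ(ierr);
  ierr = PCSetType(pc, PCFIELDSPLIT); CHKERRQ(ierr);
  ierr = PCFieldSplitSetIS(pc, "0", is_interior); CHKERRQ(ierr);   /* split 0: interior nodes */
  ierr = PCFieldSplitSetIS(pc, "1", is_interface); CHKERRQ(ierr);  /* split 1: interface nodes */
  ierr = KSPSetFromOptions(ksp); CHKERRQ(ierr);  /* pick up -pc_fieldsplit_* / -fieldsplit_* options */
  return 0;
}

Then run with

  -pc_fieldsplit_type schur -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type bjacobi -fieldsplit_0_sub_pc_type lu -ksp_view

and check the -ksp_view output to confirm the solver is put together the way you expect.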
That is, performing block Jacobi with direct subdomain solves on the (parallel) interior matrix will be the same as a direct solve with this matrix, because all the subdomains are actually uncoupled.
My point about exposing less concurrency had to do with always needing to solve problems with the parallel interior-node matrix, which could actually be stored separately since the systems are not truly coupled. This is most relevant with multiple subdomains per process, or if you are forming an explicit Schur complement (to build a coarse-level operator, such as with FETI-DP/BDDC).
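If you do end up wanting an explicit handle on the Schur complement (e.g. to feed a coarse operator), a rough sketch along these lines could work; the routine name build_schur is made up, and the blocks A00 (interior), A01, A10, A11 (interface) are assumed to have been extracted by the application already:

#include <petscksp.h>

/* Sketch: wrap S = A11 - A10 * inv(A00) * A01 as an (implicit) operator and
   configure the inner A00 solve as block Jacobi with per-block LU, which is
   an exact solve here because the interior blocks are uncoupled. */
PetscErrorCode build_schur(Mat A00, Mat A01, Mat A10, Mat A11, Mat *S)
{
  KSP            inner;
  PC             innerpc;
  PetscErrorCode ierr;

  ierr = MatCreateSchurComplement(A00, A00, A01, A10, A11, S); CHKERRQ(ierr);
  ierr = MatSchurComplementGetKSP(*S, &inner); CHKERRQ(ierr);
  ierr = KSPSetType(inner, KSPPREONLY); CHKERRQ(ierr);
  ierr = KSPGetPC(inner, &innerpc); CHKERRQ(ierr);
  ierr = PCSetType(innerpc, PCBJACOBI); CHKERRQ(ierr);
  ierr = KSPSetFromOptions(inner); CHKERRQ(ierr);  /* per-block LU can then be selected through the inner KSP's options */
  return 0;
}

Applying *S then only involves matrix-vector products plus the (embarrassingly parallel) interior solves; whether it is worth materializing it as an explicit matrix depends on how many interface unknowns you have.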