Schur complement system

Lisandro Dalcin dalcinl at gmail.com
Fri Mar 14 09:51:39 CDT 2008


However, I have to be sincere with you... I believe that way of
iterating over the Schur complement and preconditioning it by solving
a subsidiary problem with a 'strip' of nodes around the interface is
not actually competitive with ASM where the local problems are solved
with LU factorization.
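
(For reference: with the unknowns split into interior (I) and
interface (B) blocks, the global system is

    [ A_II  A_IB ] [ x_I ]   [ f_I ]
    [ A_BI  A_BB ] [ x_B ] = [ f_B ]

which reduces to the interface system S x_B = f_B - A_BI inv(A_II) f_I
with

    S = A_BB - A_BI inv(A_II) A_IB,

and the 'strip' preconditioner approximates inv(S) by a subsidiary
problem restricted to a few layers of nodes around the interface.)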

In fact, I now believe that ASM with LU is equivalent to my
Schur-based iteration method, and that my method is actually slower in
practice (and far more contrived to implement). There is a theorem
discussed in Saad's book on iterative methods (I cannot remember the
original author cited there) that led me to this conclusion; as I
recall, it says roughly that iterating on the Schur complement with
exact subdomain solves yields the same interface iterates as iterating
on the full system preconditioned with those same exact subdomain
solves. Further, I've been doing some experiments this week that
confirm this. In short, all my work is close to being a complete
failure.

The problem with ASM+LU is that if your subdomains are big, the local
problems can become a real bottleneck. My Schur-based implementation
handled that issue by automatically sub-partitioning subdomains using
recursive graph splittings (reusing code from nested-dissection
reordering). In the case of ASM, sub-partitioning is implemented in
PETSc, but only by taking slices of matrix rows, which is not very
good for unstructured problems. In the near future, I'm going to push
to petsc-dev a way to enable better subdomain sub-partitioning in
ASM.
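
For completeness, the ASM+LU baseline I am comparing against needs
nothing beyond stock PETSc; a minimal sketch with the current API
(error checking omitted; older versions pass an extra MatStructure
flag to KSPSetOperators):

    #include <petscksp.h>

    PetscErrorCode solve_with_asm_lu(Mat A, Vec b, Vec x)
    {
      KSP ksp;
      PC  pc;
      KSPCreate(PETSC_COMM_WORLD, &ksp);
      KSPSetOperators(ksp, A, A);
      KSPGetPC(ksp, &pc);
      PCSetType(pc, PCASM);
      PCASMSetOverlap(pc, 1); /* one layer of overlap between subdomains */
      KSPSetFromOptions(ksp);
      KSPSolve(ksp, b, x);
      KSPDestroy(&ksp);
      return 0;
    }

Run with '-sub_ksp_type preonly -sub_pc_type lu' to get exact LU
solves on each local subdomain problem.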

Perhaps I'll implement a basic graph partitioner, or rely on the
MatPartitioning implementations. Unfortunately, all MatPartitioning
implementations rely on external packages, and almost all of those are
either non-free or GPL-licensed. That's the reason I think we need to
add a basic sequential graph partitioner to PETSc.
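
Whatever we add should sit behind the existing MatPartitioning
interface; for reference, a rough sketch of its use (assuming the
adjacency graph is already at hand as CSR arrays ia/ja allocated with
PetscMalloc, since MatCreateMPIAdj takes ownership of them; error
checking omitted):

    #include <petscmat.h>

    PetscErrorCode partition_graph(MPI_Comm comm, PetscInt nlocal,
                                   PetscInt N, PetscInt *ia,
                                   PetscInt *ja, IS *parts)
    {
      Mat             adj;
      MatPartitioning part;
      /* wrap the CSR graph as an adjacency matrix (no values needed) */
      MatCreateMPIAdj(comm, nlocal, N, ia, ja, NULL, &adj);
      MatPartitioningCreate(comm, &part);
      MatPartitioningSetAdjacency(part, adj);
      MatPartitioningSetFromOptions(part); /* -mat_partitioning_type <type> */
      MatPartitioningApply(part, parts);   /* target subdomain of each row */
      MatPartitioningDestroy(&part);
      MatDestroy(&adj);
      return 0;
    }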



On 3/14/08, Kathrin Burckhardt <tribur at vision.ee.ethz.ch> wrote:
> Dear Lisandro,
>  Thank you very much for the explanation. Now I understand (more or
>  less).
>
>
>
>
>  On Mon, 10 Mar 2008, Lisandro Dalcin wrote:
>
>  > On 3/9/08, Kathrin Burckhardt <tribur at vision.ee.ethz.ch> wrote:
>  >>  uff, sounds in fact quite complicated.
>  >
>  > Yes, it is  :-(
>  >
>  >> In particular, I don't
>  >>  understand why you split the "local process matrix" (the local Schur
>  >>  complement, right?) into four (sub?)submatrices? Is it because of the
>  >>  structure of your domain?
>  >
>  > Well, perhaps I did not understand you.
>  >
>  > My implementation is intended to work with a MATIS global matrix. In
>  > this matrix, each process assembles its local subdomain matrix (this
>  > is somewhat natural in finite element methods). For a finite element
>  > partitioning (as described in Y. Saad's book on iterative methods,
>  > http://www-users.cs.umn.edu/~saad/books.html), the 'local' Schur
>  > complement is defined by four submatrices obtained from the local
>  > subdomain matrix. Those 'local' Schur complements in turn define the
>  > 'global' one, which is never explicitly assembled, but implicitly
>  > handled by defining the matrix-vector product.
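>  >
>  > To make 'implicitly handled' concrete: the natural PETSc idiom is a
>  > shell matrix whose multiply routine applies S on the fly. A rough
>  > sketch (the SchurCtx layout and names are only illustrative; error
>  > checking omitted):
>  >
>  >   typedef struct {
>  >     Mat A_BB, A_BI, A_IB; /* interface block and coupling blocks */
>  >     KSP kspI;             /* inner solver acting as inv(A_II)    */
>  >     Vec wI1, wI2;         /* interior-sized work vectors         */
>  >   } SchurCtx;
>  >
>  >   /* y = S x = A_BB x - A_BI inv(A_II) A_IB x */
>  >   static PetscErrorCode SchurMult(Mat S, Vec x, Vec y)
>  >   {
>  >     SchurCtx *ctx;
>  >     MatShellGetContext(S, &ctx);
>  >     MatMult(ctx->A_IB, x, ctx->wI1);          /* wI1 = A_IB x         */
>  >     KSPSolve(ctx->kspI, ctx->wI1, ctx->wI2);  /* wI2 = inv(A_II) wI1  */
>  >     VecScale(ctx->wI2, -1.0);                 /* wI2 = -wI2           */
>  >     MatMult(ctx->A_BB, x, y);                 /* y   = A_BB x         */
>  >     MatMultAdd(ctx->A_BI, ctx->wI2, y, y);    /* y  += A_BI wI2       */
>  >     return 0;
>  >   }
>  >
>  > The shell matrix itself is created with MatCreateShell() and wired
>  > up via MatShellSetOperation(S, MATOP_MULT, (void (*)(void))SchurMult).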
>  >
>  >
>  >
>  >>
>  >>
>  >>
>  >>  On Fri, 7 Mar 2008, Lisandro Dalcin wrote:
>  >>
>  >> > Well, I did. I have a preliminary implementation of this inside
>  >> > petsc4py (PETSc for Python). However, it is fully implemented in C as
>  >> > a standard preconditioner intended to be used with KSPPREONLY. The
>  >> > preconditioner has a sub-KSP iterating only on global interface
>  >> > nodes. It should be almost trivial to incorporate it into other
>  >> > codes.
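>  >> >
>  >> > From user code it then looks like nothing more than (assuming the
>  >> > preconditioner was registered with PCRegister() under the
>  >> > hypothetical name "schur"):
>  >> >
>  >> >   KSPSetType(ksp, KSPPREONLY);
>  >> >   KSPGetPC(ksp, &pc);
>  >> >   PCSetType(pc, "schur"); /* inner KSP then iterates on the interface */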
>  >> >
>  >> > A warning: all this needs a MATIS matrix to work, that is, each
>  >> > processor has its local portion of the global matrix in an unassembled
>  >> > fashion. In order to define the Schur complement system, the local
>  >> > process matrix is split into four submatrices. At each processor, the
>  >> > 'subdomain' can be further partitioned; this is done with a
>  >> > simple-minded graph partitioning that reuses some of the core
>  >> > routines PETSc uses internally for sparse matrix reordering. In order
>  >> > to precondition the Schur complement system, a subsidiary problem can
>  >> > be defined by taking some layers of nodes around the interface nodes;
>  >> > it is iterated globally, typically with a few loops of a sub-sub-KSP.
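>  >> >
>  >> > The splitting itself is cheap once the interior (isI) and interface
>  >> > (isB) index sets are known; roughly (recent PETSc spells this
>  >> > MatCreateSubMatrix, older versions had MatGetSubMatrix; error
>  >> > checking omitted):
>  >> >
>  >> >   Mat Aloc, A_II, A_IB, A_BI, A_BB;
>  >> >   MatISGetLocalMat(A, &Aloc); /* unassembled local block of the MATIS */
>  >> >   MatCreateSubMatrix(Aloc, isI, isI, MAT_INITIAL_MATRIX, &A_II);
>  >> >   MatCreateSubMatrix(Aloc, isI, isB, MAT_INITIAL_MATRIX, &A_IB);
>  >> >   MatCreateSubMatrix(Aloc, isB, isI, MAT_INITIAL_MATRIX, &A_BI);
>  >> >   MatCreateSubMatrix(Aloc, isB, isB, MAT_INITIAL_MATRIX, &A_BB);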
>  >> >
>  >> > In short, the implementation is really involved, and comparable in
>  >> > difficulty to the Neumann-Neumann preconditioner (which is also
>  >> > implemented to work with MATIS). At the user level, the only
>  >> > parameters to tweak are the 'sub-subdomain' size and, of course, the
>  >> > tolerances of the inner KSP solver inside the 'SCHUR' preconditioner.
>  >> >
>  >> > Why did I never add this to PETSc? Time, lack of serious testing,
>  >> > etc., but mainly because I'm not sure it is really a good choice
>  >> > compared to PCASM. In fact, some day I should implement my tricks for
>  >> > sub-subdomain partitioning (or perhaps use METIS for this) inside ASM.
>  >> > Then the sub-subdomain problems would have a reasonable size for using
>  >> > LU, and finally I could compare PCASM with my PCSCHUR.
>  >> >
>  >> > Currently, we are using this for solving incompressible NS equations
>  >> > with a monolithic stabilized FEM formulation, and other problems with
>  >> > badly conditioned global systems. We have found no better way to solve
>  >> > our linear systems for our application, probably because we are not
>  >> > smart enough.
>  >> >
>  >> > Hope you understood me...
>  >> >
>  >> > On 3/7/08, Kathrin Burckhardt <tribur at vision.ee.ethz.ch> wrote:
>  >> >> Dear nice people,
>  >> >>
>  >> >>  Did you ever try to solve a Schur complement system using PETSc?
>  >> >>
>  >> >>
>  >> >
>  >> >
>  >> >
>  >>
>  >>
>  >
>  >
>  >
>


-- 
Lisandro Dalcín
---------------
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594



