[petsc-users] user experience with PCNN

Jed Brown jedbrown at mcs.anl.gov
Tue Oct 4 10:22:47 CDT 2011


On Mon, Oct 3, 2011 at 14:40, Jakub Sistek <sistek at math.cas.cz> wrote:

>
> One thing I particularly enjoy about PETSc is the quick interchangeability
> of preconditioners and Krylov methods within the KSP object. But I can see
> this is possible because of the strictly algebraic nature of the approach,
> where only the matrix object is passed.
>

The KSP and PC objects have two "slots", the Krylov operator A and the
"preconditioning matrix" B. I take a very liberal view of what B is. I
consider it to be a container into which any problem/state-dependent
information needed by the preconditioner should be placed. Topological and
geometric information needed by the preconditioner does not change between
nonlinear iterations/time steps/etc., so it can be given to the PC directly
(e.g. PCBDDCSetCoarseSpaceCandidates() or something like that, though this
could also be attached to the Mat).
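
To make the two slots concrete, here is a minimal sketch, assuming the
three-argument KSPSetOperators() of recent PETSc releases (older versions
also take a MatStructure flag) and omitting error checking for brevity:

#include <petscksp.h>

int main(int argc, char **argv)
{
  KSP      ksp;
  Mat      A, B;
  PetscInt i, Istart, Iend, n = 10;

  PetscInitialize(&argc, &argv, NULL, NULL);

  /* A tiny diagonal "operator", just so the sketch runs */
  MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, n, n, 1, NULL, 0, NULL, &A);
  MatGetOwnershipRange(A, &Istart, &Iend);
  for (i = Istart; i < Iend; i++) MatSetValue(A, i, i, 2.0, INSERT_VALUES);
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  /* B stands in for the "preconditioning matrix": a container for whatever
     problem/state-dependent information the PC needs (here just a copy of A) */
  MatDuplicate(A, MAT_COPY_VALUES, &B);

  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, B);   /* A defines the Krylov action; the PC is built from B */
  KSPSetFromOptions(ksp);       /* preconditioners stay interchangeable: -pc_type asm, jacobi, ... */

  KSPDestroy(&ksp);
  MatDestroy(&A);
  MatDestroy(&B);
  PetscFinalize();
  return 0;
}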

> On the other hand, all of the FETI-DP and BDDC implementations I have heard
> of are related to FEM computations and make the mesh somewhat accessible to
> the solver. Although I do not like this, even my third-generation
> implementation of the BDDC method still needs some limited information on
> geometry. Not really for the construction of the coarse basis functions
> (this is algebraic in BDDC), but rather indirectly for the selection of
> coarse degrees of freedom. I am not aware of any existing approach to the
> selection of coarse DOFs that does not require some geometric information
> for robust selection on unstructured 3D meshes. I could imagine that the
> required information could be limited to the positions of the unknowns and
> some information about the problem being solved (the nullspace size); the
> topology of the mesh is not really necessary.
>

We recently introduced MatSetNearNullSpace(), which is also needed by
smoothed aggregation algebraic multigrid. (We decided that this belonged on
the Mat because there are problems for which the near null space could
change depending on the nonlinear regime, thus needing updating within a
nonlinear iteration. For multiphysics problems, it is fragile to depend on
access to the PC used for a particular "block" (if it exists), so I prefer
to put information that may eventually need to be composed with or interact
with other "blocks" into the Mat.)


> Because of this difficulty, I do not see it as simple to write something
> like a PCBDDC preconditioner that would simply interchange with PCASM and
> others. The situation would be simpler for BDDC if the preconditioner could
> also use some kind of mesh description.
>

I agree that it may always be necessary to provide extra information in
order to use PCBDDC. The goal would not be to have a solver that only needs
a (partially) assembled sparse matrix, but rather to have a purely algebraic
interface by which that information can be provided.
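For example, the positions of the unknowns alone can already be passed
through an algebraic interface. A sketch along these lines; PCSetCoordinates()
is the actual PETSc call (used today by e.g. PCGAMG), while the wrapper name
and the assumption that coords holds the nloc*dim local nodal coordinates
are just illustrative:

#include <petscksp.h>

/* Sketch of a purely algebraic way to hand positions of unknowns to the PC:
   only coordinates are passed, no mesh topology. */
static PetscErrorCode GiveCoordinatesToSolver(KSP ksp, PetscInt dim, PetscInt nloc, PetscReal *coords)
{
  PC pc;

  KSPGetPC(ksp, &pc);
  PCSetCoordinates(pc, dim, nloc, coords);
  return 0;
}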

Another way for the PC to access grid information is through PCSetDM(). From
the perspective of the solver, the DM is just an interface for providing
grid- and discretization-dependent algebraic ingredients to the solver. This
enables users of DM to have preconditioners automatically set up.
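A minimal sketch of that route, assuming a structured grid handled by DMDA
and the API names of recent PETSc releases (matrix assembly callbacks are
left to the user):

#include <petscksp.h>
#include <petscdmda.h>

/* Sketch: give the solver a DM so the PC can pull grid information itself.
   An unstructured DM would look the same from the solver's point of view. */
int main(int argc, char **argv)
{
  DM  da;
  KSP ksp;

  PetscInitialize(&argc, &argv, NULL, NULL);

  DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
               DMDA_STENCIL_STAR, 17, 17, PETSC_DECIDE, PETSC_DECIDE,
               1, 1, NULL, NULL, &da);
  DMSetFromOptions(da);
  DMSetUp(da);

  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetDM(ksp, da);        /* the KSP hands the DM on to its PC (PCSetDM) */
  /* KSPSetComputeOperators(ksp, ComputeMatrix, NULL);  user-supplied assembly */
  KSPSetFromOptions(ksp);   /* e.g. -pc_type mg can now build a grid hierarchy */

  KSPDestroy(&ksp);
  DMDestroy(&da);
  PetscFinalize();
  return 0;
}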


>
> The other issue I see as conflicting a bit with the KSP approach of PETSc
> is the fact that BDDC implementations introduce some coupling between the
> preconditioner and the Krylov method, which is in fact run only on the
> Schur complement problem at the interface between subdomains.
> Multiplication by the system matrix in the Krylov method is performed by
> Dirichlet solves on each subdomain, which corresponds to passing a special
> matrix-vector multiplication routine to the Krylov method - at least, this
> is the approach I follow in my latest implementation of BDDC, in the BDDCML
> code, where essentially the preconditioner provides the A*x function to the
> Krylov method.
> I have seen this circumvented in PCNN by extending the vectors back to the
> original size after each application of the preconditioner, but in my
> opinion this approach then loses some of the efficiency of running the
> Krylov method on the Schur complement problem instead of the original
> problem, which usually has a great effect on convergence by itself.
>

There are tradeoffs both ways because iterating in the full space can
accommodate inexact subdomain solves. There are a bunch of algorithms that
use the same ingredients and are essentially equivalent when direct solvers
are used, but different when inexact solvers are used:

- BDDC: iterate in the interface space; needs exact subdomain and coarse solves
- BDDC/primal: iterate in the interface space plus coarse primal dofs; tolerant of inexact coarse-level solves
- BDDC/full: iterate in the full space; tolerant of inexact subdomain and coarse solves
- FETI-DP: iterate in the space of Lagrange multipliers; much like BDDC above
- iFETI-DP: iterate in the space of subdomains with duplicated interface dofs, coarse primal dofs, and Lagrange multipliers; tolerant of inexact subdomain and coarse-level solves
- irFETI-DP: iterate in the space of Lagrange multipliers and coarse dofs; tolerant of inexact coarse solves

One advantage of iterating in the full space is that the method can
naturally be used to precondition a somewhat different matrix (e.g. a
higher-order discretization of the same physics on the same mesh), which can
be applied matrix-free. Any method that iterates in a reduced space simply
contains another KSP for that purpose.
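
Purely as an illustration of "a PC that contains another KSP" (this is not
how PCNN or any PETSc BDDC code is implemented), a shell preconditioner
carrying its own inner KSP could look like the sketch below. It assumes a
recent PETSc API, uses the original matrix as a stand-in for a reduced-space
operator such as an interface Schur complement, and should be run with
-ksp_type fgmres since the preconditioner varies between applications:

#include <petscksp.h>

typedef struct {
  KSP inner;   /* the Krylov iteration living "inside" the preconditioner */
} InnerCtx;

/* Apply the shell PC by running the inner KSP on the residual */
static PetscErrorCode PCApply_Inner(PC pc, Vec x, Vec y)
{
  InnerCtx *ctx;

  PCShellGetContext(pc, &ctx);
  KSPSolve(ctx->inner, x, y);
  return 0;
}

int main(int argc, char **argv)
{
  Mat      A;
  Vec      b, x;
  KSP      outer;
  PC       pc;
  InnerCtx ctx;
  PetscInt i, Istart, Iend, n = 50;

  PetscInitialize(&argc, &argv, NULL, NULL);

  /* Stand-in operator: 1-D Laplacian */
  MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, n, n, 3, NULL, 2, NULL, &A);
  MatGetOwnershipRange(A, &Istart, &Iend);
  for (i = Istart; i < Iend; i++) {
    MatSetValue(A, i, i, 2.0, INSERT_VALUES);
    if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
    if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
  MatCreateVecs(A, &x, &b);
  VecSet(b, 1.0);

  /* Inner KSP: in a real reduced-space method this would act on the reduced
     operator; here it reuses A so the sketch is runnable.  Configure it
     separately with -inner_ksp_type, -inner_pc_type, ... */
  KSPCreate(PETSC_COMM_WORLD, &ctx.inner);
  KSPSetOptionsPrefix(ctx.inner, "inner_");
  KSPSetOperators(ctx.inner, A, A);
  KSPSetFromOptions(ctx.inner);

  /* Outer KSP whose PC is just "run the inner iteration" */
  KSPCreate(PETSC_COMM_WORLD, &outer);
  KSPSetOperators(outer, A, A);
  KSPGetPC(outer, &pc);
  PCSetType(pc, PCSHELL);
  PCShellSetContext(pc, &ctx);
  PCShellSetApply(pc, PCApply_Inner);
  KSPSetFromOptions(outer);
  KSPSolve(outer, b, x);

  KSPDestroy(&ctx.inner);
  KSPDestroy(&outer);
  MatDestroy(&A);
  VecDestroy(&x);
  VecDestroy(&b);
  PetscFinalize();
  return 0;
}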


>
> Regarding problem types, I have little experience with using BDDC beyond
> Poisson problems and elasticity. Recently, I have done some tests with
> Stokes problems and incompressible Navier-Stokes problems, using "brute
> force" rather than any delicacy you may have in mind. The initial
> experience with Stokes problems using Taylor-Hood elements is quite good;
> things get worse for Navier-Stokes, where the convergence, with the current
> simple coarse problem, deteriorates quickly with increasing Reynolds
> number. However, all these things should be tested more thoroughly and, as
> you probably know, are a rather recent topic of research in which no clear
> conclusions have really been reached.
>

I'm curious about what sort of problems you tested with Stokes. In
particular, I'm interested in problems containing thin structures with large
jumps in coefficients (e.g. 10^8). (In this application, I'm only interested
in the Stokes problem, Re=1e-20 for these problems.)