[petsc-users] Configure nested PCFIELDSPLIT with general index sets

Matthew Knepley knepley at gmail.com
Tue Mar 21 08:24:07 CDT 2017


On Tue, Mar 21, 2017 at 12:58 PM, Natacha BEREUX <natacha.bereux at gmail.com>
wrote:

> Dear PETSc users,
> I am trying to solve a poroelasticity problem with an additional
> temperature equation. The problem is a 3-field problem involving a
> displacement field (u), a pressure field (p), and a temperature field (t).
> I have seen similar examples in
> http://www.mcs.anl.gov/papers/P2017-0112.pdf
> or in Matt's talk
> http://www.caam.rice.edu/~mk51/presentations/SIAMCSE13.pdf
> I would like to reproduce them, but I am running into trouble when
> trying to do so.
>

Yes, my talk is not clear on this point. The option you want,
-pc_fieldsplit_<num>_fields, only works if you use a DM to describe
the splits. Here is the code for DMDA, which assumes a regular
division:


https://bitbucket.org/petsc/petsc/src/d4e0040555dc16f2fb1f7e2e0304e363dcc11328/src/ksp/pc/impls/fieldsplit/fieldsplit.c?at=master&fileviewer=file-view-default#fieldsplit.c-287

and for DMPlex and DMShell,


https://bitbucket.org/petsc/petsc/src/d4e0040555dc16f2fb1f7e2e0304e363dcc11328/src/ksp/pc/impls/fieldsplit/fieldsplit.c?at=master&fileviewer=file-view-default#fieldsplit.c-330
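
For concreteness, here is a minimal sketch of the DMDA route (untested,
3.7-era C API, error checking omitted; the grid size and dof counts are
made up) under which -pc_fieldsplit_0_fields 0,1 and
-pc_fieldsplit_1_fields 2 would be interpreted:

  #include <petscdmda.h>
  #include <petscksp.h>

  int main(int argc, char **argv)
  {
    DM  da;
    KSP ksp;

    PetscInitialize(&argc, &argv, NULL, NULL);
    /* Regular 2D grid with 3 interlaced dofs per node: u, p, t */
    DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                 DMDA_STENCIL_BOX, 10, 10, PETSC_DECIDE, PETSC_DECIDE,
                 3, 1, NULL, NULL, &da);
    DMDASetFieldName(da, 0, "u");
    DMDASetFieldName(da, 1, "p");
    DMDASetFieldName(da, 2, "t");
    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetDM(ksp, da); /* the DM is what lets PCFIELDSPLIT interpret *_fields */
    /* a KSPSetComputeOperators() callback would assemble the matrix here */
    KSPSetFromOptions(ksp);
    KSPDestroy(&ksp);
    DMDestroy(&da);
    PetscFinalize();
    return 0;
  }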

We could, of course, do something like this by just merging the ISes,
but then we would lose the ability to split recursively, since we would
not know how to split the merged set again. So we would have to carry
along something that tells us how, and that is exactly the job the DM
is doing.

I think the remedy is as easy as specifying a DMShell that has a
PetscSection (DMSetDefaultSection) with your ordering, and
I think this is how Firedrake (http://www.firedrakeproject.org/) does it.
However, I usually use a DMPlex which knows about my
mesh, so I am not sure if this strategy has any holes.
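
Roughly, the wiring would look like this (a sketch in C, untested, error
checking omitted; SetupFieldSplit is a made-up helper name, and the
Section is built as in the sketch after the next paragraph):

  #include <petscksp.h>
  #include <petscdmshell.h>

  /* Attach an ordering to a KSP through a DMShell so that PCFIELDSPLIT
     can both split by field and regroup fields. */
  PetscErrorCode SetupFieldSplit(Mat A, PetscSection section, KSP *ksp)
  {
    DM shell;

    KSPCreate(PETSC_COMM_WORLD, ksp);
    DMShellCreate(PETSC_COMM_WORLD, &shell);
    DMSetDefaultSection(shell, section); /* the Section carries the field layout */
    KSPSetDM(*ksp, shell);
    KSPSetDMActive(*ksp, PETSC_FALSE);   /* we supply the operator ourselves */
    DMDestroy(&shell);                   /* the KSP keeps its own reference */
    KSPSetOperators(*ksp, A, A);
    KSPSetFromOptions(*ksp);  /* -pc_fieldsplit_0_fields 0,1 can now be interpreted */
    return 0;
  }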

The PetscSection is just a model of your ordering. You have a domain of
"points", which are usually pieces of the mesh, and each point maps to a
number of dofs. For example, linear elements associate 1 dof with every
vertex. The Section also allows dofs for several fields to be added at
each point. This (I think) is described in the manual.
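
A Section matching the layout above could be built like so (again a
sketch; BuildSection is a made-up name, and the per-point dof counts,
3 for displacement and 1 each for pressure and temperature, are just an
illustration):

  #include <petscis.h>

  /* A Section over points 0..nv-1 (say, the vertices), with three
     named fields. */
  PetscErrorCode BuildSection(PetscInt nv, PetscSection *section)
  {
    PetscInt v;

    PetscSectionCreate(PETSC_COMM_WORLD, section);
    PetscSectionSetNumFields(*section, 3);
    PetscSectionSetFieldName(*section, 0, "u");
    PetscSectionSetFieldName(*section, 1, "p");
    PetscSectionSetFieldName(*section, 2, "t");
    PetscSectionSetChart(*section, 0, nv);
    for (v = 0; v < nv; ++v) {
      PetscSectionSetFieldDof(*section, v, 0, 3); /* displacement */
      PetscSectionSetFieldDof(*section, v, 1, 1); /* pressure */
      PetscSectionSetFieldDof(*section, v, 2, 1); /* temperature */
      PetscSectionSetDof(*section, v, 5);         /* total dofs at this point */
    }
    PetscSectionSetUp(*section);
    return 0;
  }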

Does this make sense?

  Thanks,

     Matt

> Here is how I proceed:
>
> I have a monolithic matrix A.
> I build 3 index sets for u, p, and t in A.
> Then I set up the KSP context:
>
> call KSPCreate(PETSC_COMM_WORLD,ksp,ierr)
> call KSPSetOperators(ksp,A,A,ierr)
> call KSPGetPC(ksp, pc, ierr)
> call PCSetType(pc, PCFIELDSPLIT, ierr)
> call PCFieldSplitSetIS(pc,'u',is_u, ierr)
> call PCFieldSplitSetIS(pc,'p',is_p, ierr)
> call PCFieldSplitSetIS(pc,'t',is_t, ierr)
> call PCSetFromOptions(pc,ierr)
> call KSPSetFromOptions(ksp,ierr)
> call KSPSolve(ksp,b,x,ierr)
>
> I run the code with the following options
> -ksp_view
> -log_view
> -ksp_rtol 1.0e-5
> -ksp_type fgmres
> -pc_type fieldsplit
> -pc_fieldsplit_type additive
> -pc_fieldsplit_0_fields 0,1
> -pc_fieldsplit_1_fields 2
> -pc_fieldsplit_0_pc_type lu
> -pc_fieldsplit_0_ksp_type preonly
> -pc_fieldsplit_1_pc_type lu
> -pc_fieldsplit_1_ksp_type preonly
>
> I would like to group the u and p fields in a "0" block, with the
> temperature remaining in a "1" block.
> (I start with direct solves in each block to check the block
> decomposition, but I intend to use iterative methods later, and more
> precisely a Schur fieldsplit preconditioner for the "0" block.)
> The output follows :
>
> KSP Object: 1 MPI processes
>   type: fgmres
>     GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
>     GMRES: happy breakdown tolerance 1e-30
>   maximum iterations=10000, initial guess is zero
>   tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>   right preconditioning
>   using UNPRECONDITIONED norm type for convergence test
> PC Object: 1 MPI processes
>   type: fieldsplit
>     FieldSplit with ADDITIVE composition: total splits = 3
>     Solver info for each split is in the following KSP objects:
>     Split number 0 Defined by IS
>     KSP Object:    (fieldsplit_u_)     1 MPI processes
>       type: preonly
>       maximum iterations=10000, initial guess is zero
>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>       left preconditioning
>       using DEFAULT norm type for convergence test
>     PC Object:    (fieldsplit_u_)     1 MPI processes
>       type: ilu
>       PC has not been set up so information may be incomplete
>         ILU: out-of-place factorization
>         0 levels of fill
>         tolerance for zero pivot 2.22045e-14
>         matrix ordering: natural
>       linear system matrix = precond matrix:
>       Mat Object:      (fieldsplit_u_)       1 MPI processes
>         type: seqaij
>         rows=60, cols=60
>         total: nonzeros=3600, allocated nonzeros=3600
>         total number of mallocs used during MatSetValues calls =0
>           using I-node routines: found 12 nodes, limit used is 5
>     Split number 1 Defined by IS
>     KSP Object:    (fieldsplit_p_)     1 MPI processes
>       type: preonly
>       maximum iterations=10000, initial guess is zero
>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>       left preconditioning
>       using DEFAULT norm type for convergence test
>     PC Object:    (fieldsplit_p_)     1 MPI processes
>       type: ilu
>       PC has not been set up so information may be incomplete
>         ILU: out-of-place factorization
>         0 levels of fill
>         tolerance for zero pivot 2.22045e-14
>         matrix ordering: natural
>       linear system matrix = precond matrix:
>       Mat Object:      (fieldsplit_p_)       1 MPI processes
>         type: seqaij
>         rows=8, cols=8
>         total: nonzeros=64, allocated nonzeros=64
>         total number of mallocs used during MatSetValues calls =0
>           using I-node routines: found 2 nodes, limit used is 5
>     Split number 2 Defined by IS
>     KSP Object:    (fieldsplit_t_)     1 MPI processes
>       type: preonly
>       maximum iterations=10000, initial guess is zero
>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>       left preconditioning
>       using DEFAULT norm type for convergence test
>     PC Object:    (fieldsplit_t_)     1 MPI processes
>       type: ilu
>       PC has not been set up so information may be incomplete
>         ILU: out-of-place factorization
>         0 levels of fill
>         tolerance for zero pivot 2.22045e-14
>         matrix ordering: natural
>       linear system matrix = precond matrix:
>       Mat Object:      (fieldsplit_t_)       1 MPI processes
>         type: seqaij
>         rows=8, cols=8
>         total: nonzeros=64, allocated nonzeros=64
>         total number of mallocs used during MatSetValues calls =0
>           using I-node routines: found 2 nodes, limit used is 5
>   linear system matrix = precond matrix:
>   Mat Object:   1 MPI processes
>     type: seqaij
>     rows=76, cols=76
>     total: nonzeros=5776, allocated nonzeros=5776
>     total number of mallocs used during MatSetValues calls =0
>       using I-node routines: found 16 nodes, limit used is 5
>
> The preconditioner has 3 splits, whereas I would like to group (u,p)
> together and see 2 splits.
> I suspect that
> -pc_fieldsplit_0_fields 0,1
> -pc_fieldsplit_1_fields 2
> are not the appropriate options.
> Am I correct?
> What is the right way to group two fields defined by index sets?
>
> Any help would be greatly appreciated !
>
> Natacha
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener