[petsc-users] Fieldsplit schur applied on A00 of a fieldsplit schur

Luc luc.berger.vergiat at gmail.com
Sun Mar 2 19:41:27 CST 2014


So I made the modification you suggested and it works fine.
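For reference, the change amounts to a single added call; here is a sketch
(the surrounding dmshell.c code is paraphrased, only the DMSetUp() call is
the actual addition):

    /* src/dm/impls/shell/dmshell.c, in the sub-DM creation path:
       mark the freshly created sub-DM as set up before it is returned,
       so that DMCreateFieldDecomposition() on the inner split no longer
       fails with "Decomposition defined only after DMSetUp!". */
    ierr = DMSetUp(*subdm);CHKERRQ(ierr);   /* the added line */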
I tried it on an example where I apply two Schur complements (one on the
global system and then one on the A00 block of the global Schur
factorization) and solve the A00 and S blocks of the inner Schur
complement with Jacobi sweeps.
I attached the -ksp_view output for info.
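For context, the four-field DMShell/PetscSection setup described in the
quoted message below looks roughly like this (a minimal sketch; the chart
range, dof counts, and variable names are illustrative, not the actual
FEAP code):

    #include <petscksp.h>

    PetscSection   sec;
    DM             dm;
    KSP            ksp;
    PetscInt       p, f, pStart = 0, pEnd = 4;  /* illustrative chart */
    PetscErrorCode ierr;

    /* A DMShell carries no mesh; the PetscSection alone tells
       PCFIELDSPLIT how the unknowns are grouped into fields. */
    ierr = DMShellCreate(PETSC_COMM_WORLD,&dm);CHKERRQ(ierr);
    ierr = PetscSectionCreate(PETSC_COMM_WORLD,&sec);CHKERRQ(ierr);
    ierr = PetscSectionSetNumFields(sec,4);CHKERRQ(ierr);
    ierr = PetscSectionSetChart(sec,pStart,pEnd);CHKERRQ(ierr);
    for (p = pStart; p < pEnd; ++p) {
      ierr = PetscSectionSetDof(sec,p,4);CHKERRQ(ierr);           /* total dofs at point p */
      for (f = 0; f < 4; ++f) {
        ierr = PetscSectionSetFieldDof(sec,p,f,1);CHKERRQ(ierr);  /* one dof per field */
      }
    }
    ierr = PetscSectionSetUp(sec);CHKERRQ(ierr);
    /* Unnamed fields get the default names Field_0..Field_3, which is
       where the fieldsplit_0_fieldsplit_Field_2_ prefixes come from. */
    ierr = DMSetDefaultSection(dm,sec);CHKERRQ(ierr);

    /* Attach the DM to the solver but keep assembling the operator
       ourselves; fieldsplit then pulls its index sets from the section. */
    ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
    ierr = KSPSetDM(ksp,dm);CHKERRQ(ierr);
    ierr = KSPSetDMActive(ksp,PETSC_FALSE);CHKERRQ(ierr);

With the DM attached this way, the outer -pc_fieldsplit_0_fields 2,3 and
-pc_fieldsplit_1_fields 0,1 groupings are resolved against these section
fields.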

Best,
Luc

On 03/02/2014 02:33 PM, Matthew Knepley wrote:
> On Fri, Feb 28, 2014 at 8:18 PM, Luc Berger-Vergiat
> <luc.berger.vergiat at gmail.com> wrote:
>
>     Hi all,
>     Sorry for the cryptic title, but this is a little complex.
>     Here is what I am doing:
>     I created a DMShell that gets four fields from a PetscSection.
>     I am doing this because I want to apply a Schur complement to
>     my problem.
>     In order to do so I pass the following arguments to my code:
>
>     -ksp_type gmres
>     -pc_type fieldsplit
>     -pc_fieldsplit_type schur
>     -pc_fieldsplit_schur_factorization_type full
>     -pc_fieldsplit_0_fields 2,3                    <--- This defines
>     A00 for my Schur complement
>     -pc_fieldsplit_1_fields 0,1
>
>     Up to here everything works fine and as expected (I run with
>     -ksp_view to make sure that everything makes sense).
>     Now things get tricky: I would like to compute A00^-1 using
>     another Schur decomposition, so here are the options I pass:
>
>     -fieldsplit_0_ksp_type preonly
>     -fieldsplit_0_pc_type fieldsplit
>     -fieldsplit_0_pc_fieldsplit_type schur
>     -fieldsplit_0_pc_fieldsplit_schur_factorization_type full
>     -fieldsplit_0_pc_fieldsplit_0_fields 2
>     -fieldsplit_0_pc_fieldsplit_1_fields 3
>
>     I am fairly sure that the first four options are correct; I am
>     not sure, however, that the last two are understood by PETSc.
>     Actually, I am worried that the DMShell I created for the
>     first-level Schur complement is not passed on to the second-level one.
>     Here is the error message I get when I run my code:
>
>
> Sorry, I am really bogged down at the moment. Can you try this:
>
>   1) You do not need to specify 2,3 for the inner fields since it will 
> use them automatically
>
>   2) Can you try changing src/dm/impls/shell/dmshell.c:664 to include 
> DMSetUp(*subdm); ?
>
>   Thanks,
>
>       Matt
>
>     [0]PETSC ERROR: --------------------- Error Message
>     ------------------------------------
>     [0]PETSC ERROR: Object is in wrong state!
>     [0]PETSC ERROR: Decomposition defined only after DMSetUp!
>     [0]PETSC ERROR:
>     ------------------------------------------------------------------------
>     [0]PETSC ERROR: Petsc Development GIT revision:
>     v3.4.3-4597-g3edecfd  GIT Date: 2014-02-20 20:43:18 -0600
>     [0]PETSC ERROR: See docs/changes/index.html for recent updates.
>     [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
>     [0]PETSC ERROR: See docs/index.html for manual pages.
>     [0]PETSC ERROR:
>     ------------------------------------------------------------------------
>     [0]PETSC ERROR:
>     /home/luc/research/feap_repo/ShearBands/parfeap-petsc34/feap on a
>     arch-linux2-c-opt named euler by luc Fri Feb 28 20:07:18 2014
>     [0]PETSC ERROR: Libraries linked from
>     /home/luc/research/petsc/arch-linux2-c-opt/lib
>     [0]PETSC ERROR: Configure run at Fri Feb 21 17:31:31 2014
>     [0]PETSC ERROR: Configure options --download-cmake
>     --download-hypre --download-metis --download-mpich
>     --download-parmetis --with-debugging=0 --with-share-libraries=0
>     [0]PETSC ERROR:
>     ------------------------------------------------------------------------
>     [0]PETSC ERROR: DMCreateFieldDecomposition() line 1262 in
>     /home/luc/research/petsc/src/dm/interface/dm.c
>     [0]PETSC ERROR: PCFieldSplitSetDefaults() line 336 in
>     /home/luc/research/petsc/src/ksp/pc/impls/fieldsplit/fieldsplit.c
>     [0]PETSC ERROR: PCSetUp_FieldSplit() line 485 in
>     /home/luc/research/petsc/src/ksp/pc/impls/fieldsplit/fieldsplit.c
>     [0]PETSC ERROR: PCSetUp() line 888 in
>     /home/luc/research/petsc/src/ksp/pc/interface/precon.c
>     [0]PETSC ERROR: KSPSetUp() line 278 in
>     /home/luc/research/petsc/src/ksp/ksp/interface/itfunc.c
>     [0]PETSC ERROR: KSPSolve() line 390 in
>     /home/luc/research/petsc/src/ksp/ksp/interface/itfunc.c
>     [0]PETSC ERROR: PCApply_FieldSplit_Schur() line 859 in
>     /home/luc/research/petsc/src/ksp/pc/impls/fieldsplit/fieldsplit.c
>     [0]PETSC ERROR: PCApply() line 440 in
>     /home/luc/research/petsc/src/ksp/pc/interface/precon.c
>     [0]PETSC ERROR: KSP_PCApply() line 227 in
>     /home/luc/research/petsc/include/petsc-private/kspimpl.h
>     [0]PETSC ERROR: KSPInitialResidual() line 64 in
>     /home/luc/research/petsc/src/ksp/ksp/interface/itres.c
>     [0]PETSC ERROR: KSPSolve_GMRES() line 234 in
>     /home/luc/research/petsc/src/ksp/ksp/impls/gmres/gmres.c
>     [0]PETSC ERROR: KSPSolve() line 432 in
>     /home/luc/research/petsc/src/ksp/ksp/interface/itfunc.c
>
>     Let me know if I'm doing something wrong or have misunderstood something.
>
>     Best,
>     Luc
>
>
>
>
> -- 
> What most experimenters take for granted before they begin their 
> experiments is infinitely more interesting than any results to which 
> their experiments lead.
> -- Norbert Wiener

-------------- next part --------------
KSP Object: 1 MPI processes
  type: gmres
    GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
  type: fieldsplit
    FieldSplit with Schur preconditioner, factorization FULL
    Preconditioner for the Schur complement formed from A11
    Split info:
    Split number 0 Defined by IS
    Split number 1 Defined by IS
    KSP solver for A00 block
      KSP Object:      (fieldsplit_0_)       1 MPI processes
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using NONE norm type for convergence test
      PC Object:      (fieldsplit_0_)       1 MPI processes
        type: fieldsplit
          FieldSplit with Schur preconditioner, factorization FULL
          Preconditioner for the Schur complement formed from A11
          Split info:
          Split number 0 Defined by IS
          Split number 1 Defined by IS
          KSP solver for A00 block
            KSP Object:            (fieldsplit_0_fieldsplit_Field_2_)             1 MPI processes
              type: preonly
              maximum iterations=10000, initial guess is zero
              tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
              left preconditioning
              using NONE norm type for convergence test
            PC Object:            (fieldsplit_0_fieldsplit_Field_2_)             1 MPI processes
              type: jacobi
              linear system matrix = precond matrix:
              Mat Object:              (fieldsplit_0_fieldsplit_Field_2_)               1 MPI processes
                type: seqaij
                rows=4, cols=4
                total: nonzeros=16, allocated nonzeros=16
                total number of mallocs used during MatSetValues calls =0
                  using I-node routines: found 1 nodes, limit used is 5
          KSP solver for S = A11 - A10 inv(A00) A01 
            KSP Object:            (fieldsplit_0_fieldsplit_Field_3_)             1 MPI processes
              type: preonly
              maximum iterations=10000, initial guess is zero
              tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
              left preconditioning
              using NONE norm type for convergence test
            PC Object:            (fieldsplit_0_fieldsplit_Field_3_)             1 MPI processes
              type: jacobi
              linear system matrix followed by preconditioner matrix:
              Mat Object:              (fieldsplit_0_fieldsplit_Field_3_)               1 MPI processes
                type: schurcomplement
                rows=4, cols=4
                  Schur complement A11 - A10 inv(A00) A01
                  A11
                    Mat Object:                    (fieldsplit_0_fieldsplit_Field_3_)                     1 MPI processes
                      type: seqaij
                      rows=4, cols=4
                      total: nonzeros=16, allocated nonzeros=16
                      total number of mallocs used during MatSetValues calls =0
                        using I-node routines: found 1 nodes, limit used is 5
                  A10
                    Mat Object:                     1 MPI processes
                      type: seqaij
                      rows=4, cols=4
                      total: nonzeros=16, allocated nonzeros=16
                      total number of mallocs used during MatSetValues calls =0
                        using I-node routines: found 1 nodes, limit used is 5
                  KSP of A00
                    KSP Object:                    (fieldsplit_0_fieldsplit_Field_2_)                     1 MPI processes
                      type: preonly
                      maximum iterations=10000, initial guess is zero
                      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
                      left preconditioning
                      using NONE norm type for convergence test
                    PC Object:                    (fieldsplit_0_fieldsplit_Field_2_)                     1 MPI processes
                      type: jacobi
                      linear system matrix = precond matrix:
                      Mat Object:                      (fieldsplit_0_fieldsplit_Field_2_)                       1 MPI processes
                        type: seqaij
                        rows=4, cols=4
                        total: nonzeros=16, allocated nonzeros=16
                        total number of mallocs used during MatSetValues calls =0
                          using I-node routines: found 1 nodes, limit used is 5
                  A01
                    Mat Object:                     1 MPI processes
                      type: seqaij
                      rows=4, cols=4
                      total: nonzeros=16, allocated nonzeros=16
                      total number of mallocs used during MatSetValues calls =0
                        using I-node routines: found 1 nodes, limit used is 5
              Mat Object:              (fieldsplit_0_fieldsplit_Field_3_)               1 MPI processes
                type: seqaij
                rows=4, cols=4
                total: nonzeros=16, allocated nonzeros=16
                total number of mallocs used during MatSetValues calls =0
                  using I-node routines: found 1 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object:        (fieldsplit_0_)         1 MPI processes
          type: seqaij
          rows=8, cols=8
          total: nonzeros=64, allocated nonzeros=64
          total number of mallocs used during MatSetValues calls =0
            using I-node routines: found 2 nodes, limit used is 5
    KSP solver for S = A11 - A10 inv(A00) A01 
      KSP Object:      (fieldsplit_1_)       1 MPI processes
        type: gmres
          GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
          GMRES: happy breakdown tolerance 1e-30
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using PRECONDITIONED norm type for convergence test
      PC Object:      (fieldsplit_1_)       1 MPI processes
        type: ilu
          ILU: out-of-place factorization
          0 levels of fill
          tolerance for zero pivot 2.22045e-14
          using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
          matrix ordering: natural
          factor fill ratio given 1, needed 1
            Factored matrix follows:
              Mat Object:               1 MPI processes
                type: seqaij
                rows=8, cols=8
                package used to perform factorization: petsc
                total: nonzeros=64, allocated nonzeros=64
                total number of mallocs used during MatSetValues calls =0
                  using I-node routines: found 2 nodes, limit used is 5
        linear system matrix followed by preconditioner matrix:
        Mat Object:        (fieldsplit_1_)         1 MPI processes
          type: schurcomplement
          rows=8, cols=8
            Schur complement A11 - A10 inv(A00) A01
            A11
              Mat Object:              (fieldsplit_1_)               1 MPI processes
                type: seqaij
                rows=8, cols=8
                total: nonzeros=64, allocated nonzeros=64
                total number of mallocs used during MatSetValues calls =0
                  using I-node routines: found 2 nodes, limit used is 5
            A10
              Mat Object:               1 MPI processes
                type: seqaij
                rows=8, cols=8
                total: nonzeros=64, allocated nonzeros=64
                total number of mallocs used during MatSetValues calls =0
                  using I-node routines: found 2 nodes, limit used is 5
            KSP of A00
              KSP Object:              (fieldsplit_0_)               1 MPI processes
                type: preonly
                maximum iterations=10000, initial guess is zero
                tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
                left preconditioning
                using NONE norm type for convergence test
              PC Object:              (fieldsplit_0_)               1 MPI processes
                type: fieldsplit
                  FieldSplit with Schur preconditioner, factorization FULL
                  Preconditioner for the Schur complement formed from A11
                  Split info:
                  Split number 0 Defined by IS
                  Split number 1 Defined by IS
                  KSP solver for A00 block
                    KSP Object:                    (fieldsplit_0_fieldsplit_Field_2_)                     1 MPI processes
                      type: preonly
                      maximum iterations=10000, initial guess is zero
                      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
                      left preconditioning
                      using NONE norm type for convergence test
                    PC Object:                    (fieldsplit_0_fieldsplit_Field_2_)                     1 MPI processes
                      type: jacobi
                      linear system matrix = precond matrix:
                      Mat Object:                      (fieldsplit_0_fieldsplit_Field_2_)                       1 MPI processes
                        type: seqaij
                        rows=4, cols=4
                        total: nonzeros=16, allocated nonzeros=16
                        total number of mallocs used during MatSetValues calls =0
                          using I-node routines: found 1 nodes, limit used is 5
                  KSP solver for S = A11 - A10 inv(A00) A01 
                    KSP Object:                    (fieldsplit_0_fieldsplit_Field_3_)                     1 MPI processes
                      type: preonly
                      maximum iterations=10000, initial guess is zero
                      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
                      left preconditioning
                      using NONE norm type for convergence test
                    PC Object:                    (fieldsplit_0_fieldsplit_Field_3_)                     1 MPI processes
                      type: jacobi
                      linear system matrix followed by preconditioner matrix:
                      Mat Object:                      (fieldsplit_0_fieldsplit_Field_3_)                       1 MPI processes
                        type: schurcomplement
                        rows=4, cols=4
                          Schur complement A11 - A10 inv(A00) A01
                          A11
                            Mat Object:                            (fieldsplit_0_fieldsplit_Field_3_)                             1 MPI processes
                              type: seqaij
                              rows=4, cols=4
                              total: nonzeros=16, allocated nonzeros=16
                              total number of mallocs used during MatSetValues calls =0
                                using I-node routines: found 1 nodes, limit used is 5
                          A10
                            Mat Object:                             1 MPI processes
                              type: seqaij
                              rows=4, cols=4
                              total: nonzeros=16, allocated nonzeros=16
                              total number of mallocs used during MatSetValues calls =0
                                using I-node routines: found 1 nodes, limit used is 5
                          KSP of A00
                            KSP Object:                            (fieldsplit_0_fieldsplit_Field_2_)                             1 MPI processes
                              type: preonly
                              maximum iterations=10000, initial guess is zero
                              tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
                              left preconditioning
                              using NONE norm type for convergence test
                            PC Object:                            (fieldsplit_0_fieldsplit_Field_2_)                             1 MPI processes
                              type: jacobi
                              linear system matrix = precond matrix:
                              Mat Object:                              (fieldsplit_0_fieldsplit_Field_2_)                               1 MPI processes
                                type: seqaij
                                rows=4, cols=4
                                total: nonzeros=16, allocated nonzeros=16
                                total number of mallocs used during MatSetValues calls =0
                                  using I-node routines: found 1 nodes, limit used is 5
                          A01
                            Mat Object:                             1 MPI processes
                              type: seqaij
                              rows=4, cols=4
                              total: nonzeros=16, allocated nonzeros=16
                              total number of mallocs used during MatSetValues calls =0
                                using I-node routines: found 1 nodes, limit used is 5
                      Mat Object:                      (fieldsplit_0_fieldsplit_Field_3_)                       1 MPI processes
                        type: seqaij
                        rows=4, cols=4
                        total: nonzeros=16, allocated nonzeros=16
                        total number of mallocs used during MatSetValues calls =0
                          using I-node routines: found 1 nodes, limit used is 5
                linear system matrix = precond matrix:
                Mat Object:                (fieldsplit_0_)                 1 MPI processes
                  type: seqaij
                  rows=8, cols=8
                  total: nonzeros=64, allocated nonzeros=64
                  total number of mallocs used during MatSetValues calls =0
                    using I-node routines: found 2 nodes, limit used is 5
            A01
              Mat Object:               1 MPI processes
                type: seqaij
                rows=8, cols=8
                total: nonzeros=64, allocated nonzeros=64
                total number of mallocs used during MatSetValues calls =0
                  using I-node routines: found 2 nodes, limit used is 5
        Mat Object:        (fieldsplit_1_)         1 MPI processes
          type: seqaij
          rows=8, cols=8
          total: nonzeros=64, allocated nonzeros=64
          total number of mallocs used during MatSetValues calls =0
            using I-node routines: found 2 nodes, limit used is 5
  linear system matrix = precond matrix:
  Mat Object:   1 MPI processes
    type: seqaij
    rows=16, cols=16
    total: nonzeros=256, allocated nonzeros=320
    total number of mallocs used during MatSetValues calls =16
      using I-node routines: found 4 nodes, limit used is 5
-------------- next part --------------
./ex1 -ksp_type gmres -pc_type fieldsplit -pc_fieldsplit_type schur -pc_fieldsplit_schur_factorization_type full -pc_fieldsplit_0_fields 2,3 -pc_fieldsplit_1_fields 0,1 -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type fieldsplit -fieldsplit_0_pc_fieldsplit_type schur -fieldsplit_0_pc_fieldsplit_schur_factorization_type full -fieldsplit_0_fieldsplit_Field_2_ksp_type preonly -fieldsplit_0_fieldsplit_Field_2_pc_type jacobi -fieldsplit_0_fieldsplit_Field_3_ksp_type preonly -fieldsplit_0_fieldsplit_Field_3_pc_type jacobi -ksp_view -ksp_monitor
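
(Note on the prefixes above: they compose as the splits nest. The outer
split 0 carries the prefix fieldsplit_0_, and its inner splits are named
after the section fields, so fieldsplit_0_fieldsplit_Field_2_ and
fieldsplit_0_fieldsplit_Field_3_ address the A00 and S blocks of the inner
Schur complement, matching the -ksp_view output above.)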

