[petsc-users] Error while calling a fieldsplit preconditioner containing a mg/gamg preconditioner

Luc Berger-Vergiat lb2653 at columbia.edu
Fri Nov 21 10:55:37 CST 2014


Sure, so with gamg (which was my first choice too), I get the following:

[0]PETSC ERROR: --------------------- Error Message 
--------------------------------------------------------------
[0]PETSC ERROR:
[0]PETSC ERROR: Must call DMShellSetGlobalVector() or 
DMShellSetCreateGlobalVector()
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html 
for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.5.2, Sep, 08, 2014
[0]PETSC ERROR: /home/luc/research/feap_repo/ShearBands/parfeap/feap on 
a arch-opt named euler by luc Thu Nov 20 16:19:29 2014
[0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran 
--with-cxx=g++ --with-debugging=0 --with-shared-libraries=0 
--download-fblaslapack --download-mpich --download-parmetis 
--download-metis --download-ml=yes --download-hypre 
--download-superlu_dist --download-mumps --download-scalapack
[0]PETSC ERROR: #145 DMCreateGlobalVector_Shell() line 245 in 
/home/luc/research/petsc-3.5.2/src/dm/impls/shell/dmshell.c
[0]PETSC ERROR: #146 DMCreateGlobalVector() line 681 in 
/home/luc/research/petsc-3.5.2/src/dm/interface/dm.c
[0]PETSC ERROR: #147 DMGetGlobalVector() line 154 in 
/home/luc/research/petsc-3.5.2/src/dm/interface/dmget.c
[0]PETSC ERROR: --------------------- Error Message 
--------------------------------------------------------------
[0]PETSC ERROR:
[0]PETSC ERROR: Must call DMShellSetGlobalVector() or 
DMShellSetCreateGlobalVector()
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html 
for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.5.2, Sep, 08, 2014
[0]PETSC ERROR: /home/luc/research/feap_repo/ShearBands/parfeap/feap on 
a arch-opt named euler by luc Thu Nov 20 16:19:29 2014
[0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran 
--with-cxx=g++ --with-debugging=0 --with-shared-libraries=0 
--download-fblaslapack --download-mpich --download-parmetis 
--download-metis --download-ml=yes --download-hypre 
--download-superlu_dist --download-mumps --download-scalapack
[0]PETSC ERROR: #148 DMCreateGlobalVector_Shell() line 245 in 
/home/luc/research/petsc-3.5.2/src/dm/impls/shell/dmshell.c
[0]PETSC ERROR: #149 DMCreateGlobalVector() line 681 in 
/home/luc/research/petsc-3.5.2/src/dm/interface/dm.c
[0]PETSC ERROR: #150 DMGetGlobalVector() line 154 in 
/home/luc/research/petsc-3.5.2/src/dm/interface/dmget.c
KSP Object: 1 MPI processes
   type: gmres
     GMRES: restart=30, using Classical (unmodified) Gram-Schmidt 
Orthogonalization with no iterative refinement
     GMRES: happy breakdown tolerance 1e-30
   maximum iterations=10000, initial guess is zero
   tolerances:  relative=1e-08, absolute=1e-16, divergence=1e+16
   left preconditioning
   using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
   type: fieldsplit
     FieldSplit with Schur preconditioner, factorization FULL
     Preconditioner for the Schur complement formed from Sp, an 
assembled approximation to S, which uses (the lumped) A00's diagonal's 
inverse
     Split info:
     Split number 0 Defined by IS
     Split number 1 Defined by IS
     KSP solver for A00 block
       KSP Object:      (fieldsplit_0_)       1 MPI processes
         type: preonly
         maximum iterations=10000, initial guess is zero
         tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
         left preconditioning
         using NONE norm type for convergence test
       PC Object:      (fieldsplit_0_)       1 MPI processes
         type: hypre
           HYPRE Euclid preconditioning
           HYPRE Euclid: number of levels 1
         linear system matrix = precond matrix:
         Mat Object:        (fieldsplit_0_)         1 MPI processes
           type: seqaij
           rows=2000, cols=2000
           total: nonzeros=40000, allocated nonzeros=40000
           total number of mallocs used during MatSetValues calls =0
             using I-node routines: found 400 nodes, limit used is 5
     KSP solver for S = A11 - A10 inv(A00) A01
       KSP Object:      (fieldsplit_1_)       1 MPI processes
         type: gmres
           GMRES: restart=30, using Classical (unmodified) Gram-Schmidt 
Orthogonalization with no iterative refinement
           GMRES: happy breakdown tolerance 1e-30
         maximum iterations=10000, initial guess is zero
         tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
         left preconditioning
         using PRECONDITIONED norm type for convergence test
       PC Object:      (fieldsplit_1_)       1 MPI processes
         type: gamg
           MG: type is MULTIPLICATIVE, levels=2 cycles=v
             Cycles per PCApply=1
             Using Galerkin computed coarse grid matrices
         Coarse grid solver -- level -------------------------------
           KSP Object: (fieldsplit_1_mg_coarse_)           1 MPI processes
             type: preonly
             maximum iterations=1, initial guess is zero
             tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
             left preconditioning
             using NONE norm type for convergence test
           PC Object: (fieldsplit_1_mg_coarse_)           1 MPI processes
             type: bjacobi
               block Jacobi: number of blocks = 1
               Local solve is same for all blocks, in the following KSP 
and PC objects:
               KSP Object: (fieldsplit_1_mg_coarse_sub_)               1 
MPI processes
                 type: preonly
                 maximum iterations=1, initial guess is zero
                 tolerances:  relative=1e-05, absolute=1e-50, 
divergence=10000
                 left preconditioning
                 using NONE norm type for convergence test
               PC Object: (fieldsplit_1_mg_coarse_sub_)               1 
MPI processes
                 type: lu
                   LU: out-of-place factorization
                   tolerance for zero pivot 2.22045e-14
                   using diagonal shift on blocks to prevent zero pivot 
[INBLOCKS]
                   matrix ordering: nd
                   factor fill ratio given 5, needed 1.22785
                     Factored matrix follows:
                       Mat Object:                       1 MPI processes
                         type: seqaij
                         rows=13, cols=13
                         package used to perform factorization: petsc
                         total: nonzeros=97, allocated nonzeros=97
                         total number of mallocs used during 
MatSetValues calls =0
                           using I-node routines: found 10 nodes, limit 
used is 5
                 linear system matrix = precond matrix:
                 Mat Object:                 1 MPI processes
                   type: seqaij
                   rows=13, cols=13
                   total: nonzeros=79, allocated nonzeros=79
                   total number of mallocs used during MatSetValues calls =0
                     not using I-node routines
             linear system matrix = precond matrix:
             Mat Object:             1 MPI processes
               type: seqaij
               rows=13, cols=13
               total: nonzeros=79, allocated nonzeros=79
               total number of mallocs used during MatSetValues calls =0
                 not using I-node routines
         Down solver (pre-smoother) on level 1 
-------------------------------
           KSP Object: (fieldsplit_1_mg_levels_1_)           1 MPI processes
             type: chebyshev
               Chebyshev: eigenvalue estimates:  min = 0.0979919, max = 
2.05783
             maximum iterations=2
             tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
             left preconditioning
             using nonzero initial guess
             using NONE norm type for convergence test
           PC Object: (fieldsplit_1_mg_levels_1_)           1 MPI processes
             type: sor
               SOR: type = local_symmetric, iterations = 1, local 
iterations = 1, omega = 1
             linear system matrix followed by preconditioner matrix:
             Mat Object:            (fieldsplit_1_)             1 MPI 
processes
               type: schurcomplement
               rows=330, cols=330
                 Schur complement A11 - A10 inv(A00) A01
                 A11
                   Mat Object: (fieldsplit_1_)                   1 MPI 
processes
                     type: seqaij
                     rows=330, cols=330
                     total: nonzeros=7642, allocated nonzeros=7642
                     total number of mallocs used during MatSetValues 
calls =0
                       using I-node routines: found 121 nodes, limit 
used is 5
                 A10
                   Mat Object:                   1 MPI processes
                     type: seqaij
                     rows=330, cols=2000
                     total: nonzeros=22800, allocated nonzeros=22800
                     total number of mallocs used during MatSetValues 
calls =0
                       using I-node routines: found 121 nodes, limit 
used is 5
                 KSP of A00
                   KSP Object: (fieldsplit_0_)                   1 MPI 
processes
                     type: preonly
                     maximum iterations=10000, initial guess is zero
                     tolerances:  relative=1e-05, absolute=1e-50, 
divergence=10000
                     left preconditioning
                     using NONE norm type for convergence test
                   PC Object: (fieldsplit_0_)                   1 MPI 
processes
                     type: hypre
                       HYPRE Euclid preconditioning
                       HYPRE Euclid: number of levels 1
                     linear system matrix = precond matrix:
                     Mat Object: (fieldsplit_0_)                     1 
MPI processes
                       type: seqaij
                       rows=2000, cols=2000
                       total: nonzeros=40000, allocated nonzeros=40000
                       total number of mallocs used during MatSetValues 
calls =0
                         using I-node routines: found 400 nodes, limit 
used is 5
                 A01
                   Mat Object:                   1 MPI processes
                     type: seqaij
                     rows=2000, cols=330
                     total: nonzeros=22800, allocated nonzeros=22800
                     total number of mallocs used during MatSetValues 
calls =0
                       using I-node routines: found 400 nodes, limit 
used is 5
             Mat Object:             1 MPI processes
               type: seqaij
               rows=330, cols=330
               total: nonzeros=7642, allocated nonzeros=7642
               total number of mallocs used during MatSetValues calls =0
                 using I-node routines: found 121 nodes, limit used is 5
         Up solver (post-smoother) same as down solver (pre-smoother)
         linear system matrix followed by preconditioner matrix:
         Mat Object:        (fieldsplit_1_)         1 MPI processes
           type: schurcomplement
           rows=330, cols=330
             Schur complement A11 - A10 inv(A00) A01
             A11
               Mat Object: (fieldsplit_1_)               1 MPI processes
                 type: seqaij
                 rows=330, cols=330
                 total: nonzeros=7642, allocated nonzeros=7642
                 total number of mallocs used during MatSetValues calls =0
                   using I-node routines: found 121 nodes, limit used is 5
             A10
               Mat Object:               1 MPI processes
                 type: seqaij
                 rows=330, cols=2000
                 total: nonzeros=22800, allocated nonzeros=22800
                 total number of mallocs used during MatSetValues calls =0
                   using I-node routines: found 121 nodes, limit used is 5
             KSP of A00
               KSP Object: (fieldsplit_0_)               1 MPI processes
                 type: preonly
                 maximum iterations=10000, initial guess is zero
                 tolerances:  relative=1e-05, absolute=1e-50, 
divergence=10000
                 left preconditioning
                 using NONE norm type for convergence test
               PC Object: (fieldsplit_0_)               1 MPI processes
                 type: hypre
                   HYPRE Euclid preconditioning
                   HYPRE Euclid: number of levels 1
                 linear system matrix = precond matrix:
                 Mat Object: (fieldsplit_0_)                 1 MPI processes
                   type: seqaij
                   rows=2000, cols=2000
                   total: nonzeros=40000, allocated nonzeros=40000
                   total number of mallocs used during MatSetValues calls =0
                     using I-node routines: found 400 nodes, limit used is 5
             A01
               Mat Object:               1 MPI processes
                 type: seqaij
                 rows=2000, cols=330
                 total: nonzeros=22800, allocated nonzeros=22800
                 total number of mallocs used during MatSetValues calls =0
                   using I-node routines: found 400 nodes, limit used is 5
         Mat Object:         1 MPI processes
           type: seqaij
           rows=330, cols=330
           total: nonzeros=7642, allocated nonzeros=7642
           total number of mallocs used during MatSetValues calls =0
             using I-node routines: found 121 nodes, limit used is 5
   linear system matrix = precond matrix:
   Mat Object:   1 MPI processes
     type: seqaij
     rows=2330, cols=2330
     total: nonzeros=93242, allocated nonzeros=93242
     total number of mallocs used during MatSetValues calls =0
       using I-node routines: found 521 nodes, limit used is 5

I am still missing the same call to
DMShellSetCreateGlobalVector()/DMShellSetGlobalVector(); the only
difference is that I now get the error message twice :)

FYI, this is how I set up my DMShell, KSP and PC:

c     Set up the number of fields, the section dofs and the field section dofs
       call PetscSectionCreate(PETSC_COMM_WORLD, FSSection, ierr)
       call PetscSectionSetNumFields(FSSection, numFields, ierr)
       call PetscSectionSetChart(FSSection, 1, numpeq+1, ierr)
       call PetscSectionGetChart(FSSection, NF, NE, ierr)

       do some loop over nodes and dofs
          call PetscSectionSetDof(FSSection, kk, 1, ierr)
          call PetscSectionSetFieldDof(FSSection, kk, jj-1, 1, ierr)
       enddo

       call PetscSectionSetUp(FSSection, ierr)
c     Create the DM and set the default section
       if(DM_flg) then
          call DMShellCreate(PETSC_COMM_WORLD,FSDM, ierr)
          call DMSetDefaultSection(FSDM, FSSection, ierr)
          call DMSetUp(FSDM, ierr)
       endif

       call KSPCreate        (PETSC_COMM_WORLD, kspsol   ,ierr)

       call PetscOptionsGetString(PETSC_NULL_CHARACTER,'-pc_type',
      &           pctype,chk,ierr)
c     Check for fieldsplit preconditioner to assign a DM to the KSP
       if (chk .eqv. PETSC_TRUE) then
          if(pctype.eq.'fieldsplit') then
             call KSPSetDM(kspsol, FSDM, ierr)
             call KSPSetDMActive(kspsol, PETSC_FALSE, ierr)
          endif
       endif
c       call KSPSetOperators  (kspsol, Kmat, Kmat,
c     &                             DIFFERENT_NONZERO_PATTERN ,ierr)
       call KSPSetOperators(kspsol, Kmat, Kmat, ierr)

c           Options from command line

       call KSPSetFromOptions(kspsol,ierr)

       call KSPGetPC             (kspsol, pc ,      ierr)
       call PCSetCoordinates(pc,ndm,numpn,hr(np(43)),ierr)
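
I guess the missing piece would be something along these lines (only a
rough sketch, not what the code above currently does: FSVec is a
placeholder name, it assumes a template vector of local size numpeq is
acceptable, and it assumes the Fortran binding of
DMShellSetGlobalVector() is available):

c     Hand the DMShell a template global vector so the
c     DMGetGlobalVector() call in the traceback can succeed
       call VecCreate(PETSC_COMM_WORLD, FSVec, ierr)
       call VecSetSizes(FSVec, numpeq, PETSC_DETERMINE, ierr)
       call VecSetFromOptions(FSVec, ierr)
       call DMShellSetGlobalVector(FSDM, FSVec, ierr)

Alternatively, DMShellSetCreateGlobalVector() takes a user routine that
creates such a vector on demand.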

Best,
Luc

On 11/21/2014 11:14 AM, Matthew Knepley wrote:
> On Fri, Nov 21, 2014 at 9:55 AM, Luc Berger-Vergiat
> <lb2653 at columbia.edu> wrote:
>
>     Hi all,
>     I am using a DMShell in order to use a fieldsplit preconditioner.
>     I would like to try some of PETSc's multigrid options: mg and gamg.
>
>
> Let's start with gamg, since that does not need anything else from the
> user. If you want to use mg, then you will need to supply the
> interpolation/restriction operators, since we cannot calculate them
> without an idea of the discretization.
>
>   Thanks,
>
>      Matt
>
>     When I call my preconditioner:
>
>         -ksp_type gmres -pc_type fieldsplit -pc_fieldsplit_type schur
>         -pc_fieldsplit_schur_factorization_type full
>         -pc_fieldsplit_schur_precondition selfp
>         -pc_fieldsplit_0_fields 2,3 -pc_fieldsplit_1_fields 0,1
>         -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type hypre
>         -fieldsplit_0_pc_hypre_type euclid -fieldsplit_1_ksp_type
>         gmres -fieldsplit_1_pc_type mg -malloc_log mlog -log_summary
>         time.log -ksp_view
>
>     it returns the following error message and ksp_view output:
>
>     [0]PETSC ERROR: --------------------- Error Message
>     --------------------------------------------------------------
>     [0]PETSC ERROR:
>     [0]PETSC ERROR: Must call DMShellSetGlobalVector() or
>     DMShellSetCreateGlobalVector()
>     [0]PETSC ERROR: See
>     http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble
>     shooting.
>     [0]PETSC ERROR: Petsc Release Version 3.5.2, Sep, 08, 2014
>     [0]PETSC ERROR:
>     /home/luc/research/feap_repo/ShearBands/parfeap/feap on a arch-opt
>     named euler by luc Fri Nov 21 10:12:53 2014
>     [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran
>     --with-cxx=g++ --with-debugging=0 --with-shared-libraries=0
>     --download-fblaslapack --download-mpich --download-parmetis
>     --download-metis --download-ml=yes --download-hypre
>     --download-superlu_dist --download-mumps --download-scalapack
>     [0]PETSC ERROR: #259 DMCreateGlobalVector_Shell() line 245 in
>     /home/luc/research/petsc-3.5.2/src/dm/impls/shell/dmshell.c
>     [0]PETSC ERROR: #260 DMCreateGlobalVector() line 681 in
>     /home/luc/research/petsc-3.5.2/src/dm/interface/dm.c
>     [0]PETSC ERROR: #261 DMGetGlobalVector() line 154 in
>     /home/luc/research/petsc-3.5.2/src/dm/interface/dmget.c
>     KSP Object: 1 MPI processes
>       type: gmres
>         GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
>     Orthogonalization with no iterative refinement
>         GMRES: happy breakdown tolerance 1e-30
>       maximum iterations=10000, initial guess is zero
>       tolerances:  relative=1e-08, absolute=1e-16, divergence=1e+16
>       left preconditioning
>       using PRECONDITIONED norm type for convergence test
>     PC Object: 1 MPI processes
>       type: fieldsplit
>         FieldSplit with Schur preconditioner, factorization FULL
>         Preconditioner for the Schur complement formed from Sp, an
>     assembled approximation to S, which uses (the lumped) A00's
>     diagonal's inverse
>         Split info:
>         Split number 0 Defined by IS
>         Split number 1 Defined by IS
>         KSP solver for A00 block
>           KSP Object:      (fieldsplit_0_)       1 MPI processes
>             type: preonly
>             maximum iterations=10000, initial guess is zero
>             tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>             left preconditioning
>             using NONE norm type for convergence test
>           PC Object:      (fieldsplit_0_)       1 MPI processes
>             type: hypre
>               HYPRE Euclid preconditioning
>               HYPRE Euclid: number of levels 1
>             linear system matrix = precond matrix:
>             Mat Object:        (fieldsplit_0_)         1 MPI processes
>               type: seqaij
>               rows=2000, cols=2000
>               total: nonzeros=40000, allocated nonzeros=40000
>               total number of mallocs used during MatSetValues calls =0
>                 using I-node routines: found 400 nodes, limit used is 5
>         KSP solver for S = A11 - A10 inv(A00) A01
>           KSP Object:      (fieldsplit_1_)       1 MPI processes
>             type: gmres
>               GMRES: restart=30, using Classical (unmodified)
>     Gram-Schmidt Orthogonalization with no iterative refinement
>               GMRES: happy breakdown tolerance 1e-30
>             maximum iterations=10000, initial guess is zero
>             tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>             left preconditioning
>             using PRECONDITIONED norm type for convergence test
>           PC Object:      (fieldsplit_1_)       1 MPI processes
>             type: mg
>               MG: type is MULTIPLICATIVE, levels=1 cycles=v
>                 Cycles per PCApply=1
>                 Not using Galerkin computed coarse grid matrices
>             Coarse grid solver -- level -------------------------------
>               KSP Object: (fieldsplit_1_mg_levels_0_)           1 MPI
>     processes
>                 type: chebyshev
>                   Chebyshev: eigenvalue estimates:  min = 1.70057, max
>     = 18.7063
>                   Chebyshev: estimated using:  [0 0.1; 0 1.1]
>                   KSP Object:
>     (fieldsplit_1_mg_levels_0_est_)               1 MPI processes
>                     type: gmres
>                       GMRES: restart=30, using Classical (unmodified)
>     Gram-Schmidt Orthogonalization with no iterative refinement
>                       GMRES: happy breakdown tolerance 1e-30
>                     maximum iterations=10, initial guess is zero
>                     tolerances:  relative=1e-05, absolute=1e-50,
>     divergence=10000
>                     left preconditioning
>                     using NONE norm type for convergence test
>                   PC Object: (fieldsplit_1_mg_levels_0_)              
>     1 MPI processes
>                     type: sor
>                       SOR: type = local_symmetric, iterations = 1,
>     local iterations = 1, omega = 1
>                     linear system matrix followed by preconditioner
>     matrix:
>                     Mat Object: (fieldsplit_1_)                 1 MPI
>     processes
>                       type: schurcomplement
>                       rows=330, cols=330
>                         Schur complement A11 - A10 inv(A00) A01
>                         A11
>                           Mat Object:
>     (fieldsplit_1_)                       1 MPI processes
>                             type: seqaij
>                             rows=330, cols=330
>                             total: nonzeros=7642, allocated nonzeros=7642
>                             total number of mallocs used during
>     MatSetValues calls =0
>                               using I-node routines: found 121 nodes,
>     limit used is 5
>                         A10
>                           Mat Object:                       1 MPI
>     processes
>                             type: seqaij
>                             rows=330, cols=2000
>                             total: nonzeros=22800, allocated
>     nonzeros=22800
>                             total number of mallocs used during
>     MatSetValues calls =0
>                               using I-node routines: found 121 nodes,
>     limit used is 5
>                         KSP of A00
>                           KSP Object:
>     (fieldsplit_0_)                       1 MPI processes
>                             type: preonly
>                             maximum iterations=10000, initial guess is
>     zero
>                             tolerances:  relative=1e-05,
>     absolute=1e-50, divergence=10000
>                             left preconditioning
>                             using NONE norm type for convergence test
>                           PC Object:
>     (fieldsplit_0_)                       1 MPI processes
>                             type: hypre
>                               HYPRE Euclid preconditioning
>                               HYPRE Euclid: number of levels 1
>                             linear system matrix = precond matrix:
>                             Mat Object:
>     (fieldsplit_0_)                         1 MPI processes
>                               type: seqaij
>                               rows=2000, cols=2000
>                               total: nonzeros=40000, allocated
>     nonzeros=40000
>                               total number of mallocs used during
>     MatSetValues calls =0
>                                 using I-node routines: found 400
>     nodes, limit used is 5
>                         A01
>                           Mat Object:                       1 MPI
>     processes
>                             type: seqaij
>                             rows=2000, cols=330
>                             total: nonzeros=22800, allocated
>     nonzeros=22800
>                             total number of mallocs used during
>     MatSetValues calls =0
>                               using I-node routines: found 400 nodes,
>     limit used is 5
>                     Mat Object:                 1 MPI processes
>                       type: seqaij
>                       rows=330, cols=330
>                       total: nonzeros=7642, allocated nonzeros=7642
>                       total number of mallocs used during MatSetValues
>     calls =0
>                         using I-node routines: found 121 nodes, limit
>     used is 5
>                 maximum iterations=1, initial guess is zero
>                 tolerances:  relative=1e-05, absolute=1e-50,
>     divergence=10000
>                 left preconditioning
>                 using NONE norm type for convergence test
>               PC Object: (fieldsplit_1_mg_levels_0_)           1 MPI
>     processes
>                 type: sor
>                   SOR: type = local_symmetric, iterations = 1, local
>     iterations = 1, omega = 1
>                 linear system matrix followed by preconditioner matrix:
>                 Mat Object: (fieldsplit_1_)             1 MPI processes
>                   type: schurcomplement
>                   rows=330, cols=330
>                     Schur complement A11 - A10 inv(A00) A01
>                     A11
>                       Mat Object: (fieldsplit_1_)                   1
>     MPI processes
>                         type: seqaij
>                         rows=330, cols=330
>                         total: nonzeros=7642, allocated nonzeros=7642
>                         total number of mallocs used during
>     MatSetValues calls =0
>                           using I-node routines: found 121 nodes,
>     limit used is 5
>                     A10
>                       Mat Object:                   1 MPI processes
>                         type: seqaij
>                         rows=330, cols=2000
>                         total: nonzeros=22800, allocated nonzeros=22800
>                         total number of mallocs used during
>     MatSetValues calls =0
>                           using I-node routines: found 121 nodes,
>     limit used is 5
>                     KSP of A00
>                       KSP Object: (fieldsplit_0_)                   1
>     MPI processes
>                         type: preonly
>                         maximum iterations=10000, initial guess is zero
>                         tolerances:  relative=1e-05, absolute=1e-50,
>     divergence=10000
>                         left preconditioning
>                         using NONE norm type for convergence test
>                       PC Object: (fieldsplit_0_)                   1
>     MPI processes
>                         type: hypre
>                           HYPRE Euclid preconditioning
>                           HYPRE Euclid: number of levels 1
>                         linear system matrix = precond matrix:
>                         Mat Object:
>     (fieldsplit_0_)                     1 MPI processes
>                           type: seqaij
>                           rows=2000, cols=2000
>                           total: nonzeros=40000, allocated nonzeros=40000
>                           total number of mallocs used during
>     MatSetValues calls =0
>                             using I-node routines: found 400 nodes,
>     limit used is 5
>                     A01
>                       Mat Object:                   1 MPI processes
>                         type: seqaij
>                         rows=2000, cols=330
>                         total: nonzeros=22800, allocated nonzeros=22800
>                         total number of mallocs used during
>     MatSetValues calls =0
>                           using I-node routines: found 400 nodes,
>     limit used is 5
>                 Mat Object:             1 MPI processes
>                   type: seqaij
>                   rows=330, cols=330
>                   total: nonzeros=7642, allocated nonzeros=7642
>                   total number of mallocs used during MatSetValues
>     calls =0
>                     using I-node routines: found 121 nodes, limit used
>     is 5
>             linear system matrix followed by preconditioner matrix:
>             Mat Object:        (fieldsplit_1_)         1 MPI processes
>               type: schurcomplement
>               rows=330, cols=330
>                 Schur complement A11 - A10 inv(A00) A01
>                 A11
>                   Mat Object: (fieldsplit_1_)               1 MPI
>     processes
>                     type: seqaij
>                     rows=330, cols=330
>                     total: nonzeros=7642, allocated nonzeros=7642
>                     total number of mallocs used during MatSetValues
>     calls =0
>                       using I-node routines: found 121 nodes, limit
>     used is 5
>                 A10
>                   Mat Object:               1 MPI processes
>                     type: seqaij
>                     rows=330, cols=2000
>                     total: nonzeros=22800, allocated nonzeros=22800
>                     total number of mallocs used during MatSetValues
>     calls =0
>                       using I-node routines: found 121 nodes, limit
>     used is 5
>                 KSP of A00
>                   KSP Object: (fieldsplit_0_)               1 MPI
>     processes
>                     type: preonly
>                     maximum iterations=10000, initial guess is zero
>                     tolerances:  relative=1e-05, absolute=1e-50,
>     divergence=10000
>                     left preconditioning
>                     using NONE norm type for convergence test
>                   PC Object: (fieldsplit_0_)               1 MPI processes
>                     type: hypre
>                       HYPRE Euclid preconditioning
>                       HYPRE Euclid: number of levels 1
>                     linear system matrix = precond matrix:
>                     Mat Object: (fieldsplit_0_)                 1 MPI
>     processes
>                       type: seqaij
>                       rows=2000, cols=2000
>                       total: nonzeros=40000, allocated nonzeros=40000
>                       total number of mallocs used during MatSetValues
>     calls =0
>                         using I-node routines: found 400 nodes, limit
>     used is 5
>                 A01
>                   Mat Object:               1 MPI processes
>                     type: seqaij
>                     rows=2000, cols=330
>                     total: nonzeros=22800, allocated nonzeros=22800
>                     total number of mallocs used during MatSetValues
>     calls =0
>                       using I-node routines: found 400 nodes, limit
>     used is 5
>             Mat Object:         1 MPI processes
>               type: seqaij
>               rows=330, cols=330
>               total: nonzeros=7642, allocated nonzeros=7642
>               total number of mallocs used during MatSetValues calls =0
>                 using I-node routines: found 121 nodes, limit used is 5
>       linear system matrix = precond matrix:
>       Mat Object:   1 MPI processes
>         type: seqaij
>         rows=2330, cols=2330
>         total: nonzeros=93242, allocated nonzeros=93242
>         total number of mallocs used during MatSetValues calls =0
>           using I-node routines: found 521 nodes, limit used is 5
>
>     I am not completely surprised by this, since the multigrid
>     algorithm might want to know the structure of my vector in order
>     to do a good job at restriction and prolongation. I am just not
>     sure when I should call DMShellSetGlobalVector() or
>     DMShellSetCreateGlobalVector().
>     If I call DMShellSetGlobalVector(), I assume that I need to
>     generate my vector somehow beforehand, but I don't know what is
>     required to make it a valid "template" vector.
>     If I call DMShellSetCreateGlobalVector(), I need to pass it a
>     function that creates a vector, but what kind of vector? A
>     residual or a "template"? This is not very clear to me...
>
>     -- 
>     Best,
>     Luc
>
>
>
>
> -- 
> What most experimenters take for granted before they begin their 
> experiments is infinitely more interesting than any results to which 
> their experiments lead.
> -- Norbert Wiener
