[petsc-users] signal received error; MatNullSpaceTest; Stokes flow solver with pc fieldsplit and schur complement
Matthew Knepley
knepley at gmail.com
Wed Oct 16 13:01:56 CDT 2013
On Wed, Oct 16, 2013 at 12:55 PM, Bishesh Khanal <bisheshkh at gmail.com> wrote:
>
>
>
> On Wed, Oct 16, 2013 at 5:50 PM, Mark F. Adams <mfadams at lbl.gov> wrote:
>
>> You might also test with Jacobi as a sanity check.
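(With the option prefixes used in this thread, that would be
-fieldsplit_0_pc_type jacobi for the A00 block.)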
>>
>> On Oct 16, 2013, at 11:19 AM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>>
>> >
>> > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
>> >
>> > especially the GAMG version, which you can also try in the debugger with the
>> argument -start_in_debugger
>> >
>> >
>> >
>> > On Oct 16, 2013, at 9:32 AM, Dave May <dave.mayhem23 at gmail.com> wrote:
>> >
>> >> Sounds like a memory error.
>> >> I'd run your code through valgrind to double check. The error could be
>> completely unconnected to the nullspaces.
>>
>
> Thanks, I tried them, but it has not worked yet. Here are a couple of things
> I tried: running valgrind with one and with multiple processes for smaller
> domain sizes where no runtime error is thrown. The results are shown
> below.
> (I have also run valgrind on the cluster for the bigger domain size where it
> does throw the run-time error, but that job is still running; valgrind slows
> down the execution quite a bit, I guess.)
>
You can also try running under MPICH, which can be valgrind clean.
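(For example, a separate PETSc build configured with --download-mpich. Note
that the uninitialised-value and writev reports in the valgrind logs below all
point into libgfortran, the Open MPI internals, and librdmacm rather than into
AdLemMain itself.)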
Matt
> 1. ******************** For smaller domain sizes (i.e. the sizes for which
> the program runs and gives results) ********************
> With one process, valgrind does NOT report any errors.
> With multiple processes it reports some errors, but I'm not sure they are
> related to my code. One example with two processes:
> petsc -n 2 valgrind src/AdLemMain -pc_type fieldsplit -pc_fieldsplit_type
> schur -pc_fieldsplit_dm_splits 0 -pc_fieldsplit_0_fields 0,1,2
> -pc_fieldsplit_1_fields 3 -fieldsplit_0_pc_type hypre
> -fieldsplit_0_ksp_converged_reason -ksp_converged_reason
> ==31715== Memcheck, a memory error detector
> ==31716== Memcheck, a memory error detector
> ==31716== Copyright (C) 2002-2010, and GNU GPL'd, by Julian Seward et al.
> ==31716== Using Valgrind-3.6.1 and LibVEX; rerun with -h for copyright info
> ==31716== Command: src/AdLemMain -pc_type fieldsplit -pc_fieldsplit_type
> schur -pc_fieldsplit_dm_splits 0 -pc_fieldsplit_0_fields 0,1,2
> -pc_fieldsplit_1_fields 3 -fieldsplit_0_pc_type hypre
> -fieldsplit_0_ksp_converged_reason -ksp_converged_reason
> ==31716==
> ==31715== Copyright (C) 2002-2010, and GNU GPL'd, by Julian Seward et al.
> ==31715== Using Valgrind-3.6.1 and LibVEX; rerun with -h for copyright info
> ==31715== Command: src/AdLemMain -pc_type fieldsplit -pc_fieldsplit_type
> schur -pc_fieldsplit_dm_splits 0 -pc_fieldsplit_0_fields 0,1,2
> -pc_fieldsplit_1_fields 3 -fieldsplit_0_pc_type hypre
> -fieldsplit_0_ksp_converged_reason -ksp_converged_reason
> ==31715==
> ==31716== Conditional jump or move depends on uninitialised value(s)
> ==31716== at 0x32EEED9BCE: ??? (in /usr/lib64/libgfortran.so.3.0.0)
> ==31716== by 0x32EEED9155: ??? (in /usr/lib64/libgfortran.so.3.0.0)
> ==31716== by 0x32EEE185D7: ??? (in /usr/lib64/libgfortran.so.3.0.0)
> ==31716== by 0x32ECC0F195: call_init.part.0 (in /lib64/ld-2.14.90.so)
> ==31716== by 0x32ECC0F272: _dl_init (in /lib64/ld-2.14.90.so)
> ==31716== by 0x32ECC01719: ??? (in /lib64/ld-2.14.90.so)
> ==31716== by 0xE: ???
> ==31716== by 0x7FF0003EE: ???
> ==31716== by 0x7FF0003FC: ???
> ==31716== by 0x7FF000405: ???
> ==31716== by 0x7FF000410: ???
> ==31716== by 0x7FF000424: ???
> ==31716==
> ==31716== Conditional jump or move depends on uninitialised value(s)
> ==31716== at 0x32EEED9BD9: ??? (in /usr/lib64/libgfortran.so.3.0.0)
> ==31716== by 0x32EEED9155: ??? (in /usr/lib64/libgfortran.so.3.0.0)
> ==31716== by 0x32EEE185D7: ??? (in /usr/lib64/libgfortran.so.3.0.0)
> ==31716== by 0x32ECC0F195: call_init.part.0 (in /lib64/ld-2.14.90.so)
> ==31716== by 0x32ECC0F272: _dl_init (in /lib64/ld-2.14.90.so)
> ==31716== by 0x32ECC01719: ??? (in /lib64/ld-2.14.90.so)
> ==31716== by 0xE: ???
> ==31716== by 0x7FF0003EE: ???
> ==31716== by 0x7FF0003FC: ???
> ==31716== by 0x7FF000405: ???
> ==31716== by 0x7FF000410: ???
> ==31716== by 0x7FF000424: ???
> ==31716==
> dmda of size: (8,8,8)
>
> using schur complement
>
> using user defined split
> Linear solve converged due to CONVERGED_ATOL iterations 0
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 3
> Linear solve converged due to CONVERGED_RTOL iterations 1
> ==31716==
> ==31716== HEAP SUMMARY:
> ==31716== in use at exit: 212,357 bytes in 1,870 blocks
> ==31716== total heap usage: 112,701 allocs, 110,831 frees, 19,698,341
> bytes allocated
> ==31716==
> ==31715==
> ==31715== HEAP SUMMARY:
> ==31715== in use at exit: 187,709 bytes in 1,864 blocks
> ==31715== total heap usage: 112,891 allocs, 111,027 frees, 19,838,487
> bytes allocated
> ==31715==
> ==31716== LEAK SUMMARY:
> ==31716== definitely lost: 0 bytes in 0 blocks
> ==31716== indirectly lost: 0 bytes in 0 blocks
> ==31716== possibly lost: 0 bytes in 0 blocks
> ==31716== still reachable: 212,357 bytes in 1,870 blocks
> ==31716== suppressed: 0 bytes in 0 blocks
> ==31716== Rerun with --leak-check=full to see details of leaked memory
> ==31716==
> ==31716== For counts of detected and suppressed errors, rerun with: -v
> ==31716== Use --track-origins=yes to see where uninitialised values come
> from
> ==31716== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 2 from 2)
> ==31715== LEAK SUMMARY:
> ==31715== definitely lost: 0 bytes in 0 blocks
> ==31715== indirectly lost: 0 bytes in 0 blocks
> ==31715== possibly lost: 0 bytes in 0 blocks
> ==31715== still reachable: 187,709 bytes in 1,864 blocks
> ==31715== suppressed: 0 bytes in 0 blocks
> ==31715== Rerun with --leak-check=full to see details of leaked memory
> ==31715==
> ==31715== For counts of detected and suppressed errors, rerun with: -v
> ==31715== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 2 from 2)
>
>
>
> 2. ******************** For the small 18X18X18 grid (where the
> solver converges; -fieldsplit_0_pc_type jacobi used) ********************
> Some of the errors reported when I run valgrind with 2 processes:
> .....
>
> ==18003== Syscall param writev(vector[...]) points to uninitialised byte(s)
> ==18003== at 0x962E047: writev (in /lib64/libc-2.14.90.so)
> ==18003== by 0xBB34E22: mca_oob_tcp_msg_send_handler (oob_tcp_msg.c:249)
> ==18003== by 0xBB35D52: mca_oob_tcp_peer_send (oob_tcp_peer.c:204)
> ==18003== by 0xBB39A36: mca_oob_tcp_send_nb (oob_tcp_send.c:167)
> ==18003== by 0xB92AB10: orte_rml_oob_send (rml_oob_send.c:136)
> ==18003== by 0xB92B0BF: orte_rml_oob_send_buffer (rml_oob_send.c:270)
> ==18003== by 0xBF44147: modex (grpcomm_bad_module.c:573)
> ==18003== by 0x81162B1: ompi_mpi_init (ompi_mpi_init.c:541)
> ==18003== by 0x812EC31: PMPI_Init_thread (pinit_thread.c:84)
> ==18003== by 0x4F903D9: PetscInitialize (pinit.c:675)
> ==18003== by 0x505088: main (PetscAdLemMain.cxx:25)
> ==18003== Address 0xfdf9d45 is 197 bytes inside a block of size 512
> alloc'd
> ==18003== at 0x4C2A5B2: realloc (vg_replace_malloc.c:525)
> ==18003== by 0x81A4286: opal_dss_buffer_extend
> (dss_internal_functions.c:63)
> ==18003== by 0x81A4685: opal_dss_copy_payload (dss_load_unload.c:164)
> ==18003== by 0x817C07E: orte_grpcomm_base_pack_modex_entries
> (grpcomm_base_modex.c:861)
> ==18003== by 0xBF44042: modex (grpcomm_bad_module.c:563)
> ==18003== by 0x81162B1: ompi_mpi_init (ompi_mpi_init.c:541)
> ==18003== by 0x812EC31: PMPI_Init_thread (pinit_thread.c:84)
> ==18003== by 0x4F903D9: PetscInitialize (pinit.c:675)
> ==18003== by 0x505088: main (PetscAdLemMain.cxx:25)
> ==18003==
>
> Then the solver converges,
> and again there are some errors (I doubt they are caused by my code at all):
>
> ==18003== Conditional jump or move depends on uninitialised value(s)
> ==18003== at 0xDDCC1B2: rdma_destroy_id (in
> /usr/lib64/librdmacm.so.1.0.0)
> ==18003== by 0xE200A23: id_context_destructor
> (btl_openib_connect_rdmacm.c:185)
> ==18003== by 0xE1FFED0: rdmacm_component_finalize (opal_object.h:448)
> ==18003== by 0xE1FE3AA: ompi_btl_openib_connect_base_finalize
> (btl_openib_connect_base.c:496)
> ==18003== by 0xE1EA9E6: btl_openib_component_close
> (btl_openib_component.c:251)
> ==18003== by 0x81BD411: mca_base_components_close
> (mca_base_components_close.c:53)
> ==18003== by 0x8145E1F: mca_btl_base_close (btl_base_close.c:62)
> ==18003== by 0xD5A3DE8: mca_pml_ob1_component_close
> (pml_ob1_component.c:156)
> ==18003== by 0x81BD411: mca_base_components_close
> (mca_base_components_close.c:53)
> ==18003== by 0x8154E37: mca_pml_base_close (pml_base_close.c:66)
> ==18003== by 0x8117142: ompi_mpi_finalize (ompi_mpi_finalize.c:306)
> ==18003== by 0x4F94D6D: PetscFinalize (pinit.c:1276)
> ==180
> ..........................
>
> ==18003== HEAP SUMMARY:
> ==18003== in use at exit: 551,540 bytes in 3,294 blocks
> ==18003== total heap usage: 147,859 allocs, 144,565 frees, 84,461,908
> bytes allocated
> ==18003==
> ==18003== LEAK SUMMARY:
> ==18003== definitely lost: 124,956 bytes in 108 blocks
> ==18003== indirectly lost: 32,380 bytes in 54 blocks
> ==18003== possibly lost: 0 bytes in 0 blocks
> ==18003== still reachable: 394,204 bytes in 3,132 blocks
> ==18003== suppressed: 0 bytes in 0 blocks
> ==18003== Rerun with --leak-check=full to see details of leaked memory
> ==18003==
> ==18003== For counts of detected and suppressed errors, rerun with: -v
> ==18003== Use --track-origins=yes to see where uninitialised values come
> from
> ==18003== ERROR SUMMARY: 142 errors from 32 contexts (suppressed: 2 from 2)
>
>
>
>> >>
>> >> Cheers,
>> >> Dave
>> >>
>> >>
>> >> On 16 October 2013 16:18, Bishesh Khanal <bisheshkh at gmail.com> wrote:
>> >> Dear all,
>> >> I'm trying to solve a Stokes flow with constant viscosity but with a
>> non-zero divergence prescribed in the RHS.
>> >>
>> >> I have a matrix created from a DMDA (mDa) with 4 dofs: vx, vy, vz and p,
>> respectively.
>> >> I have another DMDA (mDaP) of the same size but with 1 dof, corresponding
>> to p only.
>> >> I have assigned the null space for constant pressure inside the code.
>> I have assigned two null-space bases: one corresponding to a vector created
>> from mDa, which is attached to the outer KSP, and a second corresponding to
>> a vector created from mDaP, which is attached to the KSP obtained from the
>> fieldsplit corresponding to the Schur complement.
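For reference, a null-space setup like the one described above typically looks
roughly like the sketch below (PETSc 3.4 C API; the function and variable names
are illustrative, not taken from the actual code, and the two basis vectors are
assumed to be already filled with the constant-pressure pattern and normalized):

#include <petscksp.h>

/* Illustrative sketch: attach a constant-pressure null space to the outer
 * KSP and to the Schur-complement split, as described above.
 * vAll : basis vector laid out like the 4-dof DMDA (vx,vy,vz,p), zero in the
 *        velocity dofs and constant in p, normalized.
 * vP   : the corresponding pressure-only vector from the 1-dof DMDA.        */
static PetscErrorCode AttachNullSpaces(KSP ksp, Vec vAll, Vec vP)
{
  PC             pc;
  KSP           *subksp;                /* sub-KSPs created by PCFIELDSPLIT */
  PetscInt       nsplits;
  MatNullSpace   nspOuter, nspSchur;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  /* null space of the full system: the single constant-pressure vector */
  ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, 1, &vAll, &nspOuter);CHKERRQ(ierr);
  ierr = KSPSetNullSpace(ksp, nspOuter);CHKERRQ(ierr);

  /* the Schur-complement solver is the second split (index 1); this assumes
     the fieldsplit PC has already been set up so that the splits exist      */
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCFieldSplitGetSubKSP(pc, &nsplits, &subksp);CHKERRQ(ierr);
  ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, 1, &vP, &nspSchur);CHKERRQ(ierr);
  ierr = KSPSetNullSpace(subksp[1], nspSchur);CHKERRQ(ierr);

  ierr = MatNullSpaceDestroy(&nspOuter);CHKERRQ(ierr);
  ierr = MatNullSpaceDestroy(&nspSchur);CHKERRQ(ierr);
  ierr = PetscFree(subksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

(Explicit basis vectors, rather than the has_cnst flag alone, are used here
because the constant mode lives only in the pressure dofs of the full system,
matching the description above.)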
>> >>
>> >> Now when running the code, the solver converges for up to a certain
>> size, e.g. 92X110X92 (the results for this convergent case with -ksp_view
>> are given at the end of the email).
>> >> But when I double the size of the grid in each dimension, it gives me
>> a run-time error.
>> >>
>> >> The options I've used are of the kind:
>> >> -pc_type fieldsplit -pc_fieldsplit_type schur -pc_fieldsplit_dm_splits
>> 0 -pc_fieldsplit_0_fields 0,1,2 -pc_fieldsplit_1_fields 3
>> -fieldsplit_0_pc_type hypre -fieldsplit_0_ksp_converged_reason
>> -fieldsplit_1_ksp_converged_reason -ksp_converged_reason -ksp_view
>> >>
>> >> Here are:
>> >> 1. Error message when using hypre for fieldsplit_0
>> >> 2. Error message when using gamg for fieldsplit_0
>> >> 3. -ksp_view output of the working case using hypre for fieldsplit_0
>> >>
>> >> I get the following error when I use hypre:
>> >> 1.
>> ******************************************************************************************************
>> >> [5]PETSC ERROR: --------------------- Error Message
>> ------------------------------------
>> >> [5]PETSC ERROR: Signal received!
>> >> [5]PETSC ERROR:
>> ------------------------------------------------------------------------
>> >> [5]PETSC ERROR: Petsc Release Version 3.4.3, Oct, 15, 2013
>> >> [5]PETSC ERROR: See docs/changes/index.html for recent updates.
>> >> [5]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
>> >> [5]PETSC ERROR: See docs/index.html for manual pages.
>> >> [5]PETSC ERROR:
>> ------------------------------------------------------------------------
>> >> [5]PETSC ERROR:
>> /epi/asclepios2/bkhanal/works/AdLemModel/build/src/AdLemMain on a
>> arch-linux2-cxx-debug named nef001 by bkhanal Wed Oct 16 15:08:42 2013
>> >> [5]PETSC ERROR: Libraries linked from
>> /epi/asclepios2/bkhanal/petscDebug/lib
>> >> [5]PETSC ERROR: Configure run at Wed Oct 16 14:18:48 2013
>> >> [5]PETSC ERROR: Configure options
>> --with-mpi-dir=/opt/openmpi-gcc/current/ --with-shared-libraries
>> --prefix=/epi/asclepios2/bkhanal/petscDebug -download-f-blas-lapack=1
>> --download-metis --download-parmetis --download-superlu_dist
>> --download-scalapack --download-mumps --download-hypre --with-clanguage=cxx
>> >> [5]PETSC ERROR:
>> ------------------------------------------------------------------------
>> >> [5]PETSC ERROR: User provided function() line 0 in unknown directory
>> unknown file
>> >> [6]PETSC ERROR:
>> ------------------------------------------------------------------------
>> >> [6]PETSC ERROR: Caught signal number 15 Terminate: Somet process (or
>> the batch system) has told this process to end
>> >> [6]PETSC ERROR: Try option -start_in_debugger or
>> -on_error_attach_debugger
>> >> [6]PETSC ERROR: or see
>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
>> >> [6]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X
>> to find memory corruption errors
>> >> [6]PETSC ERROR: likely location of problem given in stack below
>> >> [6]PETSC ERROR: --------------------- Stack Frames
>> ------------------------------------
>> >> [6]PETSC ERROR: Note: The EXACT line numbers in the stack are not
>> available,
>> >> [6]PETSC ERROR: INSTEAD the line number of the start of the
>> function
>> >> [6]PETSC ERROR: is given.
>> >> [6]PETSC ERROR: [6] HYPRE_SetupXXX line 130
>> /tmp/petsc-3.4.3/src/ksp/pc/impls/hypre/hypre.c
>> >> [6]PETSC ERROR: [6] PCSetUp_HYPRE line 94
>> /tmp/petsc-3.4.3/src/ksp/pc/impls/hypre/hypre.c
>> >> [6]PETSC ERROR: [6] PCSetUp line 868
>> /tmp/petsc-3.4.3/src/ksp/pc/interface/precon.c
>> >> [6]PETSC ERROR: [6] KSPSetUp line 192
>> /tmp/petsc-3.4.3/src/ksp/ksp/interface/itfunc.c
>> >> [6]PETSC ERROR: [6] KSPSolve line 356
>> /tmp/petsc-3.4.3/src/ksp/ksp/interface/itfunc.c
>> >> [6]PETSC ERROR: [6] MatMult_SchurComplement line 75
>> /tmp/petsc-3.4.3/src/ksp/ksp/utils/schurm.c
>> >> [6]PETSC ERROR: [6] MatNullSpaceTest line 408
>> /tmp/petsc-3.4.3/src/mat/interface/matnull.c
>> >> [6]PETSC ERROR: [6] solveModel line 113
>> "unknowndirectory/"/epi/asclepios2/bkhanal/works/AdLemModel/src/PetscAdLemTaras3D.cxx
>> >>
>> >>
>> >> 2.
>> ****************************************************************************************************
>> >> Using gamg instead has errors like following:
>> >>
>> >> [5]PETSC ERROR: --------------------- Stack Frames
>> ------------------------------------
>> >> [5]PETSC ERROR: Note: The EXACT line numbers in the stack are not
>> available,
>> >> [5]PETSC ERROR: INSTEAD the line number of the start of the
>> function
>> >> [5]PETSC ERROR: is given.
>> >> [5]PETSC ERROR: [5] PetscLLCondensedAddSorted line 1202
>> /tmp/petsc-3.4.3/include/petsc-private/matimpl.h
>> >> [5]PETSC ERROR: [5] MatPtAPSymbolic_MPIAIJ_MPIAIJ line 124
>> /tmp/petsc-3.4.3/src/mat/impls/aij/mpi/mpiptap.c
>> >> [5]PETSC ERROR: [5] MatPtAP_MPIAIJ_MPIAIJ line 80
>> /tmp/petsc-3.4.3/src/mat/impls/aij/mpi/mpiptap.c
>> >> [5]PETSC ERROR: [5] MatPtAP line 8223
>> /tmp/petsc-3.4.3/src/mat/interface/matrix.c
>> >> [5]PETSC ERROR: [5] createLevel line 144
>> /tmp/petsc-3.4.3/src/ksp/pc/impls/gamg/gamg.c
>> >> [5]PETSC ERROR: [5] PCSetUp_GAMG line 545
>> /tmp/petsc-3.4.3/src/ksp/pc/impls/gamg/gamg.c
>> >> [5]PETSC ERROR: [5] PCSetUp line 868
>> /tmp/petsc-3.4.3/src/ksp/pc/interface/precon.c
>> >> [5]PETSC ERROR: [5] KSPSetUp line 192
>> /tmp/petsc-3.4.3/src/ksp/ksp/interface/itfunc.c
>> >> [5]PETSC ERROR: [5] KSPSolve line 356
>> /tmp/petsc-3.4.3/src/ksp/ksp/interface/itfunc.c
>> >> [5]PETSC ERROR: [5] MatMult_SchurComplement line 75
>> /tmp/petsc-3.4.3/src/ksp/ksp/utils/schurm.c
>> >> [5]PETSC ERROR: [5] MatNullSpaceTest line 408
>> /tmp/petsc-3.4.3/src/mat/interface/matnull.c
>> >> [5]PETSC ERROR: [5] solveModel line 113
>> "unknowndirectory/"/epi/asclepios2/bkhanal/works/AdLemModel/src/PetscAdLemTaras3D.cxx
>> >>
>> >>
>> >> 3.
>> ********************************************************************************************************
>> >>
>> >> BUT it does give me results when I use a domain of size 91X109X91
>> (half the size in each dimension). The result along with -ksp_view in this
>> case is as follows:
>> >>
>> >> Linear solve converged due to CONVERGED_RTOL iterations 2
>> >> KSP Object: 64 MPI processes
>> >> type: gmres
>> >> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
>> Orthogonalization with no iterative refinement
>> >> GMRES: happy breakdown tolerance 1e-30
>> >> maximum iterations=10000, initial guess is zero
>> >> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>> >> left preconditioning
>> >> has attached null space
>> >> using PRECONDITIONED norm type for convergence test
>> >> PC Object: 64 MPI processes
>> >> type: fieldsplit
>> >> FieldSplit with Schur preconditioner, blocksize = 4, factorization
>> FULL
>> >> Preconditioner for the Schur complement formed from user provided
>> matrix
>> >> Split info:
>> >> Split number 0 Fields 0, 1, 2
>> >> Split number 1 Fields 3
>> >> KSP solver for A00 block
>> >> KSP Object: (fieldsplit_0_) 64 MPI processes
>> >> type: gmres
>> >> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
>> Orthogonalization with no iterative refinement
>> >> GMRES: happy breakdown tolerance 1e-30
>> >> maximum iterations=10000, initial guess is zero
>> >> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>> >> left preconditioning
>> >> using PRECONDITIONED norm type for convergence test
>> >> PC Object: (fieldsplit_0_) 64 MPI processes
>> >> type: hypre
>> >> HYPRE BoomerAMG preconditioning
>> >> HYPRE BoomerAMG: Cycle type V
>> >> HYPRE BoomerAMG: Maximum number of levels 25
>> >> HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1
>> >> HYPRE BoomerAMG: Convergence tolerance PER hypre call 0
>> >> HYPRE BoomerAMG: Threshold for strong coupling 0.25
>> >> HYPRE BoomerAMG: Interpolation truncation factor 0
>> >> HYPRE BoomerAMG: Interpolation: max elements per row 0
>> >> HYPRE BoomerAMG: Number of levels of aggressive coarsening 0
>> >> HYPRE BoomerAMG: Number of paths for aggressive coarsening 1
>> >> HYPRE BoomerAMG: Maximum row sums 0.9
>> >> HYPRE BoomerAMG: Sweeps down 1
>> >> HYPRE BoomerAMG: Sweeps up 1
>> >> HYPRE BoomerAMG: Sweeps on coarse 1
>> >> HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi
>> >> HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi
>> >> HYPRE BoomerAMG: Relax on coarse Gaussian-elimination
>> >> HYPRE BoomerAMG: Relax weight (all) 1
>> >> HYPRE BoomerAMG: Outer relax weight (all) 1
>> >> HYPRE BoomerAMG: Using CF-relaxation
>> >> HYPRE BoomerAMG: Measure type local
>> >> HYPRE BoomerAMG: Coarsen type Falgout
>> >> HYPRE BoomerAMG: Interpolation type classical
>> >> linear system matrix = precond matrix:
>> >> Matrix Object: 64 MPI processes
>> >> type: mpiaij
>> >> rows=2793120, cols=2793120
>> >> total: nonzeros=221624352, allocated nonzeros=221624352
>> >> total number of mallocs used during MatSetValues calls =0
>> >> using I-node (on process 0) routines: found 14812 nodes,
>> limit used is 5
>> >> KSP solver for S = A11 - A10 inv(A00) A01
>> >> KSP Object: (fieldsplit_1_) 64 MPI processes
>> >> type: gmres
>> >> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
>> Orthogonalization with no iterative refinement
>> >> GMRES: happy breakdown tolerance 1e-30
>> >> maximum iterations=10000, initial guess is zero
>> >> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>> >> left preconditioning
>> >> has attached null space
>> >> using PRECONDITIONED norm type for convergence test
>> >> PC Object: (fieldsplit_1_) 64 MPI processes
>> >> type: bjacobi
>> >> block Jacobi: number of blocks = 64
>> >> Local solve is same for all blocks, in the following KSP and
>> PC objects:
>> >> KSP Object: (fieldsplit_1_sub_) 1 MPI processes
>> >> type: preonly
>> >> maximum iterations=10000, initial guess is zero
>> >> tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>> >> left preconditioning
>> >> using NONE norm type for convergence test
>> >> PC Object: (fieldsplit_1_sub_) 1 MPI processes
>> >> type: ilu
>> >> ILU: out-of-place factorization
>> >> 0 levels of fill
>> >> tolerance for zero pivot 2.22045e-14
>> >> using diagonal shift on blocks to prevent zero pivot
>> [INBLOCKS]
>> >> matrix ordering: natural
>> >> factor fill ratio given 1, needed 1
>> >> Factored matrix follows:
>> >> Matrix Object: 1 MPI processes
>> >> type: seqaij
>> >> rows=14812, cols=14812
>> >> package used to perform factorization: petsc
>> >> total: nonzeros=368098, allocated nonzeros=368098
>> >> total number of mallocs used during MatSetValues
>> calls =0
>> >> not using I-node routines
>> >> linear system matrix = precond matrix:
>> >> Matrix Object: 1 MPI processes
>> >> type: seqaij
>> >> rows=14812, cols=14812
>> >> total: nonzeros=368098, allocated nonzeros=368098
>> >> total number of mallocs used during MatSetValues calls =0
>> >> not using I-node routines
>> >>
>> >> linear system matrix followed by preconditioner matrix:
>> >> Matrix Object: 64 MPI processes
>> >> type: schurcomplement
>> >> rows=931040, cols=931040
>> >> Schur complement A11 - A10 inv(A00) A01
>> >> A11
>> >> Matrix Object: 64 MPI processes
>> >> type: mpiaij
>> >> rows=931040, cols=931040
>> >> total: nonzeros=24624928, allocated nonzeros=24624928
>> >> total number of mallocs used during MatSetValues calls
>> =0
>> >> not using I-node (on process 0) routines
>> >> A10
>> >> Matrix Object: 64 MPI processes
>> >> type: mpiaij
>> >> rows=931040, cols=2793120
>> >> total: nonzeros=73874784, allocated nonzeros=73874784
>> >> total number of mallocs used during MatSetValues calls
>> =0
>> >> not using I-node (on process 0) routines
>> >> KSP of A00
>> >> KSP Object: (fieldsplit_0_) 64
>> MPI processes
>> >> type: gmres
>> >> GMRES: restart=30, using Classical (unmodified)
>> Gram-Schmidt Orthogonalization with no iterative refinement
>> >> GMRES: happy breakdown tolerance 1e-30
>> >> maximum iterations=10000, initial guess is zero
>> >> tolerances: relative=1e-05, absolute=1e-50,
>> divergence=10000
>> >> left preconditioning
>> >> using PRECONDITIONED norm type for convergence test
>> >> PC Object: (fieldsplit_0_) 64
>> MPI processes
>> >> type: hypre
>> >> HYPRE BoomerAMG preconditioning
>> >> HYPRE BoomerAMG: Cycle type V
>> >> HYPRE BoomerAMG: Maximum number of levels 25
>> >> HYPRE BoomerAMG: Maximum number of iterations PER
>> hypre call 1
>> >> HYPRE BoomerAMG: Convergence tolerance PER hypre call
>> 0
>> >> HYPRE BoomerAMG: Threshold for strong coupling 0.25
>> >> HYPRE BoomerAMG: Interpolation truncation factor 0
>> >> HYPRE BoomerAMG: Interpolation: max elements per row 0
>> >> HYPRE BoomerAMG: Number of levels of aggressive
>> coarsening 0
>> >> HYPRE BoomerAMG: Number of paths for aggressive
>> coarsening 1
>> >> HYPRE BoomerAMG: Maximum row sums 0.9
>> >> HYPRE BoomerAMG: Sweeps down 1
>> >> HYPRE BoomerAMG: Sweeps up 1
>> >> HYPRE BoomerAMG: Sweeps on coarse 1
>> >> HYPRE BoomerAMG: Relax down
>> symmetric-SOR/Jacobi
>> >> HYPRE BoomerAMG: Relax up
>> symmetric-SOR/Jacobi
>> >> HYPRE BoomerAMG: Relax on coarse
>> Gaussian-elimination
>> >> HYPRE BoomerAMG: Relax weight (all) 1
>> >> HYPRE BoomerAMG: Outer relax weight (all) 1
>> >> HYPRE BoomerAMG: Using CF-relaxation
>> >> HYPRE BoomerAMG: Measure type local
>> >> HYPRE BoomerAMG: Coarsen type Falgout
>> >> HYPRE BoomerAMG: Interpolation type classical
>> >> linear system matrix = precond matrix:
>> >> Matrix Object: 64 MPI processes
>> >> type: mpiaij
>> >> rows=2793120, cols=2793120
>> >> total: nonzeros=221624352, allocated
>> nonzeros=221624352
>> >> total number of mallocs used during MatSetValues
>> calls =0
>> >> using I-node (on process 0) routines: found 14812
>> nodes, limit used is 5
>> >> A01
>> >> Matrix Object: 64 MPI processes
>> >> type: mpiaij
>> >> rows=2793120, cols=931040
>> >> total: nonzeros=73874784, allocated nonzeros=73874784
>> >> total number of mallocs used during MatSetValues calls
>> =0
>> >> using I-node (on process 0) routines: found 14812
>> nodes, limit used is 5
>> >> Matrix Object: 64 MPI processes
>> >> type: mpiaij
>> >> rows=931040, cols=931040
>> >> total: nonzeros=24624928, allocated nonzeros=24624928
>> >> total number of mallocs used during MatSetValues calls =0
>> >> not using I-node (on process 0) routines
>> >> linear system matrix = precond matrix:
>> >> Matrix Object: 64 MPI processes
>> >> type: mpiaij
>> >> rows=3724160, cols=3724160, bs=4
>> >> total: nonzeros=393998848, allocated nonzeros=393998848
>> >> total number of mallocs used during MatSetValues calls =0
>> >>
>> >>
>> ******************************************************************************************************
>> >> What could be going wrong here? Is it something related to the null-space
>> setting? I do not know why it does not arise for smaller domain sizes!
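Note that both stack traces end inside MatNullSpaceTest on the Schur
complement: testing that null space applies S = A11 - A10 inv(A00) A01, which
triggers an inner KSPSolve with A00 and hence the hypre/gamg PCSetUp, and that
is where the signal is received. If it helps to reproduce that step in
isolation, the test itself is just the call below (a sketch with illustrative
names, assuming the Schur matrix and the attached null space are already at
hand):

  Mat            S;        /* the MATSCHURCOMPLEMENT operator of the fieldsplit_1 KSP */
  MatNullSpace   nspSchur; /* the constant-pressure null space attached to it */
  PetscBool      isNull;
  PetscErrorCode ierr;

  ierr = MatNullSpaceTest(nspSchur, S, &isNull);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "constant pressure %s a null space of S\n",
                     isNull ? "is" : "is NOT");CHKERRQ(ierr);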
>> >>
>> >
>>
>>
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener