<div dir="ltr"><div>Sounds like a memory error.<br>I'd run your code through valgrind to double check. The error could be completely unconnected to the nullspaces.<br><br>Cheers,<br></div>� Dave <br></div><div class="gmail_extra">
<br><br><div class="gmail_quote">On 16 October 2013 16:18, Bishesh Khanal <span dir="ltr"><<a href="mailto:bisheshkh@gmail.com" target="_blank">bisheshkh@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr"><div><div>Dear all,<br></div><div>I'm trying to solve a stokes flow with constant viscosity but with non-zero divergence prescribed in the rhs.<br><br></div><div>I have a matrix created from DMDA (mDa) of 4 dofs: vx, vy, vz and p respectively.<br>
I have another DMDA (mDaP) of same size but of 1 dof corresponding to only p.<br>I have assigned the null space for constant pressure inside the code. I have assigned two nullspace basis:� One corresponding to vector created from mDa that is assigned to outer ksp. Second corresponding to vector created from mDaP that is assigned to a ksp obtained from the fieldsplit corresponding to the schur complement. <br>
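To make the setup concrete, here is a minimal sketch of how the two null spaces can be attached. It assumes the PETSc 3.4 API (KSPSetNullSpace), that ksp is the outer solver and that mDa/mDaP are the DMDAs described above; the loop that fills the constant-pressure basis vector on the 4-dof DMDA is omitted, so this is an outline of the setup rather than the exact code in PetscAdLemTaras3D.cxx:

/* Sketch only: attach a constant-pressure null space to the outer KSP and to
   the Schur-complement split (PETSc 3.4 API, error checking abbreviated). */
PetscErrorCode ierr;
Vec            vNull, pNull;      /* basis vectors on mDa and mDaP respectively */
MatNullSpace   nspOuter, nspSchur;
PC             pc;
KSP            *subksp;
PetscInt       nSplits;

/* Basis vector on the 4-dof DMDA: 1 on the p dof, 0 on vx,vy,vz (filling loop omitted). */
ierr = DMCreateGlobalVector(mDa, &vNull);CHKERRQ(ierr);
/* ... set vNull to the constant-pressure mode ... */
ierr = VecNormalize(vNull, NULL);CHKERRQ(ierr);
ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, 1, &vNull, &nspOuter);CHKERRQ(ierr);
ierr = KSPSetNullSpace(ksp, nspOuter);CHKERRQ(ierr);             /* outer KSP */

/* Basis vector on the pressure-only DMDA: the constant vector. */
ierr = DMCreateGlobalVector(mDaP, &pNull);CHKERRQ(ierr);
ierr = VecSet(pNull, 1.0);CHKERRQ(ierr);
ierr = VecNormalize(pNull, NULL);CHKERRQ(ierr);
ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, 1, &pNull, &nspSchur);CHKERRQ(ierr);

/* After the PC has been set up, fetch the KSP of the Schur-complement split
   and attach the pressure null space to it. */
ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
ierr = PCFieldSplitGetSubKSP(pc, &nSplits, &subksp);CHKERRQ(ierr);
ierr = KSPSetNullSpace(subksp[1], nspSchur);CHKERRQ(ierr);       /* fieldsplit_1 (Schur) */
ierr = PetscFree(subksp);CHKERRQ(ierr);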
Now when running the code, the solver converges up to a certain size, e.g. 92X110X92 (the results for this convergent case with -ksp_view are given at the end of the email).
But when I double the size of the grid in each dimension, it gives me a run-time error.
The options I've used are of the kind:
-pc_type fieldsplit
-pc_fieldsplit_type schur -pc_fieldsplit_dm_splits 0
-pc_fieldsplit_0_fields 0,1,2 -pc_fieldsplit_1_fields 3
-fieldsplit_0_pc_type hypre -fieldsplit_0_ksp_converged_reason
-fieldsplit_1_ksp_converged_reason -ksp_converged_reason -ksp_view

Here are:
1. Error message when using hypre for fieldsplit_0
2. Error message when using gamg for fieldsplit_0
3. -ksp_view of the working case using hypre for fieldsplit_0

I get the following error when I use hypre:
1. ******************************************************************************************************
[5]PETSC ERROR: --------------------- Error Message ------------------------------------
[5]PETSC ERROR: Signal received!
[5]PETSC ERROR: ------------------------------------------------------------------------
[5]PETSC ERROR: Petsc Release Version 3.4.3, Oct, 15, 2013
[5]PETSC ERROR: See docs/changes/index.html for recent updates.
[5]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[5]PETSC ERROR: See docs/index.html for manual pages.
[5]PETSC ERROR: ------------------------------------------------------------------------
[5]PETSC ERROR: /epi/asclepios2/bkhanal/works/AdLemModel/build/src/AdLemMain on a arch-linux2-cxx-debug named nef001 by bkhanal Wed Oct 16 15:08:42 2013
[5]PETSC ERROR: Libraries linked from /epi/asclepios2/bkhanal/petscDebug/lib
[5]PETSC ERROR: Configure run at Wed Oct 16 14:18:48 2013
[5]PETSC ERROR: Configure options --with-mpi-dir=/opt/openmpi-gcc/current/ --with-shared-libraries --prefix=/epi/asclepios2/bkhanal/petscDebug -download-f-blas-lapack=1 --download-metis --download-parmetis --download-superlu_dist --download-scalapack --download-mumps --download-hypre --with-clanguage=cxx
[5]PETSC ERROR: ------------------------------------------------------------------------
[5]PETSC ERROR: User provided function() line 0 in unknown directory unknown file
[6]PETSC ERROR: ------------------------------------------------------------------------
[6]PETSC ERROR: Caught signal number 15 Terminate: Somet process (or the batch system) has told this process to end
[6]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[6]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[6]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[6]PETSC ERROR: likely location of problem given in stack below
[6]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[6]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
[6]PETSC ERROR:       INSTEAD the line number of the start of the function
[6]PETSC ERROR:       is given.
[6]PETSC ERROR: [6] HYPRE_SetupXXX line 130 /tmp/petsc-3.4.3/src/ksp/pc/impls/hypre/hypre.c
[6]PETSC ERROR: [6] PCSetUp_HYPRE line 94 /tmp/petsc-3.4.3/src/ksp/pc/impls/hypre/hypre.c
[6]PETSC ERROR: [6] PCSetUp line 868 /tmp/petsc-3.4.3/src/ksp/pc/interface/precon.c
[6]PETSC ERROR: [6] KSPSetUp line 192 /tmp/petsc-3.4.3/src/ksp/ksp/interface/itfunc.c
[6]PETSC ERROR: [6] KSPSolve line 356 /tmp/petsc-3.4.3/src/ksp/ksp/interface/itfunc.c
[6]PETSC ERROR: [6] MatMult_SchurComplement line 75 /tmp/petsc-3.4.3/src/ksp/ksp/utils/schurm.c
[6]PETSC ERROR: [6] MatNullSpaceTest line 408 /tmp/petsc-3.4.3/src/mat/interface/matnull.c
[6]PETSC ERROR: [6] solveModel line 113 "unknowndirectory/"/epi/asclepios2/bkhanal/works/AdLemModel/src/PetscAdLemTaras3D.cxx

2. ****************************************************************************************************

Using gamg instead gives errors like the following:

[5]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[5]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
[5]PETSC ERROR:       INSTEAD the line number of the start of the function
[5]PETSC ERROR:       is given.
[5]PETSC ERROR: [5] PetscLLCondensedAddSorted line 1202 /tmp/petsc-3.4.3/include/petsc-private/matimpl.h
[5]PETSC ERROR: [5] MatPtAPSymbolic_MPIAIJ_MPIAIJ line 124 /tmp/petsc-3.4.3/src/mat/impls/aij/mpi/mpiptap.c
[5]PETSC ERROR: [5] MatPtAP_MPIAIJ_MPIAIJ line 80 /tmp/petsc-3.4.3/src/mat/impls/aij/mpi/mpiptap.c
[5]PETSC ERROR: [5] MatPtAP line 8223 /tmp/petsc-3.4.3/src/mat/interface/matrix.c
[5]PETSC ERROR: [5] createLevel line 144 /tmp/petsc-3.4.3/src/ksp/pc/impls/gamg/gamg.c
[5]PETSC ERROR: [5] PCSetUp_GAMG line 545 /tmp/petsc-3.4.3/src/ksp/pc/impls/gamg/gamg.c
[5]PETSC ERROR: [5] PCSetUp line 868 /tmp/petsc-3.4.3/src/ksp/pc/interface/precon.c
[5]PETSC ERROR: [5] KSPSetUp line 192 /tmp/petsc-3.4.3/src/ksp/ksp/interface/itfunc.c
[5]PETSC ERROR: [5] KSPSolve line 356 /tmp/petsc-3.4.3/src/ksp/ksp/interface/itfunc.c
[5]PETSC ERROR: [5] MatMult_SchurComplement line 75 /tmp/petsc-3.4.3/src/ksp/ksp/utils/schurm.c
[5]PETSC ERROR: [5] MatNullSpaceTest line 408 /tmp/petsc-3.4.3/src/mat/interface/matnull.c
[5]PETSC ERROR: [5] solveModel line 113 "unknowndirectory/"/epi/asclepios2/bkhanal/works/AdLemModel/src/PetscAdLemTaras3D.cxx

3. ********************************************************************************************************

BUT, it does give me results when I use a domain of size 91X109X91 (half the size in each dimension). The result along with -ksp_view in this case is as follows:

Linear solve converged due to CONVERGED_RTOL iterations 2
KSP Object: 64 MPI processes
  type: gmres
    GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
  left preconditioning
  has attached null space
  using PRECONDITIONED norm type for convergence test
PC Object: 64 MPI processes
  type: fieldsplit
    FieldSplit with Schur preconditioner, blocksize = 4, factorization FULL
    Preconditioner for the Schur complement formed from user provided matrix
    Split info:
    Split number 0 Fields  0, 1, 2
    Split number 1 Fields  3
    KSP solver for A00 block
      KSP Object:      (fieldsplit_0_)       64 MPI processes
        type: gmres
          GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
          GMRES: happy breakdown tolerance 1e-30
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using PRECONDITIONED norm type for convergence test
      PC Object:      (fieldsplit_0_)       64 MPI processes
        type: hypre
          HYPRE BoomerAMG preconditioning
          HYPRE BoomerAMG: Cycle type V
          HYPRE BoomerAMG: Maximum number of levels 25
          HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1
          HYPRE BoomerAMG: Convergence tolerance PER hypre call 0
          HYPRE BoomerAMG: Threshold for strong coupling 0.25
          HYPRE BoomerAMG: Interpolation truncation factor 0
          HYPRE BoomerAMG: Interpolation: max elements per row 0
          HYPRE BoomerAMG: Number of levels of aggressive coarsening 0
          HYPRE BoomerAMG: Number of paths for aggressive coarsening 1
          HYPRE BoomerAMG: Maximum row sums 0.9
          HYPRE BoomerAMG: Sweeps down         1
          HYPRE BoomerAMG: Sweeps up           1
          HYPRE BoomerAMG: Sweeps on coarse    1
          HYPRE BoomerAMG: Relax down          symmetric-SOR/Jacobi
          HYPRE BoomerAMG: Relax up            symmetric-SOR/Jacobi
          HYPRE BoomerAMG: Relax on coarse     Gaussian-elimination
          HYPRE BoomerAMG: Relax weight  (all)      1
          HYPRE BoomerAMG: Outer relax weight (all) 1
          HYPRE BoomerAMG: Using CF-relaxation
          HYPRE BoomerAMG: Measure type        local
          HYPRE BoomerAMG: Coarsen type        Falgout
          HYPRE BoomerAMG: Interpolation type  classical
        linear system matrix = precond matrix:
        Matrix Object:         64 MPI processes
          type: mpiaij
          rows=2793120, cols=2793120
          total: nonzeros=221624352, allocated nonzeros=221624352
          total number of mallocs used during MatSetValues calls =0
            using I-node (on process 0) routines: found 14812 nodes, limit used is 5
    KSP solver for S = A11 - A10 inv(A00) A01
      KSP Object:      (fieldsplit_1_)       64 MPI processes
        type: gmres
          GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
          GMRES: happy breakdown tolerance 1e-30
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        has attached null space
        using PRECONDITIONED norm type for convergence test
      PC Object:      (fieldsplit_1_)       64 MPI processes
        type: bjacobi
          block Jacobi: number of blocks = 64
          Local solve is same for all blocks, in the following KSP and PC objects:
        KSP Object:        (fieldsplit_1_sub_)         1 MPI processes
          type: preonly
          maximum iterations=10000, initial guess is zero
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
        PC Object:        (fieldsplit_1_sub_)         1 MPI processes
          type: ilu
            ILU: out-of-place factorization
            0 levels of fill
            tolerance for zero pivot 2.22045e-14
            using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
            matrix ordering: natural
            factor fill ratio given 1, needed 1
              Factored matrix follows:
                Matrix Object:                 1 MPI processes
                  type: seqaij
                  rows=14812, cols=14812
                  package used to perform factorization: petsc
                  total: nonzeros=368098, allocated nonzeros=368098
                  total number of mallocs used during MatSetValues calls =0
                    not using I-node routines
          linear system matrix = precond matrix:
          Matrix Object:           1 MPI processes
            type: seqaij
            rows=14812, cols=14812
            total: nonzeros=368098, allocated nonzeros=368098
            total number of mallocs used during MatSetValues calls =0
              not using I-node routines

        linear system matrix followed by preconditioner matrix:
        Matrix Object:         64 MPI processes
          type: schurcomplement
          rows=931040, cols=931040
            Schur complement A11 - A10 inv(A00) A01
            A11
              Matrix Object:               64 MPI processes
                type: mpiaij
                rows=931040, cols=931040
                total: nonzeros=24624928, allocated nonzeros=24624928
                total number of mallocs used during MatSetValues calls =0
                  not using I-node (on process 0) routines
            A10
              Matrix Object:               64 MPI processes
                type: mpiaij
                rows=931040, cols=2793120
                total: nonzeros=73874784, allocated nonzeros=73874784
                total number of mallocs used during MatSetValues calls =0
                  not using I-node (on process 0) routines
            KSP of A00
              KSP Object:              (fieldsplit_0_)               64 MPI processes
                type: gmres
                  GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
                  GMRES: happy breakdown tolerance 1e-30
                maximum iterations=10000, initial guess is zero
                tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
                left preconditioning
                using PRECONDITIONED norm type for convergence test
              PC Object:              (fieldsplit_0_)               64 MPI processes
                type: hypre
                  HYPRE BoomerAMG preconditioning
                  HYPRE BoomerAMG: Cycle type V
                  HYPRE BoomerAMG: Maximum number of levels 25
                  HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1
                  HYPRE BoomerAMG: Convergence tolerance PER hypre call 0
                  HYPRE BoomerAMG: Threshold for strong coupling 0.25
                  HYPRE BoomerAMG: Interpolation truncation factor 0
                  HYPRE BoomerAMG: Interpolation: max elements per row 0
                  HYPRE BoomerAMG: Number of levels of aggressive coarsening 0
                  HYPRE BoomerAMG: Number of paths for aggressive coarsening 1
                  HYPRE BoomerAMG: Maximum row sums 0.9
                  HYPRE BoomerAMG: Sweeps down         1
                  HYPRE BoomerAMG: Sweeps up           1
                  HYPRE BoomerAMG: Sweeps on coarse    1
                  HYPRE BoomerAMG: Relax down          symmetric-SOR/Jacobi
                  HYPRE BoomerAMG: Relax up            symmetric-SOR/Jacobi
                  HYPRE BoomerAMG: Relax on coarse     Gaussian-elimination
                  HYPRE BoomerAMG: Relax weight  (all)      1
                  HYPRE BoomerAMG: Outer relax weight (all) 1
                  HYPRE BoomerAMG: Using CF-relaxation
                  HYPRE BoomerAMG: Measure type        local
                  HYPRE BoomerAMG: Coarsen type        Falgout
                  HYPRE BoomerAMG: Interpolation type  classical
                linear system matrix = precond matrix:
                Matrix Object:                 64 MPI processes
                  type: mpiaij
                  rows=2793120, cols=2793120
                  total: nonzeros=221624352, allocated nonzeros=221624352
                  total number of mallocs used during MatSetValues calls =0
                    using I-node (on process 0) routines: found 14812 nodes, limit used is 5
            A01
              Matrix Object:               64 MPI processes
                type: mpiaij
                rows=2793120, cols=931040
                total: nonzeros=73874784, allocated nonzeros=73874784
                total number of mallocs used during MatSetValues calls =0
                  using I-node (on process 0) routines: found 14812 nodes, limit used is 5
        Matrix Object:         64 MPI processes
          type: mpiaij
          rows=931040, cols=931040
          total: nonzeros=24624928, allocated nonzeros=24624928
          total number of mallocs used during MatSetValues calls =0
            not using I-node (on process 0) routines
  linear system matrix = precond matrix:
  Matrix Object:   64 MPI processes
    type: mpiaij
    rows=3724160, cols=3724160, bs=4
    total: nonzeros=393998848, allocated nonzeros=393998848
    total number of mallocs used during MatSetValues calls =0

******************************************************************************************************

What could be going wrong here? Is it something related to the null-space setting? But I do not know why it does not arise for smaller domain sizes!