<div dir="ltr">On Wed, Oct 16, 2013 at 12:55 PM, Bishesh Khanal <span dir="ltr"><<a href="mailto:bisheshkh@gmail.com" target="_blank">bisheshkh@gmail.com</a>></span> wrote:<br><div class="gmail_extra"><div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Oct 16, 2013 at 5:50 PM, Mark F. Adams <span dir="ltr"><<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">You might also test with Jacobi as a sanity check.<br>
<div><div><br>
On Oct 16, 2013, at 11:19 AM, Barry Smith <<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>> wrote:<br>
<br>
><br>
> <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind" target="_blank">http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind</a><br>
><br>
> especially the GAMG version, which you can also try in the debugger with the argument -start_in_debugger<br>
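> (For example, something like the following; the launcher and node list are only illustrative:)<br>
> mpiexec -n 8 ./src/AdLemMain [your options] -start_in_debugger noxterm -debugger_nodes 0<br>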
><br>
><br>
><br>
> On Oct 16, 2013, at 9:32 AM, Dave May <<a href="mailto:dave.mayhem23@gmail.com" target="_blank">dave.mayhem23@gmail.com</a>> wrote:<br>
><br>
>> Sounds like a memory error.<br>
>> I'd run your code through valgrind to double check. The error could be completely unconnected to the nullspaces.<br></div></div></blockquote><div><br>Thanks, I tried them, but it has not worked yet. Here are a couple of things I tried: running valgrind with one and with multiple processes, for smaller domain sizes where no runtime error is thrown. The results are shown below:<br>
</div><div>(I've run valgrind on the cluster for the bigger domain size where the run-time error is thrown, but it's still running; valgrind has slowed down the execution, I guess.)</div></div></div></div></blockquote>
<div><br></div><div>You can also try running under MPICH, which can be valgrind clean.</div>
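<div><br></div><div>A rough sketch of what that could look like (the arch name and the application options here are only illustrative, and the other configure options would be your own, without --with-mpi-dir):<br><br>./configure PETSC_ARCH=arch-mpich-debug --download-mpich [your other configure options]<br>make<br>$PETSC_DIR/arch-mpich-debug/bin/mpiexec -n 2 valgrind --track-origins=yes ./src/AdLemMain [your options]<br></div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">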
<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div>1******* ********** For smaller domain sizes ((i.e. the sizes for which the program runs and gives results) ******************<br>
</div><div>With one processor, valgrind does NOT give any errors. <br>With multiple processes it reports something but I'm not sure if they are errors related to my code. One example with two processes:<br>petsc -n 2 valgrind src/AdLemMain -pc_type fieldsplit -pc_fieldsplit_type schur -pc_fieldsplit_dm_splits 0 -pc_fieldsplit_0_fields 0,1,2 -pc_fieldsplit_1_fields 3 -fieldsplit_0_pc_type hypre -fieldsplit_0_ksp_converged_reason -ksp_converged_reason<br>
==31715== Memcheck, a memory error detector<br>==31716== Memcheck, a memory error detector<br>==31716== Copyright (C) 2002-2010, and GNU GPL'd, by Julian Seward et al.<br>==31716== Using Valgrind-3.6.1 and LibVEX; rerun with -h for copyright info<br>
==31716== Command: src/AdLemMain -pc_type fieldsplit -pc_fieldsplit_type schur -pc_fieldsplit_dm_splits 0 -pc_fieldsplit_0_fields 0,1,2 -pc_fieldsplit_1_fields 3 -fieldsplit_0_pc_type hypre -fieldsplit_0_ksp_converged_reason -ksp_converged_reason<br>
==31716== <br>==31715== Copyright (C) 2002-2010, and GNU GPL'd, by Julian Seward et al.<br>==31715== Using Valgrind-3.6.1 and LibVEX; rerun with -h for copyright info<br>==31715== Command: src/AdLemMain -pc_type fieldsplit -pc_fieldsplit_type schur -pc_fieldsplit_dm_splits 0 -pc_fieldsplit_0_fields 0,1,2 -pc_fieldsplit_1_fields 3 -fieldsplit_0_pc_type hypre -fieldsplit_0_ksp_converged_reason -ksp_converged_reason<br>
==31715== <br>==31716== Conditional jump or move depends on uninitialised value(s)<br>==31716== at 0x32EEED9BCE: ??? (in /usr/lib64/libgfortran.so.3.0.0)<br>==31716== by 0x32EEED9155: ??? (in /usr/lib64/libgfortran.so.3.0.0)<br>
==31716== by 0x32EEE185D7: ??? (in /usr/lib64/libgfortran.so.3.0.0)<br>==31716== by 0x32ECC0F195: call_init.part.0 (in /lib64/<a href="http://ld-2.14.90.so" target="_blank">ld-2.14.90.so</a>)<br>==31716== by 0x32ECC0F272: _dl_init (in /lib64/<a href="http://ld-2.14.90.so" target="_blank">ld-2.14.90.so</a>)<br>
==31716== by 0x32ECC01719: ??? (in /lib64/<a href="http://ld-2.14.90.so" target="_blank">ld-2.14.90.so</a>)<br>==31716== by 0xE: ???<br>==31716== by 0x7FF0003EE: ???<br>==31716== by 0x7FF0003FC: ???<br>==31716== by 0x7FF000405: ???<br>
==31716== by 0x7FF000410: ???<br>==31716== by 0x7FF000424: ???<br>==31716== <br>==31716== Conditional jump or move depends on uninitialised value(s)<br>==31716== at 0x32EEED9BD9: ??? (in /usr/lib64/libgfortran.so.3.0.0)<br>
==31716== by 0x32EEED9155: ??? (in /usr/lib64/libgfortran.so.3.0.0)<br>==31716== by 0x32EEE185D7: ??? (in /usr/lib64/libgfortran.so.3.0.0)<br>==31716== by 0x32ECC0F195: call_init.part.0 (in /lib64/<a href="http://ld-2.14.90.so" target="_blank">ld-2.14.90.so</a>)<br>
==31716== by 0x32ECC0F272: _dl_init (in /lib64/<a href="http://ld-2.14.90.so" target="_blank">ld-2.14.90.so</a>)<br>==31716== by 0x32ECC01719: ??? (in /lib64/<a href="http://ld-2.14.90.so" target="_blank">ld-2.14.90.so</a>)<br>
==31716== by 0xE: ???<br>
==31716== by 0x7FF0003EE: ???<br>==31716== by 0x7FF0003FC: ???<br>==31716== by 0x7FF000405: ???<br>==31716== by 0x7FF000410: ???<br>==31716== by 0x7FF000424: ???<br>==31716== <br>dmda of size: (8,8,8)<br><br>
using schur complement <br><br> using user defined split <br> Linear solve converged due to CONVERGED_ATOL iterations 0<br> Linear solve converged due to CONVERGED_RTOL iterations 3<br> Linear solve converged due to CONVERGED_RTOL iterations 3<br>
Linear solve converged due to CONVERGED_RTOL iterations 3<br> Linear solve converged due to CONVERGED_RTOL iterations 3<br> Linear solve converged due to CONVERGED_RTOL iterations 3<br> Linear solve converged due to CONVERGED_RTOL iterations 3<br>
Linear solve converged due to CONVERGED_RTOL iterations 3<br> Linear solve converged due to CONVERGED_RTOL iterations 3<br> Linear solve converged due to CONVERGED_RTOL iterations 3<br> Linear solve converged due to CONVERGED_RTOL iterations 3<br>
Linear solve converged due to CONVERGED_RTOL iterations 3<br> Linear solve converged due to CONVERGED_RTOL iterations 3<br> Linear solve converged due to CONVERGED_RTOL iterations 3<br> Linear solve converged due to CONVERGED_RTOL iterations 3<br>
Linear solve converged due to CONVERGED_RTOL iterations 3<br> Linear solve converged due to CONVERGED_RTOL iterations 3<br> Linear solve converged due to CONVERGED_RTOL iterations 3<br> Linear solve converged due to CONVERGED_RTOL iterations 3<br>
Linear solve converged due to CONVERGED_RTOL iterations 3<br> Linear solve converged due to CONVERGED_RTOL iterations 3<br> Linear solve converged due to CONVERGED_RTOL iterations 3<br> Linear solve converged due to CONVERGED_RTOL iterations 3<br>
Linear solve converged due to CONVERGED_RTOL iterations 3<br> Linear solve converged due to CONVERGED_RTOL iterations 3<br> Linear solve converged due to CONVERGED_RTOL iterations 3<br> Linear solve converged due to CONVERGED_RTOL iterations 3<br>
Linear solve converged due to CONVERGED_RTOL iterations 1<br>==31716== <br>==31716== HEAP SUMMARY:<br>==31716== in use at exit: 212,357 bytes in 1,870 blocks<br>==31716== total heap usage: 112,701 allocs, 110,831 frees, 19,698,341 bytes allocated<br>
==31716== <br>==31715== <br>==31715== HEAP SUMMARY:<br>==31715== in use at exit: 187,709 bytes in 1,864 blocks<br>==31715== total heap usage: 112,891 allocs, 111,027 frees, 19,838,487 bytes allocated<br>==31715== <br>
==31716== LEAK SUMMARY:<br>==31716== definitely lost: 0 bytes in 0 blocks<br>==31716== indirectly lost: 0 bytes in 0 blocks<br>==31716== possibly lost: 0 bytes in 0 blocks<br>==31716== still reachable: 212,357 bytes in 1,870 blocks<br>
==31716== suppressed: 0 bytes in 0 blocks<br>==31716== Rerun with --leak-check=full to see details of leaked memory<br>==31716== <br>==31716== For counts of detected and suppressed errors, rerun with: -v<br>==31716== Use --track-origins=yes to see where uninitialised values come from<br>
==31716== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 2 from 2)<br>==31715== LEAK SUMMARY:<br>==31715== definitely lost: 0 bytes in 0 blocks<br>==31715== indirectly lost: 0 bytes in 0 blocks<br>==31715== possibly lost: 0 bytes in 0 blocks<br>
==31715== still reachable: 187,709 bytes in 1,864 blocks<br>==31715== suppressed: 0 bytes in 0 blocks<br>==31715== Rerun with --leak-check=full to see details of leaked memory<br>==31715== <br>==31715== For counts of detected and suppressed errors, rerun with: -v<br>
==31715== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 2 from 2)<br><br><br><br></div><div>2*************************** For a small grid of size 18x18x18 (the solver converges; -fieldsplit_0 pc jacobi used) *********************<br>
</div><div>The results when I run valgrind for 2 processes, some errors:<br>.....<br><br>==18003== Syscall param writev(vector[...]) points to uninitialised byte(s)<br>==18003== at 0x962E047: writev (in /lib64/<a href="http://libc-2.14.90.so" target="_blank">libc-2.14.90.so</a>)<br>
==18003== by 0xBB34E22: mca_oob_tcp_msg_send_handler (oob_tcp_msg.c:249)<br>==18003== by 0xBB35D52: mca_oob_tcp_peer_send (oob_tcp_peer.c:204)<br>==18003== by 0xBB39A36: mca_oob_tcp_send_nb (oob_tcp_send.c:167)<br>
==18003== by 0xB92AB10: orte_rml_oob_send (rml_oob_send.c:136)<br>==18003== by 0xB92B0BF: orte_rml_oob_send_buffer (rml_oob_send.c:270)<br>==18003== by 0xBF44147: modex (grpcomm_bad_module.c:573)<br>==18003== by 0x81162B1: ompi_mpi_init (ompi_mpi_init.c:541)<br>
==18003== by 0x812EC31: PMPI_Init_thread (pinit_thread.c:84)<br>==18003== by 0x4F903D9: PetscInitialize (pinit.c:675)<br>==18003== by 0x505088: main (PetscAdLemMain.cxx:25)<br>==18003== Address 0xfdf9d45 is 197 bytes inside a block of size 512 alloc'd<br>
==18003== at 0x4C2A5B2: realloc (vg_replace_malloc.c:525)<br>==18003== by 0x81A4286: opal_dss_buffer_extend (dss_internal_functions.c:63)<br>==18003== by 0x81A4685: opal_dss_copy_payload (dss_load_unload.c:164)<br>
==18003== by 0x817C07E: orte_grpcomm_base_pack_modex_entries (grpcomm_base_modex.c:861)<br>==18003== by 0xBF44042: modex (grpcomm_bad_module.c:563)<br>==18003== by 0x81162B1: ompi_mpi_init (ompi_mpi_init.c:541)<br>
==18003== by 0x812EC31: PMPI_Init_thread (pinit_thread.c:84)<br>==18003== by 0x4F903D9: PetscInitialize (pinit.c:675)<br>==18003== by 0x505088: main (PetscAdLemMain.cxx:25)<br>==18003==<br><br></div><div>Then the solver converges<br>
and again some errors (I doubt they are caused by my code at all):<br><br>==18003== Conditional jump or move depends on uninitialised value(s)<br>==18003== at 0xDDCC1B2: rdma_destroy_id (in /usr/lib64/librdmacm.so.1.0.0)<br>
==18003== by 0xE200A23: id_context_destructor (btl_openib_connect_rdmacm.c:185)<br>==18003== by 0xE1FFED0: rdmacm_component_finalize (opal_object.h:448)<br>==18003== by 0xE1FE3AA: ompi_btl_openib_connect_base_finalize (btl_openib_connect_base.c:496)<br>
==18003== by 0xE1EA9E6: btl_openib_component_close (btl_openib_component.c:251)<br>==18003== by 0x81BD411: mca_base_components_close (mca_base_components_close.c:53)<br>==18003== by 0x8145E1F: mca_btl_base_close (btl_base_close.c:62)<br>
==18003== by 0xD5A3DE8: mca_pml_ob1_component_close (pml_ob1_component.c:156)<br>==18003== by 0x81BD411: mca_base_components_close (mca_base_components_close.c:53)<br>==18003== by 0x8154E37: mca_pml_base_close (pml_base_close.c:66)<br>
==18003== by 0x8117142: ompi_mpi_finalize (ompi_mpi_finalize.c:306)<br>==18003== by 0x4F94D6D: PetscFinalize (pinit.c:1276)<br>==180<br>..........................<br><br>==18003== HEAP SUMMARY:<br>==18003== in use at exit: 551,540 bytes in 3,294 blocks<br>
==18003== total heap usage: 147,859 allocs, 144,565 frees, 84,461,908 bytes allocated<br>==18003==<br>==18003== LEAK SUMMARY:<br>==18003== definitely lost: 124,956 bytes in 108 blocks<br>==18003== indirectly lost: 32,380 bytes in 54 blocks<br>
==18003== possibly lost: 0 bytes in 0 blocks<br>==18003== still reachable: 394,204 bytes in 3,132 blocks<br>==18003== suppressed: 0 bytes in 0 blocks<br>==18003== Rerun with --leak-check=full to see details of leaked memory<br>
==18003==<br>==18003== For counts of detected and suppressed errors, rerun with: -v<br>==18003== Use --track-origins=yes to see where uninitialised values come from<br>==18003== ERROR SUMMARY: 142 errors from 32 contexts (suppressed: 2 from 2)<br>
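<br>If these reports from the Open MPI internals need to be silenced, a valgrind suppression file could be used; a rough sketch (file names are illustrative):<br><br>mpiexec -n 2 valgrind --gen-suppressions=all --log-file=valgrind.%p.log src/AdLemMain [options]   # suppression blocks get written into the logs<br>mpiexec -n 2 valgrind --suppressions=openmpi.supp src/AdLemMain [options]   # after pasting those blocks into openmpi.supp<br>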
</div><div><br> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div>
>><br>
>> Cheers,<br>
>> Dave<br>
>><br>
>><br>
>> On 16 October 2013 16:18, Bishesh Khanal <<a href="mailto:bisheshkh@gmail.com" target="_blank">bisheshkh@gmail.com</a>> wrote:<br>
>> Dear all,<br>
>> I'm trying to solve a Stokes flow with constant viscosity but with a non-zero divergence prescribed in the RHS.<br>
>><br>
>> I have a matrix created from a DMDA (mDa) with 4 dofs: vx, vy, vz and p.<br>
>> I have another DMDA (mDaP) of the same size but with 1 dof, corresponding only to p.<br>
>> I have assigned the null space for constant pressure inside the code, using two null-space bases: one corresponding to a vector created from mDa, which is assigned to the outer KSP, and a second corresponding to a vector created from mDaP, which is assigned to the KSP obtained from the fieldsplit corresponding to the Schur complement.<br>
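>> In sketch form, the null-space setup looks roughly like this (a simplified, illustrative version; the variable and function names are not the exact ones from my code, and it assumes KSPSetOperators/KSPSetFromOptions have already been called on ksp):<br>
>> <br>
>> #include <petscksp.h><br>
>> #include <petscdmda.h><br>
>> <br>
>> /* ksp: outer solver; mDa: DMDA with dofs (vx,vy,vz,p); mDaP: DMDA with dof (p) */<br>
>> PetscErrorCode attachNullSpaces(KSP ksp, DM mDa, DM mDaP)<br>
>> {<br>
>>   PetscErrorCode ierr;<br>
>>   Vec            basisV, basisP;<br>
>>   MatNullSpace   nspOuter, nspSchur;<br>
>>   PetscScalar ****arr;<br>
>>   PetscInt       i, j, k, xs, ys, zs, xm, ym, zm, nSplits;<br>
>>   KSP           *subksp;<br>
>>   PC             pc;<br>
>> <br>
>>   /* Outer system: basis vector that is constant in p and zero in vx,vy,vz. */<br>
>>   ierr = DMCreateGlobalVector(mDa, &basisV);CHKERRQ(ierr);<br>
>>   ierr = VecSet(basisV, 0.0);CHKERRQ(ierr);<br>
>>   ierr = DMDAGetCorners(mDa, &xs, &ys, &zs, &xm, &ym, &zm);CHKERRQ(ierr);<br>
>>   ierr = DMDAVecGetArrayDOF(mDa, basisV, &arr);CHKERRQ(ierr);<br>
>>   for (k = zs; k < zs+zm; ++k)<br>
>>     for (j = ys; j < ys+ym; ++j)<br>
>>       for (i = xs; i < xs+xm; ++i) arr[k][j][i][3] = 1.0;  /* dof 3 = p */<br>
>>   ierr = DMDAVecRestoreArrayDOF(mDa, basisV, &arr);CHKERRQ(ierr);<br>
>>   ierr = VecNormalize(basisV, NULL);CHKERRQ(ierr);<br>
>>   ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, 1, &basisV, &nspOuter);CHKERRQ(ierr);<br>
>>   ierr = KSPSetNullSpace(ksp, nspOuter);CHKERRQ(ierr);<br>
>> <br>
>>   /* Schur complement (pressure) system: constant basis vector built on mDaP. */<br>
>>   ierr = DMCreateGlobalVector(mDaP, &basisP);CHKERRQ(ierr);<br>
>>   ierr = VecSet(basisP, 1.0);CHKERRQ(ierr);<br>
>>   ierr = VecNormalize(basisP, NULL);CHKERRQ(ierr);<br>
>>   ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, 1, &basisP, &nspSchur);CHKERRQ(ierr);<br>
>> <br>
>>   /* The fieldsplit sub-KSPs exist only after the PC is set up. */<br>
>>   ierr = KSPSetUp(ksp);CHKERRQ(ierr);<br>
>>   ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);<br>
>>   ierr = PCFieldSplitGetSubKSP(pc, &nSplits, &subksp);CHKERRQ(ierr);<br>
>>   ierr = KSPSetNullSpace(subksp[1], nspSchur);CHKERRQ(ierr);  /* split 1 = Schur complement */<br>
>>   ierr = PetscFree(subksp);CHKERRQ(ierr);<br>
>> <br>
>>   ierr = MatNullSpaceDestroy(&nspOuter);CHKERRQ(ierr);<br>
>>   ierr = MatNullSpaceDestroy(&nspSchur);CHKERRQ(ierr);<br>
>>   ierr = VecDestroy(&basisV);CHKERRQ(ierr);<br>
>>   ierr = VecDestroy(&basisP);CHKERRQ(ierr);<br>
>>   return 0;<br>
>> }<br>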
>><br>
>> Now, when running the code, the solver converges up to a certain size, e.g. 92x110x92 (the results for this convergent case with -ksp_view are given at the end of the email).<br>
>> But when I double the size of the grid in each dimension, it gives me a run-time error.<br>
>><br>
>> The options I've used are of the kind:<br>
>> -pc_type fieldsplit -pc_fieldsplit_type schur -pc_fieldsplit_dm_splits 0 -pc_fieldsplit_0_fields 0,1,2 -pc_fieldsplit_1_fields 3 -fieldsplit_0_pc_type hypre -fieldsplit_0_ksp_converged_reason -fieldsplit_1_ksp_converged_reason -ksp_converged_reason -ksp_view<br>
>><br>
>> Here are:<br>
>> 1. Error message when using hypre for fieldsplit_0<br>
>> 2. Error message when using gamg for fieldsplit_0<br>
>> 3. -ksp_view of the working case using hypre for fieldsplit_0<br>
>><br>
>> I get the following error when I use hypre:<br>
>> 1. ******************************************************************************************************<br>
>> [5]PETSC ERROR: --------------------- Error Message ------------------------------------<br>
>> [5]PETSC ERROR: Signal received!<br>
>> [5]PETSC ERROR: ------------------------------------------------------------------------<br>
>> [5]PETSC ERROR: Petsc Release Version 3.4.3, Oct, 15, 2013<br>
>> [5]PETSC ERROR: See docs/changes/index.html for recent updates.<br>
>> [5]PETSC ERROR: See docs/faq.html for hints about trouble shooting.<br>
>> [5]PETSC ERROR: See docs/index.html for manual pages.<br>
>> [5]PETSC ERROR: ------------------------------------------------------------------------<br>
>> [5]PETSC ERROR: /epi/asclepios2/bkhanal/works/AdLemModel/build/src/AdLemMain on a arch-linux2-cxx-debug named nef001 by bkhanal Wed Oct 16 15:08:42 2013<br>
>> [5]PETSC ERROR: Libraries linked from /epi/asclepios2/bkhanal/petscDebug/lib<br>
>> [5]PETSC ERROR: Configure run at Wed Oct 16 14:18:48 2013<br>
>> [5]PETSC ERROR: Configure options --with-mpi-dir=/opt/openmpi-gcc/current/ --with-shared-libraries --prefix=/epi/asclepios2/bkhanal/petscDebug -download-f-blas-lapack=1 --download-metis --download-parmetis --download-superlu_dist --download-scalapack --download-mumps --download-hypre --with-clanguage=cxx<br>
>> [5]PETSC ERROR: ------------------------------------------------------------------------<br>
>> [5]PETSC ERROR: User provided function() line 0 in unknown directory unknown file<br>
>> [6]PETSC ERROR: ------------------------------------------------------------------------<br>
>> [6]PETSC ERROR: Caught signal number 15 Terminate: Somet process (or the batch system) has told this process to end<br>
>> [6]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger<br>
>> [6]PETSC ERROR: or see <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind" target="_blank">http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind</a>[6]PETSC ERROR: or try <a href="http://valgrind.org" target="_blank">http://valgrind.org</a> on GNU/linux and Apple Mac OS X to find memory corruption errors<br>
>> [6]PETSC ERROR: likely location of problem given in stack below<br>
>> [6]PETSC ERROR: --------------------- Stack Frames ------------------------------------<br>
>> [6]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,<br>
>> [6]PETSC ERROR: INSTEAD the line number of the start of the function<br>
>> [6]PETSC ERROR: is given.<br>
>> [6]PETSC ERROR: [6] HYPRE_SetupXXX line 130 /tmp/petsc-3.4.3/src/ksp/pc/impls/hypre/hypre.c<br>
>> [6]PETSC ERROR: [6] PCSetUp_HYPRE line 94 /tmp/petsc-3.4.3/src/ksp/pc/impls/hypre/hypre.c<br>
>> [6]PETSC ERROR: [6] PCSetUp line 868 /tmp/petsc-3.4.3/src/ksp/pc/interface/precon.c<br>
>> [6]PETSC ERROR: [6] KSPSetUp line 192 /tmp/petsc-3.4.3/src/ksp/ksp/interface/itfunc.c<br>
>> [6]PETSC ERROR: [6] KSPSolve line 356 /tmp/petsc-3.4.3/src/ksp/ksp/interface/itfunc.c<br>
>> [6]PETSC ERROR: [6] MatMult_SchurComplement line 75 /tmp/petsc-3.4.3/src/ksp/ksp/utils/schurm.c<br>
>> [6]PETSC ERROR: [6] MatNullSpaceTest line 408 /tmp/petsc-3.4.3/src/mat/interface/matnull.c<br>
>> [6]PETSC ERROR: [6] solveModel line 113 "unknowndirectory/"/epi/asclepios2/bkhanal/works/AdLemModel/src/PetscAdLemTaras3D.cxx<br>
>><br>
>><br>
>> 2. ****************************************************************************************************<br>
>> Using gamg instead gives errors like the following:<br>
>><br>
>> [5]PETSC ERROR: --------------------- Stack Frames ------------------------------------<br>
>> [5]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,<br>
>> [5]PETSC ERROR: INSTEAD the line number of the start of the function<br>
>> [5]PETSC ERROR: is given.<br>
>> [5]PETSC ERROR: [5] PetscLLCondensedAddSorted line 1202 /tmp/petsc-3.4.3/include/petsc-private/matimpl.h<br>
>> [5]PETSC ERROR: [5] MatPtAPSymbolic_MPIAIJ_MPIAIJ line 124 /tmp/petsc-3.4.3/src/mat/impls/aij/mpi/mpiptap.c<br>
>> [5]PETSC ERROR: [5] MatPtAP_MPIAIJ_MPIAIJ line 80 /tmp/petsc-3.4.3/src/mat/impls/aij/mpi/mpiptap.c<br>
>> [5]PETSC ERROR: [5] MatPtAP line 8223 /tmp/petsc-3.4.3/src/mat/interface/matrix.c<br>
>> [5]PETSC ERROR: [5] createLevel line 144 /tmp/petsc-3.4.3/src/ksp/pc/impls/gamg/gamg.c<br>
>> [5]PETSC ERROR: [5] PCSetUp_GAMG line 545 /tmp/petsc-3.4.3/src/ksp/pc/impls/gamg/gamg.c<br>
>> [5]PETSC ERROR: [5] PCSetUp line 868 /tmp/petsc-3.4.3/src/ksp/pc/interface/precon.c<br>
>> [5]PETSC ERROR: [5] KSPSetUp line 192 /tmp/petsc-3.4.3/src/ksp/ksp/interface/itfunc.c<br>
>> [5]PETSC ERROR: [5] KSPSolve line 356 /tmp/petsc-3.4.3/src/ksp/ksp/interface/itfunc.c<br>
>> [5]PETSC ERROR: [5] MatMult_SchurComplement line 75 /tmp/petsc-3.4.3/src/ksp/ksp/utils/schurm.c<br>
>> [5]PETSC ERROR: [5] MatNullSpaceTest line 408 /tmp/petsc-3.4.3/src/mat/interface/matnull.c<br>
>> [5]PETSC ERROR: [5] solveModel line 113 "unknowndirectory/"/epi/asclepios2/bkhanal/works/AdLemModel/src/PetscAdLemTaras3D.cxx<br>
>><br>
>><br>
>> 3. ********************************************************************************************************<br>
>><br>
>> BUT it does give me results when I use a domain of size 91x109x91 (half the size in each dimension). The result along with -ksp_view in this case is as follows:<br>
>><br>
>> Linear solve converged due to CONVERGED_RTOL iterations 2<br>
>> KSP Object: 64 MPI processes<br>
>> type: gmres<br>
>> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement<br>
>> GMRES: happy breakdown tolerance 1e-30<br>
>> maximum iterations=10000, initial guess is zero<br>
>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000<br>
>> left preconditioning<br>
>> has attached null space<br>
>> using PRECONDITIONED norm type for convergence test<br>
>> PC Object: 64 MPI processes<br>
>> type: fieldsplit<br>
>> FieldSplit with Schur preconditioner, blocksize = 4, factorization FULL<br>
>> Preconditioner for the Schur complement formed from user provided matrix<br>
>> Split info:<br>
>> Split number 0 Fields 0, 1, 2<br>
>> Split number 1 Fields 3<br>
>> KSP solver for A00 block<br>
>> KSP Object: (fieldsplit_0_) 64 MPI processes<br>
>> type: gmres<br>
>> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement<br>
>> GMRES: happy breakdown tolerance 1e-30<br>
>> maximum iterations=10000, initial guess is zero<br>
>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000<br>
>> left preconditioning<br>
>> using PRECONDITIONED norm type for convergence test<br>
>> PC Object: (fieldsplit_0_) 64 MPI processes<br>
>> type: hypre<br>
>> HYPRE BoomerAMG preconditioning<br>
>> HYPRE BoomerAMG: Cycle type V<br>
>> HYPRE BoomerAMG: Maximum number of levels 25<br>
>> HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1<br>
>> HYPRE BoomerAMG: Convergence tolerance PER hypre call 0<br>
>> HYPRE BoomerAMG: Threshold for strong coupling 0.25<br>
>> HYPRE BoomerAMG: Interpolation truncation factor 0<br>
>> HYPRE BoomerAMG: Interpolation: max elements per row 0<br>
>> HYPRE BoomerAMG: Number of levels of aggressive coarsening 0<br>
>> HYPRE BoomerAMG: Number of paths for aggressive coarsening 1<br>
>> HYPRE BoomerAMG: Maximum row sums 0.9<br>
>> HYPRE BoomerAMG: Sweeps down 1<br>
>> HYPRE BoomerAMG: Sweeps up 1<br>
>> HYPRE BoomerAMG: Sweeps on coarse 1<br>
>> HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi<br>
>> HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi<br>
>> HYPRE BoomerAMG: Relax on coarse Gaussian-elimination<br>
>> HYPRE BoomerAMG: Relax weight (all) 1<br>
>> HYPRE BoomerAMG: Outer relax weight (all) 1<br>
>> HYPRE BoomerAMG: Using CF-relaxation<br>
>> HYPRE BoomerAMG: Measure type local<br>
>> HYPRE BoomerAMG: Coarsen type Falgout<br>
>> HYPRE BoomerAMG: Interpolation type classical<br>
>> linear system matrix = precond matrix:<br>
>> Matrix Object: 64 MPI processes<br>
>> type: mpiaij<br>
>> rows=2793120, cols=2793120<br>
>> total: nonzeros=221624352, allocated nonzeros=221624352<br>
>> total number of mallocs used during MatSetValues calls =0<br>
>> using I-node (on process 0) routines: found 14812 nodes, limit used is 5<br>
>> KSP solver for S = A11 - A10 inv(A00) A01<br>
>> KSP Object: (fieldsplit_1_) 64 MPI processes<br>
>> type: gmres<br>
>> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement<br>
>> GMRES: happy breakdown tolerance 1e-30<br>
>> maximum iterations=10000, initial guess is zero<br>
>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000<br>
>> left preconditioning<br>
>> has attached null space<br>
>> using PRECONDITIONED norm type for convergence test<br>
>> PC Object: (fieldsplit_1_) 64 MPI processes<br>
>> type: bjacobi<br>
>> block Jacobi: number of blocks = 64<br>
>> Local solve is same for all blocks, in the following KSP and PC objects:<br>
>> KSP Object: (fieldsplit_1_sub_) 1 MPI processes<br>
>> type: preonly<br>
>> maximum iterations=10000, initial guess is zero<br>
>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000<br>
>> left preconditioning<br>
>> using NONE norm type for convergence test<br>
>> PC Object: (fieldsplit_1_sub_) 1 MPI processes<br>
>> type: ilu<br>
>> ILU: out-of-place factorization<br>
>> 0 levels of fill<br>
>> tolerance for zero pivot 2.22045e-14<br>
>> using diagonal shift on blocks to prevent zero pivot [INBLOCKS]<br>
>> matrix ordering: natural<br>
>> factor fill ratio given 1, needed 1<br>
>> Factored matrix follows:<br>
>> Matrix Object: 1 MPI processes<br>
>> type: seqaij<br>
>> rows=14812, cols=14812<br>
>> package used to perform factorization: petsc<br>
>> total: nonzeros=368098, allocated nonzeros=368098<br>
>> total number of mallocs used during MatSetValues calls =0<br>
>> not using I-node routines<br>
>> linear system matrix = precond matrix:<br>
>> Matrix Object: 1 MPI processes<br>
>> type: seqaij<br>
>> rows=14812, cols=14812<br>
>> total: nonzeros=368098, allocated nonzeros=368098<br>
>> total number of mallocs used during MatSetValues calls =0<br>
>> not using I-node routines<br>
>><br>
>> linear system matrix followed by preconditioner matrix:<br>
>> Matrix Object: 64 MPI processes<br>
>> type: schurcomplement<br>
>> rows=931040, cols=931040<br>
>> Schur complement A11 - A10 inv(A00) A01<br>
>> A11<br>
>> Matrix Object: 64 MPI processes<br>
>> type: mpiaij<br>
>> rows=931040, cols=931040<br>
>> total: nonzeros=24624928, allocated nonzeros=24624928<br>
>> total number of mallocs used during MatSetValues calls =0<br>
>> not using I-node (on process 0) routines<br>
>> A10<br>
>> Matrix Object: 64 MPI processes<br>
>> type: mpiaij<br>
>> rows=931040, cols=2793120<br>
>> total: nonzeros=73874784, allocated nonzeros=73874784<br>
>> total number of mallocs used during MatSetValues calls =0<br>
>> not using I-node (on process 0) routines<br>
>> KSP of A00<br>
>> KSP Object: (fieldsplit_0_) 64 MPI processes<br>
>> type: gmres<br>
>> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement<br>
>> GMRES: happy breakdown tolerance 1e-30<br>
>> maximum iterations=10000, initial guess is zero<br>
>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000<br>
>> left preconditioning<br>
>> using PRECONDITIONED norm type for convergence test<br>
>> PC Object: (fieldsplit_0_) 64 MPI processes<br>
>> type: hypre<br>
>> HYPRE BoomerAMG preconditioning<br>
>> HYPRE BoomerAMG: Cycle type V<br>
>> HYPRE BoomerAMG: Maximum number of levels 25<br>
>> HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1<br>
>> HYPRE BoomerAMG: Convergence tolerance PER hypre call 0<br>
>> HYPRE BoomerAMG: Threshold for strong coupling 0.25<br>
>> HYPRE BoomerAMG: Interpolation truncation factor 0<br>
>> HYPRE BoomerAMG: Interpolation: max elements per row 0<br>
>> HYPRE BoomerAMG: Number of levels of aggressive coarsening 0<br>
>> HYPRE BoomerAMG: Number of paths for aggressive coarsening 1<br>
>> HYPRE BoomerAMG: Maximum row sums 0.9<br>
>> HYPRE BoomerAMG: Sweeps down 1<br>
>> HYPRE BoomerAMG: Sweeps up 1<br>
>> HYPRE BoomerAMG: Sweeps on coarse 1<br>
>> HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi<br>
>> HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi<br>
>> HYPRE BoomerAMG: Relax on coarse Gaussian-elimination<br>
>> HYPRE BoomerAMG: Relax weight (all) 1<br>
>> HYPRE BoomerAMG: Outer relax weight (all) 1<br>
>> HYPRE BoomerAMG: Using CF-relaxation<br>
>> HYPRE BoomerAMG: Measure type local<br>
>> HYPRE BoomerAMG: Coarsen type Falgout<br>
>> HYPRE BoomerAMG: Interpolation type classical<br>
>> linear system matrix = precond matrix:<br>
>> Matrix Object: 64 MPI processes<br>
>> type: mpiaij<br>
>> rows=2793120, cols=2793120<br>
>> total: nonzeros=221624352, allocated nonzeros=221624352<br>
>> total number of mallocs used during MatSetValues calls =0<br>
>> using I-node (on process 0) routines: found 14812 nodes, limit used is 5<br>
>> A01<br>
>> Matrix Object: 64 MPI processes<br>
>> type: mpiaij<br>
>> rows=2793120, cols=931040<br>
>> total: nonzeros=73874784, allocated nonzeros=73874784<br>
>> total number of mallocs used during MatSetValues calls =0<br>
>> using I-node (on process 0) routines: found 14812 nodes, limit used is 5<br>
>> Matrix Object: 64 MPI processes<br>
>> type: mpiaij<br>
>> rows=931040, cols=931040<br>
>> total: nonzeros=24624928, allocated nonzeros=24624928<br>
>> total number of mallocs used during MatSetValues calls =0<br>
>> not using I-node (on process 0) routines<br>
>> linear system matrix = precond matrix:<br>
>> Matrix Object: 64 MPI processes<br>
>> type: mpiaij<br>
>> rows=3724160, cols=3724160, bs=4<br>
>> total: nonzeros=393998848, allocated nonzeros=393998848<br>
>> total number of mallocs used during MatSetValues calls =0<br>
>><br>
>> ******************************************************************************************************<br>
>> What could be going wrong here? Is it something related to the null-space setting? But I do not know why it does not arise for smaller domain sizes!<br>
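>> (From the stack traces above, the failure seems to happen inside MatNullSpaceTest, which applies the Schur-complement matrix to the null-space basis and therefore triggers the inner fieldsplit_0 solve and the hypre/gamg setup. The failing call is essentially this fragment, with illustrative names:)<br>
>> PetscBool isNull;<br>
>> ierr = MatNullSpaceTest(nspSchur, schurMat, &isNull);CHKERRQ(ierr);<br>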
>><br>
><br>
<br>
</div></div></blockquote></div><br></div></div>
</blockquote></div><br><br clear="all"><div><br></div>-- <br>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
-- Norbert Wiener
</div></div>