<div dir="ltr"><div dir="ltr"><div dir="ltr"><div>Using a larger problem set, with 2B non-zero elements and a 25M x 25M matrix, I get the following error:</div><div>[4]PETSC ERROR: ------------------------------------------------------------------------<br>[4]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range<br>[4]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger<br>[4]PETSC ERROR: or see <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind">http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind</a><br>[4]PETSC ERROR: or try <a href="http://valgrind.org">http://valgrind.org</a> on GNU/linux and Apple Mac OS X to find memory corruption errors<br>[4]PETSC ERROR: likely location of problem given in stack below<br>[4]PETSC ERROR: --------------------- Stack Frames ------------------------------------<br>[4]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,<br>[4]PETSC ERROR: INSTEAD the line number of the start of the function<br>[4]PETSC ERROR: is given.<br>[4]PETSC ERROR: [4] MatCreateSeqAIJWithArrays line 4422 /lustre/home/vef002/petsc/src/mat/impls/aij/seq/aij.c<br>[4]PETSC ERROR: [4] MatMatMultSymbolic_SeqAIJ_SeqAIJ line 747 /lustre/home/vef002/petsc/src/mat/impls/aij/seq/matmatmult.c<br>[4]PETSC ERROR: [4] MatTransposeMatMultSymbolic_MPIAIJ_MPIAIJ_nonscalable line 1256 /lustre/home/vef002/petsc/src/mat/impls/aij/mpi/mpimatmatmult.c<br>[4]PETSC ERROR: [4] MatTransposeMatMult_MPIAIJ_MPIAIJ line 1156 /lustre/home/vef002/petsc/src/mat/impls/aij/mpi/mpimatmatmult.c<br>[4]PETSC ERROR: [4] MatTransposeMatMult line 9950 /lustre/home/vef002/petsc/src/mat/interface/matrix.c<br>[4]PETSC ERROR: [4] PCGAMGCoarsen_AGG line 871 /lustre/home/vef002/petsc/src/ksp/pc/impls/gamg/agg.c<br>[4]PETSC ERROR: [4] PCSetUp_GAMG line 428 /lustre/home/vef002/petsc/src/ksp/pc/impls/gamg/gamg.c<br>[4]PETSC ERROR: [4] PCSetUp line 894 
/lustre/home/vef002/petsc/src/ksp/pc/interface/precon.c<br>[4]PETSC ERROR: [4] KSPSetUp line 304 /lustre/home/vef002/petsc/src/ksp/ksp/interface/itfunc.c<br>[4]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------<br>[4]PETSC ERROR: Signal received<br>[4]PETSC ERROR: See <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html">http://www.mcs.anl.gov/petsc/documentation/faq.html</a> for trouble shooting.<br>[4]PETSC ERROR: Petsc Release Version 3.10.2, unknown <br>[4]PETSC ERROR: ./solveCSys on a linux-cumulus-debug named r02g03 by vef002 Fri Jan 11 09:13:23 2019<br>[4]PETSC ERROR: Configure options PETSC_ARCH=linux-cumulus-debug --with-cc=/usr/local/depot/openmpi-3.1.1-gcc-7.3.0/bin/mpicc --with-fc=/usr/local/depot/openmpi-3.1.1-gcc-7.3.0/bin/mpifort --with-cxx=/usr/local/depot/openmpi-3.1.1-gcc-7.3.0/bin/mpicxx --download-parmetis --download-metis --download-ptscotch --download-superlu_dist --download-mumps --with-scalar-type=complex --with-debugging=yes --download-scalapack --download-superlu --download-fblaslapack=1 --download-cmake<br>[4]PETSC ERROR: #1 User provided function() line 0 in unknown file<br>--------------------------------------------------------------------------<br>MPI_ABORT was invoked on rank 4 in communicator MPI_COMM_WORLD<br>with errorcode 59.<br><br>NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.<br>You may or may not see output from other processes, depending on<br>exactly when Open MPI kills them.<br>--------------------------------------------------------------------------<br>[0]PETSC ERROR: ------------------------------------------------------------------------<br>[0]PETSC ERROR: Caught signal number 15 Terminate: Some process (or the batch system) has told this process to end<br>[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger<br>[0]PETSC ERROR: or see <a 
href="http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind">http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind</a><br></div><div><br></div><div>Inspecting just one of the Valgrind log files, the following error was reported:</div><div><br></div><div>==9053== Invalid read of size 4<br>==9053== at 0x5B8067E: MatCreateSeqAIJWithArrays (aij.c:4445)<br>==9053== by 0x5BC2608: MatMatMultSymbolic_SeqAIJ_SeqAIJ (matmatmult.c:790)<br>==9053== by 0x5D106F8: MatTransposeMatMultSymbolic_MPIAIJ_MPIAIJ_nonscalable (mpimatmatmult.c:1337)<br>==9053== by 0x5D0E84E: MatTransposeMatMult_MPIAIJ_MPIAIJ (mpimatmatmult.c:1186)<br>==9053== by 0x5457C57: MatTransposeMatMult (matrix.c:9984)<br>==9053== by 0x64DD99D: PCGAMGCoarsen_AGG (agg.c:882)<br>==9053== by 0x64C7527: PCSetUp_GAMG (gamg.c:522)<br>==9053== by 0x6592AA0: PCSetUp (precon.c:932)<br>==9053== by 0x66B1267: KSPSetUp (itfunc.c:391)<br>==9053== by 0x4019A2: main (solveCmplxLinearSys.cpp:68)<br>==9053== Address 0x8386997f4 is not stack'd, malloc'd or (recently) free'd<br>==9053==<br><br></div></div></div></div><br><div class="gmail_quote"><div dir="ltr">On Fri, Jan 11, 2019 at 8:41 AM Sal Am <<a href="mailto:tempohoper@gmail.com">tempohoper@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div>Thank you Dave,</div><div><br></div><div>I reconfigured PETSc with Valgrind and debugging enabled, then ran the code again with the following options:</div><div>mpiexec -n 8 valgrind --tool=memcheck -q --num-callers=20 --log-file=valgrind.log.%p ./solveCSys -malloc off -ksp_type bcgs -pc_type gamg -log_view</div><div>(as described on the PETSc page you linked)</div><div><br></div><div>The iterative solver finished, but the resulting valgrind.log.%p files (all 8, one per process) are empty. It also took a whopping ~15 hours for what used to take ~10-20 min. 
Maybe this is because of valgrind? I am not sure. Attached is the log_view.<br></div><div><br></div></div></div><br><div class="gmail_quote"><div dir="ltr">On Thu, Jan 10, 2019 at 8:59 AM Dave May <<a href="mailto:dave.mayhem23@gmail.com" target="_blank">dave.mayhem23@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr">On Thu, 10 Jan 2019 at 08:55, Sal Am via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>I am not sure what is exactly is wrong as the error changes slightly every time I run it (without changing the parameters).</div></div></div></div></div></blockquote><div><br></div><div>This likely implies that you have a memory error in your code (a memory leak would not cause this behaviour).</div><div>I strongly suggest you make sure your code is free of memory errors.</div><div>You can do this using valgrind. See here </div><div><br></div><div><a href="https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind" target="_blank">https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind</a><br></div><div><br></div><div>for an explanation of how to use valgrind.</div><div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div> I have attached the first two run's errors and my code. <br></div><div><br></div><div>Is there a memory leak somewhere? 
I have tried running it with -malloc_dump, but nothing gets printed out. However, when run with -log_view I see that a Viewer is created 4 times but destroyed only 3 times. As far as I can tell, I destroy it at the point where I no longer need it, so I am not sure whether I am doing something wrong. Could this be the reason why it keeps crashing? It crashes as soon as it has read the matrix, before entering the solve (a print statement placed just before the solve never prints).<br></div><div><br></div><div>This is how I run it in the job script on 2 nodes with 32 processes, using the cluster's Open MPI: <br></div><div><br></div><div>mpiexec ./solveCSys -ksp_type bcgs -pc_type gamg -ksp_converged_reason -ksp_monitor_true_residual -log_view -ksp_error_if_not_converged -ksp_monitor -malloc_log -ksp_view</div><div><br></div><div>The matrix:</div><div>2 122 821 366 non-zero elements<br></div><div>25 947 279 x 25 947 279<br></div><div><br></div><div>Thanks and all the best<br></div></div></div></div></div>
</blockquote></div></div></div>
</blockquote></div>
</blockquote></div>