Running test examples to verify correct installation
Using PETSC_DIR=/home/lani/local/cf2_2013.9/openmpi/petsc_cuda_fixed and PETSC_ARCH=arch-x86_64
Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 MPI process
See http://www.mcs.anl.gov/petsc/documentation/faq.html
lid velocity = 0.0016, prandtl # = 1, grashof # = 1
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[0]PETSC ERROR: to get more information on the crash.
[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Signal received!
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Petsc Development GIT revision: unknown GIT Date: unknown
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: ./ex19 on a arch-x86_64 named arkepler.private.vki.eu by lani Sun Jan 19 21:20:50 2014
[0]PETSC ERROR: Libraries linked from /home/lani/local/cf2_2013.9/openmpi/petsc_cuda_fixed/lib
[0]PETSC ERROR: Configure run at Sun Jan 19 21:09:29 2014
[0]PETSC ERROR: Configure options --prefix=/home/lani/local/cf2_2013.9/openmpi/petsc_cuda_fixed --with-debugging=0 COPTFLAGS="-O3 " FOPTFLAGS="-O3 " --with-mpi-dir=/home/lani/local/cf2_2013.9/openmpi --download-f2cblaslapack=1 --with-fortran=1 --with-shared-libraries=1 --with-cudac=/opt/cuda/5.0.35/bin/nvcc --with-cuda-dir=/opt/cuda/5.0.35 --with-cuda=1 --with-cusp=1 --with-thrust=1 --with-cusp-dir=/home/lani/local/cf2_2013.9 --PETSC_ARCH=arch-x86_64
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: User provided function() line 0 in unknown file
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 59.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpiexec has exited due to process rank 0 with PID 6831 on
node arkepler.private.vki.eu exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that
call "init" MUST call "finalize" prior to exiting or it will be
considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpiexec (as reported here).
--------------------------------------------------------------------------
Possible error running C/C++ src/snes/examples/tutorials/ex19 with 2 MPI processes
See http://www.mcs.anl.gov/petsc/documentation/faq.html
lid velocity = 0.0016, prandtl # = 1, grashof # = 1
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[0]PETSC ERROR: to get more information on the crash.
[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Signal received!
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Petsc Development GIT revision: unknown GIT Date: unknown
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: ./ex19 on a arch-x86_64 named arkepler.private.vki.eu by lani Sun Jan 19 21:21:22 2014
[0]PETSC ERROR: Libraries linked from /home/lani/local/cf2_2013.9/openmpi/petsc_cuda_fixed/lib
[0]PETSC ERROR: Configure run at Sun Jan 19 21:09:29 2014
[0]PETSC ERROR: Configure options --prefix=/home/lani/local/cf2_2013.9/openmpi/petsc_cuda_fixed --with-debugging=0 COPTFLAGS="-O3 " FOPTFLAGS="-O3 " --with-mpi-dir=/home/lani/local/cf2_2013.9/openmpi --download-f2cblaslapack=1 --with-fortran=1 --with-shared-libraries=1 --with-cudac=/opt/cuda/5.0.35/bin/nvcc --with-cuda-dir=/opt/cuda/5.0.35 --with-cuda=1 --with-cusp=1 --with-thrust=1 --with-cusp-dir=/home/lani/local/cf2_2013.9 --PETSC_ARCH=arch-x86_64
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: [1]PETSC ERROR: ------------------------------------------------------------------------
[1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[1]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[1]PETSC ERROR: to get more information on the crash.
[1]PETSC ERROR: --------------------- Error Message ------------------------------------
[1]PETSC ERROR: Signal received!
[1]PETSC ERROR: ------------------------------------------------------------------------
[1]PETSC ERROR: Petsc Development GIT revision: unknown GIT Date: unknown
[1]PETSC ERROR: See docs/changes/index.html for recent updates.
[1]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[1]PETSC ERROR: See docs/index.html for manual pages.
[1]PETSC ERROR: ------------------------------------------------------------------------
[1]PETSC ERROR: ./ex19 on a arch-x86_64 named arkepler.private.vki.eu by lani Sun Jan 19 21:21:22 2014
[1]PETSC ERROR: Libraries linked from /home/lani/local/cf2_2013.9/openmpi/petsc_cuda_fixed/lib
[1]PETSC ERROR: Configure run at Sun Jan 19 21:09:29 2014
[1]PETSC ERROR: Configure options --prefix=/home/lani/local/cf2_2013.9/openmpi/petsc_cuda_fixed --with-debugging=0 COPTFLAGS="-O3 " FOPTFLAGS="-O3 " --with-mpi-dir=/home/lani/local/cf2_2013.9/openmpi --download-f2cblaslapack=1 --with-fortran=1 --with-shared-libraries=1 --with-cudac=/opt/cuda/5.0.35/bin/nvcc --with-cuda-dir=/opt/cuda/5.0.35 --with-cuda=1 --with-cusp=1 --with-thrust=1 --with-cusp-dir=/home/lani/local/cf2_2013.9 --PETSC_ARCH=arch-x86_64
[1]PETSC ERROR: ------------------------------------------------------------------------
[1]PETSC ERROR: User provided function() line 0 in unknown file
User provided function() line 0 in unknown file
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 59.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpiexec has exited due to process rank 0 with PID 6847 on
node arkepler.private.vki.eu exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that
call "init" MUST call "finalize" prior to exiting or it will be
considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpiexec (as reported here).
--------------------------------------------------------------------------
[arkepler.private.vki.eu:06846] 1 more process has sent help message help-mpi-api.txt / mpi-abort
[arkepler.private.vki.eu:06846] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
egrep: /home/lani/local/cf2_2013.9/openmpi/petsc_cuda_fixed/arch-x86_64/include/petscconf.h: No such file or directory
Fortran example src/snes/examples/tutorials/ex5f run successfully with 1 MPI process
Completed test examples
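The PETSc error banner above suggests its own next steps: rerun the failing example under valgrind or a debugger, or reconfigure with debugging enabled. A minimal sketch of those commands, assuming the PETSC_DIR/PETSC_ARCH values from this log, a PETSc source tree in the current directory, and that valgrind is installed (this environment is not reproducible here, so treat the paths as placeholders):

```shell
# Environment from the log above; adjust for your own install.
export PETSC_DIR=/home/lani/local/cf2_2013.9/openmpi/petsc_cuda_fixed
export PETSC_ARCH=arch-x86_64

# 1. Rerun the failing example under valgrind to locate the invalid
#    memory access behind the SIGSEGV (as the banner recommends).
cd src/snes/examples/tutorials
mpiexec -n 1 valgrind --track-origins=yes ./ex19

# 2. Or let PETSc attach a debugger at startup / on crash.
mpiexec -n 1 ./ex19 -start_in_debugger
mpiexec -n 1 ./ex19 -on_error_attach_debugger

# 3. Or reconfigure with debugging symbols (replacing --with-debugging=0
#    and the -O3 flags from the log's "Configure options" line), rebuild,
#    and rerun to get a usable backtrace.
./configure --with-debugging=yes  # plus the remaining options from the log
```

Since the segfault appears only in the CUDA/CUSP-enabled build while the Fortran example passes, a debug rebuild with the same `--with-cuda`/`--with-cusp` options is the most direct way to get a meaningful stack trace.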