<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Mon, Apr 14, 2014 at 9:40 AM, TAY wee-beng <span dir="ltr"><<a href="mailto:zonexo@gmail.com" target="_blank">zonexo@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF">
<div>Hi Barry,<br>
<br>
I'm not too sure how to do it. I'm running mpi. So I run:<br>
<br>
mpirun -n 4 ./a.out -start_in_debugger<br></div></div></blockquote><div><br></div><div> add -debugger_pause 10</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
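That is, something like

    mpirun -n 4 ./a.out -start_in_debugger -debugger_pause 10

so that each process sleeps for about 10 seconds, giving the gdb windows time to come up over X11 and attach before the run continues, instead of the program crashing first.
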
<div text="#000000" bgcolor="#FFFFFF"><div>
I got the msg below. Before the gdb windows appear (thru x11), the
program aborts.<br>
<br>
Also I tried running in another cluster and it worked. Also tried
in the current cluster in debug mode and it worked too.<br>
<br>
> mpirun -n 4 ./a.out -start_in_debugger
> --------------------------------------------------------------------------
> An MPI process has executed an operation involving a call to the
> "fork()" system call to create a child process.  Open MPI is currently
> operating in a condition that could result in memory corruption or
> other system errors; your MPI job may hang, crash, or produce silent
> data corruption.  The use of fork() (or system() or other calls that
> create child processes) is strongly discouraged.
>
> The process that invoked fork was:
>
>   Local host:          n12-76 (PID 20235)
>   MPI_COMM_WORLD rank: 2
>
> If you are *absolutely sure* that your application will successfully
> and correctly survive a call to fork(), you may disable this warning
> by setting the mpi_warn_on_fork MCA parameter to 0.
> --------------------------------------------------------------------------
> [2]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20235 on display localhost:50.0 on machine n12-76
> [0]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20233 on display localhost:50.0 on machine n12-76
> [1]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20234 on display localhost:50.0 on machine n12-76
> [3]PETSC ERROR: PETSC: Attaching gdb to ./a.out of pid 20236 on display localhost:50.0 on machine n12-76
> [n12-76:20232] 3 more processes have sent help message help-mpi-runtime.txt / mpi_init:warn-fork
> [n12-76:20232] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
>
> ....
>
>  1
> [1]PETSC ERROR: ------------------------------------------------------------------------
> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
> [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
> [1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
> [1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
> [1]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
> [1]PETSC ERROR: to get more information on the crash.
> [1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null)
> [3]PETSC ERROR: ------------------------------------------------------------------------
> [3]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
> [3]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
> [3]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
> [3]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
> [3]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
> [3]PETSC ERROR: to get more information on the crash.
> [3]PETSC ERROR: User provided function() line 0 in unknown directory unknown file (null)
>
> ...
>
<pre cols="72">Thank you.
Yours sincerely,
TAY wee-beng</pre>
On 14/4/2014 9:05 PM, Barry Smith wrote:<br>
>> Because IO doesn't always get flushed immediately, it may not be hanging
>> at this point. It is better to use the option -start_in_debugger, then
>> type cont in each debugger window, and then, when you think it is
>> "hanging", do a control-C in each debugger window and type where to see
>> where each process is. You can also look around in the debugger at
>> variables to see why it is "hanging" at that point.
>>
>>    Barry
>>
>> These routines don't have any parallel communication in them, so they are
>> unlikely to hang.

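In each gdb window that comes up, the session Barry describes would look roughly like this (the gdb prompt and the variable name myid are only for illustration):

    (gdb) cont          <- let this rank run
    Ctrl-C              <- when the job seems to hang, interrupt it
    (gdb) where         <- print this rank's stack trace
    (gdb) print myid    <- poke at variables for more context

Comparing the output of where across the four windows shows which call each rank is stuck in.
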
>> On Apr 14, 2014, at 6:52 AM, TAY wee-beng <zonexo@gmail.com> wrote:
>>
<blockquote type="cite">
<pre>Hi,
My code hangs and I added in mpi_barrier and print to catch the bug. I found that it hangs after printing "7". Is it because I'm doing something wrong? I need to access the u,v,w array so I use DMDAVecGetArrayF90. After access, I use DMDAVecRestoreArrayF90.
call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr)
call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"3"
call DMDAVecGetArrayF90(da_v,v_local,v_array,ierr)
call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"4"
call DMDAVecGetArrayF90(da_w,w_local,w_array,ierr)
call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"5"
call I_IIB_uv_initial_1st_dm(I_cell_no_u1,I_cell_no_v1,I_cell_no_w1,I_cell_u1,I_cell_v1,I_cell_w1,u_array,v_array,w_array)
call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"6"
call DMDAVecRestoreArrayF90(da_w,w_local,w_array,ierr) !must be in reverse order
call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"7"
call DMDAVecRestoreArrayF90(da_v,v_local,v_array,ierr)
call MPI_Barrier(MPI_COMM_WORLD,ierr); if (myid==0) print *,"8"
call DMDAVecRestoreArrayF90(da_u,u_local,u_array,ierr)
--
Thank you.
Yours sincerely,
TAY wee-beng
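For reference, the usual ghosted-access pattern around DMDAVecGetArrayF90()/DMDAVecRestoreArrayF90() is sketched below, with ierr checked after every PETSc call. This is only a sketch: the names da_u, u_global and u_local mirror the u component of the snippet above, the subroutine name is made up, it assumes a preprocessed (.F90) source, and the include list follows the PETSc 3.4-era Fortran examples, so adjust it to your own code and PETSc version.

! Sketch only: names and include list are illustrative, not taken from the code above.
subroutine access_u_sketch(da_u, u_global)
  implicit none
#include "finclude/petscsys.h"
#include "finclude/petscvec.h"
#include "finclude/petscvec.h90"
#include "finclude/petscdmda.h"
#include "finclude/petscdmda.h90"
  DM                    da_u
  Vec                   u_global, u_local
  PetscErrorCode        ierr
  PetscScalar, pointer :: u_array(:,:,:)

  ! Borrow a ghosted local vector from the DMDA and fill its ghost points.
  call DMGetLocalVector(da_u, u_local, ierr); if (ierr /= 0) stop 'DMGetLocalVector'
  call DMGlobalToLocalBegin(da_u, u_global, INSERT_VALUES, u_local, ierr); if (ierr /= 0) stop 'GlobalToLocalBegin'
  call DMGlobalToLocalEnd(da_u, u_global, INSERT_VALUES, u_local, ierr); if (ierr /= 0) stop 'GlobalToLocalEnd'

  ! Access the array; index it only inside this rank's local (ghosted) range.
  call DMDAVecGetArrayF90(da_u, u_local, u_array, ierr); if (ierr /= 0) stop 'DMDAVecGetArrayF90'
  ! ... work with u_array(i,j,k) here ...
  call DMDAVecRestoreArrayF90(da_u, u_local, u_array, ierr); if (ierr /= 0) stop 'DMDAVecRestoreArrayF90'

  ! Hand the local vector back to the DMDA when done.
  call DMRestoreLocalVector(da_u, u_local, ierr); if (ierr /= 0) stop 'DMRestoreLocalVector'
end subroutine access_u_sketch

If all of that is already in place, running under the debugger as Barry suggested (or under valgrind, as the error output itself recommends) is the quickest way to see which array access goes out of range.
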
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener