The cpi example also does not work. There is no error message; it simply hangs:<br><br>xxxx@query:~/MPI$ mpiexec -n 2 -f machinefile /home/netlab/MPI/mpich2-build/examples/cpi<br>Process 1 of 2 is on query<br>Process 0 of 2 is on trigger<br>
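One clue worth noting alongside the hang: the MPI_Bcast failure quoted below complains about allocating a negative number of bytes. A minimal sketch of what that value looks like if the size field is reinterpreted as an unsigned 32-bit integer (the reinterpretation is my assumption, not something MPICH2 reports — but a garbage size like this often points at mismatched builds or architectures on the two hosts rather than a genuine out-of-memory condition):

```python
import struct

# Size reported by MPIDI_CH3U_Receive_data_unexpected in the error stack below.
raw = -1216907051

# Round-trip through a signed 32-bit encoding to read the same bits unsigned.
unsigned = struct.unpack("<I", struct.pack("<i", raw))[0]

print(unsigned)                    # 3078060245 bytes
print(round(unsigned / 2**30, 2))  # ~2.87 GiB -- an implausible message size
```

If the two hosts are running different MPICH2 builds (or 32- vs 64-bit binaries), a corrupted header size like this is a plausible symptom.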
<br>I suspect my two hosts are still unable to communicate with each other. Any suggestions?<br><br>Best wishes,<br><br><br><div class="gmail_quote">On Fri, May 27, 2011 at 9:42 AM, Dave Goodell <span dir="ltr"><<a href="mailto:goodell@mcs.anl.gov">goodell@mcs.anl.gov</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">Does the "examples/cpi" program from the MPICH2 build directory work correctly for you when you run it on multiple nodes?<br>
<br>
-Dave<br>
<div><div></div><div class="h5"><br>
On May 26, 2011, at 5:49 PM CDT, Fujun Liu wrote:<br>
<br>
> Hi everyone,<br>
><br>
> When I try an example from <a href="http://beige.ucs.indiana.edu/I590/node62.html" target="_blank">http://beige.ucs.indiana.edu/I590/node62.html</a>, I get the error message below. The MPI cluster has two hosts. If I run both processes on a single host, everything works fine, but if I run them across the two hosts, the following error occurs. I think the two hosts simply can't exchange messages, but I don't know how to resolve this.<br>
><br>
> Thanks in advance!<br>
><br>
> xxxx@query:~/MPI$ mpiexec -n 2 -f machinefile ./GreetMaster<br>
> Fatal error in PMPI_Bcast: Other MPI error, error stack:<br>
> PMPI_Bcast(1430).......................: MPI_Bcast(buf=0x7fff13114cb0, count=8192, MPI_CHAR, root=0, MPI_COMM_WORLD) failed<br>
> MPIR_Bcast_impl(1273)..................:<br>
> MPIR_Bcast_intra(1107).................:<br>
> MPIR_Bcast_binomial(143)...............:<br>
> MPIC_Recv(110).........................:<br>
> MPIC_Wait(540).........................:<br>
> MPIDI_CH3I_Progress(353)...............:<br>
> MPID_nem_mpich2_blocking_recv(905).....:<br>
> MPID_nem_tcp_connpoll(1823)............:<br>
> state_commrdy_handler(1665)............:<br>
> MPID_nem_tcp_recv_handler(1559)........:<br>
> MPID_nem_handle_pkt(587)...............:<br>
> MPIDI_CH3_PktHandler_EagerSend(632)....: failure occurred while posting a receive for message data (MPIDI_CH3_PKT_EAGER_SEND)<br>
> MPIDI_CH3U_Receive_data_unexpected(251): Out of memory (unable to allocate -1216907051 bytes)<br>
> [mpiexec@query] ONE OF THE PROCESSES TERMINATED BADLY: CLEANING UP<br>
> APPLICATION TERMINATED WITH THE EXIT STRING: Hangup (signal 1)<br>
><br>
> --<br>
> Fujun Liu<br>
> Department of Computer Science, University of Kentucky, 2010.08-<br>
> <a href="mailto:fujun.liu@uky.edu">fujun.liu@uky.edu</a>, <a href="tel:%28859%29229-3659" value="+18592293659">(859)229-3659</a><br>
><br>
><br>
><br>
</div></div>> _______________________________________________<br>
> mpich-discuss mailing list<br>
> <a href="mailto:mpich-discuss@mcs.anl.gov">mpich-discuss@mcs.anl.gov</a><br>
> <a href="https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss" target="_blank">https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss</a><br>
<br>
</blockquote></div><br><br clear="all"><br>-- <br><div>Fujun Liu<br>Department of Computer Science, University of Kentucky, 2010.08-<br></div>
<div><a href="mailto:fujun.liu@uky.edu" target="_blank">fujun.liu@uky.edu</a>, (859)229-3659</div>
<div><br> </div><br>