Try executing the program with just

    mpiexec -n 2 ./test.x

or else tell us what is in your "hosts" file (the one passed in the command "mpiexec -f hosts -n 2 ./test.x").
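For reference, Hydra's host file is simply a plain-text list of machine names, one per line; you can optionally append ":N" to request N processes on a host. Using the node names from your listing, a minimal sketch might look like this (the ":1" counts are only an illustration, not something you must add):

    admin:1
    cn01:1
    cn02:1
    cn03:1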
----Mandar Gurav

On Wed, Feb 2, 2011 at 4:38 PM, Deniz DAL <dendal25@yahoo.com> wrote:
<table border="0" cellpadding="0" cellspacing="0"><tbody><tr><td style="font-family: inherit; font-style: inherit; font-variant: inherit; font-weight: inherit; font-size: inherit; line-height: inherit; font-size-adjust: inherit; font-stretch: inherit;" valign="top">
Hello everyone,

I created a small Linux cluster with 4 compute nodes yesterday. I installed the Fedora 14 OS on all machines and mpich2 v1.3.2 on the server node, but I have a serious problem: I cannot even make a simple send-and-receive program work. Below are the steps I follow; you will see the error at the end. I cannot run the point-to-point or collective communication routines of MPI. I suspect the problem might have something to do with Hydra. Any help is appreciated.

Deniz.
[ddal@admin mpi_uygulamalar]$ cat hosts
admin
cn01
cn02
cn03
[ddal@admin mpi_uygulamalar]$ cat 01_Send_Receive_One_Message.cpp
#include "mpi.h"
#include <iostream>
using namespace std;

#define TAG 25

int main(int argc, char* argv[])
{
    int myRank;
    int size;

    char processorName[50];
    int nameLength;

    int a; /* value sent by rank 0 */
    int b; /* receive buffer on rank 1 */

    MPI_Status status;

    /* Initialize MPI */
    MPI_Init(&argc, &argv);

    /* Determine the size of the group */
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Determine the rank of the calling process */
    MPI_Comm_rank(MPI_COMM_WORLD, &myRank);

    MPI_Get_processor_name(processorName, &nameLength);

    if (size != 2)
    {
        cout << "Number of CPUs must be 2!\n";
        MPI_Abort(MPI_COMM_WORLD, 99);
    }

    if (myRank == 0) /* Master sends a message */
    {
        a = 25;
        MPI_Send(&a, 1, MPI_INT, 1, TAG, MPI_COMM_WORLD);
        printf("%s Sent Variable a Successfully\n", processorName);
    }
    else /* Process 1 receives the message */
    {
        MPI_Recv(&b, 1, MPI_INT, 0, TAG, MPI_COMM_WORLD, &status);
        printf("%s Received Variable a Successfully over b\n", processorName);
        printf("b=%d\n", b);
    }

    /* Terminate MPI */
    MPI_Finalize();

    return 0;
}
[ddal@admin mpi_uygulamalar]$ mpicxx 01_Send_Receive_One_Message.cpp -o test.x
[ddal@admin mpi_uygulamalar]$ mpiexec -f hosts -n 2 ./test.x
Fatal error in MPI_Send: Other MPI error, error stack:
MPI_Send(173)..............: MPI_Send(buf=0xbf928900, count=1, MPI_INT, dest=1, tag=25, MPI_COMM_WORLD) failed
MPID_nem_tcp_connpoll(1811): Communication error with rank 1:
[mpiexec@admin] ONE OF THE PROCESSES TERMINATED BADLY: CLEANING UP
[proxy:0:1@cn01] HYD_pmcd_pmip_control_cmd_cb (./pm/pmiserv/pmip_cb.c:868): assert (!closed) failed
[proxy:0:1@cn01] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
[proxy:0:1@cn01] main (./pm/pmiserv/pmip.c:208): demux engine error waiting for event
APPLICATION TERMINATED WITH THE EXIT STRING: Hangup (signal 1)
[ddal@admin mpi_uygulamalar]$
--
|||| Mandar Gurav ||||