<html>
<head>
<style><!--
.hmmessage P
{
margin:0px;
padding:0px
}
body.hmmessage
{
font-size: 10pt;
font-family:Tahoma
}
--></style></head>
<body class='hmmessage'><div dir='ltr'>
<div>brahim21322@hotmail.fr</div><div>mpich-discuss@mcs.anl.gov</div>
<div>Hello,</div>
<div>I am trying to run a client/server program (on Ubuntu). When I run the server program, this error appears:</div>
<pre>
--------------------------------------------------------------------------
At least one pair of MPI processes are unable to reach each other for
MPI communications. This means that no Open MPI device has indicated
that it can be used to communicate between these processes. This is
an error; Open MPI requires that all MPI processes be able to reach
each other. This error can sometimes be the result of forgetting to
specify the "self" BTL.

 Process 1 ([[973,1],0]) is on host: TIGRE
 Process 2 ([[913,1],0]) is on host: TIGRE
 BTLs attempted: self sm tcp

Your MPI job is now going to abort; sorry.
--------------------------------------------------------------------------
[TIGRE:2205] *** An error occurred in MPI_Send
[TIGRE:2205] *** on communicator
[TIGRE:2205] *** MPI_ERR_INTERN: internal error
[TIGRE:2205] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
--------------------------------------------------------------------------
mpiexec has exited due to process rank 0 with PID 2205 on
node TIGRE exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpiexec (as reported here).
--------------------------------------------------------------------------
</pre>
<div>And this error appears when I run the client program:</div>
<pre>
--------------------------------------------------------------------------
At least one pair of MPI processes are unable to reach each other for
MPI communications. This means that no Open MPI device has indicated
that it can be used to communicate between these processes. This is
an error; Open MPI requires that all MPI processes be able to reach
each other. This error can sometimes be the result of forgetting to
specify the "self" BTL.

 Process 1 ([[913,1],0]) is on host: TIGRE
 Process 2 ([[973,1],0]) is on host: TIGRE
 BTLs attempted: self sm tcp

Your MPI job is now going to abort; sorry.
--------------------------------------------------------------------------
</pre>
<div>Here is the server program:</div>
<pre>
#include &lt;stdio.h&gt;
#include &lt;mpi.h&gt;

int main(int argc, char **argv)
{
    int my_id;
    char port_name[MPI_MAX_PORT_NAME];
    MPI_Comm newcomm;
    int passed_num;

    MPI_Init(&amp;argc, &amp;argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;my_id);

    passed_num = 111;

    if (my_id == 0)
    {
        /* Open a port and print its name so it can be passed to the client. */
        MPI_Open_port(MPI_INFO_NULL, port_name);
        printf("%s\n\n", port_name); fflush(stdout);
    } /* endif */

    /* Collective over MPI_COMM_WORLD; port_name is significant only at root 0. */
    MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &amp;newcomm);

    if (my_id == 0)
    {
        MPI_Send(&amp;passed_num, 1, MPI_INT, 0, 0, newcomm);
        printf("after sending passed_num %d\n", passed_num); fflush(stdout);
        MPI_Close_port(port_name);
    } /* endif */

    MPI_Finalize();
    return 0;
} /* end main() */
</pre>
<div>And here is the client program:</div>
<pre>
#include &lt;stdio.h&gt;
#include &lt;mpi.h&gt;

int main(int argc, char **argv)
{
    int passed_num;
    int my_id;
    MPI_Comm newcomm;

    MPI_Init(&amp;argc, &amp;argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;my_id);

    /* argv[1] is the port name printed by the server. */
    MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0, MPI_COMM_WORLD, &amp;newcomm);

    if (my_id == 0)
    {
        MPI_Status status;
        MPI_Recv(&amp;passed_num, 1, MPI_INT, 0, 0, newcomm, &amp;status);
        printf("after receiving passed_num %d\n", passed_num); fflush(stdout);
    } /* endif */

    MPI_Finalize();
    return 0;
} /* end main() */
</pre>
<div>If anybody has a solution, please help me.</div>
<div>Thank you</div>
</div></body>
</html>