[mpich-discuss] MPI ERROR
moussa brahim
brahim21322 at hotmail.fr
Wed Apr 4 16:56:18 CDT 2012
Hello,

I am trying to run a client/server program (on Ubuntu). When I run the server program, this error appears:

--------------------------------------------------------------------------
At least one pair of MPI processes are unable to reach each other for
MPI communications. This means that no Open MPI device has indicated
that it can be used to communicate between these processes. This is
an error; Open MPI requires that all MPI processes be able to reach
each other. This error can sometimes be the result of forgetting to
specify the "self" BTL.

  Process 1 ([[973,1],0]) is on host: TIGRE
  Process 2 ([[913,1],0]) is on host: TIGRE
  BTLs attempted: self sm tcp

Your MPI job is now going to abort; sorry.
--------------------------------------------------------------------------
[TIGRE:2205] *** An error occurred in MPI_Send
[TIGRE:2205] *** on communicator
[TIGRE:2205] *** MPI_ERR_INTERN: internal error
[TIGRE:2205] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
--------------------------------------------------------------------------
mpiexec has exited due to process rank 0 with PID 2205 on
node TIGRE exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpiexec (as reported here).
--------------------------------------------------------------------------
And this error appears when I run the client program:

--------------------------------------------------------------------------
At least one pair of MPI processes are unable to reach each other for
MPI communications. This means that no Open MPI device has indicated
that it can be used to communicate between these processes. This is
an error; Open MPI requires that all MPI processes be able to reach
each other. This error can sometimes be the result of forgetting to
specify the "self" BTL.

  Process 1 ([[913,1],0]) is on host: TIGRE
  Process 2 ([[973,1],0]) is on host: TIGRE
  BTLs attempted: self sm tcp

Your MPI job is now going to abort; sorry.
--------------------------------------------------------------------------
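(For reference, these banners are printed by Open MPI rather than MPICH. If the hint about the "self" BTL is what matters here, the transports listed in the message can be named explicitly at launch time. The line below is only a sketch, since the actual launch command is not shown in this message, and "server" is an assumed executable name:)

mpiexec --mca btl self,sm,tcp -n 1 ./server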
Here is the server program:

#include <stdio.h>
#include <mpi.h>
int main(int argc, char **argv)
{
int my_id;
char port_name[MPI_MAX_PORT_NAME];
MPI_Comm newcomm;
int passed_num;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &my_id);
passed_num = 111;
if (my_id == 0)
{
MPI_Open_port(MPI_INFO_NULL, port_name);
printf("%s\n\n", port_name); fflush(stdout);
} /* endif */
/* collective over MPI_COMM_WORLD; only the root's port_name is significant */
MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &newcomm);
if (my_id == 0) {
MPI_Send(&passed_num, 1, MPI_INT, 0, 0, newcomm);
printf("after sending passed_num %d\n", passed_num); fflush(stdout);
MPI_Close_port(port_name);
} /* endif */
MPI_Finalize();
return 0;
} /* end main() */
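(One detail worth noting: MPI_Comm_accept is collective over MPI_COMM_WORLD here, and only the root's port_name argument is actually read. If the server were ever run with more than one process, broadcasting the port string first keeps the buffer defined on every rank. The lines below are just a sketch of that variant, not something the standard requires:)

/* hypothetical variant: make port_name valid on every rank before the collective accept */
MPI_Bcast(port_name, MPI_MAX_PORT_NAME, MPI_CHAR, 0, MPI_COMM_WORLD);
MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &newcomm);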
And here is the client program:

#include <stdio.h>
#include <mpi.h>
int main(int argc, char **argv)
{
int passed_num;
int my_id;
MPI_Comm newcomm;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &my_id);
/* argv[1] must be the port string printed by the server */
MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0, MPI_COMM_WORLD, &newcomm);
if (my_id == 0)
{
MPI_Status status;
MPI_Recv(&passed_num, 1, MPI_INT, 0, 0, newcomm, &status);
printf("after receiving passed_num %d\n", passed_num); fflush(stdout);
} /* endif */
MPI_Finalize();
return 0;
} /* end main() */
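(For completeness, this is roughly how I understand the pair is meant to be started; "server" and "client" are assumed executable names. The server is launched first and prints its port string, which is then passed to the client as its first command-line argument:)

mpiexec -n 1 ./server
mpiexec -n 1 ./client "<port string printed by the server>"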
If anybody has a solution, please help me.

Thank you.