[mpich-discuss] MPI ERROR
Jeff Hammond
jhammond at alcf.anl.gov
Wed Apr 4 17:26:28 CDT 2012
...or just install MPICH2.
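On Ubuntu that is roughly (a sketch only; the package names and the file names server.c / client.c are assumptions, not something from this thread):

  sudo apt-get install mpich2 libmpich2-dev
  mpicc server.c -o server
  mpicc client.c -o client

Afterwards make sure the mpicc and mpiexec you invoke really come from MPICH2 rather than the existing Open MPI install (check your PATH, or the MPI entry in update-alternatives).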
Jeff
On Wed, Apr 4, 2012 at 4:59 PM, Pavan Balaji <balaji at mcs.anl.gov> wrote:
>
> You seem to be using Open MPI, not MPICH2. Please send an email to the
> appropriate mailing list.
>
> -- Pavan
>
>
> On 04/04/2012 04:56 PM, moussa brahim wrote:
>>
>> Hello
>> I am trying to run a client/server program (on Ubuntu), and when I run the server
>> program this error appears:
>> --------------------------------------------------------------------------
>> At least one pair of MPI processes are unable to reach each other for
>> MPI communications. This means that no Open MPI device has indicated
>> that it can be used to communicate between these processes. This is
>> an error; Open MPI requires that all MPI processes be able to reach
>> each other. This error can sometimes be the result of forgetting to
>> specify the "self" BTL.
>>
>> Process 1 ([[973,1],0]) is on host: TIGRE
>> Process 2 ([[913,1],0]) is on host: TIGRE
>> BTLs attempted: self sm tcp
>>
>> Your MPI job is now going to abort; sorry.
>> --------------------------------------------------------------------------
>> [TIGRE:2205] *** An error occurred in MPI_Send
>> [TIGRE:2205] *** on communicator
>> [TIGRE:2205] *** MPI_ERR_INTERN: internal error
>> [TIGRE:2205] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
>> --------------------------------------------------------------------------
>> mpiexec has exited due to process rank 0 with PID 2205 on
>> node TIGRE exiting without calling "finalize". This may
>> have caused other processes in the application to be
>> terminated by signals sent by mpiexec (as reported here).
>> --------------------------------------------------------------------------
>>
>>
>>
>> *** and this error appears when I run the client program:
>> --------------------------------------------------------------------------
>> At least one pair of MPI processes are unable to reach each other for
>> MPI communications. This means that no Open MPI device has indicated
>> that it can be used to communicate between these processes. This is
>> an error; Open MPI requires that all MPI processes be able to reach
>> each other. This error can sometimes be the result of forgetting to
>> specify the "self" BTL.
>>
>> Process 1 ([[913,1],0]) is on host: TIGRE
>> Process 2 ([[973,1],0]) is on host: TIGRE
>> BTLs attempted: self sm tcp
>>
>> Your MPI job is now going to abort; sorry.
>> --------------------------------------------------------------------------
>>
>>
>>
>> *** here is the server program:
>> #include <stdio.h>
>> #include <stdlib.h>        /* for exit() */
>> #include <mpi.h>
>>
>> int main(int argc, char **argv)
>> {
>>     int my_id;
>>     char port_name[MPI_MAX_PORT_NAME];
>>     MPI_Comm newcomm;
>>     int passed_num;
>>
>>     MPI_Init(&argc, &argv);
>>     MPI_Comm_rank(MPI_COMM_WORLD, &my_id);
>>
>>     passed_num = 111;
>>
>>     if (my_id == 0)
>>     {
>>         /* open a port and print it so it can be given to the client */
>>         MPI_Open_port(MPI_INFO_NULL, port_name);
>>         printf("%s\n\n", port_name); fflush(stdout);
>>     } /* endif */
>>
>>     /* collective over MPI_COMM_WORLD; only the root's port_name is
>>        significant; blocks until a client connects */
>>     MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &newcomm);
>>
>>     if (my_id == 0)
>>     {
>>         MPI_Send(&passed_num, 1, MPI_INT, 0, 0, newcomm);
>>         printf("after sending passed_num %d\n", passed_num); fflush(stdout);
>>         MPI_Close_port(port_name);
>>     } /* endif */
>>
>>     MPI_Finalize();
>>     exit(0);
>> } /* end main() */
>>
>>
>>
>> *** and here is the client program:
>> #include <stdio.h>
>> #include <stdlib.h>        /* for exit() */
>> #include <mpi.h>
>>
>> int main(int argc, char **argv)
>> {
>>     int passed_num;
>>     int my_id;
>>     MPI_Comm newcomm;
>>
>>     MPI_Init(&argc, &argv);
>>     MPI_Comm_rank(MPI_COMM_WORLD, &my_id);
>>
>>     /* argv[1] must be the port string printed by the server */
>>     MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0, MPI_COMM_WORLD, &newcomm);
>>
>>     if (my_id == 0)
>>     {
>>         MPI_Status status;
>>
>>         MPI_Recv(&passed_num, 1, MPI_INT, 0, 0, newcomm, &status);
>>         printf("after receiving passed_num %d\n", passed_num); fflush(stdout);
>>     } /* endif */
>>
>>     MPI_Finalize();
>>     exit(0);
>> } /* end main() */
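>> To run them, the server is started first; the port string it prints is then passed to the client as argv[1], quoted because it can contain characters the shell would otherwise interpret. Roughly (a sketch; the binary names server and client are assumed):
>>
>> mpiexec -n 1 ./server
>> (copy the port string it prints, then in another terminal)
>> mpiexec -n 1 ./client "<port string printed by the server>"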
>>
>> If anybody has a solution, please help me.
>> Thank you.
>>
>>
>
>
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji
--
Jeff Hammond
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond at alcf.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond
https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond (in-progress)
https://wiki.alcf.anl.gov/old/index.php/User:Jhammond (deprecated)
https://wiki-old.alcf.anl.gov/index.php/User:Jhammond (deprecated)