[mpich-discuss] Fatal error in MPI_Barrier
Antonio José Gallardo Díaz
ajcampa at hotmail.com
Mon Feb 2 09:49:13 CST 2009
Hello, I get this error when I try to run my jobs that use MPI.
Fatal error in MPI_Barrier: Other MPI error, error stack:
MPI_Barrier(406).............................: MPI_Barrier(MPI_COMM_WORLD) failed
MPIR_Barrier(77).............................:
MPIC_Sendrecv(123)...........................:
MPIC_Wait(270)...............................:
MPIDI_CH3i_Progress_wait(215)................: an error occurred while handling an event returned by MPIDU_Sock_Wait()
MPIDI_CH3I_Progress_handle_sock_event(640)...:
MPIDI_CH3_Sockconn_handle_connopen_event(887): unable to find the process group structure with id <��oz�>[cli_1]: aborting job:
Fatal error in MPI_Barrier: Other MPI error, error stack:
MPI_Barrier(406).............................: MPI_Barrier(MPI_COMM_WORLD) failed
MPIR_Barrier(77).............................:
MPIC_Sendrecv(123)...........................:
MPIC_Wait(270)...............................:
MPIDI_CH3i_Progress_wait(215)................: an error occurred while handling an event returned by MPIDU_Sock_Wait()
MPIDI_CH3I_Progress_handle_sock_event(640)...:
MPIDI_CH3_Sockconn_handle_connopen_event(887): unable to find the process group structure with id <��oz�>
rank 1 in job 15 wireless_43226 caused collective abort of all ranks
exit status of rank 1: killed by signal 9
I have two PCs running Linux (Kubuntu 8.10) and built a cluster from these machines. When I use, for example, the command "mpiexec -l -n 2 hostname", everything looks fine, but as soon as I try to send or receive anything I get the error above. I don't know why. Please lend me a hand. Thanks in advance.
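For reference, any program that does a point-to-point transfer followed by a barrier hits the error. A minimal sketch of such a test (hypothetical names, not my actual job; build with mpicc and run with "mpiexec -n 2 ./test_barrier") would be:

```c
/* Minimal two-process test: rank 0 sends an int to rank 1,
 * then both ranks call MPI_Barrier -- the call that fails
 * in the error stack above.
 *
 * Build: mpicc test_barrier.c -o test_barrier
 * Run:   mpiexec -n 2 ./test_barrier
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, value = 42;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* send one int to rank 1 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* receive the int from rank 0 */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    /* this is where the "unable to find the process group
     * structure" failure is reported on my cluster */
    MPI_Barrier(MPI_COMM_WORLD);

    printf("rank %d done\n", rank);
    MPI_Finalize();
    return 0;
}
```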
More information about the mpich-discuss mailing list