[MPICH] problem net_send: could not write to fd=5, errno = 32
ajit mote
ajitm at cdac.in
Tue Jan 16 00:33:01 CST 2007
The problem is that the number of sends does not match the number of
receives.

The else branch of your code is executed by every process other than
rank 0, so there is only one send but more than one receive; that is
what gives you the error with more than two processes.

If you tell us what you are trying to do, it will be easier to
comment; one option is to use a collective call, as in the sketch
below.
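For illustration, here is a minimal sketch of the collective approach
(my own example, not code from the thread; it reuses the variable name
mensagem and adds the MPI_Finalize that Rajeev mentioned). Rank 0
broadcasts the message to every rank, so it should run with any number
of processes:

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    char mensagem[32];
    int numprocs, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Only the root fills the buffer. */
    if (rank == 0)
        sprintf(mensagem, "Message Text");

    /* Every rank calls MPI_Bcast: rank 0 sends, all others receive. */
    MPI_Bcast(mensagem, 32, MPI_CHAR, 0, MPI_COMM_WORLD);

    printf("rank %d got: %s\n", rank, mensagem);

    MPI_Finalize();
    return 0;
}

If you prefer point-to-point, the alternative is to keep the MPI_Send
to rank 1 and guard the MPI_Recv with rank == 1, so that ranks 2 and
above do not wait for a message that never arrives.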
On Mon, 2007-01-15 at 13:54 -0600, Rajeev Thakur wrote:
> This program will not run on more than 2 processes because there is
> only 1 send (from rank 0 to 1), while all other processes call Recv
> and expect a message from rank 0.
>
> Rajeev
>
>
> ______________________________________________________________
> From: Luiz Mendes [mailto:luizmendesw at gmail.com]
> Sent: Monday, January 15, 2007 1:31 PM
> To: mpich-discuss at mcs.anl.gov
> Cc: thakur at mcs.anl.gov; Geoff Jacobs
> Subject: Re: [MPICH] problem net_send: could not write to
> fd=5, errno = 32
>
>
>
> Hi Geoff, Rajeev, Antony
>
> Yes, I forgot to include it. But the problem remains the same even
> with MPI_Finalize().
>
> Afterwards, while editing the code, I saw that the problem arises
> when a process executes an MPI_Recv() for a message that was not
> destined for it.
>
> I was assuming that a process would simply ignore an MPI_Recv not
> destined for it, but so far that assumption seems to be wrong.
>
> Thanks, and sorry for the silly mistakes.
> Luiz Mendes
>
>
> 2007/1/15, Rajeev Thakur <thakur at mcs.anl.gov>:
> You need to add an MPI_Finalize.
>
> Rajeev
>
>
> ______________________________________________
> From: owner-mpich-discuss at mcs.anl.gov
> [mailto:owner-mpich-discuss at mcs.anl.gov] On
> Behalf Of Luiz Mendes
> Sent: Monday, January 15, 2007 12:22 PM
> To: mpich-discuss at mcs.anl.gov
> Cc: Geoff Jacobs
> Subject: Re: [MPICH] problem net_send: could
> not write to fd=5, errno = 32
>
>
>
> Hi Geoff,
>
> I don't know, Geoff; the code is below:
>
> #include "mpi.h"
> #include <stdio.h>
> #include <string.h>
>
> int main(int argc, char* argv[])
> {
>     char mensagem[32];
>     int numprocs, rank;
>     MPI_Status stat;
>
>     MPI_Init(&argc, &argv);
>     MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>
>     if (rank == 0)
>     {
>         sprintf(mensagem, "Message Text");
>         MPI_Send(mensagem, 32, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
>     }
>     else
>     {
>         MPI_Recv(mensagem, 32, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &stat);
>     }
> }
>
> When I run it with mpirun -np 2 it works, but when I run it with
> mpirun -np 3 it fails and reports the following messages:
>
> p2_23214: p4_error: net_recv read: probable
> EOF on socket: 1
> bm_list_23184: (0.507200) wakeup_slave: unable
> to interrupt slave 0 pid 23183
> rm_l_1_23213: (0.257494) net_send: could not
> write to fd=6, errno = 9
> rm_l_1_23213: p4_error: net_send write: -1
> p4_error: latest msg from perror: Bad file
> descriptor
> rm_l_2_23236: (0.045590) net_send: could not
> write to fd=5, errno = 32
> p2_23214: (2.046247) net_send: could not write
> to fd=5, errno = 32
>
>
> What could it be?
>
> I installed MPICH 1.2.7p1 with the p4 device, I think. I am trying
> to run it on two PCs. I installed MPI in the /usr/share/MPI-1 folder
> and shared it with the other PC via an NFS server.
>
> I also tried to run another standard example, simpleio.c, and it
> does not store anything on the other PC.
>
> What is happening?
>
> Thanks,
> Luiz Mendes
>
> 2007/1/15, Geoff Jacobs <gdjacobs at gmail.com>:
> Luiz Mendes wrote:
> > Hi all,
> >
> > I would like to know the reason for this error:
> >
> > net_send: could not write to fd=5, errno = 32
> >
> > What should I do to correct it? I searched for this error on the
> > internet but did not find anything about it.
> >
> > It is strange, because this error is triggered by increasing the
> > number of processes when running the MPI program.
> >
> > Could you help me?
> >
> > Thanks
> > Luiz Mendes
> Segfault?
>
> --
> Geoffrey D. Jacobs
>
> Go to the Chinese Restaurant,
> Order the Special
>
>
>