[mpich-discuss] ISend problem
Jierui XIE
jierui.xie at gmail.com
Thu Nov 17 12:29:25 CST 2011
Hello,
The problem is still not solved. Can anyone help?
Thanks.
On Fri, Nov 11, 2011 at 5:56 PM, Jierui XIE <jierui.xie at gmail.com> wrote:
> Hello,
>
> I want to use MPI_Isend but am getting errors.
>
> In EACH process, I use a loop to post MPI_Isend() calls to some of the
> other processes (not all), and the number of messages is large
> (e.g., 100,000, each consisting of 2 integers).
> I am wondering whether there is a limit (a maximum number) on how many
> MPI_Isend() operations can be posted at once.
>
> If I replace it with a blocking MPI_Send, everything works fine. Why?
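>
> To make the pattern concrete, here is a stripped-down, self-contained
> sketch of the kind of thing I am doing. It is NOT my real program:
> every rank talks to every other rank instead of to my neighbour
> subset, and the tag and payload values are just placeholders.
>
> #include <mpi.h>
> #include <vector>
>
> int main(int argc, char** argv) {
>     MPI_Init(&argc, &argv);
>     int rank, size;
>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>     MPI_Comm_size(MPI_COMM_WORLD, &size);
>
>     // one 2-int message to and from every other rank (run with >= 2 ranks)
>     int nmsg = size - 1;
>     std::vector<MPI_Request> req_s(nmsg), req_r(nmsg);
>     std::vector<int> sendbuf(2 * nmsg), recvbuf(2 * nmsg);
>
>     int p = 0;
>     for (int dest = 0; dest < size; dest++) {
>         if (dest == rank) continue;
>         // post the receive for the matching incoming message
>         MPI_Irecv(&recvbuf[2 * p], 2, MPI_INT, MPI_ANY_SOURCE, 1,
>                   MPI_COMM_WORLD, &req_r[p]);
>         sendbuf[2 * p]     = rank;   // placeholder payload
>         sendbuf[2 * p + 1] = dest;
>         MPI_Isend(&sendbuf[2 * p], 2, MPI_INT, dest, 1,
>                   MPI_COMM_WORLD, &req_s[p]);
>         p++;
>     }
>
>     // complete all receives, then all sends
>     MPI_Waitall(nmsg, req_r.data(), MPI_STATUSES_IGNORE);
>     MPI_Waitall(nmsg, req_s.data(), MPI_STATUSES_IGNORE);
>
>     MPI_Finalize();
>     return 0;
> }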
>
>
> Jerry
> Thanks.
>
> ==============Error===============================
> rank 6 in job 223 node08_42506 caused collective abort of all ranks
> exit status of rank 6: return code 1
> Fatal error in MPI_Isend: Other MPI error, error stack:
> MPI_Isend(146)...............: MPI_Isend(buf=0x12da9e8, count=2,
> MPI_INT, dest=6, tag=1, MPI_COMM_WORLD, request=0x7fff0f363068) failed
> MPIDI_EagerContigIsend(535)..: failure occurred while attempting to
> send an eager message
> MPID_nem_tcp_iSendContig(400): writev to socket failed - Connection
> reset by peer
>
>
> ==============Code===============================
>
> for(int t=1; t<maxT; t++){
>     MPI_Request req_s[totalMsgOut], req_r[totalMsgIn];
>     int tag = t;
>
>     //------------------------------------
>     // post the receives
>     //------------------------------------
>     int msgArr[totalMsgIn][2];
>     for(int i=0; i<totalMsgIn; i++)
>         MPI_Irecv(msgArr[i], 2, MPI_INT, MPI_ANY_SOURCE, tag,
>                   MPI_COMM_WORLD, &req_r[i]);
>
>     //-------------------------------------------------------------
>     // post the sends ****(problem)
>     //-------------------------------------------------------------
>     int dest, p = 0;
>     for(int i=0; i<subNODES.size(); i++){
>         // dest and v->msg are set up per neighbour (omitted here)
>         MPI_Isend(v->msg, 2, MPI_INT, dest, tag, MPI_COMM_WORLD, &req_s[p++]);
>         //MPI_Send(v->msg, 2, MPI_INT, dest, tag, MPI_COMM_WORLD);
>     }
>
>     //------------------------------------
>     // wait for all receives to complete
>     MPI_Waitall(totalMsgIn, req_r, MPI_STATUSES_IGNORE);
>
>     //------------------------------------
>     // wait for all sends to complete
>     MPI_Waitall(totalMsgOut, req_s, MPI_STATUSES_IGNORE);
> }
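>
> If the number of outstanding nonblocking sends is what matters here, I
> assume something along these lines would bound it by completing the
> requests in fixed-size batches. This is only a sketch, not code I have
> run: batched_isend, dests, msgs and the batch size are all made up;
> dests/msgs stand in for my real subNODES / v->msg data.
>
> #include <mpi.h>
> #include <vector>
>
> // Sketch: post nonblocking sends so that at most `batch` requests are
> // outstanding at any one time. msgs holds 2 ints per destination.
> void batched_isend(const std::vector<int>& dests,
>                    std::vector<int>& msgs,
>                    int tag, int batch)
> {
>     std::vector<MPI_Request> req;
>     req.reserve(batch);
>     for (size_t i = 0; i < dests.size(); i++) {
>         MPI_Request r;
>         MPI_Isend(&msgs[2 * i], 2, MPI_INT, dests[i], tag,
>                   MPI_COMM_WORLD, &r);
>         req.push_back(r);
>         if ((int)req.size() == batch) {      // drain before posting more
>             MPI_Waitall(batch, req.data(), MPI_STATUSES_IGNORE);
>             req.clear();
>         }
>     }
>     if (!req.empty())                        // complete the remainder
>         MPI_Waitall((int)req.size(), req.data(), MPI_STATUSES_IGNORE);
> }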
>