[mpich-discuss] read from socket failed (errno 10055) on 1.3.2p1

Kuraisa, Roy J (BOSTON) roy_kuraisa at merck.com
Tue Apr 19 14:32:24 CDT 2011


Hi,
Summary:
---------------
On Windows when I execute the following command (working on a fairly large dataset):
   mpiexec -hosts 2 usctap3825 15 usctap3488 1 \\fs1\correlatempi.exe cfg.xml in.h5 out.h5 debug
I encounter an MPI gather error (read from socket failed, errno 10055).  See the error stack at the end of this message.  If I run on only one computer (with 16 cores):
   mpiexec -hosts 1 usctap3825 15 \\fs1\correlatempi.exe cfg.xml in.h5 out.h5 debug
the program runs successfully.

Additionally, both of the above commands run successfully on mpich2 v1.2.1 (although I had to build against mpich2 1.2.1 and used different servers that are configured exactly like the original servers noted above, e.g., usctap3825: 16 cores, 64GB memory, etc.).

I noticed that a similar error was fixed in mpich2-1.2 (http://trac.mcs.anl.gov/projects/mpich2/ticket/895).  Could this have regressed?  Thanks in advance.

System Configuration:
--------------------------------

Server1 (usctap3825)
-------
a. Windows Server 2003, 64-bit, SP2
b. 16 cores/processors
c. 64GB memory
d. Physical computer
Server2 (usctap3488)
-------
a. Windows Server 2003, 64-bit, SP2
b. 2 cores/processors
c. 8GB memory
d. Virtual Machine

cheers, roy


error stack:
----------------
Fatal error in PMPI_Gatherv: Other MPI error, error stack:
PMPI_Gatherv(398)................................: MPI_Gatherv failed(sbuf=000000003AA30040, scount=97787376, MPI_FLOAT, rbuf=0000000180040040, rcnts=000000000D6515E0, displs=000000000D651630, MPI_FLOAT, root=0, MPI_COMM_WORLD) failed
MPIR_Gatherv_impl(210)...........................:
MPIR_Gatherv(118)................................:
MPIC_Waitall_ft(852).............................:
MPIR_Waitall_impl(121)...........................:
MPIDI_CH3I_Progress(353).........................:
MPID_nem_mpich2_blocking_recv(905)...............:
MPID_nem_newtcp_module_poll(37)..................:
MPID_nem_newtcp_module_connpoll(2669)............:
MPID_nem_newtcp_module_recv_success_handler(2364):
MPID_nem_newtcp_module_post_readv_ex(330)........:
MPIU_SOCKW_Readv_ex(392).........................: read from socket failed, An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full. (errno 10055)
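For context, errno 10055 is the Winsock error WSAENOBUFS ("no buffer space available"), and the sendcount reported in the stack implies a very large per-rank message. A rough back-of-the-envelope check (assuming a 4-byte MPI_FLOAT, which is standard) shows the size being pushed through the TCP channel:

```python
# Size estimate for the failing MPI_Gatherv call, using the scount
# value from the error stack above. MPI_FLOAT is assumed to be 4 bytes.
FLOAT_SIZE = 4           # bytes per MPI_FLOAT (assumption; typical on Windows x64)
scount = 97787376        # sendcount reported in the error stack

bytes_per_rank = scount * FLOAT_SIZE
print(f"per-rank send size: {bytes_per_rank} bytes "
      f"(~{bytes_per_rank / 2**20:.0f} MiB)")
```

A gather of roughly this size from each remote rank can exhaust Winsock's non-paged buffer pool on Windows Server 2003, which is consistent with the single-host run (shared memory, no TCP) succeeding while the two-host run fails.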



More information about the mpich-discuss mailing list