[mpich-discuss] ERROR - The specified network name is no longer available. (errno 64)

Alexandru Blidaru alexsb92 at gmail.com
Mon Jul 26 15:43:14 CDT 2010


Hi,

I am trying to run the attached program, which, when fully written, will
implement a 3D array where each element may itself hold a number of
elements, so I would essentially have a 4D array whose fourth dimension
varies by position. The class I have to write accepts the array on node 0,
slices it into equal parts along the x-axis, and sends the slices to the
slave nodes. You can completely ignore any code that is commented out.

In the constructor, DOFArr holds the number of elements in the fourth
dimension for each position in the third dimension. DOFArr is a linear
array, and any 3D position can be mapped to a 1D index using the
LinearIndex function. The constructor simply fills the array with random
values. It then splits the array into nNodes equal parts, where nNodes is
the number of slave nodes. Of course, the division usually leaves a
remainder, so some slices are one unit wider than others. I then send the
offset, which is the width of the slice, followed by NoDOF, which is the
width of the fourth dimension, and finally the data itself, one double at a
time.
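
To make that concrete, here is roughly what the mapping and slicing look
like (illustrative only; SliceWidth and the extents nx, ny, nz are made-up
names for this sketch, the attached main.cpp may differ):

// Illustrative sketch only; nx, ny, nz and SliceWidth are made-up
// names for this example, not necessarily what main.cpp uses.
int LinearIndex(int x, int y, int z, int ny, int nz)
{
    // row-major mapping of a 3D position into the linear DOFArr index
    return (x * ny + y) * nz + z;
}

int SliceWidth(int nx, int nNodes, int slave) // slave index in [0, nNodes)
{
    // nx / nNodes planes each, with the first nx % nNodes slices one wider
    return nx / nNodes + (slave < nx % nNodes ? 1 : 0);
}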

The slave nodes pretty much mirror the master node, except that they
receive instead of send.
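
In sketch form, the pattern I use looks something like this (simplified,
with made-up names; ExchangeSlice is not the actual function in the
attachment): the master posts an MPI_Isend per value, the slave posts the
matching MPI_Irecv, and both sides then call MPI_Waitall on the collected
requests. The buffers and the request array have to stay alive until
MPI_Waitall returns.

#include <mpi.h>
#include <vector>

// Hypothetical, simplified version of the master/slave exchange described
// above: rank 0 sends `count` doubles to `peer` without blocking, any other
// rank receives them from rank 0, then both sides wait on all requests.
void ExchangeSlice(double* data, int count, int rank, int peer)
{
    std::vector<MPI_Request> reqs(count);
    for (int i = 0; i < count; ++i) {
        if (rank == 0)
            MPI_Isend(&data[i], 1, MPI_DOUBLE, peer, i, MPI_COMM_WORLD, &reqs[i]);
        else
            MPI_Irecv(&data[i], 1, MPI_DOUBLE, 0, i, MPI_COMM_WORLD, &reqs[i]);
    }
    // data and reqs must remain valid until this call completes
    MPI_Waitall(count, reqs.data(), MPI_STATUSES_IGNORE);
}

(Sending the whole slice as a single MPI_Isend with count doubles would
also work and would only create one request per slice.)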

However, when I run it from the command line with "mpiexec -n 3
MPIVDOFArray.exe", I get the following output:


This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.

Processors: 3
ProcessorID: 1
Processors: 3
ProcessorID: 2

Fatal error in MPI_Waitall: Other MPI error, error stack:
MPI_Waitall(261)..........................: MPI_Waitall(count=76, req_array=00529270, status_array=00529490) failed
MPIDI_CH3i_Progress_wait(215).............: an error occurred while handling an event returned by MPIDU_Sock_Wait()
MPIDI_CH3I_Progress_handle_sock_event(420):
MPIDU_Sock_wait(2606).....................: The specified network name is no longer available. (errno 64)

Processors: 3
ProcessorID: 0
Initialization done

job aborted:
rank: node: exit code[: error message]
0: C7June2010: 1: Fatal error in MPI_Waitall: Other MPI error, error stack:
MPI_Waitall(261)..........................: MPI_Waitall(count=76, req_array=00529270, status_array=00529490) failed
MPIDI_CH3i_Progress_wait(215).............: an error occurred while handling an event returned by MPIDU_Sock_Wait()
MPIDI_CH3I_Progress_handle_sock_event(420):
MPIDU_Sock_wait(2606).....................: The specified network name is no longer available. (errno 64)
1: C7June2010: 3: process 1 exited without calling finalize
2: C7June2010: 3: process 2 exited without calling finalize


Currently I am not testing the code on an actual cluster; I am just running
it on my workstation. I write the code in Microsoft Visual Studio 2008, and
the MPI examples compile and run fine.

I followed the example here for Isend and Irecv:
https://computing.llnl.gov/tutorials/mpi/#Non-Blocking_Message_Passing_Routines
For the array decomposition I followed the example mpi_array.c from
https://computing.llnl.gov/tutorials/mpi/exercise.html

It would be great if you guys could tell me what I'm doing wrong.

Cheers,
Alex
-------------- next part --------------
A non-text attachment was scrubbed...
Name: main.cpp
Type: application/octet-stream
Size: 15486 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/mpich-discuss/attachments/20100726/80447dbe/attachment-0001.obj>

