[mpich-discuss] ERROR - The specified network name is no longer available. (errno 64)
Jayesh Krishna
jayesh at mcs.anl.gov
Mon Jul 26 16:01:10 CDT 2010
Hi,
Did you try debugging your MPI application? (Follow the steps in Section 9.8 of the Windows developer's guide for details: http://www.mcs.anl.gov/research/projects/mpich2/documentation/files/mpich2-1.2.1-windevguide.pdf)
Regards,
Jayesh
----- Original Message -----
From: "Alexandru Blidaru" <alexsb92 at gmail.com>
To: mpich-discuss at mcs.anl.gov
Sent: Monday, July 26, 2010 3:43:14 PM GMT -06:00 US/Canada Central
Subject: [mpich-discuss] ERROR - The specified network name is no longer available. (errno 64)
Hi,
I am trying to run the attached program, which, when fully written, will implement a 3D array where each element can hold a variable number of elements, so I would effectively have a 4D array whose fourth dimension depends on position. The class I have to write accepts the array if the current node is 0, slices it into equal parts along the x-axis, and sends the slices to the slave nodes. You can completely ignore any commented-out code.
In the constructor, DOFArr holds the number of elements in the fourth dimension for each third-dimension position. DOFArr is a linear array, and any 3D position can be mapped to 1D using the LinearIndex function. The constructor simply fills the array with random values, then splits it into nNodes equal parts, where nNodes is the number of slave nodes. Since there is usually a remainder, some slices end up wider by one unit. I then send the offset, which is the width of the slice, and NoDOF, which is the width of the fourth dimension. Finally, the data is sent one double at a time.
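Roughly, the index mapping and the master-side loop look like the sketch below (simplified: nx, ny, nz, data, and send_slices are placeholder names, I send one double per cell instead of the variable fourth dimension, and the real code uses MPI_Isend rather than the blocking MPI_Send shown here):

#include <mpi.h>

/* Map a 3D position to a 1D index (row-major). */
static int LinearIndex(int x, int y, int z, int ny, int nz)
{
    return (x * ny + y) * nz + z;
}

/* Master side: slice along x into nNodes near-equal parts and send
   each slice's width, NoDOF, and data to one slave. */
static void send_slices(const double *data, int nx, int ny, int nz,
                        int nNodes, int NoDOF)
{
    int base = nx / nNodes, rem = nx % nNodes, start = 0;
    for (int dest = 1; dest <= nNodes; dest++) {
        int offset = base + (dest <= rem ? 1 : 0);  /* slice width */
        MPI_Send(&offset, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
        MPI_Send(&NoDOF, 1, MPI_INT, dest, 1, MPI_COMM_WORLD);
        for (int x = start; x < start + offset; x++)
            for (int y = 0; y < ny; y++)
                for (int z = 0; z < nz; z++)
                    MPI_Send((void *)&data[LinearIndex(x, y, z, ny, nz)],
                             1, MPI_DOUBLE, dest, 2, MPI_COMM_WORLD);
        start += offset;
    }
}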
The slave nodes pretty much mirror the master node, except that they receive instead of send.
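In the same simplified form, the receiving side looks roughly like this (again with placeholder names, and blocking receives standing in for my Irecv calls):

#include <mpi.h>
#include <stdlib.h>

/* Slave side: receive the slice width and NoDOF, then the doubles
   one at a time into a freshly allocated buffer. */
static double *recv_slice(int ny, int nz, int *offset, int *NoDOF)
{
    MPI_Recv(offset, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Recv(NoDOF, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    double *slice = malloc((size_t)(*offset) * ny * nz * sizeof *slice);
    for (int i = 0; i < *offset * ny * nz; i++)
        MPI_Recv(&slice[i], 1, MPI_DOUBLE, 0, 2, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    return slice;
}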
However, when I run it from the command line with "mpiexec -n 3 MPIVDOFArray.exe", I get the following output:
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
Processors: 3
ProcessorID: 1
Processors: 3
ProcessorID: 2
Fatal error in MPI_Waitall: Other MPI error, error stack:
MPI_Waitall(261)..........................: MPI_Waitall(count=76, req_array=00529270, status_array=00529490) failed
MPIDI_CH3i_Progress_wait(215).............: an error occurred while handling an event returned by MPIDU_Sock_Wait()
MPIDI_CH3I_Progress_handle_sock_event(420):
MPIDU_Sock_wait(2606).....................: The specified network name is no longer available. (errno 64)
Processors: 3
ProcessorID: 0
Initialization done
job aborted:
rank: node: exit code[: error message]
0: C7June2010: 1: Fatal error in MPI_Waitall: Other MPI error, error stack:
MPI_Waitall(261)..........................: MPI_Waitall(count=76, req_array=00529270, status_array=00529490) failed
MPIDI_CH3i_Progress_wait(215).............: an error occurred while handling an event returned by MPIDU_Sock_Wait()
MPIDI_CH3I_Progress_handle_sock_event(420):
MPIDU_Sock_wait(2606).....................: The specified network name is no longer available. (errno 64)
1: C7June2010: 3: process 1 exited without calling finalize
2: C7June2010: 3: process 2 exited without calling finalize
Currently I am not testing the code on an actual cluster; I am just running it on my workstation. I write the code in Microsoft Visual Studio 2008, and the MPI examples compile and run fine.
I followed the example here for Isend and Irecv: https://computing.llnl.gov/tutorials/mpi/#Non-Blocking_Message_Passing_Routines
For the array decomposition, I followed the example mpi_array.c from https://computing.llnl.gov/tutorials/mpi/exercise.html
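Boiled down, the nonblocking pattern I tried to follow is the one below, a minimal ring exchange (this is not my actual code, just the shape of it; as I understand it, the buffers must stay untouched until MPI_Waitall returns):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, prev, next, sendbuf, recvbuf;
    MPI_Request reqs[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    prev = (rank - 1 + size) % size;
    next = (rank + 1) % size;
    sendbuf = rank;

    /* Post both operations first, then complete them together. */
    MPI_Irecv(&recvbuf, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&sendbuf, 1, MPI_INT, next, 0, MPI_COMM_WORLD, &reqs[1]);
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    printf("Rank %d received %d from rank %d\n", rank, recvbuf, prev);
    MPI_Finalize();
    return 0;
}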
It would be great if you guys could tell me what I'm doing wrong.
Cheers,
Alex
_______________________________________________
mpich-discuss mailing list
mpich-discuss at mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss