[mpich-discuss] MPICH 2 on Windows 7 64 bit
Jayesh Krishna
jayesh at mcs.anl.gov
Thu May 10 09:16:10 CDT 2012
Hi,
As Rajeev mentioned, this is a bug in FDS. You should contact the FDS developers regarding the issue (AFAIR this issue was brought to their notice through a bug report, and a new version was released with the fix).
Regards,
Jayesh
----- Original Message -----
From: "Rajeev Thakur" <thakur at mcs.anl.gov>
To: mpich-discuss at mcs.anl.gov
Cc: "Jayesh Krishna" <jayesh at mcs.anl.gov>, "Managers" <SINSystemManagers at arup.com>
Sent: Thursday, May 10, 2012 8:19:39 AM
Subject: Re: [mpich-discuss] MPICH 2 on Windows 7 64 bit
> Fatal error in PMPI_Gatherv: Internal MPI error!, error stack:
> PMPI_Gatherv(386).....: MPI_Gatherv failed(sbuf=000000003C553848, scount=1, MPI_DOUBLE_PRECISION, rbuf=000000003C553848, rcnts=000000003C4D6928, displs=000000003C4D69E8, MPI_DOUBLE_PRECISION, root=0, MPI_COMM_WORLD) failed
> MPIR_Gatherv_impl(199):
> MPIR_Gatherv(103).....:
> MPIR_Localcopy(349)...: memcpy arguments alias each other, dst=000000003C553848 src=000000003C553848 len=8
This error means that you are calling MPI_Gatherv somewhere with the same sendbuf and recvbuf. You need to pass different buffers or use MPI_IN_PLACE as described in the MPI standard.
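For illustration, here is a minimal C sketch of the MPI_IN_PLACE form of the call (this is not the actual FDS source, which is Fortran, and the variable names are made up). With MPI_IN_PLACE the send arguments at the root are ignored, so the send and receive buffers can no longer alias:

    /* Illustrative sketch only: each rank contributes one double;
     * the root gathers into recvbuf without passing the same pointer
     * as both send and receive buffer. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double value = (double) rank;          /* this rank's contribution */

        if (rank == 0) {
            double *recvbuf = malloc(size * sizeof(double));
            int *rcounts = malloc(size * sizeof(int));
            int *displs  = malloc(size * sizeof(int));
            for (int i = 0; i < size; i++) {
                rcounts[i] = 1;                /* one element per rank */
                displs[i]  = i;                /* contiguous placement */
            }
            recvbuf[0] = value;                /* root's value is already in place */
            /* Send arguments are ignored at the root when MPI_IN_PLACE is used. */
            MPI_Gatherv(MPI_IN_PLACE, 1, MPI_DOUBLE,
                        recvbuf, rcounts, displs, MPI_DOUBLE,
                        0, MPI_COMM_WORLD);
            for (int i = 0; i < size; i++)
                printf("recvbuf[%d] = %g\n", i, recvbuf[i]);
            free(recvbuf); free(rcounts); free(displs);
        } else {
            /* Non-root ranks pass their own send buffer as usual. */
            MPI_Gatherv(&value, 1, MPI_DOUBLE,
                        NULL, NULL, NULL, MPI_DOUBLE,
                        0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }

The other correct option is simply to gather into a receive buffer on the root that does not overlap the send buffer.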
Rajeev
On May 10, 2012, at 3:14 AM, Leonardo Garbanzos wrote:
> Hi Jayesh,
>
> We are having an issue running MPICH2 on our Windows 7 64-bit PCs. We have installed MPICH2 versions 1.0.8, 1.2, 1.3rc2 and 1.4.1, but we are still getting the error below.
>
> “Error”
> Microsoft Windows [Version 6.1.7601]
> Copyright (c) 2009 Microsoft Corporation. All rights reserved.
>
> C:\Users\sin.cfd>cd \fds_mpi\sr_lobby05
>
> C:\FDS_MPI\sr_lobby05>mpiexec -file config_srlobby05.txt
> User credentials needed to launch processes:
> account (domain\user) [GLOBAL\sin.cfd]:
> password:
> Process 0 of 5 is running on SINPCSGH146PRT1.clients.global.arup.com
> Process 5 of 5 is running on SINPCSGH146PRT1.clients.global.arup.com
> Process 1 of 5 is running on SINPCSGH029T4WQ.clients.global.arup.com
> Process 3 of 5 is running on SINPCSGH029T4WQ.clients.global.arup.com
> Process 2 of 5 is running on SINPCSGH029T4WQ.clients.global.arup.com
> Process 4 of 5 is running on SINPCSGH029T4WQ.clients.global.arup.com
> Mesh 1 is assigned to Process 0
> Mesh 2 is assigned to Process 1
> Mesh 3 is assigned to Process 2
> Mesh 4 is assigned to Process 3
> Mesh 5 is assigned to Process 4
> Mesh 6 is assigned to Process 5
>
> Fire Dynamics Simulator
>
> Compilation Date : Fri, 29 Oct 2010
>
> Version: 5.5.3; MPI Enabled; OpenMP Disabled
> SVN Revision No. : 7031
>
> Job TITLE : Base:Medium t2 5MW axis with natural ventilation
> Job ID string : sr_lobby05
>
> Fatal error in PMPI_Gatherv: Internal MPI error!, error stack:
> PMPI_Gatherv(386).....: MPI_Gatherv failed(sbuf=000000003C553848, scount=1, MPI_DOUBLE_PRECISION, rbuf=000000003C553848, rcnts=000000003C4D6928, displs=000000003C4D69E8, MPI_DOUBLE_PRECISION, root=0, MPI_COMM_WORLD) failed
> MPIR_Gatherv_impl(199):
> MPIR_Gatherv(103).....:
> MPIR_Localcopy(349)...: memcpy arguments alias each other, dst=000000003C553848 src=000000003C553848 len=8
> Fatal error in MPI_Allreduce: Other MPI error, error stack:
> MPI_Allreduce(824)...................: MPI_Allreduce(sbuf=000000003C376CB8, rbuf=000000003C376C88, count=5, MPI_LOGICAL, MPI_LXOR, MPI_COMM_WORLD) failed
> MPIR_Allreduce_impl(682).............:
> MPIR_Allreduce_intra(197)............:
> MPIR_Bcast_impl(1150)................:
> MPIR_Bcast_intra(1021)...............:
> MPIR_Bcast_binomial(157).............:
> MPIC_Recv(108).......................:
> MPIC_Wait(528).......................:
> MPIDI_CH3I_Progress(334).............:
> MPID_nem_mpich2_blocking_recv(906)...:
> MPID_nem_newtcp_module_poll(37)......:
> MPID_nem_newtcp_module_connpoll(2655):
> gen_read_fail_handler(1145)..........: read from socket failed - The specified network name is no longer available.
>
> Fatal error in MPI_Allreduce: Other MPI error, error stack:
> MPI_Allreduce(824)...................: MPI_Allreduce(sbuf=000000003C346CB8, rbuf=000000003C346C88, count=5, MPI_LOGICAL, MPI_LXOR, MPI_COMM_WORLD) failed
> MPIR_Allreduce_impl(682).............:
> MPIR_Allreduce_intra(191)............:
> allreduce_intra_or_coll_fn(103)......:
> MPIR_Allreduce_intra(361)............:
> MPIC_Sendrecv(189)...................:
> MPIC_Wait(528).......................:
> MPIDI_CH3I_Progress(334).............:
> MPID_nem_mpich2_blocking_recv(906)...:
> MPID_nem_newtcp_module_poll(37)......:
> MPID_nem_newtcp_module_connpoll(2655):
> gen_read_fail_handler(1145)..........: read from socket failed - The specified network name is no longer available.
>
> Fatal error in MPI_Allreduce: Other MPI error, error stack:
> MPI_Allreduce(824)...................: MPI_Allreduce(sbuf=000000003C416CB8, rbuf=000000003C416C88, count=5, MPI_LOGICAL, MPI_LXOR, MPI_COMM_WORLD) failed
> MPIR_Allreduce_impl(682).............:
> MPIR_Allreduce_intra(197)............:
> MPIR_Bcast_impl(1150)................:
> MPIR_Bcast_intra(1021)...............:
> MPIR_Bcast_binomial(157).............:
> MPIC_Recv(108).......................:
> MPIC_Wait(528).......................:
> MPIDI_CH3I_Progress(334).............:
> MPID_nem_mpich2_blocking_recv(906)...:
> MPID_nem_newtcp_module_poll(37)......:
> MPID_nem_newtcp_module_connpoll(2655):
> gen_read_fail_handler(1145)..........: read from socket failed - The specified network name is no longer available.
>
> Fatal error in MPI_Allreduce: Other MPI error, error stack:
> MPI_Allreduce(824)...................: MPI_Allreduce(sbuf=000000003C466CB8, rbuf=000000003C466C88, count=5, MPI_LOGICAL, MPI_LXOR, MPI_COMM_WORLD) failed
> MPIR_Allreduce_impl(682).............:
> MPIR_Allreduce_intra(197)............:
> MPIR_Bcast_impl(1150)................:
> MPIR_Bcast_intra(1021)...............:
> MPIR_Bcast_binomial(157).............:
> MPIC_Recv(108).......................:
> MPIC_Wait(528).......................:
> MPIDI_CH3I_Progress(334).............:
> MPID_nem_mpich2_blocking_recv(906)...:
> MPID_nem_newtcp_module_poll(37)......:
> MPID_nem_newtcp_module_connpoll(2655):
> gen_read_fail_handler(1145)..........: read from socket failed - The specified network name is no longer available.
>
> Fatal error in MPI_Allreduce: Other MPI error, error stack:
> MPI_Allreduce(824)...................: MPI_Allreduce(sbuf=000000003C496CB8, rbuf=000000003C496C88, count=5, MPI_LOGICAL, MPI_LXOR, MPI_COMM_WORLD) failed
> MPIR_Allreduce_impl(682).............:
> MPIR_Allreduce_intra(197)............:
> MPIR_Bcast_impl(1150)................:
> MPIR_Bcast_intra(1021)...............:
> MPIR_Bcast_binomial(157).............:
> MPIC_Recv(108).......................:
> MPIC_Wait(528).......................:
> MPIDI_CH3I_Progress(334).............:
> MPID_nem_mpich2_blocking_recv(906)...:
> MPID_nem_newtcp_module_poll(37)......:
> MPID_nem_newtcp_module_connpoll(2655):
> gen_read_fail_handler(1145)..........: read from socket failed - The specified network name is no longer available.
>
>
> job aborted:
> rank: node: exit code[: error message]
> 0: 10.197.240.36: 1: process 0 exited without calling finalize
> 1: 10.197.240.35: 1: process 1 exited without calling finalize
> 2: 10.197.240.35: 1: process 2 exited without calling finalize
> 3: 10.197.240.35: 1: process 3 exited without calling finalize
> 4: 10.197.240.35: 1: process 4 exited without calling finalize
> 5: 10.197.240.36: 1: process 5 exited without calling finalize
>
> C:\FDS_MPI\sr_lobby05>
>
> Do you have any recommendations to help us resolve the issue?
>
> Thanks
> Leonardo Garbanzos
> IT Support Analyst
>
> Arup
> 10 Hoe Chiang Road #26-01 Keppel Towers Singapore 089315
> t +65 6411 2500 d +65 6411 2540
> f +65 6411 2501 m +65 9817 3002
> www.arup.com
>
> Arup Singapore Pte Ltd - Reg. No. 200100731M
>
>