[mpich-discuss] MPI_IN_PLACE argument
Rajeev Thakur
thakur at mcs.anl.gov
Wed Apr 28 10:07:18 CDT 2010
Make sure the sendbuf is allocated to be large enough: for MPI_Scatter, the root's sendbuf must hold one nxyz_1-sized chunk for every process in the communicator, i.e. at least nxyz_1 * nprocs elements.
Rajeev
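
[Editorial note: a minimal sketch of the two legal MPI_Scatter patterns discussed in this thread. The names sendbuf, B, nxyz_1, idroot and icomm_grid follow the snippet quoted below; myrank and nprocs are assumed helper variables, not names from the attached code.]

```fortran
! Pattern 1: distinct send and receive buffers (all ranks call this).
! On the root, sendbuf must hold nxyz_1 elements PER PROCESS.
call MPI_Scatter(sendbuf, nxyz_1, MPI_REAL8, &
                 B,       nxyz_1, MPI_REAL8, &
                 idroot, icomm_grid, ierr)

! Pattern 2: in-place receive on the root (MPI-2 and later).
! Only the root passes MPI_IN_PLACE, and it passes it as the
! RECEIVE buffer; the root's own chunk stays where it is in sendbuf.
! What is never allowed is passing the same array as both sendbuf
! and recvbuf -- that is what triggers the memcpy-overlap abort.
if (myrank == idroot) then
   call MPI_Scatter(sendbuf, nxyz_1, MPI_REAL8,      &
                    MPI_IN_PLACE, nxyz_1, MPI_REAL8, &
                    idroot, icomm_grid, ierr)
else
   call MPI_Scatter(sendbuf, nxyz_1, MPI_REAL8, &
                    B,       nxyz_1, MPI_REAL8, &
                    idroot, icomm_grid, ierr)
end if
```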
> -----Original Message-----
> From: mpich-discuss-bounces at mcs.anl.gov
> [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of
> Steenhauer, Kate
> Sent: Wednesday, April 28, 2010 9:48 AM
> To: mpich-discuss at mcs.anl.gov
> Subject: Re: [mpich-discuss] MPI_IN_PLACE argument
>
> OK, any idea what this could be then? Thanks, Kate
>
> -----Original Message-----
> From: mpich-discuss-bounces at mcs.anl.gov
> [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Rajeev Thakur
> Sent: 28 April 2010 15:17
> To: mpich-discuss at mcs.anl.gov
> Subject: Re: [mpich-discuss] MPI_IN_PLACE argument
>
> If sendbuf and B are distinct arrays, you don't need to use
> MPI_IN_PLACE. The problem is something else then.
>
> Rajeev
>
> > -----Original Message-----
> > From: mpich-discuss-bounces at mcs.anl.gov
> > [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Steenhauer,
> > Kate
> > Sent: Wednesday, April 28, 2010 5:10 AM
> > To: mpich-discuss at mcs.anl.gov; 'Anthony Chan'
> > Subject: [mpich-discuss] MPI_IN_PLACE argument
> >
> > Hi,
> >
> > I am trying to sort out the bug in the MPI program (see attached). It
> > seems to me, with my very limited knowledge, that everything is
> > related and the whole program needs to be restructured when upgrading
> > from mpich1 to mpich2. If the MPI_Scatter routine flags a bug (the
> > error message is 'memcpy argument memory ranges overlap,
> > dst_=0xafd74f8 src_=0xafd750c len_=16200, internal ABORT'), then it
> > is, as far as I can see, most likely that the other routines, such as
> > gather(A, B), allGather(A, B), scatterXY(A, B, nk), gatherXY(A, B, nk),
> > allGatherXY(A, B, nk), etc. (see attached), will bring up a similar
> > bug as well. So I don't really know where to start, considering this
> > is a well-established code, run for many years successfully with
> > mpich1 with sensible output.
> >
> > By using the MPI_IN_PLACE argument, do I have to replace the recvbuf,
> > B(nx_1,ny_1,nz_1)?
> >
> >    call MPI_scatter (sendbuf, nxyz_1, MPI_REAL8, &
> >                      B, nxyz_1, MPI_REAL8, &
> >                      idroot, icomm_grid, ierr)
> >
> > I hope you will be able to help me.
> > kate
> >
> > -----Original Message-----
> > From: mpich-discuss-bounces at mcs.anl.gov
> > [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Rajeev Thakur
> > Sent: 27 April 2010 17:32
> > To: mpich-discuss at mcs.anl.gov; 'Anthony Chan'
> > Subject: Re: [mpich-discuss] configuration problem
> >
> > Kate,
> > You have to use the mpif.h file that comes with the MPI
> > implementation, not from some other MPI implementation.
> With MPICH2,
> > you have to use MPICH2's include file.
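> >
> > [Editorial note: a quick sanity check, sketched here assuming the
> > MPICH2 compiler wrappers are on the PATH. The wrapper injects the
> > matching include path itself, and -show prints the underlying
> > compiler command so you can see which mpif.h directory it uses.
> > RUN02 and main.f90 are placeholder names.]

```shell
# Confirm which wrapper is being picked up
which mpif90

# Print the underlying ifort command line, including the -I include path
mpif90 -show

# Compile with the wrapper so the matching mpif.h is found automatically
mpif90 -o RUN02 main.f90
```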
> >
> > The error with MPI_Scatter indicates there is a bug in your MPI
> > program.
> > You are using the same buffer as sendbuf and recvbuf, which is not
> > allowed in MPI. You can use the MPI_IN_PLACE argument instead, as
> > described in the MPI standard.
> >
> > I would recommend using MPICH2 instead of trying to get
> > MPICH-1 to work.
> >
> > Rajeev
> >
> >
> > > -----Original Message-----
> > > From: mpich-discuss-bounces at mcs.anl.gov
> > > [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Steenhauer, Kate
> > > Sent: Tuesday, April 27, 2010 9:50 AM
> > > To: Anthony Chan; mpich-discuss at mcs.anl.gov
> > > Subject: Re: [mpich-discuss] configuration problem
> > >
> > > Initially, I tried to run my code with mpich2. The following
> > > happens.
> > >
> > > We have 1 workstation with 8 processors and the following software:
> > > Linux CentOS, Intel Fortran 11.1.
> > >
> > > MPICH2 with Intel Fortran on Linux is not working on our PC. It does
> > > not like the mpif.inc file; it immediately gets into problems at the
> > > initialisation of MPI (subroutine MPI_INI).
> > > mpirun -np 8 RUN02
> > > > Fatal error in MPI_Comm_size: Invalid communicator, error stack:
> > > > MPI_Comm_size(111): MPI_Comm_size(comm=0x5b, size=0x7fffce629784)
> > > > failed
> > > > MPI_Comm_size(69).: Invalid communicator MPISTART
> > > > rank 7 in job 2 cops-021026_40378 caused collective abort of all
> > > > ranks
> > > > exit status of rank 7: killed by signal 9
> > >
> > > Then when I change the mpif.inc file (see attachment) and direct it
> > > to the mpif.h file that came with the mpich2 library, it gets past
> > > this problem but then runs into the next problem further down the
> > > line at MPI_SCATTER, where it is trying to distribute data to the
> > > different processors. The error message is 'memcpy argument memory
> > > ranges overlap, dst_=0xafd74f8 src_=0xafd750c len_=16200, internal
> > > ABORT'.
> > >
> > > There is something in the parameterisation within the MPI setup
> > > that is possibly different from when the code is successfully run
> > > on another cluster (this cluster uses Red Hat, various mpich
> > > versions, e.g. mpich-1.2.5..12, and various versions of the Intel
> > > Fortran compiler (ifort), e.g. 7, 9 and 12).
> > >
> > > I have attached the mpif.inc file.
> > >
> > > Please let me know if you have any ideas?
> > >
> > > Thanks
> > >
> > > Kate
> > >
> > > -----Original Message-----
> > > From: mpich-discuss-bounces at mcs.anl.gov
> > > [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of
> > > chan at mcs.anl.gov
> > > Sent: 27 April 2010 15:42
> > > To: mpich-discuss at mcs.anl.gov
> > > Subject: Re: [mpich-discuss] configuration problem
> > >
> > >
> > > Is there any reason you can't use mpich2? The latest stable release
> > > of mpich2 is 1.2.1p1.
> > >
> > > mpich-1 is no longer officially supported. The latest Fortran
> > > compilers are much better supported in mpich2.
> > >
> > > A.Chan
> > >
> > > ----- "Kate Steenhauer" <k.steenhauer at abdn.ac.uk> wrote:
> > >
> > > > Hello,
> > > >
> > > > We are trying to install mpich-1.2.7p1. I downloaded and unzipped
> > > > this version from
> > > > http://www.mcs.anl.gov/research/projects/mpi/mpich1/
> > > >
> > > > We have 1 workstation with 8 processors, and the following
> > > > software: Linux CentOS and Intel Fortran 11.1.
> > > >
> > > > When the documentation guidelines for configuring and building
> > > > are followed (see the files attached) it all seems OK, e.g.
> > > > mpif90 is generated. However, when a simple parallel job is
> > > > tested we get the following error:
> > > >
> > > > mpif90 -o testA MPITEST.f90
> > > > No Fortran 90 compiler specified when mpif90 was created, or
> > > > configuration file does not specify a compiler.
> > > >
> > > > Is there a specific prefix I need to give with an ifort fortran
> > > > compiler when I configure mpich?
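> > > >
> > > > [Editorial note: a configure sketch. The option and variable
> > > > names vary between MPICH-1 and MPICH2 releases, so check
> > > > ./configure --help for your version; the --prefix paths below
> > > > are placeholders.]

```shell
# MPICH-1 (1.2.7p1): name the Fortran compilers on the configure line
./configure -fc=ifort -f90=ifort --prefix=$HOME/mpich1
make && make install

# MPICH2 (1.2.1p1): pass the compilers as environment variables instead
./configure F77=ifort F90=ifort --prefix=$HOME/mpich2
make && make install
```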
> > > >
> > > > I would like to thank you in advance for your help. Please let
> > > > me know if you need any further details.
> > > >
> > > > Regards
> > > >
> > > > Kate Steenhauer
> > > >
> > > > University of Aberdeen
> > > >
> > > > 01224-272806
> > > >
> > > >
> > > > The University of Aberdeen is a charity registered in Scotland,
> > > > No SC013683.
> > > >
> > > > _______________________________________________
> > > > mpich-discuss mailing list
> > > > mpich-discuss at mcs.anl.gov
> > > > https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss
> > >
> > >
> > >
> >
> >
>
>