[MPICH] MPI_REDUCE with MPI_IN_PLACE fails with memory error
Anthony Chan
chan at mcs.anl.gov
Tue Mar 13 12:14:30 CDT 2007
The error-checking profiling library should be able to detect this problem.
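
(In MPE2 that is the collective/datatype checking library. If I
remember right, the MPICH2 compiler wrappers link it in with the
-mpe=mpicheck switch, e.g.

    mpif77 -mpe=mpicheck -o myprog myprog.f

but check the MPE documentation for the exact option name.)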
A.Chan
On Tue, 13 Mar 2007, Rusty Lusk wrote:
> I believe our error-checking profiling library detects this error,
> doesn't it?
>
> On Mar 13, 2007, at 11:08 AM, Anthony Chan wrote:
>
> >
> > We have no access to the Intel 9.x compilers yet, so we can't
> > directly verify the reported problem. However, you may be using
> > MPI_IN_PLACE incorrectly in the non-root processes.
> >
> > On Tue, 13 Mar 2007, Martin Kleinschmidt wrote:
> >
> >> The corresponding lines of code are:
> >>
> >> #################
> >> #ifdef PARALLEL
> >>       if (myid .eq. 0) then
> >>          call MPI_Reduce(MPI_IN_PLACE, vecf2(1),
> >>      $        n*nneue,
> >>      $        MPI_double_precision, MPI_SUM, 0,
> >>      $        MPI_Comm_World, MPIerr)
> >>       else
> >>          call MPI_Reduce(vecf2(1), MPI_IN_PLACE,
> >>      $        n*nneue,
> >>      $        MPI_double_precision, MPI_SUM, 0,
> >>      $        MPI_Comm_World, MPIerr)
> >>       endif
> >> #endif
> >
> > For MPI_Reduce, MPI_IN_PLACE is valid only as the send buffer at
> > the root (passing it in all ranks is correct only for
> > MPI_Allreduce). The non-root processes must pass their data as
> > the send buffer; their receive buffer argument is ignored, so any
> > double precision scratch variable will do there. Keep your root
> > branch as it is and change the else branch to, e.g.
> >
> >       call MPI_Reduce(vecf2(1), dummy,
> >      $     n*nneue,
> >      $     MPI_DOUBLE_PRECISION, MPI_SUM, 0,
> >      $     MPI_COMM_WORLD, MPIerr)
> >
> >
> > A.Chan
> >
> >> #################
> >> with n*nneue = 76160987, and 76160987*8 = 609287896, about 600 MB
> >>
> >> The point is: I thought I could avoid allocating additional
> >> memory by using MPI_IN_PLACE, which obviously does not work.
> >>
> >> - am I using MPI_IN_PLACE in the right way?
> >> - why does MPI_IN_PLACE need additional memory?
> >> - is it possible to rewrite this code in a way that eliminates
> >>   the need for allocating additional memory? This part of the
> >>   code is not time-critical - it is executed once every few
> >>   hours.
> >>
> >> (I'm using mpich2-1.0.5p2, the Intel Fortran compiler 9.1.040,
> >> and the Intel C compiler 9.1.045 for compiling both mpich and my
> >> code.)
> >>
> >> ...martin