[mpich-discuss] Problems with multiple calls to MPI_Reduce?
Rajeev Thakur
thakur at mcs.anl.gov
Tue Jul 1 12:37:21 CDT 2008
The collective operations don't use tags, so that should not be a problem.
I don't quite understand how you are using allreduce with maxloc to fit the
output together.
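For what it's worth, MAXLOC operates on (value, index) pairs: with
MPI_2DOUBLE_PRECISION each element of the buffer is a pair of doubles,
the first being the value that is compared and the second an index
that travels along with the winning value. A minimal sketch (not your
code; localval and the other names are made up) would be:

      double precision inpair(2), outpair(2)

c     inpair(1) is the value being compared, inpair(2) the "index"
c     (often the rank) that is carried along with it
      inpair(1) = localval
      inpair(2) = rank

      call MPI_Allreduce(inpair, outpair, 1, MPI_2DOUBLE_PRECISION,
     :                   MPI_MAXLOC, comm, ierr)

c     outpair(1) now holds the global maximum and outpair(2) the
c     index that was supplied with it

If the buffers hold plain values rather than pairs like this, the
count and layout won't match what MPI expects for that datatype.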
You could use MPI-IO instead to write directly from each process to a common
file. Or a slower approach would be for each process to send the data to
rank 0 and have it write the file.
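As a very rough, untested sketch of the MPI-IO route, assuming the
same decomposition as your glue routine (global grid imax x jmax x
kmax, each rank owning mpijmax consecutive j-planes starting at
global j = rank*mpijmax + 1, local block held contiguously in
input(mpiimax,mpijmax,mpikmax)); the file name and everything else
outside the MPI calls is guessed from your fragment:

      integer gsizes(3), lsizes(3), starts(3)
      integer filetype, fh, ierr
      integer status(MPI_STATUS_SIZE)
      integer(kind=MPI_OFFSET_KIND) disp

c     global array shape, local block shape, and the (zero-based)
c     position of the local block within the global array
      gsizes(1) = imax
      gsizes(2) = jmax
      gsizes(3) = kmax
      lsizes(1) = mpiimax
      lsizes(2) = mpijmax
      lsizes(3) = mpikmax
      starts(1) = 0
      starts(2) = rank*mpijmax
      starts(3) = 0

      call MPI_Type_create_subarray(3, gsizes, lsizes, starts,
     :     MPI_ORDER_FORTRAN, MPI_DOUBLE_PRECISION, filetype, ierr)
      call MPI_Type_commit(filetype, ierr)

c     every rank opens the same file, sets its view to its own piece
c     of the global array, and writes its local block collectively
      call MPI_File_open(comm, 'grid.dat',
     :     MPI_MODE_CREATE + MPI_MODE_WRONLY, MPI_INFO_NULL, fh, ierr)
      disp = 0
      call MPI_File_set_view(fh, disp, MPI_DOUBLE_PRECISION,
     :     filetype, 'native', MPI_INFO_NULL, ierr)
      call MPI_File_write_all(fh, input, mpiimax*mpijmax*mpikmax,
     :     MPI_DOUBLE_PRECISION, status, ierr)
      call MPI_File_close(fh, ierr)
      call MPI_Type_free(filetype, ierr)

The send-to-rank-0 alternative is just a plain MPI_Send of each local
block to rank 0 and a matching loop of MPI_Recv there, with rank 0
copying each block into the full grid before writing it out.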
Rajeev
> -----Original Message-----
> From: owner-mpich-discuss at mcs.anl.gov
> [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of Elliot Parkin
> Sent: Tuesday, July 01, 2008 4:32 AM
> To: mpich-discuss at mcs.anl.gov
> Subject: RE: [mpich-discuss] Problems with multiple calls to
> MPI_Reduce?
>
> Thanks for the reply guys, very much appreciated.
>
> Just to set the scene of how and why I'm using MPI_Allreduce:
>
> I'm using MPI to parallelize a hydrodynamics code. The hydro
> grid is split into sections, so writing a data file containing
> the whole grid requires the separate parts to be pieced back
> together. Each process copies its part of the grid into a
> dummy array, and MPI_Allreduce then fits the pieces together
> and broadcasts the assembled grid back to all processes.
> Here's a chunk of the code:
>
> c Multiple calls to glue for various parameters
>
> call glue(lzro,zro) ! density
> call glue(lzpr,zpr) ! pressure
> call glue(lzco,zco) ! colour
> call glue(lzux,zux) ! x-velocity
> call glue(lzuy,zuy) ! y-velocity
> call glue(lzuz,zuz) ! z-velocity
>
> c The glue subroutine looks like this:
>
>       all = (imax*jmax*kmax)
>
>       do k=1,kmax
>         do j=1,jmax
>           do i=1,imax
>             dummy(i,j,k) = -1.0e50
>           end do
>         end do
>       end do
>
>       do k=1,mpikmax
>         do j=1,mpijmax
>           do i=1,mpiimax
>             a = j + rank*mpijmax
>             dummy(i,a,k) = input(i,j,k)
>           end do
>         end do
>       end do
>
>       call MPI_Allreduce (dummy,output,all,MPI_2DOUBLE_PRECISION,
>      :                    MPI_MAXLOC,comm,ierr)
>
>
> Michael Ahlmann suggested that the problem could be due to
> using non-unique tags for the calls. At the moment the calls
> all use the same communicator; I have to apologise for my
> lack of MPI knowledge, as I don't really know whether that
> would be a problem or not.
>
> Thanks again,
>
> Elliot Parkin
>
>
> Quoting Rajeev Thakur <thakur at mcs.anl.gov>:
>
> > Can you send us a small code fragment?
> >
> > Rajeev
> >
> >> -----Original Message-----
> >> From: owner-mpich-discuss at mcs.anl.gov
> >> [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of Elliot Parkin
> >> Sent: Friday, June 27, 2008 6:28 AM
> >> To: mpich-discuss at mcs.anl.gov
> >> Subject: [mpich-discuss] Problems with multiple calls to
> >> MPI_Reduce?
> >>
> >> Hello everyone,
> >>
> >> I've run into some difficulties using MPI_Reduce. It seems
> >> that after it has been called, some of the parameters in the
> >> code get set to bizarre values. Also, if I call MPI_Reduce
> >> consecutively (to fill an array with the maximum values
> >> across the processes), it doesn't appear to work the third
> >> time and sets all values of the array to zero. Has anybody
> >> else had this problem, and if so is there any way to
> >> explain/fix this?
> >>
> >> Cheers,
> >>
> >> Elliot Parkin.
> >>