[MPICH] Collective write when only a subset of processes have data to write

Rajeev Thakur thakur at mcs.anl.gov
Mon Feb 5 14:50:58 CST 2007


For processes with 0 data, you should be able to call MPI_File_set_view with
disp=0, etype=MPI_BYTE, filetype=MPI_BYTE, and MPI_File_write_all with
count=0. What error do you get if you do this?
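Something along these lines should work. This is an untested sketch, not code from this thread: the file name "out.dat", the fixed 16-byte payload, and the even/odd split between writers and non-writers are just placeholders for illustration.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_File   fh;
        MPI_Status status;
        int        rank;
        char       buf[16] = "hello from rank";

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        if (rank % 2 == 0) {
            /* Ranks with data: set a real view and write their block. */
            MPI_File_set_view(fh, rank * (MPI_Offset)sizeof(buf), MPI_BYTE,
                              MPI_BYTE, "native", MPI_INFO_NULL);
            MPI_File_write_all(fh, buf, (int)sizeof(buf), MPI_BYTE, &status);
        } else {
            /* Ranks with no data: a trivial view (disp=0, etype=MPI_BYTE,
             * filetype=MPI_BYTE) and a zero count, but they still
             * participate in both collectives so nobody hangs. */
            MPI_File_set_view(fh, 0, MPI_BYTE, MPI_BYTE, "native",
                              MPI_INFO_NULL);
            MPI_File_write_all(fh, buf, 0, MPI_BYTE, &status);
        }

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }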

Rajeev 

> -----Original Message-----
> From: owner-mpich-discuss at mcs.anl.gov 
> [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of Heshan Lin
> Sent: Monday, February 05, 2007 1:37 PM
> To: 'Robert Latham'
> Cc: mpich-discuss at mcs.anl.gov
> Subject: RE: [MPICH] Collective write when only a subset of 
> processes have data to write
> 
> Thanks for your response, Rob. But I am not sure how to set the parameters
> for MPI_File_set_view and MPI_File_write_all for processes with no data to
> write. I tried writing zero-length data but got errors. Could you by any
> chance point me to some examples?
> 
> Heshan
> 
> > -----Original Message-----
> > From: Robert Latham [mailto:robl at mcs.anl.gov]
> > Sent: Monday, February 05, 2007 10:53 AM
> > To: Heshan Lin
> > Cc: mpich-discuss at mcs.anl.gov
> > Subject: Re: [MPICH] Collective write when only a subset of
> > processes have data to write
> > 
> > On Sun, Feb 04, 2007 at 07:59:56PM -0500, Heshan Lin wrote:
> > > Hi,
> > >
> > > I am testing collective writes with a parallel program in which
> > > every MPI process needs to periodically output non-contiguous data.
> > > The basic program structure for each MPI process looks like the
> > > following.
> > >
> > > MPI_File_open(MPI_COMM_WORLD)
> > > WHILE (not end) {
> > >     Computation()
> > >     MPI_File_set_view()
> > >     MPI_File_write_all()
> > > }
> > >
> > > One problem I have encountered is that at some iterations only a
> > > subset of the MPI processes have data to write. In that case the
> > > program will hang if not all processes issue write requests.
> > 
> > Go ahead and have all processes -- even those with no work to do --
> > call MPI_File_write_all.  The MPI-IO implementation will know what
> > to do.
> > 
> > ==rob
> > 
> > --
> > Rob Latham
> > Mathematics and Computer Science Division    A215 0178 EA2D B059 8CDF
> > Argonne National Lab, IL USA                 B29D F333 664A 4280 315B
> 
> 
