[mpich-discuss] MPI-IO issue

Matthew J. Grismer matthew.grismer at us.af.mil
Mon Feb 13 05:08:24 CST 2012


Wei-keng,

Yes, that is exactly what I set the lower bound and extent to for the derived
type on each of the processors.  I then went back and checked with
MPI_Type_get_extent that the values were set correctly.  The view and reads
are as in the first example in my original post: x, then y, then z in
separate reads.

Matt


On 2/10/12 6:07 PM, "Wei-keng Liao" <wkliao at ece.northwestern.edu> wrote:

> Matt,
> 
> Based on the way you use the new derived datatype in MPI_FILE_SET_VIEW,
> the lower bound and extent should be 0 and ie*je*ke*8, respectively
> (8 is the number of bytes in a real*8). Can you verify those values?
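> 
> For reference, that resizing might look roughly like this (untested
> sketch; "oldtype" stands for whatever name you gave the
> MPI_TYPE_INDEXED result):
> 
>   integer(kind=MPI_ADDRESS_KIND) :: lb, extent
>   lb     = 0
>   extent = ie*je*ke*8   ! whole global array, in bytes, for real*8
>   call MPI_TYPE_CREATE_RESIZED ( oldtype, lb, extent, newdatatype, ierror )
>   call MPI_TYPE_COMMIT ( newdatatype, ierror )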
> 
> Wei-keng
> 
> 
> On Feb 10, 2012, at 4:25 PM, Grismer, Matthew J Civ USAF AFMC AFRL/RBAT wrote:
> 
>> Thanks for the info, I was not aware of the extent of the derived type.  I
>> used MPI_Type_create_resized to change the lower bound and extent of the
>> derived types on each processor to cover the entire array, not just the
>> sub-blocks I wanted.  Now x reads correctly on each processor, but y
>> and z are still wrong.  After the MPI_File_read_all call for x, I used
>> MPI_File_get_position to see where each processor's file pointer
>> was.  Only on processor 0 has the file pointer moved to the correct
>> location in the file; on all the other processors it is far short of
>> the correct location.  I'm defining the derived types and extents the same
>> way on all the processors, and even check the extents after I set them to
>> verify they are correct.  Any other thoughts on what I am missing?  Thanks.
>> 
>> Matt
>> 
>> -----Original Message-----
>> From: mpich-discuss-bounces at mcs.anl.gov
>> [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Wei-keng Liao
>> Sent: Wednesday, February 08, 2012 7:33 PM
>> To: mpich-discuss at mcs.anl.gov
>> Subject: Re: [mpich-discuss] MPI-IO issue
>> 
>> Hi,
>> 
>> I am guessing the problem is the datatype you created.
>> Please check the type extent, using MPI_Type_extent(), to see
>> if it equals the size of the entire array. If not, the file
>> pointer after the first read will not move to the beginning of
>> the second array as you expect, and the subsequent reads will
>> come from the wrong file locations.
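>> 
>> For example (untested sketch; MPI_TYPE_GET_EXTENT is the newer
>> name for this call):
>> 
>>   integer(kind=MPI_ADDRESS_KIND) :: lb, extent
>>   call MPI_TYPE_GET_EXTENT ( newdatatype, lb, extent, ierror )
>>   if ( extent /= ie*je*ke*8 ) print *, 'unexpected extent: ', extent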
>> 
>> Also, if each process reads a block subarray, I suggest
>> using MPI_Type_create_subarray to create the new datatype.
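>> 
>> Roughly like this (untested sketch; the start indices istart, jstart,
>> kstart are placeholders for wherever your block begins in the global
>> array):
>> 
>>   integer :: sizes(3), subsizes(3), starts(3)
>>   sizes    = (/ ie, je, ke /)                    ! global array shape
>>   subsizes = (/ subsetie, subsetje, subsetke /)  ! this process's block
>>   starts   = (/ istart-1, jstart-1, kstart-1 /)  ! 0-based start offsets
>>   call MPI_TYPE_CREATE_SUBARRAY ( 3, sizes, subsizes, starts, &
>>        MPI_ORDER_FORTRAN, MPI_REAL8, newdatatype, ierror )
>>   call MPI_TYPE_COMMIT ( newdatatype, ierror )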
>> 
>> 
>> Wei-keng
>> 
>> 
>> On Feb 8, 2012, at 3:46 PM, Grismer, Matthew J Civ USAF AFMC AFRL/RBAT
>> wrote:
>> 
>>> I am attempting to use MPI-IO to read block, structured data from a
>>> file.  Three three-dimensional coordinate arrays are stored in the file,
>>> one after another with Fortran ordering and in binary (not Fortran
>>> unformatted):
>>> 
>>> header info
>>> (((x(i,j,k),i=1,ie),j=1,je),k=1,ke)
>>> (((y(i,j,k),i=1,ie),j=1,je),k=1,ke)
>>> (((z(i,j,k),i=1,ie),j=1,je),k=1,ke)
>>> 
>>> Each processor reads a block subset of the arrays, and I've defined a
>>> derived type using MPI_TYPE_INDEXED for the blocks that go to each
>>> processor.  MPI_TYPE_INDEXED takes as arguments the number of blocks,
>>> block sizes, and block displacements; I created (and committed) the
>>> datatype with three times the number of blocks/displacements (i.e.
>>> subsetie*subsetje*subsetke*3) needed for one variable above on a
>>> given processor.  Then I used the following to read the file across the
>>> processors:
>>> 
>>> count = subsetie*subsetje*subsetke
>>> call MPI_FILE_SET_VIEW ( fh, skipheader, mpi_real8, newdatatype, &
>>>      "native", mpi_info_null, ierror )
>>> call MPI_FILE_READ_ALL ( fh, x, count, mpi_real8, ierror )
>>> call MPI_FILE_READ_ALL ( fh, y, count, mpi_real8, ierror )
>>> call MPI_FILE_READ_ALL ( fh, z, count, mpi_real8, ierror )
>>> 
>>> The result is that x is read correctly, but y and z are not.  However,
>>> if I define one variable xyz(subsetie,subsetje,subsetke,3) and read all
>>> the data in one call:
>>> 
>>> count = subsetie*subsetje*subsetke*3
>>> call MPI_FILE_SET_VIEW ( fh, skipheader, mpi_real8, newdatatype, &
>>>      "native", mpi_info_null, ierror )
>>> call MPI_FILE_READ_ALL ( fh, xyz, count, mpi_real8, ierror )
>>> 
>>> everything is read correctly, which also verifies that my derived type
>>> is correct.  Alternatively, I can reset the view (with the correct
>>> displacements) after each READ_ALL and read into the individual
>>> variables.
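>>> 
>>> That is, something like this before the second read (sketch; z would
>>> follow the same pattern with disp = skipheader + 2*ie*je*ke*8):
>>> 
>>> integer(kind=MPI_OFFSET_KIND) :: disp
>>> disp = skipheader + ie*je*ke*8          ! header plus all of x
>>> call MPI_FILE_SET_VIEW ( fh, disp, mpi_real8, newdatatype, &
>>>      "native", mpi_info_null, ierror )
>>> call MPI_FILE_READ_ALL ( fh, y, count, mpi_real8, ierror )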
>>> 
>>> Is this the expected behavior?  If so, I do not understand how to make
>>> successive collective reads from a file without resetting the view
>>> before every read, which will take a toll on performance when I continue
>>> to read many additional variables from the file.
>>> 
>>> Matt
>>> _______________________________________________
>>> mpich-discuss mailing list     mpich-discuss at mcs.anl.gov
>>> To manage subscription options or unsubscribe:
>>> https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss
>> 
> 


