[mpich-discuss] MPI-IO issue

Grismer, Matthew J Civ USAF AFMC AFRL/RBAT Matthew.Grismer at wpafb.af.mil
Tue Feb 28 11:25:23 CST 2012


Wei-keng,

Thanks for the sample code.  I modified it to read my particular files
exactly the way I do in the code I've been working on.  Unfortunately for
me, this standalone code reads the file correctly!  So now I am stuck
trying to figure out what in the larger code I am modifying is causing the
MPI-IO routines to muck up; presumably something is being hammered in
memory somewhere.  Anyway, thanks for all your help.

Matt

-----Original Message-----
From: mpich-discuss-bounces at mcs.anl.gov
[mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Wei-keng Liao
Sent: Thursday, February 23, 2012 2:07 AM
To: mpich-discuss at mcs.anl.gov
Subject: Re: [mpich-discuss] MPI-IO issue

Hi, Matt,

I cannot verify your code without running it.
Below is a sample code using the subarray type constructor.
Could you check whether you can run it and perhaps make use of it?
Some parameter values are hard-coded; please replace them
with your own values.

        program main
        use mpi
        implicit none

        integer myid, nx, ny, nz, npx, npy, npz
        integer mypx, mypy, mypz
        integer array_of_sizes(3), array_of_subsizes(3), array_of_starts(3)
        integer fp, ftype, ierr
        integer mstatus(MPI_STATUS_SIZE)
        integer(KIND=MPI_OFFSET_KIND) iOffset
        character*128 filename
        real*8  buf(20, 10, 10)   ! local buffer sized for the hard-coded 1x2x2 decomposition

        call MPI_Init(ierr)
        call MPI_Comm_rank(MPI_COMM_WORLD, myid, ierr)

        filename = 'test.dat'

        npx = 1   ! number of processes in x dimension
        npy = 2   ! number of processes in y dimension
        npz = 2   ! number of processes in z dimension

        array_of_sizes(1) = 20 ! global array size in x dimension
        array_of_sizes(2) = 20 ! global array size in y dimension
        array_of_sizes(3) = 20 ! global array size in z dimension

        array_of_subsizes(1) = array_of_sizes(1) / npx   ! subarray size in x dimension
        array_of_subsizes(2) = array_of_sizes(2) / npy   ! subarray size in y dimension
        array_of_subsizes(3) = array_of_sizes(3) / npz   ! subarray size in z dimension

        mypz = myid/(npx*npy)                 ! my process rank in z dimension
        mypy = (myid-(mypz*npx*npy))/npx      ! my process rank in y dimension
        mypx = mod(myid-(mypz*npx*npy), npx)  ! my process rank in x dimension

        array_of_starts(1) = array_of_subsizes(1) * mypx
        array_of_starts(2) = array_of_subsizes(2) * mypy
        array_of_starts(3) = array_of_subsizes(3) * mypz

        ! local element counts, used for the read below
        nx = array_of_subsizes(1)
        ny = array_of_subsizes(2)
        nz = array_of_subsizes(3)

        call MPI_Type_create_subarray(3, array_of_sizes, array_of_subsizes, &
                                      array_of_starts, MPI_ORDER_FORTRAN,   &
                                      MPI_REAL8, ftype, ierr)
        call MPI_Type_commit(ftype, ierr)

        call MPI_File_open(MPI_COMM_WORLD, filename, MPI_MODE_RDWR+MPI_MODE_CREATE, &
                           MPI_INFO_NULL, fp, ierr)

        iOffset = 0
        call MPI_File_set_view(fp, iOffset, MPI_REAL8, ftype, 'native', &
                               MPI_INFO_NULL, ierr)
        call MPI_Type_free(ftype, ierr)

        call MPI_File_read_all(fp, buf, nx*ny*nz, MPI_REAL8, mstatus, ierr)

        call MPI_File_close(fp, ierr)

        call MPI_Finalize(ierr)

        end program main
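
Since the rest of this thread turns on the datatype's extent, a quick
diagnostic along the lines of the following sketch could be called on
ftype right after MPI_Type_commit.  This is an illustrative addition, not
part of the sample above; with the hard-coded 20x20x20 global array of
8-byte reals, a subarray filetype should report a lower bound of 0 and an
extent of 20*20*20*8 = 64000 bytes.

        ! Illustrative diagnostic (not part of the sample above): print the
        ! lower bound and extent of a committed MPI datatype.
        subroutine print_type_extent(dtype)
        use mpi
        implicit none
        integer, intent(in) :: dtype
        integer ierr
        integer(KIND=MPI_ADDRESS_KIND) lb, extent
        call MPI_Type_get_extent(dtype, lb, extent, ierr)
        print *, 'lower bound =', lb, '  extent (bytes) =', extent
        end subroutine print_type_extent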

Wei-keng


On Feb 22, 2012, at 1:49 PM, Grismer, Matthew J Civ USAF AFMC AFRL/RBAT
wrote:

> Wei-keng,
> 
> I built MPICH2 1.4.1p1 with the very latest Intel compilers (12.1.3) on
> OS X 10.6.8, rebuilt my code, and ran it; it read the same incorrect
> results (darn!).  I built the ROMIO tests coll_test.c and fcoll_test.f,
> and both ran without errors.  I am using MPI_ORDER_FORTRAN in my
> subarray definition.
> 
> I've attached the relevant section of code.  When I continually reset
> the file view (as shown) with correct displacements after reading each
> variable, then x, y, and z are read correctly.  If I remove the last 2
> set views, x is read correctly but not y and z.  As I mentioned before,
> checking the file pointer position on each processor after reading x
> shows it is correct only on processor 0, even though the subarray
> command has correctly set the lower and upper bounds for the derived
> type to the full block size.
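
For readers without the attachment, the reset-the-view workaround
described above amounts to something like the following sketch.  The
names (fh, ftype, skipheader, nglob, count) are illustrative, not taken
from the attached MOD_FileIO.f90; nglob stands for the number of values
in one whole array in the file.

        ! Sketch of resetting the view before each collective read
        ! (illustrative names; not the attached code).  Each variable
        ! occupies nglob 8-byte reals in the file, so the byte displacement
        ! advances by nglob*8 from one variable to the next.
        subroutine read_xyz_reset_view(fh, ftype, skipheader, nglob, count, x, y, z)
        use mpi
        implicit none
        integer, intent(in) :: fh, ftype, count
        integer(KIND=MPI_OFFSET_KIND), intent(in) :: skipheader, nglob
        real*8, intent(out) :: x(count), y(count), z(count)
        integer ierr, mstatus(MPI_STATUS_SIZE)
        integer(KIND=MPI_OFFSET_KIND) disp

        disp = skipheader                    ! x starts right after the header
        call MPI_File_set_view(fh, disp, MPI_REAL8, ftype, 'native', MPI_INFO_NULL, ierr)
        call MPI_File_read_all(fh, x, count, MPI_REAL8, mstatus, ierr)

        disp = skipheader + nglob*8          ! y starts one full array later
        call MPI_File_set_view(fh, disp, MPI_REAL8, ftype, 'native', MPI_INFO_NULL, ierr)
        call MPI_File_read_all(fh, y, count, MPI_REAL8, mstatus, ierr)

        disp = skipheader + 2*nglob*8        ! z starts two full arrays later
        call MPI_File_set_view(fh, disp, MPI_REAL8, ftype, 'native', MPI_INFO_NULL, ierr)
        call MPI_File_read_all(fh, z, count, MPI_REAL8, mstatus, ierr)
        end subroutine read_xyz_reset_view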
> 
> Matt
> 
> -----Original Message-----
> From: mpich-discuss-bounces at mcs.anl.gov
> [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Wei-keng Liao
> Sent: Thursday, February 16, 2012 6:37 PM
> To: mpich-discuss at mcs.anl.gov
> Subject: Re: [mpich-discuss] MPI-IO issue
> 
> Matt,
> 
> Can you show us the code you use to create the derived data type?
> Are you using MPI_ORDER_FORTRAN in MPI_Type_create_subarray()?
> I suspect the order may not be set correctly.
> 
> 
> ROMIO has a test program named coll_test.c under src/mpi/romio/test.
> It uses MPI_Type_create_darray() to create the new data type for
> subarray accesses similar to your case. It only writes once, though.
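
For reference, a darray filetype roughly equivalent to the subarray
approach can be built as in the following minimal sketch.  The values are
assumptions (a 20x20x20 real*8 array on a 1 x 2 x 2 process grid), not an
excerpt from coll_test.c.

        ! Minimal sketch (assumed values; not an excerpt from coll_test.c):
        ! build a block-distributed darray filetype for a 20x20x20 real*8
        ! array on a 1 x 2 x 2 process grid (so nprocs must be 4 here).
        subroutine make_darray_type(myid, nprocs, dtype)
        use mpi
        implicit none
        integer, intent(in)  :: myid, nprocs
        integer, intent(out) :: dtype
        integer ierr
        integer gsizes(3), distribs(3), dargs(3), psizes(3)
        gsizes   = (/ 20, 20, 20 /)              ! global array dimensions
        distribs = MPI_DISTRIBUTE_BLOCK          ! block distribution in each dimension
        dargs    = MPI_DISTRIBUTE_DFLT_DARG      ! default block sizes
        psizes   = (/ 1, 2, 2 /)                 ! process grid
        call MPI_Type_create_darray(nprocs, myid, 3, gsizes, distribs, dargs, &
                                    psizes, MPI_ORDER_FORTRAN, MPI_REAL8,     &
                                    dtype, ierr)
        call MPI_Type_commit(dtype, ierr)
        end subroutine make_darray_type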
> 
> Please run that test to see if it passes. If not, the problem might be
> in the ROMIO source code. Also, there have been several updates since
> 1.2.1p1. Give the latest a try; maybe it will solve the problem outright.
> 
> Wei-keng
> 
> 
> On Feb 16, 2012, at 2:41 PM, Grismer, Matthew J Civ USAF AFMC AFRL/RBAT
> wrote:
> 
>> Wei-keng,
>> 
>> Ok, I went back, removed my derived type, and used
>> MPI_Type_create_subarray.  I checked the extents of the created types,
>> and they are correct on all processors.  I also verified that the size
>> (ie*je*ke) of the subarray type I created matched the amount of data I
>> am requesting each processor to read in MPI_File_read_all.  The data is
>> still being read incorrectly.  After the first MPI_File_read_all for x,
>> I used MPI_File_get_position on each processor to see if each is at the
>> correct location (the upper bound extent).  Only processor 0 shows the
>> correct location; the other 7 processes are all at the wrong location,
>> far short of the upper extent (processor 0 correctly shows 513000, the
>> others are showing 81000 or 84000).
>> 
>> I forgot to mention I am using version 1.2.1p1 built on Mac OS X 10.6
>> with Intel compilers.  Would it be worth upgrading to the latest
>> version (I didn't realize how far I had fallen behind)?
>> 
>> Matt
>> 
>> -----Original Message-----
>> From: mpich-discuss-bounces at mcs.anl.gov
>> [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Wei-keng Liao
>> Sent: Monday, February 13, 2012 5:13 PM
>> To: mpich-discuss at mcs.anl.gov
>> Subject: Re: [mpich-discuss] MPI-IO issue
>> 
>> Hi Matt,
>> 
>> Could you please try MPI_Type_create_subarray() to see if
>> the same error happens, so we can rule out the possibility
>> that the error is from your derived datatype?
>> 
>> Wei-keng
>> 
>> 
>> On Feb 13, 2012, at 5:08 AM, Matthew J. Grismer wrote:
>> 
>>> Wei-keng,
>>> 
>>> Yes, that is exactly what I set the lowerbound and extent to for the
>>> derived type on each of the processors.  And then I went back and
>>> checked that the values were set correctly with MPI_Type_get_extent.
>>> Then the view and reads are as in my first example in the original
>>> post, reading x, then y, then z in separate reads.
>>> 
>>> Matt
>>> 
>>> 
>>> On 2/10/12 6:07 PM, "Wei-keng Liao" <wkliao at ece.northwestern.edu> wrote:
>>> 
>>>> Matt,
>>>> 
>>>> Based on the way you use the new derived datatype in MPI_FILE_SET_VIEW,
>>>> the lowerbound and extent should be 0 and ie*je*ke*8, respectively.
>>>> The factor of 8 is the size in bytes of a real*8 value. Can you verify those values?
>>>> 
>>>> Wei-keng
>>>> 
>>>> 
>>>> On Feb 10, 2012, at 4:25 PM, Grismer, Matthew J Civ USAF AFMC AFRL/RBAT
>>>> wrote:
>>>> 
>>>>> Thanks for the info; I was not aware of the extent of the derived
>>>>> type.  I used MPI_Type_create_resized to change the lower bound and
>>>>> extent of the derived types on each processor to cover the entire
>>>>> array, not just the sub-blocks I wanted.  Now x reads correctly on
>>>>> each processor, but then y and z are still wrong.  After the
>>>>> MPI_File_read_all command for x, I used the MPI_File_get_position
>>>>> command to see where each processor's file pointer was.  Only on
>>>>> processor 0 has the file pointer moved to the correct location in the
>>>>> file; on all the other processors it is different and far short of
>>>>> the correct location.  I'm defining the derived types and extents the
>>>>> same way on all the processors, and even check the extents after I
>>>>> set them to verify they are correct.  Any other thoughts on what I am
>>>>> missing?  Thanks.
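
The MPI_Type_create_resized step described above presumably looks
something like this sketch; the names are illustrative (not the actual
code), with ie, je, and ke taken to be the global array dimensions and 8
bytes per real*8 value.

        ! Illustrative sketch (not the actual code): resize a per-process
        ! filetype so its extent spans one whole ie*je*ke array of real*8,
        ! so the file pointer lands at the start of the next variable after
        ! each collective read.
        subroutine resize_filetype(filetype, ie, je, ke, resizedtype)
        use mpi
        implicit none
        integer, intent(in)  :: filetype, ie, je, ke
        integer, intent(out) :: resizedtype
        integer ierr
        integer(KIND=MPI_ADDRESS_KIND) lb, extent
        lb     = 0
        extent = int(ie, MPI_ADDRESS_KIND) * je * ke * 8
        call MPI_Type_create_resized(filetype, lb, extent, resizedtype, ierr)
        call MPI_Type_commit(resizedtype, ierr)
        end subroutine resize_filetype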
>>>>> 
>>>>> Matt
>>>>> 
>>>>> -----Original Message-----
>>>>> From: mpich-discuss-bounces at mcs.anl.gov
>>>>> [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Wei-keng Liao
>>>>> Sent: Wednesday, February 08, 2012 7:33 PM
>>>>> To: mpich-discuss at mcs.anl.gov
>>>>> Subject: Re: [mpich-discuss] MPI-IO issue
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> I am guessing the problem is the datatype you created.
>>>>> Please check the type extent to see if it equals the size of the
>>>>> entire array, using MPI_Type_extent(). If not, the file pointer
>>>>> after the first read will not move to the beginning of the second
>>>>> array as you expected, which leads to reading data from the wrong
>>>>> file location.
>>>>> 
>>>>> Also, if each process reads a block subarray, I suggest using
>>>>> MPI_Type_create_subarray to create the new datatype.
>>>>> 
>>>>> 
>>>>> Wei-keng
>>>>> 
>>>>> 
>>>>> On Feb 8, 2012, at 3:46 PM, Grismer, Matthew J Civ USAF AFMC AFRL/RBAT
>>>>> wrote:
>>>>> 
>>>>>> I am attempting to use MPI-IO to read block, structured data from a
>>>>>> file.  Three three-dimensional coordinate arrays are stored in the
>>>>>> file, one after another with Fortran ordering and in binary (not
>>>>>> Fortran unformatted):
>>>>>> 
>>>>>> header info
>>>>>> (((x(i,j,k),i=1,ie),j=1,je),k=1,ke)
>>>>>> (((y(i,j,k),i=1,ie),j=1,je),k=1,ke)
>>>>>> (((z(i,j,k),i=1,ie),j=1,je),k=1,ke)
>>>>>> 
>>>>>> Each processor reads a block subset of the arrays, and I've defined a
>>>>>> derived type using MPI_TYPE_INDEXED for the blocks that go to each
>>>>>> processor.  MPI_TYPE_INDEXED takes as arguments the number of blocks
>>>>>> and the arrays of block sizes and block displacements; I created (and
>>>>>> committed) the datatype with 3 times the number of
>>>>>> blocks/displacements (i.e. subsetie*subsetje*subsetke*3 elements)
>>>>>> that I need for one variable above on a given processor.  Then I used
>>>>>> the following to read the file across the processors:
>>>>>> 
>>>>>> count = subsetie*subsetje*subsetke
>>>>>> call MPI_FILE_SET_VIEW ( fh, skipheader, mpi_real8, newdatatype, &
>>>>>>                          "native", mpi_info_null, ierror )
>>>>>> call MPI_FILE_READ_ALL ( fh, x, count, mpi_real8, MPI_STATUS_IGNORE, ierror )
>>>>>> call MPI_FILE_READ_ALL ( fh, y, count, mpi_real8, MPI_STATUS_IGNORE, ierror )
>>>>>> call MPI_FILE_READ_ALL ( fh, z, count, mpi_real8, MPI_STATUS_IGNORE, ierror )
>>>>>> 
>>>>>> The result from this is that x is read correctly, but y and z are
>>>>>> not.  However, if I define one variable
>>>>>> xyz(subsetie,subsetje,subsetke,3) and read all the data in one call:
>>>>>> 
>>>>>> count = subsetie*subsetje*subsetke*3
>>>>>> call MPI_FILE_SET_VIEW ( fh, skipheader, mpi_real8, newdatatype, &
>>>>>>                          "native", mpi_info_null, ierror )
>>>>>> call MPI_FILE_READ_ALL ( fh, xyz, count, mpi_real8, MPI_STATUS_IGNORE, ierror )
>>>>>> 
>>>>>> everything is read correctly, which also verifies my derived type is
>>>>>> correct.  Alternatively I can reset the view (with correct
>>>>>> displacements) after each READ_ALL and read into the individual
>>>>>> variables.
>>>>>> 
>>>>>> Is this the expected behavior?  If so I do not understand how to make
>>>>>> successive collective reads from a file without resetting the view
>>>>>> before every read, which will take a toll on performance when I
>>>>>> continue to read many additional variables from the file.
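
For concreteness, with the layout described above, variable number m
(m = 0 for x, 1 for y, 2 for z) begins at byte offset

        disp = skipheader + m * ie*je*ke * 8      ! 8 bytes per real*8 value

which is the displacement the reset-view workaround passes to
MPI_FILE_SET_VIEW before reading each variable.  Whether the file pointer
advances that far on its own after a READ_ALL depends on the extent of
newdatatype, which is what the replies above focus on.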
>>>>>> 
>>>>>> Matt
> <MOD_FileIO.f90>

_______________________________________________
mpich-discuss mailing list     mpich-discuss at mcs.anl.gov
To manage subscription options or unsubscribe:
https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss

