[mpich-discuss] MPI_File_write_at_all issue

Mohamad Chaarawi mschaara at cs.uh.edu
Wed Aug 26 11:39:01 CDT 2009


Ok, I now see the problem here.

Thanks for the reply,
Mohamad

Rob Latham wrote:
> On Tue, Aug 25, 2009 at 10:27:49AM -0500, Mohamad Chaarawi wrote:
>   
>> I have attached the test case, which basically creates a file view in
>> which each process writes/reads 4 integers, 4 times.
>> In a 2-process scenario the file view is:
>>
>> (PROC 0 --> 4 ints)(PROC 1 --> 4 ints)(PROC 0 --> 4 ints)(PROC 1 --> 4
>> ints)(PROC 0 --> 4 ints)(PROC 1 --> 4 ints)(PROC 0 --> 4 ints)(PROC 1
>> --> 4 ints)
>>
>> I call MPI_File_write_at_all with an offset of 16 (which means 16*4 =
>> 64 bytes, since the etype used to create the file view is an int),
>> and each process writes 16 integers. I would expect the resulting
>> file size to be 256 bytes, since each process starts writing its 16
>> integers one full iteration of the file view into the file. However,
>> I'm getting a file size of 240.
>>
>> I'm not sure if I'm misunderstanding the way write_at_all works or if
>> there is some bug somewhere.
>>     
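
[The test case itself was an attachment and is not reproduced in the
archive. A rough sketch of the kind of setup being described, for
reference in the discussion below, might look like this; the file name,
variable names, exact displacements, and the use of MPI_Type_struct with
an MPI_UB marker are assumptions, not the original code:]

/* Illustrative sketch only: 4-int blocks interleaved round-robin
 * across processes, filetype closed with an MPI_UB marker. */
#include <mpi.h>

int main(int argc, char **argv)
{
    int          rank, nprocs, i;
    int          buf[16];
    int          blocklens[5] = { 4, 4, 4, 4, 1 };
    MPI_Aint     displs[5];
    MPI_Datatype types[5]     = { MPI_INT, MPI_INT, MPI_INT, MPI_INT, MPI_UB };
    MPI_Datatype filetype;
    MPI_File     fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    for (i = 0; i < 16; i++)
        buf[i] = rank;

    /* 4 blocks of 4 ints per process; the MPI_UB marker sets the
     * extent to one full round of all processes
     * (nprocs * 16 ints = 128 bytes when nprocs == 2). */
    for (i = 0; i < 4; i++)
        displs[i] = (MPI_Aint)(i * nprocs + rank) * 4 * sizeof(int);
    displs[4] = (MPI_Aint)nprocs * 16 * sizeof(int);

    MPI_Type_struct(5, blocklens, displs, types, &filetype);
    MPI_Type_commit(&filetype);

    MPI_File_open(MPI_COMM_WORLD, "testfile",
                  MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);
    MPI_File_set_view(fh, 0, MPI_INT, filetype, "native", MPI_INFO_NULL);

    /* The offset is counted in etypes (ints), so 16 here means
     * 16 * 4 = 64 bytes into the view. */
    MPI_File_write_at_all(fh, 16, buf, 16, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Type_free(&filetype);
    MPI_Finalize();
    return 0;
}
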
>
> I think what has happened here is that you have forgotten the rules
> for tiling MPI file types.
>
> When you skip over 16 ints, you skip over 16 ints as laid out by the
> struct type you created, i.e. one entire struct type. The next ints
> are then placed according to the 2nd iteration of the tiled type.
>
> This is tricky, but important: when tiling types, the only things
> that matter are the lower bound and the upper bound. The LB for rank
> 0 is 0 and the UB is 128. The LB for rank 1 is 16 and the UB is also
> 128.
>
> (You can confirm this with a call to MPI_TYPE_GET_EXTENT.)
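
[For instance, dropping a fragment like the following into the sketch
above, right after MPI_Type_commit, should print lb=0 extent=128 on rank
0 but lb=16 extent=112 on rank 1, since extent = UB - LB. The variable
names are the ones assumed in that sketch, and it needs <stdio.h>:]

/* Report the lower bound and extent of this rank's filetype. */
MPI_Aint lb, extent;
MPI_Type_get_extent(filetype, &lb, &extent);
printf("rank %d: lb = %ld, extent = %ld\n", rank, (long)lb, (long)extent);
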
>
> As soon as you tile this type, you will have undone the careful
> placement you tried to achieve, and you actually end up with rank 0
> and rank 1 writing to the same offsets.
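
[Spelling out the numbers for the layout sketched above: rank 1's
filetype has LB = 16 and UB = 128, so its extent is only 112 bytes. The
view tiles the filetype by its extent, so rank 1's second-round blocks
land at bytes 128, 160, 192 and 224, exactly where rank 0's second-round
blocks land, and the last byte either rank touches is 240. That is why
the file comes out 240 bytes instead of 256.]
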
>
> You have created a struct type with an MPI_UB, so just add an MPI_LB
> in your struct declaration.
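
[In terms of the sketch above, that could look like the following;
MPI_LB/MPI_UB were the usual markers at the time, and
MPI_Type_create_resized is the non-deprecated way to get the same
effect:]

/* One possible form of the suggested fix: pin the lower bound at
 * byte 0 so every rank's filetype has the full 128-byte extent,
 * no matter where its first data block sits. */
int          blocklens[6] = { 1, 4, 4, 4, 4, 1 };
MPI_Aint     displs[6];
MPI_Datatype types[6]     = { MPI_LB, MPI_INT, MPI_INT, MPI_INT,
                              MPI_INT, MPI_UB };

displs[0] = 0;                                   /* lower bound: byte 0 */
for (i = 0; i < 4; i++)
    displs[i + 1] = (MPI_Aint)(i * nprocs + rank) * 4 * sizeof(int);
displs[5] = (MPI_Aint)nprocs * 16 * sizeof(int); /* upper bound: 128    */

MPI_Type_struct(6, blocklens, displs, types, &filetype);
MPI_Type_commit(&filetype);

/* Equivalent without the deprecated markers:
 *   MPI_Type_create_resized(oldtype, 0,
 *                           (MPI_Aint)nprocs * 16 * sizeof(int),
 *                           &filetype);
 */

[With the lower bound pinned, rank 1's second tile starts at byte 128
instead of 112, its last block ends at byte 256, and the write at offset
16 produces the expected 256-byte file.]
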
>
> ==rob
>
>   


