[MPICH] slow IOR when using fileview

Wei-keng Liao wkliao at ece.northwestern.edu
Sun Jul 1 19:54:41 CDT 2007


I just tried the cb_buffer_size 10485760 hint and got similar results.
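For reference, a hint like this is typically passed through an MPI_Info
object at open time, along the following lines (just a sketch; the file name
is a placeholder and not taken from the actual test code):

    /* Sketch: passing the cb_buffer_size hint via an MPI_Info object at
     * open time. "testfile" is a placeholder name, not the actual test file. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        MPI_Info info;

        MPI_Init(&argc, &argv);
        MPI_Info_create(&info);
        MPI_Info_set(info, "cb_buffer_size", "10485760");  /* 10 MB */
        MPI_File_open(MPI_COMM_WORLD, "testfile",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);
        MPI_Info_free(&info);
        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }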

I agree that the cost of using a datatype should be negligible, especially
for this I/O pattern. Note that both cases call MPI collective writes, but
since the access patterns are not interleaved, ROMIO internally will call
the independent write subroutines. Still, I cannot see any reason for such
a big performance difference.
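
As a further diagnostic (my suggestion only, not something from the attached
test), the ROMIO path taken by the collective call can be steered explicitly
with hints, for example to rule collective buffering and data sieving in or
out:

    /* Diagnostic sketch: turn off collective buffering and data sieving for
     * writes so the collective call degenerates to plain independent writes.
     * Note that some ROMIO hints are only honored when supplied at
     * MPI_File_open time. */
    #include <mpi.h>

    static void disable_cb_and_ds(MPI_File fh)
    {
        MPI_Info info;
        MPI_Info_create(&info);
        MPI_Info_set(info, "romio_cb_write", "disable");  /* collective buffering */
        MPI_Info_set(info, "romio_ds_write", "disable");  /* data sieving */
        MPI_File_set_info(fh, info);
        MPI_Info_free(&info);
    }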

Also, I tested the code on a different machine, and there both cases had
about the same timing. I tried MPICH 1.2.7, MPICH2 1.0.2, and MPICH2 1.0.5.
I wonder if this only happens on the Cray. I will report it to the system
people to see what they think.

Wei-keng


On Sun, 1 Jul 2007, Rajeev Thakur wrote:

> In these other cases, the code assumes that the datatype represents a
> noncontiguous access pattern and goes through the motions of collective I/O
> when it actually is not necessary. That costs more but should not be this
> much more.
>
> Rajeev
>
>> -----Original Message-----
>> From: owner-mpich-discuss at mcs.anl.gov
>> [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of Wei-keng Liao
>> Sent: Sunday, July 01, 2007 3:25 PM
>> To: mpich-discuss at mcs.anl.gov
>> Subject: RE: [MPICH] slow IOR when using fileview
>>
>>
>> I tried different datatype constructors. Compared to the case without
>> using MPI_File_set_view(), the results are as follows (two of these
>> cases are sketched right after the list):
>>    MPI_Type_create_subarray()  ---- much slower
>>    MPI_Type_indexed()  ---- much slower
>>    MPI_Type_vector() with explicit offset in set_view  ---- much slower
>>    MPI_Type_contiguous() with explicit offset in set_view  ---- same
>>    MPI_BYTE with explicit offset in set_view  ---- same
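
For concreteness, a rough sketch of two of the cases above, reconstructed
from the description rather than taken from the attached code, for a pattern
where each rank writes one LEN-byte block at file offset rank*LEN (LEN, rank,
and fh are placeholder names):

    #include <mpi.h>

    #define LEN (10 * 1024 * 1024)    /* 10 MB per process, as in the test */

    /* MPI_Type_indexed(): one block per rank, the offset carried by the
     * filetype, displacement in set_view is 0.              [much slower] */
    static void view_indexed(MPI_File fh, int rank)
    {
        MPI_Datatype ftype;
        int blocklen = LEN;
        int disp     = rank * LEN;    /* in MPI_BYTE units; fits in int only
                                         for small runs */
        MPI_Type_indexed(1, &blocklen, &disp, MPI_BYTE, &ftype);
        MPI_Type_commit(&ftype);
        MPI_File_set_view(fh, 0, MPI_BYTE, ftype, "native", MPI_INFO_NULL);
        MPI_Type_free(&ftype);
    }

    /* MPI_Type_contiguous() plus an explicit offset passed as the set_view
     * displacement, so the filetype itself has no holes.           [same] */
    static void view_contiguous(MPI_File fh, int rank)
    {
        MPI_Datatype ftype;
        MPI_Type_contiguous(LEN, MPI_BYTE, &ftype);
        MPI_Type_commit(&ftype);
        MPI_File_set_view(fh, (MPI_Offset)rank * LEN, MPI_BYTE, ftype,
                          "native", MPI_INFO_NULL);
        MPI_Type_free(&ftype);
    }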
>>
>> The MPICH installed there is MPICH2 1.0.2. I cannot test newer versions
>> on that machine.
>>
>> Wei-keng
>>
>>
>> On Sun, 1 Jul 2007, Rajeev Thakur wrote:
>>
>>> Can you see what happens if you use type_indexed instead of
>>> type_subarray?
>>>
>>> Rajeev
>>>
>>>> -----Original Message-----
>>>> From: owner-mpich-discuss at mcs.anl.gov
>>>> [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of Wei-keng Liao
>>>> Sent: Saturday, June 30, 2007 2:03 AM
>>>> To: mpich-discuss at mcs.anl.gov
>>>> Subject: [MPICH] slow IOR when using fileview
>>>>
>>>>
>>>> I am experiencing slow IOR performance on a Cray XT3 when using the
>>>> fileview option. I extracted the code into a simpler version
>>>> (attached). The code compares two collective writes:
>>>> MPI_File_write_all and MPI_File_write_at_all. The former uses an MPI
>>>> fileview and the latter uses an explicit file offset. In both cases,
>>>> each process writes 10 MB to a shared file: contiguous,
>>>> non-overlapping, and non-interleaved. On the Cray XT3 with the Lustre
>>>> file system, the former is dramatically slower than the latter. Here
>>>> is the output for a run with 8 processes:
>>>>
>>>> 2: MPI_File_write_all() time = 4.72 sec
>>>> 3: MPI_File_write_all() time = 4.74 sec
>>>> 6: MPI_File_write_all() time = 4.77 sec
>>>> 1: MPI_File_write_all() time = 4.79 sec
>>>> 7: MPI_File_write_all() time = 4.81 sec
>>>> 0: MPI_File_write_all() time = 4.83 sec
>>>> 5: MPI_File_write_all() time = 4.85 sec
>>>> 4: MPI_File_write_all() time = 4.89 sec
>>>> 2: MPI_File_write_at_all() time = 0.02 sec
>>>> 1: MPI_File_write_at_all() time = 0.02 sec
>>>> 3: MPI_File_write_at_all() time = 0.02 sec
>>>> 0: MPI_File_write_at_all() time = 0.02 sec
>>>> 6: MPI_File_write_at_all() time = 0.02 sec
>>>> 4: MPI_File_write_at_all() time = 0.02 sec
>>>> 7: MPI_File_write_at_all() time = 0.02 sec
>>>> 5: MPI_File_write_at_all() time = 0.02 sec
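
Since the attached test program is not reproduced in this archive, the
following is a minimal reconstruction of the two timed calls based only on
the description above and on the later mention of MPI_Type_create_subarray;
the file names, buffer handling, and omitted error checking are assumptions:

    /* Reconstruction sketch, NOT the actual attached test: each of nprocs
     * ranks writes LEN contiguous bytes at offset rank*LEN, once through a
     * subarray fileview + MPI_File_write_all and once through an explicit
     * offset + MPI_File_write_at_all. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    #define LEN (10 * 1024 * 1024)    /* 10 MB per process */

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        int sizes[1], subsizes[1], starts[1];
        char *buf;
        double t;
        MPI_File fh;
        MPI_Datatype filetype;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        buf = (char *) calloc(LEN, 1);

        /* Case 1: subarray fileview, then MPI_File_write_all() */
        sizes[0]    = nprocs * LEN;    /* fits in int only for small runs */
        subsizes[0] = LEN;
        starts[0]   = rank * LEN;
        MPI_Type_create_subarray(1, sizes, subsizes, starts, MPI_ORDER_C,
                                 MPI_BYTE, &filetype);
        MPI_Type_commit(&filetype);

        MPI_File_open(MPI_COMM_WORLD, "testfile.view",     /* placeholder */
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        MPI_File_set_view(fh, 0, MPI_BYTE, filetype, "native", MPI_INFO_NULL);
        t = MPI_Wtime();
        MPI_File_write_all(fh, buf, LEN, MPI_BYTE, &status);
        t = MPI_Wtime() - t;
        MPI_File_close(&fh);
        printf("%d: MPI_File_write_all() time = %.2f sec\n", rank, t);

        /* Case 2: no fileview, explicit offset via MPI_File_write_at_all() */
        MPI_File_open(MPI_COMM_WORLD, "testfile.offset",   /* placeholder */
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        t = MPI_Wtime();
        MPI_File_write_at_all(fh, (MPI_Offset)rank * LEN, buf, LEN,
                              MPI_BYTE, &status);
        t = MPI_Wtime() - t;
        MPI_File_close(&fh);
        printf("%d: MPI_File_write_at_all() time = %.2f sec\n", rank, t);

        MPI_Type_free(&filetype);
        free(buf);
        MPI_Finalize();
        return 0;
    }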
>>>>
>>>> I tried the same code on other machines and different file systems
>>>> (e.g., PVFS), and the timings for the two cases were very close to
>>>> each other. If anyone has access to a Cray XT3 machine, could you
>>>> please try it and let me know?
>>>> Thanks.
>>>>
>>>> Wei-keng
>>>>
>>>
>>
>>
>



