[MPICH] slow IOR when using fileview
Rajeev Thakur
thakur at mcs.anl.gov
Sun Jul 1 11:12:23 CDT 2007
Can you see what happens if you use type_indexed instead of type_subarray?
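
A minimal sketch of the suggested swap, assuming the attached test builds its
fileview with MPI_Type_create_subarray roughly as below (the function name and
the one-contiguous-10-MB-block-per-rank layout come from the quoted message,
not from the actual attachment):

#include <mpi.h>

#define LEN (10 * 1024 * 1024)   /* 10 MB per process, as in the test */

/* Build the filetype describing this rank's contiguous LEN bytes
 * starting at byte rank*LEN, either as a 1-D subarray or as a
 * one-entry indexed type; both describe the same file layout. */
MPI_Datatype build_filetype(int rank, int nprocs, int use_indexed)
{
    MPI_Datatype ftype;
    if (use_indexed) {
        int blocklen = LEN;
        int disp = rank * LEN;   /* in units of MPI_BYTE; must fit in an
                                    int, which holds for small runs */
        MPI_Type_indexed(1, &blocklen, &disp, MPI_BYTE, &ftype);
    } else {
        int gsize = nprocs * LEN, lsize = LEN, start = rank * LEN;
        MPI_Type_create_subarray(1, &gsize, &lsize, &start,
                                 MPI_ORDER_C, MPI_BYTE, &ftype);
    }
    MPI_Type_commit(&ftype);
    return ftype;
}

The committed type would be passed to MPI_File_set_view exactly as before; if
the indexed version runs fast, that would point at how the subarray type in
particular is being handled on the XT3.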
Rajeev
> -----Original Message-----
> From: owner-mpich-discuss at mcs.anl.gov
> [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of Wei-keng Liao
> Sent: Saturday, June 30, 2007 2:03 AM
> To: mpich-discuss at mcs.anl.gov
> Subject: [MPICH] slow IOR when using fileview
>
>
> I am experiencing slow IOR performance on a Cray XT3 when using the
> fileview option. I extracted the code into a simpler version (attached;
> a sketch reconstructing it appears after this message). The code
> compares two collective writes: MPI_File_write_all and
> MPI_File_write_at_all. The former uses an MPI fileview and the latter
> uses an explicit file offset. In both cases, each process writes 10 MB
> to a shared file, contiguously, non-overlapping, and non-interleaved.
> On the Cray XT3 with the Lustre file system, the former is dramatically
> slower than the latter. Here is the output for a run with 8 processes:
>
> 2: MPI_File_write_all() time = 4.72 sec
> 3: MPI_File_write_all() time = 4.74 sec
> 6: MPI_File_write_all() time = 4.77 sec
> 1: MPI_File_write_all() time = 4.79 sec
> 7: MPI_File_write_all() time = 4.81 sec
> 0: MPI_File_write_all() time = 4.83 sec
> 5: MPI_File_write_all() time = 4.85 sec
> 4: MPI_File_write_all() time = 4.89 sec
> 2: MPI_File_write_at_all() time = 0.02 sec
> 1: MPI_File_write_at_all() time = 0.02 sec
> 3: MPI_File_write_at_all() time = 0.02 sec
> 0: MPI_File_write_at_all() time = 0.02 sec
> 6: MPI_File_write_at_all() time = 0.02 sec
> 4: MPI_File_write_at_all() time = 0.02 sec
> 7: MPI_File_write_at_all() time = 0.02 sec
> 5: MPI_File_write_at_all() time = 0.02 sec
>
> I tried the same code on other machines and different file systems
> (e.g., PVFS), and the timings for the two cases were very close to
> each other. If anyone has access to a Cray XT3 machine, could you
> please try it and let me know?
> Thanks.
>
> Wei-keng
>
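
Since the attachment is not preserved in the archive, here is a minimal
reconstruction of the test Wei-keng describes, under the stated assumptions
(10 MB per process, contiguous, non-overlapping, non-interleaved blocks); the
file names and the lack of error checking are illustrative:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define LEN (10 * 1024 * 1024)   /* 10 MB per process */

int main(int argc, char **argv)
{
    int rank, nprocs;
    int gsize, lsize = LEN, start;
    char *buf;
    double t;
    MPI_Datatype ftype;
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    buf = (char *) calloc(LEN, 1);   /* contents don't matter for timing */

    /* case 1: fileview built from a 1-D subarray + MPI_File_write_all */
    gsize = nprocs * LEN;
    start = rank * LEN;
    MPI_Type_create_subarray(1, &gsize, &lsize, &start,
                             MPI_ORDER_C, MPI_BYTE, &ftype);
    MPI_Type_commit(&ftype);
    MPI_File_open(MPI_COMM_WORLD, "testfile.view",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_set_view(fh, 0, MPI_BYTE, ftype, "native", MPI_INFO_NULL);
    t = MPI_Wtime();
    MPI_File_write_all(fh, buf, LEN, MPI_BYTE, MPI_STATUS_IGNORE);
    t = MPI_Wtime() - t;
    MPI_File_close(&fh);
    MPI_Type_free(&ftype);
    printf("%d: MPI_File_write_all() time = %.2f sec\n", rank, t);

    /* case 2: default fileview + explicit offset + MPI_File_write_at_all */
    MPI_File_open(MPI_COMM_WORLD, "testfile.at",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    t = MPI_Wtime();
    MPI_File_write_at_all(fh, (MPI_Offset) rank * LEN, buf, LEN,
                          MPI_BYTE, MPI_STATUS_IGNORE);
    t = MPI_Wtime() - t;
    MPI_File_close(&fh);
    printf("%d: MPI_File_write_at_all() time = %.2f sec\n", rank, t);

    free(buf);
    MPI_Finalize();
    return 0;
}

Both calls are collective and target the same amount of contiguous,
non-overlapping data, so a large timing gap between them isolates the cost
of the fileview path.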