[mpich-discuss] Parallel I/O on Lustre: MPI Vs. POSIX

Rajeev Thakur thakur at mcs.anl.gov
Mon Jun 20 16:01:09 CDT 2011


Try using the independent I/O functions MPI_File_write_at and MPI_File_read_at instead of the collective ones for this access pattern (large contiguous blocks). Also, the closest POSIX functions to compare with are open/read/write rather than fopen/fread/fwrite. And you can write to a shared file with POSIX I/O as well (open/read/write) for a more equal comparison.
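
For the independent version, each rank can write and read its own contiguous 32MB block at an explicit offset. The sketch below is not your attached benchmark -- the file names and the block size are placeholders taken from your description -- but it shows the independent MPI calls next to a POSIX shared-file comparison that uses pwrite/pread at the same per-rank offsets:

/* Minimal sketch: one contiguous 32 MB block per rank, written to and read
 * from a single shared file, first with independent MPI I/O and then with
 * POSIX I/O at the same per-rank offsets. */
#include <mpi.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdlib.h>

#define NDOUBLES 4194304   /* 4,194,304 doubles = 32 MB per process */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *buf = (double *) malloc(NDOUBLES * sizeof(double));
    for (int i = 0; i < NDOUBLES; ++i) buf[i] = (double) rank;

    MPI_Offset offset = (MPI_Offset) rank * NDOUBLES * sizeof(double);

    /* Independent MPI I/O: each rank writes/reads its block of a shared file. */
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "mpi_shared.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write_at(fh, offset, buf, NDOUBLES, MPI_DOUBLE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    MPI_File_open(MPI_COMM_WORLD, "mpi_shared.dat",
                  MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);
    MPI_File_read_at(fh, offset, buf, NDOUBLES, MPI_DOUBLE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    /* POSIX I/O to a shared file at the same offsets, for an equal comparison. */
    int fd = open("posix_shared.dat", O_WRONLY | O_CREAT, 0644);
    pwrite(fd, buf, NDOUBLES * sizeof(double), (off_t) offset);
    close(fd);

    fd = open("posix_shared.dat", O_RDONLY);
    pread(fd, buf, NDOUBLES * sizeof(double), (off_t) offset);
    close(fd);

    free(buf);
    MPI_Finalize();
    return 0;
}

Timing each open/write/read/close pair around these calls, as you do in your benchmark, should then give a more direct MPI-vs-POSIX comparison on the same shared-file access pattern.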

Rajeev


On Jun 20, 2011, at 3:52 PM, George Zagaris wrote:

> Dear All:
> 
> I am currently investigating the best I/O strategy for large-scale data,
> targeting in particular the Lustre architecture.
> 
> Towards this end, I developed a small benchmark (also attached) in which
> each process writes and reads 4,194,304 doubles (32MB per process) with
> MPI I/O on a single shared file and with POSIX I/O on separate files --
> one file per process.
> 
> I ran this code with 32 processes under a directory which has (see the
> lfs command sketched below):
> (a) stripe size equal to 32MB, i.e., data is stripe aligned, and
> (b) stripe count (number of OSTs) set to 32
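> 
> That is, something along the lines of "lfs setstripe -S 32M -c 32 <dir>"
> (older Lustre releases spell the stripe-size option as lowercase -s, so
> treat the exact flags as approximate).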
> 
> Given the above configuration, I would expect no file-system contention,
> since the data is stripe aligned and the number of OSTs equals the number
> of processes performing the I/O. Hence, I would expect the MPI I/O
> performance to be close to the POSIX performance. However, the raw
> performance numbers I obtained do not corroborate this theory:
> 
> MPI-WRITE-OPEN:     0.0422981
> MPI-WRITE-CLOSE:    0.000592947
> MPI-WRITE:          0.0437472
> MPI-READ-OPEN:      0.00699806
> MPI-READ-CLOSE:     1.30613
> MPI-READ:           1.30675
> POSIX-WRITE-OPEN:   0.017261
> POSIX-WRITE-CLOSE:  0.00202298
> POSIX-WRITE:        0.00158501
> POSIX-READ-OPEN:    0.00238109
> POSIX-READ-CLOSE:   0.000462055
> POSIX-READ:         0.00268793
> 
> I was wondering if anyone has experience with using MPI I/O on Lustre and
> whether using hints can improve the I/O performance. Any additional
> thoughts, comments, or suggestions on this would also be very welcome.
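> 
> For concreteness, the hints I have in mind would be set roughly like this
> before MPI_File_open (hint names as I understand them from the ROMIO
> documentation, so please correct me if they do not apply to Lustre):
> 
>   MPI_Info info;
>   MPI_File fh;
>   MPI_Info_create(&info);
>   MPI_Info_set(info, "striping_factor", "32");      /* number of OSTs       */
>   MPI_Info_set(info, "striping_unit", "33554432");  /* 32MB stripe size     */
>   MPI_Info_set(info, "romio_cb_write", "enable");   /* collective buffering */
>   MPI_Info_set(info, "romio_ds_write", "disable");  /* no data sieving      */
>   MPI_File_open(MPI_COMM_WORLD, "shared.dat",
>                 MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);
>   MPI_Info_free(&info);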
> 
> I sincerely thank you for all your time & help.
> 
> Best Regards,
> George
> <ConflictFreeStripeAligned.cxx>
> _______________________________________________
> mpich-discuss mailing list
> mpich-discuss at mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss


