[mpich-discuss] Parallel I/O on Lustre: MPI Vs. POSIX

George Zagaris george.zagaris at kitware.com
Mon Jun 20 15:52:12 CDT 2011


Dear All:

I am currently investigating the best I/O strategy for large-scale data,
targeting in particular the Lustre file system.

Towards this end, I developed a small benchmark (also attached) in which
each process writes and reads 4,194,304 doubles (32 MB per process), once
with MPI I/O on a single shared file and once with POSIX I/O on separate
files -- one file per process.
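In case it helps frame the question, the write half of the comparison is
essentially the following. This is a simplified sketch rather than the
attached code verbatim; the file names and the use of MPI_File_write_at
(rather than a collective call) are my simplifications:

    #include <mpi.h>
    #include <cstdio>
    #include <vector>

    int main(int argc, char* argv[])
    {
      MPI_Init(&argc, &argv);
      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      const int N = 4194304;  // doubles per process == 32 MB
      std::vector<double> data(N, static_cast<double>(rank));

      // MPI I/O: all ranks write their 32 MB block into one shared file,
      // each at its own stripe-aligned offset.
      MPI_File fh;
      MPI_File_open(MPI_COMM_WORLD, "shared.dat",
                    MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
      MPI_Offset offset =
          static_cast<MPI_Offset>(rank) * N * sizeof(double);
      MPI_File_write_at(fh, offset, data.data(), N, MPI_DOUBLE,
                        MPI_STATUS_IGNORE);
      MPI_File_close(&fh);

      // POSIX I/O: one file per process.
      char fname[64];
      std::snprintf(fname, sizeof(fname), "posix_%d.dat", rank);
      std::FILE* fp = std::fopen(fname, "wb");
      std::fwrite(data.data(), sizeof(double), N, fp);
      std::fclose(fp);

      MPI_Finalize();
      return 0;
    }

The read half mirrors this with MPI_File_read_at and fread, and each phase
is timed separately (open, transfer, close).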

I ran this code with 32 processes under a directory which has:
(a) a stripe size of 32 MB, i.e., the data is stripe aligned, and
(b) a stripe count (number of OSTs) of 32.

Given this configuration I would expect no file-system contention, since
the data is stripe aligned and the number of OSTs equals the number of
processes performing the I/O: each rank's 32 MB block maps to exactly one
stripe and hence to its own OST. I would therefore expect the MPI I/O
performance to be close to the POSIX performance. The raw numbers I
obtained, however, do not corroborate this:

MPI-WRITE-OPEN:     0.0422981
MPI-WRITE-CLOSE:    0.000592947
MPI-WRITE:          0.0437472
MPI-READ-OPEN:      0.00699806
MPI-READ-CLOSE:     1.30613
MPI-READ:           1.30675
POSIX-WRITE-OPEN:   0.017261
POSIX-WRITE-CLOSE:  0.00202298
POSIX-WRITE:        0.00158501
POSIX-READ-OPEN:    0.00238109
POSIX-READ-CLOSE:   0.000462055
POSIX-READ:         0.00268793

I was wondering if anyone has experience with MPI I/O on Lustre and whether
using hints can improve the I/O performance. Any additional thoughts,
comments, or suggestions on this would also be very welcome.
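For what it's worth, the kind of experiment I have in mind is passing
ROMIO hints through an MPI_Info object at open time, along the lines of
the sketch below. The hint names are the standard ROMIO ones; the specific
values are just assumptions matching the 32 MB x 32 OST layout above, and
whether disabling collective buffering or data sieving actually helps on
this system is exactly what I would like to learn:

    #include <mpi.h>

    int main(int argc, char* argv[])
    {
      MPI_Init(&argc, &argv);

      MPI_Info info;
      MPI_Info_create(&info);
      MPI_Info_set(info, "striping_unit",   "33554432"); // 32 MB stripe size
      MPI_Info_set(info, "striping_factor", "32");       // 32 OSTs
      MPI_Info_set(info, "romio_cb_write",  "disable");  // no collective buffering
      MPI_Info_set(info, "romio_ds_write",  "disable");  // no data sieving

      MPI_File fh;
      MPI_File_open(MPI_COMM_WORLD, "shared.dat",
                    MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);

      // ... MPI_File_write_at / MPI_File_write_at_all as in the benchmark ...

      MPI_File_close(&fh);
      MPI_Info_free(&info);
      MPI_Finalize();
      return 0;
    }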

I sincerely thank you for all your time & help.

Best Regards,
George
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ConflictFreeStripeAligned.cxx
Type: text/x-c++src
Size: 11766 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/mpich-discuss/attachments/20110620/0c8ed0db/attachment.cxx>
