[mpich-discuss] MPI IO performance
burlen
burlen.loring at gmail.com
Mon Jul 13 18:56:08 CDT 2009
I have made use of MPI-IO in my application. I have arrays stored one
per file in Fortran order, and I have set the file view to a subarray and
used MPI_File_read_all to read that subarray in native mode. I have
made some comparisons against std::ifstream reading the same array with
the same decomposition, and I am finding std::ifstream is a bit faster,
in some cases more than 40% faster. However, I had expected to see, at
worst, equivalent performance.
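For reference, a minimal sketch of the read pattern I described above is
below. The global extents (512^3 floats), the slab decomposition along the
last dimension, and the file name "array.dat" are illustrative assumptions,
not my exact setup.

#include <mpi.h>
#include <vector>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    int rank, nRanks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nRanks);

    // global array extents, Fortran order on disk (assumed values)
    int gsizes[3] = {512, 512, 512};
    // each rank reads a slab of the last dimension (assumes it divides evenly)
    int lsizes[3] = {512, 512, 512 / nRanks};
    int starts[3] = {0, 0, rank * lsizes[2]};

    // file view selecting this rank's subarray of the Fortran-ordered file
    MPI_Datatype subarray;
    MPI_Type_create_subarray(3, gsizes, lsizes, starts,
                             MPI_ORDER_FORTRAN, MPI_FLOAT, &subarray);
    MPI_Type_commit(&subarray);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "array.dat",
                  MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);
    MPI_File_set_view(fh, 0, MPI_FLOAT, subarray, "native", MPI_INFO_NULL);

    // collective read of the local subarray
    std::vector<float> buf((size_t)lsizes[0] * lsizes[1] * lsizes[2]);
    MPI_File_read_all(fh, &buf[0], (int)buf.size(),
                      MPI_FLOAT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Type_free(&subarray);
    MPI_Finalize();
    return 0;
}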
Are there any issues I have to watch out for when using MPI-IO? What could
explain the results? What can I do to improve the situation?
My test array is 513 MB. I have tested on 2, 4, and 8 nodes, with a single
process per node. The file system is NFS. For my tests I am using
mpich2-1.0.8p1 and ch3:ssm.
Burlen