benchmark program question
Wei-keng Liao
wkliao at ece.northwestern.edu
Tue Feb 26 12:52:15 CST 2008
Hi,
The IOR benchmark includes PnetCDF: http://sourceforge.net/projects/ior-sio
Its PnetCDF code was recently modified to use record variables, which should
take care of the file-appending I/O mode.
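As a rough illustration of the record-variable approach (and of the three
items in your message below), here is a minimal sketch. The file name, NX,
and NUMRECS are placeholders, error checking is omitted, and this is a
sketch, not the IOR code itself:

    #include <stdlib.h>
    #include <mpi.h>
    #include <pnetcdf.h>

    #define NX      1048576   /* doubles per process per record (placeholder) */
    #define NUMRECS 10        /* number of appended records (placeholder) */

    int main(int argc, char **argv) {
        int rank, nprocs, ncid, dimid[2], varid, i;
        MPI_Offset start[2], count[2];
        double *buf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* works for any -np: the fixed dimension scales with nprocs */
        ncmpi_create(MPI_COMM_WORLD, "bench.nc", NC_CLOBBER|NC_64BIT_OFFSET,
                     MPI_INFO_NULL, &ncid);
        ncmpi_def_dim(ncid, "rec", NC_UNLIMITED, &dimid[0]); /* record dim */
        ncmpi_def_dim(ncid, "x", (MPI_Offset)nprocs * NX, &dimid[1]);
        ncmpi_def_var(ncid, "data", NC_DOUBLE, 2, dimid, &varid);
        ncmpi_enddef(ncid);

        buf = (double*) malloc(NX * sizeof(double));
        for (i = 0; i < NX; i++) buf[i] = (double)rank;

        /* each pass appends one record: a single collective write of
           nprocs * NX * sizeof(double) bytes to the shared file */
        count[0] = 1;  count[1] = NX;
        start[1] = (MPI_Offset)rank * NX;
        for (i = 0; i < NUMRECS; i++) {
            start[0] = i;   /* write along the unlimited dimension */
            ncmpi_put_vara_double_all(ncid, varid, start, count, buf);
        }

        ncmpi_close(ncid);
        free(buf);
        MPI_Finalize();
        return 0;
    }

Note that ncmpi_def_var only fixes the shape of one record (nprocs * NX
doubles here); the bytes moved by each collective call come from the
count[] passed to ncmpi_put_vara_double_all, so the total per write is the
per-process buffer size times the number of processes, just as in your
MPI-IO program, and the loop over records gives the appending behavior.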
I am interested in your results of 26 GB/s on Lustre using MPI-IO on
shared files. My experience topped out at about 15 GB/s on Jaguar @ORNL,
using an application I/O kernel whose data partitioning pattern is 3D
block-block-block. But those results were obtained when Jaguar was running
Catamount; Jaguar is currently being upgraded to new hardware and software.
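To be concrete about the pattern: each process owns one contiguous subcube
of the global 3-D array. Below is a minimal sketch of how each rank's
start/count, as passed to the ncmpi_put_vara_* family, could be derived.
The helper name block3d is only illustrative, and it assumes each global
dimension divides evenly by the process grid:

    #include <mpi.h>

    /* illustrative helper: compute the subarray owned by one rank under a
       3-D block-block-block partition of a gsize[0] x gsize[1] x gsize[2]
       global array */
    void block3d(int rank, int nprocs, const MPI_Offset gsize[3],
                 MPI_Offset start[3], MPI_Offset count[3])
    {
        int d, dims[3] = {0, 0, 0}, coord[3];

        MPI_Dims_create(nprocs, 3, dims);  /* factor nprocs into a 3-D grid */

        coord[0] =  rank / (dims[1] * dims[2]);   /* row-major rank ->     */
        coord[1] = (rank / dims[2]) % dims[1];    /* (i,j,k) grid position */
        coord[2] =  rank % dims[2];

        for (d = 0; d < 3; d++) {
            count[d] = gsize[d] / dims[d];   /* local block extent */
            start[d] = coord[d] * count[d];  /* offset in the global array */
        }
    }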
I would appreciate it if you could share your results.
Wei-keng
On Tue, 26 Feb 2008, Marty Barnaby wrote:
> I'm new to the parallel NetCDF interface, and I don't have much
> experience with NetCDF either. Because of new interest on our part, we
> would like a straightforward benchmark program to get byte-rate
> metrics for writing to a POSIX FS (chiefly, some large Lustre
> deployments). I've had some reasonable experience in this at the MPI-IO
> level, achieving a sustained average rate of 26 GB/s writing to a
> single, shared file with an LFS stripe count of 160. If anyone is
> interested, I can provide more specifics.
>
> I can't find the benchmark-type code that I really need, though I've
> been looking at the material under /test, like /test_double/test_write.c,
> which I've compiled and executed successfully at the appropriate -np 4
> level.
>
> There are three capabilities I would like to have that I can't see how
> to get.
>
> 1. Run on any number of processors. I'm sure this is simple, but I want
>    to understand why it fails when I attempt it.
>
> 2. Set the number of bytes appended to an open file in a single, atomic,
>    collective write operation. In my MPI-IO benchmark program I simply
>    derived this number from the buffer size on each processor: the total
>    was that size times the number of processors. At the higher level of
>    PnetCDF I'm not sure which value I'm controlling in the def_var and
>    put_var calls.
>
> 3. Be able to perform any number of equivalent, collective write
>    operations, appending to the same open file. Simply a
>
>        for ( i = 0; i < NUMRECS; i++ )
>
>    concept. This is basically what our scientific simulation applications
>    do in their 'dump' mode.
>
>
> Thanks,
> Marty Barnaby
>