initial timings
John Tannahill
tannahill1 at llnl.gov
Mon Aug 25 12:13:10 CDT 2003
Reiner,
I would be happy to send you any of my test codes that you would like
(as long as no one here says I can't; I will have to check, and I am not
sure whether your being located in Germany makes a difference), but
first let me explain what I have:
csnap.c => a modified version of some test code from NERSC that just
exercises some of the basic functionality of ANL's parallel netCDF
library
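
In case it helps to picture it, the sort of basic functionality I mean
looks roughly like this in the C interface (just a minimal sketch using
the ncmpi_* names; the actual calls in csnap.c, and in 0.8.9, may differ
a bit, and the file and variable names here are made up):

    #include <mpi.h>
    #include <pnetcdf.h>

    /* Minimal sketch: all ranks collectively create a file, define one
       float variable, and close it.  Error checking omitted for brevity. */
    int main(int argc, char **argv)
    {
        int ncid, dimid[2], varid;

        MPI_Init(&argc, &argv);

        /* Collective create: every rank in MPI_COMM_WORLD opens the file. */
        ncmpi_create(MPI_COMM_WORLD, "test.nc", NC_CLOBBER,
                     MPI_INFO_NULL, &ncid);

        /* Define dimensions and a variable, then leave define mode. */
        ncmpi_def_dim(ncid, "lat", 600, &dimid[0]);
        ncmpi_def_dim(ncid, "lon", 600, &dimid[1]);
        ncmpi_def_var(ncid, "field", NC_FLOAT, 2, dimid, &varid);
        ncmpi_enddef(ncid);

        /* ... each rank would read/write its own subdomain here ... */

        ncmpi_close(ncid);
        MPI_Finalize();
        return 0;
    }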
pnf_test.F => my translation of csnap.c into Fortran, which uncovered
some deficiencies in the Fortran interface of ANL's parallel netCDF
library; ANL has since corrected them
maswrt.F => a code that mimics the way we currently do our netCDF I/O
in our modeling code: the Slaves all read their own data, but for
writing, the Slaves all send their data (via MPI) back in chunks to the
Master, which then does the write (using the serial netCDF library)
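
maswrt.F itself is Fortran and works on our real decomposition, but in
rough C terms the write half of that pattern looks something like the
sketch below; the sizes, names, and 1-D layout are just placeholders:

    #include <mpi.h>
    #include <netcdf.h>
    #include <stdlib.h>

    #define NLOCAL 1000   /* placeholder: floats owned by each slave */

    /* Slaves ship their chunk to the Master; the Master receives one
       chunk at a time and writes it with the serial netCDF library. */
    void write_via_master(int rank, int nprocs, const float *local)
    {
        if (rank == 0) {                      /* Master */
            int ncid, dimid, varid;
            size_t start[1], count[1] = { NLOCAL };
            float *chunk = malloc(NLOCAL * sizeof(float));

            nc_create("out.nc", NC_CLOBBER, &ncid);
            nc_def_dim(ncid, "n", (size_t)(nprocs - 1) * NLOCAL, &dimid);
            nc_def_var(ncid, "field", NC_FLOAT, 1, &dimid, &varid);
            nc_enddef(ncid);

            for (int src = 1; src < nprocs; src++) {
                MPI_Recv(chunk, NLOCAL, MPI_FLOAT, src, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                start[0] = (size_t)(src - 1) * NLOCAL;
                nc_put_vara_float(ncid, varid, start, count, chunk);
            }
            nc_close(ncid);
            free(chunk);
        } else {                              /* Slave */
            MPI_Send((void *)local, NLOCAL, MPI_FLOAT, 0, 0,
                     MPI_COMM_WORLD);
        }
    }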
slvwrt.F => a modification of maswrt.F that uses ANL's Fortran interface
to their parallel netCDF library: the Slaves both read and write their
own data
Some additional comments on slvwrt.F =>
Currently it is somewhat kludged until the next release of the parallel
netCDF library (I am using 0.8.9). Right now, 0.8.9 requires that you
do some things C-like: in particular, indexing from 0 and reversing the
order of the dimensions. Also, the Fortran interface currently uses
subroutine calls, and it is my understanding that these will be changing
to function calls, to better match the way serial netCDF works. Lastly,
a couple of the parallel calls will be changing from using Float in
their names to using Real, again to better match how serial netCDF does
things.
slvwrt.F currently works, but it might be somewhat confusing given the
above. My plan is to modify it once ANL releases their new version, and
then things should look pretty good.
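
To make those C-like conventions concrete: once the file and variable
have been defined (much as in the first sketch above), the write in
slvwrt.F's pattern boils down to one collective call per variable, with
each process supplying its own 0-based start/count in row-major (C)
dimension order. This is just a sketch against the C interface (ncmpi_*
names); the 0.8.9 Fortran calls differ as described, and the subdomain
arguments here are placeholders:

    #include <mpi.h>
    #include <pnetcdf.h>

    /* Each rank writes its own (lev,lat,lon) block of a variable defined
       with its dimensions in that row-major order.  Offsets are 0-based.
       Everything except ncid/varid describes this rank's subdomain and
       is illustrative. */
    int write_my_block(int ncid, int varid,
                       MPI_Offset nlev,
                       MPI_Offset lat0, MPI_Offset nlat_loc,
                       MPI_Offset lon0, MPI_Offset nlon_loc,
                       const float *local_field)
    {
        MPI_Offset start[3] = { 0,    lat0,     lon0     };
        MPI_Offset count[3] = { nlev, nlat_loc, nlon_loc };

        /* Collective write: every process participates with its own
           start/count, so no data funnels through a single Master. */
        return ncmpi_put_vara_float_all(ncid, varid, start, count,
                                        local_field);
    }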
So again, you are welcome to any of these now (given approval) or you
could choose to wait until I am able to update slvwrt.F.
It's nice to know that other people are interested in this work. What
are your plans with regard to parallel netCDF?
Regards,
John
Reiner Vogelsang wrote:
> Dear John,
> I would like to make some remarks on your results:
>
> First of all, thanks for posting your results.
>
> Moreover, two months ago I ran some performance and throughput
> measurements with ncrcat on one of our Altix 3000 servers. ncrcat is
> one of the NCO utilities. The setup of those measurements was such
> that several independent tasks of ncrcat were processing replicated
> sets of the same input data. The files were in the range of 1 GB, and
> the filesystem was striped over several FC disks and I/O channels.
>
> I found that the performance of the serial NetCDF library 3.5.0 could
> be increased significantly by using an internal port of the FFIO
> library (known from Cray machines) to IA64 under Red Hat 7.2. FFIO can
> perform extra buffering or caching for writing and reading. That is an
> advantage over the standard raw POSIX I/O used in the serial NetCDF
> library, especially for strided I/O patterns, which need a lot of seek
> operations.
>
> Do you know what kind of I/O statements are used in the MPI-I/O part of your
> MPI library?
>
> Anyway, your findings are very promising.
>
> Best regards
> Reiner
>
> PS: Do you mind sending me your Fortran test? I was about to modify
> the C test code in order to measure some performance numbers on an
> Altix 3000. I am happy to share the results with you.
>
> John Tannahill wrote:
>
>
>>Some initial timing results for parallel netCDF =>
>>
>>File size: 216 MB
>>Processors: 16 (5x3+1; lonxlat+master)
>>Platform: NERSC IBM-SP (seaborg)
>>2D domain decomposition (lon/lat)
>>600x600x150 (lonxlatxlev)
>>real*4 array for I/O
>>Fortran code
>>
>>Method 1: Use serial netCDF.
>> Slaves all read their own data.
>> For output:
>> Slaves send their data to the Master (MPI)
>> (all at once, no buffering; so file size restricted)
>> Master collects and outputs the data
>> (all at once)
>>
>>Method 2: Use ANL's parallel-netcdf, beta version 0.8.9.
>> Slaves all read their own data, but use parallel-netcdf calls.
>> For output:
>> Slaves all output their own data
>> (all at once)
>>
>>Read results =>
>>
>> Method 2 appears to be about 33% faster than Method 1.
>>
>>Write results =>
>>
>> Method 2 appears to be about 6-7 times faster than Method 1.
>>
>>Note that these preliminary results are based on the parameters given
>>above. Next week, I hope to look at different machines, different
>>file sizes (although I am memory limited on the Master as to how big
>>I can go), different numbers of processors, etc.
>>
>>Anyway, things look promising.
>>
>>Regards,
>>John
>>
>>--
>>============================
>>John R. Tannahill
>>Lawrence Livermore Nat. Lab.
>>P.O. Box 808, M/S L-103
>>Livermore, CA 94551
>>925-423-3514
>>Fax: 925-423-4908
>>============================
>
>
> --
> --------------------------------------------------------------------------------
> Reiner Vogelsang
> Senior System Engineer
>
> Silicon Graphics GmbH Home Office
> Am Hochacker 3
> D-85630 Grasbrunn 52428 Juelich
> Germany
>
> Phone +49-89-46108-0 +49-2461-939265
> Fax +49-89-46108-222 +49-2461-939266
> Mobile +49-171-3583208
> email reiner at sgi.com
>
--
============================
John R. Tannahill
Lawrence Livermore Nat. Lab.
P.O. Box 808, M/S L-103
Livermore, CA 94551
925-423-3514
Fax: 925-423-4908
============================