[MPICH] Serial read-in of MPI-2 I/O

Peter Diamessis pjd38 at cornell.edu
Thu Sep 20 14:48:58 CDT 2007


Hi Rajeev,

Many thanks for the prompt and thorough response.
I was a little paranoid about any metadata. As I indicated in a
previous message to the list, counting the array/vector elements
and multiplying by their datatype size gives a file size consistent
with there being no metadata.

A big thank you again,

Peter

----- Original Message ----- 
From: "Rajeev Thakur" <thakur at mcs.anl.gov>
To: "'Peter Diamessis'" <pjd38 at cornell.edu>; <mpich-discuss at mcs.anl.gov>
Sent: Thursday, September 20, 2007 2:15 PM
Subject: RE: [MPICH] Serial read-in of MPI-2 I/O


> Peter,
>      If it's the same kind of machine, it should work if you are using
> MPICH/ROMIO. ROMIO doesn't add any metadata to the file, so if you are
> writing a 100x100 array of doubles, the file will contain exactly that. That
> is not true with ordinary Fortran I/O. If you write an array with a Fortran
> write statement, Fortran adds some metadata, so the file is a few bytes
> larger than the array size. If you read with a Fortran read, it will expect
> the file to be in this format. If it was written with MPI-IO, it won't work
> because the metadata will be missing. So if your viz expert will read using
> just raw binary I/O (as in C, for example), it will work.
> 
> Rajeev
> 
> 
>> -----Original Message-----
>> From: owner-mpich-discuss at mcs.anl.gov 
>> [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of Peter Diamessis
>> Sent: Tuesday, September 18, 2007 10:46 AM
>> To: mpich-discuss at mcs.anl.gov
>> Subject: [MPICH] Serial read-in of MPI-2 I/O
>> 
>> Hi folks,
>> 
>> I have a naive question regarding MPI-2 I/O so please bear with me.
>> I'm working with a local visualization expert who wants to visualize
>> some results of my parallel CFD simulations.
>> 
>> The computational domain is a 3-D box generated by an MPI-based
>> simulation, where the domain has been partitioned according to a
>> 1-D domain decomposition. I've generated snapshots (i.e. sample
>> fields) of my basic/primitive 3-D variables (the velocity vector
>> & density, a scalar) that I output at specific times during the
>> simulations using non-contiguous MPI-2 parallel I/O. I read that
>> data back in through a separate postprocessor code to perform
>> any necessary analysis.
>> 
>> The viz expert would like to read in the data from some of 
>> these snapshots serially onto his Linux box.
>> Note that the data was generated on a Linux cluster using the 
>> 'native' option.
>> I've attached the actual F90 source code for the output. As you
>> can see, the four 3-D arrays containing the primitive variables
>> are output, followed by some secondary information on domain
>> size & grid spacing.
>> 
>> The "Using MPI-2" book says that files created with the 'native'
>> representation are not portable. However, I'm assuming that both
>> Linux boxes run on little-endian hardware, and thus this
>> MPI-2-generated data can be read serially on the other machine?
>> Given that the data is output contiguously, exactly as it is
>> stored in memory, how different is this file from the equivalent
>> binary file a serial F90 code would output? I'm particularly
>> concerned as to whether there is any header information
>> preceding each 3-D array block. If not, I'm assuming that, since
>> my 3-D variables are 4-byte reals, the viz guy can just read
>> through the data in 4-byte chunks?
>> 
>> Any clarifications would be hugely appreciated.
>> 
>> Sincerely,
>> 
>> Peter Diamessis
>> 
>> 
>



