[MPICH] question about MPI-IO Read_all
Rob Ross
rross at mcs.anl.gov
Thu Apr 26 09:18:22 CDT 2007
Hi all,
This discussion is getting misleading, and I'd like to try to steer it a little.
MPI does *not* say that 32-bit types will be used when creating or
processing datatypes. int (and MPI_Aint) on 32-bit platforms are 32-bit,
so we end up with these size types on these machines as an artifact of
reusing the MPI datatype constructors that were defined for memory
operations in MPI-1.
On 64-bit platforms MPI_Aint is 64-bit, and sometimes ints are 64-bit
too, and in those cases all these issues drop away completely. On other
systems ints remain 32-bit, and some hoops must be jumped through (as
described by Wei-keng and Rajeev) to get larger I/O descriptions,
because the "count" parameter is an int. On systems with 32-bit Aints we
have to jump through similar hoops if we want to access data that is
widely distributed throughout the file.
None of this has anything to do with maximum file size. Maximum file
size is dependent on the underlying OS and OS offset sizes, etc.
Generally modern systems support 64-bit offsets and can write very large
files, limited by the underlying FS or storage.
Finally, when trying to use these "tricks" to describe more than 2 GB
of file data in a single type (when ints are 32-bit), or to describe a
region larger than 2 GB (when Aints are 32-bit), you may run into
bugs (which is what I think happened to Peter D.). This can happen
because the MPI implementation cannot return an accurate size and extent
for the type in these cases: these are returned in ints and Aints
respectively, which aren't always big enough to hold the value.
Regards,
Rob
Russell L. Carter wrote:
> Hi Rajeev,
> Thanks! The following is very helpful:
>
> Rajeev Thakur wrote:
>> 2^31 is 2 Gbytes. If you are reading 2 GB per process with a single
>> Read_all, you are already doing quite well performance-wise. If you
>> want to read more than that, you can create a derived datatype of,
>> say, 10 contiguous bytes and pass that as the datatype to Read_all.
>> That would give you 20 GB. You can read even more by using 100 or
>> 1000 instead of 10.
>>
>> In practice, you might encounter some errors, because the MPI-IO
>> implementation internally may use some types that are 32-bit, not
>> expecting anyone to read more than that with a single call. So try
>> it once, and if it doesn't work, read in 2 GB chunks.
>
> Another way of phrasing my question is, what Datatypes are allowed
> in Read_all? Apparently contiguous fundamental types are ok. Anything
> else? Where is this defined? I do not see it defined in
>
> http://www.mpi-forum.org/docs/mpi-20-html/node192.htm#Node192
>
> If there's a community understood reason for a 2GB (say) limitation I
> would like to be able to reference it.
>
> Thanks,
> Russell
>
>> Rajeev
>>
>>
>>> -----Original Message-----
>>> From: owner-mpich-discuss at mcs.anl.gov
>>> [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of Russell L. Carter
>>> Sent: Wednesday, April 25, 2007 6:33 PM
>>> To: mpich-discuss at mcs.anl.gov
>>> Subject: [MPICH] question about MPI-IO Read_all
>>>
>>> Hi,
>>> I have a question about the amount of data it is possible to read
>>> using MPI::Create_hindex with a fundamental type of MPI::BYTE, and
>>> MPI::File::Read_all.
>>>
>>> Following the discussion of irregularly distributed arrays beginning
>>> on p. 78 of "Using MPI-2", I want to read my data by doing this:
>>>
>>> double *buf = ...;
>>> int count, bufsize = ...;
>>> MPI::Offset offset = ...;
>>> MPI::File f = MPI::File::Open(...);
>>> MPI::Datatype filetype =
>>>     MPI::BYTE.Create_hindexed(count, blocks, displacements);
>>> filetype.Commit();
>>> f.Set_view(offset, MPI::BYTE, filetype, "native", info_);
>>> f.Read_all(buf, bufsize, MPI::BYTE);
>>>
>>> What I am curious about is the amount of data that can
>>> be read with Read_all. Since bufsize is an int, that would
>>> seem to imply that the maximum Read_all (per node) is 2^31
>>> bytes, which is not gigantic.
>>>
>>> Is there some other technique I can use to increase the amount
>>> of data I can Read_all at one time? I have different sized
>>> data interspersed, so I can't offset by a larger fundamental
>>> type. My arrays are not contiguous in the Fortran calling program,
>>> and are of ints and 4- or 8-byte reals. If I use a Create_struct
>>> to make a filetype that I use in Set_view, doesn't that have
>>> the same read-size limitation? Only now it is for all the
>>> arrays in the struct. Hopefully I am missing something.
>>>
>>> Thanks,
>>> Russell
>>>
>>>
>>
>