[MPICH] Limitations of MPI I/O (romio) on ntfs
David Ashton
ashton at mcs.anl.gov
Mon Jun 27 13:58:06 CDT 2005
James Perrin,
The flags to MPI_File_open are not correctly implemented under NTFS. This
is a known bug that we have not fixed in the current release.
-David Ashton
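For reference, a minimal sketch of the kind of call affected: every process
opens the same file read-only with MPI_File_open. The filename "data.bin" is a
placeholder, and the error check relies on MPI file operations defaulting to
the MPI_ERRORS_RETURN error handler.

    #include <mpi.h>
    #include <stdio.h>

    /* Minimal sketch: all processes open the same (placeholder) file
     * read-only. The access-mode flags passed here (MPI_MODE_RDONLY)
     * are the part reported as mishandled on NTFS. */
    int main(int argc, char **argv)
    {
        MPI_File fh;
        int err;

        MPI_Init(&argc, &argv);

        err = MPI_File_open(MPI_COMM_WORLD, "data.bin",
                            MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);
        if (err != MPI_SUCCESS) {
            char msg[MPI_MAX_ERROR_STRING];
            int len;
            MPI_Error_string(err, msg, &len);
            fprintf(stderr, "MPI_File_open failed: %s\n", msg);
        } else {
            MPI_File_close(&fh);
        }

        MPI_Finalize();
        return 0;
    }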
-----Original Message-----
From: owner-mpich-discuss at mcs.anl.gov
[mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of James S Perrin
Sent: Monday, June 27, 2005 7:24 AM
To: mpich-discuss at mcs.anl.gov
Subject: [MPICH] Limitations of MPI I/O (romio) on ntfs
Hi,
I'm using MPICH2 (previously MPICH 1.2.5) on Windows with NTFS, just one of a
number of platforms my application needs to support. There is very little
documentation on the limitations of using NTFS, and what I have found has been
through trial and error. I just want to confirm what I've found, or learn
whether there are solutions.
1. Only one process can access a file for reading. This is an annoying
problem, as we/users have dual-CPU machines. I can understand writing being
restricted to a single process, but not reading. Is there some hint that I
should be using? (A sketch of passing a hint follows this list.)
2. This seems like a bug: if my file is marked read-only, it fails to open
even though I'm opening it with MPI_MODE_RDONLY.
3. If I share a folder across a cluster (and use MPICH's shared-folder-to-drive
mapping), should I be able to read a file on multiple nodes (and on multiple
processes per node, if 1. has a solution)?
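For question 1, a minimal sketch of passing a hint at open time, assuming a
hint could apply here: "romio_ds_read" is a standard ROMIO hint (it controls
data sieving on reads), but whether any hint lifts the single-reader
restriction on NTFS is exactly what is being asked. "shared.dat" is a
placeholder filename.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        MPI_Info info;

        MPI_Init(&argc, &argv);

        /* Attach an info object carrying a ROMIO hint to the open call. */
        MPI_Info_create(&info);
        MPI_Info_set(info, "romio_ds_read", "disable");

        if (MPI_File_open(MPI_COMM_WORLD, "shared.dat",
                          MPI_MODE_RDONLY, info, &fh) == MPI_SUCCESS)
            MPI_File_close(&fh);

        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }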
Regards
James
PS: Since the list server told me that the romio-users list is non-existent, I
guess this is where I should post on the subject.
--
------------------------------------------------------------------------------
James S. Perrin,                  | email: james.perrin at manchester.ac.uk
Manchester Visualization Centre,  | www.sve.man.ac.uk/General/Staff/perrin
Kilburn Building, The University, | tel: +44 161 275 6945
Manchester, England. M13 9PL.     | fax: +44 161 275 6800/6040
------------------------------------------------------------------------------
"The test of intellect is the refusal to belabour the obvious" -Alfred Bester
------------------------------------------------------------------------------