[mpich-discuss] Parallel File System
Rob Ross
rross at mcs.anl.gov
Wed Feb 16 16:02:31 CST 2011
Hi,
Real parallel I/O requires a parallel file system underneath; you can't just bake it into the app. You can use MPI-IO on NFS (with a list of caveats as long as my arm), but you really want a PFS. Otherwise you might as well write independent files, or do all the writing from rank 0.
-- Rob
On Feb 16, 2011, at 1:32 PM, "Hiatt, Dave M " <dave.m.hiatt at citi.com> wrote:
> Is there a default parallel file system baked into MPICH so that MPI-IO is functional without Lustre, PVFS, or GPFS being installed independently on my cluster? I'm getting a lot of pushback from my grid support group, telling me that if I want parallel I/O I have to bake it into my app. We write to an NFS-mounted disk farm. Can I use MPI 1.2.1 out of the box and get parallel I/O through MPI-IO? For whatever reason I didn't find this answer on the web site anywhere. If I missed it, I apologize.
>
> Ciao
>
> dave
>
> "If there's anything more important than my ego around, I want it caught and shot now." -Zaphod Beeblebrox
>
> Dave M. Hiatt
>
> Director, Risk Analytics
>
> CitiMortgage
>
> 1000 Technology Drive
> O'Fallon, MO 63368-2240
>
> Telephone: 636-261-1408
>
> _______________________________________________
> mpich-discuss mailing list
> mpich-discuss at mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss