[MPICH] MPICH2 and PVFS2
Peter Diamessis
pjd38 at cornell.edu
Fri Apr 13 12:26:56 CDT 2007
Hi Rob,
I really appreciate the insight. Indeed, our cluster uses Myrinet GM-type connections.
I forwarded your reply to our system admin.
I've actually already generated a bunch of large datafiles through MPI-2 that I
have been able to handle very conveniently. I have no intention of quitting :-)
These datasets are my research livelihood :-)
Thank you again,
Pete
----- Original Message -----
From: "Robert Latham" <robl at mcs.anl.gov>
To: "Peter Diamessis" <pjd38 at cornell.edu>
Cc: <mpich-discuss at mcs.anl.gov>
Sent: Friday, April 13, 2007 12:21 PM
Subject: Re: [MPICH] MPICH2 and PVFS2
> On Thu, Apr 12, 2007 at 11:51:27PM -0400, Peter Diamessis wrote:
>> Apparently, MPICH v1.2.7..15 is still being used at the cluster. Our system
>> manager has made an effort to build it to make ROMIO compatible with
> >> PVFS2. Nonetheless, my problems persist. Would it simply be a better
> >> idea for them to shift the whole cluster to MPICH2, which I assume is
> >> more naturally compatible with PVFS2?
>
> Hi Peter
>
> First, I'm glad you're using MPI-IO to its full potential and seeing
> good results.
>
> Second, yes, the PVFS v2 support in MPICH 1.2.7 is quite old: about 2
> years of bugfixes have gone into MPICH2-1.0.5p4. It'd be great if you
> could run MPICH2 -- you might still find bugs but particularly for
> noncontiguous I/O we have fixed a number of issues.
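>
> (By noncontiguous I/O I mean strided or subarray-style accesses described
> with a datatype and a file view, so ROMIO sees the whole access pattern at
> once. Here is a minimal sketch of that kind of pattern -- the file name,
> sizes, and the pvfs2: prefix below are just placeholders for illustration:)
>
>   #include <mpi.h>
>
>   int main(int argc, char **argv)
>   {
>       MPI_File fh;
>       MPI_Datatype filetype;
>       int rank, nprocs;
>       double buf[4] = { 0.0, 1.0, 2.0, 3.0 };
>
>       MPI_Init(&argc, &argv);
>       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>       MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
>
>       /* filetype: 4 doubles, one taken out of every nprocs doubles */
>       MPI_Type_vector(4, 1, nprocs, MPI_DOUBLE, &filetype);
>       MPI_Type_commit(&filetype);
>
>       MPI_File_open(MPI_COMM_WORLD, "pvfs2:/mnt/pvfs2/test.dat",
>                     MPI_MODE_CREATE | MPI_MODE_WRONLY,
>                     MPI_INFO_NULL, &fh);
>
>       /* each rank starts at its own offset and then strides across the
>        * file; the collective write hands ROMIO the whole pattern */
>       MPI_File_set_view(fh, rank * sizeof(double), MPI_DOUBLE, filetype,
>                         "native", MPI_INFO_NULL);
>       MPI_File_write_all(fh, buf, 4, MPI_DOUBLE, MPI_STATUS_IGNORE);
>
>       MPI_File_close(&fh);
>       MPI_Type_free(&filetype);
>       MPI_Finalize();
>       return 0;
>   }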
>
> Now, complicating the matter is that v1.2.7..15 is a version of MPICH
> for the Myrinet interconnect. Your admin will know whether you are
> using GM or MX. Let your administrator know that you can build MPICH2
> with GM support by configuring MPICH2 with the --with-device=ch3:nemesis:gm
> flag. You might also need --with-gm-include=/path/to/include/gm if
> your GM headers are in a different location.
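>
> Roughly, the configure line for your admin might look something like this
> (the paths and install prefix are placeholders, and the PVFS2-related
> options are the ones described in the ROMIO README -- worth double-checking
> there and in the MPICH2 installer's guide):
>
>   ./configure --prefix=/usr/local/mpich2-1.0.5p4 \
>       --with-device=ch3:nemesis:gm \
>       --with-gm-include=/opt/gm/include \
>       --with-file-system=ufs+nfs+pvfs2 \
>       --with-pvfs2=/usr/local/pvfs2
>   make
>   make install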
>
>> Sorry if I sound too ignorant, but my I/O troubles have driven me up
>> the wall.
>
> This is not an ignorant question at all. Thanks for sending in the
> report instead of just quitting in frustration.
>
> ==rob
>
> --
> Rob Latham
> Mathematics and Computer Science Division
> Argonne National Lab, IL USA
> GPG fingerprint: A215 0178 EA2D B059 8CDF  B29D F333 664A 4280 315B
>