[MPICH] MPICH2 and PVFS2

Darius Buntinas buntinas at mcs.anl.gov
Fri Apr 13 14:28:49 CDT 2007


MPICH2 does support MX, but the performance of that path has not been 
tuned yet.  Also, Myricom is actively working on their own MPICH2 port 
over MX, so depending on how long your MX transition takes, their port 
may be finished by the time you need it.
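
If your admin wants to experiment with the current MX support in the 
meantime, the configure invocation should be analogous to the GM one Rob 
describes below.  This is just a sketch: the netmod name and the include 
path flag here are my guess by analogy with the GM instructions, so 
check the README in your MPICH2 tarball before relying on them:

  ./configure --with-device=ch3:nemesis:mx \
              --with-mx-include=/path/to/mx/include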

Darius

Peter Diamessis wrote:
> Hi Rob,
> 
> Actually, one more thing: I did contact our system admin.
> He says he'll install MPICH2 but is concerned as to whether
> it supports MX.  The cluster is currently set up using GM, but
> they are transitioning to MX, and his understanding of your email
> is that MX is not supported by MPICH2.  Thus, would Open MPI be
> the cluster's ultimate option?
> Any thoughts here?
> 
> Sincerely,
> 
> Peter Diamessis
> 
> ----- Original Message ----- From: "Robert Latham" <robl at mcs.anl.gov>
> To: "Peter Diamessis" <pjd38 at cornell.edu>
> Cc: <mpich-discuss at mcs.anl.gov>
> Sent: Friday, April 13, 2007 12:21 PM
> Subject: Re: [MPICH] MPICH2 and PVFS2
> 
> 
>> On Thu, Apr 12, 2007 at 11:51:27PM -0400, Peter Diamessis wrote:
>>> Apparently, MPICH v1.2.7..15 is still being used at the cluster.
>>> Our system manager has made an effort to build it so that ROMIO is
>>> compatible with PVFS2.  Nonetheless, my problems persist.  Would it
>>> simply be a better idea for them to shift the whole cluster to
>>> MPICH2, which I assume is more naturally compatible with PVFS2?
>>
>> Hi Peter,
>>
>> First, I'm glad you're using MPI-IO to its full potential and seeing
>> good results. 
>> Second, yes, the PVFS v2 support in MPICH 1.2.7 is quite old: about
>> two years of bugfixes have gone into MPICH2-1.0.5p4.  It'd be great if
>> you could run MPICH2 -- you might still find bugs, but for
>> noncontiguous I/O in particular we have fixed a number of issues.
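>>
>> To illustrate the kind of noncontiguous access I mean, here is a
>> minimal sketch (the file name, the pvfs2 mount point, and the
>> column-block decomposition are all invented for the example): each
>> process writes one column of a global 2D array, so its data is
>> strided in the file, and the collective write lets ROMIO optimize
>> the whole pattern at once.
>>
>> #include <mpi.h>
>> #include <stdlib.h>
>>
>> int main(int argc, char **argv)
>> {
>>     int rank, nprocs, i;
>>     int gsizes[2], lsizes[2], starts[2];
>>     int *buf;
>>     MPI_Datatype filetype;
>>     MPI_File fh;
>>
>>     MPI_Init(&argc, &argv);
>>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>     MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
>>
>>     /* global array: 100 rows x nprocs columns; each rank owns one
>>        column, so its file accesses are strided (noncontiguous) */
>>     gsizes[0] = 100;  gsizes[1] = nprocs;
>>     lsizes[0] = 100;  lsizes[1] = 1;
>>     starts[0] = 0;    starts[1] = rank;
>>
>>     MPI_Type_create_subarray(2, gsizes, lsizes, starts,
>>                              MPI_ORDER_C, MPI_INT, &filetype);
>>     MPI_Type_commit(&filetype);
>>
>>     buf = (int *) malloc(100 * sizeof(int));
>>     for (i = 0; i < 100; i++)
>>         buf[i] = rank;
>>
>>     /* the "pvfs2:" prefix tells ROMIO to use its PVFS2 driver;
>>        the mount point is a placeholder */
>>     MPI_File_open(MPI_COMM_WORLD, "pvfs2:/mnt/pvfs2/testfile",
>>                   MPI_MODE_CREATE | MPI_MODE_WRONLY,
>>                   MPI_INFO_NULL, &fh);
>>     /* the file view is what makes this noncontiguous I/O */
>>     MPI_File_set_view(fh, 0, MPI_INT, filetype, "native",
>>                       MPI_INFO_NULL);
>>     /* collective write: ROMIO merges the strided requests */
>>     MPI_File_write_all(fh, buf, 100, MPI_INT, MPI_STATUS_IGNORE);
>>     MPI_File_close(&fh);
>>
>>     MPI_Type_free(&filetype);
>>     free(buf);
>>     MPI_Finalize();
>>     return 0;
>> }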
>> Now, complicating the matter is that v1.2.7..15 is a version of MPICH
>> for the Myrinet interconnect.  Your admin will know whether you are
>> using GM or MX.  Let your administrator know you can build MPICH2 with
>> GM support by configuring MPICH2 with the --with-device=ch3:nemesis:gm
>> flag.  You might also need --with-gm-include=/path/to/include/gm if
>> your GM headers are in a different location.
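>>
>> For example, a full (hypothetical) build might look like this, with
>> /usr/local/gm standing in for wherever GM is installed on your nodes:
>>
>>   ./configure --prefix=/usr/local/mpich2 \
>>               --with-device=ch3:nemesis:gm \
>>               --with-gm-include=/usr/local/gm/include
>>   make
>>   make install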
>>
>>> Sorry if I sound too ignorant, but my I/O troubles have driven me up
>>> the wall.
>>
>> This is not an ignorant question at all.  Thanks for sending in the
>> report instead of just quitting in frustration.
>>
>> ==rob
>>
>> -- 
>> Rob Latham
>> Mathematics and Computer Science Division    A215 0178 EA2D B059 8CDF
>> Argonne National Lab, IL USA                 B29D F333 664A 4280 315B
>>
> 



