[MPICH] MPICH2 and PVFS2

Peter Diamessis pjd38 at cornell.edu
Fri Apr 13 15:25:22 CDT 2007


Scott,

I do stand corrected. MX has not been installed on our cluster
but a transition to it is planned. I'm unable to find any information
on the cluster webpage.

Peter

----- Original Message ----- 
From: "Scott Atchley" <atchley at myri.com>
To: "Darius Buntinas" <buntinas at mcs.anl.gov>
Cc: "Peter Diamessis" <pjd38 at cornell.edu>; "Robert Latham" <robl at mcs.anl.gov>; <mpich-discuss at mcs.anl.gov>
Sent: Friday, April 13, 2007 3:56 PM
Subject: Re: [MPICH] MPICH2 and PVFS2


> Darius,
> 
> I have not yet tested ch3:nemesis:mx on Myrinet-2000 hardware. It may
> be that the performance is sufficient.
> 
> Peter, which Myrinet-2000 cards do you have? Single-port D or F cards,
> or dual-port E cards? If you have MX loaded, simply run:
> 
> $ /opt/mx/bin/mx_info
> 
> If you have GM loaded, run:
> 
> $ /opt/gm/bin/gm_board_info
> 
> and send the results.
> 
> Scott
> 
> On Apr 13, 2007, at 3:28 PM, Darius Buntinas wrote:
> 
>>
>> MPICH2 does support MX, but performance has not been tuned yet.
>> Also, Myricom is actively working on their own MPICH2 port over MX,
>> so depending on how long your MX transition takes, that port may be
>> complete by the time you need it. In the meantime, you can try the
>> existing MX support, as sketched below.
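>>
>> A minimal sketch of the configure line (the include/lib flag names
>> are my assumption, mirroring the GM flags mentioned elsewhere in this
>> thread; check the MPICH2 README for the exact spelling):
>>
>> $ ./configure --with-device=ch3:nemesis:mx \
>>               --with-mx-include=/opt/mx/include \
>>               --with-mx-lib=/opt/mx/lib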
>>
>> Darius
>>
>> Peter Diamessis wrote:
>>> Hi Rob,
>>> Actually, one more thing: I did contact our system admin.
>>> He says he'll install MPICH2 but is concerned about whether it
>>> supports MX. The cluster is currently set up using GM, but they are
>>> transitioning to MX, and his understanding of your email is that MX
>>> is not supported by MPICH2. Would Open MPI then be the cluster's
>>> best option?
>>> Any thoughts here?
>>> Sincerely,
>>> Peter Diamessis
>>> ----- Original Message ----- From: "Robert Latham" <robl at mcs.anl.gov>
>>> To: "Peter Diamessis" <pjd38 at cornell.edu>
>>> Cc: <mpich-discuss at mcs.anl.gov>
>>> Sent: Friday, April 13, 2007 12:21 PM
>>> Subject: Re: [MPICH] MPICH2 and PVFS2
>>>> On Thu, Apr 12, 2007 at 11:51:27PM -0400, Peter Diamessis wrote:
>>>>> Apparently, MPICH v1.2.7..15 is still being used at the cluster.
>>>>> Our system manager has made an effort to build it so that ROMIO is
>>>>> compatible with PVFS2. Nonetheless, my problems persist. Would it
>>>>> simply be a better idea for them to shift the whole cluster to
>>>>> MPICH2, which I assume is more naturally compatible with PVFS2?
>>>>
>>>> Hi Peter
>>>>
>>>> First, I'm glad you're using MPI-IO to its full potential and seeing
>>>> good results.
>>>>
>>>> Second, yes, the PVFS v2 support in MPICH 1.2.7 is quite old: about
>>>> two years of bugfixes have gone into MPICH2-1.0.5p4. It'd be great if
>>>> you could run MPICH2 -- you might still find bugs, but we have fixed
>>>> a number of issues, particularly for noncontiguous I/O.
>>>>
>>>> Now, complicating the matter is that v1.2.7..15 is a version of MPICH
>>>> for the Myrinet interconnect. Your admin will know if you are using
>>>> GM or MX. Let your administrator know that you can build MPICH2 with
>>>> GM support by configuring it with the --with-device=ch3:nemesis:gm
>>>> flag. You might also need --with-gm-include=/path/to/include/gm if
>>>> your GM headers are in a different location.
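>>>>
>>>> For example, the whole configure line might look like this (the
>>>> paths are illustrative, and the PVFS2 flags assume you want ROMIO
>>>> built against your PVFS2 install; adjust to your site):
>>>>
>>>> $ ./configure --with-device=ch3:nemesis:gm \
>>>>               --with-gm-include=/opt/gm/include \
>>>>               --with-file-system=pvfs2+ufs+nfs \
>>>>               --with-pvfs2=/usr/local/pvfs2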
>>>>
>>>>> Sorry if I sound too ignorant, but my I/O troubles have driven me
>>>>> up the wall.
>>>>
>>>> This is not an ignorant question at all.  Thanks for sending in the
>>>> report instead of just quitting in frustration.
>>>>
>>>> ==rob
>>>>
>>>> -- 
>>>> Rob Latham
>>>> Mathematics and Computer Science Division    A215 0178 EA2D B059 8CDF
>>>> Argonne National Lab, IL USA                 B29D F333 664A 4280 315B
>>>>
>>
>



