[mpich-discuss] Basic MPICH questions when coming from OpenMPI background (SMP clusters)

Muhammad Atif m_atif_s at yahoo.com
Sun Feb 1 18:08:31 CST 2009


Thanks,
Yes, this is what I think as well. Or, in some cases, we can set up routing tables to do the same (which is faster than channel bonding).
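For reference, the routing-table alternative mentioned above might look roughly like this on Linux: give each NIC its own subnet and add a host route to the matching NIC address on the peer, so traffic to each peer address leaves on a distinct interface. This is only a sketch; the addresses and interface names are assumptions, and it must be mirrored on the peer node.

```shell
# Hypothetical addressing: eth0..eth3 on this node are 192.168.{1..4}.1,
# and the peer's four NICs are 192.168.{1..4}.2.
# One host route per peer address, each pinned to a distinct interface:
ip route add 192.168.1.2/32 dev eth0
ip route add 192.168.2.2/32 dev eth1
ip route add 192.168.3.2/32 dev eth2
ip route add 192.168.4.2/32 dev eth3
```

The MPI library then spreads load across interfaces only to the extent that different ranks connect to different peer addresses, which is the main limitation of this approach compared with bonding.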


 Best Regards,
Muhammad Atif




________________________________
From: Rajeev Thakur <thakur at mcs.anl.gov>
To: mpich-discuss at mcs.anl.gov
Sent: Friday, January 30, 2009 4:10:42 PM
Subject: Re: [mpich-discuss] Basic MPICH questions when coming from OpenMPI background (SMP clusters)

 
I think channel bonding may be the only good way to do it 
at present.
 
Rajeev
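For readers unfamiliar with it, Linux channel bonding (link aggregation) along the lines suggested above could be set up roughly as follows. This is a sketch only: the interface names, addresses, and bonding mode are assumptions, 802.3ad mode requires LACP support on the switch, and the era-appropriate ifenslave tool is used rather than newer iproute2 syntax.

```shell
# Load the bonding driver in 802.3ad (LACP) mode with link monitoring
modprobe bonding mode=802.3ad miimon=100
# Bring up the aggregate interface (address is a placeholder)
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
# Enslave the four physical GigE interfaces to bond0
ifenslave bond0 eth0 eth1 eth2 eth3
```

MPI then simply uses the single bond0 address, and the kernel distributes traffic across the physical links.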


________________________________
From: mpich-discuss-bounces at mcs.anl.gov [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Muhammad Atif
Sent: Tuesday, January 27, 2009 6:02 PM
To: mpich-discuss at mcs.anl.gov
Subject: [mpich-discuss] Basic MPICH questions when coming from OpenMPI background (SMP clusters)


Dear MPICH users/gurus,
I have some basic questions about using MPICH on SMP clusters with multiple GigE interfaces. Forgive me for asking such basic questions, as I am more familiar with OpenMPI. Also forgive me if this has been asked before; Google was not being my good friend.

Basically, what I want to ask is: if I have two quad-core machines and each machine has 4 GigE interfaces, what should I do to use all 4 GigE interfaces? Is channel bonding (a.k.a. link aggregation) the only option, or can I do it by setting up the routing tables? Which method is more optimal/preferred?
My requirement is that if I run a program with 8 processes, each process should use a distinct CPU and a distinct interface. Also, once processes are communicating within the same machine, they should use the shared-memory infrastructure.
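One way to get the distinct-CPU part of this on Linux, independent of the MPI library, is a small launcher wrapper that pins each local rank to its own core with taskset. This is a hypothetical sketch: it assumes the process manager exports a PMI_RANK environment variable (MPD-based MPICH2 does; verify on your installation) and a 4-core node, so local ranks map onto cores 0-3.

```shell
#!/bin/sh
# pin_rank.sh -- hypothetical wrapper: bind this MPI rank to one core.
# Assumes PMI_RANK is set by the process manager and 4 cores per node.
CORE=$(( ${PMI_RANK:-0} % 4 ))
exec taskset -c "$CORE" "$@"
```

It would be used as `mpiexec -n 8 ./pin_rank.sh ./my_app`, with the per-interface part still handled separately (bonding or routing).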

In the case of OpenMPI, I only have to specify a hostfile with slots and ask the BTL layer to use all four (or however many) interfaces. OMPI is able to route the packets accordingly using a mix of the SM and TCP BTLs.
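Concretely, the Open MPI setup described above looks roughly like this (hostnames and interface names are placeholders; the MCA parameters are standard Open MPI 1.x knobs):

```shell
# Hostfile: 4 slots per quad-core node
cat > hosts <<EOF
nodeA slots=4
nodeB slots=4
EOF

# Use shared memory within a node, TCP across nodes,
# and tell the TCP BTL about all four GigE interfaces.
mpirun -np 8 -hostfile hosts \
    --mca btl self,sm,tcp \
    --mca btl_tcp_if_include eth0,eth1,eth2,eth3 \
    ./my_app
```

With `btl_tcp_if_include`, Open MPI stripes inter-node traffic across the listed interfaces itself, which is why no bonding or routing tricks are needed on that side.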

I am using MPICH 1.0.7. 

 Best Regards,
Muhammad Atif 


      

