[mpich-discuss] Basic MPICH questions when coming from OpenMPI background (SMP clusters)

Daye Liu rosemont at gmail.com
Sun Feb 1 18:51:26 CST 2009


Just curious: how much performance gain is expected from using four
NICs on a quad-core node, considering that the four cores share the
same L2 cache?

On Sun, 2009-02-01 at 16:08 -0800, Muhammad Atif wrote:
>  
> Thanks,
> yeap! This is what I think as well... or in some cases we can set up
> routing tables to do the same (which is faster than channel bonding).
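
A minimal sketch of this routing-table approach, with hypothetical
interface names and addresses (none of these appear in the thread):
give each NIC its own subnet, and the kernel's connected routes will
then pin traffic for each peer subnet to one interface.

    # On node A; node B mirrors this with .2 host addresses.
    ifconfig eth0 192.168.0.1 netmask 255.255.255.0
    ifconfig eth1 192.168.1.1 netmask 255.255.255.0
    ifconfig eth2 192.168.2.1 netmask 255.255.255.0
    ifconfig eth3 192.168.3.1 netmask 255.255.255.0
    # Traffic to 192.168.N.x now leaves through ethN, so connections
    # made to different peer addresses are spread across the NICs.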
> 
>  
> Best Regards,
> Muhammad Atif
> 
> ______________________________________________________________________
> From: Rajeev Thakur <thakur at mcs.anl.gov>
> To: mpich-discuss at mcs.anl.gov
> Sent: Friday, January 30, 2009 4:10:42 PM
> Subject: Re: [mpich-discuss] Basic MPICH questions when coming from
> OpenMPI background (SMP clusters)
> 
> I think channel bonding may be the only good way to do it at present.
> 
> Rajeev
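
For reference, a minimal sketch of Linux channel bonding as typically
configured in this era; the interface names, the address, and the
choice of balance-rr mode are assumptions for illustration, not taken
from the thread.

    # Load the bonding driver; balance-rr stripes packets round-robin
    # across the slaves (802.3ad would instead need switch support).
    modprobe bonding mode=balance-rr miimon=100
    # Bring up the bond on a hypothetical address, then enslave the
    # four physical GigE interfaces to it.
    ifconfig bond0 192.168.0.1 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1 eth2 eth3

MPI then sees a single bond0 interface, so nothing MPICH-specific
needs to be configured.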
> 
>         ______________________________________________________________
>         From: mpich-discuss-bounces at mcs.anl.gov
>         [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of
>         Muhammad Atif
>         Sent: Tuesday, January 27, 2009 6:02 PM
>         To: mpich-discuss at mcs.anl.gov
>         Subject: [mpich-discuss] Basic MPICH questions when coming
>         from OpenMPI background (SMP clusters)
> 
>         Dear MPICH users/gurus,
>         I have some basic questions about using MPICH on SMP
>         clusters with multiple GigE interfaces. Forgive me for
>         asking such basic questions, as I am more familiar with
>         OpenMPI, and forgive me if there was already such a post;
>         Google was not being my good friend.
>         
>         Basically, what I want to ask is: if I have two quad-core
>         machines and each machine has 4 GigE interfaces, what should
>         I do to use all 4 GigE interfaces? Is channel bonding
>         (a.k.a. link aggregation) the only option, or can I do it by
>         setting up routing tables? Which method is preferred?
>         My requirement is that if I run a program with 8 processes,
>         each process should use a distinct CPU and a distinct
>         interface. Also, processes communicating within the same
>         machine should use the shared-memory infrastructure.
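
For context on the MPICH side: the MPD-based mpiexec of that
generation lets a machinefile carry an ifhn= field naming the
interface a host should communicate over (the hostnames and addresses
below are hypothetical, and the ifhn= syntax is as I read the MPD
docs). Note that this selects one interface per host, which is why
using all four falls back to bonding or routing tricks.

    # Hypothetical machinefile "mf": 4 ranks per node, each node's
    # MPI traffic pinned to one interface via ifhn.
    node01:4 ifhn=192.168.0.1
    node02:4 ifhn=192.168.0.2

    mpiexec -machinefile mf -n 8 ./a.out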
>         
>         In the case of OpenMPI, I only have to specify a hostfile
>         with slots and ask the BTL layer to use all four (or however
>         many) interfaces. OMPI is then able to route the packets
>         accordingly using a mix of the SM and TCP BTLs.
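
For comparison, a sketch of that OpenMPI invocation; the node names,
interface names, and the binary are hypothetical, while the MCA
parameters are the standard sm/tcp BTL knobs.

    # hostfile (hypothetical):
    #   node01 slots=4
    #   node02 slots=4
    mpirun -np 8 --hostfile hostfile \
        --mca btl sm,tcp,self \
        --mca btl_tcp_if_include eth0,eth1,eth2,eth3 ./myapp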
>         
>         I am using MPICH 1.0.7. 
>          
>         Best Regards,
>         Muhammad Atif 


