[mpich-discuss] MPICH on multiple NICs, and how to choose the best computing node

Diego M. Vadell dvadell at linuxclusters.com.ar
Tue May 20 16:25:10 CDT 2008


Hi,
   It looks to me like you are describing "bonding"
(http://www.linux-corner.info/bonding.html). Would that be good enough for
your purposes?
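
In case it helps, a minimal bonding setup on each node might look roughly
like the sketch below. The interface names, address, and bonding mode are
just assumed examples; check your distribution's network scripts for the
persistent way to configure this:

    # Load the bonding driver; balance-rr stripes packets across the slave
    # NICs, and miimon=100 polls link state every 100 ms (assumed values).
    modprobe bonding mode=balance-rr miimon=100

    # Bring up the bonded interface and enslave the two physical NICs.
    # bond0, eth0/eth1, and the address are placeholders for your setup.
    ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1

MPICH itself should not need any special configuration for this: as long as
the node names in your machinefile resolve to the bond0 addresses, the MPI
traffic should go over the bonded link.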

Regards,
 -- Diego. 

On Tuesday 20 May 2008 17:49:05 Gus Correa wrote:
> Hello MPICH experts
>
> 1) I wonder whether MPICH2 (and/or MPICH-1) works with multiple
> (Gigabit) Ethernet NICs installed on each compute node of a Linux/Beowulf
> cluster.
>
> The naive idea here would be to increase network bandwidth and reduce
> contention by using multiple NICs.
> Network contention may become an issue in the typical "eight-core
> compute node" that is now popular on the market
> (two-socket quad-core nodes).
>
> I will need to buy a replacement for our small, old cluster soon,
> and I don't know what would be the most cost-effective choice for running
> our MPICH-based code: two-socket quad-core nodes, two-socket dual-core
> nodes, or something else (single-core processors seem to be an extinct
> species).
>
> Of course, the price of one two-socket quad-core node is significantly
> less than the price of a couple of two-socket dual-core nodes.
> However, it seems to me that MPICH may not be as efficient with
> eight cores sharing a single NIC as it would be on a two-socket dual-core
> node (four cores sharing a NIC).
> I don't know where the optimal price/performance point lies.
>
> 2) If MPICH does work with multiple NICs,
> are there guidelines, documentation, or a recipe for setting up
> such a scheme?
>
> 3) Any other suggestions for avoiding or reducing MPICH network contention
> on multicore/multiprocessor compute nodes?
> (Using OpenMP within the nodes is not so convenient, as it requires
> intensive re-coding, and is not particularly simple or efficient in large
> programs. Buying InfiniBand, Myrinet, etc., is not possible on our
> shoestring budget, so I have to stick with, and be happy with,
> Gigabit Ethernet.)
>
> If this question has been discussed before, as is likely the case,
> please just point me to the right thread in the mailing list.
>
> Many thanks,
> Gus Correa




