I think channel bonding may be the only good way to do it at present.
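
A rough sketch of what I mean, on a Linux node with the in-kernel bonding driver (the interface names, the address, and the balance-rr mode are placeholders; the exact setup varies by distro and kernel):

    # load the bonding driver in round-robin mode (placeholder parameters)
    modprobe bonding mode=balance-rr miimon=100

    # bring up bond0 and enslave the four GigE NICs (address is a placeholder)
    ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1 eth2 eth3

MPI then sees bond0 as a single logical interface, so nothing MPICH-specific needs to change.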

Rajeev

-----Original Message-----
From: mpich-discuss-bounces@mcs.anl.gov [mailto:mpich-discuss-bounces@mcs.anl.gov] On Behalf Of Muhammad Atif
Sent: Tuesday, January 27, 2009 6:02 PM
To: mpich-discuss@mcs.anl.gov
Subject: [mpich-discuss] Basic MPICH questions when coming from OpenMPI background (SMP clusters)
Dear MPICH users/gurus,

I have some basic questions about MPICH on SMP clusters that use multiple GigE interfaces. Forgive me for asking such basic questions, as I am more familiar with OpenMPI; forgive me also if this has already been asked, as Google was not much help.

Basically, what I want to ask is: if I have two quad-core machines and each machine has 4 GigE interfaces, what should I do to use all 4 GigE interfaces? Is channel bonding (a.k.a. link aggregation) the only option, or can I do it by setting up the routing tables? Which method is preferable?

My requirement is that if I run a program with 8 processes, each process should use a distinct CPU and a distinct interface. Also, processes communicating within the same machine should use shared memory.

In the case of OpenMPI, I only have to specify a hostfile with slots and ask the BTL to use all four (or however many) interfaces; OMPI then routes the traffic accordingly using a mix of the sm and tcp BTLs, along the lines of the sketch below.

I am using MPICH 1.0.7.
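
To make the comparison concrete, here is roughly what I mean on the OpenMPI side (the hostnames, interface names, and binary name are placeholders; the MCA flags are standard Open MPI 1.x options):

    # hostfile: one line per node, four slots each
    node01 slots=4
    node02 slots=4

    # 8 ranks: shared memory within a node, all four GigE links for TCP
    # between nodes; mpi_paffinity_alone pins each rank to its own core
    mpirun -np 8 --hostfile hostfile \
           --mca btl sm,tcp,self \
           --mca btl_tcp_if_include eth0,eth1,eth2,eth3 \
           --mca mpi_paffinity_alone 1 \
           ./my_mpi_program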

Best Regards,
Muhammad Atif