Thanks, yes! This is what I think as well... or in some cases we can set up routing tables to do the same (which is faster than channel bonding).

Best Regards,
Muhammad Atif

________________________________
From: Rajeev Thakur <thakur@mcs.anl.gov>
To: mpich-discuss@mcs.anl.gov
Sent: Friday, January 30, 2009 4:10:42 PM
Subject: Re: [mpich-discuss] Basic MPICH questions when coming from OpenMPI background (SMP clusters)
<div dir="ltr" align="left"><span class="335101005-30012009"><font size="2" color="#0000ff" face="Arial">I think channel bonding may be the only good way to do it
at present.</font></span></div>
<div dir="ltr" align="left"><span class="335101005-30012009"><font size="2" color="#0000ff" face="Arial"></font></span> </div>
<div dir="ltr" align="left"><span class="335101005-30012009"><font size="2" color="#0000ff" face="Arial">Rajeev</font></span></div><br>
________________________________
From: mpich-discuss-bounces@mcs.anl.gov [mailto:mpich-discuss-bounces@mcs.anl.gov] On Behalf Of Muhammad Atif
Sent: Tuesday, January 27, 2009 6:02 PM
To: mpich-discuss@mcs.anl.gov
Subject: [mpich-discuss] Basic MPICH questions when coming from OpenMPI background (SMP clusters)
<div style="font-size: 10pt; font-family: times new roman,new york,times,serif;">Dear
MPICH users/gurus<br>I have got basic questions regarding MPICH with regards
to SMP clusters using multiple GigE interfaces. Forgive me for asking such
basic questions as I am more familiar with OpenMPI. Also forgive me if there
was any such post, as google was not being my good friend.<br><br>Basically
what I want to ask is; if I have two quard core machines and each
machine has 4 GigE interfaces, what should I do to use all 4 GigE interfaces.
Is channel bonding (a.k.a Link aggregration) the only option? Or can I do with
by setting up the routing tables? What method is more
optimal/prefered.<br>My requirement is that if I run a program with 8
processes, I want each process to use a distinct CPU and distinct interface.
Also, once the processes are communicating within the machine they should use
Shared memory infrastructure.<br><br>In case of OpenMPI, I only have to
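By setting up routing tables I mean something along these lines (the addresses and interface names below are made up): give each NIC on a node its own subnet, so the kernel already picks a different link depending on which peer address is used, for example:

    # node1
    ifconfig eth0 10.0.0.1 netmask 255.255.255.0
    ifconfig eth1 10.0.1.1 netmask 255.255.255.0
    ifconfig eth2 10.0.2.1 netmask 255.255.255.0
    ifconfig eth3 10.0.3.1 netmask 255.255.255.0
    # node2 uses 10.0.0.2 ... 10.0.3.2 on its eth0-eth3; if needed,
    # an explicit host route can pin a peer address to a link:
    ip route add 10.0.1.2 dev eth1

The part I am unsure about is how to make MPICH spread its connections across those four peer addresses.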
In the case of OpenMPI, I only have to specify a hostfile with slots and ask the btl to use all four (or however many) interfaces; OMPI is then able to route the packets accordingly using a mix of the SM and TCP BTLs.
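Roughly like this, for the record (hostnames and interface names are placeholders, and MCA parameter names may differ slightly between OpenMPI versions):

    # hostfile
    node1 slots=4
    node2 slots=4

    # use shared memory within a node and all four GigE links between nodes
    mpirun -np 8 --hostfile hostfile \
        --mca btl sm,self,tcp \
        --mca btl_tcp_if_include eth0,eth1,eth2,eth3 ./my_app

With that, intra-node pairs talk over the sm BTL and inter-node traffic is spread over the listed TCP interfaces; I am looking for the equivalent recipe in MPICH.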
I am using MPICH 1.0.7.
Best Regards,
Muhammad Atif