<table cellspacing="0" cellpadding="0" border="0" ><tr><td valign="top" style="font: inherit;"><DIV>What are the CPUs on these two boxes?</DIV>
<DIV> </DIV>
<DIV>tan</DIV>
<DIV><BR><BR>--- On <B>Tue, 1/13/09, Gustavo Miranda Teixeira <I><magusbr@gmail.com></I></B> wrote:<BR></DIV>
<BLOCKQUOTE style="PADDING-LEFT: 5px; MARGIN-LEFT: 5px; BORDER-LEFT: rgb(16,16,255) 2px solid">From: Gustavo Miranda Teixeira <magusbr@gmail.com><BR>Subject: Re: [mpich-discuss] MPICH v1.2.7p1 and SMP clusters<BR>To: mpich-discuss@mcs.anl.gov<BR>Date: Tuesday, January 13, 2009, 9:37 AM<BR><BR>
<DIV id=yiv1905992840>Hello Marcus,<BR><BR>I'm not sure I see your point. Do you want to know which cores the processes are allocated to? That is, whether they are on the same processor or on different ones?<BR><BR><BR>
<DIV class=gmail_quote>On Tue, Jan 13, 2009 at 3:06 PM, Marcus Vinicius Brandão Soares <SPAN dir=ltr><<A href="mailto:mvbsoares@gmail.com" target=_blank rel=nofollow>mvbsoares@gmail.com</A>></SPAN> wrote:<BR>
<BLOCKQUOTE class=gmail_quote style="PADDING-LEFT: 1ex; MARGIN: 0pt 0pt 0pt 0.8ex; BORDER-LEFT: rgb(204,204,204) 1px solid">Hello Gustavo and all,<BR><BR>You said you are using two machines, each with a dual processor. If I model that as a simple graph, we have two vertices and two unidirectional edges.<BR><BR>Each machine has two processors, each of them dual-core, so there are 8 cores in total. Thinking in the graph model again: we have two vertices, each connected to two more vertices, and each of those to two more, and there the tree ends.<BR><BR>Do you know the structure of the communication links between the processor cores?<BR><BR>
<DIV class=gmail_quote>2009/1/13 Gustavo Miranda Teixeira <SPAN dir=ltr><<A href="mailto:magusbr@gmail.com" target=_blank rel=nofollow>magusbr@gmail.com</A>></SPAN>
<DIV>
<DIV></DIV>
<DIV class=Wj3C7c><BR>
<BLOCKQUOTE class=gmail_quote style="PADDING-LEFT: 1ex; MARGIN: 0pt 0pt 0pt 0.8ex; BORDER-LEFT: rgb(204,204,204) 1px solid">Hello everyone!<BR><BR>I've been experiencing some issues with MPICH v1.2.7p1 on an SMP cluster, and I thought maybe someone here can help.<BR><BR>I have a small cluster of two dual-processor machines connected by gigabit Ethernet. Each processor is dual-core, for a total of 8 cores. When I run an application with 4 processes spread across both machines (2 processes on each), I get significantly better performance than when I run the same application with all 4 processes on one machine. Isn't that a bit curious? I know other people who have noticed this too, but no one can explain to me why it happens, and googling didn't help either. I originally thought it was a problem with my kind of application (a heart simulator that uses PETSc to solve some differential equations), but some simple experiments showed that a plain MPI_Send inside a huge loop causes the same issue. Measuring cache hits and misses showed it's not a memory-contention problem. I also know that in-node communication in MPICH uses the loopback interface, but as far as I know a message sent over loopback simply takes a shortcut to the input queue instead of going out to the device, so there is no reason for it to take longer to reach the other processes. So I have no idea why MPICH is slower within a single machine. Has anyone else noticed this? Is there a logical explanation for it?<BR><BR>Thanks,<BR><FONT color=#888888>Gustavo Miranda Teixeira<BR></FONT></BLOCKQUOTE></DIV></DIV></DIV><BR><BR clear=all><BR>-- <BR>Marcus Vinicius<BR>--<BR>"Given enough collaborators,<BR>any problem can be solved."<BR>Eric S. Raymond<BR>The Cathedral and the Bazaar<BR><BR>"The past is just a resource for the present."<BR>Clave de Clau<BR><BR>"No one is so poor that they cannot give a hug; and<BR>no one is so rich that they do not need a hug."<BR>Anonymous<BR></BLOCKQUOTE></DIV><BR></DIV></BLOCKQUOTE></td></tr></table><br>
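The "MPI_Send inside a huge loop" experiment Gustavo mentions could be reproduced with a minimal sketch like the one below. This is not his original benchmark: the message size, iteration count, and even/odd rank pairing are illustrative assumptions, chosen only to exercise point-to-point sends the same way.

```c
/* Minimal sketch of a tight MPI_Send loop between paired ranks.
 * Message size and iteration count are illustrative, not taken
 * from the original benchmark. Build with mpicc, run with e.g.
 * mpirun -np 4 -machinefile hosts ./sendloop */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank, size;
    const int iters = 100000;
    const int msg_len = 1024;          /* 1 KiB payload */
    char *buf;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    buf = malloc(msg_len);
    memset(buf, 0, msg_len);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();

    /* Ranks are paired (0->1, 2->3, ...): even ranks send,
     * odd ranks receive, in a tight loop. */
    for (int i = 0; i < iters; i++) {
        if (rank % 2 == 0 && rank + 1 < size)
            MPI_Send(buf, msg_len, MPI_CHAR, rank + 1, 0, MPI_COMM_WORLD);
        else if (rank % 2 == 1)
            MPI_Recv(buf, msg_len, MPI_CHAR, rank - 1, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    t1 = MPI_Wtime();
    if (rank == 0)
        printf("%d sends of %d bytes took %.3f s\n", iters, msg_len, t1 - t0);

    free(buf);
    MPI_Finalize();
    return 0;
}
```

Running this once with all four processes placed on one node and once with them spread across both nodes (via the machinefile) would show whether the slowdown reproduces independently of PETSc and the heart-simulator code.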