Hi, <br><br>I recently built a Rocks cluster of 10 computing nodes with dual Intel quad-core CPUs (Harpertown). The nodes are connected over gigabit Ethernet. <br><br>The problem is that I get bad scalability on the benchmark test of Cactus, a well-known code for numerical relativity. This might be expected with a gigabit network. However, I suspect I am missing something important, or that it is a generic problem of clusters of multicore CPUs, because the benchmark problem does not require massive communication between the computing nodes.<br>
<br>I tested Open MPI, MPICH, and MPICH2, and found no significant difference between Open MPI and MPICH2. MPICH2 was configured with the ssm channel. After reading some documents on nemesis and multicore CPUs, I also tested the nemesis channel, but got worse results than with ssm. <br>
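For reference, this is roughly how I built the two MPICH2 channels I compared (the install prefixes are just examples):

```shell
# Build MPICH2 with the ssm channel (shared memory within a node, sockets between nodes)
./configure --prefix=/opt/mpich2-ssm --with-device=ch3:ssm
make && make install

# Build MPICH2 with the nemesis channel
./configure --prefix=/opt/mpich2-nemesis --with-device=ch3:nemesis
make && make install
```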
<br>- Is there any optimized configuration option for multicore CPUs like Harpertown? <br>- Could it only be improved with InfiniBand, Myrinet, etc.?<br>- If the gigabit network is the cause, could it be improved with Open-MX?<br>
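To get a feeling for whether gigabit latency and bandwidth alone could explain the poor scaling, I sketched a toy efficiency model. All numbers here are made-up placeholders (per-step compute time, message count and size, and typical latency/bandwidth figures), not measurements from my cluster:

```python
# Back-of-the-envelope parallel efficiency model (illustrative numbers only).
# Assume each node computes for t_comp seconds per step, then exchanges
# n_msgs boundary messages of m_bytes each, with no compute/comm overlap.

def efficiency(t_comp, n_msgs, m_bytes, latency, bandwidth):
    """Fraction of ideal speedup retained per step."""
    t_comm = n_msgs * (latency + m_bytes / bandwidth)
    return t_comp / (t_comp + t_comm)

# Hypothetical workload: 10 ms compute per step, 6 messages of 64 KiB.
gigabit    = efficiency(0.010, 6, 65536, 50e-6, 110e6)  # ~50 us latency, ~110 MB/s
infiniband = efficiency(0.010, 6, 65536, 5e-6, 1.0e9)   # ~5 us latency, ~1 GB/s

print(f"gigabit efficiency:    {gigabit:.2f}")
print(f"infiniband efficiency: {infiniband:.2f}")
```

Under these assumed numbers the gigabit interconnect already loses a sizable fraction of the ideal speedup per step, so a lower-latency fabric would clearly help; but if the real benchmark communicates much less than this, the bottleneck may be elsewhere (e.g. memory bandwidth contention between the eight cores per node).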
<br>I'm a newbie in this field, so my questions may not be clear. I appreciate any help in advance.<br><br>Kim, Hee Il<br>