[mpich-discuss] General Scalability Question
balaji at mcs.anl.gov
Mon Oct 26 10:46:01 CDT 2009
On 10/26/2009 10:30 AM, Robertson, Andrew wrote:
> Assuming the same memory per core, am I better off with
> high core count (12-16) boxes on a gigabit switch, or
> lower core count (2-4) boxes on an InfiniBand switch?
Is there that much difference in price? You might be able to get a
one-generation-older IB adapter (ConnectX DDR, single port) more
cheaply. If you go one more generation back (InfiniHost III), it would
come onboard the motherboard as well (ConnectX is probably available
onboard now too).
Anyway, 12-16 boxes with Gigabit Ethernet would certainly be better than
2-4 boxes with IB for most applications, assuming that the system and
the network are configured and optimized properly. However, if the
difference is more like 12-16 vs. 10-12 boxes, then it's a closer call.
> I understand that if I configure mpich correctly it will use shared
> memory on the multi-core, multi-processor boxes. If I end up with the
> high core count boxes, should I spec the frontside bus (or whatever it
> is called now) as high as possible?
This is only a second or third order issue as far as MPI is concerned.
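For reference, MPICH2's Nemesis channel handles the shared-memory case
automatically: ranks on the same node communicate through shared memory,
while inter-node traffic goes over the network. A rough sketch of the
build and launch steps (the install prefix and hostfile name here are
just placeholders, not anything from this thread):

```shell
# Build MPICH2 with the Nemesis channel, which uses shared memory for
# intra-node communication and TCP between nodes. Nemesis is the default
# channel in recent MPICH2 releases, so this flag is often redundant.
# /opt/mpich2 is an example prefix; adjust to taste.
./configure --with-device=ch3:nemesis --prefix=/opt/mpich2
make
make install

# Launch across the cluster. Ranks placed on the same host talk through
# shared memory automatically; no extra flags are required for that.
# "hostfile" lists one machine name per line.
mpiexec -n 16 -f hostfile ./my_app
```

No application changes are needed; the shared-memory path is selected
at runtime based on rank placement.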
> I also have concerns that a single power supply failure takes out more
> cores, though perhaps that is not such a problem
True, but you are really looking at a small cluster (16 nodes), so is
this really a big issue?