Multi-core OS X machines

Randall Mackie randy at geosystem.us
Tue Nov 13 11:01:47 CST 2007


We have a 64-node cluster, each node being a quad-core Intel Xeon chip,
so we have a total of 256 CPUs. I'm not quite sure of the chip architecture
and the memory paths.

With InfiniBand, each CPU can run at full 100% utilization during a PETSc execution.

The key for us was the InfiniBand and the special MPI that is tuned
for InfiniBand - without them (i.e., using plain MPICH), performance
was much worse.

Randy M.

Barry Smith wrote:
> 
>   It depends on how the memory is connected to the individual cores or CPUs;
> for example, AMD has a different approach than Intel. If the 
> different processors/cores
> have SEPARATE paths to memory then you will not see this terrible effect.
> 
>    Barry
> 
> 
> 
> On Nov 13, 2007, at 10:23 AM, Gideon Simpson wrote:
> 
>> Is this also true for a multi-processor machine, or is it unique to 
>> multi-core machines?
>> -gideon
>>
>> On Nov 13, 2007, at 11:14 AM, Barry Smith wrote:
>>
>>>
>>>   Not possible. The problem is that with one process it uses all the 
>>> memory
>>> bandwidth; when you change to use 2 processes (2 cores), each core
>>> now gets only half the memory bandwidth and hence runs at essentially
>>> half the speed.
>>>
>>>    Barry
>>>
>>>
>>> On Nov 13, 2007, at 10:06 AM, Gideon Simpson wrote:
>>>
>>>> Has anyone had any success in getting good performance on multi-core 
>>>> Intel OS X machines with PETSc?  What's the right way to get MPICH 
>>>> up and running for such a thing?
>>>>
>>>> -Gideon Simpson
>>>>  Department of Applied Physics and Applied Mathematics
>>>>  Columbia University
>>>>
>>>>
>>>
>>
> 

-- 
Randall Mackie
GSY-USA, Inc.
PMB# 643
2261 Market St.,
San Francisco, CA 94114-1600
Tel (415) 469-8649
Fax (415) 469-5044

California Registered Geophysicist
License No. GP 1034




More information about the petsc-users mailing list