[mpich-discuss] MPICH2 (or MPI_Init) limitation | scalability
Bernard Chambon
bernard.chambon at cc.in2p3.fr
Thu Jan 12 05:31:58 CST 2012
Hello,
Good news, it works!
With your patch I can run 255 tasks, and perhaps more, without any special
configuration on the machine (*).
This is very good news for me. Thank you very much, Darius.
>mpiexec -np 255 bin/advance_test
Running MPI version 2, subversion 2
I am the master task 0 on ccwpge0062; with 254 slave tasks, we will exchange a buffer of 10 MB
slave number 1, iteration = 1
slave number 2, iteration = 1
slave number 3, iteration = 1
...
slave number 254, iteration = 1
slave number 1, iteration = 2
slave number 2, iteration = 2
...
slave number 254, iteration = 7
Now I will try to run jobs (through GridEngine + Hydra) with higher numbers of tasks,
and I will let you know whether it works.
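For readers following along, the exchange pattern shown in the output above can be sketched roughly as follows. This is a minimal sketch, not the actual advance_test source: the buffer size, iteration count, message tag, and master-side receive loop are all assumptions made for illustration.

```c
/* Minimal master/slave sketch resembling the run above.
 * NOT the real advance_test source: buffer size, iteration
 * count, and tags are assumptions for illustration only. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define BUF_MB     10
#define ITERATIONS 7

int main(int argc, char **argv)
{
    int rank, size, version, subversion;
    size_t bufsize = (size_t)BUF_MB * 1024 * 1024;
    char *buf = malloc(bufsize);

    MPI_Init(&argc, &argv);
    MPI_Get_version(&version, &subversion);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        printf("Running MPI version %d, subversion %d\n", version, subversion);
        for (int iter = 1; iter <= ITERATIONS; iter++) {
            for (int slave = 1; slave < size; slave++) {
                /* Master receives the buffer from each slave in turn. */
                MPI_Recv(buf, (int)bufsize, MPI_CHAR, slave, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                printf("slave number %d, iteration = %d\n", slave, iter);
            }
        }
    } else {
        /* Each slave sends its buffer to the master once per iteration. */
        for (int iter = 1; iter <= ITERATIONS; iter++)
            MPI_Send(buf, (int)bufsize, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```

Built and launched in the usual way, e.g. "mpicc sketch.c -o sketch" then "mpiexec -np 255 ./sketch".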
Best regards
On 11 January 2012, at 18:49, Darius Buntinas wrote:
> I think I found the problem. Apply this patch (using "patch -p0 < seg_sz.patch"), then "make clean; make; make install", and try it again. Make sure to relink your application.
>
> Let us know if this works.
>
> Thanks,
> -d
PS: (*)
>limit
cputime unlimited
filesize unlimited
datasize unlimited
stacksize 10240 kbytes
coredumpsize unlimited
memoryuse unlimited
vmemoryuse unlimited
descriptors 1024
memorylocked 32 kbytes
maxproc 409600
>sysctl -A | egrep "shm"
vm.hugetlb_shm_group = 0
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.shmmax = 33554432
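As a rough sanity check on those defaults (my own interpretation, not taken from the MPICH2 sources): kernel.shmmax caps a single System V shared-memory segment at 32 MB, and descriptors caps each process at 1024 open file descriptors. Both matter once hundreds of ranks share one node:

```python
# Rough arithmetic on the limits listed above (interpretation /
# assumption, not derived from the MPICH2 sources).
shmmax = 33554432      # kernel.shmmax: max bytes in one SysV shm segment
descriptors = 1024     # per-process open file descriptor limit
ntasks = 255           # ranks in the run above

print(shmmax // (1024 * 1024))   # 32 -> 32 MB per shm segment
# A socket-per-peer design would need roughly one descriptor per peer:
print(ntasks - 1 < descriptors)  # True -> 254 peers fit under 1024
```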
---------------
Bernard CHAMBON
IN2P3 / CNRS
04 72 69 42 18