[mpich-discuss] MPICH2 (or MPI_Init) limitation | scalability
Darius Buntinas
buntinas at mcs.anl.gov
Thu Jan 12 11:07:06 CST 2012
Great, I'm glad it worked. BTW, the kernel.shmall et al. parameters are used for System V shared memory, so they'll have no effect when using mmap-ed shared memory, which is how MPICH is configured by default.
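For illustration only, here is a minimal standalone sketch (not MPICH source code) contrasting the two mechanisms; on a machine with the kernel.shm* values quoted further down in this message, the System V request fails while an mmap-ed segment of the same size succeeds:

/* Illustration only -- not MPICH code. Contrasts the two shared-memory
 * mechanisms mentioned above: a System V segment (bounded by
 * kernel.shmmax/shmall/shmmni) and a file-backed mmap-ed segment of the
 * same size (not bounded by those sysctls). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define SEG_SIZE (64 * 1024 * 1024)  /* 64 MB, above the 32 MB shmmax shown below */

int main(void)
{
    /* System V shared memory: with kernel.shmmax = 33554432 this fails (EINVAL). */
    int shmid = shmget(IPC_PRIVATE, SEG_SIZE, IPC_CREAT | 0600);
    if (shmid == -1)
        perror("shmget (System V, limited by kernel.shm*)");
    else
        shmctl(shmid, IPC_RMID, NULL);            /* remove it right away */

    /* mmap-ed shared memory backed by a temporary file: the kernel.shm*
     * settings do not apply here. */
    char path[] = "/tmp/shm_demo_XXXXXX";
    int fd = mkstemp(path);
    if (fd == -1) { perror("mkstemp"); return 1; }
    unlink(path);                                  /* gone when fd is closed */
    if (ftruncate(fd, SEG_SIZE) == -1) { perror("ftruncate"); return 1; }

    void *buf = mmap(NULL, SEG_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    memset(buf, 0, SEG_SIZE);                      /* touch the pages */
    printf("mmap-ed segment of %d bytes created without hitting kernel.shm*\n",
           SEG_SIZE);

    munmap(buf, SEG_SIZE);
    close(fd);
    return 0;
}

Compiling this with "cc shm_demo.c -o shm_demo" and running it on the node in question shows which mechanism is actually being constrained by the sysctls.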
-d
On Jan 12, 2012, at 5:31 AM, Bernard Chambon wrote:
> Hello,
>
> Good news, it works!
> With your patch I can run 255 tasks, and perhaps more, without any special
> configuration on the machine (*)
>
>
> It's very good news for me. Thank you very much, Darius.
>
> >mpiexec -np 255 bin/advance_test
> Running MPI version 2, subversion 2
> I am the master task 0 on ccwpge0062, for 254 slave tasks, we will exchange a buffer of 10 MB
>
> slave number 1, iteration = 1
> slave number 2, iteration = 1
> slave number 3, iteration = 1
> ...
> slave number 254, iteration = 1
> slave number 1, iteration = 2
> slave number 2, iteration = 2
> ...
>
> slave number 254, iteration = 7
>
>
> Now I will try to run jobs (through GridEngine + Hydra) with higher numbers of tasks
> and I will let you know if it's OK
>
> Best regards
>
>
> On Jan 11, 2012, at 6:49 PM, Darius Buntinas wrote:
>
>> I think I found the problem. Apply this patch (using "patch -p0 < seg_sz.patch"), then "make clean; make; make install", and try it again. Make sure to relink your application.
>>
>> Let us know if this works.
>>
>> Thanks,
>> -d
>
> PS :
> (*)
> >limit
> cputime unlimited
> filesize unlimited
> datasize unlimited
> stacksize 10240 kbytes
> coredumpsize unlimited
> memoryuse unlimited
> vmemoryuse unlimited
> descriptors 1024
> memorylocked 32 kbytes
> maxproc 409600
>
> >sysctl -A | egrep "shm"
> vm.hugetlb_shm_group = 0
> kernel.shmmni = 4096
> kernel.shmall = 2097152
> kernel.shmmax = 33554432
>
> ---------------
> Bernard CHAMBON
> IN2P3 / CNRS
> 04 72 69 42 18
>
> _______________________________________________
> mpich-discuss mailing list mpich-discuss at mcs.anl.gov
> To manage subscription options or unsubscribe:
> https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss