[mpich-discuss] MPICH2 (or MPI_Init) limitation | scalability

Bernard Chambon bernard.chambon at cc.in2p3.fr
Wed Jan 18 08:39:35 CST 2012


Hello,

On 12 Jan 2012, at 18:07, Darius Buntinas wrote:

> Great, I'm glad it worked.  BTW, the kernel.shmall et al. parameters are used for System V shared memory, so they'll have no effect when using mmap-ed shared memory, which is how MPICH is configured by default.
> 
> -d
> 
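
(As an aside, a quick way to inspect those System V limits on a Linux node is the sysctl line below; as you say, they should be irrelevant here since MPICH uses mmap-ed shared memory by default:

  sysctl kernel.shmall kernel.shmmax kernel.shmmni
)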

Now that your patch has solved the shared-memory failure, I have run into a new failure when specifying the -iface option.

With a minimal piece of code (*), with -iface I always get an assertion error like this one:
>mpiexec -iface eth0 -n 150 bin/basic_test
[mpiexec at ccwpge0061] control_cb (./pm/pmiserv/pmiserv_cb.c:215): assert (!closed) failed
[mpiexec at ccwpge0061] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
[mpiexec at ccwpge0061] HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:181): error waiting for event
[mpiexec at ccwpge0061] main (./ui/mpich/mpiexec.c:405): process manager error waiting for completion

 without -iface, it always works:
> mpiexec -n 150 bin/basic_test   : OK 100% of the time (without -iface)

I am puzzled by this, because it means I cannot offer the 10 Gb/s interface (-iface eth2) to our customers.

Best regards


(*) minimal code :
  #include <stdio.h>
  #include <mpi.h>

  int main(void)
  {
      /* just initialize and finalize MPI, nothing else */
      if (MPI_Init(NULL, NULL) != MPI_SUCCESS) {
          printf("Error calling MPI_Init !!, => exiting\n"); fflush(stdout);
          return 1;
      } else {
          MPI_Finalize();
          printf("It's OK\n"); fflush(stdout);
          return 0;
      }
  }
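
For completeness, a sketch of how this test is built and launched (the mpicc wrapper is assumed to be the one from the same MPICH2 installation, and basic_test.c is just an illustrative file name):

  mpicc basic_test.c -o bin/basic_test
  mpiexec -n 150 bin/basic_test               # OK every time
  mpiexec -iface eth0 -n 150 bin/basic_test   # assert (!closed) failure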


> On Jan 12, 2012, at 5:31 AM, Bernard Chambon wrote:
> 
>> Hello,
>> 
>> Good news, it works!
>> With your patch I can run 255 tasks, and perhaps more, without any special
>> configuration on the machine (*)
>> 
>> 
> 

---------------
Bernard CHAMBON
IN2P3 / CNRS
04 72 69 42 18

