[MPICH] Is there a limitation with MPI_Comm_spawn_multiple

Rajeev Thakur thakur at mcs.anl.gov
Wed Apr 5 22:41:28 CDT 2006


The line below should be sizeof(char *) instead of sizeof(char)

>>        commands=(char**)malloc(sizeof(char)*num);
 
Try it with that change.
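
Something like the following (a minimal sketch reusing the variables from
your posted code) allocates room for num pointers rather than num bytes:

    /* sizeof(char) * 8 is only 8 bytes; 8 char * pointers need
       8 * sizeof(char *) bytes (64 on a typical 64-bit system). */
    commands = (char **) malloc(sizeof(char *) * num);
    for (k = 0; k < num; k++)
        commands[k] = "./hello";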

Rajeev


> -----Original Message-----
> From: Rajesh Sudarsan [mailto:rajesh.sudarsan at gmail.com] 
> Sent: Wednesday, April 05, 2006 10:29 PM
> To: Rajeev Thakur
> Cc: mpich-discuss at mcs.anl.gov
> Subject: Re: [MPICH] Is there a limitation with MPI_Comm_spawn_multiple
> 
> Hi Rajeev,
> Thanks for the reply. But the same code runs perfectly if num = 4 or
> less. In the example I sent you, the value of num is 8, and I had
> assigned memory for indices 0 to 7. I tried running this code with
> different values of num and it gave the same results. I also thought
> there was no restriction on the comm_spawn_multiple command, but I
> cannot understand why this simple code is not working. I don't think
> it is because of memory allocation.
> 
> Regards,
> Rajesh
> 
> Rajeev Thakur wrote:
> > I haven't run your code, but I don't think you have allocated
> > enough memory in the commands array to do this:
> >
> >   
> >>        commands=(char**)malloc(sizeof(char)*num);
> >>        commands[0]="./hello";
> >>        commands[1]="./hello";
> >>        commands[2]="./hello";
> >>        commands[3]="./hello";
> >>        commands[4]="./hello";
> >>        commands[5]="./hello"; 
> >>        commands[6]="./hello";
> >>        commands[7]="./hello";
> >>     
> >  
> > There is no limit on the number of commands in MPI_Comm_spawn_multiple.
> >
> > Rajeev
> >
> >
> >   
> >> -----Original Message-----
> >> From: owner-mpich-discuss at mcs.anl.gov 
> >> [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of Rajesh Sudarsan
> >> Sent: Wednesday, April 05, 2006 6:15 PM
> >> To: mpich-discuss at mcs.anl.gov
> >> Subject: [MPICH] Is there a limitation with MPI_Comm_spawn_multiple
> >>
> >> Hi,
> >> I am using the MPI_Comm_spawn_multiple command in my program to
> >> dynamically spawn new processes. But the command does not seem to
> >> work when the number of spawned processes is greater than or equal
> >> to 5. I tested this using a simple master-worker example; I have
> >> pasted the sample programs below. Could anyone tell me where I am
> >> going wrong, or is this a limitation of the implementation?
> >>
> >> MASTER:
> >>
> >> #include<string.h>
> >> #include<stdlib.h>
> >> #include<stdio.h>
> >> #include <mpi.h>
> >>
> >> void
> >> main(int argc, char **argv)
> >> {
> >>
> >>         int             tag = 0;
> >>         int             my_rank;
> >>         int             num_proc;
> >>         char         slave[20];
> >>         int             array_of_errcodes[10];
> >>         int             num,k;
> >>         MPI_Status      status;
> >>         MPI_Comm        inter_comm;
> >>         char **commands;
> >>         MPI_Info *info;
> >>         int *maxprocs;
> >>         char ***args=NULL;
> >>         int i=1;   
> >>      
> >>         char *host =(char*)"host";
> >>
> >>        num = 8;
> >>
> >>        commands=(char**)malloc(sizeof(char)*num);
> >>        commands[0]="./hello";
> >>        commands[1]="./hello";
> >>        commands[2]="./hello";
> >>        commands[3]="./hello";
> >>        commands[4]="./hello";
> >>        commands[5]="./hello";
> >>        commands[6]="./hello";
> >>        commands[7]="./hello";
> >>
> >>         maxprocs=(int *)malloc(sizeof(int)*num);
> >>         maxprocs[0]=i;
> >>         maxprocs[1]=i;
> >>         maxprocs[2]=i;
> >>         maxprocs[3]=i;
> >>         maxprocs[4]=i;
> >>         maxprocs[5]=i;
> >>         maxprocs[6]=i;
> >>         maxprocs[7]=i;
> >>  
> >>  
> >>         MPI_Init(&argc, &argv);
> >>         MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
> >>         MPI_Comm_size(MPI_COMM_WORLD, &num_proc);
> >>
> >>         info=(MPI_Info *)malloc(sizeof(MPI_Info)*num);
> >>         MPI_Info_create (&info[0]);
> >>         MPI_Info_create (&info[1]);
> >>         MPI_Info_create (&info[2]);
> >>         MPI_Info_create (&info[3]);
> >>         MPI_Info_create (&info[4]);
> >>         MPI_Info_create (&info[5]);
> >>         MPI_Info_create (&info[6]);
> >>         MPI_Info_create (&info[7]);
> >>
> >>
> >>         MPI_Info_set (info[0],host, "n1026");
> >>         MPI_Info_set (info[1],host, "n1027");
> >>         MPI_Info_set (info[2],host, "n1028");
> >>         MPI_Info_set (info[3],host, "n1029");
> >>         MPI_Info_set (info[4],host, "n1050");
> >>         MPI_Info_set (info[5],host, "n1051");
> >>         MPI_Info_set (info[5],host, "n1052");
> >>         MPI_Info_set (info[5],host, "n1053");
> >>
> >>
> >>     printf("Master %d running on host\n",my_rank);
> >>     printf("MASTER : spawning  %d slaves ... \n",num);
> >>    
> >>         /* spawn slave and send it a message */ 
> >>     MPI_Comm_spawn_multiple(num, commands, args, maxprocs, info,0, 
> >> MPI_COMM_WORLD, &inter_comm, array_of_errcodes);
> >>
> >>      MPI_Finalize();
> >>
> >> }
> >>
> >>
> >> WORKER:
> >>
> >> #include <stdio.h>
> >> #include <mpi.h>
> >>
> >> main (int argc, char **argv)
> >> {
> >>    int node,k;
> >>    int data1[1],data2[1]=10;
> >>    MPI_Status *status;
> >>    char host[20];
> >>    MPI_Init(&argc, &argv);
> >>    MPI_Comm_rank(MPI_COMM_WORLD, &node);
> >>
> >>     status = (MPI_Status*)malloc(sizeof(MPI_Status));
> >>
> >>    if (node == 0) {
> >>      printf("Rank 0 is present in C version of Hello World.\n");
> >>      k=gethostname(host,20);
> >>      printf("hostname = %s\n",host);
> >>      data1[1]=11;
> >>       printf("data received on proc %d\n",node);
> >>        printf("%d\n",data2[1]);
> >>    }
> >>     else {
> >>      k=gethostname(host,20);
> >>      printf("hostname = %s\n",host);
> >>      data1[1]=16;
> >>      printf("  Rank %d of C version says: Hello world!\n", node);
> >>      printf("data received on proc %d\n",node);
> >>      printf("-- %d\n",data2[1]);
> >>    }
> >>
> >>     MPI_Finalize();
> >> }
> >>
> >>
> >> Regards,
> >> Rajesh Sudarsan
> >>
> >>
> >>     
> >
> >
> >   
> 
> 



