[mpich-discuss] Spawn MPI-Process

Rajeev Thakur thakur at mcs.anl.gov
Wed Oct 31 12:53:46 CDT 2012


The argv list has to be terminated by NULL. See the definition of MPI_Comm_spawn.
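
For illustration, here is a minimal sketch of a spawn call whose argument list meets that requirement. It borrows the names from the attached test case ("./hi", the argument "23", hiargv), but it is written as an example for this reply rather than taken from the attachment; the extra array slot holding NULL is the part that was missing. The -hosts localhost:4 in the run line simply follows Pavan's earlier suggestion so that the universe size is larger than 1.

// spawn_fix.cpp -- minimal sketch, assuming a child executable ./hi
// that prints its arguments.
// Compile: mpicxx spawn_fix.cpp -o spawn_fix
// Run:     mpiexec -hosts localhost:4 -np 1 ./spawn_fix
#include <cstdio>
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    // Argument vector for the child: one real argument plus the
    // mandatory NULL terminator.  Without hiargv[1] = NULL,
    // MPI_Comm_spawn reads past the end of the array, which matches
    // the segmentation fault reported below.
    char arg0[] = "23";
    char *hiargv[2];
    hiargv[0] = arg0;
    hiargv[1] = NULL;

    char cmd[] = "./hi";
    MPI_Comm childComm;
    MPI_Comm_spawn(cmd, hiargv, 1, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &childComm, MPI_ERRCODES_IGNORE);
    printf("after spawn\n");

    MPI_Finalize();
    return 0;
}

Note that the command name is not part of this vector: inside ./hi the string "23" arrives as argv[1], while argv[0] is set to the executable name as usual.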


On Oct 31, 2012, at 10:42 AM, Silvan Brändli wrote:

> Dear all
> 
> After a long time I'm coming back to this topic...
> 
> Good news: The first example is running (see the attached case "test_spawn"; run ./build to compile and ./run to start the example). Setting -hosts localhost:XYZ to something bigger than 1, as Pavan suggested, helped.
> 
> Bad news: I also need to pass an argument to the program I want to start. When I try, I get a segmentation fault (see second case: test_spawn_argv). I guess something must be wrong with the argv I am passing:
> 
> char* hiargv[1];
> char h0[]={"23"};
> hiargv[0]=h0;
> 
> The resulting output is:
> 
> =================================================
> 
> [0] Main rank 0
> [0] Main world_size 1
> [0] Main universe size 2
> [0] before spawn 1
> 
> =====================================================================================
> =   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
> =   EXIT CODE: 11
> =   CLEANING UP REMAINING PROCESSES
> =   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
> =====================================================================================
> APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault (signal 11)
> 
> =================================================
> 
> I am grateful for any help.
> 
> Best regards
> Silvan
> 
> 
> On 08/21/12 16:42, Silvan Brändli wrote:
>> When I use MPI_ARGV_NULL I get:
>> 
>> [0] world_size 1
>> [0] universe size 1
>> [0] try to spawn
>> [mpiexec at skfp4] HYD_pmcd_pmi_alloc_pg_scratch
>> (./pm/pmiserv/pmiserv_utils.c:595): assert (pg->pg_process_count *
>> sizeof(struct HYD_pmcd_pmi_ecount)) failed
>> [mpiexec at skfp4] fn_spawn (./pm/pmiserv/pmiserv_pmi_v1.c:468): unable to
>> allocate pg scratch space
>> [mpiexec at skfp4] handle_pmi_cmd (./pm/pmiserv/pmiserv_cb.c:44): PMI
>> handler returned error
>> [mpiexec at skfp4] control_cb (./pm/pmiserv/pmiserv_cb.c:289): unable to
>> process PMI command
>> [mpiexec at skfp4] HYDT_dmxu_poll_wait_for_event
>> (./tools/demux/demux_poll.c:77): callback returned error status
>> [mpiexec at skfp4] HYD_pmci_wait_for_completion
>> (./pm/pmiserv/pmiserv_pmci.c:181): error waiting for event
>> [mpiexec at skfp4] main (./ui/mpich/mpiexec.c:405): process manager error
>> waiting for completion
>> 
>> Can you get some helpful information out of this?
>> 
>> Best regards
>> Silvan
>> 
>> 
>> On 08/21/12 15:21, Rajeev Thakur wrote:
>>> Try passing MPI_ARGV_NULL instead of hiargv. If that also doesn't
>>> work, then use a debugger to locate the seg fault.
>>> 
>>> Rajeev
>>> 
>>> On Aug 21, 2012, at 5:07 AM, Silvan Brändli wrote:
>>> 
>>>> Rajeev and Pavan, thank you for your suggestions. Unfortunately I
>>>> still get the same error (segmentation fault), so I'm quite clueless
>>>> where to look for the error now... If somebody could run the example
>>>> to see whether the same error occurs, that would be helpful.
>>>> 
>>>> Best regards
>>>> Silvan
>>>> 
>>>>> On 08/21/12 01:59, Pavan Balaji wrote:
>>>>> 
>>>>> Or you can specify multiple processes on the node using:
>>>>> 
>>>>> mpiexec -hosts localhost:100 -np 1 ./main
>>>>> 
>>>>> Then the universe size will be set to 100.
>>>>> 
>>>>>  -- Pavan
>>>>> 
>>>>> On 08/20/2012 12:49 PM, Rajeev Thakur wrote:
>>>>>> Instead of universe_size-1, try passing some number like 3 or 4.
>>>>>> 
>>>>>> Rajeev
>>>>>> 
>>>>>> On Aug 20, 2012, at 4:39 AM, Silvan Brändli wrote:
>>>>>> 
>>>>>>> Dear all
>>>>>>> 
>>>>>>> I try to start an application using the MPI_Comm_spawn command.
>>>>>>> However, when I execute my program I get a segmentation fault. I
>>>>>>> guess this might be because universe_size is one. As I couldn't find
>>>>>>> helpful information on setting universe_size, I hope some of you
>>>>>>> could help me.
>>>>>>> 
>>>>>>> Find the source code below or in the attached files. I compile the
>>>>>>> code using
>>>>>>> mpicxx main.cpp -o main
>>>>>>> mpicxx hi.cpp -o hi
>>>>>>> 
>>>>>>> and execute the case with
>>>>>>> 
>>>>>>> mpiexec -prepend-rank -np 1 ./main
>>>>>>> 
>>>>>>> The output is
>>>>>>> 
>>>>>>> $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> [0] world_size 1
>>>>>>> [0] universe size 1
>>>>>>> [0] try to spawn
>>>>>>> 
>>>>>>> =====================================================================================
>>>>>>> 
>>>>>>> 
>>>>>>> =   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
>>>>>>> =   EXIT CODE: 11
>>>>>>> =   CLEANING UP REMAINING PROCESSES
>>>>>>> =   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
>>>>>>> =====================================================================================
>>>>>>> 
>>>>>>> 
>>>>>>> APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault
>>>>>>> (signal 11)
>>>>>>> 
>>>>>>> $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> Best regards
>>>>>>> Silvan
>>>>>>> 
>>>>>>> $$$$==main.cpp==$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> #include <sstream>
>>>>>>> #include <cstdio>   // needed for printf/scanf
>>>>>>> #include <mpi.h>
>>>>>>> 
>>>>>>> using std::string;
>>>>>>> 
>>>>>>> int main(int argc, char *argv[])
>>>>>>> {
>>>>>>>  int          myrank;
>>>>>>>  if (MPI_Init(&argc,&argv)!=MPI_SUCCESS)
>>>>>>>  {
>>>>>>>    printf("MPI_Init failed");
>>>>>>>  }
>>>>>>>  MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
>>>>>>> 
>>>>>>>  int world_size, universe_size, *universe_sizep, flag;
>>>>>>>  MPI_Comm_size(MPI_COMM_WORLD, &world_size);
>>>>>>>  printf("world_size %d\n",world_size);
>>>>>>> 
>>>>>>>  MPI_Attr_get(MPI_COMM_WORLD, MPI_UNIVERSE_SIZE,
>>>>>>>  &universe_sizep, &flag);
>>>>>>> 
>>>>>>>  if (!flag)
>>>>>>>  {
>>>>>>>    printf("This MPI does not support UNIVERSE_SIZE. How many
>>>>>>> processes total?");
>>>>>>>    scanf("%d", &universe_size);
>>>>>>>  } else universe_size = *universe_sizep;
>>>>>>>  printf("universe size %d\n",universe_size);
>>>>>>> 
>>>>>>>  char* hiargv[1];
>>>>>>>  char* m0=" ";
>>>>>>>  hiargv[0]=m0;
>>>>>>> 
>>>>>>>  MPI_Comm childComm;
>>>>>>>  int spawnerror;
>>>>>>>  printf("try to spawn\n");
>>>>>>>  MPI_Comm_spawn("./hi",hiargv, universe_size-1, MPI_INFO_NULL,
>>>>>>> myrank, MPI_COMM_SELF, &childComm, &spawnerror);
>>>>>>>  printf("after spawn\n");
>>>>>>>  MPI_Finalize();
>>>>>>>  return 0;
>>>>>>> }
>>>>>>> 
>>>>>>> $$$$==hi.cpp==$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> #include <cstdio>   // needed for printf
>>>>>>> #include <mpi.h>
>>>>>>> int main(int argc, char** argv) {
>>>>>>>  MPI_Init(NULL, NULL);
>>>>>>> 
>>>>>>>  int world_size;
>>>>>>>  MPI_Comm_size(MPI_COMM_WORLD, &world_size);
>>>>>>> 
>>>>>>>  int world_rank;
>>>>>>>  MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
>>>>>>> 
>>>>>>>  printf("Hello world from, rank %d"
>>>>>>>         " out of %d processors\n",
>>>>>>>         world_rank, world_size);
>>>>>>> 
>>>>>>>  MPI_Finalize();
>>>>>>>  return 0;
>>>>>>> }
>>>>>>> 
>>>>>>> $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
>>>>>>> 
>>>>>>> 
>>>>>>> --
>>>>>>> Dipl.-Ing. Silvan Brändli
>>>>>>> Numerische Strukturanalyse mit Anwendungen in der Schiffstechnik
>>>>>>> (M-10)
>>>>>>> 
>>>>>>> Technische Universität Hamburg-Harburg
>>>>>>> Schwarzenbergstraße 95c
>>>>>>> 21073 Hamburg
>>>>>>> 
>>>>>>> Tel.  : +49 (0)40 42878 - 6187
>>>>>>> Fax.  : +49 (0)40 42878 - 6090
>>>>>>> e-mail: silvan.braendli at tu-harburg.de
>>>>>>> www   : http://www.tuhh.de/skf
>>>>>>> 
>>>>>>> <main.cpp><hi.cpp>
>>>>>> 
>>>>>> 
>>>>> 
>>>> 
>>>> 
>>>> --
>>>> Dipl.-Ing. Silvan Brändli
>>>> Numerische Strukturanalyse mit Anwendungen in der Schiffstechnik (M-10)
>>>> 
>>>> Technische Universität Hamburg-Harburg
>>>> Schwarzenbergstraße 95c
>>>> 21073 Hamburg
>>>> 
>>>> Tel.  : +49 (0)40 42878 - 6187
>>>> Fax.  : +49 (0)40 42878 - 6090
>>>> e-mail: silvan.braendli at tu-harburg.de
>>>> www   : http://www.tuhh.de/skf
>>>> 
>>>> 
>>> 
>>> 
>> 
>> 
> 
> 
> -- 
> Dipl.-Ing. Silvan Brändli
> Numerische Strukturanalyse mit Anwendungen in der Schiffstechnik (M-10)
> 
> Technische Universität Hamburg-Harburg
> Schwarzenbergstraße 95c
> 21073 Hamburg
> 
> Tel.  : +49 (0)40 42878 - 6187
> Fax.  : +49 (0)40 42878 - 6090
> e-mail: silvan.braendli at tu-harburg.de
> www   : http://www.tuhh.de/skf
> 
> 5th GACM Colloquium on Computational Mechanics
> http://www.tu-harburg.de/gacm2013
> <test_spawn.zip><test_spawn_argv.zip>


