[mpich-discuss] Spawn MPI-Process
Pavan Balaji
balaji at mcs.anl.gov
Mon Aug 20 18:59:18 CDT 2012
Or you can specify multiple processes on the node using:
mpiexec -hosts localhost:100 -np 1 ./main
Then the universe size will be set to 100.
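
For illustration, a minimal self-spawning sketch along those lines (not
taken from the code below; it reads MPI_UNIVERSE_SIZE with
MPI_Comm_get_attr, the non-deprecated spelling of MPI_Attr_get, and
falls back to a small fixed count as Rajeev suggested -- nspawn and the
fallback value 3 are placeholders):

  #include <mpi.h>

  int main(int argc, char *argv[])
  {
      MPI_Init(&argc, &argv);

      /* flag is 0 if the implementation does not provide the attribute */
      int *universe_sizep, flag;
      MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_UNIVERSE_SIZE,
                        &universe_sizep, &flag);
      int nspawn = (flag && *universe_sizep > 1) ? *universe_sizep - 1 : 3;

      /* MPI_ARGV_NULL: the children take no command-line arguments;
         MPI_ERRCODES_IGNORE: skip the per-process error-code array */
      MPI_Comm childComm;
      MPI_Comm_spawn("./hi", MPI_ARGV_NULL, nspawn, MPI_INFO_NULL,
                     0, MPI_COMM_SELF, &childComm, MPI_ERRCODES_IGNORE);

      MPI_Finalize();
      return 0;
  }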
-- Pavan
On 08/20/2012 12:49 PM, Rajeev Thakur wrote:
> Instead of universe_size-1, try passing some number like 3 or 4.
>
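> That is, a sketch of the call with a fixed count (and with
> MPI_ERRCODES_IGNORE in place of a single int, since the last argument
> must be an array with one entry per spawned process):
>
>   MPI_Comm_spawn("./hi", hiargv, 3, MPI_INFO_NULL, myrank,
>                  MPI_COMM_SELF, &childComm, MPI_ERRCODES_IGNORE);
>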
> Rajeev
>
> On Aug 20, 2012, at 4:39 AM, Silvan Brändli wrote:
>
>> Dear all
>>
>> I am trying to start an application using MPI_Comm_spawn. However, when I execute my program I get a segmentation fault. I guess this might be because universe_size is one. As I couldn't find helpful information on setting universe_size, I hope some of you can help me.
>>
>> Find the source code below or in the attached files. I compile the code using
>> mpicxx main.cpp -o main
>> mpicxx hi.cpp -o hi
>>
>> and run it with
>>
>> mpiexec -prepend-rank -np 1 ./main
>>
>> The output is
>>
>> $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
>>
>> [0] world_size 1
>> [0] universe size 1
>> [0] try to spawn
>>
>> =====================================================================================
>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
>> = EXIT CODE: 11
>> = CLEANING UP REMAINING PROCESSES
>> = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
>> =====================================================================================
>> APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault (signal 11)
>>
>> $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
>>
>> Best regards
>> Silvan
>>
>> $$$$==main.cpp==$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
>>
>> #include <cstdio>
>> #include <mpi.h>
>>
>> int main(int argc, char *argv[])
>> {
>>   int myrank;
>>   if (MPI_Init(&argc, &argv) != MPI_SUCCESS)
>>   {
>>     printf("MPI_Init failed");
>>   }
>>   MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
>>
>>   int world_size, universe_size, *universe_sizep, flag;
>>   MPI_Comm_size(MPI_COMM_WORLD, &world_size);
>>   printf("world_size %d\n", world_size);
>>
>>   MPI_Attr_get(MPI_COMM_WORLD, MPI_UNIVERSE_SIZE,
>>                &universe_sizep, &flag);
>>
>>   if (!flag)
>>   {
>>     printf("This MPI does not support UNIVERSE_SIZE. How many processes total?");
>>     scanf("%d", &universe_size);
>>   } else universe_size = *universe_sizep;
>>   printf("universe size %d\n", universe_size);
>>
>>   /* The argv passed to MPI_Comm_spawn must be NULL-terminated. */
>>   char* hiargv[2];
>>   char* m0 = (char*)" ";
>>   hiargv[0] = m0;
>>   hiargv[1] = NULL;
>>
>>   MPI_Comm childComm;
>>   printf("try to spawn\n");
>>   /* The last argument must be an array with one error code per spawned
>>      process; MPI_ERRCODES_IGNORE skips it safely. */
>>   MPI_Comm_spawn("./hi", hiargv, universe_size - 1, MPI_INFO_NULL, myrank,
>>                  MPI_COMM_SELF, &childComm, MPI_ERRCODES_IGNORE);
>>   printf("after spawn\n");
>>   MPI_Finalize();
>>   return 0;
>> }
>>
>> $$$$==hi.cpp==$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
>>
>> #include <cstdio>
>> #include <mpi.h>
>>
>> int main(int argc, char** argv) {
>>   MPI_Init(NULL, NULL);
>>
>>   int world_size;
>>   MPI_Comm_size(MPI_COMM_WORLD, &world_size);
>>
>>   int world_rank;
>>   MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
>>
>>   printf("Hello world from rank %d"
>>          " out of %d processors\n",
>>          world_rank, world_size);
>>
>>   MPI_Finalize();
>>   return 0;
>> }
>>
>> $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
>> --
>> Dipl.-Ing. Silvan Brändli
>> Numerische Strukturanalyse mit Anwendungen in der Schiffstechnik (M-10)
>>
>> Technische Universität Hamburg-Harburg
>> Schwarzenbergstraße 95c
>> 21073 Hamburg
>>
>> Tel. : +49 (0)40 42878 - 6187
>> Fax. : +49 (0)40 42878 - 6090
>> e-mail: silvan.braendli at tu-harburg.de
>> www : http://www.tuhh.de/skf
>>
>> <main.cpp><hi.cpp>
>
--
Pavan Balaji
http://www.mcs.anl.gov/~balaji