[MPICH] a problem with MPI
Matthew Chambers
matthew.chambers at vanderbilt.edu
Mon Sep 24 12:25:32 CDT 2007
MPI_Init and MPI_Finalize are process-scope functions. They are not
designed to be called more than once, IIRC. You call MPI_Init once,
before you make any other MPI calls (and typically right after that you
get the number of processes and the current process's id and store them
globally), and you call MPI_Finalize once, after you have finished making
all MPI calls. In other words, the initialization and finalization belong
at the start and end of the program, outside the for loop, not inside
the procedure the loop calls. As for parallelizing the loop itself,
there's little help we can give until you are more specific about what
you're trying to accomplish. It seems like a read-through of some MPI
references is in order, though.
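Roughly, the structure would look like the sketch below. This is just a
minimal illustration in C, not your actual program: do_work and NITER are
placeholder names for whatever your procedure and iteration count really are.

#include <mpi.h>
#include <stdio.h>

#define NITER 10   /* placeholder for however many times the loop runs */

/* placeholder for the procedure that has to run in parallel */
static void do_work(int rank, int nprocs, int iteration)
{
    if (rank == 0) {
        /* master's share of the work for this iteration */
        printf("iteration %d: master (rank 0 of %d)\n", iteration, nprocs);
    } else {
        /* worker's share of the work for this iteration */
    }
}

int main(int argc, char *argv[])
{
    int rank, nprocs;

    MPI_Init(&argc, &argv);                 /* once, before any other MPI call */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id, stored once */
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs); /* total number of processes */

    for (int i = 0; i < NITER; i++)
        do_work(rank, nprocs, i);           /* the loop lives between Init and Finalize */

    MPI_Finalize();                         /* once, after all MPI calls */
    return 0;
}

With this arrangement the rank stays the same for every iteration, so the
if (rank == 0) test identifies the master consistently.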
-Matt Chambers
Anna Mereu wrote:
> Hi, I have a problem concerning the implementation of a program in a
> parallel way. I don't know if this is the right forum, but I don't
> know any other forum that can help me.
> Inside my program I have a procedure that must run in parallel, so at
> the beginning and at the end of it there are the MPI_Init and
> MPI_Finalize calls. Up to this point everything seems to work correctly.
> But I have to run this procedure many times in succession, which I do
> with a for loop.
> The problem is that, while during the first iteration the process ids
> go from 0 to nproc-1 (so I can simply identify them and assign
> different jobs to them), during the following iterations the ids are
> different and in particular they seem to be random. Because of this I
> cannot tell one job (for instance the master) apart from another.
> In practice, inside the procedure I have something like...
>
> if(myid==0) then......
> else.....
>
> How can I deal with this problem?
>
> Thank you in advance for the help
>
> Anna
>
>