[mpich-discuss] Using MPI_Put/Get correctly?
Pavan Balaji
balaji at mcs.anl.gov
Mon Dec 27 19:25:39 CST 2010
On 12/27/2010 04:29 PM, Grismer, Matthew J Civ USAF AFMC AFRL/RBAT wrote:
> I've created two example test programs that appear to highlight the issue
> with MPICH2; both die when I run them on 2 processors. I am pretty certain
> the first (putoneway.f90) should work, as I am only doing a single put from
> one processor to a second processor; the target processor is doing nothing
> with the windowed array that is receiving the data. My guess is the problem
> lies in the indexed datatypes that I am using for both the origin and
> target.
Thanks. Example programs that show the problem are always helpful.
The first example doesn't die for me. It dumps some output and returns
correctly (return code 0).
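Just so we are talking about the same pattern, here is roughly what I
understand the first example to be doing: fence, a single put with indexed
datatypes on both the origin and target side, fence. All the names, sizes,
and block layouts below are placeholders I made up, not your actual
putoneway.f90 (run with at least 2 processes):

  program putoneway_sketch
    use mpi
    implicit none
    integer, parameter :: n = 8
    integer :: ierr, rank, dtype, win, sizeofreal
    integer :: blocklens(2), displs(2)
    integer(kind=MPI_ADDRESS_KIND) :: winsize, target_disp
    real :: sendbuf(n), winbuf(n)

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

    ! Indexed type selecting two blocks of two reals each; both the
    ! block lengths and the displacements are arrays, one entry per block.
    blocklens = (/ 2, 2 /)
    displs    = (/ 0, 4 /)
    call MPI_Type_indexed(2, blocklens, displs, MPI_REAL, dtype, ierr)
    call MPI_Type_commit(dtype, ierr)

    ! Every rank exposes winbuf as a window.
    call MPI_Type_size(MPI_REAL, sizeofreal, ierr)
    winsize = n * sizeofreal
    call MPI_Win_create(winbuf, winsize, sizeofreal, MPI_INFO_NULL, &
                        MPI_COMM_WORLD, win, ierr)

    sendbuf = real(rank)
    winbuf  = -1.0

    ! One access epoch: rank 0 does a single put into rank 1's window;
    ! rank 1 does nothing with winbuf inside the epoch.
    call MPI_Win_fence(0, win, ierr)
    if (rank == 0) then
       target_disp = 0
       call MPI_Put(sendbuf, 1, dtype, 1, target_disp, 1, dtype, win, ierr)
    end if
    call MPI_Win_fence(0, win, ierr)

    call MPI_Win_free(win, ierr)
    call MPI_Type_free(dtype, ierr)
    call MPI_Finalize(ierr)
  end program putoneway_sketch

If your program differs from this in some important way, that would be good
to know.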
> The second case (putbothways.f90) closely mirrors what I am actually trying
> to do in my code, that is, have each processor put into the other processor's
> windowed array at the same time. So, each process is sending from and
> receiving into the same array at the same time, with no overlap in the sent
> and received data. Once again I'm using indexed data types for both the
> origin and target.
The second example segfaults for me, but I don't know enough Fortran to
know if the code is correct or not. However, it looks like the usage of
MPI_Type_indexed is incorrect (the block lengths argument needs to be an array).
Here's the prototype:
http://www.mcs.anl.gov/research/projects/mpi/www/www3/MPI_Type_indexed.html
If you don't think this is an error in the code, maybe someone more
knowledgeable in Fortran on the mailing list can comment.
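For what it's worth, here is a tiny example of what I mean about the
arguments; the block lengths and displacements here are toy values I made up,
not your actual layout:

  program indexed_sketch
    use mpi
    implicit none
    integer :: ierr, newtype
    integer :: blocklens(3), displs(3)

    call MPI_Init(ierr)

    ! Both the block-length and displacement arguments are integer arrays,
    ! one entry per block; a scalar block length is not valid here.
    blocklens = (/ 4, 2, 4 /)
    displs    = (/ 0, 8, 16 /)
    call MPI_Type_indexed(3, blocklens, displs, MPI_REAL, newtype, ierr)
    call MPI_Type_commit(newtype, ierr)

    call MPI_Type_free(newtype, ierr)
    call MPI_Finalize(ierr)
  end program indexed_sketch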
-- Pavan
--
Pavan Balaji
http://www.mcs.anl.gov/~balaji