[mpich-discuss] MPICH2 with MPI-1 code

Mark Williamson mjw at sdsc.edu
Wed Apr 14 14:19:37 CDT 2010


Dear List,

I have a query about MPICH2's behavior with MPI-1 code. We develop a 
piece of molecular dynamics code called AMBER. In the lead up to the 
next major release, we have been testing the MPI components of it with 
various MPI implementations on the major Linux distributions. We noticed 
that, on the latest SUSE, some of the MPI test cases were failing.

Looking closer into the failure, it pertained to this error message from 
a recent MPICH2 release:

Assertion failed in file helper_fns.c at line 335: 0
memcpy argument memory ranges overlap, dst_=0x6e51a4 src_=0x6e51a0 len_=100

Having researched this, it turned out to be an "Incorrect call to 
MPI_Gatherv": on the root process, the send buffer and the receive 
buffer were aliased to each other. This, of course, can be resolved by 
using MPI_IN_PLACE. Interestingly, mpich2-1.0.7, which we had used in 
the past, never complained about this.
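For illustration, here is a minimal sketch of the two patterns (buffer 
and variable names are hypothetical, not the actual AMBER code). In the 
first call, the root's send buffer points into the receive buffer, which 
is what triggers the overlapping-memcpy assertion; in the second, the 
root passes MPI_IN_PLACE, in which case the MPI-2 standard says its send 
count and datatype are ignored and its contribution is taken in place 
from the receive buffer:

      ! Aliased MPI-1-era pattern: on the root, buf(offset) lies inside
      ! buf itself, so the library's internal memcpy ranges overlap.
      call MPI_Gatherv(buf(offset), count, MPI_DOUBLE_PRECISION,       &
                       buf, counts, displs, MPI_DOUBLE_PRECISION,      &
                       0, MPI_COMM_WORLD, ierr)

      ! MPI-2 fix: root uses MPI_IN_PLACE; non-root ranks are unchanged.
      if (rank == 0) then
         call MPI_Gatherv(MPI_IN_PLACE, 0, MPI_DOUBLE_PRECISION,       &
                          buf, counts, displs, MPI_DOUBLE_PRECISION,   &
                          0, MPI_COMM_WORLD, ierr)
      else
         call MPI_Gatherv(buf(offset), count, MPI_DOUBLE_PRECISION,    &
                          buf, counts, displs, MPI_DOUBLE_PRECISION,   &
                          0, MPI_COMM_WORLD, ierr)
      end if

This assumes the root's own data already sits at its displacement within 
buf, which is the usual situation when the buffers were aliased in the 
first place.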

So, this brings me to my question. My understanding is that MPI_IN_PLACE 
is an MPI-2-specific argument value. The piece of AMBER code that I 
refer to in this discussion was written against the MPI-1 standard. As 
stated on the MPICH2 home page, "MPICH2 is a high-performance and widely 
portable implementation of the Message Passing Interface (MPI) standard 
(*BOTH* MPI-1 and MPI-2)". If I use MPI_IN_PLACE, the code will no 
longer conform to the MPI-1 standard, so how do I specifically instruct 
MPICH2's mpif90 or mpirun that one is using MPI-1 code, hence avoiding 
this error?

regards,

Mark

-- 
Mark Williamson, Post Doc
Walker Molecular Dynamics Group
Room 395E
San Diego Supercomputer Center
9500 Gilman Drive
La Jolla, CA 92093-0505
Email:  mjw at sdsc.edu
Office: 858-246-0827
