[mpich-discuss] MPICH2 with MPI-1 code

Darius Buntinas buntinas at mcs.anl.gov
Wed Apr 14 14:48:21 CDT 2010


My understanding is that using the same buffer for both the send buffer 
and the receive buffer was not allowed in MPI-1, and that the addition of 
MPI_IN_PLACE in MPI-2 is what made it possible.
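For reference, a minimal sketch of the MPI-2 idiom in C (illustrative 
names, not the actual AMBER Fortran code): the root passes MPI_IN_PLACE 
as the send buffer, and its own contribution already sits in the receive 
buffer, so no send/receive aliasing occurs.

#include <mpi.h>
#include <stdio.h>

/* Sketch: each rank contributes one int; the root gathers them.
 * With MPI_IN_PLACE the root's contribution is taken directly from
 * its slot in recv, so no send buffer overlaps the receive buffer. */
int main(int argc, char **argv)
{
    int rank, size, i;
    int value, recv[64], counts[64], displs[64];   /* assumes size <= 64 */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    value = rank * 10;                        /* this rank's contribution */
    for (i = 0; i < size; i++) { counts[i] = 1; displs[i] = i; }

    if (rank == 0) {
        recv[0] = value;                 /* root's data already in place */
        MPI_Gatherv(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                    recv, counts, displs, MPI_INT, 0, MPI_COMM_WORLD);
    } else {
        MPI_Gatherv(&value, 1, MPI_INT,
                    NULL, NULL, NULL, MPI_INT, 0, MPI_COMM_WORLD);
    }

    if (rank == 0)
        for (i = 0; i < size; i++)
            printf("recv[%d] = %d\n", i, recv[i]);

    MPI_Finalize();
    return 0;
}

The pattern that triggers the new check is passing the root's own slot of 
the receive buffer (rather than MPI_IN_PLACE) as the send buffer in the 
same call.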

Note that ignoring that error may be dangerous, because passing the same 
(or overlapping) buffers to memcpy can give incorrect results (depending 
on the memcpy implementation and optimization level).
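To illustrate with a toy C example (not MPICH2's internal code): the C 
standard leaves memcpy over overlapping ranges undefined, while memmove 
is specified to handle exactly that case.

#include <string.h>
#include <stdio.h>

/* Copying between overlapping ranges with memcpy is undefined behavior;
 * memmove is the routine defined for overlapping source/destination.
 * This mirrors the kind of overlap the MPICH2 assertion is reporting. */
int main(void)
{
    char buf[16] = "abcdefghij";

    /* memcpy(buf + 1, buf, 10);  overlapping ranges: undefined behavior */
    memmove(buf + 1, buf, 10); /* well-defined: buf becomes "aabcdefghij" */

    printf("%s\n", buf);
    return 0;
}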

-d

On 04/14/2010 02:19 PM, Mark Williamson wrote:
> Dear List,
>
> I have a query about MPICH2's behavior with MPI-1 code. We develop a
> piece of molecular dynamics code called AMBER. In the lead-up to the
> next major release, we have been testing its MPI components with
> various MPI implementations on the major Linux distributions. We
> noticed that on the latest SUSE some of the MPI test cases were failing.
>
> Looking more closely at the failure, it pertained to this error message
> from a recent MPICH2 release:
>
> Assertion failed in file helper_fns.c at line 335: 0
> memcpy argument memory ranges overlap, dst_=0x6e51a4 src_=0x6e51a0 len_=100
>
> Having researched this, I found it was related to an "Incorrect call to
> MPI_Gatherv": the root process aliased the send buffer and the receive
> buffer to each other. This, of course, can be resolved by using
> MPI_IN_PLACE. Interestingly, mpich2-1.0.7, which we had used in the
> past, never complained about this.
>
> So, this brings me to my question. My understanding is that MPI_IN_PLACE
> is an MPI-2-specific argument value. The piece of AMBER code that I refer
> to in this discussion was written against the MPI-1 standard. As stated on
> the MPICH2 home page, "MPICH2 is a high-performance and widely portable
> implementation of the Message Passing Interface (MPI) standard (*BOTH*
> MPI-1 and MPI-2)". If I use MPI_IN_PLACE, the code will no longer conform
> to the MPI-1 standard, so how do I specifically instruct MPICH2's mpif90
> or mpirun that one is using MPI-1 code, and hence avoid this error?
>
> regards,
>
> Mark
>

