[mpich-discuss] MPICH2-1.1 or 1.0.8p1 build failure

Dave Goodell goodell at mcs.anl.gov
Tue Jul 21 08:08:18 CDT 2009


Did you compile with "make -j N", where N is an integer? This type of
error can happen when building MPICH2 in parallel. Unfortunately, the
MPICH2 build is not currently compatible with parallel make. We are
working on this and hope to make the capability available in the
MPICH2-1.1.2 timeframe.
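In the meantime, a clean serial rebuild along these lines should avoid
the problem (this assumes you are in the top-level mpich2 source
directory; adjust as needed):

    # discard any partially-built objects left over from the parallel build
    make clean
    # rebuild serially: plain "make" with no -j flag
    make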

If you did a serial build, then there is probably some other problem.
In that case, can you send us the config.log, c.txt, and m.txt files
(with VERBOSE=1), as described in the MPICH2 README file in the source
distribution?
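If you don't already have those files, commands along these lines
should capture them (substitute the configure options you actually
used; VERBOSE=1 makes the make output show the full command lines):

    ./configure <your-options> 2>&1 | tee c.txt
    make VERBOSE=1 2>&1 | tee m.txt

config.log is written automatically by configure in the build directory.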

-Dave

On Jul 21, 2009, at 2:03 AM, Sangamesh B wrote:

> Dear MPICH2 team,
>
>     I tried to install mpich2-1.0.8p1 with the Intel 10 compilers on
> CentOS 5.2.
> The configure step completes successfully, but make fails:
>
> fortcom: Error: ./mpi_base.f90, line 18: Error in opening the compiled
> module file.  Check INCLUDE paths.   [MPI_CONSTANTS]
>      USE MPI_CONSTANTS,ONLY:MPI_ADDRESS_KIND
> -----------^
> fortcom: Error: ./mpi_base.f90, line 20: A kind type parameter must be
> a compile-time constant.   [MPI_ADDRESS_KIND]
>      INTEGER(KIND=MPI_ADDRESS_KIND) v1
> --------------------^
> fortcom: Error: ./mpi_base.f90, line 18: Name in only-list does not
> exist.   [MPI_ADDRESS_KIND]
>      USE MPI_CONSTANTS,ONLY:MPI_ADDRESS_KIND
> ------------------------------^
> fortcom: Error: ./mpi_base.f90, line 72: Error in opening the compiled
> module file.  Check INCLUDE paths.   [MPI_CONSTANTS]
>      USE MPI_CONSTANTS,ONLY:MPI_STATUS_SIZE
> -----------^
> fortcom: Error: ./mpi_base.f90, line 73: Conflicting attributes or
> multiple declaration of name.   [MPI_STATUS_SIZE]
>      INTEGER v0(MPI_STATUS_SIZE)
> ------------------^
> fortcom: Error: ./mpi_base.f90, line 73: A specification expression is
> invalid.   [MPI_STATUS_SIZE]
>      INTEGER v0(MPI_STATUS_SIZE)
>
> The same errors appeared with the latest version of mpich2, i.e. 1.1.
>
> Could you let me know what's going wrong here?
>
> Thank you


