[mpich-discuss] Cannot build mpich2-1.0.8p1 (nemesis) with PGI 8.0-4 on Linux x86_64

Gus Correa gus at ldeo.columbia.edu
Mon Mar 30 20:02:51 CDT 2009


Hi Dave, list

Dave: Thank you very much for your prompt answer.
See my comments inline, please.

Dave Goodell wrote:
> On Mar 30, 2009, at 6:12 PM, Gus Correa wrote:
> 
>> I tried to build the latest mpich2-1.0.8p1,
>> with the nemesis communication device,
>> using the PGI 8.0-4 compilers (pgcc, pgcpp, and pgf90)
>> on a Linux x86_64 computer, but it failed.
>>
>> The error message, which happens during the make phase, is:
> ...
>> BTW, I built mpich2-1.0.8p1 with nemesis on the same computer
>> using Gnu compilers (gcc, g++, gfortran),
>> and also using Intel compilers (icc, icpc, ifort).
>> However, we also need the PGI build of MPICH2.
>>
>> I appreciate any help.
> 
> Hi Gus,
> 
> Thanks for the bug report.  

It was not intended as such.
I am just looking for help to build MPICH2 with PGI.

> Did you successfully build the original 
> mpich2-1.0.8 (non-patch) release with these same PGI compilers?  

Since you asked, I tried mpich2-1.0.8 (non-patch) with PGI 8.0-4.
It fails with the exact same error.  :(

I also tried building mpich2-1.0.8p1 (patched) with PGI 7.2-5
instead of PGI 8.0-4.
No difference whatsoever: the exact same error again.  :(

FYI, I also tried a hybrid build of mpich2-1.0.8p1 (patched),
using Gnu gcc/g++ and PGI pgf90 release 8.0-4.
This one builds correctly, with no errors!  :)

So the problem may lie in using pgcc instead of gcc, right?
Wouldn't the PGI developers be interested in giving you
a hand here?

The configure script for this hybrid build is:

#! /bin/sh
export MYINSTALLDIR=/some/directory
export build_id=gnu-pgi-hybrid   # label used in the log file name
####################################################
export CC=gcc
export CXX=g++
export F77=pgf90
export F90=${F77}
# Note: optimization flags for AMD Opteron "Shanghai"
export MPICH2LIB_CFLAGS='-march=amdfam10 -O3 -finline-functions -funroll-loops -mfpmath=sse -static'
export MPICH2LIB_CXXFLAGS=${MPICH2LIB_CFLAGS}
export MPICH2LIB_FFLAGS='-tp shanghai-64 -fast -Mfprelaxed -static'
export MPICH2LIB_F90FLAGS=${MPICH2LIB_FFLAGS}
####################################################
../configure \
--prefix=${MYINSTALLDIR} \
--with-device=ch3:nemesis \
--enable-fast \
2>&1 | tee configure_${build_id}.log

> The 
> changes from 1.0.8 to 1.0.8p1 are pretty small [1] and I don't see how 
> they would have altered your ability to build MPICH2 with the PGI 
> compilers.
> 
> Things might be better in the latest 1.1b1 release, but I doubt it.  I'm 
> not sure exactly what form of inline assembly the PGI compilers use, but 
> we can make an effort to support it in the 1.1.0 stable release if this 
> is important to you.  

It would be nice to be able to compile with PGI.
Again, perhaps the PGI folks would also be interested
in this matter?

We keep a variety of MPI library builds, MPICH2 and OpenMPI,
using Gnu, Intel, PGI, and hybrids (Gnu+PGI and Gnu+Intel).
I try to build as many as I can.
It is not a combinatorial mania.
Our experience with ocean/atmosphere/climate code has been that
different programs may or may not compile with any of those libraries.
Hence, if you have a variety of library builds, chances are that
you will be able to compile and run each of the codes.
For the same reason we pay for two commercial compiler licenses.


> The inline assembly in the PGI manual looks like 
> gcc-style, so we might just have a bad configure test.  Would you mind 
> sending us the output from configure as well as the config.log?  You can 
> send it to me directly to avoid spamming the list with log files.  That 
> should help me figure out what's going on with the configure test since 
> I don't have a copy of the PGI compilers installed anywhere handy.

I will email the files to you off list.
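
In case it helps in the meantime, here is a minimal probe I can run
locally (my own sketch, not MPICH2's actual configure test) to check
whether a given compiler accepts gcc-style extended inline assembly;
the file name and messages are just placeholders:

```shell
# Probe whether $CC accepts gcc-style extended inline assembly.
# Set CC=pgcc (or CC=gcc) before running; defaults to gcc.
CC=${CC:-gcc}
cat > asmtest.c <<'EOF'
int main(void)
{
    int x = 1, y = 0;
    /* gcc-style extended asm: copy x into y through a register (x86) */
    __asm__ __volatile__ ("movl %1, %0" : "=r" (y) : "r" (x));
    return (y == 1) ? 0 : 1;
}
EOF
if "$CC" -o asmtest asmtest.c 2>/dev/null && ./asmtest; then
    result=supported
else
    result=unsupported
fi
echo "gcc-style inline asm: $result"
rm -f asmtest asmtest.c
```

If pgcc builds and runs this cleanly, the failure is more likely a bad
configure test than a missing compiler feature.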

> 
> In the mean time you could install one of the other channels such as 
> ch3:sock for the PGI-compiled version.  The intranode performance won't 
> be as good but it should work just fine.
> 

I can, but I wonder if this would bring in other types of problems.
It is not my direct experience, but other people have reported runtime
problems with ch3:sock on x86_64 with recent kernels.
See this thread:

http://marc.info/?l=npaci-rocks-discussion&m=123175012813683&w=2

On the other hand, nemesis seems to work.
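
If I do try the ch3:sock fallback, I would configure the all-PGI build
along the lines below (a sketch adapted from my hybrid script above;
the install path is a placeholder and I have left the optimization
flags out for simplicity):

```shell
#! /bin/sh
# Sketch: all-PGI build of MPICH2 with the ch3:sock channel.
export MYINSTALLDIR=/some/directory
export CC=pgcc
export CXX=pgcpp
export F77=pgf90
export F90=${F77}
../configure \
--prefix=${MYINSTALLDIR} \
--with-device=ch3:sock \
--enable-fast \
2>&1 | tee configure_sock.log
```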

> -Dave
> 
> [1] 
> https://trac.mcs.anl.gov/projects/mpich2/changeset?new=mpich2%2Fbranches%2Frelease%2FMPICH2_1_0_8%404206&old=mpich2%2Fbranches%2Frelease%2FMPICH2_1_0_8%403379 
> 

Thank you again for your help.

Gus Correa
---------------------------------------------------------------------
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
---------------------------------------------------------------------

