[petsc-users] trouble compiling MPICH on cluster

Randall Mackie rlmackie862 at gmail.com
Wed Dec 16 11:05:08 CST 2020


Hi Satish,

You are quite right and thank you for spotting that!

I had copied the configure command from another file and forgot to remove the lines specifying mpicc and mpif90.

All that is necessary is:

--with-clean=1 \
--with-scalar-type=complex \
--with-debugging=1 \
--with-fortran=1 \
--download-mpich=../external/mpich-3.3.2.tar.gz
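
For reference, a minimal sketch of the full invocation, assuming it is run from PETSC_DIR and the tarball sits at ../external/ as above:

./configure \
  --with-clean=1 \
  --with-scalar-type=complex \
  --with-debugging=1 \
  --with-fortran=1 \
  --download-mpich=../external/mpich-3.3.2.tar.gz

The compiler options are deliberately left unset so the downloaded MPICH is built with the system compilers rather than pre-existing MPI wrappers, per Satish's note below.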

Much appreciated,


Randy

> On Dec 16, 2020, at 8:38 AM, Satish Balay <balay at mcs.anl.gov> wrote:
> 
>> Configure Options: --configModules=PETSc.Configure --optionsModule=config.compilerOptions --with-clean=1 --with-scalar-type=complex --with-debugging=1 --with-fortran=1 --download-mpich=../external/mpich-3.3.2.tar.gz --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpicc
> 
> Using --download-mpich with a prior install of MPI [i.e. --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpicc] does not make sense. This should be:
> 
> --with-cc=gcc --with-fc=gfortran --with-cxx=g++
> 
> 
> Note: all PETSc is doing is building MPICH with:
> 
> Configuring MPICH version 3.3.2 with  '--prefix=/auto/soft1/multiphysics/PETSc/petsc-3.13.3/linux-gnu-mpich-complex-debug' 'MAKE=/usr/bin/gmake' '--libdir=/auto/soft1/multiphysics/PETSc/petsc-3.13.3/linux-gnu-mpich-complex-debug/lib' 'CC=mpicc' 'CFLAGS=-fPIC -fstack-protector -g3' 'AR=/usr/bin/ar' 'ARFLAGS=cr' 'CXX=mpicc' 'CXXFLAGS=-fstack-protector -g -fPIC -x c++ -std=gnu++11' 'FFLAGS=-fPIC -ffree-line-length-0 -g' 'FC=mpif90' 'F77=mpif90' 'FCFLAGS=-fPIC -ffree-line-length-0 -g' '--enable-shared' '--with-device=ch3:sock' '--with-pm=hydra' '--enable-fast=no' '--enable-error-messages=all' '--enable-g=meminit'
> 
> 
> So a manual build with equivalent options [with the above fix, i.e. CC=gcc CXX=g++ FC=gfortran F77=gfortran] should also provide an equivalent [valgrind-clean] MPICH.
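> 
> For reference, such a manual build would look roughly like this (the prefix is only an example; the key options are taken from the configure line above, dropping the '-x c++' flag that is only needed when a C compiler driver is used for C++):
> 
> ./configure --prefix=$HOME/mpich-3.3.2-debug \
>   CC=gcc CXX=g++ FC=gfortran F77=gfortran \
>   CFLAGS='-fPIC -fstack-protector -g3' \
>   CXXFLAGS='-fPIC -fstack-protector -g -std=gnu++11' \
>   FFLAGS='-fPIC -ffree-line-length-0 -g' \
>   FCFLAGS='-fPIC -ffree-line-length-0 -g' \
>   --enable-shared --with-device=ch3:sock --with-pm=hydra \
>   --enable-fast=no --enable-error-messages=all --enable-g=meminit
> make && make install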
> 
> Satish
> 
> 
> 
> On Wed, 16 Dec 2020, Randall Mackie wrote:
> 
>> Dear PETSc team:
>> 
>> I am trying to compile a debug-mpich version of PETSc on a new remote cluster for running valgrind.
>> 
>> I’ve done this a thousand times on my laptop and the clusters I normally have access to, and it’s never been a problem.
>> 
>> This time, it fails while trying to install MPICH, and according to the attached configure.log the build fails with the following message:
>> 
>> src/binding/cxx/.libs/initcxx.o: In function `__static_initialization_and_destruction_0':
>> /auto/soft1/multiphysics/PETSc/petsc-3.13.3/linux-gnu-mpich-complex-debug/externalpackages/mpich-3.3.2/src/binding/cxx/initcxx.cxx:46: undefined reference to `__dso_handle'
>> /usr/bin/ld: src/binding/cxx/.libs/initcxx.o: relocation R_X86_64_PC32 against undefined hidden symbol `__dso_handle' can not be used when making a shared object
>> /usr/bin/ld: final link failed: Bad value
>> collect2: error: ld returned 1 exit status
>> gmake[2]: *** [lib/libmpicxx.la] Error 1
>> gmake[2]: *** Waiting for unfinished jobs....
>> /usr/bin/ld: cannot find -l-L/usr/lib/gcc/x86_64-linux-gnu/4.7
>> collect2: error: ld returned 1 exit status
>> gmake[2]: *** [lib/libmpifort.la] Error 1
>> gmake[1]: *** [all-recursive] Error 1
>> gmake: *** [all] Error 2
>> 
>> 
>> We were able to compile and install MPICH separately (using the same tarball) and then use it to compile PETSc, so we have a workaround, but I would prefer to build them together as I’ve always done.
>> 
>> Any ideas as to the issue?
>> 
>> Thanks,
>> 
>> Randy M.
>> 
>> 


