C++ bindings, Intel compiler, petsc.h conflict, PETSc install?
Barry Smith
bsmith at mcs.anl.gov
Wed Aug 29 11:03:14 CDT 2007
Please send configure.log and make_log* to petsc-maint at mcs.anl.gov,
along with all the output from your failed compile/link.
On Wed, 29 Aug 2007, Sean Dettrick wrote:
> Hi,
>
> I have a C++ MPI code that uses the C++ MPI bindings and calls PETSc as
> a kind of plug-in. It compiled and ran nicely before on Linux with
> petsc-2.3.0, the GCC compilers and LAM MPI, but now I can't get it to
> compile on Linux with petsc-2.3.3-p4, the 64-bit Intel compilers, and
> Intel MPI.
>
> If petsc.h is included before mpi.h, there are compile-time errors: the
> MPI namespace is not available, because petsc.h now defines
> MPICH_SKIP_MPICXX (it did not in 2.3.0).
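>
> A minimal sketch of the ordering that fails to compile (the file below
> is hypothetical, but it shows the pattern my code uses):
>
>    #include "petsc.h"   // defines MPICH_SKIP_MPICXX, then includes mpi.h,
>                         // so the MPI:: C++ classes are never declared
>    #include <mpi.h>     // has no effect now; its include guard skips it
>    #include <iostream>
>
>    int main(int argc, char **argv)
>    {
>        MPI::Init(argc, argv);                    // error: 'MPI' has not been declared
>        int rank = MPI::COMM_WORLD.Get_rank();    // same error on every MPI:: use
>        std::cout << "rank " << rank << std::endl;
>        MPI::Finalize();
>        return 0;
>    }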
>
> If mpi.h is included before petsc.h, there are link-time errors:
> undefined references to PETSc functions.
>
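> With the order reversed, the same sort of sketch compiles, but the link
> step then fails (again, the file is only an illustration):
>
>    #include <mpi.h>     // included first, so the MPI:: C++ bindings are visible
>    #include "petsc.h"
>
>    int main(int argc, char **argv)
>    {
>        MPI::Init(argc, argv);
>        PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);
>        PetscFinalize();      // the linker reports undefined references to
>        MPI::Finalize();      // PetscInitialize, PetscFinalize, and friends
>        return 0;
>    }
>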
> Am I doing something wrong, or is this a well known thing?
>
> I wonder if it could be a configuration error? Because the Intel MPI
> installation uses non-standard directory names for the 64-bit version
> (bin64, include64, etc.), the configure line looks a little hairy:
>
> config/configure.py \
> --with-petsc-arch=intel_MPI_64_static \
> --with-fortran=0 \
> --with-mpi-include=/opt/intel/mpi/3.0/include64 \
> --with-mpi-lib=/opt/intel/mpi/3.0/lib64/libmpi.a \
> --with-mpiexec=/opt/intel/mpi-rt/3.0/bin64/mpiexec \
> --with-x=no \
> --with-matlab=0 \
> --with-shared=0 \
> --with-cc=/opt/intel/mpi/3.0/bin64/mpiicc \
> --with-cxx=/opt/intel/mpi/3.0/bin64/mpiicpc \
> --CXXFLAGS=-I/opt/intel/mpi/3.0/include64 \
> --CFLAGS=-I/opt/intel/mpi/3.0/include64 \
> --LDFLAGS=-L/opt/intel/mpi/3.0/lib64 \
> --download-c-blas-lapack=1
>
> "make all test" indicated that the tests were passed.
>
> Any suggestions greatly appreciated!
>
> Thanks
> Sean Dettrick
>
>