[petsc-dev] Building PETSc on LLNL Lassen
Barry Smith
bsmith at petsc.dev
Sun Dec 13 23:21:02 CST 2020
So, two configure sins: setting values when running configure without logging them into configure.log, and setting values without testing them with configure tests :-)
We need to finally deal properly with setting -ccbin in configure; just defaulting causes all kinds of undebuggable problems. Why can't we also just set it to the C++ compiler? Is it because nvcc cannot handle the MPI C++ wrapper, so it needs to be set to the underlying C++ compiler?
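
A minimal sketch of what such a configure test might look like (hypothetical shell, not current BuildSystem code; nvcc's -ccbin flag is real, but the probe-and-fallback logic is an assumption):

cat > conftest.cu <<'EOF'
int main(void) { return 0; }
EOF
# can nvcc use the MPI C++ wrapper directly as its host compiler?
if nvcc -ccbin mpiCC -std=c++14 -c conftest.cu -o conftest.o 2>/dev/null; then
  echo "mpiCC is usable as the -ccbin host compiler"
else
  # fall back to the underlying C++ compiler
  nvcc -ccbin xlc++ -std=c++14 -c conftest.cu -o conftest.o
fi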
Barry
It is also strange that Jacob's configure.log seems to believe the IBM compiler is a GNU compiler (but because of poor logging configure does not record this information; fix coming in an MR soon).
> On Dec 13, 2020, at 6:32 PM, Junchao Zhang <junchao.zhang at gmail.com> wrote:
>
> Jacob,
> Do you need to add 'CUDAFLAGS=-ccbin xlc++' to specify the host compiler for CUDA? Note in cuda.py I added
>
>   # nvcc is a C++ compiler, so it is always good to add -std=xxx; it is even
>   # crucial when using thrust complex (see MR 2822)
>   if self.compilers.cxxdialect in ['C++11', 'C++14']:
>     self.setCompilers.CUDAFLAGS += ' -std=' + self.compilers.cxxdialect.lower()
>
> In your configure.log, there are
> #define PETSC_HAVE_CXX_DIALECT_CXX11 1
> #define PETSC_HAVE_CXX_DIALECT_CXX14 1
>
> I guess without -ccbin, nvcc uses gcc by default and your gcc does not support C++14.
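>
> (To illustrate, a hypothetical check, assuming xlc++ is in PATH; -ccbin and -x cu are standard nvcc options, the file names are placeholders:)
>
> # see which host compiler nvcc would use by default (likely the old system gcc)
> which gcc && gcc --version | head -1
> # force the IBM compiler as the host compiler; the -std warning should go away
> nvcc -ccbin xlc++ -std=c++14 -x cu -c conftest.cu -o conftest.o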
>
> --Junchao Zhang
>
>
> On Sun, Dec 13, 2020 at 1:25 PM Jacob Faibussowitsch <jacob.fai at gmail.com> wrote:
> Hello All,
>
> Does anyone have any experience building PETSc with CUDA support on Lassen? I’ve been having trouble building with the IBM XL compilers + spectrum-mpi + nvcc. nvcc seems not to like the -std=c++14 argument, complaining that its configured host compiler doesn’t support it, yet compiling the following “test.cc”:
>
> #include <stdlib.h>
>
> int main(int argc, char **argv)
> {
>   int i = 1;
>   i += argc;
>   return i;
> }
>
> with "mpicc -std=c++14 test.cc" produces zero errors.
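>
> (For comparison, a hypothetical direct reproduction with nvcc; "-x cu" just tells nvcc to treat the .cc file as CUDA source. Per the configure.log excerpt below, this is expected to print the same warning:)
>
> nvcc -std=c++14 -x cu -c test.cc
> nvcc warning : The -std=c++14 flag is not supported with the configured host compiler. Flag will be ignored.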
> ------------------------------------------------------------------------
>
> Modules loaded:
>
> module load xl/2020.11.12-cuda-11.1.1
> module load spectrum-mpi
> module load cuda/11.1.1
> module load python/3.8.2
> module load cmake
> module load valgrind
> module load lapack
>
> My configure commands:
>
> ./configure --with-cc=mpicc --with-cxx=mpiCC --with-fc=mpifort --with-cuda --with-debugging=1 PETSC_ARCH=arch-linux-c-debug
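>
> (With the CUDAFLAGS suggestion from Junchao's reply above applied, the invocation would look something like the following; whether plain "xlc++" or a full path is required on Lassen is an assumption:)
>
> ./configure --with-cc=mpicc --with-cxx=mpiCC --with-fc=mpifort --with-cuda \
>   --with-debugging=1 CUDAFLAGS='-ccbin xlc++' PETSC_ARCH=arch-linux-c-debug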
>
> The error:
>
> TESTING: findMPIInc from config.packages.MPI(config/BuildSystem/config/packages/MPI.py:636)
> *******************************************************************************
> UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details):
> -------------------------------------------------------------------------------
> Bad compiler flag: -I/usr/tce/packages/spectrum-mpi/ibm/spectrum-mpi-rolling-release/include
> *******************************************************************************
>
> The actual configure.log error:
>
> Executing: nvcc -c -o /var/tmp/petsc-2v0k4k61/config.setCompilers/conftest.o -I/var/tmp/petsc-2v0k4k61/config.setCompilers -I/var/tmp/petsc-2v0k4k61/config.types -g -std=c++14 -I/usr/tce/packages/spectrum-mpi/ibm/spectrum-mpi-rolling-release/include -Wno-deprecated-gpu-targets /var/tmp/petsc-2v0k4k61/config.setCompilers/conftest.cu
> Possible ERROR while running compiler:
> stderr:
> nvcc warning : The -std=c++14 flag is not supported with the configured host compiler. Flag will be ignored.
> Source:
> #include "confdefs.h"
> #include "conffix.h"
>
> int main() {
>   ;
>   return 0;
> }
> Rejecting compiler flag -I/usr/tce/packages/spectrum-mpi/ibm/spectrum-mpi-rolling-release/include due to
> nvcc warning : The -std=c++14 flag is not supported with the configured host compiler. Flag will be ignored.
>
>
> Best regards,
>
> Jacob Faibussowitsch
> (Jacob Fai - booss - oh - vitch)
> Cell: (312) 694-3391