[petsc-dev] Building PETSc on LLNL Lassen

Jacob Faibussowitsch jacob.fai at gmail.com
Sun Dec 13 19:54:59 CST 2020


Junchao’s suggestion was on the money it seems.

I played around with building petsc using various compilers (xl, pgi, gcc, clang) and a few versions of each, and they all had issues except gcc and clang. As it turns out, the gcc that nvcc falls back on on Lassen is positively geriatric: I believe gcc v4.9.3, since that is the default gcc loaded on the system, and -std=c++14 wasn’t introduced to gcc until v5.2. This gcc is only upgraded automatically if one loads the gcc or clang compiler modules (as these override “gcc” either via replacement or an alias). Using -ccbin to override nvcc’s default host compiler and point it at mpicc worked.
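
Roughly what the working invocation looks like (a sketch reconstructed from my original configure line below plus the CUDAFLAGS addition; pointing -ccbin at mpicc rather than, say, xlc++ is just my choice of host compiler):

./configure --with-cc=mpicc --with-cxx=mpiCC --with-fc=mpifort --with-cuda --with-debugging=1 CUDAFLAGS='-ccbin mpicc' PETSC_ARCH=arch-linux-c-debug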

So to sum up: configure checked mpicc (= xl v2020.11.12), correctly found that it supports c++14, and therefore assumed that nvcc’s host compiler also supports it, adding the flag to the nvcc arguments. But that wasn’t the case, since nvcc silently defaults to gcc, which in this case was too old. I also tried to find a way to figure out exactly which host compiler nvcc is using, but so far no luck.
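
One thing I haven’t tried yet (just a sketch, untested against Lassen’s nvidia-wrapper): nvcc’s --dryrun option prints every sub-command it would run, including the host compiler invocation, so something along the lines of

echo 'int main(void){return 0;}' > check.cu
nvcc --dryrun -c check.cu -o check.o 2>&1 | grep -iE 'gcc|g\+\+'

ought to show which host compiler binary nvcc actually resolves (and whether -ccbin is being honored).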

Note that Lassen also does some funky silent argument additions, which you can see if you run “nvcc -vvvv”, but I don’t think these affected configure (since the c++14 flag comes from Junchao’s additions in cuda.py):

Setting default C++ std via -std=c++14 to match g++ default <—————— Silently does this if your “g++” supports -std=c++14
Set TMPDIR to /var/tmp/faibuss to workaround name conflicts
nvidia-wrapper executing after fixup:
export LLNL_CALLED_FROM_NVCC=11.1.1
+ exec /usr/tce/packages/cuda/cuda-11.1.1/nvidia/bin/nvcc -std=c++14 -Xlinker '"-rpath=/usr/tce/packages/cuda/cuda-11.1.1/nvidia/lib64:/usr/tce/packages/cuda/cuda-11.1.1"'
nvcc fatal   : No input files specified; use option --help for more information

Best regards,

Jacob Faibussowitsch
(Jacob Fai - booss - oh - vitch)
Cell: (312) 694-3391

> On Dec 13, 2020, at 18:32, Junchao Zhang <junchao.zhang at gmail.com> wrote:
> 
> Jacob,
>   Do you need to add  'CUDAFLAGS=-ccbin xlc++' to specify the host compiler for CUDA? Note in cuda.py I added
> 
>     if self.compilers.cxxdialect in ['C++11','C++14']: #nvcc is a C++ compiler so it is always good to add -std=xxx. It is even crucial when using thrust complex (see MR 2822)
>       self.setCompilers.CUDAFLAGS += ' -std=' + self.compilers.cxxdialect.lower()
> 
>  In your configure.log, there are 
> #define PETSC_HAVE_CXX_DIALECT_CXX11 1
> #define PETSC_HAVE_CXX_DIALECT_CXX14 1
> 
> I guess without -ccbin, nvcc uses gcc by default and your gcc does not support C++14.
> 
> --Junchao Zhang
> 
> 
> On Sun, Dec 13, 2020 at 1:25 PM Jacob Faibussowitsch <jacob.fai at gmail.com> wrote:
> Hello All,
> 
> Does anyone have any experience building petsc with cuda support on Lassen? I’ve been having trouble building with the ibm xl compilers + spectrum-mpi + nvcc. NVCC seems not to like the -std=c++14 argument, complaining that its configured host compiler doesn’t support it, yet compiling the following “test.cc”:
> 
> #include <stdlib.h>
> 
> int main(int argc, char **argv)
> {
>   int i = 1;
>   i += argc;
>   return(i);
> }
> 
> with “mpicc -std=c++14 test.cc” produces zero errors.
> ------------------------------------------------------------------------
> 
> Modules loaded:
> 
> module load xl/2020.11.12-cuda-11.1.1                                                               
> module load spectrum-mpi
> module load cuda/11.1.1
> module load python/3.8.2
> module load cmake
> module load valgrind
> module load lapack
> 
> My configure commands:
> 
> ./configure  --with-cc=mpicc --with-cxx=mpiCC --with-fc=mpifort --with-cuda --with-debugging=1 PETSC_ARCH=arch-linux-c-debug
> 
> The error:
> 
> TESTING: findMPIInc from config.packages.MPI(config/BuildSystem/config/packages/MPI.py:636)
> *******************************************************************************
>          UNABLE to CONFIGURE with GIVEN OPTIONS    (see configure.log for details):
> -------------------------------------------------------------------------------
> Bad compiler flag: -I/usr/tce/packages/spectrum-mpi/ibm/spectrum-mpi-rolling-release/include
> *******************************************************************************
> 
> The actual configure.log error:
> 
> Executing: nvcc -c -o /var/tmp/petsc-2v0k4k61/config.setCompilers/conftest.o -I/var/tmp/petsc-2v0k4k61/config.setCompilers -I/var/tmp/petsc-2v0k4k61/config.types  -g -std=c++14 -I/usr/tce/packages/spectrum-mpi/ibm/spectrum-mpi-rolling-release/include  -Wno-deprecated-gpu-targets /var/tmp/petsc-2v0k4k61/config.setCompilers/conftest.cu
> Possible ERROR while running compiler:
> stderr:
> nvcc warning : The -std=c++14 flag is not supported with the configured host compiler. Flag will be ignored.
> Source:
> #include "confdefs.h"
> #include "conffix.h"
> 
> int main() {
> ;
>   return 0;
> }
>                   Rejecting compiler flag -I/usr/tce/packages/spectrum-mpi/ibm/spectrum-mpi-rolling-release/include  due to 
> nvcc warning : The -std=c++14 flag is not supported with the configured host compiler. Flag will be ignored.
> 
> 
> Best regards,
> 
> Jacob Faibussowitsch
> (Jacob Fai - booss - oh - vitch)
> Cell: (312) 694-3391
