[petsc-dev] Building PETSc on LLNL Lassen

Jed Brown jed at jedbrown.org
Mon Dec 14 09:29:37 CST 2020


Honestly, we should pass -ffp-contract=fast in our standard optimization options. I'm not aware of any floating-point code in PETSc sensitive enough [1] to need strict evaluation, and if anything does, we can use:

#pragma STDC FP_CONTRACT OFF

[1] An example is result = a*a - a*a (where the two occurrences of a come from separate sources, so the compiler cannot simply fold the expression to zero). With contraction, this may be evaluated as

  x = a * a;          // simple multiply
  result = a * a - x; // fused multiply-add yields nonzero

The FMA carries the exact product through the operation and rounds only the final result. That is generally more accurate, but it breaks the expectation that a*a - a*a == 0, which can matter in some circumstances (e.g., convergence tolerances when up against machine precision).
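
A minimal standalone illustration of the effect (hypothetical example code, assuming a compiler that honors both -ffp-contract and the pragma; whether the "contracted" result is nonzero depends on the compiler actually emitting an FMA):

  #include <stdio.h>

  // With contraction enabled (e.g. -ffp-contract=fast), the compiler may
  // evaluate a*a - x as fma(a, a, -x), exposing the rounding error of x.
  static double contracted(double a)
  {
    double x = a * a;   // rounded product
    return a * a - x;   // may contract to an FMA: generally nonzero
  }

  // The pragma requests strict evaluation for this block, so both products
  // round identically and the difference is exactly zero.
  static double strict(double a)
  {
  #pragma STDC FP_CONTRACT OFF
    double x = a * a;
    return a * a - x;
  }

  int main(void)
  {
    double a = 1.0 + 1e-8;
    printf("contracted: %g  strict: %g\n", contracted(a), strict(a));
    return 0;
  }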

Pierre Jolivet <pierre at joliv.et> writes:

> Don’t know if this still applies, but here is what Jed said about this 10 months ago (https://gitlab.com/petsc/petsc/-/merge_requests/2466#note_275686270):
> “A minor issue is that -ffp-contract=off is implied by -std=c*, versus -ffp-contract=fast being default otherwise (with GCC).”
>
> Thanks,
> Pierre
>
>> On 14 Dec 2020, at 9:35 AM, Stefano Zampini <stefano.zampini at gmail.com> wrote:
>> 
>> While we are discussing this dialect stuff, do we still want to test for gnu++{11|14} extensions before testing for c++{11|14}? We get warnings when using Kokkos, since it replaces gnu++14 with c++14. What is the added value of using gnu++{11|14}?
>> 
>> On Mon, Dec 14, 2020 at 08:21, Barry Smith <bsmith at petsc.dev> wrote:
>> 
>>   So, two configure sins: setting values during configure without logging them in configure.log, and setting values without testing them with configure tests :-)
>> 
>>   We need to finally deal properly with setting -ccbin in configure; just defaulting it causes all kinds of undebuggable problems. Why can't we also just set it to the C++ compiler? Is it because it cannot handle the MPI C++ compiler wrapper and so needs to be set to the underlying C++ compiler?
>> 
>>   Barry
>> 
>>   It is also strange that Jacob's configure.log seems to believe the IBM compiler is a GNU compiler (but because of poor logging, configure does not log this information; a fix is coming in an MR soon).
>> 
>>> On Dec 13, 2020, at 6:32 PM, Junchao Zhang <junchao.zhang at gmail.com> wrote:
>>> 
>>> Jacob,
>>>   Do you need to add 'CUDAFLAGS=-ccbin xlc++' to specify the host compiler for CUDA? Note that in cuda.py I added:
>>> 
>>>     if self.compilers.cxxdialect in ['C++11','C++14']: #nvcc is a C++ compiler so it is always good to add -std=xxx. It is even crucial when using thrust complex (see MR 2822)
>>>       self.setCompilers.CUDAFLAGS += ' -std=' + self.compilers.cxxdialect.lower()
>>> 
>>>  In your configure.log, there are 
>>> #define PETSC_HAVE_CXX_DIALECT_CXX11 1
>>> #define PETSC_HAVE_CXX_DIALECT_CXX14 1
>>> 
>>> I guess without -ccbin, nvcc uses gcc by default and your gcc does not support C++14.
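>>> 
>>> A configure invocation along those lines (untested, just adapting the command from Jacob's message quoted below) might be:
>>> 
>>>   ./configure --with-cc=mpicc --with-cxx=mpiCC --with-fc=mpifort --with-cuda --with-debugging=1 'CUDAFLAGS=-ccbin xlc++' PETSC_ARCH=arch-linux-c-debug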
>>> 
>>> --Junchao Zhang
>>> 
>>> 
>>> On Sun, Dec 13, 2020 at 1:25 PM Jacob Faibussowitsch <jacob.fai at gmail.com> wrote:
>>> Hello All,
>>> 
>>> Does anyone have any experience building PETSc with CUDA support on Lassen? I’ve been having trouble building with the IBM XL compilers + Spectrum MPI + nvcc. nvcc does not seem to like the -std=c++14 argument, complaining that its configured host compiler doesn’t support it, yet compiling the following "test.cc":
>>> 
>>> #include <stdlib.h>
>>> 
>>> int main(int argc, char **argv)
>>> {
>>>   int i = 1;
>>>   i += argc;
>>>   return(i);
>>> }
>>> 
>>> with mpicc -std=c++14 produces zero errors.
>>> ------------------------------------------------------------------------
>>> 
>>> Modules loaded:
>>> 
>>> module load xl/2020.11.12-cuda-11.1.1                                                               
>>> module load spectrum-mpi
>>> module load cuda/11.1.1
>>> module load python/3.8.2
>>> module load cmake
>>> module load valgrind
>>> module load lapack
>>> 
>>> My configure commands:
>>> 
>>> ./configure  --with-cc=mpicc --with-cxx=mpiCC --with-fc=mpifort --with-cuda --with-debugging=1 PETSC_ARCH=arch-linux-c-debug
>>> 
>>> The error:
>>> 
>>> TESTING: findMPIInc from config.packages.MPI(config/BuildSystem/config/packages/MPI.py:636)
>>> *******************************************************************************
>>>          UNABLE to CONFIGURE with GIVEN OPTIONS    (see configure.log for details):
>>> -------------------------------------------------------------------------------
>>> Bad compiler flag: -I/usr/tce/packages/spectrum-mpi/ibm/spectrum-mpi-rolling-release/include
>>> *******************************************************************************
>>> 
>>> The actual configure.log error:
>>> 
>>> Executing: nvcc -c -o /var/tmp/petsc-2v0k4k61/config.setCompilers/conftest.o -I/var/tmp/petsc-2v0k4k61/config.setCompilers -I/var/tmp/petsc-2v0k4k61/config.types  -g -std=c++14 -I/usr/tce/packages/spectrum-mpi/ibm/spectrum-mpi-rolling-release/include  -Wno-deprecated-gpu-targets /var/tmp/petsc-2v0k4k61/config.setCompilers/conftest.cu
>>> Possible ERROR while running compiler:
>>> stderr:
>>> nvcc warning : The -std=c++14 flag is not supported with the configured host compiler. Flag will be ignored.
>>> Source:
>>> #include "confdefs.h"
>>> #include "conffix.h"
>>> 
>>> int main() {
>>> ;
>>>   return 0;
>>> }
>>>                   Rejecting compiler flag -I/usr/tce/packages/spectrum-mpi/ibm/spectrum-mpi-rolling-release/include  due to 
>>> nvcc warning : The -std=c++14 flag is not supported with the configured host compiler. Flag will be ignored.
>>> 
>>> 
>>> Best regards,
>>> 
>>> Jacob Faibussowitsch
>>> (Jacob Fai - booss - oh - vitch)
>>> Cell: (312) 694-3391
>> 
>> 
>> 
>> -- 
>> Stefano

