Error building PNetCDF 1.12.2 with NVHPC 22.1 under CentOS 8

Carl Ponder cponder at nvidia.com
Fri Jan 28 13:37:06 CST 2022


I'm not set up to use the Git branches; I only download the versioned 
tar-files.
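For the record, the workaround that emerged below (pointing the serial-utility compiler at the same compiler the MPI wrappers use) can be sketched as follows; the compiler names and install prefix are illustrative, not a verified recipe:

```shell
# Illustrative sketch: make PNetCDF's configure use pgcc for the serial
# utilities (ncoffsets, ncvalidator, pnetcdf_version) instead of falling
# back to the system /bin/gcc.
export CC=$(command -v pgcc)        # serial C compiler for the main build
export MPICC=$(command -v mpicc)    # MPI wrapper built on top of pgcc
export SEQ_CC=$(command -v pgcc)    # compiler for the serial utility programs
./configure --prefix="$HOME/pnetcdf"
make && make install
```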

------------------------------------------------------------------------
Subject: 	Re: Error building PNetCDF 1.12.2 with NVHPC 22.1 under CentOS 8
Date: 	Fri, 28 Jan 2022 18:51:17 +0000
From: 	Wei-Keng Liao <wkliao at northwestern.edu>
To: 	Carl Ponder <cponder at nvidia.com>
CC: 	parallel-netcdf at mcs.anl.gov <parallel-netcdf at mcs.anl.gov>



External email: Use caution opening links or attachments


Hi, Carl

I pushed a fix to the master branch.
Could you please give it a try and let me know
if it works (without setting SEQ_CC)?

Wei-keng

> On Jan 28, 2022, at 12:32 PM, Wei-Keng Liao <wkliao at northwestern.edu> wrote:
>
> Thanks. I will add SEQ_CC to the help message.
>
> I assume the value of "ac_cv_mpi_compiler_base_MPICC" is the
> same as what you got from command `which pgcc`.
>
> FYI. There are 3 utility programs that are designed to run
> in serial: ncoffsets, ncvalidator, and pnetcdf_version.
> In the cross-compile environment, I need to extract the C
> compiler from the MPI compiler, which sets SEQ_CC.
>
> Wei-keng
>
>> On Jan 28, 2022, at 4:19 AM, Carl Ponder <cponder at nvidia.com> wrote:
>>
>>
>> OK, this fixed the problem:
>> export SEQ_CC=`which pgcc`
>> The SEQ_CC setting isn't mentioned by the command
>> ./configure --help
>> Could you get it listed there somehow?
>> I had to find it by looking at the messages in the script.
>>
>> Subject:     Re: Error building PNetCDF 1.12.2 with NVHPC 22.1 under CentOS 8
>> Date:        Fri, 28 Jan 2022 03:16:00 -0600
>> From:        Carl Ponder <cponder at nvidia.com>
>> Reply-To:    Carl Ponder <cponder at nvidia.com>
>> To:          Wei-Keng Liao <wkliao at northwestern.edu>
>>
>>
>>
>> The value is this
>> ac_cv_mpi_compiler_base_MPICC=/home/cponder/WRF/PGI/100.NVHPC/Linux_x86_64/21.11/compilers/bin/pgcc
>> ac_cv_path_ac_cv_mpi_compiler_base_MPICC=/home/cponder/WRF/PGI/100.NVHPC/Linux_x86_64/21.11/compilers/bin/pgcc
>> ac_cv_mpi_compiler_base_MPICC='/home/cponder/WRF/PGI/100.NVHPC/Linux_x86_64/21.11/compilers/bin/pgcc'
>> which I *think* is what I want, since I built OpenMPI to use the PGI compiler.
>> The problem, as I see it at least, is that it's trying to use gcc for the utility programs instead:
>> checking for mpiexec... /home/cponder/WRF/PGI/105.OpenMPI/bin/mpiexec
>> checking for gcc... /bin/gcc
>> checking C compiler for serial utility programs... /bin/gcc
>>
>> If I don't add the PGI include-paths, then at least gcc doesn't pick up the wrong ones.
>>
>> Subject:     Re: Error building PNetCDF 1.12.2 with NVHPC 22.1 under CentOS 8
>> Date:        Thu, 27 Jan 2022 22:34:41 +0000
>> From:        Wei-Keng Liao <wkliao at northwestern.edu>
>> To:          Carl Ponder <cponder at nvidia.com>
>>
>>
>> Can you check the value of variable "ac_cv_mpi_compiler_base_MPICC"
>> printed in your config.log?
>>
>> If it points to the PGI compiler, then configure.ac needs to be fixed.
>> Otherwise, config.log can help me to identify the problem.
>>
>> Wei-keng
>>
>>> On Jan 27, 2022, at 4:29 PM, Carl Ponder <cponder at nvidia.com> wrote:
>>>
>>>
>>> I can get around the problem by *NOT* setting the include-paths to the NVHPC compiler:
>>> # export           CPATH+=:$PGI/include
>>> # export  C_INCLUDE_PATH+=:$PGI/include
>>> # export    INCLUDE_PATH+=:$PGI/include
>>> I can work with this, but I haven't had to do it with the other libraries that use the CC/MPICC etc. compiler settings for everything.
>>> Do you still need to see the config.log?
>>>
>>> Subject:      Re: Error building PNetCDF 1.12.2 with NVHPC 22.1 under CentOS 8
>>> Date: Thu, 27 Jan 2022 22:16:25 +0000
>>> From: Wei-Keng Liao <wkliao at northwestern.edu>
>>> To:   Carl Ponder <cponder at nvidia.com>
>>> CC:   parallel-netcdf at mcs.anl.gov <parallel-netcdf at mcs.anl.gov>
>>>
>>>
>>>
>>> Hi, Carl
>>>
>>> Please send me file "config.log". I will take a look.
>>>
>>> Wei-keng
>>>
>>>
>>>
>>>> On Jan 27, 2022, at 3:42 PM, Carl Ponder <cponder at nvidia.com> wrote:
>>>>
>>>> I'm getting this error:
>>>> /bin/gcc -I../../../src/utils/ncvalidator -o cdfdiff cdfdiff.c
>>>> In file included from /usr/include/stdlib.h:55,
>>>>                  from /home/cponder/WRF/PGI/100.NVHPC/Linux_x86_64/21.11/compilers/include/stdlib.h:13,
>>>>                  from cdfdiff.c:19:
>>>> /home/cponder/WRF/PGI/100.NVHPC/Linux_x86_64/21.11/compilers/include/bits/floatn.h:60:17: error: two or more data types in declaration specifiers
>>>>    typedef float _Float32;
>>>>                  ^~~~~~~~
>>>> /home/cponder/WRF/PGI/100.NVHPC/Linux_x86_64/21.11/compilers/include/bits/floatn.h:63:18: error: two or more data types in declaration specifiers
>>>>    typedef double _Float64;
>>>>                   ^~~~~~~~
>>>> /home/cponder/WRF/PGI/100.NVHPC/Linux_x86_64/21.11/compilers/include/bits/floatn.h:74:18: error: two or more data types in declaration specifiers
>>>>    typedef double _Float32x;
>>>>                   ^~~~~~~~~
>>>> /home/cponder/WRF/PGI/100.NVHPC/Linux_x86_64/21.11/compilers/include/bits/floatn.h:78:25: error: two or more data types in declaration specifiers
>>>>      typedef long double _Float64x;
>>>>                          ^~~~~~~~~
>>>> make[3]: *** [Makefile:847: cdfdiff] Error 1
>>>> make[3]: Leaving directory '/home/cponder/WRF/PGI/A.106.PNetCDF/distro/src/utils/ncmpidiff'
>>>> make[2]: *** [Makefile:545: all-recursive] Error 1
>>>> make[2]: Leaving directory '/home/cponder/WRF/PGI/A.106.PNetCDF/distro/src/utils'
>>>> make[1]: *** [Makefile:476: all-recursive] Error 1
>>>> make[1]: Leaving directory '/home/cponder/WRF/PGI/A.106.PNetCDF/distro/src'
>>>> make: *** [Makefile:533: all-recursive] Error 1
>>>> I believe the problem is that it's trying to use the system-default gcc here:
>>>> checking for mpiexec... /home/cponder/WRF/PGI/105.OpenMPI/bin/mpiexec
>>>> checking for gcc... /bin/gcc
>>>> checking C compiler for serial utility programs... /bin/gcc
>>>> which is incompatible with all the NVHPC-specific paths that I'm setting for the overall build:
>>>> export CC=`which pgcc`
>>>> export CXX=`which pgc++`
>>>> export F77=`which pgf77`
>>>> export F90=`which pgf90`
>>>> export FC=`which pgfortran`
>>>>
>>>> export MPICC=`which mpicc`
>>>> export MPICXX=`which mpicxx`
>>>> export MPIF77=`which mpif77`
>>>> export MPIF90=`which mpif90`
>>>>
>>>> export   CFLAGS="-fPIC -m64 -tp=px"
>>>> export CXXFLAGS="-fPIC -m64 -tp=px"
>>>> export  FCFLAGS="-fPIC -m64 -tp=px"
>>>>
>>>> export LDFLAGS+=" -L$PGI/cuda/lib64 -lnvidia-ml"
>>>> I'm guessing that the older compilers and OS levels didn't enable these types, so there was no collision during compilation.
>>>> Is there a way to override the gcc default for the utilities? I don't see any such setting in the configure --help output.
>>>>
>>>>
>>>>
>>>>
>>>
>>
>