[petsc-dev] Makefile.user and CUDA

Patrick Sanan patrick.sanan at gmail.com
Wed Oct 14 09:38:01 CDT 2020


Here's a hack that seems to do what I want, but I don't think it's library-quality, as it extracts information from the "cuda" package in what seems like a brittle way:

https://gitlab.com/petsc/petsc/-/merge_requests/3345
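On the consumer side, the idea is that Makefile.user could then query the CUDA settings from petsc.pc via pkg-config's --variable mechanism. A sketch of what that might look like (the variable names "cudacompiler" and "cudaflags" are placeholders, not necessarily what the MR exports):

CUDAC       := $(shell pkg-config --variable=cudacompiler petsc)
CUDAC_FLAGS := $(shell pkg-config --variable=cudaflags petsc)

# compile .cu with the same nvcc and flags PETSc was configured with
%.o: %.cu
	$(CUDAC) $(CUDAC_FLAGS) $(shell pkg-config --cflags petsc) -c -o $@ $<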

> Am 13.10.2020 um 03:00 schrieb Jed Brown <jed at jedbrown.org>:
> 
> Matthew Knepley <knepley at gmail.com <mailto:knepley at gmail.com>> writes:
> 
>> On Mon, Oct 12, 2020 at 3:03 PM Patrick Sanan <patrick.sanan at gmail.com>
>> wrote:
>> 
>>> 
>>> 
>>> Am 12.10.2020 um 20:11 schrieb Matthew Knepley <knepley at gmail.com>:
>>> 
>>> On Mon, Oct 12, 2020 at 3:47 AM Patrick Sanan <patrick.sanan at gmail.com>
>>> wrote:
>>> 
>>>> I have a toy application code built on PETSc which needs to compile and
>>>> link a .cu file.
>>>> 
>>>> I'd love to be able to configure PETSc (with CUDA), and then use a
>>>> modified version of share/petsc/Makefile.user to compile and link my code,
>>>> using a consistent set of compilers, libraries, and flags.  Makefile.user
>>>> uses petsc.pc (via pkg-config) and implicit GNU make rules to do almost
>>>> everything for you for C, C++, and Fortran.
>>>> 
>>>> However, I don't think it currently supports CUDA, and I'm not familiar
>>>> enough with BuildSystem or pkg-config to quickly add support myself, so I
>>>> resort to the "old" way, including things like this in my Makefile:
>>>> 
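>>>> (Roughly along these lines -- a sketch reconstructing the usual
>>>> pattern from the petscvariables names CUDAC, CUDAC_FLAGS,
>>>> CUDA_INCLUDE, and CUDA_LIB; the exact fragment may differ:)
>>>> 
>>>> include ${PETSC_DIR}/lib/petsc/conf/variables
>>>> 
>>>> # compile .cu with the nvcc and flags PETSc was configured with
>>>> %.o: %.cu
>>>> 	${CUDAC} ${CUDAC_FLAGS} ${CUDA_INCLUDE} -c -o $@ $<
>>>> 
>>>> # link against PETSc plus the CUDA libraries
>>>> app: app.o
>>>> 	${CLINKER} -o $@ $^ ${PETSC_LIB} ${CUDA_LIB}
>>>> 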
>>> 
>>> Okay, here is where the pkg-config file gets generated, and in fact where
>>> the language sections are:
>>> 
>>> 
>>> https://gitlab.com/petsc/petsc/-/blob/master/config/PETSc/Configure.py#L161
>>> 
>>> I think you can just put a "CUDA" section in. I do not know what names to
>>> use, so I have not done it.
>>> 
>>> 
>>> That's where I got stuck as well :D
>>> 
>>> I can follow the pattern to get the compiler (CUDAC) and some of the
>>> flags, but that doesn't seem to have all the information that CUDAC,
>>> CUDAC_FLAGS, CUDA_INCLUDE, and CUDA_LIB provide in petscvariables.
>>> 
>> 
>> I can dig around and find all these for you.
>> 
>> 
>>> Naively I would assume I'd need to dig around to figure out how those
>>> variables are populated, and get the same info into petsc.pc.
>>> 
>>> But, I worry that I'm misunderstanding fundamentals about pkg-config and
>>> how it is supposed to work. Am I free to put whatever fields I want in
>>> there, or is there some authority on what's "standard"?
>> 
>> I do not understand anything about pkg-config (and think it is a
>> fundamentally misguided mechanism). Jed, how should CUDA work with this?
> 
> I have no experience, but here are some examples.  I think the caller is responsible for using nvcc if you have *.cu source.
> 
> $ cat /usr/lib/pkgconfig/cublas.pc 
> cudaroot=/opt/cuda
> libdir=${cudaroot}/targets/x86_64-linux/lib
> includedir=${cudaroot}/targets/x86_64-linux/include
> 
> Name: cublas
> Description: CUDA BLAS Library
> Version: 11.0
> Libs: -L${libdir} -lcublas
> Cflags: -I${includedir}
> 
> 
> $ cat /usr/lib/pkgconfig/nvrtc.pc 
> cudaroot=/opt/cuda
> libdir=${cudaroot}/targets/x86_64-linux/lib
> includedir=${cudaroot}/targets/x86_64-linux/include
> 
> Name: nvrtc
> Description: A runtime compilation library for CUDA C++
> Version: 11.0
> Libs: -L${libdir} -lnvrtc
> Cflags: -I${includedir}
