<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<br>
<br>
<div class="moz-cite-prefix">On 3/22/19 3:28 PM, Mills, Richard Tran wrote:<br>
</div>
<blockquote type="cite" cite="mid:7cd914a8-178f-6e54-cbf4-693dca3f7732@anl.gov">On 3/22/19 12:13 PM, Balay, Satish wrote:<br>
<blockquote type="cite" cite="mid:alpine.LFD.2.21.1903221412180.19977@sb">
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">Is there currently an existing check like this somewhere? Or will things just fail when running 'make' right now?
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">Most likely no. Its probably best to attempt the error case - and
figure-out how to add a check.</pre>
</blockquote>
I gave things a try and verified that there is no check for this anywhere in configure -- things just fail at 'make' time. I think that all we need is a test that will try to compile a simple, valid C program using "nvcc --compiler-options=&lt;compiler options
PETSc has identified&gt; &lt;CUDAFLAGS&gt;". If the test fails, it should report something like "Compiler flags do not work with the CUDA compiler; perhaps you need to use -ccbin in CUDAFLAGS to specify the intended host compiler".<br>
<br>
I'm not sure where this test should go. Does it make sense for this to go in cuda.py with the other checks like checkNVCCDoubleAlign()? If so, how do I get at the values of &lt;compiler options PETSc has identified&gt; and &lt;CUDAFLAGS&gt;? I'm not sure what modules I
need to import from BuildSystem...<br>
</blockquote>
OK, answering part of my own question here: Re-familiarizing myself with how the configure packages work, and then looking through the makefiles, I see that the argument to "--compiler-options" is filled in by the makefile variables<br>
<br>
${PCC_FLAGS} ${CFLAGS} ${CCPPFLAGS}<br>
<br>
and it appears that this partly maps to self.compilers.CFLAGS in BuildSystem. But so far I've not managed to employ the right combination of find and grep to figure out where PCC_FLAGS and CCPPFLAGS come from.<br>
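As a standalone illustration (outside BuildSystem, with a hypothetical helper name -- cuda.py would of course use BuildSystem's own compile machinery), the check I have in mind would amount to something like this: compile a trivial C file with the CUDA compiler, forwarding the host-compiler flags via --compiler-options, and report success or failure:<br>

```python
import os
import subprocess
import tempfile

def flags_work_with_cuda_compiler(cudac, compiler_flags, cudaflags):
    """Try to compile a trivial C program with the CUDA compiler,
    forwarding the host-compiler flags configure has chosen.
    Returns True on success, False on any failure.
    (Hypothetical helper for illustration only.)"""
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, 'conftest.c')
        obj = os.path.join(tmp, 'conftest.o')
        with open(src, 'w') as f:
            f.write('int main(void) { return 0; }\n')
        cmd = [cudac]
        if compiler_flags:
            # nvcc forwards these options to the host compiler (the -ccbin target)
            cmd.append('--compiler-options=' + compiler_flags)
        cmd += cudaflags.split() + ['-c', src, '-o', obj]
        try:
            return subprocess.run(cmd, stdout=subprocess.PIPE,
                                  stderr=subprocess.PIPE).returncode == 0
        except OSError:  # compiler executable not found
            return False
```

If this returned False, configure could raise a RuntimeError carrying the '-ccbin' hint above.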
<br>
--Richard<br>
<blockquote type="cite" cite="mid:7cd914a8-178f-6e54-cbf4-693dca3f7732@anl.gov"><br>
--Richard<br>
<blockquote type="cite" cite="mid:alpine.LFD.2.21.1903221412180.19977@sb">
<pre class="moz-quote-pre" wrap="">Satish
On Fri, 22 Mar 2019, Mills, Richard Tran via petsc-dev wrote:
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">On 3/18/19 7:29 PM, Balay, Satish wrote:
On Tue, 19 Mar 2019, Mills, Richard Tran via petsc-dev wrote:
Colleagues,
It took me a while to get PETSc to build at all with anything on Summit other than the GNU compilers, but, once this was accomplished, editing out the isGNU() test and then passing something like
'--with-cuda=1',
'--with-cudac=nvcc -ccbin pgc++',
Does the following also work?
--with-cuda=1 --with-cudac=nvcc CUDAFLAGS='-ccbin pgc++'
Yes, using CUDAFLAGS as above also works, and that does seem to be a better way to do things.
After experimenting with a lot of different builds on Summit, and doing more reading about how CUDA compilation works on different platforms, I'm now thinking that perhaps configure.py should *avoid* doing anything clever to try to figure out what the value of "-ccbin" should be. For one, this is not anything that NVIDIA's toolchain does for the user in the first place: if you want to use nvcc with a host compiler that isn't whatever NVIDIA considers the default (g++ on Linux, clang on Mac OS, MSVC on Windows), NVIDIA expects you to provide the appropriate '-ccbin' argument. Second, nvcc isn't the only CUDA compiler that a user might want to use: some people use Clang directly to compile CUDA code. Third, which host compilers are supported appears to be platform dependent; for example, GCC is the default/preferred host compiler on Linux, but isn't even supported on Mac OS! Figuring out what is supported is very convoluted, and I think that trying to get configure to determine this may be more trouble than it is worth. I think we should instead let the user try whatever they like, and print out a helpful message that they "may need to specify the host compiler to nvcc with -ccbin" if the CUDA compiler doesn't seem to work. Also, I'll put something about this in the CUDA configure examples. Any objections?
Sometimes we have extra options in configure for specific features, for
ex: --with-pic, --with-visibility, etc.
But that gets messy. On the cuda side we had --with-cuda-arch and at
some point eliminated it [so CUDAFLAGS is now the interface for this
flag]. We could add a --with-cuda-internal-compiler option to petsc
configure - but it would again have similar drawbacks. I personally
think most users will gravitate towards specifying such options via
CUDAFLAGS
Passing options like those above to configure works fine. So, I should make a change to the BuildSystem cuda.py along these lines. I'm wondering exactly how I should make this work. I could just remove the check,
sure
but I think that maybe the better thing to do is to check isGNU(), then if the compiler is *not* GNU, configure should add the appropriate '-ccbin' argument to "--with-cudac", unless the user has specified '-ccbin' in their '--with-cudac' already. Or do we need to get this fancy?
The check should be: do --compiler-options= constructed by PETSc configure work with CUDAC
Is there currently an existing check like this somewhere? Or will things just fail when running 'make' right now?
[or perhaps we should just trim the --compiler-options to only the -I flags?]
I think we should avoid an explicit check for a compiler type [i.e. the isGNU() check] as much as possible.
CUDA is only supposed to work with certain compilers, but there doesn't seem to be a correct official list (for instance, it supposedly won't work with the IBM XL compilers, but they certainly *are* actually supported on Summit). Heck, the latest GCC suite won't even work right now. Since what compilers are supported seems to be in flux, I suggest we just let the user try anything and then let things fail if it doesn't work.
I suspect the list is dependent on the install [for ex: Linux vs Windows vs something else?] and the version of CUDA [for ex: each version of cuda supports only specific versions of gcc]
Yes, you are correct about this, as I detailed above.
Satish
--Richard
On 3/12/19 8:45 PM, Smith, Barry F. wrote:
Richard,
You need to remove the isGNU() test and then experiment with getting the Nvidia tools to use the compiler you want it to use.
No one has made a serious effort to use any other compilers but Gnu (at least not publicly).
Barry
On Mar 12, 2019, at 10:40 PM, Mills, Richard Tran via petsc-dev <a class="moz-txt-link-rfc2396E" href="mailto:petsc-dev@mcs.anl.gov" moz-do-not-send="true">&lt;petsc-dev@mcs.anl.gov&gt;</a> wrote:
Fellow PETSc developers,
If I try to configure PETSc with CUDA support on the ORNL Summit system using non-GNU compilers, I run into an error due to the following code in packages/cuda.py:
def configureTypes(self):
import config.setCompilers
if not config.setCompilers.Configure.isGNU(self.setCompilers.CC, self.log):
raise RuntimeError('Must use GNU compilers with CUDA')
...
Is this just because this code predates support for other host compilers with nvcc, or is there perhaps some more subtle reason that I, with my inexperience using CUDA, don't know about? I'm guessing that I just need to add support for using '-ccbin' appropriately to set the location of the non-GNU host compiler, but maybe there is something that I'm missing. I poked around in the petsc-dev mailing list archives and can find a few old threads on using non-GNU compilers, but I'm not sure what conclusions were reached.
Best regards,
Richard
</pre>
</blockquote>
</blockquote>
<br>
</blockquote>
<br>
</body>
</html>