[petsc-dev] sm_70
Stefano Zampini
stefano.zampini at gmail.com
Sat Sep 26 00:36:55 CDT 2020
I have added the configure option --with-cuda-gencodearch=XX to pass that
information down to the builds of the external packages. We are not currently
using it in the library itself.
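
For example (a sketch only; --with-cuda-gencodearch is the actual new option,
the other options shown are just illustrative placeholders):

  ./configure --with-cuda=1 --with-cuda-gencodearch=70 \
      --download-kokkos --download-kokkos-kernels

would pass the sm_70 target down to the configure/build of the downloaded
packages.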
On Sat, Sep 26, 2020, 06:08 Barry Smith <bsmith at petsc.dev> wrote:
>
>
> On Sep 25, 2020, at 8:36 PM, Jacob Faibussowitsch <jacob.fai at gmail.com>
> wrote:
>
> Configure by default should find out the available GPU and build for that
> sm_*; it should not require the user to set this (how the heck is the user
> going to know what to set?). If I remember correctly, there is a utility
> available that gives this information.
>
> For CUDA I believe the tool is nvidia-smi. Should make sure this automatic
> detection works when configuring --with-batch though, since login nodes might
> have a different arch than the compute nodes.
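>
> A small standalone query along the following lines (just a sketch of the
> idea, not anything configure currently runs) could report the compute
> capability when executed on a compute node:
>
>   #include <stdio.h>
>   #include <cuda_runtime.h>
>
>   int main(void)
>   {
>     int            dev = 0, count = 0;
>     cudaDeviceProp prop;
>
>     if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
>       fprintf(stderr, "No CUDA device visible\n");
>       return 1;
>     }
>     cudaGetDeviceProperties(&prop, dev);
>     /* e.g. prints sm_70 on a V100 (major 7, minor 0) */
>     printf("sm_%d%d\n", prop.major, prop.minor);
>     return 0;
>   }
>
> Compiled with nvcc and run under the batch system, this reports the back-end
> value rather than whatever the login node has.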
>
>
> Would someone buying a machine be so perverse as to put a different
> GPU on the front end than on the back end? Sadly, yes, your point is good:
> somehow, on batch systems, one wants the back-end GPU information.
>
> Maybe on Cray systems it is hidden in one of the six million
> environment variables they set.
>
> Barry
>
>
> Best regards,
>
> Jacob Faibussowitsch
> (Jacob Fai - booss - oh - vitch)
> Cell: (312) 694-3391
>
> On Sep 25, 2020, at 21:09, Barry Smith <bsmith at petsc.dev> wrote:
>
>
> Configure by default should find out the available GPU and build for
> that sm_*; it should not require the user to set this (how the heck is the
> user going to know what to set?). If I remember correctly, there is a
> utility available that gives this information.
>
> For generic builds, like those in package distributions, I don't know how it
> should work; ideally all the possibilities would be available in the
> library and at run time the correct one would be used.
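>
> (One common way to get such a "build for everything" library is an nvcc fat
> binary; a sketch, with an illustrative set of architectures:
>
>   nvcc -gencode arch=compute_60,code=sm_60 \
>        -gencode arch=compute_70,code=sm_70 \
>        -gencode arch=compute_70,code=compute_70 \
>        -c foo.cu -o foo.o
>
> This embeds device code for sm_60 and sm_70 plus PTX for compute_70, which
> the driver can JIT at run time for newer GPUs.)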
>
> Barry
>
>
> On Sep 25, 2020, at 5:49 PM, Mark Adams <mfadams at lbl.gov> wrote:
>
> '--CUDAFLAGS=-arch=sm_70',
>
> seems to fix this.
>
> On Fri, Sep 25, 2020 at 6:31 PM Mark Adams <mfadams at lbl.gov> wrote:
>
>> I see Kokkos and hypre have an sm_70 flag, but I don't see one for PETSc.
>>
>> It looks like you have to specify this to get modern atomics to work in
>> CUDA. I get:
>>
>> /ccs/home/adams/petsc/include/petscaijdevice.h(99): error: no instance of
>> overloaded function "atomicAdd" matches the argument list
>> argument types are: (double *, double)
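>>
>> (For context: the atomicAdd() overload for double is only defined when
>> compiling for compute capability 6.0 or higher, so with nvcc's default
>> target the overload is missing. A minimal reproducer, assuming a V100-class
>> card and an arbitrary file name repro.cu:
>>
>>   __global__ void addone(double *x)
>>   {
>>     atomicAdd(x, 1.0); /* double overload: requires sm_60 or newer */
>>   }
>>
>> This compiles with nvcc -arch=sm_70 -c repro.cu and fails with the same
>> "no instance of overloaded function" error when the architecture flag is
>> omitted and the default target is below sm_60.)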
>>
>> I tried using a Kokkos configuration, thinking I could get these sm_70
>> flags, but that did not work.
>>
>> Any ideas?
>>
>> Mark
>>
>
>
>
>