[petsc-dev] using typedef for each different memory space the variable lives in
Mark Adams
mfadams at lbl.gov
Sun Dec 13 08:39:51 CST 2020
So how do I declare

struct myMat {
  PetscScalar *a;
  PetscInt    *i, *j, nrows;
};

so that I can use (or give me an alternative idiom)

struct myMat h_mat, *d_mat;
...
cudaMemcpy(d_mat, &h_mat, sizeof(struct myMat), cudaMemcpyHostToDevice);
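The idiom being asked about can be sketched in plain C, with malloc/memcpy standing in for cudaMalloc/cudaMemcpy so the sketch runs on the host (the function name and the stand-in PetscScalar/PetscInt typedefs are hypothetical, not PETSc's):

```c
#include <stdlib.h>
#include <string.h>

/* Host-only analogue of the CUDA deep-copy idiom: malloc/memcpy stand in
   for cudaMalloc/cudaMemcpy(..., cudaMemcpyHostToDevice). PetscScalar and
   PetscInt are stand-ins; PETSc configures the real types. */
typedef double PetscScalar;
typedef int    PetscInt;

struct myMat {
  PetscScalar *a;
  PetscInt    *i, *j, nrows;
};

/* Build a "device" copy: h_mat's pointer fields already hold (device)
   allocations, so copying the struct by value moves those pointers plus
   the metadata (nrows) over in one shot. */
struct myMat *myMatToDevice(PetscInt nrows, PetscInt nnz)
{
  struct myMat h_mat, *d_mat;

  /* In CUDA these would be cudaMalloc calls filling h_mat's fields. */
  h_mat.a     = malloc(nnz * sizeof(*h_mat.a));
  h_mat.i     = malloc((nrows + 1) * sizeof(*h_mat.i));
  h_mat.j     = malloc(nnz * sizeof(*h_mat.j));
  h_mat.nrows = nrows;

  /* In CUDA: cudaMalloc(&d_mat, sizeof(*d_mat)); then
     cudaMemcpy(d_mat, &h_mat, sizeof(h_mat), cudaMemcpyHostToDevice); */
  d_mat = malloc(sizeof(*d_mat));
  memcpy(d_mat, &h_mat, sizeof(h_mat));
  return d_mat;
}
```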
And what is PetscKokkosReal*? I assume it refers to the default execution
space in Kokkos. I guess that is fine.
But I guess I don't understand what a "code reader" is. If it's a person (or
a compiler), you could do what I do and use a variable naming convention
like h_ and d_.
Junchao used _d and _h in aijkokkosimpl.hpp, and those variables are typed
for device and host. He had to type them to get them into the Kokkos API,
which does type checking, so I think it would catch, e.g., KokkosReal being
a double while PetscReal is a float.
On Sat, Dec 12, 2020 at 5:42 PM Barry Smith <bsmith at petsc.dev> wrote:
>
>
> On Dec 12, 2020, at 4:23 PM, Mark Adams <mfadams at lbl.gov> wrote:
>
>
>
> On Sat, Dec 12, 2020 at 10:44 AM Barry Smith <bsmith at petsc.dev> wrote:
>
>>
>> Currently we use PetscScalar and PetscScalar * to refer to variables
>> that could be in any memory space. On the CPU, on the GPU, in Kokkos, etc.
>>
>> Would it make sense to use typedef to indicate at each location the
>> true type of the memory location when possible?
>>
>
> No. Absolutely not.
>
> Because CUDA device code is essentially plain C (e.g., printf is provided
> but few standard libraries are, and you can't call non-device functions
> from the device), you can put kernels in a header file and include it in
> the .cu file to get CUDA code. You need to #define things like the device
> declaration syntax (e.g., __device__) and things like atomicAdd. This is
> how MatSetValuesDevice works.
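The shared-header idiom described here can be sketched as follows (a minimal sketch; the macro and function names are hypothetical, not the actual MatSetValuesDevice machinery):

```c
/* Sketch of the shared-header kernel idiom: when compiled by nvcc
   (__CUDACC__ defined) the function is marked for the device and uses the
   CUDA atomicAdd; a plain C compiler sees an ordinary function with a
   plain add. Names here are hypothetical illustrations. */
#if defined(__CUDACC__)
  #define MY_DEVICE_FUNC __host__ __device__
  #define MY_ATOMIC_ADD(p, v) atomicAdd((p), (v))
#else
  #define MY_DEVICE_FUNC
  #define MY_ATOMIC_ADD(p, v) (*(p) += (v))
#endif

/* The same source text now compiles as host C or as CUDA device code. */
MY_DEVICE_FUNC static void my_set_value(double *dest, double v)
{
  MY_ATOMIC_ADD(dest, v);
}
```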
>
> The way I do deep copies in CUDA is to declare a host and a device struct,
> like:
>
> Mat h_mat, *d_mat;
>
> Then I do cudaMallocs into the pointers in h_mat, then a cudaMalloc to get
> d_mat. Then a cudaMemcpy to device puts any data (cudaMalloced) and
> metadata (e.g., array size, dim, etc.) from h_mat into d_mat. I don't know
> how I could do this if h_mat and d_mat were not the same type without even
> more gymnastics.
>
>
> Oh, you would not need to change your code at all; the only difference is
> that in certain places you would have variables declared as PetscKokkosReal
> pointing to Kokkos memory instead of declared as PetscReal. It is, as Jed
> notes, just for the ease of readers and maintainers of the code.
>
>
> The Kokkos people have been working with this for a long time and I think
> they have probably learned the hard way what works. I would look at what
> they do; if they or SYCL do it, it is worth a look.
>
>
> CUDA has always had the same syntax for pointers to GPU memory and CPU
> memory; this seemed odd to me because it means each code reader has to find
> out some other way which pointers actually refer to host memory and which
> point to GPU memory, instead of the code just saying it directly. Kokkos
> seems to have inherited this approach. Of course with unified memory there
> is no distinction, so maybe there is no reason to have different names.
>
>
>
>
>>
>> typedef PetscReal PetscKokkosReal means the variable is in the
>> Kokkos memory space
>>
>
> There is no such thing. There is the default execution space, the default
> host space, the CUDA memory space, etc.
>
>
>> typedef PetscReal PetscCUDAReal;
>> typedef PetscReal PetscNVSHMEMReal;
>>
>> etc.
>>
>> Then one could write things like
>>
>> struct {
>>   ...
>>   PetscNVSHMEMReal *values;
>> };
>>
>> Similarly, inside kernels one would use the type associated with the
>> kernel: CUDA types with CUDA kernels, etc.
>>
>> I think the resulting code will be much clearer and easier to dive
>> into than having to first figure out where each variable lives.
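The proposal as described would look something like this (a sketch; the stand-in definition of PetscReal and the struct below are hypothetical illustrations, not PETSc code):

```c
/* Sketch of the proposed typedefs: every alias is the same underlying
   type, but the name documents which memory space a pointer refers to.
   PetscReal's stand-in and the struct are hypothetical. */
typedef double PetscReal;           /* stand-in; PETSc configures this */
typedef PetscReal PetscCUDAReal;    /* "lives in CUDA device memory"   */
typedef PetscReal PetscKokkosReal;  /* "lives in Kokkos memory"        */

typedef struct {
  PetscReal     *h_values; /* host array                               */
  PetscCUDAReal *d_values; /* device array: identical type, but the
                              declaration tells the reader where it
                              lives                                    */
} ExampleVec;
```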
>>
>> I find the current code confusing because one cannot immediately see a
>> variable declaration and know where it lives, even though it does live
>> somewhere in particular.
>>
>> Barry
>>
>>
>>
>>
>>
>>
>