[petsc-dev] Backend independent VecGetArray for GPUs
Karl Rupp
rupp at iue.tuwien.ac.at
Mon Oct 20 06:06:49 CDT 2014
Hi Ashwin,
I pushed a function for obtaining the CUDA pointer from a CUSP vector here:
https://bitbucket.org/petsc/petsc/commits/d831094ec27070ea54a249045841367f8aab0976
It currently resides in branch
karlrupp/feature-gpu-handle-access-for-vec
The respective function for ViennaCL will be pushed later.
Please let me know if this works for you, then I'll start the
integration process.
Best regards,
Karli
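
As an illustration, a minimal sketch in C of how such an accessor might be
used. The names VecCUSPGetCUDAArray and VecCUSPRestoreCUDAArray are
placeholders for whatever the branch above actually exports, and error
checking is omitted for brevity:

  #include <petscvec.h>

  int main(int argc, char **argv)
  {
    Vec          v;
    PetscScalar *d_ptr;   /* raw CUDA device pointer */

    PetscInitialize(&argc, &argv, NULL, NULL);
    VecCreate(PETSC_COMM_WORLD, &v);
    VecSetSizes(v, PETSC_DECIDE, 100);
    VecSetType(v, VECCUSP);               /* CUSP-backed GPU vector */
    VecSet(v, 1.0);

    VecCUSPGetCUDAArray(v, &d_ptr);       /* placeholder: obtain the bare device pointer */
    /* ... hand d_ptr to a custom CUDA kernel or a foreign-language wrapper ... */
    VecCUSPRestoreCUDAArray(v, &d_ptr);   /* placeholder: mark the device data as modified */

    VecDestroy(&v);
    PetscFinalize();
    return 0;
  }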
On 10/19/2014 04:12 PM, Ashwin Srinath wrote:
> Thanks Karl!
>
> Ashwin
>
> On Sun, Oct 19, 2014 at 9:45 AM, Karl Rupp <rupp at iue.tuwien.ac.at> wrote:
>
> Hi Ashwin,
>
> I'll add two functions returning the bare CUDA and OpenCL handles.
>
> Best regards,
> Karli
>
>
> On 10/19/2014 03:42 PM, Ashwin Srinath wrote:
>
> Hi everyone
>
> Just wondering what the consensus is. I'm happy to submit a PR if
> someone can tell me what goes where!
>
> Thanks
> Ashwin
>
> On 18 Oct 2014 01:46, "Karl Rupp" <rupp at iue.tuwien.ac.at> wrote:
>
> Hi,
>
> >> > Why would we want this? The packages themselves (CUDA/ViennaCL)
> >> > only expose memory using these specific types. What use is it to
> >> > wrap these up in a void * if you just have to cast back down to
> >> > use them. Isn't it better to maintain type-specific, and type
> >> > safe, interfaces for this stuff?
>
> The biggest problem I faced was that there was no way to access
> device memory using petsc4py - since there is no equivalent for
> VecCUSPGetArray. So returning a raw pointer may not be very helpful
> for C++/CUSP users (they already have a nice way to access device
> memory) but it would definitely make things a lot easier for Python
> users.
>
>
> To me it sounds like something that should be dealt with in the
> library that does the python bindings, not in PETSc itself. (...)
>
>
> Unfortunately, this is not so easy: if the Python wrapper has to take
> care of such a conversion, then it needs to use the *exact same build
> environment* as PETSc. The reason is that the CUSP and ViennaCL types
> are C++ beasts without a defined ABI, so one can run into all sorts
> of hard-to-debug problems when finally linking libpetsc.so with the
> Python wrapper. If, however, PETSc provides these low-level memory
> buffers, the Python wrapper can attach to a well-defined ABI.
>
> Best regards,
> Karli
>
>
>
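
To make the ABI argument above concrete, a sketch in C of the two kinds of
interface; the declarations are illustrative only:

  #include <petscvec.h>

  /* (1) A C++-typed accessor, e.g.
         PetscErrorCode VecCUSPGetArrayReadWrite(Vec v, CUSPARRAY **a);
     forces the caller to compile against the same CUSP headers and the
     same C++ ABI as the PETSc build, which a separately built Python
     wrapper cannot guarantee. */

  /* (2) A raw-handle accessor with C linkage passes only plain C types
     across the library boundary, so any FFI layer (such as the one
     behind a Python wrapper) can bind to it regardless of how
     libpetsc.so was compiled. The name below is hypothetical: */
  #ifdef __cplusplus
  extern "C" {
  #endif
  PetscErrorCode VecGetCUDADevicePointer(Vec v, PetscScalar **d_ptr);
  #ifdef __cplusplus
  }
  #endif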