[petsc-dev] Backend independent VecGetArray for GPUs

Ashwin Srinath ashwinsrnth at gmail.com
Sun Oct 19 09:12:23 CDT 2014


Thanks Karl!

Ashwin

On Sun, Oct 19, 2014 at 9:45 AM, Karl Rupp <rupp at iue.tuwien.ac.at> wrote:

> Hi Ashwin,
>
> I'll add two functions returning the bare CUDA and OpenCL handles (a
> sketch of what such an interface could look like is appended below the
> quoted thread).
>
> Best regards,
> Karli
>
>
> On 10/19/2014 03:42 PM, Ashwin Srinath wrote:
>
>> Hi everyone
>>
>> Just wondering what the consensus is. I'm happy to submit a PR if
>> someone can tell me what goes where!
>>
>> Thanks
>> Ashwin
>>
>> On 18 Oct 2014 01:46, "Karl Rupp" <rupp at iue.tuwien.ac.at> wrote:
>>
>>     Hi,
>>
>>      >> > Why would we want this? The packages themselves
>>      >> > (CUDA/ViennaCL) only expose memory using these specific
>>      >> > types. What use is it to wrap these up in a void * if you
>>      >> > just have to cast back down to use them? Isn't it better
>>      >> > to maintain type-specific, and type-safe, interfaces for
>>      >> > this stuff?
>>
>>             The biggest problem I faced was that there was no way to
>>             access device memory using petsc4py, since there is no
>>             equivalent for VecCUSPGetArray. So returning a raw pointer
>>             may not be very helpful for C++/CUSP users (they already
>>             have a nice way to access device memory), but it would
>>             definitely make things a lot easier for Python users (a
>>             petsc4py sketch is appended below the quoted thread).
>>
>>
>>         To me it sounds like something that should be dealt with in
>>         the library that does the Python bindings, not in PETSc
>>         itself. (...)
>>
>>
>>     Unfortunately, this is not so easy: if the Python wrapper has to
>>     take care of such a conversion, then it needs to use the *exact
>>     same build environment* as PETSc. The reason is that the CUSP and
>>     ViennaCL types are C++ beasts with no defined ABI, so one can run
>>     into all sorts of hard-to-debug problems when finally linking
>>     libpetsc.so with the Python wrapper. If, however, PETSc provides
>>     these low-level memory buffers, the Python wrapper can attach to a
>>     well-defined ABI.
>>
>>     Best regards,
>>     Karli
>>
>>
>
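
To make the ABI argument concrete, here is a minimal sketch of how a
Python wrapper could attach to such a bare-handle accessor via ctypes.
The names VecCUDAGetArrayHandle and VecCUDARestoreArrayHandle and their
signatures are assumptions for illustration only, not an interface PETSc
actually exports; the point is that a function yielding a raw device
pointer has a well-defined C ABI that any wrapper build can call,
whereas a CUSP/ViennaCL C++ container type does not.

    # Sketch under stated assumptions: VecCUDAGetArrayHandle and
    # VecCUDARestoreArrayHandle are hypothetical C functions exported
    # by libpetsc.so, each returning a PetscErrorCode and yielding the
    # bare CUDA device pointer through their second argument.
    import ctypes

    petsc = ctypes.CDLL('libpetsc.so')

    def get_device_pointer(vec_handle):
        # vec_handle is the C-level Vec, e.g. petsc4py's vec.handle.
        d_ptr = ctypes.c_void_p()
        ierr = petsc.VecCUDAGetArrayHandle(ctypes.c_void_p(vec_handle),
                                           ctypes.byref(d_ptr))
        if ierr != 0:
            raise RuntimeError("VecCUDAGetArrayHandle failed: %d" % ierr)
        return d_ptr.value  # plain integer device address

    def restore_device_pointer(vec_handle, addr):
        d_ptr = ctypes.c_void_p(addr)
        ierr = petsc.VecCUDARestoreArrayHandle(ctypes.c_void_p(vec_handle),
                                               ctypes.byref(d_ptr))
        if ierr != 0:
            raise RuntimeError("VecCUDARestoreArrayHandle failed: %d" % ierr)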
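Once petsc4py wraps such an accessor, the device memory behind a GPU Vec
becomes directly usable from Python, e.g. through PyCUDA. Again a sketch
only: getCUDAHandle/restoreCUDAHandle are assumed petsc4py bindings for
the hypothetical accessor above, the 'cusp' Vec type requires a PETSc
build configured with CUSP, and float64 assumes PetscScalar is double.

    import numpy as np
    import pycuda.autoinit             # creates a CUDA context
    import pycuda.gpuarray as gpuarray
    from petsc4py import PETSc

    v = PETSc.Vec().create()
    v.setType('cusp')                  # GPU-backed Vec (CUSP backend)
    v.setSizes(16)
    v.set(1.0)

    ptr = v.getCUDAHandle()            # assumed binding: raw device address
    a = gpuarray.GPUArray(shape=(v.getLocalSize(),), dtype=np.float64,
                          gpudata=ptr)
    a *= 2.0                           # scale PETSc's device memory in place
    v.restoreCUDAHandle(ptr)           # assumed counterpart releasing it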