[petsc-dev] Backend independent VecGetArray for GPUs
Karl Rupp
rupp at iue.tuwien.ac.at
Sat Oct 18 01:46:46 CDT 2014
Hi,
>>> Why would we want this? The packages themselves (CUDA/ViennaCL) only
>>> expose memory using these specific types. What use is it to wrap
>>> these up in a void * if you just have to cast back down to use them?
>>> Isn't it better to maintain type-specific, and type-safe, interfaces
>>> for this stuff?
>>
>> The biggest problem I faced was that there was no way to access
>> device memory using petsc4py, since there is no equivalent of
>> VecCUSPGetArray. So returning a raw pointer may not be very helpful
>> for C++/CUSP users (they already have a nice way to access device
>> memory), but it would definitely make things a lot easier for Python
>> users.
>
> To me it sounds like something that should be dealt with in the
> library that does the Python bindings, not in PETSc itself. (...)
Unfortunately, this is not so easy: if the Python wrapper has to take
care of such a conversion, then it needs to use the *exact same build
environment* as PETSc. The reason is that the CUSP and ViennaCL types
are C++ beasts without a defined ABI, so one can run into all sorts of
hard-to-debug problems when finally linking libpetsc.so with the Python
wrapper. If, however, PETSc provides these low-level memory buffers
directly, the Python wrapper can attach to a well-defined ABI.
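
For illustration, here is a minimal sketch of the pattern. The names
(backend_get_array, Container) are hypothetical and not part of PETSc
or any proposed API, and std::vector merely stands in for a device
container such as cusp::array1d or viennacl::vector so the sketch
compiles without GPU libraries. The point is only that the public
signature uses void * and a size, which have a well-defined C ABI, so a
separately built caller never touches the C++ container type:

// Hypothetical sketch, not actual PETSc API: a C++ backend exposes its
// buffer through a plain-C entry point, so callers such as a Python
// wrapper never see the C++ container type and its fragile ABI.
#include <cstdio>
#include <vector>

// Stand-in for a device container (cusp::array1d, viennacl::vector, ...);
// std::vector is used here so the sketch compiles without GPU libraries.
typedef std::vector<double> Container;

extern "C" void backend_get_array(void *container, void **raw,
                                  unsigned long *n)
{
  Container *c = static_cast<Container *>(container);
  *raw = static_cast<void *>(c->data()); // raw buffer, no C++ type exposed
  *n   = (unsigned long)c->size();
}

int main()
{
  Container v(4, 1.0);
  void *raw;
  unsigned long n;
  backend_get_array(&v, &raw, &n); // only void * and a size cross the ABI
  std::printf("got %lu entries at %p\n", n, raw);
  return 0;
}

With an entry point of this shape, a wrapper like petsc4py could obtain
the raw device pointer without having to be built with the exact same
C++ toolchain and flags as PETSc itself.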
Best regards,
Karli