[petsc-dev] Backend-independent VecGetArray for GPUs

Dominic Meiser dmeiser at txcorp.com
Fri Oct 17 09:54:36 CDT 2014


Hi Ashwin,

Are you suggesting that `VecGetGPUArray*` be added to the `Vec` 
interface? That might be problematic because these methods only make 
sense for GPU vectors; the interface of `Vec` would then be tied to the 
PETSc configuration.

An alternative might be to compose the various `VecCUSPGetArray` and 
related methods with `Vec` objects (via `PetscObjectComposeFunction`). 
You could then query for these methods and use them if available, 
handling their absence as needed, e.g. by raising an error. This would 
be easy to add, as sketched below.
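
A minimal sketch of what this could look like for a Vec `v` (the 
"VecGetGPUArray_C" key and the `VecGetGPUArray_CUSP` implementation are 
illustrative names, not existing PETSc symbols):

  PetscErrorCode ierr;
  PetscErrorCode (*getgpuarray)(Vec,PetscScalar**) = NULL;

  /* Inside the GPU Vec implementation, at creation time: attach the
     backend-specific accessor to the object under a string key. */
  ierr = PetscObjectComposeFunction((PetscObject)v,"VecGetGPUArray_C",
                                    VecGetGPUArray_CUSP);CHKERRQ(ierr);

  /* In user code: query for the method; the pointer stays NULL if
     this Vec type does not provide it. */
  ierr = PetscObjectQueryFunction((PetscObject)v,"VecGetGPUArray_C",
                                  &getgpuarray);CHKERRQ(ierr);
  if (getgpuarray) {
    PetscScalar *d_array;
    ierr = (*getgpuarray)(v,&d_array);CHKERRQ(ierr);
    /* ... hand d_array to custom CUDA kernels ... */
  } else {
    SETERRQ(PETSC_COMM_SELF,PETSC_ERR_SUP,
            "Vec type does not provide a GPU array accessor");
  }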

I'm not an expert on the PETSc object model so hopefully somebody else 
can comment on whether this would be an idiomatic PETSc solution.

Cheers,
Dominic


On 10/17/2014 08:21 AM, Ashwin Srinath wrote:
> Hello, petsc-dev
>
> When working with petsc4py, I realized that there was no way to access 
> the underlying device memory. The routine that PETSc provides is 
> VecCUSPGetArrayRead/Write, which, of course, makes no sense in Python.
>
> So I wrote a petsc4py extension that extracts and returns the raw 
> device pointer from the underlying CUSP array. With this raw pointer, 
> I'm able to construct a PyCUDA GPUArray and apply my own kernels to 
> the underlying buffers. My code is available here: 
> https://github.com/ashwinsrnth/petsc-pycuda
>
> After discussion with Lisandro Dalcin, I think that it might be a good 
> idea for PETSc to provide a routine `VecGetGPUArray` (in place of or 
> in addition to the current `VecCUSPGetArray`) which returns a raw 
> pointer to device memory, and lets the user decide what to do with it.
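>
> A rough sketch of the signature (the names and the Get/Restore pairing 
> are just a proposal, following the usual PETSc convention):
>
>     /* Return a raw pointer to the Vec's device memory; the user
>        decides what to do with it. */
>     PetscErrorCode VecGetGPUArray(Vec v, PetscScalar **d_array);
>     PetscErrorCode VecRestoreGPUArray(Vec v, PetscScalar **d_array);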
>
> Do you think this could fit into PETSc? If so, I already have an 
> implementation, but could use help with the interface.
>
> Thank you
> Ashwin


-- 
Dominic Meiser
Tech-X Corporation
5621 Arapahoe Avenue
Boulder, CO 80303
USA
Telephone: 303-996-2036
Fax: 303-448-7756
www.txcorp.com



