[petsc-dev] Not possible to do a VecPlaceArray for veccusp

Jose E. Roman jroman at dsic.upv.es
Fri Feb 26 06:27:57 CST 2016


> On 25 Feb 2016, at 17:19, Dominic Meiser <dmeiser at txcorp.com> wrote:
> 
> On Thu, Feb 25, 2016 at 01:13:01PM +0100, Jose E. Roman wrote:
>> We are doing some GPU development on the SLEPc side, and we need a way to place the array of a VECCUSP vector by providing a GPU address. Specifically, we want to have a large Vec on the GPU and slice it into several smaller Vecs.
>> 
>> For GetArray/RestoreArray we already have all the variants we need:
>> - VecGetArray: gets the pointer to the buffer stored in CPU memory
>> - VecCUSPGetArray*: returns a CUSPARRAY object that contains some info, including the buffer allocated in GPU memory
>> - VecCUSPGetCUDAArray*: returns the raw pointer to the GPU buffer
>> 
>> The problem comes with the PlaceArray equivalents. With VecPlaceArray we can provide a new pointer to CPU memory. We wanted to implement the equivalent for the GPU, but we ran into difficulties due to Thrust. If we wanted to provide a VecCUSPPlaceCUDAArray, the problem is that Thrust does not allow wrapping an existing GPU buffer in a CUSPARRAY object (creating a CUSPARRAY always allocates new memory). On the other hand, a VecCUSPPlaceArray could be implemented, but the caller would have to provide a CUSPARRAY obtained from VecCUSPGetArray* without modification (pointer arithmetic on a CUSPARRAY is not possible).
>> 
>> Any thoughts?
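To make the use case concrete, here is roughly what we would like to be able to write. This is only a sketch: the exact VecCUSPGetCUDAArray*/Restore names and signatures are assumed, and VecCUSPPlaceCUDAArray is the hypothetical routine that is missing.

#include <petscvec.h>

/* Alias a sub-range of a large VECCUSP vector as a smaller Vec, directly on the GPU. */
PetscErrorCode SliceOnGPU(Vec big,PetscInt offset,Vec small)
{
  PetscScalar    *d_big;   /* raw pointer into GPU memory */
  PetscErrorCode ierr;

  ierr = VecCUSPGetCUDAArrayReadWrite(big,&d_big);CHKERRQ(ierr);   /* assumed name/signature */
  /* The CPU analogue would simply be VecPlaceArray(small,h_big+offset);
     the GPU analogue below does not exist, which is the problem: */
  /* ierr = VecCUSPPlaceCUDAArray(small,d_big+offset);CHKERRQ(ierr); */
  /* ... work with small ... */
  ierr = VecCUSPRestoreCUDAArrayReadWrite(big,&d_big);CHKERRQ(ierr);
  return 0;
}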
>> 
> 
> I think your and Karli's analysis is correct; this is currently
> not supported.  Besides Karli's proposal to use ViennaCL's CUDA
> backend, a different option might be to use cusp's array views.
> These have a constructor for sub-ranges of other cusp arrays:
> 
> https://github.com/cusplibrary/cusplibrary/blob/master/cusp/array1d.h#L409
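For reference, a minimal sketch (C++; the type names here are chosen only for illustration) of such a sub-range view, which aliases part of an existing device array instead of allocating new memory:

#include <petscsys.h>       /* PetscScalar */
#include <cusp/array1d.h>

typedef cusp::array1d<PetscScalar,cusp::device_memory> CUSPARRAY;
typedef cusp::array1d_view<CUSPARRAY::iterator>        CUSPARRAYVIEW;

/* View of [offset, offset+n) of an existing device array: no allocation,
   no copy, it simply aliases big's memory. */
CUSPARRAYVIEW SubRange(CUSPARRAY &big,size_t offset,size_t n)
{
  return CUSPARRAYVIEW(big.begin()+offset,big.begin()+offset+n);
}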
> 
> However, enabling cusp array views in something like
> VecCUSPPlaceArray is not immediately possible.  The CUSPARRAY
> type, which is currently hardwired to be
> cusp::array1d<PetscScalar,cusp::device_memory>, would have to
> become a template parameter.  I'm not sure if we want to go down
> that path.

Yes, we do not like this.

> 
> The alternative would be to use raw CUDA pointers instead of cusp
> arrays for GPU memory in VecCUSP.  That would be a fairly
> significant undertaking (certainly more than the 2-3 weeks Karli
> is estimating for getting the ViennaCL CUDA backend in).

Do you mean creating a new class VECCUDA, in addition to VECCUSP and VECVIENNACL? That could be a solution for us. Would it also mean refactoring MATAIJCUSPARSE to work with these new Vecs?

If there is interest, we can help add this.
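Just to illustrate what we mean (all names below are hypothetical, and we assume the GPU data would hang off spptr the same way it does for VECCUSP): with a raw pointer, the place operation becomes a plain pointer swap, with no Thrust container involved.

#include <petsc/private/vecimpl.h>   /* for Vec->spptr; header path may differ by PETSc version */

/* Hypothetical private data of a VECCUDA vector: the device buffer is a
   plain pointer instead of a cusp::array1d. */
typedef struct {
  PetscScalar *GPUarray;             /* raw device memory */
} Vec_CUDA;

/* Sketch of a PlaceArray-style operation for such a class: just swap the
   pointer (real code would save the old pointer for a ResetArray and keep
   the host/device coherence flags consistent). */
static PetscErrorCode VecCUDAPlaceArray_Sketch(Vec v,PetscScalar *d_array)
{
  Vec_CUDA *vcuda = (Vec_CUDA*)v->spptr;   /* assumes GPU data lives in spptr, as for VECCUSP */
  vcuda->GPUarray = d_array;
  return 0;
}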


> 
> Cheers,
> Dominic
> 
> -- 
> Dominic Meiser
> Tech-X Corporation - 5621 Arapahoe Avenue - Boulder, CO 80303



