[petsc-dev] Current status: GPUs for PETSc

Lawrence Mitchell lawrence.mitchell at ed.ac.uk
Mon Nov 5 05:27:53 CST 2012


Hi Karli, and others,

On 05/11/2012 01:51, Karl Rupp wrote:

...

> That's it for now, after some more refining I'll start with a careful 
> migration of the code/concepts into PETSc. Comments are, of course, 
> always welcome.

So we're working on FE assembly + solve on GPUs using FEniCS kernels
(github.com/OP2/PyOP2).  For the GPU solve, it would be nice if we could
backdoor assembled matrices straight onto the GPU.  That is, create a Mat
saying "this is the sparsity pattern" and then, rather than calling
MatSetValues on the host, just pass a pointer to the device data.

At the moment, we're doing a similar thing using CUSP, but are looking at
doing multi-GPU assembly + solve and would like not to have to reinvent too
many wheels, in particular, the MPI-parallel layer.  Additionally, we're
already using PETSc for the CPU-side linear algebra so it would be nice to
use the same interface everywhere.

I guess effectively we'd like something like MatCreateSeqAIJWithArrays and
MatCreateMPIAIJWithSplitArrays but with the ability to pass device pointers
rather than host pointers.  Is there any roadmap in PETSc for this kind of
thing?  Would patches in this direction be welcome?

Cheers,
Lawrence

-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
