[petsc-dev] Unification approach for OpenMP/Threads/OpenCL/CUDA: Part 1: Memory

Karl Rupp rupp at mcs.anl.gov
Sat Oct 6 19:40:45 CDT 2012


Hi Jed,
 >
>     The important point here, however, is the independence from the
>     implementation libraries, otherwise we would have to maintain a
>     separate memory management implementation for each GPU library we
>     possibly interface with.
>
>
> Surely you'll have to implement it differently regardless. Is the issue
> just that you want one publicly visible interface for syncing memory? Or
> would you like a separate interface for synchronizing to different kinds
> of accelerators, but for the logic behind that interface to be reused?

I'm thinking of a common interface that ensures that a certain part of 
the vector (possibly identified by an index set) is 'valid' on a certain 
device. Such functionality is not library-specific (it always boils down 
to copying buffers identified by handles), so I'd like to have it 
implemented only once in PETSc. It would also allow buffers to be moved 
around, e.g. the matrix-vector multiplication is carried out by 
LibraryA, while the preconditioner is applied using LibraryB. With a 
common place to store the handles, it is just a matter of invoking the 
respective subroutines in LibraryA and LibraryB on the supplied buffer.
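
To make this a bit more concrete, here is a rough sketch of what I have 
in mind (the names VecBuffers, VecEnsureValid, etc. are made up for 
illustration only, not actual PETSc API):

#include <stddef.h>

typedef enum {MEM_HOST = 0, MEM_CUDA, MEM_OPENCL, MEM_NSPACES} MemSpace;

typedef struct {
  void   *handle[MEM_NSPACES]; /* raw pointer, CUdeviceptr, cl_mem, ... */
  int     valid[MEM_NSPACES];  /* nonzero if the copy in this space is current */
  size_t  bytes;
} VecBuffers;

/* Make the data valid in 'target' by copying from any space that
   currently holds a valid copy.  The transfer itself is the only
   library-specific part and would be dispatched to the runtime that
   owns the handles (cudaMemcpy, clEnqueueCopyBuffer, ...). */
static int VecEnsureValid(VecBuffers *v, MemSpace target)
{
  int s;
  if (v->valid[target]) return 0;          /* already up to date */
  for (s = 0; s < MEM_NSPACES; ++s) {
    if (v->valid[s]) {
      /* copy v->bytes from v->handle[s] to v->handle[target] here */
      v->valid[target] = 1;
      return 0;
    }
  }
  return -1;                               /* no valid copy anywhere */
}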


> I don't know if anyone wants to simultaneously use different
> accelerators (GPU and MIC?), but I could imagine a Vec type that
> supports memory residing in more than two spaces, choosing to perform
> the operation wherever the data is most current.

Hmm, I'm not sure it is a good idea to assume just one accelerator 
model now, so I wanted to keep the design as general as possible.
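
For example, with the same hypothetical handle table as in the sketch 
above, picking where to run an operation on the "most current" data 
could look roughly like this (again just an illustration, not an actual 
interface):

/* Prefer a memory space in which all operands are already valid, so
   that no transfer is needed; otherwise fall back to the host. */
static MemSpace PickExecutionSpace(const VecBuffers *x, const VecBuffers *y)
{
  int s;
  for (s = MEM_NSPACES - 1; s > MEM_HOST; --s)  /* check accelerators first */
    if (x->valid[s] && y->valid[s]) return (MemSpace)s;
  return MEM_HOST;
}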


> For code navigation, I don't know if you looked at the "gtags" part of
> the user's manual. If you use Emacs or Vim, it's an excellent way to
> tab-complete forward and reverse lookups. Note that PETSc's naming
> convention is good for tab completion of implementations.

Thanks for the hint :-)

Best regards,
Karli



