[petsc-dev] PETSc - MPI3 functionality
Smith, Barry F.
bsmith at mcs.anl.gov
Fri Sep 7 21:28:17 CDT 2018
The VecScatter routines are in a big state of flux right now as we move from a monolithic implementation (where many cases were handled with cumbersome if checks in the code) to simpler, independent, standalone implementations that easily allow new implementations orthogonal to the current ones. So it is not a good time to dive in.
We are working on this refactoring, but it is a bit frustrating and slow.
Can you tell us why you feel you need a custom implementation? Is the current implementation too slow, and if so, how did you measure that?
> On Sep 7, 2018, at 12:26 PM, Tamara Dancheva <tamaradanceva19933 at gmail.com> wrote:
> I am developing an asynchronous method for a FEM solver and need a custom implementation of the VecScatterBegin and VecScatterEnd routines. Since PETSc uses its own limited set of MPI functions, could you tell me the best way to extend it to use, for example, the non-blocking collectives such as MPI_Igatherv?
> I hope the question is specific enough; let me know if I can provide more information. I would very much appreciate any help. Thanks in advance!