[petsc-dev] PETSc - MPI3 functionality
Junchao Zhang
jczhang at mcs.anl.gov
Fri Sep 7 14:06:51 CDT 2018
I have two PETSc pull requests for VecScatter: PR #1037 adds MPI3 process
shared memory, and PR #1047 adds an MPI3 neighborhood collective
(MPI_Ineighbor_alltoallv).
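In case a standalone illustration helps, below is roughly what the
neighborhood-collective approach looks like in plain MPI. This is only a
sketch, not code from the PR; all buffer, count, and displacement names are
made up for the example, and a symmetric neighbor pattern is assumed.

#include <mpi.h>

/* Sketch: exchange ghost values only with graph neighbors using the MPI3
   non-blocking neighborhood collective. Names are illustrative. */
void ghost_exchange(MPI_Comm comm, int nneighbors, const int neighbors[],
                    const double *sendbuf, const int sendcounts[], const int sdispls[],
                    double *recvbuf, const int recvcounts[], const int rdispls[])
{
  MPI_Comm    nbrcomm;
  MPI_Request req;

  /* Distributed graph communicator whose edges are the halo neighbors
     (sources == destinations for a symmetric exchange). */
  MPI_Dist_graph_create_adjacent(comm,
                                 nneighbors, neighbors, MPI_UNWEIGHTED,
                                 nneighbors, neighbors, MPI_UNWEIGHTED,
                                 MPI_INFO_NULL, 0, &nbrcomm);

  /* Post the non-blocking neighborhood all-to-all... */
  MPI_Ineighbor_alltoallv(sendbuf, sendcounts, sdispls, MPI_DOUBLE,
                          recvbuf, recvcounts, rdispls, MPI_DOUBLE,
                          nbrcomm, &req);

  /* ...overlap local computation that does not need ghost values here... */

  MPI_Wait(&req, MPI_STATUS_IGNORE);
  MPI_Comm_free(&nbrcomm);
}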
The master branch already contains another approach that applies MPI3
process shared memory to VecScatter. You can enable it with the command line
option -vecscatter_type mpi3.
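For example, something along these lines should pick up that option. This is
a minimal, untested sketch; it assumes the Vecs and index sets already exist
and trims most error handling, and the helper name scatter_with_options is
just for the example.

#include <petscvec.h>

/* Sketch: create a scatter, let the options database choose its type
   (so -vecscatter_type mpi3 takes effect), then use it as usual. */
PetscErrorCode scatter_with_options(Vec x, IS ix, Vec y, IS iy)
{
  VecScatter     ctx;
  PetscErrorCode ierr;

  ierr = VecScatterCreate(x, ix, y, iy, &ctx);CHKERRQ(ierr);
  ierr = VecScatterSetFromOptions(ctx);CHKERRQ(ierr);  /* honors -vecscatter_type */
  ierr = VecScatterBegin(ctx, x, y, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(ctx, x, y, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterDestroy(&ctx);CHKERRQ(ierr);
  return 0;
}

You would then run the application with -vecscatter_type mpi3 on the command
line; whether the explicit VecScatterSetFromOptions() call is needed depends
on your PETSc version.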
--Junchao Zhang
On Fri, Sep 7, 2018 at 12:26 PM Tamara Dancheva <
tamaradanceva19933 at gmail.com> wrote:
> Hi,
>
> I am developing an asynchronous method for a FEM solver and need a custom
> implementation of the VecScatterBegin and VecScatterEnd routines. Since
> PETSc uses its own limited set of MPI functions, could you tell me what the
> best way would be to extend it to use, for example, the non-blocking
> collectives (MPI_Igatherv and so on)?
>
> I hope the question is specific enough; let me know if anything is unclear
> and I can provide more information. I would very much appreciate any help,
> thanks in advance!
> thanks in advance!
>
> Best,
> Tamara
>