[petsc-dev] MPI derived datatype use in PETSc
Junchao Zhang
junchao.zhang at gmail.com
Wed Sep 23 14:03:03 CDT 2020
DMPlex does use MPI_Type_create_struct(), but for matrices and vectors we only
use MPIU_SCALAR.
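As a minimal sketch of what MPI_Type_create_struct() does, assuming a made-up
record type (the Rec layout below is illustrative, not the type DMPlex actually
builds):

  #include <mpi.h>
  #include <stddef.h>

  typedef struct { int tag; double vals[3]; } Rec;  /* hypothetical record */

  static MPI_Datatype RecType(void)
  {
    MPI_Datatype rec;
    int          blocklens[2] = {1, 3};
    MPI_Aint     displs[2]    = {offsetof(Rec, tag), offsetof(Rec, vals)};
    MPI_Datatype types[2]     = {MPI_INT, MPI_DOUBLE};
    MPI_Type_create_struct(2, blocklens, displs, types, &rec);
    MPI_Type_commit(&rec);   /* caller releases it later with MPI_Type_free() */
    return rec;
  }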
In PETSc, we always pack non-contiguous data before calling MPI, since most
index sets are irregular. Using MPI_Type_indexed() etc. probably does not
provide any benefit there.
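For illustration only, assuming a hypothetical array x and index list idx, the
MPI_Type_indexed() alternative to packing would look roughly like this:

  #include <mpi.h>
  #include <stdlib.h>

  /* send x[idx[0]], ..., x[idx[n-1]] without copying them into a send buffer */
  static void SendIndexed(const double *x, const int *idx, int n, int dest, MPI_Comm comm)
  {
    MPI_Datatype ghosts;
    int         *blocklens = (int *)malloc(n * sizeof(int));
    for (int i = 0; i < n; i++) blocklens[i] = 1;   /* one scalar per index */
    MPI_Type_indexed(n, blocklens, idx, MPI_DOUBLE, &ghosts);
    MPI_Type_commit(&ghosts);
    MPI_Send(x, 1, ghosts, dest, 0, comm);
    MPI_Type_free(&ghosts);
    free(blocklens);
  }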
The only place I can think of that could benefit from derived datatypes is
DMDA. The ghost points there can be described with MPI_Type_vector(), which
would save the packing/unpacking and the associated buffers.
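As a sketch, assuming a 2D DMDA whose local array has row length nx, ny rows,
and stencil width sw (names made up here, not the PETSc API), the rightmost
interior columns could be sent directly with a strided type:

  #include <mpi.h>

  static void SendGhostColumns(const double *a, int nx, int ny, int sw, int dest, MPI_Comm comm)
  {
    MPI_Datatype col;
    /* ny blocks of sw contiguous doubles, one block per row, row stride nx */
    MPI_Type_vector(ny, sw, nx, MPI_DOUBLE, &col);
    MPI_Type_commit(&col);
    MPI_Send(a + (nx - sw), 1, col, dest, 0, comm);  /* no pack buffer needed */
    MPI_Type_free(&col);
  }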
--Junchao Zhang
On Wed, Sep 23, 2020 at 12:30 PM Victor Eijkhout <eijkhout at tacc.utexas.edu>
wrote:
> The MVAPICH people at Ohio State are working on getting better performance
> out of MPI datatypes. I notice that there are 5 million lines in the PETSc
> source that reference MPI datatypes. So, just as a wild guess:
>
> Optimizations on MPI Datatypes seem to be beneficial mostly if you’re
> sending blocks of at least a kilobyte each. Is that a plausible usage
> scenario? What is the typical use of MPI Datatypes in PETSc, and what type
> of datatype would most benefit from optimization?
>
> Victor.