[petsc-dev] MPI derived datatype use in PETSc
Matthew Knepley
knepley at gmail.com
Wed Sep 23 14:57:27 CDT 2020
On Wed, Sep 23, 2020 at 3:03 PM Junchao Zhang <junchao.zhang at gmail.com>
wrote:
> DMPlex has MPI_Type_create_struct(). But for matrices and vectors, we
> only use MPIU_SCALAR.
>
We create small datatypes for pairs of values and the like. The Plex usage is
also for a very small type. Most of this is done to avoid multiple reductions
rather than to improve throughput.
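
As a minimal sketch of that pattern (not actual PETSc code; the SumMax type,
SumMaxOp, and the values being reduced are hypothetical), a small struct
datatype plus a user-defined MPI_Op lets one MPI_Allreduce carry two
quantities that would otherwise take two separate reductions:

#include <mpi.h>
#include <stddef.h>

/* Hypothetical pair: combine a sum and a max in one MPI_Allreduce
   instead of two separate reductions. */
typedef struct {
  double sum;
  double max;
} SumMax;

static void SumMaxOp(void *in, void *inout, int *len, MPI_Datatype *dtype)
{
  SumMax *a = (SumMax *)in, *b = (SumMax *)inout;
  for (int i = 0; i < *len; i++) {
    b[i].sum += a[i].sum;
    if (a[i].max > b[i].max) b[i].max = a[i].max;
  }
}

int main(int argc, char **argv)
{
  MPI_Datatype pairtype;
  MPI_Datatype types[2]  = {MPI_DOUBLE, MPI_DOUBLE};
  int          blocks[2] = {1, 1};
  MPI_Aint     displs[2] = {offsetof(SumMax, sum), offsetof(SumMax, max)};
  MPI_Op       op;
  SumMax       local, global;
  int          rank;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  local.sum = local.max = (double)rank;   /* stand-ins for locally computed values */

  MPI_Type_create_struct(2, blocks, displs, types, &pairtype);
  MPI_Type_commit(&pairtype);
  MPI_Op_create(SumMaxOp, 1 /* commutative */, &op);

  /* One reduction moves both quantities. */
  MPI_Allreduce(&local, &global, 1, pairtype, op, MPI_COMM_WORLD);

  MPI_Op_free(&op);
  MPI_Type_free(&pairtype);
  MPI_Finalize();
  return 0;
}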
Thanks,
Matt
> In PETSc, we always pack non-contiguous data before calling MPI, since
> most indices are irregular. Using MPI_Type_indexed() etc. probably does not
> provide any benefit.
> The only place I can think of that could benefit from derived datatypes is
> in DMDA. The ghost points can be described with MPI_Type_vector(), as in the
> sketch below the quoted messages, and we could save the packing/unpacking
> and the associated buffers.
>
> --Junchao Zhang
>
>
> On Wed, Sep 23, 2020 at 12:30 PM Victor Eijkhout <eijkhout at tacc.utexas.edu>
> wrote:
>
>> The Ohio MVAPICH people are working on getting better performance out of
>> MPI datatypes. I notice that there are 5 million lines in the PETSc source
>> that reference MPI datatypes. So, just as a wild guess:
>>
>> Optimizations on MPI datatypes seem to be beneficial mostly if you're
>> sending blocks of at least a kilobyte each. Is that a plausible usage
>> scenario? What is the typical use of MPI datatypes in PETSc, and what type
>> of datatype would most benefit from optimization?
>>
>> Victor.
>
>
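
For the DMDA point above, a minimal sketch of the strided-ghost-column idea
(a hypothetical row-major 2D array and a ring exchange, not PETSc's DMDA
code): MPI_Type_vector describes one non-contiguous column, so a halo column
can be sent and received in place without pack/unpack buffers.

#include <mpi.h>

int main(int argc, char **argv)
{
  enum {nrows = 8, ncols = 10};           /* local block, row-major storage */
  double       a[nrows][ncols];
  MPI_Datatype coltype;
  int          rank, size;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  for (int i = 0; i < nrows; i++)
    for (int j = 0; j < ncols; j++) a[i][j] = rank;

  /* nrows blocks of 1 double, spaced ncols doubles apart: one column. */
  MPI_Type_vector(nrows, 1, ncols, MPI_DOUBLE, &coltype);
  MPI_Type_commit(&coltype);

  if (size > 1) {
    int left  = (rank + size - 1) % size;
    int right = (rank + 1) % size;
    /* Send the last owned column right; receive the left ghost column. */
    MPI_Sendrecv(&a[0][ncols - 2], 1, coltype, right, 0,
                 &a[0][0],         1, coltype, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  }

  MPI_Type_free(&coltype);
  MPI_Finalize();
  return 0;
}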
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/