[petsc-dev] use of hash table vs array in various places in PETSc

Jed Brown jedbrown at mcs.anl.gov
Wed Sep 21 00:25:54 CDT 2011


On Wed, Sep 21, 2011 at 01:27, Barry Smith <bsmith at mcs.anl.gov> wrote:

> MPI_GraphComm_init( ......ownership on each process, indices , ...., &ctx);
>
>  MPI_GraphComm_start(ctx, pointers to the data, MPI_Datatypes, MPI_reduces,
> ......);
>

MPI_Graph_create has to do with process topology, not indexing within a
process. MPI_Datatype is generally local (though MPI_Type_create_darray
offers some parallel semantics).
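
For example, a minimal illustrative sketch of MPI_Type_create_darray
describing the block of a 1-D global array that each rank owns (the sizes
here are made up, and the resulting datatype is still a purely local
object, e.g. usable as a filetype for MPI-IO):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
      int size, rank;
      MPI_Datatype piece;
      int gsizes[1]   = {100};                  /* global array of 100 doubles */
      int distribs[1] = {MPI_DISTRIBUTE_BLOCK}; /* block distribution */
      int dargs[1]    = {MPI_DISTRIBUTE_DFLT_DARG};
      int psizes[1];

      MPI_Init(&argc, &argv);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      psizes[0] = size;                         /* 1-D process grid */

      /* Describes this rank's block of the global array. */
      MPI_Type_create_darray(size, rank, 1, gsizes, distribs, dargs, psizes,
                             MPI_ORDER_C, MPI_DOUBLE, &piece);
      MPI_Type_commit(&piece);
      MPI_Type_free(&piece);
      MPI_Finalize();
      return 0;
    }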


>
>
>   Look at the MPI 3 proposals.
>

The neighborhood collectives are the most natural mapping of VecScatter to
MPI, but creating them is still not as convenient. I think we also want to
keep working without MPI-3 for a while. The neighborhood collectives
also don't handle matched layouts between arrays with different base types.
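
For concreteness, a rough sketch of the mapping using
MPI_Dist_graph_create_adjacent and MPI_Neighbor_alltoallv. All of the
setup arrays (sendranks, recvranks, counts, displacements) are
hypothetical; computing them from the index sets is exactly the
inconvenient creation step:

    #include <mpi.h>

    /* One VecScatter-like exchange: the graph is built once at setup;
     * each scatter is then a single neighborhood collective. */
    void scatter_sketch(MPI_Comm comm,
                        int nrecv, const int recvranks[],
                        int nsend, const int sendranks[],
                        const double *sendbuf, const int sendcounts[],
                        const int sdispls[],
                        double *recvbuf, const int recvcounts[],
                        const int rdispls[])
    {
      MPI_Comm nbrcomm;

      MPI_Dist_graph_create_adjacent(comm,
                                     nrecv, recvranks, MPI_UNWEIGHTED,
                                     nsend, sendranks, MPI_UNWEIGHTED,
                                     MPI_INFO_NULL, 0 /* no reorder */,
                                     &nbrcomm);

      MPI_Neighbor_alltoallv(sendbuf, sendcounts, sdispls, MPI_DOUBLE,
                             recvbuf, recvcounts, rdispls, MPI_DOUBLE,
                             nbrcomm);

      MPI_Comm_free(&nbrcomm);
    }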

But perhaps we can write something that creates the context (basically just
the number of entries that need to go to each place) by inspecting an
MPI_Datatype; it would then be guaranteed to work for any other type with
the same layout. Then it would just be a matter of getting the MPI stack to
share layout information between an integer datatype and a scalar datatype
that are laid out identically.
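
Something like the following, as a minimal sketch assuming the context only
needs entry counts (the helper name is made up; a real implementation would
walk the type map with MPI_Type_get_envelope and MPI_Type_get_contents):

    #include <mpi.h>

    /* Count how many base-type entries a derived datatype carries.
     * Two types built identically over MPI_INT and MPI_DOUBLE give the
     * same count, which is the layout information the context needs. */
    static int count_base_entries(MPI_Datatype dtype, MPI_Datatype basetype)
    {
      int dbytes, bbytes;
      MPI_Type_size(dtype, &dbytes);
      MPI_Type_size(basetype, &bbytes);
      return dbytes / bbytes;
    }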