[petsc-dev] use of hash table vs array in various places in PETSc

Barry Smith bsmith at mcs.anl.gov
Wed Sep 21 08:25:49 CDT 2011


On Sep 21, 2011, at 12:25 AM, Jed Brown wrote:

> On Wed, Sep 21, 2011 at 01:27, Barry Smith <bsmith at mcs.anl.gov> wrote:
> MPI_GraphComm_init( ......ownership on each process, indices , ...., &ctx);
> 
>  MPI_GraphComm_start(ctx, pointers to the data, MPI_Datatypes, MPI_reduces, ......);
> 
> MPI_Graph_create has to do with process topology, not indexing within a process. MPI_Datatype is generally local (though MPI_Type_create_darray offers some parallel semantics).

   I know. I called them GraphComm just to have a name, not to match something currently in MPI.
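(A purely illustrative sketch of the two-phase interface being suggested above; the names and arguments below are hypothetical placeholders, as Barry says, and are not part of any MPI standard.)

#include <mpi.h>

/* Hypothetical persistent "set up once, communicate many times" interface.
 * GraphComm_init would analyze ownership and indices once and build a
 * reusable communication plan; GraphComm_start would then move data of any
 * layout-compatible type using that plan.  These are placeholder names,
 * not real MPI routines. */
typedef struct GraphCommCtx_s *GraphCommCtx;   /* opaque plan/context */

int GraphComm_init(MPI_Comm comm,
                   int nowned, const int owned[],    /* indices owned locally  */
                   int nneeded, const int needed[],  /* indices needed locally */
                   GraphCommCtx *ctx);

int GraphComm_start(GraphCommCtx ctx,
                    const void *sendbuf, void *recvbuf,
                    MPI_Datatype unit,               /* layout of one entry    */
                    MPI_Op op);                      /* e.g. replace or sum    */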

> 
>   Look at the MPI 3 proposals.
> 
> The neighborhood collectives are the most natural mapping of VecScatter to MPI, but creation is still not as convenient.
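(A minimal sketch of that mapping, assuming the communication pattern (which ranks to send to and receive from, and with what counts and displacements) has already been computed; that setup is the creation step that remains inconvenient. MPI_Dist_graph_create_adjacent and MPI_Neighbor_alltoallv are the MPI-3 routines involved.)

#include <mpi.h>

/* Sketch: perform a VecScatter-like exchange with MPI-3 neighborhood
 * collectives, given a precomputed communication pattern. */
void scatter_with_neighborhood(MPI_Comm comm,
                               int nrecvs, const int recvfrom[],
                               const int recvcounts[], const int rdispls[],
                               int nsends, const int sendto[],
                               const int sendcounts[], const int sdispls[],
                               const double *sendbuf, double *recvbuf)
{
  MPI_Comm nbr;
  /* Describe who this rank receives from (sources) and sends to (destinations). */
  MPI_Dist_graph_create_adjacent(comm,
                                 nrecvs, recvfrom, MPI_UNWEIGHTED,
                                 nsends, sendto,   MPI_UNWEIGHTED,
                                 MPI_INFO_NULL, 0 /* no reordering */, &nbr);
  /* One collective moves all the data along the graph edges. */
  MPI_Neighbor_alltoallv(sendbuf, sendcounts, sdispls, MPI_DOUBLE,
                         recvbuf, recvcounts, rdispls, MPI_DOUBLE, nbr);
  MPI_Comm_free(&nbr);
}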


    Then they are fucking up MPI-3 and don't know what they are doing. Why not propose the correct model to them?

   Barry

> I think we also want to preserve non-MPI-3 functionality for a while. The neighborhood collectives also don't handle matched layouts between arrays with different base types.
> 
> But perhaps we can write something that creates the context (basically just the number of entries that need to go to each place) by inspecting an MPI_Datatype; then it would be guaranteed to work for other types that have the same layout. It would just be a matter of getting the MPI stack to share layout information between an integer and a scalar datatype with the same layout.
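(A rough sketch of that kind of datatype inspection, handling only the MPI_COMBINER_INDEXED case for brevity; a real implementation would need to cover the other combiners and nested types.)

#include <stdlib.h>
#include <mpi.h>

/* Sketch: recover the layout (block lengths and displacements) encoded in an
 * MPI_Datatype, so a communication context built from it could be reused for
 * any other type with the same layout.  Only MPI_COMBINER_INDEXED is handled. */
int extract_indexed_layout(MPI_Datatype dtype, int *count,
                           int **blocklens, int **displs)
{
  int nints, naddrs, ntypes, combiner;
  MPI_Type_get_envelope(dtype, &nints, &naddrs, &ntypes, &combiner);
  if (combiner != MPI_COMBINER_INDEXED) return 1;  /* not handled in this sketch */

  int          *ints  = (int *)malloc(nints * sizeof(int));
  MPI_Aint     *addrs = (MPI_Aint *)malloc((naddrs ? naddrs : 1) * sizeof(MPI_Aint));
  MPI_Datatype *types = (MPI_Datatype *)malloc(ntypes * sizeof(MPI_Datatype));
  MPI_Type_get_contents(dtype, nints, naddrs, ntypes, ints, addrs, types);

  /* For MPI_COMBINER_INDEXED the integer array is
     { count, blocklengths[0..count-1], displacements[0..count-1] }. */
  *count     = ints[0];
  *blocklens = (int *)malloc(*count * sizeof(int));
  *displs    = (int *)malloc(*count * sizeof(int));
  for (int i = 0; i < *count; i++) {
    (*blocklens)[i] = ints[1 + i];
    (*displs)[i]    = ints[1 + *count + i];
  }
  free(ints); free(addrs); free(types);
  return 0;
}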



