[petsc-dev] Generality of VecScatter

Barry Smith bsmith at mcs.anl.gov
Thu Nov 24 09:20:45 CST 2011


On Nov 24, 2011, at 2:35 AM, Matthew Knepley wrote:

> On Thu, Nov 24, 2011 at 1:51 AM, Dmitry Karpeev <karpeev at mcs.anl.gov> wrote:
> 
> 
> On Wed, Nov 23, 2011 at 10:22 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> 
>   These guys are making a very big strategic mistake. They cannot expect to get any funding at all if they continue to pursue MPI when the acting, and presumably next, ASCR director believes with every fiber of his being that MPI is dead. They are signing the death sentence for computer science at ANL.
> 
> What does this director believe is the future?
> 
> They do not believe in a future technology, only in the death of MPI due to
> 
>   a) Lots of cores (this does not convince me, it's an implementation problem)
> 
>   b) Manycore (which has legitimate problems, much discussed here)

   Matt hit the nail on the head here. The new approach will be "radically different/better, but what it will be depends on what the CS scientists develop, and that isn't known now".

   I actually kind of agree with him. We should throw off the legacy software of MPI 1 and 2 (MPI 3 is too evolutionary), but keep the powerful ideas, like the communicator and the fact that it is a completely ___library___-based parallel solution (it doesn't require language changes). The new model should deal with the parallel computing problems of today, just as MPI dealt with the parallel computing problems of 1993 (the problems are different). Victor and I are playing around with some ideas.
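
   To make "completely ___library___-based" concrete: the communicator is an opaque object a library can duplicate to get a private communication context, with no compiler or language support at all. A minimal sketch (library_init is a made-up name, not a real API):

    #include <mpi.h>

    /* A library duplicates the caller's communicator so its internal
       messages can never collide with the application's.  This is the
       library-based property: an opaque object passed through an API,
       nothing in the language itself. */
    void library_init(MPI_Comm user_comm, MPI_Comm *lib_comm)
    {
      MPI_Comm_dup(user_comm, lib_comm);  /* private context */
    }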

   Barry


> 
> These are just words for the director, but powerful words. Who does not want to fund the Next MPI?
> 
>     Matt
>  
> Dmitry. 
> 
>   Barry
> 
> 
> On Nov 23, 2011, at 6:28 PM, Jed Brown wrote:
> 
> > I had a useful conversation with the MPICH guys today about the unstructured communication primitives that Mark, Matt, Barry, and I have been discussing. It's looking like we can do a very thin communication library that has the nice semantics I was looking for, namely that the local-to-global is stored only as indices in the local space. To implement the communication primitives, the owner of a global node never needs to know how many or which processes have that point in their local space. There is an MPI-3 feature that we can use to elegantly avoid two-way knowledge for pointwise gather/scatter. (We can always implement the same interface using vanilla MPI-1, but the MPI-2/3 implementation would have less overhead.)
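
   The mail doesn't name the MPI-3 feature; one plausible reading is passive-target one-sided access, where the owner exposes its values in a window and never learns which processes read them. A minimal sketch under that assumption (RemotePoint, gather_owned_to_local, and the argument names are all invented here):

    #include <mpi.h>

    typedef struct { int rank; int index; } RemotePoint;

    /* Pointwise gather of owned values into local (ghost) storage.
       Each local point i stores only (remote[i].rank, remote[i].index),
       i.e. the local-to-global information lives entirely on the side
       that needs the data; the owner never enumerates its readers. */
    void gather_owned_to_local(MPI_Comm comm,
                               double *owned, int nowned,
                               const RemotePoint *remote,
                               double *local, int nlocal)
    {
      MPI_Win win;
      MPI_Win_create(owned, (MPI_Aint)nowned * sizeof(double),
                     sizeof(double), MPI_INFO_NULL, comm, &win);
      MPI_Win_lock_all(0, win);             /* passive target, MPI-3 */
      for (int i = 0; i < nlocal; i++)
        MPI_Get(&local[i], 1, MPI_DOUBLE, remote[i].rank,
                remote[i].index, 1, MPI_DOUBLE, win);
      MPI_Win_unlock_all(win);              /* completes all the gets */
      MPI_Win_free(&win);
    }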
> >
> > If I/we write this layer, VecScatter (with arbitrary data types, etc) could be implemented very easily on top of it. I'm in no rush to do this, but I could imagine getting there eventually. The VecScatter interface allows some things that might not be important. For example, any process can specify an edge in the communication graph, even if it does not own the source or the destination vertex. Also, both source and destination vertices can have degree higher than one. We can even put a link between the same points twice (effectively multiplying that contribution by 2 when using ADD_VALUES). I don't know if this ever makes semantic sense, but the interface allows it.
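
   For reference, those permissive semantics are expressible with the existing VecScatter API; a minimal sketch exercising the doubled edge described above (meant to run on a single MPI process so each edge is specified exactly once):

    #include <petscvec.h>

    int main(int argc, char **argv)
    {
      Vec            x, y;
      IS             ix, iy;
      VecScatter     sctx;
      PetscInt       from[2] = {0, 0}, to[2] = {0, 0}; /* same edge twice */
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL);CHKERRQ(ierr);
      ierr = VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 4, &x);CHKERRQ(ierr);
      ierr = VecDuplicate(x, &y);CHKERRQ(ierr);
      ierr = VecSet(x, 1.0);CHKERRQ(ierr);
      ierr = VecSet(y, 0.0);CHKERRQ(ierr);
      ierr = ISCreateGeneral(PETSC_COMM_SELF, 2, from, PETSC_COPY_VALUES, &ix);CHKERRQ(ierr);
      ierr = ISCreateGeneral(PETSC_COMM_SELF, 2, to, PETSC_COPY_VALUES, &iy);CHKERRQ(ierr);
      ierr = VecScatterCreate(x, ix, y, iy, &sctx);CHKERRQ(ierr);
      ierr = VecScatterBegin(sctx, x, y, ADD_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
      ierr = VecScatterEnd(sctx, x, y, ADD_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
      /* y[0] is now 2.0: the repeated edge added x[0] in twice */
      ierr = VecScatterDestroy(&sctx);CHKERRQ(ierr);
      ierr = ISDestroy(&ix);CHKERRQ(ierr);
      ierr = ISDestroy(&iy);CHKERRQ(ierr);
      ierr = VecDestroy(&x);CHKERRQ(ierr);
      ierr = VecDestroy(&y);CHKERRQ(ierr);
      ierr = PetscFinalize();
      return ierr;
    }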
> >
> > I wonder if we can express all useful communication with a more restrictive interface similar to a local-to-global map (but mapping between any vectors) where one side (typically called "local", but doesn't need to actually be a local Vec) has at most one edge.
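
   A minimal sketch of what that restricted representation might look like (the types and function names here are invented, not an existing interface): each point on the constrained side stores at most one (rank, offset) link, and VecScatter reduces to a broadcast along the edges plus a reduction back onto the owners:

    #include <petscsys.h>

    /* One edge per "local" point: link[i] names the remote counterpart
       of point i, or rank == -1 if point i has no edge.  The owning
       side stores nothing about who points at it. */
    typedef struct { PetscInt rank, offset; } PointLink;

    typedef struct {
      PetscInt         nlocal; /* points on the constrained side        */
      const PointLink *link;   /* nlocal entries, at most one edge each */
    } CommGraph;

    /* The two primitives VecScatter would need on such a graph:
       broadcast owner values out along the edges, and reduce local
       values back onto the owners with a combining op (e.g. ADD_VALUES). */
    extern PetscErrorCode CommGraphBcast(CommGraph g,
                                         const PetscScalar *ownerdata,
                                         PetscScalar *localdata);
    extern PetscErrorCode CommGraphReduce(CommGraph g,
                                          const PetscScalar *localdata,
                                          PetscScalar *ownerdata,
                                          InsertMode op);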
> >
> > What weird things do people use VecScatter for that might break with this more restrictive model?
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener



