[petsc-dev] Generality of VecScatter

Jed Brown jedbrown at mcs.anl.gov
Thu Nov 24 11:45:09 CST 2011


On Thu, Nov 24, 2011 at 11:31, Barry Smith <bsmith at mcs.anl.gov> wrote:

>  You seem to have the tacit assumption that the choice is between MPI+X
> and some PGAS language; this, I say, is a major straw man! (And I totally
> agree with you that PGAS languages are junk and will never take off.)
>
>     What I am suggesting is that there is a third alternative that is
> compatible with MPI (allowing people to transition from MPI to it without
> rewriting all current MPI-based code) and is COMPLETELY library based (as
> MPI is). This new thing would provide a modern API suitable for manycore,
> for GPUs, and for distributed computing, and the API would focus on moving
> data and scheduling tasks. The API would be suitable for C/C++/Fortran 90.
> Now the exact details of the API and the model are not clear to me, of
> course. I will be pushing the MCS CS folks in this direction because I
> don't see any other reasonable alternative (MPI+X is fundamentally not
> powerful enough for the new configurations and PGAS is a joke).
>

I just wrote the following in a private discussion with Barry and Matt.
Copied here for the rest of you.

Do you want to abandon MPI as a foundation for distributed memory, or do
you just want a reliable way to manage multicore/SMT? I am far from
convinced that we can't or shouldn't build our ultimate communication
abstraction on top of MPI. In my opinion, the job of MPI is to abstract
non-portable network-level details to provide a minimal set of primitives
on which to write performant code and to implement libraries. Of course
there is some unnecessary cruft, and there are occasional holes where
networks can do cool things that would be useful to libraries but have not
been exposed through MPI. Cleaning that up is what I think new MPI
standards should target.

I do not believe the goal of MPI is to provide abstractions that are
directly usable by applications. That is the scope of domain-specific and
general-purpose libraries. If an abstraction can be implemented with
portable performance using primitives in MPI, then it should *not* be added
to MPI.

I know that you want these cool high-level persistent communication
primitives. I want them too, but I don't want them *inside* of MPI; I want
them in their own portable library. For the library to be portable, its
network operations should be defined using some portable low-level
communication interface. If only one of those existed...
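
To make that concrete, here is a minimal sketch, with hypothetical names
rather than any existing API, of what a library-level persistent exchange
could look like when written purely against MPI persistent requests:

    #include <stdlib.h>
    #include <mpi.h>

    /* Hypothetical library primitive: describe a neighbor exchange once,
       then trigger it repeatedly.  None of these names are a real API. */
    typedef struct {
      int          n;     /* number of neighbors */
      MPI_Request *reqs;  /* 2*n persistent requests: sends, then receives */
    } Exchange;

    static int ExchangeCreate(MPI_Comm comm,int n,const int nbrs[],
                              double *sendbuf[],double *recvbuf[],
                              const int counts[],Exchange *ex)
    {
      int i;
      ex->n    = n;
      ex->reqs = (MPI_Request*)malloc(2*n*sizeof(MPI_Request));
      for (i=0; i<n; i++) {          /* set up the pattern once */
        MPI_Send_init(sendbuf[i],counts[i],MPI_DOUBLE,nbrs[i],0,comm,&ex->reqs[i]);
        MPI_Recv_init(recvbuf[i],counts[i],MPI_DOUBLE,nbrs[i],0,comm,&ex->reqs[n+i]);
      }
      return 0;
    }

    static int ExchangeRun(Exchange *ex)
    {
      /* each use of the pattern is just start-all/wait-all */
      MPI_Startall(2*ex->n,ex->reqs);
      return MPI_Waitall(2*ex->n,ex->reqs,MPI_STATUSES_IGNORE);
    }

The point is that the setup cost is paid once per communication pattern,
every later exchange reuses it, and nothing here needed to reach below the
portable MPI layer.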

We can deal with distribution later, so that it's not a pain for users to
get these things installed.

As a general rule, I would much prefer that these high-level primitives be
libraries that use MPI (and are created using communicators); otherwise
they will not compose with other libraries and will, I think, suffer the
fate of the PGAS languages, Smalltalk, Lisp, GA, etc.
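
As one concrete illustration of that composition, a library object can
duplicate the communicator it is handed, so its internal messages can never
collide with the application's or with another library's (again a sketch
with made-up names, not any particular package):

    #include <mpi.h>

    /* Illustrative composition pattern: a private communicator gives the
       library the same process group but an isolated message context. */
    typedef struct { MPI_Comm comm; } LibObject;

    static int LibObjectCreate(MPI_Comm usercomm,LibObject *obj)
    {
      return MPI_Comm_dup(usercomm,&obj->comm);
    }

    static int LibObjectDestroy(LibObject *obj)
    {
      return MPI_Comm_free(&obj->comm);
    }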