<div class="gmail_quote">On Thu, Nov 24, 2011 at 11:31, Barry Smith <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div id=":2og"> You seem to have the tacit assumption that the choice is between MPI+X or some PGAS language; this, I say, is a major straw man! (And I totally agree with you that PGAS languages are junk and will never take off.)<br>
<br>
What I am suggesting is that there is a third alternative that is compatible with MPI (allowing people to transition from MPI to it without rewriting all current MPI-based code) and is COMPLETELY library based (as MPI is). This new thing would provide a modern API suitable for manycore, for GPUs, and for distributed computing, and the API would focus on moving data and scheduling tasks. The API would be suitable for C/C++/Fortran 90. Now, the exact details of the API and the model are not clear to me, of course. I will be pushing the MCS CS folks in this direction because I don't see any other reasonable alternative (MPI+X is fundamentally not powerful enough for the new configurations and PGAS languages are a joke).<br>
</div></blockquote></div><div><br></div><div>I just wrote the following in a private discussion with Barry and Matt. Copied here for the rest of you.</div><div><br></div><div>Do you want to abandon MPI as a foundation for distributed memory, or do you just want a reliable way to manage multicore/SMT? I am far from convinced that we can't or shouldn't build our ultimate communication abstraction on top of MPI. In my opinion, the job of MPI is to abstract non-portable network-level details, providing a minimal set of primitives on which to write performant code and to implement libraries. Of course there is some unnecessary cruft, and there are occasional holes where networks can do cool things that are useful to libraries but have not been exposed through MPI. Cleaning that stuff up is what I think new MPI standards should be targeting.</div>
<div><br></div><div>I do not believe the goal of MPI is to provide abstractions that are directly usable by applications. That is the scope of domain-specific and general-purpose libraries. If an abstraction can be implemented with portable performance using primitives in MPI, then it should *not* be added to MPI.</div>
<div><br></div><div>I know that you want these cool high-level persistent communication primitives. I want them too, but I don't want them *inside* of MPI, I want them in their own portable library. For the library to be portable, its network operations should be defined using some portable low-level communication interface. If only one of those existed...</div>
<div><br></div><div>We can deal with distribution later, so that it's not a pain for users to get these things installed.</div><div><br></div><div>As a general rule, I would much prefer that these high-level primitives be libraries that use MPI (and are created using communicators); otherwise they will not compose with other libraries and will, I think, suffer the fate of the PGAS languages, Smalltalk, Lisp, GA, etc.</div>