On Sun, Oct 7, 2012 at 7:58 AM, Karl Rupp <rupp@mcs.anl.gov> wrote:
> From the simplicity point of view, you're absolutely right. However, MPI is only an implementation of the model used for distributed computing/memory (including 'faking' distributed memory on a shared memory system). With all the complex memory hierarchies introduced in recent years, we may have to adapt our programming approach in order to get reasonable performance, even though MPI would be able to accomplish this (though at higher cost - even on shared memory systems, MPI messaging is not a free lunch).
It's easy to end up with an unusable system this way. MPI is an established API and implementations have lots of tricks for shared memory. Adding another level to the hierarchy costs simplicity. If we're going to claim that it's necessary for performance reasons, we have to back up that claim. I think it's a rather pointless rabbit hole, but would be happy to hear why that's not the case.
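
As a concrete illustration of how far plain MPI already reaches into shared memory (a minimal sketch of my own, not taken from this thread; it assumes an MPI-3 implementation with shared-memory windows available), ranks on the same node can share an allocation directly through MPI, without layering a second programming model on top:

/* Sketch: MPI-3 shared-memory windows among ranks on one node. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Split COMM_WORLD into communicators of ranks that can share memory. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    int node_rank, node_size;
    MPI_Comm_rank(node_comm, &node_rank);
    MPI_Comm_size(node_comm, &node_size);

    /* Each rank contributes one double to a window that is physically
     * shared among the ranks of this node. */
    double *local;
    MPI_Win win;
    MPI_Win_allocate_shared(sizeof(double), sizeof(double), MPI_INFO_NULL,
                            node_comm, &local, &win);
    *local = (double)node_rank;

    MPI_Win_fence(0, win); /* make everyone's writes visible */

    if (node_rank == 0) {
        /* Rank 0 reads its neighbors' data through plain pointers. */
        MPI_Aint size;
        int disp_unit;
        double *remote;
        for (int r = 1; r < node_size; ++r) {
            MPI_Win_shared_query(win, r, &size, &disp_unit, &remote);
            printf("rank %d wrote %g\n", r, *remote);
        }
    }

    MPI_Win_free(&win);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}

That's roughly the kind of "trick" I mean: the shared-memory case is handled inside the MPI model itself, so the burden is on showing where this genuinely falls short on performance before we add another level to the hierarchy.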