[petsc-dev] PETSc and threads
Jed Brown
jed at jedbrown.org
Mon Jan 19 17:23:28 CST 2015
Barry Smith <bsmith at mcs.anl.gov> writes:
> Whose legacy applications? Presumably all PETSc legacy applications
> written for pure MPI are either MPI scalable or have flaws in the
> non-PETSc part that make them non-MPI scalable, but presumably those
> flaws could be fixed?
Or they have already bought into threads for other reasons. People
rarely design applications based on unbiased data.
> Are you talking about applications with, for example, redundant
> storage of mesh info, and hence not MPI scalable? Well, they are not
> going to be MPI + thread scalable either (though the threads will
> help for a while, as you said).
Correct, but if the number of nodes is not increasing, they'll be able
to live with certain types of non-scalability for a while, especially if
their goal is science/engineering output rather than "scalability" on
DOE's computers.
> And as you noted before, even the redundant mesh info business can
> be handled with the MPI window stuff just as well as, if not better
> than, threads anyway. Be more specific about which legacy
> applications and which features of those apps.
Apps that already committed to threads. And it doesn't have to be
better even in their own tests. I recently reviewed a paper by an
esteemed colleague promoting a "new" threading model that performs
uniformly worse than the naive MPI implementation they have had for
years. It was presented as a positive result because "MPI won't
scale", despite the new approach showing worse performance and worse
scalability on every numerical example in the paper.
If we take the hard-line approach that PETSc will not support threads in
any form, we're going to be swimming upstream against a sizable class of
apps and libraries. I think we should encourage people to choose
processes, but for those who buy into threads for whatever reason,
logical or not, we would be a better library if we could offer them good
performance.
Also note that if the vendors stick with their silly high-latency
node-wide coherence, a larger fraction of users would be able to run
their science of interest on a single node, in which case we could
provide practical parallel solvers to applications that are not
interested in dealing with MPI's idiosyncrasies.