[petsc-dev] PETSc and threads

Barry Smith bsmith at mcs.anl.gov
Sat Jan 10 11:57:27 CST 2015


> On Jan 10, 2015, at 12:37 AM, Jed Brown <jed at jedbrown.org> wrote:
> 
> Barry Smith <bsmith at mcs.anl.gov> writes:
>>  It is more than today; it seems sometime in the recent past you've
>>  come around to the Rusty Lusk view that the pure MPI model is the
>>  way to go (in the abstract), but for whatever reason you never got
>>  around to stating it explicitly, or never wanted to. Sort of like
>>  the guy who accepted Christ and started bringing him up in
>>  conversation but never actually came out to everyone about his
>>  newfound faith.
> 
> Heh, well my null hypothesis is that MPI with neighborhood collectives
> is sufficient to achieve solver performance as good as or better than
> anything else available.  Usability for legacy applications is a
> different issue.
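
   For concreteness, a minimal sketch (not from this thread) of the
halo-exchange pattern those neighborhood collectives enable:
MPI_Neighbor_alltoallv over a distributed graph communicator, with a
toy periodic ring standing in for a real mesh partition.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  int rank, size;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  /* Toy topology: each rank's neighbors are its left and right in a ring */
  int nbrs[2] = {(rank - 1 + size) % size, (rank + 1) % size};
  MPI_Comm ring;
  MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                 2, nbrs, MPI_UNWEIGHTED,  /* who sends to me */
                                 2, nbrs, MPI_UNWEIGHTED,  /* whom I send to  */
                                 MPI_INFO_NULL, 0, &ring);

  /* One double to each neighbor, one ghost value back from each */
  double send[2] = {100.0 * rank, 100.0 * rank + 1}, recv[2];
  int counts[2] = {1, 1}, displs[2] = {0, 1};
  MPI_Neighbor_alltoallv(send, counts, displs, MPI_DOUBLE,
                         recv, counts, displs, MPI_DOUBLE, ring);

  printf("rank %d ghost values: %g %g\n", rank, recv[0], recv[1]);
  MPI_Comm_free(&ring);
  MPI_Finalize();
  return 0;
}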

   Whose legacy applications? Presumably all PETSc legacy applications written for pure MPI are either MPI scalable or have flaws in the non-PETSc part that make them non-MPI-scalable, but presumably those flaws could be fixed? Are you talking about applications with, for example, redundant storage of mesh info, and hence not MPI scalable? Well, such an application is not going to be MPI + thread scalable either (though the threads will help for a while, as you said). And as you noted before, even the redundant-mesh-info business can be handled with the MPI window stuff just as well as, if not better than, with threads anyway. Be more specific about which legacy applications and which features of those apps you mean.
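
   Again a sketch rather than anything from this thread: with MPI-3
shared-memory windows, the ranks on one node can keep a single copy of
otherwise-redundant mesh metadata and read it by plain load/store, no
threads involved. The "mesh" array below is a stand-in, not PETSc API.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);

  /* Communicator of the ranks that share this node's physical memory */
  MPI_Comm node;
  MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                      MPI_INFO_NULL, &node);
  int nrank;
  MPI_Comm_rank(node, &nrank);

  /* Only node-rank 0 allocates the (stand-in) mesh data; others get 0 bytes */
  const MPI_Aint n = 1000;
  double *mesh;
  MPI_Win win;
  MPI_Win_allocate_shared(nrank == 0 ? n * (MPI_Aint)sizeof(double) : 0,
                          sizeof(double), MPI_INFO_NULL, node, &mesh, &win);

  /* Every rank obtains a direct load/store pointer to rank 0's segment */
  MPI_Aint qsize;
  int qdisp;
  MPI_Win_shared_query(win, 0, &qsize, &qdisp, &mesh);

  MPI_Win_lock_all(MPI_MODE_NOCHECK, win); /* passive-target epoch for load/store */
  if (nrank == 0)                          /* one rank fills the shared copy */
    for (MPI_Aint i = 0; i < n; i++) mesh[i] = (double)i;
  MPI_Win_sync(win);                       /* flush the writes ...            */
  MPI_Barrier(node);                       /* ... and order them before reads */
  MPI_Win_sync(win);
  printf("node rank %d reads mesh[42] = %g\n", nrank, mesh[42]);
  MPI_Win_unlock_all(win);

  MPI_Win_free(&win);
  MPI_Comm_free(&node);
  MPI_Finalize();
  return 0;
}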
