[petsc-dev] PETSc and threads
Nystrom, William David
wdn at lanl.gov
Fri Jan 9 15:37:20 CST 2015
Well, I would really like to be able to do that experiment with PETSc, and I tried to do so
back in the summer of 2013. But I ran into problems with the current PETSc threadcomm
package, which I documented at the time, while trying a really simple problem: CG with
Jacobi preconditioning. I don't believe those problems have been fixed, and I don't believe
there is any intention of fixing them in the current threadcomm package. So I can't do any
meaningful experiments with PETSc related to MPI+threads.
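To be concrete, the solver setup I was testing is nothing exotic. Below is a minimal sketch
of that kind of test through the standard KSP interface, circa petsc-3.5; the 1-D Laplacian
and the other details are illustrative, not my actual 2013 reproducer:

/* Minimal sketch: CG + Jacobi through the standard KSP interface.
   The 1-D Laplacian is illustrative only. */
#include <petscksp.h>

int main(int argc,char **argv)
{
  Vec            x,b;
  Mat            A;
  KSP            ksp;
  PC             pc;
  PetscInt       i,rstart,rend,n = 100;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc,&argv,NULL,NULL);if (ierr) return ierr;

  /* Assemble the 1-D Laplacian (tridiagonal -1, 2, -1) in parallel. */
  ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
  ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatSetUp(A);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A,&rstart,&rend);CHKERRQ(ierr);
  for (i=rstart; i<rend; i++) {
    PetscInt    cols[3];
    PetscScalar vals[3];
    PetscInt    nc = 0;
    if (i > 0)   { cols[nc] = i-1; vals[nc] = -1.0; nc++; }
    cols[nc] = i; vals[nc] = 2.0; nc++;
    if (i < n-1) { cols[nc] = i+1; vals[nc] = -1.0; nc++; }
    ierr = MatSetValues(A,1,&i,nc,cols,vals,INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  /* Right-hand side of all ones; x is the solution vector. */
  ierr = MatCreateVecs(A,&x,&b);CHKERRQ(ierr);
  ierr = VecSet(b,1.0);CHKERRQ(ierr);

  /* CG with Jacobi preconditioning, the combination mentioned above. */
  ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);
  ierr = KSPSetType(ksp,KSPCG);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCJACOBI);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);

  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}

With threadcomm, my recollection is that the same binary was then run with options along
the lines of -threadcomm_type openmp -threadcomm_nthreads <n>, and that is where the
problems showed up.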
Regarding HPGMG-FV, I had never heard of it and have no idea whether it could be used
in an ASC code to do the linear solves. What I am interested in doing, and what is relevant
to this email list, is adding a PETSc interface to an ASC code and then trying to solve
linear systems for real problems of interest with different methods to see what the results
are. I have no idea when I might be able to do such experiments with MPI+threads.
I have also had some recent experience running a plasma simulation code called VPIC
on Blue Gene Q with flat MPI and with MPI+pthreads. When I run with MPI+pthreads on
Blue Gene Q, VPIC is noticeably faster, even though in that mode I can only run with
3 threads per rank, while flat MPI can run with 4 ranks per core. So I would like to be
able to do the same flat MPI versus MPI+threads comparison with PETSc. But maybe
the PETSc Team is just not interested in providing that capability and I need to look
elsewhere, like Trilinos.
So I'm just looking for info that will allow me to make some decisions and do some
planning. When should I plan on having thread support in PETSc that addresses
the issues I reported during the summer of 2013?
BTW, if you have references that document experiments comparing performance of
flat MPI with MPI+threads, I would be happy to read them.
Thanks,
Dave
________________________________________
From: Jed Brown [jed at jedbrown.org]
Sent: Friday, January 09, 2015 1:50 PM
To: Nystrom, William David; Mark Adams
Cc: Barry Smith; For users of the development version of PETSc; Nystrom, William David
Subject: Re: [petsc-dev] PETSc and threads
"Nystrom, William David" <wdn at lanl.gov> writes:
> for PETSc thread support is because I have access to platforms where
> I think it could be very useful.
Why do you think that?
I encourage you to run tests with HPGMG-FV, since Sam Williams has put a
great deal of effort into optimizing his threaded implementation. Our
experience has been that flat MPI is faster on almost every machine we
tested, for all problem sizes. This is consistent with work from others
who I believe did careful studies (including Intel engineers trying to
demonstrate threading success).
People have a lot of incentive to declare that MPI is not sufficient, so
many studies are declared complete once they produce data to support
that notion. The true reasons for such observations are often more
subtle, leading to a lot of misinformation and misdirected effort.
Let's strive to understand what we're doing instead of bumbling around
"just running shit" [1] until we confirm some preconceptions.
[1] Terminology courtesy Matt.