[petsc-dev] PETSc and threads
Nystrom, William David
wdn at lanl.gov
Fri Jan 9 13:14:04 CST 2015
Mark,
I did not say that "threads are discouraged". Rather, I said that it was
"my understanding" that use of the "current" thread implementation in
PETSc was discouraged because a decision had been made to redesign
and reimplement the PETSc thread support package. That has certainly
been consistent with my attempts to test the threadcomm package and
report the issues I found. In particular, I did a lot of testing and
benchmarking of the threadcomm package in the summer of 2013 and
provided extensive feedback to the PETSc Team on my findings. I
received very little comment on that feedback except from Karli. By
the end of August 2013 I was told that the threadcomm package was
going to be redesigned and rewritten and that there was no interest in
fixing the issues I had encountered and reported with the current
version of threadcomm. Since then, I have not been able to detect any
work on the current threadcomm package. I have requested updates from
Jed a couple of times since 2013, and his replies have seemed
consistent with the plan to redesign and rewrite the PETSc thread
support.

So I'm just trying to learn what the latest plan and schedule for
PETSc thread support is, because I have access to platforms where I
think it could be very useful. But without the issues I reported in
2013 being fixed, the current thread support is not very useful to me.
Thanks,
Dave
--
Dave Nystrom
LANL HPC-5
Phone: 505-667-7913
Email: wdn at lanl.gov
Smail: Mail Stop B272
Group HPC-5
Los Alamos National Laboratory
Los Alamos, NM 87545
________________________________
From: Mark Adams [mfadams at lbl.gov]
Sent: Friday, January 09, 2015 11:35 AM
To: Nystrom, William David
Cc: Jed Brown; Barry Smith; For users of the development version of PETSc
Subject: Re: [petsc-dev] PETSc and threads
Dave,
Why do you think threads are discouraged? PETSc tends not to keep dead code around, so if it is in the repo it's "supported", with the caveat that resources are not infinite.
BTW, I am using threads with hypre and GAMG on Titan. I'm not sure it is helping yet, but the solves (not the setup) are fully threaded, AFAIK.
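For reference, the runs look roughly like this (from memory, so treat the exact threadcomm option names as approximate recollections of the 3.4/3.5-era docs, and "app" as a stand-in for the actual driver):

  ./app -ksp_type cg -pc_type gamg -threadcomm_type openmp -threadcomm_nthreads 8

For the hypre case, -pc_type hypre with OMP_NUM_THREADS set to the same thread count, since hypre's threading comes from its own OpenMP build rather than from threadcomm.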
Mark
On Fri, Jan 9, 2015 at 11:59 AM, Nystrom, William David <wdn at lanl.gov> wrote:
So is there any schedule for the availability of the new PETSc thread model implementation?
My understanding is that the current thread implementation in PETSc is not even supported
by the PETSc Team and use of it is discouraged. I'm interested in this capability for both
Sequoia and Trinity and have been thinking about making a PETSc interface to one of the
main LANL ASC codes.
Dave
--
Dave Nystrom
LANL HPC-5
Phone: 505-667-7913
Email: wdn at lanl.gov
Smail: Mail Stop B272
Group HPC-5
Los Alamos National Laboratory
Los Alamos, NM 87545
________________________________________
From: petsc-dev-bounces at mcs.anl.gov [petsc-dev-bounces at mcs.anl.gov] on behalf of Jed Brown [jed at jedbrown.org]
Sent: Friday, January 09, 2015 8:44 AM
To: Mark Adams; Barry Smith
Cc: For users of the development version of PETSc
Subject: Re: [petsc-dev] PETSc and threads
Mark Adams <mfadams at lbl.gov> writes:
> No, this is me. They will probably have about 30K (2D linear FE) equations
> per 40 Tflop node. 10% of that (4 Tflops) is already more than enough compute
> for 30K equations. No need to try to utilize the GPU as far as I can see.
With multiple POWER9 sockets per node, you have to deal with NUMA and
separate caches. The rest of the application is not going to do this
with threads, so you'll have multiple MPI processes anyway. The entire
problem will fit readily in L2 cache and you have a latency problem on
the CPU alone. Ask them to make neighborhood collectives fast.
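Back of the envelope, assuming AIJ storage and roughly 7 nonzeros per row for 2D linear FE (adjust to taste):

  matrix:  30,000 rows x 7 nnz/row x (8 B value + 4 B column index) ~ 2.5 MB
  vectors: 30,000 x 8 B ~ 0.24 MB each

Split across the cores of a node, each piece sits comfortably in cache, so the solve is dominated by latency (reductions and halo exchanges), not by bandwidth or flops.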