[petsc-dev] PETSc and threads

Mark Adams mfadams at lbl.gov
Fri Jan 9 12:35:50 CST 2015


Dave,

Why do you think threads are discouraged?  PETSc tends not to keep dead
code around, so if it is in the repo it's "supported", with the caveat
that resources are not infinite.

BTW, I am using threads with hypre and GAMG on Titan.  Not sure it is
helping yet, but the solves (not the setup) are fully threaded AFAIK.

Mark
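
For concreteness, a minimal sketch of driving GAMG or hypre BoomerAMG
through PETSc's KSP/PC interface (a sketch only: it assumes the matrix
and right-hand side are already assembled, and threaded hypre solves
additionally assume a hypre build with OpenMP and OMP_NUM_THREADS set
in the job script):

    #include <petscksp.h>

    /* Solve A x = b with an algebraic multigrid preconditioner.
       Assumes A and b are already assembled. */
    PetscErrorCode SolveWithAMG(Mat A, Vec b, Vec x)
    {
      KSP            ksp;
      PC             pc;
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
      ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);
      ierr = KSPSetType(ksp,KSPCG);CHKERRQ(ierr);
      ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
      ierr = PCSetType(pc,PCGAMG);CHKERRQ(ierr);   /* or PCHYPRE for BoomerAMG */
      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr); /* e.g. -pc_type hypre -pc_hypre_type boomeramg */
      ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
      ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }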

On Fri, Jan 9, 2015 at 11:59 AM, Nystrom, William David <wdn at lanl.gov>
wrote:

> So is there any schedule for the availability of the new PETSc thread
> model implementation?  My understanding is that the current thread
> implementation in PETSc is not even supported by the PETSc Team, and
> that use of it is discouraged.  I'm interested in this capability for
> both Sequoia and Trinity and have been thinking about making a PETSc
> interface to one of the main LANL ASC codes.
>
> Dave
>
> --
> Dave Nystrom
> LANL HPC-5
> Phone: 505-667-7913
> Email: wdn at lanl.gov
> Smail: Mail Stop B272
>        Group HPC-5
>        Los Alamos National Laboratory
>        Los Alamos, NM 87545
>
>
> ________________________________________
> From: petsc-dev-bounces at mcs.anl.gov [petsc-dev-bounces at mcs.anl.gov] on
> behalf of Jed Brown [jed at jedbrown.org]
> Sent: Friday, January 09, 2015 8:44 AM
> To: Mark Adams; Barry Smith
> Cc: For users of the development version of PETSc
> Subject: Re: [petsc-dev] PETSc and threads
>
> Mark Adams <mfadams at lbl.gov> writes:
> > No, this is me.  They will probably have about 30K (2D linear FE)
> > equations per 40 Tflop node.  10% (4 Tflops) is too much resource
> > for 30K equations as it is.  No need to try to utilize the GPU as
> > far as I can see.
>
> With multiple POWER9 sockets per node, you have to deal with NUMA and
> separate caches.  The rest of the application is not going to do this
> with threads, so you'll have multiple MPI processes anyway.  The entire
> problem will fit readily in L2 cache and you have a latency problem on
> the CPU alone.  Ask them to make neighborhood collectives fast.
>
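
As a rough back-of-the-envelope for the cache claim above (assuming
about 7 nonzeros per row for 2D linear FE and standard AIJ storage,
neither of which is stated in the thread):

    vector:  30,000 dofs x 8 bytes                       ~ 0.24 MB
    matrix:  30,000 rows x ~7 nonzeros x 12 bytes (AIJ)  ~ 2.5  MB

so the working set of a Krylov solve is on the order of a few MB per
node, small next to the aggregate on-chip cache of a multi-socket node
and tiny relative to 4 Tflop/s of GPU throughput.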