[petsc-users] Using OpenMP threads with PETSc

Jed Brown jed at jedbrown.org
Thu Apr 9 18:01:14 CDT 2015


Lucas Clemente Vella <lvella at gmail.com> writes:
> Mainly because there is no need to copy buffers between processes
> (with MPI calls) when they can already be shared efficiently within
> the same NUMA node.

What about cache efficiency when your working set is not contiguous or
is poorly shaped?

> As for processes running in other NUMA nodes, the extra copy incurred
> by MPI pays off with faster memory access. I believe MPI
> implementations don't use the NIC to communicate within the same
> compute node, but instead use shared memory.

Yes.

> By the way, there is already a preprocessor macro to disable CPU
> affinity: PETSC_HAVE_SCHED_CPU_SET_T.
> How can it be disabled at configure time?

I don't think there is an option, but it should be made a run-time
rather than configure-time option anyway.

