[petsc-users] MPI bindings for PETSc local and global numbering
Jed Brown
jed at jedbrown.org
Mon May 11 16:07:51 CDT 2015
Justin Chang <jychang48 at gmail.com> writes:
> ${MPIEXEC}.hydra -n <procs> -bind-to hwthread -map-by socket ./myprogram <args>
Why are you using "-map-by socket" if you want a sequential ordering?
Abstractly, I expect a tradeoff between load balancing in case of
correlated irregularity and cache locality, but would tend to prefer
sequential ordering (so that nearby row numbers are more likely to share
cache).
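For comparison, a sketch of the two orderings using MPICH's Hydra launcher (the option names below match Hydra's -bind-to/-map-by syntax; other MPI implementations use different flags, and ./myprogram is just the placeholder from the quoted command):

```shell
# Round-robin ordering (the quoted command): consecutive ranks
# alternate between sockets, which can help balance load when
# irregularity is correlated with row number.
mpiexec.hydra -n 8 -bind-to hwthread -map-by socket ./myprogram

# Sequential ordering: map to the same unit you bind to, so
# consecutive ranks land on adjacent hardware threads and nearby
# row numbers are more likely to share cache.
mpiexec.hydra -n 8 -bind-to hwthread -map-by hwthread ./myprogram
```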