[petsc-dev] Using multiple mallocs with PETSc
jed at jedbrown.org
Sat Mar 11 14:36:32 CST 2017
Barry Smith <bsmith at mcs.anl.gov> writes:
>> I think it's accurate in the sense that the performance of real
>> applications using a page migration system will be sufficiently close to
>> the best manual page mapping strategy that nobody should bother with the
>> manual system.
> Will such a page migration system ever exist, is Intel working hard
> on it for KNL? What if no one provides such a page migration
> system? Should we just wait around until they do (which they won't)
> and do nothing else instead? Or will we have to do a half-assed
> hacky thing to work around the lack of the mythical decent page
> migration system?
Libnuma has move_pages. Prior to release, Intel refused to confirm that
MCDRAM would be shown to the OS as a normal NUMA node, such that
move_pages would work, and sometimes suggested that it would not. Some
of the email history is me being incredulous at this state of affairs
before learning that the obvious implementation I preferred was in fact
what they did. Anyway, this means PETSc can track usage and call
move_pages itself to migrate hot pages into MCDRAM.
I don't know if Intel or Linux kernel people are going to tweak the
existing automatic page migration to do this transparently, but we
probably shouldn't hold our breath.
>> In cache mode, accessing infrequently-used memory (like TS trajectory)
>> evicts memory that you will use again soon.
> What if you could advise the malloc system that this chunk of
> memory should not be cached? Though this appears to be impossible
> by design?
Malloc has nothing to do with cache, and I don't think the hardware has
an interface that would allow the kernel to set policy at this
granularity.