[petsc-dev] Using multiple mallocs with PETSc
Richard Mills
richardtmills at gmail.com
Thu Mar 9 22:08:36 CST 2017
On Thu, Mar 9, 2017 at 7:45 PM, Jeff Hammond <jeff.science at gmail.com> wrote:
>
>> I started to play with memkind last summer. At that time, there were
>> plenty of statements online like this one:
>> "the *hbwmalloc* interface is stable but *memkind* interface is only
>> partially stable."
>>
>>
> If you want the most stable interface, just use libnuma. It took me less
> than a day to reimplement hbwmalloc.h on top of libnuma and dlmalloc (
> https://github.com/jeffhammond/myhbwmalloc). Note that myhbwmalloc was
> an educational exercise, not software that I actually think anyone should
> use. It is intentionally brittle (fast or fail - nothing in between).
>
> One consequence of using libnuma to manage MCDRAM is that one can call
> numa_move_pages, which Jed has asserted is the single most important
> function call in the history of memory management ;-)
>
I think you can also move pages allocated by memkind by calling
numa_move_pages, actually, but doing so breaks the heap partitioning that
memkind maintains.
I actually question whether we even need a heap manager for things like the
big arrays inside Vec objects. It should be fine to call mmap()
directly for those: they tend to be large and are not allocated and
deallocated frequently, so the cost of the system call should not
matter.
--Richard
> Jeff
>
>
>> Perhaps I should try the memkind calls, since they may have become much better.
>>
>> Hong (Mr.)
>>
>
> --
> Jeff Hammond
> jeff.science at gmail.com
> http://jeffhammond.github.io/
>