<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Mon, Apr 27, 2015 at 12:38 PM, Jed Brown <span dir="ltr"><<a href="mailto:jed@jedbrown.org" target="_blank">jed@jedbrown.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span>Richard Mills <<a href="mailto:rtm@utk.edu" target="_blank">rtm@utk.edu</a>> writes:<br>
> I think it is possible to add the memkind support without breaking all of<br>
> the interfaces used throughout PETSc for PetscMalloc(), etc. I recently<br>
> sat with Chris Cantalupo, the main memkind developer, and walked him<br>
> through PETSc's allocation routines, and we came up with the following: The<br>
> imalloc() function pointer could have an implementation something like<br>
><br>
> PetscErrorCode PetscMemkindMalloc(size_t size, const char *func, const char<br>
> *file, void **result)<br>
> {<br>
>   struct memkind *kind;<br>
>   int err;<br>
> <br>
>   if (*result == NULL) {<br>
>     kind = MEMKIND_DEFAULT;<br>
>   } else {<br>
>     kind = (struct memkind *)(*result);<br>
<br>
</span>I'm at a loss for words to express how disgusting this is.<br></blockquote><div><br></div><div>Ha ha! Yeah, I don't like it either. Chris and I were just thinking about what we could do if we wanted to not break the existing API. But one of my favorite things about PETSc is that developers are never afraid to make wholesale changes to things.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<span><br>
> This gives us (1) a method of passing the kind of memory without modifying<br>
> the petsc allocation routine calling sequence,<br>
<br>
</span>Nonsense, it just dodges the compiler's ability to tell you about the<br>
memory errors that it creates at every place where PetscMalloc is<br>
called!<br>
<br>
<br>
What did Chris say when you asked him about making memkind "suck less"?<br>
(Using shorthand to avoid retyping my previous long emails with<br>
constructive suggestions.)<br></blockquote><div> </div><div>I had some pretty good discussions with Chris. He's a very reasonable guy, actually (and unfortunately has just moved to another project, so someone else is going to have to take over memkind ownership). I summarize the main points (the ones I can recall, anyway) below:</div><div><br></div><div>1) Easy one first: Regarding my wish for a call to accurately query the amount of available high-bandwidth memory (MCDRAM), there is currently a memkind_get_size() API but it has the shortcomings of being expensive and not taking into account the heap's free pool (just the memory that the OS knows to be available). It should be possible to get around the expense of the call with some caching and to include the free pool accounting. Don't know if any work has been done on this one, yet.</div><div><br></div><div>2) Regarding the desire to be able to move pages between kinds of memory while keeping the same virtual address: This is tough to implement in a way that will give decent performance. I guess that what we'd really like to have would be an API like</div><div><br></div><div> <span style="font-family:'Courier New'">int
memkind_convert(memkind_t kind, void *ptr, </span><span style="font-family:'Courier New'">size_t size);</span></div><div><br></div><div>but the problem with the above is that if the physical backing of a virtual address is being changed, then a POSIX system call has to be made. This also means that a heap management system tracking properties of virtual address ranges for reuse after freeing will require *making a system call to query the properties at the time of the free*. This kills a lot of the reason for using a heap manager in the first place: avoiding the expense of repeated system calls (otherwise we'd just use mmap() for everything) by reusing memory already obtained from the kernel.</div><div><br></div><div>Linux provides the mbind(2) and move_pages(2) system calls that enable the user to modify the backing physical pages of virtual address ranges within the NUMA architecture, so these can be used to move physical pages between NUMA nodes (and high-bandwidth on-package memory will be treated as a NUMA node). (A user on a KNL system could actually use move_pages(2) to move between DRAM and MCDRAM, I believe.) But Linux doesn't provide an equivalent way for a user to change the page size of the backing physical pages of an address range, so it's not possible to implement the above memkind_convert() with what Linux currently provides.</div><div><br></div><div>If we want to move data from one memory kind to another, I believe that we need to be able to deal with the virtual address changing. Yes, this is a pain because extra bookkeeping is involved. Maybe we don't want to bother with supporting something like this in PETSc. But I don't know of any good way around this. I have discussed with Chris the idea of adding support for asynchronously copying pages between different kinds of memory (maybe have a memdup() analog to strdup()) and he had some ideas about how this might be done efficiently. 
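</div><div><br></div><div>To make the bookkeeping burden concrete, here is a minimal, purely illustrative sketch of such a relocating, memdup()-style move (the function name relocate is made up, and plain malloc() stands in for a kind-aware allocator such as memkind_malloc(), since no kind-aware allocation is actually performed here):</div><div><br></div>

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch only: a relocating move.  Plain malloc() stands in
 * for a kind-aware allocator (e.g. memkind_malloc(kind, size)); the point
 * is that the data comes back at a NEW virtual address, so the caller
 * must update every stored copy of the old pointer. */
void *relocate(void *old, size_t size)
{
  void *fresh = malloc(size);   /* would allocate in the target memory kind */
  if (!fresh) return NULL;
  memcpy(fresh, old, size);     /* the data survives the move */
  free(old);                    /* the old virtual address is now invalid */
  return fresh;                 /* caller must adopt this new address */
}
```

<div><br></div><div>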
But, again, I don't know of a good way to move data to a different memory kind while keeping the same virtual address. If I'm misunderstanding something about what is possible with Linux (or other *nix), please let me know--I'd really like to be wrong on this.</div><div><br></div><div>Say that a library is eventually made available that can process all of the nonlocal information to make reasonable recommendations about where various data structures should be placed (or, hell, say that there is just an oracle we can consult about this), but there isn't a good way to do this while keeping the same virtual address. Would this be a showstopper for using it in PETSc? If not, how should we deal with it? In my toy MMLIB ("memory malleability library") code, which I wrote during my dissertation work to handle "caching" data from disk in DRAM (for doing "memory adaptive" in-core/out-of-core computations), I broke a given data set down into "panels" of some user-determined granularity. A particular data set was associated with an MMS object (and there was a registry that tracked all of the various MMSes), and when a user needed to work with a portion of the data set, he would call</div><div><br></div><div> void *mmlib_get_panel(MMS mms, int p)</div><div><br></div><div>to get a pointer to the beginning of panel p, work with it for a while, and then, when it could be safely released, call</div><div><br></div><div> void *mmlib_release_panel(MMS mms, int p)</div><div><br></div><div>The library would then be free to evict the panel if necessary. If it kept the panel cached, a subsequent request would return the same address, but if it was evicted and later requested again, another mmap() would be performed to get at the data and a different address would be returned.</div><div><br></div><div>My MMLIB library was really just a toy; the examples I looked at were pretty contrived; and the "panel" is perhaps the wrong granularity. But is an approach along these lines unworkable? 
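</div><div><br></div><div>As a toy sketch of that get/release discipline (the struct layout, function names, and the use of calloc() in place of mmap() are all illustrative here, not the actual MMLIB code): a panel's address is pinned while it is held, but after a release and an eviction, the next get may hand back a different address.</div><div><br></div>

```c
#include <stdlib.h>

/* Illustrative stand-in for an MMS data-set handle, reduced to one panel. */
typedef struct {
  void  *panel;   /* NULL when evicted */
  size_t size;
  int    held;    /* nonzero while a get is outstanding */
} MMS;

/* Pin the panel and return its current address, re-materializing it
 * (possibly at a new virtual address) if it was evicted.  calloc() stands
 * in for the mmap() a real library would perform. */
void *mms_get_panel(MMS *mms)
{
  if (!mms->panel) mms->panel = calloc(1, mms->size);
  mms->held = 1;
  return mms->panel;
}

/* Unpin the panel; the library may evict it from now on. */
void mms_release_panel(MMS *mms)
{
  mms->held = 0;
}

/* Eviction is legal only between a release and the next get. */
void mms_evict(MMS *mms)
{
  if (!mms->held) {
    free(mms->panel);
    mms->panel = NULL;
  }
}
```

<div><br></div><div>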
Or, rather: If we have to, how horrible would it be to need a pointer to a pointer inside, say, a Vec to get to the actual array of values? If the array of values is being accessed via VecGetArray(), the address of this array is not allowed to change based on our oracle's recommendations until VecRestoreArray() is called. If there is no outstanding VecGetArray(), then our oracle is free to change the address that the array actually "lives" at, and the next VecGetArray() might return a different address. Can we deal with this, or are there terrible complications I'm not thinking of? It *may* be possible on some systems to move a data structure around through all kinds of memory while keeping the same virtual addresses, but I think there will certainly be systems on which this is NOT possible, and I think this sort of consideration will become more common as more companies introduce different kinds of high-bandwidth memory, types of NVRAM, etc. Yes, the proliferation of various kinds of user-addressable memory types is horrible from a certain perspective, but I don't think it can be avoided.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div><div><br>
> and (2) support a fallback code path for legacy applications which will<br>
> not set the pointer to NULL. Or am I missing something?<br>
<br>
</div></div></blockquote></div><br></div></div>