[MPICH] Availability of the Driller library
Jean-Marc Saffroy
saffroy at gmail.com
Tue Sep 25 21:41:45 CDT 2007
On Tue, 25 Sep 2007, Darius Buntinas wrote:
> Your explanation makes sense, but what I forgot to say in my last email
> was that I would like to avoid overriding the default memory allocators.
I would have avoided it if possible, but the problem is that:
- my design requires that mmap/brk/etc. be overloaded
- when the glibc malloc is used, it calls glibc's internal versions of
these syscalls instead of the overriding wrappers
I used dlmalloc since it's very easy to use, but it should be possible to
do the same with the glibc allocator (which happens to be a derivative of
dlmalloc).
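This is also why dlmalloc is convenient here: it exposes compile-time hooks for its system-memory calls. A hedged config fragment (the driller_* names are hypothetical stand-ins, not Driller's actual symbols; MORECORE is dlmalloc's documented sbrk hook, and recent 2.8.x versions also let you override the MMAP macro):

```c
/* hypothetical wrapper functions -- names are illustrative only */
void *driller_sbrk(intptr_t incr);
void *driller_mmap(size_t len);

/* defined when compiling dlmalloc's malloc.c */
#define MORECORE driller_sbrk    /* route sbrk-style growth to the wrapper */
#define MMAP(s)  driller_mmap(s) /* route large allocations to the wrapper */
```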
So I *think* that having a 100% compatible memory allocator is not a
difficult task, but there are other problems (see below).
> Instead, I would like to remap sections of memory as needed, e.g., for
> each MPI_Send operation.
I have no idea how you could do this efficiently.
> Overriding malloc, mmap, brk, and sbrk works fine for most codes, but
> there's always a few which don't work, and I'm just thinking of how to
> handle those.
I'm sure there are applications out there that will break, because Driller
has some interesting side effects, for example:
- fork is currently broken (but not system(), so I guess vfork works fine
too); I have ideas for "fixing" this, but then copy-on-write is no longer
possible, and forking costs a full copy of the process address space, which
is highly time- and memory-consuming
- Driller is not thread-safe (yet?)
- static linking is not handled (yet?)
- file descriptors are created behind the scenes, which breaks POSIX in
subtle ways (I think this should be a fairly rare problem)
- SIGSEGV is used to grow the stack, and it may collide with other uses
of this signal (testing if efence still works will be interesting)
Overall, I'd be surprised if there were a 100% solution, so I'll be glad
if Driller can be useful as a simple 98% solution, while the remaining 2%
can use the current strategies that copy buffers in shm.
--
saffroy at gmail.com