create/destroy name change proposal

Mark Beall mbeall at simmetrix.com
Wed May 5 22:48:58 CDT 2010


I'll second Carl's concerns related to this.

If you dictate that an implementation must immediately release memory,
you're likely to run into two issues:
1) significant performance degradation vs. an implementation that
intelligently manages its memory (not necessarily vs. an
implementation that does this badly);
2) memory fragmentation that can actually leave the application with
less usable memory, since freeing many small, interleaved allocations
leaves holes that can't be returned to the OS or reused for larger
requests.

We have a way to run our code such that all memory allocations go
through the system memory allocation calls, with no management of the
memory on our side (this really just exists for testing with various
memory-checking tools that don't catch things otherwise). If you turn
that on, you see a performance hit of about 30% when using the mesh
database for meshing; the hit on the low-level functions themselves is
obviously much larger. In addition, even though we don't release all
the memory we possibly could, you also see a memory usage increase of
about 10% (system memory allocators generally have per-allocation
overhead; ours doesn't).
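
For anyone who hasn't seen this tradeoff up close, here is a minimal
sketch of the kind of pooled entity allocation in question. This is
made-up illustration code, not ours, and all the names are invented;
the build flag just shows how everything can be routed through plain
malloc for memory-checking tools.

#include <stdlib.h>

/* Entities are carved out of large blocks, so an individual entity pays
   neither per-allocation bookkeeping overhead nor a malloc call.
   (Alignment concerns are glossed over for brevity.) */
#define POOL_BLOCK_ENTITIES 1024

typedef struct PoolBlock {
    struct PoolBlock *next;
    size_t            used;      /* entities handed out from this block */
    char              data[];    /* POOL_BLOCK_ENTITIES * entity_size bytes */
} PoolBlock;

typedef struct {
    size_t     entity_size;
    PoolBlock *blocks;
} EntityPool;

void *pool_alloc(EntityPool *p)
{
#ifdef USE_SYSTEM_MALLOC
    /* Testing mode: every entity is a separate system allocation so
       memory checkers can see each one individually. */
    return malloc(p->entity_size);
#else
    PoolBlock *b = p->blocks;
    if (!b || b->used == POOL_BLOCK_ENTITIES) {
        b = malloc(sizeof(PoolBlock) + POOL_BLOCK_ENTITIES * p->entity_size);
        if (!b) return NULL;
        b->used = 0;
        b->next = p->blocks;
        p->blocks = b;
    }
    return b->data + (b->used++) * p->entity_size;
#endif
}

/* Releasing everything at once is cheap: free the blocks, not the
   individual entities. */
void pool_release_all(EntityPool *p)
{
#ifndef USE_SYSTEM_MALLOC
    while (p->blocks) {
        PoolBlock *next = p->blocks->next;
        free(p->blocks);
        p->blocks = next;
    }
#else
    (void)p;   /* in testing mode the application frees entities itself */
#endif
}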

There might be some reasonable things that could be specified, such as
requiring that when a mesh is deleted, all memory associated with that
mesh is returned to the application. Or, perhaps, an API function
that, when called, requests the release of all memory that can be
released (which is open to enough interpretation that any
implementation should be happy with it - an implementation that has no
ability to release memory it isn't using can make it a particularly
simple function to write).
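
To make that second idea concrete, here is a trivial sketch of what
such a function might look like. The name and the iMesh-style
signature are purely illustrative; nothing like this exists in the
interface today.

/* Hypothetical addition -- not part of the current interface.  The name
   and the iMesh-style signature are made up for illustration only. */
typedef void *iMesh_Instance;   /* stand-in for the opaque instance handle */

void iMesh_releaseUnusedMemory(iMesh_Instance instance, int *err)
{
    /* A trivially conforming implementation: an implementation with no
       ability to release memory it isn't using just does nothing. */
    (void)instance;
    *err = 0;   /* success */
}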

It seems to me that this is more a "quality of implementation" issue  
that can't really be reasonably specified. If you're going to go in  
that direction, you probably also need to specify maximums for memory  
usage and minimum performance specs.

mark

On May 5, 2010, at 10:16 AM, Carl Ollivier-Gooch wrote:

>> Related concern #1:
>> ------------------
>> What does the specification say about what is supposed to happen to
>> storage associated with the things being removed or destroyed? I
>> could not find anything that specifically enumerates what, if
>> anything, an implementation is required to do with the storage
>> associated with things when they are removed or destroyed. If we
>> give implementations flexibility in what to do with storage, I could
>> envision implementations where a remove operation just indicates
>> internally that an item should be ignored but otherwise does nothing
>> to free up any storage that might be associated with the item having
>> been previously added. Likewise for 'destroy' operations. Worse, I
>> could envision storage that is 'lost' to removed or destroyed things
>> being kept around, persisting across save/load, which would be even
>> worse. So, I think the specification should speak to the issue of an
>> implementation's responsibility regarding storage associated with
>> things being removed/destroyed.
>
> Can of worms warning! :-)
>
> GRUMMP is one of the implementations that marks stuff as removed  
> without freeing up storage.  Rationale: entities are created  
> internally in blocks rather than one by one.  Removing an entity in  
> the middle of a block with compression of the remaining data is O(n)  
> for every deletion, which is bad.  Actually, it's worse, because  
> that changes all of your connectivity data (whether you use integers  
> or pointers).  And it's even worse than that, because the way GRUMMP  
> stores sets and entity handle tags is by using pointers to  
> entities.  So if I compress the data structures, all of that stuff  
> has to be updated.
>
> As a result, the compression is done in batch and only occasionally.  
> This is why we have this obscure function about handle validity: to  
> give implementations notice that it's okay to rearrange the database  
> without messing up handles that the application may be holding on  
> to.  When the database is compressed, blocks of memory with no  
> entities in them are freed, but the remaining blocks won't be 100%  
> full (except by remarkable coincidence).
>
> GRUMMP -does- compress the database before writing (actually, it
> also reorders at this point).  So there's no carryover of
> non-garbage-collected memory to the next run.
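
For completeness, here is a generic sketch of the mark-and-compact
pattern Carl describes above. This is made-up code, not GRUMMP's; it
just shows why removal can be O(1) while compaction is deferred to the
points where the application has agreed that handles may be
invalidated.

#include <stdlib.h>

/* Entities live in one contiguous array; "removing" one just marks it
   dead, which is O(1) and leaves every other handle untouched. */
typedef struct {
    int    dead;
    double coords[3];            /* stand-in for real entity data */
} Entity;

typedef struct {
    Entity *ents;
    size_t  count;               /* slots in use, live or dead */
} EntityStore;

void entity_remove(EntityStore *s, size_t i)
{
    s->ents[i].dead = 1;         /* no compaction, no handle invalidation */
}

/* Batch compaction: only legal where existing handles are allowed to
   become invalid (e.g. just before a save, as in the message above). */
void entity_compact(EntityStore *s)
{
    size_t live = 0;
    for (size_t i = 0; i < s->count; ++i)
        if (!s->ents[i].dead)
            s->ents[live++] = s->ents[i];
    s->count = live;

    /* Shrink the array so the unused tail can go back to the allocator. */
    Entity *shrunk = realloc(s->ents, (live ? live : 1) * sizeof(Entity));
    if (shrunk)
        s->ents = shrunk;
}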


