<div dir="ltr">(Moving this to my MCS email addr and to swift-devel)<div><br></div><div>Mihael, what we are trying to do here is not (initially) change anything in Swift memory usage.</div><div><br></div><div>We just want to understand the costs in memory of normal Swift operations, eg, call a function with N args and M returns; map a file; create an array of 1000 1MB strings; etc.</div>
<div><br></div><div>Then, for any program execution, we want to be able to trace - at some useful level of granularity - the consumption of Java memory caused by these normal Swift activities.</div><div><br></div><div>For example, if a user writes a function that is going to create - and hold - 10MB of memory, due to its local variables, then having 10,000 of those active at once would consume - and hold - 100GB of RAM.</div>
<div><br></div><div>My suspicion is that this is exactly what e.g. Sheri's code is doing. And I further suspect that once we identify which procedures are using most of the memory, and in what way, we can tune the user code to use much less memory.</div>
<div><br></div><div>We can - by experiment - develop a cost table for common Swift operations. But Sheri's scripts are among the most complex Swift scripts that exist; each has a few thousand lines of Swift code. Without some automated memory usage stats that correlate memory consumption to source code, it will be hard to find the culprits.</div>
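<div><br></div><div>As a very rough illustration, one entry in such a cost table could be measured with a heap-delta sketch like the one below (this is plain Java, not tied to the Swift runtime; the 1000 x 1MB allocation is just a stand-in for "create an array of 1000 1MB strings", and numbers obtained this way are approximate at best):</div><div><br></div><pre>
public class OpCost {
    // Crude "memory cost of one operation": heap in use before vs. after.
    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        System.gc();  // encourage a collection so the numbers are less noisy
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedHeap();

        // Stand-in operation; run with a large enough heap, e.g. -Xmx2g.
        char[][] blocks = new char[1000][];
        for (int i = 0; i < 1000; i++) {
            blocks[i] = new char[512 * 1024];  // ~1MB each (2 bytes per char)
        }

        long after = usedHeap();
        System.out.printf("operation retained ~%d MB (%d values held)%n",
                (after - before) >> 20, blocks.length);
    }
}
</pre>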
<div><br></div><div>So the question is not how to make Swift use less memory (although that's always desirable), but rather first just to create the tools to know how much a given program run uses, and for what.</div><div><br></div>
<div>Can you suggest affordable ways to get this info?</div><div><br></div><div>Thanks,</div><div><br></div><div>- Mike</div><div><br></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Jan 30, 2014 at 5:22 PM, Mihael Hategan <span dir="ltr"><<a href="mailto:hategan@mcs.anl.gov" target="_blank">hategan@mcs.anl.gov</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Weeeellll,<br>
<br>
Stuff eats memory. Some stuff can be made to take less memory, some<br>
stuff can be made to take less memory at the expense of performance, and<br>
some stuff just needs to be there. And then once in a while there's<br>
stuff that doesn't need to be there at all.<br>
<br>
I somewhat routinely have to deal with the first two. It's not an easy<br>
problem, because it only becomes obvious what eats memory at large<br>
scales when you actually have a large scale run, and that's difficult to<br>
analyze both because of technicalities (such as it takes lots of ram to<br>
analyze things) and because it's hard to distinguish signal from noise<br>
when there's a lot of stuff. But, again, that's something I generally<br>
keep in mind with every commit.<br>
<br>
It is, however, mostly attributable to design choices. We sacrificed<br>
scalability for convenience initially, because juggling with concurrency<br>
was difficult, and the scales we were looking at were generally pretty<br>
small. Things change though.<br>
<br>
There's the last possibility also. And that is that we have a situation<br>
that doesn't normally occur and shouldn't occur that is probably a bug<br>
and that happened this once. If that's the case we should find and fix<br>
that. So, is that the case?<br>
<span class="HOEnZb"><font color="#888888"><br>
Mihael<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
<br>
On Thu, 2014-01-30 at 17:06 -0600, Yadu Nand wrote:<br>
> Hi Mike, Mihael<br>
><br>
> I talked to Mihael about the RAM issue and he said that having heap<br>
> dumps can help, but he wasn't sure if that alone is sufficient to<br>
> pinpoint what is using memory excessively.<br>
><br>
> Here's what I did:<br>
> * Force the apps to dump the heap and analyse it offline with jhat.<br>
> I've used jhat on one such dump from a memory stress test. If the<br>
> dump is very large, the user could just start jhat themselves; it<br>
> serves the analysis on a web server on port 7000 which we can access.<br>
><br>
> Here's one of the dump analyses from jhat:<br>
> <a href="http://swift.rcc.uchicago.edu:7000/histo/" target="_blank">http://swift.rcc.uchicago.edu:7000/histo/</a><br>
> <a href="http://swift.rcc.uchicago.edu:7000/showInstanceCounts/includePlatform/" target="_blank">http://swift.rcc.uchicago.edu:7000/showInstanceCounts/includePlatform/</a><br>
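><br>
> For reference, a minimal sketch of triggering such a dump from inside<br>
> the JVM rather than with an external tool (this assumes a HotSpot JVM;<br>
> the HeapDumper class name and the output path are illustrative only):<br>
><br>
> import com.sun.management.HotSpotDiagnosticMXBean;<br>
> import java.lang.management.ManagementFactory;<br>
> import javax.management.MBeanServer;<br>
><br>
> public class HeapDumper {<br>
>     public static void main(String[] args) throws Exception {<br>
>         MBeanServer server = ManagementFactory.getPlatformMBeanServer();<br>
>         HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(<br>
>                 server, "com.sun.management:type=HotSpotDiagnostic",<br>
>                 HotSpotDiagnosticMXBean.class);<br>
>         // live=true keeps only reachable objects, so the .hprof stays smaller<br>
>         bean.dumpHeap("swift-heap.hprof", true);<br>
>         // analyse offline with: jhat swift-heap.hprof<br>
>     }<br>
> }<br>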
><br>
> * jmap can be used to inspect the JVM's memory while it is running. Here's<br>
> a snapshot from a stress run with 10^6 + 1 ints held in a Swift array:<br>
><br>
> [yadunand@midway001 data_stress]$ jmap -histo:live 31135 | head -n 10<br>
><br>
> num #instances #bytes class name<br>
> ----------------------------------------------<br>
> 1: 1000001 56000056 org.griphyn.vdl.mapping.DataNode<br>
> 2: 1030601 32979232 java.util.HashMap$Entry<br>
> 3: 2014652 32234432 java.lang.Integer<br>
> 4: 1000015 24000360 org.griphyn.vdl.type.impl.FieldImpl<br>
> 5: 14672 4903992 [Ljava.util.HashMap$Entry;<br>
> 6: 29872 4149184 <constMethodKlass><br>
> 7: 29872 3831968 <methodKlass><br>
><br>
> These, together with the live heap tracking commit from Mihael, should<br>
> give a better picture of what is going on with the user's run. This<br>
> again would require the user to run an extra script.<br>
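><br>
> (Not that commit itself, but as a sketch of the live-tracking idea: a<br>
> small loop polling the standard MemoryMXBean could log used heap<br>
> periodically; in practice it would run as a daemon thread alongside<br>
> the Swift run, and the 10-second interval below is arbitrary.)<br>
><br>
> import java.lang.management.ManagementFactory;<br>
> import java.lang.management.MemoryMXBean;<br>
> import java.lang.management.MemoryUsage;<br>
><br>
> public class HeapLogger {<br>
>     public static void main(String[] args) throws InterruptedException {<br>
>         MemoryMXBean mem = ManagementFactory.getMemoryMXBean();<br>
>         while (true) {<br>
>             MemoryUsage heap = mem.getHeapMemoryUsage();<br>
>             // log used vs. committed heap so spikes can be matched<br>
>             // against what the Swift script was doing at the time<br>
>             System.err.printf("heap used=%dMB committed=%dMB%n",<br>
>                     heap.getUsed() >> 20, heap.getCommitted() >> 20);<br>
>             Thread.sleep(10000);<br>
>         }<br>
>     }<br>
> }<br>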
><br>
> As for Sheri's case, there was a core dump, and if we could get her to<br>
> run jhat on her side, I think that would reveal some extra detail<br>
> about what is consuming the memory.<br>
><br>
> Please let me know if this might be something that is worth a shot.<br>
><br>
> Thanks,<br>
> Yadu<br>
<br>
<br>
</div></div></blockquote></div><br></div></div>