[Swift-user] Question about job sizes

Lorenzo Pesce lpesce at uchicago.edu
Tue Apr 17 19:11:24 CDT 2012


Sorry for being unclear.

> I am not sure I understand the question. Are you asking how to have coasters use less memory for longer-running jobs?

No. Some of the jobs take more memory than others, and this is reasonably predictable. I was wondering whether I can pass that information to Swift so that it doesn't send too many "large memory" jobs to the same node and blow it up.
Those jobs also tend to take longer, but that matters somewhat less (it would be good to send them in first, without overdoing it).

I guess that perhaps I could try to force the big ones to run one per NUMA node right away and let the smaller ones fill in as slots open up.
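In case a concrete sketch helps: one workaround I can imagine (not tested; the pool handles, app names, paths, queue settings and worker counts below are all made up) is to define two coaster pools in sites.xml, one that runs a single worker per node for the big-memory apps and one that packs the node for the small ones, and then bind each app to its pool in tc.data:

    <config>
      <!-- pool for large-memory apps: one worker per node -->
      <pool handle="bigmem">
        <execution provider="coaster" jobmanager="local:pbs"/>
        <profile namespace="globus" key="workersPerNode">1</profile>
        <profile namespace="globus" key="maxtime">7200</profile>
        <profile namespace="globus" key="slots">4</profile>
        <profile namespace="globus" key="nodeGranularity">1</profile>
        <profile namespace="globus" key="maxNodes">4</profile>
        <profile namespace="karajan" key="jobThrottle">0.10</profile>
        <profile namespace="karajan" key="initialScore">10000</profile>
        <filesystem provider="local"/>
        <workdirectory>/scratch/swiftwork</workdirectory>
      </pool>
      <!-- pool for small apps: pack the node with workers -->
      <pool handle="smallmem">
        <execution provider="coaster" jobmanager="local:pbs"/>
        <profile namespace="globus" key="workersPerNode">24</profile>
        <profile namespace="globus" key="maxtime">3600</profile>
        <profile namespace="globus" key="slots">8</profile>
        <profile namespace="globus" key="nodeGranularity">1</profile>
        <profile namespace="globus" key="maxNodes">8</profile>
        <profile namespace="karajan" key="jobThrottle">2.0</profile>
        <profile namespace="karajan" key="initialScore">10000</profile>
        <filesystem provider="local"/>
        <workdirectory>/scratch/swiftwork</workdirectory>
      </pool>
    </config>

and in tc.data:

    bigmem    bigapp    /path/to/bigapp    INSTALLED  INTEL32::LINUX  null
    smallmem  smallapp  /path/to/smallapp  INSTALLED  INTEL32::LINUX  null

That doesn't express memory per job directly, but it does keep the big-memory apps from ever sharing a node, which is the effect I'm after.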


> Or is the question that memory use grows non-proportionally as more apps run concurrently? I may be wrong with both interpretations. Could you elaborate a bit more?
> 
> On Apr 17, 2012, at 15:27, Lorenzo Pesce <lpesce at uchicago.edu> wrote:
> 
>> Me again ;-)
>> 
>> If the jobs use memory and CPU in a predictable way, say they have a parameter i = 1:20 where i = 20 takes 20 time units and i = 1 takes one. Memory also grows, but not proportionally.
>> Is there a way to tell coasters how to pack jobs so they don't exhaust a node's memory?
>> 
>> Lorenzo



