[Swift-devel] block allocation

Zhao Zhang zhaozhang at uchicago.edu
Wed May 27 13:42:02 CDT 2009


Hi, Mihael

I have been running the language-behavior and application tests with 
the coaster code up to date from SVN.
Could you help double-check that I am using the right version of cog?
Swift svn swift-r2949 cog-r2406

Also, all my tests returned successfully. Are there any run-time logs 
where I could see how many workers were running on each site and 
monitor their status? For example, how many are registered, how many 
are idle, how many are busy, and so on. I am also attaching two 
sites.xml definitions, for uc-teragrid and ranger.

best
zhao

[zzhang at communicado scip]$ cat 
/home/zzhang/swift_coaster/cog/modules/swift/tests/sites/coaster_new/tgranger-sge-gram2.xml
<config>
  <pool handle="tgtacc" >
    <gridftp  url="gsiftp://gridftp.ranger.tacc.teragrid.org" />
    <execution  provider="coaster" 
url="gatekeeper.ranger.tacc.teragrid.org" jobManager="gt2:gt2:SGE"/>
    <profile namespace="globus" key="project">TG-CCR080022N</profile>
    <workdirectory >/work/00946/zzhang/work</workdirectory>
    <profile namespace="env" 
key="SWIFT_JOBDIR_PATH">/tmp/zzhang/jobdir</profile>
    <profile namespace="globus" key="coastersPerNode">16</profile>
    <profile namespace="globus" key="queue">development</profile>
    <profile namespace="karajan" key="initialScore">50</profile>
    <profile namespace="karajan" key="jobThrottle">10</profile>
    <profile namespace="globus" key="slots">4</profile>
    <profile namespace="globus" key="nodeGranularity">2</profile>
    <profile namespace="globus" key="lowOverAllocation">5</profile>
    <profile namespace="globus" key="highOverAllocation">1</profile>
    <profile namespace="globus" key="maxNodes">2</profile>
    <profile namespace="globus" key="remoteMonitorEnabled">false</profile>
  </pool>
</config>
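As a sanity check on this pool definition, the ceiling on concurrent workers can be worked out from the settings. A minimal sketch in plain Python arithmetic, assuming slots blocks of maxNodes nodes, each with coastersPerNode workers (my reading of the settings, not an official formula):

```python
# Rough ceiling on concurrent workers implied by the tgtacc pool above.
# The numbers are copied from the sites.xml; the formula (slots blocks,
# each up to maxNodes nodes, coastersPerNode workers per node) is my
# reading of the settings, not coaster's actual scheduler.
slots = 4                  # max concurrent LRM jobs (blocks)
max_nodes = 2              # max nodes per block
coasters_per_node = 16     # workers started on each node

max_workers = slots * max_nodes * coasters_per_node
print(max_workers)         # -> 128

# Swift's own throttle also caps concurrency: jobThrottle t allows
# roughly t * 100 + 1 concurrent jobs (per the Swift user guide).
job_throttle = 10
swift_cap = job_throttle * 100 + 1
print(swift_cap)           # -> 1001
```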

[zzhang at communicado scip]$ cat 
/home/zzhang/swift_coaster/cog/modules/swift/tests/sites/coaster_new/tguc-pbs-gram2.xml
<config>
  <pool handle="tguc" >
    <gridftp  url="gsiftp://tg-gridftp.uc.teragrid.org" />
    <execution  provider="coaster" url="tg-grid.uc.teragrid.org" 
jobManager="gt2:gt2:pbs"/>
    <profile namespace="globus" key="project">TG-DBS080005N</profile>
    <workdirectory >/home/zzhang/work</workdirectory>
    <profile namespace="globus" key="slots">4</profile>
    <profile namespace="globus" key="nodeGranularity">2</profile>
    <profile namespace="globus" key="lowOverAllocation">5</profile>
    <profile namespace="globus" key="highOverAllocation">1</profile>
    <profile namespace="globus" key="maxNodes">2</profile>
    <profile namespace="globus" key="remoteMonitorEnabled">false</profile>
  </pool>
</config>
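Since a typo in sites.xml can fail in confusing ways at submit time, a quick well-formedness check beforehand can help. A minimal sketch using Python's standard-library XML parser (nothing here is Swift-specific; the pool-handle listing is just a convenience I added):

```python
# Parse a sites.xml file and report XML syntax errors (e.g. a missing
# space between two attributes) before handing it to Swift.
import sys
import xml.etree.ElementTree as ET

def check_sites_xml(path):
    try:
        tree = ET.parse(path)
    except ET.ParseError as e:
        print("%s: not well-formed XML: %s" % (path, e))
        return False
    # List the pool handles found, as a quick sanity check.
    for pool in tree.getroot().iter("pool"):
        print("pool handle=%s" % pool.get("handle"))
    return True

if __name__ == "__main__" and len(sys.argv) > 1:
    check_sites_xml(sys.argv[1])
```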



Mihael Hategan wrote:
> On Tue, 2009-05-26 at 14:05 -0500, Zhao Zhang wrote:
>   
>> Hi, Mihael
>>
>> I read through the new content about coasters in the user guide. 
>> Here I have several questions:
>>
>> Given the following case: we have 100 jobs, 5 worker nodes, and each 
>> worker node has 4 cores, which means
>> we have 4 workers on each node.
>>
>> The profile key "slots" is defined as "how many maximum LRM 
>> jobs/worker blocks are allowed".
>> Correct me if my understanding is incorrect. In the above case, 100 
>> jobs / 5 nodes / 4 workers = 5 jobs per worker.
>>
>> So if we set "slots" to 4,
>>     
>
> Set the slots to the maximum number of concurrent jobs the LRM allows
> you to have. On Ranger that would be 50 (as far as I remember), and on
> BGP, that would be 6.
>
>   
>> the above test case won't finish because 5 exceeds the bound of 4. 
>> Is this correct?
>>     
>
> No. Once old blocks are done, new blocks are created.
>
>   
>> Also, could you point me to the sites.xml definition you used for 
>> the local test? That would help me get the idea. Thanks.
>>     
>
>
> <config>
>   <pool handle="localhost">
>     <gridftp  url="local://localhost" />
>     <execution provider="coaster" jobmanager="local:local"
> url="localhost" />
>     <workdirectory >/var/tmp</workdirectory>
>     <profile namespace="globus" key="slots">4</profile>
>     <profile namespace="globus" key="nodeGranularity">2</profile>
>     <profile namespace="globus" key="lowOverAllocation">5</profile>
>     <profile namespace="globus" key="highOverAllocation">1</profile>
>         <profile namespace="globus" key="maxNodes">2</profile>
>         <profile namespace="globus"
> key="remoteMonitorEnabled">true</profile>
>   </pool>
> </config>
>
>
>
>   
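Mihael's point about block recycling (old blocks finish and new ones are created, so slots only caps concurrency, never total throughput) can be sketched with toy numbers. This is my simplified model, not coaster's actual scheduler:

```python
# Toy model of block allocation: 100 jobs, blocks of 2 nodes x 4
# workers, at most `slots` blocks in the LRM at once. Each "round"
# every running worker finishes one job, and completed blocks are
# replaced by new ones.
jobs = 100
slots = 4                  # max concurrent blocks in the LRM
workers_per_block = 2 * 4  # maxNodes x workers per node

rounds = 0
while jobs > 0:
    # Up to slots blocks run concurrently; each worker takes one job.
    concurrent_workers = min(jobs, slots * workers_per_block)
    jobs -= concurrent_workers
    rounds += 1            # old blocks end, new blocks start

print(rounds)  # -> 4: all 100 jobs finish; slots never blocks completion
```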



More information about the Swift-devel mailing list