I'm using pre-WS-GRAM.

MikeK


--- On Mon, 7/21/08, Ioan Raicu <iraicu@cs.uchicago.edu> wrote:

From: Ioan Raicu <iraicu@cs.uchicago.edu>
Subject: Re: [Swift-user] Using > 1 CPU per compute node under GRAM
To: "Michael Wilde" <wilde@mcs.anl.gov>
Cc: "Swift User Discussion List" <swift-user@ci.uchicago.edu>, "Stu Martin" <smartin@mcs.anl.gov>, "Martin Feller" <feller@mcs.anl.gov>, "JP Navarro" <navarro@mcs.anl.gov>, "Mike Kubal" <mikekubal@yahoo.com>
Date: Monday, July 21, 2008, 6:57 PM

In the past (i.e. MolDyn), I don't think we ever found an easy solution
to this when running straight through GRAM (if the LRM didn't support
this policy). But, as JP said, it is site-specific: some sites, such as
Teraport, will allow a job to take just 1 CPU of a node, in which case
GRAM should work just fine.

Ioan

Michael Wilde wrote:
> I'm asking this on behalf of Mike Kubal while I wait for more info on
> his settings:
>
> Mike is running under Swift on TeraGrid/Abe, which has 8-core nodes.
> His jobs are all running one job per node, wasting 7 cores.
>
> I am waiting to hear whether he is running on WS-GRAM or pre-WS-GRAM.
>
> In the meantime, does anyone know if there's a way to specify
> compute-node sharing between separate single-CPU jobs via both GRAMs?
>
> And is this dependent on the local job manager code or settings?
> (I.e., it might work on some sites but not others.)
>
> On the Globus doc page
>
> http://www.globus.org/toolkit/docs/4.0/execution/wsgram/WS_GRAM_Job_Desc_Extensions.html#r-wsgram-extensions-constructs-nodes
>
> I see:
>
>   <!-- *OR* an explicit number of processes per node... -->
>   <processesPerHost>...</processesPerHost>
>   </resourceAllocationGroup>
> </extensions>
>
> but can't tell if this applies to single-core jobs or only to
> multi-core jobs.
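For reference, a minimal sketch of how that construct might be used to
pack eight single-CPU processes onto one Abe node. Only <extensions>,
<resourceAllocationGroup>, and <processesPerHost> come from the fragment
quoted above; <hostCount> is an assumed element name from that same doc
page, and none of this has been tried on Abe:

  <extensions>
    <resourceAllocationGroup>
      <!-- assumed element name: ask for one physical node -->
      <hostCount>1</hostCount>
      <!-- quoted above: pack 8 processes onto each node -->
      <processesPerHost>8</processesPerHost>
    </resourceAllocationGroup>
  </extensions>

Even if the schema accepts this, it describes one 8-process job; whether
separate single-CPU jobs may share a node remains the scheduler-policy
question Ioan raises above.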
> This will ideally be handled as desired by Falkon or Coaster, but in
> the meantime I was hoping there was a simple setting to give MikeK
> better CPU yield on Abe.
>
> - Mike Wilde
>
> ---
>
> A sample of one of his jobs looks like this under qstat -ef:
>
> Job Id: 395980.abem5.ncsa.uiuc.edu
>     Job_Name = STDIN
>     Job_Owner = mkubal@abe1196
>     job_state = Q
>     queue = normal
>     server = abem5.ncsa.uiuc.edu
>     Account_Name = onm
>     Checkpoint = u
>     ctime = Mon Jul 21 17:43:47 2008
>     Error_Path = abe1196:/dev/null
>     Hold_Types = n
>     Join_Path = n
>     Keep_Files = n
>     Mail_Points = n
>     mtime = Mon Jul 21 17:43:47 2008
>     Output_Path = abe1196:/dev/null
>     Priority = 0
>     qtime = Mon Jul 21 17:43:47 2008
>     Rerunable = True
>     Resource_List.ncpus = 1
>     Resource_List.nodect = 1
>     Resource_List.nodes = 1
>     Resource_List.walltime = 00:10:00
>     Shell_Path_List = /bin/sh
>     etime = Mon Jul 21 17:43:47 2008
>     submit_args = -A onm /tmp/.pbs_mkubal_21430/STDIN
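The Resource_List lines above show the underlying request: one node, one
CPU. For comparison, a plain PBS/Torque submission that packs an 8-way
job onto a single node would carry the standard nodes/ppn spec below (a
sketch, not verified on Abe; GRAM's jobmanager-pbs generates this script
itself, so the user can't simply edit it):

  #PBS -l nodes=1:ppn=8
  #PBS -l walltime=00:10:00

Whether the scheduler will additionally let separate jobs share a node
is a site policy matter, independent of this spec.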
> And his jobs show up like this under qstat -n (i.e. each on core /0 of
> its own node):
>
> 395653.abem5.ncsa.ui mkubal   normal   STDIN   1767   1   1   --   00:10 R   --
>    abe0872/0
>
> While multi-core jobs use
>
> +abe0582/2+abe0582/1+abe0582/0+abe0579/7+abe0579/6+abe0579/5+abe0579/4
> +abe0579/3+abe0579/2+abe0579/1+abe0579/0
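Since MikeK is on pre-WS-GRAM, the analogous knob there would be RSL
attributes rather than the XML extension above. A minimal sketch of an
RSL asking for 8 processes on one node; the executable path is a
placeholder, count/host_count/max_wall_time are standard GRAM2 RSL
attributes (max_wall_time is in minutes, matching the 00:10:00 walltime
above), and whether Abe's jobmanager honors host_count this way is
exactly the open question, so treat it as untested:

  &(executable=/path/to/app)
   (count=8)
   (host_count=1)
   (max_wall_time=10)

If that works at the GRAM level, the same attributes could presumably be
passed from Swift via globus-namespace profile entries in sites.xml,
though I haven't verified that mapping.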
> _______________________________________________
> Swift-user mailing list
> Swift-user@ci.uchicago.edu
> http://mail.ci.uchicago.edu/mailman/listinfo/swift-user

-- 
===================================================
Ioan Raicu
Ph.D. Candidate
===================================================
Distributed Systems Laboratory
Computer Science Department
University of Chicago
1100 E. 58th Street, Ryerson Hall
Chicago, IL 60637
===================================================
Email: iraicu@cs.uchicago.edu
Web:   http://www.cs.uchicago.edu/~iraicu
       http://dev.globus.org/wiki/Incubator/Falkon
       http://dsl-wiki.cs.uchicago.edu/index.php/Main_Page
===================================================
===================================================