[Swift-devel] Re: [Swift-user] pbs ppn count and stuff
Mihael Hategan
hategan at mcs.anl.gov
Thu Feb 3 11:04:22 CST 2011
On Wed, 2011-02-02 at 14:08 -0600, Michael Wilde wrote:
> Would 2PM tomorrow work?
Works for me.
> Justin, can you join this discussion? I'll set up a conf call once we confirm a time.
>
> I'm inserting below an email thread started by Matt at NCAR on this
> topic. He also refers back to a very old thread in which the same
> issue was raised.
>
> - Mike
>
>
>
> ----- Forwarded Message -----
> From: "Matthew Woitaszek" <matthew.woitaszek at gmail.com>
> To: "Allan Espinosa" <aespinosa at cs.uchicago.edu>
> Cc: swift-user at ci.uchicago.edu
> Sent: Thursday, November 4, 2010 10:06:48 AM
> Subject: Re: [Swift-user] Coasters and PBS resource requests: nodes and ppn
>
>
> Hi Allan,
>
> Yep, that's it. When the coasters resource request comes in with just "nodes=1", PBS interprets it as nodes=1:ppn=1 and keeps placing other jobs on the node until all 8 CPUs are allocated (e.g., 8 single-CPU PBS jobs end up running on it).
>
> I'd like to find some way to submit the request as:
> nodes=1:ppn=8
> along with
> workersPerNode=8
> so that PBS allocates one node and all 8 of its processors, and a single Coasters job then puts 8 workers on it, matching the resource request to the actual use.
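>
> (For illustration only, a minimal sketch of that contrast written as plain PBS directives; the workersPerNode=8 setting pairs with the second form.)
>
>     # What coasters generates today: no ppn, so this PBS treats the job as
>     # needing 1 CPU and may pack other jobs onto the remaining cores.
>     #PBS -l nodes=1
>
>     # The request described above: hold one node and all 8 of its cores,
>     # so a single coasters job can start 8 workers on it.
>     #PBS -l nodes=1:ppn=8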
>
> Matthew
>
>
>
>
>
> On Wed, Nov 3, 2010 at 5:41 PM, Allan Espinosa <aespinosa at cs.uchicago.edu> wrote:
>
>
> Hi Matthew,
>
> Does this mean coasters will now submit nodes=1:ppn=1 and do node packing?
>
> If PBS is not initiating any node packing, you can just
> specify workersPerNode=8. But then what you request from PBS is
> different from what you actually use.
>
> -Allan
>
> 2010/11/3 Matthew Woitaszek <matthew.woitaszek at gmail.com>:
>
>
>
> > Good afternoon,
> >
> > Is there a way, when using coasters, to supply modified PBS resource
> > strings such as "nodes=1:ppn=8" in the generated request? (Or other
> > arbitrary resource requests, such as node properties?)
> >
> > Of course, I'm just trying to get coasters to allocate all of the processors
> > on an 8-core node, using either the "gt2:gt2:pbs" or "local:pbs" provider.
> > Both submit jobs just fine. I found no discernible difference with the
> > "host_types" Globus namespace variable, presuming I'm setting it right.
> >
> > The particular cluster I'm using allows node packing for users that run lots
> > of single-processor tasks, so without ppn, it will assume nodes=1,ncpus=1
> > and thus pack 8 jobs on each node before moving on to the next node. (I know
> > it won't be an issue at sites that make nodes exclusive. On this system, the
> > queue default is "nodes=1:ppn=8", but because coasters explicitly specifies
> > the number of nodes in its generated resource request, the ppn default seems
> > to get lost!)
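> >
> > (A quick way to confirm how a site's PBS expands such a request, sketched with standard Torque/PBS client commands; the job id below is a placeholder.)
> >
> >     # Submit a trivial probe job the way coasters does today, node count only:
> >     echo "sleep 60" | qsub -l nodes=1
> >     # Inspect the expanded resource list; if ppn is absent or 1, the
> >     # scheduler is free to pack other jobs onto the same node:
> >     qstat -f <jobid> | grep -i resource_list
> >     # The same probe with the full-node request, for comparison:
> >     echo "sleep 60" | qsub -l nodes=1:ppn=8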
> >
> > I see that this has been discussed as far back as 2007, and I found Marcin
> > and Mike's previous discussion of the topic at
> >
> > http://mail.ci.uchicago.edu/pipermail/swift-user/2010-March/001409.html
> >
> > but there didn't seem to be any definitive conclusion. Any suggestions would
> > be appreciated!
> >
> > Matthew
> >
>
> --
> Allan M. Espinosa <http://amespinosa.wordpress.com>
> PhD student, Computer Science
> University of Chicago <http://people.cs.uchicago.edu/~aespinosa>
>
>
> _______________________________________________
> Swift-user mailing list
> Swift-user at ci.uchicago.edu
> http://mail.ci.uchicago.edu/mailman/listinfo/swift-user
>
> --
> Michael Wilde
> Computation Institute, University of Chicago
> Mathematics and Computer Science Division
> Argonne National Laboratory
>
>
>
> ----- Original Message -----
> > On Tue, 2011-02-01 at 15:34 -0600, Michael Wilde wrote:
> >
> > > Lets start with a voice call and then bring the issue back to the
> > > devel list.
> >
> > Can we do this on Thursday after 12:30 Chicago time?
> >
> > Mihael
>