[Swift-devel] Where is latest doc on running Swift on Beagle? Covers OpenMP apps?

Michael Wilde wilde at mcs.anl.gov
Mon Oct 17 11:52:54 CDT 2011


I can help write a test case.  It's just a for() loop with a #pragma in front - very simple. If each parallel loop iteration could do system("sleep N"), we could readily observe that the test is working and spawning OMP_NUM_THREADS threads and procs.
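A minimal sketch of such a test (hypothetical - the function name is mine, and it assumes compilation with -fopenmp):

```c
#include <unistd.h>
#ifdef _OPENMP
#include <omp.h>
#endif

/* Each iteration sleeps one second. Built with -fopenmp and
   OMP_NUM_THREADS >= n, the loop finishes in about 1 second of
   wall time; built serially it takes about n seconds, so the
   thread spawning is directly observable with time(1) or top. */
int run_parallel_sleep(int n) {
    int done = 0;
    #pragma omp parallel for reduction(+:done)
    for (int i = 0; i < n; i++) {
        sleep(1);
        done += 1;
    }
    return done;  /* equals n whether or not OpenMP is enabled */
}
```

Without -fopenmp the pragma is ignored and the loop runs serially, so the same source doubles as a serial baseline for comparison.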

- Mike 

----- Original Message -----
> From: "Glen Hocky" <hockyg at uchicago.edu>
> To: "Justin M Wozniak" <wozniak at mcs.anl.gov>
> Cc: "Michael Wilde" <wilde at mcs.anl.gov>, "David Kelly" <davidk at ci.uchicago.edu>, "ketan" <ketancmaheshwari at gmail.com>,
> "Swift Devel" <swift-devel at ci.uchicago.edu>
> Sent: Monday, October 17, 2011 11:30:17 AM
> Subject: Re: [Swift-devel] Where is latest doc on running Swift on Beagle? Covers OpenMP apps?
> Justin, I'm not sure my program counts as sufficiently simple for this
> purpose. I'd be happy to let you include it and get an example set up
> though if you want to use it anyway. The OpenMP part, which I haven't
> been using recently, may need a bit of debugging as well.
> 
> 
> Glen
> 
> 
> 
> On Mon, Oct 17, 2011 at 12:23 PM, Justin M Wozniak <
> wozniak at mcs.anl.gov > wrote:
> 
> 
> 
> Glen, do you have an extremely simple but relevant OpenMP program that
> we could stick in the test suite?
> 
> 
> 
> 
> On Sun, 16 Oct 2011, Glen Hocky wrote:
> 
> 
> 
> It's in my run script that creates the actual sites file that I run
> with. I'm not sure what you would do if you wanted more than 24
> cores, so depth stays fixed at 24 (that's an aprun parameter). Then
>
> WORKERSPERNODE=$((24/$PPN))
>
> where PPN is how many cores you want per OpenMP app, and
> WORKERSPERNODE is how many OpenMP apps you want to run per node. An
> obvious example: for three 8-core OpenMP jobs, PPN=8 and
> WORKERSPERNODE=3.
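Glen's arithmetic, as a standalone sketch (PPN=8 is just the example value from the mail):

```shell
# Beagle nodes have 24 cores; the aprun depth stays fixed at 24.
PPN=8                          # cores per OpenMP app
WORKERSPERNODE=$((24 / PPN))   # concurrent OpenMP apps per node
echo "$WORKERSPERNODE"         # prints 3 for PPN=8
```

Note that PPN values that don't divide 24 evenly (e.g. 5) would leave cores idle, since the shell arithmetic truncates.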
> 
> 
> On Sun, Oct 16, 2011 at 12:27 PM, Michael Wilde < wilde at mcs.anl.gov >
> wrote:
> 
> 
> 
> Thanks, Glen!
> 
> Justin, can you check the sites file below? I don't understand the
> interaction between the parameters OMP_NUM_THREADS, jobsPerNode, and
> depth. Where is the best documentation on that?
> 
> Thanks,
> 
> - Mike
> 
> 
> ----- Original Message -----
> 
> 
> From: "Glen Hocky" < hockyg at uchicago.edu >
> To: "Michael Wilde" < wilde at mcs.anl.gov >
> Cc: "David Kelly" < davidk at ci.uchicago.edu >, "ketan" <
> ketancmaheshwari at gmail.com >
> 
> 
> Sent: Sunday, October 16, 2011 11:18:33 AM
> Subject: Re: [Swift-devel] Where is latest doc on running Swift on
> Beagle? Covers OpenMP apps?
> 
> 
> Yes, I'm running and yes I did test openmp a while back. Sites file
> follows. I'm using trunk from a few months ago
> 
> "Swift svn swift-r4813 (swift modified locally) cog-r3175"
> 
> 
> 
> <pool handle="pbs-beagle-coasters">
> <execution provider="coaster" jobmanager="local:pbs" url="none"/>
> <filesystem provider="local"/>
> <profile namespace="globus" key="providerAttributes">pbs.aprun;pbs.mpp;depth=24</profile>
> <profile key="jobsPerNode" namespace="globus">24</profile>
>
> <profile namespace="env" key="OMP_NUM_THREADS">$PPN</profile>
> <profile namespace="globus" key="maxwalltime">$TIME</profile>
> <profile namespace="globus" key="maxTime">$MAXTIME</profile>
> <profile namespace="globus" key="slots">$nodes</profile>
> <profile namespace="globus" key="nodeGranularity">1</profile>
> <profile namespace="globus" key="maxNodes">1</profile>
> <profile namespace="globus" key="lowOverAllocation">100</profile>
> <profile namespace="globus" key="highOverAllocation">100</profile>
> <profile namespace="karajan" key="jobThrottle">200.00</profile>
> <profile namespace="karajan" key="initialScore">10000</profile>
>
> <workdirectory>$swiftrundir/swiftwork</workdirectory>
> </pool>
> 
> 
> 
> On Sun, Oct 16, 2011 at 12:15 PM, Michael Wilde < wilde at mcs.anl.gov >
> wrote:
> 
> 
> David, Ketan,
> 
> I need to run some things on Beagle, asap.
> 
> Ketan, where is the latest and best documentation for this? I see your
> edits below to the 0.93 Site Guide, but I don't see it online where I
> would expect it:
> 
> 
> http://www.ci.uchicago.edu/swift/wwwdev/guides/release-0.93/siteguide/siteguide.html#_beagle
> 
> 
> 
> David, is it just that this document is not being correctly pushed to
> the wwwdev site on a nightly basis?
> 
> Ketan, is the latest info on running Swift on Beagle now all in the
> siteguide? Is the info you were putting in the cookbook (I see many
> commits there) now all consolidated into the Site Guide? And is there
> a difference in sites.xml settings between 0.93 and trunk? Lastly,
> which release works best?
> 
> Second question: I need to run a script that executes many 24-core
> OpenMP apps. Is the necessary support for this in 0.93? What if any
> declarations do I need other than to say jobsPerNode=1? Glen, are you
> running OpenMP on Beagle and if so what release and sites file are you
> using?
> 
> I'm assuming Justin's latest changes to sites.xml are in trunk but not
> 0.93? If that is correct, is there a corresponding sites file for
> Beagle for trunk?
> 
> Thanks,
> 
> - Mike
> 
> 
> ----- Forwarded Message -----
> From: ketan at ci.uchicago.edu
> To: swift-commit at ci.uchicago.edu
> Sent: Sunday, September 18, 2011 10:14:10 PM
> Subject: [Swift-commit] r5126 - branches/release-0.93/docs/siteguide
> 
> Author: ketan
> Date: 2011-09-18 22:14:10 -0500 (Sun, 18 Sep 2011)
> New Revision: 5126
> 
> Modified:
> branches/release-0.93/docs/siteguide/beagle
> Log:
> added content to beagle siteguide
> 
> Modified: branches/release-0.93/docs/siteguide/beagle
> ===================================================================
> --- branches/release-0.93/docs/siteguide/beagle 2011-09-19 02:41:02 UTC (rev 5125)
> +++ branches/release-0.93/docs/siteguide/beagle 2011-09-19 03:14:10 UTC (rev 5126)
> @@ -52,9 +52,38 @@
> A key factor in scaling up Swift runs on Beagle is setting up the
> sites.xml parameters. The following sites.xml parameters must be set
> to the scale intended for a large run:
> 
> - * walltime: The expected walltime for completion of your run. This
> parameter is accepted in seconds.
> - * slots: Number of qsub jobs needs to be submitted by swift. This
> number will determine how many qsubs swift will submit for your run.
> Typical values range between 40 and 80 for large runs.
> - * nodegranularity: Determines the number of nodes per job. Total
> nodes will thus be slots times nodegranularity. This may vary for
> advanced configurations though.
> - * maxnodes: Determines the maximum number of nodes a job must pack
> into its qsub. This parameter determines the largest single job that
> your run will submit.
> + * *maxTime* : The expected walltime for completion of your run, in
> seconds.
> + * *slots* : The number of qsub jobs Swift will submit for your run.
> Typical values range between 40 and 80 for large runs.
> + * *nodeGranularity* : The number of nodes per job. Total nodes will
> thus be slots times nodeGranularity. This may vary for advanced
> configurations though.
> + * *maxNodes* : The maximum number of nodes a job may pack into its
> qsub. This parameter determines the largest single job that your run
> will submit.
> + * *jobThrottle* : A factor that determines the number of tasks
> dispatched simultaneously. The intended number of simultaneous tasks
> should match the number of cores targeted. The number of tasks is
> calculated from the jobThrottle factor as follows:
> 
> +----
> +Number of Tasks = (JobThrottle x 100) + 1
> +----
> 
> +The following is an example sites.xml for a 50-slot run with each
> slot occupying 4 nodes (thus, a 200-node run):
> +
> +-----
> +<config>
> + <pool handle="pbs">
> + <execution provider="coaster" jobmanager="local:pbs"/>
> + <profile namespace="globus" key="project">CI-CCR000013</profile>
> +
> + <profile namespace="globus" key="ppn">24:cray:pack</profile>
> +
> + <profile namespace="globus" key="jobsPerNode">24</profile>
> + <profile namespace="globus" key="maxTime">50000</profile>
> + <profile namespace="globus" key="slots">50</profile>
> + <profile namespace="globus" key="nodeGranularity">4</profile>
> + <profile namespace="globus" key="maxNodes">4</profile>
> +
> + <profile namespace="karajan" key="jobThrottle">48.00</profile>
> + <profile namespace="karajan" key="initialScore">10000</profile>
> +
> + <filesystem provider="local"/>
> + <workdirectory>/lustre/beagle/ketan/swift.workdir</workdirectory>
> + </pool>
> +</config>
> +-----
> +
> 
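The throttle formula and the example config above can be cross-checked with a quick calculation (a sketch; the numbers are taken from the sample sites.xml):

```python
# Sanity-check the jobThrottle formula against the example config:
# 50 slots x 4 nodes per slot x 24 cores per node = 4800 cores.
slots = 50
nodes_per_slot = 4    # nodeGranularity
cores_per_node = 24   # jobsPerNode on Beagle
job_throttle = 48.00  # karajan jobThrottle from the example

tasks = int(job_throttle * 100) + 1
print(tasks)  # 4801, one more than the 4800 cores targeted
assert tasks == slots * nodes_per_slot * cores_per_node + 1
```

So a jobThrottle of 48.00 dispatches just enough tasks to keep all 4800 targeted cores busy, which is why the formula and the core count are meant to be matched.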
> _______________________________________________
> Swift-commit mailing list
> Swift-commit at ci.uchicago.edu
> https://lists.ci.uchicago.edu/cgi-bin/mailman/listinfo/swift-commit
> 
> --
> Michael Wilde
> Computation Institute, University of Chicago
> Mathematics and Computer Science Division
> Argonne National Laboratory
> 
> _______________________________________________
> Swift-devel mailing list
> Swift-devel at ci.uchicago.edu
> https://lists.ci.uchicago.edu/cgi-bin/mailman/listinfo/swift-devel
> 
> 
> 
> 
> 
> --
> Justin M Wozniak

-- 
Michael Wilde
Computation Institute, University of Chicago
Mathematics and Computer Science Division
Argonne National Laboratory



