From benc at hawaga.org.uk Thu Dec 3 07:38:08 2009 From: benc at hawaga.org.uk (Ben Clifford) Date: Thu, 3 Dec 2009 13:38:08 +0000 (GMT) Subject: [Swift-devel] [Haskell] Call for papers: PAPP 2010, 7th International Workshop on Practical Aspects of High-level Parallel Programming (fwd) Message-ID: this came by on one of the haskell lists - maybe interesting for swift from the language side of things ---------- Forwarded message ---------- Date: Thu, 03 Dec 2009 13:10:30 +0100 From: Clemens Grelck To: haskell at haskell.org Cc: c.grelck at uva.nl Subject: [Haskell] Call for papers: PAPP 2010, 7th International Workshop on Practical Aspects of High-level Parallel Programming --------------------------------------------------------------------- Please accept our apologies if you have received multiple copies. Please feel free to distribute it to those who might be interested. --------------------------------------------------------------------- ---------------------------------------------------------- CALL FOR PAPERS PAPP 2010 Seventh International Workshop on Practical Aspects of High-level Parallel Programming http://lacl.univ-paris12.fr/gava/PAPP2010/ part of ICCS 2010 The International Conference on Computational Science May 31- June 2, 2010, University of Amsterdam, The Netherlands. ---------------------------------------------------------- AIMS AND SCOPE Computational Science applications are more and more complex to develop and require more and more computing power. Bill McColl's post "Sequential Computing Considered Harmful" is an excellent summary of today's situation. Sequential computing cannot go further. Major companies in the computing industry now recognizes the urgency of reorienting an entire industry towards massively parallel computing (Think Parallel or Perish). Parallel and grid computing are solutions to the increasing need for computing power. The trend is towards the increase of cores in processors, the number of processors and the need for scalable computing everywhere. But parallel and distributed programming is still dominated by low-level techniques such as send/receive message passing. Thus high-level approaches should play a key role in the shift to scalable computing in every computer. Algorithmic skeletons, parallel extensions of functional languages such as Haskell and ML, parallel logic and constraint programming, parallel execution of declarative programs such as SQL queries, genericity and meta-programming in object-oriented languages, etc. have produced methods and tools that improve the price/performance ratio of parallel software, and broaden the range of target applications. Also, high level languages offer a high degree of abstraction which ease the development of complex systems. Moreover, being based on formal semantics, it is possible to certify the correctness of critical parts of the applications. The PAPP workshop focuses on practical aspects of high-level parallel programming: design, implementation and optimization of high-level programming languages, semantics of parallel languages, formal verification, design or certification of libraries, middlewares and tools (performance predictors working on high-level parallel/grid source code, visualisations of abstract behaviour, automatic hotspot detectors, high-level GRID resource managers, compilers, automatic generators, etc.), application of proof assistants to parallel applications, applications in all fields of computational science, benchmarks and experiments. 
Research on high-level grid programming is particularly relevant as well as domain specific parallel software. The aim of all these languages and tools is to improve and ease the development of applications (safety, expressivity, efficiency, etc.). Thus the Seventh PAPP workshop focuses on applications. The PAPP workshop is aimed both at researchers involved in the development of high level approaches for parallel and grid computing and computational science researchers who are potential users of these languages and tools. TOPICS We welcome submission of original, unpublished papers in English on topics including: * applications in all fields of high-performance computing and visualisation (using high-level tools) * high-level models (CGM, BSP, MPM, LogP, etc.) and tools for parallel and grid computing * Program verification and Formal verification of parallel applications/ libraries/languages or parallel computing in computer-assisted reasoning * high-level parallel language design, implementation and optimisation * modular, object-oriented, functional, logic, constraint programming for parallel, distributed and grid computing systems * algorithmic skeletons, patterns and high-level parallel libraries * generative (e.g. template-based) programming with algorithmic skeletons, patterns and high-level parallel libraries * benchmarks and experiments using such languages and tools * industrial uses of a high-level parallel language PAPER SUBMISSION AND PUBLICATION Prospective authors are invited to submit full papers in English presenting original research. Submitted papers must be unpublished and not submitted for publication elsewhere. Papers will go through a rigorous reviewing process. Each paper will be reviewed by at least three referees. The accepted papers will be published in the conference proceedings published by Elsevier Science in the open-access Procedia Computer Science series, as part of the ICCS proceedings. Submission must be done through the ICCS website: http://www.iccs-meeting.org/iccs2010/papers/upload.php We invite you to submit a full paper of 10 pages formatted according to the rules of Procedia Computer Science (see http://www.elsevier.com/wps/find/journaldescription.cws_home/719435/description#description), describing new and original results, no later than January 11, 2010 -HARD DEADLINE. Submission implies the willingness of at least one of the authors to register and present the paper. An early email to "gava at univ-paris12.fr" with your intention to submit a paper would be greatly appreciated (especially if you have doubts about the relevance of your paper). Accepted papers should be presented at the workshop and extended and revised versions could be published in a special issue of Scalable Computing: Practice and Experience, provided revisions suggested by the referees are made. IMPORTANT DATES * January 11, 2010 - Full paper due (HARD DEADLINE) * February 15, 2010 - Notification * March 1, 2010 - Camera-ready paper due * September, 2010 - Journal version due PROGRAM COMMITTEE * Marco Aldinucci (University of Torino, Italy) * Anne Benoit (ENS Lyon, France) * Umit V. 
Catalyurek (The Ohio State University, USA) * Emmanuel Chailloux (University of Paris 6, France) * Fr?d?ric Dabrowski (University of Orl?ans, France) * Fr?d?ric Gava (University Paris-East (Paris 12), France) * Alexandros Gerbessiotis (NJIT, USA) * Clemens Grelck (University of Amsterdam, Netherlands) * Hideya Iwasaki (The University of Electro-communications, Japan) * Christoph Kessler (Linkopings Universitet, Sweden) * Rita Loogen (University of Marburg, Germany) * Kiminori Matsuzaki (Kochi University of Technology, Japan) * Samuel Midkiff (Purdue University, USA) * Susanna Pelagatti (University of Pisa, Italy) * Bruno Raffin (INRIA, France) * Casiano Rodriguez-Leon (University La Laguna, Spain) ORGANIZERS Dr. Anne BENOIT Laboratoire d'Informatique du Parall?lisme Ecole Normale Sup?rieure de Lyon 46 All?e d'Italie 69364 Lyon Cedex 07 - France Dr. Fr?d?ric GAVA Laboratoire d'algorithmique, complexit? et logique Universit? de Paris-Est (Paris 12) 61 avenue du G?n?ral de Gaulle 94010 Cr?teil cedex - France -- Dr Clemens Grelck University of Amsterdam University of Hertfordshire http://www.sac-home.org/~cg _______________________________________________ Haskell mailing list Haskell at haskell.org http://www.haskell.org/mailman/listinfo/haskell From wilde at mcs.anl.gov Thu Dec 3 07:50:21 2009 From: wilde at mcs.anl.gov (Michael Wilde) Date: Thu, 03 Dec 2009 07:50:21 -0600 Subject: [Swift-devel] [Haskell] Call for papers: PAPP 2010, 7th International Workshop on Practical Aspects of High-level Parallel Programming (fwd) In-Reply-To: References: Message-ID: <4B17C21D.2000805@mcs.anl.gov> That sounds like a good place to complete and submit the language paper we started earlier this year. - Mike On 12/3/09 7:38 AM, Ben Clifford wrote: > this came by on one of the haskell lists - maybe interesting for swift > from the language side of things > > ---------- Forwarded message ---------- > Date: Thu, 03 Dec 2009 13:10:30 +0100 > From: Clemens Grelck > To: haskell at haskell.org > Cc: c.grelck at uva.nl > Subject: [Haskell] Call for papers: PAPP 2010, > 7th International Workshop on Practical Aspects of High-level Parallel > Programming > > --------------------------------------------------------------------- > Please accept our apologies if you have received multiple copies. > Please feel free to distribute it to those who might be interested. > --------------------------------------------------------------------- > > > ---------------------------------------------------------- > CALL FOR PAPERS > > PAPP 2010 > Seventh International Workshop on > Practical Aspects of High-level Parallel Programming > http://lacl.univ-paris12.fr/gava/PAPP2010/ > > part of > > ICCS 2010 > The International Conference on Computational Science > May 31- June 2, 2010, University of Amsterdam, The Netherlands. > ---------------------------------------------------------- > > > AIMS AND SCOPE > > Computational Science applications are more and more complex to develop and > require more and more computing power. Bill McColl's post "Sequential Computing > Considered Harmful" is an excellent summary of today's situation. Sequential > computing cannot go further. Major companies in the computing industry now > recognizes the urgency of reorienting an entire industry towards massively > parallel computing (Think Parallel or Perish). > > Parallel and grid computing are solutions to the increasing need for computing > power. 
The trend is towards the increase of cores in processors, the number of > processors and the need for scalable computing everywhere. But parallel and > distributed programming is still dominated by low-level techniques such as > send/receive message passing. Thus high-level approaches should play a key role > in the shift to scalable computing in every computer. > > Algorithmic skeletons, parallel extensions of functional languages such as > Haskell and ML, parallel logic and constraint programming, parallel execution > of declarative programs such as SQL queries, genericity and meta-programming in > object-oriented languages, etc. have produced methods and tools that improve > the price/performance ratio of parallel software, and broaden the range of > target applications. Also, high level languages offer a high degree of > abstraction which ease the development of complex systems. Moreover, being > based on formal semantics, it is possible to certify the correctness of critical > parts of the applications. > > The PAPP workshop focuses on practical aspects of high-level parallel > programming: design, implementation and optimization of high-level programming > languages, semantics of parallel languages, formal verification, design or > certification of libraries, middlewares and tools (performance predictors > working on high-level parallel/grid source code, visualisations of abstract > behaviour, automatic hotspot detectors, high-level GRID resource managers, > compilers, automatic generators, etc.), application of proof assistants to > parallel applications, applications in all fields of computational science, > benchmarks and experiments. Research on high-level grid programming is > particularly relevant as well as domain specific parallel software. > > The aim of all these languages and tools is to improve and ease the development > of applications (safety, expressivity, efficiency, etc.). Thus the Seventh PAPP > workshop focuses on applications. > > The PAPP workshop is aimed both at researchers involved in the development of > high level approaches for parallel and grid computing and computational science > researchers who are potential users of these languages and tools. > > TOPICS > > We welcome submission of original, unpublished papers in English on > topics including: > > * applications in all fields of high-performance computing and > visualisation (using high-level tools) > > * high-level models (CGM, BSP, MPM, LogP, etc.) and tools for parallel > and grid computing > > * Program verification and Formal verification of parallel applications/ > libraries/languages or parallel computing in > computer-assisted reasoning > > * high-level parallel language design, implementation and > optimisation > > * modular, object-oriented, functional, logic, constraint programming > for parallel, distributed and grid computing systems > > * algorithmic skeletons, patterns and high-level parallel libraries > > * generative (e.g. template-based) programming with algorithmic > skeletons, patterns and high-level parallel libraries > > * benchmarks and experiments using such languages and tools > > * industrial uses of a high-level parallel language > > > > PAPER SUBMISSION AND PUBLICATION > > Prospective authors are invited to submit full papers in English > presenting original research. Submitted papers must be unpublished and > not submitted for publication elsewhere. Papers will go through a > rigorous reviewing process. Each paper will be reviewed by at least > three referees. 
The accepted papers will be published in the > conference proceedings published by Elsevier Science in the open-access > Procedia Computer Science series, as part of the ICCS proceedings. > > Submission must be done through the ICCS website: > http://www.iccs-meeting.org/iccs2010/papers/upload.php > We invite you to submit a full paper of 10 pages formatted according to > the rules of Procedia Computer Science (see > http://www.elsevier.com/wps/find/journaldescription.cws_home/719435/description#description), > describing > new and original results, no later than January 11, 2010 -HARD DEADLINE. > Submission implies the willingness of at least one > of the authors to register and present the paper. An early email to > "gava at univ-paris12.fr" with your intention to submit a paper would > be greatly appreciated (especially if you have doubts about the > relevance of your paper). > > Accepted papers should be presented at the workshop and extended and > revised versions could be published in a special issue of Scalable Computing: > Practice and Experience, provided revisions suggested by the > referees are made. > > > IMPORTANT DATES > > * January 11, 2010 - Full paper due (HARD DEADLINE) > * February 15, 2010 - Notification > * March 1, 2010 - Camera-ready paper due > * September, 2010 - Journal version due > > PROGRAM COMMITTEE > > * Marco Aldinucci (University of Torino, Italy) > * Anne Benoit (ENS Lyon, France) > * Umit V. Catalyurek (The Ohio State University, USA) > * Emmanuel Chailloux (University of Paris 6, France) > * Fr?d?ric Dabrowski (University of Orl?ans, France) > * Fr?d?ric Gava (University Paris-East (Paris 12), France) > * Alexandros Gerbessiotis (NJIT, USA) > * Clemens Grelck (University of Amsterdam, Netherlands) > * Hideya Iwasaki (The University of Electro-communications, Japan) > * Christoph Kessler (Linkopings Universitet, Sweden) > * Rita Loogen (University of Marburg, Germany) > * Kiminori Matsuzaki (Kochi University of Technology, Japan) > * Samuel Midkiff (Purdue University, USA) > * Susanna Pelagatti (University of Pisa, Italy) > * Bruno Raffin (INRIA, France) > * Casiano Rodriguez-Leon (University La Laguna, Spain) > > > ORGANIZERS > > Dr. Anne BENOIT > Laboratoire d'Informatique du Parall?lisme > Ecole Normale Sup?rieure de Lyon > 46 All?e d'Italie > 69364 Lyon Cedex 07 - France > > Dr. Fr?d?ric GAVA > Laboratoire d'algorithmique, complexit? et logique > Universit? de Paris-Est (Paris 12) > 61 avenue du G?n?ral de Gaulle > 94010 Cr?teil cedex - France > > > > ------------------------------------------------------------------------ > > _______________________________________________ > Swift-devel mailing list > Swift-devel at ci.uchicago.edu > http://mail.ci.uchicago.edu/mailman/listinfo/swift-devel From wilde at mcs.anl.gov Thu Dec 3 08:16:23 2009 From: wilde at mcs.anl.gov (Michael Wilde) Date: Thu, 03 Dec 2009 08:16:23 -0600 Subject: [Swift-devel] Communicado going down for server move Message-ID: <4B17C837.7010307@mcs.anl.gov> Communicado (and many other machines) may be down from Dec 4 (tomorrow) through Dec 8 - details below. - Mike -------- Original Message -------- Subject: [CI] Dec 3-8 IT move reminder/update Date: Wed, 2 Dec 2009 11:43:57 -0600 From: Greg Cross To: ci at ci.uchicago.edu, teraport-notify at ci.uchicago.edu References: A reminder that the IT hardware relocation activity at the University of Chicago Hyde Park campus will begin tomorrow, December 3 and conclude Tuesday, December 8. 
To minimize unavailability of CI resources, network filesystem operations -- PADS GPFS metadata and CI SAN migrations -- will be postponed to a later date (TBD) in order to focus exclusively on move operations. Power down of the Teraport cluster will start at 12:01 a.m. CST tomorrow morning (Dec 3). All other events will follow the original schedule (see below). Please note that the absence of core IT services on Saturday will impact all systems connected to the CI infrastructure. If you have questions regarding the move, please contact support at ci.uchicago.edu . Thank you for your patience during this transition period. Begin forwarded message: > From: Greg Cross > Date: 2009-11-18 Wed 09:19:35 CST > To: ci at ci.uchicago.edu, teraport-notify at ci.uchicago.edu > Subject: Dec 3-8: IT moves and maintenance at UC > > Next month, the Computation Institute will complete the last phase > of its intracampus moves by relocating IT hardware at the Research > Institutes and Woodlawn Social Sciences Center to the machine rooms > in Searle. Move activities will take place from December 3 through 8. > > PLEASE NOTE: Core services (login, mail, web, NFS) and all VMs will > be unavailable starting the morning of December 5. Core services > should return later that day; VMs should be available by the evening > of Dec 6. > > > SCHEDULE > -------- > > Dec 3: Teraport powered offline and unracked > > Dec 4: all project servers powered offline and unracked > > Dec 5: core services and all VMs taken offline and moved to SCL; > availability expected later that day > remaining hardware in RI and WSSC to be powered off and > disassembled for Monday move; > PADS GPFS will go offline for metadisk migration > > Dec 6: PADS GPFS available; > VMs available > > Dec 7: physical move from RI and WSSC to SCL > > Dec 8: continuation of physical WSSC move if necessary; > Teraport power installed, reracking of cluster begins; > expected availability by end of week > > > All CI resources in RI and WSSC will be unavailable during the > process of this move. These include: > > * core services (authentication, login, network filesystems, > database, mail, web, monitoring, and management) > * PADS storage resource > * Teraport compute and storage resources > * project systems: sidgrid, tp-neurodb, age3, communicado, bridled, > NMPDR, NCDM, systemimager > * project VMs on bridled and tron > * X-Men compute resource > * Lightsaber compute and storage resources > * CBC web service > > > This will affect use of other resources, including: > > * HNL workstation cluster > * Ellipse compute and storage resources > > > In sync with this move are migration of storage to new physical > resources: > > * As noted above, PADS GPFS will be taken offline the weekend of Dec > 5 for migration of metadata to internal disks within the > filesystem's storage nodes. > > * Storage used by /home filesystem and databases on db.ci will be > migrated from the DDN array to a new fibrechannel SAN; this will > facilitate the expansion and move of the DDN, which will be used > exclusively for PADS storage. Filesystem syncing will begin later > this week in preparation for physical hardware reconfiguration on > Dec 5. > > > There will be one additional physical move to relocate the DDN and > NMPDR hardware from RI to TCS. The date of this move has not yet > been determined. > > Move plans will be on the CI web site this week; expect an > additional notification via email. If you have any questions, please > contact support at ci.uchicago.edu . 
> > > _______________________________________________ ci mailing list ci at ci.uchicago.edu http://mail.ci.uchicago.edu/mailman/listinfo/ci From wwj at ci.uchicago.edu Thu Dec 3 14:31:51 2009 From: wwj at ci.uchicago.edu (Wenjun Wu) Date: Thu, 03 Dec 2009 14:31:51 -0600 Subject: [Swift-devel] gridftp issue in running oops swift script on queenbee In-Reply-To: <1259177092.22405.1.camel@localhost> References: <20091123190130.CFZ45811@m4500-02.uchicago.edu> <1259100877.5951.0.camel@localhost> <20091125113308.CGB97214@m4500-02.uchicago.edu> <1259170689.18426.2.camel@localhost> <20091125115025.CGB99648@m4500-02.uchicago.edu> <1259177092.22405.1.camel@localhost> Message-ID: <4B182037.7080809@ci.uchicago.edu> Hello, I got a weird issue in running the attached script on the TeraGrid QueenBee. The workflow fails after the ItFixInit.sh is done. The ItFixInit.sh just copies some files to an output directory, and it finishes the copying successfully. But when the swift engine tries to clean up the temp directory for the ItFixInit task, it fails. I also tried setting sitedir.keep=true in swift.properties to avoid the removal of the temp files, but got the same error. Here is the description of QueenBee in the site.xml file. /home/wwj/testjobs And this is the tc.data file used for the workflow:
QueenBee PSim /home/wwj/tools/protlib2/bin/PSim.sh null null null
QueenBee ItFixInit /home/wwj/tools/protlib2/bin/ItFixInit.sh null null null
QueenBee RevisePData /home/wwj/tools/protlib2/bin/RevisePData.sh null null null
Thanks! Wenjun Caused by: org.globus.cog.abstraction.impl.file.IrrecoverableResourceException: Exception in getFile Caused by: org.globus.cog.abstraction.impl.file.FileResourceException: Failed to retrieve file information about /home/wwj/testjobs/oops-20091202-1700-vmh5dnd3/info/j/ItFixInit-jjpe3bkj-info Caused by: org.globus.ftp.exception.ServerException: Server refused performing the request. Custom message: Server refused MLST command (error code 1) [Nested exception message: Custom message: Unexpected reply: 500-Command failed : System error in stat: No such file or directory 500-A system call failed: No such file or directory 500- 500 End.] [Nested exception is org.globus.ftp.exception.UnexpectedReplyCodeException: Custom message: Unexpected reply: 500-Command failed : System error in stat: No such file or directory 500-A system call failed: No such file or directory 500- 500 End.] 
2009-12-02 17:01:33,938-0600 INFO vdl:execute END_FAILURE thread=0-1-10-1-1-2-0-1-1 tr=ItFixInit 2009-12-02 17:01:33,942-0600 DEBUG VDL2ExecutionContext Exception in ItFixInit: Arguments: [sandbox/wwj/oops/input/fasta/T1af7.fasta, output/T1af7/R00/T1af7.R00.fasta] Host: QueenBee Directory: oops-20091202-1700-vmh5dnd3/jobs/j/ItFixInit-jjpe3bkj stderr.txt: stdout.txt: ---- Exception in ItFixInit: Arguments: [sandbox/wwj/oops/input/fasta/T1af7.fasta, output/T1af7/R00/T1af7.R00.fasta] Host: QueenBee Directory: oops-20091202-1700-vmh5dnd3/jobs/j/ItFixInit-jjpe3bkj stderr.txt: stdout.txt: ---- Caused by: Failed to remove job directory /home/wwj/testjobs/j/ItFixInit-jjpe3bkj at org.globus.cog.karajan.workflow.nodes.functions.KException.function(KException.java:29) at org.globus.cog.karajan.workflow.nodes.functions.AbstractFunction.post(AbstractFunction.java:45) at org.globus.cog.karajan.workflow.nodes.AbstractSequentialWithArguments.childCompleted(AbstractSequentialWithArguments.java:192) at org.globus.cog.karajan.workflow.nodes.Sequential.notificationEvent(Sequential.java:33) at org.globus.cog.karajan.workflow.nodes.FlowNode.event(FlowNode.java:332) at org.globus.cog.karajan.workflow.events.EventBus.send(EventBus.java:134) at org.globus.cog.karajan.workflow.events.EventBus.sendHooked(EventBus.java:108) at org.globus.cog.karajan.workflow.nodes.FlowNode.fireNotificationEvent(FlowNode.java:176) at org.globus.cog.karajan.workflow.nodes.FlowNode.complete(FlowNode.java:296) at org.globus.cog.karajan.workflow.nodes.FlowContainer.post(FlowContainer.java:58) at org.globus.cog.karajan.workflow.nodes.functions.AbstractFunction.post(AbstractFunction.java:46) at org.globus.cog.karajan.workflow.nodes.Sequential.startNext(Sequential.java:51) at org.globus.cog.karajan.workflow.nodes.Sequential.executeChildren(Sequential.java:27) at org.globus.cog.karajan.workflow.nodes.functions.AbstractFunction.executeChildren(AbstractFunction.java:40) at org.globus.cog.karajan.workflow.nodes.FlowContainer.execute(FlowContainer.java:63) at org.globus.cog.karajan.workflow.nodes.FlowNode.restart(FlowNode.java:233) at org.globus.cog.karajan.workflow.nodes.FlowNode.start(FlowNode.java:278) at org.globus.cog.karajan.workflow.nodes.FlowNode.controlEvent(FlowNode.java:391) at org.globus.cog.karajan.workflow.nodes.FlowNode.event(FlowNode.java:329) at org.globus.cog.karajan.workflow.FlowElementWrapper.event(FlowElementWrapper.java:227) at org.globus.cog.karajan.workflow.events.EventBus.send(EventBus.java:134) at org.globus.cog.karajan.workflow.events.EventBus.sendHooked(EventBus.java:108) at org.globus.cog.karajan.workflow.events.EventTargetPair.run(EventTargetPair.java:43) at edu.emory.mathcs.backport.java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:431) at edu.emory.mathcs.backport.java.util.concurrent.FutureTask.run(FutureTask.java:166) at edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:643) at edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:668) at java.lang.Thread.run(Thread.java:595) Caused by: Failed to remove job directory /home/wwj/testjobs/j/ItFixInit-jjpe3bkj at org.globus.cog.karajan.workflow.nodes.FlowNode.fail(FlowNode.java:411) at org.globus.cog.karajan.workflow.nodes.FlowNode.fail(FlowNode.java:415) at org.globus.cog.karajan.workflow.nodes.GenerateErrorNode.post(GenerateErrorNode.java:28) ... 
26 more -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: oops-20091202-1700-vmh5dnd3.log URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: oops.swift URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: ItFixInit.sh URL: From hategan at mcs.anl.gov Thu Dec 3 14:47:59 2009 From: hategan at mcs.anl.gov (Mihael Hategan) Date: Thu, 03 Dec 2009 14:47:59 -0600 Subject: [Swift-devel] gridftp issue in running oops swift script on queenbee In-Reply-To: <4B182037.7080809@ci.uchicago.edu> References: <20091123190130.CFZ45811@m4500-02.uchicago.edu> <1259100877.5951.0.camel@localhost> <20091125113308.CGB97214@m4500-02.uchicago.edu> <1259170689.18426.2.camel@localhost> <20091125115025.CGB99648@m4500-02.uchicago.edu> <1259177092.22405.1.camel@localhost> <4B182037.7080809@ci.uchicago.edu> Message-ID: <1259873279.28031.3.camel@localhost> On Thu, 2009-12-03 at 14:31 -0600, Wenjun Wu wrote: > Hello, > I got a weird issue in running the attached script on the TeraGrid > QueenBee. > The workflow fails after the ItFixInit.sh is done. > The InitFixInit.sh just copies some files to an output directory and > it finishes the copying successfully. > But when the swift engine tries to clean up the temp directory for the > ItFixInit task, it fails. > I also tries to set the sitedir.keep=true in the swift.properites to > avoid the removal of the temp files but got the same error. sitedir.keep=true refers to the shared directory not the job directory. Anyway, this seems to be some weird issue with the FS. What would be helpful would be for you to find the .info file from oops-20091202-1700-vmh5dnd3/jobs/j/ItFixInit-jjpe3bk and post that. From wwj at ci.uchicago.edu Thu Dec 3 16:04:34 2009 From: wwj at ci.uchicago.edu (Wenjun Wu) Date: Thu, 03 Dec 2009 16:04:34 -0600 Subject: [Swift-devel] gridftp issue in running oops swift script on queenbee In-Reply-To: <1259873279.28031.3.camel@localhost> References: <20091123190130.CFZ45811@m4500-02.uchicago.edu> <1259100877.5951.0.camel@localhost> <20091125113308.CGB97214@m4500-02.uchicago.edu> <1259170689.18426.2.camel@localhost> <20091125115025.CGB99648@m4500-02.uchicago.edu> <1259177092.22405.1.camel@localhost> <4B182037.7080809@ci.uchicago.edu> <1259873279.28031.3.camel@localhost> Message-ID: <4B1835F2.3030607@ci.uchicago.edu> I can't find the .info file. The swift engine seems to remove the info file since the ItFixInit is finished. Is there any way to tell the swift engine to keep the info file? Wenjun > On Thu, 2009-12-03 at 14:31 -0600, Wenjun Wu wrote: > >> Hello, >> I got a weird issue in running the attached script on the TeraGrid >> QueenBee. >> The workflow fails after the ItFixInit.sh is done. >> The InitFixInit.sh just copies some files to an output directory and >> it finishes the copying successfully. >> But when the swift engine tries to clean up the temp directory for the >> ItFixInit task, it fails. >> I also tries to set the sitedir.keep=true in the swift.properites to >> avoid the removal of the temp files but got the same error. >> > > sitedir.keep=true refers to the shared directory not the job directory. > > Anyway, this seems to be some weird issue with the FS. > > What would be helpful would be for you to find the .info file from > oops-20091202-1700-vmh5dnd3/jobs/j/ItFixInit-jjpe3bk and post that. 
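A minimal swift.properties sketch related to the retention question above. sitedir.keep is the property mentioned in this thread; wrapperlog.always.transfer is recalled from the Swift user guide of this period and should be checked against the installed release before relying on it:

  # keep the remote shared directory instead of cleaning it up at the end of the run
  sitedir.keep=true
  # copy the per-job wrapper info log back to the submit host even for jobs that
  # appear to succeed, so it survives for post-mortem inspection
  wrapperlog.always.transfer=true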
> > From hategan at mcs.anl.gov Thu Dec 3 16:13:14 2009 From: hategan at mcs.anl.gov (Mihael Hategan) Date: Thu, 03 Dec 2009 16:13:14 -0600 Subject: [Swift-devel] gridftp issue in running oops swift script on queenbee In-Reply-To: <4B1835F2.3030607@ci.uchicago.edu> References: <20091123190130.CFZ45811@m4500-02.uchicago.edu> <1259100877.5951.0.camel@localhost> <20091125113308.CGB97214@m4500-02.uchicago.edu> <1259170689.18426.2.camel@localhost> <20091125115025.CGB99648@m4500-02.uchicago.edu> <1259177092.22405.1.camel@localhost> <4B182037.7080809@ci.uchicago.edu> <1259873279.28031.3.camel@localhost> <4B1835F2.3030607@ci.uchicago.edu> Message-ID: <1259878394.30785.1.camel@localhost> On Thu, 2009-12-03 at 16:04 -0600, Wenjun Wu wrote: > I can't find the .info file. The swift engine seems to remove the info > file since the ItFixInit is finished. > Is there any way to tell the swift engine to keep the info file? Sorry. In your case the info file should be in oops-20091202-1700-vmh5dnd3/info/j/ItFixInit-jjpe3bk. From wwj at ci.uchicago.edu Thu Dec 3 16:31:49 2009 From: wwj at ci.uchicago.edu (Wenjun Wu) Date: Thu, 03 Dec 2009 16:31:49 -0600 Subject: [Swift-devel] gridftp issue in running oops swift script on queenbee In-Reply-To: <1259878394.30785.1.camel@localhost> References: <20091123190130.CFZ45811@m4500-02.uchicago.edu> <1259100877.5951.0.camel@localhost> <20091125113308.CGB97214@m4500-02.uchicago.edu> <1259170689.18426.2.camel@localhost> <20091125115025.CGB99648@m4500-02.uchicago.edu> <1259177092.22405.1.camel@localhost> <4B182037.7080809@ci.uchicago.edu> <1259873279.28031.3.camel@localhost> <4B1835F2.3030607@ci.uchicago.edu> <1259878394.30785.1.camel@localhost> Message-ID: <4B183C55.8060408@ci.uchicago.edu> Under the info directory, there is no file left. ls -lt oops-20091202-1700-vmh5dnd3/info total 0 And actually the j/ItFixInit-jjpe3bk was created at the same level as oops-20091202-1700-vmh5dnd3. Wenjun > On Thu, 2009-12-03 at 16:04 -0600, Wenjun Wu wrote: > >> I can't find the .info file. The swift engine seems to remove the info >> file since the ItFixInit is finished. >> Is there any way to tell the swift engine to keep the info file? >> > > Sorry. In your case the info file should be in > oops-20091202-1700-vmh5dnd3/info/j/ItFixInit-jjpe3bk. > > > From hategan at mcs.anl.gov Thu Dec 3 16:58:07 2009 From: hategan at mcs.anl.gov (Mihael Hategan) Date: Thu, 03 Dec 2009 16:58:07 -0600 Subject: [Swift-devel] gridftp issue in running oops swift script on queenbee In-Reply-To: <4B183C55.8060408@ci.uchicago.edu> References: <20091123190130.CFZ45811@m4500-02.uchicago.edu> <1259100877.5951.0.camel@localhost> <20091125113308.CGB97214@m4500-02.uchicago.edu> <1259170689.18426.2.camel@localhost> <20091125115025.CGB99648@m4500-02.uchicago.edu> <1259177092.22405.1.camel@localhost> <4B182037.7080809@ci.uchicago.edu> <1259873279.28031.3.camel@localhost> <4B1835F2.3030607@ci.uchicago.edu> <1259878394.30785.1.camel@localhost> <4B183C55.8060408@ci.uchicago.edu> Message-ID: <1259881088.30785.11.camel@localhost> On Thu, 2009-12-03 at 16:31 -0600, Wenjun Wu wrote: > Under the info directory, there is no file left. > > ls -lt oops-20091202-1700-vmh5dnd3/info > total 0 > > And actually the j/ItFixInit-jjpe3bk was created at the same level as > oops-20091202-1700-vmh5dnd3. Can you do a "tree" in oops-20091202-1700-vmh5dnd3 and paste that? 
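A quick way to see where the wrapper actually wrote its job and info directories is to search the site work directory from the login node. This is only a sketch; it assumes the /home/wwj/testjobs work directory and the ItFixInit job name quoted earlier in the thread:

  find /home/wwj/testjobs -maxdepth 4 \( -name 'ItFixInit-*' -o -name '*-info' \) -ls

If the matches turn up under /home/wwj/testjobs/j/ rather than under the oops-*/jobs and oops-*/info trees, that would be consistent with the "created at the same level" observation above.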
From wwj at ci.uchicago.edu Thu Dec 3 23:03:04 2009 From: wwj at ci.uchicago.edu (Wenjun Wu) Date: Thu, 03 Dec 2009 23:03:04 -0600 Subject: [Swift-devel] gridftp issue in running oops swift script on queenbee In-Reply-To: <1259881088.30785.11.camel@localhost> References: <20091123190130.CFZ45811@m4500-02.uchicago.edu> <1259100877.5951.0.camel@localhost> <20091125113308.CGB97214@m4500-02.uchicago.edu> <1259170689.18426.2.camel@localhost> <20091125115025.CGB99648@m4500-02.uchicago.edu> <1259177092.22405.1.camel@localhost> <4B182037.7080809@ci.uchicago.edu> <1259873279.28031.3.camel@localhost> <4B1835F2.3030607@ci.uchicago.edu> <1259878394.30785.1.camel@localhost> <4B183C55.8060408@ci.uchicago.edu> <1259881088.30785.11.camel@localhost> Message-ID: <4B189808.7060401@ci.uchicago.edu> . |-info |-kickstart |-shared |---dev |---output |-----T1af7 |-------R00 |---sandbox |-----wwj |-------oops |---------input |-----------fasta |-----------native |-----------rama |-----------secseq |-status |---f |---h |---j > On Thu, 2009-12-03 at 16:31 -0600, Wenjun Wu wrote: > >> Under the info directory, there is no file left. >> >> ls -lt oops-20091202-1700-vmh5dnd3/info >> total 0 >> >> And actually the j/ItFixInit-jjpe3bk was created at the same level as >> oops-20091202-1700-vmh5dnd3. >> > > Can you do a "tree" in oops-20091202-1700-vmh5dnd3 and paste that? > > From hategan at mcs.anl.gov Thu Dec 3 23:25:54 2009 From: hategan at mcs.anl.gov (Mihael Hategan) Date: Thu, 03 Dec 2009 23:25:54 -0600 Subject: [Swift-devel] gridftp issue in running oops swift script on queenbee In-Reply-To: <4B189808.7060401@ci.uchicago.edu> References: <20091123190130.CFZ45811@m4500-02.uchicago.edu> <1259100877.5951.0.camel@localhost> <20091125113308.CGB97214@m4500-02.uchicago.edu> <1259170689.18426.2.camel@localhost> <20091125115025.CGB99648@m4500-02.uchicago.edu> <1259177092.22405.1.camel@localhost> <4B182037.7080809@ci.uchicago.edu> <1259873279.28031.3.camel@localhost> <4B1835F2.3030607@ci.uchicago.edu> <1259878394.30785.1.camel@localhost> <4B183C55.8060408@ci.uchicago.edu> <1259881088.30785.11.camel@localhost> <4B189808.7060401@ci.uchicago.edu> Message-ID: <1259904354.4512.2.camel@localhost> Doesn't look like there are any files there created by the compute nodes. Are you sure that filesystem is visible from the compute node? On Thu, 2009-12-03 at 23:03 -0600, Wenjun Wu wrote: > . > |-info > |-kickstart > |-shared > |---dev > |---output > |-----T1af7 > |-------R00 > |---sandbox > |-----wwj > |-------oops > |---------input > |-----------fasta > |-----------native > |-----------rama > |-----------secseq > |-status > |---f > |---h > |---j > > > On Thu, 2009-12-03 at 16:31 -0600, Wenjun Wu wrote: > > > >> Under the info directory, there is no file left. > >> > >> ls -lt oops-20091202-1700-vmh5dnd3/info > >> total 0 > >> > >> And actually the j/ItFixInit-jjpe3bk was created at the same level as > >> oops-20091202-1700-vmh5dnd3. > >> > > > > Can you do a "tree" in oops-20091202-1700-vmh5dnd3 and paste that? 
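One way to answer the visibility question directly is to submit a small probe job through the site's local resource manager and see whether it can reach the same work directory. A sketch only, assuming PBS is the batch system on QueenBee and reusing the work directory from earlier in the thread; adjust the resource flags to whatever the site expects:

  echo 'hostname; ls -ld /home/wwj/testjobs && touch /home/wwj/testjobs/visible-from-compute-node' | qsub -N fsprobe -l nodes=1:ppn=1,walltime=00:05:00

If the touched file never shows up on the login node, the wrapper's status and info files are landing somewhere the gridftp server cannot see, which would explain the empty directories.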
> > > From wilde at mcs.anl.gov Fri Dec 4 11:18:11 2009 From: wilde at mcs.anl.gov (Michael Wilde) Date: Fri, 04 Dec 2009 11:18:11 -0600 Subject: [Swift-devel] gridftp issue in running oops swift script on queenbee In-Reply-To: <1259904354.4512.2.camel@localhost> References: <20091123190130.CFZ45811@m4500-02.uchicago.edu> <1259100877.5951.0.camel@localhost> <20091125113308.CGB97214@m4500-02.uchicago.edu> <1259170689.18426.2.camel@localhost> <20091125115025.CGB99648@m4500-02.uchicago.edu> <1259177092.22405.1.camel@localhost> <4B182037.7080809@ci.uchicago.edu> <1259873279.28031.3.camel@localhost> <4B1835F2.3030607@ci.uchicago.edu> <1259878394.30785.1.camel@localhost> <4B183C55.8060408@ci.uchicago.edu> <1259881088.30785.11.camel@localhost> <4B189808.7060401@ci.uchicago.edu> <1259904354.4512.2.camel@localhost> Message-ID: <4B194453.5050507@mcs.anl.gov> Wenjun, can you try to run a simple script consisting of a single cat command, to make it easy to see what Swift is doing on Queenbee? It looks to me like something is acting strange in the interaction between Swift and Queenbee's gridftp server. Mihael: looking at the original logs I was perplexed to see that Swift could create the remote directories but not remove them. Almost as if the remove command was using a pathname that the gridftp server couldn't see. Wenjun tried a manual globus-url-copy to what we thought was a very similar path, and from the command line (with Swift's globus-url-copy) it was able to fetch individual files. So it seems that his workdir is visible to Queenbee's gridftp by what *seemed* like the expected path. It would be interesting to try uberftp and see if dirs and files can be removed. You'd need a host that has it; not sure if it's in the teragrid toolkit or not. It's in the OSG client stack. Maybe it can be downloaded separately. - Mike On 12/3/09 11:25 PM, Mihael Hategan wrote: > Doesn't look like there are any files there created by the compute > nodes. Are you sure that filesystem is visible from the compute node? > > On Thu, 2009-12-03 at 23:03 -0600, Wenjun Wu wrote: >> . >> |-info >> |-kickstart >> |-shared >> |---dev >> |---output >> |-----T1af7 >> |-------R00 >> |---sandbox >> |-----wwj >> |-------oops >> |---------input >> |-----------fasta >> |-----------native >> |-----------rama >> |-----------secseq >> |-status >> |---f >> |---h >> |---j >> >>> On Thu, 2009-12-03 at 16:31 -0600, Wenjun Wu wrote: >>> >>>> Under the info directory, there is no file left. >>>> >>>> ls -lt oops-20091202-1700-vmh5dnd3/info >>>> total 0 >>>> >>>> And actually the j/ItFixInit-jjpe3bk was created at the same level as >>>> oops-20091202-1700-vmh5dnd3. >>>> >>> Can you do a "tree" in oops-20091202-1700-vmh5dnd3 and paste that? 
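Following the uberftp suggestion above, a sketch of an interactive session that exercises directory creation and removal through the same gridftp server Swift talks to. The hostname is left as a placeholder (use whatever gridftp contact is configured for QueenBee in sites.xml), and only standard uberftp client commands are assumed:

  uberftp <queenbee-gridftp-host>
  uberftp> cd /home/wwj/testjobs
  uberftp> mkdir probe-dir
  uberftp> rmdir probe-dir
  uberftp> quit

If mkdir and rmdir both succeed here, the server side looks healthy and the failure is more likely on the compute-node side, which is where the next message in the thread points.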
>>> >>> > > _______________________________________________ > Swift-devel mailing list > Swift-devel at ci.uchicago.edu > http://mail.ci.uchicago.edu/mailman/listinfo/swift-devel From hategan at mcs.anl.gov Fri Dec 4 11:51:42 2009 From: hategan at mcs.anl.gov (Mihael Hategan) Date: Fri, 04 Dec 2009 11:51:42 -0600 Subject: [Swift-devel] gridftp issue in running oops swift script on queenbee In-Reply-To: <4B194453.5050507@mcs.anl.gov> References: <20091123190130.CFZ45811@m4500-02.uchicago.edu> <1259100877.5951.0.camel@localhost> <20091125113308.CGB97214@m4500-02.uchicago.edu> <1259170689.18426.2.camel@localhost> <20091125115025.CGB99648@m4500-02.uchicago.edu> <1259177092.22405.1.camel@localhost> <4B182037.7080809@ci.uchicago.edu> <1259873279.28031.3.camel@localhost> <4B1835F2.3030607@ci.uchicago.edu> <1259878394.30785.1.camel@localhost> <4B183C55.8060408@ci.uchicago.edu> <1259881088.30785.11.camel@localhost> <4B189808.7060401@ci.uchicago.edu> <1259904354.4512.2.camel@localhost> <4B194453.5050507@mcs.anl.gov> Message-ID: <1259949102.17099.2.camel@localhost> On Fri, 2009-12-04 at 11:18 -0600, Michael Wilde wrote: > Wenjun, can you try to run a simple script consisting of a single cat > command, to make it easy to see what Swift is doing on Queenbee? > > It looks to me like something is acting strange in the interaction > between Swift and Queenbee's gridftp server. > > Mihael: looking at the original logs I was perplexed to see that Swift > could create the remote directories but not remove them. Almost as if > the remove command was using a pathname that the grodftp server couldnt see. That's because the creation is done by swift, on the head node (through gridftp), and the removal is done by the wrapper, on the compute node. From tiberius at ci.uchicago.edu Fri Dec 4 14:07:15 2009 From: tiberius at ci.uchicago.edu (Tiberiu Stef-Praun) Date: Fri, 4 Dec 2009 14:07:15 -0600 Subject: [Swift-devel] swift-plot-log fails on this file Message-ID: Spamming the whole list, not sure who would fix this I am testing it with a fresh svn checkout (but it fails on swift-0.9 too) Thanks Tibi -- Tiberiu (Tibi) Stef-Praun, PhD Computational Sciences Researcher Computation Institute 5640 S. Ellis Ave, #405 University of Chicago http://www-unix.mcs.anl.gov/~tiberius/ -------------- next part -------------- A non-text attachment was scrubbed... Name: policyest-dp-engine-20091204-1333-lk8rg426.log Type: application/octet-stream Size: 900617 bytes Desc: not available URL: From iraicu at cs.uchicago.edu Mon Dec 7 17:16:41 2009 From: iraicu at cs.uchicago.edu (Ioan Raicu) Date: Mon, 07 Dec 2009 17:16:41 -0600 Subject: [Swift-devel] [Fwd: [Workflows] call for book chapters: (fwd)] Message-ID: <4B1D8CD9.10700@cs.uchicago.edu> This might be an interesting venue for some of you. Cheers, Ioan -------- Original Message -------- Subject: [Workflows] call for book chapters: (fwd) Date: Mon, 7 Dec 2009 13:37:07 -0600 (CST) From: Yong Zhao To: iraicu at eecs.northwestern.edu, Shiyong Lu ---------- Forwarded message ---------- Date: Sun, 6 Dec 2009 23:37:03 -0500 From: Lizhe Wang Reply-To: workflows at googlegroups.com To: workflows at googlegroups.com Subject: [Workflows] call for book chapters: Title: Guide to e-Science: Next Generation Scientific Research and Discovery Publisher: Springer Synopsis: The fundamental principal in e-Science is based on the trend that procedures and practices of traditional way in which science is conducted are undergoing radical change. 
This change is based on the inclusion of modern cyberinfrastructure as part of the science, or experiment environment which includes not only the already ubiquitous high end computers, storage and network infrastructure, but also emerging Web technologies. This allows the exploration of previously unknown problems via simulation, generation and analysis of large amount of data, and global research collaboration. e-Science is inherently interdisciplinary allowing and promoting synergistic activities between different scientific disciplines rather than just between a single discipline and computer science. The book aims to describe e-Science methodologies, associated tools & middleware, systems, applications and services. It will include e-Science concept, issues, principles and methodologies, as well as how various technologies and tools can be employed to build an essential infrastructure to support various research missions in many areas of science (e.g. particle physics, earth science, bio-informatics). For example, e-Science employs Grid computing as one of the major enabling technology contributors to make the e-Science vision a reality. It also includes using parallel computing and various distributed computing technologies such as SOA, collaborative computing, workflow, ontology and semantic Web to develop middleware, services and applications. As e-Science has made significant progress over the past 5 years, this book will also provide successful case studies on e-Science practice and application / service development in scientific and engineering disciplines. It covers areas such as infrastructure, computational resource management, data management, collaborative computing & workflow, middleware, application and service development. Topic of interest includes, but not limited to: e-Science concept , principles, philosophy, and methodology Design patterns in e-ScienceWeb 2.0 and research 2.0 in e-Science e-Science infrastructure Workflow and simulation process automation Semantic web and ontology in e-Science Web service and SOA technologies in e-ScienceSecurity in e-Science Service Level Agreement and QoS in e-Science The emerging Cloud computing in e-Science e-Science practice and novel applications e-Science and Virtual Organization(VO) Lessons learned and the future trend of e-Science Expected readers include scientists, researchers, engineers and IT professionals who work in the fields of computational science (e.g. particle physics, earth sciences), parallel and distributed computing, Grid computing/Cloud computing, etc. The book also can be employed as a reference book for postgraduate students who study computer science. The book is of series ?Computer Communications and Networks? to be published by Springer. Important dates: 01, Jan., 2010: book chapter proposal due 15, Jan, 2010: notification of book chapter proposal 15, Apr., 2010: book chapter due 15, Jun., 2010: notification of book chapter submission 15, Aug., 2010: camera-ready accepted book chapters due Manuscript submission: Book chapter contributors are expected to submit a 1-2 page book chapter proposal in MS Word format to one of the editors below with the subject of "Lastname: e-Science book chapter submission". The final version of accepted book chapter should be prepared using the template provided by the publisher. It is expected that each chapter should be 20-35 pages, with figures / illustrations. Manuscript format: http://www.springer.com/authors/book+authors?SGWID=0-154102-12-417900-0 Editors: Dr. 
Xiaoyu Yang, IT Innovation Centre, University of Southampton, UK.Email: kev.x.yang at gmail.com Dr. Lizhe Lizhe Wang, Rochester Institute of Technology. Email: lizhe.wang at gmail.com Dr. Wei Jie, Thames Valley University, UK. Email: jiewei at pmail.ntu.edu.sg -- Google group on "Workflows", http://groups.google.com/group/workflows. Supported by Technical Area in IEEE TCSC on "Workflow Management in Scalable Computing Environments", http://www.swinflow.org/tcsc/wmsce.htm. To post to this group, send email to workflows at googlegroups.com -- ================================================================= Ioan Raicu, Ph.D. NSF/CRA Computing Innovation Fellow ================================================================= Center for Ultra-scale Computing and Information Security (CUCIS) Department of Electrical Engineering and Computer Science Northwestern University 2145 Sheridan Rd, Tech M384 Evanston, IL 60208-3118 ================================================================= Cel: 1-847-722-0876 Tel: 1-847-491-8163 Email: iraicu at eecs.northwestern.edu Web: http://www.eecs.northwestern.edu/~iraicu/ https://wiki.cucis.eecs.northwestern.edu/ ================================================================= ================================================================= -------------- next part -------------- An HTML attachment was scrubbed... URL: From aespinosa at cs.uchicago.edu Mon Dec 7 19:27:14 2009 From: aespinosa at cs.uchicago.edu (Allan Espinosa) Date: Mon, 7 Dec 2009 19:27:14 -0600 Subject: [Swift-devel] precision of jobThrottle in sites.xml Message-ID: <50b07b4b0912071727o6696b68dmedec687d7f310b68@mail.gmail.com> I got load balancing off by one. this resulted in straggler jobs in my workload. sites.xml entry for each site: $workdir 2.56 10000 job distribution to sites from swift-plot-log: site JOB_START JOB_END APPLICATION_EXCEPTION JOB_CANCELED unknown total BGP_000 0 257 0 0 0 257 BGP_001 0 257 0 0 0 257 BGP_002 0 257 0 0 0 257 BGP_003 0 257 0 0 0 257 BGP_004 0 257 0 0 0 257 BGP_005 0 257 0 0 0 257 BGP_006 0 257 0 0 0 257 BGP_007 0 241 0 0 0 241 BGP_008 0 257 0 0 0 257 BGP_009 0 257 0 0 0 257 BGP_010 0 257 0 0 0 257 BGP_011 0 257 0 0 0 257 BGP_012 0 257 0 0 0 257 BGP_013 0 257 0 0 0 257 BGP_014 0 257 0 0 0 257 BGP_015 0 257 0 0 0 257 Trying out a jobThrottle of 2.54 doesn't give me a good split either: site JOB_START JOB_END APPLICATION_EXCEPTION JOB_CANCELED unknown total BGP_000 0 256 0 0 0 256 BGP_001 0 255 0 0 0 255 BGP_002 0 255 0 0 0 255 BGP_003 4 251 0 0 0 255 BGP_004 0 259 0 0 0 259 BGP_005 4 251 0 0 0 255 BGP_006 0 256 0 0 0 256 BGP_007 0 255 0 0 0 255 BGP_008 0 255 0 0 0 255 BGP_009 0 255 0 0 0 255 BGP_010 0 258 0 0 0 258 BGP_011 0 255 0 0 0 255 BGP_012 0 258 0 0 0 258 BGP_013 0 256 0 0 0 256 BGP_014 0 255 0 0 0 255 BGP_015 0 258 0 0 0 258 What do you guys suggest for more precise load distribution? a higher score? try it with jobThrottle=2.55? Thanks, -Allan -- Allan M. Espinosa PhD student, Computer Science University of Chicago From wilde at mcs.anl.gov Mon Dec 7 20:00:59 2009 From: wilde at mcs.anl.gov (Michael Wilde) Date: Mon, 07 Dec 2009 20:00:59 -0600 Subject: [Swift-devel] precision of jobThrottle in sites.xml In-Reply-To: <50b07b4b0912071727o6696b68dmedec687d7f310b68@mail.gmail.com> References: <50b07b4b0912071727o6696b68dmedec687d7f310b68@mail.gmail.com> Message-ID: <4B1DB35B.5070704@mcs.anl.gov> Allan, your *throttle value* is off by one. 
The formula is: nJobs = (jobThrottle*100)+1 So for 256 jobs you want to set it to 2.55 Mihael me be able to explain the rationale. I think it wasnt designed to be directly set by users. But since it so frequently is, perhaps the formula should be made simpler, or a nJobs parameter added. - Mike On 12/7/09 7:27 PM, Allan Espinosa wrote: > I got load balancing off by one. this resulted in straggler jobs in > my workload. > > sites.xml entry for each site: > > url="http://$ip:50001/wsrf/services/GenericPortal/core/WS/GPFactoryService"/> > > $workdir > 2.56 > 10000 > > > job distribution to sites from swift-plot-log: > site JOB_START JOB_END APPLICATION_EXCEPTION JOB_CANCELED unknown total > BGP_000 0 257 0 0 0 257 > BGP_001 0 257 0 0 0 257 > BGP_002 0 257 0 0 0 257 > BGP_003 0 257 0 0 0 257 > BGP_004 0 257 0 0 0 257 > BGP_005 0 257 0 0 0 257 > BGP_006 0 257 0 0 0 257 > BGP_007 0 241 0 0 0 241 > BGP_008 0 257 0 0 0 257 > BGP_009 0 257 0 0 0 257 > BGP_010 0 257 0 0 0 257 > BGP_011 0 257 0 0 0 257 > BGP_012 0 257 0 0 0 257 > BGP_013 0 257 0 0 0 257 > BGP_014 0 257 0 0 0 257 > BGP_015 0 257 0 0 0 257 > > Trying out a jobThrottle of 2.54 doesn't give me a good split either: > site JOB_START JOB_END APPLICATION_EXCEPTION JOB_CANCELED unknown total > BGP_000 0 256 0 0 0 256 > BGP_001 0 255 0 0 0 255 > BGP_002 0 255 0 0 0 255 > BGP_003 4 251 0 0 0 255 > BGP_004 0 259 0 0 0 259 > BGP_005 4 251 0 0 0 255 > BGP_006 0 256 0 0 0 256 > BGP_007 0 255 0 0 0 255 > BGP_008 0 255 0 0 0 255 > BGP_009 0 255 0 0 0 255 > BGP_010 0 258 0 0 0 258 > BGP_011 0 255 0 0 0 255 > BGP_012 0 258 0 0 0 258 > BGP_013 0 256 0 0 0 256 > BGP_014 0 255 0 0 0 255 > BGP_015 0 258 0 0 0 258 > > What do you guys suggest for more precise load distribution? a higher > score? try it with jobThrottle=2.55? > > Thanks, > -Allan > From aespinosa at cs.uchicago.edu Mon Dec 7 20:35:39 2009 From: aespinosa at cs.uchicago.edu (Allan Espinosa) Date: Mon, 7 Dec 2009 20:35:39 -0600 Subject: [Swift-devel] precision of jobThrottle in sites.xml In-Reply-To: <4B1DB35B.5070704@mcs.anl.gov> References: <50b07b4b0912071727o6696b68dmedec687d7f310b68@mail.gmail.com> <4B1DB35B.5070704@mcs.anl.gov> Message-ID: <50b07b4b0912071835o5bb6919dpcdd2dfe51e42e594@mail.gmail.com> Aha. in the documentation it says nJobs = (jobThrottle*100)+ 2 Thanks for confirming that. -Allan 2009/12/7 Michael Wilde : > Allan, your *throttle value* is off by one. > > The formula is: nJobs = (jobThrottle*100)+1 > > So for 256 jobs you want to set it to 2.55 > > Mihael me be able to explain the rationale. I think it wasnt designed to be > directly set by users. But since it so frequently is, perhaps the formula > should be made simpler, or a nJobs parameter added. > > - Mike > > On 12/7/09 7:27 PM, Allan Espinosa wrote: >> >> I got load balancing off by one. this resulted ?in straggler jobs in >> my workload. >> >> sites.xml entry for each site: >> ? >> ? ? ?> >> url="http://$ip:50001/wsrf/services/GenericPortal/core/WS/GPFactoryService"/> >> ? ? ? >> ? ? ?$workdir >> ? ? ?2.56 >> ? ? ?10000 >> ? ? >> >> job distribution to sites from swift-plot-log: >> site ? ?JOB_START ? ? ? JOB_END ? ? ? ? APPLICATION_EXCEPTION >> JOB_CANCELED ? ?unknown ? ? ? ? total >> BGP_000 ? ? ? ? 0 ? ? ? 257 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 257 >> BGP_001 ? ? ? ? 0 ? ? ? 257 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 257 >> BGP_002 ? ? ? ? 0 ? ? ? 257 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 257 >> BGP_003 ? ? ? ? 0 ? ? ? 257 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 257 >> BGP_004 ? ? ? ? 0 ? ? ? 257 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 257 >> BGP_005 ? ? ? ? 0 ? 
? ? 257 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 257 >> BGP_006 ? ? ? ? 0 ? ? ? 257 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 257 >> BGP_007 ? ? ? ? 0 ? ? ? 241 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 241 >> BGP_008 ? ? ? ? 0 ? ? ? 257 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 257 >> BGP_009 ? ? ? ? 0 ? ? ? 257 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 257 >> BGP_010 ? ? ? ? 0 ? ? ? 257 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 257 >> BGP_011 ? ? ? ? 0 ? ? ? 257 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 257 >> BGP_012 ? ? ? ? 0 ? ? ? 257 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 257 >> BGP_013 ? ? ? ? 0 ? ? ? 257 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 257 >> BGP_014 ? ? ? ? 0 ? ? ? 257 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 257 >> BGP_015 ? ? ? ? 0 ? ? ? 257 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 257 >> >> Trying out a jobThrottle of 2.54 doesn't give me a good split either: >> site ? ?JOB_START ? ? ? JOB_END ? ? ? ? APPLICATION_EXCEPTION >> JOB_CANCELED ? ?unknown ? ? ? ? total >> BGP_000 ? ? ? ? 0 ? ? ? 256 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 256 >> BGP_001 ? ? ? ? 0 ? ? ? 255 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 255 >> BGP_002 ? ? ? ? 0 ? ? ? 255 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 255 >> BGP_003 ? ? ? ? 4 ? ? ? 251 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 255 >> BGP_004 ? ? ? ? 0 ? ? ? 259 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 259 >> BGP_005 ? ? ? ? 4 ? ? ? 251 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 255 >> BGP_006 ? ? ? ? 0 ? ? ? 256 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 256 >> BGP_007 ? ? ? ? 0 ? ? ? 255 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 255 >> BGP_008 ? ? ? ? 0 ? ? ? 255 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 255 >> BGP_009 ? ? ? ? 0 ? ? ? 255 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 255 >> BGP_010 ? ? ? ? 0 ? ? ? 258 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 258 >> BGP_011 ? ? ? ? 0 ? ? ? 255 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 255 >> BGP_012 ? ? ? ? 0 ? ? ? 258 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 258 >> BGP_013 ? ? ? ? 0 ? ? ? 256 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 256 >> BGP_014 ? ? ? ? 0 ? ? ? 255 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 255 >> BGP_015 ? ? ? ? 0 ? ? ? 258 ? ? 0 ? ? ? 0 ? ? ? 0 ? ? ? 258 >> >> What do you guys suggest for more precise load distribution? a higher >> score? try it with jobThrottle=2.55? >> >> Thanks, >> -Allan >> > > -- Allan M. Espinosa PhD student, Computer Science University of Chicago From wwj at ci.uchicago.edu Tue Dec 8 13:14:00 2009 From: wwj at ci.uchicago.edu (Wenjun Wu) Date: Tue, 08 Dec 2009 13:14:00 -0600 Subject: [Swift-devel] gridftp issue in running oops swift script on queenbee In-Reply-To: <4B194453.5050507@mcs.anl.gov> References: <20091123190130.CFZ45811@m4500-02.uchicago.edu> <1259100877.5951.0.camel@localhost> <20091125113308.CGB97214@m4500-02.uchicago.edu> <1259170689.18426.2.camel@localhost> <20091125115025.CGB99648@m4500-02.uchicago.edu> <1259177092.22405.1.camel@localhost> <4B182037.7080809@ci.uchicago.edu> <1259873279.28031.3.camel@localhost> <4B1835F2.3030607@ci.uchicago.edu> <1259878394.30785.1.camel@localhost> <4B183C55.8060408@ci.uchicago.edu> <1259881088.30785.11.camel@localhost> <4B189808.7060401@ci.uchicago.edu> <1259904354.4512.2.camel@localhost> <4B194453.5050507@mcs.anl.gov> Message-ID: <4B1EA578.4070206@ci.uchicago.edu> I wrote a simple script doing the cat on Queenbeen and it worked fine. type MFile; (MFile outfile) runcat(MFile infile){ app { rcat @infile stdout=@filename(outfile); } } MFile infile ; MFile outfile ; outfile=runcat(infile); What I really want to try as a next step is to run the ItFixInit as a local task instead of a remote one. It could avoid the weird issue. Wenjun > Wenjun, can you try to run a simple script consisting of a single cat > command, to make it easy to see what Swift is doing on Queenbee? 
> > It looks to me like something is acting strange in the interaction > between Swift and Queenbee's gridftp server. > > Mihael: looking at the original logs I was perplexed to see that Swift > could create the remote directories but not remove them. Almost as if > the remove command was using a pathname that the grodftp server > couldnt see. > > Wenjun tried a manual globus-url-copy to what we thought was a very > similar path, and from the command line (with Swift's globus-url-copy) > it was able to fetch individual files. So it seems that his workdir is > visible to Queenbee's gridftp by what *seemed* like the expected path. > > It would be interesting to try uberftp and see if dirs and files can > be removed. You'd need a host that has it; not sure if its in the > teragrid toolkit or not. Its in the OSG client stack. Maybe can > download it separately. > > - Mike > > > > On 12/3/09 11:25 PM, Mihael Hategan wrote: >> Doesn't look like there are any files there created by the compute >> nodes. Are you sure that filesystem is visible from the compute node? >> >> On Thu, 2009-12-03 at 23:03 -0600, Wenjun Wu wrote: >>> . >>> |-info >>> |-kickstart >>> |-shared >>> |---dev >>> |---output >>> |-----T1af7 >>> |-------R00 >>> |---sandbox >>> |-----wwj >>> |-------oops >>> |---------input >>> |-----------fasta >>> |-----------native >>> |-----------rama >>> |-----------secseq >>> |-status >>> |---f >>> |---h >>> |---j >>> >>>> On Thu, 2009-12-03 at 16:31 -0600, Wenjun Wu wrote: >>>> >>>>> Under the info directory, there is no file left. >>>>> >>>>> ls -lt oops-20091202-1700-vmh5dnd3/info >>>>> total 0 >>>>> >>>>> And actually the j/ItFixInit-jjpe3bk was created at the same level >>>>> as oops-20091202-1700-vmh5dnd3. >>>>> >>>> Can you do a "tree" in oops-20091202-1700-vmh5dnd3 and paste that? >>>> >>>> >> >> _______________________________________________ >> Swift-devel mailing list >> Swift-devel at ci.uchicago.edu >> http://mail.ci.uchicago.edu/mailman/listinfo/swift-devel From iraicu at cs.uchicago.edu Fri Dec 11 16:12:00 2009 From: iraicu at cs.uchicago.edu (Ioan Raicu) Date: Fri, 11 Dec 2009 16:12:00 -0600 Subject: [Swift-devel] CFP: 19th ACM International Symposium on High Performance Distributed Computing (HPDC) 2010 Message-ID: <4B22C3B0.5010507@cs.uchicago.edu> The 19th ACM International Symposium on High Performance Distributed Computing (HPDC 2010) is now accepting submissions of research papers. Authors are invited to submit full papers of at most 12 pages or short papers of at most 4 pages. Details about formatting requirements and the submission process are given at http://hpdc2010.eecs.northwestern.edu/submitpaper.html. The deadline for registering an abstract is Friday, January 15, 2010 and the deadline for the complete paper is Friday, January 22, 2010. The detailed call for papers follows, and can also be seen online at http://hpdc2010.eecs.northwestern.edu/hpdc2010-cfp.txt. ======================================================================= ACM HPDC 2010 Call For Papers 19th ACM International Symposium on High Performance Distributed Computing Chicago, Illinois June 21-25, 2010 http://hpdc2010.eecs.northwestern.edu/ The ACM International Symposium on High Performance Distributed Computing (HPDC) is the premier venue for presenting the latest research on the design, implementation, evaluation, and use of parallel and distributed systems for high performance and high end computing. 
The 19th installment of HPDC will take place in the heart of Chicago, Illinois, the third-largest city in the United States and a major technological and cultural capital. The conference will be held on June 23-25 (Wednesday through Friday) with affiliated workshops occurring on June 21-22 (Monday and Tuesday).

Submissions are welcomed on all forms of high performance distributed computing, including grids, clouds, clusters, service-oriented computing, utility computing, peer-to-peer systems, and global computing ensembles. New scholarly research showing empirical and reproducible results in architectures, systems, and networks is strongly encouraged, as are experience reports of applications and deployments that can provide insights for future high performance distributed computing research.

All papers will be rigorously reviewed by a distinguished program committee, with a strong focus on the combination of rigorous scientific results and likely high impact within high performance distributed computing. Research papers must clearly demonstrate research contributions and novelty, while experience reports must clearly describe lessons learned and demonstrate impact.

Topics of interest include (but are not limited to) the following, in the context of high performance distributed computing and high end computing:

* Systems
* Architectures
* Algorithms
* Networking
* Programming languages and environments
* Data management
* I/O and file systems
* Virtualization
* Resource management, scheduling, and load-balancing
* Performance modeling, simulation, and prediction
* Fault tolerance, reliability and availability
* Security, configuration, policy, and management issues
* Multicore issues and opportunities
* Models and use cases for utility, grid, and cloud computing

Both full papers and short papers (for poster presentation and/or demonstrations) may be submitted.

IMPORTANT DATES

Paper Abstract submissions: January 15, 2010
Paper submissions: January 22, 2010
Author notification: March 30, 2010
Final manuscripts: April 23, 2010

SUBMISSIONS

Authors are invited to submit full papers of at most 12 pages or short papers of at most 4 pages. The page limits include all figures and references. Papers should be formatted in the ACM proceedings style (e.g., http://www.acm.org/sigs/publications/proceedings-templates). Reviewing is single-blind. Papers must be self-contained and provide the technical substance required for the program committee to evaluate the paper's contribution, including how it differs from prior work. All papers will be reviewed and judged on correctness, originality, technical strength, significance, quality of presentation, and interest and relevance to the conference. Submitted papers must be original work that has not appeared in and is not under consideration for another conference or a journal. There will be NO DEADLINE EXTENSIONS.

PUBLICATION

Accepted full and short papers will appear in the conference proceedings.

WORKSHOPS

HPDC will sponsor eight affiliated workshops on the two days preceding the conference. Each will feature a mix of invited talks and peer-reviewed papers on specialized topics. Information on topics and submission is available at http://hpdc2010.eecs.northwestern.edu/workshops.html.

GENERAL CO-CHAIRS

Kate Keahey, Argonne National Labs
Salim Hariri, University of Arizona

STEERING COMMITTEE

Salim Hariri, Univ. of Arizona (Chair)
Andrew A. Chien, Intel / UCSD
Henri Bal, Vrije Universiteit
Franck Cappello, INRIA
Jack Dongarra, Univ. of Tennessee
Ian Foster, ANL & Univ. of Chicago
Andrew Grimshaw, Univ. of Virginia
Carl Kesselman, USC/ISI
Dieter Kranzlmueller, Ludwig-Maximilians-Univ. Muenchen
Miron Livny, Univ. of Wisconsin
Manish Parashar, Rutgers University
Karsten Schwan, Georgia Tech
David Walker, Univ. of Cardiff
Rich Wolski, UCSB

PROGRAM CHAIR

Peter Dinda, Northwestern University

PROGRAM COMMITTEE

Kento Aida, NII and Tokyo Institute of Technology
Ron Brightwell, Sandia National Labs
Fabian Bustamante, Northwestern University
Henri Bal, Vrije Universiteit
Franck Cappello, INRIA
Claris Castillo, IBM Research
Henri Casanova, University of Hawaii
Abhishek Chandra, University of Minnesota
Chris Colohan, Google
Brian Cooper, Yahoo Research
Wu-chun Feng, Virginia Tech
Renato Ferreira, Universidade Federal de Minas Gerais
Jose Fortes, University of Florida
Ian Foster, University of Chicago / Argonne
Geoffrey Fox, Indiana University
Michael Gerndt, TU-Munich
Andrew Grimshaw, University of Virginia
Thilo Kielmann, Vrije Universiteit
Zhiling Lan, IIT
John Lange, Northwestern University
Arthur Maccabe, Oak Ridge National Labs
Satoshi Matsuoka, Tokyo Institute of Technology
Jose Moreira, IBM Research
Klara Nahrstedt, UIUC
Dushyanth Narayanan, Microsoft Research
Manish Parashar, Rutgers University
Ioan Raicu, Northwestern University
Morris Riedel, Juelich Supercomputing Centre
Matei Ripeanu, UBC
Joel Saltz, Emory University
Karsten Schwan, Georgia Tech
Thomas Stricker, Google
Jaspal Subhlok, University of Houston
Martin Swany, University of Delaware
Michela Taufer, University of Delaware
Valerie Taylor, TAMU
Douglas Thain, University of Notre Dame
Jon Weissman, University of Minnesota
Rich Wolski, UCSB and Eucalyptus Systems
Dongyan Xu, Purdue University
Ken Yocum, UCSD

WORKSHOP CHAIR

Douglas Thain, University of Notre Dame

PUBLICITY CO-CHAIRS

Martin Swany, U. Delaware
Morris Riedel, Juelich Supercomputing Centre
Renato Ferreira, Universidade Federal de Minas Gerais
Kento Aida, NII and Tokyo Institute of Technology

LOCAL ARRANGEMENTS CHAIR

Zhiling Lan, IIT

STUDENT ACTIVITIES CO-CHAIRS

John Lange, Northwestern University
Ioan Raicu, Northwestern University

--
=================================================================
Ioan Raicu, Ph.D.
NSF/CRA Computing Innovation Fellow
=================================================================
Center for Ultra-scale Computing and Information Security (CUCIS)
Department of Electrical Engineering and Computer Science
Northwestern University
2145 Sheridan Rd, Tech M384
Evanston, IL 60208-3118
=================================================================
Cel:   1-847-722-0876
Tel:   1-847-491-8163
Email: iraicu at eecs.northwestern.edu
Web:   http://www.eecs.northwestern.edu/~iraicu/
       https://wiki.cucis.eecs.northwestern.edu/
=================================================================
=================================================================
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From iraicu at cs.uchicago.edu Mon Dec 14 14:07:46 2009
From: iraicu at cs.uchicago.edu (Ioan Raicu)
Date: Mon, 14 Dec 2009 14:07:46 -0600
Subject: [Swift-devel] CFP: 1st ACM Workshop on Scientific Cloud Computing (ScienceCloud) 2010
Message-ID: <4B269B12.7000902@cs.uchicago.edu>

Call for Papers
---------------------------------------------------------------------------------------
1st ACM Workshop on Scientific Cloud Computing (ScienceCloud) 2010
http://dsl.cs.uchicago.edu/ScienceCloud2010/
---------------------------------------------------------------------------------------
June 21st, 2010
Chicago, Illinois, USA
Co-located with ACM High Performance Distributed Computing Conference (HPDC) 2010
=======================================================================================

Workshop Overview

The advent of computation can be compared, in terms of the breadth and depth of its impact on research and scholarship, to the invention of writing and the development of modern mathematics. Scientific Computing has already begun to change how science is done, enabling scientific breakthroughs through new kinds of experiments that would have been impossible only a decade ago. Today's science is generating datasets that are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. Support for data-intensive computing is critical to advancing modern science, as storage systems have seen the gap between their capacity and their bandwidth widen by more than 10-fold over the last decade. There is an emerging need for advanced techniques to manipulate, visualize and interpret large datasets.

Scientific Computing is the key to many domains' "holy grail" of new knowledge, and comes in many shapes and forms: from high-performance computing (HPC), which is heavily focused on compute-intensive applications; to high-throughput computing (HTC), which focuses on using many computing resources over long periods of time to accomplish its computational tasks; to many-task computing (MTC), which aims to bridge the gap between HPC and HTC by focusing on using many resources over short periods of time; to data-intensive computing, which is heavily focused on data distribution and harnessing data locality by scheduling computations close to the data.

The 1st workshop on Scientific Cloud Computing (ScienceCloud) will provide the scientific community a dedicated forum for discussing new research, development, and deployment efforts in running these kinds of scientific computing workloads on Cloud Computing infrastructures. The ScienceCloud workshop will focus on the use of cloud-based technologies to meet new compute-intensive and data-intensive scientific challenges that are not well served by the current supercomputers, grids or commercial clouds. What architectural changes to the current cloud frameworks (hardware, operating systems, networking and/or programming models) are needed to support science? Dynamic information derived from remote instruments and coupled simulation and sensor ensembles are both important new science pathways and tremendous challenges for current HPC/HTC/MTC technologies. How can cloud technologies enable these new scientific approaches? How are scientists using clouds? Are there scientific HPC/HTC/MTC workloads that are suitable candidates to take advantage of emerging cloud computing resources with high efficiency?
What benefits come from adopting the cloud model over clusters, grids, or supercomputers? What factors are limiting cloud use, or would make clouds more usable and efficient? This workshop encourages interaction and cross-pollination between those developing applications, algorithms, software, hardware and networking, emphasizing scientific computing for such cloud platforms. We believe the workshop will be an excellent place to help the community define the current state, determine future goals, and define architectures and services for future science clouds.

Topics of Interest
---------------------------------------------------------------------------------------
We invite the submission of original work that is related to the topics below. The papers can be either short (5 pages) position papers, or long (10 pages) research papers. Topics of interest include (in the context of Cloud Computing):

* scientific computing applications
  o case studies on cloud computing
  o case studies comparing clouds, clusters, grids, and/or supercomputers
  o performance evaluation
* performance evaluation
  o real systems
  o cloud computing benchmarks
  o reliability of large systems
* programming models and tools
  o map-reduce and its generalizations
  o many-task computing middleware and applications
  o integrating parallel programming frameworks with storage clouds
  o message passing interface (MPI)
  o service-oriented science applications
* storage cloud architectures and implementations
  o distributed file systems
  o content distribution systems for large data
  o data caching frameworks and techniques
  o data management within and across data centers
  o data-aware scheduling
  o data-intensive computing applications
  o eventual-consistency storage usage and management
* compute resource management
  o dynamic resource provisioning
  o scheduling
  o techniques to manage many-core resources and/or GPUs
* high-performance computing
  o high-performance I/O systems
  o interconnect and network interface architectures for HPC
  o multi-gigabit wide-area networking
  o scientific computing tradeoffs between clusters/grids/supercomputers and clouds
  o parallel file systems in dynamic environments
* models, frameworks and systems for cloud security
  o implementation of access control and scalable isolation

Paper Submission and Publication
---------------------------------------------------------------------------------------
Authors are invited to submit papers with unpublished, original work of not more than 10 pages of double-column text, using single-spaced 10-point type on 8.5 x 11 inch pages (including all text, figures, and references), as per the ACM 8.5 x 11 manuscript guidelines (http://www.acm.org/publications/instructions_for_proceedings_volumes); document templates can be found at http://www.acm.org/sigs/publications/proceedings-templates. A 250-word abstract (PDF format) must be submitted online at https://cmt.research.microsoft.com/ScienceCloud2010/ before the deadline of February 1st, 2010 at 11:59PM PST; the final 10-page papers in PDF format will be due on March 1st, 2010 at 11:59PM PST. Papers will be peer-reviewed, and accepted papers will be published in the workshop proceedings as part of the ACM digital library. Notifications of the paper decisions will be sent out by April 1st, 2010. Authors of selected excellent work will be invited to submit extended versions of their workshop paper to a special journal issue. Submission implies the willingness of at least one of the authors to register and present the paper.
For more information, please visit http://dsl.cs.uchicago.edu/ScienceCloud2010/.

Important Dates
---------------------------------------------------------------------------------------
* Abstract Due: February 22nd, 2010
* Papers Due: March 1st, 2010
* Notification of Acceptance: April 1st, 2010
* Workshop Date: June 21st, 2010

Committee Members
---------------------------------------------------------------------------------------
Workshop Chairs
* Pete Beckman, University of Chicago & Argonne National Laboratory
* Ian Foster, University of Chicago & Argonne National Laboratory
* Ioan Raicu, Northwestern University

Steering Committee
* Jeff Broughton, Lawrence Berkeley National Lab., USA
* Alok Choudhary, Northwestern University, USA
* Dennis Gannon, Microsoft Research, USA
* Robert Grossman, University of Illinois at Chicago, USA
* Kate Keahey, Nimbus, University of Chicago, Argonne National Laboratory, USA
* Ed Lazowska, University of Washington, USA
* Ignacio Llorente, Open Nebula, Universidad Complutense de Madrid, Spain
* David E. Martin, Argonne National Laboratory, Northwestern University, USA
* Gabriel Mateescu, Linkoping University, Sweden
* David O'Hallaron, Carnegie Mellon University, Intel Labs, USA
* Rich Wolski, Eucalyptus, University of California, Santa Barbara, USA
* Kathy Yelick, University of California at Berkeley, Lawrence Berkeley National Lab., USA

Technical Committee
* David Abramson, Monash University, Australia
* Roger Barga, Microsoft Research, USA
* Roy Campbell, University of Illinois at Urbana-Champaign, USA
* Henri Casanova, University of Hawaii at Manoa, USA
* Brian Cooper, Yahoo! Research, USA
* Peter Dinda, Northwestern University, USA
* Geoffrey Fox, Indiana University, USA
* Adriana Iamnitchi, University of South Florida, USA
* Alexandru Iosup, Delft University of Technology, Netherlands
* James Hamilton, Amazon Web Services, USA
* Tevfik Kosar, Louisiana State University, USA
* Shiyong Lu, Wayne State University, USA
* Ruben S. Montero, Universidad Complutense de Madrid, Spain
* Reagan Moore, University of North Carolina, Chapel Hill, USA
* Lavanya Ramakrishnan, Lawrence Berkeley National Laboratory
* Matei Ripeanu, University of British Columbia, Canada
* Larry Rudolph, VMware, USA
* Marc Snir, University of Illinois at Urbana-Champaign, USA
* Xian-He Sun, Illinois Institute of Technology, USA
* Mike Wilde, University of Chicago & Argonne National Laboratory, USA
* Alec Wolman, Microsoft Research, USA
* Yong Zhao, Microsoft, USA

--
=================================================================
Ioan Raicu, Ph.D.
NSF/CRA Computing Innovation Fellow
=================================================================
Center for Ultra-scale Computing and Information Security (CUCIS)
Department of Electrical Engineering and Computer Science
Northwestern University
2145 Sheridan Rd, Tech M384
Evanston, IL 60208-3118
=================================================================
Cel:   1-847-722-0876
Tel:   1-847-491-8163
Email: iraicu at eecs.northwestern.edu
Web:   http://www.eecs.northwestern.edu/~iraicu/
       https://wiki.cucis.eecs.northwestern.edu/
=================================================================
=================================================================

From iraicu at cs.uchicago.edu Wed Dec 16 14:28:33 2009
From: iraicu at cs.uchicago.edu (Ioan Raicu)
Date: Wed, 16 Dec 2009 14:28:33 -0600
Subject: [Swift-devel] HPDC 2010 Workshops Selected and Open
Message-ID: <4B2942F1.1050606@cs.uchicago.edu>

HPDC 2010 Workshops Selected and Online

After a competitive process, HPDC has selected eight workshops for co-location on the two days preceding the conference, June 21st and 22nd, 2010. The workshops' web sites are now live and available from http://hpdc2010.eecs.northwestern.edu/workshops.html. Each will feature a mix of invited talks and peer-reviewed papers on specialized topics. Information on topics and submission is available in the links below.

Workshops on Monday, June 21st, 2010

Emerging Computational Methods for the Life Sciences
http://salsahpc.indiana.edu/ECMLS2010
Submission Deadline: 1 March 2010

LSAP: Large-Scale System and Application Performance
http://www.lsap2010.org/
Submission Deadline: 1 March 2010

MDQCS: Managing Data Quality for Collaborative Science
http://hercules.infotech.monash.edu.au/mdqcs/Home.html
Submission Deadline: 1 March 2010

ScienceCloud: Workshop on Scientific Cloud Computing
http://dsl.cs.uchicago.edu/ScienceCloud2010
Submission Deadline: 1 March 2010 (Abstracts 22 Feb)

Workshops on Tuesday, June 22nd, 2010

CLADE: Challenges of Large Applications in Distributed Environments
http://sites.google.com/site/clade2010
Submission Deadline: 1 March 2010

DIDC: Data Intensive Distributed Computing
http://www.cct.lsu.edu/~kosar/didc10
Submission Deadline: 1 March 2010 (Abstracts 22 Feb)

MAPREDUCE: MapReduce and its Applications
http://graal.ens-lyon.fr/mapreduce
Submission Deadline: 15 Feb 2010

VTDC: Virtualization Technologies for Distributed Computing
http://www.grid-appliance.org/wiki/index.php/VTDC10
Submission Deadline: 1 March 2010

--
=================================================================
Ioan Raicu, Ph.D.
NSF/CRA Computing Innovation Fellow
=================================================================
Center for Ultra-scale Computing and Information Security (CUCIS)
Department of Electrical Engineering and Computer Science
Northwestern University
2145 Sheridan Rd, Tech M384
Evanston, IL 60208-3118
=================================================================
Cel:   1-847-722-0876
Tel:   1-847-491-8163
Email: iraicu at eecs.northwestern.edu
Web:   http://www.eecs.northwestern.edu/~iraicu/
       https://wiki.cucis.eecs.northwestern.edu/
=================================================================
=================================================================