[hpc-announce] SC11 Call for Papers

Rajeev Thakur thakur at mcs.anl.gov
Sat Mar 5 21:44:10 CST 2011

(A PDF version of this CFP is available at http://sc11.supercomputing.org/?pg=papers.html)

SC11 Call for Papers

SC11, the premier annual international conference on high-performance computing, networking, and storage, will be held in Seattle, Washington, November 12-18, 2011. The Technical Papers Program at SC is the lead component for presenting the most timely and highest-quality work in all areas of this field. The conference committee solicits submissions of excellent scientific quality on any topic related to scalable high-performance computing including, but not limited to, the areas below. Papers that focus on sustained performance and/or data-intensive science are of particular interest.

Applications
- Computational biology and bioinformatics
- Computational earth and atmospheric sciences
- Computational chemistry and chemical engineering
- Computational fluid dynamics
- Computational solid mechanics and materials
- Computational medicine and bioengineering
- Computational physics
- High-performance numerical algorithms
- Algorithms for image processing
- Algorithms for discrete and combinatorial problems
- Algorithms for data analysis and data mining
- Algorithms for visualization
- Algorithms for uncertainty quantification
- Algorithms for data assimilation and inverse problems
- Algorithms for storage-bound applications
- Algorithms for particle/n-body/molecular methods
- Algorithms for grid/mesh-based methods

System Software
- Compiler analysis and program transformation
- Software for memory hierarchies
- Programming and runtime environments for high-performance and high-throughput computing 
- Software for multicore processors
- Software approaches for fault tolerance and resilience
- Scalable operating systems
- Software for communication optimization
- Software for transactional memory
- Software for distributed-memory computing 
- Software for shared-memory computing
- Software for high-performance computing
- Software techniques for reducing energy consumption
- Software for high-throughput computing 
- Virtualization software

Performance
- Benchmarking
- System performance
- Performance analysis
- Application performance
- Performance modeling
- Communication, network, and I/O performance
- Performance prediction
- Parallel and distributed analysis infrastructures
- Performance tools
- Code instrumentation and instrumentation infrastructure
- Performance evaluation
- Energy and power consumption
- Processor performance
- Memory efficiency and memory performance

Storage
- Parallel file and storage systems
- Scalable storage metadata and data management
- I/O performance tuning and benchmarking
- Databases for HPC and scalable structured storage
- I/O middleware for HPC
- Next-generation storage systems and media
- Storage systems for data-intensive computing
- Data mining for HPC
- Reliability and fault tolerance in HPC storage
- Data-intensive computing
- Archival storage

Architecture/Networks
- Processor architecture, chip multiprocessors, GPUs
- Interconnect technologies (InfiniBand, Myrinet, Quadrics, Ethernet, Routable PCI etc.)
- Parallel computer architecture
- Internet Protocol (TCP, UDP, sockets)
- Cache and memory systems
- Switch/router architecture
- Power-efficient architectures
- Routing algorithms and techniques
- High-availability architectures
- Network fault tolerance
- Stream or vector architectures
- Network interface architecture
- Innovative hardware/software co-designed architectures
- Network topologies
- Optical interconnects
- Embedded and reconfigurable architectures
- Quality of service
- High-performance architectures based on emerging technology
- Storage networks
- Congestion management
- Networks on chip
- Collective communication

Grids and Clouds
- Grid security and identity management
- Virtualization and overlays
- Grid scheduling and load balancing
- Workflows
- Grid data management
- Scientific applications on clouds
- Grid self-configuration and management
- Security in clouds
- Grid applications
- Compute and storage cloud architectures
- Grid information services and monitoring
- Programming models and tools for computing on clouds
- Grid QoS and SLA management
- Cloud scheduling algorithms
- Problem solving environments and portals
- Resource provisioning in clouds
- Service-oriented architectures for HPC
- Grid performance and benchmarking
- Architectures & tools for integration of clouds, clusters & grids

Format. Submissions are limited to 10 pages in two-column format and should follow the ACM SIG Proceedings Template. (See http://www.acm.org/sigs/publications/proceedings-templates; either of the styles listed there will do.) The 10-page limit includes figures, tables, and appendices, but does not include references, for which there is no page limit.

Review Process. The SC11 technical papers committee will rigorously review all submissions using originality, technical soundness, timeliness, and impact as the predominant acceptance criteria. SC11 anticipates an acceptance rate of 20-25%. Awards will be presented for Best Paper and Best Student Paper. Extended versions of papers selected for the Best Paper and Best Student Paper Awards may be published in the journal Scientific Programming.

How to Submit. Papers must be submitted electronically via the web site https://submissions.supercomputing.org/. A sample submission form is also available at that site (click on the tab "Sample Submission Forms"). SC follows a two-part submission process, with abstracts due by April 1, 2011, and full papers by April 8, 2011.

Important SC11 Information:
Location:        Washington State Convention Center, Seattle, WA
Information:     http://sc11.supercomputing.org/
Web Submissions: https://submissions.supercomputing.org/
Email Contact:   papers at info.supercomputing.org

Important Dates:
Web Submissions Open: February 14, 2011
Abstracts Due:        April 1, 2011
Full Papers Due:      April 8, 2011
Notification:         July 1, 2011
Conference Dates:     November 12-18, 2011

SC11 Technical Papers Chairs
Franck Cappello, UIUC and INRIA
Rajeev Thakur, Argonne National Laboratory

System Software Area Chairs
Zhiling Lan, Illinois Institute of Technology
Vivek Sarkar, Rice University

System Software Committee Members
Pavan Balaji, Argonne National Laboratory
Hans Boehm, HP
Jim M Brandt, Sandia National Labs
Claris Castillo, IBM Research
Brad Chamberlain, Cray Inc.
Barbara Chapman, University of Houston
Daniel G. Chavarria-Miranda, Pacific Northwest National Laboratory
Bronis R. de Supinski, Lawrence Livermore National Laboratory
Narayan Desai, Argonne National Laboratory
Maya Gokhale, Lawrence Livermore National Laboratory
Torsten Hoefler, University of Illinois at Urbana-Champaign
Jesus Labarta, Barcelona Supercomputing Center
Arthur Maccabe, Oak Ridge National Laboratory
Satoshi Matsuoka, Tokyo Institute of Technology
Frank Mueller, North Carolina State University
Depei Qian, Beihang University
Daniele Paolo Scarpazza, D.E. Shaw Research
Karsten Schwan, Georgia Institute of Technology
John Shalf, Lawrence Berkeley National Laboratory
Alan Sussman, University of Maryland
Michela Taufer, University of Delaware
Kenjiro Taura, University of Tokyo
Jesper Larsson Traeff, University of Vienna
Binyu Zang, Fudan University

Applications Area Chairs
Jacqueline Chen, Sandia National Laboratories
Luc Giraud, INRIA

Applications Committee Members
Peter Arbenz, ETH Zurich
Costas Bekas, IBM Research - Zurich
Olivier Coulaud, INRIA
Eric Darve, Stanford University
Omar Ghattas, University of Texas at Austin
Steve Hammond, National Renewable Energy Laboratory
Robert Harrison, Oak Ridge National Laboratory
Ricky A. Kendall, Oak Ridge National Laboratory
Moe Khaleel, Pacific Northwest National Laboratory
Scott Klasky, Oak Ridge National Laboratory
Douglas Kothe, Oak Ridge National Laboratory
John Levesque, Cray Inc
Daniel Livescu, Los Alamos National Laboratory
Kwan-Liu Ma, University of California, Davis
Patrick S. McCormick, Los Alamos National Laboratory
Anthony Mezzacappa, Oak Ridge National Laboratory
Jamaludin Mohd-Yusof, Los Alamos National Laboratory
Robert Moser, University of Texas at Austin
Walter Nadler, Juelich Supercomputing Centre
Kengo Nakajima, University of Tokyo
Esmond G Ng, Lawrence Berkeley National Laboratory
Valerio Pascucci, University of Utah
Amanda Peters, Harvard University
Ulrich J Ruede, University of Erlangen-Nuremberg
Greg Ruetsch, NVIDIA
Thomas C. Schulthess, ETH Zurich
Amik St-Cyr, National Center for Atmospheric Research
Rick Stevens, Argonne National Laboratory
William Tang, Princeton University
Mark Taylor, Sandia National Laboratories
Ray S. Tuminaro, Sandia National Laboratories

Architecture/Networks Area Chairs
Hiroshi Nakashima, Kyoto University
Raymond Namyst, University of Bordeaux - INRIA

Architecture/Networks Committee Members
Dennis Abts, Google
Jung Ho Ahn, Seoul National University
Carl Beckmann, Intel
Muli Ben-Yehuda, Technion and IBM Research
Keren Bergman, Columbia University
Angelos Bilas, FORTH and Univ. of Crete, Greece
Ron Brightwell, Sandia National Laboratories
Marcelo Cintra, University of Edinburgh
Jose Flich, Universidad Politecnica de Valencia
Rich Graham, Oak Ridge National Laboratory
Koji Inoue, Kyushu University
Yutaka Ishikawa, University of Tokyo
Christine Morin, INRIA
Scott Pakin, Los Alamos National Laboratory
Valentina Salapura, IBM

Storage Area Chairs
Garth Gibson, Carnegie Mellon University / Panasas Inc.
Robert B. Ross, Argonne National Laboratory

Storage Committee Members
Richard Shane Canon, Lawrence Berkeley National Laboratory
Yong Chen, Texas Tech University
Toni Cortes, Barcelona Supercomputing Center
Dan Feng, Huazhong University of Science and Technology
Dean Hildebrand, IBM Almaden Research Center
Quincey Koziol, The HDF Group
Robert Latham, Argonne National Laboratory
Wei-keng Liao, Northwestern University
Xiaosong Ma, North Carolina State University and ORNL
Carlos Maltzahn, University of California, Santa Cruz
Bianca Schroeder, University of Toronto
Lee Ward, Sandia National Laboratories
Brent Welch, Panasas
Pete Wyckoff, NetApp

Clouds/Grids Area Chairs
Rosa M. Badia, Barcelona Supercomputing Center
Geoffrey C Fox, Indiana University

Clouds/Grids Committee Members
David Abramson, Monash University
Ann L Chervenak, University of Southern California
Marty A Humphrey, University of Virginia
Adriana Iamnitchi, University of South Florida
Hai Jin, Huazhong University of Science and Technology
Daniel S. Katz, University of Chicago
Thilo Kielmann, Vrije Universiteit
Laurent Lefevre, INRIA
Ignacio M. Llorente, Universidad Complutense de Madrid
Manish Parashar, Rutgers University
Satoshi Sekiguchi, AIST
Anne Trefethen, University of Oxford
Jon B. Weissman, University of Minnesota

Performance Area Chairs
Taisuke Boku, University of Tsukuba
Martin Schulz, Lawrence Livermore National Laboratory

Performance Committee Members
Mark Bull, EPCC, University of Edinburgh
Karl Fuerlinger, University of California, Berkeley
Karen L. Karavanic, Portland State University
Darren J Kerbyson, Pacific Northwest National Laboratory
Bettina Krammer, Universite de Versailles Saint-Quentin-en-Yvelines
David Lowenthal, University of Arizona
Bernd Mohr, Juelich Supercomputing Centre
Kathryn Mohror, Lawrence Livermore National Laboratory
Matthias S. Mueller, Technical University Dresden
Akira Naruse, Fujitsu Laboratories LTD.
Leonid Oliker, Lawrence Berkeley National Laboratory
Reiji Suda, University of Tokyo
Daisuke Takahashi, University of Tsukuba
Nathan Tallent, Rice University
Richard Vuduc, Georgia Institute of Technology
Josef Weidendorfer, Technische Universitaet Muenchen
Gerhard Wellein, Erlangen Regional Computing Center
