[hpc-announce] CFP: Resilience at Euro-Par 2011
Gentile, Ann
gentile at sandia.gov
Mon May 2 15:11:19 CDT 2011
CFP: 4th Workshop on Resiliency in High Performance Computing (Resilience)
in Clusters, Clouds, and Grids
in conjunction with the
17th International European Conference on Parallel and
Distributed Computing (Euro-Par 2011)
Bordeaux, France, August 29 - September 2, 2011
Clusters, Clouds, and Grids are three different computational paradigms
with the intent or potential to support High Performance Computing
(HPC). Currently, they consist of hardware, management, and usage models
particular to different computational regimes; e.g., high-performance
cluster systems designed to support tightly coupled scientific
simulation codes typically utilize high-speed interconnects, while
commercial cloud systems designed to support software as a service (SaaS)
do not. However, in order to support HPC, all must at least utilize
large numbers of resources, and hence effective HPC in any of these
paradigms must address the issue of resiliency at large scale.
Recent trends in HPC systems have clearly indicated that future
increases in performance, in excess of those resulting from improvements
in single-processor performance, will be achieved through corresponding
increases in system scale, i.e., using a significantly larger component
count. As the raw computational performance of these HPC systems
increases from today's tera- and peta-scale to next-generation
multi-peta-scale capability and beyond, the number of computational,
networking, and storage components will grow from the ten to one hundred
thousand compute nodes of today's systems to several hundred thousand
compute nodes and more in the foreseeable future. This substantial
growth in system scale, and the resulting component count, poses a
challenge for HPC system and application software with respect to fault
tolerance and resilience.
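To make the scale argument concrete, the short Python sketch below
estimates system-level mean time between failures (MTBF) under the
common simplifying assumption of independent, exponentially distributed
node failures; the 5-year node MTBF and the node counts are illustrative
assumptions, not measurements of any particular system.

  # Illustrative sketch: system MTBF under independent, exponentially
  # distributed node failures (all input numbers are assumptions).

  HOURS_PER_YEAR = 8766  # 365.25 days

  def system_mtbf_hours(node_mtbf_years: float, num_nodes: int) -> float:
      # With independent exponential failures, failure rates add up, so
      # the system MTBF is the node MTBF divided by the node count.
      return node_mtbf_years * HOURS_PER_YEAR / num_nodes

  for nodes in (10_000, 100_000, 500_000):
      mtbf = system_mtbf_hours(node_mtbf_years=5.0, num_nodes=nodes)
      print(f"{nodes:>7} nodes: system MTBF ~ {mtbf:6.2f} hours")

Even with highly reliable nodes (a 5-year MTBF each), this simple model
predicts a failure roughly every 26 minutes at 100,000 nodes.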
Furthermore, recent experiences on extreme-scale HPC systems with
non-recoverable soft errors, i.e., bit flips in memory, cache,
registers, and logic, have added another major source of concern. The
probability of such errors grows not only with system size, but also
with increasing architectural vulnerability caused by employing
accelerators, such as FPGAs and GPUs, and by shrinking nanometer
technology. Reactive fault tolerance technologies, such as
checkpoint/restart, are unable to handle high failure rates due to
their associated overheads, while proactive resiliency technologies,
such as migration, simply fail because random soft errors cannot be
predicted. Moreover, soft errors may even remain undetected, resulting
in silent data corruption.
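As one way to quantify why reactive checkpoint/restart struggles at
high failure rates, the sketch below applies Young's classic
first-order approximation of the optimal checkpoint interval,
t_opt = sqrt(2 * C * M), where C is the checkpoint cost and M the
system MTBF; the 10-minute checkpoint cost and the MTBF values are
illustrative assumptions.

  import math

  def young_optimal_interval(checkpoint_cost_s: float, mtbf_s: float) -> float:
      # Young's first-order approximation: t_opt = sqrt(2 * C * M).
      return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

  def overhead_fraction(checkpoint_cost_s: float, mtbf_s: float) -> float:
      # Approximate fraction of machine time lost at the optimal interval:
      # C/t for writing checkpoints plus t/(2M) of expected lost work.
      # The approximation becomes rough once t_opt approaches the MTBF.
      t = young_optimal_interval(checkpoint_cost_s, mtbf_s)
      return checkpoint_cost_s / t + t / (2.0 * mtbf_s)

  C = 600.0  # assumed checkpoint write time: 10 minutes
  for mtbf_h in (24.0, 4.0, 0.5):
      frac = overhead_fraction(C, mtbf_h * 3600.0)
      print(f"MTBF {mtbf_h:5.1f} h -> ~{100.0 * frac:4.1f}% of time lost")

In this model, a 24-hour system MTBF already costs roughly 12% of
machine time, while a 30-minute MTBF wastes over 80%, leaving little
room for useful computation and motivating the alternative approaches
listed below.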
Important Web sites:
Resilience 2011 at http://xcr.cenit.latech.edu/resilience2011
Euro-Par 2011 at http://europar2011.bordeaux.inria.fr
Prior workshops' Web sites:
Resilience 2010 at http://xcr.cenit.latech.edu/resilience2010
Resilience 2009 at http://xcr.cenit.latech.edu/resilience2009
Resilience 2008 at http://xcr.cenit.latech.edu/resilience2008
Important dates:
Paper submission deadline on June 5, 2011
Notification deadline on July 4, 2011
Resilience Workshop on August 30, 2011
Euro-Par conference on August 29 - September 2, 2011
Camera-ready deadline is after the workshop
Topics of interest include, but are not limited to:
Reports on current HPC system and application resiliency
HPC resiliency metrics and standards
HPC system and application resiliency analysis
HPC system and application-level fault handling and anticipation
HPC system and application health monitoring
Resiliency for HPC file and storage systems
System-level checkpoint/restart for HPC
System-level migration for HPC
Algorithm-based resiliency fundamentals for HPC (not Hadoop)
Fault tolerant MPI concepts and solutions
Soft error detection and recovery in HPC systems
HPC system and application log analysis
Statistical methods to identify failure root causes
Fault injection studies in HPC environments
High availability solutions for HPC systems
Reliability and availability analysis
Hardware for fault detection and recovery
Resource management for system resiliency and availability