[hpc-announce] FTXS 2022 @ SC22: Call for papers
Levy, Scott Larson
sllevy at sandia.gov
Tue Apr 26 09:34:52 CDT 2022
CALL FOR PAPERS
12th Workshop on Fault-Tolerance for HPC at eXtreme Scale (FTXS 2022)
In conjunction with The International Conference for
High Performance Computing, Networking, Storage, and Analysis (SC22)
Dallas, Texas, USA, November 13-18, 2022
* Submission of papers: August 11, 2022
* Author notification: September 8, 2022
* Camera-ready papers: October 7, 2022
* Workshop: November 14, 2022
Authors are invited to submit original papers on the research and practice of fault-tolerance in
extreme-scale distributed systems (primarily HPC systems, but including grid and cloud systems).
Resilience and fault-tolerance remain major concerns for supercomputing, and advances in this area
are needed. Therefore, we are broadly interested in forward-looking papers that seek to
characterize and mitigate the impact of faults.
We are particularly interested in papers that address issues related to the following developments
in extreme-scale systems:
* Storage Devices: The storage hierarchy on HPC systems continues to increase in depth and
complexity. SSDs and NVMe add high-speed node-local (or rack-local) persistent storage that can be
used to improve the performance of checkpoint/restart or otherwise facilitate application
resilience. Continuing to efficiently exploit these devices remains critical for extreme-scale HPC
systems. Moreover, the recent availability of Non-Volatile Memory Modules (NVMMs) has begun to blur
the line between memory and storage. The implications of this blurring for fault tolerance on
extreme-scale systems are still being explored.
* System Heterogeneity: Modern HPC systems increasingly include GPUs, FPGAs, and other types of
accelerators. New networking devices like Data Processing Units (DPUs) and SmartNICs are also
starting to be deployed. However, there are many resilience and fault tolerance issues associated
with these devices that still need to be resolved. Papers at prominent recent conferences
(including SC20, ICS 2019, and IEEE Cluster 2018) demonstrate that understanding the fault
tolerance implications of heterogeneous compute devices is an important and active area of research.
* Computing Paradigms: Novel non-von Neumann computing paradigms, including quantum and
neuromorphic computing, have attracted significant research interest. Recent publications
demonstrate that understanding the fault tolerance implications of these computing paradigms is
also an area of active research.
* Machine Learning: Algorithms that rely on elements of machine learning are becoming increasingly
prevalent on HPC systems. Understanding how these algorithms react and respond to the frequency
and variety of faults that occur on HPC systems is critical to ensuring that they continue to
provide accurate and timely answers.
Additional topics of interest include, but are not limited to:
* Algorithmic-Based Fault Tolerance (ABFT) techniques to address undetected (silent) errors
* Silent data corruption (SDC) detection / correction techniques
* Novel fault-tolerance techniques and implementations
* Failure data analysis and field studies
* Power, performance, resilience (PPR) assessments / tradeoffs
* Emerging hardware and software technology for resilience
* Advances in reliability monitoring, analysis, and control of highly complex systems
* Failure prediction, error preemption, and recovery techniques
* Fault-tolerant programming models
* Models for software and hardware reliability
* Metrics and standards for measuring, improving, and enforcing effective fault-tolerance
* Scalable Byzantine fault-tolerance and security from single-fault and fail-silent violations
* Atmospheric evaluations relevant to HPC systems (terrestrial neutrons, temperature, etc.)
* Near-threshold-voltage implications and evaluations for reliability
* Benchmarks and experimental environments including fault injection
* Frameworks and APIs for fault-tolerance and fault management
Submissions are solicited in the following categories:
* Regular papers presenting innovative ideas that improve the state of the art or discuss issues
seen on existing extreme-scale systems, including some form of analysis and evaluation. Regular
papers should not exceed ten (10) pages, including all text, appendices, and figures, but excluding
references.
* Extended abstracts presenting preliminary results, proposing disruptive ideas, or challenging
assumptions in the field. The inclusion of some form of preliminary results is encouraged.
Extended abstracts should not exceed four (4) pages, including all text, appendices, and figures,
but excluding references. Extended abstracts will be evaluated separately and given shorter oral
presentation slots.
Papers must be submitted electronically at submissions.supercomputing.org and must conform
to the IEEE conference proceedings style. IEEE templates are available at: www.ieee.org/conferences/publishing/templates.html.
We do not have an upper limit on the number of papers that we will accept, and we will make every
effort to include every high-quality submission in the workshop. Subject to publisher constraints,
all accepted submissions will be published: we have reached an agreement with the IEEE Computer
Society to publish our proceedings in IEEE Xplore.
Reproducibility is an important component of extreme-scale system research. However, the goal of
our workshop is to encourage and facilitate discussion of novel approaches and preliminary results.
As a result, it may not always be feasible to release reproducibility artifacts. Therefore, while
we encourage authors to make their work as public and reproducible as possible, we do not explicitly
require it. For authors who choose to submit Artifact Descriptions (AD) and Artifact Evaluations (AE)
(see https://sc22.supercomputing.org/submit/reproducibility-initiative), we are working on a process
for reviewing these elements.
Scott Levy - Sandia National Laboratories
Keita Teranishi - Sandia National Laboratories
John Daly - Laboratory for Physical Sciences
Questions? Contact Scott Levy (sllevy at sandia.gov).