[hpc-announce] [CfP][Deadline Extension] REX-IO Workshop at IEEE Cluster 2021 - Submissions due July 9, 2021

Sarah M. Neuwirth s.neuwirth at em.uni-frankfurt.de
Sun Jul 4 14:58:04 CDT 2021


**[Please accept our apologies if you receive multiple copies of this
email]**

Dear Colleagues,

The submission deadline for the 1st Workshop on Re-envisioning
Extreme-Scale I/O for Emerging Hybrid HPC Workloads (REX-IO) has been
extended by one week, from July 2 to July 9, 2021, 11:59 PM AoE. Take
advantage of this extension and submit your work. Position and vision
paper submissions are also highly welcome.

Accepted papers will be published by IEEE as part of the Cluster 2021
conference proceedings. Manuscripts must not exceed six (6) single-spaced,
double-column pages in 10-point font on 8.5 x 11 inch pages (IEEE
conference style), including text and figures but excluding references.
For submission details, please see below or refer to our website:
https://sites.google.com/view/rexio/

REX-IO will be held virtually; no travel is required. We are looking
forward to an engaging and productive workshop.

Kind regards,
Sarah Neuwirth
on behalf of the REX-IO 2021 Co-Chairs


*********************************************************************

Call for Papers

REX-IO 2021: 1st Workshop on Re-envisioning Extreme-Scale I/O for
Emerging Hybrid HPC Workloads

Held in conjunction with IEEE Cluster 2021.

September 7, 2021

<https://sites.google.com/view/rexio/>

*********************************************************************

================================
Scope, Aims, and Topics
================================
High Performance Computing (HPC) applications are evolving to include
not only traditional scale-up modeling and simulation bulk-synchronous
workloads but also scale-out workloads such as artificial intelligence
(AI), data analytics methods, deep learning, big data, and complex
multi-step workflows. Exascale workflows are projected to include
multiple different components from both scale-up and scale-out
communities operating together to drive scientific discovery and
innovation. Given the often conflicting design choices between optimizing
for write-intensive and read-intensive access patterns, flexible I/O
systems will be crucial to supporting these hybrid workloads. Another
performance aspect
is the intensifying complexity of parallel file and storage systems in
large-scale cluster environments. Storage system designs are advancing
beyond the traditional two-tiered file system and archive model by
introducing new tiers of temporary, fast storage close to the computing
resources with distinctly different performance characteristics.

The changing landscape of emerging hybrid HPC workloads along with the
ever-increasing gap between compute and storage performance
capabilities reinforces the need for an in-depth understanding of
extreme-scale I/O and for rethinking existing data storage and
management techniques. Traditional approaches to managing data may
fail to address the challenges of extreme-scale hybrid workloads. Novel
I/O optimization and management techniques integrating machine learning
and AI algorithms, such as intelligent load balancing and I/O pattern
prediction, are needed to ease the handling of the exponential growth of
data as well as the complex hierarchies in the storage and file systems.
Furthermore, user-friendly, transparent and innovative approaches are
essential to adapt to the needs of different HPC I/O workloads while
easing scientific and commercial code development and efficiently
utilizing extreme-scale parallel I/O and storage resources.

The Re-envisioning Extreme-Scale I/O for Emerging Hybrid HPC Workloads
(REX-IO) workshop solicits novel work that characterizes I/O behavior
and identifies the challenges in scientific data and storage management
for emerging HPC workloads, introduces potential solutions to alleviate
some of these challenges, and demonstrates the effectiveness of the
proposed solutions. We envision that this workshop will contribute to
the community by analyzing emerging hybrid workloads, identifying gaps
in current data management methodologies, and providing novel techniques
to improve I/O performance for the exascale supercomputing era and
beyond. The workshop will also provide a platform to facilitate
discussions between storage and I/O researchers, HPC application users,
and the data analytics community, and to develop a deeper understanding
of the impact of emerging HPC applications on storage and file systems.

Topics of interest include, but are not limited to:
- Understanding I/O inefficiencies in emerging workloads such as complex
multi-step workflows, in-situ analysis, AI, and data analytics methods 
- New I/O optimization techniques, including how ML and AI algorithms
might be adapted for intelligent load balancing and I/O pattern
prediction of complex, hybrid application workloads
- Performance benchmarking, resource management, and I/O behavior
studies of emerging workloads
- New possibilities for the I/O optimization of emerging application
workloads and their I/O subsystems
- Efficient tools for the monitoring of metadata and storage hardware
statistics at runtime, dynamic storage resource management, and I/O load
balancing
- Parallel file systems, metadata management, and complex data management
- Understanding and efficiently utilizing complex storage hierarchies
beyond the traditional two-tiered file system and archive model
- User-friendly tools and techniques for managing data movement among
compute and storage nodes
- Use of staging areas, such as burst buffers or other private or shared
acceleration tiers for managing intermediate data between computation tasks
- Application of emerging big data frameworks towards scientific
computing and analysis
- Alternative data storage models, including object and key-value
stores, and scalable software architectures for data storage and archive
- Position papers on related topics

================================
Submission Guidelines
================================
All papers must be original and not simultaneously submitted to another
journal or conference. Please indicate all authors and affiliations. All
papers will be peer-reviewed using a single-blind peer-review process by
at least three members of the program committee. Submissions should be a
complete manuscript. Manuscripts must not exceed six (6) single-spaced,
double-column pages in 10-point font on 8.5 x 11 inch pages (IEEE
conference format,
https://www.ieee.org/conferences/publishing/templates.html), including
text and figures but excluding references.

Papers are to be submitted electronically in PDF format through
EasyChair. Submitted papers should not have appeared in or be under
consideration for a different workshop, conference or journal. It is
also expected that all accepted papers will be presented at the workshop
by one of the authors.

All accepted papers (subject to post-review revisions) will be published
in the IEEE Cluster 2021 proceedings.

Submission Link: https://easychair.org/conferences/?conf=rexio21

================================
Important Dates
================================
- Submission deadline: July 9, 2021, 11:59 PM AoE (final deadline extension)
- Notification to authors: July 26, 2021
- Camera-ready paper due: July 30, 2021
- Workshop date: September 7, 2021

================================
Workshop Committees
================================
Workshop Co-Chairs:
- Arnab K. Paul (Oak Ridge National Laboratory, USA) <paula AT ornl DOT gov>
- Sarah M. Neuwirth (Goethe-University Frankfurt, Germany) <s.neuwirth
AT em DOT uni-frankfurt DOT de>
- Jay Lofstead (Sandia National Laboratories, USA) <gflofst AT sandia
DOT gov>

Program Committee (confirmed):
- Thomas Boenisch (High-Performance Computing Center Stuttgart (HLRS),
Germany)
- Sarp Oral (Oak Ridge National Laboratory, USA)
- Feiyi Wang (Oak Ridge National Laboratory, USA)
- Ali R. Butt (Virginia Tech, USA)
- Bing Xie (Oak Ridge National Laboratory, USA)
- Yue Cheng (George Mason University, USA)
- Ali Anwar (IBM Research, USA)
- Nannan Zhao (Northwestern Polytechnical University, China)
- Elsa Gonsiorowski (Lawrence Livermore National Laboratory, USA)
- Julian Kunkel (University of Reading, UK)
- Jean Luca Bez (Federal University of Rio Grande do Sul (UFRGS), Brazil)
- Houjun Tang (Lawrence Berkeley National Laboratory, USA)
- Huan Ke (The University of Chicago, USA)
- Luna Xu (IBM Research, USA)
- Sandra Mendez (Barcelona Supercomputing Center (BSC), Spain)
- Wolfgang Frings (Juelich Supercomputing Centre (JSC), Germany)
- Esteban Rangel (Argonne National Laboratory, USA)

-- 
-----------------------------------------------------------------------
Dr. Sarah M. Neuwirth
Goethe-University Frankfurt | Campus Riedberg
Modular Supercomputing and Quantum Computing
FIAS Building | Room 2.403 | Ruth-Moufang-Str. 1
60438 Frankfurt am Main | Germany
Phone: +49 (0)69 798-47533
Email: s.neuwirth at em.uni-frankfurt.de
-----------------------------------------------------------------------
