[hpc-announce] [Deadline Extension] CFP: 4th International Workshop on Extreme-Scale Storage and Analysis (ESSA 2023) in conjunction with IEEE IPDPS 2023, St. Petersburg, Florida, USA

François Tessier francois.tessier at inria.fr
Mon Jan 16 06:01:47 CST 2023


[Apologies if you got multiple copies of this email.]

=================================================
                         CALL FOR PAPERS
         ESSA 2023: 4th International Workshop on Extreme-Scale Storage and Analysis
             (Formerly HPS: International Workshop on High Performance Storage)

         Held in conjunction with IPDPS 2023 - May 2023, St. Petersburg, Florida, USA

         Submission website: https://ssl.linklings.net/conferences/ipdps/
         Submission deadline: February 4, 2023 (Extended deadline)

         Workshop website: https://sites.google.com/view/essa-2023/
=================================================

=== *Overview* ===

Advances in storage are becoming increasingly critical: workloads on 
high performance computing (HPC) and cloud systems are producing and 
consuming more data than ever before, and data volumes promise only to 
grow in the years ahead. At the same time, the last decades have seen 
relatively few changes in the structure of parallel file systems, and 
limited interaction between the evolution of parallel file systems and 
the I/O support systems that take advantage of hierarchical storage 
layers. Recently, however, the community has seen a large uptick in 
innovation in data storage and processing systems, as well as in I/O 
support software, for several reasons:

   * Technology: The growing availability of persistent solid-state 
storage and storage-class memory technologies that can replace either 
memory or disk is creating new opportunities for the structure of 
storage systems.

   * Performance requirements: Disk-based parallel file systems cannot 
satisfy the performance needs of high-end systems. However, it is not 
yet clear how solid-state storage and storage-class memory can best be 
used to achieve the needed performance, so new approaches for using 
these technologies in HPC systems are being designed and evaluated.

   * Application evolution: Data analysis applications, including graph 
analytics and machine learning, are becoming increasingly important for 
both scientific and commercial computing. I/O is often a major 
bottleneck for such applications, in both cloud and HPC environments, 
especially when fast turnaround or tight integration of heavy 
computation and analysis is required. Consequently, data storage, I/O 
and processing requirements are evolving as complex workflows involving 
computation, analytics and learning emerge.

   * Infrastructure evolution: In the future, HPC technology will not be 
deployed only in dedicated supercomputing centers. "Embedded HPC", "HPC 
in the box", "HPC in the loop", "HPC in the cloud", "HPC as a service", 
and "near-to-real-time simulation" are concepts requiring new 
small-scale deployment environments for HPC. A federation of systems and 
functions with consistent mechanisms for managing I/O, storage, and data 
processing across all participating systems will be required to create 
what is called a "computing continuum".

   * Virtualization and disaggregation: As virtualization and 
disaggregation become broadly used in cloud and HPC systems, virtualized 
storage is growing in importance, and effort will be needed to 
understand its implications for performance.

Our goals in the ESSA Workshop are to bring together expert researchers 
and developers in data-related areas such as storage, I/O, processing 
and analytics on extreme scale infrastructures including HPC systems, 
clouds, edge systems or hybrid combinations of these, to discuss 
advances and possible solutions to the new challenges we face. We expect 
the ESSA Workshop to result in lively interactions over a wide range of 
interesting topics, including:

  * Extreme-scale storage systems (on high-end HPC infrastructures,
    clouds, or hybrid combinations of them)
  * Extreme-scale parallel and distributed storage architectures
  * The synergy between different storage models (POSIX file systems,
    object stores, key-value stores, and row- or column-oriented databases)
  * Structures and interfaces for leveraging persistent solid-state
    storage and storage-class memory
  * High-performance I/O libraries and services
  * I/O performance in extreme-scale systems and applications
    (HPC/clouds/edge)
  * Storage and data processing architectures and systems for hybrid
    HPC/cloud/edge infrastructures, in support of complex workflows
    potentially combining simulation and analytics
  * Integrating computation into the memory and storage hierarchy to
    facilitate in-situ and in-transit data processing
  * I/O characterization and data processing techniques for application
    workloads relying on extreme-scale parallel/distributed
    machine-learning/deep learning
  * Tools and techniques for managing data movement among compute and
    data intensive components
  * Data reduction and compression
  * Failure and recovery of extreme-scale storage systems
  * Benchmarks and performance tools for extreme-scale I/O
  * Language and library support for data-centric computing
  * Storage virtualization and disaggregation
  * Ephemeral storage media and consistency optimizations
  * Storage architectures and systems for scalable stream-based processing
  * Case studies of I/O services and data processing architectures in
    support of various application domains (bioinformatics, scientific
    simulations, large observatories, experimental facilities, etc.)

=== *Submission Guidelines* ===

The workshop will accept traditional research papers (8-10 pages) for 
in-depth topics and short papers (4-8 pages) for work in progress on hot 
topics. Papers should present original research and provide sufficient 
background material to make them accessible to the broader community.

Paper format: single-spaced, double-column pages using a 10-point font 
on 8.5x11-inch pages (IEEE conference style), including figures, tables, 
and references. The submitted manuscripts should include author names 
and affiliations. The IEEE conference style templates for MS Word and 
LaTeX provided by IEEE eXpress Conference Publishing are available here: 
https://www.ieee.org/conferences/publishing/templates.html

Submission site: https://ssl.linklings.net/conferences/ipdps/

=== *Important Dates* ===

  * Abstract submission (optional) deadline: January 28, 2023 (Extended
    deadline)
  * Paper submission deadline: February 4, 2023 (Extended deadline)
  * Acceptance notification: February 21, 2023
  * Camera-ready deadline: February 28, 2023
  * Workshop date: May 15, 2023

=== *Organization* ===

Workshop Chairs

   Kento Sato, RIKEN, Japan - Chair - kento.sato at riken.jp
   Gabriel Antoniu, Inria, France  - Co-Chair - gabriel.antoniu at inria.fr

Program Chairs

   Weikuan Yu, Florida State University, USA - Chair - yuw at cs.fsu.edu
   Sarah Neuwirth, Goethe University Frankfurt, Germany - Co-Chair - 
s.neuwirth at em.uni-frankfurt.de

Web & Publicity Chair

   François Tessier, Inria, France - Chair - francois.tessier at inria.fr

Program Committee

   Gabriel Antoniu, French Institute for Research in Computer Science 
and Automation (INRIA), France
   Jean Luca Bez, Lawrence Berkeley National Laboratory (LBNL), USA
   Suren Byna, Lawrence Berkeley National Laboratory (LBNL), USA
   Wei Der Chien, University of Edinburgh, Great Britain
   Alexandru Costan, French Institute for Research in Computer Science 
and Automation (INRIA), France
   Hariharan Devarajan, Lawrence Livermore National Laboratory (LLNL), 
Illinois Institute of Technology, USA
   Matthieu Dorier, Argonne National Laboratory (ANL), USA
   Hideyuki Kawashima, Keio University, Tokyo, Japan
   Youngjae Kim, Sogang University, South Korea
   Christos Kozanitis, Foundation for Research and Technology - Hellas 
(FORTH), Greece
   Jay Lofstead, Sandia National Laboratories, USA
   Ricardo Macedo, INESC TEC, University of Minho, Portugal
   Sarah Neuwirth, Goethe University Frankfurt, Germany
   Hiroki Ohtsuji, Fujitsu Ltd, Japan
   Arnab K. Paul, BITS Pilani, K. K. Birla Goa Campus, India
   Dana Petcu, West University of Timisoara, Romania
   Michael Schoettner, Duesseldorf University, Germany
   Chen Wang, Lawrence Livermore National Laboratory (LLNL), USA

For additional details, see the workshop website: 
https://sites.google.com/view/essa-2023/

