[hpc-announce] HCW'21 - Paper Deadline Extension - now 02/08/21

Friese, Ryan D ryan.friese at pnnl.gov
Mon Jan 25 12:44:49 CST 2021


Dear All,

The paper submission deadline for the Heterogeneity in Computing Workshop (HCW'21) has been extended to February 8, 2021.

Please refer to the CFP included below.

------------------------------------------------------------------------

HCW 2021 Call for Papers

In conjunction with IPDPS 2021, May 17, 2021, Portland, Oregon, USA
Sponsored by the IEEE Computer Society
through the Technical Committee on Parallel Processing (TCPP)

SCOPE

Most modern computing systems are heterogeneous, either for organic reasons because components grew independently, as is the case in desktop grids, or by design to leverage the strengths of specific hardware, as is the case in accelerated systems. In any case, all computing systems have some form of hardware or software heterogeneity that must be managed, leveraged, understood, and exploited. The Heterogeneity in Computing Workshop (HCW) is a venue to discuss and innovate in all theoretical and practical aspects of heterogeneous computing: design, programmability, efficient utilization, algorithms, modeling, applications, etc. The 2021 HCW is the 30th annual gathering of this workshop.

TOPICS
Topics of interest include, but are not limited to, the following areas:

!!! SPECIAL TOPIC !!! Heterogeneous computing for Machine Learning (ML): Design, exploration, and analysis of architectures and software frameworks that enable significant performance improvements for Machine Learning algorithms/applications on heterogeneous computing systems. Well-known architectures include the Nvidia DLA (deep learning accelerator) and DGX systems, SambaNova, and Cerebras. Example frameworks include Nvidia TensorRT, TensorFlow, Caffe2, and PyTorch. Submissions in the following areas are especially welcome: design and programming of Machine Learning accelerators (e.g., GPUs, FPGAs, or coarse-grained architectures), and exploration and benchmarking of Machine Learning frameworks on heterogeneous computing systems.

Heterogeneous multicore systems and architectures: Design, exploration, and experimental analysis of heterogeneous computing systems such as GPGPUs, heterogeneous systems-on-chip (SoC), accelerator systems (e.g., Intel Xeon Phi, AI chips such as Google's TPUs), FPGAs, big.LITTLE, and application-specific architectures.

Heterogeneous parallel and distributed systems: Design and analysis of computing grids, cloud systems, hybrid clusters, datacenters, geo-distributed computing systems, and supercomputers.

Deep-memory hierarchies: Design and analysis of memory hierarchies with SRAM, DRAM, Flash/SSD, and HDD technologies; NUMA architectures; cache coherence strategies; novel memory systems such as phase-change RAM, magnetic (e.g., STT) RAM, 3D Xpoint/crossbars, and memristors.

On-chip, off-chip and heterogeneous network architectures: Network-on-chip (NoC) architectures and protocols for heterogeneous multicore applications; energy, latency, reliability, and security optimizations for NoCs; off-chip (chip-to-chip) network architectures and optimizations; heterogeneous networks (combination of NoC and off-chip) design, evaluation, and optimizations; large scale parallel and distributed heterogeneous network design, evaluation, and optimizations.

Programming models and tools: Programming paradigms and tools for heterogeneous systems; middleware and runtime systems; performance-abstraction tradeoff; interoperability of heterogeneous software environments; workflows; dataflows.

Resource management and algorithms for heterogeneous systems: Parallel algorithms for solving problems on heterogeneous systems (e.g., multicores, hybrid clusters, grids or clouds); strategies for scheduling and allocation on heterogeneous 2D and 3D multicore architectures; static and dynamic scheduling and resource management for large-scale and parallel heterogeneous systems.

Modeling, characterization, and optimizations: Performance models and their use in the design of parallel and distributed algorithms for heterogeneous platforms; characterizations and optimizations for improving the time to solve a problem (e.g., throughput, latency, runtime); modeling and optimizing power and energy consumption; modeling for failure management (e.g., fault tolerance, recovery, reliability); modeling for security in heterogeneous platforms.

Applications on heterogeneous systems: Case studies; confluence of Big Data systems and heterogeneous systems; data-intensive computing; deep learning; scientific computing.

IMPORTANT DATES
Paper submission: February 8, 2021 (extended; previously January 25, 2021)
Author notification: ~March 1, 2021
Camera Ready: ~March 15, 2021


WEBSITE: http://hcw.oucreate.com/

PAPER SUBMISSIONS
- Papers are to be submitted through https://ssl.linklings.net/conferences/ipdps/?page=Submit&id=HCWWorkshopFullSubmission&site=ipdps2021
- Submissions for the Special Topic Session on DSAs: Please add (Special Topic Submission) to your paper title during the submission process.
- Papers submitted to HCW 2021 should not have been previously published or be under review for another workshop, conference, or journal.
- All accepted papers must be presented at the workshop by one of the authors.

WORKSHOP ORGANIZATION
General Chair: Florina M. Ciorba, University of Basel, Switzerland

Program Chair: Ryan D. Friese, Pacific Northwest National Laboratory, USA



Steering Committee:
Behrooz Shirazi, Washington State University, USA (Chair)
H. J. Siegel, Colorado State University, USA (Past Chair)
John Antonio, University of Oklahoma, USA
David Bader, New Jersey Institute of Technology, USA
Anne Benoit, École Normale Supérieure de Lyon, France
Jack Dongarra, University of Tennessee, USA
Alexey Lastovetsky, University College Dublin, Ireland
Sudeep Pasricha, Colorado State University, USA
Viktor K. Prasanna, University of Southern California, USA
Yves Robert, École Normale Supérieure de Lyon, France
Erik Saule, University of North Carolina at Charlotte, USA
Uwe Schwiegelshohn, TU Dortmund University, Germany

Technical Program Committee:

Ryan D. Friese, Pacific Northwest National Laboratory (PNNL), USA (TPC Chair)

Mohsen Amini, University of Louisiana at Lafayette, USA

Ioana Banicescu, Mississippi State University, USA

Lucas Brasilino, Indiana University, USA

Louis-Claude Canon, Université de Franche-Comté, France

Daniel Cordeiro, University of São Paulo, Brazil

Matthias Diener, University of Illinois at Urbana-Champaign, USA

Diana Göhringer, Technische Universität Dresden, Germany

Nicolas Grounds, MSCI, Inc., USA

Michael Huebner, Technische Universität Berlin, Germany

Georgios Keramidas, Aristotle University, Greece

Jong-Kook Kim, Korea University, Korea

Tushar Krishna, Georgia Tech, USA

Alexey Lastovetsky, University College Dublin, Ireland

Laercio Lima Pilla, CNRS, France

Hatem Ltaief, KAUST, Saudi Arabia

Burcu Mutlu, PNNL, USA

Mahdi Nikdast, Colorado State University, USA

Guillermo Paya-Vaya, University of Hannover, Germany

Dana Petcu, West University of Timisoara, Romania

Sridhar Radhakrishnan, University of Oklahoma, USA

Srishti Srivastava, University of Southern Indiana, USA

Achim Streit, Karlsruhe Institute of Technology, Germany

Samuel Thibault, LaBRI, Université de Bordeaux, France

Cheng Wang, Microsoft, USA

Questions may be sent to the program chair: Ryan D. Friese (ryan.friese at pnnl.gov)


__________________________________________
Ryan D. Friese Ph.D.
Pacific Northwest National Laboratory
High Performance Computing Group

902 Battelle Boulevard
Richland, WA  99352 USA
Tel:  509-375-2903
ryan.friese at pnnl.gov


