[hpc-announce] Call for Participation: AsHES 2021 workshop (collocated with IPDPS 2021)

Si, Min msi at anl.gov
Mon May 10 09:52:07 CDT 2021


- Register by: 14 May 2021
- Workshop date: 17 May 2021 - Online
Co-located with IPDPS 2021 (https://www.ipdps.org/)

- Registration for IPDPS 2021 is required in order to attend AsHES 2021. (https://www.ipdps.org/ipdps2021/2021-registration.html)
- After registering for IPDPS please register for AsHES 2021 to receive online connection details (https://pmodels.github.io/ashes-www/2021/reg.html)

The 11th International Workshop on Accelerators and Hybrid Emerging Systems (AsHES 2021) will be held virtually on May 17, 6:00-11:00 PDT / 8:00-13:00 CDT / 15:00-20:00 CEST.

The computing landscape is changing at an ever-increasing rate, driven by the relentless need to improve energy efficiency, memory bandwidth, and compute throughput at all levels of the architectural hierarchy. Moreover, the amount of data that today's systems must organize poses new challenges to the architecture that can no longer be met with classical, homogeneous designs. Improvements in all of these areas have led heterogeneous systems to become the norm rather than the exception.
Heterogeneous computing leverages a diverse set of compute units (CPUs, GPUs, FPGAs, TPUs, ...), memory technologies (HBM, persistent memory, coherent PCI protocols, etc.), and hierarchical storage systems to accelerate a diverse set of applications. Emerging and existing areas such as AI, big data, cloud computing, edge computing, real-time systems, and high-performance computing have seen real benefits from heterogeneous computer architectures. These new architectures often also require new applications and programming models in order to fully utilize their capabilities.

This workshop focuses on understanding the implications of heterogeneous designs at all levels of the computing system stack, including hardware, compiler optimizations, application porting, and programming environments for current and emerging systems in all the above-mentioned areas. It seeks to ground heterogeneous system design research through studies of application kernels and/or whole applications, and to shed light on new tools, libraries, and runtime systems that improve the performance and productivity of applications on heterogeneous systems.
The goal of this workshop is to bring together researchers and practitioners at the forefront of heterogeneous computing to discuss the opportunities and challenges of future heterogeneous system design, and thus help influence the next trends in this area.

Topics of interest include:
- Innovative use of heterogeneous computing in AI for science or optimizations for AI
- Heterogeneous computing at the edge
- Design and use of domain-specific functionalities on accelerators
- Strategies for programming heterogeneous systems using high-level models such as OpenMP, OpenACC, and SYCL, and low-level models such as OpenCL and CUDA;
- Methods and tools to tackle challenges from heterogeneity in AI/ML/DL, big data, cloud computing, edge computing, real-time systems, and high-performance computing;
- Strategies for application behavior characterization and performance optimization for accelerators;
- Techniques for optimizing kernels for execution on GPGPU, FPGA, TPU, and emerging heterogeneous platforms;
- Models of application performance on heterogeneous and accelerated HPC systems;
- Compiler optimizations and tuning for heterogeneous systems, including parallelization, loop transformations, locality optimizations, and vectorization;
- Implications of workload characterization in heterogeneous and accelerated architecture design;
- Benchmarking and performance evaluation for heterogeneous systems at all levels of the system stack;
- Tools and techniques to address both performance and correctness to assist application development for accelerators and heterogeneous processors;
- System software techniques to abstract application domain-specific functionalities for accelerators.

## Opening Statement, 6:00 am - 6:10 am PDT ##

## Session One: Emerging systems for Machine Learning, 6:10 am - 7:10 am PDT ##
- Time-Division Multiplexing for FPGA Considering CNN Model Switch Time. Tetsuro Nakamura, Shogo Saito, Kei Fujimoto, Masashi Kaneko, Akinori Shiraga.
- Design Space Exploration of Emerging Memory Technologies for Machine Learning Applications. S.M.Shamimul Hasan, Neena Imam, Ramakrishnan Kannan, Srikanth Yoginath, Kuldeep Kurte.

## Break, 7:10 am - 7:45 am PDT ##

## Session Two: GPU Computing, 7:45 am - 9:45 am PDT ##
- Accelerating Radiation Therapy Dose Calculation with Nvidia GPUs. Felix Liu, Niclas Jansson, Artur Podobas, Albin Fredriksson, Stefano Markidis.
- Improving Cryptanalytic Applications with Stochastic Runtimes on GPUs. Lena Oden, Jörg Keller.
- Experimental Evaluation of Multiprecision Strategies for GMRES on GPUs. Jennifer Loe, Christian A. Glusa, Ichitaro Yamazaki, Erik G. Boman, Sivasankaran Rajamanickam.
- GPU-aware Communication with UCX in Parallel Programming Models: Charm++, MPI, and Python. Jaemin Choi, Zane Fink, Sam White, Nitin Bhat, David F. Richards, Laxmikant V. Kale.

## Keynote, 9:45 am PDT ##
Title: Addressing Scalability Bottlenecks of DNN Training Through Hardware Heterogeneity: A View from the Perspectives of Memory Capacity and Energy Consumption
Speaker: Prof. Dong Li, Department of Electrical Engineering and Computer Science, University of California, Merced

General Chairs
Min Si, Argonne National Laboratory

Program Chairs
Lena Oden, FernUni Hagen
Simon Garcia de Gonzalo, Barcelona Supercomputing Center (BSC)

Technical Program Committee
Guray Ozen, NVIDIA, USA
Adrián Castelló, Universitat Jaume I, Spain
Leonel Toledo, Barcelona Supercomputing Center, Spain
Pedro Valero-Lara, Cray, a Hewlett Packard Enterprise company, USA
John Leidel, Texas Tech University, USA
Paul F. Baumeister, Jülich Supercomputing Centre
Seyong Lee, Oak Ridge National Laboratory, USA
Ashwin M. Aji, AMD, USA
Stephen Olivier, Sandia National Laboratories, USA
Gabriele Jost, NASA Ames Research Center/CSRA, USA
Guido Juckeland, Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Germany
Huimin Cui, Institute of Computing Technology, CAS, China
Gaurav Mitra, Texas Instruments Inc., USA
Nikela Papadopoulou, National Technical University of Athens, Greece
Bronis de Supinski, Lawrence Livermore National Laboratory, USA
Shintaro Iwasaki, Argonne National Laboratory, USA
Juan Gómez Luna, ETH Zürich, Switzerland
