[hpc-announce] PDML19 - CfP (Deadline Extended)

Rio Yokota rioyokota at gsic.titech.ac.jp
Tue May 14 06:00:01 CDT 2019

We apologize if you receive multiple copies of the CFP.

                                            CALL FOR PAPERS 
                                (Deadline Extended: May 23rd, 2019)

The 1st Workshop on Parallel and Distributed Machine Learning 2019 (PDML19)
                                Kyoto, Japan on August 5th, 2019

Held in conjunction with The 48th International Conference on Parallel Processing

Paper Submission (Extended): *** May 23rd, 2019 (AoE) ***
Author Notification: May 31st, 2019
Camera-Ready Copy: June 7th, 2019 
Workshop Date: August 5th, 2019

Parallel and distributed computing has had a tremendous impact on recent advances in data-driven machine learning, most notably deep learning. Accelerating ML workloads on HPC systems opens opportunities for more sophisticated machine learning. However, significant challenges remain, as available compute power is limited relative to the enormous volume of data. This workshop brings together researchers in machine learning and HPC to share experiences, new ideas, and the latest trends in leveraging HPC for ML, ML for HPC, and ML applications in HPC.

We welcome everyone interested in ML and HPC, and especially target researchers and practitioners actively applying parallel and distributed computing to machine learning. Topics include, but are not limited to:

Algorithmic techniques to improve performance and efficiency of parallel applications of machine learning
ML-based techniques to improve system and application efficiency of HPC environments
Development of algorithms, models and solvers for parallel and distributed applications using machine/deep learning
All aspects of parallel processing hardware including the optimization and evaluation of processors and networks for machine/deep learning
Techniques for performance measurement, performance modeling and performance tools in machine/deep learning
Techniques to support parallel programming, system software, runtime system and other low-level software research and development for machine/deep learning
We also encourage submissions in emerging fields that may not fit these categories, to broaden the diversity of topics. If in doubt, authors are welcome to contact us with any questions.

We encourage the submission of both full and short papers containing high-quality research describing original and unpublished work. Short papers are intended to provide opportunities to present and discuss preliminary research results on emerging topics. Submissions should be in PDF format, on U.S. letter size paper, using the ACM conference style. Full and short papers are limited to eight (8) and four (4) double-column pages, respectively. Page limits include all figures, tables, and appendices; only references do not count against the page limit. Submissions will be judged on relevance, significance, originality, correctness, and clarity. Reviews are not double-blind. All accepted papers are planned to be published by ACM and included in the ACM Digital Library, provided they are presented at the workshop.

The paper submission online system is open: https://easychair.org/conferences/?conf=pdml19

Workshop Organizers:
Naoya Maruyama, Lawrence Livermore National Laboratory
Rio Yokota, Tokyo Institute of Technology
Kento Sato, RIKEN Center for Computational Science

Program Committee:
Tal Ben-Nun, ETH Zurich
Keisuke Fukuda, Preferred Networks
Masaaki Kondo, University of Tokyo/RIKEN Center for Computational Science
Naoya Maruyama, Lawrence Livermore National Laboratory
Kento Sato, RIKEN Center for Computational Science
Koichi Shirahata, Fujitsu Laboratories
Mohamed Wahib, National Institute for Advanced Industrial Science and Technology
Rio Yokota, Tokyo Institute of Technology
