[hpc-announce] Call for papers: special session on Parallel and Distributed Machine Learning: Theory and Applications at ESANN 2019
Jorge González Domínguez
jorge.gonzalezd at udc.es
Thu Sep 13 03:21:30 CDT 2018
[Apologies if you receive multiple copies of this CFP]
Call for papers: special session on "Parallel and Distributed Machine Learning: Theory and Applications" at ESANN 2019
European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2019)
24-26 April 2019, Bruges (Belgium) - http://www.esann.org
Parallel and Distributed Machine Learning: Theory and Applications
Organized by: Beatriz Remeseiro (Universidad de Oviedo, Spain), Verónica Bolón-Canedo, Jorge González-Domínguez, Amparo Alonso-Betanzos (Universidade da Coruña, Spain)
The spread of the Internet and ongoing technological advances have produced huge volumes of data, which are very valuable to many agents in the industrial world interested in analyzing them for different purposes. Machine Learning (ML) algorithms play a key role in this context, as they are able to learn from and make predictions on data. Their increasing complexity, since they now have to deal with millions of parameters, together with their computational cost, leads to new research opportunities and technical challenges.
This continuous increase of the data involved in ML analyses leads to a growing interest in the design and implementation of parallel and distributed ML algorithms. The efficient exploitation of the vast aggregate main memory and processing power of High Performance Computing (HPC) resources such as multicore CPUs, hardware accelerators (GPUs, Intel Xeon Phi coprocessors, FPGAs, etc.), clusters or cloud-based systems can significantly accelerate many ML algorithms. However, the development of efficient parallel algorithms is not trivial: close attention must be paid to data organization and the decomposition strategy in order to balance the workload among resources while minimizing data dependencies as well as synchronization and communication overheads.
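As a purely illustrative example of the kind of decomposition just described (not part of the session material), the following minimal Python sketch shows data-parallel gradient descent with mpi4py: each rank holds a shard of the data, computes a local gradient, and the partial results are combined with an allreduce. The dataset, learning rate and variable names are assumptions chosen only for the example.

# Minimal sketch of data-parallel gradient descent with MPI (mpi4py).
# Each rank owns a shard of the data; gradients are averaged globally.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank generates (or loads) its own shard of a toy regression dataset.
rng = np.random.default_rng(seed=rank)
X_local = rng.standard_normal((1000, 10))
y_local = X_local @ np.arange(10, dtype=float) + 0.1 * rng.standard_normal(1000)

w = np.zeros(10)   # model parameters, replicated on every rank
lr = 0.01
for step in range(100):
    # Local gradient of the mean squared error on this rank's shard.
    grad_local = 2.0 * X_local.T @ (X_local @ w - y_local) / len(y_local)
    # Sum gradients across ranks; every rank receives the same result.
    grad_global = np.empty_like(grad_local)
    comm.Allreduce(grad_local, grad_global, op=MPI.SUM)
    w -= lr * grad_global / size

if rank == 0:
    print("learned weights:", w)

Such a sketch would be launched with, for instance, "mpirun -np 4 python data_parallel_sgd.py" (the file name is hypothetical); the one Allreduce per step is exactly where the synchronization and communication overheads mentioned above arise.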
We invite papers on both practical and theoretical issues about incorporating parallel and distributed approaches into ML problems, as well as review papers describing the state-of-the-art techniques and the open challenges encountered in this field. In particular, topics of interest include, but are not limited to:
- Development of parallel ML algorithms on multicore and manycore architectures: multithreading, GPUs, Intel Xeon Phi coprocessors, FPGAs, etc.
- Exploitation of cloud, grid and distributed-memory systems to accelerate ML algorithms: Spark, Hadoop, MPI, etc.
- Deep learning models trained across multicore CPUs, GPUs or clusters of computers.
- Development of distributed ML algorithms.
- Novel programming paradigms to support HPC for ML.
- Middleware, programming models, tools, and environments for HPC in ML.
- Caching, streaming, pipelining, and other optimization techniques for data management in HPC for ML.
- Benchmarking and performance studies of high-performance ML applications.
- Parallel databases and I/O systems to store ML data.
- Applications and services: bioinformatics, medicine, multimedia, video surveillance, etc.
Submitted papers will be reviewed according to the ESANN reviewing process and will be evaluated on their scientific value: originality, correctness, and writing style.
IMPORTANT DATES:
- Paper submission deadline: 19 November 2018
- Notification of acceptance: 31 January 2019
- ESANN conference: 24-26 April 2019