[hpc-announce] CFP - 8th Workshop on Accelerated Machine Learning (AccML) at HiPEAC 2026

Jose Cano Reyes Jose.CanoReyes at glasgow.ac.uk
Sat Oct 11 04:20:07 CDT 2025


==================================================================
8th Workshop on Accelerated Machine Learning (AccML)

Co-located with the HiPEAC 2026 Conference
(https://www.hipeac.net/2026/krakow/)

January 27, 2026
Kraków, Poland
==================================================================

-------------------------------------------------------------------------
CALL FOR CONTRIBUTIONS
-------------------------------------------------------------------------
In the last few years, the remarkable performance achieved by machine 
learning in a variety of application areas (natural language processing, 
computer vision, games, etc.) has led to the emergence of heterogeneous 
architectures to accelerate machine learning workloads. In parallel, 
production deployment, model complexity, and model diversity have pushed 
for higher-productivity systems, more powerful programming abstractions, 
software and system architectures, dedicated runtime systems and 
numerical libraries, and deployment and analysis tools. Deep learning 
models are generally memory and compute intensive, for both training and 
inference. Accelerating these operations has obvious advantages, first 
by reducing energy consumption (e.g., in data centers), and second by 
making these models usable on smaller devices at the edge of the 
Internet. In addition, while Convolutional Neural Networks (CNNs) have 
motivated much of this effort, numerous applications and models (e.g., 
Vision Transformers, Large Language Models) involve a wider variety of 
operations, network architectures, and data processing. These 
applications and models continually challenge computer architecture, the 
system stack, and programming abstractions. The high level of interest 
in these areas calls for a dedicated forum to discuss emerging 
acceleration techniques and computation paradigms for machine learning 
algorithms, as well as applications of machine learning to the 
construction of such systems.

-------------------------------------------------------------------------
Links to the Workshop page
-------------------------------------------------------------------------
Organizers: https://accml.dcs.gla.ac.uk/

HiPEAC: https://www.hipeac.net/2026/krakow/#/program/sessions/8255/

-------------------------------------------------------------------------
Topics
-------------------------------------------------------------------------
Topics of interest include (but are not limited to):

- Novel ML/AI systems: heterogeneous multi/many-core systems, GPUs, 
ASICs and FPGAs;
- Software ML/AI acceleration: languages, primitives, libraries, 
compilers and frameworks;
- Novel ML/AI hardware accelerators and associated software;
- Emerging semiconductor technologies with applications to ML/AI 
hardware acceleration;
- ML/AI for the design and tuning of hardware, compilers, and systems;
- Cloud and edge ML/AI computing: hardware and software to accelerate 
training and inference;
- Hardware-software co-design techniques for more efficient model 
training and inference (e.g., addressing sparsity, pruning, etc.);
- Training and deployment of huge LLMs (such as GPT, Llama), or large GNNs;
- Computing systems research addressing the privacy and security of 
ML/AI-dominated systems.

-------------------------------------------------------------------------
Submission
-------------------------------------------------------------------------
Papers will be reviewed by the workshop's technical program committee 
based on quality, relevance to the workshop's topics, and, above all, 
their potential to spark discussion about directions, insights, and 
solutions in the context of accelerating machine learning. Research 
papers, case studies, and position papers are all welcome.

In particular, we encourage authors to submit work-in-progress papers: 
to facilitate the sharing of thought-provoking ideas and high-potential 
though preliminary research, authors are welcome to make submissions 
describing early-stage, in-progress, and/or exploratory work in order to 
elicit feedback, discover collaboration opportunities, and spark 
productive discussion.

The workshop does not have formal proceedings.

-------------------------------------------------------------------------
Important Dates
-------------------------------------------------------------------------
Submission deadline: November 21, 2025
Notification of decision: December 5, 2025

-------------------------------------------------------------------------
Organizers
-------------------------------------------------------------------------
José Cano (University of Glasgow)
Valentin Radu (University of Sheffield)
José L. Abellán (University of Murcia)
Marco Cornero (Google DeepMind)
Ulysse Beaugnon (Google DeepMind)
Juliana Franco (Google DeepMind)
