[hpc-announce] Call for Papers: ARC-LG’2025: New Approaches for Addressing the Computing Requirements of LLMs and GNNs
Prakash, Pavana
prakash at hpe.com
Fri Mar 28 18:59:09 CDT 2025
Dear all,
I hope this message finds you well. We are excited to announce the 2nd edition of our workshop, ARC-LG’2025: New Approaches for Addressing the Computing Requirements of LLMs and GNNs, scheduled to take place on June 22, 2025, in Tokyo, Japan (held in conjunction with ISCA’2025). We encourage the submission of full or short papers and invite you to contribute to the exchange of knowledge and insights.
Workshop Details:
Title: ARC-LG’2025: New Approaches for Addressing the Computing Requirements of LLMs and GNNs.
Date: June 22, 2025
Location: Tokyo, Japan (held in conjunction with ISCA’2025)
Website: https://llm-gnn.org/
Overview:
Training and deploying huge machine learning models, such as GPT, Llama, or large GNNs, requires vast amounts of compute, power, storage, and memory. The size of such models is growing exponentially, as are the training time and the resources required. The cost of training large foundation models has become prohibitive for all but a very few large players. While the challenges are most visible in training, similar considerations apply to deploying and serving large foundation models for a large user base.
The workshop aims to bring together AI/ML researchers, computer architects, and engineers working on a range of topics focused on training and serving large ML models. It will provide a forum for presenting and exchanging new ideas and experiences in this area, and for discussing and exploring hardware/software techniques and tools to lower the significant barrier to entry posed by the computation requirements of AI foundation models.
Submissions:
Authors can submit either 8-page full papers or short papers of up to 4 pages. In the short paper format, out-of-the-box ideas and position papers are especially encouraged. See the website <https://llm-gnn.org/> for submission details.
Topics:
The workshop will present original work in areas such as (but not limited to): workload characterization, inference serving at scale, distributed training, novel networking and interconnect approaches for large AI/ML workloads, resilience of large training runs, data reduction techniques, better model partitioning, data formats and precision, and efficient hardware and competitive accelerators.
=============================================================
Important Deadlines - All times below are 11:59 pm (anywhere on earth):
Paper Submission: 15 April 2025
Acceptance Notification: 10 May 2025
Workshop Date: 22 June 2025
Program co-chairs:
Avi Mendelson, Technion (avi.mendelson at technion.ac.il)
David Kaeli, Northeastern University (kaeli at ece.neu.edu)
Dejan S. Milojicic, Hewlett Packard Labs (dejan.milojicic at hpe.com)
Program Committee
Jose Luis Abellan - University of Murcia
Rosa M Badia - Barcelona Supercomputing Center
Chaim Baskin - Technion
Jose Cano - University of Glasgow
Freddy Gabbay - Ruppin College
Jhibin Yu - Shenzhen Institute of Technology
John Kim - KAIST
Paolo Faraboschi - Hewlett Packard Labs
Alexandra Posoldova - Sigma
Chang Qiong - Institute of Science Tokyo
Bin Ren - William and Mary
Carole Jean Wu - META
Kaustubh Shivdikar - Northeastern University
Zlatan Feric - Northeastern University
Publicity Chair:
Pavana Prakash - Hewlett Packard Labs
Web Chair:
Zlatan Feric - Northeastern University
Regards,
Pavana Prakash
Research Scientist, Systems Architecture Lab
Hewlett Packard Labs