[hpc-announce] ARC-LG-2024: Call for Papers

Prakash, Pavana prakash at hpe.com
Wed Mar 13 13:00:55 CDT 2024


Dear all,

We are excited to announce the upcoming workshop, ARC-LG’2024: New Approaches for Addressing the Computing Requirements of LLMs and GNNs, scheduled to take place on June 30, 2024, in Buenos Aires, Argentina (held in conjunction with ISCA’2024). As part of this event, we invite researchers, scholars, and professionals to submit their papers and contribute to the exchange of knowledge and insights.

Workshop Details:
Title: ARC-LG’2024: New Approaches for Addressing the Computing Requirements of LLMs and GNNs.
Date: June 30, 2024
Location: Buenos Aires, Argentina (held in conjunction with ISCA’2024)
Website: https://llm-gnn.org/

Overview:
Training and deploying huge machine learning models, such as GPT, Llama, or large GNNs, requires vast amounts of compute, power, storage, and memory. The size of such models is growing exponentially, as are the training time and the resources required. The cost of training large foundation models has become prohibitive for all but a few large players. While the challenges are most visible in training, similar considerations apply to deploying and serving large foundation models for a large user base.
The workshop aims to bring together AI/ML researchers, computer architects, and engineers working on a range of topics focused on training and serving large ML models. It will provide a forum for presenting and exchanging new ideas and experiences in this area, and for discussing and exploring hardware/software techniques and tools that lower the significant barrier to entry posed by the computational requirements of AI foundation models.
We are seeking innovative, evolutionary, and revolutionary ideas around software and hardware architectures for training such challenging models, and we strive to present and discuss new approaches that may lead to alternative solutions.

Submissions:
Authors can submit either 8-page full papers or short papers of up to 4 pages. In the short paper format, out-of-the-box ideas and position papers are especially encouraged. See the website <https://llm-gnn.org/> for submission details.

Topics:
The workshop will present original work in areas such as (but not limited to): workload characterization, inference serving at scale, distributed training, novel networking and interconnect approaches for large AI/ML workloads, resilience of large training runs, data reduction techniques, better model partitioning, data formats and precision, and efficient hardware and competitive accelerators.

=============================================================
IMPORTANT DATES - All deadlines are 11:59 pm, Anywhere on Earth (AoE):
Workshop papers:
- Paper submission due: April 15, 2024
- Acceptance notification: May 10, 2024
- Workshop date: June 30, 2024

Program co-chairs:
Avi Mendelson, Technion (avi.mendelson at technion.ac.il)
David Kaeli, Northeastern University (kaeli at ece.neu.edu)
Paolo Faraboschi, Hewlett Packard Labs (paolo.faraboschi at hpe.com)
Program Committee (initial list):
Jose Luis Abellan - University of Murcia
Rosa M Badia - Barcelona Supercomputing Center
Chaim Baskin - Technion
Jose Cano - University of Glasgow
Freddy Gabbay - Ruppin College
John Kim - KAIST
Dejan S. Milojicic - HPE
Alexandra Posoldova - Sigma
Bin Ren - William and Mary
Carole Jean Wu - META
Jhibin Yu - Shenzhen Institute of Technology
Publicity Chair:
Pavana Prakash -- Hewlett Packard Labs
Web Chair:
Kaustubh Shivdikar, Northeastern University

Regards,
Pavana Prakash
Research Scientist, Systems Architecture Lab
Hewlett Packard Labs
LinkedIn<https://www.linkedin.com/in/pavana-prakash> Email<mailto:prakash at hpe.com>