[hpc-announce] LLM4HPCAsia at SCA/HPC Asia'26 CFP (January 26th-29th, 2026)

Valero Lara, Pedro valerolarap at ornl.gov
Fri Aug 22 15:21:50 CDT 2025


Dear Sir/Madam,

Please consider submitting your contribution to LLM4HPCAsia (initial deadline: Oct 20, 2025):
https://ornl.github.io/events/llm4hpcasia2026/
that will take place in Osaka, Japan, in conjunction with SCA/HPC Asia'26: https://www.sca-hpcasia2026.jp/

LLM4HPCAsia 2026
The 1st International Workshop on Foundational Large Language Models Advances for HPC in Asia
to be held in conjunction with SCA/HPC Asia 2026
January 26th-29th, 2026
Osaka, Japan

Introduction:
Since their development and release, modern Large Language Models (LLMs), such as the Generative Pre-trained Transformer (GPT) model and the Large Language Model Meta AI (LLaMA), have come to signify a revolution in human-computer interaction, spurred on by their high-quality results. LLMs have repaved this landscape thanks to unprecedented investments and enormous model sizes (hundreds of billions of parameters). The availability of LLMs has led to increasing interest in how they could be applied to a large variety of applications. The HPC community has made recent research efforts to evaluate current LLM capabilities for several HPC tasks, including code generation, auto-parallelization, performance portability, and correctness, among others. These studies concluded that state-of-the-art LLM capabilities have so far proven insufficient for these targets. Hence, it is necessary to explore novel techniques to further empower LLMs to enrich the HPC mission and its impact.

Call For Papers

Objectives, scope and topics of the workshop:

The objectives of this workshop are focused on LLM advances for major HPC priorities and challenges, with the aim of defining and discussing the fundamentals of LLMs for HPC-specific tasks, including but not limited to hardware design, compilation, parallel programming models and runtimes, and application development, as well as enabling LLM technologies to make more autonomous decisions about the efficient use of HPC. This workshop aims to provide a forum to discuss new and emerging solutions to address these important challenges towards an AI-assisted HPC era. Papers are sought on many aspects of LLMs for HPC targets, including (but not limited to):
-- LLMs for Programming Environments and Runtime Systems
-- LLMs for HPC and Scientific Applications
-- LLMs for Hardware Design (including non-von Neumann Architectures)
-- Reliability/Benchmarking/Measurements for LLMs

Important Dates:
-- Paper submission deadline : Oct 20, 2025
-- Notification of acceptance : Nov 26, 2025

Steering Committee: TBD

Organizers (Contact us):

Pedro Valero-Lara (chair)
  Oak Ridge National Laboratory, USA
  valerolarap at ornl.gov

William F. Godoy
  Oak Ridge National Laboratory, USA
  godoywf at ornl.gov

Dhabaleswar K. Panda (co-chair)
  The Ohio State University, USA
  panda at cse.ohio-state.edu

Best Paper Award
The Best Paper Award will be selected on the basis of the reviewers' explicit recommendations and their scores for the paper's originality and quality.

Keynote (Rio Yokota, Institute of Science Tokyo):
- Updates on the Development of Japanese LLMs
- Large language models (LLMs) are mainly pre-trained on internet data, which is predominantly English. Such models have suboptimal performance when used in non-English languages. Also, LLMs are not mechanical tools that benefit everyone equally; they are rather intellectual tools that disproportionately benefit certain groups of people, depending on what data they are trained on. Furthermore, interaction with LLMs will influence our local culture in the long term. Sovereign LLMs are crucial for customizing models to meet the needs of each local culture. In this talk, I will give an update on the efforts in Japan to train LLMs, covering both the data and training aspects.
Rio Yokota is a Professor at the Supercomputing Research Center, Institute of Integrated Research, Institute of Science Tokyo. He also leads the AI for Science Foundation Model Research Team at the RIKEN Center for Computational Science. His research interests lie at the intersection of high performance computing, machine learning, and linear algebra. He has been optimizing algorithms on GPUs since 2007 and was part of a team that received the Gordon Bell Prize in 2009 using the first GPU supercomputer. More recently, he has been leading distributed training efforts on Japanese supercomputers such as ABCI, TSUBAME, and Fugaku. He is a co-developer of the Japanese LLMs Swallow and LLM-jp, and is also involved in the organization of multinational collaborations such as ADAC and TPC.

Invited Talk (Min Si, Facebook AI):
- TBD
- TBD
- Min Si is a Research Scientist in the Facebook AI System SW/HW Co-design group. Her role is to investigate and resolve interesting scale-out challenges for Facebook AI workloads. Previously, she was an Assistant Computer Scientist at Argonne National Laboratory.


