[hpc-announce] [Deadline Extended to Sep 15, 2025] - The 17th BenchCouncil International Symposium on Evaluation Science and Engineering (Bench 2025)

gaowanling at ict.ac.cn
Sat Aug 2 02:04:05 CDT 2025


[Apologies if you receive multiple copies of this CFP]

CALL FOR PAPERS

==========================================================

The 17th BenchCouncil International Symposium on Evaluation Science and Engineering (Bench 2025)

https://www.benchcouncil.org/bench2025/

Submission Deadline: September 15, 2025, 08:00 PM AoE
Notification: October 15, 2025, 11:59 PM AoE
Final Papers Due: November 15, 2025, 11:59 PM AoE
Conference Dates: December 3-4, 2025

Venue: Chengdu, China
Chengdu, the home of the giant panda, is embraced by enchanting landscapes, rich artistic heritage, and irresistible cuisine. It is a city where timeless tradition harmonizes with modern rhythm.

Submission website: https://bench2025.hotcrp.com

==========================================================


Introduction
----------------

The Bench conference has held 16 successful editions as the BenchCouncil Symposium on Benchmarking, Measuring and Optimizing. This year, we have rebranded it as a cutting-edge conference on Evaluation Science and Engineering, fully aligned with the mission of the International Open Benchmark Council (BenchCouncil). Evaluation is an essential and universal human activity. The discipline of Evaluation Science and Engineering (Evaluatology), pioneered by BenchCouncil, aims to develop comprehensive, rigorous, and scientific evaluation methodologies that transcend ad hoc or purely empirical approaches.

Bench 2025 upholds this mission. It is an interdisciplinary international symposium seeking contributions from domains including computer science, AI, medicine, education, finance, psychology, business, and more. We particularly welcome state-of-the-practice work, which is crucial for bridging research and real-world impact.


Highlights
-----------------

- Official release of the monograph "Evaluatology: The Science and Engineering of Evaluation"
- Award ceremony for the prestigious BenchCouncil Achievement Award, whose past recipients include Turing Award laureates
- Comprehensive showcase of achievements from international research centers on Evaluatology
- Release of outcomes from standardization working groups on open source, foundation models (LLMs), and emerging domains such as the low-altitude economy
- Announcement of the Youth Contribution Rankings in trending areas
- Launch of a series of Evaluatology training programs
- Introduction to the BenchCouncil Standard Evaluation Process


Organization
-----------------

General Chair
Weiping Li, Oklahoma State University, USA, and Civil Aviation Flight University of China, China

Organizing Committee Chair
Lin Zou, Civil Aviation Flight University of China, China

Program Committee Chairs
Jianfeng Zhan, ICT, Chinese Academy of Sciences, China
Wei Wang, East China Normal University, China

Program Committee Vice-Chairs
Fanda Fan, University of Chinese Academy of Sciences, China
Yushan Su, Waymo LLC, USA

Bench Steering Committee
Jack J. Dongarra, University of Tennessee, USA
Geoffrey Fox, Indiana University, USA
D. K. Panda, The Ohio State University, USA
Felix Wolf, TU Darmstadt, Germany
Xiaoyi Lu, University of California, Merced, USA
Resit Sendag, University of Rhode Island, USA
Wanling Gao, ICT, Chinese Academy of Sciences, China
Jianfeng Zhan, BenchCouncil, China

Award Committee
2025 BenchCouncil Achievement Award Committee:
D. K. Panda, The Ohio State University, USA
Geoffrey Fox, Indiana University, USA
Jianfeng Zhan, University of Chinese Academy of Sciences, China
Tony Hey, Rutherford Appleton Laboratory STFC, UK
David J. Lilja, University of Minnesota, Minneapolis, USA
Jack J. Dongarra, University of Tennessee, USA
Lieven Eeckhout, Universiteit Gent, Belgium

Publication Chair
Zhengxin Yang, University of Chinese Academy of Sciences, China

Web Chair
Jiahui Dai, BenchCouncil


Call for papers
------------------------

The Bench conference encompasses a wide range of topics in benchmarks, datasets, metrics, indexes, measurement, evaluation, optimization, supporting methods and tools, and other best practices in computer science, medicine, finance, education, management, and beyond. Bench's multidisciplinary and interdisciplinary emphasis provides an ideal environment for developers and researchers from different areas and communities to discuss practical and theoretical work. Topics of interest include, but are not limited to, the following:

-- Evaluation theory and methodology
  ** Formal specification of evaluation requirements
  ** Development of evaluation models
  ** Design and implementation of evaluation systems
  ** Analysis of evaluation risk
  ** Cost modeling for evaluations
  ** Accuracy modeling for evaluations
  ** Evaluation traceability
  ** Identification and establishment of evaluation conditions
  ** Equivalent evaluation conditions
  ** Design of experiments
  ** Statistical analysis techniques for evaluations
  ** Methodologies and techniques for eliminating confounding factors in evaluations
  ** Analytical modeling techniques and validation of models
  ** Simulation and emulation-based modeling techniques and validation of models
  ** Development of methodologies, metrics, abstractions, and algorithms specifically tailored for evaluations

-- The engineering of evaluation
  ** Benchmark design and implementation
  ** Benchmark traceability
  ** Establishing least equivalent evaluation conditions
  ** Index design and implementation
  ** Scale design and implementation
  ** Evaluation standard design and implementation
  ** Evaluation and benchmark practice
  ** Tools for evaluations
  ** Real-world evaluation systems
  ** Testbeds

-- Datasets
  ** Explicit or implicit problem definitions deduced from the dataset
  ** Detailed descriptions of research or industry datasets, including the methods used to collect the data and technical analyses supporting the quality of the measurements
  ** Analyses or meta-analyses of existing data
  ** Systems, technologies, and techniques that advance data sharing and reuse to support reproducible research
  ** Tools that generate large-scale data while preserving their original characteristics
  ** Evaluating the rigor and quality of the experiments used to generate the data and the completeness of the data description

-- Benchmarking
  ** Summary and review of state-of-the-art and state-of-the-practice
  ** Searching and summarizing industry best practice
  ** Evaluation and optimization of industry practice
  ** Retrospective of industry practice
  ** Characterizing and optimizing real-world applications and systems
  ** Evaluations of state-of-the-art solutions in the real-world setting

-- Measurement and testing
  ** Workload characterization
  ** Instrumentation, sampling, tracing, and profiling of large-scale, real-world applications and systems
  ** Collection and analysis of measurement and testing data that yield new insights
  ** Measurement and testing-based modeling (e.g., workloads, scaling behavior, and assessment of performance bottlenecks)
  ** Methods and tools to monitor and visualize measurement and testing data
  ** Systems and algorithms that build on measurement and testing-based findings
  ** Reappraisal of previous empirical measurements and measurement-based conclusions
  ** Reappraisal of previous empirical testing and testing-based conclusions
  ** Metrological evaluation of algorithmic scalability, convergence, and stability

-- Algorithm Evaluation and Optimization
  ** Cross-domain benchmark-driven assessment (e.g., AI, optimization, scheduling, search)
  ** Comparative studies under standardized benchmark settings
  ** Evaluation of algorithmic generalization, robustness, and fairness under real-world constraints
  ** Hardware-aware algorithm co-design methodologies
  ** Trade-off analysis frameworks (e.g., accuracy vs. cost, speed vs. interpretability)
  ** Evaluation of adaptive algorithms under dynamic workloads
  ** Adversarial evaluation between classical algorithms and learning-based paradigms
  ** Mechanisms for ensuring algorithmic reproducibility
  ** Case studies on benchmark-guided algorithm optimization
  ** Construction of open-source benchmark suites and performance rankings


Paper Submission
------------------------

Submissions may take the form of full papers, short papers, or one-page abstracts. Full papers (up to 15 pages in LNCS format, excluding references) and short papers (up to 12 pages in LNCS format, excluding references) will undergo anonymous peer review to determine acceptance. One-page abstracts (in LNCS format, excluding references) will be evaluated through review, presentation, and audience feedback; if an abstract is accepted, its authors will have the opportunity to expand it into a full-length publication. The review process follows a strict double-blind policy, per established Bench conference norms. Submissions will be judged on the merit of their ideas rather than their length. After the conference, the proceedings will be published by Springer LNCS (pending; indexed by EI). Extended versions of selected outstanding papers will be invited to BenchCouncil Transactions on Benchmarks, Standards and Evaluations (May 2025 CiteScore: 16).

Please note that the LNCS format is the required format for the final published version. At least one author must pre-register for the symposium, and at least one author must attend the symposium to present the paper. Papers for which no author is pre-registered will be removed from the proceedings.

Formatting Instructions
Please make sure your submission satisfies ALL of the following requirements (a minimal LaTeX sketch illustrating them follows the template link below):
- The submission must describe unpublished work that is not currently under review at any other conference or journal.
- All author and affiliation information must be anonymized.
- Papers must be submitted in printable PDF format.
- Please number the pages of your submission.
- The submission must be formatted for black-and-white printers. Please make sure your figures are readable when printed in black and white.
- References must include all authors' names; the use of "et al." is not permitted.

Submission site: https://bench2025.hotcrp.com/
LNCS LaTeX template: https://www.benchcouncil.org/file/llncs2e.zip
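
For authors new to the LNCS format, the sketch below is a minimal, hypothetical submission skeleton (not an official sample) that only illustrates the anonymization, page-numbering, and reference requirements listed above. It assumes the llncs document class shipped in the llncs2e.zip archive linked above; consult the sample file inside that archive for authoritative usage.

  % Minimal, hypothetical LNCS submission skeleton (illustrative only).
  \documentclass{llncs}
  \usepackage{graphicx}  % figures must stay readable in black and white

  % The llncs class typically omits page numbers by default; submissions
  % must be numbered, so switch the page style for the review copy.
  \pagestyle{plain}

  \begin{document}

  \title{Your Paper Title}
  % Double-blind review: do NOT reveal real authors or affiliations.
  \author{Anonymous Author(s)}
  \institute{Anonymous Institution(s)}
  \maketitle

  \begin{abstract}
  A short abstract of the submission.
  \end{abstract}

  \section{Introduction}
  Body text in LNCS format.

  % References must list all authors' names; "et al." is not permitted.
  \begin{thebibliography}{1}
  \bibitem{example}
  Last, F., Last, S.: Title of the Cited Paper. Journal Name 1(1), 1--10 (2025)
  \end{thebibliography}

  \end{document}

To compile, place llncs.cls from the archive in the same directory and run pdflatex; remember to restore real author and affiliation information only in the camera-ready version.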


Technical Program Committees
------------------------

Ana Gainaru, Oak Ridge National Laboratory, USA
Bin Hu, Institute of Computing Technology, Chinese Academy of Sciences, China
Biwei Xie, Institute of Computing Technology, Chinese Academy of Sciences, China
Ce Zhang, ETH Zurich, Switzerland
Chen Zheng, Institute of Software, Chinese Academy of Sciences, China
Chunjie Luo, Institute of Computing Technology, Chinese Academy of Sciences, China
Emmanuel Jeannot, Inria, France
Fei Sun, Meta, USA
Gang Lu, Tencent, China
Gregory Diamos, Baidu, China
Guangli Li, Institute of Computing Technology, Chinese Academy of Sciences, China
Gwangsun Kim, Pohang University of Science and Technology, Korea
Khaled Ibrahim, Lawrence Berkeley National Laboratory, USA
Krishnakumar Nair, Facebook, USA
Lei Wang, Institute of Computing Technology, Chinese Academy of Sciences, China
Mario Marino, Leeds Beckett University, UK
Miaoqing Huang, University of Arkansas, USA
Murali Emani, Argonne National Laboratory, USA
Nana Wang, Henan University, China
Narayanan Sundaram, Meta, USA
Nicolas Rougier, Inria, France
Peter Mattson, Google, USA
Piotr Luszczek, University of Tennessee, USA
Partha Pratim Ray, Sikkim University, India
Rui Ren, Beijing Open Source IC Academy, China
Sascha Hunold, TU Wien, Austria
Shengen Yan, SenseTime, China
Shin-ying Lee, AMD, USA
Steven Farrell, Lawrence Berkeley National Laboratory, USA
Sadia Samar Ali, King Abdulaziz University, Saudi Arabia
Vladimir Getov, University of Westminster, UK
Wanling Gao, Institute of Computing Technology, Chinese Academy of Sciences, China
Woongki Baek, Ulsan National Institute of Science and Technology, Korea
Xiaoyi Lu, University of California, Merced, USA
Yunyou Huang, Guangxi Normal University, China
Zhen Jia, Amazon, China

