[hpc-announce] Call for benchmark proposals: MLPerf HPC

Murali Emani m.k.eemani at gmail.com
Mon Jan 9 17:38:58 CST 2023

Dear HPC, AI, and benchmarking enthusiasts,

The MLCommons™ HPC working group is putting out a call for new
benchmark proposals for the MLPerf™ HPC benchmark suite.

MLPerf benchmarks are the standard measure of performance for AI
training and inference, and MLPerf HPC brings this established and
robust methodology to scientific HPC AI workloads.

In MLPerf HPC we measure both the time to train models and the
aggregate training throughput of systems at arbitrary scale. The
suite features scientific AI training workloads that stress HPC
systems at scale, including

CosmoFlow - a 3D CNN predicting parameters of cosmological simulations

DeepCAM - a segmentation model identifying extreme weather events in
climate simulations

OpenCatalyst - a graph neural network (GNN) predicting energy and forces in
atomic catalyst systems
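To make the time-to-train metric concrete, here is a minimal, hypothetical
measurement harness (the names train_epoch, evaluate, and target_quality are
illustrative assumptions; the official MLPerf rules and logging infrastructure
are more involved than this sketch):

```python
import time

def time_to_train(train_epoch, evaluate, target_quality, max_epochs=100):
    """Illustrative time-to-train: wall-clock seconds until the model
    first reaches the target quality metric.

    train_epoch and evaluate are caller-supplied callables; this is a
    toy harness, not the official MLPerf timing/logging code.
    """
    start = time.perf_counter()
    for epoch in range(1, max_epochs + 1):
        train_epoch(epoch)                # run one training epoch
        if evaluate() >= target_quality:  # converged: stop the clock
            return epoch, time.perf_counter() - start
    raise RuntimeError("target quality not reached within max_epochs")

# Toy usage: quality improves by 0.2 per epoch, so a 0.5 target
# is first reached after the third epoch.
quality = 0.0
def train_epoch(epoch):
    global quality
    quality += 0.2

epochs, seconds = time_to_train(train_epoch, lambda: quality, 0.5)
```

The throughput metric, by contrast, measures how many such training instances
a full system can complete concurrently in a fixed window, so it rewards
scale-out capacity rather than single-model convergence speed.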

We have now held three successful MLPerf HPC submission rounds since
2020, with impressive results from top HPC systems around the world.
We have exciting plans for MLPerf HPC v3.0 in 2023 and would like the
community's input on which HPC AI workloads are the most important to
characterize. We are currently accepting proposals for new benchmark
applications for the upcoming submission round through January 23,
2023, but welcome contributions and ideas at any time. Proposing a
benchmark involves filling out a brief questionnaire and giving a
presentation to the group covering the benchmark's relevance and
technical specifications. Benchmarks are then selected by group
consensus.

The current schedule for the v3.0 submission round in 2023 is as follows:

January 23 - deadline for new benchmark proposals

February 13 - selection of new benchmark proposals

June 12 - benchmark rules and code freeze

October 6 - submission deadline

November 8 - results publication

For more information and to get involved, please feel free to contact
the MLCommons HPC chairs or visit the following links.

MLCommons HPC chairs: Murali Emani <memani at anl.gov>, Steven Farrell
<sfarrell at lbl.gov>

MLPerf HPC v2.0 press release:

MLPerf HPC v2.0 results and overview: https://mlcommons.org/en/training-hpc-20/

Get involved in MLCommons: https://mlcommons.org/en/get-involved

We look forward to hearing from you,

Murali Emani

Steven Farrell
