[hpc-announce] 1st Tournament on Reproducible and Pareto-Efficient Deep Learning at ASPLOS’18
Grigori Fursin
Grigori.Fursin at cTuning.org
Mon Dec 18 09:42:07 CST 2017
Apologies if you receive multiple copies of this call!
==============================================================
CALL FOR SUBMISSIONS
ReQuEST: 1st Reproducible Quality-Efficient Systems Tournament
Call for Pareto-efficient deep learning
Intent to submit: February 5, 2018 AoE
Associated ReQuEST workshop co-located with ASPLOS 2018
March 24th, 2018 (afternoon), Williamsburg, VA, USA
cKnowledge.org/request
==============================================================
ORGANIZERS
* Luis Ceze, University of Washington, USA
* Natalie Enright Jerger, University of Toronto, Canada
* Babak Falsafi, EPFL, Switzerland
* Grigori Fursin, cTuning foundation, France
* Anton Lokhmotov, dividiti, UK
* Thierry Moreau, University of Washington, USA
* Adrian Sampson, Cornell University, USA
* Phillip Stanley-Marbell, University of Cambridge, UK
==============================================================
CALL FOR PARTICIPATION
Co-designing emerging workloads across the hardware/software
stack to optimize for speed, accuracy, costs and other metrics
is extremely complex and time-consuming. The lack of
a rigorous methodology and common tools for open, reproducible
and multi-objective optimization makes it challenging or even
impossible to evaluate and compare different published works
across numerous heterogeneous hardware platforms, software
frameworks, compilers, libraries, algorithms, data sets, and
environments.
The 1st ReQuEST workshop aims to bring together
multidisciplinary researchers in systems, compilers,
architecture and machine learning to explore the quality vs.
efficiency Pareto frontier of deep learning systems
on complete hardware/software platforms in a standardized,
reproducible and comparable fashion. The first incarnation of
ReQuEST targets the ImageNet Large Scale Visual Recognition
Challenge (ILSVRC) and focuses solely on optimizing inference
on real systems. Restricting the competition to a single
application domain will allow us to test an open-source
tournament infrastructure, validate it across multiple
platforms and environments, and prepare a dedicated live
scoreboard of the "winning" solutions. Future incarnations of
ReQuEST will provide broader application coverage.
Unlike the classical ILSVRC, where submissions are ranked
solely by accuracy, ReQuEST submissions will be evaluated
across multiple metrics and trade-offs (exposed by the
authors): accuracy, speed, throughput, energy, cost of usage,
etc.
Furthermore, in contrast with other deep learning benchmarking
challenges, ReQuEST participants will be asked to submit
a complete workflow artifact (see submission procedures) which
encompasses toolchains, frameworks, algorithms, libraries, and
the target hardware platform, any of which can be fine-tuned
or customized at will by the participants to implement their
optimization techniques.
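For illustration only, here is a minimal Python sketch of the
kind of components such a workflow artifact ties together. All
field names and values below are hypothetical, not the actual
ReQuEST submission format (see cKnowledge.org/request for the
real procedures):

  # Hypothetical workflow metadata (illustrative only; not the
  # actual ReQuEST submission format).
  workflow = {
      "framework": "TensorFlow 1.4",       # DL framework
      "model": "MobileNet-v1",             # ILSVRC inference model
      "libraries": ["CUDA 9", "cuDNN 7"],  # tuned libraries
      "toolchain": "GCC 7.2",              # compiler toolchain
      "hardware": "NVIDIA Jetson TX2",     # target platform
      "metrics": ["accuracy", "latency_ms", "energy_j"],
  }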
We strongly encourage artifact submissions for already
published techniques, since one of the ReQuEST goals is to
prepare an open set of reference and optimized implementations
of popular deep learning algorithms as portable and
customizable workflows that can be easily reused, improved
and built upon!
A ReQuEST artifact evaluation committee (AEC) will be tasked
with independently reproducing and evaluating workflow
submissions on compliant hardware platforms, and with
aggregating the results in a multi-objective public
leaderboard. Due to the multi-faceted nature of the
competition, submissions won't be ranked according to a single
metric; instead, the AEC will assess their Pareto optimality
across two or more metrics exposed by the authors. Since the
competition is multi-objective, accounting for classification
accuracy, inference latency, energy, ownership/usage cost and
so on, there won't be a single winner, but rather better and
worse designs based on their relative Pareto optimality (up to
3 design points are allowed per submission).
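As a concrete illustration, the following minimal Python
sketch shows how Pareto-optimal design points can be selected
from a set of submitted points. The metric names and numbers
are made up for the example; this is not the AEC's actual
evaluation tooling:

  # Minimal sketch of Pareto-frontier selection (illustrative
  # only). Higher accuracy is better; lower latency and energy
  # are better.

  def dominates(a, b):
      """True if a is no worse than b on all metrics and
      strictly better on at least one."""
      no_worse = (a["accuracy"] >= b["accuracy"]
                  and a["latency_ms"] <= b["latency_ms"]
                  and a["energy_j"] <= b["energy_j"])
      better = (a["accuracy"] > b["accuracy"]
                or a["latency_ms"] < b["latency_ms"]
                or a["energy_j"] < b["energy_j"])
      return no_worse and better

  def pareto_frontier(points):
      """Keep the points not dominated by any other point."""
      return [p for p in points
              if not any(dominates(q, p) for q in points)]

  # Hypothetical design points (up to 3 per submission):
  points = [
      {"name": "fp32", "accuracy": 0.76,
       "latency_ms": 120, "energy_j": 9.5},
      {"name": "int8", "accuracy": 0.74,
       "latency_ms": 45, "energy_j": 3.1},
      {"name": "pruned", "accuracy": 0.72,
       "latency_ms": 60, "energy_j": 4.0},
  ]
  print([p["name"] for p in pareto_frontier(points)])
  # -> ['fp32', 'int8']: "pruned" is dominated by "int8"
  # (worse on all three metrics), so it is excluded.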
The workshop, co-located with ASPLOS 2018, will give
participants the opportunity to share their research and
implementation insights with the research community. A joint
academic and industrial panel will be held at the end of the
workshop to discuss how to improve the common SW/HW co-design
methodology for deep learning and other real-world
applications.
==============================================================
Further details about deadlines, submission procedures
and artifact evaluation:
* http://cKnowledge.org/request
Looking forward to your participation and submissions!