[hpc-announce] Call for Papers: Special Issue “Energy-Efficient Computing Systems for Deep Learning” (MDPI Sustainability, IF 2.576)

Jose Cano Reyes Jose.CanoReyes at glasgow.ac.uk
Thu Apr 15 05:40:36 CDT 2021


[Apologies if you receive multiple copies of this Call for Papers]

Dear Colleagues,

We would like to invite you to submit your latest research to the MDPI 
Sustainability journal's Special Issue on “Energy-Efficient Computing 
Systems for Deep Learning”, which is open for submissions until April 30, 
2021.

https://www.mdpi.com/journal/sustainability/special_issues/Energy-Efficient_Computing

Deep learning (DL) is receiving much attention these days due to the 
impressive performance achieved in a variety of application areas, such 
as computer vision, natural language processing, machine translation, 
and many more. Aimed at achieving ever-faster processing of these DL 
workloads in an energy-efficient way, a myriad of specialized hardware 
architectures (e.g., sparse tensor cores in NVIDIA A100 GPU) and 
accelerators (e.g., Google TPU) are emerging. The goal is to provide 
much higher performance-per-watt than general-purpose CPUs. 
Production deployments tend to have very high model complexity and 
diversity, demanding solutions that can deliver higher productivity, 
more powerful programming abstractions, more efficient software and 
system architectures, faster runtime systems, and numerical libraries, 
accompanied by a rich set of analysis tools.

DL models are generally memory- and compute-intensive, for both 
training and inference. Accelerating these operations in an 
energy-efficient way has obvious advantages, first by reducing energy 
consumption (e.g., data centers can consume megawatts, producing an 
electricity bill similar to that of a small town), and secondly, by 
making these models usable on smaller battery-operated devices at the 
edge of the Internet. Edge devices operate under strict power budgets 
with highly constrained computing resources. In addition, while deep neural 
networks have motivated much of this effort, numerous applications and 
models involve a wider variety of operations, network architectures, and 
data processing. These applications and models are a challenge for 
today’s computer architectures, system stacks, and programming 
abstractions. As a result, non-von Neumann computing systems such as 
those based on in-memory and/or in-network computing, which perform 
specific computational tasks just where the data are generated, are 
being investigated in order to avoid the latency of shuttling huge 
amounts of data back and forth between processing and memory units. 
Additionally, machine learning (ML) techniques are being explored to 
reduce overall energy consumption in computing systems. These 
applications of ML range from energy-aware scheduling algorithms in data 
centers to battery life prediction techniques in edge devices. The high 
level of interest in these areas calls for a dedicated journal issue to 
discuss novel acceleration techniques and computation paradigms for 
energy-efficient DL algorithms. Since the journal targets the 
interaction of machine learning and computing systems, it will 
complement other publications that focus specifically on either of 
these two areas in isolation.

The main objective of this Special Issue is to discuss and disseminate 
current work in this area, showcasing novel DL algorithms, 
programming paradigms, software tools/libraries, and hardware 
architectures aimed at providing energy efficiency, in particular 
(but not limited to):

- Novel energy-efficient DL systems: heterogeneous multi/many-core 
systems, GPUs, and FPGAs;
- Novel energy-efficient DL hardware accelerators and associated software;
- Emerging semiconductor technologies with applications to 
energy-efficient DL hardware acceleration;
- Cloud and edge energy-efficient DL computing: hardware and software to 
accelerate training and inference;
- In-memory computation and in-network computation for energy-efficient 
DL processing;
- Machine-learning-based techniques for managing energy efficiency of 
computing platforms.

Dr. José Cano
Dr. José L. Abellán
Prof. David Kaeli
Guest Editors
