[hpc-announce] [CFP] The 8th EMC2 - Energy Efficient Training and Inference of Transformer Based Models Workshop at AAAI23
Fanny Nina Paravecino
fninaparavecino at gmail.com
Fri Oct 28 00:18:20 CDT 2022
[Corrected Date]
The 8th EMC2 - Energy Efficient Training and Inference of Transformer Based
Models Workshop at AAAI23
Monday February 13, 2023, Washington DC, US.
Website: https://www.emc2-ai.org/aaai-23
Important dates:
Submission Deadline: Nov 7, 2022 (AoE)
Notifications sent: Nov 18, 2022
Final Manuscript due: Dec 1, 2022
Talk Recording due: Dec 19, 2022
Submission site: https://www.emc2-ai.org/submission
Publication:
Proceedings will be published in IEEE Xplore (Conference Table of Contents):
<https://ieeexplore.ieee.org/xpl/conhome/1826546/all-proceedings>
=========
Introduction
Transformers are the architectural foundation of today's large deep learning
language models. Recent successes of Transformer-based models in image
classification and action prediction further indicate their wide
applicability. In this workshop, we focus on the leading ideas behind large
Transformer models such as PaLM from Google: key observations on model
performance, optimizations for inference, and the power consumption of
mixed-precision inference and training.
The goal of this workshop is to provide a forum for researchers and
industry experts who are exploring novel ideas, tools, and techniques to
improve the energy efficiency of machine learning and deep learning as it
is practiced today and as it evolves over the next decade. We envision that
only through close collaboration between industry and academia will we be
able to address the difficult challenges and opportunities of reducing the
carbon footprint of AI and its uses. We have tailored our program to best
serve participants in a fully digital setting. Our forum facilitates an
active exchange of ideas through:
- Keynotes, invited talks and discussion panels by leading researchers
  from industry and academia
- Peer-reviewed papers on latest solutions including works-in-progress to
  seek directed feedback from experts
- Independent publication of proceedings through IEEE CPS
We invite full-length papers describing original, cutting-edge, and even
work-in-progress research on efficient machine learning.
Suggested topics for papers include, but are not limited to:
- Neural network architectures for resource constrained applications
- Efficient hardware designs to implement neural networks including
  sparsity, locality, and systolic designs
- Power and performance efficient memory architectures suited for neural
  networks
- Network reduction techniques – approximation, quantization, reduced
  precision, pruning, distillation, and reconfiguration (see the
  illustrative sketch after this list)
- Exploring interplay of precision, performance, power, and energy through
  benchmarks, workloads, and characterization
- Simulation and emulation techniques, frameworks, tools, and platforms
  for machine learning
- Optimizations to improve performance of training techniques including
  on-device and large-scale learning
- Load balancing and efficient task distribution, communication and
  computation overlapping for optimal performance
- Verification, validation, determinism, robustness, bias, safety, and
  privacy challenges in AI systems
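As a small illustration of the network reduction topic above (and not a
method prescribed by the workshop), the sketch below applies PyTorch
post-training dynamic quantization to a toy Transformer encoder layer. The
layer, its dimensions, and the input shape are assumptions chosen purely
for illustration; the call used is torch.quantization.quantize_dynamic,
which swaps nn.Linear modules for int8 counterparts.

# Illustrative sketch only: dynamic int8 quantization of a toy Transformer
# encoder layer (dimensions are arbitrary, chosen for this example).
import torch
import torch.nn as nn

# Full-precision (fp32) toy layer in inference mode.
fp32_layer = nn.TransformerEncoderLayer(d_model=256, nhead=4)
fp32_layer.eval()

# Replace nn.Linear modules with dynamically quantized int8 versions:
# weights are stored as int8, activations are quantized on the fly.
int8_layer = torch.quantization.quantize_dynamic(
    fp32_layer, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(16, 1, 256)  # (sequence length, batch, embedding dim)
with torch.no_grad():
    y = int8_layer(x)
print(y.shape)  # torch.Size([16, 1, 256])

Storing the feed-forward weights as int8 typically reduces model size and
memory traffic, and with it the energy cost per inference.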
Submission Guidelines
Short papers: Up to 6 pages excluding references. No supplementary material
will be allowed. Papers can present work in progress, exploratory/preliminary
research, or already published work.
Style: Submissions must follow the IEEE style guidelines
<https://www.ieee.org/conferences/publishing/templates.html>. Submissions
should state the research problem, motivation, and technical contribution.
All submissions must be in English and should be sent as a single PDF file.
Desk rejection: Submissions that do not follow the length or style
requirements above shall be automatically rejected without consideration of
their merits.
(Optional) Source code: We encourage authors of accepted submissions to
provide a link to their source code. To maintain a double-blind review
process, you will be allowed to submit or link your code in the
camera-ready stage.
Submission of a paper should be regarded as an undertaking that, if the
paper is accepted, at least one of the authors will register for the
conference and present the work.
Submit your paper(s) in PDF format at the submission site:
https://easychair.org/conferences/?conf=emc24
Workshop Chairs
Fanny Nina Paravecino, Microsoft, US
Kushal Datta, Microsoft, US
Satyam Srivastava, d-Matrix, US
Raj Parihar, Meta, US
Sushant Kondguli, Meta, US
Ananya Pareek, Apple, US
Tao (Terry) Sheng, Oracle, US
--
Fanny Nina Paravecino, PhD
Principal Research Architect, Microsoft