[hpc-announce] (3rd call) CFP: DFM'20 — Deadline extension — 9th International Workshop on Data-Flow Models for Extreme-Scale Computing
Stéphane Zuckerman
stephane.zuckerman at u-cergy.fr
Mon Apr 6 11:02:32 CDT 2020
9th IEEE International Workshop on Data Flow Models and Extreme-Scale
Computing (DFM 2020)
Hosted as part of COMPSAC 2020, July 13—17, 2020, as an online event
Please see COMPSAC's website for details:
https://ieeecompsac.computer.org/2020
*** IMPORTANT DATES HAVE CHANGED, SEE BELOW ***
This workshop is organized as part of the activities of the IEEE
Computer Society Dataflow STC.
The ninth installment of the international workshop on Data Flow Models
(DFM) for extreme-scale computing is held this year in conjunction with
the COMPSAC conference. The purpose of DFM remains to bring together
researchers interested in novel computational models based on dataflow
principles of execution. The switch to multi-core systems, at both the
high-performance and embedded levels, has made concurrency a major
issue, as per-chip core counts keep increasing and energy and resiliency
concerns come to the fore.
Computer systems, for both high-performance and embedded computing, have
now fully embraced parallelism at the hardware and software levels. From
the HPC systems viewpoint, new challenges have arisen that are familiar
in the embedded world: power and energy efficiency are now major
obstacles to building efficient supercomputers. Conversely, harnessing
truly parallel systems is now necessary to efficiently exploit embedded
systems equipped with multiple cores. Moreover, fault tolerance and
resiliency must also be taken into consideration, at both the hardware
and software levels. Finally, many such systems (both embedded and HPC)
are networked together, forming extremely large distributed and parallel
systems.

Dataflow-inspired models of computation, once discarded by the
sequential programming crowd, are again considered serious contenders to
help increase programmability, performance, and scalability in highly
parallel and extreme-scale systems. By their very nature, dataflow- and
event-driven-inspired models tend to solve (if only partially) some of
the newer problems related to power and energy efficiency, and provide
fertile ground for implementing efficient fault-tolerance and resiliency
mechanisms, as many of the required properties are enmeshed in the
models themselves.

Yet, to achieve high scalability and performance, modern computing
systems, both HPC and embedded, rely on heterogeneous means to carry out
computations: GPUs, FPGAs, etc. Meanwhile, legacy programming and
execution models, such as MPI and OpenMP, add asynchronous and
data-driven constructs to their models, all the while trying to account
for the very complex hardware targeted by parallel applications.
Consequently, programming and execution models that try to combine both
legacy control-flow-based and data-flow-based aspects of computing have
also become increasingly complex to handle. Developing new models and
their implementations, from the application programmer level, through
the system level, down to the hardware level, is key to providing better
data- and event-driven systems that can efficiently exploit the diverse
hardware composing current high-performance systems for extreme-scale
parallel computing. To this end, the whole stack, from the application
programming interface down to the hardware, must be investigated for
programmability, performance, scalability, energy and power efficiency,
as well as resiliency and fault tolerance. All these aspects may have
different impacts on high-performance computing and embedded systems.
Researchers and practitioners all over the world, from both academia and
industry, working in the areas of language, system software, and
hardware design, as well as parallel computing, execution models, and
resiliency modeling, are invited to discuss state-of-the-art solutions,
novel issues, recent developments, applications, methodologies,
techniques, experience reports, and tools for the development and use of
data flow models of computation. DFM 2020 solicits novel papers on
topics that include, but are not limited to, the following:
• Programming languages and compilers for existing and new languages,
in particular single-assignment and functional languages
• System software: Operating systems, runtime systems
• Hardware design: ASICs and reconfigurable computing (FPGAs)
• Resiliency and fault-tolerance for parallel and distributed systems
• New data flow inspired execution models — in particular strict and
non-strict models
• Hybrid system design for control-flow and data-flow based systems
• Position papers on the future of data flow in the era of parallel and
distributed many-core systems, and beyond, including heterogeneous systems
SUBMISSION INFORMATION
DFM 2020 will accept both full papers (6 pages) and short papers (4
pages). Full papers may go up to 8 pages for a fee. Papers should be
prepared using the IEEE Proceedings format; short papers may be
submitted in the form of extended abstracts. All accepted papers will
appear in the Computer Society Digital Library. Submission site:
https://easychair.org/my/conference.cgi?welcome=1;conf=compsac2020
IMPORTANT DATES
Submission deadline: April 24th, 2020, AOE
Author notification: May 15th, 2020
Camera ready due: May 31st, 2020
PROGRAM COMMITTEE
Arthur Stoutchinin STMicroelectronics
Guang R. Gao U. of Delaware
Jean-Luc Gaudiot U. of California, Irvine
Roberto Giorgi U. of Siena
Robert Clay Sandia National Lab
Wolfgang Karl Karlsruhe Institute of Technology
Erik Altman IBM
Albert Cohen Google
Theo Ungerer U. of Augsburg
--
Associate Professor (Maître de Conférences) / IUT GEII (Neuville site)
Laboratoire ETIS — Université Paris-Seine
UMR 8051, Université de Cergy-Pontoise, ENSEA, CNRS
F-95000, Cergy, France