[hpc-announce] Special Issue on Orchestration of computing resources in the Cloud-to-Things continuum

angalletta at ieee.org
Fri May 29 06:09:28 CDT 2020


Dear users of the hpc-announce mailing list,
FYI
[Apologies if you receive multiple copies of this CFP]

**************************************************************************************************
Journal of Grid Computing - From Grids to Cloud Federations [IF 3.288 (2018)]

Special Issue on Orchestration of computing resources in the
Cloud-to-Things continuum
**************************************************************************************************
https://www.springer.com/journal/10723/updates/18017998

Aim and Scope:
The objective of this Special Issue (SI) is to collect the latest research
findings on major and emerging topics related to the orchestration of
resources in a wide ecosystem where IoT, Edge/Fog, and Cloud converge to form
a computing continuum, also known as the Cloud-to-Things continuum.
Cloud computing can provide flexible and scalable resources to meet virtually
any computing need, and Big Data has revolutionized the approach to data
processing. With the growing volume of data produced by IoT devices, there is
an increasing demand for applications capable of processing such data flows
close to their sources, or anywhere else along the IoT-to-Cloud path
(Edge/Fog), not just on the Cloud. Where computation should occur depends on
the specific needs of each application. Strict real-time constraints require
computation to run as close to the data origin as possible (e.g., on an IoT
gateway). Conversely, batch-oriented tasks (e.g., Big Data analytics) are
best run on the Cloud, where computing resources are abundant. Edge/Fog may
be a good compromise when both computing power and timely processing are
required. Application designers would greatly benefit from support for
flexible and dynamic provisioning of computing resources along the
Cloud-to-Things path, that is, a provisioning system capable of orchestrating
(activating, deactivating, integrating, etc.) computing resources offered by
heterogeneous computing infrastructures. Such a system must also take into
account, and cope with, the heterogeneity of the providers that own those
infrastructures in terms of service APIs, guaranteed service levels, data
management policies, etc. Moreover, typical data-intensive workloads
consisting of data-analytics tasks, such as Machine Learning (ML)/AI and
descriptive analysis, are ideal candidates for the Cloud-to-Things continuum:
data is typically generated at the edge (by IoT devices, for instance through
a serverless pipeline), whereas the analysis (whether model training or the
execution of descriptive tasks) traditionally happens at centralized
locations in the Cloud using distributed processing frameworks.
This SI encourages submissions that address resource orchestration issues in
the Cloud-to-Things landscape and that propose experimental solutions, case
studies, deployed systems, and best practices in the field.

Topics of the SI include (but are not limited to):
- Resource provisioning and monitoring in Cloud-to-Things environments
- Cross cloud/edge service migration
- Orchestration of microservices
- Blockchain techniques for resource orchestration
- Machine learning techniques for resource orchestration
- Data governance across multiple computing domains
- Scheduling and provisioning of data analytics on hybrid Edge/Fog and Cloud infrastructures
- Stream data processing in Edge/Fog and Clouds
- Serverless execution of Machine Learning and SQL workloads on the Cloud
- Quality of Service and SLAs in Cloud-to-Things environments
- Fault management and recovery strategies
- Identity and access management
- Cross-infrastructure security mechanisms
- Data security and protection
- Multi-cloud deployment and orchestration
- Multi-cloud resource elasticity
- Optimisation of services and service chains for Cloud-to-Things systems
- Optimisation of networking services
- SDN/NFV-based solutions
- Networking and network orchestration
- Orchestration of computing resources in application domains such as Smart City, Smart Industry, Smart Grid, Smart Agriculture, and Smart Health

Submission guidelines:
All submitted papers will undergo the rigorous review process adopted by the
Journal of Grid Computing. Please submit a full-length paper through the
Journal of Grid Computing online submission system
(https://www.editorialmanager.com/grid/default.aspx) and indicate that it is
for this special issue. Papers must follow the Journal of Grid Computing
manuscript formatting guidelines and must not exceed 18 pages. Please refer
to the Journal’s website for detailed instructions on paper submission. For
further inquiries, please contact the corresponding Guest Editor, Giuseppe Di
Modica (see contact details below).

Paper submissions should follow this tentative schedule:
- Submission deadline: 15th October 2020
- Author notification: 15th January 2021
- Revised papers: 1st March 2021
- Final notification: 15th April 2021
- Publication: as per the policy of the journal

Guest Editors
Giuseppe Di Modica (Corresponding Guest Editor), University of Bologna,
Italy
Antonino Galletta, University of Messina, Italy
Shadi Ibrahim, Inria, France
Ioannis Konstantinou, University of Thessaly, Greece
Javid Taheri, Karlstad University, Sweden

For more details: https://www.springer.com/journal/10723/updates/18017998

