[hpc-announce] IEEE Transactions on Cloud Computing - Special Issue on Big Data - Deadline extended to March 21

Miranda.Zhang at csiro.au Miranda.Zhang at csiro.au
Sun Mar 2 02:19:50 CST 2014



Special Issue on

Autonomic Provisioning of Big Data Applications on Clouds



IEEE Transactions on Cloud Computing



Guest Editors: Rajiv Ranjan, Lizhe Wang, Albert Zomaya, Dimitrios Georgakopoulos, Guojun Wang, and Xian-He Sun



Editor-in-Chief: Rajkumar Buyya



*** Call for Papers ***



This special issue solicits papers that advance the fundamental understanding, technologies, and concepts related to autonomic provisioning of cloud resources for Big Data applications. Research advancement in this area is important because such large, heterogeneous, and uncertain Big Data applications are becoming increasingly common, yet current cloud resource provisioning methods neither scale well nor perform well under highly unpredictable conditions (data volume, data variety, data arrival rate, etc.). If these problems are resolved, cloud-hosted Big Data applications will operate more efficiently, with reduced financial and environmental costs, reduced under-utilisation of resources, and better performance under unpredictable workloads.

Cloud computing assembles large networks of virtualised ICT services: hardware resources (such as CPU, storage, and network), software resources (such as databases, application servers, and web servers), and applications. In industry these services are referred to as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Mainstream ICT powerhouses such as Amazon, HP, and IBM are investing heavily in the provision and support of public cloud infrastructure, and cloud computing is rapidly becoming the infrastructure of choice for all types of organisations. Despite some initial security concerns and technical issues, an increasing number of organisations have moved their applications and services into "the Cloud"; these applications range from generic word-processing software to online healthcare. A cloud system taps into the processing power of virtualised machines on the back end, significantly speeding up the application for users, who pay only for the services they use.

Big Data applications have become a common phenomenon in the domains of science, engineering, and commerce. Representative applications include disaster management, high-energy physics, genomics, connectomics, automobile simulations, medical imaging, and the like. The "Big Data" problem is defined as the practice of collecting data sets so large and complex that they become difficult to analyse and interpret manually or with on-hand data management applications (e.g., Microsoft Excel). For example, a disaster-management Big Data application must analyse "a deluge of online data from multiple sources (feeds from social media and mobile devices)" to understand and manage real-life events such as floods and earthquakes. The more than 20 million tweets posted during Hurricane Sandy (2012) constitute one instance of the Big Data problem. Statistics from the PearAnalytics study reveal that almost 44% of Twitter posts are spam or pointless, about 6% are personal or product advertising, 3.6% are news, and 37.6% are conversational. During the 2010 Haiti earthquake, text messaging via mobile phones and Twitter made headlines as being crucial for disaster response, yet only some 100,000 messages were actually processed by government agencies, owing to the lack of an automated and scalable ICT (cloud) infrastructure.



Although significant effort has been devoted to migrating generic web-based applications to the Cloud, scant research and development has gone into a unified software framework for provisioning Big Data applications on clouds. Provisioning means the selection, deployment, monitoring, and run-time management of PaaS and IaaS resources to ensure that applications meet their Quality of Service (QoS) targets (data analysis delay, data availability, alert generation delay, etc.) as agreed in the negotiated Service Level Agreement (SLA). IaaS clouds such as Amazon EC2 and GoGrid are too low-level, making development of Big Data applications difficult and resource provisioning unintelligent and inefficient. PaaS clouds such as Microsoft Azure operate at an appropriate level, but they do not provide the right kind of abstraction for real-time analysis of massive data sets from multiple sources. Big Data applications are uncertain, first, because they must deal with data that come from multiple contexts and originate from heterogeneous sources, making it difficult to estimate application behaviour in terms of data volume, data arrival rate, data types, and data processing time distributions. Second, from the cloud resource perspective, without knowing the requirements or behaviour of Big Data applications it is difficult to decide how many resources to provision at any given time. Furthermore, the availability, load, and throughput of cloud resources can vary in unpredictable ways, owing to failure, malicious attacks, or congestion of network links.
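To make the provisioning loop described above concrete, the following is a minimal illustrative sketch of a threshold-based provisioner, assuming a hypothetical streaming workload and a simple utilisation-based SLA bound. All names, formulas, and numbers are assumptions for illustration only; nothing here is prescribed by the special issue.

```python
# Illustrative sketch only: a toy monitoring/run-time-management cycle that
# scales the number of instances so estimated utilisation stays within an
# SLA ceiling. All parameters and formulas are hypothetical assumptions.
import math

def required_instances(arrival_rate, per_instance_throughput, sla_utilisation=0.7):
    """Estimate how many instances keep utilisation below the SLA ceiling.

    arrival_rate            -- records/sec arriving from the data sources
    per_instance_throughput -- records/sec a single instance can process
    sla_utilisation         -- utilisation ceiling agreed in the (hypothetical) SLA
    """
    capacity_needed = arrival_rate / (per_instance_throughput * sla_utilisation)
    return max(1, math.ceil(capacity_needed))

def provision_step(current, arrival_rate, per_instance_throughput):
    """One monitoring cycle: compare observed demand against current capacity
    and return a scaling decision plus the target instance count."""
    target = required_instances(arrival_rate, per_instance_throughput)
    if target > current:
        return "scale_out", target
    if target < current:
        return "scale_in", target
    return "hold", current
```

For example, `provision_step(2, 900, 100)` returns `("scale_out", 13)`, since 900 / (100 x 0.7) rounds up to 13 instances. Real autonomic provisioners must additionally cope with the uncertainties noted above (unknown arrival rates, heterogeneous data types, and fluctuating resource throughput), which is precisely what makes the research problem hard.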



The special issue will also encourage submission of revised and extended versions of 2-3 best papers (based on votes of a panel) in the area of Cloud Computing from IEEE HPCC 2013 (http://trust.csu.edu.cn/conference/hpcc2013/Call%20for%20Papers.htm), IEEE CCGRID 2014 (http://datasys.cs.iit.edu/events/CCGrid2014/), and IEEE IC2E 2014 conferences.



Topics

  Areas of interest for this special issue include the following:

-      Algorithms for efficient non-SQL query-based processing of petabyte-scale Big Data and related cloud resource optimisation

-      Big Data Application behavior prediction models

-      Real-time analytics on streaming Big Data

-      Collaborative sharing and management

-      Big Data application performance evaluation study on public and private clouds

-      Dynamic learning techniques for adapting to new Big Data application behaviour

-      Queuing theory based cloud resource performance model solvers

-      Stochastic fault-tolerance and reliability models

-      Decentralized networking models for scalable Big Data application health monitoring

-      Energy-efficiency models for Big Data application provisioning

-      Innovative Big Data Application use cases (disaster management, high energy physics, genomics, connectomics, automobile simulations, medical imaging, and the like)

-      Security-, privacy-, and trust-based Big Data application provisioning



Schedule

Submission due date: March 21, 2014

Notification of acceptance: June 15, 2014

Submission of final manuscript: August 15, 2014

Publication date: 2nd Quarter, 2014 (tentative)



Submission & Major Guidelines

The special issue invites original research papers that make significant contributions to the state of the art in "Autonomic Provisioning of Big Data Applications on Clouds". Papers must not have been previously published or be under submission to other journals or conferences. However, papers previously published at reputable conferences may be considered if they are substantially revised from their earlier versions, with at least 30% new content or results, and comply with any applicable copyright regulations. Every submitted paper will receive at least three reviews. The editorial review committee will include well-known experts in the areas of Grid, Cloud, and Autonomic computing.



Selection and Evaluation Criteria:

-      Significance to the readership of the journal

-      Relevance to the special issue

-      Originality of idea, technical contribution, and significance of the presented results

-      Quality, clarity, and readability of the written text

-      Quality of references and related work

-      Quality of research hypothesis, assertions, and conclusion



Guest Editors

Dr. Rajiv Ranjan - Corresponding Guest Editor

Research Scientist, CSIRO ICT Center

Computer Science and Information Technology Building (108)

North Road, Australian National University, Acton, ACT, Australia

Email: raj.ranjan at csiro.au



Prof. Lizhe Wang

Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences

No. 9 Dengzhuang South Road, Haidian District

Beijing 100094, P.R. China

Email: lzwang at ceode.ac.cn



Prof. Albert Zomaya

Australian Research Council Professorial Fellow

Chair Professor of High Performance Computing & Networking

School of Information Technologies, Building J12

The University of Sydney

Sydney, NSW 2006, Australia

Email: albert.zomaya at sydney.edu.au



Prof. Dimitrios Georgakopoulos

Research Director, Information Engineering Laboratory, CSIRO ICT Center

Computer Science and Information Technology Building (108)

North Road, Australian National University, Acton, ACT, Australia

Email: dimitrios.georgakopoulos at csiro.au



Dr. Guojun Wang

Chairman and Professor, Department of Computer Science,

Central South University, Changsha, Hunan Province,

P. R. China, 410083

Tel/Fax: +86 731 88877711, Mobile: +86 13508486821

Email: csgjwang at mail.csu.edu.cn; csgjwang at gmail.com



Prof. Xian-He Sun

Director, The SCS Laboratory, Department of Computer Science

Illinois Institute of Technology

Email: sun at iit.edu
