[hpc-announce] PDSW-DISCS 2017: Call for Works in Progress Abstracts

Mohror, Kathryn mohror1 at llnl.gov
Thu Oct 26 12:51:48 CDT 2017


Call for Works in Progress Abstracts (PDSW-DISCS'17)


DEADLINE: November 1, 2017 AOE


PDSW-DISCS'17: The 2nd Joint International Workshop on Parallel Data Storage and Data Intensive Scalable Computing Systems

   ** http://www.pdsw-discs.org

   ** Monday, November 13, 2017 9:00am - 5:30pm

   ** SC'17 Workshop

   ** Denver, CO

   ** In cooperation with SIGHPC


General Chair:

   ** Dean Hildebrand (IBM Research)


Program Co-Chairs:

   ** Kathryn Mohror (Lawrence Livermore National Laboratory)

   ** Brent Welch (Google)


Important Dates:

   ** Work in Progress (WIP) submissions due: Wednesday, November 1, 2017, 11:59 PM AoE

   ** WIP Notification: Tuesday, November 7, 2017


Workshop Abstract:

We are pleased to announce that the second Joint International Workshop on Parallel Data Storage and Data Intensive Scalable Computing Systems (PDSW-DISCS'17) will be hosted at SC17: The International Conference for High Performance Computing, Networking, Storage and Analysis.  The objective of this one-day joint workshop is to bring together two overlapping communities and to promote and stimulate researchers' interactions on some of the most critical challenges in scientific data storage, management, devices, and processing infrastructure, for both traditional compute-intensive simulations and data-intensive high performance computing solutions.  Special attention will be given to issues in which community collaboration can be crucial for problem identification, workload capture, solution interoperability, standards with community buy-in, and shared tools.


Many scientific problem domains continue to be extremely data intensive. Traditional high performance computing (HPC) systems, and the programming models for using them such as MPI, were designed from a compute-centric perspective with an emphasis on achieving high floating point computation rates. But processing, memory, and storage technologies have not kept pace, and there is a widening performance gap between computation and the data management infrastructure. Hence, data management has become the performance bottleneck for a significant number of applications targeting HPC systems.  Concurrently, there are increasing challenges in meeting the growing demand for analyzing experimental and observational data.  In many cases, this is leading new communities to look toward HPC platforms.  In addition, the broader computing space has seen a revolution in new tools and frameworks to support Big Data analysis and machine learning.


There is a growing need for convergence between these two worlds.  Consequently, the U.S. Congressional Office of Management and Budget has informed the U.S. Department of Energy that new machines beyond the first exascale machines must address both the traditional simulation workloads as well as data intensive applications. This coming convergence prompts integrating these two workshops into a single entity to address the common challenges.


Workshop topics of interest:

   ** Scalable storage architectures, archival storage, storage virtualization, emerging storage devices and techniques

   ** Performance benchmarking, resource management, and workload studies from production systems, including both traditional HPC and data-intensive workloads

   ** Programmability, APIs, and fault tolerance of storage systems

   ** Parallel file systems, metadata management, complex data management, object and key-value storage, and other emerging data storage/retrieval techniques

   ** Programming models and frameworks for data intensive computing, including extensions to traditional programming models, asynchronous multi-task programming models, and data intensive programming models

   ** Techniques for data integrity, availability, and reliability

   ** Productivity tools for data intensive computing, data mining and knowledge discovery

   ** Application or optimization of emerging "big data" frameworks towards scientific computing and analysis

   ** Techniques and architectures to enable cloud and container-based models for scientific computing and analysis

   ** Techniques for integrating compute into a complex memory and storage hierarchy facilitating in situ and in transit data processing

   ** Data filtering/compressing/reduction techniques that maintain sufficient scientific validity for large scale compute-intensive workloads

   ** Tools and techniques for managing data movement among compute and data intensive components both solely within the computational infrastructure as well as incorporating the memory/storage hierarchy


Work-in-progress (WIP) Submissions Details:

   ** Email to pdswdiscs17 at easychair.org by Wednesday, Nov. 1, 2017, AoE

   ** Submission format: a 1-page PDF abstract


In the WIP session at the workshop, accepted presenters will give brief 5-minute talks on ongoing work that presents fresh problems or solutions but may not yet be mature or complete enough for a paper submission.


WIP Submission Deadline: Wednesday, Nov. 1, 2017

WIP Notification: Tuesday, Nov. 7, 2017


_________________________________________________________________
Kathryn Mohror, kathryn at llnl.gov, http://scalability.llnl.gov/
Scalability Team @ Lawrence Livermore National Laboratory, Livermore, CA, USA


