[hpc-announce] [CFP] IA^3 2016 - DEADLINE EXTENDED TO SEPTEMBER 7

Tumeo, Antonino Antonino.Tumeo at pnnl.gov
Tue Aug 23 20:31:47 CDT 2016

[Apologies if you receive multiple copies of this CFP]

IA^3 2016 - Sixth Workshop on Irregular Applications: Architectures and Algorithms
November 13 2016
Salt Lake City, UT

To be held in conjunction with SC16
To be held in cooperation with SIGHPC

Call for Papers

Irregular applications occur in many domains. While inherently parallel, they exhibit highly variable execution performance at the local level due to unpredictable memory access patterns and/or network transfers, divergent control structures, and data imbalances. Moreover, they often require fine-grained synchronization and communication on large data structures such as graphs, trees, unstructured grids, tables, sparse matrices, deep networks, and their combinations (for example, attributed graphs). They exhibit a significant degree of latent parallelism that is nonetheless difficult to exploit because of their complex behavior. Current high-performance architectures rely on data locality and regular computation to reduce access latencies, and often cope poorly with the requirements of these applications. Furthermore, irregular applications are difficult to scale on current supercomputers, whose support for fine-grained synchronization and small data transfers is limited.

Irregular applications arise in both well-established and emerging fields, such as machine learning, social network analysis, bioinformatics, semantic graph databases, Computer Aided Design (CAD), and computer security. Many of these application areas also process massive sets of unstructured data, which keep growing exponentially. Addressing the issues of irregular applications on current and future architectures will be critical to solving the scientific and data-analysis challenges of the next few years.

This workshop seeks solutions for efficiently executing irregular applications, in the form of new features at the level of micro- and system architecture, networks, languages and libraries, runtimes, compilers, analysis, and algorithms. Topics of interest, of both theoretical and practical significance, include but are not limited to:

* Micro- and system architectures, including multi- and many-core designs, heterogeneous processors, accelerators (GPUs, vector processors, the Automata Processor), reconfigurable designs (coarse-grained reconfigurable arrays and FPGAs), and custom processors
* Network architectures and interconnect (including high-radix networks, optical interconnects) 
* Novel memory architectures and designs (including processing-in-memory)
* Impact of new computing paradigms on irregular workloads (including neuromorphic processors and quantum computing) 
* Modeling, simulation and evaluation of novel architectures with irregular workloads 
* Innovative algorithmic techniques 
* Combinatorial algorithms (graph algorithms, sparse linear algebra, etc.)
* Impact of irregularity on machine learning approaches
* Parallelization techniques and data structures for irregular workloads
* Data structures combining regular and irregular computations (e.g., attributed graphs)
* Approaches for managing massive unstructured datasets (including streaming data) 
* Languages and programming models for irregular workloads
* Library and runtime support for irregular workloads
* Compiler and analysis techniques for irregular workloads
* High performance data analytics applications, including graph databases 

Besides regular papers, we also encourage submissions describing work-in-progress or incomplete but sound, innovative ideas related to the workshop theme. We solicit both 8-page regular papers and 4-page position papers. Authors of promising but not yet mature regular papers may be offered the option of a 4-page short paper and a related short presentation.

Special Issue
Authors of papers accepted at the workshop will be invited to submit extended and updated versions of their work for a Special Issue of Elsevier’s Journal of Parallel and Distributed Computing (JPDC) on Systems for Learning, Inferencing, and Discovering (SLID).

Important Dates - EXTENDED
Abstract submission:                   7 September 2016 23:59 PST - EXTENDED
Position or full paper submission:     7 September 2016 23:59 PST - EXTENDED
Notification of acceptance:            3 October 2016
Camera-ready position and full papers: 10 October 2016
Workshop:                              13 November 2016
Submission site: https://easychair.org/conferences/?conf=ia32016

All submissions should be in double-column, single-spaced letter format, use 10-point fonts, have at least one-inch margins on each side, and follow the IEEE conference templates available at: http://www.ieee.org/conferences_events/conferences/publishing/templates.html.

The proceedings of the workshop will be published in cooperation with ACM SIGHPC. 

Submitted manuscripts may not exceed eight (8) pages for regular papers and four (4) pages for position papers, including figures, tables, and references.

For any questions, please contact the organizers at antonino.tumeo at pnnl.gov, john.feo at pnnl.gov, or ovilla at nvidia.com.


Organizers

Antonino Tumeo, PNNL, antonino.tumeo at pnnl.gov
John Feo, PNNL, Northwest Institute for Advanced Computing (NIAC), john.feo at pnnl.gov
Oreste Villa, NVIDIA Research, ovilla at nvidia.com

Program Committee 

Scott Beamer, Lawrence Berkeley National Laboratory, US
Michela Becchi, University of Missouri, US
David Brooks, Harvard University, US
Hubertus Franke, IBM TJ Watson, US
John Gilbert, University of California at Santa Barbara, US
Maya Gokhale, Lawrence Livermore National Laboratory, US
Vivek Kumar, Rice University, US
John Leidel, Texas Tech University, US
Kamesh Madduri, Penn State University, US
Naoya Maruyama, RIKEN AICS, JP
Satoshi Matsuoka, Tokyo Institute of Technology, JP
Tim Mattson, Intel, US
Richard Murphy, Micron, US
Miquel Moretó, UPC-BSC, ES
Walid Najjar, University of California Riverside, US
Jacob Nelson, University of Washington, US
Ozcan Ozturk, Bilkent University, TR
Gianluca Palermo, Politecnico di Milano, IT
D.K. Panda, The Ohio State University, US
Fabrizio Petrini, Intel, US
Jason Riedy, Georgia Institute of Technology, US
Daniel Sanchez, Massachusetts Institute of Technology, US
Erik Saule, University of North Carolina at Charlotte, US
John Shalf, Lawrence Berkeley National Laboratory, US
Ruud Van Der Pas, Oracle, US
Flavio Vella, Sapienza, University of Rome, IT
