[hpc-announce] CFP P-RECS'22 6-April-2022 deadline

Lofstead, Gerald F II gflofst at sandia.gov
Sun Feb 13 17:55:50 CST 2022


Call for Papers: 5th edition of the P-RECS workshop, co-located with HPDC 2022 (hpdc.org/2022)

The P-RECS workshop focuses heavily on practical, actionable aspects of reproducibility in broad areas of computational science and data exploration, with special emphasis on issues in which community collaboration can be essential for adopting novel methodologies, techniques and frameworks aimed at addressing some of the challenges we face today. The workshop brings together researchers and experts to share experiences and advance the state of the art in the reproducible evaluation of computer systems, featuring contributed papers and invited talks. Submit your position and experience papers on practical aspects of reproducibility!
 
Submission deadline: 6 April 2022
Author Notification: 18 April 2022
Camera Ready Due: 21 April 2022
Workshop date: 30 June 2022
Submissions: https://easychair.org/conferences/?conf=precs22

The goal of the workshop is to bring together researchers and experts to share experiences and advance the state of the art in the reproducible evaluation of computer systems. The workshop will focus heavily on practical, actionable aspects of reproducibility in broad areas of computational science and data exploration, with special emphasis on issues in which community collaboration can be essential for adopting novel methodologies, techniques and frameworks aimed at addressing some of the challenges we face today.
The workshop will address practical but challenging questions that multiple communities face today: how do we re-execute experiments in an easy way? How do we minimize the time it takes practitioners to extend the state of the art of a particular domain? Should we pay the price (time and effort) of reproducing experiments in constantly changing environments (such as HPC centers)?  How do we curate experiments and their data sets? Who should curate them? How do we index experiments? How do we decide which experiments and data sets to preserve? When reviewing papers, what should the reviewers see? How does double-blind review interact with reproducibility?
We will peer review (double-open) via EasyChair, with each paper receiving at least 3 reviews. The goal of an open review process is to foster a collaborative atmosphere and encourage easier discussion at the workshop. We will use the ACM conference format, with submissions of no more than 5 pages (excluding references). We will also publish a call for 5-minute work-in-progress (WIP) presentations, with authors required to submit a 1-page abstract.

Paper review outcomes are Strong Accept, Accept, Accept with Shepherd, Not Ready, and Wrong Venue. Strong Accept and Accept papers are accepted without shepherding, though authors may be asked to incorporate reviewer feedback. Accept with Shepherd means the work is believed to be close to acceptable but requires verified/supervised changes. Not Ready indicates the work is judged too far from an acceptable paper to be revised in time for the deadlines. Wrong Venue means the paper does not fit the workshop's scope.

We expect submissions on topics including, but not limited to:
Experiment dependency management.
Software citation and persistence.
Data versioning and preservation.
Provenance of data-intensive experiments.
Tools and techniques for incorporating provenance into publications.
Automated experiment execution and validation.
Experiment portability for code, performance, and related metrics.
Experiment discoverability for re-use.
Cost-benefit analysis frameworks for reproducibility.
Usability and adaptability of reproducibility frameworks into already-established domain-specific tools.
Long-term artifact archiving for future reproducibility.
Frameworks for sociological constructs to incentivize paradigm shifts.
Policies around publication of articles/software.
Blinding and selecting artifacts for review while maintaining history.
Reproducibility-aware computational infrastructure.
System support for reproducibility.

There will be two categories of submissions:
Position papers. These are vision papers whose goal is to propose solutions (or scope the work that needs to be done) to address some of the issues outlined above. We hope that a research agenda emerges from these papers and that we can build a community that meets yearly to report on progress in addressing these problems. Previous vision papers have led to lively discussions that have informed the community's research direction.
Experience papers. We encourage the community to make use of an automated experiment-execution service. The committee will look for submissions reporting on authors' experience in automating one or more experiments: what worked? What aspects of experiment automation and validation are hard in their domain? What can be done to improve the tooling for their domain?

For further information, please consult the workshop website: https://p-recs.github.io/2022
or email the organizers:
Jay Lofstead
gflofst at sandia.gov
