[ Apologies if you are receiving multiple copies of this CFP.
Please do forward it to interested colleagues. ]

=======================================================================================
Call for papers: PDSW’23
The 8th International Parallel Data Systems Workshop
http://www.pdsw.org
November 12, 2023, 1:30 PM - 5:00 PM
Held in conjunction with SC23, Denver, CO
=======================================================================================

Please note the one-week extension for the paper submission deadline.
The new deadline is August 6th, 2023.


Important Dates
----------------------
Regular Papers and Reproducibility Study Papers
Submissions due: August 6th, 2023, 11:59 PM AoE
Paper Notification: Sept 8th, 2023, 11:59 PM AoE
Camera ready due: Sept 29th, 2023, 11:59 PM AoE

Work in Progress (WIP)
Submissions due: Sept 15th, 2023, 11:59 PM AoE
WIP Notification: On or before Sept 23rd, 2023


Abstract
----------------------
We are pleased to announce the 8th International Parallel Data
Systems Workshop (PDSW’23). PDSW’23 will be hosted in conjunction
with SC23: The International Conference for High Performance
Computing, Networking, Storage and Analysis, in Denver, CO.

Efficient data storage and data management are crucial to
scientific productivity in both traditional simulation-oriented
HPC environments and Big Data analysis environments. This
challenge is further exacerbated by the growing volume of
experimental and observational data, the widening gap between the
performance of computational hardware and storage hardware, and
the emergence of new data-driven algorithms in machine learning.
The goal of this workshop is to facilitate research that addresses
the most critical challenges in scientific data storage and data
processing. PDSW will continue to build on the successful
tradition established by its predecessor workshops: the Petascale
Data Storage Workshop (PDSW, 2006-2015) and the Data Intensive
Scalable Computing Systems workshop (DISCS, 2012-2015). These
workshops were successfully combined in 2016, and the resulting
joint workshop has attracted up to 38 full paper submissions and
140 attendees per year from 2016 to 2022.

We encourage the community to submit original manuscripts that:
- introduce and evaluate novel algorithms or architectures,
- inform the community of important scientific case studies or workloads, or
- validate the reproducibility of previously published work.

Special attention will be given to issues in which community
collaboration is crucial for problem identification, workload
capture, solution interoperability, standardization, and shared
tools. We also strongly encourage papers to share complete
experimental environment information (software version numbers,
benchmark configurations, etc.) to facilitate collaboration.

Topics of interest include the following:
- Large-scale data caching architectures
- Scalable architectures for distributed data storage, archival, and virtualization
- The application of new data processing models and algorithms to computing and analysis
- Performance benchmarking, resource management, and workload studies
- Enabling cloud and container-based models for scientific data analysis
- Techniques for data integrity, availability, reliability, and fault tolerance
- Programming models and big data frameworks for data-intensive computing
- Hybrid cloud/on-premise data processing
- Cloud-specific data storage and transit costs and opportunities
- Programmability of storage systems
- Data filtering, compression, and reduction techniques
- Data and metadata indexing and querying
- Parallel file systems, metadata management, and complex data management
- Integrating computation into the memory and storage hierarchy to facilitate in-situ and in-transit data processing
- Alternative data storage models, including object stores and key-value stores
- Productivity tools for data-intensive computing, data mining, and knowledge discovery
- Tools and techniques for managing data movement among compute- and data-intensive components
- Cross-cloud data management
- Storage system optimization and data analytics with machine learning
- Innovative techniques and performance evaluation for new memory and storage systems


Regular Paper Submissions
--------------------------------------

All papers will be evaluated through a competitive peer review
process under the supervision of the workshop program committee.
Selected papers and associated talk slides will be made available
on the workshop web site. The papers will also be published in the
SC23 Workshop Proceedings.

Authors of regular papers are strongly encouraged to submit
Artifact Description (AD) Appendices that can help to reproduce
and validate their experimental results. While the inclusion of
AD Appendices is optional for PDSW’23, submissions that are
accompanied by AD Appendices will be given favorable consideration
for the PDSW Best Paper award.

PDSW’23 follows the SC23 Reproducibility Initiative, and we will
use the SC23 Artifact Description (AD) Appendix format for PDSW’23
submissions. The AD should include a field for one or more links
to data (Zenodo, Figshare, etc.) and code (GitHub, GitLab,
Bitbucket, etc.) repositories. For artifacts placed in the code
repository, we encourage authors to follow the PDSW 2023
Reproducibility Addendum on how to structure the artifact, as this
makes evaluation easier for the reviewing committee and for future
readers of the paper.

Submit a previously unpublished paper as a PDF file, indicating
authors and affiliations. Papers may be up to 6 pages long, not
including references and the optional reproducibility appendices,
and must use a font size of at least 10 points.
Submission site: https://submissions.supercomputing.org/

Submissions due: August 6th, 2023, 11:59 PM AoE (extended from
July 30th)
Papers must use the ACM conference paper template available at:
https://www.acm.org/publications/proceedings-template


Work-in-progress (WIP) Session
--------------------------------------------------

There will be a WIP session where presenters give brief 5-minute
talks on their ongoing work, covering fresh problems and
solutions. WIP content is typically material that may not be
mature or complete enough for a full paper submission, and it will
not be included in the proceedings. A one-page abstract is
required.
Submission site: https://submissions.supercomputing.org/


Workshop Organizers
------------------------------
General Chair
- Amelie Chi Zhou, Shenzhen University, China

Program Co-Chairs
- Bing Xie, Oak Ridge National Laboratory, USA
- Suren Byna, The Ohio State University, USA

Reproducibility Co-Chairs
- Tanu Malik, DePaul University, USA
- Jean Luca Bez, Lawrence Berkeley National Laboratory, USA

Publicity Chair
- Kira Duwe, EPFL, Switzerland

Web and Proceedings Chair
- Joan Digney, Carnegie Mellon University, USA