The Second International Workshop on Big Data Reduction

held in conjunction with the 2021 IEEE International Conference on Big Data



Modern applications produce volumes of data too large to be stored, processed, or transferred efficiently. Data reduction is becoming an indispensable technique in many domains because it can shrink data by one or even two orders of magnitude, significantly saving memory/storage space, mitigating the I/O burden, reducing communication time, and improving energy/power efficiency in various parallel and distributed environments, such as high-performance computing (HPC), cloud computing, edge computing, and the Internet of Things (IoT). An HPC system, for instance, is expected to deliver extreme floating-point computing capability, and large-scale HPC scientific applications may generate vast volumes of data (several orders of magnitude larger than the available storage space) for post-analysis. Moreover, runtime memory footprint and communication can be non-negligible bottlenecks in current HPC systems.

Tackling big data reduction research requires expertise from computer science, mathematics, and application domains to study the problem holistically, develop solutions, and harden software tools that can be used by production applications. Specifically, the big-data computing community needs a clear understanding of the complex relationships between application design, data analysis and reduction methods, programming models, system software, hardware, and other elements of a next-generation large-scale computing infrastructure, especially given constraints on applicability, fidelity, performance portability, and energy efficiency. New data reduction techniques also need to be continuously explored and developed to suit emerging applications and diverse use cases.

There are at least three significant research questions that the community is striving to answer: (1) whether several orders of magnitude of data reduction are possible for extreme-scale science; (2) how to understand the trade-off between the performance and accuracy of data reduction; and (3) how to effectively reduce data size while preserving the information inside big datasets.
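To make question (2) concrete, the sketch below illustrates the ratio/accuracy trade-off with a generic error-bounded scheme: quantize floating-point values to a user-set absolute error bound, then apply a lossless compressor. This is a minimal illustration only, not the method of any particular workshop paper or tool; the error bound, data distribution, and use of zlib are all assumptions chosen for demonstration.

```python
# Minimal sketch of error-bounded lossy reduction (illustrative only, not a
# specific workshop tool): quantize values to an absolute error bound, then
# compress losslessly, and measure compression ratio and maximum error.
import random
import struct
import zlib

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(10_000)]  # synthetic "science" data
error_bound = 1e-2  # user-chosen absolute error tolerance (assumption)

# Lossy step: snap each value to the nearest multiple of 2 * error_bound,
# which guarantees |original - reconstructed| <= error_bound.
quantized = [round(x / (2 * error_bound)) for x in data]

raw_bytes = struct.pack(f"{len(data)}d", *data)            # original 8-byte doubles
code_bytes = struct.pack(f"{len(quantized)}q", *quantized)  # integer codes
compressed = zlib.compress(code_bytes, level=9)             # lossless back end

ratio = len(raw_bytes) / len(compressed)
max_err = max(abs(x - q * 2 * error_bound) for x, q in zip(data, quantized))
print(f"compression ratio: {ratio:.1f}x, max error: {max_err:.4f}")
```

Tightening `error_bound` lowers the compression ratio but improves accuracy, and vice versa, which is exactly the trade-off the second research question asks the community to characterize.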

The goal of this workshop is to provide a focused venue for researchers in all aspects of data reduction in all related communities to present their research results, exchange ideas, identify new research directions, and foster new collaborations within the community.

Please note that this year’s IEEE BigData conference and the IWBDR workshop will be held virtually. The proceedings of the workshop will be published as planned. We will provide details about how to attend the workshop virtually soon.


Topics of Interest

The focus areas for this workshop include, but are not limited to:


All papers accepted for this workshop will be published in the Workshop Proceedings of the IEEE Big Data Conference, made available in the IEEE Xplore digital library.

Submission Instructions

Important Dates


Program Chairs

Web Chair

Program Committee

Program Schedule

Timezone: Eastern Time (ET/EST), UTC-5

Time Title
1:00 – 1:05 pm ET Opening Remarks and Welcome
  Dingwen Tao, Sheng Di, Xin Liang
1:05 – 1:50 pm ET Keynote Speech: High Ratio, Speed and Accuracy Customizable Scientific Data Compression with SZ
  Franck Cappello, Argonne National Laboratory
1:50 – 2:15 pm ET S15202: Efficient loading of reduced data ensembles produced at ORNL SNS/HFIR neutron time-of-flight facilities
  William Godoy, Andrei Savici, Steven Hahn, and Peter Peterson
2:15 – 2:40 pm ET BigD302: LCTL: Lightweight Compression Template Library
  Juliana Hildebrandt, André Berthold, Dirk Habich, and Wolfgang Lehner
2:40 – 3:05 pm ET S15205: On Large-Scale Matrix-Matrix Multiplication on Compressed Structures
  Sudhindra Gopal Krishna, Aditya Narasimhan, Sridhar Radhakrishnan, and Richard Veras
3:05 – 3:25 pm ET S15206: Tuning Parallel Data Compression and I/O for Large-scale Earthquake Simulation
  Houjun Tang, Suren Byna, N. Anders Petersson, and David McCallen
3:25 – 3:30 pm ET Coffee Break
3:30 – 3:55 pm ET S15207: Using Neural Networks for Two Dimensional Scientific Data Compression
  Lucas Hayne, John Clyne, and Shaomeng Li
3:55 – 4:20 pm ET BigD312: Prototyping: Sample Selection for Imbalanced Data
  Edward Schwalb
4:20 – 4:45 pm ET S15204: Fast Machine Learning in Data Science with a Comprehensive Data Summarization
  Sikder Tahsin Al-Amin and Carlos Ordonez
4:45 – 5:05 pm ET S15203: Improving Lossy Compression for SZ by Exploring the Best-Fit Lossless Compression Techniques
  Jinyang Liu, Sihuan Li, Sheng Di, Xin Liang, Kai Zhao, Dingwen Tao, Zizhong Chen, and Franck Cappello
5:05 – 5:10 pm ET Closing Remarks


Participants can find the Zoom link to join the workshop through Underline.