Welcome to DBTest

About DBTest

With the ever increasing amount of data stored and processed, there is an ongoing need to test not only database management systems but also data-intensive systems in general. Specifically, emerging technologies such as Non-Volatile Memory pose new challenges (e.g., avoiding persistent memory leaks and partial writes), and novel system designs involving FPGAs, GPUs, and RDMA call for additional attention and sophistication.

Building on the success of the eight previous workshops, the goal of DBTest 2022 is to bring researchers and practitioners from academia and industry together to discuss key problems and ideas related to testing database systems and applications. The long-term objective is to reduce the cost and time required to test and tune data management and processing products so that users and vendors can spend more time and energy on actual innovations.

Topics Of Interest

  • Testing of database systems, storage services, and database applications
  • Testing of database systems using novel hardware and software technology (non-volatile memory, hardware transactional memory, …)
  • Testing heterogeneous systems with hardware accelerators (GPUs, FPGAs, ASICs, …)
  • Testing distributed and big data systems
  • Testing machine learning systems
  • Specific challenges of testing and quality assurance for cloud-based systems
  • War stories and lessons learned
  • Performance and scalability testing
  • Testing the reliability and availability of database systems
  • Algorithms and techniques for automatic program verification
  • Maximizing code coverage during testing of database systems and applications
  • Generation of synthetic data for test databases
  • Testing the effectiveness of adaptive policies and components
  • Tools for analyzing database management systems (e.g., profilers, debuggers)
  • Workload characterization with respect to performance metrics and engine components
  • Metrics for test quality, robustness, efficiency, and effectiveness
  • Operational aspects such as continuous integration and delivery pipelines
  • Security and vulnerability testing
  • Experimental reproduction of benchmark results
  • Functional and performance testing of interactive data exploration systems
  • Traceability, reproducibility, and reasoning for ML-based systems

Details

Paper Submission

Authors are invited to submit original, unpublished research papers that are not being considered for publication in any other forum.

Submission Guidelines

EasyChair

Timeline

Submission: March 1, 2022, 11:59 PM US PST

Notification of Outcome: April 4, 2022, 11:59 PM US PST

Camera-Ready Copy: April 17, 2022, 11:59 PM US PST

Organization

Program Committee

Anisoara Nica (SAP SE)
Anja Grünheid (Microsoft)
Chee-Yong Chan (National University of Singapore)
Danica Porobic (Oracle)
Daniel Ritter (HPI)
Jayant R. Haritsa (Indian Institute of Science)
Joy Arulraj (Georgia Tech)
Junwen Yang (University of Chicago)
Muhammad Ali Gulzar (Virginia Tech)
Numair Mansur (MPI-SWS)
Renata Borovica-Gajic (University of Melbourne)
Shuai Wang (HKUST)
S. Sudarshan (IIT Bombay)
Stefania Dumbrava (ENSIIE)
Utku Sirin (Harvard University)

Workshop Co-Chairs

Manuel Rigger (ETH Zurich, Switzerland)
Pinar Tözün (ITU Copenhagen, Denmark)

Steering Committee

Carsten Binnig (TU Darmstadt, Germany)
Alexander Böhm (SAP SE, Germany)
Tilmann Rabl (TU Berlin, Germany)