
Delete source dataset files before sync to make cold-run prep fair #842

Open
alexey-milovidov wants to merge 2 commits into main from cleanup-source-files-before-sync

Conversation

@alexey-milovidov
Member

Summary

  • Most benchmarks call sync && echo 3 > /proc/sys/vm/drop_caches at the start of run.sh to prepare for a cold first run.
  • That sync also flushes any dirty pages of the source dataset files (hits.tsv, hits.csv, hits.parquet, hits.json.gz, etc.) that were downloaded and loaded into the DB but are no longer needed.
  • Cost of that incidental flush depends on what input format the system happened to ingest (~70 GB uncompressed TSV/CSV vs ~14 GB Parquet, plus per-system layout differences), so it leaks into cold-run measurements unevenly across systems — effectively a hidden violation of benchmark rules.
  • Fix: in benchmark.sh of every system that ingests the dataset into its own storage format, delete the downloaded source files immediately after the load step (and after any data_size measurement that depends on them) and before ./run.sh is invoked; a sketch of the resulting ordering follows after this list. The unlinked files' dirty pages are dropped by the kernel without being flushed, so the subsequent sync covers only the database's own writes.
  • Systems that query the parquet/csv/tsv file directly as their storage (clickhouse-parquet, duckdb-parquet, datafusion, polars, sail, etc.) are intentionally not modified — those files ARE the data and must remain.
  • Skipped: locustdb (script panics during load and never reaches run.sh), mongodb (uses run.js, not run.sh).

33 systems modified: byconity, cedardb, chdb, citus, clickhouse, cloudberry, cockroachdb, cratedb, databend, doris, druid, duckdb, duckdb-vortex, elasticsearch, greenplum, heavyai, hologres, hyper, infobright, kinetica, mariadb, mariadb-columnstore, monetdb, mysql, mysql-myisam, oxla, pg_duckdb, pg_duckdb-indexed, pgpro_tam, pinot, selectdb, starrocks, umbra.
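A sketch of the resulting benchmark.sh ordering, assuming a generic layout (DATASET_URL, DATA_DIR, and load.sh are placeholders; each system's real download, load, and data_size steps differ):

```sh
#!/bin/bash
# Illustrative ordering only; not any particular system's actual benchmark.sh.

wget --continue "$DATASET_URL"    # hypothetical URL of hits.tsv.gz (or .csv/.parquet/.json.gz)
gzip -d hits.tsv.gz

./load.sh                         # ingest into the system's own storage format

du -bcs "$DATA_DIR"               # any data_size measurement that still depends on the source files

# New step: delete the downloaded source files before run.sh. The kernel
# discards the unlinked files' dirty pages instead of flushing them, so the
# sync at the start of run.sh covers only the database's own writes.
rm -f hits.tsv hits.csv hits.parquet hits.json.gz

./run.sh                          # begins with: sync && echo 3 > /proc/sys/vm/drop_caches
```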

Test plan

  • Re-run one ingest-from-TSV benchmark (e.g. starrocks, mysql) on AWS and confirm the script still completes end-to-end with the source file deleted before ./run.sh.
  • Re-run one ingest-from-Parquet benchmark (e.g. duckdb, doris, clickhouse) and confirm the same.
  • Spot-check that no removed file is needed by ./run.sh (run.sh only references the database state, not the source files, in modified systems); one way to do this mechanically is sketched after this list.
  • Verify data_size output is unchanged for systems where it's measured after ./run.sh — those measurements read internal storage, not the deleted source.
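For the spot-check, one illustrative command from the repository root (assuming each modified system keeps its run.sh at <system>/run.sh):

```sh
# List any run.sh that still mentions a source dataset file name.
grep -lE 'hits\.(tsv|csv|parquet|json)' */run.sh
```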

🤖 Generated with Claude Code

alexey-milovidov and others added 2 commits May 3, 2026 17:14
Most systems run `sync && echo 3 > /proc/sys/vm/drop_caches` at the
start of run.sh to prepare for a cold first run of each query. This
sync also flushes any dirty pages of the *source* dataset files
(hits.tsv, hits.csv, hits.parquet, etc.) that were downloaded and
loaded into the system but are no longer needed once ingest is done.

The sync's flush of those unrelated source pages adds time and disk
I/O that varies wildly across systems (uncompressed size ~70 GB for
TSV/CSV vs ~14 GB for Parquet, and some systems decompress in-place
while others move to a separate dir). That's effectively a hidden
violation of benchmark rules: cold-run prep cost ends up depending
on what input format the system happened to use, not on the system
itself.

Fix: in benchmark.sh of every system that ingests the dataset into
its own storage format, delete the downloaded source files
(.csv, .tsv, .parquet, .json.gz - both compressed and uncompressed
forms) immediately after the load step (and after any data_size
measurement that depends on those files) and before run.sh is
invoked. The unlinked files' dirty pages are dropped by the kernel
without being flushed to disk, so the subsequent sync only covers
the database's own writes.

Systems that query the parquet/csv/tsv file directly as their
storage (clickhouse-parquet, duckdb-parquet, datafusion, polars,
sail, etc.) are intentionally NOT modified - those files ARE the
data and must remain.

Skipped: locustdb (script panics during load and never reaches
run.sh), mongodb (uses run.js, not run.sh).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
alexey-milovidov self-assigned this May 5, 2026
@rschu1ze
Member

rschu1ze commented May 5, 2026

Most benchmarks call sync && echo 3 > /proc/sys/vm/drop_caches at the start of run.sh to prepare for a cold first run.
That sync also flushes any dirty pages of the source dataset files (hits.tsv, hits.csv, hits.parquet, hits.json.gz, etc.) that were downloaded and loaded into the DB but are no longer needed.
Cost of that incidental flush depends on what input format the system happened to ingest (~70 GB uncompressed TSV/CSV vs ~14 GB Parquet, plus per-system layout differences), so it leaks into cold-run measurements unevenly across systems — effectively a hidden violation of benchmark rules.

I don't understand the problem. So the source file (e.g. .tsv) was ingested, now the page cache is flushed. How does that affect cold runtimes?

@alexey-milovidov
Member Author

So the source file (e.g. .tsv) was ingested, now the page cache is flushed. How does that affect cold runtimes?

The sync command flushes not only the database's own files but also any other dirty pages in the filesystem, including those of the source dataset files, so that extra writeback is charged to the cold-run preparation, and its cost varies with the input format the system ingested.
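A rough sketch of the effect (hypothetical file name and size; exact timings depend on hardware and filesystem):

```sh
# Case 1: the source file is kept. sync has to write back its ~4 GiB of
# dirty pages before returning, on top of the database's own writes.
dd if=/dev/zero of=hits.tsv bs=1M count=4096
time sync      # slow: includes writeback of the source file

# Case 2: the source file is unlinked first. The kernel discards the dirty
# pages of an unlinked, unreferenced file instead of flushing them.
dd if=/dev/zero of=hits.tsv bs=1M count=4096
rm hits.tsv
time sync      # fast: nothing left to write back for the deleted file
```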
