Delete source dataset files before sync to make cold-run prep fair #842

alexey-milovidov wants to merge 2 commits into main from …
Conversation
Most systems run `sync && echo 3 > /proc/sys/vm/drop_caches` at the start of `run.sh` to prepare for a cold first run of each query. This `sync` also flushes any dirty pages of the *source* dataset files (`hits.tsv`, `hits.csv`, `hits.parquet`, etc.) that were downloaded and loaded into the system but are no longer needed once ingest is done.

The flush of those unrelated source pages adds time and disk I/O that varies wildly across systems (uncompressed size is ~70 GB for TSV/CSV vs. ~14 GB for Parquet, and some systems decompress in place while others move the files to a separate directory). That is effectively a hidden violation of the benchmark rules: cold-run prep cost ends up depending on which input format the system happened to use, not on the system itself.

Fix: in `benchmark.sh` of every system that ingests the dataset into its own storage format, delete the downloaded source files (`.csv`, `.tsv`, `.parquet`, `.json.gz`; both compressed and uncompressed forms) immediately after the load step (and after any `data_size` measurement that depends on those files) and before `run.sh` is invoked. The unlinked files' dirty pages are dropped by the kernel without being flushed to disk, so the subsequent `sync` covers only the database's own writes.

Systems that query the Parquet/CSV/TSV file directly as their storage (clickhouse-parquet, duckdb-parquet, datafusion, polars, sail, etc.) are intentionally NOT modified: those files ARE the data and must remain.

Skipped: locustdb (script panics during load and never reaches `run.sh`), mongodb (uses `run.js`, not `run.sh`).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
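As a concrete illustration of the fix, here is a minimal sketch of what an ingesting system's `benchmark.sh` looks like with the deletion in place. The ingest command (`some-db-client`) and the `data_size` measurement are illustrative placeholders, not taken from any particular system in the PR:

```bash
#!/bin/bash

# Download and decompress the source dataset.
wget --continue 'https://datasets.clickhouse.com/hits_compatible/hits.tsv.gz'
gzip -d -f hits.tsv.gz

# Load into the database's own storage format
# (some-db-client is a placeholder for the system-specific command).
some-db-client --query "INSERT INTO hits FORMAT TSV" < hits.tsv

# If data_size depends on the source files, measure it BEFORE deleting.
du -b hits.tsv

# Delete the source files. Unlinking lets the kernel drop their dirty
# pages without writing them back, so the sync inside run.sh only has
# to flush the database's own data.
rm -f hits.tsv hits.tsv.gz

./run.sh
```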
I don't understand the problem. So the source file (e.g. `.tsv`) was ingested, now the page cache is flushed. How does that affect cold runtimes?
The `sync` command will flush the database files and all other unrelated files in the filesystem.
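The unlink-instead-of-flush behavior is easy to observe directly. A sketch, assuming a Linux machine with a few GB of free disk; the file name is arbitrary and the exact numbers depend on writeback settings:

```bash
# Create a few GB of dirty pages that have nothing to do with any database.
dd if=/dev/zero of=unrelated.bin bs=1M count=4096
grep Dirty /proc/meminfo          # Dirty is now large

# Unlinking the file lets the kernel discard its dirty pages
# without ever writing them out to disk...
rm unrelated.bin
grep Dirty /proc/meminfo          # Dirty drops without disk writeback

# ...so the subsequent sync has almost nothing left to flush.
time sync
```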
Summary
- Most systems run `sync && echo 3 > /proc/sys/vm/drop_caches` at the start of `run.sh` to prepare for a cold first run. `sync` also flushes any dirty pages of the source dataset files (`hits.tsv`, `hits.csv`, `hits.parquet`, `hits.json.gz`, etc.) that were downloaded and loaded into the DB but are no longer needed.
- In `benchmark.sh` of every system that ingests the dataset into its own storage format, delete the downloaded source files immediately after the load step (and after any `data_size` measurement that depends on them) and before `./run.sh` is invoked. The unlinked files' dirty pages are dropped by the kernel without being flushed, so the subsequent `sync` covers only the database's own writes.
- Skipped: `locustdb` (script panics during load and never reaches `run.sh`), `mongodb` (uses `run.js`, not `run.sh`).
- 33 systems modified: byconity, cedardb, chdb, citus, clickhouse, cloudberry, cockroachdb, cratedb, databend, doris, druid, duckdb, duckdb-vortex, elasticsearch, greenplum, heavyai, hologres, hyper, infobright, kinetica, mariadb, mariadb-columnstore, monetdb, mysql, mysql-myisam, oxla, pg_duckdb, pg_duckdb-indexed, pgpro_tam, pinot, selectdb, starrocks, umbra.
Test plan
- `./run.sh` (`run.sh` only references the database state, not the source files, in modified systems).
- `data_size` output is unchanged for systems where it's measured after `./run.sh`; those measurements read internal storage, not the deleted source.

🤖 Generated with Claude Code
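One way to script the first check in the test plan (a sketch, not taken from the PR; assumes the ClickBench repo layout where each system lives in its own directory, and the system list here is an illustrative subset):

```bash
# Confirm run.sh in each modified system never references the
# deleted source files.
for sys in clickhouse duckdb monetdb mysql; do
    if grep -qE 'hits\.(tsv|csv|parquet|json)' "$sys/run.sh"; then
        echo "$sys: run.sh still references source files" >&2
    else
        echo "$sys: OK"
    fi
done
```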