
Backups and Restore

HitKeep keeps its operational story simple, but 2.0.0 introduces an important distinction:

  • single-tenant installs can still feel like “one database”
  • multiteam installs store analytics in multiple DuckDB files under one data-path

That means your backup strategy must follow the real storage boundary, not an older “copy one hitkeep.db file” assumption.

If you only use the default tenant, your live data is primarily:

  • the shared control-plane database at {data-path}/hitkeep.db
  • your archive directory if retention archiving is enabled

Once you use non-default teams, the live data footprint becomes:

  • the shared control-plane database at {data-path}/hitkeep.db
  • one analytics database per non-default team at {data-path}/tenants/{team_id}/hitkeep.db
  • your archive directory if retention archiving is enabled

That is why the safe rule is:

In multiteam installs, back up the whole data-path tree, not only hitkeep.db.
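As a quick sanity check, you can enumerate every live database under the data-path before deciding what to back up. The sketch below mocks the layout described above in a temporary directory (the team IDs are made up) so it runs anywhere; on a real install, point find at your actual data-path instead:

```shell
# Mock a multiteam data-path with the layout described above.
data=$(mktemp -d)
mkdir -p "$data/tenants/7" "$data/tenants/12"
touch "$data/hitkeep.db" "$data/tenants/7/hitkeep.db" "$data/tenants/12/hitkeep.db"

# One shared control-plane database plus one analytics database per
# non-default team -- every one of these must be part of the backup.
find "$data" -name 'hitkeep.db' | sort
```

If that listing returns more than one file, copying a single hitkeep.db is not a backup.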

HitKeep can export live databases automatically using DuckDB’s EXPORT DATABASE. Scheduled backups are enabled through environment variables:

export HITKEEP_DATA_PATH=/var/lib/hitkeep/data
export HITKEEP_BACKUP_PATH=/var/lib/hitkeep/backups
export HITKEEP_BACKUP_INTERVAL=60
export HITKEEP_BACKUP_RETENTION=24
./hitkeep

Built-in backups include:

  • the shared database snapshot under shared/{timestamp}/
  • each non-default tenant snapshot under tenants/{team_id}/{timestamp}/

Use this when you want consistent application-level snapshots without relying on filesystem-level tooling.
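The snapshot layout is easy to verify after the first backup interval has passed. This sketch mocks a backup tree with one shared snapshot and one tenant snapshot (the timestamp and team ID are illustrative), then lists the timestamped snapshot directories the way you would on a real backup path:

```shell
# Mock a backup tree in the shape described above.
backups=$(mktemp -d)
mkdir -p "$backups/shared/2026-03-08T120000Z" \
         "$backups/tenants/7/2026-03-08T120000Z"

# Each timestamped directory holds one EXPORT DATABASE snapshot.
find "$backups" -type d -name '2026-*' | sort
```

On a live install, run the same find against $HITKEEP_BACKUP_PATH and confirm you see one shared entry plus one entry per non-default team.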

Restore is an offline operation:

./hitkeep recover restore-backup \
  -from /var/lib/hitkeep/backups \
  -snapshot 2026-03-08T120000Z \
  -db /var/lib/hitkeep/data/hitkeep.db \
  -data-path /var/lib/hitkeep/data \
  -yes

For S3-backed snapshots:

./hitkeep recover restore-backup \
  -from s3://my-bucket/hitkeep/backups \
  -snapshot 2026-03-08T120000Z \
  -yes

The restore flow in 2.0.0 is intentionally WAL-safe:

  1. Existing database files are moved aside as .pre-restore.{timestamp} safety copies.
  2. HitKeep imports the snapshot into a temporary DuckDB file.
  3. It checkpoints and closes that temporary database.
  4. It refuses to promote the restore if the temporary database still has a .wal.
  5. Only then is the restored database renamed into place.

That means hitkeep recover restore-backup itself should not leave your restored database dependent on replaying a leftover WAL.
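You can make that guarantee observable yourself: before starting HitKeep after a restore, scan the data-path for leftover .wal files. The sketch below mocks a freshly restored tree (paths follow this page; the team ID is illustrative) and confirms it is WAL-free:

```shell
# Mock a freshly restored data-path.
data=$(mktemp -d)
mkdir -p "$data/tenants/7"
touch "$data/hitkeep.db" "$data/tenants/7/hitkeep.db"

# A clean restore should leave no *.wal next to any database file.
if [ -z "$(find "$data" -name '*.wal')" ]; then
  echo "no leftover WAL files"
else
  echo "WARNING: leftover WAL found" >&2
fi
```

On a real install, run the find against /var/lib/hitkeep/data (or wherever your data-path lives) before the first post-restore startup.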

Built-in backups are the easiest supported option, but external tooling is still valid.

# Copy the full live data tree
rsync -az /var/lib/hitkeep/data/ backup-host:/backups/hitkeep/data/
# Copy retention archives if you use them
rsync -az /var/lib/hitkeep/archive/ backup-host:/backups/hitkeep/archive/

Or with object storage:

rclone sync /var/lib/hitkeep/data/ remote:my-bucket/hitkeep/data/
rclone sync /var/lib/hitkeep/archive/ remote:my-bucket/hitkeep/archive/

For multiteam installs, do not back up only the shared database:

cp /var/lib/hitkeep/data/hitkeep.db /backups/

That copies only the shared control plane and misses tenant-local analytics databases.

After recover restore-backup finishes:

  1. Start HitKeep normally.
  2. Confirm login works.
  3. Open the dashboard for a site in the default tenant.
  4. Open at least one site from a non-default team if you use teams.
  5. Check that goals, funnels, ecommerce, and team-specific analytics still render.
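Steps 2–5 are manual UI checks, but you can precede step 1 with a quick file-level check that every expected database came back. The layout below is mocked so the sketch is self-contained; substitute your real data-path and team IDs:

```shell
# Mock a restored data-path (illustrative team ID).
data=$(mktemp -d)
mkdir -p "$data/tenants/7"
touch "$data/hitkeep.db" "$data/tenants/7/hitkeep.db"

# Confirm the shared database and each tenant database are in place.
for db in "$data/hitkeep.db" "$data"/tenants/*/hitkeep.db; do
  [ -f "$db" ] && echo "present: ${db#$data/}"
done
```

If any tenant database is missing here, stop and re-run the restore before letting users back in.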

The next normal startup may create a new, valid DuckDB .wal during runtime. That is expected. The thing to avoid is a restore that only works if an old or partial WAL is replayed.