How to Back Up Your Self-Hosted Services Automatically

Tags: backup, selfhosting, docker, disasterrecovery

By Royce

The #1 risk of self-hosting is data loss. No SaaS vendor is handling backups for you. Here's a complete, automated backup strategy that protects everything you self-host.

The 3-2-1 Backup Rule

  • 3 copies of your data
  • 2 different storage types
  • 1 copy off-site

For self-hosting:

  1. Primary: Live data on your VPS
  2. Local backup: Compressed archives on the same VPS (different disk)
  3. Off-site: Synced to S3, another VPS, or local NAS
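The local layer of that strategy maps cleanly onto one directory tree, one subdirectory per data type. A minimal sketch (the demo path is illustrative; on a real server this would live on a second disk, e.g. /backups):

```shell
#!/bin/bash
# Illustrative local backup layout: one subdirectory per data type.
# On a real server, set BACKUP_ROOT to a mount on a second disk (e.g. /backups).
BACKUP_ROOT="${BACKUP_ROOT:-/tmp/backups-demo}"

mkdir -p "$BACKUP_ROOT"/{postgres,sqlite,mysql,volumes,config}

ls "$BACKUP_ROOT"
```

Each backup script below writes into its own subdirectory, which keeps retention rules simple and lets a single rclone sync cover everything.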

What to Back Up

| Category | What | How Often | Priority |
|---|---|---|---|
| Databases | PostgreSQL, MySQL, SQLite | Daily | Critical |
| File uploads | User files, attachments, images | Daily | Critical |
| Configuration | Docker Compose, .env files, config.toml | On change | High |
| Secrets | Encryption keys, API keys, certs | On change | Critical |
| Docker volumes | App data not in databases | Daily | Medium |
| Cron jobs | Backup scripts, scheduled tasks | On change | Low |

DO NOT back up: Docker images (re-pullable), temporary files, logs older than 7 days.
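If you archive whole directories, those exclusions can be enforced directly with tar's --exclude flags. A small self-contained demo (the directory layout is made up for illustration):

```shell
# Demo: archive an app directory while skipping logs and temp files.
# The layout here is illustrative; point APP_DIR at a real app directory.
APP_DIR=/tmp/app-demo
mkdir -p "$APP_DIR"/{data,logs,tmp}
echo "keep" > "$APP_DIR/data/db.dump"
echo "skip" > "$APP_DIR/logs/app.log"
echo "skip" > "$APP_DIR/tmp/cache.tmp"

# GNU tar matches exclude patterns against any path component,
# so 'logs' excludes the logs/ directory and everything under it.
tar czf /tmp/app-demo.tar.gz \
  --exclude='logs' \
  --exclude='tmp' \
  -C "$APP_DIR" .

tar tzf /tmp/app-demo.tar.gz
```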

Database Backup Scripts

PostgreSQL (Most common for self-hosted tools)

#!/bin/bash
# backup-postgres.sh
# -e: abort on any failure; -u: error on unset vars; pipefail: catch a
# failed pg_dump even though gzip (the last command in the pipe) succeeds.
set -euo pipefail

BACKUP_DIR="/backups/postgres"
DATE=$(date +%Y%m%d_%H%M)
mkdir -p "$BACKUP_DIR"

# Dump all databases from a shared PostgreSQL container
docker exec shared-postgres pg_dumpall -U postgres | gzip > "$BACKUP_DIR/all-$DATE.sql.gz"

# Or dump individual databases
for DB in mattermost outline plane keycloak chatwoot n8n listmonk; do
  docker exec shared-postgres pg_dump -U postgres "$DB" | gzip > "$BACKUP_DIR/$DB-$DATE.sql.gz"
done

# Remove backups older than 30 days
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +30 -delete

echo "[$(date)] PostgreSQL backup completed" >> /var/log/backups.log

SQLite (PocketBase, Uptime Kuma, Vaultwarden)

#!/bin/bash
# backup-sqlite.sh
set -euo pipefail

BACKUP_DIR="/backups/sqlite"
DATE=$(date +%Y%m%d_%H%M)
mkdir -p "$BACKUP_DIR"

# Don't `cp` a live SQLite file: a write landing mid-copy produces a torn,
# unusable backup. sqlite3's ".backup" takes a consistent online snapshot.

# Vaultwarden (CRITICAL — password vault)
docker run --rm -e DATE="$DATE" -v vw-data:/data -v "$BACKUP_DIR":/backup alpine sh -c \
  'apk add --no-cache sqlite >/dev/null && sqlite3 /data/db.sqlite3 ".backup /backup/vaultwarden-$DATE.db"'

# Uptime Kuma
docker run --rm -e DATE="$DATE" -v uptime-kuma:/data -v "$BACKUP_DIR":/backup alpine sh -c \
  'apk add --no-cache sqlite >/dev/null && sqlite3 /data/kuma.db ".backup /backup/uptime-kuma-$DATE.db"'

# PocketBase (needs sqlite3 on the host: apt install sqlite3)
sqlite3 /opt/pocketbase/pb_data/data.db ".backup $BACKUP_DIR/pocketbase-$DATE.db"

# Compress all
gzip "$BACKUP_DIR"/*-"$DATE".db

find "$BACKUP_DIR" -name "*.db.gz" -mtime +30 -delete

MySQL/MariaDB

#!/bin/bash
# backup-mysql.sh
set -euo pipefail

BACKUP_DIR="/backups/mysql"
DATE=$(date +%Y%m%d_%H%M)
mkdir -p "$BACKUP_DIR"

# Don't hardcode the root password in the script: reuse the
# MYSQL_ROOT_PASSWORD already set in the container's environment.
docker exec nextcloud-db sh -c 'exec mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" --all-databases' \
  | gzip > "$BACKUP_DIR/all-$DATE.sql.gz"

find "$BACKUP_DIR" -name "*.sql.gz" -mtime +30 -delete

File Backup Scripts

Docker Volumes

#!/bin/bash
# backup-volumes.sh
set -euo pipefail

BACKUP_DIR="/backups/volumes"
DATE=$(date +%Y%m%d)
mkdir -p "$BACKUP_DIR"

# Backup specific Docker volumes (archive name -> volume name)
declare -A VOLUMES=(
  ["nextcloud"]="nextcloud_data"
  ["mattermost"]="mattermost_data"
  ["outline"]="minio_data"
  ["chatwoot"]="chatwoot_storage"
)

for NAME in "${!VOLUMES[@]}"; do
  VOL=${VOLUMES[$NAME]}
  docker run --rm -v "$VOL":/data -v "$BACKUP_DIR":/backup alpine \
    tar czf "/backup/$NAME-$DATE.tar.gz" -C /data .
done

find "$BACKUP_DIR" -name "*.tar.gz" -mtime +14 -delete

Configuration Files

#!/bin/bash
# backup-config.sh
set -euo pipefail

BACKUP_DIR="/backups/config"
DATE=$(date +%Y%m%d)
mkdir -p "$BACKUP_DIR"

# Backup all compose files and environment configs.
# Drop any paths that don't exist on your server: tar exits non-zero
# on a missing path, which would abort the script under set -e.
tar czf "$BACKUP_DIR/configs-$DATE.tar.gz" \
  /opt/*/docker-compose.yml \
  /opt/*/.env \
  /opt/*/config.toml \
  /etc/caddy/Caddyfile \
  /etc/systemd/system/pocketbase.service

find "$BACKUP_DIR" -name "configs-*.tar.gz" -mtime +90 -delete

Off-Site Sync with rclone

Set Up rclone

# Install rclone
curl https://rclone.org/install.sh | sudo bash

# Configure remote (S3 example)
rclone config
# Name: s3backup
# Type: s3
# Provider: AWS/Wasabi/Backblaze/MinIO
# Access key, secret key, region, bucket

Sync Backups Off-Site

#!/bin/bash
# sync-offsite.sh

# Sync all local backups to S3
rclone sync /backups s3backup:my-server-backups/ \
  --transfers 4 \
  --progress \
  --log-file /var/log/rclone-backup.log

echo "[$(date)] Off-site sync completed" >> /var/log/backups.log

Recommended Off-Site Storage

| Provider | Cost | Notes |
|---|---|---|
| Backblaze B2 | $0.005/GB/month | Cheapest hot storage. 10 GB free |
| Wasabi | $0.007/GB/month | No egress fees |
| AWS S3 Glacier | $0.004/GB/month | Cheapest for archival |
| Hetzner Storage Box | €3.50/month (1 TB) | EU, SFTP/rclone |
| Another VPS | €3.30+/month | Full control |

100 GB of backups costs ~$0.50-0.70/month on Backblaze B2 or Wasabi.
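A quick sanity check on that estimate, using the per-GB rates from the table:

```shell
# Back-of-envelope monthly cost for 100 GB at the table's per-GB rates.
awk 'BEGIN { printf "Backblaze B2: $%.2f/month\n", 100 * 0.005 }'
awk 'BEGIN { printf "Wasabi:       $%.2f/month\n", 100 * 0.007 }'
```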

The Master Backup Script

#!/bin/bash
# master-backup.sh — runs all backup scripts

set -e

LOG="/var/log/backups.log"
echo "========================================" >> $LOG
echo "[$(date)] Starting full backup" >> $LOG

# 1. Database backups
/opt/scripts/backup-postgres.sh
/opt/scripts/backup-sqlite.sh

# 2. File backups
/opt/scripts/backup-volumes.sh

# 3. Config backup (weekly)
if [ "$(date +%u)" = "1" ]; then
  /opt/scripts/backup-config.sh
fi

# 4. Sync off-site
/opt/scripts/sync-offsite.sh

# 5. Health check — notify if backup succeeds
curl -s "https://status.yourdomain.com/api/push/BACKUP_TOKEN?status=up&msg=OK"

echo "[$(date)] Full backup completed" >> $LOG
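One gap worth noting: with set -e alone, a failing step aborts the master script and the success ping is simply never sent, so Uptime Kuma only alerts once the push goes missing. If you want an explicit failure ping too, an ERR trap can send status=down. A sketch (the push URL is a placeholder, as above):

```shell
#!/bin/bash
# Sketch: explicit failure pings for master-backup.sh. Under `set -e`,
# a failed step aborts silently; this ERR trap reports it immediately.
# PUSH_URL is a placeholder for your own Uptime Kuma push monitor.
set -euo pipefail

PUSH_URL="https://status.yourdomain.com/api/push/BACKUP_TOKEN"

push_status() {  # $1 = up|down, $2 = short message
  curl -s -m 10 "$PUSH_URL?status=$1&msg=$2" || true
}

trap 'push_status down "failed-at-line-$LINENO"' ERR

# ... run the individual backup scripts here, as in master-backup.sh ...

push_status up OK
```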

Schedule with Cron

# Edit crontab
crontab -e
# Daily full backup at 3 AM
0 3 * * * /opt/scripts/master-backup.sh 2>&1 | tee -a /var/log/backups.log

# Hourly database backup for critical services
0 * * * * /opt/scripts/backup-postgres.sh 2>&1 | tee -a /var/log/backups.log

Retention Policy

| Data Type | Local Retention | Off-Site Retention |
|---|---|---|
| Database dumps | 30 days | 90 days |
| File backups | 14 days | 30 days |
| Config backups | 90 days | 1 year |
| Vaultwarden | 90 days | 1 year |
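The local column maps directly onto find -mtime pruning, one rule per directory. A sketch following the directory names used in the scripts above (off-site retention is handled separately, e.g. rclone's --min-age or bucket lifecycle rules):

```shell
#!/bin/bash
# prune-local.sh: apply the local-retention column with find -mtime.
# Directory names follow the backup scripts earlier in this guide.
BACKUP_ROOT="${BACKUP_ROOT:-/backups}"

prune() {  # $1 = subdirectory, $2 = days to keep
  find "$BACKUP_ROOT/$1" -type f -mtime "+$2" -delete 2>/dev/null || true
}

prune postgres 30   # database dumps: 30 days
prune mysql    30
prune volumes  14   # file backups: 14 days
prune config   90   # config archives: 90 days

# SQLite dumps: keep Vaultwarden longer than the rest, per the table
find "$BACKUP_ROOT/sqlite" -type f ! -name 'vaultwarden-*' -mtime +30 -delete 2>/dev/null || true
find "$BACKUP_ROOT/sqlite" -type f -name 'vaultwarden-*' -mtime +90 -delete 2>/dev/null || true
```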

Testing Restores

Backups are worthless if you can't restore. Test quarterly:

# 1. Spin up a test PostgreSQL container
docker run -d --name test-restore -e POSTGRES_PASSWORD=test postgres:16-alpine
sleep 10   # give PostgreSQL a moment to initialize before restoring

# 2. Restore a backup
gunzip -c /backups/postgres/outline-20260308.sql.gz | \
  docker exec -i test-restore psql -U postgres

# 3. Verify data
docker exec test-restore psql -U postgres -c "SELECT count(*) FROM documents;"

# 4. Clean up
docker stop test-restore && docker rm test-restore
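Between full restore drills, a cheaper check you can run weekly is archive integrity: gzip -t decompresses in memory and fails on truncated or corrupt files. A sketch:

```shell
#!/bin/bash
# Integrity sweep: verify every compressed backup is readable
# without extracting anything.
BACKUP_ROOT="${BACKUP_ROOT:-/backups}"
BAD=0

while IFS= read -r -d '' f; do
  if ! gzip -t "$f" 2>/dev/null; then
    echo "CORRUPT: $f" >&2
    BAD=$((BAD + 1))
  fi
done < <(find "$BACKUP_ROOT" -name '*.gz' -print0 2>/dev/null)

echo "$BAD corrupt archive(s) found"
```

This catches truncated uploads and disk corruption, but not a dump that completed with the wrong contents, which is why the full restore drill above still matters.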

Disaster Recovery Checklist

If your server dies, here's how to recover:

  1. Provision new VPS (same specs or bigger)
  2. Install Docker and Caddy
  3. Restore config files from off-site backup
  4. Create Docker volumes
  5. Restore databases from latest dump
  6. Restore file volumes from latest archive
  7. Start Docker Compose services
  8. Update DNS to new server IP
  9. Verify all services
  10. Update backup scripts for new server

Recovery time objective: 1-2 hours with a tested recovery plan.

Monitoring Your Backups

Use Uptime Kuma push monitors:

  1. Create Push monitors for each backup script
  2. Add curl to the end of each script (as shown in master backup)
  3. If a backup doesn't push within the expected interval, you get alerted

Alert on:

  • Backup script didn't complete
  • Off-site sync failed
  • Disk space below 20%
  • Backup file size is suspiciously small (corruption)
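The last two checks can be scripted: compare the newest dump against a minimum expected size. A sketch (the function name and the threshold are illustrative; set the minimum from your real dump sizes):

```shell
#!/bin/bash
# Sketch: fail if the newest dump in a directory is missing or suspiciously
# small. check_backup_size and the 10000-byte example are illustrative.
check_backup_size() {  # $1 = backup dir, $2 = minimum bytes
  local latest size
  latest=$(ls -t "$1"/*.gz 2>/dev/null | head -n 1)
  if [ -z "$latest" ]; then
    echo "ALERT: no backups found in $1" >&2
    return 1
  fi
  size=$(stat -c%s "$latest")
  if [ "$size" -lt "$2" ]; then
    echo "ALERT: $latest is only $size bytes" >&2
    return 1
  fi
  echo "OK: $latest ($size bytes)"
}

# Example: check_backup_size /backups/postgres 10000
```

Run it after the master backup and wire its exit code into the same Uptime Kuma push pattern: push status=up on success, status=down on alert.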

Find the best self-hosting tools and guides on OSSAlt — complete deployment and backup strategies side by side.