Every sysadmin has a "before I set up proper backups" story. Here's how to stop having that story.
rsync is already on your server. No extra software, no subscription, no setup wizard. Just a command that copies files efficiently over SSH.
The basic command
rsync -avz /source/ user@backup-server:/destination/
-a  archive: preserves permissions, timestamps, symlinks, owner, group
-v  verbose: shows what's transferring
-z  compress during transfer (skip on fast local networks)
The trailing slash on /source/ matters. With slash: copies contents into destination. Without slash: copies the directory itself. I always use trailing slash and spell out the destination path explicitly.
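If the slash rule ever feels slippery, a quick local experiment (hypothetical /tmp paths) makes it concrete:

```shell
# Hypothetical local demo of trailing-slash behavior; adjust paths freely
mkdir -p /tmp/slash-demo/src
echo hello > /tmp/slash-demo/src/file.txt

# With trailing slash: the *contents* of src/ land directly in dest1/
rsync -a /tmp/slash-demo/src/ /tmp/slash-demo/dest1/

# Without: the src directory itself is created inside dest2/
rsync -a /tmp/slash-demo/src /tmp/slash-demo/dest2/

ls /tmp/slash-demo/dest1   # file.txt
ls /tmp/slash-demo/dest2   # src
```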
SSH key auth first
rsync over SSH only works unattended if you've set up key auth:
ssh-keygen -t ed25519 -C "backup@server1"
ssh-copy-id user@backup-server
Test it: ssh -o BatchMode=yes user@backup-server true should succeed without a password prompt (BatchMode makes it fail fast instead of hanging on a prompt, which is exactly how it would behave under cron). Fix that before automating anything.
Incremental snapshots with --link-dest
Copying everything every time wastes disk. With --link-dest, unchanged files are hard-linked from the previous backup instead of copied. You get a full snapshot each run, but disk usage is only what changed.
rsync -avz --delete --link-dest=/backups/daily.1 /var/www/ user@backup-server:/backups/daily.0/
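You can verify the hard-linking behavior locally before trusting it (hypothetical /tmp paths; note that --link-dest takes a path on the destination side, absolute or relative to the destination directory):

```shell
# Hypothetical local demo of --link-dest deduplication
mkdir -p /tmp/ld-demo/src
echo unchanged > /tmp/ld-demo/src/a.txt

# First "backup"
rsync -a /tmp/ld-demo/src/ /tmp/ld-demo/daily.1/

# Second backup, hard-linking unchanged files against the first
rsync -a --link-dest=/tmp/ld-demo/daily.1 /tmp/ld-demo/src/ /tmp/ld-demo/daily.0/

# Same inode number in both snapshots = hard link, stored on disk once
stat -c %i /tmp/ld-demo/daily.1/a.txt
stat -c %i /tmp/ld-demo/daily.0/a.txt
```

(`stat -c %i` is GNU coreutils syntax; on BSD/macOS it's `stat -f %i`.)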
Rotate snapshots before each run on the backup server:
ssh user@backup-server "
rm -rf /backups/daily.7
for i in 6 5 4 3 2 1 0; do
[ -d /backups/daily.$i ] && mv /backups/daily.$i /backups/daily.$(($i+1))
done
"
Eight daily snapshots (daily.0 through daily.7), each of which looks like a full backup, while disk usage grows only by what changed. To restore from three days ago, copy from daily.3.
Exclude what you don't need
rsync -avz --delete --exclude='*.log' --exclude='*.tmp' --exclude='node_modules/' --exclude='.git/' --exclude='__pycache__/' /var/www/ user@backup-server:/backups/web/
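A quick local sanity check (again with hypothetical /tmp paths) that the exclude patterns behave the way you expect:

```shell
# Hypothetical demo: junk files should not survive the transfer
mkdir -p /tmp/ex-demo/src/node_modules
echo keep  > /tmp/ex-demo/src/app.js
echo noise > /tmp/ex-demo/src/debug.log

rsync -a --exclude='*.log' --exclude='node_modules/' /tmp/ex-demo/src/ /tmp/ex-demo/dst/

ls /tmp/ex-demo/dst   # app.js
```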
Databases — dump first, never rsync live files
rsyncing a live database directory copies files mid-write, and what lands on the backup server is corrupted and unrestorable. Always dump first:
# MySQL/MariaDB
mysqldump --all-databases | gzip > /tmp/db-$(date +%Y%m%d).sql.gz
# PostgreSQL
sudo -u postgres pg_dumpall | gzip > /tmp/db-$(date +%Y%m%d).sql.gz
# SQLite (Ghost, etc.)
sqlite3 /var/lib/ghost/content/data/ghost.db ".backup /tmp/ghost-$(date +%Y%m%d).db"
Then rsync the dump file to the backup server and delete the local copy afterwards.
A working backup script
Save to /usr/local/bin/backup.sh:
#!/bin/bash
set -euo pipefail
BACKUP_HOST="user@backup-server"
BACKUP_BASE="/backups/$(hostname)"
DATE=$(date +%Y%m%d-%H%M%S)
LOG="/var/log/backup.log"
log() { echo "$(date '+%Y-%m-%d %H:%M:%S') $*" | tee -a "$LOG"; }
log "Starting backup"
# Dump database
mysqldump --all-databases | gzip > /tmp/db-${DATE}.sql.gz
log "Database dumped"
# Rotate snapshots on backup server
ssh "${BACKUP_HOST}" "
rm -rf ${BACKUP_BASE}/daily.7 2>/dev/null || true
for i in 6 5 4 3 2 1 0; do
[ -d ${BACKUP_BASE}/daily.\$i ] && mv ${BACKUP_BASE}/daily.\$i ${BACKUP_BASE}/daily.\$((\$i+1)) || true
done
mkdir -p ${BACKUP_BASE}/daily.0
"
# Sync files
rsync -az --delete --link-dest="${BACKUP_BASE}/daily.1" --exclude='*.log' --exclude='node_modules/' /var/www/ "${BACKUP_HOST}:${BACKUP_BASE}/daily.0/www/"
# Sync database dump
rsync -az /tmp/db-${DATE}.sql.gz "${BACKUP_HOST}:${BACKUP_BASE}/daily.0/"
# Cleanup
rm -f /tmp/db-${DATE}.sql.gz
log "Backup completed"
chmod +x /usr/local/bin/backup.sh
Schedule it:
sudo crontab -e
0 2 * * * /usr/local/bin/backup.sh >> /var/log/backup-cron.log 2>&1
The redirect matters: without it, cron mails the output to a local mailbox nobody reads, or silently drops it.
Test your restore — not optional
A backup you've never restored from is not a backup, it's a hope. Once a month, restore a random file:
rsync -avz user@backup-server:/backups/$(hostname)/daily.2/www/html/index.php /tmp/test-restore/
If that takes more than 2 minutes, your process is too complicated.
Detect when backups stop running
Cron jobs fail silently. Check backup age on the backup server:
find /backups/ -maxdepth 2 -name "daily.0" -mtime +1 -exec echo "WARNING: stale backup: {}" \;
Or add a healthcheck ping at the end of your backup script to something like healthchecks.io — it alerts you if the ping doesn't arrive.
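If you go the healthchecks.io route, one curl line appended to the end of backup.sh is enough. The UUID in the URL below is a placeholder; the service issues a real ping URL per check:

```shell
# Placeholder ping URL; healthchecks.io gives you a real one per check.
# -f: fail on HTTP errors, -sS: quiet but show errors, -m 10: 10s timeout
curl -fsS -m 10 --retry 3 https://hc-ping.com/your-check-uuid > /dev/null
```

Because the script runs with set -e, the ping only fires if every step before it succeeded, which is exactly the signal you want.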