The server is throwing errors. df -h shows / at 100%. Services are failing to write logs, databases are crashing, uploads are silently broken. You need to find the cause fast.

Here's how I work through it.

Step 1: confirm which partition is full

df -h

Check all mount points — it might not be /. Maybe /var is on its own partition and that's what's full. Fixing the wrong place wastes time.
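
If you're not sure which partition a given directory actually lives on, point df at the path and it reports the filesystem that contains it (the path here is just an example):

df -h /var/lib/docker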

Step 2: find the big directories fast

sudo du -sh /* 2>/dev/null | sort -rh | head -20

This sizes every top-level directory, sorts by size descending, and shows the top 20. The 2>/dev/null hides permission errors so the output stays clean. Expect it to take 30–60 seconds on a loaded system.

Drill into whichever directory is biggest:

sudo du -sh /var/* 2>/dev/null | sort -rh | head -20
sudo du -sh /var/log/* 2>/dev/null | sort -rh | head -20

Keep going until you find the specific file or directory.

Step 3: use ncdu if you have time

sudo apt install -y ncdu
sudo ncdu /

ncdu is an interactive disk usage browser. Navigate with the arrow keys, press d to delete. It's much easier for exploring when you don't know where the problem is. For a 1-minute diagnosis, du + sort is faster; for a thorough cleanup, ncdu wins.
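
If other filesystems are mounted below the directory you're scanning, ncdu's -x flag keeps it on one filesystem, and -o/-f let you export a scan and browse it later (the export path is arbitrary):

# Scan one filesystem, export the result, then browse the export
sudo ncdu -x -o /tmp/rootscan /
sudo ncdu -f /tmp/rootscan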

The usual suspects

Log files

sudo du -sh /var/log/* | sort -rh | head -10

Nginx, Apache, MySQL, and application logs can grow without bounds if log rotation isn't configured. A busy access log can hit 10GB+ in a week.

# Force logrotate to run now
sudo logrotate -f /etc/logrotate.conf
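
If an application writes its own log file and nothing rotates it, a drop-in under /etc/logrotate.d/ fixes that permanently. A minimal sketch, assuming a hypothetical app logging to /var/log/myapp/app.log:

# /etc/logrotate.d/myapp (hypothetical path and app)
/var/log/myapp/app.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}

copytruncate rotates without requiring the app to reopen its log file, at the cost of possibly losing a few lines written during the copy; if the app supports a reload signal, a postrotate script is cleaner.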

systemd journal

sudo journalctl --disk-usage

By default the journal can use up to 10% of the filesystem it lives on, capped at 4GB. On a server that's been running for a year, this can still be several gigabytes.

# Keep only last 7 days
sudo journalctl --vacuum-time=7d

# Or limit by size
sudo journalctl --vacuum-size=500M

Make the limit permanent in /etc/systemd/journald.conf:

# In the [Journal] section of /etc/systemd/journald.conf
SystemMaxUse=500M
MaxRetentionSec=7day

# Apply the new limits
sudo systemctl restart systemd-journald

Docker

docker system df

Shows disk usage by images, containers, volumes, and build cache. Old images from previous deploys, stopped containers, and build cache accumulate fast.

# Remove everything unused (stopped containers, dangling images, unused networks)
docker system prune

# Include unused volumes (careful — make sure they're actually unused)
docker system prune --volumes

# Remove unused images older than 30 days (720 hours)
docker image prune -a --filter "until=720h"
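
Before pruning volumes, it's worth listing the dangling ones to see what would actually be removed:

# Volumes not referenced by any container
docker volume ls -f dangling=true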

Docker container logs at /var/lib/docker/containers/<id>/<id>-json.log have no size limit unless you configured log rotation. A chatty container can write gigabytes. Check:

sudo du -sh /var/lib/docker/containers/*/  2>/dev/null | sort -rh | head -5
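
To cap container logs going forward, set log rotation for the default json-file driver in /etc/docker/daemon.json. A minimal sketch; the sizes are just examples, and it only applies to containers created after the daemon restarts:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

# Apply the change (note: this restarts running containers unless live-restore is enabled)
sudo systemctl restart docker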

apt cache

sudo du -sh /var/cache/apt/archives/
sudo apt clean

This can be hundreds of MB on a system that's been updated regularly without cleanup. apt clean removes the cached .deb files; they'll be re-downloaded if you ever need to reinstall something.

MySQL / PostgreSQL data directories

sudo du -sh /var/lib/mysql/
sudo du -sh /var/lib/postgresql/

MySQL's binary logs can fill a disk if they're never expired. Check:

sudo du -sh /var/lib/mysql/*bin* 2>/dev/null
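
If the binary logs are the problem and you're certain no replica still needs them, they can be purged from MySQL itself; the 7-day window here is just an example, and on modern MySQL setting binlog_expire_logs_seconds keeps this from recurring:

# Careful: only purge binlogs that no replica still needs
sudo mysql -e "PURGE BINARY LOGS BEFORE DATE_SUB(NOW(), INTERVAL 7 DAY);"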

The tricky one: deleted files still open

A process has a file open, you delete the file, but the disk space isn't freed because the file handle is still open. The file shows as (deleted) in /proc.

sudo lsof | grep deleted | sort -k7 -rn | head -20

Column 7 (lsof's SIZE/OFF column) is the file size in bytes. If you see large deleted files, the fix is to restart the process holding them open:

# Find the PID from the lsof output, then:
sudo systemctl restart servicename

This comes up most often with log files that were rotated but the old file handle was never closed. Nginx reopens its logs on SIGUSR1, and for MySQL mysqladmin flush-logs does the same job, so neither needs a full restart; most other services do.
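
If you can't restart the service right away, the space can usually be reclaimed by truncating the deleted file through /proc, using the PID and the numeric part of the FD column from the lsof output. This empties a file the process is still writing, so only do it to logs you can afford to lose:

# <PID> and <FD> are placeholders taken from the lsof output above
sudo truncate -s 0 /proc/<PID>/fd/<FD>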

Temp files

sudo du -sh /tmp /var/tmp

Usually not the cause but worth a look. Failed uploads, build artifacts, and extracted archives sometimes end up here and never get cleaned.

sudo find /tmp -type f -atime +7 -delete
sudo find /var/tmp -type f -atime +30 -delete

Find large files directly

Sometimes you just want to find every file over 100MB regardless of location:

sudo find / -xdev -type f -size +100M -exec du -h {} + 2>/dev/null | sort -rh

-xdev stays on the current filesystem — doesn't cross into /proc, /sys, or other mounts.

After you've freed space: prevent recurrence

  • Configure log rotation for any app that writes its own logs
  • Set journal size limits in journald.conf
  • Set Docker log rotation in /etc/docker/daemon.json
  • Set up a monitoring alert at 80% disk usage so you don't find out at 100% (a minimal sketch follows below)
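
A minimal sketch of that last bullet, assuming cron is available and a working mail command; the recipient address is a placeholder, so swap in whatever alerting you actually use:

#!/bin/sh
# Warn when any real filesystem crosses 80% usage; run this hourly from cron
THRESHOLD=80
df --output=pcent,target -x tmpfs -x devtmpfs | tail -n +2 | \
while read pcent target; do
    usage=${pcent%\%}
    if [ "$usage" -ge "$THRESHOLD" ]; then
        # Placeholder recipient; replace with your real alert channel
        echo "$target is at $pcent on $(hostname)" | mail -s "Disk usage alert" admin@example.com
    fi
done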

A disk that hits 100% once will hit it again if you just delete files and move on without understanding why they accumulated.
