If you just run apt install docker.io or let Ubuntu snap you into the Snap version, you'll get a Docker that's slightly behind, has different file paths, and will confuse you when half the tutorials online don't match your setup.
Install from the official Docker repo. It takes two minutes more and you'll never think about it again.
Remove anything that's already there
Clean slate first:
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do
  sudo apt-get remove -y "$pkg" 2>/dev/null
done
If you get "package not found" errors, that's fine — it means they weren't installed.
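If you want to double-check that nothing Docker-related is left behind, a quick look at the package list works:
# List any remaining Docker-related packages (empty output is what you want)
dpkg -l | grep -iE 'docker|containerd|runc'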
Add the Docker GPG key and repo
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
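Before installing, you can confirm apt now prefers Docker's repo over Ubuntu's:
apt-cache policy docker-ce
# The candidate version should come from download.docker.com, not an Ubuntu mirror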
Install Docker Engine
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Note the package names: docker-ce, not docker.io. And docker-compose-plugin gives you docker compose (with a space), the modern version. The old docker-compose (with a dash) is deprecated; stop using it.
Run Docker without sudo
By default Docker requires root. Add your user to the docker group:
sudo usermod -aG docker $USER
newgrp docker
The newgrp docker command starts a subshell with the new group active, so you don't have to log out. New SSH sessions pick it up automatically.
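To confirm it took effect:
groups        # should include "docker"
docker ps     # should now work without sudo (an empty list is fine)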
Verify it works
docker run hello-world
You should see "Hello from Docker!" — if you do, you're done.
docker --version
docker compose version
Keep Docker data off your root partition
Docker stores everything under /var/lib/docker by default. On a server where / is small and you have a separate data disk mounted at /data, this will fill your root partition silently.
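To see how much space Docker is already taking on the root partition:
df -h /
sudo du -sh /var/lib/docker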
Move it before you pull any real images:
sudo systemctl stop docker
sudo nano /etc/docker/daemon.json
{
  "data-root": "/data/docker"
}
sudo mkdir -p /data/docker
sudo rsync -aP /var/lib/docker/ /data/docker/
sudo rm -rf /var/lib/docker
sudo systemctl start docker
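Once the daemon is back up, confirm it's using the new location:
docker info | grep "Docker Root Dir"   # should show /data/docker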
Enable log rotation
Containers write to JSON log files that have no size limit by default. Give it six months and you'll find a 20GB /var/lib/docker/containers/<id>/<id>-json.log.
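To check whether you already have the problem (adjust the path if you've moved data-root to /data/docker):
sudo du -sh /var/lib/docker/containers/*/*-json.log 2>/dev/null | sort -h | tail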
In /etc/docker/daemon.json:
{
  "data-root": "/data/docker",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
sudo systemctl restart docker
This caps logs at 30MB per container (three files of 10MB each). Existing containers need to be recreated to pick up the new log settings; the change doesn't apply retroactively.
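One way to check what an existing container actually picked up (substitute your container's name or ID):
docker inspect -f '{{.HostConfig.LogConfig}}' <container>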
Useful one-liners to know from day one
# List running containers
docker ps
# List all containers including stopped ones
docker ps -a
# Pull and run nginx for testing
docker run -d -p 8080:80 --name test-nginx nginx
# Follow container logs
docker logs -f test-nginx
# Get a shell in a running container
docker exec -it test-nginx bash
# Stop and remove
docker stop test-nginx && docker rm test-nginx
# Remove all stopped containers
docker container prune
# Disk usage by images/containers/volumes
docker system df
Docker Compose: your actual workflow
You'll rarely run docker run manually in production. Docker Compose is how you define services properly. Create a docker-compose.yml:
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./html:/usr/share/nginx/html:ro
    restart: unless-stopped
docker compose up -d # start in background
docker compose logs -f # tail logs
docker compose down # stop and remove containers
restart: unless-stopped means the container comes back after a reboot, but stays down if you manually stopped it. That's what you want 99% of the time.
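For containers started with plain docker run, you can apply or check the same policy after the fact; using the test-nginx container from earlier as an example:
docker update --restart unless-stopped test-nginx
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' test-nginx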
What I actually run in Docker
Ghost (this blog), Nginx reverse proxy in front of it, a few monitoring containers. Anything stateful gets volumes mapped to a specific path so I know exactly where data lives. No guessing, no digging through /var/lib/docker/volumes/.
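As a rough sketch of what that looks like in Compose, using a Ghost service and the /data disk from earlier (the image tag and internal content path are illustrative; check the official Ghost image docs for your version):
services:
  ghost:
    image: ghost:5-alpine
    ports:
      - "2368:2368"
    volumes:
      # Bind mount to an explicit host path instead of a named volume
      - /data/ghost/content:/var/lib/ghost/content
    restart: unless-stopped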
Next step from here: put Nginx in front of your containers so you can run multiple services on port 80/443.