You have a script or app that needs to run permanently, restart on crash, and start on boot. Here's how to do it properly with systemd instead of hacking it into screen or a cron @reboot.

The minimal unit file

Create /etc/systemd/system/myapp.service:

[Unit]
Description=My Application
After=network.target

[Service]
Type=simple
User=www-data
WorkingDirectory=/opt/myapp
ExecStart=/opt/myapp/myapp --config /etc/myapp/config.yml
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target

Enable and start:

sudo systemctl daemon-reload
sudo systemctl enable myapp
sudo systemctl start myapp
sudo systemctl status myapp

Type= matters more than you'd think

  • Type=simple — the ExecStart process IS the service. Use this for most apps.
  • Type=forking — the process forks and the parent exits. Old-style daemons that write a PID file. Pair with PIDFile=.
  • Type=notify — process tells systemd when it's ready via sd_notify(). Supported by PostgreSQL, MariaDB, and others. (Upstream nginx does not implement it; distro nginx units use forking.)
  • Type=oneshot — runs and exits. Add RemainAfterExit=yes if you want it to show active after completion.

Get this wrong and systemd thinks your service crashed when it's actually running fine. Most apps want simple.
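For example, a one-off setup task that other units can depend on might look like this (the unit name and script path are made up for illustration):

```ini
# /etc/systemd/system/myapp-init.service (hypothetical)
[Unit]
Description=One-time setup for My Application
Before=myapp.service

[Service]
Type=oneshot
ExecStart=/opt/myapp/bin/initialize.sh
# Stay "active" after the script exits, so systemd doesn't
# consider the unit dead once setup completes
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```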

Environment variables

Inline (fine for non-sensitive config):

[Service]
Environment="APP_ENV=production"
Environment="PORT=8080"

From a file — better for anything secret:

[Service]
EnvironmentFile=/etc/myapp/env

/etc/myapp/env:

APP_ENV=production
DB_PASSWORD=hunter2

Lock the file down so only root can read it:

sudo chmod 600 /etc/myapp/env

Don't put secrets in the unit file. It's world-readable by default.
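On newer systemd (v247+), LoadCredential= is a cleaner option still: systemd copies the secret into a private per-service directory and hands the service its path via $CREDENTIALS_DIRECTORY, so the secret never sits in the environment at all. A sketch, with an illustrative file path:

```ini
[Service]
# systemd exposes the file to the service as
# $CREDENTIALS_DIRECTORY/db_password, readable only by this unit
LoadCredential=db_password:/etc/myapp/db_password
```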

Restart policies

Restart=on-failure      # restart only on non-zero exit or signal
Restart=always          # restart no matter what, even clean exit
Restart=on-abnormal     # crash, signal, watchdog timeout only
RestartSec=10s          # wait before restarting

I use Restart=on-failure for production services. Restart=always restarts even after exit code 0, so an app that's supposed to exit on its own gets looped straight back up. (systemctl stop never triggers a restart under either policy.)

Stop infinite restart loops

[Unit]
StartLimitBurst=5
StartLimitIntervalSec=60s

If the service fails 5 times within 60 seconds, systemd gives up and marks it failed. On current systemd these directives belong in [Unit], not [Service] (older versions used StartLimitInterval= under [Service]). The defaults are 5 starts in 10 seconds, which never trips when RestartSec= spaces restarts further apart than that, so without a wider interval a misconfigured app restarts and hammers the disk with logs forever.
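Put together, a unit that restarts on failure but refuses to loop forever looks like this (on current systemd the start-limit directives belong under [Unit], while Restart= stays under [Service]):

```ini
[Unit]
Description=My Application
# Give up after 5 failed starts within 60 seconds
StartLimitBurst=5
StartLimitIntervalSec=60s

[Service]
ExecStart=/opt/myapp/myapp
Restart=on-failure
RestartSec=5s
```

Once the limit trips, the unit stays failed until you run `systemctl reset-failed myapp` and start it again.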

Dependencies

[Unit]
After=network.target postgresql.service
Requires=postgresql.service

After= controls start order only. Requires= means if PostgreSQL can't start, this unit won't either; it says nothing about order, so always pair it with After= as shown. Wants= is the soft version: try to start the dependency, but carry on if it's missing or fails.
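A soft-dependency version of the same thing: start Redis first if it's around, but come up without it (redis.service here is just an example):

```ini
[Unit]
After=network.target redis.service
# Try to start Redis before this unit, but don't fail
# this unit if Redis is missing or fails to start
Wants=redis.service
```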

Logs

sudo journalctl -u myapp -f             # follow live
sudo journalctl -u myapp -n 100         # last 100 lines
sudo journalctl -u myapp --since today

stdout and stderr from your process go to the journal automatically. No log file config needed unless you specifically want file output.

Directives I use constantly

[Service]
# Limit resources
MemoryMax=512M
CPUQuota=50%

# Ensure a runtime directory exists before starting
# (/var/run is a legacy symlink to /run)
ExecStartPre=/bin/mkdir -p /run/myapp

# How long to wait before force-killing on stop
TimeoutStopSec=30s

# Run a command after the service stops
ExecStopPost=/usr/bin/cleanup-script.sh
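The mkdir dance above has a declarative replacement: RuntimeDirectory= makes systemd create a directory under /run with the right ownership before start, and remove it on stop, no ExecStartPre= needed:

```ini
[Service]
# systemd creates /run/myapp before start, owned by User=,
# and cleans it up when the service stops
RuntimeDirectory=myapp
RuntimeDirectoryMode=0755
```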

Overriding system units without breaking updates

Never directly edit unit files in /usr/lib/systemd/system/ (or /lib/systemd/system/ on older distros) — package updates overwrite them. Use override files instead:

sudo systemctl edit nginx

Creates /etc/systemd/system/nginx.service.d/override.conf — survives updates, only contains your changes. For your own services in /etc/systemd/system/, edit directly.
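One gotcha inside the override: ExecStart= from a drop-in adds to the packaged value rather than replacing it, so for a non-oneshot service you must blank it first or systemd rejects the duplicate. A typical override.conf (the custom config path is illustrative):

```ini
# /etc/systemd/system/nginx.service.d/override.conf
[Service]
# Empty assignment clears the packaged ExecStart;
# without it systemd complains about two ExecStart lines
ExecStart=
ExecStart=/usr/sbin/nginx -c /etc/nginx/custom.conf
```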

After any change to a unit file:

sudo systemctl daemon-reload
sudo systemctl restart myapp
