Troubleshooting
When something isn't working, start with the built-in diagnostics:
lerd doctor # full check: podman, systemd, DNS, ports, images, config
lerd status # quick health snapshot of all running services
lerd doctor reports OK/FAIL/WARN for each check, with a hint for every failure.
Filing a bug report
If you need help on the issue tracker, run:
lerd bug-report
This writes a single plain-text file (default: ./lerd-bug-report-<timestamp>.txt) containing the full lerd doctor output, your config.yaml and sites.yaml, the state of every lerd-* systemd unit, recent journal and container logs for lerd's own infra units, listening sockets on the lerd ports, and a curated set of environment variables.
What gets filtered before it lands on disk:
- Site .env files are excluded outright.
- Home paths render as $HOME and the username as $USER.
- Site names, domains and parked-directory paths are replaced with site-1 / site1.<tld> / $PARK_1 placeholders. Pass --show-real-names to keep the raw values for local debugging.
- Logs are kept only for lerd's own infra (lerd-nginx, lerd-ui, lerd-dns, lerd-watcher, lerd-tray, etc.). Preset services (mysql, redis, meilisearch, gotenberg, …), FPM containers and per-site workers still appear in the unit-state and container tables, but their logs are dropped; they were producing repetitive request-shaped noise that didn't help triage.
- Custom services and per-site custom / FrankenPHP containers are omitted entirely, so the report doesn't expose user app identifiers.
- Nginx structured error lines have their request:/upstream:/referrer: URI fields redacted, and HTTP access lines are dropped.
Skim the file before posting (it's plain text — open it in any editor) and attach it to your GitHub issue.
Override the destination with --output, change how many log lines per service to include with --log-lines, or keep raw site names with --show-real-names:
lerd bug-report --output /tmp/report.txt --log-lines 500
lerd bug-report --show-real-names
.test domains not resolving
First, confirm DNS is actually meant to be managed by lerd. If lerd dns:check reports DNS managed externally, you opted out of dnsmasq during install and your sites should be on *.localhost rather than *.test. See DNS for switching modes.
Otherwise, the fastest way to find the broken rung is lerd doctor. The DNS section walks the chain top to bottom and surfaces exactly where it breaks, with a hint per failure:
[DNS]
DNS TLD (.test) OK
lerd-dns container running
dnsmasq config address=/.test/127.0.0.1, port=5300
port 5300 listening 127.0.0.1:5300
dig @127.0.0.1 -p 5300 127.0.0.1
resolver hookup NetworkManager dispatcher: /etc/NetworkManager/dispatcher.d/99-lerd-dns
interface routes .test to 5300 enp14s0
system DNS lookup 127.0.0.1
The chain in order:
| Rung | What it checks | If it fails |
|---|---|---|
| lerd-dns container | The dnsmasq container is running. | lerd start (or podman logs lerd-dns to see why it crashed). |
| dnsmasq config | ~/.local/share/lerd/dnsmasq/lerd.conf exists with port=5300 and address=/.<tld>/. | lerd start regenerates the config from your registered TLD. |
| port 5300 listening | TCP/UDP 5300 is reachable on 127.0.0.1. | Another process owns the port. Find it with ss -tlnp sport = :5300 on Linux, or lsof -nP -iTCP:5300 -sTCP:LISTEN on macOS. |
| dig @127.0.0.1 -p 5300 | A direct query at port 5300 returns 127.0.0.1 for lerd-probe.<tld>. | dnsmasq is up but its config drifted. systemctl --user restart lerd-dns. |
| resolver hookup | The NetworkManager dispatcher script or systemd-resolved drop-in is installed. | Rerun lerd install. |
| interface routes .test to 5300 | resolvectl status shows 127.0.0.1:5300 and ~<tld> on the active interface. | sudo systemctl restart NetworkManager, or set the routing manually with sudo resolvectl domain <iface> ~test ~. |
| system DNS lookup | host lerd-probe.test (the system resolver) returns 127.0.0.1. | The drop-in is installed but resolved isn't honouring it. Check whether cloud-init or another tool wrote a higher-priority resolver config. Common on EC2 / cloud images. |
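To probe the two ends of the chain by hand (substitute your own TLD if it isn't .test):
dig @127.0.0.1 -p 5300 lerd-probe.test +short # direct query to dnsmasq, should print 127.0.0.1
host lerd-probe.test # system resolver, should return the same answer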
You can also call this programmatically over MCP via the dns_diagnose tool, useful for AI-driven troubleshooting:
echo '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"dns_diagnose","arguments":{}}}' | lerd mcp
The response includes a steps array with a status (ok / fail / warn / skip) and hint per rung, plus a first_failure index so an LLM can jump straight to the broken layer.
Nginx not serving a site
Check that nginx and the PHP-FPM container are running, then inspect the generated vhost:
lerd status # check nginx and FPM are running
podman logs lerd-nginx # nginx error log
cat ~/.local/share/lerd/nginx/conf.d/my-app.test.conf # check generated vhost
My custom nginx directive disappeared after an update
Don't edit ~/.local/share/lerd/nginx/conf.d/*.conf directly. Lerd regenerates those files on lerd link, lerd secure, lerd site rebuild, and every lerd install (which lerd update re-execs). Drop your snippet in ~/.local/share/lerd/nginx/custom.d/{domain}.conf instead — the generated vhost ends with an include for that file, and lerd never writes into custom.d/. See Nginx Overrides for examples.
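A minimal sketch, assuming the my-app.test site used elsewhere on this page; the directive itself is only an illustration:
mkdir -p ~/.local/share/lerd/nginx/custom.d
printf 'client_max_body_size 100m;\n' > ~/.local/share/lerd/nginx/custom.d/my-app.test.conf
systemctl --user restart lerd-nginx # reload nginx so the included snippet takes effect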
PHP-FPM container not running
Check the systemd unit status and logs:
systemctl --user status lerd-php84-fpm
systemctl --user start lerd-php84-fpm
podman logs lerd-php84-fpm
If the image is missing (e.g. after podman rmi):
lerd php:rebuild
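To confirm whether the image is actually present, list what podman has locally (the repository and tag names are whatever lerd built, so match loosely):
podman images | grep -i php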
podman exec fails with "chdir: No such file or directory"
This happens when your project is outside your home directory (e.g. /var/www/, /opt/projects/). The PHP-FPM and nginx containers only mount $HOME by default.
Lerd handles this automatically: when you lerd link, lerd park, or run any exec command (lerd php, composer, laravel new) from an outside path, lerd adds the volume mount and restarts the affected containers.
If you see this error on an older lerd version, update to the latest and re-link the site:
lerd update
lerd unlink && lerd link
To verify the mounts are in place:
grep Volume ~/.config/containers/systemd/lerd-nginx.container
grep Volume ~/.config/containers/systemd/lerd-php*-fpm.container
You should see your project path listed alongside the %h:%h mount.
Permission denied on port 80/443
Rootless Podman cannot bind to ports below 1024 by default. Allow it:
sudo sysctl -w net.ipv4.ip_unprivileged_port_start=80
# Make permanent:
echo 'net.ipv4.ip_unprivileged_port_start=80' | sudo tee /etc/sysctl.d/99-lerd.conf
lerd install sets this automatically, but it may need to be re-applied after a kernel update.
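To confirm the setting took effect:
sysctl net.ipv4.ip_unprivileged_port_start # should print 80 (or lower)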
Watcher service not running
The watcher monitors parked directories, site config files, git worktrees, and DNS health. If sites aren't being auto-registered or queue workers aren't restarting on .env changes:
lerd status # shows watcher running/stopped
systemctl --user start lerd-watcher # start it from the terminal
# or use the Start button in the UI under System > Watcher
To see what the watcher is doing:
journalctl --user -u lerd-watcher -f
# or open the live log stream in the UI under System > Watcher
For verbose output (DEBUG level), set LERD_DEBUG=1 in the service environment:
systemctl --user edit lerd-watcher
# Add:
# [Service]
# Environment=LERD_DEBUG=1
systemctl --user restart lerd-watcher
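If you prefer a non-interactive route, the same override can be written as a drop-in file; this is just the manual equivalent of systemctl --user edit:
mkdir -p ~/.config/systemd/user/lerd-watcher.service.d
printf '[Service]\nEnvironment=LERD_DEBUG=1\n' > ~/.config/systemd/user/lerd-watcher.service.d/debug.conf
systemctl --user daemon-reload
systemctl --user restart lerd-watcher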
HTTPS certificate warning in browser
The mkcert CA must be installed in your browser's trust store. Ensure certutil / nss-tools is installed, then re-run lerd install:
- Arch: sudo pacman -S nss
- Debian/Ubuntu: sudo apt install libnss3-tools
- Fedora: sudo dnf install nss-tools
After installing the package, run lerd install again to register the CA.
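To confirm the CA landed in the shared NSS store that Chromium-based browsers read (Firefox keeps per-profile stores, which mkcert also populates when certutil is available):
certutil -d sql:$HOME/.pki/nssdb -L | grep -i mkcert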
PHP image build is slow on first run
lerd normally pulls a pre-built base image from ghcr.io and finishes in ~30 seconds. If you see it fall back to a local build instead, the most common cause is being logged into ghcr.io with expired or unrelated credentials; the registry rejects the authenticated request even though the image is public.
lerd handles this automatically since v1.3.4 by always pulling anonymously. If you are on an older version, running podman logout ghcr.io before the build will fix it.
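To check whether stale credentials are in play, and to drop them:
podman login --get-login ghcr.io # prints a username only if you are logged in
podman logout ghcr.io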
Nginx fails to start (missing certificates)
lerd start automatically detects SSL vhosts that reference missing certificate files and repairs them before starting nginx:
- Registered sites: the site is switched back to HTTP and the vhost is regenerated. The registry is updated (Secured = false).
- Orphan SSL vhosts: configs left behind by unlinked sites with missing certs are removed.
Repaired items are printed as warnings during startup:
WARN: missing TLS certificate for myapp.test, switched to HTTP
To re-enable HTTPS after the automatic repair, run lerd secure <name>.
If nginx still fails to start, check the logs:
journalctl --user -u lerd-nginx -n 30 --no-pager
Port conflicts on lerd start
lerd start checks for port conflicts before starting containers. If another process is already using a required port, you'll see a warning:
Port conflicts detected:
WARN: port 80 (nginx HTTP) already in use, may fail to start (check: ss -tlnp sport = :80)
Common culprits are Apache, another nginx instance, or a previously running lerd that wasn't stopped cleanly. Find and stop the conflicting process:
# Linux
ss -tlnp sport = :80
# macOS
lsof -nP -iTCP:80 -sTCP:LISTEN
The exact command lerd suggests in lerd doctor and lerd start output is already platform-correct, so you can copy it from there.
lerd doctor also checks for port conflicts as part of its full diagnostic, and adds a dedicated [Stopped service ports] section that flags installed services whose host port is already bound by another process. The same warning is shown next to the inactive status pill in the web UI, so you can spot the conflict without running anything. Most often the culprit is a system-installed service (Postgres, MySQL, Redis) listening on its default port; stop the conflicting process and the warning clears on the next snapshot refresh.
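For example, if a distro-packaged Postgres owns the port (the exact unit name varies by distro):
sudo systemctl stop postgresql
sudo systemctl disable postgresql # optional: keeps it from claiming the port again at boot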
Workers missing after reinstall
lerd uninstall deletes worker units and service quadlets. After reinstalling, lerd start automatically restores them from the workers list saved in each site's .lerd.yaml. If .lerd.yaml does not exist or was never committed, start the workers again manually (lerd queue:start, etc.).
To check what was restored:
lerd status # shows all active workers and services
Workers failing or crash-looping
Check lerd status; its Workers section lists all active, restarting, or failed workers. In the web UI, failing workers show a pulsing red toggle and a ! on their log tab.
To inspect the error:
journalctl --user -u lerd-queue-my-app -f # or lerd-horizon-my-app, lerd-schedule-my-appCommon causes:
- Missing Redis when QUEUE_CONNECTION=redis, start it with lerd service start redis (a quick check is sketched below)
- Missing dependencies after a fresh clone, run lerd setup to install them
- Bad .env values, run lerd env to reset service connection settings
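For the first cause, a quick check from the project root (standard Laravel .env keys; adjust if your app uses different ones):
grep -E '^(QUEUE_CONNECTION|REDIS_HOST)=' .env
lerd service start redis # only needed when the connection really is redis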
When you unlink a site, crash-looping workers are automatically detected and stopped.
Error: NetworkUpdate is not supported for backend CNI: invalid argument
Your system is likely configured to use the older CNI backend, which lacks support for the requested network operation. Edit or create the Podman configuration file at /etc/containers/containers.conf and add or modify the network_backend setting to netavark:
[network]
network_backend = "netavark"
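You can confirm which backend Podman is actually using before and after the change:
podman info --format '{{.Host.NetworkBackend}}' # prints netavark once the switch is active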
To ensure a clean switch and recreate the networks with the new backend, reset the Podman storage. Warning: this will wipe all existing containers, pods, and networks:
podman system reset
Error: unable to parse ip fe80::...%18 specified in AddDNSServer: invalid argument
Your host's DNS configuration includes a zoned link-local IPv6 nameserver, typically advertised by your router via SLAAC + RDNSS. The zone identifier (%18 is a kernel interface index) is meaningless inside a container's network namespace, and netavark refuses to accept it.
Lerd 1.18+ filters these addresses automatically before handing them to podman. If you're still on 1.17 or older, upgrade with lerd update and rerun lerd install. The filter is conservative: only zoned link-local (fe80::...%iface) addresses are dropped; globally routable IPv6 nameservers (e.g. 2606:4700:4700::1111) are preserved.
When filtering empties the entire DNS list, lerd falls back to pasta's standard forwarder (169.254.1.1), which bridges into the host's resolver and preserves .test routing.
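To see whether your host is in fact advertising a link-local nameserver:
grep fe80 /etc/resolv.conf
resolvectl dns # per-link DNS servers, when systemd-resolved manages the host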
Containers can resolve .test over IPv4 but not over IPv6
Lerd 1.18+ creates the lerd podman network as dual-stack (v4 + v6) and writes both A and AAAA records for .test domains. If you upgraded from an older version, the existing v4-only lerd network is migrated automatically the next time you run lerd install: attached containers stop, the network is recreated with the fd00:1e7d::/64 ULA prefix, the previous DNS server list is restored, and the containers restart. Quick check:
podman network inspect lerd --format '{{.Subnets}}'
# expect both an IPv4 subnet and one starting with fd00:1e7d::
If the v6 subnet is missing, run lerd install once to migrate. To verify resolution from inside a container:
podman run --rm --network lerd alpine sh -c 'nslookup laravel.test; nslookup -type=AAAA laravel.test'
Services fail to start with "aardvark-dns failed to bind [fd00:1e7d::1]:53"
Symptom: after lerd install, a subset of service containers (commonly lerd-nginx, lerd-postgres, lerd-meilisearch) fail to start. Journal shows:
Error: netavark: error while applying dns entries: IO error: aardvark-dns failed to start
Error starting server failed to bind udp listener on [fd00:1e7d::1]:53:
IO error: Cannot assign requested address (os error 99)
Cause: the host advertises IPv6 in the kernel but has no routable v6 address on any interface (only ::1 and fe80::), so netavark can't hold the ULA gateway on the rootless bridge, and aardvark-dns bind fails with EADDRNOTAVAIL. Typical in headless QEMU/KVM VMs and networks without v6 DHCP.
Lerd 1.18+ detects this on every lerd install by reading /proc/net/if_inet6 (any non-loopback, non-link-local v6 address counts as usable) and falls back to a v4-only lerd network. An existing dual-stack network on a v6-less host is recreated as v4-only automatically. Force it:
lerd install
# look for: "Recreated lerd network as v4-only (host has no usable IPv6)."
If the host later gains v6 connectivity, the next lerd install will recreate the network as dual-stack again.
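To check the same signal yourself:
ip -6 addr show scope global # empty output means no routable IPv6 on the host
cat /proc/net/if_inet6 # the raw table lerd reads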
If you'd rather skip the dual-stack code path entirely, even on a v6-capable host, opt out:
lerd install --no-ipv6
# or persistently via shell rc:
export LERD_DISABLE_IPV6=1
Either path writes ~/.local/share/lerd/ipv6-probe-failed-lerd, which EnsureNetwork honors on every code path (initial create, migration, recreate). To re-enable dual-stack, delete that marker file and re-run lerd install.
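In command form, re-enabling dual-stack later looks like:
rm ~/.local/share/lerd/ipv6-probe-failed-lerd
lerd install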
Every DNS lookup inside a lerd container stalls ~5 seconds
Symptom: pages that hit the database or any container-to-container hostname feel slow, and time dig <anything> @<container> takes roughly five seconds before returning an answer. The network looks fine in podman network inspect lerd (both IPv4 and IPv6 subnets present), but aardvark-dns's on-disk config has the v6 gateway absent from its listen-ips line.
Cause: podman network rm doesn't clean up $XDG_RUNTIME_DIR/containers/networks/aardvark-dns/<name> between rm and recreate, so a network that was originally v4-only can leave aardvark with a v4-only listen header even after the network is recreated dual-stack. The container's /etc/resolv.conf still lists the v6 gateway as the primary nameserver, queries to it time out (~5s), then glibc falls back to the v4 gateway.
Lerd 1.18+ detects this drift on lerd install (aardvark listen line is v4-only despite the network being dual-stack) and self-heals by recreating the network with the stale aardvark state wiped. If you're on an earlier 1.18 build or the heal didn't fire, force it:
lerd install
Manual verification:
cat "${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/containers/networks/aardvark-dns/lerd" | head -1
# expect both gateways, e.g.: fd00:1e7d::1,10.89.7.1 169.254.1.1
# if only 10.89.7.1 is present, the drift fix didn't run — re-run lerd install