How I Finally Stopped Docker Containers From Fighting Over Ports
You spin up a couple containers, expose a few ports, and boom — bind: address already in use. Annoying, inconsistent, and usually at the worst time. Here’s the complete fix I use now, plus why the problem happens, other causes that look similar, and how to bullet-proof your setup so it stays fixed.
The Symptom
- Starting a new container fails with an error like:
Error starting userland proxy: listen tcp4 0.0.0.0:80: bind: address already in use
- Or it starts, but something else breaks (random 502s in a reverse proxy, “connection refused”, etc.)
Why This Happens
- You can only bind a given host port once per IP. If container A maps -p 80:80, container B can’t also map -p 80:80 on the same host interface.
- Random “helpful” defaults. Many images expose the same internal port (often 80 or 8080). If you mirror that to the host for multiple containers, you collide.
- Host services already using the port. Web servers (Apache/Nginx), VPNs, dockerized proxies, even stale dev servers will all block you.
- Leftover or zombie containers. A container you forgot about is still holding the port.
- Systemd socket activation and WSL quirks (Windows/macOS). System services or host subsystems can grab ports in the background.
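The failure is at the kernel’s bind() level, not something Docker invents, so you can reproduce it with no containers at all. A minimal sketch, using python3’s built-in http.server as a stand-in for two containers publishing the same host port (8099 is an arbitrary demo port):

```shell
# Two plain listeners on the same address:port hit the exact same failure,
# no Docker involved -- the kernel refuses the second bind().
python3 -m http.server 8099 --bind 127.0.0.1 >/dev/null 2>&1 &
FIRST=$!
sleep 1
# The second bind to 127.0.0.1:8099 fails, just like a second -p 8099:80 mapping would.
if python3 -m http.server 8099 --bind 127.0.0.1 >/dev/null 2>&1; then
  echo "unexpected: second bind succeeded"
else
  echo "second bind failed: address already in use"
fi
kill "$FIRST"
```

Swap either listener for a container with a published port and the error message changes, but the cause does not.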
The Strategic Fix (That Scales)
Expose only one public entrypoint (80/443) and route everything internally through a reverse proxy on Docker’s bridge network. Each app keeps a unique internal port (or just its default), and the proxy handles all the outside traffic by hostname or path.
- One place to terminate TLS.
- Zero host-port juggling per container.
- Cleaner firewall rules and logging.
- Horizontal scale by adding services without touching host ports again.
Quick Triage: Find What’s Holding the Port
Linux:
sudo ss -tulpn | grep -E ':80|:443'
sudo lsof -i :80 -sTCP:LISTEN
sudo lsof -i :443 -sTCP:LISTEN
Windows (PowerShell):
netstat -aon | findstr :80
tasklist /FI "PID eq <PID_FROM_NETSTAT>"
Kill or disable the offender only if it’s rogue; otherwise plan to move it behind the proxy too.
Check for zombie containers:
docker ps --format 'table {{.Names}}\t{{.Ports}}'
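If you script deployments, a small pre-flight check saves the round trip. A sketch with an invented helper name (check_free); it tries to bind each port via python3 and fails fast if anything already listens there. In production you’d check 80 and 443; binding those needs root, so the demo call uses arbitrary high ports:

```shell
# Hypothetical pre-flight helper: refuse to proceed if a port is already taken.
check_free() {
  for p in "$@"; do
    # A successful bind-and-close means the port is free right now.
    if ! python3 -c "import socket; s = socket.socket(); s.bind(('0.0.0.0', $p)); s.close()" 2>/dev/null; then
      echo "port $p is in use"
      return 1
    fi
  done
  echo "all ports free"
}

check_free 8085 8086
```

Run it at the top of your deploy script, before `docker compose up`, so a stale listener fails the deploy loudly instead of half-starting the stack.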
⸻
The Pattern to Use Going Forward
1) Put a Reverse Proxy in Front
You can use Caddy, Traefik, or Nginx Proxy Manager. Pick one; I’ll show Caddy and Traefik because they’re scriptable and production-friendly.
Option A: Caddy (simple, automatic HTTPS)
docker-compose.yml
services:
  caddy:
    image: caddy:2
    container_name: caddy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - proxy

  app1:
    image: ghcr.io/example/app1:latest
    expose:
      - "8080"
    networks:
      - proxy

  app2:
    image: ghcr.io/example/app2:latest
    expose:
      - "8000"
    networks:
      - proxy

volumes:
  caddy_data:
  caddy_config:

networks:
  proxy:
    driver: bridge
Caddyfile
app1.example.com {
	reverse_proxy app1:8080
}

app2.example.com {
	reverse_proxy app2:8000
}
Notes:
• expose makes ports available to the Docker network, not the host.
• Caddy listens on the host’s 80/443. Everything else stays internal.
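For a box without public DNS (homelab, staging), Caddy can still terminate TLS by issuing certificates from its own internal CA instead of Let’s Encrypt. A hedged sketch; the hostname is a placeholder:

```
# Caddyfile variant for hosts with no public DNS record:
# "tls internal" issues a cert from Caddy's built-in local CA.
app1.lan {
	tls internal
	reverse_proxy app1:8080
}
```

Clients will warn about the unknown CA unless you trust Caddy’s root certificate on each machine, but the routing pattern stays identical to the public setup.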
⸻
Option B: Traefik (labels-based routing and Let’s Encrypt)
docker-compose.yml
services:
  traefik:
    image: traefik:v3.1
    container_name: traefik
    command:
      - "--api.dashboard=true"
      - "--providers.docker=true"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.le.acme.httpchallenge=true"
      - "--certificatesresolvers.le.acme.httpchallenge.entrypoint=web"
      - "--certificatesresolvers.le.acme.email=admin@example.com"
      - "--certificatesresolvers.le.acme.storage=/letsencrypt/acme.json"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt
    networks:
      - proxy

  app1:
    image: ghcr.io/example/app1:latest
    expose:
      - "8080"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app1.rule=Host(`app1.example.com`)"
      - "traefik.http.routers.app1.entrypoints=websecure"
      - "traefik.http.routers.app1.tls.certresolver=le"
      - "traefik.http.services.app1.loadbalancer.server.port=8080"
    networks:
      - proxy

  app2:
    image: ghcr.io/example/app2:latest
    expose:
      - "8000"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app2.rule=Host(`app2.example.com`)"
      - "traefik.http.routers.app2.entrypoints=websecure"
      - "traefik.http.routers.app2.tls.certresolver=le"
      - "traefik.http.services.app2.loadbalancer.server.port=8000"
    networks:
      - proxy

networks:
  proxy:
    driver: bridge
⸻
2) Use Hostnames or Paths, Not Host Ports
- Per-service subdomains: app1.example.com, app2.example.com
- Path routing (if sharing a domain):
Caddy:
example.com {
	handle_path /app1* {
		reverse_proxy app1:8080
	}
	handle_path /app2* {
		reverse_proxy app2:8000
	}
}
Traefik:
"traefik.http.routers.app1.rule=Host(`example.com`) && PathPrefix(`/app1`)"
⸻
3) Keep Everything on an Isolated Bridge Network
- One proxy network for externally routed services.
- Additional per-app networks for isolation.
- Avoid --network host unless you fully understand the tradeoffs.
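As a sketch of the per-app isolation idea (the service and network names here are invented), a database can share a private network with its app while staying off the proxy network entirely:

```yaml
services:
  app1:
    networks:
      - proxy          # reachable by the reverse proxy
      - app1_backend   # and by its own database
  app1_db:
    image: postgres:16
    networks:
      - app1_backend   # internal only: no proxy, no host ports

networks:
  proxy:
    external: true     # created and owned by the proxy stack
  app1_backend:
    internal: true     # optionally also blocks outbound traffic
```

With this shape, app2 cannot reach app1’s database at all, and the database never competes for a host port in the first place.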
⸻
Similar Problems That Look Like Port Conflicts (But Aren’t)
• Container reachable by IP but not hostname: That’s DNS. Check proxy vhost rules and external DNS records.
• Works on localhost only: App is bound to 127.0.0.1. Ensure it’s listening on 0.0.0.0.
• Intermittent 502/504: Health checks, upstream app slow to start, or proxy timeouts.
• Fails after reboot: Use restart: unless-stopped and prefer HTTP challenge if DNS isn’t reachable early.
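The “works on localhost only” case is worth seeing concretely. The address you pass to bind() decides who can reach the socket; an app bound to 127.0.0.1 inside a container is invisible to the proxy on the bridge network. A minimal sketch (8097 is an arbitrary demo port):

```shell
# The bind address controls visibility: 127.0.0.1 is loopback-only,
# 0.0.0.0 accepts connections on every interface (including the bridge).
python3 - <<'PY'
import socket

for addr in ('127.0.0.1', '0.0.0.0'):
    s = socket.socket()
    s.bind((addr, 8097))
    print('listening on', s.getsockname()[0])
    s.close()
PY
```

Most frameworks default to loopback in dev mode, so check the app’s own host/bind setting before blaming the proxy.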
⸻
Hardening & Quality-of-Life Tweaks
• Firewall sanity: Only open ports 80/443.
• Access logs: Enable proxy access logs for visibility.
• IPv6: Make sure your proxy listens on both v4/v6 if you use AAAA records.
• SELinux/AppArmor: Adjust volume labels if needed.
• Certificates: Centralize via the proxy; don’t mix cert sources.
⸻
A Reliable “Clean Slate” Procedure
1) Stop everything
docker compose down
2) Check ports
sudo ss -tulpn | grep -E ':80|:443' || echo "OK: 80/443 free"
3) Bring up proxy first
docker compose up -d caddy # or traefik
4) Test
curl -I http://YOUR_DOMAIN
5) Add services one at a time
docker compose up -d app1
curl -I https://app1.example.com
docker compose up -d app2
curl -I https://app2.example.com
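Step 5 is less flaky with a readiness wait between services. A sketch (wait_for is an invented helper, not part of Docker); it polls a URL until it answers with a 2xx/3xx status:

```shell
# Hypothetical helper: poll a URL until it responds, then move on.
wait_for() {
  url=$1
  tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    # curl prints the HTTP status code; "000" means no connection yet.
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url" || true)
    case "$code" in
      2*|3*) echo "$url is up ($code)"; return 0 ;;
    esac
    i=$((i + 1))
    sleep 1
  done
  echo "$url did not come up"
  return 1
}
```

Usage would look like: `docker compose up -d app1 && wait_for https://app1.example.com` before moving on to app2.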
⸻
When You Shouldn’t Use the Proxy Pattern
- When you need unique IPs per service on your LAN (use macvlan).
- For low-latency UDP services where NAT adds noticeable delay.
⸻
Checklist
• Only the reverse proxy binds host ports.
• Apps use expose: not ports:.
• Shared proxy network.
• Routing by hostname or path.
• Central TLS via proxy.
• restart: unless-stopped used.
• Firewall locked to 80/443.
• Logs and dashboards secured.
⸻
Bottom Line
Stop fighting host ports. Put a single reverse proxy in front, keep everything else internal, and scale by DNS and labels, not by juggling -p 8081:80, -p 8082:80 forever. It’s cleaner, safer, and once it’s set up, you don’t think about ports again.