Worker agent
Provision a Linux host as a WiseHosting worker — installer, container limits.
A worker is a Linux host that runs user containers. It connects out to the control plane over the WireGuard tunnel (WSS to 10.50.0.1). End-user app traffic reaches worker containers via the proxy server (10.50.0.30) over WireGuard — the worker does not run Traefik and does not need a Cloudflare tunnel.
No Cloudflare Tunnel needed
In the old model, each worker ran Traefik and a cloudflared tunnel. That's gone. The proxy server handles all end-user TLS and routing. Workers only need outbound connectivity to the control plane over WireGuard.
What a worker needs
- Linux host with root (Ubuntu / Debian / Fedora / Arch)
- Outbound UDP 51821 to the control plane (for WireGuard)
- Outbound HTTPS for git clones
- The proxy server must already be set up and in the WireGuard mesh so it can reach this worker's container ports (see Proxy server setup)
0. Join the WireGuard mesh
Run this before the worker installer so register/refresh calls and the WSS connection can reach the control plane. The CP must already have run `sudo ./scripts/wireguard-setup.sh control` and shared its public key + public IPv4.
```
sudo ./scripts/wireguard-setup.sh worker \
  --cp-pubkey '<paste from CP /etc/wireguard/cp_public.key>' \
  --cp-public-ip '<CP public IPv4>' \
  --worker-id 2
```

`--worker-id 2` lands this host at 10.50.0.3 (the formula is `10.50.0.{ID+1}`). The script prints the worker's own public key — paste that as a `[Peer]` on the control plane (with `AllowedIPs = 10.50.0.3/32`) and run `sudo wg syncconf wg0 <(wg-quick strip wg0)`. After this, `ping 10.50.0.1` from the worker should succeed.
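The ID-to-IP mapping and the CP-side peer stanza can be sketched as follows (the public-key placeholder is yours to fill in; the stanza layout mirrors a standard wg0.conf):

```shell
#!/bin/sh
# Sketch: derive the worker's WireGuard IP from --worker-id using the
# documented formula 10.50.0.{ID+1}, then print the [Peer] stanza to
# paste into the control plane's /etc/wireguard/wg0.conf.
worker_id=2
wg_ip="10.50.0.$((worker_id + 1))"

cat <<EOF
[Peer]
# worker-id ${worker_id}
PublicKey = <worker public key printed by wireguard-setup.sh>
AllowedIPs = ${wg_ip}/32
EOF

# Then apply on the CP without dropping existing peers:
#   sudo wg syncconf wg0 <(wg-quick strip wg0)
```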
You also need to add this worker as a peer on the proxy server so it can forward app traffic to containers. See Proxy server setup for those steps.
For the full WireGuard story (key exchange, troubleshooting, key rotation), see WireGuard mesh.
1. Worker install (one command)
```
curl -fsSL https://raw.githubusercontent.com/Krinali12/hostingbot/master/scripts/worker-setup.sh | sudo bash
```

The installer:

- Downloads the latest `wisehosting-worker` binary (verifies SHA-256)
- Installs runtime deps (`iptables-persistent`, etc.)
- Installs Podman 5.x + Buildah 1.43 from the community alvistack OBS repo on Ubuntu 22.04 / Debian 12 (the older kubic repo is dead). Falls back to distro Podman with a warning if the repo is unreachable — Node 22 builds may fail on Podman 4 in that case.
- Prompts for config on `/dev/tty`
- Writes `/etc/wisehosting/config.yaml`
- Installs `wisehosting-worker.service` (hardened: see systemd unit below)
- Creates a dedicated Podman network `wisehosting-build` on `10.89.0.0/16` for build-time DNS
- Deploys the themed catch-all 404 page
- Applies iptables egress hardening (drops cloud-metadata + private subnets from containers)
`podman build` runs with `--network=wisehosting-build --no-cache` under default OCI isolation — no `--network=host` (which would let build steps reach the host's loopback or any privileged listener) and no `--isolation=chroot` (which would skip mount-namespace isolation entirely). The dedicated build network gives builds working DNS without sharing the host's interfaces. Once the build finishes, `podman image inspect --format '{{.Id}}'` reads back the local image ID and the deploy is pinned to that ID, not the tag — if the underlying image is tampered with between build and run, the run fails closed with an unresolvable reference.
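The build-then-pin flow can be sketched like this (the tag and build directory are hypothetical — the worker chooses its own):

```shell
# Sketch: build on the dedicated network, read back the local image ID,
# and run by ID so a retagged or replaced image fails to resolve at run time.
tag="localhost/wisehosting/myapp:latest"
podman build --network=wisehosting-build --no-cache -t "$tag" /var/lib/wisehosting/builds/myapp

# Pin the deploy to the content-addressed image ID, not the mutable tag
image_id="$(podman image inspect --format '{{.Id}}' "$tag")"
podman run -d --name myapp "$image_id"
```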
Optional: image signing
scripts/cosign-setup.sh provisions a per-host cosign keypair and a wrapper that signs every freshly built image. Verification can be bolted onto the podman run invocation if your threat model requires it.
Non-interactive install
Set env vars before piping. Most-used:
| Var | Purpose |
|---|---|
| `WISEHOSTING_API_HOST` | Control-plane URL or host:port |
| `WISEHOSTING_API_SECRET` | Control plane's `api_server.secret` |
| `WISEHOSTING_WORKER_NAME` | Unique name (e.g. `worker-de-01`) |
| `WISEHOSTING_WORKER_IP` | Reachable IP (defaults to first non-loopback) |
| `WISEHOSTING_WORKER_ZONE` | DNS zone for app subdomains (e.g. `route.uday.me`) |
| `WISEHOSTING_SSH_ALLOW_IP` | Lock :22 to a CIDR |
| `WISEHOSTING_SKIP_HARDENING` | `1` to skip iptables rules |
| `WISEHOSTING_BANDWIDTH` | Per-container egress rate (e.g. `10mbit`) |
| `WISEHOSTING_DISK_LIMIT_MB` | Per-container disk quota (needs xfs+prjquota) |
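For example, a hands-off install might look like this (values are placeholders; passing `VAR=value` on the sudo command line keeps the variables visible to the piped script regardless of sudoers env policy):

```shell
# Sketch: non-interactive install with placeholder values — substitute your own.
curl -fsSL https://raw.githubusercontent.com/Krinali12/hostingbot/master/scripts/worker-setup.sh \
  | sudo WISEHOSTING_API_HOST="https://cp.example.com" \
         WISEHOSTING_API_SECRET="<secret>" \
         WISEHOSTING_WORKER_NAME="worker-de-01" \
         WISEHOSTING_WORKER_ZONE="route.uday.me" \
         bash
```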
2. Add worker config fields
The worker's /etc/wisehosting/config.yaml needs a wg_ip field so the control plane knows which WireGuard IP to advertise to the proxy for routing:
```
worker:
  name: worker-de-01
  ip: 5.45.109.72       # public IP (used for registration)
  wg_ip: 10.50.0.2      # WireGuard IP on this worker
  api_key: "<key>"
  zone: route.uday.me   # base zone for app subdomains
  region_name: "Frankfurt, Germany"
  capacity_cpu: 4
  capacity_memory: 4294967296
```

wg_ip is required
The proxy uses wg_ip from the control plane's /v1/traefik/proxy-config response to build upstream URLs like http://10.50.0.2:30001. Without it, app traffic from the proxy cannot reach this worker.
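The upstream construction itself is simple string assembly; a trivial sketch, using the example values from above (port 30001 is just the illustrative container port):

```shell
# How the proxy derives an upstream URL from the advertised wg_ip
# and a container's published port (values from the example above).
wg_ip="10.50.0.2"
port=30001
upstream="http://${wg_ip}:${port}"
echo "$upstream"   # http://10.50.0.2:30001
```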
3. Verify
```
# Control plane
journalctl -u wisehosting -f | grep "hub: worker"
# Expect: hub: worker <ID> (<name>) connected via WSS

# Worker
systemctl status wisehosting-worker
ping -c 1 10.50.0.1     # WireGuard tunnel up
podman ps               # running app containers
iptables -L FORWARD -n  # confirm egress drops

# End-to-end (replace slug and zone)
curl -sI https://my-app.route.uday.me/
# Expect 200 (or your app's response) from the proxy
```

systemd unit
The installer drops a hardened unit at /etc/systemd/system/wisehosting-worker.service:
```
[Unit]
Description=WiseHosting worker
After=network-online.target wg-quick@wg0.service podman.service
Wants=network-online.target wg-quick@wg0.service

[Service]
Type=simple
ExecStart=/usr/local/bin/wisehosting-worker
Restart=on-failure
RestartSec=5s
LimitNOFILE=65535

# Hardening
NoNewPrivileges=yes
ProtectSystem=full
ProtectHome=yes
ProtectKernelTunables=yes
ProtectKernelModules=yes
ProtectControlGroups=yes
RestrictSUIDSGID=yes
RestrictRealtime=yes
RestrictNamespaces=no
ReadWritePaths=/var/lib/containers /run /tmp /etc/wisehosting

[Install]
WantedBy=multi-user.target
```

`PrivateDevices=` is intentionally omitted — the agent shells out to `tc` / `nsenter` against container netns, which need access to /dev. `RestrictNamespaces=no` is required because Podman creates user/mount/network namespaces. `ProtectSystem=full` (not `strict`) leaves /etc writable for iptables-persistent, but the explicit `ReadWritePaths=` list narrows what the worker itself can touch.
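To see how those trade-offs score, systemd's built-in sandbox analyzer can be run against the installed unit:

```shell
# Lower exposure = tighter sandbox; expect a middling score here, since
# namespaces and /dev deliberately stay available for Podman and tc.
systemd-analyze security wisehosting-worker.service
```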
Per-container limits
Configured under container: in /etc/wisehosting/config.yaml:
| Limit | Enforcement | When crossed |
|---|---|---|
| `cpu_limit` | `--cpus=N` | hard cap, throttled |
| `memory_limit` | `--memory=Nb` | kernel OOMs the proc; worker reports `oom`, alert manager fires `app.memory` |
| `pids_limit` | `--pids-limit=N` | new forks fail |
| `disk_limit` | `--storage-opt size=N` | needs xfs+prjquota or btrfs; worker also polls writable layer size and stops + notifies on overage |
| `bandwidth_limit` | `tc tbf` inside container netns (egress) | rate cap, packets queued/dropped |
The worker also detects rapid restarts (>3 in 30 s) as a crash loop, stops the container, and emits app.crashloop so the alert manager fires.
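The bandwidth cap from the table above can be sketched as follows (the container name `myapp` and the `eth0` interface name are assumptions; the worker resolves the real PID and interface itself):

```shell
# Sketch: shape a container's egress to 10mbit with a token-bucket filter,
# applied inside its network namespace via the container's PID.
pid="$(podman inspect --format '{{.State.Pid}}' myapp)"
nsenter -t "$pid" -n tc qdisc replace dev eth0 root tbf \
  rate 10mbit burst 32kbit latency 400ms
```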
Upgrading
```
curl -fsSL https://raw.githubusercontent.com/Krinali12/hostingbot/master/scripts/worker-setup.sh | sudo bash -s -- --upgrade
```

Re-fetches the latest binary and restarts the agent. Doesn't re-touch traefik / 404 / iptables.
Upgrading hosts that ran Podman 3 or 4
If podman build fails with DNS errors after a host-level Podman 3/4 → 5 upgrade, you may have stale CNI state from the old runtime. Remove the legacy interface, route, and chains by hand:
```
ip link delete cni-podman0 2>/dev/null || true
ip route del 10.88.0.0/16 2>/dev/null || true
iptables -t nat -F CNI-HOSTPORT-DNAT 2>/dev/null || true
iptables -t nat -F CNI-HOSTPORT-MASQ 2>/dev/null || true
```

Fresh installs are pure netavark and don't hit this — only in-place upgrades.
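After cleanup (or on a fresh install), the active network backend can be confirmed directly:

```shell
# Podman 4+ reports which network backend is in use; expect "netavark"
podman info --format '{{.Host.NetworkBackend}}'
```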
Source build
`WISEHOSTING_BUILD_FROM_SOURCE=1` makes the installer clone + compile with Go instead of downloading.