Architecture

How the control plane, scheduler, workers, and dashboard fit together.

WiseHosting is split into three runtime processes (control plane, worker agent, and a dedicated proxy server for end-user app traffic) plus a Postgres database.

Reading this for the first time?

Hold on to five ideas:

  1. Control plane = the Go binary that owns the dashboard + REST + the WSS hub. One process.
  2. Worker = a separate Linux host that runs Podman containers. There can be many.
  3. Proxy server = a dedicated VPS (192.99.14.173) running Traefik. It terminates TLS for all app traffic and forwards requests to worker containers over WireGuard.
  4. Postgres = the source of truth. The job queue is just SELECT … FOR UPDATE SKIP LOCKED — no Redis, no RabbitMQ (a claim-one-job sketch follows below).
  5. WireGuard mesh = a private network (10.50.0.0/24) that all CP↔worker and proxy↔worker traffic rides on. See WireGuard mesh.

All worker↔control-plane traffic (REST register/refresh + WSS) rides the self-hosted WireGuard mesh on 10.50.0.0/24 (UDP 51821). End-user app traffic hits the proxy server at 192.99.14.173, which terminates TLS (Let's Encrypt) and forwards requests directly to worker container ports over WireGuard — no Cloudflare tunnel or Cloudflare-for-SaaS needed for app routing. This replaces the older per-worker Traefik model. For the full setup walk-through and troubleshooting, see WireGuard mesh.
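
The queue claim behind idea 4 looks roughly like this. A minimal sketch using database/sql; the jobs table columns and the 'pending'/'assigned' status values are assumptions, not the actual schema:

```go
package scheduler // illustrative package name

import (
	"context"
	"database/sql"
)

// claimJob atomically claims one pending job. FOR UPDATE SKIP LOCKED means
// concurrent pollers skip rows another transaction has already locked, so
// no two workers are ever assigned the same job. Table/column names and the
// 'pending'/'assigned' statuses are assumptions for illustration.
func claimJob(ctx context.Context, db *sql.DB, workerID int64) (int64, error) {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return 0, err
	}
	defer tx.Rollback() // no-op after a successful Commit

	var jobID int64
	err = tx.QueryRowContext(ctx, `
		SELECT id FROM jobs
		WHERE status = 'pending'
		ORDER BY created_at
		LIMIT 1
		FOR UPDATE SKIP LOCKED`).Scan(&jobID)
	if err != nil {
		return 0, err // sql.ErrNoRows: queue is empty
	}

	if _, err := tx.ExecContext(ctx,
		`UPDATE jobs SET status = 'assigned', worker_id = $1 WHERE id = $2`,
		workerID, jobID); err != nil {
		return 0, err
	}
	return jobID, tx.Commit()
}
```

Because SKIP LOCKED silently ignores rows locked by other transactions, two concurrent scheduler ticks can never claim the same job, which is what makes a plain Postgres table sufficient as the queue.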

Process boundaries

  • Control plane (main.go): Wires every subsystem in dependency order: config → DB → plans → scheduler → alerts → usage recorder → log bus → webhook dispatcher → API server → web handler. One Go binary, all HTTP/WSS in-process.
  • Worker agent (cmd/worker-agent/main.go): Connects out to the control plane over the WireGuard tunnel (WSS to the CP's 10.50.0.1). Runs Podman containers. Container ports are accessed directly by the proxy server over WireGuard.
  • Proxy server (Traefik binary on 192.99.14.173): Terminates TLS for all end-user app traffic (Let's Encrypt). Polls /v1/traefik/proxy-config every 5 s and forwards HTTP to worker container ports over WireGuard.
  • Postgres (no dedicated binary): System of record. Schema is applied by golang-migrate from versioned SQL files in internal/database/migrations/ on every startup.

What runs in the control plane

  • internal/api — HTTP server, worker registration & token-refresh endpoints, WebSocket hub, per-app stats cache, two Traefik HTTP-provider endpoints (/v1/traefik/config for legacy per-worker use and /v1/traefik/proxy-config for the proxy server), worker-JWT signer/verifier.
  • internal/web — Dashboard API (apps, deployments, env vars, webhooks, sessions, alerts, usage, OAuth, custom domains) plus the embedded SPA assets. Per-IP rate limits include a dedicated 60/min limiter on inbound git-provider webhooks.
  • internal/scheduler — Polls the jobs table, atomically assigns pending jobs to the lowest-utilisation worker, recovers stuck jobs, monitors worker health. Retries are no longer attempted in-process — failed jobs surface immediately to the user.
  • internal/alerts — Alert manager + threshold poller. Evaluates per-app rules (cpu, memory, network, disk, offline, crashloop, deployment_failed) every 30s with a sustain window, fires/resolves alerts, and emits webhook events.
  • internal/usage — Background recorder that samples live stats every minute into 5-minute usage_samples buckets (90-day retention).
  • internal/webhooks — Outbound dispatcher with retries. Accepts both signed-HTTPS targets and 22 Shoutrrr channels (Discord, Slack, Telegram, ntfy, …). Test-deliveries reuse the same code paths via TestDeliver.
  • internal/logbus — In-process per-app ring buffer for runtime logs.
  • internal/database — GORM wrapper, golang-migrate runner (migrate.go + migrations/*.sql), AES-GCM secret encryption with HKDF-SHA256 per-purpose key derivation, hashed worker API keys (api_key_hash), generalized audit_events table, in-memory TTL cache. The per-purpose key derivation is sketched after this list.
  • internal/httpx — Hardened outbound HTTP client constructors. NewSecureClient is used everywhere; NewWebhookClient adds a DNS-resolve + private-range-reject pre-dial check (a basic SSRF guard, also sketched after this list).
  • internal/gitproviders — GitHub / GitLab / Bitbucket / Codeberg adapters. GitHub additionally exposes per-org repo listing, org enumeration, and OAuth grant revocation.
  • internal/frameworks — Built-in Dockerfile presets (Node, Next.js, Vite, Go, Python, static).
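
The internal/database bullet mentions AES-GCM secret encryption with HKDF-SHA256 per-purpose key derivation. A minimal sketch of that pattern; the master-key source, the purpose labels, and the nonce-prepended ciphertext layout are assumptions:

```go
package secrets // illustrative

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"io"

	"golang.org/x/crypto/hkdf"
)

// deriveKey expands the master key into a 32-byte key bound to a purpose
// label (e.g. "env-vars", "oauth-tokens" — illustrative), so leaking one
// derived key does not expose ciphertexts encrypted for other purposes.
func deriveKey(master []byte, purpose string) ([]byte, error) {
	key := make([]byte, 32)
	r := hkdf.New(sha256.New, master, nil /* salt */, []byte(purpose))
	if _, err := io.ReadFull(r, key); err != nil {
		return nil, err
	}
	return key, nil
}

// encrypt seals plaintext with AES-256-GCM under the purpose-derived key.
// The random nonce is prepended to the ciphertext (layout is an assumption).
func encrypt(master []byte, purpose string, plaintext []byte) ([]byte, error) {
	key, err := deriveKey(master, purpose)
	if err != nil {
		return nil, err
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}
```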
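
Similarly, a sketch of the pre-dial check idea behind NewWebhookClient: resolve the target host first and refuse to dial private, loopback, or link-local addresses. The helper names and exact rejection rules here are illustrative:

```go
package httpx // illustrative

import (
	"context"
	"errors"
	"net"
	"net/http"
	"time"
)

// guardedDialContext resolves the target host before dialing and rejects
// any address in a private, loopback, or link-local range, which blocks
// common SSRF targets (169.254.169.254, 10.x, 127.x, ...).
func guardedDialContext(ctx context.Context, network, addr string) (net.Conn, error) {
	host, port, err := net.SplitHostPort(addr)
	if err != nil {
		return nil, err
	}
	ips, err := net.DefaultResolver.LookupIP(ctx, "ip", host)
	if err != nil {
		return nil, err
	}
	for _, ip := range ips {
		if ip.IsPrivate() || ip.IsLoopback() || ip.IsLinkLocalUnicast() || ip.IsUnspecified() {
			return nil, errors.New("httpx: refusing to dial private address " + ip.String())
		}
	}
	// Dial the already-resolved address rather than re-resolving, so a
	// DNS-rebinding flip after the check cannot swap in a private IP.
	d := net.Dialer{Timeout: 10 * time.Second}
	return d.DialContext(ctx, network, net.JoinHostPort(ips[0].String(), port))
}

// newGuardedClient returns an outbound client using the guarded dialer
// (the real NewWebhookClient may be shaped differently).
func newGuardedClient() *http.Client {
	return &http.Client{
		Timeout:   15 * time.Second,
		Transport: &http.Transport{DialContext: guardedDialContext},
	}
}
```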

Custom domains

Custom domains let an app respond on user-supplied hostnames in addition to its <slug>.route.uday.me subdomain.

  1. The user adds a hostname through the dashboard. internal/web/domains_api.go validates the label syntax, generates a 32-byte verification token, and writes a domains row.
  2. The user creates a TXT record at _wisehosting.<hostname> containing the token; the dashboard's Verify button triggers an 8-second DNS lookup against public DNS resolvers (sketched after these steps).
  3. Once verified, internal/api/traefik.go includes the hostname in the proxy Traefik HTTP-provider response (/v1/traefik/proxy-config), polled every 5 seconds by the proxy Traefik. Routes are emitted as Host(`a`) || Host(`b`) rules per app, so a single app can serve multiple domains without duplicating services.
  4. The user creates a CNAME record at their DNS provider pointing theirdomain.com to slug.route.uday.me. The record must be DNS-only (no Cloudflare orange-cloud proxy) — the proxy server needs to complete the Let's Encrypt HTTP-01 challenge directly.
  5. Let's Encrypt issues a TLS certificate automatically on the proxy server within ~1 minute of DNS propagation.
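
A minimal sketch of the verification lookup from step 2. It uses Go's default resolver for brevity where the real code queries public resolvers; the helper name and constant-time comparison are assumptions:

```go
package domains // illustrative

import (
	"context"
	"crypto/subtle"
	"net"
	"time"
)

// verifyTXT checks that _wisehosting.<hostname> contains the expected token.
// The 8-second budget bounds slow or unresponsive resolvers.
func verifyTXT(hostname, wantToken string) (bool, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 8*time.Second)
	defer cancel()

	records, err := net.DefaultResolver.LookupTXT(ctx, "_wisehosting."+hostname)
	if err != nil {
		return false, err
	}
	for _, txt := range records {
		if subtle.ConstantTimeCompare([]byte(txt), []byte(wantToken)) == 1 {
			return true, nil
		}
	}
	return false, nil
}
```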

Cloudflare orange-cloud breaks Let's Encrypt

If the user's custom domain is on Cloudflare with the proxy (orange cloud) enabled, the Let's Encrypt HTTP-01 challenge will be intercepted by Cloudflare and fail. The CNAME must be set to DNS-only (grey cloud).

What runs on the worker

A worker holds no persistent state. It pulls everything (jobs, app config) from the control plane over WSS and persists nothing locally beyond the running containers and the cloned repos in temp dirs. The worker no longer runs Traefik — all end-user app traffic is routed to container ports directly by the proxy server over WireGuard.

  • internal/worker/agent.go — Job execution: clone, build, run, monitor. Holds a short-lived JWT and refreshes it 2 minutes before expiry via /v1/workers/refresh-token (refresh scheduling is sketched after this list). Builds run with --network=wisehosting-build (a per-host 10.89.0.0/16 Podman network) under default OCI isolation — no --network=host, no --isolation=chroot.
  • internal/worker/transport.go — WSS reconnection, HMAC, dedup, ping/pong. Bearer token comes from a closure (tokenFn) so JWT rotations apply on the next reconnect; HMAC signing key is sha256(rawKey) derived independently on both sides; dialer pins NextProtos: ["http/1.1"] to keep CDN ALPN from negotiating h2.
  • internal/wsproto — Wire format and HMAC envelope shared by both sides.
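
The refresh scheduling from the agent bullet, as a sketch. Only the 2-minute lead time and the /v1/workers/refresh-token endpoint come from the description above; the function shape and retry interval are assumptions:

```go
package agent // illustrative

import (
	"context"
	"time"
)

// scheduleRefresh sleeps until two minutes before the current JWT expires,
// then asks the control plane for a new one. refresh is assumed to call
// POST /v1/workers/refresh-token and return the next expiry time.
func scheduleRefresh(ctx context.Context, expiresAt time.Time, refresh func(context.Context) (time.Time, error)) {
	for {
		wait := time.Until(expiresAt) - 2*time.Minute
		if wait < 0 {
			wait = 0 // already inside the refresh window
		}
		select {
		case <-ctx.Done():
			return
		case <-time.After(wait):
		}
		next, err := refresh(ctx)
		if err != nil {
			// Retry shortly; the old token stays valid until its original expiry.
			select {
			case <-ctx.Done():
				return
			case <-time.After(30 * time.Second):
			}
			continue
		}
		expiresAt = next
	}
}
```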

The worker drives git, podman, ss, nsenter/tc, and findmnt as subprocesses. It probes for disk-quota support at startup (xfs+prjquota or btrfs).

What the proxy server does

The proxy server (192.99.14.173) is a dedicated VPS that acts as the single public ingress point for all end-user app traffic. It is a WireGuard peer at 10.50.0.30.

  • Runs Traefik on :80 and :443.
  • Polls GET /v1/traefik/proxy-config on the control plane (over WireGuard at http://10.50.0.1:8081) every 5 seconds, authenticated by a static bearer token.
  • For each running app, the response tells Traefik: match Host(slug.route.uday.me) (plus any verified custom domains), forward to http://<worker-wg-ip>:<container-port>. A sketch of the response shape follows this list.
  • Issues TLS certificates automatically via Let's Encrypt (HTTP-01 challenge). No Cloudflare-for-SaaS, no manual cert management.
  • DNS: a wildcard A record *.route.uday.me → 192.99.14.173 points all app subdomains at the proxy. Custom domains require a user-side CNAME to slug.route.uday.me.
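
A sketch of what one app's entry in the /v1/traefik/proxy-config response could look like, written as the Go structs the control plane might marshal. The field layout follows Traefik's HTTP-provider dynamic-configuration format; the slug, addresses, and certificate-resolver name are illustrative:

```go
package traefikcfg // illustrative

import "encoding/json"

// Minimal subset of Traefik's dynamic configuration as served by an HTTP provider.
type DynamicConfig struct {
	HTTP struct {
		Routers  map[string]Router  `json:"routers"`
		Services map[string]Service `json:"services"`
	} `json:"http"`
}

type Router struct {
	Rule    string `json:"rule"`
	Service string `json:"service"`
	TLS     *TLS   `json:"tls,omitempty"`
}

type TLS struct {
	CertResolver string `json:"certResolver"`
}

type Service struct {
	LoadBalancer LoadBalancer `json:"loadBalancer"`
}

type LoadBalancer struct {
	Servers []Server `json:"servers"`
}

type Server struct {
	URL string `json:"url"`
}

// exampleConfig builds a one-app config: route the platform subdomain plus a
// verified custom domain to the app's container port on a worker's WireGuard IP.
func exampleConfig() ([]byte, error) {
	var cfg DynamicConfig
	cfg.HTTP.Routers = map[string]Router{
		"app-myblog": {
			Rule:    "Host(`myblog.route.uday.me`) || Host(`blog.example.com`)",
			Service: "app-myblog",
			TLS:     &TLS{CertResolver: "letsencrypt"}, // resolver name is an assumption
		},
	}
	cfg.HTTP.Services = map[string]Service{
		"app-myblog": {LoadBalancer: LoadBalancer{Servers: []Server{
			{URL: "http://10.50.0.11:3001"}, // worker WireGuard IP + container port (illustrative)
		}}},
	}
	return json.MarshalIndent(cfg, "", "  ")
}
```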

See Proxy server setup for the full installation walk-through.

Auth flow

The HMAC signing key is sha256(api_key), computed independently by both sides — it's never sent over the wire. The worker stores only the raw key (in its config); the control plane stores only api_key_hash in the workers table. The JWT in the Authorization header authenticates the connection; the per-envelope HMAC authenticates each message.

Every WSS envelope is HMAC-SHA256 signed with sha256(api_key) and includes:

  • t — message type
  • i — random 16-byte ID (hex)
  • s — monotonic sequence number (per direction)
  • ts — unix milliseconds
  • p — JSON payload
  • h — HMAC over type|id|seq|ts|payload

Both sides reject envelopes with more than 5 minutes of clock skew and drop replayed (duplicate) sequence numbers, tracked in a 256-entry sliding window.
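
A sketch of the envelope signing, assuming the fields are joined with a literal '|' and the digest is hex-encoded (both assumptions; the field list and the sha256(api_key) key come from the description above):

```go
package wsproto // illustrative

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"strconv"
)

// Envelope mirrors the wire fields listed above; P is the raw JSON payload.
type Envelope struct {
	T  string          `json:"t"`
	I  string          `json:"i"`
	S  uint64          `json:"s"`
	TS int64           `json:"ts"`
	P  json.RawMessage `json:"p"`
	H  string          `json:"h"`
}

// signingKey is sha256(api_key), derived independently on both sides.
func signingKey(apiKey string) []byte {
	sum := sha256.Sum256([]byte(apiKey))
	return sum[:]
}

// sign computes HMAC-SHA256 over type|id|seq|ts|payload.
func sign(key []byte, e *Envelope) string {
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(e.T))
	mac.Write([]byte("|"))
	mac.Write([]byte(e.I))
	mac.Write([]byte("|"))
	mac.Write([]byte(strconv.FormatUint(e.S, 10)))
	mac.Write([]byte("|"))
	mac.Write([]byte(strconv.FormatInt(e.TS, 10)))
	mac.Write([]byte("|"))
	mac.Write(e.P)
	return hex.EncodeToString(mac.Sum(nil))
}

// verify recomputes the HMAC and compares it in constant time.
func verify(key []byte, e *Envelope) bool {
	want, err := hex.DecodeString(e.H)
	if err != nil {
		return false
	}
	got, _ := hex.DecodeString(sign(key, e))
	return hmac.Equal(got, want)
}
```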
