cameleer-saas/docker/CLAUDE.md
hsiegeln 132143c083
refactor: decompose CLAUDE.md into directory-scoped files
Root CLAUDE.md reduced from 475 to 175 lines (75 excl. GitNexus).
Detailed context now loads automatically only when editing code in
the relevant directory:

- provisioning/CLAUDE.md — env vars, provisioning flow, lifecycle
- config/CLAUDE.md — auth, scopes, JWT, OIDC role extraction
- docker/CLAUDE.md — routing, networks, bootstrap, deployment pipeline
- installer/CLAUDE.md — deployment modes, compose templates, env naming
- ui/CLAUDE.md — frontend files, sign-in UI

No information lost — everything moved, nothing deleted.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 09:30:21 +02:00


Docker & Infrastructure

Routing (single-domain, path-based via Traefik)

All services on one hostname. Infrastructure containers (Traefik, Logto) use PUBLIC_HOST + PUBLIC_PROTOCOL env vars directly. The SaaS app reads these via CAMELEER_SAAS_PROVISIONING_PUBLICHOST / CAMELEER_SAAS_PROVISIONING_PUBLICPROTOCOL (Spring Boot properties cameleer.saas.provisioning.publichost / cameleer.saas.provisioning.publicprotocol).
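The env-var names follow Spring Boot's relaxed binding rules. A minimal sketch of the conversion (this is the standard binding rule, not project code): dashes are dropped, dots become underscores, and the result is uppercased.

```python
def to_env_var(property_name: str) -> str:
    """Spring Boot relaxed binding: drop dashes, turn dots into
    underscores, uppercase the result."""
    return property_name.replace("-", "").replace(".", "_").upper()

print(to_env_var("cameleer.saas.provisioning.publichost"))
# CAMELEER_SAAS_PROVISIONING_PUBLICHOST
```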

| Path | Target | Notes |
|------|--------|-------|
| /platform/* | cameleer-saas:8080 | SPA + API (server.servlet.context-path: /platform) |
| /platform/vendor/* | (SPA routes) | Vendor console (platform:admin) |
| /platform/tenant/* | (SPA routes) | Tenant admin portal (org-scoped) |
| /t/{slug}/* | per-tenant server-ui | Provisioned tenant UI containers (Traefik labels) |
| / | redirect -> /platform/ | Via docker/traefik-dynamic.yml |
| /* (catch-all) | cameleer-logto:3001 (priority=1) | Custom sign-in UI, OIDC, interaction |
  • SPA assets at /_app/ (Vite assetsDir: '_app') to avoid conflict with Logto's /assets/
  • Logto ENDPOINT = ${PUBLIC_PROTOCOL}://${PUBLIC_HOST} (same domain, same origin)
  • TLS: the traefik-certs init container generates a self-signed cert (dev) or copies a user-supplied cert via the CERT_FILE/KEY_FILE/CA_FILE env vars
  • Default cert is configured in docker/traefik-dynamic.yml, NOT in the static traefik.yml (Traefik v3 ignores tls.stores.default in static config)
  • Runtime cert replacement via the vendor UI (stage/activate/restore); ACME for production (future)
  • Server containers import /certs/ca.pem into the JVM truststore at startup via docker-entrypoint.sh so OIDC connections are trusted
  • Root / -> /platform/ redirect via Traefik file provider (docker/traefik-dynamic.yml)
  • LoginPage auto-redirects to Logto OIDC (no intermediate button)
  • Per-tenant server containers get Traefik labels for /t/{slug}/* routing at provisioning time
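The per-tenant routing labels can be sketched roughly like this. The label keys are standard Traefik v3 Docker-provider labels; the router name, priority, and port are illustrative assumptions, not the provisioner's actual values:

```python
def tenant_ui_labels(slug: str, port: int = 80) -> dict[str, str]:
    # Router name "t-{slug}" and the target port are assumptions for illustration.
    r = f"t-{slug}"
    return {
        "traefik.enable": "true",
        # Match the tenant's path prefix; priority above the Logto catch-all (priority=1).
        f"traefik.http.routers.{r}.rule": f"PathPrefix(`/t/{slug}`)",
        f"traefik.http.routers.{r}.priority": "10",
        f"traefik.http.routers.{r}.service": r,
        f"traefik.http.services.{r}.loadbalancer.server.port": str(port),
        # Required so Traefik resolves the container IP on the right network
        # (see Docker Networks below).
        "traefik.docker.network": "cameleer-traefik",
    }

labels = tenant_ui_labels("acme")
print(labels["traefik.http.routers.t-acme.rule"])  # PathPrefix(`/t/acme`)
```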

Docker Networks

Compose-defined networks:

| Network | Name on Host | Purpose |
|---------|--------------|---------|
| cameleer | cameleer-saas_cameleer | Compose default; shared services (DB, Logto, SaaS) |
| cameleer-traefik | cameleer-traefik (fixed via name:) | Traefik + provisioned tenant containers |

Per-tenant networks (created dynamically by DockerTenantProvisioner):

| Network | Name Pattern | Purpose |
|---------|--------------|---------|
| Tenant network | cameleer-tenant-{slug} | Internal bridge, no internet; isolates tenant server + apps |
| Environment network | cameleer-env-{tenantId}-{envSlug} | Tenant-scoped; includes tenantId to prevent slug collisions across tenants |

Server containers join three networks: tenant network (primary), shared services network (cameleer), and traefik network. Apps deployed by the server use the tenant network as primary.

IMPORTANT: Dynamically-created containers MUST have traefik.docker.network=cameleer-traefik label. Traefik's Docker provider defaults to network: cameleer (compose-internal name) for IP resolution, which doesn't match dynamically-created containers connected via Docker API using the host network name (cameleer-saas_cameleer). Without this label, Traefik returns 504 Gateway Timeout for /t/{slug}/api/* paths.
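The naming scheme and network memberships above can be sketched as pure functions (the helper names are hypothetical; the patterns and memberships come from the tables and paragraph above):

```python
def tenant_network(slug: str) -> str:
    return f"cameleer-tenant-{slug}"

def env_network(tenant_id: str, env_slug: str) -> str:
    # tenantId is included so two tenants can both have e.g. a "prod" environment.
    return f"cameleer-env-{tenant_id}-{env_slug}"

def server_networks(slug: str) -> list[str]:
    # Provisioned server containers join three networks; tenant network is primary.
    return [tenant_network(slug), "cameleer", "cameleer-traefik"]

def app_networks(slug: str, tenant_id: str, env_slug: str) -> list[str]:
    # App containers deployed by the server (see Deployment pipeline below).
    return [tenant_network(slug), "cameleer-traefik", env_network(tenant_id, env_slug)]

print(server_networks("acme"))
# ['cameleer-tenant-acme', 'cameleer', 'cameleer-traefik']
```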

Custom sign-in UI (ui/sign-in/)

Separate Vite+React SPA replacing Logto's default sign-in page. Visually matches cameleer-server LoginPage.

  • Built as custom Logto Docker image (cameleer-logto): ui/sign-in/Dockerfile = node build stage + FROM ghcr.io/logto-io/logto:latest + COPY dist over /etc/logto/packages/experience/dist/
  • Uses @cameleer/design-system components (Card, Input, Button, FormField, Alert)
  • Authenticates via the Logto Experience API (init -> verify password -> identify -> submit -> redirect)
  • CUSTOM_UI_PATH env var does NOT work for Logto OSS — must volume-mount or replace the experience dist directory
  • Favicon bundled in ui/sign-in/public/favicon.svg (served by Logto, not SaaS)

Deployment pipeline

App deployment is handled by the cameleer-server's DeploymentExecutor (7-stage async flow):

  1. PRE_FLIGHT — validate config, check JAR exists
  2. PULL_IMAGE — pull base image if missing
  3. CREATE_NETWORK — ensure cameleer-traefik and cameleer-env-{slug} networks
  4. START_REPLICAS — create N containers with Traefik labels
  5. HEALTH_CHECK — poll /cameleer/health on agent port 9464
  6. SWAP_TRAFFIC — stop old deployment (blue/green)
  7. COMPLETE — mark RUNNING or DEGRADED
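The staged flow can be sketched as a sequential executor. Stage names are from the list above; the handler wiring and the DEGRADED policy shown here are a hypothetical simplification of DeploymentExecutor's async implementation:

```python
from enum import Enum
from typing import Callable

class Stage(Enum):
    PRE_FLIGHT = 1
    PULL_IMAGE = 2
    CREATE_NETWORK = 3
    START_REPLICAS = 4
    HEALTH_CHECK = 5
    SWAP_TRAFFIC = 6
    COMPLETE = 7

def run_deployment(handlers: dict[Stage, Callable[[], bool]]) -> str:
    """Run stages in order; a failed HEALTH_CHECK marks the deployment
    DEGRADED instead of RUNNING (illustrative, not the real policy)."""
    healthy = True
    for stage in Stage:
        if stage is Stage.COMPLETE:
            return "RUNNING" if healthy else "DEGRADED"
        ok = handlers.get(stage, lambda: True)()
        if stage is Stage.HEALTH_CHECK and not ok:
            healthy = False
    return "RUNNING" if healthy else "DEGRADED"

print(run_deployment({}))  # RUNNING
```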

Key files:

  • DeploymentExecutor.java (in cameleer-server) — async staged deployment, runtime type auto-detection
  • DockerRuntimeOrchestrator.java (in cameleer-server) — Docker client, container lifecycle, builds runtime-type-specific entrypoints (spring-boot uses -cp + PropertiesLauncher with -Dloader.path for log appender; quarkus uses -jar; plain-java uses -cp + detected main class; native exec directly). Overrides the Dockerfile ENTRYPOINT.
  • docker/runtime-base/Dockerfile — base image with agent JAR + cameleer-log-appender.jar + JRE. The Dockerfile ENTRYPOINT (-jar /app/app.jar) is a fallback — DockerRuntimeOrchestrator overrides it at container creation.
  • RuntimeDetector.java (in cameleer-server) — detects runtime type from JAR manifest Main-Class; derives correct PropertiesLauncher package (Spring Boot 3.2+ vs pre-3.2)
  • ServerApiClient.java — M2M token acquisition for SaaS->server API calls (agent status). Uses X-Cameleer-Protocol-Version: 1 header
  • Docker socket access: group_add: ["0"] in docker-compose.dev.yml (not root group membership in Dockerfile)
  • Network: deployed containers join cameleer-tenant-{slug} (primary, isolation) + cameleer-traefik (routing) + cameleer-env-{tenantId}-{envSlug} (environment isolation)
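The runtime-type-specific entrypoints described above might look roughly like this. The PropertiesLauncher class names are the real Spring Boot loader packages (the package moved in 3.2); the paths and the -Dloader.path value are illustrative assumptions:

```python
# Spring Boot 3.2 moved the loader package; pre-3.2 jars use the old one.
PROPS_LAUNCHER_NEW = "org.springframework.boot.loader.launch.PropertiesLauncher"  # 3.2+
PROPS_LAUNCHER_OLD = "org.springframework.boot.loader.PropertiesLauncher"         # pre-3.2

def build_entrypoint(runtime: str, *, launcher: str = PROPS_LAUNCHER_NEW,
                     main_class: str = "com.example.Main",
                     jar: str = "/app/app.jar") -> list[str]:
    """Illustrative per-runtime entrypoints; overrides the image ENTRYPOINT."""
    if runtime == "spring-boot":
        # -cp + PropertiesLauncher so -Dloader.path can inject the log appender.
        return ["java", "-cp", jar, "-Dloader.path=/opt/cameleer", launcher]
    if runtime == "quarkus":
        return ["java", "-jar", jar]
    if runtime == "plain-java":
        return ["java", "-cp", jar, main_class]
    if runtime == "native":
        return ["/app/app"]  # execute the binary directly
    raise ValueError(f"unknown runtime: {runtime}")

print(build_entrypoint("quarkus"))  # ['java', '-jar', '/app/app.jar']
```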

Bootstrap (docker/logto-bootstrap.sh)

Idempotent script run inside the Logto container entrypoint. Clean slate — no example tenant, no viewer user, no server configuration. Phases:

  1. Wait for Logto health (no server to wait for — servers are provisioned per-tenant)
  2. Get Management API token (reads m-default secret from DB)
  3. Create Logto apps (SPA, Traditional Web App with skipConsent, M2M with Management API role + server API role)
  3b. Create API resource scopes (1 platform + 9 tenant + 3 server scopes)
  4. Create org roles (owner, operator, viewer with API resource scope assignments) + M2M server role (cameleer-m2m-server with server:admin scope)
  5. Create admin user (SaaS admin with Logto console access)
  7b. Configure Logto Custom JWT for access tokens (maps org roles -> roles claim: owner->server:admin, operator->server:operator, viewer->server:viewer; saas-vendor global role -> server:admin)
  6. Configure Logto sign-in branding (Cameleer colors #C6820E/#D4941E, logo from /platform/logo.svg)
  7. Cleanup seeded Logto apps
  8. Write bootstrap results to /data/logto-bootstrap.json
  9. Create saas-vendor global role with all API scopes and assign to admin user (always runs — admin IS the platform admin).
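The Custom JWT role mapping configured during bootstrap can be sketched as a pure function (the helper is hypothetical; the mapping itself is the one listed in the phases above):

```python
ORG_ROLE_MAP = {
    "owner": "server:admin",
    "operator": "server:operator",
    "viewer": "server:viewer",
}

def roles_claim(org_roles: list[str], global_roles: list[str]) -> list[str]:
    """Map Logto org roles (plus the saas-vendor global role) onto the
    access token's roles claim, deduplicated, order-preserving."""
    out: list[str] = []
    for role in org_roles:
        mapped = ORG_ROLE_MAP.get(role)
        if mapped and mapped not in out:
            out.append(mapped)
    if "saas-vendor" in global_roles and "server:admin" not in out:
        out.append("server:admin")
    return out

print(roles_claim(["owner", "viewer"], []))  # ['server:admin', 'server:viewer']
```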

The multi-tenant compose stack is: Traefik + PostgreSQL + ClickHouse + Logto (with bootstrap entrypoint) + cameleer-saas. No cameleer-server or cameleer-server-ui in compose — those are provisioned per-tenant by DockerTenantProvisioner.