# Compare commits

41052d01e8...v1.0.0 (27 commits):

1fbafbb16d, 6c1241ed89, df64573bfb, 4526d97bda, 132143c083, b824942408, 31e8dd05f0, eba9f560ac, 3c2bf4a9b1, 97b2235914, 338db5dcda, fd50a147a2, 0dd52624b7, 1ce0ea411d, 81be25198c, dc4ea33c9b, 186f7639ad, 6c7895b0d6, 6170f61eeb, 2ed527ac74, cb1f6b8ccf, 758585cc9a, 141b44048c, 3c343f9441, bdb24f8de6, 933b56f68f, 19c463051a
**.gitignore** (vendored, 3 changed lines)
```
@@ -28,6 +28,9 @@ Thumbs.db
.playwright-mcp/
.gitnexus

# Installer output (generated by install.sh / install.ps1)
installer/cameleer/

# Generated by postinstall from @cameleer/design-system
ui/public/favicon.svg
docker/runtime-base/agent.jar
```
@@ -1,7 +1,7 @@

<!-- gitnexus:start -->
# GitNexus — Code Intelligence

This project is indexed by GitNexus as **cameleer-saas** (2676 symbols, 5768 relationships, 224 execution flows). Use the GitNexus MCP tools to understand code, assess impact, and navigate safely.
This project is indexed by GitNexus as **cameleer-saas** (2816 symbols, 5989 relationships, 238 execution flows). Use the GitNexus MCP tools to understand code, assess impact, and navigate safely.

> If any GitNexus tool warns the index is stale, run `npx gitnexus analyze` in terminal first.
**CLAUDE.md** (324 changed lines)
@@ -17,315 +17,37 @@ This repo is the SaaS layer on top of two proven components:

Agent-server protocol is defined in `cameleer/cameleer-common/PROTOCOL.md`. The agent and server are mature, proven components — this repo wraps them with multi-tenancy, billing, and self-service onboarding.

## Key Classes
## Key Packages

### Java Backend (`src/main/java/net/siegeln/cameleer/saas/`)

**config/** — Security, tenant isolation, web config
- `SecurityConfig.java` — OAuth2 JWT decoder (ES384, issuer/audience validation, scope extraction)
- `TenantIsolationInterceptor.java` — HandlerInterceptor on `/api/**`; JWT org_id -> TenantContext, path variable validation, fail-closed
- `TenantContext.java` — ThreadLocal<UUID> tenant ID storage
- `WebConfig.java` — registers TenantIsolationInterceptor
- `PublicConfigController.java` — GET /api/config (Logto endpoint, SPA client ID, scopes)
- `MeController.java` — GET /api/me (authenticated user, tenant list)
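The ThreadLocal pattern behind `TenantContext` can be sketched in plain Java. This is a minimal illustration of the mechanism described above, not the actual implementation; the `set`/`get`/`clear` method names are assumptions:

```java
import java.util.UUID;

// Hypothetical sketch of a ThreadLocal tenant holder like TenantContext.
final class TenantContextSketch {
    private static final ThreadLocal<UUID> CURRENT = new ThreadLocal<>();

    static void set(UUID tenantId) { CURRENT.set(tenantId); }
    static UUID get() { return CURRENT.get(); }
    // Must be cleared after each request, or a pooled thread leaks the last tenant.
    static void clear() { CURRENT.remove(); }
}

public class TenantContextDemo {
    public static void main(String[] args) {
        UUID tenant = UUID.fromString("00000000-0000-0000-0000-000000000001");
        TenantContextSketch.set(tenant);  // interceptor preHandle: JWT org_id -> tenant UUID
        try {
            System.out.println(TenantContextSketch.get().equals(tenant));
        } finally {
            TenantContextSketch.clear();  // interceptor afterCompletion
        }
        System.out.println(TenantContextSketch.get() == null);
    }
}
```

The try/finally clear is what makes the fail-closed interceptor safe on servlet thread pools.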
| Package | Purpose | Key classes |
|---------|---------|-------------|
| `config/` | Security, tenant isolation, web config | `SecurityConfig`, `TenantIsolationInterceptor`, `TenantContext`, `PublicConfigController`, `MeController` |
| `tenant/` | Tenant data model | `TenantEntity` (JPA: id, name, slug, tier, status, logto_org_id, db_password) |
| `vendor/` | Vendor console (platform:admin) | `VendorTenantService`, `VendorTenantController`, `InfrastructureService` |
| `portal/` | Tenant admin portal (org-scoped) | `TenantPortalService`, `TenantPortalController` |
| `provisioning/` | Pluggable tenant provisioning | `DockerTenantProvisioner`, `TenantDatabaseService`, `TenantDataCleanupService` |
| `certificate/` | TLS certificate lifecycle | `CertificateService`, `CertificateController`, `TenantCaCertService` |
| `license/` | License management | `LicenseService`, `LicenseController` |
| `identity/` | Logto & server integration | `LogtoManagementClient`, `ServerApiClient` |
| `audit/` | Audit logging | `AuditService` |
**tenant/** — Tenant data model
- `TenantEntity.java` — JPA entity (id, name, slug, tier, status, logto_org_id, stripe IDs, settings JSONB, db_password)

### Frontend

**vendor/** — Vendor console (platform:admin only)
- `VendorTenantService.java` — orchestrates tenant creation (sync: DB + Logto + license, async: Docker provisioning + config push), suspend/activate, delete, restart server, upgrade server (force-pull + re-provision), license renewal
- `VendorTenantController.java` — REST at `/api/vendor/tenants` (platform:admin required). List endpoint returns `VendorTenantSummary` with fleet health data (agentCount, environmentCount, agentLimit) fetched in parallel via `CompletableFuture`.
- `InfrastructureService.java` — raw JDBC queries against shared PostgreSQL and ClickHouse for per-tenant infrastructure monitoring (schema sizes, table stats, row counts, disk usage)
- `InfrastructureController.java` — REST at `/api/vendor/infrastructure` (platform:admin required). PostgreSQL and ClickHouse overview with per-tenant breakdown.

**portal/** — Tenant admin portal (org-scoped)
- `TenantPortalService.java` — customer-facing: dashboard (health + agent/env counts from server via M2M), license, SSO connectors, team, settings (public endpoint URL), server restart/upgrade, password management (own + team + server admin)
- `TenantPortalController.java` — REST at `/api/tenant/*` (org-scoped, includes CA cert management at `/api/tenant/ca`, password endpoints at `/api/tenant/password` and `/api/tenant/server/admin-password`)

**provisioning/** — Pluggable tenant provisioning
- `TenantProvisioner.java` — pluggable interface (like server's RuntimeOrchestrator)
- `DockerTenantProvisioner.java` — Docker implementation, creates per-tenant server + UI containers with per-tenant JDBC credentials (`currentSchema=tenant_{slug}&ApplicationName=tenant_{slug}`). `upgrade(slug)` force-pulls latest images and removes server+UI containers (preserves app containers, volumes, networks) for re-provisioning. `remove(slug)` does full cleanup: label-based container removal, env networks, tenant network, JAR volume.
- `TenantDatabaseService.java` — creates/drops per-tenant PostgreSQL users (`tenant_{slug}`) and schemas; used during provisioning and delete
- `TenantDataCleanupService.java` — GDPR data erasure on tenant delete: deletes ClickHouse data across all tables with `tenant_id` column (PostgreSQL cleanup handled by `TenantDatabaseService`)
- `TenantProvisionerAutoConfig.java` — auto-detects Docker socket
- `DockerCertificateManager.java` — file-based cert management with atomic `.wip` swap (Docker volume)
- `DisabledCertificateManager.java` — no-op when certs dir unavailable
- `CertificateManagerAutoConfig.java` — auto-detects `/certs` directory
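The per-tenant JDBC URL shape used by `DockerTenantProvisioner` can be sketched as a small builder. The URL format comes from the notes above; the helper method itself is hypothetical:

```java
public class TenantJdbcUrl {
    // Builds the per-tenant datasource URL: schema isolation via currentSchema and
    // diagnostic query scoping via ApplicationName, both set to tenant_{slug}.
    static String build(String slug) {
        String schema = "tenant_" + slug;
        return "jdbc:postgresql://cameleer-postgres:5432/cameleer"
                + "?currentSchema=" + schema
                + "&ApplicationName=" + schema;
    }

    public static void main(String[] args) {
        System.out.println(build("acme"));
    }
}
```

Using the same `tenant_{slug}` value for both parameters lets `InfrastructureService` attribute connections and queries to a tenant from `pg_stat_activity`.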
**certificate/** — TLS certificate lifecycle management
- `CertificateManager.java` — provider interface (Docker now, K8s later)
- `CertificateService.java` — orchestrates stage/activate/restore/discard, DB metadata, tenant CA staleness
- `CertificateController.java` — REST at `/api/vendor/certificates` (platform:admin required)
- `CertificateEntity.java` — JPA entity (status: ACTIVE/STAGED/ARCHIVED, subject, fingerprint, etc.)
- `CertificateStartupListener.java` — seeds DB from filesystem on boot (for bootstrap-generated certs)
- `TenantCaCertEntity.java` — JPA entity for per-tenant CA certs (PEM stored in DB, multiple per tenant)
- `TenantCaCertRepository.java` — queries by tenant, status, all active across tenants
- `TenantCaCertService.java` — stage/activate/delete tenant CAs, rebuilds aggregated `ca.pem` on changes

**license/** — License management
- `LicenseEntity.java` — JPA entity (id, tenant_id, tier, features JSONB, limits JSONB, expires_at)
- `LicenseService.java` — generation, validation, feature/limit lookups
- `LicenseController.java` — POST issue, GET verify, DELETE revoke
**identity/** — Logto & server integration
- `LogtoConfig.java` — Logto endpoint, M2M credentials (reads from bootstrap file)
- `LogtoManagementClient.java` — Logto Management API calls (create org, create user, add to org, get user, SSO connectors, JIT provisioning, password updates via `PATCH /api/users/{id}/password`)
- `ServerApiClient.java` — M2M client for cameleer-server API (Logto M2M token, `X-Cameleer-Protocol-Version: 1` header). Health checks, license/OIDC push, agent count, environment count, server admin password reset per tenant server.

**audit/** — Audit logging
- `AuditEntity.java` — JPA entity (actor_id, actor_email, tenant_id, action, resource, status)
- `AuditService.java` — log audit events (TENANT_CREATE, TENANT_UPDATE, etc.); auto-resolves actor name from Logto when actorEmail is null (cached in-memory)
### React Frontend (`ui/src/`)

- `main.tsx` — React 19 root
- `router.tsx` — `/vendor/*` + `/tenant/*` with `RequireScope` guards and `LandingRedirect` that waits for scopes
- `Layout.tsx` — persona-aware sidebar: vendor sees expandable "Vendor" section (Tenants, Audit Log, Certificates, Infrastructure, Identity/Logto), tenant admin sees Dashboard/License/SSO/Team/Audit/Settings
- `OrgResolver.tsx` — merges global + org-scoped token scopes (vendor's platform:admin is global)
- `config.ts` — fetch Logto config from /platform/api/config
- `auth/useAuth.ts` — auth hook (isAuthenticated, logout, signIn)
- `auth/useOrganization.ts` — Zustand store for current tenant
- `auth/useScopes.ts` — decode JWT scopes, hasScope()
- `auth/ProtectedRoute.tsx` — guard (redirects to /login)
- **Vendor pages**: `VendorTenantsPage.tsx`, `CreateTenantPage.tsx`, `TenantDetailPage.tsx`, `VendorAuditPage.tsx`, `CertificatesPage.tsx`
- **Tenant pages**: `TenantDashboardPage.tsx` (restart + upgrade server), `TenantLicensePage.tsx`, `SsoPage.tsx`, `TeamPage.tsx` (reset member passwords), `TenantAuditPage.tsx`, `SettingsPage.tsx` (change own password, reset server admin password)

### Custom Sign-in UI (`ui/sign-in/src/`)

- `SignInPage.tsx` — form with @cameleer/design-system components
- `experience-api.ts` — Logto Experience API client (4-step: init -> verify -> identify -> submit)
- **`ui/src/`** — React 19 SPA at `/platform/*` (vendor + tenant admin pages)
- **`ui/sign-in/`** — Custom Logto sign-in UI (built into `cameleer-logto` Docker image)
## Architecture Context

The SaaS platform is a **vendor management plane**. It does not proxy requests to servers — instead it provisions dedicated per-tenant cameleer-server instances via Docker API. Each tenant gets isolated server + UI containers with their own database schemas, networks, and Traefik routing.

### Routing (single-domain, path-based via Traefik)

All services on one hostname. Infrastructure containers (Traefik, Logto) use `PUBLIC_HOST` + `PUBLIC_PROTOCOL` env vars directly. The SaaS app reads these via `CAMELEER_SAAS_PROVISIONING_PUBLICHOST` / `CAMELEER_SAAS_PROVISIONING_PUBLICPROTOCOL` (Spring Boot properties `cameleer.saas.provisioning.publichost` / `cameleer.saas.provisioning.publicprotocol`).

| Path | Target | Notes |
|------|--------|-------|
| `/platform/*` | cameleer-saas:8080 | SPA + API (`server.servlet.context-path: /platform`) |
| `/platform/vendor/*` | (SPA routes) | Vendor console (platform:admin) |
| `/platform/tenant/*` | (SPA routes) | Tenant admin portal (org-scoped) |
| `/t/{slug}/*` | per-tenant server-ui | Provisioned tenant UI containers (Traefik labels) |
| `/` | redirect -> `/platform/` | Via `docker/traefik-dynamic.yml` |
| `/*` (catch-all) | cameleer-logto:3001 (priority=1) | Custom sign-in UI, OIDC, interaction |
- SPA assets at `/_app/` (Vite `assetsDir: '_app'`) to avoid conflict with Logto's `/assets/`
- Logto `ENDPOINT` = `${PUBLIC_PROTOCOL}://${PUBLIC_HOST}` (same domain, same origin)
- TLS: `traefik-certs` init container generates self-signed cert (dev) or copies user-supplied cert via `CERT_FILE`/`KEY_FILE`/`CA_FILE` env vars. Default cert configured in `docker/traefik-dynamic.yml` (NOT static `traefik.yml` — Traefik v3 ignores `tls.stores.default` in static config). Runtime cert replacement via vendor UI (stage/activate/restore). ACME for production (future). Server containers import `/certs/ca.pem` into JVM truststore at startup via `docker-entrypoint.sh` for OIDC trust.
- Root `/` -> `/platform/` redirect via Traefik file provider (`docker/traefik-dynamic.yml`)
- LoginPage auto-redirects to Logto OIDC (no intermediate button)
- Per-tenant server containers get Traefik labels for `/t/{slug}/*` routing at provisioning time
### Docker Networks

Compose-defined networks:

| Network | Name on Host | Purpose |
|---------|-------------|---------|
| `cameleer` | `cameleer-saas_cameleer` | Compose default — shared services (DB, Logto, SaaS) |
| `cameleer-traefik` | `cameleer-traefik` (fixed `name:`) | Traefik + provisioned tenant containers |

Per-tenant networks (created dynamically by `DockerTenantProvisioner`):

| Network | Name Pattern | Purpose |
|---------|-------------|---------|
| Tenant network | `cameleer-tenant-{slug}` | Internal bridge, no internet — isolates tenant server + apps |
| Environment network | `cameleer-env-{tenantId}-{envSlug}` | Tenant-scoped (includes tenantId to prevent slug collision across tenants) |

Server containers join three networks: tenant network (primary), shared services network (`cameleer`), and traefik network. Apps deployed by the server use the tenant network as primary.

**IMPORTANT:** Dynamically-created containers MUST have `traefik.docker.network=cameleer-traefik` label. Traefik's Docker provider defaults to `network: cameleer` (compose-internal name) for IP resolution, which doesn't match dynamically-created containers connected via Docker API using the host network name (`cameleer-saas_cameleer`). Without this label, Traefik returns 504 Gateway Timeout for `/t/{slug}/api/*` paths.
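The naming rules and the mandatory routing label from this section can be sketched as plain helpers. The network names and label keys come from the tables above; the helper methods are illustrative, not the provisioner's real API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TenantNetworks {
    static String tenantNetwork(String slug) { return "cameleer-tenant-" + slug; }

    // Environment networks embed the tenant ID, so equal env slugs in two tenants never collide.
    static String envNetwork(String tenantId, String envSlug) {
        return "cameleer-env-" + tenantId + "-" + envSlug;
    }

    // Without traefik.docker.network, Traefik resolves container IPs on the wrong
    // network and /t/{slug}/api/* paths answer 504 Gateway Timeout.
    static Map<String, String> traefikLabels(String slug) {
        Map<String, String> labels = new LinkedHashMap<>();
        labels.put("traefik.docker.network", "cameleer-traefik");
        labels.put("cameleer.tenant", slug); // used later for label-based removal
        return labels;
    }

    public static void main(String[] args) {
        System.out.println(tenantNetwork("acme"));
        System.out.println(envNetwork("42", "prod"));
        System.out.println(traefikLabels("acme").get("traefik.docker.network"));
    }
}
```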
### Custom sign-in UI (`ui/sign-in/`)

Separate Vite+React SPA replacing Logto's default sign-in page. Visually matches cameleer-server LoginPage.

- Built as custom Logto Docker image (`cameleer-logto`): `ui/sign-in/Dockerfile` = node build stage + `FROM ghcr.io/logto-io/logto:latest` + COPY dist over `/etc/logto/packages/experience/dist/`
- Uses `@cameleer/design-system` components (Card, Input, Button, FormField, Alert)
- Authenticates via Logto Experience API (4-step: init -> verify password -> identify -> submit -> redirect)
- `CUSTOM_UI_PATH` env var does NOT work for Logto OSS — must volume-mount or replace the experience dist directory
- Favicon bundled in `ui/sign-in/public/favicon.svg` (served by Logto, not SaaS)
### Auth enforcement

- All API endpoints enforce OAuth2 scopes via `@PreAuthorize("hasAuthority('SCOPE_xxx')")` annotations
- Tenant isolation enforced by `TenantIsolationInterceptor` (a single `HandlerInterceptor` on `/api/**` that resolves JWT org_id to TenantContext and validates `{tenantId}`, `{environmentId}`, `{appId}` path variables; fail-closed, platform admins bypass)
- 13 OAuth2 scopes on the Logto API resource (`https://api.cameleer.local`): 10 platform scopes + 3 server scopes (`server:admin`, `server:operator`, `server:viewer`), served to the frontend from `GET /platform/api/config`
- Server scopes map to server RBAC roles via JWT `scope` claim (SaaS platform path) or `roles` claim (server-ui OIDC login path)
- Org roles: `owner` -> `server:admin` + `tenant:manage`, `operator` -> `server:operator`, `viewer` -> `server:viewer`
- `saas-vendor` global role created by bootstrap Phase 12 and always assigned to the admin user — has `platform:admin` + all tenant scopes
- Custom `JwtDecoder` in `SecurityConfig.java` — ES384 algorithm, `at+jwt` token type, split issuer-uri (string validation) / jwk-set-uri (Docker-internal fetch), audience validation (`https://api.cameleer.local`)
- Logto Custom JWT (Phase 7b in bootstrap) injects a `roles` claim into access tokens based on org roles and global roles — this makes role data available to the server without Logto-specific code
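The org-role-to-server-scope mapping above can be sketched as a plain lookup. The role/scope pairs come from the bullet list; the class and method are hypothetical:

```java
import java.util.Map;
import java.util.Optional;

public class OrgRoleScopes {
    // Org role -> server scope, as listed above (owner additionally gets tenant:manage).
    static final Map<String, String> ROLE_TO_SCOPE = Map.of(
            "owner", "server:admin",
            "operator", "server:operator",
            "viewer", "server:viewer");

    static Optional<String> serverScope(String orgRole) {
        return Optional.ofNullable(ROLE_TO_SCOPE.get(orgRole));
    }

    public static void main(String[] args) {
        System.out.println(serverScope("owner").orElse("none"));
        System.out.println(serverScope("guest").orElse("none")); // unknown role: no scope
    }
}
```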
### Auth routing by persona

| Persona | Logto role | Key scope | Landing route |
|---------|-----------|-----------|---------------|
| SaaS admin | `saas-vendor` (global) | `platform:admin` | `/vendor/tenants` |
| Tenant admin | org `owner` | `tenant:manage` | `/tenant` (dashboard) |
| Regular user (operator/viewer) | org member | `server:operator` or `server:viewer` | Redirected to server dashboard directly |

- `LandingRedirect` component waits for scopes to load, then routes to the correct persona landing page
- `RequireScope` guard on route groups enforces scope requirements
- SSO bridge: Logto session carries over to provisioned server's OIDC flow (Traditional Web App per tenant)
### Per-tenant server env vars (set by DockerTenantProvisioner)

These env vars are injected into provisioned per-tenant server containers:

| Env var | Value | Purpose |
|---------|-------|---------|
| `SPRING_DATASOURCE_URL` | `jdbc:postgresql://cameleer-postgres:5432/cameleer?currentSchema=tenant_{slug}&ApplicationName=tenant_{slug}` | Per-tenant schema isolation + diagnostic query scoping |
| `SPRING_DATASOURCE_USERNAME` | `tenant_{slug}` | Per-tenant PG user (owns only its schema) |
| `SPRING_DATASOURCE_PASSWORD` | (generated, stored in `TenantEntity.dbPassword`) | Per-tenant PG password |
| `CAMELEER_SERVER_SECURITY_OIDCISSUERURI` | `${PUBLIC_PROTOCOL}://${PUBLIC_HOST}/oidc` | Token issuer claim validation |
| `CAMELEER_SERVER_SECURITY_OIDCJWKSETURI` | `http://cameleer-logto:3001/oidc/jwks` | Docker-internal JWK fetch |
| `CAMELEER_SERVER_SECURITY_OIDCTLSSKIPVERIFY` | `true` (conditional) | Skip cert verify for OIDC discovery; only set when no `/certs/ca.pem` exists. When ca.pem exists, the server's `docker-entrypoint.sh` imports it into the JVM truststore instead. |
| `CAMELEER_SERVER_SECURITY_OIDCAUDIENCE` | `https://api.cameleer.local` | JWT audience validation for OIDC tokens |
| `CAMELEER_SERVER_SECURITY_CORSALLOWEDORIGINS` | `${PUBLIC_PROTOCOL}://${PUBLIC_HOST}` | Allow browser requests through Traefik |
| `CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKEN` | (generated) | Bootstrap auth token for M2M communication |
| `CAMELEER_SERVER_RUNTIME_ENABLED` | `true` | Enable Docker orchestration |
| `CAMELEER_SERVER_RUNTIME_SERVERURL` | `http://cameleer-server-{slug}:8081` | Per-tenant server URL (DNS alias on tenant network) |
| `CAMELEER_SERVER_RUNTIME_ROUTINGDOMAIN` | `${PUBLIC_HOST}` | Domain for Traefik routing labels |
| `CAMELEER_SERVER_RUNTIME_ROUTINGMODE` | `path` | `path` or `subdomain` routing |
| `CAMELEER_SERVER_RUNTIME_JARSTORAGEPATH` | `/data/jars` | Directory for uploaded JARs |
| `CAMELEER_SERVER_RUNTIME_DOCKERNETWORK` | `cameleer-tenant-{slug}` | Primary network for deployed app containers |
| `CAMELEER_SERVER_RUNTIME_JARDOCKERVOLUME` | `cameleer-jars-{slug}` | Docker volume name for JAR sharing between server and deployed containers |
| `CAMELEER_SERVER_TENANT_ID` | (tenant UUID) | Tenant identifier for data isolation |
| `CAMELEER_SERVER_SECURITY_INFRASTRUCTUREENDPOINTS` | `false` | Hides Database/ClickHouse admin from tenant admins |
| `BASE_PATH` (server-ui) | `/t/{slug}` | React Router basename + `<base>` tag |
| `CAMELEER_API_URL` (server-ui) | `http://cameleer-server-{slug}:8081` | Nginx upstream proxy target (NOT `API_URL` — image uses `${CAMELEER_API_URL}`) |
### Per-tenant volume mounts (set by DockerTenantProvisioner)

| Mount | Container path | Purpose |
|-------|---------------|---------|
| `/var/run/docker.sock` | `/var/run/docker.sock` | Docker socket for app deployment orchestration |
| `cameleer-jars-{slug}` (volume, via `CAMELEER_SERVER_RUNTIME_JARDOCKERVOLUME`) | `/data/jars` | Shared JAR storage — server writes, deployed app containers read |
| `cameleer-saas_certs` (volume, ro) | `/certs` | Platform TLS certs + CA bundle for OIDC trust |
### SaaS app configuration (env vars for cameleer-saas itself)

SaaS properties use the `cameleer.saas.*` prefix (env vars: `CAMELEER_SAAS_*`). Two groups:

**Identity** (`cameleer.saas.identity.*` / `CAMELEER_SAAS_IDENTITY_*`):
- Logto endpoint, M2M credentials, bootstrap file path — used by `LogtoConfig.java`

**Provisioning** (`cameleer.saas.provisioning.*` / `CAMELEER_SAAS_PROVISIONING_*`):

| Env var | Spring property | Purpose |
|---------|----------------|---------|
| `CAMELEER_SAAS_PROVISIONING_SERVERIMAGE` | `cameleer.saas.provisioning.serverimage` | Docker image for per-tenant server containers |
| `CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE` | `cameleer.saas.provisioning.serveruiimage` | Docker image for per-tenant UI containers |
| `CAMELEER_SAAS_PROVISIONING_NETWORKNAME` | `cameleer.saas.provisioning.networkname` | Shared services Docker network (compose default) |
| `CAMELEER_SAAS_PROVISIONING_TRAEFIKNETWORK` | `cameleer.saas.provisioning.traefiknetwork` | Traefik Docker network for routing |
| `CAMELEER_SAAS_PROVISIONING_PUBLICHOST` | `cameleer.saas.provisioning.publichost` | Public hostname (same value as infrastructure `PUBLIC_HOST`) |
| `CAMELEER_SAAS_PROVISIONING_PUBLICPROTOCOL` | `cameleer.saas.provisioning.publicprotocol` | Public protocol (same value as infrastructure `PUBLIC_PROTOCOL`) |

**Note:** `PUBLIC_HOST` and `PUBLIC_PROTOCOL` remain as infrastructure env vars for Traefik and Logto containers. The SaaS app reads its own copies via the `CAMELEER_SAAS_PROVISIONING_*` prefix. `LOGTO_ENDPOINT` and `LOGTO_DB_PASSWORD` are infrastructure env vars for the Logto service and are unchanged.
### Server OIDC role extraction (two paths)

| Path | Token type | Role source | How it works |
|------|-----------|-------------|--------------|
| SaaS platform -> server API | Logto org-scoped access token | `scope` claim | `JwtAuthenticationFilter.extractRolesFromScopes()` reads `server:admin` from scope |
| Server-ui SSO login | Logto JWT access token (via Traditional Web App) | `roles` claim | `OidcTokenExchanger` decodes access_token, reads `roles` injected by Custom JWT |

The server's OIDC config (`OidcConfig`) includes `audience` (RFC 8707 resource indicator) and `additionalScopes`. The `audience` is sent as `resource` in both the authorization request and token exchange, which makes Logto return a JWT access token instead of opaque. The Custom JWT script maps org roles to `roles: ["server:admin"]`.

**CRITICAL:** `additionalScopes` MUST include `urn:logto:scope:organizations` and `urn:logto:scope:organization_roles` — without these, Logto doesn't populate `context.user.organizationRoles` in the Custom JWT script, so the `roles` claim is empty and all users get `defaultRoles` (VIEWER). The server's `OidcAuthController.applyClaimMappings()` uses OIDC token roles (from Custom JWT) as fallback when no DB claim mapping rules exist: claim mapping rules > OIDC token roles > defaultRoles.
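The fallback order in `applyClaimMappings()` (claim mapping rules > OIDC token roles > defaultRoles) can be sketched as follows; everything here is hypothetical except the precedence itself:

```java
import java.util.List;

public class RoleResolution {
    // Precedence from the doc: DB claim-mapping rules first, then the roles claim
    // injected by the Logto Custom JWT, then defaultRoles (VIEWER).
    static List<String> resolveRoles(List<String> claimMappingRoles,
                                     List<String> oidcTokenRoles,
                                     List<String> defaultRoles) {
        if (claimMappingRoles != null && !claimMappingRoles.isEmpty()) return claimMappingRoles;
        if (oidcTokenRoles != null && !oidcTokenRoles.isEmpty()) return oidcTokenRoles;
        return defaultRoles;
    }

    public static void main(String[] args) {
        List<String> viewer = List.of("VIEWER");
        // No DB rules, but Custom JWT injected server:admin -> token roles win.
        System.out.println(resolveRoles(List.of(), List.of("server:admin"), viewer));
        // Missing organization scopes -> empty roles claim -> everyone falls to VIEWER.
        System.out.println(resolveRoles(List.of(), List.of(), viewer));
    }
}
```

The second call is exactly the failure mode the CRITICAL note warns about: with an empty `roles` claim, every user degrades to the default role.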
### Deployment pipeline

App deployment is handled by the cameleer-server's `DeploymentExecutor` (7-stage async flow):

1. PRE_FLIGHT — validate config, check JAR exists
2. PULL_IMAGE — pull base image if missing
3. CREATE_NETWORK — ensure cameleer-traefik and cameleer-env-{slug} networks
4. START_REPLICAS — create N containers with Traefik labels
5. HEALTH_CHECK — poll `/cameleer/health` on agent port 9464
6. SWAP_TRAFFIC — stop old deployment (blue/green)
7. COMPLETE — mark RUNNING or DEGRADED
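The 7-stage flow above can be written down as an ordered enum. Stage names come from the list; modeling them as an enum is a sketch, not the executor's actual type:

```java
public class DeploymentStages {
    // Declaration order is execution order.
    enum Stage { PRE_FLIGHT, PULL_IMAGE, CREATE_NETWORK, START_REPLICAS,
                 HEALTH_CHECK, SWAP_TRAFFIC, COMPLETE }

    public static void main(String[] args) {
        // A failure before SWAP_TRAFFIC leaves the old deployment serving
        // traffic (blue/green); only COMPLETE marks RUNNING or DEGRADED.
        for (Stage s : Stage.values()) {
            System.out.println((s.ordinal() + 1) + ". " + s);
        }
    }
}
```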
Key files:
- `DeploymentExecutor.java` (in cameleer-server) — async staged deployment
- `DockerRuntimeOrchestrator.java` (in cameleer-server) — Docker client, container lifecycle
- `docker/runtime-base/Dockerfile` — base image with agent JAR, maps env vars to `-D` system properties
- `ServerApiClient.java` — M2M token acquisition for SaaS->server API calls (agent status). Uses `X-Cameleer-Protocol-Version: 1` header
- Docker socket access: `group_add: ["0"]` in docker-compose.dev.yml (not root group membership in Dockerfile)
- Network: deployed containers join `cameleer-tenant-{slug}` (primary, isolation) + `cameleer-traefik` (routing) + `cameleer-env-{tenantId}-{envSlug}` (environment isolation)
### Bootstrap (`docker/logto-bootstrap.sh`)

Idempotent script run inside the Logto container entrypoint. **Clean slate** — no example tenant, no viewer user, no server configuration. Phases:

1. Wait for Logto health (no server to wait for — servers are provisioned per-tenant)
2. Get Management API token (reads `m-default` secret from DB)
3. Create Logto apps (SPA, Traditional Web App with `skipConsent`, M2M with Management API role + server API role)
3b. Create API resource scopes (10 platform + 3 server scopes)
4. Create org roles (owner, operator, viewer with API resource scope assignments) + M2M server role (`cameleer-m2m-server` with `server:admin` scope)
5. Create admin user (SaaS admin with Logto console access)
7b. Configure Logto Custom JWT for access tokens (maps org roles -> `roles` claim: owner->server:admin, operator->server:operator, viewer->server:viewer; saas-vendor global role -> server:admin)
8. Configure Logto sign-in branding (Cameleer colors `#C6820E`/`#D4941E`, logo from `/platform/logo.svg`)
9. Cleanup seeded Logto apps
10. Write bootstrap results to `/data/logto-bootstrap.json`
12. Create `saas-vendor` global role with all API scopes and assign to admin user (always runs — admin IS the platform admin).

The multi-tenant compose stack is: Traefik + PostgreSQL + ClickHouse + Logto (with bootstrap entrypoint) + cameleer-saas. No `cameleer-server` or `cameleer-server-ui` in compose — those are provisioned per-tenant by `DockerTenantProvisioner`.
### Deployment Modes (installer)

The installer (`installer/install.sh`) supports two deployment modes:

| | Multi-tenant SaaS (`DEPLOYMENT_MODE=saas`) | Standalone (`DEPLOYMENT_MODE=standalone`) |
|---|---|---|
| **Containers** | traefik, postgres, clickhouse, logto, cameleer-saas | traefik, postgres, clickhouse, server, server-ui |
| **Auth** | Logto OIDC (SaaS admin + tenant users) | Local auth (built-in admin, no identity provider) |
| **Tenant management** | SaaS admin creates/manages tenants via UI | Single server instance, no fleet management |
| **PostgreSQL** | `cameleer-postgres` image (multi-DB init) | Stock `postgres:16-alpine` (server creates schema via Flyway) |
| **Use case** | Platform vendor managing multiple customers | Single customer running the product directly |

Standalone mode generates a simpler compose with the server running directly. No Logto, no SaaS management plane, no bootstrap. The admin logs in with local credentials at `/`.
### Tenant Provisioning Flow

When SaaS admin creates a tenant via `VendorTenantService`:

**Synchronous (in `createAndProvision`):**
1. Create `TenantEntity` (status=PROVISIONING) + Logto organization
2. Create admin user in Logto with owner org role (if credentials provided)
3. Register OIDC redirect URIs for `/t/{slug}/oidc/callback` on Logto Traditional Web App
5. Generate license (tier-appropriate, 365 days)
6. Return immediately — UI shows provisioning spinner, polls via `refetchInterval`

**Asynchronous (in `provisionAsync`, `@Async`):**
7. Create per-tenant PostgreSQL user + schema via `TenantDatabaseService.createTenantDatabase(slug, password)`, store `dbPassword` on entity
8. Create tenant-isolated Docker network (`cameleer-tenant-{slug}`)
9. Create server container with per-tenant JDBC URL (`currentSchema=tenant_{slug}&ApplicationName=tenant_{slug}`), Traefik labels (`traefik.docker.network`), health check, Docker socket bind, JAR volume, certs volume (ro)
10. Create UI container with `CAMELEER_API_URL`, `BASE_PATH`, Traefik strip-prefix labels
11. Wait for health check (`/api/v1/health`, not `/actuator/health` which requires auth)
12. Push license token to server via M2M API
13. Push OIDC config (Traditional Web App credentials + `additionalScopes: [urn:logto:scope:organizations, urn:logto:scope:organization_roles]`) to server for SSO
14. Update tenant status -> ACTIVE (or set `provisionError` on failure)
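The sync/async split above can be sketched with `CompletableFuture` (the real service uses Spring's `@Async`; the method names and bodies here are hypothetical placeholders for the steps listed):

```java
import java.util.concurrent.CompletableFuture;

public class ProvisioningFlow {
    static String createTenantSync(String slug) {
        // entity + Logto org + redirect URIs + license, then return immediately
        return "PROVISIONING";
    }

    static String provisionAsyncWork(String slug) {
        // DB user/schema, networks, containers, health wait, license + OIDC push
        return "ACTIVE";
    }

    public static void main(String[] args) {
        String status = createTenantSync("acme");
        System.out.println(status); // UI shows a spinner and polls while in this state
        CompletableFuture<String> done =
                CompletableFuture.supplyAsync(() -> provisionAsyncWork("acme"));
        System.out.println(done.join()); // flips to ACTIVE, or provisionError on failure
    }
}
```

Returning before the slow Docker work is what lets the create endpoint respond instantly while the UI polls via `refetchInterval`.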
**Server restart** (available to SaaS admin + tenant admin):
- `POST /api/vendor/tenants/{id}/restart` (SaaS admin) and `POST /api/tenant/server/restart` (tenant)
- Calls `TenantProvisioner.stop(slug)` then `start(slug)` — restarts server + UI containers only (same image)

**Server upgrade** (available to SaaS admin + tenant admin):
- `POST /api/vendor/tenants/{id}/upgrade` (SaaS admin) and `POST /api/tenant/server/upgrade` (tenant)
- Calls `TenantProvisioner.upgrade(slug)` — removes server + UI containers, force-pulls latest images (preserves app containers, volumes, networks), then `provisionAsync()` re-creates containers with the new image + pushes license + OIDC config

**Tenant delete** cleanup:
- `DockerTenantProvisioner.remove(slug)` — label-based container removal (`cameleer.tenant={slug}`), env network cleanup, tenant network removal, JAR volume removal
- `TenantDatabaseService.dropTenantDatabase(slug)` — drops PostgreSQL `tenant_{slug}` schema + `tenant_{slug}` user
- `TenantDataCleanupService.cleanupClickHouse(slug)` — deletes ClickHouse data across all tables with `tenant_id` column (GDPR)
**Password management** (tenant portal):
- `POST /api/tenant/password` — tenant admin changes own Logto password (via `@AuthenticationPrincipal` JWT subject)
- `POST /api/tenant/team/{userId}/password` — tenant admin resets a team member's Logto password (validates org membership first)
- `POST /api/tenant/server/admin-password` — tenant admin resets the server's built-in local admin password (via M2M API to `POST /api/v1/admin/users/user:admin/password`)

For detailed architecture docs, see the directory-scoped CLAUDE.md files (loaded automatically when editing code in that directory):
- **Provisioning flow, env vars, lifecycle** → `src/.../provisioning/CLAUDE.md`
- **Auth, scopes, JWT, OIDC** → `src/.../config/CLAUDE.md`
- **Docker, routing, networks, bootstrap, deployment pipeline** → `docker/CLAUDE.md`
- **Installer, deployment modes, compose templates** → `installer/CLAUDE.md`
- **Frontend, sign-in UI** → `ui/CLAUDE.md`
## Database Migrations

@@ -341,7 +63,7 @@ PostgreSQL (Flyway): `src/main/resources/db/migration/`

- `cameleer-saas` — SaaS vendor management plane (frontend + JAR baked in)
- `cameleer-logto` — custom Logto with sign-in UI baked in
- `cameleer-server` / `cameleer-server-ui` — provisioned per-tenant (not in compose, created by `DockerTenantProvisioner`)
- `cameleer-runtime-base` — base image for deployed apps (agent JAR + JRE). CI downloads latest agent SNAPSHOT from Gitea Maven registry. Uses `CAMELEER_SERVER_RUNTIME_SERVERURL` env var (not CAMELEER_EXPORT_ENDPOINT).
- `cameleer-runtime-base` — base image for deployed apps (agent JAR + `cameleer-log-appender.jar` + JRE). CI downloads latest agent and log appender SNAPSHOTs from Gitea Maven registry. The Dockerfile ENTRYPOINT is overridden by `DockerRuntimeOrchestrator` at container creation; agent config uses `CAMELEER_AGENT_*` env vars set by `DeploymentExecutor`.
- Docker builds: `--no-cache`, `--provenance=false` for Gitea compatibility
- `docker-compose.dev.yml` — exposes ports for direct access, sets `SPRING_PROFILES_ACTIVE: dev`. Volume-mounts `./ui/dist` into the container so local UI builds are served without rebuilding the Docker image (`SPRING_WEB_RESOURCES_STATIC_LOCATIONS` overrides classpath). Adds Docker socket mount for tenant provisioning.
- Design system: import from `@cameleer/design-system` (Gitea npm registry)
@@ -353,7 +75,7 @@ PostgreSQL (Flyway): `src/main/resources/db/migration/`

 <!-- gitnexus:start -->
 # GitNexus — Code Intelligence

-This project is indexed by GitNexus as **cameleer-saas** (2676 symbols, 5768 relationships, 224 execution flows). Use the GitNexus MCP tools to understand code, assess impact, and navigate safely.
+This project is indexed by GitNexus as **cameleer-saas** (2816 symbols, 5989 relationships, 238 execution flows). Use the GitNexus MCP tools to understand code, assess impact, and navigate safely.

 > If any GitNexus tool warns the index is stale, run `npx gitnexus analyze` in terminal first.

@@ -28,6 +28,7 @@ services:
      CAMELEER_SAAS_PROVISIONING_PUBLICPROTOCOL: ${PUBLIC_PROTOCOL:-https}
      CAMELEER_SAAS_PROVISIONING_SERVERIMAGE: gitea.siegeln.net/cameleer/cameleer-server:${VERSION:-latest}
      CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE: gitea.siegeln.net/cameleer/cameleer-server-ui:${VERSION:-latest}
      CAMELEER_SAAS_PROVISIONING_RUNTIMEBASEIMAGE: gitea.siegeln.net/cameleer/cameleer-runtime-base:${VERSION:-latest}
      CAMELEER_SAAS_PROVISIONING_NETWORKNAME: cameleer-saas_cameleer
      CAMELEER_SAAS_PROVISIONING_TRAEFIKNETWORK: cameleer-traefik

@@ -126,6 +126,7 @@ services:
      CAMELEER_SAAS_IDENTITY_LOGTOPUBLICENDPOINT: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}
      CAMELEER_SAAS_IDENTITY_M2MCLIENTID: ${LOGTO_M2M_CLIENT_ID:-}
      CAMELEER_SAAS_IDENTITY_M2MCLIENTSECRET: ${LOGTO_M2M_CLIENT_SECRET:-}
      CAMELEER_SERVER_SECURITY_JWTSECRET: ${CAMELEER_SERVER_SECURITY_JWTSECRET:-cameleer-dev-jwt-secret}
      # Provisioning — passed to per-tenant server containers
      CAMELEER_SAAS_PROVISIONING_PUBLICHOST: ${PUBLIC_HOST:-localhost}
      CAMELEER_SAAS_PROVISIONING_PUBLICPROTOCOL: ${PUBLIC_PROTOCOL:-https}

docker/CLAUDE.md (new file, 88 lines)
@@ -0,0 +1,88 @@

# Docker & Infrastructure

## Routing (single-domain, path-based via Traefik)

All services on one hostname. Infrastructure containers (Traefik, Logto) use `PUBLIC_HOST` + `PUBLIC_PROTOCOL` env vars directly. The SaaS app reads these via `CAMELEER_SAAS_PROVISIONING_PUBLICHOST` / `CAMELEER_SAAS_PROVISIONING_PUBLICPROTOCOL` (Spring Boot properties `cameleer.saas.provisioning.publichost` / `cameleer.saas.provisioning.publicprotocol`).

| Path | Target | Notes |
|------|--------|-------|
| `/platform/*` | cameleer-saas:8080 | SPA + API (`server.servlet.context-path: /platform`) |
| `/platform/vendor/*` | (SPA routes) | Vendor console (platform:admin) |
| `/platform/tenant/*` | (SPA routes) | Tenant admin portal (org-scoped) |
| `/t/{slug}/*` | per-tenant server-ui | Provisioned tenant UI containers (Traefik labels) |
| `/` | redirect -> `/platform/` | Via `docker/traefik-dynamic.yml` |
| `/*` (catch-all) | cameleer-logto:3001 (priority=1) | Custom sign-in UI, OIDC, interaction |

- SPA assets at `/_app/` (Vite `assetsDir: '_app'`) to avoid conflict with Logto's `/assets/`
- Logto `ENDPOINT` = `${PUBLIC_PROTOCOL}://${PUBLIC_HOST}` (same domain, same origin)
- TLS: `traefik-certs` init container generates a self-signed cert (dev) or copies a user-supplied cert via `CERT_FILE`/`KEY_FILE`/`CA_FILE` env vars. Default cert configured in `docker/traefik-dynamic.yml` (NOT static `traefik.yml` — Traefik v3 ignores `tls.stores.default` in static config). Runtime cert replacement via vendor UI (stage/activate/restore). ACME for production (future). Server containers import `/certs/ca.pem` into the JVM truststore at startup via `docker-entrypoint.sh` for OIDC trust.
- Root `/` -> `/platform/` redirect via Traefik file provider (`docker/traefik-dynamic.yml`)
- LoginPage auto-redirects to Logto OIDC (no intermediate button)
- Per-tenant server containers get Traefik labels for `/t/{slug}/*` routing at provisioning time

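As a rough illustration of the per-tenant routing labels, the following sketch builds a label set for a tenant UI container routed at `/t/{slug}/*`. The slug `acme` and the router name `tenant-acme` are hypothetical — the exact label names emitted by `DockerTenantProvisioner` are not reproduced here:

```shell
#!/usr/bin/env bash
# Hypothetical Traefik label set for a provisioned tenant UI container.
slug="acme"
labels=(
  "traefik.enable=true"
  "traefik.http.routers.tenant-${slug}.rule=PathPrefix(\`/t/${slug}\`)"
  "traefik.docker.network=cameleer-traefik"
)
# Print as docker-run flags for inspection
printf -- '--label %s\n' "${labels[@]}"
```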
## Docker Networks

Compose-defined networks:

| Network | Name on Host | Purpose |
|---------|-------------|---------|
| `cameleer` | `cameleer-saas_cameleer` | Compose default — shared services (DB, Logto, SaaS) |
| `cameleer-traefik` | `cameleer-traefik` (fixed `name:`) | Traefik + provisioned tenant containers |

Per-tenant networks (created dynamically by `DockerTenantProvisioner`):

| Network | Name Pattern | Purpose |
|---------|-------------|---------|
| Tenant network | `cameleer-tenant-{slug}` | Internal bridge, no internet — isolates tenant server + apps |
| Environment network | `cameleer-env-{tenantId}-{envSlug}` | Tenant-scoped (includes tenantId to prevent slug collision across tenants) |

Server containers join three networks: the tenant network (primary), the shared services network (`cameleer`), and the traefik network. Apps deployed by the server use the tenant network as primary.

**Backend IP resolution:** Traefik's Docker provider is configured with `network: cameleer-traefik` (static `traefik.yml`). Every cameleer-managed container — saas-provisioned tenant containers (via `DockerTenantProvisioner`) and cameleer-server's per-app containers (via `DockerNetworkManager`) — is attached to `cameleer-traefik` at creation, so Traefik always resolves a reachable backend IP. Provisioned tenant containers additionally carry a `traefik.docker.network=cameleer-traefik` label as per-service defense-in-depth. (Before 2026-04-23 the static config pointed at `network: cameleer`, a name that never matched any real network — that produced a 504 Gateway Timeout on every managed app until the Traefik image was rebuilt.)

## Custom sign-in UI (`ui/sign-in/`)

Separate Vite+React SPA replacing Logto's default sign-in page. Visually matches the cameleer-server LoginPage.

- Built as a custom Logto Docker image (`cameleer-logto`): `ui/sign-in/Dockerfile` = node build stage + `FROM ghcr.io/logto-io/logto:latest` + COPY dist over `/etc/logto/packages/experience/dist/`
- Uses `@cameleer/design-system` components (Card, Input, Button, FormField, Alert)
- Authenticates via the Logto Experience API (init -> verify password -> identify -> submit -> redirect)
- `CUSTOM_UI_PATH` env var does NOT work for Logto OSS — must volume-mount or replace the experience dist directory
- Favicon bundled in `ui/sign-in/public/favicon.svg` (served by Logto, not SaaS)

## Deployment pipeline

App deployment is handled by the cameleer-server's `DeploymentExecutor` (7-stage async flow):

1. PRE_FLIGHT — validate config, check JAR exists
2. PULL_IMAGE — pull base image if missing
3. CREATE_NETWORK — ensure cameleer-traefik and cameleer-env-{slug} networks
4. START_REPLICAS — create N containers with Traefik labels
5. HEALTH_CHECK — poll `/cameleer/health` on agent port 9464
6. SWAP_TRAFFIC — stop old deployment (blue/green)
7. COMPLETE — mark RUNNING or DEGRADED

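The stage sequence above can be sketched as a simple lookup — illustrative only, not code from `DeploymentExecutor`:

```shell
#!/usr/bin/env bash
# Maps a stage number to its name, mirroring the 7-stage flow above.
stage_name() {
  case "$1" in
    1) echo PRE_FLIGHT ;;
    2) echo PULL_IMAGE ;;
    3) echo CREATE_NETWORK ;;
    4) echo START_REPLICAS ;;
    5) echo HEALTH_CHECK ;;
    6) echo SWAP_TRAFFIC ;;
    7) echo COMPLETE ;;
    *) echo UNKNOWN ;;
  esac
}

for i in 1 2 3 4 5 6 7; do
  echo "stage $i: $(stage_name "$i")"
done
```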
Key files:

- `DeploymentExecutor.java` (in cameleer-server) — async staged deployment, runtime type auto-detection
- `DockerRuntimeOrchestrator.java` (in cameleer-server) — Docker client, container lifecycle, builds runtime-type-specific entrypoints (spring-boot uses `-cp` + `PropertiesLauncher` with `-Dloader.path` for the log appender; quarkus uses `-jar`; plain-java uses `-cp` + detected main class; native executes the binary directly). Overrides the Dockerfile ENTRYPOINT.
- `docker/runtime-base/Dockerfile` — base image with agent JAR + `cameleer-log-appender.jar` + JRE. The Dockerfile ENTRYPOINT (`-jar /app/app.jar`) is a fallback — `DockerRuntimeOrchestrator` overrides it at container creation.
- `RuntimeDetector.java` (in cameleer-server) — detects runtime type from the JAR manifest `Main-Class`; derives the correct `PropertiesLauncher` package (Spring Boot 3.2+ vs pre-3.2)
- `ServerApiClient.java` — M2M token acquisition for SaaS->server API calls (agent status). Uses `X-Cameleer-Protocol-Version: 1` header
- Docker socket access: `group_add: ["0"]` in docker-compose.dev.yml (not root group membership in Dockerfile)
- Network: deployed containers join `cameleer-tenant-{slug}` (primary, isolation) + `cameleer-traefik` (routing) + `cameleer-env-{tenantId}-{envSlug}` (environment isolation)

## Bootstrap (`docker/logto-bootstrap.sh`)

Idempotent script run inside the Logto container entrypoint. **Clean slate** — no example tenant, no viewer user, no server configuration. Phases:

1. Wait for Logto health (no server to wait for — servers are provisioned per-tenant)
2. Get Management API token (reads `m-default` secret from DB)
3. Create Logto apps (SPA, Traditional Web App with `skipConsent`, M2M with Management API role + server API role)
3b. Create API resource scopes (1 platform + 9 tenant + 3 server scopes)
4. Create org roles (owner, operator, viewer with API resource scope assignments) + M2M server role (`cameleer-m2m-server` with `server:admin` scope)
5. Create admin user (SaaS admin with Logto console access)
7b. Configure Logto Custom JWT for access tokens (maps org roles -> `roles` claim: owner->server:admin, operator->server:operator, viewer->server:viewer; saas-vendor global role -> server:admin)
8. Configure Logto sign-in branding (Cameleer colors `#C6820E`/`#D4941E`, logo from `/platform/logo.svg`)
9. Cleanup seeded Logto apps
10. Write bootstrap results to `/data/logto-bootstrap.json`
12. Create `saas-vendor` global role with all API scopes and assign to admin user (always runs — admin IS the platform admin).

The multi-tenant compose stack is: Traefik + PostgreSQL + ClickHouse + Logto (with bootstrap entrypoint) + cameleer-saas. No `cameleer-server` or `cameleer-server-ui` in compose — those are provisioned per-tenant by `DockerTenantProvisioner`.

@@ -18,6 +18,6 @@ providers:
   docker:
     endpoint: "unix:///var/run/docker.sock"
     exposedByDefault: false
-    network: cameleer
+    network: cameleer-traefik
   file:
     filename: /etc/traefik/dynamic.yml

@@ -897,7 +897,7 @@ Env vars injected into provisioned per-tenant server containers by `DockerTenant

 | `CAMELEER_SERVER_CLICKHOUSE_URL` | `jdbc:clickhouse://cameleer-clickhouse:8123/cameleer` | ClickHouse JDBC URL |
 | `CAMELEER_SERVER_TENANT_ID` | *(tenant slug)* | Tenant identifier for data isolation |
 | `CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKEN` | *(generated)* | Agent bootstrap token |
-| `CAMELEER_SERVER_SECURITY_JWTSECRET` | *(generated)* | JWT signing secret |
+| `CAMELEER_SERVER_SECURITY_JWTSECRET` | *(generated, must be non-empty)* | JWT signing secret |
 | `CAMELEER_SERVER_SECURITY_OIDC_ISSUERURI` | `${PUBLIC_PROTOCOL}://${PUBLIC_HOST}/oidc` | OIDC issuer for M2M tokens |
 | `CAMELEER_SERVER_SECURITY_OIDC_JWKSETURI` | `http://cameleer-logto:3001/oidc/jwks` | Docker-internal JWK fetch |
 | `CAMELEER_SERVER_SECURITY_OIDC_AUDIENCE` | `https://api.cameleer.local` | JWT audience validation |

@@ -0,0 +1,961 @@
# Externalize Docker Compose Templates — Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Replace inline docker-compose generation in installer scripts with static template files, reducing duplication and enabling user customization.

**Architecture:** Static YAML templates in `installer/templates/` are copied to the install directory. The installer writes `.env` (including `COMPOSE_FILE` to select which templates are active) and runs `docker compose up -d`. Conditional features (TLS, monitoring) are handled via compose file layering and `.env` variables instead of heredoc injection.

**Tech Stack:** Docker Compose v2, YAML, Bash, PowerShell

**Spec:** `docs/superpowers/specs/2026-04-15-externalize-compose-templates-design.md`

---

### Task 1: Create `docker-compose.yml` (infra base template)

**Files:**
- Create: `installer/templates/docker-compose.yml`

This is the shared infrastructure base — always loaded regardless of deployment mode.

- [ ] **Step 1: Create the infra base template**

```yaml
# Cameleer Infrastructure
# Shared base — always loaded. Mode-specific services in separate compose files.

services:
  cameleer-traefik:
    image: ${TRAEFIK_IMAGE:-gitea.siegeln.net/cameleer/cameleer-traefik}:${VERSION:-latest}
    restart: unless-stopped
    ports:
      - "${HTTP_PORT:-80}:80"
      - "${HTTPS_PORT:-443}:443"
      - "${LOGTO_CONSOLE_BIND:-127.0.0.1}:${LOGTO_CONSOLE_PORT:-3002}:3002"
    environment:
      PUBLIC_HOST: ${PUBLIC_HOST:-localhost}
      CERT_FILE: ${CERT_FILE:-}
      KEY_FILE: ${KEY_FILE:-}
      CA_FILE: ${CA_FILE:-}
    volumes:
      - cameleer-certs:/certs
      - ${DOCKER_SOCKET:-/var/run/docker.sock}:/var/run/docker.sock:ro
    labels:
      - "prometheus.io/scrape=true"
      - "prometheus.io/port=8082"
      - "prometheus.io/path=/metrics"
    networks:
      - cameleer
      - cameleer-traefik
      - monitoring

  cameleer-postgres:
    image: ${POSTGRES_IMAGE:-gitea.siegeln.net/cameleer/cameleer-postgres}:${VERSION:-latest}
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${POSTGRES_DB:-cameleer_saas}
      POSTGRES_USER: ${POSTGRES_USER:-cameleer}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?POSTGRES_PASSWORD must be set in .env}
    volumes:
      - cameleer-pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER:-cameleer} -d $${POSTGRES_DB:-cameleer_saas}"]
      interval: 5s
      timeout: 5s
      retries: 5
    networks:
      - cameleer
      - monitoring

  cameleer-clickhouse:
    image: ${CLICKHOUSE_IMAGE:-gitea.siegeln.net/cameleer/cameleer-clickhouse}:${VERSION:-latest}
    restart: unless-stopped
    environment:
      CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD:?CLICKHOUSE_PASSWORD must be set in .env}
    volumes:
      - cameleer-chdata:/var/lib/clickhouse
    healthcheck:
      test: ["CMD-SHELL", "clickhouse-client --password $${CLICKHOUSE_PASSWORD} --query 'SELECT 1'"]
      interval: 10s
      timeout: 5s
      retries: 3
    labels:
      - "prometheus.io/scrape=true"
      - "prometheus.io/port=9363"
      - "prometheus.io/path=/metrics"
    networks:
      - cameleer
      - monitoring

volumes:
  cameleer-pgdata:
  cameleer-chdata:
  cameleer-certs:

networks:
  cameleer:
    driver: bridge
  cameleer-traefik:
    name: cameleer-traefik
    driver: bridge
  monitoring:
    name: cameleer-monitoring-noop
```

Key changes from the generated version:
- Logto console port always present, with `LOGTO_CONSOLE_BIND` controlling exposure
- Prometheus labels unconditional on traefik and clickhouse
- `monitoring` network defined as a local noop bridge
- All services join the `monitoring` network
- `POSTGRES_DB` uses `${POSTGRES_DB:-cameleer_saas}` (parameterized — standalone overrides via `.env`)
- Password variables use `:?` fail-if-unset

Note: SaaS mode uses `cameleer-postgres` (custom multi-DB image) while standalone uses `postgres:16-alpine`. The `POSTGRES_IMAGE` variable already handles this — the infra base uses `${POSTGRES_IMAGE:-...}` and the standalone `.env` sets `POSTGRES_IMAGE=postgres:16-alpine`.

- [ ] **Step 2: Verify YAML is valid**

Run: `python -c "import yaml; yaml.safe_load(open('installer/templates/docker-compose.yml'))"`
Expected: no output (valid YAML). If python/yaml is not available, use `docker compose -f installer/templates/docker-compose.yml config --quiet` (will fail on unset vars, but validates structure).

- [ ] **Step 3: Commit**

```bash
git add installer/templates/docker-compose.yml
git commit -m "feat(installer): add infra base docker-compose template"
```

---

### Task 2: Create `docker-compose.saas.yml` (SaaS mode template)

**Files:**
- Create: `installer/templates/docker-compose.saas.yml`

SaaS-specific services: the Logto identity provider and the cameleer-saas management plane.

- [ ] **Step 1: Create the SaaS template**

```yaml
# Cameleer SaaS — Logto + management plane
# Loaded in SaaS deployment mode

services:
  cameleer-logto:
    image: ${LOGTO_IMAGE:-gitea.siegeln.net/cameleer/cameleer-logto}:${VERSION:-latest}
    restart: unless-stopped
    depends_on:
      cameleer-postgres:
        condition: service_healthy
    environment:
      DB_URL: postgres://${POSTGRES_USER:-cameleer}:${POSTGRES_PASSWORD}@cameleer-postgres:5432/logto
      ENDPOINT: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}
      ADMIN_ENDPOINT: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}:${LOGTO_CONSOLE_PORT:-3002}
      TRUST_PROXY_HEADER: 1
      NODE_TLS_REJECT_UNAUTHORIZED: "${NODE_TLS_REJECT:-0}"
      LOGTO_ENDPOINT: http://cameleer-logto:3001
      LOGTO_ADMIN_ENDPOINT: http://cameleer-logto:3002
      LOGTO_PUBLIC_ENDPOINT: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}
      PUBLIC_HOST: ${PUBLIC_HOST:-localhost}
      PUBLIC_PROTOCOL: ${PUBLIC_PROTOCOL:-https}
      PG_HOST: cameleer-postgres
      PG_USER: ${POSTGRES_USER:-cameleer}
      PG_PASSWORD: ${POSTGRES_PASSWORD}
      PG_DB_SAAS: cameleer_saas
      SAAS_ADMIN_USER: ${SAAS_ADMIN_USER:-admin}
      SAAS_ADMIN_PASS: ${SAAS_ADMIN_PASS:?SAAS_ADMIN_PASS must be set in .env}
    healthcheck:
      test: ["CMD-SHELL", "node -e \"require('http').get('http://localhost:3001/oidc/.well-known/openid-configuration', r => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))\" && test -f /data/logto-bootstrap.json"]
      interval: 10s
      timeout: 5s
      retries: 60
      start_period: 30s
    labels:
      - traefik.enable=true
      - traefik.http.routers.cameleer-logto.rule=PathPrefix(`/`)
      - traefik.http.routers.cameleer-logto.priority=1
      - traefik.http.routers.cameleer-logto.entrypoints=websecure
      - traefik.http.routers.cameleer-logto.tls=true
      - traefik.http.routers.cameleer-logto.service=cameleer-logto
      - traefik.http.routers.cameleer-logto.middlewares=cameleer-logto-cors
      - "traefik.http.middlewares.cameleer-logto-cors.headers.accessControlAllowOriginList=${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}:${LOGTO_CONSOLE_PORT:-3002}"
      - traefik.http.middlewares.cameleer-logto-cors.headers.accessControlAllowMethods=GET,POST,PUT,PATCH,DELETE,OPTIONS
      - traefik.http.middlewares.cameleer-logto-cors.headers.accessControlAllowHeaders=Authorization,Content-Type
      - traefik.http.middlewares.cameleer-logto-cors.headers.accessControlAllowCredentials=true
      - traefik.http.services.cameleer-logto.loadbalancer.server.port=3001
      - traefik.http.routers.cameleer-logto-console.rule=PathPrefix(`/`)
      - traefik.http.routers.cameleer-logto-console.entrypoints=admin-console
      - traefik.http.routers.cameleer-logto-console.tls=true
      - traefik.http.routers.cameleer-logto-console.service=cameleer-logto-console
      - traefik.http.services.cameleer-logto-console.loadbalancer.server.port=3002
    volumes:
      - cameleer-bootstrapdata:/data
    networks:
      - cameleer
      - monitoring

  cameleer-saas:
    image: ${CAMELEER_IMAGE:-gitea.siegeln.net/cameleer/cameleer-saas}:${VERSION:-latest}
    restart: unless-stopped
    depends_on:
      cameleer-logto:
        condition: service_healthy
    environment:
      # SaaS database
      SPRING_DATASOURCE_URL: jdbc:postgresql://cameleer-postgres:5432/cameleer_saas
      SPRING_DATASOURCE_USERNAME: ${POSTGRES_USER:-cameleer}
      SPRING_DATASOURCE_PASSWORD: ${POSTGRES_PASSWORD}
      # Identity (Logto)
      CAMELEER_SAAS_IDENTITY_LOGTOENDPOINT: http://cameleer-logto:3001
      CAMELEER_SAAS_IDENTITY_LOGTOPUBLICENDPOINT: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}
      # Provisioning — passed to per-tenant server containers
      CAMELEER_SAAS_PROVISIONING_PUBLICHOST: ${PUBLIC_HOST:-localhost}
      CAMELEER_SAAS_PROVISIONING_PUBLICPROTOCOL: ${PUBLIC_PROTOCOL:-https}
      CAMELEER_SAAS_PROVISIONING_NETWORKNAME: ${COMPOSE_PROJECT_NAME:-cameleer-saas}_cameleer
      CAMELEER_SAAS_PROVISIONING_TRAEFIKNETWORK: cameleer-traefik
      CAMELEER_SAAS_PROVISIONING_DATASOURCEUSERNAME: ${POSTGRES_USER:-cameleer}
      CAMELEER_SAAS_PROVISIONING_DATASOURCEPASSWORD: ${POSTGRES_PASSWORD}
      CAMELEER_SAAS_PROVISIONING_CLICKHOUSEPASSWORD: ${CLICKHOUSE_PASSWORD}
      CAMELEER_SAAS_PROVISIONING_SERVERIMAGE: ${CAMELEER_SAAS_PROVISIONING_SERVERIMAGE:-gitea.siegeln.net/cameleer/cameleer-server:latest}
      CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE: ${CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE:-gitea.siegeln.net/cameleer/cameleer-server-ui:latest}
    labels:
      - traefik.enable=true
      - traefik.http.routers.saas.rule=PathPrefix(`/platform`)
      - traefik.http.routers.saas.entrypoints=websecure
      - traefik.http.routers.saas.tls=true
      - traefik.http.services.saas.loadbalancer.server.port=8080
      - "prometheus.io/scrape=true"
      - "prometheus.io/port=8080"
      - "prometheus.io/path=/platform/actuator/prometheus"
    volumes:
      - cameleer-bootstrapdata:/data/bootstrap:ro
      - cameleer-certs:/certs
      - ${DOCKER_SOCKET:-/var/run/docker.sock}:/var/run/docker.sock
    group_add:
      - "${DOCKER_GID:-0}"
    networks:
      - cameleer
      - monitoring

volumes:
  cameleer-bootstrapdata:

networks:
  monitoring:
    name: cameleer-monitoring-noop
```

Key changes:
- Logto console traefik labels always included (harmless when the port is localhost-only)
- Prometheus labels on cameleer-saas always included
- `DOCKER_GID` read from `.env` via `${DOCKER_GID:-0}` instead of an inline `stat`
- Both services join the `monitoring` network
- `monitoring` network redefined as a noop bridge (compose merges it with the base definition)

- [ ] **Step 2: Commit**

```bash
git add installer/templates/docker-compose.saas.yml
git commit -m "feat(installer): add SaaS docker-compose template"
```

---

### Task 3: Create `docker-compose.server.yml` (standalone mode template)

**Files:**
- Create: `installer/templates/docker-compose.server.yml`
- Create: `installer/templates/traefik-dynamic.yml`

Standalone-specific services: cameleer-server + server-ui. Also includes the traefik dynamic config that standalone mode needs (overrides the baked-in SaaS redirect).

- [ ] **Step 1: Create the standalone template**

```yaml
# Cameleer Server (standalone)
# Loaded in standalone deployment mode

services:
  cameleer-traefik:
    volumes:
      - ./traefik-dynamic.yml:/etc/traefik/dynamic.yml:ro

  cameleer-postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: ${POSTGRES_DB:-cameleer}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER:-cameleer} -d $${POSTGRES_DB:-cameleer}"]

  cameleer-server:
    image: ${SERVER_IMAGE:-gitea.siegeln.net/cameleer/cameleer-server}:${VERSION:-latest}
    container_name: cameleer-server
    restart: unless-stopped
    depends_on:
      cameleer-postgres:
        condition: service_healthy
    environment:
      CAMELEER_SERVER_TENANT_ID: default
      SPRING_DATASOURCE_URL: jdbc:postgresql://cameleer-postgres:5432/${POSTGRES_DB:-cameleer}?currentSchema=tenant_default
      SPRING_DATASOURCE_USERNAME: ${POSTGRES_USER:-cameleer}
      SPRING_DATASOURCE_PASSWORD: ${POSTGRES_PASSWORD}
      CAMELEER_SERVER_CLICKHOUSE_URL: jdbc:clickhouse://cameleer-clickhouse:8123/cameleer
      CAMELEER_SERVER_CLICKHOUSE_USERNAME: default
      CAMELEER_SERVER_CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD}
      CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKEN: ${BOOTSTRAP_TOKEN:?BOOTSTRAP_TOKEN must be set in .env}
      CAMELEER_SERVER_SECURITY_UIUSER: ${SERVER_ADMIN_USER:-admin}
      CAMELEER_SERVER_SECURITY_UIPASSWORD: ${SERVER_ADMIN_PASS:?SERVER_ADMIN_PASS must be set in .env}
      CAMELEER_SERVER_SECURITY_CORSALLOWEDORIGINS: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}
      CAMELEER_SERVER_RUNTIME_ENABLED: "true"
      CAMELEER_SERVER_RUNTIME_SERVERURL: http://cameleer-server:8081
      CAMELEER_SERVER_RUNTIME_ROUTINGDOMAIN: ${PUBLIC_HOST:-localhost}
      CAMELEER_SERVER_RUNTIME_ROUTINGMODE: path
      CAMELEER_SERVER_RUNTIME_JARSTORAGEPATH: /data/jars
      CAMELEER_SERVER_RUNTIME_DOCKERNETWORK: cameleer-apps
      CAMELEER_SERVER_RUNTIME_JARDOCKERVOLUME: cameleer-jars
      CAMELEER_SERVER_RUNTIME_BASEIMAGE: gitea.siegeln.net/cameleer/cameleer-runtime-base:${VERSION:-latest}
    labels:
      - traefik.enable=true
      - traefik.http.routers.server-api.rule=PathPrefix(`/api`)
      - traefik.http.routers.server-api.entrypoints=websecure
      - traefik.http.routers.server-api.tls=true
      - traefik.http.services.server-api.loadbalancer.server.port=8081
      - traefik.docker.network=cameleer-traefik
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://localhost:8081/api/v1/health || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 30
      start_period: 30s
    volumes:
      - jars:/data/jars
      - cameleer-certs:/certs:ro
      - ${DOCKER_SOCKET:-/var/run/docker.sock}:/var/run/docker.sock
    group_add:
      - "${DOCKER_GID:-0}"
    networks:
      - cameleer
      - cameleer-traefik
      - cameleer-apps
      - monitoring

  cameleer-server-ui:
    image: ${SERVER_UI_IMAGE:-gitea.siegeln.net/cameleer/cameleer-server-ui}:${VERSION:-latest}
    restart: unless-stopped
    depends_on:
      cameleer-server:
        condition: service_healthy
    environment:
      CAMELEER_API_URL: http://cameleer-server:8081
      BASE_PATH: ""
    labels:
      - traefik.enable=true
      - traefik.http.routers.ui.rule=PathPrefix(`/`)
      - traefik.http.routers.ui.priority=1
      - traefik.http.routers.ui.entrypoints=websecure
      - traefik.http.routers.ui.tls=true
      - traefik.http.services.ui.loadbalancer.server.port=80
      - traefik.docker.network=cameleer-traefik
    networks:
      - cameleer-traefik
      - monitoring

volumes:
  jars:

networks:
  cameleer-apps:
    name: cameleer-apps
    driver: bridge
  monitoring:
    name: cameleer-monitoring-noop
```

Key design decisions:
- `cameleer-traefik` and `cameleer-postgres` entries are **overrides** — compose merges them with the base. The postgres image switches to `postgres:16-alpine` and the healthcheck uses `${POSTGRES_DB:-cameleer}` instead of the hardcoded `cameleer_saas`. Traefik gets the `traefik-dynamic.yml` volume mount.
- `DOCKER_GID` from `.env` via `${DOCKER_GID:-0}`
- `BOOTSTRAP_TOKEN` uses `:?` fail-if-unset
- Both server and server-ui join the `monitoring` network

- [ ] **Step 2: Create the traefik dynamic config template**

```yaml
tls:
  stores:
    default:
      defaultCertificate:
        certFile: /certs/cert.pem
        keyFile: /certs/key.pem
```

This file is only relevant in standalone mode (it overrides the baked-in SaaS `/` -> `/platform/` redirect in the traefik image).

- [ ] **Step 3: Commit**

```bash
git add installer/templates/docker-compose.server.yml installer/templates/traefik-dynamic.yml
git commit -m "feat(installer): add standalone docker-compose and traefik templates"
```

---

### Task 4: Create overlay templates (TLS + monitoring)

**Files:**
- Create: `installer/templates/docker-compose.tls.yml`
- Create: `installer/templates/docker-compose.monitoring.yml`

- [ ] **Step 1: Create the TLS overlay**

```yaml
# Custom TLS certificates overlay
# Adds the user-supplied certificate volume to traefik

services:
  cameleer-traefik:
    volumes:
      - ./certs:/user-certs:ro
```

- [ ] **Step 2: Create the monitoring overlay**

```yaml
# External monitoring network overlay
# Overrides the noop monitoring bridge with a real external network

networks:
  monitoring:
    external: true
    name: ${MONITORING_NETWORK:?MONITORING_NETWORK must be set in .env}
```

This is the key to the monitoring pattern: the base compose files define `monitoring` as a local noop bridge and all services join it. When this overlay is included in `COMPOSE_FILE`, compose merges the network definition — overriding it to point at the real external monitoring network. No per-service entries are needed.

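The layering above can be demonstrated from the shell. This is a sketch of how an install with monitoring enabled might assemble `COMPOSE_FILE` — the actual installer writes the value into `.env`:

```shell
#!/usr/bin/env bash
# COMPOSE_FILE is a colon-separated list; later files override earlier
# definitions on merge, so the monitoring overlay's external network
# definition wins over the noop bridge in the base file.
COMPOSE_FILE="docker-compose.yml:docker-compose.saas.yml:docker-compose.monitoring.yml"
export COMPOSE_FILE

# Split the list to inspect which templates are active
IFS=':' read -r -a files <<< "$COMPOSE_FILE"
printf 'active template: %s\n' "${files[@]}"
# docker compose config --quiet   # would validate the merged configuration
```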
- [ ] **Step 3: Commit**

```bash
git add installer/templates/docker-compose.tls.yml installer/templates/docker-compose.monitoring.yml
git commit -m "feat(installer): add TLS and monitoring overlay templates"
```

---

### Task 5: Create `.env.example`

**Files:**
- Create: `installer/templates/.env.example`

- [ ] **Step 1: Create the documented variable reference**

```bash
# Cameleer Configuration
# Copy this file to .env and fill in the values.
# The installer generates .env automatically — this file is for reference.

# ============================================================
# Compose file assembly (set by installer)
# ============================================================
# SaaS: docker-compose.yml:docker-compose.saas.yml
# Standalone: docker-compose.yml:docker-compose.server.yml
# Add :docker-compose.tls.yml for custom TLS certificates
# Add :docker-compose.monitoring.yml for external monitoring network
COMPOSE_FILE=docker-compose.yml:docker-compose.saas.yml

# ============================================================
# Image version
# ============================================================
VERSION=latest

# ============================================================
# Public access
# ============================================================
PUBLIC_HOST=localhost
PUBLIC_PROTOCOL=https

# ============================================================
# Ports
# ============================================================
HTTP_PORT=80
HTTPS_PORT=443
# Set to 0.0.0.0 to expose Logto admin console externally (default: localhost only)
# LOGTO_CONSOLE_BIND=0.0.0.0
LOGTO_CONSOLE_PORT=3002

# ============================================================
# PostgreSQL
# ============================================================
POSTGRES_USER=cameleer
POSTGRES_PASSWORD=CHANGE_ME
# SaaS: cameleer_saas, Standalone: cameleer
POSTGRES_DB=cameleer_saas

# ============================================================
# ClickHouse
# ============================================================
CLICKHOUSE_PASSWORD=CHANGE_ME

# ============================================================
# Admin credentials (SaaS mode)
# ============================================================
SAAS_ADMIN_USER=admin
SAAS_ADMIN_PASS=CHANGE_ME

# ============================================================
# Admin credentials (standalone mode)
# ============================================================
# SERVER_ADMIN_USER=admin
# SERVER_ADMIN_PASS=CHANGE_ME
# BOOTSTRAP_TOKEN=CHANGE_ME

# ============================================================
# TLS
# ============================================================
# Set to 1 to reject unauthorized TLS certificates (production)
NODE_TLS_REJECT=0
# Custom TLS certificate paths (inside container, set by installer)
# CERT_FILE=/user-certs/cert.pem
# KEY_FILE=/user-certs/key.pem
# CA_FILE=/user-certs/ca.pem

# ============================================================
# Docker
# ============================================================
DOCKER_SOCKET=/var/run/docker.sock
# GID of the docker socket — detected by installer, used for container group_add
DOCKER_GID=0

# ============================================================
# Provisioning images (SaaS mode only)
# ============================================================
# CAMELEER_SAAS_PROVISIONING_SERVERIMAGE=gitea.siegeln.net/cameleer/cameleer-server:latest
# CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE=gitea.siegeln.net/cameleer/cameleer-server-ui:latest

# ============================================================
# Monitoring (optional)
# ============================================================
# External Docker network name for Prometheus scraping.
# Only needed when docker-compose.monitoring.yml is in COMPOSE_FILE.
# MONITORING_NETWORK=prometheus
```

- [ ] **Step 2: Commit**

```bash
git add installer/templates/.env.example
git commit -m "feat(installer): add .env.example with documented variables"
```

---

### Task 6: Update `install.sh` — replace compose generation with template copying

**Files:**
- Modify: `installer/install.sh:574-672` (generate_env_file — add COMPOSE_FILE and LOGTO_CONSOLE_BIND)
- Modify: `installer/install.sh:674-1135` (replace generate_compose_file + generate_compose_file_standalone with copy_templates)
- Modify: `installer/install.sh:1728-1731` (reinstall cleanup — delete template files)
- Modify: `installer/install.sh:1696-1710` (upgrade path — copy templates instead of generate)
- Modify: `installer/install.sh:1790-1791` (main — call copy_templates instead of generate_compose_file)

- [ ] **Step 1: Replace `generate_compose_file` and `generate_compose_file_standalone` with `copy_templates`**

Delete both functions (`generate_compose_file` at line 674 and `generate_compose_file_standalone` at line 934) and replace with:

```bash
copy_templates() {
    local src
    src="$(cd "$(dirname "$0")" && pwd)/templates"

    # Base infra — always copied
    cp "$src/docker-compose.yml" "$INSTALL_DIR/docker-compose.yml"
    cp "$src/.env.example" "$INSTALL_DIR/.env.example"

    # Mode-specific
    if [ "$DEPLOYMENT_MODE" = "standalone" ]; then
        cp "$src/docker-compose.server.yml" "$INSTALL_DIR/docker-compose.server.yml"
        cp "$src/traefik-dynamic.yml" "$INSTALL_DIR/traefik-dynamic.yml"
    else
        cp "$src/docker-compose.saas.yml" "$INSTALL_DIR/docker-compose.saas.yml"
    fi

    # Optional overlays
    if [ "$TLS_MODE" = "custom" ]; then
        cp "$src/docker-compose.tls.yml" "$INSTALL_DIR/docker-compose.tls.yml"
    fi
    if [ -n "$MONITORING_NETWORK" ]; then
        cp "$src/docker-compose.monitoring.yml" "$INSTALL_DIR/docker-compose.monitoring.yml"
    fi

    log_info "Copied docker-compose templates to $INSTALL_DIR"
}
```

- [ ] **Step 2: Update `generate_env_file` to include `COMPOSE_FILE`, `LOGTO_CONSOLE_BIND`, and `DOCKER_GID`**

In the standalone `.env` block (line 577-614), add after the `DOCKER_GID` line:

```bash
# Compose file assembly
COMPOSE_FILE=docker-compose.yml:docker-compose.server.yml$([ "$TLS_MODE" = "custom" ] && echo ":docker-compose.tls.yml")$([ -n "$MONITORING_NETWORK" ] && echo ":docker-compose.monitoring.yml")
EOF
```

In the SaaS `.env` block (line 617-668), add `LOGTO_CONSOLE_BIND` and `COMPOSE_FILE`. After the `LOGTO_CONSOLE_PORT` line:

```bash
LOGTO_CONSOLE_BIND=$([ "$LOGTO_CONSOLE_EXPOSED" = "true" ] && echo "0.0.0.0" || echo "127.0.0.1")
```

And at the end of the SaaS block, add the `COMPOSE_FILE` line:

```bash
# Compose file assembly
COMPOSE_FILE=docker-compose.yml:docker-compose.saas.yml$([ "$TLS_MODE" = "custom" ] && echo ":docker-compose.tls.yml")$([ -n "$MONITORING_NETWORK" ] && echo ":docker-compose.monitoring.yml")
```

Also add the `MONITORING_NETWORK` variable to `.env` when set:

```bash
if [ -n "$MONITORING_NETWORK" ]; then
    echo "" >> "$f"
    echo "# Monitoring" >> "$f"
    echo "MONITORING_NETWORK=${MONITORING_NETWORK}" >> "$f"
fi
```

- [ ] **Step 3: Update `main()` — replace `generate_compose_file` call with `copy_templates`**

At line 1791, change:
```bash
generate_compose_file
```
to:
```bash
copy_templates
```

- [ ] **Step 4: Update `handle_rerun` upgrade path**

At line 1703, change:
```bash
generate_compose_file
```
to:
```bash
copy_templates
```

- [ ] **Step 5: Update reinstall cleanup to remove template files**

At lines 1728-1731, update the `rm -f` list to include all possible template files:
```bash
rm -f "$INSTALL_DIR/.env" "$INSTALL_DIR/.env.bak" "$INSTALL_DIR/.env.example" \
    "$INSTALL_DIR/docker-compose.yml" "$INSTALL_DIR/docker-compose.saas.yml" \
    "$INSTALL_DIR/docker-compose.server.yml" "$INSTALL_DIR/docker-compose.tls.yml" \
    "$INSTALL_DIR/docker-compose.monitoring.yml" "$INSTALL_DIR/traefik-dynamic.yml" \
    "$INSTALL_DIR/cameleer.conf" "$INSTALL_DIR/credentials.txt" \
    "$INSTALL_DIR/INSTALL.md"
```

- [ ] **Step 6: Commit**

```bash
git add installer/install.sh
git commit -m "refactor(installer): replace sh compose generation with template copying"
```

---

### Task 7: Update `install.ps1` — replace compose generation with template copying

**Files:**
- Modify: `installer/install.ps1:574-666` (Generate-EnvFile — add COMPOSE_FILE and LOGTO_CONSOLE_BIND)
- Modify: `installer/install.ps1:671-1105` (replace Generate-ComposeFile + Generate-ComposeFileStandalone with Copy-Templates)
- Modify: `installer/install.ps1:1706-1723` (upgrade path)
- Modify: `installer/install.ps1:1746` (reinstall cleanup)
- Modify: `installer/install.ps1:1797-1798` (Main — call Copy-Templates)

- [ ] **Step 1: Replace `Generate-ComposeFile` and `Generate-ComposeFileStandalone` with `Copy-Templates`**

Delete both functions and replace with:

```powershell
function Copy-Templates {
    $c = $script:cfg
    $src = Join-Path $PSScriptRoot 'templates'

    # Base infra — always copied
    Copy-Item (Join-Path $src 'docker-compose.yml') (Join-Path $c.InstallDir 'docker-compose.yml') -Force
    Copy-Item (Join-Path $src '.env.example') (Join-Path $c.InstallDir '.env.example') -Force

    # Mode-specific
    if ($c.DeploymentMode -eq 'standalone') {
        Copy-Item (Join-Path $src 'docker-compose.server.yml') (Join-Path $c.InstallDir 'docker-compose.server.yml') -Force
        Copy-Item (Join-Path $src 'traefik-dynamic.yml') (Join-Path $c.InstallDir 'traefik-dynamic.yml') -Force
    } else {
        Copy-Item (Join-Path $src 'docker-compose.saas.yml') (Join-Path $c.InstallDir 'docker-compose.saas.yml') -Force
    }

    # Optional overlays
    if ($c.TlsMode -eq 'custom') {
        Copy-Item (Join-Path $src 'docker-compose.tls.yml') (Join-Path $c.InstallDir 'docker-compose.tls.yml') -Force
    }
    if ($c.MonitoringNetwork) {
        Copy-Item (Join-Path $src 'docker-compose.monitoring.yml') (Join-Path $c.InstallDir 'docker-compose.monitoring.yml') -Force
    }

    Log-Info "Copied docker-compose templates to $($c.InstallDir)"
}
```

- [ ] **Step 2: Update `Generate-EnvFile` to include `COMPOSE_FILE`, `LOGTO_CONSOLE_BIND`, and `MONITORING_NETWORK`**

In the standalone `.env` content block, add after `DOCKER_GID`:

```powershell
$composeFile = 'docker-compose.yml:docker-compose.server.yml'
if ($c.TlsMode -eq 'custom') { $composeFile += ':docker-compose.tls.yml' }
if ($c.MonitoringNetwork) { $composeFile += ':docker-compose.monitoring.yml' }
```

Then append to `$content`:
```powershell
$content += "`n`n# Compose file assembly`nCOMPOSE_FILE=$composeFile"
if ($c.MonitoringNetwork) {
    $content += "`n`n# Monitoring`nMONITORING_NETWORK=$($c.MonitoringNetwork)"
}
```

In the SaaS `.env` content block, add `LOGTO_CONSOLE_BIND` after `LOGTO_CONSOLE_PORT`:

```powershell
$consoleBind = if ($c.LogtoConsoleExposed -eq 'true') { '0.0.0.0' } else { '127.0.0.1' }
```

Add to the content string: `LOGTO_CONSOLE_BIND=$consoleBind`

Build `COMPOSE_FILE`:
```powershell
$composeFile = 'docker-compose.yml:docker-compose.saas.yml'
if ($c.TlsMode -eq 'custom') { $composeFile += ':docker-compose.tls.yml' }
if ($c.MonitoringNetwork) { $composeFile += ':docker-compose.monitoring.yml' }
```

And append to `$content`:
```powershell
$content += "`n`n# Compose file assembly`nCOMPOSE_FILE=$composeFile"
if ($c.MonitoringNetwork) {
    $content += "`n`n# Monitoring`nMONITORING_NETWORK=$($c.MonitoringNetwork)"
}
```

- [ ] **Step 3: Update `Main` — replace `Generate-ComposeFile` call with `Copy-Templates`**

At line 1798, change:
```powershell
Generate-ComposeFile
```
to:
```powershell
Copy-Templates
```

- [ ] **Step 4: Update `Handle-Rerun` upgrade path**

At line 1716, change:
```powershell
Generate-ComposeFile
```
to:
```powershell
Copy-Templates
```

- [ ] **Step 5: Update reinstall cleanup to remove template files**

At line 1746, update the filename list:
```powershell
foreach ($fname in @('.env','.env.bak','.env.example','docker-compose.yml','docker-compose.saas.yml','docker-compose.server.yml','docker-compose.tls.yml','docker-compose.monitoring.yml','traefik-dynamic.yml','cameleer.conf','credentials.txt','INSTALL.md')) {
```

- [ ] **Step 6: Commit**

```bash
git add installer/install.ps1
git commit -m "refactor(installer): replace ps1 compose generation with template copying"
```

---

### Task 8: Update existing generated install and clean up

**Files:**
- Modify: `installer/cameleer/docker-compose.yml` (replace with template copy for dev environment)

- [ ] **Step 1: Remove the old generated docker-compose.yml from the cameleer/ directory**

The `installer/cameleer/` directory contains a previously generated install. The `docker-compose.yml` there is now stale — it was generated by the old inline method. Since this is a dev environment output, remove it (it will be recreated by running the installer with the new template approach).

```bash
git rm installer/cameleer/docker-compose.yml
```

- [ ] **Step 2: Add `installer/cameleer/` to `.gitignore` if not already there**

The install output directory should not be tracked. Check if `.gitignore` already covers it. If not, add:

```
installer/cameleer/
```

This prevents generated `.env`, `credentials.txt`, and compose files from being committed.

- [ ] **Step 3: Commit**

```bash
git add -A installer/cameleer/ .gitignore
git commit -m "chore(installer): remove generated install output, add to gitignore"
```

---

### Task 9: Verify the templates produce equivalent output

**Files:** (no changes — verification only)

- [ ] **Step 1: Compare template output against the old generated compose**

Create a temporary `.env` file and run `docker compose config` to render the resolved compose. Compare against the old generated output:

```bash
cd installer/cameleer
# Back up old generated file for comparison
cp docker-compose.yml docker-compose.old.yml 2>/dev/null || true

# Create a test .env that exercises the SaaS path
cat > /tmp/test-saas.env << 'EOF'
COMPOSE_FILE=docker-compose.yml:docker-compose.saas.yml
VERSION=latest
PUBLIC_HOST=test.example.com
PUBLIC_PROTOCOL=https
HTTP_PORT=80
HTTPS_PORT=443
LOGTO_CONSOLE_PORT=3002
LOGTO_CONSOLE_BIND=0.0.0.0
POSTGRES_USER=cameleer
POSTGRES_PASSWORD=testpass
POSTGRES_DB=cameleer_saas
CLICKHOUSE_PASSWORD=testpass
SAAS_ADMIN_USER=admin
SAAS_ADMIN_PASS=testpass
NODE_TLS_REJECT=0
DOCKER_SOCKET=/var/run/docker.sock
DOCKER_GID=0
CAMELEER_SAAS_PROVISIONING_SERVERIMAGE=gitea.siegeln.net/cameleer/cameleer-server:latest
CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE=gitea.siegeln.net/cameleer/cameleer-server-ui:latest
EOF

# Render the new templates
cd ../templates
docker compose --env-file /tmp/test-saas.env config
```

Expected: A fully resolved compose with all 5 services (traefik, postgres, clickhouse, logto, saas), correct environment variables, and the monitoring noop network.

- [ ] **Step 2: Test standalone mode rendering**

```bash
cat > /tmp/test-standalone.env << 'EOF'
COMPOSE_FILE=docker-compose.yml:docker-compose.server.yml
VERSION=latest
PUBLIC_HOST=test.example.com
PUBLIC_PROTOCOL=https
HTTP_PORT=80
HTTPS_PORT=443
POSTGRES_IMAGE=postgres:16-alpine
POSTGRES_USER=cameleer
POSTGRES_PASSWORD=testpass
POSTGRES_DB=cameleer
CLICKHOUSE_PASSWORD=testpass
SERVER_ADMIN_USER=admin
SERVER_ADMIN_PASS=testpass
BOOTSTRAP_TOKEN=testtoken
DOCKER_SOCKET=/var/run/docker.sock
DOCKER_GID=0
EOF

cd ../templates
docker compose --env-file /tmp/test-standalone.env config
```

Expected: 5 services (traefik, postgres with `postgres:16-alpine` image, clickhouse, server, server-ui). Postgres `POSTGRES_DB` should be `cameleer`. Server should have all env vars resolved.

- [ ] **Step 3: Test with TLS + monitoring overlays**

```bash
cat > /tmp/test-full.env << 'EOF'
COMPOSE_FILE=docker-compose.yml:docker-compose.saas.yml:docker-compose.tls.yml:docker-compose.monitoring.yml
VERSION=latest
PUBLIC_HOST=test.example.com
PUBLIC_PROTOCOL=https
HTTP_PORT=80
HTTPS_PORT=443
LOGTO_CONSOLE_PORT=3002
LOGTO_CONSOLE_BIND=0.0.0.0
POSTGRES_USER=cameleer
POSTGRES_PASSWORD=testpass
POSTGRES_DB=cameleer_saas
CLICKHOUSE_PASSWORD=testpass
SAAS_ADMIN_USER=admin
SAAS_ADMIN_PASS=testpass
NODE_TLS_REJECT=0
DOCKER_SOCKET=/var/run/docker.sock
DOCKER_GID=0
MONITORING_NETWORK=prometheus
CAMELEER_SAAS_PROVISIONING_SERVERIMAGE=gitea.siegeln.net/cameleer/cameleer-server:latest
CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE=gitea.siegeln.net/cameleer/cameleer-server-ui:latest
EOF

cd ../templates
docker compose --env-file /tmp/test-full.env config
```

Expected: Same as SaaS mode but with `./certs:/user-certs:ro` volume on traefik and the `monitoring` network declared as `external: true` with name `prometheus`.

- [ ] **Step 4: Clean up temp files**

```bash
rm -f /tmp/test-saas.env /tmp/test-standalone.env /tmp/test-full.env
```

- [ ] **Step 5: Commit verification results as a note (optional)**

No code changes — this task is verification only. If all checks pass, proceed to the final commit.

---

### Task 10: Final commit — update CLAUDE.md deployment modes table

**Files:**
- Modify: `CLAUDE.md` (update Deployment Modes section to reference template files)

- [ ] **Step 1: Update the deployment modes documentation**

In the "Deployment Modes (installer)" section of CLAUDE.md, add a note about the template-based approach:

After the deployment modes table, add:

```markdown
The installer uses static docker-compose templates in `installer/templates/`. Templates are copied to the install directory and composed via `COMPOSE_FILE` in `.env`:
- `docker-compose.yml` — shared infrastructure (traefik, postgres, clickhouse)
- `docker-compose.saas.yml` — SaaS mode (logto, cameleer-saas)
- `docker-compose.server.yml` — standalone mode (server, server-ui)
- `docker-compose.tls.yml` — overlay: custom TLS cert volume
- `docker-compose.monitoring.yml` — overlay: external monitoring network
```

- [ ] **Step 2: Commit**

```bash
git add CLAUDE.md
git commit -m "docs: update CLAUDE.md with template-based installer architecture"
```

@@ -0,0 +1,164 @@
# Externalize Docker Compose Templates

**Date:** 2026-04-15
**Status:** Approved

## Problem

The installer scripts (`install.sh` and `install.ps1`) generate docker-compose YAML inline via heredocs and string interpolation. This causes three problems:

1. **Maintainability** -- ~450 lines of compose generation are duplicated across two scripts in two languages. Every compose change must be made twice.
2. **Readability** -- The compose structure is buried inside 1800-line scripts and difficult to review or edit.
3. **User customization** -- Users cannot tweak the compose after installation without re-running the installer or editing generated output that gets overwritten on next run.

## Design

Replace inline compose generation with static template files. The installer collects user input, writes `.env`, copies templates, and runs `docker compose up -d`.

### File Layout

```
installer/
  templates/
    docker-compose.yml            # Infra: traefik, postgres, clickhouse
    docker-compose.server.yml     # Standalone: server + server-ui
    docker-compose.saas.yml       # SaaS: logto + cameleer-saas
    docker-compose.tls.yml        # Overlay: custom cert volume on traefik
    docker-compose.monitoring.yml # Overlay: override monitoring network to external
    .env.example                  # Documented variable reference
  install.sh
  install.ps1
```

### Compose File Responsibilities

#### `docker-compose.yml` (infra base, always loaded)

- **Services:** `cameleer-traefik`, `cameleer-postgres`, `cameleer-clickhouse`
- **Prometheus labels** on all services unconditionally (harmless when no scraper connects)
- **Logto console port** always mapped; bind address controlled by `${LOGTO_CONSOLE_BIND:-127.0.0.1}` (exposed = `0.0.0.0`, unexposed = localhost-only)
- **Networks:** `cameleer`, `cameleer-traefik`, `monitoring` (local noop bridge by default)
- **Volumes:** `cameleer-pgdata`, `cameleer-chdata`, `cameleer-certs`
- All services reference the `monitoring` network

#### `docker-compose.server.yml` (standalone mode)

- **Services:** `cameleer-server`, `cameleer-server-ui`
- **Additional volumes:** `jars`
- **Additional network:** `cameleer-apps`
- **Monitoring network:** defines local noop bridge, both services reference it

#### `docker-compose.saas.yml` (SaaS mode)

- **Services:** `cameleer-logto`, `cameleer-saas`
- **Additional volume:** `cameleer-bootstrapdata`
- **Monitoring network:** defines local noop bridge, both services reference it

#### `docker-compose.tls.yml` (overlay)

- Adds `./certs:/user-certs:ro` volume mount to `cameleer-traefik`

#### `docker-compose.monitoring.yml` (overlay)

- Overrides the `monitoring` network definition from local noop bridge to `external: true` with `name: ${MONITORING_NETWORK}`
- No per-service entries needed -- services already reference the `monitoring` network in their base files

### Monitoring Network Pattern

Each compose file defines the `monitoring` network as a local bridge with a throwaway name:

```yaml
# In docker-compose.yml, docker-compose.server.yml, docker-compose.saas.yml
networks:
  monitoring:
    name: cameleer-monitoring-noop
```

Services reference this network unconditionally. Without the monitoring overlay, it is an inert local bridge. When `docker-compose.monitoring.yml` is included, Docker Compose merges the network definition, overriding it to external:

```yaml
# docker-compose.monitoring.yml
networks:
  monitoring:
    external: true
    name: ${MONITORING_NETWORK}
```

This avoids conditional YAML injection and keeps a single monitoring overlay file regardless of deployment mode.

### Logto Console Port Handling

Instead of conditionally injecting the port mapping, always include it with a bind address variable:

```yaml
# In docker-compose.yml (traefik ports)
- "${LOGTO_CONSOLE_BIND:-127.0.0.1}:${LOGTO_CONSOLE_PORT:-3002}:3002"
```

The installer writes `LOGTO_CONSOLE_BIND=0.0.0.0` when the user chooses to expose the console, and omits it (defaulting to `127.0.0.1`) when not.

### `.env.example`

Documented reference of all variables, grouped by section:

- **Version/images:** `VERSION`, `CAMELEER_SAAS_PROVISIONING_SERVERIMAGE`, `CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE`
- **Public access:** `PUBLIC_HOST`, `PUBLIC_PROTOCOL`
- **Ports:** `HTTP_PORT`, `HTTPS_PORT`, `LOGTO_CONSOLE_BIND`, `LOGTO_CONSOLE_PORT`
- **PostgreSQL:** `POSTGRES_USER`, `POSTGRES_PASSWORD`, `POSTGRES_DB`
- **ClickHouse:** `CLICKHOUSE_PASSWORD`
- **Admin credentials (SaaS):** `SAAS_ADMIN_USER`, `SAAS_ADMIN_PASS`
- **Admin credentials (standalone):** `SERVER_ADMIN_USER`, `SERVER_ADMIN_PASS`, `BOOTSTRAP_TOKEN`
- **Docker:** `DOCKER_SOCKET`, `DOCKER_GID`
- **TLS (optional):** `CERT_FILE`, `KEY_FILE`, `CA_FILE`
- **Monitoring (optional):** `MONITORING_NETWORK`
- **Compose file assembly:** `COMPOSE_FILE`
- **TLS validation:** `NODE_TLS_REJECT`

### `COMPOSE_FILE` Assembly

The installer builds the `COMPOSE_FILE` value in `.env` based on user choices:

| Mode | COMPOSE_FILE |
|---|---|
| Standalone | `docker-compose.yml:docker-compose.server.yml` |
| Standalone + TLS | `docker-compose.yml:docker-compose.server.yml:docker-compose.tls.yml` |
| Standalone + monitoring | `docker-compose.yml:docker-compose.server.yml:docker-compose.monitoring.yml` |
| SaaS | `docker-compose.yml:docker-compose.saas.yml` |
| SaaS + TLS | `docker-compose.yml:docker-compose.saas.yml:docker-compose.tls.yml` |
| SaaS + TLS + monitoring | `docker-compose.yml:docker-compose.saas.yml:docker-compose.tls.yml:docker-compose.monitoring.yml` |
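
The assembly rule behind this table can be sketched in portable shell; the variable names (`mode`, `tls`, `mon`) are illustrative, not the installer's actual ones:

```shell
# Sketch of COMPOSE_FILE assembly; mode/tls/mon are hypothetical variable names
mode=saas          # or "standalone"
tls=custom         # empty when the TLS overlay is not wanted
mon=prometheus     # empty when the monitoring overlay is not wanted

case "$mode" in
  standalone) cf="docker-compose.yml:docker-compose.server.yml" ;;
  *)          cf="docker-compose.yml:docker-compose.saas.yml" ;;
esac
[ "$tls" = "custom" ] && cf="$cf:docker-compose.tls.yml"
[ -n "$mon" ] && cf="$cf:docker-compose.monitoring.yml"

echo "$cf"
```

For this combination `cf` ends up as the last table row: `docker-compose.yml:docker-compose.saas.yml:docker-compose.tls.yml:docker-compose.monitoring.yml`.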

### Installer Script Changes

**Removed:**

- `generate_compose_file()` / `Generate-ComposeFile` (~250 lines each, SaaS mode)
- `generate_compose_file_standalone()` / `Generate-ComposeFileStandalone` (~200 lines each, standalone mode)
- All conditional YAML injection logic (logto console, TLS, monitoring, GID)

**Added:**

- Copy template files from `templates/` to install directory
- `COMPOSE_FILE` assembly logic in `.env` generation
- `LOGTO_CONSOLE_BIND` variable (replaces conditional port injection)
- `DOCKER_GID` written to `.env` (replaces inline `stat` in heredoc)

**Unchanged:**

- User prompting, argument parsing, config file reading
- Password generation, Docker GID detection
- `credentials.txt`, `INSTALL.md`, `cameleer.conf` generation
- `docker compose up -d` invocation

### Password Fallback Safety

All password variables in templates use `:?` (fail-if-unset) instead of `:-` (default) to prevent silent fallback to weak credentials:

```yaml
SAAS_ADMIN_PASS: ${SAAS_ADMIN_PASS:?SAAS_ADMIN_PASS must be set in .env}
```

Username defaults (`:-admin`) are acceptable and retained.
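
Compose borrows these operators from POSIX shell parameter expansion, so the difference is easy to demonstrate in plain `sh` (the variable name is hypothetical):

```shell
# Demonstrates ":-" (silent default) vs ":?" (hard failure) on an unset variable
unset DEMO_PASS

fallback="${DEMO_PASS:-changeme}"   # ":-" quietly substitutes the default

# ":?" aborts the expansion; run it in a subshell to observe the failure
result=$( (: "${DEMO_PASS:?DEMO_PASS must be set}") 2>/dev/null && echo accepted || echo refused )

echo "$fallback $result"
```

With `:-` the weak default leaks through; with `:?` the expansion refuses to proceed, which is the behavior the templates rely on.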

### Migration for Existing Installs

Existing `installer/cameleer/` has a single generated `docker-compose.yml`. On re-install, the installer copies the template files and regenerates `.env`. The old monolithic compose file is replaced by the multi-file set. `cameleer.conf` preserves user choices, so re-running the installer reproduces the same configuration with the new file structure.

installer/CLAUDE.md (new file, 32 lines)

@@ -0,0 +1,32 @@
# Installer

## Deployment Modes

The installer (`installer/install.sh`) supports two deployment modes:

| | Multi-tenant SaaS (`DEPLOYMENT_MODE=saas`) | Standalone (`DEPLOYMENT_MODE=standalone`) |
|---|---|---|
| **Containers** | traefik, postgres, clickhouse, logto, cameleer-saas | traefik, postgres, clickhouse, server, server-ui |
| **Auth** | Logto OIDC (SaaS admin + tenant users) | Local auth (built-in admin, no identity provider) |
| **Tenant management** | SaaS admin creates/manages tenants via UI | Single server instance, no fleet management |
| **PostgreSQL** | `cameleer-postgres` image (multi-DB init) | Stock `postgres:16-alpine` (server creates schema via Flyway) |
| **Use case** | Platform vendor managing multiple customers | Single customer running the product directly |

Standalone mode uses a simpler compose stack with the server running directly. No Logto, no SaaS management plane, no bootstrap. The admin logs in with local credentials at `/`.

## Compose templates

The installer uses static docker-compose templates in `installer/templates/`. Templates are copied to the install directory and composed via `COMPOSE_FILE` in `.env`:
- `docker-compose.yml` — shared infrastructure (traefik, postgres, clickhouse)
- `docker-compose.saas.yml` — SaaS mode (logto, cameleer-saas)
- `docker-compose.server.yml` — standalone mode (server, server-ui)
- `docker-compose.tls.yml` — overlay: custom TLS cert volume
- `docker-compose.monitoring.yml` — overlay: external monitoring network

## Env var naming convention

- `CAMELEER_AGENT_*` — agent config (consumed by the Java agent)
- `CAMELEER_SERVER_*` — server config (consumed by cameleer-server)
- `CAMELEER_SAAS_*` — SaaS management plane config
- `CAMELEER_SAAS_PROVISIONING_*` — "SaaS forwards this to provisioned tenant servers"
- No prefix (e.g. `POSTGRES_PASSWORD`, `PUBLIC_HOST`) — shared infrastructure, consumed by multiple components
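
A hypothetical `.env` fragment illustrating the convention (the values and the `CAMELEER_SERVER_PORT` name are made up for illustration):

```shell
# No prefix: shared infrastructure, read by several containers
PUBLIC_HOST=cameleer.example.com
POSTGRES_PASSWORD=s3cret

# CAMELEER_SERVER_*: consumed by cameleer-server only (hypothetical variable)
CAMELEER_SERVER_PORT=8080

# CAMELEER_SAAS_PROVISIONING_*: forwarded by the SaaS plane to tenant servers
CAMELEER_SAAS_PROVISIONING_SERVERIMAGE=gitea.siegeln.net/cameleer/cameleer-server:latest
```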
@@ -1,33 +0,0 @@
# Cameleer SaaS Configuration
# Generated by installer v1.0.0 on 2026-04-15 08:55:30 UTC

VERSION=latest

PUBLIC_HOST=desktop-fb5vgj9.siegeln.internal
PUBLIC_PROTOCOL=https

HTTP_PORT=80
HTTPS_PORT=443
LOGTO_CONSOLE_PORT=3002

# PostgreSQL
POSTGRES_USER=cameleer
POSTGRES_PASSWORD=dwnyYXj3bVe6kFcOHERr57SkrkD9476a
POSTGRES_DB=cameleer_saas

# ClickHouse
CLICKHOUSE_PASSWORD=SshXE61qZqB1kVoZpQLbr2mDYokw1ZgJ

# Admin user
SAAS_ADMIN_USER=admin
SAAS_ADMIN_PASS=1J3TrbgIYZbxjav1K14uy5DX8nil6Bdi

# TLS
NODE_TLS_REJECT=0
# Docker
DOCKER_SOCKET=/var/run/docker.sock
DOCKER_GID=0

# Provisioning images
CAMELEER_SAAS_PROVISIONING_SERVERIMAGE=gitea.siegeln.net/cameleer/cameleer-server:latest
CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE=gitea.siegeln.net/cameleer/cameleer-server-ui:latest

@@ -1,95 +0,0 @@
|
||||
# Cameleer SaaS -- Installation Documentation
|
||||
|
||||
## Installation Summary
|
||||
|
||||
| | |
|
||||
|---|---|
|
||||
| **Version** | latest |
|
||||
| **Date** | 2026-04-15 08:55:55 UTC |
|
||||
| **Installer** | v1.0.0 |
|
||||
| **Install Directory** | C:\Users\Hendrik\Documents\projects\cameleer-saas\installer\cameleer |
|
||||
| **Hostname** | desktop-fb5vgj9.siegeln.internal |
|
||||
| **TLS** | Self-signed (auto-generated) |
|
||||
|
||||
## Service URLs
|
||||
|
||||
- **Platform UI:** https://desktop-fb5vgj9.siegeln.internal/platform/
|
||||
- **API Endpoint:** https://desktop-fb5vgj9.siegeln.internal/platform/api/
|
||||
- **Logto Admin Console:** https://desktop-fb5vgj9.siegeln.internal:3002
|
||||
|
||||
## First Steps
|
||||
|
||||
1. Open the Platform UI in your browser
|
||||
2. Log in as admin with the credentials from `credentials.txt`
|
||||
3. Create tenants from the admin console
|
||||
4. The platform will provision a dedicated server instance for each tenant
|
||||
|
||||
## Architecture
|
||||
|
||||
| Container | Purpose |
|
||||
|---|---|
|
||||
| `traefik` | Reverse proxy, TLS termination, routing |
|
||||
| `postgres` | PostgreSQL database (SaaS + Logto + tenant schemas) |
|
||||
| `clickhouse` | Time-series storage (traces, metrics, logs) |
|
||||
| `logto` | OIDC identity provider + bootstrap |
|
||||
| `cameleer-saas` | SaaS platform (Spring Boot + React) |
|
||||
|
||||
Per-tenant `cameleer-server` and `cameleer-server-ui` containers are provisioned dynamically.
|
||||
|
||||
## Networking
|
||||
|
||||
| Port | Service |
|
||||
|---|---|
|
||||
| 80 | HTTP (redirects to HTTPS) |
|
||||
| 443 | HTTPS (main entry point) |
|
||||
| 3002 | Logto Admin Console |
|
||||
|
||||
|
||||
## TLS
|
||||
|
||||
**Mode:** Self-signed (auto-generated)
|
||||
|
||||
The platform generated a self-signed certificate on first boot. To replace it:
|
||||
1. Log in as admin and navigate to **Certificates** in the admin console
|
||||
2. Upload your certificate and key via the UI
|
||||
3. Activate the new certificate (zero-downtime swap)
|
||||
|
||||
## Data & Backups
|
||||
|
||||
| Docker Volume | Contains |
|
||||
|---|---|
|
||||
| `cameleer-pgdata` | PostgreSQL data (tenants, licenses, audit) |
|
||||
| `cameleer-chdata` | ClickHouse data (traces, metrics, logs) |
|
||||
| `cameleer-certs` | TLS certificates |
|
||||
| `cameleer-bootstrapdata` | Logto bootstrap results |
|
||||
|
||||
### Backup Commands
|
||||
|
||||
```bash
|
||||
docker compose -p cameleer-saas exec cameleer-postgres pg_dump -U cameleer cameleer_saas > backup.sql
|
||||
docker compose -p cameleer-saas exec cameleer-clickhouse clickhouse-client --query "SELECT * FROM cameleer.traces FORMAT Native" > traces.native
|
||||
```
|
||||
|
||||
## Upgrading
|
||||
|
||||
```powershell
|
||||
.\install.ps1 -InstallDir C:\Users\Hendrik\Documents\projects\cameleer-saas\installer\cameleer -Version NEW_VERSION
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
| Issue | Command |
|
||||
|---|---|
|
||||
| Service not starting | `docker compose -p cameleer-saas logs SERVICE_NAME` |
|
||||
| Bootstrap failed | `docker compose -p cameleer-saas logs cameleer-logto` |
|
||||
| Routing issues | `docker compose -p cameleer-saas logs cameleer-traefik` |
|
||||
| Database issues | `docker compose -p cameleer-saas exec cameleer-postgres psql -U cameleer -d cameleer_saas` |
|
||||
|
||||
## Uninstalling
|
||||
|
||||
```powershell
|
||||
Set-Location C:\Users\Hendrik\Documents\projects\cameleer-saas\installer\cameleer
|
||||
docker compose -p cameleer-saas down
|
||||
docker compose -p cameleer-saas down -v
|
||||
Remove-Item -Recurse -Force C:\Users\Hendrik\Documents\projects\cameleer-saas\installer\cameleer
|
||||
```
|
||||
@@ -1,18 +0,0 @@
|
||||
# Cameleer installation config
|
||||
# Generated by installer v1.0.0 on 2026-04-15 08:55:30 UTC
|
||||
|
||||
install_dir=C:\Users\Hendrik\Documents\projects\cameleer-saas\installer\cameleer
|
||||
public_host=desktop-fb5vgj9.siegeln.internal
|
||||
public_protocol=https
|
||||
admin_user=admin
|
||||
tls_mode=self-signed
|
||||
http_port=80
|
||||
https_port=443
|
||||
logto_console_port=3002
|
||||
logto_console_exposed=true
|
||||
monitoring_network=
|
||||
version=latest
|
||||
compose_project=cameleer-saas
|
||||
docker_socket=/var/run/docker.sock
|
||||
node_tls_reject=0
|
||||
deployment_mode=saas
|
||||
@@ -1,16 +0,0 @@
|
||||
===========================================
|
||||
CAMELEER PLATFORM CREDENTIALS
|
||||
Generated: 2026-04-15 08:55:55 UTC
|
||||
|
||||
SECURE THIS FILE AND DELETE AFTER NOTING
|
||||
THESE CREDENTIALS CANNOT BE RECOVERED
|
||||
===========================================
|
||||
|
||||
Admin Console: https://desktop-fb5vgj9.siegeln.internal/platform/
|
||||
Admin User: admin
|
||||
Admin Password: 1J3TrbgIYZbxjav1K14uy5DX8nil6Bdi
|
||||
|
||||
PostgreSQL: cameleer / dwnyYXj3bVe6kFcOHERr57SkrkD9476a
|
||||
ClickHouse: default / SshXE61qZqB1kVoZpQLbr2mDYokw1ZgJ
|
||||
|
||||
Logto Console: https://desktop-fb5vgj9.siegeln.internal:3002
|
||||
@@ -578,55 +578,71 @@ function Generate-EnvFile {
    $ts = (Get-Date -Format 'yyyy-MM-dd HH:mm:ss') + ' UTC'
    $bt = Generate-Password

    $jwtSecret = Generate-Password

    if ($c.DeploymentMode -eq 'standalone') {
        $content = @"
# Cameleer Server Configuration (standalone)
# Generated by installer v${CAMELEER_INSTALLER_VERSION} on $ts

VERSION=$($c.Version)
PUBLIC_HOST=$($c.PublicHost)
PUBLIC_PROTOCOL=$($c.PublicProtocol)
HTTP_PORT=$($c.HttpPort)
HTTPS_PORT=$($c.HttpsPort)

# PostgreSQL
POSTGRES_USER=cameleer
POSTGRES_PASSWORD=$($c.PostgresPassword)
POSTGRES_DB=cameleer

# ClickHouse
CLICKHOUSE_PASSWORD=$($c.ClickhousePassword)

# Server admin
SERVER_ADMIN_USER=$($c.AdminUser)
SERVER_ADMIN_PASS=$($c.AdminPass)

# Bootstrap token
BOOTSTRAP_TOKEN=$bt

# JWT signing secret (required by server, must be non-empty)
CAMELEER_SERVER_SECURITY_JWTSECRET=$jwtSecret

# Docker
DOCKER_SOCKET=$($c.DockerSocket)
DOCKER_GID=$gid

POSTGRES_IMAGE=postgres:16-alpine
"@
        if ($c.TlsMode -eq 'custom') {
            $content += "`nCERT_FILE=/user-certs/cert.pem"
            $content += "`nKEY_FILE=/user-certs/key.pem"
            if ($c.CaFile) { $content += "`nCA_FILE=/user-certs/ca.pem" }
        }
        $composeFile = 'docker-compose.yml;docker-compose.server.yml'
        if ($c.TlsMode -eq 'custom') { $composeFile += ';docker-compose.tls.yml' }
        if ($c.MonitoringNetwork) { $composeFile += ';docker-compose.monitoring.yml' }
        $content += "`n`n# Compose file assembly`nCOMPOSE_FILE=$composeFile"
        if ($c.MonitoringNetwork) {
            $content += "`n`n# Monitoring`nMONITORING_NETWORK=$($c.MonitoringNetwork)"
        }
    } else {
        $consoleBind = if ($c.LogtoConsoleExposed -eq 'true') { '0.0.0.0' } else { '127.0.0.1' }
        $content = @"
# Cameleer SaaS Configuration
# Generated by installer v${CAMELEER_INSTALLER_VERSION} on $ts

VERSION=$($c.Version)

PUBLIC_HOST=$($c.PublicHost)
PUBLIC_PROTOCOL=$($c.PublicProtocol)

HTTP_PORT=$($c.HttpPort)
HTTPS_PORT=$($c.HttpsPort)
LOGTO_CONSOLE_PORT=$($c.LogtoConsolePort)
LOGTO_CONSOLE_BIND=$consoleBind

# PostgreSQL
POSTGRES_USER=cameleer
POSTGRES_PASSWORD=$($c.PostgresPassword)
@@ -656,8 +672,19 @@ DOCKER_GID=$gid
# Provisioning images
CAMELEER_SAAS_PROVISIONING_SERVERIMAGE=${REGISTRY}/cameleer-server:$($c.Version)
CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE=${REGISTRY}/cameleer-server-ui:$($c.Version)
CAMELEER_SAAS_PROVISIONING_RUNTIMEBASEIMAGE=${REGISTRY}/cameleer-runtime-base:$($c.Version)

# JWT signing secret (forwarded to provisioned tenant servers, must be non-empty)
CAMELEER_SERVER_SECURITY_JWTSECRET=$jwtSecret
"@
        $content += $provisioningBlock
        $composeFile = 'docker-compose.yml;docker-compose.saas.yml'
        if ($c.TlsMode -eq 'custom') { $composeFile += ';docker-compose.tls.yml' }
        if ($c.MonitoringNetwork) { $composeFile += ';docker-compose.monitoring.yml' }
        $content += "`n`n# Compose file assembly`nCOMPOSE_FILE=$composeFile"
        if ($c.MonitoringNetwork) {
            $content += "`n`n# Monitoring`nMONITORING_NETWORK=$($c.MonitoringNetwork)"
        }
    }

    Write-Utf8File $f $content
@@ -665,443 +692,33 @@ CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE=${REGISTRY}/cameleer-server-ui:$($c.Ver
    Log-Info 'Generated .env'
}

# --- Docker Compose generation ---
# Rule: '@ and "@ closing delimiters must ALWAYS be alone at column 0 on their own line.
# --- Copy docker-compose templates ---

function Generate-ComposeFile {
    $c = $script:cfg
    if ($c.DeploymentMode -eq 'standalone') { Generate-ComposeFileStandalone; return }

    $f = Join-Path $c.InstallDir 'docker-compose.yml'
    $gid = Get-DockerGid $c.DockerSocket
    $out = New-Object System.Collections.Generic.List[string]

    $out.Add(@'
# Cameleer SaaS Platform
# Generated by Cameleer installer -- do not edit manually

services:
  cameleer-traefik:
    image: ${TRAEFIK_IMAGE:-gitea.siegeln.net/cameleer/cameleer-traefik}:${VERSION:-latest}
    restart: unless-stopped
    ports:
      - "${HTTP_PORT:-80}:80"
      - "${HTTPS_PORT:-443}:443"
'@
    )
    if ($c.LogtoConsoleExposed -eq 'true') {
        $out.Add('      - "${LOGTO_CONSOLE_PORT:-3002}:3002"')
    }
    $out.Add(@'
    environment:
      PUBLIC_HOST: ${PUBLIC_HOST:-localhost}
      CERT_FILE: ${CERT_FILE:-}
      KEY_FILE: ${KEY_FILE:-}
      CA_FILE: ${CA_FILE:-}
    volumes:
      - cameleer-certs:/certs
      - ${DOCKER_SOCKET:-/var/run/docker.sock}:/var/run/docker.sock:ro
'@
    )
    if ($c.TlsMode -eq 'custom') { $out.Add('      - ./certs:/user-certs:ro') }
    $out.Add(@'
    networks:
      - cameleer
      - cameleer-traefik
'@
    )
    if ($c.MonitoringNetwork) {
        $out.Add("      - $($c.MonitoringNetwork)")
        $out.Add(@'
    labels:
      - "prometheus.io/scrape=true"
      - "prometheus.io/port=8082"
      - "prometheus.io/path=/metrics"
'@
        )
    }

    $out.Add(@'

  cameleer-postgres:
    image: ${POSTGRES_IMAGE:-gitea.siegeln.net/cameleer/cameleer-postgres}:${VERSION:-latest}
    restart: unless-stopped
    environment:
      POSTGRES_DB: cameleer_saas
      POSTGRES_USER: ${POSTGRES_USER:-cameleer}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - cameleer-pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER:-cameleer} -d cameleer_saas"]
      interval: 5s
      timeout: 5s
      retries: 5
    networks:
      - cameleer
'@
    )
    if ($c.MonitoringNetwork) { $out.Add("      - $($c.MonitoringNetwork)") }

    $out.Add(@'

  cameleer-clickhouse:
    image: ${CLICKHOUSE_IMAGE:-gitea.siegeln.net/cameleer/cameleer-clickhouse}:${VERSION:-latest}
    restart: unless-stopped
    environment:
      CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD}
    volumes:
      - cameleer-chdata:/var/lib/clickhouse
    healthcheck:
      test: ["CMD-SHELL", "clickhouse-client --password $${CLICKHOUSE_PASSWORD} --query 'SELECT 1'"]
      interval: 10s
      timeout: 5s
      retries: 3
    networks:
      - cameleer
'@
    )
    if ($c.MonitoringNetwork) {
        $out.Add("      - $($c.MonitoringNetwork)")
        $out.Add(@'
    labels:
      - "prometheus.io/scrape=true"
      - "prometheus.io/port=9363"
      - "prometheus.io/path=/metrics"
'@
        )
    }

    $out.Add(@'

  cameleer-logto:
    image: ${LOGTO_IMAGE:-gitea.siegeln.net/cameleer/cameleer-logto}:${VERSION:-latest}
    restart: unless-stopped
    depends_on:
      cameleer-postgres:
        condition: service_healthy
    environment:
      DB_URL: postgres://${POSTGRES_USER:-cameleer}:${POSTGRES_PASSWORD}@cameleer-postgres:5432/logto
      ENDPOINT: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}
      ADMIN_ENDPOINT: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}:${LOGTO_CONSOLE_PORT:-3002}
      TRUST_PROXY_HEADER: 1
      NODE_TLS_REJECT_UNAUTHORIZED: "${NODE_TLS_REJECT:-0}"
      LOGTO_ENDPOINT: http://cameleer-logto:3001
      LOGTO_ADMIN_ENDPOINT: http://cameleer-logto:3002
      LOGTO_PUBLIC_ENDPOINT: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}
      PUBLIC_HOST: ${PUBLIC_HOST:-localhost}
      PUBLIC_PROTOCOL: ${PUBLIC_PROTOCOL:-https}
      PG_HOST: cameleer-postgres
      PG_USER: ${POSTGRES_USER:-cameleer}
      PG_PASSWORD: ${POSTGRES_PASSWORD}
      PG_DB_SAAS: cameleer_saas
      SAAS_ADMIN_USER: ${SAAS_ADMIN_USER:-admin}
      SAAS_ADMIN_PASS: ${SAAS_ADMIN_PASS:?SAAS_ADMIN_PASS must be set in .env}
    healthcheck:
      test: ["CMD-SHELL", "node -e \"require('http').get('http://localhost:3001/oidc/.well-known/openid-configuration', r => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))\" && test -f /data/logto-bootstrap.json"]
      interval: 10s
      timeout: 5s
      retries: 60
      start_period: 30s
    labels:
      - traefik.enable=true
      - traefik.http.routers.cameleer-logto.rule=PathPrefix(`/`)
      - traefik.http.routers.cameleer-logto.priority=1
      - traefik.http.routers.cameleer-logto.entrypoints=websecure
      - traefik.http.routers.cameleer-logto.tls=true
      - traefik.http.routers.cameleer-logto.service=cameleer-logto
      - traefik.http.routers.cameleer-logto.middlewares=cameleer-logto-cors
      - "traefik.http.middlewares.cameleer-logto-cors.headers.accessControlAllowOriginList=${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}:${LOGTO_CONSOLE_PORT:-3002}"
      - traefik.http.middlewares.cameleer-logto-cors.headers.accessControlAllowMethods=GET,POST,PUT,PATCH,DELETE,OPTIONS
      - traefik.http.middlewares.cameleer-logto-cors.headers.accessControlAllowHeaders=Authorization,Content-Type
      - traefik.http.middlewares.cameleer-logto-cors.headers.accessControlAllowCredentials=true
      - traefik.http.services.cameleer-logto.loadbalancer.server.port=3001
'@
    )
    if ($c.LogtoConsoleExposed -eq 'true') {
        $out.Add(@'
      - traefik.http.routers.cameleer-logto-console.rule=PathPrefix(`/`)
      - traefik.http.routers.cameleer-logto-console.entrypoints=admin-console
      - traefik.http.routers.cameleer-logto-console.tls=true
      - traefik.http.routers.cameleer-logto-console.service=cameleer-logto-console
      - traefik.http.services.cameleer-logto-console.loadbalancer.server.port=3002
'@
        )
    }

    $out.Add(@'
    volumes:
      - cameleer-bootstrapdata:/data
    networks:
      - cameleer

  cameleer-saas:
    image: ${CAMELEER_IMAGE:-gitea.siegeln.net/cameleer/cameleer-saas}:${VERSION:-latest}
    restart: unless-stopped
    depends_on:
      cameleer-logto:
        condition: service_healthy
    environment:
      SPRING_DATASOURCE_URL: jdbc:postgresql://cameleer-postgres:5432/cameleer_saas
      SPRING_DATASOURCE_USERNAME: ${POSTGRES_USER:-cameleer}
      SPRING_DATASOURCE_PASSWORD: ${POSTGRES_PASSWORD}
      CAMELEER_SAAS_IDENTITY_LOGTOENDPOINT: http://cameleer-logto:3001
      CAMELEER_SAAS_IDENTITY_LOGTOPUBLICENDPOINT: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}
      CAMELEER_SAAS_PROVISIONING_PUBLICHOST: ${PUBLIC_HOST:-localhost}
      CAMELEER_SAAS_PROVISIONING_PUBLICPROTOCOL: ${PUBLIC_PROTOCOL:-https}
      CAMELEER_SAAS_PROVISIONING_NETWORKNAME: ${COMPOSE_PROJECT_NAME:-cameleer-saas}_cameleer
      CAMELEER_SAAS_PROVISIONING_TRAEFIKNETWORK: cameleer-traefik
      CAMELEER_SAAS_PROVISIONING_DATASOURCEUSERNAME: ${POSTGRES_USER:-cameleer}
      CAMELEER_SAAS_PROVISIONING_DATASOURCEPASSWORD: ${POSTGRES_PASSWORD}
      CAMELEER_SAAS_PROVISIONING_CLICKHOUSEPASSWORD: ${CLICKHOUSE_PASSWORD}
      CAMELEER_SAAS_PROVISIONING_SERVERIMAGE: ${CAMELEER_SAAS_PROVISIONING_SERVERIMAGE:-gitea.siegeln.net/cameleer/cameleer-server:latest}
      CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE: ${CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE:-gitea.siegeln.net/cameleer/cameleer-server-ui:latest}
    labels:
      - traefik.enable=true
      - traefik.http.routers.saas.rule=PathPrefix(`/platform`)
      - traefik.http.routers.saas.entrypoints=websecure
      - traefik.http.routers.saas.tls=true
      - traefik.http.services.saas.loadbalancer.server.port=8080
'@
    )
    if ($c.MonitoringNetwork) {
        $out.Add(@'
      - "prometheus.io/scrape=true"
      - "prometheus.io/port=8080"
      - "prometheus.io/path=/platform/actuator/prometheus"
'@
        )
    }
    $out.Add(@'
    volumes:
      - cameleer-bootstrapdata:/data/bootstrap:ro
      - cameleer-certs:/certs
      - ${DOCKER_SOCKET:-/var/run/docker.sock}:/var/run/docker.sock
    networks:
      - cameleer
'@
    )
    if ($c.MonitoringNetwork) { $out.Add("      - $($c.MonitoringNetwork)") }
    $out.Add("    group_add:")
    $out.Add("      - `"$gid`"")

    $out.Add(@'

volumes:
  cameleer-pgdata:
  cameleer-chdata:
  cameleer-certs:
  cameleer-bootstrapdata:

networks:
  cameleer:
    driver: bridge
  cameleer-traefik:
    name: cameleer-traefik
    driver: bridge
'@
    )
    if ($c.MonitoringNetwork) {
        $out.Add("  $($c.MonitoringNetwork):")
        $out.Add('    external: true')
    }

    Write-Utf8File $f ($out -join "`n")
    Log-Info 'Generated docker-compose.yml'
}

function Generate-ComposeFileStandalone {
function Copy-Templates {
    $c = $script:cfg
    $f = Join-Path $c.InstallDir 'docker-compose.yml'
    $gid = Get-DockerGid $c.DockerSocket
    $out = New-Object System.Collections.Generic.List[string]

    $out.Add(@'
# Cameleer Server (standalone)
# Generated by Cameleer installer -- do not edit manually

services:
  cameleer-traefik:
    image: ${TRAEFIK_IMAGE:-gitea.siegeln.net/cameleer/cameleer-traefik}:${VERSION:-latest}
    restart: unless-stopped
    ports:
      - "${HTTP_PORT:-80}:80"
      - "${HTTPS_PORT:-443}:443"
    environment:
      PUBLIC_HOST: ${PUBLIC_HOST:-localhost}
      CERT_FILE: ${CERT_FILE:-}
      KEY_FILE: ${KEY_FILE:-}
      CA_FILE: ${CA_FILE:-}
    volumes:
      - cameleer-certs:/certs
      - ${DOCKER_SOCKET:-/var/run/docker.sock}:/var/run/docker.sock:ro
      - ./traefik-dynamic.yml:/etc/traefik/dynamic.yml:ro
'@
    )
    if ($c.TlsMode -eq 'custom') { $out.Add('      - ./certs:/user-certs:ro') }
    $out.Add(@'
    networks:
      - cameleer
      - cameleer-traefik
'@
    )
    if ($c.MonitoringNetwork) { $out.Add("      - $($c.MonitoringNetwork)") }

    $out.Add(@'

  cameleer-postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${POSTGRES_DB:-cameleer}
      POSTGRES_USER: ${POSTGRES_USER:-cameleer}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - cameleer-pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER:-cameleer} -d $${POSTGRES_DB:-cameleer}"]
      interval: 5s
      timeout: 5s
      retries: 5
    networks:
      - cameleer
'@
    )
    if ($c.MonitoringNetwork) { $out.Add("      - $($c.MonitoringNetwork)") }

    $out.Add(@'

  cameleer-clickhouse:
    image: ${CLICKHOUSE_IMAGE:-gitea.siegeln.net/cameleer/cameleer-clickhouse}:${VERSION:-latest}
    restart: unless-stopped
    environment:
      CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD}
    volumes:
      - cameleer-chdata:/var/lib/clickhouse
    healthcheck:
      test: ["CMD-SHELL", "clickhouse-client --password $${CLICKHOUSE_PASSWORD} --query 'SELECT 1'"]
      interval: 10s
      timeout: 5s
      retries: 3
    networks:
      - cameleer
'@
    )
    if ($c.MonitoringNetwork) { $out.Add("      - $($c.MonitoringNetwork)") }

    # Server block: double-quoted so $gid expands; compose ${VAR} uses backtick-dollar
    $serverBlock = @"

  cameleer-server:
    image: `${SERVER_IMAGE:-gitea.siegeln.net/cameleer/cameleer-server}:`${VERSION:-latest}
    container_name: cameleer-server
    restart: unless-stopped
    depends_on:
      cameleer-postgres:
        condition: service_healthy
    environment:
      CAMELEER_SERVER_TENANT_ID: default
      SPRING_DATASOURCE_URL: jdbc:postgresql://cameleer-postgres:5432/`${POSTGRES_DB:-cameleer}?currentSchema=tenant_default
      SPRING_DATASOURCE_USERNAME: `${POSTGRES_USER:-cameleer}
      SPRING_DATASOURCE_PASSWORD: `${POSTGRES_PASSWORD}
      CAMELEER_SERVER_CLICKHOUSE_URL: jdbc:clickhouse://cameleer-clickhouse:8123/cameleer
      CAMELEER_SERVER_CLICKHOUSE_USERNAME: default
      CAMELEER_SERVER_CLICKHOUSE_PASSWORD: `${CLICKHOUSE_PASSWORD}
      CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKEN: `${BOOTSTRAP_TOKEN}
      CAMELEER_SERVER_SECURITY_UIUSER: `${SERVER_ADMIN_USER:-admin}
      CAMELEER_SERVER_SECURITY_UIPASSWORD: `${SERVER_ADMIN_PASS:?SERVER_ADMIN_PASS must be set in .env}
      CAMELEER_SERVER_SECURITY_CORSALLOWEDORIGINS: `${PUBLIC_PROTOCOL:-https}://`${PUBLIC_HOST:-localhost}
      CAMELEER_SERVER_RUNTIME_ENABLED: "true"
      CAMELEER_SERVER_RUNTIME_SERVERURL: http://cameleer-server:8081
      CAMELEER_SERVER_RUNTIME_ROUTINGDOMAIN: `${PUBLIC_HOST:-localhost}
      CAMELEER_SERVER_RUNTIME_ROUTINGMODE: path
      CAMELEER_SERVER_RUNTIME_JARSTORAGEPATH: /data/jars
      CAMELEER_SERVER_RUNTIME_DOCKERNETWORK: cameleer-apps
      CAMELEER_SERVER_RUNTIME_JARDOCKERVOLUME: cameleer-jars
      CAMELEER_SERVER_RUNTIME_BASEIMAGE: gitea.siegeln.net/cameleer/cameleer-runtime-base:`${VERSION:-latest}
    labels:
      - traefik.enable=true
      - traefik.http.routers.server-api.rule=PathPrefix(``/api``)
      - traefik.http.routers.server-api.entrypoints=websecure
      - traefik.http.routers.server-api.tls=true
      - traefik.http.services.server-api.loadbalancer.server.port=8081
      - traefik.docker.network=cameleer-traefik
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://localhost:8081/api/v1/health || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 30
      start_period: 30s
    volumes:
      - jars:/data/jars
      - cameleer-certs:/certs:ro
      - `${DOCKER_SOCKET:-/var/run/docker.sock}:/var/run/docker.sock
    group_add:
      - "$gid"
    networks:
      - cameleer
      - cameleer-traefik
      - cameleer-apps

  cameleer-server-ui:
    image: `${SERVER_UI_IMAGE:-gitea.siegeln.net/cameleer/cameleer-server-ui}:`${VERSION:-latest}
    restart: unless-stopped
    depends_on:
      cameleer-server:
        condition: service_healthy
    environment:
      CAMELEER_API_URL: http://cameleer-server:8081
      BASE_PATH: ""
    labels:
      - traefik.enable=true
      - traefik.http.routers.ui.rule=PathPrefix(``/``)
      - traefik.http.routers.ui.priority=1
      - traefik.http.routers.ui.entrypoints=websecure
      - traefik.http.routers.ui.tls=true
      - traefik.http.services.ui.loadbalancer.server.port=80
      - traefik.docker.network=cameleer-traefik
    networks:
      - cameleer-traefik
"@
    $out.Add($serverBlock)

    $out.Add(@'

volumes:
  cameleer-pgdata:
  cameleer-chdata:
  cameleer-certs:
  jars:

networks:
  cameleer:
    driver: bridge
  cameleer-traefik:
    name: cameleer-traefik
    driver: bridge
  cameleer-apps:
    name: cameleer-apps
    driver: bridge
'@
    )
    if ($c.MonitoringNetwork) {
        $out.Add("  $($c.MonitoringNetwork):")
        $out.Add('    external: true')
    $src = Join-Path $PSScriptRoot 'templates'

    # Base infra -- always copied
    Copy-Item (Join-Path $src 'docker-compose.yml') (Join-Path $c.InstallDir 'docker-compose.yml') -Force
    Copy-Item (Join-Path $src '.env.example') (Join-Path $c.InstallDir '.env.example') -Force

    # Mode-specific
    if ($c.DeploymentMode -eq 'standalone') {
        Copy-Item (Join-Path $src 'docker-compose.server.yml') (Join-Path $c.InstallDir 'docker-compose.server.yml') -Force
        Copy-Item (Join-Path $src 'traefik-dynamic.yml') (Join-Path $c.InstallDir 'traefik-dynamic.yml') -Force
    } else {
        Copy-Item (Join-Path $src 'docker-compose.saas.yml') (Join-Path $c.InstallDir 'docker-compose.saas.yml') -Force
    }

    Write-Utf8File $f ($out -join "`n")

    $traefikDyn = @'
tls:
  stores:
    default:
      defaultCertificate:
        certFile: /certs/cert.pem
        keyFile: /certs/key.pem
'@
    Write-Utf8File (Join-Path $c.InstallDir 'traefik-dynamic.yml') $traefikDyn

    Log-Info 'Generated docker-compose.yml (standalone)'

    # Optional overlays
    if ($c.TlsMode -eq 'custom') {
        Copy-Item (Join-Path $src 'docker-compose.tls.yml') (Join-Path $c.InstallDir 'docker-compose.tls.yml') -Force
    }
    if ($c.MonitoringNetwork) {
        Copy-Item (Join-Path $src 'docker-compose.monitoring.yml') (Join-Path $c.InstallDir 'docker-compose.monitoring.yml') -Force
    }

    Log-Info "Copied docker-compose templates to $($c.InstallDir)"
}

# --- Docker operations ---
@@ -1424,10 +1041,10 @@ $logtoConsoleRow
|
||||
|
||||
| Container | Purpose |
|
||||
|---|---|
|
||||
| ``traefik`` | Reverse proxy, TLS termination, routing |
|
||||
| ``postgres`` | PostgreSQL database (SaaS + Logto + tenant schemas) |
|
||||
| ``clickhouse`` | Time-series storage (traces, metrics, logs) |
|
||||
| ``logto`` | OIDC identity provider + bootstrap |
|
||||
| ``cameleer-traefik`` | Reverse proxy, TLS termination, routing |
|
||||
| ``cameleer-postgres`` | PostgreSQL database (SaaS + Logto + tenant schemas) |
|
||||
| ``cameleer-clickhouse`` | Time-series storage (traces, metrics, logs) |
|
||||
| ``cameleer-logto`` | OIDC identity provider + bootstrap |
|
||||
| ``cameleer-saas`` | SaaS platform (Spring Boot + React) |
|
||||
|
||||
Per-tenant ``cameleer-server`` and ``cameleer-server-ui`` containers are provisioned dynamically.
|
||||
@@ -1548,11 +1165,11 @@ placing your certificate and key files in the ``certs/`` directory and restartin
|
||||
|
||||
| Container | Purpose |
|
||||
|---|---|
|
||||
| ``traefik`` | Reverse proxy, TLS termination, routing |
|
||||
| ``postgres`` | PostgreSQL database (server data) |
|
||||
| ``clickhouse`` | Time-series storage (traces, metrics, logs) |
|
||||
| ``server`` | Cameleer Server (Spring Boot backend) |
|
||||
| ``server-ui`` | Cameleer Dashboard (React frontend) |
|
||||
| ``cameleer-traefik`` | Reverse proxy, TLS termination, routing |
|
||||
| ``cameleer-postgres`` | PostgreSQL database (server data) |
|
||||
| ``cameleer-clickhouse`` | Time-series storage (traces, metrics, logs) |
|
||||
| ``cameleer-server`` | Cameleer Server (Spring Boot backend) |
|
||||
| ``cameleer-server-ui`` | Cameleer Dashboard (React frontend) |
|
||||
|
||||
## Networking
|
||||
|
||||
@@ -1594,7 +1211,7 @@ docker compose -p $($c.ComposeProject) exec cameleer-clickhouse clickhouse-clien
|
||||
| Issue | Command |
|
||||
|---|---|
|
||||
| Service not starting | ``docker compose -p $($c.ComposeProject) logs SERVICE_NAME`` |
|
||||
| Server issues | ``docker compose -p $($c.ComposeProject) logs server`` |
|
||||
| Server issues | ``docker compose -p $($c.ComposeProject) logs cameleer-server`` |
|
||||
| Routing issues | ``docker compose -p $($c.ComposeProject) logs cameleer-traefik`` |
|
||||
| Database issues | ``docker compose -p $($c.ComposeProject) exec cameleer-postgres psql -U cameleer -d cameleer`` |
|
||||
|
||||
@@ -1713,7 +1330,7 @@ function Handle-Rerun {
|
||||
Merge-Config
|
||||
$script:cfg.InstallDir = $ExecutionContext.SessionState.Path.GetUnresolvedProviderPathFromPSPath(
|
||||
$script:cfg.InstallDir)
|
||||
Generate-ComposeFile
|
||||
Copy-Templates
|
||||
Invoke-ComposePull
|
||||
Invoke-ComposeDown
|
||||
Invoke-ComposeUp
|
||||
@@ -1743,7 +1360,7 @@ function Handle-Rerun {
|
||||
docker compose -p $proj down -v 2>$null
|
||||
} catch {}
|
||||
finally { Pop-Location }
|
||||
foreach ($fname in @('.env','.env.bak','docker-compose.yml','cameleer.conf','credentials.txt','INSTALL.md','traefik-dynamic.yml')) {
|
||||
foreach ($fname in @('.env','.env.bak','.env.example','docker-compose.yml','docker-compose.saas.yml','docker-compose.server.yml','docker-compose.tls.yml','docker-compose.monitoring.yml','traefik-dynamic.yml','cameleer.conf','credentials.txt','INSTALL.md')) {
|
||||
$fp = Join-Path $c.InstallDir $fname
|
||||
if (Test-Path $fp) { Remove-Item $fp -Force }
|
||||
}
|
||||
@@ -1795,7 +1412,7 @@ function Main {
|
||||
if ($script:cfg.TlsMode -eq 'custom') { Copy-Certs }
|
||||
|
||||
Generate-EnvFile
|
||||
Generate-ComposeFile
|
||||
Copy-Templates
|
||||
Write-ConfigFile
|
||||
|
||||
Invoke-ComposePull
|
||||
|
||||
@@ -600,15 +600,28 @@ SERVER_ADMIN_PASS=${ADMIN_PASS}
|
||||
# Bootstrap token (required by server, not used externally in standalone mode)
|
||||
BOOTSTRAP_TOKEN=$(generate_password)
|
||||
|
||||
# JWT signing secret (required by server, must be non-empty)
|
||||
CAMELEER_SERVER_SECURITY_JWTSECRET=$(generate_password)
|
||||
|
||||
# Docker
|
||||
DOCKER_SOCKET=${DOCKER_SOCKET}
|
||||
DOCKER_GID=$(stat -c '%g' "${DOCKER_SOCKET}" 2>/dev/null || echo "0")
|
||||
|
||||
POSTGRES_IMAGE=postgres:16-alpine
|
||||
|
||||
# Compose file assembly
|
||||
COMPOSE_FILE=docker-compose.yml:docker-compose.server.yml$([ "$TLS_MODE" = "custom" ] && echo ":docker-compose.tls.yml")$([ -n "$MONITORING_NETWORK" ] && echo ":docker-compose.monitoring.yml")
|
||||
EOF
|
||||
if [ "$TLS_MODE" = "custom" ]; then
|
||||
echo "CERT_FILE=/user-certs/cert.pem" >> "$f"
|
||||
echo "KEY_FILE=/user-certs/key.pem" >> "$f"
|
||||
[ -n "$CA_FILE" ] && echo "CA_FILE=/user-certs/ca.pem" >> "$f"
|
||||
fi
|
||||
if [ -n "$MONITORING_NETWORK" ]; then
|
||||
echo "" >> "$f"
|
||||
echo "# Monitoring" >> "$f"
|
||||
echo "MONITORING_NETWORK=${MONITORING_NETWORK}" >> "$f"
|
||||
fi
|
||||
log_info "Generated .env"
|
||||
cp "$f" "$INSTALL_DIR/.env.bak"
|
||||
return
|
||||
@@ -629,6 +642,7 @@ PUBLIC_PROTOCOL=${PUBLIC_PROTOCOL}
HTTP_PORT=${HTTP_PORT}
HTTPS_PORT=${HTTPS_PORT}
LOGTO_CONSOLE_PORT=${LOGTO_CONSOLE_PORT}
LOGTO_CONSOLE_BIND=$([ "$LOGTO_CONSOLE_EXPOSED" = "true" ] && echo "0.0.0.0" || echo "127.0.0.1")

# PostgreSQL
POSTGRES_USER=cameleer
@@ -665,473 +679,50 @@ DOCKER_GID=$(stat -c '%g' "${DOCKER_SOCKET}" 2>/dev/null || echo "0")
# Provisioning images
CAMELEER_SAAS_PROVISIONING_SERVERIMAGE=${REGISTRY}/cameleer-server:${VERSION}
CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE=${REGISTRY}/cameleer-server-ui:${VERSION}
CAMELEER_SAAS_PROVISIONING_RUNTIMEBASEIMAGE=${REGISTRY}/cameleer-runtime-base:${VERSION}

# JWT signing secret (forwarded to provisioned tenant servers, must be non-empty)
CAMELEER_SERVER_SECURITY_JWTSECRET=$(generate_password)

# Compose file assembly
COMPOSE_FILE=docker-compose.yml:docker-compose.saas.yml$([ "$TLS_MODE" = "custom" ] && echo ":docker-compose.tls.yml")$([ -n "$MONITORING_NETWORK" ] && echo ":docker-compose.monitoring.yml")
EOF

    if [ -n "$MONITORING_NETWORK" ]; then
        echo "" >> "$f"
        echo "# Monitoring" >> "$f"
        echo "MONITORING_NETWORK=${MONITORING_NETWORK}" >> "$f"
    fi

    log_info "Generated .env"
    cp "$f" "$INSTALL_DIR/.env.bak"
}
generate_compose_file() {
copy_templates() {
    local src
    src="$(cd "$(dirname "$0")" && pwd)/templates"

    # Base infra — always copied
    cp "$src/docker-compose.yml" "$INSTALL_DIR/docker-compose.yml"
    cp "$src/.env.example" "$INSTALL_DIR/.env.example"

    # Mode-specific
    if [ "$DEPLOYMENT_MODE" = "standalone" ]; then
        generate_compose_file_standalone
        return
    fi
    local f="$INSTALL_DIR/docker-compose.yml"
    : > "$f"

    cat >> "$f" << 'EOF'
# Cameleer SaaS Platform
# Generated by Cameleer installer — do not edit manually

services:
  cameleer-traefik:
    image: ${TRAEFIK_IMAGE:-gitea.siegeln.net/cameleer/cameleer-traefik}:${VERSION:-latest}
    restart: unless-stopped
    ports:
      - "${HTTP_PORT:-80}:80"
      - "${HTTPS_PORT:-443}:443"
EOF
    if [ "$LOGTO_CONSOLE_EXPOSED" = "true" ]; then
        cat >> "$f" << 'EOF'
      - "${LOGTO_CONSOLE_PORT:-3002}:3002"
EOF
        cp "$src/docker-compose.server.yml" "$INSTALL_DIR/docker-compose.server.yml"
        cp "$src/traefik-dynamic.yml" "$INSTALL_DIR/traefik-dynamic.yml"
    else
        cp "$src/docker-compose.saas.yml" "$INSTALL_DIR/docker-compose.saas.yml"
    fi

    cat >> "$f" << 'EOF'
    environment:
      PUBLIC_HOST: ${PUBLIC_HOST:-localhost}
      CERT_FILE: ${CERT_FILE:-}
      KEY_FILE: ${KEY_FILE:-}
      CA_FILE: ${CA_FILE:-}
    volumes:
      - cameleer-certs:/certs
      - ${DOCKER_SOCKET:-/var/run/docker.sock}:/var/run/docker.sock:ro
EOF

    # Optional overlays
    if [ "$TLS_MODE" = "custom" ]; then
        cat >> "$f" << 'EOF'
      - ./certs:/user-certs:ro
EOF
        cp "$src/docker-compose.tls.yml" "$INSTALL_DIR/docker-compose.tls.yml"
    fi

    cat >> "$f" << 'EOF'
    networks:
      - cameleer
      - cameleer-traefik
EOF

    if [ -n "$MONITORING_NETWORK" ]; then
        echo " - ${MONITORING_NETWORK}" >> "$f"
        cat >> "$f" << 'EOF'
    labels:
      - "prometheus.io/scrape=true"
      - "prometheus.io/port=8082"
      - "prometheus.io/path=/metrics"
EOF
        cp "$src/docker-compose.monitoring.yml" "$INSTALL_DIR/docker-compose.monitoring.yml"
    fi
    cat >> "$f" << 'EOF'

  cameleer-postgres:
    image: ${POSTGRES_IMAGE:-gitea.siegeln.net/cameleer/cameleer-postgres}:${VERSION:-latest}
    restart: unless-stopped
    environment:
      POSTGRES_DB: cameleer_saas
      POSTGRES_USER: ${POSTGRES_USER:-cameleer}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - cameleer-pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER:-cameleer} -d cameleer_saas"]
      interval: 5s
      timeout: 5s
      retries: 5
    networks:
      - cameleer
EOF

    if [ -n "$MONITORING_NETWORK" ]; then
        echo " - ${MONITORING_NETWORK}" >> "$f"
    fi

    cat >> "$f" << 'EOF'

  cameleer-clickhouse:
    image: ${CLICKHOUSE_IMAGE:-gitea.siegeln.net/cameleer/cameleer-clickhouse}:${VERSION:-latest}
    restart: unless-stopped
    environment:
      CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD}
    volumes:
      - cameleer-chdata:/var/lib/clickhouse
    healthcheck:
      test: ["CMD-SHELL", "clickhouse-client --password $${CLICKHOUSE_PASSWORD} --query 'SELECT 1'"]
      interval: 10s
      timeout: 5s
      retries: 3
    networks:
      - cameleer
EOF

    if [ -n "$MONITORING_NETWORK" ]; then
        echo " - ${MONITORING_NETWORK}" >> "$f"
        cat >> "$f" << 'EOF'
    labels:
      - "prometheus.io/scrape=true"
      - "prometheus.io/port=9363"
      - "prometheus.io/path=/metrics"
EOF
    fi
    cat >> "$f" << 'EOF'

  cameleer-logto:
    image: ${LOGTO_IMAGE:-gitea.siegeln.net/cameleer/cameleer-logto}:${VERSION:-latest}
    restart: unless-stopped
    depends_on:
      cameleer-postgres:
        condition: service_healthy
    environment:
      DB_URL: postgres://${POSTGRES_USER:-cameleer}:${POSTGRES_PASSWORD}@cameleer-postgres:5432/logto
      ENDPOINT: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}
      ADMIN_ENDPOINT: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}:${LOGTO_CONSOLE_PORT:-3002}
      TRUST_PROXY_HEADER: 1
      NODE_TLS_REJECT_UNAUTHORIZED: "${NODE_TLS_REJECT:-0}"
      LOGTO_ENDPOINT: http://cameleer-logto:3001
      LOGTO_ADMIN_ENDPOINT: http://cameleer-logto:3002
      LOGTO_PUBLIC_ENDPOINT: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}
      PUBLIC_HOST: ${PUBLIC_HOST:-localhost}
      PUBLIC_PROTOCOL: ${PUBLIC_PROTOCOL:-https}
      PG_HOST: cameleer-postgres
      PG_USER: ${POSTGRES_USER:-cameleer}
      PG_PASSWORD: ${POSTGRES_PASSWORD}
      PG_DB_SAAS: cameleer_saas
      SAAS_ADMIN_USER: ${SAAS_ADMIN_USER:-admin}
      SAAS_ADMIN_PASS: ${SAAS_ADMIN_PASS:?SAAS_ADMIN_PASS must be set in .env}
    healthcheck:
      test: ["CMD-SHELL", "node -e \"require('http').get('http://localhost:3001/oidc/.well-known/openid-configuration', r => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))\" && test -f /data/logto-bootstrap.json"]
      interval: 10s
      timeout: 5s
      retries: 60
      start_period: 30s
    labels:
      - traefik.enable=true
      - traefik.http.routers.cameleer-logto.rule=PathPrefix(`/`)
      - traefik.http.routers.cameleer-logto.priority=1
      - traefik.http.routers.cameleer-logto.entrypoints=websecure
      - traefik.http.routers.cameleer-logto.tls=true
      - traefik.http.routers.cameleer-logto.service=cameleer-logto
      - traefik.http.routers.cameleer-logto.middlewares=cameleer-logto-cors
      - "traefik.http.middlewares.cameleer-logto-cors.headers.accessControlAllowOriginList=${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}:${LOGTO_CONSOLE_PORT:-3002}"
      - traefik.http.middlewares.cameleer-logto-cors.headers.accessControlAllowMethods=GET,POST,PUT,PATCH,DELETE,OPTIONS
      - traefik.http.middlewares.cameleer-logto-cors.headers.accessControlAllowHeaders=Authorization,Content-Type
      - traefik.http.middlewares.cameleer-logto-cors.headers.accessControlAllowCredentials=true
      - traefik.http.services.cameleer-logto.loadbalancer.server.port=3001
EOF

    if [ "$LOGTO_CONSOLE_EXPOSED" = "true" ]; then
        cat >> "$f" << 'EOF'
      - traefik.http.routers.cameleer-logto-console.rule=PathPrefix(`/`)
      - traefik.http.routers.cameleer-logto-console.entrypoints=admin-console
      - traefik.http.routers.cameleer-logto-console.tls=true
      - traefik.http.routers.cameleer-logto-console.service=cameleer-logto-console
      - traefik.http.services.cameleer-logto-console.loadbalancer.server.port=3002
EOF
    fi
    cat >> "$f" << 'EOF'
    volumes:
      - cameleer-bootstrapdata:/data
    networks:
      - cameleer

  cameleer-saas:
    image: ${CAMELEER_IMAGE:-gitea.siegeln.net/cameleer/cameleer-saas}:${VERSION:-latest}
    restart: unless-stopped
    depends_on:
      cameleer-logto:
        condition: service_healthy
    environment:
      # SaaS database
      SPRING_DATASOURCE_URL: jdbc:postgresql://cameleer-postgres:5432/cameleer_saas
      SPRING_DATASOURCE_USERNAME: ${POSTGRES_USER:-cameleer}
      SPRING_DATASOURCE_PASSWORD: ${POSTGRES_PASSWORD}
      # Identity (Logto)
      CAMELEER_SAAS_IDENTITY_LOGTOENDPOINT: http://cameleer-logto:3001
      CAMELEER_SAAS_IDENTITY_LOGTOPUBLICENDPOINT: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}
      # Provisioning — passed to per-tenant server containers
      CAMELEER_SAAS_PROVISIONING_PUBLICHOST: ${PUBLIC_HOST:-localhost}
      CAMELEER_SAAS_PROVISIONING_PUBLICPROTOCOL: ${PUBLIC_PROTOCOL:-https}
      CAMELEER_SAAS_PROVISIONING_NETWORKNAME: ${COMPOSE_PROJECT_NAME:-cameleer-saas}_cameleer
      CAMELEER_SAAS_PROVISIONING_TRAEFIKNETWORK: cameleer-traefik
      CAMELEER_SAAS_PROVISIONING_DATASOURCEUSERNAME: ${POSTGRES_USER:-cameleer}
      CAMELEER_SAAS_PROVISIONING_DATASOURCEPASSWORD: ${POSTGRES_PASSWORD}
      CAMELEER_SAAS_PROVISIONING_CLICKHOUSEPASSWORD: ${CLICKHOUSE_PASSWORD}
      CAMELEER_SAAS_PROVISIONING_SERVERIMAGE: ${CAMELEER_SAAS_PROVISIONING_SERVERIMAGE:-gitea.siegeln.net/cameleer/cameleer-server:latest}
      CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE: ${CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE:-gitea.siegeln.net/cameleer/cameleer-server-ui:latest}
    labels:
      - traefik.enable=true
      - traefik.http.routers.saas.rule=PathPrefix(`/platform`)
      - traefik.http.routers.saas.entrypoints=websecure
      - traefik.http.routers.saas.tls=true
      - traefik.http.services.saas.loadbalancer.server.port=8080
EOF

    if [ -n "$MONITORING_NETWORK" ]; then
        cat >> "$f" << 'EOF'
      - "prometheus.io/scrape=true"
      - "prometheus.io/port=8080"
      - "prometheus.io/path=/platform/actuator/prometheus"
EOF
    fi

    cat >> "$f" << 'EOF'
    volumes:
      - cameleer-bootstrapdata:/data/bootstrap:ro
      - cameleer-certs:/certs
      - ${DOCKER_SOCKET:-/var/run/docker.sock}:/var/run/docker.sock
    networks:
      - cameleer
EOF

    if [ -n "$MONITORING_NETWORK" ]; then
        echo " - ${MONITORING_NETWORK}" >> "$f"
    fi
    # Detect Docker socket GID for container access
    local docker_gid
    docker_gid=$(stat -c '%g' "${DOCKER_SOCKET:-/var/run/docker.sock}" 2>/dev/null || echo "0")
    cat >> "$f" << EOF
    group_add:
      - "${docker_gid}"

volumes:
EOF
    cat >> "$f" << 'EOF'
  cameleer-pgdata:
  cameleer-chdata:
  cameleer-certs:
  cameleer-bootstrapdata:

networks:
  cameleer:
    driver: bridge
  cameleer-traefik:
    name: cameleer-traefik
    driver: bridge
EOF

    if [ -n "$MONITORING_NETWORK" ]; then
        cat >> "$f" << EOF
  ${MONITORING_NETWORK}:
    external: true
EOF
    fi

    log_info "Generated docker-compose.yml"
}
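The socket-GID detection used in the function above can be exercised on its own; this is a minimal sketch of the same logic, assuming only a POSIX shell and GNU `stat` (the `|| echo "0"` fallback also covers platforms where `stat -c` is unsupported or the socket is absent):

```shell
#!/bin/sh
# Sketch of the installer's Docker socket GID detection.
# If stat fails (missing socket, non-GNU stat), fall back to GID 0.
DOCKER_SOCKET="${DOCKER_SOCKET:-/var/run/docker.sock}"
DOCKER_GID=$(stat -c '%g' "${DOCKER_SOCKET}" 2>/dev/null || echo "0")
echo "DOCKER_GID=${DOCKER_GID}"
```

The resulting GID is what the generated compose file passes to `group_add:` so the container process can reach the host's Docker socket.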
generate_compose_file_standalone() {
    local f="$INSTALL_DIR/docker-compose.yml"
    : > "$f"

    cat >> "$f" << 'COMPOSEEOF'
# Cameleer Server (standalone)
# Generated by Cameleer installer — do not edit manually

services:
  cameleer-traefik:
    image: ${TRAEFIK_IMAGE:-gitea.siegeln.net/cameleer/cameleer-traefik}:${VERSION:-latest}
    restart: unless-stopped
    ports:
      - "${HTTP_PORT:-80}:80"
      - "${HTTPS_PORT:-443}:443"
    environment:
      PUBLIC_HOST: ${PUBLIC_HOST:-localhost}
      CERT_FILE: ${CERT_FILE:-}
      KEY_FILE: ${KEY_FILE:-}
      CA_FILE: ${CA_FILE:-}
    volumes:
      - cameleer-certs:/certs
      - ${DOCKER_SOCKET:-/var/run/docker.sock}:/var/run/docker.sock:ro
      - ./traefik-dynamic.yml:/etc/traefik/dynamic.yml:ro
COMPOSEEOF

    if [ "$TLS_MODE" = "custom" ]; then
        echo " - ./certs:/user-certs:ro" >> "$f"
    fi

    cat >> "$f" << 'COMPOSEEOF'
    networks:
      - cameleer
      - cameleer-traefik
COMPOSEEOF

    if [ -n "$MONITORING_NETWORK" ]; then
        echo " - ${MONITORING_NETWORK}" >> "$f"
    fi
    cat >> "$f" << 'COMPOSEEOF'

  cameleer-postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${POSTGRES_DB:-cameleer}
      POSTGRES_USER: ${POSTGRES_USER:-cameleer}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - cameleer-pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER:-cameleer} -d $${POSTGRES_DB:-cameleer}"]
      interval: 5s
      timeout: 5s
      retries: 5
    networks:
      - cameleer
COMPOSEEOF

    if [ -n "$MONITORING_NETWORK" ]; then
        echo " - ${MONITORING_NETWORK}" >> "$f"
    fi

    cat >> "$f" << 'COMPOSEEOF'

  cameleer-clickhouse:
    image: ${CLICKHOUSE_IMAGE:-gitea.siegeln.net/cameleer/cameleer-clickhouse}:${VERSION:-latest}
    restart: unless-stopped
    environment:
      CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD}
    volumes:
      - cameleer-chdata:/var/lib/clickhouse
    healthcheck:
      test: ["CMD-SHELL", "clickhouse-client --password $${CLICKHOUSE_PASSWORD} --query 'SELECT 1'"]
      interval: 10s
      timeout: 5s
      retries: 3
    networks:
      - cameleer
COMPOSEEOF

    if [ -n "$MONITORING_NETWORK" ]; then
        echo " - ${MONITORING_NETWORK}" >> "$f"
    fi

    # Detect Docker socket GID
    local docker_gid
    docker_gid=$(stat -c '%g' "${DOCKER_SOCKET:-/var/run/docker.sock}" 2>/dev/null || echo "0")
    cat >> "$f" << COMPOSEEOF

  cameleer-server:
    image: \${SERVER_IMAGE:-gitea.siegeln.net/cameleer/cameleer-server}:\${VERSION:-latest}
    container_name: cameleer-server
    restart: unless-stopped
    depends_on:
      cameleer-postgres:
        condition: service_healthy
    environment:
      CAMELEER_SERVER_TENANT_ID: default
      SPRING_DATASOURCE_URL: jdbc:postgresql://cameleer-postgres:5432/\${POSTGRES_DB:-cameleer}?currentSchema=tenant_default
      SPRING_DATASOURCE_USERNAME: \${POSTGRES_USER:-cameleer}
      SPRING_DATASOURCE_PASSWORD: \${POSTGRES_PASSWORD}
      CAMELEER_SERVER_CLICKHOUSE_URL: jdbc:clickhouse://cameleer-clickhouse:8123/cameleer
      CAMELEER_SERVER_CLICKHOUSE_USERNAME: default
      CAMELEER_SERVER_CLICKHOUSE_PASSWORD: \${CLICKHOUSE_PASSWORD}
      CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKEN: \${BOOTSTRAP_TOKEN}
      CAMELEER_SERVER_SECURITY_UIUSER: \${SERVER_ADMIN_USER:-admin}
      CAMELEER_SERVER_SECURITY_UIPASSWORD: \${SERVER_ADMIN_PASS:?SERVER_ADMIN_PASS must be set in .env}
      CAMELEER_SERVER_SECURITY_CORSALLOWEDORIGINS: \${PUBLIC_PROTOCOL:-https}://\${PUBLIC_HOST:-localhost}
      CAMELEER_SERVER_RUNTIME_ENABLED: "true"
      CAMELEER_SERVER_RUNTIME_SERVERURL: http://cameleer-server:8081
      CAMELEER_SERVER_RUNTIME_ROUTINGDOMAIN: \${PUBLIC_HOST:-localhost}
      CAMELEER_SERVER_RUNTIME_ROUTINGMODE: path
      CAMELEER_SERVER_RUNTIME_JARSTORAGEPATH: /data/jars
      CAMELEER_SERVER_RUNTIME_DOCKERNETWORK: cameleer-apps
      CAMELEER_SERVER_RUNTIME_JARDOCKERVOLUME: cameleer-jars
      CAMELEER_SERVER_RUNTIME_BASEIMAGE: gitea.siegeln.net/cameleer/cameleer-runtime-base:\${VERSION:-latest}
    labels:
      - traefik.enable=true
      - traefik.http.routers.server-api.rule=PathPrefix(\`/api\`)
      - traefik.http.routers.server-api.entrypoints=websecure
      - traefik.http.routers.server-api.tls=true
      - traefik.http.services.server-api.loadbalancer.server.port=8081
      - traefik.docker.network=cameleer-traefik
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://localhost:8081/api/v1/health || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 30
      start_period: 30s
    volumes:
      - jars:/data/jars
      - cameleer-certs:/certs:ro
      - \${DOCKER_SOCKET:-/var/run/docker.sock}:/var/run/docker.sock
    group_add:
      - "${docker_gid}"
    networks:
      - cameleer
      - cameleer-traefik
      - cameleer-apps

  cameleer-server-ui:
    image: \${SERVER_UI_IMAGE:-gitea.siegeln.net/cameleer/cameleer-server-ui}:\${VERSION:-latest}
    restart: unless-stopped
    depends_on:
      cameleer-server:
        condition: service_healthy
    environment:
      CAMELEER_API_URL: http://cameleer-server:8081
      BASE_PATH: ""
    labels:
      - traefik.enable=true
      - traefik.http.routers.ui.rule=PathPrefix(\`/\`)
      - traefik.http.routers.ui.priority=1
      - traefik.http.routers.ui.entrypoints=websecure
      - traefik.http.routers.ui.tls=true
      - traefik.http.services.ui.loadbalancer.server.port=80
      - traefik.docker.network=cameleer-traefik
    networks:
      - cameleer-traefik
COMPOSEEOF
    cat >> "$f" << 'COMPOSEEOF'

volumes:
  cameleer-pgdata:
  cameleer-chdata:
  cameleer-certs:
  jars:

networks:
  cameleer:
    driver: bridge
  cameleer-traefik:
    name: cameleer-traefik
    driver: bridge
  cameleer-apps:
    name: cameleer-apps
    driver: bridge
COMPOSEEOF

    if [ -n "$MONITORING_NETWORK" ]; then
        cat >> "$f" << EOF
  ${MONITORING_NETWORK}:
    external: true
EOF
    fi

    # Generate standalone traefik dynamic config (overrides baked-in redirect)
    cat > "$INSTALL_DIR/traefik-dynamic.yml" << 'TRAEFIKEOF'
tls:
  stores:
    default:
      defaultCertificate:
        certFile: /certs/cert.pem
        keyFile: /certs/key.pem
TRAEFIKEOF

    log_info "Generated docker-compose.yml (standalone)"
    log_info "Copied docker-compose templates to $INSTALL_DIR"
}
# --- Docker operations ---
@@ -1366,10 +957,10 @@ EOF

| Container | Purpose |
|---|---|
| `traefik` | Reverse proxy, TLS termination, routing |
| `postgres` | PostgreSQL database (SaaS + Logto + tenant schemas) |
| `clickhouse` | Time-series storage (traces, metrics, logs) |
| `logto` | OIDC identity provider + bootstrap |
| `cameleer-traefik` | Reverse proxy, TLS termination, routing |
| `cameleer-postgres` | PostgreSQL database (SaaS + Logto + tenant schemas) |
| `cameleer-clickhouse` | Time-series storage (traces, metrics, logs) |
| `cameleer-logto` | OIDC identity provider + bootstrap |
| `cameleer-saas` | SaaS platform (Spring Boot + React) |

Per-tenant `cameleer-server` and `cameleer-server-ui` containers are provisioned dynamically when tenants are created.
@@ -1508,11 +1099,11 @@ generate_install_doc_standalone() {

| Container | Purpose |
|---|---|
| \`traefik\` | Reverse proxy, TLS termination, routing |
| \`postgres\` | PostgreSQL database (server data) |
| \`clickhouse\` | Time-series storage (traces, metrics, logs) |
| \`server\` | Cameleer Server (Spring Boot backend) |
| \`server-ui\` | Cameleer Dashboard (React frontend) |
| \`cameleer-traefik\` | Reverse proxy, TLS termination, routing |
| \`cameleer-postgres\` | PostgreSQL database (server data) |
| \`cameleer-clickhouse\` | Time-series storage (traces, metrics, logs) |
| \`cameleer-server\` | Cameleer Server (Spring Boot backend) |
| \`cameleer-server-ui\` | Cameleer Dashboard (React frontend) |

## Networking

@@ -1582,7 +1173,7 @@ The installer preserves your \`.env\`, credentials, and data volumes. Only the c

| Issue | Command |
|---|---|
| Service not starting | \`docker compose -p ${COMPOSE_PROJECT} logs SERVICE_NAME\` |
| Server issues | \`docker compose -p ${COMPOSE_PROJECT} logs server\` |
| Server issues | \`docker compose -p ${COMPOSE_PROJECT} logs cameleer-server\` |
| Routing issues | \`docker compose -p ${COMPOSE_PROJECT} logs cameleer-traefik\` |
| Database issues | \`docker compose -p ${COMPOSE_PROJECT} exec cameleer-postgres psql -U cameleer -d cameleer\` |
@@ -1700,7 +1291,7 @@ handle_rerun() {
        load_config_file "$INSTALL_DIR/cameleer.conf"
        load_env_overrides
        merge_config
        generate_compose_file
        copy_templates
        docker_compose_pull
        docker_compose_down
        docker_compose_up
@@ -1725,9 +1316,12 @@ handle_rerun() {
        log_info "Reinstalling..."
        docker_compose_down 2>/dev/null || true
        (cd "$INSTALL_DIR" && docker compose -p "${COMPOSE_PROJECT:-cameleer-saas}" down -v 2>/dev/null || true)
        rm -f "$INSTALL_DIR/.env" "$INSTALL_DIR/docker-compose.yml" \
        rm -f "$INSTALL_DIR/.env" "$INSTALL_DIR/.env.bak" "$INSTALL_DIR/.env.example" \
            "$INSTALL_DIR/docker-compose.yml" "$INSTALL_DIR/docker-compose.saas.yml" \
            "$INSTALL_DIR/docker-compose.server.yml" "$INSTALL_DIR/docker-compose.tls.yml" \
            "$INSTALL_DIR/docker-compose.monitoring.yml" "$INSTALL_DIR/traefik-dynamic.yml" \
            "$INSTALL_DIR/cameleer.conf" "$INSTALL_DIR/credentials.txt" \
            "$INSTALL_DIR/INSTALL.md" "$INSTALL_DIR/.env.bak"
            "$INSTALL_DIR/INSTALL.md"
        rm -rf "$INSTALL_DIR/certs"
        IS_RERUN=false
        return
@@ -1788,7 +1382,7 @@ main() {

    # Generate configuration files
    generate_env_file
    generate_compose_file
    copy_templates
    write_config_file

    # Pull and start
89
installer/templates/.env.example
Normal file
@@ -0,0 +1,89 @@
# Cameleer Configuration
# Copy this file to .env and fill in the values.
# The installer generates .env automatically — this file is for reference.

# ============================================================
# Compose file assembly (set by installer)
# ============================================================
# SaaS: docker-compose.yml:docker-compose.saas.yml
# Standalone: docker-compose.yml:docker-compose.server.yml
# Add :docker-compose.tls.yml for custom TLS certificates
# Add :docker-compose.monitoring.yml for external monitoring network
COMPOSE_FILE=docker-compose.yml:docker-compose.saas.yml

# ============================================================
# Image version
# ============================================================
VERSION=latest

# ============================================================
# Public access
# ============================================================
PUBLIC_HOST=localhost
PUBLIC_PROTOCOL=https

# ============================================================
# Ports
# ============================================================
HTTP_PORT=80
HTTPS_PORT=443
# Set to 0.0.0.0 to expose Logto admin console externally (default: localhost only)
# LOGTO_CONSOLE_BIND=0.0.0.0
LOGTO_CONSOLE_PORT=3002

# ============================================================
# PostgreSQL
# ============================================================
POSTGRES_USER=cameleer
POSTGRES_PASSWORD=CHANGE_ME
# SaaS: cameleer_saas, Standalone: cameleer
POSTGRES_DB=cameleer_saas

# ============================================================
# ClickHouse
# ============================================================
CLICKHOUSE_PASSWORD=CHANGE_ME

# ============================================================
# Admin credentials (SaaS mode)
# ============================================================
SAAS_ADMIN_USER=admin
SAAS_ADMIN_PASS=CHANGE_ME

# ============================================================
# Admin credentials (standalone mode)
# ============================================================
# SERVER_ADMIN_USER=admin
# SERVER_ADMIN_PASS=CHANGE_ME
# BOOTSTRAP_TOKEN=CHANGE_ME

# ============================================================
# TLS
# ============================================================
# Set to 1 to reject unauthorized TLS certificates (production)
NODE_TLS_REJECT=0
# Custom TLS certificate paths (inside container, set by installer)
# CERT_FILE=/user-certs/cert.pem
# KEY_FILE=/user-certs/key.pem
# CA_FILE=/user-certs/ca.pem

# ============================================================
# Docker
# ============================================================
DOCKER_SOCKET=/var/run/docker.sock
# GID of the docker socket — detected by installer, used for container group_add
DOCKER_GID=0

# ============================================================
# Provisioning images (SaaS mode only)
# ============================================================
# CAMELEER_SAAS_PROVISIONING_SERVERIMAGE=gitea.siegeln.net/cameleer/cameleer-server:latest
# CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE=gitea.siegeln.net/cameleer/cameleer-server-ui:latest
# CAMELEER_SAAS_PROVISIONING_RUNTIMEBASEIMAGE=gitea.siegeln.net/cameleer/cameleer-runtime-base:latest

# ============================================================
# Monitoring (optional)
# ============================================================
# External Docker network name for Prometheus scraping.
# Only needed when docker-compose.monitoring.yml is in COMPOSE_FILE.
# MONITORING_NETWORK=prometheus
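The COMPOSE_FILE chaining described in the template can be exercised in isolation; this sketch mirrors the installer's assembly logic, with illustrative values for `TLS_MODE` and `MONITORING_NETWORK`:

```shell
#!/bin/sh
# Sketch: how the installer assembles COMPOSE_FILE from the chosen options.
# TLS_MODE / MONITORING_NETWORK values here are illustrative.
TLS_MODE="custom"
MONITORING_NETWORK="prometheus"
COMPOSE_FILE="docker-compose.yml:docker-compose.saas.yml$([ "$TLS_MODE" = "custom" ] && echo ":docker-compose.tls.yml")$([ -n "$MONITORING_NETWORK" ] && echo ":docker-compose.monitoring.yml")"
echo "$COMPOSE_FILE"
```

With both options enabled this yields the four-file chain; `docker compose` then merges the listed files left to right, so the TLS and monitoring overlays override the base definitions.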
7
installer/templates/docker-compose.monitoring.yml
Normal file
@@ -0,0 +1,7 @@
# External monitoring network overlay
# Overrides the noop monitoring bridge with a real external network

networks:
  monitoring:
    external: true
    name: ${MONITORING_NETWORK:?MONITORING_NETWORK must be set in .env}
@@ -1,58 +1,7 @@
# Cameleer SaaS Platform
# Generated by Cameleer installer -- do not edit manually

# Cameleer SaaS — Logto + management plane
# Loaded in SaaS deployment mode

services:
  cameleer-traefik:
    image: ${TRAEFIK_IMAGE:-gitea.siegeln.net/cameleer/cameleer-traefik}:${VERSION:-latest}
    restart: unless-stopped
    ports:
      - "${HTTP_PORT:-80}:80"
      - "${HTTPS_PORT:-443}:443"
      - "${LOGTO_CONSOLE_PORT:-3002}:3002"
    environment:
      PUBLIC_HOST: ${PUBLIC_HOST:-localhost}
      CERT_FILE: ${CERT_FILE:-}
      KEY_FILE: ${KEY_FILE:-}
      CA_FILE: ${CA_FILE:-}
    volumes:
      - cameleer-certs:/certs
      - ${DOCKER_SOCKET:-/var/run/docker.sock}:/var/run/docker.sock:ro
    networks:
      - cameleer
      - cameleer-traefik

  cameleer-postgres:
    image: ${POSTGRES_IMAGE:-gitea.siegeln.net/cameleer/cameleer-postgres}:${VERSION:-latest}
    restart: unless-stopped
    environment:
      POSTGRES_DB: cameleer_saas
      POSTGRES_USER: ${POSTGRES_USER:-cameleer}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - cameleer-pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER:-cameleer} -d cameleer_saas"]
      interval: 5s
      timeout: 5s
      retries: 5
    networks:
      - cameleer

  cameleer-clickhouse:
    image: ${CLICKHOUSE_IMAGE:-gitea.siegeln.net/cameleer/cameleer-clickhouse}:${VERSION:-latest}
    restart: unless-stopped
    environment:
      CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD}
    volumes:
      - cameleer-chdata:/var/lib/clickhouse
    healthcheck:
      test: ["CMD-SHELL", "clickhouse-client --password $${CLICKHOUSE_PASSWORD} --query 'SELECT 1'"]
      interval: 10s
      timeout: 5s
      retries: 3
    networks:
      - cameleer

  cameleer-logto:
    image: ${LOGTO_IMAGE:-gitea.siegeln.net/cameleer/cameleer-logto}:${VERSION:-latest}
    restart: unless-stopped
@@ -104,7 +53,8 @@ services:
      - cameleer-bootstrapdata:/data
    networks:
      - cameleer

      - monitoring

  cameleer-saas:
    image: ${CAMELEER_IMAGE:-gitea.siegeln.net/cameleer/cameleer-saas}:${VERSION:-latest}
    restart: unless-stopped
@@ -112,11 +62,14 @@ services:
      cameleer-logto:
        condition: service_healthy
    environment:
      # SaaS database
      SPRING_DATASOURCE_URL: jdbc:postgresql://cameleer-postgres:5432/cameleer_saas
      SPRING_DATASOURCE_USERNAME: ${POSTGRES_USER:-cameleer}
      SPRING_DATASOURCE_PASSWORD: ${POSTGRES_PASSWORD}
      # Identity (Logto)
      CAMELEER_SAAS_IDENTITY_LOGTOENDPOINT: http://cameleer-logto:3001
      CAMELEER_SAAS_IDENTITY_LOGTOPUBLICENDPOINT: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}
      # Provisioning — passed to per-tenant server containers
      CAMELEER_SAAS_PROVISIONING_PUBLICHOST: ${PUBLIC_HOST:-localhost}
      CAMELEER_SAAS_PROVISIONING_PUBLICPROTOCOL: ${PUBLIC_PROTOCOL:-https}
      CAMELEER_SAAS_PROVISIONING_NETWORKNAME: ${COMPOSE_PROJECT_NAME:-cameleer-saas}_cameleer
@@ -124,32 +77,32 @@ services:
      CAMELEER_SAAS_PROVISIONING_DATASOURCEUSERNAME: ${POSTGRES_USER:-cameleer}
      CAMELEER_SAAS_PROVISIONING_DATASOURCEPASSWORD: ${POSTGRES_PASSWORD}
      CAMELEER_SAAS_PROVISIONING_CLICKHOUSEPASSWORD: ${CLICKHOUSE_PASSWORD}
      CAMELEER_SERVER_SECURITY_JWTSECRET: ${CAMELEER_SERVER_SECURITY_JWTSECRET:?CAMELEER_SERVER_SECURITY_JWTSECRET must be set in .env}
      CAMELEER_SAAS_PROVISIONING_SERVERIMAGE: ${CAMELEER_SAAS_PROVISIONING_SERVERIMAGE:-gitea.siegeln.net/cameleer/cameleer-server:latest}
      CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE: ${CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE:-gitea.siegeln.net/cameleer/cameleer-server-ui:latest}
      CAMELEER_SAAS_PROVISIONING_RUNTIMEBASEIMAGE: ${CAMELEER_SAAS_PROVISIONING_RUNTIMEBASEIMAGE:-gitea.siegeln.net/cameleer/cameleer-runtime-base:latest}
    labels:
      - traefik.enable=true
      - traefik.http.routers.saas.rule=PathPrefix(`/platform`)
      - traefik.http.routers.saas.entrypoints=websecure
      - traefik.http.routers.saas.tls=true
      - traefik.http.services.saas.loadbalancer.server.port=8080
      - "prometheus.io/scrape=true"
      - "prometheus.io/port=8080"
      - "prometheus.io/path=/platform/actuator/prometheus"
    volumes:
      - cameleer-bootstrapdata:/data/bootstrap:ro
      - cameleer-certs:/certs
      - ${DOCKER_SOCKET:-/var/run/docker.sock}:/var/run/docker.sock
    group_add:
      - "${DOCKER_GID:-0}"
    networks:
      - cameleer
    group_add:
      - "0"

      - monitoring

volumes:
  cameleer-pgdata:
  cameleer-chdata:
  cameleer-certs:
  cameleer-bootstrapdata:

networks:
  cameleer:
    driver: bridge
  cameleer-traefik:
    name: cameleer-traefik
    driver: bridge
  monitoring:
    name: cameleer-monitoring-noop
99
installer/templates/docker-compose.server.yml
Normal file
@@ -0,0 +1,99 @@
# Cameleer Server (standalone)
# Loaded in standalone deployment mode

services:
  cameleer-traefik:
    volumes:
      - ./traefik-dynamic.yml:/etc/traefik/dynamic.yml:ro

  cameleer-postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: ${POSTGRES_DB:-cameleer}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER:-cameleer} -d $${POSTGRES_DB:-cameleer}"]

  cameleer-server:
    image: ${SERVER_IMAGE:-gitea.siegeln.net/cameleer/cameleer-server}:${VERSION:-latest}
    container_name: cameleer-server
    restart: unless-stopped
    depends_on:
      cameleer-postgres:
        condition: service_healthy
    environment:
      CAMELEER_SERVER_TENANT_ID: default
      SPRING_DATASOURCE_URL: jdbc:postgresql://cameleer-postgres:5432/${POSTGRES_DB:-cameleer}?currentSchema=tenant_default
      SPRING_DATASOURCE_USERNAME: ${POSTGRES_USER:-cameleer}
      SPRING_DATASOURCE_PASSWORD: ${POSTGRES_PASSWORD}
      CAMELEER_SERVER_CLICKHOUSE_URL: jdbc:clickhouse://cameleer-clickhouse:8123/cameleer
      CAMELEER_SERVER_CLICKHOUSE_USERNAME: default
      CAMELEER_SERVER_CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD}
      CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKEN: ${BOOTSTRAP_TOKEN:?BOOTSTRAP_TOKEN must be set in .env}
      CAMELEER_SERVER_SECURITY_JWTSECRET: ${CAMELEER_SERVER_SECURITY_JWTSECRET:?CAMELEER_SERVER_SECURITY_JWTSECRET must be set in .env}
      CAMELEER_SERVER_SECURITY_UIUSER: ${SERVER_ADMIN_USER:-admin}
      CAMELEER_SERVER_SECURITY_UIPASSWORD: ${SERVER_ADMIN_PASS:?SERVER_ADMIN_PASS must be set in .env}
      CAMELEER_SERVER_SECURITY_CORSALLOWEDORIGINS: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}
      CAMELEER_SERVER_RUNTIME_ENABLED: "true"
      CAMELEER_SERVER_RUNTIME_SERVERURL: http://cameleer-server:8081
      CAMELEER_SERVER_RUNTIME_ROUTINGDOMAIN: ${PUBLIC_HOST:-localhost}
      CAMELEER_SERVER_RUNTIME_ROUTINGMODE: path
      CAMELEER_SERVER_RUNTIME_JARSTORAGEPATH: /data/jars
      CAMELEER_SERVER_RUNTIME_DOCKERNETWORK: cameleer-apps
      CAMELEER_SERVER_RUNTIME_JARDOCKERVOLUME: cameleer-jars
      CAMELEER_SERVER_RUNTIME_BASEIMAGE: gitea.siegeln.net/cameleer/cameleer-runtime-base:${VERSION:-latest}
    labels:
      - traefik.enable=true
      - traefik.http.routers.server-api.rule=PathPrefix(`/api`)
      - traefik.http.routers.server-api.entrypoints=websecure
      - traefik.http.routers.server-api.tls=true
      - traefik.http.services.server-api.loadbalancer.server.port=8081
      - traefik.docker.network=cameleer-traefik
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://localhost:8081/api/v1/health || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 30
      start_period: 30s
    volumes:
      - jars:/data/jars
      - cameleer-certs:/certs:ro
      - ${DOCKER_SOCKET:-/var/run/docker.sock}:/var/run/docker.sock
    group_add:
      - "${DOCKER_GID:-0}"
    networks:
      - cameleer
      - cameleer-traefik
      - cameleer-apps
      - monitoring

  cameleer-server-ui:
    image: ${SERVER_UI_IMAGE:-gitea.siegeln.net/cameleer/cameleer-server-ui}:${VERSION:-latest}
    restart: unless-stopped
    depends_on:
      cameleer-server:
        condition: service_healthy
    environment:
      CAMELEER_API_URL: http://cameleer-server:8081
      BASE_PATH: ""
    labels:
      - traefik.enable=true
      - traefik.http.routers.ui.rule=PathPrefix(`/`)
      - traefik.http.routers.ui.priority=1
      - traefik.http.routers.ui.entrypoints=websecure
      - traefik.http.routers.ui.tls=true
      - traefik.http.services.ui.loadbalancer.server.port=80
      - traefik.docker.network=cameleer-traefik
    networks:
      - cameleer-traefik
      - monitoring

volumes:
  jars:
    name: cameleer-jars

networks:
  cameleer-apps:
    name: cameleer-apps
    driver: bridge
  monitoring:
    name: cameleer-monitoring-noop
installer/templates/docker-compose.tls.yml (new file, 7 lines)
@@ -0,0 +1,7 @@
# Custom TLS certificates overlay
# Adds user-supplied certificate volume to traefik

services:
  cameleer-traefik:
    volumes:
      - ./certs:/user-certs:ro
installer/templates/docker-compose.yml (new file, 79 lines)
@@ -0,0 +1,79 @@
# Cameleer Infrastructure
# Shared base — always loaded. Mode-specific services in separate compose files.

services:
  cameleer-traefik:
    image: ${TRAEFIK_IMAGE:-gitea.siegeln.net/cameleer/cameleer-traefik}:${VERSION:-latest}
    restart: unless-stopped
    ports:
      - "${HTTP_PORT:-80}:80"
      - "${HTTPS_PORT:-443}:443"
      - "${LOGTO_CONSOLE_BIND:-127.0.0.1}:${LOGTO_CONSOLE_PORT:-3002}:3002"
    environment:
      PUBLIC_HOST: ${PUBLIC_HOST:-localhost}
      CERT_FILE: ${CERT_FILE:-}
      KEY_FILE: ${KEY_FILE:-}
      CA_FILE: ${CA_FILE:-}
    volumes:
      - cameleer-certs:/certs
      - ${DOCKER_SOCKET:-/var/run/docker.sock}:/var/run/docker.sock:ro
    labels:
      - "prometheus.io/scrape=true"
      - "prometheus.io/port=8082"
      - "prometheus.io/path=/metrics"
    networks:
      - cameleer
      - cameleer-traefik
      - monitoring

  cameleer-postgres:
    image: ${POSTGRES_IMAGE:-gitea.siegeln.net/cameleer/cameleer-postgres}:${VERSION:-latest}
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${POSTGRES_DB:-cameleer_saas}
      POSTGRES_USER: ${POSTGRES_USER:-cameleer}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?POSTGRES_PASSWORD must be set in .env}
    volumes:
      - cameleer-pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER:-cameleer} -d $${POSTGRES_DB:-cameleer_saas}"]
      interval: 5s
      timeout: 5s
      retries: 5
    networks:
      - cameleer
      - monitoring

  cameleer-clickhouse:
    image: ${CLICKHOUSE_IMAGE:-gitea.siegeln.net/cameleer/cameleer-clickhouse}:${VERSION:-latest}
    restart: unless-stopped
    environment:
      CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD:?CLICKHOUSE_PASSWORD must be set in .env}
    volumes:
      - cameleer-chdata:/var/lib/clickhouse
    healthcheck:
      test: ["CMD-SHELL", "clickhouse-client --password $${CLICKHOUSE_PASSWORD} --query 'SELECT 1'"]
      interval: 10s
      timeout: 5s
      retries: 3
    labels:
      - "prometheus.io/scrape=true"
      - "prometheus.io/port=9363"
      - "prometheus.io/path=/metrics"
    networks:
      - cameleer
      - monitoring

volumes:
  cameleer-pgdata:
  cameleer-chdata:
  cameleer-certs:

networks:
  cameleer:
    driver: bridge
  cameleer-traefik:
    name: cameleer-traefik
    driver: bridge
  monitoring:
    name: cameleer-monitoring-noop
installer/templates/traefik-dynamic.yml (new file, 6 lines)
@@ -0,0 +1,6 @@
tls:
  stores:
    default:
      defaultCertificate:
        certFile: /certs/cert.pem
        keyFile: /certs/key.pem
src/main/java/net/siegeln/cameleer/saas/config/CLAUDE.md (new file, 42 lines)
@@ -0,0 +1,42 @@
# Auth & Security Config

## Auth enforcement

- All API endpoints enforce OAuth2 scopes via `@PreAuthorize("hasAuthority('SCOPE_xxx')")` annotations
- Tenant isolation is enforced by `TenantIsolationInterceptor` (a single `HandlerInterceptor` on `/api/**` that resolves the JWT org_id to a TenantContext and validates `{tenantId}`, `{environmentId}`, `{appId}` path variables; fail-closed, platform admins bypass)
- 13 OAuth2 scopes on the Logto API resource (`https://api.cameleer.local`): 1 platform (`platform:admin`) + 9 tenant (`tenant:manage`, `billing:manage`, `team:manage`, `apps:manage`, `apps:deploy`, `secrets:manage`, `observe:read`, `observe:debug`, `settings:manage`) + 3 server (`server:admin`, `server:operator`, `server:viewer`), served to the frontend from `GET /platform/api/config`
- Server scopes map to server RBAC roles via the JWT `scope` claim (SaaS platform path) or `roles` claim (server-ui OIDC login path)
- Org roles: `owner` -> `server:admin` + `tenant:manage`, `operator` -> `server:operator`, `viewer` -> `server:viewer`
- `saas-vendor` global role is created by bootstrap Phase 12 and always assigned to the admin user — has `platform:admin` + all tenant scopes
- Custom `JwtDecoder` in `SecurityConfig.java` — ES384 algorithm, `at+jwt` token type, split issuer-uri (string validation) / jwk-set-uri (Docker-internal fetch), audience validation (`https://api.cameleer.local`)
- Logto Custom JWT (Phase 7b in bootstrap) injects a `roles` claim into access tokens based on org roles and global roles — this makes role data available to the server without Logto-specific code

## Auth routing by persona

| Persona | Logto role | Key scope | Landing route |
|---------|-----------|-----------|---------------|
| SaaS admin | `saas-vendor` (global) | `platform:admin` | `/vendor/tenants` |
| Tenant admin | org `owner` | `tenant:manage` | `/tenant` (dashboard) |
| Regular user (operator/viewer) | org member | `server:operator` or `server:viewer` | Redirected directly to the server dashboard |

- `LandingRedirect` component waits for scopes to load, then routes to the correct persona landing page
- `RequireScope` guard on route groups enforces scope requirements
- SSO bridge: the Logto session carries over to the provisioned server's OIDC flow (Traditional Web App per tenant)

## Server OIDC role extraction (two paths)

| Path | Token type | Role source | How it works |
|------|-----------|-------------|--------------|
| SaaS platform -> server API | Logto org-scoped access token | `scope` claim | `JwtAuthenticationFilter.extractRolesFromScopes()` reads `server:admin` from scope |
| Server-ui SSO login | Logto JWT access token (via Traditional Web App) | `roles` claim | `OidcTokenExchanger` decodes access_token, reads `roles` injected by Custom JWT |

The server's OIDC config (`OidcConfig`) includes `audience` (RFC 8707 resource indicator) and `additionalScopes`. The `audience` is sent as `resource` in both the authorization request and the token exchange, which makes Logto return a JWT access token instead of an opaque one. The Custom JWT script maps org roles to `roles: ["server:admin"]`.

**CRITICAL:** `additionalScopes` MUST include `urn:logto:scope:organizations` and `urn:logto:scope:organization_roles` — without these, Logto doesn't populate `context.user.organizationRoles` in the Custom JWT script, so the `roles` claim is empty and all users get `defaultRoles` (VIEWER). The server's `OidcAuthController.applyClaimMappings()` uses OIDC token roles (from Custom JWT) as a fallback when no DB claim-mapping rules exist: claim-mapping rules > OIDC token roles > defaultRoles.

## SaaS app identity configuration

**Identity** (`cameleer.saas.identity.*` / `CAMELEER_SAAS_IDENTITY_*`):
- Logto endpoint, M2M credentials, bootstrap file path — used by `LogtoConfig.java`

**Note:** `PUBLIC_HOST` and `PUBLIC_PROTOCOL` remain as infrastructure env vars for the Traefik and Logto containers. The SaaS app reads its own copies via the `CAMELEER_SAAS_PROVISIONING_*` prefix. `LOGTO_ENDPOINT` and `LOGTO_DB_PASSWORD` are infrastructure env vars for the Logto service and are unchanged.
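The scope-to-role extraction described in the table above can be sketched as a standalone method. This is a minimal, hypothetical illustration — the class name `ScopeRoleMapping` and the hard-coded scope set are invented for the example and are not the project's actual `JwtAuthenticationFilter` code:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;

// Illustrative sketch of reading server RBAC roles from an OAuth2 "scope" claim.
public class ScopeRoleMapping {

    // Server scopes recognized as roles (mirrors the 3 server scopes documented above)
    private static final Set<String> SERVER_SCOPES =
            Set.of("server:admin", "server:operator", "server:viewer");

    /** Extract server roles from a space-delimited OAuth2 scope claim. */
    public static List<String> extractRolesFromScopes(String scopeClaim) {
        if (scopeClaim == null || scopeClaim.isBlank()) {
            return List.of();
        }
        return Arrays.stream(scopeClaim.split(" "))
                .filter(SERVER_SCOPES::contains)
                .toList();
    }

    public static void main(String[] args) {
        // An org-scoped token for an owner carries server:admin alongside tenant scopes
        System.out.println(extractRolesFromScopes("openid tenant:manage server:admin"));
        // prints [server:admin]
    }
}
```

The same filtering idea applies on the server-ui path, except the roles arrive pre-computed in the `roles` claim injected by the Custom JWT script.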
@@ -8,6 +8,7 @@ import org.springframework.stereotype.Service;
 import org.springframework.web.client.RestClient;

 import java.time.Instant;
 import java.util.List;
 import java.util.Map;

 /**
@@ -171,6 +172,38 @@ public class ServerApiClient {

    public record ServerHealthResponse(boolean healthy, String status) {}

    // --- Server metrics query (POST /api/v1/admin/server-metrics/query) ---

    public record MetricsQueryResponse(
            String metric,
            String statistic,
            String aggregation,
            String mode,
            int stepSeconds,
            List<MetricsSeries> series
    ) {}

    public record MetricsSeries(Map<String, String> tags, List<MetricsPoint> points) {}

    public record MetricsPoint(String t, double v) {}

    /** Execute a server-metrics query against a tenant's server. */
    public MetricsQueryResponse queryServerMetrics(String serverEndpoint, Map<String, Object> body) {
        try {
            return RestClient.create().post()
                    .uri(serverEndpoint + "/api/v1/admin/server-metrics/query")
                    .header("Authorization", "Bearer " + getAccessToken())
                    .header("X-Cameleer-Protocol-Version", "1")
                    .contentType(MediaType.APPLICATION_JSON)
                    .body(body)
                    .retrieve()
                    .body(MetricsQueryResponse.class);
        } catch (Exception e) {
            log.warn("Metrics query failed for {}: {}", serverEndpoint, e.getMessage());
            return null;
        }
    }

    private synchronized String getAccessToken() {
        if (cachedToken != null && Instant.now().isBefore(tokenExpiry.minusSeconds(60))) {
            return cachedToken;
src/main/java/net/siegeln/cameleer/saas/provisioning/CLAUDE.md (new file, 102 lines)
@@ -0,0 +1,102 @@
# Provisioning

Pluggable tenant provisioning via the `TenantProvisioner` interface. `DockerTenantProvisioner` is the Docker implementation; `DisabledTenantProvisioner` is the fallback when no Docker socket is detected. Auto-configured by `TenantProvisionerAutoConfig`.

## Tenant Provisioning Flow

When a SaaS admin creates a tenant via `VendorTenantService`:

**Synchronous (in `createAndProvision`):**

1. Create `TenantEntity` (status=PROVISIONING) + Logto organization
2. Create admin user in Logto with owner org role (if credentials provided)
3. Register OIDC redirect URIs for `/t/{slug}/oidc/callback` on the Logto Traditional Web App
4. Generate license (tier-appropriate, 365 days)
5. Return immediately — UI shows provisioning spinner, polls via `refetchInterval`

**Asynchronous (via `self.provisionAsync()` — `@Lazy` self-proxy for `@Async`):**

6. Create per-tenant PostgreSQL user + schema via `TenantDatabaseService.createTenantDatabase(slug, password)`, store `dbPassword` on entity
7. Create tenant-isolated Docker network (`cameleer-tenant-{slug}`)
8. Create server container with per-tenant JDBC URL (`currentSchema=tenant_{slug}&ApplicationName=tenant_{slug}`), Traefik labels (`traefik.docker.network`), health check, Docker socket bind, JAR volume, certs volume (ro)
9. Create UI container with `CAMELEER_API_URL`, `BASE_PATH`, Traefik strip-prefix labels
10. Wait for health check (`/api/v1/health`, not `/actuator/health` which requires auth)
11. Push license token to server via M2M API
12. Push OIDC config (Traditional Web App credentials + `additionalScopes: [urn:logto:scope:organizations, urn:logto:scope:organization_roles]`) to server for SSO
13. Update tenant status -> ACTIVE (or set `provisionError` on failure)

**Server restart** (available to SaaS admin + tenant admin):

- `POST /api/vendor/tenants/{id}/restart` (SaaS admin) and `POST /api/tenant/server/restart` (tenant)
- Calls `TenantProvisioner.stop(slug)` then `start(slug)` — restarts server + UI containers only (same image)

**Server upgrade** (available to SaaS admin + tenant admin):

- `POST /api/vendor/tenants/{id}/upgrade` (SaaS admin) and `POST /api/tenant/server/upgrade` (tenant)
- Calls `TenantProvisioner.upgrade(slug)` — removes server + UI containers, force-pulls latest images (preserves app containers, volumes, networks), then `provisionAsync()` re-creates containers with the new image + pushes license + OIDC config

**Tenant delete** cleanup:

- `DockerTenantProvisioner.remove(slug)` — label-based container removal (`cameleer.tenant={slug}`), env network cleanup, tenant network removal, JAR volume removal
- `TenantDatabaseService.dropTenantDatabase(slug)` — drops the PostgreSQL `tenant_{slug}` schema + `tenant_{slug}` user
- `TenantDataCleanupService.cleanupClickHouse(slug)` — deletes ClickHouse data across all tables with a `tenant_id` column (GDPR)

**Password management** (tenant portal):

- `POST /api/tenant/password` — tenant admin changes own Logto password (via `@AuthenticationPrincipal` JWT subject)
- `POST /api/tenant/team/{userId}/password` — tenant admin resets a team member's Logto password (validates org membership first)
- `POST /api/tenant/server/admin-password` — tenant admin resets the server's built-in local admin password (via M2M API to `POST /api/v1/admin/users/user:admin/password`)

## Per-tenant server env vars (set by DockerTenantProvisioner)

These env vars are injected into provisioned per-tenant server containers:

| Env var | Value | Purpose |
|---------|-------|---------|
| `SPRING_DATASOURCE_URL` | `jdbc:postgresql://cameleer-postgres:5432/cameleer?currentSchema=tenant_{slug}&ApplicationName=tenant_{slug}` | Per-tenant schema isolation + diagnostic query scoping |
| `SPRING_DATASOURCE_USERNAME` | `tenant_{slug}` | Per-tenant PG user (owns only its schema) |
| `SPRING_DATASOURCE_PASSWORD` | (generated, stored in `TenantEntity.dbPassword`) | Per-tenant PG password |
| `CAMELEER_SERVER_CLICKHOUSE_URL` | `jdbc:clickhouse://cameleer-clickhouse:8123/cameleer` | ClickHouse connection |
| `CAMELEER_SERVER_CLICKHOUSE_USERNAME` | (from provisioning config) | ClickHouse user |
| `CAMELEER_SERVER_CLICKHOUSE_PASSWORD` | (from provisioning config) | ClickHouse password |
| `CAMELEER_SERVER_TENANT_ID` | `{slug}` | Tenant slug for data isolation |
| `CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKEN` | (license token) | Bootstrap auth token for M2M communication |
| `CAMELEER_SERVER_SECURITY_JWTSECRET` | (from env, installer-generated) | JWT signing secret (must be non-empty) |
| `CAMELEER_SERVER_SECURITY_OIDC_ISSUERURI` | `${PUBLIC_PROTOCOL}://${PUBLIC_HOST}/oidc` | Token issuer claim validation |
| `CAMELEER_SERVER_SECURITY_OIDC_JWKSETURI` | `http://cameleer-logto:3001/oidc/jwks` | Docker-internal JWK fetch |
| `CAMELEER_SERVER_SECURITY_OIDC_TLSSKIPVERIFY` | `true` (conditional) | Skip cert verify for OIDC discovery; only set when no `/certs/ca.pem` exists. When ca.pem exists, the server's `docker-entrypoint.sh` imports it into the JVM truststore instead. |
| `CAMELEER_SERVER_SECURITY_OIDC_AUDIENCE` | `https://api.cameleer.local` | JWT audience validation for OIDC tokens |
| `CAMELEER_SERVER_SECURITY_CORSALLOWEDORIGINS` | `${PUBLIC_PROTOCOL}://${PUBLIC_HOST}` | Allow browser requests through Traefik |
| `CAMELEER_SERVER_LICENSE_TOKEN` | (generated) | License token for this tenant |
| `CAMELEER_SERVER_RUNTIME_ENABLED` | `true` | Enable Docker orchestration |
| `CAMELEER_SERVER_RUNTIME_SERVERURL` | `http://cameleer-server-{slug}:8081` | Per-tenant server URL (DNS alias on tenant network) |
| `CAMELEER_SERVER_RUNTIME_ROUTINGDOMAIN` | `${PUBLIC_HOST}` | Domain for Traefik routing labels |
| `CAMELEER_SERVER_RUNTIME_ROUTINGMODE` | `path` | `path` or `subdomain` routing |
| `CAMELEER_SERVER_RUNTIME_JARSTORAGEPATH` | `/data/jars` | Directory for uploaded JARs |
| `CAMELEER_SERVER_RUNTIME_DOCKERNETWORK` | `cameleer-tenant-{slug}` | Primary network for deployed app containers |
| `CAMELEER_SERVER_RUNTIME_JARDOCKERVOLUME` | `cameleer-jars-{slug}` | Docker volume name for JAR sharing between server and deployed containers |
| `CAMELEER_SERVER_RUNTIME_BASEIMAGE` | (from `CAMELEER_SAAS_PROVISIONING_RUNTIMEBASEIMAGE`) | Runtime base image for deployed app containers |
| `CAMELEER_SERVER_SECURITY_INFRASTRUCTUREENDPOINTS` | `false` | Hides Database/ClickHouse admin from tenant admins |
| `BASE_PATH` (server-ui) | `/t/{slug}` | React Router basename + `<base>` tag |
| `CAMELEER_API_URL` (server-ui) | `http://cameleer-server-{slug}:8081` | Nginx upstream proxy target (NOT `API_URL` — image uses `${CAMELEER_API_URL}`) |

## Per-tenant volume mounts

| Mount | Container path | Purpose |
|-------|---------------|---------|
| `/var/run/docker.sock` | `/var/run/docker.sock` | Docker socket for app deployment orchestration |
| `cameleer-jars-{slug}` (volume, via `CAMELEER_SERVER_RUNTIME_JARDOCKERVOLUME`) | `/data/jars` | Shared JAR storage — server writes, deployed app containers read |
| `cameleer-saas_certs` (volume, ro) | `/certs` | Platform TLS certs + CA bundle for OIDC trust |

## SaaS provisioning properties (`ProvisioningProperties`)

The `CAMELEER_SAAS_PROVISIONING_*` prefix means "SaaS forwards this to provisioned tenant servers". These values are read by the SaaS app and injected as `CAMELEER_SERVER_*` env vars on provisioned containers.

| Env var | Spring property | Purpose |
|---------|----------------|---------|
| `CAMELEER_SAAS_PROVISIONING_SERVERIMAGE` | `cameleer.saas.provisioning.serverimage` | Docker image for per-tenant server containers |
| `CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE` | `cameleer.saas.provisioning.serveruiimage` | Docker image for per-tenant UI containers |
| `CAMELEER_SAAS_PROVISIONING_RUNTIMEBASEIMAGE` | `cameleer.saas.provisioning.runtimebaseimage` | Runtime base image for deployed apps (forwarded as `CAMELEER_SERVER_RUNTIME_BASEIMAGE`) |
| `CAMELEER_SAAS_PROVISIONING_NETWORKNAME` | `cameleer.saas.provisioning.networkname` | Shared services Docker network (compose default) |
| `CAMELEER_SAAS_PROVISIONING_TRAEFIKNETWORK` | `cameleer.saas.provisioning.traefiknetwork` | Traefik Docker network for routing |
| `CAMELEER_SAAS_PROVISIONING_PUBLICHOST` | `cameleer.saas.provisioning.publichost` | Public hostname (same value as infrastructure `PUBLIC_HOST`) |
| `CAMELEER_SAAS_PROVISIONING_PUBLICPROTOCOL` | `cameleer.saas.provisioning.publicprotocol` | Public protocol (same value as infrastructure `PUBLIC_PROTOCOL`) |
| `CAMELEER_SAAS_PROVISIONING_DATASOURCEURL` | `cameleer.saas.provisioning.datasourceurl` | PostgreSQL JDBC URL (base, without schema params) |
| `CAMELEER_SAAS_PROVISIONING_DATASOURCEUSERNAME` | `cameleer.saas.provisioning.datasourceusername` | PostgreSQL user (fallback for pre-isolation tenants) |
| `CAMELEER_SAAS_PROVISIONING_DATASOURCEPASSWORD` | `cameleer.saas.provisioning.datasourcepassword` | PostgreSQL password (fallback for pre-isolation tenants) |
| `CAMELEER_SAAS_PROVISIONING_CLICKHOUSEPASSWORD` | `cameleer.saas.provisioning.clickhousepassword` | ClickHouse password for provisioned servers |
| `CAMELEER_SAAS_PROVISIONING_CORSORIGINS` | `cameleer.saas.provisioning.corsorigins` | CORS allowed origins for provisioned servers |
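The env-var-to-property mapping in the table above follows Spring Boot's relaxed binding convention. As a minimal sketch (the helper class `EnvVarNaming` is illustrative, not project code), the canonical environment-variable form of a property name is derived like this:

```java
// Illustrative helper (not project code): Spring Boot relaxed binding maps a
// property name such as cameleer.saas.provisioning.serverimage to the env var
// CAMELEER_SAAS_PROVISIONING_SERVERIMAGE by replacing '.' and '-' with '_'
// and uppercasing the result.
public class EnvVarNaming {

    public static String toEnvVar(String propertyName) {
        return propertyName.replace('.', '_').replace('-', '_').toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(toEnvVar("cameleer.saas.provisioning.serverimage"));
        // prints CAMELEER_SAAS_PROVISIONING_SERVERIMAGE
    }
}
```

This is why the all-lowercase property names (`serverimage`, not `server-image`) bind cleanly to the underscore-free env-var segments used throughout the installer templates.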
@@ -231,6 +231,7 @@ public class DockerTenantProvisioner implements TenantProvisioner {
                // Apps deployed by this server join the tenant network (isolated)
                "CAMELEER_SERVER_RUNTIME_DOCKERNETWORK=" + tenantNetwork,
                "CAMELEER_SERVER_RUNTIME_JARDOCKERVOLUME=cameleer-jars-" + slug,
                "CAMELEER_SERVER_RUNTIME_BASEIMAGE=" + props.runtimeBaseImage(),
                "CAMELEER_SERVER_SECURITY_INFRASTRUCTUREENDPOINTS=false"
        ));
        // If no CA bundle exists, fall back to TLS skip for OIDC (self-signed dev)
@@ -6,6 +6,7 @@ import org.springframework.boot.context.properties.ConfigurationProperties;
 public record ProvisioningProperties(
         String serverImage,
         String serverUiImage,
         String runtimeBaseImage,
         String networkName,
         String traefikNetwork,
         String publicHost,
src/main/java/net/siegeln/cameleer/saas/vendor/TenantMetricsController.java (new file, 74 lines)
@@ -0,0 +1,74 @@
package net.siegeln.cameleer.saas.vendor;

import net.siegeln.cameleer.saas.provisioning.ServerStatus;
import net.siegeln.cameleer.saas.tenant.TenantEntity;
import org.springframework.http.ResponseEntity;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;
import java.util.concurrent.CompletableFuture;

@RestController
@RequestMapping("/api/vendor/metrics")
@PreAuthorize("hasAuthority('SCOPE_platform:admin')")
public class TenantMetricsController {

    private final VendorTenantService vendorTenantService;
    private final TenantMetricsService metricsService;

    public TenantMetricsController(VendorTenantService vendorTenantService,
                                   TenantMetricsService metricsService) {
        this.vendorTenantService = vendorTenantService;
        this.metricsService = metricsService;
    }

    public record TenantMetricsEntry(
            String tenantId,
            String tenantName,
            String slug,
            String tier,
            String status,
            String serverState,
            TenantMetricsService.MetricsSummary metrics
    ) {}

    @GetMapping
    public ResponseEntity<List<TenantMetricsEntry>> all() {
        List<TenantEntity> tenants = vendorTenantService.listAll();

        List<CompletableFuture<TenantMetricsEntry>> futures = tenants.stream()
                .map(tenant -> CompletableFuture.supplyAsync(() -> {
                    ServerStatus serverStatus = vendorTenantService.getServerStatus(tenant);
                    String state = serverStatus.state().name();

                    TenantMetricsService.MetricsSummary metrics = null;
                    String endpoint = tenant.getServerEndpoint();
                    boolean isRunning = "ACTIVE".equals(tenant.getStatus().name())
                            && endpoint != null && !endpoint.isBlank()
                            && "RUNNING".equals(state);
                    if (isRunning) {
                        metrics = metricsService.getMetricsSummary(endpoint);
                    }

                    return new TenantMetricsEntry(
                            tenant.getId().toString(),
                            tenant.getName(),
                            tenant.getSlug(),
                            tenant.getTier().name(),
                            tenant.getStatus().name(),
                            state,
                            metrics
                    );
                }))
                .toList();

        List<TenantMetricsEntry> entries = futures.stream()
                .map(CompletableFuture::join)
                .toList();

        return ResponseEntity.ok(entries);
    }
}
src/main/java/net/siegeln/cameleer/saas/vendor/TenantMetricsService.java (new file, 176 lines)
@@ -0,0 +1,176 @@
package net.siegeln.cameleer.saas.vendor;

import net.siegeln.cameleer.saas.identity.ServerApiClient;
import net.siegeln.cameleer.saas.identity.ServerApiClient.MetricsQueryResponse;
import net.siegeln.cameleer.saas.identity.ServerApiClient.MetricsPoint;
import net.siegeln.cameleer.saas.identity.ServerApiClient.MetricsSeries;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;

import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

@Service
public class TenantMetricsService {

    private static final Logger log = LoggerFactory.getLogger(TenantMetricsService.class);

    private final ServerApiClient serverApiClient;

    public TenantMetricsService(ServerApiClient serverApiClient) {
        this.serverApiClient = serverApiClient;
    }

    // --- Response records ---

    public record MetricsSummary(
            String collectedAt,
            AgentMetrics agents,
            IngestionMetrics ingestion,
            ServerJvmMetrics server,
            HttpMetrics http,
            double authFailuresPerMinute
    ) {}

    public record AgentMetrics(int live, int stale, int dead, int shutdown) {}

    public record IngestionMetrics(long bufferDepth, double dropsPerMinute) {}

    public record ServerJvmMetrics(
            double cpuUsage,
            long heapUsedBytes,
            long heapMaxBytes,
            long uptimeSeconds,
            int threadCount
    ) {}

    public record HttpMetrics(double requestsPerMinute, double errorRate) {}

    /**
     * Query a tenant's server for key metrics and assemble a summary snapshot.
     * Fires multiple queries concurrently (one per metric group) over the last 5 minutes.
     */
    public MetricsSummary getMetricsSummary(String serverEndpoint) {
        Instant to = Instant.now();
        Instant from = to.minus(5, ChronoUnit.MINUTES);
        String fromStr = from.toString();
        String toStr = to.toString();

        // Fire all queries concurrently
        var agentsFuture = CompletableFuture.supplyAsync(() ->
                query(serverEndpoint, "cameleer.agents.connected", "value", fromStr, toStr, "avg", "raw", List.of("state"), null));
        var cpuFuture = CompletableFuture.supplyAsync(() ->
                query(serverEndpoint, "process.cpu.usage", "value", fromStr, toStr, "avg", "raw", null, null));
        var heapUsedFuture = CompletableFuture.supplyAsync(() ->
                query(serverEndpoint, "jvm.memory.used", "value", fromStr, toStr, "sum", "raw", null, Map.of("area", "heap")));
        var heapMaxFuture = CompletableFuture.supplyAsync(() ->
                query(serverEndpoint, "jvm.memory.max", "value", fromStr, toStr, "sum", "raw", null, Map.of("area", "heap")));
        var uptimeFuture = CompletableFuture.supplyAsync(() ->
                query(serverEndpoint, "process.uptime", "value", fromStr, toStr, "latest", "raw", null, null));
        var threadsFuture = CompletableFuture.supplyAsync(() ->
                query(serverEndpoint, "jvm.threads.live", "value", fromStr, toStr, "avg", "raw", null, null));
        var dropsFuture = CompletableFuture.supplyAsync(() ->
                query(serverEndpoint, "cameleer.ingestion.drops", "count", fromStr, toStr, "sum", "delta", null, null));
        var bufferFuture = CompletableFuture.supplyAsync(() ->
                query(serverEndpoint, "cameleer.ingestion.buffer.size", "value", fromStr, toStr, "sum", "raw", null, null));
        var httpTotalFuture = CompletableFuture.supplyAsync(() ->
                query(serverEndpoint, "http.server.requests", "count", fromStr, toStr, "sum", "delta", null, null));
        var http5xxFuture = CompletableFuture.supplyAsync(() ->
                query(serverEndpoint, "http.server.requests", "count", fromStr, toStr, "sum", "delta", null, Map.of("outcome", "SERVER_ERROR")));
        var authFuture = CompletableFuture.supplyAsync(() ->
                query(serverEndpoint, "cameleer.auth.failures", "count", fromStr, toStr, "sum", "delta", null, null));

        try {
            // Extract latest values from each response
            var agentsResp = agentsFuture.join();
            int live = agentStateValue(agentsResp, "live");
            int stale = agentStateValue(agentsResp, "stale");
            int dead = agentStateValue(agentsResp, "dead");
            int shutdown = agentStateValue(agentsResp, "shutdown");

            double cpu = latestValue(cpuFuture.join());
            long heapUsed = (long) latestValue(heapUsedFuture.join());
            long heapMax = (long) latestValue(heapMaxFuture.join());
            long uptimeMs = (long) latestValue(uptimeFuture.join());
            int threads = (int) latestValue(threadsFuture.join());

            double dropsTotal = sumLatestValues(dropsFuture.join());
            long bufferDepth = (long) latestValue(bufferFuture.join());

            double httpTotal = sumLatestValues(httpTotalFuture.join());
            double http5xx = sumLatestValues(http5xxFuture.join());
            double errorRate = httpTotal > 0 ? http5xx / httpTotal : 0.0;
            // stepSeconds=300 (5min window), so total is per-5-min; convert to per-minute
            double httpPerMin = httpTotal / 5.0;

            double authTotal = sumLatestValues(authFuture.join());
            double authPerMin = authTotal / 5.0;

            return new MetricsSummary(
                    toStr,
                    new AgentMetrics(live, stale, dead, shutdown),
                    new IngestionMetrics(bufferDepth, dropsTotal / 5.0),
                    new ServerJvmMetrics(cpu, heapUsed, heapMax, uptimeMs / 1000, threads),
                    new HttpMetrics(httpPerMin, errorRate),
                    authPerMin
            );
        } catch (Exception e) {
            log.warn("Failed to assemble metrics summary for {}: {}", serverEndpoint, e.getMessage());
            return null;
        }
    }

    private MetricsQueryResponse query(String endpoint, String metric, String statistic,
                                       String from, String to, String aggregation, String mode,
                                       List<String> groupByTags, Map<String, String> filterTags) {
        Map<String, Object> body = new HashMap<>();
        body.put("metric", metric);
        body.put("statistic", statistic);
        body.put("from", from);
        body.put("to", to);
        body.put("stepSeconds", 300);
        body.put("aggregation", aggregation);
        body.put("mode", mode);
        if (groupByTags != null) body.put("groupByTags", groupByTags);
        if (filterTags != null) body.put("filterTags", filterTags);
        return serverApiClient.queryServerMetrics(endpoint, body);
    }

    /** Extract the latest value from the first (or only) series. */
|
||||
private double latestValue(MetricsQueryResponse resp) {
|
||||
if (resp == null || resp.series() == null || resp.series().isEmpty()) return 0.0;
|
||||
List<MetricsPoint> points = resp.series().getFirst().points();
|
||||
if (points == null || points.isEmpty()) return 0.0;
|
||||
return points.getLast().v();
|
||||
}
|
||||
|
||||
/** Sum the latest value across all series (for metrics with groupByTags or multiple series). */
|
||||
private double sumLatestValues(MetricsQueryResponse resp) {
|
||||
if (resp == null || resp.series() == null || resp.series().isEmpty()) return 0.0;
|
||||
double sum = 0.0;
|
||||
for (MetricsSeries series : resp.series()) {
|
||||
if (series.points() != null && !series.points().isEmpty()) {
|
||||
sum += series.points().getLast().v();
|
||||
}
|
||||
}
|
||||
return sum;
|
||||
}
|
||||
|
||||
/** Extract the latest value for a specific agent state tag. */
|
||||
private int agentStateValue(MetricsQueryResponse resp, String state) {
|
||||
if (resp == null || resp.series() == null) return 0;
|
||||
for (MetricsSeries series : resp.series()) {
|
||||
if (series.tags() != null && state.equals(series.tags().get("state"))) {
|
||||
if (series.points() != null && !series.points().isEmpty()) {
|
||||
return (int) series.points().getLast().v();
|
||||
}
|
||||
}
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
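The per-minute and error-rate conversions in the summary assembly above reduce to a few lines of arithmetic: totals are accumulated over a 300-second (5-minute) step, so per-minute rates divide by 5, and the HTTP error rate guards against division by zero. A standalone sketch (hypothetical helper class, not part of the diff):

```java
// Restates the rate math used in the metrics summary above.
// `RateMath` is an illustrative name, not a class in this repo.
final class RateMath {

    /** A delta total queried with stepSeconds=300 covers 5 minutes. */
    static double perMinute(double totalOverFiveMinutes) {
        return totalOverFiveMinutes / 5.0;
    }

    /** 5xx fraction of all requests; 0 when there was no traffic at all. */
    static double errorRate(double total, double errors) {
        return total > 0 ? errors / total : 0.0;
    }
}
```

For example, 300 requests in the 5-minute window is 60 requests/min, and 10 server errors out of 200 requests is a 5% error rate.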
@@ -22,6 +22,7 @@ import net.siegeln.cameleer.saas.tenant.TenantStatus;
 import net.siegeln.cameleer.saas.tenant.dto.CreateTenantRequest;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
+import org.springframework.context.annotation.Lazy;
 import org.springframework.scheduling.annotation.Async;
 import org.springframework.stereotype.Service;
 import org.springframework.transaction.annotation.Transactional;
@@ -49,6 +50,7 @@ public class VendorTenantService {
     private final ProvisioningProperties provisioningProps;
     private final TenantDataCleanupService dataCleanupService;
     private final TenantDatabaseService tenantDatabaseService;
+    private final VendorTenantService self;

     public VendorTenantService(TenantService tenantService,
                                TenantRepository tenantRepository,
@@ -60,7 +62,8 @@ public class VendorTenantService {
                                AuditService auditService,
                                ProvisioningProperties provisioningProps,
                                TenantDataCleanupService dataCleanupService,
-                               TenantDatabaseService tenantDatabaseService) {
+                               TenantDatabaseService tenantDatabaseService,
+                               @Lazy VendorTenantService self) {
         this.tenantService = tenantService;
         this.tenantRepository = tenantRepository;
         this.licenseService = licenseService;
@@ -72,6 +75,7 @@ public class VendorTenantService {
         this.provisioningProps = provisioningProps;
         this.dataCleanupService = dataCleanupService;
         this.tenantDatabaseService = tenantDatabaseService;
+        this.self = self;
     }

     @Transactional
@@ -114,7 +118,7 @@ public class VendorTenantService {

         // 4. Provision server asynchronously (Docker containers, health check, config push)
         if (tenantProvisioner.isAvailable()) {
-            provisionAsync(tenant.getId(), tenant.getSlug(), tenant.getTier().name(), license.getToken(), actorId);
+            self.provisionAsync(tenant.getId(), tenant.getSlug(), tenant.getTier().name(), license.getToken(), actorId);
         }

         return tenant;
@@ -251,7 +255,7 @@ public class VendorTenantService {
             tenantProvisioner.remove(tenant.getSlug());
             var license = licenseService.getActiveLicense(tenantId).orElse(null);
             String token = license != null ? license.getToken() : "";
-            provisionAsync(tenantId, tenant.getSlug(), tenant.getTier().name(), token, null);
+            self.provisionAsync(tenantId, tenant.getSlug(), tenant.getTier().name(), token, null);
             return;
         }
         throw e;
@@ -268,7 +272,7 @@ public class VendorTenantService {
         // Re-provision with freshly pulled images
         var license = licenseService.getActiveLicense(tenantId).orElse(null);
         String token = license != null ? license.getToken() : "";
-        provisionAsync(tenantId, tenant.getSlug(), tenant.getTier().name(), token, null);
+        self.provisionAsync(tenantId, tenant.getSlug(), tenant.getTier().name(), token, null);
     }

     @Transactional
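The switch from `provisionAsync(...)` to `self.provisionAsync(...)` works around Spring's self-invocation limitation: `@Async` (like `@Transactional`) is applied by a proxy that wraps the bean, so a plain internal call never passes through the proxy and the method would run synchronously. A minimal proxy-free sketch of the mechanics, with hypothetical class names and no Spring involved:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the bean: calling through `self` reaches the proxy,
// while a direct internal call would be a plain virtual call that skips it.
class VendorTenantServiceSketch {
    final List<String> log = new ArrayList<>();
    VendorTenantServiceSketch self = this; // Spring's @Lazy injection swaps in the proxy

    void createTenant() {
        self.provisionAsync("acme"); // goes through whatever `self` points at
    }

    void provisionAsync(String slug) {
        log.add("provision " + slug);
    }
}

// Stand-in for the proxy Spring generates around an @Async bean.
class AsyncProxySketch extends VendorTenantServiceSketch {
    final VendorTenantServiceSketch target;

    AsyncProxySketch(VendorTenantServiceSketch target) {
        this.target = target;
    }

    @Override
    void provisionAsync(String slug) {
        target.log.add("async-dispatch"); // real Spring would hand off to an executor here
        target.provisionAsync(slug);
    }
}
```

With `bean.self = new AsyncProxySketch(bean)`, `createTenant()` records the dispatch step before the provisioning work; with the default `self = this`, the dispatch step is skipped entirely, which is the behavior the diff is fixing.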
@@ -45,6 +45,7 @@ cameleer:
   provisioning:
     serverimage: ${CAMELEER_SAAS_PROVISIONING_SERVERIMAGE:gitea.siegeln.net/cameleer/cameleer-server:latest}
     serveruiimage: ${CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE:gitea.siegeln.net/cameleer/cameleer-server-ui:latest}
+    runtimebaseimage: ${CAMELEER_SAAS_PROVISIONING_RUNTIMEBASEIMAGE:gitea.siegeln.net/cameleer/cameleer-runtime-base:latest}
     networkname: ${CAMELEER_SAAS_PROVISIONING_NETWORKNAME:cameleer-saas_cameleer}
     traefiknetwork: ${CAMELEER_SAAS_PROVISIONING_TRAEFIKNETWORK:cameleer-traefik}
     publichost: ${CAMELEER_SAAS_PROVISIONING_PUBLICHOST:localhost}
@@ -47,7 +47,7 @@ class TenantPortalServiceTest {
     private TenantProvisioner tenantProvisioner;

     private final ProvisioningProperties provisioningProps = new ProvisioningProperties(
-            null, null, null, null, "test.example.com", "https", null, null, null, null, null, null, null, null, null);
+            null, null, null, null, null, "test.example.com", "https", null, null, null, null, null, null, null, null, null);

     private TenantPortalService tenantPortalService;
@@ -75,14 +75,20 @@ class VendorTenantServiceTest {
     @BeforeEach
     void setUp() {
         var provisioningProps = new ProvisioningProperties(
-                "img", "uiimg", "net", "traefik", "localhost", "https",
+                "img", "uiimg", "runtime-base:latest", "net", "traefik", "localhost", "https",
                 "jdbc:postgresql://pg:5432/db", "cameleer", "cameleer_dev",
                 "jdbc:clickhouse://ch:8123/cameleer", "default", "cameleer_ch",
                 "https://localhost/oidc", "http://cameleer-logto:3001/oidc/jwks", "https://localhost");
+        // Pass null for self-proxy initially, then re-create with the instance itself
+        // (in production, Spring's @Lazy proxy handles this circular ref)
         vendorTenantService = new VendorTenantService(
                 tenantService, tenantRepository, licenseService,
                 tenantProvisioner, serverApiClient, logtoClient, logtoConfig,
-                auditService, provisioningProps, dataCleanupService, tenantDatabaseService);
+                auditService, provisioningProps, dataCleanupService, tenantDatabaseService, null);
+        vendorTenantService = new VendorTenantService(
+                tenantService, tenantRepository, licenseService,
+                tenantProvisioner, serverApiClient, logtoClient, logtoConfig,
+                auditService, provisioningProps, dataCleanupService, tenantDatabaseService, vendorTenantService);
     }

     // --- Helpers ---
ui/CLAUDE.md (new file, 30 lines)
@@ -0,0 +1,30 @@
# Frontend

React 19 SPA served at `/platform/*` by the Spring Boot backend.

## Core files

- `main.tsx` — React 19 root
- `router.tsx` — `/vendor/*` + `/tenant/*` with `RequireScope` guards and `LandingRedirect` that waits for scopes
- `Layout.tsx` — persona-aware sidebar: vendor sees expandable "Vendor" section (Tenants, Audit Log, Certificates, Infrastructure, Identity/Logto), tenant admin sees Dashboard/License/SSO/Team/Audit/Settings
- `OrgResolver.tsx` — merges global + org-scoped token scopes (vendor's platform:admin is global)
- `config.ts` — fetch Logto config from /platform/api/config

## Auth hooks

- `auth/useAuth.ts` — auth hook (isAuthenticated, logout, signIn)
- `auth/useOrganization.ts` — Zustand store for current tenant
- `auth/useScopes.ts` — decode JWT scopes, hasScope()
- `auth/ProtectedRoute.tsx` — guard (redirects to /login)

## Pages

- **Vendor pages**: `VendorTenantsPage.tsx`, `CreateTenantPage.tsx`, `TenantDetailPage.tsx`, `VendorAuditPage.tsx`, `CertificatesPage.tsx`, `InfrastructurePage.tsx`
- **Tenant pages**: `TenantDashboardPage.tsx` (restart + upgrade server), `TenantLicensePage.tsx`, `SsoPage.tsx`, `TeamPage.tsx` (reset member passwords), `TenantAuditPage.tsx`, `SettingsPage.tsx` (change own password, reset server admin password)

## Custom Sign-in UI (`ui/sign-in/`)

Separate Vite+React SPA replacing Logto's default sign-in page. Built as custom Logto Docker image — see `docker/CLAUDE.md` for details.

- `SignInPage.tsx` — form with @cameleer/design-system components
- `experience-api.ts` — Logto Experience API client (4-step: init -> verify -> identify -> submit)
ui/package-lock.json (generated, 8 lines changed)
@@ -9,7 +9,7 @@
       "version": "0.1.0",
       "hasInstallScript": true,
       "dependencies": {
-        "@cameleer/design-system": "^0.1.51",
+        "@cameleer/design-system": "^0.1.54",
         "@logto/react": "^4.0.13",
         "@tanstack/react-query": "^5.90.0",
         "lucide-react": "^1.7.0",
@@ -309,9 +309,9 @@
       }
     },
     "node_modules/@cameleer/design-system": {
-      "version": "0.1.51",
-      "resolved": "https://gitea.siegeln.net/api/packages/cameleer/npm/%40cameleer%2Fdesign-system/-/0.1.51/design-system-0.1.51.tgz",
-      "integrity": "sha512-ppZSiR6ZzzrUbtHTtnwpU4Zr2LPbcbJfAn0Ayh/OzDf9k6kFjn5myJWFlg+VJAZkFQoJA5y76GcKBdJ8nty4Tw==",
+      "version": "0.1.54",
+      "resolved": "https://gitea.siegeln.net/api/packages/cameleer/npm/%40cameleer%2Fdesign-system/-/0.1.54/design-system-0.1.54.tgz",
+      "integrity": "sha512-IX05JmY/JcxTndfDWBHF7uizrRSqJgEM/J5uv5vQerM+Zq02yUzVNcV4QufVYBevGdnI4acUScnDlmSOOb85Qg==",
       "dependencies": {
         "lucide-react": "^1.7.0",
         "react": "^19.0.0",
ui/package.json
@@ -10,7 +10,7 @@
     "postinstall": "node -e \"const fs=require('fs'),p='node_modules/@cameleer/design-system/assets/';if(fs.existsSync('public')){fs.copyFileSync(p+'cameleer-logo.svg','public/favicon.svg')}\""
   },
   "dependencies": {
-    "@cameleer/design-system": "^0.1.51",
+    "@cameleer/design-system": "^0.1.54",
     "@logto/react": "^4.0.13",
     "@tanstack/react-query": "^5.90.0",
     "lucide-react": "^1.7.0",
ui/sign-in/package-lock.json (generated, 8 lines changed)
@@ -8,7 +8,7 @@
       "name": "cameleer-sign-in",
       "version": "0.1.0",
       "dependencies": {
-        "@cameleer/design-system": "^0.1.51",
+        "@cameleer/design-system": "^0.1.54",
         "react": "^19.0.0",
         "react-dom": "^19.0.0"
       },
@@ -303,9 +303,9 @@
       }
     },
     "node_modules/@cameleer/design-system": {
-      "version": "0.1.51",
-      "resolved": "https://gitea.siegeln.net/api/packages/cameleer/npm/%40cameleer%2Fdesign-system/-/0.1.51/design-system-0.1.51.tgz",
-      "integrity": "sha512-ppZSiR6ZzzrUbtHTtnwpU4Zr2LPbcbJfAn0Ayh/OzDf9k6kFjn5myJWFlg+VJAZkFQoJA5y76GcKBdJ8nty4Tw==",
+      "version": "0.1.54",
+      "resolved": "https://gitea.siegeln.net/api/packages/cameleer/npm/%40cameleer%2Fdesign-system/-/0.1.54/design-system-0.1.54.tgz",
+      "integrity": "sha512-IX05JmY/JcxTndfDWBHF7uizrRSqJgEM/J5uv5vQerM+Zq02yUzVNcV4QufVYBevGdnI4acUScnDlmSOOb85Qg==",
       "dependencies": {
         "lucide-react": "^1.7.0",
         "react": "^19.0.0",
ui/sign-in/package.json
@@ -9,7 +9,7 @@
     "preview": "vite preview"
   },
   "dependencies": {
-    "@cameleer/design-system": "^0.1.51",
+    "@cameleer/design-system": "^0.1.54",
     "react": "^19.0.0",
     "react-dom": "^19.0.0"
   },
@@ -1,6 +1,6 @@
 import { useQuery, useMutation, useQueryClient } from '@tanstack/react-query';
 import { api } from './client';
-import type { VendorTenantSummary, VendorTenantDetail, CreateTenantRequest, TenantResponse, LicenseResponse, AuditLogPage, AuditLogFilters } from '../types/api';
+import type { VendorTenantSummary, VendorTenantDetail, CreateTenantRequest, TenantResponse, LicenseResponse, AuditLogPage, AuditLogFilters, TenantMetricsEntry } from '../types/api';

 export function useVendorTenants() {
   return useQuery<VendorTenantSummary[]>({
@@ -179,3 +179,13 @@ export function useInfraChDetail(tenantId: string) {
     enabled: !!tenantId,
   });
 }
+
+// --- Tenant Metrics ---
+
+export function useTenantMetrics() {
+  return useQuery<TenantMetricsEntry[]>({
+    queryKey: ['vendor', 'metrics'],
+    queryFn: () => api.get('/vendor/metrics'),
+    refetchInterval: 60_000,
+  });
+}
@@ -109,6 +109,14 @@ export function Layout() {
           >
             Certificates
           </div>
+          <div
+            style={{ padding: '6px 12px 6px 36px', fontSize: 13, cursor: 'pointer',
+              fontWeight: isActive(location, '/vendor/metrics') ? 600 : 400,
+              color: isActive(location, '/vendor/metrics') ? 'var(--amber)' : 'var(--text-muted)' }}
+            onClick={() => navigate('/vendor/metrics')}
+          >
+            Metrics
+          </div>
           <div
             style={{ padding: '6px 12px 6px 36px', fontSize: 13, cursor: 'pointer',
               fontWeight: isActive(location, '/vendor/infrastructure') ? 600 : 400,
ui/src/pages/vendor/VendorMetricsPage.tsx (new file, 194 lines)
@@ -0,0 +1,194 @@
import { Card, KpiStrip, Spinner, Badge } from '@cameleer/design-system';
import { Activity } from 'lucide-react';
import { useTenantMetrics } from '../../api/vendor-hooks';
import { useNavigate } from 'react-router';
import type { TenantMetricsEntry, MetricsSummary } from '../../types/api';
import { tierColor } from '../../utils/tier';

function formatBytes(n: number): string {
  if (n < 1024) return `${n} B`;
  if (n < 1024 * 1024) return `${(n / 1024).toFixed(1)} KB`;
  if (n < 1024 * 1024 * 1024) return `${(n / 1024 / 1024).toFixed(1)} MB`;
  return `${(n / 1024 / 1024 / 1024).toFixed(2)} GB`;
}

function formatUptime(seconds: number): string {
  const d = Math.floor(seconds / 86400);
  const h = Math.floor((seconds % 86400) / 3600);
  const m = Math.floor((seconds % 3600) / 60);
  if (d > 0) return `${d}d ${h}h`;
  if (h > 0) return `${h}h ${m}m`;
  return `${m}m`;
}

function formatPct(v: number): string {
  return `${(v * 100).toFixed(1)}%`;
}

function formatRate(v: number): string {
  if (v === 0) return '0';
  if (v < 0.1) return v.toFixed(3);
  if (v < 10) return v.toFixed(1);
  return Math.round(v).toLocaleString();
}

const thStyle: React.CSSProperties = {
  textAlign: 'left',
  padding: '8px 16px',
  fontSize: 11,
  fontWeight: 600,
  color: 'var(--text-muted)',
  textTransform: 'uppercase',
  letterSpacing: '0.05em',
  borderBottom: '1px solid var(--border)',
};

const tdStyle: React.CSSProperties = {
  padding: '10px 16px',
  fontSize: 13,
  borderBottom: '1px solid var(--border)',
  fontVariantNumeric: 'tabular-nums',
};

function AgentsBadges({ m }: { m: MetricsSummary }) {
  const { live, stale, dead } = m.agents;
  return (
    <span style={{ display: 'inline-flex', gap: 6 }}>
      <Badge label={`${live} live`} color="success" />
      {stale > 0 && <Badge label={`${stale} stale`} color="warning" />}
      {dead > 0 && <Badge label={`${dead} dead`} color="error" />}
    </span>
  );
}

function HeapBar({ used, max }: { used: number; max: number }) {
  const pct = max > 0 ? (used / max) * 100 : 0;
  const color = pct > 85 ? 'var(--error, #ef4444)' : pct > 70 ? 'var(--warning, #f59e0b)' : 'var(--success, #22c55e)';
  return (
    <div style={{ display: 'flex', alignItems: 'center', gap: 8 }}>
      <div style={{ width: 60, height: 6, borderRadius: 3, background: 'var(--border)', overflow: 'hidden' }}>
        <div style={{ width: `${pct}%`, height: '100%', borderRadius: 3, background: color }} />
      </div>
      <span style={{ fontSize: 12, color: 'var(--text-muted)' }}>
        {formatBytes(used)} / {formatBytes(max)}
      </span>
    </div>
  );
}

function DropsBadge({ rate }: { rate: number }) {
  if (rate === 0) return <span style={{ color: 'var(--text-muted)' }}>0</span>;
  return <Badge label={`${formatRate(rate)}/min`} color="error" />;
}

function TenantRow({ entry, onClick }: { entry: TenantMetricsEntry; onClick: () => void }) {
  const m = entry.metrics;
  const notRunning = entry.serverState !== 'RUNNING';

  return (
    <tr style={{ cursor: 'pointer' }} onClick={onClick}>
      <td style={tdStyle}>
        <div style={{ display: 'flex', alignItems: 'center', gap: 8 }}>
          <span style={{ fontWeight: 500 }}>{entry.tenantName}</span>
          <Badge label={entry.tier} color={tierColor(entry.tier)} />
        </div>
      </td>
      {notRunning || !m ? (
        <td colSpan={6} style={{ ...tdStyle, color: 'var(--text-muted)', textAlign: 'center' }}>
          {notRunning ? `Server ${entry.serverState.toLowerCase()}` : 'No metrics available'}
        </td>
      ) : (
        <>
          <td style={tdStyle}><AgentsBadges m={m} /></td>
          <td style={{ ...tdStyle, textAlign: 'right' }}>{formatPct(m.server.cpuUsage)}</td>
          <td style={tdStyle}><HeapBar used={m.server.heapUsedBytes} max={m.server.heapMaxBytes} /></td>
          <td style={{ ...tdStyle, textAlign: 'right' }}>{formatRate(m.http.requestsPerMinute)}/min</td>
          <td style={{ ...tdStyle, textAlign: 'center' }}><DropsBadge rate={m.ingestion.dropsPerMinute} /></td>
          <td style={{ ...tdStyle, textAlign: 'right', color: 'var(--text-muted)' }}>{formatUptime(m.server.uptimeSeconds)}</td>
        </>
      )}
    </tr>
  );
}

function FleetKpis({ entries }: { entries: TenantMetricsEntry[] }) {
  const withMetrics = entries.filter((e) => e.metrics != null);
  const totalAgentsLive = withMetrics.reduce((s, e) => s + (e.metrics!.agents.live), 0);
  const totalAgentsDead = withMetrics.reduce((s, e) => s + (e.metrics!.agents.dead), 0);
  const totalDrops = withMetrics.reduce((s, e) => s + (e.metrics!.ingestion.dropsPerMinute), 0);
  const running = entries.filter((e) => e.serverState === 'RUNNING').length;
  const avgCpu = withMetrics.length > 0
    ? withMetrics.reduce((s, e) => s + e.metrics!.server.cpuUsage, 0) / withMetrics.length
    : 0;

  return (
    <KpiStrip
      items={[
        { label: 'Tenants Running', value: `${running} / ${entries.length}` },
        { label: 'Total Agents Live', value: String(totalAgentsLive) },
        { label: 'Dead Agents', value: String(totalAgentsDead) },
        { label: 'Avg CPU', value: formatPct(avgCpu) },
        { label: 'Ingestion Drops', value: totalDrops === 0 ? '0' : `${formatRate(totalDrops)}/min` },
      ]}
    />
  );
}

export function VendorMetricsPage() {
  const { data, isLoading, isError } = useTenantMetrics();
  const navigate = useNavigate();

  return (
    <div style={{ padding: 24, display: 'flex', flexDirection: 'column', gap: 24, maxWidth: 1200 }}>
      <h1 style={{ margin: 0, fontSize: '1.25rem', fontWeight: 600 }}>Tenant Metrics</h1>

      <Card>
        <div style={{ display: 'flex', alignItems: 'center', gap: 10, padding: '16px 20px', borderBottom: '1px solid var(--border)' }}>
          <Activity size={18} style={{ color: 'var(--amber)' }} />
          <h2 style={{ margin: 0, fontSize: '1rem', fontWeight: 600 }}>Fleet Overview</h2>
          {isLoading && <Spinner size="sm" />}
          {isError && <span style={{ fontSize: 13, color: 'var(--error, #ef4444)' }}>Failed to load</span>}
        </div>

        {data && (
          <>
            <div style={{ padding: '4px 20px' }}>
              <FleetKpis entries={data} />
            </div>

            <table style={{ width: '100%', borderCollapse: 'collapse' }}>
              <thead>
                <tr>
                  <th style={thStyle}>Tenant</th>
                  <th style={thStyle}>Agents</th>
                  <th style={{ ...thStyle, textAlign: 'right' }}>CPU</th>
                  <th style={thStyle}>Heap</th>
                  <th style={{ ...thStyle, textAlign: 'right' }}>HTTP Req</th>
                  <th style={{ ...thStyle, textAlign: 'center' }}>Drops</th>
                  <th style={{ ...thStyle, textAlign: 'right' }}>Uptime</th>
                </tr>
              </thead>
              <tbody>
                {data.length === 0 ? (
                  <tr>
                    <td colSpan={7} style={{ ...tdStyle, color: 'var(--text-muted)', textAlign: 'center' }}>
                      No tenants found
                    </td>
                  </tr>
                ) : (
                  data.map((entry) => (
                    <TenantRow
                      key={entry.tenantId}
                      entry={entry}
                      onClick={() => navigate(`/vendor/tenants/${entry.tenantId}`)}
                    />
                  ))
                )}
              </tbody>
            </table>
          </>
        )}
      </Card>
    </div>
  );
}
@@ -14,6 +14,7 @@ import { TenantDetailPage } from './pages/vendor/TenantDetailPage';
 import { VendorAuditPage } from './pages/vendor/VendorAuditPage';
 import { CertificatesPage } from './pages/vendor/CertificatesPage';
 import { InfrastructurePage } from './pages/vendor/InfrastructurePage';
+import { VendorMetricsPage } from './pages/vendor/VendorMetricsPage';
 import { TenantDashboardPage } from './pages/tenant/TenantDashboardPage';
 import { TenantLicensePage } from './pages/tenant/TenantLicensePage';
 import { SsoPage } from './pages/tenant/SsoPage';
@@ -82,6 +83,11 @@ export function AppRouter() {
           <CertificatesPage />
         </RequireScope>
       } />
+      <Route path="/vendor/metrics" element={
+        <RequireScope scope="platform:admin" fallback={<Navigate to="/tenant" replace />}>
+          <VendorMetricsPage />
+        </RequireScope>
+      } />
       <Route path="/vendor/infrastructure" element={
         <RequireScope scope="platform:admin" fallback={<Navigate to="/tenant" replace />}>
           <InfrastructurePage />
@@ -155,3 +155,48 @@ export interface AuditLogFilters {
   page?: number;
   size?: number;
 }
+
+// Tenant metrics (from server /api/v1/admin/metrics/summary)
+export interface AgentMetrics {
+  live: number;
+  stale: number;
+  dead: number;
+  shutdown: number;
+}
+
+export interface IngestionMetrics {
+  bufferDepth: number;
+  dropsPerMinute: number;
+}
+
+export interface ServerMetrics {
+  cpuUsage: number;
+  heapUsedBytes: number;
+  heapMaxBytes: number;
+  uptimeSeconds: number;
+  threadCount: number;
+}
+
+export interface HttpMetrics {
+  requestsPerMinute: number;
+  errorRate: number;
+}
+
+export interface MetricsSummary {
+  collectedAt: string;
+  agents: AgentMetrics;
+  ingestion: IngestionMetrics;
+  server: ServerMetrics;
+  http: HttpMetrics;
+  authFailuresPerMinute: number;
+}
+
+export interface TenantMetricsEntry {
+  tenantId: string;
+  tenantName: string;
+  slug: string;
+  tier: string;
+  status: string;
+  serverState: string;
+  metrics: MetricsSummary | null;
+}