Compare commits
15 commits: dc4ea33c9b...v1.0.0
| Author | SHA1 | Date |
|---|---|---|
| | 1fbafbb16d | |
| | 6c1241ed89 | |
| | df64573bfb | |
| | 4526d97bda | |
| | 132143c083 | |
| | b824942408 | |
| | 31e8dd05f0 | |
| | eba9f560ac | |
| | 3c2bf4a9b1 | |
| | 97b2235914 | |
| | 338db5dcda | |
| | fd50a147a2 | |
| | 0dd52624b7 | |
| | 1ce0ea411d | |
| | 81be25198c | |
@@ -1,7 +1,7 @@
<!-- gitnexus:start -->
# GitNexus — Code Intelligence

This project is indexed by GitNexus as **cameleer-saas** (2816 symbols, 5989 relationships, 238 execution flows). Use the GitNexus MCP tools to understand code, assess impact, and navigate safely.

> If any GitNexus tool warns the index is stale, run `npx gitnexus analyze` in the terminal first.
331 CLAUDE.md
@@ -17,322 +17,37 @@ This repo is the SaaS layer on top of two proven components:
Agent-server protocol is defined in `cameleer/cameleer-common/PROTOCOL.md`. The agent and server are mature, proven components — this repo wraps them with multi-tenancy, billing, and self-service onboarding.

## Key Packages
| Package | Purpose | Key classes |
|---------|---------|-------------|
| `config/` | Security, tenant isolation, web config | `SecurityConfig`, `TenantIsolationInterceptor`, `TenantContext`, `PublicConfigController`, `MeController` |
| `tenant/` | Tenant data model | `TenantEntity` (JPA: id, name, slug, tier, status, logto_org_id, db_password) |
| `vendor/` | Vendor console (platform:admin) | `VendorTenantService`, `VendorTenantController`, `InfrastructureService` |
| `portal/` | Tenant admin portal (org-scoped) | `TenantPortalService`, `TenantPortalController` |
| `provisioning/` | Pluggable tenant provisioning | `DockerTenantProvisioner`, `TenantDatabaseService`, `TenantDataCleanupService` |
| `certificate/` | TLS certificate lifecycle | `CertificateService`, `CertificateController`, `TenantCaCertService` |
| `license/` | License management | `LicenseService`, `LicenseController` |
| `identity/` | Logto & server integration | `LogtoManagementClient`, `ServerApiClient` |
| `audit/` | Audit logging | `AuditService` |

### Java Backend (`src/main/java/net/siegeln/cameleer/saas/`)

**config/** — Security, tenant isolation, web config
- `SecurityConfig.java` — OAuth2 JWT decoder (ES384, issuer/audience validation, scope extraction)
- `TenantIsolationInterceptor.java` — HandlerInterceptor on `/api/**`; JWT org_id -> TenantContext, path variable validation, fail-closed
- `TenantContext.java` — ThreadLocal<UUID> tenant ID storage
- `WebConfig.java` — registers TenantIsolationInterceptor
- `PublicConfigController.java` — GET /api/config (Logto endpoint, SPA client ID, scopes)
- `MeController.java` — GET /api/me (authenticated user, tenant list)
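The fail-closed tenant isolation check can be sketched as follows. This is an illustrative TypeScript model only — the real logic is a Spring `HandlerInterceptor` in Java, and `allowRequest` is a hypothetical name:

```typescript
// Hypothetical sketch of the fail-closed check TenantIsolationInterceptor performs.
function allowRequest(opts: {
  jwtOrgTenantId: string | null; // tenant resolved from the JWT org_id claim
  pathTenantId: string | null;   // {tenantId} path variable, if present on the route
  isPlatformAdmin: boolean;
}): boolean {
  if (opts.isPlatformAdmin) return true; // platform admins bypass isolation
  if (!opts.jwtOrgTenantId) return false; // fail-closed: no tenant context, no access
  if (opts.pathTenantId !== null && opts.pathTenantId !== opts.jwtOrgTenantId) {
    return false; // path variable must match the caller's own tenant
  }
  return true;
}
```

The important property is that every missing or mismatched input denies the request; access is only granted on an explicit match or an explicit admin bypass.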
**tenant/** — Tenant data model
- `TenantEntity.java` — JPA entity (id, name, slug, tier, status, logto_org_id, stripe IDs, settings JSONB, db_password)
**vendor/** — Vendor console (platform:admin only)
- `VendorTenantService.java` — orchestrates tenant creation (sync: DB + Logto + license; async: Docker provisioning + config push), suspend/activate, delete, restart server, upgrade server (force-pull + re-provision), license renewal
- `VendorTenantController.java` — REST at `/api/vendor/tenants` (platform:admin required). The list endpoint returns `VendorTenantSummary` with fleet health data (agentCount, environmentCount, agentLimit) fetched in parallel via `CompletableFuture`.
- `InfrastructureService.java` — raw JDBC queries against shared PostgreSQL and ClickHouse for per-tenant infrastructure monitoring (schema sizes, table stats, row counts, disk usage)
- `InfrastructureController.java` — REST at `/api/vendor/infrastructure` (platform:admin required). PostgreSQL and ClickHouse overview with per-tenant breakdown.

**portal/** — Tenant admin portal (org-scoped)
- `TenantPortalService.java` — customer-facing: dashboard (health + agent/env counts from the server via M2M), license, SSO connectors, team, settings (public endpoint URL), server restart/upgrade, password management (own + team + server admin)
- `TenantPortalController.java` — REST at `/api/tenant/*` (org-scoped; includes CA cert management at `/api/tenant/ca`, password endpoints at `/api/tenant/password` and `/api/tenant/server/admin-password`)
**provisioning/** — Pluggable tenant provisioning
- `TenantProvisioner.java` — pluggable interface (like the server's RuntimeOrchestrator)
- `DockerTenantProvisioner.java` — Docker implementation; creates per-tenant server + UI containers with per-tenant JDBC credentials (`currentSchema=tenant_{slug}&ApplicationName=tenant_{slug}`). `upgrade(slug)` force-pulls the latest images and removes the server + UI containers (preserves app containers, volumes, networks) for re-provisioning. `remove(slug)` does full cleanup: label-based container removal, env networks, tenant network, JAR volume.
- `TenantDatabaseService.java` — creates/drops per-tenant PostgreSQL users (`tenant_{slug}`) and schemas; used during provisioning and delete
- `TenantDataCleanupService.java` — GDPR data erasure on tenant delete: deletes ClickHouse data across all tables with a `tenant_id` column (PostgreSQL cleanup is handled by `TenantDatabaseService`)
- `TenantProvisionerAutoConfig.java` — auto-detects the Docker socket
- `DockerCertificateManager.java` — file-based cert management with atomic `.wip` swap (Docker volume)
- `DisabledCertificateManager.java` — no-op when the certs dir is unavailable
- `CertificateManagerAutoConfig.java` — auto-detects the `/certs` directory
**certificate/** — TLS certificate lifecycle management
- `CertificateManager.java` — provider interface (Docker now, K8s later)
- `CertificateService.java` — orchestrates stage/activate/restore/discard, DB metadata, tenant CA staleness
- `CertificateController.java` — REST at `/api/vendor/certificates` (platform:admin required)
- `CertificateEntity.java` — JPA entity (status: ACTIVE/STAGED/ARCHIVED, subject, fingerprint, etc.)
- `CertificateStartupListener.java` — seeds the DB from the filesystem on boot (for bootstrap-generated certs)
- `TenantCaCertEntity.java` — JPA entity for per-tenant CA certs (PEM stored in the DB, multiple per tenant)
- `TenantCaCertRepository.java` — queries by tenant, by status, and all active across tenants
- `TenantCaCertService.java` — stage/activate/delete tenant CAs; rebuilds the aggregated `ca.pem` on changes
**license/** — License management
- `LicenseEntity.java` — JPA entity (id, tenant_id, tier, features JSONB, limits JSONB, expires_at)
- `LicenseService.java` — generation, validation, feature/limit lookups
- `LicenseController.java` — POST issue, GET verify, DELETE revoke

**identity/** — Logto & server integration
- `LogtoConfig.java` — Logto endpoint, M2M credentials (reads from the bootstrap file)
- `LogtoManagementClient.java` — Logto Management API calls (create org, create user, add to org, get user, SSO connectors, JIT provisioning, password updates via `PATCH /api/users/{id}/password`)
- `ServerApiClient.java` — M2M client for the cameleer-server API (Logto M2M token, `X-Cameleer-Protocol-Version: 1` header). Health checks, license/OIDC push, agent count, environment count, server admin password reset per tenant server.

**audit/** — Audit logging
- `AuditEntity.java` — JPA entity (actor_id, actor_email, tenant_id, action, resource, status)
- `AuditService.java` — logs audit events (TENANT_CREATE, TENANT_UPDATE, etc.); auto-resolves the actor name from Logto when actorEmail is null (cached in-memory)
### React Frontend (`ui/src/`)

- `main.tsx` — React 19 root
- `router.tsx` — `/vendor/*` + `/tenant/*` with `RequireScope` guards and a `LandingRedirect` that waits for scopes
- `Layout.tsx` — persona-aware sidebar: vendors see an expandable "Vendor" section (Tenants, Audit Log, Certificates, Infrastructure, Identity/Logto); tenant admins see Dashboard/License/SSO/Team/Audit/Settings
- `OrgResolver.tsx` — merges global + org-scoped token scopes (the vendor's platform:admin is global)
- `config.ts` — fetches the Logto config from /platform/api/config
- `auth/useAuth.ts` — auth hook (isAuthenticated, logout, signIn)
- `auth/useOrganization.ts` — Zustand store for the current tenant
- `auth/useScopes.ts` — decodes JWT scopes, hasScope()
- `auth/ProtectedRoute.tsx` — guard (redirects to /login)
- **Vendor pages**: `VendorTenantsPage.tsx`, `CreateTenantPage.tsx`, `TenantDetailPage.tsx`, `VendorAuditPage.tsx`, `CertificatesPage.tsx`
- **Tenant pages**: `TenantDashboardPage.tsx` (restart + upgrade server), `TenantLicensePage.tsx`, `SsoPage.tsx`, `TeamPage.tsx` (reset member passwords), `TenantAuditPage.tsx`, `SettingsPage.tsx` (change own password, reset server admin password)
### Custom Sign-in UI (`ui/sign-in/src/`)

- `SignInPage.tsx` — form with @cameleer/design-system components
- `experience-api.ts` — Logto Experience API client (4-step: init -> verify -> identify -> submit)

### Frontend

- **`ui/src/`** — React 19 SPA at `/platform/*` (vendor + tenant admin pages)
- **`ui/sign-in/`** — Custom Logto sign-in UI (built into the `cameleer-logto` Docker image)
## Architecture Context

The SaaS platform is a **vendor management plane**. It does not proxy requests to servers — instead it provisions dedicated per-tenant cameleer-server instances via the Docker API. Each tenant gets isolated server + UI containers with their own database schemas, networks, and Traefik routing.
### Routing (single-domain, path-based via Traefik)

All services share one hostname. Infrastructure containers (Traefik, Logto) use the `PUBLIC_HOST` + `PUBLIC_PROTOCOL` env vars directly. The SaaS app reads these via `CAMELEER_SAAS_PROVISIONING_PUBLICHOST` / `CAMELEER_SAAS_PROVISIONING_PUBLICPROTOCOL` (Spring Boot properties `cameleer.saas.provisioning.publichost` / `cameleer.saas.provisioning.publicprotocol`).

| Path | Target | Notes |
|------|--------|-------|
| `/platform/*` | cameleer-saas:8080 | SPA + API (`server.servlet.context-path: /platform`) |
| `/platform/vendor/*` | (SPA routes) | Vendor console (platform:admin) |
| `/platform/tenant/*` | (SPA routes) | Tenant admin portal (org-scoped) |
| `/t/{slug}/*` | per-tenant server-ui | Provisioned tenant UI containers (Traefik labels) |
| `/` | redirect -> `/platform/` | Via `docker/traefik-dynamic.yml` |
| `/*` (catch-all) | cameleer-logto:3001 (priority=1) | Custom sign-in UI, OIDC, interaction |

- SPA assets live at `/_app/` (Vite `assetsDir: '_app'`) to avoid conflict with Logto's `/assets/`
- Logto `ENDPOINT` = `${PUBLIC_PROTOCOL}://${PUBLIC_HOST}` (same domain, same origin)
- TLS: the `traefik-certs` init container generates a self-signed cert (dev) or copies a user-supplied cert via the `CERT_FILE`/`KEY_FILE`/`CA_FILE` env vars. The default cert is configured in `docker/traefik-dynamic.yml` (NOT static `traefik.yml` — Traefik v3 ignores `tls.stores.default` in static config). Runtime cert replacement via the vendor UI (stage/activate/restore). ACME for production (future). Server containers import `/certs/ca.pem` into the JVM truststore at startup via `docker-entrypoint.sh` for OIDC trust.
- Root `/` -> `/platform/` redirect via the Traefik file provider (`docker/traefik-dynamic.yml`)
- LoginPage auto-redirects to Logto OIDC (no intermediate button)
- Per-tenant server containers get Traefik labels for `/t/{slug}/*` routing at provisioning time
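The routing table above resolves like a longest-prefix match: the catch-all at priority=1 only wins when nothing more specific applies. A small illustrative sketch (hypothetical helper, not part of the repo — Traefik implements this internally via rule priorities):

```typescript
// Simplified model of the path routing: most specific matching prefix wins,
// mirroring Traefik's default rule-length priority; "/" is the catch-all.
const routes: Array<{ prefix: string; target: string }> = [
  { prefix: "/platform/", target: "cameleer-saas:8080" },
  { prefix: "/t/", target: "per-tenant server-ui" },
  { prefix: "/", target: "cameleer-logto:3001" }, // catch-all (priority=1)
];

function resolveTarget(path: string): string {
  const match = routes
    .filter((r) => path.startsWith(r.prefix))
    .sort((a, b) => b.prefix.length - a.prefix.length)[0];
  return match.target;
}
```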
### Docker Networks

Compose-defined networks:

| Network | Name on Host | Purpose |
|---------|-------------|---------|
| `cameleer` | `cameleer-saas_cameleer` | Compose default — shared services (DB, Logto, SaaS) |
| `cameleer-traefik` | `cameleer-traefik` (fixed `name:`) | Traefik + provisioned tenant containers |

Per-tenant networks (created dynamically by `DockerTenantProvisioner`):

| Network | Name Pattern | Purpose |
|---------|-------------|---------|
| Tenant network | `cameleer-tenant-{slug}` | Internal bridge, no internet — isolates tenant server + apps |
| Environment network | `cameleer-env-{tenantId}-{envSlug}` | Tenant-scoped (includes tenantId to prevent slug collisions across tenants) |

Server containers join three networks: the tenant network (primary), the shared services network (`cameleer`), and the traefik network. Apps deployed by the server use the tenant network as primary.

**IMPORTANT:** Dynamically-created containers MUST carry the `traefik.docker.network=cameleer-traefik` label. Traefik's Docker provider defaults to `network: cameleer` (the compose-internal name) for IP resolution, which doesn't match dynamically-created containers connected via the Docker API using the host network name (`cameleer-saas_cameleer`). Without this label, Traefik returns 504 Gateway Timeout for `/t/{slug}/api/*` paths.
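The naming conventions above can be sketched as small helpers. These are illustrative only — the real derivation lives in `DockerTenantProvisioner` (Java), and the helper names are hypothetical:

```typescript
// Per-tenant and per-environment network names, as described in the tables above.
const tenantNetwork = (slug: string) => `cameleer-tenant-${slug}`;
const envNetwork = (tenantId: string, envSlug: string) =>
  `cameleer-env-${tenantId}-${envSlug}`; // tenantId prevents cross-tenant slug collisions

// Label every dynamically-created, Traefik-routed container must carry so the
// Docker provider resolves its IP on the right network (avoids the 504s noted above).
const traefikLabels: Record<string, string> = {
  "traefik.docker.network": "cameleer-traefik",
};
```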
### Custom sign-in UI (`ui/sign-in/`)

A separate Vite + React SPA replacing Logto's default sign-in page. Visually matches the cameleer-server LoginPage.

- Built as a custom Logto Docker image (`cameleer-logto`): `ui/sign-in/Dockerfile` = node build stage + `FROM ghcr.io/logto-io/logto:latest` + COPY dist over `/etc/logto/packages/experience/dist/`
- Uses `@cameleer/design-system` components (Card, Input, Button, FormField, Alert)
- Authenticates via the Logto Experience API (4-step: init -> verify password -> identify -> submit -> redirect)
- The `CUSTOM_UI_PATH` env var does NOT work for Logto OSS — you must volume-mount or replace the experience dist directory
- Favicon bundled in `ui/sign-in/public/favicon.svg` (served by Logto, not SaaS)
### Auth enforcement

- All API endpoints enforce OAuth2 scopes via `@PreAuthorize("hasAuthority('SCOPE_xxx')")` annotations
- Tenant isolation is enforced by `TenantIsolationInterceptor` (a single `HandlerInterceptor` on `/api/**` that resolves the JWT org_id to TenantContext and validates the `{tenantId}`, `{environmentId}`, `{appId}` path variables; fail-closed, platform admins bypass)
- 13 OAuth2 scopes on the Logto API resource (`https://api.cameleer.local`): 10 platform scopes + 3 server scopes (`server:admin`, `server:operator`, `server:viewer`), served to the frontend from `GET /platform/api/config`
- Server scopes map to server RBAC roles via the JWT `scope` claim (SaaS platform path) or the `roles` claim (server-ui OIDC login path)
- Org roles: `owner` -> `server:admin` + `tenant:manage`, `operator` -> `server:operator`, `viewer` -> `server:viewer`
- The `saas-vendor` global role is created by bootstrap Phase 12 and always assigned to the admin user — it has `platform:admin` + all tenant scopes
- Custom `JwtDecoder` in `SecurityConfig.java` — ES384 algorithm, `at+jwt` token type, split issuer-uri (string validation) / jwk-set-uri (Docker-internal fetch), audience validation (`https://api.cameleer.local`)
- Logto Custom JWT (Phase 7b in bootstrap) injects a `roles` claim into access tokens based on org roles and global roles — this makes role data available to the server without Logto-specific code
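The org-role -> scope mapping listed above can be modeled as a simple lookup. This is an illustrative sketch only — the real mapping is configured in Logto org roles and the Custom JWT script, and `scopesFor` is a hypothetical name:

```typescript
// Org roles and the scopes they grant, per the bullet list above.
const orgRoleScopes: Record<string, string[]> = {
  owner: ["server:admin", "tenant:manage"],
  operator: ["server:operator"],
  viewer: ["server:viewer"],
};

function scopesFor(role: string): string[] {
  return orgRoleScopes[role] ?? []; // unknown roles grant nothing (fail-closed)
}
```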
### Auth routing by persona

| Persona | Logto role | Key scope | Landing route |
|---------|-----------|-----------|---------------|
| SaaS admin | `saas-vendor` (global) | `platform:admin` | `/vendor/tenants` |
| Tenant admin | org `owner` | `tenant:manage` | `/tenant` (dashboard) |
| Regular user (operator/viewer) | org member | `server:operator` or `server:viewer` | Redirected directly to the server dashboard |

- The `LandingRedirect` component waits for scopes to load, then routes to the correct persona landing page
- The `RequireScope` guard on route groups enforces scope requirements
- SSO bridge: the Logto session carries over to the provisioned server's OIDC flow (Traditional Web App per tenant)
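The persona table above boils down to a scope-priority decision once scopes have loaded. A hedged sketch of what `LandingRedirect` decides (the function name and the `"server-dashboard"` sentinel are illustrative; the real component lives in `router.tsx`):

```typescript
// Pick the landing destination by checking the highest-privilege scope first.
function landingRoute(scopes: string[]): string {
  if (scopes.includes("platform:admin")) return "/vendor/tenants"; // SaaS admin
  if (scopes.includes("tenant:manage")) return "/tenant";          // tenant admin
  return "server-dashboard"; // operator/viewer: redirected off-SPA to the tenant server UI
}
```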
### Per-tenant server env vars (set by DockerTenantProvisioner)

These env vars are injected into provisioned per-tenant server containers:

| Env var | Value | Purpose |
|---------|-------|---------|
| `SPRING_DATASOURCE_URL` | `jdbc:postgresql://cameleer-postgres:5432/cameleer?currentSchema=tenant_{slug}&ApplicationName=tenant_{slug}` | Per-tenant schema isolation + diagnostic query scoping |
| `SPRING_DATASOURCE_USERNAME` | `tenant_{slug}` | Per-tenant PG user (owns only its schema) |
| `SPRING_DATASOURCE_PASSWORD` | (generated, stored in `TenantEntity.dbPassword`) | Per-tenant PG password |
| `CAMELEER_SERVER_SECURITY_OIDCISSUERURI` | `${PUBLIC_PROTOCOL}://${PUBLIC_HOST}/oidc` | Token issuer claim validation |
| `CAMELEER_SERVER_SECURITY_OIDCJWKSETURI` | `http://cameleer-logto:3001/oidc/jwks` | Docker-internal JWK fetch |
| `CAMELEER_SERVER_SECURITY_OIDCTLSSKIPVERIFY` | `true` (conditional) | Skip cert verify for OIDC discovery; only set when no `/certs/ca.pem` exists. When ca.pem exists, the server's `docker-entrypoint.sh` imports it into the JVM truststore instead. |
| `CAMELEER_SERVER_SECURITY_OIDCAUDIENCE` | `https://api.cameleer.local` | JWT audience validation for OIDC tokens |
| `CAMELEER_SERVER_SECURITY_CORSALLOWEDORIGINS` | `${PUBLIC_PROTOCOL}://${PUBLIC_HOST}` | Allow browser requests through Traefik |
| `CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKEN` | (generated) | Bootstrap auth token for M2M communication |
| `CAMELEER_SERVER_RUNTIME_ENABLED` | `true` | Enable Docker orchestration |
| `CAMELEER_SERVER_RUNTIME_SERVERURL` | `http://cameleer-server-{slug}:8081` | Per-tenant server URL (DNS alias on the tenant network) |
| `CAMELEER_SERVER_RUNTIME_ROUTINGDOMAIN` | `${PUBLIC_HOST}` | Domain for Traefik routing labels |
| `CAMELEER_SERVER_RUNTIME_ROUTINGMODE` | `path` | `path` or `subdomain` routing |
| `CAMELEER_SERVER_RUNTIME_JARSTORAGEPATH` | `/data/jars` | Directory for uploaded JARs |
| `CAMELEER_SERVER_RUNTIME_DOCKERNETWORK` | `cameleer-tenant-{slug}` | Primary network for deployed app containers |
| `CAMELEER_SERVER_RUNTIME_JARDOCKERVOLUME` | `cameleer-jars-{slug}` | Docker volume name for JAR sharing between the server and deployed containers |
| `CAMELEER_SERVER_TENANT_ID` | (tenant UUID) | Tenant identifier for data isolation |
| `CAMELEER_SERVER_SECURITY_INFRASTRUCTUREENDPOINTS` | `false` | Hides Database/ClickHouse admin from tenant admins |
| `BASE_PATH` (server-ui) | `/t/{slug}` | React Router basename + `<base>` tag |
| `CAMELEER_API_URL` (server-ui) | `http://cameleer-server-{slug}:8081` | Nginx upstream proxy target (NOT `API_URL` — the image uses `${CAMELEER_API_URL}`) |
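The datasource rows in the table above are all derived from the tenant slug. A hedged TypeScript sketch (the real construction happens in `DockerTenantProvisioner` in Java; `datasourceEnv` is a hypothetical helper name):

```typescript
// Derive the per-tenant datasource env vars from the slug, as the table describes.
function datasourceEnv(slug: string, password: string): Record<string, string> {
  const schema = `tenant_${slug}`; // schema, PG user, and ApplicationName all share this name
  return {
    SPRING_DATASOURCE_URL:
      `jdbc:postgresql://cameleer-postgres:5432/cameleer?currentSchema=${schema}&ApplicationName=${schema}`,
    SPRING_DATASOURCE_USERNAME: schema, // per-tenant PG user owns only its schema
    SPRING_DATASOURCE_PASSWORD: password, // generated, stored in TenantEntity.dbPassword
  };
}
```

Using the slug for `ApplicationName` as well as `currentSchema` is what makes the infrastructure monitoring queries attributable per tenant.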
### Per-tenant volume mounts (set by DockerTenantProvisioner)

| Mount | Container path | Purpose |
|-------|---------------|---------|
| `/var/run/docker.sock` | `/var/run/docker.sock` | Docker socket for app deployment orchestration |
| `cameleer-jars-{slug}` (volume, via `CAMELEER_SERVER_RUNTIME_JARDOCKERVOLUME`) | `/data/jars` | Shared JAR storage — the server writes, deployed app containers read |
| `cameleer-saas_certs` (volume, ro) | `/certs` | Platform TLS certs + CA bundle for OIDC trust |
### SaaS app configuration (env vars for cameleer-saas itself)

SaaS properties use the `cameleer.saas.*` prefix (env vars: `CAMELEER_SAAS_*`). Two groups:

**Identity** (`cameleer.saas.identity.*` / `CAMELEER_SAAS_IDENTITY_*`):
- Logto endpoint, M2M credentials, bootstrap file path — used by `LogtoConfig.java`

**Provisioning** (`cameleer.saas.provisioning.*` / `CAMELEER_SAAS_PROVISIONING_*`):

| Env var | Spring property | Purpose |
|---------|----------------|---------|
| `CAMELEER_SAAS_PROVISIONING_SERVERIMAGE` | `cameleer.saas.provisioning.serverimage` | Docker image for per-tenant server containers |
| `CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE` | `cameleer.saas.provisioning.serveruiimage` | Docker image for per-tenant UI containers |
| `CAMELEER_SAAS_PROVISIONING_NETWORKNAME` | `cameleer.saas.provisioning.networkname` | Shared services Docker network (compose default) |
| `CAMELEER_SAAS_PROVISIONING_TRAEFIKNETWORK` | `cameleer.saas.provisioning.traefiknetwork` | Traefik Docker network for routing |
| `CAMELEER_SAAS_PROVISIONING_PUBLICHOST` | `cameleer.saas.provisioning.publichost` | Public hostname (same value as the infrastructure `PUBLIC_HOST`) |
| `CAMELEER_SAAS_PROVISIONING_PUBLICPROTOCOL` | `cameleer.saas.provisioning.publicprotocol` | Public protocol (same value as the infrastructure `PUBLIC_PROTOCOL`) |

**Note:** `PUBLIC_HOST` and `PUBLIC_PROTOCOL` remain infrastructure env vars for the Traefik and Logto containers. The SaaS app reads its own copies via the `CAMELEER_SAAS_PROVISIONING_*` prefix. `LOGTO_ENDPOINT` and `LOGTO_DB_PASSWORD` are infrastructure env vars for the Logto service and are unchanged.
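The env-var column in the table above follows Spring Boot's relaxed binding of environment variables to properties. A simplified sketch of the correspondence (real relaxed binding also handles dashes and indexed keys, which are omitted here):

```typescript
// CAMELEER_SAAS_PROVISIONING_PUBLICHOST <-> cameleer.saas.provisioning.publichost
function envToProperty(envVar: string): string {
  return envVar.toLowerCase().replace(/_/g, ".");
}

function propertyToEnv(property: string): string {
  return property.toUpperCase().replace(/\./g, "_");
}
```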
### Server OIDC role extraction (two paths)

| Path | Token type | Role source | How it works |
|------|-----------|-------------|--------------|
| SaaS platform -> server API | Logto org-scoped access token | `scope` claim | `JwtAuthenticationFilter.extractRolesFromScopes()` reads `server:admin` from the scope |
| Server-ui SSO login | Logto JWT access token (via Traditional Web App) | `roles` claim | `OidcTokenExchanger` decodes the access_token and reads the `roles` injected by Custom JWT |

The server's OIDC config (`OidcConfig`) includes `audience` (RFC 8707 resource indicator) and `additionalScopes`. The `audience` is sent as `resource` in both the authorization request and the token exchange, which makes Logto return a JWT access token instead of an opaque one. The Custom JWT script maps org roles to `roles: ["server:admin"]`.

**CRITICAL:** `additionalScopes` MUST include `urn:logto:scope:organizations` and `urn:logto:scope:organization_roles` — without these, Logto doesn't populate `context.user.organizationRoles` in the Custom JWT script, so the `roles` claim is empty and all users get `defaultRoles` (VIEWER). The server's `OidcAuthController.applyClaimMappings()` uses OIDC token roles (from the Custom JWT) as a fallback when no DB claim-mapping rules exist: claim-mapping rules > OIDC token roles > defaultRoles.
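The precedence chain at the end of the CRITICAL note can be sketched as a simple fallback. This is illustrative only — the real logic is `OidcAuthController.applyClaimMappings()` in Java, and `resolveRoles` is a hypothetical name:

```typescript
// Precedence: DB claim-mapping rules > OIDC token roles (Custom JWT) > defaultRoles.
function resolveRoles(
  claimMappingRoles: string[],
  tokenRoles: string[],
  defaultRoles: string[],
): string[] {
  if (claimMappingRoles.length > 0) return claimMappingRoles;
  if (tokenRoles.length > 0) return tokenRoles;
  return defaultRoles; // the VIEWER fallback every user hits when the roles claim is empty
}
```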
### Deployment pipeline

App deployment is handled by the cameleer-server's `DeploymentExecutor` (7-stage async flow):

1. PRE_FLIGHT — validate config, check the JAR exists
2. PULL_IMAGE — pull the base image if missing
3. CREATE_NETWORK — ensure the cameleer-traefik and cameleer-env-{slug} networks
4. START_REPLICAS — create N containers with Traefik labels
5. HEALTH_CHECK — poll `/cameleer/health` on agent port 9464
6. SWAP_TRAFFIC — stop the old deployment (blue/green)
7. COMPLETE — mark RUNNING or DEGRADED

Key files:
- `DeploymentExecutor.java` (in cameleer-server) — async staged deployment
- `DockerRuntimeOrchestrator.java` (in cameleer-server) — Docker client, container lifecycle
- `docker/runtime-base/Dockerfile` — base image with the agent JAR; maps env vars to `-D` system properties
- `ServerApiClient.java` — M2M token acquisition for SaaS -> server API calls (agent status). Uses the `X-Cameleer-Protocol-Version: 1` header
- Docker socket access: `group_add: ["0"]` in docker-compose.dev.yml (not root group membership in the Dockerfile)
- Network: deployed containers join `cameleer-tenant-{slug}` (primary, isolation) + `cameleer-traefik` (routing) + `cameleer-env-{tenantId}-{envSlug}` (environment isolation)
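The 7-stage flow above is an ordered pipeline that stops at the first failing stage. A hedged sketch (the real `DeploymentExecutor` is async Java with per-stage status updates; `runPipeline` is an illustrative model):

```typescript
// Stages in execution order, as listed above.
const stages = [
  "PRE_FLIGHT", "PULL_IMAGE", "CREATE_NETWORK", "START_REPLICAS",
  "HEALTH_CHECK", "SWAP_TRAFFIC", "COMPLETE",
] as const;

type Stage = (typeof stages)[number];

// Run stages in order; stop at the first failure and report how far we got.
function runPipeline(execute: (s: Stage) => boolean): { reached: Stage; ok: boolean } {
  let reached: Stage = stages[0];
  for (const stage of stages) {
    reached = stage;
    if (!execute(stage)) return { reached, ok: false };
  }
  return { reached, ok: true };
}
```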
### Bootstrap (`docker/logto-bootstrap.sh`)

Idempotent script run inside the Logto container entrypoint. **Clean slate** — no example tenant, no viewer user, no server configuration. Phases:

1. Wait for Logto health (no server to wait for — servers are provisioned per-tenant)
2. Get a Management API token (reads the `m-default` secret from the DB)
3. Create Logto apps (SPA, Traditional Web App with `skipConsent`, M2M with the Management API role + server API role)
3b. Create API resource scopes (10 platform + 3 server scopes)
4. Create org roles (owner, operator, viewer with API resource scope assignments) + the M2M server role (`cameleer-m2m-server` with the `server:admin` scope)
5. Create the admin user (SaaS admin with Logto console access)
7b. Configure the Logto Custom JWT for access tokens (maps org roles -> `roles` claim: owner -> server:admin, operator -> server:operator, viewer -> server:viewer; the saas-vendor global role -> server:admin)
8. Configure Logto sign-in branding (Cameleer colors `#C6820E`/`#D4941E`, logo from `/platform/logo.svg`)
9. Clean up seeded Logto apps
10. Write bootstrap results to `/data/logto-bootstrap.json`
12. Create the `saas-vendor` global role with all API scopes and assign it to the admin user (always runs — the admin IS the platform admin).

The multi-tenant compose stack is: Traefik + PostgreSQL + ClickHouse + Logto (with the bootstrap entrypoint) + cameleer-saas. No `cameleer-server` or `cameleer-server-ui` in compose — those are provisioned per-tenant by `DockerTenantProvisioner`.
### Deployment Modes (installer)

The installer (`installer/install.sh`) supports two deployment modes:

| | Multi-tenant SaaS (`DEPLOYMENT_MODE=saas`) | Standalone (`DEPLOYMENT_MODE=standalone`) |
|---|---|---|
| **Containers** | traefik, postgres, clickhouse, logto, cameleer-saas | traefik, postgres, clickhouse, server, server-ui |
| **Auth** | Logto OIDC (SaaS admin + tenant users) | Local auth (built-in admin, no identity provider) |
| **Tenant management** | SaaS admin creates/manages tenants via the UI | Single server instance, no fleet management |
| **PostgreSQL** | `cameleer-postgres` image (multi-DB init) | Stock `postgres:16-alpine` (the server creates its schema via Flyway) |
| **Use case** | Platform vendor managing multiple customers | Single customer running the product directly |

Standalone mode generates a simpler compose with the server running directly. No Logto, no SaaS management plane, no bootstrap. The admin logs in with local credentials at `/`.

The installer uses static docker-compose templates in `installer/templates/`. Templates are copied to the install directory and composed via `COMPOSE_FILE` in `.env`:
- `docker-compose.yml` — shared infrastructure (traefik, postgres, clickhouse)
- `docker-compose.saas.yml` — SaaS mode (logto, cameleer-saas)
- `docker-compose.server.yml` — standalone mode (server, server-ui)
- `docker-compose.tls.yml` — overlay: custom TLS cert volume
- `docker-compose.monitoring.yml` — overlay: external monitoring network
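The template list above composes into a `COMPOSE_FILE` value: base file, then the mode file, then any overlays. A hedged sketch of the assembly (`install.sh` does this in shell; `composeFiles` and its options are hypothetical):

```typescript
// Assemble COMPOSE_FILE from the installer templates described above.
function composeFiles(
  mode: "saas" | "standalone",
  overlays: { tls?: boolean; monitoring?: boolean } = {},
): string {
  const files = ["docker-compose.yml"]; // shared infrastructure always comes first
  files.push(mode === "saas" ? "docker-compose.saas.yml" : "docker-compose.server.yml");
  if (overlays.tls) files.push("docker-compose.tls.yml");
  if (overlays.monitoring) files.push("docker-compose.monitoring.yml");
  return files.join(":"); // COMPOSE_FILE uses ':' as its separator on Linux
}
```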
### Tenant Provisioning Flow

When the SaaS admin creates a tenant via `VendorTenantService`:

**Synchronous (in `createAndProvision`):**
1. Create `TenantEntity` (status=PROVISIONING) + the Logto organization
2. Create the admin user in Logto with the owner org role (if credentials are provided)
3. Register OIDC redirect URIs for `/t/{slug}/oidc/callback` on the Logto Traditional Web App
4. Generate the license (tier-appropriate, 365 days)
5. Return immediately — the UI shows a provisioning spinner and polls via `refetchInterval`

**Asynchronous (in `provisionAsync`, `@Async`):**
6. Create the per-tenant PostgreSQL user + schema via `TenantDatabaseService.createTenantDatabase(slug, password)`; store `dbPassword` on the entity
7. Create the tenant-isolated Docker network (`cameleer-tenant-{slug}`)
8. Create the server container with the per-tenant JDBC URL (`currentSchema=tenant_{slug}&ApplicationName=tenant_{slug}`), Traefik labels (`traefik.docker.network`), health check, Docker socket bind, JAR volume, certs volume (ro)
9. Create the UI container with `CAMELEER_API_URL`, `BASE_PATH`, and Traefik strip-prefix labels
10. Wait for the health check (`/api/v1/health`, not `/actuator/health`, which requires auth)
11. Push the license token to the server via the M2M API
12. Push the OIDC config (Traditional Web App credentials + `additionalScopes: [urn:logto:scope:organizations, urn:logto:scope:organization_roles]`) to the server for SSO
13. Update tenant status -> ACTIVE (or set `provisionError` on failure)

**Server restart** (available to SaaS admin + tenant admin):
- `POST /api/vendor/tenants/{id}/restart` (SaaS admin) and `POST /api/tenant/server/restart` (tenant)
- Calls `TenantProvisioner.stop(slug)` then `start(slug)` — restarts the server + UI containers only (same image)

**Server upgrade** (available to SaaS admin + tenant admin):
- `POST /api/vendor/tenants/{id}/upgrade` (SaaS admin) and `POST /api/tenant/server/upgrade` (tenant)
- Calls `TenantProvisioner.upgrade(slug)` — removes the server + UI containers, force-pulls the latest images (preserves app containers, volumes, networks), then `provisionAsync()` re-creates the containers with the new image and pushes the license + OIDC config

**Tenant delete** cleanup:
- `DockerTenantProvisioner.remove(slug)` — label-based container removal (`cameleer.tenant={slug}`), env network cleanup, tenant network removal, JAR volume removal
- `TenantDatabaseService.dropTenantDatabase(slug)` — drops the PostgreSQL `tenant_{slug}` schema + `tenant_{slug}` user
- `TenantDataCleanupService.cleanupClickHouse(slug)` — deletes ClickHouse data across all tables with a `tenant_id` column (GDPR)

**Password management** (tenant portal):
- `POST /api/tenant/password` — tenant admin changes their own Logto password (via the `@AuthenticationPrincipal` JWT subject)
- `POST /api/tenant/team/{userId}/password` — tenant admin resets a team member's Logto password (validates org membership first)
- `POST /api/tenant/server/admin-password` — tenant admin resets the server's built-in local admin password (via the M2M API to `POST /api/v1/admin/users/user:admin/password`)
For detailed architecture docs, see the directory-scoped CLAUDE.md files (loaded automatically when editing code in that directory):

- **Provisioning flow, env vars, lifecycle** → `src/.../provisioning/CLAUDE.md`
- **Auth, scopes, JWT, OIDC** → `src/.../config/CLAUDE.md`
- **Docker, routing, networks, bootstrap, deployment pipeline** → `docker/CLAUDE.md`
- **Installer, deployment modes, compose templates** → `installer/CLAUDE.md`
- **Frontend, sign-in UI** → `ui/CLAUDE.md`
## Database Migrations
@@ -348,7 +63,7 @@ PostgreSQL (Flyway): `src/main/resources/db/migration/`
- `cameleer-saas` — SaaS vendor management plane (frontend + JAR baked in)
- `cameleer-logto` — custom Logto with the sign-in UI baked in
- `cameleer-server` / `cameleer-server-ui` — provisioned per-tenant (not in compose; created by `DockerTenantProvisioner`)
- `cameleer-runtime-base` — base image for deployed apps (agent JAR + `cameleer-log-appender.jar` + JRE). CI downloads the latest agent and log appender SNAPSHOTs from the Gitea Maven registry. The Dockerfile ENTRYPOINT is overridden by `DockerRuntimeOrchestrator` at container creation; agent config uses `CAMELEER_AGENT_*` env vars set by `DeploymentExecutor`.
- Docker builds: `--no-cache`, `--provenance=false` for Gitea compatibility
- `docker-compose.dev.yml` — exposes ports for direct access, sets `SPRING_PROFILES_ACTIVE: dev`. Volume-mounts `./ui/dist` into the container so local UI builds are served without rebuilding the Docker image (`SPRING_WEB_RESOURCES_STATIC_LOCATIONS` overrides the classpath). Adds a Docker socket mount for tenant provisioning.
- Design system: import from `@cameleer/design-system` (Gitea npm registry)
@@ -360,7 +75,7 @@ PostgreSQL (Flyway): `src/main/resources/db/migration/`

<!-- gitnexus:start -->
# GitNexus — Code Intelligence

This project is indexed by GitNexus as **cameleer-saas** (2676 symbols, 5768 relationships, 224 execution flows). Use the GitNexus MCP tools to understand code, assess impact, and navigate safely.
This project is indexed by GitNexus as **cameleer-saas** (2816 symbols, 5989 relationships, 238 execution flows). Use the GitNexus MCP tools to understand code, assess impact, and navigate safely.

> If any GitNexus tool warns the index is stale, run `npx gitnexus analyze` in the terminal first.
@@ -28,6 +28,7 @@ services:
      CAMELEER_SAAS_PROVISIONING_PUBLICPROTOCOL: ${PUBLIC_PROTOCOL:-https}
      CAMELEER_SAAS_PROVISIONING_SERVERIMAGE: gitea.siegeln.net/cameleer/cameleer-server:${VERSION:-latest}
      CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE: gitea.siegeln.net/cameleer/cameleer-server-ui:${VERSION:-latest}
      CAMELEER_SAAS_PROVISIONING_RUNTIMEBASEIMAGE: gitea.siegeln.net/cameleer/cameleer-runtime-base:${VERSION:-latest}
      CAMELEER_SAAS_PROVISIONING_NETWORKNAME: cameleer-saas_cameleer
      CAMELEER_SAAS_PROVISIONING_TRAEFIKNETWORK: cameleer-traefik

@@ -126,6 +126,7 @@ services:
      CAMELEER_SAAS_IDENTITY_LOGTOPUBLICENDPOINT: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}
      CAMELEER_SAAS_IDENTITY_M2MCLIENTID: ${LOGTO_M2M_CLIENT_ID:-}
      CAMELEER_SAAS_IDENTITY_M2MCLIENTSECRET: ${LOGTO_M2M_CLIENT_SECRET:-}
      CAMELEER_SERVER_SECURITY_JWTSECRET: ${CAMELEER_SERVER_SECURITY_JWTSECRET:-cameleer-dev-jwt-secret}
      # Provisioning — passed to per-tenant server containers
      CAMELEER_SAAS_PROVISIONING_PUBLICHOST: ${PUBLIC_HOST:-localhost}
      CAMELEER_SAAS_PROVISIONING_PUBLICPROTOCOL: ${PUBLIC_PROTOCOL:-https}
docker/CLAUDE.md (new file, 88 lines)

@@ -0,0 +1,88 @@
# Docker & Infrastructure

## Routing (single-domain, path-based via Traefik)

All services share one hostname. Infrastructure containers (Traefik, Logto) use the `PUBLIC_HOST` + `PUBLIC_PROTOCOL` env vars directly. The SaaS app reads these via `CAMELEER_SAAS_PROVISIONING_PUBLICHOST` / `CAMELEER_SAAS_PROVISIONING_PUBLICPROTOCOL` (Spring Boot properties `cameleer.saas.provisioning.publichost` / `cameleer.saas.provisioning.publicprotocol`).

| Path | Target | Notes |
|------|--------|-------|
| `/platform/*` | cameleer-saas:8080 | SPA + API (`server.servlet.context-path: /platform`) |
| `/platform/vendor/*` | (SPA routes) | Vendor console (platform:admin) |
| `/platform/tenant/*` | (SPA routes) | Tenant admin portal (org-scoped) |
| `/t/{slug}/*` | per-tenant server-ui | Provisioned tenant UI containers (Traefik labels) |
| `/` | redirect -> `/platform/` | Via `docker/traefik-dynamic.yml` |
| `/*` (catch-all) | cameleer-logto:3001 (priority=1) | Custom sign-in UI, OIDC, interaction |

- SPA assets at `/_app/` (Vite `assetsDir: '_app'`) to avoid conflict with Logto's `/assets/`
- Logto `ENDPOINT` = `${PUBLIC_PROTOCOL}://${PUBLIC_HOST}` (same domain, same origin)
- TLS: the `traefik-certs` init container generates a self-signed cert (dev) or copies a user-supplied cert via the `CERT_FILE`/`KEY_FILE`/`CA_FILE` env vars. The default cert is configured in `docker/traefik-dynamic.yml` (NOT static `traefik.yml` — Traefik v3 ignores `tls.stores.default` in static config). Runtime cert replacement via the vendor UI (stage/activate/restore). ACME for production (future). Server containers import `/certs/ca.pem` into the JVM truststore at startup via `docker-entrypoint.sh` for OIDC trust.
- Root `/` -> `/platform/` redirect via the Traefik file provider (`docker/traefik-dynamic.yml`)
- LoginPage auto-redirects to Logto OIDC (no intermediate button)
- Per-tenant server containers get Traefik labels for `/t/{slug}/*` routing at provisioning time
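A hedged sketch of the Traefik labels a provisioned tenant UI container might carry for `/t/{slug}/*` routing — the router name is an assumption here, and only the `PathPrefix` rule style and `traefik.docker.network` label are taken from the compose labels elsewhere in this repo:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch, not the provisioner's actual code: per-tenant routing labels.
public class TenantRoutingLabels {
    static Map<String, String> forTenant(String slug) {
        Map<String, String> labels = new LinkedHashMap<>();
        labels.put("traefik.enable", "true");
        // Route /t/{slug}/* to this container (rule syntax as in the compose labels).
        labels.put("traefik.http.routers.tenant-" + slug + ".rule", "PathPrefix(`/t/" + slug + "`)");
        // Pin the backend network so Traefik resolves a reachable IP (defense-in-depth).
        labels.put("traefik.docker.network", "cameleer-traefik");
        return labels;
    }
}
```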
## Docker Networks

Compose-defined networks:

| Network | Name on Host | Purpose |
|---------|-------------|---------|
| `cameleer` | `cameleer-saas_cameleer` | Compose default — shared services (DB, Logto, SaaS) |
| `cameleer-traefik` | `cameleer-traefik` (fixed `name:`) | Traefik + provisioned tenant containers |

Per-tenant networks (created dynamically by `DockerTenantProvisioner`):

| Network | Name Pattern | Purpose |
|---------|-------------|---------|
| Tenant network | `cameleer-tenant-{slug}` | Internal bridge, no internet — isolates tenant server + apps |
| Environment network | `cameleer-env-{tenantId}-{envSlug}` | Tenant-scoped (includes the tenantId to prevent slug collisions across tenants) |

Server containers join three networks: the tenant network (primary), the shared services network (`cameleer`), and the traefik network. Apps deployed by the server use the tenant network as primary.

**Backend IP resolution:** Traefik's Docker provider is configured with `network: cameleer-traefik` (static `traefik.yml`). Every cameleer-managed container — saas-provisioned tenant containers (via `DockerTenantProvisioner`) and cameleer-server's per-app containers (via `DockerNetworkManager`) — is attached to `cameleer-traefik` at creation, so Traefik always resolves a reachable backend IP. Provisioned tenant containers additionally emit a `traefik.docker.network=cameleer-traefik` label as per-service defense-in-depth. (Before 2026-04-23 the static config pointed at `network: cameleer`, a name that never matched any real network — that produced a 504 Gateway Timeout on every managed app until the Traefik image was rebuilt.)
## Custom sign-in UI (`ui/sign-in/`)

A separate Vite+React SPA replacing Logto's default sign-in page. Visually matches the cameleer-server LoginPage.

- Built as a custom Logto Docker image (`cameleer-logto`): `ui/sign-in/Dockerfile` = node build stage + `FROM ghcr.io/logto-io/logto:latest` + COPY dist over `/etc/logto/packages/experience/dist/`
- Uses `@cameleer/design-system` components (Card, Input, Button, FormField, Alert)
- Authenticates via the Logto Experience API (4-step: init -> verify password -> identify -> submit -> redirect)
- The `CUSTOM_UI_PATH` env var does NOT work for Logto OSS — you must volume-mount or replace the experience dist directory
- Favicon bundled in `ui/sign-in/public/favicon.svg` (served by Logto, not the SaaS)
## Deployment pipeline

App deployment is handled by the cameleer-server's `DeploymentExecutor` (7-stage async flow):

1. PRE_FLIGHT — validate config, check the JAR exists
2. PULL_IMAGE — pull the base image if missing
3. CREATE_NETWORK — ensure the cameleer-traefik and cameleer-env-{slug} networks
4. START_REPLICAS — create N containers with Traefik labels
5. HEALTH_CHECK — poll `/cameleer/health` on agent port 9464
6. SWAP_TRAFFIC — stop the old deployment (blue/green)
7. COMPLETE — mark RUNNING or DEGRADED

Key files:
- `DeploymentExecutor.java` (in cameleer-server) — async staged deployment, runtime-type auto-detection
- `DockerRuntimeOrchestrator.java` (in cameleer-server) — Docker client, container lifecycle, builds runtime-type-specific entrypoints (spring-boot uses `-cp` + `PropertiesLauncher` with `-Dloader.path` for the log appender; quarkus uses `-jar`; plain-java uses `-cp` + the detected main class; native execs directly). Overrides the Dockerfile ENTRYPOINT.
- `docker/runtime-base/Dockerfile` — base image with the agent JAR + `cameleer-log-appender.jar` + JRE. The Dockerfile ENTRYPOINT (`-jar /app/app.jar`) is a fallback — `DockerRuntimeOrchestrator` overrides it at container creation.
- `RuntimeDetector.java` (in cameleer-server) — detects the runtime type from the JAR manifest `Main-Class`; derives the correct `PropertiesLauncher` package (Spring Boot 3.2+ vs pre-3.2)
- `ServerApiClient.java` — M2M token acquisition for SaaS->server API calls (agent status). Uses the `X-Cameleer-Protocol-Version: 1` header
- Docker socket access: `group_add: ["0"]` in docker-compose.dev.yml (not root group membership in the Dockerfile)
- Network: deployed containers join `cameleer-tenant-{slug}` (primary, isolation) + `cameleer-traefik` (routing) + `cameleer-env-{tenantId}-{envSlug}` (environment isolation)
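The runtime-type-specific entrypoint selection can be sketched as follows. This is an illustrative sketch, not the orchestrator's actual code: the in-container paths and the loader-path location are assumptions, and the launcher class is a parameter because its package differs between Spring Boot 3.2+ and pre-3.2 (which `RuntimeDetector` derives).

```java
import java.util.List;

// Hedged sketch of DockerRuntimeOrchestrator's entrypoint strategy per runtime type.
public class EntrypointSketch {
    static List<String> entrypoint(String runtimeType, String mainOrLauncherClass) {
        return switch (runtimeType) {
            // spring-boot: -cp + PropertiesLauncher; -Dloader.path adds the log appender JAR
            case "spring-boot" -> List.of("java", "-Dloader.path=/cameleer/cameleer-log-appender.jar",
                    "-cp", "/app/app.jar", mainOrLauncherClass);
            case "quarkus" -> List.of("java", "-jar", "/app/app.jar");                        // plain -jar
            case "plain-java" -> List.of("java", "-cp", "/app/app.jar", mainOrLauncherClass); // detected main class
            default -> List.of("/app/app");                                                   // native: exec directly
        };
    }
}
```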
## Bootstrap (`docker/logto-bootstrap.sh`)

An idempotent script run inside the Logto container entrypoint. **Clean slate** — no example tenant, no viewer user, no server configuration. Phases:

1. Wait for Logto health (no server to wait for — servers are provisioned per-tenant)
2. Get a Management API token (reads the `m-default` secret from the DB)
3. Create Logto apps (SPA, Traditional Web App with `skipConsent`, M2M with the Management API role + server API role)
3b. Create API resource scopes (1 platform + 9 tenant + 3 server scopes)
4. Create org roles (owner, operator, viewer with API resource scope assignments) + the M2M server role (`cameleer-m2m-server` with the `server:admin` scope)
5. Create the admin user (SaaS admin with Logto console access)
7b. Configure Logto Custom JWT for access tokens (maps org roles -> `roles` claim: owner->server:admin, operator->server:operator, viewer->server:viewer; the saas-vendor global role -> server:admin)
8. Configure Logto sign-in branding (Cameleer colors `#C6820E`/`#D4941E`, logo from `/platform/logo.svg`)
9. Clean up seeded Logto apps
10. Write bootstrap results to `/data/logto-bootstrap.json`
12. Create the `saas-vendor` global role with all API scopes and assign it to the admin user (always runs — the admin IS the platform admin).

The multi-tenant compose stack is: Traefik + PostgreSQL + ClickHouse + Logto (with the bootstrap entrypoint) + cameleer-saas. No `cameleer-server` or `cameleer-server-ui` in compose — those are provisioned per-tenant by `DockerTenantProvisioner`.
@@ -18,6 +18,6 @@ providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false
    network: cameleer
    network: cameleer-traefik
  file:
    filename: /etc/traefik/dynamic.yml
@@ -897,7 +897,7 @@ Env vars injected into provisioned per-tenant server containers by `DockerTenant

| `CAMELEER_SERVER_CLICKHOUSE_URL` | `jdbc:clickhouse://cameleer-clickhouse:8123/cameleer` | ClickHouse JDBC URL |
| `CAMELEER_SERVER_TENANT_ID` | *(tenant slug)* | Tenant identifier for data isolation |
| `CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKEN` | *(generated)* | Agent bootstrap token |
| `CAMELEER_SERVER_SECURITY_JWTSECRET` | *(generated)* | JWT signing secret |
| `CAMELEER_SERVER_SECURITY_JWTSECRET` | *(generated, must be non-empty)* | JWT signing secret |
| `CAMELEER_SERVER_SECURITY_OIDC_ISSUERURI` | `${PUBLIC_PROTOCOL}://${PUBLIC_HOST}/oidc` | OIDC issuer for M2M tokens |
| `CAMELEER_SERVER_SECURITY_OIDC_JWKSETURI` | `http://cameleer-logto:3001/oidc/jwks` | Docker-internal JWK fetch |
| `CAMELEER_SERVER_SECURITY_OIDC_AUDIENCE` | `https://api.cameleer.local` | JWT audience validation |
installer/CLAUDE.md (new file, 32 lines)

@@ -0,0 +1,32 @@
# Installer

## Deployment Modes

The installer (`installer/install.sh`) supports two deployment modes:

| | Multi-tenant SaaS (`DEPLOYMENT_MODE=saas`) | Standalone (`DEPLOYMENT_MODE=standalone`) |
|---|---|---|
| **Containers** | traefik, postgres, clickhouse, logto, cameleer-saas | traefik, postgres, clickhouse, server, server-ui |
| **Auth** | Logto OIDC (SaaS admin + tenant users) | Local auth (built-in admin, no identity provider) |
| **Tenant management** | SaaS admin creates/manages tenants via the UI | Single server instance, no fleet management |
| **PostgreSQL** | `cameleer-postgres` image (multi-DB init) | Stock `postgres:16-alpine` (server creates the schema via Flyway) |
| **Use case** | Platform vendor managing multiple customers | Single customer running the product directly |

Standalone mode generates a simpler compose file with the server running directly. No Logto, no SaaS management plane, no bootstrap. The admin logs in with local credentials at `/`.

## Compose templates

The installer uses static docker-compose templates in `installer/templates/`. Templates are copied to the install directory and composed via `COMPOSE_FILE` in `.env`:
- `docker-compose.yml` — shared infrastructure (traefik, postgres, clickhouse)
- `docker-compose.saas.yml` — SaaS mode (logto, cameleer-saas)
- `docker-compose.server.yml` — standalone mode (server, server-ui)
- `docker-compose.tls.yml` — overlay: custom TLS cert volume
- `docker-compose.monitoring.yml` — overlay: external monitoring network

## Env var naming convention

- `CAMELEER_AGENT_*` — agent config (consumed by the Java agent)
- `CAMELEER_SERVER_*` — server config (consumed by cameleer-server)
- `CAMELEER_SAAS_*` — SaaS management plane config
- `CAMELEER_SAAS_PROVISIONING_*` — "SaaS forwards this to provisioned tenant servers"
- No prefix (e.g. `POSTGRES_PASSWORD`, `PUBLIC_HOST`) — shared infrastructure, consumed by multiple components
@@ -578,32 +578,37 @@ function Generate-EnvFile {
    $ts = (Get-Date -Format 'yyyy-MM-dd HH:mm:ss') + ' UTC'
    $bt = Generate-Password

    $jwtSecret = Generate-Password

    if ($c.DeploymentMode -eq 'standalone') {
        $content = @"
# Cameleer Server Configuration (standalone)
# Generated by installer v${CAMELEER_INSTALLER_VERSION} on $ts

VERSION=$($c.Version)
PUBLIC_HOST=$($c.PublicHost)
PUBLIC_PROTOCOL=$($c.PublicProtocol)
HTTP_PORT=$($c.HttpPort)
HTTPS_PORT=$($c.HttpsPort)

# PostgreSQL
POSTGRES_USER=cameleer
POSTGRES_PASSWORD=$($c.PostgresPassword)
POSTGRES_DB=cameleer

# ClickHouse
CLICKHOUSE_PASSWORD=$($c.ClickhousePassword)

# Server admin
SERVER_ADMIN_USER=$($c.AdminUser)
SERVER_ADMIN_PASS=$($c.AdminPass)

# Bootstrap token
BOOTSTRAP_TOKEN=$bt

# JWT signing secret (required by server, must be non-empty)
CAMELEER_SERVER_SECURITY_JWTSECRET=$jwtSecret

# Docker
DOCKER_SOCKET=$($c.DockerSocket)
DOCKER_GID=$gid

@@ -615,9 +620,9 @@ POSTGRES_IMAGE=postgres:16-alpine
        $content += "`nKEY_FILE=/user-certs/key.pem"
        if ($c.CaFile) { $content += "`nCA_FILE=/user-certs/ca.pem" }
    }
    $composeFile = 'docker-compose.yml:docker-compose.server.yml'
    if ($c.TlsMode -eq 'custom') { $composeFile += ':docker-compose.tls.yml' }
    if ($c.MonitoringNetwork) { $composeFile += ':docker-compose.monitoring.yml' }
    $composeFile = 'docker-compose.yml;docker-compose.server.yml'
    if ($c.TlsMode -eq 'custom') { $composeFile += ';docker-compose.tls.yml' }
    if ($c.MonitoringNetwork) { $composeFile += ';docker-compose.monitoring.yml' }
    $content += "`n`n# Compose file assembly`nCOMPOSE_FILE=$composeFile"
    if ($c.MonitoringNetwork) {
        $content += "`n`n# Monitoring`nMONITORING_NETWORK=$($c.MonitoringNetwork)"

@@ -667,11 +672,15 @@ DOCKER_GID=$gid
# Provisioning images
CAMELEER_SAAS_PROVISIONING_SERVERIMAGE=${REGISTRY}/cameleer-server:$($c.Version)
CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE=${REGISTRY}/cameleer-server-ui:$($c.Version)
CAMELEER_SAAS_PROVISIONING_RUNTIMEBASEIMAGE=${REGISTRY}/cameleer-runtime-base:$($c.Version)

# JWT signing secret (forwarded to provisioned tenant servers, must be non-empty)
CAMELEER_SERVER_SECURITY_JWTSECRET=$jwtSecret
"@
    $content += $provisioningBlock
    $composeFile = 'docker-compose.yml:docker-compose.saas.yml'
    if ($c.TlsMode -eq 'custom') { $composeFile += ':docker-compose.tls.yml' }
    if ($c.MonitoringNetwork) { $composeFile += ':docker-compose.monitoring.yml' }
    $composeFile = 'docker-compose.yml;docker-compose.saas.yml'
    if ($c.TlsMode -eq 'custom') { $composeFile += ';docker-compose.tls.yml' }
    if ($c.MonitoringNetwork) { $composeFile += ';docker-compose.monitoring.yml' }
    $content += "`n`n# Compose file assembly`nCOMPOSE_FILE=$composeFile"
    if ($c.MonitoringNetwork) {
        $content += "`n`n# Monitoring`nMONITORING_NETWORK=$($c.MonitoringNetwork)"
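The colon-to-semicolon change in the PowerShell hunks above reflects Docker Compose's platform-dependent `COMPOSE_FILE` path separator (`:` on Linux/macOS, `;` on Windows). An illustrative sketch of the assembly logic, not the installer's actual code:

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of COMPOSE_FILE assembly; the separator is passed in explicitly.
public class ComposeFileAssembly {
    static String composeFile(String mode, boolean customTls, boolean monitoring, char sep) {
        List<String> files = new ArrayList<>();
        files.add("docker-compose.yml"); // shared infrastructure
        files.add("saas".equals(mode) ? "docker-compose.saas.yml" : "docker-compose.server.yml");
        if (customTls) files.add("docker-compose.tls.yml");         // overlay: custom TLS cert volume
        if (monitoring) files.add("docker-compose.monitoring.yml"); // overlay: monitoring network
        return String.join(String.valueOf(sep), files);
    }
}
```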
@@ -1032,10 +1041,10 @@ $logtoConsoleRow

| Container | Purpose |
|---|---|
| ``traefik`` | Reverse proxy, TLS termination, routing |
| ``postgres`` | PostgreSQL database (SaaS + Logto + tenant schemas) |
| ``clickhouse`` | Time-series storage (traces, metrics, logs) |
| ``logto`` | OIDC identity provider + bootstrap |
| ``cameleer-traefik`` | Reverse proxy, TLS termination, routing |
| ``cameleer-postgres`` | PostgreSQL database (SaaS + Logto + tenant schemas) |
| ``cameleer-clickhouse`` | Time-series storage (traces, metrics, logs) |
| ``cameleer-logto`` | OIDC identity provider + bootstrap |
| ``cameleer-saas`` | SaaS platform (Spring Boot + React) |

Per-tenant ``cameleer-server`` and ``cameleer-server-ui`` containers are provisioned dynamically.
@@ -1156,11 +1165,11 @@ placing your certificate and key files in the ``certs/`` directory and restarting

| Container | Purpose |
|---|---|
| ``traefik`` | Reverse proxy, TLS termination, routing |
| ``postgres`` | PostgreSQL database (server data) |
| ``clickhouse`` | Time-series storage (traces, metrics, logs) |
| ``server`` | Cameleer Server (Spring Boot backend) |
| ``server-ui`` | Cameleer Dashboard (React frontend) |
| ``cameleer-traefik`` | Reverse proxy, TLS termination, routing |
| ``cameleer-postgres`` | PostgreSQL database (server data) |
| ``cameleer-clickhouse`` | Time-series storage (traces, metrics, logs) |
| ``cameleer-server`` | Cameleer Server (Spring Boot backend) |
| ``cameleer-server-ui`` | Cameleer Dashboard (React frontend) |

## Networking
@@ -1202,7 +1211,7 @@ docker compose -p $($c.ComposeProject) exec cameleer-clickhouse clickhouse-client

| Issue | Command |
|---|---|
| Service not starting | ``docker compose -p $($c.ComposeProject) logs SERVICE_NAME`` |
| Server issues | ``docker compose -p $($c.ComposeProject) logs server`` |
| Server issues | ``docker compose -p $($c.ComposeProject) logs cameleer-server`` |
| Routing issues | ``docker compose -p $($c.ComposeProject) logs cameleer-traefik`` |
| Database issues | ``docker compose -p $($c.ComposeProject) exec cameleer-postgres psql -U cameleer -d cameleer`` |
@@ -600,6 +600,9 @@ SERVER_ADMIN_PASS=${ADMIN_PASS}
# Bootstrap token (required by server, not used externally in standalone mode)
BOOTSTRAP_TOKEN=$(generate_password)

# JWT signing secret (required by server, must be non-empty)
CAMELEER_SERVER_SECURITY_JWTSECRET=$(generate_password)

# Docker
DOCKER_SOCKET=${DOCKER_SOCKET}
DOCKER_GID=$(stat -c '%g' "${DOCKER_SOCKET}" 2>/dev/null || echo "0")

@@ -676,6 +679,10 @@ DOCKER_GID=$(stat -c '%g' "${DOCKER_SOCKET}" 2>/dev/null || echo "0")
# Provisioning images
CAMELEER_SAAS_PROVISIONING_SERVERIMAGE=${REGISTRY}/cameleer-server:${VERSION}
CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE=${REGISTRY}/cameleer-server-ui:${VERSION}
CAMELEER_SAAS_PROVISIONING_RUNTIMEBASEIMAGE=${REGISTRY}/cameleer-runtime-base:${VERSION}

# JWT signing secret (forwarded to provisioned tenant servers, must be non-empty)
CAMELEER_SERVER_SECURITY_JWTSECRET=$(generate_password)

# Compose file assembly
COMPOSE_FILE=docker-compose.yml:docker-compose.saas.yml$([ "$TLS_MODE" = "custom" ] && echo ":docker-compose.tls.yml")$([ -n "$MONITORING_NETWORK" ] && echo ":docker-compose.monitoring.yml")
@@ -950,10 +957,10 @@ EOF

| Container | Purpose |
|---|---|
| `traefik` | Reverse proxy, TLS termination, routing |
| `postgres` | PostgreSQL database (SaaS + Logto + tenant schemas) |
| `clickhouse` | Time-series storage (traces, metrics, logs) |
| `logto` | OIDC identity provider + bootstrap |
| `cameleer-traefik` | Reverse proxy, TLS termination, routing |
| `cameleer-postgres` | PostgreSQL database (SaaS + Logto + tenant schemas) |
| `cameleer-clickhouse` | Time-series storage (traces, metrics, logs) |
| `cameleer-logto` | OIDC identity provider + bootstrap |
| `cameleer-saas` | SaaS platform (Spring Boot + React) |

Per-tenant `cameleer-server` and `cameleer-server-ui` containers are provisioned dynamically when tenants are created.
@@ -1092,11 +1099,11 @@ generate_install_doc_standalone() {

| Container | Purpose |
|---|---|
| \`traefik\` | Reverse proxy, TLS termination, routing |
| \`postgres\` | PostgreSQL database (server data) |
| \`clickhouse\` | Time-series storage (traces, metrics, logs) |
| \`server\` | Cameleer Server (Spring Boot backend) |
| \`server-ui\` | Cameleer Dashboard (React frontend) |
| \`cameleer-traefik\` | Reverse proxy, TLS termination, routing |
| \`cameleer-postgres\` | PostgreSQL database (server data) |
| \`cameleer-clickhouse\` | Time-series storage (traces, metrics, logs) |
| \`cameleer-server\` | Cameleer Server (Spring Boot backend) |
| \`cameleer-server-ui\` | Cameleer Dashboard (React frontend) |

## Networking
@@ -1166,7 +1173,7 @@ The installer preserves your \`.env\`, credentials, and data volumes. Only the c

| Issue | Command |
|---|---|
| Service not starting | \`docker compose -p ${COMPOSE_PROJECT} logs SERVICE_NAME\` |
| Server issues | \`docker compose -p ${COMPOSE_PROJECT} logs server\` |
| Server issues | \`docker compose -p ${COMPOSE_PROJECT} logs cameleer-server\` |
| Routing issues | \`docker compose -p ${COMPOSE_PROJECT} logs cameleer-traefik\` |
| Database issues | \`docker compose -p ${COMPOSE_PROJECT} exec cameleer-postgres psql -U cameleer -d cameleer\` |
@@ -79,6 +79,7 @@ DOCKER_GID=0
# ============================================================
# CAMELEER_SAAS_PROVISIONING_SERVERIMAGE=gitea.siegeln.net/cameleer/cameleer-server:latest
# CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE=gitea.siegeln.net/cameleer/cameleer-server-ui:latest
# CAMELEER_SAAS_PROVISIONING_RUNTIMEBASEIMAGE=gitea.siegeln.net/cameleer/cameleer-runtime-base:latest

# ============================================================
# Monitoring (optional)
@@ -77,8 +77,10 @@ services:
      CAMELEER_SAAS_PROVISIONING_DATASOURCEUSERNAME: ${POSTGRES_USER:-cameleer}
      CAMELEER_SAAS_PROVISIONING_DATASOURCEPASSWORD: ${POSTGRES_PASSWORD}
      CAMELEER_SAAS_PROVISIONING_CLICKHOUSEPASSWORD: ${CLICKHOUSE_PASSWORD}
      CAMELEER_SERVER_SECURITY_JWTSECRET: ${CAMELEER_SERVER_SECURITY_JWTSECRET:?CAMELEER_SERVER_SECURITY_JWTSECRET must be set in .env}
      CAMELEER_SAAS_PROVISIONING_SERVERIMAGE: ${CAMELEER_SAAS_PROVISIONING_SERVERIMAGE:-gitea.siegeln.net/cameleer/cameleer-server:latest}
      CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE: ${CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE:-gitea.siegeln.net/cameleer/cameleer-server-ui:latest}
      CAMELEER_SAAS_PROVISIONING_RUNTIMEBASEIMAGE: ${CAMELEER_SAAS_PROVISIONING_RUNTIMEBASEIMAGE:-gitea.siegeln.net/cameleer/cameleer-runtime-base:latest}
    labels:
      - traefik.enable=true
      - traefik.http.routers.saas.rule=PathPrefix(`/platform`)
@@ -29,6 +29,7 @@ services:
      CAMELEER_SERVER_CLICKHOUSE_USERNAME: default
      CAMELEER_SERVER_CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD}
      CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKEN: ${BOOTSTRAP_TOKEN:?BOOTSTRAP_TOKEN must be set in .env}
      CAMELEER_SERVER_SECURITY_JWTSECRET: ${CAMELEER_SERVER_SECURITY_JWTSECRET:?CAMELEER_SERVER_SECURITY_JWTSECRET must be set in .env}
      CAMELEER_SERVER_SECURITY_UIUSER: ${SERVER_ADMIN_USER:-admin}
      CAMELEER_SERVER_SECURITY_UIPASSWORD: ${SERVER_ADMIN_PASS:?SERVER_ADMIN_PASS must be set in .env}
      CAMELEER_SERVER_SECURITY_CORSALLOWEDORIGINS: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}

@@ -88,6 +89,7 @@ services:

volumes:
  jars:
    name: cameleer-jars

networks:
  cameleer-apps:
src/main/java/net/siegeln/cameleer/saas/config/CLAUDE.md (new file, 42 lines)

@@ -0,0 +1,42 @@
# Auth & Security Config

## Auth enforcement

- All API endpoints enforce OAuth2 scopes via `@PreAuthorize("hasAuthority('SCOPE_xxx')")` annotations
- Tenant isolation is enforced by `TenantIsolationInterceptor` (a single `HandlerInterceptor` on `/api/**` that resolves the JWT org_id to the TenantContext and validates the `{tenantId}`, `{environmentId}`, `{appId}` path variables; fail-closed, platform admins bypass)
- 13 OAuth2 scopes on the Logto API resource (`https://api.cameleer.local`): 1 platform (`platform:admin`) + 9 tenant (`tenant:manage`, `billing:manage`, `team:manage`, `apps:manage`, `apps:deploy`, `secrets:manage`, `observe:read`, `observe:debug`, `settings:manage`) + 3 server (`server:admin`, `server:operator`, `server:viewer`), served to the frontend from `GET /platform/api/config`
- Server scopes map to server RBAC roles via the JWT `scope` claim (SaaS platform path) or `roles` claim (server-ui OIDC login path)
- Org roles: `owner` -> `server:admin` + `tenant:manage`, `operator` -> `server:operator`, `viewer` -> `server:viewer`
- The `saas-vendor` global role is created by bootstrap Phase 12 and always assigned to the admin user — it has `platform:admin` + all tenant scopes
- Custom `JwtDecoder` in `SecurityConfig.java` — ES384 algorithm, `at+jwt` token type, split issuer-uri (string validation) / jwk-set-uri (Docker-internal fetch), audience validation (`https://api.cameleer.local`)
- Logto Custom JWT (Phase 7b in bootstrap) injects a `roles` claim into access tokens based on org roles and global roles — this makes role data available to the server without Logto-specific code

## Auth routing by persona

| Persona | Logto role | Key scope | Landing route |
|---------|-----------|-----------|---------------|
| SaaS admin | `saas-vendor` (global) | `platform:admin` | `/vendor/tenants` |
| Tenant admin | org `owner` | `tenant:manage` | `/tenant` (dashboard) |
| Regular user (operator/viewer) | org member | `server:operator` or `server:viewer` | Redirected to the server dashboard directly |

- The `LandingRedirect` component waits for scopes to load, then routes to the correct persona landing page
- The `RequireScope` guard on route groups enforces scope requirements
- SSO bridge: the Logto session carries over to the provisioned server's OIDC flow (Traditional Web App per tenant)
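The persona table above can be sketched as a scope-precedence check. This is illustrative only — the real logic lives in the SPA's `LandingRedirect` component, and the operator/viewer hand-off target below is an assumption (the doc only says the user is redirected to the server dashboard):

```java
import java.util.Set;

// Hedged sketch of persona landing-route selection from the token's scopes.
public class LandingRoute {
    static String forScopes(Set<String> scopes) {
        if (scopes.contains("platform:admin")) return "/vendor/tenants"; // SaaS admin
        if (scopes.contains("tenant:manage")) return "/tenant";          // tenant admin dashboard
        return "/t/{slug}/";                                             // operator/viewer: server UI (placeholder slug)
    }
}
```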
## Server OIDC role extraction (two paths)

| Path | Token type | Role source | How it works |
|------|-----------|-------------|--------------|
| SaaS platform -> server API | Logto org-scoped access token | `scope` claim | `JwtAuthenticationFilter.extractRolesFromScopes()` reads `server:admin` from the scope |
| Server-ui SSO login | Logto JWT access token (via Traditional Web App) | `roles` claim | `OidcTokenExchanger` decodes the access_token and reads the `roles` injected by Custom JWT |

The server's OIDC config (`OidcConfig`) includes `audience` (RFC 8707 resource indicator) and `additionalScopes`. The `audience` is sent as `resource` in both the authorization request and the token exchange, which makes Logto return a JWT access token instead of an opaque one. The Custom JWT script maps org roles to `roles: ["server:admin"]`.

**CRITICAL:** `additionalScopes` MUST include `urn:logto:scope:organizations` and `urn:logto:scope:organization_roles` — without these, Logto doesn't populate `context.user.organizationRoles` in the Custom JWT script, so the `roles` claim is empty and all users get `defaultRoles` (VIEWER). The server's `OidcAuthController.applyClaimMappings()` uses the OIDC token roles (from Custom JWT) as a fallback when no DB claim-mapping rules exist: claim mapping rules > OIDC token roles > defaultRoles.

## SaaS app identity configuration

**Identity** (`cameleer.saas.identity.*` / `CAMELEER_SAAS_IDENTITY_*`):
- Logto endpoint, M2M credentials, bootstrap file path — used by `LogtoConfig.java`

**Note:** `PUBLIC_HOST` and `PUBLIC_PROTOCOL` remain infrastructure env vars for the Traefik and Logto containers. The SaaS app reads its own copies via the `CAMELEER_SAAS_PROVISIONING_*` prefix. `LOGTO_ENDPOINT` and `LOGTO_DB_PASSWORD` are infrastructure env vars for the Logto service and are unchanged.
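The fallback order (claim-mapping rules > OIDC token roles > defaultRoles) can be sketched as a first-non-empty selection. Method and parameter names are illustrative assumptions, not the server's actual API:

```java
import java.util.List;

// Hedged sketch of the role-precedence fallback in applyClaimMappings().
public class RolePrecedence {
    static List<String> effectiveRoles(List<String> claimMappingRoles,
                                       List<String> oidcTokenRoles,
                                       List<String> defaultRoles) {
        if (claimMappingRoles != null && !claimMappingRoles.isEmpty()) return claimMappingRoles;
        if (oidcTokenRoles != null && !oidcTokenRoles.isEmpty()) return oidcTokenRoles;
        return defaultRoles; // e.g. VIEWER when the Custom JWT `roles` claim came back empty
    }
}
```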
@@ -8,6 +8,7 @@ import org.springframework.stereotype.Service;
import org.springframework.web.client.RestClient;

import java.time.Instant;
import java.util.List;
import java.util.Map;

/**
@@ -171,6 +172,38 @@ public class ServerApiClient {

    public record ServerHealthResponse(boolean healthy, String status) {}

    // --- Server metrics query (POST /api/v1/admin/server-metrics/query) ---

    public record MetricsQueryResponse(
            String metric,
            String statistic,
            String aggregation,
            String mode,
            int stepSeconds,
            List<MetricsSeries> series
    ) {}

    public record MetricsSeries(Map<String, String> tags, List<MetricsPoint> points) {}

    public record MetricsPoint(String t, double v) {}

    /** Execute a server-metrics query against a tenant's server. */
    public MetricsQueryResponse queryServerMetrics(String serverEndpoint, Map<String, Object> body) {
        try {
            return RestClient.create().post()
                    .uri(serverEndpoint + "/api/v1/admin/server-metrics/query")
                    .header("Authorization", "Bearer " + getAccessToken())
                    .header("X-Cameleer-Protocol-Version", "1")
                    .contentType(MediaType.APPLICATION_JSON)
                    .body(body)
                    .retrieve()
                    .body(MetricsQueryResponse.class);
        } catch (Exception e) {
            log.warn("Metrics query failed for {}: {}", serverEndpoint, e.getMessage());
            return null;
        }
    }

    private synchronized String getAccessToken() {
        if (cachedToken != null && Instant.now().isBefore(tokenExpiry.minusSeconds(60))) {
            return cachedToken;
src/main/java/net/siegeln/cameleer/saas/provisioning/CLAUDE.md (new file, 102 lines)
@@ -0,0 +1,102 @@
# Provisioning

Pluggable tenant provisioning via the `TenantProvisioner` interface. `DockerTenantProvisioner` is the Docker implementation; `DisabledTenantProvisioner` is the fallback when no Docker socket is detected. Auto-configured by `TenantProvisionerAutoConfig`.

## Tenant Provisioning Flow

When a SaaS admin creates a tenant via `VendorTenantService`:

**Synchronous (in `createAndProvision`):**
1. Create `TenantEntity` (status=PROVISIONING) + Logto organization
2. Create admin user in Logto with owner org role (if credentials provided)
3. Register OIDC redirect URIs for `/t/{slug}/oidc/callback` on the Logto Traditional Web App
4. Generate license (tier-appropriate, 365 days)
5. Return immediately — UI shows a provisioning spinner, polls via `refetchInterval`

**Asynchronous (via `self.provisionAsync()` — `@Lazy` self-proxy for `@Async`):**
6. Create per-tenant PostgreSQL user + schema via `TenantDatabaseService.createTenantDatabase(slug, password)`, store `dbPassword` on the entity
7. Create tenant-isolated Docker network (`cameleer-tenant-{slug}`)
8. Create server container with per-tenant JDBC URL (`currentSchema=tenant_{slug}&ApplicationName=tenant_{slug}`), Traefik labels (`traefik.docker.network`), health check, Docker socket bind, JAR volume, certs volume (ro)
9. Create UI container with `CAMELEER_API_URL`, `BASE_PATH`, Traefik strip-prefix labels
10. Wait for the health check (`/api/v1/health`, not `/actuator/health`, which requires auth)
11. Push license token to the server via the M2M API
12. Push OIDC config (Traditional Web App credentials + `additionalScopes: [urn:logto:scope:organizations, urn:logto:scope:organization_roles]`) to the server for SSO
13. Update tenant status -> ACTIVE (or set `provisionError` on failure)

**Server restart** (available to SaaS admin + tenant admin):
- `POST /api/vendor/tenants/{id}/restart` (SaaS admin) and `POST /api/tenant/server/restart` (tenant)
- Calls `TenantProvisioner.stop(slug)` then `start(slug)` — restarts the server + UI containers only (same image)

**Server upgrade** (available to SaaS admin + tenant admin):
- `POST /api/vendor/tenants/{id}/upgrade` (SaaS admin) and `POST /api/tenant/server/upgrade` (tenant)
- Calls `TenantProvisioner.upgrade(slug)` — removes the server + UI containers, force-pulls the latest images (preserves app containers, volumes, networks), then `provisionAsync()` re-creates the containers with the new image + pushes license + OIDC config

**Tenant delete** cleanup:
- `DockerTenantProvisioner.remove(slug)` — label-based container removal (`cameleer.tenant={slug}`), env network cleanup, tenant network removal, JAR volume removal
- `TenantDatabaseService.dropTenantDatabase(slug)` — drops the PostgreSQL `tenant_{slug}` schema + `tenant_{slug}` user
- `TenantDataCleanupService.cleanupClickHouse(slug)` — deletes ClickHouse data across all tables with a `tenant_id` column (GDPR)
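The provisioner operations referenced above suggest a contract roughly like the following. This is an assumed sketch: the method names come from this doc, but the exact signatures and the disabled fallback's behavior are illustrative:

```java
// Assumed shape of the TenantProvisioner contract; signatures are illustrative, not the real interface.
interface TenantProvisioner {
    boolean isAvailable();     // false when no Docker socket is detected
    void start(String slug);   // start server + UI containers
    void stop(String slug);    // stop server + UI containers
    void upgrade(String slug); // remove containers, force-pull latest images
    void remove(String slug);  // full cleanup on tenant delete
}

// Fallback wired in when Docker is unavailable: reports unavailable and refuses every operation.
class DisabledTenantProvisioner implements TenantProvisioner {
    public boolean isAvailable() { return false; }
    public void start(String slug) { refuse(); }
    public void stop(String slug) { refuse(); }
    public void upgrade(String slug) { refuse(); }
    public void remove(String slug) { refuse(); }
    private void refuse() { throw new IllegalStateException("tenant provisioning is disabled"); }
}
```

Callers such as `VendorTenantService` gate on `isAvailable()` before scheduling async provisioning, so the disabled fallback degrades gracefully instead of failing tenant creation.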

**Password management** (tenant portal):
- `POST /api/tenant/password` — tenant admin changes own Logto password (via `@AuthenticationPrincipal` JWT subject)
- `POST /api/tenant/team/{userId}/password` — tenant admin resets a team member's Logto password (validates org membership first)
- `POST /api/tenant/server/admin-password` — tenant admin resets the server's built-in local admin password (via M2M API to `POST /api/v1/admin/users/user:admin/password`)

## Per-tenant server env vars (set by DockerTenantProvisioner)

These env vars are injected into provisioned per-tenant server containers:

| Env var | Value | Purpose |
|---------|-------|---------|
| `SPRING_DATASOURCE_URL` | `jdbc:postgresql://cameleer-postgres:5432/cameleer?currentSchema=tenant_{slug}&ApplicationName=tenant_{slug}` | Per-tenant schema isolation + diagnostic query scoping |
| `SPRING_DATASOURCE_USERNAME` | `tenant_{slug}` | Per-tenant PG user (owns only its schema) |
| `SPRING_DATASOURCE_PASSWORD` | (generated, stored in `TenantEntity.dbPassword`) | Per-tenant PG password |
| `CAMELEER_SERVER_CLICKHOUSE_URL` | `jdbc:clickhouse://cameleer-clickhouse:8123/cameleer` | ClickHouse connection |
| `CAMELEER_SERVER_CLICKHOUSE_USERNAME` | (from provisioning config) | ClickHouse user |
| `CAMELEER_SERVER_CLICKHOUSE_PASSWORD` | (from provisioning config) | ClickHouse password |
| `CAMELEER_SERVER_TENANT_ID` | `{slug}` | Tenant slug for data isolation |
| `CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKEN` | (license token) | Bootstrap auth token for M2M communication |
| `CAMELEER_SERVER_SECURITY_JWTSECRET` | (from env, installer-generated) | JWT signing secret (must be non-empty) |
| `CAMELEER_SERVER_SECURITY_OIDC_ISSUERURI` | `${PUBLIC_PROTOCOL}://${PUBLIC_HOST}/oidc` | Token issuer claim validation |
| `CAMELEER_SERVER_SECURITY_OIDC_JWKSETURI` | `http://cameleer-logto:3001/oidc/jwks` | Docker-internal JWK fetch |
| `CAMELEER_SERVER_SECURITY_OIDC_TLSSKIPVERIFY` | `true` (conditional) | Skip cert verify for OIDC discovery; only set when no `/certs/ca.pem` exists. When ca.pem exists, the server's `docker-entrypoint.sh` imports it into the JVM truststore instead. |
| `CAMELEER_SERVER_SECURITY_OIDC_AUDIENCE` | `https://api.cameleer.local` | JWT audience validation for OIDC tokens |
| `CAMELEER_SERVER_SECURITY_CORSALLOWEDORIGINS` | `${PUBLIC_PROTOCOL}://${PUBLIC_HOST}` | Allow browser requests through Traefik |
| `CAMELEER_SERVER_LICENSE_TOKEN` | (generated) | License token for this tenant |
| `CAMELEER_SERVER_RUNTIME_ENABLED` | `true` | Enable Docker orchestration |
| `CAMELEER_SERVER_RUNTIME_SERVERURL` | `http://cameleer-server-{slug}:8081` | Per-tenant server URL (DNS alias on tenant network) |
| `CAMELEER_SERVER_RUNTIME_ROUTINGDOMAIN` | `${PUBLIC_HOST}` | Domain for Traefik routing labels |
| `CAMELEER_SERVER_RUNTIME_ROUTINGMODE` | `path` | `path` or `subdomain` routing |
| `CAMELEER_SERVER_RUNTIME_JARSTORAGEPATH` | `/data/jars` | Directory for uploaded JARs |
| `CAMELEER_SERVER_RUNTIME_DOCKERNETWORK` | `cameleer-tenant-{slug}` | Primary network for deployed app containers |
| `CAMELEER_SERVER_RUNTIME_JARDOCKERVOLUME` | `cameleer-jars-{slug}` | Docker volume name for JAR sharing between server and deployed containers |
| `CAMELEER_SERVER_RUNTIME_BASEIMAGE` | (from `CAMELEER_SAAS_PROVISIONING_RUNTIMEBASEIMAGE`) | Runtime base image for deployed app containers |
| `CAMELEER_SERVER_SECURITY_INFRASTRUCTUREENDPOINTS` | `false` | Hides Database/ClickHouse admin from tenant admins |
| `BASE_PATH` (server-ui) | `/t/{slug}` | React Router basename + `<base>` tag |
| `CAMELEER_API_URL` (server-ui) | `http://cameleer-server-{slug}:8081` | Nginx upstream proxy target (NOT `API_URL` — image uses `${CAMELEER_API_URL}`) |
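The per-tenant JDBC URL in the table above is just the base datasource URL plus two query parameters. A minimal sketch of the composition (the helper name is hypothetical, not project code):

```java
// Illustrative sketch of how the per-tenant JDBC URL in the table is composed.
public class TenantJdbcUrlSketch {
    // currentSchema isolates the tenant's data to its own PG schema;
    // ApplicationName tags its connections so diagnostic queries can be scoped per tenant.
    static String tenantJdbcUrl(String baseUrl, String slug) {
        return baseUrl + "?currentSchema=tenant_" + slug + "&ApplicationName=tenant_" + slug;
    }

    public static void main(String[] args) {
        System.out.println(tenantJdbcUrl("jdbc:postgresql://cameleer-postgres:5432/cameleer", "acme"));
    }
}
```

Both parameters are standard PostgreSQL JDBC driver options, so no server-side configuration is needed beyond the per-tenant schema and user created during provisioning.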

## Per-tenant volume mounts

| Mount | Container path | Purpose |
|-------|---------------|---------|
| `/var/run/docker.sock` | `/var/run/docker.sock` | Docker socket for app deployment orchestration |
| `cameleer-jars-{slug}` (volume, via `CAMELEER_SERVER_RUNTIME_JARDOCKERVOLUME`) | `/data/jars` | Shared JAR storage — server writes, deployed app containers read |
| `cameleer-saas_certs` (volume, ro) | `/certs` | Platform TLS certs + CA bundle for OIDC trust |

## SaaS provisioning properties (`ProvisioningProperties`)

The `CAMELEER_SAAS_PROVISIONING_*` prefix means "the SaaS app forwards this to provisioned tenant servers". These values are read by the SaaS app and injected as `CAMELEER_SERVER_*` env vars on provisioned containers.

| Env var | Spring property | Purpose |
|---------|----------------|---------|
| `CAMELEER_SAAS_PROVISIONING_SERVERIMAGE` | `cameleer.saas.provisioning.serverimage` | Docker image for per-tenant server containers |
| `CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE` | `cameleer.saas.provisioning.serveruiimage` | Docker image for per-tenant UI containers |
| `CAMELEER_SAAS_PROVISIONING_RUNTIMEBASEIMAGE` | `cameleer.saas.provisioning.runtimebaseimage` | Runtime base image for deployed apps (forwarded as `CAMELEER_SERVER_RUNTIME_BASEIMAGE`) |
| `CAMELEER_SAAS_PROVISIONING_NETWORKNAME` | `cameleer.saas.provisioning.networkname` | Shared services Docker network (compose default) |
| `CAMELEER_SAAS_PROVISIONING_TRAEFIKNETWORK` | `cameleer.saas.provisioning.traefiknetwork` | Traefik Docker network for routing |
| `CAMELEER_SAAS_PROVISIONING_PUBLICHOST` | `cameleer.saas.provisioning.publichost` | Public hostname (same value as infrastructure `PUBLIC_HOST`) |
| `CAMELEER_SAAS_PROVISIONING_PUBLICPROTOCOL` | `cameleer.saas.provisioning.publicprotocol` | Public protocol (same value as infrastructure `PUBLIC_PROTOCOL`) |
| `CAMELEER_SAAS_PROVISIONING_DATASOURCEURL` | `cameleer.saas.provisioning.datasourceurl` | PostgreSQL JDBC URL (base, without schema params) |
| `CAMELEER_SAAS_PROVISIONING_DATASOURCEUSERNAME` | `cameleer.saas.provisioning.datasourceusername` | PostgreSQL user (fallback for pre-isolation tenants) |
| `CAMELEER_SAAS_PROVISIONING_DATASOURCEPASSWORD` | `cameleer.saas.provisioning.datasourcepassword` | PostgreSQL password (fallback for pre-isolation tenants) |
| `CAMELEER_SAAS_PROVISIONING_CLICKHOUSEPASSWORD` | `cameleer.saas.provisioning.clickhousepassword` | ClickHouse password for provisioned servers |
| `CAMELEER_SAAS_PROVISIONING_CORSORIGINS` | `cameleer.saas.provisioning.corsorigins` | CORS allowed origins for provisioned servers |
@@ -231,6 +231,7 @@ public class DockerTenantProvisioner implements TenantProvisioner {
            // Apps deployed by this server join the tenant network (isolated)
            "CAMELEER_SERVER_RUNTIME_DOCKERNETWORK=" + tenantNetwork,
            "CAMELEER_SERVER_RUNTIME_JARDOCKERVOLUME=cameleer-jars-" + slug,
            "CAMELEER_SERVER_RUNTIME_BASEIMAGE=" + props.runtimeBaseImage(),
            "CAMELEER_SERVER_SECURITY_INFRASTRUCTUREENDPOINTS=false"
        ));
        // If no CA bundle exists, fall back to TLS skip for OIDC (self-signed dev)
@@ -6,6 +6,7 @@ import org.springframework.boot.context.properties.ConfigurationProperties;
public record ProvisioningProperties(
        String serverImage,
        String serverUiImage,
        String runtimeBaseImage,
        String networkName,
        String traefikNetwork,
        String publicHost,
src/main/java/net/siegeln/cameleer/saas/vendor/TenantMetricsController.java (vendored, new file, 74 lines)
@@ -0,0 +1,74 @@
package net.siegeln.cameleer.saas.vendor;

import net.siegeln.cameleer.saas.provisioning.ServerStatus;
import net.siegeln.cameleer.saas.tenant.TenantEntity;
import org.springframework.http.ResponseEntity;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;
import java.util.concurrent.CompletableFuture;

@RestController
@RequestMapping("/api/vendor/metrics")
@PreAuthorize("hasAuthority('SCOPE_platform:admin')")
public class TenantMetricsController {

    private final VendorTenantService vendorTenantService;
    private final TenantMetricsService metricsService;

    public TenantMetricsController(VendorTenantService vendorTenantService,
                                   TenantMetricsService metricsService) {
        this.vendorTenantService = vendorTenantService;
        this.metricsService = metricsService;
    }

    public record TenantMetricsEntry(
            String tenantId,
            String tenantName,
            String slug,
            String tier,
            String status,
            String serverState,
            TenantMetricsService.MetricsSummary metrics
    ) {}

    @GetMapping
    public ResponseEntity<List<TenantMetricsEntry>> all() {
        List<TenantEntity> tenants = vendorTenantService.listAll();

        List<CompletableFuture<TenantMetricsEntry>> futures = tenants.stream()
                .map(tenant -> CompletableFuture.supplyAsync(() -> {
                    ServerStatus serverStatus = vendorTenantService.getServerStatus(tenant);
                    String state = serverStatus.state().name();

                    TenantMetricsService.MetricsSummary metrics = null;
                    String endpoint = tenant.getServerEndpoint();
                    boolean isRunning = "ACTIVE".equals(tenant.getStatus().name())
                            && endpoint != null && !endpoint.isBlank()
                            && "RUNNING".equals(state);
                    if (isRunning) {
                        metrics = metricsService.getMetricsSummary(endpoint);
                    }

                    return new TenantMetricsEntry(
                            tenant.getId().toString(),
                            tenant.getName(),
                            tenant.getSlug(),
                            tenant.getTier().name(),
                            tenant.getStatus().name(),
                            state,
                            metrics
                    );
                }))
                .toList();

        List<TenantMetricsEntry> entries = futures.stream()
                .map(CompletableFuture::join)
                .toList();

        return ResponseEntity.ok(entries);
    }
}
src/main/java/net/siegeln/cameleer/saas/vendor/TenantMetricsService.java (vendored, new file, 176 lines)
@@ -0,0 +1,176 @@
package net.siegeln.cameleer.saas.vendor;

import net.siegeln.cameleer.saas.identity.ServerApiClient;
import net.siegeln.cameleer.saas.identity.ServerApiClient.MetricsQueryResponse;
import net.siegeln.cameleer.saas.identity.ServerApiClient.MetricsPoint;
import net.siegeln.cameleer.saas.identity.ServerApiClient.MetricsSeries;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;

import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

@Service
public class TenantMetricsService {

    private static final Logger log = LoggerFactory.getLogger(TenantMetricsService.class);

    private final ServerApiClient serverApiClient;

    public TenantMetricsService(ServerApiClient serverApiClient) {
        this.serverApiClient = serverApiClient;
    }

    // --- Response records ---

    public record MetricsSummary(
            String collectedAt,
            AgentMetrics agents,
            IngestionMetrics ingestion,
            ServerJvmMetrics server,
            HttpMetrics http,
            double authFailuresPerMinute
    ) {}

    public record AgentMetrics(int live, int stale, int dead, int shutdown) {}

    public record IngestionMetrics(long bufferDepth, double dropsPerMinute) {}

    public record ServerJvmMetrics(
            double cpuUsage,
            long heapUsedBytes,
            long heapMaxBytes,
            long uptimeSeconds,
            int threadCount
    ) {}

    public record HttpMetrics(double requestsPerMinute, double errorRate) {}

    /**
     * Query a tenant's server for key metrics and assemble a summary snapshot.
     * Fires multiple queries concurrently (one per metric group) over the last 5 minutes.
     */
    public MetricsSummary getMetricsSummary(String serverEndpoint) {
        Instant to = Instant.now();
        Instant from = to.minus(5, ChronoUnit.MINUTES);
        String fromStr = from.toString();
        String toStr = to.toString();

        // Fire all queries concurrently
        var agentsFuture = CompletableFuture.supplyAsync(() ->
                query(serverEndpoint, "cameleer.agents.connected", "value", fromStr, toStr, "avg", "raw", List.of("state"), null));
        var cpuFuture = CompletableFuture.supplyAsync(() ->
                query(serverEndpoint, "process.cpu.usage", "value", fromStr, toStr, "avg", "raw", null, null));
        var heapUsedFuture = CompletableFuture.supplyAsync(() ->
                query(serverEndpoint, "jvm.memory.used", "value", fromStr, toStr, "sum", "raw", null, Map.of("area", "heap")));
        var heapMaxFuture = CompletableFuture.supplyAsync(() ->
                query(serverEndpoint, "jvm.memory.max", "value", fromStr, toStr, "sum", "raw", null, Map.of("area", "heap")));
        var uptimeFuture = CompletableFuture.supplyAsync(() ->
                query(serverEndpoint, "process.uptime", "value", fromStr, toStr, "latest", "raw", null, null));
        var threadsFuture = CompletableFuture.supplyAsync(() ->
                query(serverEndpoint, "jvm.threads.live", "value", fromStr, toStr, "avg", "raw", null, null));
        var dropsFuture = CompletableFuture.supplyAsync(() ->
                query(serverEndpoint, "cameleer.ingestion.drops", "count", fromStr, toStr, "sum", "delta", null, null));
        var bufferFuture = CompletableFuture.supplyAsync(() ->
                query(serverEndpoint, "cameleer.ingestion.buffer.size", "value", fromStr, toStr, "sum", "raw", null, null));
        var httpTotalFuture = CompletableFuture.supplyAsync(() ->
                query(serverEndpoint, "http.server.requests", "count", fromStr, toStr, "sum", "delta", null, null));
        var http5xxFuture = CompletableFuture.supplyAsync(() ->
                query(serverEndpoint, "http.server.requests", "count", fromStr, toStr, "sum", "delta", null, Map.of("outcome", "SERVER_ERROR")));
        var authFuture = CompletableFuture.supplyAsync(() ->
                query(serverEndpoint, "cameleer.auth.failures", "count", fromStr, toStr, "sum", "delta", null, null));

        try {
            // Extract latest values from each response
            var agentsResp = agentsFuture.join();
            int live = agentStateValue(agentsResp, "live");
            int stale = agentStateValue(agentsResp, "stale");
            int dead = agentStateValue(agentsResp, "dead");
            int shutdown = agentStateValue(agentsResp, "shutdown");

            double cpu = latestValue(cpuFuture.join());
            long heapUsed = (long) latestValue(heapUsedFuture.join());
            long heapMax = (long) latestValue(heapMaxFuture.join());
            long uptimeMs = (long) latestValue(uptimeFuture.join());
            int threads = (int) latestValue(threadsFuture.join());

            double dropsTotal = sumLatestValues(dropsFuture.join());
            long bufferDepth = (long) latestValue(bufferFuture.join());

            double httpTotal = sumLatestValues(httpTotalFuture.join());
            double http5xx = sumLatestValues(http5xxFuture.join());
            double errorRate = httpTotal > 0 ? http5xx / httpTotal : 0.0;
            // stepSeconds=300 (5min window), so total is per-5-min; convert to per-minute
            double httpPerMin = httpTotal / 5.0;

            double authTotal = sumLatestValues(authFuture.join());
            double authPerMin = authTotal / 5.0;

            return new MetricsSummary(
                    toStr,
                    new AgentMetrics(live, stale, dead, shutdown),
                    new IngestionMetrics(bufferDepth, dropsTotal / 5.0),
                    new ServerJvmMetrics(cpu, heapUsed, heapMax, uptimeMs / 1000, threads),
                    new HttpMetrics(httpPerMin, errorRate),
                    authPerMin
            );
        } catch (Exception e) {
            log.warn("Failed to assemble metrics summary for {}: {}", serverEndpoint, e.getMessage());
            return null;
        }
    }

    private MetricsQueryResponse query(String endpoint, String metric, String statistic,
                                       String from, String to, String aggregation, String mode,
                                       List<String> groupByTags, Map<String, String> filterTags) {
        Map<String, Object> body = new HashMap<>();
        body.put("metric", metric);
        body.put("statistic", statistic);
        body.put("from", from);
        body.put("to", to);
        body.put("stepSeconds", 300);
        body.put("aggregation", aggregation);
        body.put("mode", mode);
        if (groupByTags != null) body.put("groupByTags", groupByTags);
        if (filterTags != null) body.put("filterTags", filterTags);
        return serverApiClient.queryServerMetrics(endpoint, body);
    }

    /** Extract the latest value from the first (or only) series. */
    private double latestValue(MetricsQueryResponse resp) {
        if (resp == null || resp.series() == null || resp.series().isEmpty()) return 0.0;
        List<MetricsPoint> points = resp.series().getFirst().points();
        if (points == null || points.isEmpty()) return 0.0;
        return points.getLast().v();
    }

    /** Sum the latest value across all series (for metrics with groupByTags or multiple series). */
    private double sumLatestValues(MetricsQueryResponse resp) {
        if (resp == null || resp.series() == null || resp.series().isEmpty()) return 0.0;
        double sum = 0.0;
        for (MetricsSeries series : resp.series()) {
            if (series.points() != null && !series.points().isEmpty()) {
                sum += series.points().getLast().v();
            }
        }
        return sum;
    }

    /** Extract the latest value for a specific agent state tag. */
    private int agentStateValue(MetricsQueryResponse resp, String state) {
        if (resp == null || resp.series() == null) return 0;
        for (MetricsSeries series : resp.series()) {
            if (series.tags() != null && state.equals(series.tags().get("state"))) {
                if (series.points() != null && !series.points().isEmpty()) {
                    return (int) series.points().getLast().v();
                }
            }
        }
        return 0;
    }
}
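The per-minute conversions in `getMetricsSummary` follow directly from the query window: delta-mode sums cover a 5-minute window, so dividing by 5 yields a per-minute rate, and the error rate guards against an idle window. The helper below is an illustrative restatement of that arithmetic, not project code:

```java
// Illustrative restatement of the rate arithmetic used in getMetricsSummary above.
public class RateConversionSketch {
    // A delta-mode sum over the window is a total count; divide by the window length for a rate.
    static double perMinute(double totalOverWindow, int windowMinutes) {
        return windowMinutes > 0 ? totalOverWindow / windowMinutes : 0.0;
    }

    // Share of requests that were server errors; guarded so an idle window yields 0 rather than NaN.
    static double errorRate(double totalRequests, double serverErrors) {
        return totalRequests > 0 ? serverErrors / totalRequests : 0.0;
    }

    public static void main(String[] args) {
        System.out.println(perMinute(600, 5)); // 600 requests over a 5-minute window
        System.out.println(errorRate(600, 6)); // 6 server errors out of 600 requests
    }
}
```

The same division is applied to ingestion drops and auth failures, so all three rates are comparable per-minute figures over the same window.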
@@ -22,6 +22,7 @@ import net.siegeln.cameleer.saas.tenant.TenantStatus;
import net.siegeln.cameleer.saas.tenant.dto.CreateTenantRequest;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.annotation.Lazy;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
@@ -49,6 +50,7 @@ public class VendorTenantService {
    private final ProvisioningProperties provisioningProps;
    private final TenantDataCleanupService dataCleanupService;
    private final TenantDatabaseService tenantDatabaseService;
    private final VendorTenantService self;

    public VendorTenantService(TenantService tenantService,
                               TenantRepository tenantRepository,
@@ -60,7 +62,8 @@ public class VendorTenantService {
                               AuditService auditService,
                               ProvisioningProperties provisioningProps,
                               TenantDataCleanupService dataCleanupService,
                               TenantDatabaseService tenantDatabaseService) {
                               TenantDatabaseService tenantDatabaseService,
                               @Lazy VendorTenantService self) {
        this.tenantService = tenantService;
        this.tenantRepository = tenantRepository;
        this.licenseService = licenseService;
@@ -72,6 +75,7 @@ public class VendorTenantService {
        this.provisioningProps = provisioningProps;
        this.dataCleanupService = dataCleanupService;
        this.tenantDatabaseService = tenantDatabaseService;
        this.self = self;
    }

    @Transactional
@@ -114,7 +118,7 @@ public class VendorTenantService {

        // 4. Provision server asynchronously (Docker containers, health check, config push)
        if (tenantProvisioner.isAvailable()) {
            provisionAsync(tenant.getId(), tenant.getSlug(), tenant.getTier().name(), license.getToken(), actorId);
            self.provisionAsync(tenant.getId(), tenant.getSlug(), tenant.getTier().name(), license.getToken(), actorId);
        }

        return tenant;
@@ -251,7 +255,7 @@ public class VendorTenantService {
            tenantProvisioner.remove(tenant.getSlug());
            var license = licenseService.getActiveLicense(tenantId).orElse(null);
            String token = license != null ? license.getToken() : "";
            provisionAsync(tenantId, tenant.getSlug(), tenant.getTier().name(), token, null);
            self.provisionAsync(tenantId, tenant.getSlug(), tenant.getTier().name(), token, null);
            return;
        }
        throw e;
@@ -268,7 +272,7 @@ public class VendorTenantService {
        // Re-provision with freshly pulled images
        var license = licenseService.getActiveLicense(tenantId).orElse(null);
        String token = license != null ? license.getToken() : "";
        provisionAsync(tenantId, tenant.getSlug(), tenant.getTier().name(), token, null);
        self.provisionAsync(tenantId, tenant.getSlug(), tenant.getTier().name(), token, null);
    }

    @Transactional
@@ -45,6 +45,7 @@ cameleer:
  provisioning:
    serverimage: ${CAMELEER_SAAS_PROVISIONING_SERVERIMAGE:gitea.siegeln.net/cameleer/cameleer-server:latest}
    serveruiimage: ${CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE:gitea.siegeln.net/cameleer/cameleer-server-ui:latest}
    runtimebaseimage: ${CAMELEER_SAAS_PROVISIONING_RUNTIMEBASEIMAGE:gitea.siegeln.net/cameleer/cameleer-runtime-base:latest}
    networkname: ${CAMELEER_SAAS_PROVISIONING_NETWORKNAME:cameleer-saas_cameleer}
    traefiknetwork: ${CAMELEER_SAAS_PROVISIONING_TRAEFIKNETWORK:cameleer-traefik}
    publichost: ${CAMELEER_SAAS_PROVISIONING_PUBLICHOST:localhost}
@@ -47,7 +47,7 @@ class TenantPortalServiceTest {
    private TenantProvisioner tenantProvisioner;

    private final ProvisioningProperties provisioningProps = new ProvisioningProperties(
            null, null, null, null, "test.example.com", "https", null, null, null, null, null, null, null, null, null);
            null, null, null, null, null, "test.example.com", "https", null, null, null, null, null, null, null, null, null);

    private TenantPortalService tenantPortalService;
@@ -75,14 +75,20 @@ class VendorTenantServiceTest {
    @BeforeEach
    void setUp() {
        var provisioningProps = new ProvisioningProperties(
                "img", "uiimg", "net", "traefik", "localhost", "https",
                "img", "uiimg", "runtime-base:latest", "net", "traefik", "localhost", "https",
                "jdbc:postgresql://pg:5432/db", "cameleer", "cameleer_dev",
                "jdbc:clickhouse://ch:8123/cameleer", "default", "cameleer_ch",
                "https://localhost/oidc", "http://cameleer-logto:3001/oidc/jwks", "https://localhost");
        // Pass null for self-proxy initially, then re-create with the instance itself
        // (in production, Spring's @Lazy proxy handles this circular ref)
        vendorTenantService = new VendorTenantService(
                tenantService, tenantRepository, licenseService,
                tenantProvisioner, serverApiClient, logtoClient, logtoConfig,
                auditService, provisioningProps, dataCleanupService, tenantDatabaseService);
                auditService, provisioningProps, dataCleanupService, tenantDatabaseService, null);
        vendorTenantService = new VendorTenantService(
                tenantService, tenantRepository, licenseService,
                tenantProvisioner, serverApiClient, logtoClient, logtoConfig,
                auditService, provisioningProps, dataCleanupService, tenantDatabaseService, vendorTenantService);
    }

    // --- Helpers ---
ui/CLAUDE.md (new file, 30 lines)
@@ -0,0 +1,30 @@
# Frontend

React 19 SPA served at `/platform/*` by the Spring Boot backend.

## Core files

- `main.tsx` — React 19 root
- `router.tsx` — `/vendor/*` + `/tenant/*` with `RequireScope` guards and `LandingRedirect` that waits for scopes
- `Layout.tsx` — persona-aware sidebar: vendor sees an expandable "Vendor" section (Tenants, Audit Log, Certificates, Infrastructure, Identity/Logto); tenant admin sees Dashboard/License/SSO/Team/Audit/Settings
- `OrgResolver.tsx` — merges global + org-scoped token scopes (vendor's platform:admin is global)
- `config.ts` — fetch Logto config from /platform/api/config

## Auth hooks

- `auth/useAuth.ts` — auth hook (isAuthenticated, logout, signIn)
- `auth/useOrganization.ts` — Zustand store for the current tenant
- `auth/useScopes.ts` — decode JWT scopes, hasScope()
- `auth/ProtectedRoute.tsx` — guard (redirects to /login)

## Pages

- **Vendor pages**: `VendorTenantsPage.tsx`, `CreateTenantPage.tsx`, `TenantDetailPage.tsx`, `VendorAuditPage.tsx`, `CertificatesPage.tsx`, `InfrastructurePage.tsx`
- **Tenant pages**: `TenantDashboardPage.tsx` (restart + upgrade server), `TenantLicensePage.tsx`, `SsoPage.tsx`, `TeamPage.tsx` (reset member passwords), `TenantAuditPage.tsx`, `SettingsPage.tsx` (change own password, reset server admin password)

## Custom Sign-in UI (`ui/sign-in/`)

Separate Vite+React SPA replacing Logto's default sign-in page. Built as a custom Logto Docker image — see `docker/CLAUDE.md` for details.

- `SignInPage.tsx` — form with @cameleer/design-system components
- `experience-api.ts` — Logto Experience API client (4-step: init -> verify -> identify -> submit)
ui/package-lock.json (generated, 8 lines changed)
@@ -9,7 +9,7 @@
    "version": "0.1.0",
    "hasInstallScript": true,
    "dependencies": {
      "@cameleer/design-system": "^0.1.51",
      "@cameleer/design-system": "^0.1.54",
      "@logto/react": "^4.0.13",
      "@tanstack/react-query": "^5.90.0",
      "lucide-react": "^1.7.0",
@@ -309,9 +309,9 @@
      }
    },
    "node_modules/@cameleer/design-system": {
      "version": "0.1.51",
      "resolved": "https://gitea.siegeln.net/api/packages/cameleer/npm/%40cameleer%2Fdesign-system/-/0.1.51/design-system-0.1.51.tgz",
      "integrity": "sha512-ppZSiR6ZzzrUbtHTtnwpU4Zr2LPbcbJfAn0Ayh/OzDf9k6kFjn5myJWFlg+VJAZkFQoJA5y76GcKBdJ8nty4Tw==",
      "version": "0.1.54",
      "resolved": "https://gitea.siegeln.net/api/packages/cameleer/npm/%40cameleer%2Fdesign-system/-/0.1.54/design-system-0.1.54.tgz",
      "integrity": "sha512-IX05JmY/JcxTndfDWBHF7uizrRSqJgEM/J5uv5vQerM+Zq02yUzVNcV4QufVYBevGdnI4acUScnDlmSOOb85Qg==",
      "dependencies": {
        "lucide-react": "^1.7.0",
        "react": "^19.0.0",
@@ -10,7 +10,7 @@
    "postinstall": "node -e \"const fs=require('fs'),p='node_modules/@cameleer/design-system/assets/';if(fs.existsSync('public')){fs.copyFileSync(p+'cameleer-logo.svg','public/favicon.svg')}\""
  },
  "dependencies": {
    "@cameleer/design-system": "^0.1.51",
    "@cameleer/design-system": "^0.1.54",
    "@logto/react": "^4.0.13",
    "@tanstack/react-query": "^5.90.0",
    "lucide-react": "^1.7.0",
ui/sign-in/package-lock.json (generated, 8 lines changed)
@@ -8,7 +8,7 @@
    "name": "cameleer-sign-in",
    "version": "0.1.0",
    "dependencies": {
      "@cameleer/design-system": "^0.1.51",
      "@cameleer/design-system": "^0.1.54",
      "react": "^19.0.0",
      "react-dom": "^19.0.0"
    },
@@ -303,9 +303,9 @@
      }
    },
    "node_modules/@cameleer/design-system": {
      "version": "0.1.51",
      "resolved": "https://gitea.siegeln.net/api/packages/cameleer/npm/%40cameleer%2Fdesign-system/-/0.1.51/design-system-0.1.51.tgz",
      "integrity": "sha512-ppZSiR6ZzzrUbtHTtnwpU4Zr2LPbcbJfAn0Ayh/OzDf9k6kFjn5myJWFlg+VJAZkFQoJA5y76GcKBdJ8nty4Tw==",
      "version": "0.1.54",
      "resolved": "https://gitea.siegeln.net/api/packages/cameleer/npm/%40cameleer%2Fdesign-system/-/0.1.54/design-system-0.1.54.tgz",
      "integrity": "sha512-IX05JmY/JcxTndfDWBHF7uizrRSqJgEM/J5uv5vQerM+Zq02yUzVNcV4QufVYBevGdnI4acUScnDlmSOOb85Qg==",
      "dependencies": {
        "lucide-react": "^1.7.0",
        "react": "^19.0.0",
@@ -9,7 +9,7 @@
    "preview": "vite preview"
  },
  "dependencies": {
    "@cameleer/design-system": "^0.1.51",
    "@cameleer/design-system": "^0.1.54",
    "react": "^19.0.0",
    "react-dom": "^19.0.0"
  },

**`vendor-hooks` (API hooks module)** — `useTenantMetrics` polls `GET /vendor/metrics` every 60 s:

```diff
@@ -1,6 +1,6 @@
 import { useQuery, useMutation, useQueryClient } from '@tanstack/react-query';
 import { api } from './client';
-import type { VendorTenantSummary, VendorTenantDetail, CreateTenantRequest, TenantResponse, LicenseResponse, AuditLogPage, AuditLogFilters } from '../types/api';
+import type { VendorTenantSummary, VendorTenantDetail, CreateTenantRequest, TenantResponse, LicenseResponse, AuditLogPage, AuditLogFilters, TenantMetricsEntry } from '../types/api';

 export function useVendorTenants() {
   return useQuery<VendorTenantSummary[]>({
@@ -179,3 +179,13 @@ export function useInfraChDetail(tenantId: string) {
     enabled: !!tenantId,
   });
 }
+
+// --- Tenant Metrics ---
+
+export function useTenantMetrics() {
+  return useQuery<TenantMetricsEntry[]>({
+    queryKey: ['vendor', 'metrics'],
+    queryFn: () => api.get('/vendor/metrics'),
+    refetchInterval: 60_000,
+  });
+}
```

**Vendor sidebar (`Layout` component)** — a "Metrics" nav item is inserted between "Certificates" and "Infrastructure":

```diff
@@ -109,6 +109,14 @@ export function Layout() {
         >
           Certificates
         </div>
+        <div
+          style={{ padding: '6px 12px 6px 36px', fontSize: 13, cursor: 'pointer',
+            fontWeight: isActive(location, '/vendor/metrics') ? 600 : 400,
+            color: isActive(location, '/vendor/metrics') ? 'var(--amber)' : 'var(--text-muted)' }}
+          onClick={() => navigate('/vendor/metrics')}
+        >
+          Metrics
+        </div>
         <div
           style={{ padding: '6px 12px 6px 36px', fontSize: 13, cursor: 'pointer',
             fontWeight: isActive(location, '/vendor/infrastructure') ? 600 : 400,
```
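The `isActive(location, path)` helper used above is not part of this diff. A plausible sketch, assuming it matches the current pathname against a route prefix (the real helper lives elsewhere in the Layout module and may differ):

```typescript
// Hypothetical reimplementation of the isActive helper referenced in the nav items.
// Exact-match or prefix-with-slash, so '/vendor/metrics' does not match '/vendor/metricsX'.
function isActive(location: { pathname: string }, path: string): boolean {
  return location.pathname === path || location.pathname.startsWith(path + '/');
}
```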

**`ui/src/pages/vendor/VendorMetricsPage.tsx`** (new file, 194 lines):

```tsx
import { Card, KpiStrip, Spinner, Badge } from '@cameleer/design-system';
import { Activity } from 'lucide-react';
import { useTenantMetrics } from '../../api/vendor-hooks';
import { useNavigate } from 'react-router';
import type { TenantMetricsEntry, MetricsSummary } from '../../types/api';
import { tierColor } from '../../utils/tier';

function formatBytes(n: number): string {
  if (n < 1024) return `${n} B`;
  if (n < 1024 * 1024) return `${(n / 1024).toFixed(1)} KB`;
  if (n < 1024 * 1024 * 1024) return `${(n / 1024 / 1024).toFixed(1)} MB`;
  return `${(n / 1024 / 1024 / 1024).toFixed(2)} GB`;
}

function formatUptime(seconds: number): string {
  const d = Math.floor(seconds / 86400);
  const h = Math.floor((seconds % 86400) / 3600);
  const m = Math.floor((seconds % 3600) / 60);
  if (d > 0) return `${d}d ${h}h`;
  if (h > 0) return `${h}h ${m}m`;
  return `${m}m`;
}

function formatPct(v: number): string {
  return `${(v * 100).toFixed(1)}%`;
}

function formatRate(v: number): string {
  if (v === 0) return '0';
  if (v < 0.1) return v.toFixed(3);
  if (v < 10) return v.toFixed(1);
  return Math.round(v).toLocaleString();
}

const thStyle: React.CSSProperties = {
  textAlign: 'left',
  padding: '8px 16px',
  fontSize: 11,
  fontWeight: 600,
  color: 'var(--text-muted)',
  textTransform: 'uppercase',
  letterSpacing: '0.05em',
  borderBottom: '1px solid var(--border)',
};

const tdStyle: React.CSSProperties = {
  padding: '10px 16px',
  fontSize: 13,
  borderBottom: '1px solid var(--border)',
  fontVariantNumeric: 'tabular-nums',
};

function AgentsBadges({ m }: { m: MetricsSummary }) {
  const { live, stale, dead } = m.agents;
  return (
    <span style={{ display: 'inline-flex', gap: 6 }}>
      <Badge label={`${live} live`} color="success" />
      {stale > 0 && <Badge label={`${stale} stale`} color="warning" />}
      {dead > 0 && <Badge label={`${dead} dead`} color="error" />}
    </span>
  );
}

function HeapBar({ used, max }: { used: number; max: number }) {
  const pct = max > 0 ? (used / max) * 100 : 0;
  const color = pct > 85 ? 'var(--error, #ef4444)' : pct > 70 ? 'var(--warning, #f59e0b)' : 'var(--success, #22c55e)';
  return (
    <div style={{ display: 'flex', alignItems: 'center', gap: 8 }}>
      <div style={{ width: 60, height: 6, borderRadius: 3, background: 'var(--border)', overflow: 'hidden' }}>
        <div style={{ width: `${pct}%`, height: '100%', borderRadius: 3, background: color }} />
      </div>
      <span style={{ fontSize: 12, color: 'var(--text-muted)' }}>
        {formatBytes(used)} / {formatBytes(max)}
      </span>
    </div>
  );
}

function DropsBadge({ rate }: { rate: number }) {
  if (rate === 0) return <span style={{ color: 'var(--text-muted)' }}>0</span>;
  return <Badge label={`${formatRate(rate)}/min`} color="error" />;
}

function TenantRow({ entry, onClick }: { entry: TenantMetricsEntry; onClick: () => void }) {
  const m = entry.metrics;
  const notRunning = entry.serverState !== 'RUNNING';

  return (
    <tr style={{ cursor: 'pointer' }} onClick={onClick}>
      <td style={tdStyle}>
        <div style={{ display: 'flex', alignItems: 'center', gap: 8 }}>
          <span style={{ fontWeight: 500 }}>{entry.tenantName}</span>
          <Badge label={entry.tier} color={tierColor(entry.tier)} />
        </div>
      </td>
      {notRunning || !m ? (
        <td colSpan={6} style={{ ...tdStyle, color: 'var(--text-muted)', textAlign: 'center' }}>
          {notRunning ? `Server ${entry.serverState.toLowerCase()}` : 'No metrics available'}
        </td>
      ) : (
        <>
          <td style={tdStyle}><AgentsBadges m={m} /></td>
          <td style={{ ...tdStyle, textAlign: 'right' }}>{formatPct(m.server.cpuUsage)}</td>
          <td style={tdStyle}><HeapBar used={m.server.heapUsedBytes} max={m.server.heapMaxBytes} /></td>
          <td style={{ ...tdStyle, textAlign: 'right' }}>{formatRate(m.http.requestsPerMinute)}/min</td>
          <td style={{ ...tdStyle, textAlign: 'center' }}><DropsBadge rate={m.ingestion.dropsPerMinute} /></td>
          <td style={{ ...tdStyle, textAlign: 'right', color: 'var(--text-muted)' }}>{formatUptime(m.server.uptimeSeconds)}</td>
        </>
      )}
    </tr>
  );
}

function FleetKpis({ entries }: { entries: TenantMetricsEntry[] }) {
  const withMetrics = entries.filter((e) => e.metrics != null);
  const totalAgentsLive = withMetrics.reduce((s, e) => s + (e.metrics!.agents.live), 0);
  const totalAgentsDead = withMetrics.reduce((s, e) => s + (e.metrics!.agents.dead), 0);
  const totalDrops = withMetrics.reduce((s, e) => s + (e.metrics!.ingestion.dropsPerMinute), 0);
  const running = entries.filter((e) => e.serverState === 'RUNNING').length;
  const avgCpu = withMetrics.length > 0
    ? withMetrics.reduce((s, e) => s + e.metrics!.server.cpuUsage, 0) / withMetrics.length
    : 0;

  return (
    <KpiStrip
      items={[
        { label: 'Tenants Running', value: `${running} / ${entries.length}` },
        { label: 'Total Agents Live', value: String(totalAgentsLive) },
        { label: 'Dead Agents', value: String(totalAgentsDead) },
        { label: 'Avg CPU', value: formatPct(avgCpu) },
        { label: 'Ingestion Drops', value: totalDrops === 0 ? '0' : `${formatRate(totalDrops)}/min` },
      ]}
    />
  );
}

export function VendorMetricsPage() {
  const { data, isLoading, isError } = useTenantMetrics();
  const navigate = useNavigate();

  return (
    <div style={{ padding: 24, display: 'flex', flexDirection: 'column', gap: 24, maxWidth: 1200 }}>
      <h1 style={{ margin: 0, fontSize: '1.25rem', fontWeight: 600 }}>Tenant Metrics</h1>

      <Card>
        <div style={{ display: 'flex', alignItems: 'center', gap: 10, padding: '16px 20px', borderBottom: '1px solid var(--border)' }}>
          <Activity size={18} style={{ color: 'var(--amber)' }} />
          <h2 style={{ margin: 0, fontSize: '1rem', fontWeight: 600 }}>Fleet Overview</h2>
          {isLoading && <Spinner size="sm" />}
          {isError && <span style={{ fontSize: 13, color: 'var(--error, #ef4444)' }}>Failed to load</span>}
        </div>

        {data && (
          <>
            <div style={{ padding: '4px 20px' }}>
              <FleetKpis entries={data} />
            </div>

            <table style={{ width: '100%', borderCollapse: 'collapse' }}>
              <thead>
                <tr>
                  <th style={thStyle}>Tenant</th>
                  <th style={thStyle}>Agents</th>
                  <th style={{ ...thStyle, textAlign: 'right' }}>CPU</th>
                  <th style={thStyle}>Heap</th>
                  <th style={{ ...thStyle, textAlign: 'right' }}>HTTP Req</th>
                  <th style={{ ...thStyle, textAlign: 'center' }}>Drops</th>
                  <th style={{ ...thStyle, textAlign: 'right' }}>Uptime</th>
                </tr>
              </thead>
              <tbody>
                {data.length === 0 ? (
                  <tr>
                    <td colSpan={7} style={{ ...tdStyle, color: 'var(--text-muted)', textAlign: 'center' }}>
                      No tenants found
                    </td>
                  </tr>
                ) : (
                  data.map((entry) => (
                    <TenantRow
                      key={entry.tenantId}
                      entry={entry}
                      onClick={() => navigate(`/vendor/tenants/${entry.tenantId}`)}
                    />
                  ))
                )}
              </tbody>
            </table>
          </>
        )}
      </Card>
    </div>
  );
}
```
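The formatting helpers in the page above are pure functions and easy to check standalone; here they are reproduced verbatim from the file with example inputs:

```typescript
// formatBytes / formatUptime exactly as defined in VendorMetricsPage.tsx above.
function formatBytes(n: number): string {
  if (n < 1024) return `${n} B`;
  if (n < 1024 * 1024) return `${(n / 1024).toFixed(1)} KB`;
  if (n < 1024 * 1024 * 1024) return `${(n / 1024 / 1024).toFixed(1)} MB`;
  return `${(n / 1024 / 1024 / 1024).toFixed(2)} GB`;
}

function formatUptime(seconds: number): string {
  const d = Math.floor(seconds / 86400);
  const h = Math.floor((seconds % 86400) / 3600);
  const m = Math.floor((seconds % 3600) / 60);
  if (d > 0) return `${d}d ${h}h`;
  if (h > 0) return `${h}h ${m}m`;
  return `${m}m`;
}

// e.g. formatBytes(1536) -> "1.5 KB"; formatUptime(90061) -> "1d 1h"
```

Note the deliberate precision loss: uptime drops minutes once a day has elapsed, which keeps the table column compact.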

**App router** — the page is registered at `/vendor/metrics`, gated on the `platform:admin` scope:

```diff
@@ -14,6 +14,7 @@ import { TenantDetailPage } from './pages/vendor/TenantDetailPage';
 import { VendorAuditPage } from './pages/vendor/VendorAuditPage';
 import { CertificatesPage } from './pages/vendor/CertificatesPage';
 import { InfrastructurePage } from './pages/vendor/InfrastructurePage';
+import { VendorMetricsPage } from './pages/vendor/VendorMetricsPage';
 import { TenantDashboardPage } from './pages/tenant/TenantDashboardPage';
 import { TenantLicensePage } from './pages/tenant/TenantLicensePage';
 import { SsoPage } from './pages/tenant/SsoPage';
@@ -82,6 +83,11 @@ export function AppRouter() {
         <CertificatesPage />
       </RequireScope>
     } />
+    <Route path="/vendor/metrics" element={
+      <RequireScope scope="platform:admin" fallback={<Navigate to="/tenant" replace />}>
+        <VendorMetricsPage />
+      </RequireScope>
+    } />
     <Route path="/vendor/infrastructure" element={
       <RequireScope scope="platform:admin" fallback={<Navigate to="/tenant" replace />}>
         <InfrastructurePage />
```
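`RequireScope` gates a route on a scope from the access token. Its internals are not part of this diff; a minimal sketch of the underlying check, assuming scopes arrive as a space-delimited `scope` claim as is conventional in OAuth 2.0:

```typescript
// Hypothetical scope check behind a RequireScope-style guard.
// Assumes the token's "scope" claim is a space-delimited string of granted scopes.
function hasScope(scopeClaim: string, required: string): boolean {
  return scopeClaim.split(' ').includes(required);
}
```

With this check, a vendor token carrying `platform:admin` renders the child page, while a tenant token falls through to the `fallback` redirect.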

**`types/api.ts`** — metrics DTOs mirroring the server's `/api/v1/admin/metrics/summary` payload:

```diff
@@ -155,3 +155,48 @@ export interface AuditLogFilters {
   page?: number;
   size?: number;
 }
+
+// Tenant metrics (from server /api/v1/admin/metrics/summary)
+export interface AgentMetrics {
+  live: number;
+  stale: number;
+  dead: number;
+  shutdown: number;
+}
+
+export interface IngestionMetrics {
+  bufferDepth: number;
+  dropsPerMinute: number;
+}
+
+export interface ServerMetrics {
+  cpuUsage: number;
+  heapUsedBytes: number;
+  heapMaxBytes: number;
+  uptimeSeconds: number;
+  threadCount: number;
+}
+
+export interface HttpMetrics {
+  requestsPerMinute: number;
+  errorRate: number;
+}
+
+export interface MetricsSummary {
+  collectedAt: string;
+  agents: AgentMetrics;
+  ingestion: IngestionMetrics;
+  server: ServerMetrics;
+  http: HttpMetrics;
+  authFailuresPerMinute: number;
+}
+
+export interface TenantMetricsEntry {
+  tenantId: string;
+  tenantName: string;
+  slug: string;
+  tier: string;
+  status: string;
+  serverState: string;
+  metrics: MetricsSummary | null;
+}
```
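A `GET /vendor/metrics` response is a list of these entries, with `metrics` null for tenants whose server is not running. A sketch with invented values, plus the same fleet-level aggregation `FleetKpis` performs over them:

```typescript
// Illustrative only: field values are made up; the shapes follow the interfaces above
// (trimmed to the fields the aggregation touches).
interface Entry {
  tenantName: string;
  serverState: string;
  metrics: { agents: { live: number; dead: number }; server: { cpuUsage: number } } | null;
}

const entries: Entry[] = [
  { tenantName: 'acme', serverState: 'RUNNING',
    metrics: { agents: { live: 12, dead: 1 }, server: { cpuUsage: 0.3 } } },
  { tenantName: 'globex', serverState: 'STOPPED', metrics: null },
];

// Count RUNNING tenants, sum agents, average CPU over tenants that reported metrics.
const running = entries.filter((e) => e.serverState === 'RUNNING').length;
const withMetrics = entries.filter((e) => e.metrics != null);
const liveAgents = withMetrics.reduce((s, e) => s + e.metrics!.agents.live, 0);
const avgCpu = withMetrics.length > 0
  ? withMetrics.reduce((s, e) => s + e.metrics!.server.cpuUsage, 0) / withMetrics.length
  : 0;
```

Averaging only over tenants with metrics (rather than all tenants) matters: a stopped tenant reports no CPU sample and should not drag the fleet average toward zero.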