239 Commits

Author SHA1 Message Date
hsiegeln
417e6024b0 fix(test): update LicenseControllerTest to expect STARTER tier (default changed from TEAM)
All checks were successful
CI / build (push) Successful in 1m45s
CI / docker (push) Successful in 1m8s
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-29 12:06:18 +02:00
hsiegeln
385d79aa0f fix(test): update CertificateServiceTest and TenantPortalServiceTest for new audit constructor params
Some checks failed
CI / docker (push) Has been cancelled
CI / build (push) Has been cancelled
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-29 12:04:25 +02:00
hsiegeln
5e19e07257 docs: add SOC 2 audit logging implementation plan
Some checks failed
CI / build (push) Failing after 1m8s
CI / docker (push) Has been skipped
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-29 11:44:38 +02:00
hsiegeln
809f1e8a09 fix(audit): add SLF4J logging to 19 operations missing application-level logs
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-29 11:41:25 +02:00
hsiegeln
cb411ff337 feat(audit): add audit logging to vendor server ops and audit_log immutability migration
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-29 11:31:46 +02:00
hsiegeln
da52707aec feat(audit): add SOC 2 audit logging to tenant CA certs, account security, team management, SSO, and server operations
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-29 11:29:55 +02:00
hsiegeln
88733d76c0 feat(audit): add SOC 2 audit logging to vendor admin, auth policy, email connector, and certificate operations
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-29 11:23:13 +02:00
hsiegeln
295a185a03 refactor(security): per-tenant JWT secret instead of shared global secret
All checks were successful
CI / build (push) Successful in 2m17s
CI / docker (push) Successful in 1m49s
Generate a unique JWT secret per tenant at provision time, stored on
TenantEntity (same pattern as dbPassword). On upgrade, the existing
secret is reused so agent tokens survive container recreation.

- V005 migration: add jwt_secret column to tenants table
- TenantEntity: add jwtSecret field
- TenantProvisionRequest: add jwtSecret field
- VendorTenantService: generate secret in provisionAsync(), reuse on upgrade
- DockerTenantProvisioner: read from req.jwtSecret() not props
- ProvisioningProperties: remove jwtSecret (no longer global config)

Installer team: CAMELEER_SERVER_SECURITY_JWTSECRET and
CAMELEER_SAAS_PROVISIONING_JWTSECRET can be removed from compose
templates and .env — no longer consumed by the SaaS app.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-29 09:38:33 +02:00
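The per-tenant secret generation described above is not shown in the log; the sketch below is a minimal stand-in, assuming a high-entropy random value minted once at provision time and persisted on the tenant entity (class and method names here are hypothetical, not the repo's actual code).

```java
import java.security.SecureRandom;
import java.util.Base64;

// Hypothetical sketch: one unique JWT secret per tenant, generated at
// provision time (same lifecycle as dbPassword per the commit message).
class JwtSecretGenerator {
    private static final SecureRandom RANDOM = new SecureRandom();

    // 512 bits of entropy, Base64url-encoded without padding so the value
    // is safe to store in a column and pass through environment variables.
    static String generate() {
        byte[] bytes = new byte[64];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }
}
```

On upgrade, the existing stored value would be reused rather than regenerated, which is what keeps agent tokens valid across container recreation.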
hsiegeln
529028f0c3 fix(security): patch 5 vulnerabilities from full codebase security review
- Replace hardcoded JWT secret in DockerTenantProvisioner with config
  property (CAMELEER_SAAS_PROVISIONING_JWTSECRET) — every provisioned
  tenant server was sharing the same publicly-visible dev secret
- Add @PreAuthorize("SCOPE_tenant:manage") to 11 admin endpoints in
  TenantPortalController (team invite/remove/role, password resets,
  server restart/upgrade, CA cert management, MFA reset) — previously
  any org member including viewers could perform admin operations
- Remove dead PATCH /api/tenant/settings endpoint (duplicate of
  /auth-settings without authorization) and POST /api/tenant/password
  (allowed password change without current password verification) —
  frontend uses the secure alternatives
- Add @PreAuthorize("SCOPE_platform:admin") to TenantController
  getById and getBySlug — were exposing serverEndpoint, adminEmail,
  and provisionError to any authenticated user

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-29 09:13:39 +02:00
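In the real code the endpoint guard is a Spring `@PreAuthorize("SCOPE_tenant:manage")` annotation; the framework-free sketch below only illustrates the underlying check, that a token must carry the `tenant:manage` scope before an admin operation is allowed.

```java
import java.util.Set;

// Illustrative stand-in for the SCOPE_tenant:manage authorization rule
// added to the 11 admin endpoints described above.
class ScopeGuard {
    static boolean allowed(Set<String> tokenScopes) {
        // Viewers and plain members lack this scope, so admin operations
        // (invites, password resets, server restart/upgrade, etc.) are denied.
        return tokenScopes.contains("tenant:manage");
    }
}
```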
hsiegeln
df03c1a4cd fix(tenant): show email/role in team table, set username on invite
Some checks failed
CI / build (push) Successful in 2m35s
CI / docker (push) Successful in 1m23s
SonarQube Analysis / sonarqube (push) Failing after 2m34s
Team table showed dashes for email and role because the raw Logto
response uses primaryEmail (not email) and excludes org roles.
Enrich each member with normalized email and fetched role name.

Invited users couldn't sign in after password reset because
createAndInviteUser omitted the username field — the sign-in page
sent type:username for non-email input but Logto had no username
to match. Now sets username to the email local part, matching how
createUserWithPassword works for bootstrap admins.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-28 22:36:25 +02:00
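The "username from the email local part" derivation above can be sketched as follows (the helper name is an assumption; the actual logic lives in the invite flow alongside createAndInviteUser):

```java
// Sketch: derive a Logto username from the email local part, so that the
// sign-in page's type:username lookup has something to match against.
class UsernameDeriver {
    static String fromEmail(String email) {
        int at = email.indexOf('@');
        // Fall back to the full input if it does not look like an email.
        return at > 0 ? email.substring(0, at) : email;
    }
}
```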
hsiegeln
df25dcf81a fix(tenant): reuse existing Logto users and clean up on delete
All checks were successful
CI / build (push) Successful in 2m14s
CI / docker (push) Successful in 1m10s
Create: if admin email matches an existing Logto user, add them to the
tenant org instead of creating a duplicate account. Only creates a new
user when no match is found and a password is provided.

Delete: before deleting the Logto org, list its members. After org
deletion, delete tenant-only users (those with no remaining org
memberships). Users who belong to other orgs are preserved.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-28 20:25:21 +02:00
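The create-side decision above reduces to a small rule; this sketch names the three outcomes for clarity (the labels and method are hypothetical, not the service's real API):

```java
// Illustrative decision table for tenant creation: reuse a matching Logto
// user, create a new one only when a password was supplied, else skip.
class LogtoUserReuse {
    static String createAction(boolean existingUserFound, boolean passwordProvided) {
        if (existingUserFound) {
            return "add-existing-to-org"; // no duplicate account
        }
        return passwordProvided ? "create-new-user" : "skip";
    }
}
```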
hsiegeln
029f2ef0de fix(onboarding): skip org name prompt for vendor-created admins
All checks were successful
CI / build (push) Successful in 2m11s
CI / docker (push) Successful in 1m40s
Vendor-created tenant admins already have an org membership. When they
land on /onboarding (first login, token lacks org claims), detect the
existing tenant via /api/me and trigger a re-auth to pick up org
membership instead of showing the org name form.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-28 20:05:29 +02:00
hsiegeln
345bc4a92b feat(tenant): welcome email, admin email display, async delete fix
All checks were successful
CI / build (push) Successful in 2m20s
CI / docker (push) Successful in 1m29s
- Send branded welcome email to tenant admin after provisioning completes
  (includes username and dashboard URL)
- Store admin_email on tenant entity (V004 migration)
- Show admin email in vendor tenant list table and detail page
- Fix ClickHouse cleanup: skip materialized views (can't ALTER DELETE on MVs)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-28 19:53:47 +02:00
hsiegeln
bd301ad1fe refactor(tenant): replace tier+username with email-first creation
All checks were successful
CI / build (push) Successful in 2m9s
CI / docker (push) Successful in 1m37s
- Remove tier from create tenant form (always defaults to STARTER,
  controlled via license minting)
- Admin email is now the primary identity field
- Username auto-derived from email local part, optionally overridable
- Set primaryEmail on Logto user at creation (prevents invalid accounts)
- Async tenant delete: PG/ClickHouse cleanup runs after commit instead
  of blocking the HTTP response
- Remove legacy /server/* OIDC redirect URIs from bootstrap

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-28 19:34:00 +02:00
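The async-delete change above (cleanup after commit instead of blocking the HTTP response) presumably uses Spring's transaction hooks; this minimal stand-in only shows the shape of the pattern, running the slow PG/ClickHouse cleanup off the request thread:

```java
import java.util.concurrent.CompletableFuture;

// Minimal sketch of deferring tenant cleanup off the request thread.
// The real code would hook afterCommit; names here are illustrative.
class AsyncDelete {
    static CompletableFuture<String> deleteTenant(String slug, Runnable cleanup) {
        // The transactional state change happens synchronously; the heavy
        // database cleanup runs asynchronously afterwards.
        return CompletableFuture.supplyAsync(() -> {
            cleanup.run();
            return slug + ":cleaned";
        });
    }
}
```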
hsiegeln
15c47fe36c fix(auth): register tenant /login as OIDC post-logout redirect URI
All checks were successful
CI / build (push) Successful in 2m22s
CI / docker (push) Successful in 1m7s
Server sends /t/{slug}/login as post_logout_redirect_uri on logout but
only /t/{slug} and /t/{slug}/login?local were registered, causing
"post_logout_redirect_uri not registered" error from Logto.

Also removes legacy /server/* redirect URIs from bootstrap (greenfield).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-28 19:15:18 +02:00
hsiegeln
61fc7f224f fix(ci): remove log-appender from runtime base image
All checks were successful
CI / build (push) Successful in 2m25s
CI / docker (push) Successful in 1m11s
The log appender is now embedded in cameleer-core, so it no longer
needs to be downloaded separately and baked into the runtime base
image. Removes the Maven download step and the Dockerfile COPY.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-28 18:56:29 +02:00
hsiegeln
11646b93ff fix(ci): update Maven artifact paths from com/cameleer to io/cameleer
Some checks failed
CI / build (push) Successful in 3m19s
CI / docker (push) Failing after 13s
The agent and log-appender SNAPSHOTs were republished under the
io.cameleer groupId after the rebrand. The runtime base image build
was failing because the old com/cameleer paths no longer resolve.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-28 18:45:36 +02:00
hsiegeln
fcb25778e1 fix(sign-in): TOTP enrollment QR branding and verification failure
Some checks failed
CI / build (push) Successful in 3m6s
CI / docker (push) Failing after 10s
Two bugs in the sign-in UI's TOTP MFA enrollment flow:

1. Auth app displayed the PC hostname and "Platform Owner" instead of
   "Cameleer" and the user's email. The sign-in UI was rendering Logto's
   pre-generated QR code which uses the ENDPOINT hostname as issuer.
   Now generates our own otpauth:// URI with proper branding, rendered
   client-side via qrcode.react.

2. TOTP code verification returned 400 "Invalid TOTP code". The
   verifyTotpSetup() call was missing the required verificationId
   parameter — Logto's Experience API needs it to locate the pending
   secret during enrollment.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-28 18:34:52 +02:00
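The branded otpauth:// URI described in fix 1 is generated in the TypeScript sign-in UI; the Java sketch below shows the same construction for illustration. The issuer appears both in the label prefix and in the issuer parameter, which is what makes authenticator apps display "Cameleer" and the user's email instead of the endpoint hostname.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Illustrative construction of a branded otpauth:// URI (Key Uri Format).
class OtpauthUri {
    static String build(String issuer, String account, String secret) {
        // URLEncoder produces '+' for spaces; otpauth labels want %20.
        String iss = URLEncoder.encode(issuer, StandardCharsets.UTF_8).replace("+", "%20");
        String acc = URLEncoder.encode(account, StandardCharsets.UTF_8).replace("+", "%20");
        return "otpauth://totp/" + iss + ":" + acc
            + "?secret=" + secret + "&issuer=" + iss;
    }
}
```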
hsiegeln
3aba32302a refactor: update license-minter dependency to io.cameleer
All checks were successful
CI / build (push) Successful in 2m57s
CI / docker (push) Successful in 1m35s
Other teams completed their com.cameleer → io.cameleer migration.
Update Maven groupId and Java imports to match.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-28 17:13:28 +02:00
hsiegeln
2fa8ba07de fix: swap Chainguard JRE to BellSoft Liberica JRE 21
All checks were successful
CI / build (push) Successful in 2m16s
CI / docker (push) Successful in 1m40s
Chainguard free tier only offers :latest (currently JDK 26, unpinned);
the :openjdk-21 tag requires a paid subscription, breaking CI.

Switch both Dockerfiles to bellsoft/liberica-runtime-container:jre-21-slim-glibc:
- Pinned to JDK 21 LTS
- Smallest image (199 MB vs 441/491 MB)
- glibc-based Alpaquita Linux, sh-only (no bash, no pkg manager)
- Free, multi-arch (amd64 + arm64)
- Has sh — required by cameleer-server's DeploymentExecutor (withCmd "sh -c")

Use nobody:nobody (65534) instead of Chainguard's nonroot (65532).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-28 16:52:55 +02:00
hsiegeln
966691f2c8 refactor: rebrand from net.siegeln to io.cameleer
Some checks failed
CI / build (push) Successful in 3m7s
CI / docker (push) Failing after 7s
Institutionalize the product identity ahead of public release:

- Java package: net.siegeln.cameleer.saas → io.cameleer.saas (109 files)
- Maven groupId: net.siegeln.cameleer → io.cameleer
- Public image defaults: gitea.siegeln.net/cameleer/ → registry.cameleer.io/cameleer/
- Updated docs (architecture, user-manual, HOWTO, runtime-loader README)
- Updated CLAUDE.md path references

Internal build infra (CI workflows, .gitmodules, npm registry, Maven repo)
intentionally kept at gitea.siegeln.net — code stays on internal Gitea.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-28 16:26:34 +02:00
hsiegeln
6ac06d6859 docs: document the runtime-loader image (moved here from cameleer-server)
Some checks failed
CI / build (push) Successful in 3m10s
CI / docker (push) Failing after 8s
Note that the loader source now lives at docker/runtime-loader/, that
the contract is owned by cameleer-server's DockerRuntimeOrchestrator
(don't change env vars / mount path / exit codes without a coordinated
commit there), and that cameleer-server's LoaderHardeningIT is the
cross-repo regression guard. Also document the chown-/app/jars line
(strip it and tenant deploys break with "wget: Permission denied").
2026-04-28 13:05:49 +02:00
hsiegeln
ac8d628271 feat(ci): build and push cameleer-runtime-loader image
Some checks failed
CI / build (push) Successful in 2m1s
CI / docker (push) Failing after 7s
Move the init-container loader image build from cameleer-server CI into
this repo so all sidecar/infra image builds (runtime-base, postgres,
clickhouse, traefik, logto, and now runtime-loader) live in one place.

The loader is consumed by cameleer-server's DockerRuntimeOrchestrator as
a per-replica init container that fetches the tenant JAR from a signed
URL into a named volume before the main container starts. Source +
Dockerfile copied verbatim from cameleer-server@c2efb7fb (the image with
the volume-permission fix). The published tag path is unchanged
(gitea.siegeln.net/cameleer/cameleer-runtime-loader:latest), so running
tenant servers continue pulling the same image.

Build step matches the runtime-base/postgres/clickhouse/traefik pattern
(unconditional rebuild on every push, sha + branch tags, --provenance=false
for Gitea). cameleer-server will follow up with a commit removing its
loader-build step and switching its LoaderHardeningIT to pull the
published image instead of building from a local Dockerfile.
2026-04-28 13:00:23 +02:00
hsiegeln
bc32d7e994 fix: use license/usage endpoint for agent/env/app counts
Some checks failed
CI / docker (push) Has been cancelled
CI / build (push) Has been cancelled
Server moved GET /agents to /environments/{envSlug}/agents and removed
GET /admin/apps. Replace three broken individual calls with a single
GET /admin/license/usage call that returns all counts.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-28 12:58:35 +02:00
hsiegeln
c43d7f639f harden: swap cameleer-saas runtime stage to Chainguard JRE
Some checks failed
CI / build (push) Successful in 3m22s
CI / docker (push) Failing after 9s
Replace eclipse-temurin:21-jre-alpine with cgr.dev/chainguard/jre:openjdk-21
for the SaaS management plane image. Use Chainguard's built-in nonroot user
instead of custom adduser. Build stages unchanged (ephemeral).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-28 09:42:38 +02:00
hsiegeln
5f210b76a9 harden: swap runtime base to Chainguard JRE, remove dead ENTRYPOINT
Replace eclipse-temurin:21-jre-alpine (musl) with cgr.dev/chainguard/jre:openjdk-21
(Wolfi/glibc, daily CVE refresh, signed images + SBOM). Remove the dead ENTRYPOINT
block — DeploymentExecutor overrides it at container creation anyway.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-28 09:32:49 +02:00
hsiegeln
06134d6e67 fix: TOTP label includes org name, passkeys show device as default name
Some checks failed
CI / build (push) Successful in 2m10s
CI / docker (push) Successful in 1m27s
SonarQube Analysis / sonarqube (push) Failing after 2m57s
- TOTP otpauth URI issuer changed from "Cameleer" to "Cameleer - <org>"
  so authenticator apps display the organization name
- Passkeys without a custom name now show parsed device info (e.g.
  "Chrome on Windows") instead of "Unnamed passkey"

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 22:53:05 +02:00
hsiegeln
7fe9c581b0 fix: remove MFA card from tenant settings, constrain card widths
All checks were successful
CI / build (push) Successful in 2m10s
CI / docker (push) Successful in 1m25s
MFA enrollment now happens during sign-in. Tenant settings page reduced
to: Tenant Details + Auth Policy side-by-side (max 520px each), Passkeys
full-width below.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 22:45:31 +02:00
hsiegeln
7fc8a4d407 fix: team invite role resolution, user cleanup, and settings page redesign
All checks were successful
CI / build (push) Successful in 2m9s
CI / docker (push) Successful in 1m33s
- Resolve org role names to Logto role IDs in invite and role change flows
  (fixes entity.relation_foreign_key_not_found on invite)
- Handle existing Logto users on re-invite instead of failing with
  email_already_in_use
- Delete users from Logto when removed from last org membership
- Consolidate tenant settings page into 3 cards: Tenant Details, MFA,
  Authentication Policy — remove duplicate MFA Enforcement and Change
  Password (now in Account Settings)
- Make passkey list scrollable

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 22:36:21 +02:00
hsiegeln
e21a9d6046 fix: override WebAuthn type in authentication verify too
All checks were successful
CI / build (push) Successful in 1m57s
CI / docker (push) Successful in 1m30s
Same fix as registration verify — @simplewebauthn/browser returns
type: "public-key" but Logto expects type: "WebAuthn".

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 20:57:37 +02:00
hsiegeln
0481cefaf4 fix: sign-in MFA flow overhaul — passkey verify, backup codes, defaults
All checks were successful
CI / build (push) Successful in 2m19s
CI / docker (push) Successful in 1m4s
Four fixes for the MFA sign-in flow:

1. Fix passkey verify crash: extract authenticationOptions from Logto
   response (was passing full response as optionsJSON). Pass
   verificationId to the verify endpoint.

2. Default to passkey verification when no MFA method preference is
   stored (was showing method picker which offered TOTP to passkey-only
   users).

3. Show backup codes after MFA enrollment: new mfaEnrollBackupCodes
   mode with copy/download buttons and confirmation checkbox. Users
   must save codes before completing sign-in.

4. Remove duplicate error alerts in enrollment screens (top-level
   alert handles all modes).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 20:49:32 +02:00
hsiegeln
040ae60be5 fix: correct Experience API endpoints for TOTP and backup codes
All checks were successful
CI / build (push) Successful in 2m4s
CI / docker (push) Successful in 1m1s
- TOTP secret: /verification/totp/secret (not /verification/totp)
- Backup codes: generate via /verification/backup-code/generate first,
  then bind with the returned verificationId. Cannot bind BackupCode
  without generating codes first.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 19:30:54 +02:00
hsiegeln
d8f7452ab7 feat: full MFA enrollment during sign-in — passkey + TOTP + backup codes
All checks were successful
CI / build (push) Successful in 2m2s
CI / docker (push) Successful in 1m10s
- Bind BackupCode after primary MFA factor (WebAuthn or TOTP) to satisfy
  Logto's requirement that backup codes accompany any MFA method.
- Add TOTP enrollment option alongside passkey on the enrollment screen:
  "Use passkey" / "Use authenticator app" / "Set up later".
- TOTP enrollment shows QR code + secret + 6-digit verification inline
  in the sign-in UI, using Experience API endpoints.
- Added createTotpSecret() and verifyTotpSetup() to experience-api.ts.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 19:22:53 +02:00
hsiegeln
c4fe16048c fix: include WebAuthn in bootstrap MFA factors
All checks were successful
CI / build (push) Successful in 2m14s
CI / docker (push) Successful in 25s
Bootstrap only set [Totp, BackupCode] — WebAuthn was missing. Now
matches LogtoStartupConfig: all three factors available from first boot.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 19:04:47 +02:00
hsiegeln
cba420fbeb fix: always offer MFA+passkey enrollment, separate availability from enforcement
All checks were successful
CI / build (push) Successful in 2m19s
CI / docker (push) Successful in 1m43s
Two fundamental fixes:

- user.missing_mfa now triggers MfaEnrollmentError (enroll UI) instead
  of MfaRequiredError (verify UI). Users without MFA were shown a TOTP
  code prompt they couldn't fill.
- Logto MFA factors always set to [Totp, WebAuthn, BackupCode] with
  UserControlled policy on startup. Availability is always-on for all
  users. The vendor auth policy controls enforcement (via
  MfaEnforcementFilter), not what Logto offers during sign-in.
- Removed syncMfaConfigToLogto from VendorAuthPolicyController — vendor
  policy changes no longer modify Logto's sign-in experience.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 18:59:21 +02:00
hsiegeln
67ec409383 fix: null display name, settings scrollbar, redundant passkey offer
All checks were successful
CI / build (push) Successful in 2m20s
CI / docker (push) Successful in 1m36s
- Profile API returns empty string instead of "null" when Logto user
  has no display name set (String.valueOf(null) → "null" bug).
- SettingsPage: add overflowY auto + flex 1 so content scrolls within
  the AppShell layout (which uses overflow: hidden).
- Remove redundant passkey offer from onboarding page — passkey
  enrollment now happens during sign-in via the Experience API.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 18:53:13 +02:00
hsiegeln
3384510f3c fix: passkeys work independently of MFA mode
All checks were successful
CI / build (push) Successful in 2m15s
CI / docker (push) Successful in 1m1s
When MFA mode is off but passkeys are enabled, WebAuthn + BackupCode
factors are still synced to Logto. Previously, MFA off cleared all
factors including WebAuthn, so passkey enrollment was never offered.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 18:45:30 +02:00
hsiegeln
18e6f32f90 refactor: move passkey enrollment to sign-in UI via Experience API
All checks were successful
CI / build (push) Successful in 2m12s
CI / docker (push) Successful in 1m49s
Remove the SaaS backend proxy approach for passkey registration (Account
API binding, Management API proxy, password modal in PasskeySection).
Instead, offer passkey enrollment natively during sign-in via Logto's
Experience API — the correct architectural layer.

Sign-in flow: when Logto returns MFA enrollment available (422), show a
"Secure your account" screen with Register passkey / Set up later. Uses
Experience API WebAuthn registration endpoints. Works for all users
(SaaS and future server users) since the sign-in UI is shared.

PasskeySection in account settings now only manages existing passkeys
(list/rename/delete) and directs users to register during sign-in.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 18:33:46 +02:00
hsiegeln
4df6fc9e03 fix: proxy passkey bind through Management API
All checks were successful
CI / build (push) Successful in 2m17s
CI / docker (push) Successful in 1m29s
Logto's /api/my-account/ endpoints reject the opaque access token with
401 even though /api/verifications/ accepts it. The bind step now goes
through the SaaS backend which calls the Management API instead.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 18:23:41 +02:00
hsiegeln
2aa5100530 fix: add password re-verification before passkey registration
All checks were successful
CI / build (push) Successful in 2m28s
CI / docker (push) Successful in 1m32s
Logto Account API requires identity verification (logto-verification-id
header) for sensitive MFA operations. Adds a password prompt modal before
the WebAuthn ceremony — verifies password first, then proceeds with
passkey registration using the verification record ID.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 18:10:47 +02:00
hsiegeln
c360d9ad5f fix: override WebAuthn credential type for Logto Account API
All checks were successful
CI / build (push) Successful in 3m29s
CI / docker (push) Successful in 2m14s
@simplewebauthn/browser returns type: "public-key" (W3C standard) but
Logto's verify endpoint expects type: "WebAuthn".

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 17:59:53 +02:00
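The actual override lives in the TypeScript sign-in UI; this Java sketch is an illustrative stand-in for the transformation, rewriting the W3C `type: "public-key"` field to the `"WebAuthn"` value Logto's verify endpoint expects:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for the credential-type override described above.
class WebAuthnTypeFix {
    static Map<String, Object> overrideType(Map<String, Object> credential) {
        // Copy rather than mutate, so the original W3C-shaped response
        // from @simplewebauthn/browser is left untouched.
        Map<String, Object> patched = new HashMap<>(credential);
        patched.put("type", "WebAuthn");
        return patched;
    }
}
```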
hsiegeln
e7952dd9de fix: keep vendor sidebar active on account settings page
All checks were successful
CI / build (push) Successful in 1m54s
CI / docker (push) Successful in 1m26s
Vendor sidebar collapsed and tenant sidebar appeared when navigating to
/settings/account because onVendorRoute was false for non-/vendor paths.
Now vendor users stay on vendor sidebar for all routes except /tenant/*.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 17:18:09 +02:00
hsiegeln
687598952f fix: correct Account Center enablement — mfa field is a string enum
All checks were successful
CI / build (push) Successful in 2m6s
CI / docker (push) Successful in 1m6s
Logto's PATCH /api/account-center expects mfa as 'Off'|'ReadOnly'|'Edit',
not a nested object. Fixes 400 Bad Request on startup.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 17:14:31 +02:00
hsiegeln
c22580e124 feat: always enable WebAuthn in MFA factors and add passkey registration
All checks were successful
CI / build (push) Successful in 2m3s
CI / docker (push) Successful in 1m26s
- Sync vendor auth policy to Logto sign-in experience on save and on
  startup. Always include WebAuthn + TOTP + BackupCode in MFA factors
  when MFA is enabled — no reason to gate passkeys behind a toggle.
- Enable Logto Account Center on startup for user-facing MFA management.
- Add passkey registration to account settings via Logto Account API.
  Frontend calls Logto directly (same domain) for the WebAuthn ceremony:
  generate options, browser credential creation, verify, and bind.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 17:01:58 +02:00
hsiegeln
a5c20830a7 fix: prevent MFA lockout and move enrollment to modal dialog
All checks were successful
CI / build (push) Successful in 1m58s
CI / docker (push) Successful in 1m47s
Three fixes for MFA enrollment and sign-in:

- Defer TOTP registration with Logto until after 6-digit code verification.
  Previously setupTotp() immediately registered the secret, so abandoning
  enrollment mid-way left MFA active without a working authenticator.
- Move entire MFA enrollment flow (QR code, verify, backup codes) into a
  Modal dialog instead of replacing the Card content inline.
- Fix sign-in MFA flow: submitMfa() no longer calls identifyUser() after
  TOTP verify — user is already identified, and passing the MFA
  verificationId to identification returned 422 ("method not activated").

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 16:25:15 +02:00
hsiegeln
9231a1fc60 fix: move forgot password link below sign-in button
All checks were successful
CI / build (push) Successful in 2m0s
CI / docker (push) Successful in 1m7s
Repositions the "Forgot password?" link from above the sign-in button
to below it, matching the desired layout. Updates link style to be
centered with link color instead of right-aligned muted text.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 16:06:36 +02:00
f325416833 Merge pull request 'feature/vendor-admin-account-settings' (#60) from feature/vendor-admin-account-settings into main
All checks were successful
CI / build (push) Successful in 3m22s
CI / docker (push) Successful in 27s
Reviewed-on: #60
2026-04-27 15:58:05 +02:00
hsiegeln
ab800bbef9 fix: handle Logto data URI in MFA QR code display
All checks were successful
CI / build (push) Successful in 2m7s
CI / docker (push) Successful in 1m31s
CI / build (pull_request) Successful in 3m23s
CI / docker (pull_request) Has been skipped
Logto's secretQrCode is a data:image/png;base64 URI, not an otpauth://
string. QRCodeSVG crashes trying to encode it ("Data too long"). Now
renders data URIs as <img> and only uses QRCodeSVG for otpauth:// URIs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 15:43:39 +02:00
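The rendering decision above is a simple prefix check; this sketch (illustrative, the real code is in the React sign-in UI) shows the two branches, since feeding a multi-kilobyte data URI to a QR encoder is what triggered the "Data too long" crash:

```java
// Illustrative dispatch: Logto's secretQrCode may be a pre-rendered
// data:image/... URI, while self-built URIs are otpauth:// strings.
class QrSourceDetector {
    static String renderMode(String value) {
        if (value.startsWith("data:image/")) {
            return "img";    // render directly as <img src=...>
        }
        if (value.startsWith("otpauth://")) {
            return "qrcode"; // encode client-side with QRCodeSVG
        }
        return "unsupported";
    }
}
```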
hsiegeln
15d6c7abc1 fix: remove explicit pagination from Logto role API calls
All checks were successful
CI / build (push) Successful in 2m3s
CI / docker (push) Successful in 59s
Logto's /api/roles/{id}/users endpoint rejects page=1 with
guard.invalid_pagination. Remove explicit pagination params and
let Logto use its defaults.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 15:40:05 +02:00
0b4d0e3b2f Merge pull request 'feat: vendor admin management and shared account settings' (#59) from feature/vendor-admin-account-settings into main
All checks were successful
CI / build (push) Successful in 2m15s
CI / docker (push) Successful in 20s
Reviewed-on: #59
2026-04-27 15:20:23 +02:00
hsiegeln
f823a409d0 fix: add AccountService mock to TenantPortalServiceTest constructor
All checks were successful
CI / build (push) Successful in 3m9s
CI / build (pull_request) Successful in 3m8s
CI / docker (pull_request) Has been skipped
CI / docker (push) Successful in 1m43s
The TenantPortalService constructor gained an AccountService parameter
in the consolidation refactor — the test was missing it.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 15:15:19 +02:00
hsiegeln
e9e18f6c38 docs: update CLAUDE.md for account package, vendor admins, and shared components
Some checks failed
CI / build (push) Failing after 2m1s
CI / docker (push) Has been skipped
CI / build (pull_request) Failing after 1m46s
CI / docker (pull_request) Has been skipped
- Add account/ package to Key Packages table
- Add VendorAdminService/Controller to vendor/ package
- Note TenantPortalService delegation to AccountService
- Update ui/CLAUDE.md: AccountSettingsPage, VendorAdminsPage,
  Administrators sidebar, user menu dropdown, shared components

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 15:09:41 +02:00
hsiegeln
372d3c77a0 fix: code review findings — dead catch blocks, notification email, role verification
- Remove dead IllegalArgumentException catch blocks in TenantPortalController
  (delegated methods now throw ResponseStatusException, handled by Spring)
- Add password reset notification email in VendorAdminService.resetAdminPassword
- Add verifyIsVendorAdmin guard to resetAdminPassword and resetAdminMfa
  to prevent platform admins from resetting arbitrary non-admin users

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 15:06:16 +02:00
hsiegeln
e5e0cad7c3 refactor: consolidate tenant SettingsPage to use shared account components
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-27 14:59:09 +02:00
hsiegeln
8668642b8d feat: add account settings route and user menu dropdown
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-27 14:58:27 +02:00
hsiegeln
d44ee4b977 feat: add VendorAdminsPage with list, create/invite, remove, reset actions
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-27 14:56:03 +02:00
hsiegeln
5d1d263c74 feat: add AccountSettingsPage composing shared account components
2026-04-27 14:54:26 +02:00
hsiegeln
e563631efb feat: extract shared account components (Profile, Password, MFA, Passkey)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-27 14:53:05 +02:00
hsiegeln
bf42f13afc feat: add TypeScript types and React Query hooks for account and vendor admin APIs
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-27 14:49:44 +02:00
hsiegeln
0da1ffea7f fix: guard against null orgId in createAndInviteUser and createUserWithPassword
Vendor admins use global roles, not org roles — passing null orgId
would previously cause addUserToOrganization to call
/api/organizations/null/users and fail.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 14:48:00 +02:00
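The null-orgId failure mode above can be sketched as a path-building guard (method name hypothetical): without the check, string concatenation happily produces the nonsense URL `/api/organizations/null/users`.

```java
// Sketch of the guard described above: vendor admins carry global roles,
// so a null orgId must skip the organization-membership call entirely.
class OrgAssignmentGuard {
    static String membershipPath(String orgId) {
        if (orgId == null || orgId.isBlank()) {
            return null; // caller skips addUserToOrganization
        }
        return "/api/organizations/" + orgId + "/users";
    }
}
```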
hsiegeln
022b6d9722 feat: add vendor admin management (list, create/invite, remove, reset password/MFA)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-27 14:46:42 +02:00
hsiegeln
665ffefd3e refactor: use AccountService for display name in OnboardingService
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-27 14:46:20 +02:00
hsiegeln
cc3d2dc111 refactor: delegate TenantPortalService MFA/password/passkey methods to AccountService
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-27 14:44:20 +02:00
hsiegeln
ab240e42b0 feat: add /api/account/** security config and MFA enforcement exemptions
Permit /settings/** SPA route, gate /api/account/** as authenticated,
and exempt account MFA/profile/password paths from MFA enforcement filter.
2026-04-27 14:41:21 +02:00
hsiegeln
b63e5e9c81 feat: add AccountController with /api/account/* endpoints 2026-04-27 14:41:05 +02:00
hsiegeln
90d84ffd00 feat: add AccountService extracting user identity operations from TenantPortalService
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-27 14:39:40 +02:00
hsiegeln
19428b4e27 feat: add password verify and role management methods to LogtoManagementClient
Adds verifyUserPassword (for current-password check before password change) and
four global role methods (listRoleUsers, getRoleByName, assignGlobalRole,
revokeGlobalRole) needed by the upcoming AccountService and VendorAdminService.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-27 14:36:59 +02:00
hsiegeln
316e5ef6c1 docs: implementation plan for vendor admin management and account settings
16 tasks covering: LogtoManagementClient additions, AccountService
extraction, AccountController, VendorAdminService/Controller,
SecurityConfig updates, frontend component extraction, shared
AccountSettingsPage, VendorAdminsPage, and Layout user menu.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 14:32:46 +02:00
hsiegeln
86d9ba4985 docs: vendor admin management and account settings design spec
Two features: multi-vendor admin management (invite/create, remove,
reset password/MFA) and shared account settings page (profile, password
change with current-password verification, MFA self-service). Includes
consolidation plan extracting user-level identity operations from
TenantPortalService into new AccountService.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 14:20:49 +02:00
hsiegeln
292adeea4c docs: update documentation for passkey MFA feature
All checks were successful
CI / build (push) Successful in 2m23s
CI / docker (push) Successful in 2m19s
- Add V002/V003 migrations and VendorAuthPolicy classes to CLAUDE.md
- Document MFA & passkey enforcement model in config CLAUDE.md
- Mark passkey MFA design spec as Implemented

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 11:51:12 +02:00
hsiegeln
43a1058f33 fix: code review findings — auth-settings HTTP method, authorization, redirect
- Change auth-settings endpoint from PUT to PATCH (matches partial update semantics and frontend hook)
- Add @PreAuthorize("SCOPE_tenant:manage") to updateAuthSettings endpoint
- Consolidate MFA/passkey 403 redirect handling in API client

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 09:01:23 +02:00
hsiegeln
60a800f757 feat: add passkey offer step to onboarding wizard
After tenant creation, checks vendor auth policy and conditionally
shows a passkey enrollment offer screen before redirecting. User
can skip and set up later.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-27 08:55:24 +02:00
hsiegeln
76a62135ab feat: add WebAuthn and method picker modes to sign-in UI
Adds mfaWebauthn and mfaMethodPicker modes with smart routing based on
stored preference (localStorage). Auto-triggers passkey prompt on mode
entry. Adds "Use passkey instead" link in TOTP mode. Saves method
preference on successful verification.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-27 08:55:16 +02:00
hsiegeln
17ba02c30d feat: add WebAuthn Experience API functions to sign-in UI
Adds startWebAuthnAuth and verifyWebAuthnAuth functions that call
the Logto Experience API WebAuthn endpoints for passkey MFA verification.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-27 08:55:10 +02:00
hsiegeln
9b898924ab feat: add passkey management and auth policy sections to tenant settings
Adds PasskeySection (list/rename/delete passkeys), AuthPolicySection
(MFA mode + passkey enable/mode controls), and PasskeyNudgeBanner
(dismissable nudge for users without a passkey enrolled).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-27 08:55:04 +02:00
hsiegeln
8de16019b7 feat: add vendor authentication policy management page
Adds /vendor/auth-policy route with MFA mode (off/optional/required) and passkey (enabled/disabled, optional/preferred/required mode) controls, including a confirmation guard before enforcing required MFA.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-27 08:51:45 +02:00
hsiegeln
ad2b16f26d feat: add passkey and auth policy React Query hooks
Adds hooks for listing/renaming/deleting passkeys, MFA method preference,
tenant auth settings, and vendor auth policy (using the new putJson method).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-27 08:48:44 +02:00
hsiegeln
2007a4b2da feat: add passkey types and APP_PASSKEY_REQUIRED handling
Extends MfaStatus with passkeyEnrolled/passkeyCount fields, adds
PasskeyCredential and AuthPolicy types, expands TenantSettings with
passkey fields, handles APP_PASSKEY_REQUIRED 403 redirect, and adds
putJson method to the api client for JSON PUT requests.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-27 08:48:39 +02:00
hsiegeln
9057479da7 feat: expose vendor auth policy in public config endpoint
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-27 08:48:13 +02:00
hsiegeln
89c83ec7b8 feat: expand MfaEnforcementFilter for vendor policy and passkey checks
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-27 08:48:09 +02:00
hsiegeln
b3104dc410 feat: add passkey and auth settings endpoints to TenantPortalController 2026-04-27 08:45:52 +02:00
hsiegeln
5bf94c6d4e feat: add passkey management and auth settings to TenantPortalService 2026-04-27 08:45:49 +02:00
hsiegeln
40daca36a0 feat: add WebAuthn credential and custom data methods to LogtoManagementClient 2026-04-27 08:45:45 +02:00
hsiegeln
8c9edfdb55 feat: add passkey_enrolled and mfa_method_preference to Custom JWT claims
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-27 08:44:42 +02:00
hsiegeln
25f4afcddc feat: add vendor auth policy REST endpoints
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-27 08:42:59 +02:00
hsiegeln
02be1d9264 feat: add VendorAuthPolicy entity and repository
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-27 08:42:55 +02:00
hsiegeln
cc7c87a520 feat: add vendor_auth_policy table for passkey MFA support
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-27 08:42:51 +02:00
hsiegeln
ca19faf4f0 docs: add passkey MFA implementation plan
18-task plan covering database migration, backend policy/endpoints,
sign-in UI WebAuthn modes, and platform UI management pages.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 08:39:44 +02:00
hsiegeln
b86cc812b7 docs: add passkey MFA design spec
Logto-native WebAuthn approach with independent vendor/tenant policy
domains, three registration entry points, and smart MFA method defaults.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 08:26:09 +02:00
hsiegeln
f0dda0d2ee fix(ui): clean up tenant pages and add license inspection
All checks were successful
CI / build (push) Successful in 2m6s
CI / docker (push) Successful in 1m28s
- Remove tier badge from tenant license page header
- Remove tier badge and Tier KPI card from tenant dashboard
- Add "Inspect License" toggle on vendor tenant detail to view all limits

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-27 07:51:29 +02:00
hsiegeln
3cd6bd5585 chore: update GitNexus index stats in documentation
Some checks failed
CI / build (push) Successful in 2m16s
CI / docker (push) Successful in 1m38s
SonarQube Analysis / sonarqube (push) Failing after 2m36s
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 22:14:42 +02:00
hsiegeln
25d66af45e fix(ui): hide redundant SSO button in empty state, fix dashboard navigation
Some checks failed
CI / docker (push) Has been cancelled
CI / build (push) Has been cancelled
- Hide top-right "Add SSO Connection" when no connectors exist (empty
  state already has its own button)
- Fix broken relative navigations on tenant dashboard: ../license and
  ../oidc resolved to wrong paths; now use absolute /tenant/license and
  /tenant/sso

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 22:13:41 +02:00
hsiegeln
d783040030 feat(ui): add license usage visualization with progress bars
Split license limits into metered "Resource Usage" (with color-coded
progress bars) and static "Plan Limits" cards. Updated UsageIndicator
with 8px bars, green/amber/red thresholds, and tabular-nums formatting.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 22:12:33 +02:00
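A color-threshold helper in the spirit of the UsageIndicator described above might look like this. The 75%/90% cutoffs are assumptions for illustration — the commit does not state the actual thresholds.

```java
// Hypothetical usage-to-color mapping for a metered progress bar.
// The 75%/90% cutoffs are assumed, not taken from the real component.
static String usageColor(long used, long limit) {
    if (limit <= 0) return "green";          // treat unknown/unlimited caps as healthy
    double ratio = (double) used / limit;
    if (ratio >= 0.90) return "red";
    if (ratio >= 0.75) return "amber";
    return "green";
}
```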
hsiegeln
6afc337b16 feat: add usage data to license and vendor detail endpoints
Add getAppCount() to ServerApiClient, include usage counts (agents,
environments, apps, users) in tenant license and vendor detail responses
so the frontend can render progress bars against license limits.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 22:08:47 +02:00
hsiegeln
e881e302b6 fix(ui): check ApiError.status instead of message string for 404 detection
All checks were successful
CI / build (push) Successful in 2m3s
CI / docker (push) Successful in 1m27s
The ApiError class (088bc34) extracts messages from response bodies, so
a 404 with no body produces "Request failed" — not "404". The email
connector hook's string check failed, treating "not configured" as an
error and showing "Failed to load config" on fresh installs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 21:28:53 +02:00
hsiegeln
d7ef2c488b fix: use valid STARTER tier in onboarding tenant creation
All checks were successful
CI / build (push) Successful in 2m17s
CI / docker (push) Successful in 1m16s
OnboardingService passed "LOW" as the tier, but the Tier enum only has
STARTER/TEAM/BUSINESS/ENTERPRISE. Tier.valueOf("LOW") threw
IllegalArgumentException, which the controller caught as a blanket 409
Conflict — masking the real cause. Also catch IllegalStateException
(user already has a tenant) to return 409 instead of 500.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 21:17:23 +02:00
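The failure mode this commit fixes can be sketched generically: `Tier.valueOf("LOW")` throws `IllegalArgumentException` because the enum only defines the renamed constants, and a blanket catch-as-409 hid that cause. A hypothetical parse helper that keeps the real cause visible:

```java
// Sketch only — parse a tier name and rethrow with the offending value
// attached, so a bad tier is not masked by a generic conflict handler.
enum Tier { STARTER, TEAM, BUSINESS, ENTERPRISE }

static Tier parseTier(String raw) {
    try {
        return Tier.valueOf(raw.trim().toUpperCase());
    } catch (IllegalArgumentException e) {
        throw new IllegalArgumentException("Unknown license tier: " + raw, e);
    }
}
```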
hsiegeln
088bc34e67 fix(ui): extract meaningful error messages from API responses
All checks were successful
CI / build (push) Successful in 2m9s
CI / docker (push) Successful in 1m28s
Introduces ApiError class in client.ts that parses Spring Boot error
bodies to extract human-readable messages (message, error, detail fields).
Adds errorMessage() helper used by all toast descriptions instead of
raw String(err) which dumped JSON blobs to the user.

Affected: all 10 page components that display error toasts.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 21:10:28 +02:00
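The extraction strategy described here — probe the common Spring Boot error-body fields in order, fall back to a generic message — can be sketched in Java terms (the actual `errorMessage()` helper is TypeScript in `client.ts`; field names `message`/`error`/`detail` are taken from the commit, everything else is illustrative):

```java
import java.util.List;
import java.util.Map;

// Hypothetical analogue of errorMessage(): return the first non-blank
// human-readable field from a parsed error body, never the raw JSON.
static String errorMessage(Map<String, Object> body, int status) {
    if (body != null) {
        for (String key : List.of("message", "error", "detail")) {
            Object v = body.get(key);
            if (v instanceof String s && !s.isBlank()) return s;
        }
    }
    return "Request failed (HTTP " + status + ")";
}
```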
hsiegeln
73e41e5607 fix(ci): force SNAPSHOT updates in build job Maven command
All checks were successful
CI / build (push) Successful in 2m15s
CI / docker (push) Successful in 1m31s
The actions/cache restored a stale ~/.m2/repository with the old
cameleer-license-minter SNAPSHOT (pre-license-api extraction).
Adding -U forces re-resolution of SNAPSHOT dependencies.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 20:58:06 +02:00
hsiegeln
f5b68c212b fix: force SNAPSHOT updates in Docker build to resolve stale cache
Some checks failed
CI / build (push) Failing after 1m2s
CI / docker (push) Has been skipped
The BuildKit cache mount for ~/.m2/repository persists the old
cameleer-license-minter SNAPSHOT (which depended on server-core).
Adding -U forces Maven to re-resolve SNAPSHOTs from the Gitea
registry, picking up the updated minter that depends on license-api.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 20:55:41 +02:00
hsiegeln
7c82ba93b0 refactor: update imports for cameleer-license-api package extraction
Some checks failed
CI / build (push) Successful in 3m6s
CI / docker (push) Failing after 30s
Server team extracted license types into cameleer-license-api (#156).
Package moved from com.cameleer.server.core.license to com.cameleer.license.
Dependency tree is now: cameleer-saas → minter → license-api (no server-core).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 20:50:18 +02:00
hsiegeln
1066101e8a fix: add Gitea Maven repository for cameleer-license-minter resolution
All checks were successful
CI / build (push) Successful in 2m31s
CI / docker (push) Successful in 1m25s
CI needs to resolve com.cameleer:cameleer-license-minter from the
Gitea package registry. Without this repository declaration, the
dependency only resolved from the local Maven cache.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 19:31:27 +02:00
hsiegeln
ffb7ef0839 feat(ui): add license minting form, verify tool, and update all pages
Some checks failed
CI / build (push) Successful in 2m42s
CI / docker (push) Failing after 51s
Vendor UI:
- TenantDetailPage: full minting form with tier presets, 13 configurable
  limits, expiry/grace period, label. Mint & Push or Mint & Copy actions.
  License bundle display with all three env vars for standalone deployment.
- LicenseVerifyPage: paste token to decode + validate signature, shows
  envelope details and state badge. Public key viewer with copy button.
- Layout: added "License Tools" nav item under Vendor section.
- vendor-hooks: useMintLicense, useLicensePresets, useVerifyLicense, usePublicKey

Tenant UI:
- TenantLicensePage: replaced features card with full 13-key limits display,
  added grace period and label fields
- TenantDashboardPage: fixed limit keys (agents→max_agents, environments→max_environments)

Common:
- Updated types (dropped features, added label/gracePeriodDays/bundle types)
- Updated tier colors for STARTER/TEAM/BUSINESS/ENTERPRISE
- Updated CreateTenantPage tier dropdown

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 17:41:40 +02:00
hsiegeln
4dea1c6764 feat: push Ed25519 public key to tenant server containers
DockerTenantProvisioner now injects CAMELEER_SERVER_LICENSE_PUBLICKEY
env var on provisioned server containers, enabling cryptographic
license validation. SigningKeyService passed through auto-config.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 17:36:06 +02:00
hsiegeln
6c3f21d4db test: update all tests for Ed25519 license minting and tier rename
- LicenseServiceTest: mock SigningKeyService, assert signed token format
- VendorTenantServiceTest: add SigningKeyService mock, update mintLicense stubs
- All tests: LOW→STARTER, MID→TEAM, HIGH→BUSINESS, BUSINESS→ENTERPRISE
- Remove all features-related test assertions
- 80/80 tests passing

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 17:33:10 +02:00
hsiegeln
7a8960ca46 feat: add vendor license minting, presets, and verify endpoints
- POST /vendor/tenants/{id}/license now accepts MintLicenseRequest body
  with custom limits, expiresAt, gracePeriodDays, label, pushToServer
- Returns LicenseBundleResponse with token + public key + tenant slug
- GET /vendor/license-presets returns tier preset limits
- POST /vendor/license/verify decodes and validates signed tokens
- GET /vendor/signing-key/public returns the Ed25519 public key
- VendorTenantService.mintLicense() supports configurable minting
- Updated portal DTOs to drop features, add label + gracePeriodDays

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 17:24:23 +02:00
hsiegeln
fdc7187424 feat: rewrite LicenseService to mint Ed25519-signed tokens
Replaces UUID token generation with LicenseMinter.mint() from
cameleer-license-minter. Adds full-control generateLicense() overload
accepting custom limits, expiry, grace period, and label.
Adds verifyToken() and verifyTokenSignature() using LicenseValidator.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 17:22:05 +02:00
hsiegeln
2fd14165bc feat: add SigningKeyService for Ed25519 keypair management
Entity, repository, and service for generating and storing Ed25519
signing keys. Lazy-initializes on first call — generates keypair
and persists to signing_keys table.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 17:21:16 +02:00
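The generate-and-sign cycle behind a service like this can be sketched with the JDK's built-in Ed25519 support (Java 15+). Persistence to the `signing_keys` table and the lazy-init wiring are elided; this only shows keypair generation plus sign/verify, under assumed method names.

```java
import java.security.*;

// Sketch: Ed25519 keypair generation and token signing using only the
// JDK. The real service additionally persists the keypair on first use.
static KeyPair generateSigningKey() throws GeneralSecurityException {
    return KeyPairGenerator.getInstance("Ed25519").generateKeyPair();
}

static byte[] sign(PrivateKey key, byte[] payload) throws GeneralSecurityException {
    Signature sig = Signature.getInstance("Ed25519");
    sig.initSign(key);
    sig.update(payload);
    return sig.sign();
}

static boolean verify(PublicKey key, byte[] payload, byte[] signature) throws GeneralSecurityException {
    Signature sig = Signature.getInstance("Ed25519");
    sig.initVerify(key);
    sig.update(payload);
    return sig.verify(signature);
}
```

Distributing only the public key to tenant servers (as the later `CAMELEER_SERVER_LICENSE_PUBLICKEY` commit does) lets them validate licenses without ever holding the signing key.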
hsiegeln
13bd03997a refactor: rename tiers and rewrite LicenseDefaults to 13-key cap matrix
- Tier enum: LOW→STARTER, MID→TEAM, HIGH→BUSINESS, BUSINESS→ENTERPRISE
- LicenseDefaults: 13-key limits per tier matching server handoff cap matrix
- Drop features concept from LicenseEntity, LicenseResponse, portal DTOs
- Add label and gracePeriodDays to LicenseEntity
- Fix agent limit key from 'agents' to 'max_agents' in VendorTenantController

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 17:20:22 +02:00
hsiegeln
e64bf4f0d1 feat: add cameleer-license-minter dependency and V002 migration
Adds Ed25519 license minting library, signing_keys table,
renames tiers (LOW→STARTER, MID→TEAM, HIGH→BUSINESS, BUSINESS→ENTERPRISE),
adds label + grace_period_days to licenses, drops features column.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 17:17:44 +02:00
hsiegeln
883e10ba7c feat: test SMTP connection on save and retain password on edit
All checks were successful
CI / build (push) Successful in 2m20s
CI / docker (push) Successful in 1m36s
Adds testSmtpConnection() that performs EHLO + auth via JavaMailSender
before persisting to Logto — saves fail fast with a clear error if
SMTP credentials are wrong. Password is now optional when editing:
if left blank the backend fetches the existing password from Logto's
connector config, so users can update host/port/fromEmail without
re-entering the password every time.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 16:23:42 +02:00
hsiegeln
0413a5b882 fix: remove HTML document wrapper from email templates for GMX compat
All checks were successful
CI / build (push) Successful in 2m12s
CI / docker (push) Successful in 1m5s
GMX webmail broke after adding <!DOCTYPE html><html><head><body>
wrappers — the Logto SMTP connector sets these as nodemailer's html
field, and GMX's sanitizer chokes on a full document inside its own
page shell. Reverts to bare HTML fragments (the format that worked
before 12:17 commit 484a388) while keeping the extra text paragraphs
added for mail checker text-to-HTML ratio compliance.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 16:08:04 +02:00
hsiegeln
c6b6bafc0f fix: revert email templates to inline styles for GMX webmail compat
All checks were successful
CI / build (push) Successful in 1m59s
CI / docker (push) Successful in 1m52s
GMX webmail strips <head> content including <style> blocks, rendering
emails as unstyled plain text. Reverts to inline styles (the only
reliable approach for email HTML) while keeping the proper HTML document
structure and extra text content added for mail checker compliance.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 15:56:10 +02:00
hsiegeln
988035b952 fix: handle MFA binding skip during registration submit
The registration flow hit a 422 on /api/experience/submit when MFA
policy is UserControlled. Adds the same trySubmit + skipMfaBinding
pattern already used in the sign-in flow — Logto confirms mfa-skipped
works for both SignIn and Register interaction events.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 15:52:46 +02:00
hsiegeln
c55427c22b fix: restore watermark image in email templates
All checks were successful
CI / build (push) Successful in 2m18s
CI / docker (push) Successful in 1m10s
The previous commit incorrectly removed the watermark — only the
style extraction into <style> blocks was requested. Restores the
watermark <img>, {{watermarkUrl}} placeholder resolution in both
EmailConnectorService and PasswordResetNotificationService, and
the corresponding test assertions.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 15:14:35 +02:00
hsiegeln
f681784e7e fix: move email styles to <style> block and remove watermark image
Some checks failed
CI / build (push) Successful in 1m59s
CI / docker (push) Has been cancelled
Extracts repeated inline styles into <head> <style> to improve the
text-to-HTML ratio flagged by mail checkers. Removes the decorative
watermark <img> (opacity 0.07, barely visible) which was the only
image element and triggered the "too many images" classification.
Cleans up the now-unused ProvisioningProperties dependency from
EmailConnectorService and PasswordResetNotificationService.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 15:11:53 +02:00
hsiegeln
7b57ee8246 fix: add proper HTML document structure and more text to email templates
All checks were successful
CI / build (push) Successful in 2m35s
CI / docker (push) Successful in 1m6s
Mail checkers flagged missing <html> tag and insufficient text content.
Wraps all 5 templates in DOCTYPE/html/head/body, adds Outlook conditional
comments, and includes a descriptive paragraph to improve text-to-image ratio.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 14:57:47 +02:00
hsiegeln
6e6e4218c9 fix: skip MFA binding prompt for UserControlled policy during sign-in
All checks were successful
CI / build (push) Successful in 2m1s
CI / docker (push) Successful in 59s
Logto returns 422 with an MFA recommendation when policy is
UserControlled. Call POST /profile/mfa/mfa-skipped to skip the
binding prompt, then re-submit. Users who already have MFA enrolled
still get the TOTP verification flow.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 14:43:55 +02:00
hsiegeln
469b36613b fix: resolve CI type errors in TeamPage and install qrcode.react
All checks were successful
CI / build (push) Successful in 2m52s
CI / docker (push) Successful in 2m16s
- Change Button size="small" to size="sm" (design system API)
- Remove unsupported style prop from Card component
- Ensure qrcode.react is properly installed

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 14:29:59 +02:00
hsiegeln
bcb8a040f4 docs: add MFA handoff document for cameleer-server team
Some checks failed
CI / build (push) Failing after 38s
CI / docker (push) Has been skipped
Covers JWT mfa_enrolled claim, enforcement model (APP_MFA_REQUIRED),
Logto Management API contract for TOTP enrollment and backup codes,
UX requirements, and error states.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 14:07:27 +02:00
hsiegeln
d52084a081 feat: add Reset MFA action for team members
Adds a Reset MFA button in the Actions column and an inline confirmation
card (with warning Alert) that calls useResetTeamMemberMfa on confirm.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-26 14:06:20 +02:00
hsiegeln
7e7407b137 feat: add MFA enrollment and enforcement toggle to Settings page
Adds two new sections to the tenant Settings page:
- MfaSection: TOTP authenticator setup with QR code, 6-digit verification,
  backup code display (2-column grid with copy/download), and MFA removal
- MfaEnforcementToggle: tenant admin control to require MFA for all members,
  with confirmation dialog before enabling

Installs qrcode.react for QR code rendering. Uses existing MFA hooks from
tenant-hooks.ts and design-system components.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 14:04:28 +02:00
hsiegeln
0a77080bca feat: add MFA types, hooks, and APP_MFA_REQUIRED interceptor
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-26 14:01:04 +02:00
hsiegeln
a5b30cd1ea feat: add password reset security notification email endpoint
Adds POST /api/password-reset-notification (public, rate-limited 3/10min)
that sends a branded HTML security notification email via the runtime-
configured Logto SMTP connector. Uses spring-boot-starter-mail with a
programmatic JavaMailSender built from the connector's live credentials.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-26 13:59:23 +02:00
hsiegeln
ffb65edcec feat: add MFA enforcement filter with APP_MFA_REQUIRED error code
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-26 13:56:25 +02:00
hsiegeln
8b8909e488 feat: add MFA enrollment, removal, and settings endpoints
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 13:53:44 +02:00
hsiegeln
94de4c2a5b feat: add MFA Management API methods to LogtoManagementClient
Add 5 new methods for MFA operations via Logto Management API:
- getUserMfaVerifications: list all MFA factors for a user
- createTotpVerification: create TOTP MFA verification
- createBackupCodes: generate backup codes
- deleteMfaVerification: delete a specific MFA verification
- deleteAllMfaVerifications: delete all MFA verifications (admin reset)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 13:48:29 +02:00
hsiegeln
66477ff575 feat: configure MFA factors + mfa_enrolled JWT claim in Logto bootstrap
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-26 13:46:10 +02:00
hsiegeln
6c70efcb54 feat: add MFA verification (TOTP + backup code) to sign-in flow
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-26 13:44:28 +02:00
hsiegeln
1f3a9551c5 feat: add forgot-password UI flow to custom sign-in page
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-26 13:42:51 +02:00
hsiegeln
08a3ad03b7 feat: add forgot-password and MFA verification Experience API functions
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-26 13:39:31 +02:00
hsiegeln
cfcf852e2d docs: add password reset and MFA implementation plan
12-task plan covering:
- Password reset Experience API + sign-in UI
- MFA verification at sign-in (TOTP + backup codes)
- Logto bootstrap MFA config + mfa_enrolled JWT claim
- LogtoManagementClient MFA methods
- MFA enrollment endpoints + Settings page UI
- MFA enforcement filter (APP_MFA_REQUIRED)
- Password reset security notification email
- Team page Reset MFA action
- Server handoff document

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 13:35:42 +02:00
hsiegeln
67f7d634c9 docs: refine password reset + MFA spec from review feedback
- Add security notification email after password reset (warns MFA
  was not required, recommends enabling it)
- Use distinct APP_MFA_REQUIRED error code + X-Cameleer-Error header
  for MFA enforcement 403s to avoid collision with generic access denied
- Make backup code fallback prominent in MFA verification UI (visible
  secondary action, not a subtle link)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 13:26:55 +02:00
hsiegeln
6f984c6b78 docs: add password reset and MFA design spec
Covers self-service password reset via Logto Experience API,
TOTP + backup code MFA with per-tenant enforcement via JWT claims,
and a server handoff document for cameleer-server MFA enrollment.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 13:20:48 +02:00
hsiegeln
5754b0ca81 fix: set Logto display name from email during onboarding
All checks were successful
CI / build (push) Successful in 2m12s
CI / docker (push) Successful in 1m3s
Email-registered users have no name field in Logto, causing empty OIDC
name claims. After adding user to org, derive display name from email
local part (john.doe@acme.com -> john.doe) if name is not already set.

Also adds updateUserProfile() to LogtoManagementClient.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 12:31:12 +02:00
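The derivation rule stated above — local part of the email, only when no name is already set — is simple enough to sketch directly (method name hypothetical):

```java
// Sketch: derive a display name from the email local part
// (john.doe@acme.com -> john.doe) unless a name already exists.
static String deriveDisplayName(String currentName, String email) {
    if (currentName != null && !currentName.isBlank()) return currentName;
    int at = email.indexOf('@');
    return at > 0 ? email.substring(0, at) : email;
}
```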
hsiegeln
484a388b62 fix: prevent grey bar when webmail blocks watermark image
All checks were successful
CI / build (push) Successful in 2m3s
CI / docker (push) Successful in 1m10s
Remove width/height HTML attributes and add border:0;outline:none to
the watermark img tag so broken-image placeholders collapse gracefully
when email clients block remote images.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 12:17:51 +02:00
hsiegeln
d720c0500f fix: force fresh OIDC sign-in after onboarding to pick up new org membership
All checks were successful
CI / build (push) Successful in 1m55s
CI / docker (push) Successful in 1m22s
After creating a tenant, the existing Logto tokens don't include the new
org membership/scopes. A hard page reload reused stale tokens, causing
the SDK to either lose auth state (redirect loop to login) or fail to
resolve org scopes (falling through to server UI instead of tenant UI).

Replace window.location.href with signIn() to trigger a fresh OIDC flow.
The existing Logto session cookie means auto-approval — no login form.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 12:06:39 +02:00
hsiegeln
cfa9d41b36 docs: add email template polish spec, plan, and update GitNexus index
All checks were successful
CI / build (push) Successful in 1m54s
CI / docker (push) Successful in 1m2s
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 10:37:41 +02:00
hsiegeln
b974f233f4 feat: load email templates from classpath with watermark URL resolution
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-26 10:35:16 +02:00
hsiegeln
3741ac2658 feat: add branded HTML email templates with desert/caravan copy 2026-04-26 10:31:50 +02:00
hsiegeln
e8a726af80 feat: permit /assets/** for unauthenticated access (email watermark) 2026-04-26 10:30:33 +02:00
hsiegeln
53f0e55e93 feat: add pre-faded logo watermark for email templates 2026-04-26 10:24:02 +02:00
hsiegeln
06d114b46b feat: validate slug uniqueness during onboarding
All checks were successful
CI / build (push) Successful in 1m50s
CI / docker (push) Successful in 1m22s
Add GET /api/onboarding/slug-available endpoint to check if a slug is
already taken. Frontend checks availability with 400ms debounce as the
user types and shows inline feedback. Submit button disabled when slug
is taken. POST /api/onboarding/tenant now returns 409 instead of 500
for duplicate slugs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-26 09:40:17 +02:00
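The server-side half of the slug check can be sketched as validation plus a taken-set lookup. The validation regex here is an assumption (lowercase alphanumerics and interior hyphens) — the commit only specifies the endpoint and its behavior, not the slug rules.

```java
import java.util.Set;

// Hypothetical check behind GET /api/onboarding/slug-available:
// reject malformed slugs outright, then test uniqueness. The format
// rule is assumed, not taken from the real OnboardingService.
static boolean slugAvailable(String slug, Set<String> takenSlugs) {
    if (slug == null || !slug.matches("[a-z0-9](?:[a-z0-9-]{0,61}[a-z0-9])?")) {
        return false;
    }
    return !takenSlugs.contains(slug);
}
```

The frontend's 400ms debounce just throttles calls to this check as the user types; the authoritative 409 on POST remains necessary to close the race between check and create.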
hsiegeln
171ed1a6ab fix: provisioning race condition and noisy ClickHouse logs
Some checks failed
CI / build (push) Successful in 2m3s
CI / docker (push) Successful in 1m30s
SonarQube Analysis / sonarqube (push) Failing after 2m22s
Defer provisionAsync() until after the transaction commits using
TransactionSynchronization.afterCommit(). Previously the @Async thread
raced the @Transactional commit — findById returned null because the
tenant INSERT wasn't visible yet.

Downgrade ClickHouse UNKNOWN_TABLE errors to DEBUG level in
InfrastructureService. These are expected on fresh installs before any
cameleer-server has created the tables.

Make the onboarding slug field read-only (derived from org name).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 22:05:48 +02:00
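The ordering guarantee this fix relies on — callbacks registered during a transaction run only after commit — can be modeled with a toy context. This is a plain-Java simulation of the idea, not Spring code; the real fix registers `provisionAsync()` through Spring's `TransactionSynchronization.afterCommit()`.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of after-commit scheduling: work queued while the
// "transaction" is open only runs once commit() fires, so a reader
// started from the callback always sees the committed INSERT.
static class TxContext {
    private final List<Runnable> afterCommit = new ArrayList<>();

    void registerAfterCommit(Runnable callback) {
        afterCommit.add(callback);
    }

    void commit() {
        afterCommit.forEach(Runnable::run); // data is visible from here on
    }
}
```

Before the fix, the @Async thread was effectively the callback running *before* `commit()` — hence `findById` returning null.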
hsiegeln
dee1f39554 fix: align button icons and polish vendor sidebar
All checks were successful
CI / build (push) Successful in 2m8s
CI / docker (push) Successful in 1m41s
Fix vertical alignment of Lucide icons inside Button children across
all pages by adding verticalAlign offsets (-3px for 16px icons, -2px
for 14px icons). The design system Button wraps children in an inline
span, so SVG icons defaulted to baseline alignment.

Hide the redundant top-right "Create Tenant" button on VendorTenantsPage
when no tenants exist — the EmptyState already provides that action.

Add icons to all vendor sidebar sub-items for consistency (previously
only Email Connector had one).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 21:30:37 +02:00
hsiegeln
adb4ef1af8 fix: enable email sign-in method alongside username in all modes
All checks were successful
CI / build (push) Successful in 1m50s
CI / docker (push) Successful in 59s
The sign-in experience must always include both email+password and
username+password methods. The admin user signs in with their email
(admin@company.com) which the sign-in UI detects as email type.
With only username method enabled, Logto rejects it with "this
sign-in method is not activated."

Fixes both bootstrap Phase 8c and EmailConnectorService disable path.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 21:11:07 +02:00
hsiegeln
4cc3e096b5 fix: bootstrap extracts username from admin email for Logto
All checks were successful
CI / build (push) Successful in 1m47s
CI / docker (push) Successful in 20s
Logto rejects @ in usernames. Extract local part (before @) as the
Logto username, use full email as primaryEmail. Also validates admin
user creation succeeded (logs error instead of silently continuing
with null ID).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 21:03:54 +02:00
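A minimal sketch of the extraction rule, assuming a helper of our own naming (Logto rejects `@` in usernames, so the local part becomes the username and the full address becomes `primaryEmail`):

```typescript
// Derive a Logto-safe identity from an admin email address.
function logtoIdentityFromEmail(email: string): { username: string; primaryEmail: string } {
  const at = email.indexOf("@");
  const username = at === -1 ? email : email.slice(0, at); // local part only
  return { username, primaryEmail: email };
}
```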
hsiegeln
1d26ae481e docs: update user manual for current UI and identity model
All checks were successful
CI / build (push) Successful in 1m58s
CI / docker (push) Successful in 21s
- Sign-in instructions: "Enter your email" (not "email or username")
- Troubleshooting: remove reference to deleted "Sign in with Logto" button
- Sidebar navigation: replace outdated single table with vendor console
  and tenant portal sections reflecting current sidebar structure

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 20:53:19 +02:00
hsiegeln
8fe18c7f83 feat: unify admin identity — SAAS_ADMIN_USER is the email in SaaS mode
All checks were successful
CI / build (push) Successful in 1m56s
CI / docker (push) Successful in 1m32s
In SaaS mode, SAAS_ADMIN_USER must be an email address. It's used as
both the Logto username and primaryEmail. No separate SAAS_ADMIN_EMAIL.
Installer enforces email format in SaaS mode (moved deployment mode
question before admin credentials), accepts any username in standalone.
Sign-in form label changed to "Login".

Removes SAAS_ADMIN_EMAIL from bootstrap, compose template, installers,
and all documentation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 20:46:24 +02:00
hsiegeln
929e7d5aed chore: update installer submodule (add SAAS_ADMIN_EMAIL to both installers)
All checks were successful
CI / build (push) Successful in 1m57s
CI / docker (push) Successful in 20s
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 20:26:57 +02:00
hsiegeln
3ab6408258 feat: enforce email as primary user identity in SaaS mode
All checks were successful
CI / build (push) Successful in 2m23s
CI / docker (push) Successful in 53s
All users in SaaS mode must have an email address. The bootstrap creates
the admin user with primaryEmail set to SAAS_ADMIN_EMAIL (defaults to
<SAAS_ADMIN_USER>@<PUBLIC_HOST>). This prevents the admin from being
locked out when self-service registration (which requires email) is
enabled via the Email Connector UI.

Documentation updated across all CLAUDE.md files, .env.example,
user-manual.md, and installer submodule (README, .env.example, compose).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 20:23:30 +02:00
hsiegeln
f0aa2b7d3a fix: reset signUp identifiers when disabling registration
All checks were successful
CI / build (push) Successful in 1m45s
CI / docker (push) Successful in 1m17s
When registration is disabled, signUp.identifiers must be reset to
["username"] with verify:false. Otherwise Logto enforces email as a
mandatory profile field on all users, blocking username-only users
(like the admin) from signing in.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 20:08:46 +02:00
hsiegeln
9bf6c17d63 fix: hide registration option when sign-in mode is SignIn only
Some checks failed
CI / build (push) Successful in 2m4s
CI / docker (push) Has been cancelled
Fetch /api/.well-known/sign-in-exp on mount and check signInMode.
If not SignInAndRegister, hide the "Sign up" link and force sign-in
mode (even if ?first_screen=register was in the URL).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 20:06:17 +02:00
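The gate can be sketched as a pure function (the `signInMode` values match Logto's sign-in experience payload; the function name is ours):

```typescript
// Decide the initial screen: the URL may ask for register via
// ?first_screen=register, but the backend's signInMode wins.
type SignInExperience = { signInMode: "SignIn" | "Register" | "SignInAndRegister" };

function initialScreen(
  exp: SignInExperience,
  firstScreenParam: string | null,
): "signIn" | "register" {
  if (exp.signInMode !== "SignInAndRegister") return "signIn"; // force sign-in
  return firstScreenParam === "register" ? "register" : "signIn";
}
```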
hsiegeln
1a4ae5b49b fix: style signed-out page to match sign-in UI
Some checks failed
CI / build (push) Successful in 2m12s
CI / docker (push) Has been cancelled
Use same layout as SignInPage: bg-base background, 400px card,
Cameleer logo with text header, matching font sizes and spacing.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 20:03:04 +02:00
hsiegeln
400c32a539 fix: use sessionStorage instead of query param for logout flag
All checks were successful
CI / build (push) Successful in 2m2s
CI / docker (push) Successful in 1m12s
Logto does exact-match on post_logout_redirect_uri, so ?signed_out
caused "not registered" error. Use sessionStorage flag instead —
set before signOut, read and cleared on LoginPage mount.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 19:56:10 +02:00
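A rough sketch of the flag handshake, with the storage abstracted behind an interface so it can be exercised outside a browser (the key name is hypothetical):

```typescript
// Set-before-signOut, read-and-clear-on-mount: state rides in
// sessionStorage because Logto exact-matches post_logout_redirect_uri.
interface FlagStorage {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
  removeItem(key: string): void;
}

const SIGNED_OUT_KEY = "cameleer.signedOut"; // hypothetical key name

// Call just before signOut() redirects to Logto.
function markSignedOut(storage: FlagStorage): void {
  storage.setItem(SIGNED_OUT_KEY, "1");
}

// On LoginPage mount: read the flag once and clear it.
function consumeSignedOut(storage: FlagStorage): boolean {
  const wasSignedOut = storage.getItem(SIGNED_OUT_KEY) !== null;
  storage.removeItem(SIGNED_OUT_KEY);
  return wasSignedOut;
}
```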
hsiegeln
2cb818ec71 fix: prevent logout loop by showing signed-out state instead of auto-redirecting
All checks were successful
CI / build (push) Successful in 2m45s
CI / docker (push) Successful in 1m50s
After logout, redirect to /platform/login?signed_out which shows a
"Signed out" card with a "Sign in again" button instead of immediately
redirecting back to Logto OIDC (which would auto-authenticate if the
Logto session cookie persists).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 18:52:26 +02:00
hsiegeln
37668dcfe0 docs: update all documentation for email connector UI migration
All checks were successful
CI / build (push) Successful in 2m3s
CI / docker (push) Successful in 1m34s
- CLAUDE.md: add EmailConnectorService/Controller to vendor package
- .env.example: replace SMTP vars with note about runtime UI config
- docker/CLAUDE.md: update sign-in UI and bootstrap descriptions
- ui/CLAUDE.md: add EmailConfigPage, update sidebar and registration notes
- ui/sign-in/Dockerfile: update connector install comment
- installer: update README, .env.example (submodule)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 18:16:19 +02:00
hsiegeln
40ea6e5e69 docs: update docker CLAUDE.md and installer submodule for SMTP removal
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 18:09:57 +02:00
hsiegeln
6ab0a3c5a1 chore: update installer submodule (remove SMTP from both installers) 2026-04-25 18:08:51 +02:00
hsiegeln
8130f2053d chore: update installer submodule (remove SMTP from compose)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-25 18:05:42 +02:00
hsiegeln
9da908e4d2 feat: remove SMTP connector from bootstrap, default to sign-in only
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-25 18:05:17 +02:00
hsiegeln
d0dba73a29 feat: add email connector route and sidebar navigation
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-25 18:03:39 +02:00
hsiegeln
9aa535ace8 feat: add EmailConfigPage with SMTP form, registration toggle, and test email 2026-04-25 18:02:30 +02:00
hsiegeln
f85b5a3634 feat: add React Query hooks for email connector API 2026-04-25 18:00:47 +02:00
hsiegeln
39e1b39f7a feat: add EmailConnectorController with CRUD, test, and registration toggle endpoints 2026-04-25 17:59:40 +02:00
hsiegeln
283d3e34a0 feat: add EmailConnectorService for Logto email connector management 2026-04-25 17:58:26 +02:00
hsiegeln
2cd15509ba feat: add email connector and sign-in experience methods to LogtoManagementClient
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-25 17:56:37 +02:00
hsiegeln
9d87f71bc1 docs: add email connector UI design spec and implementation plan
Move email connector configuration from installer/bootstrap into the
vendor admin UI for runtime control over SMTP delivery and self-service
registration.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 17:50:47 +02:00
hsiegeln
6b77a96d52 fix: handle special characters in passwords during setup
All checks were successful
CI / build (push) Successful in 1m38s
CI / docker (push) Successful in 22s
- Logto entrypoint builds DB_URL from PG_USER/PG_PASSWORD/PG_HOST with
  URL-encoding via node's encodeURIComponent, instead of embedding the
  raw password in the connection string
- Installer submodule updated: passwords single-quoted in .env/.conf

Fixes SMTP and DB auth failures when passwords contain $, &, ;, [, etc.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 16:34:00 +02:00
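The encoding step can be sketched like this (variable names mirror the commit's PG_* vars; the exact URL shape is an assumption):

```typescript
// Build a Postgres connection string with percent-encoded credentials so
// characters like $ & ; [ in the password survive URL parsing.
function buildDbUrl(user: string, password: string, host: string, db: string): string {
  return `postgres://${encodeURIComponent(user)}:${encodeURIComponent(password)}@${host}/${db}`;
}
```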
hsiegeln
c58bf90604 chore: update installer submodule (restore install.ps1)
All checks were successful
CI / build (push) Successful in 1m56s
CI / docker (push) Successful in 21s
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 15:32:09 +02:00
hsiegeln
273baf7996 chore: use registry.cameleer.io as default image registry
All checks were successful
CI / build (push) Successful in 2m0s
CI / docker (push) Successful in 59s
Customer-facing image defaults now reference the public registry URL.
Updates installer templates and Spring Boot provisioning defaults.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 15:24:03 +02:00
hsiegeln
5ca118dc93 docs: update CLAUDE.md for installer submodule structure
All checks were successful
CI / build (push) Successful in 1m17s
CI / docker (push) Successful in 14s
Reflect that installer/ is now a git submodule pointing to the public
cameleer-saas-installer repo, and that docker-compose.yml is a thin
dev overlay chained via COMPOSE_FILE.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 13:05:20 +02:00
hsiegeln
0b8cdf6dd9 refactor: move installer to dedicated repo, wire as git submodule
All checks were successful
CI / build (push) Successful in 1m19s
CI / docker (push) Successful in 20s
The installer (install.sh, templates/, bootstrap scripts) now lives in
cameleer/cameleer-saas-installer (public repo). Added as a git submodule
at installer/ so compose templates remain the single source of truth.

Dev compose is now a thin overlay (ports + volume mount + dev env vars).
Production templates are chained via COMPOSE_FILE in .env:
  installer/templates/docker-compose.yml
  installer/templates/docker-compose.saas.yml
  docker-compose.yml (dev overrides)

No code duplication — fixes to compose templates go to the installer
repo and propagate to both production deployments and dev via submodule.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 12:59:44 +02:00
hsiegeln
cafd7e9369 feat: add bootstrap scripts for one-line installer download
All checks were successful
CI / build (push) Successful in 1m39s
CI / docker (push) Successful in 18s
get-cameleer.sh and get-cameleer.ps1 download just the installer
files from Gitea into a local ./installer directory. Usage:

  curl -fsSL https://gitea.siegeln.net/.../get-cameleer.sh | bash
  irm https://gitea.siegeln.net/.../get-cameleer.ps1 | iex

Supports --version=v1.2.0 to pin a specific tag, defaults to main.
Pass --run to auto-execute the installer after download.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 12:48:23 +02:00
hsiegeln
b5068250f9 fix(ui): improve onboarding page styling to match sign-in page
All checks were successful
CI / build (push) Successful in 1m51s
CI / docker (push) Successful in 1m16s
Add 32px card padding and reduce max-width to 420px for consistency
with the sign-in page. Add noValidate to prevent browser-native
required validation outlines on inputs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 12:43:11 +02:00
hsiegeln
0cfa359fc5 fix(sign-in): detect register mode from URL path, not just query param
All checks were successful
CI / build (push) Successful in 1m19s
CI / docker (push) Successful in 44s
Newer Logto versions redirect to /register?app_id=... instead of
/sign-in?first_screen=register. Check the pathname in addition to
the query param so the registration form shows correctly.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 10:10:57 +02:00
hsiegeln
5cc9f8c9ef fix(spa): add /register and /onboarding to SPA forward routes
All checks were successful
CI / build (push) Successful in 1m18s
CI / docker (push) Successful in 43s
These routes were missing from SpaController, so requests to
/platform/register and /platform/onboarding had no handler. Spring
forwarded to /error, which isn't in the permitAll() list, resulting
in a 401 instead of serving the SPA.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 10:05:56 +02:00
hsiegeln
b066d1abe7 fix(sign-in): validate email format before registration attempt
All checks were successful
CI / build (push) Successful in 1m17s
CI / docker (push) Successful in 46s
Show "Please enter a valid email address" when the user enters a
username instead of an email in the sign-up form, rather than letting
it hit Logto's API and returning a cryptic 400.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 09:41:41 +02:00
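A sketch of that pre-flight validation (the regex is a deliberately loose assumption, not the shipped one; the message text is from the commit):

```typescript
// Reject obvious non-emails before the form ever reaches Logto's API.
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

// Returns null when valid, or the user-facing error message.
function validateRegistrationEmail(value: string): string | null {
  return EMAIL_RE.test(value) ? null : "Please enter a valid email address";
}
```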
hsiegeln
ae1d9fa4db fix(docker): add extra_hosts so Logto can reach itself via public hostname
All checks were successful
CI / build (push) Successful in 1m14s
CI / docker (push) Successful in 18s
Logto validates M2M tokens by fetching its own JWKS from the ENDPOINT
URL (e.g. https://app.cameleer.io/oidc/jwks). Behind a Cloudflare
tunnel, that hostname resolves to Cloudflare's IP and the container
can't route back through the tunnel — the fetch times out (ETIMEDOUT),
causing all Management API calls to return 500.

Adding extra_hosts maps AUTH_HOST to host-gateway so the request goes
to the Docker host, which has Traefik on :443, which routes back to
Logto internally. This hairpin works because NODE_TLS_REJECT_UNAUTHORIZED=0
accepts the self-signed cert.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 09:13:39 +02:00
hsiegeln
6fe10432e6 fix(installer): remove duplicate config load that kills upgrade silently
All checks were successful
CI / build (push) Successful in 1m18s
CI / docker (push) Successful in 15s
The upgrade path in handle_rerun called load_config_file a second time
(already called by detect_existing_install). On the second pass, every
variable is already set, so [ -z "$VAR" ] && VAR="$value" returns
exit code 1 (test fails, && short-circuits). With set -e, the non-zero
exit from the case clause kills the script silently after printing
"[INFO] Upgrading installation..." — no error, no further output.

Removed the redundant load_config_file and load_env_overrides calls.
Both were already executed in main() before handle_rerun is reached.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 09:03:07 +02:00
hsiegeln
9f3faf4816 fix(traefik): set Logto router priority=1 to prevent route hijacking
All checks were successful
CI / build (push) Successful in 1m17s
CI / docker (push) Successful in 18s
Traefik auto-calculates router priority from rule string length. When
deployed with a domain longer than 23 chars (e.g. app.cameleer.io),
Host(`app.cameleer.io`) (25 chars) outranks PathPrefix(`/platform`)
(23 chars), causing ALL requests — including /platform/* — to route
to Logto instead of the SaaS app. This breaks login because the sign-in
UI loads without an OIDC interaction session.

Setting priority=1 makes Logto a true catch-all, matching the intent
documented in docker/CLAUDE.md.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 08:50:16 +02:00
hsiegeln
a60095608e fix(installer): send correct Host header in Traefik routing check
All checks were successful
CI / build (push) Successful in 1m18s
CI / docker (push) Successful in 19s
The root redirect rule matches Host(`PUBLIC_HOST`), not localhost.
Curl with --resolve (bash) and Host header (PS1) so the health
check sends the right hostname when verifying Traefik routing.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 08:15:18 +02:00
hsiegeln
9f9112c6a5 feat(installer): add interactive registry prompts
Some checks failed
CI / build (push) Successful in 1m19s
CI / docker (push) Successful in 16s
SonarQube Analysis / sonarqube (push) Failing after 1m47s
Both simple and expert modes now ask "Pull images from a private
registry?" with follow-up prompts for URL, username, and token.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 02:17:52 +02:00
hsiegeln
e1a9f6d225 feat(installer): add --registry, --registry-user, --registry-token
All checks were successful
CI / build (push) Successful in 1m21s
CI / docker (push) Successful in 15s
Both installers (bash + PS1) now support pulling images from a
custom Docker registry. Writes *_IMAGE env vars to .env so compose
templates use the configured registry. Runs docker login before
pull when credentials are provided. Persisted in cameleer.conf
for upgrades/reconfigure.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 02:10:48 +02:00
hsiegeln
180644f0df fix(installer): SIGPIPE crash in generate_password with pipefail
All checks were successful
CI / build (push) Successful in 1m15s
CI / docker (push) Successful in 18s
`tr | head -c 32` causes tr to receive SIGPIPE when head exits early.
With `set -eo pipefail`, exit code 141 kills the script right after
"Configuration validated" before any passwords are generated.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 01:41:47 +02:00
hsiegeln
62b74d2d06 ci: remove sync-images workflow
All checks were successful
CI / build (push) Successful in 1m16s
CI / docker (push) Successful in 16s
Remote server will pull directly from the Gitea registry instead.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 01:25:56 +02:00
hsiegeln
3e2f035d97 fix(ci): use POSIX-compatible loop instead of bash arrays
All checks were successful
CI / build (push) Successful in 1m18s
CI / docker (push) Successful in 18s
The docker-builder container runs ash/sh, not bash — arrays with ()
are not supported. Use a simple for-in loop instead.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 00:32:12 +02:00
hsiegeln
9962ee99d9 fix(ci): drop ssh-keyscan, use StrictHostKeyChecking=accept-new instead
All checks were successful
CI / build (push) Successful in 1m16s
CI / docker (push) Successful in 17s
ssh-keyscan fails when the runner can't reach the host on port 22
during that step. Using accept-new on the ssh command itself is
equivalent for an ephemeral CI runner.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 00:29:52 +02:00
hsiegeln
b53840b77b ci: add manual workflow to sync Docker images to remote server
Some checks failed
CI / docker (push) Has been cancelled
CI / build (push) Has been cancelled
Pulls all :latest images from the Gitea registry and pipes them
via `docker save | ssh docker load` to the APP_HOST server.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 00:28:39 +02:00
hsiegeln
9ed2cedc98 feat: self-service sign-up with email verification and onboarding
All checks were successful
CI / build (push) Successful in 1m14s
CI / docker (push) Successful in 1m15s
Complete sign-up pipeline: email registration via Logto Experience API,
SMTP email verification, and self-service trial tenant creation.

Layer 1 — Logto config:
- Bootstrap Phase 8b: SMTP email connector with branded HTML templates
- Bootstrap Phase 8c: enable SignInAndRegister (email+password sign-up)
- Dockerfile installs official Logto connectors (ensures SMTP available)
- SMTP env vars in docker-compose, installer templates, .env.example

Layer 2 — Experience API (ui/sign-in/experience-api.ts):
- Registration flow: initRegistration → sendVerificationCode → verifyCode
  → addProfile (password) → identifyUser → submit
- Sign-in auto-detects email vs username identifier

Layer 3 — Custom sign-in UI (ui/sign-in/SignInPage.tsx):
- Three-mode state machine: signIn / register / verifyCode
- Reads first_screen=register from URL query params
- Toggle links between sign-in and register views

Layer 4 — Post-registration onboarding:
- OnboardingService: reuses VendorTenantService.createAndProvision(),
  adds calling user to Logto org as owner, enforces one trial per user
- OnboardingController: POST /api/onboarding/tenant (authenticated only)
- OnboardingPage.tsx: org name + auto-slug form
- LandingRedirect: detects zero orgs → redirects to /onboarding
- RegisterPage.tsx: /platform/register initiates OIDC with firstScreen

Installers (install.sh + install.ps1):
- Both prompt for SMTP config in SaaS mode
- CLI args, env var capture, cameleer.conf persistence

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-25 00:21:07 +02:00
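The auto-slug derivation the onboarding form relies on might look like this (the exact slug rules aren't stated in the commit; lowercase alphanumerics joined by single hyphens is an assumption):

```typescript
// Derive a URL-safe tenant slug from the organization name.
function slugify(orgName: string): string {
  return orgName
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of other chars to one hyphen
    .replace(/^-+|-+$/g, "");    // trim leading/trailing hyphens
}
```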
hsiegeln
dc7ac3a1ec feat: split auth domain — Logto gets dedicated AUTH_HOST
All checks were successful
CI / build (push) Successful in 1m22s
CI / docker (push) Successful in 48s
Support separate auth domain (e.g. auth.cameleer.io) for Logto while
keeping the SaaS app on PUBLIC_HOST (e.g. app.cameleer.io). AUTH_HOST
defaults to PUBLIC_HOST for backward-compatible single-domain setups.

- Logto routing: Host(AUTH_HOST) replaces PathPrefix('/') catch-all
- Root redirect moved from traefik-dynamic.yml to Docker labels with
  Host(PUBLIC_HOST) scope so it doesn't intercept auth domain
- Self-signed cert generates SANs for both domains
- Bootstrap Host header uses AUTH_HOST for Logto endpoint validation
- Spring issuer-uri and oidcissueruri use new authhost property
- Both installers (sh + ps1) prompt for AUTH_HOST in expert mode

Local dev: AUTH_HOST=auth.localhost (resolves to 127.0.0.1, no hosts file)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-24 18:11:47 +02:00
hsiegeln
1fbafbb16d feat: add vendor tenant metrics dashboard
All checks were successful
CI / build (push) Successful in 1m24s
CI / docker (push) Successful in 1m0s
Fleet overview page at /vendor/metrics showing per-tenant operational
metrics (agents, CPU, heap, HTTP requests, ingestion drops, uptime).
Queries each tenant's server via the new POST /api/v1/admin/server-metrics/query
REST API instead of direct ClickHouse access, supporting future per-tenant
CH instances.

Backend: TenantMetricsService fires 11 metric queries per tenant
concurrently over a 5-minute window, assembles into a summary snapshot.
ServerApiClient.queryServerMetrics() handles the M2M authenticated POST.

Frontend: VendorMetricsPage with KPI strip (fleet totals) and per-tenant
table with color-coded badges and heap usage bars. Auto-refreshes every 60s.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-24 14:02:57 +02:00
hsiegeln
6c1241ed89 docs(docker): replace obsolete 504 workaround note with the real wiring
Some checks failed
CI / build (push) Successful in 1m23s
CI / docker (push) Successful in 18s
SonarQube Analysis / sonarqube (push) Failing after 1m20s
Before the fix, the paragraph claimed every dynamically-created container MUST
carry `traefik.docker.network=cameleer-traefik` to avoid a 504, because
Traefik's Docker provider pointed at `network: cameleer` (a literal
name that never matched any real network). After the one-line static
config fix (df64573), Traefik's provider targets `cameleer-traefik`
directly — the network every managed container already joins — so the
per-container label is just defense-in-depth, not required.

Rewritten to describe current behaviour and keep a short note about the
pre-fix 504 for operators who roll back to an old image.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-23 18:22:32 +02:00
hsiegeln
df64573bfb fix(traefik): point docker provider network at cameleer-traefik
All checks were successful
CI / build (push) Successful in 1m22s
CI / docker (push) Successful in 16s
The static config set `provider.docker.network: cameleer`, but no network
by that literal name exists. The `cameleer` network defined in the
compose file gets namespaced by compose to `cameleer_cameleer`, and
managed app containers created at runtime only ever attach to
`cameleer-traefik` (per `DockerNetworkManager.TRAEFIK_NETWORK`).

Symptom: when the Docker provider's preferred network doesn't match any
network on a container, Traefik picks an arbitrary container IP and may
route to one on a bridge Traefik itself isn't attached to — requests
hang until Traefik's upstream timeout fires (504 Gateway Timeout).

Fix is one line: match the network that `cameleer-server` actually
attaches its managed containers to.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-23 18:15:22 +02:00
hsiegeln
4526d97bda fix: generate CAMELEER_SERVER_SECURITY_JWTSECRET in installer and wire into containers
All checks were successful
CI / build (push) Successful in 1m16s
CI / docker (push) Successful in 59s
The server now requires a non-empty JWT secret. The installer (bash + ps1)
generates a random value for both SaaS and standalone modes, and the compose
templates map it into the respective containers. Also fixes container names
in generated INSTALL.md docs to use the cameleer- prefix consistently.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-23 09:30:11 +02:00
hsiegeln
132143c083 refactor: decompose CLAUDE.md into directory-scoped files
Some checks failed
CI / build (push) Successful in 1m59s
CI / docker (push) Successful in 1m24s
SonarQube Analysis / sonarqube (push) Failing after 2m4s
Root CLAUDE.md reduced from 475 to 175 lines (75 excl. GitNexus).
Detailed context now loads automatically only when editing code in
the relevant directory:

- provisioning/CLAUDE.md — env vars, provisioning flow, lifecycle
- config/CLAUDE.md — auth, scopes, JWT, OIDC role extraction
- docker/CLAUDE.md — routing, networks, bootstrap, deployment pipeline
- installer/CLAUDE.md — deployment modes, compose templates, env naming
- ui/CLAUDE.md — frontend files, sign-in UI

No information lost — everything moved, nothing deleted.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 09:30:21 +02:00
hsiegeln
b824942408 docs: fix scope breakdown and add missing InfrastructurePage
All checks were successful
CI / build (push) Successful in 2m12s
CI / docker (push) Successful in 19s
- OAuth2 scopes: 1 platform + 9 tenant + 3 server (not "10 platform")
- Add InfrastructurePage.tsx to vendor pages list

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 09:21:28 +02:00
hsiegeln
31e8dd05f0 docs: update CLAUDE.md for runtime base image, env var accuracy
All checks were successful
CI / build (push) Successful in 1m21s
CI / docker (push) Successful in 3m2s
- Fix OIDC env var names (OIDC_ISSUERURI not OIDCISSUERURI)
- Fix CAMELEER_SERVER_TENANT_ID value (slug, not UUID)
- Add missing env vars (ClickHouse, JWT secret, license token, base image)
- Complete provisioning properties table (was 6/16, now all listed)
- Add semantic note: CAMELEER_SAAS_PROVISIONING_* = "forwarded to tenant"
- Update runtime-base description (log appender JAR, entrypoint override,
  runtime type detection, PropertiesLauncher version handling)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 09:16:55 +02:00
hsiegeln
eba9f560ac fix: name JAR volume explicitly to match JARDOCKERVOLUME env var
Some checks failed
CI / build (push) Successful in 1m17s
CI / docker (push) Successful in 19s
SonarQube Analysis / sonarqube (push) Failing after 1m23s
The compose volume `jars` gets created as `<project>_jars` by Docker
Compose, but JARDOCKERVOLUME tells the server to mount `cameleer-jars`
on deployed app containers. These are different Docker volumes, so
the app JAR was never visible inside the app container — causing
ClassNotFoundException on startup.

Fix: add `name: cameleer-jars` to the volume definition so both the
server and deployed app containers share the same named volume.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 00:03:48 +02:00
hsiegeln
3c2bf4a9b1 fix: pass self-reference in VendorTenantServiceTest for async proxy
All checks were successful
CI / build (push) Successful in 1m17s
CI / docker (push) Successful in 44s
The @Lazy self-proxy pattern requires a non-null reference in tests.
Construct the instance then re-create with itself as the self param.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 23:33:07 +02:00
hsiegeln
97b2235914 fix: update tests for ProvisioningProperties runtimeBaseImage field
Some checks failed
CI / build (push) Failing after 1m22s
CI / docker (push) Has been skipped
Add missing runtimeBaseImage arg to ProvisioningProperties constructor
calls in tests. Also add missing self-proxy arg to VendorTenantService
constructor (pre-existing from async provisioning commit).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 23:27:53 +02:00
hsiegeln
338db5dcda fix: forward runtime base image to provisioned tenant servers
Some checks failed
CI / build (push) Failing after 59s
CI / docker (push) Has been skipped
CAMELEER_SERVER_RUNTIME_BASEIMAGE was never set on provisioned
per-tenant server containers, causing them to fall back to the
server's hardcoded default. Added CAMELEER_SAAS_PROVISIONING_RUNTIMEBASEIMAGE
as a configurable property that gets forwarded during provisioning.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 23:20:46 +02:00
hsiegeln
fd50a147a2 fix: make tenant provisioning truly async via self-proxy
Some checks failed
CI / build (push) Failing after 41s
CI / docker (push) Has been skipped
@Async on provisionAsync() was bypassed because all call sites were
internal (this.provisionAsync), skipping the Spring proxy. Inject self
via @Lazy to route through the proxy so provisioning runs in a
background thread and the API returns immediately.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 22:32:05 +02:00
hsiegeln
0dd52624b7 fix: use semicolon as COMPOSE_FILE separator on Windows
All checks were successful
CI / build (push) Successful in 1m59s
CI / docker (push) Successful in 46s
Windows Docker Compose uses ; not : as the path separator in COMPOSE_FILE.
The colon was being interpreted as part of the filename, causing CreateFile errors.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 22:11:34 +02:00
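The separator rule can be captured in a small helper (function names are illustrative; the platform strings follow Node's `process.platform` convention):

```typescript
// Docker Compose splits COMPOSE_FILE on ':' on Linux/macOS but on ';'
// on Windows, where ':' is part of drive-letter paths like C:\...
function composeFileSeparator(platform: string): string {
  return platform === "win32" ? ";" : ":";
}

function joinComposeFiles(files: string[], platform: string): string {
  return files.join(composeFileSeparator(platform));
}
```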
hsiegeln
1ce0ea411d chore: update design-system to 0.1.54
Some checks failed
CI / build (push) Successful in 1m25s
CI / docker (push) Has been cancelled
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 22:08:17 +02:00
hsiegeln
81be25198c chore: update design-system to 0.1.53
All checks were successful
CI / build (push) Successful in 1m16s
CI / docker (push) Successful in 1m32s
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 21:56:14 +02:00
hsiegeln
dc4ea33c9b feat: externalize docker-compose templates from installer scripts
All checks were successful
CI / build (push) Successful in 1m16s
CI / docker (push) Successful in 20s
Replace inline heredoc compose generation with static template files.
Templates are copied to the install dir and composed via COMPOSE_FILE in .env.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 21:53:26 +02:00
hsiegeln
186f7639ad docs: update CLAUDE.md with template-based installer architecture
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 21:11:04 +02:00
hsiegeln
6c7895b0d6 chore(installer): remove generated install output, add to gitignore
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 21:09:30 +02:00
hsiegeln
6170f61eeb refactor(installer): replace ps1 compose generation with template copying
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 21:08:34 +02:00
hsiegeln
2ed527ac74 refactor(installer): replace sh compose generation with template copying
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 21:03:01 +02:00
hsiegeln
cb1f6b8ccf feat(installer): add .env.example with documented variables
Reference .env file documenting all configuration variables across both
deployment modes, with section headers for compose assembly, public access,
credentials, TLS, Docker, provisioning, and monitoring.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 20:59:15 +02:00
hsiegeln
758585cc9a feat(installer): add TLS and monitoring overlay templates
Optional compose overlays: TLS overlay mounts user-supplied certs into
traefik, monitoring overlay replaces the noop bridge with an external
Docker network for Prometheus scraping.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 20:59:10 +02:00
hsiegeln
141b44048c feat(installer): add standalone docker-compose and traefik templates
Standalone mode: server + server-ui services with postgres image override
to stock postgres:16-alpine. Includes traefik-dynamic.yml for default TLS
certificate store configuration.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 20:59:05 +02:00
hsiegeln
3c343f9441 feat(installer): add SaaS docker-compose template
Logto identity provider and cameleer-saas management plane services.
Includes Traefik labels, CORS config, bootstrap healthcheck, and all
provisioning env vars parameterized from .env.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 20:59:00 +02:00
hsiegeln
bdb24f8de6 feat(installer): add infra base docker-compose template
Shared infrastructure base (traefik, postgres, clickhouse) always loaded
regardless of deployment mode. Uses parameterized images, fail-if-unset
password variables, and a noop monitoring network bridge.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 20:58:54 +02:00
hsiegeln
933b56f68f docs: add implementation plan for externalizing compose templates
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 20:54:31 +02:00
hsiegeln
19c463051a docs: add design spec for externalizing docker compose templates
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 20:47:14 +02:00
hsiegeln
41052d01e8 fix: replace admin password fallback defaults with fail-if-unset
All checks were successful
CI / build (push) Successful in 1m15s
CI / docker (push) Successful in 16s
Docker compose templates defaulted to admin/admin when .env was missing.
Now uses :? to fail with a clear error instead of silently using weak creds.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 20:17:46 +02:00
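The `:?` form referenced above is standard POSIX parameter expansion: `${VAR:?msg}` aborts with an error when `VAR` is unset or empty, whereas `${VAR:-default}` silently substitutes a fallback. A small sketch (the variable name is illustrative):

```shell
# ${VAR:?msg} fails loudly instead of falling back to a weak default.
unset ADMIN_PASS
( : "${ADMIN_PASS:?ADMIN_PASS must be set in .env}" ) 2>/dev/null \
  || echo "refused to start"
ADMIN_PASS=s3cret
echo "${ADMIN_PASS:?}"
```

In a compose template, `PASSWORD: ${ADMIN_PASS:?set ADMIN_PASS in .env}` makes `docker compose up` exit with that message instead of starting with `admin`/`admin`.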
hsiegeln
99e75b0a4e fix: update sign-in UI to design-system 0.1.51
All checks were successful
CI / build (push) Successful in 1m15s
CI / docker (push) Successful in 1m31s
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 16:33:00 +02:00
hsiegeln
eb6897bf10 chore: update design-system to 0.1.51 (renamed assets)
Some checks failed
CI / build (push) Failing after 1m9s
CI / docker (push) Has been skipped
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 16:30:50 +02:00
hsiegeln
63c194dab7 chore: rename cameleer3 to cameleer
Some checks failed
CI / build (push) Failing after 18s
CI / docker (push) Has been skipped
Rename Java packages from net.siegeln.cameleer3 to net.siegeln.cameleer,
update all references in workflows, Docker configs, docs, and bootstrap.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 15:28:44 +02:00
hsiegeln
44a0e413e9 fix: include cameleer3-log-appender.jar in runtime base image
All checks were successful
CI / build (push) Successful in 1m20s
CI / docker (push) Successful in 13s
The log appender JAR was missing from the cameleer-runtime-base Docker
image, causing agent log forwarding to silently fail with "No supported
logging framework found, log forwarding disabled". This meant only
container stdout logs (source=container) were captured — no application
or agent logs reached ClickHouse.

CI now downloads the appender JAR from the Maven registry alongside the
agent JAR, and the Dockerfile COPYs it to /app/cameleer3-log-appender.jar
where the server's Docker entrypoint expects it (-Dloader.path for
Spring Boot, -cp for plain Java).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 11:11:04 +02:00
hsiegeln
15306dddc0 fix: force-pull images on install and fix provisioning test assertions
All checks were successful
CI / build (push) Successful in 1m11s
CI / docker (push) Successful in 47s
Installers now use `--pull always --force-recreate` on `docker compose up`
to ensure fresh images are used on every install/reinstall, preventing
stale containers from missing schema changes like db_password.

Fix VendorTenantServiceTest to expect two repository saves in provisioning
tests (one for dbPassword, one for final status).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 08:50:40 +02:00
hsiegeln
6eb848f353 fix: add missing TenantDatabaseService mock to VendorTenantServiceTest
Some checks failed
CI / build (push) Failing after 58s
CI / docker (push) Has been skipped
Constructor gained an 11th parameter (TenantDatabaseService) but the
test was not updated, breaking CI compilation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 08:32:14 +02:00
hsiegeln
d53afe43cc docs: update CLAUDE.md for per-tenant PG isolation and consolidated migrations
Some checks failed
CI / build (push) Failing after 42s
CI / docker (push) Has been skipped
SonarQube Analysis / sonarqube (push) Failing after 33s
- TenantDatabaseService added to key classes
- TenantDataCleanupService now ClickHouse-only
- Per-tenant JDBC URL with currentSchema/ApplicationName in env vars table
- Provisioning flow updated with DB creation step
- Delete flow updated with schema+user drop
- Database migrations section reflects consolidated V001 baseline

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 00:26:36 +02:00
hsiegeln
24a443ef30 refactor: consolidate Flyway migrations into single V001 baseline
Some checks failed
CI / build (push) Failing after 51s
CI / docker (push) Has been skipped
Replace 14 incremental migrations (V001-V015) with a single V001__init.sql
representing the final schema. Tables that were created and later dropped
(environments, api_keys, apps, deployments) are excluded.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 00:24:25 +02:00
hsiegeln
d7eb700860 refactor: move PG cleanup to TenantDatabaseService, keep only ClickHouse
TenantDataCleanupService now handles only ClickHouse GDPR erasure;
the dropPostgresSchema private method is removed and the public method
renamed cleanupClickHouse(). VendorTenantService updated accordingly
with the TODO comment removed.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-15 00:17:00 +02:00
hsiegeln
c1458e4995 feat: create per-tenant PG database during provisioning, drop on delete
Inject TenantDatabaseService; call createTenantDatabase() at the start
of provisionAsync() (stores generated password on TenantEntity), and
dropTenantDatabase() in delete() before GDPR data erasure.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-15 00:16:06 +02:00
hsiegeln
b79a7fe405 feat: construct per-tenant JDBC URL with currentSchema and ApplicationName
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-15 00:14:35 +02:00
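A sketch of what such a URL could look like (host, database, and naming conventions are assumptions based on the commit messages, not the actual provisioner code): `currentSchema` confines the tenant server to its own schema, and `ApplicationName` makes the tenant identifiable in `pg_stat_activity`.

```java
// Hypothetical per-tenant JDBC URL construction (names are illustrative).
public class TenantJdbcUrl {
    static String build(String host, int port, String db, String slug) {
        return "jdbc:postgresql://" + host + ":" + port + "/" + db
                + "?currentSchema=tenant_" + slug          // schema-level isolation
                + "&ApplicationName=cameleer-server-" + slug; // visible in pg_stat_activity
    }

    public static void main(String[] args) {
        System.out.println(build("postgres", 5432, "cameleer_saas", "acme"));
    }
}
```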
hsiegeln
6d6c1f3562 feat: add TenantDatabaseService for per-tenant PG user+schema
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-15 00:13:34 +02:00
hsiegeln
0e3f383cf4 feat: add dbPassword to TenantProvisionRequest
2026-04-15 00:13:27 +02:00
hsiegeln
cd6dd1e5af feat: add dbPassword field to TenantEntity
2026-04-15 00:13:12 +02:00
hsiegeln
dfa2a6bfa2 feat: add db_password column to tenants table (V015)
2026-04-15 00:13:11 +02:00
hsiegeln
a7196ff4c1 docs: per-tenant PostgreSQL isolation implementation plan
8-task plan covering migration, entity change, TenantDatabaseService,
provisioner JDBC URL construction, VendorTenantService integration,
and TenantDataCleanupService refactor.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 00:11:34 +02:00
hsiegeln
17c6723f7e docs: per-tenant PostgreSQL isolation design spec
Per-tenant PG users and schemas for DB-level data isolation.
Each tenant server gets its own credentials and currentSchema/ApplicationName
JDBC parameters, aligned with server team's commit 7a63135.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 00:08:35 +02:00
hsiegeln
91e93696ed fix: improve Infrastructure page readability
All checks were successful
CI / build (push) Successful in 1m11s
CI / docker (push) Successful in 54s
Use Card and KpiStrip design system components, add database icons to
section headers, right-align numeric columns, replace text toggles with
chevron icons, and constrain max width to prevent ultra-wide stretching.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 23:26:43 +02:00
hsiegeln
57e41e407c fix: remove "Open Server Dashboard" link from tenant sidebar
All checks were successful
CI / build (push) Successful in 1m11s
CI / docker (push) Successful in 55s
The server dashboard link in the sidebar footer is premature — tenant
servers may not be provisioned yet and the link target depends on org
context that isn't always available.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 23:16:39 +02:00
hsiegeln
bc46af5cea fix: use configured credentials for tenant schema cleanup
All checks were successful
CI / build (push) Successful in 1m9s
CI / docker (push) Successful in 39s
Same hardcoded dev credentials bug as InfrastructureService —
TenantDataCleanupService.dropPostgresSchema() used "cameleer"/"cameleer_dev"
instead of the provisioning properties, causing schema DROP to fail on
production installs during tenant deletion.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 23:11:16 +02:00
hsiegeln
03fb414981 fix: use configured credentials for infrastructure PostgreSQL queries
All checks were successful
CI / build (push) Successful in 1m8s
CI / docker (push) Successful in 43s
pgConnection() had hardcoded dev credentials ("cameleer"/"cameleer_dev")
instead of using the provisioning properties, causing "password
authentication failed" on production installs where the password is
generated.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 23:01:00 +02:00
327 changed files with 31130 additions and 6691 deletions


@@ -7,6 +7,9 @@ VERSION=latest
 # Public access
 PUBLIC_HOST=localhost
 PUBLIC_PROTOCOL=https
+# Auth domain (Logto). Defaults to PUBLIC_HOST for single-domain setups.
+# Set to a separate subdomain (e.g. auth.cameleer.io) to split auth from the app.
+# AUTH_HOST=localhost
 # Ports
 HTTP_PORT=80
@@ -22,9 +25,14 @@ POSTGRES_DB=cameleer_saas
 CLICKHOUSE_PASSWORD=change_me_in_production
 # Admin user (created by bootstrap)
-SAAS_ADMIN_USER=admin
+# In SaaS mode, this must be an email address (primary user identity).
+# In standalone mode, any username is accepted.
+SAAS_ADMIN_USER=admin@example.com
 SAAS_ADMIN_PASS=change_me_in_production
+# SMTP / email connector configuration is managed at runtime via the vendor
+# admin UI (Email Connector page at /vendor/email). No SMTP env vars needed.
 # TLS (leave empty for self-signed)
 # NODE_TLS_REJECT=0 # Set to 1 when using real certificates
 # CERT_FILE=
@@ -40,8 +48,8 @@ VENDOR_SEED_ENABLED=false
 # DOCKER_GID=0
 # Docker images (override for custom registries)
-# TRAEFIK_IMAGE=gitea.siegeln.net/cameleer/cameleer-traefik
-# POSTGRES_IMAGE=gitea.siegeln.net/cameleer/cameleer-postgres
-# CLICKHOUSE_IMAGE=gitea.siegeln.net/cameleer/cameleer-clickhouse
-# LOGTO_IMAGE=gitea.siegeln.net/cameleer/cameleer-logto
-# CAMELEER_IMAGE=gitea.siegeln.net/cameleer/cameleer-saas
+# TRAEFIK_IMAGE=registry.cameleer.io/cameleer/cameleer-traefik
+# POSTGRES_IMAGE=registry.cameleer.io/cameleer/cameleer-postgres
+# CLICKHOUSE_IMAGE=registry.cameleer.io/cameleer/cameleer-clickhouse
+# LOGTO_IMAGE=registry.cameleer.io/cameleer/cameleer-logto
+# CAMELEER_IMAGE=registry.cameleer.io/cameleer/cameleer-saas


@@ -39,7 +39,7 @@ jobs:
 - name: Build and Test (unit tests only)
   run: >-
-    mvn clean verify -B
+    mvn clean verify -U -B
     -Dsurefire.excludes="**/AuthControllerTest.java,**/TenantControllerTest.java,**/LicenseControllerTest.java,**/AuditRepositoryTest.java,**/CameleerSaasApplicationTest.java,**/EnvironmentControllerTest.java,**/AppControllerTest.java,**/DeploymentControllerTest.java,**/AgentStatusControllerTest.java,**/VendorTenantControllerTest.java,**/TenantPortalControllerTest.java"
 - name: Build sign-in UI
@@ -111,11 +111,11 @@ jobs:
 - name: Build and push runtime base image
   run: |
-    AGENT_VERSION=$(curl -sf "https://gitea.siegeln.net/api/packages/cameleer/maven/com/cameleer3/cameleer3-agent/1.0-SNAPSHOT/maven-metadata.xml" \
+    AGENT_VERSION=$(curl -sf "https://gitea.siegeln.net/api/packages/cameleer/maven/io/cameleer/cameleer-agent/1.0-SNAPSHOT/maven-metadata.xml" \
       | sed -n 's/.*<value>\([^<]*\)<\/value>.*/\1/p' | tail -1)
     echo "Agent version: $AGENT_VERSION"
     curl -sf -o docker/runtime-base/agent.jar \
-      "https://gitea.siegeln.net/api/packages/cameleer/maven/com/cameleer3/cameleer3-agent/1.0-SNAPSHOT/cameleer3-agent-${AGENT_VERSION}-shaded.jar"
+      "https://gitea.siegeln.net/api/packages/cameleer/maven/io/cameleer/cameleer-agent/1.0-SNAPSHOT/cameleer-agent-${AGENT_VERSION}-shaded.jar"
     ls -la docker/runtime-base/agent.jar
     TAGS="-t gitea.siegeln.net/cameleer/cameleer-runtime-base:${{ github.sha }}"
     for TAG in $IMAGE_TAGS; do
@@ -126,6 +126,17 @@ jobs:
       --provenance=false \
       --push docker/runtime-base/
+- name: Build and push runtime-loader image
+  run: |
+    TAGS="-t gitea.siegeln.net/cameleer/cameleer-runtime-loader:${{ github.sha }}"
+    for TAG in $IMAGE_TAGS; do
+      TAGS="$TAGS -t gitea.siegeln.net/cameleer/cameleer-runtime-loader:$TAG"
+    done
+    docker buildx build --platform linux/amd64 \
+      $TAGS \
+      --provenance=false \
+      --push docker/runtime-loader/
 - name: Build and push Logto image
   run: |
     TAGS="-t gitea.siegeln.net/cameleer/cameleer-logto:${{ github.sha }}"

.gitignore (10 changes)

@@ -22,7 +22,15 @@ Thumbs.db
 # Worktrees
 .worktrees/
 # Claude
 .claude/
+.superpowers/
+.playwright-mcp/
-.gitnexus
+# Installer output (generated by install.sh / install.ps1)
+installer/cameleer/
+# Generated by postinstall from @cameleer/design-system
+ui/public/favicon.svg
+docker/runtime-base/agent.jar
+.gitnexus

.gitmodules (new file, 3 lines)
@@ -0,0 +1,3 @@
+[submodule "installer"]
+	path = installer
+	url = https://gitea.siegeln.net/cameleer/cameleer-saas-installer.git


@@ -1,7 +1,7 @@
 <!-- gitnexus:start -->
 # GitNexus — Code Intelligence
-This project is indexed by GitNexus as **cameleer-saas** (2676 symbols, 5768 relationships, 224 execution flows). Use the GitNexus MCP tools to understand code, assess impact, and navigate safely.
+This project is indexed by GitNexus as **cameleer-saas** (3458 symbols, 7429 relationships, 292 execution flows). Use the GitNexus MCP tools to understand code, assess impact, and navigate safely.
 > If any GitNexus tool warns the index is stale, run `npx gitnexus analyze` in terminal first.

CLAUDE.md (355 changes)

@@ -4,338 +4,61 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
## Project
Cameleer SaaS — **vendor management plane** for the Cameleer observability stack. Two personas: **vendor** (platform:admin) manages the platform and provisions tenants; **tenant admin** (tenant:manage) manages their observability instance. The vendor creates tenants, which provisions per-tenant cameleer3-server + UI instances via Docker API. No example tenant — clean slate bootstrap, vendor creates everything.
Cameleer SaaS — **vendor management plane** for the Cameleer observability stack. Three personas: **vendor** (platform:admin) manages the platform and provisions tenants; **tenant admin** (tenant:manage) manages their observability instance; **new user** (authenticated, no scopes) goes through self-service onboarding. Tenants can be created by the vendor OR via self-service sign-up (email registration + onboarding wizard). Each tenant gets per-tenant cameleer-server + UI instances via Docker API.
**Email is the primary user identity** in SaaS mode. `SAAS_ADMIN_USER` IS the email address — there is no separate `SAAS_ADMIN_EMAIL`. The installer enforces email format in SaaS mode (must contain `@`; auto-appends `@<PUBLIC_HOST>` if missing). The bootstrap uses `SAAS_ADMIN_USER` as both the Logto username and primaryEmail. In standalone mode, any username is accepted. Self-service registration (email + password + verification code) is disabled by default and enabled via the vendor UI after configuring an email connector.
## Ecosystem
This repo is the SaaS layer on top of two proven components:
- **cameleer3** (sibling repo) — Java agent using ByteBuddy for zero-code instrumentation of Camel apps. Captures route executions, processor traces, payloads, metrics, and route graph topology. Deploys as `-javaagent` JAR.
- **cameleer3-server** (sibling repo) — Spring Boot observability backend. Receives agent data via HTTP, pushes config/commands via SSE. PostgreSQL + ClickHouse storage. React SPA dashboard. JWT auth with Ed25519 config signing. Docker container orchestration for app deployments.
- **cameleer** (sibling repo) — Java agent using ByteBuddy for zero-code instrumentation of Camel apps. Captures route executions, processor traces, payloads, metrics, and route graph topology. Deploys as `-javaagent` JAR.
- **cameleer-server** (sibling repo) — Spring Boot observability backend. Receives agent data via HTTP, pushes config/commands via SSE. PostgreSQL + ClickHouse storage. React SPA dashboard. JWT auth with Ed25519 config signing. Docker container orchestration for app deployments.
- **cameleer-website** — Marketing site (Astro 5)
- **design-system** — Shared React component library (`@cameleer/design-system` on Gitea npm registry)
Agent-server protocol is defined in `cameleer3/cameleer3-common/PROTOCOL.md`. The agent and server are mature, proven components — this repo wraps them with multi-tenancy, billing, and self-service onboarding.
Agent-server protocol is defined in `cameleer/cameleer-common/PROTOCOL.md`. The agent and server are mature, proven components — this repo wraps them with multi-tenancy, billing, and self-service onboarding.
## Key Classes
## Key Packages
### Java Backend (`src/main/java/net/siegeln/cameleer/saas/`)
### Java Backend (`src/main/java/io/cameleer/saas/`)
**config/** — Security, tenant isolation, web config
- `SecurityConfig.java` — OAuth2 JWT decoder (ES384, issuer/audience validation, scope extraction)
- `TenantIsolationInterceptor.java` — HandlerInterceptor on `/api/**`; JWT org_id -> TenantContext, path variable validation, fail-closed
- `TenantContext.java` — ThreadLocal<UUID> tenant ID storage
- `WebConfig.java` — registers TenantIsolationInterceptor
- `PublicConfigController.java` — GET /api/config (Logto endpoint, SPA client ID, scopes)
- `MeController.java` — GET /api/me (authenticated user, tenant list)
| Package | Purpose | Key classes |
|---------|---------|-------------|
| `config/` | Security, tenant isolation, web config | `SecurityConfig`, `TenantIsolationInterceptor`, `TenantContext`, `PublicConfigController`, `MeController` |
| `tenant/` | Tenant data model | `TenantEntity` (JPA: id, name, slug, tier, status, logto_org_id, db_password) |
| `account/` | Shared user account operations | `AccountService` (profile, password, MFA, passkeys), `AccountController` (`/api/account/*`) |
| `vendor/` | Vendor console (platform:admin) | `VendorTenantService`, `VendorTenantController`, `InfrastructureService`, `EmailConnectorService`, `EmailConnectorController`, `VendorAuthPolicyController`, `VendorAuthPolicyEntity`, `VendorAdminService`, `VendorAdminController` |
| `onboarding/` | Self-service sign-up onboarding | `OnboardingController`, `OnboardingService` |
| `portal/` | Tenant admin portal (org-scoped) | `TenantPortalService` (delegates user-level ops to AccountService), `TenantPortalController` |
| `provisioning/` | Pluggable tenant provisioning | `DockerTenantProvisioner`, `TenantDatabaseService`, `TenantDataCleanupService` |
| `certificate/` | TLS certificate lifecycle | `CertificateService`, `CertificateController`, `TenantCaCertService` |
| `license/` | License management | `LicenseService`, `LicenseController` |
| `identity/` | Logto & server integration | `LogtoManagementClient`, `ServerApiClient` |
| `audit/` | Audit logging | `AuditService` |
**tenant/** — Tenant data model
- `TenantEntity.java` — JPA entity (id, name, slug, tier, status, logto_org_id, stripe IDs, settings JSONB)
### Frontend
**vendor/** — Vendor console (platform:admin only)
- `VendorTenantService.java` — orchestrates tenant creation (sync: DB + Logto + license, async: Docker provisioning + config push), suspend/activate, delete, restart server, upgrade server (force-pull + re-provision), license renewal
- `VendorTenantController.java` — REST at `/api/vendor/tenants` (platform:admin required). List endpoint returns `VendorTenantSummary` with fleet health data (agentCount, environmentCount, agentLimit) fetched in parallel via `CompletableFuture`.
- `InfrastructureService.java` — raw JDBC queries against shared PostgreSQL and ClickHouse for per-tenant infrastructure monitoring (schema sizes, table stats, row counts, disk usage)
- `InfrastructureController.java` — REST at `/api/vendor/infrastructure` (platform:admin required). PostgreSQL and ClickHouse overview with per-tenant breakdown.
**portal/** — Tenant admin portal (org-scoped)
- `TenantPortalService.java` — customer-facing: dashboard (health + agent/env counts from server via M2M), license, SSO connectors, team, settings (public endpoint URL), server restart/upgrade, password management (own + team + server admin)
- `TenantPortalController.java` — REST at `/api/tenant/*` (org-scoped, includes CA cert management at `/api/tenant/ca`, password endpoints at `/api/tenant/password` and `/api/tenant/server/admin-password`)
**provisioning/** — Pluggable tenant provisioning
- `TenantProvisioner.java` — pluggable interface (like server's RuntimeOrchestrator)
- `DockerTenantProvisioner.java` — Docker implementation, creates per-tenant server + UI containers. `upgrade(slug)` force-pulls latest images and removes server+UI containers (preserves app containers, volumes, networks) for re-provisioning. `remove(slug)` does full cleanup: label-based container removal, env networks, tenant network, JAR volume.
- `TenantDataCleanupService.java` — GDPR data erasure on tenant delete: drops PostgreSQL `tenant_{slug}` schema, deletes ClickHouse data across all tables with `tenant_id` column
- `TenantProvisionerAutoConfig.java` — auto-detects Docker socket
- `DockerCertificateManager.java` — file-based cert management with atomic `.wip` swap (Docker volume)
- `DisabledCertificateManager.java` — no-op when certs dir unavailable
- `CertificateManagerAutoConfig.java` — auto-detects `/certs` directory
**certificate/** — TLS certificate lifecycle management
- `CertificateManager.java` — provider interface (Docker now, K8s later)
- `CertificateService.java` — orchestrates stage/activate/restore/discard, DB metadata, tenant CA staleness
- `CertificateController.java` — REST at `/api/vendor/certificates` (platform:admin required)
- `CertificateEntity.java` — JPA entity (status: ACTIVE/STAGED/ARCHIVED, subject, fingerprint, etc.)
- `CertificateStartupListener.java` — seeds DB from filesystem on boot (for bootstrap-generated certs)
- `TenantCaCertEntity.java` — JPA entity for per-tenant CA certs (PEM stored in DB, multiple per tenant)
- `TenantCaCertRepository.java` — queries by tenant, status, all active across tenants
- `TenantCaCertService.java` — stage/activate/delete tenant CAs, rebuilds aggregated `ca.pem` on changes
**license/** — License management
- `LicenseEntity.java` — JPA entity (id, tenant_id, tier, features JSONB, limits JSONB, expires_at)
- `LicenseService.java` — generation, validation, feature/limit lookups
- `LicenseController.java` — POST issue, GET verify, DELETE revoke
**identity/** — Logto & server integration
- `LogtoConfig.java` — Logto endpoint, M2M credentials (reads from bootstrap file)
- `LogtoManagementClient.java` — Logto Management API calls (create org, create user, add to org, get user, SSO connectors, JIT provisioning, password updates via `PATCH /api/users/{id}/password`)
- `ServerApiClient.java` — M2M client for cameleer3-server API (Logto M2M token, `X-Cameleer-Protocol-Version: 1` header). Health checks, license/OIDC push, agent count, environment count, server admin password reset per tenant server.
**audit/** — Audit logging
- `AuditEntity.java` — JPA entity (actor_id, actor_email, tenant_id, action, resource, status)
- `AuditService.java` — log audit events (TENANT_CREATE, TENANT_UPDATE, etc.); auto-resolves actor name from Logto when actorEmail is null (cached in-memory)
### React Frontend (`ui/src/`)
- `main.tsx` — React 19 root
- `router.tsx` — `/vendor/*` + `/tenant/*` with `RequireScope` guards and `LandingRedirect` that waits for scopes
- `Layout.tsx` — persona-aware sidebar: vendor sees expandable "Vendor" section (Tenants, Audit Log, Certificates, Infrastructure, Identity/Logto), tenant admin sees Dashboard/License/SSO/Team/Audit/Settings
- `OrgResolver.tsx` — merges global + org-scoped token scopes (vendor's platform:admin is global)
- `config.ts` — fetch Logto config from /platform/api/config
- `auth/useAuth.ts` — auth hook (isAuthenticated, logout, signIn)
- `auth/useOrganization.ts` — Zustand store for current tenant
- `auth/useScopes.ts` — decode JWT scopes, hasScope()
- `auth/ProtectedRoute.tsx` — guard (redirects to /login)
- **Vendor pages**: `VendorTenantsPage.tsx`, `CreateTenantPage.tsx`, `TenantDetailPage.tsx`, `VendorAuditPage.tsx`, `CertificatesPage.tsx`
- **Tenant pages**: `TenantDashboardPage.tsx` (restart + upgrade server), `TenantLicensePage.tsx`, `SsoPage.tsx`, `TeamPage.tsx` (reset member passwords), `TenantAuditPage.tsx`, `SettingsPage.tsx` (change own password, reset server admin password)
### Custom Sign-in UI (`ui/sign-in/src/`)
- `SignInPage.tsx` — form with @cameleer/design-system components
- `experience-api.ts` — Logto Experience API client (4-step: init -> verify -> identify -> submit)
- **`ui/src/`** — React 19 SPA at `/platform/*` (vendor + tenant admin pages)
- **`ui/sign-in/`** — Custom Logto sign-in UI (built into `cameleer-logto` Docker image)
## Architecture Context
The SaaS platform is a **vendor management plane**. It does not proxy requests to servers — instead it provisions dedicated per-tenant cameleer3-server instances via Docker API. Each tenant gets isolated server + UI containers with their own database schemas, networks, and Traefik routing.
The SaaS platform is a **vendor management plane**. It does not proxy requests to servers — instead it provisions dedicated per-tenant cameleer-server instances via Docker API. Each tenant gets isolated server + UI containers with their own database schemas, networks, and Traefik routing.
### Routing (single-domain, path-based via Traefik)
All services on one hostname. Infrastructure containers (Traefik, Logto) use `PUBLIC_HOST` + `PUBLIC_PROTOCOL` env vars directly. The SaaS app reads these via `CAMELEER_SAAS_PROVISIONING_PUBLICHOST` / `CAMELEER_SAAS_PROVISIONING_PUBLICPROTOCOL` (Spring Boot properties `cameleer.saas.provisioning.publichost` / `cameleer.saas.provisioning.publicprotocol`).
| Path | Target | Notes |
|------|--------|-------|
| `/platform/*` | cameleer-saas:8080 | SPA + API (`server.servlet.context-path: /platform`) |
| `/platform/vendor/*` | (SPA routes) | Vendor console (platform:admin) |
| `/platform/tenant/*` | (SPA routes) | Tenant admin portal (org-scoped) |
| `/t/{slug}/*` | per-tenant server-ui | Provisioned tenant UI containers (Traefik labels) |
| `/` | redirect -> `/platform/` | Via `docker/traefik-dynamic.yml` |
| `/*` (catch-all) | cameleer-logto:3001 (priority=1) | Custom sign-in UI, OIDC, interaction |
- SPA assets at `/_app/` (Vite `assetsDir: '_app'`) to avoid conflict with Logto's `/assets/`
- Logto `ENDPOINT` = `${PUBLIC_PROTOCOL}://${PUBLIC_HOST}` (same domain, same origin)
- TLS: `traefik-certs` init container generates self-signed cert (dev) or copies user-supplied cert via `CERT_FILE`/`KEY_FILE`/`CA_FILE` env vars. Default cert configured in `docker/traefik-dynamic.yml` (NOT static `traefik.yml` — Traefik v3 ignores `tls.stores.default` in static config). Runtime cert replacement via vendor UI (stage/activate/restore). ACME for production (future). Server containers import `/certs/ca.pem` into JVM truststore at startup via `docker-entrypoint.sh` for OIDC trust.
- Root `/` -> `/platform/` redirect via Traefik file provider (`docker/traefik-dynamic.yml`)
- LoginPage auto-redirects to Logto OIDC (no intermediate button)
- Per-tenant server containers get Traefik labels for `/t/{slug}/*` routing at provisioning time
### Docker Networks
Compose-defined networks:
| Network | Name on Host | Purpose |
|---------|-------------|---------|
| `cameleer` | `cameleer-saas_cameleer` | Compose default — shared services (DB, Logto, SaaS) |
| `cameleer-traefik` | `cameleer-traefik` (fixed `name:`) | Traefik + provisioned tenant containers |
Per-tenant networks (created dynamically by `DockerTenantProvisioner`):
| Network | Name Pattern | Purpose |
|---------|-------------|---------|
| Tenant network | `cameleer-tenant-{slug}` | Internal bridge, no internet — isolates tenant server + apps |
| Environment network | `cameleer-env-{tenantId}-{envSlug}` | Tenant-scoped (includes tenantId to prevent slug collision across tenants) |
Server containers join three networks: tenant network (primary), shared services network (`cameleer`), and traefik network. Apps deployed by the server use the tenant network as primary.
**IMPORTANT:** Dynamically-created containers MUST have `traefik.docker.network=cameleer-traefik` label. Traefik's Docker provider defaults to `network: cameleer` (compose-internal name) for IP resolution, which doesn't match dynamically-created containers connected via Docker API using the host network name (`cameleer-saas_cameleer`). Without this label, Traefik returns 504 Gateway Timeout for `/t/{slug}/api/*` paths.
### Custom sign-in UI (`ui/sign-in/`)
Separate Vite+React SPA replacing Logto's default sign-in page. Visually matches cameleer3-server LoginPage.
- Built as custom Logto Docker image (`cameleer-logto`): `ui/sign-in/Dockerfile` = node build stage + `FROM ghcr.io/logto-io/logto:latest` + COPY dist over `/etc/logto/packages/experience/dist/`
- Uses `@cameleer/design-system` components (Card, Input, Button, FormField, Alert)
- Authenticates via Logto Experience API (4 steps — init -> verify password -> identify -> submit — then redirect)
- `CUSTOM_UI_PATH` env var does NOT work for Logto OSS — must volume-mount or replace the experience dist directory
- Favicon bundled in `ui/sign-in/public/favicon.svg` (served by Logto, not SaaS)
### Auth enforcement
- All API endpoints enforce OAuth2 scopes via `@PreAuthorize("hasAuthority('SCOPE_xxx')")` annotations
- Tenant isolation enforced by `TenantIsolationInterceptor` (a single `HandlerInterceptor` on `/api/**` that resolves JWT org_id to TenantContext and validates `{tenantId}`, `{environmentId}`, `{appId}` path variables; fail-closed, platform admins bypass)
- 13 OAuth2 scopes on the Logto API resource (`https://api.cameleer.local`): 10 platform scopes + 3 server scopes (`server:admin`, `server:operator`, `server:viewer`), served to the frontend from `GET /platform/api/config`
- Server scopes map to server RBAC roles via JWT `scope` claim (SaaS platform path) or `roles` claim (server-ui OIDC login path)
- Org roles: `owner` -> `server:admin` + `tenant:manage`, `operator` -> `server:operator`, `viewer` -> `server:viewer`
- `saas-vendor` global role created by bootstrap Phase 12 and always assigned to the admin user — has `platform:admin` + all tenant scopes
- Custom `JwtDecoder` in `SecurityConfig.java` — ES384 algorithm, `at+jwt` token type, split issuer-uri (string validation) / jwk-set-uri (Docker-internal fetch), audience validation (`https://api.cameleer.local`)
- Logto Custom JWT (Phase 7b in bootstrap) injects a `roles` claim into access tokens based on org roles and global roles — this makes role data available to the server without Logto-specific code
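The org-role to scope mapping above reduces to a small lookup; sketched here with the role and scope names from the list (a sketch, not the actual Logto configuration):

```shell
# Map a Logto org role to the scopes listed above.
role_to_scopes() {
  case "$1" in
    owner)    echo "server:admin tenant:manage" ;;
    operator) echo "server:operator" ;;
    viewer)   echo "server:viewer" ;;
    *)        echo "" ;;
  esac
}
role_to_scopes owner   # → server:admin tenant:manage
```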
### Auth routing by persona
| Persona | Logto role | Key scope | Landing route |
|---------|-----------|-----------|---------------|
| SaaS admin | `saas-vendor` (global) | `platform:admin` | `/vendor/tenants` |
| Tenant admin | org `owner` | `tenant:manage` | `/tenant` (dashboard) |
| Regular user (operator/viewer) | org member | `server:operator` or `server:viewer` | Redirected to server dashboard directly |
- `LandingRedirect` component waits for scopes to load, then routes to the correct persona landing page
- `RequireScope` guard on route groups enforces scope requirements
- SSO bridge: Logto session carries over to provisioned server's OIDC flow (Traditional Web App per tenant)
### Per-tenant server env vars (set by DockerTenantProvisioner)
These env vars are injected into provisioned per-tenant server containers:
| Env var | Value | Purpose |
|---------|-------|---------|
| `CAMELEER_SERVER_SECURITY_OIDCISSUERURI` | `${PUBLIC_PROTOCOL}://${PUBLIC_HOST}/oidc` | Token issuer claim validation |
| `CAMELEER_SERVER_SECURITY_OIDCJWKSETURI` | `http://cameleer-logto:3001/oidc/jwks` | Docker-internal JWK fetch |
| `CAMELEER_SERVER_SECURITY_OIDCTLSSKIPVERIFY` | `true` (conditional) | Skip cert verify for OIDC discovery; only set when no `/certs/ca.pem` exists. When ca.pem exists, the server's `docker-entrypoint.sh` imports it into the JVM truststore instead. |
| `CAMELEER_SERVER_SECURITY_OIDCAUDIENCE` | `https://api.cameleer.local` | JWT audience validation for OIDC tokens |
| `CAMELEER_SERVER_SECURITY_CORSALLOWEDORIGINS` | `${PUBLIC_PROTOCOL}://${PUBLIC_HOST}` | Allow browser requests through Traefik |
| `CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKEN` | (generated) | Bootstrap auth token for M2M communication |
| `CAMELEER_SERVER_RUNTIME_ENABLED` | `true` | Enable Docker orchestration |
| `CAMELEER_SERVER_RUNTIME_SERVERURL` | `http://cameleer-server-{slug}:8081` | Per-tenant server URL (DNS alias on tenant network) |
| `CAMELEER_SERVER_RUNTIME_ROUTINGDOMAIN` | `${PUBLIC_HOST}` | Domain for Traefik routing labels |
| `CAMELEER_SERVER_RUNTIME_ROUTINGMODE` | `path` | `path` or `subdomain` routing |
| `CAMELEER_SERVER_RUNTIME_JARSTORAGEPATH` | `/data/jars` | Directory for uploaded JARs |
| `CAMELEER_SERVER_RUNTIME_DOCKERNETWORK` | `cameleer-tenant-{slug}` | Primary network for deployed app containers |
| `CAMELEER_SERVER_RUNTIME_JARDOCKERVOLUME` | `cameleer-jars-{slug}` | Docker volume name for JAR sharing between server and deployed containers |
| `CAMELEER_SERVER_TENANT_ID` | (tenant UUID) | Tenant identifier for data isolation |
| `CAMELEER_SERVER_SECURITY_INFRASTRUCTUREENDPOINTS` | `false` | Hides Database/ClickHouse admin from tenant admins |
| `BASE_PATH` (server-ui) | `/t/{slug}` | React Router basename + `<base>` tag |
| `CAMELEER_API_URL` (server-ui) | `http://cameleer-server-{slug}:8081` | Nginx upstream proxy target (NOT `API_URL` — image uses `${CAMELEER_API_URL}`) |
### Per-tenant volume mounts (set by DockerTenantProvisioner)
| Mount | Container path | Purpose |
|-------|---------------|---------|
| `/var/run/docker.sock` | `/var/run/docker.sock` | Docker socket for app deployment orchestration |
| `cameleer-jars-{slug}` (volume, via `CAMELEER_SERVER_RUNTIME_JARDOCKERVOLUME`) | `/data/jars` | Shared JAR storage — server writes, deployed app containers read |
| `cameleer-saas_certs` (volume, ro) | `/certs` | Platform TLS certs + CA bundle for OIDC trust |
### SaaS app configuration (env vars for cameleer-saas itself)
SaaS properties use the `cameleer.saas.*` prefix (env vars: `CAMELEER_SAAS_*`). Two groups:
**Identity** (`cameleer.saas.identity.*` / `CAMELEER_SAAS_IDENTITY_*`):
- Logto endpoint, M2M credentials, bootstrap file path — used by `LogtoConfig.java`
**Provisioning** (`cameleer.saas.provisioning.*` / `CAMELEER_SAAS_PROVISIONING_*`):
| Env var | Spring property | Purpose |
|---------|----------------|---------|
| `CAMELEER_SAAS_PROVISIONING_SERVERIMAGE` | `cameleer.saas.provisioning.serverimage` | Docker image for per-tenant server containers |
| `CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE` | `cameleer.saas.provisioning.serveruiimage` | Docker image for per-tenant UI containers |
| `CAMELEER_SAAS_PROVISIONING_NETWORKNAME` | `cameleer.saas.provisioning.networkname` | Shared services Docker network (compose default) |
| `CAMELEER_SAAS_PROVISIONING_TRAEFIKNETWORK` | `cameleer.saas.provisioning.traefiknetwork` | Traefik Docker network for routing |
| `CAMELEER_SAAS_PROVISIONING_PUBLICHOST` | `cameleer.saas.provisioning.publichost` | Public hostname (same value as infrastructure `PUBLIC_HOST`) |
| `CAMELEER_SAAS_PROVISIONING_PUBLICPROTOCOL` | `cameleer.saas.provisioning.publicprotocol` | Public protocol (same value as infrastructure `PUBLIC_PROTOCOL`) |
**Note:** `PUBLIC_HOST` and `PUBLIC_PROTOCOL` remain as infrastructure env vars for Traefik and Logto containers. The SaaS app reads its own copies via the `CAMELEER_SAAS_PROVISIONING_*` prefix. `LOGTO_ENDPOINT` and `LOGTO_DB_PASSWORD` are infrastructure env vars for the Logto service and are unchanged.
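The env-var column maps to the Spring-property column via Spring Boot relaxed binding; a simplified sketch of the conversion (real relaxed binding handles more cases, e.g. camelCase and list indices):

```shell
# Simplified Spring relaxed binding: uppercase env var -> dotted property.
env_to_property() {
  echo "$1" | tr '[:upper:]' '[:lower:]' | tr '_' '.'
}
env_to_property CAMELEER_SAAS_PROVISIONING_SERVERIMAGE
# → cameleer.saas.provisioning.serverimage
```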
### Server OIDC role extraction (two paths)
| Path | Token type | Role source | How it works |
|------|-----------|-------------|--------------|
| SaaS platform -> server API | Logto org-scoped access token | `scope` claim | `JwtAuthenticationFilter.extractRolesFromScopes()` reads `server:admin` from scope |
| Server-ui SSO login | Logto JWT access token (via Traditional Web App) | `roles` claim | `OidcTokenExchanger` decodes access_token, reads `roles` injected by Custom JWT |
The server's OIDC config (`OidcConfig`) includes `audience` (RFC 8707 resource indicator) and `additionalScopes`. The `audience` is sent as `resource` in both the authorization request and token exchange, which makes Logto return a JWT access token instead of opaque. The Custom JWT script maps org roles to `roles: ["server:admin"]`.
**CRITICAL:** `additionalScopes` MUST include `urn:logto:scope:organizations` and `urn:logto:scope:organization_roles` — without these, Logto doesn't populate `context.user.organizationRoles` in the Custom JWT script, so the `roles` claim is empty and all users get `defaultRoles` (VIEWER). The server's `OidcAuthController.applyClaimMappings()` uses OIDC token roles (from Custom JWT) as fallback when no DB claim mapping rules exist: claim mapping rules > OIDC token roles > defaultRoles.
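The fallback chain in `applyClaimMappings()` can be sketched as follows (argument names are hypothetical, not the actual Java signature):

```shell
# Precedence: DB claim-mapping rules > OIDC token roles > defaultRoles.
resolve_roles() {
  mapped="$1"; token_roles="$2"; defaults="$3"
  if [ -n "$mapped" ]; then echo "$mapped"
  elif [ -n "$token_roles" ]; then echo "$token_roles"
  else echo "$defaults"; fi
}
resolve_roles "" "server:admin" "VIEWER"   # → server:admin
resolve_roles "" "" "VIEWER"               # → VIEWER
```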
### Deployment pipeline
App deployment is handled by the cameleer3-server's `DeploymentExecutor` (7-stage async flow):
1. PRE_FLIGHT — validate config, check JAR exists
2. PULL_IMAGE — pull base image if missing
3. CREATE_NETWORK — ensure cameleer-traefik and cameleer-env-{tenantId}-{envSlug} networks
4. START_REPLICAS — create N containers with Traefik labels
5. HEALTH_CHECK — poll `/cameleer/health` on agent port 9464
6. SWAP_TRAFFIC — stop old deployment (blue/green)
7. COMPLETE — mark RUNNING or DEGRADED
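The HEALTH_CHECK stage is essentially a bounded poll loop; a generic sketch (the real executor polls `/cameleer/health` on agent port 9464 over the Docker network — the curl line below is only an example invocation):

```shell
# Run a check command until it succeeds or the attempt budget is spent.
wait_healthy() {
  check="$1"; tries="${2:-30}"; delay="${3:-2}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if sh -c "$check" >/dev/null 2>&1; then echo healthy; return 0; fi
    i=$((i + 1)); sleep "$delay"
  done
  echo timeout; return 1
}
# e.g.: wait_healthy "curl -fsS http://replica:9464/cameleer/health" 30 2
wait_healthy true 3 0   # → healthy
```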
Key files:
- `DeploymentExecutor.java` (in cameleer3-server) — async staged deployment
- `DockerRuntimeOrchestrator.java` (in cameleer3-server) — Docker client, container lifecycle
- `docker/runtime-base/Dockerfile` — base image with agent JAR, maps env vars to `-D` system properties
- `ServerApiClient.java` — M2M token acquisition for SaaS->server API calls (agent status). Uses `X-Cameleer-Protocol-Version: 1` header
- Docker socket access: `group_add: ["0"]` in docker-compose.dev.yml (not root group membership in Dockerfile)
- Network: deployed containers join `cameleer-tenant-{slug}` (primary, isolation) + `cameleer-traefik` (routing) + `cameleer-env-{tenantId}-{envSlug}` (environment isolation)
### Bootstrap (`docker/logto-bootstrap.sh`)
Idempotent script run inside the Logto container entrypoint. **Clean slate** — no example tenant, no viewer user, no server configuration. Phases:
1. Wait for Logto health (no server to wait for — servers are provisioned per-tenant)
2. Get Management API token (reads `m-default` secret from DB)
3. Create Logto apps (SPA, Traditional Web App with `skipConsent`, M2M with Management API role + server API role)
3b. Create API resource scopes (10 platform + 3 server scopes)
4. Create org roles (owner, operator, viewer with API resource scope assignments) + M2M server role (`cameleer-m2m-server` with `server:admin` scope)
5. Create admin user (SaaS admin with Logto console access)
7b. Configure Logto Custom JWT for access tokens (maps org roles -> `roles` claim: owner->server:admin, operator->server:operator, viewer->server:viewer; saas-vendor global role -> server:admin)
8. Configure Logto sign-in branding (Cameleer colors `#C6820E`/`#D4941E`, logo from `/platform/logo.svg`)
9. Cleanup seeded Logto apps
10. Write bootstrap results to `/data/logto-bootstrap.json`
12. Create `saas-vendor` global role with all API scopes and assign to admin user (always runs — admin IS the platform admin).
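Idempotency for a bootstrap script like this is commonly achieved by making each phase skippable once its result exists; a marker-file sketch (the marker directory is hypothetical — the real script re-checks Logto state via the Management API instead):

```shell
# Run a phase once; skip it on re-runs when its marker file exists.
run_phase() {
  name="$1"; shift
  marker="${MARKER_DIR:-/tmp/bootstrap-markers}/$name.done"
  if [ -f "$marker" ]; then echo "skip $name"; return 0; fi
  "$@" || return 1
  mkdir -p "$(dirname "$marker")" && : > "$marker"
  echo "done $name"
}
MARKER_DIR="$(mktemp -d)"
run_phase create-org-roles true   # → done create-org-roles
run_phase create-org-roles true   # → skip create-org-roles
```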
The multi-tenant compose stack is: Traefik + PostgreSQL + ClickHouse + Logto (with bootstrap entrypoint) + cameleer-saas. No `cameleer3-server` or `cameleer3-server-ui` in compose — those are provisioned per-tenant by `DockerTenantProvisioner`.
### Deployment Modes (installer)
The installer (`installer/install.sh`) supports two deployment modes:
| | Multi-tenant SaaS (`DEPLOYMENT_MODE=saas`) | Standalone (`DEPLOYMENT_MODE=standalone`) |
|---|---|---|
| **Containers** | traefik, postgres, clickhouse, logto, cameleer-saas | traefik, postgres, clickhouse, server, server-ui |
| **Auth** | Logto OIDC (SaaS admin + tenant users) | Local auth (built-in admin, no identity provider) |
| **Tenant management** | SaaS admin creates/manages tenants via UI | Single server instance, no fleet management |
| **PostgreSQL** | `cameleer-postgres` image (multi-DB init) | Stock `postgres:16-alpine` (server creates schema via Flyway) |
| **Use case** | Platform vendor managing multiple customers | Single customer running the product directly |
Standalone mode generates a simpler compose with the server running directly. No Logto, no SaaS management plane, no bootstrap. The admin logs in with local credentials at `/`.
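The mode switch reduces to selecting a service set; sketched below with the container names from the table (the actual installer generates full compose files, not just a list):

```shell
# Pick the compose service set for a deployment mode.
services_for_mode() {
  case "$1" in
    saas)       echo "traefik postgres clickhouse logto cameleer-saas" ;;
    standalone) echo "traefik postgres clickhouse server server-ui" ;;
    *)          echo "unknown mode: $1" >&2; return 1 ;;
  esac
}
services_for_mode standalone   # → traefik postgres clickhouse server server-ui
```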
### Tenant Provisioning Flow
When SaaS admin creates a tenant via `VendorTenantService`:
**Synchronous (in `createAndProvision`):**
1. Create `TenantEntity` (status=PROVISIONING) + Logto organization
2. Create admin user in Logto with owner org role (if credentials provided)
3. Register OIDC redirect URIs for `/t/{slug}/oidc/callback` on Logto Traditional Web App
5. Generate license (tier-appropriate, 365 days)
6. Return immediately — UI shows provisioning spinner, polls via `refetchInterval`
**Asynchronous (in `provisionAsync`, `@Async`):**
7. Create tenant-isolated Docker network (`cameleer-tenant-{slug}`)
8. Create server container with env vars, Traefik labels (`traefik.docker.network`), health check, Docker socket bind, JAR volume, certs volume (ro)
9. Create UI container with `CAMELEER_API_URL`, `BASE_PATH`, Traefik strip-prefix labels
10. Wait for health check (`/api/v1/health`, not `/actuator/health` which requires auth)
11. Push license token to server via M2M API
12. Push OIDC config (Traditional Web App credentials + `additionalScopes: [urn:logto:scope:organizations, urn:logto:scope:organization_roles]`) to server for SSO
13. Update tenant status -> ACTIVE (or set `provisionError` on failure)
**Server restart** (available to SaaS admin + tenant admin):
- `POST /api/vendor/tenants/{id}/restart` (SaaS admin) and `POST /api/tenant/server/restart` (tenant)
- Calls `TenantProvisioner.stop(slug)` then `start(slug)` — restarts server + UI containers only (same image)
**Server upgrade** (available to SaaS admin + tenant admin):
- `POST /api/vendor/tenants/{id}/upgrade` (SaaS admin) and `POST /api/tenant/server/upgrade` (tenant)
- Calls `TenantProvisioner.upgrade(slug)` — removes server + UI containers, force-pulls latest images (preserves app containers, volumes, networks), then `provisionAsync()` re-creates containers with the new image + pushes license + OIDC config
**Tenant delete** cleanup:
- `DockerTenantProvisioner.remove(slug)` — label-based container removal (`cameleer.tenant={slug}`), env network cleanup, tenant network removal, JAR volume removal
- `TenantDataCleanupService.cleanup(slug)` — drops PostgreSQL `tenant_{slug}` schema, deletes ClickHouse data (GDPR)
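Label-based removal boils down to filtering containers by the `cameleer.tenant={slug}` label; a pure-text sketch of that selection (the real provisioner does this through the Docker API, not by parsing CLI output):

```shell
# Given "name<TAB>labels" lines, print names carrying cameleer.tenant=<slug>.
select_tenant_containers() {
  awk -v pat="cameleer.tenant=$1" -F'\t' 'index($2, pat) { print $1 }'
}
printf 'srv-acme\tcameleer.tenant=acme\nui-acme\tcameleer.tenant=acme\nsrv-beta\tcameleer.tenant=beta\n' \
  | select_tenant_containers acme
```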
**Password management** (tenant portal):
- `POST /api/tenant/password` — tenant admin changes own Logto password (via `@AuthenticationPrincipal` JWT subject)
- `POST /api/tenant/team/{userId}/password` — tenant admin resets a team member's Logto password (validates org membership first)
- `POST /api/tenant/server/admin-password` — tenant admin resets the server's built-in local admin password (via M2M API to `POST /api/v1/admin/users/user:admin/password`)
For detailed architecture docs, see the directory-scoped CLAUDE.md files (loaded automatically when editing code in that directory):
- **Provisioning flow, env vars, lifecycle** → `src/.../provisioning/CLAUDE.md`
- **Auth, scopes, JWT, OIDC** → `src/.../config/CLAUDE.md`
- **Docker, routing, networks, bootstrap, deployment pipeline** → `docker/CLAUDE.md`
- **Installer, deployment modes, compose templates** → `installer/CLAUDE.md` (git submodule: `cameleer-saas-installer`)
- **Frontend, sign-in UI** → `ui/CLAUDE.md`
## Database Migrations
PostgreSQL (Flyway): `src/main/resources/db/migration/`
- V001 — consolidated baseline: tenants (with db_password, server_endpoint, provision_error, ca_applied_at), licenses, audit_log, certificates, tenant_ca_certs
- V002 — license minter: signing_keys table, tier renames, license label + grace period
- V003 — passkey MFA: vendor_auth_policy single-row config table (mfa_mode, passkey_enabled, passkey_mode)
## Related Conventions
- Docker images: CI builds and pushes all images — Dockerfiles use multi-stage builds, no local builds needed
- `cameleer-saas` — SaaS vendor management plane (frontend + JAR baked in)
- `cameleer-logto` — custom Logto with sign-in UI baked in
- `cameleer-server` / `cameleer-server-ui` — provisioned per-tenant (not in compose, created by `DockerTenantProvisioner`)
- `cameleer-runtime-base` — base image for deployed apps (agent JAR + `cameleer-log-appender.jar` + JRE). CI downloads latest agent and log appender SNAPSHOTs from Gitea Maven registry. The Dockerfile ENTRYPOINT is overridden by `DockerRuntimeOrchestrator` at container creation; agent config uses `CAMELEER_AGENT_*` env vars set by `DeploymentExecutor`.
- `cameleer-runtime-loader` (`docker/runtime-loader/`) — tiny init-container image (busybox + 26-line `entrypoint.sh`) consumed as a sidecar by `DockerRuntimeOrchestrator` in **cameleer-server**. Per-replica: fetches the tenant JAR from a signed URL into a named volume RW-mounted at `/app/jars`, then exits 0; the main runtime container mounts the same volume RO. Source moved here from cameleer-server in April 2026 to colocate with the other infra/sidecar images. **Contract is owned by cameleer-server** (env vars `ARTIFACT_URL` + `ARTIFACT_EXPECTED_SIZE`, output path `/app/jars/app.jar`, exit 0/non-zero semantics) — don't change those without a coordinated commit on the cameleer-server side. cameleer-server's `LoaderHardeningIT` is the cross-repo regression guard; it pulls `:latest` and asserts exit 0 under the orchestrator's hardening shape.
- Docker builds: `--no-cache`, `--provenance=false` for Gitea compatibility
- `docker-compose.yml` (root) — thin dev overlay (ports, volume mounts, `SPRING_PROFILES_ACTIVE: dev`). Chained on top of production templates from the installer submodule via `COMPOSE_FILE` in `.env`.
- Installer is a **git submodule** at `installer/` pointing to `cameleer/cameleer-saas-installer` (public repo). Compose templates live there — single source of truth, no duplication. Run `git submodule update --remote installer` to pull template updates.
- Design system: import from `@cameleer/design-system` (Gitea npm registry)
## Disabled Skills
<!-- gitnexus:start -->
# GitNexus — Code Intelligence
This project is indexed by GitNexus as **cameleer-saas** (3624 symbols, 7877 relationships, 300 execution flows). Use the GitNexus MCP tools to understand code, assess impact, and navigate safely.
> If any GitNexus tool warns the index is stale, run `npx gitnexus analyze` in terminal first.


WORKDIR /build
COPY .mvn/ .mvn/
COPY mvnw pom.xml ./
# Cache deps — BuildKit cache mount persists across --no-cache builds
RUN --mount=type=cache,target=/root/.m2/repository ./mvnw dependency:go-offline -U -B || true
COPY src/ src/
COPY --from=frontend /ui/dist/ src/main/resources/static/
RUN --mount=type=cache,target=/root/.m2/repository ./mvnw package -DskipTests -U -B
# Runtime: BellSoft Liberica JRE 21 on Alpaquita Linux (glibc, minimal, 199 MB)
FROM bellsoft/liberica-runtime-container:jre-21-slim-glibc
WORKDIR /app
RUN mkdir -p /data/jars && chown -R nobody:nobody /data /app
COPY --chown=nobody:nobody --from=build /build/target/*.jar app.jar
USER nobody
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]


The platform runs as a Docker Compose stack:
*Ports exposed to host only with `docker-compose.dev.yml` overlay.
Per-tenant `cameleer-server` and `cameleer-server-ui` containers are provisioned dynamically by `DockerTenantProvisioner` — they are NOT part of the compose stack.
## Installation
To disable routing, set `exposedPort` to `null`.
### View the Observability Dashboard
The cameleer-server React SPA dashboard is available at:
```
http://localhost/dashboard
```
This shows execution traces, route topology graphs, metrics, and logs for all deployed apps.
### Check Agent & Observability Status
```bash
# Is the agent registered with cameleer-server?
curl "http://localhost:8080/api/apps/$APP_ID/agent-status" \
-H "Authorization: Bearer $TOKEN"
# Returns: registered, state (ACTIVE/STALE/DEAD/UNKNOWN), routeIds
```

Query params: `since`, `until` (ISO timestamps), `limit` (default 500), `stream`.
### Dashboard
| Path | Description |
|------|-------------|
| `/dashboard` | cameleer-server observability dashboard (forward-auth protected) |
### Vendor: Certificates (platform:admin)
| Method | Path | Description |
Output goes to `src/main/resources/static/` (configured in `vite.config.ts`).
### SPA Routing
Spring Boot serves `index.html` for all non-API routes via `SpaController.java`. React Router handles client-side routing. The SPA lives at `/`, while the observability dashboard (cameleer-server) is at `/dashboard`.
## Development
**Ephemeral key warnings**: `No Ed25519 key files configured -- generating ephemeral keys (dev mode)` is normal in development. For production, generate keys as described above.
**Container deployment fails**: Check that Docker socket is mounted (`/var/run/docker.sock`) and the `cameleer-runtime-base` image is available. Pull it with: `docker pull registry.cameleer.io/cameleer/cameleer-runtime-base:latest`

(Binary image diffs omitted — UI audit screenshots, including audit/03-login-page.png, audit/04-login-error.png, audit/06-license-page.png, audit/11-search-modal.png, and audit/21-sidebar-detail.png.)

| Severity | Issue | Element |
|----------|-------|---------|
| Important | **No password visibility toggle** -- the Password input uses `type="password"` with no eye icon to reveal. Most modern login forms offer this. | Password field |
| Important | **Branding says "cameleer"** not "Cameleer" or "Cameleer SaaS" -- the product name on the login page is the internal repo name, not the user-facing brand | `.logo` text content |
| Nice-to-have | **No "Forgot password" link** -- even if it goes to a "contact admin" page, users expect this | Below password field |
| Nice-to-have | **No Enter-key submit hint** -- though Enter does work via form submit, there's no visual affordance | Form area |
| Nice-to-have | **Page title is "Sign in -- cameleer"** -- should match product branding ("Cameleer SaaS") | `<title>` tag |
---
### Important (17)
1. No password visibility toggle on login
2. Branding says "cameleer" instead of product name on login
3. Breadcrumbs always empty on platform pages
4. Massive empty space below dashboard content
5. Tier badge color mapping inconsistent between Dashboard and License pages
### Nice-to-have (8)
1. No "Forgot password" link on login
2. Login page title uses "cameleer" branding
3. No external link icon on "Open Server Dashboard"
4. Avatar shows "AD" for "admin"
5. No units on limit values


These use **different tier names** (enterprise/pro/starter vs BUSINESS/HIGH/MID/…)
3. **Hardcoded branding** (`SignInPage.tsx:61`):
```tsx
cameleer
```
The brand name is hardcoded text, not sourced from configuration.

(Binary image diffs omitted — verification screenshots, including audit/verify-02-license.png.)

ci-docker-log.txt — new file (4974 lines); diff suppressed because it is too large.


# Development overrides: exposes ports for direct access
# Usage: docker compose -f docker-compose.yml -f docker-compose.dev.yml up
services:
cameleer-postgres:
ports:
- "5432:5432"
cameleer-logto:
ports:
- "3001:3001"
logto-bootstrap:
environment:
VENDOR_SEED_ENABLED: "true"
cameleer-saas:
ports:
- "8080:8080"
volumes:
- ./ui/dist:/app/static
- /var/run/docker.sock:/var/run/docker.sock
group_add:
- "0"
environment:
SPRING_PROFILES_ACTIVE: dev
SPRING_WEB_RESOURCES_STATIC_LOCATIONS: file:/app/static/,classpath:/static/
CAMELEER_SAAS_PROVISIONING_PUBLICHOST: ${PUBLIC_HOST:-localhost}
CAMELEER_SAAS_PROVISIONING_PUBLICPROTOCOL: ${PUBLIC_PROTOCOL:-https}
CAMELEER_SAAS_PROVISIONING_SERVERIMAGE: gitea.siegeln.net/cameleer/cameleer3-server:${VERSION:-latest}
CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE: gitea.siegeln.net/cameleer/cameleer3-server-ui:${VERSION:-latest}
CAMELEER_SAAS_PROVISIONING_NETWORKNAME: cameleer-saas_cameleer
CAMELEER_SAAS_PROVISIONING_TRAEFIKNETWORK: cameleer-traefik
cameleer-clickhouse:
ports:
- "8123:8123"


# Dev overrides — layered on top of installer/templates/ via COMPOSE_FILE in .env
# Usage: docker compose up (reads .env automatically)
services:
cameleer-traefik:
image: ${TRAEFIK_IMAGE:-gitea.siegeln.net/cameleer/cameleer-traefik}:${VERSION:-latest}
restart: unless-stopped
ports:
- "${HTTP_PORT:-80}:80"
- "${HTTPS_PORT:-443}:443"
- "${LOGTO_CONSOLE_PORT:-3002}:3002"
environment:
PUBLIC_HOST: ${PUBLIC_HOST:-localhost}
CERT_FILE: ${CERT_FILE:-}
KEY_FILE: ${KEY_FILE:-}
CA_FILE: ${CA_FILE:-}
volumes:
- cameleer-certs:/certs
- /var/run/docker.sock:/var/run/docker.sock:ro
networks:
- cameleer
- cameleer-traefik
cameleer-postgres:
image: ${POSTGRES_IMAGE:-gitea.siegeln.net/cameleer/cameleer-postgres}:${VERSION:-latest}
restart: unless-stopped
environment:
POSTGRES_DB: ${POSTGRES_DB:-cameleer_saas}
POSTGRES_USER: ${POSTGRES_USER:-cameleer}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-cameleer_dev}
volumes:
- cameleer-pgdata:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-cameleer} -d ${POSTGRES_DB:-cameleer_saas}"]
interval: 5s
timeout: 5s
retries: 5
networks:
- cameleer
ports:
- "5432:5432"
cameleer-clickhouse:
image: ${CLICKHOUSE_IMAGE:-gitea.siegeln.net/cameleer/cameleer-clickhouse}:${VERSION:-latest}
restart: unless-stopped
environment:
CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD:-cameleer_ch}
volumes:
- cameleer-chdata:/var/lib/clickhouse
healthcheck:
test: ["CMD-SHELL", "clickhouse-client --password ${CLICKHOUSE_PASSWORD:-cameleer_ch} --query 'SELECT 1'"]
interval: 10s
timeout: 5s
retries: 3
labels:
- prometheus.scrape=true
- prometheus.path=/metrics
- prometheus.port=9363
networks:
- cameleer
ports:
- "8123:8123"
cameleer-logto:
image: ${LOGTO_IMAGE:-gitea.siegeln.net/cameleer/cameleer-logto}:${VERSION:-latest}
restart: unless-stopped
depends_on:
cameleer-postgres:
condition: service_healthy
environment:
DB_URL: postgres://${POSTGRES_USER:-cameleer}:${POSTGRES_PASSWORD:-cameleer_dev}@cameleer-postgres:5432/logto
ENDPOINT: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}
ADMIN_ENDPOINT: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}:${LOGTO_CONSOLE_PORT:-3002}
TRUST_PROXY_HEADER: 1
NODE_TLS_REJECT_UNAUTHORIZED: "${NODE_TLS_REJECT:-0}"
LOGTO_ENDPOINT: http://cameleer-logto:3001
LOGTO_ADMIN_ENDPOINT: http://cameleer-logto:3002
LOGTO_PUBLIC_ENDPOINT: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}
PUBLIC_HOST: ${PUBLIC_HOST:-localhost}
PUBLIC_PROTOCOL: ${PUBLIC_PROTOCOL:-https}
PG_HOST: cameleer-postgres
PG_USER: ${POSTGRES_USER:-cameleer}
PG_PASSWORD: ${POSTGRES_PASSWORD:-cameleer_dev}
PG_DB_SAAS: ${POSTGRES_DB:-cameleer_saas}
SAAS_ADMIN_USER: ${SAAS_ADMIN_USER:-admin}
SAAS_ADMIN_PASS: ${SAAS_ADMIN_PASS:-admin}
healthcheck:
test: ["CMD-SHELL", "node -e \"require('http').get('http://localhost:3001/oidc/.well-known/openid-configuration', r => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))\" && test -f /data/logto-bootstrap.json"]
interval: 10s
timeout: 5s
retries: 60
start_period: 30s
labels:
- traefik.enable=true
- traefik.http.routers.cameleer-logto.rule=PathPrefix(`/`)
- traefik.http.routers.cameleer-logto.priority=1
- traefik.http.routers.cameleer-logto.entrypoints=websecure
- traefik.http.routers.cameleer-logto.tls=true
- traefik.http.routers.cameleer-logto.service=cameleer-logto
- traefik.http.routers.cameleer-logto.middlewares=cameleer-logto-cors
- "traefik.http.middlewares.cameleer-logto-cors.headers.accessControlAllowOriginList=${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}:${LOGTO_CONSOLE_PORT:-3002}"
- traefik.http.middlewares.cameleer-logto-cors.headers.accessControlAllowMethods=GET,POST,PUT,PATCH,DELETE,OPTIONS
- traefik.http.middlewares.cameleer-logto-cors.headers.accessControlAllowHeaders=Authorization,Content-Type
- traefik.http.middlewares.cameleer-logto-cors.headers.accessControlAllowCredentials=true
- traefik.http.services.cameleer-logto.loadbalancer.server.port=3001
- traefik.http.routers.cameleer-logto-console.rule=PathPrefix(`/`)
- traefik.http.routers.cameleer-logto-console.entrypoints=admin-console
- traefik.http.routers.cameleer-logto-console.tls=true
- traefik.http.routers.cameleer-logto-console.service=cameleer-logto-console
- traefik.http.services.cameleer-logto-console.loadbalancer.server.port=3002
volumes:
- cameleer-bootstrapdata:/data
networks:
- cameleer
ports:
- "3001:3001"
cameleer-saas:
image: ${CAMELEER_IMAGE:-gitea.siegeln.net/cameleer/cameleer-saas}:${VERSION:-latest}
restart: unless-stopped
depends_on:
cameleer-logto:
condition: service_healthy
ports:
- "8080:8080"
volumes:
- cameleer-bootstrapdata:/data/bootstrap:ro
- cameleer-certs:/certs
- /var/run/docker.sock:/var/run/docker.sock
- ./ui/dist:/app/static
environment:
# SaaS database
SPRING_DATASOURCE_URL: jdbc:postgresql://cameleer-postgres:5432/${POSTGRES_DB:-cameleer_saas}
SPRING_DATASOURCE_USERNAME: ${POSTGRES_USER:-cameleer}
SPRING_DATASOURCE_PASSWORD: ${POSTGRES_PASSWORD:-cameleer_dev}
# Identity (Logto)
CAMELEER_SAAS_IDENTITY_LOGTOENDPOINT: ${LOGTO_ENDPOINT:-http://cameleer-logto:3001}
CAMELEER_SAAS_IDENTITY_LOGTOPUBLICENDPOINT: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}
CAMELEER_SAAS_IDENTITY_M2MCLIENTID: ${LOGTO_M2M_CLIENT_ID:-}
CAMELEER_SAAS_IDENTITY_M2MCLIENTSECRET: ${LOGTO_M2M_CLIENT_SECRET:-}
# Provisioning — passed to per-tenant server containers
CAMELEER_SAAS_PROVISIONING_PUBLICHOST: ${PUBLIC_HOST:-localhost}
CAMELEER_SAAS_PROVISIONING_PUBLICPROTOCOL: ${PUBLIC_PROTOCOL:-https}
CAMELEER_SAAS_PROVISIONING_DATASOURCEUSERNAME: ${POSTGRES_USER:-cameleer}
CAMELEER_SAAS_PROVISIONING_DATASOURCEPASSWORD: ${POSTGRES_PASSWORD:-cameleer_dev}
CAMELEER_SAAS_PROVISIONING_CLICKHOUSEPASSWORD: ${CLICKHOUSE_PASSWORD:-cameleer_ch}
labels:
- traefik.enable=true
- traefik.http.routers.saas.rule=PathPrefix(`/platform`)
- traefik.http.routers.saas.entrypoints=websecure
- traefik.http.routers.saas.tls=true
- traefik.http.services.saas.loadbalancer.server.port=8080
group_add:
- "${DOCKER_GID:-0}"
networks:
- cameleer
networks:
cameleer:
driver: bridge
cameleer-traefik:
name: cameleer-traefik
driver: bridge
volumes:
cameleer-pgdata:
cameleer-chdata:
cameleer-certs:
cameleer-bootstrapdata:
SPRING_PROFILES_ACTIVE: dev
SPRING_WEB_RESOURCES_STATIC_LOCATIONS: file:/app/static/,classpath:/static/

docker/CLAUDE.md Normal file

@@ -0,0 +1,94 @@
# Docker & Infrastructure
## Routing (single-domain, path-based via Traefik)
All services on one hostname. Infrastructure containers (Traefik, Logto) use `PUBLIC_HOST` + `PUBLIC_PROTOCOL` env vars directly. The SaaS app reads these via `CAMELEER_SAAS_PROVISIONING_PUBLICHOST` / `CAMELEER_SAAS_PROVISIONING_PUBLICPROTOCOL` (Spring Boot properties `cameleer.saas.provisioning.publichost` / `cameleer.saas.provisioning.publicprotocol`).
| Path | Target | Notes |
|------|--------|-------|
| `/platform/*` | cameleer-saas:8080 | SPA + API (`server.servlet.context-path: /platform`) |
| `/platform/vendor/*` | (SPA routes) | Vendor console (platform:admin) |
| `/platform/tenant/*` | (SPA routes) | Tenant admin portal (org-scoped) |
| `/t/{slug}/*` | per-tenant server-ui | Provisioned tenant UI containers (Traefik labels) |
| `/` | redirect -> `/platform/` | Via `docker/traefik-dynamic.yml` |
| `/*` (catch-all) | cameleer-logto:3001 (priority=1) | Custom sign-in UI, OIDC, interaction |
- SPA assets at `/_app/` (Vite `assetsDir: '_app'`) to avoid conflict with Logto's `/assets/`
- Logto `ENDPOINT` = `${PUBLIC_PROTOCOL}://${PUBLIC_HOST}` (same domain, same origin)
- TLS: `traefik-certs` init container generates self-signed cert (dev) or copies user-supplied cert via `CERT_FILE`/`KEY_FILE`/`CA_FILE` env vars. Default cert configured in `docker/traefik-dynamic.yml` (NOT static `traefik.yml` — Traefik v3 ignores `tls.stores.default` in static config). Runtime cert replacement via vendor UI (stage/activate/restore). ACME for production (future). Server containers import `/certs/ca.pem` into JVM truststore at startup via `docker-entrypoint.sh` for OIDC trust.
- Root `/` -> `/platform/` redirect via Traefik file provider (`docker/traefik-dynamic.yml`)
- LoginPage auto-redirects to Logto OIDC (no intermediate button)
- Per-tenant server containers get Traefik labels for `/t/{slug}/*` routing at provisioning time
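The `CAMELEER_SAAS_PROVISIONING_*` env-var-to-property mapping above can be sketched as a one-liner. This is a simplification of Spring Boot's relaxed binding (which also handles camelCase and index syntax); the helper name is illustrative, not from the codebase:

```shell
# Simplified relaxed binding: lowercase the env var, map '_' to '.'.
to_property() { printf '%s\n' "$1" | tr 'A-Z_' 'a-z.'; }
to_property CAMELEER_SAAS_PROVISIONING_PUBLICHOST
# -> cameleer.saas.provisioning.publichost
```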
## Docker Networks
Compose-defined networks:
| Network | Name on Host | Purpose |
|---------|-------------|---------|
| `cameleer` | `cameleer-saas_cameleer` | Compose default — shared services (DB, Logto, SaaS) |
| `cameleer-traefik` | `cameleer-traefik` (fixed `name:`) | Traefik + provisioned tenant containers |
Per-tenant networks (created dynamically by `DockerTenantProvisioner`):
| Network | Name Pattern | Purpose |
|---------|-------------|---------|
| Tenant network | `cameleer-tenant-{slug}` | Internal bridge, no internet — isolates tenant server + apps |
| Environment network | `cameleer-env-{tenantId}-{envSlug}` | Tenant-scoped (includes tenantId to prevent slug collision across tenants) |
Server containers join three networks: tenant network (primary), shared services network (`cameleer`), and traefik network. Apps deployed by the server use the tenant network as primary.
**Backend IP resolution:** Traefik's Docker provider is configured with `network: cameleer-traefik` (static `traefik.yml`). Every cameleer-managed container — saas-provisioned tenant containers (via `DockerTenantProvisioner`) and cameleer-server's per-app containers (via `DockerNetworkManager`) — is attached to `cameleer-traefik` at creation, so Traefik always resolves a reachable backend IP. Provisioned tenant containers additionally emit a `traefik.docker.network=cameleer-traefik` label as per-service defense-in-depth. (Pre-2026-04-23 the static config pointed at `network: cameleer`, a name that never matched any real network — that produced 504 Gateway Timeout on every managed app until the Traefik image was rebuilt.)
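The dynamic network-name scheme above can be sketched as two helpers (names illustrative, not functions from the codebase):

```shell
# Per-tenant and per-environment network names, as described in the tables above.
tenant_network() { printf 'cameleer-tenant-%s\n' "$1"; }
# tenantId comes first so equal env slugs in different tenants never collide.
env_network()    { printf 'cameleer-env-%s-%s\n' "$1" "$2"; }
tenant_network acme       # -> cameleer-tenant-acme
env_network 42 staging    # -> cameleer-env-42-staging
```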
## Custom sign-in UI (`ui/sign-in/`)
Separate Vite+React SPA replacing Logto's default sign-in page. Supports both sign-in and self-service registration (registration is disabled by default until the vendor admin configures an email connector via the UI).
- Built as custom Logto Docker image (`cameleer-logto`): `ui/sign-in/Dockerfile` = node build stage + `FROM ghcr.io/logto-io/logto:latest` + install official connectors + COPY dist over `/etc/logto/packages/experience/dist/`
- Uses `@cameleer/design-system` components (Card, Input, Button, FormField, Alert)
- **Sign-in**: Logto Experience API (4-step: init -> verify password -> identify -> submit -> redirect). Auto-detects email vs username identifier.
- **Registration**: 2-phase flow. Phase 1: init Register -> send verification code to email. Phase 2: verify code -> set password -> identify (creates user) -> submit -> redirect.
- Reads `first_screen=register` from URL query params to show register form initially (set by `@logto/react` SDK's `firstScreen` option)
- `CUSTOM_UI_PATH` env var does NOT work for Logto OSS — must volume-mount or replace the experience dist directory
- Favicon bundled in `ui/sign-in/public/favicon.svg` (served by Logto, not SaaS)
## Deployment pipeline
App deployment is handled by the cameleer-server's `DeploymentExecutor` (7-stage async flow):
1. PRE_FLIGHT — validate config, check JAR exists
2. PULL_IMAGE — pull base image if missing
3. CREATE_NETWORK — ensure cameleer-traefik and cameleer-env-{slug} networks
4. START_REPLICAS — create N containers with Traefik labels
5. HEALTH_CHECK — poll `/cameleer/health` on agent port 9464
6. SWAP_TRAFFIC — stop old deployment (blue/green)
7. COMPLETE — mark RUNNING or DEGRADED
Key files:
- `DeploymentExecutor.java` (in cameleer-server) — async staged deployment, runtime type auto-detection
- `DockerRuntimeOrchestrator.java` (in cameleer-server) — Docker client, container lifecycle, builds runtime-type-specific entrypoints (spring-boot uses `-cp` + `PropertiesLauncher` with `-Dloader.path` for log appender; quarkus uses `-jar`; plain-java uses `-cp` + detected main class; native exec directly). Overrides the Dockerfile ENTRYPOINT.
- `docker/runtime-base/Dockerfile` — base image with agent JAR + `cameleer-log-appender.jar` + JRE. The Dockerfile ENTRYPOINT (`-jar /app/app.jar`) is a fallback — `DockerRuntimeOrchestrator` overrides it at container creation.
- `docker/runtime-loader/Dockerfile` + `entrypoint.sh` — tiny per-replica init-container image (busybox + 26-line shell). Consumed by cameleer-server's `DockerRuntimeOrchestrator` as a sidecar that fetches the tenant JAR from a signed URL into a named volume RW-mounted at `/app/jars`, then exits 0. The main runtime container mounts that volume RO. Image lives here so all infra/sidecar image builds are colocated, but the **runtime contract** (env vars `ARTIFACT_URL` + `ARTIFACT_EXPECTED_SIZE`, output path `/app/jars/app.jar`, exit 0/non-zero semantics) is owned by cameleer-server's orchestrator. Don't change those without a coordinated commit on the cameleer-server side; cameleer-server's `LoaderHardeningIT` is the cross-repo regression guard. Pre-creates `/app/jars` owned by `loader:loader` (UID 1000) so the orchestrator's fresh named volume initialises with that ownership — stripping that line breaks tenant deploys with "wget: Permission denied".
- `RuntimeDetector.java` (in cameleer-server) — detects runtime type from JAR manifest `Main-Class`; derives correct `PropertiesLauncher` package (Spring Boot 3.2+ vs pre-3.2)
- `ServerApiClient.java` — M2M token acquisition for SaaS->server API calls (agent status). Uses `X-Cameleer-Protocol-Version: 1` header
- Docker socket access: `group_add: ["0"]` in docker-compose.dev.yml (not root group membership in Dockerfile)
- Network: deployed containers join `cameleer-tenant-{slug}` (primary, isolation) + `cameleer-traefik` (routing) + `cameleer-env-{tenantId}-{envSlug}` (environment isolation)
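The per-runtime-type entrypoint shapes that `DockerRuntimeOrchestrator` builds can be sketched as follows. The exact flags and the `PropertiesLauncher` package (Spring Boot 3.2+ `loader.launch` vs pre-3.2 `loader`) are chosen by `RuntimeDetector`; the strings below are assumed shapes for illustration, not the orchestrator's literal output:

```shell
# Assumed entrypoint per runtime type; $2 is the detected main class (plain-java).
entrypoint_for() {
  case "$1" in
    spring-boot) echo 'java -Dloader.path=/app/cameleer-log-appender.jar -cp /app/jars/app.jar org.springframework.boot.loader.launch.PropertiesLauncher' ;;
    quarkus)     echo 'java -jar /app/jars/app.jar' ;;
    plain-java)  echo "java -cp /app/jars/app.jar $2" ;;
    native)      echo 'exec /app/jars/app' ;;
    *)           return 1 ;;
  esac
}
entrypoint_for quarkus   # -> java -jar /app/jars/app.jar
```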
## Bootstrap (`docker/logto-bootstrap.sh`)
Idempotent script run inside the Logto container entrypoint. **Clean slate** — no example tenant, no viewer user, no server configuration. Phases:
1. Wait for Logto health (no server to wait for — servers are provisioned per-tenant)
2. Get Management API token (reads `m-default` secret from DB)
3. Create Logto apps (SPA, Traditional Web App with `skipConsent`, M2M with Management API role + server API role)
3b. Create API resource scopes (1 platform + 9 tenant + 3 server scopes)
4. Create org roles (owner, operator, viewer with API resource scope assignments) + M2M server role (`cameleer-m2m-server` with `server:admin` scope)
5. Create admin user (SaaS admin with Logto console access). `SAAS_ADMIN_USER` is the admin's email address in SaaS mode — used as both the Logto username and primaryEmail. No separate `SAAS_ADMIN_EMAIL`.
7b. Configure Logto Custom JWT for access tokens (maps org roles -> `roles` claim: owner->server:admin, operator->server:operator, viewer->server:viewer; saas-vendor global role -> server:admin)
8. Configure Logto sign-in branding (Cameleer colors `#C6820E`/`#D4941E`, logo from `/platform/logo.svg`)
8c. Configure sign-in experience (sign-in only) — sets `signInMode: "SignIn"` with username+password method. Registration is disabled by default; the vendor admin enables it via the Email Connector UI after configuring SMTP delivery.
9. Cleanup seeded Logto apps
10. Write bootstrap results to `/data/logto-bootstrap.json`
12. Create `saas-vendor` global role with all API scopes and assign to admin user (always runs — admin IS the platform admin).
SMTP / email connector configuration is managed at runtime via the vendor admin UI (Email Connector page). The bootstrap no longer creates email connectors — it defaults to sign-in only mode. Registration is enabled automatically when the admin configures an email connector through the UI.
The multi-tenant compose stack is: Traefik + PostgreSQL + ClickHouse + Logto (with bootstrap entrypoint) + cameleer-saas. No `cameleer-server` or `cameleer-server-ui` in compose — those are provisioned per-tenant by `DockerTenantProvisioner`.
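The `SAAS_ADMIN_USER` handling in phase 5 (local part becomes the Logto username, full value becomes `primaryEmail`) can be run standalone; this `case` form is equivalent to the script's `grep -q '@'` check:

```shell
SAAS_ADMIN_USER='admin@company.com'
case "$SAAS_ADMIN_USER" in
  *@*) ADMIN_USERNAME="${SAAS_ADMIN_USER%%@*}"; ADMIN_EMAIL="$SAAS_ADMIN_USER" ;;
  *)   ADMIN_USERNAME="$SAAS_ADMIN_USER";       ADMIN_EMAIL="" ;;
esac
echo "$ADMIN_USERNAME"   # -> admin
```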


@@ -1,6 +1,14 @@
#!/bin/sh
set -e
# Build DB_URL from individual env vars so passwords with special characters
# are properly URL-encoded (Logto only accepts a connection string)
if [ -z "$DB_URL" ]; then
ENCODED_PW=$(node -e "process.stdout.write(encodeURIComponent(process.env.PG_PASSWORD || ''))")
export DB_URL="postgres://${PG_USER:-cameleer}:${ENCODED_PW}@${PG_HOST:-localhost}:5432/logto"
echo "[entrypoint] Built DB_URL from PG_USER/PG_PASSWORD/PG_HOST"
fi
# Save the real public endpoints for after bootstrap
REAL_ENDPOINT="$ENDPOINT"
REAL_ADMIN_ENDPOINT="$ADMIN_ENDPOINT"
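The encoding step above can also be sketched with python3 instead of node (assumption: python3 on PATH; the real entrypoint uses node because it ships with Logto). Note `urllib.parse.quote(..., safe='')` is close to but not byte-identical to `encodeURIComponent` for a few punctuation characters:

```shell
# URL-encode a password for embedding in a postgres:// connection string.
PG_PASSWORD='p@ss:w0rd' python3 -c \
  "import os, urllib.parse; print(urllib.parse.quote(os.environ['PG_PASSWORD'], safe=''))"
# -> p%40ss%3Aw0rd
```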


@@ -3,7 +3,7 @@ set -e
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
CREATE DATABASE logto;
CREATE DATABASE cameleer3;
CREATE DATABASE cameleer;
GRANT ALL PRIVILEGES ON DATABASE logto TO $POSTGRES_USER;
GRANT ALL PRIVILEGES ON DATABASE cameleer3 TO $POSTGRES_USER;
GRANT ALL PRIVILEGES ON DATABASE cameleer TO $POSTGRES_USER;
EOSQL


@@ -28,12 +28,20 @@ if [ ! -f "$CERTS_DIR/cert.pem" ]; then
else
# Generate self-signed certificate
HOST="${PUBLIC_HOST:-localhost}"
AUTH="${AUTH_HOST:-$HOST}"
echo "[certs] Generating self-signed certificate for $HOST..."
# Build SAN list; deduplicate when AUTH_HOST equals PUBLIC_HOST
if [ "$AUTH" = "$HOST" ]; then
SAN="DNS:$HOST,DNS:*.$HOST"
else
SAN="DNS:$HOST,DNS:*.$HOST,DNS:$AUTH,DNS:*.$AUTH"
echo "[certs] (+ auth domain: $AUTH)"
fi
openssl req -x509 -newkey rsa:4096 \
-keyout "$CERTS_DIR/key.pem" -out "$CERTS_DIR/cert.pem" \
-days 365 -nodes \
-subj "/CN=$HOST" \
-addext "subjectAltName=DNS:$HOST,DNS:*.$HOST"
-addext "subjectAltName=$SAN"
SELF_SIGNED=true
echo "[certs] Generated self-signed certificate for $HOST."
fi


@@ -1,21 +1,3 @@
http:
routers:
root-redirect:
rule: "Path(`/`)"
priority: 100
entryPoints:
- websecure
tls: {}
middlewares:
- root-to-platform
service: saas@docker
middlewares:
root-to-platform:
redirectRegex:
regex: "^(https?://[^/]+)/?$"
replacement: "${1}/platform/"
permanent: false
tls:
stores:
default:


@@ -18,6 +18,6 @@ providers:
docker:
endpoint: "unix:///var/run/docker.sock"
exposedByDefault: false
network: cameleer
network: cameleer-traefik
file:
filename: /etc/traefik/dynamic.yml


@@ -4,7 +4,7 @@ set -e
# Cameleer SaaS — Bootstrap Script
# Creates Logto apps, users, organizations, roles.
# Seeds cameleer_saas DB with tenant, environment, license.
# Configures cameleer3-server OIDC.
# Configures cameleer-server OIDC.
# Idempotent: checks existence before creating.
LOGTO_ENDPOINT="${LOGTO_ENDPOINT:-http://cameleer-logto:3001}"
@@ -25,18 +25,29 @@ API_RESOURCE_INDICATOR="https://api.cameleer.local"
API_RESOURCE_NAME="Cameleer SaaS API"
# Users (configurable via env vars)
# In SaaS mode, SAAS_ADMIN_USER is the admin's email address (e.g. admin@company.com).
# The local part (before @) is used as the Logto username; the full value as primaryEmail.
SAAS_ADMIN_USER="${SAAS_ADMIN_USER:-admin}"
SAAS_ADMIN_PASS="${SAAS_ADMIN_PASS:-admin}"
# Extract username (local part) for Logto — Logto rejects @ in usernames
if echo "$SAAS_ADMIN_USER" | grep -q '@'; then
ADMIN_USERNAME="${SAAS_ADMIN_USER%%@*}"
ADMIN_EMAIL="$SAAS_ADMIN_USER"
else
ADMIN_USERNAME="$SAAS_ADMIN_USER"
ADMIN_EMAIL=""
fi
# No server config — servers are provisioned dynamically by the admin console
# Redirect URIs (derived from PUBLIC_HOST and PUBLIC_PROTOCOL)
HOST="${PUBLIC_HOST:-localhost}"
AUTH="${AUTH_HOST:-$HOST}"
PROTO="${PUBLIC_PROTOCOL:-https}"
SPA_REDIRECT_URIS="[\"${PROTO}://${HOST}/platform/callback\"]"
SPA_POST_LOGOUT_URIS="[\"${PROTO}://${HOST}/platform/login\",\"${PROTO}://${HOST}/platform/\"]"
TRAD_REDIRECT_URIS="[\"${PROTO}://${HOST}/oidc/callback\",\"${PROTO}://${HOST}/server/oidc/callback\"]"
TRAD_POST_LOGOUT_URIS="[\"${PROTO}://${HOST}\",\"${PROTO}://${HOST}/server\",\"${PROTO}://${HOST}/server/login?local\"]"
TRAD_REDIRECT_URIS="[\"${PROTO}://${HOST}/oidc/callback\"]"
TRAD_POST_LOGOUT_URIS="[\"${PROTO}://${HOST}\"]"
log() { echo "[bootstrap] $1"; }
pgpass() { PGPASSWORD="${PG_PASSWORD:-cameleer_dev}"; export PGPASSWORD; }
@@ -47,8 +58,9 @@ if [ "$BOOTSTRAP_LOCAL" = "true" ]; then
HOST_ARGS=""
ADMIN_HOST_ARGS=""
else
HOST_ARGS="-H Host:${HOST}"
ADMIN_HOST_ARGS="-H Host:${HOST}:3002 -H X-Forwarded-Proto:https"
# Logto validates Host header against its ENDPOINT, which uses AUTH_HOST
HOST_ARGS="-H Host:${AUTH}"
ADMIN_HOST_ARGS="-H Host:${AUTH}:3002 -H X-Forwarded-Proto:https"
fi
# Install jq + curl if not already available (deps are baked into cameleer-logto image)
@@ -174,7 +186,7 @@ else
log "Created SPA app: $SPA_ID"
fi
# --- Traditional Web App (for cameleer3-server OIDC) ---
# --- Traditional Web App (for cameleer-server OIDC) ---
TRAD_ID=$(echo "$EXISTING_APPS" | jq -r ".[] | select(.name == \"$TRAD_APP_NAME\" and .type == \"Traditional\") | .id")
TRAD_SECRET=""
if [ -n "$TRAD_ID" ]; then
@@ -387,19 +399,27 @@ log "API resource scopes assigned to organization roles."
# ============================================================
# --- Platform Owner ---
log "Checking for platform owner user '$SAAS_ADMIN_USER'..."
ADMIN_USER_ID=$(api_get "/api/users?search=$SAAS_ADMIN_USER" | jq -r ".[] | select(.username == \"$SAAS_ADMIN_USER\") | .id")
log "Checking for platform owner user '$ADMIN_USERNAME'..."
ADMIN_USER_ID=$(api_get "/api/users?search=$ADMIN_USERNAME" | jq -r ".[] | select(.username == \"$ADMIN_USERNAME\") | .id")
if [ -n "$ADMIN_USER_ID" ]; then
log "Platform owner exists: $ADMIN_USER_ID"
else
log "Creating platform owner '$SAAS_ADMIN_USER'..."
ADMIN_RESPONSE=$(api_post "/api/users" "{
\"username\": \"$SAAS_ADMIN_USER\",
\"password\": \"$SAAS_ADMIN_PASS\",
\"name\": \"Platform Owner\"
}")
# Build user JSON — include primaryEmail only if SAAS_ADMIN_USER is an email
ADMIN_USER_JSON="{\"username\": \"$ADMIN_USERNAME\", \"password\": \"$SAAS_ADMIN_PASS\", \"name\": \"Platform Owner\""
if [ -n "$ADMIN_EMAIL" ]; then
ADMIN_USER_JSON="$ADMIN_USER_JSON, \"primaryEmail\": \"$ADMIN_EMAIL\""
log "Creating platform owner '$ADMIN_USERNAME' (email: $ADMIN_EMAIL)..."
else
log "Creating platform owner '$ADMIN_USERNAME'..."
fi
ADMIN_USER_JSON="$ADMIN_USER_JSON}"
ADMIN_RESPONSE=$(api_post "/api/users" "$ADMIN_USER_JSON")
ADMIN_USER_ID=$(echo "$ADMIN_RESPONSE" | jq -r '.id')
log "Created platform owner: $ADMIN_USER_ID"
if [ -z "$ADMIN_USER_ID" ] || [ "$ADMIN_USER_ID" = "null" ]; then
log "ERROR: Failed to create platform owner. Response: $(echo "$ADMIN_RESPONSE" | head -c 300)"
else
log "Created platform owner: $ADMIN_USER_ID"
fi
fi
# --- Grant SaaS admin Logto console access (admin tenant, port 3002) ---
@@ -439,12 +459,12 @@ else
-d "$2" "${LOGTO_ADMIN_ENDPOINT}${1}" 2>/dev/null || true
}
# Check if admin user already exists on admin tenant
ADMIN_TENANT_USER_ID=$(admin_api_get "/api/users?search=$SAAS_ADMIN_USER" | jq -r ".[] | select(.username == \"$SAAS_ADMIN_USER\") | .id" 2>/dev/null)
# Check if admin user already exists on admin tenant (uses ADMIN_USERNAME, not email)
ADMIN_TENANT_USER_ID=$(admin_api_get "/api/users?search=$ADMIN_USERNAME" | jq -r ".[] | select(.username == \"$ADMIN_USERNAME\") | .id" 2>/dev/null)
if [ -z "$ADMIN_TENANT_USER_ID" ] || [ "$ADMIN_TENANT_USER_ID" = "null" ]; then
log "Creating admin console user '$SAAS_ADMIN_USER'..."
log "Creating admin console user '$ADMIN_USERNAME'..."
ADMIN_TENANT_RESPONSE=$(admin_api_post "/api/users" "{
\"username\": \"$SAAS_ADMIN_USER\",
\"username\": \"$ADMIN_USERNAME\",
\"password\": \"$SAAS_ADMIN_PASS\",
\"name\": \"Platform Admin\"
}")
@@ -532,7 +552,15 @@ CUSTOM_JWT_SCRIPT='const getCustomJwtClaims = async ({ token, context, environme
if (role.name === "saas-vendor") roles.add("server:admin");
}
}
return roles.size > 0 ? { roles: [...roles] } : {};
const mfaFactors = context?.user?.mfaVerificationFactors || [];
const mfaEnrolled = mfaFactors.some(f => f.type === "Totp" || f.type === "WebAuthn");
const passkeyEnrolled = mfaFactors.some(f => f.type === "WebAuthn");
const claims = {};
if (roles.size > 0) claims.roles = [...roles];
claims.mfa_enrolled = mfaEnrolled;
claims.passkey_enrolled = passkeyEnrolled;
claims.mfa_method_preference = context?.user?.customData?.mfa_method_preference || null;
return claims;
};'
CUSTOM_JWT_PAYLOAD=$(jq -n --arg script "$CUSTOM_JWT_SCRIPT" '{ script: $script }')
@@ -562,6 +590,38 @@ api_patch "/api/sign-in-exp" "{
}"
log "Sign-in branding configured."
# ============================================================
# PHASE 8c: Configure sign-in experience (sign-in only)
# ============================================================
# Registration is disabled by default. The vendor admin enables it
# via the Email Connector UI after configuring SMTP delivery.
log "Configuring sign-in experience (sign-in only, no registration)..."
api_patch "/api/sign-in-exp" '{
"signInMode": "SignIn",
"signIn": {
"methods": [
{
"identifier": "email",
"password": true,
"verificationCode": false,
"isPasswordPrimary": true
},
{
"identifier": "username",
"password": true,
"verificationCode": false,
"isPasswordPrimary": true
}
]
},
"mfa": {
"factors": ["Totp", "WebAuthn", "BackupCode"],
"policy": "UserControlled"
}
}' >/dev/null 2>&1
log "Sign-in experience configured: SignIn only (registration disabled until email is configured)."
# ============================================================
# PHASE 9: Cleanup seeded apps
# ============================================================


@@ -1,19 +1,17 @@
FROM eclipse-temurin:21-jre-alpine
# BellSoft Liberica JRE 21 on Alpaquita Linux (glibc, minimal, 199 MB).
# Pin by digest in production overlays.
FROM bellsoft/liberica-runtime-container:jre-21-slim-glibc
WORKDIR /app
# Agent JAR is copied during CI build from Gitea Maven registry
# ARG AGENT_JAR=cameleer3-agent-1.0-SNAPSHOT-shaded.jar
# Agent is baked in; log appender is embedded in cameleer-core.
# Tenant JAR is delivered at deploy time by cameleer-runtime-loader
# into the RO-mounted /app/jars volume.
COPY agent.jar /app/agent.jar
ENTRYPOINT exec java \
-Dcameleer.export.type=${CAMELEER_EXPORT_TYPE:-HTTP} \
-Dcameleer.export.endpoint=${CAMELEER_SERVER_URL} \
-Dcameleer.agent.name=${HOSTNAME} \
-Dcameleer.agent.application=${CAMELEER_APPLICATION_ID:-default} \
-Dcameleer.agent.environment=${CAMELEER_ENVIRONMENT_ID:-default} \
-Dcameleer.routeControl.enabled=${CAMELEER_ROUTE_CONTROL_ENABLED:-false} \
-Dcameleer.replay.enabled=${CAMELEER_REPLAY_ENABLED:-false} \
-Dcameleer.health.enabled=true \
-Dcameleer.health.port=9464 \
-javaagent:/app/agent.jar \
-jar /app/app.jar
# No ENTRYPOINT here. cameleer-server's DeploymentExecutor builds the
# per-runtime-type entrypoint (spring-boot/quarkus: -jar; plain-java:
# -cp + main; native: exec) and overrides via withCmd("sh","-c",...).
# Setting one here only creates drift between this image and the actual
# runtime command.
USER nobody


@@ -0,0 +1,17 @@
# Tiny init-container image. No app code, no shell-injection surface — script
# only sees env vars set by the orchestrator.
FROM busybox:1.37-musl
# Run as non-root (UID 1000 inside the container; with userns_mode this is
# remapped to host UID ~101000 — fully unprivileged on the host).
# Pre-create /app/jars owned by `loader` so the orchestrator's named-volume
# mount inherits that ownership at first init — without it the empty named
# volume comes up as root:root 0755 and wget can't write app.jar.
RUN adduser -D -u 1000 loader && mkdir -p /app/jars && chown -R loader:loader /app
COPY entrypoint.sh /usr/local/bin/loader
RUN chmod +x /usr/local/bin/loader
USER loader
WORKDIR /app
ENTRYPOINT ["/usr/local/bin/loader"]


@@ -0,0 +1,29 @@
# cameleer-runtime-loader
Init container that fetches the deployable JAR into a shared volume before the
main runtime container starts. The image is consumed by
`DockerRuntimeOrchestrator` in the **cameleer-server** repo as a tenant
sidecar — see that repo's `.claude/rules/docker-orchestration.md`
("Init-Container Loader Pattern") for the contract.
## Build
CI (`.gitea/workflows/ci.yml`, `docker` job, "Build and push runtime-loader
image" step) builds and pushes this image on every main / feature-branch
push. Manual build for local testing:
docker build -t registry.cameleer.io/cameleer/cameleer-runtime-loader:<tag> .
docker push registry.cameleer.io/cameleer/cameleer-runtime-loader:<tag>
## Contract (consumed by cameleer-server)
- Env: `ARTIFACT_URL` (signed download URL), `ARTIFACT_EXPECTED_SIZE` (bytes).
- Volume: writes `/app/jars/app.jar`.
- Exit 0 on success; non-zero on fetch/size failure.
- Runs as UID 1000 (loader user), drops all caps, read-only rootfs except `/app/jars`.
Contract regression coverage lives on the cameleer-server side in
`LoaderHardeningIT`, which pulls the published `:latest` and asserts exit 0
under the orchestrator's hardening shape. Don't change the env vars,
mount path, or exit-code semantics without updating the cameleer-server
side in the same change.
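The size-verification half of the contract can be reproduced locally with no network, writing a known artifact and checking `wc -c` against `ARTIFACT_EXPECTED_SIZE` exactly as the entrypoint does:

```shell
# Minimal local reproduction of the loader's size check.
OUT=$(mktemp)
printf 'hello' > "$OUT"
ARTIFACT_EXPECTED_SIZE=5
actual=$(wc -c < "$OUT")
if [ "$actual" -ne "$ARTIFACT_EXPECTED_SIZE" ]; then
  echo "loader: size mismatch, expected $ARTIFACT_EXPECTED_SIZE, got $actual" >&2
  exit 2
fi
echo "size ok"
```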


@@ -0,0 +1,25 @@
#!/bin/sh
# cameleer-runtime-loader: fetches one JAR from a signed URL into the shared
# /app/jars/ volume, verifies size, exits. Runs in the same hardened sandbox as
# the main container (cap_drop ALL, read-only rootfs, etc.) — only /app/jars/
# is writeable.
set -eu
: "${ARTIFACT_URL:?ARTIFACT_URL is required}"
: "${ARTIFACT_EXPECTED_SIZE:?ARTIFACT_EXPECTED_SIZE is required}"
OUT=/app/jars/app.jar
mkdir -p /app/jars
echo "loader: fetching artifact (expected $ARTIFACT_EXPECTED_SIZE bytes)"
# -q quiet, -O output, --tries=3 retry transient network blips,
# --timeout=30 cap stalls. wget exits non-zero on HTTP >=400.
wget -q --tries=3 --timeout=30 -O "$OUT" "$ARTIFACT_URL"
actual=$(wc -c < "$OUT")
if [ "$actual" -ne "$ARTIFACT_EXPECTED_SIZE" ]; then
echo "loader: size mismatch — expected $ARTIFACT_EXPECTED_SIZE, got $actual" >&2
exit 2
fi
echo "loader: artifact written to $OUT ($actual bytes)"


@@ -15,12 +15,12 @@ infrastructure themselves.
The system comprises three components:
**Cameleer Agent** (`cameleer3` repo) -- A Java agent using ByteBuddy for
**Cameleer Agent** (`cameleer` repo) -- A Java agent using ByteBuddy for
zero-code bytecode instrumentation. Captures route executions, processor traces,
payloads, metrics, and route graph topology. Deployed as a `-javaagent` JAR
alongside the customer's application.
**Cameleer Server** (`cameleer3-server` repo) -- A Spring Boot observability
**Cameleer Server** (`cameleer-server` repo) -- A Spring Boot observability
backend. Receives telemetry from agents via HTTP, pushes configuration and
commands to agents via SSE. Stores data in PostgreSQL and ClickHouse. Provides
a React SPA dashboard for direct observability access. JWT auth with Ed25519
@@ -50,7 +50,7 @@ logging. Serves a React SPA that wraps the full user experience.
| | /interaction) |
v v v v
+--------------+ +--------------+ +-----------+ +------------------+
| cameleer-saas| | cameleer-saas| | Logto | | cameleer3-server |
| cameleer-saas| | cameleer-saas| | Logto | | cameleer-server |
| (API) | | (SPA) | | | | |
| :8080 | | :8080 | | :3001 | | :8081 |
+--------------+ +--------------+ +-----------+ +------------------+
@@ -79,15 +79,15 @@ logging. Serves a React SPA that wraps the full user experience.
| postgres | `postgres:16-alpine` | 5432 | cameleer | Shared PostgreSQL (3 databases) |
| logto | `ghcr.io/logto-io/logto:latest` | 3001 | cameleer | OIDC identity provider |
| logto-bootstrap | `postgres:16-alpine` (ephemeral) | -- | cameleer | One-shot bootstrap script |
| cameleer-saas | `gitea.siegeln.net/cameleer/cameleer-saas` | 8080 | cameleer | SaaS API + SPA serving |
| cameleer3-server | `gitea.siegeln.net/cameleer/cameleer3-server`| 8081 | cameleer | Observability backend |
| cameleer-saas | `registry.cameleer.io/cameleer/cameleer-saas` | 8080 | cameleer | SaaS API + SPA serving |
| cameleer-server | `registry.cameleer.io/cameleer/cameleer-server`| 8081 | cameleer | Observability backend |
| clickhouse | `clickhouse/clickhouse-server:latest` | 8123 | cameleer | Time-series telemetry storage |
### Docker Network
All services share a single Docker bridge network named `cameleer`. Customer app
containers are also attached to this network so agents can reach the
cameleer3-server.
cameleer-server.
### Volumes
@@ -105,7 +105,7 @@ The shared PostgreSQL instance hosts three databases:
- `cameleer_saas` -- SaaS platform tables (tenants, environments, apps, etc.)
- `logto` -- Logto identity provider data
- `cameleer3` -- cameleer3-server operational data
- `cameleer` -- cameleer-server operational data
The `docker/init-databases.sh` init script creates all three during first start.
@@ -128,9 +128,9 @@ The `docker/init-databases.sh` init script creates all three during first start.
|--------------------|-----------------|------------------|----------------------|--------------------------------|
| Logto user JWT | Logto | ES384 (asymmetric)| Any service via JWKS | SaaS UI users, server users |
| Logto M2M JWT | Logto | ES384 (asymmetric)| Any service via JWKS | SaaS platform -> server calls |
-| Server internal JWT| cameleer3-server| HS256 (symmetric) | Issuing server only | Agents (after registration) |
-| API key (opaque) | SaaS platform | N/A (SHA-256 hash)| cameleer3-server | Agent initial registration |
-| Ed25519 signature | cameleer3-server| EdDSA | Agent | Server -> agent command signing|
+| Server internal JWT| cameleer-server| HS256 (symmetric) | Issuing server only | Agents (after registration) |
+| API key (opaque) | SaaS platform | N/A (SHA-256 hash)| cameleer-server | Agent initial registration |
+| Ed25519 signature | cameleer-server| EdDSA | Agent | Server -> agent command signing|
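The Ed25519 row above can be sketched with the JDK's built-in EdDSA support (available since Java 15). This is an illustrative shape only: the class and method names are hypothetical, not the actual cameleer-server signing code.

```java
import java.security.GeneralSecurityException;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

// Server -> agent command signing sketch: the server signs the raw command
// bytes with its Ed25519 private key; the agent verifies with the public key
// it received at registration time.
public final class CommandSigner {
    public static KeyPair newKeyPair() {
        try {
            return KeyPairGenerator.getInstance("Ed25519").generateKeyPair();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static byte[] sign(PrivateKey key, byte[] command) {
        try {
            Signature sig = Signature.getInstance("Ed25519");
            sig.initSign(key);
            sig.update(command);
            return sig.sign();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static boolean verify(PublicKey key, byte[] command, byte[] signature) {
        try {
            Signature sig = Signature.getInstance("Ed25519");
            sig.initVerify(key);
            sig.update(command);
            return sig.verify(signature);
        } catch (GeneralSecurityException e) {
            return false; // malformed signature bytes fail closed
        }
    }
}
```

A tampered command fails verification while the original passes, which is the property the server relies on for command authenticity.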
### 3.3 Scope Model
@@ -183,7 +183,7 @@ the bootstrap script (`docker/logto-bootstrap.sh`):
4. `organization_id` claim in JWT resolves to internal tenant ID via
`TenantIsolationInterceptor`.
-**SaaS platform -> cameleer3-server API (M2M):**
+**SaaS platform -> cameleer-server API (M2M):**
1. SaaS platform obtains Logto M2M token (`client_credentials` grant) via
`LogtoManagementClient`.
@@ -191,7 +191,7 @@ the bootstrap script (`docker/logto-bootstrap.sh`):
3. Server validates via Logto JWKS (OIDC resource server support).
4. Server grants ADMIN role to valid M2M tokens.
-**Agent -> cameleer3-server:**
+**Agent -> cameleer-server:**
1. Agent reads `CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKEN` environment variable (API key).
2. Calls `POST /api/v1/agents/register` with the key as Bearer token.
@@ -458,9 +458,9 @@ Defined in `AuditAction.java`:
### 5.1 Server-Per-Tenant
-Each tenant gets a dedicated cameleer3-server instance. The SaaS platform
+Each tenant gets a dedicated cameleer-server instance. The SaaS platform
provisions and manages these servers. In the current Docker Compose topology, a
-single shared cameleer3-server is used for the default tenant. Production
+single shared cameleer-server is used for the default tenant. Production
deployments will run per-tenant servers as separate containers or K8s pods.
### 5.2 Customer App Deployment Flow
@@ -495,7 +495,7 @@ The deployment lifecycle is managed by `DeploymentService`:
|-----------------------------|----------------------------------------|
| `CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKEN` | API key for agent registration |
| `CAMELEER_EXPORT_TYPE` | `HTTP` |
-| `CAMELEER_SERVER_RUNTIME_SERVERURL` | cameleer3-server internal URL |
+| `CAMELEER_SERVER_RUNTIME_SERVERURL` | cameleer-server internal URL |
| `CAMELEER_APPLICATION_ID` | App slug |
| `CAMELEER_ENVIRONMENT_ID` | Environment slug |
| `CAMELEER_DISPLAY_NAME` | `{tenant}-{env}-{app}` |
@@ -524,14 +524,14 @@ Configured via `RuntimeConfig`:
## 6. Agent-Server Protocol
The agent-server protocol is defined in full in
-`cameleer3/cameleer3-common/PROTOCOL.md`. This section summarizes the key
+`cameleer/cameleer-common/PROTOCOL.md`. This section summarizes the key
aspects relevant to the SaaS platform.
### 6.1 Agent Registration
1. Agent starts with `CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKEN` environment variable (an API key
generated by the SaaS platform, prefixed with `cmk_`).
-2. Agent calls `POST /api/v1/agents/register` on the cameleer3-server with the
+2. Agent calls `POST /api/v1/agents/register` on the cameleer-server with the
API key as a Bearer token.
3. Server validates the key and returns:
- HMAC JWT access token (short-lived, ~1 hour)
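The registration request in steps 1-2 can be sketched with the JDK HTTP client. The server URL and key value are illustrative; only the path, the `cmk_` prefix, and the Bearer scheme come from the protocol description above.

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch of the agent's registration call: present the bootstrap API key
// (generated by the SaaS platform, prefixed with cmk_) as a Bearer token.
public final class AgentRegistration {
    public static HttpRequest buildRegisterRequest(String serverUrl, String apiKey) {
        if (!apiKey.startsWith("cmk_")) {
            throw new IllegalArgumentException("bootstrap token must be prefixed with cmk_");
        }
        return HttpRequest.newBuilder()
                .uri(URI.create(serverUrl + "/api/v1/agents/register"))
                .header("Authorization", "Bearer " + apiKey)
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
    }
}
```

Sending the built request with `HttpClient.newHttpClient().send(...)` would then yield the token pair described in step 3.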
@@ -744,7 +744,7 @@ leaks regardless of whether the request succeeded or failed.
|----------------------|-------------|------------------------------------|
| Logto access token | ~1 hour | Configured in Logto, refreshed by SDK |
| Logto refresh token | ~14 days | Used by `@logto/react` for silent refresh |
-| Server agent JWT | ~1 hour | cameleer3-server `CAMELEER_JWT_SECRET` |
+| Server agent JWT | ~1 hour | cameleer-server `CAMELEER_JWT_SECRET` |
| Server refresh token | ~7 days | Agent re-registers when expired |
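The "Server agent JWT" row (HS256, ~1 hour) can be illustrated with a hand-rolled HMAC signer using only the JDK. This is a sketch of the token shape, assuming only a shared `CAMELEER_JWT_SECRET` and a one-hour expiry; the real server presumably uses a JWT library rather than manual encoding.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.time.Instant;
import java.util.Base64;

// Minimal HS256 JWT issuance: base64url(header).base64url(payload).hmac
public final class AgentJwtSketch {
    private static final Base64.Encoder B64 = Base64.getUrlEncoder().withoutPadding();

    public static String issue(String secret, String agentId, Instant now) {
        String header = B64.encodeToString(
                "{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        long exp = now.plusSeconds(3600).getEpochSecond(); // ~1 hour lifetime
        String payload = B64.encodeToString(
                ("{\"sub\":\"" + agentId + "\",\"exp\":" + exp + "}").getBytes(StandardCharsets.UTF_8));
        return header + "." + payload + "." + hmac(secret, header + "." + payload);
    }

    private static String hmac(String secret, String signingInput) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            return B64.encodeToString(mac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Because the signature depends on the secret, tokens minted with different `CAMELEER_JWT_SECRET` values never validate against each other, which is what makes the per-tenant secret an isolation boundary.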
### 8.4 Audit Logging
@@ -876,28 +876,28 @@ state (`currentTenantId`). Provides `logout` and `signIn` callbacks.
| Variable | Default | Description |
|-----------------------------------|------------------------------------|----------------------------------|
-| `CAMELEER_SAAS_PROVISIONING_SERVERIMAGE` | `gitea.siegeln.net/cameleer/cameleer3-server:latest` | Docker image for per-tenant server |
-| `CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE` | `gitea.siegeln.net/cameleer/cameleer3-server-ui:latest` | Docker image for per-tenant UI |
+| `CAMELEER_SAAS_PROVISIONING_SERVERIMAGE` | `registry.cameleer.io/cameleer/cameleer-server:latest` | Docker image for per-tenant server |
+| `CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE` | `registry.cameleer.io/cameleer/cameleer-server-ui:latest` | Docker image for per-tenant UI |
| `CAMELEER_SAAS_PROVISIONING_NETWORKNAME` | `cameleer-saas_cameleer` | Shared services Docker network |
| `CAMELEER_SAAS_PROVISIONING_TRAEFIKNETWORK` | `cameleer-traefik` | Traefik Docker network |
| `CAMELEER_SAAS_PROVISIONING_PUBLICHOST` | `localhost` | Public hostname (same as infrastructure `PUBLIC_HOST`) |
| `CAMELEER_SAAS_PROVISIONING_PUBLICPROTOCOL` | `https` | Public protocol (same as infrastructure `PUBLIC_PROTOCOL`) |
-| `CAMELEER_SAAS_PROVISIONING_DATASOURCEURL` | `jdbc:postgresql://cameleer-postgres:5432/cameleer3` | PostgreSQL URL passed to tenant servers |
+| `CAMELEER_SAAS_PROVISIONING_DATASOURCEURL` | `jdbc:postgresql://cameleer-postgres:5432/cameleer` | PostgreSQL URL passed to tenant servers |
| `CAMELEER_SAAS_PROVISIONING_CLICKHOUSEURL` | `jdbc:clickhouse://cameleer-clickhouse:8123/cameleer` | ClickHouse URL passed to tenant servers |
-### 10.2 cameleer3-server (per-tenant)
+### 10.2 cameleer-server (per-tenant)
Env vars injected into provisioned per-tenant server containers by `DockerTenantProvisioner`. All server properties use the `cameleer.server.*` prefix (env vars: `CAMELEER_SERVER_*`).
| Variable | Default / Value | Description |
|------------------------------|----------------------------------------------|----------------------------------|
-| `SPRING_DATASOURCE_URL` | `jdbc:postgresql://cameleer-postgres:5432/cameleer3` | PostgreSQL JDBC URL |
+| `SPRING_DATASOURCE_URL` | `jdbc:postgresql://cameleer-postgres:5432/cameleer` | PostgreSQL JDBC URL |
| `SPRING_DATASOURCE_USERNAME`| `cameleer` | PostgreSQL user |
| `SPRING_DATASOURCE_PASSWORD`| `cameleer_dev` | PostgreSQL password |
| `CAMELEER_SERVER_CLICKHOUSE_URL` | `jdbc:clickhouse://cameleer-clickhouse:8123/cameleer` | ClickHouse JDBC URL |
| `CAMELEER_SERVER_TENANT_ID` | *(tenant slug)* | Tenant identifier for data isolation |
| `CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKEN` | *(generated)* | Agent bootstrap token |
-| `CAMELEER_SERVER_SECURITY_JWTSECRET` | *(generated)* | JWT signing secret |
+| `CAMELEER_SERVER_SECURITY_JWTSECRET` | *(generated, must be non-empty)* | JWT signing secret |
| `CAMELEER_SERVER_SECURITY_OIDC_ISSUERURI` | `${PUBLIC_PROTOCOL}://${PUBLIC_HOST}/oidc` | OIDC issuer for M2M tokens |
| `CAMELEER_SERVER_SECURITY_OIDC_JWKSETURI` | `http://cameleer-logto:3001/oidc/jwks` | Docker-internal JWK fetch |
| `CAMELEER_SERVER_SECURITY_OIDC_AUDIENCE` | `https://api.cameleer.local` | JWT audience validation |


@@ -80,7 +80,7 @@ Note: Phase 9 (Frontend) can be developed in parallel with Phases 3-8, building
**PRD Sections:** 6 (Tenant Provisioning), 11 (Networking & Tenant Isolation)
**Gitea Epics:** #3 (Tenant Provisioning), #8 (Networking)
**Depends on:** Phase 2
-**Produces:** Automated tenant provisioning pipeline. Signup creates tenant → Flux HelmRelease generated → namespace provisioned → cameleer3-server deployed → PostgreSQL schema + OpenSearch index created → tenant ACTIVE. NetworkPolicies enforced.
+**Produces:** Automated tenant provisioning pipeline. Signup creates tenant → Flux HelmRelease generated → namespace provisioned → cameleer-server deployed → PostgreSQL schema + OpenSearch index created → tenant ACTIVE. NetworkPolicies enforced.
**Key deliverables:**
- Provisioning state machine (idempotent, retryable)
@@ -91,7 +91,7 @@ Note: Phase 9 (Frontend) can be developed in parallel with Phases 3-8, building
- Readiness checking (poll tenant server health)
- Tenant lifecycle operations (suspend, reactivate, delete)
- K8s NetworkPolicy templates (default deny + allow rules)
-- Helm chart for cameleer3-server tenant deployment
+- Helm chart for cameleer-server tenant deployment
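The "idempotent, retryable" state machine deliverable above can be sketched as follows. State names and transitions are assumptions for illustration (the plan does not enumerate them); the key property shown is that re-applying the current state is a no-op, so a retried step never fails.

```java
import java.util.Map;
import java.util.Set;

// Hypothetical provisioning state machine: each step advances one state, a
// retry of an already-completed step is idempotent, and FAILED can restart.
public final class ProvisioningStateMachine {
    public enum State { REQUESTED, NAMESPACE_READY, SERVER_DEPLOYED, STORAGE_READY, ACTIVE, FAILED }

    private static final Map<State, Set<State>> LEGAL = Map.of(
            State.REQUESTED, Set.of(State.NAMESPACE_READY, State.FAILED),
            State.NAMESPACE_READY, Set.of(State.SERVER_DEPLOYED, State.FAILED),
            State.SERVER_DEPLOYED, Set.of(State.STORAGE_READY, State.FAILED),
            State.STORAGE_READY, Set.of(State.ACTIVE, State.FAILED),
            State.ACTIVE, Set.of(),
            State.FAILED, Set.of(State.REQUESTED)); // retry restarts the pipeline

    public static State advance(State current, State next) {
        if (current == next) return current; // idempotent re-apply
        if (!LEGAL.get(current).contains(next)) {
            throw new IllegalStateException(current + " -> " + next + " is not a legal transition");
        }
        return next;
    }
}
```

Persisting the state per tenant and driving each transition from a retryable job gives the pipeline the crash-safety the deliverable asks for.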
---
@@ -143,11 +143,11 @@ Note: Phase 9 (Frontend) can be developed in parallel with Phases 3-8, building
**PRD Sections:** 8 (Observability Integration)
**Gitea Epics:** #6 (Observability Integration), #13 (Exchange Replay — gating only)
**Depends on:** Phase 3 (server already deployed per tenant), Phase 2 (license for feature gating)
-**Produces:** Tenants see their cameleer3-server UI embedded in the SaaS shell. API gateway routes to tenant server. MOAT features gated by license tier.
+**Produces:** Tenants see their cameleer-server UI embedded in the SaaS shell. API gateway routes to tenant server. MOAT features gated by license tier.
**Key deliverables:**
-- Ingress routing rules: `/t/{tenant}/api/*` → tenant's cameleer3-server
-- cameleer3-server "managed mode" configuration (trust SaaS JWT, report metrics)
+- Ingress routing rules: `/t/{tenant}/api/*` → tenant's cameleer-server
+- cameleer-server "managed mode" configuration (trust SaaS JWT, report metrics)
- Bootstrap token generation API
- MOAT feature gating via license (topology=all, lineage=limited/full, correlation=mid+, debugger=high+, replay=high+)
- Server UI embedding approach (iframe or reverse proxy with path rewriting)
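The MOAT gating deliverable above reduces to comparing the tenant's licensed tier against a per-feature minimum. The tier names and their low-to-high ordering here are illustrative guesses (the plan only says "mid+" and "high+"); only the rule shape is implied by the deliverable.

```java
// Sketch of tier-based feature gating: a feature is allowed when the
// licensed tier ranks at or above the feature's minimum tier.
public final class FeatureGate {
    public enum Tier { STARTER, TEAM, ENTERPRISE } // assumed low -> mid -> high ordering

    public enum Feature {
        TOPOLOGY(Tier.STARTER),    // "all" tiers
        LINEAGE(Tier.STARTER),     // limited at low tiers, full above (depth not modeled here)
        CORRELATION(Tier.TEAM),    // "mid+"
        DEBUGGER(Tier.ENTERPRISE), // "high+"
        REPLAY(Tier.ENTERPRISE);   // "high+"

        final Tier minimum;
        Feature(Tier minimum) { this.minimum = minimum; }
    }

    public static boolean allowed(Tier licensed, Feature feature) {
        return licensed.ordinal() >= feature.minimum.ordinal();
    }
}
```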
@@ -211,7 +211,7 @@ Note: Phase 9 (Frontend) can be developed in parallel with Phases 3-8, building
- SaaS shell (navigation, tenant switcher, user menu)
- Dashboard (platform overview)
- Apps list + App deployment page (upload, config, secrets, status, logs, versions)
-- Observability section (embedded cameleer3-server UI)
+- Observability section (embedded cameleer-server UI)
- Team management pages
- Settings pages (tenant config, SSO/OIDC, vault connections)
- Billing pages (usage, invoices, plan management)


@@ -2006,7 +2006,7 @@ available throughout request lifecycle."
**Files:**
- Create: `src/main/java/net/siegeln/cameleer/saas/config/ForwardAuthController.java`
-This endpoint is called by Traefik's ForwardAuth middleware to validate requests routed to non-platform services (e.g., cameleer3-server). It validates the JWT, resolves the tenant, and returns tenant context headers.
+This endpoint is called by Traefik's ForwardAuth middleware to validate requests routed to non-platform services (e.g., cameleer-server). It validates the JWT, resolves the tenant, and returns tenant context headers.
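The tenant-resolution step described above can be sketched as a pure function: claims in, context headers out. The header names (`X-Tenant-Id`, `X-Auth-Subject`) are assumptions for illustration, not the controller's actual contract.

```java
import java.util.Map;
import java.util.Optional;

// Hypothetical ForwardAuth resolution: an empty result means the controller
// should answer 401 and Traefik never forwards the request; a present result
// is the set of headers Traefik copies onto the forwarded request.
public final class ForwardAuthSketch {
    public static Optional<Map<String, String>> tenantHeaders(
            Map<String, Object> claims, Map<String, String> orgToTenant) {
        Object org = claims.get("organization_id");
        if (org == null) return Optional.empty();      // no org claim -> 401
        String tenantId = orgToTenant.get(org.toString());
        if (tenantId == null) return Optional.empty(); // unknown org -> 401
        return Optional.of(Map.of(
                "X-Tenant-Id", tenantId,
                "X-Auth-Subject", String.valueOf(claims.getOrDefault("sub", ""))));
    }
}
```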
- [ ] **Step 1: Create ForwardAuthController**
@@ -2455,8 +2455,8 @@ services:
networks:
- cameleer
-cameleer3-server:
-image: ${CAMELEER3_SERVER_IMAGE:-gitea.siegeln.net/cameleer/cameleer3-server}:${VERSION:-latest}
+cameleer-server:
+image: ${CAMELEER_SERVER_IMAGE:-gitea.siegeln.net/cameleer/cameleer-server}:${VERSION:-latest}
restart: unless-stopped
depends_on:
postgres:
@@ -2539,9 +2539,9 @@ git add docker-compose.yml docker-compose.dev.yml traefik.yml docker/init-databa
git commit -m "feat: add Docker Compose production stack with Traefik + Logto
7-container stack: Traefik (reverse proxy), PostgreSQL (shared),
-Logto (identity), cameleer-saas (control plane), cameleer3-server
+Logto (identity), cameleer-saas (control plane), cameleer-server
(observability), ClickHouse (traces). ForwardAuth middleware for
-tenant-aware routing to cameleer3-server."
+tenant-aware routing to cameleer-server."
```
---


@@ -2,7 +2,7 @@
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
-**Goal:** Customers can upload a Camel JAR, the platform builds a container image with cameleer3 agent auto-injected, and deploys it to a logical environment with full lifecycle management.
+**Goal:** Customers can upload a Camel JAR, the platform builds a container image with cameleer agent auto-injected, and deploys it to a logical environment with full lifecycle management.
**Architecture:** Environment → App → Deployment entity hierarchy. `RuntimeOrchestrator` interface with `DockerRuntimeOrchestrator` (docker-java) implementation. Async deployment pipeline with status polling. Container logs streamed to ClickHouse. Pre-built `cameleer-runtime-base` image for fast (~1-3s) customer image builds.
@@ -164,8 +164,8 @@ public class RuntimeConfig {
@Value("${cameleer.runtime.bootstrap-token:${CAMELEER_AUTH_TOKEN:}}")
private String bootstrapToken;
-@Value("${cameleer.runtime.cameleer3-server-endpoint:http://cameleer3-server:8081}")
-private String cameleer3ServerEndpoint;
+@Value("${cameleer.runtime.cameleer-server-endpoint:http://cameleer-server:8081}")
+private String cameleerServerEndpoint;
public long getMaxJarSize() { return maxJarSize; }
public String getJarStoragePath() { return jarStoragePath; }
@@ -177,7 +177,7 @@ public class RuntimeConfig {
public String getContainerMemoryLimit() { return containerMemoryLimit; }
public int getContainerCpuShares() { return containerCpuShares; }
public String getBootstrapToken() { return bootstrapToken; }
-public String getCameleer3ServerEndpoint() { return cameleer3ServerEndpoint; }
+public String getCameleerServerEndpoint() { return cameleerServerEndpoint; }
public long parseMemoryLimitBytes() {
var limit = containerMemoryLimit.trim().toLowerCase();
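The `parseMemoryLimitBytes()` method is truncated above after the lowercasing step. A completed standalone sketch is below; the suffix handling ("512m", "1g", plain bytes) is an assumption about the method's intent based on the `CAMELEER_CONTAINER_MEMORY_LIMIT` default of `512m`.

```java
// Completed sketch of the truncated memory-limit parser: accepts k/m/g
// suffixes (binary multiples) or a plain byte count.
public final class MemoryLimits {
    public static long parseBytes(String containerMemoryLimit) {
        var limit = containerMemoryLimit.trim().toLowerCase();
        long multiplier = 1;
        if (limit.endsWith("k")) multiplier = 1024L;
        else if (limit.endsWith("m")) multiplier = 1024L * 1024;
        else if (limit.endsWith("g")) multiplier = 1024L * 1024 * 1024;
        // strip the suffix character when one was recognized
        String digits = multiplier == 1 ? limit : limit.substring(0, limit.length() - 1);
        return Long.parseLong(digits) * multiplier;
    }
}
```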
@@ -270,7 +270,7 @@ Append to the existing `cameleer:` section in `src/main/resources/application.ym
container-memory-limit: ${CAMELEER_CONTAINER_MEMORY_LIMIT:512m}
container-cpu-shares: ${CAMELEER_CONTAINER_CPU_SHARES:512}
bootstrap-token: ${CAMELEER_AUTH_TOKEN:}
-cameleer3-server-endpoint: ${CAMELEER3_SERVER_ENDPOINT:http://cameleer3-server:8081}
+cameleer-server-endpoint: ${CAMELEER_SERVER_ENDPOINT:http://cameleer-server:8081}
clickhouse:
url: ${CLICKHOUSE_URL:jdbc:clickhouse://clickhouse:8123/cameleer}
```
@@ -2788,7 +2788,7 @@ public class DeploymentService {
var envVars = Map.of(
"CAMELEER_AUTH_TOKEN", env.getBootstrapToken(),
"CAMELEER_EXPORT_TYPE", "HTTP",
-"CAMELEER_EXPORT_ENDPOINT", runtimeConfig.getCameleer3ServerEndpoint(),
+"CAMELEER_EXPORT_ENDPOINT", runtimeConfig.getCameleerServerEndpoint(),
"CAMELEER_APPLICATION_ID", app.getSlug(),
"CAMELEER_ENVIRONMENT_ID", env.getSlug(),
"CAMELEER_DISPLAY_NAME", containerName);
@@ -3418,7 +3418,7 @@ volumes:
Add to the cameleer-saas service environment:
```yaml
CAMELEER_AUTH_TOKEN: ${CAMELEER_AUTH_TOKEN:-default-bootstrap-token}
-CAMELEER3_SERVER_ENDPOINT: http://cameleer3-server:8081
+CAMELEER_SERVER_ENDPOINT: http://cameleer-server:8081
CLICKHOUSE_URL: jdbc:clickhouse://clickhouse:8123/cameleer
```
@@ -3427,7 +3427,7 @@ Add to the cameleer-saas service volumes:
- jardata:/data/jars
```
-Add `CAMELEER_AUTH_TOKEN` to the cameleer3-server service environment:
+Add `CAMELEER_AUTH_TOKEN` to the cameleer-server service environment:
```yaml
CAMELEER_AUTH_TOKEN: ${CAMELEER_AUTH_TOKEN:-default-bootstrap-token}
```
@@ -3448,7 +3448,7 @@ FROM eclipse-temurin:21-jre-alpine
WORKDIR /app
# Agent JAR is copied during CI build from Gitea Maven registry
-# ARG AGENT_JAR=cameleer3-agent-1.0-SNAPSHOT-shaded.jar
+# ARG AGENT_JAR=cameleer-agent-1.0-SNAPSHOT-shaded.jar
COPY agent.jar /app/agent.jar
ENTRYPOINT exec java \


@@ -2,9 +2,9 @@
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
-**Goal:** Complete the deploy → hit endpoint → see traces loop. Serve the existing cameleer3-server dashboard, add agent connectivity verification, enable optional inbound HTTP routing for customer apps, and wire up observability data health checks.
+**Goal:** Complete the deploy → hit endpoint → see traces loop. Serve the existing cameleer-server dashboard, add agent connectivity verification, enable optional inbound HTTP routing for customer apps, and wire up observability data health checks.
-**Architecture:** Wiring phase — cameleer3-server already has full observability. Phase 4 adds Traefik routing for the dashboard + customer app endpoints, new API endpoints in cameleer-saas for agent-status and observability-status, and configures `CAMELEER_TENANT_ID` on the server.
+**Architecture:** Wiring phase — cameleer-server already has full observability. Phase 4 adds Traefik routing for the dashboard + customer app endpoints, new API endpoints in cameleer-saas for agent-status and observability-status, and configures `CAMELEER_TENANT_ID` on the server.
**Tech Stack:** Spring Boot 3.4.3, docker-java 3.4.1, ClickHouse JDBC, Traefik v3 labels, Spring RestClient
@@ -14,7 +14,7 @@
### New Files
-- `src/main/java/net/siegeln/cameleer/saas/observability/AgentStatusService.java` — Queries cameleer3-server for agent registration
+- `src/main/java/net/siegeln/cameleer/saas/observability/AgentStatusService.java` — Queries cameleer-server for agent registration
- `src/main/java/net/siegeln/cameleer/saas/observability/AgentStatusController.java` — Agent status + observability status endpoints
- `src/main/java/net/siegeln/cameleer/saas/observability/dto/AgentStatusResponse.java` — Response DTO
- `src/main/java/net/siegeln/cameleer/saas/observability/dto/ObservabilityStatusResponse.java` — Response DTO
@@ -359,7 +359,7 @@ class AgentStatusServiceTest {
@BeforeEach
void setUp() {
-when(runtimeConfig.getCameleer3ServerEndpoint()).thenReturn("http://cameleer3-server:8081");
+when(runtimeConfig.getCameleerServerEndpoint()).thenReturn("http://cameleer-server:8081");
agentStatusService = new AgentStatusService(appRepository, environmentRepository, runtimeConfig);
}
@@ -439,7 +439,7 @@ public class AgentStatusService {
this.environmentRepository = environmentRepository;
this.runtimeConfig = runtimeConfig;
this.restClient = RestClient.builder()
-.baseUrl(runtimeConfig.getCameleer3ServerEndpoint())
+.baseUrl(runtimeConfig.getCameleerServerEndpoint())
.build();
}
@@ -475,7 +475,7 @@ public class AgentStatusService {
return new AgentStatusResponse(false, "NOT_REGISTERED", null,
List.of(), app.getSlug(), env.getSlug());
} catch (Exception e) {
-log.warn("Failed to query agent status from cameleer3-server: {}", e.getMessage());
+log.warn("Failed to query agent status from cameleer-server: {}", e.getMessage());
return new AgentStatusResponse(false, "UNKNOWN", null,
List.of(), app.getSlug(), env.getSlug());
}
@@ -651,28 +651,28 @@ public class ConnectivityHealthCheck {
@EventListener(ApplicationReadyEvent.class)
public void verifyConnectivity() {
-checkCameleer3Server();
+checkCameleerServer();
}
-private void checkCameleer3Server() {
+private void checkCameleerServer() {
try {
var client = RestClient.builder()
-.baseUrl(runtimeConfig.getCameleer3ServerEndpoint())
+.baseUrl(runtimeConfig.getCameleerServerEndpoint())
.build();
var response = client.get()
.uri("/actuator/health")
.retrieve()
.toBodilessEntity();
if (response.getStatusCode().is2xxSuccessful()) {
-log.info("cameleer3-server connectivity: OK ({})",
-runtimeConfig.getCameleer3ServerEndpoint());
+log.info("cameleer-server connectivity: OK ({})",
+runtimeConfig.getCameleerServerEndpoint());
} else {
-log.warn("cameleer3-server connectivity: HTTP {} ({})",
-response.getStatusCode(), runtimeConfig.getCameleer3ServerEndpoint());
+log.warn("cameleer-server connectivity: HTTP {} ({})",
+response.getStatusCode(), runtimeConfig.getCameleerServerEndpoint());
}
} catch (Exception e) {
-log.warn("cameleer3-server connectivity: FAILED ({}) - {}",
-runtimeConfig.getCameleer3ServerEndpoint(), e.getMessage());
+log.warn("cameleer-server connectivity: FAILED ({}) - {}",
+runtimeConfig.getCameleerServerEndpoint(), e.getMessage());
}
}
}
@@ -686,7 +686,7 @@ Run: `mvn compile -B -q`
```bash
git add src/main/java/net/siegeln/cameleer/saas/observability/ConnectivityHealthCheck.java
-git commit -m "feat: add cameleer3-server startup connectivity check"
+git commit -m "feat: add cameleer-server startup connectivity check"
```
---
@@ -700,7 +700,7 @@ git commit -m "feat: add cameleer3-server startup connectivity check"
- [ ] **Step 1: Update docker-compose.yml — add dashboard route and CAMELEER_TENANT_ID**
-In the `cameleer3-server` service:
+In the `cameleer-server` service:
Add to environment section:
```yaml
@@ -774,7 +774,7 @@ git commit -m "docs: update HOWTO with observability dashboard, routing, and age
| Spec Requirement | Task |
|---|---|
-| Serve cameleer3-server dashboard via Traefik | Task 7 (dashboard Traefik labels) |
+| Serve cameleer-server dashboard via Traefik | Task 7 (dashboard Traefik labels) |
| CAMELEER_TENANT_ID configuration | Task 7 (docker-compose env) |
| Agent connectivity verification endpoint | Task 4 (AgentStatusService + Controller) |
| Observability data health endpoint | Task 4 (ObservabilityStatusResponse) |


@@ -4,7 +4,7 @@
**Goal:** Build a React SPA for managing tenants, environments, apps, and deployments. All backend APIs exist — this is the UI layer.
-**Architecture:** React 19 + Vite + React Router + Zustand + TanStack Query + @cameleer/design-system. Sidebar layout matching cameleer3-server SPA. Shared Logto OIDC session. RBAC on all actions. Lives in `ui/` directory, built into Spring Boot static resources.
+**Architecture:** React 19 + Vite + React Router + Zustand + TanStack Query + @cameleer/design-system. Sidebar layout matching cameleer-server SPA. Shared Logto OIDC session. RBAC on all actions. Lives in `ui/` directory, built into Spring Boot static resources.
**Tech Stack:** React 19, Vite 8, TypeScript, React Router 7, Zustand, TanStack React Query, @cameleer/design-system 0.1.31, Lucide React
@@ -332,7 +332,7 @@ git commit -m "feat: scaffold React SPA with Vite, design system, and TypeScript
- [ ] **Step 1: Create auth-store.ts**
-Zustand store for auth state. Same localStorage keys as cameleer3-server SPA for SSO.
+Zustand store for auth state. Same localStorage keys as cameleer-server SPA for SSO.
```typescript
import { create } from 'zustand';
@@ -1145,7 +1145,7 @@ git commit -m "feat: add SPA controller, Traefik route, CI frontend build, and H
|---|---|
| Project scaffolding (Vite, React, TS, design system) | Task 1 |
| TypeScript API types | Task 1 |
-| Auth store (Zustand, same keys as cameleer3-server) | Task 2 |
+| Auth store (Zustand, same keys as cameleer-server) | Task 2 |
| Login / Logto OIDC redirect / callback | Task 2 |
| Protected route | Task 2 |
| API client with auth middleware | Task 3 |


@@ -2,35 +2,35 @@
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
-**Goal:** Replace the incoherent three-system auth in cameleer-saas with Logto-centric architecture, and add OIDC resource server support to cameleer3-server for M2M.
+**Goal:** Replace the incoherent three-system auth in cameleer-saas with Logto-centric architecture, and add OIDC resource server support to cameleer-server for M2M.
-**Architecture:** Logto is the single identity provider for all humans. Spring OAuth2 Resource Server validates Logto JWTs in both the SaaS platform and cameleer3-server. Agents authenticate with per-environment API keys exchanged for server-issued JWTs. Ed25519 command signing is unchanged. Zero trust: every service validates tokens independently via JWKS.
+**Architecture:** Logto is the single identity provider for all humans. Spring OAuth2 Resource Server validates Logto JWTs in both the SaaS platform and cameleer-server. Agents authenticate with per-environment API keys exchanged for server-issued JWTs. Ed25519 command signing is unchanged. Zero trust: every service validates tokens independently via JWKS.
**Tech Stack:** Spring Boot 3.4, Spring Security OAuth2 Resource Server, Nimbus JOSE+JWT, Logto, React + @logto/react, Zustand, PostgreSQL, Flyway
**Spec:** `docs/superpowers/specs/2026-04-05-auth-overhaul-design.md`
**Repos:**
-- cameleer3-server: `C:\Users\Hendrik\Documents\projects\cameleer3-server` (Phase 1)
+- cameleer-server: `C:\Users\Hendrik\Documents\projects\cameleer-server` (Phase 1)
- cameleer-saas: `C:\Users\Hendrik\Documents\projects\cameleer-saas` (Phases 2-3)
-- cameleer3 (agent): NO CHANGES
+- cameleer (agent): NO CHANGES
---
-## Phase 1: cameleer3-server — OIDC Resource Server Support
+## Phase 1: cameleer-server — OIDC Resource Server Support
-All Phase 1 work is in `C:\Users\Hendrik\Documents\projects\cameleer3-server`.
+All Phase 1 work is in `C:\Users\Hendrik\Documents\projects\cameleer-server`.
### Task 1: Add OAuth2 Resource Server dependency and config properties
**Files:**
-- Modify: `cameleer3-server-app/pom.xml`
-- Modify: `cameleer3-server-app/src/main/resources/application.yml`
-- Modify: `cameleer3-server-app/src/main/java/com/cameleer3/server/app/security/SecurityProperties.java`
+- Modify: `cameleer-server-app/pom.xml`
+- Modify: `cameleer-server-app/src/main/resources/application.yml`
+- Modify: `cameleer-server-app/src/main/java/com/cameleer/server/app/security/SecurityProperties.java`
- [ ] **Step 1: Add dependency to pom.xml**
-In `cameleer3-server-app/pom.xml`, add after the `spring-boot-starter-security` dependency (around line 88):
+In `cameleer-server-app/pom.xml`, add after the `spring-boot-starter-security` dependency (around line 88):
```xml
<dependency>
@@ -41,7 +41,7 @@ In `cameleer3-server-app/pom.xml`, add after the `spring-boot-starter-security`
- [ ] **Step 2: Add OIDC properties to application.yml**
-In `cameleer3-server-app/src/main/resources/application.yml`, add two new properties under the `security:` block (after line 52):
+In `cameleer-server-app/src/main/resources/application.yml`, add two new properties under the `security:` block (after line 52):
```yaml
oidc-issuer-uri: ${CAMELEER_OIDC_ISSUER_URI:}
@@ -50,7 +50,7 @@ In `cameleer3-server-app/src/main/resources/application.yml`, add two new proper
- [ ] **Step 3: Add fields to SecurityProperties.java**
-In `cameleer3-server-app/src/main/java/com/cameleer3/server/app/security/SecurityProperties.java`, add after the `jwtSecret` field (line 19):
+In `cameleer-server-app/src/main/java/com/cameleer/server/app/security/SecurityProperties.java`, add after the `jwtSecret` field (line 19):
```java
private String oidcIssuerUri;
@@ -64,13 +64,13 @@ public void setOidcAudience(String oidcAudience) { this.oidcAudience = oidcAudie
- [ ] **Step 4: Verify build compiles**
-Run: `cd /c/Users/Hendrik/Documents/projects/cameleer3-server && ./mvnw compile -pl cameleer3-server-app -q`
+Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && ./mvnw compile -pl cameleer-server-app -q`
Expected: BUILD SUCCESS
- [ ] **Step 5: Commit**
```bash
-git add cameleer3-server-app/pom.xml cameleer3-server-app/src/main/resources/application.yml cameleer3-server-app/src/main/java/com/cameleer3/server/app/security/SecurityProperties.java
+git add cameleer-server-app/pom.xml cameleer-server-app/src/main/resources/application.yml cameleer-server-app/src/main/java/com/cameleer/server/app/security/SecurityProperties.java
git commit -m "feat: add oauth2-resource-server dependency and OIDC config properties"
```
@@ -79,14 +79,14 @@ git commit -m "feat: add oauth2-resource-server dependency and OIDC config prope
### Task 2: Add conditional OIDC JwtDecoder bean
**Files:**
-- Modify: `cameleer3-server-app/src/main/java/com/cameleer3/server/app/security/SecurityBeanConfig.java`
+- Modify: `cameleer-server-app/src/main/java/com/cameleer/server/app/security/SecurityBeanConfig.java`
- [ ] **Step 1: Write the failing test**
-Create `cameleer3-server-app/src/test/java/com/cameleer3/server/app/security/OidcJwtDecoderBeanTest.java`:
+Create `cameleer-server-app/src/test/java/com/cameleer/server/app/security/OidcJwtDecoderBeanTest.java`:
```java
-package com.cameleer3.server.app.security;
+package com.cameleer.server.app.security;
import org.junit.jupiter.api.Test;
import org.springframework.security.oauth2.jwt.JwtDecoder;
@@ -123,12 +123,12 @@ class OidcJwtDecoderBeanTest {
- [ ] **Step 2: Run test to verify it fails**
-Run: `cd /c/Users/Hendrik/Documents/projects/cameleer3-server && ./mvnw test -pl cameleer3-server-app -Dtest=OidcJwtDecoderBeanTest -q`
+Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && ./mvnw test -pl cameleer-server-app -Dtest=OidcJwtDecoderBeanTest -q`
Expected: FAIL — method `oidcJwtDecoder` does not exist
- [ ] **Step 3: Add the oidcJwtDecoder method to SecurityBeanConfig**
-In `cameleer3-server-app/src/main/java/com/cameleer3/server/app/security/SecurityBeanConfig.java`, add these imports at the top:
+In `cameleer-server-app/src/main/java/com/cameleer/server/app/security/SecurityBeanConfig.java`, add these imports at the top:
```java
import com.nimbusds.jose.JWSAlgorithm;
@@ -216,13 +216,13 @@ Update the test to match: the test calls `config.oidcJwtDecoder(properties)` dir
- [ ] **Step 5: Run test to verify it passes**
-Run: `cd /c/Users/Hendrik/Documents/projects/cameleer3-server && ./mvnw test -pl cameleer3-server-app -Dtest=OidcJwtDecoderBeanTest -q`
+Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && ./mvnw test -pl cameleer-server-app -Dtest=OidcJwtDecoderBeanTest -q`
Expected: PASS
- [ ] **Step 6: Commit**
```bash
-git add cameleer3-server-app/src/main/java/com/cameleer3/server/app/security/SecurityBeanConfig.java cameleer3-server-app/src/test/java/com/cameleer3/server/app/security/OidcJwtDecoderBeanTest.java
+git add cameleer-server-app/src/main/java/com/cameleer/server/app/security/SecurityBeanConfig.java cameleer-server-app/src/test/java/com/cameleer/server/app/security/OidcJwtDecoderBeanTest.java
git commit -m "feat: add conditional OIDC JwtDecoder factory for Logto token validation"
```
@@ -231,18 +231,18 @@ git commit -m "feat: add conditional OIDC JwtDecoder factory for Logto token val
### Task 3: Update JwtAuthenticationFilter with OIDC fallback
**Files:**
-- Modify: `cameleer3-server-app/src/main/java/com/cameleer3/server/app/security/JwtAuthenticationFilter.java`
+- Modify: `cameleer-server-app/src/main/java/com/cameleer/server/app/security/JwtAuthenticationFilter.java`
- [ ] **Step 1: Write the failing test**
-Create `cameleer3-server-app/src/test/java/com/cameleer3/server/app/security/JwtAuthenticationFilterOidcTest.java`:
+Create `cameleer-server-app/src/test/java/com/cameleer/server/app/security/JwtAuthenticationFilterOidcTest.java`:
```java
-package com.cameleer3.server.app.security;
+package com.cameleer.server.app.security;
-import com.cameleer3.server.core.agent.AgentRegistryService;
-import com.cameleer3.server.core.security.InvalidTokenException;
-import com.cameleer3.server.core.security.JwtService;
+import com.cameleer.server.core.agent.AgentRegistryService;
+import com.cameleer.server.core.security.InvalidTokenException;
+import com.cameleer.server.core.security.JwtService;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import org.junit.jupiter.api.BeforeEach;
@@ -369,19 +369,19 @@ class JwtAuthenticationFilterOidcTest {
- [ ] **Step 2: Run test to verify it fails**
-Run: `cd /c/Users/Hendrik/Documents/projects/cameleer3-server && ./mvnw test -pl cameleer3-server-app -Dtest=JwtAuthenticationFilterOidcTest -q`
+Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && ./mvnw test -pl cameleer-server-app -Dtest=JwtAuthenticationFilterOidcTest -q`
Expected: FAIL — constructor doesn't accept 3 args
- [ ] **Step 3: Update JwtAuthenticationFilter**
-Replace `cameleer3-server-app/src/main/java/com/cameleer3/server/app/security/JwtAuthenticationFilter.java` with:
+Replace `cameleer-server-app/src/main/java/com/cameleer/server/app/security/JwtAuthenticationFilter.java` with:
```java
-package com.cameleer3.server.app.security;
+package com.cameleer.server.app.security;
-import com.cameleer3.server.core.agent.AgentRegistryService;
-import com.cameleer3.server.core.security.JwtService;
-import com.cameleer3.server.core.security.JwtService.JwtValidationResult;
+import com.cameleer.server.core.agent.AgentRegistryService;
+import com.cameleer.server.core.security.JwtService;
+import com.cameleer.server.core.security.JwtService.JwtValidationResult;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.http.HttpServletRequest;
// … filter implementation elided in this excerpt …
```
- [ ] **Step 4: Run tests**
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && ./mvnw test -pl cameleer-server-app -Dtest=JwtAuthenticationFilterOidcTest -q`
Expected: PASS (all 4 tests)
- [ ] **Step 5: Commit**
```bash
git add cameleer-server-app/src/main/java/com/cameleer/server/app/security/JwtAuthenticationFilter.java cameleer-server-app/src/test/java/com/cameleer/server/app/security/JwtAuthenticationFilterOidcTest.java
git commit -m "feat: add OIDC token fallback to JwtAuthenticationFilter"
```
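The fallback order implemented in this task can be sketched without Spring. The names below (`authenticate`, `Principal`, the two validator functions) are illustrative stand-ins, not the actual `JwtAuthenticationFilter` API: the internal `JwtService` path is tried first, and the OIDC decoder is consulted only when it is configured and the internal path rejects the token.

```java
import java.util.Optional;
import java.util.function.Function;

// Hypothetical sketch of dual-path token authentication: internal tokens win,
// OIDC is an optional fallback (null decoder = OIDC not configured).
public class TokenFallbackSketch {
    record Principal(String subject, String source) {}

    static Optional<Principal> authenticate(
            String token,
            Function<String, Optional<String>> internalValidator,
            Function<String, Optional<String>> oidcDecoder) {
        Optional<Principal> internal = internalValidator.apply(token)
                .map(sub -> new Principal(sub, "internal"));
        if (internal.isPresent()) return internal;
        if (oidcDecoder == null) return Optional.empty(); // behaves exactly as before
        return oidcDecoder.apply(token).map(sub -> new Principal(sub, "oidc"));
    }

    public static void main(String[] args) {
        Function<String, Optional<String>> internal =
                t -> t.equals("internal-token") ? Optional.of("agent-1") : Optional.empty();
        Function<String, Optional<String>> oidc =
                t -> t.equals("oidc-token") ? Optional.of("user@example.com") : Optional.empty();

        System.out.println(authenticate("internal-token", internal, oidc)); // internal path
        System.out.println(authenticate("oidc-token", internal, oidc));     // OIDC fallback
        System.out.println(authenticate("bogus", internal, null));          // empty
    }
}
```

This ordering is what makes Step 3 of Task 4 safe: with no OIDC env vars set the decoder is null and existing behavior is unchanged.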
---
### Task 4: Wire OIDC decoder into SecurityConfig
**Files:**
- Modify: `cameleer-server-app/src/main/java/com/cameleer/server/app/security/SecurityConfig.java`
- Modify: `cameleer-server-app/src/main/java/com/cameleer/server/app/security/SecurityBeanConfig.java`
- [ ] **Step 1: Add OIDC decoder bean creation to SecurityBeanConfig**
- [ ] **Step 3: Run existing tests**
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && ./mvnw test -pl cameleer-server-app -q`
Expected: All existing tests PASS (no OIDC env vars set, decoder is null, filter behaves as before)
- [ ] **Step 4: Commit**
```bash
git add cameleer-server-app/src/main/java/com/cameleer/server/app/security/SecurityConfig.java cameleer-server-app/src/main/java/com/cameleer/server/app/security/SecurityBeanConfig.java
git commit -m "feat: wire optional OIDC JwtDecoder into security filter chain"
```
In `docker-compose.yml`, remove these two labels from `cameleer-saas`:
```yaml
- traefik.http.services.forwardauth.loadbalancer.server.port=8080
```
- [ ] **Step 2: Remove ForwardAuth middleware from cameleer-server**
In `docker-compose.yml`, remove the forward-auth middleware labels from `cameleer-server` (lines 158-159):
```yaml
- traefik.http.routers.observe.middlewares=forward-auth
```
In `cameleer-saas` environment, remove:
```yaml
CAMELEER_AUTH_TOKEN: ${CAMELEER_AUTH_TOKEN:-default-bootstrap-token}
```
In `cameleer-server` environment, add:
```yaml
CAMELEER_OIDC_ISSUER_URI: ${LOGTO_ISSUER_URI:-http://logto:3001/oidc}
CAMELEER_OIDC_AUDIENCE: ${CAMELEER_OIDC_AUDIENCE:-https://api.cameleer.local}
```

---
**Tech Stack:** Java 17, Spring Boot 3.4.3, PostgreSQL 16, Flyway, JUnit 5, Testcontainers, AssertJ
**Repo:** `C:\Users\Hendrik\Documents\projects\cameleer-server`
---
## File Map
### New Files
- `cameleer-server-app/src/main/resources/db/migration/V2__claim_mapping.sql`
- `cameleer-server-core/src/main/java/com/cameleer/server/core/rbac/ClaimMappingRule.java`
- `cameleer-server-core/src/main/java/com/cameleer/server/core/rbac/ClaimMappingRepository.java`
- `cameleer-server-core/src/main/java/com/cameleer/server/core/rbac/ClaimMappingService.java`
- `cameleer-server-core/src/main/java/com/cameleer/server/core/rbac/AssignmentOrigin.java`
- `cameleer-server-app/src/main/java/com/cameleer/server/app/storage/PostgresClaimMappingRepository.java`
- `cameleer-server-app/src/main/java/com/cameleer/server/app/controller/ClaimMappingAdminController.java`
- `cameleer-server-app/src/test/java/com/cameleer/server/core/rbac/ClaimMappingServiceTest.java`
- `cameleer-server-app/src/test/java/com/cameleer/server/app/controller/ClaimMappingAdminControllerIT.java`
- `cameleer-server-app/src/test/java/com/cameleer/server/app/security/OidcOnlyModeIT.java`
### Modified Files
- `cameleer-server-app/src/main/resources/db/migration/V1__init.sql` — no changes (immutable)
- `cameleer-server-app/src/main/java/com/cameleer/server/app/rbac/RbacServiceImpl.java` — add origin-aware query methods
- `cameleer-server-app/src/main/java/com/cameleer/server/app/storage/PostgresUserRepository.java` — add origin-aware queries
- `cameleer-server-app/src/main/java/com/cameleer/server/app/security/OidcAuthController.java` — replace syncOidcRoles with claim mapping
- `cameleer-server-app/src/main/java/com/cameleer/server/app/security/JwtAuthenticationFilter.java` — disable internal token path in OIDC-only mode
- `cameleer-server-app/src/main/java/com/cameleer/server/app/security/SecurityConfig.java` — conditional endpoint registration
- `cameleer-server-app/src/main/java/com/cameleer/server/app/controller/UserAdminController.java` — disable in OIDC-only mode
- `cameleer-server-app/src/main/java/com/cameleer/server/app/config/AgentRegistryBeanConfig.java` — wire ClaimMappingService
- `cameleer-server-app/src/main/resources/application.yml` — no new properties needed (OIDC config already exists)
---
### Task 1: Database Migration — Add Origin Tracking and Claim Mapping Rules
**Files:**
- Create: `cameleer-server-app/src/main/resources/db/migration/V2__claim_mapping.sql`
- [ ] **Step 1: Write the migration**
- [ ] **Step 2: Run migration to verify**
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && mvn flyway:migrate -pl cameleer-server-app -Dflyway.url=jdbc:postgresql://localhost:5432/cameleer -Dflyway.user=cameleer -Dflyway.password=cameleer_dev`
If no local PostgreSQL is available, verify the migration syntax by running the existing test suite, which uses Testcontainers.
- [ ] **Step 3: Commit**
```bash
git add cameleer-server-app/src/main/resources/db/migration/V2__claim_mapping.sql
git commit -m "feat: add claim mapping rules table and origin tracking to RBAC assignments"
```
---
### Task 2: Core Domain — ClaimMappingRule, AssignmentOrigin, Repository Interface
**Files:**
- Create: `cameleer-server-core/src/main/java/com/cameleer/server/core/rbac/AssignmentOrigin.java`
- Create: `cameleer-server-core/src/main/java/com/cameleer/server/core/rbac/ClaimMappingRule.java`
- Create: `cameleer-server-core/src/main/java/com/cameleer/server/core/rbac/ClaimMappingRepository.java`
- [ ] **Step 1: Create AssignmentOrigin enum**
```java
package com.cameleer.server.core.rbac;

public enum AssignmentOrigin {
direct, managed
}
```
- [ ] **Step 2: Create ClaimMappingRule record**
```java
package com.cameleer.server.core.rbac;

import java.time.Instant;
import java.util.UUID;
// … ClaimMappingRule record components elided in this excerpt …
```
- [ ] **Step 3: Create ClaimMappingRepository interface**
```java
package com.cameleer.server.core.rbac;

import java.util.List;
import java.util.Optional;
// … repository interface methods elided in this excerpt …
```
- [ ] **Step 4: Commit**
```bash
git add cameleer-server-core/src/main/java/com/cameleer/server/core/rbac/AssignmentOrigin.java
git add cameleer-server-core/src/main/java/com/cameleer/server/core/rbac/ClaimMappingRule.java
git add cameleer-server-core/src/main/java/com/cameleer/server/core/rbac/ClaimMappingRepository.java
git commit -m "feat: add ClaimMappingRule domain model and repository interface"
```
---
### Task 3: Core Domain — ClaimMappingService
**Files:**
- Create: `cameleer-server-core/src/main/java/com/cameleer/server/core/rbac/ClaimMappingService.java`
- Create: `cameleer-server-app/src/test/java/com/cameleer/server/core/rbac/ClaimMappingServiceTest.java`
- [ ] **Step 1: Write tests for ClaimMappingService**
```java
package com.cameleer.server.core.rbac;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
// … test methods elided in this excerpt …
```
- [ ] **Step 2: Run tests to verify they fail**
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && mvn test -pl cameleer-server-app -Dtest=ClaimMappingServiceTest -Dsurefire.failIfNoSpecifiedTests=false`
Expected: Compilation error — ClaimMappingService does not exist yet.
- [ ] **Step 3: Implement ClaimMappingService**
```java
package com.cameleer.server.core.rbac;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
// … service implementation elided in this excerpt …
```
- [ ] **Step 4: Run tests to verify they pass**
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && mvn test -pl cameleer-server-app -Dtest=ClaimMappingServiceTest`
Expected: All 7 tests PASS.
- [ ] **Step 5: Commit**
```bash
git add cameleer-server-core/src/main/java/com/cameleer/server/core/rbac/ClaimMappingService.java
git add cameleer-server-app/src/test/java/com/cameleer/server/core/rbac/ClaimMappingServiceTest.java
git commit -m "feat: implement ClaimMappingService with equals/contains/regex matching"
```
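The three match types named in the commit message (equals/contains/regex) can be sketched in isolation. `Rule` and `MatchType` below are hypothetical stand-ins for the real `ClaimMappingRule` fields; multi-valued claims (e.g. a `groups` list) match if any element matches.

```java
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical sketch of equals/contains/regex claim matching.
enum MatchType { EQUALS, CONTAINS, REGEX }

record Rule(String claimName, MatchType type, String expected, String roleId) {
    // A list-valued claim matches when any of its elements matches.
    boolean matches(Object claimValue) {
        if (claimValue instanceof List<?> values) {
            return values.stream().anyMatch(this::matchesSingle);
        }
        return matchesSingle(claimValue);
    }

    private boolean matchesSingle(Object value) {
        String s = String.valueOf(value);
        return switch (type) {
            case EQUALS -> s.equals(expected);
            case CONTAINS -> s.contains(expected);
            case REGEX -> Pattern.matches(expected, s); // full-string match
        };
    }
}

public class ClaimMatchSketch {
    public static void main(String[] args) {
        Rule admins = new Rule("groups", MatchType.EQUALS, "platform-admins", "role-admin");
        System.out.println(admins.matches(List.of("devs", "platform-admins"))); // true

        Rule byDomain = new Rule("email", MatchType.REGEX, ".*@cameleer\\.example", "role-user");
        System.out.println(byDomain.matches("alice@cameleer.example")); // true
    }
}
```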
---
### Task 4: PostgreSQL Repository — ClaimMappingRepository Implementation
**Files:**
- Create: `cameleer-server-app/src/main/java/com/cameleer/server/app/storage/PostgresClaimMappingRepository.java`
- [ ] **Step 1: Implement PostgresClaimMappingRepository**
```java
package com.cameleer.server.app.storage;

import com.cameleer.server.core.rbac.ClaimMappingRepository;
import com.cameleer.server.core.rbac.ClaimMappingRule;
import org.springframework.jdbc.core.JdbcTemplate;
import java.util.List;
// … repository implementation elided in this excerpt …
```
- [ ] **Step 2: Wire the bean in AgentRegistryBeanConfig (or a new RbacBeanConfig)**
Add to `cameleer-server-app/src/main/java/com/cameleer/server/app/config/AgentRegistryBeanConfig.java` (or create a new `RbacBeanConfig.java`):
```java
@Bean
// … bean definitions elided in this excerpt …
```
- [ ] **Step 3: Commit**
```bash
git add cameleer-server-app/src/main/java/com/cameleer/server/app/storage/PostgresClaimMappingRepository.java
git add cameleer-server-app/src/main/java/com/cameleer/server/app/config/AgentRegistryBeanConfig.java
git commit -m "feat: implement PostgresClaimMappingRepository and wire beans"
```
---
### Task 5: Modify RbacServiceImpl — Origin-Aware Assignments
**Files:**
- Modify: `cameleer-server-app/src/main/java/com/cameleer/server/app/rbac/RbacServiceImpl.java`
- [ ] **Step 1: Add managed assignment methods to RbacService interface**
In `cameleer-server-core/src/main/java/com/cameleer/server/core/rbac/RbacService.java`, add:
```java
void clearManagedAssignments(String userId);
// … remaining interface additions elided in this excerpt …
```
- [ ] **Step 5: Run existing tests**
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && mvn test -pl cameleer-server-app`
Expected: All existing tests still pass (migration adds columns with defaults).
- [ ] **Step 6: Commit**
```bash
git add cameleer-server-core/src/main/java/com/cameleer/server/core/rbac/RbacService.java
git add cameleer-server-app/src/main/java/com/cameleer/server/app/rbac/RbacServiceImpl.java
git commit -m "feat: add origin-aware managed/direct assignment methods to RbacService"
```
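The managed/direct split can be illustrated with a small reconciliation sketch. `Assignment` and `reconcile` are hypothetical names; the intended invariant is that direct (manually granted) assignments survive a login while managed assignments are rebuilt from the current claim mapping results.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of origin-aware assignment reconciliation on OIDC login.
public class OriginSketch {
    enum Origin { direct, managed }
    record Assignment(String roleId, Origin origin) {}

    static List<Assignment> reconcile(List<Assignment> current, Set<String> mappedRoles) {
        List<Assignment> next = new ArrayList<>();
        for (Assignment a : current) {
            if (a.origin() == Origin.direct) next.add(a); // keep manual grants
        }
        for (String role : mappedRoles) {
            next.add(new Assignment(role, Origin.managed)); // re-derive from claims
        }
        return next;
    }

    public static void main(String[] args) {
        List<Assignment> current = List.of(
                new Assignment("viewer", Origin.direct),
                new Assignment("admin", Origin.managed)); // stale managed grant
        // viewer (direct) is kept, admin is dropped, operator is added as managed
        System.out.println(reconcile(current, Set.of("operator")));
    }
}
```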
---
### Task 6: Modify OidcAuthController — Replace syncOidcRoles with Claim Mapping
**Files:**
- Modify: `cameleer-server-app/src/main/java/com/cameleer/server/app/security/OidcAuthController.java`
- [ ] **Step 1: Inject ClaimMappingService and ClaimMappingRepository**
- [ ] **Step 4: Run existing tests**
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && mvn test -pl cameleer-server-app`
Expected: PASS (OIDC tests may need adjustment if they test syncOidcRoles directly).
- [ ] **Step 5: Commit**
```bash
git add cameleer-server-app/src/main/java/com/cameleer/server/app/security/OidcAuthController.java
git commit -m "feat: replace syncOidcRoles with claim mapping evaluation on OIDC login"
```
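Login-time evaluation can be sketched as a pure function from a claims map to a role set. This is a hypothetical simplification of `ClaimMappingService` using exact-match rules only; the real service also supports contains and regex matching.

```java
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: evaluate every mapping rule against the token's claims
// map; the union of matched role ids becomes the user's managed role set.
public class ClaimsEvalSketch {
    record Rule(String claimName, String expectedValue, String roleId) {}

    static Set<String> rolesFor(Map<String, Object> claims, List<Rule> rules) {
        Set<String> roles = new LinkedHashSet<>();
        for (Rule rule : rules) {
            Object value = claims.get(rule.claimName());
            List<?> values = value instanceof List<?> l ? l : Collections.singletonList(value);
            if (values.stream().anyMatch(v -> rule.expectedValue().equals(String.valueOf(v)))) {
                roles.add(rule.roleId());
            }
        }
        return roles;
    }

    public static void main(String[] args) {
        List<Rule> rules = List.of(
                new Rule("groups", "platform-admins", "admin"),
                new Rule("groups", "observers", "viewer"));
        Map<String, Object> claims = Map.of("sub", "alice", "groups", List.of("platform-admins"));
        System.out.println(rolesFor(claims, rules)); // [admin]
    }
}
```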
---
### Task 7: OIDC-Only Mode — Disable Local Auth When OIDC Configured
**Files:**
- Modify: `cameleer-server-app/src/main/java/com/cameleer/server/app/security/SecurityConfig.java`
- Modify: `cameleer-server-app/src/main/java/com/cameleer/server/app/security/JwtAuthenticationFilter.java`
- [ ] **Step 1: Add isOidcEnabled() helper to SecurityConfig**
- [ ] **Step 5: Run full test suite**
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && mvn test -pl cameleer-server-app`
Expected: PASS.
- [ ] **Step 6: Commit**
```bash
git add cameleer-server-app/src/main/java/com/cameleer/server/app/security/SecurityConfig.java
git add cameleer-server-app/src/main/java/com/cameleer/server/app/security/JwtAuthenticationFilter.java
git add cameleer-server-app/src/main/java/com/cameleer/server/app/controller/UserAdminController.java
git commit -m "feat: disable local auth when OIDC is configured (resource server mode)"
```
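The switch itself is simple. A minimal sketch, assuming the mode is derived from the presence of the `CAMELEER_OIDC_ISSUER_URI` setting that appears elsewhere in this plan; the helper name is hypothetical:

```java
import java.util.Map;

// Hypothetical sketch of the OIDC-only switch: local auth endpoints are
// registered only when no OIDC issuer is configured.
public class OidcModeSketch {
    static boolean isOidcEnabled(Map<String, String> env) {
        String issuer = env.get("CAMELEER_OIDC_ISSUER_URI");
        return issuer != null && !issuer.isBlank();
    }

    public static void main(String[] args) {
        System.out.println(isOidcEnabled(Map.of())); // false: local auth stays active
        System.out.println(isOidcEnabled(Map.of(
                "CAMELEER_OIDC_ISSUER_URI", "http://logto:3001/oidc"))); // true: OIDC-only
    }
}
```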
---
### Task 8: Claim Mapping Admin Controller
**Files:**
- Create: `cameleer-server-app/src/main/java/com/cameleer/server/app/controller/ClaimMappingAdminController.java`
- [ ] **Step 1: Implement the controller**
```java
package com.cameleer.server.app.controller;

import com.cameleer.server.core.rbac.ClaimMappingRepository;
import com.cameleer.server.core.rbac.ClaimMappingRule;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.springframework.http.ResponseEntity;
// … controller implementation elided in this excerpt …
```
- [ ] **Step 3: Run full test suite**
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && mvn test -pl cameleer-server-app`
Expected: PASS.
- [ ] **Step 4: Commit**
```bash
git add cameleer-server-app/src/main/java/com/cameleer/server/app/controller/ClaimMappingAdminController.java
git commit -m "feat: add ClaimMappingAdminController for CRUD on mapping rules"
```
---
### Task 9: Integration Test — Claim Mapping End-to-End
**Files:**
- Create: `cameleer-server-app/src/test/java/com/cameleer/server/app/controller/ClaimMappingAdminControllerIT.java`
- [ ] **Step 1: Write integration test**
```java
package com.cameleer.server.app.controller;

import com.cameleer.server.app.AbstractPostgresIT;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.jupiter.api.BeforeEach;
// … integration test body elided in this excerpt …
```
- [ ] **Step 2: Run integration tests**
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && mvn test -pl cameleer-server-app -Dtest=ClaimMappingAdminControllerIT`
Expected: PASS.
- [ ] **Step 3: Commit**
```bash
git add cameleer-server-app/src/test/java/com/cameleer/server/app/controller/ClaimMappingAdminControllerIT.java
git commit -m "test: add integration tests for claim mapping admin API"
```
---
- [ ] **Step 1: Run all tests**
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && mvn clean verify`
Expected: All tests PASS. Build succeeds.
- [ ] **Step 2: Verify migration applies cleanly on fresh database**
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && mvn test -pl cameleer-server-app -Dtest=AbstractPostgresIT`
Expected: Testcontainers starts fresh PostgreSQL, Flyway applies V1 + V2, context loads.
- [ ] **Step 3: Commit any remaining fixes**

---
**Tech Stack:** Java 17, Spring Boot 3.4.3, Ed25519 (JDK built-in), Nimbus JOSE JWT, JUnit 5, AssertJ
**Repo:** `C:\Users\Hendrik\Documents\projects\cameleer-server`
---
## File Map
### New Files
- `cameleer-server-core/src/main/java/com/cameleer/server/core/license/LicenseInfo.java`
- `cameleer-server-core/src/main/java/com/cameleer/server/core/license/LicenseValidator.java`
- `cameleer-server-core/src/main/java/com/cameleer/server/core/license/LicenseGate.java`
- `cameleer-server-core/src/main/java/com/cameleer/server/core/license/Feature.java`
- `cameleer-server-app/src/main/java/com/cameleer/server/app/config/LicenseBeanConfig.java`
- `cameleer-server-app/src/main/java/com/cameleer/server/app/controller/LicenseAdminController.java`
- `cameleer-server-app/src/test/java/com/cameleer/server/core/license/LicenseValidatorTest.java`
- `cameleer-server-app/src/test/java/com/cameleer/server/core/license/LicenseGateTest.java`
### Modified Files
- `cameleer-server-app/src/main/resources/application.yml` — add license config properties
---
### Task 1: Core Domain — LicenseInfo, Feature Enum
**Files:**
- Create: `cameleer-server-core/src/main/java/com/cameleer/server/core/license/Feature.java`
- Create: `cameleer-server-core/src/main/java/com/cameleer/server/core/license/LicenseInfo.java`
- [ ] **Step 1: Create Feature enum**
```java
package com.cameleer.server.core.license;

public enum Feature {
topology,
    // … remaining feature constants elided in this excerpt …
}
```
- [ ] **Step 2: Create LicenseInfo record**
```java
package com.cameleer.server.core.license;

import java.time.Instant;
import java.util.Map;
// … LicenseInfo record components elided in this excerpt …
```
- [ ] **Step 3: Commit**
```bash
git add cameleer-server-core/src/main/java/com/cameleer/server/core/license/Feature.java
git add cameleer-server-core/src/main/java/com/cameleer/server/core/license/LicenseInfo.java
git commit -m "feat: add LicenseInfo and Feature domain model"
```
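The exact components of `LicenseInfo` are not shown in this excerpt, so the fields and the `isExpired` helper below are assumptions; this is only a minimal sketch of what such a record could look like:

```java
import java.time.Instant;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of a license record with an expiry check
// (null expiresAt = perpetual license).
public class LicenseInfoSketch {
    record LicenseInfo(String tier, Set<String> features, Instant expiresAt,
                       Map<String, Integer> limits) {
        boolean isExpired(Instant now) {
            return expiresAt != null && now.isAfter(expiresAt);
        }
    }

    public static void main(String[] args) {
        LicenseInfo lic = new LicenseInfo("STARTER", Set.of("topology"),
                Instant.parse("2027-01-01T00:00:00Z"), Map.of("maxAgents", 5));
        System.out.println(lic.isExpired(Instant.parse("2026-06-01T00:00:00Z"))); // false
        System.out.println(lic.isExpired(Instant.parse("2027-06-01T00:00:00Z"))); // true
    }
}
```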
---
### Task 2: LicenseValidator — Ed25519 JWT Verification
**Files:**
- Create: `cameleer-server-core/src/main/java/com/cameleer/server/core/license/LicenseValidator.java`
- Create: `cameleer-server-app/src/test/java/com/cameleer/server/core/license/LicenseValidatorTest.java`
- [ ] **Step 1: Write tests**
```java
package com.cameleer.server.core.license;

import org.junit.jupiter.api.Test;
// … test methods elided in this excerpt …
```
- [ ] **Step 2: Run tests to verify they fail**
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && mvn test -pl cameleer-server-app -Dtest=LicenseValidatorTest -Dsurefire.failIfNoSpecifiedTests=false`
Expected: Compilation error — LicenseValidator does not exist.
- [ ] **Step 3: Implement LicenseValidator**
```java
package com.cameleer.server.core.license;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
// … validator implementation elided in this excerpt …
```
- [ ] **Step 4: Run tests**
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && mvn test -pl cameleer-server-app -Dtest=LicenseValidatorTest`
Expected: All 3 tests PASS.
- [ ] **Step 5: Commit**
```bash
git add cameleer-server-core/src/main/java/com/cameleer/server/core/license/LicenseValidator.java
git add cameleer-server-app/src/test/java/com/cameleer/server/core/license/LicenseValidatorTest.java
git commit -m "feat: implement LicenseValidator with Ed25519 signature verification"
```
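The underlying primitive is the JDK's built-in Ed25519 support (available since Java 15), independent of the Nimbus JWT layer. A minimal sign/verify sketch with hypothetical helper names:

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

// Hypothetical sketch of Ed25519 signing and verification with the JDK
// built-in provider; payload tampering makes verification fail.
public class Ed25519Sketch {
    static byte[] sign(PrivateKey key, byte[] payload) throws GeneralSecurityException {
        Signature s = Signature.getInstance("Ed25519");
        s.initSign(key);
        s.update(payload);
        return s.sign();
    }

    static boolean verify(PublicKey key, byte[] payload, byte[] sig) throws GeneralSecurityException {
        Signature v = Signature.getInstance("Ed25519");
        v.initVerify(key);
        v.update(payload);
        return v.verify(sig);
    }

    public static void main(String[] args) throws GeneralSecurityException {
        KeyPair kp = KeyPairGenerator.getInstance("Ed25519").generateKeyPair();
        byte[] claims = "{\"tier\":\"STARTER\"}".getBytes(StandardCharsets.UTF_8);
        byte[] sig = sign(kp.getPrivate(), claims);

        System.out.println(verify(kp.getPublic(), claims, sig)); // true
        byte[] tampered = "{\"tier\":\"TEAM\"}".getBytes(StandardCharsets.UTF_8);
        System.out.println(verify(kp.getPublic(), tampered, sig)); // false
    }
}
```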
---
### Task 3: LicenseGate — Feature Check Service
**Files:**
- Create: `cameleer-server-core/src/main/java/com/cameleer/server/core/license/LicenseGate.java`
- Create: `cameleer-server-app/src/test/java/com/cameleer/server/core/license/LicenseGateTest.java`
- [ ] **Step 1: Write tests**
```java
package com.cameleer.server.core.license;

import org.junit.jupiter.api.Test;
// … test methods elided in this excerpt …
```
- [ ] **Step 2: Implement LicenseGate**
```java
package com.cameleer.server.core.license;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
// … gate implementation elided in this excerpt …
```
- [ ] **Step 3: Run tests**
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && mvn test -pl cameleer-server-app -Dtest=LicenseGateTest`
Expected: PASS.
- [ ] **Step 4: Commit**
```bash
git add cameleer-server-core/src/main/java/com/cameleer/server/core/license/LicenseGate.java
git add cameleer-server-app/src/test/java/com/cameleer/server/core/license/LicenseGateTest.java
git commit -m "feat: implement LicenseGate for feature checking"
```
@@ -421,8 +421,8 @@ git commit -m "feat: implement LicenseGate for feature checking"
### Task 4: License Loading — Bean Config and Startup
**Files:**
- Create: `cameleer3-server-app/src/main/java/com/cameleer3/server/app/config/LicenseBeanConfig.java`
- Modify: `cameleer3-server-app/src/main/resources/application.yml`
- Create: `cameleer-server-app/src/main/java/com/cameleer/server/app/config/LicenseBeanConfig.java`
- Modify: `cameleer-server-app/src/main/resources/application.yml`
- [ ] **Step 1: Add license config properties to application.yml**
@@ -436,11 +436,11 @@ license:
- [ ] **Step 2: Implement LicenseBeanConfig**
```java
package com.cameleer3.server.app.config;
package com.cameleer.server.app.config;
import com.cameleer3.server.core.license.LicenseGate;
import com.cameleer3.server.core.license.LicenseInfo;
import com.cameleer3.server.core.license.LicenseValidator;
import com.cameleer.server.core.license.LicenseGate;
import com.cameleer.server.core.license.LicenseInfo;
import com.cameleer.server.core.license.LicenseValidator;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
@@ -509,8 +509,8 @@ public class LicenseBeanConfig {
- [ ] **Step 3: Commit**
```bash
git add cameleer3-server-app/src/main/java/com/cameleer3/server/app/config/LicenseBeanConfig.java
git add cameleer3-server-app/src/main/resources/application.yml
git add cameleer-server-app/src/main/java/com/cameleer/server/app/config/LicenseBeanConfig.java
git add cameleer-server-app/src/main/resources/application.yml
git commit -m "feat: add license loading at startup from env var or file"
```
@@ -519,16 +519,16 @@ git commit -m "feat: add license loading at startup from env var or file"
### Task 5: License Admin API — Runtime License Update
**Files:**
- Create: `cameleer3-server-app/src/main/java/com/cameleer3/server/app/controller/LicenseAdminController.java`
- Create: `cameleer-server-app/src/main/java/com/cameleer/server/app/controller/LicenseAdminController.java`
- [ ] **Step 1: Implement controller**
```java
package com.cameleer3.server.app.controller;
package com.cameleer.server.app.controller;
import com.cameleer3.server.core.license.LicenseGate;
import com.cameleer3.server.core.license.LicenseInfo;
import com.cameleer3.server.core.license.LicenseValidator;
import com.cameleer.server.core.license.LicenseGate;
import com.cameleer.server.core.license.LicenseInfo;
import com.cameleer.server.core.license.LicenseValidator;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.springframework.beans.factory.annotation.Value;
@@ -581,13 +581,13 @@ public class LicenseAdminController {
- [ ] **Step 2: Run full test suite**
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer3-server && mvn clean verify`
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && mvn clean verify`
Expected: PASS.
- [ ] **Step 3: Commit**
```bash
git add cameleer3-server-app/src/main/java/com/cameleer3/server/app/controller/LicenseAdminController.java
git add cameleer-server-app/src/main/java/com/cameleer/server/app/controller/LicenseAdminController.java
git commit -m "feat: add license admin API for runtime license updates"
```
@@ -611,5 +611,5 @@ public ResponseEntity<?> listDebugSessions() {
- [ ] **Step 2: Final verification**
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer3-server && mvn clean verify`
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && mvn clean verify`
Expected: All tests PASS.


@@ -1,6 +1,6 @@
# Plan 3: Runtime Management in the Server
> **Status: COMPLETED** — Verified 2026-04-09. All runtime management fully ported to cameleer3-server with enhancements beyond the original plan.
> **Status: COMPLETED** — Verified 2026-04-09. All runtime management fully ported to cameleer-server with enhancements beyond the original plan.
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [x]`) syntax for tracking.
@@ -10,7 +10,7 @@
**Tech Stack:** Java 17, Spring Boot 3.4.3, docker-java (zerodep transport), PostgreSQL 16, Flyway, JUnit 5, Testcontainers
**Repo:** `C:\Users\Hendrik\Documents\projects\cameleer3-server`
**Repo:** `C:\Users\Hendrik\Documents\projects\cameleer-server`
**Source reference:** Code ported from `C:\Users\Hendrik\Documents\projects\cameleer-saas` (environment, app, deployment, runtime packages)
@@ -18,10 +18,10 @@
## File Map
### New Files — Core Module (`cameleer3-server-core`)
### New Files — Core Module (`cameleer-server-core`)
```
src/main/java/com/cameleer3/server/core/runtime/
src/main/java/com/cameleer/server/core/runtime/
├── Environment.java Record: id, slug, displayName, status, createdAt
├── EnvironmentStatus.java Enum: ACTIVE, SUSPENDED
├── EnvironmentRepository.java Interface: CRUD + findBySlug
@@ -42,10 +42,10 @@ src/main/java/com/cameleer3/server/core/runtime/
└── RoutingMode.java Enum: path, subdomain
```
### New Files — App Module (`cameleer3-server-app`)
### New Files — App Module (`cameleer-server-app`)
```
src/main/java/com/cameleer3/server/app/runtime/
src/main/java/com/cameleer/server/app/runtime/
├── DockerRuntimeOrchestrator.java Docker implementation using docker-java
├── DisabledRuntimeOrchestrator.java No-op implementation (observability-only mode)
├── RuntimeOrchestratorAutoConfig.java @Configuration: auto-detects Docker vs K8s vs disabled
@@ -53,13 +53,13 @@ src/main/java/com/cameleer3/server/app/runtime/
├── JarStorageService.java File-system JAR storage with versioning
└── ContainerLogCollector.java Collects Docker container stdout/stderr
src/main/java/com/cameleer3/server/app/storage/
src/main/java/com/cameleer/server/app/storage/
├── PostgresEnvironmentRepository.java
├── PostgresAppRepository.java
├── PostgresAppVersionRepository.java
└── PostgresDeploymentRepository.java
src/main/java/com/cameleer3/server/app/controller/
src/main/java/com/cameleer/server/app/controller/
├── EnvironmentAdminController.java CRUD endpoints under /api/v1/admin/environments
├── AppController.java App + version CRUD + JAR upload
└── DeploymentController.java Deploy, stop, restart, promote, logs
@@ -70,7 +70,7 @@ src/main/resources/db/migration/
### Modified Files
- `pom.xml` (parent) — add docker-java dependency
- `cameleer3-server-app/pom.xml` — add docker-java dependency
- `cameleer-server-app/pom.xml` — add docker-java dependency
- `application.yml` — add runtime config properties
---
@@ -78,7 +78,7 @@ src/main/resources/db/migration/
### Task 1: Add docker-java Dependency
**Files:**
- Modify: `cameleer3-server-app/pom.xml`
- Modify: `cameleer-server-app/pom.xml`
- [x] **Step 1: Add docker-java dependency**
@@ -97,13 +97,13 @@ src/main/resources/db/migration/
- [x] **Step 2: Verify build**
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer3-server && mvn compile -pl cameleer3-server-app`
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && mvn compile -pl cameleer-server-app`
Expected: BUILD SUCCESS.
- [x] **Step 3: Commit**
```bash
git add cameleer3-server-app/pom.xml
git add cameleer-server-app/pom.xml
git commit -m "chore: add docker-java dependency for runtime orchestration"
```
@@ -112,7 +112,7 @@ git commit -m "chore: add docker-java dependency for runtime orchestration"
### Task 2: Database Migration — Runtime Management Tables
**Files:**
- Create: `cameleer3-server-app/src/main/resources/db/migration/V3__runtime_management.sql`
- Create: `cameleer-server-app/src/main/resources/db/migration/V3__runtime_management.sql`
- [x] **Step 1: Write migration**
@@ -176,7 +176,7 @@ INSERT INTO environments (slug, display_name) VALUES ('default', 'Default');
- [x] **Step 2: Commit**
```bash
git add cameleer3-server-app/src/main/resources/db/migration/V3__runtime_management.sql
git add cameleer-server-app/src/main/resources/db/migration/V3__runtime_management.sql
git commit -m "feat: add runtime management database schema (environments, apps, versions, deployments)"
```
@@ -185,36 +185,36 @@ git commit -m "feat: add runtime management database schema (environments, apps,
### Task 3: Core Domain — Environment, App, AppVersion, Deployment Records
**Files:**
- Create all records in `cameleer3-server-core/src/main/java/com/cameleer3/server/core/runtime/`
- Create all records in `cameleer-server-core/src/main/java/com/cameleer/server/core/runtime/`
- [x] **Step 1: Create all domain records**
```java
// Environment.java
package com.cameleer3.server.core.runtime;
package com.cameleer.server.core.runtime;
import java.time.Instant;
import java.util.UUID;
public record Environment(UUID id, String slug, String displayName, EnvironmentStatus status, Instant createdAt) {}
// EnvironmentStatus.java
package com.cameleer3.server.core.runtime;
package com.cameleer.server.core.runtime;
public enum EnvironmentStatus { ACTIVE, SUSPENDED }
// App.java
package com.cameleer3.server.core.runtime;
package com.cameleer.server.core.runtime;
import java.time.Instant;
import java.util.UUID;
public record App(UUID id, UUID environmentId, String slug, String displayName, Instant createdAt) {}
// AppVersion.java
package com.cameleer3.server.core.runtime;
package com.cameleer.server.core.runtime;
import java.time.Instant;
import java.util.UUID;
public record AppVersion(UUID id, UUID appId, int version, String jarPath, String jarChecksum,
String jarFilename, Long jarSizeBytes, Instant uploadedAt) {}
// Deployment.java
package com.cameleer3.server.core.runtime;
package com.cameleer.server.core.runtime;
import java.time.Instant;
import java.util.UUID;
public record Deployment(UUID id, UUID appId, UUID appVersionId, UUID environmentId,
@@ -227,18 +227,18 @@ public record Deployment(UUID id, UUID appId, UUID appVersionId, UUID environmen
}
// DeploymentStatus.java
package com.cameleer3.server.core.runtime;
package com.cameleer.server.core.runtime;
public enum DeploymentStatus { STARTING, RUNNING, FAILED, STOPPED }
// RoutingMode.java
package com.cameleer3.server.core.runtime;
package com.cameleer.server.core.runtime;
public enum RoutingMode { path, subdomain }
```
- [x] **Step 2: Commit**
```bash
git add cameleer3-server-core/src/main/java/com/cameleer3/server/core/runtime/
git add cameleer-server-core/src/main/java/com/cameleer/server/core/runtime/
git commit -m "feat: add runtime management domain records"
```
@@ -253,7 +253,7 @@ git commit -m "feat: add runtime management domain records"
```java
// EnvironmentRepository.java
package com.cameleer3.server.core.runtime;
package com.cameleer.server.core.runtime;
import java.util.*;
public interface EnvironmentRepository {
List<Environment> findAll();
@@ -266,7 +266,7 @@ public interface EnvironmentRepository {
}
// AppRepository.java
package com.cameleer3.server.core.runtime;
package com.cameleer.server.core.runtime;
import java.util.*;
public interface AppRepository {
List<App> findByEnvironmentId(UUID environmentId);
@@ -277,7 +277,7 @@ public interface AppRepository {
}
// AppVersionRepository.java
package com.cameleer3.server.core.runtime;
package com.cameleer.server.core.runtime;
import java.util.*;
public interface AppVersionRepository {
List<AppVersion> findByAppId(UUID appId);
@@ -287,7 +287,7 @@ public interface AppVersionRepository {
}
// DeploymentRepository.java
package com.cameleer3.server.core.runtime;
package com.cameleer.server.core.runtime;
import java.util.*;
public interface DeploymentRepository {
List<Deployment> findByAppId(UUID appId);
@@ -305,7 +305,7 @@ public interface DeploymentRepository {
```java
// RuntimeOrchestrator.java
package com.cameleer3.server.core.runtime;
package com.cameleer.server.core.runtime;
import java.util.stream.Stream;
@@ -319,7 +319,7 @@ public interface RuntimeOrchestrator {
}
// ContainerRequest.java
package com.cameleer3.server.core.runtime;
package com.cameleer.server.core.runtime;
import java.util.Map;
public record ContainerRequest(
String containerName,
@@ -334,7 +334,7 @@ public record ContainerRequest(
) {}
// ContainerStatus.java
package com.cameleer3.server.core.runtime;
package com.cameleer.server.core.runtime;
public record ContainerStatus(String state, boolean running, int exitCode, String error) {
public static ContainerStatus notFound() {
return new ContainerStatus("not_found", false, -1, "Container not found");
@@ -345,7 +345,7 @@ public record ContainerStatus(String state, boolean running, int exitCode, Strin
- [x] **Step 3: Commit**
```bash
git add cameleer3-server-core/src/main/java/com/cameleer3/server/core/runtime/
git add cameleer-server-core/src/main/java/com/cameleer/server/core/runtime/
git commit -m "feat: add runtime repository interfaces and RuntimeOrchestrator"
```
@@ -359,7 +359,7 @@ git commit -m "feat: add runtime repository interfaces and RuntimeOrchestrator"
- [x] **Step 1: Create EnvironmentService**
```java
package com.cameleer3.server.core.runtime;
package com.cameleer.server.core.runtime;
import java.util.List;
import java.util.UUID;
@@ -395,7 +395,7 @@ public class EnvironmentService {
- [x] **Step 2: Create AppService**
```java
package com.cameleer3.server.core.runtime;
package com.cameleer.server.core.runtime;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -478,7 +478,7 @@ public class AppService {
- [x] **Step 3: Create DeploymentService**
```java
package com.cameleer3.server.core.runtime;
package com.cameleer.server.core.runtime;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -536,7 +536,7 @@ public class DeploymentService {
- [x] **Step 4: Commit**
```bash
git add cameleer3-server-core/src/main/java/com/cameleer3/server/core/runtime/
git add cameleer-server-core/src/main/java/com/cameleer/server/core/runtime/
git commit -m "feat: add EnvironmentService, AppService, DeploymentService"
```
@@ -598,14 +598,14 @@ public class RuntimeBeanConfig {
- [x] **Step 3: Run tests**
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer3-server && mvn test -pl cameleer3-server-app`
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && mvn test -pl cameleer-server-app`
Expected: PASS (Flyway applies V3 migration, context loads).
- [x] **Step 4: Commit**
```bash
git add cameleer3-server-app/src/main/java/com/cameleer3/server/app/storage/Postgres*Repository.java
git add cameleer3-server-app/src/main/java/com/cameleer3/server/app/config/RuntimeBeanConfig.java
git add cameleer-server-app/src/main/java/com/cameleer/server/app/storage/Postgres*Repository.java
git add cameleer-server-app/src/main/java/com/cameleer/server/app/config/RuntimeBeanConfig.java
git commit -m "feat: implement PostgreSQL repositories for runtime management"
```
@@ -614,16 +614,16 @@ git commit -m "feat: implement PostgreSQL repositories for runtime management"
### Task 7: Docker Runtime Orchestrator
**Files:**
- Create: `cameleer3-server-app/src/main/java/com/cameleer3/server/app/runtime/DockerRuntimeOrchestrator.java`
- Create: `cameleer3-server-app/src/main/java/com/cameleer3/server/app/runtime/DisabledRuntimeOrchestrator.java`
- Create: `cameleer3-server-app/src/main/java/com/cameleer3/server/app/runtime/RuntimeOrchestratorAutoConfig.java`
- Create: `cameleer-server-app/src/main/java/com/cameleer/server/app/runtime/DockerRuntimeOrchestrator.java`
- Create: `cameleer-server-app/src/main/java/com/cameleer/server/app/runtime/DisabledRuntimeOrchestrator.java`
- Create: `cameleer-server-app/src/main/java/com/cameleer/server/app/runtime/RuntimeOrchestratorAutoConfig.java`
- [x] **Step 1: Implement DisabledRuntimeOrchestrator**
```java
package com.cameleer3.server.app.runtime;
package com.cameleer.server.app.runtime;
import com.cameleer3.server.core.runtime.*;
import com.cameleer.server.core.runtime.*;
import java.util.stream.Stream;
public class DisabledRuntimeOrchestrator implements RuntimeOrchestrator {
@@ -685,9 +685,9 @@ public String startContainer(ContainerRequest request) {
- [x] **Step 3: Implement RuntimeOrchestratorAutoConfig**
```java
package com.cameleer3.server.app.runtime;
package com.cameleer.server.app.runtime;
import com.cameleer3.server.core.runtime.RuntimeOrchestrator;
import com.cameleer.server.core.runtime.RuntimeOrchestrator;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.annotation.Bean;
@@ -718,7 +718,7 @@ public class RuntimeOrchestratorAutoConfig {
- [x] **Step 4: Commit**
```bash
git add cameleer3-server-app/src/main/java/com/cameleer3/server/app/runtime/
git add cameleer-server-app/src/main/java/com/cameleer/server/app/runtime/
git commit -m "feat: implement DockerRuntimeOrchestrator with volume-mount JAR deployment"
```
@@ -727,14 +727,14 @@ git commit -m "feat: implement DockerRuntimeOrchestrator with volume-mount JAR d
### Task 8: DeploymentExecutor — Async Deployment Pipeline
**Files:**
- Create: `cameleer3-server-app/src/main/java/com/cameleer3/server/app/runtime/DeploymentExecutor.java`
- Create: `cameleer-server-app/src/main/java/com/cameleer/server/app/runtime/DeploymentExecutor.java`
- [x] **Step 1: Implement async deployment pipeline**
```java
package com.cameleer3.server.app.runtime;
package com.cameleer.server.app.runtime;
import com.cameleer3.server.core.runtime.*;
import com.cameleer.server.core.runtime.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.scheduling.annotation.Async;
@@ -841,7 +841,7 @@ public TaskExecutor deploymentTaskExecutor() {
- [x] **Step 3: Commit**
```bash
git add cameleer3-server-app/src/main/java/com/cameleer3/server/app/runtime/DeploymentExecutor.java
git add cameleer-server-app/src/main/java/com/cameleer/server/app/runtime/DeploymentExecutor.java
git commit -m "feat: implement async DeploymentExecutor pipeline"
```
@@ -907,9 +907,9 @@ Add to `SecurityConfig.filterChain()`:
- [x] **Step 5: Commit**
```bash
git add cameleer3-server-app/src/main/java/com/cameleer3/server/app/controller/EnvironmentAdminController.java
git add cameleer3-server-app/src/main/java/com/cameleer3/server/app/controller/AppController.java
git add cameleer3-server-app/src/main/java/com/cameleer3/server/app/controller/DeploymentController.java
git add cameleer-server-app/src/main/java/com/cameleer/server/app/controller/EnvironmentAdminController.java
git add cameleer-server-app/src/main/java/com/cameleer/server/app/controller/AppController.java
git add cameleer-server-app/src/main/java/com/cameleer/server/app/controller/DeploymentController.java
git commit -m "feat: add REST controllers for environment, app, and deployment management"
```
@@ -918,7 +918,7 @@ git commit -m "feat: add REST controllers for environment, app, and deployment m
### Task 10: Configuration and Application Properties
**Files:**
- Modify: `cameleer3-server-app/src/main/resources/application.yml`
- Modify: `cameleer-server-app/src/main/resources/application.yml`
- [x] **Step 1: Add runtime config properties**
@@ -939,13 +939,13 @@ cameleer:
- [x] **Step 2: Run full test suite**
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer3-server && mvn clean verify`
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && mvn clean verify`
Expected: PASS.
- [x] **Step 3: Commit**
```bash
git add cameleer3-server-app/src/main/resources/application.yml
git add cameleer-server-app/src/main/resources/application.yml
git commit -m "feat: add runtime management configuration properties"
```
@@ -968,7 +968,7 @@ Test deployment creation (with `DisabledRuntimeOrchestrator` — verifies the de
- [x] **Step 4: Commit**
```bash
git add cameleer3-server-app/src/test/java/com/cameleer3/server/app/controller/
git add cameleer-server-app/src/test/java/com/cameleer/server/app/controller/
git commit -m "test: add integration tests for runtime management API"
```
@@ -978,7 +978,7 @@ git commit -m "test: add integration tests for runtime management API"
- [x] **Step 1: Run full build**
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer3-server && mvn clean verify`
Run: `cd /c/Users/Hendrik/Documents/projects/cameleer-server && mvn clean verify`
Expected: All tests PASS.
- [x] **Step 2: Verify schema applies cleanly**


@@ -10,7 +10,7 @@
**Repo:** `C:\Users\Hendrik\Documents\projects\cameleer-saas`
**Prerequisite:** Plans 1-3 must be implemented in cameleer3-server first.
**Prerequisite:** Plans 1-3 must be implemented in cameleer-server first.
---
@@ -212,7 +212,7 @@ git commit -m "feat: remove migrated environment/app/deployment/runtime code fro
```sql
-- V010__drop_migrated_tables.sql
-- Drop tables that have been migrated to cameleer3-server
-- Drop tables that have been migrated to cameleer-server
DROP TABLE IF EXISTS deployments CASCADE;
DROP TABLE IF EXISTS apps CASCADE;
@@ -242,7 +242,7 @@ group_add:
- "0"
```
The Docker socket mount now belongs to the `cameleer3-server` service instead.
The Docker socket mount now belongs to the `cameleer-server` service instead.
- [ ] **Step 2: Remove docker-java dependency from pom.xml**
@@ -328,7 +328,7 @@ git commit -m "feat: expand ServerApiClient with license push and health check m
- [ ] **Step 1: Create integration contract document**
Create `docs/SAAS-INTEGRATION.md` in the cameleer3-server repo documenting:
Create `docs/SAAS-INTEGRATION.md` in the cameleer-server repo documenting:
- Which server API endpoints the SaaS calls
- Required auth (M2M token with `server:admin` scope)
- License injection mechanism (`POST /api/v1/admin/license`)
@@ -339,7 +339,7 @@ Create `docs/SAAS-INTEGRATION.md` in the cameleer3-server repo documenting:
- [ ] **Step 2: Commit**
```bash
cd /c/Users/Hendrik/Documents/projects/cameleer3-server
cd /c/Users/Hendrik/Documents/projects/cameleer-server
git add docs/SAAS-INTEGRATION.md
git commit -m "docs: add SaaS integration contract documentation"
```


@@ -2,7 +2,7 @@
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** Redesign the cameleer-saas platform from a read-only viewer into a vendor management plane that provisions per-tenant cameleer3-server instances, with vendor CRUD and customer self-service.
**Goal:** Redesign the cameleer-saas platform from a read-only viewer into a vendor management plane that provisions per-tenant cameleer-server instances, with vendor CRUD and customer self-service.
**Architecture:** Two-persona split (vendor console at `/vendor/*`, tenant portal at `/tenant/*`). Pluggable `TenantProvisioner` interface with Docker implementation. Backend orchestrates provisioning + Logto + licensing in a single create-tenant flow. Frontend adapts sidebar/routes by persona.
@@ -410,13 +410,13 @@ Append to `application.yml`:
```yaml
cameleer:
provisioning:
server-image: ${CAMELEER_SERVER_IMAGE:gitea.siegeln.net/cameleer/cameleer3-server:latest}
server-ui-image: ${CAMELEER_SERVER_UI_IMAGE:gitea.siegeln.net/cameleer/cameleer3-server-ui:latest}
server-image: ${CAMELEER_SERVER_IMAGE:gitea.siegeln.net/cameleer/cameleer-server:latest}
server-ui-image: ${CAMELEER_SERVER_UI_IMAGE:gitea.siegeln.net/cameleer/cameleer-server-ui:latest}
network-name: ${CAMELEER_NETWORK:cameleer-saas_cameleer}
traefik-network: ${CAMELEER_TRAEFIK_NETWORK:cameleer-traefik}
public-host: ${PUBLIC_HOST:localhost}
public-protocol: ${PUBLIC_PROTOCOL:https}
datasource-url: ${CAMELEER_SERVER_DB_URL:jdbc:postgresql://postgres:5432/cameleer3}
datasource-url: ${CAMELEER_SERVER_DB_URL:jdbc:postgresql://postgres:5432/cameleer}
oidc-issuer-uri: ${PUBLIC_PROTOCOL:https}://${PUBLIC_HOST:localhost}/oidc
oidc-jwk-set-uri: http://logto:3001/oidc/jwks
cors-origins: ${PUBLIC_PROTOCOL:https}://${PUBLIC_HOST:localhost}
@@ -1877,7 +1877,7 @@ import { LayoutDashboard, ShieldCheck, Server, Users, Settings, KeyRound, Buildi
import { useAuth } from '../auth/useAuth';
import { useScopes } from '../auth/useScopes';
import { useOrgStore } from '../auth/useOrganization';
import logo from '@cameleer/design-system/assets/cameleer3-logo.svg';
import logo from '@cameleer/design-system/assets/cameleer-logo.svg';
export function Layout() {
const navigate = useNavigate();
@@ -2940,8 +2940,8 @@ This gives the SaaS container access to the Docker daemon for provisioning.
Add to the `cameleer-saas` environment section:
```yaml
CAMELEER_SERVER_IMAGE: gitea.siegeln.net/cameleer/cameleer3-server:${VERSION:-latest}
CAMELEER_SERVER_UI_IMAGE: gitea.siegeln.net/cameleer/cameleer3-server-ui:${VERSION:-latest}
CAMELEER_SERVER_IMAGE: gitea.siegeln.net/cameleer/cameleer-server:${VERSION:-latest}
CAMELEER_SERVER_UI_IMAGE: gitea.siegeln.net/cameleer/cameleer-server-ui:${VERSION:-latest}
CAMELEER_NETWORK: cameleer-saas_cameleer
CAMELEER_TRAEFIK_NETWORK: cameleer-traefik
```


@@ -581,7 +581,7 @@ In `ui/sign-in/src/SignInPage.tsx`, find the logo text (line ~61):
// BEFORE:
<div className={styles.logo}>
<img src={cameleerLogo} alt="" className={styles.logoImg} />
cameleer3
cameleer
</div>
// AFTER:


@@ -0,0 +1,210 @@
# Fleet Health at a Glance Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** Add agent count, environment count, and agent limit columns to the vendor tenant list so the vendor can see fleet utilization at a glance.
**Architecture:** Extend the existing `VendorTenantSummary` record with three int fields. The list endpoint fetches counts from each active tenant's server via existing M2M API methods (`getAgentCount`, `getEnvironmentCount`), parallelized with `CompletableFuture`. Frontend adds two columns (Agents, Envs) to the DataTable.
**Tech Stack:** Java 21, Spring Boot, CompletableFuture, React, TypeScript, @cameleer/design-system DataTable
---
### Task 1: Extend backend — VendorTenantSummary + parallel fetch
**Files:**
- Modify: `src/main/java/net/siegeln/cameleer/saas/vendor/VendorTenantController.java`
- [ ] **Step 1: Extend the VendorTenantSummary record**
In `VendorTenantController.java`, replace the record at lines 39-48:
```java
public record VendorTenantSummary(
UUID id,
String name,
String slug,
String tier,
String status,
String serverState,
String licenseExpiry,
String provisionError,
int agentCount,
int environmentCount,
int agentLimit
) {}
```
- [ ] **Step 2: Update the listAll() endpoint to fetch counts in parallel**
Replace the `listAll()` method at lines 60-77:
```java
@GetMapping
public ResponseEntity<List<VendorTenantSummary>> listAll() {
    var tenants = vendorTenantService.listAll();
    // Fetch per-tenant health in parallel; only active tenants with a running server get live counts
    var futures = tenants.stream().map(tenant -> java.util.concurrent.CompletableFuture.supplyAsync(() -> {
        ServerStatus status = vendorTenantService.getServerStatus(tenant);
        // Fetch the license once; it supplies both the expiry and the agent limit
        var license = vendorTenantService.getLicenseForTenant(tenant.getId());
        String licenseExpiry = license
                .map(l -> l.getExpiresAt() != null ? l.getExpiresAt().toString() : null)
                .orElse(null);
        int agentCount = 0;
        int environmentCount = 0;
        int agentLimit = -1; // -1 means "unlimited"
        String endpoint = tenant.getServerEndpoint();
        boolean isActive = "ACTIVE".equals(tenant.getStatus().name());
        if (isActive && endpoint != null && !endpoint.isBlank() && "RUNNING".equals(status.state().name())) {
            var serverApi = vendorTenantService.getServerApiClient();
            agentCount = serverApi.getAgentCount(endpoint);
            environmentCount = serverApi.getEnvironmentCount(endpoint);
        }
        if (license.isPresent() && license.get().getLimits() != null) {
            var limits = license.get().getLimits();
            if (limits.containsKey("agents")) {
                agentLimit = ((Number) limits.get("agents")).intValue();
            }
        }
        return new VendorTenantSummary(
                tenant.getId(), tenant.getName(), tenant.getSlug(),
                tenant.getTier().name(), tenant.getStatus().name(),
                status.state().name(), licenseExpiry, tenant.getProvisionError(),
                agentCount, environmentCount, agentLimit
        );
    })).toList();
    List<VendorTenantSummary> summaries = futures.stream()
            .map(java.util.concurrent.CompletableFuture::join)
            .toList();
    return ResponseEntity.ok(summaries);
}
```
- [ ] **Step 3: Expose ServerApiClient from VendorTenantService**
Add a getter in `src/main/java/net/siegeln/cameleer/saas/vendor/VendorTenantService.java`:
```java
public ServerApiClient getServerApiClient() {
return serverApiClient;
}
```
(The `serverApiClient` field already exists in VendorTenantService — check around line 30.)
- [ ] **Step 4: Verify compilation**
Run: `./mvnw compile -pl . -q`
Expected: BUILD SUCCESS
- [ ] **Step 5: Commit**
```bash
git add src/main/java/net/siegeln/cameleer/saas/vendor/VendorTenantController.java \
src/main/java/net/siegeln/cameleer/saas/vendor/VendorTenantService.java
git commit -m "feat: add agent/env counts to vendor tenant list endpoint"
```
---
### Task 2: Update frontend types and columns
**Files:**
- Modify: `ui/src/types/api.ts`
- Modify: `ui/src/pages/vendor/VendorTenantsPage.tsx`
- [ ] **Step 1: Add fields to VendorTenantSummary TypeScript type**
In `ui/src/types/api.ts`, update the `VendorTenantSummary` interface:
```typescript
export interface VendorTenantSummary {
id: string;
name: string;
slug: string;
tier: string;
status: string;
serverState: string;
licenseExpiry: string | null;
provisionError: string | null;
agentCount: number;
environmentCount: number;
agentLimit: number;
}
```
- [ ] **Step 2: Add Agents and Envs columns to VendorTenantsPage**
In `ui/src/pages/vendor/VendorTenantsPage.tsx`, add a helper function after `statusColor`:
```typescript
function formatUsage(used: number, limit: number): string {
return limit < 0 ? `${used} / ∞` : `${used} / ${limit}`;
}
```
Then add two column entries in the `columns` array, after the `serverState` column (after line 54) and before the `licenseExpiry` column:
```typescript
{
key: 'agentCount',
header: 'Agents',
render: (_v, row) => (
<span style={{ fontFamily: 'monospace', fontSize: '0.875rem' }}>
{formatUsage(row.agentCount, row.agentLimit)}
</span>
),
},
{
key: 'environmentCount',
header: 'Envs',
render: (_v, row) => (
<span style={{ fontFamily: 'monospace', fontSize: '0.875rem' }}>
{row.environmentCount}
</span>
),
},
```
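The helper above treats a negative `agentLimit` as the "unlimited" sentinel. As a quick sanity check of the expected rendering (plain TypeScript, runnable standalone):
```typescript
// Same helper as in Step 2: a negative limit renders as unlimited.
function formatUsage(used: number, limit: number): string {
  return limit < 0 ? `${used} / ∞` : `${used} / ${limit}`;
}

console.log(formatUsage(3, 10)); // "3 / 10"
console.log(formatUsage(0, -1)); // "0 / ∞"
```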
- [ ] **Step 3: Build the UI**
Run: `cd ui && npm run build`
Expected: Build succeeds with no errors.
- [ ] **Step 4: Commit**
```bash
git add ui/src/types/api.ts ui/src/pages/vendor/VendorTenantsPage.tsx
git commit -m "feat: show agent/env counts in vendor tenant list"
```
---
### Task 3: Verify end-to-end
- [ ] **Step 1: Run backend tests**
Run: `./mvnw test -pl . -q`
Expected: All tests pass. (Existing tests use mocks; the new parallel fetch doesn't break them, since it only affects the controller's list mapping.)
- [ ] **Step 2: Verify in browser**
Navigate to the vendor tenant list. Confirm:
- "Agents" column shows "0 / ∞" (or actual count if agents are connected)
- "Envs" column shows "1" (or actual count)
- PROVISIONING/SUSPENDED tenants show "0" for both
- 30s auto-refresh still works
- [ ] **Step 3: Final commit and push**
```bash
git push
```


```diff
@@ -1572,8 +1572,8 @@ VENDOR_PASS=${VENDOR_PASS:-}
 DOCKER_SOCKET=${DOCKER_SOCKET}
 # Provisioning images
-CAMELEER_SAAS_PROVISIONING_SERVERIMAGE=${REGISTRY}/cameleer3-server:${VERSION}
-CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE=${REGISTRY}/cameleer3-server-ui:${VERSION}
+CAMELEER_SAAS_PROVISIONING_SERVERIMAGE=${REGISTRY}/cameleer-server:${VERSION}
+CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE=${REGISTRY}/cameleer-server-ui:${VERSION}
 EOF
 log_info "Generated .env"
@@ -1793,8 +1793,8 @@ EOF
 CAMELEER_SAAS_PROVISIONING_PUBLICHOST: ${PUBLIC_HOST:-localhost}
 CAMELEER_SAAS_PROVISIONING_NETWORKNAME: ${COMPOSE_PROJECT_NAME:-cameleer-saas}_cameleer
 CAMELEER_SAAS_PROVISIONING_TRAEFIKNETWORK: cameleer-traefik
-CAMELEER_SAAS_PROVISIONING_SERVERIMAGE: ${CAMELEER_SAAS_PROVISIONING_SERVERIMAGE:-gitea.siegeln.net/cameleer/cameleer3-server:latest}
-CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE: ${CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE:-gitea.siegeln.net/cameleer/cameleer3-server-ui:latest}
+CAMELEER_SAAS_PROVISIONING_SERVERIMAGE: ${CAMELEER_SAAS_PROVISIONING_SERVERIMAGE:-gitea.siegeln.net/cameleer/cameleer-server:latest}
+CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE: ${CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE:-gitea.siegeln.net/cameleer/cameleer-server-ui:latest}
 labels:
 - traefik.enable=true
 - traefik.http.routers.saas.rule=PathPrefix(`/platform`)
@@ -2109,7 +2109,7 @@ EOF
 | `logto` | OIDC identity provider + bootstrap |
 | `cameleer-saas` | SaaS platform (Spring Boot + React) |
-Per-tenant `cameleer3-server` and `cameleer3-server-ui` containers are provisioned dynamically when tenants are created.
+Per-tenant `cameleer-server` and `cameleer-server-ui` containers are provisioned dynamically when tenants are created.
 ## Networking
@@ -2656,7 +2656,7 @@ Tasks 8-16 ────── can run in parallel with Phase 1
 ## Follow-up (out of scope)
-- Bake `docker/server-ui-entrypoint.sh` into the `cameleer3-server-ui` image (separate repo)
+- Bake `docker/server-ui-entrypoint.sh` into the `cameleer-server-ui` image (separate repo)
 - Set up `install.cameleer.io` distribution endpoint
 - Create release automation (tag → publish installer scripts to distribution endpoint)
 - Add `docker-compose.dev.yml` overlay generation for the installer's expert mode
```

# Externalize Docker Compose Templates — Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** Replace inline docker-compose generation in installer scripts with static template files, reducing duplication and enabling user customization.
**Architecture:** Static YAML templates in `installer/templates/` are copied to the install directory. The installer writes `.env` (including `COMPOSE_FILE` to select which templates are active) and runs `docker compose up -d`. Conditional features (TLS, monitoring) are handled via compose file layering and `.env` variables instead of heredoc injection.
**Tech Stack:** Docker Compose v2, YAML, Bash, PowerShell
**Spec:** `docs/superpowers/specs/2026-04-15-externalize-compose-templates-design.md`
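The `COMPOSE_FILE` mechanism this architecture relies on is an ordered, colon-separated file list; the sketch below (file names from this plan, flag expansion illustrative) shows the equivalent explicit `-f` invocation it stands for.

```shell
# COMPOSE_FILE in .env is read by docker compose and split on ':'
# (COMPOSE_PATH_SEPARATOR); later files override earlier ones.
COMPOSE_FILE="docker-compose.yml:docker-compose.saas.yml:docker-compose.tls.yml"

# Build the equivalent explicit `-f` flag list.
flags=""
old_ifs=$IFS; IFS=':'
for f in $COMPOSE_FILE; do
  flags="$flags -f $f"
done
IFS=$old_ifs

echo "docker compose$flags up -d"
# -> docker compose -f docker-compose.yml -f docker-compose.saas.yml -f docker-compose.tls.yml up -d
```

Because the order matters, overlays like TLS and monitoring must always come after the file they override.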
---
### Task 1: Create `docker-compose.yml` (infra base template)
**Files:**
- Create: `installer/templates/docker-compose.yml`
This is the shared infrastructure base — always loaded regardless of deployment mode.
- [ ] **Step 1: Create the infra base template**
```yaml
# Cameleer Infrastructure
# Shared base — always loaded. Mode-specific services in separate compose files.
services:
cameleer-traefik:
image: ${TRAEFIK_IMAGE:-gitea.siegeln.net/cameleer/cameleer-traefik}:${VERSION:-latest}
restart: unless-stopped
ports:
- "${HTTP_PORT:-80}:80"
- "${HTTPS_PORT:-443}:443"
- "${LOGTO_CONSOLE_BIND:-127.0.0.1}:${LOGTO_CONSOLE_PORT:-3002}:3002"
environment:
PUBLIC_HOST: ${PUBLIC_HOST:-localhost}
CERT_FILE: ${CERT_FILE:-}
KEY_FILE: ${KEY_FILE:-}
CA_FILE: ${CA_FILE:-}
volumes:
- cameleer-certs:/certs
- ${DOCKER_SOCKET:-/var/run/docker.sock}:/var/run/docker.sock:ro
labels:
- "prometheus.io/scrape=true"
- "prometheus.io/port=8082"
- "prometheus.io/path=/metrics"
networks:
- cameleer
- cameleer-traefik
- monitoring
cameleer-postgres:
image: ${POSTGRES_IMAGE:-gitea.siegeln.net/cameleer/cameleer-postgres}:${VERSION:-latest}
restart: unless-stopped
environment:
POSTGRES_DB: ${POSTGRES_DB:-cameleer_saas}
POSTGRES_USER: ${POSTGRES_USER:-cameleer}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?POSTGRES_PASSWORD must be set in .env}
volumes:
- cameleer-pgdata:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER:-cameleer} -d $${POSTGRES_DB:-cameleer_saas}"]
interval: 5s
timeout: 5s
retries: 5
networks:
- cameleer
- monitoring
cameleer-clickhouse:
image: ${CLICKHOUSE_IMAGE:-gitea.siegeln.net/cameleer/cameleer-clickhouse}:${VERSION:-latest}
restart: unless-stopped
environment:
CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD:?CLICKHOUSE_PASSWORD must be set in .env}
volumes:
- cameleer-chdata:/var/lib/clickhouse
healthcheck:
test: ["CMD-SHELL", "clickhouse-client --password $${CLICKHOUSE_PASSWORD} --query 'SELECT 1'"]
interval: 10s
timeout: 5s
retries: 3
labels:
- "prometheus.io/scrape=true"
- "prometheus.io/port=9363"
- "prometheus.io/path=/metrics"
networks:
- cameleer
- monitoring
volumes:
cameleer-pgdata:
cameleer-chdata:
cameleer-certs:
networks:
cameleer:
driver: bridge
cameleer-traefik:
name: cameleer-traefik
driver: bridge
monitoring:
name: cameleer-monitoring-noop
```
Key changes from the generated version:
- Logto console port always present with `LOGTO_CONSOLE_BIND` controlling exposure
- Prometheus labels unconditional on traefik and clickhouse
- `monitoring` network defined as local noop bridge
- All services join `monitoring` network
- `POSTGRES_DB` uses `${POSTGRES_DB:-cameleer_saas}` (parameterized — standalone overrides via `.env`)
- Password variables use `:?` fail-if-unset
Note: The SaaS mode uses `cameleer-postgres` (custom multi-DB image) while standalone uses `postgres:16-alpine`. The `POSTGRES_IMAGE` variable already handles this — the infra base uses `${POSTGRES_IMAGE:-...}` and standalone `.env` sets `POSTGRES_IMAGE=postgres:16-alpine`.
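The `:-` and `:?` expansions called out above follow plain shell parameter-expansion semantics, which docker compose mirrors when resolving template variables. A quick demonstration (variable values are illustrative):

```shell
unset POSTGRES_IMAGE POSTGRES_PASSWORD

# ':-' substitutes a default when the variable is unset or empty.
echo "${POSTGRES_IMAGE:-postgres:16-alpine}"   # -> postgres:16-alpine

# ':?' makes the expanding (sub)shell exit with the given message instead.
if msg=$( { echo "${POSTGRES_PASSWORD:?POSTGRES_PASSWORD must be set in .env}"; } 2>/dev/null ); then
  echo "unexpected success"
else
  echo "rejected: password not set"            # -> rejected: password not set
fi

POSTGRES_IMAGE=postgres:17
echo "${POSTGRES_IMAGE:-postgres:16-alpine}"   # -> postgres:17
```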
- [ ] **Step 2: Verify YAML is valid**
Run: `python -c "import yaml; yaml.safe_load(open('installer/templates/docker-compose.yml'))"`
Expected: No output (valid YAML). If python/yaml not available, use `docker compose -f installer/templates/docker-compose.yml config --quiet` (will fail on unset vars, but validates structure).
- [ ] **Step 3: Commit**
```bash
git add installer/templates/docker-compose.yml
git commit -m "feat(installer): add infra base docker-compose template"
```
---
### Task 2: Create `docker-compose.saas.yml` (SaaS mode template)
**Files:**
- Create: `installer/templates/docker-compose.saas.yml`
SaaS-specific services: Logto identity provider and cameleer-saas management plane.
- [ ] **Step 1: Create the SaaS template**
```yaml
# Cameleer SaaS — Logto + management plane
# Loaded in SaaS deployment mode
services:
cameleer-logto:
image: ${LOGTO_IMAGE:-gitea.siegeln.net/cameleer/cameleer-logto}:${VERSION:-latest}
restart: unless-stopped
depends_on:
cameleer-postgres:
condition: service_healthy
environment:
DB_URL: postgres://${POSTGRES_USER:-cameleer}:${POSTGRES_PASSWORD}@cameleer-postgres:5432/logto
ENDPOINT: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}
ADMIN_ENDPOINT: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}:${LOGTO_CONSOLE_PORT:-3002}
TRUST_PROXY_HEADER: 1
NODE_TLS_REJECT_UNAUTHORIZED: "${NODE_TLS_REJECT:-0}"
LOGTO_ENDPOINT: http://cameleer-logto:3001
LOGTO_ADMIN_ENDPOINT: http://cameleer-logto:3002
LOGTO_PUBLIC_ENDPOINT: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}
PUBLIC_HOST: ${PUBLIC_HOST:-localhost}
PUBLIC_PROTOCOL: ${PUBLIC_PROTOCOL:-https}
PG_HOST: cameleer-postgres
PG_USER: ${POSTGRES_USER:-cameleer}
PG_PASSWORD: ${POSTGRES_PASSWORD}
PG_DB_SAAS: cameleer_saas
SAAS_ADMIN_USER: ${SAAS_ADMIN_USER:-admin}
SAAS_ADMIN_PASS: ${SAAS_ADMIN_PASS:?SAAS_ADMIN_PASS must be set in .env}
healthcheck:
test: ["CMD-SHELL", "node -e \"require('http').get('http://localhost:3001/oidc/.well-known/openid-configuration', r => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))\" && test -f /data/logto-bootstrap.json"]
interval: 10s
timeout: 5s
retries: 60
start_period: 30s
labels:
- traefik.enable=true
- traefik.http.routers.cameleer-logto.rule=PathPrefix(`/`)
- traefik.http.routers.cameleer-logto.priority=1
- traefik.http.routers.cameleer-logto.entrypoints=websecure
- traefik.http.routers.cameleer-logto.tls=true
- traefik.http.routers.cameleer-logto.service=cameleer-logto
- traefik.http.routers.cameleer-logto.middlewares=cameleer-logto-cors
- "traefik.http.middlewares.cameleer-logto-cors.headers.accessControlAllowOriginList=${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}:${LOGTO_CONSOLE_PORT:-3002}"
- traefik.http.middlewares.cameleer-logto-cors.headers.accessControlAllowMethods=GET,POST,PUT,PATCH,DELETE,OPTIONS
- traefik.http.middlewares.cameleer-logto-cors.headers.accessControlAllowHeaders=Authorization,Content-Type
- traefik.http.middlewares.cameleer-logto-cors.headers.accessControlAllowCredentials=true
- traefik.http.services.cameleer-logto.loadbalancer.server.port=3001
- traefik.http.routers.cameleer-logto-console.rule=PathPrefix(`/`)
- traefik.http.routers.cameleer-logto-console.entrypoints=admin-console
- traefik.http.routers.cameleer-logto-console.tls=true
- traefik.http.routers.cameleer-logto-console.service=cameleer-logto-console
- traefik.http.services.cameleer-logto-console.loadbalancer.server.port=3002
volumes:
- cameleer-bootstrapdata:/data
networks:
- cameleer
- monitoring
cameleer-saas:
image: ${CAMELEER_IMAGE:-gitea.siegeln.net/cameleer/cameleer-saas}:${VERSION:-latest}
restart: unless-stopped
depends_on:
cameleer-logto:
condition: service_healthy
environment:
# SaaS database
SPRING_DATASOURCE_URL: jdbc:postgresql://cameleer-postgres:5432/cameleer_saas
SPRING_DATASOURCE_USERNAME: ${POSTGRES_USER:-cameleer}
SPRING_DATASOURCE_PASSWORD: ${POSTGRES_PASSWORD}
# Identity (Logto)
CAMELEER_SAAS_IDENTITY_LOGTOENDPOINT: http://cameleer-logto:3001
CAMELEER_SAAS_IDENTITY_LOGTOPUBLICENDPOINT: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}
# Provisioning — passed to per-tenant server containers
CAMELEER_SAAS_PROVISIONING_PUBLICHOST: ${PUBLIC_HOST:-localhost}
CAMELEER_SAAS_PROVISIONING_PUBLICPROTOCOL: ${PUBLIC_PROTOCOL:-https}
CAMELEER_SAAS_PROVISIONING_NETWORKNAME: ${COMPOSE_PROJECT_NAME:-cameleer-saas}_cameleer
CAMELEER_SAAS_PROVISIONING_TRAEFIKNETWORK: cameleer-traefik
CAMELEER_SAAS_PROVISIONING_DATASOURCEUSERNAME: ${POSTGRES_USER:-cameleer}
CAMELEER_SAAS_PROVISIONING_DATASOURCEPASSWORD: ${POSTGRES_PASSWORD}
CAMELEER_SAAS_PROVISIONING_CLICKHOUSEPASSWORD: ${CLICKHOUSE_PASSWORD}
CAMELEER_SAAS_PROVISIONING_SERVERIMAGE: ${CAMELEER_SAAS_PROVISIONING_SERVERIMAGE:-gitea.siegeln.net/cameleer/cameleer-server:latest}
CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE: ${CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE:-gitea.siegeln.net/cameleer/cameleer-server-ui:latest}
labels:
- traefik.enable=true
- traefik.http.routers.saas.rule=PathPrefix(`/platform`)
- traefik.http.routers.saas.entrypoints=websecure
- traefik.http.routers.saas.tls=true
- traefik.http.services.saas.loadbalancer.server.port=8080
- "prometheus.io/scrape=true"
- "prometheus.io/port=8080"
- "prometheus.io/path=/platform/actuator/prometheus"
volumes:
- cameleer-bootstrapdata:/data/bootstrap:ro
- cameleer-certs:/certs
- ${DOCKER_SOCKET:-/var/run/docker.sock}:/var/run/docker.sock
group_add:
- "${DOCKER_GID:-0}"
networks:
- cameleer
- monitoring
volumes:
cameleer-bootstrapdata:
networks:
monitoring:
name: cameleer-monitoring-noop
```
Key changes:
- Logto console traefik labels always included (harmless when port is localhost-only)
- Prometheus labels on cameleer-saas always included
- `DOCKER_GID` read from `.env` via `${DOCKER_GID:-0}` instead of inline `stat`
- Both services join `monitoring` network
- `monitoring` network redefined as noop bridge (compose merges with base definition)
- [ ] **Step 2: Commit**
```bash
git add installer/templates/docker-compose.saas.yml
git commit -m "feat(installer): add SaaS docker-compose template"
```
---
### Task 3: Create `docker-compose.server.yml` (standalone mode template)
**Files:**
- Create: `installer/templates/docker-compose.server.yml`
- Create: `installer/templates/traefik-dynamic.yml`
Standalone-specific services: cameleer-server + server-ui. Also includes the traefik dynamic config that standalone mode needs (overrides the baked-in SaaS redirect).
- [ ] **Step 1: Create the standalone template**
```yaml
# Cameleer Server (standalone)
# Loaded in standalone deployment mode
services:
cameleer-traefik:
volumes:
- ./traefik-dynamic.yml:/etc/traefik/dynamic.yml:ro
cameleer-postgres:
image: postgres:16-alpine
environment:
POSTGRES_DB: ${POSTGRES_DB:-cameleer}
healthcheck:
test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER:-cameleer} -d $${POSTGRES_DB:-cameleer}"]
cameleer-server:
image: ${SERVER_IMAGE:-gitea.siegeln.net/cameleer/cameleer-server}:${VERSION:-latest}
container_name: cameleer-server
restart: unless-stopped
depends_on:
cameleer-postgres:
condition: service_healthy
environment:
CAMELEER_SERVER_TENANT_ID: default
SPRING_DATASOURCE_URL: jdbc:postgresql://cameleer-postgres:5432/${POSTGRES_DB:-cameleer}?currentSchema=tenant_default
SPRING_DATASOURCE_USERNAME: ${POSTGRES_USER:-cameleer}
SPRING_DATASOURCE_PASSWORD: ${POSTGRES_PASSWORD}
CAMELEER_SERVER_CLICKHOUSE_URL: jdbc:clickhouse://cameleer-clickhouse:8123/cameleer
CAMELEER_SERVER_CLICKHOUSE_USERNAME: default
CAMELEER_SERVER_CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD}
CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKEN: ${BOOTSTRAP_TOKEN:?BOOTSTRAP_TOKEN must be set in .env}
CAMELEER_SERVER_SECURITY_UIUSER: ${SERVER_ADMIN_USER:-admin}
CAMELEER_SERVER_SECURITY_UIPASSWORD: ${SERVER_ADMIN_PASS:?SERVER_ADMIN_PASS must be set in .env}
CAMELEER_SERVER_SECURITY_CORSALLOWEDORIGINS: ${PUBLIC_PROTOCOL:-https}://${PUBLIC_HOST:-localhost}
CAMELEER_SERVER_RUNTIME_ENABLED: "true"
CAMELEER_SERVER_RUNTIME_SERVERURL: http://cameleer-server:8081
CAMELEER_SERVER_RUNTIME_ROUTINGDOMAIN: ${PUBLIC_HOST:-localhost}
CAMELEER_SERVER_RUNTIME_ROUTINGMODE: path
CAMELEER_SERVER_RUNTIME_JARSTORAGEPATH: /data/jars
CAMELEER_SERVER_RUNTIME_DOCKERNETWORK: cameleer-apps
CAMELEER_SERVER_RUNTIME_JARDOCKERVOLUME: cameleer-jars
CAMELEER_SERVER_RUNTIME_BASEIMAGE: gitea.siegeln.net/cameleer/cameleer-runtime-base:${VERSION:-latest}
labels:
- traefik.enable=true
- traefik.http.routers.server-api.rule=PathPrefix(`/api`)
- traefik.http.routers.server-api.entrypoints=websecure
- traefik.http.routers.server-api.tls=true
- traefik.http.services.server-api.loadbalancer.server.port=8081
- traefik.docker.network=cameleer-traefik
healthcheck:
test: ["CMD-SHELL", "curl -sf http://localhost:8081/api/v1/health || exit 1"]
interval: 10s
timeout: 5s
retries: 30
start_period: 30s
volumes:
- jars:/data/jars
- cameleer-certs:/certs:ro
- ${DOCKER_SOCKET:-/var/run/docker.sock}:/var/run/docker.sock
group_add:
- "${DOCKER_GID:-0}"
networks:
- cameleer
- cameleer-traefik
- cameleer-apps
- monitoring
cameleer-server-ui:
image: ${SERVER_UI_IMAGE:-gitea.siegeln.net/cameleer/cameleer-server-ui}:${VERSION:-latest}
restart: unless-stopped
depends_on:
cameleer-server:
condition: service_healthy
environment:
CAMELEER_API_URL: http://cameleer-server:8081
BASE_PATH: ""
labels:
- traefik.enable=true
- traefik.http.routers.ui.rule=PathPrefix(`/`)
- traefik.http.routers.ui.priority=1
- traefik.http.routers.ui.entrypoints=websecure
- traefik.http.routers.ui.tls=true
- traefik.http.services.ui.loadbalancer.server.port=80
- traefik.docker.network=cameleer-traefik
networks:
- cameleer-traefik
- monitoring
volumes:
jars:
networks:
cameleer-apps:
name: cameleer-apps
driver: bridge
monitoring:
name: cameleer-monitoring-noop
```
Key design decisions:
- `cameleer-traefik` and `cameleer-postgres` entries are **overrides** — compose merges them with the base. The postgres image switches to `postgres:16-alpine` and the healthcheck uses `${POSTGRES_DB:-cameleer}` instead of hardcoded `cameleer_saas`. Traefik gets the `traefik-dynamic.yml` volume mount.
- `DOCKER_GID` from `.env` via `${DOCKER_GID:-0}`
- `BOOTSTRAP_TOKEN` uses `:?` fail-if-unset
- Both server and server-ui join `monitoring` network
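For reference, the override merge can be sketched with minimal fragments (illustrative excerpts, not the full templates): compose merges service definitions key by key, with later files in `COMPOSE_FILE` replacing scalar values while everything they don't mention is inherited.

```yaml
# base (docker-compose.yml), illustrative excerpt
services:
  cameleer-postgres:
    image: ${POSTGRES_IMAGE:-gitea.siegeln.net/cameleer/cameleer-postgres}:${VERSION:-latest}
    environment:
      POSTGRES_DB: ${POSTGRES_DB:-cameleer_saas}
    volumes:
      - cameleer-pgdata:/var/lib/postgresql/data
---
# override (docker-compose.server.yml), illustrative excerpt
services:
  cameleer-postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: ${POSTGRES_DB:-cameleer}
---
# merged (what `docker compose config` resolves with POSTGRES_DB unset):
# image becomes postgres:16-alpine, POSTGRES_DB resolves to 'cameleer', and
# the volumes entry is inherited unchanged from the base file.
```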
- [ ] **Step 2: Create the traefik dynamic config template**
```yaml
tls:
stores:
default:
defaultCertificate:
certFile: /certs/cert.pem
keyFile: /certs/key.pem
```
This file is only relevant in standalone mode (overrides the baked-in SaaS `/` -> `/platform/` redirect in the traefik image).
- [ ] **Step 3: Commit**
```bash
git add installer/templates/docker-compose.server.yml installer/templates/traefik-dynamic.yml
git commit -m "feat(installer): add standalone docker-compose and traefik templates"
```
---
### Task 4: Create overlay templates (TLS + monitoring)
**Files:**
- Create: `installer/templates/docker-compose.tls.yml`
- Create: `installer/templates/docker-compose.monitoring.yml`
- [ ] **Step 1: Create the TLS overlay**
```yaml
# Custom TLS certificates overlay
# Adds user-supplied certificate volume to traefik
services:
cameleer-traefik:
volumes:
- ./certs:/user-certs:ro
```
- [ ] **Step 2: Create the monitoring overlay**
```yaml
# External monitoring network overlay
# Overrides the noop monitoring bridge with a real external network
networks:
monitoring:
external: true
name: ${MONITORING_NETWORK:?MONITORING_NETWORK must be set in .env}
```
This is the key to the monitoring pattern: the base compose files define `monitoring` as a local noop bridge and all services join it. When this overlay is included in `COMPOSE_FILE`, compose merges the network definition — overriding it to point at the real external monitoring network. No per-service entries needed.
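Sketched minimally (fragments are illustrative), the network merge looks like:

```yaml
# base (docker-compose.yml): noop bridge, safe when no overlay is loaded
networks:
  monitoring:
    name: cameleer-monitoring-noop
---
# docker-compose.monitoring.yml: listed in COMPOSE_FILE when monitoring is on
networks:
  monitoring:
    external: true
    name: ${MONITORING_NETWORK:?MONITORING_NETWORK must be set in .env}
---
# merged: every service that lists `monitoring` under networks now attaches
# to the pre-existing external network instead of the throwaway bridge;
# no service definitions change.
```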
- [ ] **Step 3: Commit**
```bash
git add installer/templates/docker-compose.tls.yml installer/templates/docker-compose.monitoring.yml
git commit -m "feat(installer): add TLS and monitoring overlay templates"
```
---
### Task 5: Create `.env.example`
**Files:**
- Create: `installer/templates/.env.example`
- [ ] **Step 1: Create the documented variable reference**
```bash
# Cameleer Configuration
# Copy this file to .env and fill in the values.
# The installer generates .env automatically — this file is for reference.
# ============================================================
# Compose file assembly (set by installer)
# ============================================================
# SaaS: docker-compose.yml:docker-compose.saas.yml
# Standalone: docker-compose.yml:docker-compose.server.yml
# Add :docker-compose.tls.yml for custom TLS certificates
# Add :docker-compose.monitoring.yml for external monitoring network
COMPOSE_FILE=docker-compose.yml:docker-compose.saas.yml
# ============================================================
# Image version
# ============================================================
VERSION=latest
# ============================================================
# Public access
# ============================================================
PUBLIC_HOST=localhost
PUBLIC_PROTOCOL=https
# ============================================================
# Ports
# ============================================================
HTTP_PORT=80
HTTPS_PORT=443
# Set to 0.0.0.0 to expose Logto admin console externally (default: localhost only)
# LOGTO_CONSOLE_BIND=0.0.0.0
LOGTO_CONSOLE_PORT=3002
# ============================================================
# PostgreSQL
# ============================================================
POSTGRES_USER=cameleer
POSTGRES_PASSWORD=CHANGE_ME
# SaaS: cameleer_saas, Standalone: cameleer
POSTGRES_DB=cameleer_saas
# ============================================================
# ClickHouse
# ============================================================
CLICKHOUSE_PASSWORD=CHANGE_ME
# ============================================================
# Admin credentials (SaaS mode)
# ============================================================
SAAS_ADMIN_USER=admin
SAAS_ADMIN_PASS=CHANGE_ME
# ============================================================
# Admin credentials (standalone mode)
# ============================================================
# SERVER_ADMIN_USER=admin
# SERVER_ADMIN_PASS=CHANGE_ME
# BOOTSTRAP_TOKEN=CHANGE_ME
# ============================================================
# TLS
# ============================================================
# Set to 1 to reject unauthorized TLS certificates (production)
NODE_TLS_REJECT=0
# Custom TLS certificate paths (inside container, set by installer)
# CERT_FILE=/user-certs/cert.pem
# KEY_FILE=/user-certs/key.pem
# CA_FILE=/user-certs/ca.pem
# ============================================================
# Docker
# ============================================================
DOCKER_SOCKET=/var/run/docker.sock
# GID of the docker socket — detected by installer, used for container group_add
DOCKER_GID=0
# ============================================================
# Provisioning images (SaaS mode only)
# ============================================================
# CAMELEER_SAAS_PROVISIONING_SERVERIMAGE=gitea.siegeln.net/cameleer/cameleer-server:latest
# CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE=gitea.siegeln.net/cameleer/cameleer-server-ui:latest
# ============================================================
# Monitoring (optional)
# ============================================================
# External Docker network name for Prometheus scraping.
# Only needed when docker-compose.monitoring.yml is in COMPOSE_FILE.
# MONITORING_NETWORK=prometheus
```
- [ ] **Step 2: Commit**
```bash
git add installer/templates/.env.example
git commit -m "feat(installer): add .env.example with documented variables"
```
---
### Task 6: Update `install.sh` — replace compose generation with template copying
**Files:**
- Modify: `installer/install.sh:574-672` (generate_env_file — add COMPOSE_FILE and LOGTO_CONSOLE_BIND)
- Modify: `installer/install.sh:674-1135` (replace generate_compose_file + generate_compose_file_standalone with copy_templates)
- Modify: `installer/install.sh:1728-1731` (reinstall cleanup — delete template files)
- Modify: `installer/install.sh:1696-1710` (upgrade path — copy templates instead of generate)
- Modify: `installer/install.sh:1790-1791` (main — call copy_templates instead of generate_compose_file)
- [ ] **Step 1: Replace `generate_compose_file` and `generate_compose_file_standalone` with `copy_templates`**
Delete both functions (`generate_compose_file` at line 674 and `generate_compose_file_standalone` at line 934) and replace with:
```bash
copy_templates() {
local src
src="$(cd "$(dirname "$0")" && pwd)/templates"
# Base infra — always copied
cp "$src/docker-compose.yml" "$INSTALL_DIR/docker-compose.yml"
cp "$src/.env.example" "$INSTALL_DIR/.env.example"
# Mode-specific
if [ "$DEPLOYMENT_MODE" = "standalone" ]; then
cp "$src/docker-compose.server.yml" "$INSTALL_DIR/docker-compose.server.yml"
cp "$src/traefik-dynamic.yml" "$INSTALL_DIR/traefik-dynamic.yml"
else
cp "$src/docker-compose.saas.yml" "$INSTALL_DIR/docker-compose.saas.yml"
fi
# Optional overlays
if [ "$TLS_MODE" = "custom" ]; then
cp "$src/docker-compose.tls.yml" "$INSTALL_DIR/docker-compose.tls.yml"
fi
if [ -n "$MONITORING_NETWORK" ]; then
cp "$src/docker-compose.monitoring.yml" "$INSTALL_DIR/docker-compose.monitoring.yml"
fi
log_info "Copied docker-compose templates to $INSTALL_DIR"
}
```
- [ ] **Step 2: Update `generate_env_file` to include `COMPOSE_FILE`, `LOGTO_CONSOLE_BIND`, and `DOCKER_GID`**
In the standalone `.env` block (lines 577-614), add after the `DOCKER_GID` line:
```bash
# Compose file assembly
COMPOSE_FILE=docker-compose.yml:docker-compose.server.yml$([ "$TLS_MODE" = "custom" ] && echo ":docker-compose.tls.yml")$([ -n "$MONITORING_NETWORK" ] && echo ":docker-compose.monitoring.yml")
EOF
```
In the SaaS `.env` block (lines 617-668), add `LOGTO_CONSOLE_BIND` and `COMPOSE_FILE`. After the `LOGTO_CONSOLE_PORT` line:
```bash
LOGTO_CONSOLE_BIND=$([ "$LOGTO_CONSOLE_EXPOSED" = "true" ] && echo "0.0.0.0" || echo "127.0.0.1")
```
And at the end of the SaaS block, add the `COMPOSE_FILE` line:
```bash
# Compose file assembly
COMPOSE_FILE=docker-compose.yml:docker-compose.saas.yml$([ "$TLS_MODE" = "custom" ] && echo ":docker-compose.tls.yml")$([ -n "$MONITORING_NETWORK" ] && echo ":docker-compose.monitoring.yml")
```
Also add the `MONITORING_NETWORK` variable to `.env` when set:
```bash
if [ -n "$MONITORING_NETWORK" ]; then
echo "" >> "$f"
echo "# Monitoring" >> "$f"
echo "MONITORING_NETWORK=${MONITORING_NETWORK}" >> "$f"
fi
```
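A quick way to sanity-check these inline conditional substitutions outside the installer (a sketch; the `TLS_MODE` and `MONITORING_NETWORK` values are illustrative, and in `install.sh` the expansion happens inside the heredoc that writes `.env`):

```shell
TLS_MODE="custom"
MONITORING_NETWORK=""

# Each $([ cond ] && echo ...) contributes its suffix only when the condition holds.
line="COMPOSE_FILE=docker-compose.yml:docker-compose.server.yml$([ "$TLS_MODE" = "custom" ] && echo ":docker-compose.tls.yml")$([ -n "$MONITORING_NETWORK" ] && echo ":docker-compose.monitoring.yml")"
echo "$line"
# -> COMPOSE_FILE=docker-compose.yml:docker-compose.server.yml:docker-compose.tls.yml
```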
- [ ] **Step 3: Update `main()` — replace `generate_compose_file` call with `copy_templates`**
At line 1791, change:
```bash
generate_compose_file
```
to:
```bash
copy_templates
```
- [ ] **Step 4: Update `handle_rerun` upgrade path**
At line 1703, change:
```bash
generate_compose_file
```
to:
```bash
copy_templates
```
- [ ] **Step 5: Update reinstall cleanup to remove template files**
At lines 1728-1731, update the `rm -f` list to include all possible template files:
```bash
rm -f "$INSTALL_DIR/.env" "$INSTALL_DIR/.env.bak" "$INSTALL_DIR/.env.example" \
"$INSTALL_DIR/docker-compose.yml" "$INSTALL_DIR/docker-compose.saas.yml" \
"$INSTALL_DIR/docker-compose.server.yml" "$INSTALL_DIR/docker-compose.tls.yml" \
"$INSTALL_DIR/docker-compose.monitoring.yml" "$INSTALL_DIR/traefik-dynamic.yml" \
"$INSTALL_DIR/cameleer.conf" "$INSTALL_DIR/credentials.txt" \
"$INSTALL_DIR/INSTALL.md"
```
- [ ] **Step 6: Commit**
```bash
git add installer/install.sh
git commit -m "refactor(installer): replace sh compose generation with template copying"
```
---
### Task 7: Update `install.ps1` — replace compose generation with template copying
**Files:**
- Modify: `installer/install.ps1:574-666` (Generate-EnvFile — add COMPOSE_FILE and LOGTO_CONSOLE_BIND)
- Modify: `installer/install.ps1:671-1105` (replace Generate-ComposeFile + Generate-ComposeFileStandalone with Copy-Templates)
- Modify: `installer/install.ps1:1706-1723` (upgrade path)
- Modify: `installer/install.ps1:1746` (reinstall cleanup)
- Modify: `installer/install.ps1:1797-1798` (Main — call Copy-Templates)
- [ ] **Step 1: Replace `Generate-ComposeFile` and `Generate-ComposeFileStandalone` with `Copy-Templates`**
Delete both functions and replace with:
```powershell
function Copy-Templates {
$c = $script:cfg
$src = Join-Path $PSScriptRoot 'templates'
# Base infra — always copied
Copy-Item (Join-Path $src 'docker-compose.yml') (Join-Path $c.InstallDir 'docker-compose.yml') -Force
Copy-Item (Join-Path $src '.env.example') (Join-Path $c.InstallDir '.env.example') -Force
# Mode-specific
if ($c.DeploymentMode -eq 'standalone') {
Copy-Item (Join-Path $src 'docker-compose.server.yml') (Join-Path $c.InstallDir 'docker-compose.server.yml') -Force
Copy-Item (Join-Path $src 'traefik-dynamic.yml') (Join-Path $c.InstallDir 'traefik-dynamic.yml') -Force
} else {
Copy-Item (Join-Path $src 'docker-compose.saas.yml') (Join-Path $c.InstallDir 'docker-compose.saas.yml') -Force
}
# Optional overlays
if ($c.TlsMode -eq 'custom') {
Copy-Item (Join-Path $src 'docker-compose.tls.yml') (Join-Path $c.InstallDir 'docker-compose.tls.yml') -Force
}
if ($c.MonitoringNetwork) {
Copy-Item (Join-Path $src 'docker-compose.monitoring.yml') (Join-Path $c.InstallDir 'docker-compose.monitoring.yml') -Force
}
Log-Info "Copied docker-compose templates to $($c.InstallDir)"
}
```
- [ ] **Step 2: Update `Generate-EnvFile` to include `COMPOSE_FILE`, `LOGTO_CONSOLE_BIND`, and `MONITORING_NETWORK`**
In the standalone `.env` content block, add after `DOCKER_GID`:
```powershell
$composeFile = 'docker-compose.yml:docker-compose.server.yml'
if ($c.TlsMode -eq 'custom') { $composeFile += ':docker-compose.tls.yml' }
if ($c.MonitoringNetwork) { $composeFile += ':docker-compose.monitoring.yml' }
```
Then append to `$content`:
```powershell
$content += "`n`n# Compose file assembly`nCOMPOSE_FILE=$composeFile"
if ($c.MonitoringNetwork) {
$content += "`n`n# Monitoring`nMONITORING_NETWORK=$($c.MonitoringNetwork)"
}
```
In the SaaS `.env` content block, add `LOGTO_CONSOLE_BIND` after `LOGTO_CONSOLE_PORT`:
```powershell
$consoleBind = if ($c.LogtoConsoleExposed -eq 'true') { '0.0.0.0' } else { '127.0.0.1' }
```
Add to the content string: `LOGTO_CONSOLE_BIND=$consoleBind`
Build `COMPOSE_FILE`:
```powershell
$composeFile = 'docker-compose.yml:docker-compose.saas.yml'
if ($c.TlsMode -eq 'custom') { $composeFile += ':docker-compose.tls.yml' }
if ($c.MonitoringNetwork) { $composeFile += ':docker-compose.monitoring.yml' }
```
And append to `$content`:
```powershell
$content += "`n`n# Compose file assembly`nCOMPOSE_FILE=$composeFile"
if ($c.MonitoringNetwork) {
$content += "`n`n# Monitoring`nMONITORING_NETWORK=$($c.MonitoringNetwork)"
}
```
- [ ] **Step 3: Update `Main` — replace `Generate-ComposeFile` call with `Copy-Templates`**
At line 1798, change:
```powershell
Generate-ComposeFile
```
to:
```powershell
Copy-Templates
```
- [ ] **Step 4: Update `Handle-Rerun` upgrade path**
At line 1716, change:
```powershell
Generate-ComposeFile
```
to:
```powershell
Copy-Templates
```
- [ ] **Step 5: Update reinstall cleanup to remove template files**
At line 1746, update the filename list:
```powershell
foreach ($fname in @('.env','.env.bak','.env.example','docker-compose.yml','docker-compose.saas.yml','docker-compose.server.yml','docker-compose.tls.yml','docker-compose.monitoring.yml','traefik-dynamic.yml','cameleer.conf','credentials.txt','INSTALL.md')) {
```
- [ ] **Step 6: Commit**
```bash
git add installer/install.ps1
git commit -m "refactor(installer): replace ps1 compose generation with template copying"
```
---
### Task 8: Update existing generated install and clean up
**Files:**
- Modify: `installer/cameleer/docker-compose.yml` (replace with template copy for dev environment)
- [ ] **Step 1: Remove the old generated docker-compose.yml from the cameleer/ directory**
The `installer/cameleer/` directory contains a previously generated install. The `docker-compose.yml` there is now stale — it was generated by the old inline method. Since this is a dev environment output, remove it (it will be recreated by running the installer with the new template approach).
```bash
git rm installer/cameleer/docker-compose.yml
```
- [ ] **Step 2: Add `installer/cameleer/` to `.gitignore` if not already there**
The install output directory should not be tracked. Check if `.gitignore` already covers it. If not, add:
```
installer/cameleer/
```
This prevents generated `.env`, `credentials.txt`, and compose files from being committed.
- [ ] **Step 3: Commit**
```bash
git add -A installer/cameleer/ .gitignore
git commit -m "chore(installer): remove generated install output, add to gitignore"
```
---
### Task 9: Verify the templates produce equivalent output
**Files:** (no changes — verification only)
- [ ] **Step 1: Compare template output against the old generated compose**
Create a temporary `.env` file and run `docker compose config` to render the resolved compose. Compare against the old generated output:
```bash
cd installer/cameleer
# Back up old generated file for comparison
cp docker-compose.yml docker-compose.old.yml 2>/dev/null || true
# Create a test .env that exercises the SaaS path
cat > /tmp/test-saas.env << 'EOF'
COMPOSE_FILE=docker-compose.yml:docker-compose.saas.yml
VERSION=latest
PUBLIC_HOST=test.example.com
PUBLIC_PROTOCOL=https
HTTP_PORT=80
HTTPS_PORT=443
LOGTO_CONSOLE_PORT=3002
LOGTO_CONSOLE_BIND=0.0.0.0
POSTGRES_USER=cameleer
POSTGRES_PASSWORD=testpass
POSTGRES_DB=cameleer_saas
CLICKHOUSE_PASSWORD=testpass
SAAS_ADMIN_USER=admin
SAAS_ADMIN_PASS=testpass
NODE_TLS_REJECT=0
DOCKER_SOCKET=/var/run/docker.sock
DOCKER_GID=0
CAMELEER_SAAS_PROVISIONING_SERVERIMAGE=gitea.siegeln.net/cameleer/cameleer-server:latest
CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE=gitea.siegeln.net/cameleer/cameleer-server-ui:latest
EOF
# Render the new templates
cd ../templates
docker compose --env-file /tmp/test-saas.env config
```
Expected: A fully resolved compose with all 5 services (traefik, postgres, clickhouse, logto, saas), correct environment variables, and the monitoring noop network.
- [ ] **Step 2: Test standalone mode rendering**
```bash
cat > /tmp/test-standalone.env << 'EOF'
COMPOSE_FILE=docker-compose.yml:docker-compose.server.yml
VERSION=latest
PUBLIC_HOST=test.example.com
PUBLIC_PROTOCOL=https
HTTP_PORT=80
HTTPS_PORT=443
POSTGRES_IMAGE=postgres:16-alpine
POSTGRES_USER=cameleer
POSTGRES_PASSWORD=testpass
POSTGRES_DB=cameleer
CLICKHOUSE_PASSWORD=testpass
SERVER_ADMIN_USER=admin
SERVER_ADMIN_PASS=testpass
BOOTSTRAP_TOKEN=testtoken
DOCKER_SOCKET=/var/run/docker.sock
DOCKER_GID=0
EOF
cd ../templates
docker compose --env-file /tmp/test-standalone.env config
```
Expected: 5 services (traefik, postgres with `postgres:16-alpine` image, clickhouse, server, server-ui). Postgres `POSTGRES_DB` should be `cameleer`. Server should have all env vars resolved.
- [ ] **Step 3: Test with TLS + monitoring overlays**
```bash
cat > /tmp/test-full.env << 'EOF'
COMPOSE_FILE=docker-compose.yml:docker-compose.saas.yml:docker-compose.tls.yml:docker-compose.monitoring.yml
VERSION=latest
PUBLIC_HOST=test.example.com
PUBLIC_PROTOCOL=https
HTTP_PORT=80
HTTPS_PORT=443
LOGTO_CONSOLE_PORT=3002
LOGTO_CONSOLE_BIND=0.0.0.0
POSTGRES_USER=cameleer
POSTGRES_PASSWORD=testpass
POSTGRES_DB=cameleer_saas
CLICKHOUSE_PASSWORD=testpass
SAAS_ADMIN_USER=admin
SAAS_ADMIN_PASS=testpass
NODE_TLS_REJECT=0
DOCKER_SOCKET=/var/run/docker.sock
DOCKER_GID=0
MONITORING_NETWORK=prometheus
CAMELEER_SAAS_PROVISIONING_SERVERIMAGE=gitea.siegeln.net/cameleer/cameleer-server:latest
CAMELEER_SAAS_PROVISIONING_SERVERUIIMAGE=gitea.siegeln.net/cameleer/cameleer-server-ui:latest
EOF
cd ../templates
docker compose --env-file /tmp/test-full.env config
```
Expected: Same as SaaS mode but with `./certs:/user-certs:ro` volume on traefik and the `monitoring` network declared as `external: true` with name `prometheus`.
- [ ] **Step 4: Clean up temp files**
```bash
rm -f /tmp/test-saas.env /tmp/test-standalone.env /tmp/test-full.env
```
- [ ] **Step 5: Commit verification results as a note (optional)**
No code changes — this task is verification only. If all checks pass, proceed to the final commit.
---
### Task 10: Final commit — update CLAUDE.md deployment modes table
**Files:**
- Modify: `CLAUDE.md` (update Deployment Modes section to reference template files)
- [ ] **Step 1: Update the deployment modes documentation**
In the "Deployment Modes (installer)" section of CLAUDE.md, after the deployment modes table, add a note about the template-based approach:
```markdown
The installer uses static docker-compose templates in `installer/templates/`. Templates are copied to the install directory and composed via `COMPOSE_FILE` in `.env`:
- `docker-compose.yml` — shared infrastructure (traefik, postgres, clickhouse)
- `docker-compose.saas.yml` — SaaS mode (logto, cameleer-saas)
- `docker-compose.server.yml` — standalone mode (server, server-ui)
- `docker-compose.tls.yml` — overlay: custom TLS cert volume
- `docker-compose.monitoring.yml` — overlay: external monitoring network
```
- [ ] **Step 2: Commit**
```bash
git add CLAUDE.md
git commit -m "docs: update CLAUDE.md with template-based installer architecture"
```

# Per-Tenant PostgreSQL Isolation Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** Give each tenant its own PostgreSQL user and schema so tenant servers can only access their own data at the database level.
**Architecture:** During provisioning, create a dedicated PG user (`tenant_<slug>`) with a matching schema. Pass per-tenant credentials and `currentSchema`/`ApplicationName` JDBC parameters to the server container. On delete, drop both schema and user. Existing tenants without `dbPassword` fall back to shared credentials for backwards compatibility.
**Tech Stack:** Java 21, Spring Boot 3.4, Flyway, PostgreSQL 16, Docker Java API
**Spec:** `docs/superpowers/specs/2026-04-15-per-tenant-pg-isolation-design.md`
---
### Task 1: Flyway Migration — add `db_password` column
**Files:**
- Create: `src/main/resources/db/migration/V015__add_tenant_db_password.sql`
- [ ] **Step 1: Create migration file**
```sql
ALTER TABLE tenants ADD COLUMN db_password VARCHAR(255);
```
- [ ] **Step 2: Verify migration applies**
Run: `mvn flyway:info -pl .` or start the app and check logs for `V015__add_tenant_db_password` in Flyway output.
- [ ] **Step 3: Commit**
```bash
git add src/main/resources/db/migration/V015__add_tenant_db_password.sql
git commit -m "feat: add db_password column to tenants table (V015)"
```
---
### Task 2: TenantEntity — add `dbPassword` field
**Files:**
- Modify: `src/main/java/net/siegeln/cameleer/saas/tenant/TenantEntity.java`
- [ ] **Step 1: Add field and accessors**
After the `provisionError` field (line 59), add:
```java
@Column(name = "db_password")
private String dbPassword;
```
After the `setProvisionError` method (line 102), add:
```java
public String getDbPassword() { return dbPassword; }
public void setDbPassword(String dbPassword) { this.dbPassword = dbPassword; }
```
- [ ] **Step 2: Commit**
```bash
git add src/main/java/net/siegeln/cameleer/saas/tenant/TenantEntity.java
git commit -m "feat: add dbPassword field to TenantEntity"
```
---
### Task 3: Create `TenantDatabaseService`
**Files:**
- Create: `src/main/java/net/siegeln/cameleer/saas/provisioning/TenantDatabaseService.java`
- [ ] **Step 1: Implement the service**
```java
package net.siegeln.cameleer.saas.provisioning;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/**
 * Creates and drops per-tenant PostgreSQL users and schemas
 * on the shared cameleer database for DB-level tenant isolation.
 */
@Service
public class TenantDatabaseService {

    private static final Logger log = LoggerFactory.getLogger(TenantDatabaseService.class);

    private final ProvisioningProperties props;

    public TenantDatabaseService(ProvisioningProperties props) {
        this.props = props;
    }

    /**
     * Create a dedicated PG user and schema for a tenant.
     * Idempotent — skips if user/schema already exist.
     */
    public void createTenantDatabase(String slug, String password) {
        validateSlug(slug);
        String url = props.datasourceUrl();
        if (url == null || url.isBlank()) {
            log.warn("No datasource URL configured — skipping tenant DB setup");
            return;
        }
        String user = "tenant_" + slug;
        String schema = "tenant_" + slug;
        try (Connection conn = DriverManager.getConnection(url, props.datasourceUsername(), props.datasourcePassword());
             Statement stmt = conn.createStatement()) {
            // Create user if not exists. Identifiers cannot be bound as JDBC
            // parameters, so safety rests on validateSlug() and escapePassword().
            boolean userExists;
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT 1 FROM pg_roles WHERE rolname = '" + user + "'")) {
                userExists = rs.next();
            }
            if (!userExists) {
                stmt.execute("CREATE USER \"" + user + "\" WITH PASSWORD '" + escapePassword(password) + "'");
                log.info("Created PostgreSQL user: {}", user);
            } else {
                // Update password on re-provision
                stmt.execute("ALTER USER \"" + user + "\" WITH PASSWORD '" + escapePassword(password) + "'");
                log.info("Updated password for existing PostgreSQL user: {}", user);
            }
            // Create schema if not exists
            boolean schemaExists;
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT 1 FROM information_schema.schemata WHERE schema_name = '" + schema + "'")) {
                schemaExists = rs.next();
            }
            if (!schemaExists) {
                stmt.execute("CREATE SCHEMA \"" + schema + "\" AUTHORIZATION \"" + user + "\"");
                log.info("Created PostgreSQL schema: {}", schema);
            } else {
                // Ensure ownership is correct
                stmt.execute("ALTER SCHEMA \"" + schema + "\" OWNER TO \"" + user + "\"");
                log.info("Schema {} already exists — ensured ownership", schema);
            }
            // Revoke direct grants on the public schema (grants made to the
            // PUBLIC pseudo-role are not affected by this statement)
            stmt.execute("REVOKE ALL ON SCHEMA public FROM \"" + user + "\"");
        } catch (Exception e) {
            throw new RuntimeException("Failed to create tenant database for '" + slug + "': " + e.getMessage(), e);
        }
    }

    /**
     * Drop tenant schema (CASCADE) and user. Idempotent.
     */
    public void dropTenantDatabase(String slug) {
        validateSlug(slug);
        String url = props.datasourceUrl();
        if (url == null || url.isBlank()) {
            log.warn("No datasource URL configured — skipping tenant DB cleanup");
            return;
        }
        String user = "tenant_" + slug;
        String schema = "tenant_" + slug;
        try (Connection conn = DriverManager.getConnection(url, props.datasourceUsername(), props.datasourcePassword());
             Statement stmt = conn.createStatement()) {
            stmt.execute("DROP SCHEMA IF EXISTS \"" + schema + "\" CASCADE");
            log.info("Dropped PostgreSQL schema: {}", schema);
            stmt.execute("DROP USER IF EXISTS \"" + user + "\"");
            log.info("Dropped PostgreSQL user: {}", user);
        } catch (Exception e) {
            log.warn("Failed to drop tenant database for '{}': {}", slug, e.getMessage());
        }
    }

    private void validateSlug(String slug) {
        if (slug == null || !slug.matches("^[a-z0-9-]+$")) {
            throw new IllegalArgumentException("Invalid tenant slug: " + slug);
        }
    }

    private String escapePassword(String password) {
        return password.replace("'", "''");
    }
}
```
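Since schema and role names cannot be bound as JDBC parameters, the concatenated DDL above is kept safe by `validateSlug` and `escapePassword`. A minimal standalone sketch of both guards (class name and sample values are illustrative, not part of the codebase):

```java
public class IdentifierGuardDemo {
    // Mirrors validateSlug(): only lowercase letters, digits, and hyphens,
    // so the slug is safe to splice into CREATE USER/SCHEMA statements.
    static boolean isValidSlug(String slug) {
        return slug != null && slug.matches("^[a-z0-9-]+$");
    }

    // Mirrors escapePassword(): double single quotes for a SQL string literal.
    static String escapePassword(String password) {
        return password.replace("'", "''");
    }

    public static void main(String[] args) {
        if (!isValidSlug("acme-42")) throw new AssertionError();
        if (isValidSlug("Acme")) throw new AssertionError();              // uppercase rejected
        if (isValidSlug("x\"; DROP SCHEMA y")) throw new AssertionError(); // injection attempt rejected
        if (!escapePassword("o'brien").equals("o''brien")) throw new AssertionError();
        System.out.println("guards ok");
    }
}
```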
- [ ] **Step 2: Commit**
```bash
git add src/main/java/net/siegeln/cameleer/saas/provisioning/TenantDatabaseService.java
git commit -m "feat: add TenantDatabaseService for per-tenant PG user+schema"
```
---
### Task 4: Add `dbPassword` to `TenantProvisionRequest`
**Files:**
- Modify: `src/main/java/net/siegeln/cameleer/saas/provisioning/TenantProvisionRequest.java`
- [ ] **Step 1: Add field to record**
Replace the entire record with:
```java
package net.siegeln.cameleer.saas.provisioning;

import java.util.UUID;

public record TenantProvisionRequest(
        UUID tenantId,
        String slug,
        String tier,
        String licenseToken,
        String dbPassword
) {}
```
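As an illustration (all values hypothetical), newly provisioned tenants carry the generated secret in the fifth component while pre-isolation tenants pass `null`, which is what the fallback in Task 5 keys on:

```java
import java.util.UUID;

public class ProvisionRequestDemo {
    // Local copy of the record so the sketch is self-contained.
    record TenantProvisionRequest(
            UUID tenantId, String slug, String tier,
            String licenseToken, String dbPassword) {}

    public static void main(String[] args) {
        var isolated = new TenantProvisionRequest(
                UUID.randomUUID(), "acme", "TEAM", "license-token", "40charhexsecret");
        var legacy = new TenantProvisionRequest(
                UUID.randomUUID(), "oldco", "STARTER", "license-token", null);
        // Task 5 branches on dbPassword() == null to pick shared credentials.
        System.out.println(isolated.dbPassword() != null); // per-tenant creds
        System.out.println(legacy.dbPassword() == null);   // shared-creds fallback
    }
}
```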
- [ ] **Step 2: Commit**
```bash
git add src/main/java/net/siegeln/cameleer/saas/provisioning/TenantProvisionRequest.java
git commit -m "feat: add dbPassword to TenantProvisionRequest"
```
---
### Task 5: Update `DockerTenantProvisioner` — per-tenant JDBC URL
**Files:**
- Modify: `src/main/java/net/siegeln/cameleer/saas/provisioning/DockerTenantProvisioner.java:197-200`
- [ ] **Step 1: Replace shared credentials with per-tenant credentials**
In `createServerContainer()` (lines 197-200), replace:
```java
var env = new java.util.ArrayList<>(List.of(
        "SPRING_DATASOURCE_URL=" + props.datasourceUrl(),
        "SPRING_DATASOURCE_USERNAME=" + props.datasourceUsername(),
        "SPRING_DATASOURCE_PASSWORD=" + props.datasourcePassword(),
```
With:
```java
// Per-tenant DB isolation: dedicated user+schema when dbPassword is set,
// shared credentials for backwards compatibility with pre-isolation tenants.
String dsUrl;
String dsUser;
String dsPass;
if (req.dbPassword() != null) {
    dsUrl = props.datasourceUrl() + "?currentSchema=tenant_" + slug + "&ApplicationName=tenant_" + slug;
    dsUser = "tenant_" + slug;
    dsPass = req.dbPassword();
} else {
    dsUrl = props.datasourceUrl();
    dsUser = props.datasourceUsername();
    dsPass = props.datasourcePassword();
}
var env = new java.util.ArrayList<>(List.of(
        "SPRING_DATASOURCE_URL=" + dsUrl,
        "SPRING_DATASOURCE_USERNAME=" + dsUser,
        "SPRING_DATASOURCE_PASSWORD=" + dsPass,
```
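A standalone sketch of the URL construction above (sample JDBC URL is made up). Note the assumption baked into the `?`: `props.datasourceUrl()` must carry no query string yet, otherwise the parameters would need to be appended with `&`:

```java
public class JdbcUrlDemo {
    // Mirrors the per-tenant branch above; assumes the base URL has no
    // existing query string, so '?' starts the parameter list.
    static String tenantUrl(String baseUrl, String slug) {
        return baseUrl + "?currentSchema=tenant_" + slug
                       + "&ApplicationName=tenant_" + slug;
    }

    public static void main(String[] args) {
        String url = tenantUrl("jdbc:postgresql://postgres:5432/cameleer", "acme");
        System.out.println(url);
        // jdbc:postgresql://postgres:5432/cameleer?currentSchema=tenant_acme&ApplicationName=tenant_acme
    }
}
```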
- [ ] **Step 2: Commit**
```bash
git add src/main/java/net/siegeln/cameleer/saas/provisioning/DockerTenantProvisioner.java
git commit -m "feat: construct per-tenant JDBC URL with currentSchema and ApplicationName"
```
---
### Task 6: Update `VendorTenantService` — provisioning and delete flows
**Files:**
- Modify: `src/main/java/net/siegeln/cameleer/saas/vendor/VendorTenantService.java`
- [ ] **Step 1: Inject `TenantDatabaseService`**
Add to the constructor and field declarations:
```java
private final TenantDatabaseService tenantDatabaseService;
```
Add to the constructor parameter list and assignment. (Follow the existing pattern of other injected services.)
- [ ] **Step 2: Update `provisionAsync()` — create DB before containers**
In `provisionAsync()` (around line 120), add DB creation before the provision call. Replace:
```java
var provisionRequest = new TenantProvisionRequest(tenantId, slug, tier, licenseToken);
ProvisionResult result = tenantProvisioner.provision(provisionRequest);
```
With:
```java
// Create per-tenant PG user + schema
String dbPassword = UUID.randomUUID().toString().replace("-", "")
        + UUID.randomUUID().toString().replace("-", "").substring(0, 8);
try {
    tenantDatabaseService.createTenantDatabase(slug, dbPassword);
} catch (Exception e) {
    log.error("Failed to create tenant database for {}: {}", slug, e.getMessage(), e);
    tenantRepository.findById(tenantId).ifPresent(t -> {
        t.setProvisionError("Database setup failed: " + e.getMessage());
        tenantRepository.save(t);
    });
    return;
}
// Store DB password on entity
TenantEntity tenantForDb = tenantRepository.findById(tenantId).orElse(null);
if (tenantForDb == null) {
    log.error("Tenant {} disappeared during provisioning", slug);
    return;
}
tenantForDb.setDbPassword(dbPassword);
tenantRepository.save(tenantForDb);
var provisionRequest = new TenantProvisionRequest(tenantId, slug, tier, licenseToken, dbPassword);
ProvisionResult result = tenantProvisioner.provision(provisionRequest);
```
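The generated secret is two random UUIDs with hyphens stripped: 32 hex characters from the first plus the first 8 of the second, i.e. a 40-character lowercase-hex password. A standalone sketch of just the generation (class name illustrative):

```java
import java.util.UUID;

public class DbPasswordDemo {
    // Mirrors the generation above: one full UUID (32 hex chars after
    // stripping hyphens) plus the first 8 chars of a second, 40 total.
    static String generate() {
        return UUID.randomUUID().toString().replace("-", "")
             + UUID.randomUUID().toString().replace("-", "").substring(0, 8);
    }

    public static void main(String[] args) {
        String pw = generate();
        if (pw.length() != 40) throw new AssertionError(pw.length());
        if (!pw.matches("[0-9a-f]{40}")) throw new AssertionError(pw);
        System.out.println("40-char lowercase-hex password generated");
    }
}
```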
- [ ] **Step 3: Update the existing `TenantProvisionRequest` constructor call in upgrade flow**
Search for any other `new TenantProvisionRequest(...)` calls. The `upgradeServer` method (or re-provision after upgrade) also creates a provision request. Update it to pass `dbPassword` from the entity:
```java
TenantEntity tenant = ...;
var provisionRequest = new TenantProvisionRequest(
        tenant.getId(), tenant.getSlug(), tenant.getTier().name(),
        licenseToken, tenant.getDbPassword());
```
If the tenant has `dbPassword == null` (pre-existing), this is fine — Task 5 handles the null fallback.
- [ ] **Step 4: Update `delete()` — use TenantDatabaseService**
In `delete()` (around line 306), replace:
```java
// Erase tenant data from server databases (GDPR)
dataCleanupService.cleanup(tenant.getSlug());
```
With:
```java
// Drop per-tenant PG schema + user
tenantDatabaseService.dropTenantDatabase(tenant.getSlug());
// Erase ClickHouse data (GDPR)
dataCleanupService.cleanupClickHouse(tenant.getSlug());
```
- [ ] **Step 5: Commit**
```bash
git add src/main/java/net/siegeln/cameleer/saas/vendor/VendorTenantService.java
git commit -m "feat: create per-tenant PG database during provisioning, drop on delete"
```
---
### Task 7: Refactor `TenantDataCleanupService` — ClickHouse only
**Files:**
- Modify: `src/main/java/net/siegeln/cameleer/saas/provisioning/TenantDataCleanupService.java`
- [ ] **Step 1: Remove PG logic, rename public method**
Remove the `dropPostgresSchema()` method and the `cleanup()` method. Replace with a single public method:
```java
/**
 * Deletes tenant data from ClickHouse tables (GDPR data erasure).
 * PostgreSQL cleanup is handled by TenantDatabaseService.
 */
public void cleanupClickHouse(String slug) {
    deleteClickHouseData(slug);
}
```
Keep `deleteClickHouseData()` unchanged.
- [ ] **Step 2: Commit**
```bash
git add src/main/java/net/siegeln/cameleer/saas/provisioning/TenantDataCleanupService.java
git commit -m "refactor: move PG cleanup to TenantDatabaseService, keep only ClickHouse"
```
---
### Task 8: Verify end-to-end
- [ ] **Step 1: Build**
```bash
mvn compile -pl .
```
Verify no compilation errors.
- [ ] **Step 2: Deploy and test tenant creation**
Deploy the updated SaaS image. Create a new tenant via the UI. Verify in PostgreSQL:
```sql
-- Should show the new tenant user
SELECT rolname FROM pg_roles WHERE rolname LIKE 'tenant_%';
-- Should show the new tenant schema
SELECT schema_name FROM information_schema.schemata WHERE schema_name LIKE 'tenant_%';
```
- [ ] **Step 3: Verify server container env vars**
```bash
docker inspect cameleer-server-<slug> | grep -E "DATASOURCE|currentSchema|ApplicationName"
```
Expected: URL contains `?currentSchema=tenant_<slug>&ApplicationName=tenant_<slug>`, username is `tenant_<slug>`.
- [ ] **Step 4: Verify Infrastructure page**
Navigate to Vendor > Infrastructure. The PostgreSQL card should now show the tenant schema with size/tables/rows.
- [ ] **Step 5: Test tenant deletion**
Delete the tenant. Verify:
```sql
-- User should be gone
SELECT rolname FROM pg_roles WHERE rolname LIKE 'tenant_%';
-- Schema should be gone
SELECT schema_name FROM information_schema.schemata WHERE schema_name LIKE 'tenant_%';
```
- [ ] **Step 6: Commit all remaining changes**
```bash
git add -A
git commit -m "feat: per-tenant PostgreSQL isolation — complete implementation"
```

# Email Template Polish Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** Replace inline HTML email templates with polished, branded HTML files loaded from classpath, featuring playful desert/caravan copy, structured card layout with watermark, and proper header/footer.
**Architecture:** Extract 4 email templates from `EmailConnectorService.buildSmtpConfig()` into standalone HTML files at `src/main/resources/email-templates/`. Generate a pre-faded watermark PNG served as a static asset. Inject `ProvisioningProperties` to resolve the watermark URL at runtime.
**Tech Stack:** Java 21, Spring Boot, ImageMagick (one-time asset generation), HTML email (inline styles only)
---
### File Map
| Action | File | Purpose |
|--------|------|---------|
| Create | `src/main/resources/email-templates/register.html` | Registration verification email |
| Create | `src/main/resources/email-templates/sign-in.html` | Sign-in verification email |
| Create | `src/main/resources/email-templates/forgot-password.html` | Password reset email |
| Create | `src/main/resources/email-templates/generic.html` | Generic verification email |
| Create | `src/main/resources/static/assets/email-watermark.png` | Pre-faded logo at 7% opacity |
| Modify | `src/main/java/net/siegeln/cameleer/saas/vendor/EmailConnectorService.java` | Load templates from classpath, inject watermark URL |
| Modify | `src/main/java/net/siegeln/cameleer/saas/config/SecurityConfig.java:49` | Permit `/assets/**` for unauthenticated email clients |
| Create | `src/test/java/net/siegeln/cameleer/saas/vendor/EmailTemplateLoadingTest.java` | Verify templates load and placeholders resolve |
---
### Task 1: Generate the pre-faded watermark PNG
**Files:**
- Create: `src/main/resources/static/assets/email-watermark.png`
- [ ] **Step 1: Generate the faded watermark using ImageMagick**
Source the logo from the design-system sibling repo. Apply 7% opacity on a transparent background, output to the static assets directory:
```bash
magick "C:/Users/Hendrik/Documents/projects/design-system/assets/cameleer-logo.png" \
-channel A -evaluate Multiply 0.07 +channel \
-resize 320x320 \
"src/main/resources/static/assets/email-watermark.png"
```
If `magick` is not available, use Python Pillow as fallback:
```bash
python3 -c "
from PIL import Image
img = Image.open('C:/Users/Hendrik/Documents/projects/design-system/assets/cameleer-logo.png').convert('RGBA')
img = img.resize((320, 320), Image.LANCZOS)
r, g, b, a = img.split()
a = a.point(lambda x: int(x * 0.07))
img = Image.merge('RGBA', (r, g, b, a))
img.save('src/main/resources/static/assets/email-watermark.png')
print('Saved watermark')
"
```
- [ ] **Step 2: Verify the file exists and is reasonable size**
```bash
ls -la src/main/resources/static/assets/email-watermark.png
```
Expected: File exists, roughly 5-30 KB.
- [ ] **Step 3: Commit**
```bash
git add src/main/resources/static/assets/email-watermark.png
git commit -m "feat: add pre-faded logo watermark for email templates"
```
---
### Task 2: Permit static assets in SecurityConfig
**Files:**
- Modify: `src/main/java/net/siegeln/cameleer/saas/config/SecurityConfig.java:49`
The watermark image must be loadable by email clients without authentication. The current security config uses `.anyRequest().authenticated()` as a catch-all, so `/assets/**` needs an explicit permit.
- [ ] **Step 1: Add `/assets/**` to the permitAll list**
In `SecurityConfig.java`, find the existing line:
```java
.requestMatchers("/_app/**", "/favicon.ico", "/favicon.svg", "/logo.svg", "/logo-dark.svg").permitAll()
```
Change it to:
```java
.requestMatchers("/_app/**", "/assets/**", "/favicon.ico", "/favicon.svg", "/logo.svg", "/logo-dark.svg").permitAll()
```
- [ ] **Step 2: Commit**
```bash
git add src/main/java/net/siegeln/cameleer/saas/config/SecurityConfig.java
git commit -m "feat: permit /assets/** for unauthenticated access (email watermark)"
```
---
### Task 3: Create the 4 HTML email template files
**Files:**
- Create: `src/main/resources/email-templates/register.html`
- Create: `src/main/resources/email-templates/sign-in.html`
- Create: `src/main/resources/email-templates/forgot-password.html`
- Create: `src/main/resources/email-templates/generic.html`
All templates use the same card structure. The `{{code}}` placeholder is Logto's built-in substitution. The `{{watermarkUrl}}` placeholder is replaced by `EmailConnectorService` at runtime.
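The two placeholders are resolved in separate phases, which the following standalone sketch illustrates (template fragment and URL are made up): the service replaces `{{watermarkUrl}}` before the template is uploaded, while `{{code}}` must survive verbatim for Logto to substitute at send time:

```java
public class PlaceholderDemo {
    // Resolve only the service-side placeholder; {{code}} is left for Logto.
    static String resolveWatermark(String template, String url) {
        return template.replace("{{watermarkUrl}}", url);
    }

    public static void main(String[] args) {
        String template = "<img src=\"{{watermarkUrl}}\"/> code: {{code}}";
        String resolved = resolveWatermark(template,
                "https://saas.example.com/platform/assets/email-watermark.png");
        System.out.println(!resolved.contains("{{watermarkUrl}}")); // true
        System.out.println(resolved.contains("{{code}}"));          // true
    }
}
```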
- [ ] **Step 1: Create `register.html`**
```html
<div style="font-family:-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,sans-serif;max-width:480px;margin:0 auto;background:#ffffff;border-radius:8px;overflow:hidden;border:1px solid #e8e0d4;">
<div style="background:#C6820E;padding:20px 24px;text-align:center;">
<span style="font-size:22px;font-weight:700;color:#ffffff;letter-spacing:0.5px;">Cameleer.io</span>
</div>
<div style="padding:32px 24px 24px;position:relative;overflow:hidden;">
<img src="{{watermarkUrl}}" width="320" height="320" style="position:absolute;top:-30px;right:-50px;width:320px;height:320px;opacity:0.07;pointer-events:none;" alt="" />
<div style="position:relative;">
<p style="color:#1a1a1a;font-size:16px;font-weight:600;margin:0 0 8px;">Welcome to the caravan!</p>
<p style="color:#444;font-size:14px;line-height:1.6;margin:0 0 24px;">Enter this code to verify your email and claim your spot. The dunes wait for no one.</p>
<div style="text-align:center;margin:0 0 24px;">
<div style="display:inline-block;background:#FDF6EC;border:2px solid #C6820E;border-radius:8px;padding:16px 32px;">
<span style="font-size:32px;font-weight:700;letter-spacing:8px;color:#C6820E;font-family:'Courier New',Courier,monospace;">{{code}}</span>
</div>
</div>
<p style="color:#888;font-size:13px;line-height:1.5;margin:0;">This code expires in 10 minutes. If you didn't request this, you can safely ignore this email — no camels were harmed.</p>
</div>
</div>
<div style="border-top:1px solid #e8e0d4;padding:16px 24px;text-align:center;">
<p style="color:#999;font-size:12px;margin:0;">Questions? Contact your administrator</p>
<p style="color:#bbb;font-size:11px;margin:6px 0 0;">Cameleer — Apache Camel observability</p>
</div>
</div>
```
- [ ] **Step 2: Create `sign-in.html`**
```html
<div style="font-family:-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,sans-serif;max-width:480px;margin:0 auto;background:#ffffff;border-radius:8px;overflow:hidden;border:1px solid #e8e0d4;">
<div style="background:#C6820E;padding:20px 24px;text-align:center;">
<span style="font-size:22px;font-weight:700;color:#ffffff;letter-spacing:0.5px;">Cameleer.io</span>
</div>
<div style="padding:32px 24px 24px;position:relative;overflow:hidden;">
<img src="{{watermarkUrl}}" width="320" height="320" style="position:absolute;top:-30px;right:-50px;width:320px;height:320px;opacity:0.07;pointer-events:none;" alt="" />
<div style="position:relative;">
<p style="color:#1a1a1a;font-size:16px;font-weight:600;margin:0 0 8px;">Back at the oasis already?</p>
<p style="color:#444;font-size:14px;line-height:1.6;margin:0 0 24px;">Here's your sign-in code. The caravan master is checking credentials.</p>
<div style="text-align:center;margin:0 0 24px;">
<div style="display:inline-block;background:#FDF6EC;border:2px solid #C6820E;border-radius:8px;padding:16px 32px;">
<span style="font-size:32px;font-weight:700;letter-spacing:8px;color:#C6820E;font-family:'Courier New',Courier,monospace;">{{code}}</span>
</div>
</div>
<p style="color:#888;font-size:13px;line-height:1.5;margin:0;">This code expires in 10 minutes.</p>
</div>
</div>
<div style="border-top:1px solid #e8e0d4;padding:16px 24px;text-align:center;">
<p style="color:#999;font-size:12px;margin:0;">Questions? Contact your administrator</p>
<p style="color:#bbb;font-size:11px;margin:6px 0 0;">Cameleer — Apache Camel observability</p>
</div>
</div>
```
- [ ] **Step 3: Create `forgot-password.html`**
```html
<div style="font-family:-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,sans-serif;max-width:480px;margin:0 auto;background:#ffffff;border-radius:8px;overflow:hidden;border:1px solid #e8e0d4;">
<div style="background:#C6820E;padding:20px 24px;text-align:center;">
<span style="font-size:22px;font-weight:700;color:#ffffff;letter-spacing:0.5px;">Cameleer.io</span>
</div>
<div style="padding:32px 24px 24px;position:relative;overflow:hidden;">
<img src="{{watermarkUrl}}" width="320" height="320" style="position:absolute;top:-30px;right:-50px;width:320px;height:320px;opacity:0.07;pointer-events:none;" alt="" />
<div style="position:relative;">
<p style="color:#1a1a1a;font-size:16px;font-weight:600;margin:0 0 8px;">Lost in the dunes?</p>
<p style="color:#444;font-size:14px;line-height:1.6;margin:0 0 24px;">No worries — enter this code to reset your password and get back on the trail.</p>
<div style="text-align:center;margin:0 0 24px;">
<div style="display:inline-block;background:#FDF6EC;border:2px solid #C6820E;border-radius:8px;padding:16px 32px;">
<span style="font-size:32px;font-weight:700;letter-spacing:8px;color:#C6820E;font-family:'Courier New',Courier,monospace;">{{code}}</span>
</div>
</div>
<p style="color:#888;font-size:13px;line-height:1.5;margin:0;">This code expires in 10 minutes. If you didn't request a password reset, you can safely ignore this email.</p>
</div>
</div>
<div style="border-top:1px solid #e8e0d4;padding:16px 24px;text-align:center;">
<p style="color:#999;font-size:12px;margin:0;">Questions? Contact your administrator</p>
<p style="color:#bbb;font-size:11px;margin:6px 0 0;">Cameleer — Apache Camel observability</p>
</div>
</div>
```
- [ ] **Step 4: Create `generic.html`**
```html
<div style="font-family:-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,sans-serif;max-width:480px;margin:0 auto;background:#ffffff;border-radius:8px;overflow:hidden;border:1px solid #e8e0d4;">
<div style="background:#C6820E;padding:20px 24px;text-align:center;">
<span style="font-size:22px;font-weight:700;color:#ffffff;letter-spacing:0.5px;">Cameleer.io</span>
</div>
<div style="padding:32px 24px 24px;position:relative;overflow:hidden;">
<img src="{{watermarkUrl}}" width="320" height="320" style="position:absolute;top:-30px;right:-50px;width:320px;height:320px;opacity:0.07;pointer-events:none;" alt="" />
<div style="position:relative;">
<p style="color:#1a1a1a;font-size:16px;font-weight:600;margin:0 0 8px;">Quick checkpoint</p>
<p style="color:#444;font-size:14px;line-height:1.6;margin:0 0 24px;">Here's your verification code. Just making sure it's really you at the reins.</p>
<div style="text-align:center;margin:0 0 24px;">
<div style="display:inline-block;background:#FDF6EC;border:2px solid #C6820E;border-radius:8px;padding:16px 32px;">
<span style="font-size:32px;font-weight:700;letter-spacing:8px;color:#C6820E;font-family:'Courier New',Courier,monospace;">{{code}}</span>
</div>
</div>
<p style="color:#888;font-size:13px;line-height:1.5;margin:0;">This code expires in 10 minutes.</p>
</div>
</div>
<div style="border-top:1px solid #e8e0d4;padding:16px 24px;text-align:center;">
<p style="color:#999;font-size:12px;margin:0;">Questions? Contact your administrator</p>
<p style="color:#bbb;font-size:11px;margin:6px 0 0;">Cameleer — Apache Camel observability</p>
</div>
</div>
```
- [ ] **Step 5: Commit**
```bash
git add src/main/resources/email-templates/
git commit -m "feat: add branded HTML email templates with desert/caravan copy"
```
---
### Task 4: Refactor EmailConnectorService to load templates from classpath
**Files:**
- Modify: `src/main/java/net/siegeln/cameleer/saas/vendor/EmailConnectorService.java`
- [ ] **Step 1: Write the failing test**
Create `src/test/java/net/siegeln/cameleer/saas/vendor/EmailTemplateLoadingTest.java`:
```java
package net.siegeln.cameleer.saas.vendor;

import org.junit.jupiter.api.Test;
import org.springframework.core.io.ClassPathResource;

import java.io.IOException;
import java.nio.charset.StandardCharsets;

import static org.junit.jupiter.api.Assertions.*;

class EmailTemplateLoadingTest {

    private static final String[] TEMPLATE_FILES = {
            "email-templates/register.html",
            "email-templates/sign-in.html",
            "email-templates/forgot-password.html",
            "email-templates/generic.html"
    };

    @Test
    void allTemplateFilesExistOnClasspath() {
        for (String path : TEMPLATE_FILES) {
            var resource = new ClassPathResource(path);
            assertTrue(resource.exists(), "Template file missing: " + path);
        }
    }

    @Test
    void templatesContainCodePlaceholder() throws IOException {
        for (String path : TEMPLATE_FILES) {
            String content = new ClassPathResource(path).getContentAsString(StandardCharsets.UTF_8);
            assertTrue(content.contains("{{code}}"),
                    path + " must contain {{code}} placeholder");
        }
    }

    @Test
    void templatesContainWatermarkPlaceholder() throws IOException {
        for (String path : TEMPLATE_FILES) {
            String content = new ClassPathResource(path).getContentAsString(StandardCharsets.UTF_8);
            assertTrue(content.contains("{{watermarkUrl}}"),
                    path + " must contain {{watermarkUrl}} placeholder");
        }
    }

    @Test
    void watermarkPlaceholderIsReplaced() throws IOException {
        String content = new ClassPathResource("email-templates/register.html")
                .getContentAsString(StandardCharsets.UTF_8);
        String resolved = content.replace("{{watermarkUrl}}",
                "https://example.com/platform/assets/email-watermark.png");
        assertFalse(resolved.contains("{{watermarkUrl}}"));
        assertTrue(resolved.contains("https://example.com/platform/assets/email-watermark.png"));
    }

    @Test
    void templatesContainBrandElements() throws IOException {
        for (String path : TEMPLATE_FILES) {
            String content = new ClassPathResource(path).getContentAsString(StandardCharsets.UTF_8);
            assertTrue(content.contains("Cameleer.io"),
                    path + " must contain Cameleer.io header");
            assertTrue(content.contains("Apache Camel observability"),
                    path + " must contain tagline");
            assertTrue(content.contains("#C6820E"),
                    path + " must use brand color");
        }
    }
}
```
- [ ] **Step 2: Run tests to verify they pass (templates exist from Task 3)**
```bash
./mvnw test -pl . -Dtest=EmailTemplateLoadingTest -Dspring.profiles.active=test
```
Expected: All 5 tests PASS.
- [ ] **Step 3: Add `ProvisioningProperties` dependency to `EmailConnectorService`**
Replace the constructor and add the template loading logic. The full updated `EmailConnectorService.java`:
Change the imports and fields at the top of the class — add `ProvisioningProperties` import and field:
```java
import net.siegeln.cameleer.saas.provisioning.ProvisioningProperties;
import org.springframework.core.io.ClassPathResource;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
```
Replace the constructor:
```java
private final LogtoManagementClient logtoClient;
private final ProvisioningProperties provisioningProps;

public EmailConnectorService(LogtoManagementClient logtoClient, ProvisioningProperties provisioningProps) {
    this.logtoClient = logtoClient;
    this.provisioningProps = provisioningProps;
}
```
Replace the `buildSmtpConfig` method (lines 157-191) with:
```java
/** Load an email template from classpath and resolve the watermark URL placeholder. */
private String loadTemplate(String filename) {
    try {
        String content = new ClassPathResource("email-templates/" + filename)
                .getContentAsString(StandardCharsets.UTF_8);
        String watermarkUrl = provisioningProps.publicProtocol() + "://"
                + provisioningProps.publicHost() + "/platform/assets/email-watermark.png";
        return content.replace("{{watermarkUrl}}", watermarkUrl);
    } catch (IOException e) {
        throw new IllegalStateException("Failed to load email template: " + filename, e);
    }
}

/** Build the Logto SMTP connector config with Cameleer-branded email templates. */
private Map<String, Object> buildSmtpConfig(SmtpConfig smtp) {
    var config = new HashMap<String, Object>();
    config.put("host", smtp.host());
    config.put("port", smtp.port());
    config.put("auth", Map.of("user", smtp.username(), "pass", smtp.password()));
    config.put("fromEmail", smtp.fromEmail());
    config.put("templates", List.of(
            Map.of(
                    "usageType", "Register",
                    "contentType", "text/html",
                    "subject", "Your caravan pass is almost ready",
                    "content", loadTemplate("register.html")
            ),
            Map.of(
                    "usageType", "SignIn",
                    "contentType", "text/html",
                    "subject", "Your Cameleer sign-in code",
                    "content", loadTemplate("sign-in.html")
            ),
            Map.of(
                    "usageType", "ForgotPassword",
                    "contentType", "text/html",
                    "subject", "Reset your Cameleer password",
                    "content", loadTemplate("forgot-password.html")
            ),
            Map.of(
                    "usageType", "Generic",
                    "contentType", "text/html",
                    "subject", "Your Cameleer verification code",
                    "content", loadTemplate("generic.html")
            )
    ));
    return config;
}
```
- [ ] **Step 4: Verify the project compiles**
```bash
./mvnw compile -pl .
```
Expected: BUILD SUCCESS
- [ ] **Step 5: Run the template tests again to confirm nothing broke**
```bash
./mvnw test -pl . -Dtest=EmailTemplateLoadingTest -Dspring.profiles.active=test
```
Expected: All 5 tests PASS.
- [ ] **Step 6: Commit**
```bash
git add src/main/java/net/siegeln/cameleer/saas/vendor/EmailConnectorService.java
git add src/test/java/net/siegeln/cameleer/saas/vendor/EmailTemplateLoadingTest.java
git commit -m "feat: load email templates from classpath with watermark URL resolution"
```
---
### Task 5: Run the full test suite
**Files:** None (verification only)
- [ ] **Step 1: Run all tests**
```bash
./mvnw test -Dspring.profiles.active=test
```
Expected: BUILD SUCCESS, all tests pass. If any existing tests fail due to the new `ProvisioningProperties` constructor parameter on `EmailConnectorService`, they will need their mocks updated — but there are no existing tests for this class.
- [ ] **Step 2: Verify the watermark is accessible without auth by checking SecurityConfig**
Confirm the `/assets/**` matcher is in the `permitAll()` chain (done in Task 2). With context-path `/platform`, the full public URL will be `https://<host>/platform/assets/email-watermark.png`.
- [ ] **Step 3: Final commit if any fixes were needed**
Only if test failures required changes:
```bash
git add -A
git commit -m "fix: resolve test failures from email template refactor"
```

# License Minter Integration — Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** Replace UUID-based license tokens with Ed25519-signed tokens minted by `cameleer-license-minter`, with full vendor UI for configurable minting, distribution, and verification.
**Architecture:** The SaaS platform embeds `cameleer-license-minter` as a Maven dependency and calls `LicenseMinter.mint()` with an Ed25519 private key stored in the DB. Signed tokens are pushed to tenant servers via env vars and REST API. The vendor UI provides tier presets with per-limit customization, copy/email distribution as env-var bundles, and a token verification tool.
**Tech Stack:** Spring Boot 3.4, JPA/Flyway/PostgreSQL, Ed25519 (JCE), `cameleer-license-minter` + `cameleer-server-core` (LicenseInfo, LicenseValidator), React 19, @cameleer/design-system, TanStack Query.
**Decisions:**
- Tiers renamed: LOW→STARTER, MID→TEAM, HIGH→BUSINESS, BUSINESS→ENTERPRISE
- Tiers are presets only — vendor can customize any limit (becomes "Custom" in UI)
- Private key stored in DB (signing_keys table)
- Features concept dropped — server enforces caps, not feature flags
- Standalone distribution: license bundle = token + public key + tenant ID as env vars
- Verify tool: paste token → decode + validate signature → show envelope + state
---
## Phase 1: Backend Foundation
### Task 1: Maven dependency + Flyway migration
**Files:**
- Modify: `pom.xml`
- Create: `src/main/resources/db/migration/V002__license_minter.sql`
- [ ] **Step 1: Add minter dependency to pom.xml**
Add inside `<dependencies>`:
```xml
<!-- License Minter (Ed25519 signing) -->
<dependency>
<groupId>com.cameleer</groupId>
<artifactId>cameleer-license-minter</artifactId>
<version>1.0-SNAPSHOT</version>
</dependency>
```
This transitively brings in `cameleer-server-core` (for `LicenseInfo`, `LicenseValidator`).
- [ ] **Step 2: Create Flyway V002 migration**
```sql
-- V002: License minter integration
-- Signing keys for Ed25519 license minting
CREATE TABLE signing_keys (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
public_key_b64 TEXT NOT NULL,
private_key_b64 TEXT NOT NULL,
active BOOLEAN NOT NULL DEFAULT TRUE,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Rename tiers: LOW→STARTER, MID→TEAM, HIGH→BUSINESS, BUSINESS→ENTERPRISE
-- A single CASE UPDATE is required here: sequential UPDATEs would double-rename,
-- since rows just renamed HIGH→BUSINESS would be caught by BUSINESS→ENTERPRISE.
UPDATE tenants SET tier = CASE tier
  WHEN 'LOW' THEN 'STARTER'
  WHEN 'MID' THEN 'TEAM'
  WHEN 'HIGH' THEN 'BUSINESS'
  WHEN 'BUSINESS' THEN 'ENTERPRISE'
  ELSE tier
END WHERE tier IN ('LOW', 'MID', 'HIGH', 'BUSINESS');
-- Same for licenses table
UPDATE licenses SET tier = CASE tier
WHEN 'LOW' THEN 'STARTER'
WHEN 'MID' THEN 'TEAM'
WHEN 'HIGH' THEN 'BUSINESS'
WHEN 'BUSINESS' THEN 'ENTERPRISE'
ELSE tier
END WHERE tier IN ('LOW', 'MID', 'HIGH', 'BUSINESS');
-- Add new license columns
ALTER TABLE licenses ADD COLUMN label VARCHAR(255);
ALTER TABLE licenses ADD COLUMN grace_period_days INTEGER NOT NULL DEFAULT 0;
-- Drop features column (server enforces caps, not feature flags)
ALTER TABLE licenses DROP COLUMN features;
```
- [ ] **Step 3: Verify build compiles**
Run: `mvn compile -q` (just compile, no tests yet — tests will break until Tier enum is updated)
- [ ] **Step 4: Commit**
```
feat: add cameleer-license-minter dependency and V002 migration
Adds Ed25519 license minting library, signing_keys table,
renames tiers (LOW→STARTER, MID→TEAM, HIGH→BUSINESS, BUSINESS→ENTERPRISE),
adds label + grace_period_days to licenses, drops features column.
```
### Task 2: Tier enum rename + LicenseDefaults rewrite
**Files:**
- Modify: `src/main/java/net/siegeln/cameleer/saas/tenant/Tier.java`
- Modify: `src/main/java/net/siegeln/cameleer/saas/tenant/TenantEntity.java`
- Modify: `src/main/java/net/siegeln/cameleer/saas/tenant/TenantService.java`
- Modify: `src/main/java/net/siegeln/cameleer/saas/license/LicenseDefaults.java`
- [ ] **Step 1: Update Tier enum**
```java
package net.siegeln.cameleer.saas.tenant;
public enum Tier {
STARTER, TEAM, BUSINESS, ENTERPRISE
}
```
- [ ] **Step 2: Update TenantEntity default**
Change `private Tier tier = Tier.LOW;` to `private Tier tier = Tier.STARTER;`
- [ ] **Step 3: Update TenantService fallback**
Change `Tier.valueOf(request.tier()) : Tier.LOW` to `Tier.valueOf(request.tier()) : Tier.STARTER`
- [ ] **Step 4: Rewrite LicenseDefaults**
Replace entire file with 13-key limits per tier matching the handoff cap matrix. Drop `featuresForTier()`. Only `limitsForTier()`.
```java
package net.siegeln.cameleer.saas.license;
import net.siegeln.cameleer.saas.tenant.Tier;
import java.util.Map;
public final class LicenseDefaults {
private LicenseDefaults() {}
public static final int DEFAULT_GRACE_PERIOD_DAYS = 14;
public static final int DEFAULT_LICENSE_DAYS = 365;
public static Map<String, Integer> limitsForTier(Tier tier) {
return switch (tier) {
case STARTER -> Map.ofEntries(
Map.entry("max_environments", 2),
Map.entry("max_apps", 10),
Map.entry("max_agents", 20),
Map.entry("max_users", 5),
Map.entry("max_outbound_connections", 5),
Map.entry("max_alert_rules", 10),
Map.entry("max_total_cpu_millis", 8000),
Map.entry("max_total_memory_mb", 8192),
Map.entry("max_total_replicas", 25),
Map.entry("max_execution_retention_days", 7),
Map.entry("max_log_retention_days", 7),
Map.entry("max_metric_retention_days", 7),
Map.entry("max_jar_retention_count", 5)
);
case TEAM -> Map.ofEntries(
Map.entry("max_environments", 5),
Map.entry("max_apps", 50),
Map.entry("max_agents", 100),
Map.entry("max_users", 25),
Map.entry("max_outbound_connections", 25),
Map.entry("max_alert_rules", 50),
Map.entry("max_total_cpu_millis", 32000),
Map.entry("max_total_memory_mb", 32768),
Map.entry("max_total_replicas", 100),
Map.entry("max_execution_retention_days", 30),
Map.entry("max_log_retention_days", 30),
Map.entry("max_metric_retention_days", 30),
Map.entry("max_jar_retention_count", 10)
);
case BUSINESS -> Map.ofEntries(
Map.entry("max_environments", 10),
Map.entry("max_apps", 200),
Map.entry("max_agents", 500),
Map.entry("max_users", 100),
Map.entry("max_outbound_connections", 100),
Map.entry("max_alert_rules", 200),
Map.entry("max_total_cpu_millis", 128000),
Map.entry("max_total_memory_mb", 131072),
Map.entry("max_total_replicas", 500),
Map.entry("max_execution_retention_days", 90),
Map.entry("max_log_retention_days", 90),
Map.entry("max_metric_retention_days", 90),
Map.entry("max_jar_retention_count", 25)
);
case ENTERPRISE -> Map.ofEntries(
Map.entry("max_environments", 50),
Map.entry("max_apps", 1000),
Map.entry("max_agents", 5000),
Map.entry("max_users", 1000),
Map.entry("max_outbound_connections", 500),
Map.entry("max_alert_rules", 1000),
Map.entry("max_total_cpu_millis", 512000),
Map.entry("max_total_memory_mb", 524288),
Map.entry("max_total_replicas", 2000),
Map.entry("max_execution_retention_days", 365),
Map.entry("max_log_retention_days", 180),
Map.entry("max_metric_retention_days", 180),
Map.entry("max_jar_retention_count", 50)
);
};
}
}
```
- [ ] **Step 5: Commit**
```
refactor: rename tiers and rewrite LicenseDefaults to 13-key cap matrix
```
### Task 3: SigningKeyService + SigningKeyEntity
**Files:**
- Create: `src/main/java/net/siegeln/cameleer/saas/license/SigningKeyEntity.java`
- Create: `src/main/java/net/siegeln/cameleer/saas/license/SigningKeyRepository.java`
- Create: `src/main/java/net/siegeln/cameleer/saas/license/SigningKeyService.java`
- [ ] **Step 1: Create SigningKeyEntity**
JPA entity for the `signing_keys` table: id (UUID), publicKeyB64 (text), privateKeyB64 (text), active (boolean), createdAt (Instant).
- [ ] **Step 2: Create SigningKeyRepository**
JpaRepository with `Optional<SigningKeyEntity> findByActiveTrue()`.
- [ ] **Step 3: Create SigningKeyService**
Methods:
- `getOrCreateActiveKey()` → returns the active key, generating a new Ed25519 keypair on first call
- `getPublicKeyBase64()` → convenience for the active key's public key
- `getPrivateKey()` → reconstructs `PrivateKey` from stored base64
Key generation:
```java
KeyPair kp = KeyPairGenerator.getInstance("Ed25519").generateKeyPair();
String pubB64 = Base64.getEncoder().encodeToString(kp.getPublic().getEncoded());
String privB64 = Base64.getEncoder().encodeToString(kp.getPrivate().getEncoded());
```
Private key reconstruction:
```java
byte[] keyBytes = Base64.getDecoder().decode(entity.getPrivateKeyB64());
PKCS8EncodedKeySpec spec = new PKCS8EncodedKeySpec(keyBytes);
return KeyFactory.getInstance("Ed25519").generatePrivate(spec);
```
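The two snippets above can be exercised as one JDK-only round-trip — generate, encode both halves to base64 (as the `signing_keys` columns would store them), reconstruct, then sign and verify. Class and method names here are illustrative, not the actual service:

```java
import java.security.KeyFactory;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;
import java.security.spec.PKCS8EncodedKeySpec;
import java.security.spec.X509EncodedKeySpec;
import java.util.Base64;

public class Ed25519RoundTrip {

    /** Generate a keypair, persist both halves as base64, reconstruct them, and sign/verify. */
    static boolean roundTrip() {
        try {
            // What SigningKeyService would do on first call
            KeyPair kp = KeyPairGenerator.getInstance("Ed25519").generateKeyPair();
            String pubB64 = Base64.getEncoder().encodeToString(kp.getPublic().getEncoded());
            String privB64 = Base64.getEncoder().encodeToString(kp.getPrivate().getEncoded());

            // Reconstruction from the stored base64 columns: private key is PKCS#8,
            // public key is X.509 SubjectPublicKeyInfo
            KeyFactory kf = KeyFactory.getInstance("Ed25519");
            PrivateKey priv = kf.generatePrivate(
                    new PKCS8EncodedKeySpec(Base64.getDecoder().decode(privB64)));
            PublicKey pub = kf.generatePublic(
                    new X509EncodedKeySpec(Base64.getDecoder().decode(pubB64)));

            // Sign with the reconstructed private key, verify with the reconstructed public key
            byte[] payload = "license-payload".getBytes(java.nio.charset.StandardCharsets.UTF_8);
            Signature signer = Signature.getInstance("Ed25519");
            signer.initSign(priv);
            signer.update(payload);
            byte[] sig = signer.sign();

            Signature verifier = Signature.getInstance("Ed25519");
            verifier.initVerify(pub);
            verifier.update(payload);
            return verifier.verify(sig);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip()); // prints true
    }
}
```

Note the asymmetry: `PKCS8EncodedKeySpec` for the private key, `X509EncodedKeySpec` for the public key — `getEncoded()` emits a different format for each half.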
- [ ] **Step 4: Commit**
```
feat: add SigningKeyService for Ed25519 keypair management
```
### Task 4: Rewrite LicenseService + LicenseEntity
**Files:**
- Modify: `src/main/java/net/siegeln/cameleer/saas/license/LicenseEntity.java`
- Modify: `src/main/java/net/siegeln/cameleer/saas/license/LicenseService.java`
- Modify: `src/main/java/net/siegeln/cameleer/saas/license/dto/LicenseResponse.java`
- [ ] **Step 1: Update LicenseEntity**
- Remove `features` field + getter/setter
- Add `label` (String) field + getter/setter
- Add `gracePeriodDays` (int) field + getter/setter
- [ ] **Step 2: Rewrite LicenseService**
- Add `SigningKeyService` dependency
- Rewrite `generateLicense(TenantEntity, Map<String,Integer> limits, Instant expiresAt, int gracePeriodDays, String label, UUID actorId)`:
- Build `LicenseInfo(UUID.randomUUID(), tenant.getSlug(), label, limits, Instant.now(), expiresAt, gracePeriodDays)`
- Call `LicenseMinter.mint(info, signingKeyService.getPrivateKey())`
- Store signed token in entity
- Add convenience overload `generateLicense(TenantEntity, Duration, UUID actorId)` that uses tier presets
- Remove `verifyLicenseToken()` (server validates cryptographically)
- Add `verifyToken(String token)` that uses `LicenseValidator`
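The `LicenseMinter`/`LicenseValidator` APIs come from the embedded library, so the exact token format is theirs. As an illustration only — the `payload.signature` shape below is an assumption, not the library's actual wire format — a signed token of this kind can be produced and checked with the JDK alone:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;
import java.util.Base64;

public class TokenSketch {

    /** Mint: base64url(payload) + "." + base64url(Ed25519 signature over payload). */
    static String mint(String payloadJson, PrivateKey priv) throws Exception {
        byte[] body = payloadJson.getBytes(StandardCharsets.UTF_8);
        Signature s = Signature.getInstance("Ed25519");
        s.initSign(priv);
        s.update(body);
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        return enc.encodeToString(body) + "." + enc.encodeToString(s.sign());
    }

    /** Verify: split on ".", check the signature over the decoded payload half. */
    static boolean verify(String token, PublicKey pub) throws Exception {
        String[] parts = token.split("\\.");
        if (parts.length != 2) return false;
        Signature s = Signature.getInstance("Ed25519");
        s.initVerify(pub);
        s.update(Base64.getUrlDecoder().decode(parts[0]));
        return s.verify(Base64.getUrlDecoder().decode(parts[1]));
    }

    public static void main(String[] args) throws Exception {
        KeyPair kp = KeyPairGenerator.getInstance("Ed25519").generateKeyPair();
        String token = mint("{\"tenantId\":\"acme\"}", kp.getPrivate());
        System.out.println(token.contains(".") && verify(token, kp.getPublic())); // prints true
    }
}
```

This is also why the test in Task 6 asserts the token contains a `.` separator: a signed token carries at least two parts, whereas the old UUID tokens carried none.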
- [ ] **Step 3: Update LicenseResponse DTO**
Replace `features` with `label` and `gracePeriodDays`. Add `publicKeyB64` for bundle distribution.
- [ ] **Step 4: Commit**
```
feat: rewrite LicenseService to mint Ed25519-signed tokens
```
### Task 5: Update controllers + portal service
**Files:**
- Modify: `src/main/java/net/siegeln/cameleer/saas/license/LicenseController.java`
- Modify: `src/main/java/net/siegeln/cameleer/saas/vendor/VendorTenantController.java`
- Modify: `src/main/java/net/siegeln/cameleer/saas/vendor/VendorTenantService.java`
- Modify: `src/main/java/net/siegeln/cameleer/saas/portal/TenantPortalService.java`
- Modify: `src/main/java/net/siegeln/cameleer/saas/portal/TenantPortalController.java`
- [ ] **Step 1: Update VendorTenantController**
- `POST /{id}/license` now takes a request body with limits, expiresAt, gracePeriodDays, label
- Add `GET /license-presets` endpoint returning tier presets
- Add `POST /license/verify` endpoint
- Add `GET /signing-key/public` endpoint
- [ ] **Step 2: Update VendorTenantService**
- `renewLicense()` updated to accept customizable parameters
- Add `mintLicense()` method with full limit configuration
- Add `verifyToken()` delegation
- [ ] **Step 3: Update VendorTenantController response types**
- `VendorTenantSummary` — fix `agentLimit` to use `max_agents` key
- `VendorTenantDetail` — license field uses updated LicenseResponse
- [ ] **Step 4: Update TenantPortalService**
- `DashboardData` — drop features, keep limits
- `LicenseData` — drop features, add label + gracePeriodDays
- [ ] **Step 5: Commit**
```
feat: update vendor/portal APIs for Ed25519 license minting
```
### Task 6: Fix tests
**Files:**
- Modify: `src/test/java/net/siegeln/cameleer/saas/license/LicenseServiceTest.java`
- Modify: `src/test/java/net/siegeln/cameleer/saas/license/LicenseControllerTest.java`
- Modify: `src/test/java/net/siegeln/cameleer/saas/vendor/VendorTenantServiceTest.java`
- Modify: `src/test/java/net/siegeln/cameleer/saas/tenant/TenantServiceTest.java`
- Modify: `src/test/java/net/siegeln/cameleer/saas/portal/TenantPortalServiceTest.java`
- Modify: `src/test/java/net/siegeln/cameleer/saas/portal/TenantPortalControllerTest.java`
- [ ] **Step 1: Update all Tier.LOW→STARTER, Tier.MID→TEAM, Tier.HIGH→BUSINESS, Tier.BUSINESS→ENTERPRISE**
- [ ] **Step 2: Update LicenseServiceTest**
- `generateLicense_producesUuidToken` → rename to `generateLicense_producesSignedToken`, assert token contains `.` separator
- Remove feature-related assertions
- Mock `SigningKeyService` to return a test keypair
- Remove `verifyLicenseToken` tests
- [ ] **Step 3: Update LicenseControllerTest**
- Remove feature assertions (`features.correlation`)
- Update tier values in assertions
- [ ] **Step 4: Run tests**
Run: `mvn test -q`
- [ ] **Step 5: Commit**
```
test: update tests for Ed25519 license minting and tier rename
```
## Phase 2: Provisioning Integration
### Task 7: Push public key to tenant containers
**Files:**
- Modify: `src/main/java/net/siegeln/cameleer/saas/provisioning/DockerTenantProvisioner.java`
- Modify: `src/main/java/net/siegeln/cameleer/saas/vendor/VendorTenantService.java`
- [ ] **Step 1: Inject SigningKeyService into DockerTenantProvisioner**
Add `SigningKeyService` as a constructor dependency.
- [ ] **Step 2: Add CAMELEER_SERVER_LICENSE_PUBLICKEY env var**
In `createServerContainer()`, after the existing env vars, add:
```java
"CAMELEER_SERVER_LICENSE_PUBLICKEY=" + signingKeyService.getPublicKeyBase64()
```
`CAMELEER_SERVER_TENANT_ID` is already set to slug (line 218).
`CAMELEER_SERVER_LICENSE_TOKEN` is already set (line 225).
- [ ] **Step 3: Commit**
```
feat: push Ed25519 public key to tenant server containers
```
## Phase 3: Vendor API — Configurable Minting
### Task 8: Vendor license endpoints
**Files:**
- Modify: `src/main/java/net/siegeln/cameleer/saas/vendor/VendorTenantController.java`
- Modify: `src/main/java/net/siegeln/cameleer/saas/vendor/VendorTenantService.java`
- Create: `src/main/java/net/siegeln/cameleer/saas/license/dto/MintLicenseRequest.java`
- Create: `src/main/java/net/siegeln/cameleer/saas/license/dto/VerifyLicenseRequest.java`
- Create: `src/main/java/net/siegeln/cameleer/saas/license/dto/VerifyLicenseResponse.java`
- Create: `src/main/java/net/siegeln/cameleer/saas/license/dto/LicensePreset.java`
- Create: `src/main/java/net/siegeln/cameleer/saas/license/dto/LicenseBundleResponse.java`
- [ ] **Step 1: Create DTOs**
`MintLicenseRequest`: tier (optional String), limits (Map<String,Integer>), expiresAt (Instant), gracePeriodDays (Integer), label (String), pushToServer (boolean)
`VerifyLicenseRequest`: token (String)
`VerifyLicenseResponse`: valid (boolean), state (String), envelope fields (tenantId, label, limits, issuedAt, expiresAt, gracePeriodDays), error (String)
`LicensePreset`: tier (String), limits (Map<String,Integer>)
`LicenseBundleResponse`: extends LicenseResponse + adds publicKeyB64, tenantSlug (for the env-var bundle)
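A sketch of two of these records, using exactly the field types listed above (validation and JSON annotations omitted):

```java
import java.time.Instant;
import java.util.Map;

// Request body for POST /{id}/license; tier is optional and, when present,
// pre-fills limits from the preset before per-limit overrides are applied.
record MintLicenseRequest(
        String tier,
        Map<String, Integer> limits,
        Instant expiresAt,
        Integer gracePeriodDays,
        String label,
        boolean pushToServer) {}

// Result of POST /license/verify; envelope fields are null when valid == false,
// and error is null when valid == true.
record VerifyLicenseResponse(
        boolean valid,
        String state,
        String tenantId,
        String label,
        Map<String, Integer> limits,
        Instant issuedAt,
        Instant expiresAt,
        Integer gracePeriodDays,
        String error) {}
```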
- [ ] **Step 2: Update VendorTenantService**
Add `mintLicense(UUID tenantId, MintLicenseRequest request, UUID actorId)`:
- Resolves limits from request (or tier preset)
- Calls `licenseService.generateLicense()` with full params
- Optionally pushes to server
- Returns the license + public key + slug for the bundle
Add `verifyToken(String token)`:
- Uses LicenseValidator from server-core
- [ ] **Step 3: Update VendorTenantController**
- `POST /{id}/license` — takes MintLicenseRequest body, returns LicenseBundleResponse
- `GET /license-presets` — returns list of LicensePreset
- `POST /license/verify` — takes VerifyLicenseRequest, returns VerifyLicenseResponse
- `GET /signing-key/public` — returns `{"publicKey": "<base64>"}`
- [ ] **Step 4: Commit**
```
feat: add vendor license minting, presets, and verify endpoints
```
## Phase 4: Vendor UI — License Minting
### Task 9: Update frontend types + hooks
**Files:**
- Modify: `ui/src/types/api.ts`
- Modify: `ui/src/api/vendor-hooks.ts`
- [ ] **Step 1: Update types**
- `LicenseResponse` — remove `features`, add `label`, `gracePeriodDays`, `publicKeyB64`, `tenantSlug`
- Add `MintLicenseRequest`, `VerifyLicenseRequest`, `VerifyLicenseResponse`, `LicensePreset`, `LicenseBundleResponse`
- `DashboardData` — remove `features`
- `TenantLicenseData` — remove `features`, add `label`, `gracePeriodDays`
- [ ] **Step 2: Update hooks**
- `useRenewLicense()` → replace with `useMintLicense(tenantId)` that takes MintLicenseRequest body
- Add `useLicensePresets()`
- Add `useVerifyLicense()`
- Add `usePublicKey()`
- [ ] **Step 3: Commit**
```
feat(ui): update types and hooks for Ed25519 license minting
```
### Task 10: License minting form on TenantDetailPage
**Files:**
- Modify: `ui/src/pages/vendor/TenantDetailPage.tsx`
- [ ] **Step 1: Replace License card**
Replace the simple "Renew License" button with a minting form:
- Tier preset dropdown (STARTER/TEAM/BUSINESS/ENTERPRISE) that pre-fills limits
- All 13 limits editable in a grid
- Expiry date picker, grace period input, label input
- "Custom" indicator when limits diverge from preset
- Actions: "Mint & Push to Server" (default), "Mint & Copy Bundle", "Mint & Email Bundle"
- [ ] **Step 2: License bundle display**
After minting, show a dialog/card with the full env-var bundle:
```
CAMELEER_SERVER_TENANT_ID=<slug>
CAMELEER_SERVER_LICENSE_PUBLICKEY=<public_key>
CAMELEER_SERVER_LICENSE_TOKEN=<token>
```
With a "Copy Bundle" button.
- [ ] **Step 3: Commit**
```
feat(ui): add license minting form with tier presets and bundle distribution
```
### Task 11: License verify tool + public key viewer
**Files:**
- Create: `ui/src/pages/vendor/LicenseVerifyPage.tsx`
- Modify: `ui/src/router.tsx` (add route)
- Modify: `ui/src/Layout.tsx` (add nav item)
- [ ] **Step 1: Create LicenseVerifyPage**
- Textarea to paste a token
- "Verify" button
- Results: valid/invalid badge, decoded envelope (tenantId, label, limits, expiry, grace period)
- State badge (ACTIVE/GRACE/EXPIRED/INVALID)
- Public key display section with copy button
- [ ] **Step 2: Add route and navigation**
Route: `/vendor/license-verify`
Nav: "License Tools" section in vendor sidebar
- [ ] **Step 3: Commit**
```
feat(ui): add license verify tool and public key viewer
```
### Task 12: Update tier color utility
**Files:**
- Modify: `ui/src/utils/tier.ts`
- [ ] **Step 1: Update tierColor**
```typescript
export function tierColor(tier: string): 'primary' | 'success' | 'warning' | 'error' | 'running' | 'auto' {
switch (tier?.toUpperCase()) {
case 'ENTERPRISE': return 'success';
case 'BUSINESS': return 'primary';
case 'TEAM': return 'running';
case 'STARTER': return 'warning';
default: return 'auto';
}
}
```
- [ ] **Step 2: Commit**
```
fix(ui): update tier color mapping for renamed tiers
```
## Phase 5: Tenant UI Updates
### Task 13: Update TenantLicensePage
**Files:**
- Modify: `ui/src/pages/tenant/TenantLicensePage.tsx`
- [ ] **Step 1: Remove features card, update limits card**
- Drop the "Features" card entirely
- Update "Limits & Usage" card to show all 13 limit keys with proper labels
- Show grace period and label if present
- [ ] **Step 2: Commit**
```
feat(ui): update tenant license page for Ed25519 model
```
### Task 14: Update TenantDashboardPage
**Files:**
- Modify: `ui/src/pages/tenant/TenantDashboardPage.tsx`
- [ ] **Step 1: Remove features references**
Drop any `features` display. Keep limits display.
- [ ] **Step 2: Commit**
```
fix(ui): remove features from tenant dashboard
```
### Task 15: Update CreateTenantPage
**Files:**
- Modify: `ui/src/pages/vendor/CreateTenantPage.tsx`
- [ ] **Step 1: Update tier options**
Change tier dropdown options from LOW/MID/HIGH/BUSINESS to STARTER/TEAM/BUSINESS/ENTERPRISE.
- [ ] **Step 2: Commit**
```
fix(ui): update tier options in create tenant form
```
---
## Verification
After all tasks:
- [ ] `mvn test` passes
- [ ] `cd ui && npm run build` succeeds
- [ ] Docker compose boots (if available)
- [ ] Verify a tenant can be created with STARTER tier
- [ ] Verify license is minted with Ed25519 signature (token contains `.`)
- [ ] Verify CAMELEER_SERVER_LICENSE_PUBLICKEY appears in container env

# Security Review Fixes Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** Fix 5 security vulnerabilities found in the full-codebase security review — hardcoded JWT secret, missing authorization on tenant portal admin endpoints, dead code password/settings endpoints, and unprotected tenant lookups.
**Architecture:** All fixes are surgical edits to existing files. No new files, no schema changes. The JWT secret fix adds one field to `ProvisioningProperties` and reads it in the provisioner. The authorization fixes add `@PreAuthorize` annotations. Dead code removal deletes unsafe endpoints and their unused frontend hooks.
**Tech Stack:** Spring Boot, Spring Security (`@PreAuthorize`), Spring Boot `@ConfigurationProperties`
---
### Task 1: Fix hardcoded JWT secret in DockerTenantProvisioner
**Files:**
- Modify: `src/main/java/io/cameleer/saas/provisioning/ProvisioningProperties.java`
- Modify: `src/main/java/io/cameleer/saas/provisioning/DockerTenantProvisioner.java:223`
- [ ] **Step 1: Add `jwtSecret` field to ProvisioningProperties**
In `src/main/java/io/cameleer/saas/provisioning/ProvisioningProperties.java`, add `jwtSecret` as a new field after `corsOrigins`:
```java
@ConfigurationProperties(prefix = "cameleer.saas.provisioning")
public record ProvisioningProperties(
String serverImage,
String serverUiImage,
String runtimeBaseImage,
String networkName,
String traefikNetwork,
String publicHost,
String publicProtocol,
String datasourceUrl,
String datasourceUsername,
String datasourcePassword,
String clickhouseUrl,
String clickhouseUser,
String clickhousePassword,
String oidcIssuerUri,
String oidcJwkSetUri,
String corsOrigins,
String jwtSecret
) {}
```
This binds from env var `CAMELEER_SAAS_PROVISIONING_JWTSECRET`. The installer already generates `CAMELEER_SERVER_SECURITY_JWTSECRET` — the compose template needs to also set `CAMELEER_SAAS_PROVISIONING_JWTSECRET` to the same value (or the deployer maps it manually). A missing value will be `null`, caught by the validation below.
- [ ] **Step 2: Replace hardcoded secret with property value**
In `src/main/java/io/cameleer/saas/provisioning/DockerTenantProvisioner.java`, replace line 223:
```java
// OLD:
"CAMELEER_SERVER_SECURITY_JWTSECRET=cameleer-dev-jwt-secret-change-in-production",
// NEW:
"CAMELEER_SERVER_SECURITY_JWTSECRET=" + props.jwtSecret(),
```
- [ ] **Step 3: Add startup validation for jwtSecret**
In `DockerTenantProvisioner.java`, add a validation check at the end of the constructor (after line 36):
```java
if (props.jwtSecret() == null || props.jwtSecret().isBlank()) {
log.warn("CAMELEER_SAAS_PROVISIONING_JWTSECRET is not set — provisioned servers will fail to start");
}
```
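The secret itself should be generated once at deploy time. A JDK-only sketch — the 48-byte length is an assumption; match whatever the server's JWT HMAC implementation requires:

```java
import java.security.SecureRandom;
import java.util.Base64;

public class JwtSecretGen {

    /** Generate a base64-encoded 48-byte (384-bit) random secret. */
    static String generate() {
        byte[] bytes = new byte[48];
        new SecureRandom().nextBytes(bytes);
        // 48 bytes encode to exactly 64 base64 chars with no padding,
        // which is safe to place in an env var
        return Base64.getEncoder().encodeToString(bytes);
    }

    public static void main(String[] args) {
        System.out.println(generate());
    }
}
```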
- [ ] **Step 4: Commit**
```bash
git add src/main/java/io/cameleer/saas/provisioning/ProvisioningProperties.java src/main/java/io/cameleer/saas/provisioning/DockerTenantProvisioner.java
git commit -m "fix(security): replace hardcoded JWT secret with config property
Every provisioned tenant server was using the same hardcoded dev JWT
secret ('cameleer-dev-jwt-secret-change-in-production'), visible in
source code. An attacker could forge valid JWT tokens for any tenant
server. Now reads from CAMELEER_SAAS_PROVISIONING_JWTSECRET."
```
---
### Task 2: Add authorization to TenantPortalController admin endpoints
**Files:**
- Modify: `src/main/java/io/cameleer/saas/portal/TenantPortalController.java`
- [ ] **Step 1: Add `@PreAuthorize` to team management endpoints**
Add `@PreAuthorize("hasAuthority('SCOPE_tenant:manage')")` before each of these method annotations:
Line 76 — `inviteTeamMember`:
```java
@PreAuthorize("hasAuthority('SCOPE_tenant:manage')")
@PostMapping("/team/invite")
```
Line 82 — `removeTeamMember`:
```java
@PreAuthorize("hasAuthority('SCOPE_tenant:manage')")
@DeleteMapping("/team/{userId}")
```
Line 88 — `changeTeamMemberRole`:
```java
@PreAuthorize("hasAuthority('SCOPE_tenant:manage')")
@PatchMapping("/team/{userId}/role")
```
Line 114 — `resetTeamMemberPassword`:
```java
@PreAuthorize("hasAuthority('SCOPE_tenant:manage')")
@PostMapping("/team/{userId}/password")
```
Line 176 — `resetTeamMemberMfa`:
```java
@PreAuthorize("hasAuthority('SCOPE_tenant:manage')")
@DeleteMapping("/users/{userId}/mfa")
```
- [ ] **Step 2: Add `@PreAuthorize` to server management endpoints**
Line 95 — `resetServerAdminPassword`:
```java
@PreAuthorize("hasAuthority('SCOPE_tenant:manage')")
@PostMapping("/server/admin-password")
```
Line 125 — `restartServer`:
```java
@PreAuthorize("hasAuthority('SCOPE_tenant:manage')")
@PostMapping("/server/restart")
```
Line 131 — `upgradeServer`:
```java
@PreAuthorize("hasAuthority('SCOPE_tenant:manage')")
@PostMapping("/server/upgrade")
```
- [ ] **Step 3: Add `@PreAuthorize` to CA certificate management endpoints**
Line 289 — `stageCaCert`:
```java
@PreAuthorize("hasAuthority('SCOPE_tenant:manage')")
@PostMapping("/ca")
```
Line 304 — `activateCaCert`:
```java
@PreAuthorize("hasAuthority('SCOPE_tenant:manage')")
@PostMapping("/ca/{id}/activate")
```
Line 315 — `deleteCaCert`:
```java
@PreAuthorize("hasAuthority('SCOPE_tenant:manage')")
@DeleteMapping("/ca/{id}")
```
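These annotations only take effect when Spring method security is enabled. The existing `@PreAuthorize` on `PATCH /auth-settings` suggests it already is, but it is worth confirming the project has a config along these lines (class name hypothetical):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.method.configuration.EnableMethodSecurity;

// Without @EnableMethodSecurity, @PreAuthorize annotations are silently ignored.
@Configuration
@EnableMethodSecurity
public class MethodSecurityConfig {
}
```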
- [ ] **Step 4: Commit**
```bash
git add src/main/java/io/cameleer/saas/portal/TenantPortalController.java
git commit -m "fix(security): add authorization to tenant portal admin endpoints
All admin-level endpoints (team invite/remove/role, password resets,
server restart/upgrade, CA cert management) were accessible to any
org member including viewers. Now require SCOPE_tenant:manage,
matching the existing pattern on PATCH /auth-settings."
```
---
### Task 3: Remove dead code endpoints (settings duplicate + password without verification)
**Files:**
- Modify: `src/main/java/io/cameleer/saas/portal/TenantPortalController.java`
- Modify: `ui/src/api/tenant-hooks.ts`
- [ ] **Step 1: Remove `PATCH /settings` endpoint from controller**
Delete lines 238-242 from `TenantPortalController.java`:
```java
// DELETE THIS ENTIRE BLOCK:
@PatchMapping("/settings")
public ResponseEntity<Void> updateSettings(@RequestBody Map<String, Object> updates) {
portalService.updateTenantSettings(updates);
return ResponseEntity.ok().build();
}
```
This is a duplicate of `PATCH /auth-settings` (lines 231-235), which has proper `@PreAuthorize`. The frontend uses `useUpdateTenantAuthSettings`, which calls `/auth-settings`.
- [ ] **Step 2: Remove `POST /password` endpoint from controller**
Delete lines 107-112 from `TenantPortalController.java`:
```java
// DELETE THIS ENTIRE BLOCK:
@PostMapping("/password")
public ResponseEntity<Void> changeOwnPassword(@AuthenticationPrincipal Jwt jwt,
@RequestBody PasswordChangeRequest body) {
portalService.changePassword(jwt.getSubject(), body.password());
return ResponseEntity.noContent().build();
}
```
The frontend uses `POST /api/account/password` (via `AccountSettingsPage` + `AccountController`) which correctly requires current password verification.
- [ ] **Step 3: Remove unused hooks from tenant-hooks.ts**
Remove the `useChangeOwnPassword` hook (lines 124-128):
```typescript
// DELETE THIS ENTIRE BLOCK:
export function useChangeOwnPassword() {
return useMutation<void, Error, string>({
mutationFn: (password) => api.post('/tenant/password', { password }),
});
}
```
Remove the `useUpdateTenantSettings` mutation hook (lines 152-158):
```typescript
// DELETE THIS ENTIRE BLOCK:
export function useUpdateTenantSettings() {
const qc = useQueryClient();
return useMutation<void, Error, Record<string, unknown>>({
mutationFn: (updates) => api.patch('/tenant/settings', updates),
onSuccess: () => qc.invalidateQueries({ queryKey: ['tenant', 'settings'] }),
});
}
```
Note: Keep `useTenantSettings` (the GET query hook at lines 145-150) — the `GET /settings` endpoint returns tenant info (name, slug, tier) and is a legitimate read-only endpoint.
- [ ] **Step 4: Commit**
```bash
git add src/main/java/io/cameleer/saas/portal/TenantPortalController.java ui/src/api/tenant-hooks.ts
git commit -m "fix(security): remove unsafe dead code endpoints
Remove PATCH /api/tenant/settings (duplicate of /auth-settings without
authorization — any org member could disable MFA) and POST
/api/tenant/password (allowed password change without current password
verification). Both were dead code — frontend uses the secure
alternatives. Also remove corresponding unused hooks."
```
---
### Task 4: Add authorization to TenantController lookup endpoints
**Files:**
- Modify: `src/main/java/io/cameleer/saas/tenant/TenantController.java`
- [ ] **Step 1: Add `@PreAuthorize` to getById and getBySlug**
Line 58 — `getById`:
```java
@GetMapping("/{id}")
@PreAuthorize("hasAuthority('SCOPE_platform:admin')")
public ResponseEntity<TenantResponse> getById(@PathVariable UUID id) {
```
Line 65 — `getBySlug`:
```java
@GetMapping("/by-slug/{slug}")
@PreAuthorize("hasAuthority('SCOPE_platform:admin')")
public ResponseEntity<TenantResponse> getBySlug(@PathVariable String slug) {
```
This matches the existing `@PreAuthorize` on `listAll()` and `create()` in the same controller. These are vendor-only lookup endpoints — no tenant-scoped user should access arbitrary tenant records.
- [ ] **Step 2: Commit**
```bash
git add src/main/java/io/cameleer/saas/tenant/TenantController.java
git commit -m "fix(security): restrict tenant lookup endpoints to platform admins
GET /api/tenants/{id} and GET /api/tenants/by-slug/{slug} were
accessible to any authenticated user, exposing serverEndpoint,
adminEmail, and provisionError. Now require SCOPE_platform:admin,
matching listAll() and create() in the same controller."
```
---
### Task 5: Verify and build
- [ ] **Step 1: Run the build to verify all changes compile**
```bash
cd /c/Users/Hendrik/Documents/projects/cameleer-saas && ./mvnw compile -q
```
Expected: BUILD SUCCESS with no compilation errors.
- [ ] **Step 2: Run frontend type check**
```bash
cd ui && npm run typecheck
```
Expected: No type errors from removed hooks (they were unused).


@@ -3,7 +3,7 @@
**Date:** 2026-03-29
**Status:** Draft — Awaiting Review
**Author:** Boardroom simulation (Strategist, Skeptic, Architect, Growth Hacker)
-**Gitea Issues:** cameleer/cameleer3 #57-#72 (label: MOAT)
+**Gitea Issues:** cameleer/cameleer #57-#72 (label: MOAT)
## Executive Summary
@@ -32,14 +32,14 @@ Week 8-14: Live Route Debugger (agent + server + UI)
- #59 — Cross-Service Trace Correlation + Topology Map
**Debugger sub-issues:**
-- #60 — Protocol: Debug session command types (`cameleer3-common`)
+- #60 — Protocol: Debug session command types (`cameleer-common`)
- #61 — Agent: DebugSessionManager + breakpoint InterceptStrategy integration
- #62 — Agent: ExchangeStateSerializer + synthetic direct route wrapper
- #63 — Server: DebugSessionService + WebSocket + REST API
- #70 — UI: Debug session frontend components
**Lineage sub-issues:**
-- #64 — Protocol: Lineage command types (`cameleer3-common`)
+- #64 — Protocol: Lineage command types (`cameleer-common`)
- #65 — Agent: LineageManager + capture mode integration
- #66 — Server: LineageService + DiffEngine + REST API
- #71 — UI: Lineage timeline + diff viewer components
@@ -69,14 +69,14 @@ Browser (SaaS UI)
WebSocket <--------------------------------------+
| |
v |
-cameleer3-server |
+cameleer-server |
| POST /api/v1/debug/sessions |
| POST /api/v1/debug/sessions/{id}/step |
| POST /api/v1/debug/sessions/{id}/resume |
| DELETE /api/v1/debug/sessions/{id} |
| |
v |
-SSE Command Channel --> cameleer3 agent |
+SSE Command Channel --> cameleer agent |
| | |
| "start-debug" | |
| command v |
@@ -101,7 +101,7 @@ SSE Command Channel --> cameleer3 agent |
| Continue to next processor
```
-### 1.3 Protocol Additions (cameleer3-common)
+### 1.3 Protocol Additions (cameleer-common)
#### New SSE Commands
@@ -160,11 +160,11 @@ SSE Command Channel --> cameleer3 agent |
}
```
-### 1.4 Agent Implementation (cameleer3-agent)
+### 1.4 Agent Implementation (cameleer-agent)
#### DebugSessionManager
-- Location: `com.cameleer3.agent.debug.DebugSessionManager`
+- Location: `com.cameleer.agent.debug.DebugSessionManager`
- Stores active sessions: `ConcurrentHashMap<sessionId, DebugSession>`
- Enforces max concurrent sessions (default 3, configurable via `cameleer.debug.maxSessions`)
- Allocates **dedicated Thread** per session (NOT from Camel thread pool)
@@ -213,7 +213,7 @@ For non-direct routes (timer, jms, http, file):
3. Debug exchange enters via `ProducerTemplate.send()`
4. Remove temporary route on session completion
-### 1.5 Server Implementation (cameleer3-server)
+### 1.5 Server Implementation (cameleer-server)
#### REST Endpoints
@@ -308,7 +308,7 @@ Capture the full transformation history of a message flowing through a route. At
### 2.2 Architecture
```
-cameleer3 agent
+cameleer agent
|
| On lineage-enabled exchange:
| Before processor: capture INPUT
@@ -319,7 +319,7 @@ cameleer3 agent
POST /api/v1/data/executions (processors carry full snapshots)
|
v
-cameleer3-server
+cameleer-server
|
| LineageService:
| > Flatten processor tree to ordered list
@@ -334,7 +334,7 @@ GET /api/v1/executions/{id}/lineage
Browser: LineageTimeline + DiffViewer
```
-### 2.3 Protocol Additions (cameleer3-common)
+### 2.3 Protocol Additions (cameleer-common)
#### New SSE Commands
@@ -370,11 +370,11 @@ Browser: LineageTimeline + DiffViewer
| `EXPRESSION` | Any exchange matching a Simple/JsonPath predicate |
| `NEXT_N` | Next N exchanges on the route (countdown) |
-### 2.4 Agent Implementation (cameleer3-agent)
+### 2.4 Agent Implementation (cameleer-agent)
#### LineageManager
-- Location: `com.cameleer3.agent.lineage.LineageManager`
+- Location: `com.cameleer.agent.lineage.LineageManager`
- Stores active configs: `ConcurrentHashMap<lineageId, LineageConfig>`
- Tracks capture count per lineageId: auto-disables at `maxCaptures`
- Duration timeout via `ScheduledExecutorService`: auto-disables after expiry
@@ -412,7 +412,7 @@ cameleer.lineage.maxBodySize=65536 # 64KB for lineage captures (vs 4KB normal
cameleer.lineage.enabled=true # master switch
```
-### 2.5 Server Implementation (cameleer3-server)
+### 2.5 Server Implementation (cameleer-server)
#### LineageService
@@ -548,7 +548,7 @@ New (added):
| Direct/SEDA | URI prefix `direct:`, `seda:`, `vm:` | Exchange property (in-process) |
| File/FTP | URI prefix `file:`, `ftp:` | Not propagated (async) |
-### 3.3 Agent Implementation (cameleer3-agent)
+### 3.3 Agent Implementation (cameleer-agent)
#### Outgoing Propagation (InterceptStrategy)
@@ -597,7 +597,7 @@ execution.setHopIndex(...); // depth in distributed trace
- Parse failure: log warning, continue without context (no exchange failure)
- Only inject on outgoing processors, never on FROM consumers
-### 3.4 Server Implementation: Trace Assembly (cameleer3-server)
+### 3.4 Server Implementation: Trace Assembly (cameleer-server)
#### CorrelationService
@@ -665,7 +665,7 @@ CREATE INDEX idx_executions_parent_span
- **Fan-out:** parallel multicast creates multiple children from same processor
- **Circular calls:** detected via hopIndex (max depth 20)
-### 3.5 Server Implementation: Topology Graph (cameleer3-server)
+### 3.5 Server Implementation: Topology Graph (cameleer-server)
#### DependencyGraphService
@@ -799,11 +799,11 @@ Reserve `sourceTenantHash` in TraceContext for future use:
| Work | Repo | Issue |
|------|------|-------|
-| Service topology materialized view | cameleer3-server | #69 |
-| Topology REST API | cameleer3-server | #69 |
-| ServiceTopologyGraph.tsx | cameleer3-server + saas | #72 |
-| WebSocket infrastructure (for debugger) | cameleer3-server | #63 |
-| TraceContext DTO in cameleer3-common | cameleer3 | #67 |
+| Service topology materialized view | cameleer-server | #69 |
+| Topology REST API | cameleer-server | #69 |
+| ServiceTopologyGraph.tsx | cameleer-server + saas | #72 |
+| WebSocket infrastructure (for debugger) | cameleer-server | #63 |
+| TraceContext DTO in cameleer-common | cameleer | #67 |
**Ship:** Topology graph visible from existing data. Zero agent changes. Immediate visual payoff.
@@ -811,10 +811,10 @@ Reserve `sourceTenantHash` in TraceContext for future use:
| Work | Repo | Issue |
|------|------|-------|
-| Lineage protocol DTOs | cameleer3-common | #64 |
-| LineageManager + capture integration | cameleer3-agent | #65 |
-| LineageService + DiffEngine | cameleer3-server | #66 |
-| Lineage UI components | cameleer3-server + saas | #71 |
+| Lineage protocol DTOs | cameleer-common | #64 |
+| LineageManager + capture integration | cameleer-agent | #65 |
+| LineageService + DiffEngine | cameleer-server | #66 |
+| Lineage UI components | cameleer-server + saas | #71 |
**Ship:** Payload flow lineage independently usable.
@@ -822,10 +822,10 @@ Reserve `sourceTenantHash` in TraceContext for future use:
| Work | Repo | Issue |
|------|------|-------|
-| Trace context header propagation | cameleer3-agent | #67 |
-| Executions table migration (new columns) | cameleer3-server | #68 |
-| CorrelationService + trace assembly | cameleer3-server | #68 |
-| DistributedTraceView + TraceSearch UI | cameleer3-server + saas | #72 |
+| Trace context header propagation | cameleer-agent | #67 |
+| Executions table migration (new columns) | cameleer-server | #68 |
+| CorrelationService + trace assembly | cameleer-server | #68 |
+| DistributedTraceView + TraceSearch UI | cameleer-server + saas | #72 |
**Ship:** Distributed traces + topology — full correlation story.
@@ -833,11 +833,11 @@ Reserve `sourceTenantHash` in TraceContext for future use:
| Work | Repo | Issue |
|------|------|-------|
-| Debug protocol DTOs | cameleer3-common | #60 |
-| DebugSessionManager + InterceptStrategy | cameleer3-agent | #61 |
-| ExchangeStateSerializer + synthetic wrapper | cameleer3-agent | #62 |
-| DebugSessionService + WS + REST | cameleer3-server | #63 |
-| Debug UI components | cameleer3-server + saas | #70 |
+| Debug protocol DTOs | cameleer-common | #60 |
+| DebugSessionManager + InterceptStrategy | cameleer-agent | #61 |
+| ExchangeStateSerializer + synthetic wrapper | cameleer-agent | #62 |
+| DebugSessionService + WS + REST | cameleer-server | #63 |
+| Debug UI components | cameleer-server + saas | #70 |
**Ship:** Full browser-based route debugger with integration to lineage and correlation.


@@ -10,12 +10,12 @@
## 1. Product Definition
-**Cameleer SaaS** is a Camel application runtime platform with built-in observability. Customers deploy Apache Camel applications and get zero-configuration tracing, topology mapping, payload lineage, distributed correlation, live debugging, and exchange replay — powered by the cameleer3 agent (auto-injected) and cameleer3-server (managed per tenant).
+**Cameleer SaaS** is a Camel application runtime platform with built-in observability. Customers deploy Apache Camel applications and get zero-configuration tracing, topology mapping, payload lineage, distributed correlation, live debugging, and exchange replay — powered by the cameleer agent (auto-injected) and cameleer-server (managed per tenant).
### Three Pillars
1. **Runtime** — Deploy and run Camel applications with automatic agent injection
-2. **Observability** — Per-tenant cameleer3-server (traces, topology, lineage, correlation, debugger, replay)
+2. **Observability** — Per-tenant cameleer-server (traces, topology, lineage, correlation, debugger, replay)
3. **Management** — Auth, billing, teams, provisioning, secrets, environments
### Two Deployment Modes
@@ -27,8 +27,8 @@
| Component | Role | Changes Required |
|-----------|------|------------------|
-| cameleer3 (agent) | Zero-code Camel instrumentation, auto-injected into customer JARs | MOAT features (lineage, correlation, debugger, replay) |
-| cameleer3-server | Per-tenant observability backend | Managed mode (trust SaaS JWT), license module, MOAT features |
+| cameleer (agent) | Zero-code Camel instrumentation, auto-injected into customer JARs | MOAT features (lineage, correlation, debugger, replay) |
+| cameleer-server | Per-tenant observability backend | Managed mode (trust SaaS JWT), license module, MOAT features |
| cameleer-saas (this repo) | SaaS management platform — control plane | New: everything in this document |
| design-system | Shared React component library | Used by both SaaS shell and server UI |
@@ -81,7 +81,7 @@ Single Spring Boot application with well-bounded internal modules. K8s ingress h
```
[Browser] → [Ingress (Traefik/Envoy)] → [SaaS Platform (modular Spring Boot)]
↓ (tenant routes) ↓ (provisioning)
-[Tenant cameleer3-server] [Flux CD → K8s]
+[Tenant cameleer-server] [Flux CD → K8s]
```
### Component Map
@@ -114,7 +114,7 @@ Single Spring Boot application with well-bounded internal modules. K8s ingress h
│ (PostgreSQL) │ │ API │ │ │
│ - tenants │ └────────┘ │ ┌─────────────────────┐ │
│ - users │ │ │ tenant-a namespace │ │
-│ - teams │ ┌─────┐ │ │ ├─ cameleer3-server │ │
+│ - teams │ ┌─────┐ │ │ ├─ cameleer-server │ │
│ - audit log │ │Flux │ │ │ ├─ camel-app-1 │ │
│ - licenses │ │ CD │ │ │ ├─ camel-app-2 │ │
└──────────────┘ └──┬──┘ │ │ └─ NetworkPolicies │ │
@@ -144,7 +144,7 @@ Same management platform routes to dedicated cluster(s) per customer. Dedicated
| Management Platform backend | Spring Boot 3, Java 21 |
| Management Platform frontend | React, @cameleer/design-system |
| Platform database | PostgreSQL |
-| Tenant observability | cameleer3-server (Spring Boot), PostgreSQL, OpenSearch |
+| Tenant observability | cameleer-server (Spring Boot), PostgreSQL, OpenSearch |
| GitOps | Flux CD |
| K8s distribution | Talos (production), k3s (dev) |
| Ingress | Traefik or Envoy |
@@ -192,7 +192,7 @@ Stores all SaaS control plane data — completely separate from tenant observabi
### Tenant Data (Shared PostgreSQL)
-Each tenant's cameleer3-server uses its own PostgreSQL schema on the shared instance (dedicated instance for high/business). This is the existing cameleer3-server data model — unchanged:
+Each tenant's cameleer-server uses its own PostgreSQL schema on the shared instance (dedicated instance for high/business). This is the existing cameleer-server data model — unchanged:
- Route executions, processor traces, metrics
- Route graph topology
@@ -215,12 +215,12 @@ Completely separate: Prometheus TSDB for metrics, Loki for logs.
### Architecture
-The SaaS management platform is the single identity plane. It owns authentication and authorization. Per-tenant cameleer3-server instances trust SaaS-issued tokens.
+The SaaS management platform is the single identity plane. It owns authentication and authorization. Per-tenant cameleer-server instances trust SaaS-issued tokens.
- Spring Security OAuth2 for OIDC federation with customer IdPs
-- Ed25519 JWT signing (consistent with existing cameleer3-server pattern)
+- Ed25519 JWT signing (consistent with existing cameleer-server pattern)
- Tokens carry: tenant ID, user ID, roles, feature entitlements
-- cameleer3-server validates SaaS-issued JWTs in managed mode
+- cameleer-server validates SaaS-issued JWTs in managed mode
- Standalone mode retains its own auth for air-gapped deployments
### RBAC Model
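The Ed25519 JWT signing mentioned above maps onto the JDK's built-in EdDSA support (Java 15+, JEP 339). A minimal sketch of the sign/verify round trip on a token payload (class name and payload are hypothetical; real tokens would be assembled and parsed by a JWT library on top of this primitive):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class Ed25519SignDemo {
    public static void main(String[] args) throws Exception {
        // Ed25519 key pair via the JDK's built-in EdDSA provider (Java 15+)
        KeyPair kp = KeyPairGenerator.getInstance("Ed25519").generateKeyPair();

        byte[] payload = "{\"tenant\":\"acme\",\"sub\":\"user-1\"}"
                .getBytes(StandardCharsets.UTF_8);

        // Sign the payload with the private key
        Signature signer = Signature.getInstance("Ed25519");
        signer.initSign(kp.getPrivate());
        signer.update(payload);
        byte[] sig = signer.sign();

        // Verify with the public key: the intact payload passes
        Signature verifier = Signature.getInstance("Ed25519");
        verifier.initVerify(kp.getPublic());
        verifier.update(payload);
        System.out.println("valid=" + verifier.verify(sig));

        // A tampered payload fails verification
        verifier.initVerify(kp.getPublic());
        verifier.update("{\"tenant\":\"evil\"}".getBytes(StandardCharsets.UTF_8));
        System.out.println("tampered=" + verifier.verify(sig));
    }
}
```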
@@ -252,7 +252,7 @@ Customer signs up + payment
→ Create tenant record + Stripe customer/subscription
→ Generate signed license token (Ed25519)
→ Create Flux HelmRelease CR
-→ Flux reconciles: namespace, ResourceQuota, NetworkPolicies, cameleer3-server
+→ Flux reconciles: namespace, ResourceQuota, NetworkPolicies, cameleer-server
→ Provision PostgreSQL schema + per-tenant credentials
→ Provision OpenSearch index template + per-tenant credentials
→ Readiness check: server healthy, DB migrated, auth working
@@ -297,7 +297,7 @@ Full Cluster API automation deferred to future release.
### JAR Upload → Immutable Image
1. **Validation** — File type check, size limit per tier, SHA-256 checksum, Trivy security scan, secret detection (reject JARs with embedded credentials)
-2. **Image Build** — Templated Dockerfile: distroless JRE base + customer JAR + cameleer3-agent.jar + `-javaagent` flag + agent pre-configured for tenant server. Image tagged: `registry/{tenant}/{app}:v{N}-{sha256short}`. Signed with cosign. SBOM attached.
+2. **Image Build** — Templated Dockerfile: distroless JRE base + customer JAR + cameleer-agent.jar + `-javaagent` flag + agent pre-configured for tenant server. Image tagged: `registry/{tenant}/{app}:v{N}-{sha256short}`. Signed with cosign. SBOM attached.
3. **Registry Push** — Per-tenant repository in platform container registry
4. **Deploy** — K8s Deployment in tenant namespace with resource limits, secrets mounted, config injected, NetworkPolicy applied, liveness/readiness probes
@@ -350,7 +350,7 @@ Central UI for managing each deployed application:
### Architecture
-Each tenant gets a dedicated cameleer3-server instance:
+Each tenant gets a dedicated cameleer-server instance:
- Shared tiers: deployed in tenant's namespace
- Dedicated tiers: deployed in tenant's cluster
@@ -359,7 +359,7 @@ The SaaS API gateway routes `/t/{tenant}/api/*` to the correct server instance.
### Agent Connection
- Agent bootstrap tokens generated by the SaaS platform
-- Agents connect directly to their tenant's cameleer3-server instance
+- Agents connect directly to their tenant's cameleer-server instance
- Agent auto-injected into customer Camel apps deployed on the platform
- External agents (customer-hosted Camel apps) can also connect using bootstrap tokens
@@ -448,7 +448,7 @@ K8s NetworkPolicies per tenant namespace:
- **Allow:** tenant namespace → shared PostgreSQL/OpenSearch (authenticated per-tenant credentials)
- **Allow:** tenant namespace → public internet (Camel app external connectivity)
- **Allow:** SaaS platform namespace → all tenant namespaces (management access)
-- **Allow:** tenant Camel apps → tenant cameleer3-server (intra-namespace)
+- **Allow:** tenant Camel apps → tenant cameleer-server (intra-namespace)
### Zero-Trust Tenant Boundary
@@ -546,7 +546,7 @@ Completely separate from tenant observability data.
- TLS certificate expiry < 14 days
- Metering pipeline stale > 1 hour
- Disk usage > 80% on any PV
-- Tenant cameleer3-server unhealthy > 5 minutes
+- Tenant cameleer-server unhealthy > 5 minutes
- OOMKill on any tenant workload
### Dashboards
@@ -577,7 +577,7 @@ K8s Metrics → Metrics Collector → Usage Aggregator (hourly) → Stripe Usage
|-----------|------|--------|
| CPU | core·hours | K8s metrics (namespace aggregate) |
| RAM | GB·hours | K8s metrics (namespace aggregate) |
-| Data volume | GB ingested | cameleer3-server reports |
+| Data volume | GB ingested | cameleer-server reports |
- Aggregated per tenant, per hour, stored in platform DB before Stripe submission
- Idempotent aggregation (safe to re-run)
@@ -613,7 +613,7 @@ K8s Metrics → Metrics Collector → Usage Aggregator (hourly) → Stripe Usage
| **App → Status** | Pod health, resource usage, agent connection, events |
| **App → Logs** | Live stdout/stderr stream |
| **App → Versions** | Image history, promotion log, rollback |
-| **Observe** | Embedded cameleer3-server UI (topology, traces, lineage, correlation, debugger, replay) |
+| **Observe** | Embedded cameleer-server UI (topology, traces, lineage, correlation, debugger, replay) |
| **Team** | Users, roles, invites |
| **Settings** | Tenant config, SSO/OIDC, vault connections |
| **Billing** | Usage, invoices, plan management |
@@ -621,7 +621,7 @@ K8s Metrics → Metrics Collector → Usage Aggregator (hourly) → Stripe Usage
### Design
- SaaS shell built with `@cameleer/design-system`
-- cameleer3-server React UI embedded (same design system, visual consistency)
+- cameleer-server React UI embedded (same design system, visual consistency)
- Responsive but desktop-primary (observability tooling is a desktop workflow)
---
@@ -681,4 +681,4 @@ K8s Metrics → Metrics Collector → Usage Aggregator (hourly) → Stripe Usage
| 12 | Platform Operations & Self-Monitoring | epic, ops |
| 13 | MOAT: Exchange Replay | epic, observability |
-MOAT features (Debugger, Lineage, Correlation) tracked in cameleer/cameleer3 #57-#72.
+MOAT features (Debugger, Lineage, Correlation) tracked in cameleer/cameleer #57-#72.
