Commit Graph

1743 Commits

Author SHA1 Message Date
hsiegeln
3334f0a1d2 chore: hand cameleer-runtime-loader image build to cameleer-saas
All checks were successful
CI / cleanup-branch (push) Has been skipped
CI / build (push) Successful in 3m0s
CI / docker (push) Successful in 3m26s
CI / deploy-feature (push) Has been skipped
CI / deploy (push) Successful in 45s
The loader is infra glue (per-replica init container that fetches the
tenant JAR from a signed URL) — same shape as runtime-base, postgres,
clickhouse, traefik, logto images already living in cameleer-saas. Move
the source + CI build there so all sidecar/infra image builds are in
one place; cameleer-server's CI is back to building only what it owns
(server, server-ui).

Coordination: cameleer-saas@ac8d628 added the build step and copied the
source verbatim. Published tag path is unchanged
(gitea.siegeln.net/cameleer/cameleer-runtime-loader:latest), so running
tenant servers continue pulling the same image without disruption.

This commit:
- Deletes cameleer-runtime-loader/ (Dockerfile, entrypoint.sh, README).
- Removes the conditional "Build and push runtime-loader" step and its
  upstream "Detect runtime-loader changes" step from .gitea/workflows/ci.yml.
  Drops the fetch-depth: 0 + outputs.loader_changed plumbing that only
  existed for the change-detection path.
- Drops cameleer-runtime-loader from the in-job and cleanup-branch image
  cleanup loops — saas owns the registry lifecycle now.
- Rewrites LoaderHardeningIT to pull the published :latest from the
  registry (via Testcontainers GenericContainer) instead of building
  from a local Dockerfile. The IT now functions as a cross-repo contract
  test: cameleer-server's hardening expectations vs. the saas-published
  artifact. Local devs need `docker login gitea.siegeln.net`; CI runners
  are pre-authenticated.
- Updates .claude/rules/docker-orchestration.md to point at the new
  source-of-truth location and reframe LoaderHardeningIT as the
  cross-repo contract test.

The image's runtime contract (ARTIFACT_URL, ARTIFACT_EXPECTED_SIZE,
/app/jars/app.jar mount, exit code semantics) is unchanged. Future
contract changes need coordinated commits across both repos.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 13:02:54 +02:00
hsiegeln
2871bdcc92 feat(ui): show spinner on the Stop confirm button while the stop call is in flight
All checks were successful
CI / cleanup-branch (push) Has been skipped
CI / build (push) Successful in 3m24s
CI / docker (push) Successful in 2m35s
CI / deploy-feature (push) Has been skipped
CI / deploy (push) Successful in 1m4s
The AlertDialog from the design system already exposes a `loading` prop that
swaps the confirm button into a spinner state. Wire it to
stopDeployment.isPending so users get feedback during the (sometimes multi-
second) container-stop call instead of staring at a static button.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 12:52:20 +02:00
hsiegeln
c03b5b80a1 feat(runtime): redirect agent diagram output to tenant tmpfs
All checks were successful
CI / cleanup-branch (push) Has been skipped
CI / build (push) Successful in 3m13s
CI / docker (push) Successful in 3m26s
CI / deploy (push) Successful in 1m7s
CI / deploy-feature (push) Has been skipped
The cameleer agent extracts route diagrams at startup and writes them
to ./cameleer-diagrams (default `cameleer.agent.diagram.outputdir`,
documented in AGENT-REFERENCE.md §3). With CWD /app and the orchestrator's
readonly rootfs, the directory create fails:

    RouteModelExtractor - Cameleer: Failed to create diagram output directory: ./cameleer-diagrams
    java.nio.file.FileSystemException: /app/./cameleer-diagrams: Read-only file system

The agent has no "send-to-server-but-skip-disk" knob today
(`diagram.enabled=false` would also disable the HTTP export), so the
documented mechanism is the outputdir property. Set
`CAMELEER_AGENT_DIAGRAM_OUTPUTDIR=/tmp/cameleer-diagrams` on tenant
containers — /tmp is the per-container tmpfs (writable inside the
hardening contract, ephemeral, vanishes with the container). The
diagram feature continues to work via the HTTP POST to /api/v1/data/diagrams;
the on-disk copy lands in ephemeral storage that doesn't persist.
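
For illustration, a minimal sketch of where such an override could be applied
when the orchestrator assembles the tenant container (docker-java API; the
surrounding variable names are assumptions, not the committed code):

    // Append the diagram-outputdir override to whatever env the tenant
    // container already gets; /tmp is the writable per-container tmpfs.
    List<String> env = new ArrayList<>(tenantEnv);
    env.add("CAMELEER_AGENT_DIAGRAM_OUTPUTDIR=/tmp/cameleer-diagrams");
    CreateContainerResponse main = dockerClient.createContainerCmd(tenantImage)
            .withEnv(env)
            .withHostConfig(hardenedHostConfig)
            .exec();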

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 09:38:26 +02:00
hsiegeln
7e7bd06bca docs(handoff): runtime-base image hardening — Chainguard JRE switch for SaaS team
One-line FROM swap from eclipse-temurin:21-jre-alpine to
cgr.dev/chainguard/jre:openjdk-21 plus deletion of the dead ENTRYPOINT.
Wins: glibc (fixes hidden Netty/Snappy/JNI compatibility risk on musl),
daily rebuilds, signed images + SBOM, near-zero baseline CVEs by design.
No cameleer-server orchestrator change required; runtime contract
unchanged. Distroless and jlink/scratch covered as optional/not-recommended
follow-ups with rationale.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 09:34:09 +02:00
hsiegeln
2e2d069530 feat(runtime): capture loader logs in failure exceptions; add LoaderHardeningIT regression guard
All checks were successful
CI / cleanup-branch (push) Has been skipped
CI / build (push) Successful in 3m42s
CI / docker (push) Successful in 2m36s
CI / deploy-feature (push) Has been skipped
CI / deploy (push) Successful in 54s
SonarQube / sonarqube (push) Successful in 7m24s
Two diagnostics-and-confidence follow-ups to the loader-init-container pattern.

1) DockerRuntimeOrchestrator now captures the loader's last 50 lines of
   stdout/stderr (capped at 4096 chars, 5s timeout) before the finally-remove
   and appends them to the thrown RuntimeException as
   `. loader output: <text>`. Best-effort: log-capture failures are swallowed
   and never mask the original exit. Closes the visibility gap that turned a
   simple "wget: Permission denied" into the opaque "Loader exited 1".

2) New LoaderHardeningIT spins up a Testcontainers nginx serving a 1KB
   fixture, builds the loader image fresh from cameleer-runtime-loader/,
   and runs it under the exact baseHardenedHostConfig() shape (cap_drop ALL,
   readonly rootfs, /tmp tmpfs, no-new-privileges, apparmor=docker-default,
   pids=512) bound to a fresh named volume RW at /app/jars. Asserts exit 0.
   This would have caught the volume-permission regression in CI.
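
A minimal sketch of that capture, for illustration (docker-java API; the
method name and call site are assumptions, not the committed code):

    // Best-effort tail of the loader's output; any failure here returns ""
    // so it can never mask the original "loader exited non-zero" exception.
    static String tailLoaderOutput(DockerClient docker, String containerId) {
        StringBuilder out = new StringBuilder();
        try {
            docker.logContainerCmd(containerId)
                    .withStdOut(true)
                    .withStdErr(true)
                    .withTail(50)                              // last 50 lines
                    .exec(new ResultCallback.Adapter<Frame>() {
                        @Override
                        public void onNext(Frame frame) {
                            out.append(new String(frame.getPayload(), StandardCharsets.UTF_8));
                        }
                    })
                    .awaitCompletion(5, TimeUnit.SECONDS);     // bounded wait
        } catch (Exception e) {
            return "";
        }
        return out.length() > 4096 ? out.substring(0, 4096) : out.toString();
    }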

GenericContainer + OneShotStartupCheckStrategy is used instead of raw
docker-java waitContainerCmd because docker-java's unshaded api version
in this project's pom and testcontainers' shaded copy disagree on
WaitContainerCmd.getCondition() — going through GenericContainer keeps
the call inside testcontainers' shaded executor.
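
A condensed sketch of that shape (Testcontainers API; the fixture URL, volume
name, loaderImage reference and baseHardenedHostConfig() are stand-ins for the
real IT code):

    // OneShotStartupCheckStrategy makes start() succeed only when the
    // container stops on its own with exit code 0.
    GenericContainer<?> loader = new GenericContainer<>(loaderImage)  // built fresh from cameleer-runtime-loader/
            .withEnv("ARTIFACT_URL", fixtureUrl)                      // served by the nginx fixture
            .withEnv("ARTIFACT_EXPECTED_SIZE", "1024")
            .withCreateContainerCmdModifier(cmd -> cmd.withHostConfig(
                    baseHardenedHostConfig()                          // cap_drop ALL, readonly rootfs, ...
                            .withBinds(new Bind(jarVolume, new Volume("/app/jars")))))
            .withStartupCheckStrategy(
                    new OneShotStartupCheckStrategy().withTimeout(Duration.ofSeconds(120)));
    loader.start();                                                   // throws on non-zero exit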

Rules doc updated to point at the captured-output behaviour and the IT.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 23:51:25 +02:00
hsiegeln
c2efb7fbf7 fix(loader): chown /app/jars to loader so volume init gives wget write perms
All checks were successful
CI / cleanup-branch (push) Has been skipped
CI / build (push) Successful in 3m43s
CI / docker (push) Successful in 2m42s
CI / deploy-feature (push) Has been skipped
CI / deploy (push) Successful in 1m2s
Root cause of "Loader exited 1" with `wget: can't open '/app/jars/app.jar':
Permission denied`. DockerRuntimeOrchestrator creates a fresh named volume
per replica and mounts it RW at /app/jars. Docker initializes empty named
volumes from the image's mountpoint contents — but /app/jars didn't exist
in the loader image, so the volume came up as root:root 0755. Loader runs
as UID 1000 and can't write to a root-owned dir.

Pre-create /app/jars in the image owned by `loader`. Volume init now
inherits loader:loader ownership and wget writes app.jar successfully.
Verified locally with the full hardening contract (cap_drop ALL, readonly
rootfs, /tmp tmpfs, no-new-privileges, apparmor=docker-default).

This is the conditional CI build's first real exercise — the loader-build
step gated on cameleer-runtime-loader/** changes will fire on this push
and produce the fixed `:latest` tag.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 23:34:03 +02:00
hsiegeln
724054296e ci(loader): build & push cameleer-runtime-loader image only when its sources change
All checks were successful
CI / cleanup-branch (push) Has been skipped
CI / build (push) Successful in 3m24s
CI / docker (push) Successful in 2m28s
CI / deploy-feature (push) Has been skipped
CI / deploy (push) Successful in 48s
The init-container image referenced by DockerRuntimeOrchestrator
(`gitea.siegeln.net/cameleer/cameleer-runtime-loader:latest`) had no CI
producer; it had to be built and pushed by hand. Replicates the
cameleer-saas pattern (single docker job with multiple buildx push
steps), but gates the loader build on a path-diff so unrelated commits
don't rebuild and re-tag a sidecar that didn't change.

- build job: fetch-depth=0 + Detect runtime-loader changes step that
  diffs `${{ github.event.before }}..${{ github.sha }}` for paths under
  cameleer-runtime-loader/. Falls back to `changed=true` when no prior
  commit is reachable (first push to a branch).
- docker job: new `Build and push runtime-loader` step gated on
  `needs.build.outputs.loader_changed == 'true'`. Tags with sha and
  latest/branch-<slug>, --provenance=false for Gitea, no buildcache
  (image is alpine + script).
- Cleanup loops in docker and cleanup-branch jobs include the new
  package.
- Rules and loader README updated.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 23:13:25 +02:00
hsiegeln
f772e868e6 docs: correct loader-network reachability claim; refresh HOWTO env vars
All checks were successful
CI / cleanup-branch (push) Has been skipped
CI / build (push) Successful in 4m32s
CI / docker (push) Successful in 2m55s
CI / deploy-feature (push) Has been skipped
CI / deploy (push) Successful in 55s
Final-review must-fixes:
- HOWTO.md: drop CAMELEER_SERVER_RUNTIME_JARDOCKERVOLUME; add the three new
  artifact env vars (loaderimage / artifacttokenttlseconds / artifactbaseurl).
- DeploymentExecutor @PostConstruct WARN, handoff doc, and docker-orchestration
  rule no longer claim the loader uses cameleer-traefik. The loader runs on
  the PRIMARY Docker network only — additional networks are attached after
  startContainer returns, by which time the loader has exited. SaaS still
  works because the tenant's primary network hosts the tenant server.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 17:13:56 +02:00
hsiegeln
c970120b9f docs(handoff): init-container JAR fetch — pre-merge checklist
Records what landed (19 commits, 273/273 tests green), what's required
before the branch merges (push loader image, regen OpenAPI when backend
is reachable), and the deferred items (Task 12 IT, the polish backlog).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 16:55:32 +02:00
hsiegeln
0ee763ba51 docs(rules): document ArtifactDownloadController + storage abstraction; drop JARDOCKERVOLUME
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 16:52:20 +02:00
hsiegeln
cc076b1923 fix(runtime): pre-pull loader image, plug volume-leak windows, document network dep
Pre-pull the loader image at PULL_IMAGE so the implicit pull on first
createContainerCmd doesn't bypass the 120s loader-wait timeout.

Wrap createAndStartLoader in try/catch so a create/start failure cleans
up the just-created volume; same guard around createAndStartMain on
phase-2 failures. Folds the wait-error message into the rethrown
RuntimeException so the cause chain is visible.
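
A minimal sketch of the pre-pull plus the leak guard (docker-java API;
createAndStartLoader and the variable names are illustrative, and the
enclosing method is assumed to declare throws InterruptedException):

    // PULL_IMAGE phase: pull up front so the 120s loader wait never pays
    // for an implicit first-use pull.
    docker.pullImageCmd(loaderImage).exec(new PullImageResultCallback()).awaitCompletion();

    String volumeName = "cameleer-jars-" + containerName;
    docker.createVolumeCmd().withName(volumeName).exec();
    try {
        createAndStartLoader(volumeName, artifactUrl, artifactExpectedSize);
    } catch (RuntimeException e) {
        docker.removeVolumeCmd(volumeName).exec();        // don't leak the fresh volume
        throw new RuntimeException("Loader create/start failed: " + e.getMessage(), e);
    }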

Add a @PostConstruct WARN when neither artifactbaseurl nor serverurl is
set so the implicit cameleer-server DNS dependency is loud at boot, and
document the loader-to-server reachability contract in
.claude/rules/docker-orchestration.md.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 16:26:35 +02:00
hsiegeln
1ddae94930 feat(runtime): init-container loader pattern + withUsernsMode (#152 hardening close)
Tasks 9+10+11 of the init-container-jar-fetch plan, landed atomically because
9 alone leaves the orchestrator+executor referencing removed ContainerRequest
fields.

ContainerRequest (core) drops jarPath/jarVolumeName/jarVolumeMountPath; adds
appVersionId, artifactDownloadUrl, artifactExpectedSize, loaderImage.

DockerRuntimeOrchestrator (app):
  - per-replica named volume "cameleer-jars-{containerName}"
  - phase 1: loader container with the volume mounted RW at /app/jars,
    ARTIFACT_URL + ARTIFACT_EXPECTED_SIZE env, full hardening contract
  - block on waitContainerCmd().awaitStatusCode(120s); on non-zero exit
    remove the loader, remove the volume, propagate RuntimeException so
    DeploymentExecutor marks the deployment FAILED. main is never created.
  - phase 2: main container with the same volume mounted RO at /app/jars
    (both phases are sketched after this list)
  - withUsernsMode("host:1000:65536") on BOTH containers — closes the last
    open hardening gap from issue #152
  - main entrypoint paths point at /app/jars/app.jar
  - extracted baseHardenedHostConfig() so loader and main share the
    cap_drop / security_opt / readonly / pids / tmpfs contract
  - removeContainer() also removes the per-replica volume so blue/green
    doesn't leak volumes
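
A condensed sketch of phases 1 and 2 (docker-java API; dockerClient, the image
variables and the error text are illustrative; the userns call is referenced
in a comment only, since its exact builder hook isn't spelled out above):

    String volumeName = "cameleer-jars-" + containerName;
    dockerClient.createVolumeCmd().withName(volumeName).exec();

    // phase 1: loader gets the volume RW plus the artifact env; the same
    // baseHardenedHostConfig() and withUsernsMode("host:1000:65536") apply
    // to both containers.
    CreateContainerResponse loader = dockerClient.createContainerCmd(loaderImage)
            .withEnv("ARTIFACT_URL=" + artifactDownloadUrl,
                     "ARTIFACT_EXPECTED_SIZE=" + artifactExpectedSize)
            .withHostConfig(baseHardenedHostConfig()
                    .withBinds(new Bind(volumeName, new Volume("/app/jars"))))
            .exec();
    dockerClient.startContainerCmd(loader.getId()).exec();
    Integer exit = dockerClient.waitContainerCmd(loader.getId())
            .exec(new WaitContainerResultCallback())
            .awaitStatusCode(120, TimeUnit.SECONDS);
    dockerClient.removeContainerCmd(loader.getId()).exec();
    if (exit != 0) {
        dockerClient.removeVolumeCmd(volumeName).exec();   // abort: main is never created
        throw new RuntimeException("Loader exited " + exit);
    }

    // phase 2: main container sees the same volume read-only at /app/jars
    CreateContainerResponse main = dockerClient.createContainerCmd(mainImage)
            .withHostConfig(baseHardenedHostConfig()
                    .withBinds(new Bind(volumeName, new Volume("/app/jars"), AccessMode.ro)))
            .exec();
    dockerClient.startContainerCmd(main.getId()).exec();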

DeploymentExecutor (app):
  - injects ArtifactDownloadTokenSigner; new @Value props loaderimage,
    artifacttokenttlseconds, artifactbaseurl
  - replaces the temporary getVersion(...).jarPath() bridge with a signed
    URL ${artifactBaseUrl}/api/v1/artifacts/{id}?exp&sig
  - drops the Files.exists pre-flight check; AppVersion.jarSizeBytes is
    the size-of-record check now
  - drops jarDockerVolume / jarStoragePath @Value fields and the volume
    plumbing in startReplica
  - DeployCtx carries appVersionId / artifactUrl / artifactExpectedSize
    in place of jarPath

Tests:
  - DockerRuntimeOrchestratorHardeningTest updated for the new shape;
    captures HostConfig on the MAIN container and asserts cap_drop ALL
    + no-new-privileges + apparmor + readonly + pids + tmpfs + the new
    withUsernsMode("host:1000:65536")
  - DockerRuntimeOrchestratorLoaderTest (new): verifies volume create →
    loader create with RW bind → loader started → awaited → loader
    removed → main create with RO bind → main started; verifies abort
    + cleanup on loader exit != 0 (loader removed, volume removed, main
    NEVER created); verifies userns_mode applied to both containers.

Config:
  - application.yml replaces jardockervolume with loaderimage,
    artifacttokenttlseconds, artifactbaseurl

Rules updated: .claude/rules/docker-orchestration.md (loader pattern,
userns, no more bind-mount); .claude/rules/core-classes.md
(ContainerRequest field map).

Test counts after change:
  - cameleer-server-core: 116/116 unit tests pass
  - cameleer-server-app: 273/273 unit tests pass

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 16:06:56 +02:00
hsiegeln
5043e1d4a1 feat(loader): add cameleer-runtime-loader image (busybox + entrypoint)
Init container that fetches the deployable JAR from a signed URL into the
shared /app/jars/ volume before the main runtime container starts. Pairs
with the controller (Task 7) and DockerRuntimeOrchestrator (Task 10).

- Dockerfile: busybox:1.37-musl, non-root USER (UID 1000)
- entrypoint.sh: POSIX sh, set -eu, required env vars (ARTIFACT_URL,
  ARTIFACT_EXPECTED_SIZE), wget with retries/timeout, size verification
- README: build instructions and runtime contract

Smoke-tested locally (docker build + happy-path fetch + size-mismatch).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 15:51:01 +02:00
hsiegeln
940bf18aba refactor(web): authoritative Content-Length, typed Optional<AppVersion> in controller 2026-04-27 15:47:37 +02:00
hsiegeln
433155ae0c feat(web): add ArtifactDownloadController with HMAC URL auth
New permitAll endpoint GET /api/v1/artifacts/{appVersionId}?exp&sig that
the cameleer-runtime-loader init container hits to stream the deployed
JAR. Auth is the HMAC-signed URL (sig + exp) — no JWT, no bootstrap
token — so SecurityConfig permits the path and the controller does the
verification itself.
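
A condensed sketch of the verification the controller performs itself (the
canonical string, the epoch-seconds expiry and the URL-safe Base64 signature
encoding are assumptions; the constant-time MessageDigest.isEqual compare
comes from the signer work described in the neighbouring commits):

    boolean verify(String appVersionId, long exp, String sig, byte[] hmacKey)
            throws GeneralSecurityException {
        if (Instant.now().getEpochSecond() > exp) {
            return false;                                   // expired link -> 401
        }
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(hmacKey, "HmacSHA256"));
        byte[] expected = mac.doFinal((appVersionId + ":" + exp).getBytes(StandardCharsets.UTF_8));
        byte[] presented = Base64.getUrlDecoder().decode(sig);
        return MessageDigest.isEqual(expected, presented);  // constant-time compare
    }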

Also hardens ArtifactDownloadTokenSigner to reject null/blank jwtSecret
at construction (Task 6 review feedback I-3).

Wires the ArtifactDownloadTokenSigner bean in SecurityBeanConfig from
${cameleer.server.security.jwtsecret}, the same property the rest of
the security stack uses.

Test coverage: 200/401/404 paths via standalone-MockMvc unit test
(avoids dragging in WebConfig's audit + usage interceptors that pull
the full bean graph) plus the existing signer suite extended with a
null/blank-secret guard test.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 15:36:28 +02:00
hsiegeln
73e06d8164 test(web): cover constant-time compare path in HMAC verify
The existing rejectsTamperedSignature test uses a len+1 signature, which
short-circuits in MessageDigest.isEqual on the length mismatch. A
same-length tamper test forces the byte-by-byte compare so the
constant-time branch is exercised.
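
A sketch of the added test's idea (signer and helper names are hypothetical;
the keep-the-length, flip-a-byte trick is the point):

    @Test
    void rejectsSameLengthTamperedSignature() {
        String sig = signer.sign(appVersionId, exp);                      // hypothetical signer API
        char first = sig.charAt(0);
        String tampered = (first == 'A' ? 'B' : 'A') + sig.substring(1);  // same length, one byte off
        assertFalse(signer.verify(appVersionId, exp, tampered));          // byte-by-byte compare must run
    }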

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 15:30:13 +02:00
hsiegeln
25bbd759d0 feat(web): add HMAC token signer for artifact downloads 2026-04-27 15:25:57 +02:00
hsiegeln
d90cd5ef2d test(retention): cover deployed-version-skip; preserve stack on delete failure 2026-04-27 15:23:07 +02:00
hsiegeln
4abcc610d5 refactor(retention): JarRetentionJob deletes via ArtifactStore 2026-04-27 15:17:17 +02:00
hsiegeln
6b7b5ae1ff docs(runtime): mark DeploymentExecutor jarPath as Task-11 bridge
Tactical filesystem-path read of the AppVersion locator survives until the
loader init-container lands — flagged inline so future readers don't read
the staging step as steady state.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 15:11:27 +02:00
hsiegeln
07a2fd6090 refactor(core): AppService writes via ArtifactStore; remove resolveJarPath
Task 4 of the init-container JAR fetch plan: migrate AppService.uploadJar
off direct filesystem writes onto the ArtifactStore abstraction so future
backends (OCI/Zot, S3) can swap in without touching service or controller
code.
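
For orientation, a rough sketch of the abstraction's shape (put/exists/delete
appear in the surrounding commits; the locator return value and the read-side
method are assumptions):

    public interface ArtifactStore {
        // Writes the artifact and returns the locator string recorded in
        // app_versions.jar_path (layout is owned by the store).
        String put(ArtifactCoordinates coords, InputStream content) throws IOException;

        boolean exists(ArtifactCoordinates coords);

        void delete(ArtifactCoordinates coords) throws IOException;

        // Assumed read side, e.g. for streaming the JAR back out.
        InputStream open(ArtifactCoordinates coords) throws IOException;
    }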

- AppService constructor now takes (AppRepository, AppVersionRepository,
  ArtifactStore, tenantId[, CreateGuard]). The store owns layout and the
  locator string written into app_versions.jar_path.
- uploadJar buffers the request body once for hashing + storage, then
  writes a scratch temp file solely for RuntimeDetector (which still
  takes a Path); scratch is unconditionally deleted in finally.
- Add coordinatesFor(AppVersion) helper so downstream callers (Task 5+)
  can derive ArtifactCoordinates without knowing the tenant binding.
- Remove resolveJarPath. DeploymentExecutor now reads jarPath directly
  off the AppVersion record; the clean cut to download-URL delivery
  lands in Task 11.
- RuntimeBeanConfig wires a FilesystemArtifactStore bean rooted at
  cameleer.server.runtime.jarstoragepath and threads tenantId into the
  AppService bean.
2026-04-27 15:05:40 +02:00
hsiegeln
5238c58dd5 refactor(storage): clean up tmp on put failure; promote DirectoryNotEmptyException import
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 14:59:25 +02:00
hsiegeln
5eb07f5047 fix(storage): atomic put + tolerate DirectoryNotEmptyException in delete 2026-04-27 14:55:38 +02:00
hsiegeln
bc8bd590a6 feat(storage): add FilesystemArtifactStore (one impl of ArtifactStore) 2026-04-27 14:48:42 +02:00
hsiegeln
9c115f892e docs(storage): add Javadoc to ArtifactStore.exists
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 14:44:41 +02:00
hsiegeln
cddf056925 feat(storage): add ArtifactStore interface 2026-04-27 14:43:24 +02:00
hsiegeln
435153da6f docs(storage): add issue #158 ref in ArtifactCoordinates Javadoc
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 14:42:15 +02:00
hsiegeln
cc17cdd020 feat(storage): add ArtifactCoordinates value type 2026-04-27 14:38:29 +02:00
hsiegeln
1427d58e00 docs(plan): init-container JAR fetch + ArtifactStore abstraction
14-task TDD plan to replace bind-mount JAR delivery with init-container
download from Cameleer over HTTP, sitting behind a new ArtifactStore
abstraction. Lands withUsernsMode hardening (last open gap from #152) and
gives storage a clean migration path to OCI (Zot) tracked separately in
issue #158.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 14:35:07 +02:00
hsiegeln
47c303afa0 docs(handoff): logout-hardening — server-side end-to-end verified
Drove the full revocation flow against a running cameleer-server-app jar
(temp postgres+clickhouse, env-var admin):

  GET  /auth/me  with fresh token             -> 200
  POST /auth/logout                            -> 204
  GET  /auth/me  with same revoked token       -> 401
  POST /auth/logout (unauthenticated)          -> 204
  users.token_revoked_before                   -> non-null
  audit_log (action=logout, category=AUTH)    -> 1 row, SUCCESS

Proves the full chain end-to-end: controller revokes, audit lands, and
the JwtAuthenticationFilter prefix-strip fix actually enforces revocation
against the bare users.user_id (the original bug).

Browser-driven SPA smoke is still owed — Playwright MCP allowlist in
this env blocks 8081, so the SPA flow was verified by code-inspection
during Tasks 4+5. OIDC-user smoke against Logto remains owed pending
post_logout_redirect_uri registration.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 12:16:43 +02:00
hsiegeln
664acf2614 Merge feature/logout-hardening: server-side revocation + RP-Initiated Logout
All checks were successful
CI / cleanup-branch (push) Has been skipped
CI / build (push) Successful in 2m50s
CI / docker (push) Successful in 2m19s
CI / deploy-feature (push) Has been skipped
CI / deploy (push) Successful in 48s
Fixes a silent token-revocation bug (JwtAuthenticationFilter was looking
up users by prefixed JWT subject instead of the bare user_id), adds
POST /api/v1/auth/logout that bumps token_revoked_before, and replaces
the broken cross-origin fetch logout in the SPA with a proper top-level
RP-Initiated Logout redirect (id_token_hint + post_logout_redirect_uri
+ client_id). Adds a signed-out splash and prompt=login defence.

Operational follow-up: SaaS team must register
<base-url>/login as a post_logout_redirect_uri on each Logto tenant
client. See docs/handoff/2026-04-27-logout-hardening.md.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 12:05:27 +02:00
hsiegeln
463c6348b3 docs(handoff): logout-hardening verification notes
Records the automated outcomes (4/4 ITs pass, typecheck + build green)
and lists the three manual smoke tests still required from the SaaS
team — local-user, OIDC-user against Logto, stolen-token. The OIDC test
depends on Logto-side post_logout_redirect_uri registration; the others
can be exercised against any cameleer-server deployment.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 12:04:02 +02:00
hsiegeln
7837272a46 docs(handoff): SaaS-side post_logout_redirect_uri requirement
Operational note for the cameleer-saas / Logto admin team. Covers what
changed in cameleer-server (RP-Initiated Logout via top-level redirect
+ POST /auth/logout server-side revocation + signed-out splash +
prompt=login defence), what they need to register in Logto per tenant,
how to verify, and a failure-mode runbook table.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 12:00:54 +02:00
hsiegeln
2535741474 docs(rules): document POST /auth/logout on UiAuthController
Updates both UiAuthController listings (Auth flat + security/) so future
sessions know /logout exists, that it bumps token_revoked_before with a
+1ms race-safety bump, and that it audits under AuditCategory.AUTH.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 12:00:05 +02:00
hsiegeln
32c8786d06 feat(ui): signed-out splash + prompt=login on OIDC redirect
Two defensive layers complementing the RP-Initiated Logout in 82e25933:

1. cameleer:signed_out sessionStorage flag (set in auth-store.logout,
   read+cleared in LoginPage on mount) renders a 'You have been signed
   out successfully' card with an explicit 'Sign in again' button.
   Mirrors the cameleer-saas pattern.

2. prompt=login on the OIDC authorization redirect forces the IdP to
   re-prompt for credentials even if its session cookie somehow
   survived RP-Initiated Logout (proxy, race, misconfigured
   post_logout_redirect_uri). OIDC Core 1.0 §3.1.2.1.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 11:59:04 +02:00
hsiegeln
82e2593332 fix(ui): proper OIDC logout — server revoke + top-level redirect
Previous logout fired fetch(end_session, {mode:'no-cors'}), which is a
no-op for OIDC: cross-origin fetch never clears the IdP's session cookie.
Result: subsequent SSO clicks silently re-authenticated the prior user.

New flow:
1. Best-effort POST /auth/logout to bump token_revoked_before.
2. Clear localStorage + Zustand state.
3. Set sessionStorage 'cameleer:signed_out=1' so /login renders a
   confirmation splash (mirrors cameleer-saas pattern).
4. window.location.replace(end_session_endpoint?id_token_hint=...
   &post_logout_redirect_uri=...&client_id=...) — top-level navigation,
   the only form that actually clears the IdP session cookie.

client_id is now persisted at OIDC initiation alongside
end_session_endpoint and id_token, so logout has all three params
without an extra round-trip.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 11:57:04 +02:00
hsiegeln
da3895c31d chore(ui): regenerate OpenAPI schema for /auth/logout
Picks up the new POST /api/v1/auth/logout endpoint introduced in
90315330. Generated against a locally-running build (not the remote
generate-api:live URL, which lags behind this branch).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 11:53:40 +02:00
hsiegeln
83a10de497 fix(auth): close same-ms revocation race + tidy audit cleanup
Bumps token_revoked_before by 1ms so a JWT issued in the same millisecond
as a logout call (Date.from(Instant.now()) quantises iat to ms) does not
survive the filter's strict isBefore check.
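
The race in miniature (java.time only; variable names are illustrative):

    // Both sides end up with millisecond precision (Date.from(Instant.now())
    // quantises iat), so a token minted in the logout millisecond has
    // iat == revokedBefore and a strict isBefore check misses it.
    Instant logoutAt = Instant.now();
    Instant issuedAt = logoutAt;                                           // same-ms token
    boolean survivesWithoutBump = !issuedAt.isBefore(logoutAt);            // true -> the bug
    boolean rejectedWithBump = issuedAt.isBefore(logoutAt.plusMillis(1));  // true -> the fix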

Also extends LogoutControllerIT @AfterEach to delete the audit_log row,
keeping reused Postgres containers clean for downstream ITs.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 09:26:05 +02:00
hsiegeln
9031533077 feat(auth): add POST /auth/logout that revokes all user tokens
Bumps users.token_revoked_before = now() for the calling user, audited
under AuditCategory.AUTH. Best-effort: returns 204 even when the request
is unauthenticated, so the SPA can call it on every logout regardless of
token state. Token-rejection is enforced by the existing
JwtAuthenticationFilter revocation check (fixed in 7066795c).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 09:21:47 +02:00
hsiegeln
b4c6e45d35 test(auth): JwtRevocationIT cleanup + unrevoked-token coverage
Adds @AfterEach to delete the test users so Testcontainers reuse does
not leak an authenticated user with a future token_revoked_before into
the shared schema (visible to LicenseUsageReader.snapshot, user-admin
listing tests, etc.). Adds unrevokedUserTokenIsAccepted to pin the
revoked == null no-op branch as a first-class assertion.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 09:18:10 +02:00
hsiegeln
7066795c3c fix(auth): strip user: prefix before token-revocation lookup
JwtAuthenticationFilter compared the JWT subject (user:alice) against
users.user_id (bare alice), so token_revoked_before was never read for
any user. Strips the prefix to match the convention documented in
CLAUDE.md. Adds JwtRevocationIT as a regression.
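
The shape of the fix, for illustration (filter internals and the repository
call are stand-ins, not the committed code):

    String subject = claims.getSubject();                      // e.g. "user:alice"
    String userId = subject.startsWith("user:")
            ? subject.substring("user:".length())               // bare id, matches users.user_id
            : subject;
    Optional<User> user = userRepository.findByUserId(userId);  // revocation lookup now resolves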

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 09:11:55 +02:00
hsiegeln
6e4977ea3b docs(plan): logout hardening implementation plan
Tracks the work to (a) fix the silently-inert token-revocation lookup in
JwtAuthenticationFilter, (b) add POST /api/v1/auth/logout that bumps
users.token_revoked_before, and (c) replace the broken cross-origin
fetch logout in the SPA with proper RP-Initiated Logout (top-level
redirect) plus a signed-out splash and prompt=login defence.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 09:01:52 +02:00
hsiegeln
1809574fe6 ci: include cameleer-license-api in maven deploy project list
All checks were successful
CI / cleanup-branch (push) Has been skipped
CI / build (push) Successful in 3m31s
CI / docker (push) Successful in 2m44s
CI / deploy-feature (push) Has been skipped
CI / deploy (push) Successful in 48s
SonarQube / sonarqube (push) Successful in 8m32s
The license-api module was added in 858975f0 but the CI deploy step's
`-pl` list still only built parent + server-core + minter. server-core
now depends on cameleer-license-api, which wasn't in the registry yet,
so the deploy job failed with:

    Could not find artifact com.cameleer:cameleer-license-api:jar:1.0-SNAPSHOT
    in gitea (https://gitea.siegeln.net/api/packages/cameleer/maven)

Add cameleer-license-api to the project list so it builds and publishes
before its consumers in the same reactor invocation.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-26 20:41:26 +02:00
hsiegeln
858975f03f refactor(license): extract cameleer-license-api module from server-core
Some checks failed
CI / cleanup-branch (push) Has been skipped
CI / build (push) Failing after 2m57s
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
Splits the pure license contract types (LicenseInfo, LicenseValidator,
LicenseState, LicenseStateMachine, LicenseLimits, DefaultTierLimits) into a
new cameleer-license-api module under package com.cameleer.license.

Why: cameleer-license-minter previously depended on cameleer-server-core for
these types, dragging cameleer-server-core + cameleer-common onto the
classpath of every minter consumer (notably cameleer-saas). The SaaS
management plane has no business carrying server-runtime types — it only
needs the license contract to mint and verify tokens.

After:
  cameleer-license-minter -> cameleer-license-api  (no server internals)
  cameleer-server-core    -> cameleer-license-api
  cameleer-saas           -> cameleer-license-minter -> cameleer-license-api

Verified: mvn -pl cameleer-license-minter dependency:tree shows the minter
no longer pulls cameleer-server-core or cameleer-common. Full reactor
verify (-DskipITs) green: 371 tests pass.

LicenseGate stays in server-core (server-runtime state holder, not contract).

Closes cameleer/cameleer-server#156

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-26 20:06:52 +02:00
hsiegeln
30db609aff Merge feature/auth-harmonization: capability-driven login UX
All checks were successful
CI / cleanup-branch (push) Has been skipped
CI / build (push) Successful in 3m6s
CI / docker (push) Successful in 2m49s
CI / deploy-feature (push) Has been skipped
CI / deploy (push) Successful in 53s
Replaces the prompt=none → /login?local trap with a deterministic
capability endpoint (GET /api/v1/auth/capabilities). LoginPage renders
SSO-primary or local form based on caps; ?local is the explicit
admin-recovery escape hatch. Drops prompt=none from the SSO authorize
URL per RFC 9700 §4.4. Adds Vitest + IT coverage and docs.
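
A hypothetical sketch of the capability shape only; the real
AuthCapabilitiesController response fields are not spelled out in this log:

    record AuthCapabilities(boolean oidcEnabled, boolean oidcPrimary, boolean localLoginEnabled) {}

    @GetMapping("/api/v1/auth/capabilities")
    AuthCapabilities capabilities() {
        // Deterministic: derived from server config, never from a prompt=none probe.
        return new AuthCapabilities(oidcConfigured, oidcConfigured && oidcPrimary, localLoginEnabled);
    }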

MFA enrollment / enforcement deferred to issue #154.
2026-04-26 19:52:31 +02:00
hsiegeln
45b5f473c9 refactor(auth): post-review tidy — drop @NotNull, refresh e2e comment, use oidc.primary
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-26 19:48:20 +02:00
hsiegeln
71688dea16 docs(auth): document AuthCapabilitiesController + login routing 2026-04-26 19:41:20 +02:00
hsiegeln
b63b9aa4bb fix(ui): drop OidcCallback ?local trap on login_required 2026-04-26 19:38:15 +02:00
hsiegeln
7565cdcf2f fix(ui): try/finally in handleOidcLogin; logout redirects to /login (not ?local)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-26 19:36:23 +02:00
hsiegeln
b7d390adf4 feat(ui): capability-driven LoginPage; drop prompt=none silent SSO
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-26 19:29:59 +02:00