chore: hand cameleer-runtime-loader image build to cameleer-saas
The loader is infra glue (per-replica init container that fetches the tenant JAR from a signed URL) — the same shape as the runtime-base, postgres, clickhouse, traefik, and logto images already living in cameleer-saas. Move the source + CI build there so all sidecar/infra image builds are in one place; cameleer-server's CI is back to building only what it owns (server, server-ui).

Coordination: cameleer-saas@ac8d628 added the build step and copied the source verbatim. The published tag path is unchanged (gitea.siegeln.net/cameleer/cameleer-runtime-loader:latest), so running tenant servers continue pulling the same image without disruption.

This commit:

- Deletes cameleer-runtime-loader/ (Dockerfile, entrypoint.sh, README).
- Removes the conditional "Build and push runtime-loader" step and its upstream "Detect runtime-loader changes" detection from .gitea/workflows/ci.yml. Drops the fetch-depth: 0 + outputs.loader_changed plumbing that only existed for the change-detection path.
- Drops cameleer-runtime-loader from the in-job and cleanup-branch image cleanup loops — saas owns the registry lifecycle now.
- Rewrites LoaderHardeningIT to pull the published :latest from the registry (via Testcontainers GenericContainer) instead of building from a local Dockerfile (sketched below). The IT now functions as a cross-repo contract test: cameleer-server's hardening expectations vs. the saas-published artifact. Local devs need `docker login gitea.siegeln.net`; CI runners are pre-authenticated.
- Updates .claude/rules/docker-orchestration.md to point at the new source-of-truth location and reframe LoaderHardeningIT as the cross-repo contract test.

The image's runtime contract (ARTIFACT_URL, ARTIFACT_EXPECTED_SIZE, /app/jars/app.jar mount, exit code semantics) is unchanged. Future contract changes need coordinated commits across both repos.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
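For orientation, the rewritten IT has roughly the following shape. This is a minimal sketch assuming JUnit 5 and Testcontainers, with a throwaway JDK `HttpServer` standing in for the signed artifact URL; the class name and fixture are illustrative assumptions, not the actual LoaderHardeningIT source (which also mounts a named volume at `/app/jars` per the orchestrator's hardening shape).

```java
import com.sun.net.httpserver.HttpServer;
import org.junit.jupiter.api.Test;
import org.testcontainers.Testcontainers;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.startupcheck.OneShotStartupCheckStrategy;
import org.testcontainers.utility.DockerImageName;

import java.net.InetSocketAddress;

class LoaderContractSketchIT {

    // Published by cameleer-saas; pulling it locally requires `docker login gitea.siegeln.net`.
    static final DockerImageName LOADER_IMAGE =
            DockerImageName.parse("gitea.siegeln.net/cameleer/cameleer-runtime-loader:latest");

    @Test
    void loaderExitsZeroAgainstPublishedImage() throws Exception {
        // Tiny host-side fixture standing in for the signed artifact URL (illustrative assumption).
        byte[] fakeJar = new byte[1024];
        HttpServer fixture = HttpServer.create(new InetSocketAddress(0), 0);
        fixture.createContext("/app.jar", exchange -> {
            exchange.sendResponseHeaders(200, fakeJar.length);
            exchange.getResponseBody().write(fakeJar);
            exchange.close();
        });
        fixture.start();
        int port = fixture.getAddress().getPort();
        Testcontainers.exposeHostPorts(port); // lets the container reach the host fixture

        try (GenericContainer<?> loader = new GenericContainer<>(LOADER_IMAGE)
                .withEnv("ARTIFACT_URL",
                        "http://host.testcontainers.internal:" + port + "/app.jar")
                .withEnv("ARTIFACT_EXPECTED_SIZE", String.valueOf(fakeJar.length))
                // One-shot container: start() succeeds only if it runs to completion with exit 0.
                .withStartupCheckStrategy(new OneShotStartupCheckStrategy())) {
            loader.start(); // throws, failing the contract test, if the loader exits non-zero
        } finally {
            fixture.stop(0);
        }
    }
}
```

`OneShotStartupCheckStrategy` makes `start()` succeed only when the container runs to completion with exit code 0, which matches the exit-code contract the orchestrator relies on.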
@@ -41,7 +41,7 @@ When deployed via the cameleer-saas platform, this server orchestrates customer
`startContainer` is now a three-phase op per replica (see the docker-java sketch after this list):
1. **Volume create** — `cameleer-jars-{containerName}` named volume (per-replica, deterministic so cleanup in `removeContainer` can derive it).
- 2. **Loader container** — `loaderImage` (default `gitea.siegeln.net/cameleer/cameleer-runtime-loader:latest`), name `{containerName}-loader`, mount the volume **RW at `/app/jars`**, env vars `ARTIFACT_URL` + `ARTIFACT_EXPECTED_SIZE`. Loader downloads the JAR from the signed URL into the volume and exits 0. Orchestrator blocks on `waitContainerCmd().exec(WaitContainerResultCallback).awaitStatusCode(120, SECONDS)`. Loader container is removed in a `finally` block; on non-zero exit the volume is also removed and `RuntimeException` propagates so `DeploymentExecutor` marks the deployment FAILED. **Loader logs are captured before removal** (`captureLoaderLogs` — `logContainerCmd` with `withTail(50)`, capped at 4096 chars, 5s timeout) and appended to the thrown `RuntimeException` message as `". loader output: <text>"`. Best-effort: log-capture failures are swallowed and don't mask the original exit. The loader image's Dockerfile pre-creates `/app/jars` owned by `loader:loader` (UID 1000) so the orchestrator's fresh named volume initialises with that ownership — without it the empty volume comes up as `root:root 0755` and wget exits 1 with "Permission denied". `LoaderHardeningIT` is the regression guard.
+ 2. **Loader container** — `loaderImage` (default `gitea.siegeln.net/cameleer/cameleer-runtime-loader:latest`, **built and published by the cameleer-saas repo** at `docker/runtime-loader/`), name `{containerName}-loader`, mount the volume **RW at `/app/jars`**, env vars `ARTIFACT_URL` + `ARTIFACT_EXPECTED_SIZE`. Loader downloads the JAR from the signed URL into the volume and exits 0. Orchestrator blocks on `waitContainerCmd().exec(WaitContainerResultCallback).awaitStatusCode(120, SECONDS)`. Loader container is removed in a `finally` block; on non-zero exit the volume is also removed and `RuntimeException` propagates so `DeploymentExecutor` marks the deployment FAILED. **Loader logs are captured before removal** (`captureLoaderLogs` — `logContainerCmd` with `withTail(50)`, capped at 4096 chars, 5s timeout) and appended to the thrown `RuntimeException` message as `". loader output: <text>"`. Best-effort: log-capture failures are swallowed and don't mask the original exit. The loader image's Dockerfile pre-creates `/app/jars` owned by `loader:loader` (UID 1000) so the orchestrator's fresh named volume initialises with that ownership — without it the empty volume comes up as `root:root 0755` and wget exits 1 with "Permission denied". `LoaderHardeningIT` is the cross-repo contract test (pulls the published `:latest` and asserts exit 0 under the orchestrator's hardening shape).
3. **Main container** — same hardening contract, mount the same volume **RO at `/app/jars`**, entrypoint reads `/app/jars/app.jar` (Spring Boot/Quarkus: `-jar /app/jars/app.jar`; plain Java: `-cp /app/jars/app.jar <MainClass>`; native: `exec /app/jars/app.jar`).
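A minimal sketch of phases 1 and 2 above, using the docker-java client API named in the list; `docker`, the method shape, and the error handling are illustrative placeholders, not the actual orchestrator source.

```java
import com.github.dockerjava.api.DockerClient;
import com.github.dockerjava.api.async.ResultCallback;
import com.github.dockerjava.api.command.WaitContainerResultCallback;
import com.github.dockerjava.api.model.Bind;
import com.github.dockerjava.api.model.Frame;
import com.github.dockerjava.api.model.HostConfig;
import com.github.dockerjava.api.model.Volume;

import java.nio.charset.StandardCharsets;
import java.util.concurrent.TimeUnit;

class LoaderPhaseSketch {

    void runLoaderPhase(DockerClient docker, String containerName, String loaderImage,
                        String artifactUrl, long expectedSize) {
        // Phase 1: deterministic per-replica volume so removeContainer can re-derive the name later.
        String volumeName = "cameleer-jars-" + containerName;
        docker.createVolumeCmd().withName(volumeName).exec();

        // Phase 2: one-shot loader container with the volume mounted RW at /app/jars.
        String loaderId = docker.createContainerCmd(loaderImage)
                .withName(containerName + "-loader")
                .withEnv("ARTIFACT_URL=" + artifactUrl,
                         "ARTIFACT_EXPECTED_SIZE=" + expectedSize)
                .withHostConfig(HostConfig.newHostConfig()
                        .withBinds(new Bind(volumeName, new Volume("/app/jars"))))
                .exec()
                .getId();

        Integer exit = null;
        String loaderOutput = "";
        try {
            docker.startContainerCmd(loaderId).exec();
            exit = docker.waitContainerCmd(loaderId)
                    .exec(new WaitContainerResultCallback())
                    .awaitStatusCode(120, TimeUnit.SECONDS);
            if (exit == null || exit != 0) {
                loaderOutput = captureLoaderLogs(docker, loaderId); // captured before removal
            }
        } finally {
            docker.removeContainerCmd(loaderId).withForce(true).exec();
        }
        if (exit == null || exit != 0) {
            // Failed download: drop the half-initialised volume and let the caller mark the deployment FAILED.
            docker.removeVolumeCmd(volumeName).exec();
            throw new RuntimeException("loader exited " + exit + ". loader output: " + loaderOutput);
        }
    }

    // Best-effort tail of the loader's output; a capture failure must not mask the original error.
    private String captureLoaderLogs(DockerClient docker, String loaderId) {
        StringBuilder out = new StringBuilder();
        try {
            docker.logContainerCmd(loaderId)
                    .withStdOut(true).withStdErr(true).withTail(50)
                    .exec(new ResultCallback.Adapter<Frame>() {
                        @Override
                        public void onNext(Frame frame) {
                            out.append(new String(frame.getPayload(), StandardCharsets.UTF_8));
                        }
                    })
                    .awaitCompletion(5, TimeUnit.SECONDS);
        } catch (Exception e) {
            return "<loader log capture failed>";
        }
        String text = out.toString();
        return text.length() > 4096 ? text.substring(0, 4096) : text;
    }
}
```

Capturing the log tail before the `finally` removal is what keeps the loader's wget error visible in the FAILED deployment message.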
`removeContainer(id)` derives the volume name from the inspected container name (Docker prefixes it with `/`) and removes the volume after the container is removed — blue/green doesn't leak volumes.
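A minimal sketch of that derivation, again with docker-java; the class and method names are placeholders, not the actual removeContainer source.

```java
import com.github.dockerjava.api.DockerClient;

class RemoveContainerSketch {

    void removeContainer(DockerClient docker, String containerId) {
        // InspectContainerResponse.getName() returns the name with a leading "/" (e.g. "/tenant-a-1"),
        // so strip it before re-deriving the deterministic volume name used by startContainer.
        String name = docker.inspectContainerCmd(containerId).exec().getName().replaceFirst("^/", "");
        String volumeName = "cameleer-jars-" + name;

        docker.removeContainerCmd(containerId).withForce(true).exec();
        // Remove the volume only after the container is gone, so blue/green rollover leaves no volumes behind.
        docker.removeVolumeCmd(volumeName).exec();
    }
}
```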