## CI/CD & Deployment
- CI workflow: `.gitea/workflows/ci.yml` — build -> docker -> deploy on push to main or feature branches. `paths-ignore` skips the whole pipeline for docs-only `/.planning/`, `/.claude/`, and `*.md` changes (push and PR triggers).
- Build step skips integration tests (`-DskipITs`) — Testcontainers needs a Docker daemon.
- Build caches (parallel `actions/cache@v4` steps in the `build` job): `~/.m2/repository` (key on all `pom.xml`), `~/.npm` (key on `ui/package-lock.json`), `ui/node_modules/.vite` (key on `ui/package-lock.json` + `ui/vite.config.ts`). UI install uses `npm ci --prefer-offline --no-audit --fund=false` so the npm cache is the primary source.
- Maven build performance (set in
`pom.xml` and `cameleer-server-app/pom.xml`): `useIncrementalCompilation=true` on the compiler plugin; Surefire uses `forkCount=1C` + `reuseForks=true` (one JVM per CPU core, reused across test classes); Failsafe keeps `forkCount=1` + `reuseForks=true`. Unit tests must not rely on per-class JVM isolation.
- UI build script (`ui/package.json`): `build` is `vite build` only — the type-check pass was split out into `npm run typecheck` (run separately when you want a full `tsc --noEmit` sweep).
- Docker: multi-stage build (
`Dockerfile`), `$BUILDPLATFORM` for native Maven on the ARM64 runner, amd64 runtime. `docker-entrypoint.sh` imports `/certs/ca.pem` into the JVM truststore before starting the app (supports custom CAs for OIDC discovery without `CAMELEER_SERVER_SECURITY_OIDCTLSSKIPVERIFY`). `REGISTRY_TOKEN` build arg is required for `cameleer-common` dependency resolution.
- Registry: `gitea.siegeln.net/cameleer/cameleer-server` (container images)
- The `cameleer-runtime-loader` image (init container that fetches the deployable JAR before the runtime container starts) is built and pushed by cameleer-saas CI (`docker/runtime-loader/` in that repo) — it lives alongside the other sidecar/infra images (runtime-base, postgres, clickhouse, traefik, logto). cameleer-server consumes the image via `DockerRuntimeOrchestrator` but does not build it. The cross-repo contract is regression-tested by `LoaderHardeningIT` here, which pulls the published `:latest` and asserts exit 0 under the orchestrator's hardening contract.
- K8s manifests in
`deploy/` — Kustomize base + overlays (main/feature), with shared infra (PostgreSQL, ClickHouse, Logto) as top-level manifests
- Deployment target: k3s at 192.168.50.86, namespace `cameleer` (main), `cam-<slug>` (feature branches)
- Feature branches: isolated namespace and PG schema; Traefik Ingress at `<slug>-api.cameleer.siegeln.net`
- Secrets managed in the CI deploy step (idempotent `--dry-run=client | kubectl apply`): `cameleer-auth`, `cameleer-postgres-credentials`, `cameleer-clickhouse-credentials`
- K8s probes: server uses `/api/v1/health`; PostgreSQL uses `pg_isready -U "$POSTGRES_USER"` (env var, not hardcoded)
- K8s security: server and database pods run with `securityContext.runAsNonRoot`. The UI (nginx) runs without a securityContext (needs root for entrypoint setup).
- Docker: the server Dockerfile ships no default credentials — all DB config comes from env vars at runtime
- Docker build uses buildx registry cache +
`--provenance=false` for Gitea compatibility
- CI: branch slug sanitization extracted to `.gitea/sanitize-branch.sh`, sourced by the docker and deploy-feature jobs
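The docs-only skip described above takes roughly this shape in `ci.yml`; a sketch only — the exact glob spellings are assumptions, check the workflow file for the real patterns:

```yaml
# Skip the whole pipeline for docs-only changes, on both triggers
# (glob spellings assumed; see .gitea/workflows/ci.yml for the real list).
on:
  push:
    paths-ignore: [".planning/**", ".claude/**", "**/*.md"]
  pull_request:
    paths-ignore: [".planning/**", ".claude/**", "**/*.md"]
```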
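The Surefire fork settings read as follows in standard Surefire configuration (plugin version and surrounding `<build>` section omitted):

```xml
<!-- Surefire: one forked JVM per CPU core (1C), reused across test
     classes. Unit tests must therefore not assume a fresh JVM per class. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <forkCount>1C</forkCount>
    <reuseForks>true</reuseForks>
  </configuration>
</plugin>
```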
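The PostgreSQL readiness probe described above looks roughly like this in the pod spec; probe timings are illustrative:

```yaml
# Reads the user from the POSTGRES_USER env var instead of hardcoding it;
# pg_isready exits 0 only once the server accepts connections.
readinessProbe:
  exec:
    command: ["sh", "-c", "pg_isready -U \"$POSTGRES_USER\""]
  initialDelaySeconds: 5
  periodSeconds: 10
```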
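The idempotent secret step can be sketched as a small helper; the secret names come from the list above, but the function name, namespace argument, and literal keys in the usage comment are illustrative:

```shell
#!/bin/sh
# Sketch of the CI deploy step's idempotent secret handling.
# --dry-run=client renders the Secret manifest locally (no cluster call);
# piping it into `kubectl apply` makes re-runs update the secret instead
# of failing with "already exists", as `kubectl create` alone would.
create_or_update_secret() {
  ns="$1"; name="$2"; shift 2
  kubectl -n "$ns" create secret generic "$name" "$@" \
    --dry-run=client -o yaml | kubectl apply -f -
}

# Usage (values would come from CI secrets, not literals):
# create_or_update_secret cameleer cameleer-postgres-credentials \
#   --from-literal=username=cameleer --from-literal=password="$PG_PASSWORD"
```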
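A minimal sketch of what such a slug helper typically does — the actual `.gitea/sanitize-branch.sh` may use different replacement rules, and the 40-character cap here is an assumption:

```shell
#!/bin/sh
# Turn a Git branch name into a DNS-safe slug usable in names like
# cam-<slug> and hosts like <slug>-api.cameleer.siegeln.net:
# lowercase, map disallowed chars to '-', cap length, trim edge hyphens.
sanitize_branch() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed 's/[^a-z0-9-]/-/g' \
    | cut -c1-40 \
    | sed 's/^-*//; s/-*$//'
}

sanitize_branch "feature/Add_OIDC-Support"   # -> feature-add-oidc-support
```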