cameleer-server/docker-compose.yml
hsiegeln 1ea0258393 fix(auth): upsert UI login user_id unprefixed (drop docker seeder workaround)
Root cause of the mismatch that prompted the one-shot cameleer-seed
docker service: UiAuthController stored users.user_id in JWT-subject
form ("user:admin"). Every env-scoped controller (Alert, AlertSilence,
AlertRule, OutboundConnectionAdmin) already strips the "user:" prefix
on the read path, so the rest of the system expects the DB key to be
the bare username. With UiAuth storing the prefixed form, fresh docker
stacks hit an "alert_rules_created_by_fkey" violation on the first
rule create.

Fix: inside login(), compute `userId = request.username()` and use
it everywhere the DB/RBAC layer is touched (isLocked, getPasswordHash,
record/clearFailedLogins, upsert, assignRoleToUser, addUserToGroup,
getSystemRoleNames). Keep `subject = "user:" + userId` — we still
sign JWTs with the namespaced subject so JwtAuthenticationFilter can
distinguish user vs agent tokens.
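
The split can be sketched roughly as follows. This is an illustration
of the contract only; the record and field names are assumptions, not
the actual UiAuthController code:

```java
// Sketch of the fixed login() write path (names are illustrative).
// DB/RBAC calls receive the bare username; only the JWT keeps the prefix.
public final class LoginSketch {

    record LoginRequest(String username) {}

    static String dbKey;   // what the DB/RBAC layer sees (upsert, roles, ...)
    static String jwtSub;  // what gets signed into the token

    static void login(LoginRequest request) {
        // Bare DB key: matches the FK targets (users.user_id) that
        // alert_rules.created_by and friends reference.
        String userId = request.username();
        dbKey = userId;

        // Namespaced JWT subject: lets JwtAuthenticationFilter tell
        // user tokens apart from agent tokens.
        jwtSub = "user:" + userId;
    }

    public static void main(String[] args) {
        login(new LoginRequest("admin"));
        System.out.println(dbKey + " / " + jwtSub); // prints "admin / user:admin"
    }
}
```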

refresh() and me() follow the same rule via a stripSubjectPrefix()
helper (JWT subject in, bare DB key out).
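
A minimal version of that helper could look like this (the real
signature may differ; this only illustrates the contract of JWT
subject in, bare DB key out):

```java
// Illustrative stripSubjectPrefix: "user:admin" -> "admin".
// Subjects without the prefix pass through unchanged.
public final class SubjectPrefix {
    static final String USER_PREFIX = "user:";

    static String stripSubjectPrefix(String subject) {
        return subject.startsWith(USER_PREFIX)
                ? subject.substring(USER_PREFIX.length())
                : subject;
    }

    public static void main(String[] args) {
        System.out.println(stripSubjectPrefix("user:admin")); // prints "admin"
    }
}
```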

With the write path aligned, the docker bridge is no longer needed:
- Deleted deploy/docker/postgres-init.sql
- Deleted cameleer-seed service from docker-compose.yml

Scope: UiAuthController only. UserAdminController + OidcAuthController
still prefix on upsert — that's the bug class the triage identified
as "Option A or B either way OK". Not changing them now because:
  a) prod admins are provisioned unprefixed through some other path,
     so those two files aren't the docker-only failure observed;
  b) stripping them would need a data migration for any existing
     prod users stored prefixed, which is out of scope for a cleanup
     phase. Follow-up worth scheduling if we ever wire OIDC or admin-
     created users into alerting FKs.

Verified: 33/33 alerting+outbound controller ITs pass (9 outbound,
10 rules, 9 silences, 5 alert inbox).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 18:26:03 +02:00


##
## Local development + E2E stack. Mirrors the k8s manifests in deploy/:
## - cameleer-postgres (PG for RBAC/config/audit/alerting — Flyway migrates on server start)
## - cameleer-clickhouse (OLAP for executions/logs/metrics/stats/diagrams)
## - cameleer-server (Spring Boot backend; built from this repo's Dockerfile)
## - cameleer-ui (nginx-served SPA; built from ui/Dockerfile)
##
## Usage:
## docker compose up -d --build # full stack, detached
## docker compose up -d cameleer-postgres cameleer-clickhouse # infra only (dev via mvn/vite)
## docker compose down -v # stop + remove volumes
##
## Defaults match `application.yml` and the k8s base manifests. Production
## k8s still owns the source of truth; this compose is for local iteration
## and Playwright E2E. Secrets are non-sensitive dev placeholders.
##
services:
  cameleer-postgres:
    image: postgres:16
    container_name: cameleer-postgres
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: cameleer
      POSTGRES_USER: cameleer
      POSTGRES_PASSWORD: cameleer_dev
    volumes:
      - cameleer-pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U cameleer -d cameleer"]
      interval: 5s
      timeout: 3s
      retries: 20
    restart: unless-stopped

  cameleer-clickhouse:
    image: clickhouse/clickhouse-server:24.12
    container_name: cameleer-clickhouse
    ports:
      - "8123:8123"
      - "9000:9000"
    environment:
      CLICKHOUSE_DB: cameleer
      CLICKHOUSE_USER: default
      CLICKHOUSE_PASSWORD: ""
      # Allow the default user to manage access (matches k8s StatefulSet env)
      CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT: "1"
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
    volumes:
      - cameleer-chdata:/var/lib/clickhouse
    healthcheck:
      # wget-less image: use clickhouse-client's ping equivalent
      test: ["CMD-SHELL", "clickhouse-client --query 'SELECT 1' || exit 1"]
      interval: 5s
      timeout: 3s
      retries: 20
    restart: unless-stopped

  cameleer-server:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        # Public cameleer-common package — token optional. Override with
        # REGISTRY_TOKEN=... in the shell env if you need a private package.
        REGISTRY_TOKEN: ${REGISTRY_TOKEN:-}
    container_name: cameleer-server
    ports:
      - "8081:8081"
    environment:
      SPRING_DATASOURCE_URL: jdbc:postgresql://cameleer-postgres:5432/cameleer?currentSchema=tenant_default&ApplicationName=tenant_default
      SPRING_DATASOURCE_USERNAME: cameleer
      SPRING_DATASOURCE_PASSWORD: cameleer_dev
      SPRING_FLYWAY_USER: cameleer
      SPRING_FLYWAY_PASSWORD: cameleer_dev
      CAMELEER_SERVER_CLICKHOUSE_URL: jdbc:clickhouse://cameleer-clickhouse:8123/cameleer
      CAMELEER_SERVER_CLICKHOUSE_USERNAME: default
      CAMELEER_SERVER_CLICKHOUSE_PASSWORD: ""
      # Auth / UI credentials — dev defaults; change before exposing the port.
      CAMELEER_SERVER_SECURITY_UIUSER: admin
      CAMELEER_SERVER_SECURITY_UIPASSWORD: admin
      CAMELEER_SERVER_SECURITY_UIORIGIN: http://localhost:5173
      CAMELEER_SERVER_SECURITY_CORSALLOWEDORIGINS: http://localhost:5173,http://localhost:8080
      CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKEN: dev-bootstrap-token-for-local-agent-registration
      CAMELEER_SERVER_SECURITY_JWTSECRET: dev-jwt-secret-32-bytes-min-0123456789abcdef0123456789abcdef
      # Runtime (Docker-in-Docker deployment) disabled for local stack
      CAMELEER_SERVER_RUNTIME_ENABLED: "false"
      CAMELEER_SERVER_TENANT_ID: default
      # SSRF guard: allow private targets for dev (Playwright + local webhooks)
      CAMELEER_SERVER_OUTBOUND_HTTP_ALLOW_PRIVATE_TARGETS: "true"
    depends_on:
      cameleer-postgres:
        condition: service_healthy
      cameleer-clickhouse:
        condition: service_healthy
    healthcheck:
      # JRE image has wget; /api/v1/health is the Spring-managed Actuator endpoint
      test: ["CMD-SHELL", "wget -qO- http://localhost:8081/api/v1/health > /dev/null || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 12
      start_period: 90s
    restart: unless-stopped

  cameleer-ui:
    build:
      context: ./ui
      dockerfile: Dockerfile
      args:
        REGISTRY_TOKEN: ${REGISTRY_TOKEN:-}
    container_name: cameleer-ui
    # Host :8080 — the Vite dev server (npm run dev:local) keeps :5173 for local iteration.
    ports:
      - "8080:80"
    environment:
      # nginx proxies /api → CAMELEER_API_URL
      CAMELEER_API_URL: http://cameleer-server:8081
      BASE_PATH: /
    depends_on:
      cameleer-server:
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://localhost/healthz > /dev/null || exit 1"]
      interval: 5s
      timeout: 3s
      retries: 10
    restart: unless-stopped

volumes:
  cameleer-pgdata:
  cameleer-chdata: