cameleer-server/HOWTO.md
hsiegeln 849265a1c6
docs(howto): brand-new local environment via docker-compose
Rewrite the "Infrastructure Setup" / "Run the Server" sections to
reflect what docker-compose.yml actually provides (full stack —
PostgreSQL + ClickHouse + server + UI — not just PostgreSQL). Adds:

- Step-by-step walkthrough for a first-run clean environment.
- Port map including the UI (8080), ClickHouse (8123/9000), PG (5432),
  server (8081).
- Dev credentials baked into compose surfaced in one place.
- Lifecycle commands (stop/start/rebuild-single-service/wipe).
- Infra-only mode for backend-via-mvn / UI-via-vite iteration.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 19:41:30 +02:00


HOWTO — Cameleer Server

Prerequisites

  • Java 17+
  • Maven 3.9+
  • Node.js 22+ and npm
  • Docker & Docker Compose
  • Access to the Gitea Maven registry (for cameleer-common dependency)

Build

# Build UI first (required for embedded mode)
cd ui && npm ci && npm run build && cd ..

# Backend
mvn clean compile          # compile only
mvn clean verify           # compile + run all tests (needs Docker for integration tests)

Start a brand-new local environment (Docker)

The repo ships a docker-compose.yml with the full stack: PostgreSQL, ClickHouse, the Spring Boot server, and the nginx-served SPA. All dev defaults are baked into the compose file — no .env file or extra config needed for a first run.

# 1. Clean slate (safe even if this is already a first run — a no-op when no volumes exist)
docker compose down -v

# 2. Build + start everything. The first run builds both images (~2–4 min).
docker compose up -d --build

# 3. Watch the server come up (health check goes green in ~60–90s after Flyway + ClickHouse init)
docker compose logs -f cameleer-server
#   ready when you see "Started CameleerServerApplication in ...".
#   Ctrl+C when ready — containers keep running.

# 4. Smoke test
curl -s http://localhost:8081/api/v1/health     # → {"status":"UP"}

Open the UI at http://localhost:8080 (nginx) and log in with admin / admin.

| Service | Host port | URL / notes |
| --- | --- | --- |
| Web UI (nginx) | 8080 | http://localhost:8080 — proxies /api to the server |
| Server API | 8081 | http://localhost:8081/api/v1/health, http://localhost:8081/api/v1/swagger-ui.html |
| PostgreSQL | 5432 | user cameleer, password cameleer_dev, db cameleer |
| ClickHouse | 8123 (HTTP), 9000 (native) | user default, no password, db cameleer |

Dev credentials baked into compose (do not use in production):

| Purpose | Value |
| --- | --- |
| UI login | admin / admin |
| Bootstrap token (agent registration) | dev-bootstrap-token-for-local-agent-registration |
| JWT secret | dev-jwt-secret-32-bytes-min-0123456789abcdef0123456789abcdef |
| CAMELEER_SERVER_RUNTIME_ENABLED | false (Docker-in-Docker app orchestration off for the local stack) |

Override any of these by editing docker-compose.yml or passing -e KEY=value to docker compose run.

Common lifecycle commands

# Stop everything but keep volumes (quick restart later)
docker compose stop

# Start again after a stop
docker compose start

# Apply changes to the server code / UI — rebuild just what changed
docker compose up -d --build cameleer-server
docker compose up -d --build cameleer-ui

# Wipe the environment completely (drops PG + ClickHouse volumes — all data gone)
docker compose down -v

# Fresh Flyway run by dropping just the PG volume (keeps ClickHouse data)
docker compose down
docker volume rm cameleer-server_cameleer-pgdata
docker compose up -d

Infra-only mode (backend via mvn / UI via Vite)

If you want to iterate on backend/UI code without rebuilding the server image on every change, start just the databases and run the server + UI locally:

# 1. Only infra containers
docker compose up -d cameleer-postgres cameleer-clickhouse

# 2. Build and run the server jar against those containers
mvn clean package -DskipTests
SPRING_DATASOURCE_URL="jdbc:postgresql://localhost:5432/cameleer?currentSchema=tenant_default&ApplicationName=tenant_default" \
SPRING_DATASOURCE_USERNAME=cameleer \
SPRING_DATASOURCE_PASSWORD=cameleer_dev \
SPRING_FLYWAY_USER=cameleer \
SPRING_FLYWAY_PASSWORD=cameleer_dev \
CAMELEER_SERVER_CLICKHOUSE_URL="jdbc:clickhouse://localhost:8123/cameleer" \
CAMELEER_SERVER_CLICKHOUSE_USERNAME=default \
CAMELEER_SERVER_CLICKHOUSE_PASSWORD= \
CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKEN=dev-bootstrap-token-for-local-agent-registration \
CAMELEER_SERVER_SECURITY_JWTSECRET=dev-jwt-secret-32-bytes-min-0123456789abcdef0123456789abcdef \
CAMELEER_SERVER_RUNTIME_ENABLED=false \
CAMELEER_SERVER_TENANT_ID=default \
java -jar cameleer-server-app/target/cameleer-server-app-1.0-SNAPSHOT.jar

# 3. In another terminal — Vite dev server on :5173 (proxies /api → :8081)
cd ui && npm install && npm run dev

Database schema is applied automatically: PostgreSQL via Flyway migrations on server startup, ClickHouse tables via ClickHouseSchemaInitializer. No manual DDL needed.

CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKEN is required for agent registration — the server fails fast on startup if it's not set. For token rotation without downtime, set CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKENPREVIOUS to the old token while rolling out the new one — the server accepts both during the overlap window.
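During the overlap window the acceptance check amounts to comparing the presented token against both values. A minimal sketch of that logic (the `token_accepted` helper is illustrative — the real check lives inside the server, not in shell):

```shell
#!/bin/sh
# Accept a presented bootstrap token if it matches the current token, or
# the previous one while BOOTSTRAPTOKENPREVIOUS is set for rotation.
token_accepted() {
  presented=$1
  [ "$presented" = "$CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKEN" ] && return 0
  [ -n "$CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKENPREVIOUS" ] &&
    [ "$presented" = "$CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKENPREVIOUS" ]
}

CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKEN=new-token
CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKENPREVIOUS=old-token
token_accepted old-token && echo "old token still accepted"
# -> old token still accepted
```

Once all agents carry the new token, unset the previous-token variable to close the window.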

API Endpoints

Authentication (Phase 4)

All endpoints except health, registration, and docs require a JWT Bearer token. The typical flow:

# 1. Register agent (requires bootstrap token)
curl -s -X POST http://localhost:8081/api/v1/agents/register \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer my-secret-token" \
  -d '{"agentId":"agent-1","name":"Order Service","group":"order-service-prod","version":"1.0.0","routeIds":["route-1"],"capabilities":["deep-trace","replay"]}'
# Response includes: accessToken, refreshToken, serverPublicKey (Ed25519, Base64)

# 2. Use access token for all subsequent requests
TOKEN="<accessToken from registration>"

# 3. Refresh when access token expires (1h default)
curl -s -X POST http://localhost:8081/api/v1/agents/agent-1/refresh \
  -H "Authorization: Bearer <refreshToken>"
# Response: { "accessToken": "new-jwt" }

UI Login (for browser access):

# Login with UI credentials (returns JWT tokens)
curl -s -X POST http://localhost:8081/api/v1/auth/login \
  -H "Content-Type: application/json" \
  -d '{"username":"admin","password":"admin"}'
# Response: { "accessToken": "...", "refreshToken": "..." }

# Refresh UI token
curl -s -X POST http://localhost:8081/api/v1/auth/refresh \
  -H "Content-Type: application/json" \
  -d '{"refreshToken":"<refreshToken>"}'

UI credentials are configured via CAMELEER_SERVER_SECURITY_UIUSER / CAMELEER_SERVER_SECURITY_UIPASSWORD env vars (default: admin / admin).

Public endpoints (no JWT required): GET /api/v1/health, POST /api/v1/agents/register (uses bootstrap token), POST /api/v1/auth/**, OpenAPI/Swagger docs.

Protected endpoints (JWT required): All other endpoints including ingestion, search, agent management, commands.

SSE connections: Authenticated via query parameter: /agents/{id}/events?token=<jwt> (EventSource API doesn't support custom headers).

Ed25519 signatures: All SSE command payloads (config-update, deep-trace, replay) include a signature field. Agents verify payload integrity using the serverPublicKey received during registration. The server generates a new ephemeral keypair on each startup — agents must re-register to get the new key.

RBAC (Role-Based Access Control)

JWTs carry a roles claim. Endpoints are restricted by role:

| Role | Access |
| --- | --- |
| AGENT | Data ingestion (/data/** — executions, diagrams, metrics, logs), heartbeat, SSE events, command ack |
| VIEWER | Search, execution detail, diagrams, agent list, app config (read-only) |
| OPERATOR | VIEWER + send commands to agents, route control, replay, edit app config |
| ADMIN | OPERATOR + user management, audit log, OIDC config, database admin (/admin/**) |

The env-var local user gets ADMIN role. Agents get AGENT role at registration.

UI role gating:

  • The sidebar hides the Admin section for non-ADMIN users; admin routes (/admin/*) redirect to / for non-admins.
  • The diagram node toolbar and route control bar are hidden for VIEWER.
  • Config is a main tab: /config shows all apps, /config/:appId filters to one app with a detail panel. Sidebar clicks stay on the config tab; route clicks resolve to the parent app.
  • VIEWER sees config read-only; OPERATOR and above can edit.

OIDC Login (Optional)

OIDC configuration is stored in PostgreSQL and managed via the admin API or UI. The SPA checks if OIDC is available:

# 1. SPA checks if OIDC is available (returns 404 if not configured)
curl -s http://localhost:8081/api/v1/auth/oidc/config
# Returns: { "issuer": "...", "clientId": "...", "authorizationEndpoint": "..." }

# 2. After OIDC redirect, SPA sends the authorization code
curl -s -X POST http://localhost:8081/api/v1/auth/oidc/callback \
  -H "Content-Type: application/json" \
  -d '{"code":"auth-code-from-provider","redirectUri":"http://localhost:5173/callback"}'
# Returns: { "accessToken": "...", "refreshToken": "..." }

Local login remains available as fallback even when OIDC is enabled.

OIDC Admin Configuration (ADMIN only)

OIDC settings are managed at runtime via the admin API. No server restart needed.

# Get current OIDC config
curl -s -H "Authorization: Bearer $TOKEN" http://localhost:8081/api/v1/admin/oidc

# Save OIDC config (client_secret: send "********" to keep existing, or new value to update)
curl -s -X PUT http://localhost:8081/api/v1/admin/oidc \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{
    "enabled": true,
    "issuerUri": "http://cameleer-logto:3001/oidc",
    "clientId": "your-client-id",
    "clientSecret": "your-client-secret",
    "rolesClaim": "realm_access.roles",
    "defaultRoles": ["VIEWER"]
  }'

# Test OIDC provider connectivity
curl -s -X POST http://localhost:8081/api/v1/admin/oidc/test \
  -H "Authorization: Bearer $TOKEN"

# Delete OIDC config (disables OIDC)
curl -s -X DELETE http://localhost:8081/api/v1/admin/oidc \
  -H "Authorization: Bearer $TOKEN"

Initial provisioning: OIDC can also be seeded from CAMELEER_SERVER_SECURITY_OIDC* env vars on first startup (when DB is empty). After that, the admin API takes over.

Logto Setup (OIDC Provider)

Logto is deployed alongside the Cameleer stack. After first deployment:

Logto is proxy-aware via TRUST_PROXY_HEADER=1. The LOGTO_ENDPOINT and LOGTO_ADMIN_ENDPOINT secrets define the public-facing URLs that Logto uses for OIDC discovery, issuer URI, and redirect URLs. When behind a reverse proxy (e.g., Traefik), set these to the external URLs (e.g., https://auth.cameleer.my.domain). Logto needs its own subdomain — it cannot be path-prefixed under another app.

  1. Initial setup: Open the Logto admin console (the LOGTO_ADMIN_ENDPOINT URL) and create the admin account
  2. Create SPA application: Applications → Create → Single Page App
    • Name: Cameleer UI
    • Redirect URI: your UI URL + /oidc/callback
    • Note the Client ID
  3. Create API Resource: API Resources → Create
    • Name: Cameleer Server API
    • Indicator: your API URL (e.g., https://cameleer.siegeln.net/api)
    • Add permissions: server:admin, server:operator, server:viewer
  4. Create M2M application (for SaaS platform): Applications → Create → Machine-to-Machine
    • Name: Cameleer SaaS
    • Assign the API Resource created above with server:admin scope
    • Note the Client ID and Client Secret
  5. Configure Cameleer OIDC login: Use the admin API (PUT /api/v1/admin/oidc) or the admin UI. OIDC login configuration is stored in the database — no env vars needed for the SPA OIDC flow.
  6. Configure resource server (for M2M token validation):
    CAMELEER_SERVER_SECURITY_OIDCISSUERURI=<LOGTO_ENDPOINT>/oidc
    CAMELEER_SERVER_SECURITY_OIDCJWKSETURI=http://cameleer-logto:3001/oidc/jwks
    CAMELEER_SERVER_SECURITY_OIDCAUDIENCE=<api-resource-indicator-from-step-3>
    CAMELEER_SERVER_SECURITY_OIDCTLSSKIPVERIFY=true   # optional — skip cert verification for self-signed CAs
    
    OIDCJWKSETURI is needed when the public issuer URL isn't reachable from inside containers — it fetches JWKS directly from the internal Logto service. OIDCTLSSKIPVERIFY disables certificate verification for all OIDC HTTP calls (discovery, token exchange, JWKS); use only when the provider has a self-signed CA.

SSO Behavior

When OIDC is configured and enabled, the UI automatically redirects to the OIDC provider for silent SSO (prompt=none). Users with an active provider session are signed in without seeing a login form. On first login, the provider may show a consent screen (scopes), after which subsequent logins are seamless. If auto-signup is enabled, new users are automatically provisioned with the configured default roles.

  • Bypass SSO: Navigate to /login?local to see the local login form
  • Subpath deployments: The OIDC redirect_uri respects BASE_PATH (e.g., https://host/server/oidc/callback)
  • Role sync: System roles (ADMIN/OPERATOR/VIEWER) are synced from OIDC scopes on every login — revoking a scope in the provider takes effect on next login. Manually assigned group memberships are preserved.

User Management (ADMIN only)

# List all users
curl -s -H "Authorization: Bearer $TOKEN" http://localhost:8081/api/v1/admin/users

# Update user roles
curl -s -X PUT http://localhost:8081/api/v1/admin/users/{userId}/roles \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"roles":["VIEWER","OPERATOR"]}'

# Delete user
curl -s -X DELETE http://localhost:8081/api/v1/admin/users/{userId} \
  -H "Authorization: Bearer $TOKEN"

Ingestion (POST, returns 202 Accepted)

# Post route execution data (JWT required)
curl -s -X POST http://localhost:8081/api/v1/data/executions \
  -H "Content-Type: application/json" \
  -H "X-Protocol-Version: 1" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"agentId":"agent-1","routeId":"route-1","executionId":"exec-1","status":"COMPLETED","startTime":"2026-03-11T00:00:00Z","endTime":"2026-03-11T00:00:01Z","processorExecutions":[]}'

# Post route diagram
curl -s -X POST http://localhost:8081/api/v1/data/diagrams \
  -H "Content-Type: application/json" \
  -H "X-Protocol-Version: 1" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"agentId":"agent-1","routeId":"route-1","version":1,"nodes":[],"edges":[]}'

# Post agent metrics
curl -s -X POST http://localhost:8081/api/v1/data/metrics \
  -H "Content-Type: application/json" \
  -H "X-Protocol-Version: 1" \
  -H "Authorization: Bearer $TOKEN" \
  -d '[{"agentId":"agent-1","metricName":"cpu","value":42.0,"timestamp":"2026-03-11T00:00:00Z","tags":{}}]'

# Post application log entries (raw JSON array — no wrapper)
curl -s -X POST http://localhost:8081/api/v1/data/logs \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d '[{
    "timestamp": "2026-03-25T10:00:00Z",
    "level": "INFO",
    "loggerName": "com.acme.MyService",
    "message": "Processing order #12345",
    "threadName": "main",
    "source": "app"
  }]'

Note: The X-Protocol-Version: 1 header is required on all /api/v1/data/** endpoints. Missing or wrong version returns 400.
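The contract is easy to state as a decision function. An illustrative sketch of the gate (the `protocol_status` helper is hypothetical — the real check runs server-side):

```shell
#!/bin/sh
# Only protocol version 1 is accepted on /api/v1/data/**;
# a missing or unsupported X-Protocol-Version maps to 400.
protocol_status() {
  case "$1" in
    1) echo 202 ;;   # accepted for ingestion
    *) echo 400 ;;   # missing or wrong version
  esac
}
protocol_status 1    # -> 202
protocol_status ""   # -> 400
```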

Health & Docs

# Health check
curl -s http://localhost:8081/api/v1/health

# OpenAPI JSON
curl -s http://localhost:8081/api/v1/api-docs

# Swagger UI
open http://localhost:8081/api/v1/swagger-ui.html

Search (Phase 2)

# Search by status (GET with basic filters)
curl -s -H "Authorization: Bearer $TOKEN" \
  "http://localhost:8081/api/v1/search/executions?status=COMPLETED&limit=10"

# Search by time range
curl -s -H "Authorization: Bearer $TOKEN" \
  "http://localhost:8081/api/v1/search/executions?timeFrom=2026-03-11T00:00:00Z&timeTo=2026-03-12T00:00:00Z"

# Advanced search (POST with full-text)
curl -s -X POST http://localhost:8081/api/v1/search/executions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"status":"FAILED","text":"NullPointerException","limit":20}'

# Transaction detail (nested processor tree)
curl -s -H "Authorization: Bearer $TOKEN" \
  http://localhost:8081/api/v1/executions/{executionId}

# Processor exchange snapshot
curl -s -H "Authorization: Bearer $TOKEN" \
  http://localhost:8081/api/v1/executions/{executionId}/processors/{index}/snapshot

# Render diagram as SVG
curl -s -H "Authorization: Bearer $TOKEN" \
  -H "Accept: image/svg+xml" \
  http://localhost:8081/api/v1/diagrams/{contentHash}/render

# Render diagram as JSON layout
curl -s -H "Authorization: Bearer $TOKEN" \
  -H "Accept: application/json" \
  http://localhost:8081/api/v1/diagrams/{contentHash}/render

Search response format: { "data": [...], "total": N, "offset": 0, "limit": 50 }

Supported search filters (GET): status, timeFrom, timeTo, correlationId, limit, offset

Additional POST filters: durationMin, durationMax, text (global full-text), textInBody, textInHeaders, textInErrors
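The offset/limit contract in the response format supports straightforward client-side paging. A sketch (the echo stands in for the real authenticated curl call; `total` would come from the first response's "total" field):

```shell
#!/bin/sh
# Walk every page of a search result set by advancing offset by limit.
total=120   # assumed value of "total" from the first response
limit=50
offset=0
while [ "$offset" -lt "$total" ]; do
  echo "GET /api/v1/search/executions?limit=$limit&offset=$offset"
  offset=$((offset + limit))
done
```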

Agent Registry & SSE (Phase 3)

# Register an agent (uses bootstrap token, not JWT — see Authentication section above)
curl -s -X POST http://localhost:8081/api/v1/agents/register \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer my-secret-token" \
  -d '{"agentId":"agent-1","name":"Order Service","group":"order-service-prod","version":"1.0.0","routeIds":["route-1","route-2"],"capabilities":["deep-trace","replay"]}'

# Heartbeat (call every 30s)
curl -s -X POST http://localhost:8081/api/v1/agents/agent-1/heartbeat \
  -H "Authorization: Bearer $TOKEN"

# List agents (optionally filter by status)
curl -s -H "Authorization: Bearer $TOKEN" "http://localhost:8081/api/v1/agents"
curl -s -H "Authorization: Bearer $TOKEN" "http://localhost:8081/api/v1/agents?status=LIVE"

# Connect to SSE event stream (JWT via query parameter)
curl -s -N "http://localhost:8081/api/v1/agents/agent-1/events?token=$TOKEN"

# Send command to single agent
curl -s -X POST http://localhost:8081/api/v1/agents/agent-1/commands \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"type":"config-update","payload":{"samplingRate":0.5}}'

# Send command to agent group
curl -s -X POST http://localhost:8081/api/v1/agents/groups/order-service-prod/commands \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"type":"deep-trace","payload":{"routeId":"route-1","durationSeconds":60}}'

# Send route control command to agent group (start/stop/suspend/resume)
curl -s -X POST http://localhost:8081/api/v1/agents/groups/order-service-prod/commands \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"type":"route-control","payload":{"routeId":"route-1","action":"stop","nonce":"unique-uuid"}}'

# Broadcast command to all live agents
curl -s -X POST http://localhost:8081/api/v1/agents/commands \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"type":"config-update","payload":{"samplingRate":1.0}}'

# Acknowledge command delivery
curl -s -X POST http://localhost:8081/api/v1/agents/agent-1/commands/{commandId}/ack \
  -H "Authorization: Bearer $TOKEN"

Agent lifecycle: LIVE (heartbeat within 90s) → STALE (missed 3 heartbeats) → DEAD (5 min after STALE). DEAD agents are kept indefinitely.
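With the default thresholds (STALE after 90s of silence, DEAD another 300s later), the state an agent lands in is a pure function of its heartbeat age. A sketch (`agent_state` is an illustrative helper, not server code):

```shell
#!/bin/sh
# Derive the lifecycle state from seconds since the last heartbeat,
# assuming the default thresholds: STALE at 90s, DEAD at 90s + 300s = 390s.
agent_state() {
  if [ "$1" -lt 90 ]; then echo LIVE
  elif [ "$1" -lt 390 ]; then echo STALE
  else echo DEAD
  fi
}
agent_state 30    # -> LIVE
agent_state 120   # -> STALE
agent_state 400   # -> DEAD
```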

Server restart resilience: The agent registry is in-memory and lost on server restart. Agents auto-re-register on their next heartbeat or SSE connection — the server reconstructs registry entries from JWT claims (subject, application). Route catalog uses ClickHouse execution data as fallback until agents re-register with full route IDs. Agents should also handle 404 on heartbeat by triggering a full re-registration.

SSE events: config-update, deep-trace, replay, route-control commands pushed in real time. Server sends ping keepalive every 15s.

Command expiry: Unacknowledged commands expire after 60 seconds.

Route control responses: Route control commands return CommandGroupResponse with per-agent status, response count, and timed-out agent IDs.

Backpressure

When the write buffer is full (default capacity: 50,000), ingestion endpoints return 503 Service Unavailable. Already-buffered data is not lost.
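A client can treat the 503 as a backoff signal and retry. A hypothetical sketch (`retry_503` is not part of the product; the wrapped command is expected to print an HTTP status code, e.g. `curl -s -o /dev/null -w '%{http_code}' ...`):

```shell
#!/bin/sh
# Retry a status-printing command while it reports 503,
# with a small linear backoff and a bounded number of attempts.
retry_503() {
  attempt=1
  while [ "$attempt" -le 5 ]; do
    code=$("$@")
    if [ "$code" != 503 ]; then
      echo "$code"        # pass the final status through
      return 0
    fi
    sleep "$attempt"      # back off: 1s, 2s, 3s, ...
    attempt=$((attempt + 1))
  done
  return 1                # still saturated after all attempts
}
```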

Configuration

Key settings in cameleer-server-app/src/main/resources/application.yml. All custom properties live under cameleer.server.*. Env vars are a mechanical 1:1 mapping (dots to underscores, uppercase).
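Because the mapping is mechanical, it can be reproduced with a one-liner; a quick sanity check (the `prop_to_env` helper is illustrative):

```shell
#!/bin/sh
# Property name -> env var: dots become underscores, letters uppercased.
prop_to_env() {
  echo "$1" | tr '.a-z' '_A-Z'
}
prop_to_env cameleer.server.ingestion.buffercapacity
# -> CAMELEER_SERVER_INGESTION_BUFFERCAPACITY
```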

Security (cameleer.server.security.*):

| Setting | Default | Env var | Description |
| --- | --- | --- | --- |
| cameleer.server.security.bootstraptoken | (required) | CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKEN | Bootstrap token for agent registration |
| cameleer.server.security.bootstraptokenprevious | (empty) | CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKENPREVIOUS | Previous bootstrap token for rotation |
| cameleer.server.security.uiuser | admin | CAMELEER_SERVER_SECURITY_UIUSER | UI login username |
| cameleer.server.security.uipassword | admin | CAMELEER_SERVER_SECURITY_UIPASSWORD | UI login password |
| cameleer.server.security.uiorigin | http://localhost:5173 | CAMELEER_SERVER_SECURITY_UIORIGIN | CORS allowed origin for UI |
| cameleer.server.security.corsallowedorigins | (empty) | CAMELEER_SERVER_SECURITY_CORSALLOWEDORIGINS | Comma-separated CORS origins — overrides uiorigin when set |
| cameleer.server.security.jwtsecret | (random) | CAMELEER_SERVER_SECURITY_JWTSECRET | HMAC secret for JWT signing. If set, tokens survive restarts |
| cameleer.server.security.accesstokenexpiryms | 3600000 | CAMELEER_SERVER_SECURITY_ACCESSTOKENEXPIRYMS | JWT access token lifetime (1h) |
| cameleer.server.security.refreshtokenexpiryms | 604800000 | CAMELEER_SERVER_SECURITY_REFRESHTOKENEXPIRYMS | Refresh token lifetime (7d) |
| cameleer.server.security.infrastructureendpoints | true | CAMELEER_SERVER_SECURITY_INFRASTRUCTUREENDPOINTS | Show DB/ClickHouse admin endpoints. Set false in SaaS-managed mode |

OIDC resource server (cameleer.server.security.oidc.*):

| Setting | Default | Env var | Description |
| --- | --- | --- | --- |
| cameleer.server.security.oidc.issueruri | (empty) | CAMELEER_SERVER_SECURITY_OIDC_ISSUERURI | OIDC issuer URI — enables resource server mode |
| cameleer.server.security.oidc.jwkseturi | (empty) | CAMELEER_SERVER_SECURITY_OIDC_JWKSETURI | Direct JWKS URL — bypasses OIDC discovery |
| cameleer.server.security.oidc.audience | (empty) | CAMELEER_SERVER_SECURITY_OIDC_AUDIENCE | Expected JWT audience |
| cameleer.server.security.oidc.tlsskipverify | false | CAMELEER_SERVER_SECURITY_OIDC_TLSSKIPVERIFY | Skip TLS cert verification for OIDC calls |

Note: OIDC login configuration (issuer, client ID, client secret, roles claim, default roles) is stored in the database and managed via the admin API (PUT /api/v1/admin/oidc) or admin UI. The env vars above are for resource server mode (M2M token validation) only.

Ingestion (cameleer.server.ingestion.*):

| Setting | Default | Env var | Description |
| --- | --- | --- | --- |
| cameleer.server.ingestion.buffercapacity | 50000 | CAMELEER_SERVER_INGESTION_BUFFERCAPACITY | Max items in write buffer |
| cameleer.server.ingestion.batchsize | 5000 | CAMELEER_SERVER_INGESTION_BATCHSIZE | Items per batch insert |
| cameleer.server.ingestion.flushintervalms | 5000 | CAMELEER_SERVER_INGESTION_FLUSHINTERVALMS | Buffer flush interval (ms) |
| cameleer.server.ingestion.bodysizelimit | 16384 | CAMELEER_SERVER_INGESTION_BODYSIZELIMIT | Max body size per execution (bytes) |

Agent registry (cameleer.server.agentregistry.*):

| Setting | Default | Env var | Description |
| --- | --- | --- | --- |
| cameleer.server.agentregistry.heartbeatintervalms | 30000 | CAMELEER_SERVER_AGENTREGISTRY_HEARTBEATINTERVALMS | Expected heartbeat interval (ms) |
| cameleer.server.agentregistry.stalethresholdms | 90000 | CAMELEER_SERVER_AGENTREGISTRY_STALETHRESHOLDMS | Time before agent marked STALE (ms) |
| cameleer.server.agentregistry.deadthresholdms | 300000 | CAMELEER_SERVER_AGENTREGISTRY_DEADTHRESHOLDMS | Time after STALE before DEAD (ms) |
| cameleer.server.agentregistry.pingintervalms | 15000 | CAMELEER_SERVER_AGENTREGISTRY_PINGINTERVALMS | SSE ping keepalive interval (ms) |
| cameleer.server.agentregistry.commandexpiryms | 60000 | CAMELEER_SERVER_AGENTREGISTRY_COMMANDEXPIRYMS | Pending command TTL (ms) |
| cameleer.server.agentregistry.lifecyclecheckintervalms | 10000 | CAMELEER_SERVER_AGENTREGISTRY_LIFECYCLECHECKINTERVALMS | Lifecycle monitor interval (ms) |

Runtime (cameleer.server.runtime.*):

| Setting | Default | Env var | Description |
| --- | --- | --- | --- |
| cameleer.server.runtime.enabled | true | CAMELEER_SERVER_RUNTIME_ENABLED | Enable Docker orchestration |
| cameleer.server.runtime.baseimage | cameleer-runtime-base:latest | CAMELEER_SERVER_RUNTIME_BASEIMAGE | Base Docker image for app containers |
| cameleer.server.runtime.dockernetwork | cameleer | CAMELEER_SERVER_RUNTIME_DOCKERNETWORK | Primary Docker network |
| cameleer.server.runtime.jarstoragepath | /data/jars | CAMELEER_SERVER_RUNTIME_JARSTORAGEPATH | JAR file storage directory |
| cameleer.server.runtime.jardockervolume | (empty) | CAMELEER_SERVER_RUNTIME_JARDOCKERVOLUME | Docker volume for JAR sharing |
| cameleer.server.runtime.routingmode | path | CAMELEER_SERVER_RUNTIME_ROUTINGMODE | Traefik routing mode: path or subdomain |
| cameleer.server.runtime.routingdomain | localhost | CAMELEER_SERVER_RUNTIME_ROUTINGDOMAIN | Domain for Traefik routing labels |
| cameleer.server.runtime.serverurl | (empty) | CAMELEER_SERVER_RUNTIME_SERVERURL | Server URL injected into app containers |
| cameleer.server.runtime.agenthealthport | 9464 | CAMELEER_SERVER_RUNTIME_AGENTHEALTHPORT | Agent health check port |
| cameleer.server.runtime.healthchecktimeout | 60 | CAMELEER_SERVER_RUNTIME_HEALTHCHECKTIMEOUT | Health check timeout (seconds) |
| cameleer.server.runtime.container.memorylimit | 512m | CAMELEER_SERVER_RUNTIME_CONTAINER_MEMORYLIMIT | Default memory limit for app containers |
| cameleer.server.runtime.container.cpushares | 512 | CAMELEER_SERVER_RUNTIME_CONTAINER_CPUSHARES | Default CPU shares for app containers |

Other (cameleer.server.*):

| Setting | Default | Env var | Description |
| --- | --- | --- | --- |
| cameleer.server.catalog.discoveryttldays | 7 | CAMELEER_SERVER_CATALOG_DISCOVERYTTLDAYS | Days before stale discovered apps auto-hide from sidebar |
| cameleer.server.tenant.id | default | CAMELEER_SERVER_TENANT_ID | Tenant identifier |
| cameleer.server.indexer.debouncems | 2000 | CAMELEER_SERVER_INDEXER_DEBOUNCEMS | Search indexer debounce delay (ms) |
| cameleer.server.indexer.queuesize | 10000 | CAMELEER_SERVER_INDEXER_QUEUESIZE | Search indexer queue capacity |
| cameleer.server.license.token | (empty) | CAMELEER_SERVER_LICENSE_TOKEN | License token |
| cameleer.server.license.publickey | (empty) | CAMELEER_SERVER_LICENSE_PUBLICKEY | License verification public key |
| cameleer.server.clickhouse.url | jdbc:clickhouse://localhost:8123/cameleer | CAMELEER_SERVER_CLICKHOUSE_URL | ClickHouse JDBC URL |
| cameleer.server.clickhouse.username | default | CAMELEER_SERVER_CLICKHOUSE_USERNAME | ClickHouse user |
| cameleer.server.clickhouse.password | (empty) | CAMELEER_SERVER_CLICKHOUSE_PASSWORD | ClickHouse password |

Web UI Development

cd ui
npm install
npm run dev          # Vite dev server on http://localhost:5173 (proxies /api to :8081)
npm run build        # Production build to ui/dist/

Login with admin / admin (or whatever CAMELEER_SERVER_SECURITY_UIUSER / CAMELEER_SERVER_SECURITY_UIPASSWORD are set to).

The UI uses runtime configuration via public/config.js. In Kubernetes, a ConfigMap overrides this file to set the correct API base URL.

Regenerate API Types

When the backend OpenAPI spec changes:

cd ui
npm run generate-api   # Requires backend running on :8081

Running Tests

Integration tests use Testcontainers (starts PostgreSQL automatically — requires Docker):

# All tests
mvn verify

# Unit tests only (no Docker needed)
mvn test -pl cameleer-server-core

# Specific integration test
mvn test -pl cameleer-server-app -Dtest=ExecutionControllerIT

Verify Database Data

After posting data and waiting for the flush interval (5s default, per cameleer.server.ingestion.flushintervalms):

docker exec -it cameleer-server-postgres-1 psql -U cameleer -d cameleer \
  -c "SELECT count(*) FROM route_executions"
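Because writes are buffered, a freshly posted execution can take a few seconds to appear. A small polling helper makes the check deterministic (the `wait_for` function is hypothetical):

```shell
#!/bin/sh
# Run a command every second until it prints a non-empty, non-zero value
# or the timeout (in seconds) expires.
wait_for() {
  timeout=$1; shift
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    out=$("$@")
    if [ -n "$out" ] && [ "$out" != 0 ]; then
      echo "$out"
      return 0
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  return 1
}

# Example: wait up to 30s for rows to land (same psql query as above)
# wait_for 30 docker exec cameleer-server-postgres-1 \
#   psql -U cameleer -d cameleer -tAc "SELECT count(*) FROM route_executions"
```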

Kubernetes Deployment

The full stack is deployed to k3s via CI/CD on push to main. K8s manifests are in deploy/.

Architecture

cameleer namespace:
  PostgreSQL (StatefulSet, 10Gi PVC)       ← cameleer-postgres:5432 (ClusterIP)
  ClickHouse (StatefulSet, 10Gi PVC)       ← cameleer-clickhouse:8123 (ClusterIP)
  cameleer-server (Deployment)            ← NodePort 30081
  cameleer-ui (Deployment, Nginx)         ← NodePort 30090
  cameleer-deploy-demo (Deployment)        ← NodePort 30092
  Logto Server (Deployment)               ← NodePort 30951/30952
  Logto PostgreSQL (StatefulSet, 1Gi)     ← ClusterIP

cameleer-demo namespace:
  (deployed Camel applications — managed by cameleer-deploy-demo)

Access (from your network)

| Service | URL |
| --- | --- |
| Web UI | http://192.168.50.86:30090 |
| Server API | http://192.168.50.86:30081/api/v1/health |
| Swagger UI | http://192.168.50.86:30081/api/v1/swagger-ui.html |
| Deploy Demo | http://192.168.50.86:30092 |
| Logto API | LOGTO_ENDPOINT secret (NodePort 30951 direct, or behind reverse proxy) |
| Logto Admin | LOGTO_ADMIN_ENDPOINT secret (NodePort 30952 direct, or behind reverse proxy) |

CI/CD Pipeline

Push to main triggers: build (UI npm + Maven, unit tests) → docker (buildx amd64 for server + UI, push to Gitea registry) → deploy (kubectl apply + rolling update).

Required Gitea org secrets:

  • REGISTRY_TOKEN, KUBECONFIG_BASE64
  • CAMELEER_SERVER_SECURITY_BOOTSTRAPTOKEN, CAMELEER_SERVER_SECURITY_JWTSECRET
  • POSTGRES_USER, POSTGRES_PASSWORD, POSTGRES_DB
  • CLICKHOUSE_USER, CLICKHOUSE_PASSWORD
  • CAMELEER_SERVER_SECURITY_UIUSER (optional), CAMELEER_SERVER_SECURITY_UIPASSWORD (optional)
  • LOGTO_PG_USER, LOGTO_PG_PASSWORD
  • LOGTO_ENDPOINT (public-facing Logto URL, e.g., https://auth.cameleer.my.domain), LOGTO_ADMIN_ENDPOINT (admin console URL)
  • CAMELEER_SERVER_SECURITY_OIDCISSUERURI (optional, for resource server M2M token validation), CAMELEER_SERVER_SECURITY_OIDCAUDIENCE (optional, API resource indicator), CAMELEER_SERVER_SECURITY_OIDCTLSSKIPVERIFY (optional, skip TLS cert verification for self-signed CAs)

Manual K8s Commands

# Check pod status
kubectl -n cameleer get pods

# View server logs
kubectl -n cameleer logs -f deploy/cameleer-server

# View PostgreSQL logs
kubectl -n cameleer logs -f statefulset/cameleer-postgres

# View ClickHouse logs
kubectl -n cameleer logs -f statefulset/cameleer-clickhouse

# Restart server
kubectl -n cameleer rollout restart deployment/cameleer-server