Compare commits — `2835d08418...main` (156 Commits)

SHA1s of the compared commits:

c5b6f2bbad 83c3ac3ef3 7dd7317cb8 2654271494 888f589934 9aad2f3871 cbaac2bfa5 7529a9ce99 09309de982 56c41814fc
68704e15b4 510206c752 58e9695b4c f27a0044f1 5c9323cfed 2dcbd5a772 f9b5f235cc 0b419db9f1 5f6f9e523d 35319dc666
3c2409ed6e ca401363ec b5ee9e1d1f 75a41929c4 d58c8cde2e 64608a7677 48ce75bf38 0bbe5d6623 e1ac896a6e 58009d7c23
b799d55835 166568edea f049a0a6a0 f8e382c217 c7e5c7fa2d 0995ab35c4 480a53c80c d3ce5e861b e5c8fff0f9 21db92ff00
165c9f10e3 ade1733418 0cf64b2928 0fc9c8cb4c fe4a6dbf24 9cfe3985d0 18da187960 9c1bd24f16 177673ba62 77f5c82dfe
663a6624a7 cc3cd610b2 b6239bdb6b 0ae27ad9ed e00848dc65 f31975e0ef 2c0cf7dc9c fb7b15f539 1d7009d69c 99a91a57be
427988bcc8 a208f2eec7 13f218d522 900fba5af6 b3d1dd377d e36c82c4db d192f6b57c fe1681e6e8 571f85cd0f 25d2a3014a
1a97e2146e d1150e5dd8 b0995d84bc 9756a20223 1b4b522233 48217e0034 c3ecff9d45 07099357af ed0e616109 382e1801a7
2312a7304d 47d5611462 9043dc00b0 a141e99a07 15d00f039c 064c302073 35748ea7a1 e558494f8d 1f0ab002d6 242ef1f0af
c6aef5ab35 007597715a b6e54db6ec e9f523f2b8 653f983a08 459cdfe427 652346dcd4 5304c8ee01 2c82f29aef 4371372a26
f8dccaae2b 9ecc9ee72a 9c54313ff1 e5eb48b0fa b655de3975 4e19f925c6 8a7f9cb370 b5ecd39100 629a009b36 ffdaeabc9f
703bd412ed 4d4c59efe3 837e5d46f5 0a71bca7b8 b7b6bd2a96 d33c039a17 6d5ce60608 d595746830 5a7c0ce4bc 3a649f40cd
b1bdb88ea4 0e4166bd5f 42fb6c8b8c 1579f10a41 063a4a5532 98a7b7819f e96c3cd0cf b7c0a225f5 f487e6caef bb06c4c689
5c48b780b2 4f5a11f715 cc193a1075 08efdfa9c5 00c7c0cd71 d067490f71 52ff385b04 6052975750 0434299d53 97f25b4c7e
6591f2fde3 24464c0772 e4ccce1e3b 76352c0d6f e716dbf8ca 76129d407e 9b1240274d a79eafeaf4 9b851c4622 d3e86b9d77
7f9cfc7f18 06fa7d832f d580b6e90c ff95187707 1a376eb25f 58ec67aef9
@@ -53,18 +53,18 @@ Env-scoped read-path controllers (`AlertController`, `AlertRuleController`, `Ale
 ### Env-scoped (user-facing data & config)
 
-- `AppController` — `/api/v1/environments/{envSlug}/apps`. GET list / POST create / GET `{appSlug}` / DELETE `{appSlug}` / GET `{appSlug}/versions` / POST `{appSlug}/versions` (JAR upload) / PUT `{appSlug}/container-config`. App slug uniqueness is per-env (`(env, app_slug)` is the natural key). `CreateAppRequest` body has no env (path), validates slug regex.
-- `DeploymentController` — `/api/v1/environments/{envSlug}/apps/{appSlug}/deployments`. GET list / POST create (body `{ appVersionId }`) / POST `{id}/stop` / POST `{id}/promote` (body `{ targetEnvironment: slug }` — target app slug must exist in target env) / GET `{id}/logs`.
-- `ApplicationConfigController` — `/api/v1/environments/{envSlug}`. GET `/config` (list), GET/PUT `/apps/{appSlug}/config`, GET `/apps/{appSlug}/processor-routes`, POST `/apps/{appSlug}/config/test-expression`. PUT also pushes `CONFIG_UPDATE` to LIVE agents in this env.
+- `AppController` — `/api/v1/environments/{envSlug}/apps`. GET list / POST create / GET `{appSlug}` / DELETE `{appSlug}` / GET `{appSlug}/versions` / POST `{appSlug}/versions` (JAR upload) / PUT `{appSlug}/container-config` / GET `{appSlug}/dirty-state` (returns `DirtyStateResponse{dirty, lastSuccessfulDeploymentId, differences}` — compares current JAR+config against last RUNNING deployment snapshot; dirty=true when no snapshot exists). App slug uniqueness is per-env (`(env, app_slug)` is the natural key). `CreateAppRequest` body has no env (path), validates slug regex. Injects `DirtyStateCalculator` bean (registered in `RuntimeBeanConfig`, requires `ObjectMapper` with `JavaTimeModule`).
+- `DeploymentController` — `/api/v1/environments/{envSlug}/apps/{appSlug}/deployments`. GET list / POST create (body `{ appVersionId }`) / POST `{id}/stop` / POST `{id}/promote` (body `{ targetEnvironment: slug }` — target app slug must exist in target env) / GET `{id}/logs`. All lifecycle ops (`POST /` deploy, `POST /{id}/stop`, `POST /{id}/promote`) audited under `AuditCategory.DEPLOYMENT`. Action codes: `deploy_app`, `stop_deployment`, `promote_deployment`. Acting user resolved via the `user:` prefix-strip convention; both SUCCESS and FAILURE branches write audit rows. `created_by` (TEXT, nullable) populated from `SecurityContextHolder` and surfaced on the `Deployment` DTO.
+- `ApplicationConfigController` — `/api/v1/environments/{envSlug}`. GET `/config` (list), GET/PUT `/apps/{appSlug}/config`, GET `/apps/{appSlug}/processor-routes`, POST `/apps/{appSlug}/config/test-expression`. PUT accepts `?apply=staged|live` (default `live`). `live` saves to DB and pushes `CONFIG_UPDATE` SSE to live agents in this env (existing behavior); `staged` saves to DB only, skipping the SSE push — used by the unified app deployment page. Audit action is `stage_app_config` for staged writes, `update_app_config` for live. Invalid `apply` values return 400.
 - `AppSettingsController` — `/api/v1/environments/{envSlug}`. GET `/app-settings` (list), GET/PUT/DELETE `/apps/{appSlug}/settings`. ADMIN/OPERATOR only.
-- `SearchController` — `/api/v1/environments/{envSlug}`. GET `/executions`, POST `/executions/search`, GET `/stats`, `/stats/timeseries`, `/stats/timeseries/by-app`, `/stats/timeseries/by-route`, `/stats/punchcard`, `/attributes/keys`, `/errors/top`.
-- `LogQueryController` — GET `/api/v1/environments/{envSlug}/logs` (filters: source (multi, comma-split, OR-joined), level (multi, comma-split, OR-joined), application, agentId, exchangeId, logger, q, time range; sort asc/desc). Cursor-paginated, returns `{ data, nextCursor, hasMore, levelCounts }`; cursor is base64url of `"{timestampIso}|{insert_id_uuid}"` — same-millisecond tiebreak via the `insert_id` UUID column on `logs`.
+- `SearchController` — `/api/v1/environments/{envSlug}`. GET `/executions`, POST `/executions/search`, GET `/stats`, `/stats/timeseries`, `/stats/timeseries/by-app`, `/stats/timeseries/by-route`, `/stats/punchcard`, `/attributes/keys`, `/errors/top`. GET `/executions` accepts repeat `attr` query params: `attr=order` (key-exists), `attr=order:47` (exact), `attr=order:4*` (wildcard — `*` maps to SQL LIKE `%`). First `:` splits key/value; later colons stay in the value. Invalid keys → 400. POST `/executions/search` accepts the same filters via `SearchRequest.attributeFilters` in the body.
+- `LogQueryController` — GET `/api/v1/environments/{envSlug}/logs` (filters: source (multi, comma-split, OR-joined), level (multi, comma-split, OR-joined), application, agentId, exchangeId, logger, q, time range, instanceIds (multi, comma-split, AND-joined as WHERE instance_id IN (...) — used by the Checkpoint detail drawer to scope logs to a deployment's replicas); sort asc/desc). Cursor-paginated, returns `{ data, nextCursor, hasMore, levelCounts }`; cursor is base64url of `"{timestampIso}|{insert_id_uuid}"` — same-millisecond tiebreak via the `insert_id` UUID column on `logs`.
 - `RouteCatalogController` — GET `/api/v1/environments/{envSlug}/routes` (merged route catalog from registry + ClickHouse; env filter unconditional).
 - `RouteMetricsController` — GET `/api/v1/environments/{envSlug}/routes/metrics`, GET `/api/v1/environments/{envSlug}/routes/metrics/processors`.
 - `AgentListController` — GET `/api/v1/environments/{envSlug}/agents` (registered agents with runtime metrics, filtered to env).
 - `AgentEventsController` — GET `/api/v1/environments/{envSlug}/agents/events` (lifecycle events; cursor-paginated, returns `{ data, nextCursor, hasMore }`; order `(timestamp DESC, insert_id DESC)`; cursor is base64url of `"{timestampIso}|{insert_id_uuid}"` — `insert_id` is a stable UUID column used as a same-millisecond tiebreak).
 - `AgentMetricsController` — GET `/api/v1/environments/{envSlug}/agents/{agentId}/metrics` (JVM/Camel metrics). Rejects cross-env agents (404) as defence-in-depth.
-- `DiagramRenderController` — GET `/api/v1/environments/{envSlug}/apps/{appSlug}/routes/{routeId}/diagram` (env-scoped lookup). Also GET `/api/v1/diagrams/{contentHash}/render` (flat — content hashes are globally unique).
+- `DiagramRenderController` — GET `/api/v1/environments/{envSlug}/apps/{appSlug}/routes/{routeId}/diagram` returns the most recent diagram for (app, env, route) via `DiagramStore.findLatestContentHashForAppRoute`. Registry-independent — routes whose publishing agents were removed still resolve. Also GET `/api/v1/diagrams/{contentHash}/render` (flat — content hashes are globally unique), the point-in-time path consumed by the exchange viewer via `ExecutionDetail.diagramContentHash`.
 - `AlertRuleController` — `/api/v1/environments/{envSlug}/alerts/rules`. GET list / POST create / GET `{id}` / PUT `{id}` / DELETE `{id}` / POST `{id}/enable` / POST `{id}/disable` / POST `{id}/render-preview` / POST `{id}/test-evaluate`. OPERATOR+ for mutations, VIEWER+ for reads. CRITICAL: attribute keys in `ExchangeMatchCondition.filter.attributes` are validated at rule-save time against `^[a-zA-Z0-9._-]+$` — they are later inlined into ClickHouse SQL. `AgentLifecycleCondition` is allowlist-only — the `AgentLifecycleEventType` enum (REGISTERED / RE_REGISTERED / DEREGISTERED / WENT_STALE / WENT_DEAD / RECOVERED) plus the record compact ctor (non-empty `eventTypes`, `withinSeconds ≥ 1`) do the validation; custom agent-emitted event types are tracked in backlog issue #145. Webhook validation: verifies `outboundConnectionId` exists and `isAllowedInEnvironment`. Null notification templates default to `""` (NOT NULL constraint). Audit: `ALERT_RULE_CHANGE`.
 - `AlertController` — `/api/v1/environments/{envSlug}/alerts`. GET list (inbox filtered by userId/groupIds/roleNames via `InAppInboxQuery`; optional multi-value `state`, `severity`, tri-state `acked`, tri-state `read` query params; soft-deleted rows always excluded) / GET `/unread-count` / GET `{id}` / POST `{id}/ack` / POST `{id}/read` / POST `/bulk-read` / POST `/bulk-ack` (VIEWER+) / DELETE `{id}` (OPERATOR+, soft-delete) / POST `/bulk-delete` (OPERATOR+) / POST `{id}/restore` (OPERATOR+, clears `deleted_at`). `requireLiveInstance` helper returns 404 on soft-deleted rows; `restore` explicitly fetches regardless of `deleted_at`. `BulkIdsRequest` is the shared body for bulk-read/ack/delete (`{ instanceIds }`). `AlertDto` includes `readAt`; `deletedAt` is intentionally NOT on the wire. Inbox SQL: `? = ANY(target_user_ids) OR target_group_ids && ? OR target_role_names && ?` — requires at least one matching target (no broadcast concept).
 - `AlertSilenceController` — `/api/v1/environments/{envSlug}/alerts/silences`. GET list / POST create / DELETE `{id}`. 422 if `endsAt <= startsAt`. OPERATOR+ for mutations, VIEWER+ for list. Audit: `ALERT_SILENCE_CHANGE`.
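The keyset cursor described above for `LogQueryController` and `AgentEventsController` — base64url of `"{timestampIso}|{insert_id_uuid}"` — can be sketched as follows. Class and method names here are illustrative, not the actual server code.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Minimal sketch of the documented cursor convention, assuming the timestamp
// is an ISO-8601 string and insert_id is a UUID string (which never contains '|').
final class KeysetCursor {
    static String encode(String timestampIso, String insertId) {
        String raw = timestampIso + "|" + insertId;
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(raw.getBytes(StandardCharsets.UTF_8));
    }

    static String[] decode(String cursor) {
        String raw = new String(Base64.getUrlDecoder().decode(cursor), StandardCharsets.UTF_8);
        int sep = raw.lastIndexOf('|');              // UUIDs contain no '|', so this split is unambiguous
        return new String[] { raw.substring(0, sep), raw.substring(sep + 1) };
    }
}
```

The `(timestamp, insert_id)` pair gives a total order, so `WHERE (ts, insert_id) < (?, ?)` resumes exactly where the last page ended even when many rows share a millisecond.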
@@ -109,6 +109,7 @@ Env-scoped read-path controllers (`AlertController`, `AlertRuleController`, `Ale
 - `UsageAnalyticsController` — GET `/api/v1/admin/usage` (ClickHouse `usage_events`).
 - `ClickHouseAdminController` — GET `/api/v1/admin/clickhouse/**` (conditional on `infrastructureendpoints` flag).
 - `DatabaseAdminController` — GET `/api/v1/admin/database/**` (conditional on `infrastructureendpoints` flag).
+- `ServerMetricsAdminController` — `/api/v1/admin/server-metrics/**`. GET `/catalog`, GET `/instances`, POST `/query`. Generic read API over the `server_metrics` ClickHouse table so SaaS dashboards don't need direct CH access. Delegates to `ServerMetricsQueryStore` (impl `ClickHouseServerMetricsQueryStore`). Visibility matches ClickHouse/Database admin: `@ConditionalOnProperty(infrastructureendpoints, matchIfMissing=true)` + class-level `@PreAuthorize("hasRole('ADMIN')")`. Validation: metric/tag regex `^[a-zA-Z0-9._]+$`, statistic regex `^[a-z_]+$`, `to - from ≤ 31 days`, stepSeconds ∈ [10, 3600], response capped at 500 series. `IllegalArgumentException` → 400. `/query` supports `raw` + `delta` modes (delta does per-`server_instance_id` positive-clipped differences, then aggregates across instances). Derived `statistic=mean` for timers computes `sum(total|total_time)/sum(count)` per bucket.
 
 ### Other (flat)
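The `delta` query mode described above — per-instance positive-clipped successive differences — can be sketched like this. A minimal illustration of the arithmetic only, not the actual `ClickHouseServerMetricsQueryStore` query.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: successive differences over one instance's counter samples,
// clipped at zero so a counter reset (process restart) yields 0 instead
// of a large negative spike. Aggregation across instances happens after.
final class DeltaSeries {
    static List<Double> positiveClippedDeltas(List<Double> samples) {
        List<Double> deltas = new ArrayList<>();
        for (int i = 1; i < samples.size(); i++) {
            double d = samples.get(i) - samples.get(i - 1);
            deltas.add(Math.max(0.0, d));   // clip negatives from restarts
        }
        return deltas;
    }
}
```

Clipping per instance before summing is what makes the mode safe across the restart-rotating `server_instance_id` described in `ServerInstanceIdConfig` below.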
@@ -118,10 +119,10 @@ Env-scoped read-path controllers (`AlertController`, `AlertRuleController`, `Ale
 ## runtime/ — Docker orchestration
 
 - `DockerRuntimeOrchestrator` — implements RuntimeOrchestrator; Docker Java client (zerodep transport), container lifecycle
-- `DeploymentExecutor` — @Async staged deploy: PRE_FLIGHT -> PULL_IMAGE -> CREATE_NETWORK -> START_REPLICAS -> HEALTH_CHECK -> SWAP_TRAFFIC -> COMPLETE. Container names are `{tenantId}-{envSlug}-{appSlug}-{replicaIndex}` (globally unique on Docker daemon). Sets per-replica `CAMELEER_AGENT_INSTANCEID` env var to `{envSlug}-{appSlug}-{replicaIndex}`.
+- `DeploymentExecutor` — @Async staged deploy: PRE_FLIGHT -> PULL_IMAGE -> CREATE_NETWORK -> START_REPLICAS -> HEALTH_CHECK -> SWAP_TRAFFIC -> COMPLETE. Container names are `{tenantId}-{envSlug}-{appSlug}-{replicaIndex}-{generation}`, where `generation` is the first 8 chars of the deployment UUID — old and new replicas coexist during a blue/green swap. Per-replica `CAMELEER_AGENT_INSTANCEID` env var is `{envSlug}-{appSlug}-{replicaIndex}-{generation}`. Branches on `DeploymentStrategy.fromWire(config.deploymentStrategy())`: **blue-green** (default) starts all N → waits for all healthy → stops old (partial health = FAILED, preserves old untouched); **rolling** replaces replicas one at a time with rollback only for in-flight new containers (already-replaced old stay stopped; un-replaced old keep serving). DEGRADED is now only set by `DockerEventMonitor` post-deploy, never by the executor.
 - `DockerNetworkManager` — ensures bridge networks (cameleer-traefik, cameleer-env-{slug}), connects containers
 - `DockerEventMonitor` — persistent Docker event stream listener (die, oom, start, stop), updates deployment status
-- `TraefikLabelBuilder` — generates Traefik Docker labels for path-based or subdomain routing. Also emits `cameleer.replica` and `cameleer.instance-id` labels per container for labels-first identity.
+- `TraefikLabelBuilder` — generates Traefik Docker labels for path-based or subdomain routing. Per-container identity labels: `cameleer.replica` (index), `cameleer.generation` (deployment-scoped 8-char id — for Prometheus/Grafana deploy-boundary annotations), `cameleer.instance-id` (`{envSlug}-{appSlug}-{replicaIndex}-{generation}`). Router/service label keys are generation-agnostic so load balancing spans old + new replicas during a blue/green overlap.
 - `PrometheusLabelBuilder` — generates Prometheus Docker labels (`prometheus.scrape/path/port`) per runtime type for `docker_sd_configs` auto-discovery
 - `ContainerLogForwarder` — streams Docker container stdout/stderr to ClickHouse with `source='container'`. One follow-stream thread per container, batches lines every 2s/50 lines via `ClickHouseLogStore.insertBufferedBatch()`. 60-second max capture timeout.
 - `DisabledRuntimeOrchestrator` — no-op when runtime not enabled
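The generation-suffixed container-naming convention described above can be sketched as follows; the class and method names are illustrative, not the actual `DeploymentExecutor` internals.

```java
import java.util.UUID;

// Sketch of the documented naming scheme:
// {tenantId}-{envSlug}-{appSlug}-{replicaIndex}-{generation},
// where generation = first 8 chars of the deployment UUID.
final class ContainerNames {
    static String generation(UUID deploymentId) {
        return deploymentId.toString().substring(0, 8);
    }

    static String containerName(String tenantId, String envSlug, String appSlug,
                                int replicaIndex, UUID deploymentId) {
        // The generation suffix keeps names unique per deployment, so old and
        // new replicas can coexist on the Docker daemon during a blue/green swap.
        return tenantId + "-" + envSlug + "-" + appSlug + "-" + replicaIndex
                + "-" + generation(deploymentId);
    }
}
```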
@@ -129,11 +130,13 @@ Env-scoped read-path controllers (`AlertController`, `AlertRuleController`, `Ale
 ## metrics/ — Prometheus observability
 
 - `ServerMetrics` — centralized business metrics: gauges (agents by state, SSE connections, buffer depths), counters (ingestion drops, agent transitions, deployment outcomes, auth failures), timers (flush duration, deployment duration). Exposed via `/api/v1/prometheus`.
+- `ServerInstanceIdConfig` — `@Configuration`, exposes `@Bean("serverInstanceId") String`. Resolution precedence: `cameleer.server.instance-id` property → `HOSTNAME` env → `InetAddress.getLocalHost()` → random UUID. Fixed at boot; rotates across restarts so counters restart cleanly.
+- `ServerMetricsSnapshotScheduler` — `@Scheduled(fixedDelayString = "${cameleer.server.self-metrics.interval-ms:60000}")`. Walks `MeterRegistry.getMeters()` each tick, emits one `ServerMetricSample` per `Measurement` (Timer/DistributionSummary produce multiple rows per meter — one per Micrometer `Statistic`). Skips non-finite values; logs and swallows store failures. Disabled via `cameleer.server.self-metrics.enabled=false` (`@ConditionalOnProperty`). Write-only — no query endpoint yet; inspect via `/api/v1/admin/clickhouse/query`.
 
 ## storage/ — PostgreSQL repositories (JdbcTemplate)
 
 - `PostgresAppRepository`, `PostgresAppVersionRepository`, `PostgresEnvironmentRepository`
-- `PostgresDeploymentRepository` — includes JSONB replica_states, deploy_stage, findByContainerId
+- `PostgresDeploymentRepository` — includes JSONB replica_states, deploy_stage, findByContainerId. Also carries `deployed_config_snapshot` JSONB (Flyway V3) populated by `DeploymentExecutor` via `saveDeployedConfigSnapshot(UUID, DeploymentConfigSnapshot)` on successful RUNNING transition. Consumed by `DirtyStateCalculator` for the `/apps/{slug}/dirty-state` endpoint and by the UI for checkpoint restore.
 - `PostgresUserRepository`, `PostgresRoleRepository`, `PostgresGroupRepository`
 - `PostgresAuditRepository`, `PostgresOidcConfigRepository`, `PostgresClaimMappingRepository`, `PostgresSensitiveKeysRepository`
 - `PostgresAppSettingsRepository`, `PostgresApplicationConfigRepository`, `PostgresThresholdRepository`. Both `app_settings` and `application_config` are env-scoped (PK `(app_id, environment)` / `(application, environment)`); finders take `(app, env)` — no env-agnostic variants.
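The `serverInstanceId` resolution precedence described above — explicit property, then `HOSTNAME`, then local hostname, then a random UUID — is a first-non-blank-wins chain. A minimal sketch, assuming a supplier chain rather than the actual `ServerInstanceIdConfig` bean wiring:

```java
import java.util.UUID;
import java.util.function.Supplier;

// Sketch: walk the candidates in precedence order; the first non-blank
// value wins. The random UUID fallback matches the documented behavior of
// rotating the id across restarts.
final class InstanceId {
    @SafeVarargs
    static String resolve(Supplier<String>... candidates) {
        for (Supplier<String> c : candidates) {
            String v = c.get();
            if (v != null && !v.isBlank()) return v;   // first non-blank wins
        }
        return UUID.randomUUID().toString();           // last resort, fixed at boot
    }
}
```

Usage would pass `() -> env.getProperty("cameleer.server.instance-id")`, `() -> System.getenv("HOSTNAME")`, and so on, in that order.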
@@ -145,6 +148,8 @@ Env-scoped read-path controllers (`AlertController`, `AlertRuleController`, `Ale
 - `ClickHouseDiagramStore`, `ClickHouseAgentEventRepository`
 - `ClickHouseUsageTracker` — usage_events for billing
 - `ClickHouseRouteCatalogStore` — persistent route catalog with first_seen cache, warm-loaded on startup
+- `ClickHouseServerMetricsStore` — periodic dumps of the server's own Micrometer registry into the `server_metrics` table. Tenant-stamped (bound at the scheduler, not the bean); no `environment` column (server straddles envs). Batch-insert via `JdbcTemplate.batchUpdate` with `Map(String, String)` tag binding. Written by `ServerMetricsSnapshotScheduler`.
+- `ClickHouseServerMetricsQueryStore` — read side of `server_metrics` for dashboards. Implements `ServerMetricsQueryStore`. `catalog(from,to)` returns name+type+statistics+tagKeys, `listInstances(from,to)` returns server_instance_ids with first/last seen, `query(request)` builds bucketed time-series with `raw` or `delta` mode and supports a derived `mean` statistic for timers. All identifier inputs regex-validated; tenant_id always bound; max range 31 days; series count capped at 500. Exposed via `ServerMetricsAdminController`.
 
 ## search/ — ClickHouse search and log stores
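The derived `statistic=mean` for timers mentioned above aggregates as `sum(total_time) / sum(count)` per bucket rather than averaging per-instance means, which would weight instances with few samples too heavily. An illustrative arithmetic sketch:

```java
// Sketch: per time bucket, combine each instance's (count, total_time)
// pair and divide the sums — the correct pooled mean across instances.
final class TimerMean {
    static double bucketMean(double[] counts, double[] totalTimes) {
        double count = 0, total = 0;
        for (double c : counts) count += c;
        for (double t : totalTimes) total += t;
        return count == 0 ? 0.0 : total / count;   // guard empty buckets
    }
}
```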
@@ -8,8 +8,11 @@ paths:
 # CI/CD & Deployment
 
-- CI workflow: `.gitea/workflows/ci.yml` — build -> docker -> deploy on push to main or feature branches
+- CI workflow: `.gitea/workflows/ci.yml` — build -> docker -> deploy on push to main or feature branches. `paths-ignore` skips the whole pipeline for docs-only / `.planning/` / `.claude/` / `*.md` changes (push and PR triggers).
 - Build step skips integration tests (`-DskipITs`) — Testcontainers needs Docker daemon
+- Build caches (parallel `actions/cache@v4` steps in the `build` job): `~/.m2/repository` (key on all `pom.xml`), `~/.npm` (key on `ui/package-lock.json`), `ui/node_modules/.vite` (key on `ui/package-lock.json` + `ui/vite.config.ts`). UI install uses `npm ci --prefer-offline --no-audit --fund=false` so the npm cache is the primary source.
+- Maven build performance (set in `pom.xml` and `cameleer-server-app/pom.xml`): `useIncrementalCompilation=true` on the compiler plugin; Surefire uses `forkCount=1C` + `reuseForks=true` (one JVM per CPU core, reused across test classes); Failsafe keeps `forkCount=1` + `reuseForks=true`. Unit tests must not rely on per-class JVM isolation.
+- UI build script (`ui/package.json`): `build` is `vite build` only — the type-check pass was split out into `npm run typecheck` (run separately when you want a full `tsc --noEmit` sweep).
 - Docker: multi-stage build (`Dockerfile`), `$BUILDPLATFORM` for native Maven on ARM64 runner, amd64 runtime. `docker-entrypoint.sh` imports `/certs/ca.pem` into JVM truststore before starting the app (supports custom CAs for OIDC discovery without `CAMELEER_SERVER_SECURITY_OIDCTLSSKIPVERIFY`).
 - `REGISTRY_TOKEN` build arg required for `cameleer-common` dependency resolution
 - Registry: `gitea.siegeln.net/cameleer/cameleer-server` (container images)
@@ -28,15 +28,16 @@ paths:
 - `AppVersion` — record: id, appId, version, jarPath, detectedRuntimeType, detectedMainClass
 - `Environment` — record: id, slug, displayName, production, enabled, defaultContainerConfig, jarRetentionCount, color, createdAt. `color` is one of the 8 preset palette values validated by `EnvironmentColor.VALUES` and CHECK-constrained in PostgreSQL (V2 migration).
 - `EnvironmentColor` — constants: `DEFAULT = "slate"`, `VALUES = {slate,red,amber,green,teal,blue,purple,pink}`, `isValid(String)`.
-- `Deployment` — record: id, appId, appVersionId, environmentId, status, targetState, deploymentStrategy, replicaStates (JSONB), deployStage, containerId, containerName
-- `DeploymentStatus` — enum: STOPPED, STARTING, RUNNING, DEGRADED, STOPPING, FAILED
+- `Deployment` — record: id, appId, appVersionId, environmentId, status, targetState, deploymentStrategy, replicaStates (JSONB), deployStage, containerId, containerName, createdBy (String, user_id reference; nullable for pre-V4 historical rows)
+- `DeploymentStatus` — enum: STOPPED, STARTING, RUNNING, DEGRADED, STOPPING, FAILED. `DEGRADED` is reserved for post-deploy drift (a replica died after RUNNING); `DeploymentExecutor` now marks partial-healthy deploys FAILED, not DEGRADED.
 - `DeployStage` — enum: PRE_FLIGHT, PULL_IMAGE, CREATE_NETWORK, START_REPLICAS, HEALTH_CHECK, SWAP_TRAFFIC, COMPLETE
-- `DeploymentService` — createDeployment (deletes terminal deployments first), markRunning, markFailed, markStopped
+- `DeploymentStrategy` — enum: BLUE_GREEN, ROLLING. Stored on `ResolvedContainerConfig.deploymentStrategy` as kebab-case string (`"blue-green"` / `"rolling"`). `fromWire(String)` is the only conversion entry point; unknown/null inputs fall back to BLUE_GREEN so the executor dispatch site never null-checks or throws.
+- `DeploymentService` — createDeployment (calls `deleteFailedByAppAndEnvironment` first so FAILED rows don't pile up; STOPPED rows are preserved as restorable checkpoints), markRunning, markFailed, markStopped
 - `RuntimeType` — enum: AUTO, SPRING_BOOT, QUARKUS, PLAIN_JAVA, NATIVE
 - `RuntimeDetector` — probes JAR files at upload time: detects runtime from manifest Main-Class (Spring Boot loader, Quarkus entry point, plain Java) or native binary (non-ZIP magic bytes)
 - `ContainerRequest` — record: 20 fields for Docker container creation (includes runtimeType, customArgs, mainClass)
 - `ContainerStatus` — record: state, running, exitCode, error
-- `ResolvedContainerConfig` — record: typed config with memoryLimitMb, memoryReserveMb, cpuRequest, cpuLimit, appPort, exposedPorts, customEnvVars, stripPathPrefix, sslOffloading, routingMode, routingDomain, serverUrl, replicas, deploymentStrategy, routeControlEnabled, replayEnabled, runtimeType, customArgs, extraNetworks
+- `ResolvedContainerConfig` — record: typed config with memoryLimitMb, memoryReserveMb, cpuRequest, cpuLimit, appPort, exposedPorts, customEnvVars, stripPathPrefix, sslOffloading, routingMode, routingDomain, serverUrl, replicas, deploymentStrategy, routeControlEnabled, replayEnabled, runtimeType, customArgs, extraNetworks, externalRouting (default `true`; when `false`, `TraefikLabelBuilder` strips all `traefik.*` labels so the container is not publicly routed), certResolver (server-wide, sourced from `CAMELEER_SERVER_RUNTIME_CERTRESOLVER`; when blank the `tls.certresolver` label is omitted — use for dev installs with a static TLS store)
 - `RoutingMode` — enum for routing strategies
 - `ConfigMerger` — pure function: resolve(globalDefaults, envConfig, appConfig) -> ResolvedContainerConfig
 - `RuntimeOrchestrator` — interface: startContainer, stopContainer, getContainerStatus, getLogs, startLogCapture, stopLogCapture
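The `fromWire` contract described above — kebab-case wire strings, with unknown or null input falling back to BLUE_GREEN — can be sketched like this. An illustration of the documented contract, not the actual class.

```java
import java.util.Locale;

// Sketch: the only conversion entry point from the stored kebab-case string.
// Null and unknown values default to BLUE_GREEN so the dispatch site in the
// executor never needs a null check or a try/catch.
enum DeploymentStrategy {
    BLUE_GREEN, ROLLING;

    static DeploymentStrategy fromWire(String wire) {
        if (wire == null) return BLUE_GREEN;
        return switch (wire.toLowerCase(Locale.ROOT)) {
            case "rolling"    -> ROLLING;
            case "blue-green" -> BLUE_GREEN;
            default           -> BLUE_GREEN;   // unknown wire value: safe default
        };
    }
}
```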
@@ -46,14 +47,15 @@ paths:
 ## search/ — Execution search and stats
 
 - `SearchService` — search, count, stats, statsForApp, statsForRoute, timeseries, timeseriesForApp, timeseriesForRoute, timeseriesGroupedByApp, timeseriesGroupedByRoute, slaCompliance, slaCountsByApp, slaCountsByRoute, topErrors, activeErrorTypes, punchcard, distinctAttributeKeys. `statsForRoute`/`timeseriesForRoute` take `(routeId, applicationId)` — app filter is applied to `stats_1m_route`.
-- `SearchRequest` / `SearchResult` — search DTOs
+- `SearchRequest` / `SearchResult` — search DTOs. `SearchRequest.attributeFilters: List<AttributeFilter>` carries structured facet filters for execution attributes — key-only (exists), exact (key=value), or wildcard (`*` in value). The 21-arg legacy ctor is preserved for call-site churn; the compact ctor normalises null → `List.of()`.
+- `AttributeFilter(key, value)` — record with key regex `^[a-zA-Z0-9._-]+$` (inlined into SQL, same constraint as alerting), `value == null` means key-exists, `value` containing `*` becomes a SQL LIKE pattern via `toLikePattern()`.
 - `ExecutionStats`, `ExecutionSummary` — stats aggregation records
 - `StatsTimeseries`, `TopError` — timeseries and error DTOs
 - `LogSearchRequest` / `LogSearchResponse` — log search DTOs. `LogSearchRequest.sources` / `levels` are `List<String>` (null-normalized, multi-value OR); `cursor` + `limit` + `sort` drive keyset pagination. Response carries `nextCursor` + `hasMore` + per-level `levelCounts`.
 
 ## storage/ — Storage abstractions
 
-- `ExecutionStore`, `MetricsStore`, `MetricsQueryStore`, `StatsStore`, `DiagramStore`, `RouteCatalogStore`, `SearchIndex`, `LogIndex` — interfaces
+- `ExecutionStore`, `MetricsStore`, `MetricsQueryStore`, `StatsStore`, `DiagramStore`, `RouteCatalogStore`, `SearchIndex`, `LogIndex` — interfaces. `DiagramStore.findLatestContentHashForAppRoute(appId, routeId, env)` resolves the latest diagram by (app, env, route) without consulting the agent registry, so routes whose publishing agents were removed between app versions still resolve. `findContentHashForRoute(route, instance)` is retained for the ingestion path that stamps a per-execution `diagramContentHash` at ingest time (point-in-time link from `ExecutionDetail`/`ExecutionSummary`).
 - `RouteCatalogEntry` — record: applicationId, routeId, environment, firstSeen, lastSeen
 - `LogEntryResult` — log query result record
 - `model/` — `ExecutionDocument`, `MetricTimeSeries`, `MetricsSnapshot`
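The `attr` query-param convention described above — first `:` splits key from value, later colons stay in the value, bare key means key-exists, `*` maps to SQL LIKE `%` — can be sketched as follows. Names here are illustrative; the actual server uses `AttributeFilter` with `toLikePattern()`.

```java
// Sketch of parsing a repeat `attr` query param into a (key, value) filter.
// The key regex matches the documented constraint (keys are inlined into
// ClickHouse SQL, so invalid keys must be rejected — controller maps to 400).
final class AttrParam {
    static final java.util.regex.Pattern KEY =
            java.util.regex.Pattern.compile("^[a-zA-Z0-9._-]+$");

    record Filter(String key, String value) {}   // value == null → key-exists

    static Filter parse(String attr) {
        int sep = attr.indexOf(':');             // split on the FIRST colon only
        String key = sep < 0 ? attr : attr.substring(0, sep);
        if (!KEY.matcher(key).matches())
            throw new IllegalArgumentException("invalid attribute key: " + key);
        return new Filter(key, sep < 0 ? null : attr.substring(sep + 1));
    }

    static String toLikePattern(String value) {
        return value.replace("*", "%");          // wildcard → SQL LIKE
    }
}
```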
@@ -79,7 +81,7 @@ paths:
 - `AppSettings`, `AppSettingsRepository` — per-app-per-env settings config and persistence. Record carries `(applicationId, environment, …)`; repository methods are `findByApplicationAndEnvironment`, `findByEnvironment`, `save`, `delete(appId, env)`. `AppSettings.defaults(appId, env)` produces a default instance scoped to an environment.
 - `ThresholdConfig`, `ThresholdRepository` — alerting threshold config and persistence
 - `AuditService` — audit logging facade
-- `AuditRecord`, `AuditResult`, `AuditCategory` (enum: `INFRA, AUTH, USER_MGMT, CONFIG, RBAC, AGENT, OUTBOUND_CONNECTION_CHANGE, OUTBOUND_HTTP_TRUST_CHANGE`), `AuditRepository` — audit trail records and persistence
+- `AuditRecord`, `AuditResult`, `AuditCategory` (enum: `INFRA, AUTH, USER_MGMT, CONFIG, RBAC, AGENT, OUTBOUND_CONNECTION_CHANGE, OUTBOUND_HTTP_TRUST_CHANGE, ALERT_RULE_CHANGE, ALERT_SILENCE_CHANGE, DEPLOYMENT`), `AuditRepository` — audit trail records and persistence
 
 ## http/ — Outbound HTTP primitives (cross-cutting)
@@ -13,19 +13,28 @@ paths:
|
||||
When deployed via the cameleer-saas platform, this server orchestrates customer app containers using Docker. Key components:
|
||||
|
||||
- **ConfigMerger** (`core/runtime/ConfigMerger.java`) — pure function: resolve(globalDefaults, envConfig, appConfig) -> ResolvedContainerConfig. Three-layer merge: global (application.yml) -> environment (defaultContainerConfig JSONB) -> app (containerConfig JSONB). Includes `runtimeType` (default `"auto"`) and `customArgs` (default `""`).
|
||||
- **TraefikLabelBuilder** (`app/runtime/TraefikLabelBuilder.java`) — generates Traefik Docker labels for path-based (`/{envSlug}/{appSlug}/`) or subdomain-based (`{appSlug}-{envSlug}.{domain}`) routing. Supports strip-prefix and SSL offloading toggles. Also sets per-replica identity labels: `cameleer.replica` (index) and `cameleer.instance-id` (`{envSlug}-{appSlug}-{replicaIndex}`). Internal processing uses labels (not container name parsing) for extensibility.
|
||||
- **TraefikLabelBuilder** (`app/runtime/TraefikLabelBuilder.java`) — generates Traefik Docker labels for path-based (`/{envSlug}/{appSlug}/`) or subdomain-based (`{appSlug}-{envSlug}.{domain}`) routing. Supports strip-prefix and SSL offloading toggles. Per-replica identity labels: `cameleer.replica` (index), `cameleer.generation` (8-char deployment UUID prefix — pin Prometheus/Grafana deploy boundaries with this), `cameleer.instance-id` (`{envSlug}-{appSlug}-{replicaIndex}-{generation}`). Traefik router/service keys deliberately omit the generation so load balancing spans old + new replicas during a blue/green overlap. When `ResolvedContainerConfig.externalRouting()` is `false` (UI: Resources → External Routing, default `true`), the builder emits ONLY the identity labels (`managed-by`, `cameleer.*`) and skips every `traefik.*` label — the container stays on `cameleer-traefik` and the per-env network (so sibling containers can still reach it via Docker DNS) but is invisible to Traefik. The `tls.certresolver` label is emitted only when `CAMELEER_SERVER_RUNTIME_CERTRESOLVER` is set to a non-blank resolver name (matching a resolver configured in the Traefik static config). When unset (dev installs backed by a static TLS store) only `tls=true` is emitted and Traefik serves the default cert from the TLS store.
- **PrometheusLabelBuilder** (`app/runtime/PrometheusLabelBuilder.java`) — generates Prometheus `docker_sd_configs` labels per resolved runtime type: Spring Boot `/actuator/prometheus:8081`, Quarkus/native `/q/metrics:9000`, plain Java `/metrics:9464`. Labels merged into container metadata alongside Traefik labels at deploy time.
- **DockerNetworkManager** (`app/runtime/DockerNetworkManager.java`) — manages two Docker network tiers:
- `cameleer-traefik` — shared network; Traefik, server, and all app containers attach here. Server joined via docker-compose with `cameleer-server` DNS alias.
- `cameleer-env-{slug}` — per-environment isolated network; containers in the same environment discover each other via Docker DNS. In SaaS mode, env networks are tenant-scoped: `cameleer-env-{tenantId}-{envSlug}` (overloaded `envNetworkName(tenantId, envSlug)` method) to prevent cross-tenant collisions when multiple tenants have identically-named environments.
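The two naming schemes can be sketched as overloads, mirroring the documented `envNetworkName(tenantId, envSlug)` overload (the helper class itself is illustrative):

```java
// Sketch of the two-tier Docker network naming described above.
public class NetworkNames {
    static final String TRAEFIK_NETWORK = "cameleer-traefik"; // shared tier

    // Single-tenant: one isolated network per environment.
    public static String envNetworkName(String envSlug) {
        return "cameleer-env-" + envSlug;
    }

    // SaaS mode: tenant-scoped to avoid collisions on identically-named environments.
    public static String envNetworkName(String tenantId, String envSlug) {
        return "cameleer-env-" + tenantId + "-" + envSlug;
    }
}
```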
- **DockerEventMonitor** (`app/runtime/DockerEventMonitor.java`) — persistent Docker event stream listener for containers with `managed-by=cameleer-server` label. Detects die/oom/start/stop events and updates deployment replica states. Periodic reconciliation (@Scheduled every 30s) inspects actual container state and corrects deployment status mismatches (fixes stale DEGRADED with all replicas healthy).
- **DeploymentProgress** (`ui/src/components/DeploymentProgress.tsx`) — UI step indicator showing 7 deploy stages with amber active/green completed styling.
- **ContainerLogForwarder** (`app/runtime/ContainerLogForwarder.java`) — streams Docker container stdout/stderr to ClickHouse `logs` table with `source='container'`. Uses `docker logs --follow` per container, batches lines every 2s or 50 lines. Parses Docker timestamp prefix, infers log level via regex. `DeploymentExecutor` starts capture after each replica launches with the replica's `instanceId` (`{envSlug}-{appSlug}-{replicaIndex}-{generation}`); `DockerEventMonitor` stops capture on die/oom. 60-second max capture timeout with 30s cleanup scheduler. Thread pool of 10 daemon threads. Container logs use the same `instanceId` as the agent (set via `CAMELEER_AGENT_INSTANCEID` env var) for unified log correlation at the instance level. Instance-id changes per deployment — cross-deploy queries aggregate on `application + environment` (and optionally `replica_index`).
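The flush policy (every 2 s or 50 lines, whichever comes first) can be sketched as below. The thresholds come from the text; the sink callback and the explicit clock parameter are illustrative, not the forwarder's real API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of a count-or-age line batcher in the spirit of the forwarder's policy.
public class LogBatcherSketch {
    static final int MAX_LINES = 50;       // flush at 50 buffered lines...
    static final long MAX_AGE_MS = 2_000;  // ...or 2 s after the last flush

    private final List<String> buffer = new ArrayList<>();
    private final Consumer<List<String>> sink;
    private long lastFlushMillis;

    public LogBatcherSketch(Consumer<List<String>> sink, long nowMillis) {
        this.sink = sink;
        this.lastFlushMillis = nowMillis;
    }

    public void add(String line, long nowMillis) {
        buffer.add(line);
        if (buffer.size() >= MAX_LINES || nowMillis - lastFlushMillis >= MAX_AGE_MS) {
            sink.accept(List.copyOf(buffer));
            buffer.clear();
            lastFlushMillis = nowMillis;
        }
    }
}
```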
- **StartupLogPanel** (`ui/src/components/StartupLogPanel.tsx`) — collapsible log panel rendered below `DeploymentProgress`. Queries `/api/v1/logs?source=container&application={appSlug}&environment={envSlug}`. Auto-polls every 3s while deployment is STARTING; shows green "live" badge during polling, red "stopped" badge on FAILED. Uses `useStartupLogs` hook and `LogViewer` (design system).
## DeploymentExecutor Details
Primary network for app containers is set via `CAMELEER_SERVER_RUNTIME_DOCKERNETWORK` env var (in SaaS mode: `cameleer-tenant-{slug}`); apps also connect to `cameleer-traefik` (routing) and `cameleer-env-{tenantId}-{envSlug}` (per-environment discovery) as additional networks. Resolves `runtimeType: auto` to concrete type from `AppVersion.detectedRuntimeType` at PRE_FLIGHT (fails deployment if unresolvable). Builds Docker entrypoint per runtime type (all JVM types use `-javaagent:/app/agent.jar -jar`, plain Java uses `-cp` with main class, native runs binary directly). Sets per-replica `CAMELEER_AGENT_INSTANCEID` env var to `{envSlug}-{appSlug}-{replicaIndex}-{generation}` so container logs and agent logs share the same instance identity. Sets `CAMELEER_AGENT_*` env vars from `ResolvedContainerConfig` (routeControlEnabled, replayEnabled, health port). These are startup-only agent properties — changing them requires redeployment.
**Container naming** — `{tenantId}-{envSlug}-{appSlug}-{replicaIndex}-{generation}`, where `generation` is the first 8 characters of the deployment UUID. The generation suffix lets old + new replicas coexist during a blue/green swap (deterministic names without a generation used to 409). All lookups across the executor, `DockerEventMonitor`, and `ContainerLogForwarder` key on container **id**, not name — the name is operator-visibility only.
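A sketch of the naming scheme (the helper class is illustrative; the format itself is the one documented above):

```java
import java.util.UUID;

// Sketch: {tenantId}-{envSlug}-{appSlug}-{replicaIndex}-{generation},
// where generation is the first 8 chars of the deployment UUID.
public class ContainerNames {
    public static String generation(UUID deploymentId) {
        return deploymentId.toString().substring(0, 8);
    }

    public static String containerName(String tenantId, String envSlug, String appSlug,
                                       int replicaIndex, UUID deploymentId) {
        return String.join("-", tenantId, envSlug, appSlug,
                String.valueOf(replicaIndex), generation(deploymentId));
    }
}
```

Because the generation suffix differs per deployment, two deployments of the same app never collide on name, which is what allows the blue/green overlap.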
**Strategy dispatch** — `DeploymentStrategy.fromWire(config.deploymentStrategy())` branches the executor. Unknown values fall back to BLUE_GREEN so misconfiguration never throws at runtime.
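A lenient wire parser in that spirit might look like the following (enum values come from the doc; the accepted spellings and parsing details are assumptions):

```java
// Sketch of a fromWire parser that never throws on misconfiguration.
public enum DeploymentStrategySketch {
    BLUE_GREEN, ROLLING;

    public static DeploymentStrategySketch fromWire(String wire) {
        if (wire == null) return BLUE_GREEN;
        switch (wire.trim().toLowerCase()) {
            case "rolling":    return ROLLING;
            case "blue_green":
            case "blue-green": return BLUE_GREEN;
            default:           return BLUE_GREEN; // unknown values fall back, never throw
        }
    }
}
```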
- **Blue/green** (default): start all N new replicas → wait for ALL healthy → stop the previous deployment. Resource peak ≈ 2× replicas for the health-check window. Partial health aborts with status FAILED; the previous deployment is preserved untouched (user's safety net).
- **Rolling**: replace replicas one at a time — start new[i] → wait healthy → stop old[i] → next. Resource peak = replicas + 1. Mid-rollout health failure stops in-flight new containers and aborts; already-replaced old replicas are NOT restored (not reversible) but un-replaced old[i+1..N] keep serving traffic. User redeploys to recover.
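The two control flows above can be sketched side by side; `startAndCheck` stands in for the real start-container-and-await-health call, and the old-replica list stands in for the previous deployment:

```java
import java.util.List;
import java.util.function.IntPredicate;

// Control-flow sketch of the two strategies' failure semantics.
public class StrategyLoops {
    // Blue/green: start all new replicas, require ALL healthy, then stop old ones.
    public static boolean blueGreen(int replicas, IntPredicate startAndCheck,
                                    List<Integer> oldReplicas) {
        for (int i = 0; i < replicas; i++) {
            if (!startAndCheck.test(i)) {
                return false; // abort: previous deployment preserved untouched
            }
        }
        oldReplicas.clear(); // stop the old deployment only after full health
        return true;
    }

    // Rolling: replace one at a time; earlier swaps are NOT rolled back on failure.
    public static boolean rolling(int replicas, IntPredicate startAndCheck,
                                  List<Integer> oldReplicas) {
        for (int i = 0; i < replicas; i++) {
            if (!startAndCheck.test(i)) {
                return false; // old[i..] keep serving; old[0..i-1] already gone
            }
            oldReplicas.remove(Integer.valueOf(i));
        }
        return true;
    }
}
```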
Traffic routing is implicit: Traefik labels (`cameleer.app`, `cameleer.environment`) are generation-agnostic, so new replicas attract load balancing as soon as they come up healthy — no explicit swap step.
## Deployment Status Model
@@ -34,17 +43,13 @@ Primary network for app containers is set via `CAMELEER_SERVER_RUNTIME_DOCKERNET
| `STOPPED` | Intentionally stopped or initial state |
| `STARTING` | Deploy in progress |
| `RUNNING` | All replicas healthy and serving |
| `DEGRADED` | Post-deploy: a replica died after the deploy was marked RUNNING. Set by `DockerEventMonitor` reconciliation, never by `DeploymentExecutor` directly. |
| `STOPPING` | Graceful shutdown in progress |
| `FAILED` | Terminal failure (pre-flight, health check, or crash). Partial-healthy deploys now mark FAILED — DEGRADED is reserved for post-deploy drift. |
**Deploy stages** (`DeployStage`): PRE_FLIGHT -> PULL_IMAGE -> CREATE_NETWORK -> START_REPLICAS -> HEALTH_CHECK -> SWAP_TRAFFIC -> COMPLETE (or FAILED at any stage). Rolling reuses the same stage labels inside the per-replica loop; the UI progress bar shows the most recent stage.
**Deployment retention**: `DeploymentService.createDeployment()` deletes FAILED deployments for the same app+environment before creating a new one, preventing failed-attempt buildup. STOPPED deployments are preserved as restorable checkpoints — the UI Checkpoints disclosure lists every deployment with a non-null `deployed_config_snapshot` (RUNNING, DEGRADED, STOPPED) minus the current one.
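The retention rule can be sketched as two filters; the `Dep` record is a stand-in for the real deployment entity:

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch: FAILED rows are purged before a new deployment is created;
// STOPPED rows survive as restorable checkpoints.
public class RetentionSketch {
    public record Dep(String id, String status, boolean hasSnapshot) {}

    public static List<Dep> afterCreate(List<Dep> existing) {
        return existing.stream()
                .filter(d -> !d.status().equals("FAILED")) // failed attempts deleted
                .collect(Collectors.toList());
    }

    // Checkpoints list: any deployment with a config snapshot, minus the current one.
    public static List<Dep> checkpoints(List<Dep> all, String currentId) {
        return all.stream()
                .filter(d -> d.hasSnapshot() && !d.id().equals(currentId))
                .collect(Collectors.toList());
    }
}
```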
## JAR Management
@@ -8,7 +8,9 @@ paths:
# Prometheus Metrics
Server exposes `/api/v1/prometheus` (unauthenticated, Prometheus text format). Spring Boot Actuator provides JVM, GC, thread pool, and `http.server.requests` metrics automatically. Business metrics via `ServerMetrics` component.
The same `MeterRegistry` is also snapshotted to ClickHouse every 60 s by `ServerMetricsSnapshotScheduler` (see "Server self-metrics persistence" at the bottom of this file) — so historical server-health data survives restarts without an external Prometheus.
## Gauges (auto-polled)
@@ -83,3 +85,23 @@ Mean processing time = `camel.route.policy.total_time / camel.route.policy.count
| `cameleer.sse.reconnects.count` | counter | `instanceId` |
| `cameleer.taps.evaluated.count` | counter | `instanceId` |
| `cameleer.metrics.exported.count` | counter | `instanceId` |
## Server self-metrics persistence
`ServerMetricsSnapshotScheduler` walks `MeterRegistry.getMeters()` every 60 s (configurable via `cameleer.server.self-metrics.interval-ms`) and writes one row per Micrometer `Measurement` to the ClickHouse `server_metrics` table. Full registry is captured — Spring Boot Actuator series (`jvm.*`, `process.*`, `http.server.requests`, `hikaricp.*`, `jdbc.*`, `tomcat.*`, `logback.events`, `system.*`) plus `cameleer.*` and `alerting_*`.
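The per-`Measurement` flattening can be sketched without the Micrometer dependency: one row per meter/statistic pair, as described. The row shape and method names here are illustrative, not the scheduler's real API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch: each (meter, statistic) measurement becomes one server_metrics row.
public class SnapshotSketch {
    public record Row(String metricName, String metricType, String statistic, double value) {}

    public static List<Row> flatten(String name, String type, Map<String, Double> measurements) {
        List<Row> rows = new ArrayList<>();
        measurements.forEach((statistic, value) ->
                rows.add(new Row(name, type.toLowerCase(), statistic, value)));
        return rows;
    }
}
```

A timer therefore yields three rows per tick (count, total_time, max), while a plain gauge or counter yields one.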
**Table** (`cameleer-server-app/src/main/resources/clickhouse/init.sql`):
```
server_metrics(tenant_id, collected_at, server_instance_id,
               metric_name, metric_type, statistic, metric_value,
               tags Map(String,String), server_received_at)
```
- `metric_type` — lowercase Micrometer `Meter.Type` (counter, gauge, timer, distribution_summary, long_task_timer, other)
- `statistic` — Micrometer `Statistic.getTagValueRepresentation()` (value, count, total, total_time, max, mean, active_tasks, duration). Timers emit 3 rows per tick (count + total_time + max); gauges/counters emit 1 (`statistic='value'` or `'count'`).
- No `environment` column — the server is env-agnostic.
- `tenant_id` threaded from `cameleer.server.tenant.id` (single-tenant per server).
- `server_instance_id` resolved once at boot by `ServerInstanceIdConfig` (property → HOSTNAME → localhost → UUID fallback). Rotates across restarts so counter resets are unambiguous.
- TTL: 90 days (vs 365 for `agent_metrics`). Write-only in v1 — no query endpoint or UI page. Inspect via ClickHouse admin: `/api/v1/admin/clickhouse/query` or direct SQL.
- Toggle off entirely with `cameleer.server.self-metrics.enabled=false` (uses `@ConditionalOnProperty`).
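The `server_instance_id` fallback chain described above can be sketched as follows (illustrative: the real `ServerInstanceIdConfig` is Spring-wired, and only the property → HOSTNAME → localhost → UUID order is taken from the text):

```java
import java.net.InetAddress;
import java.util.UUID;

// Sketch of the boot-time resolution chain for server_instance_id.
public class InstanceIdSketch {
    public static String resolve(String property, String hostnameEnv) {
        if (property != null && !property.isBlank()) return property;     // explicit property wins
        if (hostnameEnv != null && !hostnameEnv.isBlank()) return hostnameEnv; // HOSTNAME env var
        try {
            return InetAddress.getLocalHost().getHostName();              // localhost lookup
        } catch (Exception e) {
            return UUID.randomUUID().toString();                          // last-resort fallback
        }
    }
}
```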
@@ -10,13 +10,18 @@ The UI has 4 main tabs: **Exchanges**, **Dashboard**, **Runtime**, **Deployments
- **Exchanges** — route execution search and detail (`ui/src/pages/Exchanges/`)
- **Dashboard** — metrics and stats with L1/L2/L3 drill-down (`ui/src/pages/DashboardTab/`)
- **Runtime** — live agent status, logs, commands (`ui/src/pages/RuntimeTab/`). AgentHealth supports compact view (dense health-tinted cards) and expanded view (full GroupCard+DataTable per app). View mode persisted to localStorage.
- **Deployments** — unified app deployment page (`ui/src/pages/AppsTab/`)
  - Routes: `/apps` (list, `AppListView` in `AppsTab.tsx`), `/apps/new` + `/apps/:slug` (both render `AppDeploymentPage`).
  - Identity & Artifact section always visible; name editable pre-first-deploy, read-only after. JAR picker client-stages; new JAR + any form edits flip the primary button from `Save` to `Redeploy`. Environment fixed to the currently-selected env (no selector).
  - Config sub-tabs: **Monitoring | Resources | Variables | Sensitive Keys | Deployment | ● Traces & Taps | ● Route Recording**. The four staged tabs feed dirty detection; the `●` live tabs apply in real-time (amber LiveBanner + default `?apply=live` on their writes) and never mark dirty.
  - Primary action state machine: `Save` → `Uploading… N%` (during JAR upload; button shows percent with a tinted progress-fill overlay) → `Redeploy` → `Deploying…` during active deploy. Upload progress sourced from `useUploadJar` (XHR `upload.onprogress` → page-level `uploadPct` state). The button is disabled during `uploading` and `deploying`.
  - Checkpoints render as a collapsible `CheckpointsTable` (default **collapsed**) **inside the Identity & Artifact `configGrid`** as an in-grid row (`Checkpoints | ▸ Expand (N)` / `▾ Collapse (N)`). `CheckpointsTable` returns a React.Fragment of grid-ready children so the label + trigger align with the other identity rows; when opened, a third grid child spans both columns via `grid-column: 1 / -1` so the 7-column table gets full width. Wired through `IdentitySection.checkpointsSlot` — `CheckpointDetailDrawer` stays in `IdentitySection.children` because it portals. Columns: Version · JAR (filename) · Deployed by · Deployed (relative `timeAgo` + user-locale sub-line via `new Date(iso).toLocaleString()`) · Strategy · Outcome · ›. Row click opens the drawer. Drawer tabs are ordered **Config | Logs** with `Config` as the default. Config panel has Snapshot / Diff vs current view modes. Replica filter in the Logs panel uses DS `Select`. Restore lives in the drawer footer (forces review). Visible row cap = `Environment.jarRetentionCount` (default 10 if 0/null); older rows accessible via "Show older (N)" expander. Currently-running deployment is excluded — represented separately by `StatusCard`. The empty-checkpoints case returns `null` (no row). The legacy `Checkpoints.tsx` row-list component is gone.
  - Deployment tab: `StatusCard` + `DeploymentProgress` (during STARTING / FAILED) + flex-grow `StartupLogPanel` (no fixed maxHeight). Auto-activates when a deploy starts. The former `HistoryDisclosure` is retired — per-deployment config and logs live in the Checkpoints drawer. `StartupLogPanel` header mirrors the Runtime Application Log pattern: title + live/stopped badge + `N entries` + sort toggle (↑/↓, default **desc**) + refresh icon (`RefreshCw`). Sort drives the backend fetch via `useStartupLogs(…, sort)` so the 500-line limit returns the window closest to the user's interest; display order matches fetch order. Refresh scrolls to the latest edge (top for desc, bottom for asc). Sort + refresh buttons disable while a refetch is in flight. 3s polling while STARTING is unchanged.
  - Unsaved-change router blocker uses DS `AlertDialog` (not `window.beforeunload`). Env switch intentionally discards edits without warning.

**Admin pages** (ADMIN-only, under `/admin/`):
- **Sensitive Keys** (`ui/src/pages/Admin/SensitiveKeysPage.tsx`) — global sensitive key masking config. Shows agent built-in defaults as outlined Badge reference, editable Tag pills for custom keys, amber-highlighted push-to-agents toggle. Keys add to (not replace) agent defaults. Per-app sensitive key additions managed via `ApplicationConfigController` API. Note: `AppConfigDetailPage.tsx` exists but is not routed in `router.tsx`.
- **Server Metrics** (`ui/src/pages/Admin/ServerMetricsAdminPage.tsx`) — dashboard over the `server_metrics` ClickHouse table. Visibility matches Database/ClickHouse pages: gated on `capabilities.infrastructureEndpoints` in `buildAdminTreeNodes`; backend is `@ConditionalOnProperty(infrastructureendpoints) + @PreAuthorize('hasRole(ADMIN)')`. Uses the generic `/api/v1/admin/server-metrics/{catalog,instances,query}` API via `ui/src/api/queries/admin/serverMetrics.ts` hooks (`useServerMetricsCatalog`, `useServerMetricsInstances`, `useServerMetricsSeries`), all three of which take a `ServerMetricsRange = { from: Date; to: Date }`. Time range is driven by the global TopBar picker via `useGlobalFilters()` — no page-local selector; bucket size auto-scales through `stepSecondsFor(windowSeconds)` (10 s up to 1 h buckets). Toolbar is just server-instance badges. Sections: Server health (agents/ingestion/auth), JVM (memory/CPU/GC/threads), HTTP & DB pools, Alerting (conditional on catalog), Deployments (conditional on catalog). Each panel is a `ThemedChart` with `Line`/`Area` children from the design system; multi-series responses are flattened into overlap rows by bucket timestamp. Alerting and Deployments rows are hidden when their metrics aren't in the catalog (zero-deploy / alerting-disabled installs).
## Key UI Files
@@ -35,6 +40,7 @@ The UI has 4 main tabs: **Exchanges**, **Dashboard**, **Runtime**, **Deployments
- `ui/src/api/queries/agents.ts` — `useAgents` for agent list, `useInfiniteAgentEvents` for cursor-paginated timeline stream
- `ui/src/hooks/useInfiniteStream.ts` — tanstack `useInfiniteQuery` wrapper with top-gated auto-refetch, flattened `items[]`, and `refresh()` invalidator
- `ui/src/components/InfiniteScrollArea.tsx` — scrollable container with IntersectionObserver top/bottom sentinels. Streaming log/event views use this + `useInfiniteStream`. Bounded views (LogTab, StartupLogPanel) keep `useLogs`/`useStartupLogs`
- `ui/src/components/SideDrawer.tsx` — project-local right-slide drawer (DS has Modal but no Drawer). Portal-rendered, ESC + transparent-backdrop click closes, sticky header/footer, sizes md/lg/xl. Currently consumed only by `CheckpointDetailDrawer` — promote to `@cameleer/design-system` once a second consumer appears.
## Alerts
@@ -5,8 +5,20 @@ on:
    branches: [main, 'feature/**', 'fix/**', 'feat/**']
    tags-ignore:
      - 'v*'
    paths-ignore:
      - '.planning/**'
      - 'docs/**'
      - '**/*.md'
      - '.claude/**'
      - 'AGENTS.md'
      - 'CLAUDE.md'
  pull_request:
    branches: [main]
    paths-ignore:
      - '.planning/**'
      - 'docs/**'
      - '**/*.md'
      - '.claude/**'
  delete:

jobs:
@@ -45,11 +57,25 @@ jobs:
          key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
          restore-keys: ${{ runner.os }}-maven-

      - name: Cache npm registry
        uses: actions/cache@v4
        with:
          path: ~/.npm
          key: ${{ runner.os }}-npm-${{ hashFiles('ui/package-lock.json') }}
          restore-keys: ${{ runner.os }}-npm-

      - name: Cache Vite build artifacts
        uses: actions/cache@v4
        with:
          path: ui/node_modules/.vite
          key: ${{ runner.os }}-vite-${{ hashFiles('ui/package-lock.json', 'ui/vite.config.ts') }}
          restore-keys: ${{ runner.os }}-vite-

      - name: Build UI
        working-directory: ui
        run: |
          echo '//gitea.siegeln.net/api/packages/cameleer/npm/:_authToken=${REGISTRY_TOKEN}' >> .npmrc
          npm ci --prefer-offline --no-audit --fund=false
          npm run build
        env:
          REGISTRY_TOKEN: ${{ secrets.REGISTRY_TOKEN }}
@@ -1,7 +1,7 @@
<!-- gitnexus:start -->
# GitNexus — Code Intelligence
This project is indexed by GitNexus as **cameleer-server** (9731 symbols, 24987 relationships, 300 execution flows). Use the GitNexus MCP tools to understand code, assess impact, and navigate safely.
> If any GitNexus tool warns the index is stale, run `npx gitnexus analyze` in terminal first.
CLAUDE.md (13 changes)
@@ -22,8 +22,19 @@ Cameleer Server — observability server that receives, stores, and serves Camel
```bash
mvn clean compile            # Compile all modules
mvn clean verify             # Full build with tests
mvn clean verify -DskipITs   # Fast: unit tests only (no Testcontainers)
```

### Faster local builds

- **Surefire reuses forks** (`cameleer-server-app/pom.xml`): unit tests run with `forkCount=1C` + `reuseForks=true` — one JVM per CPU core, reused across classes. Test classes that mutate static state must clean up after themselves.
- **Testcontainers reuse** — opt-in per developer. Add to `~/.testcontainers.properties`:
  ```
  testcontainers.reuse.enable=true
  ```
  Then `AbstractPostgresIT` containers persist across `mvn verify` runs (saves ~20s per run). Stop them manually when you need a clean DB: `docker rm -f $(docker ps -aq --filter label=org.testcontainers.reuse=true)`.
- **UI build** dropped redundant `tsc --noEmit` from `npm run build` (Vite/esbuild type-checks during bundling). Run `npm run typecheck` explicitly when you want a full type-check pass.
## Run
@@ -85,7 +96,7 @@ When adding, removing, or renaming classes, controllers, endpoints, UI component
<!-- gitnexus:start -->
# GitNexus — Code Intelligence
This project is indexed by GitNexus as **cameleer-server** (9731 symbols, 24987 relationships, 300 execution flows). Use the GitNexus MCP tools to understand code, assess impact, and navigate safely.
> If any GitNexus tool warns the index is stale, run `npx gitnexus analyze` in terminal first.
HOWTO.md (1 change)
@@ -499,6 +499,7 @@ Key settings in `cameleer-server-app/src/main/resources/application.yml`. All cu
| `cameleer.server.runtime.routingmode` | `path` | `CAMELEER_SERVER_RUNTIME_ROUTINGMODE` | `path` or `subdomain` Traefik routing |
| `cameleer.server.runtime.routingdomain` | `localhost` | `CAMELEER_SERVER_RUNTIME_ROUTINGDOMAIN` | Domain for Traefik routing labels |
| `cameleer.server.runtime.serverurl` | *(empty)* | `CAMELEER_SERVER_RUNTIME_SERVERURL` | Server URL injected into app containers |
| `cameleer.server.runtime.certresolver` | *(empty)* | `CAMELEER_SERVER_RUNTIME_CERTRESOLVER` | Traefik TLS cert resolver name (e.g. `letsencrypt`). Blank = omit the `tls.certresolver` label and let Traefik serve the default TLS-store cert |
| `cameleer.server.runtime.agenthealthport` | `9464` | `CAMELEER_SERVER_RUNTIME_AGENTHEALTHPORT` | Agent health check port |
| `cameleer.server.runtime.healthchecktimeout` | `60` | `CAMELEER_SERVER_RUNTIME_HEALTHCHECKTIMEOUT` | Health check timeout (seconds) |
| `cameleer.server.runtime.container.memorylimit` | `512m` | `CAMELEER_SERVER_RUNTIME_CONTAINER_MEMORYLIMIT` | Default memory limit for app containers |
@@ -189,8 +189,8 @@
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <configuration>
          <forkCount>1C</forkCount>
          <reuseForks>true</reuseForks>
        </configuration>
      </plugin>
      <plugin>
@@ -61,7 +61,8 @@ public class LogPatternEvaluator implements ConditionEvaluator<LogPatternConditi
                to,
                null,   // cursor
                1,      // limit (count query; value irrelevant)
                "desc", // sort
                null    // instanceIds
        );
        return logStore.countLogs(req);
    });
@@ -9,6 +9,7 @@ import com.cameleer.server.core.runtime.AppService;
import com.cameleer.server.core.runtime.AppVersionRepository;
import com.cameleer.server.core.runtime.DeploymentRepository;
import com.cameleer.server.core.runtime.DeploymentService;
import com.cameleer.server.core.runtime.DirtyStateCalculator;
import com.cameleer.server.core.runtime.EnvironmentRepository;
import com.cameleer.server.core.runtime.EnvironmentService;
import com.fasterxml.jackson.databind.ObjectMapper;
@@ -64,6 +65,11 @@ public class RuntimeBeanConfig {
        return new DeploymentService(deployRepo, appService, envService);
    }

    @Bean
    public DirtyStateCalculator dirtyStateCalculator(ObjectMapper objectMapper) {
        return new DirtyStateCalculator(objectMapper);
    }

    @Bean(name = "deploymentTaskExecutor")
    public Executor deploymentTaskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
@@ -9,6 +9,8 @@ import com.cameleer.server.app.storage.ClickHouseRouteCatalogStore;
import com.cameleer.server.core.storage.RouteCatalogStore;
import com.cameleer.server.app.storage.ClickHouseMetricsQueryStore;
import com.cameleer.server.app.storage.ClickHouseMetricsStore;
import com.cameleer.server.app.storage.ClickHouseServerMetricsQueryStore;
import com.cameleer.server.app.storage.ClickHouseServerMetricsStore;
import com.cameleer.server.app.storage.ClickHouseStatsStore;
import com.cameleer.server.core.admin.AuditRepository;
import com.cameleer.server.core.admin.AuditService;
@@ -67,6 +69,19 @@ public class StorageBeanConfig {
        return new ClickHouseMetricsQueryStore(tenantProperties.getId(), clickHouseJdbc);
    }

    @Bean
    public ServerMetricsStore clickHouseServerMetricsStore(
            @Qualifier("clickHouseJdbcTemplate") JdbcTemplate clickHouseJdbc) {
        return new ClickHouseServerMetricsStore(clickHouseJdbc);
    }

    @Bean
    public ServerMetricsQueryStore clickHouseServerMetricsQueryStore(
            TenantProperties tenantProperties,
            @Qualifier("clickHouseJdbcTemplate") JdbcTemplate clickHouseJdbc) {
        return new ClickHouseServerMetricsQueryStore(tenantProperties.getId(), clickHouseJdbc);
    }

    // ── Execution Store ──────────────────────────────────────────────────

    @Bean
@@ -1,14 +1,24 @@
package com.cameleer.server.app.controller;

import com.cameleer.common.model.ApplicationConfig;
import com.cameleer.server.app.dto.DirtyStateResponse;
import com.cameleer.server.app.storage.PostgresApplicationConfigRepository;
import com.cameleer.server.app.storage.PostgresDeploymentRepository;
import com.cameleer.server.app.web.EnvPath;
import com.cameleer.server.core.runtime.App;
import com.cameleer.server.core.runtime.AppService;
import com.cameleer.server.core.runtime.AppVersion;
import com.cameleer.server.core.runtime.AppVersionRepository;
import com.cameleer.server.core.runtime.Deployment;
import com.cameleer.server.core.runtime.DeploymentConfigSnapshot;
import com.cameleer.server.core.runtime.DirtyStateCalculator;
import com.cameleer.server.core.runtime.DirtyStateResult;
import com.cameleer.server.core.runtime.Environment;
import com.cameleer.server.core.runtime.RuntimeType;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.security.access.prepost.PreAuthorize;
@@ -22,8 +32,10 @@ import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;
import org.springframework.web.server.ResponseStatusException;

import java.io.IOException;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.UUID;
@@ -40,9 +52,21 @@ import java.util.UUID;
public class AppController {

    private final AppService appService;
    private final AppVersionRepository appVersionRepository;
    private final PostgresApplicationConfigRepository configRepository;
    private final PostgresDeploymentRepository deploymentRepository;
    private final DirtyStateCalculator dirtyCalc;

    public AppController(AppService appService,
                         AppVersionRepository appVersionRepository,
                         PostgresApplicationConfigRepository configRepository,
                         PostgresDeploymentRepository deploymentRepository,
                         DirtyStateCalculator dirtyCalc) {
        this.appService = appService;
        this.appVersionRepository = appVersionRepository;
        this.configRepository = configRepository;
        this.deploymentRepository = deploymentRepository;
        this.dirtyCalc = dirtyCalc;
    }

@GetMapping
@@ -120,6 +144,47 @@ public class AppController {
        }
    }

@GetMapping("/{appSlug}/dirty-state")
|
||||
@Operation(summary = "Check whether the app's current config differs from the last successful deploy",
|
||||
description = "Returns dirty=true when the desired state (current JAR + agent config + container config) "
|
||||
+ "would produce a changed deployment. When no successful deploy exists yet, dirty=true.")
|
||||
@ApiResponse(responseCode = "200", description = "Dirty-state computed")
|
||||
@ApiResponse(responseCode = "404", description = "App not found in this environment")
|
||||
public ResponseEntity<DirtyStateResponse> getDirtyState(@EnvPath Environment env,
|
||||
@PathVariable String appSlug) {
|
||||
App app;
|
||||
try {
|
||||
app = appService.getByEnvironmentAndSlug(env.id(), appSlug);
|
||||
} catch (IllegalArgumentException e) {
|
||||
throw new ResponseStatusException(HttpStatus.NOT_FOUND, "App not found");
|
||||
}
|
||||
|
||||
// Latest JAR version (newest first — findByAppId orders by version DESC)
|
||||
List<AppVersion> versions = appVersionRepository.findByAppId(app.id());
|
||||
UUID latestVersionId = versions.isEmpty() ? null
|
||||
: versions.stream().max(Comparator.comparingInt(AppVersion::version))
|
||||
.map(AppVersion::id).orElse(null);
|
||||
|
||||
// Desired agent config
|
||||
ApplicationConfig agentConfig = configRepository
|
||||
.findByApplicationAndEnvironment(appSlug, env.slug())
|
||||
.orElse(null);
|
||||
|
||||
// Container config
|
||||
Map<String, Object> containerConfig = app.containerConfig();
|
||||
|
||||
// Last successful deployment snapshot
|
||||
Deployment lastSuccessful = deploymentRepository
|
||||
.findLatestSuccessfulByAppAndEnv(app.id(), env.id())
|
||||
.orElse(null);
|
||||
DeploymentConfigSnapshot snapshot = lastSuccessful != null ? lastSuccessful.deployedConfigSnapshot() : null;
|
||||
|
||||
DirtyStateResult result = dirtyCalc.compute(latestVersionId, agentConfig, containerConfig, snapshot);
|
||||
|
||||
String lastId = lastSuccessful != null ? lastSuccessful.id().toString() : null;
|
||||
return ResponseEntity.ok(new DirtyStateResponse(result.dirty(), lastId, result.differences()));
|
||||
}
|
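The latest-version selection in getDirtyState above reduces to a max-by-integer-version over the app's version list. A minimal standalone sketch of that rule (with AppVersion simplified to a hypothetical local record; the real class lives in core.runtime) behaves like this:

```java
import java.util.Comparator;
import java.util.List;
import java.util.UUID;

// Sketch of the latest-version selection: pick the AppVersion with the
// highest integer version and return its id, or null when no versions exist.
public class LatestVersionSketch {
    record AppVersion(UUID id, int version) {}

    static UUID latestVersionId(List<AppVersion> versions) {
        return versions.isEmpty() ? null
                : versions.stream()
                        .max(Comparator.comparingInt(AppVersion::version))
                        .map(AppVersion::id)
                        .orElse(null);
    }

    public static void main(String[] args) {
        UUID v1 = UUID.randomUUID();
        UUID v2 = UUID.randomUUID();
        UUID latest = latestVersionId(List.of(
                new AppVersion(v1, 3), new AppVersion(v2, 7)));
        System.out.println(latest.equals(v2)); // prints true
        System.out.println(latestVersionId(List.of())); // prints null
    }
}
```

The null result matches the endpoint's contract: an app with no uploaded JAR versions contributes no version id to the dirty-state comparison.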
    private static final java.util.regex.Pattern CUSTOM_ARGS_PATTERN =
            java.util.regex.Pattern.compile("^[-a-zA-Z0-9_.=:/\\s+\"']*$");
@@ -24,6 +24,7 @@ import com.cameleer.server.core.storage.DiagramStore;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.Parameter;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;
import jakarta.servlet.http.HttpServletRequest;
@@ -33,6 +34,7 @@ import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.security.core.Authentication;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.server.ResponseStatusException;

import java.util.ArrayList;
import java.util.List;
@@ -108,13 +110,23 @@ public class ApplicationConfigController {

    @PutMapping("/apps/{appSlug}/config")
    @Operation(summary = "Update application config for this environment",
            description = "Saves config and pushes CONFIG_UPDATE to LIVE agents of this application in the given environment")
    @ApiResponse(responseCode = "200", description = "Config saved and pushed")
            description = "Saves config. When apply=live (default), also pushes CONFIG_UPDATE to LIVE agents. "
                    + "When apply=staged, persists without a live push — the next successful deploy applies it.")
    @ApiResponse(responseCode = "200", description = "Config saved (and pushed if apply=live)")
    @ApiResponse(responseCode = "400", description = "Unknown apply value (must be 'staged' or 'live')")
    public ResponseEntity<ConfigUpdateResponse> updateConfig(@EnvPath Environment env,
            @PathVariable String appSlug,
            @Parameter(name = "apply",
                    description = "When to apply: 'live' (default) saves and pushes CONFIG_UPDATE to live agents immediately; 'staged' saves without pushing — the next successful deploy applies it.")
            @RequestParam(name = "apply", defaultValue = "live") String apply,
            @RequestBody ApplicationConfig config,
            Authentication auth,
            HttpServletRequest httpRequest) {
        if (!"staged".equalsIgnoreCase(apply) && !"live".equalsIgnoreCase(apply)) {
            throw new ResponseStatusException(HttpStatus.BAD_REQUEST,
                    "Unknown apply value '" + apply + "' — must be 'staged' or 'live'");
        }

        String updatedBy = auth != null ? auth.getName() : "system";

        config.setApplication(appSlug);
@@ -126,14 +138,24 @@ public class ApplicationConfigController {
        List<String> perAppKeys = extractSensitiveKeys(saved);
        List<String> mergedKeys = SensitiveKeysMerger.merge(globalKeys, perAppKeys);

        CommandGroupResponse pushResult = pushConfigToAgentsWithMergedKeys(appSlug, env.slug(), saved, mergedKeys);
        log.info("Config v{} saved for '{}', pushed to {} agent(s), {} responded",
                saved.getVersion(), appSlug, pushResult.total(), pushResult.responded());
        CommandGroupResponse pushResult;
        if ("staged".equalsIgnoreCase(apply)) {
            pushResult = new CommandGroupResponse(true, 0, 0, List.of(), List.of());
            log.info("Config v{} staged for '{}' (no live push)", saved.getVersion(), appSlug);
        } else {
            pushResult = pushConfigToAgentsWithMergedKeys(appSlug, env.slug(), saved, mergedKeys);
            log.info("Config v{} saved for '{}', pushed to {} agent(s), {} responded",
                    saved.getVersion(), appSlug, pushResult.total(), pushResult.responded());
        }

        auditService.log("update_app_config", AuditCategory.CONFIG, appSlug,
        auditService.log(
                "staged".equalsIgnoreCase(apply) ? "stage_app_config" : "update_app_config",
                AuditCategory.CONFIG, appSlug,
                Map.of("environment", env.slug(), "version", saved.getVersion(),
                        "apply", apply.toLowerCase(),
                        "agentsPushed", pushResult.total(),
                        "responded", pushResult.responded(), "timedOut", pushResult.timedOut().size()),
                        "responded", pushResult.responded(),
                        "timedOut", pushResult.timedOut().size()),
                AuditResult.SUCCESS, httpRequest);

        return ResponseEntity.ok(new ConfigUpdateResponse(saved, pushResult));
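The apply-parameter contract introduced above accepts 'staged' and 'live' case-insensitively and rejects anything else with HTTP 400. A standalone sketch of that rule (hypothetical helper name; the real check lives inline in updateConfig):

```java
// Sketch of the apply-parameter contract: 'staged' and 'live' are accepted
// case-insensitively; any other value raises, which the controller maps to 400.
public class ApplyParamSketch {
    static boolean isStaged(String apply) {
        if (!"staged".equalsIgnoreCase(apply) && !"live".equalsIgnoreCase(apply)) {
            throw new IllegalArgumentException(
                    "Unknown apply value '" + apply + "' -- must be 'staged' or 'live'");
        }
        return "staged".equalsIgnoreCase(apply);
    }

    public static void main(String[] args) {
        System.out.println(isStaged("STAGED")); // prints true
        System.out.println(isStaged("live"));   // prints false
        try {
            isStaged("later");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected");     // prints rejected
        }
    }
}
```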
@@ -196,7 +196,16 @@ public class CatalogController {
        }

        Set<String> routeIds = routesByApp.getOrDefault(slug, Set.of());
        List<String> agentIds = agents.stream().map(AgentInfo::instanceId).toList();

        // Resolve the env slug for this row early so fromUri can survive
        // cross-env queries (env==null) against managed apps.
        String rowEnvSlug = envSlug;
        if (app != null && rowEnvSlug.isEmpty()) {
            try {
                rowEnvSlug = envService.getById(app.environmentId()).slug();
            } catch (Exception ignored) {}
        }
        final String resolvedEnvSlug = rowEnvSlug;

        // Routes
        List<RouteSummary> routeSummaries = routeIds.stream()
@@ -204,7 +213,7 @@ public class CatalogController {
                    String key = slug + "/" + routeId;
                    long count = routeExchangeCounts.getOrDefault(key, 0L);
                    Instant lastSeen = routeLastSeen.get(key);
                    String fromUri = resolveFromEndpointUri(routeId, agentIds);
                    String fromUri = resolveFromEndpointUri(slug, routeId, resolvedEnvSlug);
                    String state = routeStateRegistry.getState(slug, routeId).name().toLowerCase();
                    String routeState = "started".equals(state) ? null : state;
                    return new RouteSummary(routeId, count, lastSeen, fromUri, routeState);
@@ -258,15 +267,9 @@ public class CatalogController {
        String healthTooltip = buildHealthTooltip(app != null, deployStatus, agentHealth, agents.size());

        String displayName = app != null ? app.displayName() : slug;
        String appEnvSlug = envSlug;
        if (app != null && appEnvSlug.isEmpty()) {
            try {
                appEnvSlug = envService.getById(app.environmentId()).slug();
            } catch (Exception ignored) {}
        }

        catalog.add(new CatalogApp(
                slug, displayName, app != null, appEnvSlug,
                slug, displayName, app != null, resolvedEnvSlug,
                health, healthTooltip, agents.size(), routeSummaries, agentSummaries,
                totalExchanges, deploymentSummary
        ));
@@ -275,8 +278,11 @@ public class CatalogController {
        return ResponseEntity.ok(catalog);
    }

    private String resolveFromEndpointUri(String routeId, List<String> agentIds) {
        return diagramStore.findContentHashForRouteByAgents(routeId, agentIds)
    private String resolveFromEndpointUri(String applicationId, String routeId, String environment) {
        if (environment == null || environment.isBlank()) {
            return null;
        }
        return diagramStore.findLatestContentHashForAppRoute(applicationId, routeId, environment)
                .flatMap(diagramStore::findByContentHash)
                .map(RouteGraph::getRoot)
                .map(root -> root.getEndpointUri())
@@ -2,8 +2,13 @@ package com.cameleer.server.app.controller;

import com.cameleer.server.app.runtime.DeploymentExecutor;
import com.cameleer.server.app.web.EnvPath;
import com.cameleer.server.core.admin.AuditCategory;
import com.cameleer.server.core.admin.AuditResult;
import com.cameleer.server.core.admin.AuditService;
import com.cameleer.server.core.runtime.App;
import com.cameleer.server.core.runtime.AppService;
import com.cameleer.server.core.runtime.AppVersion;
import com.cameleer.server.core.runtime.AppVersionRepository;
import com.cameleer.server.core.runtime.Deployment;
import com.cameleer.server.core.runtime.DeploymentService;
import com.cameleer.server.core.runtime.Environment;
@@ -12,14 +17,18 @@ import com.cameleer.server.core.runtime.RuntimeOrchestrator;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;
import jakarta.servlet.http.HttpServletRequest;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.server.ResponseStatusException;

import java.util.List;
import java.util.Map;
@@ -42,17 +51,23 @@ public class DeploymentController {
    private final RuntimeOrchestrator orchestrator;
    private final AppService appService;
    private final EnvironmentService environmentService;
    private final AuditService auditService;
    private final AppVersionRepository appVersionRepository;

    public DeploymentController(DeploymentService deploymentService,
                                DeploymentExecutor deploymentExecutor,
                                RuntimeOrchestrator orchestrator,
                                AppService appService,
                                EnvironmentService environmentService) {
                                EnvironmentService environmentService,
                                AuditService auditService,
                                AppVersionRepository appVersionRepository) {
        this.deploymentService = deploymentService;
        this.deploymentExecutor = deploymentExecutor;
        this.orchestrator = orchestrator;
        this.appService = appService;
        this.environmentService = environmentService;
        this.auditService = auditService;
        this.appVersionRepository = appVersionRepository;
    }

    @GetMapping
@@ -86,13 +101,25 @@ public class DeploymentController {
    @ApiResponse(responseCode = "202", description = "Deployment accepted and starting")
    public ResponseEntity<Deployment> deploy(@EnvPath Environment env,
            @PathVariable String appSlug,
            @RequestBody DeployRequest request) {
            @RequestBody DeployRequest request,
            HttpServletRequest httpRequest) {
        try {
            App app = appService.getByEnvironmentAndSlug(env.id(), appSlug);
            Deployment deployment = deploymentService.createDeployment(app.id(), request.appVersionId(), env.id());
            AppVersion appVersion = appVersionRepository.findById(request.appVersionId())
                    .orElseThrow(() -> new IllegalArgumentException("AppVersion not found: " + request.appVersionId()));
            Deployment deployment = deploymentService.createDeployment(app.id(), request.appVersionId(), env.id(), currentUserId());
            deploymentExecutor.executeAsync(deployment);
            auditService.log("deploy_app", AuditCategory.DEPLOYMENT, deployment.id().toString(),
                    Map.of("appSlug", appSlug, "envSlug", env.slug(),
                            "appVersionId", request.appVersionId().toString(),
                            "jarFilename", appVersion.jarFilename() != null ? appVersion.jarFilename() : "",
                            "version", appVersion.version()),
                    AuditResult.SUCCESS, httpRequest);
            return ResponseEntity.accepted().body(deployment);
        } catch (IllegalArgumentException e) {
            auditService.log("deploy_app", AuditCategory.DEPLOYMENT, null,
                    Map.of("appSlug", appSlug, "envSlug", env.slug(), "error", e.getMessage()),
                    AuditResult.FAILURE, httpRequest);
            return ResponseEntity.notFound().build();
        }
    }
@@ -103,12 +130,19 @@ public class DeploymentController {
    @ApiResponse(responseCode = "404", description = "Deployment not found")
    public ResponseEntity<Deployment> stop(@EnvPath Environment env,
            @PathVariable String appSlug,
            @PathVariable UUID deploymentId) {
            @PathVariable UUID deploymentId,
            HttpServletRequest httpRequest) {
        try {
            Deployment deployment = deploymentService.getById(deploymentId);
            deploymentExecutor.stopDeployment(deployment);
            auditService.log("stop_deployment", AuditCategory.DEPLOYMENT, deploymentId.toString(),
                    Map.of("appSlug", appSlug, "envSlug", env.slug()),
                    AuditResult.SUCCESS, httpRequest);
            return ResponseEntity.ok(deploymentService.getById(deploymentId));
        } catch (IllegalArgumentException e) {
            auditService.log("stop_deployment", AuditCategory.DEPLOYMENT, deploymentId.toString(),
                    Map.of("appSlug", appSlug, "envSlug", env.slug(), "error", e.getMessage()),
                    AuditResult.FAILURE, httpRequest);
            return ResponseEntity.notFound().build();
        }
    }
@@ -122,18 +156,26 @@ public class DeploymentController {
    public ResponseEntity<?> promote(@EnvPath Environment env,
            @PathVariable String appSlug,
            @PathVariable UUID deploymentId,
            @RequestBody PromoteRequest request) {
            @RequestBody PromoteRequest request,
            HttpServletRequest httpRequest) {
        try {
            App sourceApp = appService.getByEnvironmentAndSlug(env.id(), appSlug);
            Deployment source = deploymentService.getById(deploymentId);
            Environment targetEnv = environmentService.getBySlug(request.targetEnvironment());
            // Target must also have the app with the same slug
            App targetApp = appService.getByEnvironmentAndSlug(targetEnv.id(), appSlug);
            Deployment promoted = deploymentService.promote(targetApp.id(), source.appVersionId(), targetEnv.id());
            Deployment promoted = deploymentService.promote(targetApp.id(), source.appVersionId(), targetEnv.id(), currentUserId());
            deploymentExecutor.executeAsync(promoted);
            auditService.log("promote_deployment", AuditCategory.DEPLOYMENT, promoted.id().toString(),
                    Map.of("sourceEnv", env.slug(), "targetEnv", request.targetEnvironment(),
                            "appSlug", appSlug, "appVersionId", source.appVersionId().toString()),
                    AuditResult.SUCCESS, httpRequest);
            return ResponseEntity.accepted().body(promoted);
        } catch (IllegalArgumentException e) {
            return ResponseEntity.status(org.springframework.http.HttpStatus.NOT_FOUND)
            auditService.log("promote_deployment", AuditCategory.DEPLOYMENT, deploymentId.toString(),
                    Map.of("sourceEnv", env.slug(), "targetEnv", request.targetEnvironment(),
                            "appSlug", appSlug, "error", e.getMessage()),
                    AuditResult.FAILURE, httpRequest);
            return ResponseEntity.status(HttpStatus.NOT_FOUND)
                    .body(Map.of("error", e.getMessage()));
        }
    }
@@ -157,6 +199,15 @@ public class DeploymentController {
        }
    }

    private String currentUserId() {
        var auth = SecurityContextHolder.getContext().getAuthentication();
        if (auth == null || auth.getName() == null) {
            throw new ResponseStatusException(HttpStatus.UNAUTHORIZED, "No authentication");
        }
        String name = auth.getName();
        return name.startsWith("user:") ? name.substring(5) : name;
    }

    public record DeployRequest(UUID appVersionId) {}
    public record PromoteRequest(String targetEnvironment) {}
}
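The currentUserId() helper added above normalizes the authenticated principal name before it is recorded on deployments. A tiny standalone sketch of just the name-normalization rule (hypothetical helper name):

```java
// Sketch of the principal-name normalization: names of the form "user:<id>"
// are stripped to the bare id; anything else passes through unchanged.
public class PrincipalNameSketch {
    static String normalize(String name) {
        return name.startsWith("user:") ? name.substring(5) : name;
    }

    public static void main(String[] args) {
        System.out.println(normalize("user:42")); // prints 42
        System.out.println(normalize("system"));  // prints system
    }
}
```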
@@ -2,8 +2,6 @@ package com.cameleer.server.app.controller;

import com.cameleer.common.graph.RouteGraph;
import com.cameleer.server.app.web.EnvPath;
import com.cameleer.server.core.agent.AgentInfo;
import com.cameleer.server.core.agent.AgentRegistryService;
import com.cameleer.server.core.diagram.DiagramLayout;
import com.cameleer.server.core.diagram.DiagramRenderer;
import com.cameleer.server.core.runtime.Environment;
@@ -21,7 +19,6 @@ import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;
import java.util.Optional;

/**
@@ -42,14 +39,11 @@ public class DiagramRenderController {

    private final DiagramStore diagramStore;
    private final DiagramRenderer diagramRenderer;
    private final AgentRegistryService registryService;

    public DiagramRenderController(DiagramStore diagramStore,
                                   DiagramRenderer diagramRenderer,
                                   AgentRegistryService registryService) {
                                   DiagramRenderer diagramRenderer) {
        this.diagramStore = diagramStore;
        this.diagramRenderer = diagramRenderer;
        this.registryService = registryService;
    }

    @GetMapping("/api/v1/diagrams/{contentHash}/render")
@@ -90,8 +84,8 @@ public class DiagramRenderController {

    @GetMapping("/api/v1/environments/{envSlug}/apps/{appSlug}/routes/{routeId}/diagram")
    @Operation(summary = "Find the latest diagram for this app's route in this environment",
            description = "Resolves agents in this env for this app, then looks up the latest diagram for the route "
                    + "they reported. Env scope prevents a dev route from returning a prod diagram.")
            description = "Returns the most recently stored diagram for (app, env, route). Independent of the "
                    + "agent registry, so routes removed from the current app version still resolve.")
    @ApiResponse(responseCode = "200", description = "Diagram layout returned")
    @ApiResponse(responseCode = "404", description = "No diagram found")
    public ResponseEntity<DiagramLayout> findByAppAndRoute(
@@ -99,15 +93,7 @@ public class DiagramRenderController {
            @PathVariable String appSlug,
            @PathVariable String routeId,
            @RequestParam(defaultValue = "LR") String direction) {
        List<String> agentIds = registryService.findByApplicationAndEnvironment(appSlug, env.slug()).stream()
                .map(AgentInfo::instanceId)
                .toList();

        if (agentIds.isEmpty()) {
            return ResponseEntity.notFound().build();
        }

        Optional<String> contentHash = diagramStore.findContentHashForRouteByAgents(routeId, agentIds);
        Optional<String> contentHash = diagramStore.findLatestContentHashForAppRoute(appSlug, routeId, env.slug());
        if (contentHash.isEmpty()) {
            return ResponseEntity.notFound().build();
        }
@@ -44,6 +44,7 @@ public class LogQueryController {
            @RequestParam(required = false) String exchangeId,
            @RequestParam(required = false) String logger,
            @RequestParam(required = false) String source,
            @RequestParam(required = false) String instanceIds,
            @RequestParam(required = false) String from,
            @RequestParam(required = false) String to,
            @RequestParam(required = false) String cursor,
@@ -69,12 +70,21 @@ public class LogQueryController {
                    .toList();
        }

        List<String> instanceIdList = List.of();
        if (instanceIds != null && !instanceIds.isEmpty()) {
            instanceIdList = Arrays.stream(instanceIds.split(","))
                    .map(String::trim)
                    .filter(s -> !s.isEmpty())
                    .toList();
        }

        Instant fromInstant = from != null ? Instant.parse(from) : null;
        Instant toInstant = to != null ? Instant.parse(to) : null;

        LogSearchRequest request = new LogSearchRequest(
                searchText, levels, application, instanceId, exchangeId,
                logger, env.slug(), sources, fromInstant, toInstant, cursor, limit, sort);
                logger, env.slug(), sources, fromInstant, toInstant, cursor, limit, sort,
                instanceIdList);

        LogSearchResponse result = logIndex.search(request);
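The new instanceIds parameter above takes a comma-separated list and tolerates stray whitespace and empty entries. A standalone sketch of that parsing rule (hypothetical class and method names):

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the instanceIds handling: split the comma-separated query param,
// trim each entry, drop blanks; a missing param yields an empty list,
// meaning "no instance filter".
public class InstanceIdsSketch {
    static List<String> parse(String instanceIds) {
        if (instanceIds == null || instanceIds.isEmpty()) {
            return List.of();
        }
        return Arrays.stream(instanceIds.split(","))
                .map(String::trim)
                .filter(s -> !s.isEmpty())
                .toList();
    }

    public static void main(String[] args) {
        System.out.println(parse(" a-1, ,b-2 ")); // prints [a-1, b-2]
        System.out.println(parse(null));          // prints []
    }
}
```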
@@ -132,13 +132,12 @@ public class RouteCatalogController {
        List<AgentInfo> agents = agentsByApp.getOrDefault(appId, List.of());

        Set<String> routeIds = routesByApp.getOrDefault(appId, Set.of());
        List<String> agentIds = agents.stream().map(AgentInfo::instanceId).toList();
        List<RouteSummary> routeSummaries = routeIds.stream()
                .map(routeId -> {
                    String key = appId + "/" + routeId;
                    long count = routeExchangeCounts.getOrDefault(key, 0L);
                    Instant lastSeen = routeLastSeen.get(key);
                    String fromUri = resolveFromEndpointUri(routeId, agentIds);
                    String fromUri = resolveFromEndpointUri(appId, routeId, envSlug);
                    String state = routeStateRegistry.getState(appId, routeId).name().toLowerCase();
                    String routeState = "started".equals(state) ? null : state;
                    return new RouteSummary(routeId, count, lastSeen, fromUri, routeState);
@@ -160,8 +159,8 @@ public class RouteCatalogController {
        return ResponseEntity.ok(catalog);
    }

    private String resolveFromEndpointUri(String routeId, List<String> agentIds) {
        return diagramStore.findContentHashForRouteByAgents(routeId, agentIds)
    private String resolveFromEndpointUri(String applicationId, String routeId, String environment) {
        return diagramStore.findLatestContentHashForAppRoute(applicationId, routeId, environment)
                .flatMap(diagramStore::findByContentHash)
                .map(RouteGraph::getRoot)
                .map(root -> root.getEndpointUri())
@@ -4,6 +4,7 @@ import com.cameleer.server.app.web.EnvPath;
import com.cameleer.server.core.admin.AppSettings;
import com.cameleer.server.core.admin.AppSettingsRepository;
import com.cameleer.server.core.runtime.Environment;
import com.cameleer.server.core.search.AttributeFilter;
import com.cameleer.server.core.search.ExecutionStats;
import com.cameleer.server.core.search.ExecutionSummary;
import com.cameleer.server.core.search.SearchRequest;
@@ -14,6 +15,7 @@ import com.cameleer.server.core.search.TopError;
import com.cameleer.server.core.storage.StatsStore;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
@@ -21,8 +23,10 @@ import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.server.ResponseStatusException;

import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

@@ -57,11 +61,19 @@ public class SearchController {
            @RequestParam(name = "agentId", required = false) String instanceId,
            @RequestParam(required = false) String processorType,
            @RequestParam(required = false) String application,
            @RequestParam(name = "attr", required = false) List<String> attr,
            @RequestParam(defaultValue = "0") int offset,
            @RequestParam(defaultValue = "50") int limit,
            @RequestParam(required = false) String sortField,
            @RequestParam(required = false) String sortDir) {

        List<AttributeFilter> attributeFilters;
        try {
            attributeFilters = parseAttrParams(attr);
        } catch (IllegalArgumentException e) {
            throw new ResponseStatusException(HttpStatus.BAD_REQUEST, e.getMessage(), e);
        }

        SearchRequest request = new SearchRequest(
                status, timeFrom, timeTo,
                null, null,
@@ -72,12 +84,36 @@ public class SearchController {
                offset, limit,
                sortField, sortDir,
                null,
                env.slug()
                env.slug(),
                attributeFilters
        );

        return ResponseEntity.ok(searchService.search(request));
    }

    /**
     * Parses {@code attr} query params of the form {@code key} (key-only) or {@code key:value}
     * (exact or wildcard via {@code *}). Splits on the first {@code :}; later colons are part of
     * the value. Blank / null list → empty result. Key validation is delegated to
     * {@link AttributeFilter}'s compact constructor, which throws {@link IllegalArgumentException}
     * on invalid keys (mapped to 400 by the caller).
     */
    static List<AttributeFilter> parseAttrParams(List<String> raw) {
        if (raw == null || raw.isEmpty()) return List.of();
        List<AttributeFilter> out = new ArrayList<>(raw.size());
        for (String entry : raw) {
            if (entry == null || entry.isBlank()) continue;
            int colon = entry.indexOf(':');
            if (colon < 0) {
                out.add(new AttributeFilter(entry.trim(), null));
            } else {
                out.add(new AttributeFilter(entry.substring(0, colon).trim(),
                        entry.substring(colon + 1)));
            }
        }
        return out;
    }
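The split-on-first-colon rule documented above is easy to get wrong when values themselves contain colons. A standalone sketch of exactly that rule, with AttributeFilter simplified to a hypothetical local record (the real one validates keys in its compact constructor):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the attr-param parsing: split on the FIRST ':' only, so later
// colons stay in the value; key-only entries get a null value; null/blank
// entries are skipped.
public class AttrParseSketch {
    record AttributeFilter(String key, String value) {}

    static List<AttributeFilter> parseAttrParams(List<String> raw) {
        if (raw == null || raw.isEmpty()) return List.of();
        List<AttributeFilter> out = new ArrayList<>(raw.size());
        for (String entry : raw) {
            if (entry == null || entry.isBlank()) continue;
            int colon = entry.indexOf(':');
            if (colon < 0) {
                out.add(new AttributeFilter(entry.trim(), null));
            } else {
                out.add(new AttributeFilter(entry.substring(0, colon).trim(),
                        entry.substring(colon + 1)));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        var filters = parseAttrParams(List.of("orderId:ORD-1:extra", "region"));
        System.out.println(filters.get(0).key() + "=" + filters.get(0).value()); // prints orderId=ORD-1:extra
        System.out.println(filters.get(1).key() + "=" + filters.get(1).value()); // prints region=null
    }
}
```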

    @PostMapping("/executions/search")
    @Operation(summary = "Advanced search with all filters",
            description = "Env from the path overrides any environment field in the body.")
@@ -0,0 +1,148 @@
package com.cameleer.server.app.controller;

import com.cameleer.server.core.storage.ServerMetricsQueryStore;
import com.cameleer.server.core.storage.model.ServerInstanceInfo;
import com.cameleer.server.core.storage.model.ServerMetricCatalogEntry;
import com.cameleer.server.core.storage.model.ServerMetricQueryRequest;
import com.cameleer.server.core.storage.model.ServerMetricQueryResponse;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.http.ResponseEntity;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import java.time.Instant;
import java.util.List;
import java.util.Map;

/**
 * Generic read API over the ClickHouse {@code server_metrics} table. Lets
 * SaaS control planes build server-health dashboards without requiring direct
 * ClickHouse access.
 *
 * <p>Three endpoints cover all 17 panels in {@code docs/server-self-metrics.md}:
 * <ul>
 *   <li>{@code GET /catalog} — discover available metric names, types, statistics, and tags</li>
 *   <li>{@code POST /query} — generic time-series query with aggregation, grouping, filtering, and counter-delta mode</li>
 *   <li>{@code GET /instances} — list server instances (useful for partitioning counter math)</li>
 * </ul>
 *
 * <p>Visibility matches {@code ClickHouseAdminController} / {@code DatabaseAdminController}:
 * <ul>
 *   <li>Conditional on {@code cameleer.server.security.infrastructureendpoints=true} (default).</li>
 *   <li>Class-level {@code @PreAuthorize("hasRole('ADMIN')")} on top of the
 *       {@code /api/v1/admin/**} catch-all in {@code SecurityConfig}.</li>
 * </ul>
 */
@ConditionalOnProperty(
        name = "cameleer.server.security.infrastructureendpoints",
        havingValue = "true",
        matchIfMissing = true
)
@RestController
@RequestMapping("/api/v1/admin/server-metrics")
@PreAuthorize("hasRole('ADMIN')")
@Tag(name = "Server Self-Metrics",
        description = "Read API over the server's own Micrometer registry snapshots (ADMIN only)")
public class ServerMetricsAdminController {

    /** Default lookback window for catalog/instances when from/to are omitted. */
    private static final long DEFAULT_LOOKBACK_SECONDS = 3_600L;

    private final ServerMetricsQueryStore store;

    public ServerMetricsAdminController(ServerMetricsQueryStore store) {
        this.store = store;
    }

    @GetMapping("/catalog")
    @Operation(summary = "List metric names observed in the window",
            description = "For each metric_name, returns metric_type, the set of statistics emitted, and the union of tag keys.")
    public ResponseEntity<List<ServerMetricCatalogEntry>> catalog(
            @RequestParam(required = false) String from,
            @RequestParam(required = false) String to) {
        Instant[] window = resolveWindow(from, to);
        return ResponseEntity.ok(store.catalog(window[0], window[1]));
    }

    @GetMapping("/instances")
    @Operation(summary = "List server_instance_id values observed in the window",
            description = "Returns first/last seen timestamps — use to partition counter-delta computations.")
    public ResponseEntity<List<ServerInstanceInfo>> instances(
            @RequestParam(required = false) String from,
            @RequestParam(required = false) String to) {
        Instant[] window = resolveWindow(from, to);
        return ResponseEntity.ok(store.listInstances(window[0], window[1]));
    }

    @PostMapping("/query")
    @Operation(summary = "Generic time-series query",
            description = "Returns bucketed series for a single metric_name. Supports aggregation (avg/sum/max/min/latest), group-by-tag, filter-by-tag, counter delta mode, and a derived 'mean' statistic for timers.")
    public ResponseEntity<ServerMetricQueryResponse> query(@RequestBody QueryBody body) {
        ServerMetricQueryRequest request = new ServerMetricQueryRequest(
                body.metric(),
                body.statistic(),
                parseInstant(body.from(), "from"),
                parseInstant(body.to(), "to"),
                body.stepSeconds(),
                body.groupByTags(),
                body.filterTags(),
                body.aggregation(),
                body.mode(),
                body.serverInstanceIds());
        return ResponseEntity.ok(store.query(request));
    }

    @ExceptionHandler(IllegalArgumentException.class)
    public ResponseEntity<Map<String, String>> handleBadRequest(IllegalArgumentException e) {
        return ResponseEntity.badRequest().body(Map.of("error", e.getMessage()));
    }

    private static Instant[] resolveWindow(String from, String to) {
        Instant toI = to != null ? parseInstant(to, "to") : Instant.now();
        Instant fromI = from != null
                ? parseInstant(from, "from")
                : toI.minusSeconds(DEFAULT_LOOKBACK_SECONDS);
        if (!fromI.isBefore(toI)) {
            throw new IllegalArgumentException("from must be strictly before to");
        }
        return new Instant[]{fromI, toI};
    }

    private static Instant parseInstant(String raw, String field) {
        if (raw == null || raw.isBlank()) {
            throw new IllegalArgumentException(field + " is required");
        }
        try {
            return Instant.parse(raw);
        } catch (Exception e) {
            throw new IllegalArgumentException(
                    field + " must be an ISO-8601 instant (e.g. 2026-04-23T10:00:00Z)");
        }
    }

    /**
     * Request body for {@link #query(QueryBody)}. Uses ISO-8601 strings on
     * the wire so the OpenAPI schema stays language-neutral.
     */
    public record QueryBody(
            String metric,
            String statistic,
            String from,
            String to,
            Integer stepSeconds,
            List<String> groupByTags,
            Map<String, String> filterTags,
            String aggregation,
            String mode,
            List<String> serverInstanceIds
    ) {
    }
}
||||
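The /query description above mentions a derived 'mean' statistic for timers, which would be computed from the persisted count and total_time rows rather than stored directly. A minimal sketch of that derivation, guarding the empty-bucket case; the class and method names here are illustrative, not the store's actual API:

```java
public class TimerMeanSketch {
    // Derive a timer's mean latency for one bucket from its count and
    // total_time statistics. Returns NaN for empty buckets so callers
    // can skip them instead of dividing by zero.
    static double deriveMean(double count, double totalTime) {
        if (count <= 0) {
            return Double.NaN; // no samples landed in this bucket
        }
        return totalTime / count;
    }

    public static void main(String[] args) {
        // 5 requests totalling 250 ms in one bucket -> mean of 50 ms
        System.out.println(deriveMean(5, 250.0)); // 50.0
        System.out.println(deriveMean(0, 0.0));   // NaN
    }
}
```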
@@ -0,0 +1,12 @@
package com.cameleer.server.app.dto;

import com.cameleer.server.core.runtime.DirtyStateResult;

import java.util.List;

public record DirtyStateResponse(
        boolean dirty,
        String lastSuccessfulDeploymentId,
        List<DirtyStateResult.Difference> differences
) {
}
@@ -6,8 +6,10 @@ import com.cameleer.server.core.admin.AuditService;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import org.springframework.stereotype.Component;
import org.springframework.util.AntPathMatcher;
import org.springframework.web.servlet.HandlerInterceptor;

import java.util.List;
import java.util.Map;
import java.util.Set;

@@ -22,7 +24,9 @@ import java.util.Set;
public class AuditInterceptor implements HandlerInterceptor {

    private static final Set<String> AUDITABLE_METHODS = Set.of("POST", "PUT", "DELETE");
    private static final Set<String> EXCLUDED_PATHS = Set.of("/api/v1/search/executions");
    private static final List<String> EXCLUDED_PATH_PATTERNS = List.of(
            "/api/v1/environments/*/executions/search");
    private static final AntPathMatcher PATH_MATCHER = new AntPathMatcher();

    private final AuditService auditService;
@@ -41,8 +45,10 @@ public class AuditInterceptor implements HandlerInterceptor {
        }

        String path = request.getRequestURI();
        if (EXCLUDED_PATHS.contains(path)) {
            return;
        }
        for (String pattern : EXCLUDED_PATH_PATTERNS) {
            if (PATH_MATCHER.match(pattern, path)) {
                return;
            }
        }
        AuditResult result = response.getStatus() < 400 ? AuditResult.SUCCESS : AuditResult.FAILURE;
@@ -0,0 +1,63 @@
package com.cameleer.server.app.metrics;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.UUID;

/**
 * Resolves a stable identifier for this server process, used as the
 * {@code server_instance_id} on every server_metrics sample. The value is
 * fixed at boot, so counters restart cleanly whenever the id rotates.
 *
 * <p>Precedence:
 * <ol>
 *   <li>{@code cameleer.server.instance-id} property / {@code CAMELEER_SERVER_INSTANCE_ID} env
 *   <li>{@code HOSTNAME} env (populated by Docker/Kubernetes)
 *   <li>{@link InetAddress#getLocalHost()} hostname
 *   <li>Random UUID (fallback — only hit when DNS and env are both silent)
 * </ol>
 */
@Configuration
public class ServerInstanceIdConfig {

    private static final Logger log = LoggerFactory.getLogger(ServerInstanceIdConfig.class);

    @Bean("serverInstanceId")
    public String serverInstanceId(
            @Value("${cameleer.server.instance-id:}") String configuredId) {
        if (!isBlank(configuredId)) {
            log.info("Server instance id resolved from configuration: {}", configuredId);
            return configuredId;
        }

        String hostnameEnv = System.getenv("HOSTNAME");
        if (!isBlank(hostnameEnv)) {
            log.info("Server instance id resolved from HOSTNAME env: {}", hostnameEnv);
            return hostnameEnv;
        }

        try {
            String localHost = InetAddress.getLocalHost().getHostName();
            if (!isBlank(localHost)) {
                log.info("Server instance id resolved from localhost lookup: {}", localHost);
                return localHost;
            }
        } catch (UnknownHostException e) {
            log.debug("InetAddress.getLocalHost() failed, falling back to UUID: {}", e.getMessage());
        }

        String fallback = UUID.randomUUID().toString();
        log.warn("Server instance id could not be resolved; using random UUID {}", fallback);
        return fallback;
    }

    private static boolean isBlank(String s) {
        return s == null || s.isBlank();
    }
}
@@ -0,0 +1,106 @@
package com.cameleer.server.app.metrics;

import com.cameleer.server.core.storage.ServerMetricsStore;
import com.cameleer.server.core.storage.model.ServerMetricSample;
import io.micrometer.core.instrument.Measurement;
import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Tag;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import java.time.Instant;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/**
 * Periodically snapshots every meter in the server's {@link MeterRegistry}
 * and writes the result to ClickHouse via {@link ServerMetricsStore}. This
 * gives us historical server-health data (buffer depths, agent transitions,
 * flush latency, JVM memory, HTTP response counts, etc.) without requiring
 * an external Prometheus.
 *
 * <p>Each Micrometer {@link Meter#measure() measurement} becomes one row, so
 * a single Timer produces rows for {@code count}, {@code total_time}, and
 * {@code max} each tick. Counter values are cumulative since meter
 * registration (Prometheus convention) — callers compute rate() themselves.
 *
 * <p>Disabled via {@code cameleer.server.self-metrics.enabled=false}.
 */
@Component
@ConditionalOnProperty(
        prefix = "cameleer.server.self-metrics",
        name = "enabled",
        havingValue = "true",
        matchIfMissing = true)
public class ServerMetricsSnapshotScheduler {

    private static final Logger log = LoggerFactory.getLogger(ServerMetricsSnapshotScheduler.class);

    private final MeterRegistry registry;
    private final ServerMetricsStore store;
    private final String tenantId;
    private final String serverInstanceId;

    public ServerMetricsSnapshotScheduler(
            MeterRegistry registry,
            ServerMetricsStore store,
            @Value("${cameleer.server.tenant.id:default}") String tenantId,
            @Qualifier("serverInstanceId") String serverInstanceId) {
        this.registry = registry;
        this.store = store;
        this.tenantId = tenantId;
        this.serverInstanceId = serverInstanceId;
    }

    @Scheduled(fixedDelayString = "${cameleer.server.self-metrics.interval-ms:60000}",
            initialDelayString = "${cameleer.server.self-metrics.interval-ms:60000}")
    public void snapshot() {
        try {
            Instant now = Instant.now();
            List<ServerMetricSample> batch = new ArrayList<>();

            for (Meter meter : registry.getMeters()) {
                Meter.Id id = meter.getId();
                Map<String, String> tags = flattenTags(id.getTagsAsIterable());
                String type = id.getType().name().toLowerCase();

                for (Measurement m : meter.measure()) {
                    double v = m.getValue();
                    if (!Double.isFinite(v)) continue;
                    batch.add(new ServerMetricSample(
                            tenantId,
                            now,
                            serverInstanceId,
                            id.getName(),
                            type,
                            m.getStatistic().getTagValueRepresentation(),
                            v,
                            tags));
                }
            }

            if (!batch.isEmpty()) {
                store.insertBatch(batch);
                log.debug("Persisted {} server self-metric samples", batch.size());
            }
        } catch (Exception e) {
            log.warn("Server self-metrics snapshot failed: {}", e.getMessage());
        }
    }

    private static Map<String, String> flattenTags(Iterable<Tag> tags) {
        Map<String, String> out = new LinkedHashMap<>();
        for (Tag t : tags) {
            out.put(t.getKey(), t.getValue());
        }
        return out;
    }
}
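The class javadoc notes that counter values are cumulative since meter registration, so consumers reconstruct per-interval deltas themselves; the server_instance_id column exists so a reading that drops below its predecessor can be read as a process restart rather than a negative rate. A minimal standalone sketch of that delta computation over one instance's readings, illustrative only and not the actual query implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class CounterDeltaSketch {
    // Convert cumulative counter readings (time-ordered, single instance)
    // into per-interval deltas. A reading lower than its predecessor means
    // the process restarted, so the whole new value counts as the delta.
    static List<Double> deltas(List<Double> cumulative) {
        List<Double> out = new ArrayList<>();
        for (int i = 1; i < cumulative.size(); i++) {
            double prev = cumulative.get(i - 1);
            double cur = cumulative.get(i);
            out.add(cur >= prev ? cur - prev : cur); // reset detected on decrease
        }
        return out;
    }

    public static void main(String[] args) {
        // 10 -> 25 -> 3 (restart) -> 8
        System.out.println(deltas(List.of(10.0, 25.0, 3.0, 8.0))); // [15.0, 3.0, 5.0]
    }
}
```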
@@ -1,6 +1,8 @@
package com.cameleer.server.app.runtime;

import com.cameleer.common.model.ApplicationConfig;
import com.cameleer.server.app.metrics.ServerMetrics;
import com.cameleer.server.app.storage.PostgresApplicationConfigRepository;
import com.cameleer.server.app.storage.PostgresDeploymentRepository;
import com.cameleer.server.core.runtime.*;
import org.slf4j.Logger;
@@ -25,6 +27,7 @@ public class DeploymentExecutor {
    private final EnvironmentService envService;
    private final DeploymentRepository deploymentRepository;
    private final PostgresDeploymentRepository pgDeployRepo;
    private final PostgresApplicationConfigRepository applicationConfigRepository;

    @Autowired(required = false)
    private DockerNetworkManager networkManager;
@@ -59,6 +62,9 @@ public class DeploymentExecutor {
    @Value("${cameleer.server.runtime.serverurl:}")
    private String globalServerUrl;

    @Value("${cameleer.server.runtime.certresolver:}")
    private String globalCertResolver;

    @Value("${cameleer.server.runtime.jardockervolume:}")
    private String jarDockerVolume;
@@ -75,15 +81,45 @@ public class DeploymentExecutor {
            DeploymentService deploymentService,
            AppService appService,
            EnvironmentService envService,
            DeploymentRepository deploymentRepository,
            PostgresApplicationConfigRepository applicationConfigRepository) {
        this.orchestrator = orchestrator;
        this.deploymentService = deploymentService;
        this.appService = appService;
        this.envService = envService;
        this.deploymentRepository = deploymentRepository;
        this.pgDeployRepo = (PostgresDeploymentRepository) deploymentRepository;
        this.applicationConfigRepository = applicationConfigRepository;
    }
    /** Deployment-scoped id suffix — distinguishes container names and
     * CAMELEER_AGENT_INSTANCEID across redeploys so old + new replicas can
     * coexist during a blue/green swap. First 8 chars of the deployment UUID. */
    static String generationOf(Deployment deployment) {
        return deployment.id().toString().substring(0, 8);
    }

    /**
     * Per-deployment context assembled once at the top of executeAsync and passed
     * into strategy handlers. Keeps the strategy methods readable instead of
     * threading 12 positional args.
     */
    private record DeployCtx(
            Deployment deployment,
            App app,
            Environment env,
            ResolvedContainerConfig config,
            String jarPath,
            String resolvedRuntimeType,
            String mainClass,
            String generation,
            String primaryNetwork,
            List<String> additionalNets,
            Map<String, String> baseEnvVars,
            Map<String, String> prometheusLabels,
            long deployStart
    ) {}

    @Async("deploymentTaskExecutor")
    public void executeAsync(Deployment deployment) {
        long deployStart = System.currentTimeMillis();
@@ -91,13 +127,15 @@
        App app = appService.getById(deployment.appId());
        Environment env = envService.getById(deployment.environmentId());
        String jarPath = appService.resolveJarPath(deployment.appVersionId());
        String generation = generationOf(deployment);

        var globalDefaults = new ConfigMerger.GlobalRuntimeDefaults(
                parseMemoryLimitMb(globalMemoryLimit),
                globalCpuShares,
                globalRoutingMode,
                globalRoutingDomain,
                globalServerUrl.isBlank() ? "http://cameleer-server:8081" : globalServerUrl,
                globalCertResolver.isBlank() ? null : globalCertResolver
        );
        ResolvedContainerConfig config = ConfigMerger.resolve(
                globalDefaults, env.defaultContainerConfig(), app.containerConfig());
@@ -139,7 +177,6 @@
        updateStage(deployment.id(), DeployStage.CREATE_NETWORK);
        // Primary network: use configured CAMELEER_DOCKER_NETWORK (tenant-isolated in SaaS mode)
        String primaryNetwork = dockerNetwork;
        List<String> additionalNets = new ArrayList<>();
        if (networkManager != null) {
            networkManager.ensureNetwork(primaryNetwork);
@@ -147,7 +184,7 @@
            networkManager.ensureNetwork(DockerNetworkManager.TRAEFIK_NETWORK);
            additionalNets.add(DockerNetworkManager.TRAEFIK_NETWORK);
            // Per-environment network scoped to tenant to prevent cross-tenant collisions
            String envNet = DockerNetworkManager.envNetworkName(tenantId, env.slug());
            networkManager.ensureNetwork(envNet);
            additionalNets.add(envNet);
        }
@@ -162,111 +199,21 @@
            }
        }

        DeployCtx ctx = new DeployCtx(
                deployment, app, env, config, jarPath,
                resolvedRuntimeType, mainClass, generation,
                primaryNetwork, additionalNets,
                buildEnvVars(app, env, config),
                PrometheusLabelBuilder.build(resolvedRuntimeType),
                deployStart);

        // Dispatch on strategy. Unknown values fall back to BLUE_GREEN via fromWire.
        DeploymentStrategy strategy = DeploymentStrategy.fromWire(config.deploymentStrategy());
        switch (strategy) {
            case BLUE_GREEN -> deployBlueGreen(ctx);
            case ROLLING -> deployRolling(ctx);
        }

    } catch (Exception e) {
        log.error("Deployment {} FAILED: {}", deployment.id(), e.getMessage(), e);
        pgDeployRepo.updateDeployStage(deployment.id(), null);
@@ -276,6 +223,262 @@
        }
    }

    /**
     * Blue/green strategy: start all N new replicas (coexisting with the old
     * ones thanks to the gen-suffixed container names), wait for ALL healthy,
     * then stop the previous deployment. Strict all-healthy — partial failure
     * preserves the previous deployment untouched.
     */
    private void deployBlueGreen(DeployCtx ctx) {
        ResolvedContainerConfig config = ctx.config();
        Deployment deployment = ctx.deployment();

        // === START REPLICAS ===
        updateStage(deployment.id(), DeployStage.START_REPLICAS);
        List<Map<String, Object>> replicaStates = new ArrayList<>();
        List<String> newContainerIds = new ArrayList<>();
        for (int i = 0; i < config.replicas(); i++) {
            Map<String, Object> state = new LinkedHashMap<>();
            String containerId = startReplica(ctx, i, state);
            newContainerIds.add(containerId);
            replicaStates.add(state);
        }
        pgDeployRepo.updateReplicaStates(deployment.id(), replicaStates);

        // === HEALTH CHECK ===
        updateStage(deployment.id(), DeployStage.HEALTH_CHECK);
        int healthyCount = waitForAllHealthy(newContainerIds, healthCheckTimeout);

        if (healthyCount < config.replicas()) {
            // Strict abort: tear down new replicas, leave the previous deployment untouched.
            for (String cid : newContainerIds) {
                try { orchestrator.stopContainer(cid); orchestrator.removeContainer(cid); }
                catch (Exception e) { log.warn("Cleanup failed for {}: {}", cid, e.getMessage()); }
            }
            pgDeployRepo.updateDeployStage(deployment.id(), null);
            String reason = String.format(
                    "blue-green: %d/%d replicas healthy within %ds; preserving previous deployment",
                    healthyCount, config.replicas(), healthCheckTimeout);
            deploymentService.markFailed(deployment.id(), reason);
            serverMetrics.recordDeploymentOutcome("FAILED");
            serverMetrics.recordDeploymentDuration(ctx.deployStart());
            return;
        }

        replicaStates = updateReplicaHealth(replicaStates, newContainerIds);
        pgDeployRepo.updateReplicaStates(deployment.id(), replicaStates);

        // === SWAP TRAFFIC ===
        // All new replicas are healthy; Traefik labels are already attracting
        // traffic to them. Stop the previous deployment now — the swap is
        // implicit in the label-driven load balancer.
        updateStage(deployment.id(), DeployStage.SWAP_TRAFFIC);
        Optional<Deployment> previous = deploymentRepository.findActiveByAppIdAndEnvironmentIdExcluding(
                deployment.appId(), deployment.environmentId(), deployment.id());
        if (previous.isPresent()) {
            log.info("blue-green: stopping previous deployment {} now that new replicas are healthy",
                    previous.get().id());
            stopDeploymentContainers(previous.get());
            deploymentService.markStopped(previous.get().id());
        }

        // === COMPLETE ===
        updateStage(deployment.id(), DeployStage.COMPLETE);
        persistSnapshotAndMarkRunning(ctx, newContainerIds.get(0));
        log.info("Deployment {} is RUNNING (blue-green, {}/{} replicas healthy)",
                deployment.id(), healthyCount, config.replicas());
    }
    /**
     * Rolling strategy: replace replicas one at a time — start new[i], wait
     * healthy, stop old[i]. On any replica's health failure, stop the
     * in-flight new container, leave remaining old replicas serving, mark
     * FAILED. Already-replaced old containers are not restored (can't unring
     * that bell) — user redeploys to recover.
     *
     * Resource peak: replicas + 1 (briefly while a new replica warms up
     * before its counterpart is stopped).
     */
    private void deployRolling(DeployCtx ctx) {
        ResolvedContainerConfig config = ctx.config();
        Deployment deployment = ctx.deployment();

        // Capture previous deployment's per-index container ids up front.
        Optional<Deployment> previousOpt = deploymentRepository.findActiveByAppIdAndEnvironmentIdExcluding(
                deployment.appId(), deployment.environmentId(), deployment.id());
        Map<Integer, String> oldContainerByIndex = new LinkedHashMap<>();
        if (previousOpt.isPresent() && previousOpt.get().replicaStates() != null) {
            for (Map<String, Object> r : previousOpt.get().replicaStates()) {
                Object idx = r.get("index");
                Object cid = r.get("containerId");
                if (idx instanceof Number n && cid instanceof String s) {
                    oldContainerByIndex.put(n.intValue(), s);
                }
            }
        }

        // === START REPLICAS ===
        updateStage(deployment.id(), DeployStage.START_REPLICAS);
        List<Map<String, Object>> replicaStates = new ArrayList<>();
        List<String> newContainerIds = new ArrayList<>();

        for (int i = 0; i < config.replicas(); i++) {
            // Start new replica i (gen-suffixed name; coexists with old[i]).
            Map<String, Object> state = new LinkedHashMap<>();
            String newCid = startReplica(ctx, i, state);
            newContainerIds.add(newCid);
            replicaStates.add(state);
            pgDeployRepo.updateReplicaStates(deployment.id(), replicaStates);

            // === HEALTH CHECK (per-replica) ===
            updateStage(deployment.id(), DeployStage.HEALTH_CHECK);
            boolean healthy = waitForOneHealthy(newCid, healthCheckTimeout);
            if (!healthy) {
                // Abort: stop this in-flight new replica AND any new replicas
                // started so far. Already-stopped old replicas stay stopped
                // (rolling is not reversible). Remaining un-replaced old
                // replicas keep serving traffic.
                for (String cid : newContainerIds) {
                    try { orchestrator.stopContainer(cid); orchestrator.removeContainer(cid); }
                    catch (Exception e) { log.warn("Cleanup failed for {}: {}", cid, e.getMessage()); }
                }
                pgDeployRepo.updateDeployStage(deployment.id(), null);
                String reason = String.format(
                        "rolling: replica %d failed to reach healthy within %ds; %d previous replicas still running",
                        i, healthCheckTimeout, oldContainerByIndex.size());
                deploymentService.markFailed(deployment.id(), reason);
                serverMetrics.recordDeploymentOutcome("FAILED");
                serverMetrics.recordDeploymentDuration(ctx.deployStart());
                return;
            }

            // Health check passed: update replica status to RUNNING, stop the
            // corresponding old[i] if present, and continue with replica i+1.
            replicaStates = updateReplicaHealth(replicaStates, newContainerIds);
            pgDeployRepo.updateReplicaStates(deployment.id(), replicaStates);

            String oldCid = oldContainerByIndex.remove(i);
            if (oldCid != null) {
                try {
                    orchestrator.stopContainer(oldCid);
                    orchestrator.removeContainer(oldCid);
                    log.info("rolling: replaced replica {} (old={}, new={})", i, oldCid, newCid);
                } catch (Exception e) {
                    log.warn("rolling: failed to stop old replica {} ({}): {}", i, oldCid, e.getMessage());
                }
            }
        }

        // === SWAP TRAFFIC ===
        // Any old replicas with indices >= new.replicas (e.g., when replica
        // count shrank) are still running; sweep them now so the old
        // deployment can be marked STOPPED.
        updateStage(deployment.id(), DeployStage.SWAP_TRAFFIC);
        for (Map.Entry<Integer, String> e : oldContainerByIndex.entrySet()) {
            try {
                orchestrator.stopContainer(e.getValue());
                orchestrator.removeContainer(e.getValue());
                log.info("rolling: stopped leftover old replica {} ({})", e.getKey(), e.getValue());
            } catch (Exception ex) {
                log.warn("rolling: failed to stop leftover old replica {}: {}", e.getKey(), ex.getMessage());
            }
        }
        if (previousOpt.isPresent()) {
            deploymentService.markStopped(previousOpt.get().id());
        }

        // === COMPLETE ===
        updateStage(deployment.id(), DeployStage.COMPLETE);
        persistSnapshotAndMarkRunning(ctx, newContainerIds.get(0));
        log.info("Deployment {} is RUNNING (rolling, {}/{} replicas replaced)",
                deployment.id(), config.replicas(), config.replicas());
    }
    /** Poll a single container until healthy or the timeout expires. Returns
     * true on healthy, false on timeout or thread interrupt. */
    private boolean waitForOneHealthy(String containerId, int timeoutSeconds) {
        long deadline = System.currentTimeMillis() + (timeoutSeconds * 1000L);
        while (System.currentTimeMillis() < deadline) {
            ContainerStatus status = orchestrator.getContainerStatus(containerId);
            if ("healthy".equals(status.state())) return true;
            try { Thread.sleep(2000); } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }
    /** Start one replica container with the gen-suffixed name and return its
     * container id. Fills `stateOut` with the replicaStates JSONB row. */
    private String startReplica(DeployCtx ctx, int i, Map<String, Object> stateOut) {
        Environment env = ctx.env();
        App app = ctx.app();
        ResolvedContainerConfig config = ctx.config();

        String instanceId = env.slug() + "-" + app.slug() + "-" + i + "-" + ctx.generation();
        String containerName = tenantId + "-" + instanceId;

        Map<String, String> labels = TraefikLabelBuilder.build(
                app.slug(), env.slug(), tenantId, config, i, ctx.generation());
        labels.putAll(ctx.prometheusLabels());

        Map<String, String> replicaEnvVars = new LinkedHashMap<>(ctx.baseEnvVars());
        replicaEnvVars.put("CAMELEER_AGENT_INSTANCEID", instanceId);

        String volumeName = jarDockerVolume != null && !jarDockerVolume.isBlank() ? jarDockerVolume : null;
        ContainerRequest request = new ContainerRequest(
                containerName, baseImage, ctx.jarPath(),
                volumeName, jarStoragePath,
                ctx.primaryNetwork(),
                ctx.additionalNets(),
                replicaEnvVars, labels,
                config.memoryLimitBytes(), config.memoryReserveBytes(),
                config.dockerCpuShares(), config.dockerCpuQuota(),
                config.exposedPorts(), agentHealthPort,
                "on-failure", 3,
                ctx.resolvedRuntimeType(), config.customArgs(), ctx.mainClass()
        );

        String containerId = orchestrator.startContainer(request);

        // Connect to additional networks after container is started
        for (String net : ctx.additionalNets()) {
            if (networkManager != null) {
                networkManager.connectContainer(containerId, net);
            }
        }

        orchestrator.startLogCapture(containerId, instanceId, app.slug(), env.slug(), tenantId);

        stateOut.put("index", i);
        stateOut.put("containerId", containerId);
        stateOut.put("containerName", containerName);
        stateOut.put("status", "STARTING");
        return containerId;
    }
    /** Persist the deployment snapshot and mark the deployment RUNNING.
     * Finalizes the deploy in a single place shared by all strategy paths. */
    private void persistSnapshotAndMarkRunning(DeployCtx ctx, String primaryContainerId) {
        Deployment deployment = ctx.deployment();
        ApplicationConfig agentConfig = applicationConfigRepository
                .findByApplicationAndEnvironment(ctx.app().slug(), ctx.env().slug())
                .orElse(null);
        List<String> snapshotSensitiveKeys = agentConfig != null ? agentConfig.getSensitiveKeys() : null;
        DeploymentConfigSnapshot snapshot = new DeploymentConfigSnapshot(
                deployment.appVersionId(),
                agentConfig,
                ctx.app().containerConfig(),
                snapshotSensitiveKeys);
        pgDeployRepo.saveDeployedConfigSnapshot(deployment.id(), snapshot);

        deploymentService.markRunning(deployment.id(), primaryContainerId);
        pgDeployRepo.updateDeployStage(deployment.id(), null);
        serverMetrics.recordDeploymentOutcome("RUNNING");
        serverMetrics.recordDeploymentDuration(ctx.deployStart());
    }

    public void stopDeployment(Deployment deployment) {
        pgDeployRepo.updateTargetState(deployment.id(), "STOPPED");
        deploymentRepository.updateStatus(deployment.id(), DeploymentStatus.STOPPING,
@@ -341,7 +544,10 @@ public class DeploymentExecutor {
        return envVars;
    }

-   private int waitForAnyHealthy(List<String> containerIds, int timeoutSeconds) {
+   /** Poll until all containers are healthy or the timeout expires. Returns
+    * the healthy count at return time — == ids.size() on full success, less
+    * if the timeout won. */
+   private int waitForAllHealthy(List<String> containerIds, int timeoutSeconds) {
        long deadline = System.currentTimeMillis() + (timeoutSeconds * 1000L);
        int lastHealthy = 0;
        while (System.currentTimeMillis() < deadline) {

@@ -403,6 +609,10 @@ public class DeploymentExecutor {
        map.put("runtimeType", config.runtimeType());
        map.put("customArgs", config.customArgs());
        map.put("extraNetworks", config.extraNetworks());
        map.put("externalRouting", config.externalRouting());
        if (config.certResolver() != null) {
            map.put("certResolver", config.certResolver());
        }
        return map;
    }
}
@@ -10,19 +10,28 @@ public final class TraefikLabelBuilder {
    private TraefikLabelBuilder() {}

    public static Map<String, String> build(String appSlug, String envSlug, String tenantId,
-                                           ResolvedContainerConfig config, int replicaIndex) {
+                                           ResolvedContainerConfig config, int replicaIndex,
+                                           String generation) {
        // Traefik router/service keys stay generation-agnostic so load balancing
        // spans old + new replicas during a blue/green overlap. instance-id and
        // the new generation label carry the per-deploy identity.
        String svc = envSlug + "-" + appSlug;
-       String instanceId = envSlug + "-" + appSlug + "-" + replicaIndex;
+       String instanceId = envSlug + "-" + appSlug + "-" + replicaIndex + "-" + generation;
        Map<String, String> labels = new LinkedHashMap<>();

-       labels.put("traefik.enable", "true");
        labels.put("managed-by", "cameleer-server");
        labels.put("cameleer.tenant", tenantId);
        labels.put("cameleer.app", appSlug);
        labels.put("cameleer.environment", envSlug);
        labels.put("cameleer.replica", String.valueOf(replicaIndex));
        labels.put("cameleer.generation", generation);
        labels.put("cameleer.instance-id", instanceId);

        if (!config.externalRouting()) {
            return labels;
        }

        labels.put("traefik.enable", "true");
        labels.put("traefik.http.services." + svc + ".loadbalancer.server.port",
                String.valueOf(config.appPort()));

@@ -46,7 +55,10 @@ public final class TraefikLabelBuilder {

        if (config.sslOffloading()) {
            labels.put("traefik.http.routers." + svc + ".tls", "true");
-           labels.put("traefik.http.routers." + svc + ".tls.certresolver", "default");
+           if (config.certResolver() != null && !config.certResolver().isBlank()) {
+               labels.put("traefik.http.routers." + svc + ".tls.certresolver",
+                       config.certResolver());
+           }
        }

        return labels;
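The naming scheme in the hunk above can be sketched standalone: the service key stays generation-agnostic while the instance id carries the generation. Class and method names here are illustrative, not part of the codebase:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of the label-naming scheme, not the real TraefikLabelBuilder.
public class LabelSketch {

    // Router/service key: identical across generations so Traefik keeps
    // load-balancing over old and new replicas during a blue/green overlap.
    static String serviceKey(String envSlug, String appSlug) {
        return envSlug + "-" + appSlug;
    }

    // Instance id: unique per replica *and* per deploy generation.
    static String instanceId(String envSlug, String appSlug, int replica, String generation) {
        return envSlug + "-" + appSlug + "-" + replica + "-" + generation;
    }

    public static void main(String[] args) {
        String svc = serviceKey("prod", "billing");
        Map<String, String> labels = new LinkedHashMap<>();
        labels.put("cameleer.generation", "gen-42");
        labels.put("cameleer.instance-id", instanceId("prod", "billing", 0, "gen-42"));
        labels.put("traefik.http.services." + svc + ".loadbalancer.server.port", "8080");
        labels.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```

Two replicas of different generations thus share one Traefik service key but never share an instance id.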
@@ -122,6 +122,14 @@ public class ClickHouseLogStore implements LogIndex {
        baseParams.add(request.instanceId());
    }

    if (!request.instanceIds().isEmpty()) {
        String placeholders = String.join(", ", Collections.nCopies(request.instanceIds().size(), "?"));
        baseConditions.add("instance_id IN (" + placeholders + ")");
        for (String id : request.instanceIds()) {
            baseParams.add(id);
        }
    }

    if (request.exchangeId() != null && !request.exchangeId().isEmpty()) {
        baseConditions.add("(exchange_id = ?" +
                " OR (mapContains(mdc, 'cameleer.exchangeId') AND mdc['cameleer.exchangeId'] = ?)" +

@@ -281,6 +289,14 @@ public class ClickHouseLogStore implements LogIndex {
        params.add(request.instanceId());
    }

    if (!request.instanceIds().isEmpty()) {
        String placeholders = String.join(", ", Collections.nCopies(request.instanceIds().size(), "?"));
        conditions.add("instance_id IN (" + placeholders + ")");
        for (String id : request.instanceIds()) {
            params.add(id);
        }
    }

    if (request.exchangeId() != null && !request.exchangeId().isEmpty()) {
        conditions.add("(exchange_id = ?" +
                " OR (mapContains(mdc, 'cameleer.exchangeId') AND mdc['cameleer.exchangeId'] = ?)" +
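The `IN (?, ?, …)` expansion used in both hunks can be exercised on its own. The class and method names below are illustrative:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative: build a parameter-bound IN clause for a variable-length id list.
public class InClauseSketch {

    static String inClause(String column, int count) {
        // One "?" per value keeps every id parameter-bound — no string
        // concatenation of user input into the SQL text.
        String placeholders = String.join(", ", Collections.nCopies(count, "?"));
        return column + " IN (" + placeholders + ")";
    }

    public static void main(String[] args) {
        List<String> ids = List.of("a-1", "a-2", "a-3");
        List<Object> params = new ArrayList<>(ids);
        System.out.println(inClause("instance_id", ids.size()) + "  params=" + params);
    }
}
```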
@@ -1,6 +1,7 @@
package com.cameleer.server.app.search;

import com.cameleer.server.core.alerting.AlertMatchSpec;
import com.cameleer.server.core.search.AttributeFilter;
import com.cameleer.server.core.search.ExecutionSummary;
import com.cameleer.server.core.search.SearchRequest;
import com.cameleer.server.core.search.SearchResult;

@@ -256,6 +257,23 @@ public class ClickHouseSearchIndex implements SearchIndex {
        params.add(likeTerm);
    }

    // Structured attribute filters. Keys were validated at AttributeFilter construction
    // time against ^[a-zA-Z0-9._-]+$ so they are safe to single-quote-inline; the JSON path
    // argument of JSONExtractString does not accept a ? placeholder in ClickHouse JDBC
    // (same constraint as countExecutionsForAlerting below). Values are parameter-bound.
    for (AttributeFilter filter : request.attributeFilters()) {
        String escapedKey = filter.key().replace("'", "\\'");
        if (filter.isKeyOnly()) {
            conditions.add("JSONHas(attributes, '" + escapedKey + "')");
        } else if (filter.isWildcard()) {
            conditions.add("JSONExtractString(attributes, '" + escapedKey + "') LIKE ?");
            params.add(filter.toLikePattern());
        } else {
            conditions.add("JSONExtractString(attributes, '" + escapedKey + "') = ?");
            params.add(filter.value());
        }
    }

    return String.join(" AND ", conditions);
}
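The split the comment describes, regex-validated keys inlined into the SQL text while values stay behind `?` placeholders, can be sketched standalone. Names below are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Illustrative: keys are regex-validated then inlined; values stay parameter-bound.
public class AttributeFilterSketch {

    private static final Pattern SAFE_KEY = Pattern.compile("^[a-zA-Z0-9._-]+$");

    static String equalsCondition(String key, String value, List<Object> params) {
        if (!SAFE_KEY.matcher(key).matches()) {
            throw new IllegalArgumentException("unsafe attribute key: " + key);
        }
        params.add(value); // the value goes through a ? placeholder
        return "JSONExtractString(attributes, '" + key + "') = ?";
    }

    public static void main(String[] args) {
        List<Object> params = new ArrayList<>();
        System.out.println(equalsCondition("order.status", "SHIPPED", params) + "  params=" + params);
    }
}
```

The regex rejects quotes and whitespace outright, which is why inlining the key is safe even before the belt-and-suspenders quote escaping the real code adds.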
@@ -16,8 +16,6 @@ import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.sql.Timestamp;
import java.time.Instant;
-import java.util.ArrayList;
-import java.util.Collections;
import java.util.HashMap;
import java.util.HexFormat;
import java.util.List;

@@ -57,6 +55,12 @@ public class ClickHouseDiagramStore implements DiagramStore {
        ORDER BY created_at DESC LIMIT 1
        """;

    private static final String SELECT_HASH_FOR_APP_ROUTE = """
        SELECT content_hash FROM route_diagrams
        WHERE tenant_id = ? AND application_id = ? AND environment = ? AND route_id = ?
        ORDER BY created_at DESC LIMIT 1
        """;

    private static final String SELECT_DEFINITIONS_FOR_APP = """
        SELECT DISTINCT route_id, definition FROM route_diagrams
        WHERE tenant_id = ? AND application_id = ? AND environment = ?

@@ -68,6 +72,8 @@ public class ClickHouseDiagramStore implements DiagramStore {

    // (routeId + "\0" + instanceId) → contentHash
    private final ConcurrentHashMap<String, String> hashCache = new ConcurrentHashMap<>();
    // (applicationId + "\0" + environment + "\0" + routeId) → most recent contentHash
    private final ConcurrentHashMap<String, String> appRouteHashCache = new ConcurrentHashMap<>();
    // contentHash → deserialized RouteGraph
    private final ConcurrentHashMap<String, RouteGraph> graphCache = new ConcurrentHashMap<>();

@@ -92,12 +98,37 @@ public class ClickHouseDiagramStore implements DiagramStore {
        } catch (Exception e) {
            log.warn("Failed to warm diagram hash cache — lookups will fall back to ClickHouse: {}", e.getMessage());
        }

        try {
            jdbc.query(
                    "SELECT application_id, environment, route_id, " +
                    "argMax(content_hash, created_at) AS content_hash " +
                    "FROM route_diagrams WHERE tenant_id = ? " +
                    "GROUP BY application_id, environment, route_id",
                    rs -> {
                        String key = appRouteCacheKey(
                                rs.getString("application_id"),
                                rs.getString("environment"),
                                rs.getString("route_id"));
                        appRouteHashCache.put(key, rs.getString("content_hash"));
                    },
                    tenantId);
            log.info("Diagram app-route cache warmed: {} entries", appRouteHashCache.size());
        } catch (Exception e) {
            log.warn("Failed to warm diagram app-route cache — lookups will fall back to ClickHouse: {}", e.getMessage());
        }
    }

    private static String cacheKey(String routeId, String instanceId) {
        return routeId + "\0" + instanceId;
    }

    private static String appRouteCacheKey(String applicationId, String environment, String routeId) {
        return (applicationId != null ? applicationId : "") + "\0"
                + (environment != null ? environment : "") + "\0"
                + (routeId != null ? routeId : "");
    }

    @Override
    public void store(TaggedDiagram diagram) {
        try {

@@ -122,6 +153,7 @@ public class ClickHouseDiagramStore implements DiagramStore {

        // Update caches
        hashCache.put(cacheKey(routeId, agentId), contentHash);
        appRouteHashCache.put(appRouteCacheKey(applicationId, environment, routeId), contentHash);
        graphCache.put(contentHash, graph);

        log.debug("Stored diagram for route={} agent={} with hash={}", routeId, agentId, contentHash);

@@ -170,33 +202,29 @@ public class ClickHouseDiagramStore implements DiagramStore {
    }

    @Override
-   public Optional<String> findContentHashForRouteByAgents(String routeId, List<String> agentIds) {
-       if (agentIds == null || agentIds.isEmpty()) {
+   public Optional<String> findLatestContentHashForAppRoute(String applicationId,
+                                                            String routeId,
+                                                            String environment) {
+       if (applicationId == null || applicationId.isBlank()
+               || routeId == null || routeId.isBlank()
+               || environment == null || environment.isBlank()) {
            return Optional.empty();
        }

        // Try cache first — return first hit
-       for (String agentId : agentIds) {
-           String cached = hashCache.get(cacheKey(routeId, agentId));
-           if (cached != null) {
-               return Optional.of(cached);
-           }
-       }
+       String key = appRouteCacheKey(applicationId, environment, routeId);
+       String cached = appRouteHashCache.get(key);
+       if (cached != null) {
+           return Optional.of(cached);
+       }

        // Fall back to ClickHouse
-       String placeholders = String.join(", ", Collections.nCopies(agentIds.size(), "?"));
-       String sql = "SELECT content_hash FROM route_diagrams " +
-               "WHERE tenant_id = ? AND route_id = ? AND instance_id IN (" + placeholders + ") " +
-               "ORDER BY created_at DESC LIMIT 1";
-       var params = new ArrayList<Object>();
-       params.add(tenantId);
-       params.add(routeId);
-       params.addAll(agentIds);
-       List<Map<String, Object>> rows = jdbc.queryForList(sql, params.toArray());
+       List<Map<String, Object>> rows = jdbc.queryForList(
+               SELECT_HASH_FOR_APP_ROUTE, tenantId, applicationId, environment, routeId);
        if (rows.isEmpty()) {
            return Optional.empty();
        }
-       return Optional.of((String) rows.get(0).get("content_hash"));
+       String hash = (String) rows.get(0).get("content_hash");
+       appRouteHashCache.put(key, hash);
+       return Optional.of(hash);
    }

    @Override
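The composite cache key above joins its parts with `\0`, a character that cannot appear in the slugs (an assumption about the slug format), so distinct (app, environment, route) triples never collide the way a plain `-` join could. A standalone sketch with an illustrative class name:

```java
// Illustrative: NUL-separated composite keys avoid the ambiguity a "-" join
// would allow, e.g. ("a-b", "c") vs ("a", "b-c").
public class CacheKeySketch {

    static String appRouteKey(String applicationId, String environment, String routeId) {
        // Null-safe: each missing part contributes an empty segment.
        return (applicationId != null ? applicationId : "") + "\0"
                + (environment != null ? environment : "") + "\0"
                + (routeId != null ? routeId : "");
    }

    public static void main(String[] args) {
        // Render the NUL separators visibly for the demo.
        System.out.println(appRouteKey("billing", "prod", "route-1").replace('\0', '|'));
    }
}
```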
@@ -0,0 +1,408 @@
package com.cameleer.server.app.storage;

import com.cameleer.server.core.storage.ServerMetricsQueryStore;
import com.cameleer.server.core.storage.model.ServerInstanceInfo;
import com.cameleer.server.core.storage.model.ServerMetricCatalogEntry;
import com.cameleer.server.core.storage.model.ServerMetricPoint;
import com.cameleer.server.core.storage.model.ServerMetricQueryRequest;
import com.cameleer.server.core.storage.model.ServerMetricQueryResponse;
import com.cameleer.server.core.storage.model.ServerMetricSeries;
import org.springframework.jdbc.core.JdbcTemplate;

import java.sql.Array;
import java.sql.Timestamp;
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;
import java.util.regex.Pattern;

/**
 * ClickHouse-backed {@link ServerMetricsQueryStore}.
 *
 * <p>Safety rules for every query:
 * <ul>
 *   <li>tenant_id always bound as a parameter — no cross-tenant reads.</li>
 *   <li>Identifier-like inputs (metric name, statistic, tag keys,
 *       aggregation, mode) are regex-validated. Tag keys flow through the
 *       query as JDBC parameter-bound values of {@code tags[?]} map lookups,
 *       so even with a "safe" regex they cannot inject SQL.</li>
 *   <li>Literal values ({@code from}, {@code to}, tag filter values,
 *       server_instance_id allow-list) always go through {@code ?}.</li>
 *   <li>The time range is capped at {@link #MAX_RANGE}.</li>
 *   <li>Result cardinality is capped at {@link #MAX_SERIES} series.</li>
 * </ul>
 */
public class ClickHouseServerMetricsQueryStore implements ServerMetricsQueryStore {

    private static final Pattern SAFE_IDENTIFIER = Pattern.compile("^[a-zA-Z0-9._]+$");
    private static final Pattern SAFE_STATISTIC = Pattern.compile("^[a-z_]+$");

    private static final Set<String> AGGREGATIONS = Set.of("avg", "sum", "max", "min", "latest");
    private static final Set<String> MODES = Set.of("raw", "delta");

    /** Maximum {@code to - from} window accepted by the API. */
    static final Duration MAX_RANGE = Duration.ofDays(31);

    /** Clamp bounds and default for {@code stepSeconds}. */
    static final int MIN_STEP = 10;
    static final int MAX_STEP = 3600;
    static final int DEFAULT_STEP = 60;

    /** Defence against group-by explosion — limit the series count per response. */
    static final int MAX_SERIES = 500;

    private final String tenantId;
    private final JdbcTemplate jdbc;

    public ClickHouseServerMetricsQueryStore(String tenantId, JdbcTemplate jdbc) {
        this.tenantId = tenantId;
        this.jdbc = jdbc;
    }

    // ── catalog ─────────────────────────────────────────────────────────

    @Override
    public List<ServerMetricCatalogEntry> catalog(Instant from, Instant to) {
        requireRange(from, to);
        String sql = """
                SELECT
                    metric_name,
                    any(metric_type) AS metric_type,
                    arraySort(groupUniqArray(statistic)) AS statistics,
                    arraySort(arrayDistinct(arrayFlatten(groupArray(mapKeys(tags))))) AS tag_keys
                FROM server_metrics
                WHERE tenant_id = ?
                AND collected_at >= ?
                AND collected_at < ?
                GROUP BY metric_name
                ORDER BY metric_name
                """;
        return jdbc.query(sql, (rs, n) -> new ServerMetricCatalogEntry(
                rs.getString("metric_name"),
                rs.getString("metric_type"),
                arrayToStringList(rs.getArray("statistics")),
                arrayToStringList(rs.getArray("tag_keys"))
        ), tenantId, Timestamp.from(from), Timestamp.from(to));
    }

    // ── instances ───────────────────────────────────────────────────────

    @Override
    public List<ServerInstanceInfo> listInstances(Instant from, Instant to) {
        requireRange(from, to);
        String sql = """
                SELECT
                    server_instance_id,
                    min(collected_at) AS first_seen,
                    max(collected_at) AS last_seen
                FROM server_metrics
                WHERE tenant_id = ?
                AND collected_at >= ?
                AND collected_at < ?
                GROUP BY server_instance_id
                ORDER BY last_seen DESC
                """;
        return jdbc.query(sql, (rs, n) -> new ServerInstanceInfo(
                rs.getString("server_instance_id"),
                rs.getTimestamp("first_seen").toInstant(),
                rs.getTimestamp("last_seen").toInstant()
        ), tenantId, Timestamp.from(from), Timestamp.from(to));
    }

    // ── query ───────────────────────────────────────────────────────────

    @Override
    public ServerMetricQueryResponse query(ServerMetricQueryRequest request) {
        if (request == null) throw new IllegalArgumentException("request is required");
        String metric = requireSafeIdentifier(request.metric(), "metric");
        requireRange(request.from(), request.to());

        String aggregation = request.aggregation() != null ? request.aggregation().toLowerCase() : "avg";
        if (!AGGREGATIONS.contains(aggregation)) {
            throw new IllegalArgumentException("aggregation must be one of " + AGGREGATIONS);
        }

        String mode = request.mode() != null ? request.mode().toLowerCase() : "raw";
        if (!MODES.contains(mode)) {
            throw new IllegalArgumentException("mode must be one of " + MODES);
        }

        int step = request.stepSeconds() != null ? request.stepSeconds() : DEFAULT_STEP;
        if (step < MIN_STEP || step > MAX_STEP) {
            throw new IllegalArgumentException(
                    "stepSeconds must be in [" + MIN_STEP + "," + MAX_STEP + "]");
        }

        String statistic = request.statistic();
        if (statistic != null && !SAFE_STATISTIC.matcher(statistic).matches()) {
            throw new IllegalArgumentException("statistic contains unsafe characters");
        }

        List<String> groupByTags = request.groupByTags() != null
                ? request.groupByTags() : List.of();
        for (String t : groupByTags) requireSafeIdentifier(t, "groupByTag");

        Map<String, String> filterTags = request.filterTags() != null
                ? request.filterTags() : Map.of();
        for (String t : filterTags.keySet()) requireSafeIdentifier(t, "filterTag key");

        List<String> instanceAllowList = request.serverInstanceIds() != null
                ? request.serverInstanceIds() : List.of();

        boolean isDelta = "delta".equals(mode);
        boolean isMean = "mean".equals(statistic);

        String sql = isDelta
                ? buildDeltaSql(step, groupByTags, filterTags, instanceAllowList, statistic, isMean)
                : buildRawSql(step, groupByTags, filterTags, instanceAllowList,
                        statistic, aggregation, isMean);

        List<Object> params = buildParams(groupByTags, metric, statistic, isMean,
                request.from(), request.to(),
                filterTags, instanceAllowList);

        List<Row> rows = jdbc.query(sql, (rs, n) -> {
            int idx = 1;
            Instant bucket = rs.getTimestamp(idx++).toInstant();
            List<String> tagValues = new ArrayList<>(groupByTags.size());
            for (int g = 0; g < groupByTags.size(); g++) {
                tagValues.add(rs.getString(idx++));
            }
            double value = rs.getDouble(idx);
            return new Row(bucket, tagValues, value);
        }, params.toArray());

        return assembleSeries(rows, metric, statistic, aggregation, mode, step, groupByTags);
    }

    // ── SQL builders ────────────────────────────────────────────────────

    /**
     * Builds a single-pass SQL for raw mode:
     * <pre>{@code
     * SELECT bucket, tag0, ..., <agg>(metric_value) AS value
     * FROM server_metrics WHERE ...
     * GROUP BY bucket, tag0, ...
     * ORDER BY bucket, tag0, ...
     * }</pre>
     * For {@code statistic=mean}, replaces the aggregate with
     * {@code sumIf(value, statistic IN ('total','total_time')) / nullIf(sumIf(value, statistic='count'), 0)}.
     */
    private String buildRawSql(int step, List<String> groupByTags,
                               Map<String, String> filterTags,
                               List<String> instanceAllowList,
                               String statistic, String aggregation, boolean isMean) {
        StringBuilder s = new StringBuilder(512);
        s.append("SELECT\n toDateTime64(toStartOfInterval(collected_at, INTERVAL ")
                .append(step).append(" SECOND), 3) AS bucket");
        for (int i = 0; i < groupByTags.size(); i++) {
            s.append(",\n tags[?] AS tag").append(i);
        }
        s.append(",\n ").append(isMean ? meanExpr() : scalarAggExpr(aggregation))
                .append(" AS value\nFROM server_metrics\n");
        appendWhereClause(s, filterTags, instanceAllowList, statistic, isMean);
        s.append("GROUP BY bucket");
        for (int i = 0; i < groupByTags.size(); i++) s.append(", tag").append(i);
        s.append("\nORDER BY bucket");
        for (int i = 0; i < groupByTags.size(); i++) s.append(", tag").append(i);
        return s.toString();
    }

    /**
     * Builds a three-level SQL for delta mode. Inner fills one
     * (bucket, instance, tag-group) row via {@code max(metric_value)};
     * middle computes positive-clipped per-instance differences via a
     * window function; outer sums across instances.
     */
    private String buildDeltaSql(int step, List<String> groupByTags,
                                 Map<String, String> filterTags,
                                 List<String> instanceAllowList,
                                 String statistic, boolean isMean) {
        StringBuilder s = new StringBuilder(1024);
        s.append("SELECT bucket");
        for (int i = 0; i < groupByTags.size(); i++) s.append(", tag").append(i);
        s.append(", sum(delta) AS value FROM (\n");

        // Middle: per-instance positive-clipped delta using window.
        s.append(" SELECT bucket");
        for (int i = 0; i < groupByTags.size(); i++) s.append(", tag").append(i);
        s.append(", server_instance_id, greatest(0, value - coalesce(any(value) OVER (")
                .append("PARTITION BY server_instance_id");
        for (int i = 0; i < groupByTags.size(); i++) s.append(", tag").append(i);
        s.append(" ORDER BY bucket ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING), value)) AS delta FROM (\n");

        // Inner: one representative value per (bucket, instance, tag-group).
        s.append(" SELECT\n toDateTime64(toStartOfInterval(collected_at, INTERVAL ")
                .append(step).append(" SECOND), 3) AS bucket,\n server_instance_id");
        for (int i = 0; i < groupByTags.size(); i++) {
            s.append(",\n tags[?] AS tag").append(i);
        }
        s.append(",\n ").append(isMean ? meanExpr() : "max(metric_value)")
                .append(" AS value\n FROM server_metrics\n");
        appendWhereClause(s, filterTags, instanceAllowList, statistic, isMean);
        s.append(" GROUP BY bucket, server_instance_id");
        for (int i = 0; i < groupByTags.size(); i++) s.append(", tag").append(i);
        s.append("\n ) AS bucketed\n) AS deltas\n");

        s.append("GROUP BY bucket");
        for (int i = 0; i < groupByTags.size(); i++) s.append(", tag").append(i);
        s.append("\nORDER BY bucket");
        for (int i = 0; i < groupByTags.size(); i++) s.append(", tag").append(i);
        return s.toString();
    }
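The positive-clipped per-instance delta that the middle level computes can be mirrored in plain Java. This is a sketch of the arithmetic only, not the store itself: each bucket's value minus the previous bucket's, floored at zero so counter resets never produce negative rates.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative: positive-clipped deltas over a counter series, mirroring
// greatest(0, value - coalesce(lagged value, value)) in the delta-mode SQL.
public class DeltaSketch {

    static List<Double> positiveClippedDeltas(List<Double> values) {
        List<Double> deltas = new ArrayList<>(values.size());
        Double previous = null;
        for (double v : values) {
            // First bucket has no predecessor: SQL's coalesce(..., value)
            // makes that delta 0; a counter reset clips to 0 as well.
            double prev = previous != null ? previous : v;
            deltas.add(Math.max(0.0, v - prev));
            previous = v;
        }
        return deltas;
    }

    public static void main(String[] args) {
        // 100 → 150 → 30 (reset) → 60
        System.out.println(positiveClippedDeltas(List.of(100.0, 150.0, 30.0, 60.0)));
    }
}
```

The outer `sum(delta)` level then adds these per-instance deltas together, so a restart on one instance does not drag the fleet-wide rate negative.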
    /**
     * WHERE clause shared by both raw and delta SQL shapes. Appended at the
     * correct indent under either the single {@code FROM server_metrics}
     * (raw) or the innermost one (delta).
     */
    private void appendWhereClause(StringBuilder s, Map<String, String> filterTags,
                                   List<String> instanceAllowList,
                                   String statistic, boolean isMean) {
        s.append(" WHERE tenant_id = ?\n")
                .append(" AND metric_name = ?\n");
        if (isMean) {
            s.append(" AND statistic IN ('count', 'total', 'total_time')\n");
        } else if (statistic != null) {
            s.append(" AND statistic = ?\n");
        }
        s.append(" AND collected_at >= ?\n")
                .append(" AND collected_at < ?\n");
        for (int i = 0; i < filterTags.size(); i++) {
            s.append(" AND tags[?] = ?\n");
        }
        if (!instanceAllowList.isEmpty()) {
            s.append(" AND server_instance_id IN (")
                    .append("?,".repeat(instanceAllowList.size() - 1)).append("?)\n");
        }
    }

    /**
     * SQL-positional params for both raw and delta queries (same relative
     * order because the WHERE clause is emitted by {@link #appendWhereClause}
     * only once, with the {@code tags[?]} select-list placeholders appearing
     * earlier in the SQL text).
     */
    private List<Object> buildParams(List<String> groupByTags, String metric,
                                     String statistic, boolean isMean,
                                     Instant from, Instant to,
                                     Map<String, String> filterTags,
                                     List<String> instanceAllowList) {
        List<Object> params = new ArrayList<>();
        // SELECT-list tags[?] placeholders
        params.addAll(groupByTags);
        // WHERE
        params.add(tenantId);
        params.add(metric);
        if (!isMean && statistic != null) params.add(statistic);
        params.add(Timestamp.from(from));
        params.add(Timestamp.from(to));
        for (Map.Entry<String, String> e : filterTags.entrySet()) {
            params.add(e.getKey());
            params.add(e.getValue());
        }
        params.addAll(instanceAllowList);
        return params;
    }

    private static String scalarAggExpr(String aggregation) {
        return switch (aggregation) {
            case "avg" -> "avg(metric_value)";
            case "sum" -> "sum(metric_value)";
            case "max" -> "max(metric_value)";
            case "min" -> "min(metric_value)";
            case "latest" -> "argMax(metric_value, collected_at)";
            default -> throw new IllegalStateException("unreachable: " + aggregation);
        };
    }

    private static String meanExpr() {
        return "sumIf(metric_value, statistic IN ('total', 'total_time'))"
                + " / nullIf(sumIf(metric_value, statistic = 'count'), 0)";
    }

    // ── response assembly ───────────────────────────────────────────────

    private ServerMetricQueryResponse assembleSeries(
            List<Row> rows, String metric, String statistic,
            String aggregation, String mode, int step, List<String> groupByTags) {

        Map<List<String>, List<ServerMetricPoint>> bySignature = new LinkedHashMap<>();
        for (Row r : rows) {
            if (Double.isNaN(r.value) || Double.isInfinite(r.value)) continue;
            bySignature.computeIfAbsent(r.tagValues, k -> new ArrayList<>())
                    .add(new ServerMetricPoint(r.bucket, r.value));
        }

        if (bySignature.size() > MAX_SERIES) {
            throw new IllegalArgumentException(
                    "query produced " + bySignature.size()
                            + " series; reduce groupByTags or tighten filterTags (max "
                            + MAX_SERIES + ")");
        }

        List<ServerMetricSeries> series = new ArrayList<>(bySignature.size());
        for (Map.Entry<List<String>, List<ServerMetricPoint>> e : bySignature.entrySet()) {
            Map<String, String> tags = new LinkedHashMap<>();
            for (int i = 0; i < groupByTags.size(); i++) {
                tags.put(groupByTags.get(i), e.getKey().get(i));
            }
            series.add(new ServerMetricSeries(Collections.unmodifiableMap(tags), e.getValue()));
        }

        return new ServerMetricQueryResponse(metric,
                statistic != null ? statistic : "value",
                aggregation, mode, step, series);
    }

    // ── helpers ─────────────────────────────────────────────────────────

    private static void requireRange(Instant from, Instant to) {
        if (from == null || to == null) {
            throw new IllegalArgumentException("from and to are required");
        }
        if (!from.isBefore(to)) {
            throw new IllegalArgumentException("from must be strictly before to");
        }
        if (Duration.between(from, to).compareTo(MAX_RANGE) > 0) {
            throw new IllegalArgumentException(
                    "time range exceeds maximum of " + MAX_RANGE.toDays() + " days");
        }
    }

    private static String requireSafeIdentifier(String value, String field) {
        if (value == null || value.isBlank()) {
            throw new IllegalArgumentException(field + " is required");
        }
        if (!SAFE_IDENTIFIER.matcher(value).matches()) {
            throw new IllegalArgumentException(
                    field + " contains unsafe characters (allowed: [a-zA-Z0-9._])");
        }
        return value;
    }
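The identifier gate can be exercised on its own; the regex is copied from the class above, while the wrapper class and method names are illustrative:

```java
import java.util.regex.Pattern;

// Illustrative: regex gate for identifier-like inputs, applied before any
// such value gets anywhere near generated SQL text.
public class IdentifierGate {

    private static final Pattern SAFE_IDENTIFIER = Pattern.compile("^[a-zA-Z0-9._]+$");

    static boolean isSafe(String value) {
        return value != null && !value.isBlank() && SAFE_IDENTIFIER.matcher(value).matches();
    }

    public static void main(String[] args) {
        System.out.println(isSafe("jvm.memory.used"));       // dots are allowed
        System.out.println(isSafe("x'; DROP TABLE t; --"));  // quotes and spaces are not
    }
}
```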
    private static List<String> arrayToStringList(Array array) {
        if (array == null) return List.of();
        try {
            Object[] values = (Object[]) array.getArray();
            Set<String> sorted = new TreeSet<>();
            for (Object v : values) {
                if (v != null) sorted.add(v.toString());
            }
            return List.copyOf(sorted);
        } catch (Exception e) {
            return List.of();
        } finally {
            try { array.free(); } catch (Exception ignore) { }
        }
    }

    private record Row(Instant bucket, List<String> tagValues, double value) {
    }
}
@@ -0,0 +1,46 @@
package com.cameleer.server.app.storage;

import com.cameleer.server.core.storage.ServerMetricsStore;
import com.cameleer.server.core.storage.model.ServerMetricSample;
import org.springframework.jdbc.core.JdbcTemplate;

import java.sql.Timestamp;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ClickHouseServerMetricsStore implements ServerMetricsStore {

    private final JdbcTemplate jdbc;

    public ClickHouseServerMetricsStore(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    @Override
    public void insertBatch(List<ServerMetricSample> samples) {
        if (samples.isEmpty()) return;

        jdbc.batchUpdate("""
                INSERT INTO server_metrics
                (tenant_id, collected_at, server_instance_id, metric_name,
                metric_type, statistic, metric_value, tags)
                VALUES (?, ?, ?, ?, ?, ?, ?, ?)
                """,
                samples.stream().map(s -> new Object[]{
                        s.tenantId(),
                        Timestamp.from(s.collectedAt()),
                        s.serverInstanceId(),
                        s.metricName(),
                        s.metricType(),
                        s.statistic(),
                        s.value(),
                        tagsToClickHouseMap(s.tags())
                }).toList());
    }

    private Map<String, String> tagsToClickHouseMap(Map<String, String> tags) {
        if (tags == null || tags.isEmpty()) return new HashMap<>();
        return new HashMap<>(tags);
    }
}
@@ -1,6 +1,7 @@
package com.cameleer.server.app.storage;

import com.cameleer.server.core.runtime.Deployment;
+import com.cameleer.server.core.runtime.DeploymentConfigSnapshot;
import com.cameleer.server.core.runtime.DeploymentRepository;
import com.cameleer.server.core.runtime.DeploymentStatus;
import com.fasterxml.jackson.core.type.TypeReference;
@@ -21,7 +22,7 @@ public class PostgresDeploymentRepository implements DeploymentRepository {
    private static final String SELECT_COLS =
            "id, app_id, app_version_id, environment_id, status, target_state, deployment_strategy, " +
            "replica_states, deploy_stage, container_id, container_name, error_message, " +
-           "resolved_config, deployed_at, stopped_at, created_at";
+           "resolved_config, deployed_config_snapshot, deployed_at, stopped_at, created_at, created_by";

    private final JdbcTemplate jdbc;
    private final ObjectMapper objectMapper;
@@ -62,6 +63,16 @@ public class PostgresDeploymentRepository implements DeploymentRepository {
        return results.isEmpty() ? Optional.empty() : Optional.of(results.get(0));
    }

    @Override
    public Optional<Deployment> findActiveByAppIdAndEnvironmentIdExcluding(UUID appId, UUID environmentId, UUID excludeDeploymentId) {
        var results = jdbc.query(
                "SELECT " + SELECT_COLS + " FROM deployments WHERE app_id = ? AND environment_id = ? " +
                "AND status IN ('STARTING', 'RUNNING', 'DEGRADED') AND id <> ? " +
                "ORDER BY created_at DESC LIMIT 1",
                (rs, rowNum) -> mapRow(rs), appId, environmentId, excludeDeploymentId);
        return results.isEmpty() ? Optional.empty() : Optional.of(results.get(0));
    }

    public List<Deployment> findByStatus(List<DeploymentStatus> statuses) {
        String placeholders = String.join(",", statuses.stream().map(s -> "'" + s.name() + "'").toList());
        return jdbc.query(
@@ -70,10 +81,10 @@ public class PostgresDeploymentRepository implements DeploymentRepository {
    }

    @Override
-   public UUID create(UUID appId, UUID appVersionId, UUID environmentId, String containerName) {
+   public UUID create(UUID appId, UUID appVersionId, UUID environmentId, String containerName, String createdBy) {
        UUID id = UUID.randomUUID();
-       jdbc.update("INSERT INTO deployments (id, app_id, app_version_id, environment_id, container_name) VALUES (?, ?, ?, ?, ?)",
-               id, appId, appVersionId, environmentId, containerName);
+       jdbc.update("INSERT INTO deployments (id, app_id, app_version_id, environment_id, container_name, created_by) VALUES (?, ?, ?, ?, ?, ?)",
+               id, appId, appVersionId, environmentId, containerName, createdBy);
        return id;
    }
@@ -115,8 +126,8 @@ public class PostgresDeploymentRepository implements DeploymentRepository {
    }

    @Override
-   public void deleteTerminalByAppAndEnvironment(UUID appId, UUID environmentId) {
-       jdbc.update("DELETE FROM deployments WHERE app_id = ? AND environment_id = ? AND status IN ('STOPPED', 'FAILED')",
+   public void deleteFailedByAppAndEnvironment(UUID appId, UUID environmentId) {
+       jdbc.update("DELETE FROM deployments WHERE app_id = ? AND environment_id = ? AND status = 'FAILED'",
                appId, environmentId);
    }
@@ -129,6 +140,27 @@ public class PostgresDeploymentRepository implements DeploymentRepository {
        }
    }

    public void saveDeployedConfigSnapshot(UUID id, DeploymentConfigSnapshot snapshot) {
        try {
            String json = snapshot != null ? objectMapper.writeValueAsString(snapshot) : null;
            jdbc.update("UPDATE deployments SET deployed_config_snapshot = ?::jsonb WHERE id = ?", json, id);
        } catch (Exception e) {
            throw new RuntimeException("Failed to serialize deployed_config_snapshot", e);
        }
    }

    public Optional<Deployment> findLatestSuccessfulByAppAndEnv(UUID appId, UUID envId) {
        // DEGRADED deploys also carry a snapshot (executor writes before the RUNNING/DEGRADED
        // split), and represent a config that reached COMPLETE stage — restorable for the user.
        var results = jdbc.query(
                "SELECT " + SELECT_COLS + " FROM deployments "
                + "WHERE app_id = ? AND environment_id = ? "
                + "AND status IN ('RUNNING', 'DEGRADED') AND deployed_config_snapshot IS NOT NULL "
                + "ORDER BY deployed_at DESC NULLS LAST LIMIT 1",
                (rs, rowNum) -> mapRow(rs), appId, envId);
        return results.isEmpty() ? Optional.empty() : Optional.of(results.get(0));
    }

    public Optional<Deployment> findByContainerId(String containerId) {
        var results = jdbc.query(
                "SELECT " + SELECT_COLS + " FROM deployments WHERE replica_states::text LIKE ? " +
@@ -158,6 +190,15 @@ public class PostgresDeploymentRepository implements DeploymentRepository {
                throw new SQLException("Failed to deserialize resolved_config", e);
            }
        }
        DeploymentConfigSnapshot deployedConfigSnapshot = null;
        String snapshotJson = rs.getString("deployed_config_snapshot");
        if (snapshotJson != null) {
            try {
                deployedConfigSnapshot = objectMapper.readValue(snapshotJson, DeploymentConfigSnapshot.class);
            } catch (Exception e) {
                throw new SQLException("Failed to deserialize deployed_config_snapshot", e);
            }
        }
        return new Deployment(
                UUID.fromString(rs.getString("id")),
                UUID.fromString(rs.getString("app_id")),
@@ -172,9 +213,11 @@ public class PostgresDeploymentRepository implements DeploymentRepository {
                rs.getString("container_name"),
                rs.getString("error_message"),
                resolvedConfig,
                deployedConfigSnapshot,
                deployedAt != null ? deployedAt.toInstant() : null,
                stoppedAt != null ? stoppedAt.toInstant() : null,
-               rs.getTimestamp("created_at").toInstant()
+               rs.getTimestamp("created_at").toInstant(),
+               rs.getString("created_by")
        );
    }
}
@@ -55,6 +55,7 @@ cameleer:
    routingmode: ${CAMELEER_SERVER_RUNTIME_ROUTINGMODE:path}
    routingdomain: ${CAMELEER_SERVER_RUNTIME_ROUTINGDOMAIN:localhost}
    serverurl: ${CAMELEER_SERVER_RUNTIME_SERVERURL:}
    certresolver: ${CAMELEER_SERVER_RUNTIME_CERTRESOLVER:}
    jardockervolume: ${CAMELEER_SERVER_RUNTIME_JARDOCKERVOLUME:}
  indexer:
    debouncems: ${CAMELEER_SERVER_INDEXER_DEBOUNCEMS:2000}
@@ -111,6 +112,10 @@ cameleer:
    url: ${CAMELEER_SERVER_CLICKHOUSE_URL:jdbc:clickhouse://localhost:8123/cameleer}
    username: ${CAMELEER_SERVER_CLICKHOUSE_USERNAME:default}
    password: ${CAMELEER_SERVER_CLICKHOUSE_PASSWORD:}
  self-metrics:
    enabled: ${CAMELEER_SERVER_SELFMETRICS_ENABLED:true}
    interval-ms: ${CAMELEER_SERVER_SELFMETRICS_INTERVALMS:60000}
    instance-id: ${CAMELEER_SERVER_INSTANCE_ID:}

springdoc:
  api-docs:
@@ -401,6 +401,29 @@ CREATE TABLE IF NOT EXISTS route_catalog (
ENGINE = ReplacingMergeTree(last_seen)
ORDER BY (tenant_id, environment, application_id, route_id);

-- ── Server Self-Metrics ────────────────────────────────────────────────
-- Periodic snapshot of the server's own Micrometer registry (written by
-- ServerMetricsSnapshotScheduler). No `environment` column — the server
-- straddles environments. `statistic` distinguishes Timer/DistributionSummary
-- sub-measurements (count, total_time, max, mean) from plain counter/gauge values.

CREATE TABLE IF NOT EXISTS server_metrics (
    tenant_id           LowCardinality(String) DEFAULT 'default',
    collected_at        DateTime64(3),
    server_instance_id  LowCardinality(String),
    metric_name         LowCardinality(String),
    metric_type         LowCardinality(String),
    statistic           LowCardinality(String) DEFAULT 'value',
    metric_value        Float64,
    tags                Map(String, String) DEFAULT map(),
    server_received_at  DateTime64(3) DEFAULT now64(3)
)
ENGINE = MergeTree()
PARTITION BY (tenant_id, toYYYYMM(collected_at))
ORDER BY (tenant_id, collected_at, server_instance_id, metric_name, statistic)
TTL toDateTime(collected_at) + INTERVAL 90 DAY DELETE
SETTINGS index_granularity = 8192;

-- insert_id tiebreak for keyset pagination (fixes same-millisecond cursor collision).
-- IF NOT EXISTS on ADD COLUMN is idempotent. MATERIALIZE COLUMN is a background mutation,
-- effectively a no-op once all parts are already materialized.
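The same-millisecond tiebreak mentioned in the comment above can be illustrated with a small in-memory sketch of keyset pagination (hypothetical `Row` record, stdlib only — not the server's actual query code):

```java
import java.util.Comparator;
import java.util.List;

public class KeysetCursorSketch {
    // Minimal stand-in for a server_metrics row: timestamp in millis plus a
    // monotonically assigned insert_id used purely as a tiebreak.
    public record Row(long ts, long insertId) {}

    // Keyset page: strictly after the (ts, insertId) cursor, ordered by the
    // same composite key, so rows sharing a millisecond are never skipped
    // or returned twice across pages.
    public static List<Row> pageAfter(List<Row> rows, long ts, long insertId, int limit) {
        return rows.stream()
                .filter(r -> r.ts() > ts || (r.ts() == ts && r.insertId() > insertId))
                .sorted(Comparator.comparingLong(Row::ts).thenComparingLong(Row::insertId))
                .limit(limit)
                .toList();
    }

    public static void main(String[] args) {
        var page = pageAfter(
                List.of(new Row(5, 1), new Row(5, 2), new Row(6, 1)),
                5, 1, 10);
        System.out.println("rows after cursor (5,1): " + page);
    }
}
```

With a timestamp-only cursor, the second row at `ts=5` would be lost; the composite cursor keeps it.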
@@ -0,0 +1,7 @@
-- V3: per-deployment config snapshot for "last known good" + dirty detection
-- Captures {jarVersionId, agentConfig, containerConfig} at the moment a
-- deployment transitions to RUNNING. Historical rows are NULL; dirty detection
-- treats NULL as "everything dirty" and the next successful Redeploy populates it.

ALTER TABLE deployments
    ADD COLUMN deployed_config_snapshot JSONB;
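The "NULL means everything dirty" rule described in the migration comment can be sketched as plain-Java diffing logic (a hypothetical helper under assumed shapes, not the server's actual implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Objects;

public class DirtySketch {
    // Compares the desired config against the deployed snapshot.
    // A null snapshot (pre-V3 historical row) reports every desired key as
    // dirty, matching the migration's "everything dirty" semantics.
    public static List<String> differences(Map<String, Object> desired,
                                           Map<String, Object> snapshot) {
        if (snapshot == null) return new ArrayList<>(desired.keySet());
        List<String> diffs = new ArrayList<>();
        for (var e : desired.entrySet()) {
            if (!Objects.equals(e.getValue(), snapshot.get(e.getKey()))) {
                diffs.add(e.getKey());
            }
        }
        return diffs;
    }

    public static void main(String[] args) {
        System.out.println(differences(
                Map.of("samplingRate", 0.9),
                Map.of("samplingRate", 0.1)));
    }
}
```

A non-empty result corresponds to `dirty=true` in the `dirty-state` responses exercised by the integration tests further down.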
@@ -0,0 +1,8 @@
-- V4: add created_by column to deployments for audit trail
-- Captures which user initiated a deployment. Nullable for backwards compatibility;
-- pre-V4 historical deployments will have NULL.

ALTER TABLE deployments
    ADD COLUMN created_by TEXT REFERENCES users(user_id);

CREATE INDEX idx_deployments_created_by ON deployments (created_by);
@@ -21,10 +21,12 @@ public abstract class AbstractPostgresIT {
        postgres = new PostgreSQLContainer<>("postgres:16")
                .withDatabaseName("cameleer")
                .withUsername("cameleer")
-               .withPassword("test");
+               .withPassword("test")
+               .withReuse(true);
        postgres.start();

-       clickhouse = new ClickHouseContainer("clickhouse/clickhouse-server:24.12");
+       clickhouse = new ClickHouseContainer("clickhouse/clickhouse-server:24.12")
+               .withReuse(true);
        clickhouse.start();
    }
@@ -48,7 +48,7 @@ class DeploymentStateEvaluatorTest {
    private Deployment deployment(DeploymentStatus status) {
        return new Deployment(DEP_ID, APP_ID, UUID.randomUUID(), ENV_ID, status,
                null, null, List.of(), null, null, "orders-0", null,
-               Map.of(), NOW.minusSeconds(60), null, NOW.minusSeconds(120));
+               Map.of(), null, NOW.minusSeconds(60), null, NOW.minusSeconds(120), "test-user");
    }

    @Test
@@ -52,10 +52,14 @@ class SchemaBootstrapIT extends AbstractPostgresIT {

    @Test
    void alerting_enums_exist() {
        // Scope to current schema's namespace — Testcontainers reuse can otherwise
        // expose enums from a previous run's tenant_default schema alongside public.
        var enums = jdbcTemplate.queryForList("""
-               SELECT typname FROM pg_type
-               WHERE typname IN ('severity_enum','condition_kind_enum','alert_state_enum',
-                                 'target_kind_enum','notification_status_enum')
+               SELECT t.typname FROM pg_type t
+               JOIN pg_namespace n ON n.oid = t.typnamespace
+               WHERE t.typname IN ('severity_enum','condition_kind_enum','alert_state_enum',
+                                  'target_kind_enum','notification_status_enum')
+               AND n.nspname = current_schema()
                """, String.class);
        assertThat(enums).containsExactlyInAnyOrder(
                "severity_enum", "condition_kind_enum", "alert_state_enum",
@@ -86,6 +90,7 @@ class SchemaBootstrapIT extends AbstractPostgresIT {
                SELECT column_name FROM information_schema.columns
                WHERE table_name = 'alert_instances'
                AND column_name IN ('read_at','deleted_at')
                AND table_schema = current_schema()
                """, String.class);
        assertThat(cols).containsExactlyInAnyOrder("read_at", "deleted_at");
    }
@@ -96,13 +101,16 @@ class SchemaBootstrapIT extends AbstractPostgresIT {
                SELECT COUNT(*)::int FROM pg_indexes
                WHERE indexname = 'alert_instances_open_rule_uq'
                AND tablename = 'alert_instances'
                AND schemaname = current_schema()
                """, Integer.class);
        assertThat(count).isEqualTo(1);

        Boolean isUnique = jdbcTemplate.queryForObject("""
                SELECT indisunique FROM pg_index
-               JOIN pg_class ON pg_class.oid = pg_index.indexrelid
-               WHERE pg_class.relname = 'alert_instances_open_rule_uq'
+               JOIN pg_class c ON c.oid = pg_index.indexrelid
+               JOIN pg_namespace n ON n.oid = c.relnamespace
+               WHERE c.relname = 'alert_instances_open_rule_uq'
+               AND n.nspname = current_schema()
                """, Boolean.class);
        assertThat(isUnique).isTrue();
    }
@@ -0,0 +1,239 @@
package com.cameleer.server.app.controller;

import com.cameleer.server.app.AbstractPostgresIT;
import com.cameleer.server.app.TestSecurityHelper;
import com.cameleer.server.app.dto.DirtyStateResponse;
import com.cameleer.server.app.storage.PostgresDeploymentRepository;
import com.cameleer.server.core.runtime.ContainerStatus;
import com.cameleer.server.core.runtime.Deployment;
import com.cameleer.server.core.runtime.DeploymentStatus;
import com.cameleer.server.core.runtime.RuntimeOrchestrator;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.core.io.ByteArrayResource;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.MediaType;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.util.LinkedMultiValueMap;
import org.springframework.util.MultiValueMap;

import java.util.UUID;
import java.util.concurrent.TimeUnit;

import static org.assertj.core.api.Assertions.assertThat;
import static org.awaitility.Awaitility.await;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.when;

/**
 * Integration tests for GET /api/v1/environments/{envSlug}/apps/{appSlug}/dirty-state.
 *
 * <p>Uses {@code @MockBean} RuntimeOrchestrator (same pattern as DeploymentSnapshotIT).
 * {@code @DirtiesContext} prevents context cache conflicts when both IT classes are loaded together.</p>
 */
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
class AppDirtyStateIT extends AbstractPostgresIT {

    @MockBean
    RuntimeOrchestrator runtimeOrchestrator;

    @Autowired
    private TestRestTemplate restTemplate;

    @Autowired
    private ObjectMapper objectMapper;

    @Autowired
    private TestSecurityHelper securityHelper;

    @Autowired
    private PostgresDeploymentRepository deploymentRepository;

    private String operatorJwt;

    @BeforeEach
    void setUp() {
        operatorJwt = securityHelper.operatorToken();
        jdbcTemplate.update("DELETE FROM deployments");
        jdbcTemplate.update("DELETE FROM app_versions");
        jdbcTemplate.update("DELETE FROM apps");
        jdbcTemplate.update("DELETE FROM application_config WHERE environment = 'default'");

        // Ensure test-operator exists in users table (required for deployments.created_by FK)
        jdbcTemplate.update(
                "INSERT INTO users (user_id, provider, display_name) VALUES ('test-operator', 'local', 'Test Operator') ON CONFLICT (user_id) DO NOTHING");
    }

    // -----------------------------------------------------------------------
    // Test 1: no deployment ever → dirty=true, lastSuccessfulDeploymentId=null
    // -----------------------------------------------------------------------

    @Test
    void dirtyState_noDeployEver_returnsDirtyTrue() throws Exception {
        String appSlug = "ds-nodeploy-" + UUID.randomUUID().toString().substring(0, 8);
        post("/api/v1/environments/default/apps",
                String.format("{\"slug\": \"%s\", \"displayName\": \"DS No Deploy\"}", appSlug),
                operatorJwt);
        uploadJar(appSlug, ("fake-jar-" + appSlug).getBytes());
        put("/api/v1/environments/default/apps/" + appSlug + "/config",
                "{\"samplingRate\": 0.5}", operatorJwt);

        DirtyStateResponse body = getDirtyState("default", appSlug);

        assertThat(body.dirty()).isTrue();
        assertThat(body.lastSuccessfulDeploymentId()).isNull();
    }

    // -----------------------------------------------------------------------
    // Test 2: after a successful deploy with matching desired state → dirty=false
    // -----------------------------------------------------------------------

    @Test
    void dirtyState_afterSuccessfulDeploy_matchingDesiredState_returnsDirtyFalse() throws Exception {
        String fakeContainerId = "fake-cid-" + UUID.randomUUID();
        when(runtimeOrchestrator.isEnabled()).thenReturn(true);
        when(runtimeOrchestrator.startContainer(any())).thenReturn(fakeContainerId);
        when(runtimeOrchestrator.getContainerStatus(fakeContainerId))
                .thenReturn(new ContainerStatus("healthy", true, 0, null));

        String appSlug = "ds-clean-" + UUID.randomUUID().toString().substring(0, 8);
        post("/api/v1/environments/default/apps",
                String.format("{\"slug\": \"%s\", \"displayName\": \"DS Clean\"}", appSlug),
                operatorJwt);
        put("/api/v1/environments/default/apps/" + appSlug + "/container-config",
                "{\"runtimeType\": \"spring-boot\", \"appPort\": 8081}", operatorJwt);
        String versionId = uploadJar(appSlug, ("fake-jar-clean-" + appSlug).getBytes());
        put("/api/v1/environments/default/apps/" + appSlug + "/config",
                "{\"samplingRate\": 0.25}", operatorJwt);

        // Deploy and wait for RUNNING
        JsonNode deploy = post(
                "/api/v1/environments/default/apps/" + appSlug + "/deployments",
                String.format("{\"appVersionId\": \"%s\"}", versionId),
                operatorJwt);
        String deploymentId = deploy.path("id").asText();

        await().atMost(30, TimeUnit.SECONDS).pollInterval(500, TimeUnit.MILLISECONDS)
                .untilAsserted(() -> {
                    Deployment d = deploymentRepository.findById(UUID.fromString(deploymentId))
                            .orElseThrow(() -> new AssertionError("Deployment not found"));
                    assertThat(d.status()).isEqualTo(DeploymentStatus.RUNNING);
                });

        // Desired state matches what was deployed → dirty=false
        DirtyStateResponse body = getDirtyState("default", appSlug);

        assertThat(body.dirty()).isFalse();
        assertThat(body.differences()).isEmpty();
        assertThat(body.lastSuccessfulDeploymentId()).isEqualTo(deploymentId);
    }

    // -----------------------------------------------------------------------
    // Test 3: after successful deploy, config changed → dirty=true
    // -----------------------------------------------------------------------

    @Test
    void dirtyState_afterSuccessfulDeploy_configChanged_returnsDirtyTrue() throws Exception {
        String fakeContainerId = "fake-cid2-" + UUID.randomUUID();
        when(runtimeOrchestrator.isEnabled()).thenReturn(true);
        when(runtimeOrchestrator.startContainer(any())).thenReturn(fakeContainerId);
        when(runtimeOrchestrator.getContainerStatus(fakeContainerId))
                .thenReturn(new ContainerStatus("healthy", true, 0, null));

        String appSlug = "ds-dirty-" + UUID.randomUUID().toString().substring(0, 8);
        post("/api/v1/environments/default/apps",
                String.format("{\"slug\": \"%s\", \"displayName\": \"DS Dirty\"}", appSlug),
                operatorJwt);
        put("/api/v1/environments/default/apps/" + appSlug + "/container-config",
                "{\"runtimeType\": \"spring-boot\", \"appPort\": 8081}", operatorJwt);
        String versionId = uploadJar(appSlug, ("fake-jar-dirty-" + appSlug).getBytes());
        put("/api/v1/environments/default/apps/" + appSlug + "/config",
                "{\"samplingRate\": 0.1}", operatorJwt);

        // Deploy and wait for RUNNING
        JsonNode deploy = post(
                "/api/v1/environments/default/apps/" + appSlug + "/deployments",
                String.format("{\"appVersionId\": \"%s\"}", versionId),
                operatorJwt);
        String deploymentId = deploy.path("id").asText();

        await().atMost(30, TimeUnit.SECONDS).pollInterval(500, TimeUnit.MILLISECONDS)
                .untilAsserted(() -> {
                    Deployment d = deploymentRepository.findById(UUID.fromString(deploymentId))
                            .orElseThrow(() -> new AssertionError("Deployment not found"));
                    assertThat(d.status()).isEqualTo(DeploymentStatus.RUNNING);
                });

        // Change samplingRate after deploy
        put("/api/v1/environments/default/apps/" + appSlug + "/config",
                "{\"samplingRate\": 0.9}", operatorJwt);

        // Now desired state differs from snapshot → dirty=true
        DirtyStateResponse body = getDirtyState("default", appSlug);

        assertThat(body.dirty()).isTrue();
        assertThat(body.lastSuccessfulDeploymentId()).isEqualTo(deploymentId);
        assertThat(body.differences()).isNotEmpty();
        assertThat(body.differences())
                .anyMatch(d -> d.field().contains("samplingRate"));
    }

    // -----------------------------------------------------------------------
    // Helpers
    // -----------------------------------------------------------------------

    private DirtyStateResponse getDirtyState(String envSlug, String appSlug) {
        HttpHeaders headers = securityHelper.authHeaders(operatorJwt);
        var response = restTemplate.exchange(
                "/api/v1/environments/" + envSlug + "/apps/" + appSlug + "/dirty-state",
                HttpMethod.GET,
                new HttpEntity<>(headers),
                DirtyStateResponse.class);
        assertThat(response.getStatusCode().value()).isEqualTo(200);
        return response.getBody();
    }

    private JsonNode post(String path, String json, String jwt) throws Exception {
        HttpHeaders headers = securityHelper.authHeaders(jwt);
        var response = restTemplate.exchange(
                path, HttpMethod.POST,
                new HttpEntity<>(json, headers),
                String.class);
        return objectMapper.readTree(response.getBody());
    }

    private void put(String path, String json, String jwt) {
        HttpHeaders headers = securityHelper.authHeaders(jwt);
        restTemplate.exchange(path, HttpMethod.PUT, new HttpEntity<>(json, headers), String.class);
    }

    private String uploadJar(String appSlug, byte[] content) throws Exception {
        ByteArrayResource resource = new ByteArrayResource(content) {
            @Override
            public String getFilename() { return "app.jar"; }
        };
        MultiValueMap<String, Object> body = new LinkedMultiValueMap<>();
        body.add("file", resource);

        HttpHeaders headers = new HttpHeaders();
        headers.set("Authorization", "Bearer " + operatorJwt);
        headers.set("X-Cameleer-Protocol-Version", "1");
        headers.setContentType(MediaType.MULTIPART_FORM_DATA);

        var response = restTemplate.exchange(
                "/api/v1/environments/default/apps/" + appSlug + "/versions",
                HttpMethod.POST,
                new HttpEntity<>(body, headers),
                String.class);

        JsonNode versionNode = objectMapper.readTree(response.getBody());
        return versionNode.path("id").asText();
    }
}
@@ -0,0 +1,200 @@
package com.cameleer.server.app.controller;

import com.cameleer.common.model.ApplicationConfig;
import com.cameleer.server.app.AbstractPostgresIT;
import com.cameleer.server.app.TestSecurityHelper;
import com.cameleer.server.app.storage.PostgresApplicationConfigRepository;
import com.cameleer.server.core.agent.AgentRegistryService;
import com.cameleer.server.core.agent.CommandType;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.mock.mockito.SpyBean;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpMethod;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.annotation.DirtiesContext.ClassMode;

import java.util.List;
import java.util.UUID;

import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.verify;

@DirtiesContext(classMode = ClassMode.AFTER_CLASS)
class ApplicationConfigControllerIT extends AbstractPostgresIT {

    /**
     * Spy on the real AgentRegistryService bean so we can verify whether
     * addGroupCommandWithReplies was invoked (live) or skipped (staged).
     */
    @SpyBean
    AgentRegistryService registryService;

    @Autowired private TestRestTemplate restTemplate;
    @Autowired private TestSecurityHelper securityHelper;
    @Autowired private PostgresApplicationConfigRepository configRepository;

    private String operatorJwt;
    /** Unique env slug per test to avoid cross-test pollution. */
    private String envSlug;
    private UUID envId;
    /** Unique app slug per test run to avoid cross-test row collisions. */
    private String appSlug;

    @BeforeEach
    void setUp() {
        operatorJwt = securityHelper.operatorToken();
        envSlug = "cfg-it-" + UUID.randomUUID().toString().substring(0, 8);
        envId = UUID.randomUUID();
        appSlug = "paygw-" + UUID.randomUUID().toString().substring(0, 8);

        jdbcTemplate.update(
                "INSERT INTO environments (id, slug, display_name) VALUES (?, ?, ?) ON CONFLICT (id) DO NOTHING",
                envId, envSlug, envSlug);
    }

    @AfterEach
    void cleanUp() {
        jdbcTemplate.update("DELETE FROM application_config WHERE environment = ?", envSlug);
        jdbcTemplate.update("DELETE FROM environments WHERE id = ?", envId);
    }

    // ── helpers ──────────────────────────────────────────────────────────────

    private void registerLiveAgent(String agentId) {
        // Use the bootstrap HTTP endpoint — same pattern as AgentCommandControllerIT.
        String body = """
                {
                  "instanceId": "%s",
                  "applicationId": "%s",
                  "environmentId": "%s",
                  "version": "1.0.0",
                  "routeIds": ["route-1"],
                  "capabilities": {}
                }
                """.formatted(agentId, appSlug, envSlug);
        restTemplate.postForEntity(
                "/api/v1/agents/register",
                new HttpEntity<>(body, securityHelper.bootstrapHeaders()),
                String.class);
    }

    private ResponseEntity<String> putConfig(String apply) {
        String url = "/api/v1/environments/" + envSlug + "/apps/" + appSlug + "/config"
                + (apply != null ? "?apply=" + apply : "");
        String body = """
                {"samplingRate": 0.1, "metricsEnabled": true}
                """;
        return restTemplate.exchange(url, HttpMethod.PUT,
                new HttpEntity<>(body, securityHelper.authHeaders(operatorJwt)), String.class);
    }

    // ── tests ─────────────────────────────────────────────────────────────────

    @Test
    void putConfig_staged_savesButDoesNotPush() {
        // Given — one LIVE agent registered for (appSlug, envSlug)
        String agentId = "staged-agent-" + UUID.randomUUID().toString().substring(0, 8);
        registerLiveAgent(agentId);

        // When — PUT with apply=staged
        ResponseEntity<String> response = putConfig("staged");

        // Then — HTTP 200
        assertThat(response.getStatusCode()).isEqualTo(HttpStatus.OK);

        // And — DB has the new config
        ApplicationConfig saved = configRepository
                .findByApplicationAndEnvironment(appSlug, envSlug)
                .orElseThrow(() -> new AssertionError("Config not found in DB"));
        assertThat(saved.getSamplingRate()).isEqualTo(0.1);

        // And — NO CONFIG_UPDATE was pushed to any agent
        verify(registryService, never())
                .addGroupCommandWithReplies(eq(appSlug), eq(envSlug), eq(CommandType.CONFIG_UPDATE), any());
    }

    @Test
    void putConfig_live_savesAndPushes() {
        // Given — one LIVE agent registered for (appSlug, envSlug)
        String agentId = "live-agent-" + UUID.randomUUID().toString().substring(0, 8);
        registerLiveAgent(agentId);

        // When — PUT without apply param (default is live)
        ResponseEntity<String> response = putConfig(null);

        // Then — HTTP 200
        assertThat(response.getStatusCode()).isEqualTo(HttpStatus.OK);

        // And — DB has the new config
        ApplicationConfig saved = configRepository
                .findByApplicationAndEnvironment(appSlug, envSlug)
                .orElseThrow(() -> new AssertionError("Config not found in DB"));
        assertThat(saved.getSamplingRate()).isEqualTo(0.1);

        // And — CONFIG_UPDATE was pushed (addGroupCommandWithReplies called once)
        verify(registryService)
                .addGroupCommandWithReplies(eq(appSlug), eq(envSlug), eq(CommandType.CONFIG_UPDATE), any());
    }

    @Test
    void putConfig_liveExplicit_savesAndPushes() {
        // Same as above but with explicit apply=live
        String agentId = "live-explicit-" + UUID.randomUUID().toString().substring(0, 8);
        registerLiveAgent(agentId);

        ResponseEntity<String> response = putConfig("live");

        assertThat(response.getStatusCode()).isEqualTo(HttpStatus.OK);
        verify(registryService)
                .addGroupCommandWithReplies(eq(appSlug), eq(envSlug), eq(CommandType.CONFIG_UPDATE), any());
    }

    @Test
    void putConfig_unknownApplyValue_returns400() {
        ResponseEntity<String> response = putConfig("BOGUS");
        assertThat(response.getStatusCode()).isEqualTo(HttpStatus.BAD_REQUEST);

        int auditCount = jdbcTemplate.queryForObject(
                "SELECT COUNT(*) FROM audit_log WHERE target = ?", Integer.class, appSlug);
        assertThat(auditCount).isZero();
    }

    @Test
    void putConfig_staged_auditActionIsStagedAppConfig() {
        registerLiveAgent("audit-agent-" + UUID.randomUUID().toString().substring(0, 8));

        ResponseEntity<String> response = putConfig("staged");

        assertThat(response.getStatusCode()).isEqualTo(HttpStatus.OK);

        List<String> actions = jdbcTemplate.queryForList(
                "SELECT action FROM audit_log WHERE target = ? ORDER BY timestamp DESC",
                String.class, appSlug);
        assertThat(actions).hasSize(1);
        assertThat(actions.get(0)).isEqualTo("stage_app_config");
    }

    @Test
    void putConfig_live_auditActionIsUpdateAppConfig() {
        registerLiveAgent("audit-agent-live-" + UUID.randomUUID().toString().substring(0, 8));

        ResponseEntity<String> response = putConfig(null);

        assertThat(response.getStatusCode()).isEqualTo(HttpStatus.OK);

        List<String> actions = jdbcTemplate.queryForList(
                "SELECT action FROM audit_log WHERE target = ? ORDER BY timestamp DESC",
                String.class, appSlug);
        assertThat(actions).hasSize(1);
        assertThat(actions.get(0)).isEqualTo("update_app_config");
    }
}
@@ -0,0 +1,253 @@
package com.cameleer.server.app.controller;

import com.cameleer.server.app.AbstractPostgresIT;
import com.cameleer.server.app.TestSecurityHelper;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.core.io.ByteArrayResource;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.util.LinkedMultiValueMap;
import org.springframework.util.MultiValueMap;

import java.util.List;
import java.util.Map;
import java.util.UUID;

import static org.assertj.core.api.Assertions.assertThat;

class DeploymentControllerAuditIT extends AbstractPostgresIT {

    @Autowired
    private TestRestTemplate restTemplate;

    @Autowired
    private ObjectMapper objectMapper;

    @Autowired
    private TestSecurityHelper securityHelper;

    private String aliceJwt;
    private String adminJwt;
    private String appSlug;
    private String versionId;

    @BeforeEach
    void setUp() throws Exception {
        // Mint JWT for alice (OPERATOR) — subject must start with "user:" for JwtAuthenticationFilter
        aliceJwt = securityHelper.createToken("user:alice", "user", List.of("OPERATOR"));
        adminJwt = securityHelper.adminToken();

        // Clean up deployment-related tables and test-created environments
        jdbcTemplate.update("DELETE FROM deployments");
        jdbcTemplate.update("DELETE FROM app_versions");
        jdbcTemplate.update("DELETE FROM apps");
        jdbcTemplate.update("DELETE FROM environments WHERE slug LIKE 'promote-target-%'");
        jdbcTemplate.update("DELETE FROM audit_log");

        // Ensure alice exists in the users table (required for deployments.created_by FK)
        jdbcTemplate.update(
                "INSERT INTO users (user_id, provider, display_name) VALUES ('alice', 'local', 'Alice Test') ON CONFLICT (user_id) DO NOTHING");

        // Create app in the seeded "default" environment
        appSlug = "audit-test-" + UUID.randomUUID().toString().substring(0, 8);
        String appJson = String.format("""
                {"slug": "%s", "displayName": "Audit Test App"}
                """, appSlug);
        ResponseEntity<String> appResponse = restTemplate.exchange(
                "/api/v1/environments/default/apps", HttpMethod.POST,
                new HttpEntity<>(appJson, authHeaders(aliceJwt)),
                String.class);
        assertThat(appResponse.getStatusCode()).isEqualTo(HttpStatus.CREATED);

        // Upload a JAR version
        byte[] jarContent = "fake-jar-for-audit-test".getBytes();
        ByteArrayResource resource = new ByteArrayResource(jarContent) {
            @Override
            public String getFilename() {
                return "audit-test.jar";
            }
        };
        MultiValueMap<String, Object> body = new LinkedMultiValueMap<>();
        body.add("file", resource);
        HttpHeaders headers = new HttpHeaders();
        headers.set("Authorization", "Bearer " + aliceJwt);
        headers.set("X-Cameleer-Protocol-Version", "1");
        headers.setContentType(MediaType.MULTIPART_FORM_DATA);
        ResponseEntity<String> versionResponse = restTemplate.exchange(
                "/api/v1/environments/default/apps/" + appSlug + "/versions", HttpMethod.POST,
                new HttpEntity<>(body, headers),
                String.class);
        assertThat(versionResponse.getStatusCode().is2xxSuccessful()).isTrue();
        versionId = objectMapper.readTree(versionResponse.getBody()).path("id").asText();
    }

    @Test
    void deploy_writes_audit_row_with_DEPLOYMENT_category_and_alice_actor() throws Exception {
        String json = String.format("""
                {"appVersionId": "%s"}
                """, versionId);

        ResponseEntity<String> response = restTemplate.exchange(
                "/api/v1/environments/default/apps/" + appSlug + "/deployments", HttpMethod.POST,
                new HttpEntity<>(json, authHeaders(aliceJwt)),
                String.class);

        assertThat(response.getStatusCode()).isEqualTo(HttpStatus.ACCEPTED);

        Map<String, Object> row = queryAuditRow("deploy_app");
        assertThat(row).isNotNull();
        assertThat(row.get("username")).isEqualTo("alice");
        assertThat(row.get("action")).isEqualTo("deploy_app");
        assertThat(row.get("category")).isEqualTo("DEPLOYMENT");
        assertThat(row.get("result")).isEqualTo("SUCCESS");
        assertThat(row.get("target")).isNotNull();
        assertThat(row.get("target").toString()).isNotBlank();
    }

    @Test
    void stop_writes_audit_row() throws Exception {
        // First deploy
        String deployJson = String.format("""
                {"appVersionId": "%s"}
                """, versionId);
        ResponseEntity<String> deployResponse = restTemplate.exchange(
                "/api/v1/environments/default/apps/" + appSlug + "/deployments", HttpMethod.POST,
                new HttpEntity<>(deployJson, authHeaders(aliceJwt)),
                String.class);
        assertThat(deployResponse.getStatusCode()).isEqualTo(HttpStatus.ACCEPTED);
        String deploymentId = objectMapper.readTree(deployResponse.getBody()).path("id").asText();

        // Clear audit log to isolate stop audit row
        jdbcTemplate.update("DELETE FROM audit_log");

        // Stop the deployment
        ResponseEntity<String> stopResponse = restTemplate.exchange(
                "/api/v1/environments/default/apps/" + appSlug + "/deployments/" + deploymentId + "/stop",
                HttpMethod.POST,
                new HttpEntity<>(authHeadersNoBody(aliceJwt)),
                String.class);
        assertThat(stopResponse.getStatusCode()).isEqualTo(HttpStatus.OK);

        Map<String, Object> row = queryAuditRow("stop_deployment");
        assertThat(row).isNotNull();
        assertThat(row.get("username")).isEqualTo("alice");
        assertThat(row.get("action")).isEqualTo("stop_deployment");
        assertThat(row.get("category")).isEqualTo("DEPLOYMENT");
        assertThat(row.get("result")).isEqualTo("SUCCESS");
        assertThat(row.get("target").toString()).isEqualTo(deploymentId);
    }

    @Test
    void promote_writes_audit_row() throws Exception {
        // Create a second environment for promotion target
        String targetEnvSlug = "promote-target-" + UUID.randomUUID().toString().substring(0, 8);
        String envJson = String.format("""
                {"slug": "%s", "displayName": "Promote Target Env"}
                """, targetEnvSlug);
        ResponseEntity<String> envResponse = restTemplate.exchange(
                "/api/v1/admin/environments", HttpMethod.POST,
                new HttpEntity<>(envJson, authHeaders(adminJwt)),
                String.class);
        assertThat(envResponse.getStatusCode()).isEqualTo(HttpStatus.CREATED);

        // Create the same app slug in the target environment
        String appJson = String.format("""
                {"slug": "%s", "displayName": "Audit Test App (target)"}
                """, appSlug);
        ResponseEntity<String> targetAppResponse = restTemplate.exchange(
                "/api/v1/environments/" + targetEnvSlug + "/apps", HttpMethod.POST,
                new HttpEntity<>(appJson, authHeaders(aliceJwt)),
                String.class);
        assertThat(targetAppResponse.getStatusCode()).isEqualTo(HttpStatus.CREATED);

        // Deploy in source (default) env
        String deployJson = String.format("""
                {"appVersionId": "%s"}
                """, versionId);
        ResponseEntity<String> deployResponse = restTemplate.exchange(
                "/api/v1/environments/default/apps/" + appSlug + "/deployments", HttpMethod.POST,
                new HttpEntity<>(deployJson, authHeaders(aliceJwt)),
                String.class);
        assertThat(deployResponse.getStatusCode()).isEqualTo(HttpStatus.ACCEPTED);
        String deploymentId = objectMapper.readTree(deployResponse.getBody()).path("id").asText();

        // Clear audit log to isolate promote audit row
        jdbcTemplate.update("DELETE FROM audit_log");

        // Promote to target env
        String promoteJson = String.format("""
                {"targetEnvironment": "%s"}
                """, targetEnvSlug);
        ResponseEntity<String> promoteResponse = restTemplate.exchange(
                "/api/v1/environments/default/apps/" + appSlug + "/deployments/" + deploymentId + "/promote",
                HttpMethod.POST,
                new HttpEntity<>(promoteJson, authHeaders(aliceJwt)),
                String.class);
        assertThat(promoteResponse.getStatusCode()).isEqualTo(HttpStatus.ACCEPTED);

        Map<String, Object> row = queryAuditRow("promote_deployment");
        assertThat(row).isNotNull();
        assertThat(row.get("username")).isEqualTo("alice");
        assertThat(row.get("action")).isEqualTo("promote_deployment");
        assertThat(row.get("category")).isEqualTo("DEPLOYMENT");
        assertThat(row.get("result")).isEqualTo("SUCCESS");
        assertThat(row.get("target")).isNotNull();
        assertThat(row.get("target").toString()).isNotBlank();
    }

    @Test
    void deploy_with_unknown_appVersion_writes_FAILURE_audit_row() throws Exception {
        String unknownVersionId = UUID.randomUUID().toString();
        String json = String.format("""
                {"appVersionId": "%s"}
                """, unknownVersionId);

        ResponseEntity<String> response = restTemplate.exchange(
                "/api/v1/environments/default/apps/" + appSlug + "/deployments", HttpMethod.POST,
                new HttpEntity<>(json, authHeaders(aliceJwt)),
                String.class);

        assertThat(response.getStatusCode()).isEqualTo(HttpStatus.NOT_FOUND);

        Map<String, Object> row = queryAuditRow("deploy_app");
        assertThat(row).isNotNull();
        assertThat(row.get("username")).isEqualTo("alice");
        assertThat(row.get("action")).isEqualTo("deploy_app");
        assertThat(row.get("category")).isEqualTo("DEPLOYMENT");
        assertThat(row.get("result")).isEqualTo("FAILURE");
    }

    // ---- helpers ----

    private HttpHeaders authHeaders(String jwt) {
        HttpHeaders headers = new HttpHeaders();
        headers.set("Authorization", "Bearer " + jwt);
        headers.set("X-Cameleer-Protocol-Version", "1");
        headers.setContentType(MediaType.APPLICATION_JSON);
        return headers;
    }

    private HttpHeaders authHeadersNoBody(String jwt) {
        HttpHeaders headers = new HttpHeaders();
        headers.set("Authorization", "Bearer " + jwt);
        headers.set("X-Cameleer-Protocol-Version", "1");
        return headers;
    }

    /** Query the most recent audit_log row for the given action. Returns null if not found. */
    private Map<String, Object> queryAuditRow(String action) {
        List<Map<String, Object>> rows = jdbcTemplate.queryForList(
                "SELECT username, action, category, target, result FROM audit_log WHERE action = ? ORDER BY timestamp DESC LIMIT 1",
                action);
        return rows.isEmpty() ? null : rows.get(0);
    }
}
@@ -48,6 +48,10 @@ class DeploymentControllerIT extends AbstractPostgresIT {
        jdbcTemplate.update("DELETE FROM app_versions");
        jdbcTemplate.update("DELETE FROM apps");

        // Ensure test-operator exists in users table (required for deployments.created_by FK)
        jdbcTemplate.update(
                "INSERT INTO users (user_id, provider, display_name) VALUES ('test-operator', 'local', 'Test Operator') ON CONFLICT (user_id) DO NOTHING");

        // Get default environment ID
        ResponseEntity<String> envResponse = restTemplate.exchange(
                "/api/v1/admin/environments", HttpMethod.GET,
@@ -166,6 +166,157 @@ class DiagramRenderControllerIT extends AbstractPostgresIT {
        assertThat(response.getStatusCode()).isEqualTo(HttpStatus.NOT_FOUND);
    }

    @Test
    void findByAppAndRoute_returnsLatestDiagram_noLiveAgentPrereq() {
        // The env-scoped /routes/{routeId}/diagram endpoint no longer depends
        // on the agent registry — routes whose publishing agents have been
        // removed must still resolve. The seed step stored a diagram for
        // route "render-test-route" under app "test-group" / env "default",
        // so the same lookup must succeed even though the registry-driven
        // "find agents for app" path used to be a hard 404 prerequisite.
        HttpHeaders headers = securityHelper.authHeadersNoBody(viewerJwt);
        headers.set("Accept", "application/json");

        ResponseEntity<String> response = restTemplate.exchange(
                "/api/v1/environments/default/apps/test-group/routes/render-test-route/diagram",
                HttpMethod.GET,
                new HttpEntity<>(headers),
                String.class);

        assertThat(response.getStatusCode()).isEqualTo(HttpStatus.OK);
        assertThat(response.getBody()).contains("nodes");
        assertThat(response.getBody()).contains("edges");
    }

    @Test
    void findByAppAndRoute_returns404ForUnknownRoute() {
        HttpHeaders headers = securityHelper.authHeadersNoBody(viewerJwt);
        headers.set("Accept", "application/json");

        ResponseEntity<String> response = restTemplate.exchange(
                "/api/v1/environments/default/apps/test-group/routes/nonexistent-route/diagram",
                HttpMethod.GET,
                new HttpEntity<>(headers),
                String.class);

        assertThat(response.getStatusCode()).isEqualTo(HttpStatus.NOT_FOUND);
    }

    @Test
    void exchangeDiagramHash_pinsPointInTimeEvenAfterNewerVersion() throws Exception {
        // Point-in-time guarantee: an execution's stored diagramContentHash
        // must keep resolving to the route shape captured at execution time,
        // even after a newer diagram version for the same route is stored.
        // Content-hash addressing + never-delete of route_diagrams makes this
        // automatic — this test locks the invariant in.
        HttpHeaders viewerHeaders = securityHelper.authHeadersNoBody(viewerJwt);
        viewerHeaders.set("Accept", "application/json");

        // Snapshot the pinned v1 render via the flat content-hash endpoint
        // BEFORE a newer version is stored, so the post-v2 fetch can compare
        // byte-for-byte.
        ResponseEntity<String> pinnedBefore = restTemplate.exchange(
                "/api/v1/diagrams/{hash}/render",
                HttpMethod.GET,
                new HttpEntity<>(viewerHeaders),
                String.class,
                contentHash);
        assertThat(pinnedBefore.getStatusCode()).isEqualTo(HttpStatus.OK);

        // Also snapshot the by-route "latest" render for the same route.
        ResponseEntity<String> latestBefore = restTemplate.exchange(
                "/api/v1/environments/default/apps/test-group/routes/render-test-route/diagram",
                HttpMethod.GET,
                new HttpEntity<>(viewerHeaders),
                String.class);
        assertThat(latestBefore.getStatusCode()).isEqualTo(HttpStatus.OK);

        // Store a materially different v2 for the same (app, env, route).
        // The renderer walks the `root` tree (not the legacy flat `nodes`
        // list that the seed payload uses), so v2 uses the tree shape and
        // will render non-empty output — letting us detect the version flip.
        String newerDiagramJson = """
                {
                  "routeId": "render-test-route",
                  "description": "v2 with extra step",
                  "version": 2,
                  "root": {
                    "id": "n1",
                    "type": "ENDPOINT",
                    "label": "timer:tick-v2",
                    "children": [
                      {
                        "id": "n2",
                        "type": "BEAN",
                        "label": "myBeanV2",
                        "children": [
                          {
                            "id": "n3",
                            "type": "TO",
                            "label": "log:out-v2",
                            "children": [
                              {"id": "n4", "type": "TO", "label": "log:audit"}
                            ]
                          }
                        ]
                      }
                    ]
                  },
                  "edges": [
                    {"source": "n1", "target": "n2", "edgeType": "FLOW"},
                    {"source": "n2", "target": "n3", "edgeType": "FLOW"},
                    {"source": "n3", "target": "n4", "edgeType": "FLOW"}
                  ]
                }
                """;
        restTemplate.postForEntity(
                "/api/v1/data/diagrams",
                new HttpEntity<>(newerDiagramJson, securityHelper.authHeaders(jwt)),
                String.class);

        // Invariant 1: The execution's stored diagramContentHash must not
        // drift — exchanges stay pinned to the version captured at ingest.
        ResponseEntity<String> detailAfter = restTemplate.exchange(
                "/api/v1/environments/default/executions?correlationId=render-probe-corr",
                HttpMethod.GET,
                new HttpEntity<>(viewerHeaders),
                String.class);
        JsonNode search = objectMapper.readTree(detailAfter.getBody());
        String execId = search.get("data").get(0).get("executionId").asText();
        ResponseEntity<String> exec = restTemplate.exchange(
                "/api/v1/executions/" + execId,
                HttpMethod.GET,
                new HttpEntity<>(viewerHeaders),
                String.class);
        JsonNode execBody = objectMapper.readTree(exec.getBody());
        assertThat(execBody.path("diagramContentHash").asText()).isEqualTo(contentHash);

        // Invariant 2: The pinned render (by H1) must be byte-identical
        // before and after v2 is stored — content-hash addressing is stable.
        ResponseEntity<String> pinnedAfter = restTemplate.exchange(
                "/api/v1/diagrams/{hash}/render",
                HttpMethod.GET,
                new HttpEntity<>(viewerHeaders),
                String.class,
                contentHash);
        assertThat(pinnedAfter.getStatusCode()).isEqualTo(HttpStatus.OK);
        assertThat(pinnedAfter.getBody()).isEqualTo(pinnedBefore.getBody());

        // Invariant 3: The by-route "latest" endpoint must now surface v2,
        // so its body differs from the pre-v2 snapshot. Retry briefly to
        // absorb the diagram-ingest flush path.
        await().atMost(20, SECONDS).untilAsserted(() -> {
            ResponseEntity<String> latestAfter = restTemplate.exchange(
                    "/api/v1/environments/default/apps/test-group/routes/render-test-route/diagram",
                    HttpMethod.GET,
                    new HttpEntity<>(viewerHeaders),
                    String.class);
            assertThat(latestAfter.getStatusCode()).isEqualTo(HttpStatus.OK);
            assertThat(latestAfter.getBody()).isNotEqualTo(latestBefore.getBody());
            assertThat(latestAfter.getBody()).contains("myBeanV2");
        });
    }
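The pinning invariants above rest on content-hash addressing: the render key is derived from the diagram payload itself, so storing v2 mints a new key while every old key keeps resolving to the bytes it always did. A minimal sketch of the idea — the digest algorithm (SHA-256 here) and the absence of any canonicalization step are assumptions, not taken from the server source:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

// Hypothetical illustration of content-hash addressing: the key is a
// digest of the diagram bytes, so identical content always maps to the
// same key, and changed content maps to a fresh one.
public class DiagramContentHash {
    static String of(String diagramJson) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(diagramJson.getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(digest);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always present
        }
    }
}
```

Because the key is a pure function of the content, a `/diagrams/{hash}/render` lookup can never observe a later version — which is exactly what invariants 1 and 2 assert.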

    @Test
    void getWithNoAcceptHeader_defaultsToSvg() {
        HttpHeaders headers = securityHelper.authHeadersNoBody(viewerJwt);
@@ -166,6 +166,42 @@ class SearchControllerIT extends AbstractPostgresIT {
                    """, i, i, i, i, i));
        }

        // Executions 11-12: carry structured attributes used by the attribute-filter tests.
        ingest("""
                {
                  "exchangeId": "ex-search-attr-1",
                  "applicationId": "test-group",
                  "instanceId": "test-agent-search-it",
                  "routeId": "search-route-attr-1",
                  "correlationId": "corr-attr-alpha",
                  "status": "COMPLETED",
                  "startTime": "2026-03-12T10:00:00Z",
                  "endTime": "2026-03-12T10:00:00.050Z",
                  "durationMs": 50,
                  "attributes": {"order": "12345", "tenant": "acme"},
                  "chunkSeq": 0,
                  "final": true,
                  "processors": []
                }
                """);
        ingest("""
                {
                  "exchangeId": "ex-search-attr-2",
                  "applicationId": "test-group",
                  "instanceId": "test-agent-search-it",
                  "routeId": "search-route-attr-2",
                  "correlationId": "corr-attr-beta",
                  "status": "COMPLETED",
                  "startTime": "2026-03-12T10:01:00Z",
                  "endTime": "2026-03-12T10:01:00.050Z",
                  "durationMs": 50,
                  "attributes": {"order": "99999"},
                  "chunkSeq": 0,
                  "final": true,
                  "processors": []
                }
                """);

        // Wait for async ingestion + search indexing via REST (no raw SQL).
        // Probe the last seeded execution to avoid false positives from
        // other test classes that may have written into the shared CH tables.
@@ -174,6 +210,11 @@ class SearchControllerIT extends AbstractPostgresIT {
            JsonNode body = objectMapper.readTree(r.getBody());
            assertThat(body.get("total").asLong()).isGreaterThanOrEqualTo(1);
        });
        await().atMost(30, SECONDS).untilAsserted(() -> {
            ResponseEntity<String> r = searchGet("?correlationId=corr-attr-beta");
            JsonNode body = objectMapper.readTree(r.getBody());
            assertThat(body.get("total").asLong()).isGreaterThanOrEqualTo(1);
        });
    }

    @Test
@@ -371,6 +412,69 @@ class SearchControllerIT extends AbstractPostgresIT {
        assertThat(body.get("limit").asInt()).isEqualTo(50);
    }

    @Test
    void attrParam_exactMatch_filtersToMatchingExecution() throws Exception {
        ResponseEntity<String> response = searchGet("?attr=order:12345&correlationId=corr-attr-alpha");
        assertThat(response.getStatusCode()).isEqualTo(HttpStatus.OK);

        JsonNode body = objectMapper.readTree(response.getBody());
        assertThat(body.get("total").asLong()).isEqualTo(1);
        assertThat(body.get("data").get(0).get("correlationId").asText()).isEqualTo("corr-attr-alpha");
    }

    @Test
    void attrParam_keyOnly_matchesAnyExecutionCarryingTheKey() throws Exception {
        ResponseEntity<String> response = searchGet("?attr=tenant&correlationId=corr-attr-alpha");
        assertThat(response.getStatusCode()).isEqualTo(HttpStatus.OK);

        JsonNode body = objectMapper.readTree(response.getBody());
        assertThat(body.get("total").asLong()).isEqualTo(1);
        assertThat(body.get("data").get(0).get("correlationId").asText()).isEqualTo("corr-attr-alpha");
    }

    @Test
    void attrParam_multipleValues_produceIntersection() throws Exception {
        // order:99999 AND tenant=* should yield zero — ex-search-attr-2 has order=99999 but no tenant.
        ResponseEntity<String> response = searchGet("?attr=order:99999&attr=tenant");
        assertThat(response.getStatusCode()).isEqualTo(HttpStatus.OK);

        JsonNode body = objectMapper.readTree(response.getBody());
        assertThat(body.get("total").asLong()).isZero();
    }

    @Test
    void attrParam_invalidKey_returns400() throws Exception {
        ResponseEntity<String> response = searchGet("?attr=bad%20key:x");
        assertThat(response.getStatusCode()).isEqualTo(HttpStatus.BAD_REQUEST);
    }

    @Test
    void attributeFilters_inPostBody_filtersCorrectly() throws Exception {
        ResponseEntity<String> response = searchPost("""
                {
                  "attributeFilters": [
                    {"key": "order", "value": "12345"}
                  ],
                  "correlationId": "corr-attr-alpha"
                }
                """);
        assertThat(response.getStatusCode()).isEqualTo(HttpStatus.OK);

        JsonNode body = objectMapper.readTree(response.getBody());
        assertThat(body.get("total").asLong()).isEqualTo(1);
        assertThat(body.get("data").get(0).get("correlationId").asText()).isEqualTo("corr-attr-alpha");
    }

    @Test
    void attrParam_wildcardValue_matchesOnPrefix() throws Exception {
        ResponseEntity<String> response = searchGet("?attr=order:1*&correlationId=corr-attr-alpha");
        assertThat(response.getStatusCode()).isEqualTo(HttpStatus.OK);

        JsonNode body = objectMapper.readTree(response.getBody());
        assertThat(body.get("total").asLong()).isEqualTo(1);
        assertThat(body.get("data").get(0).get("correlationId").asText()).isEqualTo("corr-attr-alpha");
    }
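Taken together, the tests above pin down a small grammar for the `attr` parameter: a bare `key` requires the attribute to be present, `key:value` requires an exact match, a trailing `*` turns the value into a prefix, multiple `attr` params intersect, and a malformed key is rejected with 400. A sketch of a matcher for that grammar — the parsing rules are inferred from these tests, and the key-validation regex is a guess, not the server's actual rule:

```java
import java.util.Map;

// Hypothetical single-filter matcher for the ?attr= grammar the tests
// exercise: "key", "key:value", and "key:prefix*".
public class AttrFilter {
    final String key;
    final String value;   // null means key-only (presence) match
    final boolean prefix; // true when the value ended with '*'

    AttrFilter(String raw) {
        int colon = raw.indexOf(':');
        String v;
        if (colon < 0) {
            key = raw;
            v = null;
        } else {
            key = raw.substring(0, colon);
            v = raw.substring(colon + 1);
        }
        // Assumed key charset; "bad key" (with a space) must be rejected -> HTTP 400.
        if (!key.matches("[A-Za-z0-9_.-]+")) {
            throw new IllegalArgumentException("invalid attribute key: " + key);
        }
        prefix = v != null && v.endsWith("*");
        value = prefix ? v.substring(0, v.length() - 1) : v;
    }

    boolean matches(Map<String, String> attributes) {
        String actual = attributes.get(key);
        if (actual == null) return false; // key absent
        if (value == null) return true;   // key-only filter
        return prefix ? actual.startsWith(value) : actual.equals(value);
    }
}
```

Intersection then falls out naturally: an execution passes only if every `AttrFilter` in the request matches, which is why `?attr=order:99999&attr=tenant` finds nothing above.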

    // --- Helper methods ---

    private void ingest(String json) {
@@ -0,0 +1,314 @@
|
||||
package com.cameleer.server.app.controller;
|
||||
|
||||
import com.cameleer.server.app.AbstractPostgresIT;
|
||||
import com.cameleer.server.app.TestSecurityHelper;
|
||||
import com.fasterxml.jackson.databind.JsonNode;
|
||||
import com.fasterxml.jackson.databind.ObjectMapper;
|
||||
import org.junit.jupiter.api.BeforeEach;
|
||||
import org.junit.jupiter.api.Test;
|
||||
import org.springframework.beans.factory.annotation.Autowired;
|
||||
import org.springframework.boot.test.web.client.TestRestTemplate;
|
||||
import org.springframework.http.HttpEntity;
|
||||
import org.springframework.http.HttpHeaders;
|
||||
import org.springframework.http.HttpMethod;
|
||||
import org.springframework.http.HttpStatus;
|
||||
import org.springframework.http.ResponseEntity;
|
||||
|
||||
import java.sql.Timestamp;
|
||||
import java.time.Instant;
|
||||
import java.util.Map;
|
||||
|
||||
import static org.assertj.core.api.Assertions.assertThat;
|
||||
|
||||
class ServerMetricsAdminControllerIT extends AbstractPostgresIT {
|
||||
|
||||
@Autowired
|
||||
private TestRestTemplate restTemplate;
|
||||
|
||||
@Autowired
|
||||
private TestSecurityHelper securityHelper;
|
||||
|
||||
private final ObjectMapper mapper = new ObjectMapper();
|
||||
|
||||
private HttpHeaders adminJson;
|
||||
private HttpHeaders adminGet;
|
||||
private HttpHeaders viewerGet;
|
||||
|
||||
@BeforeEach
|
||||
void seedAndAuth() {
|
||||
adminJson = securityHelper.adminHeaders();
|
||||
adminGet = securityHelper.authHeadersNoBody(securityHelper.adminToken());
|
||||
viewerGet = securityHelper.authHeadersNoBody(securityHelper.viewerToken());
|
||||
|
||||
// Fresh rows for each test. The Spring-context ClickHouse JdbcTemplate
|
||||
// lives in a different bean; reach for it here by executing through
|
||||
// the same JdbcTemplate used by the store via the ClickHouseConfig bean.
|
||||
org.springframework.jdbc.core.JdbcTemplate ch = clickhouseJdbc();
|
||||
ch.execute("TRUNCATE TABLE server_metrics");
|
||||
|
||||
Instant t0 = Instant.parse("2026-04-23T10:00:00Z");
|
||||
// Gauge: cameleer.agents.connected, two states, two buckets.
|
||||
insert(ch, "default", t0, "srv-A", "cameleer.agents.connected", "gauge", "value", 3.0,
|
||||
Map.of("state", "live"));
|
||||
insert(ch, "default", t0.plusSeconds(60), "srv-A", "cameleer.agents.connected", "gauge", "value", 4.0,
|
||||
Map.of("state", "live"));
|
||||
insert(ch, "default", t0, "srv-A", "cameleer.agents.connected", "gauge", "value", 1.0,
|
||||
Map.of("state", "stale"));
|
||||
insert(ch, "default", t0.plusSeconds(60), "srv-A", "cameleer.agents.connected", "gauge", "value", 0.0,
|
||||
Map.of("state", "stale"));
|
||||
|
||||
// Counter: cumulative drops, +5 per minute on srv-A.
|
||||
insert(ch, "default", t0, "srv-A", "cameleer.ingestion.drops", "counter", "count", 0.0, Map.of("reason", "buffer_full"));
|
||||
insert(ch, "default", t0.plusSeconds(60), "srv-A", "cameleer.ingestion.drops", "counter", "count", 5.0, Map.of("reason", "buffer_full"));
|
||||
insert(ch, "default", t0.plusSeconds(120), "srv-A", "cameleer.ingestion.drops", "counter", "count", 10.0, Map.of("reason", "buffer_full"));
|
||||
// Simulated restart to srv-B: counter resets to 0, then climbs to 2.
|
||||
insert(ch, "default", t0.plusSeconds(180), "srv-B", "cameleer.ingestion.drops", "counter", "count", 0.0, Map.of("reason", "buffer_full"));
|
||||
insert(ch, "default", t0.plusSeconds(240), "srv-B", "cameleer.ingestion.drops", "counter", "count", 2.0, Map.of("reason", "buffer_full"));
|
||||
|
||||
// Timer mean inputs: two buckets, 2 samples each (count=2, total_time=30).
|
||||
insert(ch, "default", t0, "srv-A", "cameleer.ingestion.flush.duration", "timer", "count", 2.0, Map.of("type", "execution"));
|
||||
insert(ch, "default", t0, "srv-A", "cameleer.ingestion.flush.duration", "timer", "total_time", 30.0, Map.of("type", "execution"));
|
||||
insert(ch, "default", t0.plusSeconds(60), "srv-A", "cameleer.ingestion.flush.duration", "timer", "count", 4.0, Map.of("type", "execution"));
|
||||
insert(ch, "default", t0.plusSeconds(60), "srv-A", "cameleer.ingestion.flush.duration", "timer", "total_time", 100.0, Map.of("type", "execution"));
|
||||
}
|
||||
|
||||
// ── catalog ─────────────────────────────────────────────────────────
|
||||
|
||||
@Test
|
||||
void catalog_listsSeededMetricsWithStatisticsAndTagKeys() throws Exception {
|
||||
ResponseEntity<String> r = restTemplate.exchange(
|
||||
"/api/v1/admin/server-metrics/catalog?from=2026-04-23T09:00:00Z&to=2026-04-23T11:00:00Z",
|
||||
HttpMethod.GET, new HttpEntity<>(adminGet), String.class);
|
||||
assertThat(r.getStatusCode()).isEqualTo(HttpStatus.OK);
|
||||
|
||||
JsonNode body = mapper.readTree(r.getBody());
|
||||
assertThat(body.isArray()).isTrue();
|
||||
|
||||
JsonNode drops = findByField(body, "metricName", "cameleer.ingestion.drops");
|
||||
assertThat(drops.get("metricType").asText()).isEqualTo("counter");
|
||||
assertThat(asStringList(drops.get("statistics"))).contains("count");
|
||||
assertThat(asStringList(drops.get("tagKeys"))).contains("reason");
|
||||
|
||||
JsonNode timer = findByField(body, "metricName", "cameleer.ingestion.flush.duration");
|
||||
assertThat(asStringList(timer.get("statistics"))).contains("count", "total_time");
|
||||
}
|
||||
|
||||
// ── instances ───────────────────────────────────────────────────────
|
||||
|
||||
@Test
|
||||
void instances_listsDistinctServerInstanceIdsWithFirstAndLastSeen() throws Exception {
|
||||
ResponseEntity<String> r = restTemplate.exchange(
|
||||
"/api/v1/admin/server-metrics/instances?from=2026-04-23T09:00:00Z&to=2026-04-23T11:00:00Z",
|
||||
HttpMethod.GET, new HttpEntity<>(adminGet), String.class);
|
||||
assertThat(r.getStatusCode()).isEqualTo(HttpStatus.OK);
|
||||
|
        JsonNode body = mapper.readTree(r.getBody());
        assertThat(body.isArray()).isTrue();
        assertThat(body.size()).isEqualTo(2);
        // Ordered by last_seen DESC — srv-B saw a later row.
        assertThat(body.get(0).get("serverInstanceId").asText()).isEqualTo("srv-B");
        assertThat(body.get(1).get("serverInstanceId").asText()).isEqualTo("srv-A");
    }

    // ── query — gauge with group-by-tag ─────────────────────────────────

    @Test
    void query_gaugeWithGroupByTag_returnsSeriesPerTagValue() throws Exception {
        String requestBody = """
                {
                  "metric": "cameleer.agents.connected",
                  "statistic": "value",
                  "from": "2026-04-23T09:59:00Z",
                  "to": "2026-04-23T10:02:00Z",
                  "stepSeconds": 60,
                  "groupByTags": ["state"],
                  "aggregation": "avg",
                  "mode": "raw"
                }
                """;

        ResponseEntity<String> r = restTemplate.postForEntity(
                "/api/v1/admin/server-metrics/query",
                new HttpEntity<>(requestBody, adminJson), String.class);
        assertThat(r.getStatusCode()).isEqualTo(HttpStatus.OK);

        JsonNode body = mapper.readTree(r.getBody());
        assertThat(body.get("metric").asText()).isEqualTo("cameleer.agents.connected");
        assertThat(body.get("statistic").asText()).isEqualTo("value");
        assertThat(body.get("mode").asText()).isEqualTo("raw");
        assertThat(body.get("stepSeconds").asInt()).isEqualTo(60);

        JsonNode series = body.get("series");
        assertThat(series.isArray()).isTrue();
        assertThat(series.size()).isEqualTo(2);

        JsonNode live = findByTag(series, "state", "live");
        assertThat(live.get("points").size()).isEqualTo(2);
        assertThat(live.get("points").get(0).get("v").asDouble()).isEqualTo(3.0);
        assertThat(live.get("points").get(1).get("v").asDouble()).isEqualTo(4.0);
    }

    // ── query — counter delta across instance rotation ──────────────────

    @Test
    void query_counterDelta_clipsNegativesAcrossInstanceRotation() throws Exception {
        String requestBody = """
                {
                  "metric": "cameleer.ingestion.drops",
                  "statistic": "count",
                  "from": "2026-04-23T09:59:00Z",
                  "to": "2026-04-23T10:05:00Z",
                  "stepSeconds": 60,
                  "groupByTags": ["reason"],
                  "aggregation": "sum",
                  "mode": "delta"
                }
                """;

        ResponseEntity<String> r = restTemplate.postForEntity(
                "/api/v1/admin/server-metrics/query",
                new HttpEntity<>(requestBody, adminJson), String.class);
        assertThat(r.getStatusCode()).isEqualTo(HttpStatus.OK);

        JsonNode body = mapper.readTree(r.getBody());
        JsonNode reason = findByTag(body.get("series"), "reason", "buffer_full");
        // Deltas: 0 (first bucket on srv-A), 5, 5, 0 (first on srv-B, clipped), 2.
        // Sum across the window should be 12 if we tally all positive deltas.
        double sum = 0;
        for (JsonNode p : reason.get("points")) sum += p.get("v").asDouble();
        assertThat(sum).isEqualTo(12.0);
        // No individual point may be negative.
        for (JsonNode p : reason.get("points")) {
            assertThat(p.get("v").asDouble()).isGreaterThanOrEqualTo(0.0);
        }
    }
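The clipping rule the test above pins down can be sketched in isolation. This is a minimal illustration, not the server's query code; the class and method names (`ClippedDelta`, `deltas`) are invented for the example. The idea: a cumulative counter restarts from zero when an instance rotates, so the raw delta at the rotation boundary goes negative and is clipped to 0 rather than subtracting from the window total.

```java
import java.util.ArrayList;
import java.util.List;

public class ClippedDelta {

    /**
     * Successive differences of a cumulative counter, clipped at 0.
     * The first sample has no predecessor, so its delta is 0.
     */
    static List<Double> deltas(List<Double> cumulative) {
        List<Double> out = new ArrayList<>();
        Double prev = null;
        for (double v : cumulative) {
            // A restart makes v - prev negative; Math.max clips it to 0.
            out.add(prev == null ? 0.0 : Math.max(0.0, v - prev));
            prev = v;
        }
        return out;
    }

    public static void main(String[] args) {
        // srv-A reports 10, 15, 20; then srv-B starts fresh at 1 and reaches 3.
        List<Double> window = List.of(10.0, 15.0, 20.0, 1.0, 3.0);
        System.out.println(deltas(window)); // [0.0, 5.0, 5.0, 0.0, 2.0] — sums to 12
    }
}
```

This reproduces exactly the delta sequence and the window sum of 12 that the test asserts.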

    // ── query — derived 'mean' statistic for timers ─────────────────────

    @Test
    void query_timerMeanStatistic_computesTotalOverCountPerBucket() throws Exception {
        String requestBody = """
                {
                  "metric": "cameleer.ingestion.flush.duration",
                  "statistic": "mean",
                  "from": "2026-04-23T09:59:00Z",
                  "to": "2026-04-23T10:02:00Z",
                  "stepSeconds": 60,
                  "groupByTags": ["type"],
                  "aggregation": "avg",
                  "mode": "raw"
                }
                """;

        ResponseEntity<String> r = restTemplate.postForEntity(
                "/api/v1/admin/server-metrics/query",
                new HttpEntity<>(requestBody, adminJson), String.class);
        assertThat(r.getStatusCode()).isEqualTo(HttpStatus.OK);

        JsonNode body = mapper.readTree(r.getBody());
        JsonNode points = findByTag(body.get("series"), "type", "execution").get("points");
        // Bucket 0: 30 / 2 = 15.0
        // Bucket 1: 100 / 4 = 25.0
        assertThat(points.get(0).get("v").asDouble()).isEqualTo(15.0);
        assertThat(points.get(1).get("v").asDouble()).isEqualTo(25.0);
    }
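The derived statistic asserted above is just per-bucket division, sketched below under stated assumptions: `BucketMean` is an illustrative name, and an empty bucket is modeled here as `Double.NaN` (how the real server represents missing points is not shown in the test).

```java
public class BucketMean {

    /** mean = total_time / count per bucket; an empty bucket yields NaN in this sketch. */
    static double mean(double totalTime, double count) {
        return count == 0 ? Double.NaN : totalTime / count;
    }

    public static void main(String[] args) {
        System.out.println(mean(30.0, 2));  // 15.0 — matches bucket 0 in the test
        System.out.println(mean(100.0, 4)); // 25.0 — matches bucket 1
    }
}
```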

    // ── query — input validation ────────────────────────────────────────

    @Test
    void query_rejectsUnsafeMetricName() {
        String requestBody = """
                {
                  "metric": "cameleer.agents; DROP TABLE server_metrics",
                  "from": "2026-04-23T09:59:00Z",
                  "to": "2026-04-23T10:02:00Z"
                }
                """;

        ResponseEntity<String> r = restTemplate.postForEntity(
                "/api/v1/admin/server-metrics/query",
                new HttpEntity<>(requestBody, adminJson), String.class);
        assertThat(r.getStatusCode()).isEqualTo(HttpStatus.BAD_REQUEST);
    }

    @Test
    void query_rejectsRangeBeyondMax() {
        String requestBody = """
                {
                  "metric": "cameleer.agents.connected",
                  "from": "2026-01-01T00:00:00Z",
                  "to": "2026-04-23T00:00:00Z"
                }
                """;

        ResponseEntity<String> r = restTemplate.postForEntity(
                "/api/v1/admin/server-metrics/query",
                new HttpEntity<>(requestBody, adminJson), String.class);
        assertThat(r.getStatusCode()).isEqualTo(HttpStatus.BAD_REQUEST);
    }

    // ── authorization ───────────────────────────────────────────────────

    @Test
    void allEndpoints_requireAdminRole() {
        ResponseEntity<String> catalog = restTemplate.exchange(
                "/api/v1/admin/server-metrics/catalog",
                HttpMethod.GET, new HttpEntity<>(viewerGet), String.class);
        assertThat(catalog.getStatusCode()).isEqualTo(HttpStatus.FORBIDDEN);

        ResponseEntity<String> instances = restTemplate.exchange(
                "/api/v1/admin/server-metrics/instances",
                HttpMethod.GET, new HttpEntity<>(viewerGet), String.class);
        assertThat(instances.getStatusCode()).isEqualTo(HttpStatus.FORBIDDEN);

        HttpHeaders viewerPost = securityHelper.authHeaders(securityHelper.viewerToken());
        ResponseEntity<String> query = restTemplate.exchange(
                "/api/v1/admin/server-metrics/query",
                HttpMethod.POST, new HttpEntity<>("{}", viewerPost), String.class);
        assertThat(query.getStatusCode()).isEqualTo(HttpStatus.FORBIDDEN);
    }

    // ── helpers ─────────────────────────────────────────────────────────

    @Autowired
    private org.springframework.context.ApplicationContext applicationContext;

    private org.springframework.jdbc.core.JdbcTemplate clickhouseJdbc() {
        return org.springframework.test.util.AopTestUtils.getTargetObject(
                applicationContext.getBean("clickHouseJdbcTemplate"));
    }

    private static void insert(org.springframework.jdbc.core.JdbcTemplate jdbc,
                               String tenantId, Instant collectedAt, String serverInstanceId,
                               String metricName, String metricType, String statistic,
                               double value, Map<String, String> tags) {
        jdbc.update("""
                INSERT INTO server_metrics
                  (tenant_id, collected_at, server_instance_id,
                   metric_name, metric_type, statistic, metric_value, tags)
                VALUES (?, ?, ?, ?, ?, ?, ?, ?)
                """,
                tenantId, Timestamp.from(collectedAt), serverInstanceId,
                metricName, metricType, statistic, value, tags);
    }

    private static JsonNode findByField(JsonNode array, String field, String value) {
        for (JsonNode n : array) {
            if (value.equals(n.path(field).asText())) return n;
        }
        throw new AssertionError("no element with " + field + "=" + value);
    }

    private static JsonNode findByTag(JsonNode seriesArray, String tagKey, String tagValue) {
        for (JsonNode s : seriesArray) {
            if (tagValue.equals(s.path("tags").path(tagKey).asText())) return s;
        }
        throw new AssertionError("no series with tag " + tagKey + "=" + tagValue);
    }

    private static java.util.List<String> asStringList(JsonNode arr) {
        java.util.List<String> out = new java.util.ArrayList<>();
        if (arr != null) for (JsonNode n : arr) out.add(n.asText());
        return out;
    }
}
@@ -0,0 +1,130 @@
package com.cameleer.server.app.metrics;

import com.cameleer.server.core.storage.ServerMetricsStore;
import com.cameleer.server.core.storage.model.ServerMetricSample;
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;
import org.junit.jupiter.api.Test;

import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

import static org.assertj.core.api.Assertions.assertThat;

class ServerMetricsSnapshotSchedulerTest {

    @Test
    void snapshot_capturesCounterGaugeAndTimerMeasurements() {
        MeterRegistry registry = new SimpleMeterRegistry();

        Counter counter = Counter.builder("cameleer.test.counter")
                .tag("env", "dev")
                .register(registry);
        counter.increment(3);

        AtomicInteger gaugeSource = new AtomicInteger(42);
        Gauge.builder("cameleer.test.gauge", gaugeSource, AtomicInteger::doubleValue)
                .register(registry);

        Timer timer = Timer.builder("cameleer.test.timer").register(registry);
        timer.record(Duration.ofMillis(5));
        timer.record(Duration.ofMillis(15));

        RecordingStore store = new RecordingStore();
        ServerMetricsSnapshotScheduler scheduler =
                new ServerMetricsSnapshotScheduler(registry, store, "tenant-7", "server-A");

        scheduler.snapshot();

        assertThat(store.batches).hasSize(1);
        List<ServerMetricSample> samples = store.batches.get(0);

        // Every sample is stamped with tenant + instance + a finite value.
        assertThat(samples).allSatisfy(s -> {
            assertThat(s.tenantId()).isEqualTo("tenant-7");
            assertThat(s.serverInstanceId()).isEqualTo("server-A");
            assertThat(Double.isFinite(s.value())).isTrue();
            assertThat(s.collectedAt()).isNotNull();
        });

        // Counter -> 1 row with statistic=count, value=3, tag propagated.
        List<ServerMetricSample> counterRows = samples.stream()
                .filter(s -> s.metricName().equals("cameleer.test.counter"))
                .toList();
        assertThat(counterRows).hasSize(1);
        assertThat(counterRows.get(0).statistic()).isEqualTo("count");
        assertThat(counterRows.get(0).metricType()).isEqualTo("counter");
        assertThat(counterRows.get(0).value()).isEqualTo(3.0);
        assertThat(counterRows.get(0).tags()).containsEntry("env", "dev");

        // Gauge -> 1 row with statistic=value.
        List<ServerMetricSample> gaugeRows = samples.stream()
                .filter(s -> s.metricName().equals("cameleer.test.gauge"))
                .toList();
        assertThat(gaugeRows).hasSize(1);
        assertThat(gaugeRows.get(0).statistic()).isEqualTo("value");
        assertThat(gaugeRows.get(0).metricType()).isEqualTo("gauge");
        assertThat(gaugeRows.get(0).value()).isEqualTo(42.0);

        // Timer -> emits multiple statistics (count, total_time, max).
        List<ServerMetricSample> timerRows = samples.stream()
                .filter(s -> s.metricName().equals("cameleer.test.timer"))
                .toList();
        assertThat(timerRows).isNotEmpty();
        // SimpleMeterRegistry emits Statistic.TOTAL ("total"); other registries (Prometheus)
        // emit TOTAL_TIME ("total_time"). Accept either so the test isn't registry-coupled.
        assertThat(timerRows).extracting(ServerMetricSample::statistic)
                .contains("count", "max");
        assertThat(timerRows).extracting(ServerMetricSample::statistic)
                .containsAnyOf("total_time", "total");
        assertThat(timerRows).allSatisfy(s ->
                assertThat(s.metricType()).isEqualTo("timer"));
        ServerMetricSample count = timerRows.stream()
                .filter(s -> s.statistic().equals("count"))
                .findFirst().orElseThrow();
        assertThat(count.value()).isEqualTo(2.0);
    }
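The per-meter-type expectations asserted above can be summarized as a small mapping table. The sketch below is illustrative only — `MeterToSamples` and its `Sample` record are invented stand-ins, not the scheduler's real types: a counter yields one `count` row, a gauge one `value` row, and a timer one row per registry-reported statistic (`count`, `total` or `total_time`, `max`), passed through unchanged.

```java
import java.util.List;
import java.util.Map;

public class MeterToSamples {

    record Sample(String metricName, String metricType, String statistic, double value) {}

    static List<Sample> counterSamples(String name, double count) {
        // A counter collapses to a single cumulative-count row.
        return List.of(new Sample(name, "counter", "count", count));
    }

    static List<Sample> gaugeSamples(String name, double value) {
        // A gauge collapses to a single instantaneous-value row.
        return List.of(new Sample(name, "gauge", "value", value));
    }

    static List<Sample> timerSamples(String name, Map<String, Double> measurements) {
        // Registry-dependent statistic names ("total" vs "total_time") pass through as-is.
        return measurements.entrySet().stream()
                .map(e -> new Sample(name, "timer", e.getKey(), e.getValue()))
                .toList();
    }

    public static void main(String[] args) {
        System.out.println(counterSamples("cameleer.test.counter", 3.0));
        System.out.println(timerSamples("cameleer.test.timer",
                Map.of("count", 2.0, "total", 20.0, "max", 15.0)));
    }
}
```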

    @Test
    void snapshot_withEmptyRegistry_doesNotWriteBatch() {
        // SimpleMeterRegistry registers no meters by default, so the registry starts empty.
        MeterRegistry registry = new SimpleMeterRegistry();
        RecordingStore store = new RecordingStore();
        ServerMetricsSnapshotScheduler scheduler =
                new ServerMetricsSnapshotScheduler(registry, store, "t", "s");

        scheduler.snapshot();

        assertThat(store.batches).isEmpty();
    }

    @Test
    void snapshot_swallowsStoreFailures() {
        MeterRegistry registry = new SimpleMeterRegistry();
        Counter.builder("cameleer.test").register(registry).increment();

        ServerMetricsStore throwingStore = batch -> {
            throw new RuntimeException("clickhouse down");
        };

        ServerMetricsSnapshotScheduler scheduler =
                new ServerMetricsSnapshotScheduler(registry, throwingStore, "t", "s");

        // Must not propagate — the scheduler thread would otherwise die.
        scheduler.snapshot();
    }

    private static final class RecordingStore implements ServerMetricsStore {
        final List<List<ServerMetricSample>> batches = new ArrayList<>();

        @Override
        public void insertBatch(List<ServerMetricSample> samples) {
            batches.add(List.copyOf(samples));
        }
    }
}
@@ -34,6 +34,10 @@ class OutboundConnectionAdminControllerIT extends AbstractPostgresIT {
    @org.junit.jupiter.api.AfterEach
    void cleanupRows() {
        jdbcTemplate.update("DELETE FROM outbound_connections WHERE tenant_id = 'default'");
        // Clear deployments.created_by for our test users — sibling ITs
        // (DeploymentControllerIT etc.) may have left rows that FK-block user deletion.
        jdbcTemplate.update(
                "DELETE FROM deployments WHERE created_by IN ('test-admin','test-operator','test-viewer')");
        jdbcTemplate.update("DELETE FROM users WHERE user_id IN ('test-admin','test-operator','test-viewer')");
    }

@@ -0,0 +1,194 @@
package com.cameleer.server.app.runtime;

import com.cameleer.server.app.AbstractPostgresIT;
import com.cameleer.server.app.TestSecurityHelper;
import com.cameleer.server.app.storage.PostgresDeploymentRepository;
import com.cameleer.server.core.runtime.ContainerStatus;
import com.cameleer.server.core.runtime.Deployment;
import com.cameleer.server.core.runtime.DeploymentStatus;
import com.cameleer.server.core.runtime.RuntimeOrchestrator;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.core.io.ByteArrayResource;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.MediaType;
import org.springframework.test.context.TestPropertySource;
import org.springframework.util.LinkedMultiValueMap;
import org.springframework.util.MultiValueMap;

import java.util.UUID;
import java.util.concurrent.TimeUnit;

import static org.assertj.core.api.Assertions.assertThat;
import static org.awaitility.Awaitility.await;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

/**
 * Verifies the blue-green deployment strategy: start all new → health-check
 * all → stop old. Strictly all-healthy — a partial failure preserves the
 * previous deployment untouched.
 */
@TestPropertySource(properties = "cameleer.server.runtime.healthchecktimeout=2")
class BlueGreenStrategyIT extends AbstractPostgresIT {

    @MockBean
    RuntimeOrchestrator runtimeOrchestrator;

    @Autowired private TestRestTemplate restTemplate;
    @Autowired private ObjectMapper objectMapper;
    @Autowired private TestSecurityHelper securityHelper;
    @Autowired private PostgresDeploymentRepository deploymentRepository;

    private String operatorJwt;
    private String appSlug;
    private String versionId;

    @BeforeEach
    void setUp() throws Exception {
        operatorJwt = securityHelper.operatorToken();

        jdbcTemplate.update("DELETE FROM deployments");
        jdbcTemplate.update("DELETE FROM app_versions");
        jdbcTemplate.update("DELETE FROM apps");
        jdbcTemplate.update("DELETE FROM application_config WHERE environment = 'default'");

        // Ensure test-operator exists in the users table (required for the deployments.created_by FK).
        jdbcTemplate.update(
                "INSERT INTO users (user_id, provider, display_name) VALUES ('test-operator', 'local', 'Test Operator') ON CONFLICT (user_id) DO NOTHING");

        when(runtimeOrchestrator.isEnabled()).thenReturn(true);

        appSlug = "bg-" + UUID.randomUUID().toString().substring(0, 8);
        post("/api/v1/environments/default/apps", String.format("""
                {"slug": "%s", "displayName": "BG App"}
                """, appSlug), operatorJwt);
        put("/api/v1/environments/default/apps/" + appSlug + "/container-config", """
                {"runtimeType": "spring-boot", "appPort": 8081, "replicas": 2, "deploymentStrategy": "blue-green"}
                """, operatorJwt);
        versionId = uploadJar(appSlug, ("bg-jar-" + appSlug).getBytes());
    }

    @Test
    void blueGreen_allHealthy_stopsOldAfterNew() throws Exception {
        when(runtimeOrchestrator.startContainer(any()))
                .thenReturn("old-0", "old-1", "new-0", "new-1");
        ContainerStatus healthy = new ContainerStatus("healthy", true, 0, null);
        when(runtimeOrchestrator.getContainerStatus("old-0")).thenReturn(healthy);
        when(runtimeOrchestrator.getContainerStatus("old-1")).thenReturn(healthy);
        when(runtimeOrchestrator.getContainerStatus("new-0")).thenReturn(healthy);
        when(runtimeOrchestrator.getContainerStatus("new-1")).thenReturn(healthy);

        String firstDeployId = triggerDeploy();
        awaitStatus(firstDeployId, DeploymentStatus.RUNNING);

        String secondDeployId = triggerDeploy();
        awaitStatus(secondDeployId, DeploymentStatus.RUNNING);

        // The previous deployment was stopped once the new one was healthy.
        Deployment first = deploymentRepository.findById(UUID.fromString(firstDeployId)).orElseThrow();
        assertThat(first.status()).isEqualTo(DeploymentStatus.STOPPED);

        verify(runtimeOrchestrator).stopContainer("old-0");
        verify(runtimeOrchestrator).stopContainer("old-1");
        verify(runtimeOrchestrator, never()).stopContainer("new-0");
        verify(runtimeOrchestrator, never()).stopContainer("new-1");

        // The new deployment has both new replicas recorded.
        Deployment second = deploymentRepository.findById(UUID.fromString(secondDeployId)).orElseThrow();
        assertThat(second.replicaStates()).hasSize(2);
    }

    @Test
    void blueGreen_partialHealthy_preservesOldAndMarksFailed() throws Exception {
        when(runtimeOrchestrator.startContainer(any()))
                .thenReturn("old-0", "old-1", "new-0", "new-1");
        ContainerStatus healthy = new ContainerStatus("healthy", true, 0, null);
        ContainerStatus starting = new ContainerStatus("starting", true, 0, null);
        when(runtimeOrchestrator.getContainerStatus("old-0")).thenReturn(healthy);
        when(runtimeOrchestrator.getContainerStatus("old-1")).thenReturn(healthy);
        when(runtimeOrchestrator.getContainerStatus("new-0")).thenReturn(healthy);
        when(runtimeOrchestrator.getContainerStatus("new-1")).thenReturn(starting);

        String firstDeployId = triggerDeploy();
        awaitStatus(firstDeployId, DeploymentStatus.RUNNING);

        String secondDeployId = triggerDeploy();
        awaitStatus(secondDeployId, DeploymentStatus.FAILED);

        Deployment second = deploymentRepository.findById(UUID.fromString(secondDeployId)).orElseThrow();
        assertThat(second.errorMessage())
                .contains("blue-green")
                .contains("1/2");

        // The previous deployment stays RUNNING — blue-green's safety promise.
        Deployment first = deploymentRepository.findById(UUID.fromString(firstDeployId)).orElseThrow();
        assertThat(first.status()).isEqualTo(DeploymentStatus.RUNNING);

        verify(runtimeOrchestrator, never()).stopContainer("old-0");
        verify(runtimeOrchestrator, never()).stopContainer("old-1");
        // Cleanup ran on both new replicas.
        verify(runtimeOrchestrator).stopContainer("new-0");
        verify(runtimeOrchestrator).stopContainer("new-1");
    }

    // ---- helpers ----

    private String triggerDeploy() throws Exception {
        JsonNode deployResponse = post(
                "/api/v1/environments/default/apps/" + appSlug + "/deployments",
                String.format("{\"appVersionId\": \"%s\"}", versionId), operatorJwt);
        return deployResponse.path("id").asText();
    }

    private void awaitStatus(String deployId, DeploymentStatus expected) {
        await().atMost(30, TimeUnit.SECONDS)
                .pollInterval(500, TimeUnit.MILLISECONDS)
                .untilAsserted(() -> {
                    Deployment d = deploymentRepository.findById(UUID.fromString(deployId))
                            .orElseThrow(() -> new AssertionError("Deployment not found: " + deployId));
                    assertThat(d.status()).isEqualTo(expected);
                });
    }

    private JsonNode post(String path, String json, String jwt) throws Exception {
        HttpHeaders headers = securityHelper.authHeaders(jwt);
        var response = restTemplate.exchange(path, HttpMethod.POST,
                new HttpEntity<>(json, headers), String.class);
        return objectMapper.readTree(response.getBody());
    }

    private void put(String path, String json, String jwt) {
        HttpHeaders headers = securityHelper.authHeaders(jwt);
        restTemplate.exchange(path, HttpMethod.PUT,
                new HttpEntity<>(json, headers), String.class);
    }

    private String uploadJar(String appSlug, byte[] content) throws Exception {
        ByteArrayResource resource = new ByteArrayResource(content) {
            @Override public String getFilename() { return "app.jar"; }
        };
        MultiValueMap<String, Object> body = new LinkedMultiValueMap<>();
        body.add("file", resource);

        HttpHeaders headers = new HttpHeaders();
        headers.set("Authorization", "Bearer " + operatorJwt);
        headers.set("X-Cameleer-Protocol-Version", "1");
        headers.setContentType(MediaType.MULTIPART_FORM_DATA);

        var response = restTemplate.exchange(
                "/api/v1/environments/default/apps/" + appSlug + "/versions",
                HttpMethod.POST, new HttpEntity<>(body, headers), String.class);
        JsonNode versionNode = objectMapper.readTree(response.getBody());
        return versionNode.path("id").asText();
    }
}
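The strict all-healthy rule these two tests verify boils down to one decision: after starting the new set, which replica set gets stopped? A minimal sketch, assuming invented names (`BlueGreenSketch`, `replicasToStop`) and reducing health checks to a predicate — on success the old set is stopped, on any failure the new set is cleaned up and the old deployment is left running.

```java
import java.util.List;
import java.util.function.Predicate;

public class BlueGreenSketch {

    /** Returns the replicas to stop: the old set on success, the new set on failure. */
    static List<String> replicasToStop(List<String> oldReplicas,
                                       List<String> newReplicas,
                                       Predicate<String> isHealthy) {
        // Strict rule: every new replica must be healthy before the cutover.
        boolean allHealthy = newReplicas.stream().allMatch(isHealthy);
        return allHealthy ? oldReplicas : newReplicas; // failure keeps the old set running
    }

    public static void main(String[] args) {
        List<String> olds = List.of("old-0", "old-1");
        List<String> news = List.of("new-0", "new-1");
        // Partial health: new-1 stuck in "starting" → the whole new set is cleaned up.
        System.out.println(replicasToStop(olds, news, id -> id.equals("new-0")));
        // prints [new-0, new-1]
    }
}
```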
@@ -0,0 +1,289 @@
package com.cameleer.server.app.runtime;

import com.cameleer.common.model.ApplicationConfig;
import com.cameleer.server.app.AbstractPostgresIT;
import com.cameleer.server.app.TestSecurityHelper;
import com.cameleer.server.app.storage.PostgresDeploymentRepository;
import com.cameleer.server.core.runtime.ContainerStatus;
import com.cameleer.server.core.runtime.Deployment;
import com.cameleer.server.core.runtime.DeploymentStatus;
import com.cameleer.server.core.runtime.RuntimeOrchestrator;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.core.io.ByteArrayResource;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.MediaType;
import org.springframework.test.context.TestPropertySource;
import org.springframework.util.LinkedMultiValueMap;
import org.springframework.util.MultiValueMap;

import java.util.UUID;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

import static org.assertj.core.api.Assertions.assertThat;
import static org.awaitility.Awaitility.await;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.when;

/**
 * Verifies that DeploymentExecutor writes DeploymentConfigSnapshot on a successful
 * RUNNING transition and does NOT write it on a FAILED path (both the
 * startContainer-throws path and the health-check-fails path).
 */
@TestPropertySource(properties = "cameleer.server.runtime.healthchecktimeout=2")
class DeploymentSnapshotIT extends AbstractPostgresIT {

    @MockBean
    RuntimeOrchestrator runtimeOrchestrator;

    @Autowired
    private TestRestTemplate restTemplate;

    @Autowired
    private ObjectMapper objectMapper;

    @Autowired
    private TestSecurityHelper securityHelper;

    @Autowired
    private PostgresDeploymentRepository deploymentRepository;

    private String operatorJwt;
    private String adminJwt;

    @BeforeEach
    void setUp() throws Exception {
        operatorJwt = securityHelper.operatorToken();
        adminJwt = securityHelper.adminToken();

        // Clean up between tests.
        jdbcTemplate.update("DELETE FROM deployments");
        jdbcTemplate.update("DELETE FROM app_versions");
        jdbcTemplate.update("DELETE FROM apps");
        jdbcTemplate.update("DELETE FROM application_config WHERE environment = 'default'");

        // Ensure test-operator exists in the users table (required for the deployments.created_by FK).
        jdbcTemplate.update(
                "INSERT INTO users (user_id, provider, display_name) VALUES ('test-operator', 'local', 'Test Operator') ON CONFLICT (user_id) DO NOTHING");
    }

    // -----------------------------------------------------------------------
    // Test 1: snapshot is populated when deployment reaches RUNNING
    // -----------------------------------------------------------------------

    @Test
    void snapshot_isPopulated_whenDeploymentReachesRunning() throws Exception {
        // --- given: mock orchestrator that simulates a healthy single-replica container ---
        String fakeContainerId = "fake-container-" + UUID.randomUUID();

        when(runtimeOrchestrator.isEnabled()).thenReturn(true);
        when(runtimeOrchestrator.startContainer(any()))
                .thenReturn(fakeContainerId);
        when(runtimeOrchestrator.getContainerStatus(fakeContainerId))
                .thenReturn(new ContainerStatus("healthy", true, 0, null));

        // --- given: create app with an explicit runtimeType so auto-detection is not needed ---
        String appSlug = "snap-success-" + UUID.randomUUID().toString().substring(0, 8);
        String containerConfigJson = """
                {"runtimeType": "spring-boot", "appPort": 8081}
                """;
        String createAppJson = String.format("""
                {"slug": "%s", "displayName": "Snapshot Success App"}
                """, appSlug);

        JsonNode createdApp = post("/api/v1/environments/default/apps", createAppJson, operatorJwt);
        String appId = createdApp.path("id").asText();

        // --- given: update containerConfig to set runtimeType ---
        put("/api/v1/environments/default/apps/" + appSlug + "/container-config",
                containerConfigJson, operatorJwt);

        // --- given: upload a JAR (fake bytes; real file written to disk by AppService) ---
        String versionId = uploadJar(appSlug, ("fake-jar-bytes-" + appSlug).getBytes());

        // --- given: save agentConfig with samplingRate = 0.25 ---
        String configJson = """
                {"samplingRate": 0.25}
                """;
        put("/api/v1/environments/default/apps/" + appSlug + "/config", configJson, operatorJwt);

        // --- when: trigger deploy ---
        String deployJson = String.format("""
                {"appVersionId": "%s"}
                """, versionId);
        JsonNode deployResponse = post(
                "/api/v1/environments/default/apps/" + appSlug + "/deployments",
                deployJson, operatorJwt);
        String deploymentId = deployResponse.path("id").asText();

        // --- await RUNNING (async executor) ---
        AtomicReference<Deployment> deploymentRef = new AtomicReference<>();
        await().atMost(30, TimeUnit.SECONDS)
                .pollInterval(500, TimeUnit.MILLISECONDS)
                .untilAsserted(() -> {
                    Deployment d = deploymentRepository.findById(UUID.fromString(deploymentId))
                            .orElseThrow(() -> new AssertionError("Deployment not found: " + deploymentId));
                    assertThat(d.status()).isEqualTo(DeploymentStatus.RUNNING);
                    deploymentRef.set(d);
                });

        // --- then: snapshot is populated ---
        Deployment deployed = deploymentRef.get();
        assertThat(deployed.deployedConfigSnapshot()).isNotNull();
        assertThat(deployed.deployedConfigSnapshot().jarVersionId())
                .isEqualTo(UUID.fromString(versionId));
        assertThat(deployed.deployedConfigSnapshot().agentConfig()).isNotNull();
        assertThat(deployed.deployedConfigSnapshot().agentConfig().getSamplingRate())
                .isEqualTo(0.25);
        assertThat(deployed.deployedConfigSnapshot().containerConfig())
                .containsEntry("runtimeType", "spring-boot")
                .containsEntry("appPort", 8081);
    }

    // -----------------------------------------------------------------------
    // Test 2: snapshot is NOT populated when deployment fails
    // -----------------------------------------------------------------------

    @Test
    void snapshot_isNotPopulated_whenDeploymentFails() throws Exception {
        // --- given: mock orchestrator that throws on startContainer ---
        when(runtimeOrchestrator.isEnabled()).thenReturn(true);
        when(runtimeOrchestrator.startContainer(any()))
                .thenThrow(new RuntimeException("Simulated container start failure"));

        // --- given: create app with an explicit runtimeType ---
        String appSlug = "snap-fail-" + UUID.randomUUID().toString().substring(0, 8);
        String createAppJson = String.format("""
                {"slug": "%s", "displayName": "Snapshot Fail App"}
                """, appSlug);
        post("/api/v1/environments/default/apps", createAppJson, operatorJwt);

        put("/api/v1/environments/default/apps/" + appSlug + "/container-config",
                """
                {"runtimeType": "spring-boot", "appPort": 8081}
                """, operatorJwt);

        String versionId = uploadJar(appSlug, ("fake-jar-fail-" + appSlug).getBytes());

        // --- when: trigger deploy ---
        String deployJson = String.format("""
                {"appVersionId": "%s"}
                """, versionId);
        JsonNode deployResponse = post(
                "/api/v1/environments/default/apps/" + appSlug + "/deployments",
                deployJson, operatorJwt);
        String deploymentId = deployResponse.path("id").asText();

        // --- await FAILED (async executor catches the exception and marks the deployment failed) ---
        await().atMost(30, TimeUnit.SECONDS)
                .pollInterval(500, TimeUnit.MILLISECONDS)
                .untilAsserted(() -> {
                    Deployment d = deploymentRepository.findById(UUID.fromString(deploymentId))
                            .orElseThrow(() -> new AssertionError("Deployment not found: " + deploymentId));
                    assertThat(d.status()).isEqualTo(DeploymentStatus.FAILED);
                });

        // --- then: snapshot is null ---
        Deployment failed = deploymentRepository.findById(UUID.fromString(deploymentId)).orElseThrow();
        assertThat(failed.deployedConfigSnapshot()).isNull();
    }

    // -----------------------------------------------------------------------
    // Test 3: snapshot is NOT populated when the health check never passes.
    // This exercises the early-exit path in DeploymentExecutor (line ~231) —
    // startContainer succeeds, but no replica ever reports healthy, so
    // waitForAnyHealthy returns 0 before the snapshot-write point.
    // -----------------------------------------------------------------------

    @Test
    void snapshot_isNotPopulated_whenHealthCheckFails() throws Exception {
        // --- given: container starts but never becomes healthy ---
        String fakeContainerId = "fake-unhealthy-" + UUID.randomUUID();

        when(runtimeOrchestrator.isEnabled()).thenReturn(true);
        when(runtimeOrchestrator.startContainer(any())).thenReturn(fakeContainerId);
        when(runtimeOrchestrator.getContainerStatus(fakeContainerId))
                .thenReturn(new ContainerStatus("starting", true, 0, null));

        String appSlug = "snap-unhealthy-" + UUID.randomUUID().toString().substring(0, 8);
        post("/api/v1/environments/default/apps", String.format("""
                {"slug": "%s", "displayName": "Snapshot Unhealthy App"}
                """, appSlug), operatorJwt);
        put("/api/v1/environments/default/apps/" + appSlug + "/container-config",
                """
                {"runtimeType": "spring-boot", "appPort": 8081}
                """, operatorJwt);
        String versionId = uploadJar(appSlug, ("fake-jar-unhealthy-" + appSlug).getBytes());

        // --- when: trigger deploy ---
        JsonNode deployResponse = post(
                "/api/v1/environments/default/apps/" + appSlug + "/deployments",
                String.format("{\"appVersionId\": \"%s\"}", versionId), operatorJwt);
        String deploymentId = deployResponse.path("id").asText();

        // --- await FAILED (healthchecktimeout overridden to 2s in @TestPropertySource) ---
        await().atMost(30, TimeUnit.SECONDS)
                .pollInterval(500, TimeUnit.MILLISECONDS)
                .untilAsserted(() -> {
                    Deployment d = deploymentRepository.findById(UUID.fromString(deploymentId))
                            .orElseThrow(() -> new AssertionError("Deployment not found: " + deploymentId));
                    assertThat(d.status()).isEqualTo(DeploymentStatus.FAILED);
                });

        // --- then: snapshot is null (the snapshot-write is gated behind the health check) ---
        Deployment failed = deploymentRepository.findById(UUID.fromString(deploymentId)).orElseThrow();
        assertThat(failed.deployedConfigSnapshot()).isNull();
    }
|
||||
|
||||
// -----------------------------------------------------------------------
|
||||
// Helpers
|
||||
// -----------------------------------------------------------------------
|
||||
|
||||
private JsonNode post(String path, String json, String jwt) throws Exception {
|
||||
HttpHeaders headers = securityHelper.authHeaders(jwt);
|
||||
var response = restTemplate.exchange(
|
||||
path, HttpMethod.POST,
|
||||
new HttpEntity<>(json, headers),
|
||||
String.class);
|
||||
return objectMapper.readTree(response.getBody());
|
||||
}
|
||||
|
||||
private void put(String path, String json, String jwt) {
|
||||
HttpHeaders headers = securityHelper.authHeaders(jwt);
|
||||
restTemplate.exchange(
|
||||
path, HttpMethod.PUT,
|
||||
new HttpEntity<>(json, headers),
|
||||
String.class);
|
||||
}
|
||||
|
||||
private String uploadJar(String appSlug, byte[] content) throws Exception {
|
||||
ByteArrayResource resource = new ByteArrayResource(content) {
|
||||
@Override
|
||||
public String getFilename() { return "app.jar"; }
|
||||
};
|
||||
MultiValueMap<String, Object> body = new LinkedMultiValueMap<>();
|
||||
body.add("file", resource);
|
||||
|
||||
HttpHeaders headers = new HttpHeaders();
|
||||
headers.set("Authorization", "Bearer " + operatorJwt);
|
||||
headers.set("X-Cameleer-Protocol-Version", "1");
|
||||
headers.setContentType(MediaType.MULTIPART_FORM_DATA);
|
||||
|
||||
var response = restTemplate.exchange(
|
||||
"/api/v1/environments/default/apps/" + appSlug + "/versions",
|
||||
HttpMethod.POST,
|
||||
new HttpEntity<>(body, headers),
|
||||
String.class);
|
||||
|
||||
JsonNode versionNode = objectMapper.readTree(response.getBody());
|
||||
return versionNode.path("id").asText();
|
||||
}
|
||||
}
|
||||
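Tests 2 and 3 above pin the same contract: `deployedConfigSnapshot` is written only after at least one replica reports healthy. The `DeploymentExecutor` itself is not part of this diff, so the gate the tests imply can only be sketched; every name below is illustrative, not the real API:

```java
// Hypothetical sketch of the snapshot gate asserted above. The real
// DeploymentExecutor is not shown in this diff; names are illustrative.
public class SnapshotGateSketch {

    public record Result(String status, String snapshot) {}

    // startOk: container start succeeded; healthyReplicas: how many replicas
    // passed the health check before the timeout.
    public static Result deploy(boolean startOk, int healthyReplicas, String configJson) {
        if (!startOk) {
            return new Result("FAILED", null);    // exception path (test 2): no snapshot
        }
        if (healthyReplicas == 0) {
            return new Result("FAILED", null);    // health-check timeout (test 3): no snapshot
        }
        return new Result("RUNNING", configJson); // snapshot written only on this path
    }
}
```

Both FAILED paths return before the snapshot is assigned, which is exactly why each test can assert `deployedConfigSnapshot()` is null after awaiting the FAILED status.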
@@ -0,0 +1,198 @@
package com.cameleer.server.app.runtime;

import com.cameleer.server.app.AbstractPostgresIT;
import com.cameleer.server.app.TestSecurityHelper;
import com.cameleer.server.app.storage.PostgresDeploymentRepository;
import com.cameleer.server.core.runtime.ContainerStatus;
import com.cameleer.server.core.runtime.Deployment;
import com.cameleer.server.core.runtime.DeploymentStatus;
import com.cameleer.server.core.runtime.RuntimeOrchestrator;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.mockito.InOrder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.core.io.ByteArrayResource;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.MediaType;
import org.springframework.test.context.TestPropertySource;
import org.springframework.util.LinkedMultiValueMap;
import org.springframework.util.MultiValueMap;

import java.util.UUID;
import java.util.concurrent.TimeUnit;

import static org.assertj.core.api.Assertions.assertThat;
import static org.awaitility.Awaitility.await;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.inOrder;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

/**
 * Verifies the rolling deployment strategy: per-replica start → health → stop
 * old. Mid-rollout health failure preserves remaining un-replaced old replicas;
 * already-stopped old replicas are not restored.
 */
@TestPropertySource(properties = "cameleer.server.runtime.healthchecktimeout=2")
class RollingStrategyIT extends AbstractPostgresIT {

    @MockBean
    RuntimeOrchestrator runtimeOrchestrator;

    @Autowired private TestRestTemplate restTemplate;
    @Autowired private ObjectMapper objectMapper;
    @Autowired private TestSecurityHelper securityHelper;
    @Autowired private PostgresDeploymentRepository deploymentRepository;

    private String operatorJwt;
    private String appSlug;
    private String versionId;

    @BeforeEach
    void setUp() throws Exception {
        operatorJwt = securityHelper.operatorToken();

        jdbcTemplate.update("DELETE FROM deployments");
        jdbcTemplate.update("DELETE FROM app_versions");
        jdbcTemplate.update("DELETE FROM apps");
        jdbcTemplate.update("DELETE FROM application_config WHERE environment = 'default'");

        // Ensure test-operator exists in users table (required for deployments.created_by FK)
        jdbcTemplate.update(
                "INSERT INTO users (user_id, provider, display_name) VALUES ('test-operator', 'local', 'Test Operator') ON CONFLICT (user_id) DO NOTHING");

        when(runtimeOrchestrator.isEnabled()).thenReturn(true);

        appSlug = "roll-" + UUID.randomUUID().toString().substring(0, 8);
        post("/api/v1/environments/default/apps", String.format("""
                {"slug": "%s", "displayName": "Rolling App"}
                """, appSlug), operatorJwt);
        put("/api/v1/environments/default/apps/" + appSlug + "/container-config", """
                {"runtimeType": "spring-boot", "appPort": 8081, "replicas": 2, "deploymentStrategy": "rolling"}
                """, operatorJwt);
        versionId = uploadJar(appSlug, ("roll-jar-" + appSlug).getBytes());
    }

    @Test
    void rolling_allHealthy_replacesOneByOne() throws Exception {
        when(runtimeOrchestrator.startContainer(any()))
                .thenReturn("old-0", "old-1", "new-0", "new-1");
        ContainerStatus healthy = new ContainerStatus("healthy", true, 0, null);
        when(runtimeOrchestrator.getContainerStatus("old-0")).thenReturn(healthy);
        when(runtimeOrchestrator.getContainerStatus("old-1")).thenReturn(healthy);
        when(runtimeOrchestrator.getContainerStatus("new-0")).thenReturn(healthy);
        when(runtimeOrchestrator.getContainerStatus("new-1")).thenReturn(healthy);

        String firstDeployId = triggerDeploy();
        awaitStatus(firstDeployId, DeploymentStatus.RUNNING);

        String secondDeployId = triggerDeploy();
        awaitStatus(secondDeployId, DeploymentStatus.RUNNING);

        // Rolling invariant: old-0 is stopped BEFORE old-1 (replicas replaced
        // one at a time, not all at once). Checking stop order is sufficient —
        // a blue-green path would have both stops adjacent at the end with no
        // interleaved starts; rolling interleaves starts between stops.
        InOrder inOrder = inOrder(runtimeOrchestrator);
        inOrder.verify(runtimeOrchestrator).stopContainer("old-0");
        inOrder.verify(runtimeOrchestrator).stopContainer("old-1");

        // Total of 4 startContainer calls: 2 for first deploy, 2 for rolling.
        verify(runtimeOrchestrator, times(4)).startContainer(any());
        // New replicas were not stopped — they're the running ones now.
        verify(runtimeOrchestrator, never()).stopContainer("new-0");
        verify(runtimeOrchestrator, never()).stopContainer("new-1");

        Deployment first = deploymentRepository.findById(UUID.fromString(firstDeployId)).orElseThrow();
        assertThat(first.status()).isEqualTo(DeploymentStatus.STOPPED);
    }

    @Test
    void rolling_failsMidRollout_preservesRemainingOld() throws Exception {
        when(runtimeOrchestrator.startContainer(any()))
                .thenReturn("old-0", "old-1", "new-0", "new-1");
        ContainerStatus healthy = new ContainerStatus("healthy", true, 0, null);
        ContainerStatus starting = new ContainerStatus("starting", true, 0, null);
        when(runtimeOrchestrator.getContainerStatus("old-0")).thenReturn(healthy);
        when(runtimeOrchestrator.getContainerStatus("old-1")).thenReturn(healthy);
        when(runtimeOrchestrator.getContainerStatus("new-0")).thenReturn(healthy);
        when(runtimeOrchestrator.getContainerStatus("new-1")).thenReturn(starting);

        String firstDeployId = triggerDeploy();
        awaitStatus(firstDeployId, DeploymentStatus.RUNNING);

        String secondDeployId = triggerDeploy();
        awaitStatus(secondDeployId, DeploymentStatus.FAILED);

        Deployment second = deploymentRepository.findById(UUID.fromString(secondDeployId)).orElseThrow();
        assertThat(second.errorMessage())
                .contains("rolling")
                .contains("replica 1");

        // old-0 was replaced before the failure; old-1 was never touched.
        verify(runtimeOrchestrator).stopContainer("old-0");
        verify(runtimeOrchestrator, never()).stopContainer("old-1");
        // Cleanup stops both new replicas started so far.
        verify(runtimeOrchestrator).stopContainer("new-0");
        verify(runtimeOrchestrator).stopContainer("new-1");
    }

    // ---- helpers (same pattern as BlueGreenStrategyIT) ----

    private String triggerDeploy() throws Exception {
        JsonNode deployResponse = post(
                "/api/v1/environments/default/apps/" + appSlug + "/deployments",
                String.format("{\"appVersionId\": \"%s\"}", versionId), operatorJwt);
        return deployResponse.path("id").asText();
    }

    private void awaitStatus(String deployId, DeploymentStatus expected) {
        await().atMost(30, TimeUnit.SECONDS)
                .pollInterval(500, TimeUnit.MILLISECONDS)
                .untilAsserted(() -> {
                    Deployment d = deploymentRepository.findById(UUID.fromString(deployId))
                            .orElseThrow(() -> new AssertionError("Deployment not found: " + deployId));
                    assertThat(d.status()).isEqualTo(expected);
                });
    }

    private JsonNode post(String path, String json, String jwt) throws Exception {
        HttpHeaders headers = securityHelper.authHeaders(jwt);
        var response = restTemplate.exchange(path, HttpMethod.POST,
                new HttpEntity<>(json, headers), String.class);
        return objectMapper.readTree(response.getBody());
    }

    private void put(String path, String json, String jwt) {
        HttpHeaders headers = securityHelper.authHeaders(jwt);
        restTemplate.exchange(path, HttpMethod.PUT,
                new HttpEntity<>(json, headers), String.class);
    }

    private String uploadJar(String appSlug, byte[] content) throws Exception {
        ByteArrayResource resource = new ByteArrayResource(content) {
            @Override public String getFilename() { return "app.jar"; }
        };
        MultiValueMap<String, Object> body = new LinkedMultiValueMap<>();
        body.add("file", resource);

        HttpHeaders headers = new HttpHeaders();
        headers.set("Authorization", "Bearer " + operatorJwt);
        headers.set("X-Cameleer-Protocol-Version", "1");
        headers.setContentType(MediaType.MULTIPART_FORM_DATA);

        var response = restTemplate.exchange(
                "/api/v1/environments/default/apps/" + appSlug + "/versions",
                HttpMethod.POST, new HttpEntity<>(body, headers), String.class);
        JsonNode versionNode = objectMapper.readTree(response.getBody());
        return versionNode.path("id").asText();
    }
}
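The invariant RollingStrategyIT verifies (replace one replica at a time; on a mid-rollout health failure, stop the new replicas started so far and leave the remaining old replicas running) can be sketched as a plain loop, independent of Spring and Mockito. All names here are illustrative; the real executor is not part of this diff:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative rolling-rollout loop matching the invariants RollingStrategyIT
// checks: one replica replaced at a time, old stopped only after new is healthy,
// and on failure only the new replicas started so far are cleaned up.
public class RollingSketch {

    public static List<String> events = new ArrayList<>();

    // firstUnhealthyNew: index of the first new replica that never turns
    // healthy, or -1 if all become healthy.
    public static boolean rollOut(List<String> oldIds, List<String> newIds, int firstUnhealthyNew) {
        List<String> startedNew = new ArrayList<>();
        for (int i = 0; i < oldIds.size(); i++) {
            String newId = newIds.get(i);
            events.add("start:" + newId);
            startedNew.add(newId);
            boolean healthy = (firstUnhealthyNew < 0 || i < firstUnhealthyNew);
            if (!healthy) {
                // Cleanup: stop every new replica started so far; old replicas
                // not yet replaced stay untouched.
                for (String s : startedNew) events.add("stop:" + s);
                return false;
            }
            events.add("stop:" + oldIds.get(i)); // old replaced only after new is healthy
        }
        return true;
    }
}
```

Running the failure scenario from the second test (new-1 never healthy) yields the event order start:new-0, stop:old-0, start:new-1, stop:new-0, stop:new-1, with old-1 never stopped; the happy path stops old-0 strictly before old-1.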
@@ -0,0 +1,90 @@
package com.cameleer.server.app.runtime;

import com.cameleer.server.core.runtime.ResolvedContainerConfig;
import org.junit.jupiter.api.Test;

import java.util.List;
import java.util.Map;

import static org.junit.jupiter.api.Assertions.*;

class TraefikLabelBuilderTest {

    private static ResolvedContainerConfig config(boolean externalRouting, String certResolver) {
        return new ResolvedContainerConfig(
                512, null, 500, null,
                8080, List.of(), Map.of(),
                true, true,
                "path", "example.com", "https://cameleer.example.com",
                1, "blue-green",
                true, true,
                "spring-boot", "", List.of(),
                externalRouting,
                certResolver
        );
    }

    @Test
    void build_emitsTraefikLabelsWhenExternalRoutingEnabled() {
        Map<String, String> labels = TraefikLabelBuilder.build(
                "myapp", "dev", "acme", config(true, null), 0, "abcdef01");

        assertEquals("true", labels.get("traefik.enable"));
        assertEquals("8080", labels.get("traefik.http.services.dev-myapp.loadbalancer.server.port"));
        assertEquals("PathPrefix(`/dev/myapp/`)", labels.get("traefik.http.routers.dev-myapp.rule"));
    }

    @Test
    void build_omitsAllTraefikLabelsWhenExternalRoutingDisabled() {
        Map<String, String> labels = TraefikLabelBuilder.build(
                "myapp", "dev", "acme", config(false, null), 0, "abcdef01");

        long traefikLabelCount = labels.keySet().stream()
                .filter(k -> k.startsWith("traefik."))
                .count();
        assertEquals(0, traefikLabelCount, "expected no traefik.* labels but found: " + labels);
    }

    @Test
    void build_preservesIdentityLabelsWhenExternalRoutingDisabled() {
        Map<String, String> labels = TraefikLabelBuilder.build(
                "myapp", "dev", "acme", config(false, null), 2, "abcdef01");

        assertEquals("cameleer-server", labels.get("managed-by"));
        assertEquals("acme", labels.get("cameleer.tenant"));
        assertEquals("myapp", labels.get("cameleer.app"));
        assertEquals("dev", labels.get("cameleer.environment"));
        assertEquals("2", labels.get("cameleer.replica"));
        assertEquals("abcdef01", labels.get("cameleer.generation"));
        assertEquals("dev-myapp-2-abcdef01", labels.get("cameleer.instance-id"));
    }

    @Test
    void build_emitsCertResolverLabelWhenConfigured() {
        Map<String, String> labels = TraefikLabelBuilder.build(
                "myapp", "dev", "acme", config(true, "letsencrypt"), 0, "abcdef01");

        assertEquals("true", labels.get("traefik.http.routers.dev-myapp.tls"));
        assertEquals("letsencrypt", labels.get("traefik.http.routers.dev-myapp.tls.certresolver"));
    }

    @Test
    void build_omitsCertResolverLabelWhenNull() {
        Map<String, String> labels = TraefikLabelBuilder.build(
                "myapp", "dev", "acme", config(true, null), 0, "abcdef01");

        assertEquals("true", labels.get("traefik.http.routers.dev-myapp.tls"),
                "sslOffloading=true should still mark the router TLS-enabled");
        assertNull(labels.get("traefik.http.routers.dev-myapp.tls.certresolver"),
                "cert resolver label must be omitted when none is configured");
    }

    @Test
    void build_omitsCertResolverLabelWhenBlank() {
        Map<String, String> labels = TraefikLabelBuilder.build(
                "myapp", "dev", "acme", config(true, " "), 0, "abcdef01");

        assertNull(labels.get("traefik.http.routers.dev-myapp.tls.certresolver"),
                "whitespace-only cert resolver must be treated as unset");
    }
}
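A builder consistent with these tests could look roughly like the following sketch. The real `TraefikLabelBuilder` is not shown in this diff; the shape below simply mirrors the assertions above (identity labels always present, `traefik.*` labels gated on external routing, blank cert resolver treated as unset), so treat every detail as an assumption:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of a label builder consistent with TraefikLabelBuilderTest.
// Not the real TraefikLabelBuilder; it only mirrors the test assertions.
public class LabelBuilderSketch {

    public static Map<String, String> build(String app, String env, String tenant,
                                            boolean externalRouting, String certResolver,
                                            int port, int replica, String generation) {
        Map<String, String> labels = new LinkedHashMap<>();
        // Identity labels are always present, even with routing disabled.
        labels.put("managed-by", "cameleer-server");
        labels.put("cameleer.tenant", tenant);
        labels.put("cameleer.app", app);
        labels.put("cameleer.environment", env);
        labels.put("cameleer.replica", String.valueOf(replica));
        labels.put("cameleer.generation", generation);
        labels.put("cameleer.instance-id", env + "-" + app + "-" + replica + "-" + generation);

        if (!externalRouting) {
            return labels;  // no traefik.* labels at all
        }
        String router = env + "-" + app;
        labels.put("traefik.enable", "true");
        labels.put("traefik.http.services." + router + ".loadbalancer.server.port", String.valueOf(port));
        labels.put("traefik.http.routers." + router + ".rule", "PathPrefix(`/" + env + "/" + app + "/`)");
        labels.put("traefik.http.routers." + router + ".tls", "true");
        if (certResolver != null && !certResolver.isBlank()) {
            labels.put("traefik.http.routers." + router + ".tls.certresolver", certResolver.trim());
        }
        return labels;
    }
}
```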
@@ -79,7 +79,8 @@ class ClickHouseLogStoreCountIT {
                base.plusSeconds(30),
                null,
                100,
-               "desc"));
+               "desc",
+               null));

        assertThat(count).isEqualTo(3);
    }
@@ -102,7 +103,8 @@ class ClickHouseLogStoreCountIT {
                base.plusSeconds(30),
                null,
                100,
-               "desc"));
+               "desc",
+               null));

        assertThat(count).isZero();
    }
@@ -120,7 +122,7 @@ class ClickHouseLogStoreCountIT {
                null, List.of("ERROR"), "orders", null, null, null,
                "dev", List.of(),
                base.minusSeconds(1), base.plusSeconds(60),
-               null, 100, "desc"));
+               null, 100, "desc", null));

        assertThat(devCount).isEqualTo(2);
    }

@@ -53,7 +53,7 @@ class ClickHouseLogStoreIT {
    }

    private LogSearchRequest req(String application) {
-       return new LogSearchRequest(null, null, application, null, null, null, null, null, null, null, null, 100, "desc");
+       return new LogSearchRequest(null, null, application, null, null, null, null, null, null, null, null, 100, "desc", null);
    }

    // ── Tests ─────────────────────────────────────────────────────────────
@@ -99,7 +99,7 @@ class ClickHouseLogStoreIT {
        ));

        LogSearchResponse result = store.search(new LogSearchRequest(
-               null, List.of("ERROR"), "my-app", null, null, null, null, null, null, null, null, 100, "desc"));
+               null, List.of("ERROR"), "my-app", null, null, null, null, null, null, null, null, 100, "desc", null));

        assertThat(result.data()).hasSize(1);
        assertThat(result.data().get(0).level()).isEqualTo("ERROR");
@@ -116,7 +116,7 @@ class ClickHouseLogStoreIT {
        ));

        LogSearchResponse result = store.search(new LogSearchRequest(
-               null, List.of("WARN", "ERROR"), "my-app", null, null, null, null, null, null, null, null, 100, "desc"));
+               null, List.of("WARN", "ERROR"), "my-app", null, null, null, null, null, null, null, null, 100, "desc", null));

        assertThat(result.data()).hasSize(2);
    }
@@ -130,7 +130,7 @@ class ClickHouseLogStoreIT {
        ));

        LogSearchResponse result = store.search(new LogSearchRequest(
-               "order #12345", null, "my-app", null, null, null, null, null, null, null, null, 100, "desc"));
+               "order #12345", null, "my-app", null, null, null, null, null, null, null, null, 100, "desc", null));

        assertThat(result.data()).hasSize(1);
        assertThat(result.data().get(0).message()).contains("order #12345");
@@ -147,7 +147,7 @@ class ClickHouseLogStoreIT {
        ));

        LogSearchResponse result = store.search(new LogSearchRequest(
-               null, null, "my-app", null, "exchange-abc", null, null, null, null, null, null, 100, "desc"));
+               null, null, "my-app", null, "exchange-abc", null, null, null, null, null, null, 100, "desc", null));

        assertThat(result.data()).hasSize(1);
        assertThat(result.data().get(0).message()).isEqualTo("msg with exchange");
@@ -170,7 +170,7 @@ class ClickHouseLogStoreIT {
        Instant to = Instant.parse("2026-03-31T13:00:00Z");

        LogSearchResponse result = store.search(new LogSearchRequest(
-               null, null, "my-app", null, null, null, null, null, from, to, null, 100, "desc"));
+               null, null, "my-app", null, null, null, null, null, from, to, null, 100, "desc", null));

        assertThat(result.data()).hasSize(1);
        assertThat(result.data().get(0).message()).isEqualTo("noon");
@@ -188,7 +188,7 @@ class ClickHouseLogStoreIT {

        // No application filter — should return both
        LogSearchResponse result = store.search(new LogSearchRequest(
-               null, null, null, null, null, null, null, null, null, null, null, 100, "desc"));
+               null, null, null, null, null, null, null, null, null, null, null, 100, "desc", null));

        assertThat(result.data()).hasSize(2);
    }
@@ -202,7 +202,7 @@ class ClickHouseLogStoreIT {
        ));

        LogSearchResponse result = store.search(new LogSearchRequest(
-               null, null, "my-app", null, null, "OrderProcessor", null, null, null, null, null, 100, "desc"));
+               null, null, "my-app", null, null, "OrderProcessor", null, null, null, null, null, 100, "desc", null));

        assertThat(result.data()).hasSize(1);
        assertThat(result.data().get(0).loggerName()).contains("OrderProcessor");
@@ -221,7 +221,7 @@ class ClickHouseLogStoreIT {

        // Page 1: limit 2
        LogSearchResponse page1 = store.search(new LogSearchRequest(
-               null, null, "my-app", null, null, null, null, null, null, null, null, 2, "desc"));
+               null, null, "my-app", null, null, null, null, null, null, null, null, 2, "desc", null));

        assertThat(page1.data()).hasSize(2);
        assertThat(page1.hasMore()).isTrue();
@@ -230,7 +230,7 @@ class ClickHouseLogStoreIT {

        // Page 2: use cursor
        LogSearchResponse page2 = store.search(new LogSearchRequest(
-               null, null, "my-app", null, null, null, null, null, null, null, page1.nextCursor(), 2, "desc"));
+               null, null, "my-app", null, null, null, null, null, null, null, page1.nextCursor(), 2, "desc", null));

        assertThat(page2.data()).hasSize(2);
        assertThat(page2.hasMore()).isTrue();
@@ -238,7 +238,7 @@ class ClickHouseLogStoreIT {

        // Page 3: last page
        LogSearchResponse page3 = store.search(new LogSearchRequest(
-               null, null, "my-app", null, null, null, null, null, null, null, page2.nextCursor(), 2, "desc"));
+               null, null, "my-app", null, null, null, null, null, null, null, page2.nextCursor(), 2, "desc", null));

        assertThat(page3.data()).hasSize(1);
        assertThat(page3.hasMore()).isFalse();
@@ -257,7 +257,7 @@ class ClickHouseLogStoreIT {

        // Filter for ERROR only, but counts should include all levels
        LogSearchResponse result = store.search(new LogSearchRequest(
-               null, List.of("ERROR"), "my-app", null, null, null, null, null, null, null, null, 100, "desc"));
+               null, List.of("ERROR"), "my-app", null, null, null, null, null, null, null, null, 100, "desc", null));

        assertThat(result.data()).hasSize(1);
        assertThat(result.levelCounts()).containsEntry("INFO", 2L);
@@ -275,7 +275,7 @@ class ClickHouseLogStoreIT {
        ));

        LogSearchResponse result = store.search(new LogSearchRequest(
-               null, null, "my-app", null, null, null, null, null, null, null, null, 100, "asc"));
+               null, null, "my-app", null, null, null, null, null, null, null, null, 100, "asc", null));

        assertThat(result.data()).hasSize(3);
        assertThat(result.data().get(0).message()).isEqualTo("msg-1");
@@ -340,7 +340,7 @@ class ClickHouseLogStoreIT {

        LogSearchResponse result = store.search(new LogSearchRequest(
                null, null, "my-app", null, null, null, null,
-               List.of("container"), null, null, null, 100, "desc"));
+               List.of("container"), null, null, null, 100, "desc", null));

        assertThat(result.data()).hasSize(1);
        assertThat(result.data().get(0).message()).isEqualTo("container msg");
@@ -365,7 +365,7 @@ class ClickHouseLogStoreIT {

        LogSearchResponse result = store.search(new LogSearchRequest(
                null, null, "my-app", null, null, null, null,
-               List.of("app", "container"), null, null, null, 100, "desc"));
+               List.of("app", "container"), null, null, null, 100, "desc", null));

        assertThat(result.data()).hasSize(2);
        assertThat(result.data()).extracting(LogEntryResult::message)
@@ -388,7 +388,7 @@ class ClickHouseLogStoreIT {
        for (int page = 0; page < 10; page++) {
            LogSearchResponse resp = store.search(new LogSearchRequest(
                    null, null, "my-app", null, null, null, null, null,
-                   null, null, cursor, 2, "desc"));
+                   null, null, cursor, 2, "desc", null));
            for (LogEntryResult r : resp.data()) {
                assertThat(seen.add(r.message())).as("duplicate row returned: " + r.message()).isTrue();
            }
@@ -0,0 +1,196 @@
package com.cameleer.server.app.search;

import com.cameleer.server.core.ingestion.BufferedLogEntry;
import com.cameleer.server.core.search.LogSearchRequest;
import com.cameleer.server.core.search.LogSearchResponse;
import com.cameleer.common.model.LogEntry;
import com.cameleer.server.app.ClickHouseTestHelper;
import com.zaxxer.hikari.HikariDataSource;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.springframework.jdbc.core.JdbcTemplate;
import org.testcontainers.clickhouse.ClickHouseContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

import java.time.Instant;
import java.util.List;

import static org.assertj.core.api.Assertions.assertThat;

/**
 * Integration test for the {@code instanceIds} multi-value filter on
 * {@link ClickHouseLogStore#search(LogSearchRequest)}.
 *
 * <p>Three rows are seeded with distinct {@code instance_id} values:
 * <ul>
 *   <li>{@code prod-app1-0-aaa11111} — included in filter</li>
 *   <li>{@code prod-app1-1-aaa11111} — included in filter</li>
 *   <li>{@code prod-app1-0-bbb22222} — excluded from filter</li>
 * </ul>
 */
@Testcontainers
class ClickHouseLogStoreInstanceIdsIT {

    @Container
    static final ClickHouseContainer clickhouse =
            new ClickHouseContainer("clickhouse/clickhouse-server:24.12");

    private JdbcTemplate jdbc;
    private ClickHouseLogStore store;

    private static final String TENANT = "default";
    private static final String ENV = "prod";
    private static final String APP = "app1";
    private static final String INST_A = "prod-app1-0-aaa11111";
    private static final String INST_B = "prod-app1-1-aaa11111";
    private static final String INST_C = "prod-app1-0-bbb22222";

    @BeforeEach
    void setUp() throws Exception {
        HikariDataSource ds = new HikariDataSource();
        ds.setJdbcUrl(clickhouse.getJdbcUrl());
        ds.setUsername(clickhouse.getUsername());
        ds.setPassword(clickhouse.getPassword());

        jdbc = new JdbcTemplate(ds);
        ClickHouseTestHelper.executeInitSql(jdbc);
        jdbc.execute("TRUNCATE TABLE logs");

        store = new ClickHouseLogStore(TENANT, jdbc);

        Instant base = Instant.parse("2026-04-23T09:00:00Z");
        seedLog(INST_A, base, "msg-from-replica-0-gen-aaa");
        seedLog(INST_B, base.plusSeconds(1), "msg-from-replica-1-gen-aaa");
        seedLog(INST_C, base.plusSeconds(2), "msg-from-replica-0-gen-bbb");
    }

    @AfterEach
    void tearDown() {
        jdbc.execute("TRUNCATE TABLE logs");
    }

    private void seedLog(String instanceId, Instant ts, String message) {
        LogEntry entry = new LogEntry(ts, "INFO", "com.example.Svc", message, "main", null, null);
        store.insertBufferedBatch(List.of(
                new BufferedLogEntry(TENANT, ENV, instanceId, APP, entry)));
    }

    // ── Tests ─────────────────────────────────────────────────────────────

    @Test
    void search_instanceIds_returnsOnlyMatchingInstances() {
        LogSearchResponse result = store.search(new LogSearchRequest(
                null,
                List.of(),
                APP,
                null,
                null,
                null,
                ENV,
                List.of(),
                null,
                null,
                null,
                100,
                "desc",
                List.of(INST_A, INST_B)));

        assertThat(result.data()).hasSize(2);
        assertThat(result.data())
                .extracting(r -> r.instanceId())
                .containsExactlyInAnyOrder(INST_A, INST_B);
        assertThat(result.data())
                .extracting(r -> r.instanceId())
                .doesNotContain(INST_C);
    }

    @Test
    void search_emptyInstanceIds_returnsAllRows() {
        LogSearchResponse result = store.search(new LogSearchRequest(
                null,
                List.of(),
                APP,
                null,
                null,
                null,
                ENV,
                List.of(),
                null,
                null,
                null,
                100,
                "desc",
                List.of()));

        assertThat(result.data()).hasSize(3);
    }

    @Test
    void search_nullInstanceIds_returnsAllRows() {
        LogSearchResponse result = store.search(new LogSearchRequest(
                null,
                List.of(),
                APP,
                null,
                null,
                null,
                ENV,
                List.of(),
                null,
                null,
                null,
                100,
                "desc",
                null));

        assertThat(result.data()).hasSize(3);
    }

    @Test
    void search_instanceIds_singleValue_filtersToOneReplica() {
        LogSearchResponse result = store.search(new LogSearchRequest(
                null,
                List.of(),
                APP,
                null,
                null,
                null,
                ENV,
                List.of(),
                null,
                null,
                null,
                100,
                "desc",
                List.of(INST_C)));

        assertThat(result.data()).hasSize(1);
        assertThat(result.data().get(0).instanceId()).isEqualTo(INST_C);
        assertThat(result.data().get(0).message()).isEqualTo("msg-from-replica-0-gen-bbb");
    }

    @Test
    void search_instanceIds_doesNotConflictWithSingularInstanceId() {
        // Singular instanceId=INST_A AND instanceIds=[INST_B] → intersection = empty
        // (both conditions apply: instance_id = A AND instance_id IN (B))
        LogSearchResponse result = store.search(new LogSearchRequest(
                null,
                List.of(),
                APP,
                INST_A, // singular
                null,
                null,
                ENV,
                List.of(),
                null,
                null,
                null,
                100,
                "desc",
                List.of(INST_B))); // plural — no overlap

        assertThat(result.data()).isEmpty();
    }
}
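The last test relies on the singular `instanceId` and the new `instanceIds` filters being AND-ed together rather than one overriding the other. The store's query builder is not shown in this diff, but the composition can be sketched as a plain predicate builder (all names below are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative WHERE-clause composition: every non-empty filter contributes an
// AND-ed predicate, so a singular instance_id = ? and an instance_id IN (...)
// list can both apply at once and intersect. That intersection is why the
// doesNotConflict test above expects zero rows for disjoint values.
public class WhereSketch {

    public static String where(String instanceId, List<String> instanceIds) {
        List<String> predicates = new ArrayList<>();
        if (instanceId != null) {
            predicates.add("instance_id = ?");
        }
        if (instanceIds != null && !instanceIds.isEmpty()) {
            // one placeholder per id: "(?, ?, ..., ?)"
            predicates.add("instance_id IN (" + "?, ".repeat(instanceIds.size() - 1) + "?)");
        }
        return predicates.isEmpty() ? "" : "WHERE " + String.join(" AND ", predicates);
    }
}
```

A null or empty list contributes no predicate at all, matching the two `returnsAllRows` tests.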
@@ -2,6 +2,7 @@ package com.cameleer.server.app.search;

 import com.cameleer.server.app.storage.ClickHouseExecutionStore;
 import com.cameleer.server.core.ingestion.MergedExecution;
+import com.cameleer.server.core.search.AttributeFilter;
 import com.cameleer.server.core.search.ExecutionSummary;
 import com.cameleer.server.core.search.SearchRequest;
 import com.cameleer.server.core.search.SearchResult;

@@ -62,7 +63,7 @@ class ClickHouseSearchIndexIT {
         500L,
         "", "", "", "", "", "",
         "hash-abc", "FULL",
-        "{\"order\":\"12345\"}", "", "", "", "", "", "{\"env\":\"prod\"}",
+        "", "", "", "", "", "", "{\"order\":\"12345\",\"tenant\":\"acme\"}",
         "", "",
         false, false,
         null, null

@@ -79,7 +80,7 @@ class ClickHouseSearchIndexIT {
         "java.lang.NPE\n at Foo.bar(Foo.java:42)",
         "NullPointerException", "RUNTIME", "", "",
         "", "FULL",
-        "", "", "", "", "", "", "",
+        "", "", "", "", "", "", "{\"order\":\"99999\"}",
         "", "",
         false, false,
         null, null

@@ -309,4 +310,59 @@ class ClickHouseSearchIndexIT {
         assertThat(result.total()).isEqualTo(1);
         assertThat(result.data().get(0).executionId()).isEqualTo("exec-1");
     }
+
+    @Test
+    void search_byAttributeFilter_exactMatch_matchesExec1() {
+        SearchRequest request = new SearchRequest(
+                null, null, null, null, null, null, null, null, null, null,
+                null, null, null, null, null, 0, 50, null, null, null, null,
+                List.of(new AttributeFilter("order", "12345")));
+
+        SearchResult<ExecutionSummary> result = searchIndex.search(request);
+
+        assertThat(result.total()).isEqualTo(1);
+        assertThat(result.data().get(0).executionId()).isEqualTo("exec-1");
+    }
+
+    @Test
+    void search_byAttributeFilter_keyOnly_matchesExec1AndExec2() {
+        SearchRequest request = new SearchRequest(
+                null, null, null, null, null, null, null, null, null, null,
+                null, null, null, null, null, 0, 50, null, null, null, null,
+                List.of(new AttributeFilter("order", null)));
+
+        SearchResult<ExecutionSummary> result = searchIndex.search(request);
+
+        assertThat(result.total()).isEqualTo(2);
+        assertThat(result.data()).extracting(ExecutionSummary::executionId)
+                .containsExactlyInAnyOrder("exec-1", "exec-2");
+    }
+
+    @Test
+    void search_byAttributeFilter_wildcardValue_matchesExec1Only() {
+        SearchRequest request = new SearchRequest(
+                null, null, null, null, null, null, null, null, null, null,
+                null, null, null, null, null, 0, 50, null, null, null, null,
+                List.of(new AttributeFilter("order", "123*")));
+
+        SearchResult<ExecutionSummary> result = searchIndex.search(request);
+
+        assertThat(result.total()).isEqualTo(1);
+        assertThat(result.data().get(0).executionId()).isEqualTo("exec-1");
+    }
+
+    @Test
+    void search_byAttributeFilter_multipleFiltersAreAnded() {
+        SearchRequest request = new SearchRequest(
+                null, null, null, null, null, null, null, null, null, null,
+                null, null, null, null, null, 0, 50, null, null, null, null,
+                List.of(
+                        new AttributeFilter("order", "12345"),
+                        new AttributeFilter("tenant", "acme")));
+
+        SearchResult<ExecutionSummary> result = searchIndex.search(request);
+
+        assertThat(result.total()).isEqualTo(1);
+        assertThat(result.data().get(0).executionId()).isEqualTo("exec-1");
+    }
 }
@@ -155,21 +155,51 @@ class ClickHouseDiagramStoreIT {
     }

     @Test
-    void findContentHashForRouteByAgents_returnsHash() {
-        RouteGraph graph = buildGraph("route-4", "node-z");
-        store.store(tagged("agent-10", "app-b", graph));
-        store.store(tagged("agent-20", "app-b", graph));
+    void findLatestContentHashForAppRoute_returnsLatestAcrossInstances() throws InterruptedException {
+        // v1 published by one agent, v2 by a different agent. The app+env+route
+        // resolver must pick v2 regardless of which instance produced it, and
+        // must keep working even if neither instance is "live" anywhere.
+        RouteGraph v1 = buildGraph("evolving-route", "n-a");
+        v1.setDescription("v1");
+        RouteGraph v2 = buildGraph("evolving-route", "n-a", "n-b");
+        v2.setDescription("v2");

-        Optional<String> result = store.findContentHashForRouteByAgents(
-                "route-4", java.util.List.of("agent-10", "agent-20"));
+        store.store(new TaggedDiagram("publisher-old", "versioned-app", "default", v1));
+        Thread.sleep(10);
+        store.store(new TaggedDiagram("publisher-new", "versioned-app", "default", v2));

-        assertThat(result).isPresent();
+        Optional<String> hashOpt = store.findLatestContentHashForAppRoute(
+                "versioned-app", "evolving-route", "default");
+        assertThat(hashOpt).isPresent();

+        RouteGraph retrieved = store.findByContentHash(hashOpt.get()).orElseThrow();
+        assertThat(retrieved.getDescription()).isEqualTo("v2");
     }

     @Test
-    void findContentHashForRouteByAgents_emptyListReturnsEmpty() {
-        Optional<String> result = store.findContentHashForRouteByAgents("route-x", java.util.List.of());
-        assertThat(result).isEmpty();
+    void findLatestContentHashForAppRoute_isolatesByAppAndEnv() {
+        RouteGraph graph = buildGraph("shared-route", "node-1");
+        store.store(new TaggedDiagram("a1", "app-alpha", "dev", graph));
+        store.store(new TaggedDiagram("a2", "app-beta", "prod", graph));
+
+        // Same route id exists across two (app, env) combos. The resolver must
+        // return empty for a mismatch on either dimension.
+        assertThat(store.findLatestContentHashForAppRoute("app-alpha", "shared-route", "dev"))
+                .isPresent();
+        assertThat(store.findLatestContentHashForAppRoute("app-alpha", "shared-route", "prod"))
+                .isEmpty();
+        assertThat(store.findLatestContentHashForAppRoute("app-beta", "shared-route", "dev"))
+                .isEmpty();
+        assertThat(store.findLatestContentHashForAppRoute("app-gamma", "shared-route", "dev"))
+                .isEmpty();
     }

     @Test
+    void findLatestContentHashForAppRoute_emptyInputsReturnEmpty() {
+        assertThat(store.findLatestContentHashForAppRoute(null, "r", "default")).isEmpty();
+        assertThat(store.findLatestContentHashForAppRoute("app", null, "default")).isEmpty();
+        assertThat(store.findLatestContentHashForAppRoute("app", "r", null)).isEmpty();
+        assertThat(store.findLatestContentHashForAppRoute("", "r", "default")).isEmpty();
+    }
+
+    @Test
@@ -0,0 +1,117 @@
package com.cameleer.server.app.storage;

import com.cameleer.server.core.storage.model.ServerMetricSample;
import com.zaxxer.hikari.HikariDataSource;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.springframework.jdbc.core.JdbcTemplate;
import org.testcontainers.clickhouse.ClickHouseContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

import java.time.Instant;
import java.util.List;
import java.util.Map;

import static org.assertj.core.api.Assertions.assertThat;

@Testcontainers
class ClickHouseServerMetricsStoreIT {

    @Container
    static final ClickHouseContainer clickhouse =
            new ClickHouseContainer("clickhouse/clickhouse-server:24.12");

    private JdbcTemplate jdbc;
    private ClickHouseServerMetricsStore store;

    @BeforeEach
    void setUp() {
        HikariDataSource ds = new HikariDataSource();
        ds.setJdbcUrl(clickhouse.getJdbcUrl());
        ds.setUsername(clickhouse.getUsername());
        ds.setPassword(clickhouse.getPassword());

        jdbc = new JdbcTemplate(ds);

        jdbc.execute("""
                CREATE TABLE IF NOT EXISTS server_metrics (
                    tenant_id LowCardinality(String) DEFAULT 'default',
                    collected_at DateTime64(3),
                    server_instance_id LowCardinality(String),
                    metric_name LowCardinality(String),
                    metric_type LowCardinality(String),
                    statistic LowCardinality(String) DEFAULT 'value',
                    metric_value Float64,
                    tags Map(String, String) DEFAULT map(),
                    server_received_at DateTime64(3) DEFAULT now64(3)
                )
                ENGINE = MergeTree()
                ORDER BY (tenant_id, collected_at, server_instance_id, metric_name, statistic)
                """);

        jdbc.execute("TRUNCATE TABLE server_metrics");

        store = new ClickHouseServerMetricsStore(jdbc);
    }

    @Test
    void insertBatch_roundTripsAllColumns() {
        Instant ts = Instant.parse("2026-04-23T12:00:00Z");
        store.insertBatch(List.of(
                new ServerMetricSample("tenant-a", ts, "srv-1",
                        "cameleer.ingestion.drops", "counter", "count", 17.0,
                        Map.of("reason", "buffer_full")),
                new ServerMetricSample("tenant-a", ts, "srv-1",
                        "jvm.memory.used", "gauge", "value", 1_048_576.0,
                        Map.of("area", "heap", "id", "G1 Eden Space"))
        ));

        Integer count = jdbc.queryForObject(
                "SELECT count() FROM server_metrics WHERE tenant_id = 'tenant-a'",
                Integer.class);
        assertThat(count).isEqualTo(2);

        Double dropsValue = jdbc.queryForObject(
                """
                SELECT metric_value FROM server_metrics
                WHERE tenant_id = 'tenant-a'
                  AND server_instance_id = 'srv-1'
                  AND metric_name = 'cameleer.ingestion.drops'
                  AND statistic = 'count'
                """,
                Double.class);
        assertThat(dropsValue).isEqualTo(17.0);

        String heapArea = jdbc.queryForObject(
                """
                SELECT tags['area'] FROM server_metrics
                WHERE tenant_id = 'tenant-a'
                  AND metric_name = 'jvm.memory.used'
                """,
                String.class);
        assertThat(heapArea).isEqualTo("heap");
    }

    @Test
    void insertBatch_emptyList_doesNothing() {
        store.insertBatch(List.of());

        Integer count = jdbc.queryForObject(
                "SELECT count() FROM server_metrics", Integer.class);
        assertThat(count).isEqualTo(0);
    }

    @Test
    void insertBatch_nullTags_storesEmptyMap() {
        store.insertBatch(List.of(
                new ServerMetricSample("default", Instant.parse("2026-04-23T12:00:00Z"),
                        "srv-2", "process.cpu.usage", "gauge", "value", 0.12, null)
        ));

        Integer count = jdbc.queryForObject(
                "SELECT count() FROM server_metrics WHERE server_instance_id = 'srv-2'",
                Integer.class);
        assertThat(count).isEqualTo(1);
    }
}
@@ -0,0 +1,77 @@
package com.cameleer.server.app.storage;

import com.cameleer.server.app.AbstractPostgresIT;
import com.cameleer.server.core.runtime.Deployment;
import com.cameleer.server.core.runtime.DeploymentService;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;

import java.util.UUID;

import static org.assertj.core.api.Assertions.assertThat;

class PostgresDeploymentRepositoryCreatedByIT extends AbstractPostgresIT {

    @Autowired DeploymentService deploymentService;
    @Autowired JdbcTemplate jdbc;

    private UUID appId;
    private UUID envId;
    private UUID versionId;

    @BeforeEach
    void seedAppAndVersion() {
        // Clean up to avoid conflicts across test runs
        jdbc.update("DELETE FROM deployments");
        jdbc.update("DELETE FROM app_versions");
        jdbc.update("DELETE FROM apps");
        jdbc.update("DELETE FROM users WHERE user_id IN ('alice', 'bob')");

        envId = jdbc.queryForObject(
                "SELECT id FROM environments WHERE slug = 'default'", UUID.class);

        // Seed users (alice, bob) — use the bare user_id convention; provider is NOT NULL
        jdbc.update("INSERT INTO users (user_id, provider) VALUES (?, 'LOCAL') " +
                "ON CONFLICT (user_id) DO NOTHING", "alice");
        jdbc.update("INSERT INTO users (user_id, provider) VALUES (?, 'LOCAL') " +
                "ON CONFLICT (user_id) DO NOTHING", "bob");

        // Seed app
        appId = UUID.randomUUID();
        jdbc.update("INSERT INTO apps (id, environment_id, slug, display_name) " +
                "VALUES (?, ?, 'test-app', 'Test App')",
                appId, envId);

        // Seed version
        versionId = UUID.randomUUID();
        jdbc.update("INSERT INTO app_versions (id, app_id, version, jar_path, jar_checksum) " +
                "VALUES (?, ?, 1, '/tmp/x.jar', 'abc')",
                versionId, appId);
    }

    @AfterEach
    void cleanup() {
        jdbc.update("DELETE FROM deployments");
        jdbc.update("DELETE FROM app_versions");
        jdbc.update("DELETE FROM apps");
        jdbc.update("DELETE FROM users WHERE user_id IN ('alice', 'bob')");
    }

    @Test
    void createDeployment_persists_createdBy_and_returns_it() {
        Deployment d = deploymentService.createDeployment(appId, versionId, envId, "alice");
        assertThat(d.createdBy()).isEqualTo("alice");
        String fromDb = jdbc.queryForObject(
                "SELECT created_by FROM deployments WHERE id = ?", String.class, d.id());
        assertThat(fromDb).isEqualTo("alice");
    }

    @Test
    void promote_persists_createdBy() {
        Deployment promoted = deploymentService.promote(appId, versionId, envId, "bob");
        assertThat(promoted.createdBy()).isEqualTo("bob");
    }
}
@@ -0,0 +1,129 @@
package com.cameleer.server.app.storage;

import com.cameleer.common.model.ApplicationConfig;
import com.cameleer.server.app.AbstractPostgresIT;
import com.cameleer.server.core.runtime.Deployment;
import com.cameleer.server.core.runtime.DeploymentConfigSnapshot;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;

import java.util.Map;
import java.util.UUID;

import static org.assertj.core.api.Assertions.assertThat;

class PostgresDeploymentRepositoryIT extends AbstractPostgresIT {

    @Autowired PostgresDeploymentRepository repository;

    private UUID envId;
    private UUID appId;
    private UUID appVersionId;

    @BeforeEach
    void setup() {
        envId = UUID.randomUUID();
        jdbcTemplate.update(
                "INSERT INTO environments (id, slug, display_name) VALUES (?, ?, ?)",
                envId, "test-env-" + envId, "Test Env");

        appId = UUID.randomUUID();
        jdbcTemplate.update(
                "INSERT INTO apps (id, environment_id, slug, display_name) VALUES (?, ?, ?, ?)",
                appId, envId, "app-it-" + appId, "App IT");

        appVersionId = UUID.randomUUID();
        jdbcTemplate.update(
                "INSERT INTO app_versions (id, app_id, version, jar_path, jar_checksum) VALUES (?, ?, ?, ?, ?)",
                appVersionId, appId, 1, "/tmp/app.jar", "deadbeef");
    }

    @AfterEach
    void cleanup() {
        jdbcTemplate.update("DELETE FROM deployments WHERE app_id = ?", appId);
        jdbcTemplate.update("DELETE FROM app_versions WHERE app_id = ?", appId);
        jdbcTemplate.update("DELETE FROM apps WHERE id = ?", appId);
        jdbcTemplate.update("DELETE FROM environments WHERE id = ?", envId);
    }

    @Test
    void deployedConfigSnapshot_roundtrips() {
        // given — create a deployment then store a snapshot
        ApplicationConfig agentConfig = new ApplicationConfig();
        agentConfig.setApplication("app-it");
        agentConfig.setEnvironment("staging");
        agentConfig.setVersion(3);
        agentConfig.setSamplingRate(0.5);

        UUID jarVersionId = UUID.randomUUID();
        DeploymentConfigSnapshot snapshot = new DeploymentConfigSnapshot(
                jarVersionId,
                agentConfig,
                Map.of("memoryLimitMb", 1024, "replicas", 2),
                null
        );

        // pre-V4 rows: no creator (createdBy is nullable)
        UUID deploymentId = repository.create(appId, appVersionId, envId, "test-container", null);
        repository.saveDeployedConfigSnapshot(deploymentId, snapshot);

        // when — load it back
        Deployment loaded = repository.findById(deploymentId).orElseThrow();

        // then
        assertThat(loaded.deployedConfigSnapshot().jarVersionId()).isEqualTo(jarVersionId);
        assertThat(loaded.deployedConfigSnapshot().agentConfig().getSamplingRate()).isEqualTo(0.5);
        assertThat(loaded.deployedConfigSnapshot().containerConfig()).containsEntry("memoryLimitMb", 1024);
    }

    @Test
    void deployedConfigSnapshot_nullByDefault() {
        // deployments created without a snapshot must return null (not throw)
        UUID deploymentId = repository.create(appId, appVersionId, envId, "test-container-null", null);

        Deployment loaded = repository.findById(deploymentId).orElseThrow();

        assertThat(loaded.deployedConfigSnapshot()).isNull();
    }

    @Test
    void deleteFailedByAppAndEnvironment_keepsStoppedAndActive() {
        // given: one STOPPED (checkpoint), one FAILED, one RUNNING
        UUID stoppedId = repository.create(appId, appVersionId, envId, "stopped", null);
        repository.updateStatus(stoppedId, com.cameleer.server.core.runtime.DeploymentStatus.STOPPED, null, null);

        UUID failedId = repository.create(appId, appVersionId, envId, "failed", null);
        repository.updateStatus(failedId, com.cameleer.server.core.runtime.DeploymentStatus.FAILED, null, "boom");

        UUID runningId = repository.create(appId, appVersionId, envId, "running", null);
        repository.updateStatus(runningId, com.cameleer.server.core.runtime.DeploymentStatus.RUNNING, "c1", null);

        // when
        repository.deleteFailedByAppAndEnvironment(appId, envId);

        // then: STOPPED and RUNNING survive; FAILED is gone
        assertThat(repository.findById(stoppedId)).isPresent();
        assertThat(repository.findById(runningId)).isPresent();
        assertThat(repository.findById(failedId)).isEmpty();
    }

    @Test
    void deployedConfigSnapshot_canBeClearedToNull() {
        UUID jarVersionId = UUID.randomUUID();
        DeploymentConfigSnapshot snapshot = new DeploymentConfigSnapshot(
                jarVersionId,
                new ApplicationConfig(),
                Map.of(),
                null
        );

        UUID deploymentId = repository.create(appId, appVersionId, envId, "test-container-clear", null);
        repository.saveDeployedConfigSnapshot(deploymentId, snapshot);
        repository.saveDeployedConfigSnapshot(deploymentId, null);

        Deployment loaded = repository.findById(deploymentId).orElseThrow();
        assertThat(loaded.deployedConfigSnapshot()).isNull();
    }
}
@@ -0,0 +1,58 @@
package com.cameleer.server.app.storage;

import com.cameleer.server.app.AbstractPostgresIT;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;

import java.util.List;
import java.util.Map;

import static org.assertj.core.api.Assertions.assertThat;

class V4DeploymentCreatedByMigrationIT extends AbstractPostgresIT {

    @Autowired JdbcTemplate jdbc;

    @Test
    void created_by_column_exists_with_correct_type_and_nullable() {
        // Scope to current schema — Testcontainers reuse can otherwise leave
        // a previous run's tenant_default schema visible alongside public.
        List<Map<String, Object>> cols = jdbc.queryForList(
                "SELECT column_name, data_type, is_nullable " +
                "FROM information_schema.columns " +
                "WHERE table_name = 'deployments' AND column_name = 'created_by' " +
                "  AND table_schema = current_schema()"
        );
        assertThat(cols).hasSize(1);
        assertThat(cols.get(0)).containsEntry("data_type", "text");
        assertThat(cols.get(0)).containsEntry("is_nullable", "YES");
    }

    @Test
    void created_by_index_exists() {
        Integer count = jdbc.queryForObject(
                "SELECT count(*)::int FROM pg_indexes " +
                "WHERE tablename = 'deployments' AND indexname = 'idx_deployments_created_by' " +
                "  AND schemaname = current_schema()",
                Integer.class
        );
        assertThat(count).isEqualTo(1);
    }

    @Test
    void created_by_has_fk_to_users() {
        Integer count = jdbc.queryForObject(
                "SELECT count(*)::int FROM information_schema.table_constraints tc " +
                "JOIN information_schema.constraint_column_usage ccu " +
                "  ON tc.constraint_name = ccu.constraint_name " +
                "WHERE tc.table_name = 'deployments' " +
                "  AND tc.constraint_type = 'FOREIGN KEY' " +
                "  AND ccu.table_name = 'users' " +
                "  AND ccu.column_name = 'user_id' " +
                "  AND tc.table_schema = current_schema()",
                Integer.class
        );
        assertThat(count).isGreaterThanOrEqualTo(1);
    }
}
@@ -3,5 +3,6 @@ package com.cameleer.server.core.admin;
 public enum AuditCategory {
     INFRA, AUTH, USER_MGMT, CONFIG, RBAC, AGENT,
     OUTBOUND_CONNECTION_CHANGE, OUTBOUND_HTTP_TRUST_CHANGE,
-    ALERT_RULE_CHANGE, ALERT_SILENCE_CHANGE
+    ALERT_RULE_CHANGE, ALERT_SILENCE_CHANGE,
+    DEPLOYMENT
 }
@@ -33,7 +33,9 @@ public final class ConfigMerger {
             boolVal(appConfig, envConfig, "replayEnabled", true),
             stringVal(appConfig, envConfig, "runtimeType", "auto"),
             stringVal(appConfig, envConfig, "customArgs", ""),
-            stringList(appConfig, envConfig, "extraNetworks")
+            stringList(appConfig, envConfig, "extraNetworks"),
+            boolVal(appConfig, envConfig, "externalRouting", true),
+            global.certResolver()
         );
     }

@@ -107,6 +109,7 @@ public final class ConfigMerger {
             int cpuRequest,
             String routingMode,
             String routingDomain,
-            String serverUrl
+            String serverUrl,
+            String certResolver
     ) {}
 }
@@ -19,14 +19,23 @@ public record Deployment(
     String containerName,
     String errorMessage,
     Map<String, Object> resolvedConfig,
+    DeploymentConfigSnapshot deployedConfigSnapshot,
     Instant deployedAt,
     Instant stoppedAt,
-    Instant createdAt
+    Instant createdAt,
+    String createdBy
 ) {
     public Deployment withStatus(DeploymentStatus newStatus) {
         return new Deployment(id, appId, appVersionId, environmentId, newStatus,
                 targetState, deploymentStrategy, replicaStates, deployStage,
                 containerId, containerName, errorMessage, resolvedConfig,
-                deployedAt, stoppedAt, createdAt);
+                deployedConfigSnapshot, deployedAt, stoppedAt, createdAt, createdBy);
     }

+    public Deployment withDeployedConfigSnapshot(DeploymentConfigSnapshot snapshot) {
+        return new Deployment(id, appId, appVersionId, environmentId, status,
+                targetState, deploymentStrategy, replicaStates, deployStage,
+                containerId, containerName, errorMessage, resolvedConfig,
+                snapshot, deployedAt, stoppedAt, createdAt, createdBy);
+    }
 }
@@ -0,0 +1,22 @@
package com.cameleer.server.core.runtime;

import com.cameleer.common.model.ApplicationConfig;

import java.util.List;
import java.util.Map;
import java.util.UUID;

/**
 * Snapshot of the config that was deployed, captured at the moment a deployment
 * transitions to RUNNING. Used for "last known good" restore (checkpoints) and
 * for dirty-state detection on the deployment page.
 *
 * <p>This is persisted as JSONB in {@code deployments.deployed_config_snapshot}.</p>
 */
public record DeploymentConfigSnapshot(
        UUID jarVersionId,
        ApplicationConfig agentConfig,
        Map<String, Object> containerConfig,
        List<String> sensitiveKeys
) {
}
@@ -9,9 +9,11 @@ public interface DeploymentRepository {
     List<Deployment> findByEnvironmentId(UUID environmentId);
     Optional<Deployment> findById(UUID id);
     Optional<Deployment> findActiveByAppIdAndEnvironmentId(UUID appId, UUID environmentId);
-    UUID create(UUID appId, UUID appVersionId, UUID environmentId, String containerName);
+    Optional<Deployment> findActiveByAppIdAndEnvironmentIdExcluding(UUID appId, UUID environmentId, UUID excludeDeploymentId);
+    UUID create(UUID appId, UUID appVersionId, UUID environmentId, String containerName, String createdBy);
     void updateStatus(UUID id, DeploymentStatus status, String containerId, String errorMessage);
     void markDeployed(UUID id);
     void markStopped(UUID id);
-    void deleteTerminalByAppAndEnvironment(UUID appId, UUID environmentId);
+    /** Delete FAILED deployments for this (app, env). STOPPED deployments are preserved as checkpoints. */
+    void deleteFailedByAppAndEnvironment(UUID appId, UUID environmentId);
 }
@@ -23,19 +23,19 @@ public class DeploymentService {
     public Deployment getById(UUID id) { return deployRepo.findById(id).orElseThrow(() -> new IllegalArgumentException("Deployment not found: " + id)); }

     /** Create a deployment record. Actual container start is handled by DeploymentExecutor (async). */
-    public Deployment createDeployment(UUID appId, UUID appVersionId, UUID environmentId) {
+    public Deployment createDeployment(UUID appId, UUID appVersionId, UUID environmentId, String createdBy) {
         App app = appService.getById(appId);
         Environment env = envService.getById(environmentId);
         String containerName = env.slug() + "-" + app.slug();

-        deployRepo.deleteTerminalByAppAndEnvironment(appId, environmentId);
-        UUID deploymentId = deployRepo.create(appId, appVersionId, environmentId, containerName);
+        deployRepo.deleteFailedByAppAndEnvironment(appId, environmentId);
+        UUID deploymentId = deployRepo.create(appId, appVersionId, environmentId, containerName, createdBy);
         return deployRepo.findById(deploymentId).orElseThrow();
     }

     /** Promote: deploy the same app version to a different environment. */
-    public Deployment promote(UUID appId, UUID appVersionId, UUID targetEnvironmentId) {
-        return createDeployment(appId, appVersionId, targetEnvironmentId);
+    public Deployment promote(UUID appId, UUID appVersionId, UUID targetEnvironmentId, String createdBy) {
+        return createDeployment(appId, appVersionId, targetEnvironmentId, createdBy);
     }

     public void markRunning(UUID deploymentId, String containerId) {
@@ -0,0 +1,31 @@
package com.cameleer.server.core.runtime;

/**
 * Supported deployment strategies. Persisted as a kebab-case string on
 * ApplicationConfig / ResolvedContainerConfig; {@link #fromWire(String)} is
 * the only conversion entry point and falls back to {@link #BLUE_GREEN} for
 * unknown or null input so the executor never has to null-check.
 */
public enum DeploymentStrategy {
    BLUE_GREEN("blue-green"),
    ROLLING("rolling");

    private final String wire;

    DeploymentStrategy(String wire) {
        this.wire = wire;
    }

    public String toWire() {
        return wire;
    }

    public static DeploymentStrategy fromWire(String value) {
        if (value == null) return BLUE_GREEN;
        String normalized = value.trim().toLowerCase();
        for (DeploymentStrategy s : values()) {
            if (s.wire.equals(normalized)) return s;
        }
        return BLUE_GREEN;
    }
}
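The fallback contract described above (null, unknown, or differently-cased wire values all resolve to a usable strategy) can be exercised with a small standalone copy of the enum. `DeploymentStrategyDemo` is a name invented for this sketch; the nested enum mirrors the class shown in the diff:

```java
// Standalone sketch of the fromWire contract: null and unknown inputs fall
// back to BLUE_GREEN, and casing/whitespace are normalised before matching.
public class DeploymentStrategyDemo {

    enum DeploymentStrategy {
        BLUE_GREEN("blue-green"),
        ROLLING("rolling");

        private final String wire;

        DeploymentStrategy(String wire) {
            this.wire = wire;
        }

        static DeploymentStrategy fromWire(String value) {
            if (value == null) return BLUE_GREEN;        // null -> safe default
            String normalized = value.trim().toLowerCase();
            for (DeploymentStrategy s : values()) {
                if (s.wire.equals(normalized)) return s;
            }
            return BLUE_GREEN;                           // unknown -> safe default
        }
    }

    public static void main(String[] args) {
        System.out.println(DeploymentStrategy.fromWire("  Rolling ")); // ROLLING
        System.out.println(DeploymentStrategy.fromWire(null));         // BLUE_GREEN
        System.out.println(DeploymentStrategy.fromWire("canary"));     // BLUE_GREEN
    }
}
```

Because every path returns a value, callers such as the executor never need a null check after deserialising a persisted strategy string.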
@@ -0,0 +1,103 @@
|
||||
package com.cameleer.server.core.runtime;
|
||||
|
||||
import com.cameleer.common.model.ApplicationConfig;
|
||||
import com.fasterxml.jackson.databind.JsonNode;
|
||||
import com.fasterxml.jackson.databind.ObjectMapper;
|
||||
import com.fasterxml.jackson.databind.node.ObjectNode;
|
||||
|
||||
import java.util.ArrayList;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Objects;
|
||||
import java.util.Set;
|
||||
import java.util.TreeSet;
|
||||
import java.util.UUID;
|
||||
|
||||
/**
|
||||
* Compares the app's current desired state (JAR + agent config + container config) to the
|
||||
* config snapshot from the last successful deployment, producing a structured dirty result.
|
||||
*
|
||||
* <p>Pure logic — no IO, no Spring. Safe to unit-test as a POJO.
|
||||
* Caller must supply an {@link ObjectMapper} configured with {@code JavaTimeModule} so that
|
||||
* {@code ApplicationConfig.updatedAt} (an {@link java.time.Instant}) serialises correctly.</p>
|
||||
*/
|
||||
public class DirtyStateCalculator {
|
||||
|
||||
// Live-pushed fields are excluded from the deploy diff: changes to them take effect
|
||||
// via SSE config-update without a redeploy, so they are not "pending deploy" when they
|
||||
// differ from the last successful deployment snapshot. See ui/rules: the Traces & Taps
|
||||
// and Route Recording tabs apply with ?apply=live and "never mark dirty".
|
||||
private static final Set<String> AGENT_CONFIG_IGNORED_KEYS = Set.of(
|
||||
"version", "updatedAt", "updatedBy", "environment", "application",
|
||||
"taps", "tapVersion", "tracedProcessors", "routeRecording"
|
||||
);
|
||||
|
||||
private final ObjectMapper mapper;
|
||||
|
||||
public DirtyStateCalculator(ObjectMapper mapper) {
|
||||
this.mapper = mapper;
|
||||
}
|
||||
|
||||
private JsonNode scrubAgentConfig(JsonNode node) {
|
||||
        if (!(node instanceof ObjectNode obj)) return node;
        ObjectNode copy = obj.deepCopy();
        for (String k : AGENT_CONFIG_IGNORED_KEYS) copy.remove(k);
        return copy;
    }

    public DirtyStateResult compute(UUID desiredJarVersionId,
                                    ApplicationConfig desiredAgentConfig,
                                    Map<String, Object> desiredContainerConfig,
                                    DeploymentConfigSnapshot snapshot) {
        List<DirtyStateResult.Difference> diffs = new ArrayList<>();

        if (snapshot == null) {
            diffs.add(new DirtyStateResult.Difference("snapshot", "(none)", "(none)"));
            return new DirtyStateResult(true, diffs);
        }

        if (!Objects.equals(desiredJarVersionId, snapshot.jarVersionId())) {
            diffs.add(new DirtyStateResult.Difference("jarVersionId",
                    String.valueOf(desiredJarVersionId), String.valueOf(snapshot.jarVersionId())));
        }

        compareJson("agentConfig",
                scrubAgentConfig(mapper.valueToTree(desiredAgentConfig)),
                scrubAgentConfig(mapper.valueToTree(snapshot.agentConfig())),
                diffs);
        compareJson("containerConfig", mapper.valueToTree(desiredContainerConfig),
                mapper.valueToTree(snapshot.containerConfig()), diffs);

        return new DirtyStateResult(!diffs.isEmpty(), diffs);
    }

    private void compareJson(String prefix, JsonNode desired, JsonNode deployed,
                             List<DirtyStateResult.Difference> diffs) {
        if (!(desired instanceof ObjectNode desiredObj) || !(deployed instanceof ObjectNode deployedObj)) {
            if (!Objects.equals(desired, deployed)) {
                diffs.add(new DirtyStateResult.Difference(prefix,
                        nodeToString(desired), nodeToString(deployed)));
            }
            return;
        }
        TreeSet<String> keys = new TreeSet<>();
        desiredObj.fieldNames().forEachRemaining(keys::add);
        deployedObj.fieldNames().forEachRemaining(keys::add);
        for (String key : keys) {
            JsonNode d = desiredObj.get(key);
            JsonNode p = deployedObj.get(key);
            if (Objects.equals(d, p)) continue;
            if (d instanceof ObjectNode && p instanceof ObjectNode) {
                compareJson(prefix + "." + key, d, p, diffs);
            } else {
                diffs.add(new DirtyStateResult.Difference(prefix + "." + key, nodeToString(d), nodeToString(p)));
            }
        }
    }

    private static String nodeToString(JsonNode n) {
        if (n == null) return "(none)";
        if (n.isValueNode()) return n.asText();
        return n.toString(); // arrays/objects: compact JSON
    }
}
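The key-union recursion in `compareJson` can be exercised in isolation. Below is a minimal standalone sketch of the same idea, using plain `java.util` maps in place of Jackson `ObjectNode`s; the class and method names are illustrative, not part of the codebase:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.TreeSet;

public class DeepDiffSketch {
    // Recursively collect dotted paths whose values differ between two map trees.
    @SuppressWarnings("unchecked")
    static List<String> changedPaths(String prefix, Map<String, ?> desired, Map<String, ?> deployed) {
        List<String> out = new ArrayList<>();
        TreeSet<String> keys = new TreeSet<>(desired.keySet());
        keys.addAll(deployed.keySet());                  // union of keys, sorted for stable output
        for (String key : keys) {
            Object d = desired.get(key);
            Object p = deployed.get(key);
            if (Objects.equals(d, p)) continue;          // unchanged: skip
            if (d instanceof Map && p instanceof Map) {  // both objects: recurse with dotted prefix
                out.addAll(changedPaths(prefix + "." + key, (Map<String, ?>) d, (Map<String, ?>) p));
            } else {
                out.add(prefix + "." + key);             // leaf (or type change): report this path
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> desired = Map.of("samplingRate", 1.0, "http", Map.of("port", 9090));
        Map<String, Object> deployed = Map.of("samplingRate", 1.0, "http", Map.of("port", 8080));
        System.out.println(changedPaths("agentConfig", desired, deployed)); // prints [agentConfig.http.port]
    }
}
```

Note that, as in `compareJson`, only object-vs-object pairs recurse; anything else (scalars, arrays, type changes) is reported as a single leaf difference.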
@@ -0,0 +1,7 @@
package com.cameleer.server.core.runtime;

import java.util.List;

public record DirtyStateResult(boolean dirty, List<Difference> differences) {
    public record Difference(String field, String staged, String deployed) {}
}
@@ -22,7 +22,9 @@ public record ResolvedContainerConfig(
    boolean replayEnabled,
    String runtimeType,
    String customArgs,
    List<String> extraNetworks
    List<String> extraNetworks,
    boolean externalRouting,
    String certResolver
) {
    public long memoryLimitBytes() {
        return (long) memoryLimitMb * 1024 * 1024;

@@ -0,0 +1,60 @@
package com.cameleer.server.core.search;

import java.util.regex.Pattern;

/**
 * Structured attribute filter for execution search.
 * <p>
 * Value semantics:
 * <ul>
 *   <li>{@code value == null} or blank -> key-exists check</li>
 *   <li>{@code value} contains {@code *} -> wildcard match (translated to SQL LIKE pattern)</li>
 *   <li>otherwise -> exact match</li>
 * </ul>
 * <p>
 * Keys must match {@code ^[a-zA-Z0-9._-]+$} — they are later inlined into
 * ClickHouse SQL via {@code JSONExtractString}, which does not accept a
 * parameter placeholder for the JSON path. Values are always parameter-bound.
 */
public record AttributeFilter(String key, String value) {

    private static final Pattern KEY_PATTERN = Pattern.compile("^[a-zA-Z0-9._-]+$");

    public AttributeFilter {
        if (key == null || !KEY_PATTERN.matcher(key).matches()) {
            throw new IllegalArgumentException(
                    "Invalid attribute key: must match " + KEY_PATTERN.pattern() + ", got: " + key);
        }
        if (value != null && value.isBlank()) {
            value = null;
        }
    }

    public boolean isKeyOnly() {
        return value == null;
    }

    public boolean isWildcard() {
        return value != null && value.indexOf('*') >= 0;
    }

    /**
     * Returns a SQL LIKE pattern for wildcard matches with {@code %} / {@code _} / {@code \}
     * in the source value escaped, or {@code null} for exact / key-only filters.
     */
    public String toLikePattern() {
        if (!isWildcard()) return null;
        StringBuilder sb = new StringBuilder(value.length() + 4);
        for (int i = 0; i < value.length(); i++) {
            char c = value.charAt(i);
            switch (c) {
                case '\\' -> sb.append("\\\\");
                case '%' -> sb.append("\\%");
                case '_' -> sb.append("\\_");
                case '*' -> sb.append('%');
                default -> sb.append(c);
            }
        }
        return sb.toString();
    }
}
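A hedged sketch of how a query builder might consume the three filter modes. The `attributes` column name, the `predicate` helper, and the key-exists rendering (`!= ''`) are assumptions for illustration; only the escaping rules mirror `toLikePattern()` above, and values stay parameter-bound as the Javadoc requires:

```java
// Hypothetical consumer of an attribute filter; names here are illustrative.
public class AttributeFilterSqlSketch {

    /** Escape LIKE metacharacters, then translate '*' to '%', mirroring toLikePattern(). */
    static String toLikePattern(String value) {
        StringBuilder sb = new StringBuilder(value.length() + 4);
        for (char c : value.toCharArray()) {
            switch (c) {
                case '\\' -> sb.append("\\\\");
                case '%' -> sb.append("\\%");
                case '_' -> sb.append("\\_");
                case '*' -> sb.append('%');
                default -> sb.append(c);
            }
        }
        return sb.toString();
    }

    /** Render one filter into a SQL predicate; the value stays a bound parameter ('?'). */
    static String predicate(String key, String value) {
        // The key is regex-validated by the record's compact constructor, so inlining it is safe.
        String path = "JSONExtractString(attributes, '" + key + "')";
        if (value == null || value.isBlank()) return path + " != ''"; // key-exists (assumed rendering)
        if (value.indexOf('*') >= 0) return path + " LIKE ?";         // wildcard: bind toLikePattern(value)
        return path + " = ?";                                         // exact: bind value as-is
    }

    public static void main(String[] args) {
        System.out.println(predicate("order.id", "47*")); // JSONExtractString(attributes, 'order.id') LIKE ?
        System.out.println(toLikePattern("a_b%c\\d*"));   // a\_b\%c\\d%
    }
}
```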
@@ -9,7 +9,7 @@ import java.util.List;
 * @param q free-text search across message and stack trace
 * @param levels log level filter (e.g. ["WARN","ERROR"]), OR-joined
 * @param application application ID filter (nullable = all apps)
 * @param instanceId agent instance ID filter
 * @param instanceId agent instance ID filter (single value; coexists with instanceIds)
 * @param exchangeId Camel exchange ID filter
 * @param logger logger name substring filter
 * @param environment optional environment filter (e.g. "dev", "staging", "prod")
@@ -19,6 +19,9 @@ import java.util.List;
 * @param cursor ISO timestamp cursor for keyset pagination
 * @param limit page size (1-500, default 100)
 * @param sort sort direction: "asc" or "desc" (default "desc")
 * @param instanceIds multi-value instance ID filter (IN clause); scopes logs to one deployment's
 *                    replicas when provided. Both instanceId and instanceIds may coexist — both
 *                    conditions apply (AND). Empty/null means no additional filtering.
 */
public record LogSearchRequest(
    String q,
@@ -33,7 +36,8 @@ public record LogSearchRequest(
    Instant to,
    String cursor,
    int limit,
    String sort
    String sort,
    List<String> instanceIds
) {

    private static final int DEFAULT_LIMIT = 100;
@@ -45,5 +49,6 @@ public record LogSearchRequest(
        if (sort == null || !"asc".equalsIgnoreCase(sort)) sort = "desc";
        if (levels == null) levels = List.of();
        if (sources == null) sources = List.of();
        if (instanceIds == null) instanceIds = List.of();
    }
}

@@ -54,7 +54,8 @@ public record SearchRequest(
    String sortField,
    String sortDir,
    String afterExecutionId,
    String environment
    String environment,
    List<AttributeFilter> attributeFilters
) {

    private static final int DEFAULT_LIMIT = 50;
@@ -83,6 +84,24 @@ public record SearchRequest(
        if (offset < 0) offset = 0;
        if (sortField == null || !ALLOWED_SORT_FIELDS.contains(sortField)) sortField = "startTime";
        if (!"asc".equalsIgnoreCase(sortDir)) sortDir = "desc";
        if (attributeFilters == null) attributeFilters = List.of();
    }

    /** Legacy 21-arg constructor preserved for existing call sites — defaults attributeFilters to empty. */
    public SearchRequest(
            String status, Instant timeFrom, Instant timeTo,
            Long durationMin, Long durationMax, String correlationId,
            String text, String textInBody, String textInHeaders, String textInErrors,
            String routeId, String instanceId, String processorType,
            String applicationId, List<String> instanceIds,
            int offset, int limit, String sortField, String sortDir,
            String afterExecutionId, String environment
    ) {
        this(status, timeFrom, timeTo, durationMin, durationMax, correlationId,
                text, textInBody, textInHeaders, textInErrors,
                routeId, instanceId, processorType, applicationId, instanceIds,
                offset, limit, sortField, sortDir, afterExecutionId, environment,
                List.of());
    }

    /** Returns the snake_case column name for ORDER BY. */
@@ -96,7 +115,8 @@ public record SearchRequest(
            status, timeFrom, timeTo, durationMin, durationMax, correlationId,
            text, textInBody, textInHeaders, textInErrors,
            routeId, instanceId, processorType, applicationId, resolvedInstanceIds,
            offset, limit, sortField, sortDir, afterExecutionId, environment
            offset, limit, sortField, sortDir, afterExecutionId, environment,
            attributeFilters
        );
    }

@@ -106,7 +126,8 @@ public record SearchRequest(
            status, timeFrom, timeTo, durationMin, durationMax, correlationId,
            text, textInBody, textInHeaders, textInErrors,
            routeId, instanceId, processorType, applicationId, instanceIds,
            offset, limit, sortField, sortDir, afterExecutionId, env
            offset, limit, sortField, sortDir, afterExecutionId, env,
            attributeFilters
        );
    }

@@ -122,7 +143,8 @@ public record SearchRequest(
            status, ts, timeTo, durationMin, durationMax, correlationId,
            text, textInBody, textInHeaders, textInErrors,
            routeId, instanceId, processorType, applicationId, instanceIds,
            offset, limit, sortField, sortDir, afterExecutionId, environment
            offset, limit, sortField, sortDir, afterExecutionId, environment,
            attributeFilters
        );
    }
}

@@ -3,7 +3,6 @@ package com.cameleer.server.core.storage;
|
||||
import com.cameleer.common.graph.RouteGraph;
|
||||
import com.cameleer.server.core.ingestion.TaggedDiagram;
|
||||
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Optional;
|
||||
|
||||
@@ -15,7 +14,18 @@ public interface DiagramStore {
|
||||
|
||||
Optional<String> findContentHashForRoute(String routeId, String instanceId);
|
||||
|
||||
Optional<String> findContentHashForRouteByAgents(String routeId, List<String> instanceIds);
|
||||
/**
|
||||
* Return the most recently stored {@code content_hash} for the given
|
||||
* {@code (applicationId, environment, routeId)} triple, regardless of the
|
||||
* agent instance that produced it.
|
||||
*
|
||||
* <p>Unlike {@link #findContentHashForRoute(String, String)}, this lookup
|
||||
* is independent of the agent registry — so it keeps working for routes
|
||||
* whose publishing agents have since been redeployed or removed.
|
||||
*/
|
||||
Optional<String> findLatestContentHashForAppRoute(String applicationId,
|
||||
String routeId,
|
||||
String environment);
|
||||
|
||||
Map<String, String> findProcessorRouteMapping(String applicationId, String environment);
|
||||
}
|
||||
|
||||
@@ -0,0 +1,36 @@
|
||||
package com.cameleer.server.core.storage;
|
||||
|
||||
import com.cameleer.server.core.storage.model.ServerInstanceInfo;
|
||||
import com.cameleer.server.core.storage.model.ServerMetricCatalogEntry;
|
||||
import com.cameleer.server.core.storage.model.ServerMetricQueryRequest;
|
||||
import com.cameleer.server.core.storage.model.ServerMetricQueryResponse;
|
||||
|
||||
import java.time.Instant;
|
||||
import java.util.List;
|
||||
|
||||
/**
|
||||
* Read-side access to the ClickHouse {@code server_metrics} table. Exposed
|
||||
* to dashboards through {@code /api/v1/admin/server-metrics/**} so SaaS
|
||||
* control planes don't need direct ClickHouse access.
|
||||
*/
|
||||
public interface ServerMetricsQueryStore {
|
||||
|
||||
/**
|
||||
* Catalog of metric names observed in {@code [from, to)} along with their
|
||||
* type, the set of statistics emitted, and the union of tag keys seen.
|
||||
*/
|
||||
List<ServerMetricCatalogEntry> catalog(Instant from, Instant to);
|
||||
|
||||
/**
|
||||
* Distinct {@code server_instance_id} values that wrote at least one
|
||||
* sample in {@code [from, to)}, with first/last seen timestamps.
|
||||
*/
|
||||
List<ServerInstanceInfo> listInstances(Instant from, Instant to);
|
||||
|
||||
/**
|
||||
* Generic time-series query. See {@link ServerMetricQueryRequest} for
|
||||
* request semantics. Implementations must enforce input validation and
|
||||
* reject unsafe inputs with {@link IllegalArgumentException}.
|
||||
*/
|
||||
ServerMetricQueryResponse query(ServerMetricQueryRequest request);
|
||||
}
|
||||
@@ -0,0 +1,16 @@
package com.cameleer.server.core.storage;

import com.cameleer.server.core.storage.model.ServerMetricSample;

import java.util.List;

/**
 * Sink for periodic snapshots of the server's own Micrometer meter registry.
 * Implementations persist the samples (e.g. to ClickHouse) so server
 * self-metrics survive restarts and can be queried historically without an
 * external Prometheus.
 */
public interface ServerMetricsStore {

    void insertBatch(List<ServerMetricSample> samples);
}
@@ -0,0 +1,15 @@
package com.cameleer.server.core.storage.model;

import java.time.Instant;

/**
 * One row of the {@code /api/v1/admin/server-metrics/instances} response.
 * Used by dashboards to partition counter-delta computations across server
 * process boundaries (each boot rotates the id).
 */
public record ServerInstanceInfo(
    String serverInstanceId,
    Instant firstSeen,
    Instant lastSeen
) {
}
@@ -0,0 +1,17 @@
package com.cameleer.server.core.storage.model;

import java.util.List;

/**
 * One row of the {@code /api/v1/admin/server-metrics/catalog} response.
 * Surfaces the set of statistics and tag keys observed for a metric across
 * the requested window, so dashboards can build selectors without ClickHouse
 * access.
 */
public record ServerMetricCatalogEntry(
    String metricName,
    String metricType,
    List<String> statistics,
    List<String> tagKeys
) {
}
@@ -0,0 +1,10 @@
package com.cameleer.server.core.storage.model;

import java.time.Instant;

/** One {@code (bucket, value)} point of a server-metrics series. */
public record ServerMetricPoint(
    Instant t,
    double v
) {
}
@@ -0,0 +1,40 @@
package com.cameleer.server.core.storage.model;

import java.time.Instant;
import java.util.List;
import java.util.Map;

/**
 * Request contract for the generic server-metrics time-series query.
 *
 * <p>{@code aggregation} controls how multiple samples within a bucket
 * collapse: {@code avg|sum|max|min|latest}. {@code mode} controls counter
 * handling: {@code raw} returns values as stored (cumulative for counters),
 * {@code delta} returns per-bucket positive-clipped differences computed
 * per {@code server_instance_id}.
 *
 * <p>{@code statistic} filters which Micrometer sub-measurement to read
 * ({@code value} / {@code count} / {@code total_time} / {@code total} /
 * {@code max} / {@code mean}). {@code mean} is a derived statistic for
 * timers: {@code sum(total_time|total) / sum(count)} per bucket.
 *
 * <p>{@code groupByTags} splits the output into one series per unique tag
 * combination. {@code filterTags} narrows the input to samples whose tag
 * map matches every entry.
 *
 * <p>{@code serverInstanceIds} is an optional allow-list. When null or
 * empty all instances observed in the window are included.
 */
public record ServerMetricQueryRequest(
    String metric,
    String statistic,
    Instant from,
    Instant to,
    Integer stepSeconds,
    List<String> groupByTags,
    Map<String, String> filterTags,
    String aggregation,
    String mode,
    List<String> serverInstanceIds
) {
}
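The `delta` counter handling described in the Javadoc can be sketched standalone. Per server instance, each bucket reports the positive-clipped difference from the previous bucket, so a counter reset across a restart clips to zero instead of producing a negative spike (the class and method names below are illustrative, not part of the codebase):

```java
import java.util.ArrayList;
import java.util.List;

public class DeltaModeSketch {

    /** cumulative counter readings for ONE server instance, one per bucket, oldest first. */
    static List<Double> toDeltas(List<Double> cumulative) {
        List<Double> deltas = new ArrayList<>();
        double prev = Double.NaN;
        for (double v : cumulative) {
            if (Double.isNaN(prev)) {
                deltas.add(0.0);                         // first observed bucket: no baseline yet
            } else {
                deltas.add(Math.max(0.0, v - prev));     // clip negatives (counter reset on restart)
            }
            prev = v;
        }
        return deltas;
    }

    public static void main(String[] args) {
        // The counter restarts between bucket 3 and 4 (120 -> 5): that delta clips to 0.
        System.out.println(toDeltas(List.of(10.0, 40.0, 120.0, 5.0, 25.0)));
        // prints [0.0, 30.0, 80.0, 0.0, 20.0]
    }
}
```

This is why the Javadoc insists deltas are computed per `server_instance_id`: mixing readings from two server processes would produce spurious resets and clipped buckets.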
@@ -0,0 +1,14 @@
package com.cameleer.server.core.storage.model;

import java.util.List;

/** Response of the generic server-metrics time-series query. */
public record ServerMetricQueryResponse(
    String metric,
    String statistic,
    String aggregation,
    String mode,
    int stepSeconds,
    List<ServerMetricSeries> series
) {
}
@@ -0,0 +1,23 @@
package com.cameleer.server.core.storage.model;

import java.time.Instant;
import java.util.Map;

/**
 * A single sample of the server's own Micrometer registry, captured by a
 * scheduled snapshot and destined for the ClickHouse {@code server_metrics}
 * table. One {@code ServerMetricSample} per Micrometer {@code Measurement},
 * so Timers and DistributionSummaries produce multiple samples per tick
 * (distinguished by {@link #statistic()}).
 */
public record ServerMetricSample(
    String tenantId,
    Instant collectedAt,
    String serverInstanceId,
    String metricName,
    String metricType,
    String statistic,
    double value,
    Map<String, String> tags
) {
}
@@ -0,0 +1,14 @@
package com.cameleer.server.core.storage.model;

import java.util.List;
import java.util.Map;

/**
 * One series of the server-metrics query response, identified by its
 * {@link #tags} group (empty map when the query had no {@code groupByTags}).
 */
public record ServerMetricSeries(
    Map<String, String> tags,
    List<ServerMetricPoint> points
) {
}
@@ -9,4 +9,10 @@ class AuditCategoryTest {
        assertThat(AuditCategory.valueOf("ALERT_RULE_CHANGE")).isNotNull();
        assertThat(AuditCategory.valueOf("ALERT_SILENCE_CHANGE")).isNotNull();
    }

    @Test
    void deploymentCategoryPresent() {
        assertThat(AuditCategory.valueOf("DEPLOYMENT"))
                .isEqualTo(AuditCategory.DEPLOYMENT);
    }
}

@@ -22,7 +22,7 @@ class ChunkAccumulatorTest {
        public void store(com.cameleer.server.core.ingestion.TaggedDiagram d) {}
        public Optional<com.cameleer.common.graph.RouteGraph> findByContentHash(String h) { return Optional.empty(); }
        public Optional<String> findContentHashForRoute(String r, String a) { return Optional.empty(); }
        public Optional<String> findContentHashForRouteByAgents(String r, List<String> a) { return Optional.empty(); }
        public Optional<String> findLatestContentHashForAppRoute(String app, String r, String env) { return Optional.empty(); }
        public Map<String, String> findProcessorRouteMapping(String app, String env) { return Map.of(); }
    };

@@ -0,0 +1,34 @@
package com.cameleer.server.core.runtime;

import org.junit.jupiter.api.Test;

import static org.assertj.core.api.Assertions.assertThat;

class DeploymentStrategyTest {

    @Test
    void fromWire_knownValues() {
        assertThat(DeploymentStrategy.fromWire("blue-green")).isEqualTo(DeploymentStrategy.BLUE_GREEN);
        assertThat(DeploymentStrategy.fromWire("rolling")).isEqualTo(DeploymentStrategy.ROLLING);
    }

    @Test
    void fromWire_caseInsensitiveAndTrims() {
        assertThat(DeploymentStrategy.fromWire("BLUE-GREEN")).isEqualTo(DeploymentStrategy.BLUE_GREEN);
        assertThat(DeploymentStrategy.fromWire(" Rolling ")).isEqualTo(DeploymentStrategy.ROLLING);
    }

    @Test
    void fromWire_unknownOrNullFallsBackToBlueGreen() {
        assertThat(DeploymentStrategy.fromWire(null)).isEqualTo(DeploymentStrategy.BLUE_GREEN);
        assertThat(DeploymentStrategy.fromWire("")).isEqualTo(DeploymentStrategy.BLUE_GREEN);
        assertThat(DeploymentStrategy.fromWire("canary")).isEqualTo(DeploymentStrategy.BLUE_GREEN);
    }

    @Test
    void toWire_roundTrips() {
        for (DeploymentStrategy s : DeploymentStrategy.values()) {
            assertThat(DeploymentStrategy.fromWire(s.toWire())).isEqualTo(s);
        }
    }
}
@@ -0,0 +1,187 @@
package com.cameleer.server.core.runtime;

import com.cameleer.common.model.ApplicationConfig;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;
import org.junit.jupiter.api.Test;

import java.util.List;
import java.util.Map;
import java.util.UUID;

import static org.assertj.core.api.Assertions.assertThat;

class DirtyStateCalculatorTest {

    private static final DirtyStateCalculator CALC = new DirtyStateCalculator(
            new ObjectMapper().registerModule(new JavaTimeModule()));

    @Test
    void noSnapshot_meansEverythingDirty() {
        DirtyStateCalculator calc = CALC;

        ApplicationConfig desiredAgent = new ApplicationConfig();
        desiredAgent.setSamplingRate(1.0);
        Map<String, Object> desiredContainer = Map.of("memoryLimitMb", 512);

        DirtyStateResult result = calc.compute(UUID.randomUUID(), desiredAgent, desiredContainer, null);

        assertThat(result.dirty()).isTrue();
        assertThat(result.differences()).extracting(DirtyStateResult.Difference::field)
                .contains("snapshot");
    }

    @Test
    void identicalSnapshot_isClean() {
        DirtyStateCalculator calc = CALC;

        ApplicationConfig cfg = new ApplicationConfig();
        cfg.setSamplingRate(0.5);
        Map<String, Object> container = Map.of("memoryLimitMb", 512);

        UUID jarId = UUID.randomUUID();
        DeploymentConfigSnapshot snap = new DeploymentConfigSnapshot(jarId, cfg, container, null);
        DirtyStateResult result = calc.compute(jarId, cfg, container, snap);

        assertThat(result.dirty()).isFalse();
        assertThat(result.differences()).isEmpty();
    }

    @Test
    void differentJar_marksJarField() {
        DirtyStateCalculator calc = CALC;
        ApplicationConfig cfg = new ApplicationConfig();
        Map<String, Object> container = Map.of();
        UUID v1 = UUID.randomUUID();
        UUID v2 = UUID.randomUUID();
        DeploymentConfigSnapshot snap = new DeploymentConfigSnapshot(v1, cfg, container, null);

        DirtyStateResult result = calc.compute(v2, cfg, container, snap);

        assertThat(result.dirty()).isTrue();
        assertThat(result.differences()).extracting(DirtyStateResult.Difference::field)
                .contains("jarVersionId");
    }

    @Test
    void differentSamplingRate_marksAgentField() {
        DirtyStateCalculator calc = CALC;

        ApplicationConfig deployedCfg = new ApplicationConfig();
        deployedCfg.setSamplingRate(0.5);
        ApplicationConfig desiredCfg = new ApplicationConfig();
        desiredCfg.setSamplingRate(1.0);
        UUID jarId = UUID.randomUUID();
        DeploymentConfigSnapshot snap = new DeploymentConfigSnapshot(jarId, deployedCfg, Map.of(), null);

        DirtyStateResult result = calc.compute(jarId, desiredCfg, Map.of(), snap);

        assertThat(result.dirty()).isTrue();
        assertThat(result.differences()).extracting(DirtyStateResult.Difference::field)
                .contains("agentConfig.samplingRate");
    }

    @Test
    void differentContainerMemory_marksContainerField() {
        DirtyStateCalculator calc = CALC;
        ApplicationConfig cfg = new ApplicationConfig();
        UUID jarId = UUID.randomUUID();
        DeploymentConfigSnapshot snap = new DeploymentConfigSnapshot(jarId, cfg, Map.of("memoryLimitMb", 512), null);

        DirtyStateResult result = calc.compute(jarId, cfg, Map.of("memoryLimitMb", 1024), snap);

        assertThat(result.dirty()).isTrue();
        assertThat(result.differences()).extracting(DirtyStateResult.Difference::field)
                .contains("containerConfig.memoryLimitMb");
    }

    @Test
    void nullAgentConfigInSnapshot_marksAgentConfigDiff() {
        DirtyStateCalculator calc = CALC;
        ApplicationConfig desired = new ApplicationConfig();
        desired.setSamplingRate(1.0);
        UUID jarId = UUID.randomUUID();
        DeploymentConfigSnapshot snap = new DeploymentConfigSnapshot(jarId, null, Map.of(), null);

        DirtyStateResult result = calc.compute(jarId, desired, Map.of(), snap);

        assertThat(result.dirty()).isTrue();
        assertThat(result.differences()).extracting(DirtyStateResult.Difference::field)
                .contains("agentConfig");
    }

    @Test
    void nestedAgentField_reportsDeepPath() {
        DirtyStateCalculator calc = CALC;

        ApplicationConfig deployed = new ApplicationConfig();
        deployed.setSensitiveKeys(List.of("password", "token"));
        ApplicationConfig desired = new ApplicationConfig();
        desired.setSensitiveKeys(List.of("password", "token", "secret"));
        UUID jarId = UUID.randomUUID();
        DeploymentConfigSnapshot snap = new DeploymentConfigSnapshot(jarId, deployed, Map.of(), null);

        DirtyStateResult result = calc.compute(jarId, desired, Map.of(), snap);

        assertThat(result.dirty()).isTrue();
        assertThat(result.differences()).extracting(DirtyStateResult.Difference::field)
                .anyMatch(f -> f.startsWith("agentConfig.sensitiveKeys"));
    }

    @Test
    void livePushedFields_doNotMarkDirty() {
        // Taps, tracedProcessors, and routeRecording apply via live SSE push (never redeploy),
        // so they must not appear as "pending deploy" when they differ from the last deploy snapshot.
        ApplicationConfig deployed = new ApplicationConfig();
        deployed.setTracedProcessors(Map.of("proc-1", "DEBUG"));
        deployed.setRouteRecording(Map.of("route-a", true));
        deployed.setTapVersion(1);

        ApplicationConfig desired = new ApplicationConfig();
        desired.setTracedProcessors(Map.of("proc-1", "TRACE", "proc-2", "DEBUG"));
        desired.setRouteRecording(Map.of("route-a", false, "route-b", true));
        desired.setTapVersion(5);

        UUID jarId = UUID.randomUUID();
        DeploymentConfigSnapshot snap = new DeploymentConfigSnapshot(jarId, deployed, Map.of(), null);
        DirtyStateResult result = CALC.compute(jarId, desired, Map.of(), snap);

        assertThat(result.dirty()).isFalse();
        assertThat(result.differences()).isEmpty();
    }

    @Test
    void stringField_differenceValueIsUnquoted() {
        DirtyStateCalculator calc = CALC;

        ApplicationConfig deployed = new ApplicationConfig();
        deployed.setApplicationLogLevel("INFO");
        ApplicationConfig desired = new ApplicationConfig();
        desired.setApplicationLogLevel("DEBUG");
        UUID jarId = UUID.randomUUID();
        DeploymentConfigSnapshot snap = new DeploymentConfigSnapshot(jarId, deployed, Map.of(), null);

        DirtyStateResult result = calc.compute(jarId, desired, Map.of(), snap);

        DirtyStateResult.Difference diff = result.differences().stream()
                .filter(d -> d.field().equals("agentConfig.applicationLogLevel"))
                .findFirst().orElseThrow();
        assertThat(diff.staged()).isEqualTo("DEBUG");
        assertThat(diff.deployed()).isEqualTo("INFO");
    }

    @Test
    void versionBumpDoesNotMarkDirty() {
        ApplicationConfig deployedCfg = new ApplicationConfig();
        deployedCfg.setSamplingRate(0.5);
        deployedCfg.setVersion(1);
        ApplicationConfig desiredCfg = new ApplicationConfig();
        desiredCfg.setSamplingRate(0.5);
        desiredCfg.setVersion(2); // bumped by save
        UUID jarId = UUID.randomUUID();
        DeploymentConfigSnapshot snap = new DeploymentConfigSnapshot(jarId, deployedCfg, Map.of(), null);

        DirtyStateResult result = CALC.compute(jarId, desiredCfg, Map.of(), snap);
        assertThat(result.dirty()).isFalse();
    }
}
@@ -0,0 +1,88 @@
package com.cameleer.server.core.search;

import org.junit.jupiter.api.Test;

import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatThrownBy;

class AttributeFilterTest {

    @Test
    void keyOnly_blankValue_normalizesToNull() {
        AttributeFilter f = new AttributeFilter("order", "");
        assertThat(f.value()).isNull();
        assertThat(f.isKeyOnly()).isTrue();
        assertThat(f.isWildcard()).isFalse();
    }

    @Test
    void keyOnly_nullValue_isKeyOnly() {
        AttributeFilter f = new AttributeFilter("order", null);
        assertThat(f.isKeyOnly()).isTrue();
    }

    @Test
    void exactValue_isNotWildcard() {
        AttributeFilter f = new AttributeFilter("order", "47");
        assertThat(f.isKeyOnly()).isFalse();
        assertThat(f.isWildcard()).isFalse();
    }

    @Test
    void starInValue_isWildcard() {
        AttributeFilter f = new AttributeFilter("order", "47*");
        assertThat(f.isWildcard()).isTrue();
    }

    @Test
    void invalidKey_throws() {
        assertThatThrownBy(() -> new AttributeFilter("bad key", "x"))
                .isInstanceOf(IllegalArgumentException.class)
                .hasMessageContaining("attribute key");
    }

    @Test
    void blankKey_throws() {
        assertThatThrownBy(() -> new AttributeFilter(" ", null))
                .isInstanceOf(IllegalArgumentException.class);
    }

    @Test
    void wildcardPattern_escapesLikeMetaCharacters() {
        AttributeFilter f = new AttributeFilter("order", "a_b%c\\d*");
        assertThat(f.toLikePattern()).isEqualTo("a\\_b\\%c\\\\d%");
    }

    @Test
    void exactValue_toLikePattern_returnsNull() {
        AttributeFilter f = new AttributeFilter("order", "47");
        assertThat(f.toLikePattern()).isNull();
    }

    @Test
    void searchRequest_canonicalCtor_acceptsAttributeFilters() {
        SearchRequest r = new SearchRequest(
                null, null, null, null, null, null, null, null, null, null,
                null, null, null, null, null, 0, 50, null, null, null, null,
                java.util.List.of(new AttributeFilter("order", "47")));
        assertThat(r.attributeFilters()).hasSize(1);
        assertThat(r.attributeFilters().get(0).key()).isEqualTo("order");
    }

    @Test
    void searchRequest_legacyCtor_defaultsAttributeFiltersToEmpty() {
        SearchRequest r = new SearchRequest(
                null, null, null, null, null, null, null, null, null, null,
                null, null, null, null, null, 0, 50, null, null, null, null);
        assertThat(r.attributeFilters()).isEmpty();
    }

    @Test
    void searchRequest_compactCtor_normalizesNullAttributeFilters() {
        SearchRequest r = new SearchRequest(
                null, null, null, null, null, null, null, null, null, null,
                null, null, null, null, null, 0, 50, null, null, null, null,
                null);
        assertThat(r.attributeFilters()).isNotNull().isEmpty();
    }
}
@@ -204,6 +204,21 @@ All query endpoints require JWT with `VIEWER` role or higher.
| `GET /api/v1/agents/events-log` | Agent lifecycle event history |
| `GET /api/v1/agents/{id}/metrics` | Agent-level metrics time series |

### Server Self-Metrics

The server snapshots its own Micrometer registry into ClickHouse every 60 s (table `server_metrics`) — JVM, HTTP, DB pools, agent/ingestion business metrics, and alerting metrics. Use this instead of running an external Prometheus when building a server-health dashboard. The live scrape endpoint `/api/v1/prometheus` remains available for traditional scraping.

Two ways to consume:

| Consumer | How |
|---|---|
| Web UI (built-in) | `/admin/server-metrics` — 17 panels across Server Health / JVM / HTTP & DB / Alerting / Deployments with a 15 min–7 d time picker. ADMIN-only, hidden when `infrastructureendpoints=false`. |
| Programmatic | Generic REST API under `/api/v1/admin/server-metrics/{catalog,instances,query}`. Same visibility rules. Designed for SaaS control planes that embed server health in their own console. |

Persistence can be disabled entirely with `cameleer.server.self-metrics.enabled=false`. Snapshot cadence is set via `cameleer.server.self-metrics.interval-ms` (default `60000`).

See [`docs/server-self-metrics.md`](./server-self-metrics.md) for the full metric catalog, API contract, and ready-to-paste query bodies for each panel.
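As an illustrative sketch of the programmatic path (the field names come from the `ServerMetricQueryRequest` record in this changeset; the metric name, tag values, and the assumption that the query endpoint accepts a JSON `POST` body are hypothetical), a request to `/api/v1/admin/server-metrics/query` might look like:

```json
{
  "metric": "http.server.requests",
  "statistic": "count",
  "from": "2026-04-22T00:00:00Z",
  "to": "2026-04-22T01:00:00Z",
  "stepSeconds": 60,
  "groupByTags": ["uri"],
  "filterTags": {"status": "200"},
  "aggregation": "sum",
  "mode": "delta",
  "serverInstanceIds": null
}
```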
|
||||
|
||||
---
|
||||
|
||||
## Application Configuration
|
||||
|
||||
183
docs/handoff/2026-04-23-deployment-page-handoff.md
Normal file
183
docs/handoff/2026-04-23-deployment-page-handoff.md
Normal file
@@ -0,0 +1,183 @@
# Handoff — Unified App Deployment Page

**Session:** 2026-04-22 → 2026-04-23
**Branch:** `main` (43 commits ahead of `origin/main` before push — all committed directly per explicit user consent)
**Base commit (session start):** `1a376eb2`
**Head commit (session end):** `0a71bca7`

## What landed

Full implementation of the unified app deployment page replacing the old `CreateAppView` / `AppDetailView` split. Key artefacts:

- **Spec:** `docs/superpowers/specs/2026-04-22-app-deployment-page-design.md`
- **Plan:** `docs/superpowers/plans/2026-04-22-app-deployment-page.md`
- **Routes:** `/apps` (list, unchanged), `/apps/new` + `/apps/:slug` (both render new `AppDeploymentPage`)

### Backend delivered (cameleer-server)

- Flyway V3 adds `deployments.deployed_config_snapshot JSONB`
- `DeploymentConfigSnapshot` record: `(UUID jarVersionId, ApplicationConfig agentConfig, Map<String,Object> containerConfig, List<String> sensitiveKeys)`
- `DeploymentExecutor` captures snapshot on successful RUNNING transition (not FAILED)
- `PostgresDeploymentRepository.saveDeployedConfigSnapshot(UUID, DeploymentConfigSnapshot)` + `findLatestSuccessfulByAppAndEnv(appId, envId)`
- `ApplicationConfigController.updateConfig` accepts `?apply=staged|live` (default `live` for back-compat); staged skips SSE push; 400 on unknown
- `AppController.getDirtyState` → `GET /api/v1/environments/{envSlug}/apps/{appSlug}/dirty-state` returning `{dirty, lastSuccessfulDeploymentId, differences[]}`
- `DirtyStateCalculator` pure service (cameleer-server-core), scrubs volatile fields (`version`, `updatedAt`, `updatedBy`, `environment`, `application`) from agent-config comparison, recurses into nested objects
- Integration tests: `PostgresDeploymentRepositoryIT` (3), `DeploymentSnapshotIT` (2), `ApplicationConfigControllerIT` (6), `AppDirtyStateIT` (3), `DirtyStateCalculatorTest` (9)
- OpenAPI + `schema.d.ts` regenerated
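The scrubbing contract above can be sketched in a few lines. This is an illustrative TypeScript analogue only — the real `DirtyStateCalculator` is a Java service that also reports per-field differences, and `isDirty` is a hypothetical helper name:

```typescript
// Illustrative sketch of the volatile-field scrubbing done by DirtyStateCalculator (Java).
const VOLATILE_KEYS = new Set(["version", "updatedAt", "updatedBy", "environment", "application"]);

function scrubAgentConfig(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(scrubAgentConfig);
  if (value !== null && typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(value as Record<string, unknown>)) {
      if (!VOLATILE_KEYS.has(k)) out[k] = scrubAgentConfig(v); // drop volatile fields at every depth
    }
    return out;
  }
  return value;
}

function isDirty(desired: unknown, deployed: unknown): boolean {
  // Order-sensitive string compare is enough for a sketch; the real service diffs field by field.
  return JSON.stringify(scrubAgentConfig(desired)) !== JSON.stringify(scrubAgentConfig(deployed));
}
```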

### UI delivered (cameleer-server/ui)

New directory `ui/src/pages/AppsTab/AppDeploymentPage/`:

```
index.tsx                     # Main composition (524 lines)
IdentitySection.tsx           # Name + slug + env pill + JAR + Current Version
Checkpoints.tsx               # Collapsible disclosure of past successful deploys
PrimaryActionButton.tsx       # Save / Redeploy / Deploying… state machine
AppDeploymentPage.module.css  # Page-local styles
ConfigTabs/
  MonitoringTab.tsx           # Engine, payload, log levels, metrics, sampling, replay, route control
  ResourcesTab.tsx            # CPU / memory / ports / replicas / runtime / networks
  VariablesTab.tsx            # Env vars (Table / Properties / YAML / .env via EnvEditor)
  SensitiveKeysTab.tsx        # Per-app keys + global baseline reference
  TracesTapsTab.tsx           # Live-apply with LiveBanner
  RouteRecordingTab.tsx       # Live-apply with LiveBanner
  LiveBanner.tsx              # Shared amber "changes apply immediately" banner
DeploymentTab/
  DeploymentTab.tsx           # Composition: StatusCard + DeploymentProgress + StartupLogPanel + History
  StatusCard.tsx              # RUNNING / STARTING / FAILED indicator + replica count + URL + actions
  HistoryDisclosure.tsx       # Past deployments table with inline log expansion
hooks/
  useDeploymentPageState.ts   # Form-state orchestrator (monitoring, resources, variables, sensitiveKeys)
  useFormDirty.ts             # Per-tab dirty computation via JSON.stringify compare
  useUnsavedChangesBlocker.ts # React Router v6 useBlocker + DS AlertDialog
utils/
  deriveAppName.ts            # Filename → app name pure function
  deriveAppName.test.ts       # 9 Vitest cases
```
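For orientation, a plausible sketch of what a filename-to-app-name helper like `deriveAppName` does. The exact stripping rules below are assumptions — the authoritative behaviour lives in `deriveAppName.ts` and its Vitest cases:

```typescript
// Hypothetical sketch: derive an app name from an uploaded JAR filename.
function deriveAppName(filename: string): string {
  const base = filename.split(/[\\/]/).pop() ?? filename; // drop any path prefix
  const noExt = base.replace(/\.jar$/i, "");              // drop the .jar extension
  // Drop a trailing version suffix such as -1.2.3 or -1.2.3-SNAPSHOT (assumed convention).
  return noExt.replace(/-\d+(\.\d+)*(-[A-Za-z0-9.]+)?$/, "");
}
```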

Touched shared files:
- `ui/src/components/StartupLogPanel.tsx` — accepts `className`, flex-grows in container (dropped fixed 300px maxHeight)
- `ui/src/api/queries/admin/apps.ts` — added `useDirtyState`, `Deployment.deployedConfigSnapshot` type
- `ui/src/api/queries/commands.ts` — `useUpdateApplicationConfig` accepts `apply?: 'staged' | 'live'`
- `ui/src/router.tsx` — routes `/apps/new` and `/apps/:appId` to `AppDeploymentPage`
- `ui/src/pages/AppsTab/AppsTab.tsx` — shrunk 1387 → 109 lines (list only)
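A minimal sketch of how the widened `apply` option maps onto the request URL. The exact `/config` route shape is assumed from the commit messages; only `staged` needs the query parameter because `live` is the server default:

```typescript
// Sketch: build the PUT URL for a config update (route shape assumed).
type ApplyMode = "staged" | "live";

function buildUpdateConfigUrl(envSlug: string, appSlug: string, apply?: ApplyMode): string {
  const base = `/api/v1/environments/${envSlug}/apps/${appSlug}/config`;
  // 'live' is the server default, so only 'staged' needs to be spelled out.
  return apply === "staged" ? `${base}?apply=staged` : base;
}
```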

### Docs delivered

- `.claude/rules/ui.md` — Deployments bullet rewritten for the unified page
- `.claude/rules/app-classes.md` — `ApplicationConfigController` gains `?apply` note; `AppController` gains dirty-state endpoint; `PostgresDeploymentRepository` notes the snapshot column
- `docs/superpowers/specs/2026-04-22-app-deployment-page-design.md`
- `docs/superpowers/plans/2026-04-22-app-deployment-page.md`

## Gitea issues opened this session (cameleer/cameleer-server)

### [#147 — Concurrent-edit protection on app deployment page (optimistic locking)](https://gitea.siegeln.net/cameleer/cameleer-server/issues/147)
Deferred during brainstorming. Two browser sessions editing the same app have no protection against last-write-wins overwrites. Proposed fix is `If-Match` / `ETag` on config + container-config + JAR upload endpoints using `app.updated_at`. Not blocking single-operator use.

### [#148 — Persist deployment-page monitoring fields end-to-end](https://gitea.siegeln.net/cameleer/cameleer-server/issues/148)
**Important.** The Monitoring tab renders five controls that are currently **UI-only**: `payloadSize` + `payloadUnit`, `metricsInterval`, `replayEnabled`, `routeControlEnabled`. They do not persist to the agent because the fields don't exist on `com.cameleer.common.model.ApplicationConfig` and aren't part of the agent protocol. The old `CreateAppView` had the same gap — this is not a new regression, but the user has stated these must actually affect agent behavior. Fix requires cross-repo work (cameleer-common model additions + cameleer-server wiring + cameleer agent protocol handling + agent-side gating behaviour).

## Open gaps to tackle next session

### 1. Task 13.1 — finish manual browser QA

Partial coverage so far: save/redeploy happy path, ENV pill styling, tab seam, variables view switcher, toast (all landed + verified). Still unverified:

- Checkpoint restore flow (hydrate form from past snapshot → Save → Redeploy)
- Deploy failure path (FAILED status → snapshot stays null → primary button still shows Redeploy)
- Unsaved-changes dialog on in-app navigation (sidebar click with dirty form)
- Env switch with dirty form (should discard silently)
- End-to-end deploy against real Docker daemon — see "Docker deploy setup" below
- Per-tab `*` dirty marker visibility across all 4 staged tabs

### 2. Docker deploy setup (needed to fully exercise E2E)

Current `docker-compose.yml` sets `CAMELEER_SERVER_RUNTIME_ENABLED: "false"` so `DisabledRuntimeOrchestrator` rejects deploys with `UnsupportedOperationException`. To actually test deploy end-to-end, pick one:

- **Path A (quick):** `docker compose up -d cameleer-postgres cameleer-clickhouse` only, then `mvn -pl cameleer-server-app spring-boot:run` on the host + `npm run dev` for the UI. Server uses the host Docker daemon directly. Runtime enabled by default via `application.yml`.
- **Path B (compose-native):** enable runtime in compose by mounting `/var/run/docker.sock`, setting `CAMELEER_SERVER_RUNTIME_ENABLED: "true"` + `CAMELEER_SERVER_RUNTIME_DOCKERNETWORK: cameleer-traefik`, pre-creating the `cameleer-traefik` network, adding `CAMELEER_SERVER_RUNTIME_JARDOCKERVOLUME` for shared JAR storage, and adding a Traefik service for routing. This is a fully separate task — would need its own plan.

Recommend Path A for finishing QA; Path B only if you want compose to be fully deployable.

### 3. Deferred code-review items

All flagged during the final integration review. None are blockers; each is a follow-up.

- **DEGRADED deployments aren't checkpoints** — `PostgresDeploymentRepository.findLatestSuccessfulByAppAndEnv` filters `status = 'RUNNING'` but the executor writes the snapshot before the status is resolved (so a DEGRADED deployment has a snapshot). Either include `DEGRADED` in the filter, or skip snapshot on DEGRADED. Pick one; document the choice.
- **`Checkpoints.tsx` restore on null snapshot is a silent no-op** — should surface a toast like "This checkpoint predates snapshotting and cannot be restored." Currently returns early with no feedback.
- **Missing IT: FAILED deploy leaves snapshot NULL** — `DeploymentSnapshotIT` tests the success case and general "snapshot appears on RUNNING" but doesn't explicitly lock in the FAILED → null guarantee. Add a one-line assertion.
- **`HistoryDisclosure` expanded log doesn't `scrollIntoView`** — on long histories the startup-log panel opens off-screen. Minor UX rough edge.
- **OpenAPI `@Parameter` missing on `apply` query param** — not critical, just improves generated Swagger docs. Add `@Parameter(name = "apply", description = "staged | live (default: live)")` to `ApplicationConfigController.updateConfig`.

### 4. Minor tech debt introduced this session

- `samplingRate` normalization hack in `useDeploymentPageState.ts`: `Number.isInteger(x) ? \`${x}.0\` : String(x)` — works around `1.0` parsing back as `1`, but breaks for values like `1.10` (round-trips to `1.1`). A cleaner fix is to compare as numbers, not strings, in `useFormDirty`.
- `useDirtyState` defaults to `?? true` during loading (so the button defaults to `Redeploy`, the fail-safe choice). Spurious Redeploy clicks are harmless, but the "Save (disabled)" UX would be more correct during initial load. Consider a loading-aware ternary if it becomes user-visible.
- `ApplicationConfigController.updateConfig` returns `ResponseEntity.status(400).build()` (empty body) on unknown `apply` values. Consider a structured error body consistent with other 400s in the codebase.
- GitNexus index stats (`AGENTS.md`, `CLAUDE.md`) refreshed several times during the session — these are auto-generated and will refresh again on next `npx gitnexus analyze`.
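The cleaner `samplingRate` fix suggested above is a one-liner — compare numerically instead of via strings (hypothetical helper name):

```typescript
// Sketch: numeric comparison makes "1.0" vs "1" and "1.10" vs "1.1" equal, which string compare does not.
function samplingRateEqual(a: string | number, b: string | number): boolean {
  return Number(a) === Number(b);
}
```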

### 5. Behavioural caveats to know about

- **Agent config writes from the Dashboard / Runtime pages** still use `useUpdateApplicationConfig` with default `apply='live'` — they push SSE immediately as before. Only Deployment-page writes use `apply=staged`. This is by design.
- **Traces & Taps + Route Recording tabs** on the Deployment page write with `apply='live'` (immediate SSE). They do **not** participate in dirty detection. The LiveBanner explains this to the user.
- **Slug is immutable** — enforced both server-side (regex + Jackson drops unknown fields on PUT) and client-side (IdentitySection renders slug as `MonoText`, never `Input`).
- **Environment is immutable after create** — the deployment page has no env selector; the environment chip is read-only and colored via `envColorVar` per the env's configured color.
- **Dirty detection ignores `version`, `updatedAt`, `updatedBy`, `environment`, `application`** on agent config — these get bumped server-side on every save and would otherwise spuriously mark the page dirty. Scrubbing happens in `DirtyStateCalculator.scrubAgentConfig`.

## Recommended next-session kickoff

1. Run `docker compose up -d cameleer-postgres cameleer-clickhouse`, then `mvn -pl cameleer-server-app spring-boot:run` and `npm run dev` in two terminals.
2. Walk through the rest of Task 13.1 (checkpoint restore, deploy failure, unsaved dialog, env switch).
3. File any new bugs found. Address the deferred review items (section 3) in small PR-sized commits.
4. Decide which of #148's cross-repo work to tackle — cleanest path is: (a) extend `ApplicationConfig` in cameleer-common, (b) wire server side, (c) coordinate agent-side behaviour gating.
5. If you want compose-native deploy, open a separate ticket or spec for Path B from "Docker deploy setup" above.

## Commit range summary

```
1a376eb2..0a71bca7 (43 commits)
ff951877 db(deploy): add deployments.deployed_config_snapshot column (V3)
d580b6e9 core(deploy): add DeploymentConfigSnapshot record
06fa7d83 core(deploy): type jarVersionId as UUID (match domain convention)
7f9cfc7f core(deploy): add deployedConfigSnapshot field to Deployment model
d3e86b9d storage(deploy): persist deployed_config_snapshot as JSONB
9b851c46 test(deploy): autowire repository in snapshot IT (JavaTimeModule-safe)
a79eafea runtime(deploy): capture config snapshot on RUNNING transition
9b124027 test(deploy): assert containerConfig round-trip + strict RUNNING in snapshot IT
76129d40 api(config): ?apply=staged|live gates SSE push on PUT /apps/{slug}/config
e716dbf8 test(config): verify audit action in staged/live config IT
76352c0d test(config): tighten audit assertions + @DirtiesContext on ApplicationConfigControllerIT
e4ccce1e core(deploy): add DirtyStateCalculator + DirtyStateResult
24464c07 core(deploy): recurse into nested diffs + unquote scalar values in DirtyStateCalculator
6591f2fd api(apps): GET /apps/{slug}/dirty-state returns desired-vs-deployed diff
97f25b4c test(deploy): register JavaTimeModule in DirtyStateCalculator unit test
0434299d api(schema): regenerate OpenAPI + schema.d.ts for deployment page
60529757 ui(deploy): scaffold AppDeploymentPage + route /apps/new and /apps/:slug
52ff385b ui(api): add useDirtyState + apply=staged|live on useUpdateApplicationConfig
d067490f ui(deploy): add deriveAppName pure function + tests
00c7c0cd ui(deploy): Identity & Artifact section with filename auto-derive
08efdfa9 ui(deploy): Checkpoints disclosure (hides current deployment, flags pruned JARs)
cc193a10 ui(deploy): add useDeploymentPageState orchestrator hook
4f5a11f7 ui(deploy): extract MonitoringTab component
5c48b780 ui(deploy): extract ResourcesTab component
bb06c4c6 ui(deploy): extract VariablesTab component
f487e6ca ui(deploy): extract SensitiveKeysTab component
b7c0a225 ui(deploy): LiveBanner component for live-apply tabs
e96c3cd0 ui(deploy): Traces & Taps + Route Recording tabs with live banner
98a7b781 ui(deploy): StatusCard for Deployment tab
063a4a55 ui(deploy): HistoryDisclosure with inline log expansion
1579f10a ui(deploy): DeploymentTab + flex-grow StartupLogPanel
42fb6c8b ui(deploy): useFormDirty hook for per-tab dirty markers
0e4166bd ui(deploy): PrimaryActionButton + computeMode state-machine helper
b1bdb88e ui(deploy): compose page — save/redeploy/checkpoints wired end-to-end
3a649f40 ui(deploy): router blocker + DS dialog for unsaved edits
5a7c0ce4 ui(deploy): delete CreateAppView + AppDetailView + ConfigSubTab
d5957468 docs(rules): update ui.md Deployments bullet for unified deployment page
6d5ce606 docs(rules): document ?apply flag + snapshot column in app-classes
d33c039a fix(deploy): address final review — sensitiveKeys snapshot, dirty scrubbing, transition race, refetch invalidations
b7b6bd2a ui(deploy): port missing agent-config fields, var-view switcher, env pill, tab seam
0a71bca7 fix(deploy): redeploy button after save, disable save when clean, success toast
```

Plus this handoff commit + the GitNexus index-stats refresh.
docs/server-self-metrics.md (new file, 522 lines)
@@ -0,0 +1,522 @@
# Server Self-Metrics — Reference for Dashboard Builders

This is the reference for anyone building a server-health dashboard on top of the Cameleer server. It documents the `server_metrics` ClickHouse table, every series you can expect to find in it, and the queries we recommend for each dashboard panel.

> **tl;dr** — Every 60 s, every meter in the server's Micrometer registry (all `cameleer.*`, all `alerting_*`, and the full Spring Boot Actuator set) is written into ClickHouse as one row per `(meter, statistic)` pair. No external Prometheus required.

---

## Built-in admin dashboard

The server ships a ready-to-use dashboard at **`/admin/server-metrics`** in the web UI. It renders the 17 panels listed below using `ThemedChart` from the design system. The window is driven by the app-wide time-range control in the TopBar (same one used by Exchanges, Dashboard, and Runtime), so every panel automatically reflects the range you've selected globally. Visibility mirrors the Database and ClickHouse admin pages:

- Requires the `ADMIN` role.
- Hidden when `cameleer.server.security.infrastructureendpoints=false` (both the backend endpoints and the sidebar entry disappear).

Use this page for single-tenant installs and dev/staging — it's the fastest path to "is the server healthy right now?". For multi-tenant control planes, cross-environment rollups, or embedding metrics inside an existing operations console, call the REST API below instead.

---

## Table schema

```sql
server_metrics (
    tenant_id          LowCardinality(String) DEFAULT 'default',
    collected_at       DateTime64(3),
    server_instance_id LowCardinality(String),
    metric_name        LowCardinality(String),
    metric_type        LowCardinality(String),  -- counter|gauge|timer|distribution_summary|long_task_timer|other
    statistic          LowCardinality(String) DEFAULT 'value',
    metric_value       Float64,
    tags               Map(String, String) DEFAULT map(),
    server_received_at DateTime64(3) DEFAULT now64(3)
)
ENGINE = MergeTree()
PARTITION BY (tenant_id, toYYYYMM(collected_at))
ORDER BY (tenant_id, collected_at, server_instance_id, metric_name, statistic)
TTL toDateTime(collected_at) + INTERVAL 90 DAY DELETE
```

### What each column means

| Column | Notes |
|---|---|
| `tenant_id` | Always filter by this. One tenant per server deployment. |
| `server_instance_id` | Stable id per server process: property → `HOSTNAME` env → DNS → random UUID. **Rotates on restart**, so counters restart cleanly. |
| `metric_name` | Raw Micrometer meter name. Dots, not underscores. |
| `metric_type` | Lowercase Micrometer `Meter.Type`. |
| `statistic` | Which `Measurement` this row is. Counters/gauges → `value` or `count`. Timers → three rows per tick: `count`, `total_time` (or `total`), `max`. Distribution summaries → same shape. |
| `metric_value` | `Float64`. Non-finite values (NaN / ±∞) are dropped before insert. |
| `tags` | `Map(String, String)`. Micrometer tags copied verbatim. |

### Counter semantics (important)

Counters are **cumulative totals since meter registration**, same convention as Prometheus. To get a rate, compute a delta within a `server_instance_id`:

```sql
SELECT
    toStartOfMinute(collected_at) AS minute,
    metric_value - any(metric_value) OVER (
        PARTITION BY server_instance_id, metric_name, tags
        ORDER BY collected_at
        ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING
    ) AS per_minute_delta
FROM server_metrics
WHERE metric_name = 'cameleer.ingestion.drops'
  AND statistic = 'count'
ORDER BY minute;
```

On restart the `server_instance_id` rotates, so a simple `LAG()` partitioned by `server_instance_id` gives monotonic segments without fighting counter resets.
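The same delta computation can be done client-side. A TypeScript sketch of the positive-clipped per-instance differencing (the server's `mode=delta`, described below, does the equivalent in SQL; the types and names here are illustrative):

```typescript
// Sketch: per-instance positive-clipped counter deltas that survive restarts.
interface Sample { instanceId: string; t: number; value: number }

function counterDeltas(samples: Sample[]): { t: number; delta: number }[] {
  const last = new Map<string, number>(); // last seen value per server_instance_id
  const out: { t: number; delta: number }[] = [];
  for (const s of [...samples].sort((a, b) => a.t - b.t)) {
    const prev = last.get(s.instanceId);
    if (prev !== undefined) out.push({ t: s.t, delta: Math.max(0, s.value - prev) }); // clip resets to 0
    last.set(s.instanceId, s.value);
  }
  return out;
}
```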

### Retention

90 days, TTL-enforced. Long-term trend analysis is out of scope — ship raw data to an external warehouse if you need more.

---

## How to query

Use the REST API — `/api/v1/admin/server-metrics/**`. It does the tenant filter, range bounding, counter-delta math, and input validation for you, so the dashboard never needs direct ClickHouse access. ADMIN role required (standard `/api/v1/admin/**` RBAC gate).

### `GET /catalog`

Enumerate every `metric_name` observed in a window, with its `metric_type`, the set of statistics emitted, and the union of tag keys.

```
GET /api/v1/admin/server-metrics/catalog?from=2026-04-22T00:00:00Z&to=2026-04-23T00:00:00Z
Authorization: Bearer <admin-jwt>
```

```json
[
  {
    "metricName": "cameleer.agents.connected",
    "metricType": "gauge",
    "statistics": ["value"],
    "tagKeys": ["state"]
  },
  {
    "metricName": "cameleer.ingestion.drops",
    "metricType": "counter",
    "statistics": ["count"],
    "tagKeys": ["reason"]
  },
  ...
]
```

`from`/`to` are optional; default is the last 1 h.

### `GET /instances`

Enumerate the `server_instance_id` values that wrote at least one sample in the window, with `firstSeen` / `lastSeen`. Use this when you need to annotate restarts on a graph or reason about counter-delta partitions.

```
GET /api/v1/admin/server-metrics/instances?from=2026-04-22T00:00:00Z&to=2026-04-23T00:00:00Z
```

```json
[
  { "serverInstanceId": "srv-prod-b", "firstSeen": "2026-04-22T14:30:00Z", "lastSeen": "2026-04-23T00:00:00Z" },
  { "serverInstanceId": "srv-prod-a", "firstSeen": "2026-04-22T00:00:00Z", "lastSeen": "2026-04-22T14:25:00Z" }
]
```

### `POST /query` — generic time-series

The workhorse. One endpoint covers every panel in the dashboard.

```
POST /api/v1/admin/server-metrics/query
Authorization: Bearer <admin-jwt>
Content-Type: application/json
```

Request body:

```json
{
  "metric": "cameleer.ingestion.drops",
  "statistic": "count",
  "from": "2026-04-22T00:00:00Z",
  "to": "2026-04-23T00:00:00Z",
  "stepSeconds": 60,
  "groupByTags": ["reason"],
  "filterTags": { },
  "aggregation": "sum",
  "mode": "delta",
  "serverInstanceIds": null
}
```

Response:

```json
{
  "metric": "cameleer.ingestion.drops",
  "statistic": "count",
  "aggregation": "sum",
  "mode": "delta",
  "stepSeconds": 60,
  "series": [
    {
      "tags": { "reason": "buffer_full" },
      "points": [
        { "t": "2026-04-22T00:00:00.000Z", "v": 0.0 },
        { "t": "2026-04-22T00:01:00.000Z", "v": 5.0 },
        { "t": "2026-04-22T00:02:00.000Z", "v": 5.0 }
      ]
    }
  ]
}
```

#### Request field reference

| Field | Type | Required | Description |
|---|---|---|---|
| `metric` | string | yes | Metric name. Regex `^[a-zA-Z0-9._]+$`. |
| `statistic` | string | no | `value` / `count` / `total` / `total_time` / `max` / `mean`. `mean` is a derived statistic for timers: `sum(total_time \| total) / sum(count)` per bucket. |
| `from`, `to` | ISO-8601 instant | yes | Half-open window. `to - from ≤ 31 days`. |
| `stepSeconds` | int | no | Bucket size. Clamped to [10, 3600]. Default 60. |
| `groupByTags` | string[] | no | Emit one series per unique combination of these tag values. Tag keys regex `^[a-zA-Z0-9._]+$`. |
| `filterTags` | map<string,string> | no | Narrow to samples whose tag map contains every entry. Values bound via parameter — no injection. |
| `aggregation` | string | no | Within-bucket reducer for raw mode: `avg` (default), `sum`, `max`, `min`, `latest`. For `mode=delta` this controls cross-instance aggregation (defaults to `sum` of per-instance deltas). |
| `mode` | string | no | `raw` (default) or `delta`. Delta mode computes per-`server_instance_id` positive-clipped differences and then aggregates across instances — so you get a rate-like time series that survives server restarts. |
| `serverInstanceIds` | string[] | no | Allow-list. When null or empty, every instance in the window is included. |
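The derived `mean` statistic can also be reproduced client-side from the raw `count` and `total_time` rows. A TypeScript sketch (the row shape here is illustrative, not the API wire format):

```typescript
// Sketch: per-bucket timer mean = sum(total_time) / sum(count).
interface TimerRow { bucket: number; statistic: "count" | "total_time"; value: number }

function timerMeans(rows: TimerRow[]): Map<number, number> {
  const sums = new Map<number, { count: number; total: number }>();
  for (const r of rows) {
    const s = sums.get(r.bucket) ?? { count: 0, total: 0 };
    if (r.statistic === "count") s.count += r.value;
    else s.total += r.value;
    sums.set(r.bucket, s);
  }
  const out = new Map<number, number>();
  for (const [b, s] of sums) out.set(b, s.count > 0 ? s.total / s.count : 0); // guard empty buckets
  return out;
}
```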

#### Validation errors

Any `IllegalArgumentException` surfaces as `400 Bad Request` with `{"error": "…"}`. Triggers:
- unsafe characters in identifiers
- `from ≥ to` or range > 31 days
- `stepSeconds` outside [10, 3600]
- result cardinality > 500 series (reduce `groupByTags` or tighten `filterTags`)
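A client can pre-check most of these triggers before hitting the endpoint. A TypeScript sketch (the server remains authoritative; the request shape is trimmed to the validated fields):

```typescript
// Sketch: local pre-validation mirroring the server's 400 triggers.
const IDENT = /^[a-zA-Z0-9._]+$/;

function validateQuery(q: { metric: string; from: string; to: string; stepSeconds?: number }): string | null {
  if (!IDENT.test(q.metric)) return "unsafe characters in metric";
  const from = Date.parse(q.from), to = Date.parse(q.to);
  if (!(from < to)) return "from must be before to";
  if (to - from > 31 * 24 * 3600 * 1000) return "range exceeds 31 days";
  const step = q.stepSeconds ?? 60;
  if (step < 10 || step > 3600) return "stepSeconds outside [10, 3600]";
  return null; // passes local checks
}
```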
|
||||
|
||||
### Direct ClickHouse (fallback)
|
||||
|
||||
If you need something the generic query can't express (complex joins, percentile aggregates, materialized-view rollups), reach for `/api/v1/admin/clickhouse/query` (`infrastructureendpoints=true`, ADMIN) or a dedicated read-only CH user scoped to `server_metrics`. All direct queries must filter by `tenant_id`.
|
||||
|
||||
---
|
||||
|
||||
## Metric catalog
|
||||
|
||||
Every series below is populated. Names follow Micrometer conventions (dots, not underscores). Use these as the starting point for dashboard panels — pick the handful you care about, ignore the rest.
|
||||
|
||||
### Cameleer business metrics — agent + ingestion
|
||||
|
||||
Source: `cameleer-server-app/.../metrics/ServerMetrics.java`.
|
||||
|
||||
| Metric | Type | Statistic | Tags | Meaning |
|
||||
|---|---|---|---|---|
|
||||
| `cameleer.agents.connected` | gauge | `value` | `state` (live/stale/dead/shutdown) | Count of agents in each lifecycle state |
|
||||
| `cameleer.agents.sse.active` | gauge | `value` | — | Active SSE connections (command channel) |
|
||||
| `cameleer.agents.transitions` | counter | `count` | `transition` (went_stale/went_dead/recovered) | Cumulative lifecycle transitions |
|
||||
| `cameleer.ingestion.buffer.size` | gauge | `value` | `type` (execution/processor/log/metrics) | Write buffer depth — spikes mean ingestion is lagging |
|
||||
| `cameleer.ingestion.accumulator.pending` | gauge | `value` | — | Unfinalized execution chunks in the accumulator |
|
||||
| `cameleer.ingestion.drops` | counter | `count` | `reason` (buffer_full/no_agent/no_identity) | Dropped payloads. Any non-zero rate here is bad. |
|
||||
| `cameleer.ingestion.flush.duration` | timer | `count`, `total_time`/`total`, `max` | `type` (execution/processor/log) | Flush latency per type |
|
||||
|
||||
### Cameleer business metrics — deploy + auth
|
||||
|
||||
| Metric | Type | Statistic | Tags | Meaning |
|
||||
|---|---|---|---|---|
|
||||
| `cameleer.deployments.outcome` | counter | `count` | `status` (running/failed/degraded) | Deploy outcome tally since boot |
|
||||
| `cameleer.deployments.duration` | timer | `count`, `total_time`/`total`, `max` | — | End-to-end deploy latency |
|
||||
| `cameleer.auth.failures` | counter | `count` | `reason` (invalid_token/revoked/oidc_rejected) | Auth failure breakdown — watch for spikes |
|
||||
|
||||
### Alerting subsystem metrics
|
||||
|
||||
Source: `cameleer-server-app/.../alerting/metrics/AlertingMetrics.java`.
|
||||
|
||||
| Metric | Type | Statistic | Tags | Meaning |
|
||||
|---|---|---|---|---|
|
||||
| `alerting_rules_total` | gauge | `value` | `state` (enabled/disabled) | Cached 30 s from PostgreSQL `alert_rules` |
|
||||
| `alerting_instances_total` | gauge | `value` | `state` (firing/resolved/ack'd etc.) | Cached 30 s from PostgreSQL `alert_instances` |
|
||||
| `alerting_eval_errors_total` | counter | `count` | `kind` (condition kind) | Evaluator exceptions per kind |
|
||||
| `alerting_circuit_opened_total` | counter | `count` | `kind` | Circuit-breaker open transitions per kind |
|
||||
| `alerting_eval_duration_seconds` | timer | `count`, `total_time`/`total`, `max` | `kind` | Per-kind evaluation latency |
|
||||
| `alerting_webhook_delivery_duration_seconds` | timer | `count`, `total_time`/`total`, `max` | — | Outbound webhook POST latency |
|
||||
| `alerting_notifications_total` | counter | `count` | `status` (sent/failed/retry/giving_up) | Notification outcomes |
|
||||
|
||||
### JVM — memory, GC, threads, classes
|
||||
|
||||
From Spring Boot Actuator (`JvmMemoryMetrics`, `JvmGcMetrics`, `JvmThreadMetrics`, `ClassLoaderMetrics`).
|
||||
|
||||
| Metric | Type | Tags | Meaning |
|
||||
|---|---|---|---|
|
||||
| `jvm.memory.used` | gauge | `area` (heap/nonheap), `id` (pool name) | Bytes used per pool |
|
||||
| `jvm.memory.committed` | gauge | `area`, `id` | Bytes committed per pool |
|
||||
| `jvm.memory.max` | gauge | `area`, `id` | Pool max |
|
||||
| `jvm.memory.usage.after.gc` | gauge | `area`, `id` | Usage right after the last collection |
|
||||
| `jvm.buffer.memory.used` | gauge | `id` (direct/mapped) | NIO buffer bytes |
|
||||
| `jvm.buffer.count` | gauge | `id` | NIO buffer count |
|
||||
| `jvm.buffer.total.capacity` | gauge | `id` | NIO buffer capacity |
|
||||
| `jvm.threads.live` | gauge | — | Current live thread count |
|
||||
| `jvm.threads.daemon` | gauge | — | Current daemon thread count |
|
||||
| `jvm.threads.peak` | gauge | — | Peak thread count since start |
|
||||
| `jvm.threads.started` | counter | — | Cumulative threads started |
|
||||
| `jvm.threads.states` | gauge | `state` (runnable/blocked/waiting/…) | Threads per state |
|
||||
| `jvm.classes.loaded` | gauge | — | Currently-loaded classes |
|
||||
| `jvm.classes.unloaded` | counter | — | Cumulative unloaded classes |
|
||||
| `jvm.gc.pause` | timer | `action`, `cause` | Stop-the-world pause times — watch `max` |
|
||||
| `jvm.gc.concurrent.phase.time` | timer | `action`, `cause` | Concurrent-phase durations (G1/ZGC) |
|
||||
| `jvm.gc.memory.allocated` | counter | — | Bytes allocated in the young gen |
|
||||
| `jvm.gc.memory.promoted` | counter | — | Bytes promoted to old gen |
|
||||
| `jvm.gc.overhead` | gauge | — | Fraction of CPU spent in GC (0–1) |
|
||||
| `jvm.gc.live.data.size` | gauge | — | Live data after last collection |
|
||||
| `jvm.gc.max.data.size` | gauge | — | Max old-gen size |
|
||||
| `jvm.info` | gauge | `vendor`, `runtime`, `version` | Constant `1.0`; tags carry the real info |
|
||||
|
||||
### Process and system
|
||||
|
||||
| Metric | Type | Tags | Meaning |
|
||||
|---|---|---|---|
|
||||
| `process.cpu.usage` | gauge | — | CPU share consumed by this JVM (0–1) |
|
||||
| `process.cpu.time` | gauge | — | Cumulative CPU time (ns) |
|
||||
| `process.uptime` | gauge | — | ms since start |
|
||||
| `process.start.time` | gauge | — | Epoch start |
|
||||
| `process.files.open` | gauge | — | Open FDs |
|
||||
| `process.files.max` | gauge | — | FD ulimit |
|
||||
| `system.cpu.count` | gauge | — | Cores visible to the JVM |
|
||||
| `system.cpu.usage` | gauge | — | System-wide CPU (0–1) |
|
||||
| `system.load.average.1m` | gauge | — | 1-min load (Unix only) |
|
||||
| `disk.free` | gauge | `path` | Free bytes on the mount that holds the JAR |
|
||||
| `disk.total` | gauge | `path` | Total bytes |
|
||||
|
||||
### HTTP server
|
||||
|
||||
| Metric | Type | Tags | Meaning |
|
||||
|---|---|---|---|
|
||||
| `http.server.requests` | timer | `method`, `uri`, `status`, `outcome`, `exception` | Inbound HTTP: count, total_time/total, max |
|
||||
| `http.server.requests.active` | long_task_timer | `method`, `uri` | In-flight requests — `active_tasks` statistic |
|
||||
|
||||
`uri` is the Spring-templated path (`/api/v1/environments/{envSlug}/apps/{appSlug}`), not the raw URL — cardinality stays bounded.
|
||||
|
||||
### Tomcat

| Metric | Type | Tags | Meaning |
|---|---|---|---|
| `tomcat.sessions.active.current` | gauge | — | Currently active sessions |
| `tomcat.sessions.active.max` | gauge | — | Max concurrent sessions observed |
| `tomcat.sessions.alive.max` | gauge | — | Longest session lifetime (s) |
| `tomcat.sessions.created` | counter | — | Cumulative session creates |
| `tomcat.sessions.expired` | counter | — | Cumulative expirations |
| `tomcat.sessions.rejected` | counter | — | Session creates refused |
| `tomcat.threads.current` | gauge | `name` | Connector thread count |
| `tomcat.threads.busy` | gauge | `name` | Connector threads currently serving a request |
| `tomcat.threads.config.max` | gauge | `name` | Configured max |

### HikariCP (PostgreSQL pool)

| Metric | Type | Tags | Meaning |
|---|---|---|---|
| `hikaricp.connections` | gauge | `pool` | Total connections |
| `hikaricp.connections.active` | gauge | `pool` | In-use connections |
| `hikaricp.connections.idle` | gauge | `pool` | Idle connections |
| `hikaricp.connections.pending` | gauge | `pool` | Threads waiting for a connection |
| `hikaricp.connections.min` | gauge | `pool` | Configured min |
| `hikaricp.connections.max` | gauge | `pool` | Configured max |
| `hikaricp.connections.creation` | timer | `pool` | Time to open a new connection |
| `hikaricp.connections.acquire` | timer | `pool` | Time to acquire from the pool |
| `hikaricp.connections.usage` | timer | `pool` | Time a connection was in use |
| `hikaricp.connections.timeout` | counter | `pool` | Pool acquisition timeouts — any non-zero rate is a problem |

Pools are named. You'll see `HikariPool-1` (PostgreSQL) and a separate pool for ClickHouse (`clickHouseJdbcTemplate`).

### JDBC generic

| Metric | Type | Tags | Meaning |
|---|---|---|---|
| `jdbc.connections.min` | gauge | `name` | Same data as Hikari, surfaced generically |
| `jdbc.connections.max` | gauge | `name` | |
| `jdbc.connections.active` | gauge | `name` | |
| `jdbc.connections.idle` | gauge | `name` | |

### Logging

| Metric | Type | Tags | Meaning |
|---|---|---|---|
| `logback.events` | counter | `level` (error/warn/info/debug/trace) | Log events emitted since start — `{level=error}` is a useful panel |

### Spring Boot lifecycle

| Metric | Type | Tags | Meaning |
|---|---|---|---|
| `application.started.time` | timer | `main.application.class` | Cold-start duration |
| `application.ready.time` | timer | `main.application.class` | Time to ready |

### Flyway

| Metric | Type | Tags | Meaning |
|---|---|---|---|
| `flyway.migrations` | gauge | — | Number of migrations applied (current schema) |

### Executor pools (if any `@Async` executors exist)

When a `ThreadPoolTaskExecutor` bean is registered and tagged, Micrometer adds:

| Metric | Type | Tags | Meaning |
|---|---|---|---|
| `executor.active` | gauge | `name` | Currently running tasks |
| `executor.queued` | gauge | `name` | Queued tasks |
| `executor.queue.remaining` | gauge | `name` | Queue headroom |
| `executor.pool.size` | gauge | `name` | Current pool size |
| `executor.pool.core` | gauge | `name` | Core size |
| `executor.pool.max` | gauge | `name` | Max size |
| `executor.completed` | counter | `name` | Completed tasks |

---

## Suggested dashboard panels

Below are 17 panels, each expressed as a single `POST /api/v1/admin/server-metrics/query` body. Tenant is implicit in the JWT — the server filters by tenant server-side. `{from}` and `{to}` are dashboard variables.

### Row: server health (top of dashboard)

1. **Agents by state** — stacked area.

```json
{ "metric": "cameleer.agents.connected", "statistic": "value",
  "from": "{from}", "to": "{to}", "stepSeconds": 60,
  "groupByTags": ["state"], "aggregation": "avg", "mode": "raw" }
```

2. **Ingestion buffer depth by type** — line chart.

```json
{ "metric": "cameleer.ingestion.buffer.size", "statistic": "value",
  "from": "{from}", "to": "{to}", "stepSeconds": 60,
  "groupByTags": ["type"], "aggregation": "avg", "mode": "raw" }
```

3. **Ingestion drops per minute** — bar chart.

```json
{ "metric": "cameleer.ingestion.drops", "statistic": "count",
  "from": "{from}", "to": "{to}", "stepSeconds": 60,
  "groupByTags": ["reason"], "mode": "delta" }
```

4. **Auth failures per minute** — same shape as drops, grouped by `reason`.

```json
{ "metric": "cameleer.auth.failures", "statistic": "count",
  "from": "{from}", "to": "{to}", "stepSeconds": 60,
  "groupByTags": ["reason"], "mode": "delta" }
```

### Row: JVM

5. **Heap used vs committed vs max** — area chart (three overlay queries).

```json
{ "metric": "jvm.memory.used", "statistic": "value",
  "from": "{from}", "to": "{to}", "stepSeconds": 60,
  "filterTags": { "area": "heap" }, "aggregation": "sum", "mode": "raw" }
```

Repeat with `"metric": "jvm.memory.committed"` and `"metric": "jvm.memory.max"`.

6. **CPU %** — line.

```json
{ "metric": "process.cpu.usage", "statistic": "value",
  "from": "{from}", "to": "{to}", "stepSeconds": 60, "aggregation": "avg", "mode": "raw" }
```

Overlay with `"metric": "system.cpu.usage"`.

7. **GC pause — max per cause**.

```json
{ "metric": "jvm.gc.pause", "statistic": "max",
  "from": "{from}", "to": "{to}", "stepSeconds": 60,
  "groupByTags": ["cause"], "aggregation": "max", "mode": "raw" }
```

8. **Thread count** — three overlay lines: `jvm.threads.live`, `jvm.threads.daemon`, `jvm.threads.peak`, each with `statistic=value, aggregation=avg, mode=raw`.

### Row: HTTP + DB

9. **HTTP mean latency by URI** — top-N URIs.

```json
{ "metric": "http.server.requests", "statistic": "mean",
  "from": "{from}", "to": "{to}", "stepSeconds": 60,
  "groupByTags": ["uri"], "filterTags": { "outcome": "SUCCESS" },
  "aggregation": "avg", "mode": "raw" }
```

For a p99 proxy, repeat with `"statistic": "max"`.

10. **HTTP error rate** — two queries, divide client-side: total requests and 5xx requests.

```json
{ "metric": "http.server.requests", "statistic": "count",
  "from": "{from}", "to": "{to}", "stepSeconds": 60,
  "mode": "delta", "aggregation": "sum" }
```

Then for the 5xx series, add `"filterTags": { "outcome": "SERVER_ERROR" }` and divide.

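That client-side division is a bucket-by-bucket join on timestamp. Below is a sketch (Java here for consistency with the backend code this doc describes; the real dashboard does this in its UI code). Buckets missing from the 5xx series count as a zero error rate, and empty total buckets are skipped to avoid dividing by zero:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class ErrorRate {
    /**
     * Divide a 5xx delta series by a total delta series, bucket by bucket.
     * Keys are bucket timestamps; values are the per-bucket counts.
     */
    static Map<Long, Double> of(Map<Long, Double> totals, Map<Long, Double> serverErrors) {
        Map<Long, Double> rate = new LinkedHashMap<>();
        totals.forEach((ts, total) -> {
            if (total > 0) {
                // Bucket with traffic: 5xx count (default 0) over total.
                rate.put(ts, serverErrors.getOrDefault(ts, 0.0) / total);
            }
            // Bucket with zero traffic: no data point rather than 0/0.
        });
        return rate;
    }
}
```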
11. **HikariCP pool saturation** — overlay two queries.

```json
{ "metric": "hikaricp.connections.active", "statistic": "value",
  "from": "{from}", "to": "{to}", "stepSeconds": 60,
  "groupByTags": ["pool"], "aggregation": "avg", "mode": "raw" }
```

Overlay with `"metric": "hikaricp.connections.pending"`.

12. **Hikari acquire timeouts per minute**.

```json
{ "metric": "hikaricp.connections.timeout", "statistic": "count",
  "from": "{from}", "to": "{to}", "stepSeconds": 60,
  "groupByTags": ["pool"], "mode": "delta" }
```

### Row: alerting (collapsible)

13. **Alerting instances by state** — stacked.

```json
{ "metric": "alerting_instances_total", "statistic": "value",
  "from": "{from}", "to": "{to}", "stepSeconds": 60,
  "groupByTags": ["state"], "aggregation": "avg", "mode": "raw" }
```

14. **Eval errors per minute by kind**.

```json
{ "metric": "alerting_eval_errors_total", "statistic": "count",
  "from": "{from}", "to": "{to}", "stepSeconds": 60,
  "groupByTags": ["kind"], "mode": "delta" }
```

15. **Webhook delivery — max per minute**.

```json
{ "metric": "alerting_webhook_delivery_duration_seconds", "statistic": "max",
  "from": "{from}", "to": "{to}", "stepSeconds": 60,
  "aggregation": "max", "mode": "raw" }
```

### Row: deployments (runtime-enabled only)

16. **Deploy outcomes per hour**.

```json
{ "metric": "cameleer.deployments.outcome", "statistic": "count",
  "from": "{from}", "to": "{to}", "stepSeconds": 3600,
  "groupByTags": ["status"], "mode": "delta" }
```

17. **Deploy duration mean**.

```json
{ "metric": "cameleer.deployments.duration", "statistic": "mean",
  "from": "{from}", "to": "{to}", "stepSeconds": 300,
  "aggregation": "avg", "mode": "raw" }
```

For a p99 proxy, repeat with `"statistic": "max"`.

---

## Notes for the dashboard implementer

- **Use the REST API.** The server handles tenant filtering, counter deltas, range bounds, and input validation. Direct ClickHouse is a fallback for the handful of cases the generic query can't express.
- **`total_time` vs `total`.** SimpleMeterRegistry and PrometheusMeterRegistry disagree on the tag value for a Timer's cumulative duration. The server uses PrometheusMeterRegistry in production, so expect `total_time`. The derived `statistic=mean` handles both transparently.
- **Cardinality warning:** `http.server.requests` tags include `uri` and `status`. The server templates URIs, but if someone adds an endpoint that embeds a high-cardinality path segment without `@PathVariable`, you'll see an explosion here. The API caps responses at 500 series; you'll get a 400 if you blow past it.
- **The dashboard is read-only.** There's no write path — only the server writes into `server_metrics`.

---

## Changelog

- 2026-04-23 — initial write. Write-only backend.
- 2026-04-23 — added generic REST API (`/api/v1/admin/server-metrics/{catalog,instances,query}`) so dashboards don't need direct ClickHouse access. All 17 suggested panels now expressed as single-endpoint queries.
- 2026-04-24 — shipped the built-in `/admin/server-metrics` UI dashboard. Gated by `infrastructureendpoints` + ADMIN, identical visibility to `/admin/{database,clickhouse}`. Source: `ui/src/pages/Admin/ServerMetricsAdminPage.tsx`.
- 2026-04-24 — dashboard now uses the global time-range control (`useGlobalFilters`) instead of a page-local picker. Bucket size auto-scales with the selected window (10 s → 1 h). Query hooks now take a `ServerMetricsRange = { from: Date; to: Date }` instead of a `windowSeconds` number so they work for any absolute or rolling range the TopBar supplies.

---

# Deployment Strategies (blue-green + rolling) — Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Make `deploymentStrategy` actually affect runtime behavior. Support **blue-green** (all-at-once, default) and **rolling** (per-replica) deployments with correct semantics. Unblock real blue/green by giving each deployment a unique container-name generation suffix so old + new replicas can coexist during the swap.

**Current state (interim fix landed in `f8dccaae`):** the strategy field exists but the executor doesn't branch on it; a destroy-then-start flow runs regardless. This plan replaces that interim behavior.

**Architecture:**

- Append an 8-char **`gen`** suffix (first 8 chars of `deployment.id`) to the container name AND `CAMELEER_AGENT_INSTANCEID`. Unique per deployment; no new DB state.
- Add a `cameleer.generation` Docker label so Grafana/Prometheus can pin deploy boundaries without regexing on instance-id.
- Branch `DeploymentExecutor.executeAsync` on strategy:
  - **blue-green**: start all N new → health-check all → stop all old. Strict all-healthy: partial = FAILED (old stays running).
  - **rolling**: per-replica loop: start new[i] → health-check → stop old[i] → next. Mid-rollout failure → stop the failed new[i], leave the remaining old[i..n] running, mark FAILED.
- Keep destroy-then-start as the fallback for unknown strategy values (safety net).

**Reference:** interim-fix commit `f8dccaae`; investigation summary in the session log.
---

## File Structure

### Backend (new / modified)

- **Create:** `cameleer-server-core/src/main/java/com/cameleer/server/core/runtime/DeploymentStrategy.java` — enum `BLUE_GREEN, ROLLING`; `fromWire(String)` with blue-green fallback; `toWire()` → "blue-green" / "rolling".
- **Modify:** `cameleer-server-app/src/main/java/com/cameleer/server/app/runtime/DeploymentExecutor.java` — add `gen` computation, strategy branching, per-strategy START_REPLICAS + HEALTH_CHECK + SWAP_TRAFFIC flows. Rewrite the body of `executeAsync` so stages 4–6 dispatch on strategy. Extract helper methods `deployBlueGreen` and `deployRolling` to keep each path readable.
- **Modify:** `cameleer-server-app/src/main/java/com/cameleer/server/app/runtime/TraefikLabelBuilder.java` — take a `gen` argument; emit the `cameleer.generation` label; `cameleer.instance-id` becomes `{envSlug}-{appSlug}-{replicaIndex}-{gen}`.
- **Modify:** `cameleer-server-core/src/main/java/com/cameleer/server/core/runtime/DeploymentService.java` — `containerName` stored on the row becomes `env.slug() + "-" + app.slug()` (unchanged — already just the group name for DB/operator visibility; the real Docker name is computed in the executor).
- **Modify:** `cameleer-server-app/src/test/java/com/cameleer/server/app/controller/DeploymentControllerIT.java` — update the single assertion that pins the `container_name` format, if any (spotted at line ~112 in the investigation).
- **Create:** `cameleer-server-app/src/test/java/com/cameleer/server/app/runtime/BlueGreenStrategyIT.java` — two tests: the all-replicas-healthy path stops old after new, and partial-healthy aborts while preserving old.
- **Create:** `cameleer-server-app/src/test/java/com/cameleer/server/app/runtime/RollingStrategyIT.java` — two tests: happy rolling 3→3 replacement, and fail-on-replica-1 preserves the remaining old replicas.

### UI

- **Modify:** `ui/src/pages/AppsTab/AppDeploymentPage/ConfigTabs/ResourcesTab.tsx` — confirm the strategy dropdown offers "blue-green" and "rolling" with descriptive labels + a hint line.
- **Modify:** `ui/src/pages/AppsTab/AppDeploymentPage/DeploymentTab/StatusCard.tsx` — surface `deployment.deploymentStrategy` as a small text/badge near the version badge (read-only).

### Docs + rules

- **Modify:** `.claude/rules/docker-orchestration.md` — rewrite the "DeploymentExecutor Details" and "Blue/green strategy" sections to describe the new behavior and the `gen` suffix; retire the interim destroy-then-start note.
- **Modify:** `.claude/rules/app-classes.md` — update the `DeploymentExecutor` bullet under `runtime/`.
- **Modify:** `.claude/rules/core-classes.md` — note the new `DeploymentStrategy` enum under `runtime/`.

---

## Phase 1 — Core: DeploymentStrategy enum + gen utility

### Task 1.1: DeploymentStrategy enum

**Files:** Create `cameleer-server-core/src/main/java/com/cameleer/server/core/runtime/DeploymentStrategy.java`.

- [ ] Create the enum with two constants `BLUE_GREEN`, `ROLLING`.
- [ ] Add `toWire()` returning `"blue-green"` / `"rolling"`.
- [ ] Add `fromWire(String)` — case-insensitive match; unknown or null → `BLUE_GREEN` with no throw (safety fallback). Returns an enum constant, never null.

**Verification:** unit test covering known, unknown, and null inputs.

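Taken together, Task 1.1 comes out to roughly the following sketch (the real file would declare the enum `public` in `com.cameleer.server.core.runtime`):

```java
// Sketch of Task 1.1 — wire forms plus the BLUE_GREEN safety fallback.
enum DeploymentStrategy {
    BLUE_GREEN("blue-green"),
    ROLLING("rolling");

    private final String wire;

    DeploymentStrategy(String wire) { this.wire = wire; }

    /** Kebab-case form stored on the container config. */
    String toWire() { return wire; }

    /** Case-insensitive parse; unknown or null falls back to BLUE_GREEN, never throws. */
    static DeploymentStrategy fromWire(String value) {
        if (value == null) return BLUE_GREEN;
        for (DeploymentStrategy s : values()) {
            if (s.wire.equalsIgnoreCase(value.trim())) return s;
        }
        return BLUE_GREEN;
    }
}
```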
### Task 1.2: Generation suffix helper

- [ ] Decide location — an inline static helper on `DeploymentExecutor` is fine (`private static String gen(UUID id) { return id.toString().substring(0, 8); }`). No new file needed.

---

## Phase 2 — Executor: gen-suffixed naming + `cameleer.generation` label

This phase is purely the naming change; no strategy branching yet. After this phase, redeploy still uses the destroy-then-start interim flow, but containers carry the new names + label.

### Task 2.1: TraefikLabelBuilder — accept `gen`, emit generation label

**Files:** Modify `TraefikLabelBuilder.java`.

- [ ] Add `String gen` as a new arg on `build(...)`.
- [ ] Change `instanceId` construction: `envSlug + "-" + appSlug + "-" + replicaIndex + "-" + gen`.
- [ ] Add label `cameleer.generation = gen`.
- [ ] Leave the Traefik router/service label keys using `svc = envSlug + "-" + appSlug` (unchanged — routing is generation-agnostic, so load balancing across old + new works automatically).

### Task 2.2: DeploymentExecutor — compute gen once, thread through

**Files:** Modify `DeploymentExecutor.executeAsync`.

- [ ] At the top of the try block (after `env`, `app`, `config` resolution), compute `String gen = gen(deployment.id());`.
- [ ] In the replica loop: `String instanceId = env.slug() + "-" + app.slug() + "-" + i + "-" + gen;` and `String containerName = tenantId + "-" + instanceId;`.
- [ ] Pass `gen` to `TraefikLabelBuilder.build(...)`.
- [ ] Set `CAMELEER_AGENT_INSTANCEID=instanceId` (already done; just verify the new value propagates).
- [ ] Leave `replicaStates[].containerName` stored as the new full name.

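Tasks 2.1 and 2.2 boil down to the following name/label composition. This is a standalone sketch; the slugs and tenant id below are made up, and the real values come from `env`, `app`, and the deployment UUID:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.UUID;

class GenerationNaming {
    /** First 8 chars of the deployment UUID: the `gen` suffix. */
    static String gen(UUID deploymentId) {
        return deploymentId.toString().substring(0, 8);
    }

    /** `{envSlug}-{appSlug}-{replicaIndex}-{gen}` per Task 2.2. */
    static String instanceId(String envSlug, String appSlug, int replica, String gen) {
        return envSlug + "-" + appSlug + "-" + replica + "-" + gen;
    }

    /** The Docker container name adds the tenant prefix. */
    static String containerName(String tenantId, String instanceId) {
        return tenantId + "-" + instanceId;
    }

    /** The generation label lets dashboards pin deploy boundaries (Task 2.1). */
    static Map<String, String> labels(String instanceId, String gen) {
        Map<String, String> labels = new LinkedHashMap<>();
        labels.put("cameleer.instance-id", instanceId);
        labels.put("cameleer.generation", gen);
        return labels;
    }
}
```

Because `gen` differs per deployment, replicas of two generations of the same app get distinct names, which is what lets old + new coexist during the blue-green swap.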
### Task 2.3: Update the one brittle test

**Files:** Modify `DeploymentControllerIT.java`.

- [ ] Relax the container-name assertion to `startsWith("default-default-deploy-test-")` or similar — verify behavior, not the exact suffix.

**Verification after Phase 2:**

- `mvn -pl cameleer-server-app -am test -Dtest=DeploymentSnapshotIT,DeploymentControllerIT,PostgresDeploymentRepositoryIT`
- All green; container names now include `gen`; redeploy still works via the interim destroy-then-start flow (which will be replaced in Phase 3).

---

## Phase 3 — Blue-green strategy (default)

### Task 3.1: Extract `deployBlueGreen(...)` helper

**Files:** Modify `DeploymentExecutor.java`.

- [ ] Move the current START_REPLICAS → HEALTH_CHECK → SWAP_TRAFFIC body into a new `private void deployBlueGreen(...)` method.
- [ ] Signature: take `deployment`, `app`, `env`, `config`, `resolvedRuntimeType`, `mainClass`, `gen`, `primaryNetwork`, `additionalNets`.

### Task 3.2: Reorder for proper blue-green

- [ ] Remove the pre-flight "stop previous" block added in `f8dccaae` (it will be replaced by the post-health swap).
- [ ] Order: start all new → wait for all healthy → find the previous active deployment (via `findActiveByAppIdAndEnvironmentIdExcluding`) → stop the old containers + mark the old row STOPPED.
- [ ] Strict all-healthy: if `healthyCount < config.replicas()`, stop the new containers we just started and mark the deployment FAILED with `"blue-green: %d/%d replicas healthy; preserving previous deployment"`. Do **not** touch the old deployment.

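The Task 3.2 ordering, reduced to a runnable sketch. The `Orchestrator` interface below is an illustrative stand-in for the real `RuntimeOrchestrator` and repository calls, and the FAILED status update is modeled as an exception:

```java
import java.util.ArrayList;
import java.util.List;

interface Orchestrator {
    String start(int replicaIndex);          // returns the new container id
    boolean waitHealthy(String containerId); // polls until healthy or timeout
    void stopAndRemove(String containerId);
}

class BlueGreenDeployer {
    /**
     * Start all new replicas, require ALL of them healthy, and only then stop
     * the old ones. On partial health, clean up the new generation and throw;
     * the old deployment is never touched.
     */
    static List<String> deploy(Orchestrator orch, List<String> oldContainers, int replicas) {
        List<String> fresh = new ArrayList<>();
        for (int i = 0; i < replicas; i++) fresh.add(orch.start(i));

        long healthy = fresh.stream().filter(orch::waitHealthy).count();
        if (healthy < replicas) {
            fresh.forEach(orch::stopAndRemove); // roll back the new generation only
            throw new IllegalStateException(
                "blue-green: " + healthy + "/" + replicas
                + " replicas healthy; preserving previous deployment");
        }
        oldContainers.forEach(orch::stopAndRemove); // swap: old goes away last
        return fresh;
    }
}
```

Note the two invariants the Phase 5 tests pin: old containers are only touched after every new replica is healthy, and a partial-health failure cleans up only the new generation.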
### Task 3.3: Wire strategy dispatch

- [ ] At the point where `deployBlueGreen` is called, check `DeploymentStrategy.fromWire(config.deploymentStrategy())` and dispatch. For this phase, always call `deployBlueGreen`.
- [ ] `ROLLING` dispatches to `deployRolling(...)`, implemented in Phase 4 (stub it to throw `UnsupportedOperationException` for now — it will be replaced before that phase lands).

---

## Phase 4 — Rolling strategy

### Task 4.1: `deployRolling(...)` helper

**Files:** Modify `DeploymentExecutor.java`.

- [ ] Same signature as `deployBlueGreen`.
- [ ] Look up the previous deployment once at entry via `findActiveByAppIdAndEnvironmentIdExcluding`. Capture its `replicaStates` into a map keyed by replica index.
- [ ] For `i` from 0 to `config.replicas() - 1`:
  - [ ] Start new replica `i` (with the gen-suffixed name).
  - [ ] Wait for this single container to go healthy (per-replica `waitForOneHealthy(containerId, timeoutSeconds)`; reuse `healthCheckTimeout` per replica or introduce a smaller per-replica budget).
  - [ ] On success: stop the corresponding old replica `i` by `containerId` from the previous deployment's replicaStates (if present); log and continue.
  - [ ] On failure: stop + remove all new replicas started so far, mark the deployment FAILED with `"rolling: replica %d failed to reach healthy; preserved %d previous replicas"`. Do **not** touch the already-replaced replicas from the previous deployment (they're already stopped) or the not-yet-replaced ones (they keep serving).
- [ ] After the loop succeeds for all replicas, mark the previous deployment row STOPPED (its containers are all stopped).

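The loop above, as a runnable sketch. Same caveat as the blue-green sketch: the `Orchestrator` interface and the exception-as-FAILED modeling are illustrative stand-ins for the real orchestrator and repository calls:

```java
import java.util.ArrayList;
import java.util.List;

class RollingDeployer {
    interface Orchestrator {
        String start(int replicaIndex);
        boolean waitOneHealthy(String containerId); // single-container health wait
        void stopAndRemove(String containerId);
    }

    /**
     * Replace replicas one at a time: start new[i], wait for it alone to go
     * healthy, then stop old[i]. On failure, clean up every new container
     * started so far and throw; not-yet-replaced old replicas keep serving.
     */
    static List<String> deploy(Orchestrator orch, List<String> oldContainers, int replicas) {
        List<String> fresh = new ArrayList<>();
        for (int i = 0; i < replicas; i++) {
            String id = orch.start(i);
            fresh.add(id);
            if (!orch.waitOneHealthy(id)) {
                fresh.forEach(orch::stopAndRemove); // remove only the new generation
                throw new IllegalStateException(
                    "rolling: replica " + i + " failed to reach healthy; preserved "
                    + (replicas - i) + " previous replicas");
            }
            if (i < oldContainers.size()) orch.stopAndRemove(oldContainers.get(i));
        }
        return fresh;
    }
}
```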
### Task 4.2: Add `waitForOneHealthy`

- [ ] Variant of `waitForAnyHealthy` that polls a single container id. Returns boolean. Same sleep cadence.

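A minimal shape for it (the `HealthCheck` functional interface, timeout, and poll cadence below are illustrative; the real method polls the orchestrator by container id):

```java
class HealthPoller {
    interface HealthCheck {
        boolean isHealthy(String containerId);
    }

    /** Poll a single container until healthy or the deadline passes. */
    static boolean waitForOneHealthy(HealthCheck check, String containerId,
                                     long timeoutMillis, long pollMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (check.isHealthy(containerId)) return true;
            try {
                Thread.sleep(pollMillis); // same cadence as waitForAnyHealthy
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false; // timed out without a healthy poll
    }
}
```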
### Task 4.3: Replace the Phase 3 stub

- [ ] `ROLLING` dispatch calls `deployRolling` instead of throwing.

|

## Phase 5 — Integration tests

Each IT extends `AbstractPostgresIT`, uses `@MockBean RuntimeOrchestrator`, and overrides `cameleer.server.runtime.healthchecktimeout=2` via `@TestPropertySource`.

### Task 5.1: BlueGreenStrategyIT

**Files:** Create `BlueGreenStrategyIT.java`.

- [ ] **Test 1 `blueGreen_allHealthy_stopsOldAfterNew`:** seed a previous RUNNING deployment (2 replicas). Trigger a redeploy with `containerConfig.deploymentStrategy=blue-green` + replicas=2. Mock orchestrator: new containers return `healthy`. Await the new deployment RUNNING. Assert: the previous deployment has status STOPPED and its container IDs had `stopContainer` + `removeContainer` called; the new deployment's replicaStates contain the two new container IDs; the `cameleer.generation` label is on both new container requests.
- [ ] **Test 2 `blueGreen_partialHealthy_preservesOldAndMarksFailed`:** seed previous RUNNING (2 replicas). New deploy with replicas=2. Mock: container A healthy, container B starting forever. Await the new deployment FAILED. Assert: the previous deployment is still RUNNING; its container IDs were **not** stopped; the new deployment's errorMessage contains "1/2 replicas healthy".

### Task 5.2: RollingStrategyIT

**Files:** Create `RollingStrategyIT.java`.

- [ ] **Test 1 `rolling_allHealthy_replacesOneByOne`:** seed previous RUNNING (3 replicas). New deploy with strategy=rolling, replicas=3. Mock: new containers all healthy. Use an `ArgumentCaptor` on `startContainer` to observe start order. Assert: start[0] → stop[old0] → start[1] → stop[old1] → start[2] → stop[old2]; the new deployment is RUNNING with 3 replicaStates; the old deployment is STOPPED.
- [ ] **Test 2 `rolling_failsMidRollout_preservesRemainingOld`:** seed previous RUNNING (3 replicas). New deploy with strategy=rolling. Mock: new[0] healthy, new[1] never healthy. Await FAILED. Assert: new[0] was stopped during cleanup; old[0] was stopped (replaced before the failure); old[1] + old[2] are still RUNNING; the new deployment's errorMessage contains "replica 1".

---

## Phase 6 — UI strategy indicator

### Task 6.1: Strategy dropdown polish

**Files:** Modify `ResourcesTab.tsx`.

- [ ] Verify the `<select>` has options `blue-green` and `rolling`.
- [ ] Add a one-line description under the dropdown: "Blue-green: start all new, swap when healthy. Rolling: replace one replica at a time."

### Task 6.2: Strategy on StatusCard

**Files:** Modify `DeploymentTab/StatusCard.tsx`.

- [ ] Add a small subtle text line in the grid: `<span>Strategy</span><span>{deployment.deploymentStrategy}</span>` (read-only, mono text ok).

---

## Phase 7 — Docs + rules updates

### Task 7.1: Update `.claude/rules/docker-orchestration.md`

- [ ] Replace the "DeploymentExecutor Details" section with the new flow (gen suffix, strategy dispatch, per-strategy ordering).
- [ ] Update the "Deployment Status Model" table — `DEGRADED` now means "post-deploy replica crashed"; failed-during-deploy is always `FAILED`.
- [ ] Add a short "Deployment Strategies" section: behavior of blue-green vs rolling, resource peak, failure semantics.

### Task 7.2: Update `.claude/rules/app-classes.md`

- [ ] Under `runtime/` → `DeploymentExecutor` bullet: add "branches on `DeploymentStrategy.fromWire(config.deploymentStrategy())`. Container name format: `{tenantId}-{envSlug}-{appSlug}-{replicaIndex}-{gen}` where gen = 8-char prefix of the deployment UUID."

### Task 7.3: Update `.claude/rules/core-classes.md`

- [ ] Add under `runtime/`: `DeploymentStrategy` — enum BLUE_GREEN, ROLLING; `fromWire` falls back to BLUE_GREEN; note it is stored as a kebab-case string on the config.

---

## Rollout sequence

1. Phase 1 (enum + helper) — trivial; land as one commit.
2. Phase 2 (naming + generation label) — one commit; the interim destroy-then-start flow stays active; regenerates no OpenAPI (no controller change).
3. Phase 3 (blue-green as default) — one commit replacing the interim flow. This is where real behavior changes.
4. Phase 4 (rolling) — one commit.
5. Phase 5 (4 ITs) — one commit; run `mvn test` against the affected modules.
6. Phase 6 (UI) — one commit; `npx tsc` clean.
7. Phase 7 (docs) — one commit.

Total: 7 commits, all atomic.

## Acceptance

- Existing `DeploymentSnapshotIT` still passes.
- New `BlueGreenStrategyIT` (2 tests) and `RollingStrategyIT` (2 tests) pass.
- Browser QA: redeploying with `deploymentStrategy=blue-green` vs `rolling` produces the expected container timeline (inspect via `docker ps`); Prometheus metrics show continuity across deploys when queried by `{cameleer_app, cameleer_environment}`; the `cameleer_generation` label flips per deploy.
- `.claude/rules/docker-orchestration.md` reflects the new behavior.

## Non-goals

- Automatic rollback on blue-green partial failure (the old deployment is left running; the user redeploys).
- Automatic rollback on rolling mid-failure (the remaining old replicas keep running; the user redeploys).
- Per-replica `HEALTH_CHECK` stage label in the UI progress bar — the 7-stage progress is reused as-is; the strategy dictates the internal looping.
- Strategy-field validation at container-config save time (the executor's `fromWire` fallback absorbs unknown values — consider a follow-up for strict validation if it becomes an issue).