Compare commits

...

140 Commits

Author SHA1 Message Date
hsiegeln
dafd7adb00 chore: upgrade @cameleer/design-system to v0.0.3
Some checks failed
CI / docker (push) Has been cancelled
CI / build (push) Has been cancelled
CI / deploy (push) Has been cancelled
CI / deploy-feature (push) Has been cancelled
CI / cleanup-branch (push) Has been cancelled
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 15:42:38 +01:00
hsiegeln
44eecfa5cd deleted obsolete files
All checks were successful
CI / build (push) Successful in 1m21s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 43s
CI / deploy (push) Successful in 38s
CI / deploy-feature (push) Has been skipped
2026-03-24 10:24:13 +01:00
hsiegeln
ff76751629 refactor: rename agent group→application across entire codebase
All checks were successful
CI / build (push) Successful in 1m22s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 52s
CI / deploy (push) Successful in 39s
CI / deploy-feature (push) Has been skipped
Complete the group→application terminology rename in the agent
registry subsystem:

- AgentInfo: field group → application, all wither methods updated
- AgentRegistryService: findByGroup → findByApplication
- AgentInstanceResponse: field group → application (API response)
- AgentRegistrationRequest: field group → application (API request)
- JwtServiceImpl: parameter names group → application (JWT claim
  string "group" preserved for token backward compatibility)
- All controllers, lifecycle monitor, command controller updated
- Integration tests: JSON request bodies "group" → "application"
- Frontend: schema.d.ts, openapi.json, agent queries, AgentHealth

RBAC group references (groups table, GroupAdminController, etc.)
are NOT affected — they are a separate domain concept.
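
The JWT compatibility point above can be sketched as follows. The function name and claim shape are illustrative, not the actual JwtServiceImpl API: the code-level name is now "application", but the claim key written into tokens stays "group" so previously issued tokens remain valid.

```typescript
// Sketch: rename the parameter, keep the legacy claim key.
function buildAgentClaims(application: string): Record<string, string> {
  return { group: application }; // claim string "group" kept on purpose
}
```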

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 08:48:12 +01:00
hsiegeln
413839452c fix: use statsForApp when application is set without routeId
All checks were successful
CI / build (push) Successful in 1m21s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 44s
CI / deploy (push) Successful in 38s
CI / deploy-feature (push) Has been skipped
The stats endpoint was calling statsForRoute(null, agentIds) when
only application was set — this filtered by route_id=null, returning
zero results. Now correctly routes to statsForApp/timeseriesForApp,
which query the stats_1m_app continuous aggregate by application_name.

Also reverts the group parameter alias workaround — the deployed
backend correctly accepts 'application'.

Three code paths now:
- No filters → stats_1m_all (global)
- application only → stats_1m_app (per-app)
- routeId (±application) → stats_1m_route (per-route)
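
The three paths above amount to a small dispatch. This sketch mirrors the aggregate names from the commit; the function itself is illustrative, not the actual service code:

```typescript
type StatsSource = "stats_1m_all" | "stats_1m_app" | "stats_1m_route";

// Pick the continuous aggregate based on which filters are set.
function pickStatsSource(application?: string, routeId?: string): StatsSource {
  if (routeId) return "stats_1m_route"; // per-route, with or without application
  if (application) return "stats_1m_app"; // per-app
  return "stats_1m_all"; // global
}
```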

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 08:28:05 +01:00
hsiegeln
c33e899be7 fix: accept both 'application' and 'group' query params in search API
All checks were successful
CI / build (push) Successful in 1m22s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 50s
CI / deploy (push) Successful in 37s
CI / deploy-feature (push) Has been skipped
The backend was renamed from group→application but Docker build cache
may serve old code. Accept 'group' as a fallback alias so the UI works
with both old and new backends. Applies to GET /search/executions,
/search/stats, and /search/stats/timeseries.
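
The fallback described above reduces to a precedence rule. A hedged sketch (hypothetical helper, not the actual controller code), with the new name winning when both params are present:

```typescript
// Resolve the application filter from either the new 'application'
// param or the legacy 'group' alias.
function resolveApplication(params: URLSearchParams): string | undefined {
  return params.get("application") ?? params.get("group") ?? undefined;
}
```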

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 08:25:05 +01:00
hsiegeln
180514a039 fix: align RBAC user management styling with mock design
All checks were successful
CI / build (push) Successful in 1m19s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 52s
CI / deploy (push) Successful in 38s
CI / deploy-feature (push) Has been skipped
- Split pane: card layout with border, border-radius, box-shadow
  matching mock's bordered panel look
- List pane: bg-surface background, padded header with border-bottom
- Entity items: border-bottom separators instead of gap spacing,
  flex-start alignment for multi-line content
- Detail pane: bg-surface background, 20px padding, right border-radius
- User meta line: show email + group path (like mock's "email · group")
- Create form: raised background with bottom border

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 08:21:11 +01:00
hsiegeln
60fced56ed fix: format Documents column with user locale in OpenSearch admin
All checks were successful
CI / build (push) Successful in 1m25s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 1m0s
CI / deploy (push) Successful in 39s
CI / deploy-feature (push) Has been skipped
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 08:17:06 +01:00
hsiegeln
515c942623 feat: add admin tab navigation between subpages
All checks were successful
CI / build (push) Successful in 1m19s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 52s
CI / deploy (push) Successful in 38s
CI / deploy-feature (push) Has been skipped
Add AdminLayout wrapper with Tabs component for navigating between
admin sections: User Management, Audit Log, OIDC, Database, OpenSearch.

Nest all /admin/* routes under AdminLayout using React Router's
Outlet pattern so the tab bar persists across admin page navigation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 22:17:33 +01:00
hsiegeln
3ccd4b6548 fix: self-host fonts instead of loading from Google Fonts CDN
All checks were successful
CI / build (push) Successful in 1m23s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 56s
CI / deploy (push) Successful in 39s
CI / deploy-feature (push) Has been skipped
Loading fonts from fonts.googleapis.com sends user IP addresses to
Google on every page load — a GDPR violation. Self-host DM Sans and
JetBrains Mono as woff2 files bundled with the UI.

- Download DM Sans (400/500/600/700 + 400 italic) woff2 files
- Download JetBrains Mono (400/500/600) woff2 files
- Replace @import url(googleapis) with local @font-face declarations
- Both fonts are OFL-licensed (free to self-host)
- Total size: ~135KB for all 8 font files

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 22:06:59 +01:00
hsiegeln
dad608e3a2 fix: display timestamps in user's local timezone, not UTC
Some checks failed
CI / build (push) Successful in 1m17s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 51s
CI / deploy-feature (push) Has been cancelled
CI / deploy (push) Has been cancelled
Two places in Dashboard used toISOString() for display, which always
renders UTC. Changed to toLocaleString() for the user's local timezone.

- Exchanges table "Started" column
- Detail panel "Timestamp" field

API query parameters correctly continue using toISOString() (UTC).
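
A minimal sketch of that split (helper names are illustrative): local-timezone, locale-aware strings for display; stable UTC ISO-8601 strings for API query parameters.

```typescript
// Display: user's local timezone and browser locale.
function formatForDisplay(iso: string): string {
  return new Date(iso).toLocaleString();
}

// API queries: always UTC ISO-8601.
function formatForQuery(d: Date): string {
  return d.toISOString();
}
```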

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 22:00:44 +01:00
hsiegeln
7479dd6daf fix: convert Instant to Timestamp for JDBC agent metrics query
Some checks failed
CI / docker (push) Has been cancelled
CI / deploy (push) Has been cancelled
CI / deploy-feature (push) Has been cancelled
CI / cleanup-branch (push) Has been cancelled
CI / build (push) Has been cancelled
PostgreSQL JDBC driver can't infer SQL type for java.time.Instant.
Convert from/to parameters to java.sql.Timestamp before binding.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 21:59:22 +01:00
hsiegeln
e4dff0cad1 fix: align RoutesMetrics with mock — chart titles, Invalid Date bug
All checks were successful
CI / build (push) Successful in 1m20s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 50s
CI / deploy (push) Successful in 38s
CI / deploy-feature (push) Has been skipped
- Fix Invalid Date in Errors bar chart (guard against null timestamps)
- Table header: "Route Metrics" → "Per-Route Performance"
- Chart titles: add units — "Throughput (msg/s)", "Latency (ms)",
  "Errors by Route", "Message Volume (msg/min)"
- Add yLabel to charts for axis labels

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 21:55:29 +01:00
hsiegeln
717367252c fix: align AgentInstance page with mock design
All checks were successful
CI / build (push) Successful in 1m13s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 49s
CI / deploy (push) Successful in 40s
CI / deploy-feature (push) Has been skipped
- Chart headers: add current value meta text (CPU %, memory MB, TPS,
  error rate, thread count) matching mock layout
- Bottom section: 2-column grid with log placeholder (left) and
  timeline events (right) matching mock layout
- Timeline header: show "Timeline" + event count like mock
- Remove duplicate EmptyState placeholder

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 21:51:44 +01:00
hsiegeln
a06808a2a2 fix: align AgentHealth page with mock design
Some checks failed
CI / build (push) Successful in 1m18s
CI / cleanup-branch (push) Has been skipped
CI / deploy (push) Has been cancelled
CI / deploy-feature (push) Has been cancelled
CI / docker (push) Has been cancelled
- DetailPanel: switch from tabs to flat children layout (fixes stale
  tab state bug), add position:fixed override, key on agent id
- Stat strip: colored status breakdown (live/stale/dead), msg/s detail
  on TPS, "requires attention" on dead count
- Scope trail: simplified to "X/Y live" label
- Event card header: rename "Event Log" to "Timeline" with count badge
- Remove unused Breadcrumb, scopeItems, groupHealth

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 21:50:16 +01:00
hsiegeln
6b750df1c4 fix: remove hardcoded locales from UI formatting
All checks were successful
CI / build (push) Successful in 1m21s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 52s
CI / deploy (push) Successful in 37s
CI / deploy-feature (push) Has been skipped
Use browser default locale instead of hardcoded 'en-US' and 'en-GB'
for number and time formatting.
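
In sketch form (helper names are made up): passing undefined as the locale argument makes Intl fall back to the runtime's default, which is the whole fix.

```typescript
// undefined locale -> environment default instead of a hardcoded tag
// like 'en-US' or 'en-GB'.
const formatCount = (n: number): string =>
  new Intl.NumberFormat(undefined).format(n);

const formatTime = (d: Date): string =>
  new Intl.DateTimeFormat(undefined, { timeStyle: "medium" }).format(d);
```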

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 21:44:16 +01:00
hsiegeln
ea56bcf2d7 fix: split Flyway migration — DDL in V1, policies in V2
All checks were successful
CI / build (push) Successful in 1m20s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 43s
CI / deploy (push) Successful in 1m16s
CI / deploy-feature (push) Has been skipped
TimescaleDB add_continuous_aggregate_policy and add_compression_policy
cannot run inside a transaction block. Move all policy calls to V2
with flyway:executeInTransaction=false directive.

Also fix stats_1m_processor_detail: add WITH NO DATA and
materialized_only = false.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 21:34:35 +01:00
hsiegeln
826466aa55 fix: cast diagram layout response type to fix TS build error
Some checks failed
CI / build (push) Successful in 1m13s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 53s
CI / deploy-feature (push) Has been skipped
CI / deploy (push) Failing after 1m16s
The render endpoint returns a union type (SVG string | JSON object).
Cast to DiagramLayout interface so .nodes is accessible. Also rename
useDiagramByRoute parameter from group to application.
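
The commit resolves this with a cast; an equivalent runtime-narrowing variant is sketched below. The DiagramLayout fields shown are assumptions, not the real interface:

```typescript
// The render endpoint returns either raw SVG markup (string) or a
// layout object; only the object form has .nodes.
interface DiagramLayout {
  nodes: { id: string }[];
}

function asLayout(resp: string | DiagramLayout): DiagramLayout | null {
  return typeof resp === "string" ? null : resp;
}
```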

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 21:25:36 +01:00
hsiegeln
6a5dba4eba refactor: rename group_name→application_name in DB, OpenSearch, SQL
Some checks failed
CI / build (push) Failing after 41s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
Consolidate V1-V7 Flyway migrations into single V1__init.sql with
all columns renamed from group_name to application_name. Requires
fresh database (wipe flyway_schema_history, all data).

- DB columns: executions.group_name → application_name,
  processor_executions.group_name → application_name
- Continuous aggregates: all views updated to use application_name
- OpenSearch field: group_name → application_name in index/query
- All Java SQL strings updated to match new column names
- Delete V2-V7 migration files (folded into V1)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 21:24:19 +01:00
hsiegeln
8ad0016a8e refactor: rename group/groupName to application/applicationName
Some checks failed
CI / build (push) Failing after 40s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
The execution-related "group" concept actually represents the
application name. Rename all Java fields, API parameters, and frontend
types from groupName→applicationName and group→application for clarity.

- Java records: ExecutionSummary, ExecutionDetail, ExecutionDocument,
  ExecutionRecord, ProcessorRecord
- API params: SearchRequest.group→application, SearchController
  @RequestParam group→application
- Services: IngestionService, DetailService, SearchIndexer, StatsStore
- Frontend: schema.d.ts, Dashboard, ExchangeDetail, RouteDetail,
  executions query hooks

Database column names (group_name) and OpenSearch field names are
unchanged — only the API-facing Java/TS field names are renamed.

RBAC group references (groups table, GroupRepository, GroupsTab) are
a separate domain concept and are NOT affected by this change.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 21:21:38 +01:00
hsiegeln
3c226de62f fix: use diagramContentHash for Route Flow instead of groupName
Some checks failed
CI / build (push) Failing after 51s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
The deployed backend doesn't return groupName on ExecutionDetail or
ExecutionSummary (Docker build cache issue). Switch diagram lookup to
use diagramContentHash which is always available in the detail response.

- Dashboard: useDiagramLayout(detail.diagramContentHash) instead of
  useDiagramByRoute(groupName, routeId)
- ExchangeDetail: same change

Route Flow now renders correctly in both the slide-in panel and the
full exchange detail page.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 21:13:01 +01:00
hsiegeln
c8c62a98bb fix: add groupName to ExecutionSummary in schema.d.ts
Some checks failed
CI / build (push) Successful in 1m12s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 1m10s
CI / deploy (push) Failing after 2m19s
CI / deploy-feature (push) Has been skipped
The Java record was updated but the OpenAPI schema was not regenerated,
causing a TypeScript build error in CI.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 21:03:45 +01:00
hsiegeln
2ae2871822 fix: add groupName to ExecutionDetail, rewrite ExchangeDetail to match mock
Some checks failed
CI / build (push) Failing after 40s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
- Add groupName field to ExecutionDetail record and DetailService
- Dashboard: fix TDZ error (rows referenced before definition), add
  selectedRow fallback for diagram groupName lookup
- ExchangeDetail: rewrite to match mock layout — auto-select first
  processor, Message IN/OUT split panels with header key-value rows,
  error panel for failed processors, Timeline/Flow toggle buttons
- Track diagram-mapping utility (was untracked, caused CI build failure)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 21:02:14 +01:00
hsiegeln
a950feaef1 fix: Dashboard DetailPanel uses flat scrollable layout matching mock
Some checks failed
CI / build (push) Failing after 41s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
Changed from tabs-based to children-based DetailPanel layout:
- Flat scrollable sections: Open Details → Overview → Errors → Route Flow → Processor Timeline
- Title shows "route — exchangeId" matching mock pattern
- Removed unused state (detailTab, processorIdx)
- Added panelSectionMeta CSS for duration display in timeline header

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 20:51:23 +01:00
hsiegeln
695969d759 fix: DetailPanel slide-in now visible — fixed empty content bug and positioning
Some checks failed
CI / build (push) Failing after 39s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
- Only render DetailPanel when detail data is loaded (key={selectedId} forces remount
  so internal activeTab state resets correctly)
- Override DetailPanel CSS with position:fixed to overlay on right side
  (AppShell layout doesn't support detail prop from child pages)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 20:47:43 +01:00
hsiegeln
a72b0954db fix: add groupName to ExecutionSummary, locale format stat values, inspect column, fix duplicate keys
Some checks failed
CI / build (push) Failing after 40s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
- Added groupName field to ExecutionSummary Java record and OpenSearch mapper
- Dashboard stat cards use locale-formatted numbers (en-US)
- Added inspect column (↗) linking directly to exchange detail page
- Fixed duplicate React key warning from two columns sharing executionId key

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 20:41:46 +01:00
hsiegeln
4572230c9c fix: align all pages with design system mocks — stat cards, tables, detail panels
Some checks failed
CI / build (push) Failing after 40s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
Dashboard: correct stat card labels (Exchanges/Success Rate/Errors/Throughput/Latency p99),
add detail text, trends, sparklines on all cards, Agent column, LIVE badge,
expanded detail panel with Agent/Correlation/Timestamp, "Open full details" link.

Agent Health: per-group meta (TPS/routes) in GroupCard header, proper HTML table
with column headers for instance list.

Agent Instance: stat card detail props (heap info, start date), scope trail with
inline status/version/routes badges.

Routes: 5th In-Flight stat card, enriched stat card props (detail/trend/sparkline),
SLA threshold line on latency chart.

Exchange Detail: Agent stat box in header.

Also: vite proxy CORS fix, cross-env dev scripts.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 20:28:56 +01:00
hsiegeln
752d7ec0e7 feat: add Users tab with split-pane layout, inline create, detail panel
Some checks failed
CI / build (push) Failing after 39s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 18:32:45 +01:00
hsiegeln
9ab38dfc59 feat: add Groups tab with hierarchy management and member/role assignment
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 18:32:18 +01:00
hsiegeln
907bcd5017 feat: add Roles tab with system role protection and principal display
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 18:32:07 +01:00
hsiegeln
83caf4be5b feat: align Agent Instance with mock — JVM charts, process info, stat cards, log placeholder
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 18:29:25 +01:00
hsiegeln
1533bea2a6 refactor: restructure RBAC page to container + tab components, add CSS module
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 18:28:52 +01:00
hsiegeln
94d1e81852 feat: add Route Detail page with diagram, processor stats, and tabbed sections
Replaces the filtered RoutesMetrics view at /routes/:appId/:routeId with a
dedicated RouteDetail page showing route diagram, processor stats table,
performance charts, recent executions, and client-side grouped error patterns.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 18:25:58 +01:00
hsiegeln
8e27f45a2b feat: add default roles and ConfirmDialog to OIDC config
Adds a Default Roles section with Tag components for viewing/removing roles and an Input+Button for adding new ones. Replaces the plain delete button with a ConfirmDialog requiring typed confirmation. Introduces OidcConfigPage.module.css for CSS module layout classes.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 18:25:14 +01:00
hsiegeln
a86f56f588 feat: add Timeline/Flow toggle to Exchange Detail
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 18:22:45 +01:00
hsiegeln
651cf9de6e feat: add correlation chain and processor count to Exchange Detail
Adds a recursive processor count stat to the exchange header, and a
Correlation Chain section that visualises related executions sharing
the same correlationId, with the current exchange highlighted.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 18:19:50 +01:00
hsiegeln
63d8078688 feat: align Dashboard stat cards with mock, add errors section to DetailPanel
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 18:19:33 +01:00
hsiegeln
ee69dbedfc feat: use TopBar onLogout prop, add ToastProvider
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 18:17:38 +01:00
hsiegeln
313d871948 chore: update design system to v0.0.2, regenerate schema.d.ts
Bumped @cameleer/design-system from ^0.0.1 to ^0.0.2 (adds onLogout prop to TopBar).
Fetched openapi.json from remote backend, stripped /api/v1 prefix, patched
ExecutionDetail with groupName and children fields to match UI expectations,
then regenerated schema.d.ts via openapi-typescript. TypeScript compiles clean.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 18:16:15 +01:00
hsiegeln
f4d2693561 feat: enrich AgentInstanceResponse with version/capabilities, add password reset endpoint
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 18:13:37 +01:00
hsiegeln
2051572ee2 feat: add GET /agents/{id}/metrics endpoint for JVM metrics
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 18:11:22 +01:00
hsiegeln
cc433b4215 feat: add GET /routes/metrics/processors endpoint
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 18:10:54 +01:00
hsiegeln
31b60c4e24 feat: add V7 migration for per-processor-id continuous aggregate 2026-03-23 18:09:24 +01:00
hsiegeln
017a0c218e docs: add UI mock alignment design spec and implementation plan
Comprehensive spec and 20-task plan to close all gaps between
@cameleer/design-system v0.0.2 mocks and the current server UI.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 18:06:26 +01:00
hsiegeln
4ff01681d4 style: add CSS modules to all pages matching design system mock layouts
All checks were successful
CI / build (push) Successful in 1m18s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 50s
CI / deploy (push) Successful in 50s
CI / deploy-feature (push) Has been skipped
Replace inline styles with semantic CSS module classes for proper visual
structure: card wrappers with borders/shadows, grid layouts for stat
strips and charts, section headers, and typography classes.

Pages updated: Dashboard, ExchangeDetail, RoutesMetrics, AgentHealth,
AgentInstance.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 18:16:16 +01:00
hsiegeln
f2744e3094 fix: correct response field mappings and add logout button
All checks were successful
CI / build (push) Successful in 1m28s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 50s
CI / deploy (push) Successful in 38s
CI / deploy-feature (push) Has been skipped
- SearchResult uses 'data' not 'items', 'total' not 'totalCount'
- ExecutionStats uses 'p99LatencyMs' not 'p99DurationMs'
- TimeseriesBucket uses 'time' not 'timestamp'
- Add user Dropdown with logout action to LayoutShell

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 18:06:49 +01:00
hsiegeln
ea5b5a685d fix: correct SearchRequest field names (offset/limit, sortField/sortDir)
All checks were successful
CI / build (push) Successful in 1m19s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 50s
CI / deploy (push) Successful in 40s
CI / deploy-feature (push) Has been skipped
Dashboard was sending page/size but backend expects offset/limit.
Schema also had sort/order instead of sortField/sortDir.
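
A sketch of the mapping the Dashboard now performs (hypothetical helper; the real request carries more fields, and the default sort shown is an assumption):

```typescript
// UI paging state (page/size) -> backend field names (offset/limit),
// plus the corrected sortField/sortDir names.
function toSearchRequest(
  page: number,
  size: number,
  sortField = "startedAt", // assumed default, illustrative only
  sortDir: "asc" | "desc" = "desc",
) {
  return { offset: page * size, limit: size, sortField, sortDir };
}
```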

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 17:55:27 +01:00
hsiegeln
045d9ea890 fix: correct page directory casing for case-sensitive filesystems
All checks were successful
CI / build (push) Successful in 1m16s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 1m12s
CI / deploy (push) Successful in 1m1s
CI / deploy-feature (push) Has been skipped
Rename admin/ → Admin/ and swagger/ → Swagger/ to match router imports.
Windows is case-insensitive, so the mismatch was invisible locally.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 17:43:42 +01:00
hsiegeln
9613bddc60 docs: add UI dev instructions and configurable API proxy target
Some checks failed
CI / build (push) Failing after 38s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 17:42:17 +01:00
hsiegeln
2b111c603c feat: migrate UI to @cameleer/design-system, add backend endpoints
Some checks failed
CI / build (push) Failing after 47s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
Backend:
- Add agent_events table (V5) and lifecycle event recording
- Add route catalog endpoint (GET /routes/catalog)
- Add route metrics endpoint (GET /routes/metrics)
- Add agent events endpoint (GET /agents/events-log)
- Enrich AgentInstanceResponse with tps, errorRate, activeRoutes, uptimeSeconds
- Add TimescaleDB retention/compression policies (V6)

Frontend:
- Replace custom Mission Control UI with @cameleer/design-system components
- Rebuild all pages: Dashboard, ExchangeDetail, RoutesMetrics, AgentHealth,
  AgentInstance, RBAC, AuditLog, OIDC, DatabaseAdmin, OpenSearchAdmin, Swagger
- New LayoutShell with design system AppShell, Sidebar, TopBar, CommandPalette
- Consume design system from Gitea npm registry (@cameleer/design-system@0.0.1)
- Add .npmrc for scoped registry, update Dockerfile with REGISTRY_TOKEN arg

CI:
- Pass REGISTRY_TOKEN build-arg to UI Docker build step

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 17:38:39 +01:00
hsiegeln
82124c3145 fix: remove RBAC user_roles insert from agent registration
All checks were successful
CI / build (push) Successful in 1m22s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 42s
CI / deploy (push) Successful in 44s
CI / deploy-feature (push) Has been skipped
Agents are transient and should not be persisted in the users table.
The assignRoleToUser call caused a FK violation (user_roles → users),
resulting in HTTP 500 on registration. The AGENT role is already
embedded directly in the JWT claims.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-18 22:10:48 +01:00
hsiegeln
17ef48e392 fix: return rotated refresh token from agent token refresh endpoint
All checks were successful
CI / build (push) Successful in 1m22s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 56s
CI / deploy (push) Successful in 47s
CI / deploy-feature (push) Has been skipped
Previously the refresh endpoint only returned a new accessToken, causing
agents to lose their refreshToken after the first refresh cycle and
forcing a full re-registration every ~2 hours.
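
On the agent side, the fix only works if the client stores the rotated token. A hedged sketch (type and function names are illustrative) of merging a refresh response into stored credentials, keeping the old refresh token as a fallback for backends that omit it:

```typescript
interface TokenPair {
  accessToken: string;
  refreshToken: string;
}

// Before the fix the response lacked refreshToken entirely, so the
// stored value is retained whenever the response omits a field.
function applyRefresh(stored: TokenPair, resp: Partial<TokenPair>): TokenPair {
  return {
    accessToken: resp.accessToken ?? stored.accessToken,
    refreshToken: resp.refreshToken ?? stored.refreshToken,
  };
}
```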

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-18 16:44:16 +01:00
4085f42160 Merge pull request 'fix/admin-scope-filtering' (#88) from fix/admin-scope-filtering into main
All checks were successful
CI / build (push) Successful in 1m15s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 15s
CI / deploy (push) Successful in 39s
CI / deploy-feature (push) Has been skipped
Reviewed-on: cameleer/cameleer3-server#88
2026-03-18 11:21:52 +01:00
hsiegeln
0fcbe83cc2 refactor: consolidate oidc_config and admin_thresholds into generic server_config table
All checks were successful
CI / build (push) Successful in 1m19s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 42s
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Successful in 34s
CI / build (pull_request) Successful in 1m23s
CI / cleanup-branch (pull_request) Has been skipped
CI / docker (pull_request) Has been skipped
CI / deploy (pull_request) Has been skipped
CI / deploy-feature (pull_request) Has been skipped
Single JSONB key-value table replaces two singleton config tables, making
future config types trivial to add. Also fixes pre-existing IT failures:
Flyway URL not overridden by Testcontainers, threshold test ordering.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-18 11:16:31 +01:00
hsiegeln
5a0a915cc6 fix: scope admin infra pages to current tenant's tables and indices
All checks were successful
CI / build (push) Successful in 1m14s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 44s
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Successful in 35s
Database tables filtered to current_schema(), active queries to
current_database(), OpenSearch indices to configured index-prefix.
Delete endpoint rejects indices outside application scope.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-18 09:29:06 +01:00
f01487ccb4 Merge pull request 'feature/rbac-management' (#86) from feature/rbac-management into main
All checks were successful
CI / build (push) Successful in 1m12s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 16s
CI / deploy (push) Successful in 1m14s
CI / deploy-feature (push) Has been skipped
Reviewed-on: cameleer/cameleer3-server#86
2026-03-17 19:51:13 +01:00
hsiegeln
033cfcf5fc refactor: rework audit log to full-width table with filter bar
All checks were successful
CI / build (push) Successful in 1m12s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 54s
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Successful in 36s
CI / build (pull_request) Successful in 1m10s
CI / cleanup-branch (pull_request) Has been skipped
CI / docker (pull_request) Has been skipped
CI / deploy (pull_request) Has been skipped
CI / deploy-feature (pull_request) Has been skipped
Replace split-pane layout with a table-based design: horizontal filter
bar, full-width data table with sticky headers, expandable detail rows
showing IP/user-agent/JSON payload, and bottom pagination.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 19:39:55 +01:00
hsiegeln
6d650cdf34 feat: harmonize admin pages to split-pane layout with shared CSS
All checks were successful
CI / build (push) Successful in 1m12s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 52s
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Successful in 35s
Extract shared admin layout styles into AdminLayout.module.css and
convert all admin pages to consistent patterns: Database/OpenSearch/
Audit Log use split-pane master/detail, OIDC uses full-width detail-only
with unified panelHeader treatment across all pages.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 19:30:38 +01:00
hsiegeln
6f5b5b8655 feat: add password support for local user creation and per-user login
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 19:08:19 +01:00
hsiegeln
653ef958ed fix: add edit mode for parent group dropdown, fix updateGroup to preserve parent
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 19:05:57 +01:00
hsiegeln
48b17f83a3 fix: handle empty 200 responses in adminFetch to fix stale UI after mutations
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 19:04:41 +01:00
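The empty-200 case fixed above boils down to this: JSON.parse throws on an empty string, so a successful mutation response with no body must be mapped to "no payload" instead of an error. A minimal sketch (`parseBody` is an illustrative helper, not the real adminFetch):

```typescript
// Hypothetical helper illustrating the fix: treat an empty 200 body as a
// successful "no payload" result so callers can proceed to refetch, instead
// of letting JSON.parse("") throw and leave the UI stale.

function parseBody<T>(text: string): T | null {
  if (text.trim() === "") {
    return null; // empty 200: success, nothing to parse
  }
  return JSON.parse(text) as T;
}
```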
hsiegeln
9d08e74913 feat: SHA-based avatar colors, user create/edit, editable names, +Add visibility
All checks were successful
CI / build (push) Successful in 1m11s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 56s
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Successful in 35s
- Add hashColor utility for unique avatar colors derived from entity names
- Add user creation form with username/displayName/email fields
- Add useCreateUser and useUpdateUser mutation hooks
- Make display names editable on all detail panes (click to edit)
- Protect built-in entities: Admins group and system roles not editable
- Make +Add chip more visible with amber border and background
- Send empty string instead of null for role description on create
- Add .editNameInput CSS for inline name editing

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 18:52:07 +01:00
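The hashColor idea from the commit above: derive a stable hue from the entity name so the same name always gets the same avatar color. A minimal sketch, with a simple 32-bit rolling hash standing in for the SHA mentioned in the commit:

```typescript
// Hypothetical sketch: map a name deterministically to an HSL color. The
// real utility hashes with SHA; a 31-based rolling hash is enough to
// illustrate the idea.

function hashColor(name: string): string {
  let hash = 0;
  for (let i = 0; i < name.length; i++) {
    hash = (hash * 31 + name.charCodeAt(i)) >>> 0; // keep it in uint32 range
  }
  return `hsl(${hash % 360}, 65%, 45%)`; // fixed saturation/lightness, hashed hue
}
```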
hsiegeln
f42e6279e6 fix: null safety in role/group creation, add user create/update endpoints
- RoleAdminController.createRole: default null description to "" and null scope to "custom"
- RoleAdminController.updateRole: pass null audit details to avoid NPE when name is null
- GroupAdminController.updateGroup: pass null audit details to avoid NPE when name is null
- UserAdminController: add POST / createUser endpoint with default VIEWER role assignment
- UserAdminController: add PUT /{userId} updateUser endpoint for displayName/email updates
2026-03-17 18:49:34 +01:00
hsiegeln
d025919f8d feat: add group create, delete, role assignment, and parent dropdown
All checks were successful
CI / build (push) Successful in 1m9s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 52s
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Successful in 36s
- Add inline create form with name and parent group selection
- Add delete button with confirmation dialog (protected for built-in Admins group)
- Add role assignment with MultiSelectDropdown and remove buttons on chips
- Add parent group dropdown with cycle prevention (excludes self and descendants)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 18:35:04 +01:00
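The cycle-prevention rule for the parent dropdown excludes a group itself and all of its descendants from the candidate list, so no assignment can create a loop. A sketch of that computation (types and names are illustrative, not the actual component code):

```typescript
// Hypothetical sketch of parent-dropdown cycle prevention: compute the
// descendant set of a group, then offer only groups outside {self} ∪ descendants.

interface Group { id: string; parentId: string | null; }

function descendantIds(groups: Group[], rootId: string): Set<string> {
  // Index children by parent id, then walk the subtree iteratively.
  const children = new Map<string | null, Group[]>();
  for (const g of groups) {
    const list = children.get(g.parentId) ?? [];
    list.push(g);
    children.set(g.parentId, list);
  }
  const result = new Set<string>();
  const stack = [rootId];
  while (stack.length > 0) {
    const id = stack.pop()!;
    for (const child of children.get(id) ?? []) {
      if (!result.has(child.id)) {
        result.add(child.id);
        stack.push(child.id);
      }
    }
  }
  return result;
}

function parentCandidates(groups: Group[], groupId: string): Group[] {
  const excluded = descendantIds(groups, groupId);
  excluded.add(groupId); // a group can never be its own parent
  return groups.filter((g) => !excluded.has(g.id));
}
```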
hsiegeln
db6143f9da feat: add role create and delete with system role protection
- Add create role form with name, description, and scope fields
- Add delete button on role detail view for non-system roles
- Use ConfirmDeleteDialog for safe deletion confirmation
- System roles protected from deletion (button hidden)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 18:34:46 +01:00
hsiegeln
4821ddebba feat: add user delete, group/role assignment, and date format fix
- Add delete button with self-delete guard (parses JWT sub claim)
- Add ConfirmDeleteDialog for safe user deletion
- Add MultiSelectDropdown for group membership assignment with remove buttons
- Add MultiSelectDropdown for direct role assignment with remove buttons
- Inherited roles show source but no remove button
- Change Created date format from date-only to full locale string
- Remove unused formatDate helper

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 18:34:40 +01:00
hsiegeln
65001e0ed0 feat: add MultiSelectDropdown component and CRUD styles 2026-03-17 18:32:16 +01:00
hsiegeln
1881aca0e4 fix: sort RBAC dashboard diagram columns consistently 2026-03-17 18:32:14 +01:00
hsiegeln
4842507ff3 feat: seed built-in Admins group and assign admin users on login
- Add V2 Flyway migration to create built-in Admins group (id: ...0010) with ADMIN role
- Add ADMINS_GROUP_ID constant to SystemRole
- Add user to Admins group on successful local login alongside role assignment
2026-03-17 18:30:16 +01:00
hsiegeln
708aae720c chore: regenerate OpenAPI spec and TypeScript types for RBAC endpoints
All checks were successful
CI / build (push) Successful in 1m11s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 51s
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Successful in 36s
Remove dead UserInfo type export, patch PositionedNode.children.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 18:11:10 +01:00
hsiegeln
ebe97bd386 feat: add RBAC management UI with dashboard, users, groups, and roles tabs
All checks were successful
CI / build (push) Successful in 1m14s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 54s
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Successful in 35s
Tab-based admin page at /admin/rbac with split-pane entity views,
inheritance visualization, OIDC badges, and role/group management.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 17:58:24 +01:00
hsiegeln
01295c84d8 feat: add Group, Role, and RBAC stats admin controllers
GroupAdminController with cycle detection, RoleAdminController
with system role protection, RbacStatsController for dashboard.
Rewrite UserAdminController to use RbacService.
2026-03-17 17:47:26 +01:00
hsiegeln
eb0cc8c141 feat: replace flat users.roles with relational RBAC model
New package com.cameleer3.server.core.rbac with SystemRole constants,
detail/summary records, GroupRepository, RoleRepository, RbacService.
Remove roles field from UserInfo. Implement PostgresGroupRepository,
PostgresRoleRepository, RbacServiceImpl with inheritance computation.
Update UiAuthController, OidcAuthController, AgentRegistrationController
to assign roles via user_roles table. JWT populated from effective system roles.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 17:44:32 +01:00
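The inheritance computation referenced above can be sketched as a union of a user's direct roles with the roles of every group they belong to, walking up the parent chain. This assumes roles propagate from ancestor groups down to subgroup members; names and shapes are illustrative, not the actual RbacServiceImpl:

```typescript
// Hypothetical sketch of effective-role computation for the JWT: direct
// roles plus roles from each membership group and its ancestors.

interface GroupNode { id: string; parentId: string | null; roles: string[]; }

function effectiveRoles(
  directRoles: string[],
  userGroupIds: string[],
  groups: Map<string, GroupNode>,
): Set<string> {
  const result = new Set<string>(directRoles);
  for (const groupId of userGroupIds) {
    let current: GroupNode | undefined = groups.get(groupId);
    const seen = new Set<string>(); // guard against accidental cycles in data
    while (current && !seen.has(current.id)) {
      seen.add(current.id);
      current.roles.forEach((r) => result.add(r));
      current = current.parentId ? groups.get(current.parentId) : undefined;
    }
  }
  return result;
}
```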
hsiegeln
b06b3f52a8 refactor: consolidate V1-V10 Flyway migrations into single V1__init.sql
Add RBAC tables (roles, groups, group_roles, user_groups, user_roles)
with system role seeds and join indexes. Drop users.roles TEXT[] column.
2026-03-17 17:34:15 +01:00
ecd76bda97 Merge pull request 'feature/admin-infrastructure' (#79) from feature/admin-infrastructure into main
All checks were successful
CI / build (push) Successful in 1m10s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 16s
CI / deploy (push) Successful in 37s
CI / deploy-feature (push) Has been skipped
Reviewed-on: cameleer/cameleer3-server#79
2026-03-17 16:51:10 +01:00
hsiegeln
4bc48afbf8 chore: regenerate OpenAPI spec and TypeScript types for admin endpoints
All checks were successful
CI / build (push) Successful in 1m11s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 52s
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Successful in 37s
CI / build (pull_request) Successful in 1m9s
CI / cleanup-branch (pull_request) Has been skipped
CI / docker (pull_request) Has been skipped
CI / deploy (pull_request) Has been skipped
CI / deploy-feature (pull_request) Has been skipped
Downloaded from deployed feature branch server. Patched PositionedNode
to include children field (missing from server-generated spec).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 16:37:43 +01:00
hsiegeln
038b663b8c fix: align frontend interfaces with backend DTO field names
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 16:36:11 +01:00
hsiegeln
329e4b0b16 added RBAC mock and spec to examples 2026-03-17 16:21:25 +01:00
hsiegeln
7c949274c5 feat: add Audit Log admin page with filtering, pagination, and detail expansion
All checks were successful
CI / build (push) Successful in 1m22s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 3m47s
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Successful in 22s
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 16:11:16 +01:00
hsiegeln
6b9988f43a feat: add OpenSearch admin page with pipeline, indices, performance, and thresholds UI
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 16:11:01 +01:00
hsiegeln
0edbdea2eb feat: add Database admin page with pool, tables, queries, and thresholds UI
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 16:10:56 +01:00
hsiegeln
b61c32729b feat: add React Query hooks for admin infrastructure endpoints
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 16:09:31 +01:00
hsiegeln
9fbda7715c feat: restructure admin sidebar with collapsible sub-navigation and new routes
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 16:09:23 +01:00
hsiegeln
4d5a4842b9 feat: add shared admin UI components (StatusBadge, RefreshableCard, ConfirmDeleteDialog)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 16:09:14 +01:00
hsiegeln
321b8808cc feat: add ThresholdAdminController and AuditLogController with integration tests
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 15:57:23 +01:00
hsiegeln
c6da858c2f feat: add OpenSearchAdminController with status, pipeline, indices, performance, and delete endpoints
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 15:57:18 +01:00
hsiegeln
c6b2f7c331 feat: add DatabaseAdminController with status, pool, tables, queries, and kill endpoints
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 15:57:14 +01:00
hsiegeln
0cea8af6bc feat: add response/request DTOs for admin infrastructure endpoints
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 15:51:31 +01:00
hsiegeln
1d6ae00b1c feat: wire AuditService, enable method security, retrofit audit logging into existing controllers
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 15:51:22 +01:00
hsiegeln
e8842e3bdc feat: add Postgres implementations for AuditRepository and ThresholdRepository
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 15:51:13 +01:00
hsiegeln
4d33592015 feat: add ThresholdConfig, ThresholdRepository, SearchIndexerStats, and instrument SearchIndexer
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 15:43:16 +01:00
hsiegeln
a0944a1c72 feat: add audit domain model, repository interface, AuditService, and unit test
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 15:36:21 +01:00
hsiegeln
fa3bc592d1 feat: add Flyway V9 (thresholds) and V10 (audit_log) migrations 2026-03-17 15:32:20 +01:00
hsiegeln
950f16be7a docs: fix plan review issues for infrastructure overview
Some checks failed
CI / docker (push) Has been cancelled
CI / deploy (push) Has been cancelled
CI / deploy-feature (push) Has been cancelled
CI / cleanup-branch (push) Has been cancelled
CI / build (push) Has been cancelled
- Fix AuthController → UiAuthController throughout
- Flesh out PostgresAuditRepository.find() with full dynamic query implementation
- Flesh out OpenSearchAdminController getStatus/getIndices/getPerformance methods
- Fix HikariCP maxWait → getConnectionTimeout()
- Add AuditServiceTest unit test task step
- Add complete ThresholdConfigRequest with validation logic
- Fix audit log from/to params: Instant → LocalDate with @DateTimeFormat
- Fill in React Query hook placeholder bodies
- Resolve extractUsername() duplication (inline in controller)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 15:24:56 +01:00
hsiegeln
a634bf9f9d docs: address spec review feedback for infrastructure overview
- Document SearchIndexerStats interface and required SearchIndexer changes
- Add @EnableMethodSecurity prerequisite and retrofit of existing controllers
- Limit audit log free-text search to indexed text columns (not JSONB)
- Split migrations into V9 (thresholds) and V10 (audit_log)
- Add user_agent field to audit records for SOC2 forensics
- Add thresholds validation rules, pagination limits, error response shapes
- Clarify SPA forwarding, single-row pattern, OpenSearch client reuse
- Add audit log retention note for Phase 2

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 15:01:53 +01:00
hsiegeln
2bcbff3ee6 docs: add infrastructure overview design spec
Covers admin navigation restructuring, database/OpenSearch monitoring pages,
configurable thresholds, database-backed audit log (SOC2), and phased
implementation plan.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 14:55:47 +01:00
hsiegeln
fc412f7251 fix: use relative API URL in feature branch UI to eliminate CORS errors
All checks were successful
CI / build (push) Successful in 1m4s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 13s
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Successful in 34s
Browser requests now go to the UI origin and nginx proxies them to the
backend within the cluster. Removes the separate API Ingress host rule
since API traffic no longer needs its own subdomain.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 13:49:01 +01:00
hsiegeln
82117deaab fix: pass credentials to Flyway when using separate datasource URL
All checks were successful
CI / build (push) Successful in 1m6s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 42s
CI / deploy (push) Successful in 40s
CI / deploy-feature (push) Has been skipped
When spring.flyway.url is set independently, Spring Boot does not
inherit credentials from spring.datasource. Add explicit user/password
to both application.yml and K8s deployment to prevent "no password"
failures on feature branch deployments.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 13:34:41 +01:00
hsiegeln
247fdb01c0 fix: separate Flyway and app datasource search paths for schema isolation
Some checks failed
CI / build (push) Successful in 1m6s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 41s
CI / deploy (push) Failing after 2m19s
CI / deploy-feature (push) Has been skipped
Flyway needs public in the search_path to access TimescaleDB extension
functions (create_hypertable). The app datasource must NOT include public
to prevent accidental cross-schema reads from production data.

- spring.flyway.url: currentSchema=<branch>,public (extensions accessible)
- spring.datasource.url: currentSchema=<branch> (strict isolation)
- SPRING_FLYWAY_URL env var added to K8s base manifest

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 13:26:01 +01:00
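The search_path split above amounts to two JDBC URLs. A sketch of what that looks like in configuration — host, port, and database name are placeholders, `<branch>` is kept from the commit message, and the exact property layout in application.yml is assumed:

```yaml
# Sketch only, not the literal application.yml.
spring:
  flyway:
    # Branch schema first, plus public so TimescaleDB extension functions
    # such as create_hypertable() resolve during migrations.
    url: jdbc:postgresql://<host>:5432/<db>?currentSchema=<branch>,public
  datasource:
    # No public here: strict isolation, no accidental cross-schema reads.
    url: jdbc:postgresql://<host>:5432/<db>?currentSchema=<branch>
```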
hsiegeln
b393d262cb refactor: remove OIDC env var config and seeder
All checks were successful
CI / build (push) Successful in 1m7s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 41s
CI / deploy (push) Successful in 39s
CI / deploy-feature (push) Has been skipped
OIDC configuration is already fully database-backed (oidc_config table,
admin API, OidcConfigRepository). Remove the redundant env var binding
(SecurityProperties.Oidc), the env-to-DB seeder (oidcConfigSeeder), and
the OIDC section from application.yml.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 13:20:35 +01:00
hsiegeln
ff3a046f5a refactor: remove OIDC config from K8s manifests
All checks were successful
CI / build (push) Successful in 1m8s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 41s
CI / deploy (push) Successful in 37s
CI / deploy-feature (push) Has been skipped
OIDC configuration should be managed by the server itself (database-backed),
not injected via K8s secrets. Remove all CAMELEER_OIDC_* env vars from
deployment manifests and the cameleer-oidc secret from CI. The server
defaults to OIDC disabled via application.yml.

This also fixes the Kustomize strategic merge conflict where the feature
overlay tried to set value on an env var that had valueFrom in the base.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 13:12:41 +01:00
hsiegeln
88df324b4b fix: preserve directory structure for feature overlay kustomize build
All checks were successful
CI / build (push) Successful in 1m7s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 14s
CI / deploy (push) Successful in 39s
CI / deploy-feature (push) Has been skipped
Kustomize rejects absolute paths for resource references. Instead of
rewriting ../../base to an absolute path, copy both base and overlay
into a temp directory preserving the relative path structure.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 12:58:55 +01:00
hsiegeln
c1cf8ae260 chore: remove old flat deploy manifests superseded by Kustomize
Some checks failed
CI / build (push) Successful in 1m7s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 14s
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Failing after 19s
deploy/server.yaml and deploy/ui.yaml are no longer referenced by CI,
which now uses deploy/base/ + deploy/overlays/main/ via kubectl apply -k.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 11:52:58 +01:00
hsiegeln
229463a2e8 fix: switch deploy containers from bitnami/kubectl to alpine/k8s
All checks were successful
CI / build (push) Successful in 1m8s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 13s
CI / deploy (push) Successful in 39s
CI / deploy-feature (push) Has been skipped
bitnami/kubectl lacks a package manager in the Gitea Actions runner,
so tool installation fails. alpine/k8s:1.32.3 ships with kubectl,
kustomize, git, jq, curl, and sed pre-installed.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 11:39:58 +01:00
hsiegeln
15f20d22ad feat: add feature branch deployments with per-branch isolation
Some checks failed
CI / build (push) Successful in 1m8s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 42s
CI / deploy (push) Failing after 5s
CI / deploy-feature (push) Has been skipped
Enable deploying feature branches into isolated environments on the same
k3s cluster. Each branch gets its own namespace (cam-<slug>), PostgreSQL
schema, and OpenSearch index prefix for data isolation while sharing the
underlying infrastructure.

- Make OpenSearch index prefix and DB schema configurable via env vars
  (defaults preserve existing behavior)
- Restructure deploy/ into Kustomize base + overlays (main/feature)
- Extend CI to build Docker images for all branches, not just main
- Add deploy-feature job with namespace creation, secret copying,
  Traefik Ingress routing (<slug>-api/ui.cameleer.siegeln.net)
- Add cleanup-branch job to remove namespace, PG schema, OS indices
  on branch deletion
- Install required tools (git, jq, curl) in CI deploy containers

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 11:35:07 +01:00
hsiegeln
672544660f fix: enable trackTotalHits for accurate OpenSearch result counts
All checks were successful
CI / build (push) Successful in 1m16s
CI / docker (push) Successful in 3m41s
CI / deploy (push) Successful in 44s
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 09:54:50 +01:00
966db8545b Merge pull request 'fix: correct PostgreSQL mountPath and add external NodePort services' (#72) from feature/storage-layer-refactor into main
All checks were successful
CI / build (push) Successful in 1m9s
CI / docker (push) Successful in 41s
CI / deploy (push) Successful in 39s
Reviewed-on: cameleer/cameleer3-server#72
2026-03-17 01:04:06 +01:00
hsiegeln
c346babe33 fix: correct PostgreSQL mountPath and add external NodePort services
All checks were successful
CI / build (pull_request) Successful in 1m9s
CI / docker (pull_request) Has been skipped
CI / deploy (pull_request) Has been skipped
- Fix postgres.yaml mountPath to /home/postgres/pgdata matching timescaledb-ha PGDATA
- Add NodePort services for external access: PostgreSQL (30432), OpenSearch (30920)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 01:00:20 +01:00
8c2215ba58 Merge pull request 'refactor: replace ClickHouse with PostgreSQL/TimescaleDB + OpenSearch' (#71) from feature/storage-layer-refactor into main
All checks were successful
CI / build (push) Successful in 1m8s
CI / docker (push) Successful in 3m15s
CI / deploy (push) Successful in 3m34s
Reviewed-on: cameleer/cameleer3-server#71
2026-03-17 00:34:27 +01:00
hsiegeln
c316e80d7f chore: update docs and config for PostgreSQL/OpenSearch storage layer
All checks were successful
CI / build (pull_request) Successful in 1m20s
CI / docker (pull_request) Has been skipped
CI / deploy (pull_request) Has been skipped
- Set failsafe reuseForks=true to reuse JVM across IT classes (faster test suite)
- Replace ClickHouse with PostgreSQL+OpenSearch in docker-compose.yml
- Remove redundant docker-compose.dev.yml
- Update CLAUDE.md and HOWTO.md to reflect new storage stack

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 00:26:50 +01:00
hsiegeln
796be06a09 fix: resolve all integration test failures after storage layer refactor
- Use singleton container pattern for PostgreSQL + OpenSearch testcontainers
  (fixes container lifecycle issues with @TestInstance(PER_CLASS))
- Fix table name route_executions → executions in DetailControllerIT and
  ExecutionControllerIT
- Serialize processor headers as JSON (ObjectMapper) instead of Map.toString()
  for JSONB column compatibility
- Add nested mapping for processors field in OpenSearch index template
- Use .keyword sub-field for term queries on dynamically mapped text fields
- Add wildcard fallback queries for all text searches (substring matching)
- Isolate stats tests with unique route names to prevent data contamination
- Wait for OpenSearch indexing in SearchControllerIT with targeted Awaitility
- Reduce OpenSearch debounce to 100ms in test profile

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 00:02:19 +01:00
hsiegeln
26f5a2ce3b fix: update remaining ITs for synchronous ingestion and PostgreSQL storage
- SearchControllerIT: remove @TestInstance(PER_CLASS), use @BeforeEach with
  static guard, fix table name (route_executions -> executions), remove
  Awaitility polling
- OpenSearchIndexIT: replace Thread.sleep with explicit index refresh via
  OpenSearchClient
- DiagramLinkingIT: fix table name, remove Awaitility awaits (writes are
  synchronous)
- IngestionSchemaIT: rewrite queries for PostgreSQL relational model
  (processor_executions table instead of ClickHouse array columns)
- PostgresStatsStoreIT: use explicit time bounds in
  refresh_continuous_aggregate calls
- IngestionService: populate diagramContentHash during execution ingestion
  by looking up the latest diagram for the route+agent

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-16 22:03:29 +01:00
hsiegeln
d23b899f00 fix: prefix user tokens with 'user:' for JwtAuthenticationFilter routing 2026-03-16 21:01:57 +01:00
hsiegeln
288c7a86b5 chore: add docker-compose.dev.yml for local PostgreSQL + OpenSearch 2026-03-16 20:04:27 +01:00
hsiegeln
9f74e47ecf fix: use correct role-based JWT tokens in all integration tests
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-16 20:03:38 +01:00
hsiegeln
39f9925e71 fix: restore test config (bootstrap token, ingestion, agent-registry) and add @ActiveProfiles 2026-03-16 19:41:05 +01:00
hsiegeln
af03ecdf42 fix: use WITH NO DATA for continuous aggregates to avoid transaction block error 2026-03-16 19:32:54 +01:00
hsiegeln
0723f48e5b fix: disable Flyway transaction for continuous aggregate migration 2026-03-16 19:24:12 +01:00
hsiegeln
2634f60e59 fix: use timescaledb-ha image in K8s manifest for toolkit support 2026-03-16 19:14:15 +01:00
hsiegeln
3c0e615fb7 fix: use timescaledb-ha image which includes toolkit extension 2026-03-16 19:13:47 +01:00
hsiegeln
589da1b6d6 fix: use asCompatibleSubstituteFor for TimescaleDB Testcontainer image 2026-03-16 19:06:54 +01:00
hsiegeln
41e2038190 fix: use ChronoUnit for Instant arithmetic in PostgresStatsStoreIT 2026-03-16 19:04:42 +01:00
hsiegeln
ea687a342c deploy: remove obsolete ClickHouse K8s manifest 2026-03-16 19:01:26 +01:00
hsiegeln
cea16b38ed ci: update workflow for PostgreSQL + OpenSearch deployment
Replace ClickHouse credentials secret with postgres-credentials and
opensearch-credentials secrets. Update deploy step to apply postgres.yaml
and opensearch.yaml manifests instead of clickhouse.yaml, with appropriate
rollout status checks for each StatefulSet.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-16 19:00:20 +01:00
hsiegeln
a344be3a49 deploy: replace ClickHouse with PostgreSQL/TimescaleDB + OpenSearch in K8s manifests
- Dockerfile: update default SPRING_DATASOURCE_URL to jdbc:postgresql, add OPENSEARCH_URL default env
- deploy/postgres.yaml: new TimescaleDB StatefulSet + headless Service (10Gi PVC, pg_isready probes)
- deploy/opensearch.yaml: new OpenSearch 2.19.0 StatefulSet + headless Service (10Gi PVC, single-node, security disabled)
- deploy/server.yaml: switch datasource env from clickhouse-credentials to postgres-credentials, add OPENSEARCH_URL

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-16 18:58:35 +01:00
hsiegeln
565b548ac1 refactor: remove all ClickHouse code, old interfaces, and SQL migrations
- Delete all ClickHouse storage implementations and config
- Delete old core interfaces (ExecutionRepository, DiagramRepository, MetricsRepository, SearchEngine, RawExecutionRow)
- Delete ClickHouse SQL migration files
- Delete AbstractClickHouseIT
- Update controllers to use new store interfaces (DiagramStore, ExecutionStore)
- Fix IngestionService calls in controllers for new synchronous API
- Migrate all ITs from AbstractClickHouseIT to AbstractPostgresIT
- Fix count() syntax and remove ClickHouse-specific test assertions
- Update TreeReconstructionTest for new buildTree() method

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-16 18:56:13 +01:00
hsiegeln
7dbfaf0932 feat: wire new storage beans, add MetricsFlushScheduler and RetentionScheduler
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-16 18:27:58 +01:00
hsiegeln
f7d7302694 feat: implement OpenSearchIndex with full-text and wildcard search
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-16 18:25:55 +01:00
hsiegeln
c48e0bdfde feat: implement debounced SearchIndexer for async OpenSearch indexing 2026-03-16 18:25:54 +01:00
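The debounce idea behind this commit: coalesce repeated update events for the same execution and index them in one batch after a quiet period. A sketch with an explicit flush() standing in for the timer so the behavior is easy to follow; names are illustrative, not the actual SearchIndexer API:

```typescript
// Hypothetical sketch of a debounced indexer: duplicate ids collapse into
// one pending entry, and a flush indexes the whole batch at once.

class DebouncedIndexer {
  private pending = new Set<string>();

  constructor(private readonly indexBatch: (ids: string[]) => void) {}

  // Called on every execution-updated event; repeated ids are coalesced.
  markDirty(executionId: string): void {
    this.pending.add(executionId);
  }

  // In the real implementation this fires after a debounce interval;
  // here it is invoked explicitly. Returns the number of ids indexed.
  flush(): number {
    const ids = [...this.pending];
    this.pending.clear();
    if (ids.length > 0) {
      this.indexBatch(ids);
    }
    return ids.length;
  }
}
```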
hsiegeln
5932b5d969 feat: implement PostgresDiagramStore, PostgresUserRepository, PostgresOidcConfigRepository, PostgresMetricsStore
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-16 18:23:21 +01:00
hsiegeln
527e2cf017 feat: implement PostgresStatsStore querying continuous aggregates 2026-03-16 18:22:44 +01:00
hsiegeln
9fd02c4edb feat: implement PostgresExecutionStore with upsert and dedup
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-16 18:20:57 +01:00
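The upsert-and-dedup behavior can be sketched in miniature: rows are keyed by execution id, and a re-delivered row only wins if it is at least as new. The real store expresses this with INSERT ... ON CONFLICT in PostgreSQL; the shapes below are illustrative:

```typescript
// Hypothetical in-memory sketch of upsert with dedup: insert when absent,
// replace when the incoming row is newer, drop older duplicates so
// re-delivered executions keep ingestion idempotent.

interface ExecutionRow { id: string; updatedAt: number; status: string; }

function upsert(store: Map<string, ExecutionRow>, row: ExecutionRow): void {
  const existing = store.get(row.id);
  if (!existing || row.updatedAt >= existing.updatedAt) {
    store.set(row.id, row);
  }
}
```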
hsiegeln
85ebe76111 refactor: IngestionService uses synchronous ExecutionStore writes with event publishing
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-16 18:18:54 +01:00
hsiegeln
adf4b44d78 refactor: DetailService uses ExecutionStore, tree built from parentProcessorId 2026-03-16 18:18:53 +01:00
hsiegeln
84b93d74c7 refactor: SearchService uses SearchIndex + StatsStore instead of SearchEngine 2026-03-16 18:18:52 +01:00
hsiegeln
a55fc3c10d feat: add new storage interfaces for PostgreSQL/OpenSearch backends
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-16 18:16:53 +01:00
hsiegeln
55ed3be71a feat: add ExecutionDocument model and ExecutionUpdatedEvent
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-16 18:16:42 +01:00
hsiegeln
41a9a975fd config: switch datasource to PostgreSQL, add OpenSearch and Flyway config
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-16 18:15:33 +01:00
hsiegeln
0eeae70369 test: add TimescaleDB test base class and Flyway migration smoke test 2026-03-16 18:15:32 +01:00
hsiegeln
8a637df65c feat: add Flyway migrations for PostgreSQL/TimescaleDB schema
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-16 18:13:53 +01:00
hsiegeln
5bed108d3b chore: swap ClickHouse deps for PostgreSQL, Flyway, OpenSearch
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-16 18:13:45 +01:00
363 changed files with 41093 additions and 14309 deletions


@@ -2,15 +2,17 @@ name: CI
 on:
   push:
-    branches: [main]
+    branches: [main, 'feature/**', 'fix/**', 'feat/**']
     tags-ignore:
       - 'v*'
   pull_request:
     branches: [main]
+  delete:
 jobs:
   build:
     runs-on: ubuntu-latest
+    if: github.event_name != 'delete'
     container:
       image: maven:3.9-eclipse-temurin-17
     steps:
@@ -60,7 +62,7 @@
   docker:
     needs: build
     runs-on: ubuntu-latest
-    if: github.ref == 'refs/heads/main'
+    if: github.event_name == 'push'
     container:
       image: docker:27
     steps:
@@ -74,15 +76,36 @@
         run: echo "$REGISTRY_TOKEN" | docker login gitea.siegeln.net -u cameleer --password-stdin
         env:
           REGISTRY_TOKEN: ${{ secrets.REGISTRY_TOKEN }}
+      - name: Compute branch slug
+        run: |
+          sanitize_branch() {
+            echo "$1" | sed -E 's#^(feature|fix|feat|hotfix)/##' \
+              | tr '[:upper:]' '[:lower:]' \
+              | sed 's/[^a-z0-9-]/-/g' \
+              | sed 's/--*/-/g; s/^-//; s/-$//' \
+              | cut -c1-20 \
+              | sed 's/-$//'
+          }
+          if [ "$GITHUB_REF_NAME" = "main" ]; then
+            echo "BRANCH_SLUG=main" >> "$GITHUB_ENV"
+            echo "IMAGE_TAGS=latest" >> "$GITHUB_ENV"
+          else
+            SLUG=$(sanitize_branch "$GITHUB_REF_NAME")
+            echo "BRANCH_SLUG=$SLUG" >> "$GITHUB_ENV"
+            echo "IMAGE_TAGS=branch-$SLUG" >> "$GITHUB_ENV"
+          fi
       - name: Set up QEMU for cross-platform builds
         run: docker run --rm --privileged tonistiigi/binfmt --install all
       - name: Build and push server
         run: |
           docker buildx create --use --name cibuilder
+          TAGS="-t gitea.siegeln.net/cameleer/cameleer3-server:${{ github.sha }}"
+          for TAG in $IMAGE_TAGS; do
+            TAGS="$TAGS -t gitea.siegeln.net/cameleer/cameleer3-server:$TAG"
+          done
           docker buildx build --platform linux/amd64 \
             --build-arg REGISTRY_TOKEN="$REGISTRY_TOKEN" \
-            -t gitea.siegeln.net/cameleer/cameleer3-server:${{ github.sha }} \
-            -t gitea.siegeln.net/cameleer/cameleer3-server:latest \
+            $TAGS \
             --cache-from type=registry,ref=gitea.siegeln.net/cameleer/cameleer3-server:buildcache \
             --cache-to type=registry,ref=gitea.siegeln.net/cameleer/cameleer3-server:buildcache,mode=max \
             --provenance=false \
@@ -91,10 +114,14 @@
           REGISTRY_TOKEN: ${{ secrets.REGISTRY_TOKEN }}
       - name: Build and push UI
         run: |
+          TAGS="-t gitea.siegeln.net/cameleer/cameleer3-server-ui:${{ github.sha }}"
+          for TAG in $IMAGE_TAGS; do
+            TAGS="$TAGS -t gitea.siegeln.net/cameleer/cameleer3-server-ui:$TAG"
+          done
           docker buildx build --platform linux/amd64 \
             -f ui/Dockerfile \
-            -t gitea.siegeln.net/cameleer/cameleer3-server-ui:${{ github.sha }} \
-            -t gitea.siegeln.net/cameleer/cameleer3-server-ui:latest \
             --build-arg REGISTRY_TOKEN="$REGISTRY_TOKEN" \
+            $TAGS \
             --cache-from type=registry,ref=gitea.siegeln.net/cameleer/cameleer3-server-ui:buildcache \
             --cache-to type=registry,ref=gitea.siegeln.net/cameleer/cameleer3-server-ui:buildcache,mode=max \
             --provenance=false \
@@ -110,13 +137,28 @@ jobs:
API="https://gitea.siegeln.net/api/v1"
AUTH="Authorization: token ${REGISTRY_TOKEN}"
CURRENT_SHA="${{ github.sha }}"
# Build list of tags to keep
KEEP_TAGS="latest buildcache $CURRENT_SHA"
if [ "$BRANCH_SLUG" != "main" ]; then
KEEP_TAGS="$KEEP_TAGS branch-$BRANCH_SLUG"
fi
for PKG in cameleer3-server cameleer3-server-ui; do
curl -sf -H "$AUTH" "$API/packages/cameleer/container/$PKG" | \
jq -r '.[] | "\(.id) \(.version)"' | \
while read id version; do
if [ "$version" != "latest" ] && [ "$version" != "$CURRENT_SHA" ]; then
echo "Deleting old image tag: $PKG:$version"
curl -sf -X DELETE -H "$AUTH" "$API/packages/cameleer/container/$PKG/$version"
SHOULD_KEEP=false
for KEEP in $KEEP_TAGS; do
if [ "$version" = "$KEEP" ]; then
SHOULD_KEEP=true
break
fi
done
if [ "$SHOULD_KEEP" = "false" ]; then
# Only clean up images for this branch
if [ "$BRANCH_SLUG" = "main" ] || echo "$version" | grep -q "branch-$BRANCH_SLUG"; then
echo "Deleting old image tag: $PKG:$version"
curl -sf -X DELETE -H "$AUTH" "$API/packages/cameleer/container/$PKG/$version"
fi
fi
done
done
@@ -129,7 +171,7 @@ jobs:
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main'
container:
image: bitnami/kubectl:latest
image: alpine/k8s:1.32.3
steps:
- name: Checkout
run: |
@@ -161,10 +203,17 @@ jobs:
--from-literal=CAMELEER_JWT_SECRET="${CAMELEER_JWT_SECRET}" \
--dry-run=client -o yaml | kubectl apply -f -
kubectl create secret generic clickhouse-credentials \
kubectl create secret generic postgres-credentials \
--namespace=cameleer \
--from-literal=CLICKHOUSE_USER="$CLICKHOUSE_USER" \
--from-literal=CLICKHOUSE_PASSWORD="$CLICKHOUSE_PASSWORD" \
--from-literal=POSTGRES_USER="$POSTGRES_USER" \
--from-literal=POSTGRES_PASSWORD="$POSTGRES_PASSWORD" \
--from-literal=POSTGRES_DB="${POSTGRES_DB:-cameleer}" \
--dry-run=client -o yaml | kubectl apply -f -
kubectl create secret generic opensearch-credentials \
--namespace=cameleer \
--from-literal=OPENSEARCH_USER="${OPENSEARCH_USER:-admin}" \
--from-literal=OPENSEARCH_PASSWORD="$OPENSEARCH_PASSWORD" \
--dry-run=client -o yaml | kubectl apply -f -
kubectl create secret generic authentik-credentials \
@@ -174,26 +223,20 @@ jobs:
--from-literal=AUTHENTIK_SECRET_KEY="${AUTHENTIK_SECRET_KEY}" \
--dry-run=client -o yaml | kubectl apply -f -
kubectl create secret generic cameleer-oidc \
--namespace=cameleer \
--from-literal=CAMELEER_OIDC_ENABLED="${CAMELEER_OIDC_ENABLED:-false}" \
--from-literal=CAMELEER_OIDC_ISSUER="${CAMELEER_OIDC_ISSUER}" \
--from-literal=CAMELEER_OIDC_CLIENT_ID="${CAMELEER_OIDC_CLIENT_ID}" \
--from-literal=CAMELEER_OIDC_CLIENT_SECRET="${CAMELEER_OIDC_CLIENT_SECRET}" \
--dry-run=client -o yaml | kubectl apply -f -
kubectl apply -f deploy/postgres.yaml
kubectl -n cameleer rollout status statefulset/postgres --timeout=120s
kubectl apply -f deploy/clickhouse.yaml
kubectl -n cameleer rollout status statefulset/clickhouse --timeout=120s
kubectl apply -f deploy/opensearch.yaml
kubectl -n cameleer rollout status statefulset/opensearch --timeout=180s
kubectl apply -f deploy/authentik.yaml
kubectl -n cameleer rollout status deployment/authentik-server --timeout=180s
kubectl apply -f deploy/server.yaml
kubectl apply -k deploy/overlays/main
kubectl -n cameleer set image deployment/cameleer3-server \
server=gitea.siegeln.net/cameleer/cameleer3-server:${{ github.sha }}
kubectl -n cameleer rollout status deployment/cameleer3-server --timeout=120s
kubectl apply -f deploy/ui.yaml
kubectl -n cameleer set image deployment/cameleer3-ui \
ui=gitea.siegeln.net/cameleer/cameleer3-server-ui:${{ github.sha }}
kubectl -n cameleer rollout status deployment/cameleer3-ui --timeout=120s
@@ -203,12 +246,149 @@ jobs:
CAMELEER_JWT_SECRET: ${{ secrets.CAMELEER_JWT_SECRET }}
CAMELEER_UI_USER: ${{ secrets.CAMELEER_UI_USER }}
CAMELEER_UI_PASSWORD: ${{ secrets.CAMELEER_UI_PASSWORD }}
CLICKHOUSE_USER: ${{ secrets.CLICKHOUSE_USER }}
CLICKHOUSE_PASSWORD: ${{ secrets.CLICKHOUSE_PASSWORD }}
POSTGRES_USER: ${{ secrets.POSTGRES_USER }}
POSTGRES_PASSWORD: ${{ secrets.POSTGRES_PASSWORD }}
POSTGRES_DB: ${{ secrets.POSTGRES_DB }}
OPENSEARCH_USER: ${{ secrets.OPENSEARCH_USER }}
OPENSEARCH_PASSWORD: ${{ secrets.OPENSEARCH_PASSWORD }}
AUTHENTIK_PG_USER: ${{ secrets.AUTHENTIK_PG_USER }}
AUTHENTIK_PG_PASSWORD: ${{ secrets.AUTHENTIK_PG_PASSWORD }}
AUTHENTIK_SECRET_KEY: ${{ secrets.AUTHENTIK_SECRET_KEY }}
CAMELEER_OIDC_ENABLED: ${{ secrets.CAMELEER_OIDC_ENABLED }}
CAMELEER_OIDC_ISSUER: ${{ secrets.CAMELEER_OIDC_ISSUER }}
CAMELEER_OIDC_CLIENT_ID: ${{ secrets.CAMELEER_OIDC_CLIENT_ID }}
CAMELEER_OIDC_CLIENT_SECRET: ${{ secrets.CAMELEER_OIDC_CLIENT_SECRET }}
deploy-feature:
needs: docker
runs-on: ubuntu-latest
if: github.ref != 'refs/heads/main' && github.event_name == 'push'
container:
image: alpine/k8s:1.32.3
steps:
- name: Checkout
run: |
git clone --depth=1 --branch=${GITHUB_REF_NAME} https://cameleer:${REGISTRY_TOKEN}@gitea.siegeln.net/${GITHUB_REPOSITORY}.git .
env:
REGISTRY_TOKEN: ${{ secrets.REGISTRY_TOKEN }}
- name: Configure kubectl
run: |
mkdir -p ~/.kube
echo "$KUBECONFIG_B64" | base64 -d > ~/.kube/config
env:
KUBECONFIG_B64: ${{ secrets.KUBECONFIG_BASE64 }}
- name: Compute branch variables
run: |
sanitize_branch() {
echo "$1" | sed -E 's#^(feature|fix|feat|hotfix)/##' \
| tr '[:upper:]' '[:lower:]' \
| sed 's/[^a-z0-9-]/-/g' \
| sed 's/--*/-/g; s/^-//; s/-$//' \
| cut -c1-20 \
| sed 's/-$//'
}
SLUG=$(sanitize_branch "$GITHUB_REF_NAME")
NS="cam-${SLUG}"
SCHEMA="cam_$(echo "$SLUG" | tr '-' '_')"
echo "BRANCH_SLUG=$SLUG" >> "$GITHUB_ENV"
echo "BRANCH_NS=$NS" >> "$GITHUB_ENV"
echo "BRANCH_SCHEMA=$SCHEMA" >> "$GITHUB_ENV"
- name: Create namespace
run: kubectl create namespace "$BRANCH_NS" --dry-run=client -o yaml | kubectl apply -f -
- name: Copy secrets from cameleer namespace
run: |
for SECRET in gitea-registry postgres-credentials opensearch-credentials cameleer-auth; do
kubectl get secret "$SECRET" -n cameleer -o json \
| jq 'del(.metadata.namespace, .metadata.resourceVersion, .metadata.uid, .metadata.creationTimestamp, .metadata.managedFields)' \
| kubectl apply -n "$BRANCH_NS" -f -
done
- name: Substitute placeholders and deploy
run: |
# Work on a copy preserving the directory structure so ../../base resolves
mkdir -p /tmp/feature-deploy/deploy/overlays
cp -r deploy/base /tmp/feature-deploy/deploy/base
cp -r deploy/overlays/feature /tmp/feature-deploy/deploy/overlays/feature
# Substitute all BRANCH_* placeholders
for f in /tmp/feature-deploy/deploy/overlays/feature/*.yaml; do
sed -i \
-e "s|BRANCH_NAMESPACE|${BRANCH_NS}|g" \
-e "s|BRANCH_SCHEMA|${BRANCH_SCHEMA}|g" \
-e "s|BRANCH_SLUG|${BRANCH_SLUG}|g" \
-e "s|BRANCH_SHA|${{ github.sha }}|g" \
"$f"
done
kubectl apply -k /tmp/feature-deploy/deploy/overlays/feature
- name: Wait for init-job
run: |
kubectl -n "$BRANCH_NS" wait --for=condition=complete job/init-schema --timeout=60s || \
echo "Warning: init-schema job did not complete in time"
- name: Wait for server rollout
run: kubectl -n "$BRANCH_NS" rollout status deployment/cameleer3-server --timeout=120s
- name: Wait for UI rollout
run: kubectl -n "$BRANCH_NS" rollout status deployment/cameleer3-ui --timeout=60s
- name: Print deployment URLs
run: |
echo "===================================="
echo "Feature branch deployed!"
echo "API: http://${BRANCH_SLUG}-api.cameleer.siegeln.net"
echo "UI: http://${BRANCH_SLUG}.cameleer.siegeln.net"
echo "===================================="
env:
REGISTRY_TOKEN: ${{ secrets.REGISTRY_TOKEN }}
cleanup-branch:
runs-on: ubuntu-latest
if: github.event_name == 'delete' && github.event.ref_type == 'branch'
container:
image: alpine/k8s:1.32.3
steps:
- name: Configure kubectl
run: |
mkdir -p ~/.kube
echo "$KUBECONFIG_B64" | base64 -d > ~/.kube/config
env:
KUBECONFIG_B64: ${{ secrets.KUBECONFIG_BASE64 }}
- name: Compute branch variables
run: |
sanitize_branch() {
echo "$1" | sed -E 's#^(feature|fix|feat|hotfix)/##' \
| tr '[:upper:]' '[:lower:]' \
| sed 's/[^a-z0-9-]/-/g' \
| sed 's/--*/-/g; s/^-//; s/-$//' \
| cut -c1-20 \
| sed 's/-$//'
}
SLUG=$(sanitize_branch "${{ github.event.ref }}")
NS="cam-${SLUG}"
SCHEMA="cam_$(echo "$SLUG" | tr '-' '_')"
echo "BRANCH_SLUG=$SLUG" >> "$GITHUB_ENV"
echo "BRANCH_NS=$NS" >> "$GITHUB_ENV"
echo "BRANCH_SCHEMA=$SCHEMA" >> "$GITHUB_ENV"
- name: Delete namespace
run: kubectl delete namespace "$BRANCH_NS" --ignore-not-found
- name: Drop PostgreSQL schema
run: |
kubectl run cleanup-schema-${BRANCH_SLUG} \
--namespace=cameleer \
--image=postgres:16 \
--restart=Never \
--env="PGPASSWORD=$(kubectl get secret postgres-credentials -n cameleer -o jsonpath='{.data.POSTGRES_PASSWORD}' | base64 -d)" \
--command -- sh -c "psql -h postgres -U $(kubectl get secret postgres-credentials -n cameleer -o jsonpath='{.data.POSTGRES_USER}' | base64 -d) -d cameleer3 -c 'DROP SCHEMA IF EXISTS ${BRANCH_SCHEMA} CASCADE'"
kubectl wait --for=condition=Ready pod/cleanup-schema-${BRANCH_SLUG} -n cameleer --timeout=30s || true
kubectl wait --for=jsonpath='{.status.phase}'=Succeeded pod/cleanup-schema-${BRANCH_SLUG} -n cameleer --timeout=60s || true
kubectl delete pod cleanup-schema-${BRANCH_SLUG} -n cameleer --ignore-not-found
- name: Delete OpenSearch indices
run: |
kubectl run cleanup-indices-${BRANCH_SLUG} \
--namespace=cameleer \
--image=curlimages/curl:latest \
--restart=Never \
--command -- curl -sf -X DELETE "http://opensearch:9200/cam-${BRANCH_SLUG}-*"
kubectl wait --for=jsonpath='{.status.phase}'=Succeeded pod/cleanup-indices-${BRANCH_SLUG} -n cameleer --timeout=60s || true
kubectl delete pod cleanup-indices-${BRANCH_SLUG} -n cameleer --ignore-not-found
- name: Cleanup Docker images
run: |
API="https://gitea.siegeln.net/api/v1"
AUTH="Authorization: token ${REGISTRY_TOKEN}"
for PKG in cameleer3-server cameleer3-server-ui; do
# Delete branch-specific tag
curl -sf -X DELETE -H "$AUTH" "$API/packages/cameleer/container/$PKG/branch-${BRANCH_SLUG}" || true
done
env:
REGISTRY_TOKEN: ${{ secrets.REGISTRY_TOKEN }}

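The `sanitize_branch` helper duplicated across the docker, deploy-feature, and cleanup-branch jobs can be exercised standalone; this sketch copies the function body verbatim from the workflow to show what slugs it produces:

```shell
#!/bin/sh
# Verbatim copy of the workflow's sanitize_branch helper:
# strip common branch prefixes, lowercase, replace invalid chars
# with '-', collapse/trim dashes, and cap the slug at 20 chars.
sanitize_branch() {
  echo "$1" | sed -E 's#^(feature|fix|feat|hotfix)/##' \
    | tr '[:upper:]' '[:lower:]' \
    | sed 's/[^a-z0-9-]/-/g' \
    | sed 's/--*/-/g; s/^-//; s/-$//' \
    | cut -c1-20 \
    | sed 's/-$//'
}

sanitize_branch "feature/Add_New API"                      # -> add-new-api
sanitize_branch "fix/Very-Long-Branch-Name-Exceeding-Limit" # -> very-long-branch-nam
```

The trailing `sed 's/-$//'` matters: `cut -c1-20` can leave a dangling dash, which would produce invalid Kubernetes namespace and DNS names downstream.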
View File

@@ -38,20 +38,25 @@ java -jar cameleer3-server-app/target/cameleer3-server-app-1.0-SNAPSHOT.jar
- Jackson `JavaTimeModule` for `Instant` deserialization
- Communication: receives HTTP POST data from agents, serves SSE event streams for config push/commands
- Maintains agent instance registry with states: LIVE → STALE → DEAD
- Storage: ClickHouse for structured data, text index for full-text search
- Storage: PostgreSQL (TimescaleDB) for structured data, OpenSearch for full-text search
- Security: JWT auth with RBAC (AGENT/VIEWER/OPERATOR/ADMIN roles), Ed25519 config signing, bootstrap token for registration
- OIDC: Optional external identity provider support (token exchange pattern). Configured via `CAMELEER_OIDC_*` env vars
- User persistence: ClickHouse `users` table, admin CRUD at `/api/v1/admin/users`
- OIDC: Optional external identity provider support (token exchange pattern). Configured via admin API, stored in database (`server_config` table)
- User persistence: PostgreSQL `users` table, admin CRUD at `/api/v1/admin/users`
## CI/CD & Deployment
- CI workflow: `.gitea/workflows/ci.yml` — build → docker → deploy on push to main
- CI workflow: `.gitea/workflows/ci.yml` — build → docker → deploy on push to main or feature branches
- Build step skips integration tests (`-DskipITs`) — Testcontainers needs Docker daemon
- Docker: multi-stage build (`Dockerfile`), `$BUILDPLATFORM` for native Maven on ARM64 runner, amd64 runtime
- `REGISTRY_TOKEN` build arg required for `cameleer3-common` dependency resolution
- Registry: `gitea.siegeln.net/cameleer/cameleer3-server` (container images)
- K8s manifests in `deploy/`: ClickHouse StatefulSet + server Deployment + NodePort Service (30081)
- Deployment target: k3s at 192.168.50.86, namespace `cameleer`
- Secrets managed in CI deploy step (idempotent `--dry-run=client | kubectl apply`): `cameleer-auth`, `clickhouse-credentials`, `CAMELEER_JWT_SECRET`
- K8s probes: server uses `/api/v1/health`, ClickHouse uses `/ping`
- K8s manifests in `deploy/`: Kustomize base + overlays (main/feature), shared infra (PostgreSQL, OpenSearch, Authentik) as top-level manifests
- Deployment target: k3s at 192.168.50.86, namespace `cameleer` (main), `cam-<slug>` (feature branches)
- Feature branches: isolated namespace, PG schema, OpenSearch index prefix; Traefik Ingress at `<slug>-api.cameleer.siegeln.net`
- Secrets managed in CI deploy step (idempotent `--dry-run=client | kubectl apply`): `cameleer-auth`, `postgres-credentials`, `opensearch-credentials`
- K8s probes: server uses `/api/v1/health`, PostgreSQL uses `pg_isready`, OpenSearch uses `/_cluster/health`
- Docker build uses buildx registry cache + `--provenance=false` for Gitea compatibility
## Disabled Skills
- Do NOT use any `gsd:*` skills in this project. This includes all `/gsd:` prefixed commands.

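The per-branch isolation described above hangs off two derived names; this sketch mirrors the workflow's "Compute branch variables" step (naming only, no cluster access) to show how a slug maps to a namespace and a PostgreSQL schema:

```shell
#!/bin/sh
# Namespace uses the slug as-is; the PG schema swaps '-' for '_'
# because '-' is not valid in an unquoted SQL identifier.
branch_namespace() { echo "cam-$1"; }
branch_schema()    { echo "cam_$(echo "$1" | tr '-' '_')"; }

branch_namespace "my-feature"  # -> cam-my-feature
branch_schema "my-feature"     # -> cam_my_feature
```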
View File

@@ -18,9 +18,10 @@ FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /build/cameleer3-server-app/target/cameleer3-server-app-*.jar /app/server.jar
ENV SPRING_DATASOURCE_URL=jdbc:ch://clickhouse:8123/cameleer3
ENV SPRING_DATASOURCE_URL=jdbc:postgresql://postgres:5432/cameleer3
ENV SPRING_DATASOURCE_USERNAME=cameleer
ENV SPRING_DATASOURCE_PASSWORD=cameleer_dev
ENV OPENSEARCH_URL=http://opensearch:9200
EXPOSE 8081
ENTRYPOINT exec java -jar /app/server.jar

View File

@@ -21,20 +21,20 @@ mvn clean verify # compile + run all tests (needs Docker for integrati
## Infrastructure Setup
Start ClickHouse:
Start PostgreSQL and OpenSearch:
```bash
docker compose up -d
```
This starts ClickHouse 25.3 and automatically runs the schema init scripts (`clickhouse/init/01-schema.sql`, `clickhouse/init/02-search-columns.sql`, `clickhouse/init/03-users.sql`).
This starts TimescaleDB (PostgreSQL 16) and OpenSearch 2.19. The database schema is applied automatically via Flyway migrations on server startup.
| Service | Port | Purpose |
|------------|------|------------------|
| ClickHouse | 8123 | HTTP API (JDBC) |
| ClickHouse | 9000 | Native protocol |
| Service | Port | Purpose |
|------------|------|----------------------|
| PostgreSQL | 5432 | JDBC (Spring JDBC) |
| OpenSearch | 9200 | REST API (full-text) |
ClickHouse credentials: `cameleer` / `cameleer_dev`, database `cameleer3`.
PostgreSQL credentials: `cameleer` / `cameleer_dev`, database `cameleer3`.
## Run the Server
@@ -109,7 +109,7 @@ The env-var local user gets `ADMIN` role. Agents get `AGENT` role at registratio
### OIDC Login (Optional)
OIDC configuration is stored in ClickHouse and managed via the admin API or UI. The SPA checks if OIDC is available:
OIDC configuration is stored in PostgreSQL and managed via the admin API or UI. The SPA checks if OIDC is available:
```bash
# 1. SPA checks if OIDC is available (returns 404 if not configured)
@@ -340,9 +340,8 @@ Key settings in `cameleer3-server-app/src/main/resources/application.yml`:
|---------|---------|-------------|
| `server.port` | 8081 | Server port |
| `ingestion.buffer-capacity` | 50000 | Max items in write buffer |
| `ingestion.batch-size` | 5000 | Items per ClickHouse batch insert |
| `ingestion.batch-size` | 5000 | Items per batch insert |
| `ingestion.flush-interval-ms` | 1000 | Buffer flush interval (ms) |
| `ingestion.data-ttl-days` | 30 | ClickHouse TTL for auto-deletion |
| `agent-registry.heartbeat-interval-seconds` | 30 | Expected heartbeat interval |
| `agent-registry.stale-threshold-seconds` | 90 | Time before agent marked STALE |
| `agent-registry.dead-threshold-seconds` | 300 | Time after STALE before DEAD |
@@ -386,7 +385,7 @@ npm run generate-api # Requires backend running on :8081
## Running Tests
Integration tests use Testcontainers (starts ClickHouse automatically — requires Docker):
Integration tests use Testcontainers (starts PostgreSQL and OpenSearch automatically — requires Docker):
```bash
# All tests
@@ -399,14 +398,13 @@ mvn test -pl cameleer3-server-core
mvn test -pl cameleer3-server-app -Dtest=ExecutionControllerIT
```
## Verify ClickHouse Data
## Verify Database Data
After posting data and waiting for the flush interval (1s default):
```bash
docker exec -it cameleer3-server-clickhouse-1 clickhouse-client \
--user cameleer --password cameleer_dev -d cameleer3 \
-q "SELECT count() FROM route_executions"
docker exec -it cameleer3-server-postgres-1 psql -U cameleer -d cameleer3 \
-c "SELECT count(*) FROM route_executions"
```
## Kubernetes Deployment
@@ -417,7 +415,8 @@ The full stack is deployed to k3s via CI/CD on push to `main`. K8s manifests are
```
cameleer namespace:
ClickHouse (StatefulSet, 2Gi PVC) ← clickhouse:8123 (ClusterIP)
PostgreSQL (StatefulSet, 10Gi PVC) ← postgres:5432 (ClusterIP)
OpenSearch (StatefulSet, 10Gi PVC) ← opensearch:9200 (ClusterIP)
cameleer3-server (Deployment) ← NodePort 30081
cameleer3-ui (Deployment, Nginx) ← NodePort 30090
Authentik Server (Deployment) ← NodePort 30950
@@ -439,7 +438,7 @@ cameleer namespace:
Push to `main` triggers: **build** (UI npm + Maven, unit tests) → **docker** (buildx amd64 for server + UI, push to Gitea registry) → **deploy** (kubectl apply + rolling update).
Required Gitea org secrets: `REGISTRY_TOKEN`, `KUBECONFIG_BASE64`, `CAMELEER_AUTH_TOKEN`, `CAMELEER_JWT_SECRET`, `CLICKHOUSE_USER`, `CLICKHOUSE_PASSWORD`, `CAMELEER_UI_USER` (optional), `CAMELEER_UI_PASSWORD` (optional), `AUTHENTIK_PG_PASSWORD`, `AUTHENTIK_SECRET_KEY`, `CAMELEER_OIDC_ENABLED`, `CAMELEER_OIDC_ISSUER`, `CAMELEER_OIDC_CLIENT_ID`, `CAMELEER_OIDC_CLIENT_SECRET`.
Required Gitea org secrets: `REGISTRY_TOKEN`, `KUBECONFIG_BASE64`, `CAMELEER_AUTH_TOKEN`, `CAMELEER_JWT_SECRET`, `POSTGRES_USER`, `POSTGRES_PASSWORD`, `POSTGRES_DB`, `OPENSEARCH_USER`, `OPENSEARCH_PASSWORD`, `CAMELEER_UI_USER` (optional), `CAMELEER_UI_PASSWORD` (optional), `AUTHENTIK_PG_USER`, `AUTHENTIK_PG_PASSWORD`, `AUTHENTIK_SECRET_KEY`, `CAMELEER_OIDC_ENABLED`, `CAMELEER_OIDC_ISSUER`, `CAMELEER_OIDC_CLIENT_ID`, `CAMELEER_OIDC_CLIENT_SECRET`.
### Manual K8s Commands
@@ -450,8 +449,11 @@ kubectl -n cameleer get pods
# View server logs
kubectl -n cameleer logs -f deploy/cameleer3-server
# View ClickHouse logs
kubectl -n cameleer logs -f statefulset/clickhouse
# View PostgreSQL logs
kubectl -n cameleer logs -f statefulset/postgres
# View OpenSearch logs
kubectl -n cameleer logs -f statefulset/opensearch
# Restart server
kubectl -n cameleer rollout restart deployment/cameleer3-server

View File

@@ -36,10 +36,26 @@
<artifactId>spring-boot-starter-jdbc</artifactId>
</dependency>
<dependency>
<groupId>com.clickhouse</groupId>
<artifactId>clickhouse-jdbc</artifactId>
<version>0.9.7</version>
<classifier>all</classifier>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
</dependency>
<dependency>
<groupId>org.flywaydb</groupId>
<artifactId>flyway-core</artifactId>
</dependency>
<dependency>
<groupId>org.flywaydb</groupId>
<artifactId>flyway-database-postgresql</artifactId>
</dependency>
<dependency>
<groupId>org.opensearch.client</groupId>
<artifactId>opensearch-java</artifactId>
<version>2.19.0</version>
</dependency>
<dependency>
<groupId>org.opensearch.client</groupId>
<artifactId>opensearch-rest-client</artifactId>
<version>2.19.0</version>
</dependency>
<dependency>
<groupId>org.springdoc</groupId>
@@ -96,8 +112,18 @@
</dependency>
<dependency>
<groupId>org.testcontainers</groupId>
<artifactId>testcontainers-clickhouse</artifactId>
<version>2.0.3</version>
<artifactId>testcontainers-postgresql</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.testcontainers</groupId>
<artifactId>testcontainers-junit-jupiter</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.opensearch</groupId>
<artifactId>opensearch-testcontainers</artifactId>
<version>2.1.1</version>
<scope>test</scope>
</dependency>
<dependency>
@@ -148,7 +174,7 @@
<artifactId>maven-failsafe-plugin</artifactId>
<configuration>
<forkCount>1</forkCount>
<reuseForks>false</reuseForks>
<reuseForks>true</reuseForks>
</configuration>
<executions>
<execution>

View File

@@ -1,17 +1,23 @@
package com.cameleer3.server.app.agent;
import com.cameleer3.server.core.agent.AgentEventService;
import com.cameleer3.server.core.agent.AgentInfo;
import com.cameleer3.server.core.agent.AgentRegistryService;
import com.cameleer3.server.core.agent.AgentState;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import java.util.HashMap;
import java.util.Map;
/**
* Periodic task that checks agent lifecycle and expires old commands.
* <p>
* Runs on a configurable fixed delay (default 10 seconds). Transitions
* agents LIVE -> STALE -> DEAD based on heartbeat timing, and removes
* expired pending commands.
* expired pending commands. Records lifecycle events for state transitions.
*/
@Component
public class AgentLifecycleMonitor {
@@ -19,18 +25,46 @@ public class AgentLifecycleMonitor {
private static final Logger log = LoggerFactory.getLogger(AgentLifecycleMonitor.class);
private final AgentRegistryService registryService;
private final AgentEventService agentEventService;
public AgentLifecycleMonitor(AgentRegistryService registryService) {
public AgentLifecycleMonitor(AgentRegistryService registryService,
AgentEventService agentEventService) {
this.registryService = registryService;
this.agentEventService = agentEventService;
}
@Scheduled(fixedDelayString = "${agent-registry.lifecycle-check-interval-ms:10000}")
public void checkLifecycle() {
try {
// Snapshot states before lifecycle check
Map<String, AgentState> statesBefore = new HashMap<>();
for (AgentInfo agent : registryService.findAll()) {
statesBefore.put(agent.id(), agent.state());
}
registryService.checkLifecycle();
registryService.expireOldCommands();
// Detect transitions and record events
for (AgentInfo agent : registryService.findAll()) {
AgentState before = statesBefore.get(agent.id());
if (before != null && before != agent.state()) {
String eventType = mapTransitionEvent(before, agent.state());
if (eventType != null) {
agentEventService.recordEvent(agent.id(), agent.application(), eventType,
agent.name() + " " + before + " -> " + agent.state());
}
}
}
} catch (Exception e) {
log.error("Error during agent lifecycle check", e);
}
}
private String mapTransitionEvent(AgentState from, AgentState to) {
if (from == AgentState.LIVE && to == AgentState.STALE) return "WENT_STALE";
if (from == AgentState.STALE && to == AgentState.DEAD) return "WENT_DEAD";
if (from == AgentState.STALE && to == AgentState.LIVE) return "RECOVERED";
return null;
}
}

View File

@@ -1,11 +1,13 @@
package com.cameleer3.server.app.config;
import com.cameleer3.server.core.agent.AgentEventRepository;
import com.cameleer3.server.core.agent.AgentEventService;
import com.cameleer3.server.core.agent.AgentRegistryService;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* Creates the {@link AgentRegistryService} bean.
* Creates the {@link AgentRegistryService} and {@link AgentEventService} beans.
* <p>
* Follows the established pattern: core module plain class, app module bean config.
*/
@@ -20,4 +22,9 @@ public class AgentRegistryBeanConfig {
config.getCommandExpiryMs()
);
}
@Bean
public AgentEventService agentEventService(AgentEventRepository repository) {
return new AgentEventService(repository);
}
}

View File

@@ -1,80 +0,0 @@
package com.cameleer3.server.app.config;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.Resource;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;
import org.springframework.jdbc.core.JdbcTemplate;
import jakarta.annotation.PostConstruct;
import javax.sql.DataSource;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Comparator;
import java.util.stream.Collectors;
/**
* ClickHouse configuration.
* <p>
* Spring Boot auto-configures the DataSource from {@code spring.datasource.*} properties.
* This class exposes a JdbcTemplate bean and initializes the schema on startup.
* <p>
* The ClickHouse container's {@code CLICKHOUSE_DB} env var creates the database;
* this class creates the tables within it.
* <p>
* Migration files are discovered automatically from {@code classpath:clickhouse/*.sql}
* and executed in filename order (numeric prefix sort).
*/
@Configuration
public class ClickHouseConfig {
private static final Logger log = LoggerFactory.getLogger(ClickHouseConfig.class);
private static final String MIGRATION_PATTERN = "classpath:clickhouse/*.sql";
private final DataSource dataSource;
public ClickHouseConfig(DataSource dataSource) {
this.dataSource = dataSource;
}
@Bean
public JdbcTemplate jdbcTemplate() {
return new JdbcTemplate(dataSource);
}
@PostConstruct
void initSchema() {
var jdbc = new JdbcTemplate(dataSource);
try {
Resource[] resources = new PathMatchingResourcePatternResolver()
.getResources(MIGRATION_PATTERN);
Arrays.sort(resources, Comparator.comparing(Resource::getFilename));
for (Resource resource : resources) {
String filename = resource.getFilename();
try {
String sql = resource.getContentAsString(StandardCharsets.UTF_8);
String stripped = sql.lines()
.filter(line -> !line.trim().startsWith("--"))
.collect(Collectors.joining("\n"));
for (String statement : stripped.split(";")) {
String trimmed = statement.trim();
if (!trimmed.isEmpty()) {
jdbc.execute(trimmed);
}
}
log.info("Applied schema: {}", filename);
} catch (Exception e) {
log.error("Failed to apply schema: {}", filename, e);
throw new RuntimeException("Schema initialization failed: " + filename, e);
}
}
} catch (RuntimeException e) {
throw e;
} catch (Exception e) {
throw new RuntimeException("Failed to discover migration files", e);
}
}
}

View File

@@ -1,41 +1,22 @@
package com.cameleer3.server.app.config;
import com.cameleer3.server.core.ingestion.IngestionService;
import com.cameleer3.server.core.ingestion.TaggedDiagram;
import com.cameleer3.server.core.ingestion.TaggedExecution;
import com.cameleer3.server.core.ingestion.WriteBuffer;
import com.cameleer3.server.core.storage.model.MetricsSnapshot;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* Creates the write buffer and ingestion service beans.
* Creates the write buffer bean for metrics.
* <p>
* The {@link WriteBuffer} instances are shared between the
* {@link IngestionService} (producer side) and the flush scheduler (consumer side).
* The {@link WriteBuffer} instance is shared between the
* {@link com.cameleer3.server.core.ingestion.IngestionService} (producer side)
* and the flush scheduler (consumer side).
*/
@Configuration
public class IngestionBeanConfig {
@Bean
public WriteBuffer<TaggedExecution> executionBuffer(IngestionConfig config) {
return new WriteBuffer<>(config.getBufferCapacity());
}
@Bean
public WriteBuffer<TaggedDiagram> diagramBuffer(IngestionConfig config) {
return new WriteBuffer<>(config.getBufferCapacity());
}
@Bean
public WriteBuffer<MetricsSnapshot> metricsBuffer(IngestionConfig config) {
return new WriteBuffer<>(config.getBufferCapacity());
}
@Bean
public IngestionService ingestionService(WriteBuffer<TaggedExecution> executionBuffer,
WriteBuffer<TaggedDiagram> diagramBuffer,
WriteBuffer<MetricsSnapshot> metricsBuffer) {
return new IngestionService(executionBuffer, diagramBuffer, metricsBuffer);
}
}

View File

@@ -31,7 +31,10 @@ public class OpenApiConfig {
"ExecutionSummary", "ExecutionDetail", "ExecutionStats",
"StatsTimeseries", "TimeseriesBucket",
"SearchResultExecutionSummary", "UserInfo",
"ProcessorNode"
"ProcessorNode",
"AppCatalogEntry", "RouteSummary", "AgentSummary",
"RouteMetrics", "AgentEventResponse", "AgentInstanceResponse",
"ProcessorMetrics", "AgentMetricsResponse", "MetricBucket"
);
@Bean

View File

@@ -0,0 +1,28 @@
package com.cameleer3.server.app.config;
import org.apache.http.HttpHost;
import org.opensearch.client.RestClient;
import org.opensearch.client.json.jackson.JacksonJsonpMapper;
import org.opensearch.client.opensearch.OpenSearchClient;
import org.opensearch.client.transport.rest_client.RestClientTransport;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class OpenSearchConfig {
@Value("${opensearch.url:http://localhost:9200}")
private String opensearchUrl;
@Bean(destroyMethod = "close")
public RestClient opensearchRestClient() {
return RestClient.builder(HttpHost.create(opensearchUrl)).build();
}
@Bean
public OpenSearchClient openSearchClient(RestClient restClient) {
var transport = new RestClientTransport(restClient, new JacksonJsonpMapper());
return new OpenSearchClient(transport);
}
}

View File

@@ -1,32 +1,19 @@
package com.cameleer3.server.app.config;
import com.cameleer3.server.app.search.ClickHouseSearchEngine;
import com.cameleer3.server.core.detail.DetailService;
import com.cameleer3.server.core.search.SearchEngine;
import com.cameleer3.server.core.search.SearchService;
import com.cameleer3.server.core.storage.ExecutionRepository;
import com.cameleer3.server.core.storage.SearchIndex;
import com.cameleer3.server.core.storage.StatsStore;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.JdbcTemplate;
/**
* Creates beans for the search and detail layers.
* Creates beans for the search layer.
*/
@Configuration
public class SearchBeanConfig {
@Bean
public SearchEngine searchEngine(JdbcTemplate jdbcTemplate) {
return new ClickHouseSearchEngine(jdbcTemplate);
}
@Bean
public SearchService searchService(SearchEngine searchEngine) {
return new SearchService(searchEngine);
}
@Bean
public DetailService detailService(ExecutionRepository executionRepository) {
return new DetailService(executionRepository);
public SearchService searchService(SearchIndex searchIndex, StatsStore statsStore) {
return new SearchService(searchIndex, statsStore);
}
}

View File

@@ -0,0 +1,44 @@
package com.cameleer3.server.app.config;
import com.cameleer3.server.core.admin.AuditRepository;
import com.cameleer3.server.core.admin.AuditService;
import com.cameleer3.server.core.detail.DetailService;
import com.cameleer3.server.core.indexing.SearchIndexer;
import com.cameleer3.server.core.ingestion.IngestionService;
import com.cameleer3.server.core.ingestion.WriteBuffer;
import com.cameleer3.server.core.storage.*;
import com.cameleer3.server.core.storage.model.MetricsSnapshot;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class StorageBeanConfig {
@Bean
public DetailService detailService(ExecutionStore executionStore) {
return new DetailService(executionStore);
}
@Bean(destroyMethod = "shutdown")
public SearchIndexer searchIndexer(ExecutionStore executionStore, SearchIndex searchIndex,
@Value("${opensearch.debounce-ms:2000}") long debounceMs,
@Value("${opensearch.queue-size:10000}") int queueSize) {
return new SearchIndexer(executionStore, searchIndex, debounceMs, queueSize);
}
@Bean
public AuditService auditService(AuditRepository auditRepository) {
return new AuditService(auditRepository);
}
@Bean
public IngestionService ingestionService(ExecutionStore executionStore,
DiagramStore diagramStore,
WriteBuffer<MetricsSnapshot> metricsBuffer,
SearchIndexer searchIndexer,
@Value("${cameleer.body-size-limit:16384}") int bodySizeLimit) {
return new IngestionService(executionStore, diagramStore, metricsBuffer,
searchIndexer::onExecutionUpdated, bodySizeLimit);
}
}

View File

@@ -92,7 +92,7 @@ public class AgentCommandController {
List<AgentInfo> agents = registryService.findAll().stream()
.filter(a -> a.state() == AgentState.LIVE)
.filter(a -> group.equals(a.group()))
.filter(a -> group.equals(a.application()))
.toList();
List<String> commandIds = new ArrayList<>();

View File

@@ -0,0 +1,49 @@
package com.cameleer3.server.app.controller;
import com.cameleer3.server.app.dto.AgentEventResponse;
import com.cameleer3.server.core.agent.AgentEventService;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import java.time.Instant;
import java.util.List;
@RestController
@RequestMapping("/api/v1/agents/events-log")
@Tag(name = "Agent Events", description = "Agent lifecycle event log")
public class AgentEventsController {
private final AgentEventService agentEventService;
public AgentEventsController(AgentEventService agentEventService) {
this.agentEventService = agentEventService;
}
@GetMapping
@Operation(summary = "Query agent events",
description = "Returns agent lifecycle events, optionally filtered by app and/or agent ID")
@ApiResponse(responseCode = "200", description = "Events returned")
public ResponseEntity<List<AgentEventResponse>> getEvents(
@RequestParam(required = false) String appId,
@RequestParam(required = false) String agentId,
@RequestParam(required = false) String from,
@RequestParam(required = false) String to,
@RequestParam(defaultValue = "50") int limit) {
Instant fromInstant = from != null ? Instant.parse(from) : null;
Instant toInstant = to != null ? Instant.parse(to) : null;
var events = agentEventService.queryEvents(appId, agentId, fromInstant, toInstant, limit)
.stream()
.map(AgentEventResponse::from)
.toList();
return ResponseEntity.ok(events);
}
}
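
The `from`/`to` query parameters above are parsed as ISO-8601 instants, with `null` meaning "no bound". A minimal standalone sketch of that handling (class and method names are illustrative, not from the codebase):

```java
import java.time.Instant;

// Sketch of the from/to handling in getEvents: a missing parameter
// yields a null bound instead of a parse error.
public class EventWindowSketch {
    static Instant parseOrNull(String s) {
        return s != null ? Instant.parse(s) : null;
    }

    public static void main(String[] args) {
        System.out.println(parseOrNull("2026-03-24T08:48:12Z")); // 2026-03-24T08:48:12Z
        System.out.println(parseOrNull(null));                   // null
    }
}
```

Note that `Instant.parse` throws `DateTimeParseException` on malformed input, so a bad `from` value surfaces as an error rather than being silently ignored.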

View File

@@ -0,0 +1,66 @@
package com.cameleer3.server.app.controller;
import com.cameleer3.server.app.dto.AgentMetricsResponse;
import com.cameleer3.server.app.dto.MetricBucket;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.web.bind.annotation.*;
import java.sql.Timestamp;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.*;
@RestController
@RequestMapping("/api/v1/agents/{agentId}/metrics")
public class AgentMetricsController {
private final JdbcTemplate jdbc;
public AgentMetricsController(JdbcTemplate jdbc) {
this.jdbc = jdbc;
}
@GetMapping
public AgentMetricsResponse getMetrics(
@PathVariable String agentId,
@RequestParam String names,
@RequestParam(required = false) Instant from,
@RequestParam(required = false) Instant to,
@RequestParam(defaultValue = "60") int buckets) {
if (from == null) from = Instant.now().minus(1, ChronoUnit.HOURS);
if (to == null) to = Instant.now();
List<String> metricNames = Arrays.asList(names.split(","));
long intervalMs = (to.toEpochMilli() - from.toEpochMilli()) / Math.max(buckets, 1);
String intervalStr = intervalMs + " milliseconds";
Map<String, List<MetricBucket>> result = new LinkedHashMap<>();
for (String name : metricNames) {
result.put(name.trim(), new ArrayList<>());
}
String sql = """
SELECT time_bucket(CAST(? AS interval), collected_at) AS bucket,
metric_name,
AVG(metric_value) AS avg_value
FROM agent_metrics
WHERE agent_id = ?
AND collected_at >= ? AND collected_at < ?
AND metric_name = ANY(?)
GROUP BY bucket, metric_name
ORDER BY bucket
""";
String[] namesArray = metricNames.stream().map(String::trim).toArray(String[]::new);
jdbc.query(sql, rs -> {
String metricName = rs.getString("metric_name");
Instant bucket = rs.getTimestamp("bucket").toInstant();
double value = rs.getDouble("avg_value");
result.computeIfAbsent(metricName, k -> new ArrayList<>())
.add(new MetricBucket(bucket, value));
}, intervalStr, agentId, Timestamp.from(from), Timestamp.from(to), namesArray);
return new AgentMetricsResponse(result);
}
}
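
The metrics endpoint splits the requested window into equal slices before handing the interval to `time_bucket`. A minimal standalone sketch of that interval math (class name is illustrative), including the `Math.max(buckets, 1)` guard against division by zero:

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

// Sketch of the bucket-interval computation in AgentMetricsController:
// the [from, to) window is divided into `buckets` equal slices.
public class BucketIntervalSketch {
    static long intervalMs(Instant from, Instant to, int buckets) {
        return (to.toEpochMilli() - from.toEpochMilli()) / Math.max(buckets, 1);
    }

    public static void main(String[] args) {
        Instant to = Instant.parse("2026-03-24T11:00:00Z");
        Instant from = to.minus(1, ChronoUnit.HOURS);
        // A one-hour window split into 60 buckets gives 60_000 ms per bucket.
        System.out.println(intervalMs(from, to, 60)); // 60000
        // buckets=0 is clamped to 1, so the whole window becomes one bucket.
        System.out.println(intervalMs(from, to, 0));  // 3600000
    }
}
```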

View File

@@ -8,6 +8,7 @@ import com.cameleer3.server.app.dto.AgentRegistrationRequest;
import com.cameleer3.server.app.dto.AgentRegistrationResponse;
import com.cameleer3.server.app.dto.ErrorResponse;
import com.cameleer3.server.app.security.BootstrapTokenValidator;
import com.cameleer3.server.core.agent.AgentEventService;
import com.cameleer3.server.core.agent.AgentInfo;
import com.cameleer3.server.core.agent.AgentRegistryService;
import com.cameleer3.server.core.agent.AgentState;
@@ -23,6 +24,7 @@ import jakarta.servlet.http.HttpServletRequest;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.ResponseEntity;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
@@ -31,8 +33,13 @@ import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import java.sql.Timestamp;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
/**
* Agent registration, heartbeat, listing, and token refresh endpoints.
@@ -50,17 +57,23 @@ public class AgentRegistrationController {
private final BootstrapTokenValidator bootstrapTokenValidator;
private final JwtService jwtService;
private final Ed25519SigningService ed25519SigningService;
private final AgentEventService agentEventService;
private final JdbcTemplate jdbc;
public AgentRegistrationController(AgentRegistryService registryService,
AgentRegistryConfig config,
BootstrapTokenValidator bootstrapTokenValidator,
JwtService jwtService,
Ed25519SigningService ed25519SigningService) {
Ed25519SigningService ed25519SigningService,
AgentEventService agentEventService,
JdbcTemplate jdbc) {
this.registryService = registryService;
this.config = config;
this.bootstrapTokenValidator = bootstrapTokenValidator;
this.jwtService = jwtService;
this.ed25519SigningService = ed25519SigningService;
this.agentEventService = agentEventService;
this.jdbc = jdbc;
}
@PostMapping("/register")
@@ -89,18 +102,21 @@ public class AgentRegistrationController {
return ResponseEntity.badRequest().build();
}
String group = request.group() != null ? request.group() : "default";
String application = request.application() != null ? request.application() : "default";
List<String> routeIds = request.routeIds() != null ? request.routeIds() : List.of();
var capabilities = request.capabilities() != null ? request.capabilities() : Collections.<String, Object>emptyMap();
AgentInfo agent = registryService.register(
request.agentId(), request.name(), group, request.version(), routeIds, capabilities);
log.info("Agent registered: {} (name={}, group={})", request.agentId(), request.name(), group);
request.agentId(), request.name(), application, request.version(), routeIds, capabilities);
log.info("Agent registered: {} (name={}, application={})", request.agentId(), request.name(), application);
agentEventService.recordEvent(request.agentId(), application, "REGISTERED",
"Agent registered: " + request.name());
// Issue JWT tokens with AGENT role
List<String> roles = List.of("AGENT");
String accessToken = jwtService.createAccessToken(request.agentId(), group, roles);
String refreshToken = jwtService.createRefreshToken(request.agentId(), group, roles);
String accessToken = jwtService.createAccessToken(request.agentId(), application, roles);
String refreshToken = jwtService.createRefreshToken(request.agentId(), application, roles);
return ResponseEntity.ok(new AgentRegistrationResponse(
agent.id(),
@@ -150,9 +166,10 @@ public class AgentRegistrationController {
// Preserve roles from refresh token
List<String> roles = result.roles().isEmpty()
? List.of("AGENT") : result.roles();
String newAccessToken = jwtService.createAccessToken(agentId, agent.group(), roles);
String newAccessToken = jwtService.createAccessToken(agentId, agent.application(), roles);
String newRefreshToken = jwtService.createRefreshToken(agentId, agent.application(), roles);
return ResponseEntity.ok(new AgentRefreshResponse(newAccessToken));
return ResponseEntity.ok(new AgentRefreshResponse(newAccessToken, newRefreshToken));
}
@PostMapping("/{id}/heartbeat")
@@ -170,13 +187,13 @@ public class AgentRegistrationController {
@GetMapping
@Operation(summary = "List all agents",
description = "Returns all registered agents, optionally filtered by status and/or group")
description = "Returns all registered agents with runtime metrics, optionally filtered by status and/or application")
@ApiResponse(responseCode = "200", description = "Agent list returned")
@ApiResponse(responseCode = "400", description = "Invalid status filter",
content = @Content(schema = @Schema(implementation = ErrorResponse.class)))
public ResponseEntity<List<AgentInstanceResponse>> listAgents(
@RequestParam(required = false) String status,
@RequestParam(required = false) String group) {
@RequestParam(required = false) String application) {
List<AgentInfo> agents;
if (status != null) {
@@ -190,16 +207,59 @@ public class AgentRegistrationController {
agents = registryService.findAll();
}
// Apply group filter if specified
if (group != null && !group.isBlank()) {
// Apply application filter if specified
if (application != null && !application.isBlank()) {
agents = agents.stream()
.filter(a -> group.equals(a.group()))
.filter(a -> application.equals(a.application()))
.toList();
}
List<AgentInstanceResponse> response = agents.stream()
.map(AgentInstanceResponse::from)
// Enrich with runtime metrics from continuous aggregates
Map<String, double[]> agentMetrics = queryAgentMetrics();
final List<AgentInfo> finalAgents = agents;
List<AgentInstanceResponse> response = finalAgents.stream()
.map(a -> {
AgentInstanceResponse dto = AgentInstanceResponse.from(a);
double[] m = agentMetrics.get(a.application());
if (m != null) {
long appAgentCount = finalAgents.stream()
.filter(ag -> ag.application().equals(a.application())).count();
double agentTps = appAgentCount > 0 ? m[0] / appAgentCount : 0;
double errorRate = m[1];
int activeRoutes = (int) m[2];
return dto.withMetrics(agentTps, errorRate, activeRoutes);
}
return dto;
})
.toList();
return ResponseEntity.ok(response);
}
private Map<String, double[]> queryAgentMetrics() {
Map<String, double[]> result = new HashMap<>();
Instant now = Instant.now();
Instant from1m = now.minus(1, ChronoUnit.MINUTES);
try {
jdbc.query(
"SELECT application_name, " +
"SUM(total_count) AS total, " +
"SUM(failed_count) AS failed, " +
"COUNT(DISTINCT route_id) AS active_routes " +
"FROM stats_1m_route WHERE bucket >= ? AND bucket < ? " +
"GROUP BY application_name",
rs -> {
long total = rs.getLong("total");
long failed = rs.getLong("failed");
double tps = total / 60.0;
double errorRate = total > 0 ? (double) failed / total : 0.0;
int activeRoutes = rs.getInt("active_routes");
result.put(rs.getString("application_name"), new double[]{tps, errorRate, activeRoutes});
},
Timestamp.from(from1m), Timestamp.from(now));
} catch (Exception e) {
log.debug("Could not query agent metrics: {}", e.getMessage());
}
return result;
}
}
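
The enrichment in `listAgents` divides application-level throughput evenly across that application's agents, since `stats_1m_route` is aggregated per application, not per agent. A minimal standalone sketch of the derivation (class and method names are illustrative):

```java
// Sketch of the per-agent metric derivation in listAgents: counts come
// from a 1-minute bucket, so TPS is total/60, split across the
// application's agents; error rate is failed/total.
public class AgentMetricsSketch {
    static double perAgentTps(long totalCount, long agentCount) {
        double appTps = totalCount / 60.0;
        return agentCount > 0 ? appTps / agentCount : 0;
    }

    static double errorRate(long totalCount, long failedCount) {
        return totalCount > 0 ? (double) failedCount / totalCount : 0.0;
    }

    public static void main(String[] args) {
        // 600 executions in the last minute across 2 agents -> 5 TPS each.
        System.out.println(perAgentTps(600, 2)); // 5.0
        // 30 failures out of 600 -> 5% error rate.
        System.out.println(errorRate(600, 30));  // 0.05
    }
}
```

The even split is an approximation: an agent doing most of its application's traffic is reported with the same TPS as an idle sibling.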

View File

@@ -0,0 +1,68 @@
package com.cameleer3.server.app.controller;
import com.cameleer3.server.app.dto.AuditLogPageResponse;
import com.cameleer3.server.core.admin.AuditCategory;
import com.cameleer3.server.core.admin.AuditRepository;
import com.cameleer3.server.core.admin.AuditRepository.AuditPage;
import com.cameleer3.server.core.admin.AuditRepository.AuditQuery;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.springframework.format.annotation.DateTimeFormat;
import org.springframework.http.ResponseEntity;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;
@RestController
@RequestMapping("/api/v1/admin/audit")
@PreAuthorize("hasRole('ADMIN')")
@Tag(name = "Audit Log", description = "Audit log viewer (ADMIN only)")
public class AuditLogController {
private final AuditRepository auditRepository;
public AuditLogController(AuditRepository auditRepository) {
this.auditRepository = auditRepository;
}
@GetMapping
@Operation(summary = "Search audit log entries with pagination")
public ResponseEntity<AuditLogPageResponse> getAuditLog(
@RequestParam(required = false) String username,
@RequestParam(required = false) String category,
@RequestParam(required = false) String search,
@RequestParam(required = false) @DateTimeFormat(iso = DateTimeFormat.ISO.DATE) LocalDate from,
@RequestParam(required = false) @DateTimeFormat(iso = DateTimeFormat.ISO.DATE) LocalDate to,
@RequestParam(defaultValue = "timestamp") String sort,
@RequestParam(defaultValue = "desc") String order,
@RequestParam(defaultValue = "0") int page,
@RequestParam(defaultValue = "25") int size) {
size = Math.min(size, 100);
Instant fromInstant = from != null ? from.atStartOfDay(ZoneOffset.UTC).toInstant() : null;
Instant toInstant = to != null ? to.plusDays(1).atStartOfDay(ZoneOffset.UTC).toInstant() : null;
AuditCategory cat = null;
if (category != null && !category.isEmpty()) {
try {
cat = AuditCategory.valueOf(category.toUpperCase());
} catch (IllegalArgumentException ignored) {
// invalid category is treated as no filter
}
}
AuditQuery query = new AuditQuery(username, cat, search, fromInstant, toInstant, sort, order, page, size);
AuditPage result = auditRepository.find(query);
int totalPages = Math.max(1, (int) Math.ceil((double) result.totalCount() / size));
return ResponseEntity.ok(new AuditLogPageResponse(
result.items(), result.totalCount(), page, size, totalPages));
}
}
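
The paging math above caps the page size at 100 and always reports at least one page, even for an empty result. A minimal standalone sketch (class name is illustrative):

```java
// Sketch of the paging arithmetic in AuditLogController.
public class AuditPagingSketch {
    static int totalPages(long totalCount, int size) {
        // ceil(total/size), but never less than 1.
        return Math.max(1, (int) Math.ceil((double) totalCount / size));
    }

    static int clampSize(int requested) {
        return Math.min(requested, 100); // page size is capped at 100
    }

    public static void main(String[] args) {
        System.out.println(totalPages(0, 25));   // 1 (empty result still reports one page)
        System.out.println(totalPages(101, 25)); // 5
        System.out.println(clampSize(500));      // 100
    }
}
```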

View File

@@ -0,0 +1,130 @@
package com.cameleer3.server.app.controller;
import com.cameleer3.server.app.dto.ActiveQueryResponse;
import com.cameleer3.server.app.dto.ConnectionPoolResponse;
import com.cameleer3.server.app.dto.DatabaseStatusResponse;
import com.cameleer3.server.app.dto.TableSizeResponse;
import com.cameleer3.server.core.admin.AuditCategory;
import com.cameleer3.server.core.admin.AuditResult;
import com.cameleer3.server.core.admin.AuditService;
import com.zaxxer.hikari.HikariDataSource;
import com.zaxxer.hikari.HikariPoolMXBean;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.tags.Tag;
import jakarta.servlet.http.HttpServletRequest;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.server.ResponseStatusException;
import javax.sql.DataSource;
import java.util.List;
@RestController
@RequestMapping("/api/v1/admin/database")
@PreAuthorize("hasRole('ADMIN')")
@Tag(name = "Database Admin", description = "Database monitoring and management (ADMIN only)")
public class DatabaseAdminController {
private final JdbcTemplate jdbc;
private final DataSource dataSource;
private final AuditService auditService;
public DatabaseAdminController(JdbcTemplate jdbc, DataSource dataSource, AuditService auditService) {
this.jdbc = jdbc;
this.dataSource = dataSource;
this.auditService = auditService;
}
@GetMapping("/status")
@Operation(summary = "Get database connection status and version")
public ResponseEntity<DatabaseStatusResponse> getStatus() {
try {
String version = jdbc.queryForObject("SELECT version()", String.class);
boolean timescaleDb = Boolean.TRUE.equals(
jdbc.queryForObject("SELECT EXISTS(SELECT 1 FROM pg_extension WHERE extname = 'timescaledb')", Boolean.class));
String schema = jdbc.queryForObject("SELECT current_schema()", String.class);
String host = extractHost(dataSource);
return ResponseEntity.ok(new DatabaseStatusResponse(true, version, host, schema, timescaleDb));
} catch (Exception e) {
return ResponseEntity.ok(new DatabaseStatusResponse(false, null, null, null, false));
}
}
@GetMapping("/pool")
@Operation(summary = "Get HikariCP connection pool stats")
public ResponseEntity<ConnectionPoolResponse> getPool() {
HikariDataSource hds = (HikariDataSource) dataSource;
HikariPoolMXBean pool = hds.getHikariPoolMXBean();
return ResponseEntity.ok(new ConnectionPoolResponse(
pool.getActiveConnections(), pool.getIdleConnections(),
pool.getThreadsAwaitingConnection(), hds.getConnectionTimeout(),
hds.getMaximumPoolSize()));
}
@GetMapping("/tables")
@Operation(summary = "Get table sizes and row counts")
public ResponseEntity<List<TableSizeResponse>> getTables() {
var tables = jdbc.query("""
SELECT relname AS table_name,
n_live_tup AS row_count,
pg_size_pretty(pg_total_relation_size(relid)) AS data_size,
pg_total_relation_size(relid) AS data_size_bytes,
pg_size_pretty(pg_indexes_size(relid)) AS index_size,
pg_indexes_size(relid) AS index_size_bytes
FROM pg_stat_user_tables
WHERE schemaname = current_schema()
ORDER BY pg_total_relation_size(relid) DESC
""", (rs, row) -> new TableSizeResponse(
rs.getString("table_name"), rs.getLong("row_count"),
rs.getString("data_size"), rs.getString("index_size"),
rs.getLong("data_size_bytes"), rs.getLong("index_size_bytes")));
return ResponseEntity.ok(tables);
}
@GetMapping("/queries")
@Operation(summary = "Get active queries")
public ResponseEntity<List<ActiveQueryResponse>> getQueries() {
var queries = jdbc.query("""
SELECT pid, EXTRACT(EPOCH FROM (now() - query_start)) AS duration_seconds,
state, query
FROM pg_stat_activity
WHERE state != 'idle' AND pid != pg_backend_pid() AND datname = current_database()
ORDER BY query_start ASC
""", (rs, row) -> new ActiveQueryResponse(
rs.getInt("pid"), rs.getDouble("duration_seconds"),
rs.getString("state"), rs.getString("query")));
return ResponseEntity.ok(queries);
}
@PostMapping("/queries/{pid}/kill")
@Operation(summary = "Terminate a query by PID")
public ResponseEntity<Void> killQuery(@PathVariable int pid, HttpServletRequest request) {
var exists = jdbc.queryForObject(
"SELECT EXISTS(SELECT 1 FROM pg_stat_activity WHERE pid = ? AND pid != pg_backend_pid())",
Boolean.class, pid);
if (!Boolean.TRUE.equals(exists)) {
throw new ResponseStatusException(HttpStatus.NOT_FOUND, "No active query with PID " + pid);
}
jdbc.queryForObject("SELECT pg_terminate_backend(?)", Boolean.class, pid);
auditService.log("kill_query", AuditCategory.INFRA, "PID " + pid, null, AuditResult.SUCCESS, request);
return ResponseEntity.ok().build();
}
private String extractHost(DataSource ds) {
try {
if (ds instanceof HikariDataSource hds) {
return hds.getJdbcUrl();
}
return "unknown";
} catch (Exception e) {
return "unknown";
}
}
}

View File

@@ -1,8 +1,9 @@
package com.cameleer3.server.app.controller;
import com.cameleer3.server.app.storage.ClickHouseExecutionRepository;
import com.cameleer3.server.core.detail.DetailService;
import com.cameleer3.server.core.detail.ExecutionDetail;
import com.cameleer3.server.core.storage.ExecutionStore;
import com.cameleer3.server.core.storage.ExecutionStore.ProcessorRecord;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;
@@ -12,14 +13,16 @@ import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
/**
* Endpoints for retrieving execution details and processor snapshots.
* <p>
* The detail endpoint returns a nested processor tree reconstructed from
* flat parallel arrays stored in ClickHouse. The snapshot endpoint returns
* per-processor exchange data (bodies and headers).
* individual processor records stored in PostgreSQL. The snapshot endpoint
* returns per-processor exchange data (bodies and headers).
*/
@RestController
@RequestMapping("/api/v1/executions")
@@ -27,12 +30,12 @@ import java.util.Map;
public class DetailController {
private final DetailService detailService;
private final ClickHouseExecutionRepository executionRepository;
private final ExecutionStore executionStore;
public DetailController(DetailService detailService,
ClickHouseExecutionRepository executionRepository) {
ExecutionStore executionStore) {
this.detailService = detailService;
this.executionRepository = executionRepository;
this.executionStore = executionStore;
}
@GetMapping("/{executionId}")
@@ -52,8 +55,18 @@ public class DetailController {
public ResponseEntity<Map<String, String>> getProcessorSnapshot(
@PathVariable String executionId,
@PathVariable int index) {
return executionRepository.findProcessorSnapshot(executionId, index)
.map(ResponseEntity::ok)
.orElse(ResponseEntity.notFound().build());
List<ProcessorRecord> processors = executionStore.findProcessors(executionId);
if (index < 0 || index >= processors.size()) {
return ResponseEntity.notFound().build();
}
ProcessorRecord p = processors.get(index);
Map<String, String> snapshot = new LinkedHashMap<>();
if (p.inputBody() != null) snapshot.put("inputBody", p.inputBody());
if (p.outputBody() != null) snapshot.put("outputBody", p.outputBody());
if (p.inputHeaders() != null) snapshot.put("inputHeaders", p.inputHeaders());
if (p.outputHeaders() != null) snapshot.put("outputHeaders", p.outputHeaders());
return ResponseEntity.ok(snapshot);
}
}
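
The snapshot assembly above simply omits map entries for null processor fields. A minimal standalone sketch of that logic (this `ProcessorRecord` is a simplified stand-in for the real `ExecutionStore.ProcessorRecord`):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of getProcessorSnapshot's map assembly: null fields are
// dropped, so the response contains only the data actually captured.
public class SnapshotSketch {
    record ProcessorRecord(String inputBody, String outputBody,
                           String inputHeaders, String outputHeaders) {}

    static Map<String, String> snapshot(ProcessorRecord p) {
        Map<String, String> m = new LinkedHashMap<>();
        if (p.inputBody() != null) m.put("inputBody", p.inputBody());
        if (p.outputBody() != null) m.put("outputBody", p.outputBody());
        if (p.inputHeaders() != null) m.put("inputHeaders", p.inputHeaders());
        if (p.outputHeaders() != null) m.put("outputHeaders", p.outputHeaders());
        return m;
    }

    public static void main(String[] args) {
        var m = snapshot(new ProcessorRecord("in", null, "{}", null));
        System.out.println(m.keySet()); // [inputBody, inputHeaders]
    }
}
```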

View File

@@ -11,7 +11,6 @@ import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;
@@ -25,8 +24,8 @@ import java.util.List;
/**
* Ingestion endpoint for route diagrams.
* <p>
* Accepts both single {@link RouteGraph} and arrays. Data is buffered
* and flushed to ClickHouse by the flush scheduler.
* Accepts both single {@link RouteGraph} and arrays. Data is written
* synchronously to PostgreSQL via {@link IngestionService}.
*/
@RestController
@RequestMapping("/api/v1/data")
@@ -47,26 +46,12 @@ public class DiagramController {
@Operation(summary = "Ingest route diagram data",
description = "Accepts a single RouteGraph or an array of RouteGraphs")
@ApiResponse(responseCode = "202", description = "Data accepted for processing")
@ApiResponse(responseCode = "503", description = "Buffer full, retry later")
public ResponseEntity<Void> ingestDiagrams(@RequestBody String body) throws JsonProcessingException {
String agentId = extractAgentId();
List<RouteGraph> graphs = parsePayload(body);
List<TaggedDiagram> tagged = graphs.stream()
.map(graph -> new TaggedDiagram(agentId, graph))
.toList();
boolean accepted;
if (tagged.size() == 1) {
accepted = ingestionService.acceptDiagram(tagged.get(0));
} else {
accepted = ingestionService.acceptDiagrams(tagged);
}
if (!accepted) {
log.warn("Diagram buffer full, returning 503");
return ResponseEntity.status(HttpStatus.SERVICE_UNAVAILABLE)
.header("Retry-After", "5")
.build();
for (RouteGraph graph : graphs) {
ingestionService.ingestDiagram(new TaggedDiagram(agentId, graph));
}
return ResponseEntity.accepted().build();

View File

@@ -5,7 +5,7 @@ import com.cameleer3.server.core.agent.AgentInfo;
import com.cameleer3.server.core.agent.AgentRegistryService;
import com.cameleer3.server.core.diagram.DiagramLayout;
import com.cameleer3.server.core.diagram.DiagramRenderer;
import com.cameleer3.server.core.storage.DiagramRepository;
import com.cameleer3.server.core.storage.DiagramStore;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.media.Content;
import io.swagger.v3.oas.annotations.media.Schema;
@@ -39,14 +39,14 @@ public class DiagramRenderController {
private static final MediaType SVG_MEDIA_TYPE = MediaType.valueOf("image/svg+xml");
private final DiagramRepository diagramRepository;
private final DiagramStore diagramStore;
private final DiagramRenderer diagramRenderer;
private final AgentRegistryService registryService;
public DiagramRenderController(DiagramRepository diagramRepository,
public DiagramRenderController(DiagramStore diagramStore,
DiagramRenderer diagramRenderer,
AgentRegistryService registryService) {
this.diagramRepository = diagramRepository;
this.diagramStore = diagramStore;
this.diagramRenderer = diagramRenderer;
this.registryService = registryService;
}
@@ -64,7 +64,7 @@ public class DiagramRenderController {
@PathVariable String contentHash,
HttpServletRequest request) {
Optional<RouteGraph> graphOpt = diagramRepository.findByContentHash(contentHash);
Optional<RouteGraph> graphOpt = diagramStore.findByContentHash(contentHash);
if (graphOpt.isEmpty()) {
return ResponseEntity.notFound().build();
}
@@ -90,14 +90,14 @@ public class DiagramRenderController {
}
@GetMapping
@Operation(summary = "Find diagram by application group and route ID",
description = "Resolves group to agent IDs and finds the latest diagram for the route")
@Operation(summary = "Find diagram by application and route ID",
description = "Resolves application to agent IDs and finds the latest diagram for the route")
@ApiResponse(responseCode = "200", description = "Diagram layout returned")
@ApiResponse(responseCode = "404", description = "No diagram found for the given group and route")
public ResponseEntity<DiagramLayout> findByGroupAndRoute(
@RequestParam String group,
@ApiResponse(responseCode = "404", description = "No diagram found for the given application and route")
public ResponseEntity<DiagramLayout> findByApplicationAndRoute(
@RequestParam String application,
@RequestParam String routeId) {
List<String> agentIds = registryService.findByGroup(group).stream()
List<String> agentIds = registryService.findByApplication(application).stream()
.map(AgentInfo::id)
.toList();
@@ -105,12 +105,12 @@ public class DiagramRenderController {
return ResponseEntity.notFound().build();
}
Optional<String> contentHash = diagramRepository.findContentHashForRouteByAgents(routeId, agentIds);
Optional<String> contentHash = diagramStore.findContentHashForRouteByAgents(routeId, agentIds);
if (contentHash.isEmpty()) {
return ResponseEntity.notFound().build();
}
Optional<RouteGraph> graphOpt = diagramRepository.findByContentHash(contentHash.get());
Optional<RouteGraph> graphOpt = diagramStore.findByContentHash(contentHash.get());
if (graphOpt.isEmpty()) {
return ResponseEntity.notFound().build();
}

View File

@@ -1,8 +1,9 @@
package com.cameleer3.server.app.controller;
import com.cameleer3.common.model.RouteExecution;
import com.cameleer3.server.core.agent.AgentInfo;
import com.cameleer3.server.core.agent.AgentRegistryService;
import com.cameleer3.server.core.ingestion.IngestionService;
import com.cameleer3.server.core.ingestion.TaggedExecution;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
@@ -11,7 +12,6 @@ import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;
@@ -25,9 +25,8 @@ import java.util.List;
/**
* Ingestion endpoint for route execution data.
* <p>
* Accepts both single {@link RouteExecution} and arrays. Data is buffered
* in a {@link com.cameleer3.server.core.ingestion.WriteBuffer} and flushed
* to ClickHouse by the flush scheduler.
* Accepts both single {@link RouteExecution} and arrays. Data is written
* synchronously to PostgreSQL via {@link IngestionService}.
*/
@RestController
@RequestMapping("/api/v1/data")
@@ -37,10 +36,14 @@ public class ExecutionController {
private static final Logger log = LoggerFactory.getLogger(ExecutionController.class);
private final IngestionService ingestionService;
private final AgentRegistryService registryService;
private final ObjectMapper objectMapper;
public ExecutionController(IngestionService ingestionService, ObjectMapper objectMapper) {
public ExecutionController(IngestionService ingestionService,
AgentRegistryService registryService,
ObjectMapper objectMapper) {
this.ingestionService = ingestionService;
this.registryService = registryService;
this.objectMapper = objectMapper;
}
@@ -48,26 +51,13 @@ public class ExecutionController {
@Operation(summary = "Ingest route execution data",
description = "Accepts a single RouteExecution or an array of RouteExecutions")
@ApiResponse(responseCode = "202", description = "Data accepted for processing")
@ApiResponse(responseCode = "503", description = "Buffer full, retry later")
public ResponseEntity<Void> ingestExecutions(@RequestBody String body) throws JsonProcessingException {
String agentId = extractAgentId();
String applicationName = resolveApplicationName(agentId);
List<RouteExecution> executions = parsePayload(body);
List<TaggedExecution> tagged = executions.stream()
.map(exec -> new TaggedExecution(agentId, exec))
.toList();
boolean accepted;
if (tagged.size() == 1) {
accepted = ingestionService.acceptExecution(tagged.get(0));
} else {
accepted = ingestionService.acceptExecutions(tagged);
}
if (!accepted) {
log.warn("Execution buffer full, returning 503");
return ResponseEntity.status(HttpStatus.SERVICE_UNAVAILABLE)
.header("Retry-After", "5")
.build();
for (RouteExecution execution : executions) {
ingestionService.ingestExecution(agentId, applicationName, execution);
}
return ResponseEntity.accepted().build();
@@ -78,6 +68,11 @@ public class ExecutionController {
return auth != null ? auth.getName() : "";
}
private String resolveApplicationName(String agentId) {
AgentInfo agent = registryService.findById(agentId);
return agent != null ? agent.application() : "";
}
private List<RouteExecution> parsePayload(String body) throws JsonProcessingException {
String trimmed = body.strip();
if (trimmed.startsWith("[")) {

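The `parsePayload` helper above decides between single-object and array parsing by inspecting the first non-whitespace character. A minimal standalone sketch of that shape check (class and method names are illustrative):

```java
// Sketch of the payload-shape detection in parsePayload: a leading '['
// after strip() selects array deserialization; anything else is treated
// as a single object.
public class PayloadShapeSketch {
    static boolean isArrayPayload(String body) {
        return body.strip().startsWith("[");
    }

    public static void main(String[] args) {
        System.out.println(isArrayPayload("  [ {\"id\":1} ]")); // true
        System.out.println(isArrayPayload("{\"id\":1}"));       // false
    }
}
```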
View File

@@ -0,0 +1,167 @@
package com.cameleer3.server.app.controller;
import com.cameleer3.server.core.admin.AuditCategory;
import com.cameleer3.server.core.admin.AuditResult;
import com.cameleer3.server.core.admin.AuditService;
import com.cameleer3.server.core.rbac.GroupDetail;
import com.cameleer3.server.core.rbac.GroupRepository;
import com.cameleer3.server.core.rbac.GroupSummary;
import com.cameleer3.server.core.rbac.RbacService;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;
import jakarta.servlet.http.HttpServletRequest;
import org.springframework.http.ResponseEntity;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.UUID;
/**
* Admin endpoints for group management.
* Protected by {@code ROLE_ADMIN}.
*/
@RestController
@RequestMapping("/api/v1/admin/groups")
@Tag(name = "Group Admin", description = "Group management (ADMIN only)")
@PreAuthorize("hasRole('ADMIN')")
public class GroupAdminController {
private final GroupRepository groupRepository;
private final RbacService rbacService;
private final AuditService auditService;
public GroupAdminController(GroupRepository groupRepository, RbacService rbacService,
AuditService auditService) {
this.groupRepository = groupRepository;
this.rbacService = rbacService;
this.auditService = auditService;
}
@GetMapping
@Operation(summary = "List all groups with hierarchy and effective roles")
@ApiResponse(responseCode = "200", description = "Group list returned")
public ResponseEntity<List<GroupDetail>> listGroups() {
List<GroupSummary> summaries = groupRepository.findAll();
List<GroupDetail> details = new ArrayList<>();
for (GroupSummary summary : summaries) {
groupRepository.findById(summary.id()).ifPresent(details::add);
}
return ResponseEntity.ok(details);
}
@GetMapping("/{id}")
@Operation(summary = "Get group by ID with effective roles")
@ApiResponse(responseCode = "200", description = "Group found")
@ApiResponse(responseCode = "404", description = "Group not found")
public ResponseEntity<GroupDetail> getGroup(@PathVariable UUID id) {
return groupRepository.findById(id)
.map(ResponseEntity::ok)
.orElse(ResponseEntity.notFound().build());
}
@PostMapping
@Operation(summary = "Create a new group")
@ApiResponse(responseCode = "200", description = "Group created")
public ResponseEntity<Map<String, UUID>> createGroup(@RequestBody CreateGroupRequest request,
HttpServletRequest httpRequest) {
UUID id = groupRepository.create(request.name(), request.parentGroupId());
auditService.log("create_group", AuditCategory.RBAC, id.toString(),
Map.of("name", request.name()), AuditResult.SUCCESS, httpRequest);
return ResponseEntity.ok(Map.of("id", id));
}
@PutMapping("/{id}")
@Operation(summary = "Update group name or parent")
@ApiResponse(responseCode = "200", description = "Group updated")
@ApiResponse(responseCode = "404", description = "Group not found")
@ApiResponse(responseCode = "409", description = "Cycle detected in group hierarchy")
public ResponseEntity<Void> updateGroup(@PathVariable UUID id,
@RequestBody UpdateGroupRequest request,
HttpServletRequest httpRequest) {
Optional<GroupDetail> existing = groupRepository.findById(id);
if (existing.isEmpty()) {
return ResponseEntity.notFound().build();
}
// Cycle detection: walk ancestor chain of proposed parent and check if it includes 'id'
if (request.parentGroupId() != null) {
List<GroupSummary> ancestors = groupRepository.findAncestorChain(request.parentGroupId());
for (GroupSummary ancestor : ancestors) {
if (ancestor.id().equals(id)) {
return ResponseEntity.status(409).build();
}
}
// Also check that the proposed parent itself is not the group being updated
if (request.parentGroupId().equals(id)) {
return ResponseEntity.status(409).build();
}
}
groupRepository.update(id, request.name(), request.parentGroupId());
auditService.log("update_group", AuditCategory.RBAC, id.toString(),
null, AuditResult.SUCCESS, httpRequest);
return ResponseEntity.ok().build();
}
@DeleteMapping("/{id}")
@Operation(summary = "Delete group")
@ApiResponse(responseCode = "204", description = "Group deleted")
@ApiResponse(responseCode = "404", description = "Group not found")
public ResponseEntity<Void> deleteGroup(@PathVariable UUID id,
HttpServletRequest httpRequest) {
if (groupRepository.findById(id).isEmpty()) {
return ResponseEntity.notFound().build();
}
groupRepository.delete(id);
auditService.log("delete_group", AuditCategory.RBAC, id.toString(),
null, AuditResult.SUCCESS, httpRequest);
return ResponseEntity.noContent().build();
}
@PostMapping("/{id}/roles/{roleId}")
@Operation(summary = "Assign a role to a group")
@ApiResponse(responseCode = "200", description = "Role assigned to group")
@ApiResponse(responseCode = "404", description = "Group not found")
public ResponseEntity<Void> assignRoleToGroup(@PathVariable UUID id,
@PathVariable UUID roleId,
HttpServletRequest httpRequest) {
if (groupRepository.findById(id).isEmpty()) {
return ResponseEntity.notFound().build();
}
groupRepository.addRole(id, roleId);
auditService.log("assign_role_to_group", AuditCategory.RBAC, id.toString(),
Map.of("roleId", roleId), AuditResult.SUCCESS, httpRequest);
return ResponseEntity.ok().build();
}
@DeleteMapping("/{id}/roles/{roleId}")
@Operation(summary = "Remove a role from a group")
@ApiResponse(responseCode = "204", description = "Role removed from group")
@ApiResponse(responseCode = "404", description = "Group not found")
public ResponseEntity<Void> removeRoleFromGroup(@PathVariable UUID id,
@PathVariable UUID roleId,
HttpServletRequest httpRequest) {
if (groupRepository.findById(id).isEmpty()) {
return ResponseEntity.notFound().build();
}
groupRepository.removeRole(id, roleId);
auditService.log("remove_role_from_group", AuditCategory.RBAC, id.toString(),
Map.of("roleId", roleId), AuditResult.SUCCESS, httpRequest);
return ResponseEntity.noContent().build();
}
public record CreateGroupRequest(String name, UUID parentGroupId) {}
public record UpdateGroupRequest(String name, UUID parentGroupId) {}
}
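The cycle check in `updateGroup` walks the ancestor chain of the proposed parent and rejects the re-parenting if the group itself appears in that chain. A minimal standalone sketch of the same walk, where the `Map`-backed `parentOf` lookup is a hypothetical stand-in for `GroupRepository.findAncestorChain`:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class GroupCycleCheck {

    /**
     * Returns true if making {@code proposedParent} the parent of {@code group}
     * would create a cycle: walk upward from the proposed parent and see
     * whether {@code group} is already one of its ancestors.
     */
    public static boolean wouldCreateCycle(Map<UUID, UUID> parentOf, UUID group, UUID proposedParent) {
        UUID current = proposedParent;
        while (current != null) {
            if (current.equals(group)) {
                return true; // group is an ancestor of its proposed parent
            }
            current = parentOf.get(current); // null when we reach a root
        }
        return false;
    }

    public static void main(String[] args) {
        UUID a = UUID.randomUUID(), b = UUID.randomUUID(), c = UUID.randomUUID();
        Map<UUID, UUID> parentOf = new HashMap<>();
        parentOf.put(b, a); // b's parent is a
        parentOf.put(c, b); // c's parent is b
        // Re-parenting a under c would close the loop a -> b -> c -> a
        System.out.println(wouldCreateCycle(parentOf, a, c)); // true
        System.out.println(wouldCreateCycle(parentOf, c, a)); // false
    }
}
```

Note the controller additionally compares `request.parentGroupId()` against `id` directly, which covers the self-parent case even if the repository's ancestor chain excludes the starting node.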


@@ -23,7 +23,7 @@ import java.util.List;
* Ingestion endpoint for agent metrics.
* <p>
* Accepts an array of {@link MetricsSnapshot}. Data is buffered
* and flushed to ClickHouse by the flush scheduler.
* and flushed to PostgreSQL by the flush scheduler.
*/
@RestController
@RequestMapping("/api/v1/data")


@@ -5,8 +5,12 @@ import com.cameleer3.server.app.dto.OidcAdminConfigRequest;
import com.cameleer3.server.app.dto.OidcAdminConfigResponse;
import com.cameleer3.server.app.dto.OidcTestResult;
import com.cameleer3.server.app.security.OidcTokenExchanger;
import com.cameleer3.server.core.admin.AuditCategory;
import com.cameleer3.server.core.admin.AuditResult;
import com.cameleer3.server.core.admin.AuditService;
import com.cameleer3.server.core.security.OidcConfig;
import com.cameleer3.server.core.security.OidcConfigRepository;
import jakarta.servlet.http.HttpServletRequest;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.media.Content;
import io.swagger.v3.oas.annotations.media.Schema;
@@ -16,6 +20,7 @@ import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
@@ -26,6 +31,7 @@ import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.server.ResponseStatusException;
import java.util.List;
import java.util.Map;
import java.util.Optional;
/**
@@ -35,17 +41,21 @@ import java.util.Optional;
@RestController
@RequestMapping("/api/v1/admin/oidc")
@Tag(name = "OIDC Config Admin", description = "OIDC provider configuration (ADMIN only)")
@PreAuthorize("hasRole('ADMIN')")
public class OidcConfigAdminController {
private static final Logger log = LoggerFactory.getLogger(OidcConfigAdminController.class);
private final OidcConfigRepository configRepository;
private final OidcTokenExchanger tokenExchanger;
private final AuditService auditService;
public OidcConfigAdminController(OidcConfigRepository configRepository,
OidcTokenExchanger tokenExchanger) {
OidcTokenExchanger tokenExchanger,
AuditService auditService) {
this.configRepository = configRepository;
this.tokenExchanger = tokenExchanger;
this.auditService = auditService;
}
@GetMapping
@@ -64,7 +74,8 @@ public class OidcConfigAdminController {
@ApiResponse(responseCode = "200", description = "Configuration saved")
@ApiResponse(responseCode = "400", description = "Invalid configuration",
content = @Content(schema = @Schema(implementation = ErrorResponse.class)))
public ResponseEntity<OidcAdminConfigResponse> saveConfig(@RequestBody OidcAdminConfigRequest request) {
public ResponseEntity<OidcAdminConfigResponse> saveConfig(@RequestBody OidcAdminConfigRequest request,
HttpServletRequest httpRequest) {
// Resolve client_secret: if masked or empty, preserve existing
String clientSecret = request.clientSecret();
if (clientSecret == null || clientSecret.isBlank() || clientSecret.equals("********")) {
@@ -95,6 +106,7 @@ public class OidcConfigAdminController {
configRepository.save(config);
tokenExchanger.invalidateCache();
auditService.log("update_oidc", AuditCategory.CONFIG, "oidc", Map.of(), AuditResult.SUCCESS, httpRequest);
log.info("OIDC configuration updated: enabled={}, issuer={}", config.enabled(), config.issuerUri());
return ResponseEntity.ok(OidcAdminConfigResponse.from(config));
}
@@ -104,7 +116,7 @@ public class OidcConfigAdminController {
@ApiResponse(responseCode = "200", description = "Provider reachable")
@ApiResponse(responseCode = "400", description = "Provider unreachable or misconfigured",
content = @Content(schema = @Schema(implementation = ErrorResponse.class)))
public ResponseEntity<OidcTestResult> testConnection() {
public ResponseEntity<OidcTestResult> testConnection(HttpServletRequest httpRequest) {
Optional<OidcConfig> config = configRepository.find();
if (config.isEmpty() || !config.get().enabled()) {
throw new ResponseStatusException(HttpStatus.BAD_REQUEST,
@@ -114,6 +126,7 @@ public class OidcConfigAdminController {
try {
tokenExchanger.invalidateCache();
String authEndpoint = tokenExchanger.getAuthorizationEndpoint();
auditService.log("test_oidc", AuditCategory.CONFIG, "oidc", null, AuditResult.SUCCESS, httpRequest);
return ResponseEntity.ok(new OidcTestResult("ok", authEndpoint));
} catch (Exception e) {
log.warn("OIDC connectivity test failed: {}", e.getMessage());
@@ -125,9 +138,10 @@ public class OidcConfigAdminController {
@DeleteMapping
@Operation(summary = "Delete OIDC configuration")
@ApiResponse(responseCode = "204", description = "Configuration deleted")
public ResponseEntity<Void> deleteConfig() {
public ResponseEntity<Void> deleteConfig(HttpServletRequest httpRequest) {
configRepository.delete();
tokenExchanger.invalidateCache();
auditService.log("delete_oidc", AuditCategory.CONFIG, "oidc", null, AuditResult.SUCCESS, httpRequest);
log.info("OIDC configuration deleted");
return ResponseEntity.noContent().build();
}
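The secret-resolution rule in `saveConfig` (preserve the stored `client_secret` when the request carries the mask or a blank) can be isolated as a small helper. This is an illustrative re-statement, not the controller's actual code; the `stored` parameter stands in for whatever `configRepository` returns:

```java
import java.util.Optional;

public class SecretResolver {
    private static final String MASK = "********";

    /**
     * Returns the secret to persist: the incoming value if it is a real
     * secret, otherwise the previously stored one (or "" if none exists).
     * This keeps the masked placeholder shown in the admin UI from
     * overwriting the real secret on save.
     */
    public static String resolve(String incoming, Optional<String> stored) {
        if (incoming == null || incoming.isBlank() || MASK.equals(incoming)) {
            return stored.orElse("");
        }
        return incoming;
    }
}
```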


@@ -0,0 +1,257 @@
package com.cameleer3.server.app.controller;
import com.cameleer3.server.app.dto.IndexInfoResponse;
import com.cameleer3.server.app.dto.IndicesPageResponse;
import com.cameleer3.server.app.dto.OpenSearchStatusResponse;
import com.cameleer3.server.app.dto.PerformanceResponse;
import com.cameleer3.server.app.dto.PipelineStatsResponse;
import com.cameleer3.server.core.admin.AuditCategory;
import com.cameleer3.server.core.admin.AuditResult;
import com.cameleer3.server.core.admin.AuditService;
import com.cameleer3.server.core.indexing.SearchIndexerStats;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.tags.Tag;
import jakarta.servlet.http.HttpServletRequest;
import org.opensearch.client.Request;
import org.opensearch.client.Response;
import org.opensearch.client.RestClient;
import org.opensearch.client.opensearch.OpenSearchClient;
import org.opensearch.client.opensearch.cluster.HealthResponse;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.server.ResponseStatusException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
@RestController
@RequestMapping("/api/v1/admin/opensearch")
@PreAuthorize("hasRole('ADMIN')")
@Tag(name = "OpenSearch Admin", description = "OpenSearch monitoring and management (ADMIN only)")
public class OpenSearchAdminController {
private final OpenSearchClient client;
private final RestClient restClient;
private final SearchIndexerStats indexerStats;
private final AuditService auditService;
private final ObjectMapper objectMapper;
private final String opensearchUrl;
private final String indexPrefix;
public OpenSearchAdminController(OpenSearchClient client, RestClient restClient,
SearchIndexerStats indexerStats, AuditService auditService,
ObjectMapper objectMapper,
@Value("${opensearch.url:http://localhost:9200}") String opensearchUrl,
@Value("${opensearch.index-prefix:executions-}") String indexPrefix) {
this.client = client;
this.restClient = restClient;
this.indexerStats = indexerStats;
this.auditService = auditService;
this.objectMapper = objectMapper;
this.opensearchUrl = opensearchUrl;
this.indexPrefix = indexPrefix;
}
@GetMapping("/status")
@Operation(summary = "Get OpenSearch cluster status and version")
public ResponseEntity<OpenSearchStatusResponse> getStatus() {
try {
HealthResponse health = client.cluster().health();
String version = client.info().version().number();
return ResponseEntity.ok(new OpenSearchStatusResponse(
true,
health.status().name(),
version,
health.numberOfNodes(),
opensearchUrl));
} catch (Exception e) {
return ResponseEntity.ok(new OpenSearchStatusResponse(
false, "UNREACHABLE", null, 0, opensearchUrl));
}
}
@GetMapping("/pipeline")
@Operation(summary = "Get indexing pipeline statistics")
public ResponseEntity<PipelineStatsResponse> getPipeline() {
return ResponseEntity.ok(new PipelineStatsResponse(
indexerStats.getQueueDepth(),
indexerStats.getMaxQueueSize(),
indexerStats.getFailedCount(),
indexerStats.getIndexedCount(),
indexerStats.getDebounceMs(),
indexerStats.getIndexingRate(),
indexerStats.getLastIndexedAt()));
}
@GetMapping("/indices")
@Operation(summary = "Get OpenSearch indices with pagination")
public ResponseEntity<IndicesPageResponse> getIndices(
@RequestParam(defaultValue = "0") int page,
@RequestParam(defaultValue = "20") int size,
@RequestParam(defaultValue = "") String search) {
try {
Response response = restClient.performRequest(
new Request("GET", "/_cat/indices?format=json&h=index,health,docs.count,store.size,pri,rep&bytes=b"));
JsonNode indices;
try (InputStream is = response.getEntity().getContent()) {
indices = objectMapper.readTree(is);
}
List<IndexInfoResponse> allIndices = new ArrayList<>();
for (JsonNode idx : indices) {
String name = idx.path("index").asText("");
if (!name.startsWith(indexPrefix)) {
continue;
}
if (!search.isEmpty() && !name.contains(search)) {
continue;
}
allIndices.add(new IndexInfoResponse(
name,
parseLong(idx.path("docs.count").asText("0")),
humanSize(parseLong(idx.path("store.size").asText("0"))),
parseLong(idx.path("store.size").asText("0")),
idx.path("health").asText("unknown"),
parseInt(idx.path("pri").asText("0")),
parseInt(idx.path("rep").asText("0"))));
}
allIndices.sort(Comparator.comparing(IndexInfoResponse::name));
long totalDocs = allIndices.stream().mapToLong(IndexInfoResponse::docCount).sum();
long totalBytes = allIndices.stream().mapToLong(IndexInfoResponse::sizeBytes).sum();
int totalIndices = allIndices.size();
int totalPages = Math.max(1, (int) Math.ceil((double) totalIndices / size));
int fromIndex = Math.min(page * size, totalIndices);
int toIndex = Math.min(fromIndex + size, totalIndices);
List<IndexInfoResponse> pageItems = allIndices.subList(fromIndex, toIndex);
return ResponseEntity.ok(new IndicesPageResponse(
pageItems, totalIndices, totalDocs,
humanSize(totalBytes), page, size, totalPages));
} catch (Exception e) {
return ResponseEntity.ok(new IndicesPageResponse(
List.of(), 0, 0, "0 B", page, size, 0));
}
}
@DeleteMapping("/indices/{name}")
@Operation(summary = "Delete an OpenSearch index")
public ResponseEntity<Void> deleteIndex(@PathVariable String name, HttpServletRequest request) {
try {
if (!name.startsWith(indexPrefix)) {
throw new ResponseStatusException(HttpStatus.FORBIDDEN, "Cannot delete index outside application scope");
}
boolean exists = client.indices().exists(r -> r.index(name)).value();
if (!exists) {
throw new ResponseStatusException(HttpStatus.NOT_FOUND, "Index not found: " + name);
}
client.indices().delete(r -> r.index(name));
auditService.log("delete_index", AuditCategory.INFRA, name, null, AuditResult.SUCCESS, request);
return ResponseEntity.ok().build();
} catch (ResponseStatusException e) {
throw e;
} catch (Exception e) {
throw new ResponseStatusException(HttpStatus.INTERNAL_SERVER_ERROR, "Failed to delete index: " + e.getMessage());
}
}
@GetMapping("/performance")
@Operation(summary = "Get OpenSearch performance metrics")
public ResponseEntity<PerformanceResponse> getPerformance() {
try {
Response response = restClient.performRequest(
new Request("GET", "/_nodes/stats/jvm,indices"));
JsonNode root;
try (InputStream is = response.getEntity().getContent()) {
root = objectMapper.readTree(is);
}
JsonNode nodes = root.path("nodes");
long heapUsed = 0, heapMax = 0;
long queryCacheHits = 0, queryCacheMisses = 0;
long requestCacheHits = 0, requestCacheMisses = 0;
long searchQueryTotal = 0, searchQueryTimeMs = 0;
long indexTotal = 0, indexTimeMs = 0;
var it = nodes.fields();
while (it.hasNext()) {
var entry = it.next();
JsonNode node = entry.getValue();
JsonNode jvm = node.path("jvm").path("mem");
heapUsed += jvm.path("heap_used_in_bytes").asLong(0);
heapMax += jvm.path("heap_max_in_bytes").asLong(0);
JsonNode indicesNode = node.path("indices");
JsonNode queryCache = indicesNode.path("query_cache");
queryCacheHits += queryCache.path("hit_count").asLong(0);
queryCacheMisses += queryCache.path("miss_count").asLong(0);
JsonNode requestCache = indicesNode.path("request_cache");
requestCacheHits += requestCache.path("hit_count").asLong(0);
requestCacheMisses += requestCache.path("miss_count").asLong(0);
JsonNode searchNode = indicesNode.path("search");
searchQueryTotal += searchNode.path("query_total").asLong(0);
searchQueryTimeMs += searchNode.path("query_time_in_millis").asLong(0);
JsonNode indexing = indicesNode.path("indexing");
indexTotal += indexing.path("index_total").asLong(0);
indexTimeMs += indexing.path("index_time_in_millis").asLong(0);
}
double queryCacheHitRate = (queryCacheHits + queryCacheMisses) > 0
? (double) queryCacheHits / (queryCacheHits + queryCacheMisses) : 0.0;
double requestCacheHitRate = (requestCacheHits + requestCacheMisses) > 0
? (double) requestCacheHits / (requestCacheHits + requestCacheMisses) : 0.0;
double searchLatency = searchQueryTotal > 0
? (double) searchQueryTimeMs / searchQueryTotal : 0.0;
double indexingLatency = indexTotal > 0
? (double) indexTimeMs / indexTotal : 0.0;
return ResponseEntity.ok(new PerformanceResponse(
queryCacheHitRate, requestCacheHitRate,
searchLatency, indexingLatency,
heapUsed, heapMax));
} catch (Exception e) {
return ResponseEntity.ok(new PerformanceResponse(0, 0, 0, 0, 0, 0));
}
}
private static long parseLong(String s) {
try {
return Long.parseLong(s);
} catch (NumberFormatException e) {
return 0;
}
}
private static int parseInt(String s) {
try {
return Integer.parseInt(s);
} catch (NumberFormatException e) {
return 0;
}
}
private static String humanSize(long bytes) {
if (bytes < 1024) return bytes + " B";
if (bytes < 1024 * 1024) return String.format("%.1f KB", bytes / 1024.0);
if (bytes < 1024 * 1024 * 1024) return String.format("%.1f MB", bytes / (1024.0 * 1024));
return String.format("%.1f GB", bytes / (1024.0 * 1024 * 1024));
}
}
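The performance endpoint sums raw counters across nodes and only then derives ratios, so cache hit rates and latencies are traffic-weighted cluster-wide averages. A compact re-statement of the two helpers for illustration; unlike the controller's `humanSize`, this sketch pins `Locale.ROOT` so the decimal separator is stable regardless of the JVM's default locale:

```java
import java.util.Locale;

public class CacheMath {

    /** Hit rate as hits / (hits + misses); 0.0 when there was no traffic. */
    public static double hitRate(long hits, long misses) {
        long total = hits + misses;
        return total > 0 ? (double) hits / total : 0.0;
    }

    /** Same thresholds as the controller's humanSize helper (binary units). */
    public static String humanSize(long bytes) {
        if (bytes < 1024) return bytes + " B";
        if (bytes < 1024L * 1024) return String.format(Locale.ROOT, "%.1f KB", bytes / 1024.0);
        if (bytes < 1024L * 1024 * 1024) return String.format(Locale.ROOT, "%.1f MB", bytes / (1024.0 * 1024));
        return String.format(Locale.ROOT, "%.1f GB", bytes / (1024.0 * 1024 * 1024));
    }
}
```

The guarded divisions matter: a freshly started cluster reports zero queries, and dividing by `query_total` without the check would yield NaN in the JSON response.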


@@ -0,0 +1,36 @@
package com.cameleer3.server.app.controller;
import com.cameleer3.server.core.rbac.RbacService;
import com.cameleer3.server.core.rbac.RbacStats;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.springframework.http.ResponseEntity;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
/**
* Admin endpoint for RBAC statistics.
* Protected by {@code ROLE_ADMIN}.
*/
@RestController
@RequestMapping("/api/v1/admin/rbac")
@Tag(name = "RBAC Stats", description = "RBAC statistics (ADMIN only)")
@PreAuthorize("hasRole('ADMIN')")
public class RbacStatsController {
private final RbacService rbacService;
public RbacStatsController(RbacService rbacService) {
this.rbacService = rbacService;
}
@GetMapping("/stats")
@Operation(summary = "Get RBAC statistics for the dashboard")
@ApiResponse(responseCode = "200", description = "RBAC stats returned")
public ResponseEntity<RbacStats> getStats() {
return ResponseEntity.ok(rbacService.getStats());
}
}


@@ -0,0 +1,125 @@
package com.cameleer3.server.app.controller;
import com.cameleer3.server.core.admin.AuditCategory;
import com.cameleer3.server.core.admin.AuditResult;
import com.cameleer3.server.core.admin.AuditService;
import com.cameleer3.server.core.rbac.RbacService;
import com.cameleer3.server.core.rbac.RoleDetail;
import com.cameleer3.server.core.rbac.RoleRepository;
import com.cameleer3.server.core.rbac.SystemRole;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;
import jakarta.servlet.http.HttpServletRequest;
import org.springframework.http.ResponseEntity;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import java.util.List;
import java.util.Map;
import java.util.UUID;
/**
* Admin endpoints for role management.
* Protected by {@code ROLE_ADMIN}.
*/
@RestController
@RequestMapping("/api/v1/admin/roles")
@Tag(name = "Role Admin", description = "Role management (ADMIN only)")
@PreAuthorize("hasRole('ADMIN')")
public class RoleAdminController {
private final RoleRepository roleRepository;
private final RbacService rbacService;
private final AuditService auditService;
public RoleAdminController(RoleRepository roleRepository, RbacService rbacService,
AuditService auditService) {
this.roleRepository = roleRepository;
this.rbacService = rbacService;
this.auditService = auditService;
}
@GetMapping
@Operation(summary = "List all roles (system and custom)")
@ApiResponse(responseCode = "200", description = "Role list returned")
public ResponseEntity<List<RoleDetail>> listRoles() {
return ResponseEntity.ok(roleRepository.findAll());
}
@GetMapping("/{id}")
@Operation(summary = "Get role by ID with effective principals")
@ApiResponse(responseCode = "200", description = "Role found")
@ApiResponse(responseCode = "404", description = "Role not found")
public ResponseEntity<RoleDetail> getRole(@PathVariable UUID id) {
return roleRepository.findById(id)
.map(ResponseEntity::ok)
.orElse(ResponseEntity.notFound().build());
}
@PostMapping
@Operation(summary = "Create a custom role")
@ApiResponse(responseCode = "200", description = "Role created")
public ResponseEntity<Map<String, UUID>> createRole(@RequestBody CreateRoleRequest request,
HttpServletRequest httpRequest) {
String desc = request.description() != null ? request.description() : "";
String sc = request.scope() != null ? request.scope() : "custom";
UUID id = roleRepository.create(request.name(), desc, sc);
auditService.log("create_role", AuditCategory.RBAC, id.toString(),
Map.of("name", request.name()), AuditResult.SUCCESS, httpRequest);
return ResponseEntity.ok(Map.of("id", id));
}
@PutMapping("/{id}")
@Operation(summary = "Update a custom role")
@ApiResponse(responseCode = "200", description = "Role updated")
@ApiResponse(responseCode = "403", description = "Cannot modify system role")
@ApiResponse(responseCode = "404", description = "Role not found")
public ResponseEntity<Void> updateRole(@PathVariable UUID id,
@RequestBody UpdateRoleRequest request,
HttpServletRequest httpRequest) {
if (SystemRole.isSystem(id)) {
auditService.log("update_role", AuditCategory.RBAC, id.toString(),
Map.of("reason", "system_role_protected"), AuditResult.FAILURE, httpRequest);
return ResponseEntity.status(403).build();
}
if (roleRepository.findById(id).isEmpty()) {
return ResponseEntity.notFound().build();
}
roleRepository.update(id, request.name(), request.description(), request.scope());
auditService.log("update_role", AuditCategory.RBAC, id.toString(),
null, AuditResult.SUCCESS, httpRequest);
return ResponseEntity.ok().build();
}
@DeleteMapping("/{id}")
@Operation(summary = "Delete a custom role")
@ApiResponse(responseCode = "204", description = "Role deleted")
@ApiResponse(responseCode = "403", description = "Cannot delete system role")
@ApiResponse(responseCode = "404", description = "Role not found")
public ResponseEntity<Void> deleteRole(@PathVariable UUID id,
HttpServletRequest httpRequest) {
if (SystemRole.isSystem(id)) {
auditService.log("delete_role", AuditCategory.RBAC, id.toString(),
Map.of("reason", "system_role_protected"), AuditResult.FAILURE, httpRequest);
return ResponseEntity.status(403).build();
}
if (roleRepository.findById(id).isEmpty()) {
return ResponseEntity.notFound().build();
}
roleRepository.delete(id);
auditService.log("delete_role", AuditCategory.RBAC, id.toString(),
null, AuditResult.SUCCESS, httpRequest);
return ResponseEntity.noContent().build();
}
public record CreateRoleRequest(String name, String description, String scope) {}
public record UpdateRoleRequest(String name, String description, String scope) {}
}


@@ -0,0 +1,151 @@
package com.cameleer3.server.app.controller;
import com.cameleer3.server.app.dto.AgentSummary;
import com.cameleer3.server.app.dto.AppCatalogEntry;
import com.cameleer3.server.app.dto.RouteSummary;
import com.cameleer3.server.core.agent.AgentInfo;
import com.cameleer3.server.core.agent.AgentRegistryService;
import com.cameleer3.server.core.agent.AgentState;
import com.cameleer3.server.core.storage.StatsStore;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.springframework.http.ResponseEntity;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import java.sql.Timestamp;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;
@RestController
@RequestMapping("/api/v1/routes")
@Tag(name = "Route Catalog", description = "Route catalog and discovery")
public class RouteCatalogController {
private final AgentRegistryService registryService;
private final JdbcTemplate jdbc;
public RouteCatalogController(AgentRegistryService registryService, JdbcTemplate jdbc) {
this.registryService = registryService;
this.jdbc = jdbc;
}
@GetMapping("/catalog")
@Operation(summary = "Get route catalog",
description = "Returns all applications with their routes, agents, and health status")
@ApiResponse(responseCode = "200", description = "Catalog returned")
public ResponseEntity<List<AppCatalogEntry>> getCatalog() {
List<AgentInfo> allAgents = registryService.findAll();
// Group agents by application name
Map<String, List<AgentInfo>> agentsByApp = allAgents.stream()
.collect(Collectors.groupingBy(AgentInfo::application, LinkedHashMap::new, Collectors.toList()));
// Collect all distinct routes per app
Map<String, Set<String>> routesByApp = new LinkedHashMap<>();
for (var entry : agentsByApp.entrySet()) {
Set<String> routes = new LinkedHashSet<>();
for (AgentInfo agent : entry.getValue()) {
if (agent.routeIds() != null) {
routes.addAll(agent.routeIds());
}
}
routesByApp.put(entry.getKey(), routes);
}
// Query route-level stats for the last 24 hours
Instant now = Instant.now();
Instant from24h = now.minus(24, ChronoUnit.HOURS);
Instant from1m = now.minus(1, ChronoUnit.MINUTES);
// Route exchange counts from continuous aggregate
Map<String, Long> routeExchangeCounts = new LinkedHashMap<>();
Map<String, Instant> routeLastSeen = new LinkedHashMap<>();
try {
jdbc.query(
"SELECT application_name, route_id, SUM(total_count) AS cnt, MAX(bucket) AS last_seen " +
"FROM stats_1m_route WHERE bucket >= ? AND bucket < ? " +
"GROUP BY application_name, route_id",
rs -> {
String key = rs.getString("application_name") + "/" + rs.getString("route_id");
routeExchangeCounts.put(key, rs.getLong("cnt"));
Timestamp ts = rs.getTimestamp("last_seen");
if (ts != null) routeLastSeen.put(key, ts.toInstant());
},
Timestamp.from(from24h), Timestamp.from(now));
} catch (Exception e) {
// Continuous aggregate may not exist yet
}
// Per-agent TPS from the last minute
Map<String, Double> agentTps = new LinkedHashMap<>();
try {
jdbc.query(
"SELECT application_name, SUM(total_count) AS cnt " +
"FROM stats_1m_route WHERE bucket >= ? AND bucket < ? " +
"GROUP BY application_name",
rs -> {
// NOTE: this row callback is currently a no-op; agentTps is never
// populated, so AgentSummary below reports per-agent TPS as 0.0
},
Timestamp.from(from1m), Timestamp.from(now));
} catch (Exception e) {
// Continuous aggregate may not exist yet
}
// Build catalog entries
List<AppCatalogEntry> catalog = new ArrayList<>();
for (var entry : agentsByApp.entrySet()) {
String appId = entry.getKey();
List<AgentInfo> agents = entry.getValue();
// Routes
Set<String> routeIds = routesByApp.getOrDefault(appId, Set.of());
List<RouteSummary> routeSummaries = routeIds.stream()
.map(routeId -> {
String key = appId + "/" + routeId;
long count = routeExchangeCounts.getOrDefault(key, 0L);
Instant lastSeen = routeLastSeen.get(key);
return new RouteSummary(routeId, count, lastSeen);
})
.toList();
// Agent summaries
List<AgentSummary> agentSummaries = agents.stream()
.map(a -> new AgentSummary(a.id(), a.name(), a.state().name().toLowerCase(), 0.0))
.toList();
// Health = worst state among agents
String health = computeWorstHealth(agents);
// Total exchange count for the app
long totalExchanges = routeSummaries.stream().mapToLong(RouteSummary::exchangeCount).sum();
catalog.add(new AppCatalogEntry(appId, routeSummaries, agentSummaries,
agents.size(), health, totalExchanges));
}
return ResponseEntity.ok(catalog);
}
private String computeWorstHealth(List<AgentInfo> agents) {
boolean hasDead = false;
boolean hasStale = false;
for (AgentInfo a : agents) {
if (a.state() == AgentState.DEAD) hasDead = true;
if (a.state() == AgentState.STALE) hasStale = true;
}
if (hasDead) return "dead";
if (hasStale) return "stale";
return "live";
}
}
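The `computeWorstHealth` rollup above applies a fixed severity ordering: dead outranks stale, which outranks live. A standalone sketch of the same precedence, using plain state strings as a stand-in since `AgentState` is defined elsewhere in the codebase:

```java
import java.util.List;

// Minimal reproduction of the health rollup: the worst state present wins.
// Severity order: "dead" > "stale" > "live".
public class HealthRollup {
    public static String worstHealth(List<String> states) {
        boolean hasDead = states.contains("dead");
        boolean hasStale = states.contains("stale");
        if (hasDead) return "dead";
        if (hasStale) return "stale";
        return "live";
    }

    public static void main(String[] args) {
        System.out.println(worstHealth(List.of("live", "stale", "live"))); // stale
        System.out.println(worstHealth(List.of("live", "dead", "stale"))); // dead
        System.out.println(worstHealth(List.of()));                        // live
    }
}
```

Note the fall-through default: an application with no agents in a bad state (including one with no agents at all) reads as "live".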

View File

@@ -0,0 +1,164 @@
package com.cameleer3.server.app.controller;
import com.cameleer3.server.app.dto.ProcessorMetrics;
import com.cameleer3.server.app.dto.RouteMetrics;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.springframework.http.ResponseEntity;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import java.sql.Timestamp;
import java.time.Duration;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.ArrayList;
import java.util.List;
@RestController
@RequestMapping("/api/v1/routes")
@Tag(name = "Route Metrics", description = "Route performance metrics")
public class RouteMetricsController {
private final JdbcTemplate jdbc;
public RouteMetricsController(JdbcTemplate jdbc) {
this.jdbc = jdbc;
}
@GetMapping("/metrics")
@Operation(summary = "Get route metrics",
description = "Returns aggregated performance metrics per route for the given time window")
@ApiResponse(responseCode = "200", description = "Metrics returned")
public ResponseEntity<List<RouteMetrics>> getMetrics(
@RequestParam(required = false) String from,
@RequestParam(required = false) String to,
@RequestParam(required = false) String appId) {
Instant toInstant = to != null ? Instant.parse(to) : Instant.now();
Instant fromInstant = from != null ? Instant.parse(from) : toInstant.minus(24, ChronoUnit.HOURS);
long windowSeconds = Duration.between(fromInstant, toInstant).toSeconds();
var sql = new StringBuilder(
"SELECT application_name, route_id, " +
"SUM(total_count) AS total, " +
"SUM(failed_count) AS failed, " +
"CASE WHEN SUM(total_count) > 0 THEN SUM(duration_sum) / SUM(total_count) ELSE 0 END AS avg_dur, " +
"COALESCE(MAX(p99_duration), 0) AS p99_dur " +
"FROM stats_1m_route WHERE bucket >= ? AND bucket < ?");
var params = new ArrayList<Object>();
params.add(Timestamp.from(fromInstant));
params.add(Timestamp.from(toInstant));
if (appId != null) {
sql.append(" AND application_name = ?");
params.add(appId);
}
sql.append(" GROUP BY application_name, route_id ORDER BY application_name, route_id");
List<RouteMetrics> metrics = jdbc.query(sql.toString(), (rs, rowNum) -> {
String applicationName = rs.getString("application_name");
String routeId = rs.getString("route_id");
long total = rs.getLong("total");
long failed = rs.getLong("failed");
double avgDur = rs.getDouble("avg_dur");
double p99Dur = rs.getDouble("p99_dur");
double successRate = total > 0 ? (double) (total - failed) / total : 1.0;
double errorRate = total > 0 ? (double) failed / total : 0.0;
double tps = windowSeconds > 0 ? (double) total / windowSeconds : 0.0;
return new RouteMetrics(routeId, applicationName, total, successRate,
avgDur, p99Dur, errorRate, tps, List.of());
}, params.toArray());
// Fetch sparklines (12 buckets over the time window)
if (!metrics.isEmpty()) {
int sparkBuckets = 12;
long bucketSeconds = Math.max(windowSeconds / sparkBuckets, 60);
for (int i = 0; i < metrics.size(); i++) {
RouteMetrics m = metrics.get(i);
try {
List<Double> sparkline = jdbc.query(
"SELECT time_bucket(? * INTERVAL '1 second', bucket) AS period, " +
"COALESCE(SUM(total_count), 0) AS cnt " +
"FROM stats_1m_route WHERE bucket >= ? AND bucket < ? " +
"AND application_name = ? AND route_id = ? " +
"GROUP BY period ORDER BY period",
(rs, rowNum) -> rs.getDouble("cnt"),
bucketSeconds, Timestamp.from(fromInstant), Timestamp.from(toInstant),
m.appId(), m.routeId());
metrics.set(i, new RouteMetrics(m.routeId(), m.appId(), m.exchangeCount(),
m.successRate(), m.avgDurationMs(), m.p99DurationMs(),
m.errorRate(), m.throughputPerSec(), sparkline));
} catch (Exception e) {
// Leave sparkline empty on error
}
}
}
return ResponseEntity.ok(metrics);
}
@GetMapping("/metrics/processors")
@Operation(summary = "Get processor metrics",
description = "Returns aggregated performance metrics per processor for the given route and time window")
@ApiResponse(responseCode = "200", description = "Metrics returned")
public ResponseEntity<List<ProcessorMetrics>> getProcessorMetrics(
@RequestParam String routeId,
@RequestParam(required = false) String appId,
@RequestParam(required = false) Instant from,
@RequestParam(required = false) Instant to) {
Instant toInstant = to != null ? to : Instant.now();
Instant fromInstant = from != null ? from : toInstant.minus(24, ChronoUnit.HOURS);
var sql = new StringBuilder(
"SELECT processor_id, processor_type, route_id, application_name, " +
"SUM(total_count) AS total_count, " +
"SUM(failed_count) AS failed_count, " +
"CASE WHEN SUM(total_count) > 0 THEN SUM(duration_sum)::double precision / SUM(total_count) ELSE 0 END AS avg_duration_ms, " +
"MAX(p99_duration) AS p99_duration_ms " +
"FROM stats_1m_processor_detail " +
"WHERE bucket >= ? AND bucket < ? AND route_id = ?");
var params = new ArrayList<Object>();
params.add(Timestamp.from(fromInstant));
params.add(Timestamp.from(toInstant));
params.add(routeId);
if (appId != null) {
sql.append(" AND application_name = ?");
params.add(appId);
}
sql.append(" GROUP BY processor_id, processor_type, route_id, application_name");
sql.append(" ORDER BY SUM(total_count) DESC");
List<ProcessorMetrics> metrics = jdbc.query(sql.toString(), (rs, rowNum) -> {
long totalCount = rs.getLong("total_count");
long failedCount = rs.getLong("failed_count");
double errorRate = totalCount > 0 ? (double) failedCount / totalCount : 0.0;
return new ProcessorMetrics(
rs.getString("processor_id"),
rs.getString("processor_type"),
rs.getString("route_id"),
rs.getString("application_name"),
totalCount,
failedCount,
rs.getDouble("avg_duration_ms"),
rs.getDouble("p99_duration_ms"),
errorRate);
}, params.toArray());
return ResponseEntity.ok(metrics);
}
}
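The sparkline loop above splits the query window into 12 buckets but never drops below the one-minute resolution of `stats_1m_route`. That sizing rule in isolation (the 86400-second input below is just the controller's default 24-hour window, not anything new):

```java
// Sparkline bucket sizing: window / 12, floored at 60s to match the
// one-minute granularity of the continuous aggregate.
public class SparklineBuckets {
    static long bucketSeconds(long windowSeconds) {
        int sparkBuckets = 12;
        return Math.max(windowSeconds / sparkBuckets, 60);
    }

    public static void main(String[] args) {
        System.out.println(bucketSeconds(86_400)); // default 24h window -> 7200s buckets
        System.out.println(bucketSeconds(300));    // 5-minute window floors at 60s
    }
}
```

The floor matters: below a 12-minute window the SQL `time_bucket` would otherwise be asked for buckets finer than the source data, producing fewer than 12 points either way.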

View File

@@ -51,13 +51,13 @@ public class SearchController {
@RequestParam(required = false) String routeId,
@RequestParam(required = false) String agentId,
@RequestParam(required = false) String processorType,
@RequestParam(required = false) String group,
@RequestParam(required = false) String application,
@RequestParam(defaultValue = "0") int offset,
@RequestParam(defaultValue = "50") int limit,
@RequestParam(required = false) String sortField,
@RequestParam(required = false) String sortDir) {
List<String> agentIds = resolveGroupToAgentIds(group);
List<String> agentIds = resolveApplicationToAgentIds(application);
SearchRequest request = new SearchRequest(
status, timeFrom, timeTo,
@@ -65,7 +65,7 @@ public class SearchController {
correlationId,
text, null, null, null,
routeId, agentId, processorType,
group, agentIds,
application, agentIds,
offset, limit,
sortField, sortDir
);
@@ -77,11 +77,11 @@ public class SearchController {
@Operation(summary = "Advanced search with all filters")
public ResponseEntity<SearchResult<ExecutionSummary>> searchPost(
@RequestBody SearchRequest request) {
// Resolve group to agentIds if group is specified but agentIds is not
// Resolve application to agentIds if application is specified but agentIds is not
SearchRequest resolved = request;
if (request.group() != null && !request.group().isBlank()
if (request.application() != null && !request.application().isBlank()
&& (request.agentIds() == null || request.agentIds().isEmpty())) {
resolved = request.withAgentIds(resolveGroupToAgentIds(request.group()));
resolved = request.withAgentIds(resolveApplicationToAgentIds(request.application()));
}
return ResponseEntity.ok(searchService.search(resolved));
}
@@ -92,12 +92,15 @@ public class SearchController {
@RequestParam Instant from,
@RequestParam(required = false) Instant to,
@RequestParam(required = false) String routeId,
@RequestParam(required = false) String group) {
@RequestParam(required = false) String application) {
Instant end = to != null ? to : Instant.now();
List<String> agentIds = resolveGroupToAgentIds(group);
if (routeId == null && agentIds == null) {
if (routeId == null && application == null) {
return ResponseEntity.ok(searchService.stats(from, end));
}
if (routeId == null) {
return ResponseEntity.ok(searchService.statsForApp(from, end, application));
}
List<String> agentIds = resolveApplicationToAgentIds(application);
return ResponseEntity.ok(searchService.stats(from, end, routeId, agentIds));
}
@@ -108,9 +111,15 @@ public class SearchController {
@RequestParam(required = false) Instant to,
@RequestParam(defaultValue = "24") int buckets,
@RequestParam(required = false) String routeId,
@RequestParam(required = false) String group) {
@RequestParam(required = false) String application) {
Instant end = to != null ? to : Instant.now();
List<String> agentIds = resolveGroupToAgentIds(group);
if (routeId == null && application == null) {
return ResponseEntity.ok(searchService.timeseries(from, end, buckets));
}
if (routeId == null) {
return ResponseEntity.ok(searchService.timeseriesForApp(from, end, buckets, application));
}
List<String> agentIds = resolveApplicationToAgentIds(application);
if (routeId == null && agentIds == null) {
return ResponseEntity.ok(searchService.timeseries(from, end, buckets));
}
@@ -118,14 +127,14 @@ public class SearchController {
}
/**
* Resolve an application group name to agent IDs.
* Returns null if group is null/blank (no filtering).
* Resolve an application name to agent IDs.
* Returns null if application is null/blank (no filtering).
*/
private List<String> resolveGroupToAgentIds(String group) {
if (group == null || group.isBlank()) {
private List<String> resolveApplicationToAgentIds(String application) {
if (application == null || application.isBlank()) {
return null;
}
return registryService.findByGroup(group).stream()
return registryService.findByApplication(application).stream()
.map(AgentInfo::id)
.toList();
}

View File

@@ -0,0 +1,62 @@
package com.cameleer3.server.app.controller;
import com.cameleer3.server.app.dto.ThresholdConfigRequest;
import com.cameleer3.server.core.admin.AuditCategory;
import com.cameleer3.server.core.admin.AuditResult;
import com.cameleer3.server.core.admin.AuditService;
import com.cameleer3.server.core.admin.ThresholdConfig;
import com.cameleer3.server.core.admin.ThresholdRepository;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.tags.Tag;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.validation.Valid;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.server.ResponseStatusException;
import java.util.List;
import java.util.Map;
@RestController
@RequestMapping("/api/v1/admin/thresholds")
@PreAuthorize("hasRole('ADMIN')")
@Tag(name = "Threshold Admin", description = "Monitoring threshold configuration (ADMIN only)")
public class ThresholdAdminController {
private final ThresholdRepository thresholdRepository;
private final AuditService auditService;
public ThresholdAdminController(ThresholdRepository thresholdRepository, AuditService auditService) {
this.thresholdRepository = thresholdRepository;
this.auditService = auditService;
}
@GetMapping
@Operation(summary = "Get current threshold configuration")
public ResponseEntity<ThresholdConfig> getThresholds() {
ThresholdConfig config = thresholdRepository.find().orElse(ThresholdConfig.defaults());
return ResponseEntity.ok(config);
}
@PutMapping
@Operation(summary = "Update threshold configuration")
public ResponseEntity<ThresholdConfig> updateThresholds(@Valid @RequestBody ThresholdConfigRequest request,
HttpServletRequest httpRequest) {
List<String> errors = request.validate();
if (!errors.isEmpty()) {
throw new ResponseStatusException(HttpStatus.BAD_REQUEST, String.join("; ", errors));
}
ThresholdConfig config = request.toConfig();
thresholdRepository.save(config, null);
auditService.log("update_thresholds", AuditCategory.CONFIG, "thresholds",
Map.of("config", config), AuditResult.SUCCESS, httpRequest);
return ResponseEntity.ok(config);
}
}

View File

@@ -1,73 +1,191 @@
package com.cameleer3.server.app.controller;
import com.cameleer3.server.app.dto.SetPasswordRequest;
import com.cameleer3.server.core.admin.AuditCategory;
import com.cameleer3.server.core.admin.AuditResult;
import com.cameleer3.server.core.admin.AuditService;
import com.cameleer3.server.core.rbac.RbacService;
import com.cameleer3.server.core.rbac.SystemRole;
import com.cameleer3.server.core.rbac.UserDetail;
import com.cameleer3.server.core.security.UserInfo;
import com.cameleer3.server.core.security.UserRepository;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.validation.Valid;
import org.springframework.http.ResponseEntity;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import java.time.Instant;
import java.util.List;
import java.util.Map;
import java.util.UUID;
/**
* Admin endpoints for user management.
* Protected by {@code ROLE_ADMIN} via SecurityConfig URL patterns.
* Protected by {@code ROLE_ADMIN}.
*/
@RestController
@RequestMapping("/api/v1/admin/users")
@Tag(name = "User Admin", description = "User management (ADMIN only)")
@PreAuthorize("hasRole('ADMIN')")
public class UserAdminController {
private final UserRepository userRepository;
private static final BCryptPasswordEncoder passwordEncoder = new BCryptPasswordEncoder();
public UserAdminController(UserRepository userRepository) {
private final RbacService rbacService;
private final UserRepository userRepository;
private final AuditService auditService;
public UserAdminController(RbacService rbacService, UserRepository userRepository,
AuditService auditService) {
this.rbacService = rbacService;
this.userRepository = userRepository;
this.auditService = auditService;
}
@GetMapping
@Operation(summary = "List all users")
@Operation(summary = "List all users with RBAC detail")
@ApiResponse(responseCode = "200", description = "User list returned")
public ResponseEntity<List<UserInfo>> listUsers() {
return ResponseEntity.ok(userRepository.findAll());
public ResponseEntity<List<UserDetail>> listUsers() {
return ResponseEntity.ok(rbacService.listUsers());
}
@GetMapping("/{userId}")
@Operation(summary = "Get user by ID")
@Operation(summary = "Get user by ID with RBAC detail")
@ApiResponse(responseCode = "200", description = "User found")
@ApiResponse(responseCode = "404", description = "User not found")
public ResponseEntity<UserInfo> getUser(@PathVariable String userId) {
return userRepository.findById(userId)
.map(ResponseEntity::ok)
.orElse(ResponseEntity.notFound().build());
}
@PutMapping("/{userId}/roles")
@Operation(summary = "Update user roles")
@ApiResponse(responseCode = "200", description = "Roles updated")
@ApiResponse(responseCode = "404", description = "User not found")
public ResponseEntity<Void> updateRoles(@PathVariable String userId,
@RequestBody RolesRequest request) {
if (userRepository.findById(userId).isEmpty()) {
public ResponseEntity<UserDetail> getUser(@PathVariable String userId) {
UserDetail detail = rbacService.getUser(userId);
if (detail == null) {
return ResponseEntity.notFound().build();
}
userRepository.updateRoles(userId, request.roles());
return ResponseEntity.ok(detail);
}
@PostMapping
@Operation(summary = "Create a local user")
@ApiResponse(responseCode = "200", description = "User created")
public ResponseEntity<UserDetail> createUser(@RequestBody CreateUserRequest request,
HttpServletRequest httpRequest) {
String userId = "user:" + request.username();
UserInfo user = new UserInfo(userId, "local",
request.email() != null ? request.email() : "",
request.displayName() != null ? request.displayName() : request.username(),
Instant.now());
userRepository.upsert(user);
if (request.password() != null && !request.password().isBlank()) {
userRepository.setPassword(userId, passwordEncoder.encode(request.password()));
}
rbacService.assignRoleToUser(userId, SystemRole.VIEWER_ID);
auditService.log("create_user", AuditCategory.USER_MGMT, userId,
Map.of("username", request.username()), AuditResult.SUCCESS, httpRequest);
return ResponseEntity.ok(rbacService.getUser(userId));
}
@PutMapping("/{userId}")
@Operation(summary = "Update user display name or email")
@ApiResponse(responseCode = "200", description = "User updated")
@ApiResponse(responseCode = "404", description = "User not found")
public ResponseEntity<Void> updateUser(@PathVariable String userId,
@RequestBody UpdateUserRequest request,
HttpServletRequest httpRequest) {
var existing = userRepository.findById(userId);
if (existing.isEmpty()) return ResponseEntity.notFound().build();
var user = existing.get();
var updated = new UserInfo(user.userId(), user.provider(),
request.email() != null ? request.email() : user.email(),
request.displayName() != null ? request.displayName() : user.displayName(),
user.createdAt());
userRepository.upsert(updated);
auditService.log("update_user", AuditCategory.USER_MGMT, userId,
null, AuditResult.SUCCESS, httpRequest);
return ResponseEntity.ok().build();
}
@PostMapping("/{userId}/roles/{roleId}")
@Operation(summary = "Assign a role to a user")
@ApiResponse(responseCode = "200", description = "Role assigned")
@ApiResponse(responseCode = "404", description = "User or role not found")
public ResponseEntity<Void> assignRoleToUser(@PathVariable String userId,
@PathVariable UUID roleId,
HttpServletRequest httpRequest) {
rbacService.assignRoleToUser(userId, roleId);
auditService.log("assign_role_to_user", AuditCategory.USER_MGMT, userId,
Map.of("roleId", roleId), AuditResult.SUCCESS, httpRequest);
return ResponseEntity.ok().build();
}
@DeleteMapping("/{userId}/roles/{roleId}")
@Operation(summary = "Remove a role from a user")
@ApiResponse(responseCode = "204", description = "Role removed")
public ResponseEntity<Void> removeRoleFromUser(@PathVariable String userId,
@PathVariable UUID roleId,
HttpServletRequest httpRequest) {
rbacService.removeRoleFromUser(userId, roleId);
auditService.log("remove_role_from_user", AuditCategory.USER_MGMT, userId,
Map.of("roleId", roleId), AuditResult.SUCCESS, httpRequest);
return ResponseEntity.noContent().build();
}
@PostMapping("/{userId}/groups/{groupId}")
@Operation(summary = "Add a user to a group")
@ApiResponse(responseCode = "200", description = "User added to group")
public ResponseEntity<Void> addUserToGroup(@PathVariable String userId,
@PathVariable UUID groupId,
HttpServletRequest httpRequest) {
rbacService.addUserToGroup(userId, groupId);
auditService.log("add_user_to_group", AuditCategory.USER_MGMT, userId,
Map.of("groupId", groupId), AuditResult.SUCCESS, httpRequest);
return ResponseEntity.ok().build();
}
@DeleteMapping("/{userId}/groups/{groupId}")
@Operation(summary = "Remove a user from a group")
@ApiResponse(responseCode = "204", description = "User removed from group")
public ResponseEntity<Void> removeUserFromGroup(@PathVariable String userId,
@PathVariable UUID groupId,
HttpServletRequest httpRequest) {
rbacService.removeUserFromGroup(userId, groupId);
auditService.log("remove_user_from_group", AuditCategory.USER_MGMT, userId,
Map.of("groupId", groupId), AuditResult.SUCCESS, httpRequest);
return ResponseEntity.noContent().build();
}
@DeleteMapping("/{userId}")
@Operation(summary = "Delete user")
@ApiResponse(responseCode = "204", description = "User deleted")
public ResponseEntity<Void> deleteUser(@PathVariable String userId) {
public ResponseEntity<Void> deleteUser(@PathVariable String userId,
HttpServletRequest httpRequest) {
userRepository.delete(userId);
auditService.log("delete_user", AuditCategory.USER_MGMT, userId,
null, AuditResult.SUCCESS, httpRequest);
return ResponseEntity.noContent().build();
}
public record RolesRequest(List<String> roles) {}
@PostMapping("/{userId}/password")
@Operation(summary = "Reset user password")
@ApiResponse(responseCode = "204", description = "Password reset")
public ResponseEntity<Void> resetPassword(
@PathVariable String userId,
@Valid @RequestBody SetPasswordRequest request,
HttpServletRequest httpRequest) {
userRepository.setPassword(userId, passwordEncoder.encode(request.password()));
auditService.log("reset_password", AuditCategory.USER_MGMT, userId, null, AuditResult.SUCCESS, httpRequest);
return ResponseEntity.noContent().build();
}
public record CreateUserRequest(String username, String displayName, String email, String password) {}
public record UpdateUserRequest(String displayName, String email) {}
}

View File

@@ -0,0 +1,11 @@
package com.cameleer3.server.app.dto;
import io.swagger.v3.oas.annotations.media.Schema;
@Schema(description = "Currently running database query")
public record ActiveQueryResponse(
@Schema(description = "Backend process ID") int pid,
@Schema(description = "Query duration in seconds") double durationSeconds,
@Schema(description = "Backend state (active, idle, etc.)") String state,
@Schema(description = "SQL query text") String query
) {}

View File

@@ -0,0 +1,24 @@
package com.cameleer3.server.app.dto;
import com.cameleer3.server.core.agent.AgentEventRecord;
import io.swagger.v3.oas.annotations.media.Schema;
import jakarta.validation.constraints.NotNull;
import java.time.Instant;
@Schema(description = "Agent lifecycle event")
public record AgentEventResponse(
@NotNull long id,
@NotNull String agentId,
@NotNull String appId,
@NotNull String eventType,
String detail,
@NotNull Instant timestamp
) {
public static AgentEventResponse from(AgentEventRecord record) {
return new AgentEventResponse(
record.id(), record.agentId(), record.appId(),
record.eventType(), record.detail(), record.timestamp()
);
}
}

View File

@@ -4,24 +4,46 @@ import com.cameleer3.server.core.agent.AgentInfo;
import io.swagger.v3.oas.annotations.media.Schema;
import jakarta.validation.constraints.NotNull;
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.Map;
@Schema(description = "Agent instance summary")
@Schema(description = "Agent instance summary with runtime metrics")
public record AgentInstanceResponse(
@NotNull String id,
@NotNull String name,
@NotNull String group,
@NotNull String application,
@NotNull String status,
@NotNull List<String> routeIds,
@NotNull Instant registeredAt,
@NotNull Instant lastHeartbeat
@NotNull Instant lastHeartbeat,
String version,
Map<String, Object> capabilities,
double tps,
double errorRate,
int activeRoutes,
int totalRoutes,
long uptimeSeconds
) {
public static AgentInstanceResponse from(AgentInfo info) {
long uptime = Duration.between(info.registeredAt(), Instant.now()).toSeconds();
return new AgentInstanceResponse(
info.id(), info.name(), info.group(),
info.id(), info.name(), info.application(),
info.state().name(), info.routeIds(),
info.registeredAt(), info.lastHeartbeat()
info.registeredAt(), info.lastHeartbeat(),
info.version(), info.capabilities(),
0.0, 0.0,
0, info.routeIds() != null ? info.routeIds().size() : 0,
uptime
);
}
public AgentInstanceResponse withMetrics(double tps, double errorRate, int activeRoutes) {
return new AgentInstanceResponse(
id, name, application, status, routeIds, registeredAt, lastHeartbeat,
version, capabilities,
tps, errorRate, activeRoutes, totalRoutes, uptimeSeconds
);
}
}
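`withMetrics` above is a "wither" on an immutable record: since record components cannot be mutated, it returns a copy with the metric fields replaced and every other component carried over unchanged. The pattern in miniature, on a hypothetical two-field record that is not part of the codebase:

```java
// Records are immutable, so "withers" return a modified copy instead of
// mutating in place; unchanged components are copied through verbatim.
public class WitherDemo {
    public record Health(String status, double tps) {
        public Health withTps(double tps) {
            return new Health(status, tps); // carry status, swap tps
        }
    }

    public static void main(String[] args) {
        Health h = new Health("live", 0.0);
        Health updated = h.withTps(4.5);
        System.out.println(updated.tps()); // 4.5
        System.out.println(h.tps());       // original unchanged: 0.0
    }
}
```

This is the same convention the commit notes for `AgentInfo` ("all wither methods updated"): each rename or field addition means touching every wither so the copy stays complete.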

View File

@@ -0,0 +1,9 @@
package com.cameleer3.server.app.dto;
import java.util.List;
import java.util.Map;
import jakarta.validation.constraints.NotNull;
public record AgentMetricsResponse(
@NotNull Map<String, List<MetricBucket>> metrics
) {}

View File

@@ -3,5 +3,5 @@ package com.cameleer3.server.app.dto;
import io.swagger.v3.oas.annotations.media.Schema;
import jakarta.validation.constraints.NotNull;
@Schema(description = "Refreshed access token")
public record AgentRefreshResponse(@NotNull String accessToken) {}
@Schema(description = "Refreshed access and refresh tokens")
public record AgentRefreshResponse(@NotNull String accessToken, @NotNull String refreshToken) {}

View File

@@ -10,7 +10,7 @@ import java.util.Map;
public record AgentRegistrationRequest(
@NotNull String agentId,
@NotNull String name,
@Schema(defaultValue = "default") String group,
@Schema(defaultValue = "default") String application,
String version,
List<String> routeIds,
Map<String, Object> capabilities

View File

@@ -0,0 +1,12 @@
package com.cameleer3.server.app.dto;
import io.swagger.v3.oas.annotations.media.Schema;
import jakarta.validation.constraints.NotNull;
@Schema(description = "Summary of an agent instance for sidebar display")
public record AgentSummary(
@NotNull String id,
@NotNull String name,
@NotNull String status,
@NotNull double tps
) {}

View File

@@ -0,0 +1,16 @@
package com.cameleer3.server.app.dto;
import io.swagger.v3.oas.annotations.media.Schema;
import jakarta.validation.constraints.NotNull;
import java.util.List;
@Schema(description = "Application catalog entry with routes and agents")
public record AppCatalogEntry(
@NotNull String appId,
@NotNull List<RouteSummary> routes,
@NotNull List<AgentSummary> agents,
@NotNull int agentCount,
@NotNull String health,
@NotNull long exchangeCount
) {}

View File

@@ -0,0 +1,15 @@
package com.cameleer3.server.app.dto;
import com.cameleer3.server.core.admin.AuditRecord;
import io.swagger.v3.oas.annotations.media.Schema;
import java.util.List;
@Schema(description = "Paginated audit log entries")
public record AuditLogPageResponse(
@Schema(description = "Audit log entries") List<AuditRecord> items,
@Schema(description = "Total number of matching entries") long totalCount,
@Schema(description = "Current page number (0-based)") int page,
@Schema(description = "Page size") int pageSize,
@Schema(description = "Total number of pages") int totalPages
) {}

View File

@@ -0,0 +1,12 @@
package com.cameleer3.server.app.dto;
import io.swagger.v3.oas.annotations.media.Schema;
@Schema(description = "HikariCP connection pool statistics")
public record ConnectionPoolResponse(
@Schema(description = "Number of currently active connections") int activeConnections,
@Schema(description = "Number of idle connections") int idleConnections,
@Schema(description = "Number of threads waiting for a connection") int pendingThreads,
@Schema(description = "Maximum wait time in milliseconds") long maxWaitMs,
@Schema(description = "Maximum pool size") int maxPoolSize
) {}

View File

@@ -0,0 +1,12 @@
package com.cameleer3.server.app.dto;
import io.swagger.v3.oas.annotations.media.Schema;
@Schema(description = "Database connection and version status")
public record DatabaseStatusResponse(
@Schema(description = "Whether the database is reachable") boolean connected,
@Schema(description = "PostgreSQL version string") String version,
@Schema(description = "Database host") String host,
@Schema(description = "Current schema search path") String schema,
@Schema(description = "Whether TimescaleDB extension is available") boolean timescaleDb
) {}

View File

@@ -0,0 +1,14 @@
package com.cameleer3.server.app.dto;
import io.swagger.v3.oas.annotations.media.Schema;
@Schema(description = "OpenSearch index information")
public record IndexInfoResponse(
@Schema(description = "Index name") String name,
@Schema(description = "Document count") long docCount,
@Schema(description = "Human-readable index size") String size,
@Schema(description = "Index size in bytes") long sizeBytes,
@Schema(description = "Index health status") String health,
@Schema(description = "Number of primary shards") int primaryShards,
@Schema(description = "Number of replica shards") int replicaShards
) {}

View File

@@ -0,0 +1,16 @@
package com.cameleer3.server.app.dto;
import io.swagger.v3.oas.annotations.media.Schema;
import java.util.List;
@Schema(description = "Paginated list of OpenSearch indices")
public record IndicesPageResponse(
@Schema(description = "Index list for current page") List<IndexInfoResponse> indices,
@Schema(description = "Total number of indices") long totalIndices,
@Schema(description = "Total document count across all indices") long totalDocs,
@Schema(description = "Human-readable total size") String totalSize,
@Schema(description = "Current page number (0-based)") int page,
@Schema(description = "Page size") int pageSize,
@Schema(description = "Total number of pages") int totalPages
) {}

View File

@@ -0,0 +1,9 @@
package com.cameleer3.server.app.dto;
import java.time.Instant;
import jakarta.validation.constraints.NotNull;
public record MetricBucket(
@NotNull Instant time,
double value
) {}

View File

@@ -0,0 +1,12 @@
package com.cameleer3.server.app.dto;
import io.swagger.v3.oas.annotations.media.Schema;
@Schema(description = "OpenSearch cluster status")
public record OpenSearchStatusResponse(
@Schema(description = "Whether the cluster is reachable") boolean reachable,
@Schema(description = "Cluster health status (GREEN, YELLOW, RED)") String clusterHealth,
@Schema(description = "OpenSearch version") String version,
@Schema(description = "Number of nodes in the cluster") int nodeCount,
@Schema(description = "OpenSearch host") String host
) {}

View File

@@ -0,0 +1,13 @@
package com.cameleer3.server.app.dto;
import io.swagger.v3.oas.annotations.media.Schema;
@Schema(description = "OpenSearch performance metrics")
public record PerformanceResponse(
@Schema(description = "Query cache hit rate (0.0-1.0)") double queryCacheHitRate,
@Schema(description = "Request cache hit rate (0.0-1.0)") double requestCacheHitRate,
@Schema(description = "Average search latency in milliseconds") double searchLatencyMs,
@Schema(description = "Average indexing latency in milliseconds") double indexingLatencyMs,
@Schema(description = "JVM heap used in bytes") long jvmHeapUsedBytes,
@Schema(description = "JVM heap max in bytes") long jvmHeapMaxBytes
) {}

View File

@@ -0,0 +1,16 @@
package com.cameleer3.server.app.dto;
import io.swagger.v3.oas.annotations.media.Schema;
import java.time.Instant;
@Schema(description = "Search indexing pipeline statistics")
public record PipelineStatsResponse(
@Schema(description = "Current queue depth") int queueDepth,
@Schema(description = "Maximum queue size") int maxQueueSize,
@Schema(description = "Number of failed indexing operations") long failedCount,
@Schema(description = "Number of successfully indexed documents") long indexedCount,
@Schema(description = "Debounce interval in milliseconds") long debounceMs,
@Schema(description = "Current indexing rate (docs/sec)") double indexingRate,
@Schema(description = "Timestamp of last indexed document") Instant lastIndexedAt
) {}

View File

@@ -0,0 +1,15 @@
package com.cameleer3.server.app.dto;
import jakarta.validation.constraints.NotNull;
public record ProcessorMetrics(
@NotNull String processorId,
@NotNull String processorType,
@NotNull String routeId,
@NotNull String appId,
long totalCount,
long failedCount,
double avgDurationMs,
double p99DurationMs,
double errorRate
) {}

View File

@@ -0,0 +1,19 @@
package com.cameleer3.server.app.dto;
import io.swagger.v3.oas.annotations.media.Schema;
import jakarta.validation.constraints.NotNull;
import java.util.List;
@Schema(description = "Aggregated route performance metrics")
public record RouteMetrics(
@NotNull String routeId,
@NotNull String appId,
// @NotNull omitted on primitives: they can never be null, so the constraint is a no-op
long exchangeCount,
double successRate,
double avgDurationMs,
double p99DurationMs,
double errorRate,
double throughputPerSec,
@NotNull List<Double> sparkline
) {}

View File

@@ -0,0 +1,13 @@
package com.cameleer3.server.app.dto;
import io.swagger.v3.oas.annotations.media.Schema;
import jakarta.validation.constraints.NotNull;
import java.time.Instant;
@Schema(description = "Summary of a route within an application")
public record RouteSummary(
@NotNull String routeId,
long exchangeCount,
Instant lastSeen
) {}

View File

@@ -0,0 +1,7 @@
package com.cameleer3.server.app.dto;
import jakarta.validation.constraints.NotBlank;
public record SetPasswordRequest(
@NotBlank String password
) {}

View File

@@ -0,0 +1,13 @@
package com.cameleer3.server.app.dto;
import io.swagger.v3.oas.annotations.media.Schema;
@Schema(description = "Table size and row count information")
public record TableSizeResponse(
@Schema(description = "Table name") String tableName,
@Schema(description = "Approximate row count") long rowCount,
@Schema(description = "Human-readable data size") String dataSize,
@Schema(description = "Human-readable index size") String indexSize,
@Schema(description = "Data size in bytes") long dataSizeBytes,
@Schema(description = "Index size in bytes") long indexSizeBytes
) {}

View File

@@ -0,0 +1,144 @@
package com.cameleer3.server.app.dto;
import com.cameleer3.server.core.admin.ThresholdConfig;
import io.swagger.v3.oas.annotations.media.Schema;
import jakarta.validation.Valid;
import jakarta.validation.constraints.Max;
import jakarta.validation.constraints.Min;
import jakarta.validation.constraints.NotBlank;
import jakarta.validation.constraints.NotNull;
import jakarta.validation.constraints.Positive;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
@Schema(description = "Threshold configuration for admin monitoring")
public record ThresholdConfigRequest(
@Valid @NotNull DatabaseThresholdsRequest database,
@Valid @NotNull OpenSearchThresholdsRequest opensearch
) {
@Schema(description = "Database monitoring thresholds")
public record DatabaseThresholdsRequest(
@Min(0) @Max(100)
@Schema(description = "Connection pool usage warning threshold (percentage)")
int connectionPoolWarning,
@Min(0) @Max(100)
@Schema(description = "Connection pool usage critical threshold (percentage)")
int connectionPoolCritical,
@Positive
@Schema(description = "Query duration warning threshold (seconds)")
double queryDurationWarning,
@Positive
@Schema(description = "Query duration critical threshold (seconds)")
double queryDurationCritical
) {}
@Schema(description = "OpenSearch monitoring thresholds")
public record OpenSearchThresholdsRequest(
@NotBlank
@Schema(description = "Cluster health warning threshold (GREEN, YELLOW, RED)")
String clusterHealthWarning,
@NotBlank
@Schema(description = "Cluster health critical threshold (GREEN, YELLOW, RED)")
String clusterHealthCritical,
@Min(0)
@Schema(description = "Queue depth warning threshold")
int queueDepthWarning,
@Min(0)
@Schema(description = "Queue depth critical threshold")
int queueDepthCritical,
@Min(0) @Max(100)
@Schema(description = "JVM heap usage warning threshold (percentage)")
int jvmHeapWarning,
@Min(0) @Max(100)
@Schema(description = "JVM heap usage critical threshold (percentage)")
int jvmHeapCritical,
@Min(0)
@Schema(description = "Failed document count warning threshold")
int failedDocsWarning,
@Min(0)
@Schema(description = "Failed document count critical threshold")
int failedDocsCritical
) {}
/** Convert to core domain model */
public ThresholdConfig toConfig() {
return new ThresholdConfig(
new ThresholdConfig.DatabaseThresholds(
database.connectionPoolWarning(),
database.connectionPoolCritical(),
database.queryDurationWarning(),
database.queryDurationCritical()
),
new ThresholdConfig.OpenSearchThresholds(
opensearch.clusterHealthWarning(),
opensearch.clusterHealthCritical(),
opensearch.queueDepthWarning(),
opensearch.queueDepthCritical(),
opensearch.jvmHeapWarning(),
opensearch.jvmHeapCritical(),
opensearch.failedDocsWarning(),
opensearch.failedDocsCritical()
)
);
}
/** Validate semantic constraints beyond annotation-level validation */
public List<String> validate() {
List<String> errors = new ArrayList<>();
if (database != null) {
if (database.connectionPoolWarning() > database.connectionPoolCritical()) {
errors.add("database.connectionPoolWarning must be <= connectionPoolCritical");
}
if (database.queryDurationWarning() > database.queryDurationCritical()) {
errors.add("database.queryDurationWarning must be <= queryDurationCritical");
}
}
if (opensearch != null) {
if (opensearch.queueDepthWarning() > opensearch.queueDepthCritical()) {
errors.add("opensearch.queueDepthWarning must be <= queueDepthCritical");
}
if (opensearch.jvmHeapWarning() > opensearch.jvmHeapCritical()) {
errors.add("opensearch.jvmHeapWarning must be <= jvmHeapCritical");
}
if (opensearch.failedDocsWarning() > opensearch.failedDocsCritical()) {
errors.add("opensearch.failedDocsWarning must be <= failedDocsCritical");
}
// Validate health severity ordering: GREEN < YELLOW < RED
int warningSeverity = healthSeverity(opensearch.clusterHealthWarning());
int criticalSeverity = healthSeverity(opensearch.clusterHealthCritical());
if (warningSeverity < 0) {
errors.add("opensearch.clusterHealthWarning must be GREEN, YELLOW, or RED");
}
if (criticalSeverity < 0) {
errors.add("opensearch.clusterHealthCritical must be GREEN, YELLOW, or RED");
}
if (warningSeverity >= 0 && criticalSeverity >= 0 && warningSeverity > criticalSeverity) {
errors.add("opensearch.clusterHealthWarning severity must be <= clusterHealthCritical (GREEN < YELLOW < RED)");
}
}
return errors;
}
private static final Map<String, Integer> HEALTH_SEVERITY =
Map.of("GREEN", 0, "YELLOW", 1, "RED", 2);
private static int healthSeverity(String health) {
// Locale.ROOT avoids locale-dependent case mapping (e.g. the Turkish dotless i)
return health == null ? -1 : HEALTH_SEVERITY.getOrDefault(health.toUpperCase(java.util.Locale.ROOT), -1);
}
}

View File
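The validation above is two-stage: annotation constraints (`@Min`, `@Max`, `@NotBlank`) handle per-field checks, while `validate()` enforces cross-field ordering, including the GREEN < YELLOW < RED severity ordering for cluster health. A minimal self-contained sketch of that ordering check (class and method names here are illustrative, not part of the codebase):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;
import java.util.Map;

public class SeverityOrderDemo {
    // Mirrors healthSeverity(): GREEN < YELLOW < RED, unknown value -> -1
    private static final Map<String, Integer> SEVERITY = Map.of("GREEN", 0, "YELLOW", 1, "RED", 2);

    static int severity(String health) {
        return health == null ? -1 : SEVERITY.getOrDefault(health.toUpperCase(Locale.ROOT), -1);
    }

    // Same shape as validate(): the warning threshold must not be stricter than critical
    static List<String> checkOrdering(String warning, String critical) {
        List<String> errors = new ArrayList<>();
        int w = severity(warning);
        int c = severity(critical);
        if (w < 0) errors.add("clusterHealthWarning must be GREEN, YELLOW, or RED");
        if (c < 0) errors.add("clusterHealthCritical must be GREEN, YELLOW, or RED");
        if (w >= 0 && c >= 0 && w > c) {
            errors.add("warning severity must be <= critical (GREEN < YELLOW < RED)");
        }
        return errors;
    }

    public static void main(String[] args) {
        System.out.println(checkOrdering("YELLOW", "RED")); // valid: empty list
        System.out.println(checkOrdering("RED", "YELLOW")); // ordering violation
        System.out.println(checkOrdering("BLUE", "RED"));   // unknown value
    }
}
```

Returning a list of error strings rather than throwing lets the controller report all semantic violations in one response instead of failing on the first.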

@@ -1,159 +0,0 @@
package com.cameleer3.server.app.ingestion;
import com.cameleer3.server.app.config.IngestionConfig;
import com.cameleer3.server.core.ingestion.TaggedDiagram;
import com.cameleer3.server.core.ingestion.TaggedExecution;
import com.cameleer3.server.core.ingestion.WriteBuffer;
import com.cameleer3.server.core.storage.DiagramRepository;
import com.cameleer3.server.core.storage.ExecutionRepository;
import com.cameleer3.server.core.storage.MetricsRepository;
import com.cameleer3.server.core.storage.model.MetricsSnapshot;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.SmartLifecycle;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import java.util.List;
/**
* Scheduled task that drains the write buffers and batch-inserts into ClickHouse.
* <p>
* Implements {@link SmartLifecycle} to ensure all remaining buffered data is
* flushed on application shutdown.
*/
@Component
public class ClickHouseFlushScheduler implements SmartLifecycle {
private static final Logger log = LoggerFactory.getLogger(ClickHouseFlushScheduler.class);
private final WriteBuffer<TaggedExecution> executionBuffer;
private final WriteBuffer<TaggedDiagram> diagramBuffer;
private final WriteBuffer<MetricsSnapshot> metricsBuffer;
private final ExecutionRepository executionRepository;
private final DiagramRepository diagramRepository;
private final MetricsRepository metricsRepository;
private final int batchSize;
private volatile boolean running = false;
public ClickHouseFlushScheduler(WriteBuffer<TaggedExecution> executionBuffer,
WriteBuffer<TaggedDiagram> diagramBuffer,
WriteBuffer<MetricsSnapshot> metricsBuffer,
ExecutionRepository executionRepository,
DiagramRepository diagramRepository,
MetricsRepository metricsRepository,
IngestionConfig config) {
this.executionBuffer = executionBuffer;
this.diagramBuffer = diagramBuffer;
this.metricsBuffer = metricsBuffer;
this.executionRepository = executionRepository;
this.diagramRepository = diagramRepository;
this.metricsRepository = metricsRepository;
this.batchSize = config.getBatchSize();
}
@Scheduled(fixedDelayString = "${ingestion.flush-interval-ms:1000}")
public void flushAll() {
flushExecutions();
flushDiagrams();
flushMetrics();
}
private void flushExecutions() {
try {
List<TaggedExecution> batch = executionBuffer.drain(batchSize);
if (!batch.isEmpty()) {
executionRepository.insertBatch(batch);
log.debug("Flushed {} executions to ClickHouse", batch.size());
}
} catch (Exception e) {
log.error("Failed to flush executions to ClickHouse", e);
}
}
private void flushDiagrams() {
try {
List<TaggedDiagram> batch = diagramBuffer.drain(batchSize);
for (TaggedDiagram diagram : batch) {
diagramRepository.store(diagram);
}
if (!batch.isEmpty()) {
log.debug("Flushed {} diagrams to ClickHouse", batch.size());
}
} catch (Exception e) {
log.error("Failed to flush diagrams to ClickHouse", e);
}
}
private void flushMetrics() {
try {
List<MetricsSnapshot> batch = metricsBuffer.drain(batchSize);
if (!batch.isEmpty()) {
metricsRepository.insertBatch(batch);
log.debug("Flushed {} metrics to ClickHouse", batch.size());
}
} catch (Exception e) {
log.error("Failed to flush metrics to ClickHouse", e);
}
}
// SmartLifecycle -- flush remaining data on shutdown
@Override
public void start() {
running = true;
log.info("ClickHouseFlushScheduler started");
}
@Override
public void stop() {
log.info("ClickHouseFlushScheduler stopping -- flushing remaining data");
drainAll();
running = false;
}
@Override
public boolean isRunning() {
return running;
}
@Override
public int getPhase() {
// Run after most beans but before DataSource shutdown
return Integer.MAX_VALUE - 1;
}
/**
* Drain all buffers completely (loop until empty).
*/
private void drainAll() {
drainBufferCompletely("executions", executionBuffer, batch -> executionRepository.insertBatch(batch));
drainBufferCompletely("diagrams", diagramBuffer, batch -> {
for (TaggedDiagram d : batch) {
diagramRepository.store(d);
}
});
drainBufferCompletely("metrics", metricsBuffer, batch -> metricsRepository.insertBatch(batch));
}
private <T> void drainBufferCompletely(String name, WriteBuffer<T> buffer, java.util.function.Consumer<List<T>> inserter) {
int total = 0;
while (buffer.size() > 0) {
List<T> batch = buffer.drain(batchSize);
if (batch.isEmpty()) {
break;
}
try {
inserter.accept(batch);
total += batch.size();
} catch (Exception e) {
log.error("Failed to flush remaining {} during shutdown", name, e);
break;
}
}
if (total > 0) {
log.info("Flushed {} remaining {} during shutdown", total, name);
}
}
}

View File

@@ -0,0 +1,59 @@
package com.cameleer3.server.app.ingestion;
import com.cameleer3.server.app.config.IngestionConfig;
import com.cameleer3.server.core.ingestion.WriteBuffer;
import com.cameleer3.server.core.storage.MetricsStore;
import com.cameleer3.server.core.storage.model.MetricsSnapshot;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.SmartLifecycle;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import java.util.List;
@Component
public class MetricsFlushScheduler implements SmartLifecycle {
private static final Logger log = LoggerFactory.getLogger(MetricsFlushScheduler.class);
private final WriteBuffer<MetricsSnapshot> metricsBuffer;
private final MetricsStore metricsStore;
private final int batchSize;
private volatile boolean running = false;
public MetricsFlushScheduler(WriteBuffer<MetricsSnapshot> metricsBuffer,
MetricsStore metricsStore,
IngestionConfig config) {
this.metricsBuffer = metricsBuffer;
this.metricsStore = metricsStore;
this.batchSize = config.getBatchSize();
}
@Scheduled(fixedDelayString = "${ingestion.flush-interval-ms:1000}")
public void flush() {
try {
List<MetricsSnapshot> batch = metricsBuffer.drain(batchSize);
if (!batch.isEmpty()) {
metricsStore.insertBatch(batch);
log.debug("Flushed {} metrics to PostgreSQL", batch.size());
}
} catch (Exception e) {
log.error("Failed to flush metrics", e);
}
}
@Override public void start() { running = true; }
@Override public void stop() {
// Drain remaining on shutdown
while (metricsBuffer.size() > 0) {
List<MetricsSnapshot> batch = metricsBuffer.drain(batchSize);
if (batch.isEmpty()) break;
try { metricsStore.insertBatch(batch); }
catch (Exception e) { log.error("Failed to flush metrics during shutdown", e); break; }
}
running = false;
}
@Override public boolean isRunning() { return running; }
@Override public int getPhase() { return Integer.MAX_VALUE - 1; }
}

View File
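Both flush schedulers share the same shutdown pattern: `stop()` loops `drain -> insert` until the buffer is empty, breaking out if a drain returns nothing or an insert fails, so shutdown can never hang on a poisoned batch. A self-contained sketch of that loop against an in-memory queue standing in for `WriteBuffer`; all names here are illustrative:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class DrainDemo {
    // Stand-in for WriteBuffer<T>.drain(): take up to batchSize items per call
    static <T> List<T> drain(Queue<T> buffer, int batchSize) {
        List<T> batch = new ArrayList<>();
        for (int i = 0; i < batchSize; i++) {
            T item = buffer.poll();
            if (item == null) break;
            batch.add(item);
        }
        return batch;
    }

    // Same loop shape as stop(): keep draining until the buffer is empty
    static <T> int drainCompletely(Queue<T> buffer, int batchSize, List<List<T>> sink) {
        int total = 0;
        while (!buffer.isEmpty()) {
            List<T> batch = drain(buffer, batchSize);
            if (batch.isEmpty()) break;
            sink.add(batch); // stand-in for metricsStore.insertBatch(batch)
            total += batch.size();
        }
        return total;
    }

    public static void main(String[] args) {
        Queue<Integer> buffer = new ArrayDeque<>(List.of(1, 2, 3, 4, 5));
        List<List<Integer>> sink = new ArrayList<>();
        int total = drainCompletely(buffer, 2, sink);
        System.out.println(total + " items in " + sink.size() + " batches"); // 5 items in 3 batches
    }
}
```

The `getPhase()` value of `Integer.MAX_VALUE - 1` matters here: it makes the scheduler one of the last beans to stop, so the final drain runs while the DataSource is still open.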

@@ -0,0 +1,253 @@
package com.cameleer3.server.app.rbac;
import com.cameleer3.server.core.rbac.*;
import com.cameleer3.server.core.security.UserInfo;
import com.cameleer3.server.core.security.UserRepository;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import java.util.*;
@Service
public class RbacServiceImpl implements RbacService {
private final JdbcTemplate jdbc;
private final UserRepository userRepository;
private final GroupRepository groupRepository;
private final RoleRepository roleRepository;
public RbacServiceImpl(JdbcTemplate jdbc, UserRepository userRepository,
GroupRepository groupRepository, RoleRepository roleRepository) {
this.jdbc = jdbc;
this.userRepository = userRepository;
this.groupRepository = groupRepository;
this.roleRepository = roleRepository;
}
@Override
public List<UserDetail> listUsers() {
return userRepository.findAll().stream()
.map(this::buildUserDetail)
.toList();
}
@Override
public UserDetail getUser(String userId) {
UserInfo user = userRepository.findById(userId).orElse(null);
if (user == null) return null;
return buildUserDetail(user);
}
private UserDetail buildUserDetail(UserInfo user) {
List<RoleSummary> directRoles = getDirectRolesForUser(user.userId());
List<GroupSummary> directGroups = getDirectGroupsForUser(user.userId());
List<RoleSummary> effectiveRoles = getEffectiveRolesForUser(user.userId());
List<GroupSummary> effectiveGroups = getEffectiveGroupsForUser(user.userId());
return new UserDetail(user.userId(), user.provider(), user.email(),
user.displayName(), user.createdAt(),
directRoles, directGroups, effectiveRoles, effectiveGroups);
}
@Override
public void assignRoleToUser(String userId, UUID roleId) {
jdbc.update("INSERT INTO user_roles (user_id, role_id) VALUES (?, ?) ON CONFLICT DO NOTHING",
userId, roleId);
}
@Override
public void removeRoleFromUser(String userId, UUID roleId) {
jdbc.update("DELETE FROM user_roles WHERE user_id = ? AND role_id = ?", userId, roleId);
}
@Override
public void addUserToGroup(String userId, UUID groupId) {
jdbc.update("INSERT INTO user_groups (user_id, group_id) VALUES (?, ?) ON CONFLICT DO NOTHING",
userId, groupId);
}
@Override
public void removeUserFromGroup(String userId, UUID groupId) {
jdbc.update("DELETE FROM user_groups WHERE user_id = ? AND group_id = ?", userId, groupId);
}
@Override
public List<RoleSummary> getEffectiveRolesForUser(String userId) {
List<RoleSummary> direct = getDirectRolesForUser(userId);
List<GroupSummary> effectiveGroups = getEffectiveGroupsForUser(userId);
Map<UUID, RoleSummary> roleMap = new LinkedHashMap<>();
for (RoleSummary r : direct) {
roleMap.put(r.id(), r);
}
for (GroupSummary group : effectiveGroups) {
List<RoleSummary> groupRoles = jdbc.query("""
SELECT r.id, r.name, r.system FROM group_roles gr
JOIN roles r ON r.id = gr.role_id WHERE gr.group_id = ?
""", (rs, rowNum) -> new RoleSummary(
rs.getObject("id", UUID.class),
rs.getString("name"),
rs.getBoolean("system"),
group.name()
), group.id());
for (RoleSummary r : groupRoles) {
roleMap.putIfAbsent(r.id(), r);
}
}
return new ArrayList<>(roleMap.values());
}
@Override
public List<GroupSummary> getEffectiveGroupsForUser(String userId) {
List<GroupSummary> directGroups = getDirectGroupsForUser(userId);
Set<UUID> visited = new LinkedHashSet<>();
List<GroupSummary> all = new ArrayList<>();
for (GroupSummary g : directGroups) {
collectAncestors(g.id(), visited, all);
}
return all;
}
private void collectAncestors(UUID groupId, Set<UUID> visited, List<GroupSummary> result) {
if (!visited.add(groupId)) return;
var rows = jdbc.query("SELECT id, name, parent_group_id FROM groups WHERE id = ?",
(rs, rowNum) -> new Object[]{
new GroupSummary(rs.getObject("id", UUID.class), rs.getString("name")),
rs.getObject("parent_group_id", UUID.class)
}, groupId);
if (rows.isEmpty()) return;
result.add((GroupSummary) rows.get(0)[0]);
UUID parentId = (UUID) rows.get(0)[1];
if (parentId != null) {
collectAncestors(parentId, visited, result);
}
}
@Override
public List<RoleSummary> getEffectiveRolesForGroup(UUID groupId) {
List<RoleSummary> direct = jdbc.query("""
SELECT r.id, r.name, r.system FROM group_roles gr
JOIN roles r ON r.id = gr.role_id WHERE gr.group_id = ?
""", (rs, rowNum) -> new RoleSummary(rs.getObject("id", UUID.class),
rs.getString("name"), rs.getBoolean("system"), "direct"), groupId);
Map<UUID, RoleSummary> roleMap = new LinkedHashMap<>();
for (RoleSummary r : direct) roleMap.put(r.id(), r);
List<GroupSummary> ancestors = groupRepository.findAncestorChain(groupId);
for (GroupSummary ancestor : ancestors) {
if (ancestor.id().equals(groupId)) continue;
List<RoleSummary> parentRoles = jdbc.query("""
SELECT r.id, r.name, r.system FROM group_roles gr
JOIN roles r ON r.id = gr.role_id WHERE gr.group_id = ?
""", (rs, rowNum) -> new RoleSummary(rs.getObject("id", UUID.class),
rs.getString("name"), rs.getBoolean("system"),
ancestor.name()), ancestor.id());
for (RoleSummary r : parentRoles) roleMap.putIfAbsent(r.id(), r);
}
return new ArrayList<>(roleMap.values());
}
@Override
public List<UserSummary> getEffectivePrincipalsForRole(UUID roleId) {
Set<String> seen = new LinkedHashSet<>();
List<UserSummary> result = new ArrayList<>();
List<UserSummary> direct = jdbc.query("""
SELECT u.user_id, u.display_name, u.provider FROM user_roles ur
JOIN users u ON u.user_id = ur.user_id WHERE ur.role_id = ?
""", (rs, rowNum) -> new UserSummary(rs.getString("user_id"),
rs.getString("display_name"), rs.getString("provider")), roleId);
for (UserSummary u : direct) {
if (seen.add(u.userId())) result.add(u);
}
List<UUID> groupsWithRole = jdbc.query(
"SELECT group_id FROM group_roles WHERE role_id = ?",
(rs, rowNum) -> rs.getObject("group_id", UUID.class), roleId);
Set<UUID> allGroups = new LinkedHashSet<>(groupsWithRole);
for (UUID gid : groupsWithRole) {
collectDescendants(gid, allGroups);
}
for (UUID gid : allGroups) {
List<UserSummary> members = jdbc.query("""
SELECT u.user_id, u.display_name, u.provider FROM user_groups ug
JOIN users u ON u.user_id = ug.user_id WHERE ug.group_id = ?
""", (rs, rowNum) -> new UserSummary(rs.getString("user_id"),
rs.getString("display_name"), rs.getString("provider")), gid);
for (UserSummary u : members) {
if (seen.add(u.userId())) result.add(u);
}
}
return result;
}
private void collectDescendants(UUID groupId, Set<UUID> result) {
List<UUID> children = jdbc.query(
"SELECT id FROM groups WHERE parent_group_id = ?",
(rs, rowNum) -> rs.getObject("id", UUID.class), groupId);
for (UUID child : children) {
if (result.add(child)) {
collectDescendants(child, result);
}
}
}
@Override
public List<String> getSystemRoleNames(String userId) {
return getEffectiveRolesForUser(userId).stream()
.filter(RoleSummary::system)
.map(RoleSummary::name)
.toList();
}
@Override
public RbacStats getStats() {
int userCount = jdbc.queryForObject("SELECT COUNT(*) FROM users", Integer.class);
int activeUserCount = jdbc.queryForObject(
"SELECT COUNT(DISTINCT user_id) FROM user_roles", Integer.class);
int groupCount = jdbc.queryForObject("SELECT COUNT(*) FROM groups", Integer.class);
int roleCount = jdbc.queryForObject("SELECT COUNT(*) FROM roles", Integer.class);
int maxDepth = computeMaxGroupDepth();
return new RbacStats(userCount, activeUserCount, groupCount, maxDepth, roleCount);
}
private int computeMaxGroupDepth() {
List<UUID> roots = jdbc.query(
"SELECT id FROM groups WHERE parent_group_id IS NULL",
(rs, rowNum) -> rs.getObject("id", UUID.class));
int max = 0;
for (UUID root : roots) {
max = Math.max(max, measureDepth(root, 1));
}
return max;
}
private int measureDepth(UUID groupId, int currentDepth) {
List<UUID> children = jdbc.query(
"SELECT id FROM groups WHERE parent_group_id = ?",
(rs, rowNum) -> rs.getObject("id", UUID.class), groupId);
if (children.isEmpty()) return currentDepth;
int max = currentDepth;
for (UUID child : children) {
max = Math.max(max, measureDepth(child, currentDepth + 1));
}
return max;
}
private List<RoleSummary> getDirectRolesForUser(String userId) {
return jdbc.query("""
SELECT r.id, r.name, r.system FROM user_roles ur
JOIN roles r ON r.id = ur.role_id WHERE ur.user_id = ?
""", (rs, rowNum) -> new RoleSummary(rs.getObject("id", UUID.class),
rs.getString("name"), rs.getBoolean("system"), "direct"), userId);
}
private List<GroupSummary> getDirectGroupsForUser(String userId) {
return jdbc.query("""
SELECT g.id, g.name FROM user_groups ug
JOIN groups g ON g.id = ug.group_id WHERE ug.user_id = ?
""", (rs, rowNum) -> new GroupSummary(rs.getObject("id", UUID.class),
rs.getString("name")), userId);
}
}

View File
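The group-hierarchy walks above (`collectAncestors`, `collectDescendants`) rely on one guard to terminate: `Set.add()` returns false on a revisit, so even a corrupted parent chain with a cycle cannot recurse forever. A self-contained sketch of the ancestor walk over an in-memory parent map instead of the `groups` table; the map contents are made up for illustration:

```java
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

public class AncestorWalkDemo {
    // child -> parent; stand-in for the groups table's parent_group_id column
    static final Map<String, String> PARENT = Map.of(
            "team-a", "engineering",
            "engineering", "org",
            "broken-a", "broken-b",
            "broken-b", "broken-a"); // deliberate cycle

    // Same guard as collectAncestors(): visited.add() fails on revisit, halting cycles
    static Set<String> ancestorsOf(String group) {
        Set<String> visited = new LinkedHashSet<>();
        String current = group;
        while (current != null && visited.add(current)) {
            current = PARENT.get(current);
        }
        return visited;
    }

    public static void main(String[] args) {
        System.out.println(ancestorsOf("team-a"));   // [team-a, engineering, org]
        System.out.println(ancestorsOf("broken-a")); // [broken-a, broken-b], cycle stops
    }
}
```

`LinkedHashSet` preserves insertion order, so the result reads root-ward from the starting group, matching the ordering `getEffectiveGroupsForUser` produces.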

@@ -0,0 +1,48 @@
package com.cameleer3.server.app.retention;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
@Component
public class RetentionScheduler {
private static final Logger log = LoggerFactory.getLogger(RetentionScheduler.class);
private final JdbcTemplate jdbc;
private final int retentionDays;
public RetentionScheduler(JdbcTemplate jdbc,
@Value("${cameleer.retention-days:30}") int retentionDays) {
this.jdbc = jdbc;
this.retentionDays = retentionDays;
}
@Scheduled(cron = "0 0 2 * * *", zone = "UTC") // Daily at 2 AM UTC
public void dropExpiredChunks() {
String interval = retentionDays + " days";
try {
// Raw data
jdbc.execute("SELECT drop_chunks('executions', INTERVAL '" + interval + "')");
jdbc.execute("SELECT drop_chunks('processor_executions', INTERVAL '" + interval + "')");
jdbc.execute("SELECT drop_chunks('agent_metrics', INTERVAL '" + interval + "')");
// Continuous aggregates (keep 3x longer)
String caggInterval = (retentionDays * 3) + " days";
jdbc.execute("SELECT drop_chunks('stats_1m_all', INTERVAL '" + caggInterval + "')");
jdbc.execute("SELECT drop_chunks('stats_1m_app', INTERVAL '" + caggInterval + "')");
jdbc.execute("SELECT drop_chunks('stats_1m_route', INTERVAL '" + caggInterval + "')");
jdbc.execute("SELECT drop_chunks('stats_1m_processor', INTERVAL '" + caggInterval + "')");
log.info("Retention: dropped chunks older than {} days (aggregates: {} days)",
retentionDays, retentionDays * 3);
} catch (Exception e) {
log.error("Retention job failed", e);
}
}
// Note: OpenSearch daily index deletion should be handled via ILM policy
// configured at deployment time, not in application code.
}

View File

@@ -1,357 +0,0 @@
package com.cameleer3.server.app.search;
import com.cameleer3.server.core.search.ExecutionStats;
import com.cameleer3.server.core.search.ExecutionSummary;
import com.cameleer3.server.core.search.SearchEngine;
import com.cameleer3.server.core.search.SearchRequest;
import com.cameleer3.server.core.search.SearchResult;
import com.cameleer3.server.core.search.StatsTimeseries;
import org.springframework.jdbc.core.JdbcTemplate;
import java.sql.Timestamp;
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
/**
* ClickHouse implementation of {@link SearchEngine}.
* <p>
* Builds dynamic WHERE clauses from non-null {@link SearchRequest} fields
* and queries the {@code route_executions} table. LIKE patterns are properly
* escaped to prevent injection.
*/
public class ClickHouseSearchEngine implements SearchEngine {
/** Per-query memory cap (1 GiB) — prevents a single query from OOMing ClickHouse. */
private static final String SETTINGS = " SETTINGS max_memory_usage = 1000000000";
private final JdbcTemplate jdbcTemplate;
public ClickHouseSearchEngine(JdbcTemplate jdbcTemplate) {
this.jdbcTemplate = jdbcTemplate;
}
@Override
public SearchResult<ExecutionSummary> search(SearchRequest request) {
var conditions = new ArrayList<String>();
var params = new ArrayList<Object>();
buildWhereClause(request, conditions, params);
String where = conditions.isEmpty() ? "" : " WHERE " + String.join(" AND ", conditions);
// Count query
var countParams = params.toArray();
Long total = jdbcTemplate.queryForObject(
"SELECT count() FROM route_executions" + where + SETTINGS, Long.class, countParams);
if (total == null) total = 0L;
if (total == 0) {
return SearchResult.empty(request.offset(), request.limit());
}
// Data query
params.add(request.limit());
params.add(request.offset());
String orderDir = "asc".equalsIgnoreCase(request.sortDir()) ? "ASC" : "DESC";
String dataSql = "SELECT execution_id, route_id, agent_id, status, start_time, end_time, " +
"duration_ms, correlation_id, error_message, diagram_content_hash " +
"FROM route_executions" + where +
" ORDER BY " + request.sortColumn() + " " + orderDir + " LIMIT ? OFFSET ?" + SETTINGS;
List<ExecutionSummary> data = jdbcTemplate.query(dataSql, (rs, rowNum) -> {
Timestamp endTs = rs.getTimestamp("end_time");
return new ExecutionSummary(
rs.getString("execution_id"),
rs.getString("route_id"),
rs.getString("agent_id"),
rs.getString("status"),
rs.getTimestamp("start_time").toInstant(),
endTs != null ? endTs.toInstant() : null,
rs.getLong("duration_ms"),
rs.getString("correlation_id"),
rs.getString("error_message"),
rs.getString("diagram_content_hash")
);
}, params.toArray());
return new SearchResult<>(data, total, request.offset(), request.limit());
}
@Override
public long count(SearchRequest request) {
var conditions = new ArrayList<String>();
var params = new ArrayList<Object>();
buildWhereClause(request, conditions, params);
String where = conditions.isEmpty() ? "" : " WHERE " + String.join(" AND ", conditions);
Long result = jdbcTemplate.queryForObject(
"SELECT count() FROM route_executions" + where + SETTINGS, Long.class, params.toArray());
return result != null ? result : 0L;
}
@Override
public ExecutionStats stats(Instant from, Instant to) {
return stats(from, to, null, null);
}
@Override
public ExecutionStats stats(Instant from, Instant to, String routeId, List<String> agentIds) {
// Current period — read from rollup
var conditions = new ArrayList<String>();
var params = new ArrayList<Object>();
conditions.add("bucket >= ?");
params.add(bucketTimestamp(floorToFiveMinutes(from)));
conditions.add("bucket <= ?");
params.add(bucketTimestamp(to));
addScopeFilters(routeId, agentIds, conditions, params);
String where = " WHERE " + String.join(" AND ", conditions);
String rollupSql = "SELECT " +
"countMerge(total_count) AS cnt, " +
"countIfMerge(failed_count) AS failed, " +
"toInt64(ifNotFinite(sumMerge(duration_sum) / countMerge(total_count), 0)) AS avg_ms, " +
"toInt64(ifNotFinite(quantileTDigestMerge(0.99)(p99_duration), 0)) AS p99_ms " +
"FROM route_execution_stats_5m" + where + SETTINGS;
record PeriodStats(long totalCount, long failedCount, long avgDurationMs, long p99LatencyMs) {}
PeriodStats current = jdbcTemplate.queryForObject(rollupSql,
(rs, rowNum) -> new PeriodStats(
rs.getLong("cnt"),
rs.getLong("failed"),
rs.getLong("avg_ms"),
rs.getLong("p99_ms")),
params.toArray());
// Active count — PREWHERE reads only the status column before touching wide rows
var scopeConditions = new ArrayList<String>();
var activeParams = new ArrayList<Object>();
addScopeFilters(routeId, agentIds, scopeConditions, activeParams);
String scopeWhere = scopeConditions.isEmpty() ? "" : " WHERE " + String.join(" AND ", scopeConditions);
Long activeCount = jdbcTemplate.queryForObject(
"SELECT count() FROM route_executions PREWHERE status = 'RUNNING'" + scopeWhere + SETTINGS,
Long.class, activeParams.toArray());
// Previous period (same window shifted back 24h) — read from rollup
Duration window = Duration.between(from, to);
Instant prevFrom = from.minus(Duration.ofHours(24));
Instant prevTo = prevFrom.plus(window);
var prevConditions = new ArrayList<String>();
var prevParams = new ArrayList<Object>();
prevConditions.add("bucket >= ?");
prevParams.add(bucketTimestamp(floorToFiveMinutes(prevFrom)));
prevConditions.add("bucket <= ?");
prevParams.add(bucketTimestamp(prevTo));
addScopeFilters(routeId, agentIds, prevConditions, prevParams);
String prevWhere = " WHERE " + String.join(" AND ", prevConditions);
String prevRollupSql = "SELECT " +
"countMerge(total_count) AS cnt, " +
"countIfMerge(failed_count) AS failed, " +
"toInt64(ifNotFinite(sumMerge(duration_sum) / countMerge(total_count), 0)) AS avg_ms, " +
"toInt64(ifNotFinite(quantileTDigestMerge(0.99)(p99_duration), 0)) AS p99_ms " +
"FROM route_execution_stats_5m" + prevWhere + SETTINGS;
PeriodStats prev = jdbcTemplate.queryForObject(prevRollupSql,
(rs, rowNum) -> new PeriodStats(
rs.getLong("cnt"),
rs.getLong("failed"),
rs.getLong("avg_ms"),
rs.getLong("p99_ms")),
prevParams.toArray());
// Today total (midnight UTC to now) — read from rollup with same scope
Instant todayStart = Instant.now().truncatedTo(java.time.temporal.ChronoUnit.DAYS);
var todayConditions = new ArrayList<String>();
var todayParams = new ArrayList<Object>();
todayConditions.add("bucket >= ?");
todayParams.add(bucketTimestamp(floorToFiveMinutes(todayStart)));
addScopeFilters(routeId, agentIds, todayConditions, todayParams);
String todayWhere = " WHERE " + String.join(" AND ", todayConditions);
Long totalToday = jdbcTemplate.queryForObject(
"SELECT countMerge(total_count) FROM route_execution_stats_5m" + todayWhere + SETTINGS,
Long.class, todayParams.toArray());
return new ExecutionStats(
current.totalCount, current.failedCount, current.avgDurationMs,
current.p99LatencyMs, activeCount != null ? activeCount : 0L,
totalToday != null ? totalToday : 0L,
prev.totalCount, prev.failedCount, prev.avgDurationMs, prev.p99LatencyMs);
}
@Override
public StatsTimeseries timeseries(Instant from, Instant to, int bucketCount) {
return timeseries(from, to, bucketCount, null, null);
}
@Override
public StatsTimeseries timeseries(Instant from, Instant to, int bucketCount,
String routeId, List<String> agentIds) {
long intervalSeconds = Duration.between(from, to).getSeconds() / bucketCount;
if (intervalSeconds < 1) intervalSeconds = 1;
var conditions = new ArrayList<String>();
var params = new ArrayList<Object>();
conditions.add("bucket >= ?");
params.add(bucketTimestamp(floorToFiveMinutes(from)));
conditions.add("bucket <= ?");
params.add(bucketTimestamp(to));
addScopeFilters(routeId, agentIds, conditions, params);
String where = " WHERE " + String.join(" AND ", conditions);
// Re-aggregate 5-minute rollup buckets into the requested interval
String sql = "SELECT " +
"toDateTime(intDiv(toUInt32(bucket), " + intervalSeconds + ") * " + intervalSeconds + ") AS ts_bucket, " +
"countMerge(total_count) AS cnt, " +
"countIfMerge(failed_count) AS failed, " +
"toInt64(ifNotFinite(sumMerge(duration_sum) / countMerge(total_count), 0)) AS avg_ms, " +
"toInt64(ifNotFinite(quantileTDigestMerge(0.99)(p99_duration), 0)) AS p99_ms " +
"FROM route_execution_stats_5m" + where +
" GROUP BY ts_bucket ORDER BY ts_bucket" + SETTINGS;
List<StatsTimeseries.TimeseriesBucket> buckets = jdbcTemplate.query(sql, (rs, rowNum) ->
new StatsTimeseries.TimeseriesBucket(
rs.getTimestamp("ts_bucket").toInstant(),
rs.getLong("cnt"),
rs.getLong("failed"),
rs.getLong("avg_ms"),
rs.getLong("p99_ms"),
0L
),
params.toArray());
return new StatsTimeseries(buckets);
}
private void buildWhereClause(SearchRequest req, List<String> conditions, List<Object> params) {
if (req.status() != null && !req.status().isBlank()) {
String[] statuses = req.status().split(",");
if (statuses.length == 1) {
conditions.add("status = ?");
params.add(statuses[0].trim());
} else {
String placeholders = String.join(", ", Collections.nCopies(statuses.length, "?"));
conditions.add("status IN (" + placeholders + ")");
for (String s : statuses) {
params.add(s.trim());
}
}
}
if (req.timeFrom() != null) {
conditions.add("start_time >= ?");
params.add(Timestamp.from(req.timeFrom()));
}
if (req.timeTo() != null) {
conditions.add("start_time <= ?");
params.add(Timestamp.from(req.timeTo()));
}
if (req.durationMin() != null) {
conditions.add("duration_ms >= ?");
params.add(req.durationMin());
}
if (req.durationMax() != null) {
conditions.add("duration_ms <= ?");
params.add(req.durationMax());
}
if (req.correlationId() != null && !req.correlationId().isBlank()) {
conditions.add("correlation_id = ?");
params.add(req.correlationId());
}
if (req.routeId() != null && !req.routeId().isBlank()) {
conditions.add("route_id = ?");
params.add(req.routeId());
}
if (req.agentId() != null && !req.agentId().isBlank()) {
conditions.add("agent_id = ?");
params.add(req.agentId());
}
// agentIds from group resolution (takes precedence when agentId is not set)
if ((req.agentId() == null || req.agentId().isBlank())
&& req.agentIds() != null && !req.agentIds().isEmpty()) {
String placeholders = String.join(", ", Collections.nCopies(req.agentIds().size(), "?"));
conditions.add("agent_id IN (" + placeholders + ")");
params.addAll(req.agentIds());
}
if (req.processorType() != null && !req.processorType().isBlank()) {
conditions.add("has(processor_types, ?)");
params.add(req.processorType());
}
if (req.text() != null && !req.text().isBlank()) {
String pattern = "%" + escapeLike(req.text()) + "%";
String[] textColumns = {
"execution_id", "route_id", "agent_id",
"error_message", "error_stacktrace",
"exchange_bodies", "exchange_headers"
};
var likeClauses = java.util.Arrays.stream(textColumns)
.map(col -> col + " LIKE ?")
.toList();
conditions.add("(" + String.join(" OR ", likeClauses) + ")");
for (int i = 0; i < textColumns.length; i++) {
params.add(pattern);
}
}
if (req.textInBody() != null && !req.textInBody().isBlank()) {
conditions.add("exchange_bodies LIKE ?");
params.add("%" + escapeLike(req.textInBody()) + "%");
}
if (req.textInHeaders() != null && !req.textInHeaders().isBlank()) {
conditions.add("exchange_headers LIKE ?");
params.add("%" + escapeLike(req.textInHeaders()) + "%");
}
if (req.textInErrors() != null && !req.textInErrors().isBlank()) {
String pattern = "%" + escapeLike(req.textInErrors()) + "%";
conditions.add("(error_message LIKE ? OR error_stacktrace LIKE ?)");
params.add(pattern);
params.add(pattern);
}
}
/**
* Add route ID and agent IDs scope filters to conditions/params.
*/
private void addScopeFilters(String routeId, List<String> agentIds,
List<String> conditions, List<Object> params) {
if (routeId != null && !routeId.isBlank()) {
conditions.add("route_id = ?");
params.add(routeId);
}
if (agentIds != null && !agentIds.isEmpty()) {
String placeholders = String.join(", ", Collections.nCopies(agentIds.size(), "?"));
conditions.add("agent_id IN (" + placeholders + ")");
params.addAll(agentIds);
}
}
/**
* Floor an Instant to the start of its 5-minute bucket.
*/
private static Instant floorToFiveMinutes(Instant instant) {
long epochSecond = instant.getEpochSecond();
return Instant.ofEpochSecond(epochSecond - (epochSecond % 300));
}
/**
* Create a second-precision Timestamp for rollup bucket comparisons.
* The bucket column is DateTime('UTC') (second precision); the JDBC driver
* sends java.sql.Timestamp with nanoseconds which ClickHouse rejects.
*/
private static Timestamp bucketTimestamp(Instant instant) {
return Timestamp.from(instant.truncatedTo(java.time.temporal.ChronoUnit.SECONDS));
}
/**
* Escape special LIKE characters to prevent LIKE injection.
*/
static String escapeLike(String input) {
return input
.replace("\\", "\\\\")
.replace("%", "\\%")
.replace("_", "\\_");
}
}
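The helper methods in this repository class are easy to check in isolation. Below is a minimal standalone sketch (illustrative class and method names, not part of the codebase) of the 5-minute flooring, the interval re-aggregation that the `intDiv(toUInt32(bucket), interval) * interval` SQL performs, and the LIKE escaping:

```java
import java.time.Duration;
import java.time.Instant;

public class BucketMathDemo {
    // Mirrors floorToFiveMinutes: drop the remainder modulo 300 seconds.
    static Instant floorToFiveMinutes(Instant instant) {
        long epochSecond = instant.getEpochSecond();
        return Instant.ofEpochSecond(epochSecond - (epochSecond % 300));
    }

    // Mirrors the SQL re-aggregation: integer-divide, then multiply back.
    static long rebucket(long epochSecond, long intervalSeconds) {
        return (epochSecond / intervalSeconds) * intervalSeconds;
    }

    // Mirrors escapeLike: backslash must be escaped first, then % and _.
    static String escapeLike(String input) {
        return input.replace("\\", "\\\\").replace("%", "\\%").replace("_", "\\_");
    }

    public static void main(String[] args) {
        Instant t = Instant.parse("2026-03-24T10:07:42Z");
        System.out.println(floorToFiveMinutes(t)); // 2026-03-24T10:05:00Z
        // A 6-hour window rendered as 72 buckets gives 300-second intervals,
        // so the 5-minute rollup buckets map 1:1 onto timeseries buckets.
        long intervalSeconds = Duration.ofHours(6).getSeconds() / 72;
        System.out.println(intervalSeconds); // 300
        System.out.println(escapeLike("50%_off\\sale")); // 50\%\_off\\sale
    }
}
```

Note the escape order in `escapeLike`: replacing `\` last would double-escape the backslashes introduced for `%` and `_`.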


@@ -0,0 +1,336 @@
package com.cameleer3.server.app.search;
import com.cameleer3.server.core.search.ExecutionSummary;
import com.cameleer3.server.core.search.SearchRequest;
import com.cameleer3.server.core.search.SearchResult;
import com.cameleer3.server.core.storage.SearchIndex;
import com.cameleer3.server.core.storage.model.ExecutionDocument;
import com.cameleer3.server.core.storage.model.ExecutionDocument.ProcessorDoc;
import jakarta.annotation.PostConstruct;
import org.opensearch.client.json.JsonData;
import org.opensearch.client.opensearch.OpenSearchClient;
import org.opensearch.client.opensearch._types.FieldValue;
import org.opensearch.client.opensearch._types.SortOrder;
import org.opensearch.client.opensearch._types.query_dsl.*;
import org.opensearch.client.opensearch.core.*;
import org.opensearch.client.opensearch.core.search.Hit;
import org.opensearch.client.opensearch.indices.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Repository;
import java.io.IOException;
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.util.*;
import java.util.stream.Collectors;
@Repository
public class OpenSearchIndex implements SearchIndex {
private static final Logger log = LoggerFactory.getLogger(OpenSearchIndex.class);
private static final DateTimeFormatter DAY_FMT = DateTimeFormatter.ofPattern("yyyy-MM-dd")
.withZone(ZoneOffset.UTC);
private final OpenSearchClient client;
private final String indexPrefix;
public OpenSearchIndex(OpenSearchClient client,
@Value("${opensearch.index-prefix:executions-}") String indexPrefix) {
this.client = client;
this.indexPrefix = indexPrefix;
}
@PostConstruct
void ensureIndexTemplate() {
String templateName = indexPrefix + "template";
String indexPattern = indexPrefix + "*";
try {
boolean exists = client.indices().existsIndexTemplate(
ExistsIndexTemplateRequest.of(b -> b.name(templateName))).value();
if (!exists) {
client.indices().putIndexTemplate(PutIndexTemplateRequest.of(b -> b
.name(templateName)
.indexPatterns(List.of(indexPattern))
.template(t -> t
.settings(s -> s
.numberOfShards("3")
.numberOfReplicas("1"))
.mappings(m -> m
.properties("processors", p -> p
.nested(n -> n))))));
log.info("OpenSearch index template created");
}
} catch (IOException e) {
log.error("Failed to create index template", e);
}
}
@Override
public void index(ExecutionDocument doc) {
String indexName = indexPrefix + DAY_FMT.format(doc.startTime());
try {
client.index(IndexRequest.of(b -> b
.index(indexName)
.id(doc.executionId())
.document(toMap(doc))));
} catch (IOException e) {
log.error("Failed to index execution {}", doc.executionId(), e);
}
}
@Override
public SearchResult<ExecutionSummary> search(SearchRequest request) {
try {
var searchReq = buildSearchRequest(request, request.limit());
var response = client.search(searchReq, Map.class);
List<ExecutionSummary> items = response.hits().hits().stream()
.map(this::hitToSummary)
.collect(Collectors.toList());
long total = response.hits().total() != null ? response.hits().total().value() : 0;
return new SearchResult<>(items, total, request.offset(), request.limit());
} catch (IOException e) {
log.error("Search failed", e);
return SearchResult.empty(request.offset(), request.limit());
}
}
@Override
public long count(SearchRequest request) {
try {
var countReq = CountRequest.of(b -> b
.index(indexPrefix + "*")
.query(buildQuery(request)));
return client.count(countReq).count();
} catch (IOException e) {
log.error("Count failed", e);
return 0;
}
}
@Override
public void delete(String executionId) {
try {
client.deleteByQuery(DeleteByQueryRequest.of(b -> b
.index(List.of(indexPrefix + "*"))
.query(Query.of(q -> q.term(t -> t
.field("execution_id")
.value(FieldValue.of(executionId)))))));
} catch (IOException e) {
log.error("Failed to delete execution {}", executionId, e);
}
}
private org.opensearch.client.opensearch.core.SearchRequest buildSearchRequest(
SearchRequest request, int size) {
return org.opensearch.client.opensearch.core.SearchRequest.of(b -> {
b.index(indexPrefix + "*")
.query(buildQuery(request))
.trackTotalHits(th -> th.enabled(true))
.size(size)
.from(request.offset())
.sort(s -> s.field(f -> f
.field(request.sortColumn())
.order("asc".equalsIgnoreCase(request.sortDir())
? SortOrder.Asc : SortOrder.Desc)));
return b;
});
}
private Query buildQuery(SearchRequest request) {
List<Query> must = new ArrayList<>();
List<Query> filter = new ArrayList<>();
// Time range
if (request.timeFrom() != null || request.timeTo() != null) {
filter.add(Query.of(q -> q.range(r -> {
r.field("start_time");
if (request.timeFrom() != null)
r.gte(JsonData.of(request.timeFrom().toString()));
if (request.timeTo() != null)
r.lte(JsonData.of(request.timeTo().toString()));
return r;
})));
}
// Keyword filters (use .keyword sub-field for exact matching on dynamically mapped text fields)
if (request.status() != null)
filter.add(termQuery("status.keyword", request.status()));
if (request.routeId() != null)
filter.add(termQuery("route_id.keyword", request.routeId()));
if (request.agentId() != null)
filter.add(termQuery("agent_id.keyword", request.agentId()));
if (request.correlationId() != null)
filter.add(termQuery("correlation_id.keyword", request.correlationId()));
// Full-text search across all fields + nested processor fields
if (request.text() != null && !request.text().isBlank()) {
String text = request.text();
String wildcard = "*" + text.toLowerCase() + "*";
List<Query> textQueries = new ArrayList<>();
// Search top-level text fields (analyzed match + wildcard for substring)
textQueries.add(Query.of(q -> q.multiMatch(m -> m
.query(text)
.fields("error_message", "error_stacktrace"))));
textQueries.add(Query.of(q -> q.wildcard(w -> w
.field("error_message").value(wildcard).caseInsensitive(true))));
textQueries.add(Query.of(q -> q.wildcard(w -> w
.field("error_stacktrace").value(wildcard).caseInsensitive(true))));
// Search nested processor fields (analyzed match + wildcard)
textQueries.add(Query.of(q -> q.nested(n -> n
.path("processors")
.query(nq -> nq.multiMatch(m -> m
.query(text)
.fields("processors.input_body", "processors.output_body",
"processors.input_headers", "processors.output_headers",
"processors.error_message", "processors.error_stacktrace"))))));
textQueries.add(Query.of(q -> q.nested(n -> n
.path("processors")
.query(nq -> nq.bool(nb -> nb.should(
wildcardQuery("processors.input_body", wildcard),
wildcardQuery("processors.output_body", wildcard),
wildcardQuery("processors.input_headers", wildcard),
wildcardQuery("processors.output_headers", wildcard)
).minimumShouldMatch("1"))))));
// Also try keyword fields for exact matches
textQueries.add(Query.of(q -> q.multiMatch(m -> m
.query(text)
.fields("execution_id", "route_id", "agent_id", "correlation_id", "exchange_id"))));
must.add(Query.of(q -> q.bool(b -> b.should(textQueries).minimumShouldMatch("1"))));
}
// Scoped text searches (multiMatch + wildcard fallback for substring matching)
if (request.textInBody() != null && !request.textInBody().isBlank()) {
String bodyText = request.textInBody();
String bodyWildcard = "*" + bodyText.toLowerCase() + "*";
must.add(Query.of(q -> q.nested(n -> n
.path("processors")
.query(nq -> nq.bool(nb -> nb.should(
Query.of(mq -> mq.multiMatch(m -> m
.query(bodyText)
.fields("processors.input_body", "processors.output_body"))),
wildcardQuery("processors.input_body", bodyWildcard),
wildcardQuery("processors.output_body", bodyWildcard)
).minimumShouldMatch("1"))))));
}
if (request.textInHeaders() != null && !request.textInHeaders().isBlank()) {
String headerText = request.textInHeaders();
String headerWildcard = "*" + headerText.toLowerCase() + "*";
must.add(Query.of(q -> q.nested(n -> n
.path("processors")
.query(nq -> nq.bool(nb -> nb.should(
Query.of(mq -> mq.multiMatch(m -> m
.query(headerText)
.fields("processors.input_headers", "processors.output_headers"))),
wildcardQuery("processors.input_headers", headerWildcard),
wildcardQuery("processors.output_headers", headerWildcard)
).minimumShouldMatch("1"))))));
}
if (request.textInErrors() != null && !request.textInErrors().isBlank()) {
String errText = request.textInErrors();
String errWildcard = "*" + errText.toLowerCase() + "*";
must.add(Query.of(q -> q.bool(b -> b.should(
Query.of(sq -> sq.multiMatch(m -> m
.query(errText)
.fields("error_message", "error_stacktrace"))),
wildcardQuery("error_message", errWildcard),
wildcardQuery("error_stacktrace", errWildcard),
Query.of(sq -> sq.nested(n -> n
.path("processors")
.query(nq -> nq.bool(nb -> nb.should(
Query.of(nmq -> nmq.multiMatch(m -> m
.query(errText)
.fields("processors.error_message", "processors.error_stacktrace"))),
wildcardQuery("processors.error_message", errWildcard),
wildcardQuery("processors.error_stacktrace", errWildcard)
).minimumShouldMatch("1")))))
).minimumShouldMatch("1"))));
}
// Duration range
if (request.durationMin() != null || request.durationMax() != null) {
filter.add(Query.of(q -> q.range(r -> {
r.field("duration_ms");
if (request.durationMin() != null)
r.gte(JsonData.of(request.durationMin()));
if (request.durationMax() != null)
r.lte(JsonData.of(request.durationMax()));
return r;
})));
}
return Query.of(q -> q.bool(b -> {
if (!must.isEmpty()) b.must(must);
if (!filter.isEmpty()) b.filter(filter);
if (must.isEmpty() && filter.isEmpty()) b.must(Query.of(mq -> mq.matchAll(m -> m)));
return b;
}));
}
private Query termQuery(String field, String value) {
return Query.of(q -> q.term(t -> t.field(field).value(FieldValue.of(value))));
}
private Query wildcardQuery(String field, String pattern) {
return Query.of(q -> q.wildcard(w -> w.field(field).value(pattern).caseInsensitive(true)));
}
private Map<String, Object> toMap(ExecutionDocument doc) {
Map<String, Object> map = new LinkedHashMap<>();
map.put("execution_id", doc.executionId());
map.put("route_id", doc.routeId());
map.put("agent_id", doc.agentId());
map.put("application_name", doc.applicationName());
map.put("status", doc.status());
map.put("correlation_id", doc.correlationId());
map.put("exchange_id", doc.exchangeId());
map.put("start_time", doc.startTime() != null ? doc.startTime().toString() : null);
map.put("end_time", doc.endTime() != null ? doc.endTime().toString() : null);
map.put("duration_ms", doc.durationMs());
map.put("error_message", doc.errorMessage());
map.put("error_stacktrace", doc.errorStacktrace());
if (doc.processors() != null) {
map.put("processors", doc.processors().stream().map(p -> {
Map<String, Object> pm = new LinkedHashMap<>();
pm.put("processor_id", p.processorId());
pm.put("processor_type", p.processorType());
pm.put("status", p.status());
pm.put("error_message", p.errorMessage());
pm.put("error_stacktrace", p.errorStacktrace());
pm.put("input_body", p.inputBody());
pm.put("output_body", p.outputBody());
pm.put("input_headers", p.inputHeaders());
pm.put("output_headers", p.outputHeaders());
return pm;
}).toList());
}
return map;
}
@SuppressWarnings("unchecked")
private ExecutionSummary hitToSummary(Hit<Map> hit) {
Map<String, Object> src = hit.source();
if (src == null) return null;
return new ExecutionSummary(
(String) src.get("execution_id"),
(String) src.get("route_id"),
(String) src.get("agent_id"),
(String) src.get("application_name"),
(String) src.get("status"),
src.get("start_time") != null ? Instant.parse((String) src.get("start_time")) : null,
src.get("end_time") != null ? Instant.parse((String) src.get("end_time")) : null,
src.get("duration_ms") != null ? ((Number) src.get("duration_ms")).longValue() : 0L,
(String) src.get("correlation_id"),
(String) src.get("error_message"),
null // diagramContentHash not stored in index
);
}
}
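Indexing routes each document into a per-UTC-day index derived from `startTime`, while searches fan out over `indexPrefix + "*"`. A minimal sketch of the naming scheme, assuming the default prefix `executions-` (the demo class and method names are illustrative):

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class IndexRoutingDemo {
    private static final DateTimeFormatter DAY_FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd").withZone(ZoneOffset.UTC);

    // Mirrors OpenSearchIndex.index(): one index per UTC day.
    static String indexNameFor(String indexPrefix, Instant startTime) {
        return indexPrefix + DAY_FMT.format(startTime);
    }

    public static void main(String[] args) {
        // An execution starting just before midnight UTC lands in the previous day's index.
        System.out.println(indexNameFor("executions-", Instant.parse("2026-03-23T23:59:59Z")));
        // executions-2026-03-23
        System.out.println(indexNameFor("executions-", Instant.parse("2026-03-24T00:00:01Z")));
        // executions-2026-03-24
    }
}
```

Daily indices keep deletes cheap (drop whole indices past retention) at the cost of the wildcard fan-out on every search.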


@@ -60,13 +60,13 @@ public class JwtServiceImpl implements JwtService {
}
@Override
-public String createAccessToken(String subject, String group, List<String> roles) {
-return createToken(subject, group, roles, "access", properties.getAccessTokenExpiryMs());
+public String createAccessToken(String subject, String application, List<String> roles) {
+return createToken(subject, application, roles, "access", properties.getAccessTokenExpiryMs());
}
@Override
-public String createRefreshToken(String subject, String group, List<String> roles) {
-return createToken(subject, group, roles, "refresh", properties.getRefreshTokenExpiryMs());
+public String createRefreshToken(String subject, String application, List<String> roles) {
+return createToken(subject, application, roles, "refresh", properties.getRefreshTokenExpiryMs());
}
@Override
@@ -84,12 +84,12 @@ public class JwtServiceImpl implements JwtService {
return validateAccessToken(token).subject();
}
-private String createToken(String subject, String group, List<String> roles,
+private String createToken(String subject, String application, List<String> roles,
String type, long expiryMs) {
Instant now = Instant.now();
JWTClaimsSet claims = new JWTClaimsSet.Builder()
.subject(subject)
-.claim("group", group)
+.claim("group", application)
.claim("type", type)
.claim("roles", roles)
.issueTime(Date.from(now))
@@ -132,7 +132,7 @@ public class JwtServiceImpl implements JwtService {
throw new InvalidTokenException("Token has no subject");
}
-String group = claims.getStringClaim("group");
+String application = claims.getStringClaim("group");
// Extract roles — may be absent in legacy tokens
List<String> roles;
@@ -145,7 +145,7 @@ public class JwtServiceImpl implements JwtService {
roles = List.of();
}
-return new JwtValidationResult(subject, group, roles);
+return new JwtValidationResult(subject, application, roles);
} catch (ParseException e) {
throw new InvalidTokenException("Failed to parse JWT", e);
} catch (JOSEException e) {


@@ -3,11 +3,17 @@ package com.cameleer3.server.app.security;
import com.cameleer3.server.app.dto.AuthTokenResponse;
import com.cameleer3.server.app.dto.ErrorResponse;
import com.cameleer3.server.app.dto.OidcPublicConfigResponse;
import com.cameleer3.server.core.admin.AuditCategory;
import com.cameleer3.server.core.admin.AuditResult;
import com.cameleer3.server.core.admin.AuditService;
import com.cameleer3.server.core.rbac.RbacService;
import com.cameleer3.server.core.rbac.SystemRole;
import com.cameleer3.server.core.security.JwtService;
import com.cameleer3.server.core.security.OidcConfig;
import com.cameleer3.server.core.security.OidcConfigRepository;
import com.cameleer3.server.core.security.UserInfo;
import com.cameleer3.server.core.security.UserRepository;
import jakarta.servlet.http.HttpServletRequest;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.media.Content;
import io.swagger.v3.oas.annotations.media.Schema;
@@ -27,12 +33,14 @@ import org.springframework.web.server.ResponseStatusException;
import java.net.URI;
import java.time.Instant;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.UUID;
/**
* OIDC authentication endpoints for the UI.
* <p>
-* Always registered returns 404 when OIDC is not configured or disabled.
+* Always registered -- returns 404 when OIDC is not configured or disabled.
* Configuration is read from the database (managed via admin UI).
*/
@RestController
@@ -46,15 +54,21 @@ public class OidcAuthController {
private final OidcConfigRepository configRepository;
private final JwtService jwtService;
private final UserRepository userRepository;
private final AuditService auditService;
private final RbacService rbacService;
public OidcAuthController(OidcTokenExchanger tokenExchanger,
OidcConfigRepository configRepository,
JwtService jwtService,
-UserRepository userRepository) {
+UserRepository userRepository,
+AuditService auditService,
+RbacService rbacService) {
this.tokenExchanger = tokenExchanger;
this.configRepository = configRepository;
this.jwtService = jwtService;
this.userRepository = userRepository;
this.auditService = auditService;
this.rbacService = rbacService;
}
/**
@@ -100,7 +114,8 @@ public class OidcAuthController {
@ApiResponse(responseCode = "403", description = "Account not provisioned",
content = @Content(schema = @Schema(implementation = ErrorResponse.class)))
@ApiResponse(responseCode = "404", description = "OIDC not configured or disabled")
-public ResponseEntity<AuthTokenResponse> callback(@RequestBody CallbackRequest request) {
+public ResponseEntity<AuthTokenResponse> callback(@RequestBody CallbackRequest request,
+HttpServletRequest httpRequest) {
Optional<OidcConfig> config = configRepository.find();
if (config.isEmpty() || !config.get().enabled()) {
return ResponseEntity.notFound().build();
@@ -121,17 +136,24 @@ public class OidcAuthController {
"Account not provisioned. Contact your administrator.");
}
-// Resolve roles: DB override > OIDC claim > default
-List<String> roles = resolveRoles(existingUser, oidcUser.roles(), config.get());
+// Upsert user (without roles -- roles are in user_roles table)
userRepository.upsert(new UserInfo(
-userId, provider, oidcUser.email(), oidcUser.name(), roles, Instant.now()));
+userId, provider, oidcUser.email(), oidcUser.name(), Instant.now()));
// Assign roles if new user
if (existingUser.isEmpty()) {
assignRolesForNewUser(userId, oidcUser.roles(), config.get());
}
List<String> roles = rbacService.getSystemRoleNames(userId);
String accessToken = jwtService.createAccessToken(userId, "user", roles);
String refreshToken = jwtService.createRefreshToken(userId, "user", roles);
String displayName = oidcUser.name() != null && !oidcUser.name().isBlank()
? oidcUser.name() : oidcUser.email();
auditService.log(userId, "login_oidc", AuditCategory.AUTH, null,
Map.of("provider", config.get().issuerUri()), AuditResult.SUCCESS, httpRequest);
return ResponseEntity.ok(new AuthTokenResponse(accessToken, refreshToken, displayName, oidcUser.idToken()));
} catch (ResponseStatusException e) {
throw e;
@@ -142,14 +164,14 @@ public class OidcAuthController {
}
}
-private List<String> resolveRoles(Optional<UserInfo> existing, List<String> oidcRoles, OidcConfig config) {
-if (existing.isPresent() && !existing.get().roles().isEmpty()) {
-return existing.get().roles();
-}
-if (!oidcRoles.isEmpty()) {
-return oidcRoles;
-}
-return config.defaultRoles();
-}
+private void assignRolesForNewUser(String userId, List<String> oidcRoles, OidcConfig config) {
+List<String> roleNames = !oidcRoles.isEmpty() ? oidcRoles : config.defaultRoles();
+for (String roleName : roleNames) {
+UUID roleId = SystemRole.BY_NAME.get(roleName.toUpperCase());
+if (roleId != null) {
+rbacService.assignRoleToUser(userId, roleId);
+}
+}
+}
public record CallbackRequest(String code, String redirectUri) {}
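The new-user role assignment resolves the OIDC claim roles first, falls back to `config.defaultRoles()`, and silently skips names with no matching system role. A standalone sketch of that precedence, with made-up role IDs standing in for `SystemRole.BY_NAME` (all names here are illustrative):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class RoleAssignmentDemo {
    // Stand-in for SystemRole.BY_NAME; the IDs are invented for this sketch.
    static final Map<String, String> BY_NAME = Map.of(
            "ADMIN", "role-admin", "OPERATOR", "role-operator", "VIEWER", "role-viewer");

    // Mirrors assignRolesForNewUser: OIDC claim roles win over config defaults;
    // unknown role names resolve to null and are dropped.
    static List<String> resolveRoleIds(List<String> oidcRoles, List<String> defaultRoles) {
        List<String> names = !oidcRoles.isEmpty() ? oidcRoles : defaultRoles;
        return names.stream()
                .map(name -> BY_NAME.get(name.toUpperCase()))
                .filter(id -> id != null)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(resolveRoleIds(List.of("operator", "unknown"), List.of("VIEWER")));
        // [role-operator]
        System.out.println(resolveRoleIds(List.of(), List.of("VIEWER")));
        // [role-viewer]
    }
}
```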


@@ -1,29 +1,20 @@
package com.cameleer3.server.app.security;
import com.cameleer3.server.core.security.OidcConfig;
import com.cameleer3.server.core.security.OidcConfigRepository;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import java.util.List;
/**
* Configuration class that creates security service beans and validates
* that required security properties are set.
* <p>
* Fails fast on startup if {@code CAMELEER_AUTH_TOKEN} is not set.
* Seeds OIDC config from env vars into ClickHouse if DB is empty.
*/
@Configuration
@EnableConfigurationProperties(SecurityProperties.class)
public class SecurityBeanConfig {
private static final Logger log = LoggerFactory.getLogger(SecurityBeanConfig.class);
@Bean
public JwtServiceImpl jwtService(SecurityProperties properties) {
return new JwtServiceImpl(properties);
@@ -50,36 +41,4 @@ public class SecurityBeanConfig {
};
}
/**
* Seeds OIDC config from env vars into the database if the DB has no config yet.
* This allows initial provisioning via env vars, after which the admin UI takes over.
*/
@Bean
public InitializingBean oidcConfigSeeder(SecurityProperties properties,
OidcConfigRepository configRepository) {
return () -> {
if (configRepository.find().isPresent()) {
log.debug("OIDC config already present in database, skipping env var seed");
return;
}
SecurityProperties.Oidc envOidc = properties.getOidc();
if (envOidc.isEnabled()
&& envOidc.getIssuerUri() != null && !envOidc.getIssuerUri().isBlank()
&& envOidc.getClientId() != null && !envOidc.getClientId().isBlank()) {
OidcConfig config = new OidcConfig(
true,
envOidc.getIssuerUri(),
envOidc.getClientId(),
envOidc.getClientSecret() != null ? envOidc.getClientSecret() : "",
envOidc.getRolesClaim(),
envOidc.getDefaultRoles(),
true,
"name"
);
configRepository.save(config);
log.info("OIDC config seeded from environment variables: issuer={}", envOidc.getIssuerUri());
}
};
}
}


@@ -6,6 +6,7 @@ import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpMethod;
import org.springframework.http.HttpStatus;
import org.springframework.security.config.annotation.method.configuration.EnableMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configurers.AbstractHttpConfigurer;
@@ -27,6 +28,7 @@ import java.util.List;
*/
@Configuration
@EnableWebSecurity
@EnableMethodSecurity
public class SecurityConfig {
@Bean
@@ -78,7 +80,10 @@ public class SecurityConfig {
// Read-only data endpoints — viewer+
.requestMatchers(HttpMethod.GET, "/api/v1/executions/**").hasAnyRole("VIEWER", "OPERATOR", "ADMIN")
.requestMatchers(HttpMethod.GET, "/api/v1/diagrams/**").hasAnyRole("VIEWER", "OPERATOR", "ADMIN")
.requestMatchers(HttpMethod.GET, "/api/v1/agents/*/metrics").hasAnyRole("VIEWER", "OPERATOR", "ADMIN")
.requestMatchers(HttpMethod.GET, "/api/v1/agents").hasAnyRole("VIEWER", "OPERATOR", "ADMIN")
.requestMatchers(HttpMethod.GET, "/api/v1/agents/events-log").hasAnyRole("VIEWER", "OPERATOR", "ADMIN")
.requestMatchers(HttpMethod.GET, "/api/v1/routes/**").hasAnyRole("VIEWER", "OPERATOR", "ADMIN")
.requestMatchers(HttpMethod.GET, "/api/v1/stats/**").hasAnyRole("VIEWER", "OPERATOR", "ADMIN")
// Admin endpoints


@@ -2,8 +2,6 @@ package com.cameleer3.server.app.security;
import org.springframework.boot.context.properties.ConfigurationProperties;
import java.util.List;
/**
* Configuration properties for security settings.
* Bound from the {@code security.*} namespace in application.yml.
@@ -19,29 +17,6 @@ public class SecurityProperties {
private String uiPassword;
private String uiOrigin;
private String jwtSecret;
private Oidc oidc = new Oidc();
public static class Oidc {
private boolean enabled = false;
private String issuerUri;
private String clientId;
private String clientSecret;
private String rolesClaim = "realm_access.roles";
private List<String> defaultRoles = List.of("VIEWER");
public boolean isEnabled() { return enabled; }
public void setEnabled(boolean enabled) { this.enabled = enabled; }
public String getIssuerUri() { return issuerUri; }
public void setIssuerUri(String issuerUri) { this.issuerUri = issuerUri; }
public String getClientId() { return clientId; }
public void setClientId(String clientId) { this.clientId = clientId; }
public String getClientSecret() { return clientSecret; }
public void setClientSecret(String clientSecret) { this.clientSecret = clientSecret; }
public String getRolesClaim() { return rolesClaim; }
public void setRolesClaim(String rolesClaim) { this.rolesClaim = rolesClaim; }
public List<String> getDefaultRoles() { return defaultRoles; }
public void setDefaultRoles(List<String> defaultRoles) { this.defaultRoles = defaultRoles; }
}
public long getAccessTokenExpiryMs() { return accessTokenExpiryMs; }
public void setAccessTokenExpiryMs(long accessTokenExpiryMs) { this.accessTokenExpiryMs = accessTokenExpiryMs; }
@@ -59,6 +34,4 @@ public class SecurityProperties {
public void setUiOrigin(String uiOrigin) { this.uiOrigin = uiOrigin; }
public String getJwtSecret() { return jwtSecret; }
public void setJwtSecret(String jwtSecret) { this.jwtSecret = jwtSecret; }
public Oidc getOidc() { return oidc; }
public void setOidc(Oidc oidc) { this.oidc = oidc; }
}


@@ -2,7 +2,13 @@ package com.cameleer3.server.app.security;
import com.cameleer3.server.app.dto.AuthTokenResponse;
import com.cameleer3.server.app.dto.ErrorResponse;
import com.cameleer3.server.core.admin.AuditCategory;
import com.cameleer3.server.core.admin.AuditResult;
import com.cameleer3.server.core.admin.AuditService;
import com.cameleer3.server.core.rbac.RbacService;
import com.cameleer3.server.core.rbac.SystemRole;
import com.cameleer3.server.core.security.JwtService;
import jakarta.servlet.http.HttpServletRequest;
import com.cameleer3.server.core.security.JwtService.JwtValidationResult;
import com.cameleer3.server.core.security.UserInfo;
import com.cameleer3.server.core.security.UserRepository;
@@ -15,6 +21,7 @@ import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
@@ -23,6 +30,8 @@ import org.springframework.web.server.ResponseStatusException;
import java.time.Instant;
import java.util.List;
import java.util.Map;
import java.util.Optional;
/**
* Authentication endpoints for the UI (local credentials).
@@ -37,16 +46,22 @@ import java.util.List;
public class UiAuthController {
private static final Logger log = LoggerFactory.getLogger(UiAuthController.class);
private static final BCryptPasswordEncoder passwordEncoder = new BCryptPasswordEncoder();
private final JwtService jwtService;
private final SecurityProperties properties;
private final UserRepository userRepository;
private final AuditService auditService;
private final RbacService rbacService;
public UiAuthController(JwtService jwtService, SecurityProperties properties,
UserRepository userRepository) {
UserRepository userRepository, AuditService auditService,
RbacService rbacService) {
this.jwtService = jwtService;
this.properties = properties;
this.userRepository = userRepository;
this.auditService = auditService;
this.rbacService = rbacService;
}
@PostMapping("/login")
@@ -54,36 +69,51 @@ public class UiAuthController {
@ApiResponse(responseCode = "200", description = "Login successful")
@ApiResponse(responseCode = "401", description = "Invalid credentials",
content = @Content(schema = @Schema(implementation = ErrorResponse.class)))
-public ResponseEntity<AuthTokenResponse> login(@RequestBody LoginRequest request) {
+public ResponseEntity<AuthTokenResponse> login(@RequestBody LoginRequest request,
+HttpServletRequest httpRequest) {
String configuredUser = properties.getUiUser();
String configuredPassword = properties.getUiPassword();
-if (configuredUser == null || configuredUser.isBlank()
-|| configuredPassword == null || configuredPassword.isBlank()) {
-log.warn("UI authentication attempted but CAMELEER_UI_USER / CAMELEER_UI_PASSWORD not configured");
-throw new ResponseStatusException(HttpStatus.UNAUTHORIZED, "UI authentication not configured");
-}
-if (!configuredUser.equals(request.username())
-|| !configuredPassword.equals(request.password())) {
-log.debug("UI login failed for user: {}", request.username());
-throw new ResponseStatusException(HttpStatus.UNAUTHORIZED, "Invalid credentials");
-}
String subject = "user:" + request.username();
-List<String> roles = List.of("ADMIN");
-// Upsert local user into store
-try {
-userRepository.upsert(new UserInfo(
-subject, "local", "", request.username(), roles, Instant.now()));
-} catch (Exception e) {
-log.warn("Failed to upsert local user to store (login continues): {}", e.getMessage());
-}
+// Try env-var admin first
+boolean envMatch = configuredUser != null && !configuredUser.isBlank()
+&& configuredPassword != null && !configuredPassword.isBlank()
+&& configuredUser.equals(request.username())
+&& configuredPassword.equals(request.password());
+if (!envMatch) {
+// Try per-user password
+Optional<String> hash = userRepository.getPasswordHash(subject);
+if (hash.isEmpty() || !passwordEncoder.matches(request.password(), hash.get())) {
+log.debug("UI login failed for user: {}", request.username());
+auditService.log(request.username(), "login_failed", AuditCategory.AUTH, null,
+Map.of("reason", "Invalid credentials"), AuditResult.FAILURE, httpRequest);
+throw new ResponseStatusException(HttpStatus.UNAUTHORIZED, "Invalid credentials");
+}
+}
+if (envMatch) {
+// Env-var admin: upsert and ensure ADMIN role + Admins group
+try {
+userRepository.upsert(new UserInfo(
+subject, "local", "", request.username(), Instant.now()));
+rbacService.assignRoleToUser(subject, SystemRole.ADMIN_ID);
+rbacService.addUserToGroup(subject, SystemRole.ADMINS_GROUP_ID);
+} catch (Exception e) {
+log.warn("Failed to upsert local admin to store (login continues): {}", e.getMessage());
+}
+}
+// Per-user logins: user already exists in DB (created by admin)
+List<String> roles = rbacService.getSystemRoleNames(subject);
+if (roles.isEmpty()) {
+roles = List.of("VIEWER");
+}
String accessToken = jwtService.createAccessToken(subject, "user", roles);
String refreshToken = jwtService.createRefreshToken(subject, "user", roles);
auditService.log(request.username(), "login", AuditCategory.AUTH, null, null, AuditResult.SUCCESS, httpRequest);
log.info("UI user logged in: {}", request.username());
return ResponseEntity.ok(new AuthTokenResponse(accessToken, refreshToken, request.username(), null));
}
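The new login flow above tries the env-var admin credentials first and only then falls back to the per-user BCrypt hash. A minimal sketch of that decision, where `matcher` is a stand-in for Spring Security's `BCryptPasswordEncoder.matches` and the `authenticated` helper is hypothetical (the controller inlines this logic):

```java
import java.util.Optional;
import java.util.function.BiPredicate;

// Sketch of the two-step credential check: env-var admin wins outright;
// otherwise the stored per-user hash decides. `matcher` stands in for
// BCryptPasswordEncoder.matches(rawPassword, encodedHash).
public class LoginCheckDemo {
    static boolean authenticated(String user, String password,
                                 String envUser, String envPassword,
                                 Optional<String> storedHash,
                                 BiPredicate<String, String> matcher) {
        boolean envMatch = envUser != null && !envUser.isBlank()
                && envPassword != null && !envPassword.isBlank()
                && envUser.equals(user) && envPassword.equals(password);
        if (envMatch) {
            return true; // env-var admin: no stored hash required
        }
        // Fallback: a per-user password hash must exist and match
        return storedHash.isPresent() && matcher.test(password, storedHash.get());
    }

    public static void main(String[] args) {
        // Fake matcher for the sketch: "hash:<raw>" is a valid hash of <raw>
        BiPredicate<String, String> fakeMatcher = (raw, hash) -> hash.equals("hash:" + raw);
        if (!authenticated("admin", "s3cret", "admin", "s3cret", Optional.empty(), fakeMatcher))
            throw new AssertionError("env admin should log in");
        if (!authenticated("bob", "pw", "admin", "s3cret", Optional.of("hash:pw"), fakeMatcher))
            throw new AssertionError("per-user hash should log in");
        if (authenticated("bob", "wrong", "admin", "s3cret", Optional.of("hash:pw"), fakeMatcher))
            throw new AssertionError("wrong password must fail");
    }
}
```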

View File

@@ -1,418 +0,0 @@
package com.cameleer3.server.app.storage;
import com.cameleer3.common.model.ExchangeSnapshot;
import com.cameleer3.common.model.ProcessorExecution;
import com.cameleer3.common.model.RouteExecution;
import com.cameleer3.server.core.detail.RawExecutionRow;
import com.cameleer3.server.core.ingestion.TaggedExecution;
import com.cameleer3.server.core.storage.DiagramRepository;
import com.cameleer3.server.core.storage.ExecutionRepository;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.jdbc.core.BatchPreparedStatementSetter;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.UUID;
/**
* ClickHouse implementation of {@link ExecutionRepository}.
* <p>
* Performs batch inserts into the {@code route_executions} table.
* Processor executions are flattened into parallel arrays with tree metadata
* (depth, parent index) for reconstruction.
*/
@Repository
public class ClickHouseExecutionRepository implements ExecutionRepository {
private static final Logger log = LoggerFactory.getLogger(ClickHouseExecutionRepository.class);
private static final ObjectMapper OBJECT_MAPPER = new ObjectMapper();
private static final String INSERT_SQL = """
INSERT INTO route_executions (
execution_id, route_id, agent_id, status, start_time, end_time,
duration_ms, correlation_id, exchange_id, error_message, error_stacktrace,
processor_ids, processor_types, processor_starts, processor_ends,
processor_durations, processor_statuses,
exchange_bodies, exchange_headers,
processor_depths, processor_parent_indexes,
processor_error_messages, processor_error_stacktraces,
processor_input_bodies, processor_output_bodies,
processor_input_headers, processor_output_headers,
processor_diagram_node_ids, diagram_content_hash
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
""";
private final JdbcTemplate jdbcTemplate;
private final DiagramRepository diagramRepository;
public ClickHouseExecutionRepository(JdbcTemplate jdbcTemplate, DiagramRepository diagramRepository) {
this.jdbcTemplate = jdbcTemplate;
this.diagramRepository = diagramRepository;
}
@Override
public void insertBatch(List<TaggedExecution> executions) {
if (executions.isEmpty()) {
return;
}
jdbcTemplate.batchUpdate(INSERT_SQL, new BatchPreparedStatementSetter() {
@Override
public void setValues(PreparedStatement ps, int i) throws SQLException {
TaggedExecution tagged = executions.get(i);
RouteExecution exec = tagged.execution();
String agentId = tagged.agentId() != null ? tagged.agentId() : "";
List<FlatProcessor> flatProcessors = flattenWithMetadata(exec.getProcessors());
int col = 1;
ps.setString(col++, UUID.randomUUID().toString());
ps.setString(col++, nullSafe(exec.getRouteId()));
ps.setString(col++, agentId);
ps.setString(col++, exec.getStatus() != null ? exec.getStatus().name() : "RUNNING");
ps.setObject(col++, toTimestamp(exec.getStartTime()));
ps.setObject(col++, toTimestamp(exec.getEndTime()));
ps.setLong(col++, exec.getDurationMs());
ps.setString(col++, nullSafe(exec.getCorrelationId()));
ps.setString(col++, nullSafe(exec.getExchangeId()));
ps.setString(col++, nullSafe(exec.getErrorMessage()));
ps.setString(col++, nullSafe(exec.getErrorStackTrace()));
// Original parallel arrays
ps.setObject(col++, flatProcessors.stream().map(fp -> nullSafe(fp.proc.getProcessorId())).toArray(String[]::new));
ps.setObject(col++, flatProcessors.stream().map(fp -> nullSafe(fp.proc.getProcessorType())).toArray(String[]::new));
ps.setObject(col++, flatProcessors.stream().map(fp -> toTimestamp(fp.proc.getStartTime())).toArray(Timestamp[]::new));
ps.setObject(col++, flatProcessors.stream().map(fp -> toTimestamp(fp.proc.getEndTime())).toArray(Timestamp[]::new));
ps.setObject(col++, flatProcessors.stream().mapToLong(fp -> fp.proc.getDurationMs()).boxed().toArray(Long[]::new));
ps.setObject(col++, flatProcessors.stream().map(fp -> fp.proc.getStatus() != null ? fp.proc.getStatus().name() : "RUNNING").toArray(String[]::new));
// Phase 2: exchange bodies and headers (concatenated for search)
StringBuilder allBodies = new StringBuilder();
StringBuilder allHeaders = new StringBuilder();
String[] inputBodies = new String[flatProcessors.size()];
String[] outputBodies = new String[flatProcessors.size()];
String[] inputHeaders = new String[flatProcessors.size()];
String[] outputHeaders = new String[flatProcessors.size()];
String[] errorMessages = new String[flatProcessors.size()];
String[] errorStacktraces = new String[flatProcessors.size()];
String[] diagramNodeIds = new String[flatProcessors.size()];
Short[] depths = new Short[flatProcessors.size()];
Integer[] parentIndexes = new Integer[flatProcessors.size()];
for (int j = 0; j < flatProcessors.size(); j++) {
FlatProcessor fp = flatProcessors.get(j);
ProcessorExecution p = fp.proc;
inputBodies[j] = nullSafe(p.getInputBody());
outputBodies[j] = nullSafe(p.getOutputBody());
inputHeaders[j] = mapToJson(p.getInputHeaders());
outputHeaders[j] = mapToJson(p.getOutputHeaders());
errorMessages[j] = nullSafe(p.getErrorMessage());
errorStacktraces[j] = nullSafe(p.getErrorStackTrace());
diagramNodeIds[j] = nullSafe(p.getDiagramNodeId());
depths[j] = (short) fp.depth;
parentIndexes[j] = fp.parentIndex;
allBodies.append(inputBodies[j]).append(' ').append(outputBodies[j]).append(' ');
allHeaders.append(inputHeaders[j]).append(' ').append(outputHeaders[j]).append(' ');
}
// Include route-level input/output snapshot in searchable text
appendSnapshotText(exec.getInputSnapshot(), allBodies, allHeaders);
appendSnapshotText(exec.getOutputSnapshot(), allBodies, allHeaders);
ps.setString(col++, allBodies.toString().trim()); // exchange_bodies
ps.setString(col++, allHeaders.toString().trim()); // exchange_headers
ps.setObject(col++, depths); // processor_depths
ps.setObject(col++, parentIndexes); // processor_parent_indexes
ps.setObject(col++, errorMessages); // processor_error_messages
ps.setObject(col++, errorStacktraces); // processor_error_stacktraces
ps.setObject(col++, inputBodies); // processor_input_bodies
ps.setObject(col++, outputBodies); // processor_output_bodies
ps.setObject(col++, inputHeaders); // processor_input_headers
ps.setObject(col++, outputHeaders); // processor_output_headers
ps.setObject(col++, diagramNodeIds); // processor_diagram_node_ids
String diagramHash = diagramRepository
.findContentHashForRoute(exec.getRouteId(), agentId)
.orElse("");
ps.setString(col++, diagramHash); // diagram_content_hash
}
@Override
public int getBatchSize() {
return executions.size();
}
});
log.debug("Inserted batch of {} route executions into ClickHouse", executions.size());
}
@Override
public Optional<RawExecutionRow> findRawById(String executionId) {
String sql = """
SELECT execution_id, route_id, agent_id, status, start_time, end_time,
duration_ms, correlation_id, exchange_id, error_message, error_stacktrace,
diagram_content_hash,
processor_ids, processor_types, processor_statuses,
processor_starts, processor_ends, processor_durations,
processor_diagram_node_ids,
processor_error_messages, processor_error_stacktraces,
processor_depths, processor_parent_indexes
FROM route_executions
WHERE execution_id = ?
LIMIT 1
""";
List<RawExecutionRow> results = jdbcTemplate.query(sql, (rs, rowNum) -> {
// Extract parallel arrays from ClickHouse
String[] processorIds = toStringArray(rs.getArray("processor_ids"));
String[] processorTypes = toStringArray(rs.getArray("processor_types"));
String[] processorStatuses = toStringArray(rs.getArray("processor_statuses"));
Instant[] processorStarts = toInstantArray(rs.getArray("processor_starts"));
Instant[] processorEnds = toInstantArray(rs.getArray("processor_ends"));
long[] processorDurations = toLongArray(rs.getArray("processor_durations"));
String[] processorDiagramNodeIds = toStringArray(rs.getArray("processor_diagram_node_ids"));
String[] processorErrorMessages = toStringArray(rs.getArray("processor_error_messages"));
String[] processorErrorStacktraces = toStringArray(rs.getArray("processor_error_stacktraces"));
int[] processorDepths = toIntArrayFromShort(rs.getArray("processor_depths"));
int[] processorParentIndexes = toIntArray(rs.getArray("processor_parent_indexes"));
Timestamp endTs = rs.getTimestamp("end_time");
return new RawExecutionRow(
rs.getString("execution_id"),
rs.getString("route_id"),
rs.getString("agent_id"),
rs.getString("status"),
rs.getTimestamp("start_time").toInstant(),
endTs != null ? endTs.toInstant() : null,
rs.getLong("duration_ms"),
rs.getString("correlation_id"),
rs.getString("exchange_id"),
rs.getString("error_message"),
rs.getString("error_stacktrace"),
rs.getString("diagram_content_hash"),
processorIds, processorTypes, processorStatuses,
processorStarts, processorEnds, processorDurations,
processorDiagramNodeIds,
processorErrorMessages, processorErrorStacktraces,
processorDepths, processorParentIndexes
);
}, executionId);
return results.isEmpty() ? Optional.empty() : Optional.of(results.get(0));
}
/**
* Find exchange snapshot data for a specific processor by index.
*
* @param executionId the execution ID
* @param processorIndex 0-based processor index
* @return map with inputBody, outputBody, inputHeaders, outputHeaders or empty if not found
*/
public Optional<java.util.Map<String, String>> findProcessorSnapshot(String executionId, int processorIndex) {
// ClickHouse arrays are 1-indexed in SQL
int chIndex = processorIndex + 1;
String sql = """
SELECT
processor_input_bodies[?] AS input_body,
processor_output_bodies[?] AS output_body,
processor_input_headers[?] AS input_headers,
processor_output_headers[?] AS output_headers,
length(processor_ids) AS proc_count
FROM route_executions
WHERE execution_id = ?
LIMIT 1
""";
List<java.util.Map<String, String>> results = jdbcTemplate.query(sql, (rs, rowNum) -> {
int procCount = rs.getInt("proc_count");
if (processorIndex < 0 || processorIndex >= procCount) {
return null;
}
var snapshot = new java.util.LinkedHashMap<String, String>();
snapshot.put("inputBody", rs.getString("input_body"));
snapshot.put("outputBody", rs.getString("output_body"));
snapshot.put("inputHeaders", rs.getString("input_headers"));
snapshot.put("outputHeaders", rs.getString("output_headers"));
return snapshot;
}, chIndex, chIndex, chIndex, chIndex, executionId);
if (results.isEmpty() || results.get(0) == null) {
return Optional.empty();
}
return Optional.of(results.get(0));
}
// --- Array extraction helpers ---
private static String[] toStringArray(java.sql.Array sqlArray) throws SQLException {
if (sqlArray == null) return new String[0];
Object arr = sqlArray.getArray();
if (arr instanceof String[] sa) return sa;
if (arr instanceof Object[] oa) {
String[] result = new String[oa.length];
for (int i = 0; i < oa.length; i++) {
result[i] = oa[i] != null ? oa[i].toString() : "";
}
return result;
}
return new String[0];
}
private static Instant[] toInstantArray(java.sql.Array sqlArray) throws SQLException {
if (sqlArray == null) return new Instant[0];
Object arr = sqlArray.getArray();
if (arr instanceof Timestamp[] ts) {
Instant[] result = new Instant[ts.length];
for (int i = 0; i < ts.length; i++) {
result[i] = ts[i] != null ? ts[i].toInstant() : Instant.EPOCH;
}
return result;
}
if (arr instanceof Object[] oa) {
Instant[] result = new Instant[oa.length];
for (int i = 0; i < oa.length; i++) {
if (oa[i] instanceof Timestamp ts) {
result[i] = ts.toInstant();
} else {
result[i] = Instant.EPOCH;
}
}
return result;
}
return new Instant[0];
}
private static long[] toLongArray(java.sql.Array sqlArray) throws SQLException {
if (sqlArray == null) return new long[0];
Object arr = sqlArray.getArray();
if (arr instanceof long[] la) return la;
if (arr instanceof Long[] la) {
long[] result = new long[la.length];
for (int i = 0; i < la.length; i++) {
result[i] = la[i] != null ? la[i] : 0;
}
return result;
}
if (arr instanceof Object[] oa) {
long[] result = new long[oa.length];
for (int i = 0; i < oa.length; i++) {
result[i] = oa[i] instanceof Number n ? n.longValue() : 0;
}
return result;
}
return new long[0];
}
private static int[] toIntArray(java.sql.Array sqlArray) throws SQLException {
if (sqlArray == null) return new int[0];
Object arr = sqlArray.getArray();
if (arr instanceof int[] ia) return ia;
if (arr instanceof Integer[] ia) {
int[] result = new int[ia.length];
for (int i = 0; i < ia.length; i++) {
result[i] = ia[i] != null ? ia[i] : 0;
}
return result;
}
if (arr instanceof Object[] oa) {
int[] result = new int[oa.length];
for (int i = 0; i < oa.length; i++) {
result[i] = oa[i] instanceof Number n ? n.intValue() : 0;
}
return result;
}
return new int[0];
}
private static int[] toIntArrayFromShort(java.sql.Array sqlArray) throws SQLException {
if (sqlArray == null) return new int[0];
Object arr = sqlArray.getArray();
if (arr instanceof short[] sa) {
int[] result = new int[sa.length];
for (int i = 0; i < sa.length; i++) {
result[i] = sa[i];
}
return result;
}
if (arr instanceof int[] ia) return ia;
if (arr instanceof Object[] oa) {
int[] result = new int[oa.length];
for (int i = 0; i < oa.length; i++) {
result[i] = oa[i] instanceof Number n ? n.intValue() : 0;
}
return result;
}
return new int[0];
}
/**
* Internal record for a flattened processor with tree metadata.
*/
private record FlatProcessor(ProcessorExecution proc, int depth, int parentIndex) {}
/**
* Flatten the processor tree with depth and parent index metadata (DFS order).
*/
private List<FlatProcessor> flattenWithMetadata(List<ProcessorExecution> processors) {
if (processors == null || processors.isEmpty()) {
return List.of();
}
var result = new ArrayList<FlatProcessor>();
for (ProcessorExecution p : processors) {
flattenRecursive(p, 0, -1, result);
}
return result;
}
private void flattenRecursive(ProcessorExecution processor, int depth, int parentIdx,
List<FlatProcessor> result) {
int myIndex = result.size();
result.add(new FlatProcessor(processor, depth, parentIdx));
if (processor.getChildren() != null) {
for (ProcessorExecution child : processor.getChildren()) {
flattenRecursive(child, depth + 1, myIndex, result);
}
}
}
private void appendSnapshotText(ExchangeSnapshot snapshot,
StringBuilder allBodies, StringBuilder allHeaders) {
if (snapshot != null) {
allBodies.append(nullSafe(snapshot.getBody())).append(' ');
allHeaders.append(mapToJson(snapshot.getHeaders())).append(' ');
}
}
private static String mapToJson(Map<String, String> map) {
if (map == null || map.isEmpty()) {
return "{}";
}
try {
return OBJECT_MAPPER.writeValueAsString(map);
} catch (JsonProcessingException e) {
log.warn("Failed to serialize headers map to JSON", e);
return "{}";
}
}
private static String nullSafe(String value) {
return value != null ? value : "";
}
private static Timestamp toTimestamp(Instant instant) {
return instant != null ? Timestamp.from(instant) : Timestamp.from(Instant.EPOCH);
}
}
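The deleted repository above flattens the processor tree depth-first into parallel arrays, recording each node's depth and its parent's index in the flat list so the tree can be rebuilt from columns. A minimal standalone sketch of that scheme (the `Node`/`Flat` types are illustrative, not the real `ProcessorExecution`):

```java
import java.util.ArrayList;
import java.util.List;

// DFS flattening with tree metadata: each node is emitted with its depth
// and the flat-list index of its parent (-1 for roots). Depth + parent
// index are enough to reconstruct the tree from parallel arrays.
public class FlattenDemo {
    record Node(String id, List<Node> children) {}
    record Flat(String id, int depth, int parentIndex) {}

    static List<Flat> flatten(List<Node> roots) {
        var out = new ArrayList<Flat>();
        for (Node r : roots) walk(r, 0, -1, out);
        return out;
    }

    private static void walk(Node n, int depth, int parentIdx, List<Flat> out) {
        int myIndex = out.size(); // capture index BEFORE children are appended
        out.add(new Flat(n.id(), depth, parentIdx));
        for (Node c : n.children()) walk(c, depth + 1, myIndex, out);
    }

    public static void main(String[] args) {
        // split -> [log, to]: both children share parent index 0
        Node tree = new Node("split", List.of(
                new Node("log", List.of()),
                new Node("to", List.of())));
        List<Flat> flat = flatten(List.of(tree));
        if (flat.get(0).parentIndex() != -1
                || flat.get(1).parentIndex() != 0
                || flat.get(2).parentIndex() != 0)
            throw new AssertionError("unexpected flattening: " + flat);
        System.out.println(flat);
    }
}
```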

View File

@@ -1,67 +0,0 @@
package com.cameleer3.server.app.storage;
import com.cameleer3.server.core.storage.MetricsRepository;
import com.cameleer3.server.core.storage.model.MetricsSnapshot;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.jdbc.core.BatchPreparedStatementSetter;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.time.Instant;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
/**
* ClickHouse implementation of {@link MetricsRepository}.
* <p>
* Performs batch inserts into the {@code agent_metrics} table.
*/
@Repository
public class ClickHouseMetricsRepository implements MetricsRepository {
private static final Logger log = LoggerFactory.getLogger(ClickHouseMetricsRepository.class);
private static final String INSERT_SQL = """
INSERT INTO agent_metrics (agent_id, collected_at, metric_name, metric_value, tags)
VALUES (?, ?, ?, ?, ?)
""";
private final JdbcTemplate jdbcTemplate;
public ClickHouseMetricsRepository(JdbcTemplate jdbcTemplate) {
this.jdbcTemplate = jdbcTemplate;
}
@Override
public void insertBatch(List<MetricsSnapshot> metrics) {
if (metrics.isEmpty()) {
return;
}
jdbcTemplate.batchUpdate(INSERT_SQL, new BatchPreparedStatementSetter() {
@Override
public void setValues(PreparedStatement ps, int i) throws SQLException {
MetricsSnapshot m = metrics.get(i);
ps.setString(1, m.agentId() != null ? m.agentId() : "");
ps.setObject(2, m.collectedAt() != null ? Timestamp.from(m.collectedAt()) : Timestamp.from(Instant.EPOCH));
ps.setString(3, m.metricName() != null ? m.metricName() : "");
ps.setDouble(4, m.metricValue());
// ClickHouse Map(String, String) -- pass as a java.util.Map
Map<String, String> tags = m.tags() != null ? m.tags() : new HashMap<>();
ps.setObject(5, tags);
}
@Override
public int getBatchSize() {
return metrics.size();
}
});
log.debug("Inserted batch of {} metrics into ClickHouse", metrics.size());
}
}

View File

@@ -1,71 +0,0 @@
package com.cameleer3.server.app.storage;
import com.cameleer3.server.core.security.OidcConfig;
import com.cameleer3.server.core.security.OidcConfigRepository;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
/**
* ClickHouse implementation of {@link OidcConfigRepository}.
* Singleton row with {@code config_id = 'default'}, using ReplacingMergeTree.
*/
@Repository
public class ClickHouseOidcConfigRepository implements OidcConfigRepository {
private final JdbcTemplate jdbc;
public ClickHouseOidcConfigRepository(JdbcTemplate jdbc) {
this.jdbc = jdbc;
}
@Override
public Optional<OidcConfig> find() {
List<OidcConfig> results = jdbc.query(
"SELECT enabled, issuer_uri, client_id, client_secret, roles_claim, default_roles, auto_signup, display_name_claim "
+ "FROM oidc_config FINAL WHERE config_id = 'default'",
this::mapRow
);
return results.isEmpty() ? Optional.empty() : Optional.of(results.get(0));
}
@Override
public void save(OidcConfig config) {
jdbc.update(
"INSERT INTO oidc_config (config_id, enabled, issuer_uri, client_id, client_secret, roles_claim, default_roles, auto_signup, display_name_claim, updated_at) "
+ "VALUES ('default', ?, ?, ?, ?, ?, ?, ?, ?, now64(3, 'UTC'))",
config.enabled(),
config.issuerUri(),
config.clientId(),
config.clientSecret(),
config.rolesClaim(),
config.defaultRoles().toArray(new String[0]),
config.autoSignup(),
config.displayNameClaim()
);
}
@Override
public void delete() {
jdbc.update("DELETE FROM oidc_config WHERE config_id = 'default'");
}
private OidcConfig mapRow(ResultSet rs, int rowNum) throws SQLException {
String[] rolesArray = (String[]) rs.getArray("default_roles").getArray();
return new OidcConfig(
rs.getBoolean("enabled"),
rs.getString("issuer_uri"),
rs.getString("client_id"),
rs.getString("client_secret"),
rs.getString("roles_claim"),
Arrays.asList(rolesArray),
rs.getBoolean("auto_signup"),
rs.getString("display_name_claim")
);
}
}

View File

@@ -1,112 +0,0 @@
package com.cameleer3.server.app.storage;
import com.cameleer3.server.core.security.UserInfo;
import com.cameleer3.server.core.security.UserRepository;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.time.Instant;
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
/**
* ClickHouse implementation of {@link UserRepository}.
* <p>
* Uses ReplacingMergeTree — reads use {@code FINAL} to get the latest version.
*/
@Repository
public class ClickHouseUserRepository implements UserRepository {
private final JdbcTemplate jdbc;
public ClickHouseUserRepository(JdbcTemplate jdbc) {
this.jdbc = jdbc;
}
@Override
public Optional<UserInfo> findById(String userId) {
List<UserInfo> results = jdbc.query(
"SELECT user_id, provider, email, display_name, roles, created_at "
+ "FROM users FINAL WHERE user_id = ?",
this::mapRow,
userId
);
return results.isEmpty() ? Optional.empty() : Optional.of(results.get(0));
}
@Override
public List<UserInfo> findAll() {
return jdbc.query(
"SELECT user_id, provider, email, display_name, roles, created_at FROM users FINAL ORDER BY user_id",
this::mapRow
);
}
@Override
public void upsert(UserInfo user) {
Optional<UserInfo> existing = findById(user.userId());
if (existing.isPresent()) {
UserInfo ex = existing.get();
// Skip write if nothing changed — avoids accumulating un-merged rows
if (ex.provider().equals(user.provider())
&& ex.email().equals(user.email())
&& ex.displayName().equals(user.displayName())
&& ex.roles().equals(user.roles())) {
return;
}
jdbc.update(
"INSERT INTO users (user_id, provider, email, display_name, roles, created_at, updated_at) "
+ "SELECT user_id, ?, ?, ?, ?, created_at, now64(3, 'UTC') "
+ "FROM users FINAL WHERE user_id = ?",
user.provider(),
user.email(),
user.displayName(),
user.roles().toArray(new String[0]),
user.userId()
);
} else {
jdbc.update(
"INSERT INTO users (user_id, provider, email, display_name, roles, updated_at) "
+ "VALUES (?, ?, ?, ?, ?, now64(3, 'UTC'))",
user.userId(),
user.provider(),
user.email(),
user.displayName(),
user.roles().toArray(new String[0])
);
}
}
@Override
public void updateRoles(String userId, List<String> roles) {
// ReplacingMergeTree: insert a new row with updated_at to supersede the old one.
// Copy existing fields, update roles.
jdbc.update(
"INSERT INTO users (user_id, provider, email, display_name, roles, created_at, updated_at) "
+ "SELECT user_id, provider, email, display_name, ?, created_at, now64(3, 'UTC') "
+ "FROM users FINAL WHERE user_id = ?",
roles.toArray(new String[0]),
userId
);
}
@Override
public void delete(String userId) {
jdbc.update("DELETE FROM users WHERE user_id = ?", userId);
}
private UserInfo mapRow(ResultSet rs, int rowNum) throws SQLException {
String[] rolesArray = (String[]) rs.getArray("roles").getArray();
return new UserInfo(
rs.getString("user_id"),
rs.getString("provider"),
rs.getString("email"),
rs.getString("display_name"),
Arrays.asList(rolesArray),
rs.getTimestamp("created_at").toInstant()
);
}
}
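The two ClickHouse repositories above rely on ReplacingMergeTree semantics: an "update" is just an insert of a new row version, and reads use `FINAL` so the row with the latest `updated_at` wins per key. An in-memory simulation of that read path (the `Row` record and `readFinal` helper are illustrative):

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Simulates "SELECT ... FROM users FINAL": among all row versions for a
// key, the one with the greatest updated_at supersedes the rest.
public class ReplacingDemo {
    record Row(String userId, String displayName, Instant updatedAt) {}

    static Optional<Row> readFinal(List<Row> table, String userId) {
        return table.stream()
                .filter(r -> r.userId().equals(userId))
                .max(Comparator.comparing(Row::updatedAt)); // FINAL = latest version
    }

    public static void main(String[] args) {
        var table = new ArrayList<Row>();
        table.add(new Row("u1", "Alice", Instant.parse("2026-01-01T00:00:00Z")));
        // "upsert" = insert a newer version, never an in-place UPDATE
        table.add(new Row("u1", "Alice B.", Instant.parse("2026-02-01T00:00:00Z")));
        if (!readFinal(table, "u1").orElseThrow().displayName().equals("Alice B."))
            throw new AssertionError("latest version should win");
    }
}
```

This is also why the `upsert` above skips the write when nothing changed: every no-op insert would otherwise add another un-merged row version.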

View File

@@ -0,0 +1,62 @@
package com.cameleer3.server.app.storage;
import com.cameleer3.server.core.agent.AgentEventRecord;
import com.cameleer3.server.core.agent.AgentEventRepository;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;
import java.sql.Timestamp;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
@Repository
public class PostgresAgentEventRepository implements AgentEventRepository {
private final JdbcTemplate jdbc;
public PostgresAgentEventRepository(JdbcTemplate jdbc) {
this.jdbc = jdbc;
}
@Override
public void insert(String agentId, String appId, String eventType, String detail) {
jdbc.update(
"INSERT INTO agent_events (agent_id, app_id, event_type, detail) VALUES (?, ?, ?, ?)",
agentId, appId, eventType, detail);
}
@Override
public List<AgentEventRecord> query(String appId, String agentId, Instant from, Instant to, int limit) {
var sql = new StringBuilder("SELECT id, agent_id, app_id, event_type, detail, timestamp FROM agent_events WHERE 1=1");
var params = new ArrayList<Object>();
if (appId != null) {
sql.append(" AND app_id = ?");
params.add(appId);
}
if (agentId != null) {
sql.append(" AND agent_id = ?");
params.add(agentId);
}
if (from != null) {
sql.append(" AND timestamp >= ?");
params.add(Timestamp.from(from));
}
if (to != null) {
sql.append(" AND timestamp < ?");
params.add(Timestamp.from(to));
}
sql.append(" ORDER BY timestamp DESC LIMIT ?");
params.add(limit);
return jdbc.query(sql.toString(), (rs, rowNum) -> new AgentEventRecord(
rs.getLong("id"),
rs.getString("agent_id"),
rs.getString("app_id"),
rs.getString("event_type"),
rs.getString("detail"),
rs.getTimestamp("timestamp").toInstant()
), params.toArray());
}
}
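The `query` method above uses a common dynamic-filter pattern: every optional predicate appends both a SQL fragment and its bind parameter, so placeholder count and parameter count stay in lockstep. A reduced sketch with two of the filters:

```java
import java.util.ArrayList;
import java.util.List;

// Dynamic WHERE builder: "WHERE 1=1" lets every filter start with " AND",
// and each appended "?" is paired with exactly one params.add(...).
public class WhereBuilderDemo {
    static String build(String appId, String agentId, List<Object> params) {
        var sql = new StringBuilder("SELECT * FROM agent_events WHERE 1=1");
        if (appId != null) {
            sql.append(" AND app_id = ?");
            params.add(appId);
        }
        if (agentId != null) {
            sql.append(" AND agent_id = ?");
            params.add(agentId);
        }
        return sql.toString();
    }

    public static void main(String[] args) {
        var params = new ArrayList<Object>();
        String sql = build("shop", null, params);
        if (!sql.endsWith("AND app_id = ?") || params.size() != 1)
            throw new AssertionError("placeholders and params out of sync");
        System.out.println(sql + "  params=" + params);
    }
}
```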

View File

@@ -0,0 +1,131 @@
package com.cameleer3.server.app.storage;
import com.cameleer3.server.core.admin.AuditCategory;
import com.cameleer3.server.core.admin.AuditRecord;
import com.cameleer3.server.core.admin.AuditRepository;
import com.cameleer3.server.core.admin.AuditResult;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;
@Repository
public class PostgresAuditRepository implements AuditRepository {
private static final Set<String> ALLOWED_SORT_COLUMNS =
Set.of("timestamp", "username", "action", "category");
private static final int MAX_PAGE_SIZE = 100;
private final JdbcTemplate jdbc;
private final ObjectMapper objectMapper;
public PostgresAuditRepository(JdbcTemplate jdbc, ObjectMapper objectMapper) {
this.jdbc = jdbc;
this.objectMapper = objectMapper;
}
@Override
public void insert(AuditRecord record) {
String detailJson = null;
if (record.detail() != null) {
try {
detailJson = objectMapper.writeValueAsString(record.detail());
} catch (JsonProcessingException e) {
throw new RuntimeException("Failed to serialize audit detail", e);
}
}
jdbc.update("""
INSERT INTO audit_log (username, action, category, target, detail, result, ip_address, user_agent)
VALUES (?, ?, ?, ?, ?::jsonb, ?, ?, ?)
""",
record.username(), record.action(),
record.category() != null ? record.category().name() : null,
record.target(), detailJson,
record.result() != null ? record.result().name() : null,
record.ipAddress(), record.userAgent());
}
@Override
public AuditPage find(AuditQuery query) {
int pageSize = Math.min(query.size() > 0 ? query.size() : 20, MAX_PAGE_SIZE);
int offset = query.page() * pageSize;
StringBuilder where = new StringBuilder("WHERE timestamp >= ? AND timestamp <= ?");
List<Object> params = new ArrayList<>();
params.add(Timestamp.from(query.from()));
params.add(Timestamp.from(query.to()));
if (query.username() != null && !query.username().isBlank()) {
where.append(" AND username = ?");
params.add(query.username());
}
if (query.category() != null) {
where.append(" AND category = ?");
params.add(query.category().name());
}
if (query.search() != null && !query.search().isBlank()) {
where.append(" AND (action ILIKE ? OR target ILIKE ?)");
String like = "%" + query.search() + "%";
params.add(like);
params.add(like);
}
// Count query
String countSql = "SELECT COUNT(*) FROM audit_log " + where;
Long totalCount = jdbc.queryForObject(countSql, Long.class, params.toArray());
// Sort column validation
String sortCol = ALLOWED_SORT_COLUMNS.contains(query.sort()) ? query.sort() : "timestamp";
String order = "asc".equalsIgnoreCase(query.order()) ? "ASC" : "DESC";
String dataSql = "SELECT * FROM audit_log " + where
+ " ORDER BY " + sortCol + " " + order
+ " LIMIT ? OFFSET ?";
List<Object> dataParams = new ArrayList<>(params);
dataParams.add(pageSize);
dataParams.add(offset);
List<AuditRecord> items = jdbc.query(dataSql, (rs, rowNum) -> mapRecord(rs), dataParams.toArray());
return new AuditPage(items, totalCount != null ? totalCount : 0);
}
@SuppressWarnings("unchecked")
private AuditRecord mapRecord(ResultSet rs) throws SQLException {
Map<String, Object> detail = null;
String detailStr = rs.getString("detail");
if (detailStr != null) {
try {
detail = objectMapper.readValue(detailStr, Map.class);
} catch (JsonProcessingException e) {
// leave detail as null if unparseable
}
}
Timestamp ts = rs.getTimestamp("timestamp");
String categoryStr = rs.getString("category");
String resultStr = rs.getString("result");
return new AuditRecord(
rs.getLong("id"),
ts != null ? ts.toInstant() : null,
rs.getString("username"),
rs.getString("action"),
categoryStr != null ? AuditCategory.valueOf(categoryStr) : null,
rs.getString("target"),
detail,
resultStr != null ? AuditResult.valueOf(resultStr) : null,
rs.getString("ip_address"),
rs.getString("user_agent")
);
}
}
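`ORDER BY` columns cannot be bound as JDBC parameters, which is why `find` above validates the requested sort column against a fixed allowlist before concatenating it into the SQL. A minimal sketch of that check:

```java
import java.util.Set;

// Allowlist validation for ORDER BY: unknown or malicious input falls back
// to a safe default instead of being concatenated into the query.
public class SortAllowlistDemo {
    private static final Set<String> ALLOWED =
            Set.of("timestamp", "username", "action", "category");

    static String sortColumn(String requested) {
        return ALLOWED.contains(requested) ? requested : "timestamp";
    }

    public static void main(String[] args) {
        if (!sortColumn("username").equals("username"))
            throw new AssertionError("allowed column should pass through");
        // Injection attempt is neutralized to the default column
        if (!sortColumn("1; DROP TABLE audit_log").equals("timestamp"))
            throw new AssertionError("unknown column must fall back");
    }
}
```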

View File

@@ -2,7 +2,7 @@ package com.cameleer3.server.app.storage;
import com.cameleer3.common.graph.RouteGraph;
import com.cameleer3.server.core.ingestion.TaggedDiagram;
import com.cameleer3.server.core.storage.DiagramRepository;
import com.cameleer3.server.core.storage.DiagramStore;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;
@@ -22,19 +22,20 @@ import java.util.Map;
 import java.util.Optional;
 /**
- * ClickHouse implementation of {@link DiagramRepository}.
+ * PostgreSQL implementation of {@link DiagramStore}.
  * <p>
  * Stores route graphs as JSON with SHA-256 content-hash deduplication.
- * The underlying table uses ReplacingMergeTree keyed on content_hash.
+ * Uses {@code ON CONFLICT (content_hash) DO NOTHING} for idempotent inserts.
  */
 @Repository
-public class ClickHouseDiagramRepository implements DiagramRepository {
+public class PostgresDiagramStore implements DiagramStore {
-    private static final Logger log = LoggerFactory.getLogger(ClickHouseDiagramRepository.class);
+    private static final Logger log = LoggerFactory.getLogger(PostgresDiagramStore.class);
     private static final String INSERT_SQL = """
             INSERT INTO route_diagrams (content_hash, route_id, agent_id, definition)
-            VALUES (?, ?, ?, ?)
+            VALUES (?, ?, ?, ?::jsonb)
+            ON CONFLICT (content_hash) DO NOTHING
             """;
     private static final String SELECT_BY_HASH = """
@@ -50,7 +51,7 @@ public class ClickHouseDiagramRepository implements DiagramRepository {
     private final JdbcTemplate jdbcTemplate;
     private final ObjectMapper objectMapper;
-    public ClickHouseDiagramRepository(JdbcTemplate jdbcTemplate) {
+    public PostgresDiagramStore(JdbcTemplate jdbcTemplate) {
         this.jdbcTemplate = jdbcTemplate;
         this.objectMapper = new ObjectMapper();
         this.objectMapper.registerModule(new JavaTimeModule());
@@ -82,7 +83,7 @@ public class ClickHouseDiagramRepository implements DiagramRepository {
         try {
             return Optional.of(objectMapper.readValue(json, RouteGraph.class));
         } catch (JsonProcessingException e) {
-            log.error("Failed to deserialize RouteGraph from ClickHouse", e);
+            log.error("Failed to deserialize RouteGraph from PostgreSQL", e);
             return Optional.empty();
         }
     }
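The content-hash deduplication described in the Javadoc hinges on hashing the serialized diagram before insert, so that re-ingesting an identical diagram hits `ON CONFLICT (content_hash) DO NOTHING` and becomes a no-op. A minimal sketch of how such a key could be derived — the `contentHash` helper and class name are illustrative, not taken from the codebase:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

public class ContentHash {

    // Hypothetical helper: SHA-256 over the serialized diagram definition,
    // hex-encoded so it can serve as the route_diagrams.content_hash key.
    static String contentHash(String definitionJson) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(definitionJson.getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(digest);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is mandated by the JDK
        }
    }

    public static void main(String[] args) {
        // Identical definitions hash identically, so the idempotent insert
        // stores each distinct diagram exactly once.
        System.out.println(contentHash("{\"routeId\":\"r1\"}"));
    }
}
```

Note this only deduplicates byte-identical JSON; semantically equal diagrams with different key ordering would hash differently unless serialization is canonicalized first.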

View File

@@ -0,0 +1,131 @@
package com.cameleer3.server.app.storage;
import com.cameleer3.server.core.storage.ExecutionStore;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;
import org.springframework.stereotype.Repository;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.time.Instant;
import java.util.List;
import java.util.Optional;
@Repository
public class PostgresExecutionStore implements ExecutionStore {
private final JdbcTemplate jdbc;
public PostgresExecutionStore(JdbcTemplate jdbc) {
this.jdbc = jdbc;
}
@Override
public void upsert(ExecutionRecord execution) {
jdbc.update("""
INSERT INTO executions (execution_id, route_id, agent_id, application_name,
status, correlation_id, exchange_id, start_time, end_time,
duration_ms, error_message, error_stacktrace, diagram_content_hash,
created_at, updated_at)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, now(), now())
ON CONFLICT (execution_id, start_time) DO UPDATE SET
-- keep terminal statuses sticky: a late RUNNING event must not
-- downgrade a row already marked COMPLETED or FAILED
status = CASE
WHEN executions.status IN ('COMPLETED', 'FAILED')
AND EXCLUDED.status = 'RUNNING'
THEN executions.status
ELSE EXCLUDED.status
END,
end_time = COALESCE(EXCLUDED.end_time, executions.end_time),
duration_ms = COALESCE(EXCLUDED.duration_ms, executions.duration_ms),
error_message = COALESCE(EXCLUDED.error_message, executions.error_message),
error_stacktrace = COALESCE(EXCLUDED.error_stacktrace, executions.error_stacktrace),
diagram_content_hash = COALESCE(EXCLUDED.diagram_content_hash, executions.diagram_content_hash),
updated_at = now()
""",
execution.executionId(), execution.routeId(), execution.agentId(),
execution.applicationName(), execution.status(), execution.correlationId(),
execution.exchangeId(),
Timestamp.from(execution.startTime()),
execution.endTime() != null ? Timestamp.from(execution.endTime()) : null,
execution.durationMs(), execution.errorMessage(),
execution.errorStacktrace(), execution.diagramContentHash());
}
@Override
public void upsertProcessors(String executionId, Instant startTime,
String applicationName, String routeId,
List<ProcessorRecord> processors) {
jdbc.batchUpdate("""
INSERT INTO processor_executions (execution_id, processor_id, processor_type,
diagram_node_id, application_name, route_id, depth, parent_processor_id,
status, start_time, end_time, duration_ms, error_message, error_stacktrace,
input_body, output_body, input_headers, output_headers)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?::jsonb, ?::jsonb)
ON CONFLICT (execution_id, processor_id, start_time) DO UPDATE SET
status = EXCLUDED.status,
end_time = COALESCE(EXCLUDED.end_time, processor_executions.end_time),
duration_ms = COALESCE(EXCLUDED.duration_ms, processor_executions.duration_ms),
error_message = COALESCE(EXCLUDED.error_message, processor_executions.error_message),
error_stacktrace = COALESCE(EXCLUDED.error_stacktrace, processor_executions.error_stacktrace),
input_body = COALESCE(EXCLUDED.input_body, processor_executions.input_body),
output_body = COALESCE(EXCLUDED.output_body, processor_executions.output_body),
input_headers = COALESCE(EXCLUDED.input_headers, processor_executions.input_headers),
output_headers = COALESCE(EXCLUDED.output_headers, processor_executions.output_headers)
""",
processors.stream().map(p -> new Object[]{
p.executionId(), p.processorId(), p.processorType(),
p.diagramNodeId(), p.applicationName(), p.routeId(),
p.depth(), p.parentProcessorId(), p.status(),
Timestamp.from(p.startTime()),
p.endTime() != null ? Timestamp.from(p.endTime()) : null,
p.durationMs(), p.errorMessage(), p.errorStacktrace(),
p.inputBody(), p.outputBody(), p.inputHeaders(), p.outputHeaders()
}).toList());
}
@Override
public Optional<ExecutionRecord> findById(String executionId) {
List<ExecutionRecord> results = jdbc.query(
"SELECT * FROM executions WHERE execution_id = ? ORDER BY start_time DESC LIMIT 1",
EXECUTION_MAPPER, executionId);
return results.isEmpty() ? Optional.empty() : Optional.of(results.get(0));
}
@Override
public List<ProcessorRecord> findProcessors(String executionId) {
return jdbc.query(
"SELECT * FROM processor_executions WHERE execution_id = ? ORDER BY depth, start_time",
PROCESSOR_MAPPER, executionId);
}
private static final RowMapper<ExecutionRecord> EXECUTION_MAPPER = (rs, rowNum) ->
new ExecutionRecord(
rs.getString("execution_id"), rs.getString("route_id"),
rs.getString("agent_id"), rs.getString("application_name"),
rs.getString("status"), rs.getString("correlation_id"),
rs.getString("exchange_id"),
toInstant(rs, "start_time"), toInstant(rs, "end_time"),
rs.getObject("duration_ms") != null ? rs.getLong("duration_ms") : null,
rs.getString("error_message"), rs.getString("error_stacktrace"),
rs.getString("diagram_content_hash"));
private static final RowMapper<ProcessorRecord> PROCESSOR_MAPPER = (rs, rowNum) ->
new ProcessorRecord(
rs.getString("execution_id"), rs.getString("processor_id"),
rs.getString("processor_type"), rs.getString("diagram_node_id"),
rs.getString("application_name"), rs.getString("route_id"),
rs.getInt("depth"), rs.getString("parent_processor_id"),
rs.getString("status"),
toInstant(rs, "start_time"), toInstant(rs, "end_time"),
rs.getObject("duration_ms") != null ? rs.getLong("duration_ms") : null,
rs.getString("error_message"), rs.getString("error_stacktrace"),
rs.getString("input_body"), rs.getString("output_body"),
rs.getString("input_headers"), rs.getString("output_headers"));
private static Instant toInstant(ResultSet rs, String column) throws SQLException {
Timestamp ts = rs.getTimestamp(column);
return ts != null ? ts.toInstant() : null;
}
}
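The status clause in the executions upsert is meant to make terminal states win: a `COMPLETED`/`FAILED` event may overwrite `RUNNING`, but an out-of-order `RUNNING` event should not downgrade a row that already reached a terminal state. That merge rule as a plain function — an illustrative sketch, not part of the codebase:

```java
public class StatusMerge {

    // A status is terminal once the execution can no longer change.
    static boolean terminal(String s) {
        return s.equals("COMPLETED") || s.equals("FAILED");
    }

    // Mirrors the intended ON CONFLICT behavior: terminal statuses are
    // sticky against late RUNNING events; otherwise the latest event wins.
    static String merge(String stored, String incoming) {
        if (terminal(stored) && incoming.equals("RUNNING")) {
            return stored; // late RUNNING never downgrades a finished row
        }
        return incoming;
    }
}
```

This matters because agents report asynchronously: the upsert key is `(execution_id, start_time)`, so a delayed start event can arrive after the completion event for the same execution.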

View File

@@ -0,0 +1,113 @@
package com.cameleer3.server.app.storage;
import com.cameleer3.server.core.rbac.*;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;
import java.util.*;
@Repository
public class PostgresGroupRepository implements GroupRepository {
private final JdbcTemplate jdbc;
public PostgresGroupRepository(JdbcTemplate jdbc) {
this.jdbc = jdbc;
}
@Override
public List<GroupSummary> findAll() {
return jdbc.query("SELECT id, name FROM groups ORDER BY name",
(rs, rowNum) -> new GroupSummary(rs.getObject("id", UUID.class), rs.getString("name")));
}
@Override
public Optional<GroupDetail> findById(UUID id) {
var rows = jdbc.query(
"SELECT id, name, parent_group_id, created_at FROM groups WHERE id = ?",
(rs, rowNum) -> new GroupDetail(
rs.getObject("id", UUID.class),
rs.getString("name"),
rs.getObject("parent_group_id", UUID.class),
rs.getTimestamp("created_at").toInstant(),
List.of(), List.of(), List.of(), List.of()
), id);
if (rows.isEmpty()) return Optional.empty();
var g = rows.get(0);
List<RoleSummary> directRoles = jdbc.query("""
SELECT r.id, r.name, r.system FROM group_roles gr
JOIN roles r ON r.id = gr.role_id WHERE gr.group_id = ?
""", (rs, rowNum) -> new RoleSummary(rs.getObject("id", UUID.class),
rs.getString("name"), rs.getBoolean("system"), "direct"), id);
List<UserSummary> members = jdbc.query("""
SELECT u.user_id, u.display_name, u.provider FROM user_groups ug
JOIN users u ON u.user_id = ug.user_id WHERE ug.group_id = ?
""", (rs, rowNum) -> new UserSummary(rs.getString("user_id"),
rs.getString("display_name"), rs.getString("provider")), id);
List<GroupSummary> children = findChildGroups(id);
return Optional.of(new GroupDetail(g.id(), g.name(), g.parentGroupId(),
g.createdAt(), directRoles, List.of(), members, children));
}
@Override
public UUID create(String name, UUID parentGroupId) {
UUID id = UUID.randomUUID();
jdbc.update("INSERT INTO groups (id, name, parent_group_id) VALUES (?, ?, ?)",
id, name, parentGroupId);
return id;
}
@Override
public void update(UUID id, String name, UUID parentGroupId) {
jdbc.update("UPDATE groups SET name = COALESCE(?, name), parent_group_id = ? WHERE id = ?",
name, parentGroupId, id);
}
@Override
public void delete(UUID id) {
jdbc.update("DELETE FROM groups WHERE id = ?", id);
}
@Override
public void addRole(UUID groupId, UUID roleId) {
jdbc.update("INSERT INTO group_roles (group_id, role_id) VALUES (?, ?) ON CONFLICT DO NOTHING",
groupId, roleId);
}
@Override
public void removeRole(UUID groupId, UUID roleId) {
jdbc.update("DELETE FROM group_roles WHERE group_id = ? AND role_id = ?", groupId, roleId);
}
@Override
public List<GroupSummary> findChildGroups(UUID parentId) {
return jdbc.query("SELECT id, name FROM groups WHERE parent_group_id = ? ORDER BY name",
(rs, rowNum) -> new GroupSummary(rs.getObject("id", UUID.class), rs.getString("name")),
parentId);
}
@Override
public List<GroupSummary> findAncestorChain(UUID groupId) {
List<GroupSummary> chain = new ArrayList<>();
UUID current = groupId;
Set<UUID> visited = new HashSet<>();
while (current != null && visited.add(current)) {
UUID id = current;
var rows = jdbc.query(
"SELECT id, name, parent_group_id FROM groups WHERE id = ?",
(rs, rowNum) -> new Object[]{
new GroupSummary(rs.getObject("id", UUID.class), rs.getString("name")),
rs.getObject("parent_group_id", UUID.class)
}, id);
if (rows.isEmpty()) break;
chain.add((GroupSummary) rows.get(0)[0]);
current = (UUID) rows.get(0)[1];
}
Collections.reverse(chain);
return chain;
}
}
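`findAncestorChain` walks `parent_group_id` pointers root-ward, one query per hop, with a visited set guarding against a cyclic parent chain, then reverses so the result reads root-first. The same walk over an in-memory parent map — an illustrative sketch, not from the codebase:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class AncestorChain {

    // parentOf maps a group to its parent; roots are absent from the map.
    // visited.add(...) returns false on a repeat, so a cyclic chain
    // terminates instead of looping forever.
    static List<String> ancestorChain(Map<String, String> parentOf, String start) {
        List<String> chain = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        String current = start;
        while (current != null && visited.add(current)) {
            chain.add(current);
            current = parentOf.get(current);
        }
        Collections.reverse(chain); // leaf-to-root becomes root-to-leaf
        return chain;
    }
}
```

For deep hierarchies a single `WITH RECURSIVE` query would avoid the per-hop round trip, at the cost of moving the cycle guard into SQL.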

View File

@@ -0,0 +1,42 @@
package com.cameleer3.server.app.storage;
import com.cameleer3.server.core.storage.MetricsStore;
import com.cameleer3.server.core.storage.model.MetricsSnapshot;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;
import java.sql.Timestamp;
import java.util.List;
@Repository
public class PostgresMetricsStore implements MetricsStore {
private static final ObjectMapper MAPPER = new ObjectMapper();
private final JdbcTemplate jdbc;
public PostgresMetricsStore(JdbcTemplate jdbc) {
this.jdbc = jdbc;
}
@Override
public void insertBatch(List<MetricsSnapshot> snapshots) {
jdbc.batchUpdate("""
INSERT INTO agent_metrics (agent_id, metric_name, metric_value, tags,
collected_at, server_received_at)
VALUES (?, ?, ?, ?::jsonb, ?, now())
""",
snapshots.stream().map(s -> new Object[]{
s.agentId(), s.metricName(), s.metricValue(),
tagsToJson(s.tags()),
Timestamp.from(s.collectedAt())
}).toList());
}
private String tagsToJson(java.util.Map<String, String> tags) {
if (tags == null || tags.isEmpty()) return null;
try { return MAPPER.writeValueAsString(tags); }
catch (JsonProcessingException e) { return null; }
}
}

View File

@@ -0,0 +1,62 @@
package com.cameleer3.server.app.storage;
import com.cameleer3.server.core.security.OidcConfig;
import com.cameleer3.server.core.security.OidcConfigRepository;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;
import java.util.List;
import java.util.Optional;
@Repository
public class PostgresOidcConfigRepository implements OidcConfigRepository {
private final JdbcTemplate jdbc;
private final ObjectMapper objectMapper;
public PostgresOidcConfigRepository(JdbcTemplate jdbc, ObjectMapper objectMapper) {
this.jdbc = jdbc;
this.objectMapper = objectMapper;
}
@Override
public Optional<OidcConfig> find() {
List<OidcConfig> results = jdbc.query(
"SELECT config_val FROM server_config WHERE config_key = 'oidc'",
(rs, rowNum) -> {
String json = rs.getString("config_val");
try {
return objectMapper.readValue(json, OidcConfig.class);
} catch (JsonProcessingException e) {
throw new RuntimeException("Failed to deserialize OIDC config", e);
}
});
return results.isEmpty() ? Optional.empty() : Optional.of(results.get(0));
}
@Override
public void save(OidcConfig config) {
String json;
try {
json = objectMapper.writeValueAsString(config);
} catch (JsonProcessingException e) {
throw new RuntimeException("Failed to serialize OIDC config", e);
}
jdbc.update("""
INSERT INTO server_config (config_key, config_val, updated_at)
VALUES ('oidc', ?::jsonb, now())
ON CONFLICT (config_key) DO UPDATE SET
config_val = EXCLUDED.config_val,
updated_at = now()
""",
json);
}
@Override
public void delete() {
jdbc.update("DELETE FROM server_config WHERE config_key = 'oidc'");
}
}

View File

@@ -0,0 +1,85 @@
package com.cameleer3.server.app.storage;
import com.cameleer3.server.core.rbac.*;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;
import java.util.*;
@Repository
public class PostgresRoleRepository implements RoleRepository {
private final JdbcTemplate jdbc;
public PostgresRoleRepository(JdbcTemplate jdbc) {
this.jdbc = jdbc;
}
@Override
public List<RoleDetail> findAll() {
return jdbc.query("""
SELECT id, name, description, scope, system, created_at FROM roles ORDER BY system DESC, name
""", (rs, rowNum) -> new RoleDetail(
rs.getObject("id", UUID.class),
rs.getString("name"),
rs.getString("description"),
rs.getString("scope"),
rs.getBoolean("system"),
rs.getTimestamp("created_at").toInstant(),
List.of(), List.of(), List.of()
));
}
@Override
public Optional<RoleDetail> findById(UUID id) {
var rows = jdbc.query("""
SELECT id, name, description, scope, system, created_at FROM roles WHERE id = ?
""", (rs, rowNum) -> new RoleDetail(
rs.getObject("id", UUID.class),
rs.getString("name"),
rs.getString("description"),
rs.getString("scope"),
rs.getBoolean("system"),
rs.getTimestamp("created_at").toInstant(),
List.of(), List.of(), List.of()
), id);
if (rows.isEmpty()) return Optional.empty();
var r = rows.get(0);
List<GroupSummary> assignedGroups = jdbc.query("""
SELECT g.id, g.name FROM group_roles gr
JOIN groups g ON g.id = gr.group_id WHERE gr.role_id = ?
""", (rs, rowNum) -> new GroupSummary(rs.getObject("id", UUID.class),
rs.getString("name")), id);
List<UserSummary> directUsers = jdbc.query("""
SELECT u.user_id, u.display_name, u.provider FROM user_roles ur
JOIN users u ON u.user_id = ur.user_id WHERE ur.role_id = ?
""", (rs, rowNum) -> new UserSummary(rs.getString("user_id"),
rs.getString("display_name"), rs.getString("provider")), id);
return Optional.of(new RoleDetail(r.id(), r.name(), r.description(),
r.scope(), r.system(), r.createdAt(), assignedGroups, directUsers, List.of()));
}
@Override
public UUID create(String name, String description, String scope) {
UUID id = UUID.randomUUID();
jdbc.update("INSERT INTO roles (id, name, description, scope, system) VALUES (?, ?, ?, ?, false)",
id, name, description, scope);
return id;
}
@Override
public void update(UUID id, String name, String description, String scope) {
jdbc.update("""
UPDATE roles SET name = COALESCE(?, name), description = COALESCE(?, description),
scope = COALESCE(?, scope) WHERE id = ? AND system = false
""", name, description, scope, id);
}
@Override
public void delete(UUID id) {
jdbc.update("DELETE FROM roles WHERE id = ? AND system = false", id);
}
}

View File

@@ -0,0 +1,187 @@
package com.cameleer3.server.app.storage;
import com.cameleer3.server.core.search.ExecutionStats;
import com.cameleer3.server.core.search.StatsTimeseries;
import com.cameleer3.server.core.search.StatsTimeseries.TimeseriesBucket;
import com.cameleer3.server.core.storage.StatsStore;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;
import java.sql.Timestamp;
import java.time.Duration;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.ArrayList;
import java.util.List;
@Repository
public class PostgresStatsStore implements StatsStore {
private final JdbcTemplate jdbc;
public PostgresStatsStore(JdbcTemplate jdbc) {
this.jdbc = jdbc;
}
@Override
public ExecutionStats stats(Instant from, Instant to) {
return queryStats("stats_1m_all", from, to, List.of());
}
@Override
public ExecutionStats statsForApp(Instant from, Instant to, String applicationName) {
return queryStats("stats_1m_app", from, to, List.of(
new Filter("application_name", applicationName)));
}
@Override
public ExecutionStats statsForRoute(Instant from, Instant to, String routeId, List<String> agentIds) {
// Note: agentIds is accepted for interface compatibility but not filterable
// on the continuous aggregate (it groups by route_id, not agent_id).
// All agents for the same route contribute to the same aggregate.
return queryStats("stats_1m_route", from, to, List.of(
new Filter("route_id", routeId)));
}
@Override
public ExecutionStats statsForProcessor(Instant from, Instant to, String routeId, String processorType) {
return queryStats("stats_1m_processor", from, to, List.of(
new Filter("route_id", routeId),
new Filter("processor_type", processorType)));
}
@Override
public StatsTimeseries timeseries(Instant from, Instant to, int bucketCount) {
return queryTimeseries("stats_1m_all", from, to, bucketCount, List.of(), true);
}
@Override
public StatsTimeseries timeseriesForApp(Instant from, Instant to, int bucketCount, String applicationName) {
return queryTimeseries("stats_1m_app", from, to, bucketCount, List.of(
new Filter("application_name", applicationName)), true);
}
@Override
public StatsTimeseries timeseriesForRoute(Instant from, Instant to, int bucketCount,
String routeId, List<String> agentIds) {
return queryTimeseries("stats_1m_route", from, to, bucketCount, List.of(
new Filter("route_id", routeId)), true);
}
@Override
public StatsTimeseries timeseriesForProcessor(Instant from, Instant to, int bucketCount,
String routeId, String processorType) {
// stats_1m_processor does NOT have running_count column
return queryTimeseries("stats_1m_processor", from, to, bucketCount, List.of(
new Filter("route_id", routeId),
new Filter("processor_type", processorType)), false);
}
private record Filter(String column, String value) {}
private ExecutionStats queryStats(String view, Instant from, Instant to, List<Filter> filters) {
// running_count only exists on execution-level aggregates, not processor
boolean hasRunning = !view.equals("stats_1m_processor");
String runningCol = hasRunning ? "COALESCE(SUM(running_count), 0)" : "0";
String sql = "SELECT COALESCE(SUM(total_count), 0) AS total_count, " +
"COALESCE(SUM(failed_count), 0) AS failed_count, " +
"CASE WHEN SUM(total_count) > 0 THEN SUM(duration_sum) / SUM(total_count) ELSE 0 END AS avg_duration, " +
"COALESCE(MAX(p99_duration), 0) AS p99_duration, " +
runningCol + " AS active_count " +
"FROM " + view + " WHERE bucket >= ? AND bucket < ?";
List<Object> params = new ArrayList<>();
params.add(Timestamp.from(from));
params.add(Timestamp.from(to));
for (Filter f : filters) {
sql += " AND " + f.column() + " = ?";
params.add(f.value());
}
long totalCount = 0, failedCount = 0, avgDuration = 0, p99Duration = 0, activeCount = 0;
var currentResult = jdbc.query(sql, (rs, rowNum) -> new long[]{
rs.getLong("total_count"), rs.getLong("failed_count"),
rs.getLong("avg_duration"), rs.getLong("p99_duration"),
rs.getLong("active_count")
}, params.toArray());
if (!currentResult.isEmpty()) {
long[] r = currentResult.get(0);
totalCount = r[0]; failedCount = r[1]; avgDuration = r[2];
p99Duration = r[3]; activeCount = r[4];
}
// Previous period (shifted back 24h)
Instant prevFrom = from.minus(Duration.ofHours(24));
Instant prevTo = to.minus(Duration.ofHours(24));
List<Object> prevParams = new ArrayList<>();
prevParams.add(Timestamp.from(prevFrom));
prevParams.add(Timestamp.from(prevTo));
for (Filter f : filters) prevParams.add(f.value());
String prevSql = sql; // same shape, different time params
long prevTotal = 0, prevFailed = 0, prevAvg = 0, prevP99 = 0;
var prevResult = jdbc.query(prevSql, (rs, rowNum) -> new long[]{
rs.getLong("total_count"), rs.getLong("failed_count"),
rs.getLong("avg_duration"), rs.getLong("p99_duration")
}, prevParams.toArray());
if (!prevResult.isEmpty()) {
long[] r = prevResult.get(0);
prevTotal = r[0]; prevFailed = r[1]; prevAvg = r[2]; prevP99 = r[3];
}
// Today total (from midnight UTC)
Instant todayStart = Instant.now().truncatedTo(ChronoUnit.DAYS);
List<Object> todayParams = new ArrayList<>();
todayParams.add(Timestamp.from(todayStart));
todayParams.add(Timestamp.from(Instant.now()));
for (Filter f : filters) todayParams.add(f.value());
String todaySql = sql;
long totalToday = 0;
var todayResult = jdbc.query(todaySql, (rs, rowNum) -> rs.getLong("total_count"),
todayParams.toArray());
if (!todayResult.isEmpty()) totalToday = todayResult.get(0);
return new ExecutionStats(
totalCount, failedCount, avgDuration, p99Duration, activeCount,
totalToday, prevTotal, prevFailed, prevAvg, prevP99);
}
private StatsTimeseries queryTimeseries(String view, Instant from, Instant to,
int bucketCount, List<Filter> filters,
boolean hasRunningCount) {
long intervalSeconds = Duration.between(from, to).toSeconds() / Math.max(bucketCount, 1);
if (intervalSeconds < 60) intervalSeconds = 60;
String runningCol = hasRunningCount ? "COALESCE(SUM(running_count), 0)" : "0";
String sql = "SELECT time_bucket(? * INTERVAL '1 second', bucket) AS period, " +
"COALESCE(SUM(total_count), 0) AS total_count, " +
"COALESCE(SUM(failed_count), 0) AS failed_count, " +
"CASE WHEN SUM(total_count) > 0 THEN SUM(duration_sum) / SUM(total_count) ELSE 0 END AS avg_duration, " +
"COALESCE(MAX(p99_duration), 0) AS p99_duration, " +
runningCol + " AS active_count " +
"FROM " + view + " WHERE bucket >= ? AND bucket < ?";
List<Object> params = new ArrayList<>();
params.add(intervalSeconds);
params.add(Timestamp.from(from));
params.add(Timestamp.from(to));
for (Filter f : filters) {
sql += " AND " + f.column() + " = ?";
params.add(f.value());
}
sql += " GROUP BY period ORDER BY period";
List<TimeseriesBucket> buckets = jdbc.query(sql, (rs, rowNum) ->
new TimeseriesBucket(
rs.getTimestamp("period").toInstant(),
rs.getLong("total_count"), rs.getLong("failed_count"),
rs.getLong("avg_duration"), rs.getLong("p99_duration"),
rs.getLong("active_count")
), params.toArray());
return new StatsTimeseries(buckets);
}
}
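The bucket-width derivation in `queryTimeseries` can be isolated: the requested window is split into `bucketCount` slices, but never finer than the 60-second resolution of the 1-minute continuous aggregates, and a zero bucket count is clamped to one to avoid division by zero. A standalone sketch (class name is illustrative):

```java
import java.time.Duration;
import java.time.Instant;

public class BucketInterval {

    // Same arithmetic as queryTimeseries: window length divided by the
    // bucket count, floored at the 60s granularity of the stats_1m_* views.
    static long intervalSeconds(Instant from, Instant to, int bucketCount) {
        long seconds = Duration.between(from, to).toSeconds() / Math.max(bucketCount, 1);
        return Math.max(seconds, 60);
    }
}
```

With the floor in place, asking for 100 buckets over a 10-minute window quietly yields 10 one-minute buckets rather than sub-resolution slices the aggregate cannot provide.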

View File

@@ -0,0 +1,58 @@
package com.cameleer3.server.app.storage;
import com.cameleer3.server.core.admin.ThresholdConfig;
import com.cameleer3.server.core.admin.ThresholdRepository;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;
import java.util.List;
import java.util.Optional;
@Repository
public class PostgresThresholdRepository implements ThresholdRepository {
private final JdbcTemplate jdbc;
private final ObjectMapper objectMapper;
public PostgresThresholdRepository(JdbcTemplate jdbc, ObjectMapper objectMapper) {
this.jdbc = jdbc;
this.objectMapper = objectMapper;
}
@Override
public Optional<ThresholdConfig> find() {
List<ThresholdConfig> results = jdbc.query(
"SELECT config_val FROM server_config WHERE config_key = 'thresholds'",
(rs, rowNum) -> {
String json = rs.getString("config_val");
try {
return objectMapper.readValue(json, ThresholdConfig.class);
} catch (JsonProcessingException e) {
throw new RuntimeException("Failed to deserialize threshold config", e);
}
});
return results.isEmpty() ? Optional.empty() : Optional.of(results.get(0));
}
@Override
public void save(ThresholdConfig config, String updatedBy) {
String json;
try {
json = objectMapper.writeValueAsString(config);
} catch (JsonProcessingException e) {
throw new RuntimeException("Failed to serialize threshold config", e);
}
jdbc.update("""
INSERT INTO server_config (config_key, config_val, updated_by, updated_at)
VALUES ('thresholds', ?::jsonb, ?, now())
ON CONFLICT (config_key) DO UPDATE SET
config_val = EXCLUDED.config_val,
updated_by = EXCLUDED.updated_by,
updated_at = now()
""",
json, updatedBy);
}
}

View File

@@ -0,0 +1,75 @@
package com.cameleer3.server.app.storage;
import com.cameleer3.server.core.security.UserInfo;
import com.cameleer3.server.core.security.UserRepository;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;
import java.util.List;
import java.util.Optional;
@Repository
public class PostgresUserRepository implements UserRepository {
private final JdbcTemplate jdbc;
public PostgresUserRepository(JdbcTemplate jdbc) {
this.jdbc = jdbc;
}
@Override
public Optional<UserInfo> findById(String userId) {
var results = jdbc.query(
"SELECT user_id, provider, email, display_name, created_at FROM users WHERE user_id = ?",
(rs, rowNum) -> mapUser(rs), userId);
return results.isEmpty() ? Optional.empty() : Optional.of(results.get(0));
}
@Override
public List<UserInfo> findAll() {
return jdbc.query("SELECT user_id, provider, email, display_name, created_at FROM users ORDER BY user_id",
(rs, rowNum) -> mapUser(rs));
}
@Override
public void upsert(UserInfo user) {
jdbc.update("""
INSERT INTO users (user_id, provider, email, display_name, created_at, updated_at)
VALUES (?, ?, ?, ?, now(), now())
ON CONFLICT (user_id) DO UPDATE SET
provider = EXCLUDED.provider, email = EXCLUDED.email,
display_name = EXCLUDED.display_name,
updated_at = now()
""",
user.userId(), user.provider(), user.email(), user.displayName());
}
@Override
public void delete(String userId) {
jdbc.update("DELETE FROM users WHERE user_id = ?", userId);
}
@Override
public void setPassword(String userId, String passwordHash) {
jdbc.update("UPDATE users SET password_hash = ? WHERE user_id = ?", passwordHash, userId);
}
@Override
public Optional<String> getPasswordHash(String userId) {
List<String> results = jdbc.query(
"SELECT password_hash FROM users WHERE user_id = ?",
(rs, rowNum) -> rs.getString("password_hash"),
userId);
if (results.isEmpty() || results.get(0) == null) return Optional.empty();
return Optional.of(results.get(0));
}
private UserInfo mapUser(java.sql.ResultSet rs) throws java.sql.SQLException {
java.sql.Timestamp ts = rs.getTimestamp("created_at");
java.time.Instant createdAt = ts != null ? ts.toInstant() : null;
return new UserInfo(
rs.getString("user_id"), rs.getString("provider"),
rs.getString("email"), rs.getString("display_name"),
createdAt);
}
}

View File

@@ -3,10 +3,18 @@ server:
 spring:
   datasource:
-    url: jdbc:ch://localhost:8123/cameleer3
+    url: jdbc:postgresql://localhost:5432/cameleer3?currentSchema=${CAMELEER_DB_SCHEMA:public}
     username: cameleer
-    password: cameleer_dev
-    driver-class-name: com.clickhouse.jdbc.ClickHouseDriver
+    password: ${CAMELEER_DB_PASSWORD:cameleer_dev}
+    driver-class-name: org.postgresql.Driver
+  flyway:
+    enabled: true
+    locations: classpath:db/migration
+    url: jdbc:postgresql://localhost:5432/cameleer3?currentSchema=${CAMELEER_DB_SCHEMA:public},public
+    user: ${spring.datasource.username}
+    password: ${spring.datasource.password}
+    schemas: ${CAMELEER_DB_SCHEMA:public}
+    default-schema: ${CAMELEER_DB_SCHEMA:public}
   mvc:
     async:
       request-timeout: -1
@@ -29,8 +37,15 @@ ingestion:
   batch-size: 5000
   flush-interval-ms: 1000
-clickhouse:
-  ttl-days: 30
+opensearch:
+  url: ${OPENSEARCH_URL:http://localhost:9200}
+  index-prefix: ${CAMELEER_OPENSEARCH_INDEX_PREFIX:executions-}
+  queue-size: ${CAMELEER_OPENSEARCH_QUEUE_SIZE:10000}
+  debounce-ms: ${CAMELEER_OPENSEARCH_DEBOUNCE_MS:2000}
 cameleer:
   body-size-limit: ${CAMELEER_BODY_SIZE_LIMIT:16384}
+  retention-days: ${CAMELEER_RETENTION_DAYS:30}
 security:
   access-token-expiry-ms: 3600000
@@ -41,13 +56,7 @@ security:
   ui-password: ${CAMELEER_UI_PASSWORD:admin}
   ui-origin: ${CAMELEER_UI_ORIGIN:http://localhost:5173}
   jwt-secret: ${CAMELEER_JWT_SECRET:}
-oidc:
-  enabled: ${CAMELEER_OIDC_ENABLED:false}
-  issuer-uri: ${CAMELEER_OIDC_ISSUER:}
-  client-id: ${CAMELEER_OIDC_CLIENT_ID:}
-  client-secret: ${CAMELEER_OIDC_CLIENT_SECRET:}
-  roles-claim: ${CAMELEER_OIDC_ROLES_CLAIM:realm_access.roles}
-  default-roles: ${CAMELEER_OIDC_DEFAULT_ROLES:VIEWER}
 springdoc:
   api-docs:
View File

@@ -1,57 +0,0 @@
-- Cameleer3 ClickHouse Schema
-- Tables for route executions, route diagrams, and agent metrics.
CREATE TABLE IF NOT EXISTS route_executions (
execution_id String,
route_id LowCardinality(String),
agent_id LowCardinality(String),
status LowCardinality(String),
start_time DateTime64(3, 'UTC'),
end_time Nullable(DateTime64(3, 'UTC')),
duration_ms UInt64,
correlation_id String,
exchange_id String,
error_message String DEFAULT '',
error_stacktrace String DEFAULT '',
-- Nested processor executions stored as parallel arrays
processor_ids Array(String),
processor_types Array(LowCardinality(String)),
processor_starts Array(DateTime64(3, 'UTC')),
processor_ends Array(DateTime64(3, 'UTC')),
processor_durations Array(UInt64),
processor_statuses Array(LowCardinality(String)),
-- Metadata
server_received_at DateTime64(3, 'UTC') DEFAULT now64(3, 'UTC'),
-- Skip indexes
INDEX idx_correlation correlation_id TYPE bloom_filter GRANULARITY 4,
INDEX idx_error error_message TYPE tokenbf_v1(32768, 3, 0) GRANULARITY 4
)
ENGINE = MergeTree()
PARTITION BY toYYYYMMDD(start_time)
ORDER BY (agent_id, status, start_time, execution_id)
TTL toDateTime(start_time) + toIntervalDay(30)
SETTINGS ttl_only_drop_parts = 1;
CREATE TABLE IF NOT EXISTS route_diagrams (
content_hash String,
route_id LowCardinality(String),
agent_id LowCardinality(String),
definition String,
created_at DateTime64(3, 'UTC') DEFAULT now64(3, 'UTC')
)
ENGINE = ReplacingMergeTree(created_at)
ORDER BY (content_hash);
CREATE TABLE IF NOT EXISTS agent_metrics (
agent_id LowCardinality(String),
collected_at DateTime64(3, 'UTC'),
metric_name LowCardinality(String),
metric_value Float64,
tags Map(String, String),
server_received_at DateTime64(3, 'UTC') DEFAULT now64(3, 'UTC')
)
ENGINE = MergeTree()
PARTITION BY toYYYYMMDD(collected_at)
ORDER BY (agent_id, metric_name, collected_at)
TTL toDateTime(collected_at) + toIntervalDay(30)
SETTINGS ttl_only_drop_parts = 1;

View File

@@ -1,25 +0,0 @@
-- Phase 2: Schema extension for search, detail, and diagram linking columns.
-- Adds exchange snapshot data, processor tree metadata, and diagram content hash.
ALTER TABLE route_executions
ADD COLUMN IF NOT EXISTS exchange_bodies String DEFAULT '',
ADD COLUMN IF NOT EXISTS exchange_headers String DEFAULT '',
ADD COLUMN IF NOT EXISTS processor_depths Array(UInt16) DEFAULT [],
ADD COLUMN IF NOT EXISTS processor_parent_indexes Array(Int32) DEFAULT [],
ADD COLUMN IF NOT EXISTS processor_error_messages Array(String) DEFAULT [],
ADD COLUMN IF NOT EXISTS processor_error_stacktraces Array(String) DEFAULT [],
ADD COLUMN IF NOT EXISTS processor_input_bodies Array(String) DEFAULT [],
ADD COLUMN IF NOT EXISTS processor_output_bodies Array(String) DEFAULT [],
ADD COLUMN IF NOT EXISTS processor_input_headers Array(String) DEFAULT [],
ADD COLUMN IF NOT EXISTS processor_output_headers Array(String) DEFAULT [],
ADD COLUMN IF NOT EXISTS processor_diagram_node_ids Array(String) DEFAULT [],
ADD COLUMN IF NOT EXISTS diagram_content_hash String DEFAULT '';
-- Skip indexes for full-text search on new text columns
ALTER TABLE route_executions
ADD INDEX IF NOT EXISTS idx_exchange_bodies exchange_bodies TYPE tokenbf_v1(32768, 3, 0) GRANULARITY 4,
ADD INDEX IF NOT EXISTS idx_exchange_headers exchange_headers TYPE tokenbf_v1(32768, 3, 0) GRANULARITY 4;
-- Skip index on error_stacktrace (not indexed in 01-schema.sql, needed for SRCH-05)
ALTER TABLE route_executions
ADD INDEX IF NOT EXISTS idx_error_stacktrace error_stacktrace TYPE tokenbf_v1(32768, 3, 0) GRANULARITY 4;


@@ -1,10 +0,0 @@
CREATE TABLE IF NOT EXISTS users (
user_id String,
provider LowCardinality(String),
email String DEFAULT '',
display_name String DEFAULT '',
roles Array(LowCardinality(String)),
created_at DateTime64(3, 'UTC') DEFAULT now64(3, 'UTC'),
updated_at DateTime64(3, 'UTC') DEFAULT now64(3, 'UTC')
) ENGINE = ReplacingMergeTree(updated_at)
ORDER BY (user_id);


@@ -1,13 +0,0 @@
CREATE TABLE IF NOT EXISTS oidc_config (
config_id String DEFAULT 'default',
enabled Bool DEFAULT false,
issuer_uri String DEFAULT '',
client_id String DEFAULT '',
client_secret String DEFAULT '',
roles_claim String DEFAULT 'realm_access.roles',
default_roles Array(LowCardinality(String)),
auto_signup Bool DEFAULT true,
display_name_claim String DEFAULT 'name',
updated_at DateTime64(3, 'UTC') DEFAULT now64(3, 'UTC')
) ENGINE = ReplacingMergeTree(updated_at)
ORDER BY (config_id);


@@ -1 +0,0 @@
ALTER TABLE oidc_config ADD COLUMN IF NOT EXISTS auto_signup Bool DEFAULT true;


@@ -1 +0,0 @@
ALTER TABLE oidc_config ADD COLUMN IF NOT EXISTS display_name_claim String DEFAULT 'name';


@@ -1,35 +0,0 @@
-- Pre-aggregated 5-minute stats rollup for route executions.
-- Uses AggregatingMergeTree with -State/-Merge combinators so intermediate
-- aggregates can be merged across arbitrary time windows and dimensions.
-- Drop existing objects to allow schema changes (MV must be dropped before table)
DROP VIEW IF EXISTS route_execution_stats_5m_mv;
DROP TABLE IF EXISTS route_execution_stats_5m;
CREATE TABLE route_execution_stats_5m (
bucket DateTime('UTC'),
route_id LowCardinality(String),
agent_id LowCardinality(String),
total_count AggregateFunction(count),
failed_count AggregateFunction(countIf, UInt8),
duration_sum AggregateFunction(sum, UInt64),
p99_duration AggregateFunction(quantileTDigest(0.99), UInt64)
)
ENGINE = AggregatingMergeTree()
PARTITION BY toYYYYMMDD(bucket)
ORDER BY (agent_id, route_id, bucket)
TTL bucket + toIntervalDay(30)
SETTINGS ttl_only_drop_parts = 1;
CREATE MATERIALIZED VIEW route_execution_stats_5m_mv
TO route_execution_stats_5m
AS SELECT
toStartOfFiveMinutes(start_time) AS bucket,
route_id,
agent_id,
countState() AS total_count,
countIfState(status = 'FAILED') AS failed_count,
sumState(duration_ms) AS duration_sum,
quantileTDigestState(0.99)(duration_ms) AS p99_duration
FROM route_executions
GROUP BY bucket, route_id, agent_id;
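The `-State` values stored in `route_execution_stats_5m` are opaque intermediate aggregates; reading them back requires the matching `-Merge` combinators. A minimal sketch of an hourly per-route report (table and column names as defined above):

```sql
-- Sketch: roll the 5-minute -State aggregates up to 1-hour buckets.
-- Each -Merge combinator must match the -State function used by the
-- materialized view above (countState -> countMerge, etc.).
SELECT
    toStartOfHour(bucket)                    AS hour,
    route_id,
    countMerge(total_count)                  AS total,
    countIfMerge(failed_count)               AS failed,
    sumMerge(duration_sum)                   AS duration_ms_sum,
    quantileTDigestMerge(0.99)(p99_duration) AS p99_ms
FROM route_execution_stats_5m
GROUP BY hour, route_id
ORDER BY hour, route_id;
```

Because the states merge associatively, the same query shape works for any bucket width or dimension subset (e.g. dropping `route_id` for a global series).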


@@ -1,16 +0,0 @@
-- One-time idempotent backfill of existing route_executions into the
-- 5-minute stats rollup table. Safe for repeated execution — the WHERE
-- clause skips the INSERT if the target table already contains data.
INSERT INTO route_execution_stats_5m
SELECT
toStartOfFiveMinutes(start_time) AS bucket,
route_id,
agent_id,
countState() AS total_count,
countIfState(status = 'FAILED') AS failed_count,
sumState(duration_ms) AS duration_sum,
quantileTDigestState(0.99)(duration_ms) AS p99_duration
FROM route_executions
WHERE (SELECT count() FROM route_execution_stats_5m) = 0
GROUP BY bucket, route_id, agent_id;


@@ -0,0 +1,303 @@
-- V1__init.sql - Consolidated schema for Cameleer3
-- Extensions
CREATE EXTENSION IF NOT EXISTS timescaledb;
CREATE EXTENSION IF NOT EXISTS timescaledb_toolkit;
-- =============================================================
-- RBAC
-- =============================================================
CREATE TABLE users (
user_id TEXT PRIMARY KEY,
provider TEXT NOT NULL,
email TEXT,
display_name TEXT,
password_hash TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE roles (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL UNIQUE,
description TEXT NOT NULL DEFAULT '',
scope TEXT NOT NULL DEFAULT 'custom',
system BOOLEAN NOT NULL DEFAULT false,
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
INSERT INTO roles (id, name, description, scope, system) VALUES
('00000000-0000-0000-0000-000000000001', 'AGENT', 'Agent registration and data ingestion', 'system-wide', true),
('00000000-0000-0000-0000-000000000002', 'VIEWER', 'Read-only access to dashboards and data', 'system-wide', true),
('00000000-0000-0000-0000-000000000003', 'OPERATOR', 'Operational commands (start/stop/configure agents)', 'system-wide', true),
('00000000-0000-0000-0000-000000000004', 'ADMIN', 'Full administrative access', 'system-wide', true);
CREATE TABLE groups (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL UNIQUE,
parent_group_id UUID REFERENCES groups(id) ON DELETE SET NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- Built-in Admins group
INSERT INTO groups (id, name) VALUES
('00000000-0000-0000-0000-000000000010', 'Admins');
CREATE TABLE group_roles (
group_id UUID NOT NULL REFERENCES groups(id) ON DELETE CASCADE,
role_id UUID NOT NULL REFERENCES roles(id) ON DELETE CASCADE,
PRIMARY KEY (group_id, role_id)
);
-- Assign ADMIN role to Admins group
INSERT INTO group_roles (group_id, role_id) VALUES
('00000000-0000-0000-0000-000000000010', '00000000-0000-0000-0000-000000000004');
CREATE TABLE user_groups (
user_id TEXT NOT NULL REFERENCES users(user_id) ON DELETE CASCADE,
group_id UUID NOT NULL REFERENCES groups(id) ON DELETE CASCADE,
PRIMARY KEY (user_id, group_id)
);
CREATE TABLE user_roles (
user_id TEXT NOT NULL REFERENCES users(user_id) ON DELETE CASCADE,
role_id UUID NOT NULL REFERENCES roles(id) ON DELETE CASCADE,
PRIMARY KEY (user_id, role_id)
);
CREATE INDEX idx_user_roles_user_id ON user_roles(user_id);
CREATE INDEX idx_user_groups_user_id ON user_groups(user_id);
CREATE INDEX idx_group_roles_group_id ON group_roles(group_id);
CREATE INDEX idx_groups_parent ON groups(parent_group_id);
-- =============================================================
-- Execution data (TimescaleDB hypertables)
-- =============================================================
CREATE TABLE executions (
execution_id TEXT NOT NULL,
route_id TEXT NOT NULL,
agent_id TEXT NOT NULL,
application_name TEXT NOT NULL,
status TEXT NOT NULL,
correlation_id TEXT,
exchange_id TEXT,
start_time TIMESTAMPTZ NOT NULL,
end_time TIMESTAMPTZ,
duration_ms BIGINT,
error_message TEXT,
error_stacktrace TEXT,
diagram_content_hash TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (execution_id, start_time)
);
SELECT create_hypertable('executions', 'start_time', chunk_time_interval => INTERVAL '1 day');
CREATE INDEX idx_executions_agent_time ON executions (agent_id, start_time DESC);
CREATE INDEX idx_executions_route_time ON executions (route_id, start_time DESC);
CREATE INDEX idx_executions_app_time ON executions (application_name, start_time DESC);
CREATE INDEX idx_executions_correlation ON executions (correlation_id);
CREATE TABLE processor_executions (
id BIGSERIAL,
execution_id TEXT NOT NULL,
processor_id TEXT NOT NULL,
processor_type TEXT NOT NULL,
diagram_node_id TEXT,
application_name TEXT NOT NULL,
route_id TEXT NOT NULL,
depth INT NOT NULL,
parent_processor_id TEXT,
status TEXT NOT NULL,
start_time TIMESTAMPTZ NOT NULL,
end_time TIMESTAMPTZ,
duration_ms BIGINT,
error_message TEXT,
error_stacktrace TEXT,
input_body TEXT,
output_body TEXT,
input_headers JSONB,
output_headers JSONB,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
UNIQUE (execution_id, processor_id, start_time)
);
SELECT create_hypertable('processor_executions', 'start_time', chunk_time_interval => INTERVAL '1 day');
CREATE INDEX idx_proc_exec_execution ON processor_executions (execution_id);
CREATE INDEX idx_proc_exec_type_time ON processor_executions (processor_type, start_time DESC);
-- =============================================================
-- Agent metrics
-- =============================================================
CREATE TABLE agent_metrics (
agent_id TEXT NOT NULL,
metric_name TEXT NOT NULL,
metric_value DOUBLE PRECISION NOT NULL,
tags JSONB,
collected_at TIMESTAMPTZ NOT NULL,
server_received_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
SELECT create_hypertable('agent_metrics', 'collected_at', chunk_time_interval => INTERVAL '1 day');
CREATE INDEX idx_metrics_agent_name ON agent_metrics (agent_id, metric_name, collected_at DESC);
-- =============================================================
-- Route diagrams
-- =============================================================
CREATE TABLE route_diagrams (
content_hash TEXT PRIMARY KEY,
route_id TEXT NOT NULL,
agent_id TEXT NOT NULL,
definition TEXT NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE INDEX idx_diagrams_route_agent ON route_diagrams (route_id, agent_id);
-- =============================================================
-- Agent events
-- =============================================================
CREATE TABLE agent_events (
id BIGSERIAL PRIMARY KEY,
agent_id TEXT NOT NULL,
app_id TEXT NOT NULL,
event_type TEXT NOT NULL,
detail TEXT,
timestamp TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE INDEX idx_agent_events_agent ON agent_events(agent_id, timestamp DESC);
CREATE INDEX idx_agent_events_app ON agent_events(app_id, timestamp DESC);
CREATE INDEX idx_agent_events_time ON agent_events(timestamp DESC);
-- =============================================================
-- Server configuration
-- =============================================================
CREATE TABLE server_config (
config_key TEXT PRIMARY KEY,
config_val JSONB NOT NULL,
updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_by TEXT
);
-- =============================================================
-- Admin
-- =============================================================
CREATE TABLE audit_log (
id BIGSERIAL PRIMARY KEY,
timestamp TIMESTAMPTZ NOT NULL DEFAULT now(),
username TEXT NOT NULL,
action TEXT NOT NULL,
category TEXT NOT NULL,
target TEXT,
detail JSONB,
result TEXT NOT NULL,
ip_address TEXT,
user_agent TEXT
);
CREATE INDEX idx_audit_log_timestamp ON audit_log (timestamp DESC);
CREATE INDEX idx_audit_log_username ON audit_log (username);
CREATE INDEX idx_audit_log_category ON audit_log (category);
CREATE INDEX idx_audit_log_action ON audit_log (action);
CREATE INDEX idx_audit_log_target ON audit_log (target);
-- =============================================================
-- Continuous aggregates
-- =============================================================
CREATE MATERIALIZED VIEW stats_1m_all
WITH (timescaledb.continuous, timescaledb.materialized_only = false) AS
SELECT
time_bucket('1 minute', start_time) AS bucket,
COUNT(*) AS total_count,
COUNT(*) FILTER (WHERE status = 'FAILED') AS failed_count,
COUNT(*) FILTER (WHERE status = 'RUNNING') AS running_count,
SUM(duration_ms) AS duration_sum,
MAX(duration_ms) AS duration_max,
approx_percentile(0.99, percentile_agg(duration_ms::DOUBLE PRECISION)) AS p99_duration
FROM executions
WHERE status IS NOT NULL
GROUP BY bucket
WITH NO DATA;
CREATE MATERIALIZED VIEW stats_1m_app
WITH (timescaledb.continuous, timescaledb.materialized_only = false) AS
SELECT
time_bucket('1 minute', start_time) AS bucket,
application_name,
COUNT(*) AS total_count,
COUNT(*) FILTER (WHERE status = 'FAILED') AS failed_count,
COUNT(*) FILTER (WHERE status = 'RUNNING') AS running_count,
SUM(duration_ms) AS duration_sum,
MAX(duration_ms) AS duration_max,
approx_percentile(0.99, percentile_agg(duration_ms::DOUBLE PRECISION)) AS p99_duration
FROM executions
WHERE status IS NOT NULL
GROUP BY bucket, application_name
WITH NO DATA;
CREATE MATERIALIZED VIEW stats_1m_route
WITH (timescaledb.continuous, timescaledb.materialized_only = false) AS
SELECT
time_bucket('1 minute', start_time) AS bucket,
application_name,
route_id,
COUNT(*) AS total_count,
COUNT(*) FILTER (WHERE status = 'FAILED') AS failed_count,
COUNT(*) FILTER (WHERE status = 'RUNNING') AS running_count,
SUM(duration_ms) AS duration_sum,
MAX(duration_ms) AS duration_max,
approx_percentile(0.99, percentile_agg(duration_ms::DOUBLE PRECISION)) AS p99_duration
FROM executions
WHERE status IS NOT NULL
GROUP BY bucket, application_name, route_id
WITH NO DATA;
CREATE MATERIALIZED VIEW stats_1m_processor
WITH (timescaledb.continuous, timescaledb.materialized_only = false) AS
SELECT
time_bucket('1 minute', start_time) AS bucket,
application_name,
route_id,
processor_type,
COUNT(*) AS total_count,
COUNT(*) FILTER (WHERE status = 'FAILED') AS failed_count,
SUM(duration_ms) AS duration_sum,
MAX(duration_ms) AS duration_max,
approx_percentile(0.99, percentile_agg(duration_ms::DOUBLE PRECISION)) AS p99_duration
FROM processor_executions
GROUP BY bucket, application_name, route_id, processor_type
WITH NO DATA;
CREATE MATERIALIZED VIEW stats_1m_processor_detail
WITH (timescaledb.continuous, timescaledb.materialized_only = false) AS
SELECT
time_bucket('1 minute', start_time) AS bucket,
application_name,
route_id,
processor_id,
processor_type,
COUNT(*) AS total_count,
COUNT(*) FILTER (WHERE status = 'FAILED') AS failed_count,
SUM(duration_ms) AS duration_sum,
MAX(duration_ms) AS duration_max,
approx_percentile(0.99, percentile_agg(duration_ms::DOUBLE PRECISION)) AS p99_duration
FROM processor_executions
GROUP BY bucket, application_name, route_id, processor_id, processor_type
WITH NO DATA;
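The continuous aggregates above are queried like ordinary views. A sketch of a per-application failure-rate report over the last hour, reading from `stats_1m_app` as defined above:

```sql
-- Sketch: per-application failure rate for the last hour. With
-- materialized_only = false, TimescaleDB transparently unions the
-- materialized buckets with not-yet-materialized raw data.
SELECT
    application_name,
    SUM(total_count)  AS total,
    SUM(failed_count) AS failed,
    ROUND(100.0 * SUM(failed_count) / NULLIF(SUM(total_count), 0), 2)
        AS failure_pct
FROM stats_1m_app
WHERE bucket >= now() - INTERVAL '1 hour'
GROUP BY application_name
ORDER BY failure_pct DESC;
```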


@@ -0,0 +1,38 @@
-- V2__policies.sql - TimescaleDB policies (must run outside transaction)
-- flyway:executeInTransaction=false
-- Agent metrics retention & compression
ALTER TABLE agent_metrics SET (timescaledb.compress);
SELECT add_retention_policy('agent_metrics', INTERVAL '90 days', if_not_exists => true);
SELECT add_compression_policy('agent_metrics', INTERVAL '7 days', if_not_exists => true);
-- Continuous aggregate refresh policies
SELECT add_continuous_aggregate_policy('stats_1m_all',
start_offset => INTERVAL '1 hour',
end_offset => INTERVAL '1 minute',
schedule_interval => INTERVAL '1 minute',
if_not_exists => true);
SELECT add_continuous_aggregate_policy('stats_1m_app',
start_offset => INTERVAL '1 hour',
end_offset => INTERVAL '1 minute',
schedule_interval => INTERVAL '1 minute',
if_not_exists => true);
SELECT add_continuous_aggregate_policy('stats_1m_route',
start_offset => INTERVAL '1 hour',
end_offset => INTERVAL '1 minute',
schedule_interval => INTERVAL '1 minute',
if_not_exists => true);
SELECT add_continuous_aggregate_policy('stats_1m_processor',
start_offset => INTERVAL '1 hour',
end_offset => INTERVAL '1 minute',
schedule_interval => INTERVAL '1 minute',
if_not_exists => true);
SELECT add_continuous_aggregate_policy('stats_1m_processor_detail',
start_offset => INTERVAL '1 hour',
end_offset => INTERVAL '1 minute',
schedule_interval => INTERVAL '1 minute',
if_not_exists => true);
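Each `add_*_policy` call registers a background job. A sketch for verifying that this migration took effect, using the standard `timescaledb_information.jobs` view:

```sql
-- Sketch: list the retention, compression, and refresh jobs that
-- V2__policies.sql is expected to have registered.
SELECT job_id, proc_name, hypertable_name, schedule_interval, config
FROM timescaledb_information.jobs
WHERE proc_name IN ('policy_retention',
                    'policy_compression',
                    'policy_refresh_continuous_aggregate')
ORDER BY proc_name, job_id;
```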


@@ -1,82 +0,0 @@
package com.cameleer3.server.app;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.clickhouse.ClickHouseContainer;
import org.junit.jupiter.api.BeforeAll;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
/**
* Base class for integration tests requiring a ClickHouse instance.
* <p>
* Uses Testcontainers to spin up a ClickHouse server and initializes the schema
* from {@code clickhouse/init/01-schema.sql} before the first test runs.
* Subclasses get a {@link JdbcTemplate} for direct database assertions.
* <p>
* Container lifecycle is managed manually (started once, shared across all test classes).
*/
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ActiveProfiles("test")
public abstract class AbstractClickHouseIT {
protected static final ClickHouseContainer CLICKHOUSE;
static {
CLICKHOUSE = new ClickHouseContainer("clickhouse/clickhouse-server:25.3");
CLICKHOUSE.start();
}
@Autowired
protected JdbcTemplate jdbcTemplate;
@DynamicPropertySource
static void overrideProperties(DynamicPropertyRegistry registry) {
registry.add("spring.datasource.url", CLICKHOUSE::getJdbcUrl);
registry.add("spring.datasource.username", CLICKHOUSE::getUsername);
registry.add("spring.datasource.password", CLICKHOUSE::getPassword);
}
@BeforeAll
static void initSchema() throws Exception {
// Surefire runs from the module directory; schema is in the project root
Path baseDir = Path.of("clickhouse/init");
if (!Files.exists(baseDir)) {
baseDir = Path.of("../clickhouse/init");
}
// Load all schema files in order
String[] schemaFiles = {"01-schema.sql", "02-search-columns.sql", "03-users.sql", "04-oidc-config.sql", "05-oidc-auto-signup.sql"};
try (Connection conn = DriverManager.getConnection(
CLICKHOUSE.getJdbcUrl(),
CLICKHOUSE.getUsername(),
CLICKHOUSE.getPassword());
Statement stmt = conn.createStatement()) {
for (String schemaFile : schemaFiles) {
Path schemaPath = baseDir.resolve(schemaFile);
if (Files.exists(schemaPath)) {
String sql = Files.readString(schemaPath, StandardCharsets.UTF_8);
// Execute each statement separately (separated by semicolons)
for (String statement : sql.split(";")) {
String trimmed = statement.trim();
if (!trimmed.isEmpty()) {
stmt.execute(trimmed);
}
}
}
}
}
}
}


@@ -0,0 +1,50 @@
package com.cameleer3.server.app;
import org.opensearch.testcontainers.OpensearchContainer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.utility.DockerImageName;
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ActiveProfiles("test")
public abstract class AbstractPostgresIT {
private static final DockerImageName TIMESCALEDB_IMAGE =
DockerImageName.parse("timescale/timescaledb-ha:pg16")
.asCompatibleSubstituteFor("postgres");
static final PostgreSQLContainer<?> postgres;
static final OpensearchContainer<?> opensearch;
static {
postgres = new PostgreSQLContainer<>(TIMESCALEDB_IMAGE)
.withDatabaseName("cameleer3")
.withUsername("cameleer")
.withPassword("test");
postgres.start();
opensearch = new OpensearchContainer<>("opensearchproject/opensearch:2.19.0");
opensearch.start();
}
@Autowired
protected JdbcTemplate jdbcTemplate;
@DynamicPropertySource
static void configureProperties(DynamicPropertyRegistry registry) {
registry.add("spring.datasource.url", postgres::getJdbcUrl);
registry.add("spring.datasource.username", postgres::getUsername);
registry.add("spring.datasource.password", postgres::getPassword);
registry.add("spring.datasource.driver-class-name", () -> "org.postgresql.Driver");
registry.add("spring.flyway.enabled", () -> "true");
registry.add("spring.flyway.url", postgres::getJdbcUrl);
registry.add("spring.flyway.user", postgres::getUsername);
registry.add("spring.flyway.password", postgres::getPassword);
registry.add("opensearch.url", opensearch::getHttpHostAddress);
}
}


@@ -27,11 +27,40 @@ public class TestSecurityHelper {
}
/**
* Registers a test agent and returns a valid JWT access token for it.
* Registers a test agent and returns a valid JWT access token with AGENT role.
*/
public String registerTestAgent(String agentId) {
agentRegistryService.register(agentId, "test", "test-group", "1.0", List.of(), Map.of());
return jwtService.createAccessToken(agentId, "test-group");
return jwtService.createAccessToken(agentId, "test-group", List.of("AGENT"));
}
/**
* Returns a valid JWT access token with the given roles (no agent registration).
*/
public String createToken(String subject, String application, List<String> roles) {
return jwtService.createAccessToken(subject, application, roles);
}
/**
* Returns a valid JWT access token with OPERATOR role.
*/
public String operatorToken() {
// Subject must start with "user:" for JwtAuthenticationFilter to treat it as a UI user token
return jwtService.createAccessToken("user:test-operator", "user", List.of("OPERATOR"));
}
/**
* Returns a valid JWT access token with ADMIN role.
*/
public String adminToken() {
return jwtService.createAccessToken("user:test-admin", "user", List.of("ADMIN"));
}
/**
* Returns a valid JWT access token with VIEWER role.
*/
public String viewerToken() {
return jwtService.createAccessToken("user:test-viewer", "user", List.of("VIEWER"));
}
/**


@@ -0,0 +1,49 @@
package com.cameleer3.server.app.admin;
import com.cameleer3.server.core.admin.*;
import jakarta.servlet.http.HttpServletRequest;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.mockito.ArgumentCaptor;
import java.util.Map;
import static org.junit.jupiter.api.Assertions.*;
import static org.mockito.Mockito.*;
class AuditServiceTest {
private AuditRepository mockRepository;
private AuditService auditService;
@BeforeEach
void setUp() {
mockRepository = mock(AuditRepository.class);
auditService = new AuditService(mockRepository);
}
@Test
void log_withExplicitUsername_insertsRecordWithCorrectFields() {
var request = mock(HttpServletRequest.class);
when(request.getRemoteAddr()).thenReturn("192.168.1.1");
when(request.getHeader("User-Agent")).thenReturn("Mozilla/5.0");
auditService.log("admin", "kill_query", AuditCategory.INFRA, "PID 42",
Map.of("query", "SELECT 1"), AuditResult.SUCCESS, request);
var captor = ArgumentCaptor.forClass(AuditRecord.class);
verify(mockRepository).insert(captor.capture());
var record = captor.getValue();
assertEquals("admin", record.username());
assertEquals("kill_query", record.action());
assertEquals(AuditCategory.INFRA, record.category());
assertEquals("PID 42", record.target());
assertEquals("192.168.1.1", record.ipAddress());
assertEquals("Mozilla/5.0", record.userAgent());
}
@Test
void log_withNullRequest_handlesGracefully() {
auditService.log("admin", "test", AuditCategory.CONFIG, null, null, AuditResult.SUCCESS, null);
verify(mockRepository).insert(any(AuditRecord.class));
}
}


@@ -1,6 +1,6 @@
package com.cameleer3.server.app.controller;
import com.cameleer3.server.app.AbstractClickHouseIT;
import com.cameleer3.server.app.AbstractPostgresIT;
import com.cameleer3.server.app.TestSecurityHelper;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
@@ -18,7 +18,7 @@ import java.util.UUID;
import static org.assertj.core.api.Assertions.assertThat;
class AgentCommandControllerIT extends AbstractClickHouseIT {
class AgentCommandControllerIT extends AbstractPostgresIT {
@Autowired
private TestRestTemplate restTemplate;
@@ -29,24 +29,26 @@ class AgentCommandControllerIT extends AbstractClickHouseIT {
@Autowired
private TestSecurityHelper securityHelper;
private String jwt;
private String agentJwt;
private String operatorJwt;
@BeforeEach
void setUp() {
jwt = securityHelper.registerTestAgent("test-agent-command-it");
agentJwt = securityHelper.registerTestAgent("test-agent-command-it");
operatorJwt = securityHelper.operatorToken();
}
private ResponseEntity<String> registerAgent(String agentId, String name, String group) {
private ResponseEntity<String> registerAgent(String agentId, String name, String application) {
String json = """
{
"agentId": "%s",
"name": "%s",
"group": "%s",
"application": "%s",
"version": "1.0.0",
"routeIds": ["route-1"],
"capabilities": {}
}
""".formatted(agentId, name, group);
""".formatted(agentId, name, application);
return restTemplate.postForEntity(
"/api/v1/agents/register",
@@ -65,7 +67,7 @@ class AgentCommandControllerIT extends AbstractClickHouseIT {
ResponseEntity<String> response = restTemplate.postForEntity(
"/api/v1/agents/" + agentId + "/commands",
new HttpEntity<>(commandJson, securityHelper.authHeaders(jwt)),
new HttpEntity<>(commandJson, securityHelper.authHeaders(operatorJwt)),
String.class);
assertThat(response.getStatusCode()).isEqualTo(HttpStatus.ACCEPTED);
@@ -88,7 +90,7 @@ class AgentCommandControllerIT extends AbstractClickHouseIT {
ResponseEntity<String> response = restTemplate.postForEntity(
"/api/v1/agents/groups/" + group + "/commands",
new HttpEntity<>(commandJson, securityHelper.authHeaders(jwt)),
new HttpEntity<>(commandJson, securityHelper.authHeaders(operatorJwt)),
String.class);
assertThat(response.getStatusCode()).isEqualTo(HttpStatus.ACCEPTED);
@@ -110,7 +112,7 @@ class AgentCommandControllerIT extends AbstractClickHouseIT {
ResponseEntity<String> response = restTemplate.postForEntity(
"/api/v1/agents/commands",
new HttpEntity<>(commandJson, securityHelper.authHeaders(jwt)),
new HttpEntity<>(commandJson, securityHelper.authHeaders(operatorJwt)),
String.class);
assertThat(response.getStatusCode()).isEqualTo(HttpStatus.ACCEPTED);
@@ -131,7 +133,7 @@ class AgentCommandControllerIT extends AbstractClickHouseIT {
ResponseEntity<String> cmdResponse = restTemplate.postForEntity(
"/api/v1/agents/" + agentId + "/commands",
new HttpEntity<>(commandJson, securityHelper.authHeaders(jwt)),
new HttpEntity<>(commandJson, securityHelper.authHeaders(operatorJwt)),
String.class);
JsonNode cmdBody = objectMapper.readTree(cmdResponse.getBody());
@@ -140,7 +142,7 @@ class AgentCommandControllerIT extends AbstractClickHouseIT {
ResponseEntity<Void> ackResponse = restTemplate.exchange(
"/api/v1/agents/" + agentId + "/commands/" + commandId + "/ack",
HttpMethod.POST,
new HttpEntity<>(securityHelper.authHeadersNoBody(jwt)),
new HttpEntity<>(securityHelper.authHeadersNoBody(agentJwt)),
Void.class);
assertThat(ackResponse.getStatusCode()).isEqualTo(HttpStatus.OK);
@@ -154,7 +156,7 @@ class AgentCommandControllerIT extends AbstractClickHouseIT {
ResponseEntity<Void> response = restTemplate.exchange(
"/api/v1/agents/" + agentId + "/commands/nonexistent-cmd-id/ack",
HttpMethod.POST,
new HttpEntity<>(securityHelper.authHeadersNoBody(jwt)),
new HttpEntity<>(securityHelper.authHeadersNoBody(agentJwt)),
Void.class);
assertThat(response.getStatusCode()).isEqualTo(HttpStatus.NOT_FOUND);
@@ -168,7 +170,7 @@ class AgentCommandControllerIT extends AbstractClickHouseIT {
ResponseEntity<String> response = restTemplate.postForEntity(
"/api/v1/agents/nonexistent-agent-xyz/commands",
new HttpEntity<>(commandJson, securityHelper.authHeaders(jwt)),
new HttpEntity<>(commandJson, securityHelper.authHeaders(operatorJwt)),
String.class);
assertThat(response.getStatusCode()).isEqualTo(HttpStatus.NOT_FOUND);

Some files were not shown because too many files have changed in this diff Show More