151 Commits

Author SHA1 Message Date
hsiegeln
085c4e395b feat: execution overlay & debugger (sub-project 2)
Some checks failed
CI / cleanup-branch (push) Has been skipped
CI / build (push) Failing after 36s
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
Adds execution overlay to the ProcessDiagram component, turning it into
an after-the-fact debugger for Camel route executions.

Backend:
- Flyway V8: iteration fields (loop/split/multicast index/size) on processor_executions
- Snapshot-by-processorId endpoint for robust processor lookup
- ELK LINEAR_SEGMENTS node placement for consistent Y-alignment

Frontend:
- ExecutionDiagram wrapper: exchange bar, resizable splitter, detail panel
- Node overlay: green tint+checkmark (completed), red tint+! (failed), dimmed (skipped)
- Edge overlay: green solid (traversed), dashed gray (not traversed)
- Per-compound iteration stepper for loops/splits/multicasts
- 7-tab detail panel: Info, Headers, Input, Output, Error, Config, Timeline
- Jump to Error: selects + centers viewport on failed processor
- Triggered error handler sections highlighted with solid red frame
- Drill-down disables overlay (sub-routes show topology only)
- Integrated into ExchangeDetail page Flow view

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 19:51:55 +01:00
hsiegeln
d7166b6d0a feat: Jump to Error centers the failed node in the viewport
Added centerOnNodeId prop to ProcessDiagram. When set, the diagram
pans to center the specified node in the viewport. Jump to Error
now selects the failed processor AND centers the viewport on it.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 19:51:00 +01:00
hsiegeln
25e23c0b87 feat: highlight triggered error handler sections
When an onException/error handler section has any executed processors
(overlay entries), it renders with a stronger red tint (8% vs 3%),
a solid red border frame, and a solid divider line. This makes it
easy to identify which handler was triggered when multiple exist.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 19:47:57 +01:00
hsiegeln
cf9e847f84 fix: use design system CodeBlock for error stack trace
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 19:45:54 +01:00
hsiegeln
bfd76261ef fix: disable execution overlay when drilled into sub-route
The execution overlay data maps to the root route's processor IDs. When
drilled into a sub-route, those IDs don't match, causing all nodes to
appear dimmed. Now clears the overlay and shows pure topology when
viewing a sub-route via drill-down.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 19:43:51 +01:00
hsiegeln
0b8efa1998 fix: drill-down uses route-based fetch instead of pre-loaded layout
When drilled into a sub-route, the pre-fetched diagramLayout (loaded by
content hash for the root execution) doesn't contain the sub-route's
diagram. Only use the pre-loaded layout for the root route; fall back to
useDiagramByRoute for drilled-down sub-routes.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 19:40:20 +01:00
hsiegeln
3027e9b24f fix: scrollable headers/timeline, CodeBlock for body, ELK node alignment
- Make headers tab and timeline tab scrollable when content overflows
- Replace custom <pre> code block with design system CodeBlock component
  for body tabs (Input/Output) to match existing styleguide
- Add LINEAR_SEGMENTS node placement strategy to ELK layout to fix
  Y-offset misalignment between nodes in left-to-right diagrams
  (e.g., ENDPOINT at different Y level than subsequent processors)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 19:34:25 +01:00
hsiegeln
3d5d462de0 fix: ENDPOINT node execution state, badge position, and edge traversal
- Synthesize COMPLETED state for ENDPOINT nodes when overlay is active
  (endpoints are route entry points, not in the processor execution tree)
- Move status badge (check/error) inside the card (top-right, below top bar)
  to avoid collision with ConfigBadge (TRACE/TAP) badges
- Include ENDPOINT nodes in edge traversal check so the edge from
  endpoint to first processor renders as green/traversed

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 19:29:30 +01:00
hsiegeln
f675451384 fix: use non-passive wheel listener to prevent page scroll during diagram zoom
React's onWheel is passive by default, so preventDefault() doesn't stop
page scrolling. Attach native wheel listener with { passive: false } via
useEffect instead.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 19:24:09 +01:00
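The non-passive listener fix above can be sketched as a small helper. The shape below is illustrative, not the component's actual code; the event and element types are structural stand-ins so the sketch is self-contained (the real code uses a DOM `WheelEvent` inside a React `useEffect`):

```typescript
// Structural stand-ins for DOM types so the sketch runs anywhere.
type WheelLikeEvent = { deltaY: number; preventDefault(): void };
type WheelTarget = {
  addEventListener(type: string, fn: (e: WheelLikeEvent) => void, opts?: { passive?: boolean }): void;
  removeEventListener(type: string, fn: (e: WheelLikeEvent) => void): void;
};

// Attach a native, non-passive wheel listener. React's synthetic onWheel is
// registered passively, so preventDefault() there cannot stop page scroll;
// { passive: false } on a native listener can.
function attachNonPassiveWheel(el: WheelTarget, onWheel: (e: WheelLikeEvent) => void): () => void {
  const handler = (e: WheelLikeEvent) => {
    e.preventDefault(); // effective only because the listener is non-passive
    onWheel(e);
  };
  el.addEventListener("wheel", handler, { passive: false });
  return () => el.removeEventListener("wheel", handler); // cleanup for useEffect
}
```

The returned cleanup function matches the `useEffect` teardown contract, which is why the commit attaches the listener there.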
hsiegeln
021a52e56b feat: integrate ExecutionDiagram into ExchangeDetail flow view
Replace the RouteFlow-based flow view with the new ExecutionDiagram
component which provides execution overlay, iteration stepping, and
an integrated detail panel. The gantt view and all other page sections
remain unchanged.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 19:12:11 +01:00
hsiegeln
5ccefa3cdb feat: add ExecutionDiagram wrapper component
Composes ProcessDiagram with execution overlay data, exchange summary
bar, resizable splitter, and detail panel into a single root component.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 19:05:43 +01:00
hsiegeln
e4c66b1311 feat: add DetailPanel with 7 tabs for execution diagram overlay
Implements the bottom detail panel with processor header bar, tab bar
(Info, Headers, Input, Output, Error, Config, Timeline), and all tab
content components. Info shows processor/exchange metadata in a grid,
Headers fetches per-processor snapshots for side-by-side display,
Input/Output render formatted code blocks, Error extracts exception
types, Config is a placeholder, and Timeline renders a Gantt chart.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 19:01:53 +01:00
hsiegeln
5da03d0938 feat: add useExecutionOverlay and useIterationState hooks
useExecutionOverlay maps processor tree to overlay state map, handling
iteration filtering, sub-route failure detection, and trace data flags.
useIterationState detects compound nodes with iterated children and
manages per-compound iteration selection.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 18:56:38 +01:00
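The overlay-state derivation above can be sketched minimally: every diagram node defaults to "skipped", and executed processors override that with completed/failed. The shapes (`ProcessorExec`, `buildOverlay`) are illustrative assumptions, not the hook's actual types:

```typescript
type NodeExecutionState = "completed" | "failed" | "skipped";

// Assumed minimal shape of an executed-processor tree entry.
interface ProcessorExec {
  processorId: string;
  failed: boolean;
  children?: ProcessorExec[];
}

function buildOverlay(allNodeIds: string[], executed: ProcessorExec[]): Map<string, NodeExecutionState> {
  const overlay = new Map<string, NodeExecutionState>();
  for (const id of allNodeIds) overlay.set(id, "skipped"); // not reached unless proven otherwise
  const visit = (p: ProcessorExec): void => {
    overlay.set(p.processorId, p.failed ? "failed" : "completed");
    (p.children ?? []).forEach(visit);
  };
  executed.forEach(visit);
  return overlay;
}
```

Iteration filtering and sub-route failure detection from the commit would layer on top of this walk.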
hsiegeln
3af1d1f3b6 feat: add useProcessorSnapshotById hook for snapshot-by-processorId endpoint
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 18:54:01 +01:00
hsiegeln
1984c597de feat: add iteration stepper to compound nodes and thread overlay props
Add a left/right stepper widget to compound node headers (LOOP, SPLIT,
MULTICAST) when iteration overlay data is present. Thread executionOverlay,
overlayActive, iterationState, and onIterationChange props through
ProcessDiagram -> CompoundNode -> children and ProcessDiagram ->
ErrorSection -> children so leaf DiagramNode instances render with
execution state (green/red badges, dimming for skipped nodes).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 18:52:32 +01:00
hsiegeln
3029704051 feat: add traversed/not-traversed visual states to DiagramEdge
Add green solid edges for traversed paths and dashed gray for
not-traversed when execution overlay is active. Includes green
arrowhead marker and overlay threading through CompoundNode and
ErrorSection.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 18:47:59 +01:00
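The edge rule above reduces to a small decision: with the overlay active, an edge renders solid green only when both endpoints executed, otherwise dashed gray. `edgeStyle` is a hypothetical helper modeling that rule, not the component's API:

```typescript
type EdgeStyle = { stroke: "green" | "gray"; dashArray?: string };

function edgeStyle(sourceExecuted: boolean, targetExecuted: boolean, overlayActive: boolean): EdgeStyle {
  if (!overlayActive) return { stroke: "gray" };   // plain topology view
  return sourceExecuted && targetExecuted
    ? { stroke: "green" }                          // traversed: solid, green arrowhead
    : { stroke: "gray", dashArray: "4 4" };        // not traversed: dashed
}
```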
hsiegeln
2b805ec196 feat: add execution overlay visual states to DiagramNode
DiagramNode now accepts executionState and overlayActive props to render
execution status: green tint + checkmark badge for completed nodes, red
tint + exclamation badge for failed nodes, dimmed opacity for skipped
nodes. Duration is shown at bottom-right, and a drill-down arrow appears
for sub-route failures.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 18:44:16 +01:00
hsiegeln
ff59dc5d57 feat: add execution overlay types and extend ProcessDiagram with diagramLayout prop
Define the execution overlay type system (NodeExecutionState, IterationInfo,
DetailTab) and extend ProcessDiagramProps with optional overlay props. Add
diagramLayout prop so ExecutionDiagram can pass a pre-fetched layout by content
hash, bypassing the internal route-based fetch in useDiagramData.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 18:40:57 +01:00
hsiegeln
3928743ea7 feat: update OpenAPI spec and TypeScript types for execution overlay
Add iteration fields (loopIndex, loopSize, splitIndex, splitSize,
multicastIndex) to ProcessorNode schema. Add new endpoint path
/executions/{executionId}/processors/by-id/{processorId}/snapshot.
Remove stale diagramNodeId field that was dropped in V6 migration.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 18:38:09 +01:00
hsiegeln
cf6c4bd60c feat: add snapshot-by-processorId endpoint for robust processor lookup
Add GET /executions/{id}/processors/by-id/{processorId}/snapshot endpoint
that fetches processor snapshot data by processorId instead of positional
index, which is fragile when the tree structure changes. The existing
index-based endpoint remains unchanged for backward compatibility.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 18:34:45 +01:00
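Why by-id lookup is more robust than a positional index, as the commit above argues: a depth-first position shifts whenever the route tree changes shape, while the processorId stays stable. A minimal tree walk (hypothetical shapes, client-side analogue of the server lookup):

```typescript
interface ProcNode {
  processorId: string;
  children?: ProcNode[];
}

// Depth-first search for a node by its stable processorId.
function findByProcessorId(nodes: ProcNode[], id: string): ProcNode | undefined {
  for (const n of nodes) {
    if (n.processorId === id) return n;
    const hit = findByProcessorId(n.children ?? [], id);
    if (hit !== undefined) return hit;
  }
  return undefined;
}
```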
hsiegeln
edd841ffeb feat: add iteration fields to processor execution storage
Add loop_index, loop_size, split_index, split_size, multicast_index
columns to processor_executions table and thread them through the
full storage → ingestion → detail pipeline. These fields enable
execution overlay to display iteration context for loop, split,
and multicast EIPs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 18:32:47 +01:00
hsiegeln
889f0e5263 chore: add .worktrees/ to .gitignore for worktree isolation 2026-03-27 18:27:34 +01:00
hsiegeln
3a41e1f1d3 docs: add execution overlay implementation plan (sub-project 2)
12 tasks covering backend prerequisites (iteration fields, snapshot-by-id
endpoint), ProcessDiagram overlay props, node/edge visual states, compound
iteration stepper, detail panel with 7 tabs, ExecutionDiagram wrapper,
and ExchangeDetail page integration.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 18:25:47 +01:00
hsiegeln
509159417b docs: add execution overlay & debugger design spec (sub-project 2)
Design for overlaying real execution data onto the ProcessDiagram:
- Node status visualization (green OK, red failed, dimmed skipped)
- Per-compound iteration stepping for loops/splits
- Tabbed detail panel (Info, Headers, Input, Output, Error, Config, Timeline)
- Jump to Error with cross-route drill-down
- Backend prerequisites for iteration fields and snapshot-by-id endpoint

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 18:13:03 +01:00
hsiegeln
30c8fe1091 feat: add minimap overview to process diagram
All checks were successful
Small overview panel in the bottom-left showing the full diagram
layout with colored node rectangles and an amber viewport indicator.
Click or drag on the minimap to pan the main diagram.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 17:16:05 +01:00
hsiegeln
b1ff05439a docs: update design spec and increase section gap to 80px
All checks were successful
Update design spec with implementation notes covering recursive
compound nesting, edge z-ordering, ON_COMPLETION sections, drill-down
navigation, CSS transform zoom, and HTML overlay toolbar.

Increase SECTION_GAP to 80px for better visual separation between
completion and error handler sections.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 17:10:01 +01:00
hsiegeln
eb9c20e734 feat: drill-down into sub-routes with breadcrumb navigation
All checks were successful
Double-click a DIRECT or SEDA node to navigate into that route's
diagram. Breadcrumbs show the route stack and allow clicking back
to any level. Escape key goes back one level.

Route ID resolution handles camelCase endpoint URIs mapping to
kebab-case route IDs (e.g. direct:callGetProduct → call-get-product)
using the catalog's known route IDs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 16:58:35 +01:00
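The route ID resolution above relies on a camelCase-to-kebab-case transform checked against the catalog's known route IDs. A minimal sketch, assuming the real code does more (scheme handling, fallbacks) than this:

```typescript
// Convert an endpoint URI to a kebab-case route id candidate,
// e.g. direct:callGetProduct -> call-get-product, and only trust it
// if the catalog actually knows that route id.
function endpointToRouteId(uri: string, knownRouteIds: Set<string>): string | undefined {
  const name = uri.split(":").pop() ?? uri;                          // drop "direct:" / "seda:" scheme
  const kebab = name.replace(/([a-z0-9])([A-Z])/g, "$1-$2").toLowerCase();
  return knownRouteIds.has(kebab) ? kebab : undefined;
}
```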
hsiegeln
f6220a9f89 feat: support ON_COMPLETION handler sections in diagram
Add ON_COMPLETION to backend COMPOUND_TYPES and frontend rendering.
Completion handlers render as teal-tinted sections between the main
flow and error handlers, structurally parallel to onException.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 16:45:10 +01:00
hsiegeln
9b7626f6ff fix: diagram rendering improvements
All checks were successful
- Recursive compound rendering: CompoundNode checks if children are
  themselves compound types (WHEN inside CHOICE) and renders them
  recursively. Added EIP_WHEN, EIP_OTHERWISE, DO_CATCH, DO_FINALLY
  to frontend COMPOUND_TYPES.
- Edge z-ordering: edges are distributed to their containing compound
  and rendered after the background rect, so they're not hidden behind
  compound containers.
- Error section sizing: normalize error handler node coordinates to
  start at (0,0), compute red tint background height from actual
  content with symmetric padding for vertical centering.
- Toolbar as HTML overlay: moved from SVG foreignObject to an absolutely
  positioned HTML div so it stays fixed size at any zoom level. Uses
  design system tokens for consistent styling.
- Zoom: replaced viewBox approach with CSS transform on content group.
  Default zoom is 100% anchored top-left. Fit-to-view still available
  via button.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 16:33:24 +01:00
hsiegeln
20d1182259 fix: recursive compound nesting, fixed node width, zoom crash
All checks were successful
ELK renderer:
- Add EIP_WHEN, EIP_OTHERWISE, DO_CATCH, DO_FINALLY to COMPOUND_TYPES
  so branch body processors nest inside their containers
- Rewrite node creation and result extraction as recursive methods
  to support compound-inside-compound (CHOICE → WHEN → processors)
- Use fixed NODE_WIDTH=160 for leaf nodes instead of variable width

Frontend:
- Fix mousewheel crash: capture getBoundingClientRect() before
  setState updater (React nulls currentTarget after handler returns)
- Anchor fitToView to top-left instead of centering

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 14:26:35 +01:00
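The mousewheel crash fix above can be modeled in isolation: React nulls `e.currentTarget` after the handler returns, so reading it inside a deferred setState updater crashes; snapshotting `getBoundingClientRect()` synchronously avoids that. Types here are simplified stand-ins, not the component's real ones:

```typescript
type Rect = { left: number; top: number };
type FakeWheelEvent = {
  currentTarget: { getBoundingClientRect(): Rect } | null;
  deltaY: number;
};

function makeZoomUpdater(e: FakeWheelEvent): (prev: { zoom: number }) => { zoom: number; anchorX: number } {
  // Snapshot BEFORE the handler returns; by the time the updater runs,
  // currentTarget may already have been nulled.
  const rect = e.currentTarget!.getBoundingClientRect();
  const factor = e.deltaY < 0 ? 1.1 : 0.9;
  return (prev) => ({ zoom: prev.zoom * factor, anchorX: rect.left });
}
```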
hsiegeln
afcb7d3175 fix: DevDiagram page uses time range and correct catalog shape
All checks were successful
The dev diagram page was calling useRouteCatalog() without time range
params (returned empty) and parsing the wrong response shape (expected
flat {application, routeId} but catalog returns {appId, routes[]}).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 14:05:32 +01:00
hsiegeln
ac32396a57 feat: add interactive ProcessDiagram SVG component (sub-project 1/3)
All checks were successful
New interactive route diagram component with SVG rendering using
server-computed ELK layout coordinates. TIBCO BW5-inspired top-bar
card node style with zoom/pan, hover toolbars, config badges, and
error handler sections below the main flow.

Backend: add direction query parameter (LR/TB) to diagram render
endpoints, defaulting to left-to-right layout.

Frontend: 14-file ProcessDiagram component in ui/src/components/
with DiagramNode, CompoundNode, DiagramEdge, ConfigBadge, NodeToolbar,
ErrorSection, ZoomControls, and supporting hooks. Dev test page at
/dev/diagram for validation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 13:55:29 +01:00
hsiegeln
78e12f5cf9 fix: separate onException/errorHandler into distinct RouteFlow segments
All checks were successful
ON_EXCEPTION and ERROR_HANDLER nodes are now treated as compound containers
in the ELK diagram renderer, nesting their children. The frontend
diagram-mapping builds separate FlowSegments for each error handler,
displayed as distinct sections in the RouteFlow component.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 09:15:06 +01:00
hsiegeln
62709ce80b feat: include tap attributes in cmd-K full-text search
All checks were successful
Add attributes_text flattened field to OpenSearch indexing for both
execution and processor levels. Include in full-text search queries,
wildcard matching, and highlighting. Merge processor-level attributes
into ExecutionSummary. Add 'attribute' category to CommandPalette
(design-system 0.1.17) with per-key-value results in the search UI.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 08:13:58 +01:00
hsiegeln
ea88042ef5 fix: exclude search endpoint from audit log
All checks were successful
POST /api/v1/search/executions is a read-only query using POST for the
request body. Skip it in AuditInterceptor to avoid flooding the audit
log with search operations.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 23:55:24 +01:00
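The interceptor decision above is essentially a predicate: skip the read-only search endpoint even though it uses POST. A TypeScript model of that rule (the real AuditInterceptor is Java; treating all GETs as non-audited is an extra assumption for the sketch, not something the commit states):

```typescript
function shouldAudit(method: string, path: string): boolean {
  if (method === "GET") return false; // assumption: reads are never audited
  if (method === "POST" && path === "/api/v1/search/executions") return false; // read-only POST query
  return true; // everything else mutates state and gets logged
}
```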
hsiegeln
cde79bd172 fix: remove stale diagramNodeId from test ProcessorRecord constructors
All checks were successful
TreeReconstructionTest and PostgresExecutionStoreIT still passed the
removed diagramNodeId parameter. Missed by mvn compile (main only);
caught by mvn verify (test compilation).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 23:40:13 +01:00
hsiegeln
a2a8e4ae3f feat: rename logForwardingLevel to applicationLogLevel, add agentLogLevel
Some checks failed
Align with cameleer3-common rename: logForwardingLevel → applicationLogLevel
(root logger) and new agentLogLevel (com.cameleer3 logger). Both fields
are on ApplicationConfig, pushed via config-update. UI shows "App Log Level"
and "Agent Log Level" on AppConfig slide-in, AgentHealth config bar, and
AppConfigDetailPage.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 23:36:31 +01:00
hsiegeln
6e187ccb48 feat: native TRACE log level with design system 0.1.16
Some checks failed
Map TRACE to its own 'trace' level instead of grouping with DEBUG,
now that the design system LogViewer supports it natively.
Bump @cameleer/design-system to 0.1.16.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 23:07:42 +01:00
hsiegeln
862a27b0b8 feat: add TRACE log level support across UI
Some checks failed
Add TRACE option to log forwarding level dropdowns (AppConfig,
AgentHealth), badge color mapping, and log filter ButtonGroups
on all pages that display application logs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 23:03:15 +01:00
hsiegeln
d6c1f2c25b refactor: derive processor-route mapping from diagrams instead of executions
Some checks failed
Store application_name in route_diagrams at ingestion time (V7 migration),
resolve from agent registry same as ExecutionController. Move
findProcessorRouteMapping from ExecutionStore to DiagramStore using a
JSONB query that extracts node IDs directly from stored RouteGraph
definitions. This makes the mapping available as soon as diagrams are
sent, before any executions are recorded.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 23:00:10 +01:00
hsiegeln
100b780b47 refactor: remove diagramNodeId indirection, use processorId directly
Some checks failed
Agent now uses Camel processorId as RouteNode.id, eliminating the
nodeId mapping layer. Drop diagram_node_id column (V6 migration),
remove from ProcessorRecord/ProcessorNode/IngestionService/DetailService,
add /processor-routes endpoint for processorId→routeId lookup,
simplify frontend diagram-mapping and ExchangeDetail overlays,
replace N diagram fetches in AppConfigPage with single hook.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 22:44:07 +01:00
hsiegeln
bd63a8ce95 feat: App Config slide-in with Route column, clickable taps, and edit toolbar
All checks were successful
- Add Route column to Traces & Taps table (diagram-based mapping, pending backend fix)
- Make tap badges clickable to navigate to route's Taps tab
- Add edit/save/cancel toolbar with design system Button components
- Move Sampling Rate to last position in settings grid
- Support ?tab= URL param on RouteDetail for direct tab navigation
- Bump @cameleer/design-system to 0.1.15 (DetailPanel overlay + backdrop)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 22:26:28 +01:00
hsiegeln
ef9ec6069f fix: improve App Config slide-in panel layout
All checks were successful
- Narrowed panel from 640px to 520px so main table columns stay visible
- Settings grid uses CSS grid (3 columns) for proper wrapping
- Removed unused PanelActions component that caused white footer bar

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 21:49:03 +01:00
hsiegeln
bf84f1814f feat: convert App Config detail to slide-in DetailPanel
All checks were successful
Replaces the separate AppConfigDetailPage route with a 640px-wide
DetailPanel that slides in when clicking a row on the App Config
overview table. All editing functionality (settings, traces & taps,
route recording) is preserved inside the panel.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 21:44:30 +01:00
hsiegeln
00efaf0ca0 chore: bump @cameleer/design-system to 0.1.14
All checks were successful
Picks up LogViewer background fix (removes --bg-inset for consistent
card backgrounds).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 21:35:11 +01:00
hsiegeln
900b6f45c5 fix: use pencil and trash icons for tap row actions
All checks were successful
Replaces text "Edit"/"Del" buttons with pencil and trash can icon
buttons matching the style used elsewhere in the UI.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 21:32:05 +01:00
hsiegeln
dd6ea7563f feat: use Toggle switch for metrics setting on AgentHealth config bar
Some checks failed
Replaces the plain checkbox with the design system Toggle component
for consistency with the recording toggle on RouteDetail and
AppConfigDetailPage.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 21:30:35 +01:00
hsiegeln
57bb84a2df fix: align edit and save/cancel buttons after badges on AgentHealth
Some checks failed
Moved edit pencil and save/cancel actions to sit right after the last
badge field instead of at the start or far right of the config bar.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 21:28:30 +01:00
hsiegeln
a0fbf785c3 fix: move config edit button to right side of badges on AgentHealth
Some checks failed
Moved the pencil edit button after the badge fields and added
margin-left: auto to push it to the far right of the config bar.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 21:27:01 +01:00
hsiegeln
91e51d4f6a feat: show configured taps count on Admin App Config overview
All checks were successful
New Taps column shows enabled/total count as a badge (e.g. "2/3")
next to the existing Traced column.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 21:22:59 +01:00
hsiegeln
b52d588fc5 feat: add tooltips to tap attribute type selector buttons
All checks were successful
Each type option now shows a descriptive tooltip on hover explaining
its purpose: Business Object (key identifiers), Correlation (cross-route
linking), Event (business events), Custom (general purpose).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 19:47:39 +01:00
hsiegeln
23b23bbb66 fix: replace crypto.randomUUID with fallback for non-HTTPS contexts
Some checks failed
crypto.randomUUID() requires a secure context (HTTPS). Since the server
may be accessed via HTTP, use a timestamp + random string ID instead.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 19:46:32 +01:00
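The secure-context fallback above can be sketched as: prefer `crypto.randomUUID()` where available, otherwise build a timestamp + random string ID. `newClientId` is a hypothetical name for illustration:

```typescript
function newClientId(): string {
  const c = (globalThis as { crypto?: { randomUUID?: () => string } }).crypto;
  if (c?.randomUUID) return c.randomUUID(); // secure contexts (HTTPS, localhost)
  // Fallback for plain-HTTP contexts where randomUUID is unavailable.
  return `${Date.now().toString(36)}-${Math.random().toString(36).slice(2, 10)}`;
}
```

The fallback is unique enough for client-side keys but not cryptographically strong, which is fine for this use.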
hsiegeln
82b47f4364 fix: use design system status tokens for test expression result alerts
All checks were successful
Replaces hardcoded dark-theme hex fallbacks with proper tokens from
tokens.css: --success-bg/--success-border/--success for success and
--error-bg/--error-border/--error for errors. Works in both themes.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 19:38:24 +01:00
hsiegeln
e4b2dd2604 fix: use design system tokens for tap type selector active state
Some checks failed
The active type option was invisible because --accent-primary doesn't
exist in the design system. Now uses --amber-bg/--amber-deep/--amber
from tokens.css for a clearly visible selected state matching the
brand accent palette.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 19:37:12 +01:00
hsiegeln
3b31e69ae4 chore: regenerate openapi.json and schema.d.ts from live server
All checks were successful
Updated types now include attributes on ExecutionDetail, ProcessorNode,
and ExecutionSummary from the actual API. Removed stale detail.children
fallback that no longer exists in the schema.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 19:22:55 +01:00
hsiegeln
499fd7f8e8 fix: accept ISO datetime for audit log from/to parameters
All checks were successful
The frontend sends full ISO timestamps (e.g. 2026-03-19T17:55:29Z) but
the controller expected LocalDate (yyyy-MM-dd). The mismatch left the
parsed parameters null, which threw a NullPointerException in the
repository WHERE clause. Changed to accept Instant directly with
sensible defaults (last 7 days).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 19:07:09 +01:00
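The default-range behaviour described in that fix can be sketched as follows. This is a hypothetical TypeScript helper, not the actual (Java) controller code; the function name and shape are assumptions:

```typescript
// Sketch of the from/to handling described above: accept full ISO-8601
// instants and default to a trailing 7-day window when a bound is missing.
function resolveAuditRange(
  from?: string,
  to?: string,
): { from: Date; to: Date } {
  const toDate = to ? new Date(to) : new Date();
  const fromDate = from
    ? new Date(from)
    : new Date(toDate.getTime() - 7 * 24 * 60 * 60 * 1000);
  if (isNaN(fromDate.getTime()) || isNaN(toDate.getTime())) {
    throw new Error("from/to must be ISO-8601 instants");
  }
  return { from: fromDate, to: toDate };
}
```

Accepting the full instant avoids the LocalDate round-trip entirely, so a timestamp like `2026-03-19T17:55:29Z` parses directly instead of becoming null.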
hsiegeln
1080c76e99 feat: wire attributes from RouteExecution/ProcessorExecution into storage
All checks were successful
CI / cleanup-branch (push) Has been skipped
CI / build (push) Successful in 54s
CI / docker (push) Successful in 36s
CI / deploy-feature (push) Has been skipped
CI / deploy (push) Successful in 36s
Replaces null placeholders with actual getAttributes() calls now that
cameleer3-common SNAPSHOT is resolved with attributes support.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 19:03:18 +01:00
hsiegeln
7f58bca0e6 chore: update IngestionService TODO comments for attributes wiring
All checks were successful
CI / cleanup-branch (push) Has been skipped
CI / build (push) Successful in 59s
CI / docker (push) Successful in 50s
CI / deploy-feature (push) Has been skipped
CI / deploy (push) Successful in 37s
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 18:59:17 +01:00
hsiegeln
c087e4af08 fix: add missing attributes parameter to test record constructors
Tests were not updated when attributes field was added to ExecutionRecord,
ProcessorRecord, ProcessorDoc, and ExecutionDocument records.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 18:58:44 +01:00
hsiegeln
387ed44989 fix: add missing attributes parameter to test record constructors
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-26 18:58:32 +01:00
hsiegeln
64b677696e feat(ui): restructure AppConfigDetailPage into 3 sections
Some checks failed
CI / cleanup-branch (push) Has been skipped
CI / build (push) Failing after 32s
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
Merge Logging + Observability into unified "Settings" section with
flex-wrap badge grid including new compressSuccess toggle. Merge
Traced Processors with Taps into "Traces & Taps" section showing
capture mode and tap badges per processor. Add "Route Recording"
section with per-route toggles sourced from route catalog. All new
fields (compressSuccess, routeRecording) included in form state
and save payload.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 18:48:14 +01:00
hsiegeln
78813ea15f feat(ui): add taps DataTable, CRUD modal with test expression to RouteDetail
- Replace taps tab placeholder with full DataTable showing all route taps
- Add columns: attribute, processor, expression, language, target, type, enabled toggle, actions
- Add tap modal with form fields: attribute name, processor select, language, target, expression, type selector
- Implement inline enable/disable toggle per tap row
- Add ConfirmDialog for tap deletion
- Add test expression section with Recent Exchange and Custom Payload tabs
- Add save/edit/delete tap operations via application config update
- Add all supporting CSS module classes (no inline styles)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 18:44:36 +01:00
hsiegeln
807e191397 feat(ui): add recording toggle, active taps KPI, and taps tab to RouteDetail
- Add Toggle for route recording on/off in the route header
- Fetch application config to determine recording state and route taps
- Add Active Taps KPI card showing enabled/total tap counts
- Add Taps tab to the tabbed section with placeholder content

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 18:44:06 +01:00
hsiegeln
47ff122c48 feat: add Attributes column to Dashboard exchanges table
Shows up to 2 attribute badges (color="auto") per row with a +N overflow
indicator; empty rows render a muted dash. Uses CSS module classes only.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-26 18:36:53 +01:00
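The truncation rule from that commit ("up to 2 attribute badges ... with a +N overflow indicator") can be sketched as a small pure function. Names here are assumptions for illustration, not the actual component code:

```typescript
// Hypothetical sketch of the badge-truncation rule described above:
// render at most two attribute badges, then a "+N" overflow count.
function attributeBadges(
  attributes: Record<string, string> | undefined,
): { badges: string[]; overflow: number } {
  const entries = Object.entries(attributes ?? {});
  const badges = entries.slice(0, 2).map(([key, value]) => `${key}=${value}`);
  return { badges, overflow: Math.max(0, entries.length - 2) };
}
```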
hsiegeln
eb796f531f feat(ui): add replay modal to ExchangeDetail page
Add a Replay button in the exchange header that opens a modal allowing
users to re-send the exchange to a live agent. The modal pre-populates
headers and body from the original exchange input, provides an agent
selector filtered to live agents for the application, and supports
editable header key-value rows with add/remove.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 18:35:00 +01:00
hsiegeln
a3706cf7c2 feat(ui): display business attributes on ExchangeDetail page
Show route-level attributes as Badge strips in the exchange header
card, and per-processor attributes above the message IN/OUT panels.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 18:33:16 +01:00
hsiegeln
2b1d49c032 feat: add TapDefinition, extend ApplicationConfig, and add API hooks
- Add TapDefinition interface for tap configuration
- Extend ApplicationConfig with taps, tapVersion, routeRecording, compressSuccess
- Add useTestExpression mutation hook (manual fetch to new endpoint)
- Add useReplayExchange mutation hook (uses api client, targets single agent)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-26 18:29:52 +01:00
hsiegeln
ae1ee38441 feat: add attributes fields to schema.d.ts types
Add optional `attributes?: Record<string, string>` to ExecutionSummary,
ExecutionDetail, and ProcessorNode in the manually-maintained OpenAPI
schema to reflect the new backend attributes support.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-26 18:29:47 +01:00
hsiegeln
d6d96aad07 feat: add TEST_EXPRESSION command with request-reply infrastructure
Adds CompletableFuture-based request-reply mechanism for commands that
need synchronous results. CommandReply record in core, pendingReplies
map in AgentRegistryService, test-expression endpoint on config controller
with 5s timeout. CommandAckRequest extended with optional data field.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 18:27:59 +01:00
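The request-reply mechanism described above (a pending-replies map keyed by command id, resolved when the agent's ack carries data, failed after a 5s timeout) can be sketched like this. The backend is Java/CompletableFuture; this TypeScript/Promise version is an illustrative analogue with assumed names:

```typescript
// Pending replies keyed by command id; each entry resolves the caller's
// promise when the matching ack arrives.
const pendingReplies = new Map<string, (data: string) => void>();

// Wait for a reply to a previously sent command, rejecting on timeout.
function awaitReply(commandId: string, timeoutMs = 5000): Promise<string> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => {
      pendingReplies.delete(commandId);
      reject(new Error(`no reply for ${commandId} within ${timeoutMs}ms`));
    }, timeoutMs);
    pendingReplies.set(commandId, (data) => {
      clearTimeout(timer);
      pendingReplies.delete(commandId);
      resolve(data);
    });
  });
}

// Called when a command ack with an optional data field arrives.
function onCommandAck(commandId: string, data?: string): void {
  pendingReplies.get(commandId)?.(data ?? "");
}
```

The timeout cleanup matters: without deleting the map entry on both paths, abandoned commands would leak resolvers.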
hsiegeln
2d6cc4c634 feat(search): deserialize and surface attributes in detail service and OpenSearch indexing (Task 4)
DetailService deserializes attributes JSON from ExecutionRecord/ProcessorRecord and
passes them to ExecutionDetail and ProcessorNode constructors. ExecutionDocument and
ProcessorDoc carry attributes as a JSON string. SearchIndexer passes attributes when
building documents. OpenSearchIndex includes attributes in indexed maps and
deserializes them when constructing ExecutionSummary from search hits.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-26 18:23:47 +01:00
hsiegeln
ca5250c134 feat(ingestion): wire attributes through ingestion pipeline into PostgreSQL (Task 3)
IngestionService passes attributes (currently null, pending cameleer3-common update)
to ExecutionRecord and ProcessorRecord. PostgresExecutionStore includes the
attributes column in INSERT and ON CONFLICT UPDATE (with COALESCE), and reads
it back in both row mappers.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-26 18:23:38 +01:00
hsiegeln
64f797bd96 feat(core): add attributes field to storage records and detail/summary models (Task 2)
Adds Map<String,String> attributes to ExecutionRecord, ProcessorRecord,
ExecutionDetail, ProcessorNode, and ExecutionSummary. ExecutionStore records
carry attributes as a JSON string; detail/summary models carry deserialized maps.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-26 18:23:32 +01:00
hsiegeln
f08461cf35 feat(db): add attributes JSONB columns to executions and processor_executions (Task 1)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-26 18:23:26 +01:00
hsiegeln
2b5d803a60 docs: add implementation plan for taps, attributes, replay UI features
14-task plan covering: database migration, attributes pipeline, test-expression
command with request-reply, OpenAPI regeneration, frontend types/hooks,
ExchangeDetail attributes + replay modal, Dashboard attributes column,
RouteDetail recording toggle + taps tab + tap CRUD modal, and
AppConfigDetailPage restructure.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 18:13:58 +01:00
hsiegeln
e3902cd85f docs: add UI design spec for taps, attributes, replay, recording & compression
Covers all 5 new agent features: tap management on RouteDetail, business
attributes display on ExchangeDetail/Dashboard, enhanced replay with
editable payload, per-route recording toggles, and success compression.
Includes backend prerequisites, RBAC matrix, and TypeScript interfaces.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 17:48:20 +01:00
hsiegeln
25ca8d5132 feat: show log indices on OpenSearch admin page
All checks were successful
CI / cleanup-branch (push) Has been skipped
CI / build (push) Successful in 54s
CI / docker (push) Successful in 47s
CI / deploy-feature (push) Has been skipped
CI / deploy (push) Successful in 35s
Add prefix query parameter to /admin/opensearch/indices endpoint so
the UI can fetch execution and log indices separately. OpenSearch admin
page now shows two card sections: Execution Indices and Log Indices,
each with doc count and size summary. Page restyled with CSS module
replacing inline styles. Delete endpoint also allows log index deletion.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 16:47:44 +01:00
hsiegeln
0d94132c98 feat: SOC2 audit log completeness — hybrid interceptor + explicit calls
All checks were successful
CI / cleanup-branch (push) Has been skipped
CI / build (push) Successful in 54s
CI / docker (push) Successful in 51s
CI / deploy-feature (push) Has been skipped
CI / deploy (push) Successful in 37s
Add AuditInterceptor as a safety net that auto-audits any POST/PUT/DELETE
without an explicit audit call (excludes data ingestion + heartbeat).
AuditService sets a request attribute so the interceptor skips when
explicit logging already happened.

New explicit audit calls:
- ApplicationConfigController: view/update app config
- AgentCommandController: send/broadcast commands (AGENT category)
- AgentRegistrationController: agent register + token refresh
- UiAuthController: UI token refresh
- OidcAuthController: OIDC callback failure
- AuditLogController: view audit log (sensitive read)
- UserAdminController: view users (sensitive read)
- OidcConfigAdminController: view OIDC config (sensitive read)

New AuditCategory.AGENT added. Frontend audit log filter updated.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 16:41:10 +01:00
hsiegeln
0e6de69cd9 feat: add App Config detail page with view/edit mode
All checks were successful
CI / cleanup-branch (push) Has been skipped
CI / build (push) Successful in 53s
CI / docker (push) Successful in 52s
CI / deploy-feature (push) Has been skipped
CI / deploy (push) Successful in 38s
Click a row in the admin App Config table to navigate to a dedicated
detail page at /admin/appconfig/:appId. Shows all config fields as
badges in view mode; pencil toggles to edit mode with dropdowns.

Traced processors are now editable (capture mode dropdown + remove
button per processor). Sections and header use card styling for
visual contrast. OidcConfigPage gets the same card treatment.

List page simplified to read-only badge overview with row click
navigation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 16:15:27 +01:00
hsiegeln
e53274bcb9 fix: LogViewer and EventFeed scroll to top on load
All checks were successful
CI / cleanup-branch (push) Has been skipped
CI / build (push) Successful in 56s
CI / docker (push) Successful in 1m9s
CI / deploy-feature (push) Has been skipped
CI / deploy (push) Successful in 35s
Update design system to v0.1.13 where both components scroll to the
top (newest entries) instead of the bottom, matching the descending
sort order used across the UI.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 15:54:56 +01:00
hsiegeln
4433b26bf8 fix: move pencil/save buttons to start of config bar for consistency
All checks were successful
CI / cleanup-branch (push) Has been skipped
CI / build (push) Successful in 57s
CI / docker (push) Successful in 50s
CI / deploy-feature (push) Has been skipped
CI / deploy (push) Successful in 35s
Pencil icon and Save/Cancel buttons now appear at the left side of
the AgentHealth config bar, matching the admin overview table where
the edit column is at the start of each row.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 15:38:36 +01:00
hsiegeln
74fa08f41f fix: visible Save/Cancel buttons on AgentHealth config edit mode
All checks were successful
CI / cleanup-branch (push) Has been skipped
CI / build (push) Successful in 56s
CI / docker (push) Successful in 52s
CI / deploy-feature (push) Has been skipped
CI / deploy (push) Successful in 37s
Replace subtle Unicode checkmark/X with proper labeled buttons styled
as primary (Save) and secondary (Cancel) for better visibility.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 13:20:11 +01:00
hsiegeln
4b66d78cf4 refactor: config settings shown as badges with pencil-to-edit
All checks were successful
CI / cleanup-branch (push) Has been skipped
CI / build (push) Successful in 56s
CI / docker (push) Successful in 47s
CI / deploy-feature (push) Has been skipped
CI / deploy (push) Successful in 36s
Settings (log level, engine level, payload capture, metrics) now
display as color-coded badges by default. Clicking the pencil icon
enters edit mode where badges become dropdowns. Save (checkmark)
persists changes and reverts to badge view; cancel discards changes.

Applied consistently on both the admin App Config page and the
AgentHealth config bar.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 13:12:56 +01:00
hsiegeln
b1c2950b1e fix: add id field to AppConfigPage DataTable rows
All checks were successful
CI / cleanup-branch (push) Has been skipped
CI / build (push) Successful in 2m51s
CI / docker (push) Successful in 1m9s
CI / deploy-feature (push) Has been skipped
CI / deploy (push) Successful in 35s
DataTable requires rows with an { id: string } constraint. Map
ApplicationConfig to ConfigRow adding id from the application field.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 12:55:19 +01:00
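The row mapping described in that fix can be sketched in a few lines. Interface shapes here are assumptions reduced to the relevant fields:

```typescript
// DataTable rows must satisfy an { id: string } constraint; derive the
// id from the unique application field as described above.
interface ApplicationConfig {
  application: string;
  logLevel?: string;
}

type ConfigRow = ApplicationConfig & { id: string };

function toConfigRows(configs: ApplicationConfig[]): ConfigRow[] {
  return configs.map((c) => ({ ...c, id: c.application }));
}
```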
hsiegeln
b0484459a2 feat: add application config overview and inline editing
Some checks failed
CI / cleanup-branch (push) Has been skipped
CI / build (push) Failing after 22s
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
Add admin page at /admin/appconfig with a DataTable showing all
application configurations. Inline dropdowns allow editing log level,
engine level, payload capture mode, and metrics toggle directly from
the table. Changes push to agents via SSE immediately.

Also adds a config bar on the AgentHealth page (/agents/:appId) for
per-application config management with the same 4 settings.

Backend: GET /api/v1/config list endpoint, findAll() on repository,
sensible defaults for logForwardingLevel/engineLevel/payloadCaptureMode.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 12:51:07 +01:00
hsiegeln
056a6f0ff5 feat: sidebar exchange counts respect selected time range
All checks were successful
CI / cleanup-branch (push) Has been skipped
CI / build (push) Successful in 2m47s
CI / docker (push) Successful in 48s
CI / deploy-feature (push) Has been skipped
CI / deploy (push) Successful in 36s
The /routes/catalog endpoint now accepts optional from/to query
parameters instead of hardcoding a 24h window. The UI passes the
global filter time range so sidebar counts match what the user sees.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 12:21:10 +01:00
hsiegeln
f4bf38fcba feat: add inspect column to agent instance data table
All checks were successful
CI / build (push) Successful in 58s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 58s
CI / deploy (push) Successful in 35s
CI / deploy-feature (push) Has been skipped
Add a dedicated inspect button column (↗) to navigate to the agent
instance page, consistent with the exchange inspect pattern on the
Dashboard. Row click still opens the detail slide-in panel.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 12:04:06 +01:00
hsiegeln
15632a2170 fix: show full exchange ID in breadcrumb
All checks were successful
CI / build (push) Successful in 53s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 47s
CI / deploy (push) Successful in 35s
CI / deploy-feature (push) Has been skipped
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 11:49:41 +01:00
hsiegeln
479b67cd2d refactor: consolidate breadcrumbs to single TopBar instance
All checks were successful
CI / build (push) Successful in 1m1s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 1m11s
CI / deploy (push) Successful in 35s
CI / deploy-feature (push) Has been skipped
Remove duplicate in-page breadcrumbs (ExchangeDetail, AgentHealth scope
trail) and improve the global TopBar breadcrumb with semantic labels and
a context-based override for pages with richer navigation data.

- Add BreadcrumbProvider from design system v0.1.12
- LayoutShell: label map prettifies URL segments (apps→Applications, etc.)
- ExchangeDetail: uses useBreadcrumb() to set semantic trail via context
- AgentHealth: remove scope trail, keep live-count badge standalone

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 11:40:37 +01:00
hsiegeln
bde0459416 fix: prevent log viewer flicker on ExchangeDetail page
All checks were successful
CI / build (push) Successful in 1m0s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 1m12s
CI / deploy (push) Successful in 35s
CI / deploy-feature (push) Has been skipped
Skip global time range in the logs query key when filtering by
exchangeId (exchange logs are historical, the sliding time window is
irrelevant). Add placeholderData to keep previous results visible
during query key transitions on other pages.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 11:03:38 +01:00
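The query-key rule described above can be sketched as a pure function: when filtering by exchangeId the logs are historical, so the sliding global time range is omitted from the key and cannot trigger refetch flicker. Function and key names are assumptions:

```typescript
// Hedged sketch of the flicker fix described above: an exchange-scoped
// logs query key ignores the global time range entirely.
function logsQueryKey(filters: {
  exchangeId?: string;
  from?: string;
  to?: string;
}): unknown[] {
  if (filters.exchangeId) {
    return ["logs", { exchangeId: filters.exchangeId }];
  }
  return ["logs", { from: filters.from, to: filters.to }];
}
```

Because the key is stable across time-range ticks, React Query keeps serving the cached result instead of flashing a loading state.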
hsiegeln
a01712e68c fix: use .keyword suffix on both exchangeId term queries
All checks were successful
CI / build (push) Successful in 1m1s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 41s
CI / deploy (push) Successful in 36s
CI / deploy-feature (push) Has been skipped
Defensive: use .keyword on the top-level exchangeId field too, in
case indices were created before the explicit keyword mapping was
added to the template.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 10:45:59 +01:00
hsiegeln
9aa78f681d fix: use .keyword suffix for MDC exchangeId term query
Some checks failed
CI / docker (push) Has been cancelled
CI / deploy (push) Has been cancelled
CI / deploy-feature (push) Has been cancelled
CI / cleanup-branch (push) Has been cancelled
CI / build (push) Has been cancelled
Dynamically mapped string fields in OpenSearch are multi-field
(text + keyword). Term queries require the .keyword sub-field for
exact matching.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 10:45:14 +01:00
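The final query shape produced by this pair of fixes (bool/should over both locations of the exchange id, each via the `.keyword` sub-field for exact term matching) can be sketched as plain OpenSearch DSL built in TypeScript. Field names are taken from the commit messages; the builder itself is hypothetical:

```typescript
// Sketch of the exchange-log query described above: match either the
// top-level exchangeId (new records) or mdc.camel.exchangeId (old
// records), using .keyword sub-fields for exact matching.
function exchangeLogQuery(exchangeId: string) {
  return {
    bool: {
      should: [
        { term: { "exchangeId.keyword": exchangeId } },
        { term: { "mdc.camel.exchangeId.keyword": exchangeId } },
      ],
      minimum_should_match: 1,
    },
  };
}
```

Without `.keyword`, a term query runs against the analyzed text sub-field and silently fails to match full ids.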
hsiegeln
befefe457f fix: query both top-level and MDC exchangeId for log search
All checks were successful
CI / build (push) Successful in 1m1s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 49s
CI / deploy (push) Successful in 39s
CI / deploy-feature (push) Has been skipped
Existing log records only have exchangeId inside the mdc object, not
as a top-level indexed field. Use a bool should clause to match on
either exchangeId (new records) or mdc.camel.exchangeId (old records).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 10:40:42 +01:00
hsiegeln
ea665ff411 feat: exchange-level log viewer on ExchangeDetail page
All checks were successful
CI / build (push) Successful in 1m0s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 49s
CI / deploy (push) Successful in 37s
CI / deploy-feature (push) Has been skipped
Index exchangeId from Camel MDC (camel.exchangeId) as a top-level
keyword field in OpenSearch log indices. Add exchangeId filter to
the log query API and frontend hook. Show a LogViewer on the
ExchangeDetail page filtered to that exchange's logs, with search
input and level filter pills.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 10:26:30 +01:00
hsiegeln
f9bd492191 chore: update design system to v0.1.11 (live time range fix)
All checks were successful
CI / build (push) Successful in 56s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 1m9s
CI / deploy (push) Successful in 39s
CI / deploy-feature (push) Has been skipped
The GlobalFilterProvider now recomputes the preset time range every
10s when auto-refresh is on, so timeRange.end stays fresh instead of
being frozen at the moment the preset was clicked.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 09:57:43 +01:00
hsiegeln
1be303b801 feat: add application log panel to agent health page
All checks were successful
CI / build (push) Successful in 55s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 48s
CI / deploy (push) Successful in 37s
CI / deploy-feature (push) Has been skipped
Add the same log + timeline side-by-side layout from AgentInstance to
the AgentHealth page (/agents/{appId}). Includes search input, level
filter pills, sort toggle, and refresh button — matching the instance
page design exactly.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 22:54:07 +01:00
hsiegeln
d57249906a fix: refresh buttons use "now" as to-date for queries
All checks were successful
CI / build (push) Successful in 56s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 47s
CI / deploy (push) Successful in 41s
CI / deploy-feature (push) Has been skipped
Instead of calling refetch() with stale time params, the refresh
buttons now set a toOverride state to new Date().toISOString(). This
flows into the query key, triggering a fresh fetch with the current
time as the upper bound. Both useApplicationLogs and useAgentEvents
hooks accept an optional toOverride parameter.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 22:41:00 +01:00
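The toOverride pattern described above can be sketched as a key builder: the refresh button stores "now" as the override, and because the override participates in the query key, each new value triggers a fresh fetch bounded at the current time. Names are assumptions:

```typescript
// Sketch of the refresh pattern described above: toOverride, when set,
// replaces the stale upper bound inside the query key.
function logsQueryKeyWithOverride(
  from: string,
  to: string,
  toOverride?: string,
): [string, { from: string; to: string }] {
  return ["application-logs", { from, to: toOverride ?? to }];
}
```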
hsiegeln
6a24dd01e9 fix: add exchange body fields to schema.d.ts for CI tsc check
All checks were successful
CI / cleanup-branch (push) Has been skipped
CI / build (push) Successful in 54s
CI / docker (push) Successful in 9s
CI / deploy (push) Successful in 19s
CI / deploy-feature (push) Has been skipped
The CI build runs tsc --noEmit which failed because the ExecutionDetail
type in schema.d.ts was missing the new inputBody/outputBody/inputHeaders/
outputHeaders fields added to the backend DTO.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 22:06:26 +01:00
hsiegeln
e10f021c54 use self-hosted image for build
Some checks failed
CI / build (push) Failing after 26s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
2026-03-25 22:03:19 +01:00
hsiegeln
b3c5e87230 fix: expose exchange body in API, fix RouteFlow index mapping
Some checks failed
CI / build (push) Failing after 25s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
Add inputBody/outputBody/inputHeaders/outputHeaders to ExecutionDetail
DTO so exchange-level bodies are returned by the detail endpoint. Show
"Exchange Input" and "Exchange Output" panels on the detail page when
the data is available.

Fix RouteFlow node click selecting the wrong processor snapshot by
building a flowToTreeIndex mapping that correctly translates flow
display index → diagram node index → processorId → processor tree
index. Previously the diagram node index was used directly as the
processor tree index, which broke when the two orderings differed.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 22:02:26 +01:00
hsiegeln
9b63443842 feat: add sort toggle and refresh buttons to log/timeline panels
All checks were successful
CI / build (push) Successful in 55s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 50s
CI / deploy (push) Successful in 42s
CI / deploy-feature (push) Has been skipped
Remove auto-scroll override hack. Add sort order toggle (asc/desc
by time) and manual refresh button to both the application log and
agent events timeline panels on AgentInstance and AgentHealth pages.
Default is descending (newest first); toggling reverses the array.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 19:53:33 +01:00
hsiegeln
cd30c2d9b5 fix: match log/timeline height, DESC sort with scroll-to-top
All checks were successful
CI / build (push) Successful in 55s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 52s
CI / deploy (push) Successful in 39s
CI / deploy-feature (push) Has been skipped
Give logCard the same max-height and flex layout as timelineCard so
both columns are equal height. Revert .toReversed() so events stay
in DESC order (newest at top). Override EventFeed's auto-scroll-to-bottom
with a requestAnimationFrame that resets scrollTop to 0 after mount,
keeping newest entries visible at the top of both panels.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 19:12:08 +01:00
hsiegeln
b612941aae feat: wire up application logs from OpenSearch, fix event autoscroll
All checks were successful
CI / build (push) Successful in 55s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 51s
CI / deploy (push) Successful in 37s
CI / deploy-feature (push) Has been skipped
Add GET /api/v1/logs endpoint to query application logs stored in
OpenSearch with filters for application, agent, level, time range,
and text search. Wire up the AgentInstance LogViewer with real data
and an EventFeed-style toolbar (search input + level filter pills).

Fix agent events timeline autoscroll by reversing the DESC-ordered
events so newest entries appear at the bottom where EventFeed
autoscrolls to.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 18:56:13 +01:00
hsiegeln
20ee448f4e fix: OpenSearch status field mismatch, adopt RouteFlow flows prop
All checks were successful
CI / build (push) Successful in 56s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 1m43s
CI / deploy (push) Successful in 38s
CI / deploy-feature (push) Has been skipped
Fix admin OpenSearch page always showing "Disconnected" by aligning
frontend field names (reachable/nodeCount/host) with backend DTO.

Update design system to v0.1.10 and adopt the new multi-flow RouteFlow
API — error-handler nodes now render as labeled segments with error
variant instead of relying on legacy auto-separation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 18:34:58 +01:00
hsiegeln
2bbca8ae38 fix: force SNAPSHOT update in Docker build (-U flag)
All checks were successful
CI / build (push) Successful in 55s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 40s
CI / deploy (push) Successful in 38s
CI / deploy-feature (push) Has been skipped
Same issue as the CI build — Docker layer cache can serve a stale
cameleer3-common SNAPSHOT.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 13:36:07 +01:00
hsiegeln
fea50b51ae fix: force SNAPSHOT update in CI build (-U flag)
Some checks failed
CI / build (push) Successful in 55s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Failing after 23s
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
Maven cache can serve stale cameleer3-common SNAPSHOTs. The -U flag
forces Maven to check the remote registry for updated SNAPSHOTs on
every build.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 13:33:59 +01:00
79d37118e0 chore: use pre-baked build images from cameleer-build-images
Some checks failed
CI / build (push) Failing after 40s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
Replace maven:3.9-eclipse-temurin-17 with cameleer-build:1 (includes
Node.js 22, curl, jq). Replace docker:27 with cameleer-docker-builder:1
(includes git, curl, jq). Removes per-build tool installation steps.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 13:26:11 +01:00
hsiegeln
7fd55ea8ba fix: remove core LogIndexService to fix CI snapshot resolution
Some checks failed
CI / build (push) Failing after 1m11s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
LogIndexService in server-core imported LogEntry from cameleer3-common,
but the SNAPSHOT on the registry may not have it yet when the server CI
runs. Moved the dependency to server-app where both the controller and
OpenSearch implementation live.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 13:11:11 +01:00
hsiegeln
c96fbef5d5 ci: retry after cameleer3-common publish
Some checks failed
CI / build (push) Failing after 50s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 13:05:23 +01:00
hsiegeln
7423e2ca14 feat: add application log ingestion with OpenSearch storage
Some checks failed
CI / cleanup-branch (push) Has been skipped
CI / build (push) Failing after 59s
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
Agents can now send application log entries in batches via POST /api/v1/data/logs.
Logs are indexed directly into OpenSearch daily indices (logs-{yyyy-MM-dd}) using
the bulk API. Index template defines explicit mappings for full-text search readiness.

New DTOs (LogEntry, LogBatch) added to cameleer3-common in the agent repo.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 11:53:27 +01:00
hsiegeln
bf600f8c5f fix: read version and updated_at from SQL columns in config repository
All checks were successful
CI / build (push) Successful in 12m13s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 44s
CI / deploy (push) Successful in 39s
CI / deploy-feature (push) Has been skipped
The findByApplication query only read config_val JSONB, ignoring the
version and updated_at SQL columns. The JSON blob contained version 0
from the original save, so agents saw no config and fell back to defaults.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 10:22:13 +01:00
hsiegeln
996ea65293 feat: LIVE/PAUSED toggle controls data fetching on sidebar navigation
All checks were successful
CI / build (push) Successful in 1m13s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 55s
CI / deploy (push) Successful in 39s
CI / deploy-feature (push) Has been skipped
LIVE: sidebar clicks trigger initial fetch + polling for the new route.
PAUSED: sidebar clicks navigate but queries are disabled — no fetches
until the user switches back to LIVE.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 10:01:14 +01:00
hsiegeln
9866dd5f23 fix: move design system dev install after COPY to bust Docker cache
All checks were successful
CI / build (push) Successful in 1m23s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 1m12s
CI / deploy (push) Successful in 38s
CI / deploy-feature (push) Has been skipped
The npm install @cameleer/design-system@dev was in the same cached layer
as npm ci, so Docker never re-ran it when the registry had a new version.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 09:37:51 +01:00
hsiegeln
d9c8816647 feat: add OpenSearch highlight snippets to search results
All checks were successful
CI / build (push) Successful in 1m23s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 54s
CI / deploy (push) Successful in 39s
CI / deploy-feature (push) Has been skipped
- Add highlight field to ExecutionSummary record
- Request highlight fragments from OpenSearch when full-text search is active
- Pass matchContext to command palette for display

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 09:29:07 +01:00
hsiegeln
b32c97c02b feat: fix Cmd-K shortcut and add exchange full-text search to command palette
All checks were successful
CI / build (push) Successful in 1m43s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 1m17s
CI / deploy (push) Successful in 40s
CI / deploy-feature (push) Has been skipped
- Add missing onOpen prop to CommandPalette (fixes Ctrl+K/Cmd+K)
- Wire server-side exchange search with debounced text query
- Use design system dev snapshot from Gitea registry in CI builds

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 08:57:24 +01:00
hsiegeln
552f02d25c fix: add JWT auth to application config API calls
All checks were successful
CI / build (push) Successful in 1m42s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 57s
CI / deploy (push) Successful in 39s
CI / deploy-feature (push) Has been skipped
Raw fetch() had no auth headers, causing 401s that silently broke tracing toggle.
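The fix above can be sketched as a small wrapper that merges the JWT into the request options before calling `fetch`. The shape below is illustrative (a minimal local `FetchInit` type rather than the DOM's `RequestInit`); how the app actually obtains the token is not shown in the commit.

```typescript
// Minimal request-options type for the sketch.
type FetchInit = { method?: string; headers?: Record<string, string>; body?: string };

// Merge the Authorization header into existing options without
// clobbering headers the caller already set (e.g. Content-Type).
function withAuth(init: FetchInit, token: string): FetchInit {
  return {
    ...init,
    headers: { ...(init.headers ?? {}), Authorization: `Bearer ${token}` },
  };
}
```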

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 08:19:44 +01:00
hsiegeln
9f9968abab chore: upgrade cameleer3-common to 1.0-SNAPSHOT and enable snapshot resolution
All checks were successful
CI / build (push) Successful in 1m44s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 3m27s
CI / deploy (push) Successful in 39s
CI / deploy-feature (push) Has been skipped
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 08:04:29 +01:00
hsiegeln
69a3eb192f feat: persistent per-application config with GET/PUT endpoints
Some checks failed
CI / build (push) Failing after 1m10s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
Add application_config table (V4 migration), repository, and REST
controller. GET /api/v1/config/{app} returns config, PUT saves and
pushes CONFIG_UPDATE to all LIVE agents via SSE. UI tracing toggle
now uses config API instead of direct SET_TRACED_PROCESSORS command.
Tracing store syncs with server config on load.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 07:42:55 +01:00
hsiegeln
488a32f319 feat: show tracing badges on processor nodes
All checks were successful
CI / build (push) Successful in 1m18s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 1m12s
CI / deploy (push) Successful in 40s
CI / deploy-feature (push) Has been skipped
Update design system to 0.1.8 and pass NodeBadge[] to both
ProcessorTimeline and RouteFlow. Traced processors display a
blue "TRACED" badge that updates reactively via Zustand store.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 23:10:37 +01:00
hsiegeln
bf57fd139b fix: show tracing action on all Flow view nodes
All checks were successful
CI / build (push) Successful in 1m26s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 53s
CI / deploy (push) Successful in 39s
CI / deploy-feature (push) Has been skipped
Use diagram node ID as fallback processorId when no processor
execution match exists (e.g. error handlers that didn't trigger).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 22:46:52 +01:00
hsiegeln
581d53a33e fix: match SET_TRACED_PROCESSORS payload to agent protocol
Some checks failed
CI / build (push) Successful in 1m28s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 55s
CI / deploy-feature (push) Has been cancelled
CI / deploy (push) Has been cancelled
Payload now sends {processors: {id: "BOTH"}} map instead of
{routeId, processorIds[]} array. Tracing state keyed by application
name (global, not per-route) matching agent behavior.
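The corrected payload shape can be sketched as follows — a map of processor ID to trace mode rather than a `{routeId, processorIds[]}` array. The builder function is hypothetical; the `"BOTH"` mode value is taken from the commit message.

```typescript
// Build the SET_TRACED_PROCESSORS payload in the shape the agent
// protocol expects: { processors: { <processorId>: "BOTH", ... } }.
function buildTracedPayload(processorIds: string[]): { processors: Record<string, string> } {
  const processors: Record<string, string> = {};
  for (const id of processorIds) processors[id] = "BOTH";
  return { processors };
}
```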

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 22:43:55 +01:00
hsiegeln
f4dd2b3415 feat: add processor tracing toggle to exchange detail views
All checks were successful
CI / build (push) Successful in 1m22s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 52s
CI / deploy (push) Successful in 39s
CI / deploy-feature (push) Has been skipped
Wire getActions on ProcessorTimeline and RouteFlow to send
SET_TRACED_PROCESSORS commands to all agents of the same application.
Tracing state managed via Zustand store with optimistic UI and rollback.
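The optimistic-update-with-rollback pattern mentioned above can be sketched in plain TypeScript (the real code uses a Zustand store; all names here are hypothetical). The state flips immediately for a responsive UI, and the returned rollback restores the previous value if the command send fails.

```typescript
// Tracing state: processorId -> traced?
type TracingState = Record<string, boolean>;

// Flip the flag optimistically; the returned rollback undoes exactly
// this one change (restores the captured previous value).
function optimisticToggle(
  state: TracingState,
  processorId: string
): { next: TracingState; rollback: (s: TracingState) => TracingState } {
  const prev = !!state[processorId];
  return {
    next: { ...state, [processorId]: !prev },
    rollback: (s) => ({ ...s, [processorId]: prev }),
  };
}
```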

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 22:30:26 +01:00
hsiegeln
7532cc9d59 chore: update @cameleer/design-system to 0.1.7
All checks were successful
CI / build (push) Successful in 1m14s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 1m8s
CI / deploy (push) Successful in 39s
CI / deploy-feature (push) Has been skipped
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 21:59:40 +01:00
hsiegeln
e7590d72fd fix: restore Swagger UI on api-docs page
All checks were successful
CI / build (push) Successful in 1m23s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 50s
CI / deploy (push) Successful in 38s
CI / deploy-feature (push) Has been skipped
- Change Vite proxy pattern from /api to /api/ so /api-docs client
  route is not captured and proxied to the backend
- Fix SwaggerUIBundle init: remove empty presets/layout overrides that
  crashed the internal persistConfigs function
- Use correct CSS import (swagger-ui.css instead of index.css)
- Add requestInterceptor to auto-attach JWT token to Try-it-out calls
- Add swagger-ui-bundle to optimizeDeps.include for reliable loading
- Remove unused swagger-ui-dist.d.ts type declarations

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 20:53:48 +01:00
hsiegeln
57ce1db248 add metrics ingestion diagnostics and upgrade cameleer3-common to 0.0.3
All checks were successful
CI / build (push) Successful in 1m34s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 3m20s
CI / deploy (push) Successful in 39s
CI / deploy-feature (push) Has been skipped
- Add logging to MetricsController: warn on parse failures, debug on
  received metrics, buffer depth on 503
- Add GET /api/v1/admin/database/metrics-pipeline diagnostic endpoint
  (buffer depth, row count, distinct agents/metrics, latest timestamp)
- Fix BackpressureIT test JSON to match actual MetricsSnapshot schema
  (collectedAt/metricName/metricValue instead of timestamp/metrics)
- Upgrade cameleer3-common from 1.0-SNAPSHOT to 0.0.3 (adds engineLevel)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 20:23:26 +01:00
hsiegeln
c97d730a00 fix: show N/A for agent heap/CPU when no JVM metrics available
All checks were successful
CI / build (push) Successful in 1m22s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 55s
CI / deploy (push) Successful in 39s
CI / deploy-feature (push) Has been skipped
Indeterminate progress bars were misleading when agents don't report
JVM metrics — replaced with plain "N/A" text.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 19:46:58 +01:00
hsiegeln
581c4f9ad9 fix: restore registry URL in package-lock.json for CI
All checks were successful
CI / build (push) Successful in 1m16s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 1m12s
CI / deploy (push) Successful in 39s
CI / deploy-feature (push) Has been skipped
The lock file had "resolved": "../../design-system" from a local
install, causing npm ci in CI to silently skip the package.
Reinstalled from registry to fix the resolved URL.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 19:15:44 +01:00
hsiegeln
ef6bc4be21 fix: add npm registry auth token for UI build in CI
Some checks failed
CI / build (push) Failing after 39s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
The Build UI step ran npm ci without authenticating to the Gitea npm
registry, causing @cameleer/design-system to fail to resolve. Add
REGISTRY_TOKEN to .npmrc before npm ci.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 19:12:35 +01:00
hsiegeln
8534bb8839 chore: upgrade @cameleer/design-system to v0.1.6
Some checks failed
CI / build (push) Failing after 39s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 19:07:13 +01:00
hsiegeln
a5bc7cf6d1 fix: use self-portaling DetailPanel from design system v0.1.5
Some checks failed
CI / build (push) Failing after 57s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
DetailPanel now portals itself to #cameleer-detail-panel-root (a div
AppShell places as a sibling of .main in the top-level flex row).
Pages just render <DetailPanel> inline — no manual createPortal,
no context, no prop drilling.

Remove the old #detail-panel-portal div from LayoutShell and the
createPortal wrappers from Dashboard and AgentHealth.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 19:00:02 +01:00
hsiegeln
5d2eff4f73 fix: normalize null fields from unconfigured OIDC response
All checks were successful
CI / build (push) Successful in 1m16s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 53s
CI / deploy (push) Successful in 40s
CI / deploy-feature (push) Has been skipped
When no OIDC config exists, the backend returns an object with all
null fields (via OidcAdminConfigResponse.unconfigured()). Normalize
all null values to sensible defaults when loading the form instead
of passing nulls through to Input components and .map() calls.
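The normalization described above can be sketched like this. Field names are assumptions based on the commit messages (`defaultRoles` is the one explicitly mentioned); the point is that every nullable field maps to a safe default before reaching `Input` components or `.map()` calls.

```typescript
type OidcForm = { issuerUrl: string; clientId: string; defaultRoles: string[] };

// Map the all-null "unconfigured" response to form-safe defaults.
function normalizeOidcConfig(resp: {
  issuerUrl: string | null;
  clientId: string | null;
  defaultRoles: string[] | null;
}): OidcForm {
  return {
    issuerUrl: resp.issuerUrl ?? "",
    clientId: resp.clientId ?? "",
    defaultRoles: resp.defaultRoles ?? [],
  };
}
```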

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 18:44:02 +01:00
hsiegeln
9a4a4dc1af fix: handle null defaultRoles in OIDC config page
Some checks failed
CI / build (push) Has been cancelled
CI / docker (push) Has been cancelled
CI / deploy (push) Has been cancelled
CI / deploy-feature (push) Has been cancelled
CI / cleanup-branch (push) Has been cancelled
The API returns defaultRoles as null when no roles are configured.
Add null guards on all defaultRoles accesses to prevent .map() crash.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 18:41:59 +01:00
hsiegeln
f3241e904f fix: use createPortal for DetailPanel instead of context+useEffect
Some checks failed
CI / build (push) Successful in 1m21s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 53s
CI / deploy-feature (push) Has been cancelled
CI / deploy (push) Has been cancelled
The previous approach used useEffect+context to hoist DetailPanel
content to the AppShell level, but the dependency-free useEffect
caused a re-render loop that broke sidebar navigation.

Replace with createPortal: pages render DetailPanel inline in their
JSX but portal it to a target div (#detail-panel-portal) at the
AppShell level. No state lifting, no re-render loops.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 18:38:59 +01:00
hsiegeln
5de792744e fix: hoist DetailPanel into AppShell detail slot for proper slide-in
All checks were successful
CI / build (push) Successful in 1m22s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 51s
CI / deploy (push) Successful in 38s
CI / deploy-feature (push) Has been skipped
DetailPanel is a flex sibling that slides in from the right — it must
be rendered at the AppShell level via the detail prop, not inside the
page content. Add DetailPanelContext so pages can push their panel
content up to LayoutShell, which passes it to AppShell.detail.

Applied to Dashboard (exchange detail) and AgentHealth (instance detail).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 18:28:03 +01:00
hsiegeln
0a5f4a03b5 chore: upgrade @cameleer/design-system to v0.1.4
All checks were successful
CI / build (push) Successful in 1m13s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 1m11s
CI / deploy (push) Successful in 37s
CI / deploy-feature (push) Has been skipped
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 18:18:20 +01:00
hsiegeln
4ac11551c9 feat: add auto-refresh toggle wired to all polling queries
Some checks failed
CI / build (push) Failing after 51s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
Upgrade @cameleer/design-system to ^0.1.3 which adds LIVE/PAUSED
toggle to TopBar backed by autoRefresh state in GlobalFilterProvider.

Add useRefreshInterval() hook that returns the polling interval when
auto-refresh is on, or false when paused. Wire it into all query
hooks that use refetchInterval (executions, catalog, agents, metrics,
admin database/opensearch).
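The core of the hook described above can be sketched as a pure function (the real hook reads the auto-refresh flag from GlobalFilterProvider): it returns the polling interval when auto-refresh is on, or `false` — the value React Query's `refetchInterval` treats as "no polling".

```typescript
// Return the poll interval while live, or false while paused.
function refreshInterval(autoRefresh: boolean, intervalMs: number): number | false {
  return autoRefresh ? intervalMs : false;
}
```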

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 18:10:32 +01:00
hsiegeln
6fea5f2c5b fix: use .keyword suffix for text field sorting in OpenSearch
All checks were successful
CI / build (push) Successful in 1m22s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 44s
CI / deploy (push) Successful in 39s
CI / deploy-feature (push) Has been skipped
OpenSearch dynamically maps string fields as text with a .keyword
subfield. Sorting on text fields throws an error; only .keyword,
date, and numeric fields support sorting. Add .keyword suffix to
all string sort columns (status, routeId, agentId, executionId,
correlationId, applicationName) while keeping start_time and
duration_ms as-is.
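The mapping above can be sketched as: text-mapped string fields get a `.keyword` suffix so OpenSearch can sort on them, while date and numeric fields pass through unchanged. The field list comes from the commit message; the function name is illustrative.

```typescript
// String fields that OpenSearch dynamically maps as text + .keyword subfield.
const TEXT_SORT_FIELDS = new Set([
  "status", "routeId", "agentId", "executionId", "correlationId", "applicationName",
]);

// Sorting on a text field throws in OpenSearch; sort on .keyword instead.
function toSortField(field: string): string {
  return TEXT_SORT_FIELDS.has(field) ? `${field}.keyword` : field;
}
```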

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 17:56:18 +01:00
hsiegeln
b7cac68ee1 fix: filter exchanges by application and restore snake_case sort columns
All checks were successful
CI / build (push) Successful in 1m23s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 41s
CI / deploy (push) Successful in 39s
CI / deploy-feature (push) Has been skipped
Add application_name filter to OpenSearch query builder — sidebar
app selection now correctly filters the exchange list. The
application field was being resolved to agentIds in the controller
but never applied as a query filter in OpenSearch.

Also restore snake_case sort column mapping since the OpenSearch
toMap() serializer uses snake_case field names (start_time, route_id,
etc.), not camelCase.
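Both fixes can be sketched in a few lines — shapes are illustrative, not the server's actual Java query builder. The application filter becomes a term clause in the bool query (exact matching on the `.keyword` subfield is an assumption consistent with the sort fix above), and sort columns map camelCase API names to the snake_case field names the `toMap()` serializer emits.

```typescript
// Add the application filter as a term clause alongside other filters.
function buildExchangeQuery(applicationName?: string): object {
  const filter: object[] = [];
  if (applicationName) {
    filter.push({ term: { "application_name.keyword": applicationName } });
  }
  return { bool: { filter } };
}

// camelCase API sort field -> snake_case OpenSearch field (start_time, route_id, ...).
function toSortColumn(field: string): string {
  return field.replace(/[A-Z]/g, (c) => `_${c.toLowerCase()}`);
}
```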

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 17:41:07 +01:00
hsiegeln
cdbe330c47 fix: support all sortable columns and use camelCase for OpenSearch
All checks were successful
CI / build (push) Successful in 1m24s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 45s
CI / deploy (push) Successful in 37s
CI / deploy-feature (push) Has been skipped
Add executionId and applicationName to allowed sort fields. Fix sort
column mapping to use camelCase field names matching the OpenSearch
ExecutionDocument fields instead of snake_case DB column names. This
was causing sorts on most columns to either silently fall back to
startTime or return empty results from OpenSearch.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 17:37:01 +01:00
53e9073dca fix: update ExecutionRecord constructor in stats test for new fields
All checks were successful
CI / build (push) Successful in 1m13s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 1m9s
CI / deploy (push) Successful in 38s
CI / deploy-feature (push) Has been skipped
2026-03-24 17:26:07 +01:00
b8c316727e fix: update ExecutionRecord constructor calls in tests for new fields
Some checks failed
CI / build (push) Has started running
CI / docker (push) Has been cancelled
CI / deploy (push) Has been cancelled
CI / deploy-feature (push) Has been cancelled
CI / cleanup-branch (push) Has been cancelled
2026-03-24 17:25:48 +01:00
hsiegeln
48455cd559 fix: use server-side sorting for paginated tables
Some checks failed
CI / cleanup-branch (push) Has been skipped
CI / build (push) Failing after 1m10s
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
Upgrade @cameleer/design-system to v0.1.1 which adds onSortChange
callback to DataTable. Wire it up in Dashboard (exchanges), AuditLog,
and RouteDetail (recent executions) so sorting triggers a new API
request with sortField/sortDir instead of only sorting the current page.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 17:05:17 +01:00
aa3d9f375b Merge pull request 'feat: agent protocol v2 — engine levels, enriched acks, route snapshots' (#91) from fix/agent-protocol-v2 into main
Some checks failed
CI / build (push) Failing after 1m0s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
Reviewed-on: cameleer/cameleer3-server#91
2026-03-24 16:50:09 +01:00
hsiegeln
e54d20bcb7 feat: migrate login page to design system styling
All checks were successful
CI / build (push) Successful in 1m26s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 57s
CI / deploy (push) Successful in 38s
CI / deploy-feature (push) Has been skipped
Replace inline styles with CSS module matching the design system's
LoginForm visual patterns. Uses proper DS class structure (divider,
social section, form fields) while keeping username-based auth
instead of the DS component's email validation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 16:44:52 +01:00
hsiegeln
81f85aa82d feat: replace UI with design system example pages wired to real API
Some checks failed
CI / build (push) Successful in 1m18s
CI / cleanup-branch (push) Has been skipped
CI / docker (push) Successful in 55s
CI / deploy-feature (push) Has been cancelled
CI / deploy (push) Has been cancelled
Migrate all page components from the @cameleer/design-system v0.0.3
example UI, replacing mock data with real backend API hooks. This brings
richer visuals (KpiStrip, GroupCard, RouteFlow, ProcessorTimeline,
DateRangePicker, expandable rows) while preserving all existing API
integration, auth, and routing infrastructure.

Pages migrated: Dashboard, RoutesMetrics, RouteDetail, ExchangeDetail,
AgentHealth, AgentInstance, OidcConfig, AuditLog, RBAC (Users/Groups/Roles).
Also enhanced LayoutShell CommandPalette with real search data from catalog.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 16:42:16 +01:00
2887fe9599 feat: add V3 migration for engine_level and route-level snapshot columns
Some checks failed
CI / build (push) Failing after 51s
CI / cleanup-branch (push) Has been skipped
CI / build (pull_request) Failing after 52s
CI / cleanup-branch (pull_request) Has been skipped
CI / docker (push) Has been skipped
CI / docker (pull_request) Has been skipped
CI / deploy (push) Has been skipped
CI / deploy-feature (push) Has been skipped
CI / deploy (pull_request) Has been skipped
CI / deploy-feature (pull_request) Has been skipped
2026-03-24 16:13:11 +01:00
b1679b110c feat: add engine_level and route-level snapshot columns to PostgresExecutionStore
Some checks failed
CI / docker (push) Has been cancelled
CI / build (push) Has been cancelled
CI / deploy (push) Has been cancelled
CI / deploy-feature (push) Has been cancelled
CI / cleanup-branch (push) Has been cancelled
Add engine_level, input_body, output_body, input_headers, output_headers
to the executions INSERT/SELECT/UPSERT and row mapper. Required for
REGULAR mode where route-level payloads exist but no processor records.

Note: requires ALTER TABLE migration to add the new columns.
2026-03-24 16:12:46 +01:00
e7835e1100 feat: map engineLevel and route-level snapshots in IngestionService
Some checks failed
CI / docker (push) Has been cancelled
CI / deploy (push) Has been cancelled
CI / deploy-feature (push) Has been cancelled
CI / cleanup-branch (push) Has been cancelled
CI / build (push) Has been cancelled
Extract inputBody/outputBody/inputHeaders/outputHeaders from RouteExecution
snapshots and pass to ExecutionRecord. Maps engineLevel field. Critical for
REGULAR mode where no processor records exist but route-level payloads do.
2026-03-24 16:11:55 +01:00
ed65b87af2 feat: add engineLevel and route-level snapshot fields to ExecutionRecord
Some checks failed
CI / docker (push) Has been cancelled
CI / deploy (push) Has been cancelled
CI / deploy-feature (push) Has been cancelled
CI / cleanup-branch (push) Has been cancelled
CI / build (push) Has been cancelled
Adds engineLevel (NONE/MINIMAL/REGULAR/COMPLETE) and inputBody/outputBody/
inputHeaders/outputHeaders to ExecutionRecord so REGULAR mode route-level
payloads are persisted (previously only processor-level records had payloads).
2026-03-24 16:11:26 +01:00
4a99e6cf6b feat: support enriched command ack with status/message + set-traced-processors command type
Some checks failed
CI / docker (push) Has been cancelled
CI / deploy (push) Has been cancelled
CI / deploy-feature (push) Has been cancelled
CI / cleanup-branch (push) Has been cancelled
CI / build (push) Has been cancelled
- Add @RequestBody(required=false) CommandAckRequest to ack endpoint for
  receiving agent command results (backward compat with old agents)
- Record command results in agent event log via AgentEventService
- Add set-traced-processors to mapCommandType switch
- Inject AgentEventService dependency
2026-03-24 16:11:04 +01:00
4d9a9ff851 feat: add CommandAckRequest DTO for enriched command acknowledgments
Some checks failed
CI / build (push) Has started running
CI / docker (push) Has been cancelled
CI / deploy (push) Has been cancelled
CI / deploy-feature (push) Has been cancelled
CI / cleanup-branch (push) Has been cancelled
2026-03-24 16:10:27 +01:00
292a38fe30 feat: add SET_TRACED_PROCESSORS command type for per-processor overrides
Some checks failed
CI / docker (push) Has been cancelled
CI / deploy (push) Has been cancelled
CI / deploy-feature (push) Has been cancelled
CI / cleanup-branch (push) Has been cancelled
CI / build (push) Has been cancelled
2026-03-24 16:10:21 +01:00
158 changed files with 22975 additions and 3077 deletions

View File

@@ -14,16 +14,11 @@ jobs:
     runs-on: ubuntu-latest
     if: github.event_name != 'delete'
     container:
-      image: maven:3.9-eclipse-temurin-17
+      image: gitea.siegeln.net/cameleer/cameleer-build:1
+      credentials:
+        username: cameleer
+        password: ${{ secrets.REGISTRY_TOKEN }}
     steps:
-      - name: Install Node.js 22
-        run: |
-          apt-get update && apt-get install -y ca-certificates curl gnupg
-          mkdir -p /etc/apt/keyrings
-          curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg
-          echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_22.x nodistro main" > /etc/apt/sources.list.d/nodesource.list
-          apt-get update && apt-get install -y nodejs
       - uses: actions/checkout@v4
       - name: Configure Gitea Maven Registry
@@ -53,22 +48,27 @@ jobs:
       - name: Build UI
         working-directory: ui
         run: |
+          echo '//gitea.siegeln.net/api/packages/cameleer/npm/:_authToken=${REGISTRY_TOKEN}' >> .npmrc
           npm ci
           npm run build
+        env:
+          REGISTRY_TOKEN: ${{ secrets.REGISTRY_TOKEN }}
       - name: Build and Test
-        run: mvn clean verify -DskipITs --batch-mode
+        run: mvn clean verify -DskipITs -U --batch-mode
   docker:
     needs: build
     runs-on: ubuntu-latest
     if: github.event_name == 'push'
     container:
-      image: docker:27
+      image: gitea.siegeln.net/cameleer/cameleer-docker-builder:1
+      credentials:
+        username: cameleer
+        password: ${{ secrets.REGISTRY_TOKEN }}
     steps:
       - name: Checkout
         run: |
-          apk add --no-cache git
           git clone --depth=1 --branch=${GITHUB_REF_NAME} https://cameleer:${REGISTRY_TOKEN}@gitea.siegeln.net/${GITHUB_REPOSITORY}.git .
         env:
           REGISTRY_TOKEN: ${{ secrets.REGISTRY_TOKEN }}
@@ -95,7 +95,7 @@ jobs:
             echo "IMAGE_TAGS=branch-$SLUG" >> "$GITHUB_ENV"
           fi
       - name: Set up QEMU for cross-platform builds
-        run: docker run --rm --privileged tonistiigi/binfmt --install all
+        run: docker run --rm --privileged gitea.siegeln.net/cameleer/binfmt:1 --install all
       - name: Build and push server
         run: |
           docker buildx create --use --name cibuilder
@@ -133,7 +133,6 @@ jobs:
         if: always()
       - name: Cleanup old container images
         run: |
-          apk add --no-cache curl jq
           API="https://gitea.siegeln.net/api/v1"
           AUTH="Authorization: token ${REGISTRY_TOKEN}"
           CURRENT_SHA="${{ github.sha }}"

.gitignore vendored (1 change)
View File

@@ -39,3 +39,4 @@ logs/
 # Claude
 .claude/
+.worktrees/

View File

@@ -36,9 +36,9 @@ java -jar cameleer3-server-app/target/cameleer3-server-app-1.0-SNAPSHOT.jar
 - Spring Boot 3.4.3 parent POM
 - Depends on `com.cameleer3:cameleer3-common` from Gitea Maven registry
 - Jackson `JavaTimeModule` for `Instant` deserialization
-- Communication: receives HTTP POST data from agents, serves SSE event streams for config push/commands
+- Communication: receives HTTP POST data from agents (executions, diagrams, metrics, logs), serves SSE event streams for config push/commands
 - Maintains agent instance registry with states: LIVE → STALE → DEAD
-- Storage: PostgreSQL (TimescaleDB) for structured data, OpenSearch for full-text search
+- Storage: PostgreSQL (TimescaleDB) for structured data, OpenSearch for full-text search and application log storage
 - Security: JWT auth with RBAC (AGENT/VIEWER/OPERATOR/ADMIN roles), Ed25519 config signing, bootstrap token for registration
 - OIDC: Optional external identity provider support (token exchange pattern). Configured via admin API, stored in database (`server_config` table)
 - User persistence: PostgreSQL `users` table, admin CRUD at `/api/v1/admin/users`

View File

@@ -12,7 +12,7 @@ COPY cameleer3-server-app/pom.xml cameleer3-server-app/
 # Cache deps — only re-downloaded when POMs change
 RUN mvn dependency:go-offline -B || true
 COPY . .
-RUN mvn clean package -DskipTests -B
+RUN mvn clean package -DskipTests -U -B
 FROM eclipse-temurin:17-jre
 WORKDIR /app

View File

@@ -100,7 +100,7 @@ JWTs carry a `roles` claim. Endpoints are restricted by role:
| Role | Access | | Role | Access |
|------|--------| |------|--------|
| `AGENT` | Data ingestion (`/data/**`), heartbeat, SSE events, command ack | | `AGENT` | Data ingestion (`/data/**` — executions, diagrams, metrics, logs), heartbeat, SSE events, command ack |
| `VIEWER` | Search, execution detail, diagrams, agent list | | `VIEWER` | Search, execution detail, diagrams, agent list |
| `OPERATOR` | VIEWER + send commands to agents | | `OPERATOR` | VIEWER + send commands to agents |
| `ADMIN` | OPERATOR + user management (`/admin/**`) |

@@ -220,6 +220,20 @@ curl -s -X POST http://localhost:8081/api/v1/data/metrics \
  -H "X-Protocol-Version: 1" \
  -H "Authorization: Bearer $TOKEN" \
  -d '[{"agentId":"agent-1","metricName":"cpu","value":42.0,"timestamp":"2026-03-11T00:00:00Z","tags":{}}]'
+
+# Post application log entries (batch)
+curl -s -X POST http://localhost:8081/api/v1/data/logs \
+  -H "Content-Type: application/json" \
+  -H "Authorization: Bearer $TOKEN" \
+  -d '{
+    "entries": [{
+      "timestamp": "2026-03-25T10:00:00Z",
+      "level": "INFO",
+      "loggerName": "com.acme.MyService",
+      "message": "Processing order #12345",
+      "threadName": "main"
+    }]
+  }'
```

**Note:** The `X-Protocol-Version: 1` header is required on all `/api/v1/data/**` endpoints. Missing or wrong version returns 400.

@@ -361,6 +375,8 @@ Key settings in `cameleer3-server-app/src/main/resources/application.yml`:
| `security.oidc.client-secret` | | OAuth2 client secret (`CAMELEER_OIDC_CLIENT_SECRET`) |
| `security.oidc.roles-claim` | `realm_access.roles` | JSONPath to roles in OIDC id_token (`CAMELEER_OIDC_ROLES_CLAIM`) |
| `security.oidc.default-roles` | `VIEWER` | Default roles for new OIDC users (`CAMELEER_OIDC_DEFAULT_ROLES`) |
+| `opensearch.log-index-prefix` | `logs-` | OpenSearch index prefix for application logs (`CAMELEER_LOG_INDEX_PREFIX`) |
+| `opensearch.log-retention-days` | `7` | Days before log indices are deleted (`CAMELEER_LOG_RETENTION_DAYS`) |

## Web UI Development
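The protocol-version rule documented in the README note above boils down to a single header check. The sketch below is my own distillation of the documented behavior (the project's actual `ProtocolVersionInterceptor` is not shown in this diff; class and method names here are illustrative):

```java
// Hypothetical distillation of the documented rule: X-Protocol-Version must be
// present and equal to "1" on /api/v1/data/** requests, otherwise HTTP 400.
public class ProtocolVersionCheck {

    static int statusFor(String protocolVersionHeader) {
        if (protocolVersionHeader == null || !protocolVersionHeader.equals("1")) {
            return 400; // missing or wrong version is rejected
        }
        return 200; // accepted
    }

    public static void main(String[] args) {
        System.out.println(statusFor(null)); // missing header -> 400
        System.out.println(statusFor("2"));  // wrong version  -> 400
        System.out.println(statusFor("1"));  // correct version -> 200
    }
}
```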
View File
@@ -1,5 +1,6 @@
package com.cameleer3.server.app.config;

+import com.cameleer3.server.app.interceptor.AuditInterceptor;
import com.cameleer3.server.app.interceptor.ProtocolVersionInterceptor;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;

@@ -7,17 +8,17 @@ import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;
/**
 * Web MVC configuration.
+ * <p>
+ * Registers the {@link ProtocolVersionInterceptor} on data and agent endpoint paths,
+ * excluding health, API docs, and Swagger UI paths that do not require protocol versioning.
 */
@Configuration
public class WebConfig implements WebMvcConfigurer {

    private final ProtocolVersionInterceptor protocolVersionInterceptor;
+    private final AuditInterceptor auditInterceptor;

-    public WebConfig(ProtocolVersionInterceptor protocolVersionInterceptor) {
+    public WebConfig(ProtocolVersionInterceptor protocolVersionInterceptor,
+                     AuditInterceptor auditInterceptor) {
        this.protocolVersionInterceptor = protocolVersionInterceptor;
+        this.auditInterceptor = auditInterceptor;
    }

    @Override
@@ -33,5 +34,14 @@ public class WebConfig implements WebMvcConfigurer {
            "/api/v1/agents/register",
            "/api/v1/agents/*/refresh"
        );

+        // Safety-net audit: catches any unaudited POST/PUT/DELETE
+        registry.addInterceptor(auditInterceptor)
+            .addPathPatterns("/api/v1/**")
+            .excludePathPatterns(
+                "/api/v1/data/**",
+                "/api/v1/agents/*/heartbeat",
+                "/api/v1/health"
+            );
    }
}
View File
@@ -1,16 +1,22 @@
package com.cameleer3.server.app.controller;

import com.cameleer3.server.app.agent.SseConnectionManager;
+import com.cameleer3.server.app.dto.CommandAckRequest;
import com.cameleer3.server.app.dto.CommandBroadcastResponse;
import com.cameleer3.server.app.dto.CommandRequest;
import com.cameleer3.server.app.dto.CommandSingleResponse;
+import com.cameleer3.server.core.admin.AuditCategory;
+import com.cameleer3.server.core.admin.AuditResult;
+import com.cameleer3.server.core.admin.AuditService;
import com.cameleer3.server.core.agent.AgentCommand;
+import com.cameleer3.server.core.agent.AgentEventService;
import com.cameleer3.server.core.agent.AgentInfo;
import com.cameleer3.server.core.agent.AgentRegistryService;
import com.cameleer3.server.core.agent.AgentState;
import com.cameleer3.server.core.agent.CommandType;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
+import jakarta.servlet.http.HttpServletRequest;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;

@@ -48,23 +54,30 @@ public class AgentCommandController {
    private final AgentRegistryService registryService;
    private final SseConnectionManager connectionManager;
    private final ObjectMapper objectMapper;
+    private final AgentEventService agentEventService;
+    private final AuditService auditService;

    public AgentCommandController(AgentRegistryService registryService,
                                  SseConnectionManager connectionManager,
-                                  ObjectMapper objectMapper) {
+                                  ObjectMapper objectMapper,
+                                  AgentEventService agentEventService,
+                                  AuditService auditService) {
        this.registryService = registryService;
        this.connectionManager = connectionManager;
        this.objectMapper = objectMapper;
+        this.agentEventService = agentEventService;
+        this.auditService = auditService;
    }

    @PostMapping("/{id}/commands")
    @Operation(summary = "Send command to a specific agent",
-        description = "Sends a config-update, deep-trace, or replay command to the specified agent")
+        description = "Sends a command to the specified agent via SSE")
    @ApiResponse(responseCode = "202", description = "Command accepted")
    @ApiResponse(responseCode = "400", description = "Invalid command payload")
    @ApiResponse(responseCode = "404", description = "Agent not registered")
    public ResponseEntity<CommandSingleResponse> sendCommand(@PathVariable String id,
-            @RequestBody CommandRequest request) throws JsonProcessingException {
+            @RequestBody CommandRequest request,
+            HttpServletRequest httpRequest) throws JsonProcessingException {
        AgentInfo agent = registryService.findById(id);
        if (agent == null) {
            throw new ResponseStatusException(HttpStatus.NOT_FOUND, "Agent not found: " + id);

@@ -76,6 +89,10 @@ public class AgentCommandController {
        String status = connectionManager.isConnected(id) ? "DELIVERED" : "PENDING";

+        auditService.log("send_agent_command", AuditCategory.AGENT, id,
+            java.util.Map.of("type", request.type(), "status", status),
+            AuditResult.SUCCESS, httpRequest);
+
        return ResponseEntity.status(HttpStatus.ACCEPTED)
            .body(new CommandSingleResponse(command.id(), status));
    }

@@ -86,7 +103,8 @@ public class AgentCommandController {
    @ApiResponse(responseCode = "202", description = "Commands accepted")
    @ApiResponse(responseCode = "400", description = "Invalid command payload")
    public ResponseEntity<CommandBroadcastResponse> sendGroupCommand(@PathVariable String group,
-            @RequestBody CommandRequest request) throws JsonProcessingException {
+            @RequestBody CommandRequest request,
+            HttpServletRequest httpRequest) throws JsonProcessingException {
        CommandType type = mapCommandType(request.type());
        String payloadJson = request.payload() != null ? objectMapper.writeValueAsString(request.payload()) : "{}";

@@ -101,6 +119,10 @@ public class AgentCommandController {
            commandIds.add(command.id());
        }

+        auditService.log("broadcast_group_command", AuditCategory.AGENT, group,
+            java.util.Map.of("type", request.type(), "agentCount", agents.size()),
+            AuditResult.SUCCESS, httpRequest);
+
        return ResponseEntity.status(HttpStatus.ACCEPTED)
            .body(new CommandBroadcastResponse(commandIds, agents.size()));
    }

@@ -110,7 +132,8 @@ public class AgentCommandController {
        description = "Sends a command to all agents currently in LIVE state")
    @ApiResponse(responseCode = "202", description = "Commands accepted")
    @ApiResponse(responseCode = "400", description = "Invalid command payload")
-    public ResponseEntity<CommandBroadcastResponse> broadcastCommand(@RequestBody CommandRequest request) throws JsonProcessingException {
+    public ResponseEntity<CommandBroadcastResponse> broadcastCommand(@RequestBody CommandRequest request,
+            HttpServletRequest httpRequest) throws JsonProcessingException {
        CommandType type = mapCommandType(request.type());
        String payloadJson = request.payload() != null ? objectMapper.writeValueAsString(request.payload()) : "{}";

@@ -122,21 +145,42 @@ public class AgentCommandController {
            commandIds.add(command.id());
        }

+        auditService.log("broadcast_all_command", AuditCategory.AGENT, null,
+            java.util.Map.of("type", request.type(), "agentCount", liveAgents.size()),
+            AuditResult.SUCCESS, httpRequest);
+
        return ResponseEntity.status(HttpStatus.ACCEPTED)
            .body(new CommandBroadcastResponse(commandIds, liveAgents.size()));
    }

    @PostMapping("/{id}/commands/{commandId}/ack")
    @Operation(summary = "Acknowledge command receipt",
-        description = "Agent acknowledges that it has received and processed a command")
+        description = "Agent acknowledges that it has received and processed a command, with result status and message")
    @ApiResponse(responseCode = "200", description = "Command acknowledged")
    @ApiResponse(responseCode = "404", description = "Command not found")
    public ResponseEntity<Void> acknowledgeCommand(@PathVariable String id,
-            @PathVariable String commandId) {
+            @PathVariable String commandId,
+            @RequestBody(required = false) CommandAckRequest body) {
        boolean acknowledged = registryService.acknowledgeCommand(id, commandId);
        if (!acknowledged) {
            throw new ResponseStatusException(HttpStatus.NOT_FOUND, "Command not found: " + commandId);
        }

+        // Complete any pending reply future (for synchronous request-reply commands like TEST_EXPRESSION)
+        registryService.completeReply(commandId,
+            body != null ? body.status() : "SUCCESS",
+            body != null ? body.message() : null,
+            body != null ? body.data() : null);
+
+        // Record command result in agent event log
+        if (body != null && body.status() != null) {
+            AgentInfo agent = registryService.findById(id);
+            String application = agent != null ? agent.application() : "unknown";
+            agentEventService.recordEvent(id, application, "COMMAND_" + body.status(),
+                "Command " + commandId + ": " + body.message());
+            log.debug("Command {} ack from agent {}: {} - {}", commandId, id, body.status(), body.message());
+        }
+
        return ResponseEntity.ok().build();
    }

@@ -145,8 +189,10 @@ public class AgentCommandController {
            case "config-update" -> CommandType.CONFIG_UPDATE;
            case "deep-trace" -> CommandType.DEEP_TRACE;
            case "replay" -> CommandType.REPLAY;
+            case "set-traced-processors" -> CommandType.SET_TRACED_PROCESSORS;
+            case "test-expression" -> CommandType.TEST_EXPRESSION;
            default -> throw new ResponseStatusException(HttpStatus.BAD_REQUEST,
-                "Invalid command type: " + typeStr + ". Valid: config-update, deep-trace, replay");
+                "Invalid command type: " + typeStr + ". Valid: config-update, deep-trace, replay, set-traced-processors, test-expression");
        };
    }
}
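The extended `mapCommandType` switch above can be exercised on its own. The following self-contained sketch mirrors that mapping (with `IllegalArgumentException` standing in for Spring's `ResponseStatusException`, and a local copy of the `CommandType` enum):

```java
public class CommandTypeMapping {

    enum CommandType { CONFIG_UPDATE, DEEP_TRACE, REPLAY, SET_TRACED_PROCESSORS, TEST_EXPRESSION }

    // Mirrors the controller's mapCommandType: kebab-case wire names -> enum constants
    static CommandType map(String typeStr) {
        return switch (typeStr) {
            case "config-update" -> CommandType.CONFIG_UPDATE;
            case "deep-trace" -> CommandType.DEEP_TRACE;
            case "replay" -> CommandType.REPLAY;
            case "set-traced-processors" -> CommandType.SET_TRACED_PROCESSORS;
            case "test-expression" -> CommandType.TEST_EXPRESSION;
            default -> throw new IllegalArgumentException(
                "Invalid command type: " + typeStr
                + ". Valid: config-update, deep-trace, replay, set-traced-processors, test-expression");
        };
    }

    public static void main(String[] args) {
        System.out.println(map("test-expression"));
        System.out.println(map("set-traced-processors"));
    }
}
```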
View File
@@ -8,6 +8,9 @@ import com.cameleer3.server.app.dto.AgentRegistrationRequest;
import com.cameleer3.server.app.dto.AgentRegistrationResponse;
import com.cameleer3.server.app.dto.ErrorResponse;
import com.cameleer3.server.app.security.BootstrapTokenValidator;
+import com.cameleer3.server.core.admin.AuditCategory;
+import com.cameleer3.server.core.admin.AuditResult;
+import com.cameleer3.server.core.admin.AuditService;
import com.cameleer3.server.core.agent.AgentEventService;
import com.cameleer3.server.core.agent.AgentInfo;
import com.cameleer3.server.core.agent.AgentRegistryService;

@@ -58,6 +61,7 @@ public class AgentRegistrationController {
    private final JwtService jwtService;
    private final Ed25519SigningService ed25519SigningService;
    private final AgentEventService agentEventService;
+    private final AuditService auditService;
    private final JdbcTemplate jdbc;

    public AgentRegistrationController(AgentRegistryService registryService,

@@ -66,6 +70,7 @@ public class AgentRegistrationController {
                                       JwtService jwtService,
                                       Ed25519SigningService ed25519SigningService,
                                       AgentEventService agentEventService,
+                                       AuditService auditService,
                                       JdbcTemplate jdbc) {
        this.registryService = registryService;
        this.config = config;

@@ -73,6 +78,7 @@ public class AgentRegistrationController {
        this.jwtService = jwtService;
        this.ed25519SigningService = ed25519SigningService;
        this.agentEventService = agentEventService;
+        this.auditService = auditService;
        this.jdbc = jdbc;
    }

@@ -113,6 +119,10 @@ public class AgentRegistrationController {
        agentEventService.recordEvent(request.agentId(), application, "REGISTERED",
            "Agent registered: " + request.name());

+        auditService.log(request.agentId(), "agent_register", AuditCategory.AGENT, request.agentId(),
+            Map.of("application", application, "name", request.name()),
+            AuditResult.SUCCESS, httpRequest);
+
        // Issue JWT tokens with AGENT role
        List<String> roles = List.of("AGENT");
        String accessToken = jwtService.createAccessToken(request.agentId(), application, roles);

@@ -135,7 +145,8 @@ public class AgentRegistrationController {
    @ApiResponse(responseCode = "401", description = "Invalid or expired refresh token")
    @ApiResponse(responseCode = "404", description = "Agent not found")
    public ResponseEntity<AgentRefreshResponse> refresh(@PathVariable String id,
-            @RequestBody AgentRefreshRequest request) {
+            @RequestBody AgentRefreshRequest request,
+            HttpServletRequest httpRequest) {
        if (request.refreshToken() == null || request.refreshToken().isBlank()) {
            return ResponseEntity.status(401).build();
        }

@@ -169,6 +180,9 @@ public class AgentRegistrationController {
        String newAccessToken = jwtService.createAccessToken(agentId, agent.application(), roles);
        String newRefreshToken = jwtService.createRefreshToken(agentId, agent.application(), roles);

+        auditService.log(agentId, "agent_token_refresh", AuditCategory.AUTH, agentId,
+            null, AuditResult.SUCCESS, httpRequest);
+
        return ResponseEntity.ok(new AgentRefreshResponse(newAccessToken, newRefreshToken));
    }
View File
@@ -0,0 +1,208 @@
package com.cameleer3.server.app.controller;

import com.cameleer3.common.model.ApplicationConfig;
import com.cameleer3.server.app.dto.TestExpressionRequest;
import com.cameleer3.server.app.dto.TestExpressionResponse;
import com.cameleer3.server.app.storage.PostgresApplicationConfigRepository;
import com.cameleer3.server.core.admin.AuditCategory;
import com.cameleer3.server.core.admin.AuditResult;
import com.cameleer3.server.core.admin.AuditService;
import com.cameleer3.server.core.agent.AgentCommand;
import com.cameleer3.server.core.agent.AgentInfo;
import com.cameleer3.server.core.agent.AgentRegistryService;
import com.cameleer3.server.core.agent.AgentState;
import com.cameleer3.server.core.agent.CommandReply;
import com.cameleer3.server.core.agent.CommandType;
import com.cameleer3.server.core.storage.DiagramStore;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;
import jakarta.servlet.http.HttpServletRequest;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.security.core.Authentication;
import org.springframework.web.bind.annotation.*;

import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

/**
 * Per-application configuration management.
 * Agents fetch config at startup; the UI modifies config which is persisted and pushed to agents via SSE.
 */
@RestController
@RequestMapping("/api/v1/config")
@Tag(name = "Application Config", description = "Per-application observability configuration")
public class ApplicationConfigController {

    private static final Logger log = LoggerFactory.getLogger(ApplicationConfigController.class);

    private final PostgresApplicationConfigRepository configRepository;
    private final AgentRegistryService registryService;
    private final ObjectMapper objectMapper;
    private final AuditService auditService;
    private final DiagramStore diagramStore;

    public ApplicationConfigController(PostgresApplicationConfigRepository configRepository,
                                       AgentRegistryService registryService,
                                       ObjectMapper objectMapper,
                                       AuditService auditService,
                                       DiagramStore diagramStore) {
        this.configRepository = configRepository;
        this.registryService = registryService;
        this.objectMapper = objectMapper;
        this.auditService = auditService;
        this.diagramStore = diagramStore;
    }

    @GetMapping
    @Operation(summary = "List all application configs",
        description = "Returns stored configurations for all applications")
    @ApiResponse(responseCode = "200", description = "Configs returned")
    public ResponseEntity<List<ApplicationConfig>> listConfigs(HttpServletRequest httpRequest) {
        auditService.log("view_app_configs", AuditCategory.CONFIG, null, null, AuditResult.SUCCESS, httpRequest);
        return ResponseEntity.ok(configRepository.findAll());
    }

    @GetMapping("/{application}")
    @Operation(summary = "Get application config",
        description = "Returns the current configuration for an application. Returns defaults if none stored.")
    @ApiResponse(responseCode = "200", description = "Config returned")
    public ResponseEntity<ApplicationConfig> getConfig(@PathVariable String application,
            HttpServletRequest httpRequest) {
        auditService.log("view_app_config", AuditCategory.CONFIG, application, null, AuditResult.SUCCESS, httpRequest);
        return ResponseEntity.ok(
            configRepository.findByApplication(application)
                .orElse(defaultConfig(application)));
    }

    @PutMapping("/{application}")
    @Operation(summary = "Update application config",
        description = "Saves config and pushes CONFIG_UPDATE to all LIVE agents of this application")
    @ApiResponse(responseCode = "200", description = "Config saved and pushed")
    public ResponseEntity<ApplicationConfig> updateConfig(@PathVariable String application,
            @RequestBody ApplicationConfig config,
            Authentication auth,
            HttpServletRequest httpRequest) {
        String updatedBy = auth != null ? auth.getName() : "system";
        config.setApplication(application);
        ApplicationConfig saved = configRepository.save(application, config, updatedBy);
        int pushed = pushConfigToAgents(application, saved);
        log.info("Config v{} saved for '{}', pushed to {} agent(s)", saved.getVersion(), application, pushed);
        auditService.log("update_app_config", AuditCategory.CONFIG, application,
            Map.of("version", saved.getVersion(), "agentsPushed", pushed),
            AuditResult.SUCCESS, httpRequest);
        return ResponseEntity.ok(saved);
    }

    @GetMapping("/{application}/processor-routes")
    @Operation(summary = "Get processor to route mapping",
        description = "Returns a map of processorId → routeId for all processors seen in this application")
    @ApiResponse(responseCode = "200", description = "Mapping returned")
    public ResponseEntity<Map<String, String>> getProcessorRouteMapping(@PathVariable String application) {
        return ResponseEntity.ok(diagramStore.findProcessorRouteMapping(application));
    }

    @PostMapping("/{application}/test-expression")
    @Operation(summary = "Test a tap expression against sample data via a live agent")
    @ApiResponse(responseCode = "200", description = "Expression evaluated successfully")
    @ApiResponse(responseCode = "404", description = "No live agent available for this application")
    @ApiResponse(responseCode = "504", description = "Agent did not respond in time")
    public ResponseEntity<TestExpressionResponse> testExpression(
            @PathVariable String application,
            @RequestBody TestExpressionRequest request) {
        // Find a LIVE agent for this application
        AgentInfo agent = registryService.findAll().stream()
            .filter(a -> application.equals(a.application()))
            .filter(a -> a.state() == AgentState.LIVE)
            .findFirst()
            .orElse(null);
        if (agent == null) {
            return ResponseEntity.status(HttpStatus.NOT_FOUND)
                .body(new TestExpressionResponse(null, "No live agent available for application: " + application));
        }

        // Build payload JSON
        String payloadJson;
        try {
            payloadJson = objectMapper.writeValueAsString(Map.of(
                "expression", request.expression() != null ? request.expression() : "",
                "language", request.language() != null ? request.language() : "",
                "body", request.body() != null ? request.body() : "",
                "target", request.target() != null ? request.target() : ""
            ));
        } catch (JsonProcessingException e) {
            log.error("Failed to serialize test-expression payload", e);
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                .body(new TestExpressionResponse(null, "Failed to serialize request"));
        }

        // Send command and await reply
        CompletableFuture<CommandReply> future = registryService.addCommandWithReply(
            agent.id(), CommandType.TEST_EXPRESSION, payloadJson);
        try {
            CommandReply reply = future.orTimeout(5, TimeUnit.SECONDS).join();
            if ("SUCCESS".equals(reply.status())) {
                return ResponseEntity.ok(new TestExpressionResponse(reply.data(), null));
            } else {
                return ResponseEntity.ok(new TestExpressionResponse(null, reply.message()));
            }
        } catch (CompletionException e) {
            if (e.getCause() instanceof TimeoutException) {
                return ResponseEntity.status(HttpStatus.GATEWAY_TIMEOUT)
                    .body(new TestExpressionResponse(null, "Agent did not respond within 5 seconds"));
            }
            log.error("Error awaiting test-expression reply from agent {}", agent.id(), e);
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                .body(new TestExpressionResponse(null, "Internal error: " + e.getCause().getMessage()));
        }
    }

    private int pushConfigToAgents(String application, ApplicationConfig config) {
        String payloadJson;
        try {
            payloadJson = objectMapper.writeValueAsString(config);
        } catch (JsonProcessingException e) {
            log.error("Failed to serialize config for push", e);
            return 0;
        }
        List<AgentInfo> agents = registryService.findAll().stream()
            .filter(a -> a.state() == AgentState.LIVE)
            .filter(a -> application.equals(a.application()))
            .toList();
        for (AgentInfo agent : agents) {
            registryService.addCommand(agent.id(), CommandType.CONFIG_UPDATE, payloadJson);
        }
        return agents.size();
    }

    private static ApplicationConfig defaultConfig(String application) {
        ApplicationConfig config = new ApplicationConfig();
        config.setApplication(application);
        config.setVersion(0);
        config.setMetricsEnabled(true);
        config.setSamplingRate(1.0);
        config.setTracedProcessors(Map.of());
        config.setApplicationLogLevel("INFO");
        config.setAgentLogLevel("INFO");
        config.setEngineLevel("REGULAR");
        config.setPayloadCaptureMode("NONE");
        return config;
    }
}
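The test-expression endpoint turns an asynchronous SSE command channel into a synchronous request-reply call by blocking on a `CompletableFuture` with `orTimeout`. This standalone sketch (my own illustration, not project code) shows both paths, using a 200 ms timeout in place of the controller's 5 s:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ReplyTimeoutDemo {

    record CommandReply(String status, String message) {}

    // Same pattern as testExpression: block on the reply future, but bound the wait.
    static String await(CompletableFuture<CommandReply> future) {
        try {
            return future.orTimeout(200, TimeUnit.MILLISECONDS).join().status();
        } catch (CompletionException e) {
            if (e.getCause() instanceof TimeoutException) {
                return "TIMEOUT"; // the controller maps this case to HTTP 504
            }
            return "ERROR";
        }
    }

    public static void main(String[] args) {
        // Reply already available: join returns immediately with the value
        System.out.println(await(CompletableFuture.completedFuture(new CommandReply("SUCCESS", null))));
        // No agent ever completes the future: orTimeout fails it after 200 ms
        System.out.println(await(new CompletableFuture<>()));
    }
}
```

The key property `orTimeout` provides is that the future is completed exceptionally with a `TimeoutException` if no one completes it first, so `join` never blocks longer than the bound.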
View File
@@ -5,8 +5,11 @@ import com.cameleer3.server.core.admin.AuditCategory;
import com.cameleer3.server.core.admin.AuditRepository;
import com.cameleer3.server.core.admin.AuditRepository.AuditPage;
import com.cameleer3.server.core.admin.AuditRepository.AuditQuery;
+import com.cameleer3.server.core.admin.AuditResult;
+import com.cameleer3.server.core.admin.AuditService;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.tags.Tag;
+import jakarta.servlet.http.HttpServletRequest;
import org.springframework.format.annotation.DateTimeFormat;
import org.springframework.http.ResponseEntity;
import org.springframework.security.access.prepost.PreAuthorize;

@@ -16,8 +19,6 @@ import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import java.time.Instant;
-import java.time.LocalDate;
-import java.time.ZoneOffset;

@RestController
@RequestMapping("/api/v1/admin/audit")

@@ -26,19 +27,22 @@ import java.time.ZoneOffset;
public class AuditLogController {

    private final AuditRepository auditRepository;
+    private final AuditService auditService;

-    public AuditLogController(AuditRepository auditRepository) {
+    public AuditLogController(AuditRepository auditRepository, AuditService auditService) {
        this.auditRepository = auditRepository;
+        this.auditService = auditService;
    }

    @GetMapping
    @Operation(summary = "Search audit log entries with pagination")
    public ResponseEntity<AuditLogPageResponse> getAuditLog(
+            HttpServletRequest httpRequest,
            @RequestParam(required = false) String username,
            @RequestParam(required = false) String category,
            @RequestParam(required = false) String search,
-            @RequestParam(required = false) @DateTimeFormat(iso = DateTimeFormat.ISO.DATE) LocalDate from,
-            @RequestParam(required = false) @DateTimeFormat(iso = DateTimeFormat.ISO.DATE) LocalDate to,
+            @RequestParam(required = false) @DateTimeFormat(iso = DateTimeFormat.ISO.DATE_TIME) Instant from,
+            @RequestParam(required = false) @DateTimeFormat(iso = DateTimeFormat.ISO.DATE_TIME) Instant to,
            @RequestParam(defaultValue = "timestamp") String sort,
            @RequestParam(defaultValue = "desc") String order,
            @RequestParam(defaultValue = "0") int page,

@@ -46,8 +50,8 @@ public class AuditLogController {
        size = Math.min(size, 100);

-        Instant fromInstant = from != null ? from.atStartOfDay(ZoneOffset.UTC).toInstant() : null;
-        Instant toInstant = to != null ? to.plusDays(1).atStartOfDay(ZoneOffset.UTC).toInstant() : null;
+        Instant fromInstant = from != null ? from : Instant.now().minus(java.time.Duration.ofDays(7));
+        Instant toInstant = to != null ? to : Instant.now();

        AuditCategory cat = null;
        if (category != null && !category.isEmpty()) {

@@ -58,6 +62,8 @@ public class AuditLogController {
            }
        }

+        auditService.log("view_audit_log", AuditCategory.AUTH, null, null, AuditResult.SUCCESS, httpRequest);
+
        AuditQuery query = new AuditQuery(username, cat, search, fromInstant, toInstant, sort, order, page, size);
        AuditPage result = auditRepository.find(query);
View File
@@ -7,6 +7,7 @@ import com.cameleer3.server.app.dto.TableSizeResponse;
 import com.cameleer3.server.core.admin.AuditCategory;
 import com.cameleer3.server.core.admin.AuditResult;
 import com.cameleer3.server.core.admin.AuditService;
+import com.cameleer3.server.core.ingestion.IngestionService;
 import com.zaxxer.hikari.HikariDataSource;
 import com.zaxxer.hikari.HikariPoolMXBean;
 import io.swagger.v3.oas.annotations.Operation;
@@ -24,7 +25,9 @@ import org.springframework.web.bind.annotation.RestController;
 import org.springframework.web.server.ResponseStatusException;
 import javax.sql.DataSource;
+import java.time.Instant;
 import java.util.List;
+import java.util.Map;
 @RestController
 @RequestMapping("/api/v1/admin/database")
@@ -35,11 +38,14 @@ public class DatabaseAdminController {
     private final JdbcTemplate jdbc;
     private final DataSource dataSource;
     private final AuditService auditService;
+    private final IngestionService ingestionService;
-    public DatabaseAdminController(JdbcTemplate jdbc, DataSource dataSource, AuditService auditService) {
+    public DatabaseAdminController(JdbcTemplate jdbc, DataSource dataSource,
+                                   AuditService auditService, IngestionService ingestionService) {
         this.jdbc = jdbc;
         this.dataSource = dataSource;
         this.auditService = auditService;
+        this.ingestionService = ingestionService;
     }
     @GetMapping("/status")
@@ -117,6 +123,29 @@ public class DatabaseAdminController {
         return ResponseEntity.ok().build();
     }
+    @GetMapping("/metrics-pipeline")
+    @Operation(summary = "Get metrics ingestion pipeline diagnostics")
+    public ResponseEntity<Map<String, Object>> getMetricsPipeline() {
+        int bufferDepth = ingestionService.getMetricsBufferDepth();
+        Long totalRows = jdbc.queryForObject(
+            "SELECT count(*) FROM agent_metrics", Long.class);
+        List<String> agentIds = jdbc.queryForList(
+            "SELECT DISTINCT agent_id FROM agent_metrics ORDER BY agent_id", String.class);
+        Instant latestCollected = jdbc.queryForObject(
+            "SELECT max(collected_at) FROM agent_metrics", Instant.class);
+        List<String> metricNames = jdbc.queryForList(
+            "SELECT DISTINCT metric_name FROM agent_metrics ORDER BY metric_name", String.class);
+        return ResponseEntity.ok(Map.of(
+            "bufferDepth", bufferDepth,
+            "totalRows", totalRows != null ? totalRows : 0,
+            "distinctAgents", agentIds,
+            "distinctMetrics", metricNames,
+            "latestCollectedAt", latestCollected != null ? latestCollected.toString() : "none"
+        ));
+    }
     private String extractHost(DataSource ds) {
         try {
             if (ds instanceof HikariDataSource hds) {
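One detail worth noting in `getMetricsPipeline`: the null guards before `Map.of` are load-bearing, because `java.util.Map.of` throws a `NullPointerException` on null values, and aggregates like `max(collected_at)` return null on an empty table. A small standalone demonstration (class name hypothetical):

```java
import java.util.Map;

public class NullSafeMapDemo {
    // Mirrors the endpoint's guard: fall back to "none" instead of null.
    public static Object latestOrNone(Object latest) {
        return latest != null ? latest.toString() : "none";
    }

    public static void main(String[] args) {
        Map<String, Object> ok = Map.of("latestCollectedAt", latestOrNone(null));
        assert ok.get("latestCollectedAt").equals("none");
        boolean threw = false;
        try {
            Map.of("latestCollectedAt", (Object) null); // Map.of rejects null values
        } catch (NullPointerException expected) {
            threw = true;
        }
        assert threw;
    }
}
```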

View File

@@ -49,7 +49,7 @@ public class DetailController {
     }
     @GetMapping("/{executionId}/processors/{index}/snapshot")
-    @Operation(summary = "Get exchange snapshot for a specific processor")
+    @Operation(summary = "Get exchange snapshot for a specific processor by index")
     @ApiResponse(responseCode = "200", description = "Snapshot data")
     @ApiResponse(responseCode = "404", description = "Snapshot not found")
     public ResponseEntity<Map<String, String>> getProcessorSnapshot(
@@ -69,4 +69,16 @@ public class DetailController {
         return ResponseEntity.ok(snapshot);
     }
+    @GetMapping("/{executionId}/processors/by-id/{processorId}/snapshot")
+    @Operation(summary = "Get exchange snapshot for a specific processor by processorId")
+    @ApiResponse(responseCode = "200", description = "Snapshot data")
+    @ApiResponse(responseCode = "404", description = "Snapshot not found")
+    public ResponseEntity<Map<String, String>> processorSnapshotById(
+            @PathVariable String executionId,
+            @PathVariable String processorId) {
+        return detailService.getProcessorSnapshot(executionId, processorId)
+            .map(ResponseEntity::ok)
+            .orElse(ResponseEntity.notFound().build());
+    }
 }
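The by-`processorId` endpoint leans on the usual Optional-to-HTTP mapping: a present snapshot becomes a 200 with a body, an empty Optional becomes a 404. Sketched without the Spring types (names hypothetical, outcome encoded as a status string for illustration):

```java
import java.util.Optional;

public class OptionalToHttp {
    // Stand-in for .map(ResponseEntity::ok).orElse(ResponseEntity.notFound().build()).
    public static String toResponse(Optional<String> snapshot) {
        return snapshot.map(s -> "200 " + s).orElse("404");
    }

    public static void main(String[] args) {
        assert toResponse(Optional.of("{\"body\":\"...\"}")).startsWith("200 ");
        assert toResponse(Optional.empty()).equals("404");
    }
}
```

Looking up by stable `processorId` rather than positional index is what makes the frontend overlay robust when the processor list shifts between executions.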

View File

@@ -1,6 +1,8 @@
 package com.cameleer3.server.app.controller;
 import com.cameleer3.common.graph.RouteGraph;
+import com.cameleer3.server.core.agent.AgentInfo;
+import com.cameleer3.server.core.agent.AgentRegistryService;
 import com.cameleer3.server.core.ingestion.IngestionService;
 import com.cameleer3.server.core.ingestion.TaggedDiagram;
 import com.fasterxml.jackson.core.JsonProcessingException;
@@ -35,10 +37,14 @@ public class DiagramController {
     private static final Logger log = LoggerFactory.getLogger(DiagramController.class);
     private final IngestionService ingestionService;
+    private final AgentRegistryService registryService;
     private final ObjectMapper objectMapper;
-    public DiagramController(IngestionService ingestionService, ObjectMapper objectMapper) {
+    public DiagramController(IngestionService ingestionService,
+                             AgentRegistryService registryService,
+                             ObjectMapper objectMapper) {
         this.ingestionService = ingestionService;
+        this.registryService = registryService;
         this.objectMapper = objectMapper;
     }
@@ -48,10 +54,11 @@ public class DiagramController {
     @ApiResponse(responseCode = "202", description = "Data accepted for processing")
     public ResponseEntity<Void> ingestDiagrams(@RequestBody String body) throws JsonProcessingException {
         String agentId = extractAgentId();
+        String applicationName = resolveApplicationName(agentId);
         List<RouteGraph> graphs = parsePayload(body);
         for (RouteGraph graph : graphs) {
-            ingestionService.ingestDiagram(new TaggedDiagram(agentId, graph));
+            ingestionService.ingestDiagram(new TaggedDiagram(agentId, applicationName, graph));
         }
         return ResponseEntity.accepted().build();
@@ -62,6 +69,11 @@ public class DiagramController {
         return auth != null ? auth.getName() : "";
     }
+    private String resolveApplicationName(String agentId) {
+        AgentInfo agent = registryService.findById(agentId);
+        return agent != null ? agent.application() : "";
+    }
     private List<RouteGraph> parsePayload(String body) throws JsonProcessingException {
         String trimmed = body.strip();
         if (trimmed.startsWith("[")) {
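`resolveApplicationName` degrades gracefully: an agent missing from the registry yields an empty application name rather than failing ingestion. The lookup reduces to the following (registry stubbed as a map; class and method names hypothetical):

```java
import java.util.Map;

public class AppNameResolver {
    public static String resolve(Map<String, String> registry, String agentId) {
        String application = registry.get(agentId); // null when agent is unknown
        return application != null ? application : "";
    }

    public static void main(String[] args) {
        Map<String, String> registry = Map.of("agent-1", "orders-service");
        assert resolve(registry, "agent-1").equals("orders-service");
        assert resolve(registry, "ghost").equals("");
    }
}
```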

View File

@@ -62,6 +62,7 @@ public class DiagramRenderController {
     @ApiResponse(responseCode = "404", description = "Diagram not found")
     public ResponseEntity<?> renderDiagram(
             @PathVariable String contentHash,
+            @RequestParam(defaultValue = "LR") String direction,
             HttpServletRequest request) {
         Optional<RouteGraph> graphOpt = diagramStore.findByContentHash(contentHash);
@@ -76,7 +77,7 @@ public class DiagramRenderController {
         // without also accepting everything (*/*). This means "application/json"
         // must appear and wildcards must not dominate the preference.
         if (accept != null && isJsonPreferred(accept)) {
-            DiagramLayout layout = diagramRenderer.layoutJson(graph);
+            DiagramLayout layout = diagramRenderer.layoutJson(graph, direction);
             return ResponseEntity.ok()
                 .contentType(MediaType.APPLICATION_JSON)
                 .body(layout);
@@ -96,7 +97,8 @@ public class DiagramRenderController {
     @ApiResponse(responseCode = "404", description = "No diagram found for the given application and route")
     public ResponseEntity<DiagramLayout> findByApplicationAndRoute(
             @RequestParam String application,
-            @RequestParam String routeId) {
+            @RequestParam String routeId,
+            @RequestParam(defaultValue = "LR") String direction) {
         List<String> agentIds = registryService.findByApplication(application).stream()
             .map(AgentInfo::id)
             .toList();
@@ -115,7 +117,7 @@ public class DiagramRenderController {
             return ResponseEntity.notFound().build();
         }
-        DiagramLayout layout = diagramRenderer.layoutJson(graphOpt.get());
+        DiagramLayout layout = diagramRenderer.layoutJson(graphOpt.get(), direction);
         return ResponseEntity.ok(layout);
     }

View File

@@ -0,0 +1,61 @@
package com.cameleer3.server.app.controller;

import com.cameleer3.common.model.LogBatch;
import com.cameleer3.server.app.search.OpenSearchLogIndex;
import com.cameleer3.server.core.agent.AgentInfo;
import com.cameleer3.server.core.agent.AgentRegistryService;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.ResponseEntity;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api/v1/data")
@Tag(name = "Ingestion", description = "Data ingestion endpoints")
public class LogIngestionController {

    private static final Logger log = LoggerFactory.getLogger(LogIngestionController.class);

    private final OpenSearchLogIndex logIndex;
    private final AgentRegistryService registryService;

    public LogIngestionController(OpenSearchLogIndex logIndex,
                                  AgentRegistryService registryService) {
        this.logIndex = logIndex;
        this.registryService = registryService;
    }

    @PostMapping("/logs")
    @Operation(summary = "Ingest application log entries",
        description = "Accepts a batch of log entries from an agent. Entries are indexed in OpenSearch.")
    @ApiResponse(responseCode = "202", description = "Logs accepted for indexing")
    public ResponseEntity<Void> ingestLogs(@RequestBody LogBatch batch) {
        String agentId = extractAgentId();
        String application = resolveApplicationName(agentId);
        if (batch.getEntries() != null && !batch.getEntries().isEmpty()) {
            log.debug("Received {} log entries from agent={}, app={}", batch.getEntries().size(), agentId, application);
            logIndex.indexBatch(agentId, application, batch.getEntries());
        }
        return ResponseEntity.accepted().build();
    }

    private String extractAgentId() {
        Authentication auth = SecurityContextHolder.getContext().getAuthentication();
        return auth != null ? auth.getName() : "";
    }

    private String resolveApplicationName(String agentId) {
        AgentInfo agent = registryService.findById(agentId);
        return agent != null ? agent.application() : "";
    }
}

View File

@@ -0,0 +1,50 @@
package com.cameleer3.server.app.controller;

import com.cameleer3.server.app.dto.LogEntryResponse;
import com.cameleer3.server.app.search.OpenSearchLogIndex;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import java.time.Instant;
import java.util.List;

@RestController
@RequestMapping("/api/v1/logs")
@Tag(name = "Application Logs", description = "Query application logs stored in OpenSearch")
public class LogQueryController {

    private final OpenSearchLogIndex logIndex;

    public LogQueryController(OpenSearchLogIndex logIndex) {
        this.logIndex = logIndex;
    }

    @GetMapping
    @Operation(summary = "Search application log entries",
        description = "Returns log entries for a given application, optionally filtered by agent, level, time range, and text query")
    public ResponseEntity<List<LogEntryResponse>> searchLogs(
            @RequestParam String application,
            @RequestParam(required = false) String agentId,
            @RequestParam(required = false) String level,
            @RequestParam(required = false) String query,
            @RequestParam(required = false) String exchangeId,
            @RequestParam(required = false) String from,
            @RequestParam(required = false) String to,
            @RequestParam(defaultValue = "200") int limit) {
        limit = Math.min(limit, 1000);
        Instant fromInstant = from != null ? Instant.parse(from) : null;
        Instant toInstant = to != null ? Instant.parse(to) : null;
        List<LogEntryResponse> entries = logIndex.search(
            application, agentId, level, query, exchangeId, fromInstant, toInstant, limit);
        return ResponseEntity.ok(entries);
    }
}
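Two parameter-handling details in `searchLogs` are worth calling out: `limit` is capped at 1000 but has no lower bound (a non-positive value passes straight through to the index), and `from`/`to` go through `Instant.parse`, which throws `DateTimeParseException` on malformed input rather than treating it as absent. A sketch (names hypothetical):

```java
import java.time.Instant;

public class LogQueryParams {
    public static int clampLimit(int limit) {
        return Math.min(limit, 1000); // upper bound only; negatives pass through
    }

    public static Instant parseOrNull(String value) {
        return value != null ? Instant.parse(value) : null; // throws on malformed input
    }

    public static void main(String[] args) {
        assert clampLimit(5000) == 1000;
        assert clampLimit(200) == 200;
        assert clampLimit(-5) == -5; // no lower bound enforced
        assert parseOrNull(null) == null;
        assert parseOrNull("2026-03-27T00:00:00Z") != null;
    }
}
```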

View File

@@ -44,13 +44,23 @@ public class MetricsController {
     @Operation(summary = "Ingest agent metrics",
         description = "Accepts an array of MetricsSnapshot objects")
     @ApiResponse(responseCode = "202", description = "Data accepted for processing")
+    @ApiResponse(responseCode = "400", description = "Invalid payload")
     @ApiResponse(responseCode = "503", description = "Buffer full, retry later")
-    public ResponseEntity<Void> ingestMetrics(@RequestBody String body) throws JsonProcessingException {
-        List<MetricsSnapshot> metrics = parsePayload(body);
-        boolean accepted = ingestionService.acceptMetrics(metrics);
+    public ResponseEntity<Void> ingestMetrics(@RequestBody String body) {
+        List<MetricsSnapshot> metrics;
+        try {
+            metrics = parsePayload(body);
+        } catch (JsonProcessingException e) {
+            log.warn("Failed to parse metrics payload: {}", e.getMessage());
+            return ResponseEntity.badRequest().build();
+        }
+        log.debug("Received {} metric(s) from agent(s)", metrics.size());
+        boolean accepted = ingestionService.acceptMetrics(metrics);
         if (!accepted) {
-            log.warn("Metrics buffer full, returning 503");
+            log.warn("Metrics buffer full ({} items), returning 503",
+                ingestionService.getMetricsBufferDepth());
             return ResponseEntity.status(HttpStatus.SERVICE_UNAVAILABLE)
                 .header("Retry-After", "5")
                 .build();
View File

@@ -61,7 +61,8 @@ public class OidcConfigAdminController {
     @GetMapping
     @Operation(summary = "Get OIDC configuration")
     @ApiResponse(responseCode = "200", description = "Current OIDC configuration (client_secret masked)")
-    public ResponseEntity<OidcAdminConfigResponse> getConfig() {
+    public ResponseEntity<OidcAdminConfigResponse> getConfig(HttpServletRequest httpRequest) {
+        auditService.log("view_oidc_config", AuditCategory.CONFIG, null, null, AuditResult.SUCCESS, httpRequest);
         Optional<OidcConfig> config = configRepository.find();
         if (config.isEmpty()) {
             return ResponseEntity.ok(OidcAdminConfigResponse.unconfigured());

View File

@@ -49,12 +49,14 @@ public class OpenSearchAdminController {
     private final ObjectMapper objectMapper;
     private final String opensearchUrl;
     private final String indexPrefix;
+    private final String logIndexPrefix;
     public OpenSearchAdminController(OpenSearchClient client, RestClient restClient,
                                      SearchIndexerStats indexerStats, AuditService auditService,
                                      ObjectMapper objectMapper,
                                      @Value("${opensearch.url:http://localhost:9200}") String opensearchUrl,
-                                     @Value("${opensearch.index-prefix:executions-}") String indexPrefix) {
+                                     @Value("${opensearch.index-prefix:executions-}") String indexPrefix,
+                                     @Value("${opensearch.log-index-prefix:logs-}") String logIndexPrefix) {
         this.client = client;
         this.restClient = restClient;
         this.indexerStats = indexerStats;
@@ -62,6 +64,7 @@ public class OpenSearchAdminController {
         this.objectMapper = objectMapper;
         this.opensearchUrl = opensearchUrl;
         this.indexPrefix = indexPrefix;
+        this.logIndexPrefix = logIndexPrefix;
     }
     @GetMapping("/status")
@@ -100,7 +103,8 @@ public class OpenSearchAdminController {
     public ResponseEntity<IndicesPageResponse> getIndices(
             @RequestParam(defaultValue = "0") int page,
             @RequestParam(defaultValue = "20") int size,
-            @RequestParam(defaultValue = "") String search) {
+            @RequestParam(defaultValue = "") String search,
+            @RequestParam(defaultValue = "executions") String prefix) {
         try {
             Response response = restClient.performRequest(
                 new Request("GET", "/_cat/indices?format=json&h=index,health,docs.count,store.size,pri,rep&bytes=b"));
@@ -109,10 +113,12 @@ public class OpenSearchAdminController {
                 indices = objectMapper.readTree(is);
             }
+            String filterPrefix = "logs".equals(prefix) ? logIndexPrefix : indexPrefix;
             List<IndexInfoResponse> allIndices = new ArrayList<>();
             for (JsonNode idx : indices) {
                 String name = idx.path("index").asText("");
-                if (!name.startsWith(indexPrefix)) {
+                if (!name.startsWith(filterPrefix)) {
                     continue;
                 }
                 if (!search.isEmpty() && !name.contains(search)) {
@@ -152,7 +158,7 @@ public class OpenSearchAdminController {
     @Operation(summary = "Delete an OpenSearch index")
     public ResponseEntity<Void> deleteIndex(@PathVariable String name, HttpServletRequest request) {
         try {
-            if (!name.startsWith(indexPrefix)) {
+            if (!name.startsWith(indexPrefix) && !name.startsWith(logIndexPrefix)) {
                 throw new ResponseStatusException(HttpStatus.FORBIDDEN, "Cannot delete index outside application scope");
             }
             boolean exists = client.indices().exists(r -> r.index(name)).value();
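Index scoping now covers both namespaces: listing filters on either the executions or logs prefix depending on the `prefix` parameter, and deletion is allowed only inside those two prefixes. Using the `@Value` defaults from the diff (`executions-`, `logs-`), the logic reduces to (class name hypothetical):

```java
public class IndexScope {
    static final String EXEC_PREFIX = "executions-"; // @Value defaults from the diff
    static final String LOG_PREFIX = "logs-";

    public static String filterPrefix(String prefixParam) {
        return "logs".equals(prefixParam) ? LOG_PREFIX : EXEC_PREFIX;
    }

    public static boolean deletable(String indexName) {
        return indexName.startsWith(EXEC_PREFIX) || indexName.startsWith(LOG_PREFIX);
    }

    public static void main(String[] args) {
        assert filterPrefix("logs").equals("logs-");
        assert filterPrefix("executions").equals("executions-");
        assert filterPrefix("anything-else").equals("executions-"); // fallback
        assert deletable("logs-2026.03");
        assert !deletable(".kibana"); // outside application scope, rejected with 403
    }
}
```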

View File

@@ -14,6 +14,7 @@ import org.springframework.http.ResponseEntity;
 import org.springframework.jdbc.core.JdbcTemplate;
 import org.springframework.web.bind.annotation.GetMapping;
 import org.springframework.web.bind.annotation.RequestMapping;
+import org.springframework.web.bind.annotation.RequestParam;
 import org.springframework.web.bind.annotation.RestController;
 import java.sql.Timestamp;
@@ -44,7 +45,9 @@ public class RouteCatalogController {
     @Operation(summary = "Get route catalog",
         description = "Returns all applications with their routes, agents, and health status")
     @ApiResponse(responseCode = "200", description = "Catalog returned")
-    public ResponseEntity<List<AppCatalogEntry>> getCatalog() {
+    public ResponseEntity<List<AppCatalogEntry>> getCatalog(
+            @RequestParam(required = false) String from,
+            @RequestParam(required = false) String to) {
         List<AgentInfo> allAgents = registryService.findAll();
         // Group agents by application name
@@ -63,9 +66,10 @@ public class RouteCatalogController {
             routesByApp.put(entry.getKey(), routes);
         }
-        // Query route-level stats for the last 24 hours
+        // Time range for exchange counts — use provided range or default to last 24h
         Instant now = Instant.now();
-        Instant from24h = now.minus(24, ChronoUnit.HOURS);
+        Instant rangeFrom = from != null ? Instant.parse(from) : now.minus(24, ChronoUnit.HOURS);
+        Instant rangeTo = to != null ? Instant.parse(to) : now;
         Instant from1m = now.minus(1, ChronoUnit.MINUTES);
         // Route exchange counts from continuous aggregate
@@ -82,7 +86,7 @@ public class RouteCatalogController {
                 Timestamp ts = rs.getTimestamp("last_seen");
                 if (ts != null) routeLastSeen.put(key, ts.toInstant());
             },
-            Timestamp.from(from24h), Timestamp.from(now));
+            Timestamp.from(rangeFrom), Timestamp.from(rangeTo));
         } catch (Exception e) {
             // Continuous aggregate may not exist yet
         }
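The catalog's window resolution follows the same pattern as the audit-log change: explicit `from`/`to` win, otherwise the range defaults to the trailing 24 hours, and both bounds feed `Timestamp.from` for the aggregate query. A sketch (names hypothetical; note that `Instant.parse` still throws on malformed input rather than falling back):

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class CatalogRange {
    public static Instant[] resolve(String from, String to, Instant now) {
        Instant rangeFrom = from != null ? Instant.parse(from) : now.minus(24, ChronoUnit.HOURS);
        Instant rangeTo = to != null ? Instant.parse(to) : now;
        return new Instant[] { rangeFrom, rangeTo };
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2026-03-27T12:00:00Z");
        Instant[] r = resolve(null, null, now);
        assert r[0].equals(Instant.parse("2026-03-26T12:00:00Z"));
        assert r[1].equals(now);
        assert resolve("2026-03-01T00:00:00Z", null, now)[0]
            .equals(Instant.parse("2026-03-01T00:00:00Z"));
    }
}
```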

View File

@@ -58,7 +58,8 @@ public class UserAdminController {
     @GetMapping
     @Operation(summary = "List all users with RBAC detail")
     @ApiResponse(responseCode = "200", description = "User list returned")
-    public ResponseEntity<List<UserDetail>> listUsers() {
+    public ResponseEntity<List<UserDetail>> listUsers(HttpServletRequest httpRequest) {
+        auditService.log("view_users", AuditCategory.USER_MGMT, null, null, AuditResult.SUCCESS, httpRequest);
         return ResponseEntity.ok(rbacService.listUsers());
     }

View File

@@ -13,6 +13,7 @@ import org.eclipse.elk.core.RecursiveGraphLayoutEngine;
import org.eclipse.elk.core.options.CoreOptions; import org.eclipse.elk.core.options.CoreOptions;
import org.eclipse.elk.core.options.Direction; import org.eclipse.elk.core.options.Direction;
import org.eclipse.elk.core.options.HierarchyHandling; import org.eclipse.elk.core.options.HierarchyHandling;
import org.eclipse.elk.alg.layered.options.NodePlacementStrategy;
import org.eclipse.elk.core.util.BasicProgressMonitor; import org.eclipse.elk.core.util.BasicProgressMonitor;
import org.eclipse.elk.graph.ElkBendPoint; import org.eclipse.elk.graph.ElkBendPoint;
import org.eclipse.elk.graph.ElkEdge; import org.eclipse.elk.graph.ElkEdge;
@@ -45,6 +46,7 @@ public class ElkDiagramRenderer implements DiagramRenderer {
private static final int PADDING = 20; private static final int PADDING = 20;
private static final int NODE_HEIGHT = 40; private static final int NODE_HEIGHT = 40;
private static final int NODE_WIDTH = 160;
private static final int MIN_NODE_WIDTH = 80; private static final int MIN_NODE_WIDTH = 80;
private static final int CHAR_WIDTH = 8; private static final int CHAR_WIDTH = 8;
private static final int LABEL_PADDING = 32; private static final int LABEL_PADDING = 32;
@@ -97,9 +99,12 @@ public class ElkDiagramRenderer implements DiagramRenderer {
/** NodeTypes that act as compound containers with children. */ /** NodeTypes that act as compound containers with children. */
private static final Set<NodeType> COMPOUND_TYPES = EnumSet.of( private static final Set<NodeType> COMPOUND_TYPES = EnumSet.of(
NodeType.EIP_CHOICE, NodeType.EIP_SPLIT, NodeType.TRY_CATCH, NodeType.EIP_CHOICE, NodeType.EIP_WHEN, NodeType.EIP_OTHERWISE,
NodeType.DO_TRY, NodeType.EIP_LOOP, NodeType.EIP_MULTICAST, NodeType.EIP_SPLIT, NodeType.TRY_CATCH,
NodeType.EIP_AGGREGATE NodeType.DO_TRY, NodeType.DO_CATCH, NodeType.DO_FINALLY,
NodeType.EIP_LOOP, NodeType.EIP_MULTICAST,
NodeType.EIP_AGGREGATE, NodeType.ON_EXCEPTION, NodeType.ERROR_HANDLER,
NodeType.ON_COMPLETION
); );
public ElkDiagramRenderer() { public ElkDiagramRenderer() {
@@ -112,7 +117,7 @@ public class ElkDiagramRenderer implements DiagramRenderer {
@Override @Override
public String renderSvg(RouteGraph graph) { public String renderSvg(RouteGraph graph) {
LayoutResult result = computeLayout(graph); LayoutResult result = computeLayout(graph, Direction.DOWN);
DiagramLayout layout = result.layout; DiagramLayout layout = result.layout;
int svgWidth = (int) Math.ceil(layout.width()) + 2 * PADDING; int svgWidth = (int) Math.ceil(layout.width()) + 2 * PADDING;
@@ -153,97 +158,56 @@ public class ElkDiagramRenderer implements DiagramRenderer {
@Override @Override
public DiagramLayout layoutJson(RouteGraph graph) { public DiagramLayout layoutJson(RouteGraph graph) {
return computeLayout(graph).layout; return computeLayout(graph, Direction.RIGHT).layout;
}
@Override
public DiagramLayout layoutJson(RouteGraph graph, String direction) {
Direction dir = "TB".equalsIgnoreCase(direction) ? Direction.DOWN : Direction.RIGHT;
return computeLayout(graph, dir).layout;
} }
// ---------------------------------------------------------------- // ----------------------------------------------------------------
// Layout computation // Layout computation
// ---------------------------------------------------------------- // ----------------------------------------------------------------
private LayoutResult computeLayout(RouteGraph graph) { private LayoutResult computeLayout(RouteGraph graph, Direction rootDirection) {
ElkGraphFactory factory = ElkGraphFactory.eINSTANCE; ElkGraphFactory factory = ElkGraphFactory.eINSTANCE;
// Create root node // Create root node
ElkNode rootNode = factory.createElkNode(); ElkNode rootNode = factory.createElkNode();
rootNode.setIdentifier("root"); rootNode.setIdentifier("root");
rootNode.setProperty(CoreOptions.ALGORITHM, "org.eclipse.elk.layered"); rootNode.setProperty(CoreOptions.ALGORITHM, "org.eclipse.elk.layered");
rootNode.setProperty(CoreOptions.DIRECTION, Direction.DOWN); rootNode.setProperty(CoreOptions.DIRECTION, rootDirection);
rootNode.setProperty(CoreOptions.SPACING_NODE_NODE, NODE_SPACING); rootNode.setProperty(CoreOptions.SPACING_NODE_NODE, NODE_SPACING);
rootNode.setProperty(CoreOptions.SPACING_EDGE_NODE, EDGE_SPACING); rootNode.setProperty(CoreOptions.SPACING_EDGE_NODE, EDGE_SPACING);
rootNode.setProperty(CoreOptions.HIERARCHY_HANDLING, HierarchyHandling.INCLUDE_CHILDREN); rootNode.setProperty(CoreOptions.HIERARCHY_HANDLING, HierarchyHandling.INCLUDE_CHILDREN);
rootNode.setProperty(org.eclipse.elk.alg.layered.options.LayeredOptions.NODE_PLACEMENT_STRATEGY,
NodePlacementStrategy.LINEAR_SEGMENTS);
// Build index of RouteNodes // Build index of all RouteNodes (flat list from graph + recursive children)
Map<String, RouteNode> routeNodeMap = new HashMap<>(); Map<String, RouteNode> routeNodeMap = new HashMap<>();
if (graph.getNodes() != null) { if (graph.getNodes() != null) {
for (RouteNode rn : graph.getNodes()) { for (RouteNode rn : graph.getNodes()) {
routeNodeMap.put(rn.getId(), rn); indexNodeRecursive(rn, routeNodeMap);
} }
} }
// Identify compound node IDs and their children // Track which nodes are children of a compound (at any depth)
Set<String> compoundNodeIds = new HashSet<>(); Set<String> childNodeIds = new HashSet<>();
Map<String, String> childToParent = new HashMap<>();
for (RouteNode rn : routeNodeMap.values()) {
if (rn.getType() != null && COMPOUND_TYPES.contains(rn.getType())
&& rn.getChildren() != null && !rn.getChildren().isEmpty()) {
compoundNodeIds.add(rn.getId());
for (RouteNode child : rn.getChildren()) {
childToParent.put(child.getId(), rn.getId());
}
}
}
// Create ELK nodes // Create ELK nodes recursively — compounds contain their children
Map<String, ElkNode> elkNodeMap = new HashMap<>(); Map<String, ElkNode> elkNodeMap = new HashMap<>();
Map<String, Color> nodeColors = new HashMap<>(); Map<String, Color> nodeColors = new HashMap<>();
Set<String> compoundNodeIds = new HashSet<>();
// First, create compound (parent) nodes // Process top-level nodes from the graph
for (String compoundId : compoundNodeIds) { if (graph.getNodes() != null) {
RouteNode rn = routeNodeMap.get(compoundId); for (RouteNode rn : graph.getNodes()) {
ElkNode elkCompound = factory.createElkNode(); if (!elkNodeMap.containsKey(rn.getId())) {
elkCompound.setIdentifier(rn.getId()); createElkNodeRecursive(rn, rootNode, factory, elkNodeMap, nodeColors,
elkCompound.setParent(rootNode); compoundNodeIds, childNodeIds);
}
// Compound nodes are larger initially -- ELK will resize
elkCompound.setWidth(200);
elkCompound.setHeight(100);
// Set properties for compound layout
elkCompound.setProperty(CoreOptions.ALGORITHM, "org.eclipse.elk.layered");
elkCompound.setProperty(CoreOptions.DIRECTION, Direction.DOWN);
elkCompound.setProperty(CoreOptions.SPACING_NODE_NODE, NODE_SPACING * 0.5);
elkCompound.setProperty(CoreOptions.SPACING_EDGE_NODE, EDGE_SPACING * 0.5);
elkCompound.setProperty(CoreOptions.PADDING,
new org.eclipse.elk.core.math.ElkPadding(COMPOUND_TOP_PADDING,
COMPOUND_SIDE_PADDING, COMPOUND_SIDE_PADDING, COMPOUND_SIDE_PADDING));
elkNodeMap.put(rn.getId(), elkCompound);
nodeColors.put(rn.getId(), colorForType(rn.getType()));
// Create child nodes inside compound
for (RouteNode child : rn.getChildren()) {
ElkNode elkChild = factory.createElkNode();
elkChild.setIdentifier(child.getId());
elkChild.setParent(elkCompound);
int w = Math.max(MIN_NODE_WIDTH, (child.getLabel() != null ? child.getLabel().length() : 0) * CHAR_WIDTH + LABEL_PADDING);
elkChild.setWidth(w);
elkChild.setHeight(NODE_HEIGHT);
elkNodeMap.put(child.getId(), elkChild);
nodeColors.put(child.getId(), colorForType(child.getType()));
}
}
// Then, create non-compound, non-child nodes
for (RouteNode rn : routeNodeMap.values()) {
if (!elkNodeMap.containsKey(rn.getId())) {
ElkNode elkNode = factory.createElkNode();
elkNode.setIdentifier(rn.getId());
elkNode.setParent(rootNode);
int w = Math.max(MIN_NODE_WIDTH, (rn.getLabel() != null ? rn.getLabel().length() : 0) * CHAR_WIDTH + LABEL_PADDING);
elkNode.setWidth(w);
elkNode.setHeight(NODE_HEIGHT);
elkNodeMap.put(rn.getId(), elkNode);
nodeColors.put(rn.getId(), colorForType(rn.getType()));
} }
} }
@@ -270,64 +234,21 @@ public class ElkDiagramRenderer implements DiagramRenderer {
        RecursiveGraphLayoutEngine engine = new RecursiveGraphLayoutEngine();
        engine.layout(rootNode, new BasicProgressMonitor());
        // Extract results — only top-level nodes (children collected recursively)
        List<PositionedNode> positionedNodes = new ArrayList<>();
        Map<String, CompoundInfo> compoundInfos = new HashMap<>();
        if (graph.getNodes() != null) {
            for (RouteNode rn : graph.getNodes()) {
                if (childNodeIds.contains(rn.getId())) {
                    // Skip — collected under its parent compound
                    continue;
                }
                ElkNode elkNode = elkNodeMap.get(rn.getId());
                if (elkNode == null) continue;
                positionedNodes.add(extractPositionedNode(rn, elkNode, elkNodeMap,
                        compoundNodeIds, compoundInfos, rootNode));
            }
        }
@@ -481,6 +402,98 @@ public class ElkDiagramRenderer implements DiagramRenderer {
    }
}
// ----------------------------------------------------------------
// Recursive node building
// ----------------------------------------------------------------
/** Index a RouteNode and all its descendants into the map. */
private void indexNodeRecursive(RouteNode node, Map<String, RouteNode> map) {
map.put(node.getId(), node);
if (node.getChildren() != null) {
for (RouteNode child : node.getChildren()) {
indexNodeRecursive(child, map);
}
}
}
/**
* Recursively create ELK nodes. Compound nodes become ELK containers
* with their children nested inside. Non-compound nodes become leaf nodes.
*/
private void createElkNodeRecursive(
RouteNode rn, ElkNode parentElk, ElkGraphFactory factory,
Map<String, ElkNode> elkNodeMap, Map<String, Color> nodeColors,
Set<String> compoundNodeIds, Set<String> childNodeIds) {
boolean isCompound = rn.getType() != null && COMPOUND_TYPES.contains(rn.getType())
&& rn.getChildren() != null && !rn.getChildren().isEmpty();
ElkNode elkNode = factory.createElkNode();
elkNode.setIdentifier(rn.getId());
elkNode.setParent(parentElk);
if (isCompound) {
compoundNodeIds.add(rn.getId());
elkNode.setWidth(200);
elkNode.setHeight(100);
elkNode.setProperty(CoreOptions.ALGORITHM, "org.eclipse.elk.layered");
elkNode.setProperty(CoreOptions.DIRECTION, Direction.DOWN);
elkNode.setProperty(CoreOptions.SPACING_NODE_NODE, NODE_SPACING * 0.5);
elkNode.setProperty(CoreOptions.SPACING_EDGE_NODE, EDGE_SPACING * 0.5);
elkNode.setProperty(CoreOptions.PADDING,
new org.eclipse.elk.core.math.ElkPadding(COMPOUND_TOP_PADDING,
COMPOUND_SIDE_PADDING, COMPOUND_SIDE_PADDING, COMPOUND_SIDE_PADDING));
// Recursively create children inside this compound
for (RouteNode child : rn.getChildren()) {
childNodeIds.add(child.getId());
createElkNodeRecursive(child, elkNode, factory, elkNodeMap, nodeColors,
compoundNodeIds, childNodeIds);
}
} else {
elkNode.setWidth(NODE_WIDTH);
elkNode.setHeight(NODE_HEIGHT);
}
elkNodeMap.put(rn.getId(), elkNode);
nodeColors.put(rn.getId(), colorForType(rn.getType()));
}
/**
* Recursively extract a PositionedNode from the ELK layout result.
* Compound nodes include their children with absolute coordinates.
*/
private PositionedNode extractPositionedNode(
RouteNode rn, ElkNode elkNode, Map<String, ElkNode> elkNodeMap,
Set<String> compoundNodeIds, Map<String, CompoundInfo> compoundInfos,
ElkNode rootNode) {
double absX = getAbsoluteX(elkNode, rootNode);
double absY = getAbsoluteY(elkNode, rootNode);
List<PositionedNode> children = List.of();
if (compoundNodeIds.contains(rn.getId()) && rn.getChildren() != null) {
children = new ArrayList<>();
for (RouteNode child : rn.getChildren()) {
ElkNode childElk = elkNodeMap.get(child.getId());
if (childElk != null) {
children.add(extractPositionedNode(child, childElk, elkNodeMap,
compoundNodeIds, compoundInfos, rootNode));
}
}
compoundInfos.put(rn.getId(), new CompoundInfo(rn.getId(), colorForType(rn.getType())));
}
return new PositionedNode(
rn.getId(),
rn.getLabel() != null ? rn.getLabel() : "",
rn.getType() != null ? rn.getType().name() : "UNKNOWN",
absX, absY,
elkNode.getWidth(), elkNode.getHeight(),
children
);
}
    // ----------------------------------------------------------------
    // ELK graph helpers
    // ----------------------------------------------------------------
@@ -539,8 +552,8 @@ public class ElkDiagramRenderer implements DiagramRenderer {
        List<PositionedNode> all = new ArrayList<>();
        for (PositionedNode n : nodes) {
            all.add(n);
            if (n.children() != null && !n.children().isEmpty()) {
                all.addAll(allNodes(n.children()));
            }
        }
        return all;


@@ -0,0 +1,11 @@
package com.cameleer3.server.app.dto;
/**
* Request body for command acknowledgment from agents.
* Contains the result status and message of the command execution.
*
* @param status "SUCCESS" or "FAILURE"
* @param message human-readable description of the result
* @param data optional structured JSON data returned by the agent (e.g. expression evaluation results)
*/
public record CommandAckRequest(String status, String message, String data) {}


@@ -0,0 +1,13 @@
package com.cameleer3.server.app.dto;
import io.swagger.v3.oas.annotations.media.Schema;
@Schema(description = "Application log entry from OpenSearch")
public record LogEntryResponse(
@Schema(description = "Log timestamp (ISO-8601)") String timestamp,
@Schema(description = "Log level (INFO, WARN, ERROR, DEBUG)") String level,
@Schema(description = "Logger name") String loggerName,
@Schema(description = "Log message") String message,
@Schema(description = "Thread name") String threadName,
@Schema(description = "Stack trace (if present)") String stackTrace
) {}


@@ -0,0 +1,11 @@
package com.cameleer3.server.app.dto;
/**
* Request body for testing a tap expression against sample data via a live agent.
*
* @param expression the expression to evaluate (e.g. Simple, JSONPath, XPath)
* @param language the expression language identifier
* @param body sample message body to evaluate the expression against
* @param target what the expression targets (e.g. "body", "header", "property")
*/
public record TestExpressionRequest(String expression, String language, String body, String target) {}


@@ -0,0 +1,9 @@
package com.cameleer3.server.app.dto;
/**
* Response from testing a tap expression against sample data.
*
* @param result the evaluation result (null if an error occurred)
* @param error error message if evaluation failed (null on success)
*/
public record TestExpressionResponse(String result, String error) {}


@@ -0,0 +1,57 @@
package com.cameleer3.server.app.interceptor;
import com.cameleer3.server.core.admin.AuditCategory;
import com.cameleer3.server.core.admin.AuditResult;
import com.cameleer3.server.core.admin.AuditService;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import org.springframework.stereotype.Component;
import org.springframework.web.servlet.HandlerInterceptor;
import java.util.Map;
import java.util.Set;
/**
* Safety-net audit interceptor that logs a basic entry for any state-changing
* request (POST/PUT/DELETE) that was not explicitly audited by the controller.
* <p>
* Controllers that call {@link AuditService#log} set the {@code audit.logged}
* request attribute, which this interceptor checks to avoid double-recording.
*/
@Component
public class AuditInterceptor implements HandlerInterceptor {
private static final Set<String> AUDITABLE_METHODS = Set.of("POST", "PUT", "DELETE");
private static final Set<String> EXCLUDED_PATHS = Set.of("/api/v1/search/executions");
private final AuditService auditService;
public AuditInterceptor(AuditService auditService) {
this.auditService = auditService;
}
@Override
public void afterCompletion(HttpServletRequest request, HttpServletResponse response,
Object handler, Exception ex) {
if (!AUDITABLE_METHODS.contains(request.getMethod())) {
return;
}
if (Boolean.TRUE.equals(request.getAttribute("audit.logged"))) {
return;
}
String path = request.getRequestURI();
if (EXCLUDED_PATHS.contains(path)) {
return;
}
AuditResult result = response.getStatus() < 400 ? AuditResult.SUCCESS : AuditResult.FAILURE;
auditService.log(
"HTTP " + request.getMethod() + " " + path,
AuditCategory.INFRA,
path,
Map.of("status", response.getStatus()),
result,
request);
}
}
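The safety net above boils down to three checks before recording an entry. A minimal sketch of that decision, pulled into a pure function for illustration (`shouldAudit` is a hypothetical name; the real interceptor reads these values from `HttpServletRequest`/`HttpServletResponse`):

```java
import java.util.Set;

public class AuditDecision {
    // Mirrors the constants in AuditInterceptor
    private static final Set<String> AUDITABLE_METHODS = Set.of("POST", "PUT", "DELETE");
    private static final Set<String> EXCLUDED_PATHS = Set.of("/api/v1/search/executions");

    /** True when the safety net should record an entry for this request. */
    public static boolean shouldAudit(String method, boolean alreadyLogged, String path) {
        if (!AUDITABLE_METHODS.contains(method)) return false; // reads are never audited
        if (alreadyLogged) return false;                       // controller already logged it
        return !EXCLUDED_PATHS.contains(path);                 // high-volume paths opt out
    }
}
```

A controller that calls `AuditService#log` is expected to set the `audit.logged` request attribute, which maps to the `alreadyLogged` flag here.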


@@ -6,6 +6,8 @@ import com.cameleer3.server.core.search.SearchResult;
import com.cameleer3.server.core.storage.SearchIndex;
import com.cameleer3.server.core.storage.model.ExecutionDocument;
import com.cameleer3.server.core.storage.model.ExecutionDocument.ProcessorDoc;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import jakarta.annotation.PostConstruct;
import org.opensearch.client.json.JsonData;
import org.opensearch.client.opensearch.OpenSearchClient;
@@ -33,6 +35,8 @@ public class OpenSearchIndex implements SearchIndex {
    private static final Logger log = LoggerFactory.getLogger(OpenSearchIndex.class);
    private static final DateTimeFormatter DAY_FMT = DateTimeFormatter.ofPattern("yyyy-MM-dd")
            .withZone(ZoneOffset.UTC);
    private static final ObjectMapper JSON = new ObjectMapper();
    private static final TypeReference<Map<String, String>> STR_MAP = new TypeReference<>() {};
    private final OpenSearchClient client;
    private final String indexPrefix;
@@ -125,6 +129,12 @@ public class OpenSearchIndex implements SearchIndex {
        }
    }

    private static final List<String> HIGHLIGHT_FIELDS = List.of(
            "error_message", "attributes_text",
            "processors.input_body", "processors.output_body",
            "processors.input_headers", "processors.output_headers",
            "processors.attributes_text");

    private org.opensearch.client.opensearch.core.SearchRequest buildSearchRequest(
            SearchRequest request, int size) {
        return org.opensearch.client.opensearch.core.SearchRequest.of(b -> {
@@ -137,6 +147,17 @@ public class OpenSearchIndex implements SearchIndex {
                    .field(request.sortColumn())
                    .order("asc".equalsIgnoreCase(request.sortDir())
                            ? SortOrder.Asc : SortOrder.Desc)));
            // Add highlight when full-text search is active
            if (request.text() != null && !request.text().isBlank()) {
                b.highlight(h -> {
                    for (String field : HIGHLIGHT_FIELDS) {
                        h.fields(field, hf -> hf
                                .fragmentSize(120)
                                .numberOfFragments(1));
                    }
                    return h;
                });
            }
            return b;
        });
    }
@@ -166,6 +187,8 @@ public class OpenSearchIndex implements SearchIndex {
            filter.add(termQuery("agent_id.keyword", request.agentId()));
        if (request.correlationId() != null)
            filter.add(termQuery("correlation_id.keyword", request.correlationId()));
        if (request.application() != null && !request.application().isBlank())
            filter.add(termQuery("application_name.keyword", request.application()));

        // Full-text search across all fields + nested processor fields
        if (request.text() != null && !request.text().isBlank()) {
@@ -176,11 +199,13 @@ public class OpenSearchIndex implements SearchIndex {
            // Search top-level text fields (analyzed match + wildcard for substring)
            textQueries.add(Query.of(q -> q.multiMatch(m -> m
                    .query(text)
                    .fields("error_message", "error_stacktrace", "attributes_text"))));
            textQueries.add(Query.of(q -> q.wildcard(w -> w
                    .field("error_message").value(wildcard).caseInsensitive(true))));
            textQueries.add(Query.of(q -> q.wildcard(w -> w
                    .field("error_stacktrace").value(wildcard).caseInsensitive(true))));
            textQueries.add(Query.of(q -> q.wildcard(w -> w
                    .field("attributes_text").value(wildcard).caseInsensitive(true))));

            // Search nested processor fields (analyzed match + wildcard)
            textQueries.add(Query.of(q -> q.nested(n -> n
@@ -189,14 +214,16 @@ public class OpenSearchIndex implements SearchIndex {
                    .query(text)
                    .fields("processors.input_body", "processors.output_body",
                            "processors.input_headers", "processors.output_headers",
                            "processors.error_message", "processors.error_stacktrace",
                            "processors.attributes_text"))))));
            textQueries.add(Query.of(q -> q.nested(n -> n
                    .path("processors")
                    .query(nq -> nq.bool(nb -> nb.should(
                            wildcardQuery("processors.input_body", wildcard),
                            wildcardQuery("processors.output_body", wildcard),
                            wildcardQuery("processors.input_headers", wildcard),
                            wildcardQuery("processors.output_headers", wildcard),
                            wildcardQuery("processors.attributes_text", wildcard)
                    ).minimumShouldMatch("1"))))));

            // Also try keyword fields for exact matches
@@ -297,6 +324,11 @@ public class OpenSearchIndex implements SearchIndex {
        map.put("duration_ms", doc.durationMs());
        map.put("error_message", doc.errorMessage());
        map.put("error_stacktrace", doc.errorStacktrace());
        if (doc.attributes() != null) {
            Map<String, String> attrs = parseAttributesJson(doc.attributes());
            map.put("attributes", attrs);
            map.put("attributes_text", flattenAttributes(attrs));
        }
        if (doc.processors() != null) {
            map.put("processors", doc.processors().stream().map(p -> {
                Map<String, Object> pm = new LinkedHashMap<>();
@@ -309,6 +341,11 @@ public class OpenSearchIndex implements SearchIndex {
                pm.put("output_body", p.outputBody());
                pm.put("input_headers", p.inputHeaders());
                pm.put("output_headers", p.outputHeaders());
                if (p.attributes() != null) {
                    Map<String, String> pAttrs = parseAttributesJson(p.attributes());
                    pm.put("attributes", pAttrs);
                    pm.put("attributes_text", flattenAttributes(pAttrs));
                }
                return pm;
            }).toList());
        }
@@ -319,6 +356,22 @@ public class OpenSearchIndex implements SearchIndex {
    private ExecutionSummary hitToSummary(Hit<Map> hit) {
        Map<String, Object> src = hit.source();
        if (src == null) return null;
        @SuppressWarnings("unchecked")
        Map<String, String> attributes = src.get("attributes") instanceof Map
                ? new LinkedHashMap<>((Map<String, String>) src.get("attributes")) : null;
        // Merge processor-level attributes (execution-level takes precedence)
        if (src.get("processors") instanceof List<?> procs) {
            for (Object pObj : procs) {
                if (pObj instanceof Map<?, ?> pm && pm.get("attributes") instanceof Map<?, ?> pa) {
                    if (attributes == null) attributes = new LinkedHashMap<>();
                    for (var entry : pa.entrySet()) {
                        attributes.putIfAbsent(
                                String.valueOf(entry.getKey()),
                                String.valueOf(entry.getValue()));
                    }
                }
            }
        }
        return new ExecutionSummary(
                (String) src.get("execution_id"),
                (String) src.get("route_id"),
@@ -330,7 +383,35 @@ public class OpenSearchIndex implements SearchIndex {
                src.get("duration_ms") != null ? ((Number) src.get("duration_ms")).longValue() : 0L,
                (String) src.get("correlation_id"),
                (String) src.get("error_message"),
                null, // diagramContentHash not stored in index
                extractHighlight(hit),
                attributes
        );
    }

    private String extractHighlight(Hit<Map> hit) {
        if (hit.highlight() == null || hit.highlight().isEmpty()) return null;
        for (List<String> fragments : hit.highlight().values()) {
            if (fragments != null && !fragments.isEmpty()) {
                return fragments.get(0);
            }
        }
        return null;
    }

    private static Map<String, String> parseAttributesJson(String json) {
        if (json == null || json.isBlank()) return null;
        try {
            return JSON.readValue(json, STR_MAP);
        } catch (Exception e) {
            return null;
        }
    }

    private static String flattenAttributes(Map<String, String> attrs) {
        if (attrs == null || attrs.isEmpty()) return "";
        return attrs.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining(" "));
    }
}
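The merge in `hitToSummary` copies execution-level attributes first, then folds processor-level attributes in with `putIfAbsent`, so on a key collision the execution-level value wins. A minimal sketch of that precedence rule, with `mergeAttributes` as a hypothetical helper name:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class AttributeMerge {
    /**
     * Execution-level attributes take precedence; processor-level attributes
     * only fill in keys that are not already present.
     */
    public static Map<String, String> mergeAttributes(
            Map<String, String> execution, List<Map<String, String>> processors) {
        Map<String, String> merged = execution == null
                ? new LinkedHashMap<>() : new LinkedHashMap<>(execution);
        for (Map<String, String> p : processors) {
            if (p != null) p.forEach(merged::putIfAbsent); // keep existing values
        }
        return merged;
    }
}
```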


@@ -0,0 +1,223 @@
package com.cameleer3.server.app.search;
import com.cameleer3.common.model.LogEntry;
import com.cameleer3.server.app.dto.LogEntryResponse;
import jakarta.annotation.PostConstruct;
import org.opensearch.client.json.JsonData;
import org.opensearch.client.opensearch.OpenSearchClient;
import org.opensearch.client.opensearch._types.FieldValue;
import org.opensearch.client.opensearch._types.SortOrder;
import org.opensearch.client.opensearch._types.mapping.Property;
import org.opensearch.client.opensearch._types.query_dsl.BoolQuery;
import org.opensearch.client.opensearch._types.query_dsl.Query;
import org.opensearch.client.opensearch.core.BulkRequest;
import org.opensearch.client.opensearch.core.BulkResponse;
import org.opensearch.client.opensearch.core.bulk.BulkResponseItem;
import org.opensearch.client.opensearch.indices.ExistsIndexTemplateRequest;
import org.opensearch.client.opensearch.indices.PutIndexTemplateRequest;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Repository;
import java.io.IOException;
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
@Repository
public class OpenSearchLogIndex {
private static final Logger log = LoggerFactory.getLogger(OpenSearchLogIndex.class);
private static final DateTimeFormatter DAY_FMT = DateTimeFormatter.ofPattern("yyyy-MM-dd")
.withZone(ZoneOffset.UTC);
private final OpenSearchClient client;
private final String indexPrefix;
private final int retentionDays;
public OpenSearchLogIndex(OpenSearchClient client,
@Value("${opensearch.log-index-prefix:logs-}") String indexPrefix,
@Value("${opensearch.log-retention-days:7}") int retentionDays) {
this.client = client;
this.indexPrefix = indexPrefix;
this.retentionDays = retentionDays;
}
@PostConstruct
void init() {
ensureIndexTemplate();
ensureIsmPolicy();
}
private void ensureIndexTemplate() {
String templateName = indexPrefix.replace("-", "") + "-template";
String indexPattern = indexPrefix + "*";
try {
boolean exists = client.indices().existsIndexTemplate(
ExistsIndexTemplateRequest.of(b -> b.name(templateName))).value();
if (!exists) {
client.indices().putIndexTemplate(PutIndexTemplateRequest.of(b -> b
.name(templateName)
.indexPatterns(List.of(indexPattern))
.template(t -> t
.settings(s -> s
.numberOfShards("1")
.numberOfReplicas("1"))
.mappings(m -> m
.properties("@timestamp", Property.of(p -> p.date(d -> d)))
.properties("level", Property.of(p -> p.keyword(k -> k)))
.properties("loggerName", Property.of(p -> p.keyword(k -> k)))
.properties("message", Property.of(p -> p.text(tx -> tx)))
.properties("threadName", Property.of(p -> p.keyword(k -> k)))
.properties("stackTrace", Property.of(p -> p.text(tx -> tx)))
.properties("agentId", Property.of(p -> p.keyword(k -> k)))
.properties("application", Property.of(p -> p.keyword(k -> k)))
.properties("exchangeId", Property.of(p -> p.keyword(k -> k)))))));
log.info("OpenSearch log index template '{}' created", templateName);
}
} catch (IOException e) {
log.error("Failed to create log index template", e);
}
}
private void ensureIsmPolicy() {
String policyId = "logs-retention";
try {
            // ISM policies live behind the _plugins/_ism/policies REST API,
            // which the typed client does not cover. For now, log a reminder —
            // the policy should be created via the OpenSearch API or dashboard.
log.info("Log retention policy: indices matching '{}*' should be deleted after {} days. " +
"Ensure ISM policy '{}' is configured in OpenSearch.", indexPrefix, retentionDays, policyId);
} catch (Exception e) {
log.warn("Could not verify ISM policy for log retention", e);
}
}
public List<LogEntryResponse> search(String application, String agentId, String level,
String query, String exchangeId,
Instant from, Instant to, int limit) {
try {
BoolQuery.Builder bool = new BoolQuery.Builder();
bool.must(Query.of(q -> q.term(t -> t.field("application").value(FieldValue.of(application)))));
if (agentId != null && !agentId.isEmpty()) {
bool.must(Query.of(q -> q.term(t -> t.field("agentId").value(FieldValue.of(agentId)))));
}
if (exchangeId != null && !exchangeId.isEmpty()) {
// Match on top-level field (new records) or MDC nested field (old records)
bool.must(Query.of(q -> q.bool(b -> b
.should(Query.of(s -> s.term(t -> t.field("exchangeId.keyword").value(FieldValue.of(exchangeId)))))
.should(Query.of(s -> s.term(t -> t.field("mdc.camel.exchangeId.keyword").value(FieldValue.of(exchangeId)))))
.minimumShouldMatch("1"))));
}
if (level != null && !level.isEmpty()) {
bool.must(Query.of(q -> q.term(t -> t.field("level").value(FieldValue.of(level.toUpperCase())))));
}
if (query != null && !query.isEmpty()) {
bool.must(Query.of(q -> q.match(m -> m.field("message").query(FieldValue.of(query)))));
}
if (from != null || to != null) {
bool.must(Query.of(q -> q.range(r -> {
r.field("@timestamp");
if (from != null) r.gte(JsonData.of(from.toString()));
if (to != null) r.lte(JsonData.of(to.toString()));
return r;
})));
}
var response = client.search(s -> s
.index(indexPrefix + "*")
.query(Query.of(q -> q.bool(bool.build())))
.sort(so -> so.field(f -> f.field("@timestamp").order(SortOrder.Desc)))
.size(limit), Map.class);
List<LogEntryResponse> results = new ArrayList<>();
for (var hit : response.hits().hits()) {
@SuppressWarnings("unchecked")
Map<String, Object> src = (Map<String, Object>) hit.source();
if (src == null) continue;
results.add(new LogEntryResponse(
str(src, "@timestamp"),
str(src, "level"),
str(src, "loggerName"),
str(src, "message"),
str(src, "threadName"),
str(src, "stackTrace")));
}
return results;
} catch (IOException e) {
log.error("Failed to search log entries for application={}", application, e);
return List.of();
}
}
private static String str(Map<String, Object> map, String key) {
Object v = map.get(key);
return v != null ? v.toString() : null;
}
public void indexBatch(String agentId, String application, List<LogEntry> entries) {
if (entries == null || entries.isEmpty()) {
return;
}
try {
BulkRequest.Builder bulkBuilder = new BulkRequest.Builder();
for (LogEntry entry : entries) {
                String indexName = indexPrefix + DAY_FMT.format(
                        entry.getTimestamp() != null ? entry.getTimestamp() : Instant.now());
Map<String, Object> doc = toMap(entry, agentId, application);
bulkBuilder.operations(op -> op
.index(idx -> idx
.index(indexName)
.document(doc)));
}
BulkResponse response = client.bulk(bulkBuilder.build());
if (response.errors()) {
int errorCount = 0;
for (BulkResponseItem item : response.items()) {
if (item.error() != null) {
errorCount++;
if (errorCount == 1) {
log.error("Bulk log index error: {}", item.error().reason());
}
}
}
log.error("Bulk log indexing had {} error(s) out of {} entries", errorCount, entries.size());
} else {
log.debug("Indexed {} log entries for agent={}, app={}", entries.size(), agentId, application);
}
} catch (IOException e) {
log.error("Failed to bulk index {} log entries for agent={}", entries.size(), agentId, e);
}
}
private Map<String, Object> toMap(LogEntry entry, String agentId, String application) {
Map<String, Object> doc = new LinkedHashMap<>();
doc.put("@timestamp", entry.getTimestamp() != null ? entry.getTimestamp().toString() : null);
doc.put("level", entry.getLevel());
doc.put("loggerName", entry.getLoggerName());
doc.put("message", entry.getMessage());
doc.put("threadName", entry.getThreadName());
doc.put("stackTrace", entry.getStackTrace());
doc.put("mdc", entry.getMdc());
doc.put("agentId", agentId);
doc.put("application", application);
if (entry.getMdc() != null) {
String exId = entry.getMdc().get("camel.exchangeId");
if (exId != null) doc.put("exchangeId", exId);
}
return doc;
}
}
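`indexBatch` routes each entry into one index per UTC day (`logs-2026-03-27`, and so on), which is what makes the retention policy a simple matter of deleting old daily indices. The naming rule can be sketched as follows (`indexNameFor` is a hypothetical helper mirroring the logic inside `indexBatch`):

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class LogIndexNaming {
    // Same formatter as OpenSearchLogIndex: day granularity, pinned to UTC
    private static final DateTimeFormatter DAY_FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd").withZone(ZoneOffset.UTC);

    /** Daily index name for a log entry; falls back to "now" when unset. */
    public static String indexNameFor(String prefix, Instant timestamp) {
        return prefix + DAY_FMT.format(timestamp != null ? timestamp : Instant.now());
    }
}
```

Pinning the formatter to UTC matters: without `withZone`, formatting an `Instant` throws, and a server-local zone would split a day's logs across two indices at the zone boundary.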


@@ -159,6 +159,9 @@ public class OidcAuthController {
            throw e;
        } catch (Exception e) {
            log.error("OIDC callback failed: {}", e.getMessage(), e);
            auditService.log("unknown", "login_oidc", AuditCategory.AUTH, null,
                    Map.of("reason", e.getMessage() != null ? e.getMessage() : "unknown"),
                    AuditResult.FAILURE, httpRequest);
            throw new ResponseStatusException(HttpStatus.UNAUTHORIZED,
                    "OIDC authentication failed: " + e.getMessage());
        }


@@ -77,6 +77,10 @@ public class SecurityConfig {
                .requestMatchers(HttpMethod.GET, "/api/v1/search/**").hasAnyRole("VIEWER", "OPERATOR", "ADMIN", "AGENT")
                .requestMatchers(HttpMethod.POST, "/api/v1/search/**").hasAnyRole("VIEWER", "OPERATOR", "ADMIN")
                // Application config endpoints
                .requestMatchers(HttpMethod.GET, "/api/v1/config/*").hasAnyRole("VIEWER", "OPERATOR", "ADMIN", "AGENT")
                .requestMatchers(HttpMethod.PUT, "/api/v1/config/*").hasAnyRole("OPERATOR", "ADMIN")
                // Read-only data endpoints — viewer+
                .requestMatchers(HttpMethod.GET, "/api/v1/executions/**").hasAnyRole("VIEWER", "OPERATOR", "ADMIN")
                .requestMatchers(HttpMethod.GET, "/api/v1/diagrams/**").hasAnyRole("VIEWER", "OPERATOR", "ADMIN")


@@ -123,7 +123,8 @@ public class UiAuthController {
    @ApiResponse(responseCode = "200", description = "Token refreshed")
    @ApiResponse(responseCode = "401", description = "Invalid refresh token",
            content = @Content(schema = @Schema(implementation = ErrorResponse.class)))
    public ResponseEntity<AuthTokenResponse> refresh(@RequestBody RefreshRequest request,
                                                     HttpServletRequest httpRequest) {
        try {
            JwtValidationResult result = jwtService.validateRefreshToken(request.refreshToken());
            if (!result.subject().startsWith("user:")) {
@@ -138,6 +139,7 @@ public class UiAuthController {
            String displayName = userRepository.findById(result.subject())
                    .map(UserInfo::displayName)
                    .orElse(result.subject());
            auditService.log(result.subject(), "token_refresh", AuditCategory.AUTH, null, null, AuditResult.SUCCESS, httpRequest);
            return ResponseEntity.ok(new AuthTokenResponse(accessToken, refreshToken, displayName, null));
        } catch (ResponseStatusException e) {
            throw e;


@@ -0,0 +1,77 @@
package com.cameleer3.server.app.storage;

import com.cameleer3.common.model.ApplicationConfig;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;

import java.util.List;
import java.util.Optional;

@Repository
public class PostgresApplicationConfigRepository {

    private final JdbcTemplate jdbc;
    private final ObjectMapper objectMapper;

    public PostgresApplicationConfigRepository(JdbcTemplate jdbc, ObjectMapper objectMapper) {
        this.jdbc = jdbc;
        this.objectMapper = objectMapper;
    }

    public List<ApplicationConfig> findAll() {
        return jdbc.query(
                "SELECT config_val, version, updated_at FROM application_config ORDER BY application",
                (rs, rowNum) -> {
                    try {
                        ApplicationConfig cfg = objectMapper.readValue(rs.getString("config_val"), ApplicationConfig.class);
                        cfg.setVersion(rs.getInt("version"));
                        cfg.setUpdatedAt(rs.getTimestamp("updated_at").toInstant());
                        return cfg;
                    } catch (JsonProcessingException e) {
                        throw new RuntimeException("Failed to deserialize application config", e);
                    }
                });
    }

    public Optional<ApplicationConfig> findByApplication(String application) {
        List<ApplicationConfig> results = jdbc.query(
                "SELECT config_val, version, updated_at FROM application_config WHERE application = ?",
                (rs, rowNum) -> {
                    try {
                        ApplicationConfig cfg = objectMapper.readValue(rs.getString("config_val"), ApplicationConfig.class);
                        cfg.setVersion(rs.getInt("version"));
                        cfg.setUpdatedAt(rs.getTimestamp("updated_at").toInstant());
                        return cfg;
                    } catch (JsonProcessingException e) {
                        throw new RuntimeException("Failed to deserialize application config", e);
                    }
                },
                application);
        return results.isEmpty() ? Optional.empty() : Optional.of(results.get(0));
    }

    public ApplicationConfig save(String application, ApplicationConfig config, String updatedBy) {
        String json;
        try {
            json = objectMapper.writeValueAsString(config);
        } catch (JsonProcessingException e) {
            throw new RuntimeException("Failed to serialize application config", e);
        }
        // Upsert: insert or update, auto-increment version
        jdbc.update("""
                INSERT INTO application_config (application, config_val, version, updated_at, updated_by)
                VALUES (?, ?::jsonb, 1, now(), ?)
                ON CONFLICT (application) DO UPDATE SET
                    config_val = EXCLUDED.config_val,
                    version = application_config.version + 1,
                    updated_at = now(),
                    updated_by = EXCLUDED.updated_by
                """,
                application, json, updatedBy);
        return findByApplication(application).orElseThrow();
    }
}

View File
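The `ON CONFLICT` clause in `save` above inserts new rows at version 1 and bumps the stored version on every overwrite. A minimal in-memory sketch of that merge rule (hypothetical class and names, not part of the repository itself):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical in-memory sketch of the ON CONFLICT upsert semantics used by
// PostgresApplicationConfigRepository.save: the first save of an application
// inserts at version 1; every later save overwrites the config JSON and
// increments the stored version.
public class VersionedUpsertSketch {
    public record Entry(String configJson, int version) {}

    private final Map<String, Entry> table = new HashMap<>();

    public Entry save(String application, String configJson) {
        // Map.merge mirrors INSERT ... ON CONFLICT DO UPDATE:
        // absent key -> insert at version 1; present key -> bump version.
        return table.merge(application,
                new Entry(configJson, 1),
                (existing, incoming) -> new Entry(incoming.configJson(), existing.version() + 1));
    }

    public static void main(String[] args) {
        VersionedUpsertSketch store = new VersionedUpsertSketch();
        Entry first = store.save("app-a", "{\"bodySizeLimit\":16384}");
        Entry second = store.save("app-a", "{\"bodySizeLimit\":32768}");
        if (first.version() != 1 || second.version() != 2) {
            throw new AssertionError("version did not increment as expected");
        }
        System.out.println("second save stored version " + second.version());
    }
}
```

Because the version counter lives in the database row rather than in the incoming payload, concurrent savers cannot reset it; agents can compare versions cheaply when a CONFIG_UPDATE arrives.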

@@ -16,6 +16,7 @@ import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.Collections;
+import java.util.HashMap;
import java.util.HexFormat;
import java.util.List;
import java.util.Map;
@@ -33,8 +34,8 @@ public class PostgresDiagramStore implements DiagramStore {
    private static final Logger log = LoggerFactory.getLogger(PostgresDiagramStore.class);
    private static final String INSERT_SQL = """
-           INSERT INTO route_diagrams (content_hash, route_id, agent_id, definition)
-           VALUES (?, ?, ?, ?::jsonb)
+           INSERT INTO route_diagrams (content_hash, route_id, agent_id, application_name, definition)
+           VALUES (?, ?, ?, ?, ?::jsonb)
            ON CONFLICT (content_hash) DO NOTHING
            """;
@@ -62,11 +63,12 @@ public class PostgresDiagramStore implements DiagramStore {
        try {
            RouteGraph graph = diagram.graph();
            String agentId = diagram.agentId() != null ? diagram.agentId() : "";
+           String applicationName = diagram.applicationName() != null ? diagram.applicationName() : "";
            String json = objectMapper.writeValueAsString(graph);
            String contentHash = sha256Hex(json);
            String routeId = graph.getRouteId() != null ? graph.getRouteId() : "";
-           jdbcTemplate.update(INSERT_SQL, contentHash, routeId, agentId, json);
+           jdbcTemplate.update(INSERT_SQL, contentHash, routeId, agentId, applicationName, json);
            log.debug("Stored diagram for route={} agent={} with hash={}", routeId, agentId, contentHash);
        } catch (JsonProcessingException e) {
            throw new RuntimeException("Failed to serialize RouteGraph to JSON", e);
@@ -116,6 +118,21 @@ public class PostgresDiagramStore implements DiagramStore {
        return Optional.of((String) rows.get(0).get("content_hash"));
    }

+   @Override
+   public Map<String, String> findProcessorRouteMapping(String applicationName) {
+       Map<String, String> mapping = new HashMap<>();
+       jdbcTemplate.query("""
+               SELECT DISTINCT rd.route_id, node_elem->>'id' AS processor_id
+               FROM route_diagrams rd,
+                    jsonb_array_elements(rd.definition::jsonb->'nodes') AS node_elem
+               WHERE rd.application_name = ?
+                 AND node_elem->>'id' IS NOT NULL
+               """,
+               rs -> { mapping.put(rs.getString("processor_id"), rs.getString("route_id")); },
+               applicationName);
+       return mapping;
+   }
+
    static String sha256Hex(String input) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");

View File
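`findProcessorRouteMapping` above flattens the node lists of every stored diagram for one application into a single processorId → routeId map. A small in-memory sketch of that flattening (hypothetical types; the real query does this in SQL via `jsonb_array_elements`, and a processor id that appears in several rows keeps the last row seen, since `Map.put` overwrites silently):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the processorId -> routeId mapping built by
// findProcessorRouteMapping from the stored diagram JSON.
public class ProcessorRouteMappingSketch {
    public record DiagramRow(String routeId, List<String> nodeIds) {}

    public static Map<String, String> buildMapping(List<DiagramRow> rows) {
        Map<String, String> mapping = new HashMap<>();
        for (DiagramRow row : rows) {
            for (String nodeId : row.nodeIds()) {
                if (nodeId != null) {           // mirrors the IS NOT NULL filter
                    mapping.put(nodeId, row.routeId());
                }
            }
        }
        return mapping;
    }

    public static void main(String[] args) {
        Map<String, String> m = buildMapping(List.of(
                new DiagramRow("route-a", List.of("p1", "p2")),
                new DiagramRow("route-b", List.of("p3"))));
        System.out.println(m.get("p1") + " / " + m.get("p3"));
    }
}
```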

@@ -27,8 +27,9 @@ public class PostgresExecutionStore implements ExecutionStore {
            INSERT INTO executions (execution_id, route_id, agent_id, application_name,
                status, correlation_id, exchange_id, start_time, end_time,
                duration_ms, error_message, error_stacktrace, diagram_content_hash,
-               created_at, updated_at)
-           VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, now(), now())
+               engine_level, input_body, output_body, input_headers, output_headers,
+               attributes, created_at, updated_at)
+           VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?::jsonb, ?::jsonb, ?::jsonb, now(), now())
            ON CONFLICT (execution_id, start_time) DO UPDATE SET
                status = CASE
                    WHEN EXCLUDED.status IN ('COMPLETED', 'FAILED')
@@ -42,6 +43,12 @@ public class PostgresExecutionStore implements ExecutionStore {
                error_message = COALESCE(EXCLUDED.error_message, executions.error_message),
                error_stacktrace = COALESCE(EXCLUDED.error_stacktrace, executions.error_stacktrace),
                diagram_content_hash = COALESCE(EXCLUDED.diagram_content_hash, executions.diagram_content_hash),
+               engine_level = COALESCE(EXCLUDED.engine_level, executions.engine_level),
+               input_body = COALESCE(EXCLUDED.input_body, executions.input_body),
+               output_body = COALESCE(EXCLUDED.output_body, executions.output_body),
+               input_headers = COALESCE(EXCLUDED.input_headers, executions.input_headers),
+               output_headers = COALESCE(EXCLUDED.output_headers, executions.output_headers),
+               attributes = COALESCE(EXCLUDED.attributes, executions.attributes),
                updated_at = now()
            """,
            execution.executionId(), execution.routeId(), execution.agentId(),
@@ -50,7 +57,11 @@ public class PostgresExecutionStore implements ExecutionStore {
            Timestamp.from(execution.startTime()),
            execution.endTime() != null ? Timestamp.from(execution.endTime()) : null,
            execution.durationMs(), execution.errorMessage(),
-           execution.errorStacktrace(), execution.diagramContentHash());
+           execution.errorStacktrace(), execution.diagramContentHash(),
+           execution.engineLevel(),
+           execution.inputBody(), execution.outputBody(),
+           execution.inputHeaders(), execution.outputHeaders(),
+           execution.attributes());
    }

    @Override
@@ -59,10 +70,11 @@ public class PostgresExecutionStore implements ExecutionStore {
                           List<ProcessorRecord> processors) {
        jdbc.batchUpdate("""
            INSERT INTO processor_executions (execution_id, processor_id, processor_type,
-               diagram_node_id, application_name, route_id, depth, parent_processor_id,
+               application_name, route_id, depth, parent_processor_id,
                status, start_time, end_time, duration_ms, error_message, error_stacktrace,
-               input_body, output_body, input_headers, output_headers)
-           VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?::jsonb, ?::jsonb)
+               input_body, output_body, input_headers, output_headers, attributes,
+               loop_index, loop_size, split_index, split_size, multicast_index)
+           VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?::jsonb, ?::jsonb, ?::jsonb, ?, ?, ?, ?, ?)
            ON CONFLICT (execution_id, processor_id, start_time) DO UPDATE SET
                status = EXCLUDED.status,
                end_time = COALESCE(EXCLUDED.end_time, processor_executions.end_time),
@@ -72,16 +84,25 @@ public class PostgresExecutionStore implements ExecutionStore {
                input_body = COALESCE(EXCLUDED.input_body, processor_executions.input_body),
                output_body = COALESCE(EXCLUDED.output_body, processor_executions.output_body),
                input_headers = COALESCE(EXCLUDED.input_headers, processor_executions.input_headers),
-               output_headers = COALESCE(EXCLUDED.output_headers, processor_executions.output_headers)
+               output_headers = COALESCE(EXCLUDED.output_headers, processor_executions.output_headers),
+               attributes = COALESCE(EXCLUDED.attributes, processor_executions.attributes),
+               loop_index = COALESCE(EXCLUDED.loop_index, processor_executions.loop_index),
+               loop_size = COALESCE(EXCLUDED.loop_size, processor_executions.loop_size),
+               split_index = COALESCE(EXCLUDED.split_index, processor_executions.split_index),
+               split_size = COALESCE(EXCLUDED.split_size, processor_executions.split_size),
+               multicast_index = COALESCE(EXCLUDED.multicast_index, processor_executions.multicast_index)
            """,
            processors.stream().map(p -> new Object[]{
                p.executionId(), p.processorId(), p.processorType(),
-               p.diagramNodeId(), p.applicationName(), p.routeId(),
+               p.applicationName(), p.routeId(),
                p.depth(), p.parentProcessorId(), p.status(),
                Timestamp.from(p.startTime()),
                p.endTime() != null ? Timestamp.from(p.endTime()) : null,
                p.durationMs(), p.errorMessage(), p.errorStacktrace(),
-               p.inputBody(), p.outputBody(), p.inputHeaders(), p.outputHeaders()
+               p.inputBody(), p.outputBody(), p.inputHeaders(), p.outputHeaders(),
+               p.attributes(),
+               p.loopIndex(), p.loopSize(), p.splitIndex(), p.splitSize(),
+               p.multicastIndex()
            }).toList());
    }
@@ -100,6 +121,13 @@ public class PostgresExecutionStore implements ExecutionStore {
            PROCESSOR_MAPPER, executionId);
    }

+   @Override
+   public Optional<ProcessorRecord> findProcessorById(String executionId, String processorId) {
+       String sql = "SELECT * FROM processor_executions WHERE execution_id = ? AND processor_id = ? LIMIT 1";
+       List<ProcessorRecord> results = jdbc.query(sql, PROCESSOR_MAPPER, executionId, processorId);
+       return results.isEmpty() ? Optional.empty() : Optional.of(results.get(0));
+   }
+
    private static final RowMapper<ExecutionRecord> EXECUTION_MAPPER = (rs, rowNum) ->
        new ExecutionRecord(
            rs.getString("execution_id"), rs.getString("route_id"),
@@ -109,12 +137,16 @@ public class PostgresExecutionStore implements ExecutionStore {
            toInstant(rs, "start_time"), toInstant(rs, "end_time"),
            rs.getObject("duration_ms") != null ? rs.getLong("duration_ms") : null,
            rs.getString("error_message"), rs.getString("error_stacktrace"),
-           rs.getString("diagram_content_hash"));
+           rs.getString("diagram_content_hash"),
+           rs.getString("engine_level"),
+           rs.getString("input_body"), rs.getString("output_body"),
+           rs.getString("input_headers"), rs.getString("output_headers"),
+           rs.getString("attributes"));

    private static final RowMapper<ProcessorRecord> PROCESSOR_MAPPER = (rs, rowNum) ->
        new ProcessorRecord(
            rs.getString("execution_id"), rs.getString("processor_id"),
-           rs.getString("processor_type"), rs.getString("diagram_node_id"),
+           rs.getString("processor_type"),
            rs.getString("application_name"), rs.getString("route_id"),
            rs.getInt("depth"), rs.getString("parent_processor_id"),
            rs.getString("status"),
@@ -122,7 +154,13 @@ public class PostgresExecutionStore implements ExecutionStore {
            rs.getObject("duration_ms") != null ? rs.getLong("duration_ms") : null,
            rs.getString("error_message"), rs.getString("error_stacktrace"),
            rs.getString("input_body"), rs.getString("output_body"),
-           rs.getString("input_headers"), rs.getString("output_headers"));
+           rs.getString("input_headers"), rs.getString("output_headers"),
+           rs.getString("attributes"),
+           rs.getObject("loop_index") != null ? rs.getInt("loop_index") : null,
+           rs.getObject("loop_size") != null ? rs.getInt("loop_size") : null,
+           rs.getObject("split_index") != null ? rs.getInt("split_index") : null,
+           rs.getObject("split_size") != null ? rs.getInt("split_size") : null,
+           rs.getObject("multicast_index") != null ? rs.getInt("multicast_index") : null);

    private static Instant toInstant(ResultSet rs, String column) throws SQLException {
        Timestamp ts = rs.getTimestamp(column);

View File
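Both upserts above lean on `COALESCE(EXCLUDED.col, table.col)` so that a later partial update (for example, a completion event arriving after the initial RUNNING row) never erases values that were already captured. A minimal sketch of that merge rule (hypothetical helper, not part of the store's API):

```java
// Hypothetical helper mirroring the SQL COALESCE(EXCLUDED.col, table.col)
// rule used in the execution/processor upserts: a non-null incoming value
// replaces the stored one, a null incoming value keeps it.
public class CoalesceMergeSketch {
    public static <T> T coalesce(T incoming, T existing) {
        return incoming != null ? incoming : existing;
    }

    public static void main(String[] args) {
        // A completion event without bodies must not wipe the captured input body.
        if (!"captured input body".equals(coalesce(null, "captured input body"))) {
            throw new AssertionError("null incoming value erased the stored one");
        }
        // A concrete new value (e.g. the final output body) replaces the old one.
        if (!"final output".equals(coalesce("final output", null))) {
            throw new AssertionError("non-null incoming value should win");
        }
        System.out.println("COALESCE merge rule holds");
    }
}
```

The one deliberate exception in the SQL is `status`, which uses a CASE expression instead, so a terminal COMPLETED/FAILED status can still overwrite a non-terminal one.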

@@ -42,6 +42,8 @@ opensearch:
  index-prefix: ${CAMELEER_OPENSEARCH_INDEX_PREFIX:executions-}
  queue-size: ${CAMELEER_OPENSEARCH_QUEUE_SIZE:10000}
  debounce-ms: ${CAMELEER_OPENSEARCH_DEBOUNCE_MS:2000}
+ log-index-prefix: ${CAMELEER_LOG_INDEX_PREFIX:logs-}
+ log-retention-days: ${CAMELEER_LOG_RETENTION_DAYS:7}
cameleer:
  body-size-limit: ${CAMELEER_BODY_SIZE_LIMIT:16384}

View File

@@ -0,0 +1,9 @@
-- Add engine level and route-level snapshot columns to executions table.
-- Required for REGULAR engine level where route-level payloads exist but
-- no processor execution records are created.
ALTER TABLE executions ADD COLUMN IF NOT EXISTS engine_level VARCHAR(16);
ALTER TABLE executions ADD COLUMN IF NOT EXISTS input_body TEXT;
ALTER TABLE executions ADD COLUMN IF NOT EXISTS output_body TEXT;
ALTER TABLE executions ADD COLUMN IF NOT EXISTS input_headers JSONB;
ALTER TABLE executions ADD COLUMN IF NOT EXISTS output_headers JSONB;

View File

@@ -0,0 +1,9 @@
-- Per-application configuration for agent observability settings.
-- Agents download this at startup and receive updates via SSE CONFIG_UPDATE.
CREATE TABLE application_config (
application TEXT PRIMARY KEY,
config_val JSONB NOT NULL,
version INTEGER NOT NULL DEFAULT 1,
updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_by TEXT
);

View File

@@ -0,0 +1,2 @@
ALTER TABLE executions ADD COLUMN IF NOT EXISTS attributes JSONB;
ALTER TABLE processor_executions ADD COLUMN IF NOT EXISTS attributes JSONB;

View File

@@ -0,0 +1 @@
ALTER TABLE processor_executions DROP COLUMN IF EXISTS diagram_node_id;

View File

@@ -0,0 +1,2 @@
ALTER TABLE route_diagrams ADD COLUMN IF NOT EXISTS application_name TEXT NOT NULL DEFAULT '';
CREATE INDEX IF NOT EXISTS idx_diagrams_application ON route_diagrams (application_name);

View File

@@ -0,0 +1,5 @@
ALTER TABLE processor_executions ADD COLUMN IF NOT EXISTS loop_index INTEGER;
ALTER TABLE processor_executions ADD COLUMN IF NOT EXISTS loop_size INTEGER;
ALTER TABLE processor_executions ADD COLUMN IF NOT EXISTS split_index INTEGER;
ALTER TABLE processor_executions ADD COLUMN IF NOT EXISTS split_size INTEGER;
ALTER TABLE processor_executions ADD COLUMN IF NOT EXISTS multicast_index INTEGER;

View File
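The migration above adds the iteration columns (loop/split index and size, multicast index) that the per-compound iteration stepper reads. A hypothetical sketch of how a stepper can use them: filter a compound's processor records down to one selected iteration (record shape and names are illustrative only, not the real frontend or store types):

```java
import java.util.List;

// Hypothetical sketch: select the processor rows belonging to one iteration
// of a loop, using the new loop_index column. The same idea applies to
// split_index and multicast_index.
public class IterationFilterSketch {
    public record ProcRow(String processorId, Integer loopIndex) {}

    public static List<ProcRow> forIteration(List<ProcRow> rows, int selected) {
        return rows.stream()
                .filter(r -> r.loopIndex() != null && r.loopIndex() == selected)
                .toList();
    }

    public static void main(String[] args) {
        List<ProcRow> rows = List.of(
                new ProcRow("log-1", 0),
                new ProcRow("log-1", 1),
                new ProcRow("to-1", 1));
        System.out.println(forIteration(rows, 1));
    }
}
```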

@@ -50,11 +50,11 @@ class BackpressureIT extends AbstractPostgresIT {
        // Fill the metrics buffer completely with a batch of 5
        String batchJson = """
            [
-             {"agentId":"bp-agent","timestamp":"2026-03-11T10:00:00Z","metrics":{}},
-             {"agentId":"bp-agent","timestamp":"2026-03-11T10:00:01Z","metrics":{}},
-             {"agentId":"bp-agent","timestamp":"2026-03-11T10:00:02Z","metrics":{}},
-             {"agentId":"bp-agent","timestamp":"2026-03-11T10:00:03Z","metrics":{}},
-             {"agentId":"bp-agent","timestamp":"2026-03-11T10:00:04Z","metrics":{}}
+             {"agentId":"bp-agent","collectedAt":"2026-03-11T10:00:00Z","metricName":"test.metric","metricValue":1.0,"tags":{}},
+             {"agentId":"bp-agent","collectedAt":"2026-03-11T10:00:01Z","metricName":"test.metric","metricValue":2.0,"tags":{}},
+             {"agentId":"bp-agent","collectedAt":"2026-03-11T10:00:02Z","metricName":"test.metric","metricValue":3.0,"tags":{}},
+             {"agentId":"bp-agent","collectedAt":"2026-03-11T10:00:03Z","metricName":"test.metric","metricValue":4.0,"tags":{}},
+             {"agentId":"bp-agent","collectedAt":"2026-03-11T10:00:04Z","metricName":"test.metric","metricValue":5.0,"tags":{}}
            ]
            """;
@@ -66,7 +66,7 @@ class BackpressureIT extends AbstractPostgresIT {
        // Now buffer should be full -- next POST should get 503
        String overflowJson = """
-           [{"agentId":"bp-agent","timestamp":"2026-03-11T10:00:05Z","metrics":{}}]
+           [{"agentId":"bp-agent","collectedAt":"2026-03-11T10:00:05Z","metricName":"test.metric","metricValue":6.0,"tags":{}}]
            """;
        ResponseEntity<String> response = restTemplate.postForEntity(

View File

@@ -65,7 +65,6 @@ class DetailControllerIT extends AbstractPostgresIT {
            "startTime": "2026-03-10T10:00:00Z",
            "endTime": "2026-03-10T10:00:01Z",
            "durationMs": 1000,
-           "diagramNodeId": "node-root",
            "inputBody": "root-input-body",
            "outputBody": "root-output-body",
            "inputHeaders": {"Content-Type": "application/json"},
@@ -78,7 +77,6 @@ class DetailControllerIT extends AbstractPostgresIT {
            "startTime": "2026-03-10T10:00:00.100Z",
            "endTime": "2026-03-10T10:00:00.200Z",
            "durationMs": 100,
-           "diagramNodeId": "node-child1",
            "inputBody": "child1-input",
            "outputBody": "child1-output",
            "inputHeaders": {},
@@ -91,7 +89,6 @@ class DetailControllerIT extends AbstractPostgresIT {
            "startTime": "2026-03-10T10:00:00.200Z",
            "endTime": "2026-03-10T10:00:00.800Z",
            "durationMs": 600,
-           "diagramNodeId": "node-child2",
            "inputBody": "child2-input",
            "outputBody": "child2-output",
            "inputHeaders": {},
@@ -104,7 +101,6 @@ class DetailControllerIT extends AbstractPostgresIT {
            "startTime": "2026-03-10T10:00:00.300Z",
            "endTime": "2026-03-10T10:00:00.700Z",
            "durationMs": 400,
-           "diagramNodeId": "node-gc",
            "inputBody": "gc-input",
            "outputBody": "gc-output",
            "inputHeaders": {"X-GC": "true"},

View File

@@ -39,8 +39,7 @@ class DiagramControllerIT extends AbstractPostgresIT {
                "description": "Test route",
                "version": 1,
                "nodes": [],
-               "edges": [],
-               "processorNodeMapping": {}
+               "edges": []
            }
            """;
@@ -60,8 +59,7 @@ class DiagramControllerIT extends AbstractPostgresIT {
                "description": "Flush test",
                "version": 1,
                "nodes": [],
-               "edges": [],
-               "processorNodeMapping": {}
+               "edges": []
            }
            """;

View File

@@ -53,8 +53,7 @@ class DiagramRenderControllerIT extends AbstractPostgresIT {
                "edges": [
                    {"source": "n1", "target": "n2", "edgeType": "FLOW"},
                    {"source": "n2", "target": "n3", "edgeType": "FLOW"}
-               ],
-               "processorNodeMapping": {}
+               ]
            }
            """;

View File

@@ -35,7 +35,8 @@ class OpenSearchIndexIT extends AbstractPostgresIT {
            now, now.plusMillis(100), 100L,
            "OrderNotFoundException: order-12345 not found", null,
            List.of(new ProcessorDoc("proc-1", "log", "COMPLETED",
-               null, null, "request body with customer-99", null, null, null)));
+               null, null, "request body with customer-99", null, null, null, null)),
+           null);
        searchIndex.index(doc);
        refreshOpenSearchIndices();
@@ -60,7 +61,8 @@ class OpenSearchIndexIT extends AbstractPostgresIT {
            "COMPLETED", null, null,
            now, now.plusMillis(50), 50L, null, null,
            List.of(new ProcessorDoc("proc-1", "bean", "COMPLETED",
-               null, null, "UniquePayloadIdentifier12345", null, null, null)));
+               null, null, "UniquePayloadIdentifier12345", null, null, null, null)),
+           null);
        searchIndex.index(doc);
        refreshOpenSearchIndices();

View File

@@ -46,8 +46,7 @@ class DiagramLinkingIT extends AbstractPostgresIT {
                ],
                "edges": [
                    {"source": "n1", "target": "n2", "edgeType": "FLOW"}
-               ],
-               "processorNodeMapping": {}
+               ]
            }
            """;
"""; """;

View File

@@ -55,8 +55,7 @@ class IngestionSchemaIT extends AbstractPostgresIT {
            "startTime": "2026-03-11T10:00:00Z",
            "endTime": "2026-03-11T10:00:00.500Z",
            "durationMs": 500,
-           "diagramNodeId": "node-root",
            "inputBody": "root-input",
            "outputBody": "root-output",
            "inputHeaders": {"Content-Type": "application/json"},
            "outputHeaders": {"X-Result": "ok"},
@@ -68,8 +67,7 @@ class IngestionSchemaIT extends AbstractPostgresIT {
            "startTime": "2026-03-11T10:00:00.100Z",
            "endTime": "2026-03-11T10:00:00.400Z",
            "durationMs": 300,
-           "diagramNodeId": "node-child",
            "inputBody": "child-input",
            "outputBody": "child-output",
            "children": [
                {
@@ -79,8 +77,7 @@ class IngestionSchemaIT extends AbstractPostgresIT {
                    "startTime": "2026-03-11T10:00:00.200Z",
                    "endTime": "2026-03-11T10:00:00.300Z",
                    "durationMs": 100,
-                   "diagramNodeId": "node-grandchild",
                    "children": []
                }
            ]
        }
@@ -101,7 +98,7 @@ class IngestionSchemaIT extends AbstractPostgresIT {
        // Verify processors were flattened into processor_executions
        List<Map<String, Object>> processors = jdbcTemplate.queryForList(
            "SELECT processor_id, processor_type, depth, parent_processor_id, " +
-           "diagram_node_id, input_body, output_body, input_headers " +
+           "input_body, output_body, input_headers " +
            "FROM processor_executions WHERE execution_id = 'ex-tree-1' " +
            "ORDER BY depth, processor_id");
        assertThat(processors).hasSize(3);
@@ -110,7 +107,6 @@ class IngestionSchemaIT extends AbstractPostgresIT {
        assertThat(processors.get(0).get("processor_id")).isEqualTo("root-proc");
        assertThat(((Number) processors.get(0).get("depth")).intValue()).isEqualTo(0);
        assertThat(processors.get(0).get("parent_processor_id")).isNull();
-       assertThat(processors.get(0).get("diagram_node_id")).isEqualTo("node-root");
        assertThat(processors.get(0).get("input_body")).isEqualTo("root-input");
        assertThat(processors.get(0).get("output_body")).isEqualTo("root-output");
        assertThat(processors.get(0).get("input_headers").toString()).contains("Content-Type");
@@ -119,7 +115,6 @@ class IngestionSchemaIT extends AbstractPostgresIT {
        assertThat(processors.get(1).get("processor_id")).isEqualTo("child-proc");
        assertThat(((Number) processors.get(1).get("depth")).intValue()).isEqualTo(1);
        assertThat(processors.get(1).get("parent_processor_id")).isEqualTo("root-proc");
-       assertThat(processors.get(1).get("diagram_node_id")).isEqualTo("node-child");
        assertThat(processors.get(1).get("input_body")).isEqualTo("child-input");
        assertThat(processors.get(1).get("output_body")).isEqualTo("child-output");
@@ -127,7 +122,6 @@ class IngestionSchemaIT extends AbstractPostgresIT {
        assertThat(processors.get(2).get("processor_id")).isEqualTo("grandchild-proc");
        assertThat(((Number) processors.get(2).get("depth")).intValue()).isEqualTo(2);
        assertThat(processors.get(2).get("parent_processor_id")).isEqualTo("child-proc");
-       assertThat(processors.get(2).get("diagram_node_id")).isEqualTo("node-grandchild");
    }

    @Test
@Test @Test

View File

@@ -25,7 +25,8 @@ class PostgresExecutionStoreIT extends AbstractPostgresIT {
            "exec-1", "route-a", "agent-1", "app-1",
            "COMPLETED", "corr-1", "exchange-1",
            now, now.plusMillis(100), 100L,
-           null, null, null);
+           null, null, null,
+           "REGULAR", null, null, null, null, null);
        executionStore.upsert(record);

        Optional<ExecutionRecord> found = executionStore.findById("exec-1");
@@ -33,6 +34,7 @@ class PostgresExecutionStoreIT extends AbstractPostgresIT {
        assertTrue(found.isPresent());
        assertEquals("exec-1", found.get().executionId());
        assertEquals("COMPLETED", found.get().status());
+       assertEquals("REGULAR", found.get().engineLevel());
    }

    @Test
@@ -40,10 +42,12 @@ class PostgresExecutionStoreIT extends AbstractPostgresIT {
        Instant now = Instant.now();
        ExecutionRecord first = new ExecutionRecord(
            "exec-dup", "route-a", "agent-1", "app-1",
-           "RUNNING", null, null, now, null, null, null, null, null);
+           "RUNNING", null, null, now, null, null, null, null, null,
+           null, null, null, null, null, null);
        ExecutionRecord second = new ExecutionRecord(
            "exec-dup", "route-a", "agent-1", "app-1",
-           "COMPLETED", null, null, now, now.plusMillis(200), 200L, null, null, null);
+           "COMPLETED", null, null, now, now.plusMillis(200), 200L, null, null, null,
+           "COMPLETE", null, null, null, null, null);
        executionStore.upsert(first);
        executionStore.upsert(second);
@@ -59,18 +63,19 @@ class PostgresExecutionStoreIT extends AbstractPostgresIT {
        Instant now = Instant.now();
        ExecutionRecord exec = new ExecutionRecord(
            "exec-proc", "route-a", "agent-1", "app-1",
-           "COMPLETED", null, null, now, now.plusMillis(50), 50L, null, null, null);
+           "COMPLETED", null, null, now, now.plusMillis(50), 50L, null, null, null,
+           "COMPLETE", null, null, null, null, null);
        executionStore.upsert(exec);

        List<ProcessorRecord> processors = List.of(
-           new ProcessorRecord("exec-proc", "proc-1", "log", null,
+           new ProcessorRecord("exec-proc", "proc-1", "log",
                "app-1", "route-a", 0, null, "COMPLETED",
                now, now.plusMillis(10), 10L, null, null,
-               "input body", "output body", null, null),
+               "input body", "output body", null, null, null),
-           new ProcessorRecord("exec-proc", "proc-2", "to", null,
+           new ProcessorRecord("exec-proc", "proc-2", "to",
                "app-1", "route-a", 1, "proc-1", "COMPLETED",
                now.plusMillis(10), now.plusMillis(30), 20L, null, null,
-               null, null, null, null)
+               null, null, null, null, null)
        );
        executionStore.upsertProcessors("exec-proc", now, "app-1", "route-a", processors);


@@ -59,6 +59,7 @@ class PostgresStatsStoreIT extends AbstractPostgresIT {
        executionStore.upsert(new ExecutionRecord(
            id, routeId, "agent-1", applicationName, status, null, null,
            startTime, startTime.plusMillis(durationMs), durationMs,
            status.equals("FAILED") ? "error" : null, null, null,
            null, null, null, null, null, null));
    }
}


@@ -1,5 +1,5 @@
package com.cameleer3.server.core.admin;

public enum AuditCategory {
    INFRA, AUTH, USER_MGMT, CONFIG, RBAC, AGENT
}


@@ -34,6 +34,10 @@ public class AuditService {
        repository.insert(record);
        if (request != null) {
            request.setAttribute("audit.logged", true);
        }
        log.info("AUDIT: user={} action={} category={} target={} result={}",
            username, action, category, target, result);
    }


@@ -9,6 +9,7 @@
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.stream.Collectors;
@@ -30,6 +31,7 @@
public class AgentRegistryService {
    private final ConcurrentHashMap<String, AgentInfo> agents = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<String, ConcurrentLinkedQueue<AgentCommand>> commands = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<String, CompletableFuture<CommandReply>> pendingReplies = new ConcurrentHashMap<>();

    private volatile AgentEventListener eventListener;
@@ -279,6 +281,31 @@ public class AgentRegistryService {
        }
    }

    /**
     * Register a command that expects a synchronous reply from the agent.
     * Returns a CompletableFuture that will be completed when the agent ACKs the command.
     * Auto-cleans up from the pending map on completion or timeout.
     */
    public CompletableFuture<CommandReply> addCommandWithReply(String agentId, CommandType type, String payload) {
        AgentCommand command = addCommand(agentId, type, payload);
        CompletableFuture<CommandReply> future = new CompletableFuture<>();
        pendingReplies.put(command.id(), future);
        future.whenComplete((result, ex) -> pendingReplies.remove(command.id()));
        return future;
    }

    /**
     * Complete a pending reply future for a command.
     * Called when an agent ACKs a command that was registered via {@link #addCommandWithReply}.
     * No-op if no pending future exists for the given command ID.
     */
    public void completeReply(String commandId, String status, String message, String data) {
        CompletableFuture<CommandReply> future = pendingReplies.remove(commandId);
        if (future != null) {
            future.complete(new CommandReply(status, message, data));
        }
    }

    /**
     * Set the event listener for command notifications.
     * The SSE layer in the app module implements this interface.


@@ -0,0 +1,11 @@
package com.cameleer3.server.core.agent;
/**
* Represents the reply data from an agent command acknowledgment.
* Used for synchronous request-reply command patterns (e.g. TEST_EXPRESSION).
*
* @param status "SUCCESS" or "FAILURE"
* @param message human-readable description of the result
* @param data optional structured JSON data returned by the agent
*/
public record CommandReply(String status, String message, String data) {}


@@ -6,5 +6,7 @@ package com.cameleer3.server.core.agent;
public enum CommandType {
    CONFIG_UPDATE,
    DEEP_TRACE,
    REPLAY,
    SET_TRACED_PROCESSORS,
    TEST_EXPRESSION
}


@@ -2,11 +2,16 @@ package com.cameleer3.server.core.detail;
import com.cameleer3.server.core.storage.ExecutionStore;
import com.cameleer3.server.core.storage.ExecutionStore.ProcessorRecord;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.*;

public class DetailService {

    private static final ObjectMapper JSON = new ObjectMapper();
    private static final TypeReference<Map<String, String>> STR_MAP = new TypeReference<>() {};

    private final ExecutionStore executionStore;

    public DetailService(ExecutionStore executionStore) {
@@ -25,11 +30,26 @@ public class DetailService {
                exec.durationMs() != null ? exec.durationMs() : 0L,
                exec.correlationId(), exec.exchangeId(),
                exec.errorMessage(), exec.errorStacktrace(),
                exec.diagramContentHash(), roots,
                exec.inputBody(), exec.outputBody(),
                exec.inputHeaders(), exec.outputHeaders(),
                parseAttributes(exec.attributes())
            );
        });
    }

    public Optional<Map<String, String>> getProcessorSnapshot(String executionId, String processorId) {
        return executionStore.findProcessorById(executionId, processorId)
            .map(p -> {
                Map<String, String> snapshot = new LinkedHashMap<>();
                if (p.inputBody() != null) snapshot.put("inputBody", p.inputBody());
                if (p.outputBody() != null) snapshot.put("outputBody", p.outputBody());
                if (p.inputHeaders() != null) snapshot.put("inputHeaders", p.inputHeaders());
                if (p.outputHeaders() != null) snapshot.put("outputHeaders", p.outputHeaders());
                return snapshot;
            });
    }

    List<ProcessorNode> buildTree(List<ProcessorRecord> processors) {
        if (processors.isEmpty()) return List.of();
@@ -39,7 +59,11 @@ public class DetailService {
            p.processorId(), p.processorType(), p.status(),
            p.startTime(), p.endTime(),
            p.durationMs() != null ? p.durationMs() : 0L,
            p.errorMessage(), p.errorStacktrace(),
            parseAttributes(p.attributes()),
            p.loopIndex(), p.loopSize(),
            p.splitIndex(), p.splitSize(),
            p.multicastIndex()
        ));
    }
@@ -59,4 +83,13 @@ public class DetailService {
        }
        return roots;
    }

    private static Map<String, String> parseAttributes(String json) {
        if (json == null || json.isBlank()) return null;
        try {
            return JSON.readValue(json, STR_MAP);
        } catch (Exception e) {
            return null;
        }
    }
}


@@ -2,6 +2,7 @@ package com.cameleer3.server.core.detail;
import java.time.Instant;
import java.util.List;
import java.util.Map;

/**
 * Full detail of a route execution, including the nested processor tree.
@@ -22,6 +23,10 @@ import java.util.List;
 * @param errorStackTrace error stack trace (empty string if no error)
 * @param diagramContentHash content hash linking to the active route diagram version
 * @param processors nested processor execution tree (root nodes)
 * @param inputBody exchange input body at route entry (null if not captured)
 * @param outputBody exchange output body at route exit (null if not captured)
 * @param inputHeaders exchange input headers at route entry (null if not captured)
 * @param outputHeaders exchange output headers at route exit (null if not captured)
 */
public record ExecutionDetail(
    String executionId,
@@ -37,6 +42,11 @@ public record ExecutionDetail(
    String errorMessage,
    String errorStackTrace,
    String diagramContentHash,
    List<ProcessorNode> processors,
    String inputBody,
    String outputBody,
    String inputHeaders,
    String outputHeaders,
    Map<String, String> attributes
) {
}


@@ -3,6 +3,7 @@ package com.cameleer3.server.core.detail;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/**
 * Nested tree node representing a single processor execution within a route.
@@ -18,23 +19,37 @@ public final class ProcessorNode {
    private final Instant startTime;
    private final Instant endTime;
    private final long durationMs;
    private final String errorMessage;
    private final String errorStackTrace;
    private final Map<String, String> attributes;
    private final Integer loopIndex;
    private final Integer loopSize;
    private final Integer splitIndex;
    private final Integer splitSize;
    private final Integer multicastIndex;
    private final List<ProcessorNode> children;

    public ProcessorNode(String processorId, String processorType, String status,
                         Instant startTime, Instant endTime, long durationMs,
                         String errorMessage, String errorStackTrace,
                         Map<String, String> attributes,
                         Integer loopIndex, Integer loopSize,
                         Integer splitIndex, Integer splitSize,
                         Integer multicastIndex) {
        this.processorId = processorId;
        this.processorType = processorType;
        this.status = status;
        this.startTime = startTime;
        this.endTime = endTime;
        this.durationMs = durationMs;
        this.errorMessage = errorMessage;
        this.errorStackTrace = errorStackTrace;
        this.attributes = attributes;
        this.loopIndex = loopIndex;
        this.loopSize = loopSize;
        this.splitIndex = splitIndex;
        this.splitSize = splitSize;
        this.multicastIndex = multicastIndex;
        this.children = new ArrayList<>();
    }
@@ -48,8 +63,13 @@ public final class ProcessorNode {
    public Instant getStartTime() { return startTime; }
    public Instant getEndTime() { return endTime; }
    public long getDurationMs() { return durationMs; }
    public String getErrorMessage() { return errorMessage; }
    public String getErrorStackTrace() { return errorStackTrace; }
    public Map<String, String> getAttributes() { return attributes; }
    public Integer getLoopIndex() { return loopIndex; }
    public Integer getLoopSize() { return loopSize; }
    public Integer getSplitIndex() { return splitIndex; }
    public Integer getSplitSize() { return splitSize; }
    public Integer getMulticastIndex() { return multicastIndex; }
    public List<ProcessorNode> getChildren() { return List.copyOf(children); }
}


@@ -19,4 +19,14 @@ public interface DiagramRenderer {
     * Compute a positioned JSON layout for the route graph.
     */
    DiagramLayout layoutJson(RouteGraph graph);

    /**
     * Compute a positioned JSON layout with a specific flow direction.
     *
     * @param graph the route graph
     * @param direction "LR" for left-to-right, "TB" for top-to-bottom
     */
    default DiagramLayout layoutJson(RouteGraph graph, String direction) {
        return layoutJson(graph);
    }
}


@@ -70,14 +70,16 @@ public class SearchIndexer implements SearchIndexerStats {
                p.processorId(), p.processorType(), p.status(),
                p.errorMessage(), p.errorStacktrace(),
                p.inputBody(), p.outputBody(),
                p.inputHeaders(), p.outputHeaders(),
                p.attributes()))
            .toList();

        searchIndex.index(new ExecutionDocument(
            exec.executionId(), exec.routeId(), exec.agentId(), exec.applicationName(),
            exec.status(), exec.correlationId(), exec.exchangeId(),
            exec.startTime(), exec.endTime(), exec.durationMs(),
            exec.errorMessage(), exec.errorStacktrace(), processorDocs,
            exec.attributes()));
        indexedCount.incrementAndGet();
        lastIndexedAt = Instant.now();


@@ -1,5 +1,6 @@
package com.cameleer3.server.core.ingestion;

import com.cameleer3.common.model.ExchangeSnapshot;
import com.cameleer3.common.model.ProcessorExecution;
import com.cameleer3.common.model.RouteExecution;
import com.cameleer3.server.core.indexing.ExecutionUpdatedEvent;
@@ -77,6 +78,25 @@ public class IngestionService {
        String diagramHash = diagramStore
            .findContentHashForRoute(exec.getRouteId(), agentId)
            .orElse("");

        // Extract route-level snapshots (critical for REGULAR mode where no processors are recorded)
        String inputBody = null;
        String outputBody = null;
        String inputHeaders = null;
        String outputHeaders = null;
        ExchangeSnapshot inputSnapshot = exec.getInputSnapshot();
        if (inputSnapshot != null) {
            inputBody = truncateBody(inputSnapshot.getBody());
            inputHeaders = toJson(inputSnapshot.getHeaders());
        }
        ExchangeSnapshot outputSnapshot = exec.getOutputSnapshot();
        if (outputSnapshot != null) {
            outputBody = truncateBody(outputSnapshot.getBody());
            outputHeaders = toJson(outputSnapshot.getHeaders());
        }

        return new ExecutionRecord(
            exec.getExchangeId(), exec.getRouteId(), agentId, applicationName,
            exec.getStatus() != null ? exec.getStatus().name() : "RUNNING",
@@ -84,7 +104,10 @@ public class IngestionService {
            exec.getStartTime(), exec.getEndTime(),
            exec.getDurationMs(),
            exec.getErrorMessage(), exec.getErrorStackTrace(),
            diagramHash,
            exec.getEngineLevel(),
            inputBody, outputBody, inputHeaders, outputHeaders,
            toJson(exec.getAttributes())
        );
    }
@@ -96,7 +119,7 @@ public class IngestionService {
        for (ProcessorExecution p : processors) {
            flat.add(new ProcessorRecord(
                executionId, p.getProcessorId(), p.getProcessorType(),
                applicationName, routeId,
                depth, parentProcessorId,
                p.getStatus() != null ? p.getStatus().name() : "RUNNING",
                p.getStartTime() != null ? p.getStartTime() : execStartTime,
@@ -104,7 +127,11 @@ public class IngestionService {
                p.getDurationMs(),
                p.getErrorMessage(), p.getErrorStackTrace(),
                truncateBody(p.getInputBody()), truncateBody(p.getOutputBody()),
                toJson(p.getInputHeaders()), toJson(p.getOutputHeaders()),
                toJson(p.getAttributes()),
                p.getLoopIndex(), p.getLoopSize(),
                p.getSplitIndex(), p.getSplitSize(),
                p.getMulticastIndex()
            ));
            if (p.getChildren() != null) {
                flat.addAll(flattenProcessors(


@@ -8,4 +8,4 @@ import com.cameleer3.common.graph.RouteGraph;
 * The agent ID is extracted from the SecurityContext in the controller layer
 * and carried through the write buffer so the flush scheduler can persist it.
 */
public record TaggedDiagram(String agentId, String applicationName, RouteGraph graph) {}


@@ -1,6 +1,7 @@
package com.cameleer3.server.core.search;

import java.time.Instant;
import java.util.Map;

/**
 * Lightweight summary of a route execution for search result listings.
@@ -30,6 +31,8 @@ public record ExecutionSummary(
    long durationMs,
    String correlationId,
    String errorMessage,
    String diagramContentHash,
    String highlight,
    Map<String, String> attributes
) {
}


@@ -55,16 +55,21 @@ public record SearchRequest(
    private static final int MAX_LIMIT = 500;

    private static final java.util.Set<String> ALLOWED_SORT_FIELDS = java.util.Set.of(
        "startTime", "status", "agentId", "routeId", "correlationId",
        "durationMs", "executionId", "applicationName"
    );

    /** Maps camelCase API sort field names to OpenSearch field names.
     * Text fields use .keyword subfield; date/numeric fields are used directly. */
    private static final java.util.Map<String, String> SORT_FIELD_TO_COLUMN = java.util.Map.ofEntries(
        java.util.Map.entry("startTime", "start_time"),
        java.util.Map.entry("durationMs", "duration_ms"),
        java.util.Map.entry("status", "status.keyword"),
        java.util.Map.entry("agentId", "agent_id.keyword"),
        java.util.Map.entry("routeId", "route_id.keyword"),
        java.util.Map.entry("correlationId", "correlation_id.keyword"),
        java.util.Map.entry("executionId", "execution_id.keyword"),
        java.util.Map.entry("applicationName", "application_name.keyword")
    );

    public SearchRequest {
@@ -75,7 +80,7 @@ public record SearchRequest(
        if (!"asc".equalsIgnoreCase(sortDir)) sortDir = "desc";
    }

    /** Returns the snake_case column name for OpenSearch/DB ORDER BY. */
    public String sortColumn() {
        return SORT_FIELD_TO_COLUMN.getOrDefault(sortField, "start_time");
    }


@@ -4,6 +4,7 @@ import com.cameleer3.common.graph.RouteGraph;
import com.cameleer3.server.core.ingestion.TaggedDiagram;

import java.util.List;
import java.util.Map;
import java.util.Optional;

public interface DiagramStore {
@@ -15,4 +16,6 @@ public interface DiagramStore {
    Optional<String> findContentHashForRoute(String routeId, String agentId);
    Optional<String> findContentHashForRouteByAgents(String routeId, List<String> agentIds);

    Map<String, String> findProcessorRouteMapping(String applicationName);
}


@@ -16,19 +16,28 @@ public interface ExecutionStore {
    List<ProcessorRecord> findProcessors(String executionId);

    Optional<ProcessorRecord> findProcessorById(String executionId, String processorId);

    record ExecutionRecord(
        String executionId, String routeId, String agentId, String applicationName,
        String status, String correlationId, String exchangeId,
        Instant startTime, Instant endTime, Long durationMs,
        String errorMessage, String errorStacktrace, String diagramContentHash,
        String engineLevel,
        String inputBody, String outputBody, String inputHeaders, String outputHeaders,
        String attributes
    ) {}

    record ProcessorRecord(
        String executionId, String processorId, String processorType,
        String applicationName, String routeId,
        int depth, String parentProcessorId, String status,
        Instant startTime, Instant endTime, Long durationMs,
        String errorMessage, String errorStacktrace,
        String inputBody, String outputBody, String inputHeaders, String outputHeaders,
        String attributes,
        Integer loopIndex, Integer loopSize,
        Integer splitIndex, Integer splitSize,
        Integer multicastIndex
    ) {}
}


@@ -8,12 +8,14 @@ public record ExecutionDocument(
    String status, String correlationId, String exchangeId,
    Instant startTime, Instant endTime, Long durationMs,
    String errorMessage, String errorStacktrace,
    List<ProcessorDoc> processors,
    String attributes
) {
    public record ProcessorDoc(
        String processorId, String processorType, String status,
        String errorMessage, String errorStacktrace,
        String inputBody, String outputBody,
        String inputHeaders, String outputHeaders,
        String attributes
    ) {}
}


@@ -24,10 +24,10 @@ class TreeReconstructionTest {
    private ProcessorRecord proc(String id, String type, String status,
                                 int depth, String parentId) {
        return new ProcessorRecord(
            "exec-1", id, type,
            "default", "route1", depth, parentId,
            status, NOW, NOW, 10L,
            null, null, null, null, null, null, null
        );
    }


@@ -0,0 +1,858 @@
# Taps, Business Attributes & Enhanced Replay — Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** Add UI and backend support for tap management, business attribute display, enhanced replay, per-route recording toggles, and success compression.
**Architecture:** Backend-first approach — add attributes to the execution pipeline, then build the command infrastructure for test-expression and replay, then layer on the frontend features page by page. Each task produces a self-contained, committable unit.
**Tech Stack:** Java 17 / Spring Boot 3.4 (backend), React 18 / TypeScript / TanStack Query (frontend), @cameleer/design-system components, PostgreSQL (JSONB), OpenSearch.
**Spec:** `docs/superpowers/specs/2026-03-26-taps-attributes-replay-ui-design.md`
---
## File Map
### Backend — New Files
- `cameleer3-server-app/src/main/resources/db/migration/V5__attributes.sql` — Flyway migration adding `attributes JSONB` to executions and processor_executions tables
- `cameleer3-server-app/src/main/java/com/cameleer3/server/app/dto/TestExpressionRequest.java` — Request DTO for test-expression endpoint
- `cameleer3-server-app/src/main/java/com/cameleer3/server/app/dto/TestExpressionResponse.java` — Response DTO for test-expression endpoint
### Backend — Modified Files
- `cameleer3-server-core/src/main/java/com/cameleer3/server/core/agent/CommandType.java` — add TEST_EXPRESSION
- `cameleer3-server-core/src/main/java/com/cameleer3/server/core/storage/ExecutionStore.java` — add attributes to ExecutionRecord and ProcessorRecord
- `cameleer3-server-core/src/main/java/com/cameleer3/server/core/detail/ExecutionDetail.java` — add attributes field
- `cameleer3-server-core/src/main/java/com/cameleer3/server/core/detail/ProcessorNode.java` — add attributes field
- `cameleer3-server-core/src/main/java/com/cameleer3/server/core/detail/DetailService.java` — pass attributes through tree reconstruction
- `cameleer3-server-core/src/main/java/com/cameleer3/server/core/search/ExecutionSummary.java` — add attributes field
- `cameleer3-server-core/src/main/java/com/cameleer3/server/core/ingestion/IngestionService.java` — extract attributes from RouteExecution/ProcessorExecution
- `cameleer3-server-core/src/main/java/com/cameleer3/server/core/storage/model/ExecutionDocument.java` — add attributes to ProcessorDoc
- `cameleer3-server-core/src/main/java/com/cameleer3/server/core/indexing/SearchIndexer.java` — include attributes in indexing
- `cameleer3-server-core/src/main/java/com/cameleer3/server/core/agent/AgentRegistryService.java` — add CompletableFuture-based command reply support
- `cameleer3-server-app/src/main/java/com/cameleer3/server/app/storage/PostgresExecutionStore.java` — add attributes to INSERT/UPDATE queries
- `cameleer3-server-app/src/main/java/com/cameleer3/server/app/search/OpenSearchIndex.java` — add attributes to toMap() and fromSearchHit()
- `cameleer3-server-app/src/main/java/com/cameleer3/server/app/controller/ApplicationConfigController.java` — add test-expression endpoint
- `cameleer3-server-app/src/main/java/com/cameleer3/server/app/controller/AgentCommandController.java` — add test-expression mapping, complete futures on ACK
- `cameleer3-server-app/src/main/java/com/cameleer3/server/app/dto/CommandAckRequest.java` — add optional data field
### Frontend — Modified Files
- `ui/src/api/schema.d.ts` — add attributes to ExecutionDetail, ProcessorNode, ExecutionSummary
- `ui/src/api/queries/commands.ts` — add TapDefinition type, extend ApplicationConfig, add test-expression mutation, add replay mutation
- `ui/src/pages/ExchangeDetail/ExchangeDetail.tsx` — attributes strip, per-processor attributes, replay modal
- `ui/src/pages/ExchangeDetail/ExchangeDetail.module.css` — attributes strip and replay styles
- `ui/src/pages/Dashboard/Dashboard.tsx` — attributes column in exchanges table
- `ui/src/pages/Routes/RouteDetail.tsx` — recording toggle, taps tab, tap modal with test
- `ui/src/pages/Routes/RouteDetail.module.css` — taps and recording styles
- `ui/src/pages/Admin/AppConfigDetailPage.tsx` — restructure to 3 sections
- `ui/src/pages/Admin/AppConfigDetailPage.module.css` — updated styles
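The CompletableFuture-based command reply support noted above for `AgentRegistryService.java` follows a standard pending-futures pattern. A minimal sketch (hypothetical and simplified; the real service also manages per-agent command queues and event listeners):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

// Sketch of the request-reply command pattern: register a future keyed by
// command ID, complete it when the agent's ACK arrives, auto-clean on completion.
public class ReplySketch {
    public record CommandReply(String status, String message, String data) {}

    private static final ConcurrentHashMap<String, CompletableFuture<CommandReply>> pending =
        new ConcurrentHashMap<>();

    public static CompletableFuture<CommandReply> addCommandWithReply(String commandId) {
        CompletableFuture<CommandReply> future = new CompletableFuture<>();
        pending.put(commandId, future);
        // Remove the entry whether the future completes normally or exceptionally.
        future.whenComplete((result, ex) -> pending.remove(commandId));
        return future;
    }

    public static void completeReply(String commandId, String status, String message, String data) {
        CompletableFuture<CommandReply> future = pending.remove(commandId);
        if (future != null) {
            future.complete(new CommandReply(status, message, data));
        }
    }

    public static void main(String[] args) throws Exception {
        CompletableFuture<CommandReply> reply = addCommandWithReply("cmd-1");
        completeReply("cmd-1", "SUCCESS", "ok", null);   // simulates the agent ACK
        System.out.println(reply.get(1, TimeUnit.SECONDS).status());
        System.out.println(pending.isEmpty());           // whenComplete cleaned up
    }
}
```

The `whenComplete` cleanup means the caller can attach `orTimeout(...)` without leaking map entries when an agent never ACKs.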
---
## Task 1: Verify Prerequisites and Database Migration
**Files:**
- Create: `cameleer3-server-app/src/main/resources/db/migration/V5__attributes.sql`
- [ ] **Step 1: Verify cameleer3-common has attributes support**
Confirm the `cameleer3-common` SNAPSHOT dependency includes `RouteExecution.getAttributes()` and `ProcessorExecution.getAttributes()`. Run:
```bash
mvn dependency:sources -pl cameleer3-server-core -q
```
Then inspect the source jar for `RouteExecution.java` to confirm the `attributes` field exists. If it does not, the dependency must be updated first.
- [ ] **Step 2: Write migration SQL**
```sql
-- V5__attributes.sql
ALTER TABLE executions ADD COLUMN IF NOT EXISTS attributes JSONB;
ALTER TABLE processor_executions ADD COLUMN IF NOT EXISTS attributes JSONB;
```
- [ ] **Step 3: Verify migration compiles**
Run: `cd cameleer3-server-app && mvn compile -pl . -q`
Expected: BUILD SUCCESS
- [ ] **Step 4: Commit**
```bash
git add cameleer3-server-app/src/main/resources/db/migration/V5__attributes.sql
git commit -m "feat: add attributes JSONB columns to executions and processor_executions"
```
---
## Task 2: Backend — Add Attributes to Storage Records and Detail Models
**Files:**
- Modify: `cameleer3-server-core/src/main/java/com/cameleer3/server/core/storage/ExecutionStore.java`
- Modify: `cameleer3-server-core/src/main/java/com/cameleer3/server/core/detail/ExecutionDetail.java`
- Modify: `cameleer3-server-core/src/main/java/com/cameleer3/server/core/detail/ProcessorNode.java`
- Modify: `cameleer3-server-core/src/main/java/com/cameleer3/server/core/search/ExecutionSummary.java`
- [ ] **Step 1: Add `attributes` field to `ExecutionRecord`**
In `ExecutionStore.java`, add `String attributes` (JSONB as string) as the last parameter of the `ExecutionRecord` record. This is a serialized `Map<String, String>`.
- [ ] **Step 2: Add `attributes` field to `ProcessorRecord`**
In `ExecutionStore.java`, add `String attributes` (JSONB as string) as the last parameter of the `ProcessorRecord` record.
- [ ] **Step 3: Add `attributes` field to `ExecutionDetail`**
Add `Map<String, String> attributes` as the last parameter of the `ExecutionDetail` record (after `outputHeaders`).
- [ ] **Step 4: Add `attributes` field to `ProcessorNode`**
`ProcessorNode` is a mutable class with a constructor. Add a `Map<String, String> attributes` field with getter. Add it to the constructor. Update the existing `ProcessorNode` constructor calls in `DetailService.java` to pass `null` or the attributes map.
- [ ] **Step 5: Add `attributes` field to `ExecutionSummary`**
Add `Map<String, String> attributes` as the last parameter (after `highlight`).
- [ ] **Step 6: Verify compilation**
Run: `mvn compile -q`
Expected: Compilation errors in files that construct these records — these will be fixed in the next tasks.
- [ ] **Step 7: Commit**
```bash
git add cameleer3-server-core/
git commit -m "feat: add attributes field to ExecutionRecord, ProcessorRecord, ExecutionDetail, ProcessorNode, ExecutionSummary"
```
---
## Task 3: Backend — Attributes Ingestion Pipeline
**Files:**
- Modify: `cameleer3-server-core/src/main/java/com/cameleer3/server/core/ingestion/IngestionService.java`
- Modify: `cameleer3-server-app/src/main/java/com/cameleer3/server/app/storage/PostgresExecutionStore.java`
- [ ] **Step 1: Extract attributes in `IngestionService.toExecutionRecord()`**
In the `toExecutionRecord()` method (~line 76-111), serialize `execution.getAttributes()` to JSON string using Jackson `ObjectMapper`. Pass it as the new `attributes` parameter to `ExecutionRecord`. If attributes is null or empty, pass `null`.
```java
String attributes = null;
if (execution.getAttributes() != null && !execution.getAttributes().isEmpty()) {
    try {
        attributes = JSON.writeValueAsString(execution.getAttributes());
    } catch (JsonProcessingException e) {
        // leave attributes null: store the execution without attributes rather than fail ingestion
    }
}
```
Note: `IngestionService` has a static `private static final ObjectMapper JSON` field (line 22). Use `JSON.writeValueAsString()`.
- [ ] **Step 2: Extract attributes in `IngestionService.flattenProcessors()`**
In the `flattenProcessors()` method (~line 113-138), serialize each `ProcessorExecution.getAttributes()` to JSON string. Pass as the new `attributes` parameter to `ProcessorRecord`.
- [ ] **Step 3: Update `PostgresExecutionStore.upsert()`**
Add `attributes` to the INSERT statement and bind parameters. The column is JSONB, so use `PGobject` with type "jsonb" or cast `?::jsonb` in the SQL.
In the INSERT (~line 26-32): add `attributes` column and `?::jsonb` placeholder.
In the ON CONFLICT UPDATE (~line 33-51): add `attributes = COALESCE(EXCLUDED.attributes, executions.attributes)` merge (follows the existing pattern, e.g., `input_body = COALESCE(EXCLUDED.input_body, executions.input_body)`).
In the bind parameters (~line 53-62): bind `record.attributes()`.
- [ ] **Step 4: Update `PostgresExecutionStore.upsertProcessors()`**
Same pattern: add `attributes` column, `?::jsonb` placeholder, bind parameter.
- [ ] **Step 5: Verify compilation**
Run: `mvn compile -q`
Expected: BUILD SUCCESS (or remaining errors in `DetailService`/`SearchIndexer`, which are fixed in the next task)
- [ ] **Step 6: Commit**
```bash
git add cameleer3-server-core/src/main/java/com/cameleer3/server/core/ingestion/IngestionService.java
git add cameleer3-server-app/src/main/java/com/cameleer3/server/app/storage/PostgresExecutionStore.java
git commit -m "feat: store execution and processor attributes from agent data"
```
---
## Task 4: Backend — Attributes in Detail Service and OpenSearch Indexing
**Files:**
- Modify: `cameleer3-server-core/src/main/java/com/cameleer3/server/core/detail/DetailService.java`
- Modify: `cameleer3-server-core/src/main/java/com/cameleer3/server/core/storage/model/ExecutionDocument.java`
- Modify: `cameleer3-server-core/src/main/java/com/cameleer3/server/core/indexing/SearchIndexer.java`
- Modify: `cameleer3-server-app/src/main/java/com/cameleer3/server/app/search/OpenSearchIndex.java`
- [ ] **Step 1: Pass attributes through `DetailService.buildTree()`**
In `buildTree()` (~line 35-63), when constructing `ProcessorNode` from `ProcessorRecord`, deserialize the `attributes` JSON string back to `Map<String, String>` (e.g., `JSON.readValue(s, new TypeReference<Map<String, String>>() {})`) and pass it to the constructor.
In `getDetail()` (~line 16-33), when constructing `ExecutionDetail`, deserialize the `ExecutionRecord.attributes()` JSON and pass it as the `attributes` parameter.
- [ ] **Step 2: Update `PostgresExecutionStore.findById()` and `findProcessors()` queries**
These SELECT queries need to include the new `attributes` column and map it into `ExecutionRecord` / `ProcessorRecord` via the row mapper.
- [ ] **Step 3: Add attributes to `ExecutionDocument.ProcessorDoc`**
Add `String attributes` field to the `ProcessorDoc` record in `ExecutionDocument.java`. Also add `String attributes` to `ExecutionDocument` itself for route-level attributes.
- [ ] **Step 4: Update `SearchIndexer.indexExecution()`**
When constructing `ProcessorDoc` objects (~line 68-74), pass `processor.attributes()`. When constructing `ExecutionDocument` (~line 76-80), pass the execution record's attributes.
- [ ] **Step 5: Update `OpenSearchIndex.toMap()`**
In the `toMap()` method (~line 303-333), add `"attributes"` to the document map and to each processor sub-document map.
- [ ] **Step 6: Update `OpenSearchIndex.fromSearchHit()` (or equivalent)**
When parsing search results back into `ExecutionSummary`, extract the `attributes` field from the OpenSearch hit source and deserialize it into `Map<String, String>`.
- [ ] **Step 7: Verify compilation**
Run: `mvn compile -q`
Expected: BUILD SUCCESS
- [ ] **Step 8: Commit**
```bash
git add cameleer3-server-core/ cameleer3-server-app/
git commit -m "feat: thread attributes through detail service and OpenSearch indexing"
```
---
## Task 5: Backend — TEST_EXPRESSION Command and Request-Reply Infrastructure
**Files:**
- Modify: `cameleer3-server-core/src/main/java/com/cameleer3/server/core/agent/CommandType.java`
- Modify: `cameleer3-server-core/src/main/java/com/cameleer3/server/core/agent/AgentRegistryService.java`
- Modify: `cameleer3-server-app/src/main/java/com/cameleer3/server/app/dto/CommandAckRequest.java`
- Modify: `cameleer3-server-app/src/main/java/com/cameleer3/server/app/controller/AgentCommandController.java`
- Create: `cameleer3-server-app/src/main/java/com/cameleer3/server/app/dto/TestExpressionRequest.java`
- Create: `cameleer3-server-app/src/main/java/com/cameleer3/server/app/dto/TestExpressionResponse.java`
- Modify: `cameleer3-server-app/src/main/java/com/cameleer3/server/app/controller/ApplicationConfigController.java`
- [ ] **Step 1: Add TEST_EXPRESSION to CommandType enum**
```java
public enum CommandType {
CONFIG_UPDATE,
DEEP_TRACE,
REPLAY,
SET_TRACED_PROCESSORS,
TEST_EXPRESSION
}
```
- [ ] **Step 2: Add `data` field to `CommandAckRequest`**
```java
public record CommandAckRequest(String status, String message, String data) {}
```
The `data` field carries structured JSON results (e.g., expression test result). Existing ACKs that don't send data will deserialize `data` as `null`.
- [ ] **Step 3: Add CompletableFuture map to AgentRegistryService**
Add a `ConcurrentHashMap<String, CompletableFuture<CommandAckRequest>>` for pending request-reply commands. Add methods:
```java
public CompletableFuture<CommandAckRequest> addCommandWithReply(String agentId, CommandType type, String payload) {
AgentCommand command = addCommand(agentId, type, payload);
CompletableFuture<CommandAckRequest> future = new CompletableFuture<>();
pendingReplies.put(command.id(), future);
return future;
}
public void completeReply(String commandId, CommandAckRequest ack) {
CompletableFuture<CommandAckRequest> future = pendingReplies.remove(commandId);
if (future != null) {
future.complete(ack);
}
}
```
Note: Use `future.orTimeout(5, TimeUnit.SECONDS)` in the caller. The future auto-completes exceptionally on timeout. Add a `whenComplete` handler that removes the entry from `pendingReplies` to prevent leaks:
```java
future.whenComplete((result, ex) -> pendingReplies.remove(command.id()));
```
- [ ] **Step 4: Complete futures in AgentCommandController.acknowledgeCommand()**
In the ACK endpoint (~line 156-179), after `registryService.acknowledgeCommand()`, call `registryService.completeReply(commandId, ack)`.
- [ ] **Step 5: Add test-expression mapping to mapCommandType()**
```java
case "test-expression" -> CommandType.TEST_EXPRESSION;
```
- [ ] **Step 6: Create TestExpressionRequest and TestExpressionResponse DTOs**
```java
// TestExpressionRequest.java
public record TestExpressionRequest(String expression, String language, String body, String target) {}
// TestExpressionResponse.java
public record TestExpressionResponse(String result, String error) {}
```
- [ ] **Step 7: Add test-expression endpoint to ApplicationConfigController**
Note: `ApplicationConfigController` does not use `@PreAuthorize` — security is handled at the URL pattern level in the security config. The test-expression endpoint inherits the same access rules as other config endpoints. No `@PreAuthorize` annotation needed.
```java
@PostMapping("/{application}/test-expression")
@Operation(summary = "Test a tap expression against sample data via a live agent")
public ResponseEntity<TestExpressionResponse> testExpression(
@PathVariable String application,
@RequestBody TestExpressionRequest request) {
// 1. Find a LIVE agent for this application via registryService
// 2. Send TEST_EXPRESSION command with addCommandWithReply()
// 3. Await CompletableFuture with 5s timeout via future.orTimeout(5, TimeUnit.SECONDS)
// 4. Parse ACK data as result/error, return TestExpressionResponse
// Handle: no live agent (404), timeout (504), parse error (500)
// Clean up: future.whenComplete removes from pendingReplies map on timeout
}
```
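The pieces from Steps 3, 4, and 7 can be exercised together with plain `CompletableFuture`s. This stdlib-only sketch simplifies `CommandAckRequest` to a local `Ack` record and stands in for the Spring wiring; it is a sketch of the pattern, not the actual controller code:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.*;

public class RequestReplyDemo {
    // Simplified stand-in for CommandAckRequest (status, message, data)
    record Ack(String status, String message, String data) {}

    static final Map<String, CompletableFuture<Ack>> pendingReplies = new ConcurrentHashMap<>();

    // Registry side (Step 3): register a pending reply keyed by command id
    static CompletableFuture<Ack> addCommandWithReply(String commandId) {
        CompletableFuture<Ack> future = new CompletableFuture<>();
        pendingReplies.put(commandId, future);
        // Clean up on completion (including timeout) so the map cannot leak
        future.whenComplete((result, ex) -> pendingReplies.remove(commandId));
        return future;
    }

    // ACK endpoint side (Step 4): complete the waiting future, if still pending
    static void completeReply(String commandId, Ack ack) {
        CompletableFuture<Ack> future = pendingReplies.remove(commandId);
        if (future != null) future.complete(ack);
    }

    public static void main(String[] args) throws Exception {
        String id = UUID.randomUUID().toString();
        // Caller side (Step 7): await the reply with a 5s deadline
        CompletableFuture<Ack> reply = addCommandWithReply(id).orTimeout(5, TimeUnit.SECONDS);
        // Simulate the agent ACK arriving on another thread
        CompletableFuture.runAsync(() -> completeReply(id, new Ack("OK", null, "{\"result\":\"42\"}")));
        System.out.println(reply.get().status() + " " + reply.get().data());
    }
}
```

If the agent never ACKs, `orTimeout` completes the future exceptionally with a `TimeoutException`, which the endpoint maps to 504; the `whenComplete` cleanup fires in that path as well.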
- [ ] **Step 8: Verify compilation**
Run: `mvn compile -q`
Expected: BUILD SUCCESS
- [ ] **Step 9: Commit**
```bash
git add cameleer3-server-core/ cameleer3-server-app/
git commit -m "feat: add TEST_EXPRESSION command with request-reply infrastructure"
```
---
## Task 6: Backend — Regenerate OpenAPI and Schema
**Files:**
- Modify: `openapi.json` (regenerated)
- Modify: `ui/src/api/schema.d.ts` (regenerated)
- [ ] **Step 1: Build the server to generate updated OpenAPI spec**
Run: `mvn clean compile -q`
- [ ] **Step 2: Start the server temporarily to extract OpenAPI JSON**
Run the server, fetch `http://localhost:8080/v3/api-docs`, save to `openapi.json`. Alternatively, if the project has an automated OpenAPI generation step, use that.
- [ ] **Step 3: Regenerate schema.d.ts from openapi.json**
Run the existing schema generation command (check package.json scripts in ui/).
- [ ] **Step 4: Verify the new types include `attributes` on ExecutionDetail, ProcessorNode, ExecutionSummary**
Read `ui/src/api/schema.d.ts` and confirm the fields are present. Note: the OpenAPI generator may strip nullable fields (e.g., `highlight` exists on Java `ExecutionSummary` but not in the current schema). If `attributes` is missing, add `@Schema(nullable = true)` or `@JsonInclude(JsonInclude.Include.ALWAYS)` annotation on the Java DTO and regenerate. Alternatively, manually add the field to `schema.d.ts`.
- [ ] **Step 5: Commit**
```bash
git add openapi.json ui/src/api/schema.d.ts
git commit -m "chore: regenerate openapi.json and schema.d.ts with attributes and test-expression"
```
---
## Task 7: Frontend — TypeScript Types and API Hooks
**Files:**
- Modify: `ui/src/api/queries/commands.ts`
- [ ] **Step 1: Add TapDefinition interface**
```typescript
export interface TapDefinition {
tapId: string;
processorId: string;
target: 'INPUT' | 'OUTPUT' | 'BOTH';
expression: string;
language: string;
attributeName: string;
attributeType: 'BUSINESS_OBJECT' | 'CORRELATION' | 'EVENT' | 'CUSTOM';
enabled: boolean;
version: number;
}
```
- [ ] **Step 2: Extend ApplicationConfig interface**
Add to the existing `ApplicationConfig` interface:
```typescript
taps: TapDefinition[];
tapVersion: number;
routeRecording: Record<string, boolean>;
compressSuccess: boolean;
```
- [ ] **Step 3: Add useTestExpression mutation hook**
```typescript
export function useTestExpression() {
return useMutation({
mutationFn: async ({ application, expression, language, body, target }: {
application: string;
expression: string;
language: string;
body: string;
target: string;
}) => {
const { data, error } = await api.POST('/config/{application}/test-expression', {
params: { path: { application } },
body: { expression, language, body, target },
});
if (error) throw new Error('Failed to test expression');
return data!;
},
});
}
```
- [ ] **Step 4: Add useReplayExchange mutation hook**
```typescript
export function useReplayExchange() {
return useMutation({
mutationFn: async ({ agentId, headers, body }: {
agentId: string;
headers: Record<string, string>;
body: string;
}) => {
const { data, error } = await api.POST('/agents/{id}/commands', {
params: { path: { id: agentId } },
body: { type: 'replay', payload: { headers, body } } as any,
});
if (error) throw new Error('Failed to send replay command');
return data!;
},
});
}
```
- [ ] **Step 5: Verify build**
Run: `cd ui && npm run build`
Expected: BUILD SUCCESS (or type errors in pages that now receive new fields — those pages are updated in later tasks)
- [ ] **Step 6: Commit**
```bash
git add ui/src/api/queries/commands.ts
git commit -m "feat: add TapDefinition type, extend ApplicationConfig, add test-expression and replay hooks"
```
---
## Task 8: Frontend — Business Attributes on ExchangeDetail
**Files:**
- Modify: `ui/src/pages/ExchangeDetail/ExchangeDetail.tsx`
- Modify: `ui/src/pages/ExchangeDetail/ExchangeDetail.module.css`
- [ ] **Step 1: Add attributes strip to exchange header**
After the header info row and before the stat boxes, render the route-level attributes:
```tsx
{detail.attributes && Object.keys(detail.attributes).length > 0 && (
<div className={styles.attributesStrip}>
<span className={styles.attributesLabel}>Attributes</span>
{Object.entries(detail.attributes).map(([key, value]) => (
<Badge key={key} label={`${key}: ${value}`} color="auto" variant="filled" />
))}
</div>
)}
```
- [ ] **Step 2: Add per-processor attributes in processor detail panel**
In the processor detail section (where the selected processor's message IN/OUT is shown), add attributes badges if the selected processor has them. Access via `detail.processors` tree — traverse the nested tree to find the processor at the selected index and read its `attributes` map. Note: body/headers data comes from a separate `useProcessorSnapshot` call, but `attributes` is inline on the `ProcessorNode` in the detail response — no additional API call needed.
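The tree lookup in Step 2 is a plain depth-first search. A stdlib-only Java sketch, with a minimal stand-in node whose `processorId`/`children` field names are assumed to mirror the `ProcessorNode` shape from Task 2:

```java
import java.util.List;
import java.util.Map;

public class FindProcessor {
    // Minimal stand-in for a ProcessorNode-like tree node (assumed shape)
    record Node(String processorId, Map<String, String> attributes, List<Node> children) {}

    // Depth-first search for the node with the given processorId; null if absent
    static Node find(Node node, String processorId) {
        if (node == null) return null;
        if (processorId.equals(node.processorId())) return node;
        if (node.children() != null) {
            for (Node child : node.children()) {
                Node hit = find(child, processorId);
                if (hit != null) return hit;
            }
        }
        return null;
    }
}
```

The frontend version is the same recursion over `detail.processors`, reading `attributes` off the matched node.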
- [ ] **Step 3: Add CSS for attributes strip**
```css
.attributesStrip {
display: flex;
gap: 8px;
flex-wrap: wrap;
align-items: center;
padding: 10px 14px;
background: var(--bg-surface);
border: 1px solid var(--border-subtle);
border-radius: var(--radius-lg);
margin-bottom: 16px;
}
.attributesLabel {
font-size: 11px;
color: var(--text-muted);
margin-right: 4px;
}
```
- [ ] **Step 4: Verify build**
Run: `cd ui && npm run build`
Expected: BUILD SUCCESS
- [ ] **Step 5: Commit**
```bash
git add ui/src/pages/ExchangeDetail/
git commit -m "feat: display business attributes on ExchangeDetail page"
```
---
## Task 9: Frontend — Replay Modal on ExchangeDetail
**Files:**
- Modify: `ui/src/pages/ExchangeDetail/ExchangeDetail.tsx`
- Modify: `ui/src/pages/ExchangeDetail/ExchangeDetail.module.css`
- [ ] **Step 1: Add replay button to exchange header**
Add a "Replay" button (primary variant) in the header action area. Only render for OPERATOR/ADMIN roles (check with `useAuthStore()`).
```tsx
<Button variant="primary" size="sm" onClick={() => setReplayOpen(true)}>
Replay
</Button>
```
- [ ] **Step 2: Build the replay modal component**
Add state: `replayOpen`, `replayHeaders` (key-value array), `replayBody` (string), `replayAgent` (string), `replayTab` ('headers' | 'body').
Pre-populate from `detail.inputHeaders` (parse JSON string to object) and `detail.inputBody`.
Use Modal (size="lg"), Tabs for Headers/Body, and the `useReplayExchange` mutation hook.
Headers tab: render editable rows with Input fields for key and value, remove button per row, "Add header" link at bottom.
Body tab: Textarea with monospace font, pre-populated with `detail.inputBody`.
- [ ] **Step 3: Wire up agent selector**
Use `useAgents('LIVE', detail.applicationName)` to populate a Select dropdown. Default to the agent that originally processed this exchange (`detail.agentId`) if it's still LIVE.
- [ ] **Step 4: Wire up replay submission**
On "Replay" click: call `replayExchange.mutate({ agentId, headers, body })`. Show loading spinner on button. On success: `toast('Replay command sent')`, close modal. On error: `toast('Replay failed: ...')`.
- [ ] **Step 5: Add CSS for replay modal elements**
Style the warning banner, header table, body textarea, and agent selector.
- [ ] **Step 6: Verify build**
Run: `cd ui && npm run build`
Expected: BUILD SUCCESS
- [ ] **Step 7: Commit**
```bash
git add ui/src/pages/ExchangeDetail/
git commit -m "feat: add replay modal with editable headers and body on ExchangeDetail"
```
---
## Task 10: Frontend — Attributes Column on Dashboard
**Files:**
- Modify: `ui/src/pages/Dashboard/Dashboard.tsx`
- [ ] **Step 1: Add attributes column to the exchanges table**
In `buildBaseColumns()` (~line 97-163), add a new column after the `applicationName` column. Use CSS module classes (not inline styles — per project convention in `feedback_css_modules_not_inline.md`):
```typescript
{
key: 'attributes',
header: 'Attributes',
render: (_, row) => {
const attrs = row.attributes;
if (!attrs || Object.keys(attrs).length === 0) return <span className={styles.muted}>—</span>;
const entries = Object.entries(attrs);
const shown = entries.slice(0, 2);
const overflow = entries.length - 2;
return (
<div className={styles.attrCell}>
{shown.map(([k, v]) => (
<Badge key={k} label={String(v)} color="auto" title={k} />
))}
{overflow > 0 && <span className={styles.attrOverflow}>+{overflow}</span>}
</div>
);
},
},
```
Add corresponding CSS classes to `Dashboard.module.css`:
```css
.attrCell { display: flex; gap: 4px; align-items: center; }
.attrOverflow { font-size: 10px; color: var(--text-muted); }
```
- [ ] **Step 2: Verify build**
Run: `cd ui && npm run build`
Expected: BUILD SUCCESS
- [ ] **Step 3: Commit**
```bash
git add ui/src/pages/Dashboard/Dashboard.tsx
git commit -m "feat: show business attributes as compact badges in dashboard exchanges table"
```
---
## Task 11: Frontend — RouteDetail Recording Toggle and Taps KPI
**Files:**
- Modify: `ui/src/pages/Routes/RouteDetail.tsx`
- Modify: `ui/src/pages/Routes/RouteDetail.module.css`
- [ ] **Step 1: Add recording toggle to route header**
Add imports: `import { useApplicationConfig, useUpdateApplicationConfig } from '../../api/queries/commands'` and `Toggle` from `@cameleer/design-system`.
In the route header section, add a pill-styled container with a Toggle component:
```tsx
const config = useApplicationConfig(appId);
const updateConfig = useUpdateApplicationConfig();
const isRecording = config.data?.routeRecording?.[routeId] !== false; // default true
function toggleRecording() {
if (!config.data) return;
const routeRecording = { ...config.data.routeRecording, [routeId]: !isRecording };
updateConfig.mutate({ ...config.data, routeRecording });
}
```
Render:
```tsx
<div className={styles.recordingPill}>
<span className={styles.recordingLabel}>Recording</span>
<Toggle checked={isRecording} onChange={toggleRecording} />
</div>
```
- [ ] **Step 2: Add "Active Taps" to KPI strip**
Count enabled taps for this route's processors (cross-reference tap processorIds with this route's processor list from diagram data). Add to `kpiItems` array.
- [ ] **Step 3: Add "Taps" tab to tabs array**
```typescript
const tapCount = /* count taps for this route */;
const tabs = [
{ label: 'Performance', value: 'performance' },
{ label: 'Recent Executions', value: 'executions', count: exchangeRows.length },
{ label: 'Error Patterns', value: 'errors', count: errorPatterns.length },
{ label: 'Taps', value: 'taps', count: tapCount },
];
```
- [ ] **Step 4: Add CSS for recording pill**
```css
.recordingPill {
display: flex;
align-items: center;
gap: 8px;
background: var(--bg-surface);
border: 1px solid var(--border-subtle);
border-radius: var(--radius-lg);
padding: 6px 12px;
}
.recordingLabel {
font-size: 11px;
color: var(--text-muted);
}
```
- [ ] **Step 5: Verify build**
Run: `cd ui && npm run build`
Expected: BUILD SUCCESS
- [ ] **Step 6: Commit**
```bash
git add ui/src/pages/Routes/
git commit -m "feat: add recording toggle and taps KPI to RouteDetail header"
```
---
## Task 12: Frontend — RouteDetail Taps Tab and Tap Modal
**Files:**
- Modify: `ui/src/pages/Routes/RouteDetail.tsx`
- Modify: `ui/src/pages/Routes/RouteDetail.module.css`
- [ ] **Step 1: Render taps DataTable when "Taps" tab is active**
Filter `config.data.taps` to only taps whose `processorId` exists in this route's diagram. Display in a DataTable with columns: Attribute, Processor, Expression, Language, Target, Type, Enabled (Toggle), Actions.
Empty state: "No taps configured for this route. Add a tap to extract business attributes from exchange data."
- [ ] **Step 2: Build the Add/Edit Tap modal**
State: `tapModalOpen`, `editingTap` (null for new, TapDefinition for edit), form fields.
Modal contents:
- FormField + Input for Attribute Name
- FormField + Select for Processor (options from `useDiagramLayout` node list)
- Two FormFields side-by-side: Select for Language (simple, jsonpath, xpath, jq, groovy) and Select for Target (INPUT, OUTPUT, BOTH)
- FormField + Textarea for Expression (monospace)
- Attribute Type pill selector (4 options, styled as button group)
- Toggle for Enabled
- [ ] **Step 3: Add Test Expression section to tap modal**
Collapsible section (default expanded) with two tabs: "Recent Exchange" and "Custom Payload".
Recent Exchange tab:
- Use `useSearchExecutions` with this route's filter to get recent exchanges as summaries
- Auto-select most recent exchange, then fetch its detail via `useExecutionDetail` to get the `inputBody` for the test payload
- Select dropdown to change exchange
- "Test" button calls `useTestExpression` mutation with the exchange's body
Custom Payload tab:
- Textarea pre-populated from the most recent exchange's body (fetched via detail endpoint)
- Switching from Recent Exchange tab carries the payload over
- "Test" button calls `useTestExpression` mutation
Result display: green box for success, red box for error.
- [ ] **Step 4: Wire up tap save**
On save: update the `taps` array in ApplicationConfig (add new or replace existing by tapId), then call `updateConfig.mutate()`. Generate `tapId` as UUID for new taps.
- [ ] **Step 5: Wire up tap delete**
On delete: remove tap from array, call `updateConfig.mutate()`. Import and use `ConfirmDialog` from `@cameleer/design-system` before deleting.
- [ ] **Step 6: Wire up enabled toggle inline**
Toggle in the DataTable row directly calls config update (toggle the specific tap's `enabled` field).
- [ ] **Step 7: Add CSS for taps tab content**
Style the taps header (title + button), tap modal form layout, test expression section, result boxes.
- [ ] **Step 8: Verify build**
Run: `cd ui && npm run build`
Expected: BUILD SUCCESS
- [ ] **Step 9: Commit**
```bash
git add ui/src/pages/Routes/
git commit -m "feat: add taps management tab with CRUD modal and expression testing on RouteDetail"
```
---
## Task 13: Frontend — AppConfigDetailPage Restructure
**Files:**
- Modify: `ui/src/pages/Admin/AppConfigDetailPage.tsx`
- Modify: `ui/src/pages/Admin/AppConfigDetailPage.module.css`
- [ ] **Step 1: Merge Logging + Observability into "Settings" section**
Replace the two separate `SectionHeader` sections with a single "Settings" section. Render all setting badges in a single flex row: Log Forwarding, Engine Level, Payload Capture, Metrics, Sampling Rate, Compress Success (new field).
Edit mode: all badges become dropdowns/toggles as before, plus a new Toggle for `compressSuccess`.
- [ ] **Step 2: Merge Traced Processors + Taps into "Traces & Taps" section**
Build a merged data structure: for each processor that has either a trace override or taps, create a row with Route, Processor, Capture badge, Taps badges.
To resolve processor-to-route mapping: fetch route catalog for this application, then for each route fetch its diagram. Build a `Map<processorId, routeId>` by iterating diagram nodes. For processors not found, show "unknown".
Table columns: Route, Processor, Capture (badge/select in edit mode), Taps (attribute badges with enabled indicators, read-only).
Summary: "N traced · M taps · manage taps on route pages".
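The processor-to-route mapping in Step 2 reduces to inverting per-route node lists into a single lookup map. A stdlib-only Java sketch; the `Map<routeId, nodeIds>` input shape is an assumption standing in for the per-route diagram fetches:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ProcessorRouteIndex {
    // Invert routeId -> [nodeIds] into processorId -> routeId
    static Map<String, String> index(Map<String, List<String>> nodesByRoute) {
        Map<String, String> byProcessor = new HashMap<>();
        nodesByRoute.forEach((routeId, nodeIds) ->
                nodeIds.forEach(nodeId -> byProcessor.putIfAbsent(nodeId, routeId)));
        return byProcessor;
    }

    // Resolve with the "unknown" fallback described in Step 2
    static String routeOf(Map<String, String> index, String processorId) {
        return index.getOrDefault(processorId, "unknown");
    }
}
```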
- [ ] **Step 3: Add "Route Recording" section**
Fetch route list from `useRouteCatalog` filtered by application. Render table with Route name and Toggle.
In view mode: toggles show current state (disabled).
In edit mode: toggles are interactive.
Default for routes not in `routeRecording` map: recording enabled (true).
Summary: "N of M routes recording".
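The "absent from the map means enabled" default can be pinned down in one helper; a Java sketch mirroring the UI check `routeRecording?.[routeId] !== false` from Task 11:

```java
import java.util.Map;

public class RecordingDefaults {
    // A route records unless routeRecording explicitly maps it to false
    static boolean isRecording(Map<String, Boolean> routeRecording, String routeId) {
        if (routeRecording == null) return true;
        return !Boolean.FALSE.equals(routeRecording.get(routeId));
    }
}
```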
- [ ] **Step 4: Update form state for new fields**
Add `compressSuccess` and `routeRecording` to the form state object and `updateField` handler. Ensure save sends the complete config including new fields.
- [ ] **Step 5: Update CSS for restructured sections**
Adjust section spacing, flex row for merged settings badges.
- [ ] **Step 6: Verify build**
Run: `cd ui && npm run build`
Expected: BUILD SUCCESS
- [ ] **Step 7: Commit**
```bash
git add ui/src/pages/Admin/
git commit -m "feat: restructure AppConfigDetailPage to Settings, Traces & Taps, Route Recording sections"
```
---
## Task 14: Final Build Verification and Push
- [ ] **Step 1: Run full backend build**
Run: `mvn clean compile -q`
Expected: BUILD SUCCESS
- [ ] **Step 2: Run full frontend build**
Run: `cd ui && npm run build`
Expected: BUILD SUCCESS
- [ ] **Step 3: Manual smoke test checklist**
Verify in browser:
- ExchangeDetail shows attributes strip when attributes exist
- ExchangeDetail replay button opens modal, can send replay
- Dashboard table shows attributes column
- RouteDetail shows recording toggle, taps tab with CRUD
- Tap modal test expression section works (if live agent available)
- AppConfigDetailPage shows 3 merged sections
- AppConfigDetailPage edit mode works for compress success and route recording
- [ ] **Step 4: Push to remote**
```bash
git push origin main
```

---
# Taps, Business Attributes & Enhanced Replay — UI Design
## Context
The Cameleer3 agent now supports camel-native data extraction taps, business attributes on executions, enhanced replay with editable payloads, per-route recording toggles, and success compression. The agent-side implementation is deployed and live.
The shared models (`TapDefinition`, extended `ApplicationConfig` with `taps`, `tapVersion`, `routeRecording`, `compressSuccess`) exist in `cameleer3-common` (agent repo). The server already depends on this library and persists `ApplicationConfig` as JSONB in the `application_config` table. However, the server-side execution DTOs (`ExecutionDetail`, `ExecutionSummary`, `ProcessorNode`) do not yet carry `attributes` fields, and the `CommandType` enum lacks `TEST_EXPRESSION`.
This spec covers all UI surfaces and the backend changes needed to support them.
## Design Decisions
| Decision | Choice | Rationale |
|----------|--------|-----------|
| Tap management location | RouteDetail contextual + AppConfigDetail overview | Taps target processors; processor list is contextual to a route. Admin overview for cross-route visibility. |
| Business attributes display | Header badges + per-processor + dashboard table | Primary value of taps — must be front-and-center for quick identification |
| Replay trigger | Button in ExchangeDetail header | Route-level action, clear and discoverable |
| Route recording location | RouteDetail toggle + AppConfigDetail bulk table | Contextual single-route control + centralized bulk management |
| Compress success | Badge in AppConfigDetail Settings section | Simple boolean toggle, admin-level concern |
| Expression testing | Agent-side evaluation via TEST_EXPRESSION command | Only the agent has the Camel expression engine; works for all languages |
| AppConfigDetail layout | 3 sections: Settings, Traces & Taps, Route Recording | Collapsed from 4 sections; Logging+Observability merged, TracedProcessors+Taps merged |
## Prerequisites
Before UI work can begin, the following backend changes are required:
1. **Update `cameleer3-common` dependency** — ensure the server pulls a version that includes `TapDefinition`, and `ApplicationConfig` with `taps`, `tapVersion`, `routeRecording`, `compressSuccess` fields.
2. **Add `attributes` to execution DTOs** — `ExecutionDetail`, `ProcessorNode`, and `ExecutionSummary` need a `Map<String, String> attributes` field. This requires changes to the PostgreSQL ingestion pipeline (store attributes from agent-submitted `RouteExecution`/`ProcessorExecution`), the detail service (reconstruct attributes), and the OpenSearch indexing (index attributes for search results).
3. **Add `TEST_EXPRESSION` to `CommandType`** enum.
4. **Enhance `CommandAckRequest`** — add an optional `data` field (`String`, JSON) to carry structured results (currently only `status` + `message`). The test-expression endpoint needs the result value from the ACK.
5. **Regenerate `openapi.json`** after all backend REST API changes.
## Page Changes
### 1. ExchangeDetail
**Business attributes strip** between header info and stat boxes:
- Route-level attributes as auto-colored badges (`key: value`, monospace)
- Wraps on overflow
- Empty state: section not rendered when no attributes exist
**Per-processor attributes** in processor detail panel:
- Badges below processor info, before message IN/OUT sections
- Shows attributes extracted at that specific processor
**Replay button** in header action area (top-right), primary blue. Requires OPERATOR or ADMIN role:
- Opens large Modal with:
- Warning banner ("This will re-execute the exchange on the selected agent")
- Target Agent select — uses `useAgents(application, 'LIVE')` to populate. Disabled with message when no LIVE agents available.
- Tabs: Headers (editable key-value table with add/remove) | Body (editable monospace textarea, JSON indicator)
- Pre-populated from original exchange's `inputHeaders` and `inputBody` (already available on `ExecutionDetail`)
- Cancel / Replay footer
- Sends REPLAY command via `POST /api/v1/agents/{agentId}/commands`
- Payload: `{ "type": "replay", "payload": { "headers": {...}, "body": "..." } }`
- Success: toast with confirmation message from ACK
- Failure: toast with error message
- Loading state: Replay button shows spinner while awaiting ACK
### 2. Dashboard Exchanges Table
**New "Attributes" column** between App and Exchange ID:
- First 2 attribute values as compact auto-colored badges (value only; key shown via native `title` attribute on hover)
- "+N" overflow indicator when more than 2
- Em-dash when no attributes
### 3. RouteDetail
**Recording toggle** in route header (top-right):
- Toggle in pill container with "Recording" label
- Updates `routeRecording` map in ApplicationConfig via PUT
- Requires OPERATOR or ADMIN role
**"Active Taps" KPI card** added to KPI strip.
**New "Taps" tab** (fourth tab alongside Performance, Recent Executions, Error Patterns):
- Header: "Data Extraction Taps" + "Add Tap" button (OPERATOR or ADMIN only)
- DataTable columns: Attribute, Processor, Expression, Language, Target, Type, Enabled (toggle), Actions (edit/delete)
- Add/edit opens tap modal
- Empty state: "No taps configured for this route. Add a tap to extract business attributes from exchange data."
**Add/Edit Tap modal** (Modal size="md"):
- Fields: Attribute Name (input), Processor (select from route diagram via `useDiagramLayout`), Language + Target (side-by-side selects), Expression (monospace textarea), Attribute Type (pill selector: BUSINESS_OBJECT / CORRELATION / EVENT / CUSTOM), Enabled toggle
- **Test Expression section** (collapsible, default expanded):
- Tabs: "Recent Exchange" | "Custom Payload"
- Recent Exchange: auto-selects most recent exchange with captured data at selected processor. Dropdown to change. Test button sends expression to live agent. Result display.
- Custom Payload: editable textarea pre-populated from most recent exchange body. Switching from Recent Exchange carries the payload over. Test button → result display.
- Result: green success box with extracted value, or red error box with message
- Loading state: spinner on Test button while awaiting agent response
- No agents state: "No LIVE agents available to test expression" with Test button disabled
- Note showing which agent evaluated and which language was used
- Save / Cancel footer
- Save writes the tap to the `taps` array in ApplicationConfig via existing `PUT /api/v1/config/{application}`
### 4. AppConfigDetailPage
Restructured to **3 sections** (from 4):
**Section 1 — Settings:** Merged Logging + Observability. All settings as badges in flex row: Log Forwarding, Engine Level, Payload Capture, Metrics, Sampling Rate, Compress Success (new). Edit mode: badges become dropdowns/toggles.
**Section 2 — Traces & Taps:** Merged Traced Processors + Data Extraction Taps. Table columns: Route, Processor, Capture (badge or em-dash), Taps (attribute name badges with enabled/disabled indicator). Sorted by route. Capture editable in edit mode; taps read-only with "manage taps on route pages" hint. Summary: "N traced · M taps".
Processor-to-route mapping: Taps carry a `processorId` that belongs to a specific route. The route association is derived by cross-referencing with route diagram data (via `useDiagramLayout` per route from the route catalog). If a processor cannot be mapped to a route (e.g., route no longer active), show "unknown" in the Route column.
**Section 3 — Route Recording:** Table: Route + Recording toggle. Summary: "N of M routes recording". Toggles editable in edit mode. Route list from `useRouteCatalog` filtered by application. Routes not present in the `routeRecording` map default to recording enabled (consistent with agent behavior where absence = enabled).
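The absence-means-enabled default can be captured in one helper, a sketch with an assumed name:

```typescript
// Routes missing from the routeRecording map default to recording enabled,
// matching agent behavior as described above. Function name is illustrative.
function isRouteRecording(
  routeRecording: Record<string, boolean>,
  routeId: string,
): boolean {
  return routeRecording[routeId] ?? true;
}
```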
### 5. AgentHealth Config Bar
No changes. New features managed at AppConfig level, not per-agent.
## RBAC Permissions
| Action | Minimum Role |
|--------|-------------|
| View business attributes | VIEWER |
| View taps / traces / recording state | VIEWER |
| Create / edit / delete taps | OPERATOR |
| Toggle route recording | OPERATOR |
| Edit app config settings | OPERATOR |
| Replay exchange | OPERATOR |
| Test expression | OPERATOR |
These align with the existing pattern where VIEWER sees data and OPERATOR can modify configuration.
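A minimum-role check consistent with the table might look like this; the numeric ordering VIEWER < OPERATOR < ADMIN is an assumption, as is the helper name:

```typescript
// Assumed role ordering: VIEWER < OPERATOR < ADMIN.
const ROLE_RANK = { VIEWER: 0, OPERATOR: 1, ADMIN: 2 } as const;
type Role = keyof typeof ROLE_RANK;

function hasMinimumRole(userRole: Role, required: Role): boolean {
  return ROLE_RANK[userRole] >= ROLE_RANK[required];
}
```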
## TypeScript Interface Changes
```typescript
// Add to ApplicationConfig in commands.ts
interface ApplicationConfig {
// ... existing fields ...
taps: TapDefinition[]
tapVersion: number
routeRecording: Record<string, boolean>
compressSuccess: boolean
}
interface TapDefinition {
tapId: string
processorId: string
target: 'INPUT' | 'OUTPUT' | 'BOTH'
expression: string
language: string
attributeName: string
attributeType: 'BUSINESS_OBJECT' | 'CORRELATION' | 'EVENT' | 'CUSTOM'
enabled: boolean
version: number
}
```
## Backend Changes
### New Endpoint: Test Expression
`POST /api/v1/config/{application}/test-expression`
Request:
```json
{
"expression": "${body.orderId}",
"language": "simple",
"body": "{\"orderId\": \"ORD-123\"}",
"target": "OUTPUT"
}
```
Response (success):
```json
{ "result": "ORD-123" }
```
Response (failure):
```json
{ "error": "Expression evaluation timed out (50ms limit)" }
```
**Request-reply mechanism:** The server selects a LIVE agent for the application, sends a `TEST_EXPRESSION` command via SSE, then awaits the ACK with a `CompletableFuture` (timeout 5s). The `CommandAckRequest` record is extended with an optional `data` field (JSON string) to carry the evaluation result. The controller completes the future when the ACK arrives, returning the result to the HTTP caller. If no LIVE agent is available or the timeout expires, the endpoint returns an appropriate error response.
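The pending-ACK pattern can be sketched in TypeScript for consistency with the other examples here; the real implementation is Java with `CompletableFuture`, and all names below are illustrative:

```typescript
// Register a pending request keyed by command ID; resolve when the ACK
// arrives, reject on timeout. Mirrors the CompletableFuture pattern above.
const pending = new Map<string, (data: string) => void>();

function awaitAck(commandId: string, timeoutMs: number): Promise<string> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => {
      pending.delete(commandId);
      reject(new Error('Timed out waiting for agent ACK'));
    }, timeoutMs);
    pending.set(commandId, (data) => {
      clearTimeout(timer);
      pending.delete(commandId);
      resolve(data);
    });
  });
}

// Called when the agent's ACK arrives; `data` carries the JSON result.
function completeAck(commandId: string, data: string): void {
  pending.get(commandId)?.(data);
}
```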
### Replay Command Payload
The REPLAY command (already exists in `CommandType`) is sent via `POST /api/v1/agents/{agentId}/commands`:
```json
{
"type": "replay",
"payload": {
"headers": {
"Content-Type": "application/json",
"X-Correlation-Id": "corr-abc123"
},
"body": "{\"orderId\": \"ORD-2024-78542\", ...}"
}
}
```
The agent uses `ProducerTemplate.send()` to replay the exchange on the original route with the provided headers and body.
### Execution DTO Changes
**`ExecutionDetail`** — add `Map<String, String> attributes` (route-level aggregated)
**`ProcessorNode`** — add `Map<String, String> attributes` (per-processor)
**`ExecutionSummary`** — add `Map<String, String> attributes` (route-level, for dashboard table)
These require:
- PostgreSQL ingestion: store attributes from incoming `RouteExecution` and `ProcessorExecution` (the agent already sends them)
- Detail service: include attributes when reconstructing the execution tree
- OpenSearch indexing: index route-level attributes for search result enrichment
### CommandType Addition
Add `TEST_EXPRESSION` to the `CommandType` enum.
### CommandAckRequest Enhancement
Extend from `(String status, String message)` to `(String status, String message, String data)` where `data` is an optional JSON string for structured results.
## Design System Impact
No new components required. Uses existing: Modal, DataTable, Badge, Toggle, Select, Input, Textarea, FormField, Tabs, Button, CodeBlock, Collapsible.
## Files Touched
### Frontend (ui/src/)
- `api/queries/commands.ts` — TapDefinition interface, extend ApplicationConfig, add test-expression mutation, add replay mutation
- `pages/ExchangeDetail/ExchangeDetail.tsx` — attributes strip, per-processor attributes, replay button + modal
- `pages/ExchangeDetail/ExchangeDetail.module.css` — attributes strip styles, replay modal styles
- `pages/Dashboard/Dashboard.tsx` — attributes column in exchanges table
- `pages/Routes/RouteDetail.tsx` — recording toggle, active taps KPI, taps tab, tap modal with test section
- `pages/Routes/RouteDetail.module.css` — taps tab, recording toggle, tap modal styles
- `pages/Admin/AppConfigDetailPage.tsx` — restructure to 3 sections, traces & taps merged table, route recording table, compress success badge
- `pages/Admin/AppConfigDetailPage.module.css` — updated section styles
### Backend (cameleer3-server-app/)
- `controller/ApplicationConfigController.java` — add test-expression endpoint
- `dto/CommandAckRequest.java` — add optional `data` field
- `controller/AgentCommandController.java` — support CompletableFuture-based ACK for test-expression
### Backend (cameleer3-server-core/)
- `agent/CommandType.java` — add TEST_EXPRESSION
- `detail/ExecutionDetail.java` — add attributes field
- `detail/ProcessorNode.java` — add attributes field
- `search/ExecutionSummary.java` — add attributes field
- `detail/DetailService.java` — include attributes in reconstruction
- `storage/` — store attributes from ingested executions
- `search/SearchService.java` — include attributes in search results
### Generated
- `ui/src/api/schema.d.ts` — regenerate from openapi.json
- `openapi.json` — regenerate after backend changes


# Execution Overlay & Debugger — Design Spec
**Sub-project:** 2 of 3 (Component → **Execution Overlay** → Page Integration)
**Scope:** Overlay real execution data onto the ProcessDiagram component from sub-project 1. Adds node status visualization, per-compound iteration stepping, a tabbed detail panel, and error navigation. Does NOT include page integration — that is sub-project 3.
---
## Problem
The ProcessDiagram from sub-project 1 shows route topology but cannot display what actually happened during an exchange's execution. Users investigating failures must cross-reference between the diagram and separate execution detail views. There is no way to see which processors were hit, which were skipped, where errors occurred, or what the message looked like at each step.
## Goal
Build an `ExecutionDiagram` wrapper component that overlays execution data onto ProcessDiagram, turning it into an "after-the-fact debugger." Users can see the execution path at a glance (green = OK, red = failed, dimmed = skipped), step through loop/split iterations independently, and inspect processor-level details (input/output body, headers, errors, timing) in a tabbed detail panel below the diagram.
---
## Decisions
| Decision | Choice | Rationale |
|----------|--------|-----------|
| Architecture | Wrapper component (`ExecutionDiagram`) composing `ProcessDiagram` | Keeps topology component pure; execution concerns isolated |
| Layout | Top/bottom IDE split (diagram top, detail panel bottom) | Left-to-right diagram needs full width; familiar IDE pattern |
| Node status | Tinted backgrounds + status badges | Green tint + checkmark for OK, red tint + ! for failed, dimmed for skipped — scannable at a glance |
| Duration display | Badge on each executed node (bottom-right) | Quick bottleneck identification without opening detail panel |
| Iteration stepping | Per-compound stepper in header bar | Independent stepping at each nesting level; contextually placed |
| Error navigation | Passive highlighting + "Jump to Error" action | Red border + ! badge on failed node; jump action drills into sub-routes if needed |
| Cross-route errors | Red border + drill-down arrow on calling node | Communicates failure exists here; arrow signals root cause is deeper |
| Detail panel tabs | Info, Headers, Input, Output, Error, Config, Timeline | Comprehensive debugging context |
| Error tab visibility | Always visible, grayed out when no error | No layout shift; consistent tab bar |
| Reusability | Component usable standalone and embedded | Immediately replaces ExchangeDetail flow view; usable elsewhere |
---
## 0. Backend Prerequisites
### Iteration fields on ProcessorNode
The `ProcessorExecution` model in `cameleer3-common` has iteration tracking fields (`loopIndex`, `loopSize`, `splitIndex`, `splitSize`, `multicastIndex`), but the server's storage layer and API response model do not surface them. The following changes are needed:
**Storage:**
- Add columns to `processor_records` table: `loop_index`, `loop_size`, `split_index`, `split_size`, `multicast_index` (all nullable integers)
- Flyway migration to add columns
- Update `ExecutionStore` to persist and read these fields
**Detail model:**
- Add fields to `ProcessorNode.java`: `loopIndex`, `loopSize`, `splitIndex`, `splitSize`, `multicastIndex`
- Update `DetailService.buildTree()` to populate them from storage
**API:**
- Regenerate `openapi.json` and `schema.d.ts` to include the new fields
### Snapshot endpoint: accept processorId
The current snapshot endpoint `GET /executions/{id}/processors/{index}/snapshot` uses a positional index into the flat processor list. This is fragile when the tree structure changes. Add an alternative parameter:
- `GET /executions/{id}/processors/by-id/{processorId}/snapshot` — fetches snapshot by processor ID
- Add corresponding `useProcessorSnapshotById(executionId, processorId)` hook on the frontend
### Diagram loading by content hash
`ExecutionDetail` includes `diagramContentHash` linking to the diagram version active during the execution. The existing `useDiagramLayout(contentHash, direction)` hook already supports loading by content hash. The `ExecutionDiagram` wrapper uses this path instead of `useDiagramByRoute(application, routeId)`.
---
## 1. ExecutionDiagram Wrapper Component
### Location
```
ui/src/components/ExecutionDiagram/
├── ExecutionDiagram.tsx # Root: top/bottom split, orchestrates overlay + detail panel
├── ExecutionDiagram.module.css # Layout styles (splitter, exchange bar, panel)
├── useExecutionOverlay.ts # Hook: maps execution data → node overlay state
├── useIterationState.ts # Hook: per-compound iteration tracking
├── ExecutionContext.tsx # React context: shares execution data + iteration state
├── DetailPanel.tsx # Bottom panel: tabs container
├── tabs/InfoTab.tsx # Processor metadata + attributes
├── tabs/HeadersTab.tsx # Input/output headers side-by-side
├── tabs/BodyTab.tsx # Shared: formatted message body (used by Input + Output)
├── tabs/ErrorTab.tsx # Exception details + stack trace
├── tabs/ConfigTab.tsx # Processor configuration (TODO: agent data)
├── tabs/TimelineTab.tsx # Gantt-style processor duration chart
├── types.ts # Overlay-specific types
└── index.ts # Public exports
```
### Props API
```typescript
interface ExecutionDiagramProps {
/** Execution to overlay — fetched externally or by executionId */
executionId: string;
/** Optional: pre-fetched execution detail (skips internal fetch) */
executionDetail?: ExecutionDetail;
/** Diagram direction */
direction?: 'LR' | 'TB';
/** Known route IDs for drill-down resolution */
knownRouteIds?: Set<string>;
/** Called when user triggers node actions (trace toggle, tap config) */
onNodeAction?: (nodeId: string, action: NodeAction) => void;
/** Active node configs (trace/tap badges) */
nodeConfigs?: Map<string, NodeConfig>;
className?: string;
}
```
### Behavior
1. Fetches `ExecutionDetail` via `useExecutionDetail(executionId)` (or uses pre-fetched prop)
2. Extracts the `diagramContentHash` from the execution to load the correct diagram version
3. Maps processor execution tree to diagram node IDs (processor IDs match diagram node IDs)
4. Passes overlay data to ProcessDiagram via new overlay props
5. Manages selected node state, detail panel content, and iteration stepping
---
## 2. ProcessDiagram Overlay Props Extension
The existing `ProcessDiagramProps` gains optional overlay props. When absent, the diagram renders in topology-only mode (sub-project 1 behavior). When present, nodes render with execution state.
```typescript
interface ProcessDiagramProps {
// ... existing props from sub-project 1 ...
/** Execution overlay: maps diagram node ID → execution state */
executionOverlay?: Map<string, NodeExecutionState>;
/** Per-compound iteration state: maps compound node ID → current iteration index */
iterationState?: Map<string, number>;
/** Called when user changes iteration on a compound stepper */
onIterationChange?: (compoundNodeId: string, iterationIndex: number) => void;
}
interface NodeExecutionState {
status: 'COMPLETED' | 'FAILED';
durationMs: number;
/** True if this node's target sub-route failed (for DIRECT/SEDA nodes) */
subRouteFailed?: boolean;
/** True if trace data (input/output body) is available */
hasTraceData?: boolean;
/** Loop/split iteration info for the compound containing this node */
iterationIndex?: number;
iterationCount?: number;
}
```
---
## 3. Node Visual States
### Executed — Completed
- Background: green tint (`#F0F9F1`)
- Border: 1.5px solid `--success` (`#3D7C47`) + 4px green left accent
- Badge: green circle with white checkmark (top-right corner, 16px diameter)
- Duration: green text bottom-right (e.g., "5ms")
### Executed — Failed
- Background: red tint (`#FDF2F0`)
- Border: 2px solid `--error` (`#C0392B`)
- Badge: red circle with white `!` (top-right corner, 16px diameter)
- Duration: red text bottom-right
- Label text turns red, subtitle shows "FAILED"
### Sub-Route Failure (DIRECT/SEDA node whose target route failed)
- Same visual as Failed (red tint, red border, red ! badge)
- Additional: drill-down arrow icon (bottom-left corner)
- "Jump to Error" action on this node auto-drills into the sub-route
### Not Executed (Skipped)
- Opacity: 35%
- No status badge, no duration badge
- Original topology styling (no tint)
### Compound Node Status
Compound nodes (CHOICE, LOOP, SPLIT, etc.) derive their status from their children:
- If any child failed → compound shows as COMPLETED (the compound itself executed) but the failed child shows individually
- The compound does not get its own status badge — only leaf processors do
- Compound background tint: subtle green if all children OK, no tint if mixed results
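The tint rule above reduces to one predicate, sketched here with simplified types and an illustrative name:

```typescript
// Subtle green tint only when every executed child completed; no tint for
// mixed results (or no executed children). Rule is from the spec above.
type ChildStatus = 'COMPLETED' | 'FAILED';

function compoundTint(children: ChildStatus[]): 'green' | 'none' {
  if (children.length === 0) return 'none';
  return children.every((s) => s === 'COMPLETED') ? 'green' : 'none';
}
```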
### RUNNING Executions
Live tracking of RUNNING executions is out of scope (see Non-Goals). If `ExecutionDetail.status` is `RUNNING`, the ExecutionDiagram still overlays the processors that have completed so far: completed processors get the green/red treatment, processors not yet reached are dimmed. No special "in-progress" visual is needed.
### Edge States
- **Traversed edge:** solid, `--success` green (`#3D7C47`), 1.5px stroke
- **Not traversed edge:** dashed, `#9CA3AF` gray, 1px stroke
---
## 4. Per-Compound Iteration Stepper
### Placement
Small control widget embedded in the compound node's header bar (right-aligned). Rendered as part of the `CompoundNode` component when overlay data includes iteration info.
### Visual
Semi-transparent background pill inside the purple/colored header:
```
LOOP [< 3 / 5 >]
```
Prev/next buttons with the current iteration and total count.
### Behavior
- Each compound (LOOP, SPLIT, MULTICAST) tracks its iteration independently via `iterationState` map
- Changing iteration updates the overlay data for all children of that compound
- Nested compounds: outer loop at iteration 2, inner split at branch 1 — independent
- CHOICE compounds: no stepper. The taken branch renders with execution state; untaken branches are dimmed
- Keyboard: left/right arrow keys step when compound is hovered
- Detail panel syncs: selecting a processor inside a loop shows that iteration's snapshot data
### Data Flow
The `useIterationState` hook maintains a `Map<compoundNodeId, currentIndex>`. When an iteration changes:
1. The hook recalculates which `ProcessorExecution` children correspond to the selected iteration (using `loopIndex`, `splitIndex`, or `multicastIndex` fields)
2. Rebuilds the `executionOverlay` map for that compound's children
3. ProcessDiagram re-renders with updated overlay
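Step 1 of this flow can be sketched as a filter over the iteration fields; types are simplified from the real `ProcessorExecution` model:

```typescript
// Select the execution entries matching the currently stepped iteration of a
// loop/split/multicast compound, using whichever index field is populated.
interface IterationInfo {
  processorId: string;
  loopIndex?: number;
  splitIndex?: number;
  multicastIndex?: number;
}

function childrenForIteration(
  executions: IterationInfo[],
  iteration: number,
): IterationInfo[] {
  return executions.filter(
    (e) =>
      e.loopIndex === iteration ||
      e.splitIndex === iteration ||
      e.multicastIndex === iteration,
  );
}
```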
---
## 5. Exchange Summary Bar
A thin bar above the diagram showing exchange-level information:
- Exchange ID (monospace, copyable)
- Status badge (COMPLETED green, FAILED red)
- Application / route ID
- Total duration
- "Jump to Error" button (only for FAILED exchanges) — scrolls diagram to failed node, drills into sub-route if needed
---
## 6. Detail Panel
### Layout
Below the diagram, separated by a resizable splitter. Default split: 60% diagram / 40% panel. Minimum panel height: 120px. The panel can be collapsed by dragging the splitter to the bottom.
The panel has:
1. **Processor header:** selected processor name, status badge, processor ID, duration
2. **Tab bar:** Info | Headers | Input | Output | Error | Config | Timeline
3. **Tab content area:** scrollable
When no processor is selected, the panel shows exchange-level data:
- **Info tab:** exchange metadata (exchangeId, correlationId, route, application, total duration, engine level, route-level attributes)
- **Headers tab:** route-level input/output headers
- **Input tab:** route-level input body
- **Output tab:** route-level output body
- **Error tab:** route-level error (if failed)
- **Config tab:** grayed out (not applicable at exchange level)
- **Timeline tab:** Gantt chart of all processors (always available)
### Tab: Info
Grid layout showing processor metadata:
- Processor ID, Type, Status
- Start time, End time, Duration
- Endpoint URI, Resolved Endpoint URI
- Attributes section: tap-extracted attributes as pill badges
### Tab: Headers
Side-by-side layout:
- Left: Input headers (key/value table)
- Right: Output headers (key/value table)
- New/changed headers highlighted in green
Data source: `useProcessorSnapshotById(executionId, processorId)` → `inputHeaders`, `outputHeaders`
### Tab: Input
Formatted message body at processor entry:
- Auto-detect format (JSON, XML, plain text)
- Syntax-highlighted code block (dark theme)
- Copy button
- Byte size indicator
Data source: `useProcessorSnapshotById(executionId, processorId)` → `inputBody`
### Tab: Output
Same layout as Input tab, showing processor exit body.
Data source: `useProcessorSnapshotById(executionId, processorId)` → `outputBody`
### Tab: Error
Shown for all processors but grayed out when the selected processor has no error.
When error exists:
- Exception type (class name)
- Error message
- Root cause type + message
- Stack trace in monospace block
Data source: `ProcessorNode.errorMessage`, `ProcessorNode.errorStackTrace` from the execution detail tree
### Tab: Config
Processor configuration from the route definition. **TODO:** Requires agent-side work to capture and expose processor configuration metadata on `RouteNode`. Initially shows a placeholder indicating config data is not yet available.
### Tab: Timeline
Gantt-style horizontal bar chart showing executed processors' relative durations:
- One row per processor from the `ProcessorNode` execution tree (flattened in execution order) — only executed processors, not all diagram nodes
- Bar width proportional to duration relative to total route duration
- Green bars for completed, red for failed
- Clicking a bar selects that processor in the diagram and scrolls to it
- Duration label on the right of each row
- When inside a loop/split compound, shows the current iteration's processors
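The proportional bar-width rule can be sketched as follows; the 1% minimum floor is an assumption so very short steps remain visible, not something the spec states:

```typescript
// Bar width as a percentage of the total route duration. The 1% floor is an
// illustrative assumption; a zero/invalid total yields a zero-width bar.
function barWidthPercent(durationMs: number, totalMs: number): number {
  if (totalMs <= 0) return 0;
  return Math.max(1, (durationMs / totalMs) * 100);
}
```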
---
## 7. Data Flow
```
ExecutionDiagram
├── useExecutionDetail(executionId)
│ → ExecutionDetail { processors: ProcessorNode[], diagramContentHash, ... }
├── useExecutionOverlay(executionDetail, iterationState)
│ → Maps ProcessorNode tree → Map<diagramNodeId, NodeExecutionState>
│ → Handles iteration filtering (loopIndex, splitIndex matching)
│ → Detects sub-route failures on DIRECT/SEDA nodes
├── useIterationState()
│ → Map<compoundNodeId, currentIterationIndex>
│ → onIterationChange(compoundId, index) callback
├── ProcessDiagram
│ props: { application, routeId, executionOverlay, iterationState, onIterationChange, ... }
│ Renders nodes with overlay visual states
└── DetailPanel
├── useProcessorSnapshotById(executionId, selectedProcessorId)
│ → { inputBody, outputBody, inputHeaders, outputHeaders }
└── Tabs render from ProcessorNode + snapshot data
```
### Processor-to-Node Mapping
The `processorId` field on `ProcessorNode` is the same value as the `id` field on diagram `PositionedNode`. The agent uses diagram node IDs as processor IDs during route model extraction, so no separate mapping or `diagramNodeId` field is needed. The `useExecutionOverlay` hook builds its map by walking the `ProcessorNode` tree and keying on `processorId`, which directly matches diagram node IDs.
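The tree walk can be sketched with simplified types (the real `ProcessorNode` has more fields); names below are illustrative:

```typescript
// Walk the ProcessorNode tree and index overlay state by processorId,
// which directly equals the diagram node ID per the mapping rule above.
interface ProcessorNodeLite {
  processorId: string;
  status: 'COMPLETED' | 'FAILED';
  durationMs: number;
  children?: ProcessorNodeLite[];
}

function buildOverlay(
  roots: ProcessorNodeLite[],
): Map<string, { status: string; durationMs: number }> {
  const overlay = new Map<string, { status: string; durationMs: number }>();
  const walk = (node: ProcessorNodeLite) => {
    overlay.set(node.processorId, {
      status: node.status,
      durationMs: node.durationMs,
    });
    node.children?.forEach(walk);
  };
  roots.forEach(walk);
  return overlay;
}
```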
### Snapshot Loading
Per-processor body/header data is fetched lazily via `useProcessorSnapshotById(executionId, processorId)` when a processor is selected and the user switches to Input/Output/Headers tabs. This avoids loading all snapshot data upfront for routes with many processors. The snapshot endpoint accepts `processorId` (see Backend Prerequisites, Section 0).
---
## 8. Jump to Error
When the user clicks "Jump to Error":
1. Find the first `ProcessorNode` with `status === 'FAILED'` in the execution tree
2. If the failed processor is a DIRECT/SEDA node with `subRouteFailed: true`:
a. Drill down into the target route (same as double-click drill-down from sub-project 1)
b. Recursively find the failed processor in the sub-route's execution
3. Select the failed processor node
4. Pan/zoom the diagram to center the failed node
5. Show the Error tab in the detail panel
This handles arbitrarily deep cross-route error chains (route A calls direct:B which calls direct:C where the actual failure is).
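Step 1, finding the first failed processor, is a depth-first search over the execution tree; a minimal sketch with simplified types:

```typescript
// Depth-first search for the first FAILED processor, including failed
// children nested inside compounds that themselves completed.
interface ExecNode {
  processorId: string;
  status: 'COMPLETED' | 'FAILED';
  children?: ExecNode[];
}

function firstFailed(nodes: ExecNode[]): ExecNode | undefined {
  for (const node of nodes) {
    if (node.status === 'FAILED') return node;
    const inChild = firstFailed(node.children ?? []);
    if (inChild) return inChild;
  }
  return undefined;
}
```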
---
## 9. Integration with ExchangeDetail Page
The `ExecutionDiagram` component replaces the existing "Flow" view tab on the `ExchangeDetail` page. The page passes `executionId` and the component handles everything internally.
```typescript
// In ExchangeDetail page
<ExecutionDiagram
executionId={executionId}
knownRouteIds={knownRouteIds}
onNodeAction={handleNodeAction}
nodeConfigs={nodeConfigs}
/>
```
The existing Gantt timeline view on ExchangeDetail can be removed or kept as an alternative view — the Timeline tab inside the detail panel provides the same functionality.
---
## Non-Goals (Sub-project 3)
- Replacing RouteFlow on the Dashboard or RouteDetail pages
- Aggregate execution heatmaps (showing hot processors across many exchanges)
- Live execution tracking (watching a RUNNING exchange in real-time)
- Diff between two executions
- Export/share execution view
---
## Verification
1. `npx tsc -p tsconfig.app.json --noEmit` passes
2. ExecutionDiagram renders on ExchangeDetail page for a known failed exchange
3. Completed nodes show green tint + checkmark + duration badge
4. Failed nodes show red tint + ! badge + red duration
5. Skipped nodes are dimmed to 35% opacity
6. Edges between executed nodes turn green; edges to skipped nodes are dashed gray
7. Loop/split compounds show iteration stepper; stepping updates child overlay
8. CHOICE compounds highlight taken branch, dim untaken branches
9. Nested loops step independently
10. Clicking a node shows its data in the detail panel
11. Detail panel tabs: Info shows metadata + attributes, Headers shows side-by-side, Input/Output show formatted body, Error shows exception + stack trace, Timeline shows Gantt chart
12. "Jump to Error" navigates to and selects the failed processor, drilling into sub-routes if needed
13. Error tab grayed out for non-failed processors
14. Config tab shows placeholder (TODO)
15. Resizable splitter between diagram and detail panel works

# Interactive Process Diagram — Design Spec
**Sub-project:** 1 of 3 (Component → Execution Overlay → Page Integration)
**Scope:** Interactive SVG diagram component with zoom/pan, node interactions, config badges, and a configurable layout direction. Does NOT include execution overlay or page replacement — those are sub-projects 2 and 3.
---
## Problem
The current RouteFlow component renders Camel routes as a flat vertical list of nodes. It cannot show compound structures (choice branches, split fan-out, try-catch nesting), does not support zoom/pan, and has no interactive controls beyond click-to-select. Routes with 10+ processors become hard to follow, and the relationship between processors is not visually clear.
## Goal
Build an interactive process diagram component styled after MuleSoft / TIBCO BusinessWorks 5, rendering Camel routes as left-to-right flow diagrams using server-computed ELK layout coordinates. The component supports zoom/pan, node hover toolbars for tracing/tap configuration, config badge indicators, and a collapsible detail side-panel.
---
## Decisions
| Decision | Choice | Rationale |
|----------|--------|-----------|
| Rendering | SVG + custom React | Full control over styling, no heavy deps. Server owns layout. |
| Node style | Top-Bar Cards | TIBCO BW5-inspired white cards with colored top accent bar. Professional, clean. |
| Flow direction | Left-to-right (default) | Matches MuleSoft/BW5 conventions. Query param for flexibility. |
| Component location | `ui/src/components/ProcessDiagram/` | Tightly coupled to Cameleer data model, no design-system abstraction needed. |
| Interactions | Hover floating toolbar + click-to-select | Discoverable, no right-click dependency. |
| Error handlers | Below main flow | Clear visual separation, labeled divider. |
| Selection behavior | Side panel with config info; execution data only with overlay | Keeps base diagram focused on topology. |
---
## 1. Backend: Layout Direction Parameter
### Change
Add optional `direction` query parameter to diagram render endpoints.
### Files
- `cameleer3-server-app/.../diagram/ElkDiagramRenderer.java` — accept direction param, map to ELK `Direction.RIGHT` (LR) or `Direction.DOWN` (TB)
- `cameleer3-server-core/.../diagram/DiagramRenderer.java` — update interface to accept direction
- `cameleer3-server-app/.../controller/DiagramRenderController.java` — add `@RequestParam(defaultValue = "LR") String direction` to render endpoints
- `ui/src/api/queries/diagrams.ts` — pass `direction` query param to API calls; also update `DiagramLayout` edge type to match backend `PositionedEdge` serialization: `{ sourceId, targetId, label?, points: number[][] }` (currently defines `{ from?, to? }` which is missing `points` and `label`)
### Behavior
- `GET /diagrams/{contentHash}/render?direction=LR` → left-to-right layout (default)
- `GET /diagrams/{contentHash}/render?direction=TB` → top-to-bottom layout
- `GET /diagrams?application=X&routeId=Y&direction=LR` → same for by-route endpoint
### Compound Node Direction
The direction parameter applies to the **root** layout only. Compound nodes (CHOICE, SPLIT, TRY_CATCH, etc.) keep their internal layout direction as **top-to-bottom** regardless of the root direction. This matches how MuleSoft/BW5 render branching patterns: the main flow goes left-to-right, but branches within a choice or split fan out vertically inside their container.
---
## 2. Frontend: ProcessDiagram Component
### File Structure
```
ui/src/components/ProcessDiagram/
├── ProcessDiagram.tsx # Root: SVG container, zoom/pan, section layout
├── ProcessDiagram.module.css # Styles using design system tokens
├── DiagramNode.tsx # Individual node: top-bar card rendering
├── DiagramEdge.tsx # Edge: cubic Bezier path with arrowhead
├── CompoundNode.tsx # Container for compound types (choice, split)
├── NodeToolbar.tsx # Floating action toolbar on hover
├── ConfigBadge.tsx # Indicator badges (TRACE, TAP) on nodes
├── ErrorSection.tsx # Visual separator + error handler flow section
├── ZoomControls.tsx # HTML overlay: zoom in/out/fit buttons
├── useZoomPan.ts # Hook: viewBox transform, wheel zoom, drag pan
├── useDiagramData.ts # Hook: fetch + separate layout into sections
├── node-colors.ts # NodeType → design system color token mapping
├── types.ts # Shared TypeScript interfaces
└── index.ts # Public exports
```
### Props API
```typescript
interface ProcessDiagramProps {
application: string;
routeId: string;
direction?: 'LR' | 'TB'; // default 'LR'
selectedNodeId?: string; // controlled selection
onNodeSelect?: (nodeId: string) => void;
onNodeAction?: (nodeId: string, action: NodeAction) => void;
nodeConfigs?: Map<string, NodeConfig>; // active taps/tracing per processor
className?: string;
}
type NodeAction = 'inspect' | 'toggle-trace' | 'configure-tap' | 'copy-id';
interface NodeConfig {
traceEnabled?: boolean;
tapExpression?: string;
}
// ExecutionOverlay types will be added in sub-project 2 when needed.
// No forward-declared types here to avoid drift.
```
### SVG Structure
```
<div class="process-diagram">
<svg viewBox="..."> // zoom = viewBox transform
<defs> // arrowhead markers, filters
<marker id="arrow">...</marker>
</defs>
<g class="diagram-content"> // pan offset transform
<!-- Main Route section -->
<g class="section section--main">
<g class="edges"> // rendered first (behind nodes)
<path d="M ... C ..." /> // cubic bezier from ELK waypoints
</g>
<g class="nodes">
<g transform="translate(x,y)"> // ELK-computed position
<!-- DiagramNode: top-bar card -->
<!-- ConfigBadge: top-right corner pills -->
<!-- NodeToolbar: foreignObject on hover -->
</g>
<g class="compound"> // CompoundNode: dashed border container
<g transform="translate(...)"> <!-- children inside -->
</g>
</g>
</g>
<!-- Error Handler section(s) -->
<g class="section section--error"
transform="translate(0, mainHeight + gap)">
<text>onException: java.lang.Exception</text>
<line ... /> // divider
<g class="edges">...</g>
<g class="nodes">...</g>
</g>
</g>
</svg>
<div class="zoom-controls">...</div> // HTML overlay, bottom-right
</div>
```
---
## 3. Node Visual States
### Base States
| State | Visual |
|-------|--------|
| Normal | White card, `--border` (#E4DFD8), colored top bar per type |
| Hovered | Warm tint background (`--bg-hover` / #F5F0EA), stronger border, floating toolbar appears above |
| Selected | Amber selection ring (2.5px solid `--amber`), side panel opens |
### Config Badges
Small colored pill badges positioned at the top-right corner of the node card, always visible:
- **TRACE** — teal (`--running`) pill, shown when tracing is enabled
- **TAP** — purple (`--purple`) pill, shown when a tap expression is configured
### Execution Overlay States (sub-project 2 — node must support these props)
| State | Visual |
|-------|--------|
| Executed (OK) | Green left border or subtle green tint |
| Failed (caused error handler) | Red border (2px `--error`), red marker icon |
| Not executed | Dimmed (reduced opacity) |
| Has trace data | Small "data available" indicator icon |
| No trace data | No indicator (or grayed-out data icon) |
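The table above implies a small state-resolution step the node component will need. A minimal sketch, assuming a hypothetical `OverlayEntry` shape (the real overlay types arrive in sub-project 2):

```typescript
// Hypothetical overlay entry; the real type is defined in sub-project 2.
type OverlayEntry = { status: 'COMPLETED' | 'FAILED'; hasTraceData: boolean };

type NodeVisualState = 'executed' | 'failed' | 'skipped';

// A node with no overlay entry never ran for this exchange, so it dims.
function resolveNodeState(entry: OverlayEntry | undefined): NodeVisualState {
  if (!entry) return 'skipped';
  return entry.status === 'FAILED' ? 'failed' : 'executed';
}
```

The "data available" indicator then keys off `hasTraceData` independently of the three base states.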
### Node Type Colors
| Category | Token | Hex | Types |
|----------|-------|-----|-------|
| Endpoints | `--running` | #1A7F8E teal | ENDPOINT |
| Processors | `--amber` | #C6820E | PROCESSOR, BEAN, LOG, SET_HEADER, SET_BODY, TRANSFORM, MARSHAL, UNMARSHAL |
| Targets | `--success` | #3D7C47 green | TO, TO_DYNAMIC, DIRECT, SEDA |
| EIP Patterns | `--purple` | #7C3AED | EIP_CHOICE, EIP_WHEN, EIP_OTHERWISE, EIP_SPLIT, EIP_MULTICAST, EIP_LOOP, EIP_AGGREGATE, EIP_FILTER, etc. |
| Error Handling | `--error` | #C0392B red | ERROR_HANDLER, ON_EXCEPTION, TRY_CATCH, DO_TRY, DO_CATCH, DO_FINALLY |
| Cross-Route | (hardcoded) | #06B6D4 cyan | EIP_WIRE_TAP, EIP_ENRICH, EIP_POLL_ENRICH |
Note: This frontend color mapping intentionally differs from the backend `ElkDiagramRenderer` SVG colors (which use blue for endpoints, green for processors). The frontend uses design system tokens for consistency with the rest of the UI. The backend SVG renderer is not changed.
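The category table reduces to a simple lookup. A sketch with one representative type per category (the constant name and the processor fallback are assumptions, not the actual frontend code):

```typescript
// One representative type per category from the table above.
const NODE_COLORS: Record<string, { token: string; hex: string }> = {
  ENDPOINT: { token: '--running', hex: '#1A7F8E' },
  PROCESSOR: { token: '--amber', hex: '#C6820E' },
  TO: { token: '--success', hex: '#3D7C47' },
  EIP_CHOICE: { token: '--purple', hex: '#7C3AED' },
  ON_EXCEPTION: { token: '--error', hex: '#C0392B' },
  EIP_WIRE_TAP: { token: '(hardcoded)', hex: '#06B6D4' },
};

// Unknown types fall back to the processor color (an assumption).
function nodeColorHex(type: string): string {
  return (NODE_COLORS[type] ?? NODE_COLORS.PROCESSOR).hex;
}
```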
### Compound Node Rendering
Compound types (CHOICE, SPLIT, TRY_CATCH, LOOP, etc.) render as:
- Full-width colored header bar with white text label (type name)
- White body area with subtle border matching the type color
- Children rendered inside at their ELK-relative positions
- Children have their own hover/select/badge behavior
---
## 4. Interactions
### Hover Floating Toolbar
On mouse enter over a node, a dark floating toolbar appears above the node (centered). Uses `<foreignObject>` so the toolbar content is regular HTML and stays keyboard- and screen-reader-accessible.
| Icon | Action | Callback |
|------|--------|----------|
| Search | Inspect | `onNodeAction(id, 'inspect')` — selects node, opens side panel |
| T | Toggle Trace | `onNodeAction(id, 'toggle-trace')` — enables/disables tracing |
| Pencil | Configure Tap | `onNodeAction(id, 'configure-tap')` — opens tap config |
| ... | More | `onNodeAction(id, 'copy-id')` — copies processor ID |
Toolbar hides on mouse leave after a short delay (150ms) to prevent flicker when moving between node and toolbar.
### Click-to-Select
Click on a node → calls `onNodeSelect(nodeId)`. Parent controls `selectedNodeId` prop. Selected node shows amber ring.
### Zoom & Pan
**`useZoomPan` hook manages:**
- Mouse wheel → zoom centered on cursor
- Click+drag on background → pan
- Pinch gesture → zoom (trackpad/touch)
- State: `{ scale, translateX, translateY }`
- Applied to SVG `viewBox` attribute
**`ZoomControls` component:**
- Three buttons: `+` (zoom in), `-` (zoom out), fit-to-view icon
- Positioned as HTML overlay at bottom-right of diagram container
- Fit-to-view calculates viewBox to show entire diagram with 40px padding
**Zoom limits:** 25% to 400%.
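Cursor-centered zoom reduces to one invariant: the diagram point under the cursor keeps its screen position across the scale change. A sketch of the state update with the clamp from the limits above (names are illustrative, not the actual hook internals):

```typescript
interface ZoomPanState { scale: number; translateX: number; translateY: number }

const MIN_SCALE = 0.25; // 25%
const MAX_SCALE = 4;    // 400%

// Zoom by `factor` while keeping the point under (cursorX, cursorY) fixed.
function zoomAt(state: ZoomPanState, cursorX: number, cursorY: number, factor: number): ZoomPanState {
  const scale = Math.min(MAX_SCALE, Math.max(MIN_SCALE, state.scale * factor));
  const ratio = scale / state.scale;
  return {
    scale,
    // screen = diagram * scale + translate; solve translate so the
    // cursor's diagram point maps to the same screen coordinates.
    translateX: cursorX - (cursorX - state.translateX) * ratio,
    translateY: cursorY - (cursorY - state.translateY) * ratio,
  };
}
```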
### Keyboard Navigation
**Required:**
| Key | Action |
|-----|--------|
| Escape | Deselect / close panel |
| +/- | Zoom in/out |
| 0 | Fit to view |
**Stretch (implement if time permits):**
| Key | Action |
|-----|--------|
| Arrow keys | Move selection between connected nodes |
| Tab | Cycle through nodes in flow order |
| Enter | Open detail panel for selected node |
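The required bindings are small enough to express as a pure key-to-action map, which keeps them testable (action names are illustrative; accepting unshifted `=` as `+` is an assumption):

```typescript
type DiagramAction = 'deselect' | 'zoom-in' | 'zoom-out' | 'fit';

// Map a KeyboardEvent.key value to a diagram action, or null to ignore.
function actionForKey(key: string): DiagramAction | null {
  switch (key) {
    case 'Escape': return 'deselect';
    case '+': case '=': return 'zoom-in'; // '=' as unshifted '+' is an assumption
    case '-': return 'zoom-out';
    case '0': return 'fit';
    default: return null;
  }
}
```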
---
## 5. Error Handler Sections
Error handler compounds (ON_EXCEPTION, ERROR_HANDLER) render as separate sections below the main flow:
1. **Divider:** Horizontal line with label text (e.g., "onException: java.lang.Exception")
2. **Gap:** 40px vertical gap between main section and error section
3. **Layout:** Error section gets its own ELK-computed layout (compound node children already have relative coordinates)
4. **Styling:** Same node rendering as main section, but the section background has a subtle red tint
5. **Multiple handlers:** Each ON_EXCEPTION becomes its own section, stacked vertically
The `useDiagramData` hook separates top-level compound error nodes from regular nodes, computing the Y offset for each error section based on accumulated heights.
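The accumulated-height computation can be sketched as follows (names are illustrative; the 40px gap comes from the spec above):

```typescript
const SECTION_GAP = 40; // vertical gap between sections, per spec

interface Section { height: number }

// Each section starts below everything rendered before it, plus a gap.
function sectionOffsets(mainHeight: number, sections: Section[]): number[] {
  const offsets: number[] = [];
  let y = mainHeight;
  for (const s of sections) {
    y += SECTION_GAP;
    offsets.push(y);
    y += s.height;
  }
  return offsets;
}
```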
---
## 6. Data Flow
```
useDiagramByRoute(app, routeId)
→ contentHash
→ useDiagramLayout(contentHash, direction)
→ DiagramLayout { nodes[], edges[], width, height }
useDiagramData hook:
1. Separate nodes into mainNodes[] and errorSections[]
(reuses logic from buildFlowSegments: error-handler compounds with children → error sections)
2. Filter edges: mainEdges (between main nodes), errorEdges (within each error section)
3. Compute total SVG dimensions: max(mainWidth, errorWidths) × (mainHeight + gap + errorHeights)
4. Return { mainNodes, mainEdges, errorSections, totalWidth, totalHeight }
```
The existing `diagram-mapping.ts` `buildFlowSegments` function handles the separation logic. The new `useDiagramData` hook adapts this for SVG coordinate-based rendering instead of RouteFlow's FlowSegment format.
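Steps 1 and 2 can be sketched as a pure function (field names and the exact error-type set are assumptions based on the node types used elsewhere in this spec):

```typescript
interface DNode { id: string; type: string; children?: DNode[] }
interface DEdge { sourceId: string; targetId: string }

const ERROR_HANDLER_TYPES = new Set(['ON_EXCEPTION', 'ERROR_HANDLER']);

// Split nodes into main flow vs. error sections, then keep only edges
// whose endpoints both remain in the main flow.
function splitDiagram(nodes: DNode[], edges: DEdge[]) {
  const errorSections = nodes.filter(
    (n) => ERROR_HANDLER_TYPES.has(n.type) && (n.children?.length ?? 0) > 0,
  );
  const errorIds = new Set(errorSections.map((n) => n.id));
  const mainNodes = nodes.filter((n) => !errorIds.has(n.id));
  const mainIds = new Set(mainNodes.map((n) => n.id));
  const mainEdges = edges.filter((e) => mainIds.has(e.sourceId) && mainIds.has(e.targetId));
  return { mainNodes, errorSections, mainEdges };
}
```

Note that an error-handler compound with no children stays in the main flow, mirroring the "with children" condition above.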
---
## 7. Side Panel (Detail Panel)
When a node is selected, a collapsible side panel slides in from the right of the diagram container.
**Base mode (no execution overlay):**
- Processor ID
- Processor type
- Endpoint URI (if applicable)
- Active configuration: tracing status, tap expression
- Node metadata from the diagram
**With execution overlay (sub-project 2):**
- Execution status + duration
- Input/output body (if trace data captured)
- Input/output headers
- Error message + stack trace (if failed)
- Loop iteration selector (if inside a loop)
For sub-project 1, the side panel shows config info only. The component accepts an `onNodeSelect` callback — the parent page controls what appears in the panel.
The side panel is NOT part of the ProcessDiagram component itself. It is rendered by the parent page and controlled via the `selectedNodeId` / `onNodeSelect` props. This keeps the diagram component focused on visualization.
**Dev test page (`/dev/diagram`):** In sub-project 1, the test page renders the ProcessDiagram with a simple stub side panel that shows the selected node's ID, type, label, and any `nodeConfigs` entry. This validates the selection interaction without needing full page integration.
---
## 8. Non-Goals (Sub-project 2 & 3)
These are explicitly out of scope for sub-project 1:
- **Execution overlay rendering** — animated flow, per-node status/duration, dimming non-executed nodes
- **Loop/split iteration stepping** — "debugger" UI with iteration tabs
- **Page integration** — replacing RouteFlow on RouteDetail, ExchangeDetail, Dashboard
- **Minimap** — small overview for large diagrams (stretch goal, not v1)
- **Drag to rearrange** — nodes are server-positioned, not user-movable
---
## Verification
1. **Backend:** `mvn clean verify -DskipITs` passes after direction param addition
2. **Frontend types:** `npx tsc -p tsconfig.app.json --noEmit` passes
3. **Manual test:** Create a temporary test page or Storybook-like route (`/dev/diagram`) that renders the ProcessDiagram component with a known route
4. **Zoom/pan:** Mouse wheel zooms, drag pans, fit-to-view works
5. **Node interaction:** Hover shows toolbar, click selects with amber ring
6. **Config badges:** Pass mock `nodeConfigs` and verify TRACE/TAP pills render
7. **Error sections:** Route with ON_EXCEPTION renders error handler below main flow
8. **Compound nodes:** Route with CHOICE renders children inside dashed container
9. **Keyboard (required):** Escape deselects, +/- zooms, 0 fits to view
10. **Direction:** `?direction=TB` renders top-to-bottom layout
---
## Implementation Notes (post-spec additions)
The following features were added during implementation beyond the original spec:
### Recursive compound nesting
EIP_WHEN, EIP_OTHERWISE, DO_CATCH, DO_FINALLY added to COMPOUND_TYPES on both backend and frontend. CompoundNode recursively renders children that are themselves compound (e.g., CHOICE → WHEN → processors).
### Edge z-ordering
Edges are distributed to their containing compound and rendered inside the compound's SVG group (after background, before children). Top-level edges stay in the main edges group. This prevents compound backgrounds from hiding edges.
### ON_COMPLETION handler sections
ON_COMPLETION nodes render as teal-tinted sections between the main flow and error handler sections. Structurally parallel to ON_EXCEPTION.
### Drill-down navigation
Double-click on DIRECT or SEDA nodes navigates into the target route's diagram. A breadcrumb bar shows the route stack and supports clicking back to any level. Escape key goes back one level. Route ID resolution handles camelCase endpoint URIs → kebab-case route IDs using the catalog's known route IDs.
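The camelCase-to-kebab-case resolution can be sketched like this (the helper names and the try-raw-name-first order are assumptions; only the conversion rule itself comes from the text above):

```typescript
// "myTargetRoute" -> "my-target-route"
function toKebabCase(name: string): string {
  return name.replace(/([a-z0-9])([A-Z])/g, '$1-$2').toLowerCase();
}

// Resolve an endpoint name against the catalog's known route IDs,
// trying the raw name before the kebab-case form (an assumption).
function resolveRouteId(endpointName: string, knownRouteIds: Set<string>): string | undefined {
  if (knownRouteIds.has(endpointName)) return endpointName;
  const kebab = toKebabCase(endpointName);
  return knownRouteIds.has(kebab) ? kebab : undefined;
}
```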
### Zoom via CSS transform
The original spec proposed SVG viewBox manipulation. Implementation uses CSS `transform: translate() scale()` on the content `<g>` element instead, which is simpler and more predictable. Default zoom is 100%.
### Toolbar as HTML overlay
The original spec proposed SVG `<foreignObject>`. Implementation renders the toolbar as an absolute-positioned HTML div outside the SVG, so it maintains fixed size regardless of zoom level. Styled with design system tokens.

```diff
@@ -58,6 +58,9 @@
   <repository>
     <id>gitea</id>
     <url>https://gitea.siegeln.net/api/packages/cameleer/maven</url>
+    <snapshots>
+      <enabled>true</enabled>
+    </snapshots>
   </repository>
 </repositories>
```

```diff
@@ -4,11 +4,15 @@ WORKDIR /app
 ARG REGISTRY_TOKEN
 COPY package.json package-lock.json .npmrc ./
 RUN echo "//gitea.siegeln.net/api/packages/cameleer/npm/:_authToken=${REGISTRY_TOKEN}" >> .npmrc && \
-    npm ci && \
-    rm -f .npmrc
+    npm ci
 COPY . .
+# Upgrade design system to latest dev snapshot (after COPY to bust Docker cache)
+RUN echo "//gitea.siegeln.net/api/packages/cameleer/npm/:_authToken=${REGISTRY_TOKEN}" >> .npmrc && \
+    npm install @cameleer/design-system@dev && \
+    rm -f .npmrc
 ARG VITE_ENV_NAME=PRODUCTION
 ENV VITE_ENV_NAME=$VITE_ENV_NAME
 RUN npm run build
```

ui/package-lock.json (generated, 22 lines changed)

```diff
@@ -8,7 +8,7 @@
       "name": "ui",
       "version": "0.0.0",
       "dependencies": {
-        "@cameleer/design-system": "^0.0.3",
+        "@cameleer/design-system": "^0.1.17",
         "@tanstack/react-query": "^5.90.21",
         "openapi-fetch": "^0.17.0",
         "react": "^19.2.4",
@@ -276,9 +276,9 @@
       }
     },
     "node_modules/@cameleer/design-system": {
-      "version": "0.0.3",
-      "resolved": "https://gitea.siegeln.net/api/packages/cameleer/npm/%40cameleer%2Fdesign-system/-/0.0.3/design-system-0.0.3.tgz",
-      "integrity": "sha512-x1mZvgYz7j57xFB26pMh9hn5waSJA1CcRWTgkzleLfaO/CmhekLup1HHlbh0b9SxVci6g2HzbcJldr4kvM1yzg==",
+      "version": "0.1.17",
+      "resolved": "https://gitea.siegeln.net/api/packages/cameleer/npm/%40cameleer%2Fdesign-system/-/0.1.17/design-system-0.1.17.tgz",
+      "integrity": "sha512-THK6yN+xSrxEJadEQ4AZiVhPvoI2rq6gvmMonpxVhUw93dOPO5p06pRS5csJc1miFD1thOrazsoDzSTAbNaELw==",
       "dependencies": {
         "react": "^19.0.0",
         "react-dom": "^19.0.0",
@@ -2934,9 +2934,9 @@
       }
     },
     "node_modules/react-router": {
-      "version": "7.13.1",
-      "resolved": "https://registry.npmjs.org/react-router/-/react-router-7.13.1.tgz",
-      "integrity": "sha512-td+xP4X2/6BJvZoX6xw++A2DdEi++YypA69bJUV5oVvqf6/9/9nNlD70YO1e9d3MyamJEBQFEzk6mbfDYbqrSA==",
+      "version": "7.13.2",
+      "resolved": "https://registry.npmjs.org/react-router/-/react-router-7.13.2.tgz",
+      "integrity": "sha512-tX1Aee+ArlKQP+NIUd7SE6Li+CiGKwQtbS+FfRxPX6Pe4vHOo6nr9d++u5cwg+Z8K/x8tP+7qLmujDtfrAoUJA==",
       "license": "MIT",
       "dependencies": {
         "cookie": "^1.0.1",
@@ -2956,12 +2956,12 @@
       }
     },
     "node_modules/react-router-dom": {
-      "version": "7.13.1",
-      "resolved": "https://registry.npmjs.org/react-router-dom/-/react-router-dom-7.13.1.tgz",
-      "integrity": "sha512-UJnV3Rxc5TgUPJt2KJpo1Jpy0OKQr0AjgbZzBFjaPJcFOb2Y8jA5H3LT8HUJAiRLlWrEXWHbF1Z4SCZaQjWDHw==",
+      "version": "7.13.2",
+      "resolved": "https://registry.npmjs.org/react-router-dom/-/react-router-dom-7.13.2.tgz",
+      "integrity": "sha512-aR7SUORwTqAW0JDeiWF07e9SBE9qGpByR9I8kJT5h/FrBKxPMS6TiC7rmVO+gC0q52Bx7JnjWe8Z1sR9faN4YA==",
       "license": "MIT",
       "dependencies": {
-        "react-router": "7.13.1"
+        "react-router": "7.13.2"
       },
       "engines": {
         "node": ">=20.0.0"
```
```diff
@@ -14,7 +14,7 @@
     "generate-api:live": "curl -s http://localhost:8081/api/v1/api-docs -o src/api/openapi.json && openapi-typescript src/api/openapi.json -o src/api/schema.d.ts"
   },
   "dependencies": {
-    "@cameleer/design-system": "^0.0.3",
+    "@cameleer/design-system": "^0.1.17",
     "@tanstack/react-query": "^5.90.21",
     "openapi-fetch": "^0.17.0",
     "react": "^19.2.4",
```

File diff suppressed because one or more lines are too long

```diff
@@ -1,5 +1,6 @@
 import { useQuery, useMutation, useQueryClient } from '@tanstack/react-query';
 import { adminFetch } from './admin-api';
+import { useRefreshInterval } from '../use-refresh-interval';
 
 // ── Types ──────────────────────────────────────────────────────────────
@@ -38,34 +39,38 @@ export interface ActiveQuery {
 // ── Query Hooks ────────────────────────────────────────────────────────
 
 export function useDatabaseStatus() {
+  const refetchInterval = useRefreshInterval(30_000);
   return useQuery({
     queryKey: ['admin', 'database', 'status'],
     queryFn: () => adminFetch<DatabaseStatus>('/database/status'),
-    refetchInterval: 30_000,
+    refetchInterval,
   });
 }
 
 export function useConnectionPool() {
+  const refetchInterval = useRefreshInterval(10_000);
   return useQuery({
     queryKey: ['admin', 'database', 'pool'],
     queryFn: () => adminFetch<PoolStats>('/database/pool'),
-    refetchInterval: 10_000,
+    refetchInterval,
   });
 }
 
 export function useDatabaseTables() {
+  const refetchInterval = useRefreshInterval(60_000);
   return useQuery({
     queryKey: ['admin', 'database', 'tables'],
     queryFn: () => adminFetch<TableInfo[]>('/database/tables'),
-    refetchInterval: 60_000,
+    refetchInterval,
   });
 }
 
 export function useActiveQueries() {
+  const refetchInterval = useRefreshInterval(5_000);
   return useQuery({
     queryKey: ['admin', 'database', 'queries'],
     queryFn: () => adminFetch<ActiveQuery[]>('/database/queries'),
-    refetchInterval: 5_000,
+    refetchInterval,
   });
 }
```

```diff
@@ -1,14 +1,15 @@
 import { useQuery, useMutation, useQueryClient } from '@tanstack/react-query';
 import { adminFetch } from './admin-api';
+import { useRefreshInterval } from '../use-refresh-interval';
 
 // ── Types ──────────────────────────────────────────────────────────────
 
 export interface OpenSearchStatus {
-  connected: boolean;
+  reachable: boolean;
   clusterHealth: string;
   version: string | null;
-  numberOfNodes: number;
-  url: string;
+  nodeCount: number;
+  host: string;
 }
 
 export interface PipelineStats {
@@ -53,28 +54,31 @@ export interface PerformanceStats {
 // ── Query Hooks ────────────────────────────────────────────────────────
 
 export function useOpenSearchStatus() {
+  const refetchInterval = useRefreshInterval(30_000);
   return useQuery({
     queryKey: ['admin', 'opensearch', 'status'],
     queryFn: () => adminFetch<OpenSearchStatus>('/opensearch/status'),
-    refetchInterval: 30_000,
+    refetchInterval,
   });
 }
 
 export function usePipelineStats() {
+  const refetchInterval = useRefreshInterval(10_000);
   return useQuery({
     queryKey: ['admin', 'opensearch', 'pipeline'],
     queryFn: () => adminFetch<PipelineStats>('/opensearch/pipeline'),
-    refetchInterval: 10_000,
+    refetchInterval,
   });
 }
 
-export function useOpenSearchIndices(page = 0, size = 20, search = '') {
+export function useOpenSearchIndices(page = 0, size = 20, search = '', prefix = 'executions') {
   return useQuery({
-    queryKey: ['admin', 'opensearch', 'indices', page, size, search],
+    queryKey: ['admin', 'opensearch', 'indices', prefix, page, size, search],
     queryFn: () => {
       const params = new URLSearchParams();
       params.set('page', String(page));
       params.set('size', String(size));
+      params.set('prefix', prefix);
       if (search) params.set('search', search);
       return adminFetch<IndicesPage>(`/opensearch/indices?${params}`);
     },
@@ -83,10 +87,11 @@ export function useOpenSearchIndices(page = 0, size = 20, search = '') {
 }
 
 export function useOpenSearchPerformance() {
+  const refetchInterval = useRefreshInterval(30_000);
   return useQuery({
     queryKey: ['admin', 'opensearch', 'performance'],
     queryFn: () => adminFetch<PerformanceStats>('/opensearch/performance'),
-    refetchInterval: 30_000,
+    refetchInterval,
   });
 }
```

```diff
@@ -1,8 +1,10 @@
 import { useQuery } from '@tanstack/react-query';
 import { config } from '../../config';
 import { useAuthStore } from '../../auth/auth-store';
+import { useRefreshInterval } from './use-refresh-interval';
 
 export function useAgentMetrics(agentId: string | null, names: string[], buckets = 60) {
+  const refetchInterval = useRefreshInterval(30_000);
   return useQuery({
     queryKey: ['agent-metrics', agentId, names.join(','), buckets],
     queryFn: async () => {
@@ -21,6 +23,6 @@ export function useAgentMetrics(agentId: string | null, names: string[], buckets
       return res.json() as Promise<{ metrics: Record<string, Array<{ time: string; value: number }>> }>;
     },
     enabled: !!agentId && names.length > 0,
-    refetchInterval: 30_000,
+    refetchInterval,
   });
 }
```

```diff
@@ -2,8 +2,10 @@ import { useQuery } from '@tanstack/react-query';
 import { api } from '../client';
 import { config } from '../../config';
 import { useAuthStore } from '../../auth/auth-store';
+import { useRefreshInterval } from './use-refresh-interval';
 
 export function useAgents(status?: string, application?: string) {
+  const refetchInterval = useRefreshInterval(10_000);
   return useQuery({
     queryKey: ['agents', status, application],
     queryFn: async () => {
@@ -13,18 +15,20 @@ export function useAgents(status?: string, application?: string) {
       if (error) throw new Error('Failed to load agents');
       return data!;
     },
-    refetchInterval: 10_000,
+    refetchInterval,
   });
 }
 
-export function useAgentEvents(appId?: string, agentId?: string, limit = 50) {
+export function useAgentEvents(appId?: string, agentId?: string, limit = 50, toOverride?: string) {
+  const refetchInterval = useRefreshInterval(15_000);
   return useQuery({
-    queryKey: ['agents', 'events', appId, agentId, limit],
+    queryKey: ['agents', 'events', appId, agentId, limit, toOverride],
     queryFn: async () => {
       const token = useAuthStore.getState().accessToken;
       const params = new URLSearchParams();
       if (appId) params.set('appId', appId);
       if (agentId) params.set('agentId', agentId);
+      if (toOverride) params.set('to', toOverride);
       params.set('limit', String(limit));
       const res = await fetch(`${config.apiBaseUrl}/agents/events-log?${params}`, {
         headers: {
@@ -35,6 +39,6 @@ export function useAgentEvents(appId?: string, agentId?: string, limit = 50) {
       if (!res.ok) throw new Error('Failed to load agent events');
       return res.json();
     },
-    refetchInterval: 15_000,
+    refetchInterval,
   });
 }
```

```diff
@@ -1,13 +1,19 @@
 import { useQuery } from '@tanstack/react-query';
 import { config } from '../../config';
 import { useAuthStore } from '../../auth/auth-store';
+import { useRefreshInterval } from './use-refresh-interval';
 
-export function useRouteCatalog() {
+export function useRouteCatalog(from?: string, to?: string) {
+  const refetchInterval = useRefreshInterval(15_000);
   return useQuery({
-    queryKey: ['routes', 'catalog'],
+    queryKey: ['routes', 'catalog', from, to],
     queryFn: async () => {
       const token = useAuthStore.getState().accessToken;
-      const res = await fetch(`${config.apiBaseUrl}/routes/catalog`, {
+      const params = new URLSearchParams();
+      if (from) params.set('from', from);
+      if (to) params.set('to', to);
+      const qs = params.toString();
+      const res = await fetch(`${config.apiBaseUrl}/routes/catalog${qs ? `?${qs}` : ''}`, {
         headers: {
           Authorization: `Bearer ${token}`,
           'X-Cameleer-Protocol-Version': '1',
@@ -16,11 +22,13 @@
       if (!res.ok) throw new Error('Failed to load route catalog');
       return res.json();
     },
-    refetchInterval: 15_000,
+    placeholderData: (prev) => prev,
+    refetchInterval,
   });
 }
 
 export function useRouteMetrics(from?: string, to?: string, appId?: string) {
+  const refetchInterval = useRefreshInterval(30_000);
   return useQuery({
     queryKey: ['routes', 'metrics', from, to, appId],
     queryFn: async () => {
@@ -38,6 +46,6 @@ export function useRouteMetrics(from?: string, to?: string, appId?: string) {
       if (!res.ok) throw new Error('Failed to load route metrics');
       return res.json();
     },
-    refetchInterval: 30_000,
+    refetchInterval,
  });
 }
```

New file (`@@ -0,0 +1,178 @@`):
import { useMutation, useQuery, useQueryClient } from '@tanstack/react-query'
import { api } from '../client'
import { useAuthStore } from '../../auth/auth-store'
// ── Application Config ────────────────────────────────────────────────────
export interface TapDefinition {
tapId: string
processorId: string
target: 'INPUT' | 'OUTPUT' | 'BOTH'
expression: string
language: string
attributeName: string
attributeType: 'BUSINESS_OBJECT' | 'CORRELATION' | 'EVENT' | 'CUSTOM'
enabled: boolean
version: number
}
export interface ApplicationConfig {
application: string
version: number
updatedAt?: string
engineLevel?: string
payloadCaptureMode?: string
applicationLogLevel?: string
agentLogLevel?: string
metricsEnabled: boolean
samplingRate: number
tracedProcessors: Record<string, string>
taps: TapDefinition[]
tapVersion: number
routeRecording: Record<string, boolean>
compressSuccess: boolean
}
/** Authenticated fetch using the JWT from auth store */
function authFetch(url: string, init?: RequestInit): Promise<Response> {
const token = useAuthStore.getState().accessToken
const headers = new Headers(init?.headers)
if (token) headers.set('Authorization', `Bearer ${token}`)
headers.set('X-Cameleer-Protocol-Version', '1')
return fetch(url, { ...init, headers })
}
export function useAllApplicationConfigs() {
return useQuery({
queryKey: ['applicationConfig', 'all'],
queryFn: async () => {
const res = await authFetch('/api/v1/config')
if (!res.ok) throw new Error('Failed to fetch configs')
return res.json() as Promise<ApplicationConfig[]>
},
})
}
export function useApplicationConfig(application: string | undefined) {
return useQuery({
queryKey: ['applicationConfig', application],
queryFn: async () => {
const res = await authFetch(`/api/v1/config/${application}`)
if (!res.ok) throw new Error('Failed to fetch config')
return res.json() as Promise<ApplicationConfig>
},
enabled: !!application,
})
}
export function useUpdateApplicationConfig() {
const queryClient = useQueryClient()
return useMutation({
mutationFn: async (config: ApplicationConfig) => {
const res = await authFetch(`/api/v1/config/${config.application}`, {
method: 'PUT',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(config),
})
if (!res.ok) throw new Error('Failed to update config')
return res.json() as Promise<ApplicationConfig>
},
onSuccess: (saved) => {
queryClient.setQueryData(['applicationConfig', saved.application], saved)
queryClient.invalidateQueries({ queryKey: ['applicationConfig', 'all'] })
},
})
}
// ── Processor → Route Mapping ─────────────────────────────────────────────
export function useProcessorRouteMapping(application?: string) {
return useQuery({
queryKey: ['config', application, 'processor-routes'],
queryFn: async () => {
const res = await authFetch(`/api/v1/config/${application}/processor-routes`)
if (!res.ok) throw new Error('Failed to fetch processor-route mapping')
return res.json() as Promise<Record<string, string>>
},
enabled: !!application,
})
}
// ── Generic Group Command (kept for non-config commands) ──────────────────
interface SendGroupCommandParams {
group: string
type: string
payload: Record<string, unknown>
}
export function useSendGroupCommand() {
return useMutation({
mutationFn: async ({ group, type, payload }: SendGroupCommandParams) => {
const { data, error } = await api.POST('/agents/groups/{group}/commands', {
params: { path: { group } },
body: { type, payload } as any,
})
if (error) throw new Error('Failed to send command')
return data!
},
})
}
// ── Test Expression ───────────────────────────────────────────────────────
export function useTestExpression() {
return useMutation({
mutationFn: async ({
application,
expression,
language,
body,
target,
}: {
application: string
expression: string
language: string
body: string
target: string
}) => {
const res = await authFetch(
`/api/v1/config/${encodeURIComponent(application)}/test-expression`,
{
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ expression, language, body, target }),
},
)
if (!res.ok) {
if (res.status === 404) throw new Error('No live agent available')
if (res.status === 504) throw new Error('Expression test timed out')
throw new Error('Failed to test expression')
}
return res.json() as Promise<{ result?: string; error?: string }>
},
})
}
// ── Replay Exchange ───────────────────────────────────────────────────────
export function useReplayExchange() {
return useMutation({
mutationFn: async ({
agentId,
headers,
body,
}: {
agentId: string
headers: Record<string, string>
body: string
}) => {
const { data, error } = await api.POST('/agents/{id}/commands', {
params: { path: { id: agentId } },
body: { type: 'replay', payload: { headers, body } } as any,
})
if (error) throw new Error('Failed to send replay command')
return data!
},
})
}

View File

@@ -1,19 +1,43 @@
import { useQuery } from '@tanstack/react-query'; import { useQuery } from '@tanstack/react-query';
import { api } from '../client'; import { api } from '../client';
interface DiagramLayout { export interface DiagramNode {
id?: string;
label?: string;
type?: string;
x?: number;
y?: number;
width?: number; width?: number;
height?: number; height?: number;
nodes?: Array<{ id?: string; label?: string; type?: string; x?: number; y?: number; width?: number; height?: number }>; children?: DiagramNode[];
edges?: Array<{ from?: string; to?: string }>;
} }
export function useDiagramLayout(contentHash: string | null) { export interface DiagramEdge {
sourceId: string;
targetId: string;
label?: string;
points: number[][];
}
export interface DiagramLayout {
width?: number;
height?: number;
nodes?: DiagramNode[];
edges?: DiagramEdge[];
}
export function useDiagramLayout(
contentHash: string | null,
direction: 'LR' | 'TB' = 'LR',
) {
return useQuery({ return useQuery({
queryKey: ['diagrams', 'layout', contentHash], queryKey: ['diagrams', 'layout', contentHash, direction],
queryFn: async () => { queryFn: async () => {
const { data, error } = await api.GET('/diagrams/{contentHash}/render', { const { data, error } = await api.GET('/diagrams/{contentHash}/render', {
params: { path: { contentHash: contentHash! } }, params: {
path: { contentHash: contentHash! },
query: { direction },
},
headers: { Accept: 'application/json' }, headers: { Accept: 'application/json' },
}); });
if (error) throw new Error('Failed to load diagram layout'); if (error) throw new Error('Failed to load diagram layout');
@@ -23,15 +47,19 @@ export function useDiagramLayout(contentHash: string | null) {
}); });
} }
export function useDiagramByRoute(application: string | undefined, routeId: string | undefined) { export function useDiagramByRoute(
application: string | undefined,
routeId: string | undefined,
direction: 'LR' | 'TB' = 'LR',
) {
return useQuery({
queryKey: ['diagrams', 'byRoute', application, routeId, direction],
queryFn: async () => {
const { data, error } = await api.GET('/diagrams', {
params: { query: { application: application!, routeId: routeId!, direction } },
});
if (error) throw new Error('Failed to load diagram for route');
return data as DiagramLayout;
},
enabled: !!application && !!routeId,
});
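Both hooks thread the new `direction` parameter into the React Query cache key as well as the request, so 'LR' and 'TB' layouts of the same diagram are cached independently. A minimal sketch of that key construction (plain TypeScript, no React Query required; the function name is illustrative):

```typescript
type Direction = 'LR' | 'TB';

// Illustrative mirror of the hook's queryKey construction: direction is part
// of the key, so LR and TB layouts of one diagram get separate cache entries.
function diagramLayoutKey(contentHash: string, direction: Direction = 'LR'): (string | Direction)[] {
  return ['diagrams', 'layout', contentHash, direction];
}

// Same hash, different direction → different keys, so no stale layout is served.
const keyLR = diagramLayoutKey('abc123');
const keyTB = diagramLayoutKey('abc123', 'TB');
```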


@@ -1,6 +1,7 @@
import { useQuery } from '@tanstack/react-query';
import { api } from '../client';
import type { SearchRequest } from '../types';
import { useLiveQuery } from './use-refresh-interval';
export function useExecutionStats(
timeFrom: string | undefined,
@@ -8,6 +9,7 @@ export function useExecutionStats(
routeId?: string,
application?: string,
) {
const live = useLiveQuery(10_000);
return useQuery({
queryKey: ['executions', 'stats', timeFrom, timeTo, routeId, application],
queryFn: async () => {
@@ -24,13 +26,14 @@ export function useExecutionStats(
if (error) throw new Error('Failed to load stats');
return data!;
},
enabled: !!timeFrom && live.enabled,
placeholderData: (prev) => prev,
refetchInterval: live.refetchInterval,
});
}
export function useSearchExecutions(filters: SearchRequest, live = false) {
const liveQuery = useLiveQuery(5_000);
return useQuery({
queryKey: ['executions', 'search', filters],
queryFn: async () => {
@@ -41,7 +44,8 @@ export function useSearchExecutions(filters: SearchRequest, live = false) {
return data!;
},
placeholderData: (prev) => prev,
enabled: live ? liveQuery.enabled : true,
refetchInterval: live ? liveQuery.refetchInterval : false,
});
}
@@ -51,6 +55,7 @@ export function useStatsTimeseries(
routeId?: string,
application?: string,
) {
const live = useLiveQuery(30_000);
return useQuery({
queryKey: ['executions', 'timeseries', timeFrom, timeTo, routeId, application],
queryFn: async () => {
@@ -68,9 +73,9 @@ export function useStatsTimeseries(
if (error) throw new Error('Failed to load timeseries');
return data!;
},
enabled: !!timeFrom && live.enabled,
placeholderData: (prev) => prev,
refetchInterval: live.refetchInterval,
});
}
@@ -109,3 +114,25 @@ export function useProcessorSnapshot(
enabled: !!executionId && index !== null,
});
}
export function useProcessorSnapshotById(
executionId: string | null,
processorId: string | null,
) {
return useQuery({
queryKey: ['executions', 'snapshot-by-id', executionId, processorId],
queryFn: async () => {
const { data, error } = await api.GET(
'/executions/{executionId}/processors/by-id/{processorId}/snapshot',
{
params: {
path: { executionId: executionId!, processorId: processorId! },
},
},
);
if (error) throw new Error('Failed to load snapshot');
return data!;
},
enabled: !!executionId && !!processorId,
});
}
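The by-id endpoint lets the execution overlay look up a snapshot by stable `processorId` instead of a positional index. The generated client substitutes the `{executionId}`/`{processorId}` placeholders internally; a hypothetical stand-in for that substitution, just to show the resulting URL shape:

```typescript
// Hypothetical placeholder substitution (the real openapi-fetch client does
// this internally); shown only to illustrate the by-id snapshot route.
function fillPath(template: string, params: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (_match, name: string) =>
    encodeURIComponent(params[name] ?? ''),
  );
}

const snapshotUrl = fillPath(
  '/executions/{executionId}/processors/by-id/{processorId}/snapshot',
  { executionId: 'exec-1', processorId: 'log-order' },
);
// → '/executions/exec-1/processors/by-id/log-order/snapshot'
```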


@@ -0,0 +1,56 @@
import { useQuery } from '@tanstack/react-query';
import { config } from '../../config';
import { useAuthStore } from '../../auth/auth-store';
import { useRefreshInterval } from './use-refresh-interval';
import { useGlobalFilters } from '@cameleer/design-system';
export interface LogEntryResponse {
timestamp: string;
level: string;
loggerName: string | null;
message: string;
threadName: string | null;
stackTrace: string | null;
}
export function useApplicationLogs(
application?: string,
agentId?: string,
options?: { limit?: number; toOverride?: string; exchangeId?: string },
) {
const refetchInterval = useRefreshInterval(15_000);
const { timeRange } = useGlobalFilters();
const to = options?.toOverride ?? timeRange.end.toISOString();
// When filtering by exchangeId, skip the global time range — exchange logs are historical
const useTimeRange = !options?.exchangeId;
return useQuery({
queryKey: ['logs', application, agentId,
useTimeRange ? timeRange.start.toISOString() : null,
useTimeRange ? to : null,
options?.limit, options?.exchangeId],
queryFn: async () => {
const token = useAuthStore.getState().accessToken;
const params = new URLSearchParams();
params.set('application', application!);
if (agentId) params.set('agentId', agentId);
if (options?.exchangeId) params.set('exchangeId', options.exchangeId);
if (useTimeRange) {
params.set('from', timeRange.start.toISOString());
params.set('to', to);
}
if (options?.limit) params.set('limit', String(options.limit));
const res = await fetch(`${config.apiBaseUrl}/logs?${params}`, {
headers: {
Authorization: `Bearer ${token}`,
'X-Cameleer-Protocol-Version': '1',
},
});
if (!res.ok) throw new Error('Failed to load application logs');
return res.json() as Promise<LogEntryResponse[]>;
},
enabled: !!application,
placeholderData: (prev) => prev,
refetchInterval,
});
}
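The hook's query-string logic can be isolated as a pure function: when `exchangeId` is present, `from`/`to` are deliberately omitted because exchange-scoped logs are historical. A sketch using only the standard `URLSearchParams` (parameter names mirror the hook above):

```typescript
// Sketch of the hook's parameter building; exchangeId suppresses the time range.
function buildLogParams(opts: {
  application: string;
  agentId?: string;
  exchangeId?: string;
  from?: string;
  to?: string;
  limit?: number;
}): string {
  const params = new URLSearchParams();
  params.set('application', opts.application);
  if (opts.agentId) params.set('agentId', opts.agentId);
  if (opts.exchangeId) {
    params.set('exchangeId', opts.exchangeId);
  } else {
    if (opts.from) params.set('from', opts.from);
    if (opts.to) params.set('to', opts.to);
  }
  if (opts.limit) params.set('limit', String(opts.limit));
  return params.toString();
}
```

An exchange-scoped call like `buildLogParams({ application: 'orders', exchangeId: 'ex-42', from: '2026-01-01T00:00:00Z' })` yields `application=orders&exchangeId=ex-42`, with the time range dropped.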


@@ -1,8 +1,10 @@
import { useQuery } from '@tanstack/react-query';
import { config } from '../../config';
import { useAuthStore } from '../../auth/auth-store';
import { useRefreshInterval } from './use-refresh-interval';
export function useProcessorMetrics(routeId: string | null, appId?: string) {
const refetchInterval = useRefreshInterval(30_000);
return useQuery({
queryKey: ['processor-metrics', routeId, appId],
queryFn: async () => {
@@ -20,6 +22,6 @@ export function useProcessorMetrics(routeId: string | null, appId?: string) {
return res.json();
},
enabled: !!routeId,
refetchInterval,
});
}


@@ -0,0 +1,23 @@
import { useGlobalFilters } from '@cameleer/design-system';
/**
* Returns the given interval when auto-refresh is enabled, or `false` when paused.
* Use as `refetchInterval` in React Query hooks.
*/
export function useRefreshInterval(intervalMs: number): number | false {
const { autoRefresh } = useGlobalFilters();
return autoRefresh ? intervalMs : false;
}
/**
* Returns `enabled` and `refetchInterval` tied to the LIVE/PAUSED toggle.
* - LIVE: enabled=true, refetchInterval=intervalMs (fetch + poll)
* - PAUSED: enabled=false, refetchInterval=false (no fetch at all)
*/
export function useLiveQuery(intervalMs: number) {
const { autoRefresh } = useGlobalFilters();
return {
enabled: autoRefresh,
refetchInterval: autoRefresh ? intervalMs : false as number | false,
};
}
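The two helpers differ only in whether PAUSED merely stops polling (`useRefreshInterval`) or also blocks the initial fetch (`useLiveQuery`). A pure-function sketch of the latter's contract, testable without React:

```typescript
// Pure equivalent of useLiveQuery's return value: PAUSED disables both the
// initial fetch (enabled=false) and polling (refetchInterval=false).
function liveQueryOptions(autoRefresh: boolean, intervalMs: number): {
  enabled: boolean;
  refetchInterval: number | false;
} {
  return {
    enabled: autoRefresh,
    refetchInterval: autoRefresh ? intervalMs : false,
  };
}
```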

ui/src/api/schema.d.ts vendored

@@ -4,6 +4,30 @@
*/
export interface paths {
"/config/{application}": {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
/**
* Get application config
* @description Returns the current configuration for an application. Returns defaults if none stored.
*/
get: operations["getConfig"];
/**
* Update application config
* @description Saves config and pushes CONFIG_UPDATE to all LIVE agents of this application
*/
put: operations["updateConfig"];
post?: never;
delete?: never;
options?: never;
head?: never;
patch?: never;
trace?: never;
};
"/admin/users/{userId}": {
parameters: {
query?: never;
@@ -68,7 +92,7 @@ export interface paths {
cookie?: never;
};
/** Get OIDC configuration */
get: operations["getConfig_1"];
/** Save OIDC configuration */
put: operations["saveConfig"];
post?: never;
@@ -136,6 +160,26 @@ export interface paths {
patch?: never;
trace?: never;
};
"/data/logs": {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
get?: never;
put?: never;
/**
* Ingest application log entries
* @description Accepts a batch of log entries from an agent. Entries are indexed in OpenSearch.
*/
post: operations["ingestLogs"];
delete?: never;
options?: never;
head?: never;
patch?: never;
trace?: never;
};
"/data/executions": {
parameters: {
query?: never;
@@ -176,6 +220,23 @@ export interface paths {
patch?: never;
trace?: never;
};
"/config/{application}/test-expression": {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
get?: never;
put?: never;
/** Test a tap expression against sample data via a live agent */
post: operations["testExpression"];
delete?: never;
options?: never;
head?: never;
patch?: never;
trace?: never;
};
"/auth/refresh": {
parameters: {
query?: never;
@@ -278,7 +339,7 @@ export interface paths {
put?: never;
/**
* Send command to a specific agent
* @description Sends a command to the specified agent via SSE
*/
post: operations["sendCommand"];
delete?: never;
@@ -298,7 +359,7 @@ export interface paths {
put?: never;
/**
* Acknowledge command receipt
* @description Agent acknowledges that it has received and processed a command, with result status and message
*/
post: operations["acknowledgeCommand"];
delete?: never;
@@ -403,6 +464,23 @@ export interface paths {
patch?: never;
trace?: never;
};
"/admin/users/{userId}/password": {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
get?: never;
put?: never;
/** Reset user password */
post: operations["resetPassword"];
delete?: never;
options?: never;
head?: never;
patch?: never;
trace?: never;
};
"/admin/users/{userId}/groups/{groupId}": {
parameters: {
query?: never;
@@ -563,6 +641,26 @@ export interface paths {
patch?: never;
trace?: never;
};
"/routes/metrics/processors": {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
/**
* Get processor metrics
* @description Returns aggregated performance metrics per processor for the given route and time window
*/
get: operations["getProcessorMetrics"];
put?: never;
post?: never;
delete?: never;
options?: never;
head?: never;
patch?: never;
trace?: never;
};
"/routes/catalog": {
parameters: {
query?: never;
@@ -583,6 +681,26 @@ export interface paths {
patch?: never;
trace?: never;
};
"/logs": {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
/**
* Search application log entries
* @description Returns log entries for a given application, optionally filtered by agent, level, time range, and text query
*/
get: operations["searchLogs"];
put?: never;
post?: never;
delete?: never;
options?: never;
head?: never;
patch?: never;
trace?: never;
};
"/executions/{executionId}": {
parameters: {
query?: never;
@@ -617,6 +735,23 @@ export interface paths {
patch?: never;
trace?: never;
};
"/executions/{executionId}/processors/by-id/{processorId}/snapshot": {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
/** Get exchange snapshot for a processor by processorId */
get: operations["getProcessorSnapshotById"];
put?: never;
post?: never;
delete?: never;
options?: never;
head?: never;
patch?: never;
trace?: never;
};
"/diagrams": {
parameters: {
query?: never;
@@ -657,6 +792,26 @@ export interface paths {
patch?: never;
trace?: never;
};
"/config": {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
/**
* List all application configs
* @description Returns stored configurations for all applications
*/
get: operations["listConfigs"];
put?: never;
post?: never;
delete?: never;
options?: never;
head?: never;
patch?: never;
trace?: never;
};
"/auth/oidc/config": {
parameters: {
query?: never;
@@ -665,7 +820,7 @@ export interface paths {
cookie?: never;
};
/** Get OIDC config for SPA login flow */
get: operations["getConfig_2"];
put?: never;
post?: never;
delete?: never;
@@ -714,6 +869,22 @@ export interface paths {
patch?: never;
trace?: never;
};
"/agents/{agentId}/metrics": {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
get: operations["getMetrics_1"];
put?: never;
post?: never;
delete?: never;
options?: never;
head?: never;
patch?: never;
trace?: never;
};
"/agents/events-log": {
parameters: {
query?: never;
@@ -887,6 +1058,23 @@ export interface paths {
patch?: never;
trace?: never;
};
"/admin/database/metrics-pipeline": {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
/** Get metrics ingestion pipeline diagnostics */
get: operations["getMetricsPipeline"];
put?: never;
post?: never;
delete?: never;
options?: never;
head?: never;
patch?: never;
trace?: never;
};
"/admin/audit": {
parameters: {
query?: never;
@@ -925,6 +1113,41 @@ export interface paths {
export type webhooks = Record<string, never>;
export interface components {
schemas: {
ApplicationConfig: {
application?: string;
/** Format: int32 */
version?: number;
/** Format: date-time */
updatedAt?: string;
engineLevel?: string;
payloadCaptureMode?: string;
metricsEnabled?: boolean;
/** Format: double */
samplingRate?: number;
tracedProcessors?: {
[key: string]: string;
};
logForwardingLevel?: string;
taps?: components["schemas"]["TapDefinition"][];
/** Format: int32 */
tapVersion?: number;
routeRecording?: {
[key: string]: boolean;
};
compressSuccess?: boolean;
};
TapDefinition: {
tapId?: string;
processorId?: string;
target?: string;
expression?: string;
language?: string;
attributeName?: string;
attributeType?: string;
enabled?: boolean;
/** Format: int32 */
version?: number;
};
UpdateUserRequest: {
displayName?: string;
email?: string;
@@ -1103,6 +1326,10 @@ export interface components {
correlationId: string;
errorMessage: string;
diagramContentHash: string;
highlight: string;
attributes: {
[key: string]: string;
};
};
SearchResultExecutionSummary: {
data: components["schemas"]["ExecutionSummary"][];
@@ -1113,6 +1340,31 @@ export interface components {
/** Format: int32 */
limit: number;
};
LogBatch: {
entries?: components["schemas"]["LogEntry"][];
};
LogEntry: {
/** Format: date-time */
timestamp?: string;
level?: string;
loggerName?: string;
message?: string;
threadName?: string;
stackTrace?: string;
mdc?: {
[key: string]: string;
};
};
TestExpressionRequest: {
expression?: string;
language?: string;
body?: string;
target?: string;
};
TestExpressionResponse: {
result?: string;
error?: string;
};
RefreshRequest: {
refreshToken?: string;
};
@@ -1153,6 +1405,11 @@ export interface components {
commandId: string;
status: string;
};
CommandAckRequest: {
status?: string;
message?: string;
data?: string;
};
/** @description Agent registration payload */
AgentRegistrationRequest: {
agentId: string;
@@ -1211,6 +1468,9 @@ export interface components {
effectiveRoles?: components["schemas"]["RoleSummary"][];
effectiveGroups?: components["schemas"]["GroupSummary"][];
};
SetPasswordRequest: {
password?: string;
};
CreateRoleRequest: {
name?: string;
description?: string;
@@ -1283,6 +1543,22 @@ export interface components {
throughputPerSec: number;
sparkline: number[];
};
ProcessorMetrics: {
processorId: string;
processorType: string;
routeId: string;
appId: string;
/** Format: int64 */
totalCount: number;
/** Format: int64 */
failedCount: number;
/** Format: double */
avgDurationMs: number;
/** Format: double */
p99DurationMs: number;
/** Format: double */
errorRate: number;
};
/** @description Summary of an agent instance for sidebar display */
AgentSummary: {
id: string;
@@ -1310,10 +1586,26 @@ export interface components {
/** Format: date-time */
lastSeen: string;
};
/** @description Application log entry from OpenSearch */
LogEntryResponse: {
/** @description Log timestamp (ISO-8601) */
timestamp?: string;
/** @description Log level (INFO, WARN, ERROR, DEBUG) */
level?: string;
/** @description Logger name */
loggerName?: string;
/** @description Log message */
message?: string;
/** @description Thread name */
threadName?: string;
/** @description Stack trace (if present) */
stackTrace?: string;
};
ExecutionDetail: {
executionId: string;
routeId: string;
agentId: string;
applicationName: string;
status: string;
/** Format: date-time */
startTime: string;
@@ -1327,8 +1619,13 @@ export interface components {
errorStackTrace: string;
diagramContentHash: string;
processors: components["schemas"]["ProcessorNode"][];
inputBody: string;
outputBody: string;
inputHeaders: string;
outputHeaders: string;
attributes: {
[key: string]: string;
};
};
ProcessorNode: {
processorId: string;
@@ -1340,9 +1637,21 @@ export interface components {
endTime: string;
/** Format: int64 */
durationMs: number;
/** Format: int32 */
loopIndex?: number;
/** Format: int32 */
loopSize?: number;
/** Format: int32 */
splitIndex?: number;
/** Format: int32 */
splitSize?: number;
/** Format: int32 */
multicastIndex?: number;
errorMessage: string;
errorStackTrace: string;
attributes: {
[key: string]: string;
};
children: components["schemas"]["ProcessorNode"][];
};
DiagramLayout: {
@@ -1391,6 +1700,10 @@ export interface components {
registeredAt: string;
/** Format: date-time */
lastHeartbeat: string;
version: string;
capabilities: {
[key: string]: Record<string, never>;
};
/** Format: double */
tps: number;
/** Format: double */
@@ -1406,6 +1719,17 @@ export interface components {
/** Format: int64 */
timeout?: number;
};
AgentMetricsResponse: {
metrics: {
[key: string]: components["schemas"]["MetricBucket"][];
};
};
MetricBucket: {
/** Format: date-time */
time: string;
/** Format: double */
value: number;
};
/** @description Agent lifecycle event */
AgentEventResponse: {
/** Format: int64 */
@@ -1723,7 +2047,7 @@ export interface components {
username?: string;
action?: string;
/** @enum {string} */
category?: "INFRA" | "AUTH" | "USER_MGMT" | "CONFIG" | "RBAC" | "AGENT";
target?: string;
detail?: {
[key: string]: Record<string, never>;
@@ -1742,6 +2066,54 @@ export interface components {
}
export type $defs = Record<string, never>;
export interface operations {
getConfig: {
parameters: {
query?: never;
header?: never;
path: {
application: string;
};
cookie?: never;
};
requestBody?: never;
responses: {
/** @description Config returned */
200: {
headers: {
[name: string]: unknown;
};
content: {
"*/*": components["schemas"]["ApplicationConfig"];
};
};
};
};
updateConfig: {
parameters: {
query?: never;
header?: never;
path: {
application: string;
};
cookie?: never;
};
requestBody: {
content: {
"application/json": components["schemas"]["ApplicationConfig"];
};
};
responses: {
/** @description Config saved and pushed */
200: {
headers: {
[name: string]: unknown;
};
content: {
"*/*": components["schemas"]["ApplicationConfig"];
};
};
};
};
getUser: {
parameters: {
query?: never;
@@ -1971,7 +2343,7 @@ export interface operations {
};
};
};
getConfig_1: {
parameters: {
query?: never;
header?: never;
@@ -2149,7 +2521,7 @@ export interface operations {
routeId?: string;
agentId?: string;
processorType?: string;
application?: string;
offset?: number;
limit?: number;
sortField?: string;
@@ -2216,6 +2588,13 @@ export interface operations {
};
content?: never;
};
/** @description Invalid payload */
400: {
headers: {
[name: string]: unknown;
};
content?: never;
};
/** @description Buffer full, retry later */
503: {
headers: {
@@ -2225,6 +2604,28 @@ export interface operations {
};
};
};
ingestLogs: {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
requestBody: {
content: {
"application/json": components["schemas"]["LogBatch"];
};
};
responses: {
/** @description Logs accepted for indexing */
202: {
headers: {
[name: string]: unknown;
};
content?: never;
};
};
};
ingestExecutions: {
parameters: {
query?: never;
@@ -2269,6 +2670,50 @@ export interface operations {
};
};
};
testExpression: {
parameters: {
query?: never;
header?: never;
path: {
application: string;
};
cookie?: never;
};
requestBody: {
content: {
"application/json": components["schemas"]["TestExpressionRequest"];
};
};
responses: {
/** @description Expression evaluated successfully */
200: {
headers: {
[name: string]: unknown;
};
content: {
"*/*": components["schemas"]["TestExpressionResponse"];
};
};
/** @description No live agent available for this application */
404: {
headers: {
[name: string]: unknown;
};
content: {
"*/*": components["schemas"]["TestExpressionResponse"];
};
};
/** @description Agent did not respond in time */
504: {
headers: {
[name: string]: unknown;
};
content: {
"*/*": components["schemas"]["TestExpressionResponse"];
};
};
};
};
refresh: {
parameters: {
query?: never;
@@ -2511,7 +2956,11 @@ export interface operations {
};
cookie?: never;
};
requestBody?: {
content: {
"application/json": components["schemas"]["CommandAckRequest"];
};
};
responses: {
/** @description Command acknowledged */
200: {
@@ -2732,6 +3181,30 @@ export interface operations {
};
};
};
resetPassword: {
parameters: {
query?: never;
header?: never;
path: {
userId: string;
};
cookie?: never;
};
requestBody: {
content: {
"application/json": components["schemas"]["SetPasswordRequest"];
};
};
responses: {
/** @description Password reset */
204: {
headers: {
[name: string]: unknown;
};
content?: never;
};
};
};
addUserToGroup: {
parameters: {
query?: never;
@@ -3046,9 +3519,37 @@ export interface operations {
};
};
};
getProcessorMetrics: {
parameters: {
query: {
routeId: string;
appId?: string;
from?: string;
to?: string;
};
header?: never;
path?: never;
cookie?: never;
};
requestBody?: never;
responses: {
/** @description Metrics returned */
200: {
headers: {
[name: string]: unknown;
};
content: {
"*/*": components["schemas"]["ProcessorMetrics"][];
};
};
};
};
getCatalog: {
parameters: {
query?: {
from?: string;
to?: string;
};
header?: never;
path?: never;
cookie?: never;
@@ -3066,6 +3567,35 @@ export interface operations {
};
};
};
searchLogs: {
parameters: {
query: {
application: string;
agentId?: string;
level?: string;
query?: string;
exchangeId?: string;
from?: string;
to?: string;
limit?: number;
};
header?: never;
path?: never;
cookie?: never;
};
requestBody?: never;
responses: {
/** @description OK */
200: {
headers: {
[name: string]: unknown;
};
content: {
"*/*": components["schemas"]["LogEntryResponse"][];
};
};
};
};
getDetail: {
parameters: {
query?: never;
@@ -3133,11 +3663,49 @@ export interface operations {
};
};
};
getProcessorSnapshotById: {
parameters: {
query?: never;
header?: never;
path: {
executionId: string;
processorId: string;
};
cookie?: never;
};
requestBody?: never;
responses: {
/** @description Snapshot data */
200: {
headers: {
[name: string]: unknown;
};
content: {
"*/*": {
[key: string]: string;
};
};
};
/** @description Snapshot not found */
404: {
headers: {
[name: string]: unknown;
};
content: {
"*/*": {
[key: string]: string;
};
};
};
};
};
findByApplicationAndRoute: {
parameters: {
query: {
application: string;
routeId: string;
/** @description Layout direction: LR (left-to-right) or TB (top-to-bottom) */
direction?: "LR" | "TB";
};
header?: never;
path?: never;
@@ -3167,7 +3735,10 @@ export interface operations {
};
renderDiagram: {
parameters: {
query?: {
/** @description Layout direction: LR (left-to-right) or TB (top-to-bottom) */
direction?: "LR" | "TB";
};
header?: never;
path: {
contentHash: string;
@@ -3197,7 +3768,27 @@ export interface operations {
};
};
};
listConfigs: {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
requestBody?: never;
responses: {
/** @description Configs returned */
200: {
headers: {
[name: string]: unknown;
};
content: {
"*/*": components["schemas"]["ApplicationConfig"][];
};
};
};
};
getConfig_2: {
parameters: {
query?: never;
header?: never;
@@ -3301,6 +3892,33 @@ export interface operations {
};
};
};
getMetrics_1: {
parameters: {
query: {
names: string;
from?: string;
to?: string;
buckets?: number;
};
header?: never;
path: {
agentId: string;
};
cookie?: never;
};
requestBody?: never;
responses: {
/** @description OK */
200: {
headers: {
[name: string]: unknown;
};
content: {
"*/*": components["schemas"]["AgentMetricsResponse"];
};
};
};
};
getEvents: {
parameters: {
query?: {
@@ -3413,6 +4031,7 @@ export interface operations {
page?: number;
size?: number;
search?: string;
prefix?: string;
};
header?: never;
path?: never;
@@ -3511,6 +4130,28 @@ export interface operations {
};
};
};
getMetricsPipeline: {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
requestBody?: never;
responses: {
/** @description OK */
200: {
headers: {
[name: string]: unknown;
};
content: {
"*/*": {
[key: string]: Record<string, never>;
};
};
};
};
};
getAuditLog: {
parameters: {
query?: {

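As a quick sanity check of the new schema additions, here is a payload that satisfies a trimmed-down copy of the generated `ApplicationConfig`/`TapDefinition` shapes (only a subset of fields is reproduced here for illustration; consumers should import the real types from `schema.d.ts`):

```typescript
// Trimmed local copies of the generated shapes, for illustration only.
interface TapDefinition {
  tapId?: string;
  processorId?: string;
  target?: string;
  expression?: string;
  language?: string;
  enabled?: boolean;
}

interface ApplicationConfig {
  application?: string;
  engineLevel?: string;
  samplingRate?: number;
  taps?: TapDefinition[];
  routeRecording?: { [key: string]: boolean };
  compressSuccess?: boolean;
}

// Hypothetical config payload; field values are examples, not defaults.
const exampleConfig: ApplicationConfig = {
  application: 'orders-service',
  samplingRate: 0.25,
  taps: [
    { tapId: 'tap-1', processorId: 'validate-order', expression: '${body}', language: 'simple', enabled: true },
  ],
  routeRecording: { 'orders-route': true },
  compressSuccess: true,
};
```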

@@ -0,0 +1,85 @@
.page {
display: flex;
align-items: center;
justify-content: center;
min-height: 100vh;
background: var(--bg-base);
}
.card {
width: 100%;
max-width: 400px;
padding: 32px;
}
.loginForm {
display: flex;
flex-direction: column;
align-items: center;
font-family: var(--font-body);
width: 100%;
}
.logo {
margin-bottom: 8px;
font-size: 24px;
font-weight: 700;
color: var(--text-primary);
}
.subtitle {
font-size: 13px;
color: var(--text-muted);
margin: 0 0 24px;
}
.error {
width: 100%;
margin-bottom: 16px;
}
.socialSection {
display: flex;
flex-direction: column;
gap: 8px;
width: 100%;
margin-bottom: 20px;
}
.divider {
display: flex;
align-items: center;
gap: 12px;
width: 100%;
margin-bottom: 20px;
}
.dividerLine {
flex: 1;
height: 1px;
background: var(--border);
}
.dividerText {
color: var(--text-muted);
font-size: 11px;
text-transform: uppercase;
letter-spacing: 0.5px;
font-weight: 500;
}
.fields {
display: flex;
flex-direction: column;
gap: 14px;
width: 100%;
}
.submitButton {
width: 100%;
}
.ssoButton {
width: 100%;
justify-content: center;
}


@@ -3,6 +3,7 @@ import { Navigate } from 'react-router';
import { useAuthStore } from './auth-store';
import { api } from '../api/client';
import { Card, Input, Button, Alert, FormField } from '@cameleer/design-system';
import styles from './LoginPage.module.css';
interface OidcInfo {
clientId: string;
@@ -50,53 +51,75 @@ export function LoginPage() {
};

return (
<div className={styles.page}>
<Card className={styles.card}>
<div className={styles.loginForm}>
<div className={styles.logo}>cameleer3</div>
<p className={styles.subtitle}>Sign in to access the observability dashboard</p>

{error && (
<div className={styles.error}>
<Alert variant="error">{error}</Alert>
</div>
)}

{oidc && (
<>
<div className={styles.socialSection}>
<Button
variant="secondary"
className={styles.ssoButton}
onClick={handleOidcLogin}
disabled={oidcLoading}
type="button"
>
{oidcLoading ? 'Redirecting...' : 'Sign in with SSO'}
</Button>
</div>
<div className={styles.divider}>
<div className={styles.dividerLine} />
<span className={styles.dividerText}>or</span>
<div className={styles.dividerLine} />
</div>
</>
)}

<form className={styles.fields} onSubmit={handleSubmit} aria-label="Sign in" noValidate>
<FormField label="Username" htmlFor="login-username">
<Input
id="login-username"
value={username}
onChange={(e) => setUsername(e.target.value)}
placeholder="Enter your username"
autoFocus
autoComplete="username"
disabled={loading}
/>
</FormField>
<FormField label="Password" htmlFor="login-password">
<Input
id="login-password"
type="password"
value={password}
onChange={(e) => setPassword(e.target.value)} value={password}
autoComplete="current-password" onChange={(e) => setPassword(e.target.value)}
/> placeholder="••••••••"
</FormField> autoComplete="current-password"
disabled={loading}
/>
</FormField>
<Button variant="primary" disabled={loading || !username || !password} style={{ width: '100%', marginTop: '0.5rem' }}> <Button
{loading ? 'Signing in...' : 'Sign In'} variant="primary"
</Button> type="submit"
loading={loading}
{error && <div style={{ marginTop: '1rem' }}><Alert variant="error">{error}</Alert></div>} disabled={loading || !username || !password}
</form> className={styles.submitButton}
>
Sign in
</Button>
</form>
</div>
</Card> </Card>
</div> </div>
); );


@@ -0,0 +1,164 @@
import { useState, useEffect } from 'react';
import type { ProcessorNode, ExecutionDetail, DetailTab } from './types';
import { useProcessorSnapshotById } from '../../api/queries/executions';
import { InfoTab } from './tabs/InfoTab';
import { HeadersTab } from './tabs/HeadersTab';
import { BodyTab } from './tabs/BodyTab';
import { ErrorTab } from './tabs/ErrorTab';
import { ConfigTab } from './tabs/ConfigTab';
import { TimelineTab } from './tabs/TimelineTab';
import styles from './ExecutionDiagram.module.css';
interface DetailPanelProps {
selectedProcessor: ProcessorNode | null;
executionDetail: ExecutionDetail;
executionId: string;
onSelectProcessor: (processorId: string) => void;
}
const TABS: { key: DetailTab; label: string }[] = [
{ key: 'info', label: 'Info' },
{ key: 'headers', label: 'Headers' },
{ key: 'input', label: 'Input' },
{ key: 'output', label: 'Output' },
{ key: 'error', label: 'Error' },
{ key: 'config', label: 'Config' },
{ key: 'timeline', label: 'Timeline' },
];
function formatDuration(ms: number | null | undefined): string {
if (ms === undefined || ms === null) return '-';
if (ms < 1000) return `${ms}ms`;
return `${(ms / 1000).toFixed(1)}s`;
}
function statusClass(status: string): string {
const s = status?.toUpperCase();
if (s === 'COMPLETED') return styles.statusCompleted;
if (s === 'FAILED') return styles.statusFailed;
return '';
}
export function DetailPanel({
selectedProcessor,
executionDetail,
executionId,
onSelectProcessor,
}: DetailPanelProps) {
const [activeTab, setActiveTab] = useState<DetailTab>('info');
// Keep the active tab across selection changes: the Input and Output tabs
// are meaningful at exchange level too, so no reset is needed here.
const prevProcessorId = selectedProcessor?.processorId;
useEffect(() => {
// intentional no-op: tab state is preserved when the selection changes
}, [prevProcessorId]); // eslint-disable-line react-hooks/exhaustive-deps
const hasError = selectedProcessor
? !!selectedProcessor.errorMessage
: !!executionDetail.errorMessage;
// Fetch snapshot for body tabs when a processor is selected
const snapshotQuery = useProcessorSnapshotById(
selectedProcessor ? executionId : null,
selectedProcessor?.processorId ?? null,
);
// Determine body content for Input/Output tabs
let inputBody: string | undefined;
let outputBody: string | undefined;
if (selectedProcessor && snapshotQuery.data) {
inputBody = snapshotQuery.data.inputBody;
outputBody = snapshotQuery.data.outputBody;
} else if (!selectedProcessor) {
inputBody = executionDetail.inputBody;
outputBody = executionDetail.outputBody;
}
// Header display
const headerName = selectedProcessor ? selectedProcessor.processorType : 'Exchange';
const headerStatus = selectedProcessor ? selectedProcessor.status : executionDetail.status;
const headerId = selectedProcessor ? selectedProcessor.processorId : executionDetail.executionId;
const headerDuration = selectedProcessor ? selectedProcessor.durationMs : executionDetail.durationMs;
return (
<div className={styles.detailPanel}>
{/* Processor / Exchange header bar */}
<div className={styles.processorHeader}>
<span className={styles.processorName}>{headerName}</span>
<span className={`${styles.statusBadge} ${statusClass(headerStatus)}`}>
{headerStatus}
</span>
<span className={styles.processorId}>{headerId}</span>
<span className={styles.processorDuration}>{formatDuration(headerDuration)}</span>
</div>
{/* Tab bar */}
<div className={styles.tabBar}>
{TABS.map((tab) => {
const isActive = activeTab === tab.key;
const isDisabled = tab.key === 'config';
const isError = tab.key === 'error' && hasError;
const isErrorGrayed = tab.key === 'error' && !hasError;
let className = styles.tab;
if (isActive) className += ` ${styles.tabActive}`;
if (isDisabled) className += ` ${styles.tabDisabled}`;
if (isError && !isActive) className += ` ${styles.tabError}`;
if (isErrorGrayed && !isActive) className += ` ${styles.tabDisabled}`;
return (
<button
key={tab.key}
className={className}
onClick={() => {
if (!isDisabled) setActiveTab(tab.key);
}}
type="button"
>
{tab.label}
</button>
);
})}
</div>
{/* Tab content */}
<div className={styles.tabContent}>
{activeTab === 'info' && (
<InfoTab processor={selectedProcessor} executionDetail={executionDetail} />
)}
{activeTab === 'headers' && (
<HeadersTab
executionId={executionId}
processorId={selectedProcessor?.processorId ?? null}
exchangeInputHeaders={executionDetail.inputHeaders}
exchangeOutputHeaders={executionDetail.outputHeaders}
/>
)}
{activeTab === 'input' && (
<BodyTab body={inputBody} label="Input" />
)}
{activeTab === 'output' && (
<BodyTab body={outputBody} label="Output" />
)}
{activeTab === 'error' && (
<ErrorTab processor={selectedProcessor} executionDetail={executionDetail} />
)}
{activeTab === 'config' && (
<ConfigTab />
)}
{activeTab === 'timeline' && (
<TimelineTab
executionDetail={executionDetail}
selectedProcessorId={selectedProcessor?.processorId ?? null}
onSelectProcessor={onSelectProcessor}
/>
)}
</div>
</div>
);
}
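The duration formatting used in the header bar switches units at one second. A standalone sketch of the same logic (no React required) shows the expected rendering:

```typescript
// Standalone copy of DetailPanel's formatDuration logic.
function formatDuration(ms: number | null | undefined): string {
  if (ms === undefined || ms === null) return '-';
  if (ms < 1000) return `${ms}ms`;           // sub-second values stay in ms
  return `${(ms / 1000).toFixed(1)}s`;       // one decimal place above 1s
}

console.log(formatDuration(250));       // "250ms"
console.log(formatDuration(1500));      // "1.5s"
console.log(formatDuration(undefined)); // "-"
```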


@@ -0,0 +1,538 @@
/* ==========================================================================
EXECUTION DIAGRAM — LAYOUT
========================================================================== */
.executionDiagram {
display: flex;
flex-direction: column;
width: 100%;
height: 100%;
min-height: 400px;
overflow: hidden;
}
.exchangeBar {
display: flex;
align-items: center;
gap: 12px;
padding: 8px 14px;
background: var(--bg-surface, #FFFFFF);
border-bottom: 1px solid var(--border, #E4DFD8);
font-size: 12px;
color: var(--text-secondary, #5C5347);
flex-shrink: 0;
}
.exchangeLabel {
font-weight: 600;
color: var(--text-primary, #1A1612);
}
.exchangeId {
font-size: 11px;
background: var(--bg-hover, #F5F0EA);
padding: 2px 6px;
border-radius: 3px;
color: var(--text-primary, #1A1612);
}
.exchangeMeta {
color: var(--text-muted, #9C9184);
}
.jumpToError {
margin-left: auto;
font-size: 10px;
padding: 3px 10px;
border: 1px solid var(--error, #C0392B);
background: #FDF2F0;
color: var(--error, #C0392B);
border-radius: 4px;
cursor: pointer;
font-weight: 500;
font-family: inherit;
}
.jumpToError:hover {
background: #F9E0DC;
}
.diagramArea {
overflow: hidden;
position: relative;
}
.splitter {
height: 4px;
background: var(--border, #E4DFD8);
cursor: row-resize;
flex-shrink: 0;
}
.splitter:hover {
background: var(--amber, #C6820E);
}
.detailArea {
overflow: hidden;
min-height: 120px;
}
.loadingState {
display: flex;
align-items: center;
justify-content: center;
flex: 1;
color: var(--text-muted, #9C9184);
font-size: 13px;
}
.errorState {
display: flex;
align-items: center;
justify-content: center;
flex: 1;
color: var(--error, #C0392B);
font-size: 13px;
}
.statusRunning {
color: var(--amber, #C6820E);
background: #FFF8F0;
}
/* ==========================================================================
DETAIL PANEL
========================================================================== */
.detailPanel {
display: flex;
flex-direction: column;
overflow: hidden;
background: var(--bg-surface, #FFFFFF);
border-top: 1px solid var(--border, #E4DFD8);
}
.processorHeader {
display: flex;
flex-direction: row;
align-items: center;
gap: 10px;
padding: 6px 14px;
border-bottom: 1px solid var(--border, #E4DFD8);
background: #FAFAF8;
min-height: 32px;
}
.processorName {
font-size: 12px;
font-weight: 600;
color: var(--text-primary, #1A1612);
}
.processorId {
font-size: 11px;
font-family: var(--font-mono, monospace);
color: var(--text-muted, #9C9184);
}
.processorDuration {
font-size: 11px;
font-family: var(--font-mono, monospace);
color: var(--text-secondary, #5C5347);
margin-left: auto;
}
/* ==========================================================================
STATUS BADGE
========================================================================== */
.statusBadge {
font-size: 10px;
font-weight: 600;
padding: 1px 6px;
border-radius: 8px;
text-transform: uppercase;
letter-spacing: 0.3px;
}
.statusCompleted {
color: var(--success, #3D7C47);
background: #F0F9F1;
}
.statusFailed {
color: var(--error, #C0392B);
background: #FDF2F0;
}
/* ==========================================================================
TAB BAR
========================================================================== */
.tabBar {
display: flex;
flex-direction: row;
border-bottom: 1px solid var(--border, #E4DFD8);
padding: 0 14px;
background: #FAFAF8;
gap: 0;
}
.tab {
padding: 6px 12px;
font-size: 11px;
font-family: var(--font-body, inherit);
cursor: pointer;
color: var(--text-muted, #9C9184);
border: none;
background: none;
border-bottom: 2px solid transparent;
transition: color 0.12s, border-color 0.12s;
white-space: nowrap;
}
.tab:hover {
color: var(--text-secondary, #5C5347);
}
.tabActive {
color: var(--amber, #C6820E);
border-bottom: 2px solid var(--amber, #C6820E);
font-weight: 600;
}
.tabDisabled {
opacity: 0.4;
cursor: default;
}
.tabDisabled:hover {
color: var(--text-muted, #9C9184);
}
.tabError {
color: var(--error, #C0392B);
}
.tabError:hover {
color: var(--error, #C0392B);
}
/* ==========================================================================
TAB CONTENT
========================================================================== */
.tabContent {
flex: 1;
overflow-y: auto;
padding: 10px 14px;
}
/* ==========================================================================
INFO TAB — GRID
========================================================================== */
.infoGrid {
display: grid;
grid-template-columns: 1fr 1fr 1fr;
gap: 12px 24px;
}
.fieldLabel {
font-size: 10px;
color: var(--text-muted, #9C9184);
text-transform: uppercase;
letter-spacing: 0.5px;
margin-bottom: 2px;
}
.fieldValue {
font-size: 12px;
color: var(--text-primary, #1A1612);
word-break: break-all;
}
.fieldValueMono {
font-size: 12px;
color: var(--text-primary, #1A1612);
font-family: var(--font-mono, monospace);
word-break: break-all;
}
/* ==========================================================================
ATTRIBUTE PILLS
========================================================================== */
.attributesSection {
margin-top: 14px;
padding-top: 10px;
border-top: 1px solid var(--border, #E4DFD8);
}
.attributesLabel {
font-size: 10px;
color: var(--text-muted, #9C9184);
text-transform: uppercase;
letter-spacing: 0.5px;
margin-bottom: 6px;
}
.attributesList {
display: flex;
flex-wrap: wrap;
gap: 6px;
}
.attributePill {
font-size: 10px;
padding: 2px 8px;
background: var(--bg-hover, #F5F0EA);
border-radius: 10px;
color: var(--text-secondary, #5C5347);
font-family: var(--font-mono, monospace);
}
/* ==========================================================================
HEADERS TAB — SPLIT
========================================================================== */
.headersSplit {
display: flex;
gap: 0;
min-height: 0;
overflow-y: auto;
max-height: 100%;
}
.headersColumn {
flex: 1;
min-width: 0;
}
.headersColumn + .headersColumn {
border-left: 1px solid var(--border, #E4DFD8);
padding-left: 14px;
margin-left: 14px;
}
.headersColumnLabel {
font-size: 10px;
font-weight: 600;
color: var(--text-muted, #9C9184);
text-transform: uppercase;
letter-spacing: 0.5px;
margin-bottom: 6px;
}
.headersTable {
width: 100%;
font-size: 11px;
border-collapse: collapse;
}
.headersTable td {
padding: 3px 0;
border-bottom: 1px solid var(--border, #E4DFD8);
vertical-align: top;
}
.headersTable tr:last-child td {
border-bottom: none;
}
.headerKey {
font-family: var(--font-mono, monospace);
font-weight: 600;
color: var(--text-muted, #9C9184);
white-space: nowrap;
padding-right: 12px;
width: 140px;
max-width: 140px;
overflow: hidden;
text-overflow: ellipsis;
}
.headerVal {
font-family: var(--font-mono, monospace);
color: var(--text-primary, #1A1612);
word-break: break-all;
}
/* ==========================================================================
BODY / CODE TAB
========================================================================== */
.codeHeader {
display: flex;
align-items: center;
gap: 8px;
margin-bottom: 8px;
}
.codeFormat {
font-size: 10px;
font-weight: 600;
color: var(--text-muted, #9C9184);
text-transform: uppercase;
letter-spacing: 0.5px;
}
.codeSize {
font-size: 10px;
color: var(--text-muted, #9C9184);
font-family: var(--font-mono, monospace);
}
.codeCopyBtn {
margin-left: auto;
font-size: 10px;
font-family: var(--font-body, inherit);
padding: 2px 8px;
border: 1px solid var(--border, #E4DFD8);
border-radius: 4px;
background: var(--bg-surface, #FFFFFF);
color: var(--text-secondary, #5C5347);
cursor: pointer;
}
.codeCopyBtn:hover {
background: var(--bg-hover, #F5F0EA);
}
.codeBlock {
background: #1A1612;
color: #E4DFD8;
padding: 12px;
border-radius: 6px;
font-family: var(--font-mono, monospace);
font-size: 11px;
line-height: 1.5;
overflow-x: auto;
white-space: pre-wrap;
word-break: break-all;
max-height: 400px;
overflow-y: auto;
}
/* ==========================================================================
ERROR TAB
========================================================================== */
.errorType {
font-size: 13px;
font-weight: 600;
color: var(--error, #C0392B);
margin-bottom: 8px;
}
.errorMessage {
font-size: 12px;
color: var(--text-primary, #1A1612);
background: #FDF2F0;
border: 1px solid #F5D5D0;
border-radius: 6px;
padding: 10px 12px;
margin-bottom: 12px;
line-height: 1.5;
word-break: break-word;
}
.errorStackTrace {
background: #1A1612;
color: #E4DFD8;
padding: 12px;
border-radius: 6px;
font-family: var(--font-mono, monospace);
font-size: 10px;
line-height: 1.5;
overflow-x: auto;
white-space: pre;
max-height: 300px;
overflow-y: auto;
}
.errorStackLabel {
font-size: 10px;
font-weight: 600;
color: var(--text-muted, #9C9184);
text-transform: uppercase;
letter-spacing: 0.5px;
margin-bottom: 6px;
}
/* ==========================================================================
TIMELINE / GANTT TAB
========================================================================== */
.ganttContainer {
display: flex;
flex-direction: column;
gap: 2px;
overflow-y: auto;
max-height: 100%;
}
.ganttRow {
display: flex;
align-items: center;
gap: 8px;
cursor: pointer;
padding: 3px 4px;
border-radius: 3px;
transition: background 0.1s;
}
.ganttRow:hover {
background: var(--bg-hover, #F5F0EA);
}
.ganttSelected {
background: #FFF8F0;
}
.ganttSelected:hover {
background: #FFF8F0;
}
.ganttLabel {
width: 100px;
min-width: 100px;
font-size: 10px;
color: var(--text-secondary, #5C5347);
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
.ganttBar {
flex: 1;
height: 16px;
background: var(--bg-hover, #F5F0EA);
border-radius: 2px;
position: relative;
min-width: 0;
}
.ganttFill {
position: absolute;
height: 100%;
border-radius: 2px;
min-width: 2px;
}
.ganttFillCompleted {
background: var(--success, #3D7C47);
}
.ganttFillFailed {
background: var(--error, #C0392B);
}
.ganttDuration {
width: 50px;
min-width: 50px;
font-size: 10px;
font-family: var(--font-mono, monospace);
color: var(--text-muted, #9C9184);
text-align: right;
}
/* ==========================================================================
EMPTY STATE
========================================================================== */
.emptyState {
text-align: center;
color: var(--text-muted, #9C9184);
font-size: 12px;
padding: 20px;
}


@@ -0,0 +1,215 @@
import { useCallback, useRef, useState } from 'react';
import type { NodeAction, NodeConfig } from '../ProcessDiagram/types';
import type { ExecutionDetail, ProcessorNode } from './types';
import { useExecutionDetail } from '../../api/queries/executions';
import { useDiagramLayout } from '../../api/queries/diagrams';
import { ProcessDiagram } from '../ProcessDiagram';
import { DetailPanel } from './DetailPanel';
import { useExecutionOverlay } from './useExecutionOverlay';
import { useIterationState } from './useIterationState';
import styles from './ExecutionDiagram.module.css';
interface ExecutionDiagramProps {
executionId: string;
executionDetail?: ExecutionDetail;
direction?: 'LR' | 'TB';
knownRouteIds?: Set<string>;
onNodeAction?: (nodeId: string, action: NodeAction) => void;
nodeConfigs?: Map<string, NodeConfig>;
className?: string;
}
function findProcessorInTree(
nodes: ProcessorNode[] | undefined,
processorId: string | null,
): ProcessorNode | null {
if (!nodes || !processorId) return null;
for (const n of nodes) {
if (n.processorId === processorId) return n;
if (n.children) {
const found = findProcessorInTree(n.children, processorId);
if (found) return found;
}
}
return null;
}
function findFailedProcessor(nodes: ProcessorNode[]): ProcessorNode | null {
for (const n of nodes) {
if (n.status === 'FAILED') return n;
if (n.children) {
const found = findFailedProcessor(n.children);
if (found) return found;
}
}
return null;
}
function statusBadgeClass(status: string): string {
const s = status?.toUpperCase();
if (s === 'COMPLETED') return `${styles.statusBadge} ${styles.statusCompleted}`;
if (s === 'FAILED') return `${styles.statusBadge} ${styles.statusFailed}`;
if (s === 'RUNNING') return `${styles.statusBadge} ${styles.statusRunning}`;
return styles.statusBadge;
}
export function ExecutionDiagram({
executionId,
executionDetail: externalDetail,
direction = 'LR',
knownRouteIds,
onNodeAction,
nodeConfigs,
className,
}: ExecutionDiagramProps) {
// 1. Fetch execution data (skip if pre-fetched prop provided)
const detailQuery = useExecutionDetail(externalDetail ? null : executionId);
const detail = externalDetail ?? detailQuery.data;
const detailLoading = !externalDetail && detailQuery.isLoading;
const detailError = !externalDetail && detailQuery.error;
// 2. Load diagram by content hash
const diagramQuery = useDiagramLayout(detail?.diagramContentHash ?? null, direction);
const diagramLayout = diagramQuery.data;
const diagramLoading = diagramQuery.isLoading;
const diagramError = diagramQuery.error;
// 3. Initialize iteration state
const { iterationState, setIteration } = useIterationState(detail?.processors);
// 4. Compute overlay
const overlay = useExecutionOverlay(detail?.processors, iterationState);
// 5. Manage selection + center-on-node
const [selectedProcessorId, setSelectedProcessorId] = useState<string>('');
const [centerOnNodeId, setCenterOnNodeId] = useState<string>('');
// 6. Resizable splitter state
const [splitPercent, setSplitPercent] = useState(60);
const containerRef = useRef<HTMLDivElement>(null);
const handleSplitterDown = useCallback((e: React.PointerEvent) => {
e.currentTarget.setPointerCapture(e.pointerId);
const container = containerRef.current;
if (!container) return;
const onMove = (me: PointerEvent) => {
const rect = container.getBoundingClientRect();
const y = me.clientY - rect.top;
const pct = Math.min(85, Math.max(30, (y / rect.height) * 100));
setSplitPercent(pct);
};
const onUp = () => {
document.removeEventListener('pointermove', onMove);
document.removeEventListener('pointerup', onUp);
};
document.addEventListener('pointermove', onMove);
document.addEventListener('pointerup', onUp);
}, []);
// Jump to error: find first FAILED processor, select it, and center the viewport
const handleJumpToError = useCallback(() => {
if (!detail?.processors) return;
const failed = findFailedProcessor(detail.processors);
if (failed?.processorId) {
setSelectedProcessorId(failed.processorId);
// Use a unique value to re-trigger centering even if the same node
setCenterOnNodeId('');
requestAnimationFrame(() => setCenterOnNodeId(failed.processorId));
}
}, [detail?.processors]);
// Loading state
if (detailLoading || (detail && diagramLoading)) {
return (
<div className={`${styles.executionDiagram} ${className ?? ''}`}>
<div className={styles.loadingState}>Loading execution data...</div>
</div>
);
}
// Error state
if (detailError) {
return (
<div className={`${styles.executionDiagram} ${className ?? ''}`}>
<div className={styles.errorState}>Failed to load execution detail</div>
</div>
);
}
if (diagramError) {
return (
<div className={`${styles.executionDiagram} ${className ?? ''}`}>
<div className={styles.errorState}>Failed to load diagram</div>
</div>
);
}
if (!detail) {
return (
<div className={`${styles.executionDiagram} ${className ?? ''}`}>
<div className={styles.loadingState}>No execution data</div>
</div>
);
}
return (
<div ref={containerRef} className={`${styles.executionDiagram} ${className ?? ''}`}>
{/* Exchange summary bar */}
<div className={styles.exchangeBar}>
<span className={styles.exchangeLabel}>Exchange</span>
<code className={styles.exchangeId}>{detail.exchangeId || detail.executionId}</code>
<span className={statusBadgeClass(detail.status)}>
{detail.status}
</span>
<span className={styles.exchangeMeta}>
{detail.applicationName} / {detail.routeId}
</span>
<span className={styles.exchangeMeta}>{detail.durationMs}ms</span>
{detail.status === 'FAILED' && (
<button
className={styles.jumpToError}
onClick={handleJumpToError}
type="button"
>
Jump to Error
</button>
)}
</div>
{/* Diagram area */}
<div className={styles.diagramArea} style={{ height: `${splitPercent}%` }}>
<ProcessDiagram
application={detail.applicationName}
routeId={detail.routeId}
direction={direction}
diagramLayout={diagramLayout}
selectedNodeId={selectedProcessorId}
onNodeSelect={setSelectedProcessorId}
onNodeAction={onNodeAction}
nodeConfigs={nodeConfigs}
knownRouteIds={knownRouteIds}
executionOverlay={overlay}
iterationState={iterationState}
onIterationChange={setIteration}
centerOnNodeId={centerOnNodeId}
/>
</div>
{/* Resizable splitter */}
<div
className={styles.splitter}
onPointerDown={handleSplitterDown}
/>
{/* Detail panel */}
<div className={styles.detailArea} style={{ height: `${100 - splitPercent}%` }}>
<DetailPanel
selectedProcessor={findProcessorInTree(detail.processors, selectedProcessorId || null)}
executionDetail={detail}
executionId={executionId}
onSelectProcessor={setSelectedProcessorId}
/>
</div>
</div>
);
}
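The splitter handler clamps the drag position to 30–85% of the container height, so neither the diagram nor the detail panel can be collapsed entirely. The clamp can be sketched in isolation (the helper name is illustrative; the component inlines this math):

```typescript
// Mirrors the clamp in handleSplitterDown: pointer Y relative to the
// container top, converted to a percentage and bounded to [30, 85].
function clampSplit(clientY: number, containerTop: number, containerHeight: number): number {
  const y = clientY - containerTop;
  return Math.min(85, Math.max(30, (y / containerHeight) * 100));
}

console.log(clampSplit(500, 0, 1000)); // 50 — mid-drag maps directly
console.log(clampSplit(950, 0, 1000)); // 85 — detail panel keeps its minimum height
console.log(clampSplit(100, 0, 1000)); // 30 — diagram keeps its minimum height
```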

View File

@@ -0,0 +1,2 @@
export { ExecutionDiagram } from './ExecutionDiagram';
export type { NodeExecutionState, IterationInfo, DetailTab } from './types';


@@ -0,0 +1,47 @@
import { CodeBlock } from '@cameleer/design-system';
import styles from '../ExecutionDiagram.module.css';
interface BodyTabProps {
body: string | undefined;
label: string;
}
function detectLanguage(text: string): string {
const trimmed = text.trimStart();
if (trimmed.startsWith('{') || trimmed.startsWith('[')) {
try {
JSON.parse(text);
return 'json';
} catch {
// not valid JSON
}
}
if (trimmed.startsWith('<')) return 'xml';
return 'text';
}
function formatBody(text: string, language: string): string {
if (language === 'json') {
try {
return JSON.stringify(JSON.parse(text), null, 2);
} catch {
return text;
}
}
return text;
}
export function BodyTab({ body, label }: BodyTabProps) {
if (!body) {
return <div className={styles.emptyState}>No {label.toLowerCase()} body available</div>;
}
const language = detectLanguage(body);
const formatted = formatBody(body, language);
return (
<div>
<CodeBlock content={formatted} language={language} copyable />
</div>
);
}
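BodyTab's format detection is heuristic: a body that starts with `{` or `[` is only treated as JSON if it actually parses, and anything starting with `<` is assumed to be XML. The same two helpers, run standalone, behave as follows:

```typescript
// Same heuristics as BodyTab's detectLanguage/formatBody.
function detectLanguage(text: string): string {
  const trimmed = text.trimStart();
  if (trimmed.startsWith('{') || trimmed.startsWith('[')) {
    try {
      JSON.parse(text);
      return 'json';
    } catch {
      // looked like JSON but did not parse; fall through
    }
  }
  if (trimmed.startsWith('<')) return 'xml';
  return 'text';
}

function formatBody(text: string, language: string): string {
  if (language === 'json') {
    try {
      // pretty-print with two-space indentation
      return JSON.stringify(JSON.parse(text), null, 2);
    } catch {
      return text;
    }
  }
  return text;
}

console.log(detectLanguage('{"a":1}'));          // "json"
console.log(detectLanguage('<root/>'));          // "xml"
console.log(detectLanguage('{not valid json'));  // "text"
console.log(formatBody('{"a":1}', 'json'));      // '{\n  "a": 1\n}'
```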


@@ -0,0 +1,9 @@
import styles from '../ExecutionDiagram.module.css';
export function ConfigTab() {
return (
<div className={styles.emptyState}>
Processor configuration data is not yet available.
</div>
);
}


@@ -0,0 +1,46 @@
import { CodeBlock } from '@cameleer/design-system';
import type { ProcessorNode, ExecutionDetail } from '../types';
import styles from '../ExecutionDiagram.module.css';
interface ErrorTabProps {
processor: ProcessorNode | null;
executionDetail: ExecutionDetail;
}
function extractExceptionType(errorMessage: string): string {
const colonIdx = errorMessage.indexOf(':');
if (colonIdx > 0) {
return errorMessage.substring(0, colonIdx).trim();
}
return 'Error';
}
export function ErrorTab({ processor, executionDetail }: ErrorTabProps) {
const errorMessage = processor?.errorMessage || executionDetail.errorMessage;
const errorStackTrace = processor?.errorStackTrace || executionDetail.errorStackTrace;
if (!errorMessage) {
return (
<div className={styles.emptyState}>
{processor
? 'No error on this processor'
: 'No error on this exchange'}
</div>
);
}
const exceptionType = extractExceptionType(errorMessage);
return (
<div>
<div className={styles.errorType}>{exceptionType}</div>
<div className={styles.errorMessage}>{errorMessage}</div>
{errorStackTrace && (
<>
<div className={styles.errorStackLabel}>Stack Trace</div>
<CodeBlock content={errorStackTrace} copyable />
</>
)}
</div>
);
}
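ErrorTab's exception-type extraction is a simple prefix heuristic: everything before the first colon is treated as the exception class name, falling back to a generic label. Standalone:

```typescript
// Same heuristic as ErrorTab's extractExceptionType.
function extractExceptionType(errorMessage: string): string {
  const colonIdx = errorMessage.indexOf(':');
  if (colonIdx > 0) {
    return errorMessage.substring(0, colonIdx).trim();
  }
  return 'Error'; // no colon prefix: fall back to a generic label
}

console.log(extractExceptionType('java.lang.IllegalStateException: boom'));
// "java.lang.IllegalStateException"
console.log(extractExceptionType('something went wrong')); // "Error"
```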
