PER_EXCHANGE — Exactly-Once-Per-Exchange Alerting — Design
Date: 2026-04-22
Scope: alerting layer only (webhook-delivery-level idempotency is out of scope — see "Out of scope" below).
Preceding context: .planning/sse-flakiness-diagnosis.md (unrelated); docs/superpowers/specs/2026-04-19-alerting-design.md (foundational alerting design — original PER_EXCHANGE intent).
Motivation
A user wants to create an alert rule of the shape "for every exchange that ends in FAILED status, notify Slack — exactly once per exchange." Exchanges are terminal events: once in FAILED state they never transition back, unlike agents which toggle between LIVE / STALE / DEAD. So "exactly once" is well-defined and achievable.
Today's PER_EXCHANGE mode partially supports this but has three gaps that, in combination, either miss exchanges or flood the Inbox with duplicates:
- The cursor is dead code. `ExchangeMatchEvaluator.evaluatePerExchange` at `cameleer-server-app/.../eval/ExchangeMatchEvaluator.java:141` computes a `_nextCursor` and stamps it onto the last firing's context map, but `AlertEvaluatorJob.applyBatchFiring` (`:206-213`) never reads it, and `reschedule` (`:259`) calls `releaseClaim(rule.id(), nextRun, rule.evalState())` — passing the original, unmodified `evalState`. So `lastExchangeTs` is never persisted; every tick runs with `timeFrom = null` and re-scans ClickHouse from the beginning of retention. The partial unique index on `alert_instances(rule_id, context->'exchange'->>'id')` silently de-dups duplicate instances on each tick — but `enqueueNotifications` in `applyBatchFiring` runs unconditionally on the returned row, so every tick enqueues a fresh PENDING `AlertNotification` for every matching exchange already in retention. The user-visible symptom is not "a same-millisecond collision"; it is "Slack gets re-spammed for every historical failed exchange on every tick." The same-millisecond collision is also real, but strictly subsumed once a working cursor exists.
- Alert-instance writes, notification enqueues, and the `evalState` cursor advance are not coupled transactionally. Once §1 is fixed and the cursor does advance, a crash between the instance write and the cursor persist would produce either silent data loss (cursor advanced, instances never persisted) or duplicate instances on recovery (instances persisted, cursor not advanced). §2 makes this atomic before the cursor goes live.
- The rule-configuration surface for PER_EXCHANGE admits nonsensical combinations (`reNotifyMinutes > 0`, the mandatory-but-unused `perExchangeLingerSeconds`, `forDurationSeconds`) — a user following the UI defaults can build a rule that re-notifies hourly even though they want one-shot semantics.
The product rules, agreed with the user:
- The AlertInstance stays FIRING until a human acks or resolves it. No auto-resolve sweep.
- The action (webhook) fires exactly once per AlertInstance.
- The Inbox contains exactly one AlertInstance per failed exchange — never a duplicate from cursor errors, tick re-runs, or process restarts.
"Exactly once" here is at the alerting layer — one AlertInstance, one PENDING AlertNotification per (instance × webhook binding). The HTTP dispatch that follows is still at-least-once on transient failures; that's a separate scope.
Non-goals
- Auto-resolve after linger seconds. The existing spec reserved `perExchangeLingerSeconds` for this; we're explicitly dropping the field (unused + not desired).
- Resolve-on-delivery semantics. The alert stays FIRING until human intervention.
- Webhook-level idempotency / exactly-once HTTP delivery to Slack. Rare duplicate Slack messages on timeout retries are accepted; consumer-side dedup (via `alert.id` in the payload) is a template concern, not a server change.
- Any change to COUNT_IN_WINDOW mode.
- Backfilling duplicate instances already created by the existing broken cursor in any running environment. Pre-prod; manual cleanup if needed.
- Sealed condition-type hierarchy. A follow-up refactor could replace the current "one `AlertRule` with mode-gated fields" model with a sealed hierarchy (`PerExchangeCondition` / `CountInWindowCondition` / `AgentLifecycleCondition`) where each type carries only the knobs it supports. The UI already sharded condition forms under `ui/src/pages/Alerts/RuleEditor/condition-forms/` — the backend is the laggard. §3 here only patches the three known field conflicts; the structural cleanup is a separate phase (see Follow-ups).
- Throttle / coalesce primitive for PER_EXCHANGE. njams bakes `throttling` + `throttlingEventCount` into its CEP queries ("fire once, count the rest in W seconds"). If operators later find the Inbox unwieldy during incident storms, a `coalesceSeconds` knob is the right cure — one FIRING per (rule × signature) per window, with `occurrenceCount` maintained on the instance. Explicitly parked; see Follow-ups.
Design
Four focused changes. Each is small on its own; together they make PER_EXCHANGE fire exactly once per failed exchange.
1. Composite cursor for monotonic advance
Current. `evalState.lastExchangeTs` is a single ISO-8601 string the evaluator reads as a lower bound, but never writes. The advance is computed (`latestTs = max(startTime)`) and attached to the last firing's context map as `_nextCursor`, then discarded by `applyBatchFiring` — `reschedule` passes the untouched `rule.evalState()` to `releaseClaim`. Net effect today: every tick queries ClickHouse with `timeFrom = null` (first-run path). The same-millisecond-collision bug described in the original spec assumes the cursor works; in practice the cursor has never worked. Fixing the advance is therefore both a correctness fix and a dead-code elimination.
New. Replace with a composite cursor (startTime, executionId), serialized as "<ISO-8601 startTime>|<executionId>" in evalState.lastExchangeCursor.
- Selection predicate: `(start_time > cursor.ts) OR (start_time = cursor.ts AND execution_id > cursor.id)`.
- This is trivially monotone: every consumed exchange is strictly after the cursor in the composite ordering.
- Handles two exchanges at the exact same millisecond correctly — both are selected on their turn, neither re-selected.
- Uses the existing ClickHouse primary-key order `(tenant_id, start_time, …, execution_id)`, so the predicate is a range scan on the PK.
- Advance: set the cursor to the `(startTime, executionId)` of the lexicographically last row in the batch (the last row when sorted by `(start_time asc, execution_id asc)`).
- First run (no cursor): today's behaviour is `cursor = null → timeFrom = null → unbounded scan of ClickHouse history` — any pre-existing FAILED exchange in retention would fire an alert on the first tick. That's broken and needs fixing. New rule: initialize `lastExchangeCursor` to `(rule.createdAt, "")` at rule creation time, so a PER_EXCHANGE rule only alerts on exchanges that fail after it was created. The empty-string `executionId` component is correct: any real `execution_id` sorts strictly after it lexicographically, so the very first matching exchange post-creation gets picked up on the first tick. No ambient lookback window, no retention dependency, no backlog flood.
- `evalState` schema change: retire the `lastExchangeTs` key, add `lastExchangeCursor`. Pre-prod; no migration needed. Readers that see neither key treat the rule as first-run.
Affected files (scope estimate):
- `ExchangeMatchEvaluator.evaluatePerExchange` — cursor parse/advance/selection.
- `SearchRequest` / `ClickHouseSearchIndex.search` — needs to accept the composite predicate. Option A: add an optional `afterExecutionId` param alongside `timeFrom`. Option B: introduce a dedicated `AfterCursor(ts, id)` type. The plan phase picks one — A is simpler.
- `evalState` JSON schema (documented in the alerting spec).
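To make the cursor concrete, here is a minimal, self-contained sketch of the serialization format and the composite selection/ordering rules. The names (`ExchangeCursor`, `firstRun`, `admits`) are illustrative, not the actual evaluator code:

```java
import java.time.Instant;

// Hypothetical sketch of the §1 composite cursor. The real selection predicate runs
// in ClickHouse SQL; admits() restates it row-side purely to pin down the semantics.
public class CursorSketch {
    record ExchangeCursor(Instant ts, String executionId) {
        // Serialized as "<ISO-8601 startTime>|<executionId>" in evalState.lastExchangeCursor.
        String serialize() {
            return ts.toString() + "|" + executionId;
        }

        static ExchangeCursor parse(String s) {
            int sep = s.indexOf('|');
            return new ExchangeCursor(Instant.parse(s.substring(0, sep)), s.substring(sep + 1));
        }

        // First-run cursor: (rule.createdAt, "") — every real execution_id sorts after "".
        static ExchangeCursor firstRun(Instant ruleCreatedAt) {
            return new ExchangeCursor(ruleCreatedAt, "");
        }

        // (start_time > cursor.ts) OR (start_time = cursor.ts AND execution_id > cursor.id)
        boolean admits(Instant startTime, String executionId) {
            return startTime.isAfter(ts)
                || (startTime.equals(ts) && executionId.compareTo(this.executionId) > 0);
        }
    }

    public static void main(String[] args) {
        Instant t = Instant.parse("2026-04-22T10:00:00Z");
        ExchangeCursor c = new ExchangeCursor(t, "exec-001");
        // Round-trip through the serialized form.
        System.out.println(ExchangeCursor.parse(c.serialize()).equals(c)); // true
        // Same millisecond, later execution_id: still admitted — and exactly once.
        System.out.println(c.admits(t, "exec-002")); // true
        System.out.println(c.admits(t, "exec-001")); // false (already consumed)
        // First-run cursor admits the first post-creation exchange.
        System.out.println(ExchangeCursor.firstRun(t).admits(t.plusSeconds(1), "exec-001")); // true
    }
}
```

The last two prints show why the same-millisecond case and the first-run case both fall out of the one ordering rule, with no special-casing.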
2. Transactional coupling of instance writes + cursor advance
This section presumes §1 is landed — once the cursor actually advances, we need the advance and the instance writes to be atomic.
Current (post-§1). Per tick for a PER_EXCHANGE rule:
- `applyResult` iterates the `EvalResult.Batch` firings and calls `applyBatchFiring` for each — one `AlertInstance` save + `enqueueNotifications` per firing, each in its own transaction (or auto-commit).
- After the rule loop, `reschedule(rule, nextRun)` saves the updated `evalState` + `nextRunAt` in a separate write.
Crash anywhere between steps 1 and 2 (or partway through the loop in step 1) produces one of two inconsistent states:
- Instances saved but cursor not advanced → next tick duplicates them.
- Cursor advanced but no instances saved → those exchanges never alerted.
(Note: today, pre-§1, the "cursor never advances" bug means only the first failure mode ever occurs. §2 prevents the second from appearing once §1 is live.)
New. Wrap the whole Batch-result processing for a single rule in one TransactionTemplate.execute(...):
TX {
persist all AlertInstances for the batch
insert all PENDING AlertNotifications for those instances
update rule: evalState.lastExchangeCursor + nextRunAt
}
Commit: all three land atomically. Rollback: none do, and the rule stays claimed-but-cursor-unchanged so the next tick re-processes the same exchanges. Combined with the monotone cursor from §1, that gives exactly-once instance creation: if a batch half-succeeded and rolled back, the second attempt starts from the same cursor and produces the same set.
Notification dispatch (NotificationDispatchJob picking up PENDING rows) happens outside the transaction on its own schedule — webhook I/O must never hold a DB transaction open.
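The all-or-nothing contract can be demonstrated with a toy in-memory model — this is NOT the real `AlertEvaluatorJob` (which wraps the same three writes in `TransactionTemplate.execute` against Postgres); it only shows the invariant that a faulted tick leaves instances, notifications, and cursor all untouched, so the retry is clean:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the §2 contract: the three writes land together or not at all.
public class AtomicTickSketch {
    final List<String> instances = new ArrayList<>();
    final List<String> notifications = new ArrayList<>();
    String cursor = "2026-04-22T00:00:00Z|";

    // Process one batch "transactionally": mutate working copies, publish on success.
    // faultAtNotification simulates a crash before that notification insert (-1 = no fault).
    boolean tick(List<String> batch, String newCursor, int faultAtNotification) {
        List<String> newInstances = new ArrayList<>(instances);
        List<String> newNotifications = new ArrayList<>(notifications);
        for (int i = 0; i < batch.size(); i++) {
            newInstances.add("instance:" + batch.get(i));
            if (i == faultAtNotification) return false; // "rollback": copies are discarded
            newNotifications.add("pending:" + batch.get(i));
        }
        // "Commit": all three writes become visible at once.
        instances.clear(); instances.addAll(newInstances);
        notifications.clear(); notifications.addAll(newNotifications);
        cursor = newCursor;
        return true;
    }

    public static void main(String[] args) {
        AtomicTickSketch job = new AtomicTickSketch();
        List<String> batch = List.of("e1", "e2", "e3");
        System.out.println(job.tick(batch, "2026-04-22T01:00:00Z|e3", 1)); // false: rolled back
        System.out.println(job.instances.size() + " " + job.cursor); // 0, cursor unchanged
        System.out.println(job.tick(batch, "2026-04-22T01:00:00Z|e3", -1)); // true: committed
        System.out.println(job.instances.size() + " " + job.notifications.size()); // 3 3
    }
}
```

This is exactly the shape Test 2 below asserts against the real job: fault mid-batch, observe zero writes and an unchanged cursor, retry, observe the full set.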
Affected files (scope estimate):
- `AlertEvaluatorJob.applyResult` + `applyBatchFiring` — fold into one transactional block when the result is a `Batch`.
- No change to the COUNT_IN_WINDOW path (`applyResult` for non-Batch results keeps its current semantics).
- `PostgresAlertInstanceRepository` / `PostgresAlertNotificationRepository` / `PostgresAlertRuleRepository` — existing methods usable from inside a transaction; verify no implicit auto-commit.
3. Config hygiene — enforce a coherent PER_EXCHANGE rule shape
Three knobs on the rule are wrong for PER_EXCHANGE and trap the user into buggy configurations.
| Knob | Current state | New state for PER_EXCHANGE |
|---|---|---|
| `reNotifyMinutes` | Default 60 in UI; re-notify sweep fires every N min while FIRING | Must be 0. API 400s if non-zero. UI forces it to 0 and disables the input with tooltip "Per-exchange rules fire exactly once — re-notify does not apply." |
| `perExchangeLingerSeconds` | Validated as required by the `ExchangeMatchCondition` compact ctor; unused anywhere in the code | Removed. Drop the field entirely — from the record, the compact-ctor validation, the `AlertRuleRequest` DTO, form state, UI. Pre-prod; no shim. |
| `forDurationSeconds` | Applied by the state machine in the COUNT_IN_WINDOW / agent-lifecycle path | Must be 0/null for PER_EXCHANGE. 400 on save if non-zero. UI hides the field when PER_EXCHANGE is selected. The evaluator path already ignores it for Batch results, so this is a contract-tightening at the API edge only. |
Net effect: a PER_EXCHANGE rule's configurable surface becomes exactly {scope, filter, severity, notification title/message, webhooks, targets}. The user can't express an inconsistent combination.
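A sketch of what the API-edge cross-field check could look like — the method shape and error messages are illustrative; the real checks belong in `AlertRuleController` / `AlertRuleRequest` validation:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative §3 cross-field validator: mode-incompatible knobs and the
// empty-webhooks+targets case are rejected in one pass.
public class PerExchangeValidatorSketch {
    static List<String> validate(String fireMode, int reNotifyMinutes, Integer forDurationSeconds,
                                 List<String> webhooks, List<String> targets) {
        List<String> errors = new ArrayList<>();
        if ("PER_EXCHANGE".equals(fireMode)) {
            if (reNotifyMinutes != 0)
                errors.add("reNotifyMinutes must be 0 for PER_EXCHANGE rules");
            if (forDurationSeconds != null && forDurationSeconds != 0)
                errors.add("forDurationSeconds must be 0/null for PER_EXCHANGE rules");
        }
        if (webhooks.isEmpty() && targets.isEmpty())
            errors.add("rule must notify at least one webhook or target");
        return errors; // non-empty → HTTP 400 (or a blocked Review step in the UI)
    }
}
```

Keeping this as a pure function makes the small unit test mentioned in §4 trivial to write in isolation from the IT setup.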
Mode-toggle state hygiene (UX). When the user flips `fireMode` PER_EXCHANGE ↔ COUNT_IN_WINDOW inside `ExchangeMatchForm`, the form state for the other mode's fields must be cleared — not just hidden. The njams Server_4 frontend mode-gates fields via `*ngIf` and silently retains stale values behind the toggle (`src/app/rules/.../rule-view.component.html:96–112`), which produces save-time surprises. In `form-state.ts` the `setFireMode` reducer must reset the fields that are no longer in scope for the new mode (to their type-appropriate zero, not to `undefined` — the record compact-ctor still runs). That keeps the API-layer cross-field validator (400-on-save) and the form shape permanently consistent.
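The real reducer lives in `form-state.ts` (TypeScript); purely to pin down the clearing rule, the same logic modeled in Java — the record shape and the COUNT_IN_WINDOW defaults (threshold=1, windowSeconds=60) are illustrative assumptions, not the actual form state:

```java
// Model of the setFireMode clearing rule: flipping fireMode resets the other mode's
// fields to type-appropriate zeroes / defaults instead of silently retaining stale
// values behind a hidden input. Defaults here are illustrative.
public class FormStateSketch {
    record ConditionFormState(String fireMode, int threshold, int windowSeconds) {
        ConditionFormState setFireMode(String newMode) {
            if ("PER_EXCHANGE".equals(newMode)) {
                // COUNT_IN_WINDOW fields cleared to zero, not left holding old values.
                return new ConditionFormState(newMode, 0, 0);
            }
            // Back to COUNT_IN_WINDOW: fresh defaults, never the pre-toggle values.
            return new ConditionFormState(newMode, 1, 60);
        }
    }

    public static void main(String[] args) {
        ConditionFormState cw = new ConditionFormState("COUNT_IN_WINDOW", 5, 300);
        ConditionFormState pe = cw.setFireMode("PER_EXCHANGE");
        System.out.println(pe); // threshold/window cleared to 0
        System.out.println(pe.setFireMode("COUNT_IN_WINDOW")); // defaults, not 5/300
    }
}
```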
Builder-UX lessons worth adopting (tiny, in-scope).
- Disabled "Add" gating. `AlertRuleController` accepts `webhooks: []` and `targets: []` as valid, which lets the user save a rule that never notifies anyone. The form already splits by step; the Notify step's "Add webhook" button should stay enabled, but the wizard's "Save rule" in `ReviewStep.tsx` should block-with-reason if `webhooks.length === 0 && targets.length === 0`. njams's pattern of disabling "Add X" until the last row is complete (`rule-view.component.ts:38–45`) is the right shape.
- Preserve `/test-evaluate` and `/render-preview`. `AlertRuleController` exposes POST `{id}/test-evaluate` and `{id}/render-preview`; the wizard should surface at least render-preview in `ReviewStep.tsx` before save. njams ships no in-builder preview and operators compensate with trial-and-error creation. We already have the endpoints; not wiring them up would be leaving value on the floor.
Affected files (scope estimate):
- `ExchangeMatchCondition` — remove `perExchangeLingerSeconds`.
- `AlertRuleController` / `AlertRuleRequest` — cross-field validation (reNotify + forDuration vs fireMode; empty webhooks+targets).
- `ui/src/pages/Alerts/RuleEditor/condition-forms/ExchangeMatchForm.tsx` + `form-state.ts` — clear fields on mode toggle; disable reNotify + hide forDuration when PER_EXCHANGE; remove the linger field.
- `ui/src/pages/Alerts/RuleEditor/ReviewStep.tsx` — block save on empty webhooks+targets; render-preview pane.
- Tests (§4).
4. Tests that lock the guarantees
Six scenarios: four on the correctness core, one red-test that reproduces today's actual bleed (turns green on fix), one on the builder-UX state-clearing contract.
Test 1 — cursor monotonicity (ExchangeMatchEvaluatorTest, unit)
- Seed two FAILED executions with identical `start_time`, different `executionId`.
- Tick 1: both fire, batch of 2.
- Tick 2: neither fires.
- Seed a third at the same timestamp. Tick 3: only that third fires.
Test 2 — tick atomicity (AlertEvaluatorJobIT, integration with real Postgres)
- Seed 3 FAILED executions. Inject a fault on the second notification-insert.
- Tick → transaction rolls back: 0 AlertInstances, cursor unchanged, rule `nextRunAt` unchanged.
- Remove the fault, tick again: 3 AlertInstances + 3 PENDING notifications, cursor advanced.
Test 3 — full-lifecycle exactly-once (extends AlertingFullLifecycleIT)
- PER_EXCHANGE rule, dummy webhook.
- Seed 5 FAILED executions across two ticks (3 + 2). After both ticks: exactly 5 FIRING AlertInstances, exactly 5 PENDING notifications.
- Third tick with no new executions: zero new instances, zero new notifications.
- Ack one instance: other four unchanged.
- Additionally: POST a PER_EXCHANGE rule with `reNotifyMinutes=60` via the controller → expect 400.
- Additionally: POST a PER_EXCHANGE rule with `forDurationSeconds=60` → expect 400.
Test 4 — first-run uses rule creation time, not unbounded history (unit, in ExchangeMatchEvaluatorTest)
- Seed 2 FAILED executions dated before rule creation, 1 after.
- Evaluate a freshly-created PER_EXCHANGE rule whose `evalState` is empty.
- Expect: exactly 1 firing (the one after creation). The pre-creation ones must not appear in the batch.
Test 5 — pre-fix regression reproducer: notifications do not re-enqueue for already-matched exchanges (integration, AlertEvaluatorJobIT)
- Seed 2 FAILED executions. Tick 1 → 2 FIRING instances, 2 PENDING notifications. Dispatcher drains them → 2 DELIVERED.
- Tick 2 with no new executions: expect zero new PENDING notifications. (Today, without the §1+§2 fix, tick 2 re-enqueues both. This test should be written red-first against `main`, then go green once the cursor is actually persisted.)
- This test directly pins the bug the original spec text understated: instance-level dedup via the unique index already works; notification-level dedup is what's broken.
Test 6 — form state clears on fireMode toggle (unit, Vitest, condition-forms/ExchangeMatchForm.test.tsx)
- Build an initial form state with `fireMode=COUNT_IN_WINDOW`, `threshold=5`, `windowSeconds=300`.
- Dispatch `setFireMode(PER_EXCHANGE)`.
- Expect: `threshold` and `windowSeconds` are cleared to their zero-values (not merely hidden), and the record compact-ctor doesn't throw when `form-state.ts` rebuilds the condition object.
- Dispatch `setFireMode(COUNT_IN_WINDOW)` — expect threshold/window come back as defaults, not as stale values.
Plus a small unit test on the new cross-field validator to isolate its logic from the IT setup.
Out of scope
- Webhook-level idempotency. `WebhookDispatcher` still retries on 5xx / network / timeout. For Slack, that means a timeout mid-POST can produce a duplicate channel message. The consumer-side fix is to include a stable ID (e.g. `{{alert.id}}`) in the message template and drop duplicates on Slack's side — doable today via the existing Mustache editor, no server change. If in the future we want strict exactly-once HTTP delivery, that's a separate design.
- Auto-resolve of PER_EXCHANGE instances. Alerts stay FIRING until humans intervene. If operational experience shows the Inbox gets unwieldy, a later phase can add a manual "resolve all" bulk action or an opt-in TTL sweep.
- Rule-level dedup of identical alerts in a short window (e.g. "same failure signature fires twice in 5 s"). Out of scope; every failed exchange is its own event by design.
- COUNT_IN_WINDOW changes. Untouched.
- Migration of existing PER_EXCHANGE rules. Pre-prod; any existing rule using the retired `perExchangeLingerSeconds` field gets the value silently dropped by the API's unknown-property handling on the next PUT, or rejected on create (new shape). If needed, a one-shot cleanup is easier than a shim.
Risks
- ClickHouse predicate performance. The composite predicate `(start_time > ? OR (start_time = ? AND execution_id > ?))` must hit the PK range efficiently. The table PK is `(tenant_id, start_time, environment, application_id, route_id, execution_id)`, so the OR-form should be fine, but we'll verify with `EXPLAIN PIPELINE` against the IT container during the plan phase. Fallback: `(start_time, execution_id)` tuple comparison if CH has native support (`(start_time, execution_id) > (?, ?)`), which it does in recent versions.
- Transaction size. A single tick caps at `limit = 50` matches (existing behaviour), so the transaction holds at most 50 AlertInstance + 50 AlertNotification writes + 1 rule update. Well within safe bounds.
- Cursor format churn. Dropping `lastExchangeTs` in favour of `lastExchangeCursor` is a one-line `evalState` JSON change. Pre-prod; no shim needed. In practice the churn is even more benign than it looks: today no rule has ever persisted a `lastExchangeTs` value (the advance path is dead code — see §1 Current), so every existing PER_EXCHANGE rule will hit the first-run path `(rule.createdAt, "")` on its first post-deploy tick. Side effect: on deploy, long-standing PER_EXCHANGE rules will immediately scan from `rule.createdAt` forward and enqueue notifications for every matching FAILED exchange still in retention. This is a one-time backlog flood proportional to ClickHouse retention × failure rate × number of PER_EXCHANGE rules. For pre-prod with small history this is tolerable; if a rule was created years ago on a real environment, bound the first-run scan by clamping the initial cursor to `max(rule.createdAt, now() - deployBacklogCap)`, where `deployBacklogCap` is a config value (default 24 h). Call this out explicitly in the plan phase so the deployment order is "deploy first, then create rules" or "accept the one-time flood."
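The clamp itself is one line of `Instant` math; a sketch, where `deployBacklogCap` is the proposed config knob (default 24 h), not an existing setting:

```java
import java.time.Duration;
import java.time.Instant;

// Sketch of the deployBacklogCap clamp: the first-run scan starts no further back
// than now() - cap, even for rules created long ago. deployBacklogCap is a proposed
// config knob, not an existing one.
public class BacklogCapSketch {
    static Instant initialCursorTs(Instant ruleCreatedAt, Instant now, Duration deployBacklogCap) {
        Instant floor = now.minus(deployBacklogCap);
        return ruleCreatedAt.isAfter(floor) ? ruleCreatedAt : floor; // max(createdAt, now - cap)
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2026-04-22T12:00:00Z");
        // A rule created years ago gets at most 24 h of backlog.
        System.out.println(initialCursorTs(Instant.parse("2020-01-01T00:00:00Z"), now,
                Duration.ofHours(24))); // 2026-04-21T12:00:00Z
        // A freshly created rule keeps its own createdAt.
        System.out.println(initialCursorTs(now.minusSeconds(60), now, Duration.ofHours(24)));
    }
}
```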
Follow-ups (parked — separate phases)
Explicit list of ideas that are valuable but deliberately not in this spec's scope.
- Sealed condition-type hierarchy (backend). Replace `AlertRule` + `fireMode` field with a sealed `Condition` hierarchy where each type carries only its own knobs. The UI is already sharded (`condition-forms/*Form.tsx`); the backend would follow. Biggest win: kills the whole "mode-gated field" class of bug at the record level, so cross-field validators become compact-ctor invariants instead of controller-layer glue. Estimated scope: medium (DTO migration, Jackson polymorphism, request compatibility). Trigger: when a 4th condition kind lands or when the next "silently-ignored field" bug surfaces.
- `coalesceSeconds` primitive on PER_EXCHANGE. "One FIRING per (rule × signature) per window; attach `occurrenceCount`." Addresses the Inbox-flood scenario during incident storms without breaking the exactly-once-per-exchange guarantee for the default case. njams bakes this into its CEP template as `throttling` + `throttlingEventCount`; we'd express it as a post-match coalescer on the AlertInstance write path. Trigger: first operator complaint about Inbox volume during a real incident, or when we onboard a tenant with >100 failed exchanges/min.
- Cross-phase: ingestion-time rule-matching. Today's tick+cursor model is correct but latency-bound to `evaluationIntervalSeconds`. A streaming path (agent → ClickHouse ingest → publish → rule matcher) would drop alert latency to seconds. Not needed today; flagged because this spec's design explicitly chooses batch over streaming and future requirements may flip that.
Verification
- `mvn -pl cameleer-server-app -am -Dit.test='ExchangeMatchEvaluatorTest,AlertEvaluatorJobIT,AlertingFullLifecycleIT,AlertRuleControllerIT' ... verify` → 0 failures.
- Manual: create a PER_EXCHANGE / FAILED rule via the UI. Verify `reNotifyMinutes` is fixed at 0 and disabled; verify the linger field is gone; verify toggling `fireMode` clears COUNT_IN_WINDOW-specific fields. Produce failing exchanges. Verify the Inbox shows one instance per exchange and Slack gets exactly one message each. Wait three evaluation-interval ticks with no new exchanges; verify no additional notifications arrive (the pre-fix bleed). Ack one instance. Produce another failure. Verify only the new one appears. Save a rule with empty webhooks+targets → expect it blocked at the Review step with a reason shown.