Compare commits: a1ea112876 ... 4e45be59ef — 12 commits

Commits: 4e45be59ef, 3a54b4d7e7, b77968bb2d, 10b412c50c, 24c858cca4, 462b9a4bf0, c4f4477472, 961dadd1c8, 887a9b6faa, 04da0af4bc, dd0f0e73b3, 2542e430ac
@@ -75,6 +75,7 @@
 - `ClickHouseStatsStore` — pre-aggregated stats, punchcard
 - `ClickHouseDiagramStore`, `ClickHouseAgentEventRepository`
 - `ClickHouseUsageTracker` — usage_events for billing
+- `ClickHouseRouteCatalogStore` — persistent route catalog with first_seen cache, warm-loaded on startup

 ## search/ — ClickHouse search and log stores
@@ -51,7 +51,8 @@

 ## storage/ — Storage abstractions

-- `ExecutionStore`, `MetricsStore`, `MetricsQueryStore`, `StatsStore`, `DiagramStore`, `SearchIndex`, `LogIndex` — interfaces
+- `ExecutionStore`, `MetricsStore`, `MetricsQueryStore`, `StatsStore`, `DiagramStore`, `RouteCatalogStore`, `SearchIndex`, `LogIndex` — interfaces
+- `RouteCatalogEntry` — record: applicationId, routeId, environment, firstSeen, lastSeen
 - `LogEntryResult` — log query result record
 - `model/` — `ExecutionDocument`, `MetricTimeSeries`, `MetricsSnapshot`
CLAUDE.md — 104 lines changed
@@ -38,7 +38,7 @@ java -jar cameleer-server-app/target/cameleer-server-app-1.0-SNAPSHOT.jar
 - Jackson `JavaTimeModule` for `Instant` deserialization
 - Communication: receives HTTP POST data from agents (executions, diagrams, metrics, logs), serves SSE event streams for config push/commands (config-update, deep-trace, replay, route-control)
 - Environment filtering: all data queries filter by the selected environment. All commands target only agents in the selected environment. Backend endpoints accept optional `environment` query parameter; null = all environments (backward compatible).
-- Maintains agent instance registry (in-memory) with states: LIVE -> STALE -> DEAD. Auto-heals from JWT `env` claim + heartbeat body on heartbeat/SSE after server restart (priority: heartbeat `environmentId` > JWT `env` claim > `"default"`). Capabilities and route states updated on every heartbeat (protocol v2). Route catalog falls back to ClickHouse stats for route discovery when registry has incomplete data.
+- Maintains agent instance registry (in-memory) with states: LIVE -> STALE -> DEAD. Auto-heals from JWT `env` claim + heartbeat body on heartbeat/SSE after server restart (priority: heartbeat `environmentId` > JWT `env` claim > `"default"`). Capabilities and route states updated on every heartbeat (protocol v2). Route catalog merges three sources: in-memory agent registry, persistent `route_catalog` table (ClickHouse), and `stats_1m_route` execution stats. The persistent catalog tracks `first_seen`/`last_seen` per route per environment, updated on every registration and heartbeat. Routes appear in the sidebar when their lifecycle overlaps the selected time window (`first_seen <= to AND last_seen >= from`), so historical routes remain visible even after being dropped from newer app versions.
 - Multi-tenancy: each server instance serves one tenant (configured via `CAMELEER_SERVER_TENANT_ID`, default: `"default"`). Environments (dev/staging/prod) are first-class. PostgreSQL isolated via schema-per-tenant (`?currentSchema=tenant_{id}`) and `ApplicationName=tenant_{id}` on the JDBC URL. ClickHouse shared DB with `tenant_id` + `environment` columns, partitioned by `(tenant_id, toYYYYMM(timestamp))`.
 - Storage: PostgreSQL for RBAC, config, and audit; ClickHouse for all observability data (executions, search, logs, metrics, stats, diagrams). ClickHouse schema migrations in `clickhouse/*.sql`, run idempotently on startup by `ClickHouseSchemaInitializer`. Use `IF NOT EXISTS` for CREATE and ADD PROJECTION.
 - Log exchange correlation: `ClickHouseLogStore` extracts `exchange_id` from log entry MDC, preferring `cameleer.exchangeId` over `camel.exchangeId` (fallback for older agents). For `ON_COMPLETION` exchange copies, the agent sets `cameleer.exchangeId` to the parent's exchange ID via `CORRELATION_ID`.
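The MDC fallback order described in the log-correlation bullet reduces to a two-key lookup. A minimal standalone sketch — the class name and map-based signature are hypothetical, not the actual `ClickHouseLogStore` code:

```java
import java.util.Map;

public class ExchangeIdExtractor {
    // Prefer the newer cameleer.exchangeId MDC key; fall back to
    // camel.exchangeId for older agents that only set the Camel key.
    static String extractExchangeId(Map<String, String> mdc) {
        String id = mdc.get("cameleer.exchangeId");
        return (id != null && !id.isBlank()) ? id : mdc.get("camel.exchangeId");
    }

    public static void main(String[] args) {
        System.out.println(extractExchangeId(
                Map.of("cameleer.exchangeId", "EX-1", "camel.exchangeId", "EX-2"))); // EX-1
        System.out.println(extractExchangeId(
                Map.of("camel.exchangeId", "EX-2"))); // EX-2
    }
}
```

The preference order means a correctly tagged `ON_COMPLETION` copy resolves to its parent exchange even when both keys are present.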
@@ -74,3 +74,105 @@ When adding, removing, or renaming classes, controllers, endpoints, UI component

## Disabled Skills

- Do NOT use any `gsd:*` skills in this project. This includes all `/gsd:` prefixed commands.

<!-- gitnexus:start -->

# GitNexus — Code Intelligence

This project is indexed by GitNexus as **cameleer-server** (6281 symbols, 15871 relationships, 300 execution flows). Use the GitNexus MCP tools to understand code, assess impact, and navigate safely.

> If any GitNexus tool warns the index is stale, run `npx gitnexus analyze` in terminal first.

## Always Do

- **MUST run impact analysis before editing any symbol.** Before modifying a function, class, or method, run `gitnexus_impact({target: "symbolName", direction: "upstream"})` and report the blast radius (direct callers, affected processes, risk level) to the user.
- **MUST run `gitnexus_detect_changes()` before committing** to verify your changes only affect expected symbols and execution flows.
- **MUST warn the user** if impact analysis returns HIGH or CRITICAL risk before proceeding with edits.
- When exploring unfamiliar code, use `gitnexus_query({query: "concept"})` to find execution flows instead of grepping. It returns process-grouped results ranked by relevance.
- When you need full context on a specific symbol — callers, callees, which execution flows it participates in — use `gitnexus_context({name: "symbolName"})`.

## When Debugging

1. `gitnexus_query({query: "<error or symptom>"})` — find execution flows related to the issue
2. `gitnexus_context({name: "<suspect function>"})` — see all callers, callees, and process participation
3. `READ gitnexus://repo/cameleer-server/process/{processName}` — trace the full execution flow step by step
4. For regressions: `gitnexus_detect_changes({scope: "compare", base_ref: "main"})` — see what your branch changed

## When Refactoring

- **Renaming**: MUST use `gitnexus_rename({symbol_name: "old", new_name: "new", dry_run: true})` first. Review the preview — graph edits are safe, text_search edits need manual review. Then run with `dry_run: false`.
- **Extracting/Splitting**: MUST run `gitnexus_context({name: "target"})` to see all incoming/outgoing refs, then `gitnexus_impact({target: "target", direction: "upstream"})` to find all external callers before moving code.
- After any refactor: run `gitnexus_detect_changes({scope: "all"})` to verify only expected files changed.

## Never Do

- NEVER edit a function, class, or method without first running `gitnexus_impact` on it.
- NEVER ignore HIGH or CRITICAL risk warnings from impact analysis.
- NEVER rename symbols with find-and-replace — use `gitnexus_rename`, which understands the call graph.
- NEVER commit changes without running `gitnexus_detect_changes()` to check the affected scope.
## Tools Quick Reference

| Tool | When to use | Command |
|------|-------------|---------|
| `query` | Find code by concept | `gitnexus_query({query: "auth validation"})` |
| `context` | 360-degree view of one symbol | `gitnexus_context({name: "validateUser"})` |
| `impact` | Blast radius before editing | `gitnexus_impact({target: "X", direction: "upstream"})` |
| `detect_changes` | Pre-commit scope check | `gitnexus_detect_changes({scope: "staged"})` |
| `rename` | Safe multi-file rename | `gitnexus_rename({symbol_name: "old", new_name: "new", dry_run: true})` |
| `cypher` | Custom graph queries | `gitnexus_cypher({query: "MATCH ..."})` |

## Impact Risk Levels

| Depth | Meaning | Action |
|-------|---------|--------|
| d=1 | WILL BREAK — direct callers/importers | MUST update these |
| d=2 | LIKELY AFFECTED — indirect deps | Should test |
| d=3 | MAY NEED TESTING — transitive | Test if critical path |

## Resources

| Resource | Use for |
|----------|---------|
| `gitnexus://repo/cameleer-server/context` | Codebase overview, check index freshness |
| `gitnexus://repo/cameleer-server/clusters` | All functional areas |
| `gitnexus://repo/cameleer-server/processes` | All execution flows |
| `gitnexus://repo/cameleer-server/process/{name}` | Step-by-step execution trace |

## Self-Check Before Finishing

Before completing any code modification task, verify:

1. `gitnexus_impact` was run for all modified symbols
2. No HIGH/CRITICAL risk warnings were ignored
3. `gitnexus_detect_changes()` confirms changes match the expected scope
4. All d=1 (WILL BREAK) dependents were updated
## Keeping the Index Fresh

After committing code changes, the GitNexus index becomes stale. Re-run analyze to update it:

```bash
npx gitnexus analyze
```

If the index previously included embeddings, preserve them by adding `--embeddings`:

```bash
npx gitnexus analyze --embeddings
```

To check whether embeddings exist, inspect `.gitnexus/meta.json` — the `stats.embeddings` field shows the count (0 means no embeddings). **Running analyze without `--embeddings` will delete any previously generated embeddings.**
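The check above can be scripted. A small sketch that reads the count from `meta.json` — the class name, regex-based extraction, and the exact JSON layout are assumptions, not part of the GitNexus API:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GitNexusMeta {
    // Hypothetical helper: pulls the first "embeddings": <n> value out of
    // .gitnexus/meta.json. A real meta.json may nest fields differently.
    static int embeddingsCount(Path metaJson) throws Exception {
        String json = Files.readString(metaJson);
        Matcher m = Pattern.compile("\"embeddings\"\\s*:\\s*(\\d+)").matcher(json);
        return m.find() ? Integer.parseInt(m.group(1)) : 0;
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("meta", ".json");
        Files.writeString(tmp, "{\"stats\":{\"embeddings\":0}}");
        // 0 means embeddings were never generated: plain `analyze` is safe
        System.out.println(embeddingsCount(tmp)); // 0
        Files.delete(tmp);
    }
}
```

A nonzero count is the signal to add `--embeddings` to the analyze command.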
> Claude Code users: A PostToolUse hook handles this automatically after `git commit` and `git merge`.

## CLI

| Task | Read this skill file |
|------|---------------------|
| Understand architecture / "How does X work?" | `.claude/skills/gitnexus/gitnexus-exploring/SKILL.md` |
| Blast radius / "What breaks if I change X?" | `.claude/skills/gitnexus/gitnexus-impact-analysis/SKILL.md` |
| Trace bugs / "Why is X failing?" | `.claude/skills/gitnexus/gitnexus-debugging/SKILL.md` |
| Rename / extract / split / refactor | `.claude/skills/gitnexus/gitnexus-refactoring/SKILL.md` |
| Tools, resources, schema reference | `.claude/skills/gitnexus/gitnexus-guide/SKILL.md` |
| Index, status, clean, wiki CLI commands | `.claude/skills/gitnexus/gitnexus-cli/SKILL.md` |

<!-- gitnexus:end -->
@@ -5,6 +5,8 @@ import com.cameleer.server.app.search.ClickHouseLogStore;
 import com.cameleer.server.app.storage.ClickHouseAgentEventRepository;
 import com.cameleer.server.app.storage.ClickHouseUsageTracker;
 import com.cameleer.server.app.storage.ClickHouseDiagramStore;
+import com.cameleer.server.app.storage.ClickHouseRouteCatalogStore;
+import com.cameleer.server.core.storage.RouteCatalogStore;
 import com.cameleer.server.app.storage.ClickHouseMetricsQueryStore;
 import com.cameleer.server.app.storage.ClickHouseMetricsStore;
 import com.cameleer.server.app.storage.ClickHouseStatsStore;

@@ -145,6 +147,15 @@ public class StorageBeanConfig {
         return new ClickHouseDiagramStore(tenantProperties.getId(), clickHouseJdbc);
     }

+    // ── ClickHouse Route Catalog Store ───────────────────────────────
+
+    @Bean
+    public RouteCatalogStore clickHouseRouteCatalogStore(
+            TenantProperties tenantProperties,
+            @Qualifier("clickHouseJdbcTemplate") JdbcTemplate clickHouseJdbc) {
+        return new ClickHouseRouteCatalogStore(tenantProperties.getId(), clickHouseJdbc);
+    }

     // ── ClickHouse Agent Event Repository ─────────────────────────────

     @Bean
@@ -19,6 +19,7 @@ import com.cameleer.server.core.agent.AgentRegistryService;
 import com.cameleer.server.core.agent.AgentState;
 import com.cameleer.server.core.agent.RouteStateRegistry;
 import com.cameleer.server.core.security.Ed25519SigningService;
+import com.cameleer.server.core.storage.RouteCatalogStore;
 import com.cameleer.server.core.security.InvalidTokenException;
 import com.cameleer.server.core.security.JwtService;
 import io.swagger.v3.oas.annotations.Operation;

@@ -68,6 +69,7 @@ public class AgentRegistrationController {
     private final AuditService auditService;
     private final JdbcTemplate jdbc;
     private final RouteStateRegistry routeStateRegistry;
+    private final RouteCatalogStore routeCatalogStore;

     public AgentRegistrationController(AgentRegistryService registryService,
                                       AgentRegistryConfig config,

@@ -77,7 +79,8 @@ public class AgentRegistrationController {
                                       AgentEventService agentEventService,
                                       AuditService auditService,
                                       @org.springframework.beans.factory.annotation.Qualifier("clickHouseJdbcTemplate") JdbcTemplate jdbc,
-                                      RouteStateRegistry routeStateRegistry) {
+                                      RouteStateRegistry routeStateRegistry,
+                                      RouteCatalogStore routeCatalogStore) {
         this.registryService = registryService;
         this.config = config;
         this.bootstrapTokenValidator = bootstrapTokenValidator;

@@ -87,6 +90,7 @@ public class AgentRegistrationController {
         this.auditService = auditService;
         this.jdbc = jdbc;
         this.routeStateRegistry = routeStateRegistry;
+        this.routeCatalogStore = routeCatalogStore;
     }

     @PostMapping("/register")

@@ -125,6 +129,11 @@ public class AgentRegistrationController {
                 request.instanceId(), request.instanceId(), application, environmentId,
                 request.version(), routeIds, capabilities);

+        // Persist routes in catalog for server-restart recovery
+        if (!routeIds.isEmpty()) {
+            routeCatalogStore.upsert(application, environmentId, routeIds);
+        }

         if (reRegistration) {
             log.info("Agent re-registered: {} (application={}, routes={}, capabilities={})",
                     request.instanceId(), application, routeIds.size(), capabilities.keySet());

@@ -248,15 +257,20 @@ public class AgentRegistrationController {
             }
         }

-        if (request != null && request.getRouteStates() != null && !request.getRouteStates().isEmpty()) {
+        if (routeIds != null && !routeIds.isEmpty()) {
             AgentInfo agent = registryService.findById(id);
             if (agent != null) {
-                for (var entry : request.getRouteStates().entrySet()) {
-                    RouteStateRegistry.RouteState state = parseRouteState(entry.getValue());
-                    if (state != null) {
-                        routeStateRegistry.setState(agent.applicationId(), entry.getKey(), state);
+                // Update route states from heartbeat
+                if (request != null && request.getRouteStates() != null) {
+                    for (var entry : request.getRouteStates().entrySet()) {
+                        RouteStateRegistry.RouteState state = parseRouteState(entry.getValue());
+                        if (state != null) {
+                            routeStateRegistry.setState(agent.applicationId(), entry.getKey(), state);
+                        }
                     }
                 }
+                // Persist routes in catalog for server-restart recovery
+                routeCatalogStore.upsert(agent.applicationId(), agent.environmentId(), routeIds);
             }
         }
@@ -11,6 +11,8 @@ import com.cameleer.server.core.agent.AgentState;
 import com.cameleer.server.core.agent.RouteStateRegistry;
 import com.cameleer.server.core.runtime.*;
 import com.cameleer.server.core.storage.DiagramStore;
+import com.cameleer.server.core.storage.RouteCatalogEntry;
+import com.cameleer.server.core.storage.RouteCatalogStore;
 import io.swagger.v3.oas.annotations.Operation;
 import io.swagger.v3.oas.annotations.responses.ApiResponse;
 import io.swagger.v3.oas.annotations.tags.Tag;

@@ -47,6 +49,7 @@ public class CatalogController {
     private final EnvironmentService envService;
     private final DeploymentRepository deploymentRepo;
     private final TenantProperties tenantProperties;
+    private final RouteCatalogStore routeCatalogStore;

     @Value("${cameleer.server.catalog.discoveryttldays:7}")
     private int discoveryTtlDays;

@@ -58,7 +61,8 @@ public class CatalogController {
                              AppService appService,
                              EnvironmentService envService,
                              DeploymentRepository deploymentRepo,
-                             TenantProperties tenantProperties) {
+                             TenantProperties tenantProperties,
+                             RouteCatalogStore routeCatalogStore) {
         this.registryService = registryService;
         this.diagramStore = diagramStore;
         this.jdbc = jdbc;

@@ -67,6 +71,7 @@ public class CatalogController {
         this.envService = envService;
         this.deploymentRepo = deploymentRepo;
         this.tenantProperties = tenantProperties;
+        this.routeCatalogStore = routeCatalogStore;
     }

     @GetMapping

@@ -154,6 +159,20 @@ public class CatalogController {
             }
         }

+        // Merge routes from persistent catalog (covers routes with 0 executions
+        // and routes from previous app versions within the selected time window)
+        try {
+            List<RouteCatalogEntry> catalogEntries = (environment != null && !environment.isBlank())
+                    ? routeCatalogStore.findByEnvironment(environment, rangeFrom, rangeTo)
+                    : routeCatalogStore.findAll(rangeFrom, rangeTo);
+            for (RouteCatalogEntry entry : catalogEntries) {
+                routesByApp.computeIfAbsent(entry.applicationId(), k -> new LinkedHashSet<>())
+                        .add(entry.routeId());
+            }
+        } catch (Exception e) {
+            log.warn("Failed to query route catalog: {}", e.getMessage());
+        }

         // 7. Build unified catalog
         Set<String> allSlugs = new LinkedHashSet<>(appsBySlug.keySet());
         allSlugs.addAll(agentsByApp.keySet());

@@ -331,6 +350,7 @@ public class CatalogController {

         // Delete ClickHouse data
         deleteClickHouseData(tenantId, applicationId);
+        routeCatalogStore.deleteByApplication(applicationId);

         // Delete managed app if exists (PostgreSQL)
         try {

@@ -348,7 +368,7 @@ public class CatalogController {
         String[] tablesWithAppId = {
                 "executions", "processor_executions", "route_diagrams", "agent_events",
                 "stats_1m_app", "stats_1m_route", "stats_1m_processor_type", "stats_1m_processor",
-                "stats_1m_processor_detail"
+                "stats_1m_processor_detail", "route_catalog"
         };
         for (String table : tablesWithAppId) {
             try {
@@ -9,6 +9,8 @@ import com.cameleer.server.core.agent.AgentRegistryService;
 import com.cameleer.server.core.agent.AgentState;
 import com.cameleer.server.core.agent.RouteStateRegistry;
 import com.cameleer.server.core.storage.DiagramStore;
+import com.cameleer.server.core.storage.RouteCatalogEntry;
+import com.cameleer.server.core.storage.RouteCatalogStore;
 import com.cameleer.server.core.storage.StatsStore;
 import io.swagger.v3.oas.annotations.Operation;
 import io.swagger.v3.oas.annotations.responses.ApiResponse;

@@ -42,15 +44,18 @@ public class RouteCatalogController {
     private final DiagramStore diagramStore;
     private final JdbcTemplate jdbc;
     private final RouteStateRegistry routeStateRegistry;
+    private final RouteCatalogStore routeCatalogStore;

     public RouteCatalogController(AgentRegistryService registryService,
                                   DiagramStore diagramStore,
                                   @org.springframework.beans.factory.annotation.Qualifier("clickHouseJdbcTemplate") JdbcTemplate jdbc,
-                                  RouteStateRegistry routeStateRegistry) {
+                                  RouteStateRegistry routeStateRegistry,
+                                  RouteCatalogStore routeCatalogStore) {
         this.registryService = registryService;
         this.diagramStore = diagramStore;
         this.jdbc = jdbc;
         this.routeStateRegistry = routeStateRegistry;
+        this.routeCatalogStore = routeCatalogStore;
     }

     @GetMapping("/catalog")

@@ -122,6 +127,20 @@ public class RouteCatalogController {
             }
         }

+        // Merge routes from persistent catalog (covers routes with 0 executions
+        // and routes from previous app versions within the selected time window)
+        try {
+            List<RouteCatalogEntry> catalogEntries = (environment != null && !environment.isBlank())
+                    ? routeCatalogStore.findByEnvironment(environment, rangeFrom, rangeTo)
+                    : routeCatalogStore.findAll(rangeFrom, rangeTo);
+            for (RouteCatalogEntry entry : catalogEntries) {
+                routesByApp.computeIfAbsent(entry.applicationId(), k -> new LinkedHashSet<>())
+                        .add(entry.routeId());
+            }
+        } catch (Exception e) {
+            log.warn("Failed to query route catalog: {}", e.getMessage());
+        }

         // Build catalog entries — merge apps from agent registry + ClickHouse data
         Set<String> allAppIds = new LinkedHashSet<>(agentsByApp.keySet());
         allAppIds.addAll(routesByApp.keySet());
@@ -0,0 +1,126 @@
package com.cameleer.server.app.storage;

import com.cameleer.server.core.storage.RouteCatalogEntry;
import com.cameleer.server.core.storage.RouteCatalogStore;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.jdbc.core.JdbcTemplate;

import java.sql.Timestamp;
import java.time.Instant;
import java.util.Collection;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

public class ClickHouseRouteCatalogStore implements RouteCatalogStore {

    private static final Logger log = LoggerFactory.getLogger(ClickHouseRouteCatalogStore.class);

    private final String tenantId;
    private final JdbcTemplate jdbc;

    // key: environment + "\0" + application_id + "\0" + route_id -> first_seen
    private final ConcurrentHashMap<String, Instant> firstSeenCache = new ConcurrentHashMap<>();

    public ClickHouseRouteCatalogStore(String tenantId, JdbcTemplate jdbc) {
        this.tenantId = tenantId;
        this.jdbc = jdbc;
        warmLoadFirstSeenCache();
    }

    private void warmLoadFirstSeenCache() {
        try {
            jdbc.query(
                    "SELECT environment, application_id, route_id, first_seen " +
                    "FROM route_catalog FINAL WHERE tenant_id = ?",
                    rs -> {
                        String key = cacheKey(
                                rs.getString("environment"),
                                rs.getString("application_id"),
                                rs.getString("route_id"));
                        Timestamp ts = rs.getTimestamp("first_seen");
                        if (ts != null) {
                            firstSeenCache.put(key, ts.toInstant());
                        }
                    },
                    tenantId);
            log.info("Route catalog cache warmed: {} entries", firstSeenCache.size());
        } catch (Exception e) {
            log.warn("Failed to warm route catalog cache — first_seen values will default to now: {}", e.getMessage());
        }
    }

    private static String cacheKey(String environment, String applicationId, String routeId) {
        return environment + "\0" + applicationId + "\0" + routeId;
    }

    @Override
    public void upsert(String applicationId, String environment, Collection<String> routeIds) {
        if (routeIds == null || routeIds.isEmpty()) {
            return;
        }

        Instant now = Instant.now();
        Timestamp nowTs = Timestamp.from(now);

        for (String routeId : routeIds) {
            String key = cacheKey(environment, applicationId, routeId);
            Instant firstSeen = firstSeenCache.computeIfAbsent(key, k -> now);
            Timestamp firstSeenTs = Timestamp.from(firstSeen);

            try {
                jdbc.update(
                        "INSERT INTO route_catalog " +
                        "(tenant_id, environment, application_id, route_id, first_seen, last_seen) " +
                        "VALUES (?, ?, ?, ?, ?, ?)",
                        tenantId, environment, applicationId, routeId, firstSeenTs, nowTs);
            } catch (Exception e) {
                log.warn("Failed to upsert route catalog entry {}/{}: {}",
                        applicationId, routeId, e.getMessage());
            }
        }
    }

    @Override
    public List<RouteCatalogEntry> findByEnvironment(String environment, Instant from, Instant to) {
        return jdbc.query(
                "SELECT application_id, route_id, first_seen, last_seen " +
                "FROM route_catalog FINAL " +
                "WHERE tenant_id = ? AND environment = ? " +
                "AND first_seen <= ? AND last_seen >= ?",
                (rs, rowNum) -> new RouteCatalogEntry(
                        rs.getString("application_id"),
                        rs.getString("route_id"),
                        environment,
                        rs.getTimestamp("first_seen").toInstant(),
                        rs.getTimestamp("last_seen").toInstant()),
                tenantId, environment,
                Timestamp.from(to), Timestamp.from(from));
    }

    @Override
    public List<RouteCatalogEntry> findAll(Instant from, Instant to) {
        return jdbc.query(
                "SELECT application_id, route_id, environment, first_seen, last_seen " +
                "FROM route_catalog FINAL " +
                "WHERE tenant_id = ? " +
                "AND first_seen <= ? AND last_seen >= ?",
                (rs, rowNum) -> new RouteCatalogEntry(
                        rs.getString("application_id"),
                        rs.getString("route_id"),
                        rs.getString("environment"),
                        rs.getTimestamp("first_seen").toInstant(),
                        rs.getTimestamp("last_seen").toInstant()),
                tenantId,
                Timestamp.from(to), Timestamp.from(from));
    }

    @Override
    public void deleteByApplication(String applicationId) {
        // Remove from cache
        firstSeenCache.entrySet().removeIf(e -> {
            String[] parts = e.getKey().split("\0", 3);
            return parts.length == 3 && parts[1].equals(applicationId);
        });
    }
}
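The NUL-separated cache key used above avoids collisions when identifiers contain ordinary separator characters. A small standalone illustration — the demo class and example values are hypothetical:

```java
public class CacheKeyDemo {
    // Same scheme as the store's cacheKey: "\0" cannot appear in environment,
    // application, or route IDs, so the concatenation is unambiguous.
    static String cacheKey(String environment, String applicationId, String routeId) {
        return environment + "\0" + applicationId + "\0" + routeId;
    }

    public static void main(String[] args) {
        // With a "-" separator these two would produce the same key;
        // with "\0" they stay distinct.
        String a = cacheKey("prod", "app-1", "route");
        String b = cacheKey("prod-app", "1", "route");
        System.out.println(a.equals(b)); // false
    }
}
```

The same reasoning applies to any composite map key built from free-form strings.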
@@ -385,3 +385,16 @@ CREATE TABLE IF NOT EXISTS usage_events (
 ENGINE = MergeTree()
 ORDER BY (tenant_id, timestamp, environment, username, normalized)
 TTL toDateTime(timestamp) + INTERVAL 90 DAY;

+-- ── Route Catalog ──────────────────────────────────────────────────────
+
+CREATE TABLE IF NOT EXISTS route_catalog (
+    tenant_id      LowCardinality(String) DEFAULT 'default',
+    environment    LowCardinality(String) DEFAULT 'default',
+    application_id LowCardinality(String),
+    route_id       LowCardinality(String),
+    first_seen     DateTime64(3),
+    last_seen      DateTime64(3)
+)
+ENGINE = ReplacingMergeTree(last_seen)
+ORDER BY (tenant_id, environment, application_id, route_id);
@@ -0,0 +1,10 @@
package com.cameleer.server.core.storage;

import java.time.Instant;

public record RouteCatalogEntry(
        String applicationId,
        String routeId,
        String environment,
        Instant firstSeen,
        Instant lastSeen) {}
@@ -0,0 +1,16 @@
package com.cameleer.server.core.storage;

import java.time.Instant;
import java.util.Collection;
import java.util.List;

public interface RouteCatalogStore {

    void upsert(String applicationId, String environment, Collection<String> routeIds);

    List<RouteCatalogEntry> findByEnvironment(String environment, Instant from, Instant to);

    List<RouteCatalogEntry> findAll(Instant from, Instant to);

    void deleteByApplication(String applicationId);
}
docs/superpowers/plans/2026-04-16-persistent-route-catalog.md — new file, 611 lines
@@ -0,0 +1,611 @@
# Persistent Route Catalog Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Persist the route catalog in ClickHouse so routes (including sub-routes with zero executions) survive server restarts without requiring agent reconnection, and historical routes remain visible when the selected time window overlaps their lifecycle.

**Architecture:** A new `route_catalog` ClickHouse table (`ReplacingMergeTree`) keyed on `(tenant_id, environment, application_id, route_id)` with `first_seen`/`last_seen` timestamps. Written on agent registration and heartbeat. Read as a third source in both catalog controllers, merged into the existing `routesByApp` map. Cache-backed `first_seen` preservation follows the `ClickHouseDiagramStore` pattern.

**Tech Stack:** Java 17, Spring Boot, ClickHouse (JDBC), ConcurrentHashMap cache

**Spec:** `docs/superpowers/specs/2026-04-16-persistent-route-catalog-design.md`

---
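The visibility rule in the goal above ("historical routes remain visible when the selected time window overlaps their lifecycle") is a plain interval-overlap check: `first_seen <= to AND last_seen >= from`. A minimal sketch of that predicate — the class and method names are illustrative, not part of the plan's code:

```java
import java.time.Instant;

public class RouteWindow {
    // first_seen <= to AND last_seen >= from  <=>  the lifecycle
    // [firstSeen, lastSeen] overlaps the selected window [from, to].
    static boolean visible(Instant firstSeen, Instant lastSeen, Instant from, Instant to) {
        return !firstSeen.isAfter(to) && !lastSeen.isBefore(from);
    }

    public static void main(String[] args) {
        Instant from = Instant.parse("2026-04-10T00:00:00Z");
        Instant to = Instant.parse("2026-04-16T00:00:00Z");
        // Route dropped in March: hidden in an April window
        System.out.println(visible(Instant.parse("2026-02-01T00:00:00Z"),
                Instant.parse("2026-03-01T00:00:00Z"), from, to)); // false
        // Route last seen mid-window: still visible
        System.out.println(visible(Instant.parse("2026-02-01T00:00:00Z"),
                Instant.parse("2026-04-12T00:00:00Z"), from, to)); // true
    }
}
```

Note the SQL form binds the parameters in reversed order (`to` against `first_seen`, `from` against `last_seen`), which is exactly this predicate.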
|
||||
|
||||
### Task 1: ClickHouse Schema
|
||||
|
||||
**Files:**
|
||||
- Modify: `cameleer-server-app/src/main/resources/clickhouse/init.sql`
|
||||
|
||||
- [ ] **Step 1: Add the route_catalog table to init.sql**
|
||||
|
||||
Append before the closing of the file (after the usage_events table):
|
||||
|
||||
```sql
|
||||
-- ── Route Catalog ──────────────────────────────────────────────────────
|
||||
|
||||
CREATE TABLE IF NOT EXISTS route_catalog (
|
||||
tenant_id LowCardinality(String) DEFAULT 'default',
|
||||
environment LowCardinality(String) DEFAULT 'default',
|
||||
application_id LowCardinality(String),
|
||||
route_id LowCardinality(String),
|
||||
first_seen DateTime64(3),
|
||||
last_seen DateTime64(3)
|
||||
)
|
||||
ENGINE = ReplacingMergeTree(last_seen)
|
||||
ORDER BY (tenant_id, environment, application_id, route_id);
|
||||
```
|
||||
|
||||
- [ ] **Step 2: Verify the project compiles**
|
||||
|
||||
Run: `mvn clean compile -q`
|
||||
Expected: BUILD SUCCESS
|
||||
|
||||
- [ ] **Step 3: Commit**
|
||||
|
||||
```bash
|
||||
git add cameleer-server-app/src/main/resources/clickhouse/init.sql
|
||||
git commit -m "feat: add route_catalog table to ClickHouse schema"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Task 2: Core Interface and Record

**Files:**
- Create: `cameleer-server-core/src/main/java/com/cameleer/server/core/storage/RouteCatalogEntry.java`
- Create: `cameleer-server-core/src/main/java/com/cameleer/server/core/storage/RouteCatalogStore.java`

- [ ] **Step 1: Create the RouteCatalogEntry record**

```java
package com.cameleer.server.core.storage;

import java.time.Instant;

public record RouteCatalogEntry(
        String applicationId,
        String routeId,
        String environment,
        Instant firstSeen,
        Instant lastSeen) {}
```

- [ ] **Step 2: Create the RouteCatalogStore interface**

```java
package com.cameleer.server.core.storage;

import java.time.Instant;
import java.util.Collection;
import java.util.List;

public interface RouteCatalogStore {

    void upsert(String applicationId, String environment, Collection<String> routeIds);

    List<RouteCatalogEntry> findByEnvironment(String environment, Instant from, Instant to);

    List<RouteCatalogEntry> findAll(Instant from, Instant to);

    void deleteByApplication(String applicationId);
}
```

- [ ] **Step 3: Verify the project compiles**

Run: `mvn clean compile -q`
Expected: BUILD SUCCESS

- [ ] **Step 4: Commit**

```bash
git add cameleer-server-core/src/main/java/com/cameleer/server/core/storage/RouteCatalogEntry.java \
    cameleer-server-core/src/main/java/com/cameleer/server/core/storage/RouteCatalogStore.java
git commit -m "feat: add RouteCatalogStore interface and RouteCatalogEntry record"
```

---
### Task 3: ClickHouse Implementation

**Files:**
- Create: `cameleer-server-app/src/main/java/com/cameleer/server/app/storage/ClickHouseRouteCatalogStore.java`

- [ ] **Step 1: Create the ClickHouseRouteCatalogStore class**

Follow the `ClickHouseDiagramStore` pattern: the constructor takes `tenantId` + `JdbcTemplate`, warm-loads a `firstSeenCache` on construction, and provides `upsert`, `findByEnvironment`, `findAll`, `deleteByApplication`.

```java
package com.cameleer.server.app.storage;

import com.cameleer.server.core.storage.RouteCatalogEntry;
import com.cameleer.server.core.storage.RouteCatalogStore;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.jdbc.core.JdbcTemplate;

import java.sql.Timestamp;
import java.time.Instant;
import java.util.Collection;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

public class ClickHouseRouteCatalogStore implements RouteCatalogStore {

    private static final Logger log = LoggerFactory.getLogger(ClickHouseRouteCatalogStore.class);

    private final String tenantId;
    private final JdbcTemplate jdbc;

    // key: environment + "\0" + application_id + "\0" + route_id -> first_seen
    private final ConcurrentHashMap<String, Instant> firstSeenCache = new ConcurrentHashMap<>();

    public ClickHouseRouteCatalogStore(String tenantId, JdbcTemplate jdbc) {
        this.tenantId = tenantId;
        this.jdbc = jdbc;
        warmLoadFirstSeenCache();
    }

    private void warmLoadFirstSeenCache() {
        try {
            jdbc.query(
                    "SELECT environment, application_id, route_id, first_seen " +
                            "FROM route_catalog FINAL WHERE tenant_id = ?",
                    rs -> {
                        String key = cacheKey(
                                rs.getString("environment"),
                                rs.getString("application_id"),
                                rs.getString("route_id"));
                        Timestamp ts = rs.getTimestamp("first_seen");
                        if (ts != null) {
                            firstSeenCache.put(key, ts.toInstant());
                        }
                    },
                    tenantId);
            log.info("Route catalog cache warmed: {} entries", firstSeenCache.size());
        } catch (Exception e) {
            log.warn("Failed to warm route catalog cache — first_seen values will default to now: {}", e.getMessage());
        }
    }

    private static String cacheKey(String environment, String applicationId, String routeId) {
        return environment + "\0" + applicationId + "\0" + routeId;
    }

    @Override
    public void upsert(String applicationId, String environment, Collection<String> routeIds) {
        if (routeIds == null || routeIds.isEmpty()) {
            return;
        }

        Instant now = Instant.now();
        Timestamp nowTs = Timestamp.from(now);

        for (String routeId : routeIds) {
            String key = cacheKey(environment, applicationId, routeId);
            Instant firstSeen = firstSeenCache.computeIfAbsent(key, k -> now);
            Timestamp firstSeenTs = Timestamp.from(firstSeen);

            try {
                jdbc.update(
                        "INSERT INTO route_catalog " +
                                "(tenant_id, environment, application_id, route_id, first_seen, last_seen) " +
                                "VALUES (?, ?, ?, ?, ?, ?)",
                        tenantId, environment, applicationId, routeId, firstSeenTs, nowTs);
            } catch (Exception e) {
                log.warn("Failed to upsert route catalog entry {}/{}: {}",
                        applicationId, routeId, e.getMessage());
            }
        }
    }

    @Override
    public List<RouteCatalogEntry> findByEnvironment(String environment, Instant from, Instant to) {
        return jdbc.query(
                "SELECT application_id, route_id, first_seen, last_seen " +
                        "FROM route_catalog FINAL " +
                        "WHERE tenant_id = ? AND environment = ? " +
                        "AND first_seen <= ? AND last_seen >= ?",
                (rs, rowNum) -> new RouteCatalogEntry(
                        rs.getString("application_id"),
                        rs.getString("route_id"),
                        environment,
                        rs.getTimestamp("first_seen").toInstant(),
                        rs.getTimestamp("last_seen").toInstant()),
                tenantId, environment,
                Timestamp.from(to), Timestamp.from(from));
    }

    @Override
    public List<RouteCatalogEntry> findAll(Instant from, Instant to) {
        return jdbc.query(
                "SELECT application_id, route_id, environment, first_seen, last_seen " +
                        "FROM route_catalog FINAL " +
                        "WHERE tenant_id = ? " +
                        "AND first_seen <= ? AND last_seen >= ?",
                (rs, rowNum) -> new RouteCatalogEntry(
                        rs.getString("application_id"),
                        rs.getString("route_id"),
                        rs.getString("environment"),
                        rs.getTimestamp("first_seen").toInstant(),
                        rs.getTimestamp("last_seen").toInstant()),
                tenantId,
                Timestamp.from(to), Timestamp.from(from));
    }

    @Override
    public void deleteByApplication(String applicationId) {
        // Remove from cache only — the ClickHouse rows are deleted by the
        // dismiss flow's ALTER TABLE ... DELETE loop (see CatalogController)
        firstSeenCache.entrySet().removeIf(e -> {
            String[] parts = e.getKey().split("\0", 3);
            return parts.length == 3 && parts[1].equals(applicationId);
        });
    }
}
```

- [ ] **Step 2: Verify the project compiles**

Run: `mvn clean compile -q`
Expected: BUILD SUCCESS

- [ ] **Step 3: Commit**

```bash
git add cameleer-server-app/src/main/java/com/cameleer/server/app/storage/ClickHouseRouteCatalogStore.java
git commit -m "feat: implement ClickHouseRouteCatalogStore with first_seen cache"
```

---
### Task 4: Wire the Bean

**Files:**
- Modify: `cameleer-server-app/src/main/java/com/cameleer/server/app/config/StorageBeanConfig.java`

- [ ] **Step 1: Add the imports and bean method**

Add imports at the top of `StorageBeanConfig.java`:

```java
import com.cameleer.server.app.storage.ClickHouseRouteCatalogStore;
import com.cameleer.server.core.storage.RouteCatalogStore;
```

Add a new bean method after the `clickHouseDiagramStore` bean (after line 146):

```java
// ── ClickHouse Route Catalog Store ───────────────────────────────

@Bean
public RouteCatalogStore clickHouseRouteCatalogStore(
        TenantProperties tenantProperties,
        @Qualifier("clickHouseJdbcTemplate") JdbcTemplate clickHouseJdbc) {
    return new ClickHouseRouteCatalogStore(tenantProperties.getId(), clickHouseJdbc);
}
```

- [ ] **Step 2: Verify the project compiles**

Run: `mvn clean compile -q`
Expected: BUILD SUCCESS

- [ ] **Step 3: Commit**

```bash
git add cameleer-server-app/src/main/java/com/cameleer/server/app/config/StorageBeanConfig.java
git commit -m "feat: wire ClickHouseRouteCatalogStore bean"
```

---
### Task 5: Write Path — AgentRegistrationController

**Files:**
- Modify: `cameleer-server-app/src/main/java/com/cameleer/server/app/controller/AgentRegistrationController.java`

- [ ] **Step 1: Add the RouteCatalogStore field and constructor parameter**

Add the import:

```java
import com.cameleer.server.core.storage.RouteCatalogStore;
```

Add a field after the existing `routeStateRegistry` field (after line 70):

```java
private final RouteCatalogStore routeCatalogStore;
```

Add `RouteCatalogStore routeCatalogStore` as the last constructor parameter and assign it in the body:

```java
public AgentRegistrationController(AgentRegistryService registryService,
                                   AgentRegistryConfig config,
                                   BootstrapTokenValidator bootstrapTokenValidator,
                                   JwtService jwtService,
                                   Ed25519SigningService ed25519SigningService,
                                   AgentEventService agentEventService,
                                   AuditService auditService,
                                   @org.springframework.beans.factory.annotation.Qualifier("clickHouseJdbcTemplate") JdbcTemplate jdbc,
                                   RouteStateRegistry routeStateRegistry,
                                   RouteCatalogStore routeCatalogStore) {
    this.registryService = registryService;
    this.config = config;
    this.bootstrapTokenValidator = bootstrapTokenValidator;
    this.jwtService = jwtService;
    this.ed25519SigningService = ed25519SigningService;
    this.agentEventService = agentEventService;
    this.auditService = auditService;
    this.jdbc = jdbc;
    this.routeStateRegistry = routeStateRegistry;
    this.routeCatalogStore = routeCatalogStore;
}
```

- [ ] **Step 2: Add upsert call in the register method**

After the `registryService.register(...)` call (after line 126), before the re-registration log, add:

```java
// Persist routes in catalog for server-restart recovery
if (!routeIds.isEmpty()) {
    routeCatalogStore.upsert(application, environmentId, routeIds);
}
```

- [ ] **Step 3: Add upsert call in the heartbeat method**

In the `heartbeat` method, after the `routeStateRegistry` update block (after line 261, before the final `return`), add:

```java
// Persist routes in catalog for server-restart recovery
if (routeIds != null && !routeIds.isEmpty()) {
    AgentInfo agentForCatalog = registryService.findById(id);
    if (agentForCatalog != null) {
        String catalogEnv = agentForCatalog.environmentId();
        routeCatalogStore.upsert(agentForCatalog.applicationId(), catalogEnv, routeIds);
    }
}
```

This handles both the normal heartbeat path and the auto-heal path (lines 230-248), since `routeIds` is extracted from `routeStates` at lines 227-228, before the branch.

- [ ] **Step 4: Verify the project compiles**

Run: `mvn clean compile -q`
Expected: BUILD SUCCESS

- [ ] **Step 5: Commit**

```bash
git add cameleer-server-app/src/main/java/com/cameleer/server/app/controller/AgentRegistrationController.java
git commit -m "feat: persist route catalog on agent register and heartbeat"
```

---
### Task 6: Read Path — CatalogController

**Files:**
- Modify: `cameleer-server-app/src/main/java/com/cameleer/server/app/controller/CatalogController.java`

- [ ] **Step 1: Add the RouteCatalogStore field and constructor parameter**

Add imports:

```java
import com.cameleer.server.core.storage.RouteCatalogStore;
import com.cameleer.server.core.storage.RouteCatalogEntry;
```

Add a field after `tenantProperties` (after line 49):

```java
private final RouteCatalogStore routeCatalogStore;
```

Add `RouteCatalogStore routeCatalogStore` as the last constructor parameter and assign it:

```java
public CatalogController(AgentRegistryService registryService,
                         DiagramStore diagramStore,
                         @org.springframework.beans.factory.annotation.Qualifier("clickHouseJdbcTemplate") JdbcTemplate jdbc,
                         RouteStateRegistry routeStateRegistry,
                         AppService appService,
                         EnvironmentService envService,
                         DeploymentRepository deploymentRepo,
                         TenantProperties tenantProperties,
                         RouteCatalogStore routeCatalogStore) {
    this.registryService = registryService;
    this.diagramStore = diagramStore;
    this.jdbc = jdbc;
    this.routeStateRegistry = routeStateRegistry;
    this.appService = appService;
    this.envService = envService;
    this.deploymentRepo = deploymentRepo;
    this.tenantProperties = tenantProperties;
    this.routeCatalogStore = routeCatalogStore;
}
```

- [ ] **Step 2: Add persistent catalog merge in getCatalog()**

After the existing ClickHouse stats merge block (after line 155, `// Merge ClickHouse routes into routesByApp`), add:

```java
// Merge routes from persistent catalog (covers routes with 0 executions
// and routes from previous app versions within the selected time window)
try {
    List<RouteCatalogEntry> catalogEntries = (environment != null && !environment.isBlank())
            ? routeCatalogStore.findByEnvironment(environment, rangeFrom, rangeTo)
            : routeCatalogStore.findAll(rangeFrom, rangeTo);
    for (RouteCatalogEntry entry : catalogEntries) {
        routesByApp.computeIfAbsent(entry.applicationId(), k -> new LinkedHashSet<>())
                .add(entry.routeId());
    }
} catch (Exception e) {
    log.warn("Failed to query route catalog: {}", e.getMessage());
}
```

- [ ] **Step 3: Add route_catalog to the dismiss deletion list**

In the `deleteClickHouseData` method (line 348), add `"route_catalog"` to the `tablesWithAppId` array:

```java
String[] tablesWithAppId = {
    "executions", "processor_executions", "route_diagrams", "agent_events",
    "stats_1m_app", "stats_1m_route", "stats_1m_processor_type", "stats_1m_processor",
    "stats_1m_processor_detail", "route_catalog"
};
```

Also call `routeCatalogStore.deleteByApplication(applicationId)` inside `dismissApplication()`, after the `deleteClickHouseData` call (after line 333):

```java
routeCatalogStore.deleteByApplication(applicationId);
```

- [ ] **Step 4: Verify the project compiles**

Run: `mvn clean compile -q`
Expected: BUILD SUCCESS

- [ ] **Step 5: Commit**

```bash
git add cameleer-server-app/src/main/java/com/cameleer/server/app/controller/CatalogController.java
git commit -m "feat: merge persistent route catalog into unified catalog endpoint"
```

---
### Task 7: Read Path — RouteCatalogController

**Files:**
- Modify: `cameleer-server-app/src/main/java/com/cameleer/server/app/controller/RouteCatalogController.java`

- [ ] **Step 1: Add the RouteCatalogStore field and constructor parameter**

Add imports:

```java
import com.cameleer.server.core.storage.RouteCatalogStore;
import com.cameleer.server.core.storage.RouteCatalogEntry;
```

Add a field after `routeStateRegistry` (after line 44):

```java
private final RouteCatalogStore routeCatalogStore;
```

Update the constructor to accept and assign it:

```java
public RouteCatalogController(AgentRegistryService registryService,
                              DiagramStore diagramStore,
                              @org.springframework.beans.factory.annotation.Qualifier("clickHouseJdbcTemplate") JdbcTemplate jdbc,
                              RouteStateRegistry routeStateRegistry,
                              RouteCatalogStore routeCatalogStore) {
    this.registryService = registryService;
    this.diagramStore = diagramStore;
    this.jdbc = jdbc;
    this.routeStateRegistry = routeStateRegistry;
    this.routeCatalogStore = routeCatalogStore;
}
```

- [ ] **Step 2: Add persistent catalog merge in getCatalog()**

After the existing ClickHouse stats merge block (after line 123, `// Merge route IDs from ClickHouse stats into routesByApp`), add:

```java
// Merge routes from persistent catalog (covers routes with 0 executions
// and routes from previous app versions within the selected time window)
try {
    List<RouteCatalogEntry> catalogEntries = (environment != null && !environment.isBlank())
            ? routeCatalogStore.findByEnvironment(environment, rangeFrom, rangeTo)
            : routeCatalogStore.findAll(rangeFrom, rangeTo);
    for (RouteCatalogEntry entry : catalogEntries) {
        routesByApp.computeIfAbsent(entry.applicationId(), k -> new LinkedHashSet<>())
                .add(entry.routeId());
    }
} catch (Exception e) {
    log.warn("Failed to query route catalog: {}", e.getMessage());
}
```

- [ ] **Step 3: Verify the project compiles**

Run: `mvn clean compile -q`
Expected: BUILD SUCCESS

- [ ] **Step 4: Commit**

```bash
git add cameleer-server-app/src/main/java/com/cameleer/server/app/controller/RouteCatalogController.java
git commit -m "feat: merge persistent route catalog into legacy catalog endpoint"
```

---
### Task 8: Update Rule Files

**Files:**
- Modify: `.claude/rules/core-classes.md`
- Modify: `.claude/rules/app-classes.md`

- [ ] **Step 1: Update core-classes.md storage section**

In the `## storage/ — Storage abstractions` section (lines 52-56), update the interfaces line and add the new record.

Change line 54 from:

```
- `ExecutionStore`, `MetricsStore`, `MetricsQueryStore`, `StatsStore`, `DiagramStore`, `SearchIndex`, `LogIndex` — interfaces
```

to:

```
- `ExecutionStore`, `MetricsStore`, `MetricsQueryStore`, `StatsStore`, `DiagramStore`, `RouteCatalogStore`, `SearchIndex`, `LogIndex` — interfaces
- `RouteCatalogEntry` — record: applicationId, routeId, environment, firstSeen, lastSeen
```

- [ ] **Step 2: Update app-classes.md ClickHouse stores section**

In the `## storage/ — ClickHouse stores` section (lines 72-77), add after the `ClickHouseUsageTracker` line:

```
- `ClickHouseRouteCatalogStore` — persistent route catalog with first_seen cache, warm-loaded on startup
```

- [ ] **Step 3: Commit**

```bash
git add .claude/rules/core-classes.md .claude/rules/app-classes.md
git commit -m "docs: update rule files with RouteCatalogStore classes"
```

---
### Task 9: Full Build Verification

- [ ] **Step 1: Run the full build**

Run: `mvn clean verify -q`
Expected: BUILD SUCCESS (all tests pass)

- [ ] **Step 2: If the build fails, fix the issue and re-run**

Common issues:
- Constructor parameter ordering mismatches in tests that instantiate controllers directly
- Missing import statements

After fixing, run `mvn clean verify -q` again to confirm.

---
# Persistent Route Catalog

## Problem

The route catalog is assembled at query time from two ephemeral sources: the in-memory agent registry and ClickHouse `stats_1m_route` execution stats. Routes with zero executions (sub-routes) exist only in the agent registry. When the server restarts and no agent reconnects, these routes vanish from the sidebar. The ClickHouse stats fallback only covers routes that have at least one recorded execution.

This forces operators to restart agents after a server outage just to restore the sidebar, even though all historical execution data is still intact in ClickHouse.

## Solution

Add a `route_catalog` table in ClickHouse that persistently tracks which routes exist per application and environment, with `first_seen` and `last_seen` timestamps. This becomes a third source for the catalog endpoints, filling the gap between the in-memory registry and execution stats.
### Historical Lifecycle Support

When a new app version drops a route, the route's `last_seen` stops being updated. The sidebar query uses time-range overlap: any route whose `[first_seen, last_seen]` interval intersects the user's selected `[from, to]` window is included. This means historical routes remain visible when browsing past time windows, even if they no longer exist in the current app version.
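The overlap rule reduces to a standard interval-intersection check. A minimal sketch (class and sample names are illustrative, not from the codebase):

```java
import java.time.Instant;

public class LifecycleOverlap {

    // A route is visible in [from, to] iff [firstSeen, lastSeen] intersects it,
    // i.e. firstSeen <= to AND lastSeen >= from: the same predicate the
    // route_catalog SQL query uses. Both bounds are inclusive.
    static boolean visible(Instant firstSeen, Instant lastSeen, Instant from, Instant to) {
        return !firstSeen.isAfter(to) && !lastSeen.isBefore(from);
    }

    public static void main(String[] args) {
        Instant jan1 = Instant.parse("2024-01-01T00:00:00Z");
        Instant jan5 = Instant.parse("2024-01-05T00:00:00Z");
        Instant jan10 = Instant.parse("2024-01-10T00:00:00Z");
        Instant jan20 = Instant.parse("2024-01-20T00:00:00Z");

        // A route dropped on Jan 5 still shows when browsing Jan 1-10...
        System.out.println(visible(jan1, jan5, jan1, jan10));  // true
        // ...but is hidden when browsing Jan 10-20.
        System.out.println(visible(jan1, jan5, jan10, jan20)); // false
    }
}
```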
## ClickHouse Table Schema

```sql
CREATE TABLE IF NOT EXISTS route_catalog (
    tenant_id LowCardinality(String) DEFAULT 'default',
    environment LowCardinality(String) DEFAULT 'default',
    application_id LowCardinality(String),
    route_id LowCardinality(String),
    first_seen DateTime64(3),
    last_seen DateTime64(3)
)
ENGINE = ReplacingMergeTree(last_seen)
ORDER BY (tenant_id, environment, application_id, route_id);
```

- `ReplacingMergeTree(last_seen)` keeps the row with the highest `last_seen` on merge.
- No TTL: catalog entries live until explicitly dismissed.
- No partitioning: the table stays small (one row per route per app per environment).
- Added to `init.sql`, idempotent via `IF NOT EXISTS`.
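The dedup semantics can be mimicked in plain Java: among rows sharing the ORDER BY key, only the row with the greatest version column (`last_seen`) survives a merge. A sketch under that assumption (class and key names are illustrative):

```java
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;

public class ReplacingMergeSketch {

    // ReplacingMergeTree(last_seen): each insert for an existing key is
    // eventually collapsed, keeping the row with the max version column.
    static Instant observe(Map<String, Instant> table, String key, Instant lastSeen) {
        return table.merge(key, lastSeen, (a, b) -> a.isAfter(b) ? a : b);
    }

    public static void main(String[] args) {
        Map<String, Instant> table = new HashMap<>();
        String key = "default|prod|orders-app|route-a"; // stands in for the key tuple

        observe(table, key, Instant.parse("2024-01-01T00:00:00Z"));
        observe(table, key, Instant.parse("2024-01-03T00:00:00Z"));
        observe(table, key, Instant.parse("2024-01-02T00:00:00Z")); // late arrival, ignored

        System.out.println(table.get(key)); // 2024-01-03T00:00:00Z
    }
}
```

Unlike this in-memory map, ClickHouse merges happen asynchronously, which is why the read path below uses `FINAL`.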
## Write Path

### Interface (`cameleer-server-core`)

```java
public interface RouteCatalogStore {
    void upsert(String applicationId, String environment, Collection<String> routeIds);
    List<RouteCatalogEntry> findByEnvironment(String environment, Instant from, Instant to);
    List<RouteCatalogEntry> findAll(Instant from, Instant to);
    void deleteByApplication(String applicationId);
}
```

`RouteCatalogEntry` is a record: `(applicationId, routeId, environment, firstSeen, lastSeen)`.

### Implementation (`cameleer-server-app`)

`ClickHouseRouteCatalogStore` follows the `ClickHouseDiagramStore` pattern:

- Maintains a `ConcurrentHashMap<String, Instant>` as `firstSeenCache`, keyed by `environment + "\0" + application_id + "\0" + route_id` (the store is already scoped to a single tenant).
- Warm-loaded on startup from ClickHouse (`SELECT ... FROM route_catalog FINAL WHERE tenant_id = ?`).
- On `upsert()`: for each route ID, look up `first_seen` in the cache (defaulting to `now` for unknown routes, which also populates the cache), then insert a row with that `first_seen` and `last_seen = now`.
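The cache interaction on `upsert()` can be shown in isolation: the first sighting of a route fixes its `first_seen` forever, while later heartbeats only move `last_seen`. A toy stand-in for the store (not the real class):

```java
import java.time.Instant;
import java.util.concurrent.ConcurrentHashMap;

public class FirstSeenCacheSketch {

    static final ConcurrentHashMap<String, Instant> firstSeenCache = new ConcurrentHashMap<>();

    // Returns the first_seen that would be written for this route at time `now`:
    // the cached value if the route is already known, otherwise `now`
    // (which computeIfAbsent also stores atomically).
    static Instant firstSeenFor(String env, String app, String route, Instant now) {
        String key = env + "\0" + app + "\0" + route; // NUL separator avoids key collisions
        return firstSeenCache.computeIfAbsent(key, k -> now);
    }

    public static void main(String[] args) {
        Instant jan = Instant.parse("2024-01-01T00:00:00Z");
        Instant feb = Instant.parse("2024-02-01T00:00:00Z");

        Instant first = firstSeenFor("prod", "orders-app", "route-a", jan);
        Instant second = firstSeenFor("prod", "orders-app", "route-a", feb);

        // A later heartbeat reuses the original first_seen; last_seen would move to feb.
        System.out.println(first.equals(second)); // true
    }
}
```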
### Write Triggers

Two existing code paths already have the route list available:

1. **`AgentRegistrationController.register()`** -- has the full `routeIds` list from the registration request.
2. **`AgentRegistrationController.heartbeat()`** -- has `routeStates.keySet()` as route IDs, including the auto-heal path after a server restart.

Both call `routeCatalogStore.upsert(application, environment, routeIds)` after the existing registry logic. No new HTTP calls or scheduled jobs.
## Read Path

### Query

```sql
SELECT application_id, route_id, first_seen, last_seen
FROM route_catalog FINAL
WHERE tenant_id = ? AND first_seen <= ? AND last_seen >= ?
  AND environment = ?
```

`FINAL` forces dedup at read time. The table is small, so the cost is negligible.
### Merge Logic

In both `CatalogController` and `RouteCatalogController`, after the existing ClickHouse stats merge, add the catalog as a third source:

```java
for (RouteCatalogEntry entry : catalogEntries) {
    routesByApp.computeIfAbsent(entry.applicationId(), k -> new LinkedHashSet<>())
            .add(entry.routeId());
}
```

Routes already known from the agent registry or stats are deduplicated by the `Set`.
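Because each application maps to a `Set`, a route reported by more than one source collapses to a single sidebar entry, while catalog-only routes are simply appended. A standalone sketch (helper and sample names are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

public class CatalogMergeSketch {

    // Same shape as the controllers' merge loop: one LinkedHashSet per
    // application deduplicates routes while preserving insertion order.
    static void addRoute(Map<String, Set<String>> routesByApp, String app, String route) {
        routesByApp.computeIfAbsent(app, k -> new LinkedHashSet<>()).add(route);
    }

    public static void main(String[] args) {
        Map<String, Set<String>> routesByApp = new LinkedHashMap<>();
        addRoute(routesByApp, "orders-app", "route-a"); // from the agent registry
        addRoute(routesByApp, "orders-app", "route-a"); // again, from execution stats
        addRoute(routesByApp, "orders-app", "route-b"); // catalog-only zero-execution route

        System.out.println(routesByApp.get("orders-app")); // [route-a, route-b]
    }
}
```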
### Time Range

Both controllers already receive `from`/`to` query params (default: last 24h). These same bounds are passed to the catalog query, giving the lifecycle overlap behavior.
## Dismiss Path

`CatalogController.dismissApplication()` adds `route_catalog` to the existing `tablesWithAppId` array, so the existing `ALTER TABLE ... DELETE WHERE tenant_id = ? AND application_id = ?` loop handles deletion. The `firstSeenCache` is also cleared for the dismissed app via `routeCatalogStore.deleteByApplication()`.
## Files Changed

| File | Change |
|------|--------|
| `clickhouse/init.sql` | Add `CREATE TABLE IF NOT EXISTS route_catalog` |
| **New:** `core/.../storage/RouteCatalogStore.java` | Interface |
| **New:** `core/.../storage/RouteCatalogEntry.java` | Record |
| **New:** `app/.../storage/ClickHouseRouteCatalogStore.java` | Implementation with cache |
| `app/.../config/StorageBeanConfig.java` | Wire bean |
| `app/.../controller/AgentRegistrationController.java` | Call `upsert()` on register + heartbeat |
| `app/.../controller/CatalogController.java` | Inject store, query, merge, add to dismiss list |
| `app/.../controller/RouteCatalogController.java` | Same catalog merge |
| `.claude/rules/core-classes.md` | Add to storage section |
| `.claude/rules/app-classes.md` | Add to ClickHouse stores section |

No new dependencies. No PostgreSQL migration. No UI changes -- the sidebar already consumes routes from the catalog endpoints.