# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project
Cameleer SaaS — vendor management plane for the Cameleer observability stack. Two personas: the vendor (`platform:admin`) manages the platform and provisions tenants; the tenant admin (`tenant:manage`) manages their observability instance. Creating a tenant provisions per-tenant cameleer3-server + UI instances via the Docker API. No example tenant — clean-slate bootstrap; the vendor creates everything.
## Ecosystem
This repo is the SaaS layer on top of two proven components:
- **cameleer3** (sibling repo) — Java agent using ByteBuddy for zero-code instrumentation of Camel apps. Captures route executions, processor traces, payloads, metrics, and route graph topology. Deploys as a `-javaagent` JAR.
- **cameleer3-server** (sibling repo) — Spring Boot observability backend. Receives agent data via HTTP, pushes config/commands via SSE. PostgreSQL + ClickHouse storage. React SPA dashboard. JWT auth with Ed25519 config signing. Docker container orchestration for app deployments.
- **cameleer-website** — marketing site (Astro 5)
- **design-system** — shared React component library (`@cameleer/design-system` on the Gitea npm registry)
The agent-server protocol is defined in `cameleer3/cameleer3-common/PROTOCOL.md`. The agent and server are mature, proven components — this repo wraps them with multi-tenancy, billing, and self-service onboarding.
## Key Classes
### Java Backend (`src/main/java/net/siegeln/cameleer/saas/`)
#### `config/` — Security, tenant isolation, web config

- `SecurityConfig.java` — OAuth2 JWT decoder (ES384, issuer/audience validation, scope extraction)
- `TenantIsolationInterceptor.java` — HandlerInterceptor on `/api/**`; JWT `org_id` -> TenantContext, path variable validation, fail-closed
- `TenantContext.java` — ThreadLocal tenant ID storage
- `WebConfig.java` — registers TenantIsolationInterceptor
- `PublicConfigController.java` — `GET /api/config` (Logto endpoint, SPA client ID, scopes)
- `MeController.java` — `GET /api/me` (authenticated user, tenant list)
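The interceptor's fail-closed resolution can be sketched in a few lines (illustrative TypeScript, not the actual Java implementation; claim and scope names follow the list above):

```typescript
type Claims = { org_id?: string; scope?: string };

// Resolve the effective tenant for a request, failing closed: a token
// without an org can never touch tenant-scoped paths, and a {tenantId}
// path variable must match the token's org. Platform admins bypass.
function resolveTenant(claims: Claims, pathTenantId: string | null): string | null {
  const scopes = (claims.scope ?? "").split(" ").filter(Boolean);
  if (scopes.includes("platform:admin")) return pathTenantId; // vendor bypass
  if (!claims.org_id) throw new Error("403: token carries no org_id");
  if (pathTenantId !== null && pathTenantId !== claims.org_id) {
    throw new Error("403: path tenant does not match token org");
  }
  return claims.org_id;
}
```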
#### `tenant/` — Tenant data model

- `TenantEntity.java` — JPA entity (id, name, slug, tier, status, logto_org_id, stripe IDs, settings JSONB)
#### `vendor/` — Vendor console (`platform:admin` only)

- `VendorTenantService.java` — orchestrates tenant creation (sync: DB + Logto + license; async: Docker provisioning + config push), suspend/activate, delete, restart server, license renewal
- `VendorTenantController.java` — REST at `/api/vendor/tenants` (`platform:admin` required). List endpoint returns `VendorTenantSummary` with fleet health data (agentCount, environmentCount, agentLimit) fetched in parallel via `CompletableFuture`.
#### `portal/` — Tenant admin portal (org-scoped)

- `TenantPortalService.java` — customer-facing: dashboard (health + agent/env counts from the server via M2M), license, SSO connectors, team, settings (public endpoint URL), server restart
- `TenantPortalController.java` — REST at `/api/tenant/*` (org-scoped, includes CA cert management at `/api/tenant/ca`)
#### `provisioning/` — Pluggable tenant provisioning

- `TenantProvisioner.java` — pluggable interface (like the server's RuntimeOrchestrator)
- `DockerTenantProvisioner.java` — Docker implementation, creates per-tenant server + UI containers
- `TenantProvisionerAutoConfig.java` — auto-detects the Docker socket
- `DockerCertificateManager.java` — file-based cert management with atomic `.wip` swap (Docker volume)
- `DisabledCertificateManager.java` — no-op when the certs dir is unavailable
- `CertificateManagerAutoConfig.java` — auto-detects the `/certs` directory
#### `certificate/` — TLS certificate lifecycle management

- `CertificateManager.java` — provider interface (Docker now, K8s later)
- `CertificateService.java` — orchestrates stage/activate/restore/discard, DB metadata, tenant CA staleness
- `CertificateController.java` — REST at `/api/vendor/certificates` (`platform:admin` required)
- `CertificateEntity.java` — JPA entity (status: ACTIVE/STAGED/ARCHIVED, subject, fingerprint, etc.)
- `CertificateStartupListener.java` — seeds the DB from the filesystem on boot (for bootstrap-generated certs)
- `TenantCaCertEntity.java` — JPA entity for per-tenant CA certs (PEM stored in DB, multiple per tenant)
- `TenantCaCertRepository.java` — queries by tenant, status, all active across tenants
- `TenantCaCertService.java` — stage/activate/delete tenant CAs, rebuilds the aggregated `ca.pem` on changes
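A toy model of the stage/activate lifecycle those classes manage (illustrative sketch only — the real transitions live in `CertificateService` and may differ in detail):

```typescript
type CertStatus = "ACTIVE" | "STAGED" | "ARCHIVED";

// Activating a staged cert archives the currently active one, so a
// restore can always fall back to the previous certificate.
function activate(certs: Map<string, CertStatus>, id: string): void {
  if (certs.get(id) !== "STAGED") throw new Error("only STAGED certs can be activated");
  for (const [certId, status] of certs) {
    if (status === "ACTIVE") certs.set(certId, "ARCHIVED");
  }
  certs.set(id, "ACTIVE");
}
```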
#### `license/` — License management

- `LicenseEntity.java` — JPA entity (id, tenant_id, tier, features JSONB, limits JSONB, expires_at)
- `LicenseService.java` — generation, validation, feature/limit lookups
- `LicenseController.java` — POST issue, GET verify, DELETE revoke
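A hedged sketch of the validation and limit lookups `LicenseService` performs, using the entity fields listed above (the `limits` keys are illustrative — actual keys are not specified here):

```typescript
type License = {
  tier: string;
  limits: Record<string, number>; // e.g. { agents: 10 } — keys are illustrative
  expiresAt: string;              // ISO date, mirrors expires_at
};

function isExpired(lic: License, now: Date): boolean {
  return new Date(lic.expiresAt).getTime() <= now.getTime();
}

// A limit is enforced only if present; absent keys mean unlimited.
function withinLimit(lic: License, key: string, current: number): boolean {
  const max = lic.limits[key];
  return max === undefined || current < max;
}
```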
#### `identity/` — Logto & server integration

- `LogtoConfig.java` — Logto endpoint, M2M credentials (reads from the bootstrap file)
- `LogtoManagementClient.java` — Logto Management API calls (create org, create user, add to org, get user, SSO connectors, JIT provisioning)
- `ServerApiClient.java` — M2M client for the cameleer3-server API (Logto M2M token, `X-Cameleer-Protocol-Version: 1` header). Health checks, license/OIDC push, agent count, environment count per tenant server.
#### `audit/` — Audit logging

- `AuditEntity.java` — JPA entity (actor_id, actor_email, tenant_id, action, resource, status)
- `AuditService.java` — logs audit events (TENANT_CREATE, TENANT_UPDATE, etc.); auto-resolves the actor name from Logto when actorEmail is null (cached in-memory)
### React Frontend (`ui/src/`)

- `main.tsx` — React 19 root
- `router.tsx` — `/vendor/*` + `/tenant/*` with `RequireScope` guards and a `LandingRedirect` that waits for scopes
- `Layout.tsx` — persona-aware sidebar: vendor sees an expandable "Vendor" section (Tenants, Audit Log, Certificates, Identity/Logto); tenant admin sees Dashboard/License/SSO/Team/Audit/Settings
- `OrgResolver.tsx` — merges global + org-scoped token scopes (the vendor's `platform:admin` is global)
- `config.ts` — fetches Logto config from `/platform/api/config`
- `auth/useAuth.ts` — auth hook (isAuthenticated, logout, signIn)
- `auth/useOrganization.ts` — Zustand store for the current tenant
- `auth/useScopes.ts` — decodes JWT scopes, `hasScope()`
- `auth/ProtectedRoute.tsx` — guard (redirects to /login)
- Vendor pages: `VendorTenantsPage.tsx`, `CreateTenantPage.tsx`, `TenantDetailPage.tsx`, `VendorAuditPage.tsx`, `CertificatesPage.tsx`
- Tenant pages: `TenantDashboardPage.tsx`, `TenantLicensePage.tsx`, `SsoPage.tsx`, `TeamPage.tsx`, `TenantAuditPage.tsx`, `SettingsPage.tsx`
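The scope decoding that `auth/useScopes.ts` performs can be sketched like this (assumptions: the access token's `scope` claim is a space-separated string, and no client-side signature verification is needed — the server validates tokens; the browser build would use `atob()` where this sketch uses Node's `Buffer`):

```typescript
// Decode a JWT payload and pull out its space-separated scopes.
function decodeScopes(token: string): string[] {
  const payload = token.split(".")[1];
  const json = Buffer.from(payload, "base64url").toString("utf8");
  const claims = JSON.parse(json) as { scope?: string };
  return (claims.scope ?? "").split(" ").filter(Boolean);
}

function hasScope(token: string, required: string): boolean {
  return decodeScopes(token).includes(required);
}
```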
### Custom Sign-in UI (`ui/sign-in/src/`)

- `SignInPage.tsx` — form built with @cameleer/design-system components
- `experience-api.ts` — Logto Experience API client (4-step: init -> verify -> identify -> submit)
## Architecture Context
The SaaS platform is a vendor management plane. It does not proxy requests to servers — instead it provisions dedicated per-tenant cameleer3-server instances via Docker API. Each tenant gets isolated server + UI containers with their own database schemas, networks, and Traefik routing.
### Routing (single-domain, path-based via Traefik)
All services run on one hostname. Two env vars control everything: `PUBLIC_HOST` + `PUBLIC_PROTOCOL`.
| Path | Target | Notes |
|---|---|---|
| `/platform/*` | `cameleer-saas:8080` | SPA + API (`server.servlet.context-path: /platform`) |
| `/platform/vendor/*` | (SPA routes) | Vendor console (`platform:admin`) |
| `/platform/tenant/*` | (SPA routes) | Tenant admin portal (org-scoped) |
| `/t/{slug}/*` | per-tenant server-ui | Provisioned tenant UI containers (Traefik labels) |
| `/` | redirect -> `/platform/` | Via `docker/traefik-dynamic.yml` |
| `/*` (catch-all) | `cameleer-logto:3001` (priority=1) | Custom sign-in UI, OIDC, interaction |
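The precedence in the table can be mimicked by a toy resolver (illustrative only — the real routing is Traefik rule priorities, not application code):

```typescript
// Prefix matching with a low-priority catch-all, mirroring the table above.
function resolveTarget(path: string): string {
  if (path === "/") return "redirect:/platform/";
  if (path.startsWith("/platform")) return "cameleer-saas:8080";
  if (path.startsWith("/t/")) return "tenant-server-ui";
  return "cameleer-logto:3001"; // catch-all (priority=1)
}
```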
- SPA assets at `/_app/` (Vite `assetsDir: '_app'`) to avoid conflict with Logto's `/assets/`
- Logto `ENDPOINT=${PUBLIC_PROTOCOL}://${PUBLIC_HOST}` (same domain, same origin)
- TLS: the `traefik-certs` init container generates a self-signed cert (dev) or copies a user-supplied cert via `CERT_FILE`/`KEY_FILE`/`CA_FILE` env vars. Runtime cert replacement via the vendor UI (stage/activate/restore). ACME for production (future).
- Root `/` -> `/platform/` redirect via the Traefik file provider (`docker/traefik-dynamic.yml`)
- LoginPage auto-redirects to Logto OIDC (no intermediate button)
- Per-tenant server containers get Traefik labels for `/t/{slug}/*` routing at provisioning time
### Docker Networks
Compose-defined networks:
| Network | Name on Host | Purpose |
|---|---|---|
| `cameleer` | `cameleer-saas_cameleer` | Compose default — shared services (DB, Logto, SaaS) |
| `cameleer-traefik` | `cameleer-traefik` (fixed `name:`) | Traefik + provisioned tenant containers |
Per-tenant networks (created dynamically by `DockerTenantProvisioner`):
| Network | Name Pattern | Purpose |
|---|---|---|
| Tenant network | `cameleer-tenant-{slug}` | Internal bridge, no internet — isolates tenant server + apps |
| Environment network | `cameleer-env-{tenantId}-{envSlug}` | Tenant-scoped (includes tenantId to prevent slug collision across tenants) |
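Why the tenantId appears in environment network names: two tenants can both have an env slug like `prod`, and only the tenant-scoped name keeps them apart. A sketch of the naming (patterns taken from the table above):

```typescript
const tenantNetwork = (slug: string) => `cameleer-tenant-${slug}`;
const envNetwork = (tenantId: string, envSlug: string) =>
  `cameleer-env-${tenantId}-${envSlug}`;
```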
Server containers join three networks: the tenant network (primary), the shared services network (`cameleer`), and the traefik network. Apps deployed by the server use the tenant network as primary.
**IMPORTANT:** Dynamically created containers MUST have the `traefik.docker.network=cameleer-traefik` label. Traefik's Docker provider defaults to `network: cameleer` (the compose-internal name) for IP resolution, which doesn't match dynamically created containers connected via the Docker API using the host network name (`cameleer-saas_cameleer`). Without this label, Traefik returns 504 Gateway Timeout for `/t/{slug}/api/*` paths.
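A hypothetical label set for a provisioned container (the router and middleware names are illustrative, not taken from the codebase; the `traefik.docker.network` entry is the one that prevents the 504):

```typescript
function traefikLabels(slug: string): Record<string, string> {
  return {
    "traefik.enable": "true",
    // Without this, Traefik resolves the container IP on the wrong network -> 504.
    "traefik.docker.network": "cameleer-traefik",
    [`traefik.http.routers.tenant-${slug}.rule`]: `PathPrefix(\`/t/${slug}\`)`,
    [`traefik.http.middlewares.tenant-${slug}-strip.stripprefix.prefixes`]: `/t/${slug}`,
  };
}
```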
### Custom sign-in UI (`ui/sign-in/`)
Separate Vite+React SPA replacing Logto's default sign-in page. Visually matches cameleer3-server LoginPage.
- Built as a custom Logto Docker image (`cameleer-logto`): `ui/sign-in/Dockerfile` = node build stage + `FROM ghcr.io/logto-io/logto:latest` + COPY dist over `/etc/logto/packages/experience/dist/`
- Uses `@cameleer/design-system` components (Card, Input, Button, FormField, Alert)
- Authenticates via the Logto Experience API (4-step: init -> verify password -> identify -> submit -> redirect)
- The `CUSTOM_UI_PATH` env var does NOT work for Logto OSS — must volume-mount or replace the experience dist directory
- Favicon bundled in `ui/sign-in/public/favicon.svg` (served by Logto, not SaaS)
### Auth enforcement
- All API endpoints enforce OAuth2 scopes via `@PreAuthorize("hasAuthority('SCOPE_xxx')")` annotations
- Tenant isolation is enforced by `TenantIsolationInterceptor` (a single HandlerInterceptor on `/api/**` that resolves the JWT `org_id` to TenantContext and validates `{tenantId}`, `{environmentId}`, `{appId}` path variables; fail-closed, platform admins bypass)
- 13 OAuth2 scopes on the Logto API resource (`https://api.cameleer.local`): 10 platform scopes + 3 server scopes (`server:admin`, `server:operator`, `server:viewer`), served to the frontend from `GET /platform/api/config`
- Server scopes map to server RBAC roles via the JWT `scope` claim (SaaS platform path) or `roles` claim (server-ui OIDC login path)
- Org roles: owner -> `server:admin` + `tenant:manage`, operator -> `server:operator`, viewer -> `server:viewer`
- `saas-vendor` global role injected via `docker/vendor-seed.sh` (`VENDOR_SEED_ENABLED=true` in dev) — has `platform:admin` + all tenant scopes
- Custom `JwtDecoder` in `SecurityConfig.java` — ES384 algorithm, `at+jwt` token type, split issuer-uri (string validation) / jwk-set-uri (Docker-internal fetch), audience validation (`https://api.cameleer.local`)
- Logto Custom JWT (Phase 7b in bootstrap) injects a `roles` claim into access tokens based on org roles and global roles — this makes role data available to the server without Logto-specific code
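The org-role-to-scope mapping above, as a sketch:

```typescript
type OrgRole = "owner" | "operator" | "viewer";

// Mirrors the mapping configured in bootstrap: owner gets server admin
// plus tenant management; operator and viewer get their server scope only.
function scopesForOrgRole(role: OrgRole): string[] {
  switch (role) {
    case "owner": return ["server:admin", "tenant:manage"];
    case "operator": return ["server:operator"];
    case "viewer": return ["server:viewer"];
  }
}
```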
### Auth routing by persona
| Persona | Logto role | Key scope | Landing route |
|---|---|---|---|
| Vendor | `saas-vendor` (global) | `platform:admin` | `/vendor/tenants` |
| Tenant admin | org owner | `tenant:manage` | `/tenant` (dashboard) |
| Regular user (operator/viewer) | org member | `server:operator` or `server:viewer` | Redirected to server dashboard directly |
- The `LandingRedirect` component waits for scopes to load, then routes to the correct persona landing page
- The `RequireScope` guard on route groups enforces scope requirements
- SSO bridge: the Logto session carries over to the provisioned server's OIDC flow (Traditional Web App per tenant)
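`LandingRedirect`'s decision reduces to a scope check in priority order (sketch, not the actual component — the real one also waits for scopes to load):

```typescript
function landingRoute(scopes: string[]): string {
  if (scopes.includes("platform:admin")) return "/vendor/tenants";
  if (scopes.includes("tenant:manage")) return "/tenant";
  // operator/viewer are sent straight to their provisioned server dashboard
  return "server-dashboard";
}
```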
### Per-tenant server env vars (set by `DockerTenantProvisioner`)
These env vars are injected into provisioned per-tenant server containers:
| Env var | Value | Purpose |
|---|---|---|
| `CAMELEER_OIDC_ISSUER_URI` | `${PUBLIC_PROTOCOL}://${PUBLIC_HOST}/oidc` | Token issuer claim validation |
| `CAMELEER_OIDC_JWK_SET_URI` | `http://logto:3001/oidc/jwks` | Docker-internal JWK fetch |
| `CAMELEER_OIDC_TLS_SKIP_VERIFY` | `true` (conditional) | Skip cert verify for OIDC discovery; only set when no `/certs/ca.pem` exists |
| `CAMELEER_CORS_ALLOWED_ORIGINS` | `${PUBLIC_PROTOCOL}://${PUBLIC_HOST}` | Allow browser requests through Traefik |
| `CAMELEER_RUNTIME_ENABLED` | `true` | Enable Docker orchestration |
| `CAMELEER_SERVER_URL` | `http://cameleer3-server-{slug}:8081` | Per-tenant server URL (DNS alias on the tenant network) |
| `CAMELEER_ROUTING_DOMAIN` | `${PUBLIC_HOST}` | Domain for Traefik routing labels |
| `CAMELEER_ROUTING_MODE` | `path` | path or subdomain routing |
| `CAMELEER_JAR_STORAGE_PATH` | `/data/jars` | Directory for uploaded JARs |
| `CAMELEER_DOCKER_NETWORK` | `cameleer-tenant-{slug}` | Primary network for deployed app containers |
| `CAMELEER_JAR_DOCKER_VOLUME` | `cameleer-jars-{slug}` | Docker volume name for JAR sharing between server and deployed containers |
| `BASE_PATH` (server-ui) | `/t/{slug}` | React Router basename + `<base>` tag |
| `CAMELEER_API_URL` (server-ui) | `http://cameleer-server-{slug}:8081` | Nginx upstream proxy target (NOT `API_URL` — the image uses `${CAMELEER_API_URL}`) |
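The table rows can be read as a pure function of (`PUBLIC_PROTOCOL`, `PUBLIC_HOST`, slug) — a sketch of what `DockerTenantProvisioner`-style code computes (a subset of the table; values copied from it):

```typescript
function serverEnv(proto: string, host: string, slug: string): Record<string, string> {
  return {
    CAMELEER_OIDC_ISSUER_URI: `${proto}://${host}/oidc`,
    CAMELEER_OIDC_JWK_SET_URI: "http://logto:3001/oidc/jwks",
    CAMELEER_CORS_ALLOWED_ORIGINS: `${proto}://${host}`,
    CAMELEER_SERVER_URL: `http://cameleer3-server-${slug}:8081`,
    CAMELEER_ROUTING_DOMAIN: host,
    CAMELEER_DOCKER_NETWORK: `cameleer-tenant-${slug}`,
    CAMELEER_JAR_DOCKER_VOLUME: `cameleer-jars-${slug}`,
  };
}
```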
### Per-tenant volume mounts (set by `DockerTenantProvisioner`)
| Mount | Container path | Purpose |
|---|---|---|
| `/var/run/docker.sock` | `/var/run/docker.sock` | Docker socket for app deployment orchestration |
| `cameleer-jars-{slug}` (volume) | `/data/jars` | Shared JAR storage — server writes, deployed app containers read |
| `cameleer-saas_certs` (volume, ro) | `/certs` | Platform TLS certs + CA bundle for OIDC trust |
### Server OIDC role extraction (two paths)
| Path | Token type | Role source | How it works |
|---|---|---|---|
| SaaS platform -> server API | Logto org-scoped access token | `scope` claim | `JwtAuthenticationFilter.extractRolesFromScopes()` reads `server:admin` from scope |
| Server-ui SSO login | Logto JWT access token (via Traditional Web App) | `roles` claim | `OidcTokenExchanger` decodes the access_token, reads `roles` injected by the Custom JWT |
The server's OIDC config (`OidcConfig`) includes `audience` (RFC 8707 resource indicator) and `additionalScopes`. The audience is sent as `resource` in both the authorization request and the token exchange, which makes Logto return a JWT access token instead of an opaque one. The Custom JWT script maps org roles to `roles: ["server:admin"]`.

**CRITICAL:** `additionalScopes` MUST include `urn:logto:scope:organizations` and `urn:logto:scope:organization_roles` — without these, Logto doesn't populate `context.user.organizationRoles` in the Custom JWT script, so the `roles` claim is empty and all users get `defaultRoles` (VIEWER). The server's `OidcAuthController.applyClaimMappings()` uses OIDC token roles (from the Custom JWT) as a fallback when no DB claim mapping rules exist: claim mapping rules > OIDC token roles > defaultRoles.
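The fallback chain in `applyClaimMappings()` — claim mapping rules > OIDC token roles > defaultRoles — sketched (illustrative TypeScript; the real logic is Java):

```typescript
function effectiveRoles(
  dbRuleRoles: string[],   // roles produced by DB claim-mapping rules (empty if none)
  tokenRoles: string[],    // roles claim injected by the Logto Custom JWT
  defaultRoles: string[] = ["VIEWER"],
): string[] {
  if (dbRuleRoles.length > 0) return dbRuleRoles;
  if (tokenRoles.length > 0) return tokenRoles;
  return defaultRoles;
}
```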
### Deployment pipeline
App deployment is handled by the cameleer3-server's `DeploymentExecutor` (7-stage async flow):
1. PRE_FLIGHT — validate config, check the JAR exists
2. PULL_IMAGE — pull the base image if missing
3. CREATE_NETWORK — ensure `cameleer-traefik` and `cameleer-env-{slug}` networks
4. START_REPLICAS — create N containers with Traefik labels
5. HEALTH_CHECK — poll `/cameleer/health` on agent port 9464
6. SWAP_TRAFFIC — stop the old deployment (blue/green)
7. COMPLETE — mark RUNNING or DEGRADED
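The seven stages as a toy sequential runner (sketch only — the real `DeploymentExecutor` runs asynchronously and persists stage state):

```typescript
const STAGES = [
  "PRE_FLIGHT", "PULL_IMAGE", "CREATE_NETWORK", "START_REPLICAS",
  "HEALTH_CHECK", "SWAP_TRAFFIC", "COMPLETE",
] as const;

// Run stages in order, stopping at the first failure.
function runPipeline(failAt: string | null): { reached: string[]; status: string } {
  const reached: string[] = [];
  for (const stage of STAGES) {
    reached.push(stage);
    if (stage === failAt) return { reached, status: "FAILED" };
  }
  return { reached, status: "RUNNING" };
}
```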
Key files:

- `DeploymentExecutor.java` (in cameleer3-server) — async staged deployment
- `DockerRuntimeOrchestrator.java` (in cameleer3-server) — Docker client, container lifecycle
- `docker/runtime-base/Dockerfile` — base image with the agent JAR, maps env vars to `-D` system properties
- `ServerApiClient.java` — M2M token acquisition for SaaS -> server API calls (agent status). Uses the `X-Cameleer-Protocol-Version: 1` header
- Docker socket access: `group_add: ["0"]` in docker-compose.dev.yml (not root group membership in the Dockerfile)
- Network: deployed containers join `cameleer-tenant-{slug}` (primary, isolation) + `cameleer-traefik` (routing) + `cameleer-env-{tenantId}-{envSlug}` (environment isolation)
### Bootstrap (`docker/logto-bootstrap.sh`)
Idempotent script run via logto-bootstrap init container. Clean slate — no example tenant, no viewer user, no server configuration. Phases:
- Wait for Logto health (no server to wait for — servers are provisioned per-tenant)
- Get a Management API token (reads the `m-default` secret from the DB)
- Create Logto apps (SPA, Traditional Web App with `skipConsent`, M2M with Management API role + server API role); phase 3b: create API resource scopes (10 platform + 3 server scopes)
- Create org roles (owner, operator, viewer with API resource scope assignments) + the M2M server role (`cameleer-m2m-server` with the `server:admin` scope)
- Create the admin user (platform owner with Logto console access); phase 7b: configure Logto Custom JWT for access tokens (maps org roles -> `roles` claim: owner -> server:admin, operator -> server:operator, viewer -> server:viewer; saas-vendor global role -> server:admin)
- Configure Logto sign-in branding (Cameleer colors `#C6820E`/`#D4941E`, logo from `/platform/logo.svg`)
- Clean up seeded Logto apps
- Write bootstrap results to `/data/logto-bootstrap.json`
- (Optional) Vendor seed: create the `saas-vendor` global role and vendor user, grant Logto console access (`VENDOR_SEED_ENABLED=true` in dev)
The compose stack is: Traefik + traefik-certs (init) + PostgreSQL + ClickHouse + Logto + logto-bootstrap (init) + cameleer-saas. No cameleer3-server or cameleer3-server-ui in compose — those are provisioned per-tenant by `DockerTenantProvisioner`.
## Tenant Provisioning Flow
When the vendor creates a tenant via `VendorTenantService`:

**Synchronous** (in `createAndProvision`):
1. Create `TenantEntity` (status=PROVISIONING) + Logto organization
2. Create the admin user in Logto with the owner org role
3. Add the vendor user to the new org for support access
4. Register OIDC redirect URIs for `/t/{slug}/oidc/callback` on the Logto Traditional Web App
5. Generate a license (tier-appropriate, 365 days)
6. Return immediately — the UI shows a provisioning spinner and polls via `refetchInterval`
**Asynchronous** (in `provisionAsync`, `@Async`):

7. Create the tenant-isolated Docker network (`cameleer-tenant-{slug}`)
8. Create the server container with env vars, Traefik labels (`traefik.docker.network`), health check, Docker socket bind, JAR volume, certs volume (ro)
9. Create the UI container with `CAMELEER_API_URL`, `BASE_PATH`, Traefik strip-prefix labels
10. Wait for the health check (`/api/v1/health`, not `/actuator/health`, which requires auth)
11. Push the license token to the server via the M2M API
12. Push the OIDC config (Traditional Web App credentials + `additionalScopes: [urn:logto:scope:organizations, urn:logto:scope:organization_roles]`) to the server for SSO
13. Update tenant status -> ACTIVE (or set `provisionError` on failure)
**Server restart** (available to vendor + tenant admin):

- `POST /api/vendor/tenants/{id}/restart` (vendor) and `POST /api/tenant/server/restart` (tenant)
- Calls `TenantProvisioner.stop(slug)` then `start(slug)` — restarts the server + UI containers only
## Database Migrations
PostgreSQL (Flyway): `src/main/resources/db/migration/`
- V001 — tenants (id, name, slug, tier, status, logto_org_id, stripe IDs, settings JSONB)
- V002 — licenses (id, tenant_id, tier, features JSONB, limits JSONB, expires_at)
- V003 — environments (tenant -> environments 1:N)
- V004 — api_keys (auth tokens for agent registration)
- V005 — apps (Camel applications)
- V006 — deployments (app versions, deployment history)
- V007 — audit_log
- V008 — app resource limits
- V010 — cleanup of migrated tables
- V011 — add provisioning fields (server_endpoint, provision_error)
- V012 — certificates table + tenants.ca_applied_at
- V013 — tenant_ca_certs (per-tenant CA certificates with PEM storage)
## Related Conventions
- Gitea-hosted: `gitea.siegeln.net/cameleer/`
- CI: `.gitea/workflows/` — Gitea Actions
- K8s target: k3s cluster at 192.168.50.86
- Docker images: CI builds and pushes all images — Dockerfiles use multi-stage builds, no local builds needed
  - `cameleer-saas` — SaaS vendor management plane (frontend + JAR baked in)
  - `cameleer-logto` — custom Logto with the sign-in UI baked in
  - `cameleer3-server`/`cameleer3-server-ui` — provisioned per-tenant (not in compose, created by `DockerTenantProvisioner`)
  - `cameleer-runtime-base` — base image for deployed apps (agent JAR + JRE). CI downloads the latest agent SNAPSHOT from the Gitea Maven registry. Uses the `CAMELEER_SERVER_URL` env var (not CAMELEER_EXPORT_ENDPOINT).
- Docker builds: `--no-cache`, `--provenance=false` for Gitea compatibility
- `docker-compose.dev.yml` — exposes ports for direct access, sets `SPRING_PROFILES_ACTIVE: dev`, `VENDOR_SEED_ENABLED: true`. Volume-mounts `./ui/dist` into the container so local UI builds are served without rebuilding the Docker image (`SPRING_WEB_RESOURCES_STATIC_LOCATIONS` overrides classpath). Adds a Docker socket mount for tenant provisioning.
- Design system: import from `@cameleer/design-system` (Gitea npm registry)
## Disabled Skills

- Do NOT use any `gsd:*` skills in this project. This includes all `/gsd:`-prefixed commands.
## GitNexus — Code Intelligence
This project is indexed by GitNexus as cameleer-saas (2436 symbols, 5282 relationships, 204 execution flows). Use the GitNexus MCP tools to understand code, assess impact, and navigate safely.
If any GitNexus tool warns that the index is stale, run `npx gitnexus analyze` in the terminal first.
### Always Do
- MUST run impact analysis before editing any symbol. Before modifying a function, class, or method, run `gitnexus_impact({target: "symbolName", direction: "upstream"})` and report the blast radius (direct callers, affected processes, risk level) to the user.
- MUST run `gitnexus_detect_changes()` before committing to verify your changes only affect expected symbols and execution flows.
- MUST warn the user if impact analysis returns HIGH or CRITICAL risk before proceeding with edits.
- When exploring unfamiliar code, use `gitnexus_query({query: "concept"})` to find execution flows instead of grepping. It returns process-grouped results ranked by relevance.
- When you need full context on a specific symbol — callers, callees, which execution flows it participates in — use `gitnexus_context({name: "symbolName"})`.
### When Debugging

- `gitnexus_query({query: "<error or symptom>"})` — find execution flows related to the issue
- `gitnexus_context({name: "<suspect function>"})` — see all callers, callees, and process participation
- READ `gitnexus://repo/cameleer-saas/process/{processName}` — trace the full execution flow step by step
- For regressions: `gitnexus_detect_changes({scope: "compare", base_ref: "main"})` — see what your branch changed
### When Refactoring

- Renaming: MUST use `gitnexus_rename({symbol_name: "old", new_name: "new", dry_run: true})` first. Review the preview — graph edits are safe, text_search edits need manual review. Then run with `dry_run: false`.
- Extracting/splitting: MUST run `gitnexus_context({name: "target"})` to see all incoming/outgoing refs, then `gitnexus_impact({target: "target", direction: "upstream"})` to find all external callers before moving code.
- After any refactor: run `gitnexus_detect_changes({scope: "all"})` to verify only expected files changed.
### Never Do

- NEVER edit a function, class, or method without first running `gitnexus_impact` on it.
- NEVER ignore HIGH or CRITICAL risk warnings from impact analysis.
- NEVER rename symbols with find-and-replace — use `gitnexus_rename`, which understands the call graph.
- NEVER commit changes without running `gitnexus_detect_changes()` to check the affected scope.
### Tools Quick Reference

| Tool | When to use | Command |
|---|---|---|
| `query` | Find code by concept | `gitnexus_query({query: "auth validation"})` |
| `context` | 360-degree view of one symbol | `gitnexus_context({name: "validateUser"})` |
| `impact` | Blast radius before editing | `gitnexus_impact({target: "X", direction: "upstream"})` |
| `detect_changes` | Pre-commit scope check | `gitnexus_detect_changes({scope: "staged"})` |
| `rename` | Safe multi-file rename | `gitnexus_rename({symbol_name: "old", new_name: "new", dry_run: true})` |
| `cypher` | Custom graph queries | `gitnexus_cypher({query: "MATCH ..."})` |
### Impact Risk Levels
| Depth | Meaning | Action |
|---|---|---|
| d=1 | WILL BREAK — direct callers/importers | MUST update these |
| d=2 | LIKELY AFFECTED — indirect deps | Should test |
| d=3 | MAY NEED TESTING — transitive | Test if critical path |
### Resources

| Resource | Use for |
|---|---|
| `gitnexus://repo/cameleer-saas/context` | Codebase overview, check index freshness |
| `gitnexus://repo/cameleer-saas/clusters` | All functional areas |
| `gitnexus://repo/cameleer-saas/processes` | All execution flows |
| `gitnexus://repo/cameleer-saas/process/{name}` | Step-by-step execution trace |
### Self-Check Before Finishing
Before completing any code modification task, verify:
- `gitnexus_impact` was run for all modified symbols
- No HIGH/CRITICAL risk warnings were ignored
- `gitnexus_detect_changes()` confirms changes match the expected scope
- All d=1 (WILL BREAK) dependents were updated
### Keeping the Index Fresh
After committing code changes, the GitNexus index becomes stale. Re-run `npx gitnexus analyze` to update it. If the index previously included embeddings, preserve them by adding `--embeddings`: `npx gitnexus analyze --embeddings`.

To check whether embeddings exist, inspect `.gitnexus/meta.json` — the `stats.embeddings` field shows the count (0 means no embeddings). Running analyze without `--embeddings` will delete any previously generated embeddings.
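The decision described above, as a tiny helper (assumes only the `meta.json` shape stated: a `stats.embeddings` count):

```typescript
type GitNexusMeta = { stats?: { embeddings?: number } };

// True when a previous analyze generated embeddings, i.e. when
// `npx gitnexus analyze --embeddings` should be used to preserve them.
function needsEmbeddingsFlag(meta: GitNexusMeta): boolean {
  return (meta.stats?.embeddings ?? 0) > 0;
}
```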
Claude Code users: a PostToolUse hook handles this automatically after `git commit` and `git merge`.
### CLI
| Task | Read this skill file |
|---|---|
| Understand architecture / "How does X work?" | .claude/skills/gitnexus/gitnexus-exploring/SKILL.md |
| Blast radius / "What breaks if I change X?" | .claude/skills/gitnexus/gitnexus-impact-analysis/SKILL.md |
| Trace bugs / "Why is X failing?" | .claude/skills/gitnexus/gitnexus-debugging/SKILL.md |
| Rename / extract / split / refactor | .claude/skills/gitnexus/gitnexus-refactoring/SKILL.md |
| Tools, resources, schema reference | .claude/skills/gitnexus/gitnexus-guide/SKILL.md |
| Index, status, clean, wiki CLI commands | .claude/skills/gitnexus/gitnexus-cli/SKILL.md |