
Infrastructure Overview Implementation Plan

For agentic workers: REQUIRED: Use superpowers:subagent-driven-development (if subagents available) or superpowers:executing-plans to implement this plan. Steps use checkbox (- [ ]) syntax for tracking.

Goal: Add Database, OpenSearch, and Audit Log admin pages with monitoring, basic management actions, configurable thresholds, and SOC2-compliant audit logging.

Architecture: Backend-proxied monitoring — new Spring Boot controllers query PostgreSQL and OpenSearch, aggregate data, and return structured JSON. Central AuditService records all admin actions to a database table. Frontend consumes these via React Query hooks with auto-refresh for lightweight endpoints.

Tech Stack: Java 17 / Spring Boot 3.4.3 / JdbcTemplate / OpenSearch Java Client / HikariCP MXBean / React 18 / TypeScript / TanStack React Query / React Router / openapi-fetch

Spec: docs/superpowers/specs/2026-03-17-infrastructure-overview-design.md


File Map

New Files — Backend (Core Module)

| File | Responsibility |
|---|---|
| core/.../admin/AuditRecord.java | Audit log record (immutable) |
| core/.../admin/AuditCategory.java | Enum: INFRA, AUTH, USER_MGMT, CONFIG |
| core/.../admin/AuditResult.java | Enum: SUCCESS, FAILURE |
| core/.../admin/AuditRepository.java | Interface: insert + paginated query |
| core/.../admin/AuditService.java | Central audit logging service |
| core/.../admin/ThresholdConfig.java | Threshold config record |
| core/.../admin/ThresholdRepository.java | Interface: find + save |
| core/.../indexing/SearchIndexerStats.java | Interface: queue depth, failed count, rate, etc. |

core/... = cameleer-server-core/src/main/java/com/cameleer/server/core

New Files — Backend (App Module)

| File | Responsibility |
|---|---|
| app/.../storage/PostgresAuditRepository.java | JdbcTemplate impl of AuditRepository |
| app/.../storage/PostgresThresholdRepository.java | JdbcTemplate impl of ThresholdRepository |
| app/.../controller/DatabaseAdminController.java | Database monitoring + kill query |
| app/.../controller/OpenSearchAdminController.java | OpenSearch monitoring + delete index |
| app/.../controller/ThresholdAdminController.java | Threshold CRUD |
| app/.../controller/AuditLogController.java | Audit log viewer endpoint |
| app/.../dto/DatabaseStatusResponse.java | DTO: version, host, schema, connected |
| app/.../dto/ConnectionPoolResponse.java | DTO: active, idle, pending, maxWait, maxSize |
| app/.../dto/TableSizeResponse.java | DTO: table name, rows, dataSize, indexSize |
| app/.../dto/ActiveQueryResponse.java | DTO: pid, duration, state, query |
| app/.../dto/OpenSearchStatusResponse.java | DTO: version, host, health, nodes |
| app/.../dto/PipelineStatsResponse.java | DTO: queueDepth, failed, debounce, rate, lastIndexed |
| app/.../dto/IndexInfoResponse.java | DTO: name, docs, size, health, shards |
| app/.../dto/IndicesPageResponse.java | DTO: paginated indices + summary |
| app/.../dto/PerformanceResponse.java | DTO: cache rates, latencies, JVM heap |
| app/.../dto/AuditLogPageResponse.java | DTO: paginated audit entries |
| app/.../dto/ThresholdConfigRequest.java | DTO: threshold save payload |
| resources/db/migration/V9__admin_thresholds.sql | Flyway: admin_thresholds table |
| resources/db/migration/V10__audit_log.sql | Flyway: audit_log table |

app/... = cameleer-server-app/src/main/java/com/cameleer/server/app
resources/... = cameleer-server-app/src/main/resources

New Files — Backend (Tests)

| File | Responsibility |
|---|---|
| test/.../controller/DatabaseAdminControllerIT.java | Integration test: DB endpoints |
| test/.../controller/OpenSearchAdminControllerIT.java | Integration test: OS endpoints |
| test/.../controller/AuditLogControllerIT.java | Integration test: audit endpoints |
| test/.../controller/ThresholdAdminControllerIT.java | Integration test: threshold endpoints |
| test/.../admin/AuditServiceTest.java | Unit test: audit service logic |

test/... = cameleer-server-app/src/test/java/com/cameleer/server/app

New Files — Frontend

| File | Responsibility |
|---|---|
| ui/src/api/queries/admin/database.ts | React Query hooks: database endpoints |
| ui/src/api/queries/admin/opensearch.ts | React Query hooks: OpenSearch endpoints |
| ui/src/api/queries/admin/thresholds.ts | React Query hooks: threshold endpoints |
| ui/src/api/queries/admin/audit.ts | React Query hooks: audit log endpoint |
| ui/src/components/admin/StatusBadge.tsx | Green/yellow/red indicator |
| ui/src/components/admin/StatusBadge.module.css | Styles for StatusBadge |
| ui/src/components/admin/RefreshableCard.tsx | Card with refresh button |
| ui/src/components/admin/RefreshableCard.module.css | Styles for RefreshableCard |
| ui/src/components/admin/ConfirmDeleteDialog.tsx | Confirmation dialog requiring name input |
| ui/src/components/admin/ConfirmDeleteDialog.module.css | Styles for ConfirmDeleteDialog |
| ui/src/pages/admin/DatabaseAdminPage.tsx | Database monitoring page |
| ui/src/pages/admin/DatabaseAdminPage.module.css | Styles |
| ui/src/pages/admin/OpenSearchAdminPage.tsx | OpenSearch monitoring page |
| ui/src/pages/admin/OpenSearchAdminPage.module.css | Styles |
| ui/src/pages/admin/AuditLogPage.tsx | Audit log viewer page |
| ui/src/pages/admin/AuditLogPage.module.css | Styles |

Modified Files

| File | Change |
|---|---|
| core/.../indexing/SearchIndexer.java | Add stats counters, implement SearchIndexerStats |
| app/.../security/SecurityConfig.java | Add @EnableMethodSecurity |
| app/.../controller/OidcConfigAdminController.java | Add @PreAuthorize, inject AuditService |
| app/.../controller/UserAdminController.java | Add @PreAuthorize, inject AuditService |
| app/.../security/UiAuthController.java | Inject AuditService, log login/logout events |
| app/.../security/OidcAuthController.java | Inject AuditService, log OIDC login events |
| app/.../config/StorageBeanConfig.java | Wire new beans (AuditService, repositories) |
| ui/src/router.tsx | Add admin sub-routes, redirect /admin |
| ui/src/components/layout/AppSidebar.tsx | Collapsible admin sub-menu |

Task Breakdown

Task 1: Flyway Migrations

Files:

  • Create: cameleer-server-app/src/main/resources/db/migration/V9__admin_thresholds.sql

  • Create: cameleer-server-app/src/main/resources/db/migration/V10__audit_log.sql

  • Step 1: Create V9 migration

CREATE TABLE admin_thresholds (
    id          INTEGER PRIMARY KEY DEFAULT 1,
    config      JSONB NOT NULL DEFAULT '{}',
    updated_at  TIMESTAMPTZ NOT NULL DEFAULT now(),
    updated_by  TEXT NOT NULL,
    CONSTRAINT  single_row CHECK (id = 1)
);
  • Step 2: Create V10 migration
CREATE TABLE audit_log (
    id          BIGSERIAL PRIMARY KEY,
    timestamp   TIMESTAMPTZ NOT NULL DEFAULT now(),
    username    TEXT NOT NULL,
    action      TEXT NOT NULL,
    category    TEXT NOT NULL,
    target      TEXT,
    detail      JSONB,
    result      TEXT NOT NULL,
    ip_address  TEXT,
    user_agent  TEXT
);

CREATE INDEX idx_audit_log_timestamp ON audit_log (timestamp DESC);
CREATE INDEX idx_audit_log_username ON audit_log (username);
CREATE INDEX idx_audit_log_category ON audit_log (category);
CREATE INDEX idx_audit_log_action ON audit_log (action);
CREATE INDEX idx_audit_log_target ON audit_log (target);
  • Step 3: Verify migrations compile

Run: cd cameleer-server && mvn clean compile -pl cameleer-server-app
Expected: BUILD SUCCESS

  • Step 4: Commit
git add cameleer-server-app/src/main/resources/db/migration/V9__admin_thresholds.sql \
       cameleer-server-app/src/main/resources/db/migration/V10__audit_log.sql
git commit -m "feat: add Flyway V9 (thresholds) and V10 (audit_log) migrations"

Task 2: Core Module — Audit Domain Model + Repository Interface

Files:

  • Create: cameleer-server-core/src/main/java/com/cameleer/server/core/admin/AuditCategory.java

  • Create: cameleer-server-core/src/main/java/com/cameleer/server/core/admin/AuditResult.java

  • Create: cameleer-server-core/src/main/java/com/cameleer/server/core/admin/AuditRecord.java

  • Create: cameleer-server-core/src/main/java/com/cameleer/server/core/admin/AuditRepository.java

  • Create: cameleer-server-core/src/main/java/com/cameleer/server/core/admin/AuditService.java

  • Step 1: Create AuditCategory enum

package com.cameleer.server.core.admin;

public enum AuditCategory {
    INFRA, AUTH, USER_MGMT, CONFIG
}
  • Step 2: Create AuditResult enum
package com.cameleer.server.core.admin;

public enum AuditResult {
    SUCCESS, FAILURE
}
  • Step 3: Create AuditRecord
package com.cameleer.server.core.admin;

import java.time.Instant;
import java.util.Map;

public record AuditRecord(
        long id,
        Instant timestamp,
        String username,
        String action,
        AuditCategory category,
        String target,
        Map<String, Object> detail,
        AuditResult result,
        String ipAddress,
        String userAgent
) {
    /** Factory for creating new records (id and timestamp assigned by DB) */
    public static AuditRecord create(String username, String action, AuditCategory category,
                                      String target, Map<String, Object> detail, AuditResult result,
                                      String ipAddress, String userAgent) {
        return new AuditRecord(0, null, username, action, category, target, detail, result, ipAddress, userAgent);
    }
}
  • Step 4: Create AuditRepository interface
package com.cameleer.server.core.admin;

import java.time.Instant;
import java.util.List;

public interface AuditRepository {

    void insert(AuditRecord record);

    record AuditQuery(
            String username,
            AuditCategory category,
            String search,
            Instant from,
            Instant to,
            String sort,
            String order,
            int page,
            int size
    ) {}

    record AuditPage(List<AuditRecord> items, long totalCount) {}

    AuditPage find(AuditQuery query);
}
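The controller added later wraps an AuditPage into a paged response (see AuditLogPageResponse in Task 7), and the page-count arithmetic is easy to get wrong. A minimal runnable sketch of that calculation; the helper class and method names are illustrative, not part of the plan:

```java
// Hypothetical helper illustrating the totalPages arithmetic a paged
// response such as AuditLogPageResponse needs; not named in the plan.
class PagingSketch {
    static int totalPages(long totalCount, int pageSize) {
        // Ceiling division without floating point.
        return (int) ((totalCount + pageSize - 1) / pageSize);
    }

    public static void main(String[] args) {
        System.out.println(totalPages(101, 50)); // 3 pages: 50 + 50 + 1
        System.out.println(totalPages(0, 50));   // 0 pages for an empty result
    }
}
```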
  • Step 5: Create AuditService

The service lives in core so any controller can reference it. It depends on the AuditRepository interface and records each event twice: once to the database via the repository and once to the application log via SLF4J. The username comes from the Spring Security context; IP address and user agent come from the servlet request.

package com.cameleer.server.core.admin;

import jakarta.servlet.http.HttpServletRequest;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;

import java.util.Map;

public class AuditService {
    private static final Logger log = LoggerFactory.getLogger(AuditService.class);
    private final AuditRepository repository;

    public AuditService(AuditRepository repository) {
        this.repository = repository;
    }

    /** Log an action using the current SecurityContext for username */
    public void log(String action, AuditCategory category, String target,
                    Map<String, Object> detail, AuditResult result,
                    HttpServletRequest request) {
        String username = extractUsername();
        log(username, action, category, target, detail, result, request);
    }

    /** Log an action with explicit username (for pre-auth contexts like login) */
    public void log(String username, String action, AuditCategory category, String target,
                    Map<String, Object> detail, AuditResult result,
                    HttpServletRequest request) {
        String ip = request != null ? request.getRemoteAddr() : null;
        String userAgent = request != null ? request.getHeader("User-Agent") : null;
        AuditRecord record = AuditRecord.create(username, action, category, target, detail, result, ip, userAgent);

        repository.insert(record);

        log.info("AUDIT: user={} action={} category={} target={} result={}",
                username, action, category, target, result);
    }

    private String extractUsername() {
        Authentication auth = SecurityContextHolder.getContext().getAuthentication();
        if (auth != null && auth.getName() != null) {
            String name = auth.getName();
            return name.startsWith("user:") ? name.substring(5) : name;
        }
        return "unknown";
    }
}

Note: This class uses jakarta.servlet and org.springframework.security, so the core POM needs them as provided-scope dependencies. Check cameleer-server-core/pom.xml and add them if missing:

<dependency>
    <groupId>jakarta.servlet</groupId>
    <artifactId>jakarta.servlet-api</artifactId>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-core</artifactId>
    <scope>provided</scope>
</dependency>
  • Step 6: Write AuditService unit test
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.Map;

import com.cameleer.server.core.admin.AuditCategory;
import com.cameleer.server.core.admin.AuditRecord;
import com.cameleer.server.core.admin.AuditRepository;
import com.cameleer.server.core.admin.AuditResult;
import com.cameleer.server.core.admin.AuditService;
import jakarta.servlet.http.HttpServletRequest;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.mockito.ArgumentCaptor;

class AuditServiceTest {
    private AuditRepository mockRepository;
    private AuditService auditService;

    @BeforeEach
    void setUp() {
        mockRepository = mock(AuditRepository.class);
        auditService = new AuditService(mockRepository);
    }

    @Test
    void log_withExplicitUsername_insertsRecordWithCorrectFields() {
        var request = mock(HttpServletRequest.class);
        when(request.getRemoteAddr()).thenReturn("192.168.1.1");
        when(request.getHeader("User-Agent")).thenReturn("Mozilla/5.0");

        auditService.log("admin", "kill_query", AuditCategory.INFRA, "PID 42",
                Map.of("query", "SELECT 1"), AuditResult.SUCCESS, request);

        var captor = ArgumentCaptor.forClass(AuditRecord.class);
        verify(mockRepository).insert(captor.capture());
        var record = captor.getValue();
        assertEquals("admin", record.username());
        assertEquals("kill_query", record.action());
        assertEquals(AuditCategory.INFRA, record.category());
        assertEquals("PID 42", record.target());
        assertEquals("192.168.1.1", record.ipAddress());
        assertEquals("Mozilla/5.0", record.userAgent());
    }

    @Test
    void log_withNullRequest_handlesGracefully() {
        auditService.log("admin", "test", AuditCategory.CONFIG, null, null, AuditResult.SUCCESS, null);
        verify(mockRepository).insert(any(AuditRecord.class));
    }
}
  • Step 7: Verify core module compiles and test passes

Run: mvn clean compile -pl cameleer-server-core
Run: mvn test -pl cameleer-server-app -Dtest=AuditServiceTest
Expected: BUILD SUCCESS, tests PASS

  • Step 8: Commit
git add cameleer-server-core/ cameleer-server-app/src/test/
git commit -m "feat: add audit domain model, repository interface, AuditService, and unit test"

Task 3: Core Module — Threshold Model + Repository Interface

Files:

  • Create: cameleer-server-core/src/main/java/com/cameleer/server/core/admin/ThresholdConfig.java

  • Create: cameleer-server-core/src/main/java/com/cameleer/server/core/admin/ThresholdRepository.java

  • Step 1: Create ThresholdConfig record

package com.cameleer.server.core.admin;

public record ThresholdConfig(
        DatabaseThresholds database,
        OpenSearchThresholds opensearch
) {
    public record DatabaseThresholds(
            int connectionPoolWarning,
            int connectionPoolCritical,
            double queryDurationWarning,
            double queryDurationCritical
    ) {
        public static DatabaseThresholds defaults() {
            return new DatabaseThresholds(80, 95, 1.0, 10.0);
        }
    }

    public record OpenSearchThresholds(
            String clusterHealthWarning,
            String clusterHealthCritical,
            int queueDepthWarning,
            int queueDepthCritical,
            int jvmHeapWarning,
            int jvmHeapCritical,
            int failedDocsWarning,
            int failedDocsCritical
    ) {
        public static OpenSearchThresholds defaults() {
            return new OpenSearchThresholds("YELLOW", "RED", 100, 500, 75, 90, 1, 10);
        }
    }

    public static ThresholdConfig defaults() {
        return new ThresholdConfig(DatabaseThresholds.defaults(), OpenSearchThresholds.defaults());
    }
}
  • Step 2: Create ThresholdRepository interface
package com.cameleer.server.core.admin;

import java.util.Optional;

public interface ThresholdRepository {
    Optional<ThresholdConfig> find();
    void save(ThresholdConfig config, String updatedBy);
}
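Since find() returns Optional, callers fall back to defaults when no row has been saved yet. A runnable sketch of that pattern; the stand-in types compress the real core records to a single field so the snippet is self-contained:

```java
import java.util.Optional;

public class ThresholdLookupSketch {
    // Stand-ins that compress the real core types to one field,
    // purely so the fallback pattern runs here.
    record ThresholdConfig(int connectionPoolWarning) {
        static ThresholdConfig defaults() { return new ThresholdConfig(80); }
    }

    interface ThresholdRepository {
        Optional<ThresholdConfig> find();
    }

    public static void main(String[] args) {
        ThresholdRepository repo = Optional::empty; // no admin_thresholds row saved yet
        ThresholdConfig cfg = repo.find().orElseGet(ThresholdConfig::defaults);
        System.out.println(cfg.connectionPoolWarning()); // 80, from defaults()
    }
}
```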
  • Step 3: Compile and commit

Run: mvn clean compile -pl cameleer-server-core

git add cameleer-server-core/
git commit -m "feat: add ThresholdConfig model and ThresholdRepository interface"

Task 4: Core Module — SearchIndexerStats Interface + Instrumentation

Files:

  • Create: cameleer-server-core/src/main/java/com/cameleer/server/core/indexing/SearchIndexerStats.java

  • Modify: cameleer-server-core/src/main/java/com/cameleer/server/core/indexing/SearchIndexer.java

  • Step 1: Create SearchIndexerStats interface

package com.cameleer.server.core.indexing;

import java.time.Instant;

public interface SearchIndexerStats {
    int getQueueDepth();
    int getMaxQueueSize();
    long getFailedCount();
    long getIndexedCount();
    Instant getLastIndexedAt();
    long getDebounceMs();
    /** Approximate indexing rate in docs/sec over last measurement window */
    double getIndexingRate();
}
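To show how a consumer reads this interface, here is a hedged sketch mapping a stats snapshot into the PipelineStatsResponse DTO planned for Task 7. Both types are simplified copies here, and the fixed-value implementation is purely illustrative (the real implementor is SearchIndexer):

```java
public class PipelineStatsSketch {
    // Simplified copies of the planned interface and DTO, so the mapping runs standalone.
    interface SearchIndexerStats {
        int getQueueDepth();
        long getFailedCount();
        double getIndexingRate();
    }

    record PipelineStatsResponse(int queueDepth, long failedCount, double indexingRate) {}

    static PipelineStatsResponse toResponse(SearchIndexerStats stats) {
        return new PipelineStatsResponse(
                stats.getQueueDepth(), stats.getFailedCount(), stats.getIndexingRate());
    }

    public static void main(String[] args) {
        // Fixed-value stand-in; in the app this would be the instrumented SearchIndexer.
        SearchIndexerStats stats = new SearchIndexerStats() {
            public int getQueueDepth() { return 3; }
            public long getFailedCount() { return 0; }
            public double getIndexingRate() { return 12.5; }
        };
        System.out.println(toResponse(stats));
    }
}
```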
  • Step 2: Add stats counters to SearchIndexer

Modify SearchIndexer.java:

  • Add fields: AtomicLong failedCount, AtomicLong indexedCount, volatile Instant lastIndexedAt
  • Add rate tracking: AtomicLong rateWindowStartMs, AtomicLong rateWindowCount, volatile double lastRate
  • Implement SearchIndexerStats
  • Increment counters in indexExecution(): indexedCount.incrementAndGet() on success, failedCount.incrementAndGet() on catch
  • Set lastIndexedAt = Instant.now() after successful indexing
  • Return pending.size() for queue depth, queueCapacity for max size

Key changes to indexExecution() method:

private void indexExecution(String executionId) {
    pending.remove(executionId);
    try {
        ExecutionRecord exec = executionStore.findById(executionId).orElse(null);
        if (exec == null) return;
        // ... existing indexing logic ...
        indexedCount.incrementAndGet();
        lastIndexedAt = Instant.now();
        updateRate();
    } catch (Exception e) {
        failedCount.incrementAndGet();
        log.error("Failed to index execution {}", executionId, e);
    }
}

Rate calculation approach: track count delta between 15-second measurement windows.

private void updateRate() {
    long now = System.currentTimeMillis();
    long windowStart = rateWindowStartMs.get();
    long count = rateWindowCount.incrementAndGet();
    long elapsed = now - windowStart;
    if (elapsed >= 15_000) { // 15-second window
        lastRate = count / (elapsed / 1000.0);
        rateWindowStartMs.set(now);
        rateWindowCount.set(0);
    }
}
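Since the windowed rate is the fiddliest part, here is a self-contained, testable variant of the same idea. The class name and explicit-timestamp parameters are illustrative (the real code would use System.currentTimeMillis()); passing timestamps in lets the logic be exercised without waiting 15 seconds:

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative, testable variant of the 15-second rate window; timestamps
// are injected so the window rollover can be driven deterministically.
class RateTrackerSketch {
    private final AtomicLong windowStartMs;
    private final AtomicLong windowCount = new AtomicLong();
    private final long windowMs;
    private volatile double lastRate;

    RateTrackerSketch(long nowMs, long windowMs) {
        this.windowStartMs = new AtomicLong(nowMs);
        this.windowMs = windowMs;
    }

    void recordIndexed(long nowMs) {
        long count = windowCount.incrementAndGet();
        long elapsed = nowMs - windowStartMs.get();
        if (elapsed >= windowMs) {
            lastRate = count / (elapsed / 1000.0); // docs per second over the window
            windowStartMs.set(nowMs);
            windowCount.set(0);
        }
    }

    double rate() { return lastRate; }

    public static void main(String[] args) {
        var tracker = new RateTrackerSketch(0, 15_000);
        for (int i = 0; i < 29; i++) tracker.recordIndexed(5_000); // inside the window
        tracker.recordIndexed(15_000); // 30th event closes the 15 s window
        System.out.println(tracker.rate()); // 30 docs / 15 s = 2.0
    }
}
```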
  • Step 3: Compile and commit

Run: mvn clean compile -pl cameleer-server-core

git add cameleer-server-core/
git commit -m "feat: add SearchIndexerStats interface and instrument SearchIndexer"

Task 5: App Module — Postgres Repository Implementations

Files:

  • Create: cameleer-server-app/src/main/java/com/cameleer/server/app/storage/PostgresAuditRepository.java

  • Create: cameleer-server-app/src/main/java/com/cameleer/server/app/storage/PostgresThresholdRepository.java

  • Step 1: Create PostgresAuditRepository

Follow the same @Repository + JdbcTemplate pattern used by PostgresUserRepository and PostgresOidcConfigRepository.

@Repository
public class PostgresAuditRepository implements AuditRepository {
    private final JdbcTemplate jdbc;
    private final ObjectMapper objectMapper;

    public PostgresAuditRepository(JdbcTemplate jdbc, ObjectMapper objectMapper) {
        this.jdbc = jdbc;
        this.objectMapper = objectMapper;
    }

    @Override
    public void insert(AuditRecord record) {
        jdbc.update("""
                INSERT INTO audit_log (username, action, category, target, detail, result, ip_address, user_agent)
                VALUES (?, ?, ?, ?, ?::jsonb, ?, ?, ?)
                """,
                record.username(), record.action(), record.category().name(),
                record.target(), toJson(record.detail()), record.result().name(),
                record.ipAddress(), record.userAgent());
    }

    @Override
    public AuditPage find(AuditQuery query) {
        int effectiveSize = Math.min(query.size(), 100);
        var params = new ArrayList<>();
        var conditions = new ArrayList<String>();

        // Filter by timestamp range when provided (null-safe: an open-ended range is allowed)
        if (query.from() != null) {
            conditions.add("timestamp >= ?");
            params.add(Timestamp.from(query.from()));
        }
        if (query.to() != null) {
            conditions.add("timestamp <= ?");
            params.add(Timestamp.from(query.to()));
        }

        if (query.username() != null && !query.username().isBlank()) {
            conditions.add("username = ?");
            params.add(query.username());
        }
        if (query.category() != null) {
            conditions.add("category = ?");
            params.add(query.category().name());
        }
        if (query.search() != null && !query.search().isBlank()) {
            conditions.add("(action ILIKE ? OR target ILIKE ?)");
            String pattern = "%" + query.search() + "%";
            params.add(pattern);
            params.add(pattern);
        }

        String where = conditions.isEmpty() ? "" : "WHERE " + String.join(" AND ", conditions);

        // Validate sort column against allowlist (null-safe: missing/unknown sort falls back to timestamp)
        String sortCol = switch (query.sort() == null ? "timestamp" : query.sort()) {
            case "username" -> "username";
            case "action" -> "action";
            case "category" -> "category";
            default -> "timestamp";
        };
        String orderDir = "asc".equalsIgnoreCase(query.order()) ? "ASC" : "DESC";

        // Count query
        long total = jdbc.queryForObject(
                "SELECT COUNT(*) FROM audit_log " + where, Long.class, params.toArray());

        // Data query with pagination
        String sql = "SELECT * FROM audit_log " + where
                + " ORDER BY " + sortCol + " " + orderDir
                + " LIMIT ? OFFSET ?";
        params.add(effectiveSize);
        params.add(query.page() * effectiveSize);

        var items = jdbc.query(sql, this::mapRow, params.toArray());
        return new AuditPage(items, total);
    }

    private String toJson(Map<String, Object> map) {
        if (map == null || map.isEmpty()) return null;
        try { return objectMapper.writeValueAsString(map); }
        catch (Exception e) { return "{}"; }
    }

    @SuppressWarnings("unchecked")
    private AuditRecord mapRow(ResultSet rs, int rowNum) throws SQLException {
        Map<String, Object> detail = null;
        String detailJson = rs.getString("detail");
        if (detailJson != null) {
            try { detail = objectMapper.readValue(detailJson, Map.class); }
            catch (Exception e) { detail = Map.of("_raw", detailJson); }
        }
        Timestamp ts = rs.getTimestamp("timestamp");
        return new AuditRecord(
                rs.getLong("id"),
                ts != null ? ts.toInstant() : null,
                rs.getString("username"),
                rs.getString("action"),
                AuditCategory.valueOf(rs.getString("category")),
                rs.getString("target"),
                detail,
                AuditResult.valueOf(rs.getString("result")),
                rs.getString("ip_address"),
                rs.getString("user_agent"));
    }
}
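One subtlety in the search filter above: % and _ in user input act as wildcards inside ILIKE patterns. If literal matching is wanted, the input can be escaped before wrapping. This helper is hypothetical, not part of the plan, and assumes PostgreSQL's default backslash escape character for LIKE/ILIKE:

```java
// Hypothetical helper, not part of the plan: escapes LIKE/ILIKE wildcards so
// user-supplied search text matches literally (PostgreSQL's default escape is '\').
class LikePatternSketch {
    static String likePattern(String search) {
        String escaped = search
                .replace("\\", "\\\\")
                .replace("%", "\\%")
                .replace("_", "\\_");
        return "%" + escaped + "%";
    }

    public static void main(String[] args) {
        System.out.println(likePattern("50%_done")); // %50\%\_done%
    }
}
```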
  • Step 2: Create PostgresThresholdRepository
@Repository
public class PostgresThresholdRepository implements ThresholdRepository {
    private final JdbcTemplate jdbc;
    private final ObjectMapper objectMapper;

    public PostgresThresholdRepository(JdbcTemplate jdbc, ObjectMapper objectMapper) {
        this.jdbc = jdbc;
        this.objectMapper = objectMapper;
    }

    @Override
    public Optional<ThresholdConfig> find() {
        var results = jdbc.query(
                "SELECT config FROM admin_thresholds WHERE id = 1",
                (rs, rowNum) -> {
                    try {
                        return objectMapper.readValue(rs.getString("config"), ThresholdConfig.class);
                    } catch (Exception e) {
                        return ThresholdConfig.defaults();
                    }
                });
        return results.isEmpty() ? Optional.empty() : Optional.of(results.get(0));
    }

    @Override
    public void save(ThresholdConfig config, String updatedBy) {
        String json;
        try { json = objectMapper.writeValueAsString(config); }
        catch (Exception e) { throw new RuntimeException("Failed to serialize thresholds", e); }

        jdbc.update("""
                INSERT INTO admin_thresholds (id, config, updated_by, updated_at)
                VALUES (1, ?::jsonb, ?, now())
                ON CONFLICT (id) DO UPDATE SET config = ?::jsonb, updated_by = ?, updated_at = now()
                """, json, updatedBy, json, updatedBy);
    }
}
  • Step 3: Compile and commit

Run: mvn clean compile -pl cameleer-server-app

git add cameleer-server-app/src/main/java/com/cameleer/server/app/storage/
git commit -m "feat: add Postgres implementations for AuditRepository and ThresholdRepository"

Task 6: App Module — Bean Wiring + Security Retrofit

Files:

  • Modify: cameleer-server-app/src/main/java/com/cameleer/server/app/config/StorageBeanConfig.java

  • Modify: cameleer-server-app/src/main/java/com/cameleer/server/app/security/SecurityConfig.java

  • Modify: cameleer-server-app/src/main/java/com/cameleer/server/app/controller/OidcConfigAdminController.java

  • Modify: cameleer-server-app/src/main/java/com/cameleer/server/app/controller/UserAdminController.java

  • Step 1: Wire AuditService bean in StorageBeanConfig

Add to StorageBeanConfig.java:

@Bean
public AuditService auditService(AuditRepository auditRepository) {
    return new AuditService(auditRepository);
}
  • Step 2: Add @EnableMethodSecurity to SecurityConfig
@Configuration
@EnableWebSecurity
@EnableMethodSecurity  // <-- add this
public class SecurityConfig {
  • Step 3: Add @PreAuthorize to existing admin controllers

Add @PreAuthorize("hasRole('ADMIN')") to both OidcConfigAdminController and UserAdminController class-level annotations.

  • Step 4: Inject AuditService into OidcConfigAdminController

Add AuditService as constructor parameter. Add audit logging calls:

  • save() → auditService.log("update_oidc", AuditCategory.CONFIG, "oidc", Map.of(...), AuditResult.SUCCESS, request)
  • delete() → auditService.log("delete_oidc", AuditCategory.CONFIG, "oidc", null, AuditResult.SUCCESS, request)
  • testConnection() → auditService.log("test_oidc", AuditCategory.CONFIG, "oidc", Map.of("result", ...), result, request)

Add HttpServletRequest request parameter to each endpoint method.
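The calls above cover the success paths; since AuditResult includes FAILURE, a common SOC2 pattern is to also record a failure entry before rethrowing. A runnable sketch of that shape, with minimal stand-ins in place of Spring and the real AuditService signature (method and stand-in names are illustrative):

```java
// Minimal stand-ins so the success/failure audit pattern runs outside Spring.
public class AuditCallSketch {
    enum AuditResult { SUCCESS, FAILURE }

    interface Audit {
        void log(String action, AuditResult result);
    }

    // Shape of an admin endpoint body: audit SUCCESS on the happy path,
    // audit FAILURE and rethrow if the operation fails.
    static String deleteOidcConfig(Audit audit, boolean failDelete) {
        try {
            if (failDelete) throw new IllegalStateException("delete failed");
            audit.log("delete_oidc", AuditResult.SUCCESS);
            return "deleted";
        } catch (RuntimeException e) {
            audit.log("delete_oidc", AuditResult.FAILURE);
            throw e;
        }
    }

    public static void main(String[] args) {
        Audit printer = (action, result) -> System.out.println(action + "=" + result);
        deleteOidcConfig(printer, false); // prints delete_oidc=SUCCESS
    }
}
```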

  • Step 5: Inject AuditService into UserAdminController

Same pattern. Log:

  • updateRoles() → auditService.log("update_roles", AuditCategory.USER_MGMT, userId, Map.of("roles", roles), AuditResult.SUCCESS, request)

  • deleteUser() → auditService.log("delete_user", AuditCategory.USER_MGMT, userId, null, AuditResult.SUCCESS, request)

  • Step 6: Inject AuditService into UiAuthController

The file is UiAuthController.java (not AuthController). Log login success/failure and logout. Use the explicit-username overload for login (SecurityContext not yet populated):

  • Login success: auditService.log(username, "login", AuditCategory.AUTH, null, null, AuditResult.SUCCESS, request)

  • Login failure: auditService.log(username, "login_failed", AuditCategory.AUTH, null, Map.of("reason", reason), AuditResult.FAILURE, request)

  • Step 7: Inject AuditService into OidcAuthController

Log OIDC login:

  • OIDC callback success: auditService.log(username, "login_oidc", AuditCategory.AUTH, null, Map.of("provider", issuerUri), AuditResult.SUCCESS, request)

  • Step 8: Compile and commit

Run: mvn clean compile

git add cameleer-server-app/
git commit -m "feat: wire AuditService, enable method security, retrofit audit logging into existing controllers"

Task 7: App Module — Response DTOs

Files:

  • Create all DTO files listed in the File Map under app/.../dto/

  • Step 1: Create database-related DTOs

All as Java records with @Schema annotations for OpenAPI:

// DatabaseStatusResponse
public record DatabaseStatusResponse(
    @Schema(description = "Whether the database is reachable") boolean connected,
    @Schema(description = "PostgreSQL version string") String version,
    @Schema(description = "JDBC host") String host,
    @Schema(description = "Current schema") String schema,
    @Schema(description = "Whether TimescaleDB extension is present") boolean timescaleDb
) {}

// ConnectionPoolResponse
public record ConnectionPoolResponse(
    int activeConnections, int idleConnections, int pendingThreads,
    long maxWaitMs, int maxPoolSize
) {}

// TableSizeResponse
public record TableSizeResponse(String tableName, long rowCount, String dataSize, String indexSize, long dataSizeBytes, long indexSizeBytes) {}

// ActiveQueryResponse
public record ActiveQueryResponse(int pid, double durationSeconds, String state, String query) {}
  • Step 2: Create OpenSearch-related DTOs
// OpenSearchStatusResponse
public record OpenSearchStatusResponse(
    boolean reachable, String clusterHealth, String version, int nodeCount, String host
) {}

// PipelineStatsResponse
public record PipelineStatsResponse(
    int queueDepth, int maxQueueSize, long failedCount, long indexedCount,
    long debounceMs, double indexingRate, Instant lastIndexedAt
) {}

// IndexInfoResponse
public record IndexInfoResponse(String name, long docCount, String size, long sizeBytes, String health, int primaryShards, int replicaShards) {}

// IndicesPageResponse
public record IndicesPageResponse(
    List<IndexInfoResponse> indices, long totalIndices, long totalDocs,
    String totalSize, int page, int pageSize, int totalPages
) {}

// PerformanceResponse
public record PerformanceResponse(
    double queryCacheHitRate, double requestCacheHitRate,
    double searchLatencyMs, double indexingLatencyMs,
    long jvmHeapUsedBytes, long jvmHeapMaxBytes
) {}
  • Step 3: Create audit + threshold DTOs
// AuditLogPageResponse
public record AuditLogPageResponse(
    List<AuditRecord> items, long totalCount, int page, int pageSize, int totalPages
) {}

// ThresholdConfigRequest
public record ThresholdConfigRequest(
    @Valid DatabaseThresholdsRequest database,
    @Valid OpenSearchThresholdsRequest opensearch
) {
    public record DatabaseThresholdsRequest(
        @Min(0) @Max(100) int connectionPoolWarning,
        @Min(0) @Max(100) int connectionPoolCritical,
        @Positive double queryDurationWarning,
        @Positive double queryDurationCritical
    ) {}

    public record OpenSearchThresholdsRequest(
        @NotBlank String clusterHealthWarning,
        @NotBlank String clusterHealthCritical,
        @Min(0) int queueDepthWarning,
        @Min(0) int queueDepthCritical,
        @Min(0) @Max(100) int jvmHeapWarning,
        @Min(0) @Max(100) int jvmHeapCritical,
        @Min(0) int failedDocsWarning,
        @Min(0) int failedDocsCritical
    ) {}

    /** Convert to domain model after validation */
    public ThresholdConfig toConfig() {
        return new ThresholdConfig(
            new ThresholdConfig.DatabaseThresholds(
                database.connectionPoolWarning, database.connectionPoolCritical,
                database.queryDurationWarning, database.queryDurationCritical),
            new ThresholdConfig.OpenSearchThresholds(
                opensearch.clusterHealthWarning, opensearch.clusterHealthCritical,
                opensearch.queueDepthWarning, opensearch.queueDepthCritical,
                opensearch.jvmHeapWarning, opensearch.jvmHeapCritical,
                opensearch.failedDocsWarning, opensearch.failedDocsCritical));
    }

    /** Custom validation: warning <= critical for all pairs */
    public List<String> validate() {
        var errors = new ArrayList<String>();
        if (database.connectionPoolWarning > database.connectionPoolCritical)
            errors.add("database.connectionPoolWarning must be <= connectionPoolCritical");
        if (database.queryDurationWarning > database.queryDurationCritical)
            errors.add("database.queryDurationWarning must be <= queryDurationCritical");
        if (opensearch.queueDepthWarning > opensearch.queueDepthCritical)
            errors.add("opensearch.queueDepthWarning must be <= queueDepthCritical");
        if (opensearch.jvmHeapWarning > opensearch.jvmHeapCritical)
            errors.add("opensearch.jvmHeapWarning must be <= jvmHeapCritical");
        if (opensearch.failedDocsWarning > opensearch.failedDocsCritical)
            errors.add("opensearch.failedDocsWarning must be <= failedDocsCritical");
        // Cluster health severity: GREEN < YELLOW < RED
        var severity = Map.of("GREEN", 0, "YELLOW", 1, "RED", 2);
        int warnSev = severity.getOrDefault(opensearch.clusterHealthWarning.toUpperCase(), -1);
        int critSev = severity.getOrDefault(opensearch.clusterHealthCritical.toUpperCase(), -1);
        if (warnSev < 0) errors.add("opensearch.clusterHealthWarning must be GREEN, YELLOW, or RED");
        if (critSev < 0) errors.add("opensearch.clusterHealthCritical must be GREEN, YELLOW, or RED");
        if (warnSev >= 0 && critSev >= 0 && warnSev > critSev)
            errors.add("opensearch.clusterHealthWarning must not be more severe than clusterHealthCritical");
        return errors;
    }
}
  • Step 4: Compile and commit

Run: mvn clean compile -pl cameleer-server-app

git add cameleer-server-app/src/main/java/com/cameleer/server/app/dto/
git commit -m "feat: add response/request DTOs for admin infrastructure endpoints"

Task 8: App Module — DatabaseAdminController

Files:

  • Create: cameleer-server-app/src/main/java/com/cameleer/server/app/controller/DatabaseAdminController.java

  • Create: cameleer-server-app/src/test/java/com/cameleer/server/app/controller/DatabaseAdminControllerIT.java

  • Step 1: Write integration test

Extend AbstractPostgresIT. Test all 5 database endpoints:

  • GET /api/v1/admin/database/status → 200, contains version, connected=true
  • GET /api/v1/admin/database/pool → 200, contains activeConnections >= 0
  • GET /api/v1/admin/database/tables → 200, returns a list containing at least the "users" table
  • GET /api/v1/admin/database/queries → 200, returns list
  • POST /api/v1/admin/database/queries/99999/kill → 404 (non-existent PID)

Use TestRestTemplate with admin JWT for authentication. Create a helper that generates an admin JWT using JwtService.

  • Step 2: Run test to verify it fails

Run: mvn test -pl cameleer-server-app -Dtest=DatabaseAdminControllerIT
Expected: FAIL — controller class does not exist

  • Step 3: Implement DatabaseAdminController
@RestController
@RequestMapping("/api/v1/admin/database")
@PreAuthorize("hasRole('ADMIN')")
@Tag(name = "Database Admin", description = "Database monitoring and management (ADMIN only)")
public class DatabaseAdminController {

    private final JdbcTemplate jdbc;
    private final DataSource dataSource;
    private final AuditService auditService;

    // Constructor injection

    @GetMapping("/status")
    @Operation(summary = "Get database connection status and version")
    public ResponseEntity<DatabaseStatusResponse> getStatus() {
        try {
            String version = jdbc.queryForObject("SELECT version()", String.class);
            boolean timescaleDb = Boolean.TRUE.equals(
                jdbc.queryForObject("SELECT EXISTS(SELECT 1 FROM pg_extension WHERE extname = 'timescaledb')", Boolean.class));
            String schema = jdbc.queryForObject("SELECT current_schema()", String.class);
            // Extract host from DataSource URL
            String host = extractHost(dataSource);
            return ResponseEntity.ok(new DatabaseStatusResponse(true, version, host, schema, timescaleDb));
        } catch (Exception e) {
            return ResponseEntity.ok(new DatabaseStatusResponse(false, null, null, null, false));
        }
    }

    @GetMapping("/pool")
    @Operation(summary = "Get HikariCP connection pool stats")
    public ResponseEntity<ConnectionPoolResponse> getPool() {
        HikariDataSource hds = (HikariDataSource) dataSource;
        HikariPoolMXBean pool = hds.getHikariPoolMXBean();
        return ResponseEntity.ok(new ConnectionPoolResponse(
            pool.getActiveConnections(), pool.getIdleConnections(),
            pool.getThreadsAwaitingConnection(), hds.getConnectionTimeout(),
            hds.getMaximumPoolSize()));
    }

    @GetMapping("/tables")
    @Operation(summary = "Get table sizes and row counts")
    public ResponseEntity<List<TableSizeResponse>> getTables() {
        // Query pg_stat_user_tables + pg_total_relation_size + pg_indexes_size
        var tables = jdbc.query("""
            SELECT schemaname || '.' || relname AS table_name,
                   n_live_tup AS row_count,
                   pg_size_pretty(pg_total_relation_size(relid)) AS data_size,
                   pg_total_relation_size(relid) AS data_size_bytes,
                   pg_size_pretty(pg_indexes_size(relid)) AS index_size,
                   pg_indexes_size(relid) AS index_size_bytes
            FROM pg_stat_user_tables
            ORDER BY pg_total_relation_size(relid) DESC
            """, (rs, row) -> new TableSizeResponse(
                rs.getString("table_name"), rs.getLong("row_count"),
                rs.getString("data_size"), rs.getString("index_size"),
                rs.getLong("data_size_bytes"), rs.getLong("index_size_bytes")));
        return ResponseEntity.ok(tables);
    }

    @GetMapping("/queries")
    @Operation(summary = "Get active queries")
    public ResponseEntity<List<ActiveQueryResponse>> getQueries() {
        var queries = jdbc.query("""
            SELECT pid, EXTRACT(EPOCH FROM (now() - query_start)) AS duration_seconds,
                   state, query
            FROM pg_stat_activity
            WHERE state != 'idle' AND pid != pg_backend_pid()
            ORDER BY query_start ASC
            """, (rs, row) -> new ActiveQueryResponse(
                rs.getInt("pid"), rs.getDouble("duration_seconds"),
                rs.getString("state"), rs.getString("query")));
        return ResponseEntity.ok(queries);
    }

    @PostMapping("/queries/{pid}/kill")
    @Operation(summary = "Terminate a query by PID")
    public ResponseEntity<Void> killQuery(@PathVariable int pid, HttpServletRequest request) {
        // Check PID exists first
        var exists = jdbc.queryForObject(
            "SELECT EXISTS(SELECT 1 FROM pg_stat_activity WHERE pid = ? AND pid != pg_backend_pid())",
            Boolean.class, pid);
        if (!Boolean.TRUE.equals(exists)) {
            throw new ResponseStatusException(HttpStatus.NOT_FOUND, "No active query with PID " + pid);
        }
        jdbc.queryForObject("SELECT pg_terminate_backend(?)", Boolean.class, pid);
        auditService.log("kill_query", AuditCategory.INFRA, "PID " + pid, null, AuditResult.SUCCESS, request);
        return ResponseEntity.ok().build();
    }

    /** Best-effort host extraction from the JDBC URL (e.g. jdbc:postgresql://host:5432/db). */
    private String extractHost(DataSource ds) {
        try {
            String jdbcUrl = ds.unwrap(HikariDataSource.class).getJdbcUrl();
            return URI.create(jdbcUrl.substring("jdbc:".length())).getHost();
        } catch (Exception e) {
            return "unknown";
        }
    }
}
  • Step 4: Run integration test

Run: mvn test -pl cameleer-server-app -Dtest=DatabaseAdminControllerIT
Expected: PASS

  • Step 5: Commit
git add cameleer-server-app/
git commit -m "feat: add DatabaseAdminController with status, pool, tables, queries, and kill endpoints"

Task 9: App Module — OpenSearchAdminController

Files:

  • Create: cameleer-server-app/src/main/java/com/cameleer/server/app/controller/OpenSearchAdminController.java

  • Create: cameleer-server-app/src/test/java/com/cameleer/server/app/controller/OpenSearchAdminControllerIT.java

  • Step 1: Write integration test

Extend AbstractPostgresIT (which starts both PG and OpenSearch containers). Test:

  • GET /api/v1/admin/opensearch/status → 200, reachable=true, clusterHealth in [green, yellow]

  • GET /api/v1/admin/opensearch/pipeline → 200, contains queueDepth >= 0

  • GET /api/v1/admin/opensearch/indices → 200, returns paginated response

  • GET /api/v1/admin/opensearch/performance → 200, contains jvmHeapMaxBytes > 0

  • DELETE /api/v1/admin/opensearch/indices/nonexistent → 404

  • Step 2: Run test to verify it fails

Run: mvn test -pl cameleer-server-app -Dtest=OpenSearchAdminControllerIT
Expected: FAIL

  • Step 3: Implement OpenSearchAdminController
@RestController
@RequestMapping("/api/v1/admin/opensearch")
@PreAuthorize("hasRole('ADMIN')")
@Tag(name = "OpenSearch Admin", description = "OpenSearch monitoring and management (ADMIN only)")
public class OpenSearchAdminController {

    private final OpenSearchClient client;
    private final RestClient restClient;
    private final SearchIndexerStats indexerStats;
    private final AuditService auditService;
    private final ObjectMapper objectMapper;
    @Value("${opensearch.url:http://localhost:9200}")
    private String opensearchUrl;

    // Constructor injection

    @GetMapping("/status")
    public ResponseEntity<OpenSearchStatusResponse> getStatus() {
        try {
            var health = client.cluster().health();
            var info = client.info();
            return ResponseEntity.ok(new OpenSearchStatusResponse(
                true, health.status().jsonValue(), info.version().number(),
                health.numberOfNodes(), opensearchUrl));
        } catch (Exception e) {
            return ResponseEntity.ok(new OpenSearchStatusResponse(
                false, "UNREACHABLE", null, 0, opensearchUrl));
        }
    }

    @GetMapping("/pipeline")
    public ResponseEntity<PipelineStatsResponse> getPipeline() {
        return ResponseEntity.ok(new PipelineStatsResponse(
            indexerStats.getQueueDepth(), indexerStats.getMaxQueueSize(),
            indexerStats.getFailedCount(), indexerStats.getIndexedCount(),
            indexerStats.getDebounceMs(), indexerStats.getIndexingRate(),
            indexerStats.getLastIndexedAt()));
    }

    @GetMapping("/indices")
    public ResponseEntity<IndicesPageResponse> getIndices(
            @RequestParam(defaultValue = "") String search,
            @RequestParam(defaultValue = "ALL") String health,
            @RequestParam(defaultValue = "name") String sort,
            @RequestParam(defaultValue = "asc") String order,
            @RequestParam(defaultValue = "0") int page,
            @RequestParam(defaultValue = "10") int size) {
        int effectiveSize = Math.min(size, 100);
        try {
            // Use RestClient for _cat/indices with JSON format (richer than Java client's API)
            Request catRequest = new Request("GET", "/_cat/indices?format=json&h=index,health,docs.count,store.size,pri,rep&bytes=b");
            Response catResponse = restClient.performRequest(catRequest);
            var allIndices = objectMapper.readValue(
                    catResponse.getEntity().getContent(),
                    new TypeReference<List<Map<String, String>>>() {});

            // Filter by search pattern
            var filtered = allIndices.stream()
                    .filter(idx -> search.isEmpty() || idx.get("index").contains(search))
                    .filter(idx -> "ALL".equals(health) || health.equalsIgnoreCase(idx.get("health")))
                    .toList();

            // Compute summary totals before pagination
            long totalDocs = filtered.stream().mapToLong(idx -> parseLong(idx.get("docs.count"))).sum();
            long totalBytes = filtered.stream().mapToLong(idx -> parseLong(idx.get("store.size"))).sum();

            // Sort
            Comparator<Map<String, String>> comparator = switch (sort) {
                case "docs" -> Comparator.comparingLong(m -> parseLong(m.get("docs.count")));
                case "size" -> Comparator.comparingLong(m -> parseLong(m.get("store.size")));
                case "health" -> Comparator.comparing(m -> m.get("health"));
                default -> Comparator.comparing(m -> m.get("index"));
            };
            if ("desc".equalsIgnoreCase(order)) comparator = comparator.reversed();

            // Paginate
            var sorted = filtered.stream().sorted(comparator).toList();
            int fromIndex = page * effectiveSize;
            int toIndex = Math.min(fromIndex + effectiveSize, sorted.size());
            var pageItems = fromIndex < sorted.size()
                    ? sorted.subList(fromIndex, toIndex) : List.<Map<String, String>>of();

            var indices = pageItems.stream().map(m -> new IndexInfoResponse(
                    m.get("index"), parseLong(m.get("docs.count")),
                    humanSize(parseLong(m.get("store.size"))),
                    parseLong(m.get("store.size")),
                    m.getOrDefault("health", "unknown"),
                    parseInt(m.get("pri")), parseInt(m.get("rep"))
            )).toList();

            int totalPages = (int) Math.ceil((double) filtered.size() / effectiveSize);
            return ResponseEntity.ok(new IndicesPageResponse(
                    indices, filtered.size(), totalDocs,
                    humanSize(totalBytes), page, effectiveSize, totalPages));
        } catch (Exception e) {
            throw new ResponseStatusException(HttpStatus.BAD_GATEWAY, "OpenSearch unreachable");
        }
    }

    private long parseLong(String s) { try { return Long.parseLong(s); } catch (Exception e) { return 0; } }
    private int parseInt(String s) { try { return Integer.parseInt(s); } catch (Exception e) { return 0; } }
    private String humanSize(long bytes) {
        if (bytes < 1024) return bytes + " B";
        if (bytes < 1024 * 1024) return String.format("%.1f KB", bytes / 1024.0);
        if (bytes < 1024L * 1024 * 1024) return String.format("%.1f MB", bytes / (1024.0 * 1024));
        return String.format("%.1f GB", bytes / (1024.0 * 1024 * 1024));
    }

    @DeleteMapping("/indices/{name}")
    public ResponseEntity<Void> deleteIndex(@PathVariable String name, HttpServletRequest request) {
        try {
            var exists = client.indices().exists(e -> e.index(name));
            if (!exists.value()) {
                throw new ResponseStatusException(HttpStatus.NOT_FOUND, "Index not found: " + name);
            }
            client.indices().delete(d -> d.index(name));
            auditService.log("delete_index", AuditCategory.INFRA, name, null, AuditResult.SUCCESS, request);
            return ResponseEntity.ok().build();
        } catch (ResponseStatusException e) { throw e; }
        catch (Exception e) {
            throw new ResponseStatusException(HttpStatus.BAD_GATEWAY, "OpenSearch unreachable");
        }
    }

    @GetMapping("/performance")
    public ResponseEntity<PerformanceResponse> getPerformance() {
        try {
            // Use RestClient for the raw _nodes/stats JSON rather than the Java client's typed NodesStatsRequest
            Request statsRequest = new Request("GET", "/_nodes/stats/jvm,indices");
            Response statsResponse = restClient.performRequest(statsRequest);
            var statsJson = objectMapper.readTree(statsResponse.getEntity().getContent());

            // Aggregate across all nodes
            var nodes = statsJson.get("nodes");
            long jvmHeapUsed = 0, jvmHeapMax = 0;
            long queryCacheHits = 0, queryCacheMisses = 0;
            long requestCacheHits = 0, requestCacheMisses = 0;
            long searchTotal = 0, searchTimeMs = 0;
            long indexingTotal = 0, indexingTimeMs = 0;

            var nodeIter = nodes.fields();
            while (nodeIter.hasNext()) {
                var node = nodeIter.next().getValue();
                var jvm = node.get("jvm").get("mem");
                jvmHeapUsed += jvm.get("heap_used_in_bytes").asLong();
                jvmHeapMax += jvm.get("heap_max_in_bytes").asLong();

                var indices = node.get("indices");
                var qc = indices.get("query_cache");
                queryCacheHits += qc.get("hit_count").asLong();
                queryCacheMisses += qc.get("miss_count").asLong();
                var rc = indices.get("request_cache");
                requestCacheHits += rc.get("hit_count").asLong();
                requestCacheMisses += rc.get("miss_count").asLong();
                var search = indices.get("search");
                searchTotal += search.get("query_total").asLong();
                searchTimeMs += search.get("query_time_in_millis").asLong();
                var indexing = indices.get("indexing");
                indexingTotal += indexing.get("index_total").asLong();
                indexingTimeMs += indexing.get("index_time_in_millis").asLong();
            }

            double qcHitRate = (queryCacheHits + queryCacheMisses) > 0
                    ? 100.0 * queryCacheHits / (queryCacheHits + queryCacheMisses) : 0;
            double rcHitRate = (requestCacheHits + requestCacheMisses) > 0
                    ? 100.0 * requestCacheHits / (requestCacheHits + requestCacheMisses) : 0;
            double avgSearchLatency = searchTotal > 0 ? (double) searchTimeMs / searchTotal : 0;
            double avgIndexingLatency = indexingTotal > 0 ? (double) indexingTimeMs / indexingTotal : 0;

            return ResponseEntity.ok(new PerformanceResponse(
                    qcHitRate, rcHitRate, avgSearchLatency, avgIndexingLatency,
                    jvmHeapUsed, jvmHeapMax));
        } catch (Exception e) {
            throw new ResponseStatusException(HttpStatus.BAD_GATEWAY, "OpenSearch unreachable");
        }
    }
}

Note on OpenSearch client access: The OpenSearchClient bean is already available, but the typed Java client does not expose every _cat parameter. For operations like _cat/indices, fall back to the underlying RestClient with a raw request (RestClient.performRequest() with format=json), as the controller above does.

  • Step 4: Run integration test

Run: mvn test -pl cameleer-server-app -Dtest=OpenSearchAdminControllerIT
Expected: PASS

  • Step 5: Commit
git add cameleer-server-app/
git commit -m "feat: add OpenSearchAdminController with status, pipeline, indices, performance, and delete endpoints"

Task 10: App Module — ThresholdAdminController + AuditLogController

Files:

  • Create: cameleer-server-app/src/main/java/com/cameleer/server/app/controller/ThresholdAdminController.java

  • Create: cameleer-server-app/src/main/java/com/cameleer/server/app/controller/AuditLogController.java

  • Create: cameleer-server-app/src/test/java/com/cameleer/server/app/controller/ThresholdAdminControllerIT.java

  • Create: cameleer-server-app/src/test/java/com/cameleer/server/app/controller/AuditLogControllerIT.java

  • Step 1: Write ThresholdAdminController integration test

  • GET /api/v1/admin/thresholds → 200, returns defaults if no row exists

  • PUT /api/v1/admin/thresholds with valid payload → 200

  • GET /api/v1/admin/thresholds after save → returns saved values

  • PUT /api/v1/admin/thresholds with warning > critical → 400

  • Step 2: Write AuditLogController integration test

  • GET /api/v1/admin/audit → 200, returns paginated results

  • GET /api/v1/admin/audit?category=INFRA → 200, only INFRA entries

  • GET /api/v1/admin/audit?search=kill_query → 200, filtered results

  • Verify that the audit log captures actions from other admin endpoints (e.g., call kill query, then check audit log)

  • Step 3: Implement ThresholdAdminController

@RestController
@RequestMapping("/api/v1/admin/thresholds")
@PreAuthorize("hasRole('ADMIN')")
@Tag(name = "Threshold Admin", description = "Threshold configuration (ADMIN only)")
public class ThresholdAdminController {
    private final ThresholdRepository repository;
    private final AuditService auditService;

    @GetMapping
    public ResponseEntity<ThresholdConfig> get() {
        return ResponseEntity.ok(repository.find().orElse(ThresholdConfig.defaults()));
    }

    @PutMapping
    public ResponseEntity<?> save(@Valid @RequestBody ThresholdConfigRequest request,
                                   HttpServletRequest httpRequest) {
        var errors = request.validate();
        if (!errors.isEmpty()) {
            throw new ResponseStatusException(HttpStatus.BAD_REQUEST,
                    "Validation failed: " + String.join("; ", errors));
        }
        ThresholdConfig config = request.toConfig();
        // Extract username from SecurityContext (same approach as AuditService)
        Authentication auth = SecurityContextHolder.getContext().getAuthentication();
        String username = auth != null ? auth.getName() : "unknown";
        if (username.startsWith("user:")) username = username.substring(5);

        repository.save(config, username);
        auditService.log("update_thresholds", AuditCategory.INFRA, "thresholds",
                Map.of("config", config), AuditResult.SUCCESS, httpRequest);
        return ResponseEntity.ok(config);
    }
}
  • Step 4: Implement AuditLogController
@RestController
@RequestMapping("/api/v1/admin/audit")
@PreAuthorize("hasRole('ADMIN')")
@Tag(name = "Audit Log", description = "Audit log viewer (ADMIN only)")
public class AuditLogController {
    private final AuditRepository repository;

    @GetMapping
    public ResponseEntity<AuditLogPageResponse> getAuditLog(
            @RequestParam(required = false) String username,
            @RequestParam(required = false) AuditCategory category,
            @RequestParam(required = false) String search,
            @RequestParam(required = false) @DateTimeFormat(iso = DateTimeFormat.ISO.DATE) LocalDate from,
            @RequestParam(required = false) @DateTimeFormat(iso = DateTimeFormat.ISO.DATE) LocalDate to,
            @RequestParam(defaultValue = "timestamp") String sort,
            @RequestParam(defaultValue = "desc") String order,
            @RequestParam(defaultValue = "0") int page,
            @RequestParam(defaultValue = "25") int size) {
        size = Math.min(size, 100);
        // Convert LocalDate to Instant (start of day / end of day in UTC)
        Instant fromInstant = (from != null ? from : LocalDate.now().minusDays(7))
                .atStartOfDay(ZoneOffset.UTC).toInstant();
        Instant toInstant = (to != null ? to.plusDays(1) : LocalDate.now().plusDays(1))
                .atStartOfDay(ZoneOffset.UTC).toInstant();
        var query = new AuditRepository.AuditQuery(username, category, search, fromInstant, toInstant, sort, order, page, size);
        var result = repository.find(query);
        int totalPages = (int) Math.ceil((double) result.totalCount() / size);
        return ResponseEntity.ok(new AuditLogPageResponse(result.items(), result.totalCount(), page, size, totalPages));
    }
}
  • Step 5: Run tests

Run: mvn test -pl cameleer-server-app -Dtest="ThresholdAdminControllerIT,AuditLogControllerIT"
Expected: PASS

  • Step 6: Commit
git add cameleer-server-app/
git commit -m "feat: add ThresholdAdminController and AuditLogController with integration tests"

Task 11: Regenerate OpenAPI Spec

Files:

  • Modify: ui/src/api/schema/openapi.json (regenerated)

  • Modify: ui/src/api/schema.d.ts (regenerated)

  • Step 1: Start the server and regenerate

The project regenerates openapi.json from the running server. Follow the pattern from feedback_regenerate_openapi.md:

  1. Start the server (needs PG + OpenSearch running, or use test profile)
  2. Fetch OpenAPI JSON: curl http://localhost:8081/api/v1/api-docs > ui/src/api/schema/openapi.json
  3. Regenerate TypeScript types: cd ui && npx openapi-typescript src/api/schema/openapi.json -o src/api/schema.d.ts
  • Step 2: Verify frontend compiles

Run: cd ui && npm run build
Expected: BUILD SUCCESS (or only warnings, no errors)

  • Step 3: Commit
git add ui/src/api/schema/
git commit -m "chore: regenerate OpenAPI spec and TypeScript types for admin endpoints"

Task 12: Frontend — Shared Admin Components

Files:

  • Create: ui/src/components/admin/StatusBadge.tsx + .module.css

  • Create: ui/src/components/admin/RefreshableCard.tsx + .module.css

  • Create: ui/src/components/admin/ConfirmDeleteDialog.tsx + .module.css

  • Step 1: Create StatusBadge component

A small colored dot + label. Props: status: 'healthy' | 'warning' | 'critical' | 'unknown', label?: string. Colors: green/yellow/red/gray. Uses CSS modules consistent with existing component styling.

import styles from './StatusBadge.module.css';

type Status = 'healthy' | 'warning' | 'critical' | 'unknown';

export function StatusBadge({ status, label }: { status: Status; label?: string }) {
    return (
        <span className={`${styles.badge} ${styles[status]}`}>
            <span className={styles.dot} />
            {label && <span className={styles.label}>{label}</span>}
        </span>
    );
}
  • Step 2: Create RefreshableCard component

A collapsible card with title, optional auto-refresh indicator, and manual refresh button. Props: title, onRefresh, isRefreshing, autoRefresh?: boolean, children.

  • Step 3: Create ConfirmDeleteDialog component

Modal dialog. Props: isOpen, onClose, onConfirm, resourceName, resourceType. Requires user to type the resource name to confirm. Disabled confirm button until input matches.
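The confirm-gating can be kept as a pure predicate so it is trivially unit-testable (trimming surrounding whitespace is an assumption, not part of the spec):

```typescript
// Sketch of the ConfirmDeleteDialog gating logic: the confirm button is
// enabled only once the typed value matches the resource name.
// Trimming whitespace before comparing is an assumed convenience.
export function canConfirmDelete(typed: string, resourceName: string): boolean {
  return typed.trim() === resourceName;
}
```

The dialog then renders `<button disabled={!canConfirmDelete(input, resourceName)}>`.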

  • Step 4: Verify frontend compiles

Run: cd ui && npm run build

  • Step 5: Commit
git add ui/src/components/admin/
git commit -m "feat: add shared admin UI components (StatusBadge, RefreshableCard, ConfirmDeleteDialog)"

Task 13: Frontend — Admin Sidebar + Routing

Files:

  • Modify: ui/src/components/layout/AppSidebar.tsx

  • Modify: ui/src/router.tsx

  • Step 1: Update router.tsx

Add new admin routes inside the AppShell children array. Add redirect for /admin:

{ path: 'admin', element: <Navigate to="/admin/database" replace /> },
{ path: 'admin/database', element: <DatabaseAdminPage /> },
{ path: 'admin/opensearch', element: <OpenSearchAdminPage /> },
{ path: 'admin/audit', element: <AuditLogPage /> },
{ path: 'admin/oidc', element: <OidcAdminPage /> },

Use lazy imports for the new pages (consistent with SwaggerPage pattern).

  • Step 2: Refactor AppSidebar admin section

Replace the single gear-icon link with a collapsible section:

// Admin section (visible only to ADMIN role)
{roles.includes('ADMIN') && (
    <div className={styles.adminSection}>
        <button className={styles.adminHeader} onClick={toggleAdmin}>
            <SettingsIcon /> Admin {adminOpen ? '▾' : '▸'}
        </button>
        {adminOpen && (
            <nav className={styles.adminNav}>
                <NavLink to="/admin/database">Database</NavLink>
                <NavLink to="/admin/opensearch">OpenSearch</NavLink>
                <NavLink to="/admin/audit">Audit Log</NavLink>
                <NavLink to="/admin/oidc">OIDC</NavLink>
            </nav>
        )}
    </div>
)}

Persist adminOpen state in localStorage (key: cameleer-admin-sidebar-open). Default: collapsed.
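One way to persist the flag (helper names are illustrative), degrading to the collapsed default when storage is unavailable:

```typescript
const STORAGE_KEY = 'cameleer-admin-sidebar-open';

// Read the persisted sidebar state; default to collapsed (false) when
// storage is unavailable (SSR, private browsing) or holds no value.
export function readAdminSidebarOpen(): boolean {
  try {
    return typeof localStorage !== 'undefined' && localStorage.getItem(STORAGE_KEY) === 'true';
  } catch {
    return false;
  }
}

export function writeAdminSidebarOpen(open: boolean): void {
  try {
    if (typeof localStorage !== 'undefined') localStorage.setItem(STORAGE_KEY, String(open));
  } catch {
    // ignore storage failures; the toggle still works for the session
  }
}
```

Initialize with `useState(readAdminSidebarOpen)` and call `writeAdminSidebarOpen` in the toggle handler.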

  • Step 3: Verify frontend compiles and routing works

Run: cd ui && npm run build

  • Step 4: Commit
git add ui/src/router.tsx ui/src/components/layout/
git commit -m "feat: restructure admin sidebar with collapsible sub-navigation and new routes"

Task 14: Frontend — React Query Hooks

Files:

  • Create: ui/src/api/queries/admin/database.ts

  • Create: ui/src/api/queries/admin/opensearch.ts

  • Create: ui/src/api/queries/admin/thresholds.ts

  • Create: ui/src/api/queries/admin/audit.ts

  • Step 1: Create database query hooks

Follow the exact pattern from oidc-admin.ts:

import { useQuery, useMutation, useQueryClient } from '@tanstack/react-query';
import { api } from '../../client';

export function useDatabaseStatus() {
    return useQuery({
        queryKey: ['admin', 'database', 'status'],
        queryFn: async () => {
            const { data, error } = await api.GET('/admin/database/status');
            if (error) throw new Error('Failed to load database status');
            return data!;
        },
    });
}

export function useDatabasePool() {
    return useQuery({
        queryKey: ['admin', 'database', 'pool'],
        queryFn: async () => {
            const { data, error } = await api.GET('/admin/database/pool');
            if (error) throw new Error('Failed to load pool stats');
            return data!;
        },
        refetchInterval: 15_000,
    });
}

export function useDatabaseTables() {
    return useQuery({
        queryKey: ['admin', 'database', 'tables'],
        queryFn: async () => {
            const { data, error } = await api.GET('/admin/database/tables');
            if (error) throw new Error('Failed to load table sizes');
            return data!;
        },
        // NO refetchInterval — manual refresh only
    });
}

export function useDatabaseQueries() {
    return useQuery({
        queryKey: ['admin', 'database', 'queries'],
        queryFn: async () => {
            const { data, error } = await api.GET('/admin/database/queries');
            if (error) throw new Error('Failed to load active queries');
            return data!;
        },
        refetchInterval: 15_000,
    });
}

export function useKillQuery() {
    const qc = useQueryClient();
    return useMutation({
        mutationFn: async (pid: number) => {
            const { error } = await api.POST('/admin/database/queries/{pid}/kill', { params: { path: { pid } } });
            if (error) throw new Error('Failed to kill query');
        },
        onSuccess: () => qc.invalidateQueries({ queryKey: ['admin', 'database', 'queries'] }),
    });
}
  • Step 2: Create OpenSearch query hooks

Same pattern as database hooks. Create:

export function useOpenSearchStatus() {
    return useQuery({
        queryKey: ['admin', 'opensearch', 'status'],
        queryFn: async () => {
            const { data, error } = await api.GET('/admin/opensearch/status');
            if (error) throw new Error('Failed to load OpenSearch status');
            return data!;
        },
    });
}

export function usePipelineStats() {
    return useQuery({
        queryKey: ['admin', 'opensearch', 'pipeline'],
        queryFn: async () => {
            const { data, error } = await api.GET('/admin/opensearch/pipeline');
            if (error) throw new Error('Failed to load pipeline stats');
            return data!;
        },
        refetchInterval: 15_000,
    });
}

export function useIndices(params: { search?: string; health?: string; sort?: string; order?: string; page?: number; size?: number }) {
    return useQuery({
        queryKey: ['admin', 'opensearch', 'indices', params],
        queryFn: async () => {
            const { data, error } = await api.GET('/admin/opensearch/indices', { params: { query: params } });
            if (error) throw new Error('Failed to load indices');
            return data!;
        },
        // NO refetchInterval — manual refresh only
    });
}

export function usePerformanceStats() {
    return useQuery({
        queryKey: ['admin', 'opensearch', 'performance'],
        queryFn: async () => {
            const { data, error } = await api.GET('/admin/opensearch/performance');
            if (error) throw new Error('Failed to load performance stats');
            return data!;
        },
        refetchInterval: 15_000,
    });
}

export function useDeleteIndex() {
    const qc = useQueryClient();
    return useMutation({
        mutationFn: async (name: string) => {
            const { error } = await api.DELETE('/admin/opensearch/indices/{name}', { params: { path: { name } } });
            if (error) throw new Error('Failed to delete index');
        },
        onSuccess: () => qc.invalidateQueries({ queryKey: ['admin', 'opensearch', 'indices'] }),
    });
}
  • Step 3: Create threshold hooks

useThresholds() query + useSaveThresholds() mutation.
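A client-side mirror of the backend's warning <= critical rule lets the form disable Save before a round-trip. A minimal sketch, with the pair names assumed from ThresholdConfigRequest:

```typescript
// Warning/critical pairs mirroring ThresholdConfigRequest.validate() on the
// backend. Field names are assumptions based on the request DTO.
const THRESHOLD_PAIRS: ReadonlyArray<readonly [string, string]> = [
  ['connectionPoolWarning', 'connectionPoolCritical'],
  ['queryDurationWarning', 'queryDurationCritical'],
  ['queueDepthWarning', 'queueDepthCritical'],
  ['jvmHeapWarning', 'jvmHeapCritical'],
  ['failedDocsWarning', 'failedDocsCritical'],
];

// Returns one message per violated pair; missing fields are ignored.
export function thresholdErrors(values: Record<string, number>): string[] {
  return THRESHOLD_PAIRS
    .filter(([warn, crit]) => values[warn] > values[crit])
    .map(([warn, crit]) => `${warn} must be <= ${crit}`);
}
```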

  • Step 4: Create audit log hooks

useAuditLog() query that accepts filter params (username, category, search, from, to, page, size) and includes them in the query key for proper cache management.
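Stripping unset filters keeps the query key stable and the request clean; a sketch of the param-cleaning step (interface shape assumed from the endpoint):

```typescript
export interface AuditFilters {
  username?: string;
  category?: string;
  search?: string;
  from?: string; // ISO date, e.g. '2026-03-01'
  to?: string;
  page?: number;
  size?: number;
}

// Drop undefined/empty filters so the query key (and the request's query
// string) only contains values the user actually set.
export function auditQueryParams(filters: AuditFilters): Record<string, string | number> {
  return Object.fromEntries(
    Object.entries(filters).filter(([, v]) => v !== undefined && v !== ''),
  ) as Record<string, string | number>;
}
```

The hook would then use `queryKey: ['admin', 'audit', auditQueryParams(filters)]`.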

- [ ] **Step 5: Verify frontend compiles**

  Run: `cd ui && npm run build`

- [ ] **Step 6: Commit**

  `git add ui/src/api/queries/admin/`
  `git commit -m "feat: add React Query hooks for database, OpenSearch, threshold, and audit log admin endpoints"`

### Task 15: Frontend — Database Admin Page

**Files:**

- Create: `ui/src/pages/admin/DatabaseAdminPage.tsx` + `.module.css`

- [ ] **Step 1: Implement DatabaseAdminPage**

  Page structure (follow OidcAdminPage patterns for role check, loading states, and error handling):

  1. Header: StatusBadge (connected/disconnected) + version + host + schema + manual refresh-all button
  2. Connection Pool card (RefreshableCard, auto-refresh): progress bar showing active/max, metrics grid
  3. Table Sizes card (RefreshableCard, manual): table with sortable columns, summary row
  4. Active Queries card (RefreshableCard, auto-refresh): table with a Kill button per row (behind a confirmation dialog)
  5. Maintenance card (Phase 2 placeholder): greyed-out buttons with tooltip
  6. Thresholds section (collapsible): form inputs for warning/critical values, save button

  Use the `useThresholds()` hook to load the current threshold values and apply them to StatusBadge calculations (e.g., pool usage % > warning → yellow badge).
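The badge calculation described above can be isolated as a pure function, keeping the threshold logic testable outside React. A sketch, assuming the status names match the existing StatusBadge component:

```typescript
export type BadgeStatus = 'ok' | 'warning' | 'critical';

// Compare a usage percentage against the configured thresholds; values above
// the warning mark turn the badge yellow, above the critical mark red.
export function poolBadgeStatus(
    usagePct: number,
    warningPct: number,
    criticalPct: number,
): BadgeStatus {
    if (usagePct > criticalPct) return 'critical';
    if (usagePct > warningPct) return 'warning';
    return 'ok';
}
```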

- [ ] **Step 2: Style the page**

  CSS modules, consistent with the existing admin page styling in `OidcAdminPage.module.css`.

- [ ] **Step 3: Verify page renders**

  Run: `cd ui && npm run build`

- [ ] **Step 4: Commit**

  `git add ui/src/pages/admin/DatabaseAdminPage.tsx ui/src/pages/admin/DatabaseAdminPage.module.css`
  `git commit -m "feat: add Database admin page with pool, tables, queries, and thresholds UI"`

### Task 16: Frontend — OpenSearch Admin Page

**Files:**

- Create: `ui/src/pages/admin/OpenSearchAdminPage.tsx` + `.module.css`

- [ ] **Step 1: Implement OpenSearchAdminPage**

  Page structure:

  1. Header: StatusBadge (cluster health) + version + nodes + host + manual refresh-all
  2. Indexing Pipeline card (auto-refresh): queue depth bar, metrics grid, status badge
  3. Indices card (manual refresh): search input, health filter dropdown, sortable/paginated table, delete button per row (ConfirmDeleteDialog with name-typing confirmation), summary row above the table
  4. Performance card (auto-refresh): cache hit rates, latencies, JVM heap bar
  5. Operations card (Phase 2 placeholder): greyed-out buttons
  6. Thresholds section (collapsible): form inputs for OpenSearch-specific thresholds
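Two small pieces of the page logic above lend themselves to pure helpers: mapping OpenSearch's green/yellow/red cluster health onto the shared badge states, and gating the delete button until the typed confirmation matches the index name. A sketch; the function names are assumptions:

```typescript
// OpenSearch reports cluster health as green/yellow/red; map it onto the
// badge states shared across the admin pages.
export function clusterBadge(health: string): 'ok' | 'warning' | 'critical' {
    switch (health.toLowerCase()) {
        case 'green':
            return 'ok';
        case 'yellow':
            return 'warning';
        default:
            return 'critical'; // red, or anything unexpected
    }
}

// The ConfirmDeleteDialog only enables its delete button once the admin has
// retyped the exact index name.
export function deleteConfirmed(typed: string, indexName: string): boolean {
    return typed.trim() === indexName;
}
```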
- [ ] **Step 2: Style the page**

- [ ] **Step 3: Verify page renders**

  Run: `cd ui && npm run build`

- [ ] **Step 4: Commit**

  `git add ui/src/pages/admin/OpenSearchAdminPage.tsx ui/src/pages/admin/OpenSearchAdminPage.module.css`
  `git commit -m "feat: add OpenSearch admin page with pipeline, indices, performance, and thresholds UI"`

### Task 17: Frontend — Audit Log Page

**Files:**

- Create: `ui/src/pages/admin/AuditLogPage.tsx` + `.module.css`

- [ ] **Step 1: Implement AuditLogPage**

  Page structure:

  1. Header: total event count + date range picker (two date inputs, defaulting to the last 7 days)
  2. Filters row: username dropdown (populated from distinct values, or free text), category dropdown (INFRA/AUTH/USER_MGMT/CONFIG/All), free-text search input
  3. Table: columns — Timestamp, User, Category, Action, Target, Result. Clicking a row expands it to show the full detail JSON (pretty-printed)
  4. Pagination: page controls below the table, showing "Showing X-Y of Z"

  No auto-refresh. Read-only — no edit or delete buttons anywhere.
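The row-expansion and pagination details above reduce to two pure helpers, sketched here under the assumption that `page` is zero-based (matching the `page`/`size` query params) and that the detail column stores a JSON string:

```typescript
// Build the "Showing X-Y of Z" label under the table.
export function pageLabel(page: number, size: number, total: number): string {
    if (total === 0) return 'Showing 0 of 0';
    const first = page * size + 1;
    const last = Math.min((page + 1) * size, total);
    return `Showing ${first}-${last} of ${total}`;
}

// Pretty-print the stored detail JSON for an expanded row, falling back to
// the raw string if it is not valid JSON.
export function formatDetail(detailJson: string): string {
    try {
        return JSON.stringify(JSON.parse(detailJson), null, 2);
    } catch {
        return detailJson;
    }
}
```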

- [ ] **Step 2: Style the page**

- [ ] **Step 3: Verify page renders**

  Run: `cd ui && npm run build`

- [ ] **Step 4: Commit**

  `git add ui/src/pages/admin/AuditLogPage.tsx ui/src/pages/admin/AuditLogPage.module.css`
  `git commit -m "feat: add Audit Log admin page with filtering, pagination, and detail expansion"`

### Task 18: End-to-End Verification

- [ ] **Step 1: Run full backend build**

  Run: `mvn clean verify`
  Expected: BUILD SUCCESS with all tests passing

- [ ] **Step 2: Run full frontend build**

  Run: `cd ui && npm run build`
  Expected: BUILD SUCCESS

- [ ] **Step 3: Manual smoke test**

  Start the full stack (PostgreSQL + OpenSearch + server) and verify:

  1. Admin sidebar shows a collapsible sub-menu with Database, OpenSearch, Audit Log, OIDC
  2. Database page shows connection status, pool stats, table sizes, active queries
  3. OpenSearch page shows cluster health, pipeline stats, indices list, performance
  4. Audit Log page shows entries from admin actions
  5. Kill query and delete index work with confirmation dialogs
  6. Thresholds save and load correctly
  7. Non-admin users cannot see the admin sidebar or access admin API endpoints (verify 403)
- [ ] **Step 4: Final commit if any fixes are needed**

- [ ] **Step 5: Update HOWTO.md**

  Add a section for the new admin pages — how to access them, what each page shows, and how thresholds work.

  `git add HOWTO.md`
  `git commit -m "docs: update HOWTO.md with admin infrastructure pages"`
- [ ] **Step 6: Push to Gitea**

  `git push origin main`