Diagnostics showed ~3,200 tiny inserts per 5 minutes:
- processor_executions: 2,376 inserts (14 rows avg) — one per chunk
- logs: 803 inserts (5 rows avg) — synchronous in HTTP handler
Fix 1: Consolidate processor inserts — new insertProcessorBatches() method
flattens all ProcessorBatch records into a single INSERT per flush cycle.
Fix 2: Buffer log inserts — route through WriteBuffer<BufferedLogEntry>,
flushed on the same 5s interval as executions. LogIngestionController now
pushes to the buffer instead of inserting directly.
Also reverts the async_insert config (it doesn't work with JDBC inline VALUES).
Expected: ~3,200 inserts/5min → ~160 (20x reduction in part creation,
MV triggers, and background merge work).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
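For illustration, a minimal sketch of the Fix 1 consolidation, assuming
hypothetical ProcessorBatch/ProcessorRow shapes and column names (only the
processor_executions table and insertProcessorBatches() come from the
commit itself):

    import java.sql.*;
    import java.util.List;

    // Hypothetical shapes; the real ProcessorBatch in the repo differs.
    record ProcessorRow(String executionId, String processorId,
                        Timestamp startedAt, long durationMs) {}
    record ProcessorBatch(List<ProcessorRow> rows) {}

    final class ProcessorExecutionWriter {
        // Flatten every buffered ProcessorBatch into one multi-row INSERT so
        // a flush cycle creates a single ClickHouse part instead of one per
        // chunk.
        void insertProcessorBatches(Connection conn, List<ProcessorBatch> batches)
                throws SQLException {
            String sql = "INSERT INTO processor_executions "
                       + "(execution_id, processor_id, started_at, duration_ms) "
                       + "VALUES (?, ?, ?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (ProcessorBatch batch : batches) {
                    for (ProcessorRow row : batch.rows()) {
                        ps.setString(1, row.executionId());
                        ps.setString(2, row.processorId());
                        ps.setTimestamp(3, row.startedAt());
                        ps.setLong(4, row.durationMs());
                        ps.addBatch();
                    }
                }
                ps.executeBatch(); // driver sends this as one batched INSERT
            }
        }
    }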
Diagnostics showed 3,200 tiny inserts per 5 minutes (processor_executions:
2,376 at 14 rows avg, logs: 803 at 5 rows avg), each creating a new part
and triggering MV aggregations + background merges. This was the root cause
of ~400m CPU usage at 3 tx/s.
async_insert=1 with 5s busy timeout lets ClickHouse buffer incoming inserts
and consolidate them into fewer, larger parts before writing to disk.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
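For reference, a sketch of requesting server-side buffering per statement
via an INSERT ... SETTINGS clause; the table, columns, and values are
placeholders, and (as the later revert notes) inline VALUES sent over JDBC
ended up bypassing this path:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    final class AsyncInsertExample {
        // Ask ClickHouse to buffer the insert server-side and flush after
        // ~5s of accumulation instead of writing a part immediately.
        static void insertLog(String jdbcUrl) throws Exception {
            try (Connection conn = DriverManager.getConnection(jdbcUrl);
                 Statement stmt = conn.createStatement()) {
                stmt.execute(
                    "INSERT INTO logs (ts, level, message) " +
                    "SETTINGS async_insert = 1, wait_for_async_insert = 0, " +
                    "async_insert_busy_timeout_ms = 5000 " +
                    "VALUES (now(), 'info', 'hello')");
            }
        }
    }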
- Increase ingestion flush interval from 500ms to 5000ms to reduce MV merge
  storms (see the sketch below)
- Reduce ClickHouse background_schedule_pool_size from 8 to 4
- Rename LIVE/PAUSED badge labels to AUTO/MANUAL across all pages
- Update design system to v0.1.29
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
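A sketch of the flush-interval change; class and method names are
illustrative, not the actual ingestion code:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    final class IngestionFlusher {
        private static final long FLUSH_INTERVAL_MS = 5_000; // was 500

        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();

        // Fewer, larger flushes mean fewer parts, fewer MV triggers, and
        // fewer background merges.
        void start(Runnable flushAll) {
            scheduler.scheduleAtFixedRate(
                    flushAll, FLUSH_INTERVAL_MS, FLUSH_INTERVAL_MS,
                    TimeUnit.MILLISECONDS);
        }
    }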
- ChunkAccumulator now extracts inputBody/outputBody/inputHeaders/outputHeaders
  from ExecutionChunk.inputSnapshot/outputSnapshot instead of storing empty
  strings (see the sketch below)
- Set ClickHouse server log level to warning (was trace by default)
- Update CLAUDE.md to document Ed25519 key derivation
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
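Illustrative shape of the extraction; the real ExecutionChunk and snapshot
types in the repo will differ:

    import java.util.Map;

    record Snapshot(String body, Map<String, String> headers) {}
    record ExecutionChunk(Snapshot inputSnapshot, Snapshot outputSnapshot) {}

    final class ChunkAccumulator {
        // These four fields were previously flushed as empty strings.
        private String inputBody = "";
        private String outputBody = "";
        private Map<String, String> inputHeaders = Map.of();
        private Map<String, String> outputHeaders = Map.of();

        void accumulate(ExecutionChunk chunk) {
            if (chunk.inputSnapshot() != null) {
                inputBody = chunk.inputSnapshot().body();
                inputHeaders = chunk.inputSnapshot().headers();
            }
            if (chunk.outputSnapshot() != null) {
                outputBody = chunk.outputSnapshot().body();
                outputHeaders = chunk.outputSnapshot().headers();
            }
        }
    }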
Replace partial memory config with full Altinity low-memory guide
settings. Revert container limit from 6Gi back to 4Gi — proper
tuning (mlock=false, reduced caches/pools/threads, disk spill for
aggregations) makes the original budget sufficient (disk-spill sketch below).
Switch all storage feature flags to ClickHouse:
- CAMELEER_STORAGE_SEARCH: opensearch → clickhouse
- CAMELEER_STORAGE_METRICS: postgres → clickhouse
- CAMELEER_STORAGE_STATS: already clickhouse
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
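One of the guide's knobs shown at query level for illustration (the commit
applies it server-wide; the query, the route_id column, and the 512 MiB
threshold are examples):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    final class SpillExample {
        // Let a heavy aggregation spill to disk past ~512 MiB instead of
        // holding the whole hash table in memory.
        static void aggregate(String jdbcUrl) throws Exception {
            try (Connection conn = DriverManager.getConnection(jdbcUrl);
                 Statement stmt = conn.createStatement()) {
                stmt.executeQuery(
                    "SELECT route_id, count() FROM processor_executions " +
                    "GROUP BY route_id " +
                    "SETTINGS max_bytes_before_external_group_by = 536870912");
            }
        }
    }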
ClickHouse 24.12 auto-sizes caches from the cgroup limit, leaving
insufficient headroom for MV processing and background merges.
Adds a custom config that shrinks mark/index/expression caches and
caps per-query memory at 2 GiB. Bumps container limit 4Gi → 6Gi.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
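The commit applies the cap in server config; for illustration, the same
limit can also be attached to a single query (system.parts is just an
arbitrary example table):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    final class MemoryCapExample {
        // 2 GiB = 2147483648 bytes, matching the per-query cap.
        static void cappedQuery(String jdbcUrl) throws Exception {
            try (Connection conn = DriverManager.getConnection(jdbcUrl);
                 Statement stmt = conn.createStatement()) {
                stmt.executeQuery(
                    "SELECT count() FROM system.parts " +
                    "SETTINGS max_memory_usage = 2147483648");
            }
        }
    }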
- Set CLICKHOUSE_USER/PASSWORD via k8s secret (fixes "disabling network
access for user 'default'" when no password is set)
- Add clickhouse-credentials secret to CI deploy + feature branch copy
- Pass CLICKHOUSE_USERNAME/PASSWORD env vars to server pod
- Make schema initializer non-fatal so the server starts even if ClickHouse
  is temporarily unavailable (see the sketch below)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
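A minimal sketch of the non-fatal initializer; names are illustrative and
the repo's actual class will differ:

    import jakarta.annotation.PostConstruct;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    class ClickHouseSchemaInitializer {
        private static final Logger log =
                LoggerFactory.getLogger(ClickHouseSchemaInitializer.class);

        @PostConstruct
        void init() {
            try {
                applySchema(); // runs the classpath SQL files
            } catch (Exception e) {
                // Non-fatal: log and let the server finish starting even if
                // ClickHouse is temporarily unavailable.
                log.warn("ClickHouse schema initialization failed", e);
            }
        }

        private void applySchema() throws Exception { /* omitted */ }
    }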
ClickHouse only has the 'default' database out of the box. The JDBC URL
connects to 'cameleer', so the database must exist before the server starts.
Uses /docker-entrypoint-initdb.d/ init script via ConfigMap.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Execution rows are wide (29 cols with serialized arrays/JSON), so 500
rows can exceed ClickHouse's memory limit. Reduce default batch size
from 500 to 100 and bump ClickHouse memory limit from 2Gi to 4Gi.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
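A sketch of splitting a flush into sub-batches of the new default size; the
helper is generic and the writer API is illustrative:

    import java.util.List;
    import java.util.function.Consumer;

    final class BatchSplitter {
        // Write rows in slices of batchSize (now 100 by default) so one
        // flush of wide rows stays under ClickHouse's memory limit.
        static <T> void writeInBatches(List<T> rows, int batchSize,
                                       Consumer<List<T>> writeBatch) {
            for (int i = 0; i < rows.size(); i += batchSize) {
                writeBatch.accept(
                        rows.subList(i, Math.min(i + batchSize, rows.size())));
            }
        }
        // usage: BatchSplitter.writeInBatches(executions, 100, writer::insert);
    }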
- ClickHouse user/password now injected via `clickhouse-credentials` Secret
instead of hardcoded plaintext in deploy manifests (#33)
- CI deploy step creates the secret idempotently from Gitea CI secrets
- Added liveness/readiness probes: server uses /api/v1/health, ClickHouse
uses /ping (#35)
- Updated HOWTO.md and CLAUDE.md with new secrets and probe details
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Increase ClickHouse memory limit from 1Gi to 2Gi and reduce default
batch size from 5000 to 500. During VM backup snapshots, I/O contention
prevents ClickHouse from flushing writes fast enough, causing buffer
accumulation that exceeds the 1Gi container limit.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
ClickHouseConfig.ensureDatabaseExists() connects without the database path
to run CREATE DATABASE IF NOT EXISTS before the main DataSource is used.
Removes the ConfigMap-based init scripts from the K8s manifest — the server
is now the single owner of all ClickHouse schema management.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
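A sketch of the idea; the URL format and parameters are illustrative, not
the repo's actual ClickHouseConfig:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    final class ClickHouseBootstrap {
        // Connect to the server root (no /database suffix, so the session
        // lands in 'default') and create the target database up front.
        static void ensureDatabaseExists(String host, int port, String db,
                                         String user, String password)
                throws SQLException {
            String url = "jdbc:clickhouse://" + host + ":" + port;
            try (Connection conn = DriverManager.getConnection(url, user, password);
                 Statement stmt = conn.createStatement()) {
                // db comes from trusted config, not user input
                stmt.execute("CREATE DATABASE IF NOT EXISTS " + db);
            }
        }
    }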
Init scripts run against the default database, not CLICKHOUSE_DB.
Prefix all table references with cameleer3.* and add CREATE DATABASE.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Mounts the schema SQL files as a ConfigMap into ClickHouse's init
directory so tables are created automatically on fresh starts. All
statements use IF NOT EXISTS so they're safe to re-run. This ensures
the schema exists even if the PVC is lost or the pod is recreated.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Expose ClickHouse via NodePort services: HTTP on port 30123, native
protocol on port 30900. Keeps the existing
headless service for internal StatefulSet DNS.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Server now applies schema via @PostConstruct using classpath SQL files.
All statements use IF NOT EXISTS so the run is idempotent and safe on
every startup. Removes the ConfigMap and init script mount from the K8s
manifest since ClickHouse no longer needs to manage the schema.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
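A minimal sketch of the startup applier, assuming a single
/clickhouse/schema.sql classpath file and naive statement splitting (both
illustrative):

    import jakarta.annotation.PostConstruct;
    import javax.sql.DataSource;
    import java.io.InputStream;
    import java.nio.charset.StandardCharsets;
    import java.sql.Connection;
    import java.sql.Statement;

    class SchemaApplier {
        private final DataSource dataSource;

        SchemaApplier(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        @PostConstruct
        void applySchema() throws Exception {
            try (Connection conn = dataSource.getConnection();
                 Statement stmt = conn.createStatement();
                 InputStream in = getClass()
                         .getResourceAsStream("/clickhouse/schema.sql")) {
                String sql = new String(in.readAllBytes(), StandardCharsets.UTF_8);
                // Naive split is fine for DDL-only files with no string
                // literals containing ";\n".
                for (String ddl : sql.split(";\\s*\n")) {
                    if (!ddl.isBlank()) stmt.execute(ddl); // all use IF NOT EXISTS
                }
            }
        }
    }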
ClickHouse Docker entrypoint runs init scripts against the default
database, not the one specified by CLICKHOUSE_DB. Prefix all table
names with cameleer3. to ensure they're created in the right database.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>