perf: batch processor and log inserts to reduce ClickHouse part creation
Some checks failed
Diagnostics showed ~3,200 tiny inserts per 5 minutes:

- processor_executions: 2,376 inserts (14 rows avg) — one per chunk
- logs: 803 inserts (5 rows avg) — synchronous in HTTP handler

Fix 1: Consolidate processor inserts — new insertProcessorBatches() method flattens all ProcessorBatch records into a single INSERT per flush cycle.

Fix 2: Buffer log inserts — route through WriteBuffer<BufferedLogEntry>, flushed on the same 5s interval as executions. LogIngestionController now pushes to the buffer instead of inserting directly.

Also reverts the async_insert config (it doesn't work with JDBC inline VALUES).

Expected: ~3,200 inserts/5min → ~160 (a 20x reduction in part creation, MV triggers, and background merge work).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
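The commit message describes routing log writes through a `WriteBuffer<BufferedLogEntry>` that is drained on the same 5s flush interval as executions. A minimal sketch of that buffer pattern, assuming a simple queue-backed implementation (the actual WriteBuffer internals are not shown in this commit): producers push from the HTTP path, and a periodic flusher drains everything at once so each cycle issues a single INSERT instead of one per request.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch of the WriteBuffer pattern: many producers enqueue
// entries lock-free; a scheduled flusher drains the whole queue into one
// batch, which becomes a single ClickHouse INSERT per flush cycle.
final class WriteBuffer<T> {
    private final ConcurrentLinkedQueue<T> queue = new ConcurrentLinkedQueue<>();

    // Called from the request path (e.g. LogIngestionController) — O(1), non-blocking.
    void push(T item) {
        queue.add(item);
    }

    // Called by the flusher on its interval; empties the queue and returns
    // everything accumulated since the last flush.
    List<T> drain() {
        List<T> batch = new ArrayList<>();
        T item;
        while ((item = queue.poll()) != null) {
            batch.add(item);
        }
        return batch;
    }
}
```

The controller only ever calls `push()`, so request latency no longer includes a synchronous ClickHouse round trip; the insert cost is amortized across the flush interval.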
@@ -0,0 +1,12 @@
+package com.cameleer3.server.core.ingestion;
+
+import com.cameleer3.common.model.LogEntry;
+
+/**
+ * A log entry paired with its agent metadata, ready for buffered ClickHouse insertion.
+ */
+public record BufferedLogEntry(
+    String instanceId,
+    String applicationId,
+    LogEntry entry
+) {}
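To illustrate how the flusher might turn a drained batch of these records into a single multi-row INSERT, here is a hedged sketch. The row layout, column order, and the `message` field on `LogEntry` are assumptions for the example; the commit only shows the record itself.

```java
import java.util.List;

// Simplified stand-ins for this sketch: the real LogEntry lives in
// com.cameleer3.common.model and presumably has more fields.
record LogEntry(String message) {}

record BufferedLogEntry(String instanceId, String applicationId, LogEntry entry) {}

final class LogFlushSketch {
    // Flattens one drained batch into a row set for a single batched INSERT
    // (e.g. via JDBC PreparedStatement.addBatch), so the buffer produces one
    // ClickHouse part per flush instead of one per HTTP request.
    static Object[][] toRows(List<BufferedLogEntry> batch) {
        Object[][] rows = new Object[batch.size()][];
        for (int i = 0; i < batch.size(); i++) {
            BufferedLogEntry e = batch.get(i);
            // Assumed column order: (instance_id, application_id, message).
            rows[i] = new Object[] { e.instanceId(), e.applicationId(), e.entry().message() };
        }
        return rows;
    }
}
```

Batching at this layer is what makes reverting async_insert safe: the client now forms large inserts itself rather than relying on server-side coalescing.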