Fix ClickHouse OOM on batch insert: reduce batch size, increase memory
All checks were successful
CI / build (push) Successful in 1m3s
CI / docker (push) Successful in 41s
CI / deploy (push) Successful in 46s

Execution rows are wide (29 columns including serialized arrays/JSON), so
a 500-row batch can exceed ClickHouse's memory limit. Reduce the default
batch size from 500 to 100, raise the ClickHouse memory request from
512Mi to 1Gi, and bump the limit from 2Gi to 4Gi.
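The "can exceed" claim is easy to sanity-check with back-of-the-envelope arithmetic. The per-row size below is an assumed figure chosen for illustration, not measured from the real schema:

```java
public class BatchMemoryEstimate {
    // Assumed average serialized size of one 29-column execution row
    // with embedded arrays/JSON. Hypothetical; the real figure depends
    // on the actual schema and payloads.
    static final long BYTES_PER_ROW = 64 * 1024; // 64 KiB

    static long batchBytes(int rows) {
        return rows * BYTES_PER_ROW;
    }

    public static void main(String[] args) {
        // A 5x smaller batch means a 5x smaller in-flight insert payload.
        System.out.println("500-row batch: " + batchBytes(500) / (1024 * 1024) + " MiB");
        System.out.println("100-row batch: " + batchBytes(100) / (1024 * 1024) + " MiB");
    }
}
```

Even at a modest assumed row size the old batch is tens of MiB per insert, and several concurrent inserts plus ClickHouse's own parsing/merge overhead can add up quickly against a 2Gi limit.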

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
hsiegeln
2026-03-13 22:44:03 +01:00
parent f156a2aab0
commit 4cdf2ac012
2 changed files with 3 additions and 3 deletions


@@ -12,7 +12,7 @@ import org.springframework.boot.context.properties.ConfigurationProperties;
 public class IngestionConfig {
     private int bufferCapacity = 50_000;
-    private int batchSize = 500;
+    private int batchSize = 100;
     private long flushIntervalMs = 1_000;

     public int getBufferCapacity() {
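The diff only shows the config knobs; the consumer of `IngestionConfig` is not in this commit. A minimal sketch of the policy those knobs typically drive, with hypothetical names, is: flush when the batch fills, or when the flush interval elapses, whichever comes first.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical buffer illustrating how batchSize and flushIntervalMs
// would interact in an ingestion pipeline. Not the project's actual
// implementation.
public class BatchBuffer<T> {
    private final int batchSize;
    private final long flushIntervalMs;
    private final Consumer<List<T>> sink; // e.g. a ClickHouse batch insert
    private final List<T> pending = new ArrayList<>();
    private long lastFlush = System.currentTimeMillis();

    public BatchBuffer(int batchSize, long flushIntervalMs, Consumer<List<T>> sink) {
        this.batchSize = batchSize;
        this.flushIntervalMs = flushIntervalMs;
        this.sink = sink;
    }

    public synchronized void add(T row) {
        pending.add(row);
        // Flush on size OR elapsed time, whichever triggers first.
        if (pending.size() >= batchSize
                || System.currentTimeMillis() - lastFlush >= flushIntervalMs) {
            flush();
        }
    }

    public synchronized void flush() {
        if (!pending.isEmpty()) {
            sink.accept(new ArrayList<>(pending));
            pending.clear();
        }
        lastFlush = System.currentTimeMillis();
    }
}
```

Under this policy, lowering `batchSize` from 500 to 100 caps the size of any single insert the sink sees, at the cost of up to 5x more insert round-trips.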


@@ -40,10 +40,10 @@ spec:
           mountPath: /var/lib/clickhouse
       resources:
         requests:
-          memory: "512Mi"
+          memory: "1Gi"
           cpu: "200m"
         limits:
-          memory: "2Gi"
+          memory: "4Gi"
           cpu: "1000m"
       livenessProbe:
         httpGet:
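Raising the container limit alone still leaves the kernel OOM-killer as the failure mode if ClickHouse tries to use all of it. A complementary server-side cap (not part of this commit, values illustrative) makes ClickHouse reject over-budget work with a `MEMORY_LIMIT_EXCEEDED` error instead of being killed:

```xml
<!-- config.xml fragment (sketch): cap server-wide memory usage below
     the 4Gi container limit, leaving headroom for page cache and the
     runtime itself. 0.9 is ClickHouse's default ratio. -->
<clickhouse>
    <max_server_memory_usage_to_ram_ratio>0.8</max_server_memory_usage_to_ram_ratio>
</clickhouse>
```

With cgroup limits in place, ClickHouse derives its RAM figure from the container limit, so the ratio applies to 4Gi here.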