Fix ClickHouse OOM on batch insert: reduce batch size, increase memory
Execution rows are wide (29 columns with serialized arrays/JSON), so a 500-row batch can exceed ClickHouse's memory limit. Reduce the default batch size from 500 to 100 and bump the ClickHouse memory limit from 2Gi to 4Gi.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
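The batch-size half of the fix lives in application code not shown in this diff. A minimal sketch of the idea, assuming a clickhouse-driver-style client; the `client.execute` call, table name, and function names are illustrative, not the project's actual inserter:

```python
# Sketch of the reduced-batch insert. The real inserter and table
# schema are not part of this diff; names here are hypothetical.
BATCH_SIZE = 100  # was 500; wide 29-column rows overflowed ClickHouse memory


def chunked(rows, size=BATCH_SIZE):
    """Yield successive slices of at most `size` rows."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]


def insert_executions(client, rows):
    # `client.execute` stands in for a clickhouse-driver style INSERT;
    # each batch is sent separately so no single INSERT holds 500 wide rows.
    for batch in chunked(rows):
        client.execute("INSERT INTO executions VALUES", batch)
```

Smaller batches trade a few extra round trips for a bounded per-INSERT memory footprint on the server, which is the relevant limit here.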
@@ -40,10 +40,10 @@ spec:
           mountPath: /var/lib/clickhouse
       resources:
         requests:
-          memory: "512Mi"
+          memory: "1Gi"
           cpu: "200m"
         limits:
-          memory: "2Gi"
+          memory: "4Gi"
           cpu: "1000m"
       livenessProbe:
         httpGet:
||||