Kafka Error RECORD_LIST_TOO_LARGE
Error code 18 · Non-retriable · Producer
The request included a message batch larger than the configured segment size on the server.
Common Causes
- The producer accumulates records until a batch reaches 'batch.size' bytes and then sends it; if a single batch exceeds the broker's segment size ('log.segment.bytes', default 1 GB), the broker rejects it
- The application manually builds a list of ProducerRecords and sends them all in a single request whose total size exceeds 'max.request.size'
- A misconfigured 'batch.size' (default 16 KB) combined with compression that expands the effective payload beyond segment limits (unusual, but possible with incompressible data)
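The relationship between the settings above can be sketched with a small check; the numeric values are the documented defaults, and the helper function is purely illustrative, not part of any Kafka client API:

```python
# Documented defaults for the settings involved in this error.
BATCH_SIZE = 16_384                 # producer 'batch.size', 16 KB
MAX_REQUEST_SIZE = 1_048_576        # producer 'max.request.size', 1 MB
LOG_SEGMENT_BYTES = 1_073_741_824   # broker 'log.segment.bytes', 1 GB

def request_fits_segment(max_request_size: int, log_segment_bytes: int) -> bool:
    """True if no single produce request can exceed the broker segment size."""
    return max_request_size <= log_segment_bytes

print(request_fits_segment(MAX_REQUEST_SIZE, LOG_SEGMENT_BYTES))  # True
```

With the defaults the check passes by a wide margin; the error typically appears only after one side of the inequality has been changed, e.g. 'max.request.size' raised far above a topic whose 'segment.bytes' was lowered.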
Solutions
- Reduce the producer's 'max.request.size' (default 1 MB) and 'batch.size' so that no single request can exceed the broker's 'log.segment.bytes'; keep 'max.request.size' <= 'log.segment.bytes'
- For bulk send scenarios, split the record list into smaller chunks in application code before calling producer.send(); process in batches of N records
- Increase 'log.segment.bytes' on the broker, or per-topic, if large batches are genuinely needed: kafka-configs.sh --bootstrap-server localhost:9092 --alter --entity-type topics --entity-name <topic> --add-config segment.bytes=<value>
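The chunked-send approach from the Solutions list can be sketched as follows; the 500-record chunk size is arbitrary, and a stand-in send callable is used in place of a real Kafka client so the logic is self-contained:

```python
def chunked(records, chunk_size):
    """Yield successive slices of at most chunk_size records."""
    for i in range(0, len(records), chunk_size):
        yield records[i:i + chunk_size]

def send_all(send_one, records, chunk_size=500):
    """Send records in bounded chunks so no single request grows too large.

    send_one is any callable that sends a single record -- e.g. a wrapper
    around a Kafka client's producer.send(topic, value=record).
    """
    for chunk in chunked(records, chunk_size):
        for record in chunk:
            send_one(record)
        # With a real client, call producer.flush() here so each chunk
        # completes before the next one is accumulated.

# Example with a stand-in send callable:
sent = []
send_all(sent.append, list(range(1200)), chunk_size=500)
print(len(sent))  # 1200
```

Chunking in application code keeps each produce request well under both 'max.request.size' and the topic's 'segment.bytes', regardless of how many records the caller hands over at once.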
Diagnostic Commands
# Check topic segment configuration
kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name <topic> --describe | grep segment
# Describe broker-level configuration
kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name <broker-id> --describe | grep -E 'segment|request.size'
Related APIs
This error can be returned by: EndTxn · InitProducerId · Produce · WriteTxnMarkers