Kafka Error OFFSET_METADATA_TOO_LARGE
Error code 12 · Non-retriable · Consumer
The metadata field of the offset request was too large.
Common Causes
- Application passing a large custom string in the 'metadata' field of OffsetCommit requests, exceeding 'offset.metadata.max.bytes' (default 4096 bytes)
- Custom consumer implementation or framework storing serialized state (JSON, Avro checkpoint data) in offset metadata
- Misconfigured third-party connector or Kafka Streams state store writing oversized metadata payloads to __consumer_offsets
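To see how quickly serialized checkpoint state blows past the limit, here is a minimal sketch. The `checkpoint` payload and the `OFFSET_METADATA_MAX_BYTES` constant are illustrative (the constant mirrors the broker default of 4096 bytes); any consumer that JSON-encodes per-partition state like this into the commit metadata will hit the error.

```python
import json

# Mirrors the broker default for offset.metadata.max.bytes (illustrative constant)
OFFSET_METADATA_MAX_BYTES = 4096

# Hypothetical checkpoint state a custom consumer might stuff into offset metadata
checkpoint = {
    "processed_ids": list(range(1000)),  # e.g. IDs seen in the current window
    "watermark": "2024-01-01T00:00:00Z",
}

# The string that would be sent as the 'metadata' field of an OffsetCommit
metadata = json.dumps(checkpoint)
size = len(metadata.encode("utf-8"))

print(f"metadata size: {size} bytes (limit {OFFSET_METADATA_MAX_BYTES})")
print("would be rejected:", size > OFFSET_METADATA_MAX_BYTES)
```

Even a modest list of a thousand IDs produces several kilobytes of JSON, so the commit is rejected with error 12 before any data is written to __consumer_offsets.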
Solutions
- Reduce the metadata string passed with offset commits; keep it under 4KB or increase 'offset.metadata.max.bytes' on the broker (requires broker restart)
- Move large checkpoint/state data to an external store (Redis, S3, database) and store only a reference key in the offset metadata
- Audit the framework committing offsets (Kafka Streams, Connect, custom consumer) to find where the metadata string is populated and trim it
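The second solution, keeping full state in an external store and committing only a reference key, can be sketched as follows. The `make_offset_metadata` helper and the in-memory `store` dict are hypothetical stand-ins for a real external store such as Redis or S3; the point is that the committed metadata shrinks to a fixed-size key.

```python
import hashlib
import json

def make_offset_metadata(state: dict, store: dict) -> str:
    """Persist large state externally (here: an in-memory dict standing in
    for Redis/S3) and return a small reference key to use as offset metadata."""
    blob = json.dumps(state, sort_keys=True).encode("utf-8")
    key = hashlib.sha256(blob).hexdigest()  # deterministic 64-char reference
    store[key] = blob                       # external write (hypothetical)
    return key

store = {}
meta = make_offset_metadata({"processed_ids": list(range(1000))}, store)
print(len(meta))  # 64 — far under the 4096-byte default limit
```

On restart, the consumer reads the committed metadata key back from the group and fetches the full state from the external store, so the offset commit itself never carries more than 64 bytes.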
Diagnostic Commands
# Describe consumer group offsets and lag
kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group <group> --describe
# Inspect offset commit metadata content
kafka-dump-log.sh --files /var/kafka-logs/__consumer_offsets-<N>/00000000000000000000.log --offsets-decoder 2>&1 | grep -i metadata | head -20
Related APIs
This error can be returned by: OffsetCommit · TxnOffsetCommit