message.max.bytes — Kafka Broker Configuration

The largest record batch size allowed by Kafka (after compression, if compression is enabled).

Description

The largest record batch size allowed by Kafka (after compression, if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches, and in that case this limit applies only to a single record. This can be set per topic with the topic-level max.message.bytes config.
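As a sketch of the two scopes (the bootstrap address and topic name are placeholders), the limit can be changed dynamically at the broker level with message.max.bytes, or per topic with max.message.bytes — note the reversed word order of the topic-level name:

```shell
# Cluster-wide dynamic default for all brokers (placeholder address):
kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type brokers --entity-default \
  --add-config message.max.bytes=5242880

# Per-topic override (hypothetical topic "large-events"):
kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name large-events \
  --add-config max.message.bytes=5242880
```

The topic-level setting takes precedence over the broker default for that topic.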

Default Values by Kafka Version

Kafka Version   Default Value
0.8.0           1000000
0.8.1           1000000
0.8.2           1000000
0.9.0           1000012
0.10.0          1000012
0.10.1          1000012
0.10.2          1000012
0.11.0          1000012
1.0             1000012
1.1             1000012
2.0             1000012
2.1             1000012
2.2             1000012
2.3             1000012
2.4             1000012
2.5             1048588
2.6             1048588
2.7             1048588
2.8             1048588
3.0             1048588
3.1             1048588
3.2             1048588
3.3             1048588
3.4             1048588
3.5             1048588
3.6             1048588
3.7             1048588
3.8             1048588
3.9              1048588
4.0             1048588
4.1             1048588
4.2             1048588
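The seemingly odd defaults appear to be a round payload size plus Kafka's 12-byte log overhead (an 8-byte offset plus a 4-byte size prefix in the on-disk record format), which shell arithmetic confirms:

```shell
LOG_OVERHEAD=12                       # 8-byte offset + 4-byte size prefix
echo $((1000000 + LOG_OVERHEAD))      # -> 1000012 (default from 0.9.0 to 2.4)
echo $((1024 * 1024 + LOG_OVERHEAD))  # -> 1048588 (default from 2.5 onward: 1 MiB + overhead)
```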

Tuning Recommendation

Profile: broker / throughput
Recommended: 5242880
Why: Raising the max message size to 5 MB allows producers to send large batches in a single request, reducing request overhead per MB of data. Combined with producer-side lz4 compression, the effective payload per request can reach 20-40 MB.

Profile: broker / durability
Recommended: 1048588
Why: Keep the default ~1 MB limit for durability workloads. Large messages increase the amount of unacknowledged data in flight and widen the potential loss window if a crash occurs between produce and fsync. Enforce small, well-structured records.
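Raising the broker limit alone is not enough: producers, consumers, and follower replicas each enforce their own size limit, and all of them must accommodate the new maximum. A sketch, with a placeholder bootstrap address and the 5 MB throughput value used above:

```shell
# Broker side: raise the record batch limit cluster-wide.
kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type brokers --entity-default \
  --add-config message.max.bytes=5242880

# The other limits along the path, to be set in the respective configs:
#   max.request.size=5242880           # producer config: max produce request size
#   max.partition.fetch.bytes=5242880  # consumer config: max data per partition per fetch
#   replica.fetch.max.bytes=5242880    # broker config: followers must replicate large batches
```

If replica.fetch.max.bytes stays below message.max.bytes, oversized batches can be accepted by the leader but fail to replicate.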

Related Configs

replica.fetch.max.bytes · socket.request.max.bytes · compression.type
