
max.poll.records — Kafka Consumer Configuration


Description

The maximum number of records returned in a single call to poll(). Note that max.poll.records does not affect the underlying fetch behavior: the consumer caches the records from each fetch request and returns them incrementally from successive poll() calls.
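The fetch-versus-poll split can be illustrated with a small self-contained sketch (plain Python, no Kafka client or broker involved; the toy class and FETCH_SIZE are invented for illustration): one simulated fetch fills a cache, and successive poll() calls drain it in max.poll.records-sized chunks.

```python
from collections import deque

FETCH_SIZE = 1200        # records returned by one (simulated) fetch request
MAX_POLL_RECORDS = 500   # cap on records handed back per poll()

class TinyConsumer:
    """Toy model: a fetch fills the cache; poll() drains it in
    max.poll.records-sized chunks, mirroring the caching behavior
    described above."""
    def __init__(self):
        self.cache = deque()

    def _fetch(self):
        # In the real client, fetch size is governed by fetch.max.bytes and
        # max.partition.fetch.bytes, not by max.poll.records.
        self.cache.extend(range(FETCH_SIZE))

    def poll(self):
        if not self.cache:
            self._fetch()
        n = min(MAX_POLL_RECORDS, len(self.cache))
        return [self.cache.popleft() for _ in range(n)]

c = TinyConsumer()
sizes = [len(c.poll()) for _ in range(3)]
print(sizes)  # [500, 500, 200] -- one fetch of 1200, served over three polls
```

The single fetch of 1200 records is handed to the application over three polls, which is why tuning max.poll.records changes batch size per poll without changing network fetch patterns.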

Default Values by Kafka Version

Kafka Version        Default Value
0.10.0               2147483647
0.10.1 through 4.2   500
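As an example of setting this in an application, the kafka-python client exposes the setting as the max_poll_records keyword argument. A minimal sketch, assuming kafka-python is installed; the broker address and group id are placeholders, and only the config dict is built here rather than an actual connection:

```python
# Consumer settings in kafka-python style; KafkaConsumer(**config) would
# apply them. We only construct and inspect the dict here.
config = {
    "bootstrap_servers": "localhost:9092",  # placeholder broker address
    "group_id": "demo-group",               # placeholder consumer group
    "max_poll_records": 500,                # records per poll(), this page's setting
}
print(config["max_poll_records"])  # 500
```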

Tuning Recommendation

Profile: consumer / throughput
Recommended: 2000
Why: Raising max.poll.records from 500 to 2000 increases the batch size returned per poll() call, amortizing the per-poll overhead (offset tracking, deserializer invocation, application loop overhead) across four times as many records. Directly increases records/sec throughput in tight poll loops.

Profile: consumer / latency
Recommended: 100
Why: Lowering max.poll.records from 500 to 100 reduces the batch size returned per poll(), so the application processes records faster and returns to poll() sooner. This tightens the processing loop and reduces the delay between a record arriving at the broker and being processed by the application.

Profile: consumer / durability
Recommended: 100
Why: Smaller batches reduce the reprocessing blast radius: if processing fails midway through a 500-record batch, all 500 records must be reprocessed. With 100-record batches, at most 100 records are reprocessed on failure. This directly limits the duplicates-per-failure window in at-least-once systems.
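When raising max.poll.records for throughput, the setting must stay compatible with max.poll.interval.ms: if processing a full batch takes longer than that interval, the consumer is considered failed and is evicted from the group, triggering a rebalance. A back-of-envelope check, assuming a hypothetical per-record processing cost of 10 ms:

```python
MAX_POLL_INTERVAL_MS = 300_000   # Kafka default (5 minutes)
PER_RECORD_MS = 10               # assumed application processing cost (hypothetical)

def max_safe_poll_records(interval_ms, per_record_ms, headroom=0.5):
    # Keep a 50% safety margin by default so slow batches do not
    # push the consumer past max.poll.interval.ms.
    return int(interval_ms * headroom / per_record_ms)

limit = max_safe_poll_records(MAX_POLL_INTERVAL_MS, PER_RECORD_MS)
print(limit)  # 15000 -- so a max.poll.records of 2000 leaves ample room
```

Under these assumptions, even the aggressive throughput recommendation of 2000 stays well within the interval; with much slower per-record processing, the same arithmetic would argue for lowering max.poll.records or raising max.poll.interval.ms.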

Related Configs

fetch.max.bytes · max.partition.fetch.bytes · max.poll.interval.ms · enable.auto.commit · check.crcs

Manage Kafka configs across all your clusters with Conduktor Console — view, compare, and update configurations in one place.