Kafka Error OFFSET_MOVED_TO_TIERED_STORAGE
Error code 109 · Non-retriable · Consumer
The requested offset is moved to tiered storage.
Common Causes
- Consumer is fetching an offset whose log segment has been offloaded to remote (tiered) storage and deleted from local broker disk per the local retention policy, so the requested data no longer exists in any local log segment.
- A consumer group that fell far behind now has its earliest fetchable offsets residing in the remote storage tier (e.g., S3, GCS) rather than on local broker disk.
- Consumer explicitly seeking to an old offset (e.g., `seekToBeginning()` or a manual offset reset) when the topic has tiered storage enabled and old segments are already remote.
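This situation typically arises on topics where tiered storage is enabled and local retention is much shorter than total retention. A sketch of such a topic configuration (the retention values are illustrative, not recommendations):

```properties
# Topic-level configuration (KIP-405 tiered storage)
remote.storage.enable=true
# Keep data for 7 days in total; older local segments move to remote storage
retention.ms=604800000
# Keep only 1 hour on local broker disk; segments older than this are deleted
# locally after upload, so fetches for those offsets must go to the remote tier
local.retention.ms=3600000
```

Any offset older than the local retention window can only be served from the remote tier, which is exactly when this error surfaces for clients that cannot follow the data there.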
Solutions
- Ensure both the client and broker versions support tiered storage reads; remote fetches are slower than local ones, so keep consumer request timeouts high enough to tolerate object-store latency.
- For lagging consumers, expect higher end-to-end fetch latency once offsets have moved to remote storage. Tune `request.timeout.ms` and monitor remote fetch metrics rather than assuming local-disk latency.
- If the consumer must reset to the beginning, verify that `remote.log.storage.manager.class.name` is correctly configured on the brokers and that the remote storage is accessible. Check broker logs for `RemoteLogManager` errors if fetches fail repeatedly.
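The broker-side plugin settings worth verifying live in `server.properties`. A minimal sketch, assuming a hypothetical `RemoteStorageManager` implementation class (the metadata manager shown is the topic-based default Kafka ships):

```properties
# Broker-level configuration (server.properties)
remote.log.storage.system.enable=true
# RemoteStorageManager plugin -- com.example.MyRemoteStorageManager is a
# placeholder for your deployment's implementation (e.g., an S3-backed plugin)
remote.log.storage.manager.class.name=com.example.MyRemoteStorageManager
# RemoteLogMetadataManager plugin; this is Kafka's built-in default
remote.log.metadata.manager.class.name=org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
```

If the manager class is missing, misnamed, or cannot reach the object store, reads of tiered offsets will fail even though the topic-level `remote.storage.enable` flag is set.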
Diagnostic Commands
# Check consumer group lag and offsets
kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group <group-id> | grep -E 'LAG|TOPIC|PARTITION'
# Check whether tiered storage is enabled for the topic
kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name <topic> --describe | grep -E 'remote.storage.enable|local.retention'
Related APIs
This error can be returned by: Fetch