# Kafka Configuration Explorer (Conduktor)

Broker configurations and their default values across Apache Kafka 3.0 – 4.2. A default that is identical in every covered version is listed once; where a default changed, or a setting exists only in some versions, that is noted in the Default column.

| Broker configuration | Description | Default |
| --- | --- | --- |
| add.partitions.to.txn.retry.backoff.max.ms | The maximum allowed timeout for adding partitions to transactions on the server side. It only applies to the actual add partition … | 100 (100 ms); newer versions only |
| add.partitions.to.txn.retry.backoff.ms | The server-side retry backoff when the server attempts to add the partition to the transaction | 20 (20 ms); newer versions only |
| advertised.listeners | Listeners to publish to ZooKeeper for clients to use, if different than the listeners config property. In IaaS environments, this … | null |
| alter.config.policy.class.name | The alter configs policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.A… | null |
| alter.log.dirs.replication.quota.window.num | The number of samples to retain in memory for alter log dirs replication quotas | 11 |
| alter.log.dirs.replication.quota.window.size.seconds | The time span of each sample for alter log dirs replication quotas | 1 |
| authorizer.class.name | The fully qualified name of a class that implements the org.apache.kafka.server.authorizer.Authorizer interface, which is used by the … | "" (empty) |
| auto.create.topics.enable | Enable auto creation of topics on the server. | true |
| auto.leader.rebalance.enable | Enables auto leader balancing. A background thread checks the distribution of partition leaders at regular intervals, configurable … | true |
| background.threads | The number of threads to use for various background processing tasks | 10 |
| broker.heartbeat.interval.ms | The length of time in milliseconds between broker heartbeats. Used when running in KRaft mode. | 2000 (2 s) |
| broker.id | The broker id for this server. If unset, a unique broker id will be generated. To avoid conflicts between ZooKeeper-generated broke… | -1 |
| broker.rack | Rack of the broker. This will be used in rack-aware replication assignment for fault tolerance. Examples: RACK1, us-east-1d | null |
| broker.session.timeout.ms | The length of time in milliseconds that a broker lease lasts if no heartbeats are made. Used when running in KRaft mode. | 9000 (9 s) |
| client.quota.callback.class | The fully qualified name of a class that implements the ClientQuotaCallback interface, which is used to determine quota limits app… | null |
| compression.gzip.level | The compression level to use if compression.type is set to gzip. | -1 (3.8+) |
| compression.lz4.level | The compression level to use if compression.type is set to lz4. | 9 (3.8+) |
| compression.type | Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy'… | producer |
| compression.zstd.level | The compression level to use if compression.type is set to zstd. | 3 (3.8+) |
| config.providers | Comma-separated names of ConfigProvider classes, loaded and used in the order specified. Implementing the interface ConfigProvide… | "" (empty) |
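
The compression settings above interact: compression.type picks the codec (or, with the default `producer`, keeps whatever the producer used), and each per-codec level knob is only consulted when that codec is selected. A minimal, illustrative `server.properties` fragment (values are examples, not recommendations):

```properties
# Recompress on the broker with zstd; the default "producer" would keep
# the codec chosen by the producer.
compression.type=zstd
# Level knobs exist from Kafka 3.8 (KIP-390); higher = smaller but slower.
compression.zstd.level=6
# Only consulted if the matching codec is actually selected:
compression.gzip.level=6
compression.lz4.level=9
```
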

| Broker configuration | Description | Default |
| --- | --- | --- |
| connection.failed.authentication.delay.ms | Connection close delay on failed authentication: this is the time (in milliseconds) by which connection close will be delayed on a… | 100 (100 ms) |
| connections.max.idle.ms | Close idle connections after the number of milliseconds specified by this config. | 600000 (10 min) |
| connections.max.reauth.ms | When explicitly set to a positive number (the default is 0, not a positive number), a session lifetime that will not exceed the co… | 0 |
| controlled.shutdown.enable | Enable controlled shutdown of the server. | true |
| controller.listener.names | A comma-separated list of the names of the listeners used by the controller. This is required if running in KRaft mode. When commu… | null |
| controller.quorum.append.linger.ms | The duration in milliseconds that the leader will wait for writes to accumulate before flushing them to disk. | 25 (25 ms) |
| controller.quorum.auto.join.enable | Controls whether a KRaft controller should automatically join the cluster metadata partition for its cluster id. | false; newest versions only |
| controller.quorum.bootstrap.servers | List of endpoints to use for bootstrapping the cluster metadata. The endpoints are specified in a comma-separated list of {host}:{po… | "" (empty); 3.9+ |
| controller.quorum.election.backoff.max.ms | Maximum time in milliseconds before starting new elections. This is used in the binary exponential backoff mechanism that helps pr… | 1000 (1 s) |
| controller.quorum.election.timeout.ms | Maximum time in milliseconds to wait without being able to fetch from the leader before triggering a new election | 1000 (1 s) |
| controller.quorum.fetch.timeout.ms | Maximum time without a successful fetch from the current leader before becoming a candidate and triggering an election for voters; … | 2000 (2 s) |
| controller.quorum.request.timeout.ms | Controls the maximum amount of time the client will wait for the response of a request. If the response is not r… | 2000 (2 s) |
| controller.quorum.retry.backoff.ms | The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending … | 20 (20 ms) |
| controller.quorum.voters | Map of id/endpoint information for the set of voters in a comma-separated list of {id}@{host}:{port} entries. For example: 1@local… | "" (empty) |
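
The controller.quorum.* timeouts above govern KRaft leader elections: a voter that goes controller.quorum.fetch.timeout.ms without a successful fetch from the leader becomes a candidate, and backs off between failed elections up to controller.quorum.election.backoff.max.ms. A hypothetical static three-controller quorum (node ids, hostnames, and ports are placeholders):

```properties
process.roles=controller
node.id=1
controller.listener.names=CONTROLLER
listeners=CONTROLLER://:9093
# Static voter map, one {id}@{host}:{port} per entry:
controller.quorum.voters=1@ctrl-1:9093,2@ctrl-2:9093,3@ctrl-3:9093
# Election tuning, shown at the defaults from the table above:
controller.quorum.fetch.timeout.ms=2000
controller.quorum.election.timeout.ms=1000
controller.quorum.election.backoff.max.ms=1000
```
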

| Broker configuration | Description | Default |
| --- | --- | --- |
| controller.quota.window.num | The number of samples to retain in memory for controller mutation quotas | 11 |
| controller.quota.window.size.seconds | The time span of each sample for controller mutation quotas | 1 |
| controller.socket.timeout.ms | The socket timeout for controller-to-broker channels. | 30000 (30 s) |
| create.topic.policy.class.name | The create topic policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.Cr… | null |
| default.replication.factor | The default replication factor for automatically created topics. | 1 |
| delegation.token.expiry.check.interval.ms | Scan interval to remove expired delegation tokens. | 3600000 (1 h) |
| delegation.token.expiry.time.ms | The token validity time in milliseconds before the token needs to be renewed. | 86400000 (1 d) |
| delegation.token.max.lifetime.ms | The token has a maximum lifetime beyond which it cannot be renewed anymore. | 604800000 (7 d) |
| delegation.token.secret.key | Secret key to generate and verify delegation tokens. The same key must be configured across all the brokers. If using Kafka with … | null |
| delete.records.purgatory.purge.interval.requests | The purge interval (in number of requests) of the delete records request purgatory | 1 |
| delete.topic.enable | Enables topic deletion. Deleting a topic through the admin tool will have no effect if this config is turned off | true |
| early.start.listeners | A comma-separated list of listener names which may be started before the authorizer has finished initialization. This is useful wh… | null |
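
The delegation token lifetimes compose: a token must be renewed within delegation.token.expiry.time.ms, can never outlive delegation.token.max.lifetime.ms, and expired tokens are purged every delegation.token.expiry.check.interval.ms. A sketch enabling tokens (the secret is a placeholder and must be identical on every broker):

```properties
# Placeholder secret; all brokers must share the same value or
# token verification fails.
delegation.token.secret.key=change-me-shared-secret
# Tokens must be renewed at least daily (the default, 1 d):
delegation.token.expiry.time.ms=86400000
# Hard cap of 7 days (the default); no renewal extends past this:
delegation.token.max.lifetime.ms=604800000
# Hourly scan for expired tokens (the default):
delegation.token.expiry.check.interval.ms=3600000
```
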

| Broker configuration | Description | Default |
| --- | --- | --- |
| fetch.max.bytes | The maximum amount of data the server should return for a fetch request. Records are fetched in batches by the consumer, and if th… | 57671680 (55 MB) |
| fetch.purgatory.purge.interval.requests | The purge interval (in number of requests) of the fetch request purgatory | 1000 |
| group.consumer.assignors | The server-side assignors as a list of full class names. The first one in the list is considered the default assignor to be use… | org.apache.kafka.coordinator.group.assignor.UniformAssignor, org.apache.kafka.coordinator.group.assignor.RangeAssignor (3.7–3.9); uniform,range (4.0+) |
| group.consumer.heartbeat.interval.ms | The heartbeat interval given to the members of a consumer group. | 5000 (5 s); 3.7+ |
| group.consumer.max.heartbeat.interval.ms | The maximum heartbeat interval for registered consumers. | 15000 (15 s); 3.7+ |
| group.consumer.max.session.timeout.ms | The maximum allowed session timeout for registered consumers. | 60000 (1 min); 3.7+ |
| group.consumer.max.size | The maximum number of consumers that a single consumer group can accommodate. This value will only impact the new consumer coordin… | 2147483647; 3.7+ |
| group.consumer.migration.policy | The config that enables converting the non-empty classic group using the consumer embedded protocol to the non-empty consumer grou… | disabled (3.9); bidirectional (4.0+) |
| group.consumer.min.heartbeat.interval.ms | The minimum heartbeat interval for registered consumers. | 5000 (5 s); 3.7+ |
| group.consumer.min.session.timeout.ms | The minimum allowed session timeout for registered consumers. | 45000 (45 s); 3.7+ |
| group.consumer.session.timeout.ms | The timeout to detect client failures when using the consumer group protocol. | 45000 (45 s); 3.7+ |
| group.coordinator.append.linger.ms | The duration in milliseconds that the coordinator will wait for writes to accumulate before flushing them to disk. Transactional w… | 10 ms (3.8–3.9); 5 ms (4.0–4.1); -1 (4.2) |
| group.coordinator.rebalance.protocols | The list of enabled rebalance protocols. Supported protocols: consumer, classic, unknown. The consumer rebalance protocol is in earl… | classic (3.7–3.9); classic,consumer (4.0); classic,consumer,streams (4.1+) |
| group.coordinator.threads | The number of threads used by the group coordinator. | 1 (3.7–3.9); 4 (4.0+) |
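
How the group.coordinator.* and group.consumer.* settings fit together can be sketched for a hypothetical 4.0+ broker enabling the KIP-848 consumer rebalance protocol (values are illustrative):

```properties
# Serve both the classic and the next-generation consumer protocol.
group.coordinator.rebalance.protocols=classic,consumer
# Server-side assignors; the first listed is the default
# (short aliases are accepted in newer versions):
group.consumer.assignors=uniform,range
# Bounds the coordinator enforces for KIP-848 group members:
group.consumer.heartbeat.interval.ms=5000
group.consumer.session.timeout.ms=45000
# Allow existing classic groups to convert to the new protocol and back:
group.consumer.migration.policy=bidirectional
```
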

| Broker configuration | Description | Default |
| --- | --- | --- |
| group.initial.rebalance.delay.ms | The amount of time the group coordinator will wait for more consumers to join a new group before performing the first rebalance. A… | 3000 (3 s) |
| group.max.session.timeout.ms | The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in betw… | 1800000 (30 min) |
| group.max.size | The maximum number of consumers that a single consumer group can accommodate. | 2147483647 |
| group.min.session.timeout.ms | The minimum allowed session timeout for registered consumers. Shorter timeouts result in quicker failure detection at the cost of … | 6000 (6 s) |
| group.share.assignors | The server-side assignors as a list of either names for built-in assignors or full class names for custom assignors. The list must… | simple |
| group.share.delivery.count.limit | The maximum number of delivery attempts for a record delivered to a share group. | 5 |
| group.share.heartbeat.interval.ms | The heartbeat interval given to the members of a share group. | 5000 (5 s) |
| group.share.max.heartbeat.interval.ms | The maximum heartbeat interval for share group members. | 15000 (15 s) |
| group.share.max.record.lock.duration.ms | The record acquisition lock maximum duration in milliseconds for share groups. | 60000 (1 min) |
| group.share.max.session.timeout.ms | The maximum allowed session timeout for share group members. | 60000 (1 min) |
| group.share.max.share.sessions | The maximum number of share sessions per broker. | 2000 |
| group.share.max.size | The maximum number of members that a single share group can accommodate. | 200 |
| group.share.min.heartbeat.interval.ms | The minimum heartbeat interval for share group members. | 5000 (5 s) |
| group.share.min.record.lock.duration.ms | The record acquisition lock minimum duration in milliseconds for share groups. | 15000 (15 s) |
| group.share.min.session.timeout.ms | The minimum allowed session timeout for share group members. | 45000 (45 s) |
| group.share.partition.max.record.locks | Share-group record lock limit per share-partition. | 200 (earlier versions); 2000 (later versions) |
| group.share.record.lock.duration.ms | The record acquisition lock duration in milliseconds for share groups. | 30000 (30 s) |
| group.share.session.timeout.ms | The timeout to detect client failures when using the share group protocol. | 45000 (45 s) |
| group.streams.heartbeat.interval.ms | The heartbeat interval given to the members. | 5000 (5 s); 4.1+ |
| group.streams.initial.rebalance.delay.ms | The amount of time the group coordinator will wait for more streams clients to join a new group before performing the first rebala… | 3000 (3 s); newest version only |
| group.streams.max.heartbeat.interval.ms | The maximum allowed value for the group-level configuration of streams.heartbeat.interval.ms | 15000 (15 s); 4.1+ |
| group.streams.max.session.timeout.ms | The maximum allowed value for the group-level configuration of streams.session.timeout.ms | 60000 (1 min); 4.1+ |
| group.streams.max.size | The maximum number of streams clients that a single streams group can accommodate. | 2147483647; 4.1+ |
| group.streams.max.standby.replicas | The maximum allowed value for the group-level configuration of streams.num.standby.replicas | 2; 4.1+ |
| group.streams.min.heartbeat.interval.ms | The minimum allowed value for the group-level configuration of streams.heartbeat.interval.ms | 5000 (5 s); 4.1+ |
| group.streams.min.session.timeout.ms | The minimum allowed value for the group-level configuration of streams.session.timeout.ms | 45000 (45 s); 4.1+ |
| group.streams.num.standby.replicas | The number of standby replicas for each task. | 0; 4.1+ |
| group.streams.session.timeout.ms | The timeout to detect client failures when using the streams group protocol. | 45000 (45 s); 4.1+ |
| initial.broker.registration.timeout.ms | When initially registering with the controller quorum, the number of milliseconds to wait before declaring failure and exiting the… | 60000 (1 min) |
| inter.broker.listener.name | Name of the listener used for communication between brokers. If this is unset, the listener name is defined by security.inter.broker.p… | null |
| kafka.metrics.polling.interval.secs | The metrics polling interval (in seconds) which can be used in kafka.metrics.reporters implementations. | 10 |
| kafka.metrics.reporters | A list of classes to use as Yammer metrics custom reporters. The reporters should implement the kafka.metrics.KafkaMetricsReporter tra… | "" (empty) |
| leader.imbalance.check.interval.seconds | The frequency with which the partition rebalance check is triggered by the controller | 300 |
| listener.security.protocol.map | Map between listener names and security protocols. This must be defined for the same security protocol to be usable in more than o… | PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL (same map, entry order differs in newer versions) |
| listeners | Comma-separated list of URIs the broker will listen on, together with the listener names. If the listener name is not a security protocol, … | PLAINTEXT://:9092 |
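
listeners, advertised.listeners, listener.security.protocol.map, and inter.broker.listener.name work as a set: the broker binds to listeners, tells clients to connect via advertised.listeners, and routes replication traffic over inter.broker.listener.name. A common hypothetical layout (listener names and hostnames are placeholders):

```properties
# What the broker binds to:
listeners=INTERNAL://:9092,EXTERNAL://:9094
# What clients are told to connect to:
advertised.listeners=INTERNAL://broker1.internal:9092,EXTERNAL://broker1.example.com:9094
# Custom listener names must be mapped to a security protocol:
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SASL_SSL
# Keep broker-to-broker replication on the internal listener:
inter.broker.listener.name=INTERNAL
```
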

| Broker configuration | Description | Default |
| --- | --- | --- |
| log.cleaner.backoff.ms | The amount of time to sleep when there are no logs to clean | 15000 (15 s) |
| log.cleaner.dedupe.buffer.size | The total memory used for log deduplication across all cleaner threads | 134217728 (128 MB) |
| log.cleaner.delete.retention.ms | The amount of time to retain tombstone message markers for log-compacted topics. This setting also gives a bound on the time in wh… | 86400000 (1 d) |
| log.cleaner.enable | Enable the log cleaner process to run on the server. Should be enabled if using any topics with cleanup.policy=compact, including… | true |
| log.cleaner.io.buffer.load.factor | Log cleaner dedupe buffer load factor. The percentage full the dedupe buffer can become. A higher value will allow more log to be … | 0.9 |
| log.cleaner.io.buffer.size | The total memory used for log cleaner I/O buffers across all cleaner threads | 524288 (512 KB) |
| log.cleaner.io.max.bytes.per.second | The log cleaner will be throttled so that the sum of its read and write I/O will be less than this value on average | 1.7976931348623157E308 (effectively unthrottled) |
| log.cleaner.max.compaction.lag.ms | The maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted. | 9223372036854775807 (no limit) |
| log.cleaner.min.cleanable.ratio | The minimum ratio of dirty log to total log for a log to be eligible for cleaning. If the log.cleaner.max.compaction.lag.ms or the lo… | 0.5 |
| log.cleaner.min.compaction.lag.ms | The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted. | 0 |
| log.cleaner.threads | The number of background threads to use for log cleaning | 1 |
| log.cleanup.policy | The default cleanup policy for segments beyond the retention window. A comma-separated list of valid policies. Valid policies are: … | delete |
| log.dir | The directory in which the log data is kept (supplemental for the log.dirs property) | /tmp/kafka-logs |
| log.dir.failure.timeout.ms | If the broker is unable to successfully communicate to the controller that some log directory has failed for longer than this time… | 30000 (30 s); newer versions only |
| log.dirs | A comma-separated list of the directories where the log data is stored. If not set, the value in log.dir is used. | null |
| log.flush.interval.messages | The number of messages accumulated on a log partition before messages are flushed to disk. | 9223372036854775807 (no limit) |
| log.flush.interval.ms | The maximum time in ms that a message in any topic is kept in memory before being flushed to disk. If not set, the value in log.flush.sc… | null |
| log.flush.offset.checkpoint.interval.ms | The frequency with which we update the persistent record of the last flush, which acts as the log recovery point. | 60000 (1 min) |
| log.flush.scheduler.interval.ms | The frequency in ms that the log flusher checks whether any log needs to be flushed to disk | 9223372036854775807 (never) |
| log.flush.start.offset.checkpoint.interval.ms | The frequency with which we update the persistent record of the log start offset | 60000 (1 min) |
| log.index.interval.bytes | The interval with which we add an entry to the offset index. | 4096 (4 KB) |
| log.index.size.max.bytes | The maximum size in bytes of the offset index | 10485760 (10 MB) |
| log.local.retention.bytes | The maximum size local log segments of a partition can grow to before becoming eligible for deletion. Default value is -2, it… | -2; 3.6+ |
| log.local.retention.ms | The number of milliseconds to keep the local log segments before they become eligible for deletion. Default value is -2, it represents… | -2; 3.6+ |
| log.message.timestamp.after.max.ms | This configuration sets the allowable timestamp difference between the message timestamp and the broker's timestamp. The message t… | 9223372036854775807, no limit (3.6–3.9); 3600000, 1 h (4.0+) |
| log.message.timestamp.before.max.ms | This configuration sets the allowable timestamp difference between the broker's timestamp and the message timestamp. The message t… | 9223372036854775807 (no limit); 3.6+ |
| log.message.timestamp.type | Define whether the timestamp in the message is message create time or log append time. The value should be either CreateTime or Lo… | CreateTime |
| log.preallocate | Whether to preallocate the file when creating a new segment. If you are using Kafka on Windows, you probably need to set it to true. | false |
| log.retention.bytes | The maximum size of the log before deleting it | -1 |
| log.retention.check.interval.ms | The frequency in milliseconds that the log cleaner checks whether any log is eligible for deletion | 300000 (5 min) |
| log.retention.hours | The number of hours to keep a log file before deleting it, tertiary to the log.retention.ms property | 168 |
| log.retention.minutes | The number of minutes to keep a log file before deleting it, secondary to the log.retention.ms property. If not set, the … | null |
| log.retention.ms | The number of milliseconds to keep a log file before deleting it. If not set, the value in log.retention.minutes… | null |
| log.roll.hours | The maximum time before a new log segment is rolled out (in hours), secondary to the log.roll.ms property | 168 |
| log.roll.jitter.hours | The maximum jitter to subtract from logRollTimeMillis (in hours), secondary to the log.roll.jitter.ms property | 0 |
| log.roll.jitter.ms | The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used | null |
| log.roll.ms | The maximum time before a new log segment is rolled out (in milliseconds). If not set, the value in log.roll.hours is used | null |
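
Retention precedence runs log.retention.ms, then log.retention.minutes, then log.retention.hours; log.retention.bytes applies independently, and whichever limit trips first makes a segment eligible for deletion. An illustrative fragment (values are examples only):

```properties
log.cleanup.policy=delete
# Time-based limit; ms overrides minutes, which overrides hours (default 168 h).
# 259200000 ms = 3 days:
log.retention.ms=259200000
# Size-based limit per partition; -1 (the default) means unbounded.
# 10737418240 bytes = 10 GiB:
log.retention.bytes=10737418240
# How often eligibility is re-checked (the default, 5 min):
log.retention.check.interval.ms=300000
```
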

| Broker configuration | Description | Default |
| --- | --- | --- |
| log.segment.bytes | The maximum size of a single log file | 1073741824 (1 GB) |
| log.segment.delete.delay.ms | The amount of time to wait before deleting a file from the filesystem. If the value is 0 and there is no file to delete, the syste… | 60000 (1 min) |
| max.connection.creation.rate | The maximum connection creation rate we allow in the broker at any time. Listener-level limits may also be configured by prefixing… | 2147483647 |
| max.connections | The maximum number of connections we allow in the broker at any time. This limit is applied in addition to any per-IP limits confi… | 2147483647 |
| max.connections.per.ip | The maximum number of connections we allow from each IP address. This can be set to 0 if there are overrides configured using max.… | 2147483647 |
| max.connections.per.ip.overrides | A comma-separated list of per-IP or hostname overrides to the default maximum number of connections. An example value is "hostName… | "" (empty) |
| max.incremental.fetch.session.cache.slots | The maximum number of total incremental fetch sessions that we will maintain. The FetchSessionCache is sharded into 8 shards and the l… | 1000 |
| max.request.partition.size.limit | The maximum number of partitions that can be served in one request. | 2000; newer versions only |
| message.max.bytes | The largest record batch size allowed by Kafka (after compression, if compression is enabled). If this is increased and there are c… | 1048588 (1 MB) |
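
message.max.bytes caps the record batch size the broker accepts; when raising it, the fetch-side ceilings are usually worth revisiting too, since consumers and replica fetchers still need to pull the larger batches. A hedged sketch (values are illustrative):

```properties
# Accept batches up to ~5 MiB after compression (default is ~1 MB):
message.max.bytes=5242880
# Fetch-side ceiling; the default 55 MB already exceeds the value above:
fetch.max.bytes=57671680
```
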
metadata.log.dir This configuration determines where we put the metadata log for clusters in KRaft mode. If it is not set, the metadata log is plac..nullnullnullnullnullnullnullnullnullnullnullnullnull
metadata.log.max.record.bytes.between.snapshots This is the maximum number of bytes in the log between the latest snapshot and the high-watermark needed before generating a new s..20971520209715202097152020971520209715202097152020971520209715202097152020971520209715202097152020971520
metadata.log.max.snapshot.interval.ms This is the maximum number of milliseconds to wait to generate a snapshot if there are committed records in the log that are not i..3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
3600000
1h
metadata.log.segment.bytes The maximum size of a single metadata log file.1073741824
1 GB
1073741824
1 GB
1073741824
1 GB
1073741824
1 GB
1073741824
1 GB
1073741824
1 GB
1073741824
1 GB
1073741824
1 GB
1073741824
1 GB
1073741824
1 GB
1073741824
1 GB
1073741824
1 GB
1073741824
1 GB
metadata.log.segment.ms The maximum time before a new metadata log file is rolled out (in milliseconds).604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
604800000
7d
metadata.max.idle.interval.ms This configuration controls how often the active controller should write no-op records to the metadata partition. If the value is ..500
500ms
500
500ms
500
500ms
500
500ms
500
500ms
500
500ms
500
500ms
500
500ms
500
500ms
500
500ms
metadata.max.retention.bytes The maximum combined size of the metadata log and snapshots before deleting old snapshots and log files. Since at least one snapsh..-1-1-1-1104857600
100 MB
104857600
100 MB
104857600
100 MB
104857600
100 MB
104857600
100 MB
104857600
100 MB
104857600
100 MB
104857600
100 MB
104857600
100 MB
metadata.max.retention.ms The number of milliseconds to keep a metadata log file or snapshot before deleting it. Since at least one snapshot must exist befo.. Default: 604800000 (7d)
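Taken together, the metadata.* settings above control how often the KRaft metadata log is snapshotted and how aggressively old segments and snapshots are trimmed. A minimal server.properties sketch that tightens snapshotting (the values here are illustrative, not recommendations):

```properties
# Generate a snapshot after 10 MB of new metadata records past the last snapshot
metadata.log.max.record.bytes.between.snapshots=10485760
# ...or at least every 30 minutes if unsnapshotted committed records exist
metadata.log.max.snapshot.interval.ms=1800000
# Cap the combined size of the metadata log plus snapshots at 512 MB
metadata.max.retention.bytes=536870912
```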
metric.reporters A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows p.. Default: org.apache.kafka.common.metrics.JmxReporter (listed for the most recent versions; empty in earlier ones)
metrics.num.samples The number of samples maintained to compute metrics. Default: 2
metrics.recording.level The highest recording level for metrics. Default: INFO
metrics.sample.window.ms The window of time a metrics sample is computed over. Default: 30000 (30s)
min.insync.replicas When a producer sets acks to "all" (or "-1"), this configuration specifies the minimum number of replicas that must acknowledge a .. Default: 1
node.id The node ID associated with the roles this process is playing when process.roles is non-empty. This is required configuration when.. Default: -1
num.io.threads The number of threads that the server uses for processing requests, which may include disk I/O. Default: 8
num.network.threads The number of threads that the server uses for receiving requests from the network and sending responses to the network. Note: ea.. Default: 3
num.partitions The default number of log partitions per topic. Default: 1
num.recovery.threads.per.data.dir The number of threads per data directory to be used for log recovery at startup and flushing at shutdown. Default: 1 in 3.0 to 3.9, 2 from 4.0 onwards
num.replica.alter.log.dirs.threads The number of threads that can move replicas between log directories, which may include disk I/O. Default: null
num.replica.fetchers Number of fetcher threads used to replicate records from each source broker. The total number of fetchers on each broker is bound .. Default: 1
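min.insync.replicas only takes effect for producers that use acks=all, so the two settings are usually tuned together. A common durability sketch (illustrative values) pairs a replication factor of 3 with a minimum of 2 in-sync replicas, tolerating one replica being down while still accepting writes:

```properties
# Broker side: new topics get 3 replicas; acks=all writes need 2 in sync
default.replication.factor=3
min.insync.replicas=2
```

```properties
# Producer side (producer.properties): wait for all in-sync replicas to acknowledge
acks=all
```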
offset.metadata.max.bytes The maximum size for a metadata entry associated with an offset commit. Default: 4096 (4 KB)
offsets.commit.timeout.ms Offset commit will be delayed until all replicas for the offsets topic receive the commit or this timeout is reached. This is simi.. Default: 5000 (5s)
offsets.load.buffer.size Batch size for reading from the offsets segments when loading offsets into the cache (soft-limit, overridden if records are too la.. Default: 5242880
offsets.retention.check.interval.ms Frequency at which to check for stale offsets. Default: 600000 (10min)
offsets.retention.minutes For subscribed consumers, committed offset of a specific partition will be expired and discarded when 1) this retention period has.. Default: 10080
offsets.topic.compression.codec Compression codec for the offsets topic - compression may be used to achieve "atomic" commits. Default: 0
offsets.topic.num.partitions The number of partitions for the offset commit topic (should not change after deployment). Default: 50
offsets.topic.replication.factor The replication factor for the offsets topic (set higher to ensure availability). Internal topic creation will fail until the clus.. Default: 3
offsets.topic.segment.bytes The offsets topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads. Default: 104857600 (100 MB)
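Because the partition count of the offsets topic cannot be changed after it is created, the offsets.topic.* settings are worth fixing deliberately before the cluster first starts. A sketch with illustrative values:

```properties
# Set before the internal offsets topic is created; cannot be changed afterwards
offsets.topic.num.partitions=50
offsets.topic.replication.factor=3
# Keep committed offsets for 14 days instead of the default 7 (10080 minutes)
offsets.retention.minutes=20160
```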
principal.builder.class The fully qualified name of a class that implements the KafkaPrincipalBuilder interface, which is used to build the KafkaPrincipal.. Default: org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
process.roles The roles that this process plays: 'broker', 'controller', or 'broker,controller' if it is both. This configuration is only applic..
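process.roles and node.id are the entry point for running in KRaft mode. A minimal sketch of a combined-mode node; the listener names, ports, and quorum voters list are illustrative and would differ in a real deployment:

```properties
# This node acts as both broker and controller (combined mode)
process.roles=broker,controller
node.id=1
# Illustrative single-node quorum; real deployments list every controller node
controller.quorum.voters=1@localhost:9093
controller.listener.names=CONTROLLER
listeners=PLAINTEXT://:9092,CONTROLLER://:9093
```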
producer.id.expiration.ms The time in ms that a topic partition leader will wait before expiring producer IDs. Producer IDs will not expire while a transact.. Default: 86400000 (1d)
producer.purgatory.purge.interval.requests The purge interval (in number of requests) of the producer request purgatory. Default: 1000
queued.max.request.bytes The number of queued bytes allowed before no more requests are read. Default: -1
queued.max.requests The number of queued requests allowed for data-plane, before blocking the network threads. Default: 500
quota.window.num The number of samples to retain in memory for client quotas. Default: 11
quota.window.size.seconds The time span of each sample for client quotas. Default: 1
remote.fetch.max.wait.ms The maximum amount of time the server will wait before answering the remote fetch request. Default: 500 (500ms)
remote.list.offsets.request.timeout.ms The maximum amount of time the server will wait for the remote list offsets request to complete. Default: 30000 (30s)
remote.log.index.file.cache.total.size.bytes The total size of the space allocated to store index files fetched from remote storage in the local storage. Default: 1073741824 (1 GB)
remote.log.manager.copier.thread.pool.size Size of the thread pool used in scheduling tasks to copy segments. Default: -1 when introduced, 10 in later versions
remote.log.manager.copy.max.bytes.per.second The maximum number of bytes that can be copied from local storage to remote storage per second. This is a global limit for all the.. Default: 9223372036854775807 (unbounded)
remote.log.manager.copy.quota.window.num The number of samples to retain in memory for remote copy quota management. The default value is 11, which means there are 10 whol.. Default: 11
remote.log.manager.copy.quota.window.size.seconds The time span of each sample for remote copy quota management. The default value is 1 second. Default: 1
remote.log.manager.expiration.thread.pool.size Size of the thread pool used in scheduling tasks to clean up the expired remote log segments. Default: -1 when introduced, 10 in later versions
remote.log.manager.fetch.max.bytes.per.second The maximum number of bytes that can be fetched from remote storage to local storage per second. This is a global limit for all th.. Default: 9223372036854775807 (unbounded)
remote.log.manager.fetch.quota.window.num The number of samples to retain in memory for remote fetch quota management. The default value is 11, which means there are 10 who.. Default: 11
remote.log.manager.fetch.quota.window.size.seconds The time span of each sample for remote fetch quota management. The default value is 1 second. Default: 1
remote.log.manager.follower.thread.pool.size Size of the thread pool used in scheduling follower tasks to read the highest-uploaded remote-offset for follower partitions. Default: 2
remote.log.manager.task.interval.ms Interval at which remote log manager runs the scheduled tasks like copy segments, and clean up remote log segments. Default: 30000 (30s)
remote.log.manager.thread.pool.size Size of the thread pool used in scheduling tasks to copy segments, fetch remote log indexes and clean up remote log segments. Default: 10 in earlier versions, 2 in the latest versions
remote.log.metadata.custom.metadata.max.bytes The maximum size of custom metadata in bytes that the broker should accept from a remote storage plugin. If custom metadata excee.. Default: 128 (128 B)
remote.log.metadata.manager.class.name Fully qualified class name of `RemoteLogMetadataManager` implementation. Default: org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
remote.log.metadata.manager.class.path Class path of the `RemoteLogMetadataManager` implementation. If specified, the RemoteLogMetadataManager implementation and its dep.. Default: null
remote.log.metadata.manager.impl.prefix Prefix used for properties to be passed to RemoteLogMetadataManager implementation. For example this value can be `rlmm.config.`. Default: rlmm.config.
remote.log.metadata.manager.listener.name Listener name of the local broker to which it should get connected if needed by RemoteLogMetadataManager implementation. Default: null
remote.log.reader.max.pending.tasks Maximum remote log reader thread pool task queue size. If the task queue is full, fetch requests are served with an error. Default: 100
remote.log.reader.threads Size of the thread pool that is allocated for handling remote log reads. Default: 10
remote.log.storage.manager.class.name Fully qualified class name of `RemoteStorageManager` implementation. Default: null
remote.log.storage.manager.class.path Class path of the `RemoteStorageManager` implementation. If specified, the RemoteStorageManager implementation and its dependent l.. Default: null
remote.log.storage.manager.impl.prefix Prefix used for properties to be passed to RemoteStorageManager implementation. For example this value can be `rsm.config.`. Default: rsm.config.
remote.log.storage.system.enable Whether to enable tiered storage functionality in a broker or not. Valid values are `true` or `false` and the default value is fal.. Default: false
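Enabling tiered storage means flipping the feature flag and naming a RemoteStorageManager plugin; a sketch follows, where the plugin class and its prefixed property are hypothetical placeholders, not a real implementation:

```properties
remote.log.storage.system.enable=true
# Hypothetical plugin class; substitute your remote storage implementation
remote.log.storage.manager.class.name=com.example.MyRemoteStorageManager
# Properties carrying the rsm.config. prefix are passed through to the plugin
rsm.config.bucket=my-tiered-storage-bucket
# The topic-based metadata manager is the default implementation
remote.log.metadata.manager.class.name=org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
```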
replica.fetch.backoff.ms The amount of time to sleep when fetch partition error occurs. Default: 1000 (1s)
replica.fetch.max.bytes The number of bytes of messages to attempt to fetch for each partition. This is not an absolute maximum, if the first record batch.. Default: 1048576 (1 MB)
replica.fetch.min.bytes Minimum bytes expected for each fetch response. If not enough bytes, wait up to replica.fetch.wait.max.ms (broker config). Default: 1 (1 B)
replica.fetch.response.max.bytes Maximum bytes expected for the entire fetch response. Records are fetched in batches, and if the first record batch in the first n.. Default: 10485760 (10 MB)
replica.fetch.wait.max.ms The maximum wait time for each fetcher request issued by follower replicas. This value should always be less than the replica.lag... Default: 500 (500ms)
replica.high.watermark.checkpoint.interval.ms The frequency with which the high watermark is saved out to disk. Default: 5000 (5s)
replica.lag.time.max.ms If a follower hasn't sent any fetch requests or hasn't consumed up to the leader's log end offset for at least this time, the leade.. Default: 30000 (30s)
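Several of the replication timeouts above are interdependent: replica.fetch.wait.max.ms should stay below replica.lag.time.max.ms (so a fetch blocked waiting for data is not mistaken for a lagging follower) and below replica.socket.timeout.ms. A consistent sketch using the defaults:

```properties
# A follower fetch may block up to 500 ms waiting for data...
replica.fetch.wait.max.ms=500
# ...which must be well under the lag threshold that shrinks the ISR
replica.lag.time.max.ms=30000
# The socket timeout must be at least replica.fetch.wait.max.ms
replica.socket.timeout.ms=30000
```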
replica.selector.class The fully qualified class name that implements ReplicaSelector. This is used by the broker to find the preferred read replica. By .. Default: null
replica.socket.receive.buffer.bytes The socket receive buffer for network requests to the leader for replicating data. Default: 65536 (64 KB)
replica.socket.timeout.ms The socket timeout for network requests. Its value should be at least replica.fetch.wait.max.ms. Default: 30000 (30s)
replication.quota.window.num The number of samples to retain in memory for replication quotas. Default: 11
replication.quota.window.size.seconds The time span of each sample for replication quotas. Default: 1
request.timeout.ms The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not r.. Default: 30000 (30s)
sasl.client.callback.handler.class The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. Default: null
sasl.enabled.mechanisms The list of SASL mechanisms enabled in the Kafka server. The list may contain any mechanism for which a security provider is avail.. Default: GSSAPI
sasl.jaas.config JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format .. Default: null
sasl.kerberos.kinit.cmd Kerberos kinit command path. Default: /usr/bin/kinit
sasl.kerberos.min.time.before.relogin Login thread sleep time between refresh attempts. Default: 60000
sasl.kerberos.principal.to.local.rules A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in.. Default: DEFAULT
sasl.kerberos.service.name The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. Default: null
sasl.kerberos.ticket.renew.jitter Percentage of random jitter added to the renewal time. Default: 0.05
sasl.kerberos.ticket.renew.window.factor Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which .. Default: 0.8
sasl.login.callback.handler.class The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For bro.. Default: null
sasl.login.class The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener .. Default: null
sasl.login.connect.timeout.ms The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHB.. Default: null
sasl.login.read.timeout.ms The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER. Default: null
sasl.login.refresh.buffer.seconds The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would ot.. Default: 300
sasl.login.refresh.min.period.seconds The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between.. Default: 60
sasl.login.refresh.window.factor Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which.. Default: 0.8
sasl.login.refresh.window.jitter The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. .. Default: 0.05
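The sasl.* settings combine with listener configuration on the broker. A sketch of a broker enabling SASL/PLAIN over TLS; the inline JAAS credentials are placeholders and production setups would use an external secret store:

```properties
listeners=SASL_SSL://:9093
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
# Listener-prefixed inline JAAS config; placeholder credentials
listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="admin" password="admin-secret" \
  user_admin="admin-secret";
```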
sasl.login.retry.backoff.max.ms The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login us.. Default: 10000 (10s)
sasl.login.retry.backoff.ms The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login us.. Default: 100 (100ms)
sasl.mechanism.controller.protocol SASL mechanism used for communication with controllers. Default: GSSAPI
sasl.mechanism.inter.broker.protocol SASL mechanism used for inter-broker communication. Default: GSSAPI
sasl.oauthbearer.assertion.algorithm The algorithm the Apache Kafka client should use to sign the assertion sent to the identity provider. It is also used as the value.. Default: RS256
sasl.oauthbearer.assertion.claim.aud The JWT aud (Audience) claim which will be included in the client JWT assertion created locally. Note: If a value for sasl.oauthbea.. Default: null
sasl.oauthbearer.assertion.claim.exp.seconds The number of seconds in the future for which the JWT is valid. The value is used to determine the JWT exp (Expiration) claim base.. Default: 300
sasl.oauthbearer.assertion.claim.iss The value to be used as the iss (Issuer) claim which will be included in the client JWT assertion created locally. Note: If a value.. Default: null
sasl.oauthbearer.assertion.claim.jti.include Flag that determines if the JWT assertion should generate a unique ID for the JWT and include it in the jti (JWT ID) claim. Note: I.. Default: false
sasl.oauthbearer.assertion.claim.nbf.seconds The number of seconds in the past from which the JWT is valid. The value is used to determine the JWT nbf (Not Before) claim based.. Default: 60
sasl.oauthbearer.assertion.claim.sub The value to be used as the sub (Subject) claim which will be included in the client JWT assertion created locally. Note: If a valu.. Default: null
sasl.oauthbearer.assertion.file File that contains a pre-generated JWT assertion. The underlying implementation caches the file contents to avoid the performance h.. Default: null
sasl.oauthbearer.assertion.private.key.file File that contains a private key in the standard PEM format which is used to sign the JWT assertion sent to the identity provider... Default: null
sasl.oauthbearer.assertion.private.key.passphrase The optional passphrase to decrypt the private key file specified by sasl.oauthbearer.assertion.private.key.file. Note: If the file.. Default: null
sasl.oauthbearer.assertion.template.file This optional configuration specifies the file containing the JWT headers and/or payload claims to be used when creating the JWT a.. Default: null
sasl.oauthbearer.client.credentials.client.id The ID (defined in/by the OAuth identity provider) to identify the client requesting the token. The client ID was previously stored.. Default: null
sasl.oauthbearer.client.credentials.client.secret The secret (defined by either the user or preassigned, depending on the identity provider) of the client requesting the token. The .. Default: null
sasl.oauthbearer.clock.skew.seconds The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker. Default: 30
sasl.oauthbearer.expected.audience The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. .. Default: null
sasl.oauthbearer.expected.issuer The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected .. Default: null
sasl.oauthbearer.jwks.endpoint.refresh.ms The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the.. Default: 3600000 (1h)
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the extern.. Default: 10000 (10s)
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external aut.. Default: 100 (100ms)
sasl.oauthbearer.jwks.endpoint.url The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or fi.. Default: null
sasl.oauthbearer.jwt.retriever.class The fully-qualified class name of a JwtRetriever implementation used to request tokens from the identity provider. The default conf.. Default: org.apache.kafka.common.security.oauthbearer.DefaultJwtRetriever
sasl.oauthbearer.jwt.validator.class The fully-qualified class name of a JwtValidator implementation used to validate the JWT from the identity provider. The default va.. Default: org.apache.kafka.common.security.oauthbearer.DefaultJwtValidator
sasl.oauthbearer.scope This is the level of access a client application is granted to a resource or API which is included in the token request. If provid.. Default: null
sasl.oauthbearer.scope.claim.name The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scop.. Default: scope
sasl.oauthbearer.sub.claim.name The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subj.. Default: sub
sasl.oauthbearer.token.endpoint.url The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests.. Default: null
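For broker-side OAUTHBEARER validation, the JWKS endpoint and the expected issuer/audience checks work together. A sketch where the identity-provider URLs, issuer, and audience are hypothetical:

```properties
listener.name.sasl_ssl.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;
# Hypothetical identity-provider endpoints
sasl.oauthbearer.jwks.endpoint.url=https://idp.example.com/.well-known/jwks.json
sasl.oauthbearer.expected.issuer=https://idp.example.com/
sasl.oauthbearer.expected.audience=kafka-cluster
# Tolerate 30 s of clock skew between broker and identity provider (the default)
sasl.oauthbearer.clock.skew.seconds=30
```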
sasl.server.callback.handler.class The fully qualified name of a SASL server callback handler class that implements the AuthenticateCallbackHandler interface. Server.. Default: null
sasl.server.max.receive.size The maximum receive size allowed before and during initial SASL authentication. Default receive size is 512KB. GSSAPI limits reque.. Default: 524288
security.inter.broker.protocol Security protocol used to communicate between brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. It is an error .. Default: PLAINTEXT
security.providers A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement .. Default: null
share.coordinator.append.linger.ms The duration in milliseconds that the share coordinator will wait for writes to accumulate before flushing them to disk. Set to -1.. Default: 10 (10ms) when introduced, then 5 (5ms), then -1 in the latest version
share.coordinator.load.buffer.size Batch size for reading from the share-group state topic when loading state information into the cache (soft-limit, overridden if r.. Default: 5242880
share.coordinator.snapshot.update.records.per.snapshot The number of update records the share coordinator writes between snapshot records. Default: 500
share.coordinator.state.topic.compression.codec Compression codec for the share-group state topic. Default: 0
share.coordinator.state.topic.min.isr Overridden min.insync.replicas for the share-group state topic. Default: 2
share.coordinator.state.topic.num.partitions The number of partitions for the share-group state topic (should not change after deployment). Default: 50
share.coordinator.state.topic.replication.factor Replication factor for the share-group state topic. Topic creation will fail until the cluster size meets this replication factor .. Default: 3
share.coordinator.state.topic.segment.bytes The log segment size for the share-group state topic. Default: 104857600 (100 MB)
share.coordinator.threads The number of threads used by the share coordinator. Default: 1
share.coordinator.write.timeout.ms The duration in milliseconds that the share coordinator will wait for all replicas of the share-group state topic to receive a wri.. Default: 5000 (5s)
share.fetch.purgatory.purge.interval.requests The purge interval (in number of requests) of the share fetch request purgatory. Default: 1000
socket.connection.setup.timeout.max.ms The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will inc.. Default: 30000 (30s)
socket.connection.setup.timeout.ms The amount of time the client will wait for the socket connection to be established. If the connection is not built before the tim.. Default: 10000 (10s)
socket.listen.backlog.size The maximum number of pending connections on the socket. In Linux, you may also need to configure somaxconn and tcp_max_syn_backlo.. Default: 50
socket.receive.buffer.bytes The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. Default: 102400 (100 KB)
socket.request.max.bytes The maximum number of bytes in a socket request. Default: 104857600 (100 MB)
socket.send.buffer.bytes The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. Default: 102400 (100 KB)
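The socket.* buffer settings accept -1 to defer to the operating system, which is often preferable on kernels with TCP buffer autotuning. A sketch with illustrative values:

```properties
# -1 lets the OS pick SO_RCVBUF/SO_SNDBUF (useful with kernel autotuning)
socket.receive.buffer.bytes=-1
socket.send.buffer.bytes=-1
# Hard cap on a single request; must exceed the largest allowed record batch
socket.request.max.bytes=104857600
```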
ssl.allow.dn.changes Indicates whether changes to the certificate distinguished name should be allowed during a dynamic reconfiguration of certificates.. Default: false
ssl.allow.san.changes Indicates whether changes to the certificate subject alternative names should be allowed during a dynamic reconfiguration of certi.. Default: false
ssl.cipher.suites A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotia..
ssl.client.auth Configures the Kafka broker to request client authentication. Default: none
ssl.enabled.protocols The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' .. Default: TLSv1.2,TLSv1.3 (Java 11 or newer) or TLSv1.2 (older Java)
ssl.endpoint.identification.algorithm The endpoint identification algorithm to validate server hostname using server certificate. Default: https
ssl.engine.factory.class The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.. Default: null
ssl.key.password The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'. Default: null
ssl.keymanager.algorithm The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for t.. Default: SunX509
ssl.keystore.certificate.chain Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list .. Default: null
ssl.keystore.key Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. .. Default: null
ssl.keystore.location The location of the key store file. This is optional for client and can be used for two-way authentication for client. Default: null
ssl.keystore.password The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. K.. Default: null
ssl.keystore.type The file format of the key store file. This is optional for client. The values currently supported by the default `ssl.engine.fact.. Default: JKS
ssl.protocol The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise..TLSv1.2TLSv1.3TLSv1.2TLSv1.3TLSv1.2TLSv1.3TLSv1.3TLSv1.3TLSv1.3TLSv1.2TLSv1.3TLSv1.3TLSv1.3
ssl.provider The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.nullnullnullnullnullnullnullnullnullnullnullnullnull
ssl.secure.random.implementation The SecureRandom PRNG implementation to use for SSL cryptography operations.nullnullnullnullnullnullnullnullnullnullnullnullnull
ssl.trustmanager.algorithm The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured f..PKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIX
ssl.truststore.certificates Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X...nullnullnullnullnullnullnullnullnullnullnullnullnull
ssl.truststore.location The location of the trust store file.nullnullnullnullnullnullnullnullnullnullnullnullnull
ssl.truststore.password The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity che..nullnullnullnullnullnullnullnullnullnullnullnullnull
ssl.truststore.type The file format of the trust store file. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12..JKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKS
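The default ssl.endpoint.identification.algorithm=https corresponds to standard TLS hostname verification. A minimal sketch of the equivalent behavior using Python's stdlib ssl module (an illustration of the semantics, not Kafka client code):

```python
import ssl

# A client-side TLS context roughly analogous to a Kafka client running with
# ssl.endpoint.identification.algorithm=https: the server certificate is
# verified against the trust store AND its hostname is checked.
ctx = ssl.create_default_context()
assert ctx.check_hostname is True           # hostname verification on ("https")
assert ctx.verify_mode == ssl.CERT_REQUIRED

# Setting the algorithm to an empty string in Kafka disables hostname
# verification; the stdlib analogue of that (discouraged) configuration:
insecure = ssl.create_default_context()
insecure.check_hostname = False
insecure.verify_mode = ssl.CERT_NONE
```

Leaving the algorithm at https is the safe choice; clearing it is occasionally done when certificates do not carry the broker hostnames.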
telemetry.max.bytes  The maximum size (after compression if compression is used) of telemetry metrics pushed from a client to the broker. The default v… Default: 1048576 (1 MB)
transaction.abort.timed.out.transaction.cleanup.interval.ms  The interval at which to roll back transactions that have timed out. Default: 10000 (10s)
transaction.max.timeout.ms  The maximum allowed timeout for transactions. If a client's requested transaction time exceeds this, then the broker will return an… Default: 900000 (15min)
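The broker rejects a producer whose requested transaction timeout exceeds transaction.max.timeout.ms. A sketch of that server-side check (helper and error names are illustrative, mirroring the config keys rather than broker internals):

```python
# Broker-side guard implied by transaction.max.timeout.ms: a client-requested
# transaction.timeout.ms above the broker maximum is rejected.
TRANSACTION_MAX_TIMEOUT_MS = 900_000  # broker default: 15 min

def validate_transaction_timeout(requested_ms: int) -> int:
    """Return the accepted timeout, or raise for an over-limit request."""
    if requested_ms > TRANSACTION_MAX_TIMEOUT_MS:
        raise ValueError(
            "InvalidTransactionTimeout: requested %d ms exceeds "
            "transaction.max.timeout.ms (%d ms)"
            % (requested_ms, TRANSACTION_MAX_TIMEOUT_MS))
    return requested_ms

validate_transaction_timeout(60_000)   # a typical producer timeout: accepted
```

Keeping the client's transaction.timeout.ms comfortably under the broker maximum avoids this error when brokers are configured more strictly than the default.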
transaction.partition.verification.enable  Enable verification that checks that the partition has been added to the transaction before writing transactional records to the p… Default: true
transaction.remove.expired.transaction.cleanup.interval.ms  The interval at which to remove transactions that have expired due to transactional.id.expiration.ms passing. Default: 3600000 (1h)
transaction.state.log.load.buffer.size  Batch size for reading from the transaction log segments when loading producer ids and transactions into the cache (soft-limit, ov… Default: 5242880
transaction.state.log.min.isr  The minimum number of replicas that must acknowledge a write to transaction topic in order to be considered successful. Default: 2
transaction.state.log.num.partitions  The number of partitions for the transaction topic (should not change after deployment). Default: 50
transaction.state.log.replication.factor  The replication factor for the transaction topic (set higher to ensure availability). Internal topic creation will fail until the… Default: 3
transaction.state.log.segment.bytes  The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads. Default: 104857600 (100 MB)
transaction.two.phase.commit.enable  If set to true, then the broker is informed that the client is participating in two phase commit protocol and transactions that th… Default: false
transactional.id.expiration.ms  The time in ms that the transaction coordinator will wait without receiving any transaction status updates for the current transac… Default: 604800000 (7d)
unclean.leader.election.enable  Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result… Default: false
group.share.max.groups  The maximum number of share groups. Default: 10
alter.config.policy.kraft.compatibility.enable  This configuration controls whether for incremental alter config operations of type SUBTRACT or DELETE on a config entry of type L… Default: false
auto.include.jmx.reporter  Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters. This configuration will be r… Default: true
broker.id.generation.enable  Enable automatic broker id generation on the server. When enabled the value configured for reserved.broker.max.id should be review… Default: true
control.plane.listener.name  Name of listener used for communication between controller and brokers. A broker will use the control.plane.listener.name to locat… Default: null
controlled.shutdown.max.retries  Controlled shutdown can fail for multiple reasons. This determines the number of retries when such failure happens. Default: 3
controlled.shutdown.retry.backoff.ms  Before each retry, the system needs time to recover from the state that caused the previous failure (Controller fail over, replica… Default: 5000 (5s)
delegation.token.master.key  DEPRECATED: An alias for delegation.token.secret.key, which should be used instead of this config. Default: null
eligible.leader.replicas.enable  Enable the eligible leader replicas feature. Default: false
inter.broker.protocol.version  Specify which version of the inter-broker protocol will be used. This is typically bumped after all brokers were upgraded to a ne… Default: tracks the broker release (3.0-IV1, 3.1-IV0, 3.2-IV0, 3.3-IV3, 3.4-IV0, 3.5-IV2, 3.6-IV2, 3.7-IV4, 3.8-IV0, 3.9-IV0; 3.x only)
leader.imbalance.per.broker.percentage  The ratio of leader imbalance allowed per broker. The controller would trigger a leader balance if it goes above this value per br… Default: 10
log.message.downconversion.enable  This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. When set to false,… Default: true
log.message.format.version  Specify the message format version the broker will use to append messages to the logs. The value should be a valid MetadataVersion… Default: 3.0-IV1
log.message.timestamp.difference.max.ms  [DEPRECATED] The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in… Default: 9223372036854775807 (effectively unlimited)
offsets.commit.required.acks  DEPRECATED: The required acks before the commit can be accepted. In general, the default (-1) should not be overridden. Default: -1
password.encoder.cipher.algorithm  The Cipher algorithm used for encoding dynamically configured passwords. Default: AES/CBC/PKCS5Padding
password.encoder.iterations  The iteration count used for encoding dynamically configured passwords. Default: 4096
password.encoder.key.length  The key length used for encoding dynamically configured passwords. Default: 128
password.encoder.keyfactory.algorithm  The SecretKeyFactory algorithm used for encoding dynamically configured passwords. Default is PBKDF2WithHmacSHA512 if available an… Default: null
password.encoder.old.secret  The old secret that was used for encoding dynamically configured passwords. This is required only when the secret is updated. If s… Default: null
password.encoder.secret  The secret used for encoding dynamically configured passwords for this broker. Default: null
reserved.broker.max.id  Max number that can be used for a broker.id. Default: 1000
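The password.encoder.* defaults above describe a standard PBKDF2 key derivation (HMAC-SHA512 key factory where available, 4096 iterations, 128-bit key). The parameters can be illustrated with Python's hashlib; this is a sketch of the parameters only, not the broker's actual code path, and the secret and salt here are made up:

```python
import hashlib

def derive_encoder_key(secret: str, salt: bytes) -> bytes:
    """Derive a password-encoder key with the documented default parameters."""
    return hashlib.pbkdf2_hmac(
        "sha512",                # analogue of PBKDF2WithHmacSHA512
        secret.encode("utf-8"),  # password.encoder.secret
        salt,
        iterations=4096,         # password.encoder.iterations
        dklen=128 // 8,          # password.encoder.key.length (bits -> bytes)
    )

key = derive_encoder_key("broker-secret", b"example-salt")
assert len(key) == 16            # 128-bit key
```

Raising password.encoder.iterations strengthens the derived key at the cost of a slower (one-time) derivation.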
zookeeper.clientCnxnSocket  Typically set to org.apache.zookeeper.ClientCnxnSocketNetty when using TLS connectivity to ZooKeeper. Overrides any explicit value… Default: null
zookeeper.connect  Specifies the ZooKeeper connection string in the form hostname:port where host and port are the host and port of a ZooKeeper serve… Default: null
zookeeper.connection.timeout.ms  The max time that the client waits to establish a connection to ZooKeeper. If not set, the value in zookeeper.session.timeout.ms i… Default: null
zookeeper.max.in.flight.requests  The maximum number of unacknowledged requests the client will send to ZooKeeper before blocking. Default: 10
zookeeper.metadata.migration.enable  Enable ZK to KRaft migration. Default: false
zookeeper.session.timeout.ms  ZooKeeper session timeout. Default: 18000 (18s)
zookeeper.set.acl  Set client to use secure ACLs. Default: false
zookeeper.ssl.cipher.suites  Specifies the enabled cipher suites to be used in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookee… Default: null
zookeeper.ssl.client.enable  Set client to use TLS when connecting to ZooKeeper. An explicit value overrides any value set via the zookeeper.client.secure syst… Default: false
zookeeper.ssl.crl.enable  Specifies whether to enable Certificate Revocation List in the ZooKeeper TLS protocols. Overrides any explicit value set via the z… Default: false
zookeeper.ssl.enabled.protocols  Specifies the enabled protocol(s) in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.enabl… Default: null
zookeeper.ssl.endpoint.identification.algorithm  Specifies whether to enable hostname verification in the ZooKeeper TLS negotiation process, with (case-insensitively) "https" mean… Default: HTTPS
zookeeper.ssl.keystore.location  Keystore location when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via th… Default: null
zookeeper.ssl.keystore.password  Keystore password when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via th… Default: null
zookeeper.ssl.keystore.type  Keystore type when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zo… Default: null
zookeeper.ssl.ocsp.enable  Specifies whether to enable Online Certificate Status Protocol in the ZooKeeper TLS protocols. Overrides any explicit value set vi… Default: false
zookeeper.ssl.protocol  Specifies the protocol to be used in ZooKeeper TLS negotiation. An explicit value overrides any value set via the same-named zooke… Default: TLSv1.2
zookeeper.ssl.truststore.location  Truststore location when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.lo… Default: null
zookeeper.ssl.truststore.password  Truststore password when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.pa… Default: null
zookeeper.ssl.truststore.type  Truststore type when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.type s… Default: null
zookeeper.sync.time.ms  How far a ZK follower can be behind a ZK leader. Default: 2000 (2s)

consumer  Description  Default (versions 3.0 to 4.2)
allow.auto.create.topics  Allow automatic topic creation on the broker when subscribing to or assigning a topic. A topic being subscribed to will be automat… Default: true
auto.commit.interval.ms  The frequency in milliseconds that the consumer offsets are auto-committed to Kafka if enable.auto.commit is set to true. Default: 5000 (5s)
auto.offset.reset  What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because t… Default: latest
bootstrap.servers  A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all ser… No default.
check.crcs  Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred… Default: true
client.dns.lookup  Controls how the client uses DNS lookups. If set to use_all_dns_ips, connect to each returned IP address in sequence until a succe… Default: use_all_dns_ips
client.id  An ID prefix string used for the client IDs of internal (main, restore, and global) consumers, producers, and admin clients with… Default: empty
client.rack  A rack identifier for this client. This can be any string value which indicates where this client is physically located. It corres… Default: empty
config.providers  Comma-separated names of ConfigProvider classes, loaded and used in the order specified. Implementing the interface ConfigProvide… Default: empty
connections.max.idle.ms  Close idle connections after the number of milliseconds specified by this config. Default: 540000 (9min)
default.api.timeout.ms  Specifies the timeout (in milliseconds) for client APIs. This configuration is used as the default timeout for all client operatio… Default: 60000 (1min)
enable.auto.commit  If true the consumer's offset will be periodically committed in the background. Default: true
enable.metrics.push  Whether to enable pushing of internal client metrics for (main, restore, and global) consumers, producers, and admin clients. The… Default: true
exclude.internal.topics  Whether internal topics matching a subscribed pattern should be excluded from the subscription. It is always possible to explicitl… Default: true
fetch.max.bytes  The maximum amount of data the server should return for a fetch request. Records are fetched in batches by the consumer, and if th… Default: 52428800 (50 MB)
fetch.max.wait.ms  The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately sat… Default: 500 (500ms)
fetch.min.bytes  The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait f… Default: 1
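Together, fetch.min.bytes and fetch.max.wait.ms define when the broker answers a fetch: as soon as enough data has accumulated, or when the wait budget is exhausted, whichever comes first. A sketch of that decision (illustrative helper, not broker code):

```python
def should_respond(accumulated_bytes: int, waited_ms: int,
                   fetch_min_bytes: int = 1,
                   fetch_max_wait_ms: int = 500) -> bool:
    """Broker answers once data meets fetch.min.bytes or the wait expires."""
    return (accumulated_bytes >= fetch_min_bytes
            or waited_ms >= fetch_max_wait_ms)

assert should_respond(1, 0) is True      # default fetch.min.bytes=1: any data
assert should_respond(0, 499) is False   # still under both thresholds
assert should_respond(0, 500) is True    # fetch.max.wait.ms elapsed
```

Raising fetch.min.bytes trades latency (up to fetch.max.wait.ms) for larger, more efficient fetch responses.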
group.id  A unique string that identifies the consumer group this consumer belongs to. Default: null
group.instance.id  A unique identifier of the consumer instance provided by the end user. Only non-empty strings are permitted. If set, the consumer… Default: null
group.protocol  The group protocol consumer should use. We currently support "classic" or "consumer". If "consumer" is specified, then the consume… Default: classic
group.remote.assignor  The server-side assignor to use. If no assignor is specified, the group coordinator will pick one. This configuration is applied o… Default: null
heartbeat.interval.ms  The expected time between heartbeats to the group coordinator when using Kafka's group management facilities. Heartbeats are used… Default: 3000 (3s)
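heartbeat.interval.ms must be lower than session.timeout.ms, and the Kafka documentation recommends keeping it at no more than one-third of that value. A sanity check of the relationship (hypothetical helper; the real client enforces only the strict inequality):

```python
def check_heartbeat(heartbeat_ms: int, session_ms: int) -> bool:
    """True when the heartbeat interval is within the one-third guideline."""
    if heartbeat_ms >= session_ms:
        raise ValueError("heartbeat.interval.ms must be < session.timeout.ms")
    return heartbeat_ms * 3 <= session_ms

assert check_heartbeat(3_000, 45_000) is True   # defaults: 3s vs 45s
```

The default pair (3s heartbeat, 45s session timeout) leaves the coordinator roughly 15 missed heartbeats before it declares the consumer dead and rebalances.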
interceptor.classes  A list of classes to use as interceptors. Implementing the org.apache.kafka.clients.consumer.ConsumerInterceptor interface allows… Default: empty
isolation.level  Controls how to read messages written transactionally. If set to read_committed, consumer.poll() will only return transactional me… Default: read_uncommitted
key.deserializer  Deserializer class for key that implements the org.apache.kafka.common.serialization.Deserializer interface. No default.
max.partition.fetch.bytes  The maximum amount of data per-partition the server will return. Records are fetched in batches by the consumer. If the first reco… Default: 1048576 (1 MB)
max.poll.interval.ms  The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of… Default: 300000 (5min)
max.poll.records  The maximum number of records returned in a single call to poll(). Note that max.poll.records does not impact the underlying fetc… Default: 500
metadata.max.age.ms  The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership cha… Default: 300000 (5min)
metadata.recovery.rebootstrap.trigger.ms  If a client configured to rebootstrap using metadata.recovery.strategy=rebootstrap is unable to obtain metadata from any of the br… Default: 300000 (5min)
metadata.recovery.strategy  Controls how the client recovers when none of the brokers known to it is available. If set to none, the client fails. If set to re… Default: none in older versions, rebootstrap in the most recent versions
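The interaction of the two settings above can be sketched as a timer check: under the rebootstrap strategy, a client that has gone metadata.recovery.rebootstrap.trigger.ms without obtaining metadata falls back to its bootstrap servers (helper names are illustrative, timings in ms):

```python
def should_rebootstrap(now_ms: int, last_metadata_ms: int,
                       trigger_ms: int = 300_000,
                       strategy: str = "rebootstrap") -> bool:
    """True when the client should fall back to its bootstrap servers."""
    if strategy == "none":
        return False                     # client fails instead of re-bootstrapping
    return now_ms - last_metadata_ms >= trigger_ms

assert should_rebootstrap(600_000, 200_000) is True    # 400s without metadata
assert should_rebootstrap(600_000, 400_000) is False   # only 200s elapsed
```

Rebootstrapping matters when every broker a client learned about has been replaced (e.g. a full cluster roll behind a stable bootstrap DNS name).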
metric.reporters  A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows p… Default: empty in older versions, org.apache.kafka.common.metrics.JmxReporter in the most recent versions
metrics.num.samples  The number of samples maintained to compute metrics. Default: 2
metrics.recording.level  The highest recording level for metrics. Default: INFO
metrics.sample.window.ms  The window of time a metrics sample is computed over. Default: 30000 (30s)
partition.assignment.strategy  A list of class names or class types, ordered by preference, of supported partition assignment strategies that the client will use… Default: org.apache.kafka.clients.consumer.RangeAssignor, org.apache.kafka.clients.consumer.CooperativeStickyAssignor
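The RangeAssignor listed first in the default can be sketched for a single topic: partitions are split into contiguous ranges, one per consumer (sorted by member id), with the first consumers taking one extra partition when the count does not divide evenly. A simplified illustration, not the client's implementation:

```python
def range_assign(partitions: int, consumers: list[str]) -> dict[str, list[int]]:
    """Assign partition ids 0..partitions-1 in contiguous ranges."""
    members = sorted(consumers)
    per, extra = divmod(partitions, len(members))
    assignment, start = {}, 0
    for i, member in enumerate(members):
        count = per + (1 if i < extra else 0)   # first `extra` members get +1
        assignment[member] = list(range(start, start + count))
        start += count
    return assignment

assert range_assign(5, ["c2", "c1"]) == {"c1": [0, 1, 2], "c2": [3, 4]}
```

Because ranges are computed per topic, RangeAssignor can skew load across consumers when many topics each have a small partition count; CooperativeStickyAssignor avoids both that skew and stop-the-world rebalances.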
receive.buffer.bytes  The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. Default: 65536 (64 KB)
reconnect.backoff.max.ms  The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provide… Default: 1000 (1s)
reconnect.backoff.ms  The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a t… Default: 50 (50ms)
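The two reconnect settings define a capped exponential backoff: the wait doubles per consecutive connection failure, starting at reconnect.backoff.ms and capped at reconnect.backoff.max.ms. The real client also adds up to 20% random jitter, omitted here for determinism; this is a sketch of the schedule, not client code:

```python
def reconnect_backoff(failures: int, base_ms: int = 50,
                      max_ms: int = 1_000) -> int:
    """Wait before the next reconnect attempt after `failures` failures."""
    return min(base_ms * 2 ** failures, max_ms)

# Default schedule: 50, 100, 200, 400, 800, then capped at 1000 ms.
assert [reconnect_backoff(n) for n in range(6)] == [50, 100, 200, 400, 800, 1000]
```

The same base-plus-cap pattern governs retry.backoff.ms / retry.backoff.max.ms for request retries.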
request.timeout.ms  The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not r… Default: 30000 (30s)
retry.backoff.max.ms  The maximum amount of time in milliseconds to wait when retrying a request to the broker that has repeatedly failed. If provided,… Default: 1000 (1s)
retry.backoff.ms  The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending… Default: 100 (100ms)
sasl.client.callback.handler.class  The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. Default: null
sasl.jaas.config  JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format… Default: null
sasl.kerberos.kinit.cmd  Kerberos kinit command path. Default: /usr/bin/kinit
sasl.kerberos.min.time.before.relogin  Login thread sleep time between refresh attempts. Default: 60000
sasl.kerberos.service.name  The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. Default: null
sasl.kerberos.ticket.renew.jitter  Percentage of random jitter added to the renewal time. Default: 0.05
sasl.kerberos.ticket.renew.window.factor  Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which… Default: 0.8
sasl.login.callback.handler.class  The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For bro… Default: null
sasl.login.class  The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener… Default: null
sasl.login.connect.timeout.ms  The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHB… Default: null
sasl.login.read.timeout.ms  The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER. Default: null
sasl.login.refresh.buffer.seconds  The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would ot… Default: 300
sasl.login.refresh.min.period.seconds  The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between… Default: 60
sasl.login.refresh.window.factor  Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which… Default: 0.8
sasl.login.refresh.window.jitter  The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time… Default: 0.05
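The sasl.login.refresh.* settings above combine as follows: the refresh thread sleeps until window_factor (plus jitter) of the credential lifetime has passed, but a refresh is never scheduled closer to expiry than the buffer allows. A simplified sketch with a hypothetical helper (jitter passed in explicitly for determinism):

```python
def refresh_at(issued_s: float, lifetime_s: float,
               window_factor: float = 0.8, jitter: float = 0.0,
               buffer_s: float = 300.0) -> float:
    """Absolute time (seconds) at which the login refresh should fire."""
    candidate = issued_s + lifetime_s * (window_factor + jitter)
    # Never schedule inside the pre-expiry buffer window.
    return min(candidate, issued_s + lifetime_s - buffer_s)

# A 1-hour credential issued at t=0 refreshes at 0.8 * 3600 = 2880 s.
assert refresh_at(0.0, 3600.0) == 2880.0
# The 300 s buffer wins when the window would land too close to expiry.
assert refresh_at(0.0, 3600.0, window_factor=0.95) == 3300.0
```

The jitter term spreads refreshes out so that many clients sharing one identity provider do not hit it at the same instant.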
sasl.login.retry.backoff.max.ms  The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login us… Default: 10000 (10s)
sasl.login.retry.backoff.ms  The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login us… Default: 100 (100ms)
sasl.mechanism  SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the de… Default: GSSAPI
sasl.oauthbearer.assertion.algorithm  The algorithm the Apache Kafka client should use to sign the assertion sent to the identity provider. It is also used as the value… Default: RS256
sasl.oauthbearer.assertion.claim.aud  The JWT aud (Audience) claim which will be included in the client JWT assertion created locally. Note: If a value for sasl.oauthbea… Default: null
sasl.oauthbearer.assertion.claim.exp.seconds  The number of seconds in the future for which the JWT is valid. The value is used to determine the JWT exp (Expiration) claim base… Default: 300
sasl.oauthbearer.assertion.claim.iss  The value to be used as the iss (Issuer) claim which will be included in the client JWT assertion created locally. Note: If a value… Default: null
sasl.oauthbearer.assertion.claim.jti.include  Flag that determines if the JWT assertion should generate a unique ID for the JWT and include it in the jti (JWT ID) claim. Note: I… Default: false
sasl.oauthbearer.assertion.claim.nbf.seconds  The number of seconds in the past from which the JWT is valid. The value is used to determine the JWT nbf (Not Before) claim based… Default: 60
sasl.oauthbearer.assertion.claim.sub  The value to be used as the sub (Subject) claim which will be included in the client JWT assertion created locally. Note: If a valu… Default: null
sasl.oauthbearer.assertion.file  File that contains a pre-generated JWT assertion. The underlying implementation caches the file contents to avoid the performance h… Default: null
sasl.oauthbearer.assertion.private.key.file  File that contains a private key in the standard PEM format which is used to sign the JWT assertion sent to the identity provider… Default: null
sasl.oauthbearer.assertion.private.key.passphrase  The optional passphrase to decrypt the private key file specified by sasl.oauthbearer.assertion.private.key.file. Note: If the file… Default: null
sasl.oauthbearer.assertion.template.file  This optional configuration specifies the file containing the JWT headers and/or payload claims to be used when creating the JWT a… Default: null
sasl.oauthbearer.client.credentials.client.id  The ID (defined in/by the OAuth identity provider) to identify the client requesting the token. The client ID was previously stored… Default: null
sasl.oauthbearer.client.credentials.client.secret  The secret (defined by either the user or preassigned, depending on the identity provider) of the client requesting the token. Default: null
sasl.oauthbearer.clock.skew.seconds  The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker. Default: 30
sasl.oauthbearer.expected.audience  The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences… Default: null
sasl.oauthbearer.expected.issuer  The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected… Default: null
sasl.oauthbearer.header.urlencode  The (optional) setting to enable the OAuth client to URL-encode the client_id and client_secret in the authorization header in acc… Default: false
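The sasl.oauthbearer.assertion.claim.* defaults above shape the JWT assertion's time claims: exp lands 300 s in the future, nbf 60 s in the past, and jti is optional. A sketch of claim construction only (the subject, issuer, and audience values here are made-up examples; signing with the private key file is out of scope):

```python
import uuid

def build_claims(now_s: int, sub: str, iss: str, aud: str,
                 exp_seconds: int = 300, nbf_seconds: int = 60,
                 include_jti: bool = False) -> dict:
    """Assemble the JWT payload claims the assertion.claim.* settings govern."""
    claims = {
        "iss": iss, "sub": sub, "aud": aud,
        "iat": now_s,
        "exp": now_s + exp_seconds,      # assertion.claim.exp.seconds
        "nbf": now_s - nbf_seconds,      # assertion.claim.nbf.seconds
    }
    if include_jti:                      # assertion.claim.jti.include
        claims["jti"] = str(uuid.uuid4())
    return claims

c = build_claims(1_000_000, "client-1", "https://issuer.example", "kafka")
assert c["exp"] - c["iat"] == 300 and c["iat"] - c["nbf"] == 60
```

The backdated nbf and the broker-side sasl.oauthbearer.clock.skew.seconds (30 s by default) together absorb modest clock drift between client, broker, and identity provider.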
sasl.oauthbearer.jwks.endpoint.refresh.ms  The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the… Default: 3600000 (1h)
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms  The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the extern… Default: 10000 (10s)
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms  The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external aut… Default: 100 (100ms)
sasl.oauthbearer.jwks.endpoint.url  The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or fi… Default: null
sasl.oauthbearer.jwt.retriever.class  The fully-qualified class name of a JwtRetriever implementation used to request tokens from the identity provider. The default conf… Default: org.apache.kafka.common.security.oauthbearer.DefaultJwtRetriever
sasl.oauthbearer.jwt.validator.class  The fully-qualified class name of a JwtValidator implementation used to validate the JWT from the identity provider. The default va… Default: org.apache.kafka.common.security.oauthbearer.DefaultJwtValidator
sasl.oauthbearer.scope  This is the level of access a client application is granted to a resource or API which is included in the token request. If provid… Default: null
sasl.oauthbearer.scope.claim.name  The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scop… Default: scope
sasl.oauthbearer.sub.claim.name  The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subj… Default: sub
sasl.oauthbearer.token.endpoint.url  The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests… Default: null
security.protocol  Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. Default: PLAINTEXT
security.providers  A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement… Default: null
send.buffer.bytes The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. Default: 131072 (128 KB)
session.timeout.ms The timeout used to detect worker failures. The worker sends periodic heartbeats to indicate its liveness to the broker. If no hea.. Default: 45000 (45s)
share.acknowledgement.mode Controls the acknowledgement mode for a share consumer. If set to implicit, the acknowledgement mode of the consumer is implicit a.. Default: implicit
share.acquire.mode Controls the acquire mode for a share consumer. If set to record_limit, the number of records returned in each poll() will not exc.. Default: BATCH_OPTIMIZED
socket.connection.setup.timeout.max.ms The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will inc.. Default: 30000 (30s)
socket.connection.setup.timeout.ms The amount of time the client will wait for the socket connection to be established. If the connection is not built before the tim.. Default: 10000 (10s)
ssl.cipher.suites A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotia.. Default: null
ssl.enabled.protocols The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' .. Default: TLSv1.2 or TLSv1.2,TLSv1.3 (depends on release and Java version)
ssl.endpoint.identification.algorithm The endpoint identification algorithm to validate server hostname using server certificate. Default: https
ssl.engine.factory.class The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.. Default: null
ssl.key.password The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'. Default: null
ssl.keymanager.algorithm The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for t.. Default: SunX509
ssl.keystore.certificate.chain Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list .. Default: null
ssl.keystore.key Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. .. Default: null
ssl.keystore.location The location of the key store file. This is optional for client and can be used for two-way authentication for client. Default: null
ssl.keystore.password The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. K.. Default: null
ssl.keystore.type The file format of the key store file. This is optional for client. The values currently supported by the default `ssl.engine.fact.. Default: JKS
ssl.protocol The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise.. Default: TLSv1.2 or TLSv1.3 (depends on release and Java version)
ssl.provider The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. Default: null
ssl.secure.random.implementation The SecureRandom PRNG implementation to use for SSL cryptography operations. Default: null
ssl.trustmanager.algorithm The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured f.. Default: PKIX
ssl.truststore.certificates Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X... Default: null
ssl.truststore.location The location of the trust store file. Default: null
ssl.truststore.password The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity che.. Default: null
ssl.truststore.type The file format of the trust store file. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12.. Default: JKS
value.deserializer Deserializer class for value that implements the org.apache.kafka.common.serialization.Deserializer interface. (no default)
auto.include.jmx.reporter Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters. This configuration will be r.. Default: true
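The consumer-side settings above can be sketched as a flat properties dict. The key names are the real Kafka config keys from the table; the `validate` helper and broker addresses are illustrative only, and `key.deserializer` (the companion of `value.deserializer` shown above) likewise has no default, so both must be supplied.

```python
# Sketch of a consumer properties map, assuming a flat key/value config
# style. Only the key names are real Kafka configs; everything else here
# (helper, hosts) is a hypothetical illustration.

REQUIRED = {"bootstrap.servers", "key.deserializer", "value.deserializer"}

def validate(props: dict) -> list[str]:
    """Return the required consumer settings that are still missing."""
    return sorted(REQUIRED - props.keys())

consumer_props = {
    "bootstrap.servers": "broker1:9092,broker2:9092",
    "session.timeout.ms": 45_000,                      # table default: 45s
    "socket.connection.setup.timeout.ms": 10_000,      # table default: 10s
    "socket.connection.setup.timeout.max.ms": 30_000,  # table default: 30s
    "security.protocol": "PLAINTEXT",                  # table default
}

# The two deserializers have no default, so validation flags them:
print(validate(consumer_props))
```

Running this prints `['key.deserializer', 'value.deserializer']`, which is exactly why those two rows show no default value in the table.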
producer Description Default (Kafka 3.0 to 4.2)
acks The number of acknowledgments the producer requires the leader to have received before considering a request complete. This contro.. Default: all
batch.size The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same parti.. Default: 16384
bootstrap.servers A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all ser.. (no default)
buffer.memory The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than.. Default: 33554432
client.dns.lookup Controls how the client uses DNS lookups. If set to use_all_dns_ips, connect to each returned IP address in sequence until a succe.. Default: use_all_dns_ips
client.id An ID prefix string used for the client IDs of internal (main, restore, and global) consumers, producers, and admin clients with .. (no default)
compression.gzip.level The compression level to use if compression.type is set to gzip. Default: -1
compression.lz4.level The compression level to use if compression.type is set to lz4. Default: 9
compression.type Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy'.. Default: none
compression.zstd.level The compression level to use if compression.type is set to zstd. Default: 3
config.providers Comma-separated names of ConfigProvider classes, loaded and used in the order specified. Implementing the interface ConfigProvide.. (no default)
connections.max.idle.ms Close idle connections after the number of milliseconds specified by this config. Default: 540000 (9min)
delivery.timeout.ms An upper bound on the time to report success or failure after a call to send() returns. This limits the total time that a record w.. Default: 120000 (2min)
enable.idempotence When set to 'true', the producer will ensure that exactly one copy of each message is written in the stream. If 'false', producer .. Default: true
enable.metrics.push Whether to enable pushing of internal client metrics for (main, restore, and global) consumers, producers, and admin clients. The .. Default: true
interceptor.classes A list of classes to use as interceptors. Implementing the org.apache.kafka.clients.producer.ProducerInterceptor interface allows .. (no default)
key.serializer Serializer class for key that implements the org.apache.kafka.common.serialization.Serializer interface. (no default)
linger.ms The producer groups together any records that arrive in between request transmissions into a single batched request. Normally this.. Default: 0 (through 3.9), 5 (5ms, from 4.0)
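How batch.size and linger.ms interact can be sketched with a toy model: a batch is eligible to be sent once it reaches batch.size bytes or has waited linger.ms, whichever comes first. This is a simplification for illustration only; the real producer also reacts to buffer pressure, in-flight limits, and sender availability.

```python
# Simplified model (not the actual producer logic) of the batch.size /
# linger.ms trade-off, using the table defaults.

def send_reason(batch_bytes: int, waited_ms: int,
                batch_size: int = 16_384, linger_ms: int = 0) -> str:
    """Why a batch would be flushed under this simplified model."""
    if batch_bytes >= batch_size:
        return "size"      # batch is full
    if waited_ms >= linger_ms:
        return "linger"    # waited long enough
    return "waiting"       # keep accumulating records

# With the pre-4.0 default linger.ms=0, even a tiny batch flushes at once:
print(send_reason(512, 0))                  # linger
# With the 4.0+ default of 5ms, a small batch waits briefly to fill up:
print(send_reason(512, 3, linger_ms=5))     # waiting
# A full batch flushes regardless of linger.ms:
print(send_reason(20_000, 0, linger_ms=5))  # size
```

The 4.0 default change from 0 to 5ms (visible in the row above) trades a few milliseconds of latency for better batching on busy producers.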
max.block.ms The configuration controls how long the KafkaProducer's send(), partitionsFor(), initTransactions(), sendOffsetsToTransaction(), c.. Default: 60000 (1min)
max.in.flight.requests.per.connection The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this confi.. Default: 5
max.request.size The maximum size of a request in bytes. This setting will limit the number of record batches the producer will send in a single re.. Default: 1048576
metadata.max.age.ms The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership cha.. Default: 300000 (5min)
metadata.max.idle.ms Controls how long the producer will cache metadata for a topic that's idle. If the elapsed time since a topic was last produced to.. Default: 300000 (5min)
metadata.recovery.rebootstrap.trigger.ms If a client configured to rebootstrap using metadata.recovery.strategy=rebootstrap is unable to obtain metadata from any of the br.. Default: 300000 (5min)
metadata.recovery.strategy Controls how the client recovers when none of the brokers known to it is available. If set to none, the client fails. If set to re.. Default: none (3.8-3.9), rebootstrap (from 4.0)
metric.reporters A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows p.. Default: org.apache.kafka.common.metrics.JmxReporter
metrics.num.samples The number of samples maintained to compute metrics. Default: 2
metrics.recording.level The highest recording level for metrics. Default: INFO
metrics.sample.window.ms The window of time a metrics sample is computed over. Default: 30000 (30s)
partitioner.adaptive.partitioning.enable When set to 'true', the producer will try to adapt to broker performance and produce more messages to partitions hosted on faster .. Default: true
partitioner.availability.timeout.ms If a broker cannot process produce requests from a partition for partitioner.availability.timeout.ms time, the partitioner treats .. Default: 0
partitioner.class Determines which partition to send a record to when records are produced. Available options are: .. Default: org.apache.kafka.clients.producer.internals.DefaultPartitioner (3.0-3.2), null (from 3.3)
partitioner.ignore.keys When set to 'true' the producer won't use record keys to choose a partition. If 'false', producer would choose a partition based o.. Default: false
receive.buffer.bytes The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. Default: 32768 (32 KB)
reconnect.backoff.max.ms The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provide.. Default: 1000 (1s)
reconnect.backoff.ms The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a t.. Default: 50 (50ms)
request.timeout.ms The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not r.. Default: 30000 (30s)
retries Setting a value greater than zero will cause the client to resend any request that fails with a potentially transient error. It is.. Default: 2147483647
retry.backoff.max.ms The maximum amount of time in milliseconds to wait when retrying a request to the broker that has repeatedly failed. If provided, .. Default: 1000 (1s)
retry.backoff.ms The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending .. Default: 100 (100ms)
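The retry.backoff.ms / retry.backoff.max.ms pair implies a backoff schedule that grows from the initial value up to the cap. The sketch below models this as plain doubling per attempt; the actual client also applies jitter, which is omitted here for clarity.

```python
# Sketch of the retry backoff schedule implied by the table defaults
# (retry.backoff.ms=100, retry.backoff.max.ms=1000). Jitter omitted;
# this is an illustration, not the client's exact algorithm.

def backoff_schedule(attempts: int, initial_ms: int = 100,
                     max_ms: int = 1_000) -> list[int]:
    """Wait (ms) before each retry: doubled per attempt, capped at max_ms."""
    return [min(initial_ms * 2**i, max_ms) for i in range(attempts)]

print(backoff_schedule(6))  # [100, 200, 400, 800, 1000, 1000]
```

With retries at its default of 2147483647, it is delivery.timeout.ms (2min by default) rather than the retry count that effectively bounds how long the producer keeps retrying.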
sasl.client.callback.handler.class The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. Default: null
sasl.jaas.config JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format .. Default: null
sasl.kerberos.kinit.cmd Kerberos kinit command path. Default: /usr/bin/kinit
sasl.kerberos.min.time.before.relogin Login thread sleep time between refresh attempts. Default: 60000
sasl.kerberos.service.name The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. Default: null
sasl.kerberos.ticket.renew.jitter Percentage of random jitter added to the renewal time. Default: 0.05
sasl.kerberos.ticket.renew.window.factor Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which .. Default: 0.8
sasl.login.callback.handler.class The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For bro.. Default: null
sasl.login.class The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener .. Default: null
sasl.login.connect.timeout.ms The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHB.. Default: null
sasl.login.read.timeout.ms The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER. Default: null
sasl.login.refresh.buffer.seconds The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would ot.. Default: 300
sasl.login.refresh.min.period.seconds The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between.. Default: 60
sasl.login.refresh.window.factor Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which.. Default: 0.8
sasl.login.refresh.window.jitter The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. .. Default: 0.05
sasl.login.retry.backoff.max.ms The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login us.. Default: 10000 (10s)
sasl.login.retry.backoff.ms The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login us.. Default: 100 (100ms)
sasl.mechanism SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the de.. Default: GSSAPI
sasl.oauthbearer.assertion.algorithm The algorithm the Apache Kafka client should use to sign the assertion sent to the identity provider. It is also used as the value.. Default: RS256
sasl.oauthbearer.assertion.claim.aud The JWT aud (Audience) claim which will be included in the client JWT assertion created locally. Note: If a value for sasl.oauthbea.. Default: null
sasl.oauthbearer.assertion.claim.exp.seconds The number of seconds in the future for which the JWT is valid. The value is used to determine the JWT exp (Expiration) claim base.. Default: 300
sasl.oauthbearer.assertion.claim.iss The value to be used as the iss (Issuer) claim which will be included in the client JWT assertion created locally. Note: If a value.. Default: null
sasl.oauthbearer.assertion.claim.jti.include Flag that determines if the JWT assertion should generate a unique ID for the JWT and include it in the jti (JWT ID) claim. Note: I.. Default: false
sasl.oauthbearer.assertion.claim.nbf.seconds The number of seconds in the past from which the JWT is valid. The value is used to determine the JWT nbf (Not Before) claim based.. Default: 60
sasl.oauthbearer.assertion.claim.sub The value to be used as the sub (Subject) claim which will be included in the client JWT assertion created locally. Note: If a valu.. Default: null
sasl.oauthbearer.assertion.file File that contains a pre-generated JWT assertion. The underlying implementation caches the file contents to avoid the performance h.. Default: null
sasl.oauthbearer.assertion.private.key.file File that contains a private key in the standard PEM format which is used to sign the JWT assertion sent to the identity provider... Default: null
sasl.oauthbearer.assertion.private.key.passphrase The optional passphrase to decrypt the private key file specified by sasl.oauthbearer.assertion.private.key.file. Note: If the file.. Default: null
sasl.oauthbearer.assertion.template.file This optional configuration specifies the file containing the JWT headers and/or payload claims to be used when creating the JWT a.. Default: null
sasl.oauthbearer.client.credentials.client.id The ID (defined in/by the OAuth identity provider) to identify the client requesting the token. The client ID was previously stored.. Default: null
sasl.oauthbearer.client.credentials.client.secret The secret (defined by either the user or preassigned, depending on the identity provider) of the client requesting the token. The .. Default: null
sasl.oauthbearer.clock.skew.seconds The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker. Default: 30
sasl.oauthbearer.expected.audience The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. .. Default: null
sasl.oauthbearer.expected.issuer The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected .. Default: null
sasl.oauthbearer.header.urlencode The (optional) setting to enable the OAuth client to URL-encode the client_id and client_secret in the authorization header in acc.. Default: false
sasl.oauthbearer.jwks.endpoint.refresh.ms The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the.. Default: 3600000 (1h)
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the extern.. Default: 10000 (10s)
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external aut.. Default: 100 (100ms)
sasl.oauthbearer.jwks.endpoint.url The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or fi.. Default: null
sasl.oauthbearer.jwt.retriever.class The fully-qualified class name of a JwtRetriever implementation used to request tokens from the identity provider. The default conf.. Default: org.apache.kafka.common.security.oauthbearer.DefaultJwtRetriever
sasl.oauthbearer.jwt.validator.class The fully-qualified class name of a JwtValidator implementation used to validate the JWT from the identity provider. The default va.. Default: org.apache.kafka.common.security.oauthbearer.DefaultJwtValidator
sasl.oauthbearer.scope This is the level of access a client application is granted to a resource or API which is included in the token request. If provid.. Default: null
sasl.oauthbearer.scope.claim.name The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scop.. Default: scope
sasl.oauthbearer.sub.claim.name The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subj.. Default: sub
sasl.oauthbearer.token.endpoint.url The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests.. Default: null
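The sasl.oauthbearer.* rows above come together in a SASL/OAUTHBEARER client config. The sketch below is a flat properties dict: the key names are the real Kafka configs, while the identity-provider URLs are placeholders.

```python
# Sketch: producer/consumer properties for SASL/OAUTHBEARER against an
# OIDC provider. Key names are real Kafka configs from the table above;
# the example.com endpoints are hypothetical placeholders.

oauth_props = {
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "OAUTHBEARER",     # table default is GSSAPI
    # Both endpoint URLs default to null, so they must be supplied:
    "sasl.oauthbearer.token.endpoint.url": "https://idp.example.com/token",
    "sasl.oauthbearer.jwks.endpoint.url": "https://idp.example.com/jwks",
    # Optional validation knobs, shown here at their table defaults:
    "sasl.oauthbearer.clock.skew.seconds": 30,
    "sasl.oauthbearer.scope.claim.name": "scope",
    "sasl.oauthbearer.sub.claim.name": "sub",
}

# Per the table, the JWKS cache then refreshes every 1h by default, with
# retrieval retries backing off from 100ms up to 10s.
print(oauth_props["sasl.mechanism"])
```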
security.protocol Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. Default: PLAINTEXT
security.providers A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement .. Default: null
send.buffer.bytes The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. Default: 131072 (128 KB)
socket.connection.setup.timeout.max.ms The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will inc.. Default: 30000 (30s)
socket.connection.setup.timeout.ms The amount of time the client will wait for the socket connection to be established. If the connection is not built before the tim.. Default: 10000 (10s)
ssl.cipher.suites A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotia.. Default: null
ssl.enabled.protocols The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' .. Default: TLSv1.2 or TLSv1.2,TLSv1.3 (depends on release and Java version)
ssl.endpoint.identification.algorithm The endpoint identification algorithm to validate server hostname using server certificate. Default: https
ssl.engine.factory.class The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.. Default: null
ssl.key.password The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'. Default: null
ssl.keymanager.algorithm The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for t.. Default: SunX509
ssl.keystore.certificate.chain Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list .. Default: null
ssl.keystore.key Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. .. Default: null
ssl.keystore.location The location of the key store file. This is optional for client and can be used for two-way authentication for client. Default: null
ssl.keystore.password The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. K.. Default: null
ssl.keystore.type The file format of the key store file. This is optional for client. The values currently supported by the default `ssl.engine.fact.. Default: JKS
ssl.protocol The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise.. Default: TLSv1.2 or TLSv1.3 (depends on release and Java version)
ssl.provider The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. Default: null
ssl.secure.random.implementation The SecureRandom PRNG implementation to use for SSL cryptography operations. Default: null
ssl.trustmanager.algorithm The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured f.. Default: PKIX
ssl.truststore.certificates Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X... Default: null
ssl.truststore.location The location of the trust store file. Default: null
ssl.truststore.password The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity che.. Default: null
ssl.truststore.type The file format of the trust store file. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12.. Default: JKS
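The ssl.* rows above map onto a client TLS config like the following sketch. Key names are the real Kafka configs; the file paths and passwords are placeholders, and the keystore half is only needed when the broker requires client (two-way) authentication.

```python
# Sketch: TLS client properties built from the ssl.* rows. Paths and
# passwords are hypothetical placeholders; key names are real configs.

ssl_props = {
    "security.protocol": "SSL",
    # Truststore: needed to verify the broker's certificate.
    "ssl.truststore.location": "/etc/kafka/client.truststore.jks",
    "ssl.truststore.password": "changeit",
    "ssl.truststore.type": "JKS",                      # table default
    # Keystore: only for two-way (mutual TLS) authentication.
    "ssl.keystore.location": "/etc/kafka/client.keystore.jks",
    "ssl.keystore.password": "changeit",
    # Hostname verification, on by default; set to "" only to disable.
    "ssl.endpoint.identification.algorithm": "https",  # table default
}

print(sorted(k for k in ssl_props if k.startswith("ssl.truststore")))
```

Note the asymmetry in the table: every ssl.* location/password defaults to null, so a plain SSL client needs at least the truststore pair set explicitly.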
transaction.timeout.ms The maximum amount of time in milliseconds that a transaction will remain open before the coordinator proactively aborts it. The s.. Default: 60000 (1min)
transaction.two.phase.commit.enable If set to true, then the broker is informed that the client is participating in two phase commit protocol and transactions that th.. Default: false
transactional.id The TransactionalId to use for transactional delivery. This enables reliability semantics which span multiple producer sessions si.. Default: null
value.serializer Serializer class for value that implements the org.apache.kafka.common.serialization.Serializer interface. (no default)
auto.include.jmx.reporter Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters. This configuration will be r.. Default: true
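A transactional producer ties several of the rows above together: transactional.id (null by default, so it must be set), enable.idempotence, acks, and the timeout family. The sketch below also checks the producer's documented constraint that delivery.timeout.ms must be at least linger.ms + request.timeout.ms; the transactional id value itself is a placeholder.

```python
# Sketch: transactional producer properties, with the client-side sanity
# check delivery.timeout.ms >= linger.ms + request.timeout.ms. Key names
# are real configs; "orders-writer-1" is a hypothetical id.

txn_props = {
    "transactional.id": "orders-writer-1",  # defaults to null; must be set
    "enable.idempotence": True,             # table default, required for txns
    "acks": "all",                          # table default
    "transaction.timeout.ms": 60_000,       # table default: 1min
    "delivery.timeout.ms": 120_000,         # table default: 2min
    "linger.ms": 5,
    "request.timeout.ms": 30_000,
}

timeouts_ok = txn_props["delivery.timeout.ms"] >= (
    txn_props["linger.ms"] + txn_props["request.timeout.ms"])
print(timeouts_ok)  # True
```

If the check fails, the real producer refuses to start rather than silently expiring records before their last retry can complete.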
topic Description Default (Kafka 3.0 to 4.2)
cleanup.policy | This config designates the retention policy to use on log segments. The "delete" policy (which is the default) will discard old se.. | delete (3.0–4.2)
compression.gzip.level | The compression level to use if compression.type is set to gzip. | -1 (3.8–4.2)
compression.lz4.level | The compression level to use if compression.type is set to lz4. | 9 (3.8–4.2)
compression.type | Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy'.. | producer (3.0–4.2)
compression.zstd.level | The compression level to use if compression.type is set to zstd. | 3 (3.8–4.2)
delete.retention.ms | The amount of time to retain delete tombstone markers for log compacted topics. This setting also gives a bound on the time in whi.. | 86400000 (1d) (3.0–4.2)
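As an illustration of how the compaction and compression rows combine, here is a hedged sketch of topic-level overrides for a compacted changelog topic (the values are examples, not recommendations):

```properties
# Hypothetical topic-level overrides for a compacted changelog topic.
cleanup.policy=compact
# Broker re-compresses with zstd at level 3 regardless of the producer's codec.
compression.type=zstd
compression.zstd.level=3
# Keep delete tombstones for one day (the default, 86400000 = 1d).
delete.retention.ms=86400000
```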
file.delete.delay.ms | The time to wait before deleting a file from the filesystem | 60000 (1min) (3.0–4.2)
flush.messages | This setting allows specifying an interval at which we will force an fsync of data written to the log. For example if this was set.. | 9223372036854775807 (Infinity) (3.0–4.2)
flush.ms | This setting allows specifying a time interval at which we will force an fsync of data written to the log. For example if this was.. | 9223372036854775807 (Infinity) (3.0–4.2)
follower.replication.throttled.replicas | A list of replicas for which log replication should be throttled on the follower side. The list should describe a set of replicas .. | empty (3.0–4.2)
index.interval.bytes | This setting controls how frequently Kafka adds an index entry to its offset index. The default setting ensures that we index a me.. | 4096 (4 KB) (3.0–4.2)
leader.replication.throttled.replicas | A list of replicas for which log replication should be throttled on the leader side. The list should describe a set of replicas in.. | empty (3.0–4.2)
local.retention.bytes | The maximum size of local log segments that can grow for a partition before it deletes the old segments. Default value is -2, it r.. | -2 (3.5–4.2)
local.retention.ms | The number of milliseconds to keep the local log segment before it gets deleted. Default value is -2, it represents `retention.ms`.. | -2 (3.5–4.2)
max.compaction.lag.ms | The maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted. | 9223372036854775807 (Infinity) (3.0–4.2)
max.message.bytes | The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are c.. | 1048588 (1 MB) (3.0–4.2)
message.timestamp.after.max.ms | This configuration sets the allowable timestamp difference between the message timestamp and the broker's timestamp. The message t.. | 9223372036854775807 (Infinity) (3.6–3.9); 3600000 (1h) (4.0–4.2)
message.timestamp.before.max.ms | This configuration sets the allowable timestamp difference between the broker's timestamp and the message timestamp. The message t.. | 9223372036854775807 (Infinity) (3.6–4.2)
message.timestamp.type | Define whether the timestamp in the message is message create time or log append time. The value should be either `CreateTime` or .. | CreateTime (3.0–4.2)
min.cleanable.dirty.ratio | This configuration controls how frequently the log compactor will attempt to clean the log (assuming log compaction is enabled). B.. | 0.5 (3.0–4.2)
min.compaction.lag.ms | The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted. | 0 (3.0–4.2)
min.insync.replicas | When a producer sets acks to "all" (or "-1"), this configuration specifies the minimum number of replicas that must acknowledge a .. | 1 (3.0–4.2)
preallocate | True if we should preallocate the file on disk when creating a new log segment. | false (3.0–4.2)
remote.log.copy.disable | Determines whether tiered data for a topic should become read only, and no more data uploading on a topic. Once this config is set.. | false (3.9–4.2)
remote.log.delete.on.disable | Determines whether tiered data for a topic should be deleted after tiered storage is disabled on a topic. This configuration shoul.. | false (3.9–4.2)
remote.storage.enable | To enable tiered storage for a topic, set this configuration as true. You can not disable this config once it is enabled. It will .. | false (3.5–4.2)
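The tiered-storage rows above interact: remote.storage.enable turns the feature on, after which local.retention.* bounds only the broker-local portion of the log. A hedged sketch of hypothetical topic overrides (assumes a broker already configured for remote storage):

```properties
# Hypothetical overrides for a tiered topic (cannot be disabled once enabled).
remote.storage.enable=true
# Keep roughly 1 GB / 1 hour locally; older segments live in remote storage.
local.retention.bytes=1073741824
local.retention.ms=3600000
# Overall retention is still governed by retention.bytes / retention.ms (-1 = no size limit).
retention.bytes=-1
```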
retention.ms | This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we a.. | 604800000 (7d) (3.0–4.2)
segment.bytes | This configuration controls the segment file size for the log. Retention and cleaning is always done a file at a time so a larger .. | 1073741824 (1 GB) (3.0–4.2)
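Retention is applied a whole segment file at a time: a segment must first be closed (by hitting segment.bytes or segment.ms) before retention.ms can delete it, so the effective retention window is roughly retention.ms plus one segment's lifetime. A hedged sketch with hypothetical values:

```properties
# Hypothetical overrides: roll segments at 256 MB or daily, retain data for ~3 days.
segment.bytes=268435456
segment.ms=86400000
retention.ms=259200000
```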
segment.index.bytes | This configuration controls the size of the index that maps offsets to file positions. We preallocate this index file and shrink i.. | 10485760 (10 MB) (3.0–4.2)
segment.jitter.ms | The maximum random jitter subtracted from the scheduled segment roll time to avoid thundering herds of segment rolling | 0 (3.0–4.2)
segment.ms | This configuration controls the period of time after which Kafka will force the log to roll even if the segment file isn't full to.. | 604800000 (7d) (3.0–4.2)
unclean.leader.election.enable | Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result .. | false (3.0–4.2)
message.downconversion.enable | This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. When set to false, .. | true (3.0–3.9; removed in 4.0)
message.format.version | [DEPRECATED] Specify the message format version the broker will use to append messages to the logs. The value of this config is al.. | 3.0-IV1 (3.0–3.9; removed in 4.0)
message.timestamp.difference.max.ms | [DEPRECATED] The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in .. | 9223372036854775807 (Infinity) (3.0–3.9; removed in 4.0)
Connect Worker | Description | Default (versions 3.0–4.2)
access.control.allow.methods | Sets the methods supported for cross origin requests by setting the Access-Control-Allow-Methods header. The default value of the .. | empty
access.control.allow.origin | Value to set the Access-Control-Allow-Origin header to for REST API requests. To enable cross origin access, set this to the domain.. | empty
admin.listeners | List of comma-separated URIs the Admin REST API will listen on. The supported protocols are HTTP and HTTPS. An empty or blank stri.. | null (3.0–4.2)
bootstrap.servers | A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all ser.. | localhost:9092
client.dns.lookup | Controls how the client uses DNS lookups. If set to use_all_dns_ips, connect to each returned IP address in sequence until a succe.. | use_all_dns_ips (3.0–4.2)
client.id | An ID prefix string used for the client IDs of internal (main, restore, and global) consumers, producers, and admin clients with .. | empty
config.providers | Comma-separated names of ConfigProvider classes, loaded and used in the order specified. Implementing the interface ConfigProvide.. | empty
config.storage.replication.factor | Replication factor used when creating the configuration storage topic | 3 (3.0–4.2)
config.storage.topic | The name of the Kafka topic where connector configurations are stored | no default (required)
connect.protocol | Compatibility mode for Kafka Connect Protocol | sessioned (3.0–4.2)
connections.max.idle.ms | Close idle connections after the number of milliseconds specified by this config. | 540000 (9min) (3.0–4.2)
connector.client.config.override.policy | Class name or alias of implementation of ConnectorClientConfigOverridePolicy. Defines what client configurations can be overridden.. | All (3.0–4.2)
exactly.once.source.support | Whether to enable exactly-once support for source connectors in the cluster by using transactions to write source records and thei.. | disabled (3.3–4.2)
group.id | A unique string that identifies the Connect cluster group this worker belongs to. | no default (required)
header.converter | HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls.. | org.apache.kafka.connect.storage.SimpleHeaderConverter (3.0–4.2)
header.converter.plugin.version | Version of the header converter. | null (4.1–4.2)
heartbeat.interval.ms | The expected time between heartbeats to the group coordinator when using Kafka's group management facilities. Heartbeats are used .. | 3000 (3s) (3.0–4.2)
inter.worker.key.generation.algorithm | The algorithm to use for generating internal request keys. The algorithm 'HmacSHA256' will be used as a default on JVMs that suppo.. | HmacSHA256 (3.0–4.2)
inter.worker.key.size | The size of the key to use for signing internal requests, in bits. If null, the default key size for the key generation algorithm .. | null (3.0–4.2)
inter.worker.key.ttl.ms | The TTL of generated session keys used for internal request validation (in milliseconds) | 3600000 (1h) (3.0–4.2)
inter.worker.signature.algorithm | The algorithm used to sign internal requests. The algorithm 'inter.worker.signature.algorithm' will be used as a default on JVMs tha.. | HmacSHA256 (3.0–4.2)
inter.worker.verification.algorithms | A list of permitted algorithms for verifying internal requests, which must include the algorithm used for the inter.worker.signatu.. | HmacSHA256 (3.0–4.2)
key.converter | Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the f.. | no default (required)
key.converter.plugin.version | Version of the key converter. | null (4.1–4.2)
listeners | List of comma-separated URIs the REST API will listen on. The supported protocols are HTTP and HTTPS. Specify hostname as 0.0.0.0 .. | http://:8083 (3.0–4.2)
metadata.max.age.ms | The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership cha.. | 300000 (5min) (3.0–4.2)
metadata.recovery.rebootstrap.trigger.ms | If a client configured to rebootstrap using metadata.recovery.strategy=rebootstrap is unable to obtain metadata from any of the br.. | 300000 (5min) (4.0–4.2)
metadata.recovery.strategy | Controls how the client recovers when none of the brokers known to it is available. If set to none, the client fails. If set to re.. | none (3.8–3.9); rebootstrap (4.0–4.2)
metric.reporters | A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows p.. | org.apache.kafka.common.metrics.JmxReporter (4.0–4.2; previously empty)
metrics.num.samples | The number of samples maintained to compute metrics. | 2 (3.0–4.2)
metrics.recording.level | The highest recording level for metrics. | INFO (3.0–4.2)
metrics.sample.window.ms | The window of time a metrics sample is computed over. | 30000 (30s) (3.0–4.2)
offset.flush.interval.ms | Interval at which to try committing offsets for tasks. | 60000 (1min) (3.0–4.2)
offset.flush.timeout.ms | Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before can.. | 5000 (5s) (3.0–4.2)
offset.storage.partitions | The number of partitions used when creating the offset storage topic | 25 (3.0–4.2)
offset.storage.replication.factor | Replication factor used when creating the offset storage topic | 3 (3.0–4.2)
offset.storage.topic | The name of the Kafka topic where source connector offsets are stored | no default (required)
plugin.discovery | Method to use to discover plugins present in the classpath and plugin.path configuration. This can be one of multiple values with .. | hybrid_warn (3.6–4.2)
plugin.path | List of paths separated by commas (,) that contain plugins (connectors, converters, transformations). The list should consist of t.. | null (3.0–4.2)
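Pulling the required worker settings together, here is a hedged sketch of a minimal distributed-mode worker file (topic names, group ID, and converter choices are hypothetical; status.storage.topic is also required in distributed mode even though it does not appear in this excerpt):

```properties
# Hypothetical connect-distributed.properties sketch.
bootstrap.servers=localhost:9092
group.id=connect-cluster-a
# Internal storage topics; replication factor 3 matches the defaults above.
config.storage.topic=connect-configs
config.storage.replication.factor=3
offset.storage.topic=connect-offsets
offset.storage.partitions=25
offset.storage.replication.factor=3
status.storage.topic=connect-status
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
listeners=http://:8083
```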
rebalance.timeout.ms | The maximum allowed time for each worker to join the group once a rebalance has begun. This is basically a limit on the amount of .. | 60000 (1min) (3.0–4.2)
receive.buffer.bytes | The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. | 32768 (32 KB) (3.0–4.2)
reconnect.backoff.max.ms | The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provide.. | 1000 (1s) (3.0–4.2)
reconnect.backoff.ms | The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a t.. | 50 (50ms) (3.0–4.2)
request.timeout.ms | The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not r.. | 40000 (40s) (3.0–4.2)
response.http.headers.config | Rules for REST API HTTP response headers | empty
rest.advertised.host.name | If this is set, this is the hostname that will be given out to other workers to connect to. | null (3.0–4.2)
rest.advertised.listener | Sets the advertised listener (HTTP or HTTPS) which will be given to other workers to use. | null (3.0–4.2)
rest.advertised.port | If this is set, this is the port that will be given out to other workers to connect to. | null (3.0–4.2)
rest.extension.classes | Comma-separated names of ConnectRestExtension classes, loaded and called in the order specified. Implementing the interface Conne.. | empty
retry.backoff.max.ms | The maximum amount of time in milliseconds to wait when retrying a request to the broker that has repeatedly failed. If provided, .. | 1000 (1s) (3.7–4.2)
retry.backoff.ms | The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending .. | 100 (100ms) (3.0–4.2)
sasl.client.callback.handler.class | The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. | null (3.0–4.2)
sasl.jaas.config | JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format .. | null (3.0–4.2)
sasl.kerberos.kinit.cmd | Kerberos kinit command path. | /usr/bin/kinit (3.0–4.2)
sasl.kerberos.min.time.before.relogin | Login thread sleep time between refresh attempts. | 60000 (3.0–4.2)
sasl.kerberos.service.name | The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. | null (3.0–4.2)
sasl.kerberos.ticket.renew.jitter | Percentage of random jitter added to the renewal time. | 0.05 (3.0–4.2)
sasl.kerberos.ticket.renew.window.factor | Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which .. | 0.8 (3.0–4.2)
sasl.login.callback.handler.class | The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For bro.. | null (3.0–4.2)
sasl.login.class | The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener .. | null (3.0–4.2)
sasl.login.connect.timeout.ms | The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHB.. | null (3.1–4.2)
sasl.login.read.timeout.ms | The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER. | null (3.1–4.2)
sasl.login.refresh.buffer.seconds | The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would ot.. | 300 (3.0–4.2)
sasl.login.refresh.min.period.seconds | The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between.. | 60 (3.0–4.2)
sasl.login.refresh.window.factor | Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which.. | 0.8 (3.0–4.2)
sasl.login.refresh.window.jitter | The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. .. | 0.05 (3.0–4.2)
sasl.login.retry.backoff.max.ms | The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login us.. | 10000 (10s) (3.1–4.2)
sasl.login.retry.backoff.ms | The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login us.. | 100 (100ms) (3.1–4.2)
sasl.mechanism | SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the de.. | GSSAPI (3.0–4.2)
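The sasl.* client rows above typically appear together. As a hedged example, a SASL/PLAIN-over-TLS client section might look like this (credentials and mechanism choice are placeholders, not table defaults):

```properties
# Hypothetical SASL_SSL client section of a worker config.
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="worker" \
  password="worker-secret";
```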
sasl.oauthbearer.assertion.algorithm | The algorithm the Apache Kafka client should use to sign the assertion sent to the identity provider. It is also used as the value.. | RS256 (4.1–4.2)
sasl.oauthbearer.assertion.claim.aud | The JWT aud (Audience) claim which will be included in the client JWT assertion created locally. Note: If a value for sasl.oauthbea.. | null (4.1–4.2)
sasl.oauthbearer.assertion.claim.exp.seconds | The number of seconds in the future for which the JWT is valid. The value is used to determine the JWT exp (Expiration) claim base.. | 300 (4.1–4.2)
sasl.oauthbearer.assertion.claim.iss | The value to be used as the iss (Issuer) claim which will be included in the client JWT assertion created locally. Note: If a value.. | null (4.1–4.2)
sasl.oauthbearer.assertion.claim.jti.include | Flag that determines if the JWT assertion should generate a unique ID for the JWT and include it in the jti (JWT ID) claim. Note: I.. | false (4.1–4.2)
sasl.oauthbearer.assertion.claim.nbf.seconds | The number of seconds in the past from which the JWT is valid. The value is used to determine the JWT nbf (Not Before) claim based.. | 60 (4.1–4.2)
sasl.oauthbearer.assertion.claim.sub | The value to be used as the sub (Subject) claim which will be included in the client JWT assertion created locally. Note: If a valu.. | null (4.1–4.2)
sasl.oauthbearer.assertion.file | File that contains a pre-generated JWT assertion. The underlying implementation caches the file contents to avoid the performance h.. | null (4.1–4.2)
sasl.oauthbearer.assertion.private.key.file | File that contains a private key in the standard PEM format which is used to sign the JWT assertion sent to the identity provider... | null (4.1–4.2)
sasl.oauthbearer.assertion.private.key.passphrase | The optional passphrase to decrypt the private key file specified by sasl.oauthbearer.assertion.private.key.file. Note: If the file.. | null (4.1–4.2)
sasl.oauthbearer.assertion.template.file | This optional configuration specifies the file containing the JWT headers and/or payload claims to be used when creating the JWT a.. | null (4.1–4.2)
sasl.oauthbearer.client.credentials.client.id | The ID (defined in/by the OAuth identity provider) to identify the client requesting the token. The client ID was previously stored.. | null (4.1–4.2)
sasl.oauthbearer.client.credentials.client.secret | The secret (defined by either the user or preassigned, depending on the identity provider) of the client requesting the token. The .. | null (4.1–4.2)
sasl.oauthbearer.clock.skew.seconds | The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker. | 30 (3.1–4.2)
sasl.oauthbearer.expected.audience | The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. .. | null
sasl.oauthbearer.expected.issuer | The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected .. | null (3.1–4.2)
sasl.oauthbearer.header.urlencode | The (optional) setting to enable the OAuth client to URL-encode the client_id and client_secret in the authorization header in acc.. | false (3.9–4.2)
sasl.oauthbearer.jwks.endpoint.refresh.ms | The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the.. | 3600000 (1h) (3.1–4.2)
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms | The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the extern.. | 10000 (10s) (3.1–4.2)
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms | The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external aut.. | 100 (100ms) (3.1–4.2)
sasl.oauthbearer.jwks.endpoint.url | The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or fi.. | null (3.1–4.2)
sasl.oauthbearer.jwt.retriever.class | The fully-qualified class name of a JwtRetriever implementation used to request tokens from the identity provider. The default conf.. | org.apache.kafka.common.security.oauthbearer.DefaultJwtRetriever (4.1–4.2)
sasl.oauthbearer.jwt.validator.class | The fully-qualified class name of a JwtValidator implementation used to validate the JWT from the identity provider. The default va.. | org.apache.kafka.common.security.oauthbearer.DefaultJwtValidator (4.1–4.2)
sasl.oauthbearer.scope | This is the level of access a client application is granted to a resource or API which is included in the token request. If provid.. | null (4.1–4.2)
sasl.oauthbearer.scope.claim.name | The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scop.. | scope (3.1–4.2)
sasl.oauthbearer.sub.claim.name | The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subj.. | sub (3.1–4.2)
sasl.oauthbearer.token.endpoint.url | The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests.. | null (3.1–4.2)
scheduled.rebalance.max.delay.ms | The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassig.. | 300000 (5min) (3.0–4.2)
security.protocol | Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. | PLAINTEXT (3.0–4.2)
send.buffer.bytes | The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. | 131072 (128 KB) (3.0–4.2)
session.timeout.ms | The timeout used to detect worker failures. The worker sends periodic heartbeats to indicate its liveness to the broker. If no hea.. | 10000 (10s) (3.0–4.2)
socket.connection.setup.timeout.max.ms | The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will inc.. | 30000 (30s) (3.0–4.2)
socket.connection.setup.timeout.ms | The amount of time the client will wait for the socket connection to be established. If the connection is not built before the tim.. | 10000 (10s) (3.0–4.2)
ssl.cipher.suites: A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotia… Default: null.
ssl.client.auth: Configures the Kafka broker to request client authentication. Default: none.
ssl.enabled.protocols: The list of protocols enabled for SSL connections. Default: 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise.
ssl.endpoint.identification.algorithm: The endpoint identification algorithm to validate the server hostname using the server certificate. Default: https.
ssl.engine.factory.class: The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache… Default: null.
ssl.key.password: The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'. Default: null.
ssl.keymanager.algorithm: The algorithm used by the key manager factory for SSL connections. Default: SunX509.
ssl.keystore.certificate.chain: Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list … Default: null.
ssl.keystore.key: Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. … Default: null.
ssl.keystore.location: The location of the key store file. This is optional for the client and can be used for two-way authentication for the client. Default: null.
ssl.keystore.password: The store password for the key store file. This is optional for the client and only needed if 'ssl.keystore.location' is configured. Default: null.
ssl.keystore.type: The file format of the key store file. This is optional for the client. The values currently supported by the default `ssl.engine.fact… Default: JKS.
ssl.protocol: The SSL protocol used to generate the SSLContext. Default: 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise.
ssl.provider: The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. Default: null.
ssl.secure.random.implementation: The SecureRandom PRNG implementation to use for SSL cryptography operations. Default: null.
ssl.trustmanager.algorithm: The algorithm used by the trust manager factory for SSL connections. Default: PKIX.
ssl.truststore.certificates: Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.… Default: null.
ssl.truststore.location: The location of the trust store file. Default: null.
ssl.truststore.password: The password for the trust store file. If a password is not set, the trust store file configured will still be used, but integrity che… Default: null.
ssl.truststore.type: The file format of the trust store file. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12… Default: JKS.
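Taken together, the ssl.* keys above are usually set as a group. A minimal sketch of an SSL configuration, assuming JKS stores; the file paths and passwords are placeholders, not values from this table:

```properties
# Hypothetical paths and passwords; adjust to your environment.
security.protocol=SSL
ssl.endpoint.identification.algorithm=https
ssl.keystore.type=JKS
ssl.keystore.location=/etc/kafka/secrets/kafka.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.type=JKS
ssl.truststore.location=/etc/kafka/secrets/kafka.truststore.jks
ssl.truststore.password=changeit
# Optional hardening: restrict to TLS 1.3 only
# (the default allows both 1.2 and 1.3 on Java 11+).
ssl.enabled.protocols=TLSv1.3
```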
status.storage.partitions: The number of partitions used when creating the status storage topic. Default: 5.
status.storage.replication.factor: Replication factor used when creating the status storage topic. Default: 3.
status.storage.topic: The name of the Kafka topic where connector and task status are stored. No default.
task.shutdown.graceful.timeout.ms: Amount of time to wait for tasks to shut down gracefully. This is the total amount of time, not per task. All tasks have shutdown tr… Default: 5000 (5s), unchanged 3.0–4.2.
topic.creation.enable: Whether to allow automatic creation of topics used by source connectors, when source connectors are configured with `topic.creatio… Default: true.
topic.tracking.allow.reset: If set to true, allows user requests to reset the set of active topics per connector. Default: true.
topic.tracking.enable: Enable tracking the set of active topics per connector during runtime. Default: true.
value.converter: Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the f… No default.
value.converter.plugin.version: Version of the value converter. Default: null (4.1+).
worker.sync.timeout.ms: When the worker is out of sync with other workers and needs to resynchronize configurations, wait up to this amount of time before… Default: 3000 (3s), unchanged 3.0–4.2.
worker.unsync.backoff.ms: When the worker is out of sync with other workers and fails to catch up within worker.sync.timeout.ms, leave the Connect cluster … Default: 300000 (5min), unchanged 3.0–4.2.
auto.include.jmx.reporter: Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters. This configuration will be r… Default: true (3.4–3.9).
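The worker-level settings above combine into a distributed Connect worker configuration. A sketch drawing on the defaults from this table; the bootstrap address, group id, and topic names are illustrative:

```properties
# Sketch of a distributed Connect worker config.
bootstrap.servers=localhost:9092
group.id=connect-cluster
status.storage.topic=connect-status
status.storage.partitions=5
status.storage.replication.factor=3
# Worker liveness and resync behavior (table defaults shown explicitly).
session.timeout.ms=10000
worker.sync.timeout.ms=3000
worker.unsync.backoff.ms=300000
task.shutdown.graceful.timeout.ms=5000
```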
Connect Source (Kafka 3.0–4.2)
config.action.reload: The action that Connect should take on the connector when changes in external configuration providers result in a change in the co… Default: restart.
connector.class: Name or alias of the class for this connector. Must be a subclass of org.apache.kafka.connect.connector.Connector. If the connecto… No default (required).
connector.plugin.version: Version of the connector. Default: null (4.1+).
errors.log.enable: If true, write each error and the details of the failed operation and problematic record to the Connect application log. Default: false.
errors.log.include.messages: Whether to include in the log the Connect record that resulted in a failure. For sink records, the topic, partition, offset, and t… Default: false.
errors.retry.delay.max.ms: The maximum duration in milliseconds between consecutive retry attempts. Jitter will be added to the delay once this limit is reac… Default: 60000 (1min), unchanged 3.0–4.2.
errors.retry.timeout: The maximum duration in milliseconds that a failed operation will be reattempted. The default is 0, which means no retries will be… Default: 0.
errors.tolerance: Behavior for tolerating errors during connector operation. 'none' is the default value and signals that any error will result in a… Default: none.
exactly.once.support: Permitted values are requested, required. If set to "required", forces a preflight check for the connector to ensure that it can p… Default: requested (3.3+).
header.converter: HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. Default: null.
header.converter.plugin.version: Version of the header converter. Default: null (4.1+).
key.converter: Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the f… Default: null.
key.converter.plugin.version: Version of the key converter. Default: null (4.1+).
name: Globally unique name to use for this connector. No default (required).
offsets.storage.topic: The name of a separate offsets topic to use for this connector. If empty or not specified, the worker's global offsets topic name … Default: null (3.3+).
predicates: Aliases for the predicates used by transformations. No default.
tasks.max: Maximum number of tasks to use for this connector. Default: 1.
tasks.max.enforce: (Deprecated) Whether to enforce that the tasks.max property is respected by the connector. By default, connectors that generate to… Default: true (3.8+).
topic.creation.groups: Groups of configurations for topics created by source connectors. No default.
transaction.boundary: Permitted values are: poll, interval, connector. If set to 'poll', a new producer transaction will be started and committed for ev… Default: poll (3.3+).
transaction.boundary.interval.ms: If 'transaction.boundary' is set to 'interval', determines the interval for producer transaction commits by connector tasks. If un… Default: null (3.3+).
transforms: Aliases for the transformations to be applied to records. No default.
value.converter: Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the f… Default: null.
value.converter.plugin.version: Version of the value converter. Default: null (4.1+).
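The transaction-related settings above can be sketched as a source connector configuration. This is an illustration, not a recommended setup; the connector name and offsets topic are hypothetical, and FileStreamSourceConnector simply stands in for any source connector:

```properties
# Hypothetical exactly-once source connector.
name=my-source
connector.class=org.apache.kafka.connect.file.FileStreamSourceConnector
tasks.max=1
# Fail the preflight check unless the connector can guarantee exactly-once.
exactly.once.support=required
# Commit a producer transaction once per minute instead of per poll.
transaction.boundary=interval
transaction.boundary.interval.ms=60000
# Per-connector offsets topic instead of the worker's global one.
offsets.storage.topic=my-source-offsets
```

Note that exactly-once delivery for source connectors also depends on worker-side support being enabled, which is configured on the Connect worker rather than on the connector.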
Connect Sink (Kafka 3.0–4.2)
config.action.reload: The action that Connect should take on the connector when changes in external configuration providers result in a change in the co… Default: restart.
connector.class: Name or alias of the class for this connector. Must be a subclass of org.apache.kafka.connect.connector.Connector. If the connecto… No default (required).
connector.plugin.version: Version of the connector. Default: null (4.1+).
errors.deadletterqueue.context.headers.enable: If true, add headers containing error context to the messages written to the dead letter queue. To avoid clashing with headers fro… Default: false.
errors.deadletterqueue.topic.name: The name of the topic to be used as the dead letter queue (DLQ) for messages that result in an error when processed by this sink c… Default: empty.
errors.deadletterqueue.topic.replication.factor: Replication factor used to create the dead letter queue topic when it doesn't already exist. Default: 3.
errors.log.enable: If true, write each error and the details of the failed operation and problematic record to the Connect application log. Default: false.
errors.log.include.messages: Whether to include in the log the Connect record that resulted in a failure. For sink records, the topic, partition, offset, and t… Default: false.
errors.retry.delay.max.ms: The maximum duration in milliseconds between consecutive retry attempts. Jitter will be added to the delay once this limit is reac… Default: 60000 (1min), unchanged 3.0–4.2.
errors.retry.timeout: The maximum duration in milliseconds that a failed operation will be reattempted. The default is 0, which means no retries will be… Default: 0.
errors.tolerance: Behavior for tolerating errors during connector operation. 'none' is the default value and signals that any error will result in a… Default: none.
header.converter: HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. Default: null.
header.converter.plugin.version: Version of the header converter. Default: null (4.1+).
key.converter: Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the f… Default: null.
key.converter.plugin.version: Version of the key converter. Default: null (4.1+).
name: Globally unique name to use for this connector. No default (required).
predicates: Aliases for the predicates used by transformations. No default.
tasks.max: Maximum number of tasks to use for this connector. Default: 1.
tasks.max.enforce: (Deprecated) Whether to enforce that the tasks.max property is respected by the connector. By default, connectors that generate to… Default: true (3.8+).
topics: List of topics to consume, separated by commas. No default.
topics.regex: Regular expression giving topics to consume. Under the hood, the regex is compiled to a java.util.regex.Pattern. Only one of topic… No default.
transforms: Aliases for the transformations to be applied to records. No default.
value.converter: Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the f… Default: null.
value.converter.plugin.version: Version of the value converter. Default: null (4.1+).
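The errors.* settings above are typically wired together: tolerate failures, retry with backoff, and route records that still fail to a dead letter queue. A sketch with hypothetical connector and topic names (FileStreamSinkConnector stands in for any sink connector):

```properties
# Hypothetical sink connector with full error handling.
name=my-sink
connector.class=org.apache.kafka.connect.file.FileStreamSinkConnector
topics=orders
# Tolerate all record-level errors instead of failing the task.
errors.tolerance=all
# Retry each failed operation for up to 5 minutes, capping the backoff at 1 minute.
errors.retry.timeout=300000
errors.retry.delay.max.ms=60000
# Route records that exhaust retries to a DLQ topic, with error context headers.
errors.deadletterqueue.topic.name=my-sink-dlq
errors.deadletterqueue.topic.replication.factor=3
errors.deadletterqueue.context.headers.enable=true
# Also log each failure, including the failing record.
errors.log.enable=true
errors.log.include.messages=true
```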
Kafka Streams (Kafka 3.0–4.2)
acceptable.recovery.lag: The maximum acceptable lag (number of offsets to catch up) for a client to be considered caught-up enough to receive an active tas… Default: 10000.
allow.os.group.write.access: Allows state store directories created by Kafka Streams to have write access for the OS group. Default: false (4.1+).
application.id: An identifier for the stream processing application. Must be unique within the Kafka cluster. It is used as 1) the default client-… No default (required).
application.server: A host:port pair pointing to a user-defined endpoint that can be used for state store discovery and interactive queries on this Ka… Default: empty.
bootstrap.servers: A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all ser… No default (required).
buffered.records.per.partition: Maximum number of records to buffer per partition. Default: 1000.
built.in.metrics.version: Version of the built-in metrics to use. Default: latest.
cache.max.bytes.buffering: Maximum number of memory bytes to be used for buffering across all threads. Default: 10485760 (10 MB).
client.id: An ID prefix string used for the client IDs of internal (main, restore, and global) consumers, producers, and admin clients with … Default: empty.
commit.interval.ms: The frequency in milliseconds with which to commit processing progress. For at-least-once processing, committing means to save the… Default: 30000 (30s), unchanged 3.0–4.2.
config.providers: Comma-separated names of ConfigProvider classes, loaded and used in the order specified. Implementing the interface ConfigProvide… No default.
connections.max.idle.ms: Close idle connections after the number of milliseconds specified by this config. Default: 540000 (9min), unchanged 3.0–4.2.
default.client.supplier: Client supplier class that implements the org.apache.kafka.streams.KafkaClientSupplier interface. Default: org.apache.kafka.streams.processor.internals.DefaultKafkaClientSupplier.
default.deserialization.exception.handler: Exception handling class that implements the org.apache.kafka.streams.errors.DeserializationExceptionHandler interface. Default: org.apache.kafka.streams.errors.LogAndFailExceptionHandler.
default.dsl.store: The default state store type used by DSL operators. Default: rocksDB (3.2+).
default.key.serde: Default serializer / deserializer class for keys that implements the org.apache.kafka.common.serialization.Serde interface. Note wh… Default: null.
default.list.key.serde.inner: Default inner class of list serde for keys that implements the org.apache.kafka.common.serialization.Serde interface. This configur… Default: null.
default.list.key.serde.type: Default class for keys that implements the java.util.List interface. This configuration will be read if and only if default.key.ser… Default: null.
default.list.value.serde.inner: Default inner class of list serde for values that implements the org.apache.kafka.common.serialization.Serde interface. This config… Default: null.
default.list.value.serde.type: Default class for values that implements the java.util.List interface. This configuration will be read if and only if default.value… Default: null.
default.production.exception.handler: Exception handling class that implements the org.apache.kafka.streams.errors.ProductionExceptionHandler interface. Default: org.apache.kafka.streams.errors.DefaultProductionExceptionHandler.
default.timestamp.extractor: Default timestamp extractor class that implements the org.apache.kafka.streams.processor.TimestampExtractor interface. Default: org.apache.kafka.streams.processor.FailOnInvalidTimestamp.
default.value.serde: Default serializer / deserializer class for values that implements the org.apache.kafka.common.serialization.Serde interface. Note … Default: null.
deserialization.exception.handler: Exception handling class that implements the org.apache.kafka.streams.errors.DeserializationExceptionHandler interface. Default: org.apache.kafka.streams.errors.LogAndFailExceptionHandler (4.0+).
dsl.store.suppliers.class: Defines which store implementations to plug in to DSL operators. Must implement the org.apache.kafka.streams.state.DslStoreSupplie… Default: org.apache.kafka.streams.state.BuiltInDslStoreSuppliers$RocksDBDslStoreSuppliers (3.7+).
enable.metrics.push: Whether to enable pushing of internal client metrics for (main, restore, and global) consumers, producers, and admin clients. Default: true (3.7+).
ensure.explicit.internal.resource.naming: Whether to enforce explicit naming for all internal resources of the topology, including internal topics (e.g., changelog and repa… Default: false (4.1+).
errors.dead.letter.queue.topic.name: If not null, the default exception handler will build and send a Dead Letter Queue record to the topic with the provided name if a… Default: null (4.2).
group.protocol: The group protocol the consumer should use. We currently support "classic" or "consumer". If "consumer" is specified, then the consume… Default: classic (4.1+).
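The serde and exception-handler defaults above are commonly overridden together. A sketch assuming String keys and values; on versions before 4.0, the handler key is default.deserialization.exception.handler instead:

```properties
# Skip and log malformed records instead of failing the stream thread
# (LogAndContinueExceptionHandler is the built-in alternative to the
# LogAndFailExceptionHandler default).
deserialization.exception.handler=org.apache.kafka.streams.errors.LogAndContinueExceptionHandler
# Default serdes, assuming String keys and values.
default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
```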
log.summary.interval.ms: The output interval in milliseconds for logging summary information. If greater than or equal to 0, the summary log will be output accor… Default: 120000 (2min) (3.9+).
max.task.idle.ms: This config controls whether joins and merges may produce out-of-order results. The config value is the maximum amount of time in … Default: 0.
max.warmup.replicas: The maximum number of warmup replicas (extra standbys beyond the configured num.standbys) that can be assigned at once for the pur… Default: 2.
metadata.max.age.ms: The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership cha… Default: 300000 (5min), unchanged 3.0–4.2.
metadata.recovery.rebootstrap.trigger.ms: If a client configured to rebootstrap using metadata.recovery.strategy=rebootstrap is unable to obtain metadata from any of the br… Default: 300000 (5min) (4.0+).
metadata.recovery.strategy: Controls how the client recovers when none of the brokers known to it is available. If set to none, the client fails. If set to re… Default: rebootstrap (4.0+).
metric.reporters: A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows p… Default: org.apache.kafka.common.metrics.JmxReporter (4.0+; empty in earlier versions, where JmxReporter was included automatically).
metrics.num.samples: The number of samples maintained to compute metrics. Default: 2.
metrics.recording.level: The highest recording level for metrics. Default: INFO.
metrics.sample.window.ms: The window of time a metrics sample is computed over. Default: 30000 (30s), unchanged 3.0–4.2.
num.standby.replicas: The number of standby replicas for each task. Default: 0.
num.stream.threads: The number of threads to execute stream processing. Default: 1.
poll.ms: The amount of time in milliseconds to block waiting for input. Default: 100 (100ms), unchanged 3.0–4.2.
probing.rebalance.interval.ms: The maximum time in milliseconds to wait before triggering a rebalance to probe for warmup replicas that have finished warming up … Default: 600000 (10min), unchanged 3.0–4.2.
processing.exception.handler: Exception handling class that implements the org.apache.kafka.streams.errors.ProcessingExceptionHandler interface. Note: This hand… Default: org.apache.kafka.streams.errors.LogAndFailProcessingExceptionHandler (3.9+).
processing.guarantee: The processing guarantee that should be used. Possible values are at_least_once (default) and exactly_once_v2 (requires brokers ve… Default: at_least_once.
processor.wrapper.class: A processor wrapper class or class name that implements the org.apache.kafka.streams.state.ProcessorWrapper interface. Must be pas… Default: org.apache.kafka.streams.processor.internals.NoOpProcessorWrapper (4.0+).
production.exception.handler: Exception handling class that implements the org.apache.kafka.streams.errors.ProductionExceptionHandler interface. Default: org.apache.kafka.streams.errors.DefaultProductionExceptionHandler (4.0+).
rack.aware.assignment.non_overlap_cost: Cost associated with moving tasks from the existing assignment. This config and rack.aware.assignment.traffic_cost control whether th… Default: null (3.6+).
rack.aware.assignment.strategy: The strategy used for rack aware assignment. Rack aware assignment will take client.rack and racks of TopicPartition into accoun… Default: none (3.6+).
rack.aware.assignment.tags: List of client tag keys used to distribute standby replicas across Kafka Streams instances. When configured, Kafka Streams will ma… No default.
rack.aware.assignment.traffic_cost: Cost associated with cross rack traffic. This config and rack.aware.assignment.non_overlap_cost control whether the optimization … Default: null (3.6+).
receive.buffer.bytes: The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. Default: 32768 (32 KB), unchanged 3.0–4.2.
reconnect.backoff.max.ms: The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provide… Default: 1000 (1s), unchanged 3.0–4.2.
reconnect.backoff.ms: The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a t… Default: 50 (50ms), unchanged 3.0–4.2.
repartition.purge.interval.ms: The frequency in milliseconds with which to delete fully consumed records from repartition topics. Purging will occur after at lea… Default: 30000 (30s) (3.2+).
replication.factor: The replication factor for change log topics and repartition topics created by the stream processing application. The default of -… Default: -1.
request.timeout.ms: The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not r… Default: 40000 (40s), unchanged 3.0–4.2.
retry.backoff.ms: The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending … Default: 100 (100ms), unchanged 3.0–4.2.
rocksdb.config.setter: A RocksDB config setter class or class name that implements the org.apache.kafka.streams.state.RocksDBConfigSetter interface. Default: null.
security.protocol: Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. Default: PLAINTEXT.
send.buffer.bytes: The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. Default: 131072 (128 KB), unchanged 3.0–4.2.
state.cleanup.delay.ms: The amount of time in milliseconds to wait before deleting state when a partition has migrated. Only state directories that have n… Default: 600000 (10min), unchanged 3.0–4.2.
state.dir: Directory location for state store. This path must be unique for each streams instance sharing the same underlying filesystem. Not… Default: ${java.io.tmpdir}/kafka-streams.
statestore.cache.max.bytes: Maximum number of memory bytes to be used for the statestore cache across all threads. Default: 10485760 (10 MB) (3.4+).
task.assignor.class: A task assignor class or class name implementing the org.apache.kafka.streams.processor.assignment.TaskAssignor interface… Default: null (3.8+).
task.timeout.ms: The maximum amount of time in milliseconds a task might stall due to internal errors and retries until an error is raised. For a t… Default: 300000 (5min), unchanged 3.0–4.2.
topology.optimization: A configuration telling Kafka Streams if it should optimize the topology and what optimizations to apply. Acceptable values are: "… Default: none.
upgrade.from: Allows upgrading in a backward compatible way. This is needed when upgrading from [0.10.0, 1.1] to 2.0+, or when upgrading from [2… Default: null.
window.size.ms: Sets window size for the deserializer in order to calculate window end times. Default: null.
windowed.inner.class.serde: Default serializer / deserializer for the inner class of a windowed record. Must implement the org.apache.kafka.common.serializati… Default: null.
windowstore.changelog.additional.retention.ms: Added to a window's maintainMs to ensure data is not deleted from the log prematurely. Allows for clock drift. Default: 86400000 (1d), unchanged 3.0–4.2.
auto.include.jmx.reporter: Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters. This configuration will be r… Default: true (3.4–3.9).
retries: Setting a value greater than zero will cause the client to resend any request that fails with a potentially transient error. It is… Default: 0 (3.0–3.9).
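The Streams settings above boil down to a small set of keys in practice, since only application.id and bootstrap.servers are required. A sketch of an application configuration; the id, address, and state directory are placeholders:

```properties
# Minimal Kafka Streams application configuration (values are illustrative).
application.id=my-streams-app
bootstrap.servers=localhost:9092
# Upgrade from the at_least_once default; requires brokers 2.5 or newer.
processing.guarantee=exactly_once_v2
# Parallelism and fault tolerance beyond the single-threaded, no-standby defaults.
num.stream.threads=4
num.standby.replicas=1
# Durable state location instead of ${java.io.tmpdir}/kafka-streams.
state.dir=/var/lib/kafka-streams
```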