
Debezium PostgreSQL Connector Configuration

All configuration properties for the Debezium PostgreSQL connector across versions 2.4–3.5, with each property's default value and the versions in which it is available.

107 properties · 10 versions (2.4–3.5)
Defaults are identical across every version in which a property appears. Truncated descriptions are marked with an ellipsis (…).

| Property | Description | Default | Versions |
| --- | --- | --- | --- |
| binary.handling.mode | Specify how binary (blob, binary, etc.) columns should be represented in change events, including: 'bytes' represents bi… | bytes | 2.4–3.5 |
| column.exclude.list | Regular expressions matching columns to exclude from change events | null | 2.4–3.5 |
| column.include.list | Regular expressions matching columns to include in change events | null | 2.4–3.5 |
| column.propagate.source.type | A comma-separated list of regular expressions matching fully-qualified names of columns that adds the column's original … | null | 2.4–3.5 |
| converters | Optional list of custom converters that would be used instead of default ones. The converters are defined using '<conver… | null | 2.4–3.5 |
| custom.metric.tags | The custom metric tags will accept key-value pairs to customize the MBean object name which should be appended to the end o… | null | 2.4–3.5 |
| database.dbname | The name of the database from which the connector should capture changes | null | 2.4–3.5 |
| database.hostname | Resolvable hostname or IP address of the database server. | null | 2.4–3.5 |
| database.initial.statements | A semicolon-separated list of SQL statements to be executed when a JDBC connection to the database is established. Note … | null | 2.4–3.5 |
| database.password | Password of the database user to be used when connecting to the database. | null | 2.4–3.5 |
| database.port | Port of the database server. | null | 2.4–3.5 |
| database.sslcert | File containing the SSL certificate for the client. See the Postgres SSL docs for further information | null | 2.4–3.5 |
| database.sslfactory | The name of a class that creates SSL sockets. Use org.postgresql.ssl.NonValidatingFactory to disable SSL validation in de… | null | 2.4–3.5 |
| database.sslkey | File containing the SSL private key for the client. See the Postgres SSL docs for further information | null | 2.4–3.5 |
| database.sslmode | Whether to use an encrypted connection to Postgres. Options include: 'disable' (the default) to use an unencrypted conne… | prefer | 2.4–3.5 |
| database.sslpassword | Password to access the client private key from the file specified by 'database.sslkey'. See the Postgres SSL docs for fu… | null | 2.4–3.5 |
| database.sslrootcert | File containing the root certificate(s) against which the server is validated. See the Postgres JDBC SSL docs for furthe… | null | 2.4–3.5 |
| database.tcpKeepAlive | Enable or disable the TCP keep-alive probe to avoid dropping the TCP connection | true | 2.4–3.5 |
| database.user | Name of the database user to be used when connecting to the database. | null | 2.4–3.5 |
| datatype.propagate.source.type | A comma-separated list of regular expressions matching the database-specific data type names that adds the data type's o… | null | 2.4–3.5 |
| decimal.handling.mode | Specify how DECIMAL and NUMERIC columns should be represented in change events, including: 'precise' (the default) uses … | precise | 2.4–3.5 |
| errors.max.retries | The maximum number of retries on connection errors before failing (-1 = no limit, 0 = disabled, > 0 = number of retries). | null | 2.4–3.5 |
| event.processing.failure.handling.mode | Specify how failures during processing of events (i.e. when encountering a corrupted event) should be handled, including… | fail | 2.4–3.5 |
| field.name.adjustment.mode | Specify how field names should be adjusted for compatibility with the message converter used by the connector, including… | none | 2.4–3.5 |
| flush.lsn.source | (Deprecated) Boolean to determine if Debezium should flush the LSN in the source Postgres database. Use 'lsn.flush.mode' ins… | true | 2.4–3.5 |
| hstore.handling.mode | Specify how HSTORE columns should be represented in change events, including: 'json' represents values as string-ified J… | json | 2.4–3.5 |
| include.schema.changes | Whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database … | true | 2.4–3.5 |
| include.schema.comments | Whether the connector parses table and column comments into metadata objects. Note: enabling this option will bring the impli… | false | 2.4–3.5 |
| include.unknown.datatypes | Specify whether fields of data types not supported by Debezium should be processed: 'false' (the default) omits the f… | false | 2.4–3.5 |
| incremental.snapshot.allow.schema.changes | Detect schema changes during an incremental snapshot and re-select the current chunk to avoid locking DDLs. Note that chang… | false | 2.4–3.5 |
| incremental.snapshot.chunk.size | The maximum size of a chunk (number of documents/rows) for incremental snapshotting | 1024 | 2.4–3.5 |
| interval.handling.mode | Specify how INTERVAL columns should be represented in change events, including: 'string' represents values as an exact I… | numeric | 2.4–3.5 |
| max.batch.size | Maximum size of each batch of source records. Defaults to … | null | 2.4–3.5 |
| max.queue.size | Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. Defaults to … | null | 2.4–3.5 |
| max.queue.size.in.bytes | Maximum size of the queue in bytes for change events read from the database log but not yet recorded or forwarded. Defau… | null | 2.4–3.5 |
| message.key.columns | A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as the message key. Eac… | null | 2.4–3.5 |
| message.prefix.exclude.list | A comma-separated list of regular expressions that match the logical decoding message prefixes to be excluded from monit… | null | 2.4–3.5 |
| message.prefix.include.list | A comma-separated list of regular expressions that match the logical decoding message prefixes to be monitored. All pref… | null | 2.4–3.5 |
| money.fraction.digits | Number of fractional digits when the money type is converted to a 'precise' decimal number. | 2 | 2.4–3.5 |
| notification.enabled.channels | List of notification channel names that are enabled. | null | 2.4–3.5 |
| plugin.name | The name of the Postgres logical decoding plugin installed on the server. Supported values are '' and ''. Defaults to ''… | decoderbufs | 2.4–3.5 |
| poll.interval.ms | Time to wait for new change events to appear after receiving no events, given in milliseconds. Defaults to 500 ms. | null | 2.4–3.5 |
| provide.transaction.metadata | Enables transaction metadata extraction together with event counting | false | 2.4–3.5 |
| publication.autocreate.mode | Applies only when streaming changes using pgoutput. Determines how creation of a publication should work; the default is a… | all_tables | 2.4–3.5 |
| publication.name | The name of the Postgres 10+ publication used for streaming changes from a plugin. Defaults to ''… | null | 2.4–3.5 |
| query.fetch.size | The maximum number of records that should be loaded into memory while streaming. A value of '0' uses the default JDBC fe… | null | 2.4–3.5 |
| replica.identity.autoset.values | Applies only when streaming changes using pgoutput. Determines the value for Replica Identity at the table level. This option… | null | 2.4–3.5 |
| retriable.restart.connector.wait.ms | Time to wait before restarting the connector after a retriable exception occurs. Defaults to … ms. | null | 2.4–3.5 |
| schema.name.adjustment.mode | Specify how schema names should be adjusted for compatibility with the message converter used by the connector, includin… | none | 2.4–3.5 |
| schema.refresh.mode | Specify the conditions that trigger a refresh of the in-memory schema for a table. 'columns_diff' (the default) is the s… | columns_diff | 2.4–3.5 |
| signal.data.collection | The name of the data collection that is used to send signals/commands to Debezium. For multi-partition mode connectors, … | null | 2.4–3.5 |
| signal.enabled.channels | List of channel names that are enabled. The source channel is enabled by default | source | 2.4–3.5 |
| signal.poll.interval.ms | Interval for looking for new signals in registered channels, given in milliseconds. Defaults to 5 seconds. | 5000 | 2.4–3.5 |
| skip.messages.without.change | Enable to skip publishing messages when there is no change in included columns. This would essentially filter messages to… | false | 2.4–3.5 |
| skipped.operations | The comma-separated list of operations to skip during streaming, defined as: 'c' for inserts/create; 'u' for updates; 'd… | t | 2.4–3.5 |
| slot.drop.on.stop | Whether or not to drop the logical replication slot when the connector finishes orderly. By default the replication is k… | false | 2.4–3.5 |
| slot.max.retries | How many times to retry connecting to a replication slot when an attempt fails. | null | 2.4–3.5 |
| slot.name | The name of the Postgres logical decoding slot created for streaming changes from a plugin. Defaults to 'debezium' | null | 2.4–3.5 |
| slot.retry.delay.ms | Time to wait between retry attempts when the connector fails to connect to a replication slot, given in milliseconds. De… | null | 2.4–3.5 |
| slot.stream.params | Any optional parameters used by the logical decoding plugin. Semicolon-separated. E.g. 'add-tables=public.table,public.tabl… | null | 2.4–3.5 |
| snapshot.custom.class | When 'snapshot.mode' is set to custom, this setting must be set to specify a fully qualified class name to load (via the… | null | 2.4–2.5 (removed in 2.6) |
| snapshot.delay.ms | A delay period before a snapshot will begin, given in milliseconds. Defaults to 0 ms. | 0 | 2.4–3.5 |
| snapshot.fetch.size | The maximum number of records that should be loaded into memory while performing a snapshot. | null | 2.4–3.5 |
| snapshot.include.collection.list | This setting must be set to specify a list of tables/collections whose snapshot must be taken on creating or restarting … | null | 2.4–3.5 |
| snapshot.lock.timeout.ms | The maximum number of milliseconds to wait for table locks at the beginning of a snapshot. If locks cannot be acquired in this… | null | 2.4–3.5 |
| snapshot.max.threads | The maximum number of threads used to perform the snapshot. Defaults to 1. | 1 | 2.4–3.5 |
| snapshot.mode | The criteria for running a snapshot upon startup of the connector. Select one of the following snapshot options: 'always… | initial | 2.4–3.5 |
| snapshot.select.statement.overrides | This property contains a comma-separated list of fully-qualified tables (DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME)… | null | 2.4–3.5 |
| snapshot.tables.order.by.row.count | Controls the order in which tables are processed in the initial snapshot. A `descending` value will order the tables by … | disabled | 2.4–3.5 |
| sourceinfo.struct.maker | The name of the SourceInfoStructMaker class that returns the SourceInfo schema and struct. | null | 2.4–3.5 |
| status.update.interval.ms | Frequency for sending replication connection status updates to the server, given in milliseconds. Defaults to 10 seconds… | null | 2.4–3.5 |
| table.ignore.builtin | Flag specifying whether built-in tables should be ignored. | true | 2.4–3.5 |
| time.precision.mode | Time, date, and timestamps can be represented with different kinds of precision, including: 'adaptive' (the default) ba… | adaptive | 2.4–3.5 |
| tombstones.on.delete | Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a d… | true | 2.4–3.5 |
| topic.naming.strategy | The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change… | null | 2.4–3.5 |
| topic.prefix | Topic prefix that identifies and provides a namespace for the particular database server/cluster whose changes are being captured. T… | null | 2.4–3.5 |
| unavailable.value.placeholder | Specify the constant that will be provided by Debezium to indicate that the original value is unavailable and not provid… | null | 2.4–3.5 |
| xmin.fetch.interval.ms | Specify how often (in ms) the xmin will be fetched from the replication slot. This xmin value is exposed by the slot whi… | 0 | 2.4–3.5 |
| incremental.snapshot.watermarking.strategy | Specify the strategy used for watermarking during an incremental snapshot: 'insert_insert' both open and close signal is… | INSERT_INSERT | 2.5–3.5 |
| post.processors | Optional list of post processors. The processors are defined using the '<post.processor.prefix>.type' config option and conf… | null | 2.5–3.5 |
| event.converting.failure.handling.mode | Specify how failures during conversion of events should be handled, including: 'fail' throws an exception that the column … | warn | 2.6–3.5 |
| snapshot.locking.mode | Controls how the connector holds locks on tables while performing the schema snapshot. 'shared' means the conn… | none | 2.6–3.5 |
| snapshot.locking.mode.custom.name | When 'snapshot.locking.mode' is set to custom, this setting must be set to specify the name of the custom implementati… | null | 2.6–3.5 |
| snapshot.mode.configuration.based.snapshot.data | When 'snapshot.mode' is set to configuration_based, this setting specifies whether the data should be snapshot… | false | 2.6–3.5 |
| snapshot.mode.configuration.based.snapshot.on.data.error | When 'snapshot.mode' is set to configuration_based, this setting specifies whether the data should be snapshot… | false | 2.6–3.5 |
| snapshot.mode.configuration.based.snapshot.on.schema.error | When 'snapshot.mode' is set to configuration_based, this setting specifies whether the schema should be snapsh… | false | 2.6–3.5 |
| snapshot.mode.configuration.based.snapshot.schema | When 'snapshot.mode' is set to configuration_based, this setting specifies whether the schema should be snapsh… | false | 2.6–3.5 |
| snapshot.mode.configuration.based.start.stream | When 'snapshot.mode' is set to configuration_based, this setting specifies whether the stream should start or … | false | 2.6–3.5 |
| snapshot.mode.custom.name | When 'snapshot.mode' is set to custom, this setting must be set to specify the name of the custom implementation provi… | null | 2.6–3.5 |
| snapshot.query.mode | Controls the query used during the snapshot | select_all | 2.6–3.5 |
| snapshot.query.mode.custom.name | When 'snapshot.query.mode' is set to custom, this setting must be set to specify the name of the custom implementation… | null | 2.6–3.5 |
| database.query.timeout.ms | Time to wait for a query to execute, given in milliseconds. Defaults to 600 seconds (600,000 ms); zero means there is no… | 600000 | 2.7–3.5 |
| read.only | Switches the connector to use alternative methods to deliver signals to Debezium instead of writing to the signaling table | false | 2.7–3.5 |
| streaming.delay.ms | A delay period after the snapshot is completed and before streaming begins, given in milliseconds. Defaults to 0 ms. | 0 | 2.7–3.5 |
| transaction.metadata.factory | Class to make the transaction context and transaction struct/schemas | null | 2.7–3.5 |
| slot.failover | Whether or not to create a failover slot. This is only supported when connecting to a primary server of a Postgres clust… | false | 3.0–3.5 |
| snapshot.isolation.mode | Controls which transaction isolation level is used. The default is '', which means that the serializable isolation level is … | serializable | 3.0–3.5 |
| connection.validation.timeout.ms | The maximum time in milliseconds to wait for connection validation to complete. Defaults to 60 seconds. | null | 3.2–3.5 |
| executor.shutdown.timeout.ms | The maximum time in milliseconds to wait for the task executor to shut down. | null | 3.2–3.5 |
| lsn.flush.timeout.action | Action to take when an LSN flush timeout occurs. Options include: 'fail' (default) to fail the connector; 'warn' to log … | fail | 3.2–3.5 |
| lsn.flush.timeout.ms | Maximum time in milliseconds to wait for the LSN flush operation to complete. If the flush operation does not complete withi… | null | 3.2–3.5 |
| publish.via.partition.root | A boolean that determines whether the connector should publish changes via the partition root. When true, changes are pu… | false | 3.2–3.5 |
| guardrail.collections.limit.action | Specify the action to take when a guardrail collections limit is exceeded: 'warn' (the default) logs a warning message a… | warn | 3.3–3.5 |
| guardrail.collections.max | The maximum number of collections or tables that can be captured by the connector. When this limit is exceeded, the acti… | 0 | 3.3–3.5 |
| lsn.flush.mode | Determines the LSN flushing strategy. Options include: 'connector' (default) for Debezium-managed LSN flushing (replaces… | null | 3.4–3.5 |
| offset.mismatch.strategy | Determines behavior when the connector's stored offset LSN differs from the replication slot's confirmed LSN. 'no_valida… | no_validation | 3.4–3.5 |
| snapshot.max.threads.multiplier | The factor used to scale the number of snapshot chunks per table. The default behavior is to take 'row_count/snapshot.ma… | 1 | 3.5 |
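To see how a handful of these properties fit together, the sketch below shows a minimal Kafka Connect registration payload for the connector. The connector class and property keys are standard Debezium; the hostname, credentials, slot name, and topic prefix are placeholder values, and `pgoutput` is one of the logical decoding plugins the connector supports (the table's default is `decoderbufs`).

```json
{
  "name": "inventory-pg-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",

    "database.hostname": "db.example.internal",
    "database.port": "5432",
    "database.user": "debezium",
    "database.password": "********",
    "database.dbname": "inventory",

    "topic.prefix": "inventory",

    "plugin.name": "pgoutput",
    "slot.name": "debezium_inventory",

    "snapshot.mode": "initial",
    "decimal.handling.mode": "precise",
    "tombstones.on.delete": "true"
  }
}
```

Submitted to the Kafka Connect REST endpoint (`POST /connectors`), a configuration along these lines takes an initial snapshot of the `inventory` database and then streams changes from the named replication slot; any of the other properties in the table above can be added to the `config` map in the same way.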