
Debezium SQL Server Connector Configuration

All configuration properties for the Debezium SQL Server connector across versions 2.4–3.5.

76 properties · 10 versions (2.4–3.5)
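Before the full reference, here is a minimal sketch of a connector registration payload that uses the core connection properties from the table (`database.hostname`, `database.names`, `topic.prefix`, and so on). Hostnames, credentials, and the connector name are placeholders, not values from this document; the schema-history settings are the standard Debezium 2.x+ requirements for this connector.

```python
import json

# Minimal Debezium SQL Server connector registration payload, built from the
# core properties in the reference table. All hostnames, credentials, and
# names below are placeholders -- adjust them for your environment.
config = {
    "name": "inventory-connector",  # hypothetical connector name
    "config": {
        "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
        "database.hostname": "sqlserver",    # placeholder host
        "database.port": "1433",
        "database.user": "debezium",         # placeholder credentials
        "database.password": "dbz-secret",
        "database.names": "inventory",       # databases to capture
        "topic.prefix": "server1",           # namespace for all emitted topics
        "snapshot.mode": "initial",          # default: snapshot, then stream
        "decimal.handling.mode": "precise",  # default DECIMAL/NUMERIC encoding
        # Debezium 2.x+ requires a schema-history topic for this connector:
        "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
        "schema.history.internal.kafka.topic": "schemahistory.inventory",
    },
}

payload = json.dumps(config, indent=2)
print(payload)
```

The resulting JSON is what you would POST to the Kafka Connect REST API (e.g. `curl -X POST -H "Content-Type: application/json" --data @connector.json http://connect:8083/connectors`).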
In the table below, *Since* is the first version that includes the property, and *Default* is its default value (a dash means no default). Where a default changed between versions, both values are listed with their version ranges. Truncated descriptions are marked with an ellipsis.

| Property | Since | Default | Description |
|---|---|---|---|
| `binary.handling.mode` | 2.4 | `bytes` | Specify how binary (blob, binary, etc.) columns should be represented in change events, including: 'bytes' represents bi… |
| `column.exclude.list` | 2.4 | – | Regular expressions matching columns to exclude from change events. |
| `column.include.list` | 2.4 | – | Regular expressions matching columns to include in change events. |
| `column.propagate.source.type` | 2.4 | – | A comma-separated list of regular expressions matching fully-qualified names of columns that adds the column's original … |
| `converters` | 2.4 | – | Optional list of custom converters that would be used instead of default ones. The converters are defined using '<conver… |
| `custom.metric.tags` | 2.4 | – | The custom metric tags will accept key-value pairs to customize the MBean object name, which should be appended to the end o… |
| `database.dbname` | 2.4 | – | The name of the database from which the connector should capture changes. |
| `database.hostname` | 2.4 | – | Resolvable hostname or IP address of the database server. |
| `database.names` | 2.4 | – | The names of the databases from which the connector should capture changes. |
| `database.password` | 2.4 | – | Password of the database user to be used when connecting to the database. |
| `database.port` | 2.4 | – | Port of the database server. |
| `database.user` | 2.4 | – | Name of the database user to be used when connecting to the database. |
| `datatype.propagate.source.type` | 2.4 | – | A comma-separated list of regular expressions matching the database-specific data type names that adds the data type's o… |
| `decimal.handling.mode` | 2.4 | `precise` | Specify how DECIMAL and NUMERIC columns should be represented in change events, including: 'precise' (the default) uses … |
| `errors.max.retries` | 2.4 | – | The maximum number of retries on connection errors before failing (-1 = no limit, 0 = disabled, > 0 = number of retries). |
| `event.processing.failure.handling.mode` | 2.4 | `fail` | Specify how failures during processing of events (i.e. when encountering a corrupted event) should be handled, including… |
| `field.name.adjustment.mode` | 2.4 | `none` | Specify how field names should be adjusted for compatibility with the message converter used by the connector, including… |
| `include.schema.changes` | 2.4 | `true` | Whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database … |
| `include.schema.comments` | 2.4 | `false` | Whether the connector parses table and column comments into metadata objects. Note: enabling this option brings the impli… |
| `incremental.snapshot.allow.schema.changes` | 2.4 | `false` | Detect schema change during an incremental snapshot and re-select a current chunk to avoid locking DDLs. Note that chang… |
| `incremental.snapshot.chunk.size` | 2.4 | `1024` | The maximum size of chunk (number of documents/rows) for incremental snapshotting. |
| `incremental.snapshot.option.recompile` | 2.4 | `false` | Add OPTION(RECOMPILE) to each SELECT statement during the incremental snapshot process. This prevents parameter sniffing… |
| `max.batch.size` | 2.4 | – | Maximum size of each batch of source records. |
| `max.queue.size` | 2.4 | – | Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. |
| `max.queue.size.in.bytes` | 2.4 | – | Maximum size of the queue in bytes for change events read from the database log but not yet recorded or forwarded. |
| `message.key.columns` | 2.4 | – | A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as message key. Eac… |
| `notification.enabled.channels` | 2.4 | – | List of notification channel names that are enabled. |
| `poll.interval.ms` | 2.4 | – | Time to wait for new change events to appear after receiving no events, given in milliseconds. Defaults to 500 ms. |
| `provide.transaction.metadata` | 2.4 | `false` | Enables transaction metadata extraction together with event counting. |
| `query.fetch.size` | 2.4 | – | The maximum number of records that should be loaded into memory while streaming. A value of '0' uses the default JDBC fe… |
| `retriable.restart.connector.wait.ms` | 2.4 | – | Time to wait before restarting the connector after a retriable exception occurs. |
| `schema.name.adjustment.mode` | 2.4 | `none` | Specify how schema names should be adjusted for compatibility with the message converter used by the connector, includin… |
| `signal.data.collection` | 2.4 | – | The name of the data collection that is used to send signals/commands to Debezium. For multi-partition mode connectors, … |
| `signal.enabled.channels` | 2.4 | `source` | List of channel names that are enabled. The source channel is enabled by default. |
| `signal.poll.interval.ms` | 2.4 | `5000` | Interval for looking for new signals in registered channels, given in milliseconds. Defaults to 5 seconds. |
| `skip.messages.without.change` | 2.4 | `false` | Enable to skip publishing messages when there is no change in included columns. This would essentially filter messages to… |
| `skipped.operations` | 2.4 | `t` | The comma-separated list of operations to skip during streaming, defined as: 'c' for inserts/create; 'u' for updates; 'd… |
| `snapshot.delay.ms` | 2.4 | `0` | A delay period before a snapshot will begin, given in milliseconds. Defaults to 0 ms. |
| `snapshot.fetch.size` | 2.4 | – | The maximum number of records that should be loaded into memory while performing a snapshot. |
| `snapshot.include.collection.list` | 2.4 | – | This setting must be set to specify a list of tables/collections whose snapshot must be taken on creating or restarting … |
| `snapshot.isolation.mode` | 2.4 | `repeatable_read` | Controls which transaction isolation level is used and how long the connector locks the captured tables. The default is … |
| `snapshot.lock.timeout.ms` | 2.4 | – | The maximum number of milliseconds to wait for table locks at the beginning of a snapshot. If locks cannot be acquired in this… |
| `snapshot.max.threads` | 2.4 | `1` | The maximum number of threads used to perform the snapshot. Defaults to 1. |
| `snapshot.mode` | 2.4 | `initial` | The criteria for running a snapshot upon startup of the connector. Select one of the following snapshot options: 'initia… |
| `snapshot.select.statement.overrides` | 2.4 | – | This property contains a comma-separated list of fully-qualified tables (DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME)… |
| `snapshot.tables.order.by.row.count` | 2.4 | `disabled` | Controls the order in which tables are processed in the initial snapshot. A `descending` value will order the tables by … |
| `sourceinfo.struct.maker` | 2.4 | – | The name of the SourceInfoStructMaker class that returns the SourceInfo schema and struct. |
| `table.ignore.builtin` | 2.4 | `true` | Flag specifying whether built-in tables should be ignored. |
| `time.precision.mode` | 2.4 | `adaptive` | Time, date, and timestamps can be represented with different kinds of precision, including: 'adaptive' (the default) ba… |
| `tombstones.on.delete` | 2.4 | `true` | Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a d… |
| `topic.naming.strategy` | 2.4 | – | The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change… |
| `topic.prefix` | 2.4 | – | Topic prefix that identifies and provides a namespace for the particular database server/cluster that is capturing changes. T… |
| `unavailable.value.placeholder` | 2.4 | – | Specify the constant that will be provided by Debezium to indicate that the original value is unavailable and not provid… |
| `incremental.snapshot.watermarking.strategy` | 2.5 | `INSERT_INSERT` | Specify the strategy used for watermarking during an incremental snapshot: 'insert_insert' both open and close signal is… |
| `post.processors` | 2.5 | – | Optional list of post processors. The processors are defined using '<post.processor.prefix>.type' config option and conf… |
| `data.query.mode` | 2.6 | `function` (2.6–3.3), `direct` (3.4+) | Controls how the connector queries CDC data. The default makes the connector query the change tables dir… |
| `event.converting.failure.handling.mode` | 2.6 | `warn` | Specify how failures during conversion of events should be handled, including: 'fail' throws an exception that the column … |
| `snapshot.locking.mode` | 2.6 | `exclusive` | Controls how the connector holds locks on tables while performing the schema snapshot when `snapshot.isolation.mode` is … |
| `snapshot.locking.mode.custom.name` | 2.6 | – | When 'snapshot.locking.mode' is set to custom, this setting must be set to specify the name of the custom implementati… |
| `snapshot.mode.configuration.based.snapshot.data` | 2.6 | `false` | When 'snapshot.mode' is set to configuration_based, this setting specifies whether the data should be snapshot… |
| `snapshot.mode.configuration.based.snapshot.on.data.error` | 2.6 | `false` | When 'snapshot.mode' is set to configuration_based, this setting specifies whether the data should be snapshot… |
| `snapshot.mode.configuration.based.snapshot.on.schema.error` | 2.6 | `false` | When 'snapshot.mode' is set to configuration_based, this setting specifies whether the schema should be snapsh… |
| `snapshot.mode.configuration.based.snapshot.schema` | 2.6 | `false` | When 'snapshot.mode' is set to configuration_based, this setting specifies whether the schema should be snapsh… |
| `snapshot.mode.configuration.based.start.stream` | 2.6 | `false` | When 'snapshot.mode' is set to configuration_based, this setting specifies whether the stream should start or … |
| `snapshot.mode.custom.name` | 2.6 | – | When 'snapshot.mode' is set to custom, this setting must be set to specify the name of the custom implementation provi… |
| `snapshot.query.mode` | 2.6 | `select_all` | Controls the query used during the snapshot. |
| `snapshot.query.mode.custom.name` | 2.6 | – | When 'snapshot.query.mode' is set to custom, this setting must be set to specify the name of the custom implementation… |
| `database.query.timeout.ms` | 2.7 | `600000` | Time to wait for a query to execute, given in milliseconds. Defaults to 600 seconds (600,000 ms); zero means there is no… |
| `streaming.delay.ms` | 2.7 | `0` | A delay period after the snapshot is completed and the streaming begins, given in milliseconds. Defaults to 0 ms. |
| `transaction.metadata.factory` | 2.7 | – | Class to make the transaction context and transaction struct/schemas. |
| `streaming.fetch.size` | 3.1 | `0` | Specifies the maximum number of rows that should be read in one go from each table while streaming. The connector will r… |
| `connection.validation.timeout.ms` | 3.2 | – | The maximum time in milliseconds to wait for connection validation to complete. Defaults to 60 seconds. |
| `executor.shutdown.timeout.ms` | 3.2 | – | The maximum time in milliseconds to wait for the task executor to shut down. |
| `guardrail.collections.limit.action` | 3.3 | `warn` | Specify the action to take when a guardrail collections limit is exceeded: 'warn' (the default) logs a warning message a… |
| `guardrail.collections.max` | 3.3 | `0` | The maximum number of collections or tables that can be captured by the connector. When this limit is exceeded, the acti… |
| `snapshot.max.threads.multiplier` | 3.5 | `1` | The factor used to scale the number of snapshot chunks per table. The default behavior is to take 'row_count/snapshot.ma… |
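The `signal.data.collection` and `signal.enabled.channels` properties above enable ad-hoc actions such as incremental snapshots via the source signal channel (enabled by default). As a sketch, this is the shape of the signal row a client inserts; the signal table name (`dbo.debezium_signal`) and the captured table (`testDB.dbo.customers`) are example names, and the signal table must match the connector's `signal.data.collection` setting.

```python
import json
import uuid

# Build an execute-snapshot signal for an ad-hoc incremental snapshot.
# Each signal row needs a unique id, a type, and a JSON data payload.
signal_id = str(uuid.uuid4())
signal_type = "execute-snapshot"
signal_data = json.dumps({
    "data-collections": ["testDB.dbo.customers"],  # example table to re-snapshot
    "type": "incremental",
})

# The row the connector polls for (see signal.poll.interval.ms); run this
# INSERT against the SQL Server database the connector is capturing:
stmt = (
    "INSERT INTO dbo.debezium_signal (id, type, data) "
    f"VALUES ('{signal_id}', '{signal_type}', '{signal_data}')"
)
print(stmt)
```

The connector picks up the row on its next signal poll and snapshots the listed tables in chunks of `incremental.snapshot.chunk.size` rows while streaming continues.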