All configuration properties for the Debezium MariaDB connector across versions 2.4–3.5. A '—' cell means the property does not exist in that release; a trailing '..' marks a description truncated for readability.
| Property | Description | 2.4 | 2.5 | 2.6 | 2.7 | 3.0 | 3.1 | 3.2 | 3.3 | 3.4 | 3.5 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| bigint.unsigned.handling.mode | Specify how BIGINT UNSIGNED columns should be represented in change events, including: 'precise' uses java.math.BigDecim.. | — | — | — | long | long | long | long | long | long | long |
| binary.handling.mode | Specify how binary (blob, binary, etc.) columns should be represented in change events, including: 'bytes' represents bi.. | — | — | — | bytes | bytes | bytes | bytes | bytes | bytes | bytes |
| binlog.buffer.size | The size of a look-ahead buffer used by the binlog reader to decide whether the transaction in progress is going to be c.. | — | — | — | null | null | null | null | null | null | null |
| column.exclude.list | Regular expressions matching columns to exclude from change events | — | — | — | null | null | null | null | null | null | null |
| column.include.list | Regular expressions matching columns to include in change events | — | — | — | null | null | null | null | null | null | null |
| column.propagate.source.type | A comma-separated list of regular expressions matching fully-qualified names of columns that adds the column’s original .. | — | — | — | null | null | null | null | null | null | null |
| connect.keep.alive | Whether a separate thread should be used to ensure the connection is kept alive. | — | — | — | true | true | true | true | true | true | true |
| connect.keep.alive.interval.ms | Interval for connection checking if the keep-alive thread is used, given in milliseconds. Defaults to 1 minute (60,000 ms). | — | — | — | null | null | null | null | null | null | null |
| connect.timeout.ms | Maximum time to wait after trying to connect to the database before timing out, given in milliseconds. Defaults to 30 se.. | — | — | — | 30000 | 30000 | 30000 | 30000 | 30000 | 30000 | 30000 |
| converters | Optional list of custom converters that would be used instead of default ones. The converters are defined using '<conver.. | — | — | — | null | null | null | null | null | null | null |
| custom.metric.tags | The custom metric tags will accept key-value pairs to customize the MBean object name which should be appended the end o.. | — | — | — | null | null | null | null | null | null | null |
| database.dbname | The name of the database from which the connector should capture changes | — | — | — | null | null | null | null | null | null | null |
| database.hostname | Resolvable hostname or IP address of the database server. | — | — | — | null | null | null | null | null | null | null |
| database.initial.statements | A semicolon separated list of SQL statements to be executed when a JDBC connection (not binlog reading connection) to th.. | — | — | — | null | null | null | null | null | null | null |
| database.password | Password of the database user to be used when connecting to the database. | — | — | — | null | null | null | null | null | null | null |
| database.port | Port of the database server. | — | — | — | null | null | null | null | null | null | null |
| database.query.timeout.ms | Time to wait for a query to execute, given in milliseconds. Defaults to 600 seconds (600,000 ms); zero means there is no.. | — | — | — | 600000 | 600000 | 600000 | 600000 | 600000 | 600000 | 600000 |
| database.server.id | A numeric ID of this database client, which must be unique across all currently-running database processes in the cluste.. | — | — | — | null | null | null | null | null | null | null |
| database.server.id.offset | Only relevant if parallel snapshotting is configured. During parallel snapshotting, multiple (4) connections open to the.. | — | — | — | 10000 | 10000 | 10000 | 10000 | 10000 | 10000 | 10000 |
| database.ssl.keystore | The location of the key store file. This is optional and can be used for two-way authentication between the client and t.. | — | — | — | null | null | null | null | null | null | null |
| database.ssl.keystore.password | The password for the key store file. This is optional and only needed if 'database.ssl.keystore' is configured. | — | — | — | null | null | null | null | null | null | null |
| database.ssl.mode | Whether to use an encrypted connection to the database. Options include: 'disable' to use an unencrypted connection; 'tr.. | — | — | — | preferred | disable | disable | disable | disable | disable | disable |
| database.ssl.truststore | The location of the trust store file for the server certificate verification. | — | — | — | null | null | null | null | null | null | null |
| database.ssl.truststore.password | The password for the trust store file. Used to check the integrity of the truststore, and unlock the truststore. | — | — | — | null | null | null | null | null | null | null |
| database.user | Name of the database user to be used when connecting to the database. | — | — | — | null | null | null | null | null | null | null |
| datatype.propagate.source.type | A comma-separated list of regular expressions matching the database-specific data type names that adds the data type's o.. | — | — | — | null | null | null | null | null | null | null |
| decimal.handling.mode | Specify how DECIMAL and NUMERIC columns should be represented in change events, including: 'precise' (the default) uses .. | — | — | — | precise | precise | precise | precise | precise | precise | precise |
| enable.time.adjuster | The database allows users to insert a year value as either 2 or 4 digits. In the case of two digits, the value is automa.. | — | — | — | true | true | true | true | true | true | true |
| errors.max.retries | The maximum number of retries on connection errors before failing (-1 = no limit, 0 = disabled, > 0 = num of retries). | — | — | — | null | null | null | null | null | null | null |
| event.converting.failure.handling.mode | Specify how failures during conversion of events should be handled, including: 'fail' throw an exception that the column .. | — | — | — | warn | warn | warn | warn | warn | warn | warn |
| event.deserialization.failure.handling.mode | Specify how failures during deserialization of binlog events (i.e. when encountering a corrupted event) should be handle.. | — | — | — | fail | fail | fail | fail | fail | fail | fail |
| event.processing.failure.handling.mode | Specify how failures during processing of events (i.e. when encountering a corrupted event) should be handled, including.. | — | — | — | fail | fail | fail | fail | fail | fail | fail |
| field.name.adjustment.mode | Specify how field names should be adjusted for compatibility with the message converter used by the connector, including.. | — | — | — | none | none | none | none | none | none | none |
| gtid.source.excludes | The source domain IDs used to exclude GTID ranges when determining the starting position in the server's binlog. | — | — | — | null | null | null | null | null | null | null |
| gtid.source.filter.dml.events | When set to true, only produce DML events for transactions that were written on the server with matching GTIDs defined b.. | — | — | — | true | true | true | true | true | true | true |
| gtid.source.includes | The source domain IDs used to include GTID ranges when determining the starting position in the server's binlog. | — | — | — | null | null | null | null | null | null | null |
| include.query | Whether the connector should include the original SQL query that generated the change event. Note: This option requires .. | — | — | — | false | false | false | false | false | false | false |
| include.schema.changes | Whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database .. | — | — | — | true | true | true | true | true | true | true |
| include.schema.comments | Whether the connector should parse table and column comments into the metadata object. Note: Enabling this option will bring the impli.. | — | — | — | false | false | false | false | false | false | false |
| inconsistent.schema.handling.mode | Specify how binlog events that belong to a table missing from internal schema representation (i.e. internal representati.. | — | — | — | fail | fail | fail | fail | fail | fail | fail |
| incremental.snapshot.allow.schema.changes | Detect schema change during an incremental snapshot and re-select a current chunk to avoid locking DDLs. Note that chang.. | — | — | — | false | false | false | false | false | false | false |
| incremental.snapshot.chunk.size | The maximum size of chunk (number of documents/rows) for incremental snapshotting | — | — | — | 1024 | 1024 | 1024 | 1024 | 1024 | 1024 | 1024 |
| incremental.snapshot.watermarking.strategy | Specify the strategy used for watermarking during an incremental snapshot: 'insert_insert' both open and close signal is.. | — | — | — | INSERT_INSERT | INSERT_INSERT | INSERT_INSERT | INSERT_INSERT | INSERT_INSERT | INSERT_INSERT | INSERT_INSERT |
| max.batch.size | Maximum size of each batch of source records. Defaults to 2048. | — | — | — | null | null | null | null | null | null | null |
| max.queue.size | Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. Defaults to 8192, .. | — | — | — | null | null | null | null | null | null | null |
| max.queue.size.in.bytes | Maximum size of the queue in bytes for change events read from the database log but not yet recorded or forwarded. Defau.. | — | — | — | null | null | null | null | null | null | null |
| message.key.columns | A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as message key. Eac.. | — | — | — | null | null | null | null | null | null | null |
| min.row.count.to.stream.results | The number of rows a table must contain to stream results rather than pull all into memory during snapshots. Defaults to.. | — | — | — | null | null | null | null | null | null | null |
| notification.enabled.channels | List of notification channels names that are enabled. | — | — | — | null | null | null | null | null | null | null |
| poll.interval.ms | Time to wait for new change events to appear after receiving no events, given in milliseconds. Defaults to 500 ms. | — | — | — | null | null | null | null | null | null | null |
| post.processors | Optional list of post processors. The processors are defined using '<post.processor.prefix>.type' config option and conf.. | — | — | — | null | null | null | null | null | null | null |
| provide.transaction.metadata | Enables transaction metadata extraction together with event counting | — | — | — | false | false | false | false | false | false | false |
| query.fetch.size | The maximum number of records that should be loaded into memory while streaming. A value of '0' uses the default JDBC fe.. | — | — | — | null | null | null | null | null | null | null |
| read.only | Switches the connector to use alternative methods to deliver signals to Debezium instead of writing to a signaling table | — | — | — | false | false | false | false | false | false | false |
| retriable.restart.connector.wait.ms | Time to wait before restarting the connector after a retriable exception occurs. Defaults to 10000 ms. | — | — | — | null | null | null | null | null | null | null |
| schema.name.adjustment.mode | Specify how schema names should be adjusted for compatibility with the message converter used by the connector, includin.. | — | — | — | none | none | none | none | none | none | none |
| signal.data.collection | The name of the data collection that is used to send signals/commands to Debezium. For multi-partition mode connectors, .. | — | — | — | null | null | null | null | null | null | null |
| signal.enabled.channels | List of channels names that are enabled. Source channel is enabled by default | — | — | — | source | source | source | source | source | source | source |
| signal.poll.interval.ms | Interval for looking for new signals in registered channels, given in milliseconds. Defaults to 5 seconds. | — | — | — | 5000 | 5000 | 5000 | 5000 | 5000 | 5000 | 5000 |
| skip.messages.without.change | Enable to skip publishing messages when there is no change in included columns. This would essentially filter messages to.. | — | — | — | false | false | false | false | false | false | false |
| skipped.operations | The comma-separated list of operations to skip during streaming, defined as: 'c' for inserts/create; 'u' for updates; 'd.. | — | — | — | t | t | t | t | t | t | t |
| snapshot.delay.ms | A delay period before a snapshot will begin, given in milliseconds. Defaults to 0 ms. | — | — | — | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| snapshot.fetch.size | The maximum number of records that should be loaded into memory while performing a snapshot. | — | — | — | null | null | null | null | null | null | null |
| snapshot.include.collection.list | This setting must be set to specify a list of tables/collections whose snapshot must be taken on creating or restarting .. | — | — | — | null | null | null | null | null | null | null |
| snapshot.lock.timeout.ms | The maximum number of millis to wait for table locks at the beginning of a snapshot. If locks cannot be acquired in this.. | — | — | — | null | null | null | null | null | null | null |
| snapshot.locking.mode | Controls how long the connector holds onto the global read lock while it is performing a snapshot. The default is 'minim.. | — | — | — | minimal | minimal | minimal | minimal | minimal | minimal | minimal |
| snapshot.locking.mode.custom.name | When 'snapshot.locking.mode' is set as custom, this setting must be set to specify the name of the custom implementati.. | — | — | — | null | null | null | null | null | null | null |
| snapshot.max.threads | The maximum number of threads used to perform the snapshot. Defaults to 1. | — | — | — | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| snapshot.mode | The criteria for running a snapshot upon startup of the connector. Select one of the following snapshot options: 'when_n.. | — | — | — | initial | initial | initial | initial | initial | initial | initial |
| snapshot.mode.configuration.based.snapshot.data | When 'snapshot.mode' is set as configuration_based, this setting permits to specify whenever the data should be snapshot.. | — | — | — | false | false | false | false | false | false | false |
| snapshot.mode.configuration.based.snapshot.on.data.error | When 'snapshot.mode' is set as configuration_based, this setting permits to specify whenever the data should be snapshot.. | — | — | — | false | false | false | false | false | false | false |
| snapshot.mode.configuration.based.snapshot.on.schema.error | When 'snapshot.mode' is set as configuration_based, this setting permits to specify whenever the schema should be snapsh.. | — | — | — | false | false | false | false | false | false | false |
| snapshot.mode.configuration.based.snapshot.schema | When 'snapshot.mode' is set as configuration_based, this setting permits to specify whenever the schema should be snapsh.. | — | — | — | false | false | false | false | false | false | false |
| snapshot.mode.configuration.based.start.stream | When 'snapshot.mode' is set as configuration_based, this setting permits to specify whenever the stream should start or .. | — | — | — | false | false | false | false | false | false | false |
| snapshot.mode.custom.name | When 'snapshot.mode' is set as custom, this setting must be set to specify the name of the custom implementation provi.. | — | — | — | null | null | null | null | null | null | null |
| snapshot.query.mode | Controls query used during the snapshot | — | — | — | select_all | select_all | select_all | select_all | select_all | select_all | select_all |
| snapshot.query.mode.custom.name | When 'snapshot.query.mode' is set as custom, this setting must be set to specify the name of the custom implementation.. | — | — | — | null | null | null | null | null | null | null |
| snapshot.select.statement.overrides | This property contains a comma-separated list of fully-qualified tables (DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME).. | — | — | — | null | null | null | null | null | null | null |
| snapshot.tables.order.by.row.count | Controls the order in which tables are processed in the initial snapshot. A `descending` value will order the tables by .. | — | — | — | disabled | disabled | disabled | disabled | disabled | disabled | disabled |
| sourceinfo.struct.maker | The name of the SourceInfoStructMaker class that returns SourceInfo schema and struct. | — | — | — | null | null | null | null | null | null | null |
| streaming.delay.ms | A delay period after the snapshot is completed and the streaming begins, given in milliseconds. Defaults to 0 ms. | — | — | — | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| table.ignore.builtin | Flag specifying whether built-in tables should be ignored. | — | — | — | true | true | true | true | true | true | true |
| time.precision.mode | Time, date, and timestamps can be represented with different kinds of precisions, including: 'adaptive' (the default) ba.. | — | — | — | adaptive | adaptive | adaptive | adaptive | adaptive | adaptive | adaptive |
| tombstones.on.delete | Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a d.. | — | — | — | true | true | true | true | true | true | true |
| topic.naming.strategy | The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change.. | — | — | — | null | null | null | null | null | null | null |
| topic.prefix | Topic prefix that identifies and provides a namespace for the particular database server/cluster from which changes are captured. T.. | — | — | — | null | null | null | null | null | null | null |
| transaction.metadata.factory | Class to make transaction context & transaction struct/schemas | — | — | — | null | null | null | null | null | null | null |
| unavailable.value.placeholder | Specify the constant that will be provided by Debezium to indicate that the original value is unavailable and not provid.. | — | — | — | null | null | null | null | null | null | null |
| use.nongraceful.disconnect | Whether to use `socket.setSoLinger(true, 0)` when BinaryLogClient keepalive thread triggers a disconnect for a stale con.. | — | — | — | false | false | false | false | false | false | false |
| connection.validation.timeout.ms | The maximum time in milliseconds to wait for connection validation to complete. Defaults to 60 seconds. | — | — | — | — | — | — | null | null | null | null |
| executor.shutdown.timeout.ms | The maximum time in milliseconds to wait for task executor to shut down. | — | — | — | — | — | — | null | null | null | null |
| guardrail.collections.limit.action | Specify the action to take when a guardrail collections limit is exceeded: 'warn' (the default) logs a warning message a.. | — | — | — | — | — | — | — | warn | warn | warn |
| guardrail.collections.max | The maximum number of collections or tables that can be captured by the connector. When this limit is exceeded, the acti.. | — | — | — | — | — | — | — | 0 | 0 | 0 |
| binlog.net.read.timeout | The number of seconds to wait for a read from the binlog connection to complete before the server times out. A value of .. | — | — | — | — | — | — | — | — | — | 0 |
| binlog.net.write.timeout | The number of seconds to wait for a write to the binlog connection to complete before the server times out. A value of 0.. | — | — | — | — | — | — | — | — | — | 0 |
| snapshot.max.threads.multiplier | The factor used to scale the number of snapshot chunks per table. The default behavior is to take 'row_count/snapshot.ma.. | — | — | — | — | — | — | — | — | — | 1 |
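For orientation, a minimal Kafka Connect registration payload exercising a handful of the properties above might look like the following sketch. The connector name, hostname, credentials, server ID, and topic prefix are placeholders; the connector class shown is the standard `io.debezium.connector.mariadb.MariaDbConnector`. Note that `database.ssl.mode` is set explicitly here because its default changed from `preferred` (2.7) to `disable` (3.0+).

```json
{
  "name": "inventory-mariadb-connector",
  "config": {
    "connector.class": "io.debezium.connector.mariadb.MariaDbConnector",
    "database.hostname": "mariadb.example.com",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz-secret",
    "database.server.id": "184054",
    "topic.prefix": "inventory",
    "database.ssl.mode": "disable",
    "snapshot.mode": "initial",
    "include.schema.changes": "true",
    "tombstones.on.delete": "true"
  }
}
```

Properties omitted from the payload fall back to the version-specific defaults in the table, so a config that relies on a default that changed between releases (such as `database.ssl.mode`) should pin the value explicitly before upgrading.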