The most common Debezium connector errors, with root causes and step-by-step fixes. Click any error for details.
ERROR: replication slot "debezium" already exists
ERROR: replication slot "debezium" is active for PID ...
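A minimal sketch of how to inspect and clear a stale PostgreSQL slot, assuming the default slot name `debezium` (substitute your `slot.name`). Only drop the slot if the old connector task is truly gone; otherwise reuse it.

```sql
-- Is the slot there, and is a walsender still holding it?
SELECT slot_name, active, active_pid FROM pg_replication_slots;

-- "is active for PID": terminate the stale walsender first
SELECT pg_terminate_backend(active_pid)
FROM pg_replication_slots
WHERE slot_name = 'debezium' AND active;

-- "already exists": drop the orphaned slot so the connector can recreate it
SELECT pg_drop_replication_slot('debezium');
```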
FATAL: could not write to file "pg_wal/..." — disk full (no Debezium error, PostgreSQL-side symptom)
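An abandoned replication slot forces PostgreSQL to retain WAL indefinitely, which eventually fills the disk. A quick way to spot the culprit (works on PostgreSQL 10+):

```sql
-- Inactive slots with large retained_wal are holding the WAL hostage
SELECT slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;
```

Dropping the inactive slot releases the retained WAL at the next checkpoint.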
Creation of replication slot failed; query to create replication slot timed out — make sure there are no long-running queries on the database
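Slot creation has to wait for all in-flight transactions to finish, so a single long-running transaction can exceed the timeout. A sketch for finding the blockers:

```sql
-- Oldest open transactions first; commit, roll back, or terminate them
SELECT pid, state, now() - xact_start AS xact_age, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 10;
```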
Saved offset is before replication slot's confirmed_flush_lsn; streaming will start from the confirmed_flush_lsn
The connector is trying to read binlog starting at binlog file 'mysql-bin.000003', pos=..., but this is no longer available on the server
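Once the required binlog file has been purged, the only recovery is a new snapshot; the lasting fix is longer binlog retention. A sketch for MySQL 8.0 (the retention value is an example):

```sql
SHOW BINARY LOGS;  -- is the file the connector wants still listed?

SELECT @@global.binlog_expire_logs_seconds;  -- current retention

-- Keep roughly 7 days of binlogs, persisted across restarts
SET PERSIST binlog_expire_logs_seconds = 604800;
```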
Command failed with error 286 (ChangeStreamHistoryLost) / error 280 (ChangeStreamFatalError): resume point was not found in the oplog
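A lost resume point means the oplog rolled over faster than the connector could read it; a new snapshot is required, and a larger oplog prevents a recurrence. A mongosh sketch, run against the replica-set primary (the 16000 MB size is an example):

```javascript
// How many hours of history does the oplog currently cover?
db.printReplicationInfo()

// Grow the oplog (size is in megabytes)
db.adminCommand({ replSetResizeOplog: 1, size: 16000 })
```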
No previous offset found — starting new snapshot
ERROR: publication "dbz_publication" does not exist
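The publication name must match the connector's `publication.name` (default `dbz_publication`). A sketch, with example table names:

```sql
-- Scope the publication to the captured tables
CREATE PUBLICATION dbz_publication FOR TABLE public.customers, public.orders;

-- Or, with superuser rights, capture everything:
-- CREATE PUBLICATION dbz_publication FOR ALL TABLES;
```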
ERROR: must be superuser to create FOR ALL TABLES publication
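Only `FOR ALL TABLES` publications need superuser rights. A connector-config sketch that sidesteps the requirement by auto-creating a publication scoped to the captured tables (the include list is an example):

```properties
# "filtered" creates the publication only for tables in the include list
publication.autocreate.mode=filtered
table.include.list=public.customers
```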
ERROR: logical decoding requires wal_level >= logical
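Logical decoding needs `wal_level = logical`, and changing it requires a server restart. A sketch:

```sql
ALTER SYSTEM SET wal_level = logical;
-- Restart PostgreSQL, then verify:
SHOW wal_level;
```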
The MySQL server is not configured to use a ROW binlog_format, which is required for this connector to work properly
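The connector needs row-based logging with full row images. A MySQL 8.0 sketch (`SET PERSIST` survives restarts; on older versions edit `my.cnf` instead):

```sql
SELECT @@global.binlog_format, @@global.binlog_row_image;  -- check current values

SET PERSIST binlog_format = 'ROW';
SET PERSIST binlog_row_image = 'FULL';
```

Already-connected sessions keep their old binlog format, so a server restart (or reconnect of all writers) is the safest way to make the change fully effective.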
Access denied for user 'debezium'@'%' (using password: YES) — REPLICATION CLIENT command denied
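The connector user needs the standard Debezium MySQL privilege set, not just login rights. A sketch, assuming the user is `'debezium'@'%'`:

```sql
GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT
  ON *.* TO 'debezium'@'%';
FLUSH PRIVILEGES;
```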
This database (mydb) does not have CDC enabled
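CDC must be enabled at the database level before any table can be captured. A sketch, assuming the database is `mydb`:

```sql
USE mydb;
EXEC sys.sp_cdc_enable_db;
```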
No maximum LSN recorded in the database; SQL Server Agent is not running
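SQL Server CDC capture jobs run under SQL Server Agent, so no LSN advances while the Agent is stopped. A sketch for checking service state from T-SQL:

```sql
-- Look for the SQL Server Agent row; status_desc should be "Running"
SELECT servicename, status_desc, startup_type_desc
FROM sys.dm_server_services;
```

If it is stopped, start the Agent from SQL Server Configuration Manager (and set it to start automatically).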
Invalid object name 'cdc.fn_cdc_get_all_changes_dbo_customers'
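The missing function means CDC was never enabled for that particular table (database-level CDC alone is not enough). A sketch, with `dbo.customers` as the example table:

```sql
EXEC sys.sp_cdc_enable_table
  @source_schema = N'dbo',
  @source_name   = N'customers',
  @role_name     = NULL,
  @supports_net_changes = 0;
```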
FATAL: number of requested standby connections exceeds max_wal_senders (currently N)
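Each Debezium connector consumes a walsender and a replication slot, so both limits must leave headroom. A sketch (the values are examples; a restart is required for these settings):

```sql
ALTER SYSTEM SET max_wal_senders = 10;
ALTER SYSTEM SET max_replication_slots = 10;
-- Restart PostgreSQL for the new limits to take effect
```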
Cannot replicate anonymous transaction when AUTO_POSITION = 1
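This appears when GTID-based positioning meets transactions written before GTIDs were enabled. A sketch of the staged online enablement on MySQL 8.0 (each step must be applied cluster-wide before the next):

```sql
SET PERSIST enforce_gtid_consistency = ON;
SET PERSIST gtid_mode = OFF_PERMISSIVE;
SET PERSIST gtid_mode = ON_PERMISSIVE;
-- Wait until SHOW STATUS LIKE 'Ongoing_anonymous_transaction_count' reports 0
SET PERSIST gtid_mode = ON;
```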
ConnectException: Error configuring an instance of KafkaSchemaHistory; check the logs for details
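This usually means the schema-history settings are missing or point at an unreachable broker. A connector-config sketch using the Debezium 2.x property names (bootstrap address and topic name are examples):

```properties
# Required by the MySQL, SQL Server, and Oracle connectors
schema.history.internal.kafka.bootstrap.servers=kafka:9092
schema.history.internal.kafka.topic=schema-changes.mydb
```

On Debezium 1.x the same properties are prefixed `database.history.kafka.` instead.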
ConnectException: snapshot.mode.custom.name is not set but snapshot.mode=custom
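`snapshot.mode=custom` requires naming the custom snapshotter implementation you registered. A sketch; the name value is a placeholder for whatever your implementation returns:

```properties
snapshot.mode=custom
# Must match the name exposed by your custom snapshotter SPI implementation
snapshot.mode.custom.name=my-custom-snapshotter
```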
Encountered change event for table db.table_name whose schema isn't known to this connector
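The durable fix is a fresh snapshot so the schema history covers the table, but the MySQL connector can be told not to fail in the meantime. A sketch, assuming the `inconsistent.schema.handling.mode` property (default `fail`):

```properties
# "warn" logs and skips the event; "skip" is silent; use only while re-snapshotting
inconsistent.schema.handling.mode=warn
```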
io.debezium.text.ParsingException: DDL statement couldn't be parsed
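First try upgrading the connector, since DDL-parser fixes land frequently; if the offending statement is irrelevant to captured tables, the parser can be told to skip it. A sketch using the Debezium 2.x property name:

```properties
# Last resort: ignore DDL the parser cannot handle (risk: silent schema drift)
schema.history.internal.skip.unparseable.ddl=true
```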
The db history topic or its content is fully or partially missing. Please check database schema history topic configuration and re-execute the snapshot.
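If no schema changes occurred while the connector was down, the history topic can be rebuilt from the current table definitions instead of re-snapshotting data. A sketch (older releases call this mode `schema_only_recovery`):

```properties
# Rebuild the schema history topic from the database's current schemas
snapshot.mode=recovery
```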
org.apache.kafka.common.errors.RecordTooLargeException: The message is N bytes when serialized which is larger than 1048576
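The 1048576-byte default must be raised in every hop the record passes through: the Connect producer and the topic/broker. A connector-config sketch (5 MiB is an example; the worker's client-config override policy must permit producer overrides):

```properties
# Raise the Connect producer's request limit for this connector
producer.override.max.request.size=5242880
```

The matching broker-side change is raising `max.message.bytes` on the affected topics.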
java.lang.OutOfMemoryError: Java heap space
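Besides raising the worker heap (e.g. via `KAFKA_HEAP_OPTS=\"-Xms2g -Xmx4g\"`), the connector's internal queue can be capped so it cannot outgrow memory. A sketch with example values; defaults in recent Debezium releases are `max.batch.size=2048`, `max.queue.size=8192`, and a disabled byte cap:

```properties
max.batch.size=1024
max.queue.size=4096
# Bound the queue by bytes as well (0 disables the bound)
max.queue.size.in.bytes=104857600
```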