Data loss when reading the data from logical replication slot

From: Nitesh Yadav <nitesh(at)datacoral(dot)co>
To: pgsql-bugs(at)postgresql(dot)org
Cc: ops(at)datacoral(dot)co
Subject: Data loss when reading the data from logical replication slot
Date: 2019-02-12 03:10:52
Message-ID: CAFvjRvMfwbbC2vVRuFqeWepJAAB8z2D5T66pXni=dGWzP9eWCQ@mail.gmail.com
Lists: pgsql-bugs

Hi,

*Postgres Server setup: *

1. The Postgres server is running as an AWS RDS instance.
2. The server version is PostgreSQL 9.5.10 on x86_64-pc-linux-gnu, compiled
   by gcc (GCC) 4.8.3 20140911 (Red Hat 4.
3. In the parameter group, rds.logical_replication is set to 1, which
   internally sets the following parameters: wal_level, max_wal_senders,
   max_replication_slots, and max_connections.
4. We are using the test_decoding module to read WAL data through the
   logical decoding mechanism.
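For reference, a logical slot using test_decoding is typically created and inspected as below. This is a minimal sketch, assuming the slot does not yet exist; the slot name matches the one used in our queries:

```sql
-- Create a logical replication slot backed by the test_decoding output plugin.
SELECT * FROM pg_create_logical_replication_slot('pgldpublic_cdc_slot',
                                                 'test_decoding');

-- Peek at pending changes without consuming them from the slot.
SELECT location, xid, data
FROM pg_logical_slot_peek_changes('pgldpublic_cdc_slot', NULL, NULL,
                                  'include-timestamp', 'on');
```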

*Application setup: *

1. Periodically, we run a peek command to retrieve data from the slot,
   e.g.: SELECT * FROM
   pg_logical_slot_peek_changes('pgldpublic_cdc_slot', NULL, NULL,
   'include-timestamp', 'on') LIMIT 200000 OFFSET 0;
2. From the above query's result, we use the location of the last
   transaction to remove the data from the slot, e.g.: SELECT location, xid
   FROM pg_logical_slot_get_changes('pgldpublic_cdc_slot', 'B92/C7394678',
   NULL, 'include-timestamp', 'on') LIMIT 1;
3. We run steps 1 and 2 in a loop, reading data in chunks of 200K records
   at a time in a given process.
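The loop above amounts to the following pattern. The LSN shown is the one from our example; in the actual code it is taken from the `location` column of the last transaction returned by the peek in step 1:

```sql
-- Step 1: peek at the next chunk of up to 200K changes. peek_changes does
-- NOT advance the slot, so the same rows can be re-read if processing fails.
SELECT location, xid, data
FROM pg_logical_slot_peek_changes('pgldpublic_cdc_slot', NULL, NULL,
                                  'include-timestamp', 'on')
LIMIT 200000 OFFSET 0;

-- Step 2: consume everything up to the location recorded in step 1 (the
-- second argument is upto_lsn), advancing the slot past those changes.
SELECT location, xid
FROM pg_logical_slot_get_changes('pgldpublic_cdc_slot', 'B92/C7394678', NULL,
                                 'include-timestamp', 'on')
LIMIT 1;
```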

*Behavior reported (Bug)*

1. When we have a transaction with more than 300K table changes, we see
   the following symptoms.
2. A process (p1) started reading the big transaction (xid = 780807879),
   i.e. its BEGIN and 104413 table changes (DELETE/INSERT).
3. The next process (p2) read 200K records, which contained only table
   changes (DELETE/INSERT) for xid = 780807879, but no COMMIT for that
   xid.
4. The next process (p3) read 200K records, but saw no table changes and
   no COMMIT for the same xid = 780807879. We do, however, see other
   complete transactions (i.e. BEGIN & COMMIT).

*BUG*:

1. If a transaction (xid = 780807879) started in p1 and continued in p2
   (but was not finished/committed), why doesn't p3 have any records for
   the same transaction id?
2. Did we partially lose the transaction's (xid = 780807879) data?
3. Did we lose other transactions around the same time?

We use the above application to replicate production data from the master
to other analytics systems. Let us know if you need further details. We
would appreciate any help in debugging the missing transaction.

Regards,
Nitesh
