Re: Logical replication existing data copy

From: Erik Rijkers <er(at)xs4all(dot)nl>
To: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>
Cc: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>, pgsql-hackers-owner(at)postgresql(dot)org
Subject: Re: Logical replication existing data copy
Date: 2017-03-07 22:30:50
Lists: pgsql-hackers

On 2017-03-06 11:27, Petr Jelinek wrote:

> 0001-Reserve-global-xmin-for-create-slot-snasphot-export.patch +
> 0002-Don-t-use-on-disk-snapshots-for-snapshot-export-in-l.patch+
> 0003-Prevent-snapshot-builder-xmin-from-going-backwards.patch +
> 0004-Fix-xl_running_xacts-usage-in-snapshot-builder.patch +
> 0005-Skip-unnecessary-snapshot-builds.patch +
> 0001-Logical-replication-support-for-initial-data-copy-v6.patch

I use three different machines (two desktops, one server) to test logical
replication, and all three have now failed at least once to correctly
synchronise a pgbench session (amidst many successful runs, of course).

I attach an output file from the test program, together with the two
logfiles (master + replica) of the failed run. The output file
(out_20170307_1613.txt) contains the output of 5 runs; the first run
failed, the next 4 were ok.

But that's probably not very useful; perhaps pg_waldump would be more
useful? From what moment, or leading up to what moment, or over what
period, would a pg_waldump be useful? I can run it from the script,
repeatedly, and only keep the dumped files when things go awry. Would
that make sense?
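The keep-only-on-failure loop described above could be scripted roughly as
follows. This is a sketch, not the actual test script from the thread: the
data-directory path, the number of runs, and the check_sync comparison step
are all assumptions.

```shell
#!/bin/sh
# Sketch: dump the WAL generated by each pgbench run and keep the dump
# only when the replica failed to synchronise.  Paths are assumptions.
PGDATA_MASTER=${PGDATA_MASTER:-/tmp/master/data}
OUTDIR=${OUTDIR:-/tmp/waldumps}
mkdir -p "$OUTDIR"

# check_sync is a placeholder for whatever comparison the test script
# already does (e.g. diffing row counts on master and replica).
check_sync() { false; }   # placeholder: pretend the run failed

if command -v pg_waldump >/dev/null 2>&1; then
    for run in 1 2 3 4 5; do
        # Remember where the WAL stood before the run, so the dump
        # covers exactly this run (pg_current_xlog_location() pre-v10).
        start_lsn=$(psql -At -c 'select pg_current_wal_lsn()')
        # ... drive one pgbench run and wait for the replica here ...
        dump="$OUTDIR/run_${run}.waldump"
        pg_waldump -p "$PGDATA_MASTER/pg_wal" -s "$start_lsn" \
            > "$dump" 2>&1
        if check_sync; then
            rm -f "$dump"            # run was ok: discard the dump
        else
            echo "run $run failed; kept $dump"
        fi
    done
else
    echo "pg_waldump not in PATH; skipping"
fi
```

With this pattern only the dumps from failed runs survive, so the script can
loop unattended and leave behind exactly the WAL evidence for the bad runs.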

Any other ideas welcome.


Erik Rijkers

Attachment Content-Type Size
20170307_1613.tar.bz2 application/x-bzip2 4.7 KB
