Re: backup manifests and contemporaneous buildfarm failures

From: Petr Jelinek <petr(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>, Robert Haas <robertmhaas(at)gmail(dot)com>, Peter Eisentraut <peter_e(at)gmx(dot)net>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, David Steele <david(at)pgmasters(dot)net>, Noah Misch <noah(at)leadboat(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Suraj Kharage <suraj(dot)kharage(at)enterprisedb(dot)com>, tushar <tushar(dot)ahuja(at)enterprisedb(dot)com>, Rajkumar Raghuwanshi <rajkumar(dot)raghuwanshi(at)enterprisedb(dot)com>, Rushabh Lathia <rushabh(dot)lathia(at)gmail(dot)com>, Tels <nospam-pg-abuse(at)bloodgate(dot)com>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Jeevan Chalke <jeevan(dot)chalke(at)enterprisedb(dot)com>, vignesh C <vignesh21(at)gmail(dot)com>, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
Subject: Re: backup manifests and contemporaneous buildfarm failures
Date: 2020-04-04 05:01:36
Message-ID: d80c1566-7c1c-b2df-d4f6-509535ac52ef@2ndquadrant.com
Lists: pgsql-hackers

On 04/04/2020 05:06, Andres Freund wrote:
> Hi,
>
> Peter, Petr, CCed you because it's probably a bug somewhere around the
> initial copy code for logical replication.
>
>
> On 2020-04-03 20:48:09 -0400, Robert Haas wrote:
>> 'serinus' is also failing. This is less obviously related:
>
> Hm. Tests passed once since then.
>
>
>> 2020-04-04 02:08:57.299 CEST [5e87d019.506c1:4] LOG: received
>> replication command: CREATE_REPLICATION_SLOT
>> "tap_sub_16390_sync_16384" TEMPORARY LOGICAL pgoutput USE_SNAPSHOT
>> 2020-04-04 02:08:57.299 CEST [5e87d019.506c1:5] ERROR: replication
>> slot "tap_sub_16390_sync_16384" already exists
>
> That already seems suspicious. I checked the following (successful) run
> and I did not see that in the stage's logs.
>
> Looking at the failing log, it fails because for some reason there are
> multiple rounds (once due to a refresh, once due to an intentional
> replication failure) of copying the relation. Each creates its own
> temporary slot.
>
> first time:
> 2020-04-04 02:08:57.276 CEST [5e87d019.506bd:1] LOG: connection received: host=[local]
> 2020-04-04 02:08:57.278 CEST [5e87d019.506bd:4] LOG: received replication command: CREATE_REPLICATION_SLOT "tap_sub_16390_sync_16384" TEMPORARY LOGICAL pgoutput USE_SNAPSHOT
> 2020-04-04 02:08:57.282 CEST [5e87d019.506bd:9] LOG: statement: COPY public.tab_rep TO STDOUT
> 2020-04-04 02:08:57.284 CEST [5e87d019.506bd:10] LOG: disconnection: session time: 0:00:00.007 user=bf database=postgres host=[local]
>
> second time:
> 2020-04-04 02:08:57.288 CEST [5e87d019.506bf:1] LOG: connection received: host=[local]
> 2020-04-04 02:08:57.289 CEST [5e87d019.506bf:4] LOG: received replication command: CREATE_REPLICATION_SLOT "tap_sub_16390_sync_16384" TEMPORARY LOGICAL pgoutput USE_SNAPSHOT
> 2020-04-04 02:08:57.293 CEST [5e87d019.506bf:9] LOG: statement: COPY public.tab_rep TO STDOUT
>
> third time:
> 2020-04-04 02:08:57.297 CEST [5e87d019.506c1:1] LOG: connection received: host=[local]
> 2020-04-04 02:08:57.299 CEST [5e87d019.506c1:4] LOG: received replication command: CREATE_REPLICATION_SLOT "tap_sub_16390_sync_16384" TEMPORARY LOGICAL pgoutput USE_SNAPSHOT
> 2020-04-04 02:08:57.299 CEST [5e87d019.506c1:5] ERROR: replication slot "tap_sub_16390_sync_16384" already exists
>
> Note that the connection from the second attempt has not yet
> disconnected. Hence the error about the replication slot already
> existing - it's a temporary replication slot that'd otherwise already
> have been dropped.
>
>
> Seems the logical rep code needs to do something about this race?
>
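
Right - and the slot name only becomes free again once the walsender that
owns the temporary slot has actually finished exiting its session. As a
rough standalone illustration of that lifecycle (a toy model only, not the
real slot.c code; the hex pids are the walsender session ids from the log
above):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/*
 * Toy model: slot names are unique across the cluster, and a TEMPORARY
 * slot is only released when the session that owns it exits.
 */
#define MAX_SLOTS 4

struct slot
{
    char name[64];
    int  owner_pid;             /* 0 = slot entry unused */
};

static struct slot slots[MAX_SLOTS];

/* CREATE_REPLICATION_SLOT ... TEMPORARY: fails if the name is still taken. */
static bool
create_temp_slot(const char *name, int pid)
{
    for (int i = 0; i < MAX_SLOTS; i++)
        if (slots[i].owner_pid != 0 && strcmp(slots[i].name, name) == 0)
        {
            printf("ERROR: replication slot \"%s\" already exists\n", name);
            return false;
        }
    for (int i = 0; i < MAX_SLOTS; i++)
        if (slots[i].owner_pid == 0)
        {
            snprintf(slots[i].name, sizeof(slots[i].name), "%s", name);
            slots[i].owner_pid = pid;
            return true;
        }
    return false;
}

/* Run at session exit: temporary slots owned by this pid are dropped. */
static void
session_exit(int pid)
{
    for (int i = 0; i < MAX_SLOTS; i++)
        if (slots[i].owner_pid == pid)
            slots[i].owner_pid = 0;
}

int
main(void)
{
    create_temp_slot("tap_sub_16390_sync_16384", 0x506bd); /* first attempt: ok */
    session_exit(0x506bd);                                 /* disconnects, slot dropped */

    create_temp_slot("tap_sub_16390_sync_16384", 0x506bf); /* second attempt: ok */
    /* the second walsender has NOT exited when the third attempt arrives */
    create_temp_slot("tap_sub_16390_sync_16384", 0x506c1); /* "already exists" */
    return 0;
}

The third CREATE_REPLICATION_SLOT in the log hits exactly that window.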

The downstream:

> 2020-04-04 02:08:57.275 CEST [5e87d019.506bc:1] LOG: logical replication table synchronization worker for subscription "tap_sub", table "tab_rep" has started
> 2020-04-04 02:08:57.282 CEST [5e87d019.506bc:2] ERROR: duplicate key value violates unique constraint "tab_rep_pkey"
> 2020-04-04 02:08:57.282 CEST [5e87d019.506bc:3] DETAIL: Key (a)=(1) already exists.
> 2020-04-04 02:08:57.282 CEST [5e87d019.506bc:4] CONTEXT: COPY tab_rep, line 1
> 2020-04-04 02:08:57.283 CEST [5e87d018.50689:5] LOG: background worker "logical replication worker" (PID 329404) exited with exit code 1
> 2020-04-04 02:08:57.287 CEST [5e87d019.506be:1] LOG: logical replication table synchronization worker for subscription "tap_sub", table "tab_rep" has started
> 2020-04-04 02:08:57.293 CEST [5e87d019.506be:2] ERROR: duplicate key value violates unique constraint "tab_rep_pkey"
> 2020-04-04 02:08:57.293 CEST [5e87d019.506be:3] DETAIL: Key (a)=(1) already exists.
> 2020-04-04 02:08:57.293 CEST [5e87d019.506be:4] CONTEXT: COPY tab_rep, line 1
> 2020-04-04 02:08:57.295 CEST [5e87d018.50689:6] LOG: background worker "logical replication worker" (PID 329406) exited with exit code 1
> 2020-04-04 02:08:57.297 CEST [5e87d019.506c0:1] LOG: logical replication table synchronization worker for subscription "tap_sub", table "tab_rep" has started
> 2020-04-04 02:08:57.299 CEST [5e87d019.506c0:2] ERROR: could not create replication slot "tap_sub_16390_sync_16384": ERROR: replication slot "tap_sub_16390_sync_16384" already exists
> 2020-04-04 02:08:57.300 CEST [5e87d018.50689:7] LOG: background worker "logical replication worker" (PID 329408) exited with exit code

Looks like we are simply retrying so fast that the upstream will not have
finished cleaning up after the second try by the time we already run the
third one.

The last_start_times tracking is supposed to protect against that, so I
guess there is some issue with how that works.
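
For reference, a minimal standalone sketch of what that throttling is meant
to achieve (a toy model only - the real tablesync.c keeps a last_start_times
hash table keyed by relation OID and compares against
wal_retrieve_retry_interval):

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* default wal_retrieve_retry_interval is 5 seconds */
#define RETRY_INTERVAL_MS 5000

struct sync_start
{
    unsigned int relid;         /* relation being synchronized */
    long         last_start_ms; /* 0 = never started */
};

static long
now_ms(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000L + ts.tv_nsec / 1000000L;
}

/* May a new tablesync worker be started for this relation right now? */
static bool
may_start_sync_worker(struct sync_start *s)
{
    long now = now_ms();

    if (s->last_start_ms != 0 &&
        now - s->last_start_ms < RETRY_INTERVAL_MS)
        return false;           /* too soon after the previous attempt */

    s->last_start_ms = now;     /* record this launch */
    return true;
}

int
main(void)
{
    struct sync_start tab_rep = {16384, 0};

    printf("first attempt allowed: %d\n", may_start_sync_worker(&tab_rep));
    printf("immediate retry allowed: %d\n", may_start_sync_worker(&tab_rep));
    return 0;
}

If that check were actually taking effect here, the third attempt at
02:08:57.297 should not have been able to start only a few milliseconds
after the second one.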

--
Petr Jelinek
2ndQuadrant - PostgreSQL Solutions for the Enterprise
https://www.2ndQuadrant.com/
