Re: BUG #16016: deadlock with startup process, AccessExclusiveLock on pg_statistic's toast table

From: Dmitry Dolgov <9erthalion6(at)gmail(dot)com>
To: Alexey Ermakov <alexey(dot)ermakov(at)dataegret(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Sergei Kornilov <sk(at)zsrv(dot)org>, "pgsql-bugs(at)lists(dot)postgresql(dot)org" <pgsql-bugs(at)lists(dot)postgresql(dot)org>
Subject: Re: BUG #16016: deadlock with startup process, AccessExclusiveLock on pg_statistic's toast table
Date: 2019-11-01 15:56:30
Message-ID: 20191101155630.2cbmabxo4kahri43@localhost
Lists: pgsql-bugs

> On Fri, Nov 01, 2019 at 03:15:33PM +0600, Alexey Ermakov wrote:
>
> I reproduced Sergei's test case on PostgreSQL 11.5; the replica hung up almost
> immediately after pgbench started.
>
> 9907 - pid of startup process, 16387 - oid of test table, 2619 - oid of
> pg_statistic, 2840 - oid of toast table of pg_statistic.
>
> 1) pgbench on replica with 1 concurrent process (-c 1):
>
> this case looks a bit different from what happened in the initial report (which
> recently happened again, btw) because this time I can't even open a new
> connection via psql or run a query against pg_stat_activity - it hangs (a
> pg_locks query works).
> Perhaps that's because this time the access exclusive lock is on the
> pg_statistic table too, not only on its toast table.

Interesting. I've tried the test case from the previous email on the master
branch, and it looks like I've got something similar, with similar stack
traces. After a short investigation it looks pretty strange: backend 12682
is waiting on a lock taken by 12584 (the startup process):

[12682] LOG: process 12682 still waiting for AccessShareLock on relation 2619 of database 16384 after 1000.038 ms
[12682] DETAIL: Process holding the lock: 12584. Wait queue: 12689, 12674, 12671, 12682, 12683, 12677, 12680, 12676, 12686, 12670, 12678, 12688, 12679, 12681, 12684, 12685, 12687.
[12682] STATEMENT: select * from tablename where i = 95;
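
For the record, the same picture should be visible from SQL via pg_locks,
assuming the standby still accepts new connections (which, per the quoted
report, is not always the case). Something along these lines - a sketch,
not the exact query I ran - using 2619 for pg_statistic as above:

    -- who holds and who is queued on pg_statistic (oid 2619)
    SELECT pid, virtualtransaction, locktype, relation::regclass, mode, granted
    FROM pg_locks
    WHERE relation = 2619
    ORDER BY granted DESC, pid;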

And if I understand correctly, the startup process is waiting inside
ResolveRecoveryConflictWithVirtualXIDs with a waitlist containing
backendId = 14:

>>> p *waitlist
$3 = {
backendId = 14,
localTransactionId = 218
}

>>> p allProcs[pgprocnos]
...
lxid = 218,
pid = 12682,
pgprocno = 87,
backendId = 14,
databaseId = 16384,
...

So it's the same backend, 12682, although I'm not sure yet why.
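
A quick cross-check is also possible from SQL if a session can still be
opened, since pg_locks formats virtualtransaction as
backendId/localTransactionId; a sketch with the values from the gdb session
above:

    -- map the waitlist entry (backendId = 14, lxid = 218) back to its locks
    SELECT pid, virtualtransaction, locktype, relation::regclass, mode, granted
    FROM pg_locks
    WHERE virtualtransaction = '14/218';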
