Rare deadlock failure in create_am test

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: pgsql-hackers(at)lists(dot)postgresql(dot)org
Cc: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Subject: Rare deadlock failure in create_am test
Date: 2020-09-04 02:13:27
Message-ID: 839004.1599185607@sss.pgh.pa.us

conchuela just showed an unusual failure:

https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2020-09-03%2023%3A00%3A36

The core of it is a deadlock failure in create_am.sql; there's then
some follow-on noise from not having successfully dropped the test AM.
The deadlock looks like:

2020-09-04 01:05:06.904 CEST [609175:34] pg_regress/create_am LOG: process 609175 detected deadlock while waiting for AccessExclusiveLock on relation 17347 of database 16384 after 4616.873 ms
2020-09-04 01:05:06.904 CEST [609175:35] pg_regress/create_am DETAIL: Process holding the lock: 609183. Wait queue: .
2020-09-04 01:05:06.904 CEST [609175:36] pg_regress/create_am STATEMENT: DROP ACCESS METHOD gist2 CASCADE;
2020-09-04 01:05:06.904 CEST [609175:37] pg_regress/create_am ERROR: deadlock detected
2020-09-04 01:05:06.904 CEST [609175:38] pg_regress/create_am DETAIL: Process 609175 waits for AccessExclusiveLock on relation 17347 of database 16384; blocked by process 609183.
        Process 609183 waits for RowExclusiveLock on relation 20095 of database 16384; blocked by process 609175.
        Process 609175: DROP ACCESS METHOD gist2 CASCADE;
        Process 609183: autovacuum: VACUUM ANALYZE public.fast_emp4000
2020-09-04 01:05:06.904 CEST [609175:39] pg_regress/create_am HINT: See server log for query details.
2020-09-04 01:05:06.904 CEST [609175:40] pg_regress/create_am STATEMENT: DROP ACCESS METHOD gist2 CASCADE;
2020-09-04 01:05:06.904 CEST [609183:11] LOG: process 609183 acquired RowExclusiveLock on relation 20095 of database 16384 after 13377.776 ms
2020-09-04 01:04:52.895 CEST [609183:6] LOG: automatic analyze of table "regression.public.tenk2" system usage: CPU: user: 0.03 s, system: 0.00 s, elapsed: 0.59 s
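
For anyone decoding the lock identifiers: 17347 and 20095 are relation OIDs in database 16384 (presumably the regression database, going by the "regression.public..." prefix in the analyze line). While the objects still exist, a regclass cast run in that database translates them, e.g.

select 17347::regclass as drop_waits_on, 20095::regclass as autovacuum_waits_on;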

So it's not hard to understand the problem: DROP of an index AM, cascading
to an index, will need to acquire lock on the index and then lock on the
index's table. Any other operation on the table, like say autovacuum,
is going to acquire those locks in the other order.
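
To spell that ordering out, here's a hand-driven sketch of the same collision, using two throwaway tables as stand-ins for the index and its parent table (the names are invented for illustration, not taken from the test; share update exclusive is what VACUUM takes on the table, the other two modes are straight from the log):

create table scratch_heap (f1 int);
create table scratch_index_standin (f1 int);

-- session 1, playing the DROP ... CASCADE: dependent object first, then its table
begin;
lock table scratch_index_standin in access exclusive mode;

-- session 2, playing autovacuum: the table first, then the dependent object
begin;
lock table scratch_heap in share update exclusive mode;
lock table scratch_index_standin in row exclusive mode;  -- blocks behind session 1

-- back in session 1:
lock table scratch_heap in access exclusive mode;  -- blocks behind session 2, deadlock

One of the two sessions then fails with "deadlock detected" as soon as the deadlock checker runs, just like in the log above.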

This is pretty rare, but not unheard of:

https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2020-03-24%2022%3A00%3A23

(There might be more such failures, but I only looked back six months,
and these two are all I found in that window.)

I'm inclined to think that the best fix is to put

begin;
lock table fast_emp4000;
...
commit;

around the DROP CASCADE. We could alternatively disable autovacuum on
fast_emp4000, but that could have undesirable side-effects.
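
Concretely, I'd imagine something like this in create_am.sql (just a sketch, exact placement and commentary to taste):

begin;
lock table fast_emp4000;
drop access method gist2 cascade;
commit;

The autovacuum-disabling alternative would presumably go through the reloption, roughly

alter table fast_emp4000 set (autovacuum_enabled = false);

but that setting persists for the rest of the regression run unless something remembers to reset it, which is the sort of undesirable side-effect alluded to above.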

regards, tom lane
