Re: LWLock deadlock in brinRevmapDesummarizeRange

From: Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com>
To: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: LWLock deadlock in brinRevmapDesummarizeRange
Date: 2023-02-22 12:04:10
Message-ID: 31fb6cc5-45f5-777b-cb8e-b9e00b530593@enterprisedb.com
Lists: pgsql-hackers

On 2/22/23 12:35, Alvaro Herrera wrote:
> On 2023-Feb-22, Tomas Vondra wrote:
>
>> But instead I almost immediately ran into an LWLock deadlock :-(
>
> Ouch.
>
>> I've managed to reproduce this on PG13+, but I believe it's been there
>> since brinRevmapDesummarizeRange was introduced in PG10. I just
>> haven't tried on pre-13 releases.
>
> Hmm, I think that might just be an "easy" way to hit it, but the problem
> is actually older than that, since AFAICS brin_doupdate is careless
> regarding locking order of revmap page vs. regular page.
>

That's certainly possible, although I ran a lot of BRIN stress tests and
it only started failing after I added the desummarization. That said,
the tests are only "randomized" like this:

UPDATE t SET a = '...' WHERE random() < 0.05;

which still updates rows in a fairly sequential order. Maybe reordering
the CTIDs a bit would hit additional deadlocks; I'll probably give that
a try. OTOH such a deadlock would then be much more likely to be hit by
users, and I don't recall any such reports.

> Sadly, the README doesn't cover locking considerations. I had that in a
> file called 'minmax-proposal' in version 16 of the patch here
> https://postgr.es/m/20140820225133.GB6343@eldon.alvh.no-ip.org
> but by version 18 (when 'minmax' became BRIN) I seem to have removed
> that file and replaced it with the README and apparently I didn't copy
> this material over.
>

Yeah :-( There are a couple more things missing in the README, like
what oi_regular_nulls means.

> ... and in there, I wrote that we would first write the brin tuple in
> the regular page, unlock that, and then lock the revmap for the update,
> without holding lock on the data page. I don't remember why we do it
> differently now, but maybe the fix is just to release the regular page
> lock before locking the revmap page? One very important change is that
> in previous versions the revmap used a separate fork, and we had to
> introduce an "evacuation protocol" when we integrated the revmap into
> the main fork, which may have changed the locking considerations.
>

What would happen if two processes built the summary concurrently? How
would they find the other tuple, so that we don't end up with two BRIN
tuples for the same range?
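If I read that right, the ordering from the proposal would look roughly
like this in buffer-manager terms (just a sketch, not actual tree code;
WAL-logging, error handling and the page-full/evacuation cases are
omitted, and variable setup is implied):

```c
/* Sketch of the proposed update ordering: insert the summary tuple
 * and release the data-page lock *before* touching the revmap, so
 * the two buffer locks are never held simultaneously. */
buf = ReadBuffer(idxrel, dataBlk);
LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);
off = PageAddItem(BufferGetPage(buf), (Item) newtup, newsize,
                  InvalidOffsetNumber, false, false);
ItemPointerSet(&tid, BufferGetBlockNumber(buf), off);
MarkBufferDirty(buf);
UnlockReleaseBuffer(buf);        /* data-page lock dropped here */

/* Only now lock the revmap and point it at the new tuple. */
LockBuffer(revmapbuf, BUFFER_LOCK_EXCLUSIVE);
brinSetHeapBlockItemptr(revmapbuf, pagesPerRange, heapBlk, tid);
MarkBufferDirty(revmapbuf);
LockBuffer(revmapbuf, BUFFER_LOCK_UNLOCK);
```

The catch is exactly the question above: in the window between
UnlockReleaseBuffer and the revmap update, a concurrent summarizer could
insert its own tuple for the same range, so something would have to
detect and resolve the duplicate.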

> Another point: to desummarize a range, just unlinking the entry from
> revmap should suffice, from the POV of other index scanners. Maybe we
> can simplify the whole procedure to: lock revmap, remove entry, remember
> page number, unlock revmap; lock regular page, delete entry, unlock.
> Then there are no two locks held at the same time during desummarize.
>

Perhaps, as long as it doesn't confuse anything else.
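For comparison, the simplified desummarize sequence from the quoted
paragraph would amount to something like this (again only a sketch;
revmap_get_entry/revmap_clear_entry are hypothetical stand-ins for the
revmap accessors, and WAL/error handling is omitted):

```c
/* Phase 1: clear the revmap entry, holding only the revmap lock. */
LockBuffer(revmapbuf, BUFFER_LOCK_EXCLUSIVE);
iptr = revmap_get_entry(revmapbuf, heapBlk);      /* hypothetical */
regBlk = ItemPointerGetBlockNumber(&iptr);        /* remember page */
regOff = ItemPointerGetOffsetNumber(&iptr);
revmap_clear_entry(revmapbuf, heapBlk);           /* hypothetical */
MarkBufferDirty(revmapbuf);
LockBuffer(revmapbuf, BUFFER_LOCK_UNLOCK);

/* Phase 2: with no revmap lock held, delete the orphaned tuple. */
regbuf = ReadBuffer(idxrel, regBlk);
LockBuffer(regbuf, BUFFER_LOCK_EXCLUSIVE);
PageIndexTupleDeleteNoCompact(BufferGetPage(regbuf), regOff);
MarkBufferDirty(regbuf);
UnlockReleaseBuffer(regbuf);
```

Since at most one buffer lock is held at any instant, this sequence
cannot participate in a lock-ordering cycle - which is presumably the
point.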

> This comes from v16:
>

I don't follow - what do you mean by v16? I don't see anything like that
anywhere in the repository.

> + Locking considerations
> + ----------------------
> +
> + To read the TID during an index scan, we follow this protocol:
> +
> + * read revmap page
> + * obtain share lock on the revmap buffer
> + * read the TID
> + * obtain share lock on buffer of main fork
> + * LockTuple the TID (using the index as relation). A shared lock is
> + sufficient. We need the LockTuple to prevent VACUUM from recycling
> + the index tuple; see below.
> + * release revmap buffer lock
> + * read the index tuple
> + * release the tuple lock
> + * release main fork buffer lock
> +
> +
> + To update the summary tuple for a page range, we use this protocol:
> +
> + * insert a new index tuple somewhere in the main fork; note its TID
> + * read revmap page
> + * obtain exclusive lock on revmap buffer
> + * write the TID
> + * release lock
> +
> + This ensures no concurrent reader can obtain a partially-written TID.
> + Note we don't need a tuple lock here. Concurrent scans don't have to
> + worry about whether they got the old or new index tuple: if they get the
> + old one, the tighter values are okay from a correctness standpoint because
> + due to MVCC they can't possibly see the just-inserted heap tuples anyway.
> +
> + [vacuum stuff elided]
>
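
As a C-style sketch, the quoted read protocol would be approximately the
following (not compilable as-is; revmap_get_entry is a hypothetical
stand-in for the revmap accessor, and the unsummarized-range case is
skipped):

```c
revbuf = ReadBuffer(idxrel, revmapBlk);
LockBuffer(revbuf, BUFFER_LOCK_SHARE);
tid = revmap_get_entry(revbuf, heapBlk);          /* hypothetical */
databuf = ReadBuffer(idxrel, ItemPointerGetBlockNumber(&tid));
LockBuffer(databuf, BUFFER_LOCK_SHARE);
LockTuple(idxrel, &tid, ShareLock);   /* keeps VACUUM from recycling */
LockBuffer(revbuf, BUFFER_LOCK_UNLOCK);
/* ... copy the index tuple out of databuf ... */
UnlockTuple(idxrel, &tid, ShareLock);
UnlockReleaseBuffer(databuf);
ReleaseBuffer(revbuf);
```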

regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
