Re: Locking a row with KEY SHARE NOWAIT blocks

From: Heikki Linnakangas <hlinnaka(at)iki(dot)fi>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Locking a row with KEY SHARE NOWAIT blocks
Date: 2019-09-03 14:21:30
Message-ID: be3b4725-516a-b8ad-f838-98db56c8cdd6@iki.fi
Lists: pgsql-hackers

On 03/09/2019 16:31, Tom Lane wrote:
> Heikki Linnakangas <hlinnaka(at)iki(dot)fi> writes:
>> When you lock a row with FOR KEY SHARE, and the row's non-key columns
>> have been updated, heap_lock_tuple() walks the update chain to mark all
>> the in-progress tuple versions also as locked. But it doesn't pay
>> attention to the NOWAIT or SKIP LOCKED flags when doing so. The
>> heap_lock_updated_tuple() function walks the update chain, but the
>> 'wait_policy' argument is not passed to it. As a result, a SELECT ... FOR
>> KEY SHARE NOWAIT query can block waiting for another updating
>> transaction, despite the NOWAIT modifier.
>
>> This can be reproduced with the attached isolation test script.
>
>> I'm not sure how to fix this. The logic to walk the update chain and
>> propagate the tuple lock is already breathtakingly complicated :-(.
>
> Why are we locking any but the most recent version?

Define "most recent". In KEY SHARE mode, there can be multiple UPDATEd
versions of the tuple whose updating transactions are still in progress,
and we can still acquire the lock. We need to lock the most recent
version, including any version created by an in-progress transaction
that has updated the row but not committed yet; otherwise the lock would
be lost when that transaction commits (or if it updates the same row
again). But locking that tuple alone is not enough, because the lock
would then be lost if the in-progress transaction that updated the row
aborts. To avoid that, we also need to lock the latest live tuple
(HEAPTUPLE_LIVE). And if there are subtransactions involved, we need to
be prepared for a rollback or commit of any of the subtransactions.
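To make the invariant concrete, here is a toy model of an update chain.
The TupleVersion struct and lock_key_share() below are hypothetical
illustrations, not the real heapam.c code: the point is only that the
KEY SHARE lock must cover the latest live version *and* every
in-progress successor, since losing any one of them to a commit or abort
would lose the lock.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model of a heap update chain -- not the real heapam.c
 * structures.  Each version points at its successor (the newer tuple). */
typedef enum
{
    TUPLE_LIVE,                 /* latest committed, live version */
    TUPLE_INSERT_IN_PROGRESS,   /* created by an uncommitted updater */
    TUPLE_DEAD                  /* updater aborted */
} TupleState;

typedef struct TupleVersion
{
    TupleState state;
    bool locked;
    struct TupleVersion *newer; /* next version in the update chain */
} TupleVersion;

/* Lock the live version and every in-progress successor.  Locking only
 * the newest would lose the lock if its updater aborts; locking only the
 * live version would lose it if the updater commits.  Returns how many
 * versions were locked. */
static int
lock_key_share(TupleVersion *live)
{
    int nlocked = 0;

    for (TupleVersion *v = live; v != NULL; v = v->newer)
    {
        if (v->state == TUPLE_LIVE || v->state == TUPLE_INSERT_IN_PROGRESS)
        {
            v->locked = true;
            nlocked++;
        }
    }
    return nlocked;
}
```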

Hmm. I think this could be fixed by locking the tuples in reverse order,
starting from the latest in-progress updated version, walking the update
chain backwards. While we're walking the chain, if we find that an
updating transaction has committed, so that we have already acquired a
lock on the now live version, we can stop. And if we find that the
transaction has aborted, we start from scratch, i.e. find the now latest
INSERT_IN_PROGRESS tuple version, and walk backwards from there.

Walking an update chain backwards is a bit painful, but you can walk
forwards from the live tuple and remember the path, and walk backwards
the same path once you reach the end of the chain.
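A minimal sketch of that forward-then-backward walk, again using a
hypothetical chain model rather than the real heap code: the forward
pass records the path from the live tuple, and the backward pass then
locks newest-first. (A real implementation would additionally stop early
when it reaches a version whose updater has committed, and restart from
the new latest version if an updater turns out to have aborted.)

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_CHAIN 32

/* Hypothetical chain model, not the real heapam.c structures. */
typedef struct Version
{
    bool locked;
    struct Version *newer;      /* next version in the update chain */
} Version;

/* Walk forward from the live tuple, remembering the path, then lock
 * each version walking the remembered path backwards, newest-first.
 * Returns the number of versions locked, or -1 if the chain is longer
 * than the scratch array (a real implementation would allocate). */
static int
lock_chain_backwards(Version *live)
{
    Version *path[MAX_CHAIN];
    int len = 0;

    /* Forward pass: remember every version in the chain. */
    for (Version *v = live; v != NULL; v = v->newer)
    {
        if (len == MAX_CHAIN)
            return -1;
        path[len++] = v;
    }

    /* Backward pass: lock newest-first, so that a concurrent commit or
     * abort of an updater cannot leave us holding only a stale lock. */
    for (int i = len - 1; i >= 0; i--)
        path[i]->locked = true;

    return len;
}
```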

- Heikki
