Re: Contention preventing locking

From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Contention preventing locking
Date: 2018-03-03 13:44:55
Message-ID: CAA4eK1KwRHjU+=Q9uJeKOgxeS5KF3g_cnU8jToAkWD_U-Qb84Q@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Thread:
Lists: pgsql-hackers

On Thu, Mar 1, 2018 at 1:22 PM, Konstantin Knizhnik
<k(dot)knizhnik(at)postgrespro(dot)ru> wrote:
>
> On 28.02.2018 16:32, Amit Kapila wrote:
>>
>> On Mon, Feb 26, 2018 at 8:26 PM, Konstantin Knizhnik
>> <k(dot)knizhnik(at)postgrespro(dot)ru> wrote:
>
>
> Yes, but two notices:
> 1. Tuple lock is used inside heap_* functions. But not in EvalPlanQualFetch
> where transaction lock is also used.
> 2. Tuple lock is hold until the end of update, not until commit of the
> transaction. So other transaction can receive conrol before this transaction
> is completed. And contention still takes place.
> Contention is reduced and performance is increased only if locks (either
> tuple lock, either xid lock) are hold until the end of transaction.
> Unfortunately it may lead to deadlock.
>
> My last attempt to reduce contention was to replace shared lock with
> exclusive in XactLockTableWait and removing unlock from this function. So
> only one transaction can get xact lock and will will hold it until the end
> of transaction. Also tuple lock seems to be not needed in this case. It
> shows better performance on pgrw test but on YCSB benchmark with workload A
> (50% of updates) performance was even worser than with vanilla postgres. But
> was is wost of all - there are deadlocks in pgbench tests.
>
>> I think in this whole process backends may need to wait multiple times
>> either on tuple lock or xact lock. It seems the reason for these
>> waits is that we immediately release the tuple lock (acquired by
>> heap_acquire_tuplock) once the transaction on which we were waiting is
>> finished. AFAICU, the reason for releasing the tuple lock immediately
>> instead of at end of the transaction is that we don't want to
>> accumulate too many locks as that can lead to the unbounded use of
>> shared memory. How about if we release the tuple lock at end of the
>> transaction unless the transaction acquires more than a certain
>> threshold (say 10 or 50) of such locks in which case we will fall back
>> to current strategy?
>>
> Certainly, I have tested such version. Unfortunately it doesn't help. Tuple
> lock is using tuple TID. But once transaction has made the update, new
> version of tuple will be produced with different TID and all new
> transactions will see this version, so them will not notice this lock at
> all.
>

Sure, but out of all new transaction again the only one transaction
will allow to update it and among new waiters, only one should get
access to it. The situation should be better than when all the
waiters attempt to lock and update the tuple with same CTID.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

In response to

Responses

Browse pgsql-hackers by date

  From Date Subject
Next Message Peter Eisentraut 2018-03-03 13:58:18 Re: Disabling src/test/[ssl|ldap] when not building with SSL/LDAP support
Previous Message Pavel Stehule 2018-03-03 13:08:21 idea - custom menu