Re: Postgres stucks in deadlock detection

From: Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Юрий Соколов <funny(dot)falcon(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, PostgreSQL-Dev <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Postgres stucks in deadlock detection
Date: 2018-04-20 16:14:28
Message-ID: 4c171ffe-e3ee-acc5-9066-a40d52bc5ae9@postgrespro.ru
Lists: pgsql-hackers

On 20.04.2018 18:36, Robert Haas wrote:
> On Wed, Apr 18, 2018 at 10:08 AM, Konstantin Knizhnik
> <k(dot)knizhnik(at)postgrespro(dot)ru> wrote:
>> And it is very hard not to notice 17-times difference.
>> Certainly it is true in the assumption that most deadlock timeout expiration
>> are caused by high workload and contention, and not by real deadlocks.
>> But it seems to be quite common case.
> If I understand this workload correctly, the contention is for the
> relation extension lock. But I think we're likely to move that out of
> the heavyweight lock manager altogether in the not-too-distant future,
> as proposed in https://commitfest.postgresql.org/17/1133/ ? I'd be
> interested in hearing what happens to performance with that patch
> applied.
>

With the extension lock patch, performance increased to 1146 TPS.
So it is much better than with vanilla Postgres and about 60% better
than with the deadlock patch (1146 vs. 719 TPS).
The profile is the following:

 33.51%  postgres  [.] s_lock
  4.59%  postgres  [.] LWLockWaitListLock
  3.67%  postgres  [.] perform_spin_delay
  3.04%  [kernel]  [k] gup_pgd_range
  2.43%  [kernel]  [k] get_futex_key
  2.00%  [kernel]  [k] __basepage_index
  1.20%  postgres  [.] calculateDigestFromBuffer
  0.97%  [kernel]  [k] update_load_avg
  0.97%  postgres  [.] XLogInsertRecord
  0.93%  [kernel]  [k] switch_mm_irqs_off
  0.90%  postgres  [.] LWLockAttemptLock
  0.88%  [kernel]  [k] _atomic_dec_and_lock
  0.84%  [kernel]  [k] __schedule
  0.82%  postgres  [.] ConditionVariableBroadcast
  0.75%  postgres  [.] LWLockRelease
  0.74%  [kernel]  [k] syscall_return_via_sysret
  0.65%  postgres  [.] SetLatch
  0.64%  [kernel]  [k] _raw_spin_lock_irqsave
  0.62%  [kernel]  [k] copy_user_enhanced_fast_string
  0.59%  postgres  [.] RelationPutHeapTuple
  0.55%  [kernel]  [k] select_task_rq_fair
  0.54%  [kernel]  [k] try_to_wake_up
  0.52%  [kernel]  [k] menu_select
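
For context, the insert test here is essentially a many-clients append
workload; a minimal sketch of such a workload (table and column names are
illustrative, not the exact script used earlier in this thread) is:

    -- Illustrative only: an append-only workload in which many concurrent
    -- backends insert into the same table, so they repeatedly need to
    -- extend the same heap relation (the relation extension lock).
    CREATE TABLE insert_test (id bigserial PRIMARY KEY, payload text);

    -- executed concurrently from many clients, e.g. via pgbench -f:
    INSERT INTO insert_test (payload) VALUES (repeat('x', 100));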

So eliminating the heavyweight relation extension lock is definitely a
good idea, and it removes the need for my deadlock patch ... but only in
this insert test.
As I mentioned at the beginning of this thread, we see the same problem
with deadlock detection timeout expiration in the YCSB benchmark with a
Zipf distribution.
There the source of contention is tuple locks, and as far as I understand
from the discussion in the mentioned thread, it is not possible to
eliminate heavyweight tuple locks.
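
To make the YCSB case concrete, the pattern is a hot-row update under a
skewed key distribution; a hedged sketch (table, column and parameter
names below are hypothetical, not taken from the actual benchmark driver):

    -- Hypothetical YCSB-style hot-row update: with a Zipf-distributed key,
    -- many backends pick the same row at once, queue on its heavyweight
    -- tuple lock, and every waiter that waits longer than deadlock_timeout
    -- (1s by default) runs the deadlock detector.
    UPDATE usertable
       SET field0 = md5(random()::text)
     WHERE ycsb_key = :zipf_key;  -- pgbench-style parameter, Zipf-distributed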

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
