Moving relation extension locks out of heavyweight lock manager

From: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>
To: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Moving relation extension locks out of heavyweight lock manager
Date: 2017-05-11 00:39:03
Message-ID: CAD21AoCmT3cFQUN4aVvzy5chw7DuzXrJCbrjTU05B+Ss=Gn1LA@mail.gmail.com
Lists: pgsql-hackers

Hi all,

Currently, the relation extension lock is implemented using the
heavyweight lock manager, and almost all callers (except for
brin_page_cleanup) use LockRelationForExtension with ExclusiveLock
mode. But it doesn't actually need multiple lock modes, deadlock
detection, or any of the other functionality that the heavyweight lock
manager provides. I think something like an LWLock is enough. So I'd
like to propose changing relation extension lock management so that it
works using LWLocks instead.

The attached draft patch makes relation extension locks use LWLocks
rather than the heavyweight lock manager, backed by a shared hash
table that stores the relation extension lock information. The basic
idea is that we add a hash table in shared memory for relation
extension locks, where each hash entry is an LWLock struct. Whenever a
process wants to acquire a relation extension lock, it looks up the
appropriate LWLock entry in the hash table and acquires it. When
unlocking, the process can remove the hash entry if nobody else is
holding or waiting on it.

This work would be helpful not only for existing workloads but also
for future work such as the parallel utility commands discussed in
other threads[1]. At least for parallel vacuum, this feature helps
solve an issue that the implementation of parallel vacuum currently
has.

I ran pgbench three times for 10 minutes each (scale factor 5000);
here are the performance measurement results.

clients  TPS(HEAD)  TPS(Patched)
      4   2092.612      2031.277
      8   3153.732      3046.789
     16   4562.072      4625.419
     32   6439.391      6479.526
     64   7767.364      7779.636
    100   7917.173      7906.567

* 16 core Xeon E5620 2.4GHz
* 32 GB RAM
* ioDrive

With the current implementation, there seems to be no performance
degradation so far.
Please give me feedback.

[1]
* Block level parallel vacuum WIP
<https://www.postgresql.org/message-id/CAD21AoD1xAqp4zK-Vi1cuY3feq2oO8HcpJiz32UDUfe0BE31Xw%40mail.gmail.com>
* CREATE TABLE with parallel workers, 10.0?
<https://www.postgresql.org/message-id/CAFBoRzeoDdjbPV4riCE%2B2ApV%2BY8nV4HDepYUGftm5SuKWna3rQ%40mail.gmail.com>
* utility commands benefiting from parallel plan
<https://www.postgresql.org/message-id/CAJrrPGcY3SZa40vU%2BR8d8dunXp9JRcFyjmPn2RF9_4cxjHd7uA%40mail.gmail.com>

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

Attachment Content-Type Size
Moving_extension_lock_out_of_heavyweight_lock_v1.patch application/octet-stream 33.0 KB
