Re: Reducing overhead of frequent table locks

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Simon Riggs <simon(at)2ndQuadrant(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Noah Misch <noah(at)leadboat(dot)com>, Alexey Klyukin <alexk(at)commandprompt(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Reducing overhead of frequent table locks
Date: 2011-05-25 14:35:24
Message-ID: 5273.1306334124@sss.pgh.pa.us
Lists: pgsql-hackers

Simon Riggs <simon(at)2ndQuadrant(dot)com> writes:
> On Wed, May 25, 2011 at 1:44 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> On Wed, May 25, 2011 at 8:27 AM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>>> Design seemed relatively easy from there: put local lock table in
>>> shared memory for all procs. We then have a use_strong_lock at proc
>>> and at transaction level. Anybody that wants a strong lock first sets
>>> use_strong_lock at proc and transaction level, then copies all local
>>> lock data into shared lock table,

>> I'm not following this...

> Which bit aren't you following? It's a design outline for how to
> implement, deliberately brief to allow a discussion of design
> alternatives.
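
For concreteness, the part I can follow reads to me roughly like the
sketch below (invented names and layout; I may well be misreading the
outline, which is rather the point of asking):

/* One possible reading of the outline above; not from any patch. */

#define MAX_BACKENDS          128
#define MAX_LOCKS_PER_PROC     64

typedef struct
{
    unsigned    relid;          /* table being locked */
    int         lockmode;       /* weak lock level held */
} FastLockEntry;

typedef struct
{
    int         use_strong_lock;    /* strong locks in play? */
    int         nFastLocks;
    FastLockEntry fastLocks[MAX_LOCKS_PER_PROC];
} ProcLockState;

/* One slot per backend, all living in a shared-memory segment. */
static ProcLockState *procLockStates;

/* Stub standing in for making one entry in the main lock table. */
void
publish_to_main_lock_table(unsigned relid, int lockmode)
{
}

void
acquire_strong_lock(int my_proc_no)
{
    int         p, i;

    /* Announce that a strong lock is being taken, so weak lockers stop
     * using the fast path.  (The outline also mentions a
     * transaction-level flag, omitted here.) */
    procLockStates[my_proc_no].use_strong_lock = 1;

    /* Fold everyone's fast-path entries into the main lock table so
     * conflicts can be detected the ordinary way. */
    for (p = 0; p < MAX_BACKENDS; p++)
        for (i = 0; i < procLockStates[p].nFastLocks; i++)
            publish_to_main_lock_table(procLockStates[p].fastLocks[i].relid,
                                       procLockStates[p].fastLocks[i].lockmode);
}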

What I'm not following is how moving the local lock table into shared
memory can possibly be a good idea. The reason we invented the local
lock table in the first place (we didn't use to have one) is so that a
process could do some manipulations without touching shared memory.
(Notably, it is currently nearly free, and certainly lock-free, to
re-request a lock type you already hold. This is not an infrequent
case.) That advantage will go away if you do this.
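
To make that concrete, here is a toy sketch of the fast path (invented
structures, not the real lock.c code):

/* Toy sketch only; invented names, not the actual lock manager. */

typedef struct
{
    unsigned    relid;      /* which table */
    int         lockmode;   /* which lock level */
    int         nLocks;     /* how many times this backend holds it */
} LocalLockEntry;

/* Backend-private memory: no other process can see or touch this. */
#define MAX_LOCAL_LOCKS 64
static LocalLockEntry localLocks[MAX_LOCAL_LOCKS];
static int numLocalLocks = 0;

/* Stub for the expensive path: goes to the shared lock table, which
 * means taking a lock-manager lock and touching shared memory. */
void
acquire_in_shared_table(unsigned relid, int lockmode)
{
}

void
acquire_lock(unsigned relid, int lockmode)
{
    int         i;

    /* Fast path: re-requesting a lock we already hold touches only
     * backend-local memory; no shared memory, no lock of any kind. */
    for (i = 0; i < numLocalLocks; i++)
    {
        if (localLocks[i].relid == relid &&
            localLocks[i].lockmode == lockmode)
        {
            localLocks[i].nLocks++;
            return;
        }
    }

    /* Slow path: first acquisition, so record it locally and then go
     * to the shared lock table.  (Overflow handling elided.) */
    localLocks[numLocalLocks].relid = relid;
    localLocks[numLocalLocks].lockmode = lockmode;
    localLocks[numLocalLocks].nLocks = 1;
    numLocalLocks++;

    acquire_in_shared_table(relid, lockmode);
}

Put localLocks in shared memory and that loop has to be protected
against concurrent readers and writers, which is exactly the cost the
local table was invented to avoid.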

regards, tom lane
