Re: Shared row locking

From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Alvaro Herrera <alvherre(at)dcc(dot)uchile(dot)cl>, Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Shared row locking
Date: 2004-12-19 09:52:01
Message-ID: 1103449921.2893.63.camel@localhost.localdomain
Lists: pgsql-hackers

On Sun, 2004-12-19 at 04:04, Bruce Momjian wrote:
> Tom Lane wrote:
> > Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us> writes:
> > > You mean all empty/zero rows can be removed? Can we guarantee that on
> > > commit we can clean up the bitmap? If not, the idea doesn't work.
> >
> > For whatever data structure we use, we may reset the structure to empty
> > during backend-crash recovery. So your objection boils down to "what if
> > a backend exits normally but forgets to clean up its locks?" Assuming
> > that doesn't happen isn't any worse than assuming a backend will clean
> > up its shared memory state on non-crash exit, so I don't think it's a
> > serious concern.
> >
> > That brings another thought: really what this is all about is working
> > around the fact that the standard lock manager can only cope with a
> > finite number of coexisting locks, because it's working in a fixed-size
> > shared memory arena. Maybe we should instead think about ways to allow
> > the existing lock table to spill to disk when it gets too big. That
> > would eliminate max_locks_per_transaction as a source of hard failures,
> > which would be a nice benefit.
>
> Agreed. One concern I have about allowing the lock table to spill to
> disk is that a large number of FOR UPDATE locks could push out lock
> entries used by other backends, causing very poor performance.

In similar circumstances, DB2 uses these techniques:

- when the lock table is X% full, escalate row locks to full table locks;
both the lock table memory and the threshold percentage are instance
parameters

- use a lock mode called Cursor Stability that locks only the rows
currently being examined by a cursor, thus maintaining the lock usage
of a cursor at a constant level as the cursor moves. The Repeatable
Read lock mode, by contrast, *does* lock all rows read

(these are not actually mutually exclusive)

The first one is a real pain, but the idea might be of use somewhere.
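To make the escalation idea concrete, here is a minimal C sketch of the
threshold check; the structure and function names are purely illustrative
assumptions, not PostgreSQL or DB2 code:

/* Hypothetical sketch of DB2-style lock escalation; names are
 * illustrative only and correspond to no real code. */
#include <stdbool.h>
#include <stdio.h>

typedef struct LockTableStats
{
    long    entries_used;   /* row-lock entries currently held */
    long    entries_max;    /* fixed capacity of the shared lock table */
    double  escalate_pct;   /* instance parameter: escalation threshold (%) */
} LockTableStats;

/* Return true when the backend should stop taking per-row locks and
 * instead take a single table-level lock covering them. */
static bool
should_escalate(const LockTableStats *stats)
{
    double used_pct = 100.0 * (double) stats->entries_used
                            / (double) stats->entries_max;

    return used_pct >= stats->escalate_pct;
}

int
main(void)
{
    LockTableStats stats = { 900, 1000, 80.0 };  /* 90% full, threshold 80% */

    if (should_escalate(&stats))
        printf("escalate: take one table lock, release the row locks\n");

    return 0;
}

The obvious cost is that once escalation happens, a single table lock can
block every other writer on that table, which is why it is painful in
practice.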

The second is a usable, practical alternative that should be considered;
it might avoid the need to write the spill-to-disk code only to discover
afterwards that it performs very badly.
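
To illustrate the difference between the two modes, here is a small
self-contained C demo of the Cursor Stability pattern; the row structure
and lock primitives are toy assumptions, not real APIs. Under Repeatable
Read the releases would simply be deferred to transaction end, so lock
usage would grow with every row read:

/* Illustrative demo of Cursor Stability locking, with toy lock
 * primitives; none of these names are PostgreSQL or DB2 APIs. */
#include <stdio.h>
#include <stddef.h>

typedef struct Row { int id; } Row;

static int locks_held = 0;      /* row locks currently held */

static void lock_row(Row *r)   { locks_held++; printf("lock row %d (held=%d)\n", r->id, locks_held); }
static void unlock_row(Row *r) { locks_held--; printf("unlock row %d (held=%d)\n", r->id, locks_held); }

int
main(void)
{
    Row     rows[] = { {1}, {2}, {3}, {4} };
    size_t  n = sizeof(rows) / sizeof(rows[0]);
    Row    *prev = NULL;

    /* Cursor Stability: lock the row under the cursor and release the
     * previous one, so at most one row lock is held at any time. */
    for (size_t i = 0; i < n; i++)
    {
        lock_row(&rows[i]);
        if (prev != NULL)
            unlock_row(prev);   /* under Repeatable Read this release
                                 * would wait until transaction end */
        /* ... process rows[i] here ... */
        prev = &rows[i];
    }
    if (prev != NULL)
        unlock_row(prev);

    return 0;
}

Because the number of locks held stays constant as the cursor moves, this
mode never fills the lock table the way locking every row read does.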

--
Best Regards, Simon Riggs
