Re: Debugging deadlocks

From: Greg Stark <gsstark(at)mit(dot)edu>
To: Alvaro Herrera <alvherre(at)dcc(dot)uchile(dot)cl>
Cc: Greg Stark <gsstark(at)mit(dot)edu>, pgsql-general(at)postgresql(dot)org
Subject: Re: Debugging deadlocks
Date: 2005-03-31 14:41:35
Message-ID: 87hdirx4gw.fsf@stark.xeocode.com
Lists: pgsql-general

Alvaro Herrera <alvherre(at)dcc(dot)uchile(dot)cl> writes:

> On Wed, Mar 30, 2005 at 05:41:04PM -0500, Greg Stark wrote:
> >
> > Alvaro Herrera <alvherre(at)dcc(dot)uchile(dot)cl> writes:
> >
> > Is that true even if I'm updating/deleting 1,000 tuples that all reference the
> > same foreign key? It seems like that should only need a single lock per
> > (sub)transaction_id per referenced foreign key.
>
> Well, in that case you need 1000 PROCLOCK objects, all pointing to the
> same LOCK object. But it still uses shared memory.

Why? What do you need these PROCLOCK objects for? You're never going to do
anything to these locks but release them all together.
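To be concrete about what I'm questioning, here is roughly the shape I have in
mind for the shared lock table (an illustrative C sketch only -- made-up names,
not the actual lock.h definitions): one entry per locked object, plus one
per-holder entry recording who holds it and how.

    /*
     * Illustrative sketch only: made-up names, not the real lock.h structs.
     * The shared lock table keeps one LockEntry per locked object and one
     * HolderEntry per (holder, lock) pair.
     */
    typedef struct LockedObjectTag      /* identifies what is locked */
    {
        unsigned int   relId;
        unsigned int   blockNum;
        unsigned short offsetNum;       /* tuple within the block */
    } LockedObjectTag;

    typedef struct LockEntry
    {
        LockedObjectTag tag;            /* hash key: the locked tuple */
        int             sharedHolders;  /* number of shared holders */
        int             exclusiveHeld;  /* 0 or 1 */
    } LockEntry;

    typedef struct HolderEntry          /* the per-holder bookkeeping at issue */
    {
        LockEntry *lock;                /* which lock */
        int        xid;                 /* which (sub)transaction holds it */
        int        holdMask;            /* modes held */
    } HolderEntry;

If the 1,000 FK checks in one (sub)transaction all point at the same LockEntry,
I'd expect them to be able to share a single HolderEntry rather than allocating
a new shared-memory entry per tuple touched.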

> > How is this handled currently? Is your patch any worse than the current
> > behaviour?
>
> With my patch it's useless without a provision to spill the lock table.
> The current situation is that we don't use the lock table to lock
> tuples; instead we mark them on disk, in the tuple itself. So we can't
> really mark a tuple more than once (because we have only one bit to
> mark); that's why we limit tuple locking to exclusive locking (there's
> no way to mark a tuple with more than one shared lock).

For reference, the way Oracle does it (as I understand it) is to lock tuples on
disk as well, but by reserving a fixed amount of space in each block to record
the locks. If that space is exhausted it falls back to some other mechanism.
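
Roughly what I mean, with hypothetical names and numbers (I don't have Oracle's
actual block layout at hand): each block header reserves a small fixed array of
transaction slots, and locking a row means claiming one of them.

    #define SLOTS_PER_BLOCK 2           /* fixed space reserved in every block */

    typedef struct TxnSlot
    {
        unsigned long xid;              /* transaction holding locks here */
        int           in_use;
    } TxnSlot;

    typedef struct BlockHeader
    {
        TxnSlot slots[SLOTS_PER_BLOCK];
        /* ... row data follows ... */
    } BlockHeader;

    /*
     * Claim a lock slot in this block for xid; return its index, or -1 if
     * the block's reserved lock space is exhausted and the caller has to
     * fall back (wait, or whatever the fallback mechanism is).
     */
    int
    claim_slot(BlockHeader *blk, unsigned long xid)
    {
        int i;

        for (i = 0; i < SLOTS_PER_BLOCK; i++)
            if (blk->slots[i].in_use && blk->slots[i].xid == xid)
                return i;               /* already have a slot in this block */

        for (i = 0; i < SLOTS_PER_BLOCK; i++)
            if (!blk->slots[i].in_use)
            {
                blk->slots[i].in_use = 1;
                blk->slots[i].xid = xid;
                return i;
            }

        return -1;                      /* all slots taken */
    }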

I think in practice you rarely need more than one or two lockers per block, so
this works quite well most of the time. But you're paying for that reserved
space in lower i/o throughput all the time.

> With my patch we need a lot of memory for each tuple locked. This needs
> to be shared memory. Since shared memory is limited, we can't grab an
> arbitrary number of locks simultaneously. Thus, deleting a whole table
> can fail. You haven't ever seen Postgres failing in a DELETE FROM
> table, have you?

--
greg
