Re: locking for unique hash indexes

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Neil Conway <neilc(at)samurai(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: locking for unique hash indexes
Date: 2003-09-19 21:24:32
Message-ID: 8743.1064006672@sss.pgh.pa.us
Lists: pgsql-hackers

Neil Conway <neilc(at)samurai(dot)com> writes:
> - Invent a new set of lmgr locks; call them "right of insertion" locks,
> and have one for each bucket in the hash index. Only one backend will
> hold the ROI lock for a given bucket at any given time.

Rather than trying to invent a new set of lock IDs (which would be
difficult to squeeze into the page mapping, I think), you could encode
this as an appropriate lock mode on the existing set of bucket lock IDs.
It looks like this would work:

HASH_SHARE -> AccessShareLock
unique-insertion lock -> ShareUpdateExclusiveLock
HASH_EXCLUSIVE -> AccessExclusiveLock

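For concreteness, here is a minimal sketch of how that remapping might look in
a hash-AM lock helper, assuming a LockPage()-based bucket locking convention
along the lines of the existing _hash_getlock(); the HashLockKind enum and the
hash_bucket_lock() name are illustrative, not actual code:

    /*
     * Sketch only: map bucket-lock requests onto ordinary lmgr lock modes,
     * so a unique-insertion lock can sit between the share and exclusive
     * levels.  ShareUpdateExclusiveLock conflicts with itself (one unique
     * inserter per bucket at a time) but not with AccessShareLock, so
     * readers and non-unique inserters are not blocked.
     */
    #include "postgres.h"
    #include "storage/lmgr.h"

    typedef enum HashLockKind
    {
        HASH_BUCKET_SHARE,          /* readers, non-unique insertions */
        HASH_BUCKET_UNIQUE_INSERT,  /* proposed unique-insertion lock */
        HASH_BUCKET_EXCLUSIVE       /* bucket split/compaction */
    } HashLockKind;

    static void
    hash_bucket_lock(Relation rel, BlockNumber bucket_blkno, HashLockKind kind)
    {
        LOCKMODE    mode;

        switch (kind)
        {
            case HASH_BUCKET_SHARE:
                mode = AccessShareLock;
                break;
            case HASH_BUCKET_UNIQUE_INSERT:
                mode = ShareUpdateExclusiveLock;
                break;
            case HASH_BUCKET_EXCLUSIVE:
                mode = AccessExclusiveLock;
                break;
            default:
                elog(ERROR, "unrecognized hash bucket lock kind: %d",
                     (int) kind);
                mode = NoLock;  /* keep compiler quiet */
                break;
        }

        LockPage(rel, bucket_blkno, mode);
    }
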
> Q: Is there a possibility of deadlock here?

I think you would need to set it up so that insertion into a unique
index grabs ShareUpdateExclusiveLock *instead of* AccessShareLock, not
*in addition to*. Otherwise I think there is indeed some risk.
However, it should be easy enough to do it that way, and there's no
real cost since it's still just one lock acquisition.

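To make the instead-of rule concrete, a hedged sketch of the single lock
acquisition on the insertion path (function name and structure are
illustrative):

    /*
     * Sketch only: a unique insertion takes ShareUpdateExclusiveLock on the
     * target bucket in place of AccessShareLock, never on top of it, so
     * there is no lock-upgrade step that could deadlock against a concurrent
     * bucket split waiting for AccessExclusiveLock.
     */
    static void
    hash_do_insert(Relation rel, BlockNumber bucket_blkno, bool is_unique)
    {
        LOCKMODE    mode = is_unique ? ShareUpdateExclusiveLock : AccessShareLock;

        LockPage(rel, bucket_blkno, mode);

        /* if unique: scan the bucket for a conflicting key, error out if found */
        /* ... then perform the actual insertion ... */

        UnlockPage(rel, bucket_blkno, mode);
    }
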
> P.S. While we're on the subject on hash indexes and locking, ISTM that
> we could get better concurrent performance in #4 by first acquiring the
> lwlock on a particular bucket page in shared mode, checking if it has
> free space, and only if it does, getting a write lock on it and doing
> the insertion.

The free-space check is cheap enough that I think this would just be a
waste of cycles.

regards, tom lane
