From: Neil Conway <neilc(at)samurai(dot)com>
To: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: locking for unique hash indexes
Date: 2003-09-17 02:30:22
Message-ID: 1063758747.24276.29.camel@tokyo

I'm working on adding support for unique indexes to the hash index code.
The majority of the code is finished, with the exception of the changes
to locking. My proposed changes are below -- if anyone sees a better way
to do things, let me know.

There's lots of info on the locking scheme used by hash indexes in
src/backend/access/hash/README, but I'll present a short summary here.
(I've omitted some unimportant details.) Insertions do the following:

1. Look up the bucket in which to store the index item, based on
its hash (we acquire some locks to ensure the hash-key-to-bucket
mapping doesn't change under our feet).

2. Acquire a shared lmgr lock on the target bucket, to ensure
that no one tries to split or compact it while we're looking at
it.

3. Acquire an exclusive lwlock on the primary bucket page. This
prevents hash scans from seeing the partially written page.

4. If the primary bucket page has enough free space to fit the
new index item, insert it and release the lwlock; if not,
release the lwlock, advance to the next overflow page, and
repeat. If we're out of overflow pages but haven't found free
space, create a new overflow page.

5. Release the shared lmgr lock on the target bucket.

Scans use a similar algorithm, except that they need only acquire a
shared lwlock on each bucket page.
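
To make the locking sequence concrete, here is a toy model of the
insertion path in C, using pthread rwlocks to stand in for both the
lmgr locks and the lwlocks. Everything here -- the names, the page
layout, the constants -- is invented for illustration only; the real
code lives in src/backend/access/hash/ and looks quite different.

/*
 * Toy model of hash-index insertion locking (build: cc -pthread).
 * pthread rwlocks stand in for lmgr locks and lwlocks alike.
 */
#include <pthread.h>
#include <stdio.h>

#define NBUCKETS   4
#define NPAGES     3        /* primary page plus overflow pages */
#define PAGE_SLOTS 8

typedef struct
{
    pthread_rwlock_t lwlock;    /* per-page lwlock (step 3) */
    int              nitems;
    unsigned         items[PAGE_SLOTS];
} Page;

typedef struct
{
    pthread_rwlock_t lmgr_lock; /* per-bucket lmgr lock (step 2) */
    Page             pages[NPAGES];
} Bucket;

static Bucket buckets[NBUCKETS];

/* Step 1: hash-key-to-bucket mapping (metapage locking omitted) */
static int
key_to_bucket(unsigned hashkey)
{
    return hashkey % NBUCKETS;
}

static void
hash_insert(unsigned hashkey)
{
    Bucket *bucket = &buckets[key_to_bucket(hashkey)];
    int     pageno;

    /* Step 2: shared bucket lock blocks concurrent split/compact */
    pthread_rwlock_rdlock(&bucket->lmgr_lock);

    for (pageno = 0; pageno < NPAGES; pageno++)
    {
        Page *page = &bucket->pages[pageno];

        /* Step 3: exclusive lwlock before touching the page */
        pthread_rwlock_wrlock(&page->lwlock);

        /* Step 4: insert if there's room, else try the next page */
        if (page->nitems < PAGE_SLOTS)
        {
            page->items[page->nitems++] = hashkey;
            pthread_rwlock_unlock(&page->lwlock);
            break;
        }
        pthread_rwlock_unlock(&page->lwlock);
        /* (allocating a new overflow page when all are full: omitted) */
    }

    /* Step 5: release the bucket-level lock */
    pthread_rwlock_unlock(&bucket->lmgr_lock);
}

int
main(void)
{
    int i, j;

    for (i = 0; i < NBUCKETS; i++)
    {
        pthread_rwlock_init(&buckets[i].lmgr_lock, NULL);
        for (j = 0; j < NPAGES; j++)
            pthread_rwlock_init(&buckets[i].pages[j].lwlock, NULL);
    }
    hash_insert(42);
    printf("bucket %d now holds %d item(s)\n",
           key_to_bucket(42), buckets[key_to_bucket(42)].pages[0].nitems);
    return 0;
}

A scan in this model would take pthread_rwlock_rdlock() on each page
instead of the write lock, matching the shared lwlock described above.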

The additional requirement for unique hash indexes is to be able to
prevent other backends from concurrently inserting a duplicate key into
the target bucket chain.

In order to do this, we could:

- Invent a new set of lmgr locks; call them "right of insertion" locks,
and have one for each bucket in the hash index. Only one backend can
hold the ROI lock for a given bucket at any given time.

- On insertion, grab the ROI lock in exclusive mode for the target
bucket, after we've already done step #2. This serializes insertions
into the bucket (see the sketch after this list).

- Index scans don't need to acquire the ROI lock, since they can't
possibly insert a duplicate, and they're already protected from seeing
partially written pages by the lwlocks acquired by the inserting
backend. Similarly, bulk deletions don't need to acquire the ROI lock,
since they already can't proceed in parallel with an insertion on any
given bucket (they acquire the step #2 lmgr lock in exclusive mode).
Insertions on other buckets in the index can proceed in parallel with
one another: equal keys hash to the same bucket, so by definition
insertions into different buckets can't be inserting duplicates of one
another.

- Naturally, we don't bother with ROI locks if the hash index isn't a
unique index.
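
Continuing the toy model from above (reusing Page, NPAGES, and
PAGE_SLOTS), the unique-insertion path might look like the sketch
below. The ROI lock appears as a plain mutex per bucket; again, all
the names are invented, and the real lock would be an lmgr lock keyed
by bucket number.

typedef struct
{
    pthread_rwlock_t lmgr_lock;
    pthread_mutex_t  roi_lock;  /* "right of insertion", one per bucket */
    Page             pages[NPAGES];
} UniqueBucket;

/* Returns 1 on success, 0 if a duplicate key was found. */
static int
unique_insert(UniqueBucket *bucket, unsigned hashkey)
{
    int pageno, i;

    /* Step 2, exactly as in the non-unique case */
    pthread_rwlock_rdlock(&bucket->lmgr_lock);

    /*
     * Serialize insertions into this bucket: while we hold the ROI
     * lock, no other inserter can sneak in between our duplicate
     * check and our insertion.  Scans and bulk deletes skip this.
     */
    pthread_mutex_lock(&bucket->roi_lock);

    /* Walk the bucket chain looking for a duplicate key */
    for (pageno = 0; pageno < NPAGES; pageno++)
    {
        Page *page = &bucket->pages[pageno];

        pthread_rwlock_rdlock(&page->lwlock);
        for (i = 0; i < page->nitems; i++)
        {
            if (page->items[i] == hashkey)
            {
                pthread_rwlock_unlock(&page->lwlock);
                pthread_mutex_unlock(&bucket->roi_lock);
                pthread_rwlock_unlock(&bucket->lmgr_lock);
                return 0;       /* duplicate: reject the insertion */
            }
        }
        pthread_rwlock_unlock(&page->lwlock);
    }

    /* No duplicate: insert via steps 3-4 as before (omitted here) */

    pthread_mutex_unlock(&bucket->roi_lock);
    pthread_rwlock_unlock(&bucket->lmgr_lock);
    return 1;
}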

Q: Is there a possibility of deadlock here? (If not, the ROI lock could
be an lwlock, perhaps.)

Any comments would be welcome.

-Neil

P.S. While we're on the subject of hash indexes and locking, ISTM that
we could get better concurrent performance in step #4 by first
acquiring the lwlock on a particular bucket page in shared mode,
checking whether it has free space, and only if it does, getting a
write lock on it and doing the insertion. Of course, we'd need to
re-check the free space once we got the write lock, to ensure that
someone else wasn't granted a write lock on that page while we were
waiting and then decreased the free space.
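
In the same toy model, that optimization might look like this -- peek
at the free space under a shared lock, and take the exclusive lock
(re-checking!) only when the page looked promising:

/* Returns 1 if the item was inserted on this page, 0 otherwise. */
static int
try_insert_page(Page *page, unsigned hashkey)
{
    int full;

    /* Cheap peek under a shared lock */
    pthread_rwlock_rdlock(&page->lwlock);
    full = (page->nitems >= PAGE_SLOTS);
    pthread_rwlock_unlock(&page->lwlock);

    if (full)
        return 0;               /* move on to the next page */

    /* Looked promising: take the write lock and re-check */
    pthread_rwlock_wrlock(&page->lwlock);
    if (page->nitems >= PAGE_SLOTS)
    {
        /* Someone filled the page while we waited */
        pthread_rwlock_unlock(&page->lwlock);
        return 0;
    }
    page->items[page->nitems++] = hashkey;
    pthread_rwlock_unlock(&page->lwlock);
    return 1;
}

Note that pthread rwlocks can't be upgraded in place, and neither can
lwlocks, so "upgrading" here really means unlock-then-relock, which is
exactly why the re-check is needed.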
