Re: PG10 Crash-safe and replicable Hash Indexes and UNIQUE

From: Chapman Flack <chap(at)anastigmatix(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: PG10 Crash-safe and replicable Hash Indexes and UNIQUE
Date: 2017-05-22 03:01:57
Lists: pgsql-hackers

On 05/19/17 11:41, Tom Lane wrote:

> No, nobody's done anything about allowing hash indexes to support
> uniqueness AFAIK. I don't have a clear picture of how much work
> it would be, but it would likely be more than trivial effort;

I see what you mean. Hash values are ordered (to allow binary search)
within a page, but not between the pages of a bucket, so insertion as
it stands now can stop as soon as it finds any page with room for the
entry. A unique insertion, by contrast, would have to check every page
of the bucket for matching hashes, and then (because only the hash and
tid are in the index) chase each match to the heap to compare the
actual values.
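For what it's worth, here is a toy sketch of that difference (plain
Python with invented names and a tiny page size; nothing like the real
bucket/page layout, just the shape of the extra work):

```python
# Toy model of a hash bucket: each page holds (hash, tid) entries,
# sorted by hash within the page but with no ordering between pages.
# All structures and sizes here are illustrative, not PostgreSQL's.
import bisect

PAGE_CAPACITY = 4  # entries per page; tiny, for illustration

class Bucket:
    def __init__(self):
        self.pages = [[]]  # each page: list of (hash, tid), sorted by hash

    def insert(self, h, tid):
        """Plain insertion: stop at the first page with room."""
        for page in self.pages:
            if len(page) < PAGE_CAPACITY:
                bisect.insort(page, (h, tid))  # keep within-page hash order
                return
        self.pages.append([(h, tid)])  # add an overflow page

    def insert_unique(self, h, tid, heap):
        """Unique insertion: every page of the bucket must be checked for
        matching hashes, and each match chased to the heap, because only
        the hash and tid are stored in the index."""
        for page in self.pages:
            i = bisect.bisect_left(page, (h,))
            while i < len(page) and page[i][0] == h:
                if heap[page[i][1]] == heap[tid]:  # real value comparison
                    raise ValueError("duplicate key")
                i += 1  # mere hash collision, not a duplicate value
        self.insert(h, tid)
```

Note that `insert_unique` cannot stop at the first page with room the
way `insert` can: a matching hash might sit on any later page of the
bucket, including an overflow page.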

Maybe both hash collisions and overflow pages are rare enough
in practice with reasonable data that the performance impact
of that would be small, but the possibility still has to be
accounted for, and the locking may get hairier: do you keep
the lock you already hold on the page where room was found for
the entry, and take another lock to walk the remaining pages
until you're sure there's no duplicate?

At least I see that interest in UNIQUE for hash indexes has been
shown on -hackers several times over the years, and is on the TODO.
Neil Conway seems to have had an idea [1] for making the locking work,
14 years ago (however relevant that might be to today's code).

... and one inquiry last year [2] did seem to get tabled because of the
lack of WAL logging, which is now a non-blocker.

I haven't seen much discussion of /why/ one would want hash-based UNIQUE.
I know my own reasons, but I'm not sure how persuasive they are in light
of the implementation realities, so maybe that makes such a discussion
worthwhile. I can start; these are the two reasons I had:

1. To a naive intuition (especially one raised more on in-memory data
structures than the guts of databases), it just seems natural:
hashing seems like the canonical approach to uniqueness testing
where there's no need for ordering, intuition suggests a performance
advantage, and so the least-astonishment principle suffers upon finding
it isn't supported.

2. When developing a custom data type, it feels like tedious
busy-work to have to bang out a full set of ordering operators
for a btree operator class if there is no meaningful order for
the type.

Maybe the intuitions behind (1) are just misinformed, the performance
ones at least, in light of Craig Ringer's low opinion of whether "hash
indexes are better than btree for anything" [3], and André Barbosa's
more recent performance comparison [4], which does show some advantages
for hash in some circumstances, but mostly not large ones. (The only
large advantage was in initial creation; would that be hashsort.c at work?)

But then, both [3] and [4] predate the recent projects on hash indexes
that have "made them crash safe and are on the way to making them
performant" [5], so maybe an updated comparison would be timely, or some
addition to the docs to better characterize the circumstances where hash
could be good. (Every index method newer than btree and hash has its own
part VII Internals chapter; for completeness, might it make sense to have
those for btree and hash also, even if only to broadly discuss
the conditions under which they perform especially well or poorly?)

For all sorts of indexes, would there be any use for some CREATE INDEX
syntax for a multicolumn index to say that some of its rightmost columns
aren't there to participate in the indexing scheme, but only to benefit
index-only scans? Applied to a hash index, that might offer another useful
kind of multicolumn support, which otherwise seems limited to queries
where you have the exact values of all indexed columns.
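To make that idea concrete, here is a toy sketch (plain Python,
invented names; no such feature exists in the CREATE INDEX grammar
today) of a hash index where only the leading "key" columns participate
in hashing, while trailing columns are merely stored to serve
index-only scans:

```python
# Toy sketch of a multicolumn hash index in which only the leading
# key columns are hashed, while trailing "payload" columns are stored
# solely so an index-only scan can return them without visiting the
# heap. All names are invented for illustration; this is not
# PostgreSQL's design.
from collections import defaultdict

class CoveringHashIndex:
    def __init__(self, n_key_cols):
        self.n_key = n_key_cols
        self.buckets = defaultdict(list)  # hash -> [(key_cols, payload_cols)]

    def insert(self, row):
        key, payload = row[:self.n_key], row[self.n_key:]
        self.buckets[hash(key)].append((key, payload))

    def lookup(self, *key):
        """Equality lookup needs exact values for all key columns, but
        the payload columns come straight from the index entry."""
        return [key + payload
                for k, payload in self.buckets[hash(key)]
                if k == key]  # recheck guards against hash collisions
```

The lookup still requires exact values for every key column, as hash
lookups must, but the trailing columns ride along for free.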

Anyway, even if my performance assumption behind (1) was too optimistic,
the astonishment when a new user finds a hash index can't support
uniqueness still seems real. A related astonishment is that a hash opclass can't
support DISTINCT, and that seems like something that could be changed
with much less work than making hash indexes amcanunique. Apparently,
array comparison already looks for a btree opclass but can fall back to
a hash one, if present, for equality comparisons. Would it be difficult
to do the same for DISTINCT?

As for my reason (2), the tedium of having to bang out btree operators
for a new type with no meaningful order (which, as I've just remembered,
is necessary not just for UNIQUE constraints but even just to make
SELECT DISTINCT work), maybe there's a solution that simply reduces
the tedium. After all, if a new type has no meaningful notion of order,
an arbitrary one imposed by copy/paste of some existing opclass
for a type with the same internallength might often be good enough.
Could there be some syntactic sugar for that, say,

FOR TYPE foo USING btree LIKE int4_ops;

? A transformOpclassLike function could verify that foo and the opcintype
of int4_ops have the same typlen and typbyval, and that the operators and
support procs are backed by C functions, and return a list of
CREATE OPERATOR statements reusing the same functions, followed by the
rewritten CREATE OPERATOR CLASS.
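A sketch of the checks such a transformOpclassLike might perform, here
against a mock catalog in Python purely to illustrate the logic (the
real thing would of course live in C, consulting pg_type and
pg_opclass; every name and value below is invented):

```python
# Mock-catalog sketch of the checks a hypothetical transformOpclassLike
# could perform before cloning an opclass for a new type. The dicts
# stand in for pg_type / pg_opclass lookups; all illustrative.
pg_type = {
    'int4': {'typlen': 4, 'typbyval': True},
    'foo':  {'typlen': 4, 'typbyval': True},    # the new type
    'text': {'typlen': -1, 'typbyval': False},  # incompatible layout
}
pg_opclass = {
    'int4_ops': {'opcintype': 'int4', 'procs_are_c': True},
}

def transform_opclass_like(new_type, source_opclass):
    src = pg_opclass[source_opclass]
    a, b = pg_type[new_type], pg_type[src['opcintype']]
    if (a['typlen'], a['typbyval']) != (b['typlen'], b['typbyval']):
        raise ValueError("physical representations differ; cannot borrow")
    if not src['procs_are_c']:
        raise ValueError("support procs are not C functions; cannot reuse")
    # The real thing would now emit CREATE OPERATOR statements reusing
    # the same C functions, then the rewritten CREATE OPERATOR CLASS.
    return f"clone {source_opclass} for {new_type}"
```

The typlen/typbyval check is the crux: borrowing comparison functions
is only safe when the two types share a physical representation.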

Would it be helpful to link any part of these notes to the hash index
section of the TODO page?


[3]: in a comment
