From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Dann Corbit" <DCorbit(at)connx(dot)com>
Subject: Why hash indexes suck
"Dann Corbit" <DCorbit(at)connx(dot)com> writes:
> There seems to be something seriously defective with hash indexes in old
> versions of PostgreSQL.
They still suck; I'm not aware of any situation where I'd recommend hash
over btree indexes in Postgres. I think we have fixed the hash indexes'
deadlock problems as of 7.4, but there's still no real performance
reason to use them.
I just had an epiphany as to the probable reason why the performance sucks.
It's this: the hash bucket size is the same as the page size (ie, 8K).
This means that if you have only one or a few items per bucket, the
information density is awful, and you lose big on I/O requirements
compared to a btree index. On the other hand, if you have enough
items per bucket to make the storage density competitive, you will
be doing linear searches through dozens if not hundreds of items that
are all in the same bucket, and you lose on CPU time (compared to btree
which can do binary search to find an item within a page).
It would probably be interesting to look into making the hash bucket
size be just a fraction of a page, with the intent of having no more
than a couple dozen items per bucket. I'm not sure what the
implications would be for intra-page storage management or index locking
conventions, but offhand it seems like there wouldn't be any
showstoppers.
I'm not planning on doing this myself, just throwing it out as a
possible TODO item for anyone who's convinced that hash indexes ought
to work better than they do.
regards, tom lane