On Tue, May 10, 2005 at 12:10:57AM -0400, Tom Lane wrote:
> be responsive to your search.) (This also brings up the thought that
> it might be interesting to support hash buckets smaller than a page ...
> but I don't know how to make that work in an adaptive fashion.)
IIRC, other databases that support hash indexes also allow you to define
the bucket size, so it might be a good start to allow for that. DBAs
usually have a pretty good idea of what a table will look like in
production, so given clear documentation on the effect of bucket size, a
competent DBA should be able to make an informed decision.
What's the challenge in making it adaptive: coming up with an algorithm
that gives you the optimal bucket size (which I would think there's
research on...), or allowing different bucket sizes to coexist within
the same index? (Presumably you don't want to re-write the entire index
every time it looks like a different bucket size would help.)
Jim C. Nasby, Database Consultant decibel(at)decibel(dot)org
Give your computer some brain candy! www.distributed.net Team #1828
Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"