Re: Write Ahead Logging for Hash Indexes

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Jesper Pedersen <jesper(dot)pedersen(at)redhat(dot)com>, Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Write Ahead Logging for Hash Indexes
Date: 2017-03-14 18:36:24
Message-ID: 9485.1489516584@sss.pgh.pa.us
Lists: pgsql-hackers

Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> On Tue, Mar 14, 2017 at 2:14 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
>>> It's become pretty clear to me that there are a bunch of other things
>>> about hash indexes which are not exactly great, the worst of which is
>>> the way they grow by DOUBLING IN SIZE.

>> Uh, what? Growth should happen one bucket-split at a time.

> Technically, the buckets are created one at a time, but because of the
> way hashm_spares works, the primary bucket pages for all buckets from
> 2^N to 2^{N+1}-1 must be physically consecutive. See
> _hash_alloc_buckets.

Right, but we only fill those pages one at a time.
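
Just to spell out the arithmetic for anyone following along -- this is an
illustrative toy, not the real _hash_alloc_buckets() logic or its metapage
bookkeeping: reserving the primary pages for buckets 2^N .. 2^{N+1}-1 as one
consecutive run means the apparent size jumps at each power of two, while
the pages actually in use grow one split at a time.

#include <stdio.h>

static unsigned int
next_power_of_two(unsigned int n)
{
    unsigned int p = 1;

    while (p < n)
        p <<= 1;
    return p;
}

int
main(void)
{
    for (unsigned int nbuckets = 1; nbuckets <= 10; nbuckets++)
    {
        /* primary bucket pages actually filled, one per existing bucket */
        unsigned int used = nbuckets;
        /* pages reserved so the 2^N .. 2^{N+1}-1 run stays consecutive */
        unsigned int reserved = next_power_of_two(nbuckets);

        printf("buckets=%u  pages in use=%u  pages reserved=%u\n",
               nbuckets, used, reserved);
    }
    return 0;
}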

It's true that as soon as we need another overflow page, that's going to
get dropped beyond the 2^{N+1}-1 point, and the *apparent* size of the
index will grow quite a lot. But any modern filesystem should handle
that without much difficulty by treating the index as a sparse file.
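
To make "sparse file" concrete, here's a tiny standalone demo (the file name
is made up and nothing here is PostgreSQL code): write one 8kB block a
gigabyte past the start of an empty file, and on a typical Linux filesystem
st_size reports the large apparent size while st_blocks shows that only a
few kB were actually allocated.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int
main(void)
{
    char        block[8192];
    struct stat st;
    int         fd = open("sparse_demo.dat", O_CREAT | O_RDWR | O_TRUNC, 0600);

    if (fd < 0)
    {
        perror("open");
        return 1;
    }

    memset(block, 'x', sizeof(block));

    /* Write a single 8kB block at a 1GB offset; the gap becomes a hole. */
    if (pwrite(fd, block, sizeof(block), 1024L * 1024 * 1024) < 0)
    {
        perror("pwrite");
        return 1;
    }

    if (fstat(fd, &st) == 0)
        printf("apparent size = %lld bytes, allocated = %lld bytes\n",
               (long long) st.st_size,
               (long long) st.st_blocks * 512LL);

    close(fd);
    return 0;
}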

There may be some work to be done in places like pg_basebackup to
recognize and deal with sparse files, but it doesn't seem like a
reason to panic.
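
For what it's worth, where the filesystem supports it, lseek(SEEK_DATA) and
lseek(SEEK_HOLE) report exactly where the holes are; a rough sketch of the
kind of loop such a tool might use (purely illustrative, not a proposed
patch) looks like this:

/* SEEK_DATA/SEEK_HOLE need _GNU_SOURCE on Linux. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static void
list_data_ranges(int fd)
{
    off_t   end = lseek(fd, 0, SEEK_END);
    off_t   pos = 0;

    while (pos < end)
    {
        off_t   data = lseek(fd, pos, SEEK_DATA);
        off_t   hole;

        if (data < 0)
            break;              /* no more data, or not supported */

        hole = lseek(fd, data, SEEK_HOLE);
        if (hole < 0)
            hole = end;

        /* A real tool would copy [data, hole) and recreate the hole. */
        printf("data range: %lld .. %lld\n",
               (long long) data, (long long) hole);

        pos = hole;
    }
}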

regards, tom lane
