Re: Batch update of indexes on data loading

From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Batch update of indexes on data loading
Date: 2008-04-24 11:28:21
Message-ID: 1209036501.4259.1499.camel@ebony.site
Lists: pgsql-hackers

On Tue, 2008-02-26 at 09:08 +0000, Simon Riggs wrote:

> I very much like the idea of index merging, or put another way: batch
> index inserts. How big does the batch of index inserts have to be for us
> to gain benefit from this technique? Would it be possible to just buffer
> the index inserts inside the indexam module so that we perform a batch
> of index inserts every N rows? Maybe use work_mem? Or specify a batch
> size as a parameter on COPY?

Itagaki,

I think the index merging idea is still useful even if we do not build a
whole new index. ISTM we can also do this without locking the table.
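
To make the buffering idea concrete, here is a rough standalone sketch of
what I mean: collect the incoming entries and push them into the index in
sorted batches of N rows (or whenever a work_mem-sized buffer fills up).
The names and sizes below are made up for illustration; this is not the
real indexam code.

    /*
     * Standalone sketch (hypothetical names): buffer incoming index
     * entries and flush them in sorted batches instead of inserting
     * them one at a time.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define BATCH_SIZE 1000        /* could be derived from work_mem */

    typedef struct
    {
        int     key;               /* stand-in for an index key */
        long    heap_pos;          /* stand-in for the heap TID */
    } PendingEntry;

    static PendingEntry buffer[BATCH_SIZE];
    static int  nbuffered = 0;

    static int
    cmp_entries(const void *a, const void *b)
    {
        const PendingEntry *ea = a;
        const PendingEntry *eb = b;

        return (ea->key > eb->key) - (ea->key < eb->key);
    }

    /* Flush the buffered entries as one sorted batch. */
    static void
    flush_batch(void)
    {
        qsort(buffer, nbuffered, sizeof(PendingEntry), cmp_entries);
        for (int i = 0; i < nbuffered; i++)
            printf("insert key=%d heap_pos=%ld\n",
                   buffer[i].key, buffer[i].heap_pos);
        nbuffered = 0;
    }

    /* Called once per loaded row; flushes every BATCH_SIZE rows. */
    static void
    buffer_index_insert(int key, long heap_pos)
    {
        buffer[nbuffered].key = key;
        buffer[nbuffered].heap_pos = heap_pos;
        if (++nbuffered == BATCH_SIZE)
            flush_batch();
    }

    int
    main(void)
    {
        for (long row = 0; row < 2500; row++)
            buffer_index_insert(rand() % 100000, row);
        flush_batch();              /* flush the final partial batch */
        return 0;
    }

Sorting each batch means the inserts hit the index in key order, which
should give much better locality than row-at-a-time insertion.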

I understand it is most efficient when you do rebuild the index by
merging the old index with the incoming data, but it does seem there are
other problems associated with doing that.
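
For contrast, the full merge-rebuild approach boils down to a single
sequential merge of the sorted old index with the sorted new entries.
A similarly simplified sketch (again, made-up data and names, not the
real code):

    /*
     * Standalone sketch: merge the (sorted) existing index entries
     * with the (sorted) new entries in one pass.
     */
    #include <stdio.h>

    static void
    merge_index(const int *old_keys, int nold, const int *new_keys, int nnew)
    {
        int     i = 0,
                j = 0;

        while (i < nold || j < nnew)
        {
            if (j >= nnew || (i < nold && old_keys[i] <= new_keys[j]))
                printf("write old key %d\n", old_keys[i++]);
            else
                printf("write new key %d\n", new_keys[j++]);
        }
    }

    int
    main(void)
    {
        int     old_keys[] = {1, 4, 7, 9};
        int     new_keys[] = {2, 3, 8};

        merge_index(old_keys, 4, new_keys, 3);
        return 0;
    }

The sequential pass is what makes it so efficient, but it also implies
writing a whole new index, which is where the locking and visibility
problems come in.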

Your idea still has a great future, IMHO.

--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com
