Re: very very slow inserts into very large table

From: Claudio Freire <klaussfreire(at)gmail(dot)com>
To: Mark Thornton <mthornton(at)optrak(dot)com>
Cc: Jon Nelson <jnelson+pgsql(at)jamponi(dot)net>, pgsql-performance(at)postgresql(dot)org
Subject: Re: very very slow inserts into very large table
Date: 2012-07-16 20:01:21
Message-ID: CAGTBQpYe+csNXc3whgL_EWTMwH-W8sLkzdFNk8jAe2qoVYLkfA@mail.gmail.com
Lists: pgsql-performance

On Mon, Jul 16, 2012 at 4:16 PM, Mark Thornton <mthornton(at)optrak(dot)com> wrote:
>> Actually, it should create a temporary btree index and merge[0] them.
>> Only worth it if there are really a lot of rows.
>>
>> [0] http://www.ccs.neu.edu/home/bradrui/index_files/parareorg.pdf
>
> I think 93 million would qualify as a lot of rows. However, does any
> available database (commercial or open source) use this optimisation?

Databases, I honestly don't know. But I do know most document
retrieval engines use a similar technique with inverted indexes.
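The technique discussed above — sort the incoming batch into a temporary structure, then fold it into the existing sorted index in one sequential pass instead of doing one random-access insert per row — can be sketched as follows. This is an illustrative toy over sorted Python lists, not the implementation from the paper or from any database; the function name `bulk_insert` is made up for the example.

```python
import heapq

def bulk_insert(index_keys, new_keys):
    """Merge a batch of new keys into an already-sorted index.

    Rather than inserting each key individually (one random descent
    per key), sort the batch once (the "temporary index") and merge it
    with the existing sorted index in a single sequential pass -- the
    same idea as building a temporary btree and merging it into the
    main index, or merging segments of an inverted index.
    """
    temp = sorted(new_keys)  # build the temporary sorted run
    # heapq.merge consumes both sorted inputs in one linear pass
    return list(heapq.merge(index_keys, temp))

index = [10, 20, 30, 40]
index = bulk_insert(index, [35, 5, 25])
# index is now [5, 10, 20, 25, 30, 35, 40]
```

The win is that the merge touches each index page once, sequentially, instead of revisiting pages in random order for every inserted row — which is why it only pays off when the batch is large.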
