
Re: very very slow inserts into very large table

From: Mark Thornton <mthornton(at)optrak(dot)com>
To: Claudio Freire <klaussfreire(at)gmail(dot)com>
Cc: Jon Nelson <jnelson+pgsql(at)jamponi(dot)net>, pgsql-performance(at)postgresql(dot)org
Subject: Re: very very slow inserts into very large table
Date: 2012-07-16 19:16:11
Message-ID: 5004687B.8080908@optrak.com
Lists: pgsql-performance
On 16/07/12 20:08, Claudio Freire wrote:
> On Mon, Jul 16, 2012 at 3:59 PM, Mark Thornton <mthornton(at)optrak(dot)com> wrote:
>> 4. The most efficient way for the database itself to do the updates would be
>> to first insert all the data in the table, and then update each index in
>> turn having first sorted the inserted keys in the appropriate order for that
>> index.
> Actually, it should create a temporary btree index and merge[0] them.
> Only worthwhile if there are really a lot of rows.
>
> [0] http://www.ccs.neu.edu/home/bradrui/index_files/parareorg.pdf
I think 93 million would qualify as a lot of rows. However, does any 
available database (commercial or open source) actually use this 
optimisation?
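
For what it's worth, the usual way to approximate that behaviour in 
PostgreSQL today is to drop the secondary indexes, bulk-load, and 
rebuild them afterwards: CREATE INDEX sorts the keys once and builds 
the btree bottom-up, which is essentially the sorted per-index update 
described in point 4. A minimal sketch (table, column, and index names 
are hypothetical, and COPY FROM a server-side file needs appropriate 
privileges):

BEGIN;

-- Drop the secondary indexes so the load only writes heap pages.
DROP INDEX idx_a;
DROP INDEX idx_b;

-- Bulk-load the new rows; COPY is the fastest insert path.
COPY big_table FROM '/tmp/new_rows.csv' WITH (FORMAT csv);

-- Rebuild each index: the keys are sorted once and the btree is
-- built bottom-up, instead of ~93M individual index insertions.
CREATE INDEX idx_a ON big_table (col_a);
CREATE INDEX idx_b ON big_table (col_b);

COMMIT;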

Mark


