| From: | Mark Thornton <mthornton(at)optrak(dot)com> |
|---|---|
| To: | Claudio Freire <klaussfreire(at)gmail(dot)com> |
| Cc: | Jon Nelson <jnelson+pgsql(at)jamponi(dot)net>, pgsql-performance(at)postgresql(dot)org |
| Subject: | Re: very very slow inserts into very large table |
| Date: | 2012-07-16 19:16:11 |
| Message-ID: | 5004687B.8080908@optrak.com |
| Lists: | pgsql-performance |
On 16/07/12 20:08, Claudio Freire wrote:
> On Mon, Jul 16, 2012 at 3:59 PM, Mark Thornton <mthornton(at)optrak(dot)com> wrote:
>> 4. The most efficient way for the database itself to do the updates would be
>> to first insert all the data in the table, and then update each index in
>> turn having first sorted the inserted keys in the appropriate order for that
>> index.
> Actually, it should create a temporary btree index and merge[0] the two.
> Only worthwhile if there are really a lot of rows.
>
> [0] http://www.ccs.neu.edu/home/bradrui/index_files/parareorg.pdf
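In stock PostgreSQL you can approximate that merge by hand: buffer incoming rows in a small staging table whose indexes stay cache-resident, then periodically flush each batch into the big table in index order, so the big index is updated in a mostly-sequential pass rather than randomly. A rough sketch (table and column names are made up):

```sql
-- staging table with the same shape as the big table (hypothetical names)
CREATE TABLE staging (LIKE big_table INCLUDING DEFAULTS);

-- new rows land here first; any index on staging stays small and cached
-- ... many INSERTs into staging ...

-- periodic merge: sorting the batch by the indexed column turns the
-- big table's index maintenance into a near-sequential pass
INSERT INTO big_table SELECT * FROM staging ORDER BY ts;
TRUNCATE staging;
```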
I think 93 million rows would certainly qualify as a lot. However, does any
available database (commercial or open source) actually implement this
optimisation?
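For a one-off bulk load, the nearest equivalent of the sorted per-index update
in stock PostgreSQL is to drop the indexes, load, and rebuild: CREATE INDEX
sorts all the keys once and writes the btree bottom-up instead of descending
the tree for every inserted row. A minimal sketch, again with hypothetical
table, column, and file names:

```sql
BEGIN;
-- avoid touching the index once per inserted row
DROP INDEX IF EXISTS big_table_ts_idx;

-- COPY is the fastest bulk path into an existing table
COPY big_table FROM '/tmp/new_rows.csv' WITH (FORMAT csv);

-- the rebuild sorts every key once and writes the btree sequentially
CREATE INDEX big_table_ts_idx ON big_table (ts);
COMMIT;
```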
Mark