Re: Very poor performance loading 100M of sql data using copy

From: Shane Ambler <pgsql(at)Sheeky(dot)Biz>
To: John Rouillard <rouilj(at)renesys(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Very poor performance loading 100M of sql data using copy
Date: 2008-04-28 19:49:59
Message-ID: 48162A67.3090500@Sheeky.Biz
Lists: pgsql-performance

John Rouillard wrote:

> We can't do this as we are backfilling a couple of months of data
> into tables with existing data.

Is this a one-off load of historic data or an ongoing thing?

>>> The only indexes we have to drop are the ones on the primary keys
>>> (there is one non-primary key index in the database as well).

If this amount of data importing is ongoing, then one approach worth trying
is partitioning (it could be worthwhile anyway with the amount of data you
appear to have).
Create an inherited table for the month being imported, load the data into
it, then add the check constraint and indexes, and modify the rules/triggers
on the parent table so that new inserts get routed to the right child.
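
A minimal sketch of that sequence, assuming a parent table called
measurements partitioned by a date column logdate (table, column and file
names here are made up for illustration, not taken from your schema):

  -- child table for the month being backfilled
  CREATE TABLE measurements_200802 () INHERITS (measurements);

  -- bulk load straight into the child, no parent indexes involved
  COPY measurements_200802 FROM '/path/to/feb_2008.csv';

  -- add the check constraint and indexes after the load
  ALTER TABLE measurements_200802
    ADD CONSTRAINT measurements_200802_logdate_check
    CHECK (logdate >= DATE '2008-02-01' AND logdate < DATE '2008-03-01');
  ALTER TABLE measurements_200802 ADD PRIMARY KEY (id);  -- assumed id column
  CREATE INDEX measurements_200802_logdate_idx
    ON measurements_200802 (logdate);

  -- route new inserts on the parent to the child (a trigger works as well)
  CREATE RULE measurements_insert_200802 AS
    ON INSERT TO measurements
    WHERE (logdate >= DATE '2008-02-01' AND logdate < DATE '2008-03-01')
    DO INSTEAD INSERT INTO measurements_200802 VALUES (NEW.*);

With constraint_exclusion enabled, queries that filter on logdate can then
skip the other months' child tables entirely.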

--

Shane Ambler
pgSQL (at) Sheeky (dot) Biz

Get Sheeky @ http://Sheeky.Biz
