On Tue, Apr 29, 2008 at 05:19:59AM +0930, Shane Ambler wrote:
> John Rouillard wrote:
> >We can't do this as we are backfilling a couple of months of data
> >into tables with existing data.
> Is this a one off data loading of historic data or an ongoing thing?
Yes, it's a one-off bulk load of many days of data. The daily
loads also take 3 hours, but that is OK since we only do those
once a day, so we have 21 hours of slack in the schedule 8-).
> >>>The only indexes we have to drop are the ones on the primary keys
> >>> (there is one non-primary key index in the database as well).
> If this amount of data importing is ongoing then one thought I would try
> is partitioning (this could be worthwhile anyway with the amount of data
> you appear to have).
> Create an inherited table for the month being imported, load the data
> into it, then add the check constraints, indexes, and modify the
> rules/triggers to handle the inserts to the parent table.
Hmm, interesting idea; worth considering if we have to do this again
(I hope not).
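For the archives, the inheritance-partitioning approach Shane describes could be sketched roughly as follows. The table name, columns, and date range here are hypothetical, just to illustrate the load-then-constrain order:

```sql
-- Hypothetical parent table of daily data.
CREATE TABLE measurements (
    logdate  date NOT NULL,
    value    numeric
);

-- Child table for the month being back-filled; create it bare so the
-- bulk COPY runs without index or constraint maintenance per row.
CREATE TABLE measurements_2008_03 () INHERITS (measurements);

COPY measurements_2008_03 FROM '/path/to/march.csv' WITH CSV;

-- Add the check constraint and index only after the load, so they are
-- built once over the finished data rather than row by row.
ALTER TABLE measurements_2008_03
    ADD CONSTRAINT measurements_2008_03_check CHECK
        (logdate >= DATE '2008-03-01' AND logdate < DATE '2008-04-01');
CREATE INDEX measurements_2008_03_logdate
    ON measurements_2008_03 (logdate);

-- Redirect new inserts on the parent into the current child
-- (a simplified trigger; a rule would also work).
CREATE OR REPLACE FUNCTION measurements_insert() RETURNS trigger AS $$
BEGIN
    INSERT INTO measurements_2008_03 VALUES (NEW.*);
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_insert_trig
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert();
```

With `constraint_exclusion` enabled, the check constraint also lets the planner skip child tables whose date range can't match a query.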
Thanks for the reply.
603-643-9300 x 111