Re: optimising data load

From: John Taylor <postgres(at)jtresponse(dot)co(dot)uk>
To: "Patrick Hatcher" <PHatcher(at)macys(dot)com>
Cc: pgsql-novice(at)postgresql(dot)org
Subject: Re: optimising data load
Date: 2002-05-22 15:35:09
Message-ID: 02052216350900.03723@splash.hq.jtresponse.co.uk
Lists: pgsql-novice

On Wednesday 22 May 2002 16:29, Patrick Hatcher wrote:
> Dump the records from the other dbase to a text file and then use the COPY
> command for Pg. I update tables nightly with 400K+ records and it only
> takes 1-2 mins. You should drop and re-add your indexes and then do a
> vacuum analyze.
>
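
If I've understood the recipe correctly, it amounts to roughly this (the
table and index names here are made up):

    DROP INDEX orders_customer_idx;
    COPY orders FROM '/tmp/orders.txt';    -- tab-delimited dump from the other dbase
    CREATE INDEX orders_customer_idx ON orders (customer_id);
    VACUUM ANALYZE orders;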

I'm looking into that at the moment, and I'm getting some very variable results.
For some tables this is easy to do.

However, for some tables the data isn't in the right format, so I need to
run some queries to work out the right values to use when populating them.
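
For example, what I'm doing for those tables is roughly this (all table and
column names are invented for illustration):

    CREATE TABLE staging_items (code text, supplier_name text, price numeric);
    COPY staging_items FROM '/tmp/items.txt';

    -- resolve supplier names to ids while populating the real table
    INSERT INTO items (code, supplier_id, price)
    SELECT s.code, sup.id, s.price
    FROM staging_items s, suppliers sup
    WHERE sup.name = s.supplier_name;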

In this situation I'm not sure whether I should drop the indexes to make the
inserts faster, or keep them to make the selects faster.

Thanks
JohnT
