
Re: optimising data load

From: John Taylor <postgres(at)jtresponse(dot)co(dot)uk>
To: "Patrick Hatcher" <PHatcher(at)macys(dot)com>
Cc: pgsql-novice(at)postgresql(dot)org
Subject: Re: optimising data load
Date: 2002-05-22 15:35:09
Message-ID: 02052216350900.03723@splash.hq.jtresponse.co.uk
Lists: pgsql-novice
On Wednesday 22 May 2002 16:29, Patrick Hatcher wrote:
> Dump the records from the other dbase to a text file and then use the COPY
> command for Pg.  I update tables nightly with 400K+ records and it only
> takes 1-2 mins.  You should drop and re-add your indexes and then do a
> vacuum analyze
> 
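For anyone following along, that workflow looks roughly like this (the table, column, index names, and file path are placeholders, not from the original message):

```sql
-- Drop the index so COPY doesn't have to maintain it row by row
DROP INDEX items_sku_idx;

-- Bulk-load from the dumped text file (tab-delimited by default)
COPY items FROM '/tmp/items.txt';

-- Rebuild the index in one pass, then refresh planner statistics
CREATE INDEX items_sku_idx ON items (sku);
VACUUM ANALYZE items;
```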

I'm looking into that at the moment, but I'm getting very variable results.
For some tables this is easy to do.

However, for some tables the dumped data isn't in the right format, so I need to
run some queries to work out the right values to use when populating them.

In this situation I'm not sure whether I should drop the indexes to make the inserts faster,
or keep them to make the selects faster.
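One way to sidestep that trade-off is to COPY the raw dump into an unindexed staging table, do the reformatting with a single set-based INSERT ... SELECT, and only drop the indexes on the table being written to, keeping the ones the selects rely on. A rough sketch (all names are made up for illustration):

```sql
-- Load the raw dump into an unindexed staging table
CREATE TABLE items_staging (raw_sku text, raw_price text);
COPY items_staging FROM '/tmp/items.txt';

-- Drop the target table's index, but keep indexes on the lookup
-- table so the join below stays fast
DROP INDEX items_sku_idx;

-- One set-based insert that does the reformatting, instead of
-- per-row queries from the client
INSERT INTO items (sku, price)
SELECT s.raw_sku, s.raw_price::numeric
FROM items_staging s
JOIN skus k ON k.sku = s.raw_sku;

-- Rebuild, analyze, and clean up
CREATE INDEX items_sku_idx ON items (sku);
VACUUM ANALYZE items;
DROP TABLE items_staging;
```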


Thanks
JohnT

