
Re: optimising data load

From: John Taylor <postgres(at)jtresponse(dot)co(dot)uk>
To: Ron Johnson <ron(dot)l(dot)johnson(at)cox(dot)net>, PgSQL Novice ML <pgsql-novice(at)postgresql(dot)org>
Subject: Re: optimising data load
Date: 2002-05-23 09:10:02
Message-ID:
Lists: pgsql-novice
On Thursday 23 May 2002 08:22, you wrote:
> On Thu, 2002-05-23 at 02:01, John Taylor wrote:
> > On Thursday 23 May 2002 03:40, Ron Johnson wrote:
> > > On Wed, 2002-05-22 at 16:05, John Taylor wrote:
> > >
> > > > So, how would I merge the data from the view with the data to copy in ?
> > > 
> > > ???  Unless I am misunderstanding you:
> > 
> > I think you are misunderstanding me.
> > I have 300,000 rows to insert.
> > They each require a different query to obtain the correct data to insert.
> Oh, ok.  How about creating an intermediary, indexless table
> that the 300,000 records will be inserted into.  Then, after
> the intermediary table is populated, drop the index(es) on
> orderlines and INSERT INTO orderlines SELECT * FROM intermediary;

That sounds like a good idea.
Would it be quicker to COPY out from the intermediary, and then COPY in to the real one?
I guess I'll have to experiment.
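For reference, the intermediary-table idea from above might be sketched like this (the staging table name and the per-row step are illustrative, not the actual schema):

```sql
-- Build an indexless staging table with the same structure as orderlines,
-- but none of its data (LIMIT 0 copies only the column definitions).
CREATE TABLE orderlines_tmp AS SELECT * FROM orderlines LIMIT 0;

-- Run the 300,000 per-row queries, inserting each result into the
-- staging table instead of the live one:
--   INSERT INTO orderlines_tmp SELECT ... ;

-- Then move everything into the real table in a single statement,
-- and clean up the staging table.
INSERT INTO orderlines SELECT * FROM orderlines_tmp;
DROP TABLE orderlines_tmp;
```

Because the staging table has no indexes, the 300,000 individual inserts avoid per-row index maintenance; only the final bulk INSERT pays that cost.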

I have a feeling a lot of the problems may be due to the indexes.
I've been experimenting with a simpler table:
COPY in to the live table takes 5 minutes for 850,000 records;
COPY in to the updates table takes 2 hours!
It is the same data with 3 extra columns, but there are an additional 2 indexes.
I think it must be the indexes making the difference. I'm just checking now.
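One way to check whether the indexes are the bottleneck is to drop them before the bulk load and rebuild them afterwards (the index name and column below are placeholders, since the actual definitions aren't shown):

```sql
-- With the index gone, COPY skips per-row index maintenance.
DROP INDEX updates_some_idx;

COPY updates FROM '/tmp/updates.dat';

-- Rebuilding the index once at the end is usually far cheaper than
-- updating it incrementally for every one of the loaded rows.
CREATE INDEX updates_some_idx ON updates (some_column);
```

If the COPY time drops back toward the 5-minute figure with the extra indexes removed, that confirms the indexes are what is costing the 2 hours.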

Thanks for the ideas !


