
How do I bulk insert to a table without affecting read performance on that table?

From: growse <nabble(at)growse(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: How do I bulk insert to a table without affecting read performance on that table?
Date: 2008-01-25 23:27:05
Message-ID: 15099164.post@talk.nabble.com
Lists: pgsql-performance
Hi,

I've got a pg database, and a batch process that generates some metadata to
be inserted into one of the tables. Every 15 minutes or so, the batch script
recalculates the metadata (600,000 rows), dumps it to a file, and then does
a TRUNCATE of the table followed by a COPY to import that file back into it.
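
For reference, the import step boils down to something like this (the table
name and file path are illustrative):

    BEGIN;
    -- TRUNCATE takes an ACCESS EXCLUSIVE lock, so concurrent reads on
    -- the table block until this transaction commits.
    TRUNCATE TABLE metadata;
    COPY metadata FROM '/path/to/metadata.dump';
    COMMIT;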

The problem is that, whilst this process is happening, other queries against
this table time out. I've tried copying into a temp table first and then
doing an "INSERT INTO table (SELECT * FROM temp)", but the second statement
still takes a lot of time and causes a loss of performance.
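
The staging variant I tried looks roughly like this (again, names are
illustrative):

    BEGIN;
    CREATE TEMP TABLE metadata_tmp (LIKE metadata);
    -- Loading the temp table doesn't touch the live table at all ...
    COPY metadata_tmp FROM '/path/to/metadata.dump';
    TRUNCATE TABLE metadata;
    -- ... but rewriting the 600,000 rows here is still slow, and the
    -- live table stays locked for the duration.
    INSERT INTO metadata SELECT * FROM metadata_tmp;
    COMMIT;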

So, what's the best way to import my metadata without affecting the
performance of other queries?

Thanks,

Andrew 


