I've got a pg database, and a batch process that generates some metadata to
be inserted into one of the tables. Every 15 minutes or so, the batch script
re-calculates the metadata (600,000 rows), dumps it to a file, and then does
a TRUNCATE followed by a COPY to import that file back into the table.
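For reference, the refresh currently looks roughly like this, assuming it all
runs in one transaction (table and file names are made up for illustration):

    BEGIN;
    -- TRUNCATE takes an ACCESS EXCLUSIVE lock, so concurrent reads on
    -- the table block until COMMIT, and they time out while the COPY
    -- is reloading the 600,000 rows
    TRUNCATE metadata;
    COPY metadata FROM '/tmp/metadata.dump';
    COMMIT;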
The problem is that whilst this process is happening, other queries against
this table time out. I've tried COPYing into a temp table first and then
doing an "INSERT INTO table (SELECT * FROM temp)", but the second statement
still takes a long time and causes the same loss of performance.
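That variant looks something like this (again with made-up names; the
TRUNCATE is still needed to clear out the old rows before the insert):

    BEGIN;
    CREATE TEMP TABLE metadata_tmp (LIKE metadata);
    COPY metadata_tmp FROM '/tmp/metadata.dump';
    -- the TRUNCATE still takes ACCESS EXCLUSIVE, so readers are
    -- blocked for the whole duration of the INSERT ... SELECT too
    TRUNCATE metadata;
    INSERT INTO metadata SELECT * FROM metadata_tmp;
    COMMIT;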
So, what's the best way to import my metadata without it affecting the
performance of other queries?