On Jan 25, 2008 5:27 PM, growse <nabble(at)growse(dot)com> wrote:
> I've got a pg database, and a batch process that generates some metadata to
> be inserted into one of the tables. Every 15 minutes or so, the batch script
> recalculates the metadata (600,000 rows), dumps it to a file, and then does
> a TRUNCATE table followed by a COPY to import that file into the table.
> The problem is, that whilst this process is happening, other queries against
> this table time out. I've tried to copy into a temp table before doing an
> "INSERT INTO table (SELECT * FROM temp)", but the second statement still
> takes a lot of time and causes a loss of performance.
Can you import into another table and then swap the names?
alter table realtable rename to garbage;
alter table loadtable rename to realtable;
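
Something along these lines should work. This is only a sketch -- "metadata", "metadata_load", the index, the column and the file path are all placeholders for whatever the real schema and dump file look like:

  BEGIN;

  -- Build the fresh data in a separate table; readers keep using the
  -- current table while this runs.
  CREATE TABLE metadata_load (LIKE metadata INCLUDING DEFAULTS);
  COPY metadata_load FROM '/path/to/metadata.dump';

  -- Recreate whatever indexes the real table has before the swap, so
  -- the replacement is ready to serve queries immediately.
  CREATE INDEX metadata_load_idx ON metadata_load (id);

  -- The renames are quick catalog updates, so the exclusive lock is
  -- only held for an instant.
  ALTER TABLE metadata RENAME TO metadata_old;
  ALTER TABLE metadata_load RENAME TO metadata;

  COMMIT;

  DROP TABLE metadata_old;

One caveat: views, foreign keys and other dependencies follow the renamed table rather than its replacement, so the swap trick works best when nothing else references the table directly.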
> So, what's the best way to import my metadata without it affecting the
> performance of other queries?