I found several posts about INSERT/UPDATE performance in this group,
but none of them really answers the question I have...
I have a simple reference table WORD_COUNTS that contains the counts of
words that appear in word arrays stored in another table.
CREATE TABLE WORD_COUNTS
(
  word text NOT NULL,
  count integer,
  CONSTRAINT PK_WORD_COUNTS PRIMARY KEY (word)
);
I have some PL/pgSQL code in a stored procedure like

  FOR r IN select id, array_of_words from word_storage LOOP
  begin
    -- insert the missing words
    insert into WORD_COUNTS
           ( word, count )
         ( select d_word, 0
             from ( select distinct (r.array_of_words)[s.index] as d_word
                      from generate_series( 1, array_upper( r.array_of_words, 1 ) ) as s(index)
                  ) as distinct_words
            where d_word not in ( select word from WORD_COUNTS ) );
    -- update the counts
    update WORD_COUNTS
       set count = COALESCE( count, 0 ) + 1
     where word in ( select distinct (r.array_of_words)[s.index]
                       from generate_series( 1, array_upper( r.array_of_words, 1 ) ) as s(index) );
  exception when others then
    error_count := error_count + 1;
  end;
  record_count := record_count + 1;
  END LOOP;
This code runs extremely slowly: it takes about 10 minutes to process
10,000 records, and the word storage has more than 2 million records to
process.
Does anybody have any know-how about populating such reference
tables? What can be optimized in this situation?
Maybe the generate_series() calls used to unnest the arrays are where I
lose the performance?
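For context, the unnesting idiom in question looks like this in
isolation (using a literal array just for illustration):

  -- expand an array into one row per element via its indexes
  select ('{red,green,red}'::text[])[s.index] as word
    from generate_series( 1,
           array_upper( '{red,green,red}'::text[], 1 ) ) as s(index);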
Are set-based updates/inserts more efficient than single
inserts/updates run in smaller loops?
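To make that question concrete, the set-based version I have in mind
would be a single statement over the whole storage table, roughly like
this (only a sketch: word_storage stands in for the actual storage
table, it assumes WORD_COUNTS starts out empty, and the reference to r
inside generate_series() requires a server version that allows it):

  -- one pass: unnest every array, count the occurrences, insert once
  insert into WORD_COUNTS ( word, count )
  select w.word, count(*)
    from ( select (r.array_of_words)[s.index] as word
             from word_storage r,
                  generate_series( 1, array_upper( r.array_of_words, 1 ) ) as s(index)
         ) as w
   group by w.word;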
Thanks for your help,