I have a question about bulk checking and inserting into a table, and
how best to use an index for performance.
The data I have to work with is a monthly CD-ROM CSV data dump of
300,000 property owners from one area/shire.
So every CD has 300,000-odd lines, each line of data filling one row
in the table.
Beginning with the first CD, each line should require one SELECT and
one INSERT, as it will be the first property with this address.
The SELECT uses fields like 'street' and 'suburb' to check for an
existing record with that address, so I have built an index on those fields.
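In rough SQL, the setup is something like this (the table name, key,
and owner column are placeholders I've made up for the sketch):

-- One row per line of the CD data.
CREATE TABLE owners (
    id      serial PRIMARY KEY,
    owner   text,
    street  text,
    suburb  text
);

-- Multi-column index backing the duplicate check.
CREATE INDEX owners_street_suburb_idx ON owners (street, suburb);

-- The per-line check for an existing property at this address.
SELECT id FROM owners
 WHERE street = 'Main St' AND suburb = 'Springfield';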
My question is: does each INSERT rebuild the index on the 'street' and
'suburb' fields? I believe it does, but I'm asking to be sure.
If this is the case, I guess performance will suffer when I have, say,
200,000 rows in the table.
Would it be like:
a) Use index to search on 'street' and 'suburb'
b) No result? Insert new record
c) Rebuild index on 'street' and 'suburb'
for each row?
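For what it's worth, steps a) and b) can be folded into a single
statement; a rough sketch, using the same placeholder names as above:

-- Insert the owner only if no row with this street/suburb exists yet;
-- the NOT EXISTS probe is the same indexed lookup as the SELECT above.
INSERT INTO owners (owner, street, suburb)
SELECT 'J. Smith', 'Main St', 'Springfield'
 WHERE NOT EXISTS (
       SELECT 1 FROM owners
        WHERE street = 'Main St' AND suburb = 'Springfield');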
Would this mean that after 200,000 rows, each INSERT would require an
index covering hundreds of thousands of rows to be rebuilt?
So far I believe my only options are to use either an index or a
sequential scan, and see which is faster.
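If it comes to that, I suppose I can time the check both ways with
EXPLAIN ANALYZE, e.g. (same placeholder names again):

-- Plan and actual timing with the index available.
EXPLAIN ANALYZE
SELECT id FROM owners
 WHERE street = 'Main St' AND suburb = 'Springfield';

-- Force a sequential scan for this session, then repeat the query.
SET enable_indexscan = off;
EXPLAIN ANALYZE
SELECT id FROM owners
 WHERE street = 'Main St' AND suburb = 'Springfield';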
A minute for your thoughts and/or suggestions would be great.