Re: Bulk Insert and Index use

From: "Gregory S(dot) Williamson" <gsw(at)globexplorer(dot)com>
To: <tech(at)wildcash(dot)com>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Bulk Insert and Index use
Date: 2004-08-11 02:27:48
Message-ID: 71E37EF6B7DCC1499CEA0316A256832801057DE3@loki.wc.globexplorer.net
Lists: pgsql-performance


Usually any bulk load is faster with the indexes dropped and then rebuilt ... failing that (like you really need the indexes while loading, say into a "hot" table), be sure to wrap all the SQL into one transaction (BEGIN;...COMMIT;) ... if any data fails it all fails, which is usually easier to deal with than partial data loads, and it is *much* faster than having each insert be its own transaction.
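As a sketch, using the 'property' table and the 'street'/'suburb' fields from the original message (the index name here is hypothetical):

```sql
-- Drop the index before the bulk load, load everything inside one
-- transaction, then rebuild the index once over the full table.
DROP INDEX property_street_suburb_idx;

BEGIN;
-- ... the 300,000-odd INSERT statements run here ...
INSERT INTO property (street, suburb) VALUES ('1 Main St', 'Carlton');
INSERT INTO property (street, suburb) VALUES ('2 Main St', 'Carlton');
COMMIT;

-- Recreate the index in a single pass.
CREATE INDEX property_street_suburb_idx ON property (street, suburb);
```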

HTH,

Greg Williamson
DBA
GlobeXplorer LLC

-----Original Message-----
From: Rudi Starcevic [mailto:tech(at)wildcash(dot)com]
Sent: Tue 8/10/2004 4:04 PM
To: pgsql-performance(at)postgresql(dot)org
Cc:
Subject: [PERFORM] Bulk Insert and Index use
Hi,

I have a question about bulk checking and inserting into a table, and
how best to use an index for performance.

The data I have to work with is a monthly CD Rom csv data dump of
300,000 property owners from one area/shire.

So every CD has 300,000-odd lines, each of which becomes a row in the
'property' table.

Beginning with the first CD each line should require one SELECT and
one INSERT as it will be the first property with this address.

The SELECT uses fields like 'street' and 'suburb', to check for an
existing property,
so I have built an index on those fields.
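Something like the following, I imagine (the index name is made up, and the address values are just placeholders):

```sql
-- Index on the fields used to check for an existing property.
CREATE INDEX property_street_suburb_idx ON property (street, suburb);

-- One SELECT per CSV line to check for an existing property ...
SELECT 1
FROM property
WHERE street = '1 Main St'
  AND suburb = 'Carlton';

-- ... and if no row comes back, one INSERT for the new record.
INSERT INTO property (street, suburb) VALUES ('1 Main St', 'Carlton');
```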

My question is does each INSERT rebuild the index on the 'street' and
'suburb' fields?
I believe it does but I'm asking to be sure.

If this is the case I guess performance will suffer when I have, say,
200,000 rows in the table.

Would it be like:

a) Use index to search on 'street' and 'suburb'
b) No result? Insert new record
c) Rebuild index on 'street' and 'suburb'

for each row?
Would this mean that after 200,000 rows each INSERT would require
an index of hundreds of thousands of rows to be re-indexed?

So far I believe my only options are to use either an index
or a sequential scan and see which is faster.

A minute for your thoughts and/or suggestions would be great.

Thanks.
Regards,
Rudi.

