I'm not sure if this is the correct list to ask this question, so I
apologize ahead of time if it is misguided.
I'm using DBD::Pg to insert large amounts of data into a Postgres
installation. We will occasionally (once every few weeks, perhaps)
get new chip layouts that need to be added. Each layout may have up to
250k spots, which go into a Spot table.
My current code is glacially slow - on an otherwise zippy
dual-processor P4, it seems this insert will take 3 days.
I've gotten a bit of feedback from the Perl dbi-users list:
1) Transactions: My current approach is to do this inside a single
transaction, but apparently the write-ahead logging will not handle
250k logged inserts well.
Is this true, and should I commit after every 20 or so spots instead?
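For concreteness, the batched-commit version I have in mind would look
roughly like this (the connection details, table, and column names below
are placeholders, not my real schema):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Placeholder connection parameters -- substitute your own.
my $dbh = DBI->connect('dbi:Pg:dbname=chips', 'user', 'password',
                       { AutoCommit => 0, RaiseError => 1 });

# Placeholder: the real code builds @spots from the chip layout file.
my @spots = ();

my $sth = $dbh->prepare(
    'INSERT INTO spot (layout_id, x, y, value) VALUES (?, ?, ?, ?)');

my $count = 0;
for my $spot (@spots) {
    $sth->execute(@$spot);
    # Commit every 20 inserts rather than holding one huge transaction.
    $dbh->commit if ++$count % 20 == 0;
}
$dbh->commit;        # flush the final partial batch
$dbh->disconnect;
```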
2) Indices: apparently every insert updates the indices on the
table. From my reading of the documentation, the indices aren't
updated inside a transaction, but instead at the end.
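If my reading is wrong and the indices really are maintained on every
insert, the workaround I've seen suggested is to drop them for the bulk
load and rebuild afterwards. A sketch (index and column names are made
up, and the connection details are placeholders):

```perl
use strict;
use warnings;
use DBI;

# Placeholder connection parameters -- substitute your own.
my $dbh = DBI->connect('dbi:Pg:dbname=chips', 'user', 'password',
                       { AutoCommit => 0, RaiseError => 1 });

# Drop the index so the 250k inserts don't pay per-row index maintenance.
$dbh->do('DROP INDEX spot_layout_idx');

# ... run the bulk inserts here ...

# Rebuild the index once, after the data is loaded.
$dbh->do('CREATE INDEX spot_layout_idx ON spot (layout_id)');
$dbh->commit;
```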
3) COPY: I could use COPY, but apparently triggers are not, well,
triggered under COPY.
Is this true? I have datestamps on my inserts for audit trails.
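For what it's worth, the COPY path I was considering would look roughly
like the sketch below. (pg_putcopydata/pg_putcopyend are the interface
in recent DBD::Pg releases; older versions use $dbh->func(..., 'putline')
instead. Schema and connection details are placeholders.)

```perl
use strict;
use warnings;
use DBI;

# Placeholder connection parameters -- substitute your own.
my $dbh = DBI->connect('dbi:Pg:dbname=chips', 'user', 'password',
                       { AutoCommit => 0, RaiseError => 1 });

# Placeholder: the real code builds @spots from the chip layout file.
my @spots = ();

$dbh->do('COPY spot (layout_id, x, y, value) FROM STDIN');
for my $spot (@spots) {
    # COPY's text format: tab-separated columns, newline-terminated rows.
    $dbh->pg_putcopydata(join("\t", @$spot) . "\n");
}
$dbh->pg_putcopyend();
$dbh->commit;
```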
Thanks ahead of time for any help,
pgsql-interfaces