From: Christopher Browne <cbbrowne(at)acm(dot)org>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Need for speed
Date: 2005-08-19 12:00:26
Message-ID: m3oe7ugmqt.fsf@mobile.int.cbbrowne.com
Lists: pgsql-performance

>> Ulrich Wisser wrote:
>> >
>> > One of our services is click counting for online advertising. We do
>> > this by importing Apache log files every five minutes. This results in a
>> > lot of insert and delete statements.
> ...
>> If you are doing mostly inserting, make sure you are in a transaction,
>
> Well, yes, but you may need to make sure that a single transaction
> doesn't have too many inserts in it. I was having a performance
> problem when doing transactions with a huge number of inserts (tens
> of thousands), and I solved the problem by putting a simple counter
> in the loop (in the Java import code, that is) and doing a commit
> every 100 or so inserts.

Are you sure that was an issue with PostgreSQL?

I have certainly observed that issue with Oracle, but NOT with
PostgreSQL.

I have commonly done data loads of 50K rows at a time, COMMITting at
each such point out of "programming paranoia": if some of the data
failed to load and had to be retried, I'd rather have less of it
fail...

It seems more likely that the issue was on the Java side: the data
being loaded may have bloated JVM memory usage, and the actions taken
at COMMIT time may have kept the Java-side memory footprint down.
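
For illustration, here is a minimal JDBC sketch of that commit-every-N
loop; the table, column, connection URL, and credentials are all
invented here, not taken from the thread:

import java.sql.*;
import java.util.Iterator;
import java.util.List;

public class ClickImporter {
    // Hypothetical schema: a "clicks" table with a single "url" column.
    public static void importClicks(List urls) throws SQLException {
        Connection conn = DriverManager.getConnection(
            "jdbc:postgresql://localhost/clicktrack", "importer", "secret");
        try {
            conn.setAutoCommit(false); // batch rows into transactions
            PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO clicks (url) VALUES (?)");
            int count = 0;
            for (Iterator it = urls.iterator(); it.hasNext(); ) {
                ps.setString(1, (String) it.next());
                ps.executeUpdate();
                // COMMIT every 100 rows: caps the work lost if a later
                // row fails, and lets per-transaction state be released.
                if (++count % 100 == 0) {
                    conn.commit();
                }
            }
            conn.commit(); // flush the final partial batch
            ps.close();
        } finally {
            conn.close();
        }
    }
}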
--
(reverse (concatenate 'string "moc.liamg" "@" "enworbbc"))
http://cbbrowne.com/info/
If we were meant to fly, we wouldn't keep losing our luggage.
