Re: performance for high-volume log insertion

From: James Mansion <james(at)mansionfamily(dot)plus(dot)com>
To: Stephen Frost <sfrost(at)snowman(dot)net>
Cc: david(at)lang(dot)hm, pgsql-performance(at)postgresql(dot)org
Subject: Re: performance for high-volume log insertion
Date: 2009-04-22 05:26:07
Message-ID: 49EEAA6F.9030003@mansionfamily.plus.com
Lists: pgsql-performance

Stephen Frost wrote:
> apart again. That's where the performance is going to be improved by
> going that route, not so much in eliminating the planning.
>
Fine. But like I said, I'd suggest measuring the fractional improvement
from this when sending multi-row inserts before writing something
complex. I think the big win will be doing multi-row inserts at all. If
you are going to prepare, then you'll need a collection of different
prepared statements for different batch sizes (say 1, 2, 3, 4, 5, 10,
20, 50) and things will get complicated. A multi-row insert with unions
and dynamic SQL is actually rather universal.
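
A minimal sketch of that approach, assuming Python with psycopg2 and a
hypothetical log_lines(ts, level, message) table; one dynamically built
statement covers any batch size, so no family of prepared statements is
needed:

    import psycopg2

    def insert_log_batch(conn, rows):
        """Insert a list of (ts, level, message) tuples in one round trip."""
        if not rows:
            return
        # SELECT ... UNION ALL SELECT ... is understood by older servers
        # and by most other DBMSs, unlike a multi-row VALUES list.
        selects = " UNION ALL ".join(["SELECT %s, %s, %s"] * len(rows))
        sql = "INSERT INTO log_lines (ts, level, message) " + selects
        params = [v for row in rows for v in row]  # flatten for the driver
        with conn.cursor() as cur:
            cur.execute(sql, params)
        conn.commit()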

Personally I'd implement that first (and it should be easy to do across
multiple dbms types), then return to it with a more complex client side
using prepared statements and the like if (and only if) that proved
necessary AND the performance improvement were measurably worthwhile,
given the indexing and storage overheads.

There is no point optimising away the CPU cost of the simple parse if
you are just going to get hit with a lot of latency from round trips,
and forming a generic multi-insert SQL string is much, much easier to
get working as a first step. Server CPU isn't a bottleneck all that
often - and with something as simple as this you'll hit IO performance
bottlenecks rather easily.
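
To put rough numbers on it (illustrative figures only): at, say, a
0.5 ms network round trip, 100,000 single-row inserts spend around 50
seconds waiting on the network alone, while batching 100 rows per
statement cuts that to about half a second - far more than you could
recover by shaving parse time.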

James
