From: david(at)lang(dot)hm
To: pgsql-performance(at)postgresql(dot)org
Subject: performance for high-volume log insertion
Date: 2009-04-20 21:53:21
Message-ID: alpine.DEB.1.10.0904201442080.28211@asgard.lang.hm

I am working with the rsyslog developers to improve its performance when
inserting log messages into databases.

currently they have a postgres interface that works like all the other
ones: rsyslog formats an insert statement and passes it to the interface
module, which sends it to postgres (yes, each log message as a separate
transaction).

the big win is going to be changing the core of rsyslog so that it can
process multiple messages at a time (bundling them into a single
transaction).

but then we run into confusion.

off the top of my head, I know of several different ways to get the data
into postgres:

1. begin; insert; insert;...;end

2. insert into table values (),(),(),()

3. copy from stdin
(how do you tell it how many records to read from stdin, or that you
have given it everything, without disconnecting? see the sketch below)

4. copy from stdin in binary mode

and each of the options above can be done with prepared statements, stored
procedures, or functions.
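
to make those concrete, here's roughly what I mean by each (sketches
only, against a made-up table log(ts, host, message), untested):

option 1, one transaction around individual inserts:

  begin;
  insert into log (ts, host, message) values (now(), 'asgard', 'msg 1');
  insert into log (ts, host, message) values (now(), 'asgard', 'msg 2');
  commit;

option 2, a single multi-row insert:

  insert into log (ts, host, message) values
    (now(), 'asgard', 'msg 1'),
    (now(), 'asgard', 'msg 2');

option 3, copy (text format, columns separated by tabs):

  copy log (ts, host, message) from stdin;
  2009-04-20 14:42:00-07	asgard	msg 1
  2009-04-20 14:42:00-07	asgard	msg 2
  \.

as far as I can tell that answers my own question above: you don't tell
it a record count at all, you just end the data stream with "\." on a
line by itself (or, from libpq, by calling PQputCopyEnd()), and the
connection stays usable afterwards. option 4 is the same flow with
"with binary" and the binary file format instead of the text format.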

I know that using procedures or functions can let you do fancy things
like inserting the row(s) into the appropriate section of a partitioned
table.
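
for example, something like this trigger function (table and column
names made up, assuming date-based partitions done with inheritance):

  create or replace function log_insert_trigger() returns trigger as $$
  begin
    if new.ts >= date '2009-04-01' and new.ts < date '2009-05-01' then
      insert into log_2009_04 values (new.*);
    else
      insert into log_overflow values (new.*);
    end if;
    return null;  -- the row has been routed, skip the parent table
  end;
  $$ language plpgsql;

  create trigger log_route before insert on log
    for each row execute procedure log_insert_trigger();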

other than this sort of capability, what sort of differences should be
expected between the various approaches (including prepared statements
vs unprepared)?
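
(by "prepared" I mean either the SQL-level version below, or the
equivalent PQprepare()/PQexecPrepared() calls from libpq, which avoid
re-parsing and re-planning the statement for every message:

  prepare log_insert (timestamptz, text, text) as
    insert into log (ts, host, message) values ($1, $2, $3);

  execute log_insert (now(), 'asgard', 'test message');

again just a sketch with made-up names.)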

since the changes that rsyslog is making will affect all the other
database interfaces as well, any comments about big wins or things to
avoid for other databases would be appreciated.

David Lang
