Re: performance for high-volume log insertion

From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: david(at)lang(dot)hm
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: performance for high-volume log insertion
Date: 2009-04-22 15:19:06
Message-ID: 1240413547.3978.48.camel@ebony.fara.com
Lists: pgsql-performance


On Mon, 2009-04-20 at 14:53 -0700, david(at)lang(dot)hm wrote:

> the big win is going to be in changing the core of rsyslog so that it can
> process multiple messages at a time (bundling them into a single
> transaction)

That isn't necessarily the single "big win".

The per-transaction overhead comes from the commit delay (the wait for
the WAL flush at each commit), which can be removed by executing

SET synchronous_commit = off;

after connecting to PostgreSQL 8.3 or later.

You won't need to do much else. This can also be set as the default for
a PostgreSQL user without changing the rsyslog source code at all, so it
should be easy enough to test.
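
For example, a minimal sketch of the per-user setting, assuming the
rsyslog connection uses a role named "rsyslog" (the role name is
hypothetical here):

-- make asynchronous commit the default for this role's sessions
ALTER ROLE rsyslog SET synchronous_commit = off;

-- verify from a new session opened as that role
SHOW synchronous_commit;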

And this type of application is *exactly* what it was designed for.

Some other speedups should also be possible, but this is easiest.

I would guess that batching inserts will be a bigger win than simply
using prepared statements, because batching reduces the number of
network round trips to a centralised log server. Prepared statements
may look disproportionately good in tests simply because most people
will test against a local database.
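
As a rough sketch of the kind of batching meant here, bundling several
log rows into one transaction (the table name and columns below are
hypothetical):

BEGIN;
-- each INSERT still costs a round trip, but there is only one commit
INSERT INTO syslog (received_at, facility, msg)
    VALUES (now(), 'daemon', 'first message');
INSERT INTO syslog (received_at, facility, msg)
    VALUES (now(), 'daemon', 'second message');
COMMIT;

A multi-row VALUES list (one INSERT carrying many rows) goes further and
also cuts the round trips themselves.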

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support
