From: Stephen Frost <sfrost(at)snowman(dot)net>
To: david(at)lang(dot)hm
Cc: Greg Smith <gsmith(at)gregsmith(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: performance for high-volume log insertion
Date: 2009-04-21 06:50:59
Message-ID: 20090421065059.GX8123@tamriel.snowman.net
Lists: pgsql-performance

David,

* david(at)lang(dot)hm (david(at)lang(dot)hm) wrote:
> is this as simple as creating a database and doing an explain on each of
> these? or do I need to actually measure the time (at which point the
> specific hardware and tuning settings become an issue again)

No, you need to measure the time. An explain isn't going to tell you
much. However, I think the point here is that if you see a 10%
performance improvement on some given hardware for a particular test,
then chances are pretty good most people will see a performance
benefit. Some more, some less, but it's unlikely anyone will have worse
performance for it. There are some edge cases where a prepared
statement can reduce performance, but that's almost always on SELECT
queries; I can't think of a reason off-hand why it'd ever be slower for
INSERTs unless you're already doing things you shouldn't be if you care
about performance (like doing a join against some other table with each
insert..).
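
For illustration, a prepared insert at the SQL level looks something
like this (the log_entries table and its columns here are made up):

    -- Parsed and planned once, at PREPARE time
    PREPARE log_insert (timestamptz, text, text) AS
        INSERT INTO log_entries (ts, host, message)
        VALUES ($1, $2, $3);

    -- Each EXECUTE just binds parameters and runs the stored plan
    EXECUTE log_insert (now(), 'web1', 'connection accepted');
    EXECUTE log_insert (now(), 'web2', 'connection closed');

Most client libraries do the equivalent through the protocol-level
prepare/bind/execute calls rather than the SQL PREPARE command, but
the effect is the same: you pay the parse/plan cost once instead of
on every insert.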

Additionally, there's really no way for us to know what an acceptable
performance improvement is for you to justify the added code maintenance
and whatnot for your project. If you're really just looking for the
low-hanging fruit, then batch your inserts into transactions and go from
there.
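
As a rough sketch (same made-up log_entries table as above), batching
just means wrapping a pile of inserts in one transaction:

    BEGIN;
    INSERT INTO log_entries (ts, host, message)
        VALUES (now(), 'web1', 'msg 1');
    INSERT INTO log_entries (ts, host, message)
        VALUES (now(), 'web1', 'msg 2');
    -- ...hundreds or thousands more rows...
    COMMIT;

You pay for one WAL flush at COMMIT instead of one per row. The
trade-off is how many log entries you're willing to lose if the
client dies mid-batch.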

Thanks,

Stephen
