Problems with inconsistent query performance.

From: Matthew Schumacher <matt(dot)s(at)aptalaska(dot)net>
To: pgsql-performance(at)postgresql(dot)org
Subject: Problems with inconsistent query performance.
Date: 2006-09-27 18:37:22
Message-ID: 451AC4E2.90106@aptalaska.net
Lists: pgsql-performance

List,

I posted a little about this a while back to the general list but never
really got anywhere with it, so I'll try again, this time with a little
more detail, and hopefully someone can point me in the right direction.

Here is the problem: I have a procedure that is called 100k times a day.
Most of the time it's screaming fast, but other times it takes a few
seconds. When it lags, my machine can get behind, which causes other
problems, so I'm trying to figure out why there is such a huge delta in
performance with this proc.

The proc is pretty large (due to the number of vars), so I will summarize
it here:

==========================================================================
CREATE FUNCTION acctmessage( <lots of accounting columns> ) RETURNS void AS $$
BEGIN
    INSERT INTO tmpaccounting_tab ( ... ) VALUES ( ... );

    IF _acctType = 'start' THEN
        BEGIN
            INSERT INTO radutmp_tab ( ... ) VALUES ( ... );
        EXCEPTION WHEN UNIQUE_VIOLATION THEN
            NULL;
        END;
    ELSIF _acctType = 'stop' THEN
        UPDATE radutmp_tab SET ... WHERE sessionId = _sessionId
            AND userName = _userName;
        IF NOT FOUND THEN
            INSERT INTO radutmp_tab ( ... ) VALUES ( ... );
        END IF;
    END IF;
END;
$$
LANGUAGE plpgsql;
==========================================================================

So in a nutshell: when an accounting record comes in, put it in
tmpaccounting_tab, then insert into or update radutmp_tab based on
what kind of record it is. If the message is a start message and a
duplicate, drop it; if the message is a stop message and we don't have
the matching start, insert it.
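
In case it helps to have something runnable, below is a trimmed-down,
self-contained sketch of the same pattern. The table layouts, column names,
and stopTime handling are made up for illustration (the real tables carry
far more accounting columns), but the control flow matches the proc above:

==========================================================================
-- Illustrative schema only; the real tables are much wider.
CREATE TABLE tmpaccounting_tab (
    acctType  text,
    sessionId text,
    userName  text,
    acctTime  timestamptz DEFAULT now()
);

CREATE TABLE radutmp_tab (
    sessionId text,
    userName  text,
    stopTime  timestamptz,
    PRIMARY KEY (sessionId, userName)
);

CREATE OR REPLACE FUNCTION acctmessage(_acctType text,
                                       _sessionId text,
                                       _userName  text)
RETURNS void AS $$
BEGIN
    -- Every record lands in the unindexed staging table first.
    INSERT INTO tmpaccounting_tab (acctType, sessionId, userName)
    VALUES (_acctType, _sessionId, _userName);

    IF _acctType = 'start' THEN
        BEGIN
            INSERT INTO radutmp_tab (sessionId, userName)
            VALUES (_sessionId, _userName);
        EXCEPTION WHEN UNIQUE_VIOLATION THEN
            NULL;   -- duplicate start record: drop it silently
        END;
    ELSIF _acctType = 'stop' THEN
        UPDATE radutmp_tab SET stopTime = now()
         WHERE sessionId = _sessionId AND userName = _userName;
        IF NOT FOUND THEN
            -- stop with no matching start: insert it anyway
            INSERT INTO radutmp_tab (sessionId, userName, stopTime)
            VALUES (_sessionId, _userName, now());
        END IF;
    END IF;
END;
$$ LANGUAGE plpgsql;
==========================================================================

and a call against that sketch would look like:
select acctmessage('stop', 'session-1', 'someuser');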

The tmpaccounting_tab table doesn't have any indexes and gets flushed to
the accounting_tab table nightly, so insert performance should be very
good: the table is kept small (compared to accounting_tab) and has no
index overhead. The radutmp_tab is also kept small, as completed sessions
are flushed to another table nightly, but I do keep an index on sessionId
and userName so the update isn't slow.
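
By that I mean a plain composite index along these lines (the index name
here is made up, and it could just as well be two single-column indexes):

CREATE INDEX radutmp_session_user_idx ON radutmp_tab (sessionId, userName);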

Now that you have the layout, here is the problem: I log whenever a query
takes more than 250 ms, and I have logged this query:

duration: 3549.229 ms statement: select acctMessage( 'stop',
'username', 'address', 'time', 'session', 'port', 'address', 'bytes',
'bytes', 0, 0, 1, 'reason', '', '', '', 'proto', 'service', 'info')
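
(For reference, a logging threshold like that is normally set with
log_min_duration_statement in postgresql.conf, with the value in
milliseconds and -1 disabling it:

log_min_duration_statement = 250   # log any statement running longer than 250 ms
)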

But when I run an explain analyze it is very fast:

                                     QUERY PLAN
--------------------------------------------------------------------------------------
 Result  (cost=0.00..0.03 rows=1 width=0) (actual time=6.812..6.813 rows=1 loops=1)
 Total runtime: 6.888 ms

So the question is: why, on a relatively simple proc, am I getting a query
performance delta between 3549 ms and 7 ms?

Here are some values from my postgresql.conf to look at:

shared_buffers = 60000                  # min 16 or max_connections*2, 8KB each
temp_buffers = 5000                     # min 100, 8KB each
#max_prepared_transactions = 5          # can be 0 or more
# note: increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
work_mem = 131072                       # min 64, size in KB
maintenance_work_mem = 262144           # min 1024, size in KB
max_stack_depth = 2048                  # min 100, size in KB
effective_cache_size = 65536            # typically 8KB each
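
(For reference, with 8 KB pages those settings work out to roughly 469 MB of
shared_buffers, 39 MB of temp_buffers, 128 MB of work_mem per sort/hash,
256 MB of maintenance_work_mem, and a 512 MB effective_cache_size.)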

Thanks for any help you can give,
schu
