From: Heikki Linnakangas <heikki@enterprisedb.com>
To: Vladimir Stankovic <V.Stankovic@city.ac.uk>
Cc: PostgreSQL Performance <pgsql-performance@postgresql.org>
Subject: Re: Variable (degrading) performance
Date: 2007-06-11 17:51:43
Message-ID: 466D8BAF.604@enterprisedb.com
Lists: pgsql-performance

Vladimir Stankovic wrote:
> I'm running write-intensive, TPC-C like tests. The workload consists of 
> 150 to 200 thousand transactions. The performance varies dramatically, 
> between 5 and more than 9 hours (I don't have the exact figure for the 
> longest experiment). Initially the server is relatively fast. It 
> finishes the first batch of 50k transactions in an hour. This is 
> probably due to the fact that the database is RAM-resident during this 
> interval. As soon as the database grows bigger than RAM, the 
> performance, not surprisingly, degrades because of the slow disks.
> My problem is that the performance is rather variable and, to me, 
> non-deterministic. A 150k test can finish in approx. 3h30min, but 
> conversely it can take more than 5h to complete.
> Preferably I would like to see *steady-state* performance (where my 
> interpretation of the steady-state is that the average 
> throughput/response time does not change over time). Is steady state 
> achievable despite MVCC and the inherent non-determinism between 
> experiments? What could be the reasons for the variable performance?

Steadiness is relative; you'll never achieve perfectly steady 
performance where every transaction takes exactly X milliseconds. That 
said, PostgreSQL is by nature not as steady as many other DBMSs, 
because of the need to vacuum. Another significant source of 
unsteadiness is checkpoints, though they're not as bad with fsync=off, 
which you're running.
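For illustration, the main checkpoint knobs on 8.1 live in 
postgresql.conf; the values below are only a sketch to experiment with, 
not tuned recommendations for your box:

  # postgresql.conf -- illustrative values, not recommendations
  checkpoint_segments = 16   # WAL segments between checkpoints (default 3)
  checkpoint_timeout = 600   # max seconds between checkpoints (default 300)
  bgwriter_delay = 200       # ms between bgwriter rounds (the default)

Spacing checkpoints further apart, and letting the background writer 
trickle dirty pages out between them, tends to flatten the I/O spikes 
you'd otherwise see at each checkpoint.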

I'd suggest using vacuum_cost_delay to throttle vacuums so that they 
don't disturb other transactions as much. You might also want to set up 
manual vacuums for the bigger tables instead of relying on autovacuum, 
because until the recent changes in CVS HEAD, autovacuum can only vacuum 
one table at a time; while it's vacuuming a big table, the smaller, 
heavily updated tables are neglected.
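Something along these lines (the numbers, database name and table name 
are purely illustrative; tune the delays to your I/O capacity):

  # postgresql.conf -- throttle (auto)vacuum I/O
  vacuum_cost_delay = 20            # ms to sleep each time the cost limit is hit
  autovacuum_vacuum_cost_delay = 20 # same throttling for autovacuum

plus a scheduled manual vacuum for the large tables, which leaves 
autovacuum free to keep up with the small, hot ones:

  # crontab entry -- "orders_history" is a hypothetical table name
  0 3 * * * psql -d tpcc -c 'VACUUM ANALYZE orders_history;'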

> The database server version is 8.1.5 running on Fedora Core 6.

How about upgrading to 8.2? You might also want to experiment with CVS 
HEAD to get the autovacuum improvements, as well as a bunch of other 
performance improvements.

-- 
   Heikki Linnakangas
   EnterpriseDB   http://www.enterprisedb.com
