
Re: Variable (degrading) performance

From: Heikki Linnakangas <heikki(at)enterprisedb(dot)com>
To: Vladimir Stankovic <V(dot)Stankovic(at)city(dot)ac(dot)uk>
Cc: PostgreSQL Performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Variable (degrading) performance
Date: 2007-06-12 18:20:51
Message-ID: 466EE403.5080200@enterprisedb.com
Lists: pgsql-performance
Vladimir Stankovic wrote:
> What I am hoping to see is NOT the same value for all the executions of 
> the same type of transaction (after some transient period). Instead, I'd 
> like to see that if I take an appropriately sized set of transactions, I 
> will see at least steady growth in average transaction times, if not 
> exactly the same average. Each chunk would possibly include a sudden 
> performance drop due to the necessary vacuum and checkpoints. The 
> performance might be influenced by changes in the data set too.
> I am unhappy about the fact that the durations of experiments can differ 
> by as much as 30% (bearing in mind that they are not exactly the same, 
> due to the non-determinism on the client side). I would like to eliminate 
> this variability. Are my expectations reasonable? What could be the 
> cause(s) of this variability?

You should see that if you define your "chunk" to be long enough. Long 
enough is probably hours, not minutes or seconds. As I said earlier, 
checkpoints and vacuum are a major source of variability.
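
For example, here is a minimal sketch (not from the original thread) of 
the kind of chunked averaging meant above. It assumes a hypothetical CSV 
log of "epoch_seconds,latency_ms" pairs, one line per transaction, which 
your benchmark client would have to produce itself:

# A minimal sketch: bin per-transaction latencies into hour-long
# chunks and compare the chunk averages. The CSV log format
# ("epoch_seconds,latency_ms", one line per transaction) is assumed
# for illustration, not something produced by PostgreSQL itself.
import csv
from collections import defaultdict

CHUNK_SECONDS = 3600  # one hour per chunk

sums = defaultdict(float)   # chunk index -> total latency
counts = defaultdict(int)   # chunk index -> number of transactions

with open("latencies.csv") as f:
    for epoch, latency in csv.reader(f):
        chunk = int(float(epoch)) // CHUNK_SECONDS
        sums[chunk] += float(latency)
        counts[chunk] += 1

for chunk in sorted(sums):
    avg = sums[chunk] / counts[chunk]
    print("chunk %d: %d txns, avg %.1f ms" % (chunk, counts[chunk], avg))

With hour-long chunks, a single checkpoint or vacuum run is averaged over 
many thousands of transactions, so the chunk averages should be far 
steadier than the individual transaction times.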

-- 
   Heikki Linnakangas
   EnterpriseDB   http://www.enterprisedb.com

