
Re: how much postgres can scale up?

From: Greg Smith <greg(at)2ndQuadrant(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: how much postgres can scale up?
Date: 2011-06-10 16:49:38
Message-ID: 4DF24B22.60209@2ndQuadrant.com
Lists: pgsql-performance
On 06/10/2011 07:29 AM, Anibal David Acosta wrote:
> With 1 client connected, postgres does 180 executions per second.
> With 2 clients connected, postgres does 110 executions per second each.
> With 3 clients connected, postgres does 90 executions per second each.
>
> Finally, with 6 clients connected, postgres does 60 executions per second
> each (360 executions per second in total).
>
> While testing, I monitored disk, memory and CPU and did not find any
> overload.
>
> I know that with this information you can figure some things out, but
> under normal conditions, is this degradation of per-connection
> performance normal as connections are added?
> Or should I expect 180 on the first connection and something similar on
> the second? Maybe 170?

Let's reformat this the way most people present it, as total transactions 
per second (the per-client rate multiplied by the number of clients):

clients tps
1    180
2    220
3    270
6    360

It's common for a single connection doing INSERT statements to hit a 
bottleneck based on how fast the drive being used can spin.  That's 
anywhere from 100 to 200 inserts/second, approximately, unless you have a 
battery-backed write cache.  See 
http://wiki.postgresql.org/wiki/Reliable_Writes for more information.
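
If you want to measure that ceiling on your own hardware, pgbench (which 
ships with PostgreSQL) is the usual tool.  A minimal sketch, assuming a 
scratch database named "bench" already exists:

pgbench -i bench           # populate the standard test tables
pgbench -c 1 -T 30 bench   # 1 client for 30 seconds; note the reported tps
pgbench -c 6 -T 30 bench   # 6 clients for 30 seconds

Each default pgbench transaction ends in a commit, so with no write cache 
the single-client number should land near the drive's rotation rate.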

However, multiple clients can commit at once when a backlog occurs.  So 
what you'll normally see in this situation is that the total rate goes 
up, rather than down, as clients are added.  Here's a real sample, from a 
server whose 7200 RPM drive physically caps any single client at about 
120 commits/second (7200 rotations per minute is 120 rotations per 
second, and a commit that must wait for the platter can complete at most 
once per rotation):

clients tps
1     107
2     109
3     163
4     216
5     271
6     325
8     432
10     530
15     695

This is how it's supposed to scale even on basic hardware.  You didn't 
explore far enough to really know how well your scaling is working here, 
though.  Since commit rates are limited by disk rotation in this 
situation, the behavior with 1 to 5 clients is not really representative 
of how a large number of clients will end up working.  As already 
mentioned, turning off synchronous_commit should give you an 
interesting alternate set of numbers.
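
That's a one-line change; a minimal sketch, shown per session here (it 
can also be set in postgresql.conf), trading a small window of potential 
transaction loss after a crash for a higher commit rate:

SET synchronous_commit = off;

Commits then return before their WAL is flushed to disk, so the rate is 
no longer tied to the drive's rotation speed.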

It's also possible there may be something wrong with whatever client 
logic you are using here.  Something about the way you've written it 
may, for example, be acquiring a lock that blocks other clients from 
executing efficiently.  I'd suggest turning on log_lock_waits and setting 
deadlock_timeout to a small number, which should show you some extra 
logging in situations where sessions are waiting for locks.  Running some 
queries to look at the lock data, such as the examples at 
http://wiki.postgresql.org/wiki/Lock_Monitoring, might be helpful too.
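
For reference, the logging side of that amounts to two settings in 
postgresql.conf (the values are illustrative; a server reload makes them 
take effect):

log_lock_waits = on        # log whenever a lock wait lasts longer than deadlock_timeout
deadlock_timeout = 100ms   # default is 1s; smaller values surface shorter waits

And a much simplified version of the kind of query those wiki examples 
build on, listing any backend currently waiting on a lock:

SELECT pid, locktype, mode, granted
FROM pg_locks
WHERE NOT granted;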

-- 
Greg Smith   2ndQuadrant US    greg(at)2ndQuadrant(dot)com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books

