Re: how much postgres can scale up?

From: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
To: Anibal David Acosta <aa(at)devshock(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: how much postgres can scale up?
Date: 2011-06-10 13:01:43
Message-ID: 4DF215B7.7040005@postnewspapers.com.au
Lists: pgsql-performance

On 06/10/2011 07:29 PM, Anibal David Acosta wrote:

> I know that with this information you can figure out some things, but
> under normal conditions, is per-connection performance degradation
> normal as the number of connections increases?

With most loads, you will find that per-worker throughput decreases as
you add workers. Overall throughput will usually increase with the
number of workers until you reach a certain "sweet spot", then decrease
as you add more workers beyond that.

Where that sweet spot is depends on how much your queries rely on CPU vs
disk vs memory, your Pg version, how many disks you have, how fast they
are and how they're configured, what and how many CPUs you have, how
much RAM you have, how fast your RAM is, etc. There's no simple formula
because it's so workload-dependent.

The usual *very* rough rule of thumb given here is that your sweet spot
should be *vaguely* the number of CPU cores + the number of hard drives.
That's *incredibly* rough; if you care, you should benchmark it using
your real workload.

If you need lots and lots of clients then it may be beneficial to use a
connection pool like pgbouncer or PgPool-II, so that you don't have many
more connections trying to do work at once than your hardware can cope
with. Having fewer connections doing work in the database at the same
time can improve overall throughput.
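As a rough illustration (not from the original mail), a pgbouncer
configuration might accept many client connections while capping the
number of server connections near the sweet spot; the database name and
the pool sizes below are placeholders you would tune for your hardware:

```ini
; pgbouncer.ini sketch: many clients, few server connections
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
; Hand a server connection to a client per transaction, so a small
; pool can serve many clients.
pool_mode = transaction
; Accept up to 500 clients...
max_client_conn = 500
; ...but keep at most ~20 connections busy in the database,
; i.e. roughly cores + disks on a small server (illustrative).
default_pool_size = 20
```

Clients then connect to port 6432 instead of 5432, and pgbouncer
multiplexes them onto the small server-side pool.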

--
Craig Ringer
