Re: postgresql.conf recommendations

From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Will Platnick <wplatnick(at)gmail(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: postgresql.conf recommendations
Date: 2013-02-12 02:54:57
Message-ID: CAOR=d=1pMUzftzgiecQ+FHBrSVn_EF-sW+F+np5paz94euk6ZQ@mail.gmail.com
Lists: pgsql-performance

On Mon, Feb 11, 2013 at 4:29 PM, Will Platnick <wplatnick(at)gmail(dot)com> wrote:
> We will probably tweak this knob some more -- i.e., what is the sweet spot
> between 1 and 100? Would it be higher than 50 but less than 100? Or is it
> somewhere lower than 50?
>
> I would love to know the answer to this as well. We have a similar
> situation: pgbouncer with transaction pooling and 140 connections.
> What is the right value to size the pgbouncer pool to? Is there a
> formula that takes the # of cores into account?

If you can come up with a synthetic benchmark that's similar to your
real load (size, mix, etc.), then you can test it and see at what
connection count your throughput peaks while the server still behaves
well.
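As a rough sketch of such a sweep, the script below drives pgbench (PostgreSQL's bundled benchmark tool) once per client count and collects the reported tps. The database name, workload script, and client counts are hypothetical placeholders; it assumes `pgbench` is on the PATH and that `workload.sql` approximates your real query mix:

```python
import re
import subprocess

def parse_tps(pgbench_output: str) -> float:
    """Pull the transactions-per-second figure out of pgbench's report.

    Matches both older ("including connections establishing") and newer
    ("without initial connection time") pgbench output formats.
    """
    m = re.search(r"tps = ([\d.]+)", pgbench_output)
    if m is None:
        raise ValueError("no tps line found in pgbench output")
    return float(m.group(1))

def sweep_clients(dbname, script, counts, seconds=60):
    """Run pgbench once per client count; return (clients, tps) pairs."""
    results = []
    for c in counts:
        out = subprocess.run(
            ["pgbench", "-n",            # skip vacuum of pgbench tables
             "-f", script,               # custom workload script
             "-c", str(c),               # number of client connections
             "-j", str(min(c, 8)),       # worker threads
             "-T", str(seconds),         # run duration in seconds
             dbname],
            capture_output=True, text=True, check=True,
        ).stdout
        results.append((c, parse_tps(out)))
    return results

# Example (runs against a live server, so not executed here):
# sweep_clients("mydb", "workload.sql", [10, 20, 40, 60, 80, 100, 140])
```

Plotting the resulting pairs gives you the throughput curve to pick a pool size from.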

On a server I built a few years back with 48 AMD cores and 24 spinners
in a RAID-10 for data, plus 4 drives in a RAID-10 for pg_xlog (no RAID
controller in this one, as the chassis cooked them), my throughput
peaked at ~60 connections. What you'll wind up with is a graph where
throughput keeps climbing as you add clients until it reaches a peak,
then usually drops off quickly once you pass that point. The sharper
the drop, the more dangerous it is to run your server in such an
overloaded state.
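To make "the sharper the drop" concrete, here's a small helper (with made-up sweep numbers, not figures from this thread) that locates the peak in a set of benchmark results and reports the steepest per-client throughput loss past it:

```python
def peak_and_dropoff(results):
    """Given (clients, tps) pairs, return the client count at peak
    throughput and the steepest tps loss per added client observed
    past the peak (0.0 if the peak is the last measured point)."""
    results = sorted(results)
    peak_idx = max(range(len(results)), key=lambda i: results[i][1])
    peak_clients = results[peak_idx][0]
    worst_slope = 0.0
    # Walk consecutive points past the peak, tracking the steepest decline.
    for (c1, t1), (c2, t2) in zip(results[peak_idx:], results[peak_idx + 1:]):
        slope = (t1 - t2) / (c2 - c1)  # tps lost per added client
        worst_slope = max(worst_slope, slope)
    return peak_clients, worst_slope

# Hypothetical sweep: throughput peaks around 60 clients, then falls off.
sweep = [(10, 3000.0), (20, 5200.0), (40, 7800.0), (60, 8400.0),
         (80, 7000.0), (100, 4200.0)]
peak, slope = peak_and_dropoff(sweep)
# → peak is 60; steepest drop is 140 tps per added client (80 → 100)
```

A large `worst_slope` means a small burst of extra clients past the peak costs a lot of throughput, which is exactly the dangerous overload behavior described above.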

--
To understand recursion, one must first understand recursion.
