
Re: max_connections proposal

From: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
To: Edison So <edison(dot)so2(at)gmail(dot)com>
Cc: "List, Postgres" <pgsql-general(at)postgresql(dot)org>
Subject: Re: max_connections proposal
Date: 2011-05-29 08:39:23
Message-ID: 4DE2063B.7020408@postnewspapers.com.au
Lists: pgsql-general
On 29/05/2011 10:44 AM, Edison So wrote:
> Can anyone tell me that if the max_connections is above 100, the server
> will use pooling instead?

No. PostgreSQL does not have any built-in connection pooling; that was 
the point of the suggestion: to advise people that they might want to 
consider it.

You should _consider_ using connection pooling instead of high numbers 
of connections if your application is suitable. You will usually get 
better throughput and often get better overall query latency if you 
configure lower max_connections and then use a connection pool like 
pgbouncer or PgPool-II.
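For example, a minimal pgbouncer setup might look like the following. This is only a sketch; the database name, paths, and pool sizes are illustrative, not recommendations:

```ini
; minimal pgbouncer.ini sketch -- values are illustrative only
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction          ; reuse server connections between transactions
max_client_conn = 500            ; many clients...
default_pool_size = 20           ; ...multiplexed onto few server connections
```

The application then connects to port 6432 instead of 5432, and pgbouncer funnels up to 500 client connections through a pool of 20 real server backends, so max_connections on the PostgreSQL side can stay low.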

Many people using high max_connections are using PHP and pg_pconnect. 
Those people should particularly consider using a connection pool 
instead of increasing max_connections. Most people who have performance 
issues due to overload seem to have this setup.

A few features aren't suitable for pooling, including LISTEN/NOTIFY, 
advisory locking, and named server-side prepared statements (explicit 
SQL "PREPARE").
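To illustrate why, statements like these tie state to a single backend session, which a transaction-level pooler does not preserve across transactions (a sketch, not taken from the original mail):

```sql
-- Each of these assumes the same server backend for the whole session,
-- but a transaction-level pooler may hand later statements to a
-- different server connection:
LISTEN my_channel;                    -- notifications arrive on this backend only
SELECT pg_advisory_lock(42);          -- session-level advisory lock held by this backend
PREPARE stmt (int) AS SELECT $1 + 1;  -- named prepared statement lives in this backend
EXECUTE stmt(1);                      -- fails if routed to a backend without the PREPARE
```

Session-level pooling avoids this, but gives up most of the multiplexing benefit.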

> For all participants in this particular discussion, what is a reasonable
> value for max_connections without causing any harm to the Postgres 9.0
> server?

It's dependent on your workload, the capacity of your server, whether 
requests come in batches or continuously, and all sorts of other things. 
That's why Tom (wisely) pointed out that naming a number was a really 
bad idea, even if it was intended only as a vague hint.

Some people on this list clearly run production servers with 
max_connections in the several-hundreds without any problems. Others 
have posted asking for help with server load, stalls and memory 
exhaustion when using only 250 connections.

There's a big difference between an Amazon EC2 node and a real server 
with a local, big, fast RAID10 array. The former might practically melt 
down with a configuration that would not be enough to push the latter 
even close to its limits.

I'm beginning to suspect that the comment I suggested is a bad idea as 
currently constructed. Maybe the problem cannot be even hinted at in a 
single short paragraph without creating more confusion than it solves. 
Something is needed, but perhaps it should just be a pointer to the 
documentation:

max_connections = 50
# Thinking of increasing this? Read http://some-documentation-url first!


-- 
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/

