Rajesh Kumar Mallah <mallah(dot)rajesh(at)gmail(dot)com> wrote:
> Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov> wrote:
>>> max_connections = 300
>> As I've previously mentioned, I would use a connection pool, in
>> which case this wouldn't need to be that high.
> We do use connection pooling, provided to the mod_perl server
> via Apache::DBI::Cache. If I reduce this I *get* "too many
> connections from non-superuser ..." errors. Would pgpool-I/II
> still be applicable in this scenario?
Yeah, you can't reduce this setting without first having a
connection pool in place which will limit how many connections are
in use. We haven't used any of the external connection pool
products for PostgreSQL yet, because when we converted to PostgreSQL
we were already using a pool built into our application framework.
This pool queues requests for database transactions and has one
thread per connection in the database pool to pull and service
objects which encapsulate the logic of the database transaction.
We're moving to new development techniques, since that framework is
over ten years old now, but the overall approach is going to stay
the same -- because it has worked so well for us. By queuing
requests beyond the number which can keep all the server's resources
busy, we avoid wasting resources on excessive context switching and
(probably more significant) contention for locks. At one point our
busiest server started to suffer performance problems under load,
and we were able to fix them simply by configuring the connection
pool to half its previous size -- both response time and throughput
improved.
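The approach described above -- a bounded queue of transaction requests
serviced by a fixed set of threads, one per database connection -- can be
sketched roughly as follows. This is a minimal illustration, not the
poster's actual framework; the ConnectionPool class, the make_conn
factory, and all names are hypothetical stand-ins.

```python
# Sketch of a queuing connection pool: requests beyond the pool size
# wait in the queue instead of opening new database connections, which
# caps concurrency at pool_size regardless of client load.
import queue
import threading

class ConnectionPool:
    def __init__(self, make_conn, pool_size):
        self.requests = queue.Queue()       # transactions queue here
        self.workers = []
        for _ in range(pool_size):          # one thread per connection
            t = threading.Thread(target=self._worker,
                                 args=(make_conn(),), daemon=True)
            t.start()
            self.workers.append(t)

    def _worker(self, conn):
        # Each worker owns exactly one connection and services queued
        # transaction objects with it, one at a time.
        while True:
            item = self.requests.get()
            if item is None:                # shutdown sentinel
                self.requests.task_done()
                return
            work, result = item
            try:
                result.append(work(conn))   # run the transaction
            finally:
                self.requests.task_done()

    def submit(self, work):
        # Enqueue a callable taking the connection; the shared list is
        # filled with its return value once a worker runs it.
        result = []
        self.requests.put((work, result))
        return result

    def shutdown(self):
        for _ in self.workers:
            self.requests.put(None)
        self.requests.join()                # wait for queued work
```

With, say, pool_size=10 in front of max_connections=300 worth of client
demand, excess requests simply wait in the queue, avoiding the context
switching and lock contention mentioned above.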