Held idle connections vs use of a Pooler

From: mark <dvlhntr(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Held idle connections vs use of a Pooler
Date: 2010-09-14 16:10:33
Message-ID: AANLkTi=MBWmUSQiLJ1jq5oPZafS-8VCuUKKPR8avjbMG@mail.gmail.com
Lists: pgsql-performance

Hello,

I am relatively new to Postgres (just a few months), so apologies in
advance, and thanks for bearing with me.

I am trying to get a rough idea of how much bang for the buck I might
see if I put a connection pooling service into the environment, versus
our current approach of holding persistent open connections.

We have a number of in-house applications that connect to a central
Postgres instance (8.3.7). The box is admittedly underpowered, with
only 8 cores, 8 GB of RAM, and not-great disk IO out of an MSA-70.
The database is about 35 GB on disk and does mainly (~95%) OLTP-type
queries. I am currently begging for more RAM.

Most of the connections from the various apps are held open and idle
until the app needs to execute a query; once done, they go back to
sitting idle. (There are ~600 open connections at any given time, and
most of them are idle most of the time.)
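
For reference, this is roughly how I have been counting them (assuming
the 8.3-style pg_stat_activity view, where idle backends report
'<IDLE>' in current_query):

    -- split backends into idle vs active and count each group
    SELECT CASE WHEN current_query LIKE '<IDLE>%' THEN 'idle'
                ELSE 'active' END AS state,
           count(*)
    FROM pg_stat_activity
    GROUP BY 1;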

This is typically fine while the number of active queries is low, but
some other application (one that doesn't use connection pooling or hold
connections open when not in use) hits the db from time to time with
50-100 small queries (~2 ms each in my testing) nearly all at once, and
when that happens overall response time goes out the window.

From reading this list for a few weeks, I think the answer is to move
to a connection pooling package in front of the database, so incoming
connections are managed there and a lower number actually reaches the
db.
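
For example, with something like pgbouncer I gather the config would
look roughly like the sketch below (the values are guesses for our
setup, not something we have tested):

    ; pgbouncer.ini -- illustrative only
    [databases]
    ourdb = host=127.0.0.1 port=5432 dbname=ourdb

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction      ; share server connections per transaction
    max_client_conn = 600        ; apps can still open ~600 client connections
    default_pool_size = 20       ; but only ~20 actually hit the database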

I am told this will require some re-working of app code; as I
understand it, pgpool was tried a while back in our QA environment and
several parts of various in-house apps/scripts/etc. started to
experience show-stopping problems.

To help make my case to the devs and various managers, I was wondering
if someone could expand on what extra work the server ends up doing
while queries run with a high number (500-600) of open yet idle
connections to the db. Lots of the queries executed use
sub-transactions, if that makes a difference (a small example of what I
mean is below).

Basically: what am I paying extra for with that many persistent
connections, that I might save if I go to the effort of getting the
in-house stuff to use a connection pooler?
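
(By sub-transactions I mean SAVEPOINT-style blocks; the pattern in our
apps is roughly like the sketch below, with a made-up table name just
to show the shape.)

    BEGIN;
    SAVEPOINT before_update;
    -- hypothetical statement; an error here is handled with
    -- ROLLBACK TO SAVEPOINT before_update instead of aborting everything
    UPDATE accounts SET balance = balance - 10 WHERE id = 1;
    RELEASE SAVEPOINT before_update;
    COMMIT;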

thank you for your time.

..: mark
