Re: Large number of short lived connections - could a connection pool help?

From: Mario Weilguni <roadrunner6(at)gmx(dot)at>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Large number of short lived connections - could a connection pool help?
Date: 2011-11-15 08:04:52
Message-ID: 4EC21D24.3060804@gmx.at
Lists: pgsql-performance

On 2011-11-15 01:42, Cody Caughlan wrote:
> We have anywhere from 60-80 background worker processes connecting to
> Postgres, performing a short task and then disconnecting. The lifetime
> of these tasks averages 1-3 seconds.
>
> I know that there is some connection overhead to Postgres, but I don't
> know what would be the best way to measure this overhead and/or to
> determine if it's currently an issue at all.
>
> If there is substantial overhead, I would think of employing a
> connection pool like pgbouncer to keep a static list of these
> connections and then dole them out to the transient workers on demand.
>
> So the overall cumulative number of connections wouldn't change; I
> would just avoid setting them up and tearing them down so quickly.
>
> Is this something I should look into, or is it not much of an issue?
> What's the best way to determine whether I could benefit from using a
> connection pool?
>
> Thanks.
>
I had a case where a pooler (in this case pgpool) resulted in a 140%
application performance improvement - so yes, it is probably a win to
use a pooling solution.
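
If you want a rough number before committing to a pooler, something along
these lines can help. This is only a sketch, not a definitive benchmark: it
assumes psycopg2 is installed, that pgbouncer is listening on its default
port 6432, and the DSNs (database name, host, user) are placeholders for
your own settings. It compares a full connect/query/disconnect cycle made
directly against Postgres with the same cycle made through the pooler:

    # Rough timing sketch, not a definitive benchmark. Assumes psycopg2 is
    # installed and pgbouncer is listening on its default port 6432; both
    # DSNs are placeholders for your own settings.
    import time
    import psycopg2

    def time_connect_cycles(dsn, cycles=100):
        """Open and close `cycles` connections; return average seconds per cycle."""
        start = time.time()
        for _ in range(cycles):
            conn = psycopg2.connect(dsn)
            cur = conn.cursor()
            cur.execute("SELECT 1")   # stand-in for the worker's short task
            cur.close()
            conn.close()
        return (time.time() - start) / cycles

    # Direct connection vs. going through pgbouncer; the difference
    # approximates the per-connection setup/teardown cost a pooler saves.
    direct = time_connect_cycles("dbname=app host=127.0.0.1 port=5432 user=worker")
    pooled = time_connect_cycles("dbname=app host=127.0.0.1 port=6432 user=worker")
    print("direct: %.4f s/cycle   pooled: %.4f s/cycle" % (direct, pooled))

If the direct figure is a noticeable fraction of your 1-3 second task time,
the pooler is doing real work for you.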
