Re: Millisecond-precision connect_timeout for libpq

From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, ivan babrou <ibobrik(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Millisecond-precision connect_timeout for libpq
Date: 2013-07-09 20:18:15
Message-ID: CAHyXU0xyj53agUgYUV2WUR9O7wLcnMg+RHZM-x39rcW+Z-vgbA@mail.gmail.com
Lists: pgsql-hackers

On Fri, Jul 5, 2013 at 3:01 PM, Josh Berkus <josh(at)agliodbs(dot)com> wrote:
> On 07/05/2013 12:26 PM, Tom Lane wrote:
>> ivan babrou <ibobrik(at)gmail(dot)com> writes:
>>> If you can figure out that PostgreSQL is overloaded, then you can
>>> decide what to do faster. In our app we have a very strict limit on
>>> connect time to MySQL, Redis, and other services, but PostgreSQL has
>>> a minimum of 2 seconds. When average processing time per request is
>>> under 100ms, sub-second timeouts matter.
>>
>> If you are issuing a fresh connection for each sub-100ms query, you're
>> doing it wrong anyway ...
>
> It's fairly common with certain kinds of apps, including Rails and PHP.
> This is one of the reasons why we've discussed having a kind of
> stripped-down version of pgbouncer built into Postgres as a connection
> manager. Even though it's valuable to be able to relocate pgbouncer to
> other hosts, I'd still say that's a good idea.

For the record, I think this is a great idea.

merlin
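For what it's worth, sub-second connect timeouts are already achievable on the client side today with libpq's asynchronous connection API (PQconnectStart/PQconnectPoll), driving the socket with select() against a millisecond deadline instead of relying on the connect_timeout parameter. A minimal sketch, assuming a POSIX platform; the function name and deadline handling are illustrative, not part of libpq:

```c
#include <stdio.h>
#include <sys/select.h>
#include <sys/time.h>
#include <libpq-fe.h>

/* Try to connect, giving up after timeout_ms milliseconds.
 * Returns an open PGconn on success, NULL on timeout or failure. */
static PGconn *
connect_with_deadline(const char *conninfo, int timeout_ms)
{
    PGconn *conn = PQconnectStart(conninfo);

    if (conn == NULL)
        return NULL;
    if (PQstatus(conn) == CONNECTION_BAD)
    {
        PQfinish(conn);
        return NULL;
    }

    /* Per the libpq docs, proceed as if PQconnectPoll last
     * returned PGRES_POLLING_WRITING. */
    PostgresPollingStatusType poll = PGRES_POLLING_WRITING;
    struct timeval start, now;

    gettimeofday(&start, NULL);

    while (poll != PGRES_POLLING_OK && poll != PGRES_POLLING_FAILED)
    {
        gettimeofday(&now, NULL);
        long elapsed_ms = (now.tv_sec - start.tv_sec) * 1000
                        + (now.tv_usec - start.tv_usec) / 1000;
        long remaining_ms = timeout_ms - elapsed_ms;

        if (remaining_ms <= 0)
        {
            PQfinish(conn);     /* overall deadline exceeded */
            return NULL;
        }

        int sock = PQsocket(conn);
        fd_set rfds, wfds;

        FD_ZERO(&rfds);
        FD_ZERO(&wfds);
        if (poll == PGRES_POLLING_READING)
            FD_SET(sock, &rfds);
        else
            FD_SET(sock, &wfds);

        struct timeval tv;
        tv.tv_sec = remaining_ms / 1000;
        tv.tv_usec = (remaining_ms % 1000) * 1000;

        int rc = select(sock + 1, &rfds, &wfds, NULL, &tv);

        if (rc <= 0)            /* error, or select() timed out */
        {
            PQfinish(conn);
            return NULL;
        }

        poll = PQconnectPoll(conn);
    }

    if (poll == PGRES_POLLING_FAILED)
    {
        PQfinish(conn);
        return NULL;
    }
    return conn;
}
```

This gives the caller full millisecond control without any server-side change, at the cost of a bit of boilerplate that a built-in option (or built-in pooler) would make unnecessary.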
