Re: Millisecond-precision connect_timeout for libpq

From: ivan babrou <ibobrik(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Millisecond-precision connect_timeout for libpq
Date: 2013-07-05 19:47:16
Message-ID: CANWdNRBE9wS2Re6pzkStqOhxNyU+4VSAZUatC1vq+HzTe8grQQ@mail.gmail.com
Lists: pgsql-hackers

On 5 July 2013 23:26, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> ivan babrou <ibobrik(at)gmail(dot)com> writes:
>> If you can figure out that PostgreSQL is overloaded, then you can
>> decide what to do sooner. In our app we have a very strict limit on
>> connect time to MySQL, Redis, and other services, but PostgreSQL has
>> a minimum of 2 seconds. When average processing time per request is
>> under 100ms, sub-second timeouts matter.
>
> If you are issuing a fresh connection for each sub-100ms query, you're
> doing it wrong anyway ...
>
> regards, tom lane

In PHP you cannot persist a connection between requests without
worrying about transaction state. We don't use PostgreSQL for every
sub-100ms query, because a connection attempt can block the whole
request for 2 seconds. Usually it takes 1.5ms to connect, by the way.
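
To put the 2-second floor in concrete terms, here is a minimal sketch
of the blocking path (the host address and conninfo string are made
up). libpq documents that connect_timeout values below 2 are
interpreted as 2, so even asking for 1 second, the call below can
block for up to 2 seconds:

/*
 * Minimal sketch of a blocking connect. The address is illustrative;
 * the point is that connect_timeout=1 is treated as 2 by libpq.
 */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn *conn = PQconnectdb("host=10.0.0.1 dbname=app connect_timeout=1");

    if (PQstatus(conn) != CONNECTION_OK)
        fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));

    PQfinish(conn);
    return 0;
}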

Can you tell me why the ability to specify a more precise connect
timeout is a bad idea?
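
The workaround available today is to skip connect_timeout entirely and
drive the non-blocking API with our own millisecond deadline. Below is
only a sketch built on the documented PQconnectStart/PQconnectPoll
contract; the conninfo string and the helper name are invented for
illustration:

/*
 * Sketch: sub-second connect timeout via libpq's non-blocking API.
 * connect_with_deadline() is a hypothetical helper, not part of libpq.
 */
#include <stdio.h>
#include <sys/select.h>
#include <sys/time.h>
#include <libpq-fe.h>

/* Returns an open connection, or NULL if timeout_ms elapsed first. */
static PGconn *
connect_with_deadline(const char *conninfo, long timeout_ms)
{
    PGconn *conn = PQconnectStart(conninfo);
    struct timeval deadline, now;
    PostgresPollingStatusType st;

    if (conn == NULL)
        return NULL;
    if (PQstatus(conn) == CONNECTION_BAD)
    {
        PQfinish(conn);
        return NULL;
    }

    gettimeofday(&deadline, NULL);
    deadline.tv_sec += timeout_ms / 1000;
    deadline.tv_usec += (timeout_ms % 1000) * 1000;

    /* The docs say to start as if PQconnectPoll last returned WRITING. */
    st = PGRES_POLLING_WRITING;
    while (st != PGRES_POLLING_OK && st != PGRES_POLLING_FAILED)
    {
        int     sock = PQsocket(conn);
        fd_set  rfds, wfds;
        struct timeval tv;
        long    remain_us;

        gettimeofday(&now, NULL);
        remain_us = (deadline.tv_sec - now.tv_sec) * 1000000L
                  + (deadline.tv_usec - now.tv_usec);
        if (remain_us <= 0)
            break;          /* our millisecond deadline expired */

        tv.tv_sec = remain_us / 1000000L;
        tv.tv_usec = remain_us % 1000000L;

        FD_ZERO(&rfds);
        FD_ZERO(&wfds);
        if (st == PGRES_POLLING_READING)
            FD_SET(sock, &rfds);
        else
            FD_SET(sock, &wfds);

        /* Wait until the socket is ready, but never past the deadline. */
        if (select(sock + 1, &rfds, &wfds, NULL, &tv) <= 0)
            break;          /* timed out or select() failed */

        st = PQconnectPoll(conn);
    }

    if (st == PGRES_POLLING_OK)
        return conn;

    PQfinish(conn);
    return NULL;
}

With a millisecond-capable connect_timeout, none of this boilerplate
would be needed in every client.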

--
Regards, Ian Babrou
http://bobrik.name http://twitter.com/ibobrik skype:i.babrou
