Re: profiling connection overhead

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Rob Wultsch <wultsch(at)gmail(dot)com>
Cc: Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: profiling connection overhead
Date: 2010-12-06 01:58:52
Message-ID: AANLkTi=6Zq-yjPZ6vbB-ptBE-EgWDyhRL9mPkhWyW+es@mail.gmail.com
Lists: pgsql-hackers

On Sun, Dec 5, 2010 at 3:17 PM, Rob Wultsch <wultsch(at)gmail(dot)com> wrote:
> On Sun, Dec 5, 2010 at 12:45 PM, Rob Wultsch <wultsch(at)gmail(dot)com> wrote:
>> One thing I would suggest that the PG community keeps in mind while
>> talking about built in connection process caching, is that it is very
>> nice feature for memory leaks caused by a connection to not exist for
>> and continue growing forever.
>
> s/not exist for/not exist/
>
> I have had issues with very slow leaks in MySQL building up over
> months. It really sucks to have to go to management to ask for
> downtime because of a slow memory leak.

Apache has a very simple and effective solution to this problem: a
configuration option controlling the number of connections a child
process handles before it dies and a new one is spawned. I've found
that setting this to 1000 works well. Amortized over those 1000
connections, process startup overhead drops by three orders of
magnitude, and only egregiously bad leaks accumulate enough memory to
matter before the process is recycled.
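The recycling scheme described above can be sketched in a few lines.
This is an illustrative toy, not Apache's or PostgreSQL's actual
implementation; the names (MAX_REQUESTS_PER_CHILD, run_worker) are
made up for the example:

```python
# Apache-style worker recycling: each worker process serves at most
# MAX_REQUESTS_PER_CHILD requests before exiting, so the parent can
# spawn a fresh one. Any slow per-connection memory leak is therefore
# bounded by what can accumulate over that many requests.

MAX_REQUESTS_PER_CHILD = 1000  # analogous to Apache's recycling knob


def run_worker(handle_request, incoming):
    """Serve requests until the quota is reached, then return so the
    process can exit and be replaced. Returns the number served."""
    served = 0
    for req in incoming:
        handle_request(req)
        served += 1
        if served >= MAX_REQUESTS_PER_CHILD:
            # Quota reached: stop here; in a real server the process
            # would exit and the parent would fork a replacement.
            break
    return served
```

The tradeoff is exactly the one described: startup cost is paid once
per 1000 connections instead of once per connection, while leaked
memory can never grow for longer than one worker lifetime.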

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
