Re: Connection Pooling, a year later

From: mlw <markw(at)mohawksoft(dot)com>
To: owensmk(at)earthlink(dot)net
Cc: pgsql-hackers(at)postgresql(dot)org, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
Subject: Re: Connection Pooling, a year later
Date: 2001-12-18 02:34:25
Message-ID: 3C1EAB31.C5383D89@mohawksoft.com
Lists: pgsql-hackers

I don't get the deal with connection pooling.

Sure, there are some efficiencies in reducing the number of back-end postgres
processes, but they come at what I see as a huge cost in complication.

Having experimented with Oracle's connection pooling, and having watched either it
or PHP (Apache) crash because of a bug in its query-state tracking, I figured I'd
buy some more RAM, forget about the per-process memory, and call myself lucky.

If you have a web server and use pg_pConnect (in PHP), you will get a
PostgreSQL backend process for each HTTP process on your web servers.
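To make that concrete, here is a minimal PHP sketch of the pattern I mean; the
connection string, table name, and query are made up for illustration:

    <?php
    // Illustrative only: connection parameters, table, and query are invented.
    // Each Apache child that runs this script keeps its own persistent
    // PostgreSQL backend; pg_pconnect() reuses it on later requests.
    $conn = pg_pconnect("host=localhost dbname=webdb user=webuser");
    if (!$conn) {
        die("connection failed\n");
    }
    $result = pg_query($conn, "SELECT count(*) FROM sessions");
    $row = pg_fetch_row($result);
    echo "active sessions: " . $row[0] . "\n";
    // Note: no pg_close() here -- the backend stays attached to this Apache
    // child and is picked up again by that child's next request.
    ?>

Since the connection is never really closed, every Apache child that has ever
served a database page ends up holding a backend, which is exactly the population
of idle processes I'm asking about.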

Besides memory, are there any real costs associated with having a good number of
idle PostgreSQL processes sitting around?

Tom, Bruce?
