From: "Christopher Kings-Lynne" <chriskl(at)familyhealth(dot)com(dot)au>
To: "mlw" <markw(at)mohawksoft(dot)com>, <owensmk(at)earthlink(dot)net>
Cc: <pgsql-hackers(at)postgresql(dot)org>, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Bruce Momjian" <pgman(at)candle(dot)pha(dot)pa(dot)us>
Subject: Re: Connection Pooling, a year later
Date: 2001-12-18 04:42:55
Message-ID: GNELIHDDFBOCMGBFGEFOIENICAAA.chriskl@familyhealth.com.au
Lists: pgsql-hackers
> If you have a web server and use (in PHP) pg_pConnect, you will get a
> postgresql process for each http process on your web servers.
>
> Besides memory, are there any real costs associated with having a good
> number of idle PostgreSQL processes sitting around?
If implemented, surely the best place to put it would be in libpq? You
could add a function to libpq that creates a 'pooled' connection rather
than a normal one. The PHP guys would then just use that instead of their
own pg_connect function. It would also mean that lots of people who use the
pgsql client wouldn't have to write their own connection-sharing code.
However, where would you put all the options for the pool, such as max
processes, min processes, etc.?
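For illustration only, such a libpq-level entry point might look something like the sketch below. `PQpooledConnect`, `PQpooledDisconnect`, the pool array, and the `POOL_MAX` limit are all hypothetical names invented here, not part of libpq (the real library only exposes `PQconnectdb` and friends); a stand-in connection struct is used so the sketch stands alone:

```c
#include <string.h>

/* Stand-in for libpq's PGconn -- hypothetical, for illustration only. */
typedef struct { char conninfo[256]; int busy; } PGconnStub;

#define POOL_MAX 4  /* hypothetical "max processes" pool option */

static PGconnStub pool[POOL_MAX];
static int pool_used = 0;

/* Hypothetical PQpooledConnect: hand back an idle connection that was
 * opened with the same conninfo, or "open" a new one up to POOL_MAX. */
static PGconnStub *PQpooledConnect(const char *conninfo)
{
    for (int i = 0; i < pool_used; i++)
        if (!pool[i].busy && strcmp(pool[i].conninfo, conninfo) == 0) {
            pool[i].busy = 1;
            return &pool[i];          /* reuse an idle connection */
        }
    if (pool_used < POOL_MAX) {       /* room for another backend */
        PGconnStub *c = &pool[pool_used++];
        strncpy(c->conninfo, conninfo, sizeof c->conninfo - 1);
        c->busy = 1;
        return c;
    }
    return 0;                         /* pool exhausted */
}

/* Check-in: the connection stays open, it just becomes reusable. */
static void PQpooledDisconnect(PGconnStub *c)
{
    c->busy = 0;
}
```

The point of the sketch is only the shape of the API: callers keep using a connect/disconnect pair, and the pooling happens behind it, which is why PHP and other clients wouldn't need their own sharing code.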
I have learnt that half the problem with connection pooling is transactions
that fail to be rolled back...
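The failure mode is that a client abandons a connection mid-transaction, and the next client checked out of the pool inherits that open transaction and its locks. Pools typically guard against this by rolling back on check-in. A minimal sketch of that guard, using a hypothetical stand-in connection type (not libpq) that just tracks transaction state:

```c
#include <string.h>

/* Stand-in connection that remembers whether a transaction is open
 * and the last command sent -- hypothetical, for illustration only. */
typedef struct { int in_txn; char last_cmd[16]; } Conn;

static void conn_exec(Conn *c, const char *sql)
{
    strncpy(c->last_cmd, sql, sizeof c->last_cmd - 1);
    c->last_cmd[sizeof c->last_cmd - 1] = '\0';
    if (strcmp(sql, "BEGIN") == 0)
        c->in_txn = 1;
    if (strcmp(sql, "COMMIT") == 0 || strcmp(sql, "ROLLBACK") == 0)
        c->in_txn = 0;
}

/* Check-in guard: if the client abandoned an open transaction,
 * roll it back so the next user gets a clean connection. */
static void pool_release(Conn *c)
{
    if (c->in_txn)
        conn_exec(c, "ROLLBACK");
}
```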
Chris