Re: Connection Pooling, a year later

From: "Mark Pritchard" <mark(at)tangent(dot)net(dot)au>
To: <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Connection Pooling, a year later
Date: 2001-12-18 06:06:40
Message-ID: EGECIAPHKLJFDEJBGGOBGEIJFNAA.mark@tangent.net.au
Lists: pgsql-hackers

> I think it is the startup cost that most people want to avoid, and ours
> is higher than most DBs' that use threads; at least I think so.
>
> It would just be nice to have it done internally rather than have all
> the clients do it, iff it can be done cleanly.

I'd add that client-side connection pooling isn't effective in some cases
anyway - one application we work with has 4 physical application servers,
each running around 6 applications. Each of the applications was written by
a different vendor, so even a modest pool size of five per application gives
you 120 open connections.

As noted in another message, implementing pooling in libpq wouldn't solve
the problem for JDBC clients either, since the JDBC driver doesn't use
libpq.

My knowledge of the PostgreSQL internals is rather limited, but couldn't
you kick off a number of backends in advance and use the existing block of
shared memory to grab and process requests?

Cheers,

Mark Pritchard
