Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?

From: Tomas Vondra <tv(at)fuzzy(dot)cz>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?
Date: 2011-12-01 00:03:13
Message-ID: 4ED6C441.8020306@fuzzy.cz
Lists: pgsql-general

On 29.11.2011 23:38, Merlin Moncure wrote:
> On Tue, Nov 29, 2011 at 7:49 AM, Heiko Wundram <modelnine(at)modelnine(dot)org> wrote:
>> Hello!
>>
>> Sorry for that subscribe post I've just sent, that was bad reading on my
>> part (for the subscribe info on the homepage).
>>
>> Anyway, the title says it all: is there any possibility to limit the number
>> of connections that a client can have concurrently with a PostgreSQL-Server
>> with "on-board" means (where I can't influence which user/database the
>> clients use, rather, the clients mostly all use the same user/database, and
>> I want to make sure that a single client which runs amok doesn't kill
>> connectivity for other clients)? I could surely implement this with a proxy
>> sitting in front of the server, but I'd rather implement this with
>> PostgreSQL directly.
>>
>> I'm using (and need to stick with) PostgreSQL 8.3 because of the frontend
>> software in question.
>>
>> Thanks for any hints!
>
> I think the (hypothetical) general solution for these types of
> problems is to have logon triggers. It's one of the (very) few things
> I envy from SQL Server -- see here:
> http://msdn.microsoft.com/en-us/library/bb326598.aspx.

I'd like to have logon triggers too, but I don't think that's the right
solution for this problem. For example, logon triggers would be called
only after the backend has been forked, which means unnecessary overhead
for connections that end up rejected.

The connection limit should be checked while the connection is being
established (along with validating the username/password etc.), before
the backend is created.

Anyway, I do have an idea how this could be done using a shared library
(so it has the same disadvantages as logon triggers). Hopefully I'll
have time to implement a PoC of this over the weekend.

> Barring the above, if you can trust the client to call a function upon
> connection I'd just do that and handle the error on the client with a
> connection drop. Barring *that*, I'd be putting my clients in front of
> pgbouncer with some patches to the same to get what I needed
> (pgbouncer is single threaded making firewally type features quite
> easy to implement in an ad hoc fashion).

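If the clients connect to the database directly, one way to do the
"call a function upon connection" idea is a function that counts the
sessions in pg_stat_activity coming from the caller's IP. A rough
sketch (the function name and the limit are made up, and it may need to
be SECURITY DEFINER / owned by a superuser so it can see the sessions
of other users):

  CREATE OR REPLACE FUNCTION check_ip_connection_limit(p_max integer)
  RETURNS void AS $$
  DECLARE
      v_count integer;
  BEGIN
      -- count sessions coming from the same client address as this one
      SELECT count(*) INTO v_count
        FROM pg_stat_activity
       WHERE client_addr = inet_client_addr();

      -- the current session is included in the count
      IF v_count > p_max THEN
          RAISE EXCEPTION 'too many connections from %', inet_client_addr();
      END IF;
  END;
  $$ LANGUAGE plpgsql SECURITY DEFINER;

The client would then run something like "SELECT
check_ip_connection_limit(5);" right after connecting and drop the
connection on error.
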
A connection pooler makes this both easier and more complex at the same time.

You can use connect_query to execute whatever you want after connecting
to the database (instead of trusting the client to do it), but that
won't help here: the database will see the IP of the pgbouncer, not the
IP of the original client, so executing such a query is pointless.

You could modify pgbouncer, and that should be quite simple, but you can
get most of the way without patching it: give each customer a different
username/password (in pgbouncer) and a different database alias, and set
pool_size for each of those connections. It won't use the IP to count
connections, but one user won't be able to 'steal' connections from the
others.
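
Something like this in pgbouncer.ini is what I mean (a rough sketch, all
the names and limits are made up):

  [databases]
  ; one alias per customer, all pointing at the same real database,
  ; each capped by its own pool_size
  app_client1 = host=127.0.0.1 port=5432 dbname=app pool_size=5
  app_client2 = host=127.0.0.1 port=5432 dbname=app pool_size=10

  [pgbouncer]
  listen_addr = *
  listen_port = 6432
  auth_type = md5
  auth_file = /etc/pgbouncer/userlist.txt
  pool_mode = session

Each customer then gets their own entry in userlist.txt and can never
hold more backend connections than the pool_size of their alias.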

Tomas
