Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?

From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Tomas Vondra <tv(at)fuzzy(dot)cz>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?
Date: 2011-12-01 17:32:02
Message-ID: CAHyXU0wW+mpf33P9QXmSUn5Ej+xPn+boeSFZch3XmfOCnf5OBA@mail.gmail.com
Lists: pgsql-general

On Wed, Nov 30, 2011 at 6:03 PM, Tomas Vondra <tv(at)fuzzy(dot)cz> wrote:
> On 29.11.2011 23:38, Merlin Moncure wrote:
>> On Tue, Nov 29, 2011 at 7:49 AM, Heiko Wundram <modelnine(at)modelnine(dot)org> wrote:
>>> Hello!
>>>
>>> Sorry for that subscribe post I've just sent, that was bad reading on my
>>> part (for the subscribe info on the homepage).
>>>
>>> Anyway, the title says it all: is there any possibility to limit the number
>>> of connections that a client can have open concurrently to a PostgreSQL
>>> server using "on-board" means (where I can't influence which user/database
>>> the clients use; rather, the clients mostly all use the same user/database,
>>> and I want to make sure that a single client which runs amok doesn't kill
>>> connectivity for other clients)? I could surely implement this with a proxy
>>> sitting in front of the server, but I'd rather implement this with
>>> PostgreSQL directly.
>>>
>>> I'm using (and need to stick with) PostgreSQL 8.3 because of the frontend
>>> software in question.
>>>
>>> Thanks for any hints!
>>
>> I think the (hypothetical) general solution for these types of
>> problems is to have logon triggers.  It's one of the (very) few things
>> I envy from SQL Server -- see here:
>> http://msdn.microsoft.com/en-us/library/bb326598.aspx.
>
> I'd like to have logon triggers too, but I don't think that's the right
> solution for this problem. For example, the logon triggers would be
> called after forking the backend, which means extra overhead.
>
> The connection limits should be checked when the connection is created
> (while validating username/password etc.), before the backend is forked.

I disagree. I'm not convinced the overhead is really worth worrying
about, and having a trigger under the user's control allows the feature
to cover a much broader array of scenarios. If the overhead *were* a
big deal, then you should be using a connection pooler anyway.
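
For what it's worth, the kind of check such a trigger (or a function the
client calls right after connecting) would run is easy to sketch against
pg_stat_activity. This is only an illustration, not tested code: it assumes
plpgsql is installed (on 8.3 it isn't by default), hard-codes an arbitrary
per-IP limit, and relies on 8.3's procpid column:

-- hypothetical per-IP connection check; call it at connect time and
-- drop the connection if it raises an error
CREATE OR REPLACE FUNCTION check_connection_limit() RETURNS void AS $$
DECLARE
  n integer;
BEGIN
  -- count backends coming from the same address as this session
  SELECT count(*) INTO n
    FROM pg_stat_activity
   WHERE client_addr = (SELECT client_addr
                          FROM pg_stat_activity
                         WHERE procpid = pg_backend_pid());
  IF n > 10 THEN  -- arbitrary limit, just for the example
    RAISE EXCEPTION 'too many connections from your address (%)', n;
  END IF;
END;
$$ LANGUAGE plpgsql;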

>> Barring the above, if you can trust the client to call a function upon
>> connection I'd just do that and handle the error on the client with a
>> connection drop. Barring *that*, I'd be putting my clients in front of
>> pgbouncer with some patches to the same to get what I needed
>> (pgbouncer is single threaded making firewally type features quite
>> easy to implement in an ad hoc fashion).
>
> The connection pooler is somehow easier and more complex at the same time.
>
> You can use connect_query to execute whatever you want after connecting
> to the database (without trusting the user to do that), but why would you
> do that? The database will see the IP of the pgbouncer, not the IP of the
> original client, so executing the query is pointless.
>
> You could modify pgbouncer, and it should be quite simple, but you can
> also get there by assigning a different username/password (in pgbouncer)
> and a different database to each customer, and setting pool_size for each
> of those connections. It won't use the IP to count connections, but the
> users won't 'steal' connections from each other.

Yeah, pgbouncer is an ideal platform for coding firewall-type features,
query whitelists, etc., while still having SQL access to the database if
you need it. You also have access to the client's real IP, and can pass
that down to some code that would presumably be shared with your logon
trigger.
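
To make Tomas' per-customer setup concrete, a minimal pgbouncer.ini
fragment might look roughly like the following (names, paths, and sizes
are made up; pool_size caps each customer's server connections rather
than counting by client IP):

; one logical database per customer, all pointing at the same real
; database, each with its own pool_size
[databases]
customer_a = host=127.0.0.1 dbname=appdb user=customer_a pool_size=10
customer_b = host=127.0.0.1 dbname=appdb user=customer_b pool_size=10

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
max_client_conn = 200
default_pool_size = 10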

merlin
