Re: databases limit

From: Andrew Sullivan <andrew(at)libertyrms(dot)info>
To: pgsql-general(at)postgresql(dot)org, pgsql-hackers(at)postgresql(dot)org
Subject: Re: databases limit
Date: 2003-02-06 16:15:30
Message-ID: 20030206111529.C8908@mail.libertyrms.com
Lists: pgsql-general pgsql-hackers

On Thu, Feb 06, 2003 at 12:30:03AM -0500, Tom Lane wrote:

> I have a feeling that what the questioner really means is "how can I
> limit the resources consumed by any one database user?" In which case

(I'm moving this to -hackers 'cause I think it likely belongs there.)

I note that this question has come up before, and several people have
been sceptical of its utility. In particular, in this thread

<http://groups.google.ca/groups?hl=en&lr=&ie=UTF-8&threadm=Pine.LNX.4.21.0212221510560.15719-100000%40linuxworld.com.au&rnum=1&prev=/groups%3Fq%3Dlimit%2Bresources%2B%2Bgroup:comp.databases.postgresql.*%26hl%3Den%26lr%3D%26ie%3DUTF-8%26selm%3DPine.LNX.4.21.0212221510560.15719-100000%2540linuxworld.com.au%26rnum%3D1>

(sorry about the long line: I just get errors searching at the official
archives) Tom Lane notes that you could just run another back end to
make things more secure.

That much is true; but I'm wondering whether it might be worth it to
limit how much a _database_ can use. For instance, suppose I have a
number of databases which are likely to see sporadic heavy loads.
There are hard limits on how slow responses can be.  So I have to
do some work to guarantee that, for instance, certain tables from
each database don't get flushed from the buffers.

I can do this now by setting up separate postmasters. That way, each
gets its own shared memory segment. Those "certain tables" will be
ones that are frequently accessed, and so they'll always remain in
the buffer, even if the other database is busy (because the two
databases don't share a buffer). (I'm imagining the case -- not
totally imaginary -- where one of the databases tends to be accessed
heavily during one part of a 24-hour day, and another database gets
hit more on another part of the same day.)
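(To be concrete: the separate-postmaster setup I mean is nothing more
than two data directories, each started with its own postgresql.conf.
The paths, ports, and buffer counts below are purely illustrative.)

```
# /var/lib/pgsql/day/postgresql.conf   -- the database busy during the day
port = 5432
shared_buffers = 8192     # its own segment; never evicted by the other cluster

# /var/lib/pgsql/night/postgresql.conf -- the database busy at night
port = 5433
shared_buffers = 8192
```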

The problem with this scenario is that it makes administration
somewhat awkward as soon as you have to do this 5 or 6 times. I was
thinking that it might be nice to be able to limit how much of the
total resources a given database can consume. If one database were
really busy, that would not mean that other databases would
automatically be more sluggish, because they would still have some
guaranteed minimum percentage of the total resources.
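To make the idea concrete, here is a toy sketch (my own illustration,
not a design proposal, and nothing like the real buffer manager): a
shared pool where eviction may only take a buffer from a database
that is above its guaranteed minimum share.

```python
class GuaranteedBufferPool:
    """Toy model: each database is guaranteed a minimum share of the buffers."""

    def __init__(self, total_buffers, minimums):
        # minimums: db name -> guaranteed fraction of the pool (sum <= 1)
        self.minimums = {db: int(frac * total_buffers)
                         for db, frac in minimums.items()}
        self.held = {db: 0 for db in minimums}  # buffers currently held per db
        self.free = total_buffers

    def request_page(self, db):
        """Bring one page in for `db`, evicting only from over-quota databases."""
        if self.free > 0:
            self.free -= 1
        else:
            # Prefer a victim other than the requester that holds more than
            # its guaranteed minimum; fall back to evicting our own buffer.
            victim = next((d for d, n in self.held.items()
                           if d != db and n > self.minimums[d]), db)
            self.held[victim] -= 1
        self.held[db] += 1
```

The point is just the invariant: however busy one database gets, the
other's hot pages never drop below its guaranteed share, without any
need for a second postmaster.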

So, anyone care to speculate?

--
----
Andrew Sullivan                         204-4141 Yonge Street
Liberty RMS                           Toronto, Ontario Canada
<andrew(at)libertyrms(dot)info>              M2P 2A8
+1 416 646 3304 x110
