Re: getting to beta

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Dan Ports" <drkp(at)csail(dot)mit(dot)edu>, "Robert Haas" <robertmhaas(at)gmail(dot)com>
Cc: "Heikki Linnakangas" <heikki(dot)linnakangas(at)enterprisedb(dot)com>, <pgsql-hackers(at)postgresql(dot)org>,"Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: getting to beta
Date: 2011-04-06 22:32:15
Message-ID: 4D9CA39F020000250003C48C@gw.wicourts.gov
Lists: pgsql-hackers

Robert Haas <robertmhaas(at)gmail(dot)com> wrote:

> The real fix for this problem is probably to have the ability to
> actually return memory to the shared pool, rather than having
> everyone grab as they need it until there's no more and never give
> back. But that's not going to happen in 9.1, so the question is
> whether this is a sufficiently serious problem that we ought to
> impose the proposed stopgap fix between now and whenever we do
> that.

There is a middle course between the current approach (preallocating
half the maximum size and leaving the other half up for grabs) and
the course Heikki proposes (making the maximum a hard limit). I
submitted a patch to preallocate the maximum, so that a request for
an entry in a particular HTAB will never fail with "out of shared
memory" unless the table is past its maximum:

http://archives.postgresql.org/message-id/4D948066020000250003C00B@gw.wicourts.gov

That would still leave up for grabs the extra slop which is factored
into the shared memory calculations, but each table would be
guaranteed at least its maximum number of entries. This seems pretty
safe to me, and not very invasive. We could always revisit this in
9.2 if that's not good enough.
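
To make the shape of the change concrete, here is a minimal sketch of
a call site (the table name, entry struct, and InitMyTable function
are hypothetical; the actual patch is in the message linked above).
ShmemInitHash() already takes separate initial and maximum sizes, so
the idea amounts to passing the maximum for both:

#include "postgres.h"
#include "storage/shmem.h"
#include "utils/hsearch.h"

/* Hypothetical entry type; the real patch touches existing tables. */
typedef struct MyEntry
{
    uint32      key;
    uint32      value;
} MyEntry;

static HTAB *MyTable;

void
InitMyTable(long max_size)
{
    HASHCTL     info;

    memset(&info, 0, sizeof(info));
    info.keysize = sizeof(uint32);
    info.entrysize = sizeof(MyEntry);
    info.hash = tag_hash;

    /*
     * Callers today typically pass something like max_size / 2 as the
     * init_size, so half the entries are preallocated and the rest are
     * grabbed from the common shared memory pool on demand.  Passing
     * max_size for both arguments preallocates every entry up to the
     * maximum, so later hash_search(..., HASH_ENTER, ...) calls can't
     * fail with "out of shared memory" until the table exceeds its own
     * maximum.
     */
    MyTable = ShmemInitHash("My Table",
                            max_size,       /* init_size == max_size */
                            max_size,
                            &info,
                            HASH_ELEM | HASH_FUNCTION);
}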

-Kevin
