Re: BUG #5946: Long exclusive lock taken by vacuum (not full)

From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #5946: Long exclusive lock taken by vacuum (not full)
Date: 2011-03-28 10:26:49
Message-ID: 4D906269.6060109@commandprompt.com
Lists: pgsql-bugs

Likely "too large" is more an issue related to available resources than
of absolute figure.

With a petabyte of free storage I would not mind allocating a few
terabytes when extending a (large) table.
If I'm left with only a few MB, I'd be concerned for sure.

I still prefer an approach that will "just work", without much fiddling
with all kinds of knobs.

I'd see the following points:

- There is a minimum allocation size below which it is unreasonable
/inefficient to do allocations
- Sizing allocations based on the current table size honors the
assumption that a large table will grow further
(and thus accommodates that growth pattern)
- Large growth is "frightening" - largely (my assumption) due to
unwanted behavior as free space runs out

So what seems to help is twofold:

- Support readjusting the allocation size to smaller units in case an
intended allocation cannot be satisfied, while still allowing the
minimum required space to be claimed (see the sketch after this list)

- Allow allocated but unused space to be reclaimed
(It is perfectly OK to have all of my "unused" disk space allocated to
a large table that just happens not to be using it,
as long as that space can later be given to some smaller table as soon
as it needs more room.)
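
To make the readjusting idea concrete, here is a minimal C sketch (my
own illustration, not PostgreSQL code; all names and numbers are
assumptions made up for the example). It tries the desired extension
first and halves the request on failure, claiming at least the minimum
required space before giving up entirely:

/*
 * Minimal sketch, not PostgreSQL code: the extension primitive is
 * simulated with a free-space counter so the example is self-contained.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define MIN_EXTEND_BLOCKS 8         /* assumed lower bound: 8 x 8 kB = 64 kB */

static size_t free_blocks = 1000;   /* simulated free space in the tablespace */

/* Stand-in for the real extension primitive; fails the way ENOSPC would. */
static bool try_extend_relation(size_t blocks)
{
    if (blocks > free_blocks)
        return false;
    free_blocks -= blocks;
    return true;
}

/* Try 'desired' blocks, fall back to smaller requests if that cannot be
 * satisfied, and insist only on the minimum as a last resort. */
static bool extend_with_fallback(size_t desired)
{
    size_t request = desired;

    while (request > MIN_EXTEND_BLOCKS)
    {
        if (try_extend_relation(request))
            return true;
        request /= 2;               /* readjust to a smaller unit */
    }
    return try_extend_relation(MIN_EXTEND_BLOCKS);
}

int main(void)
{
    /* Asking for 4000 blocks when only 1000 are free still succeeds,
     * allocating 1000 blocks instead of failing outright. */
    printf("extended: %s, free left: %zu\n",
           extend_with_fallback(4000) ? "yes" : "no", free_blocks);
    return 0;
}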

Allocation should also take into account the amount of space left.
This is likely something to be determined per tablespace.

Based on that, allocation might work like this:

a) try to get x% of the currently allocated amount for the object
b) but not more than y% of the free space in the related tablespace
c) and never less than a necessary minimum (to limit overhead costs)
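
As a rough illustration of that rule in C (again my own sketch, not
PostgreSQL code; the percentages and the 8 kB block size are
assumptions picked only for the example):

/*
 * Rough sketch of the a)/b)/c) rule above.  The percentages are
 * placeholders, not proposed defaults.
 */
#include <stdio.h>

#define GROWTH_PERCENT      10  /* a) x% of the currently allocated amount */
#define FREE_SPACE_PERCENT   5  /* b) y% of the tablespace's free space */
#define MIN_EXTEND_BLOCKS    8  /* c) minimum, to limit per-extension overhead */

static long choose_extension_size(long current_blocks, long free_blocks)
{
    long want = current_blocks * GROWTH_PERCENT / 100;      /* a) */
    long cap  = free_blocks * FREE_SPACE_PERCENT / 100;     /* b) */

    if (want > cap)
        want = cap;
    if (want < MIN_EXTEND_BLOCKS)                           /* c) */
        want = MIN_EXTEND_BLOCKS;
    return want;
}

int main(void)
{
    /* A 1 GB table (131072 blocks of 8 kB) on a tablespace with 10 GB
     * free would grow by 13107 blocks, i.e. roughly 100 MB. */
    printf("%ld blocks\n", choose_extension_size(131072, 1310720));
    return 0;
}

On a nearly full tablespace the y% cap quickly pulls the increment back
toward the minimum, which is exactly the behavior I'd want near the end
of space.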

Rainer

On 25.03.2011 22:34, Tom Lane wrote:
> Greg Stark <gsstark(at)mit(dot)edu> writes:
>> On Fri, Mar 25, 2011 at 8:48 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>>> Interesting, but I don't understand/believe your argument as to why this
>>> is a bad idea or fixed-size extents are better. It sounds to me just
>>> like the typical Oracle DBA compulsion to have a knob to twiddle. A
>>> self-adjusting enlargement behavior seems smarter all round.
>> So is it ok for inserting one row to cause my table to grow by 90GB?
> If the table is already several TB, why not? The whole point here is
> that it's very unlikely that you're not going to be inserting more rows
> pretty soon.
>
>> Or should there be some maximum size increment at which it stops
>> growing? What should that maximum be? What if I'm on a big raid system
>> where that size doesn't even add a block to every stripe element?
>> Say you start with 64k (8 pg blocks). That means your growth
>> increments will be 64k, 70k, 77k, 85k, 94k, 103k, 113k, 125k, 137k,
>> ...
> I have no problem with trying to be smart about allocating in powers of
> 2, not allocating more than X at a time, etc etc. I'm just questioning
> the idea that the user should be bothered with this, or is likely to be
> smarter than the system about such things. Particularly if you believe
> that this problem actually justifies attention to such details. I think
> you've already demonstrated that a simplistic fixed-size allocation
> parameter probably *isn't* good enough.
>
> regards, tom lane
>
