Re: BUG #5946: Long exclusive lock taken by vacuum (not full)

From: Greg Stark <gsstark(at)mit(dot)edu>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Alvaro Herrera <alvherre(at)commandprompt(dot)com>, Christopher Browne <cbbrowne(at)gmail(dot)com>, Maxim Boguk <maxim(dot)boguk(at)gmail(dot)com>, pgsql-bugs <pgsql-bugs(at)postgresql(dot)org>
Subject: Re: BUG #5946: Long exclusive lock taken by vacuum (not full)
Date: 2011-03-25 21:09:33
Message-ID: AANLkTik11YkL2Otst7Uf0f-_3+YmTh6O8tFyg8CnQ5o2@mail.gmail.com
Lists: pgsql-bugs

On Fri, Mar 25, 2011 at 8:48 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Interesting, but I don't understand/believe your argument as to why this
> is a bad idea or fixed-size extents are better.  It sounds to me just
> like the typical Oracle DBA compulsion to have a knob to twiddle.  A
> self-adjusting enlargement behavior seems smarter all round.
>

So is it ok for inserting one row to cause my table to grow by 90GB?
Or should there be some maximum size increment at which it stops
growing? What should that maximum be? What if I'm on a big raid system
where that size doesn't even add a block to every stripe element?

Say you start with 64k (8 pg blocks). That means your growth
increments will be 64k, 70k, 77k, 85k, 94k, 103k, 113k, 125k, 137k,
...
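(The sequence above can be reproduced with a small sketch; the 1.1 growth factor is inferred from the quoted numbers, not stated anywhere in the thread:)

```python
def growth_increments(start_kb=64.0, factor=1.1, steps=9):
    """Each growth increment is ~10% larger than the previous one,
    rounded to whole kB -- matching the sequence quoted above."""
    incs = []
    size = start_kb
    for _ in range(steps):
        incs.append(round(size))
        size *= factor
    return incs

print(growth_increments())
# -> [64, 70, 77, 85, 94, 103, 113, 125, 137]
```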

I'm having trouble imagining a set of hardware and filesystem where
growing a table by 125k will be optimal. The next allocation will have
to do some or all of: a) go back and extend the previous allocation to
round it up to an extent boundary, b) add 128k more, and c) still
allocate the remaining 6k in a new allocation.
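(The arithmetic behind (a)-(c) works out like this -- a hypothetical sketch assuming 128k extents and the 125k/137k increments from the sequence above:)

```python
EXTENT_KB = 128  # assumed filesystem extent / stripe-element size

def split_allocation(prev_tail_kb, increment_kb):
    """Split a growth increment across extent boundaries.

    prev_tail_kb: size of the partially filled last extent (e.g. 125).
    Returns (kB needed to round the previous extent up,
             number of full new extents,
             leftover kB starting a new partial extent).
    """
    round_up = (EXTENT_KB - prev_tail_kb) % EXTENT_KB  # (a) pad previous extent
    remaining = increment_kb - round_up
    full_extents = remaining // EXTENT_KB              # (b) whole 128k extents
    leftover = remaining % EXTENT_KB                   # (c) new partial extent
    return round_up, full_extents, leftover

print(split_allocation(125, 137))
# -> (3, 1, 6): round up by 3k, add one 128k extent, 6k left over
```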

--
greg
