Re: Vacuum, visibility maps and SKIP_PAGES_THRESHOLD

From: Greg Stark <gsstark(at)mit(dot)edu>
To: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
Cc: Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Vacuum, visibility maps and SKIP_PAGES_THRESHOLD
Date: 2011-05-27 15:10:19
Message-ID: BANLkTinnGO9jSqB4FEBYXeXBQM_bxjgxwQ@mail.gmail.com
Lists: pgsql-hackers

On Fri, May 27, 2011 at 7:11 AM, Heikki Linnakangas
<heikki(dot)linnakangas(at)enterprisedb(dot)com> wrote:
> Well, as with normal queries, it's usually faster to just seqscan the whole
> table if you need to access more than a few percent of the pages, because
> sequential I/O is so much faster than random I/O.

Well, it's not strictly random access: you're still reading in order,
you're just skipping some pages. It will never be slower than a
sequential scan; it just might not be any faster. In my testing,
reading every 8th page took exactly as long as reading every page,
which makes sense, since the drive still has to seek to every track
exactly as if you were reading sequentially. IIRC, a speedup only
appeared once we read less often than every 8th page.
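The effect described above is easy to reproduce. The following is a toy Python sketch (not PostgreSQL code) that reads every n-th 8 kB page of a file and times it; on rotating media the elapsed time typically stays flat until n gets fairly large, for the track-seek reason given above. The function name and structure are illustrative only.

```python
import os
import time

PAGE = 8192  # PostgreSQL's default block size


def read_every_nth(path, n):
    """Read every n-th 8 kB page of the file sequentially.

    Returns (pages_read, elapsed_seconds). With buffering=0 we bypass
    Python's userspace buffer, though the OS page cache can still hide
    the effect; drop caches (or use a cold file) for a real measurement.
    """
    npages = os.path.getsize(path) // PAGE
    pages_read = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        for p in range(0, npages, n):
            f.seek(p * PAGE)   # skip ahead; still strictly forward motion
            f.read(PAGE)
            pages_read += 1
    return pages_read, time.perf_counter() - start
```

Comparing `read_every_nth(path, 1)` against `read_every_nth(path, 8)` on a large cold file is the experiment being described: an eighth of the I/O requests, but often the same wall-clock time.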

>> Do we have any numbers to prove what we have today is good ? Sorry, I
>> may not have followed the discussions very closely in the past and not
>> sure if this has been debated/tested already.
>
> I think that number was chosen quite arbitrarily. When you consider
> updating relfrozenxid, it's a bit difficult to decide what the optimal
> value would be; if you decide to skip pages, you might have to perform an
> extra anti-wraparound vacuum somewhere down the line.

It would be nice if the VM had a bit for "all-frozen", but that
wouldn't help much except in the case of truly cold data. We could
perhaps keep the frozen state per segment or per VM page (each of
which covers a large section of the table). Then updating a tuple
would make only a fixed amount of data vacuum-dirty, rather than the
whole table, which could be arbitrarily large.
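To make the bounded-dirtying idea concrete, here is a hypothetical Python sketch (not PostgreSQL code): one "all-frozen" flag per VM page instead of per heap page. The chunk size assumes the VM layout of one visibility bit per heap page, so one 8 kB VM page covers 65536 heap pages; the class and method names are invented for illustration.

```python
# One 8 kB VM page holds 8192 * 8 = 65536 bits, i.e. covers 65536 heap pages.
HEAP_PAGES_PER_VM_PAGE = 8192 * 8


class FrozenMap:
    """Coarse 'all-frozen' tracking: one flag per VM page ("chunk"),
    so an update dirties a bounded range, never the whole table."""

    def __init__(self, n_heap_pages):
        n_chunks = -(-n_heap_pages // HEAP_PAGES_PER_VM_PAGE)  # ceil division
        self.all_frozen = [False] * n_chunks

    def mark_frozen(self, chunk):
        # An anti-wraparound vacuum finished freezing this whole chunk.
        self.all_frozen[chunk] = True

    def note_update(self, heap_page):
        # A tuple update un-freezes only the chunk containing its page.
        self.all_frozen[heap_page // HEAP_PAGES_PER_VM_PAGE] = False

    def chunks_needing_freeze(self):
        # Only these chunks must be revisited at the next freeze pass.
        return [i for i, frozen in enumerate(self.all_frozen) if not frozen]
```

The point of the design is in `note_update`: after a single update, the next anti-wraparound pass revisits one chunk's worth of heap pages, not the entire relation.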

Separately, it's a bit strange that we actually have to visit the
pages at all. We have all the information we need in the VM to
determine whether there's a run of 32 vacuum-clean pages, so why not
look ahead at the next 32 pages and, if they're all vacuum-clean, skip
reading the heap for them entirely? What we do now is run the regular
vacuum algorithm and only realize, after we've processed 32 pages in a
row, that it was a waste of effort.
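The look-ahead could be as simple as the following toy Python sketch (not the actual vacuumlazy.c code; the function name is invented, though SKIP_PAGES_THRESHOLD matches the real constant): scan the VM bits up front, and emit page ranges marked skippable only when a run of all-visible pages meets the threshold.

```python
SKIP_PAGES_THRESHOLD = 32  # same value as the constant in vacuumlazy.c


def plan_vacuum_scan(all_visible):
    """Plan a vacuum pass from VM bits alone, before touching the heap.

    all_visible: one bool per heap page (True = all-visible in the VM).
    Returns a list of (start, end, skip) half-open ranges; runs of at
    least SKIP_PAGES_THRESHOLD all-visible pages are skipped up front,
    everything else must be visited.
    """
    ranges = []
    i, n = 0, len(all_visible)
    while i < n:
        j = i
        while j < n and all_visible[j] == all_visible[i]:
            j += 1  # extend the run of identical VM bits
        if all_visible[i] and j - i >= SKIP_PAGES_THRESHOLD:
            ranges.append((i, j, True))   # skip without reading the heap
        else:
            ranges.append((i, j, False))  # too short (or dirty): visit it
        i = j
    return ranges
```

This inverts the current behavior: instead of reading 32 pages and discovering afterward that the work was unnecessary, the decision is made from the VM before any heap page is fetched.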

--
greg
