From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Heikki Linnakangas <heikki(at)enterprisedb(dot)com>
Cc: Bruce Momjian <bruce(at)momjian(dot)us>, Gregory Stark <gsstark(at)mit(dot)edu>, pgsql-patches(at)postgresql(dot)org
Subject: Re: [pgsql-patches] Recalculating OldestXmin in a long-running vacuum
Date: 2007-02-04 18:23:29
Message-ID: 20457.1170613409@sss.pgh.pa.us
Lists: pgsql-patches

Heikki Linnakangas <heikki(at)enterprisedb(dot)com> writes:
> Tom Lane wrote:
>> BTW I've got serious reservations about whether this bit is safe:
>>
>>> + /* The table could've grown since vacuum started, and there
>>> + * might already be dead tuples on the new pages. Catch them
>>> + * as well. Also, we want to include any live tuples in the
>>> + * new pages in the statistics.
>>> + */
>>> + nblocks = RelationGetNumberOfBlocks(onerel);
>>
>> I seem to recall some assumptions somewhere in the system that a vacuum
>> won't visit newly-added pages.

> Hmm, I can't think of anything.

I think I was thinking of the second risk described here:
http://archives.postgresql.org/pgsql-hackers/2005-05/msg00613.php
which is now fixed, so maybe there's no longer any problem. (If there
is, a change like this will convert it from a very-low-probability
problem into a significant-probability problem, so I guess we'll
find out...)

I still don't like the patch though; rechecking the relation length
every N blocks is uselessly inefficient and still doesn't create any
guarantees about having examined everything. If we think this is
worth doing at all, we should arrange to recheck the length after
processing what we think is the last block, not at any other time.

regards, tom lane
