
Re: Idea for getting rid of VACUUM FREEZE on cold pages

From: Russell Smith <mr-russ(at)pws(dot)com(dot)au>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Idea for getting rid of VACUUM FREEZE on cold pages
Date: 2010-06-02 10:38:35
Message-ID: 4C0634AB.4010206@pws.com.au
Lists: pgsql-hackers
On 28/05/10 04:00, Josh Berkus wrote:
>>  Consider a table that is
>> regularly written but append-only.  Every time autovacuum kicks in,
>> we'll go and remove any dead tuples and then mark the pages
>> PD_ALL_VISIBLE and set the visibility map bits, which will cause
>> subsequent vacuums to ignore the all-visible portions of the table...
>> until anti-wraparound kicks in, at which point we'll vacuum the entire
>> table and freeze everything.
>>
>> If, however, we decree that you can't write a new tuple into a
>> PD_ALL_VISIBLE page without freezing the existing tuples, then you'll
>> still have the small, incremental vacuums but those are pretty cheap,
>>     
> That only works if those pages were going to be autovacuumed anyway.  In
> the case outlined above (which I've seen at 3 different production sites
> this year), they wouldn't be; a table with less than 2% updates and
> deletes does not get vacuumed until max_freeze_age for any reason.  For
> that matter, pages which are getting autovacuumed are not a problem,
> period; they're being read and written and freezing them is not an issue.
>
> I'm not seeing a way of fixing this common issue short of overhauling
> CLOG, or of creating a freeze_map.  Darn.
>   
Wouldn't you get a positive enough effect by adjusting the table's
autovacuum_freeze_min_age and autovacuum_freeze_max_age?  If you set
those numbers small, it appears to me that you would quickly reach a
state where the vacuum would examine only the most recent part of the
table rather than the whole thing.  Does that give you enough of a win
to stop the scanning and rewriting of the whole table, which is what
causes the performance problem being experienced?  It's not a complete
solution, but does it go some way?
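
For example, something along these lines on the append-mostly table
(the table name and the values are purely illustrative, and I'm
assuming the per-table storage parameters rather than the global GUCs):

    -- Illustrative sketch only: lower the freeze thresholds for one
    -- append-mostly table so tuples get frozen while their pages are
    -- still recent, rather than the whole table being scanned at once
    -- when the anti-wraparound vacuum finally kicks in.
    ALTER TABLE append_only_log SET (
        autovacuum_freeze_min_age = 10000000,   -- freeze sooner than the 50M default
        autovacuum_freeze_max_age = 100000000   -- trigger wraparound vacuum earlier than the 200M default
    );

That trades more frequent, smaller freeze passes for the single huge
scan-and-rewrite pass, which may or may not be a win depending on how
cold the old pages really are.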

Regards

Russell

