
Re: autovacuum next steps, take 2

From: "Matthew T(dot) O'Connor" <matthew(at)zeut(dot)net>
To: "Jim C(dot) Nasby" <jim(at)nasby(dot)net>
Cc: Hackers <pgsql-hackers(at)postgresql(dot)org>, Ron Mayer <rm_pg(at)cheapcomplexdevices(dot)com>, Gregory Stark <stark(at)enterprisedb(dot)com>
Subject: Re: autovacuum next steps, take 2
Date: 2007-02-27 03:18:36
Message-ID:
Lists: pgsql-hackers
Jim C. Nasby wrote:
> On Mon, Feb 26, 2007 at 06:23:22PM -0500, Matthew T. O'Connor wrote:
>> I'm not sure how pg_class.relpages is maintained but what happens to a 
>> bloated table?  For example, a 100 row table that is constantly updated 
>> and hasn't been vacuumed in a while (say the admin disabled autovacuum 
>> for a while), now that small 100 row table has 1000 pages in it most of 
>> which are just bloat, will we miss this table?  Perhaps basing this on 
>> reltuples would be better?
> The entire point of this is to ensure that the second daemon will only
> vacuum tables that it can finish very quickly. If you let a table bloat
> so it's too big, then you just can't vacuum it very frequently without
> risking all your other hot tables bloating because they're no longer
> getting vacuumed.
> The reality is that you can actually vacuum a pretty good-sized table in
> 60 seconds with typical cost-delay settings (ie: defaults except
> cost_delay set to 10). That means you can do 9 pages ~100 times a
> second, or 54k pages a minute. Even with a vacuum_cost_delay of 20,
> that's still 27k pages per minute.
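The arithmetic behind those quoted figures can be checked with a quick sketch. This assumes, as the quote does, that vacuum does roughly 9 pages of work per cost-accounting cycle and then sleeps for vacuum_cost_delay milliseconds (the real per-cycle page count depends on the mix of buffer hits, misses, and dirtied pages; 9 pages/cycle is simply taken from the estimate above):

```python
# Back-of-the-envelope check of the quoted vacuum throughput numbers.
# Assumption (from the quote, not from the actual cost accounting):
# vacuum processes ~9 pages per cycle, then sleeps vacuum_cost_delay ms.

def pages_per_minute(pages_per_cycle, cost_delay_ms):
    """Cycles per second is 1000 / cost_delay_ms, ignoring the work time
    itself (the sleep dominates at these settings)."""
    cycles_per_second = 1000 / cost_delay_ms
    return pages_per_cycle * cycles_per_second * 60

print(pages_per_minute(9, 10))  # 54,000 pages/minute
print(pages_per_minute(9, 20))  # 27,000 pages/minute
```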

At the risk of sounding like a broken record, I still think the size-limit 
threshold is unnecessary.  Since all workers process tables in size order, 
younger workers will typically catch older workers fairly quickly: the 
tables they pass over will be either small or recently vacuumed and not in 
need of work.  And since younger workers exit when they catch up to an 
older worker, there is some inherent stability in the number of workers.

Here is a worst-case example: a DB with six tables, all of which are highly 
active and need to be vacuumed constantly.  While this is totally 
hypothetical, it is how I envision things working (without the threshold).

table1:        10 rows
table2:       100 rows
table3:     1,000 rows
table4:    10,000 rows
table5:   100,000 rows
table6: 1,000,000 rows

time=0*naptime: No workers in the DB

time=1*naptime: worker1 starts on table1

time=2*naptime: worker1 has finished table1, table2, and table3 and is now 
working on table4; worker2 starts on table1.

time=3*naptime: worker1 is on table5, worker2 is working on table4, 
worker3 starts on table1.

time=4*naptime: worker1 is still on table5; worker2 has caught up to 
worker1 and exits; worker3 also catches up to worker1 (tables 2-4 didn't 
require vacuuming at this time) and exits; worker4 starts on table1.

time=5*naptime: worker1 is working on table6, worker4 is up to table4, 
worker5 starts on table1

time=6*naptime: worker1 is working on table6, worker4 catches up to 
worker1 and exits, worker5 finds no additional work to be done and 
exits, worker6 starts on table1.

time=7*naptime: worker1 still working on table6, worker6 is up to 
table4, worker7 starts on table1.

time=8*naptime: worker1 still working on table6, worker6 still working 
on table4, worker7 working on table3, worker8 starts on table1.

time=9*naptime: worker1 still working on table6, worker6 working on 
table5, worker7 catches worker6 and exits, worker8 finds nothing more 
to do and exits, worker9 starts on table1.

time=10*naptime: worker1 still working on table6, worker9 working on 
table4, worker10 starts on table1.

