
Re: [HACKERS] More detail on settings for pgavd?

From: "Matthew T(dot) O'Connor" <matthew(at)zeut(dot)net>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: Shridhar Daithankar <shridhar_daithankar(at)myrealbox(dot)com>, pgsql-performance(at)postgresql(dot)org, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] More detail on settings for pgavd?
Date: 2003-11-21 00:40:15
Message-ID: 3FBD5EEF.7060205@zeut.net
Lists: pgsql-hackers, pgsql-performance
Josh Berkus wrote:

>Matthew,
>
>>For small tables, you don't need to vacuum too often.  In the testing I
>>did, a small table of ~100 rows didn't really show significant performance
>>degradation until it had close to 1000 updates.
>>
>This is accounted for by the "threshold" value; that way small tables
>get vacuumed less often.  However, the way large tables work is very different,
>and I think your strategy shows a lack of testing on large active tables.
>
Probably more true than I would like to think...

>>For large tables,
>>vacuum is so expensive, that you don't want to do it very often, and
>>scanning the whole table when there is only 5% wasted space is not very
>>helpful.
>>
>5% is probably too low, you're right ... in my experience, performance
>degradation starts to set in at 10-15% updates to, for example, a 1.1 million
>row table, particularly since users tend to request the most recently updated
>rows.  As long as we have the I/O issues that Background Writer and ARC are
>intended to solve, though, I can see being less aggressive on the defaults;
>perhaps 20% or 25%.  If you wait until 110% of a 1.1 million row table is
>updated, though, that vacuum will take an hour or more.
>
True, but I think it would be one hour once, rather than 30 minutes 4 times.

>Additionally, you are not thinking of this in terms of an overall database
>maintenance strategy.  Lazy Vacuum needs to stay below the threshold of the
>Free Space Map (max_fsm_pages) to prevent creeping bloat from setting in to
>your databases.  With proper configuration of pg_avd, vacuum_mem and FSM
>values, it should be possible to never run a VACUUM FULL again, and as of 7.4,
>never run a REINDEX again either.
>
This is one of the things I had hoped to add to pg_autovacuum, but never
got to.  In addition to the information from the stats collector on
inserts, updates, and deletes, pg_autovacuum should also look at the FSM
and make decisions based on it.  Anyone looking for a project?

>But this means running vacuum frequently enough that your max_fsm_pages 
>threshold is never reached.   Which for a large database is going to have to 
>be more frequently than 110% updates, because setting 20,000,000 
>max_fsm_pages will eat your RAM.
>
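As a back-of-the-envelope check on why a huge max_fsm_pages "will eat your RAM" (a hedged sketch: the ~6 bytes of shared memory per FSM page slot is the figure the 7.4-era documentation gives, and the actual overhead may differ by version):

```python
# Rough shared-memory cost of the free space map.
# Assumption: ~6 bytes of shared memory per FSM page slot,
# per the PostgreSQL 7.4-era documentation.
BYTES_PER_FSM_SLOT = 6

def fsm_shared_memory_mb(max_fsm_pages: int) -> float:
    """Approximate shared memory consumed by max_fsm_pages slots, in MB."""
    return max_fsm_pages * BYTES_PER_FSM_SLOT / (1024 * 1024)

# 20,000,000 page slots works out to roughly 114 MB of shared memory.
print(round(fsm_shared_memory_mb(20_000_000)))  # -> 114
```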
Again, I think the only way to do this efficiently is to look at the
FSM.  Otherwise the only way to make sure you keep the FSM populated is
to run vacuum more often than needed.

>>Yes, I set the defaults a little high, perhaps, so as to err on the
>>side of caution.  I didn't want people to say pg_autovacuum kills the
>>performance of my server.  A small table will get vacuumed, just not
>>until it has reached the threshold.  So a table with 100 rows will get
>>vacuumed after 1200 updates / deletes.
>>
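The 100-rows-to-1200-changes example follows from pg_autovacuum's linear trigger, base + scale * tuples; a minimal sketch, assuming the base of 1000 and vacuum scale factor of 2 implied by the numbers in this thread:

```python
def vacuum_trigger(num_tuples: int, base: int = 1000, scale: float = 2.0) -> float:
    """pg_autovacuum-style linear trigger: vacuum once the count of
    updates + deletes exceeds base + scale * num_tuples.
    The defaults here are inferred from the thread, not authoritative."""
    return base + scale * num_tuples

print(vacuum_trigger(100))        # 100-row table  -> 1200.0 changes
print(vacuum_trigger(1_000_000))  # 1M-row table   -> 2001000.0 changes (~200%)
```

This is exactly the linearity Josh objects to below: the scale factor dominates for large tables, so they must accumulate changes amounting to a multiple of their size before a vacuum fires.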
>Ok, I can see that for small tables.
>
>>In my testing it showed that
>>there were no major performance problems until you reached several
>>thousand updates / deletes.
>>
>Sure.  But several thousand updates can be only 2% of a very large table.
>
But I can't imagine that 2% makes any difference on a large table.  In
fact, I would think that 10-15% would hardly be noticeable; beyond that
I'm not sure.

>>HUH?  Analyze is very cheap compared to vacuum.  Why not do it more
>>often?
>>
>Because nothing is cheap if it's not needed.   
>
>Analyze is needed only as often as the *aggregate distribution* of data in the 
>tables changes.   Depending on the application, this could be frequently, but 
>far more often (in my experience running multiple databases for several 
>clients) the data distribution of very large tables changes very slowly over 
>time.  
>
Valid points, and again I think this points to the fact that
pg_autovacuum needs to be more configurable.  Being able to set
different thresholds for different tables will help considerably.  In
fact, you may find that some tables should have a vacuum threshold much
larger than the analyze threshold, while other tables might want the
opposite.

>One client's database, for example, that I have running VACUUM from cron
>scripts runs on this schedule for the main tables:
>VACUUM only: twice per hour
>VACUUM ANALYZE: twice per day
>
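That schedule could be expressed with ordinary crontab entries (a sketch only; the database name is a placeholder, and the exact times are arbitrary):

```shell
# crontab entries approximating the schedule above ("mydb" is a placeholder)
# VACUUM only: twice per hour, at :00 and :30
0,30 * * * *   vacuumdb --quiet mydb
# VACUUM ANALYZE: twice per day, at 03:15 and 15:15
15 3,15 * * *  vacuumdb --quiet --analyze mydb
```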
I would be surprised if you can notice the difference between a VACUUM
ANALYZE and a plain VACUUM, especially on large tables.

>On the other hand, I've another client's database where most activity involves 
>updates to entire classes of records.   They run ANALYZE at the end of every 
>transaction.
>
>So if you're going to have a separate ANALYZE schedule at all, it should be
>slightly less frequent than VACUUM for large tables.  Either that, or drop
>the idea, and simplify pg_avd by running VACUUM ANALYZE all the time instead
>of having 2 separate schedules.
>
I think you need two separate schedules.  There are lots of times when
a vacuum doesn't help and an analyze is all that is needed, and an
analyze is MUCH cheaper than a vacuum.

>BUT .... now I see how you arrived at the logic you did.  If you're testing 
>only on small tables, and not vacuuming them until they reach 110% updates, 
>then you *would* need to analyze more frequently.     This is because of your 
>threshold value ... you'd want to analyze the small table as soon as even 30% 
>of its rows changed.
>
>So the answer is to dramatically lower the threshold for the small tables.
>
Perhaps.

>>What I think I am hearing is that people would like very much to be able
>>to tweak the settings of pg_autovacuum for individual tables / databases
>>etc. 
>>
>Not from me you're not.   Though that would be nice, too.
>
>So, my suggested defaults based on our conversation above:
>
>Vacuum threshold: 1000 records
>Vacuum scale factor:  0.2
>Analyze threshold:  50 records
>Analyze scale factor: 0.3
>
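Plugging those suggested defaults into the same linear trigger shows how the two schedules would relate across table sizes (a sketch; the base + scale * tuples form is the one discussed in this thread):

```python
def trigger(tuples: int, base: int, scale: float) -> float:
    """Proposed pg_avd trigger: fire after base + scale * tuples row changes."""
    return base + scale * tuples

# Josh's suggested defaults from the message above.
VAC_BASE, VAC_SCALE = 1000, 0.2
ANA_BASE, ANA_SCALE = 50, 0.3

for n in (100, 10_000, 1_000_000):
    print(n, trigger(n, VAC_BASE, VAC_SCALE), trigger(n, ANA_BASE, ANA_SCALE))
# For a 1M-row table this vacuums after 201,000 changes (~20% of the table)
# but analyzes only after 300,050 changes (~30%), i.e. analyze runs slightly
# less often than vacuum on large tables, as suggested above.
```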
I'm open to discussion on changing the defaults.  Perhaps it would be
better to use some non-linear (perhaps logarithmic) scaling factor, so
that you wound up with something roughly like this:

#tuples   activity% for vacuum
1k         100%
10k         70%
100k        45%
1M          20%
10M         10%
100M         8%

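One way to realize that kind of curve (a sketch, not part of pg_autovacuum: it simply interpolates the proposed table linearly in log10(tuples) space, clamping at both ends):

```python
import math

# The proposed non-linear schedule, as (tuples, activity % before vacuum).
SCHEDULE = [(1e3, 100), (1e4, 70), (1e5, 45), (1e6, 20), (1e7, 10), (1e8, 8)]

def activity_pct(tuples: float) -> float:
    """Piecewise-linear interpolation of SCHEDULE in log10(tuples) space."""
    x = math.log10(tuples)
    pts = [(math.log10(t), p) for t, p in SCHEDULE]
    if x <= pts[0][0]:
        return pts[0][1]
    if x >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

print(activity_pct(1e3))  # -> 100
print(activity_pct(1e5))  # -> 45.0
```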
Thanks for the lucid feedback / discussion.  autovacuum is a feature
that, despite its simple implementation, has generated a lot of
feedback from users, and I would really like to see it become something
closer to what it should be.


