
Re: Vacuum rate limit in KBps

From: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
To: Greg Smith <greg(at)2ndQuadrant(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Vacuum rate limit in KBps
Date: 2012-01-15 21:17:54
Lists: pgsql-hackers
On 15.01.2012 10:24, Greg Smith wrote:
> That got me thinking: if MB/s is what everyone wants to monitor, can we
> provide a UI to set these parameters that way too? The attached patch is
> a bit rough still, but it does that. The key was recognizing that the
> cost delay plus cost limit can be converted into an upper limit on cost
> units per second, presuming the writes themselves are free. If you then
> also assume the worst case--that everything will end up dirty--by
> throwing in the block size, too, you compute a maximum rate in MB/s.
> That represents the fastest you can possibly write.
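[The conversion Greg describes can be sketched with the stock 9.1 defaults (vacuum_cost_limit = 200, a 20 ms cost delay as autovacuum uses, vacuum_cost_page_dirty = 20, 8 kB blocks). The function name and formula below are illustrative, not actual PostgreSQL code:

```python
BLOCK_SIZE = 8192  # bytes per page, the default PostgreSQL block size

def max_dirty_write_rate(cost_limit=200, cost_delay_ms=20,
                         page_dirty_cost=20):
    """Worst-case write rate, assuming every page vacuumed ends up dirty:
    vacuum accrues up to cost_limit units, then sleeps cost_delay ms."""
    units_per_sec = cost_limit * (1000.0 / cost_delay_ms)   # 10000 units/s
    pages_per_sec = units_per_sec / page_dirty_cost          # 500 pages/s
    return pages_per_sec * BLOCK_SIZE                        # bytes/s

# 200 units per 20 ms sleep -> 10000 units/s -> 500 dirty pages/s
print(max_dirty_write_rate() / (1024 * 1024))  # ~3.9 MiB/s
```

which matches the often-quoted ~4 MB/s default dirty-write ceiling. -Ed.]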

+1. I've been thinking we should do that for a long time, but haven't 
gotten around to it.

I think it makes more sense to use the max read rate as the main knob, 
rather than the write rate. That's because the max read rate is higher 
than the write rate when pages don't need to be dirtied. Or do you think 
saturating the I/O system with writes is so much bigger a problem than 
read I/O that it makes more sense to emphasize the writes?

I was thinking of something like this, in postgresql.conf:

# - Vacuum Throttling -

#vacuum_cost_page_miss = 1.0		# measured on an arbitrary scale
#vacuum_cost_page_dirty = 2.0		# same scale as above
#vacuum_cost_page_hit = 0.1		# same scale as above
#vacuum_rate_limit = 8MB		# max reads per second
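[A sketch of how the proposed knob might map back to a per-second cost budget, assuming vacuum_rate_limit caps the all-misses read rate; the names and formula are my illustration of the proposal, not actual code:

```python
BLOCK_SIZE = 8192  # bytes per page

def cost_budget_per_sec(rate_limit_bytes=8 * 1024 * 1024,
                        page_miss_cost=1.0):
    """Cost units/s such that a stream of pure page misses
    reads exactly rate_limit_bytes per second."""
    return (rate_limit_bytes / BLOCK_SIZE) * page_miss_cost

budget = cost_budget_per_sec()  # 1024 units/s for the 8MB default
```

A page hit (0.1) would then charge a tenth of a miss, and a dirtied page (2.0) twice a miss, against the same per-second budget. -Ed.]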

This is now similar to the cost settings for the planner, which is good.

> There's one serious concern I don't have a quick answer to. What do we
> do with in-place upgrade of relations that specified a custom
> vacuum_cost_limit? I can easily chew on getting the right logic to
> convert those to equals in the new setting style, but I am not prepared
> to go solely on the hook for all in-place upgrade work one might do
> here. Would this be easiest to handle as one of those dump/restore
> transformations?

It needs to be handled at dump/restore time. I'm not sure where that 
transformation belongs, though. Do we have any precedent for this? I 
think we have two options:

1. Accept the old "autovacuum_cost_limit" setting in CREATE TABLE, and 
transform it immediately into the corresponding autovacuum_rate_limit 
setting.

2. Transform in pg_dump, so that the CREATE TABLE statements in the dump 
use the new autovacuum_rate_limit setting.

The advantage of option 1 is that dumps taken with the old 9.1 pg_dump 
still work on a 9.2 server. We usually try to preserve that 
backwards-compatibility, although we always recommend using the pg_dump 
from the newer version on upgrade. However, you need to know the 
vacuum_cost_page_miss setting in effect on the old server to do the 
transformation correctly (or vacuum_cost_page_dirty, if we use the max 
write rate as the main knob as you suggested), and we don't have access 
to that when restoring a dump.
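[A hypothetical sketch of the option-1 transformation, rewriting an old per-table cost limit into the proposed rate limit. Note the dependency on the *old server's* vacuum_cost_page_miss (default 10), which is exactly what a restore has no access to. The function is illustrative only:

```python
BLOCK_SIZE = 8192  # bytes per page

def cost_limit_to_rate_limit(cost_limit, cost_delay_ms=20,
                             page_miss_cost=10):
    """Equivalent max read rate in bytes/s under the old cost model
    (page_miss_cost is the 9.1 default of 10)."""
    units_per_sec = cost_limit * (1000.0 / cost_delay_ms)
    return (units_per_sec / page_miss_cost) * BLOCK_SIZE

# A table with autovacuum_vacuum_cost_limit = 400 on a default 9.1
# server would become roughly a 15.6 MiB/s read limit.
print(cost_limit_to_rate_limit(400) / (1024 * 1024))
```

Change page_miss_cost and the computed rate limit changes proportionally, which is why the transformation can't be done blindly at restore time. -Ed.]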

> My guess is that's more sensible than the alternative
> of making an on-read converter that only writes in the new format, then
> worrying about upgrading all old pages before moving forward.

This doesn't require any page format changes, so I don't think the above 
sentence makes sense.

   Heikki Linnakangas


