Re: Should we increase the default vacuum_cost_limit?

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>
Cc: Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, Jeremy Schneider <schnjere(at)amazon(dot)com>, Joe Conway <mail(at)joeconway(dot)com>, Peter Geoghegan <pg(at)bowt(dot)ie>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Should we increase the default vacuum_cost_limit?
Date: 2019-03-09 16:31:36
Message-ID: 21148.1552149096@sss.pgh.pa.us
Lists: pgsql-hackers

David Rowley <david(dot)rowley(at)2ndquadrant(dot)com> writes:
> I agree that vacuum_cost_delay might not be granular enough, however.
> If we're going to change vacuum_cost_delay to microseconds, then
> I'm a little concerned that it'll silently break existing code that
> sets it. Scripts that do manual off-peak vacuums are pretty common
> out in the wild.

True. Perhaps we could keep the units as ms but make it a float?
Not sure if the "units" logic can cope, though.
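
For illustration, a sketch of how that could behave, assuming the GUC
becomes a float while keeping ms units (the values below are examples,
not proposed defaults):

    -- Whole-number settings keep meaning what they always did:
    SET vacuum_cost_delay = 10;    -- still 10ms

    -- but sub-millisecond granularity becomes available:
    SET vacuum_cost_delay = 0.5;   -- 0.5ms, i.e. 500 microseconds

    -- Silently reinterpreting the bare number as microseconds, by
    -- contrast, would turn the first SET above into a 10us delay.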

> My vote is to 10x the maximum for vacuum_cost_limit and consider
> changing how it all works in PG13. If nothing happens before this
> time next year then we can consider making vacuum_cost_delay a
> microseconds GUC.

I'm not really happy with the idea of changing the defaults in this area
and then changing them again next year. That's going to lead to a lot
of confusion, and a mess for people who may have changed (some of)
the settings manually.
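
For concreteness on the 10x idea quoted above: vacuum_cost_limit
currently has a default of 200 and a range of 1..10000, so only the
ceiling would move (the value below assumes the raised maximum and is
rejected as out of range today):

    SHOW vacuum_cost_limit;          -- 200 by default
    SET vacuum_cost_limit = 100000;  -- hypothetical: 10x the current max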

regards, tom lane
