Re: Should we increase the default vacuum_cost_limit?

From: Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>
To: David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, Jeremy Schneider <schnjere(at)amazon(dot)com>, Joe Conway <mail(at)joeconway(dot)com>, Peter Geoghegan <pg(at)bowt(dot)ie>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Should we increase the default vacuum_cost_limit?
Date: 2019-03-09 13:28:22
Message-ID: 203719fc-1970-4a49-8013-eba3fd109b19@2ndQuadrant.com
Lists: pgsql-hackers


On 3/9/19 4:28 AM, David Rowley wrote:
> On Sat, 9 Mar 2019 at 16:11, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> I propose therefore that instead of increasing vacuum_cost_limit,
>> what we ought to be doing is reducing vacuum_cost_delay by a similar
>> factor. And, to provide some daylight for people to reduce it even
>> more, we ought to arrange for it to be specifiable in microseconds
>> not milliseconds. There's no GUC_UNIT_US right now, but it's time.
>> (Perhaps we should also look into using other delay APIs, such as
>> nanosleep(2), where available.)
> It does seem like a genuine concern that vacuuming might become too
> much all-or-nothing. It's no good being on a high-speed train if it
> stops at every platform.
>
> I agree, however, that vacuum_cost_delay might not be granular enough.
> If we change vacuum_cost_delay to microseconds, though, I'm a little
> concerned that it'll silently break existing code that sets it:
> scripts that do manual off-peak vacuums are pretty common out in the
> wild.

Maybe we could leave the default unit as msec but store the value in
usec and allow specifying it in usec as well. I'm not sure how well the
GUC mechanism would cope with that.

[other good ideas]

>> I don't have any particular objection to kicking up the maximum
>> value of vacuum_cost_limit by 10X or so, if anyone's hot to do that.
>> But that's not where we ought to be focusing our concern. And there
>> really is a good reason, not just nannyism, not to make that
>> setting huge --- it's just the wrong thing to do, as compared to
>> reducing vacuum_cost_delay.
> My vote is to 10x the maximum for vacuum_cost_limit and to consider
> changing how it all works in PG13. If nothing happens before this
> time next year, then we can consider making vacuum_cost_delay a
> microseconds GUC.
>

+1.

cheers

andrew

--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
