From: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
To: Joe Conway <mail(at)joeconway(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Allow to specify (auto-)vacuum cost limits relative to the database/cluster size?
Date: 2016-02-24 16:54:03
Message-ID: 20160224165403.GA413518@alvherre.pgsql
Lists: pgsql-hackers

Joe Conway wrote:

> In my experience it is almost always best to run autovacuum very often
> and very aggressively. That generally means tuning the scaling factor
> and thresholds as well, such that there are never more than, say,
> 50-100k dead rows. Then running vacuum with no delays or limits is
> quite fast and is generally not noticeable/impactful.
>
> However, that strategy does not work well for vacuums that run long,
> such as an anti-wraparound vacuum. So in my opinion we need to think
> about this as at least two distinct cases requiring different solutions.
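[For concreteness, the aggressive per-table tuning described above might
look like the following sketch; the table name and values are
illustrative, not taken from this thread:

    ALTER TABLE busy_table SET (
        autovacuum_vacuum_scale_factor = 0.01,  -- fire at ~1% dead rows
        autovacuum_vacuum_threshold    = 1000,  -- plus a small absolute floor
        autovacuum_vacuum_cost_delay   = 0      -- no throttling: "no delays or limits"
    );

With these settings, autovacuum triggers once dead rows exceed
1000 + 0.01 * reltuples, which keeps the dead-row count low on tables of
moderate size, and the cost delay of zero lets each run finish quickly.]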

With the freeze map there is no need for anti-wraparound vacuums to be
terribly costly, since they don't need to scan the whole table each
time. That patch probably changes things a lot in this area.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
