From: Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>, <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Allow to specify (auto-)vacuum cost limits relative to the database/cluster size?
Date: 2016-02-22 00:40:19
Message-ID: 56CA58F3.3090902@BlueTreble.com
Lists: pgsql-hackers

On 1/12/16 6:42 AM, Andres Freund wrote:
> Somehow computing the speed in relation to the cluster/database size is
> probably possible, but I wonder how we can do so without constantly
> re-computing something relatively expensive?

ISTM relpages would probably be good enough for this, if we take the
extra step of getting actual relation size when relpages is 0.
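A minimal sketch of the idea, in illustrative Python rather than backend C: sum `relpages` (cached in `pg_class`) for a cheap size estimate, falling back to an actual on-disk size lookup only when `relpages` is 0. The function and callback names here are hypothetical, not PostgreSQL APIs.

```python
# Hypothetical sketch: estimate database size from pg_class.relpages,
# falling back to measuring the actual relation size when relpages is 0
# (e.g. a relation that has never been vacuumed/analyzed).

BLCKSZ = 8192  # default PostgreSQL block size in bytes

def estimate_database_bytes(relations, actual_size_fn):
    """relations: iterable of (relname, relpages) pairs.
    actual_size_fn: callback returning a relation's on-disk size in bytes."""
    total = 0
    for relname, relpages in relations:
        if relpages > 0:
            total += relpages * BLCKSZ   # cheap: use the cached statistic
        else:
            total += actual_size_fn(relname)  # rare: stat the file directly
    return total
```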

I'm not sure a straight scale factor is the way to go though... it seems
that might be problematic? I think we'd at least want a minimum default
value; you certainly don't want even a small system running vacuum at
1kB/s...
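The floor-plus-scale-factor idea above could look something like the following sketch. The scale factor and minimum rate are made-up illustration values, not proposed defaults.

```python
# Hypothetical sketch of a size-relative vacuum throughput target with a
# minimum floor, so a small database never ends up throttled to e.g. 1kB/s.

MIN_RATE_BYTES_PER_SEC = 1024 * 1024  # illustrative floor: 1 MB/s

def vacuum_rate(db_size_bytes, scale_factor=1e-6):
    """Target vacuum throughput: a fraction of database size per second,
    clamped from below by the minimum rate."""
    return max(db_size_bytes * scale_factor, MIN_RATE_BYTES_PER_SEC)
```

On a tiny database the floor dominates; only once the database is large enough does the scale factor take over.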
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
