Re: WAL insert delay settings

From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>, Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Stephen Frost <sfrost(at)snowman(dot)net>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: WAL insert delay settings
Date: 2019-02-19 18:43:14
Message-ID: e8e852eb-31b0-1fd9-a385-5b2e5dad9718@2ndquadrant.com
Lists: pgsql-hackers

On 2/19/19 7:35 PM, Andres Freund wrote:
> Hi,
>
> On 2019-02-19 13:28:00 -0500, Robert Haas wrote:
>> On Fri, Feb 15, 2019 at 1:42 PM Andres Freund <andres(at)anarazel(dot)de> wrote:
>>> I think it'd not be insane to add two things:
>>> - WAL write rate limiting, independent of the vacuum stuff. It'd also be
>>> used by lots of other bulk commands (CREATE INDEX, ALTER TABLE
>>> rewrites, ...)
>>> - Account for WAL writes in the current vacuum costing logic, by
>>> accounting for it using a new cost parameter
>>>
>>> Then VACUUM would be throttled by the *minimum* of the two, which seems
>>> to make plenty of sense to me, given the use cases.
>>
>> Or maybe we should just blow up the current vacuum cost delay stuff
>> and replace it with something that is easier to tune. For example, we
>> could just have one parameter that sets the maximum read rate in kB/s
>> and another that sets the maximum dirty-page rate in kB/s. Whichever
>> limit is tighter binds. If we also have the thing that is the topic
>> of this thread, that's a third possible upper limit.
>
>> I really don't see much point in doubling down on the current vacuum
>> cost delay logic. The overall idea is good, but the specific way that
>> you have to set the parameters is pretty inscrutable, and I think we
>> should just fix it so that it can be, uh, scruted.
>
> I agree that that's something worthwhile to do, but given that the
> proposal in this thread is *NOT* just about VACUUM, I don't see why it'd
> be useful to tie general WAL rate limiting to a rewrite of vacuum's cost
> limiting. It seems better to write the WAL rate limiting logic with an
> eye towards structuring it in a way that'd potentially allow reusing
> some of the code for a better VACUUM cost limiting.
>
> I still don't *AT ALL* buy Stephen and Tomas' argument that it'd be
> confusing that when both VACUUM and WAL cost limiting are active, the
> lower limit takes effect.
>

Except that's not my argument. I'm not arguing against throttling once
we hit the minimum of the two limits.
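
Just to make that part explicit, conceptually I'd expect the combined
behavior to look roughly like the sketch below. All the helper names are
invented for illustration -- this is not code from any actual patch:

#include "postgres.h"   /* pg_usleep() etc. */

/* hypothetical helpers, standing in for the two throttling mechanisms */
extern double vacuum_cost_based_delay_ms(void);  /* existing cost model */
extern double wal_rate_limit_delay_ms(void);     /* proposed WAL limit */

static void
throttle_if_needed(void)
{
    double  delay_vacuum = vacuum_cost_based_delay_ms();
    double  delay_wal = wal_rate_limit_delay_ms();
    double  delay;

    /*
     * The stricter (lower) rate limit is the one that binds, which in
     * terms of sleeping means taking the longer of the two delays.
     */
    delay = (delay_vacuum > delay_wal) ? delay_vacuum : delay_wal;

    if (delay > 0)
        pg_usleep((long) (delay * 1000.0));
}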

The problem I have with implementing separate throttling logic is that
it also changes the other limits (which are already kinda fuzzy). If you
add sleeps somewhere outside the cost model, they will affect the
throttling built into autovacuum, effectively lowering those limits in
some hard-to-predict way.

From this POV it would be better to include the WAL cost in the existing
vacuum cost limit, because then it's at least subject to the same budget.
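
To illustrate, a very rough sketch -- vacuum_cost_wal_byte is an invented
parameter, not an existing GUC, and where exactly the charging would hook
in is hand-waved here:

#include "postgres.h"
#include "miscadmin.h"      /* VacuumCostBalance, VacuumCostActive */

/* invented GUC, for illustration only: cost units per WAL byte written */
double      vacuum_cost_wal_byte = 0.001;

/*
 * Hypothetical accounting call for the places where vacuum emits WAL.
 * The WAL volume goes into the same balance as page hits/misses/dirties,
 * so vacuum_cost_limit / vacuum_cost_delay still describe the whole
 * throttling -- there is no extra sleep outside the cost model.
 */
static void
vacuum_cost_account_wal(Size nbytes)
{
    if (!VacuumCostActive)
        return;

    VacuumCostBalance += (int) (nbytes * vacuum_cost_wal_byte);

    /*
     * The existing vacuum_delay_point() then sleeps once the balance
     * reaches VacuumCostLimit, exactly as it does today.
     */
}

That way there's still a single budget -- accounting for WAL just makes
vacuum consume it faster, instead of introducing a second, independent
source of sleeps that the cost model knows nothing about.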

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
