From: Peter Geoghegan <pg(at)bowt(dot)ie>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Why doesn't pgstat_report_analyze() focus on not-all-visible-page dead tuple counts, specifically?
Date: 2021-12-07 23:20:16
Message-ID: CAH2-Wzk2h8ikU-Dhmph7eZ_NqfPXaiHa50o1U2phMfcQhwvwVg@mail.gmail.com
Lists: pgsql-hackers

On Tue, Dec 7, 2021 at 1:59 PM Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> If we're only trying to decide whether or not to vacuum a table, we
> don't need units: the output is a Boolean.

I was imagining a world in which we preserve the
autovacuum_vacuum_scale_factor design but interpret it creatively
(though never too creatively) -- an incremental approach seems best to
me. We can even sanity-check the abstract bloat unit calculation, in
case the page-level sampling aggregates into a totally wild number of
dead tuples (based in part on the current number of not-all-visible
heap pages) -- that way the abstract units are always anchored to the
old idea of dead tuples. Maybe this isn't the best approach, but at
least it addresses compatibility.

*Any* approach based on sampling relatively few random blocks (to look
for signs of bloat) is inherently prone to hugely underestimating the
extent of bloat (which is what we see in TPC-C). I am primarily
concerned about compensating for the inherent limitations that go with
that. To me it seems inappropriate to make statistical inferences
about dead tuples based on a random snapshot of random blocks (usually
only a tiny minority). It is not only possible for the picture to
change utterly -- it is routine, expected, and the entire point.

The entire intellectual justification for statistical sampling (that
mostly works for optimizer stats) just doesn't carry over to
autovacuum stats, for many reasons. At the same time, I don't have any
fundamentally better starting point. That's how I arrived at the idea
of probabilistic modeling based on several recent snapshots from
ANALYZE. The statistics are often rubbish, whether we like it or not,
and regardless of how we decide to count things on each page. And so
it's entirely reasonable not to limit the algorithm to the observed
state of things -- the system's actual exposure to harm (from
overlooking harmful bloat) is also relevant.
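One way to model "several recent snapshots" probabilistically is
sketched below, purely as an assumption on my part: treat the last few
ANALYZE estimates as noisy draws from a distribution, and err toward
vacuuming by triggering on the mean plus some number of standard
deviations rather than the point estimate alone. All names and the
k=1.0 default are illustrative:

```python
import statistics

def bloat_signal(recent_samples, threshold, k=1.0):
    """recent_samples: dead-tuple (or bloat-unit) estimates from the
    last few ANALYZE snapshots. Because each snapshot is noisy, trigger
    when mean + k standard deviations crosses the threshold -- erring
    in the direction of launching a worker when the data is uncertain."""
    mean = statistics.fmean(recent_samples)
    sd = statistics.pstdev(recent_samples)
    return mean + k * sd >= threshold
```

With samples [90, 100, 110] and a threshold of 105, the point estimate
(100) is below the threshold, but the noise-aware signal fires anyway.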

> If we're trying to decide
> on an order in which to vacuum tables, then we need units. But such
> units can't be anything related to dead tuples, because vacuum can be
> needed based on XID age, or MXID age, or dead tuples. The units would
> have to be something like abstract vacuum-urgency units (if higher is
> more urgent) or abstract remaining-headroom-before-catastrophe units
> (if lower is more urgent).

I like that idea. But I wonder whether the two should really be
totally unrelated. If we're close to the "emergency" XID threshold,
and also close to the "bloat units" threshold, then it seems
reasonable to put our finger on the scales, and do an autovacuum
before either threshold is crossed. I'm not sure exactly how that
should work, but I find the idea of interpreting the "bloat units"
creatively/probabilistically appealing.
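As a strawman for the finger-on-the-scales idea (every name and the
1.5 cutoff are made up for illustration): express each signal as the
fraction of its own threshold already consumed, and let two signals
that are each close, but not quite there, combine to trigger early:

```python
def combined_urgency(xid_frac, bloat_frac, combined_cutoff=1.5):
    """Each argument is the fraction of its own threshold consumed
    (1.0 = threshold crossed). Either signal alone can trigger; two
    signals that are both close can also trigger together, before
    either individual threshold is crossed."""
    if xid_frac >= 1.0 or bloat_frac >= 1.0:
        return True
    return xid_frac + bloat_frac >= combined_cutoff
```

So 80% of the XID threshold plus 80% of the bloat threshold triggers
an autovacuum, while 80% plus 50% does not.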

We're not actually making things up by erring in the direction of
launching an autovacuum worker, because we don't actually know the
number of dead tuples (or whatever) anyway -- we're just recognizing
the very real role of chance and noise. That is, if the "bloat units"
threshold might well not have been crossed due to random chance
(noise, the phase of the moon), why should we defer to random chance?
If we have better information to go on, like the thing with the XID
threshold, why not prefer that? Similarly, if we see that the system
as a whole is not very busy right now, why not consider that, just a
little, if the only downside is that we'll ignore a demonstrably
noise-level signal (from the stats)?

That's the high-level intuition behind making "bloat units" a
probability density function, and not just a simple expected value.
Teaching the autovacuum.c scheduler to distinguish signal from noise
could be very valuable, if it enables opportunistic batching of work,
or off-hours work. We don't have to respect noise. The devil is in the
details, of course.
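To illustrate how a density (rather than an expected value) enables
off-hours opportunism -- with the caveat that the function, the
confidence levels, and the busy/idle signal are all assumptions of
mine, not a design:

```python
def should_launch(p_exceeds_threshold, system_busy,
                  busy_conf=0.9, idle_conf=0.5):
    """p_exceeds_threshold: probability, under the bloat-units
    density, that real bloat exceeds the configured threshold.
    Demand strong evidence when the system is busy, but let a weaker
    (possibly noise-level) signal justify opportunistic work when
    the system is idle."""
    required = busy_conf if system_busy else idle_conf
    return p_exceeds_threshold >= required
```

A 60%-confident signal is ignored on a busy system but acted on during
an idle period, which is one way to "not respect noise" while still
batching opportunistic work.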

--
Peter Geoghegan
