Re: another autovacuum scheduling thread

From: Sami Imseih <samimseih(at)gmail(dot)com>
To: Nathan Bossart <nathandbossart(at)gmail(dot)com>
Cc: David Rowley <dgrowleyml(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Jeremy Schneider <schneider(at)ardentperf(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: another autovacuum scheduling thread
Date: 2025-10-27 22:35:12
Message-ID: CAA5RZ0sybfRyKp+DY+r=2U+-r7HfSF4GL1oVOOcVtEWmk2ewUw@mail.gmail.com
Lists: pgsql-hackers

> > I wrote a SQL query that returns the tables and scores, which I found
> > useful when I was testing this out, so having the actual rules spelled
> > out in the docs will be super useful.
>
> Can you elaborate on how it would be useful? I'd be open to adding a short
> note that autovacuum attempts to prioritize the tables in a smart way, but
> I'm not sure I see the value of documenting every detail.

We discuss the threshold calculations in the documentation, and users
can write scripts to monitor which tables are eligible. However, nothing
indicates which table autovacuum will work on next. (I have been asked
that question by users a few times, sometimes out of curiosity, sometimes
because they are monitoring vacuum activity and wondering when their
important table will get a vacuum cycle, or whether they should kick off
a manual vacuum.) With the scoring system, that will be much more
difficult to explain unless someone walks through the code.
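
For example, a monitoring query along these lines (a rough sketch using
the documented dead-tuple formula with the global GUC values only, and
ignoring per-table reloptions) can already answer the eligibility
question, but nothing comparable exists for the ordering:

    -- Tables currently eligible for a dead-tuple vacuum, per the
    -- documented formula: threshold = autovacuum_vacuum_threshold
    --                     + autovacuum_vacuum_scale_factor * reltuples.
    -- Global GUCs only; per-table reloptions are not considered.
    SELECT s.relname,
           s.n_dead_tup,
           current_setting('autovacuum_vacuum_threshold')::float8
             + current_setting('autovacuum_vacuum_scale_factor')::float8
               * c.reltuples AS vacuum_threshold
    FROM pg_stat_user_tables s
    JOIN pg_class c ON c.oid = s.relid
    WHERE s.n_dead_tup >
          current_setting('autovacuum_vacuum_threshold')::float8
            + current_setting('autovacuum_vacuum_scale_factor')::float8
              * c.reltuples;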

> I also don't
> want to add too much friction to future changes to the prioritization
> logic.

Maybe future changes are a good reason to document how autovacuum
prioritizes, since this is a user-facing change.

> > If we don't want to go that much in depth, at minimum the docs should say:
> >
> > "Autovacuum prioritizes tables based on how far they exceed their thresholds
> > or if they are approaching wraparound limits." so a DBA can understand
> > this behavior.
>
> Yeah, I would probably choose to keep it relatively vague like this.

With all the above said, starting with something small is definitely better
than nothing.

> > * The score is calculated as the maximum of the ratios of each of the table's
> > * relevant values to its threshold. For example, if the number of inserted
> > * tuples is 100, and the insert threshold for the table is 80, the insert
> > * score is 1.25.
> >
> > Should we consider clamping the score when reltuples = -1? Otherwise
> > the scores for such tables (new tables with a large amount of ingested
> > data) will be over-inflated. Perhaps, if reltuples = -1 (number of
> > tuples not known), give a score of 0.5, so we are neither
> > over-prioritizing nor pushing the table to the bottom?
>
> I'm not sure it's worth expending too much energy to deal with this. In
> the worst case, the table will be given an arbitrarily high priority the
> first time it is vacuumed, but AFAICT that's it. But that's already the
> case, as the thresholds will be artificially low before the first
> VACUUM/ANALYZE.

I can think of scenarios where there may be workloads that create and
drop staging tables and load some data (like batch processing) where
this may become an issue, because we are now forcing such tables to the
top of the list, potentially preventing other tables from getting vacuum
cycles. It could happen now, but the difference with this change is that
we are forcing these tables to the top of the priority list based on an
unknown value (pg_class.reltuples = -1).
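
To make the concern concrete: with the default
autovacuum_vacuum_insert_threshold = 1000 and
autovacuum_vacuum_insert_scale_factor = 0.2, a never-analyzed table
(reltuples = -1) has an insert threshold of roughly 1000, so a
10-million-row bulk load scores around 10000 and jumps ahead of
everything else. A rough SQL sketch of the clamp I have in mind (the 0.5
value is arbitrary, and this uses only the global GUCs, ignoring
per-table reloptions):

    -- Per-table insert score as in the quoted comment (inserted tuples
    -- divided by the insert threshold), with never-analyzed tables
    -- (reltuples = -1) pinned to an illustrative fixed score of 0.5
    -- instead of an inflated ratio against a near-zero threshold.
    SELECT s.relname,
           s.n_ins_since_vacuum,
           c.reltuples,
           CASE WHEN c.reltuples < 0 THEN 0.5
                ELSE s.n_ins_since_vacuum /
                     (current_setting('autovacuum_vacuum_insert_threshold')::float8
                      + current_setting('autovacuum_vacuum_insert_scale_factor')::float8
                        * c.reltuples)
           END AS insert_score
    FROM pg_stat_user_tables s
    JOIN pg_class c ON c.oid = s.relid
    ORDER BY insert_score DESC;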

--
Sami Imseih
Amazon Web Services (AWS)
