| From: | David Rowley <dgrowleyml(at)gmail(dot)com> |
|---|---|
| To: | Sami Imseih <samimseih(at)gmail(dot)com> |
| Cc: | Nathan Bossart <nathandbossart(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Jeremy Schneider <schneider(at)ardentperf(dot)com>, pgsql-hackers(at)postgresql(dot)org |
| Subject: | Re: another autovacuum scheduling thread |
| Date: | 2025-11-06 23:05:43 |
| Message-ID: | CAApHDvq_j+GVqX_ZAmvn236Mgg5OYQ6_s9kVsyoo1tJa2RJ=2w@mail.gmail.com |
| Lists: | pgsql-hackers |
On Fri, 7 Nov 2025 at 11:21, Sami Imseih <samimseih(at)gmail(dot)com> wrote:
> Also, I am thinking about another sorting strategy based on average
> autovacuum/autoanalyze time per table. The idea is to sort ascending by
> the greater of the two averages, so workers process quicker tables first
> instead of all workers potentially getting hung on the slowest tables.
> We can calculate the average now that v18 includes total_autovacuum_time
> and total_autoanalyze_time.
>
> The way I see it, regardless of prioritization, a few large tables may
> still monopolize autovacuum workers. But at least this way, the quick tables
> get a chance to be processed first. Would this be an idea worth testing out?
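If I'm reading that right, the proposed ordering would amount to roughly
the query below. It's only a sketch of how I understand the idea, built
on the v18 pg_stat_all_tables counters you mention; tables with no
autovacuum/autoanalyze history end up with a zero average and sort first
here.

```sql
-- Sketch of the proposed ordering (not a patch): sort ascending by the
-- greater of the two per-table averages, so "quick" tables come first.
-- Times are in milliseconds, as reported by pg_stat_all_tables.
SELECT relid::regclass AS table_name,
       total_autovacuum_time  / NULLIF(autovacuum_count, 0)  AS avg_autovacuum_ms,
       total_autoanalyze_time / NULLIF(autoanalyze_count, 0) AS avg_autoanalyze_ms
FROM pg_stat_all_tables
ORDER BY GREATEST(
             COALESCE(total_autovacuum_time  / NULLIF(autovacuum_count, 0), 0),
             COALESCE(total_autoanalyze_time / NULLIF(autoanalyze_count, 0), 0)
         ) ASC;
```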
This sounds like a terrible idea to me. It'll mean any table that
starts taking longer due to autovacuum neglect will have its priority
dropped for next time, which will result in further neglect. If
vacuum_cost_limit is too low, then the tables most in need of vacuum
could end up last in the queue. I also don't see how you'd handle the
fact that analyze is likely to be faster than vacuum. Would tables that
only need an analyze just come last, with no regard for how outdated
their statistics are?
I'm confused about why we'd set up our autovacuum trigger points as
they are today, because we think those are good times to do a
vacuum/analyze, and then prioritise on something completely different.
Surely, if we think 20% dead tuples is worth a vacuum, we must
therefore think that 40% dead tuples is even more worthwhile?! I just
cannot comprehend why we'd deviate from making the priority the
percentage over the trigger point here. If we conclude that we want
something else, then maybe our trigger point threshold method also
needs to be redefined. There certainly have been complaints about 20%
of a huge table being too much (I guess
autovacuum_vacuum_max_threshold is our answer to trying to fix that
one).
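To make that concrete, the kind of ordering I mean is roughly the
sketch below. It only uses the global GUCs and pg_class.reltuples to
approximate the vacuum trigger point, whereas the real calculation in
autovacuum also honours per-table reloptions and the analyze/insert
thresholds, so treat it as an illustration rather than the actual
scheduling logic.

```sql
-- Rough illustration: rank tables by how far their dead tuple count is
-- over the vacuum trigger point (threshold + scale_factor * reltuples),
-- using only the global settings.
SELECT s.relid::regclass AS table_name,
       s.n_dead_tup,
       s.n_dead_tup::float8 /
         (current_setting('autovacuum_vacuum_threshold')::float8
          + current_setting('autovacuum_vacuum_scale_factor')::float8
            * c.reltuples) AS over_trigger_ratio
FROM pg_stat_all_tables s
JOIN pg_class c ON c.oid = s.relid
WHERE c.reltuples >= 0   -- skip never-analyzed tables (reltuples is -1)
ORDER BY over_trigger_ratio DESC;
```

With that ordering, a table at 40% dead tuples outranks one at 20%, and
tables only just over their threshold sort last, which is the behaviour
I'd expect from the existing trigger points.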
David