| From: | David Rowley <dgrowleyml(at)gmail(dot)com> |
|---|---|
| To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
| Cc: | Sami Imseih <samimseih(at)gmail(dot)com>, Nathan Bossart <nathandbossart(at)gmail(dot)com>, Robert Treat <rob(at)xzilla(dot)net>, Jeremy Schneider <schneider(at)ardentperf(dot)com>, pgsql-hackers(at)postgresql(dot)org |
| Subject: | Re: another autovacuum scheduling thread |
| Date: | 2025-11-23 09:55:34 |
| Message-ID: | CAApHDvo97hqpZR+vgVVSLQsPhVCA=yEerAGn9wzB_67vjmu6cA@mail.gmail.com |
| Lists: | pgsql-hackers |
On Sun, 23 Nov 2025 at 07:35, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
> On Sat, Nov 22, 2025 at 12:28 PM Sami Imseih <samimseih(at)gmail(dot)com> wrote:
> > What I have not been able to prove from my tests is that the processing
> > order of tables by autovacuum will actually make things any better or any
> > worse. My tests have been short 30 minute tests that count how many
> > vacuum cycles tables with various DML activity and sizes received.
> > I have not found much difference. I am also not sure how valuable
> > these short-duration tests are either.
>
> Yeah, I'm not sure that would be the right way to look for a benefit
> from something like this. I think that a better test scenario might
> involve figuring out how fast we can recover from a bad situation. As
> we've discussed before, if VACUUM is chronically unable to keep up
> with the workload, then the system is going to get into a very bad
> state and there's not really any help for it. But if we start to get
> into a bad situation due to some outside interference and then someone
> removes the interference, we might hope that this patch would help us
> get back on our feet more quickly.
One thing that seems to be getting forgotten again is that the "/* Stop
applying cost limits from this point on */" logic added in 1e55e7d17 only
kicks in when the table *currently* being vacuumed is over the
failsafe limit. Without Nathan's patch, the worker might idle along
carefully obeying the cost limits on dozens of other tables before it
gets around to vacuuming the table that's over the failsafe limit, then
suddenly drop the cost delay code and rush to get the table frozen
before Postgres stops accepting transactions. With the patch, Nathan
has added some aggressive score scaling, which should mean any table
over the failsafe limit has the highest score and gets attended to
first.
David