From: Nathan Bossart <nathandbossart(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: another autovacuum scheduling thread
Date: 2025-10-09 16:33:29
Message-ID: aOfj2cLCQHzTcyoB@nathan
Lists: pgsql-hackers
On Thu, Oct 09, 2025 at 12:15:31PM -0400, Andres Freund wrote:
> On 2025-10-09 11:01:16 -0500, Nathan Bossart wrote:
>> I also wonder how hard it would be to gracefully catch the error and let
>> the worker continue with the rest of its list...
>
> The main set of cases I've seen are when workers get hung up permanently in
> corrupt indexes. There never is actually an error, the autovacuums just get
> terminated as part of whatever independent reason there is to restart. The
> problem with that is that you'll never actually have vacuum fail...
Ah. Wouldn't the other workers skip that table in that scenario? I'm not
seeing the great advantage of varying the order in this case. I suppose
the full set of workers might be able to process more tables before one
inevitably gets stuck. Is that it?
--
nathan