From: Nathan Bossart <nathandbossart(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: David Rowley <dgrowleyml(at)gmail(dot)com>, Jeremy Schneider <schneider(at)ardentperf(dot)com>, Sami Imseih <samimseih(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: another autovacuum scheduling thread
Date: 2025-10-10 19:44:22
Message-ID: aOliFnwt6433J_Zs@nathan
Lists: pgsql-hackers
Thanks for taking a look.
On Fri, Oct 10, 2025 at 02:42:57PM -0400, Robert Haas wrote:
> I think this is a reasonable starting point, although I'm surprised
> that you chose to combine the sub-scores using + rather than Max.
My thinking was that we should consider as many factors as we can in the
score, not just the worst one. If a table has medium bloat and medium
wraparound risk, should it always be lower in priority than something with
large bloat and small wraparound risk? It seems worth exploring. I am
curious why you first thought of Max.
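
To make that concrete with toy numbers (everything below is made up for
illustration and assumes each sub-score is already normalized to [0, 1],
which the patch doesn't do):

    #include <stdio.h>

    #define Max(x, y) ((x) > (y) ? (x) : (y))

    int
    main(void)
    {
        /* table A: medium bloat, medium wraparound risk */
        double      a_bloat = 0.5,
                    a_wrap = 0.5;

        /* table B: large bloat, small wraparound risk */
        double      b_bloat = 0.8,
                    b_wrap = 0.1;

        /* + considers every factor: A (1.0) outranks B (0.9) */
        printf("sum: A=%.1f B=%.1f\n", a_bloat + a_wrap, b_bloat + b_wrap);

        /* Max considers only the worst factor: B (0.8) outranks A (0.5) */
        printf("max: A=%.1f B=%.1f\n", Max(a_bloat, a_wrap), Max(b_bloat, b_wrap));

        return 0;
    }

I'm not claiming either ordering is obviously right; that's the question.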
> When I've thought about this problem -- and I can't claim to have
> thought about it very hard -- it's seemed to me that we need to (1)
> somehow normalize everything to somewhat similar units and (2) make
> sure that severe wraparound danger always wins over every other
> consideration, but mild wraparound danger can lose to severe bloat.
Agreed. I need to think about this some more. While I'm optimistic that
we could come up with some sort of normalization framework, I desperately
want to avoid super complicated formulas and GUCs, as those seem like
sure-fire ways of ensuring nothing ever gets committed.
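
For the sake of discussion, the kind of thing I had in mind looks roughly
like the sketch below.  All of it is made up on the spot; the threshold
names and the saturate-past-the-failsafe trick are assumptions of mine,
not what the patch does.

    #include <stdio.h>
    #include <stdint.h>

    #define Min(x, y) ((x) < (y) ? (x) : (y))

    /*
     * Normalize wraparound danger to [0, 1], but saturate once the
     * failsafe threshold is crossed so that it always wins.
     */
    static double
    wraparound_score(uint32_t relfrozenxid_age, uint32_t freeze_max_age,
                     uint32_t failsafe_age)
    {
        if (relfrozenxid_age >= failsafe_age)
            return 2.0;     /* > any bloat score plus sub-failsafe wrap score */

        return Min(1.0, (double) relfrozenxid_age / freeze_max_age);
    }

    /* Normalize bloat to [0, 1] so severe bloat can beat mild wraparound. */
    static double
    bloat_score(double dead_tuples, double vacuum_threshold)
    {
        return Min(1.0, dead_tuples / vacuum_threshold);
    }

    int
    main(void)
    {
        /* mildly old but badly bloated table */
        double      mild = wraparound_score(100000000, 200000000, 1600000000) +
                    bloat_score(5000000, 50000);

        /* barely bloated table that has crossed the failsafe */
        double      severe = wraparound_score(1700000000, 200000000, 1600000000) +
                    bloat_score(100, 50000);

        printf("mild wrap + severe bloat: %.2f\n", mild);
        printf("past failsafe           : %.2f\n", severe);

        return 0;
    }

With both sub-scores capped at 1.0, a table past the failsafe wins under
either + or Max, while below that point a badly bloated table can still
outrank a mildly old one.  But again, I haven't thought this through yet.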
--
nathan