Re: New GUC autovacuum_max_threshold ?

From: Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
To: Robert Haas <robertmhaas(at)gmail(dot)com>, Nathan Bossart <nathandbossart(at)gmail(dot)com>
Cc: Melanie Plageman <melanieplageman(at)gmail(dot)com>, Frédéric Yhuel <frederic(dot)yhuel(at)dalibo(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, David Rowley <dgrowleyml(at)gmail(dot)com>
Subject: Re: New GUC autovacuum_max_threshold ?
Date: 2024-04-26 02:24:45
Message-ID: c78950c4a37a29b62d4eede6ecc403ecdcd9eeb6.camel@cybertec.at
Lists: pgsql-hackers

On Thu, 2024-04-25 at 14:33 -0400, Robert Haas wrote:
> I believe that the underlying problem here can be summarized in this
> way: just because I'm OK with 2MB of bloat in my 10MB table doesn't
> mean that I'm OK with 2TB of bloat in my 10TB table. One reason for
> this is simply that I can afford to waste 2MB much more easily than I
> can afford to waste 2TB -- and that applies both on disk and in
> memory.

I don't find that convincing. Why are 2TB of wasted space in a 10TB
table worse than 2TB of wasted space in 100 tables of 100GB each?
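
(For reference, the trigger condition in existing releases is

    vacthresh = autovacuum_vacuum_threshold
                + autovacuum_vacuum_scale_factor * reltuples

as computed in relation_needs_vacanalyze().  Here is a quick sketch in
C with the documented defaults, just to show how linearly the threshold
grows with the table; the row counts are made up for illustration:

    #include <stdio.h>

    int main(void)
    {
        double vac_base_thresh = 50;    /* autovacuum_vacuum_threshold */
        double vac_scale_factor = 0.2;  /* autovacuum_vacuum_scale_factor */
        /* hypothetical small, medium and very large tables */
        double reltuples[] = {1e5, 1e8, 1e10};

        for (int i = 0; i < 3; i++)
        {
            double vacthresh = vac_base_thresh
                               + vac_scale_factor * reltuples[i];
            printf("reltuples = %.0e -> vacuum after %.0f dead tuples\n",
                   reltuples[i], vacthresh);
        }
        return 0;
    }

So a ten-billion-row table gets to accumulate about two billion dead
tuples before autovacuum wakes up, which is where the proportional
bloat in Robert's example comes from.)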

> Another reason, at least in existing releases, is that at some
> point index vacuuming hits a wall because we run out of space for dead
> tuples. We *most definitely* want to do index vacuuming before we get
> to the point where we're going to have to do multiple cycles of index
> vacuuming.

That is more convincing. But do we need a GUC for that? What about
making a table eligible for autovacuum as soon as the number of dead
tuples reaches 90% of what fits in "autovacuum_work_mem"?
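
With the flat dead-tuple array that existing releases use, that limit
translates directly into a tuple count, since each dead tuple costs one
6-byte ItemPointer.  A back-of-the-envelope sketch (64MB is just the
default for "maintenance_work_mem", which "autovacuum_work_mem" falls
back to):

    #include <stdio.h>

    int main(void)
    {
        long long work_mem_bytes = 64LL * 1024 * 1024; /* autovacuum_work_mem */
        int tid_size = 6;                   /* sizeof(ItemPointerData) */
        long long capacity = work_mem_bytes / tid_size;
        long long trigger = capacity * 9 / 10;  /* 90% of capacity */

        printf("capacity: %lld dead TIDs, trigger at %lld\n",
               capacity, trigger);
        return 0;
    }

At the default settings that comes out to roughly ten million dead
tuples, independent of table size, which should keep vacuum to a
single index vacuuming cycle.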

Yours,
Laurenz Albe
