From: David Rowley <dgrowleyml(at)gmail(dot)com>
To: Junwang Zhao <zhjwpku(at)gmail(dot)com>
Cc: Nathan Bossart <nathandbossart(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Introduce some randomness to autovacuum
Date: 2025-05-01 06:15:00
Message-ID: CAApHDvoBWhqbH85Up04e3R1ci-XAkUD_U7yQ2=icKqUNcGoTxQ@mail.gmail.com
Lists: pgsql-hackers
On Thu, 1 May 2025 at 17:35, Junwang Zhao <zhjwpku(at)gmail(dot)com> wrote:
>
> On Thu, May 1, 2025 at 8:12 AM David Rowley <dgrowleyml(at)gmail(dot)com> wrote:
> > It sounds like the aim is to fix the problem with autovacuum vacuuming
> > the same table over and over and being unable to remove enough dead
> > tuples due to something holding back the oldest xmin horizon. Why
> > can't we just fix that by remembering the value of
> > VacuumCutoffs.OldestXmin and only coming back to that table once
> > that's moved forward some amount?
>
> Users expect tables to be auto-vacuumed when:
> *dead_tuples > vac_base_thresh + vac_scale_factor * reltuples*
> If we depend on the xid moving forward to trigger autovacuum, I think
> there is a chance some bloated tables won't be vacuumed?
Can you explain why you think that? The idea is to start vacuuming
other tables that might have removable dead tuples, instead of
repeating vacuums on the same table over and over with no chance of
removing any more dead tuples than we could during the last vacuum.
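As a minimal sketch of the two checks being discussed (not PostgreSQL source; the function names, parameters, and simplified xid comparison are illustrative, though the defaults of 50 and 0.2 match the stock autovacuum_vacuum_threshold and autovacuum_vacuum_scale_factor settings):

```python
def needs_autovacuum(dead_tuples, reltuples,
                     vac_base_thresh=50, vac_scale_factor=0.2):
    """The existing trigger: dead_tuples > base + scale * reltuples."""
    return dead_tuples > vac_base_thresh + vac_scale_factor * reltuples


def should_vacuum(dead_tuples, reltuples,
                  current_oldest_xmin, last_oldest_xmin=None):
    """Proposed gate: even if the threshold is exceeded, skip the table
    until OldestXmin has advanced past the value remembered from the
    last vacuum, since until then the same dead tuples still cannot be
    removed and the vacuum would be wasted work."""
    if not needs_autovacuum(dead_tuples, reltuples):
        return False
    if last_oldest_xmin is not None and current_oldest_xmin <= last_oldest_xmin:
        return False  # horizon hasn't moved; re-vacuuming is futile
    return True
```

With 300 dead tuples out of 1000, the threshold (50 + 0.2 * 1000 = 250) is exceeded, but the gate still skips the table while the horizon is stuck, freeing the worker to vacuum other tables.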
David