Re: should vacuum's first heap pass be read-only?

From: Peter Geoghegan <pg(at)bowt(dot)ie>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: should vacuum's first heap pass be read-only?
Date: 2022-02-04 21:11:54
Message-ID: CAH2-WzmG=_vYv0p4bhV8L73_u+Bkd0JMWe2zHH333oEujhig1g@mail.gmail.com
Lists: pgsql-hackers

On Fri, Feb 4, 2022 at 3:18 PM Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> > What about recovery conflicts? Index vacuuming WAL records don't
> > require their own latestRemovedXid field, since they can rely on
> > earlier XLOG_HEAP2_PRUNE records instead. Since the TIDs that index
> > vacuuming removes always point to LP_DEAD items in the heap, it's safe
> > to lean on that.
>
> Oh, that's an interesting consideration.

You'd pretty much have to do "fake pruning", performing the same
computation as pruning without actually pruning.

> > In practice HOT generally works well enough that the number of heap
> > pages that prune significantly exceeds the subset that are also
> > vacuumed during the second pass over the heap -- at least when heap
> > fill factor has been tuned (which might be rare). The latter category
> > of pages is not reported on by the enhanced autovacuum logging added
> > to Postgres 14, so you might be able to get some sense of how this
> > works by looking at that.
>
> Is there an extra "not" in this sentence? Because otherwise it seems
> like you're saying that I should look at the information that isn't
> reported, which seems hard.

Sorry, yes. I meant "now" -- that category of pages *is* now reported
on, as of the enhanced autovacuum logging in Postgres 14.

> In any case, I think this might be a death knell for the whole idea.
> It might be good to cut down the number of page writes by avoiding
> writing them twice -- but not at the expense of having the second pass
> have to visit a large number of pages it could otherwise skip. I
> suppose we could write only those pages in the first pass that we
> aren't going to need to write again later, but at that point I can't
> really see that we're winning anything.

Right. I think that, with the conveyor belt stuff, we *can* be more
aggressive about deferring heap page vacuuming until a later VACUUM
operation. You may well end up getting almost the same benefit that way.

> Yes, I wondered about that. It seems like maybe a running VACUUM
> should periodically refresh its notion of what cutoff to use.

Yeah, Andres said something about this a few months ago. Shouldn't be
very difficult.

> I think my concern here is about not having too many different code
> paths from heap vacuuming. I agree that if we're going to vacuum
> without an on-disk conveyor belt we can use an in-memory substitute.

Avoiding special cases in vacuumlazy.c seems really important to me.

> However, to Greg's point, if we're using the conveyor belt, it seems
> like we want to merge the second pass of one VACUUM into the first
> pass of the next one.

But it's only going to be safe to do that with those dead TIDs (or
distinct generations of dead TIDs) that are known to already be
removed from all indexes, including indexes that have the least need
for vacuuming (often no direct need at all). I had imagined that we'd
want to do heap vacuuming in the same way as today with the dead TID
conveyor belt stuff -- it just might take several VACUUM operations
until we are ready to do a round of heap vacuuming.

For those indexes that use bottom-up index deletion effectively, the
index structure itself never really needs to be vacuumed to avoid
index bloat. We must nevertheless vacuum these indexes at some point,
just to be able to vacuum heap pages with LP_DEAD items safely.

Overall, I think that there will typically be stark differences among
indexes on the table, in terms of how much vacuuming each index
requires. And so the thing that drives us to perform heap vacuuming
will probably be heap vacuuming itself, and not the fact that each and
every index has become "sufficiently bloated".

> If this isn't entirely making sense, it may well be because I'm a
> little fuzzy on all of it myself.

I'm in no position to judge. :-)

--
Peter Geoghegan
