Re: [WiP] B-tree page merge during vacuum to reduce index bloat

From: Peter Geoghegan <pg(at)bowt(dot)ie>
To: Andrey Borodin <x4mmm(at)yandex-team(dot)ru>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Kirk Wolak <wolakk(at)gmail(dot)com>, Nikolay Samokhvalov <nik(at)postgres(dot)ai>
Subject: Re: [WiP] B-tree page merge during vacuum to reduce index bloat
Date: 2025-08-27 21:06:28
Message-ID: CAH2-Wz=e2LG=a9X9bsHVCdCsouH=6EE94NSBErwF5pY=1K2efA@mail.gmail.com
Lists: pgsql-hackers

On Tue, Aug 26, 2025 at 5:40 AM Andrey Borodin <x4mmm(at)yandex-team(dot)ru> wrote:
> *** Correctness:
>
> The implementation reuses existing locking, critical sections and WAL logging infrastructure. To handle cross-page dependencies during WAL replay, when tuples are merged the right sibling buffer is registered with `REGBUF_FORCE_IMAGE`; this is a temporary workaround.

I don't think that you can reuse existing locking for this. It needs
to be totally reevaluated in light of the way that you've changed page
deletion.

In general, page deletion ends up being very complicated with the
Lehman & Yao design -- even with the existing restrictions. It's a
natural downside of the very permissive locking rules: there is no
need for most searchers and inserters to ever hold more than a single
buffer lock at a time. Searchers generally hold *no* locks when
descending the index at points where they're "between levels". Almost
anything can happen in the meantime -- including page deletion (it's
just that the deleted page cannot be concurrently recycled right away,
which is kind of a separate process).

That original block number reference still has to work for these
searchers, no matter what. It must at least not give them an
irredeemably bad picture of what's going on; they might have to move
right to deal with concurrent page deletion and page splits, but
that's it.
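
To make that concrete, here is a hedged sketch (in C, with hypothetical
helper names -- not the actual nbtree code) of the only repair a searcher
needs under Lehman & Yao: move right, holding one buffer lock at a time,
until the page's key space covers the search key.

    #include <stdbool.h>

    /* Hypothetical helpers, standing in for the real nbtree primitives. */
    typedef struct Page Page;
    typedef unsigned int BlockNumber;
    typedef long IndexKey;

    extern Page *read_and_lock(BlockNumber blkno);
    extern void unlock_and_release(Page *page);
    extern bool page_is_rightmost(Page *page);
    extern IndexKey page_high_key(Page *page);   /* inclusive upper bound here */
    extern BlockNumber page_right_sibling(Page *page);

    /*
     * Land on whatever page the (possibly stale) downlink pointed to, then
     * move right until the page's key space covers the search key.  This is
     * enough to cope with concurrent splits, and with page deletion as it
     * exists today (a deleted page keeps a usable right link until it is
     * eventually recycled), because the key space we're after can only have
     * moved to the right of the original block number.
     */
    static Page *
    reach_covering_page(BlockNumber blkno, IndexKey key)
    {
        for (;;)
        {
            Page   *page = read_and_lock(blkno);  /* one buffer lock at a time */

            if (page_is_rightmost(page) || key <= page_high_key(page))
                return page;                      /* key space covers the key */

            blkno = page_right_sibling(page);     /* remember where to go next */
            unlock_and_release(page);             /* then move right */
        }
    }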

If you merge together underfull pages like this, you change the key
space boundaries between pages. I don't see a way to make that a
safe/atomic operation, except by adding significantly more locking in
the critical path of searchers.
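
Concretely, with a hypothetical layout (assuming, per the patch
description, that tuples are merged into the right sibling):

    Before:
      parent:  ... | k1 | downlink -> L | k2 | downlink -> R | k3 | ...
      leaf L:  tuples in [k1, k2), high key k2, right link -> R
      leaf R:  tuples in [k2, k3), high key k3

    After merging L's tuples into R:
      parent:  ... | k1 | downlink -> R | k3 | ...
      leaf R:  tuples in [k1, k3), high key k3

The boundary at k2 is gone, and the parent, L, and R all have to change
in a way that appears atomic to searchers that may hold no locks at all.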

> *** Current Status & Questions:
>
> The patch successfully reduces index bloat and handles basic scenarios, but we've identified some areas that need community input:

Have you benchmarked it?

I wouldn't assume that free-at-empty actually is worse than merging
underfull pages. It's complicated, but see this paper for some
counter-arguments:

https://www.sciencedirect.com/science/article/pii/002200009390020W

This old Dr. Dobbs article from the same authors gives a lighter
summary of the same ideas:

https://web.archive.org/web/20200811014238/https://www.drdobbs.com/reexamining-b-trees/184408694?pgno=3

In any case, I believe that this optimization isn't widely implemented
by other major RDBMSs, despite being a textbook technique that was
described early in the history of B-Trees.

--
Peter Geoghegan
