Re: [WIP] [B-Tree] Retail IndexTuple deletion

From: Peter Geoghegan <pg(at)bowt(dot)ie>
To: "Andrey V(dot) Lepikhov" <a(dot)lepikhov(at)postgrespro(dot)ru>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [WIP] [B-Tree] Retail IndexTuple deletion
Date: 2018-06-18 23:05:01
Lists: pgsql-hackers

On Mon, Jun 18, 2018 at 2:54 PM, Peter Geoghegan <pg(at)bowt(dot)ie> wrote:
> On Sun, Jun 17, 2018 at 9:39 PM, Andrey V. Lepikhov
> <a(dot)lepikhov(at)postgrespro(dot)ru> wrote:
>> Patch '0001-retail-indextuple-deletion' introduce new function
>> amtargetdelete() in access method interface. Patch
>> '0002-quick-vacuum-strategy' implements this function for an alternative
>> strategy of lazy index vacuum, called 'Quick Vacuum'.
> My compiler shows the following warnings:

Some real feedback:

What we probably want to end up with here is new lazyvacuum.c code
that processes one heap page (and its associated indexes) as what is
really just a "partial" lazy vacuum. Though it won't do things like
advance relfrozenxid, it will do pruning for the heap page, index
tuple killing, and finally heap tuple killing. It will do all of these
things reliably, just like traditional lazy vacuum. This will be what
your background worker eventually uses.
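A rough sketch of the shape I have in mind (every name below is
hypothetical, invented purely to illustrate the control flow; none of
this is existing lazyvacuum.c code):

```c
/*
 * Hypothetical sketch only: function, type, and helper names are
 * invented for illustration, not actual lazyvacuum.c code.
 */
static void
lazy_vacuum_one_page(Relation heaprel, Relation *indrels, int nindexes,
					 BlockNumber blkno)
{
	DeadItems  *dead_items;		/* hypothetical container type */

	/* Step 1: prune the heap page, collecting dead item pointers */
	dead_items = prune_one_heap_page(heaprel, blkno);

	/* Step 2: reliably kill the matching tuples in every index */
	for (int i = 0; i < nindexes; i++)
		index_target_delete(indrels[i], dead_items);

	/* Step 3: mark the heap item pointers unused (heap tuple killing) */
	mark_items_unused(heaprel, blkno, dead_items);

	/* Note: no relfrozenxid advancement here, unlike a full VACUUM */
}
```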

I doubt that you can use routines like index_beginscan() within
bttargetdelete() at all. I think that you need something closer to
_bt_doinsert() or _bt_pagedel(), that manages its own scan (your code
should probably live in nbtpage.c). It does not make sense to teach
external, generic routines like index_beginscan() about heap TID being
an implicit final index attribute, which is one reason to manage the
scan yourself (I'm assuming that this patch relies on my own patch).
Another reason is
that you need to hold an exclusive buffer lock at the point that you
identify the tuple to be killed, until the point that you actually
kill it. You don't do that now.

IOW, the approach you've taken in bttargetdelete() isn't quite correct
because you imagine that it's okay to occasionally "lose" the index
tuple that you originally found, and just move on. That needs to be
100% reliable, or else we'll end up with index tuples that point to
the wrong heap tuples in rare cases with concurrent insertions. As I
said, we want a "partial" lazy vacuum here, which must mean that it's
reliable. Note that _bt_pagedel() actually calls _bt_search() when it
deletes a page. Your patch will not be the first patch that makes
nbtree vacuuming do an index scan. You should be managing your own
insertion scan key, much like _bt_pagedel() does. If you use my patch,
_bt_search() can be taught to look for a specific heap TID.

Finally, doing things this way would let you delete multiple
duplicates in one shot, as I described in an earlier e-mail. Only a
single descent of the tree is needed to delete quite a few index
tuples, provided that they all happen to be logical duplicates. Again,
your background worker will take advantage of this.
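In sketch form (signatures abbreviated and the matching logic
hand-waved; heap_tid_is_dead() is a made-up helper, and this assumes
_bt_search() has been taught to treat heap TID as a tiebreaker, per
the patch mentioned above):

```c
/*
 * Sketch only: one descent of the tree per group of logical
 * duplicates, with the leaf buffer held exclusively locked from
 * identification through deletion.
 */
itup_key = _bt_mkscankey(rel, targetitup);	/* own insertion scan key */
stack = _bt_search(rel, itup_key, &buf, BT_WRITE);	/* exclusive lock */

for (offnum = low; offnum <= maxoff; offnum = OffsetNumberNext(offnum))
{
	/*
	 * While still holding the exclusive buffer lock, collect every
	 * index tuple on the leaf page whose heap TID is in our kill
	 * list -- they're all logical duplicates reachable from this
	 * single descent.
	 */
	if (heap_tid_is_dead(page, offnum, dead_items))	/* hypothetical */
		deletable[ndeletable++] = offnum;
}
_bt_delitems_vacuum(rel, buf, deletable, ndeletable);
_bt_relbuf(rel, buf);
```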

This code does not follow the Postgres style:

> - else
> + }
> + else {
> + if (rootoffnum != latestdead)
> + heap_prune_record_unused(prstate, latestdead);
> heap_prune_record_redirect(prstate, rootoffnum, chainitems[i]);
> + }
> }

Please be more careful about that. I find it very distracting.
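For reference, the same hunk written in project style (braces on their
own lines, in the usual BSD/Allman convention used throughout the
tree):

```c
else
{
	if (rootoffnum != latestdead)
		heap_prune_record_unused(prstate, latestdead);
	heap_prune_record_redirect(prstate, rootoffnum, chainitems[i]);
}
```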

Peter Geoghegan
