Re: Multiple FPI_FOR_HINT for the same block during killing btree index items

From: Ranier Vilela <ranier(dot)vf(at)gmail(dot)com>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Masahiko Sawada <masahiko(dot)sawada(at)2ndquadrant(dot)com>, Peter Geoghegan <pg(at)bowt(dot)ie>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Multiple FPI_FOR_HINT for the same block during killing btree index items
Date: 2020-05-16 16:28:28
Message-ID: CAEudQAoTCPcZRgtxxfVarOqT+fj+RmVrSCCz2DDi7tm7Z4hnmA@mail.gmail.com
Lists: pgsql-hackers

On Fri, May 15, 2020 at 18:53, Alvaro Herrera <
alvherre(at)2ndquadrant(dot)com> wrote:

> On 2020-Apr-10, Masahiko Sawada wrote:
>
> > Okay. I think only adding the check would also help with reducing the
> > likelihood. How about the changes for the current HEAD I've attached?
>
> Pushed this to all branches. (Branches 12 and older obviously needed an
> adjustment.) Thanks!
>
> > Related to this behavior on btree indexes, this can happen even on
> > heaps during searching heap tuples. To reduce the likelihood of that
> > more generally I wonder if we can acquire a lock on buffer descriptor
> > right before XLogSaveBufferForHint() and set a flag to the buffer
> > descriptor that indicates that we're about to log FPI for hint bit so
> > that concurrent process can be aware of that.
>
> I'm not sure how that helps; the other process would have to go back and
> redo their whole operation from scratch in order to find out whether
> there's still something alive that needs killing.
>
> I think you need to acquire the exclusive lock sooner: if, when scanning
> the page, you find a killable item, *then* upgrade the lock to exclusive
> and restart the scan. This means that we'll have to wait for any other
> process that's doing the scan, and they will all give up their share
> lock to wait for the exclusive lock they need. So the one that gets it
> first will do all the killing, log the page, then release the lock. At
> that point the other processes will wake up and see that items have been
> killed, so they will return having done nothing.
>
> Like the attached. I didn't verify that it works well or that it
> actually improves performance ...
>
This is not related to your latest patch, but I believe I can improve
the performance.

Consider this case:
1. killedsomething is false;
2. some killtuple is true, but ItemIdIsDead(iid) is also true;
3. then nothing gets done.

So why do all the work only to discard it? We can reject the current
item much earlier, by testing whether it is already dead.

regards,
Ranier Vilela

Attachment Content-Type Size
avoid_killing_btree_items_aready_dead.patch application/octet-stream 2.0 KB
