Re: decoupling table and index vacuum

From: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Peter Geoghegan <pg(at)bowt(dot)ie>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: decoupling table and index vacuum
Date: 2021-04-23 11:03:56
Message-ID: CAD21AoAxDi2r+5CCRrmCVy5oSZhfG8kTfKo5EmXraMGQjf8PgA@mail.gmail.com
Lists: pgsql-hackers

On Fri, Apr 23, 2021 at 3:47 AM Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
> On Thu, Apr 22, 2021 at 10:28 AM Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com> wrote:
> > The dead TID fork needs to also be efficiently searched. If the heap
> > scan runs twice, the collected dead TIDs on each heap pass could be
> > overlapped. But we would not be able to merge them if we did index
> > vacuuming on one of indexes at between those two heap scans. The
> > second time heap scan would need to record only TIDs that are not
> > collected by the first time heap scan.
>
> I agree that there's a problem here. It seems to me that it's probably
> possible to have a dead TID fork that implements "throw away the
> oldest stuff" efficiently, and it's probably also possible to have a
> TID fork that can be searched efficiently. However, I am not sure that
> it's possible to have a dead TID fork that does both of those things
> efficiently. Maybe you have an idea. My intuition is that if we have
> to pick one, it's MUCH more important to be able to throw away the
> oldest stuff efficiently. I think we can work around the lack of
> efficient lookup, but I don't see a way to work around the lack of an
> efficient operation to discard the oldest stuff.

Agreed.

I think we can divide the TID fork into 16MB or 32MB chunks, like WAL
segment files, so that we can easily remove old chunks. Regarding the
efficient-search part, I think we need to consider the case where the
TID fork grows bigger than maintenance_work_mem. In that case, during
the heap scan, we need to check whether a discovered TID exists in a
chunk of the TID fork that may reside on disk. Even if all known-dead
TIDs are loaded into an in-memory array, bsearching that array for
every dead TID discovered during the heap scan could be much slower
than the current heap scan. So it would be better to have a way to
skip searching for already-recorded TIDs. For example, during the heap
scan or HOT pruning, when marking TIDs dead and recording them in the
dead TID fork, we could mark them "dead and recorded" instead of just
"dead", so that future heap scans can skip those TIDs without an
existence check.
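To make the trade-off concrete, here is a minimal sketch (plain Python,
not PostgreSQL code; all names and sizes are hypothetical) of the idea
above: dead TIDs are appended into fixed-size segments so the oldest
stuff can be discarded by dropping whole segments, while an existence
check has to bsearch every segment per TID, which is exactly the cost a
"dead and recorded" hint on the heap side would let us skip:

```python
import bisect

SEGMENT_CAPACITY = 4  # tiny for illustration; think 16MB/32MB of TIDs


class DeadTidFork:
    """Toy model of a chunked dead-TID fork, FIFO-truncatable like WAL segments."""

    def __init__(self):
        self.segments = []  # each segment: a sorted list of (block, offset) TIDs

    def record(self, tids):
        """Append a batch of newly discovered dead TIDs as sorted segments."""
        tids = sorted(tids)
        for i in range(0, len(tids), SEGMENT_CAPACITY):
            self.segments.append(tids[i:i + SEGMENT_CAPACITY])

    def contains(self, tid):
        """Existence check a later heap scan needs for each dead TID it finds:
        one binary search per segment -- this is the part that gets slow."""
        for seg in self.segments:
            j = bisect.bisect_left(seg, tid)
            if j < len(seg) and seg[j] == tid:
                return True
        return False

    def discard_oldest(self, n_segments):
        """Throw away the oldest stuff efficiently: drop whole segments."""
        del self.segments[:n_segments]


# A second heap scan records only TIDs not collected by the first scan:
fork = DeadTidFork()
fork.record([(1, 1), (1, 3), (2, 7)])        # first heap scan's dead TIDs
new_dead = [(1, 3), (2, 9)]                  # second scan rediscovers (1, 3)
to_record = [t for t in new_dead if not fork.contains(t)]
fork.record(to_record)                       # only (2, 9) is newly recorded
```

The point of the sketch is that discard_oldest() is O(1) per segment
while contains() costs a lookup for every rediscovered TID; a "dead and
recorded" marking on the heap side would avoid calling contains() at all.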

Regards,

--
Masahiko Sawada
EDB: https://www.enterprisedb.com/

