From: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>
To: Yura Sokolov <y(dot)sokolov(at)postgrespro(dot)ru>
Cc: Andres Freund <andres(at)anarazel(dot)de>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [PoC] Improve dead tuple storage for lazy vacuum
Date: 2021-07-29 14:29:13
Message-ID: CAD21AoD3MkkPPv9DQqrVU891WK9PUf=6+1hpvc-jDD1trN4hiA@mail.gmail.com
Lists: pgsql-hackers

On Thu, Jul 29, 2021 at 8:03 PM Yura Sokolov <y(dot)sokolov(at)postgrespro(dot)ru> wrote:
>
> > Masahiko Sawada wrote on 2021-07-29 12:11:
> > On Thu, Jul 29, 2021 at 3:53 AM Andres Freund <andres(at)anarazel(dot)de>
> > wrote:
> >>
> >> Hi,
> >>
> >> On 2021-07-27 13:06:56 +0900, Masahiko Sawada wrote:
> >> > Apart from the performance and memory usage points of view, we also
> >> > need to consider the reusability of the code. When I started this
> >> > thread, I thought the best data structure would be the one optimized
> >> > for vacuum's dead tuple storage. However, if we can use a data
> >> > structure that can also be used in general, we can use it for other
> >> > purposes as well. Moreover, if it's too optimized for the current
> >> > TID system (32-bit block number, 16-bit offset number, maximum
> >> > block/offset numbers, etc.), it may become a blocker for future
> >> > changes.
> >>
> >> Indeed.
> >>
> >>
> >> > In that sense, a radix tree also seems good, since it could also be
> >> > used in gist vacuum as a replacement for IntegerSet, or as a
> >> > replacement for the shared buffer hash table, as discussed before.
> >> > Are there any other use cases?
> >>
> >> Yes, I think there are. Whenever there is some spatial locality it
> >> has a decent chance of winning over a hash table, and it will win
> >> most of the time over ordered data structures like rbtrees (which
> >> perform very poorly due to the number of branches and pointer
> >> dispatches). There are plenty of hashtables in PG, e.g. for caches,
> >> locks, etc., that have a medium-high degree of locality, so I'd
> >> expect a few potential uses. When adding "tree compression" (i.e.
> >> skipping inner nodes that have a single incoming & outgoing node),
> >> radix trees can even deal quite performantly with variable-width
> >> keys.
> >
> > Good point.
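
To make the "tree compression" point above concrete for readers of the
thread: a path-compressed inner node carries the skipped key bytes
inline, roughly as in the sketch below. This is purely illustrative and
not from any patch in this thread; the ART-style "node16" layout and
all names are my assumptions.

    #include <stdint.h>

    #define RT_MAX_PREFIX 8

    /*
     * Sketch of a path-compressed radix tree inner node (ART-style).
     * "prefix" holds key bytes that would otherwise each need a
     * single-child inner node of their own, so a lookup can compare
     * them with one memcmp() instead of several pointer dispatches.
     */
    typedef struct RTNode16
    {
        uint8_t prefix_len;             /* # of key bytes skipped */
        uint8_t prefix[RT_MAX_PREFIX];  /* the skipped key bytes */
        uint8_t n_children;
        uint8_t chunk[16];              /* next key byte per child */
        struct RTNode16 *children[16];  /* child pointers */
    } RTNode16;
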
> >
> >>
> >> > On the other hand, I'm concerned that a radix tree would be
> >> > over-engineering for vacuum's dead tuple storage, since the dead
> >> > tuple storage is static data and requires only lookup operations.
> >> > So if we want to use a radix tree as dead tuple storage, I'd like
> >> > to see further use cases.
> >>
> >> I don't think we should rely on the read-only-ness. It seems pretty
> >> clear that we'd want parallel dead-tuple scans at a point not too far
> >> into the future?
> >
> > Indeed. Given that the radix tree itself has other use cases, I have
> > no concern about using a radix tree for vacuum's dead tuple storage.
> > It will be better to have one that can be used generally and has some
> > optimizations that also help vacuum's use case, rather than one that
> > is heavily optimized only for vacuum's use case.
>
> The main part of svtm that leads to the memory savings is the
> compression of many pages at once (CHUNK). It could be combined with a
> radix tree as the storage for pointers to CHUNKs.
>
> At the moment I'm benchmarking an IntegerSet replacement based on a
> trie (HAMT-like) and CHUNK compression, so the data structure could be
> used for gist vacuum as well.
>
> Since it is generic (it allows indexing the whole 64-bit key space),
> it lacks the trick used to speed up svtm. Still, on the 10x test it is
> faster than radix.
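
(For readers following along: a toy illustration of "compressing many
pages at once" is below. One chunk covers a fixed run of heap blocks
and keeps a per-block offset bitmap, so per-page overhead is paid once
per chunk rather than once per page. This is only my sketch of the
general idea; svtm's actual layout is certainly different, and all
names here are hypothetical.)

    #include <stdint.h>

    #define CHUNK_BLOCKS 32             /* heap blocks per chunk */
    #define MAX_OFFSETS  291            /* ~MaxHeapTuplesPerPage, 8kB pages */
    #define BITMAP_WORDS ((MAX_OFFSETS + 63) / 64)

    typedef struct TidChunk
    {
        uint32_t first_block;           /* block number of slot 0 */
        uint32_t present;               /* bit i set => block i has dead TIDs */
        uint64_t offsets[CHUNK_BLOCKS][BITMAP_WORDS];
    } TidChunk;

    /* Does the chunk contain (block, offset)? */
    static inline int
    chunk_lookup(const TidChunk *c, uint32_t block, uint16_t offset)
    {
        uint32_t slot = block - c->first_block;

        if (slot >= CHUNK_BLOCKS || !(c->present & ((uint32_t) 1 << slot)))
            return 0;
        return (int) ((c->offsets[slot][offset / 64] >> (offset % 64)) & 1);
    }
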

BTW, how does svtm work when we add two sets of dead tuple TIDs to one
svtm? Each set of dead tuple TIDs is unique, but the two sets could
contain TIDs with different offsets on the same block. The case I
imagine is the idea discussed in this thread[1]: we store the collected
dead tuple TIDs somewhere and skip index vacuuming for some reason (the
index skipping optimization, failsafe mode, interruptions, etc.). Then,
at the next lazy vacuum, we load the dead tuple TIDs and start to scan
the heap. During the heap scan in the second lazy vacuum, it's possible
that new dead tuples will be found on pages that we already stored in
svtm during the first lazy vacuum. How can we efficiently update the
chunk in svtm?
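
With the plain per-block bitmaps sketched above, the merge this
scenario needs would just be a bitwise OR, along these lines (again
only a sketch, reusing the hypothetical TidChunk layout):

    /* Merge src into dst; assumes both cover the same block range. */
    static void
    chunk_merge(TidChunk *dst, const TidChunk *src)
    {
        for (int slot = 0; slot < CHUNK_BLOCKS; slot++)
        {
            if (!(src->present & ((uint32_t) 1 << slot)))
                continue;
            dst->present |= (uint32_t) 1 << slot;
            for (int w = 0; w < BITMAP_WORDS; w++)
                dst->offsets[slot][w] |= src->offsets[slot][w];
        }
    }

The difficulty I see is that a chunk encoded more compactly than a
plain bitmap cannot simply be OR-ed in place like this.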

Regards,

[1] https://www.postgresql.org/message-id/CA%2BTgmoZgapzekbTqdBrcH8O8Yifi10_nB7uWLB8ajAhGL21M6A%40mail.gmail.com

--
Masahiko Sawada
EDB: https://www.enterprisedb.com/
