|From:||"Andrey V(dot) Lepikhov" <a(dot)lepikhov(at)postgrespro(dot)ru>|
|To:||Peter Geoghegan <pg(at)bowt(dot)ie>|
|Cc:||PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>|
|Subject:||Re: [WIP] [B-Tree] Retail IndexTuple deletion|
According to your feedback, I have developed a second version of the patch.
In this version:
1. The high-level functions index_beginscan() and index_rescan() are no
longer used. The tree descent is made by _bt_search(), and _bt_binsrch()
is used for positioning within the page.
2. A TID list was introduced into the amtargetdelete() interface. Now only
one tree descent is needed to delete all TIDs from the list that share an
equal scan key value - the logical-duplicates deletion problem.
3. Only one WAL record is written for index tuple deletion per leaf page.
4. VACUUM can pre-sort the TID list to speed up the search for duplicates.
A background worker will come later.
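To illustrate points 2-4, here is a toy C sketch of the batching idea: sort the TID list once, then walk it in a single pass so each group of TIDs on the same page is handled together. All names (Tid, batch_deletions, tid_cmp) are hypothetical stand-ins, not the patch's actual code; PostgreSQL's real ItemPointerData and WAL machinery are far more involved.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for PostgreSQL's ItemPointerData (a TID). */
typedef struct { unsigned block; unsigned offset; } Tid;

/* Order TIDs by (block, offset) so entries on the same page are adjacent. */
static int tid_cmp(const void *a, const void *b)
{
    const Tid *x = a, *y = b;
    if (x->block != y->block) return x->block < y->block ? -1 : 1;
    if (x->offset != y->offset) return x->offset < y->offset ? -1 : 1;
    return 0;
}

/* Pre-sort the TID list, then count how many page visits (and hence
 * how many WAL records, one per page) a batched deletion would need:
 * one pass over the sorted list, starting a new batch at each new block. */
static unsigned batch_deletions(Tid *tids, size_t n)
{
    unsigned pages = 0;
    qsort(tids, n, sizeof(Tid), tid_cmp);
    for (size_t i = 0; i < n; i++)
        if (i == 0 || tids[i].block != tids[i - 1].block)
            pages++;            /* new page: one descent, one WAL record */
    return pages;
}
```

Without the pre-sort, the same four deletions could touch a page more than once; sorting is what collapses them into one visit (and one WAL record) per page.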
On 19.06.2018 22:38, Peter Geoghegan wrote:
> On Tue, Jun 19, 2018 at 2:33 AM, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com> wrote:
>> I think that we do the partial lazy vacuum using visibility map even
>> now. That does heap pruning, index tuple killing but doesn't advance relfrozenxid.
> Right, that's what I was thinking. Opportunistic HOT pruning isn't
> like vacuuming because it doesn't touch indexes. This patch adds an
> alternative strategy for conventional lazy vacuum that is also able to
> run a page at a time if needed. Perhaps page-at-a-time operation could
> later be used for doing something that is opportunistic in the same
> way that pruning is opportunistic, but it's too early to worry about that.
>> Since this patch adds an ability to delete small amount
>> of index tuples quickly, what I'd like to do with this patch is to
>> invoke autovacuum more frequently, and do the target index deletion or
>> the index bulk-deletion depending on amount of garbage and index size
>> etc. That is, it might be better if lazy vacuum scans heap in ordinary
>> way and then plans and decides a method of index deletion based on
>> costs similar to what query planning does.
> That seems to be what Andrey wants to do, though right now the
> prototype patch actually just always uses its alternative strategy
> while doing any kind of lazy vacuuming (some simple costing code is
> commented out right now). It shouldn't be too hard to add some costing
> to it. Once we do that, and once we polish the patch some more, we can
> do performance testing. Maybe that alone will be enough to make the
> patch worth committing; "opportunistic microvacuuming" can come later,
> if at all.
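The costing Sawada-san and Peter describe - picking targeted deletion when there is little garbage and bulk deletion otherwise - could be as simple as comparing the dead-tuple fraction against a threshold. A minimal sketch, assuming a hypothetical choose_strategy() helper and an illustrative 1% cutoff (a real patch would derive the cutoff from page-visit cost estimates, as query planning does):

```c
#include <assert.h>

/* Hypothetical strategy choice: retail (targeted) deletion when dead
 * TIDs are few relative to the index size, bulk index scan otherwise. */
typedef enum { DELETE_RETAIL, DELETE_BULK } DeleteStrategy;

static DeleteStrategy choose_strategy(double dead_tuples, double index_tuples)
{
    const double retail_fraction = 0.01;    /* illustrative threshold */

    if (index_tuples <= 0)
        return DELETE_BULK;                 /* no basis to estimate: be safe */
    return (dead_tuples / index_tuples < retail_fraction)
        ? DELETE_RETAIL : DELETE_BULK;
}
```

This would let autovacuum run more often cheaply, falling back to a full bulk scan only when garbage has accumulated.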
--
Andrey Lepikhov
Postgres Professional
The Russian Postgres Company