From: Nathan Bossart <nathandbossart(at)gmail(dot)com>
To: Peter Geoghegan <pg(at)bowt(dot)ie>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Reducing the WAL overhead of freezing in VACUUM by deduplicating per-tuple freeze plans
Date: 2022-09-21 20:13:58
Message-ID: 20220921201358.GA456274@nathanxps13
Lists: pgsql-hackers
On Tue, Sep 20, 2022 at 03:12:00PM -0700, Peter Geoghegan wrote:
> On Mon, Sep 12, 2022 at 2:01 PM Peter Geoghegan <pg(at)bowt(dot)ie> wrote:
>> I'd like to talk about one such technique on this thread. The attached
>> WIP patch reduces the size of xl_heap_freeze_page records by applying
>> a simple deduplication process.
>
> Attached is v2, which I'm just posting to keep CFTester happy. No real
> changes here.
This idea seems promising. Since you called the patch a work-in-progress,
I'm curious what else you are planning to do with it.
As I'm reading this thread and the patch, I'm finding myself wondering if
it's worth exploring using wal_compression for these records instead. I
think you've essentially created an efficient compression mechanism for
this one type of record, but I'm assuming that lz4/zstd would also yield
some rather substantial improvements for this kind of data. Presumably a
generic WAL record compression mechanism could be reused for other large
records, too. That could be much easier than devising a deduplication
strategy for every record type.
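To make the trade-off concrete, here is a toy sketch (Python, not PostgreSQL internals; the 12-byte plan layout, field values, and tuple counts are all made up for illustration) comparing the size of a naive one-plan-per-tuple record, a deduplicated record that stores each distinct plan once plus a per-tuple index, and generic compression applied to the naive form:

```python
import struct
import zlib

# Toy model of a per-page freeze record. Each "freeze plan" is
# (xmax, t_infomask, t_infomask2, frzflags), packed into 12 bytes.
# On a real page most tuples end up with an identical plan, which is
# what makes both deduplication and generic compression effective.

def naive_record(plans):
    # One full plan per tuple (roughly the pre-patch shape).
    return b"".join(struct.pack("<IHHI", *p) for p in plans)

def deduped_record(plans):
    # Store each distinct plan once, then a 2-byte reference per tuple
    # (a simplified version of the deduplication idea on this thread).
    uniq = sorted(set(plans))
    index = {p: i for i, p in enumerate(uniq)}
    body = b"".join(struct.pack("<IHHI", *p) for p in uniq)
    refs = b"".join(struct.pack("<H", index[p]) for p in plans)
    return body + refs

# 200 tuples, only 2 distinct plans -- the common case VACUUM produces.
plans = [(1000, 0x0800, 0, 1)] * 150 + [(1002, 0x0800, 0, 1)] * 50

naive = naive_record(plans)
dedup = deduped_record(plans)
print(len(naive), len(dedup), len(zlib.compress(naive)))
```

In this contrived example the deduplicated record is a small fraction of the naive one, and zlib on the naive bytes shrinks them further still; a generic compression hook would get the latter benefit for every record type, at the cost of decompression work on replay that the deduplicated format avoids.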
--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com