Re: better page-level checksums

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Aleksander Alekseev <aleksander(at)timescale(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Stephen Frost <sfrost(at)snowman(dot)net>, Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com>, Peter Geoghegan <pg(at)bowt(dot)ie>, Andrey Borodin <x4mmm(at)yandex-team(dot)ru>
Subject: Re: better page-level checksums
Date: 2022-06-13 15:01:05
Message-ID: CA+TgmoYrZJXZ_vOrAEgu5avRDUu239_A7E=9W-sN9PoVDck==w@mail.gmail.com
Lists: pgsql-hackers

On Mon, Jun 13, 2022 at 9:23 AM Aleksander Alekseev
<aleksander(at)timescale(dot)com> wrote:
> Should it necessarily be a fixed list? Why not support pluggable algorithms?
>
> An extension implementing a checksum algorithm is going to need:
>
> - several hooks: check_page_after_reading, calc_checksum_before_writing
> - register_checksum()/deregister_checksum()
> - an API to save the checksums to a separate fork
>
> By knowing the block number and the hash size the extension knows
> exactly where to look for the checksum in the fork.

I don't think a separate fork is a good option, for reasons I
articulated previously: it will be significantly more complex to
implement and will add extra I/O.

I am not completely opposed to the idea of making the algorithm
pluggable but I'm not very excited about it either. Making the
algorithm pluggable probably wouldn't be super-hard, but allowing a
checksum of arbitrary size rather than one of a short list of fixed
sizes might complicate efforts to ensure this doesn't degrade
performance. And I'm not sure what the benefit is, either. This isn't
like archive modules or custom backup targets where the feature
proposes to interact with things outside the server and we don't know
what's happening on the other side and so need to offer an interface
that can accommodate what the user wants to do. Nor is it like a
custom background worker or a custom data type which lives fully
inside the database but the desired behavior could be anything. It's
not even like column compression, where I think the same small set
of strategies is probably fine for everybody but some people think
that customizing the behavior by datatype would be a good idea. All
it's doing is taking a fixed-size block of data and checksumming it. I
don't see that as something where there are a lot of interesting
things to experiment with from an extension point of view.

--
Robert Haas
EDB: http://www.enterprisedb.com
