From: Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>
To: Michael Paquier <michael(at)paquier(dot)xyz>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, Michael Banck <michael(dot)banck(at)credativ(dot)de>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Online verification of checksums
Date: 2019-03-04 06:05:39
Message-ID: alpine.DEB.2.21.1903040702230.8095@lancre
Lists: pgsql-hackers


Hello Michaël,

>> I agree that having a server function (extension?) to do a full checksum
>> verification, possibly bandwidth-controlled, would be a good thing. However
>> it would have side effects, such as interfering deeply with the server page
>> cache, which may or may not be desirable.
>
> In what is that different from VACUUM or a sequential scan?

Scrubbing would read all files, not only relation data, wouldn't it? I'm
unsure exactly what VACUUM does, but it is probably pretty similar.

> It is possible to use buffer ring replacement strategies in such cases
> using the normal clock-sweep algorithm, so that scanning a range of
> pages does not really impact Postgres shared buffer cache.

Good! I did not know that there was an existing strategy to avoid filling
the cache.
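To make the idea concrete, here is a minimal sketch (not PostgreSQL source;
the names BufferPool, read_normal, and bulk_read are invented for this
example) of how a small "ring" of buffers can serve a large sequential scan
while leaving most of a shared buffer cache untouched:

```python
# Illustrative sketch: a bulk read recycles a small ring of buffers,
# so at most ring_size pages of the scan ever occupy the pool at once.
# A plain FIFO stands in for the real clock-sweep algorithm.

class BufferPool:
    def __init__(self, size, ring_size):
        self.size = size            # total buffers in the pool
        self.ring_size = ring_size  # buffers dedicated to a bulk read
        self.cache = []             # pages currently buffered (FIFO order)

    def read_normal(self, page):
        # Regular access: the page may occupy any buffer in the pool.
        if page not in self.cache:
            if len(self.cache) >= self.size:
                self.cache.pop(0)   # evict the oldest page
            self.cache.append(page)

    def bulk_read(self, pages):
        # Bulk access: reuse a small ring, recycling its own buffers
        # instead of evicting the rest of the cached working set.
        ring = []
        for page in pages:
            if len(ring) >= self.ring_size:
                victim = ring.pop(0)
                self.cache.remove(victim)  # recycle the ring slot
            ring.append(page)
            self.cache.append(page)

pool = BufferPool(size=100, ring_size=8)
for p in range(50):
    pool.read_normal(p)                 # warm the cache with a working set
pool.bulk_read(range(1000, 2000))       # scan 1000 pages...
print(len(pool.cache))                  # → 58: the 50-page working set
                                        #   plus only the 8-buffer ring
```

The point the sketch makes is that the scan's footprint is bounded by the
ring size, not by the number of pages scanned, which is why a verification
pass using such a strategy need not flush the page cache.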

--
Fabien.
