From: Jesper Krogh <jesper(at)krogh(dot)cc>
To: Greg Smith <greg(at)2ndQuadrant(dot)com>
Cc: Jeff Davis <pgsql(at)j-davis(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Enabling Checksums
Date: 2012-11-12 05:55:54
Message-ID: 50A08F6A.6030601@krogh.cc
Lists: pgsql-hackers
On 12/11/12 05:55, Greg Smith wrote:
> The only guarantee I see that we can give for online upgrades is that
> after a VACUUM CHECKSUM sweep is done, and every page is known to both
> have a valid checksum on it and have its checksum bits set, *then* any
> page that doesn't have both set bits and a matching checksum is
> garbage. Until reaching that point, any old data is suspect. The
> idea of operating in an "we'll convert on write but never convert old
> pages" can't come up with any useful guarantees about data integrity
> that I can see. As you say, you don't ever gain the ability to tell
> pages that were checksummed but have since been corrupted from ones
> that were corrupt all along in that path.
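The invariant described above can be sketched in C. This is a simplified illustration, not the actual PostgreSQL page layout or checksum algorithm: the page structure, the `PD_HAS_CHECKSUM` flag, and the FNV-style hash are all stand-ins chosen for the example. The point it shows is the two-state logic: before the conversion sweep completes, a page without the flag is merely suspect; after the sweep, any page lacking either the flag or a matching checksum is provably garbage.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical page layout -- NOT the real PostgreSQL PageHeaderData. */
#define PAGE_SIZE       8192
#define PD_HAS_CHECKSUM 0x0001   /* "checksum bit": set once a valid checksum is written */

typedef struct {
    uint16_t flags;              /* carries PD_HAS_CHECKSUM once converted */
    uint16_t checksum;           /* checksum over the page body */
    uint8_t  body[PAGE_SIZE - 4];
} Page;

/* FNV-1a hash folded to 16 bits; a stand-in for the real checksum algorithm. */
static uint16_t page_checksum(const Page *p)
{
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < sizeof p->body; i++) {
        h ^= p->body[i];
        h *= 16777619u;
    }
    return (uint16_t)(h ^ (h >> 16));
}

/* Before the sweep is complete, a page without the flag may simply not
 * have been converted yet, so we cannot call it corrupt.  After the
 * sweep, every page must have both the flag and a matching checksum;
 * anything else is garbage. */
static int page_is_valid(const Page *p, int sweep_complete)
{
    if (!(p->flags & PD_HAS_CHECKSUM))
        return !sweep_complete;      /* suspect, but not provably corrupt */
    return page_checksum(p) == p->checksum;
}
```

Note how `sweep_complete` is what turns a weak heuristic into a hard guarantee: only once every page is known to be converted does an unset flag become evidence of corruption rather than of pending conversion.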
You're right about that, but I'd just like some rough guard against
hardware/OS-related data corruption, and that is more likely to hit
data blocks constantly flying in and out of the system.

I'm currently running a +2TB database, and the capability to see some
kind of corruption earlier rather than later is a major benefit by
itself. Currently, corruption can go undetected if it happens to hit
data-only parts of the database.

But I totally agree that the scheme described, integrating it into an
autovacuum process, would be very close to ideal, even on a database
like the one I'm running.
--
Jesper