Re: backup manifests

From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Mark Dilger <mark(dot)dilger(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Suraj Kharage <suraj(dot)kharage(at)enterprisedb(dot)com>, tushar <tushar(dot)ahuja(at)enterprisedb(dot)com>, Rajkumar Raghuwanshi <rajkumar(dot)raghuwanshi(at)enterprisedb(dot)com>, Rushabh Lathia <rushabh(dot)lathia(at)gmail(dot)com>, Tels <nospam-pg-abuse(at)bloodgate(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Jeevan Chalke <jeevan(dot)chalke(at)enterprisedb(dot)com>, vignesh C <vignesh21(at)gmail(dot)com>
Subject: Re: backup manifests
Date: 2020-03-26 19:37:11
Message-ID: 20200326193711.GX13712@tamriel.snowman.net
Lists: pgsql-hackers

Greetings,

* Mark Dilger (mark(dot)dilger(at)enterprisedb(dot)com) wrote:
> > On Mar 26, 2020, at 9:34 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> > I'm not actually arguing about which hash functions we should support,
> > but rather what the default is and whether crc32c, specifically, is
> > actually a reasonable choice. Just because it's fast and we already had
> > an implementation of it doesn't justify its use as the default. That it
> > doesn't actually provide the check that is generally expected of CRC
> > checksums (100% detection of single-bit errors) once the file size gets
> > over 512MB makes me wonder if we should have it at all, yes, but it
> > definitely makes me think it shouldn't be our default.
>
> I don't understand your focus on the single-bit error issue.

Maybe I'm wrong, but my understanding was that detecting single-bit
errors was one of the primary design goals of CRC, and that this is why
people talk about CRCs of a given size having 'limits': that's the data
size beyond which single-bit errors are no longer guaranteed to be
detected, and therefore where a CRC of that size starts falling down on
that goal.
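
To make that concrete, here is a minimal, purely illustrative sketch
(in Python, using zlib's CRC-32 rather than the CRC-32C implementation
under discussion) showing that flipping a single bit in a buffer
changes the checksum for that buffer:

    # Illustrative only: zlib.crc32 is CRC-32 (the zlib polynomial),
    # not CRC-32C, but it shows the kind of check being discussed.
    import zlib

    data = bytearray(b"some backup file contents " * 1000)
    original_crc = zlib.crc32(data)

    flipped = bytearray(data)
    flipped[1234] ^= 0x01                       # flip one bit
    assert zlib.crc32(flipped) != original_crc  # caught for this buffer

For this small buffer the flip is caught; the concern raised above is
about what guarantees still hold once files get very large.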

> If you are sending your backup across the wire, single bit errors during transmission should already be detected as part of the networking protocol. The real issue has to be detection of the kinds of errors or modifications that are most likely to happen in practice. Which are those? People manually mucking with the files? Bugs in backup scripts? Corruption on the storage device? Truncated files? The more bits in the checksum (assuming a well designed checksum algorithm), the more likely we are to detect accidental modification, so it is no surprise if a 64-bit crc does better than 32-bit crc. But that logic can be taken arbitrarily far. I don't see the connection between, on the one hand, an analysis of single-bit error detection against file size, and on the other hand, the verification of backups.

We'd like something that does a good job of detecting any differences
between when the file was copied off of the server and when the
verification command is run, potentially weeks or months later. I would
expect most issues to end up being storage-level corruption over time
where the backup is stored, which could be single-bit flips, whole
pages getting zeroed, or various other things. Files changing size is
probably one of the less common issues, but, sure, that too.
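
As a purely illustrative sketch of that kind of verification (the path
and recorded digest below are hypothetical, not taken from an actual
manifest), recomputing a file's SHA-256 at verification time and
comparing it to the value recorded when the backup was taken:

    import hashlib

    def file_sha256(path, bufsize=1024 * 1024):
        # Stream the file so large relation segments don't need to
        # fit in memory at once.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(bufsize):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical manifest entry recorded at backup time.
    recorded_path = "base/16384/16385"
    recorded_digest = "..."   # digest stored in the manifest

    if file_sha256(recorded_path) != recorded_digest:
        print("checksum mismatch: %s changed since the backup" % recorded_path)

Any bit flip, zeroed page, or truncation in the stored copy shows up as
a mismatch here, regardless of when it happened.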

That we could take this "arbitrarily far" is actually entirely fine;
that's a good reason to have alternatives, which this patch does have,
but it doesn't mean we should have a default that's not suitable for
the files we know we're going to be storing.

Consider that we could have used a 16-bit CRC instead, but does that
actually make sense? OK, sure, maybe someone really wants something
super fast, but should that be our default? If not, then what criteria
should we use for choosing the default?

> From a support perspective, I think the much more important issue is making certain that checksums are turned on. A one in a billion chance of missing an error seems pretty acceptable compared to the, let's say, one in two chance that your customer didn't use checksums. Why are we even allowing this to be turned off? Is there a use case compelling that option?

The argument is that adding checksums takes more time. I can understand
that argument, though I don't really agree with it. Certainly a few
percent really shouldn't be that big an issue, and in many cases even a
sha256 hash isn't going to have that dramatic an impact on the actual
overall time.
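
A rough way to sanity-check that on a given machine is to time the two
algorithms over the same buffer; a sketch along those lines (again
using zlib's CRC-32 as a stand-in for CRC-32C):

    import hashlib, time, zlib

    buf = b"\x00" * (256 * 1024 * 1024)    # 256MB of data to checksum

    t0 = time.perf_counter()
    zlib.crc32(buf)
    t1 = time.perf_counter()
    hashlib.sha256(buf).hexdigest()
    t2 = time.perf_counter()

    print("crc32 : %.3fs" % (t1 - t0))
    print("sha256: %.3fs" % (t2 - t1))

This only measures raw hashing speed; against the network and disk I/O
of taking the backup itself, the difference is a much smaller fraction
of the overall time, which is the point above.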

Thanks,

Stephen
