Re: backup manifests

From: Mark Dilger <mark(dot)dilger(at)enterprisedb(dot)com>
To: Stephen Frost <sfrost(at)snowman(dot)net>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Suraj Kharage <suraj(dot)kharage(at)enterprisedb(dot)com>, tushar <tushar(dot)ahuja(at)enterprisedb(dot)com>, Rajkumar Raghuwanshi <rajkumar(dot)raghuwanshi(at)enterprisedb(dot)com>, Rushabh Lathia <rushabh(dot)lathia(at)gmail(dot)com>, Tels <nospam-pg-abuse(at)bloodgate(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Jeevan Chalke <jeevan(dot)chalke(at)enterprisedb(dot)com>, vignesh C <vignesh21(at)gmail(dot)com>
Subject: Re: backup manifests
Date: 2020-03-26 20:38:13
Message-ID: 371EDBE7-FE4E-451A-84F6-3BB3DDF75132@enterprisedb.com
Lists: pgsql-hackers

> On Mar 26, 2020, at 12:37 PM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
>
> Greetings,
>
> * Mark Dilger (mark(dot)dilger(at)enterprisedb(dot)com) wrote:
>>> On Mar 26, 2020, at 9:34 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
>>> I'm not actually arguing about which hash functions we should support,
>>> but rather what the default is and if crc32c, specifically, is actually
>>> a reasonable choice. Just because it's fast and we already had an
>>> implementation of it doesn't justify its use as the default. That it
>>> doesn't actually provide the check that is generally expected of
>>> CRC checksums (100% detection of single-bit errors) once the file size
>>> gets over 512MB makes me wonder if we should have it at all, yes, but it
>>> definitely makes me think it shouldn't be our default.
>>
>> I don't understand your focus on the single-bit error issue.
>
> Maybe I'm wrong, but my understanding was that detecting single-bit
> errors was one of the primary design goals of CRC and why people talk
> about CRCs of certain sizes having 'limits'- that's the size at which
> single-bit errors will no longer, necessarily, be picked up and
> therefore that's where the CRC of that size starts falling down on that
> goal.

I think I agree with all of that. I'm just not sure it is relevant. When people use CRCs to detect things *other than* transmission errors, they are in some sense using a hammer to drive a screw. At that point, the analysis of how good the hammer is, and how big a nail it can drive, no longer applies. The relevant question here is how appropriate a CRC is for our purpose. I don't know the answer to that, but single-bit error analysis doesn't seem to be the right way to answer it.
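To put a rough number on that: for corruption that looks random to the checksum (bit rot, zeroed or garbage pages, and so on), a well-mixed n-bit checksum misses the change with probability about 2^-n regardless of file size, and that figure, rather than the single-bit guarantee, seems like the relevant one for backup verification. A back-of-the-envelope illustration (plain Python, just for this email, not anything from the patch; it uses zlib's CRC-32 rather than CRC-32C only because that's what the standard library has, and keeps only 16 bits of it so that misses actually show up in a modest number of trials):

import random
import zlib

# Rough check: how often does random corruption of an 8kB page go
# undetected?  Keep only 16 bits of CRC-32 so that misses are frequent
# enough to observe; a full 32- or 64-bit checksum misses at the same
# ~2**-n rate, just correspondingly more rarely.
random.seed(0)
page = random.randbytes(8192)
good = zlib.crc32(page) & 0xFFFF

trials, misses = 500_000, 0
for _ in range(trials):
    bad = bytearray(page)
    off = random.randrange(len(bad) - 64)
    bad[off:off + 64] = random.randbytes(64)   # clobber 64 bytes with junk
    if (zlib.crc32(bytes(bad)) & 0xFFFF) == good:
        misses += 1

print(f"undetected: {misses} of {trials} (expect about {trials // 2**16})")

The point being that even a 16-bit check catches the overwhelming majority of random damage, and each additional bit roughly halves what slips through; where to draw the line is a question about cost, not about the 512MB single-bit limit.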

>> If you are sending your backup across the wire, single bit errors during transmission should already be detected as part of the networking protocol. The real issue has to be detection of the kinds of errors or modifications that are most likely to happen in practice. Which are those? People manually mucking with the files? Bugs in backup scripts? Corruption on the storage device? Truncated files? The more bits in the checksum (assuming a well designed checksum algorithm), the more likely we are to detect accidental modification, so it is no surprise if a 64-bit crc does better than 32-bit crc. But that logic can be taken arbitrarily far. I don't see the connection between, on the one hand, an analysis of single-bit error detection against file size, and on the other hand, the verification of backups.
>
> We'd like something that does a good job at detecting any differences
> between when the file was copied off of the server and when the command
> is run- potentially weeks or months later. I would expect most issues
> to end up being storage-level corruption over time where the backup is
> stored, which could be single bit flips or whole pages getting zeroed or
> various other things. Files changing size probably is one of the less
> common things, but, sure, that too.
>
> That we could take this "arbitrarily far" is actually entirely fine-
> that's a good reason to have alternatives, which this patch does have,
> but that doesn't mean we should have a default that's not suitable for
> the files that we know we're going to be storing.
>
> Consider that we could have used a 16-bit CRC instead, but does that
> actually make sense? Ok, sure, maybe someone really wants something
> super fast- but should that be our default? If not, then what criteria
> should we use for the default?

I'll answer this below....

>> From a support perspective, I think the much more important issue is making certain that checksums are turned on. A one in a billion chance of missing an error seems pretty acceptable compared to the, let's say, one in two chance that your customer didn't use checksums. Why are we even allowing this to be turned off? Is there a usage case compelling that option?
>
> The argument is that adding checksums takes more time. I can understand
> that argument, though I don't really agree with it. Certainly a few
> percent really shouldn't be that big of an issue, and in many cases even
> a sha256 hash isn't going to have that dramatic of an impact on the
> actual overall time.

I see two dangers here:

(1) The user enables checksums of some type, and due to checksums not being perfect, corruption happens but goes undetected, leaving her in a bad place.

(2) The user makes no checksum selection at all, gets checksums of the *default* type, determines it is too slow for her purposes, and instead of adjusting the checksum algorithm to something faster, simply turns checksums off; corruption happens and of course is undetected, leaving her in a bad place.

I think the risk of (2) is far worse, which makes me tend towards a default that is fast enough not to tempt anybody into disabling checksums altogether. I have no opinion about which algorithm is best suited to that purpose, because I haven't benchmarked any; I'm mostly going off what Robert said about how big an impact a heavier algorithm would have. Perhaps you'd like to run benchmarks and make a concrete proposal for another algorithm, with numbers showing the runtime changes? You mentioned up-thread that the prior timings showing a 40-50% slowdown did not account for all the relevant work, so perhaps you could address that in your benchmark and spell out what the timings include?
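To be concrete about what I mean by a benchmark: even something as small as the sketch below (plain Python, my own illustration, using hashlib's SHA-256 and zlib's plain CRC-32 as a stand-in for CRC-32C, which isn't in the standard library) shows the raw digest throughput:

import hashlib
import time
import zlib

def throughput(label, fn, data, reps=20):
    fn(data)                                   # warm-up pass
    start = time.perf_counter()
    for _ in range(reps):
        fn(data)
    secs = time.perf_counter() - start
    mb = len(data) * reps / (1024 * 1024)
    print(f"{label:>8}: {mb / secs:8.1f} MB/s")

data = bytes(64 * 1024 * 1024)                 # 64MB buffer stands in for a segment file

throughput("sha256", lambda d: hashlib.sha256(d).digest(), data)
throughput("crc32", zlib.crc32, data)

That only measures the digest step in isolation, of course; per your earlier point, the number that matters for choosing a default is the end-to-end slowdown of an actual base backup run with each checksum setting, where the hashing is overlapped with the read and transfer work the earlier 40-50% figures apparently left out.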

I don't think we should be contemplating for v13 any checksum algorithms for the default except the ones already in the options list. Doing that just derails the patch. If you want highwayhash or similar to be the default, can't we hold off until v14 and think about changing the default? Maybe I'm missing something, but I don't see any reason why it would be hard to change this after the first version has already been released.


Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
