Re: backup manifests

From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Mark Dilger <mark(dot)dilger(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Suraj Kharage <suraj(dot)kharage(at)enterprisedb(dot)com>, tushar <tushar(dot)ahuja(at)enterprisedb(dot)com>, Rajkumar Raghuwanshi <rajkumar(dot)raghuwanshi(at)enterprisedb(dot)com>, Rushabh Lathia <rushabh(dot)lathia(at)gmail(dot)com>, Tels <nospam-pg-abuse(at)bloodgate(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Jeevan Chalke <jeevan(dot)chalke(at)enterprisedb(dot)com>, vignesh C <vignesh21(at)gmail(dot)com>
Subject: Re: backup manifests
Date: 2020-03-26 21:00:00
Message-ID: 20200326210000.GZ13712@tamriel.snowman.net
Lists: pgsql-hackers

Greetings,

* Mark Dilger (mark(dot)dilger(at)enterprisedb(dot)com) wrote:
> > On Mar 26, 2020, at 12:37 PM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> > * Mark Dilger (mark(dot)dilger(at)enterprisedb(dot)com) wrote:
> >>> On Mar 26, 2020, at 9:34 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> >>> I'm not actually arguing about which hash functions we should support,
> >>> but rather what the default is and if crc32c, specifically, is actually
> >>> a reasonable choice. Just because it's fast and we already had an
> >>> implementation of it doesn't justify its use as the default. Given that
> >>> it doesn't actually provide the check that is generally expected of
> >>> CRC checksums (100% detection of single-bit errors) when the file size
> >>> gets over 512MB makes me wonder if we should have it at all, yes, but it
> >>> definitely makes me think it shouldn't be our default.
> >>
> >> I don't understand your focus on the single-bit error issue.
> >
> > Maybe I'm wrong, but my understanding was that detecting single-bit
> > errors was one of the primary design goals of CRC and why people talk
> > about CRCs of certain sizes having 'limits'- that's the size at which
> > single-bit errors will no longer, necessarily, be picked up and
> > therefore that's where the CRC of that size starts falling down on that
> > goal.
>
> I think I agree with all that. I'm not sure it is relevant. When people use CRCs to detect things *other than* transmission errors, they are in some sense using a hammer to drive a screw. At that point, the analysis of how good the hammer is, and how big a nail it can drive, is no longer relevant. The relevant discussion here is how appropriate a CRC is for our purpose. I don't know the answer to that, but it doesn't seem the single-bit error analysis is the right analysis.

I disagree that it's not relevant- it's, in fact, the one thing we can
get a really clear, straightforward answer on, and that seems quite
useful to me.
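To make the property being discussed concrete, here's a small,
purely illustrative sketch. Python's stdlib only exposes plain CRC-32
(zlib.crc32), not the CRC-32C (Castagnoli) polynomial in the patch, so
it's used here as a stand-in; the behaviour shown is the same in kind:

```python
import zlib

# A single-bit flip changes the checksum and is therefore detected.
# The guarantee under discussion is that a CRC of a given width only
# promises this kind of detection up to a certain input length; past
# that limit some error patterns can collide with the clean value.
data = bytearray(b"backup manifest contents" * 1000)
clean = zlib.crc32(bytes(data))

data[137] ^= 0x01  # flip one bit somewhere in the middle
corrupted = zlib.crc32(bytes(data))

assert clean != corrupted
print(f"clean: {clean:#010x}  corrupted: {corrupted:#010x}")
```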

> >> If you are sending your backup across the wire, single bit errors during transmission should already be detected as part of the networking protocol. The real issue has to be detection of the kinds of errors or modifications that are most likely to happen in practice. Which are those? People manually mucking with the files? Bugs in backup scripts? Corruption on the storage device? Truncated files? The more bits in the checksum (assuming a well designed checksum algorithm), the more likely we are to detect accidental modification, so it is no surprise if a 64-bit crc does better than 32-bit crc. But that logic can be taken arbitrarily far. I don't see the connection between, on the one hand, an analysis of single-bit error detection against file size, and on the other hand, the verification of backups.
> >
> > We'd like something that does a good job at detecting any differences
> > between when the file was copied off of the server and when the command
> > is run- potentially weeks or months later. I would expect most issues
> > to end up being storage-level corruption over time where the backup is
> > stored, which could be single bit flips or whole pages getting zeroed or
> > various other things. Files changing size probably is one of the less
> > common things, but, sure, that too.
> >
> > That we could take this "arbitrarily far" is actually entirely fine-
> > that's a good reason to have alternatives, which this patch does have,
> > but that doesn't mean we should have a default that's not suitable for
> > the files that we know we're going to be storing.
> >
> > Consider that we could have used a 16-bit CRC instead, but does that
> > actually make sense? Ok, sure, maybe someone really wants something
> > super fast- but should that be our default? If not, then what criteria
> > should we use for the default?
>
> I'll answer this below....
>
> >> From a support perspective, I think the much more important issue is making certain that checksums are turned on. A one in a billion chance of missing an error seems pretty acceptable compared to the, let's say, one in two chance that your customer didn't use checksums. Why are we even allowing this to be turned off? Is there a usage case compelling that option?
> >
> > The argument is that adding checksums takes more time. I can understand
> > that argument, though I don't really agree with it. Certainly a few
> > percent really shouldn't be that big of an issue, and in many cases even
> > a sha256 hash isn't going to have that dramatic of an impact on the
> > actual overall time.
>
> I see two dangers here:
>
> (1) The user enables checksums of some type, and due to checksums not being perfect, corruption happens but goes undetected, leaving her in a bad place.
>
> (2) The user makes no checksum selection at all, gets checksums of the *default* type, determines it is too slow for her purposes, and instead of adjusting the checksum algorithm to something faster, simply turns checksums off; corruption happens and of course is undetected, leaving her in a bad place.

Alright, I have tried to avoid referring back to pgbackrest, but I can't
help it here.

We have never, ever, had a user come to us and complain that pgbackrest
is too slow because we're using a SHA hash. We have also had checksums
on by default since day one, and we even removed the option to disable
them in 1.0. We've never even been asked to implement some other,
faster hash or checksum.
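For scale on the "one in a billion" figure quoted up-thread, a quick
sketch (illustrative only, assuming an ideally mixing checksum where a
random corruption lands on the clean value with probability 2**-n):

```python
# Odds that a random corruption goes undetected by an n-bit checksum.
for n in (16, 32, 64):
    print(f"{n}-bit: 1 in {2**n:,}")
# 32-bit works out to 1 in 4,294,967,296 -- the "one in a billion"
# ballpark; 64-bit is about 1 in 1.8e19.
```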

> I think the risk of (2) is far worse, which makes me tend towards a default that is fast enough not to encourage anybody to disable checksums altogether. I have no opinion about which algorithm is best suited to that purpose, because I haven't benchmarked any. I'm pretty much going off what Robert said, in terms of how big an impact using a heavier algorithm would be. Perhaps you'd like to run benchmarks and make a concrete proposal for another algorithm, with numbers showing the runtime changes? You mentioned up-thread that prior timings which showed a 40-50% slowdown were not including all the relevant stuff, so perhaps you could fix that in your benchmark and let us know what is included in the timings?

I don't even know what the 40-50% slowdown numbers included. Also, the
general expectation in this community is that whoever is pushing a
given patch forward should be providing the benchmarks to justify their
choice.
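As a starting point for that kind of comparison, a rough
micro-benchmark sketch (numbers will vary widely by machine, and again
zlib.crc32 stands in for CRC-32C, which is typically faster still with
hardware support):

```python
import hashlib
import time
import zlib

# Time one pass of each algorithm over a 64 MiB buffer. This measures
# raw checksum cost only, not the surrounding I/O of an actual backup.
buf = b"\x00" * (64 * 1024 * 1024)

t0 = time.perf_counter()
zlib.crc32(buf)
crc_s = time.perf_counter() - t0

t0 = time.perf_counter()
hashlib.sha256(buf).digest()
sha_s = time.perf_counter() - t0

print(f"crc32: {crc_s:.3f}s  sha256: {sha_s:.3f}s")
```

The interesting number for the default is how these compare to the
time spent reading and copying the files themselves.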

> I don't think we should be contemplating for v13 any checksum algorithms for the default except the ones already in the options list. Doing that just derails the patch. If you want highwayhash or similar to be the default, can't we hold off until v14 and think about changing the default? Maybe I'm missing something, but I don't see any reason why it would be hard to change this after the first version has already been released.

I'd rather we default to something that we are all confident and happy
with, erring on the side of it being overkill rather than something
that we know isn't really appropriate for the data volume.

Thanks,

Stephen
