Re: backup manifests

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Stephen Frost <sfrost(at)snowman(dot)net>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Suraj Kharage <suraj(dot)kharage(at)enterprisedb(dot)com>, tushar <tushar(dot)ahuja(at)enterprisedb(dot)com>, Rajkumar Raghuwanshi <rajkumar(dot)raghuwanshi(at)enterprisedb(dot)com>, Rushabh Lathia <rushabh(dot)lathia(at)gmail(dot)com>, Tels <nospam-pg-abuse(at)bloodgate(dot)com>, David Steele <david(at)pgmasters(dot)net>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Jeevan Chalke <jeevan(dot)chalke(at)enterprisedb(dot)com>, vignesh C <vignesh21(at)gmail(dot)com>
Subject: Re: backup manifests
Date: 2020-03-27 18:13:17
Message-ID: CA+TgmoYEA16Omg8_LoaFikeSvz+Fq5=KX2gcPPPNE=fBCbcchw@mail.gmail.com
Lists: pgsql-hackers

On Fri, Mar 27, 2020 at 1:06 AM Andres Freund <andres(at)anarazel(dot)de> wrote:
> > Like, suppose we change the default from CRC-32C to SHA-something. On
> > the upside, the error detection rate will increase from 99.9999999+%
> > to something much closer to 100%.
>
> FWIW, I don't buy the relevancy of 99.9999999+% at all. That's assuming
> a single bit error (at relevant lengths, before that it's single burst
> errors of a greater length), which isn't that relevant for our purposes.
>
> That's not to say that I don't think a CRC check can provide value. It
> does provide a high likelihood of detecting enough errors, including
> coding errors in how data is restored (not unimportant), that you'd
> otherwise likely not find out about soon.

So, I'm glad that you think a CRC check gives a sufficiently good
chance of detecting errors, but I don't understand your objection to
the percentage. Stephen just objected to it again, too:

On Thu, Mar 26, 2020 at 4:44 PM Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> > I mean, the property that I care about is the one where it detects
> > better than 999,999,999 errors out of every 1,000,000,000, regardless
> > of input length.
>
> Throwing these kinds of things around I really don't think is useful.

...but I don't understand his reasoning, or yours.

My reasoning for thinking that the number is accurate is that a 32-bit
checksum has 2^32 possible results. If all of those results are
equally probable, then the probability that two files with unequal
contents produce the same result is 2^-32. This does assume that the
hash function is perfect, which no hash function is, so the actual
probability of a collision is likely higher. But if the hash function
is pretty good, it shouldn't be all that much higher. Note that I am
making no assumptions here about how many bits are different, nor am I
making any assumption about the length of a file. I am simply saying
that an n-bit checksum should detect a difference between two files
with a probability of roughly 1-2^{-n}, modulo the imperfections of
the hash function. I thought that this was a well-accepted fact that
would produce little argument from anybody, and I'm confused that
people seem to feel otherwise.
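
To make that concrete, here's a quick throwaway simulation (nothing
from the tree; it uses Python's zlib.crc32, which is plain CRC-32
rather than the CRC-32C we actually use, and truncates it to 12 bits
purely so that collisions show up in a short run):

import os, zlib

N_BITS = 12                    # pretend we kept only a 12-bit checksum
MASK = (1 << N_BITS) - 1
trials, collisions = 200000, 0

for _ in range(trials):
    a = os.urandom(64)
    b = os.urandom(64)
    # two random, unequal "files"; count how often the truncated CRC agrees
    if a != b and (zlib.crc32(a) & MASK) == (zlib.crc32(b) & MASK):
        collisions += 1

print("observed collision rate:", collisions / trials)
print("expected 2^-%d:" % N_BITS, 2 ** -N_BITS)

The observed rate comes out close to 2^-12; the 32-bit case is the
same argument with a much smaller target.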

One explanation that would make sense to me is if somebody said, well,
the nature of this particular algorithm means that, although values
are uniformly distributed in general, the kinds of errors that are
likely to occur in practice are likely to cancel out. For instance, if
you imagine trivial algorithms such as adding or xor-ing all the
bytes, adding zero bytes doesn't change the answer, and neither do
transpositions. However, CRC is, AIUI, designed to be resistant to
such problems. Your remark about large blocks of zero bytes is
interesting to me in this context, but in a quick search I couldn't
find anything stating that CRC was weak for such use cases.
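
Just to illustrate the contrast I have in mind, here's another
throwaway sketch (again zlib.crc32, i.e. plain CRC-32 rather than
CRC-32C, and a made-up xor_checksum helper):

import zlib
from functools import reduce

def xor_checksum(data):
    # toy checksum: xor of all the bytes
    return reduce(lambda acc, b: acc ^ b, data, 0)

original   = b"hello world"
padded     = original + b"\x00" * 8    # zero bytes appended
transposed = b"hello wolrd"            # two bytes swapped

for name, data in [("original", original), ("padded", padded),
                   ("transposed", transposed)]:
    print("%-10s xor=%3d crc32=%08x"
          % (name, xor_checksum(data), zlib.crc32(data)))

The xor column is identical for all three inputs, while the CRC
changes under both kinds of corruption; that's the kind of resistance
I was referring to.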

The old thread about switching from 64-bit CRC to 32-bit CRC had a
link to a page which has subsequently been moved to here:

https://www.ece.unb.ca/tervo/ee4253/crc.shtml

Down towards the bottom, it says:

"In general, bit errors and bursts up to N-bits long will be detected
for a P(x) of degree N. For arbitrary bit errors longer than N-bits,
the odds are one in 2^{N} that a totally false bit pattern will
nonetheless lead to a zero remainder."

Which I think is the same thing I'm saying: the chances of failing to
detect an error with a decent N-bit checksum ought to be about
2^{-N}. If that's not right, I'd really like to understand why.
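
(For concreteness, working out the arithmetic: 2^{-32} is about
2.3 x 10^{-10}, i.e. roughly one undetected error in four billion,
which is where the better-than-999,999,999-out-of-1,000,000,000
figure comes from; that figure corresponds to a failure rate of only
10^{-9}. A 64-bit checksum would take it to about 5 x 10^{-20}, and
SHA-256 to something on the order of 10^{-77}.)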

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
