Re: pg15b2: large objects lost on upgrade

From: Andres Freund <andres(at)anarazel(dot)de>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Jonathan S(dot) Katz" <jkatz(at)postgresql(dot)org>, Peter Geoghegan <pg(at)bowt(dot)ie>, Noah Misch <noah(at)leadboat(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>, Bruce Momjian <bruce(at)momjian(dot)us>, Michael Paquier <michael(at)paquier(dot)xyz>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, Shruthi Gowda <gowdashru(at)gmail(dot)com>
Subject: Re: pg15b2: large objects lost on upgrade
Date: 2022-08-04 16:59:08
Message-ID: 20220804165908.erbxuqd6c63svf53@awork3.anarazel.de
Lists: pgsql-hackers

Hi,

On 2022-08-04 12:43:49 -0400, Robert Haas wrote:
> On Thu, Aug 4, 2022 at 10:26 AM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> > Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> > > 100 << 2^32, so it's not terrible, but I'm honestly coming around to
> > > the view that we ought to just nuke this test case.
> >
> > I'd hesitated to suggest that, but I think that's a fine solution.
> > Especially since we can always put it back in later if we think
> > of a more robust way.
>
> IMHO it's 100% clear how to make it robust. If you want to check that
> two values are the same, you can't let one of them be overwritten by
> an unrelated event in the middle of the check. There are many specific
> things we could do here, a few of which I proposed in my previous
> email, but they all boil down to "don't let autovacuum screw up the
> results".
>
> But if you don't want to do that, and you also don't want to have
> random failures, the only alternatives are weakening the check and
> removing the test. It's kind of hard to say which is better, but I'm
> inclined to think that if we just weaken the test we're going to think
> we've got coverage for this kind of problem when we really don't.
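
To make sure we're picturing the same thing: "not letting autovacuum screw up
the results" would presumably amount to something like the untested sketch
below, run against the new cluster around the comparison (this is just my
reading of one of the options mentioned, not what any patch actually does):

-- Untested sketch: keep autovacuum away from the new cluster while the
-- relfrozenxid/relminmxid comparison runs, then put the setting back.
-- Note that a worker already running when the config is reloaded will
-- still finish its current job.
ALTER SYSTEM SET autovacuum = off;
SELECT pg_reload_conf();
-- ... run the relfrozenxid/relminmxid comparison here ...
ALTER SYSTEM RESET autovacuum;
SELECT pg_reload_conf();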

Why do you think it's better to not have the test than to have a very limited
amount of fuzziness (by using the next xid as an upper limit)? What's the bug
that will reliably pass the nextxid fuzzy comparison, but not an exact
comparison?
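
For concreteness, the kind of fuzzy check I have in mind could look roughly
like the following (an untested sketch: old_horizons is a made-up table
standing in for however the pre-upgrade values get captured, and the xid
epoch is ignored, which is fine for a freshly created test cluster):

-- Untested sketch of the fuzzy comparison: the post-upgrade relfrozenxid may
-- have been advanced (e.g. by autovacuum), but it must not move backwards and
-- it must still precede the new cluster's next transaction ID.
SELECT c.relname,
       o.relfrozenxid AS old_relfrozenxid,
       c.relfrozenxid AS new_relfrozenxid
  FROM pg_class c
  JOIN old_horizons o USING (relname)
 WHERE c.relkind IN ('r', 'm', 't')
   AND (c.relfrozenxid::text::int8 < o.relfrozenxid::text::int8
        OR c.relfrozenxid::text::int8 >=
           pg_snapshot_xmax(pg_current_snapshot())::text::int8);
-- Any row returned is a relation whose relfrozenxid fell outside
-- [old value, next xid).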

Greetings,

Andres Freund
