Re: pg_dump versus ancient server versions

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Andrew Dunstan <andrew(at)dunslane(dot)net>, Bruce Momjian <bruce(at)momjian(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: pg_dump versus ancient server versions
Date: 2021-12-03 18:30:31
Message-ID: 2316211.1638556231@sss.pgh.pa.us
Lists: pgsql-hackers

Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com> writes:
> On 02.12.21 23:16, Andres Freund wrote:
>> I realize it's more complicated for users, but a policy based on supporting a
>> certain number of out-of-support branches calculated from the newest major
>> version is more realistic. I'd personally go for something like newest-major -
>> 7 (i.e. 2 extra releases), but I realize that others think it's worthwhile to
>> support a few more. I think there's a considerable advantage of having one
>> cutoff date across all branches.

> I'm not sure it will be clear what this would actually mean. Assume
> PG11 supports back to 9.4 (14-7) now, but when PG15 comes out, we drop
> 9.4 support. But the PG11 code hasn't changed, and PG9.4 hasn't changed,
> so it will most likely still work. Then we have messaging that is out
> of sync with reality. I can see the advantage of this approach, but the
> communication around it might have to be refined.
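[Editor's note: a minimal sketch of the "newest major minus N" arithmetic being discussed. The release sequence has to be spelled out, because the numbering scheme changed after 9.6 (9.0 .. 9.6, then 10, 11, ...), so "14 - 7" means counting back seven major releases, not literal subtraction. The window size and sequence here are taken from the example in the mail, nothing more.]

```python
# Major-release sequence; literal subtraction doesn't work across the
# 9.6 -> 10 numbering change, so count positions in this list instead.
MAJORS = ["9.0", "9.1", "9.2", "9.3", "9.4", "9.5", "9.6",
          "10", "11", "12", "13", "14", "15"]

def oldest_in_window(newest: str, window: int = 7) -> str:
    """Oldest major still inside the support window for `newest`."""
    return MAJORS[max(MAJORS.index(newest) - window, 0)]

print(oldest_in_window("14"))  # 9.4, matching Peter's "14-7" example
print(oldest_in_window("15"))  # 9.5: the cutoff moves when 15 ships
```

Note how this also illustrates Peter's objection: when "15" becomes the newest major, 9.4 falls out of the window for every branch at once, even though no branch's code changed.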

I don't find this suggestion to be an improvement over Peter's original
formulation, for two reasons:

* I'm not convinced that it saves us any actual work; as you say, the
code doesn't stop working just because we declare it out-of-support.

* There's a real-world use-case underneath here. If somewhere you've
discovered a decades-old server that you need to upgrade, and current
pg_dump won't dump from it, you would like it to be well-defined
which intermediate pg_dump versions you can use. So if 10.19 can
dump from that hoary server, it would not be nice if 10.20 can't;
nor if the documentation lies to you about that based on which minor
version you happen to consult.
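[Editor's note: the "well-defined which intermediate pg_dump versions you can use" property amounts to a lookup keyed on the pg_dump major version alone. The cutoff table below is hypothetical — the real numbers are exactly what this thread is deciding — and is here only to show that the answer should depend on the major, never on which minor (10.19 vs 10.20) you happen to have.]

```python
# Placeholder source-version cutoffs per pg_dump major release;
# purely illustrative, not the project's actual support matrix.
MAJORS = ["8.0", "8.1", "8.2", "8.3", "8.4",
          "9.0", "9.1", "9.2", "9.3", "9.4", "9.5", "9.6",
          "10", "11", "12", "13", "14", "15"]
CUTOFF = {"10": "8.0", "14": "8.0", "15": "9.2"}  # hypothetical values

def can_dump(pg_dump_major: str, server_major: str) -> bool:
    """True if this pg_dump major can dump from the given server major."""
    return MAJORS.index(server_major) >= MAJORS.index(CUTOFF[pg_dump_major])

print(can_dump("14", "8.2"))  # True under these placeholder cutoffs
print(can_dump("15", "8.2"))  # False: this server needs pg_dump 14 or older
```

Under this scheme, the upgrade path from a hoary server is: find any pg_dump major whose cutoff covers the old server, dump with it, restore into that intermediate version, then repeat toward current.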

>> I think we should explicitly limit the number of platforms we care about for
>> this purpose. I don't think we should even try to keep 8.2 compiling on AIX or
>> whatnot.

> It's meant to be developer-facing, so only for platforms that developers
> use. I think that can police itself, if we define it that way.

I agree that if you care about doing this sort of test on platform X,
it's up to you to patch for that. I think Andres' concern is about
the amount of committer bandwidth that might be needed to handle
such patches submitted by non-committers. However, based on the
experiment I just ran, I think it's not really likely to be a big deal:
there are not that many problems, and most of them just amount to
back-patching something that originally wasn't back-patched.

What's most likely to happen IMO is that committers will just start
back-patching essential portability fixes into out-of-support-but-
still-in-the-buildability-window branches, contemporaneously with
the original fix. Yeah, that does mean more committer effort,
but only for a very small number of patches.

regards, tom lane
