Fast default stuff versus pg_upgrade

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Fast default stuff versus pg_upgrade
Date: 2018-06-19 14:55:10
Lists: pgsql-hackers

AFAICS, the fast-default patch neglected to consider what happens if
a database containing columns with active attmissingval entries is
pg_upgraded. I do not see any code in either pg_dump or pg_upgrade that
attempts to deal with that situation, which means the effect will be
that the "missing" values will silently revert to nulls: they're still
null in the table storage, and the restored pg_attribute entries won't
have anything saying they should be different.
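[For context, the mechanism at issue can be sketched as follows; the table and column names here are hypothetical, and this assumes a v11 server with the fast-default feature:]

```sql
-- Adding a column with a default to a non-empty table no longer
-- rewrites the table; the default is recorded in pg_attribute instead.
CREATE TABLE t (a int);
INSERT INTO t VALUES (1);
ALTER TABLE t ADD COLUMN b int DEFAULT 42;

-- atthasmissing is true and attmissingval carries the default for b,
-- while the heap tuple for the pre-existing row still has no value:
SELECT attname, atthasmissing, attmissingval
FROM pg_attribute
WHERE attrelid = 't'::regclass AND attname = 'b';

-- If a binary upgrade restores pg_attribute without attmissingval,
-- SELECT b FROM t returns NULL for the old row instead of 42.
```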

The pg_upgrade regression test fails to exercise such a case. There is
only one table in the ending state of the regression database that has
any atthasmissing columns, and it's empty :-(. If I add a table in
which there actually are active attmissingval entries, say according
to the attached patch, I get a failure in the pg_upgrade test.

This is certainly a stop-ship issue, and in fact it's bad enough
that I think we may need to pull the feature for v11. Designing
binary-upgrade support for this seems like a rather large task
to be starting post-beta1. Nor do I think it's okay to wait for
v12 to make it work; what if we have to force an initdb later in
beta, or recommend use of pg_upgrade for some manual catalog fix
after release?

regards, tom lane

Attachment Content-Type Size
ensure-fast-default-gets-tested-in-pg-upgrade.patch text/x-diff 1.4 KB

