Re: [HACKERS] pg_upgrade may be mortally wounded

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Bruce Momjian <maillist(at)candle(dot)pha(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: [HACKERS] pg_upgrade may be mortally wounded
Date: 1999-08-02 22:30:33
Message-ID: 3752.933633033@sss.pgh.pa.us
Lists: pgsql-hackers

>> I think the problem is that pg_upgrade no longer works in the presence
>> of MVCC. In particular, forcibly moving the old database's pg_log into
>> the new is probably a bad idea when there is no similarity between the
>> sets of committed transaction numbers. I suspect the reason for the
>> strange behaviors I've seen is that after the pg_log copy, the system no
>> longer believes that all of the rows in the new database's system tables
>> have been committed.

Some preliminary experiments suggest that vacuuming the new database
just before moving the data files solves the problem --- at least,
pg_upgrade seems to work then. I will commit this change, since it's
very clear that pg_upgrade doesn't work without it.
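
In other words, the order of operations in the script ends up roughly like
this (paths and database names are only illustrative; the real script does
more than this):

    # vacuum the freshly-built databases *before* pg_log is replaced,
    # so their system-table rows get marked committed on disk and no
    # longer depend on the old transaction log
    psql -d template1 -c "VACUUM"
    psql -d regression -c "VACUUM"

    # only then move the old cluster's data files and pg_log into place
    cp $OLDDATA/base/regression/*  $NEWDATA/base/regression/
    cp $OLDDATA/pg_log             $NEWDATA/pg_log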

However, I'd sure like to hear Vadim's opinion before I trust pg_upgrade
with MVCC very far...

BTW, it seems to me that it is a good idea to kill and restart the
postmaster immediately after pg_upgrade finishes. Otherwise there might
be buffers in shared memory that do not reflect the actual contents of
the corresponding pages of the relation files (now that pg_upgrade
overwrote the files with other data).
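
Concretely, I'm thinking of just doing something like this at the end of the
script (whatever stop/start mechanism is handy; the point is only that the
old shared buffers get thrown away):

    # shut the postmaster down so stale shared buffers are discarded,
    # then bring it back up against the overwritten relation files
    pg_ctl stop  -D $NEWDATA
    pg_ctl start -D $NEWDATA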

Another potential gotcha is that it'd be a really bad idea to let any
other clients connect to the new database while it's being built.

Looking at these two items together, it seems like the really safe way
for pg_upgrade to operate would be *not* to start a postmaster for the
new database until after pg_upgrade finishes; that is, the procedure
would be "initdb; pg_upgrade; start postmaster". pg_upgrade would
operate by invoking a standalone backend for initial table creation.
This would guarantee no unwanted interference from other clients
during the critical steps.
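
Roughly, the driver would then look like this (the standalone-backend
invocation and the file names are just to show the shape of the thing,
not exact syntax):

    initdb -D $NEWDATA

    # no postmaster running yet: feed the dumped schema to standalone
    # backends, so no other client can possibly connect meanwhile
    postgres -D $NEWDATA template1  < globals.sql
    postgres -D $NEWDATA regression < regression_schema.sql

    # vacuum and move the old data files / pg_log in here, as above

    # only now start a postmaster for ordinary clients
    postmaster -D $NEWDATA &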

The tricky part is that pg_dump output includes psql \connect commands,
which AFAIK are not accepted by a standalone backend. We'd have to
figure out another solution for those. Ideas?
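
For anyone who hasn't looked lately, the dump is peppered with lines along
these lines (illustrative, not verbatim pg_dump output), and a standalone
backend will just choke on the backslash commands:

    \connect template1
    CREATE DATABASE regression;
    \connect regression
    \connect - tgl
    -- ...table and index definitions follow, interspersed with more
    -- \connect - <owner> lines to switch users for ownership's sake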

regards, tom lane

PS: if you try to test pg_upgrade by running the regression database
through it, and then "vacuum analyze" the result, you will observe a
backend crash when vacuum gets to the table "c_star". This seems to be
the fault of a bug that Chris Bitmead has complained of in the past.
c_star has had a column added via inherited ALTER TABLE ADD COLUMN, and
the output of pg_dump creates a database with a different column order
for such a table than ADD COLUMN does. So, the reconstructed database
schema does not match the table data that pg_upgrade has moved in. Ugh.
But we already knew that inherited ADD COLUMN is pretty bogus. I wonder
whether we shouldn't just disable it until it can be fixed properly...
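
To make the column-order mismatch concrete, it amounts to something like
this (made-up column names, not the regression test's real schema):

    CREATE TABLE a_star (class char, aa int4);
    CREATE TABLE b_star (bb text) INHERITS (a_star);
        -- b_star's physical column order: class, aa, bb
    ALTER TABLE a_star* ADD COLUMN ee float8;
        -- ADD COLUMN appends ee, so b_star becomes: class, aa, bb, ee

    -- After dump and reload, ee is just another column of a_star, so the
    -- recreated b_star comes out as: class, aa, ee, bb.  The old b_star
    -- tuple data that pg_upgrade copies in no longer matches that layout.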
