Re: [HACKERS] An easier way to upgrade (Was: Lots 'o patches)

From: Bruce Momjian <maillist(at)candle(dot)pha(dot)pa(dot)us>
To: matti(at)algonet(dot)se (Mattias Kregert)
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: [HACKERS] An easier way to upgrade (Was: Lots 'o patches)
Date: 1998-06-02 17:08:27
Message-ID: 199806021708.NAA13018@candle.pha.pa.us
Lists: pgsql-hackers

> How about making a file specifying what to do when upgrading from one
> version of pg to another? Then a program, let's call it 'pgconv', would
> read this file and do the conversions from the old to the new format
> using pg_dump and psql and/or some other helper programs.

We already have the migration directory, but currently it contains only
text, no scripts. During 1.*, we did supply scripts for upgrades, but the
feature changes were small.

>
> (pgconv.data):
> --------------
> #From   To      What to do
> #
> epoch   6.2     ERROR("Can not upgrade - too old version")
> 6.2     6.3     SQL("some-sql-commands-here")
>                 DELETE("obsolete-file")
>                 OLDVER_DUMPALL()        # To temp file
>                 NEWVER_LOADALL()        # From temp file
> 6.3     6.3.2   PRINT("Creating shadow passwords")
>                 SQL("create-pg_shadow")
>                 SYSTEM("chmod go-rwx pg_user")
>                 SQL("some-sql-commands")
> 6.3.2   6.4     SQL("some-commands")
>                 SYSTEM("chmod some-files")
>                 PRINT("System tables converted")
>                 SQL("some-other-commands")
>                 PRINT("Data files converted")

Interesting ideas, but in fact, all installs will probably require a new
initdb. Because of the interdependent nature of the system tables, it
is hard to make changes to them using SQL statements. What we could try
is doing a schema-only pg_dumpall, moving all the non-pg_* files to a
separate directory, running initdb, loading the dumped schema, and then
moving the data files back into place.
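
Roughly, the sequence might look like the sketch below. It is only a
sketch: the database name "mydb", the paths, and the option spellings are
guesses at a typical single-database install, it assumes the data files
are still named after their tables and databases, and the postmaster has
to be stopped and started by whatever means you normally use.

	pg_dumpall -s > /tmp/schema.sql        # schema only, no data
	# stop the old postmaster here
	mkdir /tmp/saved
	for f in $PGDATA/base/mydb/*
	do
		case `basename $f` in
		pg_*)	;;                     # leave the system tables behind
		*)	mv $f /tmp/saved/ ;;
		esac
	done
	mv $PGDATA $PGDATA.old                 # keep the old installation around
	initdb                                 # fresh system tables for the new release
	# start the new postmaster here
	psql template1 < /tmp/schema.sql       # recreate the databases and empty tables
	# stop the postmaster again, then drop the old data files over the empty ones
	mv /tmp/saved/* $PGDATA/base/mydb/

If the schema loads cleanly and the old files line up with the recreated
tables, the data should be visible again without dumping and reloading
the data itself.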

That may work. But if we change the on-disk layout of the data, as we
did when we made varchar() variable length, a dump and reload would still
be required. Vadim has made on-disk data improvements in many releases.

We could make even the complex cases work, but then we run up against
the question of whether it is wise to allocate limited development time
to migration issues.

I think requiring a new initdb and then moving the data files back into
place is our best bet.

I would be interested to see if that works. Does someone want to try
doing this with the regression test database? Do a pg_dump with data
before and after the operation, and see if the output is the same. This
is a good way to test pg_dump too.
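
Something along these lines would do for the comparison, assuming the
regression database is called "regression" and can be thrown away
afterwards:

	pg_dump regression > /tmp/before.out   # dump with data, before the operation
	#
	# ... do the schema-only dump / initdb / move-the-files-back steps here ...
	#
	pg_dump regression > /tmp/after.out    # dump with data, afterwards
	diff /tmp/before.out /tmp/after.out    # no output means the round trip was clean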

--
Bruce Momjian | 830 Blythe Avenue
maillist(at)candle(dot)pha(dot)pa(dot)us | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup. | (610) 853-3000(h)
