David Gould wrote:
> > >
> > > That is going to be difficult to do. We used to have some SQL scripts
> > > that could make the required database changes, but when system table
> > > structure changes, I can't imagine how we would migrate that without a
> > > dump/reload. I suppose we could keep the data/index files with user data,
> > > run initdb, and move the data files back, but we need the system table
> > > info reloaded into the new system tables.
> > If the tuple header info doesn't change, this doesn't seem that tough.
> > Just dump the pg_* tables and reload them. The system tables are
> > "small" compared to the size of user data/indexes, no?
> I like this idea.
> > Or is there some extremely obvious reason that this is harder than it
> > seems?
> > But then again, what are the odds that changes for a release will only
> > affect system tables, so as not to require a data dump? Not good, I'd say.
> Hmmm, not bad either, especially if we are a little bit careful not to
> break existing on-disk structures, or to keep things backward compatible.
> For example, if we added a b-tree clustered index access method, this should
> not invalidate all existing tables and indexes, they just couldn't take
> advantage of it until rebuilt.
> On the other hand, if we decided to change to say 64 bit oids, I can see
> a reload being required.
> I guess that in our situation we will occasionally have changes that require
> a dump/load. But this should really only be required for the addition of a
> major feature that offers enough benefit to the user that they can see that
> it is worth the pain.
> Without knowing the history, the impression I have formed is that we have
> sort of assumed that each release will require a dump/load to do the upgrade.
> I would like to see us adopt a policy of trying to avoid this unless there
> is a compelling reason to make an exception.
How about making a file specifying what to do when upgrading from one
version of pg to another? Then a program, let's call it 'pgconv', would
read this file and do the conversions from the old to the new format
using pg_dump and psql and/or some other helper programs.
pgconv should be able to skip versions (upgrade from 6.2 to 6.4 for
example, skipping 6.2.1, 6.3 and 6.3.2) by simply going through all
steps from version to version.
Wouldn't this be much easier than having to follow instructions
written in human-readable form? Nobody could mess up their data,
because the program would always do the correct conversions.
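The version-skipping idea could be sketched roughly like this (a purely
hypothetical sketch of pgconv's chaining logic; the rule table, function
names, and action strings are invented for illustration):

```python
# Hypothetical sketch: each rule maps an old version to the next
# version plus the actions needed for that single step. Upgrading
# across several releases just walks the chain one step at a time,
# so 6.2 -> 6.4 never needs its own dedicated rule.

RULES = {
    "6.2":   ("6.3",   ["dump/reload via pg_dump and psql"]),
    "6.3":   ("6.3.2", ["chmod go-rwx pg_user"]),
    "6.3.2": ("6.4",   ["convert system tables", "convert data files"]),
}

def upgrade_path(old, new):
    """Return the list of (from, to, actions) steps from old to new."""
    steps = []
    cur = old
    while cur != new:
        if cur not in RULES:
            raise ValueError("cannot upgrade from version %s" % cur)
        nxt, actions = RULES[cur]
        steps.append((cur, nxt, actions))
        cur = nxt
    return steps

if __name__ == "__main__":
    for frm, to, actions in upgrade_path("6.2", "6.4"):
        print(">> Converting %s->%s" % (frm, to))
        for action in actions:
            print("> %s" % action)
```

The point is only that the per-step rules are local: adding a new release
means adding one entry, and every older starting point picks it up for free.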
Btw, does pg_dump quote identifiers? CREATE TABLE "table"
("int" int, "char" char) for example? I know it did not
use to, but perhaps it does now?
(Very simplified example follows):
% cd /usr/src/pgsql6.4
% pgconv /usr/local/pgsql -y
-- PgConv1.0 - PostgreSQL data conversion program --
Found old version 6.3 in /usr/local/pgsql/
Convert to 6.4 (y/n)? (yes)
>> Converting 6.3->6.3.2
> Creating shadow passwords
>> Converting 6.3.2->6.4
> System tables converted
> Data files converted
PgConv done. Now delete the old binaries, install
the new binaries with 'make install' and make sure
you have your PATH set correctly.
Please don't forget to run 'ldconfig' after
installing the new libraries.
#From To What to do
epoch 6.2 ERROR("Can not upgrade - too old version")
6.2 6.3 SQL("some-sql-commands-here")
OLDVER_DUMPALL() # To temp file
NEWVER_LOADALL() # From temp file
6.3 6.3.2 PRINT("Creating shadow passwords")
SYSTEM("chmod go-rwx pg_user")
6.3.2 6.4 SQL("some-commands")
PRINT("System tables converted")
PRINT("Data files converted")
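A reader for such a rules file might look roughly like this (illustrative
only; the keywords come from the example above, but the parsing details,
such as continuation lines being indented, are my own assumptions):

```python
import re

# Assumed format: a rule line starts with "From To ACTION(...)",
# and indented follow-up lines add more actions to that rule.
# "#" starts a comment, as in the example file.

def parse_rules(text):
    rules = []
    for line in text.splitlines():
        if not line.strip() or line.lstrip().startswith("#"):
            continue
        m = re.match(r'^(\S+)\s+(\S+)\s+(\w+\(.*\))', line)
        if m:
            rules.append({"from": m.group(1), "to": m.group(2),
                          "actions": [m.group(3)]})
        elif rules:
            # Continuation line: strip trailing comment, keep the action.
            action = line.strip().split("#")[0].strip()
            if action:
                rules[-1]["actions"].append(action)
    return rules
```

pgconv would then walk the parsed rules from the detected old version up
to the target version, executing each action in order.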
/* m */