> > ... if we are a little bit careful not to
> > break existing on disk structures, or to make things downward
> > compatible.
> > For example, if we added a b-tree clustered index access method,
> > this should not invalidate all existing tables and indexes, they
> > just couldn't take advantage of it until rebuilt.
> > On the other hand, if we decided to change to say 64 bit oids, I can
> > see a reload being required.
> > I guess that in our situation we will occasionally have changes
> > that require a dump/load. But this should really only be required
> > for the addition of a major feature that offers enough benefit to
> > the user that they can see that it is worth the pain.
> > Without knowing the history, the impression I have formed is that we
> > have sort of assumed that each release will require a dump/load to
> > do the upgrade. I would like to see us adopt a policy of trying to
> > avoid this unless there is a compelling reason to make an exception.
We tried pretty hard to do this at the start of the v6.x releases, and
failed. A few of the reasons as I recall:
1) most changes/improvements involve changes to one or more system
catalogs
2) postgres does not allow updates/inserts to at least some system
catalogs (perhaps because of interactions with the compiled catalog
definitions)
3) system catalogs appear in every database directory, so all databases
would need to be upgraded
> How about making a file specifying what to do when upgrading from one
> version of pg to another? Then a program, let's call it 'pgconv',
> would read this file and do the conversions from the old to the new
> format using pg_dump and psql and/or some other helper programs.
> pgconv should be able to skip versions (upgrade from 6.2 to 6.4 for
> example, skipping 6.2.1, 6.3 and 6.3.2) by simply going through all
> steps from version to version.
> Wouldn't this be much easier than having to follow instructions
> written in HRF? Nobody could mess up their data, because the
> program would always do the correct conversions.
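The version-skipping behavior described above can be sketched quickly. Nothing like this exists yet; the version ordering, the per-release conversion steps, and the `upgrade_plan` helper below are all invented for illustration of how a pgconv-style tool might chain every intermediate conversion between two releases:

```python
# Ordered release history (assumed for this example).
VERSIONS = ["6.2", "6.2.1", "6.3", "6.3.2", "6.4"]

# Hypothetical conversion steps keyed by the version being upgraded TO.
# In a real pgconv these would come from the conversion file, and each
# entry would be a pg_dump/psql invocation or other helper program.
STEPS = {
    "6.2.1": [],                                  # no on-disk change
    "6.3":   ["pg_dump old cluster", "initdb", "psql -f dump.sql"],
    "6.3.2": [],                                  # no on-disk change
    "6.4":   ["rebuild clustered indexes"],
}

def upgrade_plan(old, new):
    """Collect every conversion step between old and new, in order,
    walking through each intermediate release."""
    i, j = VERSIONS.index(old), VERSIONS.index(new)
    plan = []
    for version in VERSIONS[i + 1 : j + 1]:
        plan.extend(STEPS[version])
    return plan
```

Upgrading 6.2 to 6.4 then simply concatenates the 6.3 and 6.4 steps, and the "skipped" point releases contribute nothing because their step lists are empty.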
This will be a good bit of work, and would be nice to have but we'd
probably need a few people to take this on as a project. Right now, the
most active developers are already spending more time than they should
working on Postgres :)
I haven't been too worried about this, but then I don't run big
databases which need to be upgraded. Seems the dump/reload frees us to
make substantial improvements with each release without a huge burden of
ensuring backward compatibility. At the prices we charge, it might be a
good tradeoff for users...
> Btw, does pg_dump quote identifiers? CREATE TABLE "table"
> ("int" int, "char" char) for example? I know it did not
> use to, but perhaps it does now?
If it doesn't yet (I assume it doesn't), I'm planning on looking at it
for v6.4. Or do you want to look at it, Bruce? We should be looking to
have all identifiers double-quoted, to preserve case, reserved words,
and weird characters in names.
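For reference, the quoting rule itself is small: wrap the identifier in double quotes and double any embedded double quotes. A minimal sketch (not pg_dump's actual code, just the rule it would need to apply):

```python
def quote_ident(name):
    """Delimit an SQL identifier so case, reserved words, and odd
    characters survive: embedded double quotes are doubled, then the
    whole name is wrapped in double quotes."""
    return '"' + name.replace('"', '""') + '"'
```

So a column named `int` comes out as `"int"`, and a table whose name contains a double quote still round-trips.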