

From: Lamar Owen <lamar(dot)owen(at)wgcr(dot)org>
To: Bruce Momjian <maillist(at)candle(dot)pha(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: [HACKERS] PG_UPGRADE status?
Date: 1999-09-08 19:35:22
Lists: pgsql-hackers
Bruce Momjian wrote:
> Lamar Owen wrote:
> > If I were a better C coder, and had more experience with the various
> > versions' on-disk formats, I'd be happy to try to tackle it myself.
> > But, I'm not that great of a C coder, nor do I know the data structures
> > well enough.  Oh well.
> You would have to convert tons of rows of data in raw format.  Seems
> like dump/reload would be easier.

For normal situations, it is.  However, in an RPM upgrade that occurs as
part of an OS upgrade (say, from RedHat 6.0 to RedHat 6.1), NO daemons
can be run during a package upgrade. That doesn't seem too bad until you
realize just what an RPM upgrade does....

The nastiness gets nastier: the RPM upgrade procedure (currently)
deletes the old package contents after installing the new package
contents, removing the backend version that can read the database.  You
rpm -Uvh postgresql*.rpm across major versions, and you lose data
(technically, you don't lose the data per se, you just lose the ability
to read it...). And you possibly lose a postgresql user as a result.  I
know -- it happened to me with mission-critical data.  Fortunately, I
had been doing pg_dumpall's, so it wasn't too bad -- but it sure caught
me off-guard! (admittedly, I was quite a newbie at the time....)

I am working around that: BEFORE the newer executables are brought in, I
back up (using an extremely restrictive set of commands, because this
script MIGHT be running under a floppy install image...) the executables
and libraries necessary to run the older version, along with the older
version's PGDATA.  Then, DURING the startup of the NEWER version's init
script, I run the older postmaster against the older PGDATA on a
different port, initdb with the newer version's backend, run the newer
postmaster WHILE the older one is running, and pipe the output of the
older pg_dumpall into a newer psql -e template1 session.  Then I have to
verify the integrity of the transferred data, stop the older
postmaster... etc.  Piece of cake?  Not quite.  Why not let the user do
all that?  Because most users can't fathom doing all of that themselves.

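The dance above can be sketched in shell, roughly in the order the
script performs it.  Everything here is illustrative -- the paths,
ports, variable names, and option spellings are my assumptions, not the
actual RPM scripts -- and the steps are wrapped in functions rather than
run directly:

```shell
#!/bin/sh
# Hedged sketch of the dump-and-pipe migration described above.
# All paths and ports below are illustrative assumptions.

OLD_BINDIR=${OLD_BINDIR:-/var/lib/pgsql/old-bin}   # saved old executables
OLD_PGDATA=${OLD_PGDATA:-/var/lib/pgsql/data.old}  # saved old cluster
NEW_PGDATA=${NEW_PGDATA:-/var/lib/pgsql/data}
OLD_PORT=5433    # old postmaster parked on a side port
NEW_PORT=5432    # new postmaster on the usual port

start_old_postmaster() {
    # Run the SAVED backend against the SAVED data on the spare port
    # (exact postmaster flags vary by version).
    "$OLD_BINDIR/postmaster" -D "$OLD_PGDATA" -p "$OLD_PORT" &
    OLD_PID=$!
}

migrate_data() {
    # initdb a fresh cluster with the NEW backend, start the new
    # postmaster WHILE the old one is still running, then pipe the old
    # dump straight into a new psql -e template1 session.
    PGDATA="$NEW_PGDATA" initdb
    postmaster -D "$NEW_PGDATA" -p "$NEW_PORT" &
    PGPORT=$OLD_PORT "$OLD_BINDIR/pg_dumpall" \
        | PGPORT=$NEW_PORT psql -e template1
}

stop_old_postmaster() {
    # Only after the transferred data has been verified.
    kill "$OLD_PID"
}
```

The key trick is that both postmasters run at once on different ports,
so the old pg_dumpall output never has to touch the disk.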
You can see how pg_upgrade would be useful in such a scenario, no?  I'm
not complaining, just curious.  With pg_upgrade, during the startup
script for the new version, I detect the version of the PGDATA I am
running with; if it's an older version, I first make a backup and then
pg_upgrade PGDATA.  Simpler, with less likelihood of failure, IMHO.  If
I need to do an initdb first, not a problem -- I'm already going to have
that in there for the case of a fresh install.
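The version detection in that startup script could look something like
the following.  The PG_VERSION file really does record the cluster's
version under PGDATA, but NEW_VERSION and the function name are
assumptions for illustration:

```shell
#!/bin/sh
# Hedged sketch of the init-script version check described above.
# NEW_VERSION is an illustrative placeholder for whatever major
# version the new package ships.

NEW_VERSION=${NEW_VERSION:-6.5}

needs_upgrade() {
    # Returns success (0) when PGDATA holds an older cluster that must
    # be backed up and run through pg_upgrade before starting.
    pgdata=$1
    # A fresh install has no PG_VERSION yet -> plain initdb, no upgrade.
    [ -f "$pgdata/PG_VERSION" ] || return 1
    old=$(cat "$pgdata/PG_VERSION")
    [ "$old" != "$NEW_VERSION" ]
}
```

In the init script this would gate the backup-then-pg_upgrade branch,
falling through to initdb (the fresh-install case) when no PG_VERSION
exists.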

Lamar Owen
WGCR Internet Radio

