Re: (A) native Windows port

From: Lamar Owen <lamar(dot)owen(at)wgcr(dot)org>
To: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>, Hannu Krosing <hannu(at)tm(dot)ee>
Cc: Jan Wieck <JanWieck(at)Yahoo(dot)com>, Christopher Kings-Lynne <chriskl(at)familyhealth(dot)com(dot)au>, HACKERS <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: (A) native Windows port
Date: 2002-07-05 16:39:13
Message-ID: 200207051239.13520.lamar.owen@wgcr.org
Lists: pgsql-general pgsql-hackers

On Wednesday 03 July 2002 12:09 pm, Bruce Momjian wrote:
> Hannu Krosing wrote:
> > AFAIK I can run as many backends as I like (up to some practical limit)
> > on the same computer at the same time, as long as they use different
> > ports and different data directories.

> We don't have an automated system for doing this. Certainly it is done
> all the time.
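
As an aside, running two clusters side by side really is just a matter of
distinct data directories and ports; the paths and port numbers below are
purely illustrative:

----
initdb -D /var/lib/pgsql/data-a
initdb -D /var/lib/pgsql/data-b
postmaster -D /var/lib/pgsql/data-a -p 5432 &
postmaster -D /var/lib/pgsql/data-b -p 5433 &
----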

Good. Dialog. This is better than what I am used to when I bring up
upgrading. :-)

Bruce, pg_upgrade isn't as kludgey as what I have been doing with the RPMset
for these nearly three years.

No, what I envisioned was a standalone dumper that can produce dump output
without having a backend at all. If this dumper knows about the various
binary formats, and knows how to get my data into a form I can then restore
reliably, I will be satisfied. If it can be easily automated, so much the
better. Doing it table by table would be OK as well.

I'm looking for a sequence such as:

----
PGDATA=location/of/data/base
TEMPDATA=location/of/temp/space/on/same/file/system

mv $PGDATA/* $TEMPDATA
initdb -D $PGDATA
pg_dbdump $TEMPDATA | pg_restore {with its associated options, etc}
----

With an rm -rf of $TEMPDATA much further down the pike.....
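
To make the "much further down the pike" part concrete, I would gate the
rm -rf on the restore actually succeeding -- roughly like this, where
pg_dbdump is of course still hypothetical, the paths are only examples, and
the pg_restore options are placeholders:

----
#!/bin/sh
set -e

PGDATA=/var/lib/pgsql/data
TEMPDATA=/var/lib/pgsql/data.old        # same filesystem, so the mv is cheap

mkdir -p $TEMPDATA
mv $PGDATA/* $TEMPDATA
initdb -D $PGDATA

# Hold off on any rm -rf until the restore has demonstrably succeeded.
if pg_dbdump $TEMPDATA | pg_restore -d template1; then   # options are placeholders
    echo "restore complete; $TEMPDATA can be removed after verification"
else
    echo "restore failed; old data is untouched in $TEMPDATA" >&2
    exit 1
fi
----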

Keys to this working:
1.) Must not require the old version's backend executable. There are a number
of reasons for this, but the biggest is the way most upgrades work in
practice -- the old executables are typically gone by the time the new package
is installed.

2.) Uses the new version's pg_dbdump. This dumper can be tailored to provide
the input pg_restore wants to see. The dump-and-restore sequence has always had
dumped-data version mismatch as its biggest problem -- there have been releases
where you had to install the new version's pg_dump and run it against the old
backend. This is unacceptable in the real world of binary packages.
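
In packaging terms I picture the %post script (or an upgrade helper it
installs) keying off the PG_VERSION file in the old data directory to decide
whether a dump/restore is needed at all. A sketch only, with pg_dbdump still
imaginary, the paths illustrative, and the version numbers just examples:

----
#!/bin/sh
OLDDATA=/var/lib/pgsql/data.old
NEWVERSION=7.3                           # whatever version the new package ships

OLDVERSION=`cat $OLDDATA/PG_VERSION`     # every cluster records its format here

if [ "$OLDVERSION" = "$NEWVERSION" ]; then
    echo "data directory is already in $NEWVERSION format; nothing to do"
else
    echo "converting $OLDVERSION data with the $NEWVERSION dumper"
    pg_dbdump $OLDDATA | pg_restore -d template1   # new-version tools on both ends
fi
----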

One other usability note: why can't the postmaster perform the steps of an
initdb when -D points to an empty directory? It's not that much code, is it?
(I know that one extra step isn't backbreaking, but I'm looking at this from a
rank newbie's point of view -- or at least I'm trying to look at it that way,
as it's been a while since I was a rank newbie at PostgreSQL.) Oh well, just
a random thought.
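
A wrapper script can of course fake it today; the whole idea boils down to a
few lines of shell (the path below is purely illustrative):

----
#!/bin/sh
PGDATA=/var/lib/pgsql/data

# PG_VERSION appears only after initdb has run, so its absence means no cluster.
if [ ! -f $PGDATA/PG_VERSION ]; then
    echo "no cluster found in $PGDATA; running initdb first"
    initdb -D $PGDATA
fi

exec postmaster -D $PGDATA
----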

But I believe a backend-independent data dumper would be very useful in many
contexts, particularly those where a backend cannot be run for whatever
reason, but you need your data (corrupted system catalogs, high system load,
whatever). Upgrading is just one of those contexts.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11
