LARGE db dump/restore for upgrade question

From: Philip Crotwell <crotwell(at)seis(dot)sc(dot)edu>
To: pgsql-general(at)postgresql(dot)org
Subject: LARGE db dump/restore for upgrade question
Date: 2001-08-14 16:02:03
Message-ID: Pine.GSO.4.10.10108141129130.12211-100000@tigger.seis.sc.edu
Lists: pgsql-general


Hi

I have a very large database of seismic data. It is about 27 GB now, and
growing at a rate of roughly 1 GB every 3-4 days. I am running
postgres 7.1.2. I might try to upgrade to 7.2 when it comes out, but I
don't know if 7.3 will be possible for me because of the
pg_dump/pg_restore problem. In a little over a year the database will
probably pass the halfway point on my RAID, and at that point there won't
be room to hold both the live database and a full dump of it, so an
upgrade will be physically impossible. Most of the space is probably taken
up by large objects, which I am hoping will make a solution at least a
little bit easier.
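
Here is the back-of-the-envelope arithmetic behind that "little over a
year" figure, as a quick Python sketch. The array capacity below is just a
round placeholder (I'm not quoting the real size of the A1000 here), and
the growth rate is the rough figure above:

    # Rough estimate of when the database passes half the array's capacity.
    # raid_capacity_gb is a PLACEHOLDER, not the actual size of my array.
    current_size_gb = 27.0            # database size today
    growth_gb_per_day = 1.0 / 3.5     # roughly 1 GB every 3-4 days
    raid_capacity_gb = 260.0          # placeholder array size

    # A dump/restore needs room for the live cluster plus a dump of it on
    # the same storage, so half the array is the practical ceiling.
    headroom_gb = raid_capacity_gb / 2.0 - current_size_gb
    days_left = headroom_gb / growth_gb_per_day

    print(f"days until half-capacity: {days_left:.0f}")  # roughly a year here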

I am trying a pg_dump right now, and in the first 25 minutes it dumped
54 MB, which means that a full dump will take about 200 hours! I would
guess the restore would take about the same amount of time, so I would be
looking at 17 DAYS of downtime to upgrade! Maybe it will speed up later in
the dump, I don't know. And in about 3 months or so it will take me twice
that amount of time. Also, this is on a 4 processor Sun E450 with an A1000
hardware RAID, so it's not that I am using old, slow hardware. Just for
comparison, a file system dump to tape took 6 hours, and that was back
when I only had a software RAID!
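
For what it's worth, here is the arithmetic behind those numbers as a
little Python sketch; it just extrapolates the observed rate linearly,
which may well turn out to be wrong:

    # Extrapolating the full pg_dump time from the rate observed so far.
    observed_mb = 54.0
    observed_minutes = 25.0
    db_size_gb = 27.0

    rate_mb_per_hour = observed_mb / (observed_minutes / 60.0)  # ~130 MB/hour
    dump_hours = db_size_gb * 1024.0 / rate_mb_per_hour         # ~213 hours

    # Assuming the restore runs at roughly the same rate:
    total_days = 2.0 * dump_hours / 24.0                        # ~17-18 days
    print(f"dump ~{dump_hours:.0f} h, dump+restore ~{total_days:.0f} days")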

So, my question is, is it likely that one day postgres will no longer
require dump/restores for upgrades? I would assume that there will always
be a need to tweak the internal layout of files, but I wonder if there
isn't a way to do this "in place" or at least to allow a file system move
of the large objects without requiring them to be dumped as well?
Even better would be if each new version of postgres could read the
immediately previous version's tables, and could convert them in the
background. Maybe just dreaming here. :)

Could something related to making the upgrade less painful for very large
databases be added to the ToDo list even if it isn't a high priority?

Not that I am complaining, postgres seems to handle this data volume quite
well, and it is certainly worth every dollar I didn't pay for it. :)

Any suggestion on how to prepare for the next upgrade would be
appreciated.

thanks,
Philip
