Updating large postgresql database with blobs

From: "CAJ CAJ" <pguser(at)gmail(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Updating large postgresql database with blobs
Date: 2007-03-12 00:18:17
Message-ID: 467669b30703111718g6e045c28r1812ec256b23ebf0@mail.gmail.com
Lists: pgsql-hackers

Hello,

I didn't get any response on the GENERAL list, so I'm escalating this here.

We have several independent database servers with ~50GB+ databases running
postgres 8.0.x. We are planning to upgrade these databases to postgres
8.2.x over the weekend.

We plan to use the following steps to upgrade each server:

Dumping database
1. Dump the 8.0.x database cluster using 8.2.x pg_dumpall
% ./pg_dumpall > pgdumpall_backup.sql

2. Dump the 8.0.x database, including large objects, in compressed custom
format using 8.2.x pg_dump
% ./pg_dump -Fc -b -Z9 dbname > pgdump_lobs_backup
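Before the upgrade window, it's cheap to confirm the custom-format dump is at
least readable. A sketch (no data is restored; it only lists the archive's
table of contents):

```shell
# List the TOC of the custom-format dump; a corrupt or truncated
# archive will fail here rather than mid-restore on upgrade day.
./pg_restore -l pgdump_lobs_backup | head
```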

Restoring database
1. Initialize the 8.2.x database
% initdb -D /data/pgdata

2. Restore the cluster dump (connecting to template1; this recreates all
databases and global objects)
% ./psql -d template1 < pgdumpall_backup.sql

3. Drop dbname (recreated by the cluster restore in step 2), otherwise the
restore in step 5 will fail because dbname already exists
% dropdb dbname

4. Create fresh dbname
% createdb -O dbowner dbname

5. Restore database with lobs
% ./pg_restore -v -Fc -d dbname -e -U dbowner < pgdump_lobs_backup
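To gain some confidence that everything arrived, one option is to spot-check
counts on both servers after the restore. A sketch, with assumed hostnames
(oldhost/newhost) and a hypothetical table name:

```shell
# Compare large-object counts between the old and new servers;
# the numbers should match exactly.
psql -h oldhost -d dbname -Atc "SELECT count(DISTINCT loid) FROM pg_largeobject;"
psql -h newhost -d dbname -Atc "SELECT count(DISTINCT loid) FROM pg_largeobject;"

# Repeat per table (some_table is a placeholder):
psql -h oldhost -d dbname -Atc "SELECT count(*) FROM some_table;"
psql -h newhost -d dbname -Atc "SELECT count(*) FROM some_table;"
```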

Some of the problems we have are:
1. We are not sure whether all of the data will be available after a
dump/restore with the above process
2. The dump and restore process is far too slow to complete over the weekend
(dumping runs at approx 1GB/hr on a dual 2GHz G5 PPC with 1GB RAM and
RAID 1 disks)
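One thing we could try for the restore side is temporarily loosening the new
8.2.x cluster's postgresql.conf during the bulk load (these are common tuning
knobs, to be reverted before going live):

```
fsync = off                   # unsafe in production; acceptable for a restore we can redo
checkpoint_segments = 32      # fewer forced checkpoints during the bulk load
maintenance_work_mem = 256MB  # faster index rebuilds (the box has only 1GB RAM)
```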

What is the fastest way to upgrade postgres for large databases that have
binary objects?
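One option we are considering, sketched here with assumed hostnames, is to
skip the intermediate dump file entirely and pipe pg_dump on the old server
straight into pg_restore on the new one, halving the disk I/O:

```shell
# Stream the 8.0.x database directly into the freshly created 8.2.x
# database; both commands are the 8.2.x binaries. oldhost/newhost are
# placeholders. Note: no -Z9 -- gzip level 9 is CPU-bound on a 2GHz G5,
# and the data never hits disk as an archive anyway.
./pg_dump -h oldhost -Fc -b dbname | ./pg_restore -h newhost -d dbname -U dbowner
```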

Thanks for all your help.

