Re: pg_dump / pg_restore with Large Objects from 32-bit to 64-bit

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Matt Janssen" <matt(at)luggagepros(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: pg_dump / pg_restore with Large Objects from 32-bit to 64-bit
Date: 2010-03-15 19:58:13
Message-ID: 22705.1268683093@sss.pgh.pa.us
Lists: pgsql-admin

"Matt Janssen" <matt(at)luggagepros(dot)com> writes:
> When migrating our Postgres databases from 32 to 64-bit systems, including
> large binary objects, how well will this work?

> 32-bit server) pg_dump --format=c --blobs --file=backup.pg mydb
> 64-bit server) pg_restore -d mydb backup.pg

Should be fine; but remember that only gets you the contents of the one
database. You may also want pg_dumpall --globals-only to transfer
role properties and such.
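A minimal sketch of the full sequence the advice above describes, assuming the database is named `mydb` and that default connection settings reach each server (`pg_dumpall --globals-only` emits plain SQL, so it is restored with psql rather than pg_restore):

```shell
# On the 32-bit server: dump roles, tablespaces, and other cluster-wide
# settings as plain SQL (these are not included in a per-database dump)
pg_dumpall --globals-only > globals.sql

# On the 32-bit server: dump the one database, including large objects,
# in the custom archive format
pg_dump --format=c --blobs --file=backup.pg mydb

# On the 64-bit server: restore the globals first, so role ownership
# resolves, then recreate and restore the database
psql -f globals.sql postgres
createdb mydb
pg_restore -d mydb backup.pg
```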

> I'm hoping that PG's compressed custom archive format is not architecture
> dependent. Thanks!

It is not.

regards, tom lane
