On Tue, Mar 15, 2011 at 5:50 PM, Stephen Rees <srees(at)pandora(dot)com> wrote:
> Using PostgreSQL 9.0.x
> I cannot use pg_dump to generate a backup of a database on a hot-standby
> server, because it is, by definition, read-only.
That really makes no sense :-) You can run pg_dump against a read-only
slave; the issue people tend to run into is that the pg_dump queries
get canceled by conflicting incoming WAL changes before the dump can
finish. You can of course adjust the configs to work around this
somewhat, but eventually it becomes a problem.
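For reference, the knobs involved are the standby's query-conflict delays. Something like the following in postgresql.conf on the standby (the values here are illustrative, not recommendations) lets long-running queries hold off replay instead of being canceled:

```
# postgresql.conf on the standby -- illustrative values only
# How long replay of streamed WAL waits for conflicting queries to finish.
# -1 means wait forever, so replay can fall arbitrarily far behind.
max_standby_streaming_delay = -1
# Same idea, for WAL restored from the archive.
max_standby_archive_delay = -1
```

The trade-off is exactly the one described above: the dump survives, but the standby's replication lag grows for as long as the dump runs.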
> However, it seems that I
> can use COPY TO within a serializable transaction to create a consistent set
> of data file(s). For example,
> BEGIN TRANSACTION;
> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
> COPY t1 TO '/tmp/t1';
> COPY t2 TO '/tmp/t2';
> ... etc ...
> COPY t<n> TO '/tmp/t<n>';
> COMMIT TRANSACTION;
> I can then use pg_dump to export the corresponding database schema from the
> master DBMS.
> Is this going to scale to a multi-GB database, where it will take hours to
> export the data from all of the tables, or are there scalability issues of
> which I should be aware?
Well, basically that's it in a nutshell. You have to hold off replay
while you are doing dumps like this, so eventually that replication
delay becomes unbearable for most people (especially when it is on the
order of hours).
There are several ways to work around this. You can use filesystem
snapshots to make copies and dump from those, which is great if you
have the option. If you don't, you might want to look into OmniPITR;
it can create filesystem-level backups from a slave (not the same as a
logical export, but it might do).
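To make the original plan concrete, here is a rough sketch (hostnames, database name, table names, and paths are all placeholders, and this is untested): schema from the master with pg_dump --schema-only, data from the standby inside a single transaction so all the COPYs see one consistent snapshot. \copy is used rather than server-side COPY so the files land on the client and no superuser is needed:

```
# schema only, taken from the writable master
pg_dump --schema-only -h master.example.com mydb > /backup/schema.sql

# data, taken from the standby under one snapshot
psql -h standby.example.com mydb <<'EOF'
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
\copy t1 TO '/backup/t1.copy'
\copy t2 TO '/backup/t2.copy'
COMMIT;
EOF
```

While that psql session runs, the standby is subject to exactly the query-conflict cancellation (or replay delay) discussed above, so it only works in practice with the conflict delays raised.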