Re: Backing up a replication set every 30 mins

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Khusro Jaleel" <mailing-lists(at)kerneljack(dot)com>, <pgsql-admin(at)postgresql(dot)org>
Subject: Re: Backing up a replication set every 30 mins
Date: 2012-02-15 14:48:17
Message-ID: 4F3B715102000025000455FB@gw.wicourts.gov
Lists: pgsql-admin

Khusro Jaleel <mailing-lists(at)kerneljack(dot)com> wrote:
> On 02/15/2012 12:58 PM, Vladimir Rusinov wrote:
>>
>> pg_dump won't block writes, thanks to MVCC. It may increase bloat
>> and it will block DDL operations (ALTER TABLE/etc), but if your
>> database is relatively small but has a high load and you need
>> frequent backups, this may be the way to go.

> Thanks Vladimir. Would a simple script with 'pg_start_backup' and
> 'pg_stop_backup', with an rsync or tar job in between, work
> equally well? I thought that was the better way to do it, rather
> than pg_dump?

The PITR-style backup you describe doesn't cause bloat or block DDL,
and if you archive the WAL files you can restore to any point in
time following the pg_stop_backup. pg_dump just gives you a
snapshot as of the start of the dump, so if you used that you would
need to start a complete dump every 30 minutes. With PITR backups
and WAL archiving you could set archive_timeout to force timely
archiving (or use streaming replication if you are on 9.0 or later)
and effectively capture incremental database *activity* to stay
up-to-date.
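The PITR-style base backup described above can be sketched roughly as
follows. This is only an illustration, not from the thread: the data
directory, database user, backup destination, and backup label are all
assumptions you would replace with your own.

```shell
#!/bin/sh
# Sketch of a PITR base backup (pre-9.1 style); assumes WAL archiving
# is already enabled. Paths and names below are hypothetical.
set -e

PGDATA=/var/lib/pgsql/data                      # assumed data directory
BACKUP_DIR=/backups/base/$(date +%Y%m%d%H%M)    # assumed destination

# Tell the server a base backup is starting; the second argument
# requests an immediate checkpoint rather than waiting for one.
psql -U postgres -c "SELECT pg_start_backup('base_backup', true);"

# Copy the cluster with rsync (tar works equally well, as the thread
# notes). pg_xlog is excluded because WAL comes from the archive.
rsync -a --exclude=pg_xlog/ "$PGDATA/" "$BACKUP_DIR/"

# End the backup; with archiving on, this also ensures the final WAL
# segment needed for a consistent restore gets archived.
psql -U postgres -c "SELECT pg_stop_backup();"
```

Pairing this with archive_mode = on, a working archive_command, and an
archive_timeout of 30 minutes in postgresql.conf bounds how far behind
the WAL archive can fall.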

Now, if 30 minutes of activity generates more WAL than the size of
the database itself, pg_dump could, as Vladimir says, still be a
good alternative.
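If pg_dump is the better fit, a half-hourly cron entry is all it
takes; the custom format (-Fc) is compressed and allows selective
restore with pg_restore. The database name and output path here are
hypothetical.

```
# Run pg_dump every 30 minutes (crontab fragment; % must be escaped).
*/30 * * * * pg_dump -Fc -f /backups/mydb_$(date +\%H\%M).dump mydb
```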

-Kevin
