
Re: Backing up a replication set every 30 mins

From: Khusro Jaleel <mailing-lists(at)kerneljack(dot)com>
To: pgsql-admin(at)postgresql(dot)org, Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Subject: Re: Backing up a replication set every 30 mins
Date: 2012-02-15 16:05:32
Message-ID:
Lists: pgsql-admin
On 02/15/2012 03:39 PM, Kevin Grittner wrote:
> Khusro Jaleel <mailing-lists(at)kerneljack(dot)com> wrote:
>> Sounds like my pg_start/rsync/pg_stop script solution every 30
>> mins might be better then, as long as the jobs don't overlap :-)
> That sounds like it's probably overkill.  Once you have your base
> backup, you can just accumulate WAL files.  We do a base backup once
> per week and keep the last two base backups plus all WAL files from
> the start of the first one.  We can restore to any particular point
> in time after that earlier base backup.  I've heard of people
> happily going months between base backups, and just counting on WAL
> file replay, although I'm slightly too paranoid to want to go that
> far.
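
Schedule-wise, the scheme Kevin describes could be wired up with cron along
these lines (a sketch only; the script names and paths are hypothetical, and
the WAL segments themselves are shipped continuously by the server's
archive_command rather than by cron):

```
# crontab sketch -- weekly base backup; script names/paths are hypothetical.
0 2 * * 0   /usr/local/bin/take_base_backup.sh
# keep the last two base backups plus all WAL since the older one started:
30 2 * * 0  /usr/local/bin/prune_backups.sh
```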

That's exactly what I was trying to accomplish; however, I tried to do a 
base backup every day with WAL archives accumulating during the day. This 
worked fine in testing, but once I set it up and attached the Java 
front-ends to the DB, there were *so* many archive files written to disk 
that the "rotate" job that runs every morning to do a new base backup 
failed: there were thousands upon thousands of archive files in the 
archive dir and it couldn't delete them. Why this happened I'm not sure; 
I thought setting archive_timeout = 30 would create only one archive 
file every 30 minutes, but I was wrong. The Java application itself was 
pretty much idle the whole time, though I'm not sure whether the ORM it 
uses was writing something to the DB every now and then, causing the 
archives to be *flushed* to disk much sooner than every 30 minutes.
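
As an aside, a cleanup job that fails once a directory holds thousands of
files is often hitting the shell's argument-length limit when it expands a
glob like `rm /archive/*`; streaming the deletions through find(1) avoids
that. A sketch (the real archive path is hypothetical, so the demo below
runs against a temp directory it creates itself):

```shell
#!/bin/sh
# Sketch: prune a WAL archive dir containing many thousands of files.
# "rm $DIR/*" puts every filename onto one command line and can fail with
# "Argument list too long"; "find -type f -delete" streams them instead.
DIR=$(mktemp -d)              # stand-in for the real archive directory
for i in $(seq 1 2000); do
    : > "$DIR/seg_$i"         # fake archived WAL segments
done
find "$DIR" -type f -delete   # in production, add e.g. -mtime +7
                              # to keep the most recent week of WAL
echo "$(ls "$DIR" | wc -l) files remain"
rmdir "$DIR"
```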

> Not exactly.  I was saying that if you have a very unusual situation
> where the database is very small but has very high volumes of
> updates (or inserts and deletes) such that it stays very small while
> generating a lot of WAL, it is within the realm of possibility that
> a pg_dump every 30 minutes could be your best option.  I haven't
> seen such a database yet, but I was conceding the possibility that
> such could exist.

Yes, this is a possibility; thanks for clarifying it. The database won't 
be very big even after some months, I think, so I might do it this way. 
However, I would prefer to get the first PITR solution working properly. 
If it can be forced to write archives *only* every 30 minutes, I would be 
very pleased. But is it possible that, because of the constant 
replication to the slave, this can never be accomplished on the master?
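
One detail worth double-checking here: per the PostgreSQL documentation,
archive_timeout is specified in *seconds*, so a setting of 30 forces a
segment switch every 30 seconds, not every 30 minutes, which by itself
would explain thousands of archive files. A postgresql.conf sketch (the
archive path is hypothetical):

```
# postgresql.conf sketch -- archive destination path is hypothetical
archive_mode = on
archive_timeout = 1800   # in seconds: 1800 = 30 minutes (30 = 30 seconds)
archive_command = 'cp %p /mnt/archive/%f'
```

Note that a segment is also archived as soon as it fills (16 MB by
default), so busy periods can still produce more than one file per half
hour; archive_timeout only caps the maximum wait.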

pgsql-admin by date

Next: From: Ian Lea, Date: 2012-02-15 16:19:26, Subject: Re: Backing up a replication set every 30 mins
Previous: From: Kevin Grittner, Date: 2012-02-15 15:39:43, Subject: Re: Backing up a replication set every 30 mins
