Re: Using streaming replication as log archiving

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Magnus Hagander" <magnus(at)hagander(dot)net>
Cc: "PostgreSQL-development" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Using streaming replication as log archiving
Date: 2010-09-30 15:13:30
Message-ID: 4CA462CA020000250003619B@gw.wicourts.gov
Lists: pgsql-hackers

Magnus Hagander <magnus(at)hagander(dot)net> wrote:

> We'd need a second script/command to call to figure out where to
> restart from in that case, no?

I see your point; I guess we would need that.

> It should be safe to just rsync the archive directory as it's
> being written by pg_streamrecv. Doesn't that give you the property
> you're looking for - local machine gets data streamed in live,
> remote machine gets it rsynced every minute?

Well, the local target can't run pg_streamrecv -- it's a backup
machine where we pretty much have rsync and nothing else. We could
run pg_streamrecv on the database server itself and rsync from
there to the local machine every minute.
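
Something along these lines is what I'm picturing for the
once-a-minute sync (just a sketch -- the paths, the host name, and
the rsync options are made-up examples):

    #!/usr/bin/env python3
    # Periodically rsync the pg_streamrecv archive directory to the
    # rsync-only backup machine.  Paths and host are hypothetical.
    import subprocess
    import time

    ARCHIVE_DIR = "/var/lib/pgsql/streamrecv_archive/"  # written by pg_streamrecv
    BACKUP_DEST = "backup-host:/backups/wal/"           # local backup machine

    while True:
        # -a preserves attributes; --partial lets the in-progress
        # segment resume instead of restarting from scratch.
        subprocess.run(["rsync", "-a", "--partial", ARCHIVE_DIR, BACKUP_DEST],
                       check=False)
        time.sleep(60)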

I just checked with the DBA who monitors space issues for such
things, and it would be OK to rsync the uncompressed file to the
local backup as it is written (we have enough space for it without
compression) as long as we compress it before sending it to the
central location. For that, your idea to fire a script on
completion of the file would work -- we could maintain both raw and
compressed files on the database server for rsync to the two
locations.
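
As a rough sketch of such a completion hook (assuming the tool
would invoke it with the finished segment's path as its first
argument -- I haven't verified that -- and with an invented
directory layout):

    #!/usr/bin/env python3
    # Hypothetical completion hook: keep the raw segment in place for
    # the local rsync target and write a gzipped copy for the central
    # location.  Invoked as: on_segment_complete.py /path/to/segment
    import gzip
    import os
    import shutil
    import sys

    COMPRESSED_DIR = "/var/lib/pgsql/streamrecv_compressed"  # assumed layout

    def compress_segment(path):
        dest = os.path.join(COMPRESSED_DIR, os.path.basename(path) + ".gz")
        with open(path, "rb") as raw, gzip.open(dest, "wb") as gz:
            shutil.copyfileobj(raw, gz)  # raw file is left untouched

    if __name__ == "__main__":
        compress_segment(sys.argv[1])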

You can probably see the appeal of filtering it as it is written,
though, if that is feasible. :-)

-Kevin
