On Sat, Mar 20, 2004 at 08:12:02AM -0500, Al Cohen wrote:
> In our particular situation, being down for two hours or so is OK.
> What's really bad is losing data.
> The PostgreSQL replication solutions that we're seeing are very clever,
> but seem to require significant effort to set up and keep going. Since
> we don't care if a slave DB is ready to kick over at a moment's notice,
> I'm wondering if there is some way to generate data, in real time, that
> would allow an offline rebuild in the event of catastrophe. We could
> copy this data across the 'net as it's available, so we could be OK even
> if the place burned down.
Your closest current bet is one of the (admittedly sometimes painful
to use) async replication systems available. Note that _no_ async
system, log shipping included, can guarantee zero data loss: if you
lose the active master in a catastrophic explosion which consumes all
your disk, there is the possibility that there will be records which
were committed on that master but which were not committed anywhere
else. This is even true of PITR approaches which stream the WAL to
some system in another city: you're only as up to date as the last
packet sent, and if a bomb goes off in your cage in your data centre,
you're going to lose something no matter what.
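To make the log-shipping idea concrete, here is a minimal sketch of a
segment-archiving hook of the kind the PITR work would call for each
completed WAL segment. Everything here is hypothetical illustration:
the function name, the WAL_ARCHIVE_DEST variable, and the use of a
local directory as the destination (a real setup would push to a
machine in another city, and would still only be as current as the
last segment shipped).

```shell
# Hypothetical WAL-shipping hook: copy one completed WAL segment to an
# archive location. A PITR-style archiver would invoke something like
# this with the segment's full path and bare file name.
ship_wal() {
    segment_path=$1              # full path of the completed WAL segment
    segment_name=$2              # bare segment file name
    dest=${WAL_ARCHIVE_DEST:-/var/lib/postgres/wal_archive}

    # Never overwrite an existing copy: a duplicate name means
    # something is wrong upstream, so fail and let the server retry.
    [ ! -e "$dest/$segment_name" ] || return 1

    # Copy to a temporary name, then rename, so a crash mid-copy never
    # leaves a half-written segment that a restore could mistake for a
    # complete one.
    cp "$segment_path" "$dest/$segment_name.tmp" &&
    mv "$dest/$segment_name.tmp" "$dest/$segment_name"
}
```

The copy-then-rename step matters: a restore that picks up a partial
segment is worse than one that simply stops at the previous segment.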
You might like to follow the Slony project, which has as one of its
design goals much easier administration than erserver. You may also
want to watch the PITR project, which appears to be aiming to get
Andrew Sullivan | ajs(at)crankycanuck(dot)ca