I'll be moving to PG9 (hopefully soon...probably 6 weeks).
At that time, I'll be setting up hot standby with streaming replication to 2 sites. Off-siting my nightly pg_dumps will soon no longer be possible, due to the size of the dumps.
So...what I had planned to do was set up my production 9.x, set up my streaming standby (same network) 9.x, and set up my disaster off-site (here at the office), also 9.x. Each one will do pg_dump at some point (nightly, probably) to ensure that I've got actual backup files available at each location. Yes, they'll be possibly inconsistent, but only with one another, and that's a very minor issue for the dump files.
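For the nightly dump at each site, I'm picturing something like the sketch below (database name and paths are placeholders, not anything real; it just prints the command it would run):

```shell
#!/bin/sh
# Sketch of a nightly per-site dump job. DB and BACKUP_DIR are assumed
# placeholders; swap in real values before using.
DB=production
BACKUP_DIR=/var/backups/pg
STAMP=$(date +%Y%m%d)
DUMPFILE="$BACKUP_DIR/$DB-$STAMP.dump"
# -Fc = custom (compressed) format, restorable selectively with pg_restore.
# Shown as a dry run here; drop the echo to actually run it from cron.
echo "pg_dump -Fc -f $DUMPFILE $DB"
```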
Now, when I do the directory rsync/tar (in this case tar), I can bring it up pretty quickly on the standby that is there at the data center. However, of course, I need to also set it up here at my office. Which amounts to me driving back to the office, copying it over, and starting up PG (assuming I don't get interrupted 20 times walking in the door).
So...something like this: run pg_start_backup, tar off my PG data directory, then pg_stop_backup.
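In other words, the usual file-system base backup dance, roughly like this sketch (the data directory and archive paths are made up; it prints the commands rather than running them):

```shell
#!/bin/sh
# Sketch of the 9.x base-backup sequence. PGDATA and the archive path are
# assumed placeholders. Printed as a dry run; remove the echos to execute.
PGDATA=/var/lib/pgsql/9.1/data
STAMP=$(date +%Y%m%d)
ARCHIVE="/mnt/external/base-backup-$STAMP.tgz"
# 1. Tell the server a file-system backup is starting (forces a checkpoint;
#    second arg 'true' asks for a fast checkpoint).
echo "psql -c \"SELECT pg_start_backup('nightly_base', true)\""
# 2. Tar the cluster directory while the server keeps serving queries.
echo "tar czf $ARCHIVE -C $PGDATA ."
# 3. Tell the server the copy is done so it can recycle WAL again.
echo "psql -c \"SELECT pg_stop_backup()\""
```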
My question is this:
Can I run pg_stop_backup as soon as I've tgzed to an external hard drive, or do I have to wait to run pg_stop_backup until both slaves are actually online?
I _think_ that I'm merely telling the source db server that "I have my possibly-inconsistent file system backup, you can go back to what you were doing," and then when the slave(s) come up, they start replaying the WAL files until they catch up then use network communication to stay in sync.
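To make that concrete, I'm assuming each slave gets pointed at the primary with something like this recovery.conf (9.0-style syntax; the host, user, and archive path are placeholders I made up):

```
# recovery.conf on each standby -- connection details are placeholders
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com port=5432 user=replicator'
# Optional: where to fetch archived WAL from while catching up
restore_command = 'cp /path/to/wal_archive/%f %p'
```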
Is that a correct understanding of the process?