This is what I have now: PostgreSQL 8.0.1 - the database weighs about 60GB
and grows by about 2GB per week. Currently I do a backup every day,
following a simple procedure (pg_start_backup : rsync the
data directory : pg_stop_backup : save the WALs produced during the backup).
Over a 1Gb internal network this usually takes about 1h.
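For reference, the nightly procedure described above could be sketched roughly like this. The paths, hostname, and psql invocation are placeholders I made up, and the run() wrapper only echoes each step so the sequence can be read as a dry run rather than executed:

```shell
#!/bin/sh
# Dry-run sketch of the nightly base-backup procedure.
# PGDATA and BACKUP_DEST are hypothetical placeholders.
PGDATA=/var/lib/pgsql/data
BACKUP_DEST=backuphost:/backups/base

# run() only prints each command so the sequence can be inspected;
# change the body to "$@" to actually execute the steps.
run() { echo "+ $*"; }

nightly_backup() {
    # 1. mark the start of the backup (forces a checkpoint)
    run psql -U postgres -c "SELECT pg_start_backup('nightly');"
    # 2. copy the data directory; rsync transfers only changed files,
    #    so repeated runs are much cheaper than a fresh full copy
    run rsync -a --delete --exclude=pg_xlog "$PGDATA/" "$BACKUP_DEST/"
    # 3. mark the end; the WAL segments produced during the copy must
    #    also be saved to make the base backup consistent
    run psql -U postgres -c "SELECT pg_stop_backup();"
}

nightly_backup
```

One thing worth noting about this shape: because rsync is incremental, the hour is mostly spent scanning and transferring changed files, so the cost grows more slowly than raw database size would suggest.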
But what if my database reaches ~200GB or more (I know, that's the future
:D)? From my point of view it won't be a good idea to copy the entire
database to the backup array every time. I would like to hear opinions
about this case - what do you propose? Maybe some of you already do
something like this?