From: Scott Marlowe <smarlowe(at)g2switchworks(dot)com>
To: Marcin Giedz <marcin(dot)giedz(at)eulerhermes(dot)pl>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: Backup issue
Date: 2005-09-19 16:52:44
Message-ID: 1127148764.30120.102.camel@state.g2switchworks.com
Lists: pgsql-admin
On Sat, 2005-09-17 at 02:49, Marcin Giedz wrote:
> Hello...
>
> This is what I have now: postgresql 8.0.1 - the database weighs about
> 60GB and grows by about 2GB per week. I currently back it up every day
> with a simple procedure (pg_start_backup : rsync data : pg_stop_backup :
> save the WALs produced during the backup). On a 1Gb internal network
> this usually takes about an hour.
>
> But what if my database reaches ~200GB or more (I know that's in the
> future :D)? From my point of view it won't be a good idea to copy the
> entire database to the backup array. I would like to hear opinions on
> this case - what do you propose? Maybe some of you already do something
> like this?
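
For reference, the procedure described above boils down to something like
this minimal sketch (the data directory and backup destination are
made-up paths, adjust for your setup):

    # tell the cluster a base backup is starting
    psql -c "SELECT pg_start_backup('daily_base');"
    # copy the data directory while the cluster stays online
    rsync -a --delete /var/lib/pgsql/data/ backuphost:/backups/pgdata/
    # tell the cluster the base backup is done
    psql -c "SELECT pg_stop_backup();"
    # then save the WAL segments archived between start and stop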
I'd look at using PITR (point-in-time recovery) with continuous WAL
archiving, taking a fresh whole base backup monthly or so instead of a
whole backup every day.
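
Roughly, that means turning on continuous WAL archiving and only taking
the full base backup monthly, something like this sketch (paths are
placeholders; see the PITR chapter in the 8.0 docs for the real details):

    # postgresql.conf: archive each completed WAL segment
    archive_command = 'cp "%p" /mnt/backup/wal/"%f"'

    # once a month: take a fresh base backup
    psql -c "SELECT pg_start_backup('monthly_base');"
    rsync -a /var/lib/pgsql/data/ /mnt/backup/base/
    psql -c "SELECT pg_stop_backup();"

    # to restore: copy the base backup into place, then put this in
    # recovery.conf so the server replays the archived WAL
    restore_command = 'cp /mnt/backup/wal/"%f" "%p"'

Between base backups you only ship WAL segments, not the whole database,
so the daily transfer stays small even as the database grows.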