| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | Scott Marlowe <smarlowe(at)g2switchworks(dot)com> |
| Cc: | Pallav Kalva <pkalva(at)deg(dot)cc>, pgsql-admin(at)postgresql(dot)org |
| Subject: | Re: Postgres 8.0 Backups |
| Date: | 2005-01-24 18:05:00 |
| Message-ID: | 23143.1106589900@sss.pgh.pa.us |
| Lists: | pgsql-admin |
Scott Marlowe <smarlowe(at)g2switchworks(dot)com> writes:
> On Mon, 2005-01-24 at 10:47, Pallav Kalva wrote:
>> I am working on a backup script for Postgres 8.0 online backups, and
>> since I have to copy the whole pgdata directory, I am wondering: after
>> I copy the pgdata directory, can I run gzip or tar on it?
> Generally speaking, file-system-level backups are not the best way to
> back up PostgreSQL, since they require either shutting down the server
> or using a snapshot file system to get a coherent backup.
In the context of online backup operations, that advice isn't relevant
anymore ...
Personally I would do "tar cfz pgdata.tar.gz $PGDATA" or equivalent,
rather than making an explicit copy of the directory tree first.
regards, tom lane
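A minimal sketch of the online-backup sequence under discussion, assuming 8.0's pg_start_backup()/pg_stop_backup() functions and an archive_command already configured in postgresql.conf; the backup label, database name, and output path below are placeholders, not values from the thread:

```sh
#!/bin/sh
# Sketch of an 8.0-style online base backup; label and paths are
# illustrative assumptions.

# Tell the server a base backup is starting (forces a checkpoint).
psql -d postgres -c "SELECT pg_start_backup('nightly')"

# Archive the live data directory directly, as suggested above,
# rather than copying the tree first. Files may change while tar
# runs; that is expected and is repaired at recovery time from the
# archived WAL.
tar cfz /backups/pgdata.tar.gz "$PGDATA"

# Mark the backup as complete so the required WAL range is recorded.
psql -d postgres -c "SELECT pg_stop_backup()"
```

Restoring such a backup also requires the archived WAL segments spanning the start/stop calls, per the 8.0 online backup documentation.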