On Wed, Feb 22, 2012 at 6:17 PM, Timothy Garnett <tgarnett(at)panjiva(dot)com> wrote:
> I wanted to gauge the interest in adding an option for this to pg_dump.
I was thinking about an application for much the same feature.
Consider the case where you have a relatively small database such as
the accounting records for a not-hugely-active business.
And you'd like to handle backups via checking them into an SCM repository.
  pg_dump -h dbserver -p 5432 accounting > accounting.sql
  git add accounting.sql
  git commit -m "Latest backup" accounting.sql
If the database's tables have gotten clustered, then the order of data
will tend to be consistent, and differences between versions of
"accounting.sql" will generally represent the actual differences.
If, on the other hand, tables are not clustered, then dumps will find
tuples ordered in rather less predictable fashions, and the backups
will have more differences indicated.
I was thinking about writing a script to run CLUSTER before doing
backups. Having that step be part of pg_dump would certainly be an
attractive alternative.
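A minimal dry-run sketch of such a wrapper script (the table names, host,
and database below are invented placeholders; a real version would query
pg_index.indisclustered to find the tables that have a clustering index,
and would run the commands rather than just printing them):

```shell
#!/bin/sh
# Dry-run sketch of a "CLUSTER first, then dump" wrapper.
# Connection details and table list are placeholders; in practice the
# list could come from the catalogs, e.g.:
#   psql -At -d accounting -c \
#     "SELECT indrelid::regclass FROM pg_index WHERE indisclustered"
DB=accounting
HOST=dbserver
TABLES="public.invoices public.payments"   # hypothetical example tables

CMDS=""
for tab in $TABLES; do
  # Re-sort each table on its clustering index so dump order stays stable.
  CMDS="$CMDS
psql -h $HOST -d $DB -c \"CLUSTER $tab\""
done
CMDS="$CMDS
pg_dump -h $HOST -p 5432 $DB > accounting.sql"

# Print the plan instead of executing it, so the sketch is self-contained.
echo "$CMDS"
```

One caveat with this approach: CLUSTER takes an ACCESS EXCLUSIVE lock on
each table while it rewrites it, so it is only reasonable on a quiet
database like the not-hugely-active accounting example above.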
--
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"