This is a hypothetical problem but not an impossible situation. Just curious about what would happen.
Let's say you have an OLTP server that stays very busy on a large database. In this database, one or more tables sit on super-fast storage, such as a Fusion-io card, and are handling (for the sake of argument) 1 million transactions per second.
Even though only one or a few tables account for almost all of that I/O, pg_dump still has to export a consistent snapshot of every table to somewhere else every 24 hours (a plain nightly run, like the one sketched below). And because the dataset is so large (or perhaps just because of network congestion), the daily backup takes 2 hours.
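For concreteness, the kind of nightly run I have in mind would be something like this (the database name and output path are made up for illustration):

    # Hypothetical nightly dump; "bigdb" and the backup path are placeholders.
    pg_dump --format=custom --file=/backups/bigdb.dump bigdb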
Here's the question: during those 2 hours, more than 4 billion transactions could have occurred. So what's going to happen to your backup and/or your database?
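For scale, the back-of-the-envelope arithmetic: 1,000,000 transactions/second x 7,200 seconds is roughly 7.2 billion XIDs, which is well past the 32-bit XID space of 2^32 (about 4.29 billion). A standard way to watch how far each database is from its oldest unfrozen XID, shown here just as a sketch:

    # How many transactions old is each database's oldest unfrozen XID?
    psql -c "SELECT datname, age(datfrozenxid) AS xid_age FROM pg_database ORDER BY xid_age DESC;"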