From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Elliot Chance <elliotchance(at)gmail(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: pg_dump and XID limit
Date: 2010-11-24 06:07:49
Message-ID: 6420.1290578869@sss.pgh.pa.us
Lists: pgsql-admin
Elliot Chance <elliotchance(at)gmail(dot)com> writes:
> This is a hypothetical problem, but not an impossible situation. Just curious about what would happen.
> Let's say you have an OLTP server that stays very busy on a large database. In this large database you have one or more tables on super fast storage like a Fusion-io card, which is handling (for the sake of argument) 1 million transactions per second.
> Even though only one or a few tables are using almost all of the I/O, pg_dump has to export a consistent snapshot of all the tables to somewhere else every 24 hours. But because it's such a large dataset (or perhaps just network congestion), the daily backup takes 2 hours.
> Here's the question: during those 2 hours more than 4 billion transactions could have occurred, so what's going to happen to your backup and/or database?
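A quick back-of-the-envelope check of those numbers, as a sketch runnable in psql (the 1 million TPS and 2-hour window are the hypothetical figures from the question, not measurements):

    -- XIDs consumed during the dump vs. the size of the XID space
    SELECT 1000000::bigint * 7200 AS xids_burned_in_2h,   -- 7,200,000,000
           4294967296::bigint     AS total_xid_space,     -- 2^32
           2147483648::bigint     AS wraparound_headroom;  -- ~2 billion ahead of the oldest snapshot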
The DB will shut down to prevent wraparound once it gets 2 billion XIDs
in front of the oldest open snapshot.
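A rough way to keep an eye on this is sketched below; the catalogs used are standard, though what counts as "too old" is up to you:

    -- How far each database's oldest unfrozen XID has fallen behind
    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY xid_age DESC;

    -- Transactions that have been open longest (e.g. a slow pg_dump),
    -- which are what hold the oldest snapshot open
    SELECT usename, xact_start, now() - xact_start AS xact_duration
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
    ORDER BY xact_start;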
regards, tom lane