From: "md(at)rpzdesign(dot)com" <md(at)rpzdesign(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: File Corruption recovery
Date: 2012-11-06 12:34:44
Message-ID: 509903E4.3090005@rpzdesign.com
Lists: pgsql-hackers
I have been working on external replication for PostgreSQL 9.2 for a
little while (with too many interruptions blocking my progress!).
Who knows a good utility to aggressively analyze and recover
PostgreSQL databases?
The standard reply I see is "make regular backups", but that only
bounds the damage: you can lose everything written since the last
backup interval. Our MariaDB/MySQL/XtraDB/InnoDB friends have
aria_chk and some other tools to "recover" as much as possible up
to the moment of failure.
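For what it is worth, the closest built-in PostgreSQL answer to
"recover up to the moment of failure" is continuous WAL archiving
plus point-in-time recovery. A minimal sketch for 9.2 (the archive
and backup paths below are illustrative assumptions, not
recommendations):

```shell
# postgresql.conf (9.2) -- enable continuous WAL archiving:
#   wal_level = archive
#   archive_mode = on
#   archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'

# Take a base backup while the server is running:
pg_basebackup -D /mnt/base_backup -Ft -z -x

# After a crash: restore the base backup into the data directory,
# then add a recovery.conf there so startup replays archived WAL
# forward to the point of failure:
#   restore_command = 'cp /mnt/wal_archive/%f %p'
```

This narrows the loss window from "last backup" to "last archived
WAL segment", though it still does not repair a cluster whose files
are corrupted in place.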
While full replication is the ultimate safeguard, in a "split brain"
scenario I could see a hardware failure causing loss of all data
since the last replication exchange or the last backup.
After a data crash, I want the recovery tool to HELP me get as much
data back as possible and return to operations. What I do not want
is a bunch of manual command-line file copies and deletes to "guess"
my way back to operational mode (some data loss is inevitable).
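For reference, the usual last-ditch salvage path in PostgreSQL today
is exactly that kind of manual work: let the backend skip over broken
pages and dump whatever is still readable. A hedged sketch (the
database name "mydb" is a placeholder; run this as a superuser,
against a file-level copy of the damaged cluster):

```shell
# DANGER: zero_damaged_pages replaces damaged pages with zeros on
# read, silently discarding their rows -- only use it on a copy.
PGOPTIONS='-c zero_damaged_pages=on' pg_dump mydb > salvage.sql

# Restore the salvaged data into a freshly initdb'd cluster:
createdb salvaged
psql -d salvaged -f salvage.sql
```

This is precisely the guesswork being complained about above; a
dedicated recovery tool would automate it and report what was lost.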
I could make a daily snapshot of the system catalog to assist the
recovery tool in restoring the database.
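A daily catalog/schema snapshot can be as simple as a schema-only
dump driven from cron; a sketch (the output path and schedule are
assumptions):

```shell
# Cron entry (run as the postgres user), e.g.:
#   0 3 * * * pg_dumpall --schema-only > /backup/schema_$(date +\%F).sql

# pg_dumpall --schema-only captures table/index definitions plus
# cluster-wide globals (roles, tablespaces) that a recovery tool
# could use to re-map salvaged heap data back to its tables.
pg_dumpall --schema-only > /backup/schema_$(date +%F).sql
```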
Who has ideas on this?