From: Jeff Davis <pgsql(at)j-davis(dot)com>
To: Simon Riggs <simon(at)2ndQuadrant(dot)com>
Cc: Greg Smith <greg(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Enabling Checksums
Date: 2012-12-18 02:21:12
Message-ID: 1355797272.24766.187.camel@sussancws0025
Lists: pgsql-hackers
On Mon, 2012-12-17 at 19:14 +0000, Simon Riggs wrote:
> We'll need a way of expressing some form of corruption tolerance.
> zero_damaged_pages is just insane,
The main problem I see with zero_damaged_pages is that it can write the
zero page back out, truly losing your data if it wasn't already lost.
(Of course, we document that you should take a backup first, but it's
still dangerous.) I assume this is the same problem you're talking
about.
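To make the hazard concrete, here's a standalone sketch (not PostgreSQL
source; the names are made up, and marking the zeroed buffer dirty
directly is a simplification -- in the real server the zero page sits in
shared_buffers and any later modification dirties it, with the same end
result):

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 8192

typedef struct
{
    char data[PAGE_SIZE];
    bool dirty;
} CachedPage;

static bool
page_is_valid(const char *page)
{
    /* stand-in for a real check such as PageIsVerified() */
    return page[0] == 'O' && page[1] == 'K';
}

static void
read_page(CachedPage *buf, const char *disk_page, bool zero_damaged_pages)
{
    memcpy(buf->data, disk_page, PAGE_SIZE);
    if (!page_is_valid(buf->data))
    {
        if (!zero_damaged_pages)
        {
            fprintf(stderr, "ERROR: invalid page\n");
            return;
        }
        fprintf(stderr, "WARNING: zeroing damaged page\n");
        memset(buf->data, 0, PAGE_SIZE);
        /* Simplification: treat the zeroed page as flushable. */
        buf->dirty = true;
    }
}

int
main(void)
{
    char       disk_page[PAGE_SIZE] = "damaged-but-recoverable bytes";
    CachedPage buf = {{0}, false};

    read_page(&buf, disk_page, true);

    /* A later checkpoint-style flush overwrites the on-disk remnants. */
    if (buf.dirty)
        memcpy(disk_page, buf.data, PAGE_SIZE);

    printf("first byte on disk is now: 0x%02x\n",
           (unsigned char) disk_page[0]);
    return 0;
}
```

Once the flush happens, whatever bytes a recovery tool might have
salvaged from the damaged page are gone for good.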
I suppose we could have a new ReadBufferMaybe function, used only by
sequential scans, which would just skip over a corrupt page, depending
on a GUC. That would at least allow sequential scans to (partially)
work, which might be good enough for some data recovery situations. If
a catalog index is corrupted, that could just be rebuilt. Haven't
thought about the details, though -- a rough sketch of the idea follows.
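Here is that sketch in standalone C rather than the real buffer manager
API (ReadBufferMaybe doesn't exist; the flag standing in for the GUC and
the XOR checksum scheme are made up for illustration):

```c
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 8192

/* stand-in for the proposed GUC */
static bool skip_corrupt_pages = true;

/* stand-in for a real checksum check such as PageIsVerified(); here the
 * first byte is assumed to hold an XOR checksum of the rest of the page */
static bool
page_is_valid(const unsigned char *page)
{
    unsigned char sum = 0;

    for (int i = 1; i < PAGE_SIZE; i++)
        sum ^= page[i];
    return sum == page[0];
}

int
main(int argc, char **argv)
{
    FILE         *fp;
    unsigned char page[PAGE_SIZE];
    long          blkno = 0;

    if (argc < 2 || (fp = fopen(argv[1], "rb")) == NULL)
    {
        fprintf(stderr, "usage: seqscan <relation file>\n");
        return 1;
    }

    /* the "sequential scan": visit every page, skipping corrupt ones */
    while (fread(page, 1, PAGE_SIZE, fp) == PAGE_SIZE)
    {
        if (page_is_valid(page))
            printf("page %ld: ok, would scan its tuples\n", blkno);
        else if (skip_corrupt_pages)
            fprintf(stderr, "WARNING: skipping corrupt page %ld\n", blkno);
        else
        {
            /* current behavior: one bad page fails the whole scan */
            fprintf(stderr, "ERROR: page %ld failed verification\n", blkno);
            fclose(fp);
            return 1;
        }
        blkno++;
    }
    fclose(fp);
    return 0;
}
```

The point is only the control flow: the scan keeps going and reports
what it skipped, instead of aborting on the first bad page.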
Regards,
Jeff Davis