
Re: Crash Recovery

From: Andrew Sullivan <andrew(at)libertyrms(dot)info>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Crash Recovery
Date: 2003-01-24 13:22:19
Lists: pgsql-performance
On Thu, Jan 23, 2003 at 10:32:58PM -0500, Noah Silverman wrote:
> To preface my question, we are still in the process of evaluating postgres 
> to determine if we want to switch our production environment over.
> I'm curious about where I can find documentation about crash recovery in 
> postgres.  In mysql, there is a nice table recovery utility (myisamchk). 

Postgres recovers automatically.  Make sure you run with fsync
turned on: that way the WAL is fsynced at every COMMIT, and the
COMMIT doesn't finish until the fsync returns.  Then, in case of a
crash, the WAL just plays back and fixes up the data area.
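
A quick way to confirm the setting (the database name below is just
a placeholder) is to ask the server from the shell:

    $ psql mydb -c 'SHOW fsync;'

If it reports off, set fsync = true in postgresql.conf and restart
the postmaster.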

> is there something similar in postgres?  What do we do if a table or 
> database becomes corrupted? (I'm aware of backup techniques, but it isn't 

I have never had a table become corrupted under Postgres.  There have
been some recent cases where people's bad hardware caused bad data to
make it into a table.  Postgres's error reporting usually saves you
there, because you can go in and stomp on the bad tuple if need be. 
There are some utilities to help with this; one of them, from Red
Hat, lets you look at the binary data in various formats (it's
pretty slick).  I believe it's available from
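
To give a rough sketch of the "stomp on the bad tuple" approach (the
table name and ctid below are made up; in practice you'd narrow down
the bad row first, say by selecting ranges of ctids until one errors
out):

    $ psql mydb -c "SELECT ctid FROM mytable LIMIT 100;"
    $ psql mydb -c "DELETE FROM mytable WHERE ctid = '(0,42)';"

The ctid is the tuple's physical address (page, item), so that
DELETE removes exactly the one damaged row.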

> feasible for some of our larger tables.  We're already running on raid 5, 
> but can't do much more)

I suspect you can.  First, are you using ECC memory in your
production machines?  If not, start doing so.  Now.  It is _the most
important_ thing, aside from RAID, that you can do to protect your
data.  Almost every inconsistency problem I've seen on the lists in
the past year and a bit has come down to bad hardware -- usually
memory or disk controllers.  (BTW, redundant disk controllers, and
ones with some intelligence built in so that they check themselves,
are also mighty valuable here.  But memory goes bad way more often.)

Also, I'm not sure just what you mean about backups "not being
feasible" for some of the larger tables, but you need to back up
daily.  Since pg_dump takes a consistent snapshot, there's no data
inconsistency trouble, and you can just start the backup and go away. 
If the resulting files are too large, use split.  And if the problem
is space, well, disk is cheap these days, and so is tape, compared to
having to re-get the data you lost.
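
For instance (database and file names are placeholders), a nightly
dump that compresses and splits into manageable chunks could look
like this:

    $ pg_dump mydb | gzip | split -b 1000m - mydb.sql.gz.

and to restore:

    $ cat mydb.sql.gz.* | gunzip | psql mydb

Adjust the chunk size to whatever your disks or tapes like.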

Andrew Sullivan                         204-4141 Yonge Street
Liberty RMS                           Toronto, Ontario Canada
<andrew(at)libertyrms(dot)info>                              M2P 2A8
                                         +1 416 646 3304 x110
