
From: Hans-Juergen Schoenig <postgres(at)cybertec(dot)at>
To: Heikki Linnakangas <heikki(at)enterprisedb(dot)com>
Cc: pgsql-patches(at)postgresql(dot)org
Subject: Re: Endless recovery
Date: 2008-02-11 09:44:20
Message-ID: 4F5A2B70-BB4F-4501-B420-4ADCDC17F268@cybertec.at
Lists: pgsql-patches


On Feb 11, 2008, at 10:26 AM, Heikki Linnakangas wrote:

> Hans-Juergen Schoenig wrote:
>> Last week we saw a problem on a horribly configured machine.
>> The disk filled up (bad FSM ;) ) and once this happened the
>> sysadmin killed the system (-9).
>> After two days PostgreSQL had still not started up, and they tried
>> to restart it again and again, so the consistency check was
>> started over and over again (causing more and more downtime).
>> From the admin's point of view there was no way to find out whether
>> the machine was actually dead or still recovering.
>> Here is a small patch which issues a log message indicating that
>> the recovery process can take ages.
>> Maybe this can prevent some admins from interrupting the recovery
>> process.
>
> Wait, are you saying that the time was spent in the rm_cleanup
> phase? That sounds unbelievable. Surely the time was spent in the
> redo phase, no?

It was a seek-heavy workload, with backtraces like this one:

testdb(at)testdb:/srv/daten$ gdb --batch -x gdb.cmd /usr/local/pgsql-8.1.11/bin/postmaster 15727
Using host libthread_db library "/lib/tls/i686/cmov/libthread_db.so.1".
[Thread debugging using libthread_db enabled]
[New Thread -1214167376 (LWP 15727)]
0xffffe410 in __kernel_vsyscall ()
#0 0xffffe410 in __kernel_vsyscall ()
#1 0xb7c55c53 in read () from /lib/tls/i686/cmov/libc.so.6
#2 0x081b11e2 in FileRead (file=54, buffer=0xa7bb7540 "1", amount=8192) at fd.c:1015
#3 0x081be33f in mdread (reln=0x8371e90, blocknum=6553044, buffer=0xa7bb7540 "1") at md.c:497
#4 0x081be88a in smgrread (reln=0x8371e90, blocknum=6553044, buffer=0xa7bb7540 "1") at smgr.c:527
#5 0x081ae4af in ReadBuffer (reln=0x834f0d0, blockNum=6553044) at bufmgr.c:252
#6 0x080b9869 in XLogReadBuffer (extend=0 '\0', reln=0x834f0d0, blkno=6553044) at xlogutils.c:57
#7 0x0808b3a2 in gistXLogReadAndLockBuffer (r=0x834f0d0, blkno=6553044) at gistxlog.c:530
#8 0x08087641 in gistFindPath (r=0x834f0d0, child=8558565, myReadBuffer=0x808b380 <gistXLogReadAndLockBuffer>) at gist.c:677
#9 0x0808c7bb in gist_xlog_cleanup () at gistxlog.c:549
#10 0x080b7c5c in StartupXLOG () at xlog.c:4769
#11 0x080c2635 in BootstrapMain (argc=4, argv=<value optimized out>) at bootstrap.c:428
#12 0x0819a2cc in StartChildProcess (xlop=2) at postmaster.c:3581
#13 0x0819cb38 in PostmasterMain (argc=1, argv=0x83275a8) at postmaster.c:931
#14 0x0815f909 in main (argc=1, argv=0x6a27e4b4) at main.c:265

Redo itself was done fast ...

[2008-02-07 22:24:50 CET ]LOG: database system was interrupted while in recovery at 2008-02-04 11:09:04 CET
[2008-02-07 22:24:50 CET ]HINT: This probably means that some data is corrupted and you will have to use the last backup for recovery.
[2008-02-07 22:24:50 CET ]LOG: checkpoint record is at 35/3BAA131C
[2008-02-07 22:24:50 CET ]LOG: redo record is at 35/3BA11AB4; undo record is at 0/0; shutdown FALSE
[2008-02-07 22:24:50 CET ]LOG: next transaction ID: 194549334; next OID: 16586886
[2008-02-07 22:24:50 CET ]LOG: next MultiXactId: 1; next MultiXactOffset: 0
[2008-02-07 22:24:50 CET ]LOG: database system was not properly shut down; automatic recovery in progress
[2008-02-07 22:24:50 CET ]LOG: redo starts at 35/3BA11AB4
[2008-02-07 22:24:53 CET ]LOG: record with zero length at 35/3C8317C8
[2008-02-07 22:24:53 CET ]LOG: redo done at 35/3C8317A0

Note that redo itself finished in about three seconds; all of the time was spent after that.
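
As far as I can tell from the stack, the time goes into the GiST cleanup
pass: gist_xlog_cleanup() calls gistFindPath(), which searches the index
from the root for the page holding the downlink to a block left behind by
an incomplete split, reading one 8 kB block at a time through ReadBuffer().
On a large, mostly uncached index that is pure random I/O. Roughly this
shape (a simplified sketch with hypothetical helper names, not the actual
gist.c code):

/*
 * Sketch of the search done for an incomplete split: a breadth-first
 * walk from the root, one index page per step, until the downlink
 * pointing at "child" is found.  read_page() stands in for a
 * synchronous, random 8 kB ReadBuffer() call.
 */
#include <stdlib.h>

typedef unsigned int BlockNumber;

/* hypothetical: read one index page, return how many downlinks it has */
extern int read_page(BlockNumber blkno, BlockNumber *downlinks, int maxlinks);

typedef struct QueueItem
{
    BlockNumber       blkno;
    struct QueueItem *next;
} QueueItem;

static int
find_parent_of(BlockNumber child)
{
    QueueItem *head = calloc(1, sizeof(QueueItem));   /* blkno 0 = root */
    QueueItem *tail = head;

    while (head != NULL)
    {
        BlockNumber downlinks[512];
        int         n = read_page(head->blkno, downlinks, 512);

        for (int i = 0; i < n; i++)
        {
            if (downlinks[i] == child)
                return 1;                  /* head->blkno is the parent */

            /* not it: queue the page for a later (random) read */
            tail->next = calloc(1, sizeof(QueueItem));
            tail = tail->next;
            tail->blkno = downlinks[i];
        }
        head = head->next;                 /* sketch only: items are leaked */
    }
    return 0;
}

If the crash left several incomplete splits behind, a walk like this is
repeated for each of them.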

>
>> In our case, the recovery process took 3.5 days !!
>
> That's a ridiculously long time. Was this a normal recovery, not a
> PITR archive recovery? Any idea why the recovery took so long?
> Given the max. checkpoint timeout of 1h, I would expect that the
> recovery would take a maximum of a few hours even with an extremely
> write-heavy workload.
>

There was no PITR in place, just a normal DB ...
My first idea when they called me was that it had to be related to
checkpoint_segments: maybe some 2 million segments and an insanely
long timeout. But no, that was not the case. checkpoint_segments was
at 12 and the timeout was just several minutes. Basically, from the
outside everything looked fine ...

We used a binary copy of the data on two boxes (one for debugging).
The entire process worked like a charm; it just took ages.
We saw a lot of random I/O here.

This was quite a small machine with insanely small memory settings.
No errors were issued during the process; all fine, just long ...
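
Given that, the patch does nothing more than emit a log message before the
cleanup pass so the admin can see that the server is alive and working,
roughly along these lines (a sketch only, not the submitted diff; the exact
wording and the placement inside StartupXLOG(), just before the rm_cleanup
pass, are assumed):

/*
 * Sketch only, not the submitted diff.
 */
ereport(LOG,
        (errmsg("database recovery is cleaning up after incomplete operations"),
         errdetail("This can take a long time; do not interrupt the server.")));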

The DB version is 8.1.11. The entire DB is 116 GB; it is more or less
one table along with a 65 GB GiST index.
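
To put the random I/O into perspective, here is a rough back-of-envelope
(the reads-per-second number is only an assumption for a small box,
nothing we measured):

/* Back-of-envelope only: how long does it take just to touch every page
 * of the 65 GB index once with random single-block reads? */
#include <stdio.h>

int
main(void)
{
    const double index_bytes = 65.0 * 1024 * 1024 * 1024;  /* 65 GB GiST index */
    const double page_bytes  = 8192.0;                      /* PostgreSQL block size */
    const double reads_per_s = 150.0;                        /* assumed random-read rate */

    double pages = index_bytes / page_bytes;
    double hours = pages / reads_per_s / 3600.0;

    printf("%.0f pages, roughly %.0f hours per random-read pass\n", pages, hours);
    return 0;
}

That comes out at around 8.5 million pages and something like 16 hours per
pass under these assumptions, so it does not take many such walks to end up
in the multi-day range.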

hans

--
Cybertec Schönig & Schönig GmbH
PostgreSQL Solutions and Support
Gröhrmühlgasse 26, 2700 Wiener Neustadt
Tel: +43/1/205 10 35 / 340
www.postgresql.at, www.cybertec.at
