
From: Noel Faux <noel(dot)faux(at)med(dot)monash(dot)edu(dot)au>
To: PostGreSQL <pgsql-novice(at)postgresql(dot)org>
Subject: data corruption how zero bad page blocks etc
Date: 2006-02-08 04:30:19
Message-ID: 43E973DB.1090907@med.monash.edu.au
Lists: pgsql-novice
Hi all,

While we were trying to do a vacuum / pg_dump we encountered the
following error:

postgres(at)db:~$ pg_dumpall -d > dump.pg
pg_dump: dumpClasses(): SQL command failed
pg_dump: Error message from server: ERROR:  invalid page header in block
9022921 of relation "gap"
pg_dump: The command was: FETCH 100 FROM _pg_dump_cursor
pg_dumpall: pg_dump failed on database "monashprotein", exiting

After some searching, I managed to work out that the data corruption starts at ctid (902292,137), using this SQL:

SELECT * FROM gap WHERE ctid = '(902292,$x)'
where I varied $x from 1 to 150,

as mentioned in this post: http://archives.postgresql.org/pgsql-general/2005-11/msg01148.php
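The probing described above can be sketched as a small generator of the SQL statements to run (the table name "gap", page number 902292, and the upper bound of 150 tuples per page all come from the message above; the helper name is just for illustration):

```python
def probe_queries(table, page, max_tuple=150):
    # Yield one SELECT per possible tuple slot on the suspect page;
    # the first query that errors out marks the start of the corruption.
    for x in range(1, max_tuple + 1):
        yield f"SELECT * FROM {table} WHERE ctid = '({page},{x})';"

for q in probe_queries("gap", 902292):
    print(q)
```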

Following that post, it seems all we need to do is zero out the bad page from this point on. However, we're not sure which file to do this in.

I've worked out the database/relation files are
$PGDATA/37958/111685332.* with the max * being 101.

Any help locating the file in which we need to zero the page would be really appreciated.
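For what it's worth, assuming the default 8 KB block size and 1 GB relation segment files, the segment holding a given block number can be computed like this (a sketch only; the dd command in the comment is an illustration, not a tested recipe, and 9022921 is the block number from the error above):

```python
BLCKSZ = 8192                    # default PostgreSQL block size
SEG_BLOCKS = 1024**3 // BLCKSZ   # blocks per 1 GB segment file (131072)

def locate_block(block):
    """Map an absolute block number to (segment suffix, block within
    segment, byte offset within that segment file)."""
    seg = block // SEG_BLOCKS
    blk_in_seg = block % SEG_BLOCKS
    return seg, blk_in_seg, blk_in_seg * BLCKSZ

seg, blk, off = locate_block(9022921)
print(seg, blk, off)  # -> 68 110025 901324800
# i.e. the bad page would sit in segment file 111685332.68, and could be
# zeroed with something like:
#   dd if=/dev/zero of=111685332.68 bs=8192 seek=110025 count=1 conv=notrunc
```

Since 68 is well below the maximum segment suffix of 101 mentioned above, the result is at least plausible for a relation of that size.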

Cheers
Noel



