page corruption bug

From: "A Palmblad" <adampalmblad(at)yahoo(dot)ca>
To: <pgsql-bugs(at)postgresql(dot)org>
Subject: page corruption bug
Date: 2004-04-12 17:57:31
Message-ID: 01d401c420b7$a13bc790$97019696@AERS04
Lists: pgsql-bugs

============================================================================
POSTGRESQL BUG REPORT TEMPLATE
============================================================================

Your name : Adam Palmblad
Your email address : adampalmblad(at)yahoo(dot)ca

System Configuration
---------------------
Architecture (example: Intel Pentium) : dual AMD64 CPUs (Opteron 242)

Operating System (example: Linux 2.4.18) : Gentoo Linux, kernel 2.6.3-Gentoo-r2, XFS file system

PostgreSQL version (example: PostgreSQL-7.4.2): PostgreSQL-7.4.2 (64-bit compile)

Compiler used (example: gcc 2.95.2) : gcc 3.3.3

Please enter a FULL description of your problem:
------------------------------------------------
We are having a recurring problem with page corruption in our database. We need to add over 3 million records a day, and we have found
that page header corruption errors start appearing after around 12 - 15 million records have been inserted. These errors show up both
in tables and in indexes, and generally only in our largest tables. This is a new server; basic hardware tests were run when it was set
up, and they checked out okay. The data in the databases is critical to our business, so having to rebuild a table and reinsert data
every few days is not an acceptable solution.

Another error was just noted, reading as follows: ERROR: Couldn't open segment 1 of relation: XXXX (target block 746874992): No such file or directory. (A target block number that large would place the block several terabytes into the relation, which presumably points to a corrupted block reference rather than a genuinely missing segment.)

Please describe a way to repeat the problem. Please try to provide a
concise reproducible example, if at all possible:
----------------------------------------------------------------------
Insert 15 million records into a table using the COPY command; we are loading the data from files of 60,000 lines each.
Then run VACUUM or a similar operation that visits every page of the table.
An invalid page header error may occur.
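The load pattern above can be sketched as follows. This is only an illustration of the reported workload, assuming a simple two-column table; the table name, column layout, and file paths are hypothetical, since the original report does not specify them:

```python
# Sketch of the reported load pattern: batch files of 60,000 lines each
# in PostgreSQL COPY text format (tab-separated), loaded one at a time.
# Table name, columns, and paths below are hypothetical placeholders.

def write_copy_batch(path, start_id, lines=60_000):
    """Write one tab-separated batch file suitable for COPY ... FROM."""
    with open(path, "w") as f:
        for i in range(start_id, start_id + lines):
            f.write(f"{i}\tpayload-{i}\n")

# Reaching ~15 million rows takes 250 batches of 60,000 lines:
#
#   for n in range(250):
#       write_copy_batch(f"/tmp/batch_{n:03d}.txt", n * 60_000)
#
# Each file would then be loaded server-side with:
#
#   COPY big_table (id, payload) FROM '/tmp/batch_000.txt';
#
# followed by a full-table scan that visits every page, e.g.:
#
#   VACUUM big_table;
```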

If you know how this problem might be fixed, list the solution below:
---------------------------------------------------------------------
Has anyone else had this problem? Would it be better for us to try a different
file system or kernel? Should postgres be recompiled in 32-bit mode?
