Creation of 10000's of empty table segments and more...

From: "Philip Poles" <philip(at)surfen(dot)com>
To: <pgsql-bugs(at)postgresql(dot)org>
Subject: Creation of 10000's of empty table segments and more...
Date: 2000-08-16 20:30:18
Message-ID: 005701c007c0$cac41fe0$26ab6bcf@surfen.com
Lists: pgsql-bugs

============================================================================
POSTGRESQL BUG REPORT TEMPLATE
============================================================================

Your name : Philip Poles
Your email address : philip(at)surfen(dot)com

System Configuration
---------------------
Architecture (example: Intel Pentium): dual intel pIII 733

Operating System (example: Linux 2.0.26 ELF): RH 6.2 (Linux 2.2.14-5.0smp)

PostgreSQL version (example: PostgreSQL-7.0): PostgreSQL-7.0.0

Compiler used (example: gcc 2.8.0): gcc 2.91.66

Please enter a FULL description of your problem:
------------------------------------------------
This problem has been reported before for earlier versions of postgres, and I
have checked the archives (which are incomplete). The last report of this
problem in my personal archives is from Don Baccus on April 22 of this year,
although that was for version 6.5.3. Basically, the backend has created a
bunch of empty files named <table_name>.<n>: ~32500 for one table, ~50000 for
another, ~44000 for a third, and ~250 for a fourth. From reading the old
thread on this, I suspect the problem is being caused by the nightly vacuum
we run, and is due to a corrupted index.
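
For reference, the symptom can be confirmed from the shell. This is only a
sketch, assuming the stock 7.0 layout where a table's segment files live
under $PGDATA/base/<dbname>/ and are named after the relation ("mydb" below
stands in for the affected database's name):

# count the zero-length segment files belonging to one affected table
find "$PGDATA"/base/mydb -name 'respondent_surveys.*' -type f -size 0 | wc -l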

pg_dump now also fails for this database, producing the message:
pqWait() -- connection not open
PQendcopy: resetting connection
SQL query to dump the contents of Table 'respondent_surveys' did not execute
correctly. After we read all the table contents from the backend, PQendcopy()
failed. Explanation from backend: 'The Data Base System is in recovery mode
'.
The query was: 'COPY "respondent_surveys" TO stdout;

respondent_surveys is the table with ~250 empty segments.
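
Should it help in narrowing things down, the failure can be reproduced
against just that table using pg_dump's -t switch (again only a sketch, with
"mydb" standing in for the database name):

# try to dump only the suspect table, discarding the output
pg_dump -t respondent_surveys mydb > /dev/null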

Furthermore (I'm not sure whether this is a related issue), the nightly
vacuum analyze now crashes with the message:
NOTICE: --Relation selected_answer_counts--
NOTICE: Rel selected_answer_counts: TID 210/347: OID IS INVALID. TUPGONE 0.
NOTICE: Pages 213: Changed 0, reaped 6, Empty 1, New 0; Tup 132526: Vac 430,
Keep/VTL 0/0, Crash 1, UnUsed 376, MinLen 48, MaxLen 48; Re-using: Free/Avail.
Space 83300/53448; EndEmpty/Avail. Pages 0/5. CPU 0.05s/0.34u sec.
NOTICE: Index selected_answer_counts_pkey: Pages 463; Tuples 132526: Deleted 1.
CPU 0.06s/0.13u sec.
ERROR: No one parent tuple was found
vacuumdb: vacuum failed

The selected_answer_counts table is, oddly, NOT one of the tables with many
empty segments.
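
To pin down which relation actually trips the error, vacuuming the tables
one at a time instead of running vacuumdb against the whole database may
help (a sketch; "mydb" is again a stand-in):

# vacuum a single suspect table from the shell
psql mydb -c 'VACUUM ANALYZE selected_answer_counts;'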

Also, during the day before the dump/vacuum began to fail, the backend was
resetting itself every few minutes with the message:
Server process (pid 25155) exited with status 11 at Fri Aug 11 11:47:47 2000
Terminating any active server processes...
NOTICE: Message from PostgreSQL backend:
The Postmaster has informed me that some other backend died abnormally
and possibly corrupted shared memory.
I have rolled back the current transaction and am going to terminate
your database system connection and exit.
Please reconnect to the database system and repeat your query.
etc...

I'm not sure what status 11 means; presumably it corresponds to signal 11
(SIGSEGV, i.e. the backend segfaulted).

This database was no longer usable and had to be restored from a backup, but
I have kept it around under a different name in case that helps at all.

Other possibly relevant info:
The server that postgres is running on has 512 MB of RAM, and postgres is
started with the following switches:

pg_ctl -o "-i -N 256 -B 512 -o \"-S 4096\" >>postgres.log 2>&1 </dev/null" start
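
For clarity, here is the same invocation with the switches annotated, as I
understand the 7.0 postmaster options (the redirections merely capture the
server log):

# -i       accept TCP/IP connections in addition to Unix sockets
# -N 256   allow up to 256 concurrent backend processes
# -B 512   use 512 shared buffers of 8 kB each (the minimum of 2 per backend)
# -S 4096  per-backend sort memory in kB, passed to each backend via -o
pg_ctl -o "-i -N 256 -B 512 -o \"-S 4096\" >>postgres.log 2>&1 </dev/null" start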

I'm not sure if all of these problems are directly related, or if I'm looking at
two or more possible bugs.

Please describe a way to repeat the problem. Please try to provide a
concise reproducible example, if at all possible:
----------------------------------------------------------------------

I am unsure how to repeat this problem, although I do have a database that
exhibits it stored locally. I do know that nothing unusual was going on at
the time: the same selects/inserts/updates/deletes that had been running
trouble-free for the past two months were the only transactions taking place.

If you know how this problem might be fixed, list the solution below:
---------------------------------------------------------------------
