
Re: [BUGS] int2 unique index malfunction (btree corrupt)

From: Christof Petig <christof(dot)petig(at)wtal(dot)de>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: [BUGS] int2 unique index malfunction (btree corrupt)
Date: 1999-08-23 12:22:00
Lists: pgsql-bugs
Tom Lane wrote:

> Christof Petig <christof(dot)petig(at)wtal(dot)de> writes:
> > During development of a CIM program I frequently updated a table by its
> > primary key (int2 or numeric(3)). A lot of strange messages
> > 'NOTICE:  (transaction aborted): queries ignored until END' alerted me
> > that something is going wrong.
> > [ details snipped ]
> FWIW, the test program you supplied seems to run without errors for me.
> I'm using current CVS sources on an HPUX box.
> There was a fix applied on 8/8 to clean up a problem with btrees not
> recovering from an aborted transaction properly, but I'm not sure
> whether that has anything to do with your example...

My example fails reliably (within one to two seconds) on 6.5.1; however, I
tested it with today's CVS sources and it runs cleanly (disable the
debugging output to test at full speed).
So the bugfix seems to cover my problem.

However ...
- if I vacuum the database while my test program runs, all sorts of strange
things happen:
   -- everything goes well (90% of the time; more likely if the database has
recently shrunk)

   -- the vacuum backend crashes:
/home/christof> vacuumdb test
pqReadData() -- backend closed the channel unexpectedly.
        This probably means the backend terminated abnormally
        before or while processing the request.
We have lost the connection to the backend, so further processing is
impossible.  Terminating.
vacuumdb: database vacuum failed on test.

   -- see yourself:
/home/christof> vacuumdb test
ERROR:  Cannot insert a duplicate key into a unique index
vacuumdb: database vacuum failed on test.

   -- the postmaster goes into an endless loop; you can kill neither test nor
vacuumdb (this happened once after a long run, when test (the table's file)
had reached about 4MB). Killing the postmaster helps ...

- vacuum never shrinks primary indices, and the index's file continues to
grow (even past 7MB).
  It seems the only choice for long-running databases is either
(drop index/create index) or (dump/delete/restore).
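The drop/recreate workaround mentioned above might look like this (a sketch
only; the table, column, and index names are hypothetical, since the actual
schema is in the attached test program):

DROP INDEX test_pkey;
CREATE UNIQUE INDEX test_pkey ON test (id);

Rebuilding the index from scratch writes a compact file, which is why it
reclaims the space that VACUUM leaves behind.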


PS: Besides these issues Postgres works rather well!
  I like datetime_part('epoch', ...) and timespan_part('epoch', ...), which
cover functionality not available in our closed source (aka commercial)
database. Calculating the speed of a running machine in SQL is nearly trivial
(start_time, current_time, produced_amount).
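For instance, the speed calculation could be written roughly as follows
(a sketch; the table and column names are hypothetical, and 'now'::datetime
stands in for whatever current-time expression the application uses):

SELECT produced_amount
       / timespan_part('epoch', 'now'::datetime - start_time)
       AS units_per_second
FROM machine_run;

Here timespan_part('epoch', ...) turns the elapsed interval into plain
seconds, so the division gives a rate directly.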

PPS: I modified the test program so it does not drop and recreate the table
on start. This allows many runs (even concurrent ones) on the same tables.
Simply invoke it as
./test something_which_doesnt_matter

Attachment: pgsql_bug.tgz
Description: application/octet-stream (983 bytes)
