Re: [BUGS] int2 unique index malfunction (btree corrupt)

From: Christof Petig <christof(dot)petig(at)wtal(dot)de>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: [BUGS] int2 unique index malfunction (btree corrupt)
Date: 1999-08-23 12:22:00
Message-ID: 37C13CE6.8E18D7E4@wtal.de
Lists: pgsql-bugs

Tom Lane wrote:

> Christof Petig <christof(dot)petig(at)wtal(dot)de> writes:
> > During development of a CIM program I frequently updated a table by its
> > primary key (int2 or numeric(3)). A lot of strange messages
> > 'NOTICE: (transaction aborted): queries ignored until END' alerted me
> > that something was going wrong.
> > [ details snipped ]
>
> FWIW, the test program you supplied seems to run without errors for me.
> I'm using current CVS sources on an HPUX box.
>
> There was a fix applied on 8/8 to clean up a problem with btrees not
> recovering from an aborted transaction properly, but I'm not sure
> whether that has anything to do with your example...

My example fails badly (within one to two seconds) on 6.5.1; however, I
tested it with today's CVS sources and it runs cleanly (disable the
debugging output to test at full speed).
So the bugfix seems to cover my problem.

However ...
- if I vacuum the database while my test program runs, all sorts of strange
things happen:
-- everything goes well (about a 90% chance; the odds are better if the
database has recently shrunk)

-- the vacuum backend crashes:
/home/christof> vacuumdb test
pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally
before or while processing the request.
We have lost the connection to the backend, so further processing is
impossible.
vacuumdb: database vacuum failed on test.

-- see for yourself:
/home/christof> vacuumdb test
ERROR: Cannot insert a duplicate key into a unique index
vacuumdb: database vacuum failed on test.

-- the postmaster goes into an endless loop; you can kill neither test nor
vacuumdb (this happened once after a long run, when test, the table's file,
had reached about 4 MB). Killing the postmaster helps ...

- vacuum never shrinks primary-key indices, and the index's file keeps
growing (it is past 7 MB by now).
It seems the only choice for long-running databases is either (drop
index / create index) or (dump / delete / restore); see the sketch below.
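
For example, rebuilding the index by hand looks roughly like this (the
column name "id" is my assumption; the real schema is in the attached
test program):

-- table "test" has an int2 primary key column, assumed here to be
-- called "id"; its implicit unique index is named "test_pkey"
drop index test_pkey;
create unique index test_pkey on test (id);

This recreates the index at its minimal size, until it starts growing
again.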

Regards
Christof

PS: Besides these issues, Postgres works rather well!
I like datetime_part('epoch', ...) and timespan_part('epoch', ...), which
cover functionality not available in our closed source (aka commercial)
database.
Calculating the speed of a running machine in SQL is nearly trivial
(start_time, current_time, produced_amount).
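
For example (the table and column names here are made up, not from our
real application):

-- production speed per machine in units per second, using
-- timespan_part on the elapsed time since the machine started
select machine_id,
       produced_amount
         / timespan_part('epoch', 'now'::datetime - start_time)
         as units_per_second
from production;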

PPS: I modified the test program so that it no longer drops and recreates
the table on start. This allows many runs (even concurrent ones) on the
same tables; the shape of the change is sketched below. Simply invoke it as
./test something_which_doesnt_matter
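
(In short, where the program used to start with

drop table test;
create table test ( ... );

it now issues only the create, ignoring the 'Relation ... already exists'
error on later runs. At least that is the shape of the change; the exact
code is in the attachment.)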

Attachment: pgsql_bug.tgz (application/octet-stream, 983 bytes)
