
Regression tests fail once XID counter exceeds 2 billion

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: pgsql-hackers(at)postgreSQL(dot)org
Subject: Regression tests fail once XID counter exceeds 2 billion
Date: 2011-11-13 23:16:48
Message-ID: 28621.1321226208@sss.pgh.pa.us
Lists: pgsql-hackers
While investigating bug #6291 I was somewhat surprised to discover
$SUBJECT.  The cause turns out to be this kluge in alter_table.sql:

        select virtualtransaction
        from pg_locks
        where transactionid = txid_current()::integer

which of course starts to fail with "integer out of range" as soon as
txid_current() gets past 2^31.  Right now, since there is no cast
between xid and any integer type, and no comparison operator except the
dubious xideqint4 one, the only way we could fix this is something
like

        where transactionid::text = (txid_current() % (2^32)::bigint)::text

which is surely pretty ugly.  Is it worth doing something less ugly?
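[Editor's note: spelled out fully, the workaround would look something like the sketch below. The `(2^32)::bigint` cast matters, since `^` yields double precision in PostgreSQL and there is no `bigint % double precision` operator.]

```sql
-- Sketch of the ugly workaround: compare the low 32 bits of the
-- 64-bit txid against pg_locks.transactionid, both rendered as text,
-- since there is no sane comparison between xid and bigint.
select virtualtransaction
from pg_locks
where transactionid::text = (txid_current() % (2^32)::bigint)::text;
```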
I'm not sure if there are any other use-cases for this type of
comparison, but if there are, seems like it would be sensible to invent
a function along the lines of

        txid_from_xid(xid) returns bigint

that plasters on the appropriate epoch value for an
assumed-to-be-current-or-recent xid, and returns something that squares
with the txid_snapshot functions.  Then the test could be coded without
kluges as

        where txid_from_xid(transactionid) = txid_current()
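[Editor's note: a minimal SQL-level sketch of what such a function might look like follows. This is hypothetical, not the proposed implementation; a real version would presumably be written in C and obtain the epoch the same way txid_current() does. It assumes the given xid is current or at most one wraparound old.]

```sql
-- Hypothetical sketch: graft the current epoch (the high 32 bits of
-- txid_current()) onto a 32-bit xid.  If the xid's low bits exceed
-- the current counter's low bits, assume the previous epoch.
create function txid_from_xid(x xid) returns bigint
language sql stable as $$
  select case when xv <= cur % (2^32)::bigint
              then (cur >> 32)       * (2^32)::bigint + xv  -- same epoch
              else ((cur >> 32) - 1) * (2^32)::bigint + xv  -- prior epoch
         end
  from (select x::text::bigint as xv, txid_current() as cur) s
$$;
```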

Thoughts?

			regards, tom lane
