Best practice for long-lived journal tables: bigint or recycling IDs?

From: Mark Stosberg <mark(at)summersault(dot)com>
To: pgsql-sql(at)postgresql(dot)org
Subject: Best practice for long-lived journal tables: bigint or recycling IDs?
Date: 2008-07-08 21:16:34
Message-ID: 20080708171634.7d29e3d7@summersault.com
Lists: pgsql-sql


Hello,

I have some tables that continually collect statistics, and then over time are
pruned as the stats are aggregated into more useful formats.

For some of these tables, it is foreseeable that the associated sequences
will be incremented past the maximum value of the "int" type in the normal
course of things.

I see two options to prepare for that:

1. Convert the primary keys to "bigint", which should be good enough "forever".
I suppose there would be some minor storage and performance penalty.
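For what it's worth, the conversion itself is a one-line ALTER (table and
column names here are hypothetical placeholders for my actual tables):

```sql
-- Assumes a journal table "stats_journal" with an integer PK "id".
-- Note: this rewrites the whole table, so it should be done during a
-- maintenance window on a large table.
ALTER TABLE stats_journal ALTER COLUMN id TYPE bigint;
```

Since these tables are pruned continually, the rewrite would at least be
operating on a relatively small number of live rows.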

2. Reset the sequence at some point. There would be no "collisions", because the
older rows would have long been pruned out. I suppose there is an improbable
edge case in which we restore some old data from tape and then are confused
because some new data has the same IDs, but as I said, these tables are used as
temporary holding locations, not permanent storage.
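For option 2, rather than resetting by hand, the sequence can be told to wrap
around automatically when it hits its maximum (again, the sequence name below
is a hypothetical placeholder):

```sql
-- Let the sequence wrap back to its minimum instead of erroring out
-- when it reaches MAXVALUE.
ALTER SEQUENCE stats_journal_id_seq CYCLE;

-- Or, the equivalent manual reset: the "false" means the next
-- nextval() call returns 1 itself rather than 2.
SELECT setval('stats_journal_id_seq', 1, false);
```

The CYCLE approach has the advantage of not needing a cron job, though it
silently depends on the old rows really being gone by the time the wrap
happens.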

Both options have some appeal for me. What have others done?

Mark

--
. . . . . . . . . . . . . . . . . . . . . . . . . . .
Mark Stosberg Principal Developer
mark(at)summersault(dot)com Summersault, LLC
765-939-9301 ext 202 database driven websites
. . . . . http://www.summersault.com/ . . . . . . . .
