Re: guaranteeing that a sequence never skips (fwd)

From: Christopher Browne <cbbrowne(at)acm(dot)org>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: guaranteeing that a sequence never skips (fwd)
Date: 2004-10-03 19:06:31
Message-ID: m3hdpb4o5k.fsf@wolfe.cbbrowne.com
Lists: pgsql-general

In an attempt to throw the authorities off his trail, smarlowe(at)qwest(dot)net ("Scott Marlowe") transmitted:
> On Sun, 2004-10-03 at 11:48, Mike Nolan wrote:
>> > On Sun, 2004-10-03 at 08:58, David Garamond wrote:
>> > > Am I correct to assume that SERIAL does not guarantee that a sequence
>> > > won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?
>> > >
>> > > Sometimes a business requirement is that a serial sequence never skips,
>> > > e.g. when generating invoice/ticket/formal letter numbers. Would an
>> > > INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...) suffice,
>> > > or must I install a trigger too to do additional checking?
>> >
>> > You will have to lock the whole table and your parallel performance will
>> > be poor.
>>
>> Locking the table isn't sufficient to guarantee that a sequence value
>> never skips. What if a transaction fails and has to be rolled back?
>>
>> I've written database systems that used pre-numbered checks; what's usually
>> necessary is to postpone the check-numbering phase until the number of
>> checks is finalized, so that there's not much chance of anything else
>> causing a rollback.
>> --
>
> I didn't mean to use a sequence, sorry for being vague. I meant this:
>
> lock table
> select max(idfield)+1
> insert new row
> disconnect.

Yeah, that'll work, so long as you're prepared to wait for the table
to be available.
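
Spelled out in SQL, that's roughly the following. This is just a sketch; the
table and column names (thistable, idfield, payload) are made up for
illustration, and the lock mode is one reasonable choice:

begin;
lock table thistable in exclusive mode;
-- EXCLUSIVE blocks other writers until commit, but still lets plain
-- SELECTs through.
insert into thistable (idfield, payload)
  select coalesce(max(idfield), 0) + 1, 'some data'
    from thistable;
commit;

The coalesce() covers the empty-table case, where max() returns NULL.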

I think I like my idea of putting in provisional values, and then
fixing them up later...

You could do this via a sequence thus:

select setval('ourseq', 250000000); -- Make sure the sequence starts
-- way high

create index idf_250m on thistable(idfield) where idfield > 250000000;
-- Provide an efficient way to look up the entries that need
-- to get reset
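
(That presumes thistable and ourseq already exist in some form; for the sake
of argument, something like:

create sequence ourseq;
create table thistable (
    idfield integer not null default nextval('ourseq'),  -- provisional id
    payload text
);

where ordinary inserts just take whatever nextval() hands out.)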

Then, every once in a while, a separate process would go in, find the
highest idfield value below 250M, and rewrite idfield on all of the tuples
where idfield > 250M, renumbering them consecutively from there. That stays
efficient thanks to the partial index. It limits the number of documents to
250M, but I'm sure that can be alleviated when it turns into an issue...
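
The fixup pass itself could be a single statement along these lines (again
just a sketch with the same made-up names as above; in PostgreSQL the
subqueries see the table as it was at the start of the statement, so each
provisional row lands at the highest real id plus its rank):

begin;
lock table thistable in exclusive mode;  -- keep writers out while renumbering

update thistable
   set idfield = (select coalesce(max(idfield), 0)
                    from thistable r
                   where r.idfield < 250000000)
               + (select count(*)
                    from thistable p
                   where p.idfield > 250000000
                     and p.idfield <= thistable.idfield)
 where idfield > 250000000;

commit;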
--
output = reverse("gro.mca" "@" "enworbbc")
http://linuxfinances.info/info/nonrdbms.html
Would I be an optimist or a pessimist if I said my bladder was half
full?
