
Re: Can Postgres Not Do This Safely ?!?

From: Peter Geoghegan <peter(dot)geoghegan86(at)gmail(dot)com>
To: Karl Pickett <karl(dot)pickett(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Can Postgres Not Do This Safely ?!?
Date: 2010-10-29 07:53:03
Message-ID:
Lists: pgsql-general
On 29 October 2010 03:04, Karl Pickett <karl(dot)pickett(at)gmail(dot)com> wrote:
> Hello Postgres Hackers,
> We have a simple 'event log' table that is insert only (by multiple
> concurrent clients).  It has an integer primary key.  We want to do
> incremental queries of this table every 5 minutes or so, i.e. "select
> * from events where id > LAST_ID_I_GOT" to insert into a separate
> reporting database.  The problem is, this simple approach has a race
> that will forever skip uncommitted events.  I.e., if 5000 was
> committed sooner than 4999, and we get 5000, we will never go back and
> get 4999 when it finally commits.  How can we solve this?  Basically
> it's a phantom row problem but it spans transactions.
> I looked at checking the internal 'xmin' column but the docs say that
> is 32 bit, and something like 'txid_current_snapshot' returns a 64 bit
> value.  I don't get it.  All I want is to make sure I skip over any
> rows that are newer than the oldest currently running transaction.
> Has nobody else run into this before?
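The race the poster describes, and the commonly suggested fix of tracking a transaction-id watermark instead of a plain id cursor, can both be sketched without a database. (On the width question: the 64-bit values from txid_current_snapshot() carry the 32-bit xid in the low bits plus an epoch counter in the high bits, which is how the two widths reconcile.) The Python below is a simulation with illustrative names, not Postgres API code:

```python
# Simulating the race from the post, then the txid-watermark fix.
# All names are illustrative; nothing here is a Postgres API.

def naive_poll(rows, last_id):
    """'select id from events where id > last_id' over committed rows only."""
    got = sorted(r[0] for r in rows if r[2] and r[0] > last_id)
    return got, (got[-1] if got else last_id)

def watermark_poll(rows, active_txids, next_txid, prev_xmin):
    """Read rows whose inserting txid lies in [prev_xmin, xmin), where xmin
    is the oldest transaction still running at poll time: each row is seen
    exactly once, however late its transaction commits."""
    xmin = min(active_txids) if active_txids else next_txid
    batch = sorted(r[0] for r in rows if r[2] and prev_xmin <= r[1] < xmin)
    return batch, xmin

# txid 101 inserts id 4999 but is slow to commit;
# txid 102 inserts id 5000 and commits at once.
rows = [(4999, 101, False), (5000, 102, True)]  # (id, txid, committed)

# Naive cursor: reads 5000, then advances past 4999 forever.
got1, cursor = naive_poll(rows, last_id=4998)   # got1 == [5000]
rows[0] = (4999, 101, True)                     # 4999 finally commits...
lost, cursor = naive_poll(rows, cursor)         # lost == []: skipped for good.

# Watermark cursor: 5000 is merely delayed while 101 runs, never lost.
rows = [(4999, 101, False), (5000, 102, True)]
batch1, wm = watermark_poll(rows, active_txids={101}, next_txid=103, prev_xmin=100)
rows[0] = (4999, 101, True)
batch2, wm = watermark_poll(rows, active_txids=set(), next_txid=103, prev_xmin=wm)
# batch1 == [], batch2 == [4999, 5000]
```

The watermark variant trades latency for safety: rows from young transactions are held back until every older transaction has finished, so nothing can commit "behind" the cursor.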

If I understand your question correctly, you want a "gapless" PK.

Peter Geoghegan
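For context on the "gapless" suggestion (the rest of the reply is not preserved here): the usual recipe is a single counter row that every inserter locks and increments in its own transaction, so ids become visible strictly in order, at the cost of serializing writers. A minimal Python simulation of that idea, with hypothetical names standing in for a `SELECT ... FOR UPDATE` on a one-row counter table:

```python
import threading

# Hypothetical gapless-counter simulation: all inserters take the same
# lock (like a row lock on a one-row counter table), so id assignment
# and "commit" happen as one atomic step and no id can become visible
# before a smaller one.
_lock = threading.Lock()
_counter = 0
events = []

def insert_event(payload):
    global _counter
    with _lock:                              # serializes all writers
        _counter += 1
        events.append((_counter, payload))   # "commit" inside the lock
        return _counter

ids = [insert_event(f"e{i}") for i in range(5)]
# ids == [1, 2, 3, 4, 5]: contiguous and strictly ordered, so a reader
# can safely resume from "id > last_id".
```

The trade-off is throughput: every insert contends on the counter, which is why gapless keys are usually reserved for cases (like auditable event logs) where the ordering guarantee is worth it.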

