Re: Can Postgres Not Do This Safely ?!?

From: Jeff Davis <pgsql(at)j-davis(dot)com>
To: Andy Colson <andy(at)squeakycode(dot)net>
Cc: Merlin Moncure <mmoncure(at)gmail(dot)com>, Karl Pickett <karl(dot)pickett(at)gmail(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Can Postgres Not Do This Safely ?!?
Date: 2010-10-29 23:59:38
Message-ID: 1288396778.2410.14.camel@jdavis-ux.asterdata.local
Lists: pgsql-general

On Fri, 2010-10-29 at 16:57 -0500, Andy Colson wrote:
> begin
> insert into logged select * from events where processed = false;
> update events set processed = true where processed = false;
> commit;

There's a race condition there. The SELECT in the INSERT statement may
read 5 tuples, then a concurrent transaction inserts and commits a 6th
tuple, then your UPDATE marks all 6 tuples as processed, so the 6th one
is never copied into logged.
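
Roughly, the interleaving looks like this (the payload column and its
values are made up for illustration):

-- session A
begin;
insert into logged select * from events where processed = false;
  -- the SELECT's snapshot sees 5 unprocessed rows

-- session B, on another connection, commits a new row in between
insert into events (payload, processed) values ('event 6', false);

-- session A, continued
update events set processed = true where processed = false;
  -- under READ COMMITTED this statement takes a fresh snapshot, sees
  -- all 6 rows, and marks the 6th processed even though it was never
  -- copied into logged
commit;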

> begin
> select * from events where processed = false;
> ... do you processing on each, which would include inserting it...
> update events set processed = true where processed = false;
> commit;

Same problem here.

> Just make sure you do it all in the same transaction, so the update sees
> the exact same set as the select.

You need to use the SERIALIZABLE isolation level for this to work. The
default is READ COMMITTED, where each statement takes its own snapshot,
so the SELECT and the UPDATE can see different sets of rows.
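
As a sketch, the same transaction under SERIALIZABLE (the application
should be prepared to retry if it fails with a serialization error):

begin isolation level serializable;
insert into logged select * from events where processed = false;
update events set processed = true where processed = false;
commit;
-- both statements now use the same snapshot, so they operate on the
-- same set of rows; retry the whole transaction on
-- "could not serialize access" errors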

Or better yet, use Merlin's suggestion of PgQ. They've already worked
this out in a safe, efficient way. It's the basis for Londiste, a
replication system.
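
For reference, the PgQ flow looks roughly like this (function names are
from the Skytools pgq schema; treat it as an outline and check the PgQ
docs for the exact signatures):

-- one-time setup
select pgq.create_queue('events');
select pgq.register_consumer('events', 'logger');

-- producer: push events into the queue instead of flagging a column
select pgq.insert_event('events', 'new_event', 'payload goes here');

-- consumer loop: fetch a batch, process its events, then acknowledge it
select pgq.next_batch('events', 'logger');       -- returns a batch id, or NULL
select * from pgq.get_batch_events(:batch_id);   -- the events in that batch
select pgq.finish_batch(:batch_id);              -- mark the batch done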

Regards,
Jeff Davis
