From: bricklen <bricklen(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Can Postgres Not Do This Safely ?!?
Date: 2010-10-31 00:25:53
Message-ID: AANLkTi=8yuN-O8L-J6-NH6YZeV8PBYsAsJAZx3jifwAn@mail.gmail.com
Lists: pgsql-general
On Fri, Oct 29, 2010 at 4:59 PM, Jeff Davis <pgsql(at)j-davis(dot)com> wrote:
> On Fri, 2010-10-29 at 16:57 -0500, Andy Colson wrote:
>> begin
>> insert into logged select * from events where processed = false;
>> update events set processed = true where processed = false;
>> commit;
>
> There's a race condition there. The SELECT in the INSERT statement may
> read 5 tuples, then a concurrent transaction inserts a 6th tuple, then
> you do an update on all 6 tuples.
>
>> begin
>> select * from events where processed = false;
>> ... do you processing on each, which would include inserting it...
>> update events set processed = true where processed = false;
>> commit;
>
> Same problem here.
>
>> Just make sure you do it all in the same transaction, so the update sees
>> the exact same set as the select.
>
> Regards,
> Jeff Davis
As stated earlier in the thread, there is a race condition within that
transaction, but you could also use a temp table to capture the ids of
the rows you are about to process, so the INSERT and the UPDATE operate
on exactly the same set. Maybe something like this (untested):
begin;
create temp table _idlist on commit drop as
    select id from eventlog where processed is false;
insert into othertable
    select e.* from eventlog as e
    inner join _idlist as i on (i.id = e.id);
update eventlog set processed = true
    from _idlist as i
    where eventlog.id = i.id;
commit;
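
If more than one consumer can run this transaction at the same time, you
would probably also want to lock the selected rows so two workers don't
claim the same events. A sketch of that variant (same hypothetical
eventlog/othertable schema as above, untested):

    begin;
    -- FOR UPDATE locks the matched rows, so a concurrent worker blocks
    -- on them instead of processing the same events twice. It does not
    -- guard against rows inserted after the snapshot, but those will be
    -- picked up by the next run since processed stays false for them.
    create temp table _idlist on commit drop as
        select id from eventlog where processed is false for update;
    insert into othertable
        select e.* from eventlog as e
        inner join _idlist as i on (i.id = e.id);
    update eventlog set processed = true
        from _idlist as i
        where eventlog.id = i.id;
    commit;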