From: Jan Wieck <JanWieck(at)Yahoo(dot)com>
To: PostgreSQL HACKERS <pgsql-hackers(at)postgreSQL(dot)org>
Subject: Re: RE: [BUGS] Update is not atomic
Date: 2001-06-21 14:12:08
Message-ID: 200106211412.f5LEC8Y01927@jupiter.us.greatbridge.com
Lists: pgsql-hackers

Mikheev, Vadim wrote:
> > > > update a set a=a+1 where a>2;
> > > > ERROR: Cannot insert a duplicate key into unique index a_pkey
> > >
> > > We use a unique index for UK/PK but shouldn't. Jan?
> >
> > What else can you use but an index? A "deferred until
> > statement end" trigger checking for duplicates? I think
> > it'd have a really bad performance impact.
>
> AFAIR, the standard requires "deferred" (until statement/transaction(?)
> end) as the default behaviour for RI (all?) constraints. But no matter
> what the default is, "deferred" *must* be available => unique indices
> must not be used.

Right.
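For illustration only, here is a minimal Python sketch (not PostgreSQL code; all names and the set-based "table" are my own assumptions) of why a per-row uniqueness check rejects the quoted UPDATE in ascending scan order, while checking once at statement end accepts it:

```python
def update_immediate(keys):
    """Apply a = a + 1 row by row, checking uniqueness after each row.

    With keys {2, 3, 4} and an ascending scan order, updating 3 -> 4
    collides with the not-yet-updated 4 and raises, just like the
    quoted "Cannot insert a duplicate key" error.
    """
    table = set(keys)
    for k in sorted(k for k in keys if k > 2):
        table.discard(k)
        if k + 1 in table:
            raise ValueError(f"duplicate key {k + 1}")
        table.add(k + 1)
    return table


def update_deferred(keys):
    """Apply the whole statement first, check uniqueness once at the end."""
    updated = [k + 1 if k > 2 else k for k in keys]
    if len(set(updated)) != len(updated):
        raise ValueError("duplicate key at statement end")
    return set(updated)
```

Note that the immediate variant fails or succeeds depending purely on scan order (a descending scan would pass), which is exactly why the result is not atomic with respect to the statement.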

>
> > Whatever the execution order might be, the update of '3' to
> > '4' will see the other '4' as existent WRT the scan commandId
> > and given snapshot - right? If, at the point where we now fire
> > the ERROR, we instead add the key, the index and the heap to a
> > list of "possible dupkeys" that we check at the end of the
> > actual command, the above would work. The check at statement
> > end would have to increment the command counter and, for each
> > entry, do an index scan with the key, counting the number of
> > valid heap tuples found.
>
> Incrementing the command counter is not enough - dirty reads are
> required to handle concurrent PK updates.

What's that with you and dirty reads? Every so often you tell
me that something would require them - you really like to
read dirty things - no? :-)

So let me get this straight: I execute the entire UPDATE SET
A=A+1, then increment the command counter and don't see my
own results? So an index scan with a heap tuple check will
return OLD (+NEW?) rows? Last time I fiddled around with
Postgres it didn't, but I could be wrong.

>
> > Well, with some millions of rows, doing a "set a = a + 1" could
> > run out of memory. So this would be something that'd work in
> > the sandbox and for non-broken applications (tm). Maybe at
>
> How is this different from the (deferred) updates of a million FKs
> that we allow right now? Let the user decide what behaviour
> (deferred/immediate) he needs. The point is that right now the user
> has no ability to choose what's right for him.

It isn't, and I could live with that. I just wanted to point
it out before we implement it and get complaints.

>
> > some level (when we escalate the lock to a full table lock?)
> > we simply forget about single keys, but have a new index
> > access function that checks the entire index for uniqueness.
>
> I wouldn't bother to implement this. The user always has the ability
> to exclusively lock the table, drop the constraints, update whatever
> he wants and recreate the constraints again.

It'd be easy to implement for btree: just do an entire index
scan, which returns every index entry in sort order. Check
whether the heap tuple is alive and, if its key is equal to
the previous key found alive, abort with a dupkey error.
Well, not really super performant, but we were about to run
out of memory anyway, so it's not a performance question any
more - it's a question of survival.
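That whole-index check can be sketched in a few lines (again an illustrative assumption, not btree code: entries are (key, row) pairs already in index sort order, and `is_live` stands in for the heap tuple validity test):

```python
def check_unique(index_entries, is_live):
    """Verify uniqueness by one pass over the index in sort order.

    index_entries: iterable of (key, row) pairs in index sort order.
    is_live:       predicate standing in for the heap-tuple liveness
                   check (an assumption of this sketch).

    Compare each live entry's key with the previous live key; equal
    adjacent live keys mean a duplicate.
    """
    prev_key = None
    for key, row in index_entries:
        if not is_live(row):
            continue                   # dead heap tuple, skip it
        if key == prev_key:
            raise ValueError(f"duplicate key {key}")
        prev_key = key
    # no duplicates among live tuples
```

Because a btree scan already yields entries in key order, duplicates among live tuples are necessarily adjacent, so a single pass with O(1) extra memory suffices, which is the point when memory is the scarce resource.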

Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck(at)Yahoo(dot)com #

