RE: RE: [BUGS] Update is not atomic

From: "Mikheev, Vadim" <vmikheev(at)SECTORBASE(dot)COM>
To: "'Jan Wieck'" <JanWieck(at)Yahoo(dot)com>, PostgreSQL HACKERS <pgsql-hackers(at)postgreSQL(dot)org>
Subject: RE: RE: [BUGS] Update is not atomic
Date: 2001-06-21 17:50:31
Message-ID: 3705826352029646A3E91C53F7189E32016687@sectorbase2.sectorbase.com
Lists: pgsql-hackers

> > Incrementing the command counter is not enough - dirty reads are
> > required to handle concurrent PK updates.
>
> What's that with you and dirty reads? Every so often you tell
> me that something would require them - you really like to
> read dirty things - no? :-)

Dirty things occur - I like to handle them -:)
All MVCC stuff is just the ability to handle dirties, unlike the old
locking behaviour, where a transaction closed the doors to a table while
doing its dirty things. "Welcome to the open world, but be ready to
handle dirty things" -:)

> So let me get it straight: I execute the entire UPDATE SET
> A=A+1, then increment the command counter and don't see my
> own results? So an index scan with heap tuple check will
> return OLD (+NEW?) rows? Last time I fiddled around with
> Postgres it didn't, but I could be wrong.

How are you going to see concurrent PK updates without dirty reads?
If two transactions insert the same PK and perform the duplicate check at
the same time - how will they see duplicates if neither has committed yet?
Look - there is a very good example of using dirty reads in the current
system: unique indices, where this thread started. So, how does the unique
btree handle concurrent (and its own!) duplicates? Btree calls heap_fetch
with SnapshotDirty to see valid and *going to be valid* tuples with a
duplicate key. If VALID --> ABORT; if UNCOMMITTED (going to be valid)
--> wait for the concurrent transaction to commit/abort (note that for
obvious reasons heap_fetch(SnapshotDirty) doesn't return OLD rows
modified by the current transaction). I had to add all this SnapshotDirty
stuff precisely to get the unique btree working with MVCC. All I propose
now is to add the ability to perform dirty scans to SPI (and so to PL/*),
to be able to make the right decisions in SPI functions and triggers, and
to make those decisions *at the right time*, unlike the unique btree,
which makes its decision too soon. Is it clear now how to use dirty reads
for PK *and* FK?

You proposed using shared *row* locks for FK before. I objected then and
object now. It will not work for PK, because PK rows "do not exist"
for concurrent transactions. What would work here is *key* locks (locks
placed on some key in a table, no matter whether a row with that key
exists or not). This is what good locking systems, like Informix, use.
But PG is not a locking system; there is no reason to add key-lock
overhead, because PG's internals are already able to handle dirties, and
we need only add the same abilities to the externals.

Vadim
