On Tue, 2006-04-18 at 11:33 -0400, Tom Lane wrote:
> Mario Splivalo <mario(dot)splivalo(at)mobart(dot)hr> writes:
> >> If there is concurrent locking,
> >> you're also running a big risk of deadlock because two processes might
> >> try to lock the same rows in different orders.
> > I think there is no risk of a deadlock, since that particular function
> > is called from the middleware (functions are used as interface to the
> > database), and the lock order is always the same.
> No, you don't even know what the order is, let alone that it's always
> the same.
You got me confused here! :) If I have just one function that acts
as an interface to the middleware, and all the operations on the database
are done through that one function, and I carefully design that function
so that I first grab the lock and then do the work, am I not pretty sure
that I won't be having any deadlocks?
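To make concrete what I mean by a consistent lock order, here is a minimal sketch (the `codes` table and its column are hypothetical names of mine): every caller locks the rows it needs in the same ascending order, so two concurrent callers can never each hold a row the other is waiting for.

```sql
-- Hypothetical sketch: lock the needed rows in one fixed (ascending) order.
-- Because every caller orders identically, no circular wait can form.
SELECT code
  FROM codes
 WHERE code IN ('ABC123', 'XYZ789')
 ORDER BY code      -- the fixed ordering is what prevents deadlock
   FOR UPDATE;
```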
> > Now, I just need to have serialization, I need to have clients 'line up'
> > in order to perform something in the database. Actually, users are
> > sending codes from newspapers, beer cans, Cola cans, and so on, and the
> > database needs to check whether the code has already been played. Since
> > the system is designed so that it can run multiple code-games (and so
> > similar codes could exist for the coke-game and the beer-game), I'm
> > using the messages table to see which code-game (i.e. service) a
> > particular code belongs to.
> I'd suggest using a table that has exactly one row per "code-game", and
> doing a SELECT FOR UPDATE on that row to establish the lock you need.
> This need not have anything to do with the tables/rows you are actually
> intending to update --- although obviously such a convention is pretty
> fragile if you have updates coming from a variety of code. I think it's
> reasonably safe when you're funneling all the operations through a bit
> of middleware.
I tend to design my applications so I don't have "flying SQL" in my
java/python/c#/php/wherever code; all the database work is done through
functions which are designed as interfaces. Those functions are also
designed so they don't block each other. So, since I need the
serialization, I'll do as you suggested, using a lock table with
exactly one row per "code-game".
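If I understand the suggestion correctly, something like this sketch (the table and column names here are mine, not anything that exists yet):

```sql
-- One row per code-game; locking that row serializes all players of that game.
CREATE TABLE code_game_lock (
    code_game_id integer PRIMARY KEY
);

-- Inside the play-code function, first serialize on the game
-- (the id value 1 is just a placeholder for the coke-game):
SELECT code_game_id
  FROM code_game_lock
 WHERE code_game_id = 1
   FOR UPDATE;
-- ...then check and record the played code; the row lock is
-- released automatically at COMMIT or ROLLBACK.
```

This way the big played-codes table itself never needs a table-wide lock; only the single lock row per game is contended.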
Just one more question; it has to do with postgres internals, but
still I'd like to know why postgres is doing such huge I/O (in my log
file I see a lot of messages saying "LOG: archived transaction log
file") when performing that big FOR UPDATE.
"I can do it quick, I can do it cheap, I can do it well. Pick any two."