Re: Transaction settings: nowait

From: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
To: durumdara <durumdara(at)gmail(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Transaction settings: nowait
Date: 2009-05-07 01:35:26
Message-ID: 4A023ADE.5010204@postnewspapers.com.au
Lists: pgsql-general

durumdara wrote:

> If you set wait and a timeout, Firebird waits on the locked resource
> (record) for X seconds before it shows a deadlock error.
>
> But when you set no wait, the deadlock error is shown immediately by
> the server.

Waiting on a lock doesn't mean there's a deadlock. A deadlock only
occurs if two transactions are each waiting on resources the other
transaction holds.

PostgreSQL *DOES* let you control how long it waits on a lock before
checking whether the waiting transactions might be deadlocked with one
another (the deadlock_timeout setting). See:
http://www.postgresql.org/docs/current/static/runtime-config-locks.html
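
For instance (just a sketch; deadlock_timeout can normally only be
changed by a superuser):

    -- Check for deadlock after 2 seconds of waiting on a lock,
    -- instead of the default 1 second:
    SET deadlock_timeout = '2s';
    SHOW deadlock_timeout;

Note that this only controls when the deadlock *check* runs; it doesn't
limit how long a transaction will wait for a lock.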

However, it sounds like what you want is the ability to tell PostgreSQL
that you don't want your queries to ever wait on a lock at all, even if
it's just that you're waiting for another transaction to finish work.

If that's what you mean: I'm not aware of any way in PostgreSQL to set
lock wait timeouts at a transaction-wide level, or to set Pg to report
an error if it would have to wait for a lock.

Others seem to have wanted it enough to write patches for it, but I
don't think they got merged. See, for example:
http://archives.postgresql.org/pgsql-hackers/2004-06/msg00935.php

(Please read the discussion on that thread to get an idea of some of the
problems with the approach you appear to be requesting, if I'm reading
your post correctly.)

Anyway, it *IS* possible, as others have noted, to use
SELECT ... FOR { UPDATE | SHARE } NOWAIT
and/or
LOCK TABLE ... NOWAIT
before issuing work. However, this generally means you need to know what
tuples your queries will touch, including tuples accessed by triggers
and the like, before you issue your query.
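
For example (a sketch only; the "accounts" table here is made up for
illustration):

    BEGIN;
    -- Raise an error immediately instead of blocking if another
    -- transaction already holds a conflicting lock on the row:
    SELECT balance FROM accounts WHERE id = 1 FOR UPDATE NOWAIT;
    -- ... or refuse to wait for a table-level lock:
    LOCK TABLE accounts IN SHARE ROW EXCLUSIVE MODE NOWAIT;
    COMMIT;

If either statement would have to wait, it fails with SQLSTATE 55P03
(lock_not_available) and the transaction is aborted.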

It's also really clumsy, since you often WANT queries to wait on locks
at least briefly. Otherwise you'll get intermittent errors from queries
that are operating normally, just because another concurrently running
query briefly locked something the first query wanted.

I must say that I personally can't really see the use of a
transaction-wide lock wait timeout. Sure, most applications have to be
prepared to re-issue queries anyway due to transient errors of one sort
or another, and it's not unreasonable to want to be able to detect the
difference between a query blocking on a lock, and a query that's just
taking a long time to do work ... but I don't think that aborting the
transaction/statement with an error is the right way to do that.

First: In PostgreSQL an error from a transaction results in rollback of
the whole transaction. This means that if you wanted to re-try after a
lock wait timeout, you'd need to re-issue the whole transaction, or
you'd need to use savepoints before each statement to give you
statement-level retry. Both are clumsy and inefficient. It could also
easily result in situations where the same group of transactions keeps
retrying and fighting over the same locks over and over; you'd waste a
lot of CPU and I/O repeating work pointlessly, whereas if you had just
let the transactions wait on the locks, everything would have gone
through fine.
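
The savepoint approach would look something like this (a sketch; the
UPDATE and table are hypothetical, and the decision to roll back to the
savepoint after an error has to be made by the client application):

    BEGIN;
    SAVEPOINT attempt;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    -- If the UPDATE failed (say, on a lock wait timeout), the client
    -- rolls back just that statement and tries again:
    ROLLBACK TO SAVEPOINT attempt;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    RELEASE SAVEPOINT attempt;
    COMMIT;

You pay the savepoint overhead on every statement even when nothing
goes wrong.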

In other words, it'd be ugly. I'm also not really sure how much
difference it makes in practice WHY a statement is taking a long time,
only that it is. Who cares whether you're waiting on a lock held by
another transaction, or whether another transaction is using so much
memory and disk I/O bandwidth that your query is taking forever?

If you really did care about lock waits specifically, it might almost be
nicer to have the server send asynchronous NOTICE-level messages
informing the application that the query is blocked on a lock, e.g.
"NOTICE: lock_wait_timeout exceeded waiting for SHARE lock on oid
44123". That way the client app would know what was happening, but the
query wouldn't get interrupted unless the app intentionally issued a
query cancel request.

Of course, that requires more sophisticated app programming and database
interface driver use than just issuing an error on lock wait (you have
to process messages, and you have to be prepared to do a query cancel
request via another connection, probably from another thread), but it's
a lot cleaner and nicer.

Personally in these situations I just make sure my apps are
multi-threaded with all database work done in a worker thread and
controlled by posting events back and forth. That way my UI can keep the
user informed and keep on updating while the query runs. The UI can also
make decisions about what to do with excessively long-running queries.
In the app I'm presently working on, for example, I pop up an infinite
progress indicator (busy-bar) after a short delay, and I permit the user
to cancel a long-running transaction if they don't want to keep waiting
(after all, I have to be able to handle transaction aborts for other
reasons anyway, so why not?).

In theory, if I was worried about excessive lock wait times, I could
even use the pg_catalog.pg_locks relation to figure out whether locking
issues were causing the delay. I don't care why queries take a while,
though, only that they do.
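
If you did want to check, something along these lines should show which
lock requests are stuck waiting (a sketch; the exact pg_locks columns
vary a little between versions):

    -- Lock requests that have not yet been granted, i.e. backends
    -- currently blocked waiting on a lock:
    SELECT locktype, relation::regclass AS relation, mode, pid
    FROM pg_catalog.pg_locks
    WHERE NOT granted;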

So ... personally, I don't think a lock timeout is a particularly good
or useful idea. Basically all the use cases are handled (IMO more
cleanly) by statement_timeout, SELECT ... FOR {SHARE|UPDATE} NOWAIT,
LOCK TABLE ... NOWAIT, and pg_cancel_backend(...) .
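
For completeness, the statement_timeout route is just this (a sketch;
the timeout value and the pid passed to pg_cancel_backend are made up):

    -- Abort any statement in this session that runs longer than 30
    -- seconds, whether it's waiting on a lock or just doing slow work:
    SET statement_timeout = '30s';

    -- Or, from another connection, cancel a specific backend's
    -- currently running query:
    SELECT pg_cancel_backend(12345);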

> For. example: I forget to make commit, or rollback on exception then all
> resources I used (updated) is locked.

Yes - that's an application bug.

--
Craig Ringer
