Re: LOCK TABLE oddness in PLpgSQL function called via JDBC

From: Barry Lind <barry(at)xythos(dot)com>
To: Dave Harkness <daveh(at)MEconomy(dot)com>
Cc: pgsql-jdbc(at)postgresql(dot)org
Subject: Re: LOCK TABLE oddness in PLpgSQL function called via JDBC
Date: 2001-10-02 20:45:53
Message-ID: 3BBA2781.8040908@xythos.com
Lists: pgsql-general pgsql-hackers pgsql-jdbc

Dave,

First off, are you running with autocommit turned off in JDBC? By
default autocommit is on, and thus your lock is released as soon as the
statement that acquired it completes.
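
Something along these lines should work (just a sketch, not tested,
reusing the conn, name, and count names from your code below):

    import java.sql.*;

    // Sketch: call next_id_block() inside an explicit transaction so that
    // any lock taken inside the function is held until the commit below.
    long nextIdBlock ( Connection conn, String name, int count ) throws SQLException
    {
        conn.setAutoCommit(false);      // must be off, or each statement commits itself
        PreparedStatement stmt = null;
        ResultSet result = null;
        try
        {
            stmt = conn.prepareStatement("select next_id_block(?, ?)");
            stmt.setString(1, name);
            stmt.setInt(2, count);

            result = stmt.executeQuery();
            if ( ! result.next() )
                throw new SQLException("Function next_id_block failed");

            long firstId = result.getLong(1);
            conn.commit();              // ending the transaction releases the lock
            return firstId;
        }
        catch ( SQLException e )
        {
            conn.rollback();            // rollback releases the lock as well
            throw e;
        }
        finally
        {
            if ( result != null ) try { result.close(); } catch ( SQLException ignore ) { }
            if ( stmt != null ) try { stmt.close(); } catch ( SQLException ignore ) { }
        }
    }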

Secondly, you don't need a table lock; you only need to lock the row
between the select and the update. Use 'select ... for update' for this:
when you issue the select to get the current value, it locks the row and
prevents other 'select for update' requests from completing until the
lock is released. That way the select and the update can be sure that no
one else is changing the data in between.
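
For example (again only a sketch, and it assumes autocommit is already
off as above), the whole allocation can be done with a row lock and no
table lock at all; the same 'for update' clause could equally go on the
select inside your PL/pgSQL function in place of the lock table call:

    // Sketch: lock only the idfactory row for this name, then update it.
    // Because the row stays locked until commit, the change_num guard is
    // no longer needed to detect concurrent updates.
    PreparedStatement sel = conn.prepareStatement(
        "select next_id from idfactory where name = ? for update");
    sel.setString(1, name);
    ResultSet rs = sel.executeQuery();
    if ( ! rs.next() )
        throw new SQLException("No idfactory row for " + name);
    long firstId = rs.getLong(1);       // this row is now locked until commit/rollback

    PreparedStatement upd = conn.prepareStatement(
        "update idfactory set next_id = ?, change_num = change_num + 1 where name = ?");
    upd.setLong(1, firstId + count);
    upd.setString(2, name);
    upd.executeUpdate();

    conn.commit();                      // releases the row lock
    // (statement / result cleanup omitted for brevity)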

thanks,
--Barry

Dave Harkness wrote:

> Hello all,
>
> I just discovered PLpgSQL yesterday (yay!) as we began development of
> our more robust database backend, but I'm seeing some odd behavior in
> the use of LOCK TABLE. The problem I'm seeing is that two database
> transactions, initiated via JDBC, are able to obtain simultaneous
> exclusive table locks on the same table. If I lock the table in PSQL,
> the JDBC calls do indeed block until I end my transaction in PSQL.
> However, two JDBC calls don't appear to block each other. :(
>
> There may indeed be a better solution than locking a table for what I'm
> doing, so please chime in if I'm missing something there. My next
> attempt will be to wrap the whole select-update in a loop, but I'm
> afraid of creating a deadlock situation.
>
> I used PLpgSQL to implement a next_id function that allows IDs to be
> allocated in continuous blocks (thus I believe I cannot use sequences).
> The client application generates its own IDs but tracks them in the
> database. To minimize DB calls the client allocates IDs in blocks and
> doles them out one-at-a-time. Thus, 20 calls to IDFactory.nextID() in
> Java will result in only 2 calls to the database function
> next_id_block(10).
>
> As well each object type (there are 6) that needs IDs gets its own
> IDFactory, *and* there are multiple Java clients accessing the same
> database. Therefore, it's quite possible for two IDFactories to call
> next_id_block(count) at the same time. Thus I'm locking the table in
> exclusive mode to force synchronous access. I've also tried access
> exclusive mode to no avail.
>
> Here's the table definition:
>
> create table idfactory
> (
>     name        varchar(20)  not null primary key,
>     next_id     integer      not null default 1,
>     change_num  smallint     not null default 1
> ) ;
>
> This is the pseudo-code algorithm I've implemented in PLpgSQL:
>
> next_id_block ( count )
>     (1) lock idfactory table
>
>     (2) read idfactory row
>     (3) update idfactory row
>             increment next_id by count
>             increment change_num by 1
>             where change_num is equal to that read in (2)
>
>     (4) FAIL if (3) updated 0 rows
>
>     (5) return next_id read in (2)
>
> My intent is that by locking the idfactory table, I can assure that no
> one else can update the row between it being read in step 2 and updated
> in step 3. I've tried calling this function from JDBC with auto-commit
> on as well as with it off and the connection set to both transaction
> levels. The reality, however, is that some threads are trying to update
> the row concurrently and failing (0 rows updated, since the change_num
> value no longer matches).
>
> I'll try to illustrate what seems to be happening in the case of two
> threads.
>
> Time   Thread 1       Thread 2
>   1    lock
>   2    read 1, 1
>   3                   lock
>   4                   read 1, 1
>   5    write 11, 2
>   6                   write 11, 2
>   7    return 1
>   8                   FAIL
>
> It's my understanding that thread 2 should block at T3 since thread 1
> locked the table at T1. Thread 2 shouldn't continue until thread 1's
> transaction is ended (either by commit or abort). True? The truly odd
> part is that if I start up PSQL, begin a transaction, and then lock the
> table, all the threads calling the function block until I end the
> transaction, just as I'd expect. However, the threads won't block each
> other!
>
> Here's the stored function itself:
>
> create function next_id_block (
>     varchar , integer
> )
> returns bigint
> as '
> DECLARE
>     -- Parameters
>     name_key    alias for $1 ;
>     block_size  alias for $2 ;
>
>     -- Constants
>     FIRST_ID    constant bigint := 1 ;
>
>     -- Locals
>     id_rec      record ;
>     new_last_id bigint ;
>     num_rows    integer ;
> BEGIN
>     -- To avoid a retry-loop, lock the whole table for the transaction
>     lock table idfactory in access exclusive mode ;
>
>     -- Read the current value of next_id
>     select into id_rec * from idfactory where name = name_key ;
>
>     -- Increment it by block_size
>     new_last_id := id_rec.next_id + block_size ;
>     update idfactory
>        set next_id = new_last_id,
>            change_num = change_num + 1
>      where name = name_key and change_num = id_rec.change_num ;
>
>     -- Error if filter not found
>     get diagnostics num_rows = ROW_COUNT ;
>     if num_rows != 1 then
>         raise exception ''Failed to update idfactory.next_id to % for % at %'',
>             new_last_id, name_key, id_rec.change_num;
>         return -1 ;
>     end if ;
>
>     return id_rec.next_id ;
> END ;
> ' language 'plpgsql' with (isstrict) ;
>
> Finally, here's the JDBC code I'm using to call it:
>
> protected void allocate ( int count )
> {
>     PreparedStatement stmt = null;
>     ResultSet result = null;
>     long newNextID = INVALID_ID;
>
>     try
>     {
>         stmt = conn.prepareStatement("select next_id_block(?, ?)");
>
>         stmt.setString(1, name);
>         stmt.setInt(2, count);
>
>         // conn.setAutoCommit(false);
>         result = stmt.executeQuery();
>
>         if ( ! result.next() )
>             throw new SQLException("Function next_id_block failed");
>
>         // Pull out the new value and close the result set.
>         newNextID = Nulls.getLong(result, 1);
>
>         try { result.close(); result = null; }
>         catch ( SQLException ignore ) { }
>
>         // conn.commit();
>
>         // Null values are not allowed.
>         if ( Nulls.is(newNextID) )
>             throw new SQLException("Function next_id_block returned null");
>
>         nextID = newNextID;
>         lastID = nextID + count;
>     }
>     catch ( SQLException e )
>     {
>         e.printStackTrace();
>     }
>     finally
>     {
>         if ( result != null )
>         {
>             try { result.close(); }
>             catch ( SQLException ignore ) { }
>         }
>     }
> }
>
> Anyway, this was rather long, but I wanted to provide all the
> information necessary up front. Thank you for any thoughts or ideas you
> might have.
>
> Peace,
> Dave
>
> --
> David Harkness
> MEconomy, Inc.
>
>
