unable to avoid a deadlock at the end of a long transaction

From: Ivan Sergio Borgonovo <mail(at)webthatworks(dot)it>
To: pgsql-general(at)postgresql(dot)org
Subject: unable to avoid a deadlock at the end of a long transaction
Date: 2010-05-07 08:30:19
Message-ID: 20100507103019.1fa2c329@dawn.webthatworks.it
Lists: pgsql-general

I've been having this:

psql:include/custom/import_update.custom.sql:63: ERROR: deadlock
detected DETAIL: Process 13349 waits for AccessExclusiveLock on
relation 250510 of database 248569; blocked by process 14153.
Process 14153 waits for ShareLock on transaction 59160779; blocked
by process 13349. CONTEXT: SQL statement "drop trigger if exists
FT1IDX_catalog_brands_update_trigger on catalog_brands" PL/pgSQL
function "ft1idx_trigger_drop" line 2 at SQL statement
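
[A hedged sketch, not necessarily the right fix here: the deadlock above involves an AccessExclusiveLock acquired late in the transaction by `drop trigger`. One common mitigation is to take the strongest lock needed up front, so the transaction never has to upgrade its lock mid-flight while another backend already holds a conflicting one. Table and trigger names are taken from the error message.]

```sql
-- Sketch: lock the table in the strongest mode at the *start* of the
-- big transaction, before touching any rows, so no other backend can
-- wedge itself between our row updates and the later DROP TRIGGER.
begin;
lock table catalog_brands in access exclusive mode;
-- ... the long update process runs here ...
drop trigger if exists FT1IDX_catalog_brands_update_trigger
  on catalog_brands;
commit;
```

The cost is reduced concurrency: readers of catalog_brands block for the whole transaction instead of only during the DROP TRIGGER.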

I reshuffled the update process, and I started to have other lock
problems.
The only process that I'm aware of that should be *writing* to the
tables involved is the update process; I'd expect other processes to
be reading, but only the update should be writing.

The lock problem happens near the end of the overall update process,
which is one big transaction. Reshuffling the steps a bit doesn't
make it go away; it just deadlocks on other statements, still at the
*end* of the process, after a bunch of updates and inserts have been
made on a bunch of other tables, the largest of which is an update of
roughly 85834 rows on a table containing 1M rows.

The only thing that looks constant is: I get a deadlock at the end of
a long transaction, on a random statement.

Where should I start to track this down?

DETAIL: Process 3662 waits for ShareLock on transaction 59301028;
blocked by process 4303. Process 4303 waits for ShareLock on
transaction 59299342; blocked by process 3662. CONTEXT: SQL
statement "update catalog_items set Authors= $1 where ItemID= $2 "
PL/pgSQL function "updateauthorsall" line 19 at SQL statement
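
[A hedged diagnostic sketch: while the long transaction is running, the standard system views pg_locks and pg_stat_activity show who holds and who waits for which locks. Column names assume a pre-9.2 server, current for 2010: `procpid` and `current_query` were later renamed `pid` and `query`.]

```sql
-- Sketch: list every lock, who holds it (granted = true) and who is
-- waiting for it (granted = false), together with the query text.
select l.pid, l.locktype, l.relation::regclass, l.mode, l.granted,
       a.current_query
from pg_locks l
join pg_stat_activity a on a.procpid = l.pid
order by l.granted, l.pid;
```

Running this from a separate session when the update process stalls should reveal which other backend is writing to catalog_items, since a ShareLock wait on a transaction ID means two transactions tried to update the same rows in opposite orders.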

create or replace function UpdateAuthorsAll()
returns void
as
$$
declare
  _row record;
  _ItemID bigint;
  _Authors varchar(1024);
  _AuthorsOLD varchar(1024);
  _Name varchar(50);
begin
  _Authors := '';
  _ItemID := null;
  for _row in select a.Name, ia.ItemID from catalog_itemauthor ia
        join catalog_author a on a.AuthorID = ia.AuthorID
      order by ia.ItemID
  loop
    if (_row.ItemID <> _ItemID) then
      -- flush the accumulated author list for the previous ItemID
      if (length(_Authors) > 2) then
        _Authors := substring(_Authors from 3); -- strip leading ', '
        select into _AuthorsOLD Authors from catalog_items
          where ItemID = _ItemID;
        if (coalesce(_Authors, '') <> coalesce(_AuthorsOLD, '')) then
          update catalog_items set Authors = _Authors
            where ItemID = _ItemID;
        end if;
      end if;
      _Authors := '';
    end if;
    _ItemID := _row.ItemID;
    _Name := trim(E' \t' from _row.Name);
    if (length(_Name) > 0) then
      _Authors := _Authors || ', ' || _Name;
    end if;
  end loop;
  -- flush the last ItemID, which the loop above never reaches
  if (length(_Authors) > 2) then
    _Authors := substring(_Authors from 3);
    select into _AuthorsOLD Authors from catalog_items
      where ItemID = _ItemID;
    if (coalesce(_Authors, '') <> coalesce(_AuthorsOLD, '')) then
      update catalog_items set Authors = _Authors
        where ItemID = _ItemID;
    end if;
  end if;
  return;
end;
$$ language plpgsql volatile;
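
[A hedged alternative sketch: the row-by-row loop above can be collapsed into one set-based statement, which holds its row locks for much less time and touches rows in a single scan. This assumes PostgreSQL 8.4+ for array_agg(); on 9.0+ string_agg(Name, ', ') would be simpler still. The original loop's within-item author order was not specified, and neither is array_agg's, so output ordering is equally arbitrary.]

```sql
-- Sketch: one set-based update instead of per-row updates in PL/pgSQL.
update catalog_items i
set Authors = agg.Authors
from (
  select ia.ItemID,
         array_to_string(array_agg(trim(E' \t' from a.Name)), ', ')
           as Authors
  from catalog_itemauthor ia
  join catalog_author a on a.AuthorID = ia.AuthorID
  where length(trim(E' \t' from a.Name)) > 0
  group by ia.ItemID
) agg
where i.ItemID = agg.ItemID
  and coalesce(i.Authors, '') <> coalesce(agg.Authors, '');
```

The final inequality check preserves the original function's behavior of skipping rows whose Authors value is already up to date, which keeps the write set (and the number of row locks taken) small.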

--
Ivan Sergio Borgonovo
http://www.webthatworks.it
