Re: DeadLocks..., DeadLocks...

From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "Tom Allison" <tom(at)tacocat(dot)net>
Cc: "Bill Moran" <wmoran(at)potentialtech(dot)com>, <pgsql-general(at)postgresql(dot)org>
Subject: Re: DeadLocks..., DeadLocks...
Date: 2007-06-15 12:11:39
Message-ID: 87tzt9ct8k.fsf@oxford.xeocode.com
Lists: pgsql-general

"Tom Allison" <tom(at)tacocat(dot)net> writes:

> The other approach would be to use an external file to queue these updates and
> run them from a crontab. Something like:
...
> and then run a job daily to read all these in to a hash (to make them unique
> values) and then run one SQL statement at the end of the day.

Well, it's probably better to keep it in the database. The database knows how
to use hashes to get distinct values too.
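
For instance, instead of writing the rows out to a file and deduplicating them
in an external hash, you could collect the raw rows in a staging table
("staging" is just an illustrative name here) and fold them into history in
one statement:

INSERT INTO history (id, seen)
SELECT id, max(seen)
FROM staging
GROUP BY id;

The planner will typically run that GROUP BY as a HashAggregate, which is the
same hashing trick as the external approach, without the detour through the
filesystem.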

So if you have a "history" table which records ids with dates, you could then
do a transaction like:

BEGIN;
DELETE FROM tokens WHERE id NOT IN (SELECT id FROM history);
DELETE FROM history WHERE seen < now() - '3 days'::interval;
END;

This could still deadlock, so it may make sense to add LOCK TABLE statements
at the start of the transaction to lock the tables which refer to the tokens
table.
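
A sketch of what that could look like, assuming SHARE ROW EXCLUSIVE is a
strong enough lock mode for your workload (it blocks concurrent writers but
still allows plain reads):

BEGIN;
LOCK TABLE tokens IN SHARE ROW EXCLUSIVE MODE;
LOCK TABLE history IN SHARE ROW EXCLUSIVE MODE;
DELETE FROM tokens WHERE id NOT IN (SELECT id FROM history);
DELETE FROM history WHERE seen < now() - '3 days'::interval;
COMMIT;

As long as every transaction that touches both tables takes the locks in the
same order, they can't deadlock against each other.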

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
