Re: Out of memory

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Tom Wilcox <hungrytom(at)googlemail(dot)com>
Cc: pgsql-novice(at)postgresql(dot)org
Subject: Re: Out of memory
Date: 2010-05-14 21:45:52
Message-ID:
Lists: pgsql-novice
Tom Wilcox <hungrytom(at)googlemail(dot)com> writes:
> I am getting this error consistently after running for around 400s:

> ERROR: out of shared memory
> SQL state: 53200
> Hint: You might need to increase max_locks_per_transaction.
> Context: SQL statement "CREATE TABLE tmp_cands AS SELECT d.premise_id AS uid, d.* FROM tmp_cands_1 AS c INNER JOIN nlpg.match_data AS d ON c.pid = d.premise_id"
> PL/pgSQL function "get_match" line 61 at SQL statement

If you're doing this in a loop:

>     -- Get candidate sets
>     DROP TABLE IF EXISTS tmp_cands_n;
>     CREATE TABLE tmp_cands_n AS SELECT nlpg.get_namenum_cands(($1).name) AS pid;
>     DROP TABLE IF EXISTS tmp_cands_s;
>     CREATE TABLE tmp_cands_s AS SELECT nlpg.get_street_100_cands(($1).street) AS pid;
>     DROP TABLE IF EXISTS tmp_cands_p;
>     CREATE TABLE tmp_cands_p AS SELECT nlpg.get_pc_cands(($1).pc) AS pid;

... you're eventually going to run out of lock space.  The reason is
that each incarnation of each table is a logically independent table and
requires an independent lock to be held till the end of the transaction.
They don't just go away entirely when you DROP them (because DROP can
be rolled back).
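
You can watch this accumulation directly in pg_locks. A minimal sketch (the table names here are illustrative, not from the original function):

```sql
BEGIN;
CREATE TEMP TABLE t1 (x int);
DROP TABLE t1;
CREATE TEMP TABLE t2 (x int);
DROP TABLE t2;
-- Every CREATE and DROP above still holds its lock until commit,
-- so this count keeps growing as the loop iterates:
SELECT count(*) FROM pg_locks WHERE pid = pg_backend_pid();
COMMIT;
```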

Instead of the above you should consider using DELETE or TRUNCATE to
clear the table at each iteration.
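
For example, the tmp_cands_n case could be restructured along these lines (a sketch only; the column type for pid is a guess, since the return type of nlpg.get_namenum_cands isn't shown):

```sql
-- Create the scratch table once, before the loop:
CREATE TABLE tmp_cands_n (pid integer);

-- Then, inside the loop, reuse it instead of dropping and recreating:
TRUNCATE tmp_cands_n;
INSERT INTO tmp_cands_n
    SELECT nlpg.get_namenum_cands(($1).name) AS pid;
```

TRUNCATE still takes a lock, but it is the same lock on the same table each iteration, so the lock count stays bounded instead of growing with every pass.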

			regards, tom lane

