Re: Out of memory error when doing an update with IN clause

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Sean Shanny <shannyconsulting(at)earthlink(dot)net>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Out of memory error when doing an update with IN clause
Date: 2003-12-29 19:04:04
Message-ID: 16162.1072724644@sss.pgh.pa.us
Lists: pgsql-general

Sean Shanny <shannyconsulting(at)earthlink(dot)net> writes:
> There are no FK's or triggers on this or any of the tables in our
> warehouse schema. Also I should have mentioned that this update will
> produce 0 rows as these values do not exist in this table.

Hm, that makes no sense at all ...

> Here is output from the /usr/local/pgsql/data/serverlog when this fails:
> ...
> DynaHashTable: 534773784 total in 65 blocks; 31488 free (255 chunks);
> 534742296 used

Okay, so here's the problem: this hash table has expanded to 500+ MB,
which is enough to overflow your ulimit setting. Some digging in the source
code shows only two candidates for such a hash table: a tuple hash table
used for grouping/aggregating, which doesn't seem likely for this query,
or a tuple-pointer hash table used for detecting already-visited tuples
in a multiple index scan.

Could we see the EXPLAIN output (no ANALYZE, since it would fail) for
the problem query? That should tell us which of these possibilities
it is.
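
For reference, a plain EXPLAIN on the problem statement would look something
like the sketch below; the table and column names are invented placeholders,
not the poster's actual schema:

```sql
-- Hypothetical sketch: fact_table / some_col / some_key are placeholder
-- names, not from the thread.  Plain EXPLAIN (no ANALYZE) only plans the
-- statement without executing it, so it is safe to run even when the
-- real UPDATE dies on memory.
EXPLAIN
UPDATE fact_table
   SET some_col = 'x'
 WHERE some_key IN ('a', 'b', 'c');
```

The node types in the resulting plan should indicate which of the two hash
tables described above is being built.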

regards, tom lane
