Re: Postgresql out-of-memory error

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Joe Malicki <joe(dot)malicki(at)metacarta(dot)com>
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: Postgresql out-of-memory error
Date: 2006-11-21 15:53:51
Message-ID: 4735.1164124431@sss.pgh.pa.us
Lists: pgsql-bugs
Joe Malicki <joe(dot)malicki(at)metacarta(dot)com> writes:
> I have a query that is aborting because of out of memory,

It looks like you've got a couple of different problems:

> TopTransactionContext: 1548738560 total in 197 blocks; 6008 free (189
> chunks); 1548732552 used

Does the INSERT's target table have foreign key constraints?  The
after-trigger event list would explain bloat in TopTransactionContext.
There is not much you can do about that except to insert fewer rows
per statement --- changing plans won't help.
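The advice above can be sketched in SQL. This is only an illustrative example with hypothetical table and column names, not the original poster's schema: the point is that each smaller statement keeps the pending FK after-trigger event list bounded.

```sql
-- Hypothetical example: instead of one huge INSERT ... SELECT,
-- load the rows in slices, so each statement queues fewer
-- after-trigger events for the foreign-key checks.
INSERT INTO target_table (id, body)
SELECT id, body FROM staging_table
WHERE id >= 0 AND id < 100000;

INSERT INTO target_table (id, body)
SELECT id, body FROM staging_table
WHERE id >= 100000 AND id < 200000;
-- ... and so on, one slice per statement.
```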

> ExecutorState: 1325826956 total in 175 blocks; 1191164528 free (11490374
> chunks); 134662428 used

This is curious.  I'm inclined to suspect a memory leak in the tsearch2
functions you're using.  Can you provide a self-contained test case?
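For reference, a self-contained test case here means a script anyone can run against an empty database: it creates its own objects, generates its own data, and ends with the failing query. A hypothetical skeleton (all names invented for illustration):

```sql
-- Hypothetical skeleton of a self-contained test case:
-- no dependence on the reporter's local schema or data.
CREATE TABLE leak_test (id int, body text);

INSERT INTO leak_test
SELECT g, repeat('some words ', 50)
FROM generate_series(1, 100000) g;

-- ... followed by the tsearch2-using query that reproduces
-- the ExecutorState memory growth ...
```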

> ExecutorState: 318758912 total in 47 blocks; 5853360 free (47 chunks);
> 312905552 used

I believe this is probably memory used by the sort step, so it ought to
be more or less bounded by your work_mem setting.  What have you got
that set to?
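To check and adjust that setting, something along these lines works in a psql session (note that work_mem is a per-sort budget, so several concurrent sorts in one query can each consume that much):

```sql
-- Inspect the current per-sort memory budget:
SHOW work_mem;

-- Lower it for this session; the value is in kB here
-- (e.g. 32768 = 32MB):
SET work_mem = 32768;
```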

			regards, tom lane

