Re: Memory exhausted on DELETE.

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: jbi130(at)yahoo(dot)com
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Memory exhausted on DELETE.
Date: 2004-10-25 01:05:17
Message-ID: 29791.1098666317@sss.pgh.pa.us
Lists: pgsql-general

<jbi130(at)yahoo(dot)com> writes:
> I have a table with about 1,400,000 rows in it. Each DELETE cascades to
> about 7 tables. When I do a 'DELETE FROM events' I get the following
> error:

> ERROR: Memory exhausted in AllocSetAlloc(84)

> I'm running a default install. What postgres options do I need
> to tweak to get this delete to work?

It isn't a Postgres tweak. The only way you can fix it is to allow the
backend process to grow larger, which means increasing the kernel limits
on process data size. This might be as easy as tweaking "ulimit" in the
postmaster's environment, or it might be painful, depending on your OS.
You might also have to increase swap space.
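As a sketch, on many Unix shells this means raising the per-process data segment limit in whatever script starts the postmaster (the pg_ctl invocation and data directory path below are illustrative, not from the original message):

```sh
# Raise the data segment limit before starting the postmaster.
# "ulimit -d" is in KB on most shells; check your OS documentation.
ulimit -d unlimited
pg_ctl start -D /usr/local/pgsql/data
```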

There's a TODO item to allow the list of pending trigger events (which
is, I believe, what's killing you) to be pushed out to temp files when
it gets too big. However, that will have negative performance
implications of its own, so...

> Also, if my table grows to 30,000,000 rows will the same tweaks
> still work? Or do I have to use a different delete strategy, such
> as deleting 1000 rows at a time.

On the whole, deleting a few thousand rows at a time might be your best
bet.
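For instance, assuming events has an integer primary key (event_id is a hypothetical column name, not from the original message), you can walk through the table in key ranges so the pending trigger-event list stays bounded:

```sql
-- Delete one batch; repeat with the next range until no rows remain.
DELETE FROM events WHERE event_id BETWEEN 1 AND 1000;
DELETE FROM events WHERE event_id BETWEEN 1001 AND 2000;
-- ... and so on; run VACUUM periodically between batches.
```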

BTW: make sure you have indexes on the referencing columns, or this will
take a REALLY long time.
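That is, for each child table whose foreign key REFERENCES events, the referencing column should be indexed; otherwise every cascaded DELETE sequentially scans that child table. A sketch, with hypothetical table and column names:

```sql
-- child_table.event_id REFERENCES events(event_id)
CREATE INDEX child_table_event_id_idx ON child_table (event_id);
```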

regards, tom lane
