Re: Deleting millions of rows

From: Tom Lane <tgl@sss.pgh.pa.us>
To: Robert Haas <robertmhaas@gmail.com>
Cc: Brian Cox <brian.cox@ca.com>, "pgsql-performance@postgresql.org" <pgsql-performance@postgresql.org>
Subject: Re: Deleting millions of rows
Date: 2009-02-02 20:58:43
Message-ID: 2239.1233608323@sss.pgh.pa.us
Lists: pgsql-performance

Robert Haas <robertmhaas@gmail.com> writes:
> On Mon, Feb 2, 2009 at 3:01 PM, Brian Cox <brian.cox@ca.com> wrote:
>>> How much memory do you have in your machine? What is work_mem set to?
>>
>> 32G; work_mem=64M

> Hmm. Well then I'm not sure why you're running out of memory,

It's the pending trigger list. He's got two trigger events per row,
which at 40 bytes apiece would approach 4GB of memory. Apparently
it's a 32-bit build of Postgres, so he's running out of process address
space.
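
As a rough back-of-envelope (the exact row count isn't stated here, so
the figure below is illustrative, working backward from the 40-byte and
two-events-per-row numbers above):

    50,000,000 rows deleted
      x 2 trigger events per row  = 100,000,000 pending events
      x ~40 bytes per event       = ~4 GB

A 32-bit process typically has only 2-3GB of usable address space, so
the pending-event list alone can exhaust it long before work_mem or
physical RAM comes into play.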

There's a TODO item to spill that list to disk when it gets too large,
but the reason nobody's done it yet is that actually executing that many
FK check trigger events would take longer than you want to wait anyway.

TRUNCATE is the best solution if you want to get rid of the whole table
contents. If you're deleting very many but not all rows, people tend
to drop the FK constraints and re-establish them afterwards. Retail
checking is just too slow.
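
For illustration only (the table and constraint names below are
hypothetical, not taken from this thread), the two patterns look like:

    -- Whole-table case: TRUNCATE skips per-row trigger events entirely.
    -- CASCADE also empties any tables whose FKs reference this one.
    TRUNCATE TABLE ts_defects CASCADE;

    -- Partial-delete case: drop the FK, delete, then re-add it.
    -- Re-adding validates the constraint in a single pass over the
    -- referencing table instead of queueing one event per deleted row.
    ALTER TABLE ts_defect_notes DROP CONSTRAINT ts_defect_notes_defect_fk;
    DELETE FROM ts_defects WHERE created < '2008-01-01';
    ALTER TABLE ts_defect_notes
        ADD CONSTRAINT ts_defect_notes_defect_fk
        FOREIGN KEY (defect_id) REFERENCES ts_defects (id);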

regards, tom lane
