Re: Deleting millions of rows

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Brian Cox <brian(dot)cox(at)ca(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Deleting millions of rows
Date: 2009-02-02 18:38:50
Message-ID: 603c8f070902021038p1c9221aeo6c5cb921e24f85b0@mail.gmail.com
Lists: pgsql-performance

On Mon, Feb 2, 2009 at 1:17 PM, Brian Cox <brian(dot)cox(at)ca(dot)com> wrote:
> I'm using 8.3.5. Table ts_defects has 48M rows. Through psql: delete from
> ts_defects;
> Result: out of memory/Can't allocate size: 32
> I then did 10 or so deletes to get rid of the rows. Afterwards, inserts into
> or queries on this table performed significantly slower. I tried a vacuum
> analyze, but this didn't help. To fix this, I dumped and restored the
> database.
>
> 1) why can't postgres delete all rows in a table if it has millions of rows?
> 2) is there any other way to restore performance other than restoring the
> database?

Does the table have triggers on it? Does it have indexes? What is the
result of pg_relation_size() on that table?
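For example, in psql (using the table name from your report; pg_total_relation_size() also counts indexes and TOAST, which is often what matters for bloat):

```sql
-- Show columns, indexes, and triggers for the table:
\d ts_defects

-- Heap size alone, and total size including indexes and TOAST:
SELECT pg_size_pretty(pg_relation_size('ts_defects')),
       pg_size_pretty(pg_total_relation_size('ts_defects'));
```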

How much memory do you have in your machine? What is work_mem set to?
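You can check both of these from psql; maintenance_work_mem is the one VACUUM and CREATE INDEX actually use:

```sql
SHOW work_mem;
SHOW maintenance_work_mem;
```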

Did you try VACUUM FULL instead of just plain VACUUM to recover
performance? You might also need to REINDEX.

Or you could TRUNCATE the table.
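TRUNCATE removes all rows and reclaims the space immediately, without scanning the table row by row, so it avoids the problem entirely when you really want the table empty:

```sql
TRUNCATE TABLE ts_defects;
```

The trade-off is that it is all-or-nothing and (on 8.3) fires no per-row triggers, so it only applies when no rows need to be kept.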

...Robert
